
Stable Diffusion web UI

A web interface for Stable Diffusion, implemented using the Gradio library.

Features

Detailed feature showcase with images:

  • Original txt2img and img2img modes
  • One click install and run script (but you still must install python and git)
  • Outpainting
  • Inpainting
  • Color Sketch
  • Prompt Matrix
  • Stable Diffusion Upscale
  • Attention, specify parts of text that the model should pay more attention to
    • a man in a ((tuxedo)) – will pay more attention to tuxedo
    • a man in a (tuxedo:1.21) – alternative syntax
    • select text and press Ctrl+Up or Ctrl+Down (or Command+Up or Command+Down if you're on macOS) to automatically adjust attention to selected text (code contributed by anonymous user)
  • Loopback, run img2img processing multiple times
  • X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters
  • Textual Inversion
    • have as many embeddings as you want and use any names you like for them
    • use multiple embeddings with different numbers of vectors per token
    • works with half precision floating point numbers
    • train embeddings on 8GB (also reports of 6GB working)
  • Extras tab with:
    • GFPGAN, neural network that fixes faces
    • CodeFormer, face restoration tool as an alternative to GFPGAN
    • RealESRGAN, neural network upscaler
    • ESRGAN, neural network upscaler with a lot of third party models
    • SwinIR and Swin2SR (see here), neural network upscalers
    • LDSR, Latent diffusion super resolution upscaling
  • Resizing aspect ratio options
  • Sampling method selection
    • Adjust sampler eta values (noise multiplier)
    • More advanced noise setting options
  • Interrupt processing at any time
  • 4GB video card support (also reports of 2GB working)
  • Correct seeds for batches
  • Live prompt token length validation
  • Generation parameters
    • parameters you used to generate images are saved with that image
    • in PNG chunks for PNG, in EXIF for JPEG
    • can drag the image to the PNG info tab to restore generation parameters and automatically copy them into the UI (a reading sketch follows this list)
    • can be disabled in settings
    • drag and drop an image/text-parameters to promptbox
  • Read Generation Parameters Button, loads parameters in promptbox to UI
  • Settings page
  • Running arbitrary python code from UI (must run with --allow-code to enable)
  • Mouseover hints for most UI elements
  • Possible to change defaults/min/max/step values for UI elements via text config
  • Tiling support, a checkbox to create images that can be tiled like textures
  • Progress bar and live image generation preview
    • Can use a separate neural network to produce previews with almost no VRAM or compute requirements
  • Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
  • Styles, a way to save parts of a prompt and easily apply them via dropdown later
  • Variations, a way to generate the same image but with tiny differences
  • Seed resizing, a way to generate the same image but at a slightly different resolution
  • CLIP interrogator, a button that tries to guess prompt from an image
  • Prompt Editing, a way to change the prompt mid-generation, say to start making a watermelon and switch to an anime girl midway
  • Batch Processing, process a group of files using img2img
  • Img2img Alternative, reverse Euler method of cross attention control
  • Highres Fix, a convenience option to produce high resolution pictures in one click without the usual distortions
  • Reloading checkpoints on the fly
  • Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
  • Custom scripts with many extensions from the community
  • Composable-Diffusion, a way to use multiple prompts at once
    • separate prompts using uppercase AND
    • also supports weights for prompts: a cat :1.2 AND a dog AND a penguin :2.2
  • No token limit for prompts (original stable diffusion lets you use up to 75 tokens)
  • DeepDanbooru integration, creates danbooru style tags for anime prompts
  • xformers, major speed increase for select cards (add --xformers to commandline args)
  • via extension: History tab: view, save and delete images conveniently within the UI
  • Generate forever option
  • Training tab
    • hypernetworks and embeddings options
    • Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
  • Clip skip
  • Hypernetworks
  • Loras (same as Hypernetworks but prettier)
  • A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
  • Can select to load a different VAE from settings screen
  • Estimated completion time in progress bar
  • API (a usage sketch follows this list)
  • Support for dedicated inpainting model by RunwayML
  • via extension: Aesthetic Gradients, a way to generate images with a specific aesthetic by using clip image embeds (implementation of https://github.com/vicgalle/stable-diffusion-aesthetic-gradients)
  • Stable Diffusion 2.0 support – see wiki for instructions
  • Alt-Diffusion support – see wiki for instructions
  • Now without any bad letters!
  • Load checkpoints in safetensors format
  • Eased resolution restriction: generated image's dimensions must be a multiple of 8 rather than 64
  • Now with a license!
  • Reorder elements in the UI from settings screen
  • Segmind Stable Diffusion support
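
The generation-parameters feature above stores the settings inside the image file itself. As a minimal sketch of reading them back outside the UI (assuming Pillow is installed; the filename is a hypothetical webui output, and the webui stores the text in a PNG chunk named "parameters"):

from PIL import Image  # pip install Pillow

# PNG text chunks are exposed through the .info dictionary; the webui
# writes its generation parameters under the key "parameters".
img = Image.open("00000-1234567890.png")  # hypothetical output file
print(img.info.get("parameters"))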
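
The API listed above is served when the webui is launched with the --api commandline flag. A minimal sketch of a txt2img call, assuming the server runs at the default http://127.0.0.1:7860; the payload shows only a few common fields (see the project wiki's API documentation for the full schema):

import base64

import requests  # pip install requests

payload = {
    "prompt": "a man in a (tuxedo:1.21)",
    "negative_prompt": "blurry",
    "steps": 20,
    "width": 512,
    "height": 512,
}

# /sdapi/v1/txt2img returns the generated images as base64-encoded strings
response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()
for i, image_b64 in enumerate(response.json()["images"]):
    with open(f"api_output_{i}.png", "wb") as f:
        f.write(base64.b64decode(image_b64))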

Installation and Running

Make sure the required dependencies are met and follow the instructions available for:

  • NVidia (recommended)
  • AMD GPUs
  • Intel CPUs, Intel GPUs (both integrated and discrete) (external wiki page)
  • Ascend NPUs (external wiki page)

Alternatively, use online services (like Google Colab):

  • List of Online Services

Installation on Windows 10/11 with NVidia-GPUs using release package

  1. Download sd.webui.zip from v1.0.0-pre and extract its contents.
  2. Run update.bat.
  3. Run run.bat.

For more details see Install-and-Run-on-NVidia-GPUs.

Automatic Installation on Windows

  1. Install Python 3.10.6 (newer versions of Python do not support torch), checking "Add Python to PATH".
  2. Install git.
  3. Download the stable-diffusion-webui repository, for example by running git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git.
  4. Run webui-user.bat from Windows Explorer as a normal, non-administrator user.

Automatic Installation on Linux

  1. Install the dependencies:
# Debian-based:
sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
# Red Hat-based:
sudo dnf install wget git python3 gperftools-libs libglvnd-glx
# openSUSE-based:
sudo zypper install wget git python3 libtcmalloc4 libglvnd
# Arch-based:
sudo pacman -S wget git python3

If your system is very new, you need to install python3.11 or python3.10:

# Ubuntu 24.04
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt update
sudo apt install python3.11

# Manjaro/Arch
sudo pacman -S yay
yay -S python311 # do not confuse with python3.11 package

# Only for 3.11
# Then set up env variable in launch script
export python_cmd="python3.11"
# or in webui-user.sh
python_cmd="python3.11"
  2. Navigate to the directory you would like the webui to be installed in and execute the following command:
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh

Or just clone the repo wherever you want:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
  3. Run webui.sh.
  4. Check webui-user.sh for options.

Installation on Apple Silicon

Find the instructions here.

Contributing

Here's how to add code to this repo: Contributing

Documentation

The documentation was moved from this README over to the project's wiki.

For the purposes of getting Google and other search engines to crawl the wiki, here's a link to the (not for humans) crawlable wiki.

Credits

Licenses for borrowed code can be found in the Settings -> Licenses screen, and also in the html/licenses.html file.

  • Stable Diffusion – https://github.com/Stability-AI/stablediffusion, https://github.com/CompVis/taming-transformers, https://github.com/mcmonkey4eva/sd3-ref
  • k-diffusion – https://github.com/crowsonkb/k-diffusion.git
  • Spandrel – https://github.com/chaiNNer-org/spandrel implementing
    • GFPGAN – https://github.com/TencentARC/GFPGAN.git
    • CodeFormer – https://github.com/sczhou/CodeFormer
    • ESRGAN – https://github.com/xinntao/ESRGAN
    • SwinIR – https://github.com/JingyunLiang/SwinIR
    • Swin2SR – https://github.com/mv-lab/swin2sr
  • LDSR – https://github.com/Hafiidz/latent-diffusion
  • MiDaS – https://github.com/isl-org/MiDaS
  • Ideas for optimizations – https://github.com/basujindal/stable-diffusion
  • Cross Attention layer optimization – Doggettx – https://github.com/Doggettx/stable-diffusion, original idea for prompt editing.
  • Cross Attention layer optimization – InvokeAI, lstein – https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
  • Sub-quadratic Cross Attention layer optimization – Alex Birch (Birch-san/diffusers#1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
  • Textual Inversion – Rinon Gal – https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas).
  • Idea for SD upscale – https://github.com/jquesnelle/txt2imghd
  • Noise generation for outpainting mk2 – https://github.com/parlance-zz/g-diffuser-bot
  • CLIP interrogator idea and borrowing some code – https://github.com/pharmapsychotic/clip-interrogator
  • Idea for Composable Diffusion – https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
  • xformers – https://github.com/facebookresearch/xformers
  • DeepDanbooru – interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
  • Sampling in float32 precision from a float16 UNet – marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
  • Instruct pix2pix – Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) – https://github.com/timothybrooks/instruct-pix2pix
  • Security advice – RyotaK
  • UniPC sampler – Wenliang Zhao – https://github.com/wl-zhao/UniPC
  • TAESD – Ollin Boer Bohan – https://github.com/madebyollin/taesd
  • LyCORIS – KohakuBlueleaf
  • Restart sampling – lambertae – https://github.com/Newbeeer/diffusion_restart_sampling
  • Hypertile – tfernd – https://github.com/tfernd/HyperTile
  • Initial Gradio script – posted on 4chan by an Anonymous user. Thank you Anonymous user.
  • (You)
