Once ComfyUI is installed and running, adding workflows is as easy as dragging and dropping images or workflows created by ComfyUI into the empty area of the browser window.
This basic workflow generates an image based on the positive and negative prompts.
Before an image can be generated, a model is needed. Go ahead and select `v1-5-pruned-emaonly.safetensors`
* What, don't have it? Well, [get it from here](https://huggingface.co/runwayml/stable-diffusion-v1-5/tree/main).
* Don't click the file name, it leads to a web page. Click on the right to download the file. <img src="./pix/dlv15.png" width="80%" align="middle">
* Place the file in `ComfyUI\models\checkpoints\`
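For anyone who prefers scripting the download, a minimal sketch follows. It assumes the standard Hugging Face `resolve/main` URL layout for the repo linked above (adjust if the repo has moved), and uses forward slashes; on Windows the destination is the `ComfyUI\models\checkpoints\` folder shown above.

```python
from pathlib import Path
from urllib.request import urlretrieve

ckpt_dir = Path("ComfyUI/models/checkpoints")
url = ("https://huggingface.co/runwayml/stable-diffusion-v1-5"
       "/resolve/main/v1-5-pruned-emaonly.safetensors")

ckpt_dir.mkdir(parents=True, exist_ok=True)
dest = ckpt_dir / url.rsplit("/", 1)[-1]
# Uncomment to actually download (~4 GB):
# urlretrieve(url, dest)
print(f"will save to: {dest}")
```

After the download finishes, the file should sit directly in the checkpoints folder, not in a subdirectory.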
Click "Queue Prompt" in the box on the side of the window to generate an image. If the same settings are used from the workflow above, it'll look remarkably like
Download a VAE from [stabilityai](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/tree/main) and drop it in `ComfyUI\models\vae`. Get the pruned.safetensors file.
* Don't click the filename (it leads to a confusing page where the "download" link is hard to spot), click to the right. <img src="./pix/downloadvae.png" width="75%" align="middle">
* Drop the file in `ComfyUI\models\vae\`
* Once it is downloaded, hit F5 to refresh the window so ComfyUI knows the file is there.
* With only one VAE installed, it is easy to pick it in the VAE Loader by clicking the arrows or the list.
Loras (and their variants) are cool mini-models that are used to alter a bigger model. Think of them like a trojan horse, except everyone is happy with the result. Usually.
Now drop a _Lora Loader_ in the empty spot.
* change strength_model and strength_clip to 0.8
When Queue Prompt is clicked, the image should now be a pixel-art bottle.
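Under the hood, that loader is just one more node in the workflow. In ComfyUI's API-format JSON it would look roughly like this (the node id, link targets, and lora filename here are made up for illustration; the `["4", 0]` style entries link to another node's output slot):

```json
{
  "10": {
    "class_type": "LoraLoader",
    "inputs": {
      "lora_name": "pixel-art-example.safetensors",
      "strength_model": 0.8,
      "strength_clip": 0.8,
      "model": ["4", 0],
      "clip": ["4", 1]
    }
  }
}
```

Note that the lora patches both the model and the CLIP text encoder, which is why there are two strength values.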
So, an important thing about stable diffusion is that the models are trained on small images.
But what does that mean for regular users? Most people want a much larger image (1920x1080, for example). The thing is, just changing the Latent Image size to 1920x1080 tends to go horribly, horribly wrong. That's because stable diffusion doesn't really understand "size" or "composition". When it sees a huge canvas, it tries to fill _every part_ of it with the prompt.
* the previous prompt and configuration with a 1920x1080 size latent:
But fear not, there are a few techniques to increase image size from 512px to something more grandiose.
* Latent Upscale: this takes a latent image and makes it bigger. The result is okay, but the larger image is missing much of the detail that is possible.
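Both approaches work in latent space, which is 1/8 the pixel resolution in each dimension for SD 1.x models. A quick sketch of the sizes involved shows why 1920x1080 is such a stretch from what the model was trained on:

```python
VAE_FACTOR = 8  # SD 1.x VAEs downscale width and height by 8

def latent_size(width: int, height: int) -> tuple[int, int]:
    """Pixel dimensions -> latent-space dimensions."""
    return width // VAE_FACTOR, height // VAE_FACTOR

print(latent_size(512, 512))    # (64, 64) — the neighborhood SD 1.5 trained in
print(latent_size(1920, 1080))  # (240, 135) — roughly 8x the trained latent area
```

That ~8x jump in latent area is why the model starts duplicating subjects instead of composing one scene.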
See the model reroute hanging out at the corner of the positive prompt?
* left-click to activate it.
* Ctrl-C to clone it.
* move the mouse a little to the right (above the older KSampler is fine)
But wait! Isn't this bottle somewhat different from before? It sure is! And here is why:
* On the new KSampler, denoise is set to 1.000
* 0.500 is _generally_ a good number for the first "HR Fix"
* click Queue Prompt again.
* Instead of running the whole workflow, ComfyUI should start at the second KSampler. This is because there were no changes earlier in the workflow.
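For reference, the second KSampler differs from the first mainly in its `denoise` value and in taking the upscaled latent as its input. In API-format JSON it would look something like this (node ids, seed, and link targets are illustrative, not from the actual workflow):

```json
{
  "12": {
    "class_type": "KSampler",
    "inputs": {
      "seed": 123,
      "steps": 20,
      "cfg": 8.0,
      "sampler_name": "euler",
      "scheduler": "normal",
      "denoise": 0.5,
      "model": ["4", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["11", 0]
    }
  }
}
```

The `denoise: 0.5` is what keeps the second pass close to the first image: it only partially re-noises the upscaled latent before refining it.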