So, an important thing about Stable Diffusion is that the models are trained on small images. 512x512 pixels is the standard size for most SD1.5-based models (768 for a few). SD2.x models can be either 512px or 768px, depending on the one chosen.
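For reference, here's a minimal sketch of how that training size shows up in a workflow, written as a Python dict in ComfyUI's API prompt format (the node id is made up): the empty latent you hand to the first KSampler should match the model's training size.

```python
# A minimal sketch, assuming ComfyUI's API prompt format: the EmptyLatentImage
# node picks the generation size, which should match the model's training size.
base_latent = {
    "5": {  # "5" is an arbitrary node id
        "class_type": "EmptyLatentImage",
        "inputs": {
            "width": 512,     # 512x512 for most SD1.5-based models
            "height": 512,    # use 768x768 for the 768px SD2.x models
            "batch_size": 1,
        },
    },
}
```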
Why do all the reroutes and color coding? Can't we connect directly from the model?
* besides, using stable diffusion is about making pretty pictures. Let's make pretty workflows, too!
## Expanding on Fixing
(Still writing this too)
Adding more nodes and extending the HR-Fix chain is easy (there's a sketch of the added nodes right after this list):
* Drag the output nodes to the right so there's more space.
* add another latent upscale + KSampler pair after the previous sampler,
* and decrease the denoise a little. For this one, 0.450 is good.
* each KSampler added to an HRF chain gets a slightly lower denoise than the one before it
* though for latent-space upscales, about 0.2 is as low as you want to go. Usually.
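Here's a minimal sketch of the nodes this step adds, again as a Python dict in ComfyUI's API prompt format. The node ids are made up: "3" is assumed to be the first KSampler, "4" the model output, "6"/"7" the positive/negative prompts — adjust them to match your own workflow rather than copying them literally.

```python
# Sketch of one extra latent HR-Fix stage: upscale the previous latent,
# then resample it with the same model and prompts at a lower denoise.
hrf_stage_2 = {
    "10": {
        "class_type": "LatentUpscale",
        "inputs": {
            "samples": ["3", 0],          # latent from the previous KSampler
            "upscale_method": "nearest-exact",
            "width": 1024,                # 2x the 512px base
            "height": 1024,
            "crop": "disabled",
        },
    },
    "11": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["4", 0],            # same model (with LoRA) as before
            "positive": ["6", 0],
            "negative": ["7", 0],
            "latent_image": ["10", 0],    # the upscaled latent
            "seed": 0,
            "steps": 20,
            "cfg": 7.0,
            "sampler_name": "euler",
            "scheduler": "normal",
            "denoise": 0.45,              # lower than the first pass; ~0.2 is the floor
        },
    },
}
```

The same pattern repeats for every extra latent HRF pass: another latent upscale feeding another KSampler, each with a slightly lower denoise.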
[here's the current workflow](basic-wf-vae-lora-latemt-upscale-x2.json)
More latent HRFs will gradually increase the output image size while adding details. But let's stop here and add some pixel-space HRFs. Onwards, noble steed!