diff --git a/basics/was-nodes-start/README.md b/basics/was-nodes-start/README.md index 718abec..73ffd4e 100644 --- a/basics/was-nodes-start/README.md +++ b/basics/was-nodes-start/README.md @@ -15,13 +15,13 @@ Switching from a baked VAE to a loaded VAE is similar. (Baked VAEs are included inside the checkpoint.) * Drag a noodle from the VAE dot in the _VAELoader_ node to the purple _redirect_ node below. * To change back, drag a noodle from the VAE dot in the _CheckpointLoaderSimple_ node to the purple _redirect_ node below. - +
When using img2img, images should be resized to better fit the model. Most models are designed for 512x512 _initial_ generations; 768x512 and 512x768 are also common. + Large images will also eat up VRAM and time during sampling for the first generation. Instead of creating a big image directly in Stable Diffusion, it is better to go through a process called Hi-Res Fixing. That's a subject for another time, but for this basic workflow, stick to smaller initial images. (This can easily be changed later to experiment with.) Back to img2img. * Use the _Resize Image_ node. The node is initially configured to resize images to 512x768. It is best to crop/alter your image in another program so it fits a 1:1 or 1:2 ratio for easy scaling. (Nodes can be used to alter images, but that's a more advanced topic.) It doesn't matter if the ratio isn't perfect; the image is only a guide. The closer, the better, though. * The _node's_ "mode" can be changed from "resize" to "rescale" to easily reduce larger images as well. -
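The _Resize Image_ node handles the actual resizing in-graph, but if you want to precompute what dimensions your source image should end up at, the math above can be sketched as follows. This is a hypothetical helper (`fit_resolution` is not part of any node pack): it scales an image down to fit a target such as 512x768 while keeping its aspect ratio, and snaps the result to multiples of 8, since Stable Diffusion latents work on 8-pixel blocks.

```python
def fit_resolution(width, height, target=(512, 768)):
    """Return (w, h) scaled to fit inside `target`, keeping aspect ratio.

    Dimensions are rounded down to multiples of 8, matching the
    8-pixel latent blocks that Stable Diffusion models expect.
    """
    scale = min(target[0] / width, target[1] / height)
    w = max(8, int(width * scale) // 8 * 8)
    h = max(8, int(height * scale) // 8 * 8)
    return w, h

# A 1024x1536 photo (a clean 1:1.5 ratio) maps straight onto 512x768:
print(fit_resolution(1024, 1536))  # -> (512, 768)

# An off-ratio 2000x3200 scan still fits within the target box:
print(fit_resolution(2000, 3200))  # -> (480, 768)
```

As the second example shows, an imperfect ratio just produces a slightly narrower (or shorter) image, which is fine since the image is only a guide.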