diff --git a/basics/was-nodes-start/README.md b/basics/was-nodes-start/README.md
index bcc3168..718abec 100644
--- a/basics/was-nodes-start/README.md
+++ b/basics/was-nodes-start/README.md
@@ -8,7 +8,7 @@ The workflow uses noodles (the lines between nodes) to switch from img2img to tx
* Switch back by dragging from the LATENT dot on the _VAEEncode_ box to the _redirect_ node.
-
+
Switching from a baked VAE to a loaded VAE is similar. (Baked VAEs are included inside the model file.)
@@ -16,6 +16,7 @@ Switching from baked VAE to loaded vae is similar. (Baked VAEs are included insi
* To change back, drag a noodle from the VAE dot in the _CheckpointLoaderSimple_ node to the purple _redirect_ node below.
+
When using img2img, images should be resized to better fit the model. Most models are designed for 512x512 _initial_ generations. 768x512 and 512x768 are also common.
Large images also eat up VRAM and time during sampling, even on the first generation. Instead of creating a big image in Stable Diffusion directly, it is better to go through a process called Hi-Res Fixing. That's a subject for another time; for this basic workflow, stick to smaller initial images. (This is easy to change later if you want to experiment.)
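If you'd rather shrink an init image before loading it into the workflow, a quick script like the sketch below works. It assumes Pillow is installed, and the file names are placeholders.

```python
from PIL import Image  # pip install pillow

# "init.png" is a placeholder for whatever init image you want to shrink.
img = Image.open("init.png")

# Scale so the shorter side lands near 512, then snap both sides to
# multiples of 64, which is what most SD models expect.
scale = 512 / min(img.size)
w = max(64, round(img.width * scale / 64) * 64)
h = max(64, round(img.height * scale / 64) * 64)

img.resize((w, h), Image.LANCZOS).save("init_resized.png")
```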