diff --git a/why-oh-why/hrf-x10-latent-upscale-was-tokens-3lora/README.md b/why-oh-why/hrf-x10-latent-upscale-was-tokens-3lora/README.md
index 30981e0..93db661 100644
--- a/why-oh-why/hrf-x10-latent-upscale-was-tokens-3lora/README.md
+++ b/why-oh-why/hrf-x10-latent-upscale-was-tokens-3lora/README.md
@@ -46,6 +46,13 @@
 
 Eventually, I'm tired.
 
+## Special Note
+Because of how the backend evaluates the text boxes, it doesn't know the contents of the tokens have changed when parsing the prompts. There are two ways to fix this:
+* put `{ | | }` in the prompt. It will evaluate the space each time and re-run the prompt, thus also re-evaluating the tokens.
+* make a new chain: multiline node → random line node → text concatenate (the random result and the prompt) → text parse tokens → text to conditioning
+    * this is more complex, but preserves the text prompt in the image workflow.
+
+
 ## resources
 
 I need to change these to match. Will do it tomorrow.
diff --git a/why-oh-why/hrf-x10-latent-upscale-was-tokens-3lora/text concatenate image.png b/why-oh-why/hrf-x10-latent-upscale-was-tokens-3lora/text concatenate image.png
new file mode 100644
index 0000000..7d1a477
Binary files /dev/null and b/why-oh-why/hrf-x10-latent-upscale-was-tokens-3lora/text concatenate image.png differ
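The second fix in the note above (multiline → random line → text concatenate → text parse tokens) can be sketched in plain Python to show the data flow between the nodes. This is only an illustration of the logic, not the actual node implementations; the function names and the `tokens` table are hypothetical stand-ins, and the real token node resolves its own token set.

```python
import random

def random_line(multiline: str) -> str:
    """Random-line node: pick one non-empty line from a multiline text box."""
    lines = [ln for ln in multiline.splitlines() if ln.strip()]
    return random.choice(lines)

def concatenate(a: str, b: str, delimiter: str = ", ") -> str:
    """Text-concatenate node: join the random result and the prompt."""
    return a + delimiter + b

def parse_tokens(text: str, tokens: dict) -> str:
    """Token-parse node: substitute each token with its current value."""
    for name, value in tokens.items():
        text = text.replace(name, value)
    return text

# Hypothetical token table and style list for the example.
tokens = {"[subject]": "a red fox"}
styles = "oil painting\nwatercolor\npencil sketch"

# Because the random pick happens before concatenation, every run produces a
# fresh string, so the downstream token parse is forced to re-evaluate.
prompt = parse_tokens(concatenate(random_line(styles), "[subject], detailed"), tokens)
```

The point of the chain is that the randomness lives upstream of the token parse, so the final prompt string changes every run, while the original text-box prompt is still embedded verbatim in the workflow.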