some wyrde workflows for comfyUI

Using Tokens for Random Values

Makes use of WAS nodes and omar's QoL nodes.

  • The WAS nodes are necessary for this example
  • The omar "Text _O" nodes can be removed; they're only notes and instructions.
  • The omar latent upscale by factor nodes can be replaced with any latent upscale.

Tokens are created and then assigned random values from list boxes.

The token names are placed in a text box. The contents of the list box are sent to conditioning.
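For illustration only, here is a minimal Python sketch of the idea. The token names, list contents, and substitution logic are made up for this example and are not the actual WAS node code:

```python
import random

# Hypothetical token lists; in the workflow these live in multiline text / list boxes.
token_lists = {
    "[animal]": ["fox", "wolf", "lynx"],
    "[setting]": ["forest", "snowfield", "meadow"],
}

prompt = "a painting of a [animal] in a [setting], highly detailed"

# Each token is assigned one random value, then substituted into the prompt
# before the text is sent on to conditioning.
for token, values in token_lists.items():
    prompt = prompt.replace(token, random.choice(values))

print(prompt)  # e.g. "a painting of a fox in a forest, highly detailed"
```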

Why do this instead of {curly|braces}?

  • In ComfyUI, words in {curly braces | separated by | pipes are | used to | generate} random results. Because of how ComfyUI works, the workflow saved in an image contains only the options that were actually evaluated for that image; the rest of the random list is dropped. (A sketch of this behaviour follows below.)
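The sketch below shows the general behaviour of {curly|brace} resolution, not ComfyUI's actual implementation: each group is replaced by one chosen option, and only the resolved string is what gets evaluated and saved with the image.

```python
import random
import re

def resolve_dynamic(prompt: str) -> str:
    """Replace every {a|b|c} group with one randomly chosen option."""
    return re.sub(
        r"\{([^{}]*)\}",
        lambda m: random.choice(m.group(1).split("|")).strip(),
        prompt,
    )

original = "a {red|green|blue} fox in a {forest|snowfield}"
resolved = resolve_dynamic(original)
print(resolved)
# Only `resolved` is evaluated for the image, so only the chosen options
# survive in the saved prompt; the unused alternatives are dropped.
```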

Why the text concatenate?

  • Because of how the backend evaluates the text boxes, it doesn't notice that the contents of the tokens have changed when it parses the prompt. There are two ways to fix this:
    • put { | | } in the prompt. The empty choice is re-evaluated each run, which forces the prompt (and therefore the tokens) to be re-evaluated as well.
    • make a new multiline node → random line node → text concatenate (the random result and the prompt) → text parse tokens → text to conditioning (see the sketch after this list)
      • this is more complex, but preserves the text prompt in the image workflow.
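As a rough Python analogue of that node chain (the node names are only mirrored in comments, and the token names and values are invented for illustration):

```python
import random

# multiline node -> random line node
lines = "fox\nwolf\nlynx".splitlines()   # contents of the multiline node
random_line = random.choice(lines)       # "random line" pick

prompt = "a painting of a [animal] in a forest, [style]"   # prompt with tokens
tokens = {"[animal]": random_line, "[style]": "watercolor"}  # illustrative token values

# "text concatenate": the random pick changes the string every run, so the
# backend treats the input as new and re-runs the token parsing.
text = prompt + ", " + random_line

# "text parse tokens": substitute each token with its current value.
for name, value in tokens.items():
    text = text.replace(name, value)

# `text` is then fed to text-to-conditioning; because the full prompt string
# is part of the concatenation, it is preserved in the image's workflow.
print(text)
```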

Example Results

Resources

Model

Lora

Embeds

Custom Nodes

[back] [home]