comfyanonymous
af3cc1b5fb
Fixed issue when batched image was used as a controlnet input.
2 years ago
comfyanonymous
d2da346b0b
Fix missing variable.
2 years ago
comfyanonymous
4e6b83a80a
Add a T2IAdapterLoader node to load T2I-Adapter models.
They are loaded as CONTROL_NET objects because they are similar.
2 years ago
comfyanonymous
fcb25d37db
Prepare for t2i adapter.
2 years ago
comfyanonymous
cf5a211efc
Remove some useless imports
2 years ago
comfyanonymous
87b00b37f6
Added an experimental VAEDecodeTiled.
This decodes the image with the VAE in tiles, which should be faster and
use less VRAM.
It's in the _for_testing section, so depending on how well it performs I
might change or remove it, or add the functionality to the regular
VAEDecode node; in other words, don't depend on it too much.
2 years ago
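The tiled decoding idea in the commit above can be sketched as follows. This is a hypothetical simplification: `decode_fn`, the plain-list "latent", and the tile size are stand-ins, and the real node decodes latents with the VAE on the GPU and handles seams between tiles.

```python
def iter_tiles(height, width, tile):
    """Yield (y, x, h, w) tile rectangles covering a height x width grid."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield y, x, min(tile, height - y), min(tile, width - x)

def decode_tiled(latent, tile=2, decode_fn=lambda t: [[v * 2 for v in row] for row in t]):
    """Decode `latent` (a list of lists) tile by tile with `decode_fn`
    and stitch the decoded tiles back into one output grid."""
    h, w = len(latent), len(latent[0])
    out = [[0] * w for _ in range(h)]
    for y, x, th, tw in iter_tiles(h, w, tile):
        tile_data = [row[x:x + tw] for row in latent[y:y + th]]
        decoded = decode_fn(tile_data)
        for dy in range(th):
            for dx in range(tw):
                out[y + dy][x + dx] = decoded[dy][dx]
    return out
```

Peak memory then scales with the tile size rather than the full image, which is why it should use less VRAM.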
comfyanonymous
62df8dd62a
Add a node to load diff controlnets.
2 years ago
comfyanonymous
f04dc2c2f4
Implement DDIM sampler.
2 years ago
comfyanonymous
2976c1ad28
Uni_PC: make max denoise behave more like other samplers.
On the KSamplers a denoise of 1.0 is the same as txt2img, but there was a
small difference with UniPC.
2 years ago
comfyanonymous
c9daec4c89
Remove prints that are useless when xformers is enabled.
2 years ago
comfyanonymous
a7328e4945
Add uni_pc bh2 variant.
2 years ago
comfyanonymous
d80af7ca30
ControlNetApply now stacks.
It can be used to apply multiple control nets at the same time.
2 years ago
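The stacking behaviour described above can be sketched as a linked chain: each apply wraps the previously applied control net, so all of them act on the same conditioning. The class and names here are hypothetical illustrations, not the actual node implementation.

```python
class ControlNetStack:
    """One applied control net, linked to whatever was applied before it."""
    def __init__(self, name, strength, previous=None):
        self.name = name
        self.strength = strength
        self.previous = previous

    def chain(self):
        """Return all stacked control nets, first-applied to last-applied."""
        nets = []
        node = self
        while node is not None:
            nets.append((node.name, node.strength))
            node = node.previous
        return list(reversed(nets))

def apply_controlnet(conditioning, name, strength):
    # Mirror of the stacking behaviour: link the new control net onto
    # the existing one instead of replacing it.
    return ControlNetStack(name, strength, previous=conditioning)
```

Chaining two ControlNetApply nodes therefore applies both control nets at once, each with its own strength.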
comfyanonymous
00a9189e30
Support old pytorch.
2 years ago
comfyanonymous
137ae2606c
Support people putting commas after the embedding name in the prompt.
2 years ago
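The comma tolerance above amounts to stripping trailing commas from the token before looking up the embedding. A minimal sketch, assuming a hypothetical `embedding:` prompt syntax and lookup; function and parameter names are illustrative only.

```python
def parse_embedding_token(token, known_embeddings):
    """Resolve a prompt token like 'embedding:name,' to an embedding
    name, tolerating a trailing comma after the name."""
    name = token[len("embedding:"):] if token.startswith("embedding:") else token
    name = name.rstrip(",")  # accept 'embedding:name,' as 'embedding:name'
    return name if name in known_embeddings else None
```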
comfyanonymous
2326ff1263
Add: --highvram for when you want models to stay on the vram.
2 years ago
comfyanonymous
09f1d76ed8
Fix an OOM issue.
2 years ago
comfyanonymous
d66415c021
Low vram mode for controlnets.
2 years ago
comfyanonymous
220a72d36b
Use fp16 for fp16 control nets.
2 years ago
comfyanonymous
6135a21ee8
Add a way to control controlnet strength.
2 years ago
comfyanonymous
4efa67fa12
Add ControlNet support.
2 years ago
comfyanonymous
bc69fb5245
Use inpaint models the proper way by using VAEEncodeForInpaint.
2 years ago
comfyanonymous
cef2cc3cb0
Support for inpaint models.
2 years ago
comfyanonymous
07db00355f
Add masks to samplers code for inpainting.
2 years ago
comfyanonymous
e3451cea4f
uni_pc now works with KSamplerAdvanced return_with_leftover_noise.
2 years ago
comfyanonymous
f542f248f1
Show the right amount of steps in the progress bar for uni_pc.
The extra step doesn't actually call the unet, so it doesn't belong in
the progress bar.
2 years ago
comfyanonymous
f10b8948c3
768-v support for uni_pc sampler.
2 years ago
comfyanonymous
ce0aeb109e
Remove print.
2 years ago
comfyanonymous
5489d5af04
Add uni_pc sampler to KSampler* nodes.
2 years ago
comfyanonymous
1a4edd19cd
Fix overflow issue with inplace softmax.
2 years ago
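The standard fix for a softmax overflow like the one above is to subtract the row maximum before exponentiating, so `exp()` never sees a large argument. A sketch of that numerically stable form (plain Python; the actual fix operates on tensors):

```python
import math

def stable_softmax(xs):
    """Numerically stable softmax: shifting by the max leaves the
    result unchanged but keeps exp() arguments <= 0, avoiding overflow."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```

With inputs around 1000, the naive `exp(x)` overflows to infinity while the shifted form stays finite.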
comfyanonymous
509c7dfc6d
Use real softmax in split op to fix issue with some images.
2 years ago
comfyanonymous
7e1e193f39
Automatically enable lowvram mode if vram is less than 4GB.
Use: --normalvram to disable it.
2 years ago
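The auto-enable logic above reduces to a threshold check on detected VRAM, with `--normalvram` as an override. A sketch with hypothetical names; the 4GB cutoff is the one stated in the commit.

```python
def pick_vram_mode(total_vram_mb, force_normal=False):
    """Choose a memory mode: lowvram below 4GB of detected VRAM,
    unless the user forces normal mode (--normalvram)."""
    if force_normal:
        return "normal"
    return "lowvram" if total_vram_mb < 4096 else "normal"
```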
comfyanonymous
324273fff2
Fix embedding not working when on new line.
2 years ago
comfyanonymous
1f6a467e92
Update ldm dir with latest upstream stable diffusion changes.
2 years ago
comfyanonymous
773cdabfce
Same thing but for the other places where it's used.
2 years ago
comfyanonymous
df40d4f3bf
torch.cuda.OutOfMemoryError is not present on older pytorch versions.
2 years ago
comfyanonymous
e8c499ddd4
Split optimization for VAE attention block.
2 years ago
comfyanonymous
5b4e312749
Use inplace operations for less OOM issues.
2 years ago
comfyanonymous
3fd87cbd21
Slightly smarter batching behaviour.
Try to keep batch sizes more consistent, which seems to improve things on
AMD GPUs.
2 years ago
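One way to make batch sizes more consistent, as the commit above describes, is to split the work into near-equal batches instead of full batches plus a small remainder. This is a hypothetical sketch of that idea, not the repository's actual batching code.

```python
import math

def even_batches(n_items, max_batch):
    """Split n_items into batches of near-equal size (differing by at
    most 1) instead of max_batch-sized batches plus a tiny remainder."""
    n_batches = math.ceil(n_items / max_batch)
    base = n_items // n_batches
    extra = n_items % n_batches
    return [base + 1] * extra + [base] * (n_batches - extra)
```

For example, 10 items with a max batch of 4 become batches of 4, 3, 3 rather than 4, 4, 2, keeping each kernel launch closer to the same shape.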
comfyanonymous
bbdcf0b737
Use relative imports for k_diffusion.
2 years ago
comfyanonymous
708138c77d
Remove print.
2 years ago
comfyanonymous
047775615b
Lower the chances of an OOM.
2 years ago
comfyanonymous
853e96ada3
Increase it/s by batching together some stuff sent to unet.
2 years ago
comfyanonymous
c92633eaa2
Auto calculate amount of memory to use for --lowvram
2 years ago
comfyanonymous
534736b924
Add some low vram modes: --lowvram and --novram
2 years ago
comfyanonymous
a84cd0d1ad
Don't unload/reload model from CPU uselessly.
2 years ago
comfyanonymous
b1a7c9ebf6
Embeddings/textual inversion support for SD2.x
2 years ago
comfyanonymous
1de5aa6a59
Add a CLIPLoader node to load standalone clip weights.
Put them in models/clip
2 years ago
comfyanonymous
56d802e1f3
Use transformers CLIP instead of open_clip for SD2.x
This should make things a bit cleaner.
2 years ago
comfyanonymous
bf9ccffb17
Small fix for SD2.x loras.
2 years ago
comfyanonymous
678105fade
SD2.x CLIP support for Loras.
2 years ago