comfyanonymous
d7897fff2c
Move cascade scale factor from stage_a to latent_formats.py
8 months ago
comfyanonymous
2a813c3b09
Switch some more prints to logging.
8 months ago
comfyanonymous
5f60ee246e
Support loading the sr cascade controlnet.
9 months ago
comfyanonymous
03e6e81629
Set upscale algorithm to bilinear for stable cascade controlnet.
9 months ago
comfyanonymous
03e83bb5d0
Support stable cascade canny controlnet.
9 months ago
comfyanonymous
cb7c3a2921
Allow image_only_indicator to be None.
9 months ago
comfyanonymous
b3e97fc714
Koala 700M and 1B support.
...
Use the UNET Loader node to load their unet files to use them.
9 months ago
comfyanonymous
e93cdd0ad0
Remove print.
9 months ago
comfyanonymous
a7b5eaa7e3
Forgot to commit this.
9 months ago
comfyanonymous
6bcf57ff10
Fix attention masks properly for multiple batches.
9 months ago
comfyanonymous
11e3221f1f
fp8 weight support for Stable Cascade.
9 months ago
comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
9 months ago
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
9 months ago
comfyanonymous
805c36ac9c
Make Stable Cascade work on old pytorch 2.0
9 months ago
comfyanonymous
667c92814e
Stable Cascade Stage B.
9 months ago
comfyanonymous
f83109f09b
Stable Cascade Stage C.
9 months ago
comfyanonymous
5e06baf112
Stable Cascade Stage A.
9 months ago
comfyanonymous
c661a8b118
Don't use numpy for calculating sigmas.
9 months ago
comfyanonymous
89507f8adf
Remove some unused imports.
10 months ago
comfyanonymous
2395ae740a
Make unclip more deterministic.
...
Pass a seed argument; note that this might make old unclip images different.
10 months ago
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
10 months ago
comfyanonymous
c6951548cf
Update the optimized_attention_for_device function for the new functions that
...
support masked attention.
11 months ago
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
11 months ago
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
11 months ago
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
11 months ago
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
11 months ago
comfyanonymous
79f73a4b33
Remove useless code.
11 months ago
comfyanonymous
61b3f15f8f
Fix lowvram mode not working with unCLIP and Revision code.
11 months ago
comfyanonymous
d0165d819a
Fix SVD lowvram mode.
11 months ago
comfyanonymous
261bcbb0d9
A few missing comfy ops in the VAE.
11 months ago
comfyanonymous
a5056cfb1f
Remove useless code.
11 months ago
comfyanonymous
77755ab8db
Refactor comfy.ops
...
comfy.ops -> comfy.ops.disable_weight_init
This should make it clearer what they actually do.
Some unused code has also been removed.
11 months ago
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
...
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
12 months ago
comfyanonymous
1bbd65ab30
Missed this one.
12 months ago
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
...
--fp8_e4m3fn-unet and --fp8_e5m2-unet select the two fp8 formats
supported by pytorch.
12 months ago
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
12 months ago
comfyanonymous
39e75862b2
Fix regression from last commit.
12 months ago
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
...
Now everything in transformer_options gets put in extra_options.
12 months ago
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
1 year ago
comfyanonymous
871cc20e13
Support SVD img2vid model.
1 year ago
comfyanonymous
72741105a6
Remove useless code.
1 year ago
comfyanonymous
7e3fe3ad28
Make deep shrink behave like it should.
1 year ago
comfyanonymous
7ea6bb038c
Print warning when controlnet can't be applied instead of crashing.
1 year ago
comfyanonymous
94cc718e9c
Add a way to add patches to the input block.
1 year ago
comfyanonymous
794dd2064d
Fix typo.
1 year ago
comfyanonymous
a527d0c795
Code refactor.
1 year ago
comfyanonymous
2a23ba0b8c
Fix unet ops not entirely on GPU.
1 year ago
comfyanonymous
a268a574fa
Remove a bunch of useless code.
...
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.
I'm keeping it in because I'm sure if I remove it people are going to
complain.
1 year ago
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
1 year ago
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
1 year ago