comfyanonymous
2aed53c4ac
Work around xformers bug.
7 months ago
comfyanonymous
2a813c3b09
Switch some more prints to logging.
8 months ago
comfyanonymous
6bcf57ff10
Fix attention masks properly for multiple batches.
9 months ago
comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
9 months ago
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
9 months ago
comfyanonymous
89507f8adf
Remove some unused imports.
10 months ago
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
10 months ago
comfyanonymous
c6951548cf
Update optimized_attention_for_device function for new functions that support masked attention.
11 months ago
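The attention-mask commits below (sub quad, split, xformers) all follow the same basic pattern. As a rough sketch, assuming an additive mask broadcastable to the score matrix (the function and argument names here are illustrative, not the repo's exact API):

```python
import torch

def masked_attention_sketch(q, k, v, mask=None):
    # q, k, v: (batch, heads, tokens, dim_head).
    # mask: additive bias broadcastable to the (batch, heads, q_tokens, k_tokens)
    # score matrix, with large negative values at disallowed positions.
    scale = q.shape[-1] ** -0.5
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale
    if mask is not None:
        scores = scores + mask
    return torch.matmul(torch.softmax(scores, dim=-1), v)
```

The optimized_attention_for_device helper mentioned in the commit presumably selects among such backends, now including variants that accept a mask.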
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
11 months ago
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
11 months ago
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
11 months ago
comfyanonymous
a5056cfb1f
Remove useless code.
11 months ago
comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it more clear what they actually do.
Some unused code has also been removed.
11 months ago
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from transformers.
This will allow some interesting things that would be too hackish to implement using the transformers implementation.
12 months ago
comfyanonymous
1bbd65ab30
Missed this one.
12 months ago
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
12 months ago
comfyanonymous
39e75862b2
Fix regression from last commit.
12 months ago
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
12 months ago
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
12 months ago
comfyanonymous
871cc20e13
Support SVD img2vid model.
1 year ago
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
1 year ago
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
1 year ago
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
1 year ago
comfyanonymous
8b65f5de54
attention_basic now works with hypertile.
1 year ago
comfyanonymous
e6bc42df46
Make sub_quad and split work with hypertile.
1 year ago
comfyanonymous
9906e3efe3
Make xformers work with hypertile.
1 year ago
comfyanonymous
bb064c9796
Add a separate optimized_attention_masked function.
1 year ago
comfyanonymous
ac7d8cfa87
Allow attn_mask in attention_pytorch.
1 year ago
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when only the operation in the middle changes.
1 year ago
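A minimal sketch of the refactor described above, assuming the idea is to keep a single CrossAttention module and inject only the attention operation (all names and signatures below are illustrative, not the repo's exact API; the PyTorch path needs torch >= 2.0 for scaled_dot_product_attention):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention_pytorch(q, k, v, heads, mask=None):
    # One interchangeable "middle" operation; split, sub quad and xformers
    # variants would share this same signature.
    b, _, dim_inner = q.shape
    dim_head = dim_inner // heads
    q, k, v = (t.view(b, -1, heads, dim_head).transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
    return out.transpose(1, 2).reshape(b, -1, dim_inner)

class CrossAttention(nn.Module):
    # The projections never change between backends; only attn_op does,
    # so the whole class does not need to be duplicated per implementation.
    def __init__(self, query_dim, context_dim, heads, dim_head, attn_op=attention_pytorch):
        super().__init__()
        inner_dim = heads * dim_head
        self.heads = heads
        self.attn_op = attn_op
        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_out = nn.Linear(inner_dim, query_dim)

    def forward(self, x, context=None, mask=None):
        context = x if context is None else context
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        return self.to_out(self.attn_op(q, k, v, self.heads, mask=mask))
```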
comfyanonymous
fff491b032
Model patches can now know which batch is positive and negative.
1 year ago
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
1 year ago
Simon Lui
4a0c4ce4ef
Some fixes to generalize CUDA specific functionality to Intel or other GPUs.
1 year ago
comfyanonymous
0e3b641172
Remove xformers related print.
1 year ago
comfyanonymous
b80c3276dc
Fix issue with gligen.
1 year ago
comfyanonymous
d6e4b342e6
Support for Control Loras.
Control loras are controlnets where some of the weights are stored in "lora" format: an up and a down low-rank matrix that, when multiplied together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the matrices.
These controlnets are used just like regular ones.
1 year ago
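A sketch of the weight reconstruction described above (tensor and function names are illustrative, not the checkpoint's actual key names):

```python
import torch

def control_lora_weight(unet_weight, lora_up, lora_down):
    # W_controlnet = W_unet + up @ down, where up is (out_features, rank)
    # and down is (rank, in_features). Storing the two low-rank factors
    # instead of a full weight matrix is what shrinks the file:
    # (out + in) * rank values instead of out * in.
    return unet_weight + torch.matmul(lora_up, lora_down).to(unet_weight.dtype)
```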
comfyanonymous
4b957a0010
Initialize the unet directly on the target device.
1 year ago
comfyanonymous
9ba440995a
It's actually possible to torch.compile the unet now.
1 year ago
comfyanonymous
ddc6f12ad5
Disable autocast in unet for increased speed.
1 year ago
comfyanonymous
103c487a89
Cleanup.
1 year ago
comfyanonymous
78d8035f73
Fix bug with controlnet.
1 year ago
comfyanonymous
05676942b7
Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because it seems to give better results.
1 year ago
comfyanonymous
f87ec10a97
Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
1 year ago
comfyanonymous
9fccf4aa03
Add original_shape parameter to transformer patch extra_options.
1 year ago
comfyanonymous
8883cb0f67
Add a way to set patches that modify the attn2 output.
Change the transformer patches function format to be more future-proof.
1 year ago
comfyanonymous
ae43f09ef7
All the unet weights should now be initialized with the right dtype.
1 year ago
comfyanonymous
e21d9ad445
Initialize transformer unet block weights in right dtype at the start.
1 year ago
comfyanonymous
9d54066ebc
This isn't needed for inference.
1 year ago
comfyanonymous
6971646b8b
Speed up model loading a bit.
Default pytorch Linear initializes the weights, which is useless and slow.
1 year ago
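The trick referenced here (and formalized later in the comfy.ops -> comfy.ops.disable_weight_init refactor further up) can be sketched as layers whose parameter initialization is a no-op, since the checkpoint's state_dict overwrites the values anyway. An illustrative sketch, not the repo's exact code:

```python
import torch.nn as nn

class LinearNoInit(nn.Linear):
    # nn.Linear.__init__ calls reset_parameters(), which runs a Kaiming-uniform
    # init that load_state_dict() immediately discards. Making it a no-op
    # leaves the weights uninitialized and speeds up model construction.
    def reset_parameters(self):
        return None
```

Building the unet out of such layers avoids spending time on random numbers that are thrown away as soon as the real weights are loaded.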
comfyanonymous
cb1551b819
Lowvram mode for gligen and fix some lowvram issues.
2 years ago
comfyanonymous
bae4fb4a9d
Fix imports.
2 years ago