comfyanonymous
a5056cfb1f
Remove useless code.
11 months ago
comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it more clear what they actually do.
Some unused code has also been removed.
11 months ago
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to
implement using the transformers implementation.
12 months ago
comfyanonymous
1bbd65ab30
Missed this one.
12 months ago
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
12 months ago
comfyanonymous
39e75862b2
Fix regression from last commit.
12 months ago
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
12 months ago
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
1 year ago
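The fallback this commit describes can be sketched as a try/except wrapper around the fast kernel. A minimal numpy sketch, where the function names and the failing condition are made up for illustration (the real code wraps xformers and pytorch attention):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_pytorch(q, k, v):
    # Reference path: softmax(q k^T / sqrt(d)) v -- always available.
    scale = 1.0 / np.sqrt(q.shape[-1])
    return softmax(q @ k.swapaxes(-1, -2) * scale) @ v

def attention_xformers(q, k, v):
    # Stand-in for a fast kernel that breaks on some inputs
    # (this head-dim check is a hypothetical example condition).
    if q.shape[-1] % 8 != 0:
        raise RuntimeError("unsupported head dim")
    return attention_pytorch(q, k, v)

def optimized_attention(q, k, v):
    # Try the fast kernel first; fall back to the pytorch path on failure.
    try:
        return attention_xformers(q, k, v)
    except RuntimeError:
        return attention_pytorch(q, k, v)
```

The caller never sees the failure; it just gets the slower but correct result.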
comfyanonymous
871cc20e13
Support SVD img2vid model.
1 year ago
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
1 year ago
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
1 year ago
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
1 year ago
comfyanonymous
8b65f5de54
attention_basic now works with hypertile.
1 year ago
comfyanonymous
e6bc42df46
Make sub_quad and split work with hypertile.
1 year ago
comfyanonymous
9906e3efe3
Make xformers work with hypertile.
1 year ago
comfyanonymous
bb064c9796
Add a separate optimized_attention_masked function.
1 year ago
comfyanonymous
ac7d8cfa87
Allow attn_mask in attention_pytorch.
1 year ago
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
1 year ago
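The refactor idea, keeping one CrossAttention wrapper and only swapping the middle operation, can be sketched like this. A numpy sketch under assumed names (`attention_basic`, `attention_split` echo the variants mentioned in nearby commits; the projection layout is simplified):

```python
import numpy as np

def attention_basic(q, k, v):
    # One interchangeable "middle" operation: plain softmax attention.
    scale = 1.0 / np.sqrt(q.shape[-1])
    s = q @ k.swapaxes(-1, -2) * scale
    s = np.exp(s - s.max(axis=-1, keepdims=True))
    s /= s.sum(axis=-1, keepdims=True)
    return s @ v

def attention_split(q, k, v, chunks=2):
    # Another middle op: process queries in chunks to bound peak memory.
    parts = [attention_basic(qc, k, v)
             for qc in np.array_split(q, chunks, axis=-2)]
    return np.concatenate(parts, axis=-2)

class CrossAttention:
    # The q/k/v projections live here once; only the attention op varies.
    def __init__(self, dim, rng, optimized_attention=attention_basic):
        self.wq = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.wk = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.wv = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.attn = optimized_attention

    def __call__(self, x, context=None):
        ctx = x if context is None else context
        return self.attn(x @ self.wq, ctx @ self.wk, ctx @ self.wv)
```

Every backend variant shares the surrounding projections, so adding a new attention implementation means writing one function, not a whole class.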
comfyanonymous
fff491b032
Model patches can now know which batch is positive and negative.
1 year ago
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
1 year ago
Simon Lui
4a0c4ce4ef
Some fixes to generalize CUDA specific functionality to Intel or other GPUs.
1 year ago
comfyanonymous
0e3b641172
Remove xformers related print.
1 year ago
comfyanonymous
b80c3276dc
Fix issue with gligen.
1 year ago
comfyanonymous
d6e4b342e6
Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low rank matrice that when multiplied
together and added to the unet weight give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
1 year ago
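The weight arithmetic described above can be sketched directly; the dimensions and variable names here are illustrative, not taken from the real checkpoints:

```python
import numpy as np

rng = np.random.default_rng(0)
out_dim, in_dim, rank = 64, 64, 4

w_unet = rng.standard_normal((out_dim, in_dim))   # base unet weight
up = rng.standard_normal((out_dim, rank))         # stored "lora" up matrix
down = rng.standard_normal((rank, in_dim))        # stored "lora" down matrix

# Controlnet weight = unet weight + (up @ down), as the commit describes.
w_controlnet = w_unet + up @ down

# The low-rank pair is far smaller than a full copy of the weight:
full_values = out_dim * in_dim              # 64 * 64 = 4096 values
lora_values = rank * (out_dim + in_dim)     # 4 * 128 = 512 values
```

The footprint advantage scales with the rank: the lower the rank of the stored matrices, the less memory the control lora needs relative to a full controlnet.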
comfyanonymous
4b957a0010
Initialize the unet directly on the target device.
1 year ago
comfyanonymous
9ba440995a
It's actually possible to torch.compile the unet now.
1 year ago
comfyanonymous
ddc6f12ad5
Disable autocast in unet for increased speed.
1 year ago
comfyanonymous
103c487a89
Cleanup.
1 year ago
comfyanonymous
78d8035f73
Fix bug with controlnet.
1 year ago
comfyanonymous
05676942b7
Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
1 year ago
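The key point, deciding similarity from q while merging the hidden states x, can be sketched as a toy single-pair merge. This is a simplification of real token merging (which bipartitions tokens and merges many pairs at once); the function name is made up:

```python
import numpy as np

def merge_most_similar(x, q):
    # Similarity is measured on the attention queries q rather than on
    # the hidden states x, but the merge itself is applied to x:
    # the two most similar tokens are averaged into one.
    qn = q / np.linalg.norm(q, axis=-1, keepdims=True)
    sim = qn @ qn.T
    np.fill_diagonal(sim, -np.inf)            # ignore self-similarity
    i, j = np.unravel_index(np.argmax(sim), sim.shape)
    i, j = min(i, j), max(i, j)
    merged = x.copy()
    merged[i] = (x[i] + x[j]) / 2.0           # average the chosen pair
    return np.delete(merged, j, axis=0)       # drop the absorbed token
```

Fewer tokens after the merge means cheaper attention, which is where the sampling speedup comes from.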
comfyanonymous
f87ec10a97
Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
1 year ago
comfyanonymous
9fccf4aa03
Add original_shape parameter to transformer patch extra_options.
1 year ago
comfyanonymous
8883cb0f67
Add a way to set patches that modify the attn2 output.
Change the transformer patches function format to be more future proof.
1 year ago
comfyanonymous
ae43f09ef7
All the unet weights should now be initialized with the right dtype.
1 year ago
comfyanonymous
e21d9ad445
Initialize transformer unet block weights in right dtype at the start.
1 year ago
comfyanonymous
9d54066ebc
This isn't needed for inference.
1 year ago
comfyanonymous
6971646b8b
Speed up model loading a bit.
Default pytorch Linear initializes the weights which is useless and slow.
1 year ago
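The pattern behind this speedup, a Linear subclass whose `reset_parameters` is a no-op because a checkpoint load will overwrite the weights anyway, can be sketched without torch. The class names mirror the idea, not the actual comfy.ops code:

```python
import numpy as np

class Linear:
    # Minimal stand-in for torch.nn.Linear: construction eagerly runs
    # a (relatively slow) random weight init.
    def __init__(self, in_features, out_features):
        self.weight = np.empty((out_features, in_features))
        self.reset_parameters()

    def reset_parameters(self):
        self.weight[:] = np.random.default_rng().standard_normal(self.weight.shape)
        self.initialized = True

class LinearNoInit(Linear):
    # Skip the init entirely: the checkpoint load that follows would
    # overwrite those values anyway, so the work is wasted.
    def reset_parameters(self):
        self.initialized = False
```

Building a large model out of the no-init variant and then loading the state dict avoids touching every weight tensor twice.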
comfyanonymous
cb1551b819
Lowvram mode for gligen and fix some lowvram issues.
2 years ago
comfyanonymous
bae4fb4a9d
Fix imports.
2 years ago
comfyanonymous
5282f56434
Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2 years ago
comfyanonymous
6908f9c949
This makes pytorch2.0 attention perform a bit faster.
2 years ago
comfyanonymous
3696d1699a
Add support for GLIGEN textbox model.
2 years ago
comfyanonymous
73c3e11e83
Fix model_management import so it doesn't get executed twice.
2 years ago
EllangoK
e5e587b1c0
Separates out arg parser and imports args.
2 years ago
comfyanonymous
18a6c1db33
Add a TomePatchModel node to the _for_testing section.
Tome increases sampling speed at the expense of quality.
2 years ago
comfyanonymous
61ec3c9d5d
Add a way to pass options to the transformers blocks.
2 years ago
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2 years ago
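The retry logic can be sketched as catch-the-OOM-and-tile. A toy numpy sketch: the "decoder" and its memory budget are invented stand-ins, and real tiled VAE decoding also blends overlapping tiles to hide seams, which this omits:

```python
import numpy as np

def decode(latent):
    # Stand-in decoder; pretend anything over a size budget OOMs.
    if latent.size > 64:
        raise MemoryError("out of memory")
    return latent * 2.0   # hypothetical "decode" of a latent tile

def decode_tiled(latent, tile=8):
    # Decode fixed-size tiles independently and stitch the results.
    rows = []
    for y in range(0, latent.shape[0], tile):
        row = [decode(latent[y:y + tile, x:x + tile])
               for x in range(0, latent.shape[1], tile)]
        rows.append(np.concatenate(row, axis=1))
    return np.concatenate(rows, axis=0)

def decode_with_fallback(latent):
    try:
        return decode(latent)          # fast path: decode in one shot
    except MemoryError:
        return decode_tiled(latent)    # OOM: retry tile by tile
```

The user gets an image either way; the tiled path just trades speed (and, without overlap blending, some seam quality) for memory.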
comfyanonymous
83f23f82b8
Add pytorch attention support to VAE.
2 years ago
comfyanonymous
a256a2abde
--disable-xformers should not even try to import xformers.
2 years ago
comfyanonymous
0f3ba7482f
Xformers is now properly disabled when --cpu used.
Added --windows-standalone-build option; currently it only makes the
code open up comfyui in the browser.
2 years ago