comfyanonymous
6bcf57ff10
Fix attention masks properly for multiple batches.
9 months ago
comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
9 months ago
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
9 months ago
comfyanonymous
c661a8b118
Don't use numpy for calculating sigmas.
9 months ago
comfyanonymous
89507f8adf
Remove some unused imports.
10 months ago
comfyanonymous
2395ae740a
Make unclip more deterministic.
Pass a seed argument; note that this might make old unclip images different.
10 months ago
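A minimal sketch of the determinism point above, assuming a hypothetical helper name: drawing the unCLIP augmentation noise from an explicitly seeded generator pins the result down, whereas the old behavior depended on global RNG state (which is why previously generated unclip images may come out differently).

```python
import torch

def unclip_augmentation_noise(shape, seed, device="cpu"):
    # Hypothetical helper for illustration only: with an explicit seed the
    # noise (and therefore the augmented CLIP embedding) is reproducible.
    g = torch.Generator(device=device).manual_seed(seed)
    return torch.randn(shape, generator=g, device=device)
```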
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
10 months ago
comfyanonymous
c6951548cf
Update optimized_attention_for_device function for new functions that support masked attention.
11 months ago
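A hedged sketch of the dispatch pattern this commit describes: pick an attention implementation per device, and when a mask will be used, only hand back one of the implementations that accept attn_mask. The dispatch logic and the attention_sub_quad stand-in are illustrative assumptions, not ComfyUI's exact code.

```python
import torch
import torch.nn.functional as F

def attention_pytorch(q, k, v, heads, mask=None):
    # q, k, v: (batch, seq, heads * head_dim)
    b, seq_q, inner = q.shape
    head_dim = inner // heads
    q, k, v = (t.view(b, -1, heads, head_dim).transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
    return out.transpose(1, 2).reshape(b, seq_q, inner)

def attention_sub_quad(q, k, v, heads, mask=None):
    # stand-in for the memory-efficient path that (per the commits below)
    # also learned to accept an attention mask
    return attention_pytorch(q, k, v, heads, mask=mask)

def optimized_attention_for_device(device, mask=False):
    if device.type == "cpu":
        return attention_pytorch
    if mask:
        return attention_pytorch  # only return mask-capable implementations
    return attention_sub_quad
```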
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
11 months ago
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
11 months ago
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
11 months ago
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
11 months ago
comfyanonymous
79f73a4b33
Remove useless code.
11 months ago
comfyanonymous
61b3f15f8f
Fix lowvram mode not working with unCLIP and Revision code.
11 months ago
comfyanonymous
d0165d819a
Fix SVD lowvram mode.
11 months ago
comfyanonymous
261bcbb0d9
A few missing comfy ops in the VAE.
11 months ago
comfyanonymous
a5056cfb1f
Remove useless code.
11 months ago
comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it clearer what they actually do.
Some unused code has also been removed.
11 months ago
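A minimal sketch of the pattern the renamed module implies, assuming the standard skip-init trick: layer subclasses whose reset_parameters is a no-op, so building the model is cheap and the checkpoint load supplies the real weights. The class bodies here are illustrative, not a copy of comfy.ops.

```python
import torch.nn as nn

class disable_weight_init:
    class Linear(nn.Linear):
        def reset_parameters(self):
            return None  # leave weights uninitialized; the state dict fills them in

    class Conv2d(nn.Conv2d):
        def reset_parameters(self):
            return None

# Model code that takes an `operations` argument can then build layers as
# operations.Linear(...) / operations.Conv2d(...), so the init behavior is
# swapped in one place instead of being scattered through the UNet/VAE code.
```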
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
12 months ago
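For context, a self-contained CLIP text encoder of the kind this commit moves toward can be sketched in plain torch.nn. This is only an approximation (real CLIP uses quick_gelu and its own weight layout) with ViT-L/14 text-tower sizes assumed, not ComfyUI's implementation.

```python
import torch
import torch.nn as nn

class SimpleCLIPTextModel(nn.Module):
    def __init__(self, vocab_size=49408, max_len=77, dim=768, heads=12, layers=12):
        super().__init__()
        self.token_embedding = nn.Embedding(vocab_size, dim)
        self.position_embedding = nn.Parameter(torch.zeros(max_len, dim))
        block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=dim * 4,
            activation="gelu", batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.final_layer_norm = nn.LayerNorm(dim)

    def forward(self, tokens):
        seq = tokens.shape[1]
        x = self.token_embedding(tokens) + self.position_embedding[:seq]
        # CLIP's text tower is causal: each token attends only to earlier tokens.
        causal = torch.full((seq, seq), float("-inf"), device=x.device).triu(1)
        x = self.encoder(x, mask=causal)
        return self.final_layer_norm(x)

# Owning the module like this makes it straightforward to expose hidden states,
# patch attention, or change dtypes -- the kind of thing that is awkward to do
# through the transformers wrapper.
```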
comfyanonymous
1bbd65ab30
Missed this one.
12 months ago
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
12 months ago
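A hedged sketch of what fp8 weight storage means in practice: the weights live in torch.float8_e4m3fn or torch.float8_e5m2 (available in recent PyTorch) to halve memory versus fp16, and are cast back up at compute time because ordinary matmul kernels do not run on these dtypes. Function names below are illustrative only.

```python
import torch
import torch.nn.functional as F

def store_weight_fp8(weight, fmt=torch.float8_e4m3fn):  # or torch.float8_e5m2
    return weight.to(fmt)

def linear_with_fp8_weight(x, weight_fp8, bias=None, compute_dtype=torch.float16):
    w = weight_fp8.to(compute_dtype)  # upcast only for the matmul
    return F.linear(x, w, bias)
```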
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
12 months ago
comfyanonymous
39e75862b2
Fix regression from last commit.
12 months ago
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
12 months ago
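A small, purely illustrative sketch of the data flow described above (key names are made up, not ComfyUI's): the patch callback receives a single extra_options dict that now carries everything from transformer_options plus any per-call extras.

```python
def build_extra_options(transformer_options, block_index):
    extra_options = dict(transformer_options)   # everything gets passed through
    extra_options["block_index"] = block_index  # plus per-call information
    return extra_options

def my_attn1_patch(q, k, v, extra_options):
    # a patch can now read any transformer_options entry directly
    if extra_options.get("block") == ("input", 1):
        pass  # e.g. only act on a specific block
    return q, k, v
```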
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
12 months ago
comfyanonymous
871cc20e13
Support SVD img2vid model.
1 year ago
comfyanonymous
72741105a6
Remove useless code.
1 year ago
comfyanonymous
7e3fe3ad28
Make deep shrink behave like it should.
1 year ago
comfyanonymous
7ea6bb038c
Print warning when controlnet can't be applied instead of crashing.
1 year ago
comfyanonymous
94cc718e9c
Add a way to add patches to the input block.
1 year ago
comfyanonymous
794dd2064d
Fix typo.
1 year ago
comfyanonymous
a527d0c795
Code refactor.
1 year ago
comfyanonymous
2a23ba0b8c
Fix unet ops not entirely on GPU.
1 year ago
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
1 year ago
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
1 year ago
comfyanonymous
6ec3f12c6e
Support SSD1B model and make it easier to support asymmetric unets.
1 year ago
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
1 year ago
comfyanonymous
8b65f5de54
attention_basic now works with hypertile.
1 year ago
comfyanonymous
e6bc42df46
Make sub_quad and split work with hypertile.
1 year ago
comfyanonymous
9906e3efe3
Make xformers work with hypertile.
1 year ago
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
1 year ago
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
1 year ago
comfyanonymous
bb064c9796
Add a separate optimized_attention_masked function.
1 year ago
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
1 year ago
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
1 year ago
comfyanonymous
ac7d8cfa87
Allow attn_mask in attention_pytorch.
1 year ago
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
1 year ago
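A hedged sketch of that refactor: the module keeps the q/k/v/out projections, and the middle step is a pluggable function, so adding a backend (pytorch, xformers, split, sub-quad) means writing one function instead of duplicating the whole module. Passing the function through the constructor is an illustrative choice, not necessarily how ComfyUI wires it up.

```python
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, query_dim, context_dim, heads, dim_head, attention_fn):
        super().__init__()
        inner = heads * dim_head
        self.heads = heads
        self.to_q = nn.Linear(query_dim, inner, bias=False)
        self.to_k = nn.Linear(context_dim, inner, bias=False)
        self.to_v = nn.Linear(context_dim, inner, bias=False)
        self.to_out = nn.Linear(inner, query_dim)
        self.attention_fn = attention_fn  # e.g. attention_pytorch sketched earlier

    def forward(self, x, context=None, mask=None):
        context = x if context is None else context
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        out = self.attention_fn(q, k, v, self.heads, mask=mask)
        return self.to_out(out)
```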
comfyanonymous
fff491b032
Model patches can now know which batch is positive and negative.
1 year ago
comfyanonymous
afa2399f79
Add a way to set output block patches to modify the h and hsp.
1 year ago
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
1 year ago