comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
1 year ago
comfyanonymous
8b65f5de54
attention_basic now works with hypertile.
1 year ago
comfyanonymous
e6bc42df46
Make sub_quad and split work with hypertile.
1 year ago
comfyanonymous
9906e3efe3
Make xformers work with hypertile.
1 year ago
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
1 year ago
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
1 year ago
comfyanonymous
bb064c9796
Add a separate optimized_attention_masked function.
1 year ago
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
1 year ago
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
1 year ago
comfyanonymous
ac7d8cfa87
Allow attn_mask in attention_pytorch.
1 year ago
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
...
There's no reason for the whole CrossAttention object to be duplicated when
only the attention operation in the middle changes.
1 year ago
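The refactor described in that message lends itself to a small sketch: one CrossAttention module whose projections stay fixed while the core attention computation is a swappable function. This is an illustrative reconstruction, not ComfyUI's actual code; `attention_basic` here is a plain softmax implementation, and the `attn_op` constructor argument is a hypothetical way to plug in the split/sub-quad/xformers variants.

```python
import torch
import torch.nn as nn

def attention_basic(q, k, v, heads):
    # Plain softmax attention over (batch, tokens, heads * dim_head) tensors.
    b, _, dim = q.shape
    dim_head = dim // heads
    q, k, v = (t.reshape(b, -1, heads, dim_head).transpose(1, 2) for t in (q, k, v))
    scores = q @ k.transpose(-1, -2) * dim_head ** -0.5
    out = scores.softmax(dim=-1) @ v
    return out.transpose(1, 2).reshape(b, -1, heads * dim_head)

class CrossAttention(nn.Module):
    def __init__(self, query_dim, context_dim, heads, dim_head, attn_op=attention_basic):
        super().__init__()
        inner_dim = heads * dim_head
        self.heads = heads
        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
        self.to_k = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_v = nn.Linear(context_dim, inner_dim, bias=False)
        self.to_out = nn.Linear(inner_dim, query_dim)
        # Only this function differs between the attention backends.
        self.attn_op = attn_op

    def forward(self, x, context=None):
        context = x if context is None else context
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        return self.to_out(self.attn_op(q, k, v, self.heads))
```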
comfyanonymous
fff491b032
Model patches can now know which batch is positive and negative.
1 year ago
comfyanonymous
afa2399f79
Add a way to set output block patches to modify the h and hsp.
1 year ago
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force emptying the cache.
1 year ago
Simon Lui
2da73b7073
Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.
1 year ago
Simon Lui
4a0c4ce4ef
Some fixes to generalize CUDA-specific functionality to Intel or other GPUs.
1 year ago
comfyanonymous
0e3b641172
Remove xformers related print.
1 year ago
comfyanonymous
bed116a1f9
Remove optimization that caused border artifacts.
1 year ago
comfyanonymous
1c794a2161
Fallback to slice attention if xformers doesn't support the operation.
1 year ago
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
1 year ago
comfyanonymous
cf5ae46928
Controlnet/t2iadapter cleanup.
1 year ago
comfyanonymous
b80c3276dc
Fix issue with gligen.
1 year ago
comfyanonymous
d6e4b342e6
Support for Control Loras.
...
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
1 year ago
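The arithmetic in that message is simple enough to show directly. In the sketch below (illustrative names and sizes, not the repo's), `up` and `down` are the two low-rank factors stored in the file, and the controlnet weight is reconstructed by adding their product to the unet weight; the memory saving follows from comparing parameter counts.

```python
import torch

out_f, in_f, rank = 320, 320, 64

w_unet = torch.randn(out_f, in_f)   # weight shared with the base unet
up = torch.randn(out_f, rank)       # "up" low-rank factor stored in the file
down = torch.randn(rank, in_f)      # "down" low-rank factor stored in the file

# The controlnet weight is reconstructed on load:
w_controlnet = w_unet + up @ down

# Storing the factor pair instead of the full weight shrinks the footprint:
full_weight = out_f * in_f          # 102400 values
lora_pair = rank * (out_f + in_f)   # 40960 values at rank 64
```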
comfyanonymous
2b13939044
Remove some useless code.
1 year ago
comfyanonymous
95d796fc85
Faster VAE loading.
1 year ago
comfyanonymous
4b957a0010
Initialize the unet directly on the target device.
1 year ago
comfyanonymous
9ba440995a
It's actually possible to torch.compile the unet now.
1 year ago
comfyanonymous
ddc6f12ad5
Disable autocast in unet for increased speed.
1 year ago
comfyanonymous
103c487a89
Cleanup.
1 year ago
comfyanonymous
78d8035f73
Fix bug with controlnet.
1 year ago
comfyanonymous
05676942b7
Add some more transformer hooks and move tomesd to comfy_extras.
...
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
1 year ago
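A rough sketch of what "uses q instead of x" means: merge candidates are ranked by token-to-token cosine similarity computed on the attention queries rather than on the block input. The function below is illustrative pseudologic under that assumption, not tomesd's actual algorithm.

```python
import torch

def merge_scores(q):
    # q: (batch, tokens, dim) attention queries.
    qn = q / q.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    sim = qn @ qn.transpose(-1, -2)                      # cosine similarity, (b, t, t)
    sim.diagonal(dim1=-2, dim2=-1).fill_(float("-inf"))  # a token can't merge with itself
    # Tokens whose nearest neighbor is most similar are the best merge candidates.
    return sim.max(dim=-1).values

scores = merge_scores(torch.randn(1, 16, 8))  # -> shape (1, 16)
```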
comfyanonymous
fa28d7334b
Remove useless code.
1 year ago
comfyanonymous
f87ec10a97
Support base SDXL and SDXL refiner models.
...
Large refactor of the model detection and loading code.
1 year ago
comfyanonymous
9fccf4aa03
Add original_shape parameter to transformer patch extra_options.
1 year ago
comfyanonymous
8883cb0f67
Add a way to set patches that modify the attn2 output.
...
Change the transformer patches function format to be more future-proof.
1 year ago
comfyanonymous
ae43f09ef7
All the unet weights should now be initialized with the right dtype.
1 year ago
comfyanonymous
7bf89ba923
Initialize more unet weights as the right dtype.
1 year ago
comfyanonymous
e21d9ad445
Initialize transformer unet block weights in right dtype at the start.
1 year ago
comfyanonymous
21f04fe632
Disable default weight values in unet conv2d for faster loading.
1 year ago
comfyanonymous
9d54066ebc
This isn't needed for inference.
1 year ago
comfyanonymous
6971646b8b
Speed up model loading a bit.
...
Default pytorch Linear initializes the weights, which is useless and slow.
1 year ago
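The speedup rests on a stock PyTorch trick: when the weights will be overwritten by a checkpoint anyway, skip the default initialization. `torch.nn.utils.skip_init` (available since torch 1.10) is one off-the-shelf way to get the effect the commit describes; the sizes below are arbitrary.

```python
import torch
import torch.nn as nn

slow = nn.Linear(4096, 4096)                      # runs the default init: wasted work
fast = nn.utils.skip_init(nn.Linear, 4096, 4096)  # allocates uninitialized parameters

# The real values arrive from the checkpoint either way:
fast.load_state_dict(slow.state_dict())
```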
comfyanonymous
274dff3257
Remove more useless files.
1 year ago
comfyanonymous
f0a2b81cd0
Cleanup: Remove a bunch of useless files.
1 year ago
comfyanonymous
b8636a44aa
Make scaled_dot_product switch to sliced attention on OOM.
2 years ago
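The fallback pattern that commit describes can be sketched in a few lines: attempt the fused kernel, and on a CUDA out-of-memory error retry a sliced implementation that bounds peak memory. This is a minimal illustration, not the repo's code; `attention_sliced` here is a hypothetical stand-in that slices over the batch dimension.

```python
import torch
import torch.nn.functional as F

def attention_sliced(q, k, v, slice_size=1):
    # Bound peak memory by attending over the batch dimension in slices.
    outs = [F.scaled_dot_product_attention(q[i:i + slice_size],
                                           k[i:i + slice_size],
                                           v[i:i + slice_size])
            for i in range(0, q.shape[0], slice_size)]
    return torch.cat(outs, dim=0)

def attention_with_fallback(q, k, v):
    try:
        return F.scaled_dot_product_attention(q, k, v)
    except RuntimeError as e:
        if "out of memory" not in str(e).lower():
            raise
        torch.cuda.empty_cache()        # release cached blocks before retrying
        return attention_sliced(q, k, v)
```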
comfyanonymous
797c4e8d3b
Simplify and improve some vae attention code.
2 years ago
BlenderNeko
d9e088ddfd
minor changes for tiled sampler
2 years ago
comfyanonymous
cb1551b819
Lowvram mode for gligen and fix some lowvram issues.
2 years ago
comfyanonymous
bae4fb4a9d
Fix imports.
2 years ago
comfyanonymous
ba8a4c3667
Change latent resolution step to 8.
2 years ago
comfyanonymous
66c8aa5c3e
Make unet work with any input shape.
2 years ago