comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet select between the two fp8
formats supported by pytorch.
12 months ago
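The commit above stores UNET weights in fp8 to cut memory while compute still happens in a higher precision. Below is a minimal sketch of that pattern, assuming PyTorch >= 2.1 (which provides torch.float8_e4m3fn and torch.float8_e5m2); the FP8Linear class and its names are illustrative, not ComfyUI's comfy.ops code.

```python
import torch
import torch.nn.functional as F

class FP8Linear(torch.nn.Module):
    def __init__(self, in_features, out_features, fp8_dtype=torch.float8_e4m3fn):
        super().__init__()
        # e4m3fn spends more bits on mantissa (precision), e5m2 on exponent (range).
        self.register_buffer("weight", torch.randn(out_features, in_features).to(fp8_dtype))
        self.register_buffer("bias", torch.zeros(out_features))

    def forward(self, x):
        # fp8 is storage-only here: upcast weight/bias to the activation dtype for the matmul.
        return F.linear(x, self.weight.to(x.dtype), self.bias.to(x.dtype))

layer = FP8Linear(320, 320)
out = layer(torch.randn(2, 320))  # weight storage is half the size of an fp16 copy
```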
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
12 months ago
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
12 months ago
comfyanonymous
871cc20e13
Support SVD img2vid model.
1 year ago
comfyanonymous
72741105a6
Remove useless code.
1 year ago
comfyanonymous
7e3fe3ad28
Make deep shrink behave like it should.
1 year ago
comfyanonymous
7ea6bb038c
Print warning when controlnet can't be applied instead of crashing.
1 year ago
comfyanonymous
94cc718e9c
Add a way to add patches to the input block.
1 year ago
comfyanonymous
794dd2064d
Fix typo.
1 year ago
comfyanonymous
a527d0c795
Code refactor.
1 year ago
comfyanonymous
2a23ba0b8c
Fix unet ops that were not entirely on the GPU.
1 year ago
comfyanonymous
6ec3f12c6e
Support SSD1B model and make it easier to support asymmetric unets.
1 year ago
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
1 year ago
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
1 year ago
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
1 year ago
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
1 year ago
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
1 year ago
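A hedged sketch of the refactor described above: one CrossAttention module with shared q/k/v projections, where only the core attention operation is swapped between backends. The function and class names here are illustrative, not ComfyUI's actual code.

```python
import torch
import torch.nn.functional as F

def attention_pytorch(q, k, v, heads):
    # q, k, v: (batch, tokens, heads * dim_head); one of several interchangeable backends
    b = q.shape[0]
    q, k, v = (t.view(b, -1, heads, t.shape[-1] // heads).transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2).reshape(b, -1, heads * out.shape[-1])

class CrossAttention(torch.nn.Module):
    def __init__(self, query_dim, context_dim, heads=8, dim_head=64, attn_op=attention_pytorch):
        super().__init__()
        inner = heads * dim_head
        self.heads = heads
        self.attn_op = attn_op  # the only piece that differs between backends
        self.to_q = torch.nn.Linear(query_dim, inner, bias=False)
        self.to_k = torch.nn.Linear(context_dim, inner, bias=False)
        self.to_v = torch.nn.Linear(context_dim, inner, bias=False)
        self.to_out = torch.nn.Linear(inner, query_dim)

    def forward(self, x, context=None):
        context = x if context is None else context
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        return self.to_out(self.attn_op(q, k, v, self.heads))

attn = CrossAttention(query_dim=320, context_dim=768)
out = attn(torch.randn(1, 77, 320), torch.randn(1, 64, 768))  # (1, 77, 320)
```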
comfyanonymous
afa2399f79
Add a way to set output block patches to modify the h and hsp.
1 year ago
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
1 year ago
Simon Lui
2da73b7073
Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.
1 year ago
Simon Lui
4a0c4ce4ef
Some fixes to generalize CUDA specific functionality to Intel or other GPUs.
1 year ago
comfyanonymous
bed116a1f9
Remove optimization that caused a border artifact.
1 year ago
comfyanonymous
1c794a2161
Fallback to slice attention if xformers doesn't support the operation.
1 year ago
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
1 year ago
comfyanonymous
cf5ae46928
Controlnet/t2iadapter cleanup.
1 year ago
comfyanonymous
b80c3276dc
Fix issue with gligen.
1 year ago
comfyanonymous
d6e4b342e6
Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: up and down low rank matrices that, when multiplied
together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
1 year ago
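A minimal sketch of the weight reconstruction described in the commit above; the shapes and names are illustrative, not the actual control lora loader.

```python
import torch

def controlnet_weight(unet_weight, lora_up, lora_down):
    # unet_weight: (out_features, in_features)
    # lora_up:     (out_features, rank), lora_down: (rank, in_features)
    return unet_weight + lora_up @ lora_down

out_f, in_f, rank = 320, 320, 64
w_unet = torch.randn(out_f, in_f)
up, down = torch.randn(out_f, rank), torch.randn(rank, in_f)
w_control = controlnet_weight(w_unet, up, down)
# Storing (up, down) takes out_f*rank + rank*in_f values instead of out_f*in_f,
# which is where the smaller memory footprint comes from.
```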
comfyanonymous
2b13939044
Remove some useless code.
1 year ago
comfyanonymous
95d796fc85
Faster VAE loading.
1 year ago
comfyanonymous
4b957a0010
Initialize the unet directly on the target device.
1 year ago
comfyanonymous
ddc6f12ad5
Disable autocast in unet for increased speed.
1 year ago
comfyanonymous
05676942b7
Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
1 year ago
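A toy illustration of the change above (not ComfyUI's tomesd code): merge candidates are picked from the similarity of the attention queries q rather than the block input x, then averaged into their most similar partner. Single batch element, greedy, purely for intuition.

```python
import torch
import torch.nn.functional as F

def merge_tokens(q, x, r):
    # q, x: (tokens, dim); r: number of tokens to merge away (hypothetical helper)
    qn = F.normalize(q, dim=-1)
    sim = qn @ qn.T                              # similarity computed from q, not x
    sim.fill_diagonal_(-float("inf"))            # ignore self-matches
    scores, partner = sim.max(dim=-1)            # most similar partner per token
    src = scores.topk(r).indices                 # the r most redundant tokens
    dst = partner[src]
    x = x.clone()
    x[dst] = (x[dst] + x[src]) / 2               # merge by averaging into the partner
    keep = torch.ones(x.shape[0], dtype=torch.bool)
    keep[src] = False
    return x[keep]

q, x = torch.randn(64, 320), torch.randn(64, 320)
print(merge_tokens(q, x, r=16).shape)            # torch.Size([48, 320])
```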
comfyanonymous
fa28d7334b
Remove useless code.
1 year ago
comfyanonymous
f87ec10a97
Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
1 year ago
comfyanonymous
ae43f09ef7
All the unet weights should now be initialized with the right dtype.
1 year ago
comfyanonymous
7bf89ba923
Initialize more unet weights as the right dtype.
1 year ago
comfyanonymous
e21d9ad445
Initialize transformer unet block weights in right dtype at the start.
1 year ago
comfyanonymous
21f04fe632
Skip default weight initialization in unet conv2d for faster loading.
1 year ago
comfyanonymous
b8636a44aa
Make scaled_dot_product switch to sliced attention on OOM.
2 years ago
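A hedged sketch of the fallback pattern in that commit; the real sliced implementation differs, but the idea is that each query chunk still attends to all keys, so the result matches full attention at a lower peak memory cost.

```python
import torch
import torch.nn.functional as F

def attention_with_fallback(q, k, v, slice_size=1024):
    # q, k, v: (batch, heads, tokens, dim_head), with q and v sharing dim_head
    try:
        return F.scaled_dot_product_attention(q, k, v)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        out = torch.empty_like(q)
        for i in range(0, q.shape[-2], slice_size):
            out[..., i:i + slice_size, :] = F.scaled_dot_product_attention(
                q[..., i:i + slice_size, :], k, v)
        return out
```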
comfyanonymous
797c4e8d3b
Simplify and improve some vae attention code.
2 years ago
comfyanonymous
cb1551b819
Lowvram mode for gligen and fix some lowvram issues.
2 years ago
comfyanonymous
bae4fb4a9d
Fix imports.
2 years ago
comfyanonymous
ba8a4c3667
Change latent resolution step to 8.
2 years ago
comfyanonymous
66c8aa5c3e
Make unet work with any input shape.
2 years ago
comfyanonymous
3696d1699a
Add support for GLIGEN textbox model.
2 years ago
comfyanonymous
73c3e11e83
Fix model_management import so it doesn't get executed twice.
2 years ago
comfyanonymous
e46b1c3034
Disable xformers in VAE when xformers == 0.0.18
2 years ago
comfyanonymous
809bcc8ceb
Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image conditioning and takes as input a
CLIPVisionEncode output, which has been moved to the conditioning section.
2 years ago
comfyanonymous
61ec3c9d5d
Add a way to pass options to the transformers blocks.
2 years ago
comfyanonymous
3ed4a4e4e6
Try again with VAE tiled decoding if regular decoding fails because of OOM.
2 years ago
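A sketch of that retry-on-OOM pattern; decode_fn and decode_tiled_fn are hypothetical stand-ins for the regular and tiled VAE decode paths.

```python
import torch

def decode_with_fallback(decode_fn, decode_tiled_fn, latent):
    try:
        return decode_fn(latent)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        # Tiled decoding trades speed for a much smaller peak memory footprint.
        return decode_tiled_fn(latent)

# Toy usage: the "regular" path always fails here just to exercise the fallback.
def fake_decode(latent):
    raise torch.cuda.OutOfMemoryError("simulated OOM")

print(decode_with_fallback(fake_decode, lambda l: l * 2, torch.randn(1, 4, 64, 64)).shape)
```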