comfyanonymous
2b13939044
Remove some useless code.
1 year ago
comfyanonymous
95d796fc85
Faster VAE loading.
1 year ago
comfyanonymous
4b957a0010
Initialize the unet directly on the target device.
1 year ago
comfyanonymous
ddc6f12ad5
Disable autocast in unet for increased speed.
1 year ago
comfyanonymous
05676942b7
Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
1 year ago
comfyanonymous
fa28d7334b
Remove useless code.
1 year ago
comfyanonymous
f87ec10a97
Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
1 year ago
comfyanonymous
ae43f09ef7
All the unet weights should now be initialized with the right dtype.
1 year ago
comfyanonymous
7bf89ba923
Initialize more unet weights as the right dtype.
1 year ago
comfyanonymous
e21d9ad445
Initialize transformer unet block weights in right dtype at the start.
1 year ago
comfyanonymous
21f04fe632
Disable default weight values in unet conv2d for faster loading.
1 year ago
comfyanonymous
b8636a44aa
Make scaled_dot_product switch to sliced attention on OOM.
2 years ago
comfyanonymous
797c4e8d3b
Simplify and improve some vae attention code.
2 years ago
comfyanonymous
cb1551b819
Lowvram mode for gligen and fix some lowvram issues.
2 years ago
comfyanonymous
bae4fb4a9d
Fix imports.
2 years ago
comfyanonymous
ba8a4c3667
Change latent resolution step to 8.
2 years ago
comfyanonymous
66c8aa5c3e
Make unet work with any input shape.
2 years ago
comfyanonymous
3696d1699a
Add support for GLIGEN textbox model.
2 years ago
comfyanonymous
73c3e11e83
Fix model_management import so it doesn't get executed twice.
2 years ago
comfyanonymous
e46b1c3034
Disable xformers in VAE when xformers == 0.0.18.
2 years ago
comfyanonymous
809bcc8ceb
Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image conditioning and takes as input a
CLIPVisionEncode output, which has been moved to the conditioning section.
2 years ago
comfyanonymous
61ec3c9d5d
Add a way to pass options to the transformers blocks.
2 years ago
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2 years ago
comfyanonymous
c692509c2b
Try to improve VAEEncode memory usage a bit.
2 years ago
comfyanonymous
54dbfaf2ec
Remove omegaconf dependency and some ci changes.
2 years ago
comfyanonymous
83f23f82b8
Add pytorch attention support to VAE.
2 years ago
comfyanonymous
a256a2abde
--disable-xformers should not even try to import xformers.
2 years ago
comfyanonymous
0f3ba7482f
Xformers is now properly disabled when --cpu is used.
Added --windows-standalone-build option; currently it only makes comfyui
open up in the browser.
2 years ago
comfyanonymous
1de86851b1
Try to fix memory issue.
2 years ago
comfyanonymous
cc8baf1080
Make VAE use common function to get free memory.
2 years ago
comfyanonymous
fcb25d37db
Prepare for t2i adapter.
2 years ago
comfyanonymous
09f1d76ed8
Fix an OOM issue.
2 years ago
comfyanonymous
4efa67fa12
Add ControlNet support.
2 years ago
comfyanonymous
509c7dfc6d
Use real softmax in split op to fix issue with some images.
2 years ago
comfyanonymous
1f6a467e92
Update ldm dir with latest upstream stable diffusion changes.
2 years ago
comfyanonymous
773cdabfce
Apply the same split optimization in the other places where it's used.
2 years ago
comfyanonymous
e8c499ddd4
Split optimization for VAE attention block.
2 years ago
comfyanonymous
5b4e312749
Use inplace operations for less OOM issues.
2 years ago
comfyanonymous
220afe3310
Initial commit.
2 years ago