comfyanonymous
ba8a4c3667
Change latent resolution step to 8.
2 years ago
comfyanonymous
66c8aa5c3e
Make unet work with any input shape.
2 years ago
comfyanonymous
5282f56434
Implement Linear hypernetworks.
...
Add a HypernetworkLoader node to use hypernetworks.
2 years ago
comfyanonymous
6908f9c949
This makes pytorch2.0 attention perform a bit faster.
2 years ago
comfyanonymous
3696d1699a
Add support for GLIGEN textbox model.
2 years ago
comfyanonymous
73c3e11e83
Fix model_management import so it doesn't get executed twice.
2 years ago
EllangoK
e5e587b1c0
Separates out arg parser and imports args
2 years ago
comfyanonymous
e46b1c3034
Disable xformers in VAE when xformers == 0.0.18
2 years ago
comfyanonymous
539ff487a8
Pull latest tomesd code from upstream.
2 years ago
comfyanonymous
809bcc8ceb
Add support for unCLIP SD2.x models.
...
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2 years ago
comfyanonymous
0d972b85e6
This seems to give better quality in tome.
2 years ago
comfyanonymous
18a6c1db33
Add a TomePatchModel node to the _for_testing section.
...
Tome increases sampling speed at the expense of quality.
2 years ago
comfyanonymous
61ec3c9d5d
Add a way to pass options to the transformers blocks.
2 years ago
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2 years ago
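The tiled-decode retry above follows a common fallback pattern: attempt the full-size operation first, and only switch to a slower chunked path when an out-of-memory error is raised. A minimal pure-Python sketch of that pattern, using hypothetical `decode_full`/`decode_tiled` helpers (these names and the memory-budget stand-in are illustration only, not the ComfyUI implementation):

```python
def decode_full(latent, budget=16):
    # Pretend decode: "fails" when the latent is larger than the memory budget.
    if len(latent) > budget:
        raise MemoryError("out of memory")  # stands in for torch.cuda.OutOfMemoryError
    return [x * 2 for x in latent]

def decode_tiled(latent, tile=4):
    # Slower path: decode fixed-size tiles one at a time and stitch the results.
    out = []
    for i in range(0, len(latent), tile):
        out.extend(decode_full(latent[i:i + tile]))
    return out

def decode(latent):
    # Try the regular decode; retry with tiling only on an OOM-style failure.
    try:
        return decode_full(latent)
    except MemoryError:
        return decode_tiled(latent)
```

For a small input `decode` succeeds directly; for one past the budget it transparently falls back to the tiled path and produces the same result.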
comfyanonymous
c692509c2b
Try to improve VAEEncode memory usage a bit.
2 years ago
comfyanonymous
54dbfaf2ec
Remove omegaconf dependency and some ci changes.
2 years ago
comfyanonymous
83f23f82b8
Add pytorch attention support to VAE.
2 years ago
comfyanonymous
a256a2abde
--disable-xformers should not even try to import xformers.
2 years ago
comfyanonymous
0f3ba7482f
Xformers is now properly disabled when --cpu used.
...
Added --windows-standalone-build option; currently it only
makes the code open up ComfyUI in the browser.
2 years ago
comfyanonymous
1de86851b1
Try to fix memory issue.
2 years ago
edikius
165be5828a
Fixed import (#44)
...
* Fixed an import error: I had an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update, so I fixed it so the app starts
* Updated main.py
* Deleted example files
2 years ago
comfyanonymous
cc8baf1080
Make VAE use common function to get free memory.
2 years ago
comfyanonymous
798c90e1c0
Fix pytorch 2.0 cross attention not working.
2 years ago
comfyanonymous
c1f5855ac1
Make some cross attention functions work on the CPU.
2 years ago
comfyanonymous
1a612e1c74
Add some pytorch scaled_dot_product_attention code for testing.
...
--use-pytorch-cross-attention to use it.
2 years ago
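For reference, `torch.nn.functional.scaled_dot_product_attention` computes softmax(QK^T / sqrt(d)) V. A tiny pure-Python sketch of that formula on lists of lists (illustration of the math only, not the fused PyTorch kernel):

```python
import math

def matmul(a, b):
    # Naive matrix multiply on lists of lists.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def softmax(row):
    # Softmax over one row, shifted by the max for numerical stability.
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(q, k, v):
    d = len(q[0])  # head dimension
    kt = [list(col) for col in zip(*k)]  # K transposed
    scores = [[s / math.sqrt(d) for s in row] for row in matmul(q, kt)]
    weights = [softmax(row) for row in scores]
    return matmul(weights, v)
```

With a query equally similar to both keys, the output is the average of the value rows: `scaled_dot_product_attention([[1, 0]], [[1, 0], [1, 0]], [[1.0, 2.0], [3.0, 4.0]])` gives `[[2.0, 3.0]]`.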
comfyanonymous
9502ee45c3
Hopefully fix a strange issue with xformers + lowvram.
2 years ago
comfyanonymous
fcb25d37db
Prepare for t2i adapter.
2 years ago
comfyanonymous
c9daec4c89
Remove prints that are useless when xformers is enabled.
2 years ago
comfyanonymous
09f1d76ed8
Fix an OOM issue.
2 years ago
comfyanonymous
4efa67fa12
Add ControlNet support.
2 years ago
comfyanonymous
1a4edd19cd
Fix overflow issue with inplace softmax.
2 years ago
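Softmax overflow of this kind typically comes from exponentiating large logits directly; the standard remedy subtracts the row max before calling `exp`, which leaves the result mathematically unchanged but keeps every exponent non-positive. A generic sketch of that fix (not the actual ComfyUI patch):

```python
import math

def softmax_stable(row):
    # Subtract the row max before exponentiating so exp() cannot overflow;
    # softmax is invariant to adding a constant to every logit.
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]
```

`softmax_stable([1000.0, 1000.0])` returns `[0.5, 0.5]`, whereas a naive `math.exp(1000.0)` raises `OverflowError`.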
comfyanonymous
509c7dfc6d
Use real softmax in split op to fix issue with some images.
2 years ago
comfyanonymous
1f6a467e92
Update ldm dir with latest upstream stable diffusion changes.
2 years ago
comfyanonymous
773cdabfce
Same thing but for the other places where it's used.
2 years ago
comfyanonymous
df40d4f3bf
torch.cuda.OutOfMemoryError is not present on older pytorch versions.
2 years ago
comfyanonymous
e8c499ddd4
Split optimization for VAE attention block.
2 years ago
comfyanonymous
5b4e312749
Use inplace operations for less OOM issues.
2 years ago
comfyanonymous
047775615b
Lower the chances of an OOM.
2 years ago
comfyanonymous
1daccf3678
Run softmax in place if it OOMs.
2 years ago
comfyanonymous
50db297cf6
Try to fix OOM issues with cards that have less vram than mine.
2 years ago
comfyanonymous
051f472e8f
Fix sub quadratic attention for SD2 and make it the default optimization.
2 years ago
comfyanonymous
220afe3310
Initial commit.
2 years ago