comfyanonymous
2a813c3b09
Switch some more prints to logging.
8 months ago
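A minimal sketch of the print-to-logging switch this commit describes; the logger setup and message text are illustrative, not taken from the actual diff.

```python
import logging

logging.basicConfig(level=logging.INFO)

# Before (illustrative): print("loading model")
logging.info("loading model")
```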
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
11 months ago
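A hedged sketch of what attention-mask support generally looks like inside a chunked (sub-quadratic) attention loop; `masked_attention_scores` and its arguments are hypothetical names, not ComfyUI's actual functions.

```python
import torch

def masked_attention_scores(q_chunk, k_chunk, mask_chunk=None):
    # Hypothetical helper: the mask is added to the raw scores of each chunk
    # before softmax, so masked positions get a large negative score and
    # near-zero attention weight after normalization.
    scores = q_chunk @ k_chunk.transpose(-2, -1) * q_chunk.shape[-1] ** -0.5
    if mask_chunk is not None:
        scores = scores + mask_chunk  # mask holds 0 or a large negative value
    return torch.softmax(scores, dim=-1)
```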
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
1 year ago
comfyanonymous
ddc6f12ad5
Disable autocast in unet for increased speed.
1 year ago
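A minimal sketch of running a forward pass with autocast explicitly disabled, which is the idea the commit message describes; the tiny `nn.Linear` stands in for the UNet and is not ComfyUI's code.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)
x = torch.randn(1, 8)

# Disabling autocast keeps the computation in the model's native dtype
# instead of letting intermediate ops be re-cast on the fly.
with torch.autocast(device_type="cpu", enabled=False):
    out = model(x)
```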
comfyanonymous
73c3e11e83
Fix model_management import so it doesn't get executed twice.
2 years ago
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2 years ago
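A sketch of the fallback pattern the commit describes: decode normally first, and only switch to tiled decoding when CUDA runs out of memory. `vae.decode` and `vae.decode_tiled` are assumed method names, not necessarily ComfyUI's exact API.

```python
import torch

def decode_with_fallback(vae, latents):
    # Try the regular full-image decode first; if it OOMs, free the cache and
    # retry with a tiled decode that needs far less memory at once.
    try:
        return vae.decode(latents)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()
        return vae.decode_tiled(latents)
```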
edikius
165be5828a
Fixed import (#44)
* Fixed an import error: I hit an `ImportError: cannot import name 'Protocol' from 'typing'` while trying to update, so I fixed it so the app starts.
* Update main.py
* Deleted example files
2 years ago
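A hedged sketch of the kind of compatibility fix this error usually gets: `typing.Protocol` only exists on Python 3.8+, so older interpreters fall back to `typing_extensions`. This mirrors the problem described in the commit, not its exact diff.

```python
try:
    from typing import Protocol  # available on Python 3.8+
except ImportError:
    from typing_extensions import Protocol  # backport for older Python
```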
comfyanonymous
1a4edd19cd
Fix overflow issue with inplace softmax.
2 years ago
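The usual cure for softmax overflow is to subtract the row maximum before exponentiating; the sketch below applies that idea to an in-place softmax as an assumption about the class of fix, not the commit's actual change.

```python
import torch

def softmax_inplace_stable(scores):
    # Subtracting the per-row max keeps exp() from overflowing, which matters
    # especially in float16 where large logits quickly become inf.
    scores -= scores.max(dim=-1, keepdim=True).values
    torch.exp(scores, out=scores)
    scores /= scores.sum(dim=-1, keepdim=True)
    return scores
```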
comfyanonymous
df40d4f3bf
torch.cuda.OutOfMemoryError is not present on older pytorch versions.
2 years ago
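A small sketch of how such a version gap is typically papered over: resolve the exception type once and fall back to `RuntimeError`, which older PyTorch raises for CUDA out-of-memory conditions. The alias name is illustrative.

```python
import torch

# torch.cuda.OutOfMemoryError only exists in newer PyTorch releases; older
# ones raise a plain RuntimeError on CUDA OOM.
OOM_EXCEPTION = getattr(torch.cuda, "OutOfMemoryError", RuntimeError)
```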
comfyanonymous
047775615b
Lower the chances of an OOM.
2 years ago
comfyanonymous
1daccf3678
Run softmax in place if it OOMs.
2 years ago
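A sketch of the fallback the message describes: attempt the normal out-of-place softmax, and if the allocation fails, redo it in place so no extra buffer is needed. The function name is hypothetical, and the except clause assumes a PyTorch new enough to define `torch.cuda.OutOfMemoryError`.

```python
import torch

def softmax_with_inplace_fallback(scores):
    try:
        # Regular softmax allocates a new tensor for the result.
        return torch.softmax(scores, dim=-1)
    except torch.cuda.OutOfMemoryError:
        # Fall back to computing the softmax in the existing buffer.
        torch.exp(scores, out=scores)
        scores /= scores.sum(dim=-1, keepdim=True)
        return scores
```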
comfyanonymous
051f472e8f
Fix sub quadratic attention for SD2 and make it the default optimization.
2 years ago
comfyanonymous
220afe3310
Initial commit.
2 years ago