c1f5855ac1 Make some cross attention functions work on the CPU. (comfyanonymous, 2 years ago)
1a612e1c74 Add some pytorch scaled_dot_product_attention code for testing; --use-pytorch-cross-attention to use it. (comfyanonymous, 2 years ago)
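For context on commit 1a612e1c74: `--use-pytorch-cross-attention` switches the cross-attention implementation to pytorch's `torch.nn.functional.scaled_dot_product_attention`, which computes softmax(QK^T / sqrt(d)) V. A minimal pure-Python sketch of that computation (plain lists of row vectors, no batching, masking, or dropout; for illustration only, not the actual ComfyUI code):

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def scaled_dot_product_attention(q, k, v):
    """Compute softmax(Q K^T / sqrt(d)) V for lists of row vectors."""
    d = len(q[0])  # head dimension used for the 1/sqrt(d) scaling
    out = []
    for q_row in q:
        # Scaled dot products of this query against every key.
        scores = [sum(qi * ki for qi, ki in zip(q_row, k_row)) / math.sqrt(d)
                  for k_row in k]
        weights = softmax(scores)
        # Weighted sum of the value rows.
        out.append([sum(w * v_row[j] for w, v_row in zip(weights, v))
                    for j in range(len(v[0]))])
    return out
```

With two identical keys the weights are 0.5 each, so the output is the mean of the value rows, which makes the scaling and normalization easy to sanity-check by hand.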
9502ee45c3 Hopefully fix a strange issue with xformers + lowvram. (comfyanonymous, 2 years ago)
fcb25d37db Prepare for t2i adapter. (comfyanonymous, 2 years ago)
c9daec4c89 Remove prints that are useless when xformers is enabled. (comfyanonymous, 2 years ago)
09f1d76ed8 Fix an OOM issue. (comfyanonymous, 2 years ago)
4efa67fa12 Add ControlNet support. (comfyanonymous, 2 years ago)
1a4edd19cd Fix overflow issue with inplace softmax. (comfyanonymous, 2 years ago)
509c7dfc6d Use real softmax in split op to fix issue with some images. (comfyanonymous, 2 years ago)
1f6a467e92 Update ldm dir with latest upstream stable diffusion changes. (comfyanonymous, 2 years ago)
773cdabfce Same thing but for the other places where it's used. (comfyanonymous, 2 years ago)
df40d4f3bf torch.cuda.OutOfMemoryError is not present on older pytorch versions. (comfyanonymous, 2 years ago)
e8c499ddd4 Split optimization for VAE attention block. (comfyanonymous, 2 years ago)
5b4e312749 Use inplace operations for less OOM issues. (comfyanonymous, 2 years ago)
047775615b Lower the chances of an OOM. (comfyanonymous, 2 years ago)
1daccf3678 Run softmax in place if it OOMs. (comfyanonymous, 2 years ago)
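Commits 1daccf3678 and 5b4e312749 describe a pattern of attempting the out-of-place softmax first and redoing it in place only when allocation fails, since the in-place variant avoids a second full-size buffer. A schematic sketch of that try/fall-back shape, using hypothetical `softmax` / `softmax_inplace` callables standing in for the real torch ops (in the actual code the caught exception is torch's CUDA out-of-memory error, which commit df40d4f3bf notes is absent on older pytorch versions):

```python
def softmax_with_oom_fallback(tensor, softmax, softmax_inplace,
                              oom_error=MemoryError):
    """Try the out-of-place softmax first; on OOM, redo it in place.

    `softmax` returns a new buffer; `softmax_inplace` mutates `tensor`
    and returns it, so the fallback needs no second full-size allocation.
    Both callables and `oom_error` are illustrative stand-ins, not real
    ComfyUI or torch names.
    """
    try:
        return softmax(tensor)
    except oom_error:
        # Overwriting the input roughly halves peak memory for this step.
        return softmax_inplace(tensor)
```

The trade-off the commit history hints at: the in-place path saves memory but is more delicate numerically (see the later overflow fix in 1a4edd19cd), which is why the out-of-place version stays the first choice.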
50db297cf6 Try to fix OOM issues with cards that have less vram than mine. (comfyanonymous, 2 years ago)
051f472e8f Fix sub quadratic attention for SD2 and make it the default optimization. (comfyanonymous, 2 years ago)
220afe3310 Initial commit. (comfyanonymous, 2 years ago)