65 Commits (5b40e7a5ed192e217575c55e061c17a52cf9a15d)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| comfyanonymous | 18a6c1db33 | Add a TomePatchModel node to the `_for_testing` section. | 2 years ago |
| comfyanonymous | 61ec3c9d5d | Add a way to pass options to the transformer blocks. | 2 years ago |
| comfyanonymous | 3ed4a4e4e6 | Try again with VAE tiled decoding if regular decoding fails because of OOM. | 2 years ago |
| comfyanonymous | 83f23f82b8 | Add PyTorch attention support to the VAE. | 2 years ago |
| comfyanonymous | a256a2abde | `--disable-xformers` should not even try to import xformers. | 2 years ago |
| comfyanonymous | 0f3ba7482f | xformers is now properly disabled when `--cpu` is used. | 2 years ago |
| comfyanonymous | 798c90e1c0 | Fix PyTorch 2.0 cross attention not working. | 2 years ago |
| comfyanonymous | c1f5855ac1 | Make some cross attention functions work on the CPU. | 2 years ago |
| comfyanonymous | 1a612e1c74 | Add some PyTorch `scaled_dot_product_attention` code for testing. | 2 years ago |
| comfyanonymous | 9502ee45c3 | Hopefully fix a strange issue with xformers + lowvram. | 2 years ago |
| comfyanonymous | c9daec4c89 | Remove prints that are useless when xformers is enabled. | 2 years ago |
| comfyanonymous | 773cdabfce | Apply the same OOM fix in the other places where it's used (follow-up to 50db297cf6). | 2 years ago |
| comfyanonymous | 50db297cf6 | Try to fix OOM issues with cards that have less VRAM than mine. | 2 years ago |
| comfyanonymous | 051f472e8f | Fix sub-quadratic attention for SD2 and make it the default optimization. | 2 years ago |
| comfyanonymous | 220afe3310 | Initial commit. | 2 years ago |
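
A few of the changes above describe patterns concrete enough to sketch. Commit 3ed4a4e4e6 retries with tiled VAE decoding when the regular decode runs out of memory. Here is a minimal sketch of that fallback pattern, not the repository's actual code; `vae.decode` and `vae.decode_tiled` are hypothetical stand-ins for whatever the real VAE object exposes:

```python
import torch

def decode_latent(vae, samples):
    """Decode latents, falling back to tiled decoding on OOM (sketch)."""
    try:
        return vae.decode(samples)
    except torch.cuda.OutOfMemoryError:
        # Regular decoding ran out of VRAM: release the failed allocation
        # and retry in smaller tiles, trading speed for peak memory.
        # (On PyTorch < 1.13 you would catch RuntimeError instead.)
        torch.cuda.empty_cache()
        return vae.decode_tiled(samples)
```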
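
Commits a256a2abde and 0f3ba7482f gate the xformers import behind the `--disable-xformers` and `--cpu` flags, so that disabling it really skips the import rather than importing and then ignoring the package. A sketch of such a guarded import, assuming only the flag names from the messages; the argument parsing and the `XFORMERS_IS_AVAILABLE` variable are illustrative:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--disable-xformers", action="store_true")
parser.add_argument("--cpu", action="store_true")
args = parser.parse_args()

XFORMERS_IS_AVAILABLE = False
# Only attempt the import when xformers hasn't been ruled out:
# importing it on an unsupported setup can be slow or fail outright.
if not args.disable_xformers and not args.cpu:
    try:
        import xformers
        import xformers.ops
        XFORMERS_IS_AVAILABLE = True
    except ImportError:
        pass
```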
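
Commits 1a612e1c74 and 798c90e1c0 concern PyTorch 2.0's `torch.nn.functional.scaled_dot_product_attention`, which dispatches to a fused kernel (FlashAttention, memory-efficient, or plain math) automatically. The function itself is a real PyTorch 2.0 API; the helper below and its `(batch, sequence, heads * dim_head)` layout convention are an assumption for illustration, not necessarily what the repository uses:

```python
import torch
import torch.nn.functional as F

def attention(q, k, v, heads):
    # q: (batch, seq_q, heads * dim_head); k, v: (batch, seq_kv, heads * dim_head)
    b, seq_q, inner = q.shape
    dim_head = inner // heads
    # Split heads out and move them before the sequence dim, as SDPA expects.
    q, k, v = (t.view(b, -1, heads, dim_head).transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)  # (b, heads, seq_q, dim_head)
    # Merge the heads back into the channel dimension.
    return out.transpose(1, 2).reshape(b, seq_q, inner)

# Usage: cross attention with 77 text tokens attending over themselves.
q = k = v = torch.randn(1, 77, 8 * 64)
print(attention(q, k, v, heads=8).shape)  # torch.Size([1, 77, 512])
```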