27 Commits (69033081c50de94cbc2a4fce12900611da04b1e9)

Author SHA1 Message Date
comfyanonymous d44a2de49f Make VAE code closer to sgm. 1 year ago
comfyanonymous 23680a9155 Refactor the attention stuff in the VAE. 1 year ago
comfyanonymous 88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 1 year ago
comfyanonymous 1a4bd9e9a6 Refactor the attention functions. 1 year ago
comfyanonymous 1938f5c5fe Add a force argument to soft_empty_cache to force emptying the cache. 1 year ago
comfyanonymous bed116a1f9 Remove optimization that caused a border. 1 year ago
comfyanonymous 1c794a2161 Fall back to sliced attention if xformers doesn't support the operation. 1 year ago
comfyanonymous d935ba50c4 Make --bf16-vae work on torch 2.0 1 year ago
comfyanonymous 95d796fc85 Faster VAE loading. 1 year ago
comfyanonymous fa28d7334b Remove useless code. 1 year ago
comfyanonymous b8636a44aa Make scaled_dot_product switch to sliced attention on OOM. 2 years ago
comfyanonymous 797c4e8d3b Simplify and improve some VAE attention code. 2 years ago
comfyanonymous bae4fb4a9d Fix imports. 2 years ago
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2 years ago
comfyanonymous e46b1c3034 Disable xformers in VAE when xformers == 0.0.18 2 years ago
comfyanonymous 3ed4a4e4e6 Try again with VAE tiled decoding if regular decoding fails because of OOM. 2 years ago
comfyanonymous c692509c2b Try to improve VAEEncode memory usage a bit. 2 years ago
comfyanonymous 83f23f82b8 Add pytorch attention support to VAE. 2 years ago
comfyanonymous a256a2abde --disable-xformers should not even try to import xformers. 2 years ago
comfyanonymous 0f3ba7482f Xformers is now properly disabled when --cpu is used. 2 years ago
comfyanonymous 1de86851b1 Try to fix a memory issue. 2 years ago
comfyanonymous cc8baf1080 Make VAE use common function to get free memory. 2 years ago
comfyanonymous 509c7dfc6d Use real softmax in the split op to fix an issue with some images. 2 years ago
comfyanonymous 773cdabfce Use real softmax in the other places where the split op is used. 2 years ago
comfyanonymous e8c499ddd4 Split optimization for VAE attention block. 2 years ago
comfyanonymous 5b4e312749 Use inplace operations for fewer OOM issues. 2 years ago
comfyanonymous 220afe3310 Initial commit. 2 years ago
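
Several of the commits above (1c794a2161, b8636a44aa, 3ed4a4e4e6) revolve around the same idea: try the regular, memory-hungry path first and fall back to a lower-memory variant when the GPU runs out of memory. Below is a minimal sketch of that fallback pattern, assuming a `vae` object with hypothetical `decode` and `decode_tiled` methods; the actual ComfyUI method names and exception handling may differ.

```python
import torch

def decode_with_oom_fallback(vae, latent):
    # Try the regular (fast, memory-hungry) decode first.
    try:
        return vae.decode(latent)
    except torch.cuda.OutOfMemoryError:
        # Out of VRAM: release cached allocations, then retry with the
        # lower-memory tiled decode path.
        torch.cuda.empty_cache()
        return vae.decode_tiled(latent)
```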