50 Commits (c8013f73e57b27f8e89d96a13d806d4340bd48a0)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| comfyanonymous | 9a55dadb4c | Refactor code so model can be a dtype other than fp32 or fp16. | 1 year ago |
| comfyanonymous | 88733c997f | pytorch_attention_enabled can now return True when xformers is enabled. | 1 year ago |
| comfyanonymous | 1a4bd9e9a6 | Refactor the attention functions. | 1 year ago |
| comfyanonymous | afa2399f79 | Add a way to set output block patches to modify the h and hsp. | 1 year ago |
| comfyanonymous | 1938f5c5fe | Add a force argument to soft_empty_cache to force a cache empty. | 1 year ago |
| Simon Lui | 2da73b7073 | Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused. | 1 year ago |
| Simon Lui | 4a0c4ce4ef | Some fixes to generalize CUDA specific functionality to Intel or other GPUs. | 1 year ago |
| comfyanonymous | bed116a1f9 | Remove optimization that caused border. | 1 year ago |
| comfyanonymous | 1c794a2161 | Fallback to slice attention if xformers doesn't support the operation. | 1 year ago |
| comfyanonymous | d935ba50c4 | Make --bf16-vae work on torch 2.0 | 1 year ago |
| comfyanonymous | cf5ae46928 | Controlnet/t2iadapter cleanup. | 1 year ago |
| comfyanonymous | b80c3276dc | Fix issue with gligen. | 1 year ago |
| comfyanonymous | d6e4b342e6 | Support for Control Loras. | 1 year ago |
| comfyanonymous | 2b13939044 | Remove some useless code. | 1 year ago |
| comfyanonymous | 95d796fc85 | Faster VAE loading. | 1 year ago |
| comfyanonymous | 4b957a0010 | Initialize the unet directly on the target device. | 1 year ago |
| comfyanonymous | ddc6f12ad5 | Disable autocast in unet for increased speed. | 1 year ago |
| comfyanonymous | 05676942b7 | Add some more transformer hooks and move tomesd to comfy_extras. | 1 year ago |
| comfyanonymous | fa28d7334b | Remove useless code. | 1 year ago |
| comfyanonymous | f87ec10a97 | Support base SDXL and SDXL refiner models. | 1 year ago |
| comfyanonymous | ae43f09ef7 | All the unet weights should now be initialized with the right dtype. | 1 year ago |
| comfyanonymous | 7bf89ba923 | Initialize more unet weights as the right dtype. | 1 year ago |
| comfyanonymous | e21d9ad445 | Initialize transformer unet block weights in right dtype at the start. | 1 year ago |
| comfyanonymous | 21f04fe632 | Disable default weight values in unet conv2d for faster loading. | 1 year ago |
| comfyanonymous | b8636a44aa | Make scaled_dot_product switch to sliced attention on OOM. | 2 years ago |
| comfyanonymous | 797c4e8d3b | Simplify and improve some vae attention code. | 2 years ago |
| comfyanonymous | cb1551b819 | Lowvram mode for gligen and fix some lowvram issues. | 2 years ago |
| comfyanonymous | bae4fb4a9d | Fix imports. | 2 years ago |
| comfyanonymous | ba8a4c3667 | Change latent resolution step to 8. | 2 years ago |
| comfyanonymous | 66c8aa5c3e | Make unet work with any input shape. | 2 years ago |
| comfyanonymous | 3696d1699a | Add support for GLIGEN textbox model. | 2 years ago |
| comfyanonymous | 73c3e11e83 | Fix model_management import so it doesn't get executed twice. | 2 years ago |
| comfyanonymous | e46b1c3034 | Disable xformers in VAE when xformers == 0.0.18 | 2 years ago |
| comfyanonymous | 809bcc8ceb | Add support for unCLIP SD2.x models. | 2 years ago |
| comfyanonymous | 61ec3c9d5d | Add a way to pass options to the transformers blocks. | 2 years ago |
| comfyanonymous | 3ed4a4e4e6 | Try again with vae tiled decoding if regular fails because of OOM. | 2 years ago |
| comfyanonymous | c692509c2b | Try to improve VAEEncode memory usage a bit. | 2 years ago |
| comfyanonymous | 54dbfaf2ec | Remove omegaconf dependency and some ci changes. | 2 years ago |
| comfyanonymous | 83f23f82b8 | Add pytorch attention support to VAE. | 2 years ago |
| comfyanonymous | a256a2abde | --disable-xformers should not even try to import xformers. | 2 years ago |
| comfyanonymous | 0f3ba7482f | Xformers is now properly disabled when --cpu used. | 2 years ago |
| comfyanonymous | 1de86851b1 | Try to fix memory issue. | 2 years ago |
| comfyanonymous | cc8baf1080 | Make VAE use common function to get free memory. | 2 years ago |
| comfyanonymous | fcb25d37db | Prepare for t2i adapter. | 2 years ago |
| comfyanonymous | 09f1d76ed8 | Fix an OOM issue. | 2 years ago |
| comfyanonymous | 4efa67fa12 | Add ControlNet support. | 2 years ago |
| comfyanonymous | 509c7dfc6d | Use real softmax in split op to fix issue with some images. | 2 years ago |
| comfyanonymous | 1f6a467e92 | Update ldm dir with latest upstream stable diffusion changes. | 2 years ago |
| comfyanonymous | 773cdabfce | Same thing but for the other places where it's used. | 2 years ago |
| comfyanonymous | e8c499ddd4 | Split optimization for VAE attention block. | 2 years ago |
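
A pattern that recurs across several of these commits (e.g. b8636a44aa "Make scaled_dot_product switch to sliced attention on OOM", 1c794a2161, and 3ed4a4e4e6) is trying a fast code path first and falling back to a slower, memory-friendlier one when it raises an out-of-memory error. Below is a minimal sketch of that idea, assuming PyTorch >= 2.0 and a CUDA device; the function name `attention_with_fallback` and the chunking factor are hypothetical illustrations, not ComfyUI's actual implementation:

```python
import torch
import torch.nn.functional as F

def attention_with_fallback(q, k, v):
    """Hypothetical sketch of the OOM-fallback pattern in the log above:
    try fused scaled-dot-product attention, retry with sliced attention
    (query processed in chunks) if CUDA runs out of memory."""
    try:
        # Fast path: fused attention kernel.
        return F.scaled_dot_product_attention(q, k, v)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()  # release cached blocks before retrying
        scale = q.shape[-1] ** -0.5
        slice_size = max(1, q.shape[-2] // 8)  # hypothetical chunking factor
        chunks = []
        for i in range(0, q.shape[-2], slice_size):
            # softmax(Q_chunk @ K^T * scale) @ V, bounding peak memory
            qc = q[..., i:i + slice_size, :]
            attn = torch.softmax((qc @ k.transpose(-2, -1)) * scale, dim=-1)
            chunks.append(attn @ v)
        return torch.cat(chunks, dim=-2)
```

The key design choice the commits hint at is that the fallback only changes *how* attention is computed, not *what* it computes: the sliced path yields the same result as the fused one, just with a smaller peak allocation per step.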