64 Commits (69033081c50de94cbc2a4fce12900611da04b1e9)

Author SHA1 Message Date
comfyanonymous 31b0f6f3d8 UNET weights can now be stored in fp8. 12 months ago
comfyanonymous af365e4dd1 All the unet ops with weights are now handled by comfy.ops. 12 months ago
comfyanonymous 50dc39d6ec Clean up the extra_options dict for the transformer patches. 12 months ago
comfyanonymous 871cc20e13 Support SVD img2vid model. 1 year ago
comfyanonymous 72741105a6 Remove useless code. 1 year ago
comfyanonymous 7e3fe3ad28 Make deep shrink behave like it should. 1 year ago
comfyanonymous 7ea6bb038c Print warning when controlnet can't be applied instead of crashing. 1 year ago
comfyanonymous 94cc718e9c Add a way to add patches to the input block. 1 year ago
comfyanonymous 794dd2064d Fix typo. 1 year ago
comfyanonymous a527d0c795 Code refactor. 1 year ago
comfyanonymous 2a23ba0b8c Fix unet ops not entirely on GPU. 1 year ago
comfyanonymous 6ec3f12c6e Support SSD1B model and make it easier to support asymmetric unets. 1 year ago
comfyanonymous d44a2de49f Make VAE code closer to sgm. 1 year ago
comfyanonymous 23680a9155 Refactor the attention stuff in the VAE. 1 year ago
comfyanonymous 9a55dadb4c Refactor code so model can be a dtype other than fp32 or fp16. 1 year ago
comfyanonymous 88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 1 year ago
comfyanonymous 1a4bd9e9a6 Refactor the attention functions. 1 year ago
comfyanonymous afa2399f79 Add a way to set output block patches to modify the h and hsp. 1 year ago
comfyanonymous 1938f5c5fe Add a force argument to soft_empty_cache to force a cache empty. 1 year ago
Simon Lui 2da73b7073 Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused. 1 year ago
Simon Lui 4a0c4ce4ef Some fixes to generalize CUDA specific functionality to Intel or other GPUs. 1 year ago
comfyanonymous bed116a1f9 Remove optimization that caused border. 1 year ago
comfyanonymous 1c794a2161 Fallback to slice attention if xformers doesn't support the operation. 1 year ago
comfyanonymous d935ba50c4 Make --bf16-vae work on torch 2.0. 1 year ago
comfyanonymous cf5ae46928 Controlnet/t2iadapter cleanup. 1 year ago
comfyanonymous b80c3276dc Fix issue with gligen. 1 year ago
comfyanonymous d6e4b342e6 Support for Control Loras. 1 year ago
comfyanonymous 2b13939044 Remove some useless code. 1 year ago
comfyanonymous 95d796fc85 Faster VAE loading. 1 year ago
comfyanonymous 4b957a0010 Initialize the unet directly on the target device. 1 year ago
comfyanonymous ddc6f12ad5 Disable autocast in unet for increased speed. 1 year ago
comfyanonymous 05676942b7 Add some more transformer hooks and move tomesd to comfy_extras. 1 year ago
comfyanonymous fa28d7334b Remove useless code. 1 year ago
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models. 1 year ago
comfyanonymous ae43f09ef7 All the unet weights should now be initialized with the right dtype. 1 year ago
comfyanonymous 7bf89ba923 Initialize more unet weights as the right dtype. 1 year ago
comfyanonymous e21d9ad445 Initialize transformer unet block weights in right dtype at the start. 1 year ago
comfyanonymous 21f04fe632 Disable default weight values in unet conv2d for faster loading. 1 year ago
comfyanonymous b8636a44aa Make scaled_dot_product switch to sliced attention on OOM. 2 years ago
comfyanonymous 797c4e8d3b Simplify and improve some vae attention code. 2 years ago
comfyanonymous cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2 years ago
comfyanonymous bae4fb4a9d Fix imports. 2 years ago
comfyanonymous ba8a4c3667 Change latent resolution step to 8. 2 years ago
comfyanonymous 66c8aa5c3e Make unet work with any input shape. 2 years ago
comfyanonymous 3696d1699a Add support for GLIGEN textbox model. 2 years ago
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2 years ago
comfyanonymous e46b1c3034 Disable xformers in VAE when xformers == 0.0.18. 2 years ago
comfyanonymous 809bcc8ceb Add support for unCLIP SD2.x models. 2 years ago
comfyanonymous 61ec3c9d5d Add a way to pass options to the transformers blocks. 2 years ago
comfyanonymous 3ed4a4e4e6 Try again with vae tiled decoding if regular fails because of OOM. 2 years ago
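
A recurring pattern in this history is catching CUDA out-of-memory errors and retrying with a cheaper strategy, as in 3ed4a4e4e6 (fall back to tiled VAE decoding) and b8636a44aa (fall back to sliced attention). A minimal sketch of that pattern in PyTorch follows; `vae.decode` and `vae.decode_tiled` are stand-in names for illustration, not ComfyUI's exact API.

```python
import torch

def decode_with_fallback(vae, latent):
    """Decode normally; on CUDA OOM, free cached blocks and retry
    with a tiled decode that processes the latent in smaller pieces."""
    try:
        return vae.decode(latent)
    except torch.cuda.OutOfMemoryError:
        torch.cuda.empty_cache()  # release cached allocations before retrying
        return vae.decode_tiled(latent, tile_size=512)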
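```

The newest commit above, 31b0f6f3d8, stores UNET weights in fp8. The usual trick is to keep weights in an 8-bit float format for memory savings and upcast them to the activation dtype at matmul time, since fp8 matmuls are not generally supported. A hypothetical sketch of that idea (not ComfyUI's comfy.ops code) using `torch.float8_e4m3fn`, available in PyTorch >= 2.1:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FP8Linear(nn.Linear):
    """Linear layer whose weight is stored in fp8 (half the memory of
    fp16) and cast back to the input dtype for the actual matmul."""

    def fp8_(self):
        # Inference-only: fp8 tensors do not support gradients.
        self.weight.requires_grad_(False)
        self.weight.data = self.weight.data.to(torch.float8_e4m3fn)
        return self

    def forward(self, x):
        w = self.weight.to(x.dtype)  # upcast fp8 weight for compute
        b = self.bias.to(x.dtype) if self.bias is not None else None
        return F.linear(x, w, b)

layer = FP8Linear(320, 320, dtype=torch.float16).fp8_()
y = layer(torch.randn(1, 320, dtype=torch.float16))
```

The weight stays compact at rest and only a temporary higher-precision copy exists during the forward pass, which is why this trades a little compute for a large reduction in model memory.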