75 Commits (ea9ac9d30beb23119e590610d3ec5dcd146a12f2)

Author | SHA1 | Message | Date
comfyanonymous | 2a813c3b09 | Switch some more prints to logging. | 8 months ago
comfyanonymous | cb7c3a2921 | Allow image_only_indicator to be None. | 9 months ago
comfyanonymous | b3e97fc714 | Koala 700M and 1B support. | 9 months ago
comfyanonymous | c661a8b118 | Don't use numpy for calculating sigmas. | 9 months ago
comfyanonymous | 89507f8adf | Remove some unused imports. | 10 months ago
comfyanonymous | 8c6493578b | Implement noise augmentation for SD 4X upscale model. | 11 months ago
comfyanonymous | 79f73a4b33 | Remove useless code. | 11 months ago
comfyanonymous | 61b3f15f8f | Fix lowvram mode not working with unCLIP and Revision code. | 11 months ago
comfyanonymous | d0165d819a | Fix SVD lowvram mode. | 11 months ago
comfyanonymous | 261bcbb0d9 | A few missing comfy ops in the VAE. | 11 months ago
comfyanonymous | 77755ab8db | Refactor comfy.ops | 11 months ago
comfyanonymous | 31b0f6f3d8 | UNET weights can now be stored in fp8. | 12 months ago
comfyanonymous | af365e4dd1 | All the unet ops with weights are now handled by comfy.ops | 12 months ago
comfyanonymous | 50dc39d6ec | Clean up the extra_options dict for the transformer patches. | 12 months ago
comfyanonymous | 871cc20e13 | Support SVD img2vid model. | 1 year ago
comfyanonymous | 72741105a6 | Remove useless code. | 1 year ago
comfyanonymous | 7e3fe3ad28 | Make deep shrink behave like it should. | 1 year ago
comfyanonymous | 7ea6bb038c | Print warning when controlnet can't be applied instead of crashing. | 1 year ago
comfyanonymous | 94cc718e9c | Add a way to add patches to the input block. | 1 year ago
comfyanonymous | 794dd2064d | Fix typo. | 1 year ago
comfyanonymous | a527d0c795 | Code refactor. | 1 year ago
comfyanonymous | 2a23ba0b8c | Fix unet ops not entirely on GPU. | 1 year ago
comfyanonymous | 6ec3f12c6e | Support SSD1B model and make it easier to support asymmetric unets. | 1 year ago
comfyanonymous | d44a2de49f | Make VAE code closer to sgm. | 1 year ago
comfyanonymous | 23680a9155 | Refactor the attention stuff in the VAE. | 1 year ago
comfyanonymous | 9a55dadb4c | Refactor code so model can be a dtype other than fp32 or fp16. | 1 year ago
comfyanonymous | 88733c997f | pytorch_attention_enabled can now return True when xformers is enabled. | 1 year ago
comfyanonymous | 1a4bd9e9a6 | Refactor the attention functions. | 1 year ago
comfyanonymous | afa2399f79 | Add a way to set output block patches to modify the h and hsp. | 1 year ago
comfyanonymous | 1938f5c5fe | Add a force argument to soft_empty_cache to force a cache empty. | 1 year ago
Simon Lui | 2da73b7073 | Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused. | 1 year ago
Simon Lui | 4a0c4ce4ef | Some fixes to generalize CUDA specific functionality to Intel or other GPUs. | 1 year ago
comfyanonymous | bed116a1f9 | Remove optimization that caused border. | 1 year ago
comfyanonymous | 1c794a2161 | Fallback to slice attention if xformers doesn't support the operation. | 1 year ago
comfyanonymous | d935ba50c4 | Make --bf16-vae work on torch 2.0 | 1 year ago
comfyanonymous | cf5ae46928 | Controlnet/t2iadapter cleanup. | 1 year ago
comfyanonymous | b80c3276dc | Fix issue with gligen. | 1 year ago
comfyanonymous | d6e4b342e6 | Support for Control Loras. | 1 year ago
comfyanonymous | 2b13939044 | Remove some useless code. | 1 year ago
comfyanonymous | 95d796fc85 | Faster VAE loading. | 1 year ago
comfyanonymous | 4b957a0010 | Initialize the unet directly on the target device. | 1 year ago
comfyanonymous | ddc6f12ad5 | Disable autocast in unet for increased speed. | 1 year ago
comfyanonymous | 05676942b7 | Add some more transformer hooks and move tomesd to comfy_extras. | 1 year ago
comfyanonymous | fa28d7334b | Remove useless code. | 1 year ago
comfyanonymous | f87ec10a97 | Support base SDXL and SDXL refiner models. | 1 year ago
comfyanonymous | ae43f09ef7 | All the unet weights should now be initialized with the right dtype. | 1 year ago
comfyanonymous | 7bf89ba923 | Initialize more unet weights as the right dtype. | 1 year ago
comfyanonymous | e21d9ad445 | Initialize transformer unet block weights in right dtype at the start. | 1 year ago
comfyanonymous | 21f04fe632 | Disable default weight values in unet conv2d for faster loading. | 1 year ago
comfyanonymous | b8636a44aa | Make scaled_dot_product switch to sliced attention on OOM. | 2 years ago