151 Commits (1f4fc9ea0ccceba2e86668a22d86e63b3d262b83)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| comfyanonymous | 6ec3f12c6e | Support SSD1B model and make it easier to support asymmetric unets. | 1 year ago |
| comfyanonymous | a373367b0c | Fix some OOM issues with split and sub quad attention. | 1 year ago |
| comfyanonymous | 8b65f5de54 | attention_basic now works with hypertile. | 1 year ago |
| comfyanonymous | e6bc42df46 | Make sub_quad and split work with hypertile. | 1 year ago |
| comfyanonymous | 9906e3efe3 | Make xformers work with hypertile. | 1 year ago |
| comfyanonymous | d44a2de49f | Make VAE code closer to sgm. | 1 year ago |
| comfyanonymous | 23680a9155 | Refactor the attention stuff in the VAE. | 1 year ago |
| comfyanonymous | bb064c9796 | Add a separate optimized_attention_masked function. | 1 year ago |
| comfyanonymous | 9a55dadb4c | Refactor code so model can be a dtype other than fp32 or fp16. | 1 year ago |
| comfyanonymous | 88733c997f | pytorch_attention_enabled can now return True when xformers is enabled. | 1 year ago |
| comfyanonymous | ac7d8cfa87 | Allow attn_mask in attention_pytorch. | 1 year ago |
| comfyanonymous | 1a4bd9e9a6 | Refactor the attention functions. | 1 year ago |
| comfyanonymous | fff491b032 | Model patches can now know which batch is positive and negative. | 1 year ago |
| comfyanonymous | 446caf711c | Sampling code refactor. | 1 year ago |
| comfyanonymous | afa2399f79 | Add a way to set output block patches to modify the h and hsp. | 1 year ago |
| comfyanonymous | 94e4fe39d8 | This isn't used anywhere. | 1 year ago |
| comfyanonymous | 1938f5c5fe | Add a force argument to soft_empty_cache to force a cache empty. | 1 year ago |
| Simon Lui | 2da73b7073 | Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused. | 1 year ago |
| Simon Lui | 4a0c4ce4ef | Some fixes to generalize CUDA specific functionality to Intel or other GPUs. | 1 year ago |
| comfyanonymous | 0e3b641172 | Remove xformers related print. | 1 year ago |
| comfyanonymous | bed116a1f9 | Remove optimization that caused border. | 1 year ago |
| comfyanonymous | 1c794a2161 | Fallback to slice attention if xformers doesn't support the operation. | 1 year ago |
| comfyanonymous | d935ba50c4 | Make --bf16-vae work on torch 2.0 | 1 year ago |
| comfyanonymous | cf5ae46928 | Controlnet/t2iadapter cleanup. | 1 year ago |
| comfyanonymous | b80c3276dc | Fix issue with gligen. | 1 year ago |
| comfyanonymous | d6e4b342e6 | Support for Control Loras. | 1 year ago |
| comfyanonymous | 2b13939044 | Remove some useless code. | 1 year ago |
| comfyanonymous | 95d796fc85 | Faster VAE loading. | 1 year ago |
| comfyanonymous | 4b957a0010 | Initialize the unet directly on the target device. | 1 year ago |
| comfyanonymous | 9ba440995a | It's actually possible to torch.compile the unet now. | 1 year ago |
| comfyanonymous | 3ded1a3a04 | Refactor of sampler code to deal more easily with different model types. | 1 year ago |
| comfyanonymous | ddc6f12ad5 | Disable autocast in unet for increased speed. | 1 year ago |
| comfyanonymous | 103c487a89 | Cleanup. | 1 year ago |
| comfyanonymous | c71a7e6b20 | Fix ddim + inpainting not working. | 1 year ago |
| comfyanonymous | 78d8035f73 | Fix bug with controlnet. | 1 year ago |
| comfyanonymous | 05676942b7 | Add some more transformer hooks and move tomesd to comfy_extras. | 1 year ago |
| comfyanonymous | fa28d7334b | Remove useless code. | 1 year ago |
| comfyanonymous | f87ec10a97 | Support base SDXL and SDXL refiner models. | 1 year ago |
| comfyanonymous | 9fccf4aa03 | Add original_shape parameter to transformer patch extra_options. | 1 year ago |
| comfyanonymous | 8883cb0f67 | Add a way to set patches that modify the attn2 output. | 1 year ago |
| comfyanonymous | 45be2e92c1 | Fix DDIM v-prediction. | 1 year ago |
| comfyanonymous | ae43f09ef7 | All the unet weights should now be initialized with the right dtype. | 1 year ago |
| comfyanonymous | 7bf89ba923 | Initialize more unet weights as the right dtype. | 1 year ago |
| comfyanonymous | e21d9ad445 | Initialize transformer unet block weights in right dtype at the start. | 1 year ago |
| comfyanonymous | 21f04fe632 | Disable default weight values in unet conv2d for faster loading. | 1 year ago |
| comfyanonymous | 9d54066ebc | This isn't needed for inference. | 1 year ago |
| comfyanonymous | 6971646b8b | Speed up model loading a bit. | 1 year ago |
| comfyanonymous | 274dff3257 | Remove more useless files. | 1 year ago |
| comfyanonymous | f0a2b81cd0 | Cleanup: Remove a bunch of useless files. | 1 year ago |
| comfyanonymous | b8636a44aa | Make scaled_dot_product switch to sliced attention on OOM. | 2 years ago |