132 Commits (4871a36458e7cd4af1a7f46dd6738c406e831413)

Author SHA1 Message Date
comfyanonymous 2395ae740a Make unclip more deterministic. 10 months ago
comfyanonymous 6a7bc35db8 Use basic attention implementation for small inputs on old pytorch. 10 months ago
comfyanonymous c6951548cf Update optimized_attention_for_device function for new functions that 11 months ago
comfyanonymous aaa9017302 Add attention mask support to sub quad attention. 11 months ago
comfyanonymous 0c2c9fbdfa Support attention mask in split attention. 11 months ago
comfyanonymous 3ad0191bfb Implement attention mask on xformers. 11 months ago
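The three mask commits above (sub-quad, split, xformers) all add the same capability: an additive attention mask, where `-inf` at a position removes that key from the softmax. A minimal NumPy sketch of the idea — function and variable names are illustrative, not ComfyUI's actual implementation:

```python
import numpy as np

def masked_attention(q, k, v, attn_mask=None):
    """q, k, v: (tokens, dim). attn_mask is additive: -inf blocks a position."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = q @ k.T * scale
    if attn_mask is not None:
        scores = scores + attn_mask          # masked entries become -inf
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)                 # exp(-inf) == 0, so masked keys drop out
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

q = np.eye(2, 4)
k = np.eye(2, 4)
v = np.array([[1.0, 0.0], [0.0, 1.0]])
mask = np.array([[0.0, -np.inf], [0.0, 0.0]])  # first query may not see second key
out = masked_attention(q, k, v, mask)
```

With the mask in place, the first query's output is exactly `v[0]`, since its only unmasked key is the first one.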
comfyanonymous 8c6493578b Implement noise augmentation for SD 4X upscale model. 11 months ago
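Noise augmentation for the SD 4X upscale model follows the unCLIP-style recipe: corrupt the conditioning image with Gaussian noise at a chosen level and pass that level to the model alongside the conditioning. A hypothetical sketch — the linear alpha schedule and all names here are assumptions, not the actual ComfyUI code:

```python
import numpy as np

def noise_augment(x, noise_level, max_level=1000, rng=None):
    """Blend conditioning x with Gaussian noise; return (noised, level) so the
    model can be told how corrupted its conditioning is. Assumed linear schedule."""
    rng = np.random.default_rng(0) if rng is None else rng
    alpha = 1.0 - noise_level / max_level      # assumption: linear in noise_level
    noise = rng.standard_normal(x.shape)
    return np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * noise, noise_level

x = np.ones((4, 4))
noised, level = noise_augment(x, noise_level=0)   # level 0: unchanged
noisier, _ = noise_augment(x, noise_level=500)    # level 500: visibly corrupted
```

At `noise_level=0` the conditioning passes through untouched; higher levels trade fidelity for robustness during sampling.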
comfyanonymous 79f73a4b33 Remove useless code. 11 months ago
comfyanonymous 61b3f15f8f Fix lowvram mode not working with unCLIP and Revision code. 11 months ago
comfyanonymous d0165d819a Fix SVD lowvram mode. 11 months ago
comfyanonymous 261bcbb0d9 A few missing comfy ops in the VAE. 11 months ago
comfyanonymous a5056cfb1f Remove useless code. 11 months ago
comfyanonymous 77755ab8db Refactor comfy.ops 11 months ago
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation. 12 months ago
comfyanonymous 1bbd65ab30 Missed this one. 12 months ago
comfyanonymous 31b0f6f3d8 UNET weights can now be stored in fp8. 12 months ago
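Storing UNET weights in fp8 halves memory versus fp16: weights live in the small dtype and are cast to the compute dtype just-in-time for each matmul. NumPy has no fp8, so `float16` stands in for `torch.float8_e4m3fn` in this sketch; the storage/compute split is the point, the class and names are hypothetical:

```python
import numpy as np

class LowPrecisionLinear:
    """Weights stored low-precision, upcast only at use time (structure sketch)."""
    def __init__(self, weight, store_dtype=np.float16, compute_dtype=np.float32):
        self.weight = weight.astype(store_dtype)      # small resident copy
        self.compute_dtype = compute_dtype

    def __call__(self, x):
        w = self.weight.astype(self.compute_dtype)    # transient full-precision cast
        return x.astype(self.compute_dtype) @ w.T

layer = LowPrecisionLinear(np.eye(3))
y = layer(np.ones((2, 3)))
```

The transient cast costs a little compute per call but keeps the resident model small, which is the trade-off fp8 storage makes.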
comfyanonymous af365e4dd1 All the unet ops with weights are now handled by comfy.ops 12 months ago
comfyanonymous 39e75862b2 Fix regression from last commit. 12 months ago
comfyanonymous 50dc39d6ec Clean up the extra_options dict for the transformer patches. 12 months ago
comfyanonymous 3e5ea74ad3 Make buggy xformers fall back on pytorch attention. 12 months ago
comfyanonymous 871cc20e13 Support SVD img2vid model. 1 year ago
comfyanonymous 72741105a6 Remove useless code. 1 year ago
comfyanonymous 7e3fe3ad28 Make deep shrink behave like it should. 1 year ago
comfyanonymous 7ea6bb038c Print warning when controlnet can't be applied instead of crashing. 1 year ago
comfyanonymous 94cc718e9c Add a way to add patches to the input block. 1 year ago
comfyanonymous 794dd2064d Fix typo. 1 year ago
comfyanonymous a527d0c795 Code refactor. 1 year ago
comfyanonymous 2a23ba0b8c Fix unet ops not entirely on GPU. 1 year ago
comfyanonymous a268a574fa Remove a bunch of useless code. 1 year ago
comfyanonymous c837a173fa Fix some memory issues in sub quad attention. 1 year ago
comfyanonymous 125b03eead Fix some OOM issues with split attention. 1 year ago
comfyanonymous 6ec3f12c6e Support SSD1B model and make it easier to support asymmetric unets. 1 year ago
comfyanonymous a373367b0c Fix some OOM issues with split and sub quad attention. 1 year ago
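The OOM fixes in the split and sub-quad commits above target the same bottleneck: naive attention materializes a full (tokens × tokens) score matrix at once. The "split" strategy processes the query sequence in slices so only a (chunk × tokens) slab exists at any time. A simplified NumPy sketch of the chunking idea (not ComfyUI's `attention_split` itself):

```python
import numpy as np

def attention_full(q, k, v):
    """Reference: whole score matrix at once."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    s = q @ k.T * scale
    s -= s.max(axis=-1, keepdims=True)
    w = np.exp(s)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def attention_split(q, k, v, chunk=2):
    """Same result, but peak memory is (chunk x tokens) instead of (tokens x tokens)."""
    return np.concatenate([attention_full(q[i:i + chunk], k, v)
                           for i in range(0, q.shape[0], chunk)])

rng = np.random.default_rng(1)
q, k, v = (rng.standard_normal((6, 8)) for _ in range(3))
```

Because the softmax is independent per query row, splitting over queries is exact, not an approximation; the chunk size is just a memory/speed dial.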
comfyanonymous 8b65f5de54 attention_basic now works with hypertile. 1 year ago
comfyanonymous e6bc42df46 Make sub_quad and split work with hypertile. 1 year ago
comfyanonymous 9906e3efe3 Make xformers work with hypertile. 1 year ago
comfyanonymous d44a2de49f Make VAE code closer to sgm. 1 year ago
comfyanonymous 23680a9155 Refactor the attention stuff in the VAE. 1 year ago
comfyanonymous bb064c9796 Add a separate optimized_attention_masked function. 1 year ago
comfyanonymous 9a55dadb4c Refactor code so model can be a dtype other than fp32 or fp16. 1 year ago
comfyanonymous 88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 1 year ago
comfyanonymous ac7d8cfa87 Allow attn_mask in attention_pytorch. 1 year ago
comfyanonymous 1a4bd9e9a6 Refactor the attention functions. 1 year ago
comfyanonymous fff491b032 Model patches can now know which batch is positive and negative. 1 year ago
comfyanonymous 446caf711c Sampling code refactor. 1 year ago
comfyanonymous afa2399f79 Add a way to set output block patches to modify the h and hsp. 1 year ago
comfyanonymous 94e4fe39d8 This isn't used anywhere. 1 year ago
comfyanonymous 1938f5c5fe Add a force argument to soft_empty_cache to force a cache empty. 1 year ago
Simon Lui 2da73b7073 Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused. 1 year ago
Simon Lui 4a0c4ce4ef Some fixes to generalize CUDA specific functionality to Intel or other GPUs. 1 year ago