137 Commits (master)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| comfyanonymous | 1900e5119f | Fix potential issue. | 6 months ago |
| comfyanonymous | 0bdc2b15c7 | Cleanup. | 6 months ago |
| comfyanonymous | 98f828fad9 | Remove unnecessary code. | 6 months ago |
| comfyanonymous | 46daf0a9a7 | Add debug options to force on and off attention upcasting. | 6 months ago |
| comfyanonymous | ec6f16adb6 | Fix SAG. | 6 months ago |
| comfyanonymous | bb4940d837 | Only enable attention upcasting on models that actually need it. | 6 months ago |
| comfyanonymous | b0ab31d06c | Refactor attention upcasting code part 1. | 6 months ago |
| comfyanonymous | 2aed53c4ac | Workaround xformers bug. | 7 months ago |
| comfyanonymous | 2a813c3b09 | Switch some more prints to logging. | 8 months ago |
| comfyanonymous | cb7c3a2921 | Allow image_only_indicator to be None. | 9 months ago |
| comfyanonymous | b3e97fc714 | Koala 700M and 1B support. | 9 months ago |
| comfyanonymous | 6bcf57ff10 | Fix attention masks properly for multiple batches. | 9 months ago |
| comfyanonymous | f8706546f3 | Fix attention mask batch size in some attention functions. | 9 months ago |
| comfyanonymous | 3b9969c1c5 | Properly fix attention masks in CLIP with batches. | 9 months ago |
| comfyanonymous | c661a8b118 | Don't use numpy for calculating sigmas. | 9 months ago |
| comfyanonymous | 89507f8adf | Remove some unused imports. | 10 months ago |
| comfyanonymous | 2395ae740a | Make unclip more deterministic. | 10 months ago |
| comfyanonymous | 6a7bc35db8 | Use basic attention implementation for small inputs on old pytorch. | 10 months ago |
| comfyanonymous | c6951548cf | Update optimized_attention_for_device function for new functions that… | 11 months ago |
| comfyanonymous | aaa9017302 | Add attention mask support to sub quad attention. | 11 months ago |
| comfyanonymous | 0c2c9fbdfa | Support attention mask in split attention. | 11 months ago |
| comfyanonymous | 3ad0191bfb | Implement attention mask on xformers. | 11 months ago |
| comfyanonymous | 8c6493578b | Implement noise augmentation for SD 4X upscale model. | 11 months ago |
| comfyanonymous | 79f73a4b33 | Remove useless code. | 11 months ago |
| comfyanonymous | 61b3f15f8f | Fix lowvram mode not working with unCLIP and Revision code. | 11 months ago |
| comfyanonymous | d0165d819a | Fix SVD lowvram mode. | 11 months ago |
| comfyanonymous | 261bcbb0d9 | A few missing comfy ops in the VAE. | 11 months ago |
| comfyanonymous | a5056cfb1f | Remove useless code. | 11 months ago |
| comfyanonymous | 77755ab8db | Refactor comfy.ops | 11 months ago |
| comfyanonymous | fbdb14d4c4 | Cleaner CLIP text encoder implementation. | 12 months ago |
| comfyanonymous | 1bbd65ab30 | Missed this one. | 12 months ago |
| comfyanonymous | 31b0f6f3d8 | UNET weights can now be stored in fp8. | 12 months ago |
| comfyanonymous | af365e4dd1 | All the unet ops with weights are now handled by comfy.ops | 12 months ago |
| comfyanonymous | 39e75862b2 | Fix regression from last commit. | 12 months ago |
| comfyanonymous | 50dc39d6ec | Clean up the extra_options dict for the transformer patches. | 12 months ago |
| comfyanonymous | 3e5ea74ad3 | Make buggy xformers fall back on pytorch attention. | 12 months ago |
| comfyanonymous | 871cc20e13 | Support SVD img2vid model. | 12 months ago |
| comfyanonymous | 72741105a6 | Remove useless code. | 1 year ago |
| comfyanonymous | 7e3fe3ad28 | Make deep shrink behave like it should. | 1 year ago |
| comfyanonymous | 7ea6bb038c | Print warning when controlnet can't be applied instead of crashing. | 1 year ago |
| comfyanonymous | 94cc718e9c | Add a way to add patches to the input block. | 1 year ago |
| comfyanonymous | 794dd2064d | Fix typo. | 1 year ago |
| comfyanonymous | a527d0c795 | Code refactor. | 1 year ago |
| comfyanonymous | 2a23ba0b8c | Fix unet ops not entirely on GPU. | 1 year ago |
| comfyanonymous | c837a173fa | Fix some memory issues in sub quad attention. | 1 year ago |
| comfyanonymous | 125b03eead | Fix some OOM issues with split attention. | 1 year ago |
| comfyanonymous | 6ec3f12c6e | Support SSD1B model and make it easier to support asymmetric unets. | 1 year ago |
| comfyanonymous | a373367b0c | Fix some OOM issues with split and sub quad attention. | 1 year ago |
| comfyanonymous | 8b65f5de54 | attention_basic now works with hypertile. | 1 year ago |
| comfyanonymous | e6bc42df46 | Make sub_quad and split work with hypertile. | 1 year ago |