794 Commits (58f8388020ba6ab5a913beb742a6312914d640b2)

Author SHA1 Message Date
comfyanonymous a094b45c93 Load clipvision model to GPU for faster performance. 1 year ago
comfyanonymous 1300a1bb4c Text encoder should initially load on the offload_device not the regular. 1 year ago
comfyanonymous f92074b84f Move ModelPatcher to model_patcher.py 1 year ago
comfyanonymous 4798cf5a62 Implement loras with norm keys. 1 year ago
comfyanonymous b8c7c770d3 Enable bf16-vae by default on ampere and up. 1 year ago
comfyanonymous 1c794a2161 Fallback to slice attention if xformers doesn't support the operation. 1 year ago
comfyanonymous d935ba50c4 Make --bf16-vae work on torch 2.0 1 year ago
comfyanonymous a57b0c797b Fix lowvram model merging. 1 year ago
comfyanonymous f72780a7e3 The new smart memory management makes this unnecessary. 1 year ago
comfyanonymous c77f02e1c6 Move controlnet code to comfy/controlnet.py 1 year ago
comfyanonymous 15a7716fa6 Move lora code to comfy/lora.py 1 year ago
comfyanonymous ec96f6d03a Move text_projection to base clip model. 1 year ago
comfyanonymous 30eb92c3cb Code cleanups. 1 year ago
comfyanonymous 51dde87e97 Try to free enough vram for control lora inference. 1 year ago
comfyanonymous e3d0a9a490 Fix potential issue with text projection matrix multiplication. 1 year ago
comfyanonymous cc44ade79e Always shift text encoder to GPU when the device supports fp16. 1 year ago
comfyanonymous a6ef08a46a Even with forced fp16 the cpu device should never use it. 1 year ago
comfyanonymous 00c0b2c507 Initialize text encoder to target dtype. 1 year ago
comfyanonymous f081017c1a Save memory by storing text encoder weights in fp16 in most situations. 1 year ago
comfyanonymous afcb9cb1df All resolutions now work with t2i adapter for SDXL. 1 year ago
comfyanonymous 85fde89d7f T2I adapter SDXL. 1 year ago
comfyanonymous cf5ae46928 Controlnet/t2iadapter cleanup. 1 year ago
comfyanonymous 763b0cf024 Fix control lora not working in fp32. 1 year ago
comfyanonymous 199d73364a Fix ControlLora on lowvram. 1 year ago
comfyanonymous d08e53de2e Remove autocast from controlnet code. 1 year ago
comfyanonymous 0d7b0a4dc7 Small cleanups. 1 year ago
Simon Lui 9225465975 Further tuning and fix mem_free_total. 1 year ago
Simon Lui 2c096e4260 Add ipex optimize and other enhancements for Intel GPUs based on recent memory changes. 1 year ago
comfyanonymous e9469e732d --disable-smart-memory now disables loading model directly to vram. 1 year ago
comfyanonymous c9b562aed1 Free more memory before VAE encode/decode. 1 year ago
comfyanonymous b80c3276dc Fix issue with gligen. 1 year ago
comfyanonymous d6e4b342e6 Support for Control Loras. 1 year ago
comfyanonymous 39ac856a33 ReVision support: unclip nodes can now be used with SDXL. 1 year ago
comfyanonymous 76d53c4622 Add support for clip g vision model to CLIPVisionLoader. 1 year ago
Alexopus e59fe0537a Fix referenced before assignment 1 year ago
comfyanonymous be9c5e25bc Fix issue with not freeing enough memory when sampling. 1 year ago
comfyanonymous ac0758a1a4 Fix bug with lowvram and controlnet advanced node. 1 year ago
comfyanonymous c28db1f315 Fix potential issues with patching models when saving checkpoints. 1 year ago
comfyanonymous 3aee33b54e Add --disable-smart-memory for those that want the old behaviour. 1 year ago
comfyanonymous 2be2742711 Fix issue with regular torch version. 1 year ago
comfyanonymous 89a0767abf Smarter memory management. 1 year ago
comfyanonymous 2c97c30256 Support small diffusers controlnet so both types are now supported. 1 year ago
comfyanonymous 53f326a3d8 Support diffusers mini controlnets. 1 year ago
comfyanonymous 58f0c616ed Fix clip vision issue with old transformers versions. 1 year ago
comfyanonymous ae270f79bc Fix potential issue with batch size and clip vision. 1 year ago
comfyanonymous a2ce9655ca Refactor unclip code. 1 year ago
comfyanonymous 9cc12c833d CLIPVisionEncode can now encode multiple images. 1 year ago
comfyanonymous 0cb6dac943 Remove 3m from PR #1213 because of some small issues. 1 year ago
comfyanonymous e244b2df83 Add sgm_uniform scheduler that acts like the default one in sgm. 1 year ago
comfyanonymous 58c7da3665 Gpu variant of dpmpp_3m_sde. Note: use 3m with exponential or karras. 1 year ago