794 Commits (58f8388020ba6ab5a913beb742a6312914d640b2)

Author SHA1 Message Date
comfyanonymous 61b3f15f8f Fix lowvram mode not working with unCLIP and Revision code. 11 months ago
comfyanonymous d0165d819a Fix SVD lowvram mode. 11 months ago
comfyanonymous a252963f95 --disable-smart-memory now unloads everything like it did originally. 11 months ago
comfyanonymous 36a7953142 Greatly improve lowvram sampling speed by getting rid of accelerate. 11 months ago
comfyanonymous 261bcbb0d9 A few missing comfy ops in the VAE. 11 months ago
comfyanonymous 9a7619b72d Fix regression with inpaint model. 11 months ago
comfyanonymous 571ea8cdcc Fix SAG not working with cfg 1.0 11 months ago
comfyanonymous 8cf1daa108 Fix SDXL area composition sometimes not using the right pooled output. 11 months ago
comfyanonymous 2258f85159 Support stable zero 123 model. 11 months ago
comfyanonymous 2f9d6a97ec Add --deterministic option to make pytorch use deterministic algorithms. 11 months ago
comfyanonymous e45d920ae3 Don't resize clip vision image when the size is already good. 11 months ago
comfyanonymous 13e6d5366e Switch clip vision to manual cast. 11 months ago
comfyanonymous 719fa0866f Set clip vision model in eval mode so it works without inference mode. 11 months ago
Hari 574363a8a6 Implement Perp-Neg 11 months ago
comfyanonymous a5056cfb1f Remove useless code. 11 months ago
comfyanonymous 329c571993 Improve code legibility. 11 months ago
comfyanonymous 6c5990f7db Fix cfg being calculated more than once if sampler_cfg_function. 11 months ago
comfyanonymous ba04a87d10 Refactor and improve the sag node. 11 months ago
Rafie Walker 6761233e9d Implement Self-Attention Guidance (#2201) 11 months ago
comfyanonymous b454a67bb9 Support segmind vega model. 11 months ago
comfyanonymous 824e4935f5 Add dtype parameter to VAE object. 11 months ago
comfyanonymous 32b7e7e769 Add manual cast to controlnet. 11 months ago
comfyanonymous 3152023fbc Use inference dtype for unet memory usage estimation. 11 months ago
comfyanonymous 77755ab8db Refactor comfy.ops 11 months ago
comfyanonymous b0aab1e4ea Add an option --fp16-unet to force using fp16 for the unet. 11 months ago
comfyanonymous ba07cb748e Use faster manual cast for fp8 in unet. 11 months ago
comfyanonymous 57926635e8 Switch text encoder to manual cast. 11 months ago
comfyanonymous 340177e6e8 Disable non blocking on mps. 11 months ago
comfyanonymous 614b7e731f Implement GLora. 11 months ago
comfyanonymous cb63e230b4 Make lora code a bit cleaner. 11 months ago
comfyanonymous 174eba8e95 Use own clip vision model implementation. 11 months ago
comfyanonymous 97015b6b38 Cleanup. 12 months ago
comfyanonymous a4ec54a40d Add linear_start and linear_end to model_config.sampling_settings 12 months ago
comfyanonymous 9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of cpu. 12 months ago
comfyanonymous efb704c758 Support attention masking in CLIP implementation. 12 months ago
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation. 12 months ago
comfyanonymous 2db86b4676 Slightly faster lora applying. 12 months ago
comfyanonymous 1bbd65ab30 Missed this one. 12 months ago
comfyanonymous 9b655d4fd7 Fix memory issue with control loras. 12 months ago
comfyanonymous 26b1c0a771 Fix control lora on fp8. 12 months ago
comfyanonymous be3468ddd5 Less useless downcasting. 12 months ago
comfyanonymous ca82ade765 Use .itemsize to get dtype size for fp8. 12 months ago
comfyanonymous 31b0f6f3d8 UNET weights can now be stored in fp8. 12 months ago
comfyanonymous af365e4dd1 All the unet ops with weights are now handled by comfy.ops 12 months ago
comfyanonymous 61a123a1e0 A different way of handling multiple images passed to SVD. 12 months ago
comfyanonymous c97be4db91 Support SD2.1 turbo checkpoint. 12 months ago
comfyanonymous 983ebc5792 Use smart model management for VAE to decrease latency. 12 months ago
comfyanonymous c45d1b9b67 Add a function to load a unet from a state dict. 12 months ago
comfyanonymous f30b992b18 .sigma and .timestep now return tensors on the same device as the input. 12 months ago
comfyanonymous 13fdee6abf Try to free memory for both cond+uncond before inference. 12 months ago