752 Commits (062483823738ed610d8d074ba63910c90e9d45b7)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| comfyanonymous | 53f326a3d8 | Support diffusers mini controlnets. | 1 year ago |
| comfyanonymous | 58f0c616ed | Fix clip vision issue with old transformers versions. | 1 year ago |
| comfyanonymous | ae270f79bc | Fix potential issue with batch size and clip vision. | 1 year ago |
| comfyanonymous | a2ce9655ca | Refactor unclip code. | 1 year ago |
| comfyanonymous | 9cc12c833d | CLIPVisionEncode can now encode multiple images. | 1 year ago |
| comfyanonymous | 0cb6dac943 | Remove 3m from PR #1213 because of some small issues. | 1 year ago |
| comfyanonymous | e244b2df83 | Add sgm_uniform scheduler that acts like the default one in sgm. | 1 year ago |
| comfyanonymous | 58c7da3665 | Gpu variant of dpmpp_3m_sde. Note: use 3m with exponential or karras. | 1 year ago |
| FizzleDorf | 3cfad03a68 | dpmpp 3m + dpmpp 3m sde added | 1 year ago |
| comfyanonymous | 585a062910 | Print unet config when model isn't detected. | 1 year ago |
| comfyanonymous | c8a23ce9e8 | Support for yet another lora type based on diffusers. | 1 year ago |
| comfyanonymous | 2bc12d3d22 | Add --temp-directory argument to set temp directory. | 1 year ago |
| comfyanonymous | c20583286f | Support diffuser text encoder loras. | 1 year ago |
| comfyanonymous | cf10c5592c | Disable calculating uncond when CFG is 1.0 | 1 year ago |
| comfyanonymous | 1f0f4cc0bd | Add argument to disable auto launching the browser. | 1 year ago |
| comfyanonymous | d8e58f0a7e | Detect hint_channels from controlnet. | 1 year ago |
| comfyanonymous | c5d7593ccf | Support loras in diffusers format. | 1 year ago |
| comfyanonymous | 1ce0d8ad68 | Add CMP 30HX card to the nvidia_16_series list. | 1 year ago |
| comfyanonymous | c99d8002f8 | Make sure the pooled output stays at the EOS token with added embeddings. | 1 year ago |
| comfyanonymous | 4a77fcd6ab | Only shift text encoder to vram when CPU cores are under 8. | 1 year ago |
| comfyanonymous | 3cd31d0e24 | Lower CPU thread check for running the text encoder on the CPU vs GPU. | 1 year ago |
| comfyanonymous | 2b13939044 | Remove some useless code. | 1 year ago |
| comfyanonymous | 95d796fc85 | Faster VAE loading. | 1 year ago |
| comfyanonymous | 4b957a0010 | Initialize the unet directly on the target device. | 1 year ago |
| comfyanonymous | c910b4a01c | Remove unused code and torchdiffeq dependency. | 1 year ago |
| comfyanonymous | 1141029a4a | Add --disable-metadata argument to disable saving metadata in files. | 1 year ago |
| comfyanonymous | 68be24eead | Remove some prints. | 1 year ago |
| asagi4 | 1ea4d84691 | Fix timestep ranges when batch_size > 1 | 1 year ago |
| comfyanonymous | 5379051d16 | Fix diffusers VAE loading. | 1 year ago |
| comfyanonymous | 727588d076 | Fix some new loras. | 1 year ago |
| comfyanonymous | 4f9b6f39d1 | Fix potential issue with Save Checkpoint. | 1 year ago |
| comfyanonymous | 5f75d784a1 | Start is now 0.0 and end is now 1.0 for the timestep ranges. | 1 year ago |
| comfyanonymous | 7ff14b62f8 | ControlNetApplyAdvanced can now define when controlnet gets applied. | 1 year ago |
| comfyanonymous | d191c4f9ed | Add a ControlNetApplyAdvanced node. | 1 year ago |
| comfyanonymous | 0240946ecf | Add a way to set which range of timesteps the cond gets applied to. | 1 year ago |
| comfyanonymous | 22f29d66ca | Try to fix memory issue with lora. | 1 year ago |
| comfyanonymous | 67be7eb81d | Nodes can now patch the unet function. | 1 year ago |
| comfyanonymous | 12a6e93171 | Del the right object when applying lora. | 1 year ago |
| comfyanonymous | 78e7958d17 | Support controlnet in diffusers format. | 1 year ago |
| comfyanonymous | 09386a3697 | Fix issue with lora in some cases when combined with model merging. | 1 year ago |
| comfyanonymous | 58b2364f58 | Properly support SDXL diffusers unet with UNETLoader node. | 1 year ago |
| comfyanonymous | 0115018695 | Print errors and continue when lora weights are not compatible. | 1 year ago |
| comfyanonymous | 0b284f650b | Fix typo. | 1 year ago |
| comfyanonymous | e032ca6138 | Fix ddim issue with older torch versions. | 1 year ago |
| comfyanonymous | 18885f803a | Add MX450 and MX550 to list of cards with broken fp16. | 1 year ago |
| comfyanonymous | 9ba440995a | It's actually possible to torch.compile the unet now. | 1 year ago |
| comfyanonymous | 51d5477579 | Add key to indicate checkpoint is v_prediction when saving. | 1 year ago |
| comfyanonymous | ff6b047a74 | Fix device print on old torch version. | 1 year ago |
| comfyanonymous | 9871a15cf9 | Enable --cuda-malloc by default on torch 2.0 and up. | 1 year ago |
| comfyanonymous | 55d0fca9fa | --windows-standalone-build now enables --cuda-malloc | 1 year ago |