169 Commits (70d2ea0faa28e1727f7535466ac5378e786b32cb)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| comfyanonymous | 3ded1a3a04 | Refactor of sampler code to deal more easily with different model types. | 1 year ago |
| comfyanonymous | 5f57362613 | Lower lora ram usage when in normal vram mode. | 1 year ago |
| comfyanonymous | 490771b7f4 | Speed up lora loading a bit. | 1 year ago |
| comfyanonymous | 50b1180dde | Fix CLIPSetLastLayer not reverting when removed. | 1 year ago |
| comfyanonymous | 6fb084f39d | Reduce floating point rounding errors in loras. | 1 year ago |
| comfyanonymous | 91ed2815d5 | Add a node to merge CLIP models. | 1 year ago |
| comfyanonymous | 6ad0a6d7e2 | Don't patch weights when multiplier is zero. | 1 year ago |
| comfyanonymous | a9a4ba7574 | Fix merging not working when model2 of model merge node was a merge. | 1 year ago |
| comfyanonymous | e7bee85df8 | Add arguments to run the VAE in fp16 or bf16 for testing. | 1 year ago |
| comfyanonymous | ddc6f12ad5 | Disable autocast in unet for increased speed. | 1 year ago |
| comfyanonymous | af7a49916b | Support loading unet files in diffusers format. | 1 year ago |
| comfyanonymous | acf95191ff | Properly support SDXL diffusers loras for unet. | 1 year ago |
| comfyanonymous | c3e96e637d | Pass device to CLIP model. | 1 year ago |
| comfyanonymous | 2c4e0b49b7 | Switch to fp16 on some cards when the model is too big. | 1 year ago |
| comfyanonymous | 1c1b0e7299 | --gpu-only now keeps the VAE on the device. | 1 year ago |
| comfyanonymous | 3b6fe51c1d | Leave text_encoder on the CPU when it can handle it. | 1 year ago |
| comfyanonymous | b6a60fa696 | Try to keep text encoders loaded and patched to increase speed. | 1 year ago |
| comfyanonymous | 97ee230682 | Make highvram and normalvram shift the text encoders to vram and back. | 1 year ago |
| comfyanonymous | 5a9ddf94eb | LoraLoader node now caches the lora file between executions. | 1 year ago |
| comfyanonymous | 62db11683b | Move unet to device right after loading on highvram mode. | 1 year ago |
| comfyanonymous | 2c7c14de56 | Support for SDXL text encoder lora. | 1 year ago |
| comfyanonymous | 9b93b920be | Add CheckpointSave node to save checkpoints. | 1 year ago |
| comfyanonymous | b72a7a835a | Support loras based on the stability unet implementation. | 1 year ago |
| comfyanonymous | 20f579d91d | Add DualClipLoader to load clip models for SDXL. | 1 year ago |
| comfyanonymous | b7933960bb | Fix CLIPLoader node. | 1 year ago |
| comfyanonymous | 05676942b7 | Add some more transformer hooks and move tomesd to comfy_extras. | 1 year ago |
| comfyanonymous | 8607c2d42d | Move latent scale factor from VAE to model. | 1 year ago |
| comfyanonymous | 30a3861946 | Fix bug when yaml config has no clip params. | 1 year ago |
| comfyanonymous | 9e37f4c7d5 | Fix error with ClipVision loader node. | 1 year ago |
| comfyanonymous | 9f83b098c9 | Don't merge weights when shapes don't match and print a warning. | 1 year ago |
| comfyanonymous | f87ec10a97 | Support base SDXL and SDXL refiner models. | 1 year ago |
| comfyanonymous | 51581dbfa9 | Fix last commits causing an issue with the text encoder lora. | 1 year ago |
| comfyanonymous | 8125b51a62 | Keep a set of model_keys for faster add_patches. | 1 year ago |
| comfyanonymous | 45beebd33c | Add a type of model patch useful for model merging. | 1 year ago |
| comfyanonymous | 8883cb0f67 | Add a way to set patches that modify the attn2 output. | 1 year ago |
| comfyanonymous | fb4bf7f591 | This is not needed anymore and causes issues with alphas_cumprod. | 1 year ago |
| comfyanonymous | f7edcfd927 | Add a --gpu-only argument to keep and run everything on the GPU. | 1 year ago |
| comfyanonymous | 6b774589a5 | Set model to fp16 before loading the state dict to lower ram bump. | 1 year ago |
| comfyanonymous | 388567f20b | sampler_cfg_function now uses a dict for the argument. | 1 year ago |
| comfyanonymous | ff9b22d79e | Turn on safe load for a few models. | 1 year ago |
| comfyanonymous | f0a2b81cd0 | Cleanup: Remove a bunch of useless files. | 1 year ago |
| comfyanonymous | f8c5931053 | Split the batch in VAEEncode if there's not enough memory. | 1 year ago |
| comfyanonymous | c069fc0730 | Auto switch to tiled VAE encode if regular one runs out of memory. | 1 year ago |
| comfyanonymous | de142eaad5 | Simpler base model code. | 1 year ago |
| comfyanonymous | 0e425603fb | Small refactor. | 1 year ago |
| comfyanonymous | 700491d81a | Implement global average pooling for controlnet. | 1 year ago |
| comfyanonymous | 03da8a3426 | This is useless for inference. | 1 year ago |
| comfyanonymous | eb448dd8e1 | Auto load model in lowvram if not enough memory. | 1 year ago |
| comfyanonymous | a532888846 | Support VAEs in diffusers format. | 2 years ago |
| BlenderNeko | 19c014f429 | comment out annoying print statement | 2 years ago |