670 Commits (4871a36458e7cd4af1a7f46dd6738c406e831413)

Author  SHA1  Message  Date
comfyanonymous  103c487a89  Cleanup.  1 year ago
comfyanonymous  2c4e0b49b7  Switch to fp16 on some cards when the model is too big.  1 year ago
comfyanonymous  6f3d9f52db  Add a --force-fp16 argument to force fp16 for testing.  1 year ago
comfyanonymous  1c1b0e7299  --gpu-only now keeps the VAE on the device.  1 year ago
comfyanonymous  ce35d8c659  Lower latency by batching some text encoder inputs.  1 year ago
comfyanonymous  3b6fe51c1d  Leave text_encoder on the CPU when it can handle it.  1 year ago
comfyanonymous  b6a60fa696  Try to keep text encoders loaded and patched to increase speed.  1 year ago
comfyanonymous  97ee230682  Make highvram and normalvram shift the text encoders to vram and back.  1 year ago
comfyanonymous  5a9ddf94eb  LoraLoader node now caches the lora file between executions.  1 year ago
comfyanonymous  9920367d3c  Fix embeddings not working with --gpu-only  1 year ago
comfyanonymous  62db11683b  Move unet to device right after loading on highvram mode.  1 year ago
comfyanonymous  4376b125eb  Remove useless code.  1 year ago
comfyanonymous  89120f1fbe  This is unused but it should be 1280.  1 year ago
comfyanonymous  2c7c14de56  Support for SDXL text encoder lora.  1 year ago
comfyanonymous  fcef47f06e  Fix bug.  1 year ago
comfyanonymous  8248babd44  Use pytorch attention by default on nvidia when xformers isn't present.  1 year ago
comfyanonymous  9b93b920be  Add CheckpointSave node to save checkpoints.  1 year ago
comfyanonymous  b72a7a835a  Support loras based on the stability unet implementation.  1 year ago
comfyanonymous  c71a7e6b20  Fix ddim + inpainting not working.  1 year ago
comfyanonymous  4eab00e14b  Set the seed in the SDE samplers to make them more reproducible.  1 year ago
comfyanonymous  cef6aa62b2  Add support for TAESD decoder for SDXL.  1 year ago
comfyanonymous  20f579d91d  Add DualClipLoader to load clip models for SDXL.  1 year ago
comfyanonymous  b7933960bb  Fix CLIPLoader node.  1 year ago
comfyanonymous  78d8035f73  Fix bug with controlnet.  1 year ago
comfyanonymous  05676942b7  Add some more transformer hooks and move tomesd to comfy_extras.  1 year ago
comfyanonymous  fa28d7334b  Remove useless code.  1 year ago
comfyanonymous  8607c2d42d  Move latent scale factor from VAE to model.  1 year ago
comfyanonymous  30a3861946  Fix bug when yaml config has no clip params.  1 year ago
comfyanonymous  9e37f4c7d5  Fix error with ClipVision loader node.  1 year ago
comfyanonymous  9f83b098c9  Don't merge weights when shapes don't match and print a warning.  1 year ago
comfyanonymous  f87ec10a97  Support base SDXL and SDXL refiner models.  1 year ago
comfyanonymous  9fccf4aa03  Add original_shape parameter to transformer patch extra_options.  1 year ago
comfyanonymous  51581dbfa9  Fix last commits causing an issue with the text encoder lora.  1 year ago
comfyanonymous  8125b51a62  Keep a set of model_keys for faster add_patches.  1 year ago
comfyanonymous  45beebd33c  Add a type of model patch useful for model merging.  1 year ago
comfyanonymous  036a22077c  Fix k_diffusion math being off by a tiny bit during txt2img.  1 year ago
comfyanonymous  8883cb0f67  Add a way to set patches that modify the attn2 output.  1 year ago
comfyanonymous  cd930d4e7f  pop clip vision keys after loading them.  1 year ago
comfyanonymous  c9e4a8c9e5  Not needed anymore.  1 year ago
comfyanonymous  fb4bf7f591  This is not needed anymore and causes issues with alphas_cumprod.  1 year ago
comfyanonymous  45be2e92c1  Fix DDIM v-prediction.  1 year ago
comfyanonymous  e6e50ab2dd  Fix an issue when alphas_comprod are half floats.  1 year ago
comfyanonymous  ae43f09ef7  All the unet weights should now be initialized with the right dtype.  1 year ago
comfyanonymous  f7edcfd927  Add a --gpu-only argument to keep and run everything on the GPU.  1 year ago
comfyanonymous  7bf89ba923  Initialize more unet weights as the right dtype.  1 year ago
comfyanonymous  e21d9ad445  Initialize transformer unet block weights in right dtype at the start.  1 year ago
comfyanonymous  bb1f45d6e8  Properly disable weight initialization in clip models.  1 year ago
comfyanonymous  21f04fe632  Disable default weight values in unet conv2d for faster loading.  1 year ago
comfyanonymous  9d54066ebc  This isn't needed for inference.  1 year ago
comfyanonymous  fa2cca056c  Don't initialize CLIPVision weights to default values.  1 year ago