143 Commits (f117566299edd621ea8b00b70cf81d5d96c20917)

Author SHA1 Message Date
comfyanonymous 5d8898c056 Fix some performance issues with weight loading and unloading. 8 months ago
comfyanonymous c6de09b02e Optimize memory unload strategy for better performance. 8 months ago
comfyanonymous 4b9005e949 Fix regression with model merging. 8 months ago
comfyanonymous c18a203a8a Don't unload model weights for non weight patches. 8 months ago
comfyanonymous db8b59ecff Lower memory usage for loras in lowvram mode at the cost of perf. 8 months ago
comfyanonymous 0ed72befe1 Change log levels. 8 months ago
comfyanonymous 65397ce601 Replace prints with logging and add --verbose argument. 8 months ago
comfyanonymous dce3555339 Add some tesla pascal GPUs to the fp16 working but slower list. 9 months ago
comfyanonymous 88f300401c Enable fp16 by default on mps. 9 months ago
comfyanonymous 929e266f3e Manual cast for bf16 on older GPUs. 9 months ago
comfyanonymous 0b3c50480c Make --force-fp32 disable loading models in bf16. 9 months ago
comfyanonymous f83109f09b Stable Cascade Stage C. 9 months ago
comfyanonymous aeaeca10bd Small refactor of is_device_* functions. 9 months ago
comfyanonymous 66e28ef45c Don't use is_bf16_supported to check for fp16 support. 10 months ago
comfyanonymous 24129d78e6 Speed up SDXL on 16xx series with fp16 weights and manual cast. 10 months ago
comfyanonymous 4b0239066d Always use fp16 for the text encoders. 10 months ago
comfyanonymous f9e55d8463 Only auto enable bf16 VAE on nvidia GPUs that actually support it. 10 months ago
comfyanonymous 1b103e0cb2 Add argument to run the VAE on the CPU. 11 months ago
comfyanonymous e1e322cf69 Load weights that can't be lowvramed to target device. 11 months ago
comfyanonymous a252963f95 --disable-smart-memory now unloads everything like it did originally. 11 months ago
comfyanonymous 36a7953142 Greatly improve lowvram sampling speed by getting rid of accelerate. 11 months ago
comfyanonymous 2f9d6a97ec Add --deterministic option to make pytorch use deterministic algorithms. 11 months ago
comfyanonymous b0aab1e4ea Add an option --fp16-unet to force using fp16 for the unet. 11 months ago
comfyanonymous ba07cb748e Use faster manual cast for fp8 in unet. 11 months ago
comfyanonymous 57926635e8 Switch text encoder to manual cast. 11 months ago
comfyanonymous 340177e6e8 Disable non blocking on mps. 11 months ago
comfyanonymous 9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of CPU. 12 months ago
comfyanonymous 2db86b4676 Slightly faster lora applying. 12 months ago
comfyanonymous ca82ade765 Use .itemsize to get dtype size for fp8. 12 months ago
comfyanonymous 31b0f6f3d8 UNET weights can now be stored in fp8. 12 months ago
comfyanonymous 0cf4e86939 Add some command line arguments to store text encoder weights in fp8. 1 year ago
comfyanonymous 7339479b10 Disable xformers when it can't load properly. 1 year ago
comfyanonymous dd4ba68b6e Allow different models to estimate memory usage differently. 1 year ago
comfyanonymous 8594c8be4d Empty the cache when torch cache is more than 25% free mem. 1 year ago
comfyanonymous c8013f73e5 Add some Quadro cards to the list of cards with broken fp16. 1 year ago
comfyanonymous fd4c5f07e7 Add a --bf16-unet to test running the unet in bf16. 1 year ago
comfyanonymous 9a55dadb4c Refactor code so model can be a dtype other than fp32 or fp16. 1 year ago
comfyanonymous 88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 1 year ago
comfyanonymous 20d3852aa1 Pull some small changes from the other repo. 1 year ago
Simon Lui eec449ca8e Allow Intel GPUs to cast LoRA weights on the GPU, since they support BF16 natively. 1 year ago
comfyanonymous 1cdfb3dba4 Only do the cast on the device if the device supports it. 1 year ago
comfyanonymous 321c5fa295 Enable pytorch attention by default on xpu. 1 year ago
comfyanonymous 0966d3ce82 Don't run text encoders on xpu because there are issues. 1 year ago
comfyanonymous 1938f5c5fe Add a force argument to soft_empty_cache to force a cache empty. 1 year ago
Simon Lui 4a0c4ce4ef Some fixes to generalize CUDA specific functionality to Intel or other GPUs. 1 year ago
comfyanonymous b8c7c770d3 Enable bf16-vae by default on ampere and up. 1 year ago
comfyanonymous a57b0c797b Fix lowvram model merging. 1 year ago
comfyanonymous f72780a7e3 The new smart memory management makes this unnecessary. 1 year ago
comfyanonymous 30eb92c3cb Code cleanups. 1 year ago
comfyanonymous 51dde87e97 Try to free enough vram for control lora inference. 1 year ago
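Several commits above concern weight dtypes and memory estimation (storing UNet or text-encoder weights in fp8, using `.itemsize` to get a dtype's size, and letting different models estimate memory usage differently). As a rough illustration only — this is a hypothetical sketch, not ComfyUI's actual code, and the dtype table and parameter count below are assumptions — the footprint of a model's weights can be estimated from its parameter count and the per-element size of the storage dtype:

```python
# Hypothetical sketch of dtype-based memory estimation, in the spirit of the
# commits above that use a dtype's itemsize and per-model memory estimates.
# The byte sizes and parameter count here are illustrative assumptions.

DTYPE_ITEMSIZE = {  # bytes per element for common weight dtypes
    "fp32": 4,
    "fp16": 2,
    "bf16": 2,
    "fp8": 1,
}

def estimate_weight_memory(num_params: int, dtype: str) -> int:
    """Return an estimated weight footprint in bytes for num_params weights."""
    return num_params * DTYPE_ITEMSIZE[dtype]

# Example: a hypothetical 2.6B-parameter UNet stored in fp16 vs fp8 —
# halving the itemsize halves the estimated weight memory.
fp16_bytes = estimate_weight_memory(2_600_000_000, "fp16")  # 5.2 GB
fp8_bytes = estimate_weight_memory(2_600_000_000, "fp8")    # 2.6 GB
```

This is why the fp8 storage commits trade precision for memory: the weight footprint scales linearly with the dtype's itemsize, so dropping from fp16 to fp8 halves it.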