143 Commits (f117566299edd621ea8b00b70cf81d5d96c20917)

Author SHA1 Message Date
comfyanonymous cc44ade79e Always shift text encoder to GPU when the device supports fp16. 1 year ago
comfyanonymous a6ef08a46a Even with forced fp16, the CPU device should never use it. 1 year ago
comfyanonymous f081017c1a Save memory by storing text encoder weights in fp16 in most situations. 1 year ago
comfyanonymous 0d7b0a4dc7 Small cleanups. 1 year ago
Simon Lui 9225465975 Further tuning and a fix for mem_free_total. 1 year ago
Simon Lui 2c096e4260 Add ipex optimize and other enhancements for Intel GPUs based on recent memory changes. 1 year ago
comfyanonymous e9469e732d --disable-smart-memory now disables loading model directly to vram. 1 year ago
comfyanonymous 3aee33b54e Add --disable-smart-memory for those that want the old behaviour. 1 year ago
comfyanonymous 2be2742711 Fix issue with regular torch version. 1 year ago
comfyanonymous 89a0767abf Smarter memory management. 1 year ago
comfyanonymous 1ce0d8ad68 Add CMP 30HX card to the nvidia_16_series list. 1 year ago
comfyanonymous 4a77fcd6ab Only shift text encoder to vram when CPU cores are under 8. 1 year ago
comfyanonymous 3cd31d0e24 Lower CPU thread check for running the text encoder on the CPU vs GPU. 1 year ago
comfyanonymous 22f29d66ca Try to fix memory issue with lora. 1 year ago
comfyanonymous 18885f803a Add MX450 and MX550 to list of cards with broken fp16. 1 year ago
comfyanonymous ff6b047a74 Fix device print on old torch version. 1 year ago
comfyanonymous 1679abd86d Add a command line argument to enable backend:cudaMallocAsync 1 year ago
comfyanonymous 5f57362613 Lower lora ram usage when in normal vram mode. 1 year ago
comfyanonymous 490771b7f4 Speed up lora loading a bit. 1 year ago
KarryCharon 3e2309f149 Fix missing mps import. 1 year ago
comfyanonymous 0ae81c03bb Empty cache after model unloading for normal vram and lower. 1 year ago
comfyanonymous e7bee85df8 Add arguments to run the VAE in fp16 or bf16 for testing. 1 year ago
comfyanonymous ddc6f12ad5 Disable autocast in unet for increased speed. 1 year ago
comfyanonymous 8d694cc450 Fix issue with OSX. 1 year ago
comfyanonymous dc9d1f31c8 Improvements for OSX. 1 year ago
comfyanonymous 2c4e0b49b7 Switch to fp16 on some cards when the model is too big. 1 year ago
comfyanonymous 6f3d9f52db Add a --force-fp16 argument to force fp16 for testing. 1 year ago
comfyanonymous 1c1b0e7299 --gpu-only now keeps the VAE on the device. 1 year ago
comfyanonymous 3b6fe51c1d Leave text_encoder on the CPU when it can handle it. 1 year ago
comfyanonymous b6a60fa696 Try to keep text encoders loaded and patched to increase speed. 1 year ago
comfyanonymous 97ee230682 Make highvram and normalvram shift the text encoders to vram and back. 1 year ago
comfyanonymous 62db11683b Move unet to device right after loading on highvram mode. 1 year ago
comfyanonymous 8248babd44 Use pytorch attention by default on nvidia when xformers isn't present. 1 year ago
comfyanonymous f7edcfd927 Add a --gpu-only argument to keep and run everything on the GPU. 1 year ago
comfyanonymous fed0a4dd29 Some comments to say what the vram state options mean. 1 year ago
comfyanonymous 0a5fefd621 Cleanups and fixes for model_management.py 1 year ago
comfyanonymous 67892b5ac5 Refactor and improve model_management code related to free memory. 1 year ago
space-nuko 499641ebf1 More accurate total 1 year ago
space-nuko b5dd15c67a System stats endpoint 1 year ago
comfyanonymous 5c38958e49 Tweak lowvram model memory so it's closer to what it was before. 1 year ago
comfyanonymous 94680732d3 Empty cache on mps. 1 year ago
comfyanonymous eb448dd8e1 Auto load model in lowvram if not enough memory. 1 year ago
comfyanonymous 3a1f47764d Print the torch device that is used on startup. 2 years ago
comfyanonymous 6fc4917634 Make maximum_batch_area take into account pytorch 2.0 attention function. 2 years ago
comfyanonymous 678f933d38 maximum_batch_area for xformers. 2 years ago
comfyanonymous cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2 years ago
comfyanonymous 6ee11d7bc0 Fix import. 2 years ago
comfyanonymous bae4fb4a9d Fix imports. 2 years ago
comfyanonymous 056e5545ff Don't try to get vram from xpu or cuda when directml is enabled. 2 years ago
comfyanonymous 2ca934f7d4 You can now select the device index with: --directml id 2 years ago
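
Several of the commits above introduce command-line flags. A minimal usage sketch follows; the flag names are taken verbatim from the commit messages, but the main.py entry point and the exact behavior of each flag are assumptions that depend on the ComfyUI version in use:

```sh
# Force fp16 even where autodetection would not enable it (added for testing).
python main.py --force-fp16

# Keep and run everything on the GPU, including the VAE and text encoder.
python main.py --gpu-only

# Opt out of smart memory management (restores the old load/unload behaviour;
# also disables loading the model directly to vram).
python main.py --disable-smart-memory

# Select a DirectML device by index (here, device 0).
python main.py --directml 0
```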