comfyanonymous | 1ce0d8ad68 | Add CMP 30HX card to the nvidia_16_series list. | 1 year ago
comfyanonymous | 4a77fcd6ab | Only shift text encoder to vram when CPU cores are under 8. | 1 year ago
comfyanonymous | 3cd31d0e24 | Lower CPU thread check for running the text encoder on the CPU vs GPU. | 1 year ago
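The two entries above describe a CPU-core heuristic for deciding where the text encoder runs. A minimal sketch of that kind of check, assuming a hypothetical text_encoder_device() helper; only the threshold of 8 cores comes from the commit message:

    import os

    MIN_CORES_FOR_CPU_TEXT_ENCODER = 8  # threshold quoted from the commit message

    def text_encoder_device(gpu_device, cpu_device="cpu"):
        # With few CPU cores the text encoder is slow on the CPU,
        # so shift it to the GPU/VRAM instead.
        cores = os.cpu_count() or 1
        return cpu_device if cores >= MIN_CORES_FOR_CPU_TEXT_ENCODER else gpu_device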
comfyanonymous | 22f29d66ca | Try to fix memory issue with lora. | 1 year ago
comfyanonymous | 18885f803a | Add MX450 and MX550 to list of cards with broken fp16. | 1 year ago
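Several entries in this log (the CMP 30HX, MX450, and MX550 commits) maintain a list of cards whose fp16 support is unreliable. A sketch of how such a name-based check might look; the helper name and the exact list contents are illustrative, not ComfyUI's actual nvidia_16_series code:

    import torch

    # Illustrative subset; the real list lives in model_management.py.
    BROKEN_FP16_MARKERS = ["1630", "1650", "1660", "MX450", "MX550", "CMP 30HX"]

    def fp16_looks_unreliable(device_index=0):
        if not torch.cuda.is_available():
            return False
        name = torch.cuda.get_device_name(device_index)
        return any(marker in name for marker in BROKEN_FP16_MARKERS)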
comfyanonymous | ff6b047a74 | Fix device print on old torch version. | 1 year ago
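Printing the active device can break on older torch versions when newer property fields are queried. A hedged sketch of a version-tolerant device print; describe_device() is a hypothetical helper, not the function changed in this commit:

    import torch

    def describe_device(dev):
        # Guard the parts that differ across torch versions.
        if dev.type == "cuda":
            name = torch.cuda.get_device_name(dev)
            try:
                total = torch.cuda.get_device_properties(dev).total_memory
                return f"{name} ({total // (1024 ** 2)} MB)"
            except Exception:
                return name
        return str(dev)

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print("Using device:", describe_device(device))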
comfyanonymous | 1679abd86d | Add a command line argument to enable backend:cudaMallocAsync | 1 year ago
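ComfyUI exposes this through a command line flag; underneath, PyTorch selects the allocator backend from the PYTORCH_CUDA_ALLOC_CONF environment variable, which must be set before CUDA is initialized. A stand-alone sketch of enabling the backend:

    import os

    # Must be set before torch touches CUDA for the backend choice to take effect.
    os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "backend:cudaMallocAsync")

    import torch  # noqa: E402

    if torch.cuda.is_available():
        print(torch.cuda.get_allocator_backend())  # expected: "cudaMallocAsync"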
comfyanonymous | 5f57362613 | Lower lora ram usage when in normal vram mode. | 1 year ago
comfyanonymous | 490771b7f4 | Speed up lora loading a bit. | 1 year ago
KarryCharon | 3e2309f149 | Fix missing mps import. | 1 year ago
comfyanonymous | 0ae81c03bb | Empty cache after model unloading for normal vram and lower. | 1 year ago
comfyanonymous | e7bee85df8 | Add arguments to run the VAE in fp16 or bf16 for testing. | 1 year ago
comfyanonymous | ddc6f12ad5 | Disable autocast in unet for increased speed. | 1 year ago
comfyanonymous | 8d694cc450 | Fix issue with OSX. | 1 year ago
comfyanonymous | dc9d1f31c8 | Improvements for OSX. | 1 year ago
comfyanonymous | 2c4e0b49b7 | Switch to fp16 on some cards when the model is too big. | 1 year ago
comfyanonymous | 6f3d9f52db | Add a --force-fp16 argument to force fp16 for testing. | 1 year ago
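--force-fp16 here and --force-fp32 further down in the log are plain command line switches. A minimal argparse sketch of that pair of flags; treating them as mutually exclusive is an assumption, not necessarily how ComfyUI defines them:

    import argparse

    parser = argparse.ArgumentParser()
    group = parser.add_mutually_exclusive_group()
    group.add_argument("--force-fp16", action="store_true",
                       help="Force fp16 precision (for testing).")
    group.add_argument("--force-fp32", action="store_true",
                       help="Force fp32 precision (for debugging).")

    args = parser.parse_args(["--force-fp16"])  # example invocation
    print(args.force_fp16, args.force_fp32)     # True False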
comfyanonymous | 1c1b0e7299 | --gpu-only now keeps the VAE on the device. | 1 year ago
comfyanonymous | 3b6fe51c1d | Leave text_encoder on the CPU when it can handle it. | 1 year ago
comfyanonymous | b6a60fa696 | Try to keep text encoders loaded and patched to increase speed. load_model_gpu() is now used with the text encoder models instead of just the unet. | 1 year ago
comfyanonymous | 97ee230682 | Make highvram and normalvram shift the text encoders to vram and back. This is faster on big text encoder models than running it on the CPU. | 1 year ago
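Shifting a text encoder to VRAM for the forward pass and back to system RAM afterwards can be sketched as below; encode_on_gpu() is an illustrative helper, not ComfyUI's load_model_gpu():

    import torch

    def encode_on_gpu(text_encoder, tokens, device="cuda"):
        # Move to VRAM for the forward pass, then back to system RAM.
        text_encoder.to(device)
        try:
            with torch.no_grad():
                return text_encoder(tokens.to(device))
        finally:
            text_encoder.to("cpu")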
comfyanonymous | 62db11683b | Move unet to device right after loading on highvram mode. | 1 year ago
comfyanonymous | 8248babd44 | Use pytorch attention by default on nvidia when xformers isn't present. Add a new argument --use-quad-cross-attention | 1 year ago
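Falling back to PyTorch attention when xformers is missing amounts to checking for the package and routing through torch's built-in kernel. A sketch under that assumption (the quad/split cross-attention path is not shown):

    import importlib.util
    import torch
    import torch.nn.functional as F

    XFORMERS_PRESENT = importlib.util.find_spec("xformers") is not None

    def pytorch_attention(q, k, v):
        # PyTorch 2.x dispatches to a flash / memory-efficient / math kernel internally.
        return F.scaled_dot_product_attention(q, k, v)

    if not XFORMERS_PRESENT:
        out = pytorch_attention(torch.randn(1, 8, 77, 64),
                                torch.randn(1, 8, 77, 64),
                                torch.randn(1, 8, 77, 64))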
comfyanonymous | f7edcfd927 | Add a --gpu-only argument to keep and run everything on the GPU. Make the CLIP model work on the GPU. | 1 year ago
comfyanonymous | fed0a4dd29 | Some comments to say what the vram state options mean. | 1 year ago
comfyanonymous | 0a5fefd621 | Cleanups and fixes for model_management.py. Hopefully fix regression on MPS and CPU. | 1 year ago
comfyanonymous | 67892b5ac5 | Refactor and improve model_management code related to free memory. | 1 year ago
space-nuko | 499641ebf1 | More accurate total | 1 year ago
space-nuko | b5dd15c67a | System stats endpoint | 1 year ago
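The system stats endpoint reports totals such as VRAM; on CUDA the free/total numbers can be queried directly. A sketch of that query, with vram_stats() chosen here for illustration (the real endpoint's response shape is not reproduced):

    import torch

    def vram_stats(device_index=0):
        # mem_get_info returns (free_bytes, total_bytes) for the CUDA device.
        if not torch.cuda.is_available():
            return {"vram_total": 0, "vram_free": 0}
        free, total = torch.cuda.mem_get_info(device_index)
        return {"vram_total": total, "vram_free": free}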
comfyanonymous | 5c38958e49 | Tweak lowvram model memory so it's closer to what it was before. | 1 year ago
comfyanonymous | 94680732d3 | Empty cache on mps. | 1 year ago
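Emptying the cache after unloading models (see also the earlier 0ae81c03bb entry) differs per backend. A hedged sketch of a backend-aware helper; the hasattr guards exist because torch.mps.empty_cache() is only present on newer torch builds:

    import torch

    def soft_empty_cache():
        # Release cached allocator blocks after models are unloaded.
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
        elif hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
            if hasattr(torch, "mps"):
                torch.mps.empty_cache()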
comfyanonymous | eb448dd8e1 | Auto load model in lowvram if not enough memory. | 1 year ago
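Deciding automatically to fall back to lowvram can be approximated by comparing free VRAM against the model's weight size plus a margin. A rough sketch; the 1 GiB margin is an arbitrary illustrative value, not ComfyUI's threshold:

    import torch

    def should_use_lowvram(model, margin_bytes=1 << 30):
        if not torch.cuda.is_available():
            return False
        free, _total = torch.cuda.mem_get_info()
        model_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
        return model_bytes + margin_bytes > free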
comfyanonymous | 3a1f47764d | Print the torch device that is used on startup. | 2 years ago
comfyanonymous | 6fc4917634 | Make maximum_batch_area take into account the pytorch 2.0 attention function. More conservative xformers maximum_batch_area. | 2 years ago
comfyanonymous | 678f933d38 | maximum_batch_area for xformers. Remove useless code. | 2 years ago
comfyanonymous | cb1551b819 | Lowvram mode for gligen and fix some lowvram issues. | 2 years ago
comfyanonymous | 6ee11d7bc0 | Fix import. | 2 years ago
comfyanonymous | bae4fb4a9d | Fix imports. | 2 years ago
comfyanonymous | 056e5545ff | Don't try to get vram from xpu or cuda when directml is enabled. | 2 years ago
comfyanonymous | 2ca934f7d4 | You can now select the device index with: --directml id. Like this for example: --directml 1 | 2 years ago
comfyanonymous | 3baded9892 | Basic torch_directml support. Use --directml to use it. | 2 years ago
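The --directml flag wraps the torch_directml package; picking an adapter index corresponds to passing an index to torch_directml.device(). A sketch under that assumption (get_directml_device() is an illustrative helper):

    import torch
    import torch_directml

    def get_directml_device(device_index=None):
        # --directml with no id uses the default adapter; --directml 1 selects adapter 1.
        if device_index is None:
            return torch_directml.device()
        return torch_directml.device(device_index)

    dev = get_directml_device(1)
    x = torch.ones(4, device=dev)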
comfyanonymous | 5282f56434 | Implement Linear hypernetworks. Add a HypernetworkLoader node to use hypernetworks. | 2 years ago
comfyanonymous | 3696d1699a | Add support for GLIGEN textbox model. | 2 years ago
comfyanonymous | deb2b93e79 | Move code to empty gpu cache to model_management.py | 2 years ago
comfyanonymous | 1e1875f674 | Print xformers version and warning about 0.0.18 | 2 years ago
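Printing the xformers version and flagging 0.0.18 reduces to a version string check. A sketch; the wording of the warning is illustrative, only the 0.0.18 version number comes from the log:

    try:
        import xformers
        print("xformers version:", xformers.__version__)
        if xformers.__version__.startswith("0.0.18"):
            print("WARNING: xformers 0.0.18 has known issues; consider a different version.")
    except ImportError:
        print("xformers not installed")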
comfyanonymous | 64557d6781 | Add a --force-fp32 argument to force fp32 for debugging. | 2 years ago
comfyanonymous | bceccca0e5 | Small refactor. | 2 years ago
藍+85CD | 3e2608e12b | Fix auto lowvram detection on CUDA | 2 years ago
藍+85CD | 7cb924f684 | Use separate variables instead of `vram_state` | 2 years ago
藍+85CD | 84b9c0ac2f | Import intel_extension_for_pytorch as ipex | 2 years ago
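Importing intel_extension_for_pytorch registers the XPU backend with torch, after which torch.xpu can be queried much like torch.cuda. A sketch of guarded device selection under that assumption:

    import torch

    try:
        import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the "xpu" backend)
        XPU_AVAILABLE = hasattr(torch, "xpu") and torch.xpu.is_available()
    except ImportError:
        XPU_AVAILABLE = False

    device = torch.device("xpu") if XPU_AVAILABLE else torch.device("cpu")
    print("Selected device:", device)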