Simon Lui
f509c6fe21
Fix Intel GPU memory allocation accuracy and update documentation. (#3459)
* Change the total memory calculation to be more accurate; allocated memory is actually smaller than reserved memory.
* Update README.md install documentation for Intel GPUs.
6 months ago
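The distinction this commit relies on: memory_allocated() counts bytes actually held by tensors, while memory_reserved() counts what the caching allocator has taken from the driver, so reserved is always at least as large as allocated. A minimal sketch of the idea using the CUDA-side calls for illustration; assuming the Intel path uses the analogous torch.xpu/IPEX queries, which the commit itself does not show.

```python
import torch

def free_memory_estimate(dev: torch.device) -> int:
    """Estimate free memory by accounting for the caching allocator's reserve."""
    free_driver, _total = torch.cuda.mem_get_info(dev)    # free as seen by the driver
    reserved = torch.cuda.memory_reserved(dev)            # held by the caching allocator
    allocated = torch.cuda.memory_allocated(dev)          # actually occupied by tensors
    # Reserved-but-unused memory can be reused without asking the driver again.
    return free_driver + (reserved - allocated)
```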
comfyanonymous
fa6dd7e5bb
Fix lowvram issue with saving checkpoints.
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
6 months ago
comfyanonymous
49c20cdc70
No longer necessary.
6 months ago
comfyanonymous
e1489ad257
Fix issue with lowvram mode breaking model saving.
6 months ago
Simon Lui
a56d02efc7
Change torch.xpu to ipex.optimize and xpu device initialization, and remove the workaround for the text node issue from older IPEX. (#3388)
7 months ago
comfyanonymous
258dbc06c3
Fix some memory related issues.
7 months ago
comfyanonymous
0a03009808
Fix issue with controlnet models getting loaded multiple times.
8 months ago
comfyanonymous
5d8898c056
Fix some performance issues with weight loading and unloading.
Lower peak memory usage when changing model.
Fix case where model weights would be unloaded and reloaded.
8 months ago
comfyanonymous
c6de09b02e
Optimize the memory unload strategy for better performance.
8 months ago
comfyanonymous
4b9005e949
Fix regression with model merging.
8 months ago
comfyanonymous
c18a203a8a
Don't unload model weights for non weight patches.
8 months ago
comfyanonymous
db8b59ecff
Lower memory usage for LoRAs in lowvram mode at the cost of performance.
8 months ago
comfyanonymous
0ed72befe1
Change log levels.
Logging level now defaults to info. --verbose sets it to debug.
8 months ago
comfyanonymous
65397ce601
Replace prints with logging and add --verbose argument.
8 months ago
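A rough sketch of what these two logging commits describe (the level defaults to info, --verbose bumps it to debug); only the flag name and levels come from the commits, the rest is illustrative.

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true", help="Enable debug logging.")
args = parser.parse_args()

logging.basicConfig(
    level=logging.DEBUG if args.verbose else logging.INFO,
    format="%(levelname)s: %(message)s",
)

logging.info("loaded model")          # replaces a former print()
logging.debug("weight dtype: fp16")   # only visible with --verbose
```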
comfyanonymous
dce3555339
Add some Tesla Pascal GPUs to the fp16 working-but-slower list.
9 months ago
comfyanonymous
88f300401c
Enable fp16 by default on mps.
9 months ago
comfyanonymous
929e266f3e
Manual cast for bf16 on older GPUs.
9 months ago
comfyanonymous
0b3c50480c
Make --force-fp32 disable loading models in bf16.
9 months ago
comfyanonymous
f83109f09b
Stable Cascade Stage C.
9 months ago
comfyanonymous
aeaeca10bd
Small refactor of is_device_* functions.
9 months ago
comfyanonymous
66e28ef45c
Don't use is_bf16_supported to check for fp16 support.
10 months ago
comfyanonymous
24129d78e6
Speed up SDXL on 16xx series with fp16 weights and manual cast.
10 months ago
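"Manual cast" here means keeping weights in a compact storage dtype and casting them to the compute dtype inside forward(). A minimal sketch of that pattern, not ComfyUI's actual classes; the fp16-storage/fp32-compute split is an assumption based on the fp16 quirks of the 16xx series.

```python
import torch
import torch.nn.functional as F

class ManualCastLinear(torch.nn.Linear):
    compute_dtype = torch.float32  # assumed: fp32 compute on cards with problematic fp16 math

    def forward(self, x):
        # Weights stay in fp16 in memory; cast per call for the actual math.
        weight = self.weight.to(self.compute_dtype)
        bias = self.bias.to(self.compute_dtype) if self.bias is not None else None
        return F.linear(x.to(self.compute_dtype), weight, bias)

layer = ManualCastLinear(1024, 1024).half()  # fp16 storage: half the memory
out = layer(torch.randn(1, 1024))
```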
comfyanonymous
4b0239066d
Always use fp16 for the text encoders.
10 months ago
comfyanonymous
f9e55d8463
Only auto enable bf16 VAE on nvidia GPUs that actually support it.
10 months ago
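A sketch of the kind of capability gate this implies; torch.cuda.is_bf16_supported() is the stock PyTorch query, while the surrounding policy is assumed rather than taken from the repo.

```python
import torch

def vae_dtype(device: torch.device) -> torch.dtype:
    # bf16 VAE only where an NVIDIA GPU actually supports bf16; otherwise fall back to fp32.
    if device.type == "cuda" and torch.cuda.is_bf16_supported():
        return torch.bfloat16
    return torch.float32
```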
comfyanonymous
1b103e0cb2
Add argument to run the VAE on the CPU.
11 months ago
comfyanonymous
e1e322cf69
Load weights that can't be lowvramed to target device.
11 months ago
comfyanonymous
a252963f95
--disable-smart-memory now unloads everything like it did originally.
11 months ago
comfyanonymous
36a7953142
Greatly improve lowvram sampling speed by getting rid of accelerate.
Let me know if this breaks anything.
11 months ago
comfyanonymous
2f9d6a97ec
Add --deterministic option to make pytorch use deterministic algorithms.
11 months ago
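What a --deterministic switch typically toggles in PyTorch, as a hedged sketch; the exact wiring in the repo may differ.

```python
import os
import torch

def enable_determinism():
    # Some cuBLAS kernels require this workspace setting before deterministic mode works.
    os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```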
comfyanonymous
b0aab1e4ea
Add an option --fp16-unet to force using fp16 for the unet.
11 months ago
comfyanonymous
ba07cb748e
Use faster manual cast for fp8 in unet.
11 months ago
comfyanonymous
57926635e8
Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
11 months ago
comfyanonymous
340177e6e8
Disable non-blocking transfers on mps.
11 months ago
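non_blocking is the Tensor.to() flag in question. A sketch of gating it by device type; the helper names are illustrative, not the repo's.

```python
import torch

def device_supports_non_blocking(device: torch.device) -> bool:
    # Per this commit, non-blocking copies misbehave on MPS, so turn them off there.
    return device.type != "mps"

def to_device(t: torch.Tensor, device: torch.device) -> torch.Tensor:
    return t.to(device, non_blocking=device_supports_non_blocking(device))
```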
comfyanonymous
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of CPU memory.
12 months ago
comfyanonymous
2db86b4676
Slightly faster LoRA application.
12 months ago
comfyanonymous
ca82ade765
Use .itemsize to get dtype size for fp8.
12 months ago
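The .itemsize the commit mentions is a property on torch.dtype, handy because it also covers the fp8 formats (older PyTorch builds lack it, hence the need for a fallback there).

```python
import torch

print(torch.float32.itemsize)          # 4 bytes per element
print(torch.float16.itemsize)          # 2
print(torch.float8_e4m3fn.itemsize)    # 1
```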
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
12 months ago
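The two flags correspond to PyTorch's two fp8 dtypes, torch.float8_e4m3fn and torch.float8_e5m2. A minimal sketch of storing a weight in fp8 and upcasting just in time for compute; the upcast step is an assumption, since ordinary layers cannot run matmuls directly in these dtypes.

```python
import torch

w = torch.randn(320, 320)
w_fp8 = w.to(torch.float8_e4m3fn)   # or torch.float8_e5m2 (--fp8_e5m2-unet)
print(w_fp8.element_size())         # 1 byte per weight instead of 2 or 4

x = torch.randn(4, 320)
y = x @ w_fp8.to(torch.float32)     # upcast just-in-time for the matmul (fp16/bf16 on GPU)
```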
comfyanonymous
0cf4e86939
Add some command line arguments to store text encoder weights in fp8.
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
1 year ago
comfyanonymous
7339479b10
Disable xformers when it can't load properly.
1 year ago
comfyanonymous
dd4ba68b6e
Allow different models to estimate memory usage differently.
1 year ago
comfyanonymous
8594c8be4d
Empty the cache when the torch cache exceeds 25% of free memory.
1 year ago
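A rough sketch of that 25% heuristic using the stock CUDA memory queries; the threshold comes from the commit message, the exact bookkeeping is assumed.

```python
import torch

def maybe_empty_cache(dev: torch.device):
    free_driver, _total = torch.cuda.mem_get_info(dev)
    # Memory the caching allocator holds but tensors are not using.
    cached = torch.cuda.memory_reserved(dev) - torch.cuda.memory_allocated(dev)
    if cached > 0.25 * (free_driver + cached):
        torch.cuda.empty_cache()
```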
comfyanonymous
c8013f73e5
Add some Quadro cards to the list of cards with broken fp16.
1 year ago
comfyanonymous
fd4c5f07e7
Add a --bf16-unet argument to test running the unet in bf16.
1 year ago
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
1 year ago
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
1 year ago
comfyanonymous
20d3852aa1
Pull some small changes from the other repo.
1 year ago
Simon Lui
eec449ca8e
Allow Intel GPUs to cast LoRAs on the GPU since they support BF16 natively.
1 year ago
comfyanonymous
1cdfb3dba4
Only do the cast on the device if the device supports it.
1 year ago
comfyanonymous
321c5fa295
Enable pytorch attention by default on xpu.
1 year ago
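"Pytorch attention" here refers to torch.nn.functional.scaled_dot_product_attention, which dispatches to a fused kernel where one is available; the shapes below are purely illustrative.

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 8, 77, 64)   # (batch, heads, tokens, head_dim)
k = torch.randn(1, 8, 77, 64)
v = torch.randn(1, 8, 77, 64)
out = F.scaled_dot_product_attention(q, k, v)
```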
comfyanonymous
0966d3ce82
Don't run text encoders on xpu because there are issues.
1 year ago