comfyanonymous | 3a1f47764d | Print the torch device that is used on startup. | 2 years ago
comfyanonymous | 6fc4917634 | Make maximum_batch_area take into account the pytorch 2.0 attention function. More conservative xformers maximum_batch_area. | 2 years ago
comfyanonymous | 678f933d38 | maximum_batch_area for xformers. Remove useless code. | 2 years ago
comfyanonymous | cb1551b819 | Lowvram mode for gligen and fix some lowvram issues. | 2 years ago
comfyanonymous | 6ee11d7bc0 | Fix import. | 2 years ago
comfyanonymous | bae4fb4a9d | Fix imports. | 2 years ago
comfyanonymous | 056e5545ff | Don't try to get vram from xpu or cuda when directml is enabled. | 2 years ago
comfyanonymous | 2ca934f7d4 | You can now select the device index with: --directml id. For example: --directml 1 | 2 years ago
comfyanonymous | 3baded9892 | Basic torch_directml support. Use --directml to use it. | 2 years ago
comfyanonymous | 5282f56434 | Implement Linear hypernetworks. Add a HypernetworkLoader node to use hypernetworks. | 2 years ago
comfyanonymous | 3696d1699a | Add support for GLIGEN textbox model. | 2 years ago
comfyanonymous | deb2b93e79 | Move code to empty gpu cache to model_management.py | 2 years ago
comfyanonymous | 1e1875f674 | Print xformers version and warning about 0.0.18 | 2 years ago
comfyanonymous | 64557d6781 | Add a --force-fp32 argument to force fp32 for debugging. | 2 years ago
comfyanonymous | bceccca0e5 | Small refactor. | 2 years ago
藍+85CD | 3e2608e12b | Fix auto lowvram detection on CUDA | 2 years ago
藍+85CD | 7cb924f684 | Use separate variables instead of `vram_state` | 2 years ago
藍+85CD | 84b9c0ac2f | Import intel_extension_for_pytorch as ipex | 2 years ago
EllangoK | e5e587b1c0 | Separates out arg parser and imports args | 2 years ago
藍+85CD | 37713e3b0a | Add basic XPU device support (closed #387) | 2 years ago
comfyanonymous | e46b1c3034 | Disable xformers in VAE when xformers == 0.0.18 | 2 years ago
Francesco Yoshi Gobbo | f55755f0d2 | code cleanup | 2 years ago
Francesco Yoshi Gobbo | cf0098d539 | no lowvram state if cpu only | 2 years ago
comfyanonymous | 4adcea7228 | I don't think controlnets were being handled correctly by MPS. | 2 years ago
Yurii Mazurevich | fc71e7ea08 | Fixed typo | 2 years ago
Yurii Mazurevich | 4b943d2b60 | Removed unnecessary comment | 2 years ago
Yurii Mazurevich | 89fd5ed574 | Added MPS device support | 2 years ago
comfyanonymous | 3ed4a4e4e6 | Try again with vae tiled decoding if regular fails because of OOM. | 2 years ago
comfyanonymous | 9d0665c8d0 | Add laptop quadro cards to fp32 list. | 2 years ago
comfyanonymous | ee46bef03a | Make --cpu have priority over everything else. | 2 years ago
comfyanonymous | 83f23f82b8 | Add pytorch attention support to VAE. | 2 years ago
comfyanonymous | a256a2abde | --disable-xformers should not even try to import xformers. | 2 years ago
comfyanonymous | 0f3ba7482f | Xformers is now properly disabled when --cpu used. Added --windows-standalone-build option; currently it only makes the code open comfyui in the browser. | 2 years ago
comfyanonymous | afff30fc0a | Add --cpu to use the cpu for inference. | 2 years ago
comfyanonymous | ebfcf0a9c9 | Fix issue. | 2 years ago
comfyanonymous | fed315a76a | To be really simple, CheckpointLoaderSimple should pick the right type. | 2 years ago
comfyanonymous | c1f5855ac1 | Make some cross attention functions work on the CPU. | 2 years ago
comfyanonymous | 69cc75fbf8 | Add a way to interrupt current processing in the backend. | 2 years ago
comfyanonymous | 2c5f0ec681 | Small adjustment. | 2 years ago
comfyanonymous | 86721d5158 | Enable highvram automatically when vram >> ram | 2 years ago
comfyanonymous | 2326ff1263 | Add: --highvram for when you want models to stay on the vram. | 2 years ago
comfyanonymous | d66415c021 | Low vram mode for controlnets. | 2 years ago
comfyanonymous | 4efa67fa12 | Add ControlNet support. | 2 years ago
comfyanonymous | 7e1e193f39 | Automatically enable lowvram mode if vram is less than 4GB. Use: --normalvram to disable it. | 2 years ago
comfyanonymous | 708138c77d | Remove print. | 2 years ago
comfyanonymous | 853e96ada3 | Increase it/s by batching together some stuff sent to unet. | 2 years ago
comfyanonymous | c92633eaa2 | Auto calculate amount of memory to use for --lowvram | 2 years ago
comfyanonymous | 534736b924 | Add some low vram modes: --lowvram and --novram | 2 years ago
comfyanonymous | a84cd0d1ad | Don't unload/reload model from CPU uselessly. | 2 years ago
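Several of the commits above introduce command-line switches (--directml with an optional device index, the --lowvram / --novram / --normalvram / --highvram modes, --cpu, --force-fp32, --disable-xformers). As a rough illustration of how flags like these could be declared after the "Separates out arg parser" change, here is a minimal argparse sketch; it is an assumption for illustration only, not ComfyUI's actual comfy/cli_args.py, and the defaults shown are stand-ins.

```python
# Minimal sketch only: declaring CLI flags like the ones named in these
# commits with argparse. Illustrative assumption, not ComfyUI's real parser.
import argparse

parser = argparse.ArgumentParser()

# "--directml" alone enables DirectML; "--directml 1" picks device index 1.
parser.add_argument("--directml", type=int, nargs="?", const=-1, default=None,
                    metavar="DIRECTML_DEVICE")

# VRAM modes from the --lowvram/--novram/--highvram/--normalvram commits,
# made mutually exclusive so only one can be active at a time.
vram = parser.add_mutually_exclusive_group()
vram.add_argument("--highvram", action="store_true")
vram.add_argument("--normalvram", action="store_true")
vram.add_argument("--lowvram", action="store_true")
vram.add_argument("--novram", action="store_true")

parser.add_argument("--cpu", action="store_true")               # CPU inference
parser.add_argument("--force-fp32", action="store_true")        # debugging aid
parser.add_argument("--disable-xformers", action="store_true")

# Example invocation: select DirectML device 1 and low VRAM mode.
args = parser.parse_args(["--directml", "1", "--lowvram"])
print(args.directml, args.lowvram, args.force_fp32)  # -> 1 True False
```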