70 Commits (608fcc25917d08cbba62d7aa17784a85f70294de)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| comfyanonymous | 83f23f82b8 | Add pytorch attention support to VAE. | 2 years ago |
| comfyanonymous | a256a2abde | --disable-xformers should not even try to import xformers. | 2 years ago |
| comfyanonymous | 0f3ba7482f | Xformers is now properly disabled when --cpu used. | 2 years ago |
| comfyanonymous | afff30fc0a | Add --cpu to use the cpu for inference. | 2 years ago |
| comfyanonymous | ebfcf0a9c9 | Fix issue. | 2 years ago |
| comfyanonymous | fed315a76a | To be really simple CheckpointLoaderSimple should pick the right type. | 2 years ago |
| comfyanonymous | c1f5855ac1 | Make some cross attention functions work on the CPU. | 2 years ago |
| comfyanonymous | 69cc75fbf8 | Add a way to interrupt current processing in the backend. | 2 years ago |
| comfyanonymous | 2c5f0ec681 | Small adjustment. | 2 years ago |
| comfyanonymous | 86721d5158 | Enable highvram automatically when vram >> ram | 2 years ago |
| comfyanonymous | 2326ff1263 | Add: --highvram for when you want models to stay on the vram. | 2 years ago |
| comfyanonymous | d66415c021 | Low vram mode for controlnets. | 2 years ago |
| comfyanonymous | 4efa67fa12 | Add ControlNet support. | 2 years ago |
| comfyanonymous | 7e1e193f39 | Automatically enable lowvram mode if vram is less than 4GB. | 2 years ago |
| comfyanonymous | 708138c77d | Remove print. | 2 years ago |
| comfyanonymous | 853e96ada3 | Increase it/s by batching together some stuff sent to unet. | 2 years ago |
| comfyanonymous | c92633eaa2 | Auto calculate amount of memory to use for --lowvram | 2 years ago |
| comfyanonymous | 534736b924 | Add some low vram modes: --lowvram and --novram | 2 years ago |
| comfyanonymous | a84cd0d1ad | Don't unload/reload model from CPU uselessly. | 2 years ago |