338 Commits (ac9c038ac222b71e16f98dd65060381d1895b787)

Author SHA1 Message Date
space-nuko 499641ebf1 More accurate total 1 year ago
space-nuko b5dd15c67a System stats endpoint 1 year ago
comfyanonymous 5c38958e49 Tweak lowvram model memory so it's closer to what it was before. 1 year ago
comfyanonymous 94680732d3 Empty cache on mps. 1 year ago
comfyanonymous 03da8a3426 This is useless for inference. 1 year ago
comfyanonymous eb448dd8e1 Auto load model in lowvram if not enough memory. 1 year ago
comfyanonymous b9818eb910 Add route to get safetensors metadata: 1 year ago
comfyanonymous a532888846 Support VAEs in diffusers format. 2 years ago
comfyanonymous 0fc483dcfd Refactor diffusers model convert code to be able to reuse it. 2 years ago
comfyanonymous eb4bd7711a Remove einops. 2 years ago
comfyanonymous 87ab25fac7 Do operations in same order as the one it replaces. 2 years ago
comfyanonymous e1278fa925 Support old pytorch versions that don't have weights_only. 2 years ago
BlenderNeko 8b4b0c3188 vectorized bislerp 2 years ago
comfyanonymous b8ccbec6d8 Various improvements to bislerp. 2 years ago
comfyanonymous 34887b8885 Add experimental bislerp algorithm for latent upscaling. 2 years ago
comfyanonymous 6cc450579b Auto transpose images from exif data. 2 years ago
comfyanonymous dc198650c0 sample_dpmpp_2m_sde no longer crashes when step == 1. 2 years ago
comfyanonymous 069657fbf3 Add DPM-Solver++(2M) SDE and exponential scheduler. 2 years ago
comfyanonymous b8636a44aa Make scaled_dot_product switch to sliced attention on OOM. 2 years ago
comfyanonymous 797c4e8d3b Simplify and improve some vae attention code. 2 years ago
comfyanonymous ef815ba1e2 Switch default scheduler to normal. 2 years ago
comfyanonymous 3a1f47764d Print the torch device that is used on startup. 2 years ago
BlenderNeko 1201d2eae5 Make nodes map over input lists (#579) 2 years ago
BlenderNeko 19c014f429 comment out annoying print statement 2 years ago
BlenderNeko d9e088ddfd minor changes for tiled sampler 2 years ago
comfyanonymous f7c0f75d1f Auto batching improvements. 2 years ago
comfyanonymous 314e526c5c Not needed anymore because sampling works with any latent size. 2 years ago
comfyanonymous c6e34963e4 Make t2i adapter work with any latent resolution. 2 years ago
comfyanonymous 6fc4917634 Make maximum_batch_area take into account pytorch 2.0 attention function. 2 years ago
comfyanonymous 678f933d38 maximum_batch_area for xformers. 2 years ago
EllangoK 8e03c789a2 auto-launch cli arg 2 years ago
comfyanonymous cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2 years ago
comfyanonymous af9cc1fb6a Search recursively in subfolders for embeddings. 2 years ago
comfyanonymous 6ee11d7bc0 Fix import. 2 years ago
comfyanonymous bae4fb4a9d Fix imports. 2 years ago
comfyanonymous fcf513e0b6 Refactor. 2 years ago
pythongosssss 5eeecf3fd5 remove unused import 2 years ago
pythongosssss 8912623ea9 use comfy progress bar 2 years ago
comfyanonymous 908dc1d5a8 Add a total_steps value to sampler callback. 2 years ago
pythongosssss 27df74101e reduce duplication 2 years ago
comfyanonymous 93c64afaa9 Use sampler callback instead of tqdm hook for progress bar. 2 years ago
pythongosssss 06ad35b493 added progress to encode + upscale 2 years ago
comfyanonymous ba8a4c3667 Change latent resolution step to 8. 2 years ago
comfyanonymous 66c8aa5c3e Make unet work with any input shape. 2 years ago
comfyanonymous 9c335a553f LoKR support. 2 years ago
comfyanonymous d3293c8339 Properly disable all progress bars when disable_pbar=True 2 years ago
BlenderNeko a2e18b1504 allow disabling of progress bar when sampling 2 years ago
comfyanonymous 071011aebe Mask strength should be separate from area strength. 2 years ago
Jacob Segal af02393c2a Default to sampling entire image 2 years ago
comfyanonymous 056e5545ff Don't try to get vram from xpu or cuda when directml is enabled. 2 years ago