comfyanonymous
2b14041d4b
Remove useless code.
1 year ago
comfyanonymous
274dff3257
Remove more useless files.
1 year ago
comfyanonymous
f0a2b81cd0
Cleanup: Remove a bunch of useless files.
1 year ago
comfyanonymous
f8c5931053
Split the batch in VAEEncode if there's not enough memory.
1 year ago
comfyanonymous
c069fc0730
Auto switch to tiled VAE encode if regular one runs out of memory.
1 year ago
comfyanonymous
c64ca8c0b2
Refactor unCLIP noise augment out of samplers.py
1 year ago
comfyanonymous
de142eaad5
Simpler base model code.
1 year ago
comfyanonymous
23cf8ca7c5
Fix bug when embedding gets ignored because of mismatched size.
1 year ago
comfyanonymous
0e425603fb
Small refactor.
1 year ago
comfyanonymous
a3a713b6c5
Refactor previews into one command line argument.
...
Clean up a few things.
1 year ago
space-nuko
3e17971acb
preview method autodetection
1 year ago
space-nuko
d5a28fadaa
Add latent2rgb preview
1 year ago
space-nuko
48f7ec750c
Make previews into cli option
1 year ago
space-nuko
b4f434ee66
Preview sampled images with TAESD
1 year ago
comfyanonymous
fed0a4dd29
Some comments to say what the vram state options mean.
1 year ago
comfyanonymous
0a5fefd621
Cleanups and fixes for model_management.py
...
Hopefully fix regression on MPS and CPU.
1 year ago
comfyanonymous
700491d81a
Implement global average pooling for controlnet.
1 year ago
comfyanonymous
67892b5ac5
Refactor and improve model_management code related to free memory.
1 year ago
space-nuko
499641ebf1
More accurate total
1 year ago
space-nuko
b5dd15c67a
System stats endpoint
1 year ago
comfyanonymous
5c38958e49
Tweak lowvram model memory so it's closer to what it was before.
1 year ago
comfyanonymous
94680732d3
Empty cache on mps.
1 year ago
comfyanonymous
03da8a3426
This is useless for inference.
1 year ago
comfyanonymous
eb448dd8e1
Auto load model in lowvram if not enough memory.
1 year ago
comfyanonymous
b9818eb910
Add route to get safetensors metadata:
...
/view_metadata/loras?filename=lora.safetensors
1 year ago
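The `/view_metadata` route above serves the metadata embedded in a safetensors file. A minimal sketch of what reading that metadata involves, based on the published safetensors on-disk layout (an 8-byte little-endian header length followed by a JSON header whose `__metadata__` key holds free-form strings); this is an illustration of the file format, not ComfyUI's actual route handler:

```python
import json
import struct

def read_safetensors_metadata(path):
    """Return the __metadata__ section of a .safetensors file's JSON header.

    Layout: 8-byte little-endian u64 header length, then that many bytes
    of JSON, then the raw tensor data.
    """
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})
```

The route presumably maps `filename` within the given model folder (e.g. `loras`) to a path and returns this dictionary as JSON.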
comfyanonymous
a532888846
Support VAEs in diffusers format.
2 years ago
comfyanonymous
0fc483dcfd
Refactor diffusers model convert code to be able to reuse it.
2 years ago
comfyanonymous
eb4bd7711a
Remove einops.
2 years ago
comfyanonymous
87ab25fac7
Do operations in same order as the one it replaces.
2 years ago
comfyanonymous
e1278fa925
Support old pytorch versions that don't have weights_only.
2 years ago
BlenderNeko
8b4b0c3188
vectorized bislerp
2 years ago
comfyanonymous
b8ccbec6d8
Various improvements to bislerp.
2 years ago
comfyanonymous
34887b8885
Add experimental bislerp algorithm for latent upscaling.
...
It's like bilinear but with slerp.
2 years ago
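The bislerp idea above ("bilinear but with slerp") can be sketched as bilinear interpolation where each linear blend is replaced by spherical interpolation of the channel vectors. A minimal, illustrative version (plain lists, no tensors; the `slerp` and `bislerp` helpers are hypothetical names, not ComfyUI's optimized implementation):

```python
import math

def slerp(a, b, t):
    """Spherically interpolate between vectors a and b.

    Interpolates direction along the great circle and magnitude linearly;
    falls back to plain lerp for zero-length or near-parallel vectors.
    """
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    dot = sum(x * y for x, y in zip(a, b)) / (norm_a * norm_b)
    theta = math.acos(max(-1.0, min(1.0, dot)))
    if theta < 1e-6:  # nearly parallel: slerp degenerates to lerp
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(theta)
    w_a = math.sin((1 - t) * theta) / s
    w_b = math.sin(t * theta) / s
    mag = (1 - t) * norm_a + t * norm_b  # lerp the magnitudes
    return [mag * (w_a * x / norm_a + w_b * y / norm_b)
            for x, y in zip(a, b)]

def bislerp(c00, c10, c01, c11, tx, ty):
    """Bilinear interpolation of four corner vectors, with slerp as the blend."""
    top = slerp(c00, c10, tx)
    bottom = slerp(c01, c11, tx)
    return slerp(top, bottom, ty)
```

For latent upscaling, each pixel of a latent is a channel vector, so blending directions on the sphere rather than linearly helps preserve vector magnitudes that plain bilinear averaging would shrink.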
comfyanonymous
6cc450579b
Auto transpose images from exif data.
2 years ago
comfyanonymous
dc198650c0
sample_dpmpp_2m_sde no longer crashes when step == 1.
2 years ago
comfyanonymous
069657fbf3
Add DPM-Solver++(2M) SDE and exponential scheduler.
...
exponential scheduler is the one recommended with this sampler.
2 years ago
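The exponential scheduler mentioned above spaces noise levels uniformly in log-space. A minimal sketch under that assumption (the default `sigma_min`/`sigma_max` values here are illustrative SD-style numbers, not taken from this commit):

```python
import math

def exponential_sigmas(n, sigma_min=0.0292, sigma_max=14.6146):
    """Return n noise levels spaced evenly in log-space, plus a trailing 0.0.

    Equivalent to exp(linspace(log(sigma_max), log(sigma_min), n)).
    """
    log_max, log_min = math.log(sigma_max), math.log(sigma_min)
    sigmas = [math.exp(log_max + (log_min - log_max) * i / (n - 1))
              for i in range(n)]
    return sigmas + [0.0]  # samplers expect the schedule to end at zero
```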
comfyanonymous
b8636a44aa
Make scaled_dot_product switch to sliced attention on OOM.
2 years ago
comfyanonymous
797c4e8d3b
Simplify and improve some vae attention code.
2 years ago
comfyanonymous
ef815ba1e2
Switch default scheduler to normal.
2 years ago
comfyanonymous
3a1f47764d
Print the torch device that is used on startup.
2 years ago
BlenderNeko
1201d2eae5
Make nodes map over input lists (#579)
...
* allow nodes to map over lists
* make work with IS_CHANGED and VALIDATE_INPUTS
* give list outputs distinct socket shape
* add rebatch node
* add batch index logic
* add repeat latent batch
* deal with noise mask edge cases in latentfrombatch
2 years ago
BlenderNeko
19c014f429
comment out annoying print statement
2 years ago
BlenderNeko
d9e088ddfd
minor changes for tiled sampler
2 years ago
comfyanonymous
f7c0f75d1f
Auto batching improvements.
...
Try batching when cond sizes don't match with smart padding.
2 years ago
comfyanonymous
314e526c5c
Not needed anymore because sampling works with any latent size.
2 years ago
comfyanonymous
c6e34963e4
Make t2i adapter work with any latent resolution.
2 years ago
comfyanonymous
6fc4917634
Make maximum_batch_area take into account pytorch 2.0 attention function.
...
More conservative xformers maximum_batch_area.
2 years ago
comfyanonymous
678f933d38
maximum_batch_area for xformers.
...
Remove useless code.
2 years ago
EllangoK
8e03c789a2
auto-launch cli arg
2 years ago
comfyanonymous
cb1551b819
Lowvram mode for gligen and fix some lowvram issues.
2 years ago