comfyanonymous
1ffa8858e7
Move model sampling code to comfy/model_sampling.py
1 year ago
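A minimal illustration of the new location (these class names do exist in the repo's comfy/model_sampling.py; treating them as the module's main exports is otherwise an assumption):

    # sampling classes now live in the dedicated module
    from comfy.model_sampling import EPS, V_PREDICTION, ModelSamplingDiscrete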
comfyanonymous
ae2acfc21b
Don't convert NaN to zero.
...
Converting NaN to zero is a bad idea because it makes it hard to tell when something went wrong.
1 year ago
comfyanonymous
d2e27b48f1
sampler_cfg_function now gets the noisy output as argument again.
...
This should make things that use sampler_cfg_function behave as before.
Added an input argument for those that want the denoised output.
This means you can calculate the x0 prediction of the model with
(input - cond), for example.
1 year ago
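A minimal sketch of a function using the restored argument, assuming the args dict exposes "cond", "uncond", "cond_scale", and the noisy "input" (key names beyond those in the message are assumptions), patched in via ModelPatcher's setter:

    def my_cfg_function(args):
        cond = args["cond"]      # conditioned model output
        uncond = args["uncond"]  # unconditioned model output
        x = args["input"]        # noisy input, passed again per this commit
        x0 = x - cond            # x0 prediction as described above (illustrative only)
        # standard CFG combination:
        return uncond + (cond - uncond) * args["cond_scale"]

    patched = model.clone()      # `model` is a loaded ModelPatcher (hypothetical)
    patched.set_model_sampler_cfg_function(my_cfg_function)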
comfyanonymous
2455aaed8a
Allow model or clip to be None in load_lora_for_models.
1 year ago
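A sketch of what this permits, e.g. applying a LoRA to the CLIP model only (the function lives in comfy.sd; the argument order shown is an assumption):

    import comfy.sd
    import comfy.utils

    # `clip` is an already-loaded CLIP object (hypothetical); path is hypothetical too
    lora = comfy.utils.load_torch_file("some_lora.safetensors")
    _, new_clip = comfy.sd.load_lora_for_models(None, clip, lora,
                                                strength_model=0.0,
                                                strength_clip=1.0)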
comfyanonymous
ecb80abb58
Allow ModelSamplingDiscrete to be instantiated without a model config.
1 year ago
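A sketch of the now-possible standalone use, assuming the class falls back to a default schedule when no config is given (the property names are assumptions):

    from comfy.model_sampling import ModelSamplingDiscrete

    ms = ModelSamplingDiscrete()         # no model config required after this change
    print(ms.sigma_min, ms.sigma_max)    # inspect the default sigma range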
comfyanonymous
e73ec8c4da
Not used anymore.
1 year ago
comfyanonymous
111f1b5255
Fix some issues with sampling precision.
1 year ago
comfyanonymous
7c0f255de1
Clean up percent start/end and make controlnets work with sigmas.
1 year ago
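A sketch of the percent-to-sigma mapping this enables for scheduling a controlnet over part of the run (the accessor and method names are assumptions based on the repo):

    # hypothetical: fetch the model's sampling object and convert percents
    model_sampling = model.get_model_object("model_sampling")
    sigma_start = model_sampling.percent_to_sigma(0.0)   # beginning of sampling
    sigma_end = model_sampling.percent_to_sigma(0.75)    # 75% of the way through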
comfyanonymous
a268a574fa
Remove a bunch of useless code.
...
DDIM is the same as Euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.
I'm keeping it in because I'm sure that if I remove it people are going to
complain.
1 year ago
comfyanonymous
1777b54d02
Sampling code changes.
...
apply_model in model_base now returns the denoised output.
This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
1 year ago
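A conceptual sketch of the relationship for an eps-prediction model (a simplification; the repo applies per-model scaling):

    def apply_model_denoised(x, sigma, eps_model):
        # raw noise prediction from the network (interface assumed)
        eps = eps_model(x, sigma)
        # denoised (x0) output that sampling_function now consumes
        return x - sigma * eps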
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
1 year ago
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
1 year ago
comfyanonymous
a12cc05323
Add a --max-upload-size argument; the default is 100MB.
1 year ago
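For example, to raise the limit to 200MB on launch (main.py as the entrypoint is an assumption here):

    python main.py --max-upload-size 200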
comfyanonymous
2a134bfab9
Fix checkpoint loader with config.
1 year ago
comfyanonymous
e60ca6929a
SD1 and SD2 CLIP and tokenizer code is now more similar to the SDXL code.
1 year ago
comfyanonymous
6ec3f12c6e
Support SSD1B model and make it easier to support asymmetric unets.
1 year ago
comfyanonymous
434ce25ec0
Restrict loading embeddings from embedding folders.
1 year ago
comfyanonymous
723847f6b3
Faster CLIP image processing.
1 year ago
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
1 year ago
comfyanonymous
7fbb217d3a
Fix uni_pc returning a noisy image when steps <= 3.
1 year ago
Jedrzej Kosinski
3783cb8bfd
Change 'c_adm' to 'y' in ControlNet.get_control.
1 year ago
comfyanonymous
d1d2fea806
Pass extra conds directly to unet.
1 year ago
comfyanonymous
036f88c621
Refactor to make it easier to add custom conds to models.
1 year ago
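A sketch of the pattern this enables, assuming the hook is named extra_conds and conds are wrapped with helpers from comfy.conds (both per the repo; the kwarg key below is hypothetical):

    import comfy.model_base
    import comfy.conds

    class MyModel(comfy.model_base.BaseModel):
        def extra_conds(self, **kwargs):
            out = super().extra_conds(**kwargs)
            extra = kwargs.get("my_extra_cond", None)   # hypothetical key
            if extra is not None:
                out["c_my_extra"] = comfy.conds.CONDRegular(extra)
            return out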
comfyanonymous
3fce8881ca
Sampling code refactor to make it easier to add more conds.
1 year ago
comfyanonymous
8594c8be4d
Empty the cache when the torch cache holds more than 25% of free memory.
1 year ago
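A sketch of what such a heuristic could look like with torch's CUDA memory stats (an illustration of the stated rule, not the repo's exact code):

    import torch

    def maybe_empty_cache(device=None):
        free_driver, _total = torch.cuda.mem_get_info(device)
        cached_free = (torch.cuda.memory_reserved(device)
                       - torch.cuda.memory_allocated(device))
        # empty torch's cache once it holds more than 25% of free memory
        if cached_free > 0.25 * (free_driver + cached_free):
            torch.cuda.empty_cache()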
comfyanonymous
8b65f5de54
attention_basic now works with hypertile.
1 year ago
comfyanonymous
e6bc42df46
Make sub_quad and split work with hypertile.
1 year ago
comfyanonymous
a0690f9df9
Fix t2i adapter issue.
1 year ago
comfyanonymous
9906e3efe3
Make xformers work with hypertile.
1 year ago
comfyanonymous
4185324a1d
Fix uni_pc sampler math. This changes the images this sampler produces.
1 year ago
comfyanonymous
e6962120c6
Make sure cond_concat is on the right device.
1 year ago
comfyanonymous
45c972aba8
Refactor cond_concat into conditioning.
1 year ago
comfyanonymous
430a8334c5
Fix some potential issues.
1 year ago
comfyanonymous
782a24fce6
Refactor cond_concat into model object.
1 year ago
comfyanonymous
0d45a565da
Fix a memory issue related to control LoRAs.
...
The cleanup function was not getting called.
1 year ago
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
1 year ago
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
1 year ago
comfyanonymous
c8013f73e5
Add some Quadro cards to the list of cards with broken fp16.
1 year ago
comfyanonymous
bb064c9796
Add a separate optimized_attention_masked function.
1 year ago
comfyanonymous
fd4c5f07e7
Add a --bf16-unet flag to test running the unet in bf16.
1 year ago
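Usage is a single launch flag (again assuming main.py as the entrypoint):

    python main.py --bf16-unet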
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
1 year ago
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
1 year ago
comfyanonymous
20d3852aa1
Pull some small changes from the other repo.
1 year ago
comfyanonymous
ac7d8cfa87
Allow attn_mask in attention_pytorch.
1 year ago
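attention_pytorch wraps torch's scaled-dot-product kernel, so the mask presumably passes straight through; a minimal standalone illustration:

    import torch
    import torch.nn.functional as F

    q = torch.randn(1, 8, 64, 40)                      # (batch, heads, q_len, dim)
    k = torch.randn(1, 8, 77, 40)
    v = torch.randn(1, 8, 77, 40)
    mask = torch.ones(1, 8, 64, 77, dtype=torch.bool)  # True = position may attend
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)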
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
...
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
1 year ago
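A sketch of the shape of that refactor: one CrossAttention module delegating to an interchangeable attention function (illustrative names; the repo selects among its basic/split/sub-quad/xformers/pytorch implementations):

    import torch
    import torch.nn as nn

    def attention_basic(q, k, v):
        # single-head attention for brevity; the real code splits heads
        scale = q.shape[-1] ** -0.5
        weights = (q @ k.transpose(-2, -1) * scale).softmax(dim=-1)
        return weights @ v

    optimized_attention = attention_basic  # swapped for other backends at import time

    class CrossAttention(nn.Module):
        def __init__(self, dim, context_dim=None):
            super().__init__()
            context_dim = context_dim or dim
            self.to_q = nn.Linear(dim, dim, bias=False)
            self.to_k = nn.Linear(context_dim, dim, bias=False)
            self.to_v = nn.Linear(context_dim, dim, bias=False)
            self.to_out = nn.Linear(dim, dim)

        def forward(self, x, context=None):
            context = x if context is None else context
            q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
            # only this middle operation differs between implementations:
            return self.to_out(optimized_attention(q, k, v))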
comfyanonymous
8cc75c64ff
Let unet wrapper functions have .to attributes.
1 year ago
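A sketch of a wrapper object that benefits: it carries a tensor and exposes .to so the patcher can move it along with the model (the setter name is from the repo; the args layout is an assumption):

    import torch

    class BiasedUnetWrapper:
        def __init__(self):
            self.bias = torch.zeros(1)

        def to(self, device):
            self.bias = self.bias.to(device)
            return self

        def __call__(self, apply_model, args):
            x, t, c = args["input"], args["timestep"], args["c"]
            return apply_model(x, t, **c) + self.bias

    patched = model.clone()   # `model` is a ModelPatcher (hypothetical)
    patched.set_model_unet_function_wrapper(BiasedUnetWrapper())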
comfyanonymous
5e885bd9c8
Cleanup.
1 year ago
Yukimasa Funaoka
9eb621c95a
Support TAESD models in safetensors format.
1 year ago
comfyanonymous
72188dffc3
load_checkpoint_guess_config can now optionally output the model.
1 year ago
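A sketch of the new option (the keyword name and return-tuple layout are assumptions based on the repo's loader):

    import comfy.sd

    model, clip, vae, clipvision = comfy.sd.load_checkpoint_guess_config(
        "v1-5-pruned-emaonly.safetensors",   # hypothetical checkpoint path
        output_vae=True, output_clip=True, output_model=True)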
Jairo Correa
63e5fd1790
Add an option to set the input directory.
1 year ago