3783cb8bfd  change 'c_adm' to 'y' in ControlNet.get_control  (Jedrzej Kosinski, 1 year ago)
d1d2fea806  Pass extra conds directly to unet.  (comfyanonymous, 1 year ago)
036f88c621  Refactor to make it easier to add custom conds to models.  (comfyanonymous, 1 year ago)
3fce8881ca  Sampling code refactor to make it easier to add more conds.  (comfyanonymous, 1 year ago)
8594c8be4d  Empty the cache when torch cache is more than 25% free mem.  (comfyanonymous, 1 year ago)
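The 25% heuristic in that commit can be sketched as a pure function. The function name, threshold parameter, and the wiring to `torch.cuda.memory_reserved()`, `torch.cuda.memory_allocated()`, and `torch.cuda.mem_get_info()` mentioned in the docstring are assumptions for illustration, not ComfyUI's actual code:

```python
def should_empty_cache(reserved: int, allocated: int, free_total: int,
                       threshold: float = 0.25) -> bool:
    """Return True when the allocator's cached-but-unused memory exceeds
    `threshold` of the device's free memory.

    In a real setup the inputs could come from torch.cuda.memory_reserved(),
    torch.cuda.memory_allocated() and torch.cuda.mem_get_info()[0]; calling
    torch.cuda.empty_cache() would then release the cached blocks.
    """
    cached_unused = reserved - allocated
    return free_total > 0 and cached_unused > threshold * free_total
```

For example, with 6 GiB reserved, 4 GiB allocated, and 4 GiB free, the 2 GiB of cached-but-unused memory exceeds 25% of free memory, so the cache would be emptied.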
8b65f5de54  attention_basic now works with hypertile.  (comfyanonymous, 1 year ago)
e6bc42df46  Make sub_quad and split work with hypertile.  (comfyanonymous, 1 year ago)
a0690f9df9  Fix t2i adapter issue.  (comfyanonymous, 1 year ago)
9906e3efe3  Make xformers work with hypertile.  (comfyanonymous, 1 year ago)
4185324a1d  Fix uni_pc sampler math. This changes the images this sampler produces.  (comfyanonymous, 1 year ago)
e6962120c6  Make sure cond_concat is on the right device.  (comfyanonymous, 1 year ago)
45c972aba8  Refactor cond_concat into conditioning.  (comfyanonymous, 1 year ago)
430a8334c5  Fix some potential issues.  (comfyanonymous, 1 year ago)
782a24fce6  Refactor cond_concat into model object.  (comfyanonymous, 1 year ago)
0d45a565da  Fix memory issue related to control loras.  (comfyanonymous, 1 year ago)
            The cleanup function was not getting called.
d44a2de49f  Make VAE code closer to sgm.  (comfyanonymous, 1 year ago)
23680a9155  Refactor the attention stuff in the VAE.  (comfyanonymous, 1 year ago)
c8013f73e5  Add some Quadro cards to the list of cards with broken fp16.  (comfyanonymous, 1 year ago)
bb064c9796  Add a separate optimized_attention_masked function.  (comfyanonymous, 1 year ago)
fd4c5f07e7  Add a --bf16-unet to test running the unet in bf16.  (comfyanonymous, 1 year ago)
9a55dadb4c  Refactor code so model can be a dtype other than fp32 or fp16.  (comfyanonymous, 1 year ago)
88733c997f  pytorch_attention_enabled can now return True when xformers is enabled.  (comfyanonymous, 1 year ago)
20d3852aa1  Pull some small changes from the other repo.  (comfyanonymous, 1 year ago)
ac7d8cfa87  Allow attn_mask in attention_pytorch.  (comfyanonymous, 1 year ago)
1a4bd9e9a6  Refactor the attention functions.  (comfyanonymous, 1 year ago)
            There's no reason for the whole CrossAttention object to be repeated when only the operation in the middle changes.
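The reasoning in that commit body — keep one CrossAttention wrapper and swap only the core operation — can be illustrated with a minimal strategy-pattern sketch. This is plain NumPy with hypothetical names, not ComfyUI's actual classes:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_basic(q, k, v):
    # Reference scaled dot-product attention.
    scale = 1.0 / np.sqrt(q.shape[-1])
    return softmax(q @ k.T * scale) @ v

def attention_split(q, k, v, chunks=2):
    # Same math, computed in query chunks to bound peak memory.
    return np.concatenate([attention_basic(qc, k, v)
                           for qc in np.array_split(q, chunks)])

class CrossAttention:
    """One wrapper object; only the core op in the middle is swapped."""
    def __init__(self, core_op=attention_basic):
        self.core_op = core_op

    def __call__(self, q, k, v):
        return self.core_op(q, k, v)
```

Since every variant computes the same function, `CrossAttention(attention_basic)` and `CrossAttention(attention_split)` produce numerically identical outputs; only performance characteristics differ.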
8cc75c64ff  Let unet wrapper functions have .to attributes.  (comfyanonymous, 1 year ago)
5e885bd9c8  Cleanup.  (comfyanonymous, 1 year ago)
9eb621c95a  Supports TAESD models in safetensors format  (Yukimasa Funaoka, 1 year ago)
72188dffc3  load_checkpoint_guess_config can now optionally output the model.  (comfyanonymous, 1 year ago)
63e5fd1790  Option to input directory  (Jairo Correa, 1 year ago)
9bfec2bdbf  Fix quality loss due to low precision  (City, 1 year ago)
0f17993d05  fix: typo in extra sampler  (badayvedat, 1 year ago)
66756de100  Add SamplerDPMPP_2M_SDE node.  (comfyanonymous, 1 year ago)
71713888c4  Print missing VAE keys.  (comfyanonymous, 1 year ago)
d234ca558a  Add missing samplers to KSamplerSelect.  (comfyanonymous, 1 year ago)
1adcc4c3a2  Add a SamplerCustom Node.  (comfyanonymous, 1 year ago)
            This node takes a list of sigmas and a sampler object as input. This lets people easily implement custom schedulers and samplers as nodes. More nodes will be added to it in the future.
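As an illustration of what "a list of sigmas" means here, a custom scheduler only has to emit a decreasing noise-level list. Below is a generic sketch of the widely used Karras et al. (2022) schedule — not ComfyUI's own scheduler code; the default parameter values and the trailing-zero convention are assumptions borrowed from common k-diffusion practice:

```python
import math

def karras_sigmas(n, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
    """Karras et al. (2022) schedule: interpolate linearly in
    sigma**(1/rho) space, yielding sigmas from sigma_max down to
    sigma_min, with a final 0.0 appended (a common sampler convention)."""
    ramp = [i / (n - 1) for i in range(n)]
    max_inv = sigma_max ** (1 / rho)
    min_inv = sigma_min ** (1 / rho)
    sigmas = [(max_inv + t * (min_inv - max_inv)) ** rho for t in ramp]
    return sigmas + [0.0]
```

A node built on this idea would return such a list and hand it, together with a sampler object, to SamplerCustom.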
bf3fc2f1b7  Refactor sampling related code.  (comfyanonymous, 1 year ago)
fff491b032  Model patches can now know which batch is positive and negative.  (comfyanonymous, 1 year ago)
1d6dd83184  Scheduler code refactor.  (comfyanonymous, 1 year ago)
446caf711c  Sampling code refactor.  (comfyanonymous, 1 year ago)
76cdc809bf  Support more controlnet models.  (comfyanonymous, 1 year ago)
eec449ca8e  Allow Intel GPUs to LoRA cast on GPU since it supports BF16 natively.  (Simon Lui, 1 year ago)
afa2399f79  Add a way to set output block patches to modify the h and hsp.  (comfyanonymous, 1 year ago)
492db2de8d  Allow having a different pooled output for each image in a batch.  (comfyanonymous, 1 year ago)
1cdfb3dba4  Only do the cast on the device if the device supports it.  (comfyanonymous, 1 year ago)
7c9a92f552  Don't depend on torchvision.  (comfyanonymous, 1 year ago)
2b6b178173  Added support for lanczos scaling  (MoonRide303, 1 year ago)
b92bf8196e  Do lora cast on GPU instead of CPU for higher performance.  (comfyanonymous, 1 year ago)
321c5fa295  Enable pytorch attention by default on xpu.  (comfyanonymous, 1 year ago)
61b1f67734  Support models without previews.  (comfyanonymous, 1 year ago)