comfyanonymous
e45d920ae3
Don't resize clip vision image when the size is already good.
11 months ago
comfyanonymous
13e6d5366e
Switch clip vision to manual cast.
...
Make it use the same dtype as the text encoder.
11 months ago
comfyanonymous
719fa0866f
Set clip vision model in eval mode so it works without inference mode.
11 months ago
Hari
574363a8a6
Implement Perp-Neg
11 months ago
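Perp-Neg guides generation using only the component of the negative-prompt prediction that is perpendicular to the positive one. The projection at its core can be sketched in plain Python (illustrative only; the real code operates on noise-prediction tensors, and the function name here is hypothetical):

```python
def perpendicular_component(x, y):
    # Component of x perpendicular to y: x - (x.y / y.y) * y.
    # Vectors are plain lists of floats for illustration.
    dot_xy = sum(a * b for a, b in zip(x, y))
    dot_yy = sum(b * b for b in y)
    scale = dot_xy / dot_yy
    return [a - scale * b for a, b in zip(x, y)]
```

By construction the result has zero dot product with `y`, which is what lets Perp-Neg remove the negative concept without also cancelling the part of it that overlaps the positive prompt.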
comfyanonymous
a5056cfb1f
Remove useless code.
11 months ago
comfyanonymous
329c571993
Improve code legibility.
11 months ago
comfyanonymous
6c5990f7db
Fix cfg being calculated more than once when sampler_cfg_function is set.
11 months ago
comfyanonymous
ba04a87d10
Refactor and improve the sag node.
...
Moved all the sag related code to comfy_extras/nodes_sag.py
11 months ago
Rafie Walker
6761233e9d
Implement Self-Attention Guidance (#2201)
...
* First SAG test
* Need to put extra options on the model instead of the patcher
* No errors and results seem not-broken
* Use @ashen-uncensored formula, which works better!!!
* Fix a crash when using weird resolutions. Remove an unnecessary UNet call
* Improve comments, optimize memory in blur routine
* SAG works with sampler_cfg_function
11 months ago
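The "blur routine" mentioned above applies a Gaussian blur as part of SAG's degraded prediction. A memory-light 1D version can be sketched in pure Python (illustrative only; the actual implementation blurs 2D tensors, and these helper names are hypothetical):

```python
import math

def gaussian_kernel(radius, sigma):
    # Normalized 1D Gaussian weights over [-radius, radius].
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur1d(x, radius=2, sigma=1.0):
    # Convolve x with the kernel, clamping indices at the edges
    # so no mass is lost at the boundaries.
    k = gaussian_kernel(radius, sigma)
    n = len(x)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)
            acc += w * x[idx]
        out.append(acc)
    return out
```

Because the kernel is normalized and edges are clamped, blurring preserves the total signal mass, which matters when the blurred prediction is fed back into guidance.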
comfyanonymous
b454a67bb9
Support segmind vega model.
11 months ago
comfyanonymous
824e4935f5
Add dtype parameter to VAE object.
11 months ago
comfyanonymous
32b7e7e769
Add manual cast to controlnet.
11 months ago
comfyanonymous
3152023fbc
Use inference dtype for unet memory usage estimation.
11 months ago
comfyanonymous
77755ab8db
Refactor comfy.ops
...
comfy.ops -> comfy.ops.disable_weight_init
This should make it more clear what they actually do.
Some unused code has also been removed.
11 months ago
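The renamed ops exist to skip the random weight initialization that layers normally perform on construction, since the weights are immediately overwritten by checkpoint loading anyway. The pattern can be sketched without torch (in the real code these are subclasses of torch.nn modules whose reset_parameters is a no-op; the stand-in base class here is hypothetical):

```python
class Linear:
    """Stand-in for a framework layer that initializes weights on construction."""
    def __init__(self, features):
        self.weight = None
        self.reset_parameters(features)

    def reset_parameters(self, features):
        # Stand-in for the (wasted) random-init work.
        self.weight = [0.5] * features

class disable_weight_init:
    class Linear(Linear):
        def reset_parameters(self, features):
            return None  # skip init; real weights are loaded afterwards
```

Constructing `disable_weight_init.Linear` leaves the weight unset, so large models can be instantiated cheaply before the state dict is loaded over them.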
comfyanonymous
b0aab1e4ea
Add an option --fp16-unet to force using fp16 for the unet.
11 months ago
comfyanonymous
ba07cb748e
Use faster manual cast for fp8 in unet.
11 months ago
comfyanonymous
57926635e8
Switch text encoder to manual cast.
...
Use fp16 text encoder weights for CPU inference to lower memory usage.
11 months ago
comfyanonymous
340177e6e8
Disable non-blocking on mps.
11 months ago
comfyanonymous
614b7e731f
Implement GLora.
11 months ago
comfyanonymous
cb63e230b4
Make lora code a bit cleaner.
11 months ago
comfyanonymous
174eba8e95
Use own clip vision model implementation.
11 months ago
comfyanonymous
97015b6b38
Cleanup.
12 months ago
comfyanonymous
a4ec54a40d
Add linear_start and linear_end to model_config.sampling_settings
12 months ago
comfyanonymous
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of cpu.
12 months ago
comfyanonymous
efb704c758
Support attention masking in CLIP implementation.
12 months ago
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
...
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
12 months ago
comfyanonymous
2db86b4676
Slightly faster lora applying.
12 months ago
comfyanonymous
1bbd65ab30
Missed this one.
12 months ago
comfyanonymous
9b655d4fd7
Fix memory issue with control loras.
12 months ago
comfyanonymous
26b1c0a771
Fix control lora on fp8.
12 months ago
comfyanonymous
be3468ddd5
Less useless downcasting.
12 months ago
comfyanonymous
ca82ade765
Use .itemsize to get dtype size for fp8.
12 months ago
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
...
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
12 months ago
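The two formats trade exponent range (e5m2) against mantissa precision (e4m3fn). A rough pure-Python decoder sketches the bit layouts (illustrative only, not how PyTorch represents them internally; it also ignores that float8_e4m3fn reserves the all-ones exponent/mantissa pattern for NaN):

```python
def decode_fp8(byte, exp_bits, man_bits, bias):
    # Decode one byte as sign / exponent / mantissa, like a tiny IEEE float.
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    if exp == 0:  # subnormal
        return sign * man * 2.0 ** (1 - bias - man_bits)
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

def e4m3(byte):  # float8_e4m3fn layout: 1 sign, 4 exponent, 3 mantissa, bias 7
    return decode_fp8(byte, 4, 3, 7)

def e5m2(byte):  # float8_e5m2 layout: 1 sign, 5 exponent, 2 mantissa, bias 15
    return decode_fp8(byte, 5, 2, 15)
```

Either way, weights stored in 8 bits are cast up to the compute dtype at use time, halving weight memory compared to fp16.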
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
12 months ago
comfyanonymous
61a123a1e0
A different way of handling multiple images passed to SVD.
...
Previously, when a list of 3 images [0, 1, 2] was used for a 6 frame video
they were concatenated like this:
[0, 1, 2, 0, 1, 2]
Now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
12 months ago
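The new ordering amounts to repeat-interleaving each image rather than tiling the whole list. A minimal sketch (hypothetical helper name; the actual code works on tensors, where torch.repeat_interleave gives the same ordering):

```python
def spread_images(images, num_frames):
    # Repeat each image consecutively so [0, 1, 2] with 6 frames
    # yields [0, 0, 1, 1, 2, 2] instead of the old tiled
    # ordering [0, 1, 2, 0, 1, 2].
    reps = num_frames // len(images)  # assumes num_frames is a multiple
    return [img for img in images for _ in range(reps)]
```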
comfyanonymous
c97be4db91
Support SD2.1 turbo checkpoint.
12 months ago
comfyanonymous
983ebc5792
Use smart model management for VAE to decrease latency.
12 months ago
comfyanonymous
c45d1b9b67
Add a function to load a unet from a state dict.
12 months ago
comfyanonymous
f30b992b18
.sigma and .timestep now return tensors on the same device as the input.
12 months ago
comfyanonymous
13fdee6abf
Try to free memory for both cond+uncond before inference.
12 months ago
comfyanonymous
be71bb5e13
Tweak memory inference calculations a bit.
12 months ago
comfyanonymous
39e75862b2
Fix regression from last commit.
12 months ago
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
...
Now everything in transformer_options gets put in extra_options.
12 months ago
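Flattening transformer_options into extra_options can be sketched as a dict merge plus the per-call keys (hypothetical helper and key names, for illustration only):

```python
def build_extra_options(transformer_options, block, block_index):
    # Start from everything the caller put in transformer_options,
    # then add the keys specific to this patch invocation.
    extra_options = dict(transformer_options)
    extra_options["block"] = block
    extra_options["block_index"] = block_index
    return extra_options
```

Patches then read any option from the single extra_options dict instead of reaching into transformer_options separately.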
comfyanonymous
5d6dfce548
Fix importing diffusers unets.
12 months ago
comfyanonymous
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
1 year ago
comfyanonymous
871cc20e13
Support SVD img2vid model.
1 year ago
comfyanonymous
410bf07771
Make VAE memory estimation take dtype into account.
1 year ago
comfyanonymous
32447f0c39
Add sampling_settings so models can specify specific sampling settings.
1 year ago
comfyanonymous
c3ae99a749
Allow controlling downscale and upscale methods in PatchModelAddDownscale.
1 year ago
comfyanonymous
72741105a6
Remove useless code.
1 year ago