comfyanonymous
78a70fda87
Remove useless import.
10 months ago
comfyanonymous
36a7953142
Greatly improve lowvram sampling speed by getting rid of accelerate.
Let me know if this breaks anything.
11 months ago
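A minimal sketch of what replacing accelerate's offload hooks can look like (the function name and device choices are assumptions for illustration, not the actual comfy implementation): keep a module's weights offloaded and move them onto the GPU only for the duration of its forward pass, using plain pytorch hooks.

```python
import torch

def lowvram_offload(module, compute_dev="cuda", offload_dev="cpu"):
    # Hypothetical stand-in for accelerate's cpu-offload hooks: weights
    # live on the offload device and visit the GPU only while this
    # module's forward pass runs.
    def pre_hook(mod, args):
        mod.to(compute_dev)

    def post_hook(mod, args, output):
        mod.to(offload_dev)
        return output

    module.register_forward_pre_hook(pre_hook)
    module.register_forward_hook(post_hook)
```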
comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it clearer what these ops actually do.
Some unused code has also been removed.
11 months ago
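The pattern behind a disable_weight_init namespace is roughly the following: subclass the torch.nn layers and turn reset_parameters into a no-op, so constructing a model does not spend time on random initialization that a checkpoint load will overwrite. A sketch of the idea, not the exact comfy.ops source:

```python
import torch

class disable_weight_init:
    # Drop-in replacements for torch.nn layers whose reset_parameters
    # is a no-op: building the model skips the (useless here) random init.
    class Linear(torch.nn.Linear):
        def reset_parameters(self):
            return None

    class Conv2d(torch.nn.Conv2d):
        def reset_parameters(self):
            return None

# Usage: code that did torch.nn.Linear(...) switches to
# disable_weight_init.Linear(...) and loads the real weights afterwards.
layer = disable_weight_init.Linear(320, 320)
```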
comfyanonymous
ba07cb748e
Use faster manual cast for fp8 in unet.
11 months ago
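"Manual cast" here means storing weights in a small dtype and upcasting them one layer at a time at forward time, instead of keeping a full higher-precision copy of the model in memory. A hedged sketch (the class name and compute dtype are assumptions):

```python
import torch
import torch.nn.functional as F

class ManualCastLinear(torch.nn.Linear):
    compute_dtype = torch.float16  # assumed compute dtype

    def forward(self, x):
        # Weights stay in fp8 (e.g. torch.float8_e4m3fn) in memory; they
        # are upcast per-layer for the actual matmul, since fp8 is not
        # directly usable by most kernels.
        w = self.weight.to(self.compute_dtype)
        b = self.bias.to(self.compute_dtype) if self.bias is not None else None
        return F.linear(x.to(self.compute_dtype), w, b)
```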
comfyanonymous
57926635e8
Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
11 months ago
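The same manual-cast idea applied on CPU: keep the text encoder weights in fp16 to halve RAM, and upcast per-op to fp32 where CPU kernels need it. A small illustrative sketch (shapes are made up):

```python
import torch
import torch.nn.functional as F

lin = torch.nn.Linear(768, 768).to(torch.float16)  # fp16 storage: half the RAM
x = torch.randn(1, 768)  # fp32 activations

# Upcast the stored fp16 weights to fp32 just for this op, since fp16
# matmuls on CPU are slow or unsupported depending on the pytorch build.
y = F.linear(x, lin.weight.float(), lin.bias.float())
```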
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops.
12 months ago
comfyanonymous
412d3ff57d
Refactor.
1 year ago
comfyanonymous
00c0b2c507
Initialize text encoder to target dtype.
1 year ago
comfyanonymous
d6e4b342e6
Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
1 year ago
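In code, recovering a controlnet weight from a control-lora entry looks roughly like this (shapes and rank are made-up examples): the stored up (out x r) and down (r x in) matrices cost (out + in) * r values instead of out * in for the full weight.

```python
import torch

out_f, in_f, rank = 320, 320, 64
unet_w = torch.randn(out_f, in_f)  # weight shared with the base unet
up = torch.randn(out_f, rank)      # stored in the control lora file
down = torch.randn(rank, in_f)     # stored in the control lora file

# controlnet weight = unet weight + up @ down
controlnet_w = unet_w + up @ down

# Storage: (320 + 320) * 64 = 40,960 values vs 320 * 320 = 102,400 full.
```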
comfyanonymous
bb1f45d6e8
Properly disable weight initialization in clip models.
1 year ago
comfyanonymous
21f04fe632
Disable default weight values in unet conv2d for faster loading.
1 year ago
comfyanonymous
6971646b8b
Speed up model loading a bit.
The default pytorch Linear initializes its weights on construction, which is useless and slow here since the checkpoint load immediately overwrites them.
1 year ago
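One way to avoid that cost in plain pytorch (available since 1.10, offered as an illustration rather than as what this commit does) is torch.nn.utils.skip_init, which constructs the module without running reset_parameters:

```python
import torch

# Stock construction runs reset_parameters() (random init) on every layer:
slow = torch.nn.Linear(4096, 4096)

# skip_init materializes the module without initializing the weights,
# which is all you need when a checkpoint load overwrites them anyway.
fast = torch.nn.utils.skip_init(torch.nn.Linear, 4096, 4096)
```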