comfyanonymous
2b13939044
Remove some useless code.
1 year ago
comfyanonymous
95d796fc85
Faster VAE loading.
1 year ago
comfyanonymous
4b957a0010
Initialize the unet directly on the target device.
1 year ago
comfyanonymous
9ba440995a
It's actually possible to torch.compile the unet now.
1 year ago
comfyanonymous
3ded1a3a04
Refactor of sampler code to deal more easily with different model types.
1 year ago
comfyanonymous
ddc6f12ad5
Disable autocast in unet for increased speed.
1 year ago
comfyanonymous
103c487a89
Cleanup.
1 year ago
comfyanonymous
c71a7e6b20
Fix ddim + inpainting not working.
1 year ago
comfyanonymous
78d8035f73
Fix bug with controlnet.
1 year ago
comfyanonymous
05676942b7
Add some more transformer hooks and move tomesd to comfy_extras.
...
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
1 year ago
comfyanonymous
fa28d7334b
Remove useless code.
1 year ago
comfyanonymous
f87ec10a97
Support base SDXL and SDXL refiner models.
...
Large refactor of the model detection and loading code.
1 year ago
comfyanonymous
9fccf4aa03
Add original_shape parameter to transformer patch extra_options.
1 year ago
comfyanonymous
8883cb0f67
Add a way to set patches that modify the attn2 output.
...
Change the transformer patches function format to be more future proof.
1 year ago
comfyanonymous
45be2e92c1
Fix DDIM v-prediction.
1 year ago
comfyanonymous
ae43f09ef7
All the unet weights should now be initialized with the right dtype.
1 year ago
comfyanonymous
7bf89ba923
Initialize more unet weights in the right dtype.
1 year ago
comfyanonymous
e21d9ad445
Initialize transformer unet block weights in the right dtype at the start.
1 year ago
comfyanonymous
21f04fe632
Disable default weight values in unet conv2d for faster loading.
1 year ago
comfyanonymous
9d54066ebc
This isn't needed for inference.
1 year ago
comfyanonymous
6971646b8b
Speed up model loading a bit.
...
Default pytorch Linear initializes the weights, which is useless and slow.
1 year ago
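The speed-up described in the entry above comes from skipping PyTorch's default parameter initialization when the weights will immediately be overwritten by a checkpoint. A minimal, hypothetical sketch of that general technique (not ComfyUI's actual loader code; make_linear_fast is an illustrative name):

    import torch
    import torch.nn as nn
    from torch.nn.utils import skip_init

    # Hypothetical helper (not from the ComfyUI codebase): build a Linear layer
    # without running reset_parameters(), since a checkpoint will overwrite it.
    def make_linear_fast(in_features: int, out_features: int) -> nn.Linear:
        # skip_init constructs the module with uninitialized storage instead of
        # paying for the default random init on every layer.
        return skip_init(nn.Linear, in_features, out_features)

    layer = make_linear_fast(4096, 4096)
    # The real values then come from the checkpoint's state dict.
    layer.load_state_dict({"weight": torch.zeros(4096, 4096),
                           "bias": torch.zeros(4096)})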
comfyanonymous
274dff3257
Remove more useless files.
1 year ago
comfyanonymous
f0a2b81cd0
Cleanup: Remove a bunch of useless files.
1 year ago
comfyanonymous
b8636a44aa
Make scaled_dot_product switch to sliced attention on OOM.
2 years ago
comfyanonymous
797c4e8d3b
Simplify and improve some vae attention code.
2 years ago
BlenderNeko
d9e088ddfd
Minor changes for tiled sampler.
2 years ago
comfyanonymous
cb1551b819
Add lowvram mode for gligen and fix some lowvram issues.
2 years ago
comfyanonymous
bae4fb4a9d
Fix imports.
2 years ago
comfyanonymous
ba8a4c3667
Change latent resolution step to 8.
2 years ago
comfyanonymous
66c8aa5c3e
Make unet work with any input shape.
2 years ago
comfyanonymous
d3293c8339
Properly disable all progress bars when disable_pbar=True
2 years ago
comfyanonymous
5282f56434
Implement Linear hypernetworks.
...
Add a HypernetworkLoader node to use hypernetworks.
2 years ago
comfyanonymous
6908f9c949
This makes pytorch 2.0 attention perform a bit faster.
2 years ago
comfyanonymous
3696d1699a
Add support for GLIGEN textbox model.
2 years ago
comfyanonymous
73c3e11e83
Fix model_management import so it doesn't get executed twice.
2 years ago
EllangoK
e5e587b1c0
Separates out arg parser and imports args.
2 years ago
comfyanonymous
e46b1c3034
Disable xformers in VAE when xformers == 0.0.18
2 years ago
comfyanonymous
539ff487a8
Pull latest tomesd code from upstream.
2 years ago
comfyanonymous
809bcc8ceb
Add support for unCLIP SD2.x models.
...
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image conditioning and takes as input a
CLIPVisionEncode output, which has been moved to the conditioning section.
2 years ago
comfyanonymous
0d972b85e6
This seems to give better quality in tome.
2 years ago
comfyanonymous
18a6c1db33
Add a TomePatchModel node to the _for_testing section.
...
Tome increases sampling speed at the expense of quality.
2 years ago
comfyanonymous
61ec3c9d5d
Add a way to pass options to the transformers blocks.
2 years ago
comfyanonymous
f5365c9c81
Fix ddim for Mac: #264
2 years ago
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2 years ago
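The entry above describes an OOM fallback: attempt the regular full decode first and only fall back to the slower, tiled path when VRAM runs out. A hedged sketch of that pattern, with vae.decode and vae.decode_tiled standing in for whatever decode functions the VAE object exposes (not ComfyUI's exact API):

    import torch

    def decode_with_fallback(vae, latent_samples):
        # Try the regular single-pass decode first; it is faster when it fits.
        try:
            return vae.decode(latent_samples)
        except RuntimeError as e:
            # CUDA OOM surfaces as a RuntimeError containing "out of memory"
            # (newer PyTorch raises torch.cuda.OutOfMemoryError, a subclass).
            if "out of memory" not in str(e):
                raise
            torch.cuda.empty_cache()  # release the partially allocated buffers
            # Tiled decoding processes the latent in smaller patches,
            # trading speed (and some seam quality) for lower peak memory.
            return vae.decode_tiled(latent_samples)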
comfyanonymous
c692509c2b
Try to improve VAEEncode memory usage a bit.
2 years ago
comfyanonymous
54dbfaf2ec
Remove omegaconf dependency and make some CI changes.
2 years ago
comfyanonymous
83f23f82b8
Add pytorch attention support to VAE.
2 years ago
comfyanonymous
a256a2abde
--disable-xformers should not even try to import xformers.
2 years ago
comfyanonymous
0f3ba7482f
Xformers is now properly disabled when --cpu is used.
...
Added --windows-standalone-build option; currently it only makes the code open
ComfyUI in the browser.
2 years ago
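For the --windows-standalone-build behavior above, auto-opening the UI can be done with the standard library; a minimal sketch (the address, port, and function name are placeholders, not ComfyUI's exact variables):

    import webbrowser

    def open_ui_in_browser(address: str = "127.0.0.1", port: int = 8188):
        # Launch the default browser pointed at the local ComfyUI server.
        webbrowser.open(f"http://{address}:{port}")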
comfyanonymous
1de86851b1
Try to fix memory issue.
2 years ago