comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
9 months ago
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
9 months ago
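A rough idea of what the two mask fixes above address: when a single attention mask is supplied for a batch of sequences, it has to be repeated to the batch (and head) dimension before it is added to the attention scores. A minimal sketch in plain PyTorch; the function and tensor shapes are illustrative, not ComfyUI's actual implementation.

```python
import torch

def expand_attn_mask(mask, batch_size, heads):
    # Accept masks shaped (seq_q, seq_k), (1, seq_q, seq_k) or (batch, seq_q, seq_k)
    # and return one entry per (batch * heads) attention call.
    if mask.dim() == 2:
        mask = mask.unsqueeze(0)
    if mask.shape[0] != batch_size:
        # A single mask shared by the whole batch: repeat it explicitly instead
        # of relying on broadcasting against the wrong dimension.
        mask = mask.repeat(batch_size // mask.shape[0], 1, 1)
    # One copy per attention head.
    return mask.repeat_interleave(heads, dim=0)  # (batch * heads, seq_q, seq_k)
```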
comfyanonymous
5b40e7a5ed
Implement shift schedule for cascade stage C.
9 months ago
comfyanonymous
929e266f3e
Manual cast for bf16 on older GPUs.
9 months ago
comfyanonymous
6c875d846b
Fix clip attention mask issues on some hardware.
9 months ago
comfyanonymous
805c36ac9c
Make Stable Cascade work on old PyTorch 2.0.
9 months ago
comfyanonymous
f2d1d16f4f
Support Stable Cascade Stage B lite.
9 months ago
comfyanonymous
0b3c50480c
Make --force-fp32 disable loading models in bf16.
9 months ago
comfyanonymous
97d03ae04a
StableCascade CLIP model support.
9 months ago
comfyanonymous
667c92814e
Stable Cascade Stage B.
9 months ago
comfyanonymous
f83109f09b
Stable Cascade Stage C.
9 months ago
comfyanonymous
5e06baf112
Stable Cascade Stage A.
9 months ago
comfyanonymous
aeaeca10bd
Small refactor of is_device_* functions.
9 months ago
comfyanonymous
38b7ac6e26
Don't init the CLIP model when the checkpoint has no CLIP weights.
9 months ago
Jedrzej Kosinski
f44225fd5f
Fix possible infinite while loop in ddim_scheduler
9 months ago
comfyanonymous
25a4805e51
Add a way to set different conditioning for the controlnet.
9 months ago
blepping
a352c021ec
Allow custom samplers to request discarding the penultimate sigma
9 months ago
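For context on the "discard penultimate sigma" option: some samplers behave better when the second-to-last sigma is dropped from the schedule. A hedged sketch of the operation itself; how custom samplers actually signal the request to ComfyUI is not shown here.

```python
import torch

def discard_penultimate_sigma(sigmas):
    # Drop the second-to-last value, keeping the final sigma (usually 0.0)
    # so the schedule still ends at the same point.
    if len(sigmas) < 3:
        return sigmas
    return torch.cat((sigmas[:-2], sigmas[-1:]))

# Example: tensor([14.6, 7.1, 3.2, 1.1, 0.0]) -> tensor([14.6, 7.1, 3.2, 0.0])
```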
comfyanonymous
c661a8b118
Don't use numpy for calculating sigmas.
9 months ago
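On "Don't use numpy for calculating sigmas": schedules such as the Karras one can be computed directly in torch, avoiding a round trip through numpy arrays. A sketch of the standard Karras formula under the usual definitions (rho, sigma_min, sigma_max); the parameter defaults are illustrative.

```python
import torch

def karras_sigmas(n, sigma_min=0.0292, sigma_max=14.6146, rho=7.0):
    # Karras et al. (2022) schedule, computed entirely in torch.
    ramp = torch.linspace(0, 1, n)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
    return torch.cat([sigmas, sigmas.new_zeros(1)])  # append the final 0.0
```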
comfyanonymous
236bda2683
Make minimum tile size the size of the overlap.
10 months ago
comfyanonymous
66e28ef45c
Don't use is_bf16_supported to check for fp16 support.
10 months ago
comfyanonymous
24129d78e6
Speed up SDXL on 16xx series with fp16 weights and manual cast.
10 months ago
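The "manual cast" technique mentioned here (and in the bf16 commit above) boils down to storing weights in a compact dtype and casting them to the compute dtype on the fly, so GPUs without fast fp16/bf16 kernels (like the GTX 16xx series) still run correctly. A minimal sketch, not ComfyUI's actual ops module:

```python
import torch
import torch.nn.functional as F

class ManualCastLinear(torch.nn.Linear):
    # Weights stay in the storage dtype (e.g. fp16) to save memory; each forward
    # casts them to the dtype of the incoming activations before the matmul.
    def forward(self, x):
        weight = self.weight.to(dtype=x.dtype, device=x.device)
        bias = self.bias.to(dtype=x.dtype, device=x.device) if self.bias is not None else None
        return F.linear(x, weight, bias)
```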
comfyanonymous
4b0239066d
Always use fp16 for the text encoders.
10 months ago
comfyanonymous
da7a8df0d2
Put VAE key name in model config.
10 months ago
comfyanonymous
89507f8adf
Remove some unused imports.
10 months ago
Dr.Lt.Data
05cd00695a
typo fix - calculate_sigmas_scheduler (#2619)
...
self.scheduler -> scheduler_name
Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
10 months ago
comfyanonymous
4871a36458
Cleanup some unused imports.
10 months ago
comfyanonymous
78a70fda87
Remove useless import.
10 months ago
comfyanonymous
d76a04b6ea
Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.
...
This node is unfinished; SVD checkpoints saved with it will work with ComfyUI but not with anything else.
10 months ago
comfyanonymous
f9e55d8463
Only auto enable bf16 VAE on nvidia GPUs that actually support it.
10 months ago
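A rough equivalent of the capability check implied by this commit (the exact heuristic in ComfyUI's model management code may differ): native bf16 support on NVIDIA corresponds to Ampere (compute capability 8.0) or newer, and torch exposes a helper for it.

```python
import torch

def should_use_bf16_vae():
    # Illustrative heuristic only: prefer a bf16 VAE when the GPU actually has
    # native bf16 support (Ampere / compute capability >= 8.0).
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability()
    return major >= 8 and torch.cuda.is_bf16_supported()
```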
comfyanonymous
2395ae740a
Make unclip more deterministic.
...
Pass a seed argument; note that this might make old unclip images different.
10 months ago
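The determinism fix comes down to generating the unCLIP noise augmentation from an explicit seed rather than from global RNG state, which is why previously generated images may change. A generic illustration of the pattern using a seeded torch.Generator; this is not the actual noise-augmentation code.

```python
import torch

def seeded_noise_like(tensor, seed):
    # Deterministic noise: the same seed always yields the same tensor,
    # regardless of what the global RNG has been used for earlier.
    generator = torch.Generator(device="cpu").manual_seed(seed)
    noise = torch.randn(tensor.shape, generator=generator, dtype=tensor.dtype)
    return noise.to(tensor.device)
```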
comfyanonymous
53c8a99e6c
Make server storage the default.
...
Remove --server-storage argument.
10 months ago
comfyanonymous
977eda19a6
Don't round noise mask.
10 months ago
comfyanonymous
10f2609fdd
Add InpaintModelConditioning node.
...
This is an alternative to VAE Encode for inpaint that should work with lower denoise.
This is a different take on #2501.
10 months ago
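Conceptually, the node keeps the encoded source image and the mask alongside the positive/negative conditioning instead of only zeroing out masked latents, which is why it tolerates lower denoise values. A very rough sketch of the idea; the conditioning list layout and dictionary keys below are assumptions about ComfyUI's internals, not a verified API.

```python
def add_inpaint_conditioning(conditioning, concat_latent, mask):
    # conditioning is assumed to be a list of (cond_tensor, options_dict) pairs.
    # The encoded source image and the mask are attached so the model can see
    # the unmasked content during sampling (hypothetical key names).
    out = []
    for cond_tensor, options in conditioning:
        options = dict(options)
        options["concat_latent_image"] = concat_latent
        options["concat_mask"] = mask
        out.append((cond_tensor, options))
    return out
```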
comfyanonymous
1a57423d30
Fix issue when using multiple t2i adapters with batched images.
10 months ago
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
10 months ago
pythongosssss
235727fed7
Store user settings/data on the server and add multi-user support (#2160)
...
* wip per user data
* Rename, hide menu
* better error
rework default user
* store pretty
* Add userdata endpoints
Change nodetemplates to userdata
* add multi user message
* make normal arg
* Fix tests
* Ignore user dir
* user tests
* Changed to default to browser storage and add server-storage arg
* fix crash on empty templates
* fix settings added before load
* ignore parse errors
10 months ago
comfyanonymous
c6951548cf
Update optimized_attention_for_device function for new functions that support masked attention.
11 months ago
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
11 months ago
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
11 months ago
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
11 months ago
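The four mask-related commits above all add the same capability to different attention backends: an optional additive mask applied to the attention logits before the softmax. A backend-agnostic sketch in plain PyTorch for reference:

```python
import torch

def attention_with_mask(q, k, v, mask=None):
    # q, k, v: (batch * heads, seq, head_dim).
    # mask, if provided, is additive: masked positions hold large negative values.
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale
    if mask is not None:
        scores = scores + mask
    return torch.softmax(scores, dim=-1) @ v
```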
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
11 months ago
comfyanonymous
ef4f6037cb
Fix model patches not working in custom sampling scheduler nodes.
11 months ago
comfyanonymous
a7874d1a8b
Add support for the stable diffusion x4 upscaling model.
...
This is an old model.
Load the checkpoint like a regular one and use the new SD_4XUpscale_Conditioning node.
11 months ago
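For the two x4-upscale commits above, "noise augmentation" means adding a controlled amount of noise to the low-resolution conditioning image and telling the model how much was added. A schematic sketch only; the actual SD_4XUpscale_Conditioning node and the model's embedding of the augmentation level are not reproduced here.

```python
import torch

def augment_low_res(image, noise_level, seed=0):
    # noise_level in [0, 1]: 0 keeps the conditioning image clean, higher
    # values blend in more Gaussian noise before it is fed to the model.
    generator = torch.Generator(device="cpu").manual_seed(seed)
    noise = torch.randn(image.shape, generator=generator, dtype=image.dtype)
    noisy = image * (1.0 - noise_level) + noise.to(image.device) * noise_level
    return noisy, noise_level  # the level is also passed to the model as conditioning
```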
comfyanonymous
2c4e92a98b
Fix regression.
11 months ago
comfyanonymous
5eddfdd80c
Refactor VAE code.
...
Replace constants with downscale_ratio and latent_channels.
11 months ago
comfyanonymous
a47f609f90
Auto detect out_channels from model.
11 months ago
comfyanonymous
79f73a4b33
Remove useless code.
11 months ago
comfyanonymous
1b103e0cb2
Add argument to run the VAE on the CPU.
11 months ago
comfyanonymous
12e822c6c8
Use function to calculate model size in model patcher.
11 months ago
comfyanonymous
e1e322cf69
Load weights that can't be lowvramed to target device.
11 months ago