comfyanonymous | ec96f6d03a | Move text_projection to the base CLIP model. | 1 year ago
comfyanonymous | e3d0a9a490 | Fix potential issue with text projection matrix multiplication. | 1 year ago
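The projection in e3d0a9a490 is a single matrix multiply applied to the pooled output. A minimal sketch of the operation, with illustrative names and sizes (not ComfyUI's actual attributes):

```python
import torch

# Sketch of a CLIP-style text projection. The pooled hidden state is
# projected into the shared image/text embedding space with a learned matrix.
hidden_dim, proj_dim = 1024, 1280
pooled = torch.randn(2, hidden_dim)                  # pooled EOS hidden states
text_projection = torch.randn(hidden_dim, proj_dim)  # learned projection weights

# A plain matrix multiplication; the usual pitfalls are applying the matrix
# transposed or in a mismatched dtype.
projected = pooled.to(text_projection.dtype) @ text_projection
print(projected.shape)  # torch.Size([2, 1280])
```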
comfyanonymous | 00c0b2c507 | Initialize text encoder to target dtype. | 1 year ago
comfyanonymous | f081017c1a | Save memory by storing text encoder weights in fp16 in most situations; do inference in fp32 to make sure quality stays exactly the same. | 1 year ago
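A minimal sketch of f081017c1a's store-low/compute-high idea, using a stand-in linear layer rather than the real text encoder: weights live in fp16 to halve memory, and each forward pass upcasts to fp32 so the math matches a full-precision run.

```python
import torch

# Stand-in for the text encoder: weights stored in fp16.
encoder = torch.nn.Linear(768, 768).half()

def encode_fp32(x: torch.Tensor) -> torch.Tensor:
    # Upcast weights and input so the actual computation runs in fp32.
    w = encoder.weight.float()
    b = encoder.bias.float()
    return torch.nn.functional.linear(x.float(), w, b)

out = encode_fp32(torch.randn(1, 768))
print(out.dtype)  # torch.float32
```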
comfyanonymous | c99d8002f8 | Make sure the pooled output stays at the EOS token with added embeddings. | 1 year ago
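For context on c99d8002f8: CLIP takes its pooled output from the hidden state at the EOS token. A sketch of EOS-position pooling with made-up token ids; when extra embedding vectors shift the sequence, the pooling index has to be derived from the final token layout rather than a fixed position.

```python
import torch

eos_token_id = 49407
# BOS, "a", "photo", EOS, padding... (ids are illustrative)
tokens = torch.tensor([[49406, 320, 1125, 49407, 0, 0]])
hidden = torch.randn(1, 6, 768)  # per-token hidden states

# Locate the first EOS in each sequence and pool there.
eos_pos = (tokens == eos_token_id).int().argmax(dim=-1)
pooled = hidden[torch.arange(hidden.shape[0]), eos_pos]
print(pooled.shape)  # torch.Size([1, 768])
```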
comfyanonymous | 50b1180dde | Fix CLIPSetLastLayer not reverting when removed. | 1 year ago
comfyanonymous | 46dc050c9f | Fix potential issues with tensors being on different devices. | 1 year ago
comfyanonymous | 606a537090 | Support the SDXL embedding format with 2 CLIP models. | 1 year ago
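SDXL runs two text encoders, so an SDXL textual-inversion file carries one tensor per encoder. A sketch of reading such a file; the "clip_l"/"clip_g" key names follow the common community format and are an assumption here.

```python
import torch

# Illustrative file contents: one set of vectors per encoder.
embed = {
    "clip_l": torch.randn(4, 768),   # vectors for the CLIP-L encoder
    "clip_g": torch.randn(4, 1280),  # vectors for the OpenCLIP-G encoder
}

def split_sdxl_embedding(data: dict) -> tuple[torch.Tensor, torch.Tensor]:
    # Each tokenizer gets the vectors matching its encoder's hidden size.
    if "clip_l" in data and "clip_g" in data:
        return data["clip_l"], data["clip_g"]
    raise ValueError("not an SDXL two-encoder embedding")

l_vecs, g_vecs = split_sdxl_embedding(embed)
print(l_vecs.shape, g_vecs.shape)
```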
comfyanonymous | 608fcc2591 | Fix a bug with weights when the prompt is long. | 1 year ago
comfyanonymous | ce35d8c659 | Lower latency by batching some text encoder inputs. | 1 year ago
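A sketch of the batching idea behind ce35d8c659, with an embedding layer standing in for the real text model: stack the 77-token chunks of a long prompt and run one forward pass instead of several sequential ones.

```python
import torch

encoder = torch.nn.Embedding(49408, 768)  # stand-in for the text encoder

# Three 77-token chunks of one long prompt.
chunks = [torch.randint(0, 49408, (77,)) for _ in range(3)]

# One batched call instead of three sequential ones.
batch = torch.stack(chunks)     # shape (3, 77)
out = encoder(batch)            # shape (3, 77, 768)
cond = out.reshape(1, -1, 768)  # re-join chunks along the sequence axis
print(cond.shape)  # torch.Size([1, 231, 768])
```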
comfyanonymous | b6a60fa696 | Try to keep text encoders loaded and patched to increase speed; load_model_gpu() is now used with the text encoder models instead of just the UNet. | 1 year ago
comfyanonymous | 97ee230682 | Make highvram and normalvram shift the text encoders to VRAM and back; this is faster for big text encoder models than running them on the CPU. | 1 year ago
comfyanonymous | 9920367d3c | Fix embeddings not working with --gpu-only. | 1 year ago
comfyanonymous | 20f579d91d | Add DualClipLoader to load CLIP models for SDXL; update LoadClip to load CLIP models for the SDXL refiner. | 1 year ago
comfyanonymous | f87ec10a97 | Support base SDXL and SDXL refiner models; large refactor of the model detection and loading code. | 1 year ago
comfyanonymous | f7edcfd927 | Add a --gpu-only argument to keep and run everything on the GPU; make the CLIP model work on the GPU. | 1 year ago
comfyanonymous | bb1f45d6e8 | Properly disable weight initialization in CLIP models. | 1 year ago
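A sketch of one way to skip default weight initialization, assuming (as bb1f45d6e8's message suggests) that a checkpoint will overwrite the weights anyway. The meta-device trick shown here is a generic PyTorch pattern, not necessarily the approach the commit took.

```python
import torch

# Building on the "meta" device allocates no storage and runs no init;
# real tensors are materialized only when the state dict is loaded.
with torch.device("meta"):
    model = torch.nn.Linear(768, 768)  # no memory allocated, no init run

# Materialize empty (uninitialized) storage, then load the real weights.
model = model.to_empty(device="cpu")
state = {"weight": torch.randn(768, 768), "bias": torch.zeros(768)}
model.load_state_dict(state)
```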
comfyanonymous | 0c7cad404c | Don't initialize CLIP weights to default values. | 1 year ago
comfyanonymous | 23cf8ca7c5 | Fix a bug where an embedding gets ignored because of a mismatched size. | 1 year ago
comfyanonymous | af9cc1fb6a | Search recursively in subfolders for embeddings. | 2 years ago
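A sketch of a recursive embedding scan in the spirit of af9cc1fb6a; the helper name and extension list are illustrative.

```python
import os

def find_embeddings(root: str) -> dict[str, str]:
    # Walk the embeddings directory and collect files in any subfolder.
    found = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith((".pt", ".safetensors", ".bin")):
                # Key by path relative to the root so subfolder
                # embeddings can be referenced in prompts.
                rel = os.path.relpath(os.path.join(dirpath, name), root)
                found[os.path.splitext(rel)[0]] = os.path.join(dirpath, name)
    return found
```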
comfyanonymous | 81d1f00df3 | Some refactoring: from_tokens -> encode_from_tokens | 2 years ago
BlenderNeko | d0b1b6c6bf | Fix improper padding. | 2 years ago
comfyanonymous | 04d9bc13af | Safely load pickled embeds that don't load with weights_only=True. | 2 years ago
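A sketch of the guarded fallback described in 04d9bc13af: try the safe weights_only path first, and only fall back to a restricted unpickler whose allow-list (illustrative here, not ComfyUI's actual one) blocks arbitrary globals.

```python
import pickle
import torch

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Only resolve globals needed to reconstruct tensors.
        if module.startswith("torch") or (module, name) == ("collections", "OrderedDict"):
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

class RestrictedPickle:
    # torch.load duck-types the pickle module: it only needs
    # Unpickler and load attributes.
    Unpickler = RestrictedUnpickler
    load = staticmethod(lambda f, **kw: RestrictedUnpickler(f, **kw).load())

def load_embed(path: str):
    try:
        return torch.load(path, map_location="cpu", weights_only=True)
    except Exception:
        # Legacy pickled embed: retry with the restricted unpickler.
        return torch.load(path, map_location="cpu",
                          pickle_module=RestrictedPickle, weights_only=False)
```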
BlenderNeko | da115bd78d | Ensure backwards compatibility with optional args. | 2 years ago
BlenderNeko | 752f7a162b | Align behavior with the old tokenize function. | 2 years ago
comfyanonymous | 334aab05e5 | Don't stop the workflow if loading an embedding fails. | 2 years ago
BlenderNeko | 8489cba140 | Add a unique ID per word/embedding for the tokenizer. | 2 years ago
comfyanonymous | 1718730e80 | Ignore embeddings when sizes don't match and print a WARNING. | 2 years ago
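A sketch of the size guard from 1718730e80, with a hypothetical helper name: skip an embedding whose vector width doesn't match the model and warn instead of crashing the prompt encoding.

```python
import logging
import torch

def validate_embedding(name: str, vectors: torch.Tensor, expected_dim: int):
    # An embedding trained for a different model has the wrong width.
    if vectors.shape[-1] != expected_dim:
        logging.warning(
            "WARNING: embedding %s has dim %d but the model expects %d; ignoring it.",
            name, vectors.shape[-1], expected_dim,
        )
        return None
    return vectors
```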
comfyanonymous | 50099bcd96 | Support multiple paths for embeddings. | 2 years ago
comfyanonymous | 00a9189e30 | Support old PyTorch. | 2 years ago
comfyanonymous | 137ae2606c | Support people putting commas after the embedding name in the prompt. | 2 years ago
comfyanonymous | 324273fff2 | Fix embeddings not working when on a new line. | 2 years ago
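The two parsing fixes above (137ae2606c and 324273fff2) amount to tolerating trailing commas and newlines around embedding references in the prompt. A sketch with an illustrative regex, using the embedding: prompt syntax:

```python
import re

# Stop the name at whitespace or a comma, so "embedding:name," and
# names at the start of a new line both match.
EMBED_RE = re.compile(r"embedding:([^\s,]+)")

prompt = "a photo of embedding:my_style,\nembedding:other_style detailed"
names = EMBED_RE.findall(prompt)
print(names)  # ['my_style', 'other_style']
```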
comfyanonymous | f73e57d881 | Add support for textual inversion embeddings for SD1.x CLIP. | 2 years ago
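A simplified sketch of what textual-inversion support involves: splicing the learned vectors into the token-embedding sequence in place of the placeholder token before the transformer runs (shapes and positions here are made up).

```python
import torch

token_embeds = torch.randn(1, 77, 768)  # embedded prompt tokens
learned = torch.randn(3, 768)           # a 3-vector textual inversion
slot = 5                                # position of the placeholder token

# Replace the placeholder with the learned vectors, then re-truncate
# to the encoder's maximum sequence length.
spliced = torch.cat([
    token_embeds[:, :slot],
    learned.unsqueeze(0),
    token_embeds[:, slot + 1:],
], dim=1)[:, :77]
print(spliced.shape)  # torch.Size([1, 77, 768])
```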
comfyanonymous | 220afe3310 | Initial commit. | 2 years ago