55 Commits (11838e60f4aac495c6f0aed09415f3e8e0d2a402)

Author SHA1 Message Date
comfyanonymous 65397ce601 Replace prints with logging and add --verbose argument. 8 months ago
comfyanonymous 03c47fc0f2 Add a min_length property to tokenizer class. 9 months ago
comfyanonymous 8ac69f62e5 Make return_projected_pooled settable from the __init__ 9 months ago
comfyanonymous c2cb8e889b Always return unprojected pooled output for gligen. 9 months ago
comfyanonymous 1cb3f6a83b Move text projection into the CLIP model code. 9 months ago
comfyanonymous 97d03ae04a Stable Cascade CLIP model support. 9 months ago
comfyanonymous 4871a36458 Cleanup some unused imports. 10 months ago
comfyanonymous 57926635e8 Switch text encoder to manual cast. 11 months ago
comfyanonymous 9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of CPU memory. 12 months ago
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation. 12 months ago
comfyanonymous be3468ddd5 Less useless downcasting. 12 months ago
comfyanonymous 728613bb3e Fix last PR. 1 year ago
Jianqi Pan f2e49b1d57 fix: adaptation to older versions of PyTorch 1 year ago
comfyanonymous 656c0b5d90 CLIP code refactor and improvements. 1 year ago
comfyanonymous b3fcd64c6c Make SDTokenizer class work with more types of tokenizers. 1 year ago
comfyanonymous 2a134bfab9 Fix checkpoint loader with config. 1 year ago
comfyanonymous e60ca6929a SD1 and SD2 CLIP and tokenizer code is now more similar to the SDXL one. 1 year ago
comfyanonymous 434ce25ec0 Restrict loading embeddings from embedding folders. 1 year ago
comfyanonymous 44361f6344 Support for text encoder models that need attention_mask. 1 year ago
comfyanonymous fb3b728203 Fix issue where autocast fp32 CLIP gave different results from regular. 1 year ago
comfyanonymous ec96f6d03a Move text_projection to base clip model. 1 year ago
comfyanonymous e3d0a9a490 Fix potential issue with text projection matrix multiplication. 1 year ago
comfyanonymous 00c0b2c507 Initialize text encoder to target dtype. 1 year ago
comfyanonymous f081017c1a Save memory by storing text encoder weights in fp16 in most situations. 1 year ago
comfyanonymous c99d8002f8 Make sure the pooled output stays at the EOS token with added embeddings. 1 year ago
comfyanonymous 50b1180dde Fix CLIPSetLastLayer not reverting when removed. 1 year ago
comfyanonymous 46dc050c9f Fix potential issues with tensors being on different devices. 1 year ago
comfyanonymous 606a537090 Support SDXL embedding format with 2 CLIP models. 1 year ago
comfyanonymous 608fcc2591 Fix bug with weights when prompt is long. 1 year ago
comfyanonymous ce35d8c659 Lower latency by batching some text encoder inputs. 1 year ago
comfyanonymous b6a60fa696 Try to keep text encoders loaded and patched to increase speed. 1 year ago
comfyanonymous 97ee230682 Make highvram and normalvram shift the text encoders to VRAM and back. 1 year ago
comfyanonymous 9920367d3c Fix embeddings not working with --gpu-only 1 year ago
comfyanonymous 20f579d91d Add DualClipLoader to load CLIP models for SDXL. 1 year ago
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models. 1 year ago
comfyanonymous f7edcfd927 Add a --gpu-only argument to keep and run everything on the GPU. 1 year ago
comfyanonymous bb1f45d6e8 Properly disable weight initialization in CLIP models. 1 year ago
comfyanonymous 0c7cad404c Don't initialize CLIP weights to default values. 1 year ago
comfyanonymous 23cf8ca7c5 Fix bug when embedding gets ignored because of mismatched size. 1 year ago
comfyanonymous af9cc1fb6a Search recursively in subfolders for embeddings. 2 years ago
comfyanonymous 81d1f00df3 Some refactoring: from_tokens -> encode_from_tokens 2 years ago
BlenderNeko d0b1b6c6bf fixed improper padding 2 years ago
comfyanonymous 04d9bc13af Safely load pickled embeds that don't load with weights_only=True. 2 years ago
BlenderNeko da115bd78d ensure backwards compat with optional args 2 years ago
BlenderNeko 752f7a162b align behavior with old tokenize function 2 years ago
comfyanonymous 334aab05e5 Don't stop workflow if loading embedding fails. 2 years ago
BlenderNeko 8489cba140 add unique ID per word/embedding for tokenizer 2 years ago
comfyanonymous 1718730e80 Ignore embeddings when sizes don't match and print a WARNING. 2 years ago
comfyanonymous 50099bcd96 Support multiple paths for embeddings. 2 years ago
comfyanonymous 00a9189e30 Support old PyTorch. 2 years ago