750 Commits (4b9005e949224782236a8b914eae48bc503f1f18)

| Author | SHA1 | Message | Date |
|--------|------|---------|------|
| comfyanonymous | 844dbf97a7 | Add: advanced->model->ModelSamplingDiscrete node. | 1 year ago |
| comfyanonymous | 656c0b5d90 | CLIP code refactor and improvements. | 1 year ago |
| comfyanonymous | b3fcd64c6c | Make SDTokenizer class work with more types of tokenizers. | 1 year ago |
| gameltb | 7e455adc07 | Fix unet_wrapper_function name in ModelPatcher. | 1 year ago |
| comfyanonymous | 1ffa8858e7 | Move model sampling code to comfy/model_sampling.py. | 1 year ago |
| comfyanonymous | ae2acfc21b | Don't convert NaN to zero. | 1 year ago |
| comfyanonymous | d2e27b48f1 | sampler_cfg_function now gets the noisy output as an argument again. | 1 year ago |
| comfyanonymous | 2455aaed8a | Allow model or clip to be None in load_lora_for_models. | 1 year ago |
| comfyanonymous | ecb80abb58 | Allow ModelSamplingDiscrete to be instantiated without a model config. | 1 year ago |
| comfyanonymous | e73ec8c4da | Not used anymore. | 1 year ago |
| comfyanonymous | 111f1b5255 | Fix some issues with sampling precision. | 1 year ago |
| comfyanonymous | 7c0f255de1 | Clean up percent start/end and make controlnets work with sigmas. | 1 year ago |
| comfyanonymous | a268a574fa | Remove a bunch of useless code. | 1 year ago |
| comfyanonymous | 1777b54d02 | Sampling code changes. | 1 year ago |
| comfyanonymous | c837a173fa | Fix some memory issues in sub quad attention. | 1 year ago |
| comfyanonymous | 125b03eead | Fix some OOM issues with split attention. | 1 year ago |
| comfyanonymous | a12cc05323 | Add --max-upload-size argument; the default is 100MB. | 1 year ago |
| comfyanonymous | 2a134bfab9 | Fix checkpoint loader with config. | 1 year ago |
| comfyanonymous | e60ca6929a | SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one. | 1 year ago |
| comfyanonymous | 6ec3f12c6e | Support SSD1B model and make it easier to support asymmetric unets. | 1 year ago |
| comfyanonymous | 434ce25ec0 | Restrict loading embeddings from embedding folders. | 1 year ago |
| comfyanonymous | 723847f6b3 | Faster clip image processing. | 1 year ago |
| comfyanonymous | a373367b0c | Fix some OOM issues with split and sub quad attention. | 1 year ago |
| comfyanonymous | 7fbb217d3a | Fix uni_pc returning a noisy image when steps <= 3. | 1 year ago |
| Jedrzej Kosinski | 3783cb8bfd | Change 'c_adm' to 'y' in ControlNet.get_control. | 1 year ago |
| comfyanonymous | d1d2fea806 | Pass extra conds directly to the unet. | 1 year ago |
| comfyanonymous | 036f88c621 | Refactor to make it easier to add custom conds to models. | 1 year ago |
| comfyanonymous | 3fce8881ca | Sampling code refactor to make it easier to add more conds. | 1 year ago |
| comfyanonymous | 8594c8be4d | Empty the cache when the torch cache exceeds 25% of free memory. | 1 year ago |
| comfyanonymous | 8b65f5de54 | attention_basic now works with hypertile. | 1 year ago |
| comfyanonymous | e6bc42df46 | Make sub_quad and split work with hypertile. | 1 year ago |
| comfyanonymous | a0690f9df9 | Fix t2i adapter issue. | 1 year ago |
| comfyanonymous | 9906e3efe3 | Make xformers work with hypertile. | 1 year ago |
| comfyanonymous | 4185324a1d | Fix uni_pc sampler math. This changes the images this sampler produces. | 1 year ago |
| comfyanonymous | e6962120c6 | Make sure cond_concat is on the right device. | 1 year ago |
| comfyanonymous | 45c972aba8 | Refactor cond_concat into conditioning. | 1 year ago |
| comfyanonymous | 430a8334c5 | Fix some potential issues. | 1 year ago |
| comfyanonymous | 782a24fce6 | Refactor cond_concat into model object. | 1 year ago |
| comfyanonymous | 0d45a565da | Fix memory issue related to control loras. | 1 year ago |
| comfyanonymous | d44a2de49f | Make VAE code closer to sgm. | 1 year ago |
| comfyanonymous | 23680a9155 | Refactor the attention stuff in the VAE. | 1 year ago |
| comfyanonymous | c8013f73e5 | Add some Quadro cards to the list of cards with broken fp16. | 1 year ago |
| comfyanonymous | bb064c9796 | Add a separate optimized_attention_masked function. | 1 year ago |
| comfyanonymous | fd4c5f07e7 | Add a --bf16-unet flag to test running the unet in bf16. | 1 year ago |
| comfyanonymous | 9a55dadb4c | Refactor code so the model can be a dtype other than fp32 or fp16. | 1 year ago |
| comfyanonymous | 88733c997f | pytorch_attention_enabled can now return True when xformers is enabled. | 1 year ago |
| comfyanonymous | 20d3852aa1 | Pull some small changes from the other repo. | 1 year ago |
| comfyanonymous | ac7d8cfa87 | Allow attn_mask in attention_pytorch. | 1 year ago |
| comfyanonymous | 1a4bd9e9a6 | Refactor the attention functions. | 1 year ago |
| comfyanonymous | 8cc75c64ff | Let unet wrapper functions have .to attributes. | 1 year ago |