670 Commits (4871a36458e7cd4af1a7f46dd6738c406e831413)

Author SHA1 Message Date
comfyanonymous 8b65f5de54 attention_basic now works with hypertile. 1 year ago
comfyanonymous e6bc42df46 Make sub_quad and split work with hypertile. 1 year ago
comfyanonymous a0690f9df9 Fix t2i adapter issue. 1 year ago
comfyanonymous 9906e3efe3 Make xformers work with hypertile. 1 year ago
comfyanonymous 4185324a1d Fix uni_pc sampler math. This changes the images this sampler produces. 1 year ago
comfyanonymous e6962120c6 Make sure cond_concat is on the right device. 1 year ago
comfyanonymous 45c972aba8 Refactor cond_concat into conditioning. 1 year ago
comfyanonymous 430a8334c5 Fix some potential issues. 1 year ago
comfyanonymous 782a24fce6 Refactor cond_concat into model object. 1 year ago
comfyanonymous 0d45a565da Fix memory issue related to control loras. 1 year ago
comfyanonymous d44a2de49f Make VAE code closer to sgm. 1 year ago
comfyanonymous 23680a9155 Refactor the attention stuff in the VAE. 1 year ago
comfyanonymous c8013f73e5 Add some Quadro cards to the list of cards with broken fp16. 1 year ago
comfyanonymous bb064c9796 Add a separate optimized_attention_masked function. 1 year ago
comfyanonymous fd4c5f07e7 Add a --bf16-unet flag to test running the unet in bf16. 1 year ago
comfyanonymous 9a55dadb4c Refactor code so model can be a dtype other than fp32 or fp16. 1 year ago
comfyanonymous 88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 1 year ago
comfyanonymous 20d3852aa1 Pull some small changes from the other repo. 1 year ago
comfyanonymous ac7d8cfa87 Allow attn_mask in attention_pytorch (sketched after this list). 1 year ago
comfyanonymous 1a4bd9e9a6 Refactor the attention functions. 1 year ago
comfyanonymous 8cc75c64ff Let unet wrapper functions have .to attributes. 1 year ago
comfyanonymous 5e885bd9c8 Cleanup. 1 year ago
Yukimasa Funaoka 9eb621c95a Supports TAESD models in safetensors format 1 year ago
comfyanonymous 72188dffc3 load_checkpoint_guess_config can now optionally output the model. 1 year ago
Jairo Correa 63e5fd1790 Option to set the input directory 1 year ago
City 9bfec2bdbf Fix quality loss due to low precision 1 year ago
badayvedat 0f17993d05 fix: typo in extra sampler 1 year ago
comfyanonymous 66756de100 Add SamplerDPMPP_2M_SDE node. 1 year ago
comfyanonymous 71713888c4 Print missing VAE keys. 1 year ago
comfyanonymous d234ca558a Add missing samplers to KSamplerSelect. 1 year ago
comfyanonymous 1adcc4c3a2 Add a SamplerCustom Node. 1 year ago
comfyanonymous bf3fc2f1b7 Refactor sampling related code. 1 year ago
comfyanonymous fff491b032 Model patches can now know which batch is positive and negative. 1 year ago
comfyanonymous 1d6dd83184 Scheduler code refactor. 1 year ago
comfyanonymous 446caf711c Sampling code refactor. 1 year ago
comfyanonymous 76cdc809bf Support more controlnet models. 1 year ago
Simon Lui eec449ca8e Allow Intel GPUs to do LoRA casting on GPU since they support BF16 natively. 1 year ago
comfyanonymous afa2399f79 Add a way to set output block patches to modify the h and hsp. 1 year ago
comfyanonymous 492db2de8d Allow having a different pooled output for each image in a batch. 1 year ago
comfyanonymous 1cdfb3dba4 Only do the cast on the device if the device supports it. 1 year ago
comfyanonymous 7c9a92f552 Don't depend on torchvision. 1 year ago
MoonRide303 2b6b178173 Added support for lanczos scaling (sketched after this list) 1 year ago
comfyanonymous b92bf8196e Do lora cast on GPU instead of CPU for higher performance. 1 year ago
comfyanonymous 321c5fa295 Enable pytorch attention by default on xpu. 1 year ago
comfyanonymous 61b1f67734 Support models without previews. 1 year ago
comfyanonymous 43d4935a1d Add cond_or_uncond array to transformer_options so hooks can check what is… 1 year ago
comfyanonymous 415abb275f Add DDPM sampler. 1 year ago
comfyanonymous 94e4fe39d8 This isn't used anywhere. 1 year ago
comfyanonymous 44361f6344 Support for text encoder models that need attention_mask. 1 year ago
comfyanonymous 0d8f376446 Setting the last layer on SD2.x models now uses the proper indexes. 1 year ago
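
Editor's note on the attn_mask entries above (ac7d8cfa87, bb064c9796): the pattern is to thread an optional mask through the PyTorch attention path. Below is a minimal sketch assuming PyTorch 2.x's torch.nn.functional.scaled_dot_product_attention; the function name echoes the commit messages, but the shapes and internals are illustrative assumptions, not ComfyUI's actual implementation.

```python
import torch
import torch.nn.functional as F

def attention_pytorch(q, k, v, heads, attn_mask=None):
    # Assumed layout: q is (batch, q_len, heads * dim_head);
    # k and v are (batch, kv_len, heads * dim_head).
    b, q_len, inner_dim = q.shape
    dim_head = inner_dim // heads
    # Reshape to (batch, heads, seq, dim_head), the layout SDPA expects.
    q, k, v = (t.view(b, -1, heads, dim_head).transpose(1, 2) for t in (q, k, v))
    # attn_mask=None keeps the unmasked fast path unchanged; a boolean or
    # additive float mask restricts which keys each query may attend to.
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
    # Back to (batch, q_len, heads * dim_head).
    return out.transpose(1, 2).reshape(b, q_len, inner_dim)
```

Keeping a separate masked entry point, as the optimized_attention_masked name in bb064c9796 suggests, plausibly lets unmasked callers skip mask handling entirely.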
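Likewise for the lanczos commit (2b6b178173): Lanczos is a windowed-sinc resampling filter, and with Pillow (≥ 9.1) the call is one line. A hedged sketch of the technique, not necessarily how the node implements it:

```python
from PIL import Image

def lanczos_resize(img: Image.Image, width: int, height: int) -> Image.Image:
    # LANCZOS trades a little speed for noticeably sharper results than
    # bilinear or bicubic, especially when downscaling.
    return img.resize((width, height), Image.Resampling.LANCZOS)
```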