763 Commits (dd3bafb40b37e377ffa630edee238c19c03a6d44)

Author SHA1 Message Date
comfyanonymous d44a2de49f Make VAE code closer to sgm. 1 year ago
comfyanonymous 23680a9155 Refactor the attention stuff in the VAE. 1 year ago
comfyanonymous c8013f73e5 Add some Quadro cards to the list of cards with broken fp16. 1 year ago
comfyanonymous bb064c9796 Add a separate optimized_attention_masked function. 1 year ago
comfyanonymous fd4c5f07e7 Add a --bf16-unet to test running the unet in bf16. 1 year ago
comfyanonymous 9a55dadb4c Refactor code so model can be a dtype other than fp32 or fp16. 1 year ago
comfyanonymous 88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 1 year ago
comfyanonymous 20d3852aa1 Pull some small changes from the other repo. 1 year ago
comfyanonymous ac7d8cfa87 Allow attn_mask in attention_pytorch. 1 year ago
comfyanonymous 1a4bd9e9a6 Refactor the attention functions. 1 year ago
comfyanonymous 8cc75c64ff Let unet wrapper functions have .to attributes. 1 year ago
comfyanonymous 5e885bd9c8 Cleanup. 1 year ago
Yukimasa Funaoka 9eb621c95a Supports TAESD models in safetensors format 1 year ago
comfyanonymous 72188dffc3 load_checkpoint_guess_config can now optionally output the model. 1 year ago
Jairo Correa 63e5fd1790 Option to input directory 1 year ago
City 9bfec2bdbf Fix quality loss due to low precision 1 year ago
badayvedat 0f17993d05 fix: typo in extra sampler 1 year ago
comfyanonymous 66756de100 Add SamplerDPMPP_2M_SDE node. 1 year ago
comfyanonymous 71713888c4 Print missing VAE keys. 1 year ago
comfyanonymous d234ca558a Add missing samplers to KSamplerSelect. 1 year ago
comfyanonymous 1adcc4c3a2 Add a SamplerCustom Node. 1 year ago
comfyanonymous bf3fc2f1b7 Refactor sampling related code. 1 year ago
comfyanonymous fff491b032 Model patches can now know which batch is positive and negative. 1 year ago
comfyanonymous 1d6dd83184 Scheduler code refactor. 1 year ago
comfyanonymous 446caf711c Sampling code refactor. 1 year ago
comfyanonymous 76cdc809bf Support more controlnet models. 1 year ago
Simon Lui eec449ca8e Allow Intel GPUs to LoRA cast on GPU since it supports BF16 natively. 1 year ago
comfyanonymous afa2399f79 Add a way to set output block patches to modify the h and hsp. 1 year ago
comfyanonymous 492db2de8d Allow having a different pooled output for each image in a batch. 1 year ago
comfyanonymous 1cdfb3dba4 Only do the cast on the device if the device supports it. 1 year ago
comfyanonymous 7c9a92f552 Don't depend on torchvision. 1 year ago
MoonRide303 2b6b178173 Added support for lanczos scaling 1 year ago
comfyanonymous b92bf8196e Do lora cast on GPU instead of CPU for higher performance. 1 year ago
comfyanonymous 321c5fa295 Enable pytorch attention by default on xpu. 1 year ago
comfyanonymous 61b1f67734 Support models without previews. 1 year ago
comfyanonymous 43d4935a1d Add cond_or_uncond array to transformer_options so hooks can check what is 1 year ago
comfyanonymous 415abb275f Add DDPM sampler. 1 year ago
comfyanonymous 94e4fe39d8 This isn't used anywhere. 1 year ago
comfyanonymous 44361f6344 Support for text encoder models that need attention_mask. 1 year ago
comfyanonymous 0d8f376446 Set last layer on SD2.x models uses the proper indexes now. 1 year ago
comfyanonymous 0966d3ce82 Don't run text encoders on xpu because there are issues. 1 year ago
comfyanonymous 3039b08eb1 Only parse command line args when main.py is called. 1 year ago
comfyanonymous ed58730658 Don't leave very large hidden states in the clip vision output. 1 year ago
comfyanonymous fb3b728203 Fix issue where autocast fp32 CLIP gave different results from regular. 1 year ago
comfyanonymous 7d401ed1d0 Add ldm format support to UNETLoader. 1 year ago
comfyanonymous e85be36bd2 Add a penultimate_hidden_states to the clip vision output. 1 year ago
comfyanonymous 1e6b67101c Support diffusers format t2i adapters. 1 year ago
comfyanonymous 326577d04c Allow cancelling of everything with a progress bar. 1 year ago
comfyanonymous f88f7f413a Add a ConditioningSetAreaPercentage node. 1 year ago
comfyanonymous 1938f5c5fe Add a force argument to soft_empty_cache to force a cache empty. 1 year ago