765 Commits (1a0486bb96fb1ff10f4ea3c0d62eb815e9630585)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| comfyanonymous | 4efa67fa12 | Add ControlNet support. | 2 years ago |
| comfyanonymous | bc69fb5245 | Use inpaint models the proper way by using VAEEncodeForInpaint. | 2 years ago |
| comfyanonymous | cef2cc3cb0 | Support for inpaint models. | 2 years ago |
| comfyanonymous | 07db00355f | Add masks to samplers code for inpainting. | 2 years ago |
| comfyanonymous | e3451cea4f | uni_pc now works with KSamplerAdvanced return_with_leftover_noise. | 2 years ago |
| comfyanonymous | f542f248f1 | Show the right amount of steps in the progress bar for uni_pc. | 2 years ago |
| comfyanonymous | f10b8948c3 | 768-v support for uni_pc sampler. | 2 years ago |
| comfyanonymous | ce0aeb109e | Remove print. | 2 years ago |
| comfyanonymous | 5489d5af04 | Add uni_pc sampler to KSampler* nodes. | 2 years ago |
| comfyanonymous | 1a4edd19cd | Fix overflow issue with inplace softmax. | 2 years ago |
| comfyanonymous | 509c7dfc6d | Use real softmax in split op to fix issue with some images. | 2 years ago |
| comfyanonymous | 7e1e193f39 | Automatically enable lowvram mode if vram is less than 4GB. | 2 years ago |
| comfyanonymous | 324273fff2 | Fix embedding not working when on new line. | 2 years ago |
| comfyanonymous | 1f6a467e92 | Update ldm dir with latest upstream stable diffusion changes. | 2 years ago |
| comfyanonymous | 773cdabfce | Same thing but for the other places where it's used. | 2 years ago |
| comfyanonymous | df40d4f3bf | torch.cuda.OutOfMemoryError is not present on older pytorch versions. | 2 years ago |
| comfyanonymous | e8c499ddd4 | Split optimization for VAE attention block. | 2 years ago |
| comfyanonymous | 5b4e312749 | Use inplace operations for less OOM issues. | 2 years ago |
| comfyanonymous | 3fd87cbd21 | Slightly smarter batching behaviour. | 2 years ago |
| comfyanonymous | bbdcf0b737 | Use relative imports for k_diffusion. | 2 years ago |
| comfyanonymous | 708138c77d | Remove print. | 2 years ago |
| comfyanonymous | 047775615b | Lower the chances of an OOM. | 2 years ago |
| comfyanonymous | 853e96ada3 | Increase it/s by batching together some stuff sent to unet. | 2 years ago |
| comfyanonymous | c92633eaa2 | Auto calculate amount of memory to use for --lowvram | 2 years ago |
| comfyanonymous | 534736b924 | Add some low vram modes: --lowvram and --novram | 2 years ago |
| comfyanonymous | a84cd0d1ad | Don't unload/reload model from CPU uselessly. | 2 years ago |
| comfyanonymous | b1a7c9ebf6 | Embeddings/textual inversion support for SD2.x | 2 years ago |
| comfyanonymous | 1de5aa6a59 | Add a CLIPLoader node to load standalone clip weights. | 2 years ago |
| comfyanonymous | 56d802e1f3 | Use transformers CLIP instead of open_clip for SD2.x | 2 years ago |
| comfyanonymous | bf9ccffb17 | Small fix for SD2.x loras. | 2 years ago |
| comfyanonymous | 678105fade | SD2.x CLIP support for Loras. | 2 years ago |
| comfyanonymous | ef90e9c376 | Add a LoraLoader node to apply loras to models and clip. | 2 years ago |
| comfyanonymous | 69df7eba94 | Add KSamplerAdvanced node. | 2 years ago |
| comfyanonymous | 1daccf3678 | Run softmax in place if it OOMs. | 2 years ago |
| comfyanonymous | f73e57d881 | Add support for textual inversion embedding for SD1.x CLIP. | 2 years ago |
| comfyanonymous | 50db297cf6 | Try to fix OOM issues with cards that have less vram than mine. | 2 years ago |
| comfyanonymous | 73f60740c8 | Slightly cleaner code. | 2 years ago |
| comfyanonymous | 0108616b77 | Fix issue with some models. | 2 years ago |
| comfyanonymous | 2973ff24c5 | Round CLIP position ids to fix float issues in some checkpoints. | 2 years ago |
| comfyanonymous | c4b02059d0 | Add ConditioningSetArea node. | 2 years ago |
| comfyanonymous | acdc6f42e0 | Fix loading some malformed checkpoints? | 2 years ago |
| comfyanonymous | 051f472e8f | Fix sub quadratic attention for SD2 and make it the default optimization. | 2 years ago |
| comfyanonymous | 220afe3310 | Initial commit. | 2 years ago |