66 Commits (95d796fc85608272a9bf06a8c6c1f45912179118)

Author SHA1 Message Date
comfyanonymous 95d796fc85 Faster VAE loading. 1 year ago
comfyanonymous 4b957a0010 Initialize the unet directly on the target device. 1 year ago
comfyanonymous 9ba440995a It's actually possible to torch.compile the unet now. 1 year ago
comfyanonymous ddc6f12ad5 Disable autocast in unet for increased speed. 1 year ago
comfyanonymous 103c487a89 Cleanup. 1 year ago
comfyanonymous 78d8035f73 Fix bug with controlnet. 1 year ago
comfyanonymous 05676942b7 Add some more transformer hooks and move tomesd to comfy_extras. 1 year ago
comfyanonymous fa28d7334b Remove useless code. 1 year ago
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models. 1 year ago
comfyanonymous 9fccf4aa03 Add original_shape parameter to transformer patch extra_options. 1 year ago
comfyanonymous 8883cb0f67 Add a way to set patches that modify the attn2 output. 1 year ago
comfyanonymous ae43f09ef7 All the unet weights should now be initialized with the right dtype. 1 year ago
comfyanonymous 7bf89ba923 Initialize more unet weights as the right dtype. 1 year ago
comfyanonymous e21d9ad445 Initialize transformer unet block weights in right dtype at the start. 1 year ago
comfyanonymous 21f04fe632 Disable default weight values in unet conv2d for faster loading. 1 year ago
comfyanonymous 9d54066ebc This isn't needed for inference. 1 year ago
comfyanonymous 6971646b8b Speed up model loading a bit. 1 year ago
comfyanonymous 274dff3257 Remove more useless files. 1 year ago
comfyanonymous f0a2b81cd0 Cleanup: Remove a bunch of useless files. 1 year ago
comfyanonymous b8636a44aa Make scaled_dot_product switch to sliced attention on OOM. 2 years ago
comfyanonymous 797c4e8d3b Simplify and improve some vae attention code. 2 years ago
BlenderNeko d9e088ddfd minor changes for tiled sampler 2 years ago
comfyanonymous cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2 years ago
comfyanonymous bae4fb4a9d Fix imports. 2 years ago
comfyanonymous ba8a4c3667 Change latent resolution step to 8. 2 years ago
comfyanonymous 66c8aa5c3e Make unet work with any input shape. 2 years ago
comfyanonymous 5282f56434 Implement Linear hypernetworks. 2 years ago
comfyanonymous 6908f9c949 This makes pytorch2.0 attention perform a bit faster. 2 years ago
comfyanonymous 3696d1699a Add support for GLIGEN textbox model. 2 years ago
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2 years ago
EllangoK e5e587b1c0 separates out arg parser and imports args 2 years ago
comfyanonymous e46b1c3034 Disable xformers in VAE when xformers == 0.0.18 2 years ago
comfyanonymous 539ff487a8 Pull latest tomesd code from upstream. 2 years ago
comfyanonymous 809bcc8ceb Add support for unCLIP SD2.x models. 2 years ago
comfyanonymous 0d972b85e6 This seems to give better quality in tome. 2 years ago
comfyanonymous 18a6c1db33 Add a TomePatchModel node to the _for_testing section. 2 years ago
comfyanonymous 61ec3c9d5d Add a way to pass options to the transformers blocks. 2 years ago
comfyanonymous 3ed4a4e4e6 Try again with vae tiled decoding if regular fails because of OOM. 2 years ago
comfyanonymous c692509c2b Try to improve VAEEncode memory usage a bit. 2 years ago
comfyanonymous 54dbfaf2ec Remove omegaconf dependency and some ci changes. 2 years ago
comfyanonymous 83f23f82b8 Add pytorch attention support to VAE. 2 years ago
comfyanonymous a256a2abde --disable-xformers should not even try to import xformers. 2 years ago
comfyanonymous 0f3ba7482f Xformers is now properly disabled when --cpu used. 2 years ago
comfyanonymous 1de86851b1 Try to fix memory issue. 2 years ago
edikius 165be5828a Fixed import (#44) 2 years ago
comfyanonymous cc8baf1080 Make VAE use common function to get free memory. 2 years ago
comfyanonymous 798c90e1c0 Fix pytorch 2.0 cross attention not working. 2 years ago
comfyanonymous c1f5855ac1 Make some cross attention functions work on the CPU. 2 years ago
comfyanonymous 1a612e1c74 Add some pytorch scaled_dot_product_attention code for testing. 2 years ago
comfyanonymous 9502ee45c3 Hopefully fix a strange issue with xformers + lowvram. 2 years ago