comfyanonymous
19300655dd
Don't automatically switch to lowvram mode on GPUs with low memory.
6 months ago
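A minimal sketch of the kind of policy this commit removes: instead of silently dropping to lowvram mode when free VRAM is small, the mode is only enabled when explicitly requested. All names and the 4 GB threshold here are illustrative assumptions, not ComfyUI's actual API.

```python
# Hypothetical sketch: decide the VRAM mode for model loading.
# The old behavior auto-switched to "lowvram" on low-memory GPUs;
# after this commit, only an explicit user request enables it.
def pick_vram_mode(free_vram_mb: int, user_forced_lowvram: bool,
                   auto_switch: bool = False) -> str:
    if user_forced_lowvram:
        return "lowvram"
    if auto_switch and free_vram_mb < 4096:  # old behavior: silent downgrade
        return "lowvram"
    return "normal"
```

With `auto_switch=False` (the new default in this sketch), a low-memory GPU stays in normal mode unless the user forces lowvram.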
comfyanonymous
46daf0a9a7
Add debug options to force on and off attention upcasting.
6 months ago
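Debug toggles like these are typically exposed as mutually exclusive CLI flags. The flag names below are assumptions for illustration; they may not match ComfyUI's exact CLI.

```python
import argparse

# Illustrative sketch of debug flags for forcing attention upcasting
# on or off; flag names are assumptions, not necessarily ComfyUI's.
parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument("--force-upcast-attention", action="store_true",
                   help="always run attention in fp32 (debug)")
group.add_argument("--dont-upcast-attention", action="store_true",
                   help="never upcast attention (debug)")

args = parser.parse_args(["--force-upcast-attention"])
```

Making the two flags mutually exclusive lets argparse reject contradictory combinations instead of the code having to pick a winner.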
comfyanonymous
2d41642716
Fix lowvram dora issue.
6 months ago
comfyanonymous
ec6f16adb6
Fix SAG.
6 months ago
comfyanonymous
bb4940d837
Only enable attention upcasting on models that actually need it.
6 months ago
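The idea behind conditional upcasting can be sketched as follows: compute the attention scores in fp32 only for models known to overflow in fp16, instead of paying the upcast cost everywhere. This is a simplified stand-in, not ComfyUI's attention code.

```python
import numpy as np

# Sketch: upcast the score matmul/softmax to fp32 only when the model
# needs it (e.g. fp16 checkpoints whose q @ k^T overflows).
def attention_scores(q, k, upcast: bool):
    if upcast:  # accumulate in fp32 to avoid fp16 overflow
        q, k = q.astype(np.float32), k.astype(np.float32)
    scores = q @ k.T / float(np.sqrt(q.shape[-1]))
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Models that are numerically safe keep the cheaper fp16 path; flagged models get the fp32 path.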
comfyanonymous
b0ab31d06c
Refactor attention upcasting code part 1.
6 months ago
Simon Lui
f509c6fe21
Fix Intel GPU memory allocation accuracy and documentation update. ( #3459 )
...
* Change the total-memory calculation to be more accurate; allocated memory is actually smaller than reserved memory.
* Update README.md install documentation for Intel GPUs.
6 months ago
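The accounting fix described above can be illustrated with a toy calculation: since allocated ≤ reserved, estimating free memory as `total - allocated` overstates what is actually available, while `total - reserved` is the safer figure. The numbers and function are illustrative only.

```python
# Sketch of the memory-accounting fix: use reserved, not allocated,
# when estimating how much VRAM is still free for new models.
def free_vram(total: int, reserved: int, allocated: int) -> int:
    assert allocated <= reserved <= total
    return total - reserved  # not total - allocated

print(free_vram(16_000, 6_000, 4_000))  # → 10000
```

Using `total - allocated` in this example would have reported 12000, overcommitting by the 2000 units the allocator holds in reserve.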
comfyanonymous
fa6dd7e5bb
Fix lowvram issue with saving checkpoints.
...
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
6 months ago
comfyanonymous
49c20cdc70
No longer necessary.
6 months ago
comfyanonymous
e1489ad257
Fix issue with lowvram mode breaking model saving.
6 months ago
comfyanonymous
93e876a3be
Remove warnings that confuse people.
6 months ago
comfyanonymous
cd07340d96
Typo fix.
6 months ago
comfyanonymous
c61eadf69a
Make the load checkpoint with config function call the regular one.
...
I was going to remove this function completely because it is unmaintainable,
but I think this is the best compromise.
The clip skip and v_prediction parts of the configs should still work, but
not the fp16 vs fp32 selection.
7 months ago
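The compromise described above follows a common pattern: the config-based loader becomes a thin wrapper that extracts the few settings it can still honor and delegates to the regular loader. All names and signatures here are hypothetical stand-ins.

```python
# Sketch of a config loader delegating to the regular loader.
def load_checkpoint(path, **kwargs):
    return {"path": path, **kwargs}  # stand-in for the regular loader

def load_checkpoint_with_config(path, config):
    opts = {}
    if "clip_skip" in config:
        opts["clip_skip"] = config["clip_skip"]
    if config.get("v_prediction"):
        opts["v_prediction"] = True
    # fp16 vs fp32 from the config is intentionally ignored now
    return load_checkpoint(path, **opts)
```

This keeps one maintained code path while the legacy entry point stays callable.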
Simon Lui
a56d02efc7
Change torch.xpu to ipex.optimize, xpu device initialization and remove workaround for text node issue from older IPEX. ( #3388 )
7 months ago
comfyanonymous
f81a6fade8
Fix some edge cases with samplers and arrays with a single sigma.
7 months ago
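One such edge case can be sketched directly: a schedule with a single sigma describes zero denoising steps, so any sampler loop over consecutive `(sigmas[i], sigmas[i+1])` pairs must degrade to a no-op instead of indexing out of range. This is an illustration, not ComfyUI's sampler code.

```python
# Sketch: pair up consecutive sigmas into denoising steps.
# A single-element sigma array yields zero steps rather than an error.
def step_pairs(sigmas):
    return list(zip(sigmas[:-1], sigmas[1:]))

assert step_pairs([14.6, 0.0]) == [(14.6, 0.0)]  # one step
assert step_pairs([0.0]) == []                   # single sigma: no steps
```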
comfyanonymous
2aed53c4ac
Work around xformers bug.
7 months ago
Garrett Sutula
bacce529fb
Add TLS Support ( #3312 )
...
* Add TLS Support
* Add to readme
* Add guidance for Windows users on generating certificates
* Fix typo
7 months ago
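Serving over TLS in Python generally means building an `ssl.SSLContext` from a certificate/key pair (e.g. one generated with openssl, as the PR's guidance covers) and handing it to the HTTP server. This is a minimal sketch, not the exact ComfyUI implementation.

```python
import ssl

# Sketch: build a server-side TLS context from a cert/key pair.
# Paths are placeholders; the pair would be generated e.g. with openssl.
def make_tls_context(certfile: str, keyfile: str) -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx
```

The resulting context is what an aiohttp or stdlib HTTP server takes as its `ssl_context` argument to switch from http:// to https://.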
Jedrzej Kosinski
7990ae18c1
Fix error when more cond masks passed in than batch size ( #3353 )
7 months ago
comfyanonymous
8dc19e40d1
Don't init a VAE model when there are no VAE weights.
7 months ago
comfyanonymous
c59fe9f254
Support VAE without quant_conv.
7 months ago
comfyanonymous
719fb2c81d
Add basic PAG node.
7 months ago
comfyanonymous
258dbc06c3
Fix some memory related issues.
7 months ago
comfyanonymous
58812ab8ca
Support SDXS 512 model.
7 months ago
comfyanonymous
831511a1ee
Fix issue with sampling_settings persisting across models.
7 months ago
comfyanonymous
30abc324c2
Support properly saving CosXL checkpoints.
7 months ago
comfyanonymous
0a03009808
Fix issue with controlnet models getting loaded multiple times.
8 months ago
kk-89
38ed2da2dd
Fix typo in lowvram patcher ( #3209 )
8 months ago
comfyanonymous
1088d1850f
Support for CosXL models.
8 months ago
comfyanonymous
41ed7e85ea
Fix object_patches_backup not being the same object across clones.
8 months ago
comfyanonymous
0f5768e038
Fix missing arguments in cfg_function.
8 months ago
comfyanonymous
1f4fc9ea0c
Fix issue with get_model_object on patched model.
8 months ago
comfyanonymous
1a0486bb96
Fix model needing to be loaded on GPU to generate the sigmas.
8 months ago
comfyanonymous
c6bd456c45
Make zero denoise a NOP.
8 months ago
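Making zero denoise a no-op can be sketched as an early return: with zero denoise strength there are no sampling steps to run, so the input latent comes back untouched instead of being pushed through the sampler. The function below is a simplified stand-in.

```python
# Sketch: denoise <= 0 means no work, so return the latent unchanged.
def sample(latent, denoise: float, steps: int = 20):
    if denoise <= 0.0:
        return latent  # NOP: nothing to denoise
    effective_steps = max(1, int(steps * denoise))
    return {"latent": latent, "steps": effective_steps}  # stand-in for sampling
```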
comfyanonymous
fcfd2bdf8a
Small cleanup.
8 months ago
comfyanonymous
0542088ef8
Refactor sampler code for more advanced sampler nodes part 2.
8 months ago
comfyanonymous
57753c964a
Refactor sampling code for more advanced sampler nodes.
8 months ago
comfyanonymous
6c6a39251f
Fix saving text encoder in fp8.
8 months ago
comfyanonymous
e6482fbbfc
Refactor calc_cond_uncond_batch into calc_cond_batch.
...
calc_cond_batch can take an arbitrary number of cond inputs.
Added a calc_cond_uncond_batch wrapper with a warning so custom nodes
won't break.
8 months ago
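The refactor described above follows a standard deprecation pattern: the new function takes a list of any number of conds, and the old two-argument entry point survives as a warning wrapper. Signatures below are simplified stand-ins, not ComfyUI's real ones.

```python
import warnings

# Sketch: new API takes a list of conds and returns one output per cond.
def calc_cond_batch(model, conds, x, timestep):
    return [f"out({c})" for c in conds]  # stand-in for batched evaluation

# Old API kept as a thin wrapper so custom nodes don't break.
def calc_cond_uncond_batch(model, cond, uncond, x, timestep):
    warnings.warn("calc_cond_uncond_batch is deprecated, use calc_cond_batch",
                  DeprecationWarning)
    return tuple(calc_cond_batch(model, [cond, uncond], x, timestep))
```

Callers of the old name keep working while the warning nudges them toward the new API.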
comfyanonymous
575acb69e4
IP2P model loading support.
...
This is the code to load the model and run inference on it with only a text
prompt. This commit does not contain the nodes to properly use it with an
image input.
This supports both the original SD1 instructpix2pix model and the
diffusers SDXL one.
8 months ago
comfyanonymous
94a5a67c32
Cleanup to support different types of inpaint models.
8 months ago
comfyanonymous
5d8898c056
Fix some performance issues with weight loading and unloading.
...
Lower peak memory usage when changing model.
Fix case where model weights would be unloaded and reloaded.
8 months ago
comfyanonymous
327ca1313d
Support SDXS 0.9
8 months ago
comfyanonymous
ae77590b4e
Support dora_scale in lora files.
8 months ago
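How a `dora_scale` entry changes LoRA merging can be sketched from the DoRA formulation: the low-rank delta is added as usual, then each weight column is renormalized and rescaled by the learned magnitude vector. This is an illustrative sketch, not ComfyUI's patcher code.

```python
import numpy as np

# Sketch of DoRA merging: W' = m * (W + delta) / ||W + delta||
# where the norm is taken per column and m is the dora_scale vector.
def apply_dora(W, lora_delta, dora_scale):
    W_new = W + lora_delta
    col_norm = np.linalg.norm(W_new, axis=0, keepdims=True)
    return dora_scale * (W_new / col_norm)
```

Unlike plain LoRA, the magnitude of each merged column is fully controlled by `dora_scale`, with the low-rank update only steering its direction.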
comfyanonymous
c6de09b02e
Optimize the memory unload strategy for better performance.
8 months ago
comfyanonymous
0624838237
Add inverse noise scaling function.
8 months ago
comfyanonymous
5d875d77fe
Fix regression with lcm not working with batches.
8 months ago
comfyanonymous
4b9005e949
Fix regression with model merging.
8 months ago
comfyanonymous
c18a203a8a
Don't unload model weights for non weight patches.
8 months ago
comfyanonymous
150a3e946f
Make LCM sampler use the model noise scaling function.
8 months ago
comfyanonymous
40e124c6be
SV3D support.
8 months ago