comfyanonymous | 056e5545ff | Don't try to get vram from xpu or cuda when directml is enabled. | 2 years ago
comfyanonymous | 2ca934f7d4 | You can now select the device index with --directml id, for example: --directml 1 | 2 years ago
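For illustration, a minimal argparse sketch of how an optional integer device-index flag like --directml can be declared; the parser object and the const/default values are assumptions, not ComfyUI's actual argument definitions:

    import argparse

    parser = argparse.ArgumentParser()
    # "--directml" alone enables DirectML on a default adapter;
    # "--directml 1" selects adapter index 1. const/default are illustrative.
    parser.add_argument("--directml", type=int, nargs="?", metavar="DIRECTML_DEVICE",
                        const=-1, default=None)

    args = parser.parse_args(["--directml", "1"])
    print(args.directml)  # -> 1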
comfyanonymous | 3baded9892 | Basic torch_directml support. Use --directml to use it. | 2 years ago
Jacob Segal | e214c917ae | Add Condition by Mask node. This PR adds support for a Condition by Mask node, which allows conditioning to be limited to a non-rectangular area. | 2 years ago
comfyanonymous | 5a971cecdb | Add callback to sampler function. Callback format is: callback(step, x0, x) | 2 years ago
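A minimal sketch of a callback matching the stated callback(step, x0, x) format; the progress-printing body and the way it is handed to the sampler are assumptions for illustration:

    # step: current sampling step, x0: predicted denoised latent, x: current noisy latent
    def progress_callback(step, x0, x):
        print(f"step {step}: latent shape {tuple(x.shape)}")

    # hypothetical hand-off to a sampler that accepts a callback argument:
    # samples = sampler.sample(..., callback=progress_callback)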
comfyanonymous | aa57136dae | Some fixes to the batch masks PR. | 2 years ago
comfyanonymous | c50208a703 | Refactor more code to sample.py | 2 years ago
comfyanonymous | 7983b3a975 | This is cleaner this way. | 2 years ago
BlenderNeko | 0b07b2cc0f | gligen tuple | 2 years ago
pythongosssss | c8c9926eeb | Add progress to vae decode tiled | 2 years ago
BlenderNeko | d9b1595f85 | made sample functions more explicit | 2 years ago
BlenderNeko | 5818539743 | add docstrings | 2 years ago
BlenderNeko | 2a09e2aa27 | refactor/split various bits of code for sampling | 2 years ago
comfyanonymous | 5282f56434 | Implement Linear hypernetworks. Add a HypernetworkLoader node to use hypernetworks. | 2 years ago
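As a conceptual sketch only (not the loader's actual code), an A1111-style linear hypernetwork wraps the keys and values of cross-attention in small residual Linear stacks; the class and variable names below are assumptions:

    import torch.nn as nn

    class LinearHypernetworkModule(nn.Module):
        # two Linear layers with no activation in between (the "linear" variant)
        def __init__(self, dim, mult=2):
            super().__init__()
            self.down = nn.Linear(dim, dim * mult)
            self.up = nn.Linear(dim * mult, dim)

        def forward(self, x):
            # residual: original tensor plus the learned adjustment
            return x + self.up(self.down(x))

    # applied to cross-attention inputs, conceptually:
    # k = hypernet_k(context); v = hypernet_v(context)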
comfyanonymous | 6908f9c949 | This makes pytorch2.0 attention perform a bit faster. | 2 years ago
comfyanonymous | 907010e082 | Remove some useless code. | 2 years ago
comfyanonymous | 96b57a9ad6 | Don't pass adm to model when it doesn't support it. | 2 years ago
comfyanonymous | 3696d1699a | Add support for GLIGEN textbox model. | 2 years ago
comfyanonymous | 884ea653c8 | Add a way for nodes to set a custom CFG function. | 2 years ago
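The commit does not show the exact hook or signature, but the standard classifier-free-guidance combine that such a custom function would replace looks like this; the argument names are assumptions:

    # default CFG combine: uncond prediction plus the scaled (cond - uncond) difference
    def default_cfg_function(cond, uncond, cond_scale):
        return uncond + (cond - uncond) * cond_scale

    # a node could swap in its own variant (e.g. rescaled or clamped guidance)
    # via whatever hook this commit adds; the hook name is not shown here.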
comfyanonymous | 73c3e11e83 | Fix model_management import so it doesn't get executed twice. | 2 years ago
comfyanonymous | 81d1f00df3 | Some refactoring: from_tokens -> encode_from_tokens | 2 years ago
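Together with the tokenizer/encoder split further down the log, the renamed method suggests a two-step usage along these lines; the clip object and the tokenize method name are assumptions for illustration, while encode_from_tokens is the name this commit introduces:

    def encode_text(clip, text):
        # hypothetical two-step usage: text -> token ids -> conditioning tensor
        tokens = clip.tokenize(text)
        return clip.encode_from_tokens(tokens)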
BlenderNeko | d0b1b6c6bf | fixed improper padding | 2 years ago
comfyanonymous | deb2b93e79 | Move code to empty gpu cache to model_management.py | 2 years ago
comfyanonymous | 04d9bc13af | Safely load pickled embeds that don't load with weights_only=True. | 2 years ago
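A hedged sketch of the two-step loading attempt the message describes; the commit presumably adds a safer fallback path for legacy pickled embeds, which is not reproduced here:

    import torch

    def load_embedding(path):
        try:
            # preferred: restricted loader (weights_only is available in torch >= 1.13)
            return torch.load(path, map_location="cpu", weights_only=True)
        except Exception:
            # legacy pickled embed; a plain torch.load without weights_only is unsafe,
            # so the real code handles this case more carefully.
            return None  # placeholder for the commit's safer fallback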
BlenderNeko | da115bd78d | ensure backwards compat with optional args | 2 years ago
BlenderNeko | 752f7a162b | align behavior with old tokenize function | 2 years ago
comfyanonymous | 334aab05e5 | Don't stop workflow if loading embedding fails. | 2 years ago
BlenderNeko | 73175cf58c | split tokenizer from encoder | 2 years ago
BlenderNeko | 8489cba140 | add unique ID per word/embedding for tokenizer | 2 years ago
comfyanonymous | 92eca60ec9 | Fix for new transformers version. | 2 years ago
comfyanonymous | 1e1875f674 | Print xformers version and warning about 0.0.18 | 2 years ago
comfyanonymous | 7e254d2f69 | Clarify what --windows-standalone-build does. | 2 years ago
comfyanonymous | 44fea05064 | Cleanup. | 2 years ago
comfyanonymous | 58ed0f2da4 | Fix loading SD1.5 diffusers checkpoint. | 2 years ago
comfyanonymous | 64557d6781 | Add a --force-fp32 argument to force fp32 for debugging. | 2 years ago
comfyanonymous | bceccca0e5 | Small refactor. | 2 years ago
EllangoK | 28fff5d1db | Fixes lack of support for multiple configs; also adds some metavars to argparse. | 2 years ago
comfyanonymous | f84f2508cc | Rename the cors parameter to something more verbose. | 2 years ago
EllangoK | 48efae1608 | makes cors a cli parameter | 2 years ago
EllangoK | 01c1fc669f | Set listen flag to listen on all interfaces if specified. | 2 years ago
藍+85CD | 3e2608e12b | Fix auto lowvram detection on CUDA | 2 years ago
sALTaccount | 60127a8304 | diffusers loader | 2 years ago
藍+85CD | 7cb924f684 | Use separate variables instead of `vram_state` | 2 years ago
藍+85CD | 84b9c0ac2f | Import intel_extension_for_pytorch as ipex | 2 years ago
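A minimal sketch of how the ipex import enables XPU device selection; the device-picking helper is an illustration, not the repository's model_management code:

    import torch
    try:
        import intel_extension_for_pytorch as ipex  # registers torch.xpu
    except ImportError:
        ipex = None

    def pick_device():
        if ipex is not None and torch.xpu.is_available():
            return torch.device("xpu")
        if torch.cuda.is_available():
            return torch.device("cuda")
        return torch.device("cpu")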
EllangoK | e5e587b1c0 | Separates out arg parser and imports args | 2 years ago
藍+85CD | 37713e3b0a | Add basic XPU device support (closed #387) | 2 years ago
comfyanonymous | e46b1c3034 | Disable xformers in VAE when xformers == 0.0.18 | 2 years ago
comfyanonymous | 1718730e80 | Ignore embeddings when sizes don't match and print a WARNING. | 2 years ago
comfyanonymous | 23524ad8c5 | Remove print. | 2 years ago
comfyanonymous | 539ff487a8 | Pull latest tomesd code from upstream. | 2 years ago