comfyanonymous | 68be24eead | Remove some prints. | 1 year ago
asagi4 | 1ea4d84691 | Fix timestep ranges when batch_size > 1 | 1 year ago
comfyanonymous | 4ab75d9cb8 | Update colab notebook with SDXL links. | 1 year ago
comfyanonymous | 5379051d16 | Fix diffusers VAE loading. | 1 year ago
comfyanonymous | 00da9b3268 | Merge branch 'fix/types' of https://github.com/melMass/ComfyUI | 1 year ago
comfyanonymous | 5e3ac1928a | Implement modelspec metadata in CheckpointSave for SDXL and refiner. | 1 year ago
comfyanonymous | 727588d076 | Fix some new loras. | 1 year ago
comfyanonymous | 315ba30c81 | Update nightly ROCm pytorch command in readme to 5.6 | 1 year ago
comfyanonymous | 4f9b6f39d1 | Fix potential issue with Save Checkpoint. | 1 year ago
comfyanonymous | 7c0a5a3e0e | Disable cuda malloc on a bunch of quadro cards. | 1 year ago
comfyanonymous | a51f33ee49 | Use bigger tiles when upscaling with model and fallback on OOM. | 1 year ago
comfyanonymous | 5f75d784a1 | Start is now 0.0 and end is now 1.0 for the timestep ranges. | 1 year ago
comfyanonymous | 7ff14b62f8 | ControlNetApplyAdvanced can now define when controlnet gets applied. | 1 year ago
comfyanonymous | d191c4f9ed | Add a ControlNetApplyAdvanced node. The controlnet can be applied to the positive or negative prompt only by connecting it correctly. | 1 year ago
comfyanonymous | 0240946ecf | Add a way to set which range of timesteps the cond gets applied to. | 1 year ago
comfyanonymous | 30de083dd0 | Disable cuda malloc on all the 9xx series. | 1 year ago
comfyanonymous | 22f29d66ca | Try to fix memory issue with lora. | 1 year ago
comfyanonymous | 67be7eb81d | Nodes can now patch the unet function. | 1 year ago
comfyanonymous | 12a6e93171 | Del the right object when applying lora. | 1 year ago
comfyanonymous | 85a8900a14 | Disable cuda malloc on regular GTX 960. | 1 year ago
comfyanonymous | 78e7958d17 | Support controlnet in diffusers format. | 1 year ago
comfyanonymous | 09386a3697 | Fix issue with lora in some cases when combined with model merging. | 1 year ago
comfyanonymous | 58b2364f58 | Properly support SDXL diffusers unet with UNETLoader node. | 1 year ago
melMass | 5190aa284d | fix: ⚡️ small type fix: getCustomWidgets expects a plain record and not an array of records | 1 year ago
comfyanonymous | 0115018695 | Print errors and continue when lora weights are not compatible. | 1 year ago
comfyanonymous | 4760c29380 | Merge branch 'fix-AttributeError-module-'torch'-has-no-attribute-'mps'' of https://github.com/KarryCharon/ComfyUI | 1 year ago
comfyanonymous | ccb6b70de1 | Move image encoding outside of sampling loop for better preview perf. | 1 year ago
comfyanonymous | 39c58b227f | Disable cuda malloc on GTX 750 Ti. | 1 year ago
comfyanonymous | d5c0765f4e | Update how to get the prompt in api format in the example. | 1 year ago
comfyanonymous | 799c08a4ce | Auto disable cuda malloc on some GPUs on windows. | 1 year ago
comfyanonymous | 0b284f650b | Fix typo. | 1 year ago
comfyanonymous | e032ca6138 | Fix ddim issue with older torch versions. | 1 year ago
comfyanonymous | 18885f803a | Add MX450 and MX550 to list of cards with broken fp16. | 1 year ago
comfyanonymous | 9ba440995a | It's actually possible to torch.compile the unet now. | 1 year ago
comfyanonymous | 51d5477579 | Add key to indicate checkpoint is v_prediction when saving. | 1 year ago
comfyanonymous | ff6b047a74 | Fix device print on old torch version. | 1 year ago
comfyanonymous | 9871a15cf9 | Enable --cuda-malloc by default on torch 2.0 and up. Add --disable-cuda-malloc to disable it. | 1 year ago
comfyanonymous | 55d0fca9fa | --windows-standalone-build now enables --cuda-malloc | 1 year ago
comfyanonymous | 1679abd86d | Add a command line argument to enable backend:cudaMallocAsync | 1 year ago
comfyanonymous | 3a150bad15 | Only calculate randn in some samplers when it's actually being used. | 1 year ago
comfyanonymous | ee8f8ee07f | Fix regression with ddim and uni_pc when batch size > 1. | 1 year ago
comfyanonymous | 3ded1a3a04 | Refactor of sampler code to deal more easily with different model types. | 1 year ago
comfyanonymous | ac9c038ac2 | Merge branch 'master' of https://github.com/ComfyUI-Community/ComfyUI | 1 year ago
comfyanonymous | 5f57362613 | Lower lora ram usage when in normal vram mode. | 1 year ago
ComfyUI-Community | a8f3bbc35d | Patch del self.loaded_lora to prevent error with persistent lora_name swapping | 1 year ago
comfyanonymous | 490771b7f4 | Speed up lora loading a bit. | 1 year ago
comfyanonymous | 50b1180dde | Fix CLIPSetLastLayer not reverting when removed. | 1 year ago
comfyanonymous | 6fb084f39d | Reduce floating point rounding errors in loras. | 1 year ago
comfyanonymous | 91ed2815d5 | Add a node to merge CLIP models. | 1 year ago
comfyanonymous | 907c9fbf0d | Refactor to make it easier to set the api path. | 1 year ago