
Don't try to get vram from xpu or cuda when directml is enabled.

Branch: pull/585/head
Author: comfyanonymous, 2 years ago
Commit: 056e5545ff
1 changed file with 3 additions: comfy/model_management.py
@@ -34,6 +34,9 @@ if args.directml is not None:
 try:
     import torch
+    if directml_enabled:
+        total_vram = 4097 #TODO
+    else:
     try:
         import intel_extension_for_pytorch as ipex
         if torch.xpu.is_available():
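The pattern this commit adds can be sketched as follows. This is a minimal illustration, not ComfyUI's actual code: `detect_total_vram` and `probe_backend_vram` are hypothetical stand-ins for the module-level logic and the real `torch.xpu`/`torch.cuda` queries, and the 4097 MB placeholder mirrors the TODO value in the diff.

```python
def detect_total_vram(directml_enabled: bool, probe_backend_vram=None) -> int:
    """Return total VRAM in MB.

    Hypothetical sketch of the guard added in this commit: when DirectML
    is the active backend, skip probing torch.xpu / torch.cuda entirely
    (those queries are not valid under DirectML) and return a fixed
    placeholder instead.
    """
    if directml_enabled:
        # DirectML: no reliable VRAM query, use the placeholder (TODO in the diff).
        return 4097
    if probe_backend_vram is not None:
        # Only consult the xpu/cuda probe when DirectML is not in use.
        return probe_backend_vram()
    return 1024  # fallback when no backend probe is available

# With DirectML enabled, the probe is never called:
print(detect_total_vram(True, probe_backend_vram=lambda: 1 / 0))
```

The point of the guard is ordering: the DirectML check happens before any backend probe runs, so a probe that would raise under DirectML is never reached.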
