
Update custom node name Llava-Describer to Ollama-Describer (#684)

Alisson Pereira Anjos committed 6 months ago via GitHub
commit 577bd48b08
1 changed file: custom-node-list.json (10 lines changed)
@@ -7408,14 +7408,14 @@
     },
     {
         "author": "alisson-anjos",
-        "title": "ComfyUI-LLaVA-Describer",
-        "id": "llava-describer",
-        "reference": "https://github.com/alisson-anjos/ComfyUI-LLaVA-Describer",
+        "title": "ComfyUI-Ollama-Describer",
+        "id": "ollama-describer",
+        "reference": "https://github.com/alisson-anjos/ComfyUI-Ollama-Describer",
         "files": [
-            "https://github.com/alisson-anjos/ComfyUI-LLaVA-Describer"
+            "https://github.com/alisson-anjos/ComfyUI-Ollama-Describer"
         ],
         "install_type": "git-clone",
-        "description": "This is an extension for ComfyUI to extract descriptions from your images using the multimodal model called LLaVa. The LLaVa model - Large Language and Vision Assistant, although trained on a relatively small dataset, demonstrates exceptional capabilities in understanding images and answering questions about them. This model shows behaviors similar to multimodal models like GPT-4, even when presented with unseen images and instructions."
+        "description": "This is an extension for ComfyUI that makes it possible to use some LLM models provided by Ollama, such as Gemma, Llava (multimodal), Llama2, Llama3 or Mistral. Speaking specifically of the LLaVa - Large Language and Vision Assistant model, although trained on a relatively small dataset, it demonstrates exceptional capabilities in understanding images and answering questions about them. This model presents similar behaviors to multimodal models such as GPT-4, even when presented with invisible images and instructions."
     },
     {
         "author": "chaosaiart",

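For readers unfamiliar with the renamed extension, the updated description amounts to this: ComfyUI nodes that send an image and a prompt to a model served by Ollama (for example LLaVA) and receive a text description back. The sketch below illustrates that round trip against Ollama's HTTP API, assuming a local Ollama server on the default port 11434 with a llava model already pulled; the describe_image helper, the prompt text, and the file name are illustrative and are not taken from the extension's code.

# Minimal sketch: ask a multimodal model served by Ollama to describe an image.
# Assumes a local Ollama server (default port 11434) and that "llava" has been pulled.
import base64
import requests

def describe_image(image_path: str, prompt: str = "Describe this image.") -> str:
    # Ollama's /api/generate endpoint accepts base64-encoded images for multimodal models.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llava",      # any multimodal model served by Ollama
            "prompt": prompt,
            "images": [image_b64],
            "stream": False,       # return one JSON object instead of a token stream
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(describe_image("example.png"))  # hypothetical input image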