Image to prompt with BLIP and CLIP
clip-interrogator

Want to figure out what a good prompt might be to create new images like an existing one? The CLIP Interrogator is here to get you answers!

Run it!

🆕 Now available as a Stable Diffusion Web UI Extension! 🆕


Run Version 2 on Colab, HuggingFace, Replicate, and Lambda!


Version 1 still available in Colab for comparing different CLIP models


About

The CLIP Interrogator is a prompt engineering tool that combines OpenAI's CLIP and Salesforce's BLIP to optimize text prompts to match a given image. Use the resulting prompts with text-to-image models like Stable Diffusion on DreamStudio to create cool art!

Using as a library

Create and activate a Python virtual environment

python3 -m venv ci_env
(for linux  ) source ci_env/bin/activate
(for windows) .\ci_env\Scripts\activate

Install with PIP

# install torch with GPU support for example:
pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu117

# install clip-interrogator
pip install clip-interrogator==0.5.4

# or for the very latest WIP with BLIP2 support:
# pip install clip-interrogator==0.6.0

You can then use it in your script:

from PIL import Image
from clip_interrogator import Config, Interrogator
image = Image.open(image_path).convert('RGB')
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
print(ci.interrogate(image))

CLIP Interrogator uses OpenCLIP, which supports many different pretrained CLIP models. For the best prompts for Stable Diffusion 1.X, use ViT-L-14/openai for clip_model_name. For Stable Diffusion 2.0, use ViT-H-14/laion2b_s32b_b79k.
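As a quick illustration, the recommendation above can be captured in a small lookup (the helper name here is hypothetical, not part of the library; the model names come from the text above):

```python
# Hypothetical helper mapping a Stable Diffusion version to the
# recommended OpenCLIP model name from the README text above.
def clip_model_for_sd(sd_version: str) -> str:
    recommended = {
        "1.x": "ViT-L-14/openai",
        "2.0": "ViT-H-14/laion2b_s32b_b79k",
    }
    return recommended[sd_version.lower()]

print(clip_model_for_sd("1.x"))  # ViT-L-14/openai
```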

Configuration

The Config object lets you configure CLIP Interrogator's processing.

  • clip_model_name: which of the OpenCLIP pretrained CLIP models to use
  • cache_path: path where precomputed text embeddings are saved
  • download_cache: when True, the precomputed embeddings will be downloaded from huggingface
  • chunk_size: batch size for CLIP; use a smaller value for lower VRAM
  • quiet: when True, no progress bars or text output will be displayed

On systems with low VRAM you can call config.apply_low_vram_defaults() to reduce the amount of VRAM needed (at the cost of some speed and quality). The default settings use about 6.3GB of VRAM and the low VRAM settings use about 2.7GB.
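Putting the options above together, a Config might be set up like this (a sketch based only on the options documented above; the values shown are examples, not defaults):

```python
from clip_interrogator import Config, Interrogator

config = Config(
    clip_model_name="ViT-L-14/openai",  # recommended for Stable Diffusion 1.X
    cache_path="./cache",               # where precomputed embeddings are saved
    download_cache=True,                # fetch precomputed embeddings from huggingface
    chunk_size=1024,                    # reduce on low-VRAM systems
    quiet=False,                        # show progress bars
)
config.apply_low_vram_defaults()        # optional: ~2.7GB VRAM instead of ~6.3GB
ci = Interrogator(config)
```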

See run_cli.py and run_gradio.py for more examples of using the Config and Interrogator classes.

Ranking against your own list of terms (requires version 0.6.0)

from clip_interrogator import Config, Interrogator, LabelTable, load_list
from PIL import Image

ci = Interrogator(Config(blip_model_type=None))
image = Image.open(image_path).convert('RGB')
table = LabelTable(load_list('terms.txt'), 'terms', ci)
best_match = table.rank(ci.image_to_features(image), top_count=1)[0]
print(best_match)
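Conceptually, this ranking compares the image's CLIP feature vector against an embedding for each term and keeps the closest matches by cosine similarity. A minimal pure-Python sketch of that idea (the vectors below are invented for illustration and are not real CLIP embeddings):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank(image_features, term_embeddings, top_count=1):
    """Return the top_count terms whose embeddings best match the image features."""
    scored = sorted(term_embeddings.items(),
                    key=lambda kv: cosine_similarity(image_features, kv[1]),
                    reverse=True)
    return [term for term, _ in scored[:top_count]]

# Toy embeddings, invented for illustration only
terms = {
    "a photo of a cat": [0.9, 0.1, 0.0],
    "a photo of a dog": [0.1, 0.9, 0.0],
}
print(rank([0.8, 0.2, 0.1], terms, top_count=1))  # ['a photo of a cat']
```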