
Add to pip!

replicate
pharmapsychotic 2 years ago
commit 7a2ac9aa57
  1. .gitignore (2 lines changed)
  2. MANIFEST.in (5 lines changed)
  3. README.md (33 lines changed)
  4. clip_interrogator.ipynb (1137 lines changed)
  5. clip_interrogator/__init__.py (5 lines changed)
  6. clip_interrogator/clip_interrogator.py (2 lines changed)
  7. clip_interrogator/data/artists.txt (0 lines changed)
  8. clip_interrogator/data/flavors.txt (0 lines changed)
  9. clip_interrogator/data/mediums.txt (0 lines changed)
  10. clip_interrogator/data/movements.txt (0 lines changed)
  11. main.py (2 lines changed)
  12. pyproject.toml (3 lines changed)
  13. requirements.txt (2 lines changed)
  14. setup.py (34 lines changed)

.gitignore (2 lines changed)

@@ -2,4 +2,6 @@
.vscode/
cache/
clip-interrogator/
clip_interrogator.egg-info/
dist/
venv/

MANIFEST.in (5 lines changed)

@@ -0,0 +1,5 @@
include clip_interrogator/data/artists.txt
include clip_interrogator/data/flavors.txt
include clip_interrogator/data/mediums.txt
include clip_interrogator/data/movements.txt
include requirements.txt

README.md (33 lines changed)

@@ -1,5 +1,9 @@
# clip-interrogator
*Want to figure out what a good prompt might be to create new images like an existing one? The **CLIP Interrogator** is here to get you answers!*
## Run it!
Run Version 2 on Colab, HuggingFace, and Replicate!
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pharmapsychotic/clip-interrogator/blob/main/clip_interrogator.ipynb) [![Generic badge](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue.svg)](https://huggingface.co/spaces/pharma/CLIP-Interrogator) [![Replicate](https://replicate.com/cjwbw/clip-interrogator/badge)](https://replicate.com/cjwbw/clip-interrogator)
@@ -12,8 +16,31 @@ Version 1 still available in Colab for comparing different CLIP models
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pharmapsychotic/clip-interrogator/blob/v1/clip_interrogator.ipynb)
<br>
## About
*Want to figure out what a good prompt might be to create new images like an existing one? The **CLIP Interrogator** is here to get you answers!*
The **CLIP Interrogator** is a prompt engineering tool that combines OpenAI's [CLIP](https://openai.com/blog/clip/) and Salesforce's [BLIP](https://blog.salesforceairesearch.com/blip-bootstrapping-language-image-pretraining/) to optimize text prompts to match a given image. Use the resulting prompts with text-to-image models like [Stable Diffusion](https://github.com/CompVis/stable-diffusion) on [DreamStudio](https://beta.dreamstudio.ai/) to create cool art!
## Using as a library
Create and activate a Python virtual environment
```bash
python3 -m venv ci_env
source ci_env/bin/activate
```
Install with pip
```bash
pip install -e git+https://github.com/openai/CLIP.git@main#egg=clip
pip install -e git+https://github.com/pharmapsychotic/BLIP.git@lib#egg=blip
pip install clip-interrogator
```
The **CLIP Interrogator** is a prompt engineering tool that combines OpenAI's [CLIP](https://openai.com/blog/clip/) and Salesforce's [BLIP](https://blog.salesforceairesearch.com/blip-bootstrapping-language-image-pretraining/) to optimize text prompts to match a given image. Use the resulting prompts with text-to-image models like Stable Diffusion.
You can then use it in your own script:
```python
from PIL import Image
from clip_interrogator import CLIPInterrogator, Config
image = Image.open(image_path).convert('RGB')
interrogator = CLIPInterrogator(Config(clip_model_name="ViT-L/14"))
print(interrogator.interrogate(image))
```
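`Config` exposes a few more knobs than just `clip_model_name`; the fields visible in this commit's `clip_interrogator.py` diff include `cache_path`, `chunk_size`, `device`, and `flavor_intermediate_count`. A hedged sketch of overriding some of them (the values below are arbitrary placeholders, not recommendations):
```python
from PIL import Image
from clip_interrogator import CLIPInterrogator, Config

# Illustrative only: field names come from the Config dataclass in this commit,
# the specific values are placeholders.
config = Config(
    clip_model_name="ViT-L/14",
    device="cpu",                    # force CPU when no CUDA device is present
    cache_path="cache",              # on-disk cache directory (the default is 'cache')
    flavor_intermediate_count=1024,  # smaller value presumably trades quality for speed
)
interrogator = CLIPInterrogator(config)
image = Image.open("example.jpg").convert("RGB")  # placeholder image path
print(interrogator.interrogate(image))
```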

clip_interrogator.ipynb (1137 lines changed)

File diff suppressed because one or more lines are too long

clip_interrogator/__init__.py (5 lines changed)

@@ -1 +1,4 @@
from .interrogate import CLIPInterrogator, Config, LabelTable
from .clip_interrogator import CLIPInterrogator, Config
__version__ = '0.1.3'
__author__ = 'pharmapsychotic'

clip_interrogator/interrogate.py → clip_interrogator/clip_interrogator.py (2 lines changed)

@@ -35,7 +35,7 @@ class Config:
    # interrogator settings
    cache_path: str = 'cache'
    chunk_size: int = 2048
    data_path: str = 'data'
    data_path: str = os.path.join(os.path.dirname(__file__), 'data')
    device: str = 'cuda' if torch.cuda.is_available() else 'cpu'
    flavor_intermediate_count: int = 2048
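The new default builds the path from `os.path.dirname(__file__)`, so the packaged `data/*.txt` lists resolve correctly regardless of the caller's working directory. A minimal sketch of the same pattern, with a hypothetical `load_list` helper that is not part of this commit:
```python
import os

# Same idea as the new default above: anchor the data directory to this module's
# location instead of the process's current working directory.
DATA_PATH = os.path.join(os.path.dirname(__file__), "data")

def load_list(name: str) -> list[str]:
    """Hypothetical helper: read one of the packaged term lists, e.g. 'artists.txt'."""
    with open(os.path.join(DATA_PATH, name), encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]
```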

data/artists.txt → clip_interrogator/data/artists.txt (0 lines changed)

data/flavors.txt → clip_interrogator/data/flavors.txt (0 lines changed)

data/mediums.txt → clip_interrogator/data/mediums.txt (0 lines changed)

data/movements.txt → clip_interrogator/data/movements.txt (0 lines changed)

main.py (2 lines changed)

@@ -38,7 +38,7 @@ def main():
    # generate a nice prompt
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    config = Config(device=device, clip_model_name=args.clip, data_path='data')
    config = Config(device=device, clip_model_name=args.clip)
    interrogator = CLIPInterrogator(config)
    prompt = interrogator.interrogate(image)
    print(prompt)

pyproject.toml (3 lines changed)

@@ -0,0 +1,3 @@
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

requirements.txt (2 lines changed)

@@ -3,5 +3,3 @@ torchvision
Pillow
requests
tqdm
-e git+https://github.com/openai/CLIP.git@main#egg=clip
-e git+https://github.com/pharmapsychotic/BLIP.git@lib#egg=blip

setup.py (34 lines changed)

@@ -0,0 +1,34 @@
import os
import pkg_resources
from setuptools import setup, find_packages

setup(
    name="clip-interrogator",
    version="0.1.3",
    license='MIT',
    author='pharmapsychotic',
    author_email='me@pharmapsychotic.com',
    url='https://github.com/pharmapsychotic/clip-interrogator',
    description="Generate a prompt from an image",
    long_description=open('README.md').read(),
    long_description_content_type="text/markdown",
    packages=find_packages(),
    install_requires=[
        str(r)
        for r in pkg_resources.parse_requirements(
            open(os.path.join(os.path.dirname(__file__), "requirements.txt"))
        )
    ],
    include_package_data=True,
    extras_require={'dev': ['pytest']},
    classifiers=[
        'Intended Audience :: Developers',
        'Intended Audience :: Science/Research',
        'License :: OSI Approved :: MIT License',
        'Topic :: Education',
        'Topic :: Scientific/Engineering',
        'Topic :: Scientific/Engineering :: Artificial Intelligence',
    ],
    keywords=['blip', 'clip', 'prompt-engineering', 'stable-diffusion', 'text-to-image'],
)
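The `install_requires` list is built by parsing `requirements.txt` with `pkg_resources.parse_requirements`, which only accepts plain requirement strings; that is presumably why the editable `-e git+...` lines were dropped from `requirements.txt` in this commit and moved into the README's pip instructions. A small sketch of that parsing step, assuming a `requirements.txt` sits next to the script:
```python
import os
import pkg_resources

# Read the plain requirement lines and turn them into strings setup() can use.
req_file = os.path.join(os.path.dirname(__file__), "requirements.txt")
with open(req_file) as f:
    requirements = [str(r) for r in pkg_resources.parse_requirements(f)]

print(requirements)  # e.g. ['torch', 'torchvision', 'Pillow', 'requests', 'tqdm']
```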