
Update install instructions

Recommending the stable 0.5.4 version for now. 
0.6.0 supports BLIP2 but requires a newer transformers version and could still use more love.
pharmapsychotic, 2 years ago (committed by GitHub)
commit f4429b4c9d
1 changed file with 9 changes: README.md

@@ -40,7 +40,10 @@ Install with PIP
 pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu117
 
 # install clip-interrogator
-pip install clip-interrogator==0.6.0
+pip install clip-interrogator==0.5.4
+
+# or for very latest WIP with BLIP2 support
+#pip install clip-interrogator==0.6.0
 ```
 
 You can then use it in your script
@@ -69,7 +72,7 @@ On systems with low VRAM you can call `config.apply_low_vram_defaults()` to redu
 
 See the [run_cli.py](https://github.com/pharmapsychotic/clip-interrogator/blob/main/run_cli.py) and [run_gradio.py](https://github.com/pharmapsychotic/clip-interrogator/blob/main/run_gradio.py) for more examples on using Config and Interrogator classes.
 
-## Ranking against your own list of terms
+## Ranking against your own list of terms (requires version 0.6.0)
 
 ```python
 from clip_interrogator import Config, Interrogator, LabelTable, load_list
@@ -80,4 +83,4 @@ image = Image.open(image_path).convert('RGB')
 table = LabelTable(load_list('terms.txt'), 'terms', ci)
 best_match = table.rank(ci.image_to_features(image), top_count=1)[0]
 print(best_match)
 ```
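
The hunks above only show fragments of the README's term-ranking example, so for reference here is a minimal sketch of the full flow that the changed heading points to. It assumes clip-interrogator 0.6.0 is installed; the model name, `image.jpg`, and `terms.txt` are illustrative placeholders, not part of this commit.

```python
# Minimal sketch (not part of this commit): rank an image against your own list of terms.
# Assumes clip-interrogator 0.6.0; 'image.jpg' and 'terms.txt' (one term per line) are placeholders.
from PIL import Image
from clip_interrogator import Config, Interrogator, LabelTable, load_list

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

image = Image.open("image.jpg").convert("RGB")

# Build a label table from the custom term list and rank it against the image features
table = LabelTable(load_list("terms.txt"), "terms", ci)
best_match = table.rank(ci.image_to_features(image), top_count=1)[0]
print(best_match)
```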
