From f4429b4c9d0044d46d811498e635317d262a2389 Mon Sep 17 00:00:00 2001
From: pharmapsychotic <96542870+pharmapsychotic@users.noreply.github.com>
Date: Mon, 27 Mar 2023 22:37:11 -0500
Subject: [PATCH] Update install instructions

Recommending the stable 0.5.4 version for now. 0.6.0 supports BLIP2 but
requires a newer transformers version and could use more love still.
---
 README.md | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 2f01b4b..01da4ed 100644
--- a/README.md
+++ b/README.md
@@ -40,7 +40,10 @@ Install with PIP
 pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu117
 
 # install clip-interrogator
-pip install clip-interrogator==0.6.0
+pip install clip-interrogator==0.5.4
+
+# or for very latest WIP with BLIP2 support
+#pip install clip-interrogator==0.6.0
 ```
 
 You can then use it in your script
@@ -69,7 +72,7 @@ On systems with low VRAM you can call `config.apply_low_vram_defaults()` to redu
 
 See the [run_cli.py](https://github.com/pharmapsychotic/clip-interrogator/blob/main/run_cli.py) and [run_gradio.py](https://github.com/pharmapsychotic/clip-interrogator/blob/main/run_gradio.py) for more examples on using Config and Interrogator classes.
 
-## Ranking against your own list of terms
+## Ranking against your own list of terms (requires version 0.6.0)
 
 ```python
 from clip_interrogator import Config, Interrogator, LabelTable, load_list
@@ -80,4 +83,4 @@ image = Image.open(image_path).convert('RGB')
 table = LabelTable(load_list('terms.txt'), 'terms', ci)
 best_match = table.rank(ci.image_to_features(image), top_count=1)[0]
 print(best_match)
-```
\ No newline at end of file
+```
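
The `LabelTable.rank` call in the README hunk above amounts to scoring an image embedding against a list of label embeddings by cosine similarity and keeping the best matches. Here is a minimal, dependency-free sketch of that idea; the `rank` and `cosine` helpers and the toy two-dimensional "embeddings" are hypothetical stand-ins for CLIP features, not clip-interrogator's actual internals.

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank(image_features, labels, label_features, top_count=1):
    # score every label against the image embedding, best first
    scored = sorted(
        zip(labels, (cosine(image_features, f) for f in label_features)),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return [label for label, _ in scored[:top_count]]

# toy 2-d embeddings standing in for real CLIP features
labels = ["cat", "dog", "car"]
label_features = [[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]]
image_features = [0.9, 0.45]
print(rank(image_features, labels, label_features, top_count=1)[0])  # -> dog
```

In the real library the embeddings come from `ci.image_to_features(image)` and from encoding each term in `terms.txt`, but the ranking step reduces to this same nearest-neighbor lookup.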