This changes `Interrogator` to load only BLIP into VRAM on init, leaving CLIP in
RAM until it is needed.
When `interrogate` is first called, it runs BLIP inference, unloads BLIP, loads
CLIP, then runs CLIP inference. 'Unloaded' here just means 'moved back to RAM'.
With this change I can run classic/fast interrogation on 4 GB of VRAM; 'best'
mode is still a little too big, however.
This commit also applies automatic `black` formatting and adds extra type hints;
these can be dropped if unwanted.
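
A minimal sketch of the swap, assuming plain PyTorch modules; the helper names
here are illustrative, not the actual methods added in this change:

```python
import torch

def unload_to_ram(model: torch.nn.Module) -> None:
    # 'Unload' a model: move its weights back to system RAM and release
    # the cached allocations so the next model fits in VRAM.
    model.to("cpu")
    torch.cuda.empty_cache()

def load_to_vram(model: torch.nn.Module, device: str = "cuda") -> None:
    # Move the model's weights onto the GPU for inference.
    model.to(device)
```

Inside `interrogate`, BLIP captioning runs first; BLIP is then moved to RAM and
CLIP moved to VRAM before the CLIP pass, so only one model occupies VRAM at a
time.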
- upgrade `chain` to take a `min_count` parameter so it won't early-out until it has considered at least `min_count` flavors (see the sketch after this list)
- the `interrogate` method ("best" mode) now also checks the classic and fast results and uses their output if it scores better
- fix a bug where the `config.download_cache` option was not being used!
- add notes on the `Config` object to the README
- auto-download the cache files from Hugging Face
- experimental negative prompt mode
- slight quality and performance improvements to "best" mode
- analyze tab in Colab and `run_gradio` to get a table of ranked terms
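
A rough sketch of the `min_count` behavior from the first bullet, assuming a
greedy chain loop and a caller-supplied `score` function; the real `chain` in
the library differs in its details:

```python
from typing import Callable, List

def chain(
    candidates: List[str],
    score: Callable[[str], float],
    min_count: int = 8,
    max_count: int = 32,
) -> List[str]:
    """Greedily extend a prompt with the best-scoring flavor each step,
    but don't allow an early-out until at least min_count flavors have
    been considered."""
    prompt: List[str] = []
    best_prompt: List[str] = []
    best_score = float("-inf")
    for _ in range(max_count):
        remaining = [c for c in candidates if c not in prompt]
        if not remaining:
            break
        # Pick the flavor that scores highest when appended.
        pick = max(remaining, key=lambda c: score(", ".join(prompt + [c])))
        prompt.append(pick)
        new_score = score(", ".join(prompt))
        if new_score > best_score:
            best_score, best_prompt = new_score, list(prompt)
        elif len(prompt) >= min_count:
            break  # score stopped improving and min_count was reached
    return best_prompt
```

Without `min_count`, the loop could stop at the first flavor that fails to
improve the score; the floor lets it explore further before giving up, while
still returning the best-scoring prompt seen.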
The CLIP Interrogator uses the OpenAI CLIP models to test a given image against a variety of artists, mediums, and styles to study how the different models see the content of the image. It also combines the results with a BLIP caption to suggest a text prompt for creating more images similar to the one given.
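
For context, basic usage of the library looks roughly like this (model name
taken from the project README; adjust for your install):

```python
from PIL import Image
from clip_interrogator import Config, Interrogator

image = Image.open("example.jpg").convert("RGB")
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))

print(ci.interrogate(image))          # 'best' mode
print(ci.interrogate_classic(image))  # classic mode
print(ci.interrogate_fast(image))     # fast mode
```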