In `load_clip_model`, the code used to check whether a GPU is being used by
testing `config.device == "cuda"`. That works as long as every caller passes
the device as a str. Unfortunately, many users (including the
`run_{cli,gradio}.py` scripts) instead pass a `torch.device`, and
`torch.device("cuda") != "cuda"`.
This commit makes it compare `device.type` instead, which is always a string,
so the condition passes and float16 is used when possible.
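For illustration, a minimal sketch of the idea (the helper name is hypothetical, not the actual code):

```python
def device_type(device) -> str:
    """Return the device type as a plain string for either input style.

    Hypothetical helper illustrating the fix: a `torch.device` exposes a
    `.type` attribute (e.g. "cuda"), while a plain str like "cuda:0"
    needs any index suffix stripped.
    """
    if hasattr(device, "type"):  # torch.device, or anything device-like
        return device.type
    return str(device).split(":")[0]
```

Comparing the normalized type against `"cuda"` then works no matter which form the user passed in.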
This changes `Interrogator` to only load BLIP to VRAM on init, and leave CLIP in
RAM until it's needed.
When `interrogate` is first called, it does BLIP inference, unloads it, loads
CLIP, then does CLIP inference. 'Unloaded' in this case just means 'in RAM'.
With this change I can run classic/fast interrogation on 4GB of VRAM; 'best'
is still a little too big, however.
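The load/unload dance can be sketched roughly like this (the class and method bodies are illustrative, not the real `Interrogator` API):

```python
import torch
import torch.nn as nn

class LazyInterrogator:
    """Rough sketch of the scheme: only one big model in VRAM at a time.

    `blip` and `clip` stand in for the real models; this class and its
    names are assumptions for illustration, not the actual code.
    """

    def __init__(self, blip: nn.Module, clip: nn.Module, device: str = "cuda"):
        self.device = device
        self.blip = blip.to(device)  # BLIP goes to VRAM on init
        self.clip = clip.to("cpu")   # CLIP waits in RAM

    def interrogate(self, image: torch.Tensor):
        caption = self.blip(image)             # BLIP inference in VRAM
        self.blip = self.blip.to("cpu")        # 'unload' = move back to RAM
        torch.cuda.empty_cache()               # release the cached VRAM
        self.clip = self.clip.to(self.device)  # now load CLIP to VRAM
        return caption, self.clip(image)
```

`torch.cuda.empty_cache()` is what actually returns the freed blocks to the driver; moving the module to CPU alone leaves them in PyTorch's caching allocator.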
This commit also includes automatic `black` formatting and extra type hints,
which can be removed if you want.
- upgrade `chain` to take a `min_count` parameter so it won't early-out until it has considered at least `min_count` flavors
- `interrogate` method ("best" mode) also checks the classic and fast results and uses their output if it ranks better
- fix bug where the `config.download_cache` option was not being used!
- add notes on Config object to readme
- auto download the cache files from huggingface
- experimental negative prompt mode
- slight quality and performance improvement to best mode
- add analyze tab in Colab and `run_gradio` to get a table of ranked terms
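As an example of the `min_count` behavior, here is a toy greedy chain with that guard (the signature and scoring are assumptions, not the actual implementation):

```python
def chain(candidates, score, min_count=2, max_count=32):
    """Greedily append the best-scoring candidate each round.

    Hypothetical sketch: without `min_count`, the loop would stop as soon
    as the score stops improving; with it, the early-out is suppressed
    until at least `min_count` flavors have been taken.
    """
    best, best_score = [], float("-inf")
    for i in range(max_count):
        scored = [(score(best + [c]), c) for c in candidates if c not in best]
        if not scored:
            break
        s, c = max(scored)
        if s <= best_score and i >= min_count:
            break  # early-out allowed only after min_count flavors
        best.append(c)
        best_score = max(best_score, s)
    return best
```

With `score=sum` and candidates `[1, -5, -6]`, `min_count=1` stops at `[1]`, while `min_count=2` forces a second flavor to be taken, yielding `[1, -5]`.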