
[TASK] Extend documentation by describing usage of LiteLLM integration

pull/79/head
Chris, 1 year ago
commit 8472ae52f8

1 changed file: client/README.md (24 additions)
@@ -72,6 +72,30 @@ TAKEAWAYS:
3. The field of AI is moving faster than ever as a result of GenAI breakthroughs.
```
## LLM Providers / LocalLLMs
`fabric` uses LiteLLM to support a wide range of LLM providers and models. Specify the desired provider and model name on the command line, and `fabric` dynamically loads it through LiteLLM, so you can switch between providers and take advantage of each model's capabilities without changing anything else.
> Make sure the appropriate environment variables are set for your chosen provider. For instance, set `HUGGINGFACE_API_KEY` if you intend to use the Hugging Face API.
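
As a sketch of that setup step, you can export the key in your shell before invoking `fabric` (the token value below is a placeholder, not a real credential):

```shell
# Export the provider's API key so fabric (and LiteLLM underneath it)
# can read it from the environment. Replace the placeholder with your token.
export HUGGINGFACE_API_KEY="hf_your_token_here"

# Confirm the variable is set and will be visible to child processes:
echo "${HUGGINGFACE_API_KEY:+HUGGINGFACE_API_KEY is set}"
```

Adding the `export` line to your `~/.bashrc` or `~/.zshrc` makes it persist across sessions.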
### Usage
To specify the provider and model when running `fabric`, use the `--model` option followed by the provider and model name.
```shell
fabric --model <provider>/<model>
```
Examples:
```shell
# Use an Ollama model
fabric --model ollama/openchat
```
```shell
# Use a Hugging Face Model
fabric --model huggingface/WizardLM/WizardCoder-Python-34B-V1.0
```
## Contributing
We welcome contributions to Fabric, including improvements and feature additions to this client.
