From 8472ae52f8a3af298623f887ba174206bc89fbc6 Mon Sep 17 00:00:00 2001
From: Chris
Date: Sun, 11 Feb 2024 11:30:52 +0100
Subject: [PATCH] [TASK] Extend documentation by describing usage of LiteLLM
 integration

---
 client/README.md | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/client/README.md b/client/README.md
index 1cc959c..55d710d 100644
--- a/client/README.md
+++ b/client/README.md
@@ -72,6 +72,30 @@ TAKEAWAYS:
 3. The field of AI is moving faster than ever as a result of GenAI breakthroughs.
 ```
 
+## LLM Providers / LocalLLMs
+`fabric` uses LiteLLM to support a wide range of LLM providers and models. You can switch between providers and models from the command line by specifying the desired provider and model name; `fabric` then loads the requested provider and model dynamically through LiteLLM.
+
+> Make sure the appropriate environment variables are configured for your provider. For example, set `HUGGINGFACE_API_KEY` if you want to use the Hugging Face API.
+
+### Usage
+
+To specify the provider and model when running `fabric`, use the `--model` option followed by the provider and model name:
+
+```shell
+fabric --model <provider>/<model>
+```
+Examples:
+
+```shell
+# Use an Ollama model
+fabric --model ollama/openchat
+```
+
+```shell
+# Use a Hugging Face model
+fabric --model huggingface/WizardLM/WizardCoder-Python-34B-V1.0
+```
+
 ## Contributing
 
 We welcome contributions to Fabric, including improvements and feature additions to this client.