From c033e650e15a173f9415bfeb5d3a0478b854e4e3 Mon Sep 17 00:00:00 2001
From: Chris
Date: Sat, 17 Feb 2024 18:30:24 +0100
Subject: [PATCH] [TASK] Extend README with a LiteLLM section

---
 README.md | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/README.md b/README.md
index 920562b..0110518 100644
--- a/README.md
+++ b/README.md
@@ -40,6 +40,7 @@
 - [CLI-native](#cli-native)
 - [Directly calling Patterns](#directly-calling-patterns)
 - [Examples](#examples)
+- [LLM Providers / LocalLLMs](#llm-providers--localllms)
 - [Meta](#meta)
 - [Primary contributors](#primary-contributors)
 
@@ -422,6 +423,30 @@ The content features a conversation between two individuals discussing various t
 10. Nietzsche's walks
 ```
 
+## LLM Providers / LocalLLMs
+`fabric` uses LiteLLM to support a wide range of LLM providers and models. Specify the desired provider and model name on the command line, and `fabric` loads the matching LiteLLM provider dynamically, so you can switch between hosted APIs and local models without any other configuration changes.
+
+> Make sure the provider's environment variables are configured, e.g. set `HUGGINGFACE_API_KEY` if you intend to use the Hugging Face API.
+
+### Usage
+
+To select the provider and model when running `fabric`, pass the `--model` option followed by the provider and model name:
+
+```shell
+fabric --model <provider>/<model>
+```
+Examples:
+
+```shell
+# Use an Ollama model
+fabric --model ollama/openchat
+```
+
+```shell
+# Use a Hugging Face model
+fabric --model huggingface/WizardLM/WizardCoder-Python-34B-V1.0
+```
+
 ## Meta
 
 > [!NOTE]
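
The environment-variable note in the patch can be made concrete with a short shell sketch (not part of the diff above). The `HUGGINGFACE_API_KEY` variable name and the model path are taken from the patch; the token value below is a placeholder, not a real key:

```shell
# Minimal sketch: export the key the Hugging Face API expects,
# then route the request through LiteLLM's huggingface/ prefix.
# The token value is a placeholder; substitute your own.
export HUGGINGFACE_API_KEY="hf_xxxxxxxxxxxxxxxx"

fabric --model huggingface/WizardLM/WizardCoder-Python-34B-V1.0
```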
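
The Ollama route needs no API key, but it assumes an Ollama server is running locally (LiteLLM's `ollama/` prefix targets `http://localhost:11434` by default) and that the model has already been pulled. A sketch, assuming Ollama is installed and reusing the `openchat` model name from the patch:

```shell
# Download the model so the local Ollama server can serve it.
ollama pull openchat

# fabric resolves ollama/openchat against the local Ollama server via LiteLLM.
fabric --model ollama/openchat
```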