# The `fabric` client

This is the primary `fabric` client, which has multiple modes of operation.
## Client modes

You can use the client in three different modes:
- **Local Only**: You can use the client without a server; it will use the patterns it has downloaded from this repository, or ones that you specify.
- **Local Server**: You can run your own instance of a Fabric Mill locally (on a private IP), which you can then connect to and use.
- **Remote Server**: You can specify a remote server that your client commands will then call (see the illustrative sketch after this list).
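As a purely illustrative sketch of the three modes, the invocations below use a hypothetical `--server` flag; this document doesn't specify the actual option for pointing the client at a Mill, so treat it as a placeholder:

```bash
# Local Only: no server involved; patterns are read from disk (the default)
pbpaste | fabric --pattern summarize

# Local Server: a Fabric Mill on a private IP (hypothetical --server flag)
pbpaste | fabric --server http://192.168.1.50:13337 --pattern summarize

# Remote Server: the same hypothetical flag pointed at a remote host
pbpaste | fabric --server https://mill.example.com --pattern summarize
```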
## Client features

- **Standalone Mode**: Run without needing a server.
- **Clipboard Integration**: Copy responses to the clipboard.
- **File Output**: Save responses to files for later reference (see the example after this list).
- **Pattern Module**: Utilize specific patterns for different types of analysis.
- **Server Mode**: Operate the tool in server mode to control your own patterns and let your other apps access it.
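For instance, the clipboard and file-output features correspond to the `--copy` and `--output` flags documented under Usage below, and should be combinable in a single call along these lines:

```bash
# Summarize clipboard contents, copy the result back to the clipboard,
# and also save it to a file for later reference
pbpaste | fabric --pattern summarize --copy --output summary.txt
```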
## Installation

1. If you have this repository downloaded, you already have the client. If not, clone it:

   ```bash
   git clone git@github.com:danielmiessler/fabric.git
   ```

2. Navigate to the client's directory:

   ```bash
   cd client
   ```

3. Install poetry (if you don't have it already):

   ```bash
   pip3 install poetry
   ```

4. Install the required packages:

   ```bash
   poetry install
   ```

5. Activate the virtual environment:

   ```bash
   poetry shell
   ```
6. Copy to path:

   ```bash
   echo export PATH=$PATH:$(pwd) >> ~/.bashrc # or .zshrc
   ```
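   To pick up the new `PATH` entry in your current shell and confirm the client resolves, you can do something like:

   ```bash
   source ~/.bashrc   # or ~/.zshrc, depending on your shell
   which fabric       # should print the client's location if the PATH update worked
   ```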
7. Copy your OpenAI API key to the `.env` file in your `~/.config/fabric/` directory (or create that file and put it in):

   ```bash
   OPENAI_API_KEY=[Your_API_Key]
   ```
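   If that file doesn't exist yet, one way to create it from the shell (substituting your actual key for the placeholder):

   ```bash
   mkdir -p ~/.config/fabric
   echo 'OPENAI_API_KEY=[Your_API_Key]' >> ~/.config/fabric/.env
   ```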
## Usage

To use `fabric`, call it with your desired options (remember to activate the virtual environment with `poetry shell`, step 5 above):

```bash
fabric [options]
```

Options include:

- `--pattern`, `-p`: Select the module for analysis.
- `--stream`, `-s`: Stream output to another application.
- `--output`, `-o`: Save the response to a file.
- `--copy`, `-c`: Copy the response to the clipboard.
Example:

```bash
# Pasting in an article about LLMs
pbpaste | fabric --pattern extract_wisdom --output wisdom.txt | fabric --pattern summarize --stream
```

```
ONE SENTENCE SUMMARY:
- The content covered the basics of LLMs and how they are used in everyday practice.

MAIN POINTS:
1. LLMs are large language models, and typically use the transformer architecture.
2. LLMs used to be used for story generation, but they're now used for many AI applications.
3. They are vulnerable to hallucination if not configured correctly, so be careful.

TAKEAWAYS:
1. It's possible to use LLMs for multiple AI use cases.
2. It's important to validate that the results you're receiving are correct.
3. The field of AI is moving faster than ever as a result of GenAI breakthroughs.
```
## LLM Providers / Local LLMs

`fabric` leverages LiteLLM so that users can seamlessly use a wide array of LLM providers and models. With LiteLLM, you can switch between providers and models by passing the desired provider and model name on the command line; `fabric` dynamically loads the specified provider and model, giving you access to the distinct capabilities each one offers.

Please ensure that you have configured the appropriate environment variables. For instance, set `HUGGINGFACE_API_KEY` if you intend to use the Hugging Face API.
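For example, in a POSIX shell you could export the variable before invoking `fabric` (placeholder value shown):

```bash
export HUGGINGFACE_API_KEY=[Your_HF_Token]
```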
Usage:

To specify the provider and model when running `fabric`, use the `--model` option followed by the provider and model name:

```bash
fabric --model <provider>/<model>
```
Examples:

```bash
# Use an Ollama model
fabric --model ollama/openchat

# Use a Hugging Face model
fabric --model huggingface/WizardLM/WizardCoder-Python-34B-V1.0
```
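Note that for the Ollama example the model needs to be available locally first; assuming a standard Ollama installation, that looks like:

```bash
ollama pull openchat   # download the model once
ollama serve           # ensure the Ollama server is running (often already a background service)
```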
## Contributing

We welcome contributions to Fabric, including improvements and feature additions to this client.
## Credits

The `fabric` client was created by Jonathan Dunn and Daniel Miessler.