{ "cells": [ { "cell_type": "markdown", "id": "dfe37963-1af6-44fc-a841-8e462443f5e6", "metadata": {}, "source": [ "## Expert Knowledge Worker\n", "\n", "## Extra Jupyter Notebook - Day 4.5 - switch out Chroma for FAISS!\n", "\n", "FAISS is Facebook AI Similarity Search" ] }, { "cell_type": "code", "execution_count": null, "id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77", "metadata": {}, "outputs": [], "source": [ "# imports\n", "\n", "import os\n", "import glob\n", "from dotenv import load_dotenv\n", "import gradio as gr" ] }, { "cell_type": "code", "execution_count": null, "id": "802137aa-8a74-45e0-a487-d1974927d7ca", "metadata": {}, "outputs": [], "source": [ "# imports for langchain\n", "\n", "from langchain.document_loaders import DirectoryLoader, TextLoader\n", "from langchain.text_splitter import CharacterTextSplitter\n", "from langchain.schema import Document\n", "from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n", "# from langchain_chroma import Chroma\n", "from langchain.vectorstores import FAISS\n", "import numpy as np\n", "from sklearn.manifold import TSNE\n", "import plotly.graph_objects as go\n", "from langchain.memory import ConversationBufferMemory\n", "from langchain.chains import ConversationalRetrievalChain" ] }, { "cell_type": "code", "execution_count": null, "id": "58c85082-e417-4708-9efe-81a5d55d1424", "metadata": {}, "outputs": [], "source": [ "# price is a factor for our company, so we're going to use a low cost model\n", "\n", "MODEL = \"gpt-4o-mini\"\n", "db_name = \"vector_db\"" ] }, { "cell_type": "code", "execution_count": null, "id": "ee78efcb-60fe-449e-a944-40bab26261af", "metadata": {}, "outputs": [], "source": [ "# Load environment variables in a file called .env\n", "\n", "load_dotenv()\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')" ] }, { "cell_type": "code", "execution_count": null, "id": "730711a9-6ffe-4eee-8f48-d6cfb7314905", "metadata": {}, "outputs": [], "source": [ "# Read in documents using LangChain's loaders\n", "# Take everything in all the sub-folders of our knowledgebase\n", "\n", "folders = glob.glob(\"knowledge-base/*\")\n", "\n", "# With thanks to CG and Jon R, students on the course, for this fix needed for some users \n", "text_loader_kwargs = {'encoding': 'utf-8'}\n", "# If that doesn't work, some Windows users might need to uncomment the next line instead\n", "# text_loader_kwargs={'autodetect_encoding': True}\n", "\n", "documents = []\n", "for folder in folders:\n", " doc_type = os.path.basename(folder)\n", " loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n", " folder_docs = loader.load()\n", " for doc in folder_docs:\n", " doc.metadata[\"doc_type\"] = doc_type\n", " documents.append(doc)" ] }, { "cell_type": "code", "execution_count": null, "id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a", "metadata": {}, "outputs": [], "source": [ "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", "chunks = text_splitter.split_documents(documents)" ] }, { "cell_type": "code", "execution_count": null, "id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb", "metadata": {}, "outputs": [], "source": [ "len(chunks)" ] }, { "cell_type": "code", "execution_count": null, "id": "2c54b4b6-06da-463d-bee7-4dd456c2b887", "metadata": {}, "outputs": [], "source": [ "doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n", "print(f\"Document types found: {', '.join(doc_types)}\")" ] }, { "cell_type": "markdown", "id": "77f7d2a6-ccfa-425b-a1c3-5e55b23bd013", "metadata": {}, "source": [ "## A sidenote on Embeddings, and \"Auto-Encoding LLMs\"\n", "\n", "We will be mapping each chunk of text into a Vector that represents the meaning of the text, known as an embedding.\n", "\n", "OpenAI offers a model to do this, which we will use by calling their API with some LangChain code.\n", "\n", "This model is an example of an \"Auto-Encoding LLM\" which generates an output given a complete input.\n", "It's different to all the other LLMs we've discussed today, which are known as \"Auto-Regressive LLMs\", and generate future tokens based only on past context.\n", "\n", "Another example of an Auto-Encoding LLMs is BERT from Google. In addition to embedding, Auto-encoding LLMs are often used for classification.\n", "\n", "### Sidenote\n", "\n", "In week 8 we will return to RAG and vector embeddings, and we will use an open-source vector encoder so that the data never leaves our computer - that's an important consideration when building enterprise systems and the data needs to remain internal." ] }, { "cell_type": "code", "execution_count": null, "id": "78998399-ac17-4e28-b15f-0b5f51e6ee23", "metadata": {}, "outputs": [], "source": [ "# Put the chunks of data into a Vector Store that associates a Vector Embedding with each chunk\n", "# Chroma is a popular open source Vector Database based on SQLLite\n", "\n", "embeddings = OpenAIEmbeddings()\n", "\n", "# Create vectorstore\n", "\n", "# BEFORE\n", "# vectorstore = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=db_name)\n", "\n", "# AFTER\n", "vectorstore = FAISS.from_documents(chunks, embedding=embeddings)\n", "\n", "total_vectors = vectorstore.index.ntotal\n", "dimensions = vectorstore.index.d\n", "\n", "print(f\"There are {total_vectors} vectors with {dimensions:,} dimensions in the vector store\")" ] }, { "cell_type": "code", "execution_count": null, "id": "057868f6-51a6-4087-94d1-380145821550", "metadata": {}, "outputs": [], "source": [ "# Prework\n", "vectors = []\n", "documents = []\n", "doc_types = []\n", "colors = []\n", "color_map = {'products':'blue', 'employees':'green', 'contracts':'red', 'company':'orange'}\n", "\n", "for i in range(total_vectors):\n", " vectors.append(vectorstore.index.reconstruct(i))\n", " doc_id = vectorstore.index_to_docstore_id[i]\n", " document = vectorstore.docstore.search(doc_id)\n", " documents.append(document.page_content)\n", " doc_type = document.metadata['doc_type']\n", " doc_types.append(doc_type)\n", " colors.append(color_map[doc_type])\n", " \n", "vectors = np.array(vectors)" ] }, { "cell_type": "markdown", "id": "b0d45462-a818-441c-b010-b85b32bcf618", "metadata": {}, "source": [ "## Visualizing the Vector Store\n", "\n", "Let's take a minute to look at the documents and their embedding vectors to see what's going on.\n", "\n", "(As a sidenote, what we're really looking at here is the distribution of the Vectors generated by OpenAIEmbeddings, retrieved from FAISS. So there's no surprise that they look the same whether they are \"from\" FAISS or Chroma.)" ] }, { "cell_type": "code", "execution_count": null, "id": "427149d5-e5d8-4abd-bb6f-7ef0333cca21", "metadata": {}, "outputs": [], "source": [ "# We humans find it easier to visalize things in 2D!\n", "# Reduce the dimensionality of the vectors to 2D using t-SNE\n", "# (t-distributed stochastic neighbor embedding)\n", "\n", "tsne = TSNE(n_components=2, random_state=42)\n", "reduced_vectors = tsne.fit_transform(vectors)\n", "\n", "# Create the 2D scatter plot\n", "fig = go.Figure(data=[go.Scatter(\n", " x=reduced_vectors[:, 0],\n", " y=reduced_vectors[:, 1],\n", " mode='markers',\n", " marker=dict(size=5, color=colors, opacity=0.8),\n", " text=[f\"Type: {t}
Text: {d[:100]}...\" for t, d in zip(doc_types, documents)],\n", " hoverinfo='text'\n", ")])\n", "\n", "fig.update_layout(\n", " title='2D FAISS Vector Store Visualization',\n", " scene=dict(xaxis_title='x',yaxis_title='y'),\n", " width=800,\n", " height=600,\n", " margin=dict(r=20, b=10, l=10, t=40)\n", ")\n", "\n", "fig.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "e1418e88-acd5-460a-bf2b-4e6efc88e3dd", "metadata": {}, "outputs": [], "source": [ "# Let's try 3D!\n", "\n", "tsne = TSNE(n_components=3, random_state=42)\n", "reduced_vectors = tsne.fit_transform(vectors)\n", "\n", "# Create the 3D scatter plot\n", "fig = go.Figure(data=[go.Scatter3d(\n", " x=reduced_vectors[:, 0],\n", " y=reduced_vectors[:, 1],\n", " z=reduced_vectors[:, 2],\n", " mode='markers',\n", " marker=dict(size=5, color=colors, opacity=0.8),\n", " text=[f\"Type: {t}
Text: {d[:100]}...\" for t, d in zip(doc_types, documents)],\n", " hoverinfo='text'\n", ")])\n", "\n", "fig.update_layout(\n", " title='3D FAISS Vector Store Visualization',\n", " scene=dict(xaxis_title='x', yaxis_title='y', zaxis_title='z'),\n", " width=900,\n", " height=700,\n", " margin=dict(r=20, b=10, l=10, t=40)\n", ")\n", "\n", "fig.show()" ] }, { "cell_type": "markdown", "id": "9468860b-86a2-41df-af01-b2400cc985be", "metadata": {}, "source": [ "## Time to use LangChain to bring it all together" ] }, { "cell_type": "code", "execution_count": null, "id": "129c7d1e-0094-4479-9459-f9360b95f244", "metadata": {}, "outputs": [], "source": [ "# create a new Chat with OpenAI\n", "llm = ChatOpenAI(temperature=0.7, model_name=MODEL)\n", "\n", "# set up the conversation memory for the chat\n", "memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n", "\n", "# the retriever is an abstraction over the VectorStore that will be used during RAG\n", "retriever = vectorstore.as_retriever()\n", "\n", "# putting it together: set up the conversation chain with the GPT 3.5 LLM, the vector store and memory\n", "conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)" ] }, { "cell_type": "code", "execution_count": null, "id": "968e7bf2-e862-4679-a11f-6c1efb6ec8ca", "metadata": {}, "outputs": [], "source": [ "query = \"Can you describe Insurellm in a few sentences\"\n", "result = conversation_chain.invoke({\"question\":query})\n", "print(result[\"answer\"])" ] }, { "cell_type": "code", "execution_count": null, "id": "987dadc5-5d09-4059-8f2e-733d66ecc696", "metadata": {}, "outputs": [], "source": [ "memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n", "conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)" ] }, { "cell_type": "markdown", "id": "bbbcb659-13ce-47ab-8a5e-01b930494964", "metadata": {}, "source": [ "## Now we will bring this up in Gradio using the Chat interface -\n", "\n", "A quick and easy way to prototype a chat with an LLM" ] }, { "cell_type": "code", "execution_count": null, "id": "c3536590-85c7-4155-bd87-ae78a1467670", "metadata": {}, "outputs": [], "source": [ "# Wrapping that in a function\n", "\n", "def chat(message, history):\n", " result = conversation_chain.invoke({\"question\": message})\n", " return result[\"answer\"]" ] }, { "cell_type": "code", "execution_count": null, "id": "b252d8c1-61a8-406d-b57a-8f708a62b014", "metadata": {}, "outputs": [], "source": [ "# And in Gradio:\n", "\n", "view = gr.ChatInterface(chat).launch()" ] }, { "cell_type": "code", "execution_count": null, "id": "5435b2b9-935c-48cd-aaf3-73a837ecde49", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.11" } }, "nbformat": 4, "nbformat_minor": 5 }