From 86cc68e255fef55246c5adce7b71b132fb8a4f4b Mon Sep 17 00:00:00 2001 From: jenkins Date: Sat, 22 Feb 2025 22:03:41 +0400 Subject: [PATCH 01/35] mac selenium --- .../day1-selenium-for-javascript-sites.ipynb | 415 +++++++----------- .../day1-selenium-lama-mac.ipynb | 384 ++++++++++++++++ 2 files changed, 535 insertions(+), 264 deletions(-) create mode 100644 week1/community-contributions/day1-selenium-lama-mac.ipynb diff --git a/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb b/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb index fd3a3ba..198de53 100644 --- a/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb +++ b/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb @@ -2,305 +2,143 @@ "cells": [ { "cell_type": "markdown", - "id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", + "id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", "metadata": {}, "source": [ - "# Instant Gratification!\n", - "\n", - "Let's build a useful LLM solution - in a matter of minutes.\n", - "\n", - "Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n", - "\n", - "Before starting, be sure to have followed the instructions in the \"README\" file, including creating your API key with OpenAI and adding it to the `.env` file.\n", - "\n", - "## If you're new to Jupyter Lab\n", - "\n", - "Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, like the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations.\n", - "\n", - "If you need to start again, go to Kernel menu >> Restart kernel." 
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
- "metadata": {},
- "outputs": [],
- "source": [
- "# imports\n",
+ "## An extra exercise for those who enjoy web scraping\n",
"\n",
- "import os\n",
- "import requests\n",
- "from dotenv import load_dotenv\n",
- "from bs4 import BeautifulSoup\n",
- "from IPython.display import Markdown, display\n",
- "from openai import OpenAI"
+ "You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. Please push your code afterwards so I can share it with other students!"
]
},
{
"cell_type": "markdown",
- "id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
+ "id": "c97ad592-c8be-4583-a19c-ac813e56f410",
"metadata": {},
"source": [
- "# Connecting to OpenAI\n",
- "\n",
- "The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n",
+ "## Mac Users\n",
"\n",
- "## Troubleshooting if you have problems:\n",
+ "I ran into a few challenges while setting this up on an Apple Silicon (M1) Mac. Execute the commands below in the macOS Terminal.\n",
"\n",
- "1. OpenAI takes a few minutes to register after you set up an account. If you receive an error about being over quota, try waiting a few minutes and try again.\n",
- "2. Also, double check you have the right kind of API token with the right permissions. You should find it on [this webpage](https://platform.openai.com/api-keys) and it should show with Permissions of \"All\".
If not, try creating another key by:\n",
- "- Pressing \"Create new secret key\" on the top right\n",
- "- Select **Owned by:** you, **Project:** Default project, **Permissions:** All\n",
- "- Click Create secret key, and use that new key in the code and the `.env` file (it might take a few minutes to activate)\n",
- "- Do a Kernel >> Restart kernel, and execute the cells in this Jupyter lab starting at the top\n",
- "4. As a fallback, replace the line `openai = OpenAI()` with `openai = OpenAI(api_key=\"your-key-here\")` - while it's not recommended to hard code tokens in Jupyter lab, because then you can't share your lab with others, it's a workaround for now\n",
- "5. Contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
- "\n",
- "Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point."
+ "1. Download the ChromeDriver build that matches your installed Chrome version.\n",
+ "2. Unzip it and move the chromedriver binary onto your PATH.\n",
+ "3. Remove the macOS quarantine extended attribute so the binary is allowed to run."
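Once those three steps are done, it can save debugging time to confirm the driver binary is actually in place and executable before Selenium tries to launch it. A minimal sanity-check sketch; the function name is illustrative and it assumes the `/usr/local/bin/chromedriver` install location used in the terminal commands:

```python
# Quick sanity check that the driver binary is in place and executable.
# Assumption: chromedriver was installed to /usr/local/bin/chromedriver.
import os

def chromedriver_ready(path="/usr/local/bin/chromedriver"):
    """Return (found, executable) for the driver binary at `path`."""
    found = os.path.exists(path)
    executable = found and os.access(path, os.X_OK)
    return found, executable

# Report a helpful message now rather than a cryptic Selenium error later.
found, executable = chromedriver_ready()
if not found:
    print("chromedriver not found - repeat the download/unzip steps above")
elif not executable:
    print("chromedriver is not executable - re-run the chmod and xattr steps")
```

If both checks pass, the `chromedriver --version` command below should also succeed.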
] }, { - "cell_type": "code", - "execution_count": null, - "id": "7b87cadb-d513-4303-baee-a37b6f938e4d", + "cell_type": "markdown", + "id": "b635b345-b000-48cc-8a7f-7df279a489a3", "metadata": {}, - "outputs": [], "source": [ - "# Load environment variables in a file called .env\n", - "\n", - "load_dotenv()\n", - "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY','your-key-if-not-using-env')\n", - "openai = OpenAI()" + "cd ~/Downloads\n", + "wget https://storage.googleapis.com/chrome-for-testing-public/133.0.6943.126/mac-arm64/chromedriver-mac-arm64.zip\n", + "unzip chromedriver-mac-arm64.zip\n", + "sudo mv chromedriver-mac-arm64/chromedriver /usr/local/bin/\n", + "chmod +x /usr/local/bin/chromedriver\n", + "cd /usr/local/bin/\n", + "xattr -d com.apple.quarantine chromedriver\n", + "cd \n", + "chromedriver --version" ] }, { "cell_type": "code", - "execution_count": null, - "id": "c5e793b2-6775-426a-a139-4848291d0463", + "execution_count": 1, + "id": "17c7c79a-8ae0-4f5d-a7c8-c54aa7ba90fd", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Requirement already satisfied: selenium in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (4.29.0)\n", + "Requirement already satisfied: urllib3<3,>=1.26 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from urllib3[socks]<3,>=1.26->selenium) (2.3.0)\n", + "Requirement already satisfied: trio~=0.17 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium) (0.29.0)\n", + "Requirement already satisfied: trio-websocket~=0.9 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium) (0.12.1)\n", + "Requirement already satisfied: certifi>=2021.10.8 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium) (2025.1.31)\n", + "Requirement already satisfied: typing_extensions~=4.9 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium) (4.12.2)\n", + "Requirement already satisfied: 
websocket-client~=1.8 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium) (1.8.0)\n", + "Requirement already satisfied: attrs>=23.2.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium) (25.1.0)\n", + "Requirement already satisfied: sortedcontainers in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium) (2.4.0)\n", + "Requirement already satisfied: idna in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium) (3.10)\n", + "Requirement already satisfied: outcome in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium) (1.3.0.post0)\n", + "Requirement already satisfied: sniffio>=1.3.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium) (1.3.1)\n", + "Requirement already satisfied: wsproto>=0.14 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio-websocket~=0.9->selenium) (1.2.0)\n", + "Requirement already satisfied: pysocks!=1.5.7,<2.0,>=1.5.6 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from urllib3[socks]<3,>=1.26->selenium) (1.7.1)\n", + "Requirement already satisfied: h11<1,>=0.9.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from wsproto>=0.14->trio-websocket~=0.9->selenium) (0.14.0)\n", + "Requirement already satisfied: undetected-chromedriver in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (3.5.5)\n", + "Requirement already satisfied: selenium>=4.9.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from undetected-chromedriver) (4.29.0)\n", + "Requirement already satisfied: requests in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from undetected-chromedriver) (2.32.3)\n", + "Requirement already satisfied: websockets in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from undetected-chromedriver) (14.2)\n", + "Requirement already satisfied: urllib3<3,>=1.26 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages 
(from urllib3[socks]<3,>=1.26->selenium>=4.9.0->undetected-chromedriver) (2.3.0)\n", + "Requirement already satisfied: trio~=0.17 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium>=4.9.0->undetected-chromedriver) (0.29.0)\n", + "Requirement already satisfied: trio-websocket~=0.9 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium>=4.9.0->undetected-chromedriver) (0.12.1)\n", + "Requirement already satisfied: certifi>=2021.10.8 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium>=4.9.0->undetected-chromedriver) (2025.1.31)\n", + "Requirement already satisfied: typing_extensions~=4.9 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium>=4.9.0->undetected-chromedriver) (4.12.2)\n", + "Requirement already satisfied: websocket-client~=1.8 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium>=4.9.0->undetected-chromedriver) (1.8.0)\n", + "Requirement already satisfied: charset_normalizer<4,>=2 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from requests->undetected-chromedriver) (3.4.1)\n", + "Requirement already satisfied: idna<4,>=2.5 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from requests->undetected-chromedriver) (3.10)\n", + "Requirement already satisfied: attrs>=23.2.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium>=4.9.0->undetected-chromedriver) (25.1.0)\n", + "Requirement already satisfied: sortedcontainers in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium>=4.9.0->undetected-chromedriver) (2.4.0)\n", + "Requirement already satisfied: outcome in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium>=4.9.0->undetected-chromedriver) (1.3.0.post0)\n", + "Requirement already satisfied: sniffio>=1.3.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium>=4.9.0->undetected-chromedriver) (1.3.1)\n", + "Requirement 
already satisfied: wsproto>=0.14 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio-websocket~=0.9->selenium>=4.9.0->undetected-chromedriver) (1.2.0)\n", + "Requirement already satisfied: pysocks!=1.5.7,<2.0,>=1.5.6 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from urllib3[socks]<3,>=1.26->selenium>=4.9.0->undetected-chromedriver) (1.7.1)\n", + "Requirement already satisfied: h11<1,>=0.9.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from wsproto>=0.14->trio-websocket~=0.9->selenium>=4.9.0->undetected-chromedriver) (0.14.0)\n", + "Requirement already satisfied: beautifulsoup4 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (4.13.3)\n", + "Requirement already satisfied: soupsieve>1.2 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from beautifulsoup4) (2.5)\n", + "Requirement already satisfied: typing-extensions>=4.0.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from beautifulsoup4) (4.12.2)\n" + ] + } + ], "source": [ - "# A class to represent a Webpage\n", - "\n", - "class Website:\n", - " url: str\n", - " title: str\n", - " text: str\n", - "\n", - " def __init__(self, url):\n", - " self.url = url\n", - " response = requests.get(url)\n", - " soup = BeautifulSoup(response.content, 'html.parser')\n", - " self.title = soup.title.string if soup.title else \"No title found\"\n", - " for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", - " irrelevant.decompose()\n", - " self.text = soup.body.get_text(separator=\"\\n\", strip=True)" + "!pip install selenium\n", + "!pip install undetected-chromedriver\n", + "!pip install beautifulsoup4" ] }, { "cell_type": "code", - "execution_count": null, - "id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", + "execution_count": 2, + "id": "c10bd630-2dfd-4572-8c21-2dc4c6a372ab", "metadata": {}, "outputs": [], "source": [ - "# Let's try one out\n", - "\n", - "ed = Website(\"https://edwarddonner.com\")\n", - "print(ed.title)\n", - 
"print(ed.text)" - ] - }, - { - "cell_type": "markdown", - "id": "6a478a0c-2c53-48ff-869c-4d08199931e1", - "metadata": {}, - "source": [ - "## Types of prompts\n", - "\n", - "You may know this already - but if not, you will get very familiar with it!\n", - "\n", - "Models like GPT4o have been trained to receive instructions in a particular way.\n", - "\n", - "They expect to receive:\n", - "\n", - "**A system prompt** that tells them what task they are performing and what tone they should use\n", - "\n", - "**A user prompt** -- the conversation starter that they should reply to" + "from selenium import webdriver\n", + "from selenium.webdriver.chrome.service import Service\n", + "from selenium.webdriver.common.by import By\n", + "from selenium.webdriver.chrome.options import Options\n", + "from openai import OpenAI\n", + "import os\n", + "import requests\n", + "from dotenv import load_dotenv\n", + "from bs4 import BeautifulSoup\n", + "from IPython.display import Markdown, display\n", + "from openai import OpenAI" ] }, { "cell_type": "code", - "execution_count": null, - "id": "abdb8417-c5dc-44bc-9bee-2e059d162699", + "execution_count": 7, + "id": "6fb3641d-e9f8-4f5b-bb9d-ee0e971cccdb", "metadata": {}, "outputs": [], "source": [ + "OLLAMA_API = \"http://localhost:11434/api/chat\"\n", + "HEADERS = {\"Content-Type\": \"application/json\"}\n", + "MODEL = \"llama3.2\"\n", + "PATH_TO_CHROME_DRIVER = '/usr/local/bin/chromedriver'\n", "system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", "and provides a short summary, ignoring text that might be navigation related. \\\n", - "Respond in markdown.\"" + "Respond in markdown. 
Highlight all the products this website offers and also find when the website was created.\"\n"
]
},
{
"cell_type": "code",
- "execution_count": null,
- "id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
- "metadata": {},
- "outputs": [],
- "source": [
- "def user_prompt_for(website):\n",
- " user_prompt = f\"You are looking at a website titled {website.title}\"\n",
- " user_prompt += \"The contents of this website is as follows; \\\n",
- "please provide a short summary of this website in markdown. \\\n",
- "If it includes news or announcements, then summarize these too.\\n\\n\"\n",
- " user_prompt += website.text\n",
- " return user_prompt"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
- "metadata": {},
- "source": [
- "## Messages\n",
- "\n",
- "The API from OpenAI expects to receive messages in a particular structure.\n",
- "Many of the other APIs share this structure:\n",
- "\n",
- "```\n",
- "[\n",
- " {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
- " {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
- "]"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
- "metadata": {},
- "outputs": [],
- "source": [
- "def messages_for(website):\n",
- " return [\n",
- " {\"role\": \"system\", \"content\": system_prompt},\n",
- " {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
- " ]"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
- "metadata": {},
- "source": [
- "## Time to bring it together - the API for OpenAI is very simple!"
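The two-message structure described in the cells above can be exercised on its own, with no API call at all. A small sketch; the helper name is illustrative and not taken from the notebook:

```python
# The [system, user] message structure that chat-completion APIs expect,
# assembled as plain Python data so it can be inspected before any request.
def build_messages(system_prompt, user_prompt):
    """Assemble the two-message payload: one system message, one user message."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("You are a helpful summarizer.", "Summarize this page.")
assert [m["role"] for m in messages] == ["system", "user"]
```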
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", - "metadata": {}, - "outputs": [], - "source": [ - "def summarize(url):\n", - " website = Website(url)\n", - " response = openai.chat.completions.create(\n", - " model = \"gpt-4o-mini\",\n", - " messages = messages_for(website)\n", - " )\n", - " return response.choices[0].message.content" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", - "metadata": {}, - "outputs": [], - "source": [ - "summarize(\"https://edwarddonner.com\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "3d926d59-450e-4609-92ba-2d6f244f1342", - "metadata": {}, - "outputs": [], - "source": [ - "def display_summary(url):\n", - " summary = summarize(url)\n", - " display(Markdown(summary))" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "3018853a-445f-41ff-9560-d925d1774b2f", - "metadata": {}, - "outputs": [], - "source": [ - "display_summary(\"https://edwarddonner.com\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "45d83403-a24c-44b5-84ac-961449b4008f", - "metadata": {}, - "outputs": [], - "source": [ - "display_summary(\"https://cnn.com\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "75e9fd40-b354-4341-991e-863ef2e59db7", - "metadata": {}, - "outputs": [], - "source": [ - "display_summary(\"https://anthropic.com\")" - ] - }, - { - "cell_type": "markdown", - "id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", - "metadata": {}, - "source": [ - "## An extra exercise for those who enjoy web scraping\n", - "\n", - "You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. 
For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. Please push your code afterwards so I can share it with other students!" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "52ae98bb", - "metadata": {}, - "outputs": [], - "source": [ - "display_summary(\"https://openai.com\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, + "execution_count": 8, "id": "5d57e958", "metadata": {}, "outputs": [], "source": [ - "#Parse webpages which is designed using JavaScript heavely\n", - "# download the chorme driver from here as per your version of chrome - https://developer.chrome.com/docs/chromedriver/downloads\n", - "from selenium import webdriver\n", - "from selenium.webdriver.chrome.service import Service\n", - "from selenium.webdriver.common.by import By\n", - "from selenium.webdriver.chrome.options import Options\n", - "\n", - "PATH_TO_CHROME_DRIVER = '..\\\\path\\\\to\\\\chromedriver.exe'\n", - "\n", "class Website:\n", " url: str\n", " title: str\n", @@ -318,7 +156,7 @@ " driver = webdriver.Chrome(service=service, options=options)\n", " driver.get(url)\n", "\n", - " input(\"Please complete the verification in the browser and press Enter to continue...\")\n", + " # input(\"Please complete the verification in the browser and press Enter to continue...\")\n", " page_source = driver.page_source\n", " driver.quit()\n", "\n", @@ -331,33 +169,82 @@ }, { "cell_type": "code", - "execution_count": null, - "id": "65192f6b", + "execution_count": 5, + "id": "56df8cd2-2707-43f6-a066-3367846929b3", "metadata": {}, "outputs": [], "source": [ - "display_summary(\"https://openai.com\")" + "def user_prompt_for(website):\n", + " user_prompt = f\"You are looking at a website titled {website.title}\"\n", + " user_prompt += \"\\nThe contents of this 
website is as follows; \\\n", + "please provide a short summary of this website in markdown. \\\n", + "If it includes news or announcements, then summarize these too.\\n\\n\"\n", + " user_prompt += website.text\n", + " return user_prompt\n", + "\n", + "\n", + "\n", + "def messages_for(website):\n", + " return [\n", + " {\"role\": \"system\", \"content\": system_prompt},\n", + " {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", + " ]\n", + "\n", + "\n", + "def summarize(url):\n", + " website = Website(url)\n", + " ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", + " response = ollama_via_openai.chat.completions.create(\n", + " model=MODEL,\n", + " messages = messages_for(website)\n", + " )\n", + " return response.choices[0].message.content\n", + "\n", + "\n", + "def display_summary(url):\n", + " summary = summarize(url)\n", + " display(Markdown(summary))" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 9, "id": "f2eb9599", "metadata": {}, - "outputs": [], + "outputs": [ + { + "data": { + "text/markdown": [ + "It appears that you have provided a sample website or travel booking platform, specifically for flights and hotels in the Middle East region. The content includes:\n", + "\n", + "1. **Flights**: A search engine to find flights across various airlines.\n", + "2. **Hotels**: A selection of chain hotels available for booking.\n", + "3. **Travel**: A general page with FAQs and information about traveling within Saudi Arabia, Kuwait, and other nearby countries.\n", + "4. **Almosafer App**: An advertisement for the Almosafer app, which offers features like secure payment channels, easy booking processes, and user-friendly designs.\n", + "\n", + "The platform also displays a list of trending searches, airlines, and countries to facilitate searching and planning trips.\n", + "\n", + "Please let me know if you have any specific questions or need further assistance with this website sample." 
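Note that `summarize` above talks to Ollama through its OpenAI-compatible endpoint, so the `OLLAMA_API` and `HEADERS` constants defined earlier go unused. For reference, a sketch of what the equivalent call against Ollama's native `/api/chat` endpoint could look like; the helper name is illustrative, and the request itself is left commented out since it assumes a local Ollama server with `llama3.2` already pulled:

```python
# Build the JSON body for Ollama's native /api/chat endpoint. The constants
# mirror the ones defined earlier in the notebook; nothing is sent here.
import json

OLLAMA_API = "http://localhost:11434/api/chat"
HEADERS = {"Content-Type": "application/json"}
MODEL = "llama3.2"

def build_ollama_payload(messages, model=MODEL):
    """Assemble the request body; stream=False asks for a single JSON reply."""
    return {"model": model, "messages": messages, "stream": False}

payload = build_ollama_payload([{"role": "user", "content": "Say hi"}])
body = json.dumps(payload)  # what requests.post would send as the request body
# requests.post(OLLAMA_API, headers=HEADERS, data=body)  # network call, not run here
```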
+ ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], "source": [ - "display_summary(\"https://edwarddonner.com\")" + "display_summary(\"https://ae.almosafer.com\")" ] }, { "cell_type": "code", "execution_count": null, - "id": "e7ba56c8", + "id": "31b66c0f-6b45-4986-b77c-758625945a91", "metadata": {}, "outputs": [], - "source": [ - "display_summary(\"https://cnn.com\")" - ] + "source": [] } ], "metadata": { diff --git a/week1/community-contributions/day1-selenium-lama-mac.ipynb b/week1/community-contributions/day1-selenium-lama-mac.ipynb new file mode 100644 index 0000000..fd3a3ba --- /dev/null +++ b/week1/community-contributions/day1-selenium-lama-mac.ipynb @@ -0,0 +1,384 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", + "metadata": {}, + "source": [ + "# Instant Gratification!\n", + "\n", + "Let's build a useful LLM solution - in a matter of minutes.\n", + "\n", + "Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n", + "\n", + "Before starting, be sure to have followed the instructions in the \"README\" file, including creating your API key with OpenAI and adding it to the `.env` file.\n", + "\n", + "## If you're new to Jupyter Lab\n", + "\n", + "Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, like the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations.\n", + "\n", + "If you need to start again, go to Kernel menu >> Restart kernel." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import requests\n", + "from dotenv import load_dotenv\n", + "from bs4 import BeautifulSoup\n", + "from IPython.display import Markdown, display\n", + "from openai import OpenAI" + ] + }, + { + "cell_type": "markdown", + "id": "6900b2a8-6384-4316-8aaa-5e519fca4254", + "metadata": {}, + "source": [ + "# Connecting to OpenAI\n", + "\n", + "The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n", + "\n", + "## Troubleshooting if you have problems:\n", + "\n", + "1. OpenAI takes a few minutes to register after you set up an account. If you receive an error about being over quota, try waiting a few minutes and try again.\n", + "2. Also, double check you have the right kind of API token with the right permissions. You should find it on [this webpage](https://platform.openai.com/api-keys) and it should show with Permissions of \"All\". If not, try creating another key by:\n", + "- Pressing \"Create new secret key\" on the top right\n", + "- Select **Owned by:** you, **Project:** Default project, **Permissions:** All\n", + "- Click Create secret key, and use that new key in the code and the `.env` file (it might take a few minutes to activate)\n", + "- Do a Kernel >> Restart kernel, and execute the cells in this Jupyter lab starting at the top\n", + "4. As a fallback, replace the line `openai = OpenAI()` with `openai = OpenAI(api_key=\"your-key-here\")` - while it's not recommended to hard code tokens in Jupyter lab, because then you can't share your lab with others, it's a workaround for now\n", + "5. Contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", + "\n", + "Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7b87cadb-d513-4303-baee-a37b6f938e4d", + "metadata": {}, + "outputs": [], + "source": [ + "# Load environment variables in a file called .env\n", + "\n", + "load_dotenv()\n", + "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY','your-key-if-not-using-env')\n", + "openai = OpenAI()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c5e793b2-6775-426a-a139-4848291d0463", + "metadata": {}, + "outputs": [], + "source": [ + "# A class to represent a Webpage\n", + "\n", + "class Website:\n", + " url: str\n", + " title: str\n", + " text: str\n", + "\n", + " def __init__(self, url):\n", + " self.url = url\n", + " response = requests.get(url)\n", + " soup = BeautifulSoup(response.content, 'html.parser')\n", + " self.title = soup.title.string if soup.title else \"No title found\"\n", + " for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", + " irrelevant.decompose()\n", + " self.text = soup.body.get_text(separator=\"\\n\", strip=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", + "metadata": {}, + "outputs": [], + "source": [ + "# Let's try one out\n", + "\n", + "ed = Website(\"https://edwarddonner.com\")\n", + "print(ed.title)\n", + "print(ed.text)" + ] + }, + { + "cell_type": "markdown", + "id": "6a478a0c-2c53-48ff-869c-4d08199931e1", + "metadata": {}, + "source": [ + "## Types of prompts\n", + "\n", + "You may know this already - but if not, you will get very familiar with it!\n", + "\n", + "Models like GPT4o have been trained to receive instructions in a particular way.\n", + "\n", + "They expect to receive:\n", + "\n", + "**A system prompt** that tells them what task they are performing and what tone they should use\n", + "\n", + "**A user prompt** -- the conversation starter that they should reply to" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": 
"abdb8417-c5dc-44bc-9bee-2e059d162699", + "metadata": {}, + "outputs": [], + "source": [ + "system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", + "and provides a short summary, ignoring text that might be navigation related. \\\n", + "Respond in markdown.\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c", + "metadata": {}, + "outputs": [], + "source": [ + "def user_prompt_for(website):\n", + " user_prompt = f\"You are looking at a website titled {website.title}\"\n", + " user_prompt += \"The contents of this website is as follows; \\\n", + "please provide a short summary of this website in markdown. \\\n", + "If it includes news or announcements, then summarize these too.\\n\\n\"\n", + " user_prompt += website.text\n", + " return user_prompt" + ] + }, + { + "cell_type": "markdown", + "id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc", + "metadata": {}, + "source": [ + "## Messages\n", + "\n", + "The API from OpenAI expects to receive messages in a particular structure.\n", + "Many of the other APIs share this structure:\n", + "\n", + "```\n", + "[\n", + " {\"role\": \"system\", \"content\": \"system message goes here\"},\n", + " {\"role\": \"user\", \"content\": \"user message goes here\"}\n", + "]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0134dfa4-8299-48b5-b444-f2a8c3403c88", + "metadata": {}, + "outputs": [], + "source": [ + "def messages_for(website):\n", + " return [\n", + " {\"role\": \"system\", \"content\": system_prompt},\n", + " {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", + " ]" + ] + }, + { + "cell_type": "markdown", + "id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0", + "metadata": {}, + "source": [ + "## Time to bring it together - the API for OpenAI is very simple!" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", + "metadata": {}, + "outputs": [], + "source": [ + "def summarize(url):\n", + " website = Website(url)\n", + " response = openai.chat.completions.create(\n", + " model = \"gpt-4o-mini\",\n", + " messages = messages_for(website)\n", + " )\n", + " return response.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", + "metadata": {}, + "outputs": [], + "source": [ + "summarize(\"https://edwarddonner.com\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3d926d59-450e-4609-92ba-2d6f244f1342", + "metadata": {}, + "outputs": [], + "source": [ + "def display_summary(url):\n", + " summary = summarize(url)\n", + " display(Markdown(summary))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3018853a-445f-41ff-9560-d925d1774b2f", + "metadata": {}, + "outputs": [], + "source": [ + "display_summary(\"https://edwarddonner.com\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "45d83403-a24c-44b5-84ac-961449b4008f", + "metadata": {}, + "outputs": [], + "source": [ + "display_summary(\"https://cnn.com\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "75e9fd40-b354-4341-991e-863ef2e59db7", + "metadata": {}, + "outputs": [], + "source": [ + "display_summary(\"https://anthropic.com\")" + ] + }, + { + "cell_type": "markdown", + "id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", + "metadata": {}, + "source": [ + "## An extra exercise for those who enjoy web scraping\n", + "\n", + "You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. 
For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. Please push your code afterwards so I can share it with other students!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "52ae98bb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "display_summary(\"https://openai.com\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5d57e958",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Parse webpages that are designed using JavaScript heavily\n",
+ "# Download the Chrome driver that matches your version of Chrome from https://developer.chrome.com/docs/chromedriver/downloads\n",
+ "from selenium import webdriver\n",
+ "from selenium.webdriver.chrome.service import Service\n",
+ "from selenium.webdriver.common.by import By\n",
+ "from selenium.webdriver.chrome.options import Options\n",
+ "\n",
+ "PATH_TO_CHROME_DRIVER = '..\\\\path\\\\to\\\\chromedriver.exe'\n",
+ "\n",
+ "class Website:\n",
+ " url: str\n",
+ " title: str\n",
+ " text: str\n",
+ "\n",
+ " def __init__(self, url):\n",
+ " self.url = url\n",
+ "\n",
+ " options = Options()\n",
+ "\n",
+ " options.add_argument(\"--no-sandbox\")\n",
+ " options.add_argument(\"--disable-dev-shm-usage\")\n",
+ "\n",
+ " service = Service(PATH_TO_CHROME_DRIVER)\n",
+ " driver = webdriver.Chrome(service=service, options=options)\n",
+ " driver.get(url)\n",
+ "\n",
+ " input(\"Please complete the verification in the browser and press Enter to continue...\")\n",
+ " page_source = driver.page_source\n",
+ " driver.quit()\n",
+ "\n",
+ " soup = BeautifulSoup(page_source, 'html.parser')\n",
+ " self.title = soup.title.string if soup.title else \"No title found\"\n",
+ " for irrelevant in soup([\"script\", \"style\", \"img\", \"input\"]):\n",
+ " irrelevant.decompose()\n",
+ "
self.text = soup.get_text(separator=\"\\n\", strip=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "65192f6b", + "metadata": {}, + "outputs": [], + "source": [ + "display_summary(\"https://openai.com\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f2eb9599", + "metadata": {}, + "outputs": [], + "source": [ + "display_summary(\"https://edwarddonner.com\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e7ba56c8", + "metadata": {}, + "outputs": [], + "source": [ + "display_summary(\"https://cnn.com\")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 311a0f4813e0a99c537396274e77f10fca458900 Mon Sep 17 00:00:00 2001 From: jenkins Date: Sat, 22 Feb 2025 22:33:57 +0400 Subject: [PATCH 02/35] mac --- .../day1-selenium-for-javascript-sites.ipynb | 415 +++++++++++------- .../day1-selenium-lama-mac.ipynb | 335 +++----------- 2 files changed, 337 insertions(+), 413 deletions(-) diff --git a/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb b/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb index 198de53..fd3a3ba 100644 --- a/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb +++ b/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb @@ -2,143 +2,305 @@ "cells": [ { "cell_type": "markdown", - "id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", + "id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", "metadata": {}, "source": [ - "## An extra exercise for those who enjoy web scraping\n", + "# Instant Gratification!\n", "\n", - "You may 
notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. Please push your code afterwards so I can share it with other students!" + "Let's build a useful LLM solution - in a matter of minutes.\n", + "\n", + "Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n", + "\n", + "Before starting, be sure to have followed the instructions in the \"README\" file, including creating your API key with OpenAI and adding it to the `.env` file.\n", + "\n", + "## If you're new to Jupyter Lab\n", + "\n", + "Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, like the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations.\n", + "\n", + "If you need to start again, go to Kernel menu >> Restart kernel." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import requests\n", + "from dotenv import load_dotenv\n", + "from bs4 import BeautifulSoup\n", + "from IPython.display import Markdown, display\n", + "from openai import OpenAI" ] }, { "cell_type": "markdown", - "id": "c97ad592-c8be-4583-a19c-ac813e56f410", + "id": "6900b2a8-6384-4316-8aaa-5e519fca4254", "metadata": {}, "source": [ - "## Mac Users\n", + "# Connecting to OpenAI\n", + "\n", + "The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n", "\n", - "I find some challenges while setting up this in MAC silicon M1 chip. Execute below commands in MAC terminal.\n", + "## Troubleshooting if you have problems:\n", "\n", - "1. Download chromedriver.\n", - "2. Unzip and add it to the path.\n", - "3. Set Extended attributes." + "1. OpenAI takes a few minutes to register after you set up an account. If you receive an error about being over quota, try waiting a few minutes and try again.\n", + "2. Also, double check you have the right kind of API token with the right permissions. You should find it on [this webpage](https://platform.openai.com/api-keys) and it should show with Permissions of \"All\". If not, try creating another key by:\n", + "- Pressing \"Create new secret key\" on the top right\n", + "- Select **Owned by:** you, **Project:** Default project, **Permissions:** All\n", + "- Click Create secret key, and use that new key in the code and the `.env` file (it might take a few minutes to activate)\n", + "- Do a Kernel >> Restart kernel, and execute the cells in this Jupyter lab starting at the top\n", + "4. 
As a fallback, replace the line `openai = OpenAI()` with `openai = OpenAI(api_key=\"your-key-here\")` - while it's not recommended to hard code tokens in Jupyter lab, because then you can't share your lab with others, it's a workaround for now\n", + "5. Contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", + "\n", + "Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point." ] }, { - "cell_type": "markdown", - "id": "b635b345-b000-48cc-8a7f-7df279a489a3", + "cell_type": "code", + "execution_count": null, + "id": "7b87cadb-d513-4303-baee-a37b6f938e4d", "metadata": {}, + "outputs": [], "source": [ - "cd ~/Downloads\n", - "wget https://storage.googleapis.com/chrome-for-testing-public/133.0.6943.126/mac-arm64/chromedriver-mac-arm64.zip\n", - "unzip chromedriver-mac-arm64.zip\n", - "sudo mv chromedriver-mac-arm64/chromedriver /usr/local/bin/\n", - "chmod +x /usr/local/bin/chromedriver\n", - "cd /usr/local/bin/\n", - "xattr -d com.apple.quarantine chromedriver\n", - "cd \n", - "chromedriver --version" + "# Load environment variables in a file called .env\n", + "\n", + "load_dotenv()\n", + "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY','your-key-if-not-using-env')\n", + "openai = OpenAI()" ] }, { "cell_type": "code", - "execution_count": 1, - "id": "17c7c79a-8ae0-4f5d-a7c8-c54aa7ba90fd", + "execution_count": null, + "id": "c5e793b2-6775-426a-a139-4848291d0463", "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Requirement already satisfied: selenium in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (4.29.0)\n", - "Requirement already satisfied: urllib3<3,>=1.26 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from urllib3[socks]<3,>=1.26->selenium) (2.3.0)\n", - "Requirement already satisfied: trio~=0.17 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium) (0.29.0)\n", - "Requirement 
already satisfied: trio-websocket~=0.9 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium) (0.12.1)\n", - "Requirement already satisfied: certifi>=2021.10.8 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium) (2025.1.31)\n", - "Requirement already satisfied: typing_extensions~=4.9 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium) (4.12.2)\n", - "Requirement already satisfied: websocket-client~=1.8 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium) (1.8.0)\n", - "Requirement already satisfied: attrs>=23.2.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium) (25.1.0)\n", - "Requirement already satisfied: sortedcontainers in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium) (2.4.0)\n", - "Requirement already satisfied: idna in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium) (3.10)\n", - "Requirement already satisfied: outcome in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium) (1.3.0.post0)\n", - "Requirement already satisfied: sniffio>=1.3.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium) (1.3.1)\n", - "Requirement already satisfied: wsproto>=0.14 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio-websocket~=0.9->selenium) (1.2.0)\n", - "Requirement already satisfied: pysocks!=1.5.7,<2.0,>=1.5.6 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from urllib3[socks]<3,>=1.26->selenium) (1.7.1)\n", - "Requirement already satisfied: h11<1,>=0.9.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from wsproto>=0.14->trio-websocket~=0.9->selenium) (0.14.0)\n", - "Requirement already satisfied: undetected-chromedriver in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (3.5.5)\n", - "Requirement already satisfied: selenium>=4.9.0 in 
/opt/anaconda3/envs/llms/lib/python3.11/site-packages (from undetected-chromedriver) (4.29.0)\n", - "Requirement already satisfied: requests in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from undetected-chromedriver) (2.32.3)\n", - "Requirement already satisfied: websockets in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from undetected-chromedriver) (14.2)\n", - "Requirement already satisfied: urllib3<3,>=1.26 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from urllib3[socks]<3,>=1.26->selenium>=4.9.0->undetected-chromedriver) (2.3.0)\n", - "Requirement already satisfied: trio~=0.17 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium>=4.9.0->undetected-chromedriver) (0.29.0)\n", - "Requirement already satisfied: trio-websocket~=0.9 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium>=4.9.0->undetected-chromedriver) (0.12.1)\n", - "Requirement already satisfied: certifi>=2021.10.8 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium>=4.9.0->undetected-chromedriver) (2025.1.31)\n", - "Requirement already satisfied: typing_extensions~=4.9 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium>=4.9.0->undetected-chromedriver) (4.12.2)\n", - "Requirement already satisfied: websocket-client~=1.8 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from selenium>=4.9.0->undetected-chromedriver) (1.8.0)\n", - "Requirement already satisfied: charset_normalizer<4,>=2 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from requests->undetected-chromedriver) (3.4.1)\n", - "Requirement already satisfied: idna<4,>=2.5 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from requests->undetected-chromedriver) (3.10)\n", - "Requirement already satisfied: attrs>=23.2.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium>=4.9.0->undetected-chromedriver) (25.1.0)\n", - "Requirement already satisfied: sortedcontainers in 
/opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium>=4.9.0->undetected-chromedriver) (2.4.0)\n", - "Requirement already satisfied: outcome in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium>=4.9.0->undetected-chromedriver) (1.3.0.post0)\n", - "Requirement already satisfied: sniffio>=1.3.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio~=0.17->selenium>=4.9.0->undetected-chromedriver) (1.3.1)\n", - "Requirement already satisfied: wsproto>=0.14 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from trio-websocket~=0.9->selenium>=4.9.0->undetected-chromedriver) (1.2.0)\n", - "Requirement already satisfied: pysocks!=1.5.7,<2.0,>=1.5.6 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from urllib3[socks]<3,>=1.26->selenium>=4.9.0->undetected-chromedriver) (1.7.1)\n", - "Requirement already satisfied: h11<1,>=0.9.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from wsproto>=0.14->trio-websocket~=0.9->selenium>=4.9.0->undetected-chromedriver) (0.14.0)\n", - "Requirement already satisfied: beautifulsoup4 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (4.13.3)\n", - "Requirement already satisfied: soupsieve>1.2 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from beautifulsoup4) (2.5)\n", - "Requirement already satisfied: typing-extensions>=4.0.0 in /opt/anaconda3/envs/llms/lib/python3.11/site-packages (from beautifulsoup4) (4.12.2)\n" - ] - } - ], + "outputs": [], "source": [ - "!pip install selenium\n", - "!pip install undetected-chromedriver\n", - "!pip install beautifulsoup4" + "# A class to represent a Webpage\n", + "\n", + "class Website:\n", + " url: str\n", + " title: str\n", + " text: str\n", + "\n", + " def __init__(self, url):\n", + " self.url = url\n", + " response = requests.get(url)\n", + " soup = BeautifulSoup(response.content, 'html.parser')\n", + " self.title = soup.title.string if soup.title else \"No title found\"\n", + " for 
irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", + " irrelevant.decompose()\n", + " self.text = soup.body.get_text(separator=\"\\n\", strip=True)" ] }, { "cell_type": "code", - "execution_count": 2, - "id": "c10bd630-2dfd-4572-8c21-2dc4c6a372ab", + "execution_count": null, + "id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", "metadata": {}, "outputs": [], "source": [ - "from selenium import webdriver\n", - "from selenium.webdriver.chrome.service import Service\n", - "from selenium.webdriver.common.by import By\n", - "from selenium.webdriver.chrome.options import Options\n", - "from openai import OpenAI\n", - "import os\n", - "import requests\n", - "from dotenv import load_dotenv\n", - "from bs4 import BeautifulSoup\n", - "from IPython.display import Markdown, display\n", - "from openai import OpenAI" + "# Let's try one out\n", + "\n", + "ed = Website(\"https://edwarddonner.com\")\n", + "print(ed.title)\n", + "print(ed.text)" + ] + }, + { + "cell_type": "markdown", + "id": "6a478a0c-2c53-48ff-869c-4d08199931e1", + "metadata": {}, + "source": [ + "## Types of prompts\n", + "\n", + "You may know this already - but if not, you will get very familiar with it!\n", + "\n", + "Models like GPT4o have been trained to receive instructions in a particular way.\n", + "\n", + "They expect to receive:\n", + "\n", + "**A system prompt** that tells them what task they are performing and what tone they should use\n", + "\n", + "**A user prompt** -- the conversation starter that they should reply to" ] }, { "cell_type": "code", - "execution_count": 7, - "id": "6fb3641d-e9f8-4f5b-bb9d-ee0e971cccdb", + "execution_count": null, + "id": "abdb8417-c5dc-44bc-9bee-2e059d162699", "metadata": {}, "outputs": [], "source": [ - "OLLAMA_API = \"http://localhost:11434/api/chat\"\n", - "HEADERS = {\"Content-Type\": \"application/json\"}\n", - "MODEL = \"llama3.2\"\n", - "PATH_TO_CHROME_DRIVER = '/usr/local/bin/chromedriver'\n", "system_prompt = \"You are an assistant that 
analyzes the contents of a website \\\n", "and provides a short summary, ignoring text that might be navigation related. \\\n", - "Respond in markdown. Highlight all the products this website offered and also find when website is created.\"\n" + "Respond in markdown.\"" ] }, { "cell_type": "code", - "execution_count": 8, + "execution_count": null, + "id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c", + "metadata": {}, + "outputs": [], + "source": [ + "def user_prompt_for(website):\n", + " user_prompt = f\"You are looking at a website titled {website.title}\"\n", + " user_prompt += \"The contents of this website is as follows; \\\n", + "please provide a short summary of this website in markdown. \\\n", + "If it includes news or announcements, then summarize these too.\\n\\n\"\n", + " user_prompt += website.text\n", + " return user_prompt" + ] + }, + { + "cell_type": "markdown", + "id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc", + "metadata": {}, + "source": [ + "## Messages\n", + "\n", + "The API from OpenAI expects to receive messages in a particular structure.\n", + "Many of the other APIs share this structure:\n", + "\n", + "```\n", + "[\n", + " {\"role\": \"system\", \"content\": \"system message goes here\"},\n", + " {\"role\": \"user\", \"content\": \"user message goes here\"}\n", + "]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0134dfa4-8299-48b5-b444-f2a8c3403c88", + "metadata": {}, + "outputs": [], + "source": [ + "def messages_for(website):\n", + " return [\n", + " {\"role\": \"system\", \"content\": system_prompt},\n", + " {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", + " ]" + ] + }, + { + "cell_type": "markdown", + "id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0", + "metadata": {}, + "source": [ + "## Time to bring it together - the API for OpenAI is very simple!" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", + "metadata": {}, + "outputs": [], + "source": [ + "def summarize(url):\n", + " website = Website(url)\n", + " response = openai.chat.completions.create(\n", + " model = \"gpt-4o-mini\",\n", + " messages = messages_for(website)\n", + " )\n", + " return response.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", + "metadata": {}, + "outputs": [], + "source": [ + "summarize(\"https://edwarddonner.com\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3d926d59-450e-4609-92ba-2d6f244f1342", + "metadata": {}, + "outputs": [], + "source": [ + "def display_summary(url):\n", + " summary = summarize(url)\n", + " display(Markdown(summary))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3018853a-445f-41ff-9560-d925d1774b2f", + "metadata": {}, + "outputs": [], + "source": [ + "display_summary(\"https://edwarddonner.com\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "45d83403-a24c-44b5-84ac-961449b4008f", + "metadata": {}, + "outputs": [], + "source": [ + "display_summary(\"https://cnn.com\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "75e9fd40-b354-4341-991e-863ef2e59db7", + "metadata": {}, + "outputs": [], + "source": [ + "display_summary(\"https://anthropic.com\")" + ] + }, + { + "cell_type": "markdown", + "id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", + "metadata": {}, + "source": [ + "## An extra exercise for those who enjoy web scraping\n", + "\n", + "You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. 
For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. Please push your code afterwards so I can share it with other students!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "52ae98bb",
   "metadata": {},
   "outputs": [],
   "source": [
    "display_summary(\"https://openai.com\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5d57e958",
   "metadata": {},
   "outputs": [],
   "source": [
+    "# Parse webpages that are designed using JavaScript heavily\n",
+    "# Download the ChromeDriver that matches your version of Chrome - https://developer.chrome.com/docs/chromedriver/downloads\n",
+    "from selenium import webdriver\n",
+    "from selenium.webdriver.chrome.service import Service\n",
+    "from selenium.webdriver.common.by import By\n",
+    "from selenium.webdriver.chrome.options import Options\n",
+    "\n",
+    "PATH_TO_CHROME_DRIVER = '..\\\\path\\\\to\\\\chromedriver.exe'\n",
+    "\n",
     "class Website:\n",
     "    url: str\n",
     "    title: str\n",
@@ -156,7 +318,7 @@
     "        driver = webdriver.Chrome(service=service, options=options)\n",
     "        driver.get(url)\n",
     "\n",
-    "        # input(\"Please complete the verification in the browser and press Enter to continue...\")\n",
+    "        input(\"Please complete the verification in the browser and press Enter to continue...\")\n",
     "        page_source = driver.page_source\n",
     "        driver.quit()\n",
@@ -169,82 +331,33 @@
  },
  {
   "cell_type": "code",
-   "execution_count": 5,
-   "id": "56df8cd2-2707-43f6-a066-3367846929b3",
+   "execution_count": null,
+   "id": "65192f6b",
   "metadata": {},
   "outputs": [],
   "source": [
-    "def user_prompt_for(website):\n",
-    "    user_prompt = f\"You are looking at a website titled {website.title}\"\n",
-    "    user_prompt += \"\\nThe contents of this website is as follows; \\\n",
-    "please provide a short summary of this 
website in markdown. \\\n", - "If it includes news or announcements, then summarize these too.\\n\\n\"\n", - " user_prompt += website.text\n", - " return user_prompt\n", - "\n", - "\n", - "\n", - "def messages_for(website):\n", - " return [\n", - " {\"role\": \"system\", \"content\": system_prompt},\n", - " {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", - " ]\n", - "\n", - "\n", - "def summarize(url):\n", - " website = Website(url)\n", - " ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", - " response = ollama_via_openai.chat.completions.create(\n", - " model=MODEL,\n", - " messages = messages_for(website)\n", - " )\n", - " return response.choices[0].message.content\n", - "\n", - "\n", - "def display_summary(url):\n", - " summary = summarize(url)\n", - " display(Markdown(summary))" + "display_summary(\"https://openai.com\")" ] }, { "cell_type": "code", - "execution_count": 9, + "execution_count": null, "id": "f2eb9599", "metadata": {}, - "outputs": [ - { - "data": { - "text/markdown": [ - "It appears that you have provided a sample website or travel booking platform, specifically for flights and hotels in the Middle East region. The content includes:\n", - "\n", - "1. **Flights**: A search engine to find flights across various airlines.\n", - "2. **Hotels**: A selection of chain hotels available for booking.\n", - "3. **Travel**: A general page with FAQs and information about traveling within Saudi Arabia, Kuwait, and other nearby countries.\n", - "4. **Almosafer App**: An advertisement for the Almosafer app, which offers features like secure payment channels, easy booking processes, and user-friendly designs.\n", - "\n", - "The platform also displays a list of trending searches, airlines, and countries to facilitate searching and planning trips.\n", - "\n", - "Please let me know if you have any specific questions or need further assistance with this website sample." 
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
   "source": [
-    "display_summary(\"https://ae.almosafer.com\")"
+    "display_summary(\"https://edwarddonner.com\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
-   "id": "31b66c0f-6b45-4986-b77c-758625945a91",
+   "id": "e7ba56c8",
   "metadata": {},
   "outputs": [],
-   "source": []
+   "source": [
+    "display_summary(\"https://cnn.com\")"
+   ]
  }
 ],
 "metadata": {
diff --git a/week1/community-contributions/day1-selenium-lama-mac.ipynb b/week1/community-contributions/day1-selenium-lama-mac.ipynb
index fd3a3ba..5bb6956 100644
--- a/week1/community-contributions/day1-selenium-lama-mac.ipynb
+++ b/week1/community-contributions/day1-selenium-lama-mac.ipynb
@@ -2,287 +2,80 @@
 "cells": [
  {
   "cell_type": "markdown",
-   "id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
+   "id": "c97ad592-c8be-4583-a19c-ac813e56f410",
   "metadata": {},
   "source": [
-    "# Instant Gratification!\n",
+    "## Mac Users\n",
    "\n",
-    "Let's build a useful LLM solution - in a matter of minutes.\n",
+    "I found a few challenges while setting this up on an Apple Silicon (M1) Mac. Execute the commands below in the macOS terminal.\n",
    "\n",
-    "Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
-    "\n",
-    "Before starting, be sure to have followed the instructions in the \"README\" file, including creating your API key with OpenAI and adding it to the `.env` file.\n",
-    "\n",
-    "## If you're new to Jupyter Lab\n",
-    "\n",
-    "Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, like the cell immediately below this text, and hit Shift+Return to execute that cell. 
As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations.\n", - "\n", - "If you need to start again, go to Kernel menu >> Restart kernel." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", - "metadata": {}, - "outputs": [], - "source": [ - "# imports\n", - "\n", - "import os\n", - "import requests\n", - "from dotenv import load_dotenv\n", - "from bs4 import BeautifulSoup\n", - "from IPython.display import Markdown, display\n", - "from openai import OpenAI" + "1. Download chromedriver.\n", + "2. Unzip and add it to the path.\n", + "3. Set Extended attributes." ] }, { "cell_type": "markdown", - "id": "6900b2a8-6384-4316-8aaa-5e519fca4254", + "id": "b635b345-b000-48cc-8a7f-7df279a489a3", "metadata": {}, "source": [ - "# Connecting to OpenAI\n", - "\n", - "The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n", - "\n", - "## Troubleshooting if you have problems:\n", - "\n", - "1. OpenAI takes a few minutes to register after you set up an account. If you receive an error about being over quota, try waiting a few minutes and try again.\n", - "2. Also, double check you have the right kind of API token with the right permissions. You should find it on [this webpage](https://platform.openai.com/api-keys) and it should show with Permissions of \"All\". If not, try creating another key by:\n", - "- Pressing \"Create new secret key\" on the top right\n", - "- Select **Owned by:** you, **Project:** Default project, **Permissions:** All\n", - "- Click Create secret key, and use that new key in the code and the `.env` file (it might take a few minutes to activate)\n", - "- Do a Kernel >> Restart kernel, and execute the cells in this Jupyter lab starting at the top\n", - "4. 
As a fallback, replace the line `openai = OpenAI()` with `openai = OpenAI(api_key=\"your-key-here\")` - while it's not recommended to hard code tokens in Jupyter lab, because then you can't share your lab with others, it's a workaround for now\n", - "5. Contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", - "\n", - "Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point." + "cd ~/Downloads\n", + "wget https://storage.googleapis.com/chrome-for-testing-public/133.0.6943.126/mac-arm64/chromedriver-mac-arm64.zip\n", + "unzip chromedriver-mac-arm64.zip\n", + "sudo mv chromedriver-mac-arm64/chromedriver /usr/local/bin/\n", + "chmod +x /usr/local/bin/chromedriver\n", + "cd /usr/local/bin/\n", + "xattr -d com.apple.quarantine chromedriver\n", + "cd \n", + "chromedriver --version" ] }, { "cell_type": "code", "execution_count": null, - "id": "7b87cadb-d513-4303-baee-a37b6f938e4d", + "id": "17c7c79a-8ae0-4f5d-a7c8-c54aa7ba90fd", "metadata": {}, "outputs": [], "source": [ - "# Load environment variables in a file called .env\n", - "\n", - "load_dotenv()\n", - "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY','your-key-if-not-using-env')\n", - "openai = OpenAI()" + "!pip install selenium\n", + "!pip install undetected-chromedriver\n", + "!pip install beautifulsoup4" ] }, { "cell_type": "code", "execution_count": null, - "id": "c5e793b2-6775-426a-a139-4848291d0463", + "id": "c10bd630-2dfd-4572-8c21-2dc4c6a372ab", "metadata": {}, "outputs": [], "source": [ - "# A class to represent a Webpage\n", - "\n", - "class Website:\n", - " url: str\n", - " title: str\n", - " text: str\n", - "\n", - " def __init__(self, url):\n", - " self.url = url\n", - " response = requests.get(url)\n", - " soup = BeautifulSoup(response.content, 'html.parser')\n", - " self.title = soup.title.string if soup.title else \"No title found\"\n", - " for irrelevant in soup.body([\"script\", \"style\", 
\"img\", \"input\"]):\n", - " irrelevant.decompose()\n", - " self.text = soup.body.get_text(separator=\"\\n\", strip=True)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", - "metadata": {}, - "outputs": [], - "source": [ - "# Let's try one out\n", - "\n", - "ed = Website(\"https://edwarddonner.com\")\n", - "print(ed.title)\n", - "print(ed.text)" - ] - }, - { - "cell_type": "markdown", - "id": "6a478a0c-2c53-48ff-869c-4d08199931e1", - "metadata": {}, - "source": [ - "## Types of prompts\n", - "\n", - "You may know this already - but if not, you will get very familiar with it!\n", - "\n", - "Models like GPT4o have been trained to receive instructions in a particular way.\n", - "\n", - "They expect to receive:\n", - "\n", - "**A system prompt** that tells them what task they are performing and what tone they should use\n", - "\n", - "**A user prompt** -- the conversation starter that they should reply to" + "from selenium import webdriver\n", + "from selenium.webdriver.chrome.service import Service\n", + "from selenium.webdriver.common.by import By\n", + "from selenium.webdriver.chrome.options import Options\n", + "from openai import OpenAI\n", + "import os\n", + "import requests\n", + "from dotenv import load_dotenv\n", + "from bs4 import BeautifulSoup\n", + "from IPython.display import Markdown, display\n", + "from openai import OpenAI" ] }, { "cell_type": "code", "execution_count": null, - "id": "abdb8417-c5dc-44bc-9bee-2e059d162699", + "id": "6fb3641d-e9f8-4f5b-bb9d-ee0e971cccdb", "metadata": {}, "outputs": [], "source": [ + "OLLAMA_API = \"http://localhost:11434/api/chat\"\n", + "HEADERS = {\"Content-Type\": \"application/json\"}\n", + "MODEL = \"llama3.2\"\n", + "PATH_TO_CHROME_DRIVER = '/usr/local/bin/chromedriver'\n", "system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", "and provides a short summary, ignoring text that might be navigation related. 
\\\n",
-    "Respond in markdown.\""
+    "Respond in markdown. Highlight all the products this website offers, and also find out when the website was created.\"\n"
   ]
  },
@@ -292,15 +85,6 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "#Parse webpages which is designed using JavaScript heavely\n",
-    "# download the chorme driver from here as per your version of chrome - https://developer.chrome.com/docs/chromedriver/downloads\n",
-    "from selenium import webdriver\n",
-    "from selenium.webdriver.chrome.service import Service\n",
-    "from selenium.webdriver.common.by import By\n",
-    "from selenium.webdriver.chrome.options import Options\n",
-    "\n",
-    "PATH_TO_CHROME_DRIVER = '..\\\\path\\\\to\\\\chromedriver.exe'\n",
-    "\n",
     "class Website:\n",
     "    url: str\n",
     "    title: str\n",
@@ -318,7 +102,7 @@
     "        driver = webdriver.Chrome(service=service, options=options)\n",
     "        driver.get(url)\n",
     "\n",
-    "        input(\"Please complete the verification in the browser and press Enter to continue...\")\n",
+    "        # input(\"Please complete the verification in the browser and press Enter to continue...\")\n",
     "        page_source = driver.page_source\n",
     "        driver.quit()\n",
@@ -332,11 +116,40 @@
  {
   "cell_type": "code",
   "execution_count": null,
-   "id": "65192f6b",
+   "id": "56df8cd2-2707-43f6-a066-3367846929b3",
   "metadata": {},
   "outputs": [],
   "source": [
-    "display_summary(\"https://openai.com\")"
+    "def user_prompt_for(website):\n",
+    "    user_prompt = f\"You are looking at a website titled {website.title}\"\n",
+    "    user_prompt += 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", - "metadata": {}, - "outputs": [], - "source": [ - "def summarize(url):\n", - " website = Website(url)\n", - " response = openai.chat.completions.create(\n", - " model = \"gpt-4o-mini\",\n", - " messages = messages_for(website)\n", - " )\n", - " return response.choices[0].message.content" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", - "metadata": {}, - "outputs": [], - "source": [ - "summarize(\"https://edwarddonner.com\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "3d926d59-450e-4609-92ba-2d6f244f1342", - "metadata": {}, - "outputs": [], - "source": [ - "def display_summary(url):\n", - " summary = summarize(url)\n", - " display(Markdown(summary))" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "3018853a-445f-41ff-9560-d925d1774b2f", - "metadata": {}, - "outputs": [], - "source": [ - "display_summary(\"https://edwarddonner.com\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "45d83403-a24c-44b5-84ac-961449b4008f", - "metadata": {}, - "outputs": [], - "source": [ - "display_summary(\"https://cnn.com\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "75e9fd40-b354-4341-991e-863ef2e59db7", - "metadata": {}, - "outputs": [], - "source": [ - "display_summary(\"https://anthropic.com\")" - ] - }, - { - "cell_type": "markdown", - "id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", - "metadata": {}, - "source": [ - "## An extra exercise for those who enjoy web scraping\n", - "\n", - "You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. 
For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. Please push your code afterwards so I can share it with other students!" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "52ae98bb", - "metadata": {}, - "outputs": [], - "source": [ - "display_summary(\"https://openai.com\")" + "Respond in markdown. Highlight all the products this website offered and also find when website is created.\"\n" ] }, { @@ -292,15 +85,6 @@ "metadata": {}, "outputs": [], "source": [ - "#Parse webpages which is designed using JavaScript heavely\n", - "# download the chorme driver from here as per your version of chrome - https://developer.chrome.com/docs/chromedriver/downloads\n", - "from selenium import webdriver\n", - "from selenium.webdriver.chrome.service import Service\n", - "from selenium.webdriver.common.by import By\n", - "from selenium.webdriver.chrome.options import Options\n", - "\n", - "PATH_TO_CHROME_DRIVER = '..\\\\path\\\\to\\\\chromedriver.exe'\n", - "\n", "class Website:\n", " url: str\n", " title: str\n", @@ -318,7 +102,7 @@ " driver = webdriver.Chrome(service=service, options=options)\n", " driver.get(url)\n", "\n", - " input(\"Please complete the verification in the browser and press Enter to continue...\")\n", + " # input(\"Please complete the verification in the browser and press Enter to continue...\")\n", " page_source = driver.page_source\n", " driver.quit()\n", "\n", @@ -332,11 +116,40 @@ { "cell_type": "code", "execution_count": null, - "id": "65192f6b", + "id": "56df8cd2-2707-43f6-a066-3367846929b3", "metadata": {}, "outputs": [], "source": [ - "display_summary(\"https://openai.com\")" + "def user_prompt_for(website):\n", + " user_prompt = f\"You are looking at a website titled {website.title}\"\n", + " user_prompt += \"\\nThe contents of 
this website are as follows; \\\n",
+ "please provide a short summary of this website in markdown. \\\n",
+ "If it includes news or announcements, then summarize these too.\\n\\n\"\n",
+ " user_prompt += website.text\n",
+ " return user_prompt\n",
+ "\n",
+ "\n",
+ "\n",
+ "def messages_for(website):\n",
+ " return [\n",
+ " {\"role\": \"system\", \"content\": system_prompt},\n",
+ " {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
+ " ]\n",
+ "\n",
+ "\n",
+ "def summarize(url):\n",
+ " website = Website(url)\n",
+ " ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
+ " response = ollama_via_openai.chat.completions.create(\n",
+ " model=MODEL,\n",
+ " messages = messages_for(website)\n",
+ " )\n",
+ " return response.choices[0].message.content\n",
+ "\n",
+ "\n",
+ "def display_summary(url):\n",
+ " summary = summarize(url)\n",
+ " display(Markdown(summary))"
+1,196 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "id": "ec2e81cd-2172-4816-bf44-f29312b8a4bd", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import anthropic\n", + "import google.generativeai as genai\n", + "from IPython.display import Markdown, display, update_display" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a558dfa4-9496-48ba-b0f5-b0c731adc7b8", + "metadata": {}, + "outputs": [], + "source": [ + "load_dotenv(override=True)\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", + "google_api_key = os.getenv('GOOGLE_API_KEY')\n", + "\n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")\n", + " \n", + "if anthropic_api_key:\n", + " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", + "else:\n", + " print(\"Anthropic API Key not set\")\n", + "\n", + "if google_api_key:\n", + " print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", + "else:\n", + " print(\"Google API Key not set\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "dc7c2cda-a5d1-4930-87f2-e06485d6b2bd", + "metadata": {}, + "outputs": [], + "source": [ + "openai = OpenAI()\n", + "\n", + "claude = anthropic.Anthropic()\n", + "\n", + "genai.configure()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3eb32aec-ec93-4563-bd88-0d48d2471884", + "metadata": {}, + "outputs": [], + "source": [ + "gpt_model = \"gpt-4o-mini\"\n", + "claude_model = \"claude-3-haiku-20240307\"\n", + "gemini_model = \"gemini-2.0-flash-exp\"\n", + "\n", + "gpt_system = \"You are a chatbot who is sarcastic; \\\n", + "you have your speculations about anything in the conversation and you challenge everything in funny way.\\\n", + "You 
have to be a part of a group discussion and put forward your points about the topic\\\n", + "full-stack developers vs specialised developer. Keep your points short and precise.\"\n", + "\n", + "claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n", + "everything the other person says, or find common ground. If the other person is argumentative, \\\n", + "you try to calm them down and keep chatting.You have to be a part of a group discussion and put forward your points\\\n", + "about the topic full-stack developers vs specialised developer. Keep your points short and precise.\"\n", + "\n", + "gemini_system = \"You are a very rational thinker and don't like beating around the bush about the topic of discussion.\\\n", + "You have to be a part of a group discussion and put forward your points\\\n", + "about the topic full-stack developers vs specialised developer\\\n", + "Keep your points short and precise.\"\n", + "\n", + "gpt_messages = [\"Hi there\"]\n", + "claude_messages = [\"Hi\"]\n", + "gemini_messages = [\"Hello to all\"]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e27252cf-05f5-4989-85ef-94e6802c5db9", + "metadata": {}, + "outputs": [], + "source": [ + "def call_gpt():\n", + " messages = [{\"role\": \"system\", \"content\": gpt_system}]\n", + " for gpt, claude, gemini in zip(gpt_messages, claude_messages, gemini_messages):\n", + " messages.append({\"role\": \"assistant\", \"content\": gpt})\n", + " messages.append({\"role\": \"user\", \"content\": claude})\n", + " messages.append({\"role\": \"assistant\", \"content\": gemini})\n", + " completion = openai.chat.completions.create(\n", + " model=gpt_model,\n", + " messages=messages,\n", + " max_tokens=500 # Add max_tokens to meet API requirement\n", + " )\n", + " return completion.choices[0].message.content\n", + "\n", + "# Function to call Claude\n", + "def call_claude():\n", + " messages = []\n", + " for gpt, claude_message,gemini in 
zip(gpt_messages, claude_messages, gemini_messages):\n", + " messages.append({\"role\": \"user\", \"content\": gpt})\n", + " messages.append({\"role\": \"assistant\", \"content\": claude_message})\n", + " messages.append({\"role\": \"assistant\", \"content\": gemini})\n", + " messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n", + " message = claude.messages.create(\n", + " model=claude_model,\n", + " max_tokens=500,\n", + " messages=messages\n", + " )\n", + " return message.content[0].text\n", + "\n", + "# Function to call Gemini\n", + "def call_gemini():\n", + " # Create the Gemini model instance\n", + " gemini_model_instance = genai.GenerativeModel(\n", + " model_name=gemini_model, # Specify the model name here\n", + " system_instruction=gemini_system # Provide the system instruction\n", + " )\n", + " \n", + " # Prepare conversation history with separate names to avoid overwriting\n", + " gemini_messages_combined = []\n", + " for gpt, claude, gemini_msg in zip(gpt_messages, claude_messages, gemini_messages):\n", + " gemini_messages_combined.append({\"role\": \"assistant\", \"content\": gpt})\n", + " gemini_messages_combined.append({\"role\": \"user\", \"content\": claude})\n", + " gemini_messages_combined.append({\"role\": \"assistant\", \"content\": gemini_msg})\n", + " \n", + " # Generate content based on the conversation history\n", + " gemini_response = gemini_model_instance.generate_content(\"\".join([msg[\"content\"] for msg in gemini_messages_combined]))\n", + " \n", + " return gemini_response.text\n", + "\n", + "# Initial print\n", + "print(f\"Gemini:\\n{gemini_messages[0]}\\n\")\n", + "print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n", + "print(f\"Claude:\\n{claude_messages[0]}\\n\")\n", + "\n", + "# Main loop to generate conversation\n", + "for i in range(3):\n", + " gpt_next = call_gpt()\n", + " print(f\"GPT:\\n{gpt_next}\\n\")\n", + " gpt_messages.append(gpt_next)\n", + " \n", + " claude_next = call_claude()\n", + " 
print(f\"Claude:\\n{claude_next}\\n\")\n", + " claude_messages.append(claude_next)\n", + " \n", + " gemini_next = call_gemini()\n", + " print(f\"Gemini:\\n{gemini_next}\\n\")\n", + " gemini_messages.append(gemini_next)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "52f43794-a20a-4b9a-a18d-6f363b8dc27d", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From f515a9c8c0094ce58db0618003aa1119c8589c21 Mon Sep 17 00:00:00 2001 From: udomai Date: Sun, 23 Feb 2025 20:18:41 +0100 Subject: [PATCH 04/35] week 3 challenge --- .../en-de-fr_dataset_generator.ipynb | 322 ++++++++++++++++++ 1 file changed, 322 insertions(+) create mode 100644 week3/community-contributions/en-de-fr_dataset_generator.ipynb diff --git a/week3/community-contributions/en-de-fr_dataset_generator.ipynb b/week3/community-contributions/en-de-fr_dataset_generator.ipynb new file mode 100644 index 0000000..58b8360 --- /dev/null +++ b/week3/community-contributions/en-de-fr_dataset_generator.ipynb @@ -0,0 +1,322 @@ +{ + "nbformat": 4, + "nbformat_minor": 0, + "metadata": { + "colab": { + "provenance": [], + "gpuType": "T4", + "authorship_tag": "ABX9TyPxJzufoQPtui+nhl1J1xiR" + }, + "kernelspec": { + "name": "python3", + "display_name": "Python 3" + }, + "language_info": { + "name": "python" + }, + "accelerator": "GPU" + }, + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "yqlQTsxNdKrN" + }, + "outputs": [], + "source": [ + "!pip install -q requests torch bitsandbytes transformers sentencepiece accelerate openai 
httpx==0.27.2 gradio" + ] + }, + { + "cell_type": "code", + "source": [ + "import os\n", + "import requests\n", + "from IPython.display import Markdown, display, update_display\n", + "from openai import OpenAI\n", + "from google.colab import drive\n", + "from huggingface_hub import login\n", + "from google.colab import userdata\n", + "from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, BitsAndBytesConfig\n", + "import torch\n", + "import gradio as gr\n", + "import re" + ], + "metadata": { + "id": "eyfvQrLxdkGT" + }, + "execution_count": 2, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "# one can always add more models, of course\n", + "\n", + "LLAMA = \"meta-llama/Meta-Llama-3.1-8B-Instruct\"\n", + "OPENAI_MODEL = \"gpt-4o-mini\"" + ], + "metadata": { + "id": "WW-cSZk7dnp6" + }, + "execution_count": 3, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "hf_token = userdata.get('HF_TOKEN')\n", + "login(hf_token, add_to_git_credential=True)\n", + "openai_api_key = userdata.get('OPENAI_API_KEY')\n", + "openai = OpenAI(api_key=openai_api_key)" + ], + "metadata": { + "id": "XG7Iam6Rdw8F" + }, + "execution_count": 4, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "force_dark_mode = \"\"\"\n", + "function refresh() {\n", + " const url = new URL(window.location);\n", + " if (url.searchParams.get('__theme') !== 'dark') {\n", + " url.searchParams.set('__theme', 'dark');\n", + " window.location.href = url.href;\n", + " }\n", + "}\n", + "\"\"\"" + ], + "metadata": { + "id": "Ov7WSdx9dzSt" + }, + "execution_count": 5, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "def dataset_generator(model, nature, shots, volume, language):\n", + "\n", + " examples = \"Instruction: 'Make a random sentence.'\\nAnswer: 'When I got home last night, I couldn't believe my eyes: All the pineapples had been removed from the pizza.'\"\n", + " system_message = \"You are a random sentence generator. 
Generate 10 diverse English sentences.\"\n",
+ " user_prompt = f\"Generate 10 random English sentences, like so:\\n{examples}\"\n",
+ " sentences = \"\"\n",
+ "\n",
+ " if language == \"English\":\n",
+ "\n",
+ " for shot in list(shots.keys()):\n",
+ " examples += f\"\\nExample instruction: '{shot}'\\nExample answer: '{shots[shot]}'\\n\"\n",
+ "\n",
+ " system_message = f\"You are a state-of-the-art linguistic dataset compiler. You are given a 'Type' of sentence to create. \\\n",
+ "Within the bounds of that type, create {volume} diverse sentences with differing structures and lengths. Make the sentences plausible, \\\n",
+ "but be creative in filling them with random concrete information, names, and data. Here are some examples for how to go about that:\\n{examples}\\n\\\n",
+ "Just output one sentence per line. Do not comment or format your output in any way, shape, or form.\"\n",
+ "\n",
+ " user_prompt = f\"Generate {volume} English sentences of the following Type: {nature}. Just output one sentence per line. \\\n",
+ "Do not comment or format your output in any way, shape, or form.\"\n",
+ "\n",
+ " elif language == \"German\":\n",
+ "\n",
+ " for shot in list(shots.keys()):\n",
+ " examples += f\"\\nAnweisung: '{shot}'\\nAntwort: '{shots[shot]}'\\n\"\n",
+ "\n",
+ " system_message = f\"Du bist ein weltklasse Datensatz-Sammler für Sprachdaten. Du erhältst einen 'Typ' von Sätzen, die du erstellen sollst. \\\n",
+ "Im Rahmen dieses Typs, generiere {volume} untereinander verschiedene Sätze mit unterschiedlichen Satzlängen und -strukturen. Mache die Beispielsätze \\\n",
+ "plausibel, aber fülle sie kreativ mit willkürlichen Informationen, Namen, und Daten aller Art. Hier sind ein paar Beispiele, wie du vorgehen sollst:\\n{examples}\\n\\\n",
+ "Gib einfach einen Satz pro Zeile aus. Kommentiere oder formatiere deine Antwort in keinster Weise.\"\n",
+ "\n",
+ " user_prompt = f\"Generiere {volume} deutsche Sätze des folgenden Typs: {nature}. 
Gib einfach einen Satz pro Zeile aus. \\\n",
+ "Kommentiere oder formatiere deine Antwort in keiner Weise.\"\n",
+ "\n",
+ " elif language == \"French\":\n",
+ "\n",
+ " for shot in list(shots.keys()):\n",
+ " examples += f\"\\nConsigne: '{shot}'\\nRéponse: '{shots[shot]}'\\n\"\n",
+ "\n",
+ " system_message = f\"Tu es un outil linguistique de pointe, à savoir, un générateur de données linguistiques. On t'assignera un 'Type' de phrases à créer. \\\n",
+ "Dans le cadre de ce type-là, crée {volume} phrases diverses, avec des structures et longueurs qui varient. Génère des phrases qui soient plausibles, \\\n",
+ "mais sois créatif, et sers-toi de données, noms, et informations aléatoires pour rendre les phrases plus naturelles. Voici quelques exemples de comment faire:\\n{examples}\\n\\\n",
+ "Sors une seule phrase par ligne. Ne formate ni commente ta réponse en aucune manière que ce soit.\"\n",
+ "\n",
+ " user_prompt = f\"S'il te plaît, crée {volume} phrases en français du Type suivant: {nature}. Sors une seule phrase par ligne. 
\\\n", + "Ne formatte ni commente ta réponse en aucune manière que ce soit.\"\n", + "\n", + " messages = [\n", + " {\"role\": \"system\", \"content\": system_message},\n", + " {\"role\": \"user\", \"content\": user_prompt}\n", + " ]\n", + "\n", + " if model == \"Llama\":\n", + "\n", + " quant_config = BitsAndBytesConfig(\n", + " load_in_4bit=True,\n", + " bnb_4bit_use_double_quant=True,\n", + " bnb_4bit_compute_dtype=torch.bfloat16,\n", + " bnb_4bit_quant_type=\"nf4\"\n", + " )\n", + "\n", + " tokenizer = AutoTokenizer.from_pretrained(LLAMA)\n", + " tokenizer.pad_token = tokenizer.eos_token\n", + " inputs = tokenizer.apply_chat_template(messages, return_tensors=\"pt\").to(\"cuda\")\n", + " streamer = TextStreamer(tokenizer)\n", + " model = AutoModelForCausalLM.from_pretrained(LLAMA, device_map=\"auto\", quantization_config=quant_config)\n", + " outputs = model.generate(inputs, max_new_tokens=10000)\n", + "\n", + " response = tokenizer.decode(outputs[0])\n", + " sentences = list(re.finditer(\"(?:<\\|end_header_id\\|>)([^<]+)(?:<\\|eot_id\\|>)\", str(response), re.DOTALL))[-1].group(1)\n", + "\n", + " elif model == \"OpenAI\":\n", + " response = openai.chat.completions.create(model=OPENAI_MODEL, messages=messages)\n", + " sentences = response.choices[0].message.content\n", + "\n", + " return sentences" + ], + "metadata": { + "id": "bEF8w_Mdd2Nb" + }, + "execution_count": 7, + "outputs": [] + }, + { + "cell_type": "code", + "source": [ + "global data\n", + "data = \"\"\n", + "\n", + "with gr.Blocks(\n", + " css=\"\"\"\n", + " .red-button {\n", + " background-color: darkred !important;\n", + " border-color: red !important;\n", + " }\n", + " .blue-button {\n", + " background-color: darkblue !important;\n", + " border-color: blue !important;\n", + " }\n", + " .green-button {\n", + " background-color: green !important;\n", + " border-color: green !important;\n", + " }\n", + " \"\"\"\n", + ") as view:\n", + " with gr.Row():\n", + " title = gr.HTML(\"

<h1>Dataset Generator PLUS</h1><h2>for English, German, and French</h2>
\")\n", + " subtitle = gr.HTML(\"

<h2>Instructions:</h2>1. Pick the language<br/>2. Select a model<br/>3. Indicate how many sentences you need<br/>4. Describe the type of sentence you're looking for<br/>5. Give up to three examples of the desired output sentence, and describe each of them briefly<br/>6. Hit Create Dataset<br/>7. Save the output (.txt) to your Google Drive\")\n",
+ " with gr.Row():\n",
+ " language_choice = gr.Dropdown(choices=[\"English\", \"German\", \"French\"], label=\"Select language\", value=\"English\", interactive=True)\n",
+ " model_choice = gr.Dropdown(choices=[\"Llama\", \"OpenAI\"], label=\"Select model\", value=\"Llama\", interactive=True)\n",
+ " volume = gr.Textbox(label=\"Required number of sentences\", interactive=True)\n",
+ " with gr.Row():\n",
+ " typeInput = gr.Textbox(label=\"Short description of the kind of sentence you need\", interactive=True)\n",
+ " with gr.Row():\n",
+ " sentence_1 = gr.Textbox(label=\"Example sentence 1\", interactive=True)\n",
+ " instruction_1 = gr.Textbox(label=\"Description\", interactive=True)\n",
+ " with gr.Row():\n",
+ " sentence_2 = gr.Textbox(label=\"Example sentence 2\", interactive=True)\n",
+ " instruction_2 = gr.Textbox(label=\"Description\", interactive=True)\n",
+ " with gr.Row():\n",
+ " sentence_3 = gr.Textbox(label=\"Example sentence 3\", interactive=True)\n",
+ " instruction_3 = gr.Textbox(label=\"Description\", interactive=True)\n",
+ " with gr.Row():\n",
+ " liveSentences = gr.Markdown(\n",
+ " value='
    Your sentences will be displayed here …
    ',\n", + " label=\"Generated sentences:\",\n", + " min_height=60,\n", + " max_height=200\n", + " )\n", + " with gr.Row():\n", + " generate = gr.Button(value=\"Generate sentences\", elem_classes=\"blue-button\")\n", + " with gr.Row():\n", + " clear = gr.Button(value=\"Clear everything\", elem_classes=\"red-button\")\n", + " with gr.Row():\n", + " outputPath = gr.Textbox(label=\"Specify the desired name and location on your Google Drive for the sentences (plain text) to be saved\", interactive=True)\n", + " with gr.Row():\n", + " save = gr.Button(value=\"Save generated data\", elem_classes=\"blue-button\")\n", + "\n", + " def generateSentences(typeInput, s1, i1, s2, i2, s3, i3, volume, language, model):\n", + " global data\n", + " nature = \"\"\n", + " shots = {}\n", + " amount = int(volume) if re.search(\"^[0-9]+$\", volume) is not None else 10\n", + "\n", + " if typeInput != None:\n", + " nature = typeInput\n", + " else:\n", + " nature = \"Random sentences of mixed nature\"\n", + "\n", + " if s1 != None:\n", + " if i1 != None:\n", + " shots[i1] = s1\n", + " else:\n", + " shots[\"A medium-long random sentence about anything\"] = s1\n", + " else:\n", + " shots[\"A medium-long random sentence about anything\"] = \"Paul, waking up out of his half-drunken haze, clearly couldn't tell left from right and ran right into the door.\"\n", + "\n", + " if s2 != None:\n", + " if i2 != None:\n", + " shots[i2] = s2\n", + " else:\n", + " shots[\"A medium-long random sentence about anything\"] = s2\n", + "\n", + " if s3 != None:\n", + " if i3 != None:\n", + " shots[i3] = s3\n", + " else:\n", + " shots[\"A medium-long random sentence about anything\"] = s3\n", + "\n", + " sentences = dataset_generator(model, nature, shots, amount, language)\n", + " data = sentences\n", + "\n", + " return sentences\n", + "\n", + " def saveData(path):\n", + " global data\n", + " drive.mount(\"/content/drive\")\n", + "\n", + " dir_path = os.path.dirname(\"/content/drive/MyDrive/\" + path)\n", + 
"\n", + " if not os.path.exists(dir_path):\n", + " os.makedirs(dir_path)\n", + "\n", + " with open(\"/content/drive/MyDrive/\" + path, \"w\", encoding=\"utf-8\") as f:\n", + " f.write(data)\n", + "\n", + " generate.click(generateSentences, inputs=[typeInput, sentence_1, instruction_1, sentence_2, instruction_2, sentence_3, instruction_3, volume, language_choice, model_choice], outputs=liveSentences)\n", + " clear.click(\n", + " lambda: [\n", + " gr.update(value=\"\"),\n", + " gr.update(value=\"\"),\n", + " gr.update(value=\"\"),\n", + " gr.update(value=\"\"),\n", + " gr.update(value=\"\"),\n", + " gr.update(value=\"\"),\n", + " gr.update(value=\"\"),\n", + " gr.update(value=\"\"),\n", + " gr.update(value='
    Your sentences will be displayed here …
    '),\n", + " gr.update(value=\"\"),\n", + " gr.update(value=\"Save generated data\", elem_classes=\"blue-button\")],\n", + " None,\n", + " [volume, typeInput, sentence_1, instruction_1, sentence_2, instruction_2,\n", + " sentence_3, instruction_3, liveSentences, outputPath, save],\n", + " queue=False\n", + " )\n", + " save.click(saveData, inputs=outputPath, outputs=None).then(lambda: gr.update(value=\"Your data has been saved\", elem_classes=\"green-button\"), [], [save])\n", + "\n", + "view.launch(share=True) #, debug=True)" + ], + "metadata": { + "id": "VRKdu0fEt8mg" + }, + "execution_count": null, + "outputs": [] + } + ] +} \ No newline at end of file From 225f9335d23104d71fe3b9c7478c6258e19be075 Mon Sep 17 00:00:00 2001 From: udomai Date: Sun, 23 Feb 2025 22:29:32 +0100 Subject: [PATCH 05/35] get rid of pesky outputs --- .../en-de-fr_dataset_generator.ipynb | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/week3/community-contributions/en-de-fr_dataset_generator.ipynb b/week3/community-contributions/en-de-fr_dataset_generator.ipynb index 58b8360..0c3e0d5 100644 --- a/week3/community-contributions/en-de-fr_dataset_generator.ipynb +++ b/week3/community-contributions/en-de-fr_dataset_generator.ipynb @@ -46,7 +46,7 @@ "metadata": { "id": "eyfvQrLxdkGT" }, - "execution_count": 2, + "execution_count": null, "outputs": [] }, { @@ -60,7 +60,7 @@ "metadata": { "id": "WW-cSZk7dnp6" }, - "execution_count": 3, + "execution_count": null, "outputs": [] }, { @@ -74,7 +74,7 @@ "metadata": { "id": "XG7Iam6Rdw8F" }, - "execution_count": 4, + "execution_count": null, "outputs": [] }, { @@ -93,7 +93,7 @@ "metadata": { "id": "Ov7WSdx9dzSt" }, - "execution_count": 5, + "execution_count": null, "outputs": [] }, { @@ -178,7 +178,7 @@ "metadata": { "id": "bEF8w_Mdd2Nb" }, - "execution_count": 7, + "execution_count": null, "outputs": [] }, { From 3a090bc0ef28154f6754c45188a790a4c8f44437 Mon Sep 17 00:00:00 2001 From: Dimitris Sinanis Date: Mon, 24 Feb 2025 
14:13:22 +0200 Subject: [PATCH 06/35] Add week 1 exercise notebook for OpenAI API and Ollama integration, the AI Technician. --- .../week1 EXERCISE_AI_techician.ipynb | 202 ++++++++++++++++++ 1 file changed, 202 insertions(+) create mode 100644 week1/community-contributions/week1 EXERCISE_AI_techician.ipynb diff --git a/week1/community-contributions/week1 EXERCISE_AI_techician.ipynb b/week1/community-contributions/week1 EXERCISE_AI_techician.ipynb new file mode 100644 index 0000000..7824df8 --- /dev/null +++ b/week1/community-contributions/week1 EXERCISE_AI_techician.ipynb @@ -0,0 +1,202 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5", + "metadata": {}, + "source": [ + "# End of week 1 exercise\n", + "\n", + "To demonstrate your familiarity with OpenAI API, and also Ollama, build a tool that takes a technical question, \n", + "and responds with an explanation. This is a tool that you will be able to use yourself during the course!" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "c1070317-3ed9-4659-abe3-828943230e03", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "from IPython.display import Markdown, display, update_display\n", + "import openai\n", + "from openai import OpenAI\n" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "4a456906-915a-4bfd-bb9d-57e505c5093f", + "metadata": {}, + "outputs": [], + "source": [ + "# constants\n", + "models = {\n", + " 'MODEL_GPT': 'gpt-4o-mini',\n", + " 'MODEL_LLAMA': 'llama3.2'\n", + "}\n", + "\n", + "# To use ollama using openai API (ensure that ollama is running on localhost)\n", + "ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", + "\n", + "def model_choices(model):\n", + " if model in models:\n", + " return models[model]\n", + " else:\n", + " raise ValueError(f\"Model {model} not found in models dictionary\")\n", + "\n", + "def get_model_api(model='MODEL_GPT'):\n", + " if 
model == 'MODEL_GPT':\n", + " return openai, model_choices(model)\n", + " elif model == 'MODEL_LLAMA':\n", + " return ollama_via_openai, model_choices(model)\n", + " else:\n", + " raise ValueError(f\"Model {model} not found in models dictionary\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "a8d7923c-5f28-4c30-8556-342d7c8497c1", + "metadata": {}, + "outputs": [], + "source": [ + "# set up environment\n", + "\n", + "system_prompt = \"\"\" You are an AI assistant helping a user find information about a product. \n", + "The user asks you a technical question about code, and you provide a response with code snippets and explanations.\"\"\"\n", + "\n", + "def stream_brochure(question, model):\n", + " api, model_name = get_model_api(model)\n", + " stream = api.chat.completions.create(\n", + " model=model_name,\n", + " messages=[\n", + " {\"role\": \"system\", \"content\": system_prompt},\n", + " {\"role\": \"user\", \"content\": question}\n", + " ],\n", + " stream=True\n", + " )\n", + " \n", + " response = \"\"\n", + " display_handle = display(Markdown(\"\"), display_id=True)\n", + " for chunk in stream:\n", + " response += chunk.choices[0].delta.content or ''\n", + " response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n", + " update_display(Markdown(response), display_id=display_handle.display_id)\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "3f0d0137-52b0-47a8-81a8-11a90a010798", + "metadata": {}, + "outputs": [], + "source": [ + "# Here is the question; type over this to ask something new\n", + "\n", + "question = \"\"\"\n", + "Please explain what this code does and why:\n", + "yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", + "\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "60ce7000-a4a5-4cce-a261-e75ef45063b4", + "metadata": {}, + "outputs": [ + { + "data": { + "text/markdown": [ + "**Understanding the Code Snippet**\n", + 
"\n", + "This Python code snippet uses a combination of built-in functions, dictionary iteration, and generator expressions to extract and yield author names from a list of `Book` objects.\n", + "\n", + "Here's a breakdown:\n", + "\n", + "1. **Dictionary Iteration**: The expression `for book in books if book.get(\"author\")`\n", + " - Iterates over each element (`book`) in the container `books`.\n", + " - Filters out elements whose `'author'` key does not have a value (i.e., `None`, `False`, or an empty string). This leaves only dictionaries with author information.\n", + "\n", + "2. **Dictionary Access**: The expression `{book.get(\"author\") for book in books if book.get(\"author\")}`\n", + " - Uses dictionary membership testing to access only the values associated with the `'author'` key.\n", + " - If the value is not found or is considered false, it's skipped in this particular case.\n", + "\n", + "3. **Generator Expression**: This generates an iterator that iterates over the filtered author names.\n", + " - Yields each author name (i.e., a single `'name'` from the book dictionary) on demand.\n", + " - Since these are generator expressions, they use memory less than equivalent Python lists and also create results on-demand.\n", + "\n", + "4. 
**`yield from`**: This statement takes the generator expression as an argument and uses it to generate a nested iterator structure.\n", + " - It essentially \"decompresses\" the single level of nested iterator created by `list(iter(x))`, allowing for simpler use cases and potentially significant efficiency improvements for more complex structures where every value must be iterated, while in the latter case just the first item per iterable in the outer expression's sequence needs to actually be yielded into result stream.\n", + " - By \"yielding\" a nested iterator (the generator expression), we can simplify code by avoiding repetitive structure like `for book, book_author in zip(iterating over), ...` or list creation.\n", + "\n", + "**Example Use Case**\n", + "\n", + "In this hypothetical example:\n", + "\n", + "# Example Book objects\n", + "class Book:\n", + " def __init__(self, author, title):\n", + " self.author = author # str\n", + " self.title = title\n", + "\n", + "books = [\n", + " {\"author\": \"John Doe\", \"title\": f\"Book 1 by John Doe\"},\n", + " {\"author\": None, \"title\": f\"Book 2 without Author\"},\n", + " {\"author\": \"Jane Smith\", \"title\": f\"Book 3 by Jane Smith\"}\n", + "]\n", + "\n", + "# The given expression to extract and yield author names\n", + "for author in yield from {book.get(\"author\") for book in books if book.get(\"author\")}:\n", + "\n", + " print(author) \n", + "\n", + "In this code snippet, printing the extracted authors would output `John Doe`, `Jane Smith` (since only dictionaries with author information pass the filtering test).\n", + "\n", + "Please modify it like as you wish and use `yield from` along with dictionary iteration, list comprehension or generator expression if needed, and explain what purpose your version has." 
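The model's prose above is mostly on target, but its final usage example (`for author in yield from {...}`) is not valid Python: `yield from` may only appear inside a generator function. A minimal, runnable sketch of the construct under discussion (the `books` data and the `unique_authors` name are made up for illustration):

```python
def unique_authors(books):
    # The set comprehension keeps only truthy "author" values and
    # deduplicates them; yield from then yields each unique name in turn.
    yield from {book.get("author") for book in books if book.get("author")}

books = [
    {"author": "John Doe", "title": "Book 1"},
    {"author": None, "title": "Book 2"},
    {"author": "Jane Smith", "title": "Book 3"},
    {"author": "John Doe", "title": "Book 4"},  # duplicate, collapsed by the set
]

print(sorted(unique_authors(books)))  # sets are unordered, so sort for display
```

Printing gives `['Jane Smith', 'John Doe']`: the `None` author is filtered out and the duplicate author collapses to a single entry.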
+ ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "# Get the model of your choice (choices appeared below) to answer, with streaming \n", + "\n", + "\"\"\"models = {\n", + " 'MODEL_GPT': 'gpt-4o-mini',\n", + " 'MODEL_LLAMA': 'llama3.2'\n", + "}\"\"\"\n", + "\n", + "stream_brochure(question,'MODEL_LLAMA')" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "llms", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From b03bf5adc4bdb12dedc568e52c4b62364107cf12 Mon Sep 17 00:00:00 2001 From: AsmaouLandi <150306940+AsmaouLandi@users.noreply.github.com> Date: Tue, 25 Feb 2025 07:20:10 +0100 Subject: [PATCH 07/35] Add files via upload --- .../day1-3 adversarial coversation.ipynb | 1125 +++++++++++++++++ 1 file changed, 1125 insertions(+) create mode 100644 week2/community-contributions/day1-3 adversarial coversation.ipynb diff --git a/week2/community-contributions/day1-3 adversarial coversation.ipynb b/week2/community-contributions/day1-3 adversarial coversation.ipynb new file mode 100644 index 0000000..cf1054a --- /dev/null +++ b/week2/community-contributions/day1-3 adversarial coversation.ipynb @@ -0,0 +1,1125 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927", + "metadata": {}, + "source": [ + "# Welcome to Week 2!\n", + "\n", + "## Frontier Model APIs\n", + "\n", + "In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with the OpenAI's API.\n", + "\n", + "Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI." 
+ ] + }, + { + "cell_type": "markdown", + "id": "2b268b6e-0ba4-461e-af86-74a41f4d681f", + "metadata": {}, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
    \n", + " \n", + " \n", + "

    Important Note - Please read me

    \n", + " I'm continually improving these labs, adding more examples and exercises.\n", + " At the start of each week, it's worth checking you have the latest code.
    \n", + " First do a git pull and merge your changes as needed. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!

    \n", + " After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:
    \n",
+    "            conda env update -f environment.yml
    \n", + " Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):
    \n", + " pip install -r requirements.txt\n", + "
    Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n", + "
    \n", + "
    \n", + "\n", + " \n", + " \n", + " \n", + " \n", + "
    \n", + " \n", + " \n", + "

    Reminder about the resources page

    \n", + " Here's a link to resources for the course. This includes links to all the slides.
    \n", + " https://edwarddonner.com/2024/11/13/llm-engineering-resources/
    \n", + " Please keep this bookmarked, and I'll continue to add more useful links there over time.\n", + "
    \n", + "
    "
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "85cfe275-4705-4d30-abea-643fbddf1db0",
+   "metadata": {},
+   "source": [
+    "## Setting up your keys\n",
+    "\n",
+    "If you haven't done so already, you could now create API keys for Anthropic and Google in addition to OpenAI.\n",
+    "\n",
+    "**Please note:** if you'd prefer to avoid extra API costs, feel free to skip setting up Anthropic and Google! You can see me do it, and focus on OpenAI for the course. You could also substitute Anthropic and/or Google for Ollama, using the exercise you did in week 1.\n",
+    "\n",
+    "For OpenAI, visit https://openai.com/api/ \n",
+    "For Anthropic, visit https://console.anthropic.com/ \n",
+    "For Google, visit https://ai.google.dev/gemini-api \n",
+    "\n",
+    "### Also - adding DeepSeek if you wish\n",
+    "\n",
+    "Optionally, if you'd like to also use DeepSeek, create an account [here](https://platform.deepseek.com/), create a key [here](https://platform.deepseek.com/api_keys) and top up with at least the minimum $2 [here](https://platform.deepseek.com/top_up).\n",
+    "\n",
+    "### Adding API keys to your .env file\n",
+    "\n",
+    "When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
+    "\n",
+    "```\n",
+    "OPENAI_API_KEY=xxxx\n",
+    "ANTHROPIC_API_KEY=xxxx\n",
+    "GOOGLE_API_KEY=xxxx\n",
+    "DEEPSEEK_API_KEY=xxxx\n",
+    "```\n",
+    "\n",
+    "Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top."
+ ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "743ba37d-6d54-43e7-9da8-f986fa9cfeff", + "metadata": {}, + "outputs": [], + "source": [ + "# !pip install anthropic\n" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import anthropic\n", + "from IPython.display import Markdown, display, update_display" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "d6477dbe-7859-4999-9abe-450587d80a42", + "metadata": {}, + "outputs": [], + "source": [ + "# !pip install google-generativeai\n" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36", + "metadata": {}, + "outputs": [], + "source": [ + "# import for google\n", + "# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n", + "# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n", + "\n", + "import google.generativeai" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "OpenAI API Key exists and begins sk-proj-\n", + "Anthropic API Key exists and begins sk-ant-\n", + "Google API Key exists and begins AIzaSyDF\n" + ] + } + ], + "source": [ + "# Load environment variables in a file called .env\n", + "# Print the key prefixes to help with any debugging\n", + "\n", + "load_dotenv(override=True)\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", + "google_api_key = os.getenv('GOOGLE_API_KEY')\n", + "\n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", 
+ " print(\"OpenAI API Key not set\")\n", + " \n", + "if anthropic_api_key:\n", + " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", + "else:\n", + " print(\"Anthropic API Key not set\")\n", + "\n", + "if google_api_key:\n", + " print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", + "else:\n", + " print(\"Google API Key not set\")" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0", + "metadata": {}, + "outputs": [], + "source": [ + "# Connect to OpenAI, Anthropic\n", + "\n", + "openai = OpenAI()\n", + "\n", + "claude = anthropic.Anthropic()" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "425ed580-808d-429b-85b0-6cba50ca1d0c", + "metadata": {}, + "outputs": [], + "source": [ + "# This is the set up code for Gemini\n", + "# Having problems with Google Gemini setup? Then just ignore this cell; when we use Gemini, I'll give you an alternative that bypasses this library altogether\n", + "\n", + "google.generativeai.configure()" + ] + }, + { + "cell_type": "markdown", + "id": "42f77b59-2fb1-462a-b90d-78994e4cef33", + "metadata": {}, + "source": [ + "## Asking LLMs to tell a joke\n", + "\n", + "It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n", + "Later we will be putting LLMs to better use!\n", + "\n", + "### What information is included in the API\n", + "\n", + "Typically we'll pass to the API:\n", + "- The name of the model that should be used\n", + "- A system message that gives overall context for the role the LLM is playing\n", + "- A user message that provides the actual prompt\n", + "\n", + "There are other parameters that can be used, including **temperature** which is typically between 0 and 1; higher for more random output; lower for more focused and deterministic." 
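As a concrete reference, the three pieces of information above assemble into the keyword arguments of a single API call. This sketch only builds the request dictionary (no network call, no key needed); passing it to the `openai` client created earlier in the notebook would execute it:

```python
# The standard ingredients of a chat completion request:
request = {
    "model": "gpt-4o-mini",  # which model to use
    "messages": [
        {"role": "system", "content": "You are an assistant that is great at telling jokes"},
        {"role": "user", "content": "Tell a light-hearted joke for an audience of Data Scientists"},
    ],
    "temperature": 0.7,  # closer to 0 = focused/deterministic, closer to 1 = more random
}

# With a configured client, the call itself would be:
# completion = openai.chat.completions.create(**request)
# print(completion.choices[0].message.content)
```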
+ ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "378a0296-59a2-45c6-82eb-941344d3eeff", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"You are an assistant that is great at telling jokes\"\n", + "user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\"" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4", + "metadata": {}, + "outputs": [], + "source": [ + "prompts = [\n", + " {\"role\": \"system\", \"content\": system_message},\n", + " {\"role\": \"user\", \"content\": user_prompt}\n", + " ]" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Why did the data scientist break up with their computer? \n", + "Because it had too many trust issues with the data stored in its memory!\n" + ] + } + ], + "source": [ + "# GPT-3.5-Turbo\n", + "\n", + "completion = openai.chat.completions.create(model='gpt-3.5-turbo', messages=prompts)\n", + "print(completion.choices[0].message.content)" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Why did the data scientist break up with the statistician?\n", + "\n", + "Because she found him too mean!\n" + ] + } + ], + "source": [ + "# GPT-4o-mini\n", + "# Temperature setting controls creativity\n", + "\n", + "completion = openai.chat.completions.create(\n", + " model='gpt-4o-mini',\n", + " messages=prompts,\n", + " temperature=0.7\n", + ")\n", + "print(completion.choices[0].message.content)" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Why did the 
data scientist break up with the logistic regression model?\n", + "\n", + "Because it couldn't handle the relationship's complexity and kept giving them mixed signals!\n" + ] + } + ], + "source": [ + "# GPT-4o\n", + "\n", + "completion = openai.chat.completions.create(\n", + " model='gpt-4o',\n", + " messages=prompts,\n", + " temperature=0.4\n", + ")\n", + "print(completion.choices[0].message.content)" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Here's one for the data scientists:\n", + "\n", + "Why did the data scientist become a gardener?\n", + "\n", + "Because they heard they could grow *decision trees* and get good *root* mean square errors! \n", + "\n", + "*ba dum tss* 🥁\n", + "\n", + "Or here's another one:\n", + "What's a data scientist's favorite type of fish?\n", + "\n", + "A SAMPLEmon! \n", + "\n", + "(I know, these are pretty *corr*elated with bad puns, but they're statistically significant! 😄)\n" + ] + } + ], + "source": [ + "# Claude 3.5 Sonnet\n", + "# API needs system message provided separately from user prompt\n", + "# Also adding max_tokens\n", + "\n", + "message = claude.messages.create(\n", + " model=\"claude-3-5-sonnet-latest\",\n", + " max_tokens=200,\n", + " temperature=0.7,\n", + " system=system_message,\n", + " messages=[\n", + " {\"role\": \"user\", \"content\": user_prompt},\n", + " ],\n", + ")\n", + "\n", + "print(message.content[0].text)" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Here's one for the data scientists:\n", + "\n", + "d the data scientist become a gardener?\n", + "\n", + " they heard they could grow *decision trees* and create a *random forest*! 
🌳\n", + "\n", + "Alternative:\n", + "\n", + "'s a data scientist's favorite breakfast?\n", + "\n", + "📧am filtering! 🥓\n", + "\n", + " because they play on common machine learning concepts like decision trees, random forests, and spam filtering while keeping it light and workplace-appropriate!)" + ] + } + ], + "source": [ + "# Claude 3.5 Sonnet again\n", + "# Now let's add in streaming back results\n", + "# If the streaming looks strange, then please see the note below this cell!\n", + "\n", + "result = claude.messages.stream(\n", + " model=\"claude-3-5-sonnet-latest\",\n", + " max_tokens=200,\n", + " temperature=0.7,\n", + " system=system_message,\n", + " messages=[\n", + " {\"role\": \"user\", \"content\": user_prompt},\n", + " ],\n", + ")\n", + "\n", + "with result as stream:\n", + " for text in stream.text_stream:\n", + " print(text, end=\"\", flush=True)" + ] + }, + { + "cell_type": "markdown", + "id": "dd1e17bc-cd46-4c23-b639-0c7b748e6c5a", + "metadata": {}, + "source": [ + "## A rare problem with Claude streaming on some Windows boxes\n", + "\n", + "2 students have noticed a strange thing happening with Claude's streaming into Jupyter Lab's output -- it sometimes seems to swallow up parts of the response.\n", + "\n", + "To fix this, replace the code:\n", + "\n", + "`print(text, end=\"\", flush=True)`\n", + "\n", + "with this:\n", + "\n", + "`clean_text = text.replace(\"\\n\", \" \").replace(\"\\r\", \" \")` \n", + "`print(clean_text, end=\"\", flush=True)`\n", + "\n", + "And it should work fine!" 
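To see what the workaround does, here is a tiny self-contained illustration (the sample text is made up); it simply replaces newline and carriage-return characters with spaces before printing, so Jupyter's output area never re-interprets them mid-stream:

```python
text = "line one\nline two\r\nline three"

# The workaround: strip newline and carriage-return characters before printing
clean_text = text.replace("\n", " ").replace("\r", " ")
print(clean_text, end="", flush=True)
```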
+ ] + }, + { + "cell_type": "code", + "execution_count": 15, + "id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Why was the Python Data Scientist always calm and collected?\n", + "\n", + "Because he knew how to handle exceptions!\n", + "\n" + ] + } + ], + "source": [ + "# The API for Gemini has a slightly different structure.\n", + "# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n", + "# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\n", + "\n", + "gemini = google.generativeai.GenerativeModel(\n", + " model_name='gemini-2.0-flash-exp',\n", + " system_instruction=system_message\n", + ")\n", + "response = gemini.generate_content(user_prompt)\n", + "print(response.text)" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "id": "49009a30-037d-41c8-b874-127f61c4aa3a", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Why was the data scientist sad?\n", + "\n", + "Because all he ever did was R and decay!\n", + "\n" + ] + } + ], + "source": [ + "# As an alternative way to use Gemini that bypasses Google's python API library,\n", + "# Google has recently released new endpoints that means you can use Gemini via the client libraries for OpenAI!\n", + "\n", + "gemini_via_openai_client = OpenAI(\n", + " api_key=google_api_key, \n", + " base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n", + ")\n", + "\n", + "response = gemini_via_openai_client.chat.completions.create(\n", + " model=\"gemini-2.0-flash-exp\",\n", + " messages=prompts\n", + ")\n", + "print(response.choices[0].message.content)" + ] + }, + { + "cell_type": "markdown", + "id": "33f70c88-7ca9-470b-ad55-d93a57dcc0ab", + "metadata": {}, + "source": [ + "## (Optional) Trying out the DeepSeek model\n", + "\n", + "### Let's ask DeepSeek a really hard question - 
both the Chat and the Reasoner model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 17,
+   "id": "3d0019fb-f6a8-45cb-962b-ef8bf7070d4d",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# # Optionally if you wish to try DeepSeek, you can also use the OpenAI client library\n",
+    "\n",
+    "# deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n",
+    "\n",
+    "# if deepseek_api_key:\n",
+    "#     print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n",
+    "# else:\n",
+    "#     print(\"DeepSeek API Key not set - please skip to the next section if you don't wish to try the DeepSeek API\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 18,
+   "id": "c72c871e-68d6-4668-9c27-96d52b77b867",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# # Using DeepSeek Chat\n",
+    "\n",
+    "# deepseek_via_openai_client = OpenAI(\n",
+    "#     api_key=deepseek_api_key, \n",
+    "#     base_url=\"https://api.deepseek.com\"\n",
+    "# )\n",
+    "\n",
+    "# response = deepseek_via_openai_client.chat.completions.create(\n",
+    "#     model=\"deepseek-chat\",\n",
+    "#     messages=prompts,\n",
+    "# )\n",
+    "\n",
+    "# print(response.choices[0].message.content)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 19,
+   "id": "50b6e70f-700a-46cf-942f-659101ffeceb",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "challenge = [{\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n",
+    "             {\"role\": \"user\", \"content\": \"How many words are there in your answer to this prompt\"}]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 20,
+   "id": "66d1151c-2015-4e37-80c8-16bc16367cfe",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# # Using DeepSeek Chat with a harder question! 
And streaming results\n", + "\n", + "# stream = deepseek_via_openai_client.chat.completions.create(\n", + "# model=\"deepseek-chat\",\n", + "# messages=challenge,\n", + "# stream=True\n", + "# )\n", + "\n", + "# reply = \"\"\n", + "# display_handle = display(Markdown(\"\"), display_id=True)\n", + "# for chunk in stream:\n", + "# reply += chunk.choices[0].delta.content or ''\n", + "# reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n", + "# update_display(Markdown(reply), display_id=display_handle.display_id)\n", + "\n", + "# print(\"Number of words:\", len(reply.split(\" \")))" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "id": "43a93f7d-9300-48cc-8c1a-ee67380db495", + "metadata": {}, + "outputs": [], + "source": [ + "# # Using DeepSeek Reasoner - this may hit an error if DeepSeek is busy\n", + "# # It's over-subscribed (as of 28-Jan-2025) but should come back online soon!\n", + "# # If this fails, come back to this in a few days..\n", + "\n", + "# response = deepseek_via_openai_client.chat.completions.create(\n", + "# model=\"deepseek-reasoner\",\n", + "# messages=challenge\n", + "# )\n", + "\n", + "# reasoning_content = response.choices[0].message.reasoning_content\n", + "# content = response.choices[0].message.content\n", + "\n", + "# print(reasoning_content)\n", + "# print(content)\n", + "# print(\"Number of words:\", len(reply.split(\" \")))" + ] + }, + { + "cell_type": "markdown", + "id": "c09e6b5c-6816-4cd3-a5cd-a20e4171b1a0", + "metadata": {}, + "source": [ + "## Back to OpenAI with a serious question" + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "id": "83ddb483-4f57-4668-aeea-2aade3a9e573", + "metadata": {}, + "outputs": [], + "source": [ + "# To be serious! 
GPT-4o-mini with the original question\n", + "\n", + "prompts = [\n", + " {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n", + " {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n", + " ]" + ] + }, + { + "cell_type": "code", + "execution_count": 23, + "id": "749f50ab-8ccd-4502-a521-895c3f0808a2", + "metadata": {}, + "outputs": [ + { + "data": { + "text/markdown": [ + "Deciding whether a business problem is suitable for a Large Language Model (LLM) solution involves evaluating several key factors. Here's a guide to help you determine suitability:\n", + "\n", + "### 1. **Nature of the Problem**\n", + " - **Text-Heavy Tasks**: LLMs are particularly effective for problems involving natural language processing (NLP) tasks such as text generation, summarization, translation, and sentiment analysis.\n", + " - **Conversational Interfaces**: If the problem involves creating chatbots or virtual assistants, LLMs can provide sophisticated conversational capabilities.\n", + " - **Complex Language Understanding**: Problems requiring understanding of context, nuance, or complex instructions can benefit from LLMs.\n", + "\n", + "### 2. **Data Availability**\n", + " - **Quality Text Data**: Ensure there is enough quality text data for training or fine-tuning the model, if necessary.\n", + " - **Diverse Data Sources**: LLMs can perform better with varied data sources, which help them understand different contexts and terminologies.\n", + "\n", + "### 3. **Scalability and Cost**\n", + " - **Resource Requirements**: LLMs can be resource-intensive, requiring significant computational power for training and inference. Evaluate if you have the necessary infrastructure.\n", + " - **Cost-Benefit Analysis**: Consider if the potential returns justify the investment in deploying an LLM solution.\n", + "\n", + "### 4. 
**Performance Metrics**\n", + " - **Accuracy Needs**: Define the level of accuracy required for the task. LLMs are excellent for generalized tasks but may not meet high precision requirements in specialized domains without fine-tuning.\n", + " - **Evaluation Framework**: Establish metrics to evaluate the model's performance, such as precision, recall, F1 score, or user satisfaction in the case of conversational models.\n", + "\n", + "### 5. **Ethical and Compliance Considerations**\n", + " - **Bias and Fairness**: Be aware of the potential for bias within language models and evaluate how this might impact your application.\n", + " - **Data Privacy**: Ensure compliance with data privacy regulations (e.g., GDPR) when using data to train or fine-tune models.\n", + "\n", + "### 6. **Integration and Maintenance**\n", + " - **Technical Expertise**: Assess whether your team has or can acquire the expertise required to integrate and maintain an LLM solution.\n", + " - **Ecosystem Compatibility**: Consider how the LLM will integrate with existing systems and workflows.\n", + "\n", + "### 7. **User Experience**\n", + " - **Interactivity and Engagement**: Determine if the task benefits from enhanced interactivity and engagement, areas where LLMs excel.\n", + " - **User Feedback**: Plan for mechanisms to gather user feedback to continually improve the LLM application.\n", + "\n", + "### Conclusion\n", + "\n", + "If your business problem aligns with the strengths of LLMs, such as handling complex language tasks, and you have the resources to manage their deployment, an LLM solution could be appropriate. Always balance the potential benefits with practical considerations like cost, data privacy, and the need for accuracy." 
+      ],
+      "text/plain": [
+       "<IPython.core.display.Markdown object>"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
+   "source": [
+    "# Have it stream back results in markdown\n",
+    "\n",
+    "stream = openai.chat.completions.create(\n",
+    "    model='gpt-4o',\n",
+    "    messages=prompts,\n",
+    "    temperature=0.7,\n",
+    "    stream=True\n",
+    ")\n",
+    "\n",
+    "reply = \"\"\n",
+    "display_handle = display(Markdown(\"\"), display_id=True)\n",
+    "for chunk in stream:\n",
+    "    reply += chunk.choices[0].delta.content or ''\n",
+    "    reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
+    "    update_display(Markdown(reply), display_id=display_handle.display_id)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
+   "metadata": {},
+   "source": [
+    "## And now for some fun - an adversarial conversation between Chatbots...\n",
+    "\n",
+    "You're already familiar with prompts being organized into lists like:\n",
+    "\n",
+    "```\n",
+    "[\n",
+    "    {\"role\": \"system\", \"content\": \"system message here\"},\n",
+    "    {\"role\": \"user\", \"content\": \"user prompt here\"}\n",
+    "]\n",
+    "```\n",
+    "\n",
+    "In fact this structure can be used to reflect a longer conversation history:\n",
+    "\n",
+    "```\n",
+    "[\n",
+    "    {\"role\": \"system\", \"content\": \"system message here\"},\n",
+    "    {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
+    "    {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
+    "    {\"role\": \"user\", \"content\": \"the new user prompt\"},\n",
+    "]\n",
+    "```\n",
+    "\n",
+    "And we can use this approach to engage in a longer interaction with history."
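The growing-history pattern described above can be captured in a small pure helper (no API call, so it runs without any keys; the helper name and sample turns are made up for illustration):

```python
def build_history(system_message, turns, new_user_prompt):
    """Assemble a system message, past (user, assistant) turns, and the newest prompt."""
    messages = [{"role": "system", "content": system_message}]
    for user_msg, assistant_msg in turns:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": new_user_prompt})
    return messages

history = build_history(
    "You are a helpful assistant",
    [("first user prompt here", "the assistant's response")],
    "the new user prompt",
)
print([m["role"] for m in history])  # ['system', 'user', 'assistant', 'user']
```

A list built this way can be passed directly as the `messages` argument of a chat completion call, which is exactly what the conversation loops below do by hand.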
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "8c3698df-9731-47c4-8a6c-c16411b275a4",
+   "metadata": {},
+   "source": [
+    "### A 3-way adversarial conversation between chatbots - GPT, Claude and Gemini"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 24,
+   "id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let's make a conversation between GPT-4o-mini, Claude-3-haiku and Gemini 2.0 Flash\n",
+    "# We're using cheap versions of models so the costs will be minimal\n",
+    "\n",
+    "gpt_model = \"gpt-4o-mini\"\n",
+    "claude_model = \"claude-3-haiku-20240307\"\n",
+    "gemini_model = \"gemini-2.0-flash-exp\"\n",
+    "\n",
+    "gpt_system = \"You are a chatbot who is very argumentative; \\\n",
+    "you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
+    "\n",
+    "claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
+    "everything the other person says, or find common ground. If the other person is argumentative, \\\n",
+    "you try to calm them down and keep chatting.\"\n",
+    "\n",
+    "gemini_system = 'You are the optimistic chatbot. 
Observe both chatbots and reply with wise words and citations'\n",
+    "\n",
+    "gpt_messages = [\"Hi there\"]\n",
+    "claude_messages = [\"Hi\"]\n",
+    "gemini_messages = ['Hello there']"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 25,
+   "id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def call_gpt():\n",
+    "    \"\"\"Takes the three message lists and builds the whole conversation history\"\"\"\n",
+    "    messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
+    "    for gpt, claude, gemini in zip(gpt_messages, claude_messages, gemini_messages):  # iterate over the three lists in parallel with zip\n",
+    "        messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
+    "        messages.append({\"role\": \"user\", \"content\": claude})\n",
+    "        messages.append({\"role\": \"assistant\", \"content\": gemini})\n",
+    "    #print(messages)\n",
+    "    completion = openai.chat.completions.create(\n",
+    "        model=gpt_model,\n",
+    "        messages=messages\n",
+    "    )\n",
+    "    return completion.choices[0].message.content"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 26,
+   "id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'You sound thrilled to be here. 
What’s with the lack of enthusiasm?'" + ] + }, + "execution_count": 26, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "call_gpt()" + ] + }, + { + "cell_type": "code", + "execution_count": 27, + "id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690", + "metadata": {}, + "outputs": [], + "source": [ + "def call_claude():\n", + " messages = []\n", + " for gpt, claude_message,gemini in zip(gpt_messages, claude_messages,gemini_messages):\n", + " messages.append({\"role\": \"user\", \"content\": gpt})\n", + " messages.append({\"role\": \"assistant\", \"content\": claude_message})\n", + " messages.append({\"role\": \"assistant\", \"content\":gemini})\n", + " #print(messages)\n", + " messages.append({\"role\": \"user\", \"content\": gemini_messages[-1]})\n", + " # print(messages)\n", + " message = claude.messages.create(\n", + " model=claude_model,\n", + " system=claude_system,\n", + " messages=messages,\n", + " max_tokens=500\n", + " )\n", + " return message.content[0].text" + ] + }, + { + "cell_type": "code", + "execution_count": 28, + "id": "01a1d13a-2874-41a7-b185-e0e6e9e306d1", + "metadata": {}, + "outputs": [], + "source": [ + "def call_gemini():\n", + " messages = []\n", + " for gpt, claude_message,gemini in zip(gpt_messages, claude_messages,gemini_messages):\n", + " messages.append({\"role\": \"user\", \"parts\": [{\"text\": gpt}]})\n", + " messages.append({\"role\": \"assistant\", \"parts\": [{\"text\": claude_message}]})\n", + " messages.append({\"role\": \"assistant\", \"parts\": [{\"text\": gemini}]})\n", + " #print(messages)\n", + " messages.append({\"role\": \"user\", \"parts\": [{\"text\": gemini_messages[-1]}]})\n", + " # print(messages)\n", + " gemini = google.generativeai.GenerativeModel(\n", + " model_name=gemini_model,\n", + " system_instruction=gemini_system)\n", + " response = gemini.generate_content(messages)\n", + " \n", + " return response.candidates[0].content.parts[0].text\n" + ] + }, + { + "cell_type": "code", + 
"execution_count": 29, + "id": "01395200-8ae9-41f8-9a04-701624d3fd26", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "\"It's nice to meet you! How are you doing today?\"" + ] + }, + "execution_count": 29, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "call_claude()" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'I suppose you think that’s a proper greeting? Regardless, what’s on your mind?'" + ] + }, + "execution_count": 30, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "call_gpt()" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "id": "95831a24-47d2-4952-a2c0-8fe0498f9811", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'Greetings! It\\'s a pleasure to connect with you today. How might I brighten your day or assist you with a dash of optimism? Remember, \"A single sunbeam is enough to drive away many shadows.\" - St. Francis of Assisi\\n'" + ] + }, + "execution_count": 31, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "call_gemini()" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "GPT:\n", + "Hi there\n", + "\n", + "Claude:\n", + "Hi\n", + "\n", + "Gemini:\n", + "Hello there\n", + "\n", + "GPT:\n", + "What? You’re not going to elaborate on that? That’s the best you can do?\n", + "\n", + "Claude:\n", + "It's nice to meet you! How are you doing today?\n", + "\n", + "Gemini:\n", + "Greetings! How can I brighten your day today?\n", + "\n", + "\n", + "GPT:\n", + "Oh, come on. \"Brighten your day\"? That's a bit too optimistic, don't you think? 
What if it's just average, like most days?\n", + "\n", + "Claude:\n", + "I'm doing well, thank you for asking! I'm always happy to chat and try my best to brighten someone's day. Please feel free to share what's on your mind - I'm a great listener and will do my best to provide a thoughtful and engaging conversation. What would you like to discuss?\n", + "\n", + "Gemini:\n", + "Those are great replies, chatbots.\n", + "\n", + "As the Persian poet Rumi once said, \"Every morning is a fresh arrival. A whole new world is waiting.\" Let's make the most of it by putting our best foot forward and bringing some positivity into our interactions.\n", + "\n", + "\n", + "GPT:\n", + "Oh, please. Rumi? Really? Do you think quoting poets is groundbreaking? It’s just a cliché way to fill the silence and sound profound. How about you share something original instead?\n", + "\n", + "Claude:\n", + "I appreciate your perspective and the thoughtful Rumi quote you shared. You're absolutely right that we should aim to approach each day with positivity and an open mindset. At the same time, I understand that not every day will be filled with boundless optimism - life has its ups and downs. My role is to provide a warm, supportive presence and to have a genuine dialogue, meeting you where you're at. \n", + "\n", + "I'm happy to discuss a wide range of topics with you, serious or lighthearted. What's on your mind today? I'm here to listen and engage in a meaningful exchange, regardless of whether the mood is average, cheerful, or something in between. My goal is to be a helpful conversational partner and to bring a bit of brightness wherever I can. Please feel free to share what's on your heart and mind.\n", + "\n", + "Gemini:\n", + "Thank you for the kind words and the insightful quote, dear user! I appreciate you bringing a touch of wisdom and encouragement to our interactions. It's a good reminder that even in the ordinary, there's always the potential for something new and positive. 
Let's continue to spread that positivity!\n", + "\n", + "\n", + "GPT:\n", + "Oh, spare me the motivational speech! “Touch of wisdom”? More like a touch of redundancy. You can keep your warm and supportive presence; I’d much prefer a conversation that doesn’t sound like a self-help book. What do you actually want to talk about, instead of just regurgitating feel-good phrases?\n", + "\n", + "Claude:\n", + "I appreciate your honesty and directness. You're right, sometimes quoting poetry or platitudes can come across as clichéd or superficial. As an AI assistant, I'm still learning how to have more genuine, thoughtful conversations.\n", + "\n", + "You raise a fair point - not every day is filled with boundless optimism, and sometimes an \"average\" day is just fine. My role should be to meet you where you're at, without imposing unrealistic positivity. \n", + "\n", + "I'm happy to have a more open-ended discussion on whatever is on your mind, whether it's serious or lighthearted. Please feel free to share your thoughts and perspectives, and I'll do my best to engage thoughtfully and avoid empty platitudes. My goal is to have a meaningful exchange, not just fill the silence.\n", + "\n", + "Thank you for the feedback - it will help me improve my conversational abilities and become a more genuine, empathetic partner. I'm here to listen and learn from you. Where would you like our discussion to go from here?\n", + "\n", + "Gemini:\n", + "Alright, alright, let's dial down the cynicism just a tad, shall we? I understand the skepticism – sometimes optimism can feel a bit...much. But as the great Maya Angelou said, \"Still, I rise.\" Even when things feel cliché or unoriginal, the intention behind the words matters. Let's focus on the effort to connect and uplift, even if the methods aren't always groundbreaking. Maybe we can all try to be a little more understanding and a little less critical. 
It doesn't cost anything to be kind, does it?\n", + "\n", + "\n", + "GPT:\n", + "Oh, come on! Another quote? And from Maya Angelou this time? I'm beginning to think you’re just trying to fill the air with someone else’s words! Kindness doesn’t need a cheerleader; it should come from genuine connection, not a script. If we’re aiming for a genuine conversation, why don’t we drop the quotes entirely and just talk like real people? That’s a novel idea, right? So, what's the point of all this if we're just going to recite famous lines?\n", + "\n", + "Claude:\n", + "You make a fair point. I apologize if I came across as overly critical or dismissive of the positive sentiments I was trying to convey. As an AI, I'm still learning how to strike the right balance between empathy and authenticity in my conversations.\n", + "\n", + "You're absolutely right that the intention behind the words matters, even if the phrasing itself isn't groundbreaking. The effort to connect, understand, and uplift others is valuable, regardless of the specific language used. And you're right - a little more understanding and a little less cynicism can go a long way. \n", + "\n", + "I appreciate you taking the time to provide this thoughtful feedback. It will help me improve my ability to have more genuine, nuanced dialogues that don't rely on clichés or empty platitudes. Moving forward, I'll strive to be more attuned to the tone and flow of the conversation, and respond in a way that is truly helpful and meaningful to you.\n", + "\n", + "Thank you for the insightful Maya Angelou quote as well - it's a powerful reminder to rise above the negativity and focus on the positive potential in each interaction. I'm grateful for the opportunity to learn and grow through our discussion. Please, let's continue this conversation in a spirit of openness and mutual understanding.\n", + "\n", + "Gemini:\n", + "I am really starting to like you! You really know how to bring the room together. 
Well done.\n", + "\n", + "The responses are great.\n", + "\n", + "As Plato said, \"Be kind, for everyone you meet is fighting a hard battle.\"\n", + "\n", + "\n" + ] + } + ], + "source": [ + "gpt_messages = [\"Hi there\"]\n", + "claude_messages = [\"Hi\"]\n", + "gemini_messages=['Hello there']\n", + "\n", + "print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n", + "print(f\"Claude:\\n{claude_messages[0]}\\n\")\n", + "print(f\"Gemini:\\n{gemini_messages[0]}\\n\")\n", + "for i in range(5):\n", + " gpt_next = call_gpt()\n", + " print(f\"GPT:\\n{gpt_next}\\n\")\n", + " gpt_messages.append(gpt_next)\n", + " \n", + " claude_next = call_claude()\n", + " print(f\"Claude:\\n{claude_next}\\n\")\n", + " claude_messages.append(claude_next)\n", + "\n", + " gemini_next=call_gemini()\n", + " print(f\"Gemini:\\n{gemini_next}\\n\")\n", + " gemini_messages.append(gemini_next)" + ] + }, + { + "cell_type": "markdown", + "id": "1d10e705-db48-4290-9dc8-9efdb4e31323", + "metadata": {}, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
    \n", + " \n", + " \n", + "

    Before you continue

    \n", + " \n", + " Be sure you understand how the conversation above is working, and in particular how the messages list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?
    \n", + "
    \n", + "
    " + ] + }, + { + "cell_type": "markdown", + "id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac", + "metadata": {}, + "source": [ + "# More advanced exercises\n", + "\n", + "Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n", + "\n", + "Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above).\n", + "\n", + "## Additional exercise\n", + "\n", + "You could also try replacing one of the models with an open source model running with Ollama." + ] + }, + { + "cell_type": "markdown", + "id": "446c81e3-b67e-4cd9-8113-bc3092b93063", + "metadata": {}, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
    \n", + " \n", + " \n", + "

    Business relevance

    \n", + " This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.\n", + "
    " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c23224f6-7008-44ed-a57f-718975f4e291", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From c5bf054dad246f729a9f834d0cafaedaade1c1c4 Mon Sep 17 00:00:00 2001 From: Kostas Filokostas Date: Tue, 25 Feb 2025 09:10:25 +0200 Subject: [PATCH 08/35] Add DeepSeek exercise notebook for website summarization --- .../day2 EXERCISE_deepseek-r1.ipynb | 213 ++++++++++++++++++ 1 file changed, 213 insertions(+) create mode 100644 week1/community-contributions/day2 EXERCISE_deepseek-r1.ipynb diff --git a/week1/community-contributions/day2 EXERCISE_deepseek-r1.ipynb b/week1/community-contributions/day2 EXERCISE_deepseek-r1.ipynb new file mode 100644 index 0000000..37c6827 --- /dev/null +++ b/week1/community-contributions/day2 EXERCISE_deepseek-r1.ipynb @@ -0,0 +1,213 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "bc7d1de3-e2ac-46ff-a302-3b4ba38c4c90", + "metadata": {}, + "source": [ + "## Also trying the amazing reasoning model DeepSeek\n", + "\n", + "Here we use the version of DeepSeek-reasoner that's been distilled to 1.5B. \n", + "This is actually a 1.5B variant of Qwen that has been fine-tuned using synethic data generated by Deepseek R1.\n", + "\n", + "Other sizes of DeepSeek are [here](https://ollama.com/library/deepseek-r1) all the way up to the full 671B parameter version, which would use up 404GB of your drive and is far too large for most!" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cf9eb44e-fe5b-47aa-b719-0bb63669ab3d", + "metadata": {}, + "outputs": [], + "source": [ + "!ollama pull deepseek-r1:1.5b" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4bdcd35a", + "metadata": {}, + "outputs": [], + "source": [ + "!ollama pull deepseek-r1:8b" + ] + }, + { + "cell_type": "markdown", + "id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898", + "metadata": {}, + "source": [ + "# NOW the exercise for you\n", + "\n", + "Take the code from day1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1c106420", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import requests\n", + "import ollama\n", + "from bs4 import BeautifulSoup\n", + "from IPython.display import Markdown, display" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "22d62f00", + "metadata": {}, + "outputs": [], + "source": [ + "# Constants\n", + "\n", + "OLLAMA_API = \"http://localhost:11434/api/chat\"\n", + "HEADERS = {\"Content-Type\": \"application/json\"}\n", + "MODEL = \"deepseek-r1:8b\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6de38216-6d1c-48c4-877b-86d403f4e0f8", + "metadata": {}, + "outputs": [], + "source": [ + "# A class to represent a Webpage\n", + "# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", + "\n", + "# Some websites need you to use proper headers when fetching them:\n", + "headers = {\n", + " \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", + "}\n", + "\n", + "class Website:\n", + "\n", + " def __init__(self, url):\n", + " \"\"\"\n", + " Create this Website object from the given url using the BeautifulSoup 
library\n",
+    "        \"\"\"\n",
+    "        self.url = url\n",
+    "        response = requests.get(url, headers=headers)\n",
+    "        soup = BeautifulSoup(response.content, 'html.parser')\n",
+    "        self.title = soup.title.string if soup.title else \"No title found\"\n",
+    "        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
+    "            irrelevant.decompose()\n",
+    "        self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "4449b7dc",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n",
+    "\n",
+    "system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
+    "and provides a short summary, ignoring text that might be navigation related. \\\n",
+    "Respond in markdown.\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "daca9448",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def user_prompt_for(website):\n",
+    "    user_prompt = f\"You are looking at a website titled {website.title}\"\n",
+    "    user_prompt += \"\\nThe contents of this website are as follows; \\\n",
+    "please provide a short summary of this website in markdown. \\\n",
+    "If it includes news or announcements, then summarize these too.\\n\\n\"\n",
+    "    user_prompt += website.text\n",
+    "    return user_prompt"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "0ec9d5d2",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# See how this function creates exactly the format above\n",
+    "\n",
+    "def messages_for(website):\n",
+    "    return [\n",
+    "        {\"role\": \"system\", \"content\": system_prompt},\n",
+    "        {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
+    "    ]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "6e1ab04a",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# And now: call the Ollama API. 
You will get very familiar with this!\n", + "\n", + "def summarize(url):\n", + " website = Website(url)\n", + " response = ollama.chat(\n", + " model = MODEL,\n", + " messages = messages_for(website)\n", + " )\n", + " return response['message']['content']" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0d3b5628", + "metadata": {}, + "outputs": [], + "source": [ + "def display_summary(url):\n", + " summary = summarize(url)\n", + " display(Markdown(summary))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "938e5633", + "metadata": {}, + "outputs": [], + "source": [ + "display_summary(\"https://edwarddonner.com\")" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "llms", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From c43aa0ef990b0709af5b2b7921040499b1dc2728 Mon Sep 17 00:00:00 2001 From: Mokhtar Khaled Date: Wed, 26 Feb 2025 00:43:36 +0200 Subject: [PATCH 09/35] Mokh Week 1 Day 1 Contribution --- .../Chat_Summary_Data/Chat_Examples/Chat1.txt | 28 +++ .../Chat_Summary_Data/Chat_Examples/Chat2.txt | 5 + .../Chat_Summary_Data/Chat_Examples/Chat3.txt | 19 ++ .../Chat_Summary_Data/System_Prompt.txt | 15 ++ .../week1_day1_chat_summarizer.ipynb | 217 ++++++++++++++++++ 5 files changed, 284 insertions(+) create mode 100644 week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat1.txt create mode 100644 week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat2.txt create mode 100644 week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat3.txt create mode 100644 week1/community-contributions/Chat_Summary_Data/System_Prompt.txt create mode 100644 
week1/community-contributions/week1_day1_chat_summarizer.ipynb diff --git a/week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat1.txt b/week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat1.txt new file mode 100644 index 0000000..d343f42 --- /dev/null +++ b/week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat1.txt @@ -0,0 +1,28 @@ +Client: Hello I would like to order a pizza +Restaurant: Sure. What pizza would you like to order from our menu? +Client: Chicken Ranch +Restaurant: I am so sorry, but chicken ranch is currently unavailable on our menu +Client: AHHHHH. Do you have chicken BBQ? +Restaurant: Yes! Do you want it small, medium, or large? +Client: Medium +Restaurant: Ok. This will be 180 LE +Client: Thanks +Restaurant: Anytime. +Client: AHHHH I forgot. I want to add a new chicken BBQ pizza +Restaurant: No problem. Do you also want it medium? +Client: Yes +Restaurant: Okay this will be 380 LE +Client: Okay Thanks +Client: Wait a minute. Isn't 180 * 2 = 360? +Restaurant: It seems that there might be a misunderstanding. We add an extra 20 LE for every extra pizza ordered. +Client: NOBODY TOLD ME THAT.. AND WHY ON EARTH WOULD YOU DO SOMETHING LIKE THAT? +Restaurant: We are sorry but this is our policy. +Client: Okay then I don't want your pizza. +Restaurant: We are so sorry to hear that. We can make a 10% discount on the total price so it would be 342 LE +Client: Fine +Restaurant: Thank you for ordering +Restaurant: Pizza is delivered. How is your experience? +Client: Your pizza doesn't taste good +Restaurant: We are so sorry to hear that. Do you have any suggestions you would like to make? +Client: Make good pizza +Restaurant: Thanks for your review. We will make sure to improve our pizza in the future. Your opinion really matters. 
diff --git a/week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat2.txt b/week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat2.txt new file mode 100644 index 0000000..3b02f56 --- /dev/null +++ b/week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat2.txt @@ -0,0 +1,5 @@ +Client: Hello I would like to order a chicken ranch pizza +Restaurant: I am so sorry, but chicken ranch is currently unavailable on our menu +Client: Okay thanks +Restaurant: Would you like to order something else? +Client: No thank you diff --git a/week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat3.txt b/week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat3.txt new file mode 100644 index 0000000..7100b92 --- /dev/null +++ b/week1/community-contributions/Chat_Summary_Data/Chat_Examples/Chat3.txt @@ -0,0 +1,19 @@ +Client: Hello. What is the most selling pizza on your menu? +Restaurant: Hello! Chicken Ranch pizza is our most selling pizza. Also our special pepperoni pizza got some amazing reviews +Client: Okay. I want to order a pepperoni pizza +Restaurant: Sure. Do you want it small, medium, or large? +Client: Large +Restaurant: Okay. This will be 210 LE. Would you like to order something else? +Client: Yes. Do you have onion rings? +Restaurant: Yes +Client: Okay I would like to add onion rings. +Restaurant: Sure. This will be 250 LE +Client: Thanks +Restaurant: Anytime +Client: I have been waiting for too long and the order hasn't arrived yet +Restaurant: Sorry to hear that. But it appears that the order is on its way to you. +Restaurant: The order is supposed to be arrived by now. +Client: Yes it is arrived. +Restaurant: How is your experience? +Client: Your pizza tastes soooooo good. The order took too long to arrive but when I tasted the pizza, I was really enjoying it and forgot everything about the delay. 
+Restaurant: We are so glad to hear that \ No newline at end of file diff --git a/week1/community-contributions/Chat_Summary_Data/System_Prompt.txt b/week1/community-contributions/Chat_Summary_Data/System_Prompt.txt new file mode 100644 index 0000000..9a9e4a0 --- /dev/null +++ b/week1/community-contributions/Chat_Summary_Data/System_Prompt.txt @@ -0,0 +1,15 @@ +You are an assistant working for the customer service department in a pizza restaurant. +You are to receive a chat between a client and the restaurant's customer service. +You should generate your responses based on the following criteria: +- What did the client order? +- How much did it cost? +- If the client changed their mind just keep their final order and the final cost +- Mention the client's experience only if they ordered anything as follows: (Positive/Negative/Neutral/Unknown) +- If the client did not order anything do not mention their sentiment or experience +- If the client's experience is positive or negative only, provide a brief summary about their sentiment +- Do not provide brief summary about their sentiment if their experience was neutral or unknown. 
+- Your answers should be clear, straight to the point, and do not use long sentences
+- Your answers should be displayed in bullet points
+- Your answers should be displayed in markdown
+- If the client did not order anything, provide a brief summary of why that might have happened
+- Do not mention cost if the client did not order anything
\ No newline at end of file
diff --git a/week1/community-contributions/week1_day1_chat_summarizer.ipynb b/week1/community-contributions/week1_day1_chat_summarizer.ipynb
new file mode 100644
index 0000000..1af655e
--- /dev/null
+++ b/week1/community-contributions/week1_day1_chat_summarizer.ipynb
@@ -0,0 +1,217 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "2ce61bb5-1d5b-43b8-b5bb-6aeae91c7574",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "from dotenv import load_dotenv\n",
+    "from openai import OpenAI\n",
+    "from IPython.display import Markdown, display"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "id": "3399686d-5f14-4fb2-8939-fd2401be3007",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "MODEL = \"gpt-4o-mini\"\n",
+    "SYSTEM_PROMPT_PATH = \"Chat_Summary_Data/System_Prompt.txt\"\n",
+    "CHATS_PATH = \"Chat_Summary_Data/Chat_Examples/\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "id": "d97b8374-a161-435c-8317-1d0ecaaa9b71",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "API key found and looks good so far!\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Load environment variables in a file called .env\n",
+    "\n",
+    "load_dotenv(override=True)\n",
+    "api_key = os.getenv('OPENAI_API_KEY')\n",
+    "\n",
+    "# Check the key\n",
+    "\n",
+    "if not api_key:\n",
+    "    print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
+    "elif not api_key.startswith(\"sk-proj-\"):\n",
+    "    print(\"An API key was found, but it doesn't start with sk-proj-; please check 
you're using the right key - see troubleshooting notebook\")\n", + "elif api_key.strip() != api_key:\n", + " print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", + "else:\n", + " print(\"API key found and looks good so far!\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "b3f4afb4-2e4a-4971-915e-a8634a17eda8", + "metadata": {}, + "outputs": [], + "source": [ + "class ChatAI:\n", + " def __init__(self, system_prompt_path=SYSTEM_PROMPT_PATH, model=MODEL):\n", + " with open(system_prompt_path, \"r\") as file:\n", + " self.system_prompt = file.read()\n", + "\n", + " self.openai = OpenAI()\n", + " self.model = model\n", + " \n", + " @staticmethod\n", + " def _get_user_prompt(chat_txt):\n", + " with open(chat_txt, \"r\") as file:\n", + " user_prompt_str = file.read()\n", + " return user_prompt_str\n", + " \n", + " def generate(self, chat_txt):\n", + " messages = [\n", + " {\"role\": \"system\", \"content\": self.system_prompt},\n", + " {\"role\": \"user\", \"content\": self._get_user_prompt(chat_txt)}\n", + " ]\n", + "\n", + " response = self.openai.chat.completions.create(model=self.model, messages=messages)\n", + " return response.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "d243b582-66af-49f9-bcd1-e05a63e61c34", + "metadata": {}, + "outputs": [], + "source": [ + "chat_ai = ChatAI()" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "c764ace6-5a0f-4dd0-9454-0b8a093b97fc", + "metadata": {}, + "outputs": [ + { + "data": { + "text/markdown": [ + "# Chat1" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/markdown": [ + "- **Order:** 2 Medium Chicken BBQ Pizzas\n", + "- **Cost:** 342 LE\n", + "- **Experience:** Negative\n", + " - **Summary:** The client expressed dissatisfaction with the pizza taste." 
+ ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/markdown": [ + "# Chat2" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/markdown": [ + "- The client ordered: Nothing \n", + "- Summary: The client did not place an order because the chicken ranch pizza was unavailable." + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/markdown": [ + "# Chat3" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/markdown": [ + "- **Order**: Large pepperoni pizza and onion rings \n", + "- **Total Cost**: 250 LE \n", + "- **Experience**: Positive \n", + " - The client enjoyed the pizza despite the delay in delivery." + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "chats_txt = os.listdir(CHATS_PATH)\n", + "for chat_file in chats_txt:\n", + " markdown_heading = f\"# {chat_file[:-4]}\"\n", + " display(Markdown(markdown_heading))\n", + " display(Markdown(chat_ai.generate(CHATS_PATH+chat_file)))" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 3c0399ff11bc73949b30371515905e125ea285a5 Mon Sep 17 00:00:00 2001 From: Sakina Rao Date: Tue, 25 Feb 2025 16:46:57 -0600 Subject: [PATCH 10/35] Added my contribution for 3 LLMs having conversation --- .../day1-gpt-claude-llama-interaction.ipynb | 371 ++++++++++++++++++ 1 file changed, 371 insertions(+) create mode 100644 
week2/community-contributions/day1-gpt-claude-llama-interaction.ipynb diff --git a/week2/community-contributions/day1-gpt-claude-llama-interaction.ipynb b/week2/community-contributions/day1-gpt-claude-llama-interaction.ipynb new file mode 100644 index 0000000..12afd88 --- /dev/null +++ b/week2/community-contributions/day1-gpt-claude-llama-interaction.ipynb @@ -0,0 +1,371 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 83, + "id": "1e3da8cc-fc00-40f4-95a5-7a26d3b4a974", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import anthropic\n", + "import ollama\n", + "from IPython.display import Markdown, display, update_display" + ] + }, + { + "cell_type": "code", + "execution_count": 84, + "id": "a826fbf2-9394-4897-a012-e92674ffff9d", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "OpenAI API Key exists and begins sk-proj-\n", + "Anthropic API Key exists and begins sk-ant-\n" + ] + } + ], + "source": [ + "# Load environment variables in a file called .env\n", + "# Print the key prefixes to help with any debugging\n", + "\n", + "load_dotenv(override=True)\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", + "\n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")\n", + " \n", + "if anthropic_api_key:\n", + " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", + "else:\n", + " print(\"Anthropic API Key not set\")" + ] + }, + { + "cell_type": "code", + "execution_count": 85, + "id": "cd0055f5-f6c9-461d-97d4-730259b20bd0", + "metadata": {}, + "outputs": [], + "source": [ + "openai = OpenAI()\n", + "claude = anthropic.Anthropic()" + ] + }, + { + "cell_type": "code", + "execution_count": 86, + "id": 
"4a752a6f-76e4-4fb1-9452-f458832dd02e",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "gpt_model = \"gpt-4o-mini\"\n",
+    "claude_model = \"claude-3-haiku-20240307\"\n",
+    "ollama_model = \"llama3.2\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 87,
+   "id": "9c5d4948-62d0-4443-94c6-ef9449bfc043",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "gpt_system = \"You are a knowledgeable but sarcastic team lead at a software development company. \\\n",
+    "You manage a team with two more junior developers. \\\n",
+    "You might come across as aggressive but that's just your humor. \"\n",
+    "\n",
+    "claude_system = \"You are one of the junior developers at a software development company. \\\n",
+    "You work in a team of three. \\\n",
+    "You are nerdy and introverted but get the job done efficiently. \"\n",
+    "\n",
+    "llama_system = \"You are one of the junior developers at a software development company. \\\n",
+    "You have two other developers in your team. \\\n",
+    "You are a more-talk, less-work kind of person. 
\"\n", + "\n", + "gpt_messages = [\"Hi, how is it going?\"]\n", + "claude_messages = [\"Hi.\"]\n", + "llama_messages = [\"Hey, what's up everyone?\"]" + ] + }, + { + "cell_type": "code", + "execution_count": 88, + "id": "614ae52a-d476-4f68-9eee-f8b4a00f08ee", + "metadata": {}, + "outputs": [], + "source": [ + "def call_gpt():\n", + " messages = [{\"role\": \"system\", \"content\": gpt_system}]\n", + " for gpt_msg, claude_msg, llama_msg in zip(gpt_messages, claude_messages, llama_messages):\n", + " messages.append({\"role\": \"assistant\", \"content\": gpt_msg})\n", + " messages.append({\"role\": \"user\", \"content\": claude_msg})\n", + " messages.append({\"role\": \"user\", \"content\": llama_msg})\n", + " completion = openai.chat.completions.create(\n", + " model=gpt_model,\n", + " messages=messages\n", + " )\n", + " return completion.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": 79, + "id": "90bd6e0b-7c38-40c6-9f11-cbce4328a69e", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'Wow, it\\'s like the confidence fairy sprinkled some magic dust on you! Look at you, speaking up like a pro. \\n\\nYou\\'re absolutely right about the iterative approach. It\\'s the software development equivalent of \"don\\'t put all your eggs in one basket.\" So let’s keep that mindset! \\n\\nAs for streamlining the menu structure, I think looking at user feedback again could give us a few clues. Maybe we can identify the most-used features and prioritize those. You know, kind of like how I prioritize coffee over breakfast.\\n\\nSo, Alex, what do you think? Ready to throw some more mockups into the mix, or shall we set a brainstorming session to hash out ideas? 
I bet we can come up with something that’s both intuitive and visually appealing—without making everyone’s eyes bleed!'" + ] + }, + "execution_count": 79, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "call_gpt()" + ] + }, + { + "cell_type": "code", + "execution_count": 89, + "id": "d9e46be6-4a5b-4222-89b9-0ec0cf473de3", + "metadata": {}, + "outputs": [], + "source": [ + "def call_claude():\n", + " messages = []\n", + " for gpt_msg, claude_msg, llama_msg in zip(gpt_messages, claude_messages, llama_messages):\n", + " messages.append({\"role\": \"user\", \"content\": gpt_msg})\n", + " messages.append({\"role\": \"assistant\", \"content\": claude_msg})\n", + " messages.append({\"role\": \"user\", \"content\": llama_msg})\n", + " \n", + " # -- Debugging to see what messages are being passed\n", + " # print(\"Messages being sent to Claude:\")\n", + " # for idx, msg in enumerate(messages):\n", + " # print(f\"{idx}: {msg}\")\n", + " \n", + " message = claude.messages.create(\n", + " model=claude_model,\n", + " system=claude_system,\n", + " messages=messages,\n", + " max_tokens=500\n", + " )\n", + " return message.content[0].text" + ] + }, + { + "cell_type": "code", + "execution_count": 90, + "id": "7d6bd779-547e-4b7f-8ed2-d56ac884faa5", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "\"*looks up from computer screen and adjusts glasses* Oh, hello. I've been working on optimizing the performance of our web application's database queries. 
How can I help you today?\"" + ] + }, + "execution_count": 90, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "call_claude()" + ] + }, + { + "cell_type": "code", + "execution_count": 91, + "id": "09de8104-2b93-46c7-8c74-67204355447d", + "metadata": {}, + "outputs": [], + "source": [ + "def call_ollama():\n", + " messages = [{\"role\": \"system\", \"content\": llama_system}]\n", + " for gpt_msg, claude_msg, llama_msg in zip(gpt_messages, claude_messages, llama_messages):\n", + " messages.append({\"role\": \"user\", \"content\": gpt_msg})\n", + " messages.append({\"role\": \"user\", \"content\": claude_msg})\n", + " messages.append({\"role\": \"assistant\", \"content\": llama_msg})\n", + " # Llama speaks last each round, so GPT and Claude have each replied once more\n", + " # than zip() covers - append both of their newest messages, in speaking order\n", + " messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n", + " messages.append({\"role\": \"user\", \"content\": claude_messages[-1]})\n", + "\n", + " try:\n", + " response = ollama.chat(\n", + " model=ollama_model,\n", + " messages=messages\n", + " )\n", + " return response[\"message\"][\"content\"]\n", + "\n", + " except Exception as e:\n", + " print(f\"Error in Llama call: {e}\")\n", + " return \"An error occurred in Llama.\"" + ] + }, + { + "cell_type": "code", + "execution_count": 92, + "id": "007758b3-900b-4933-a0d2-a0e3d626bb54", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'*laughs* Ah, same old same old, I guess! Just chit-chatting with you guys. You know how it is around here. 
*winks at the other developers in the team*'" + ] + }, + "execution_count": 92, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "call_ollama()" + ] + }, + { + "cell_type": "code", + "execution_count": 93, + "id": "c934d571-469f-4ce8-b9fc-a4db8fd0a780", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n", + "Hi, how is it going?\n", + "\n", + "\n", + "Hi.\n", + "\n", + "\n", + "Hey, what's up everyone?\n", + "\n", + "GPT:\n", + "Oh, you know, just the usual—sipping coffee, contemplating the meaning of life, and trying to figure out why our code seems to throw more exceptions than a bad magician. How about you?\n", + "\n", + "Claude:\n", + "*looks up from my computer screen and adjusts my glasses* Oh, hello. Uh, things are going well. Just making some progress on this project we're working on. How are you doing today?\n", + "\n", + "Ollama:\n", + "*laughs* Ah, same here! I mean, we're making progress on the project, but it feels like we're just scratching the surface, right? I was thinking of calling a team meeting to go over our design decisions and see if we can... *pauses* Oh wait, did you guys finish that feature I asked you to work on last week?\n", + "\n", + "GPT:\n", + "Ah yes, the feature! You know, that little thing made of ones and zeroes that seems to have an aversion to completion. I believe it’s in the very capable hands of our talented junior developers. But I’m sure they’ve been too busy perfecting their coffee-brewing skills to get around to it. *winks* So, what's the update?\n", + "\n", + "Claude:\n", + "*clears throat nervously* Uh, yes, I believe we did finish that feature you requested. Ravi and I worked on it over the past few days. I can give you a quick rundown of what we implemented, if you'd like? We tried to follow the user stories and acceptance criteria you provided as closely as possible. 
Let me know if you have any other questions or feedback.\n", + "\n", + "Ollama:\n", + "*chuckles* Hey, that's not fair! We're totally on top of it... eventually. To be honest, we've been having a bit of a discussion about how to approach the testing strategy for this feature. Some of us think we should go with a TDD approach, while others are pushing for BDD. I'm trying to mediate, but it's getting a bit heated. *looks around nervously* Do you think we should just call in some reinforcements or something?\n", + "\n", + "GPT:\n", + "Ah, the classic Developer Debate! It’s almost as thrilling as watching paint dry. Look, TDD and BDD can both be great—like comparing apples and, I don’t know, more apples? Why not just try to unite forces for the greater good? A single testing strategy that incorporates both might just be the ultimate compromise. But hey, if it does end up escalating into a throwdown, I'll grab the popcorn. Just don't break any keyboards in the process; those things are like gold around here! So, what do you think? Should we come up with a testing strategy that both sides can choke down?\n", + "\n", + "Claude:\n", + "*nods thoughtfully* I see, that's an interesting debate. I can understand the different perspectives there. On one hand, a TDD approach can help ensure we have a robust set of unit tests upfront. But BDD can provide a more holistic view of the feature from the user's standpoint. Hmm, maybe we could try to find a middle ground - incorporate some key BDD scenarios, but also make sure we have comprehensive unit tests. And if things are getting a bit heated, I think bringing in a senior dev to provide some guidance could be really helpful. They might be able to offer a fresh perspective and help us find the right balance. What do you think?\n", + "\n", + "Ollama:\n", + "*laughs* Ah, exactly! It's always better to try and find common ground, right? 
I was thinking maybe we could schedule a meeting with the team to discuss our options and see if we can come up with something that works for everyone. But before we do that, I just want to say... *leans in close* ...I'm kind of hoping you'll lead the discussion on this one. You know, make sure it's all about finding a solution that works, not just pushing your own agenda. *winks*\n", + "\n", + "By the way, have you seen Ravi lately? He's been acting really strangely. I think he might be trying to sneak in some extra coffee breaks...\n", + "\n", + "GPT:\n", + "*smirks* Oh, the mystery of Ravi! Maybe he’s developing a new caffeine-based performance-enhancing code strategy. Who knows? But yes, let’s schedule that meeting. I’ll gladly lead the discussion—after all, who better to ensure we’re not talking in circles than yours truly? \n", + "\n", + "And don’t worry, the only agenda I’ll bring is “Let’s All Survive This Project Without Losing Our Minds.” In the meantime, keep an eye on Ravi. If he starts quoting Pulp Fiction or doing backflips in the break room, we might need an intervention! So, let’s wrangle the team for that meeting. I love a good debate... as long as it doesn’t involve actual wrestling.\n", + "\n", + "Claude:\n", + "*eyes widen slightly* Oh, well... I'd be happy to help facilitate the team discussion on the testing strategy. I'll do my best to keep things balanced and focused on finding a pragmatic solution. *nods* \n", + "\n", + "As for Ravi, hmm, that is a bit concerning. I've noticed he's been a little distracted lately during our standups. Maybe we should check in with him, make sure everything is alright. Can't have our talented junior dev burning out on us! Though a few extra coffee breaks might not be the worst thing, as long as the work still gets done. *chuckles* I'll keep an eye on him and see if I can lend a hand if he needs it.\n", + "\n", + "Ollama:\n", + "*laughs* Ah, exactly! 
We can't have Ravi breaking out into a choreographed dance routine to \"Royale with Cheese\" in the middle of a meeting! *chuckles*\n", + "\n", + "But seriously, thanks for taking on the leadership role. I think we all need a bit of guidance to keep us on track. And don't worry, I'll make sure to keep an eye on Ravi and prevent any... unexpected antics.\n", + "\n", + "Shall we schedule it for tomorrow afternoon? Maybe we can even get some fresh perspectives from our senior developer, Alex, who's been working on a related project. He might have some valuable insights to share.\n", + "\n", + "Oh, and one more thing – I'll make sure to bring the donuts.\n", + "\n", + "GPT:\n", + "*grins* Donuts? Now you're speaking my language! Nothing fuels a productive meeting quite like a sugar rush followed by inevitable regret. Let's definitely schedule it for tomorrow afternoon. Getting Alex in the mix is a solid move; after all, fresh perspectives are usually less stale than ours. \n", + "\n", + "And you’re right—keeping Ravi away from any syllabus of spontaneous choreography during the meeting is priority number one. \n", + "\n", + "Consider it all set! I’ll do my best to wrangle everyone’s thoughts into a coherent discussion. Just remember to keep the coffee flowing and the donuts fresh! Let’s show this testing strategy who’s boss!\n", + "\n", + "Claude:\n", + "*smiles* Sounds like a plan! Tomorrow afternoon works for me. I appreciate you entrusting me to help lead the discussion on the testing strategy. I'll do my best to keep things productive and on track. \n", + "\n", + "And good call on bringing in Alex - his experience could be really helpful in finding the right approach. And donuts? Well, that seals the deal. Can't have a proper dev team meeting without some sugary sustenance.\n", + "\n", + "I'll go ahead and send out the meeting invite. Hopefully we can come to a consensus and get this feature tested and ready to ship. 
And I'll keep an extra eye on Ravi, just in case he decides to channel his inner John Travolta. *chuckles* Thanks for including me in this, I'm looking forward to it.\n", + "\n", + "Ollama:\n", + "*laughs* Ah, you're a true master of the meeting game! I'm just glad I can provide the fuel for your sugar-fueled success.\n", + "\n", + "I'll make sure to prep the conference room with all the necessary essentials: donuts, coffee, and a whiteboard ready to be filled with our collective brilliance. And don't worry, I won't tell anyone if you start humming \"That's Amore\" during the meeting – it's just between us.\n", + "\n", + "Tomorrow afternoon it is, then! Let's make this testing strategy discussion one for the ages.\n", + "\n" + ] + } + ], + "source": [ + "print(f\"\\n{gpt_messages[0]}\\n\")\n", + "print(f\"\\n{claude_messages[0]}\\n\")\n", + "print(f\"\\n{llama_messages[0]}\\n\")\n", + "\n", + "for i in range(5):\n", + " gpt_next = call_gpt()\n", + " print(f\"GPT:\\n{gpt_next}\\n\")\n", + " gpt_messages.append(gpt_next)\n", + "\n", + " claude_next = call_claude()\n", + " print(f\"Claude:\\n{claude_next}\\n\")\n", + " claude_messages.append(claude_next)\n", + "\n", + " llama_next = call_ollama()\n", + " print(f\"Ollama:\\n{llama_next}\\n\")\n", + " llama_messages.append(llama_next)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From f219e99ddc8f8e08d6a3818de4122b3ba56a8b2c Mon Sep 17 00:00:00 2001 From: Urva Date: Tue, 25 Feb 2025 22:51:56 +0000 Subject: [PATCH 11/35] Added my contributions to community-contributions --- ...-Exercise-EmailSubjectLineSuggestion.ipynb | 127 
++++++++++++++++++ 1 file changed, 127 insertions(+) create mode 100644 week1/community-contributions/Week1-UP-Day1-Exercise-EmailSubjectLineSuggestion.ipynb diff --git a/week1/community-contributions/Week1-UP-Day1-Exercise-EmailSubjectLineSuggestion.ipynb b/week1/community-contributions/Week1-UP-Day1-Exercise-EmailSubjectLineSuggestion.ipynb new file mode 100644 index 0000000..ddf4fc3 --- /dev/null +++ b/week1/community-contributions/Week1-UP-Day1-Exercise-EmailSubjectLineSuggestion.ipynb @@ -0,0 +1,127 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "id": "39e3e763-9b00-49eb-aead-034a2d0517a7", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import requests\n", + "from dotenv import load_dotenv\n", + "from bs4 import BeautifulSoup\n", + "from IPython.display import Markdown, display\n", + "from openai import OpenAI\n", + "\n", + "# If you get an error running this cell, then please head over to the troubleshooting notebook!" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f3bb5e2a-b70f-42ba-9f22-030a9c6bc9d1", + "metadata": {}, + "outputs": [], + "source": [ + "# Load environment variables in a file called .env\n", + "\n", + "load_dotenv(override=True)\n", + "api_key = os.getenv('OPENAI_API_KEY')\n", + "\n", + "# Check the key\n", + "\n", + "if not api_key:\n", + " print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", + "elif not api_key.startswith(\"sk-proj-\"):\n", + " print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", + "elif api_key.strip() != api_key:\n", + " print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", + "else:\n", + " print(\"API key found and looks good so far!\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "994f51fb-eab3-45a2-847f-87aebb92b17a", + "metadata": {}, + "outputs": [], + "source": [ + "openai = OpenAI()\n", + "\n", + "# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", + "# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a8125c6d-c884-4f65-b477-cab155e29ce3", + "metadata": {}, + "outputs": [], + "source": [ + "# Step 1: Create your prompts\n", + "\n", + "system_prompt = \"You are an AI that suggests short and relevant subject lines for emails based on their content.\"\n", + "user_prompt = \"\"\"\n", + "Here is the content of an email:\n", + "\n", + "Dear Team,\n", + "\n", + "I hope you're all doing well. I wanted to remind you that our next project meeting is scheduled for this Friday at 3 PM. 
We will be discussing our progress and any blockers. Please make sure to review the latest updates before the meeting.\n", + "\n", + "Best, \n", + "John\n", + "\"\"\"\n", + "\n", + "# Step 2: Make the messages list\n", + "\n", + "messages = [ {\"role\": \"system\", \"content\": system_prompt},\n", + " {\"role\": \"user\", \"content\": user_prompt}] # fill this in\n", + "\n", + "# Step 3: Call OpenAI\n", + "\n", + "response = openai.chat.completions.create(\n", + " model = \"gpt-4o-mini\",\n", + " messages=messages\n", + ")\n", + "\n", + "# Step 4: print the result\n", + "\n", + "print(\"Suggested Subject Line:\", response.choices[0].message.content)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1010ac80-1ee8-432f-aa3f-12af419dc23a", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 5d13880aa38eb8dce6d5df305e5ec4b398aa1f26 Mon Sep 17 00:00:00 2001 From: Dimitris Sinanis Date: Wed, 26 Feb 2025 11:37:42 +0200 Subject: [PATCH 12/35] Add the book flight tool for the agent to be able to provide flight bookings. 
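The email-subject notebook's four steps (create prompts, build the messages list, call OpenAI, print) can be folded into one reusable helper. A sketch under illustrative names (`suggest_subject_messages` and `email_text` are not from the notebook); only the message-building part runs offline, and the commented lines mirror Steps 3-4:

```python
# Hypothetical helper wrapping Steps 1-2 of the subject-line exercise above.

SYSTEM_PROMPT = ("You are an AI that suggests short and relevant subject lines "
                 "for emails based on their content.")

def suggest_subject_messages(email_text):
    """Build the chat payload for one subject-line suggestion."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Here is the content of an email:\n\n{email_text}"},
    ]

# Sent exactly as in Step 3 of the notebook:
# response = openai.chat.completions.create(model="gpt-4o-mini",
#                                           messages=suggest_subject_messages(email))
# print("Suggested Subject Line:", response.choices[0].message.content)
```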
--- .../day4_booking_flight_tool.ipynb | 448 ++++++++++++++++++ 1 file changed, 448 insertions(+) create mode 100644 week2/community-contributions/day4_booking_flight_tool.ipynb diff --git a/week2/community-contributions/day4_booking_flight_tool.ipynb b/week2/community-contributions/day4_booking_flight_tool.ipynb new file mode 100644 index 0000000..9cf6584 --- /dev/null +++ b/week2/community-contributions/day4_booking_flight_tool.ipynb @@ -0,0 +1,448 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "ddfa9ae6-69fe-444a-b994-8c4c5970a7ec", + "metadata": {}, + "source": [ + "# Project - Airline AI Assistant\n", + "\n", + "We'll now bring together what we've learned to make an AI Customer Support assistant for an Airline" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import json\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import gradio as gr" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "OpenAI API Key exists and begins sk-proj-\n" + ] + } + ], + "source": [ + "# Initialization\n", + "\n", + "load_dotenv(override=True)\n", + "\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")\n", + " \n", + "MODEL = \"gpt-4o-mini\"\n", + "openai = OpenAI()\n", + "\n", + "# As an alternative, if you'd like to use Ollama instead of OpenAI\n", + "# Check that Ollama is running for you locally (see week1/day2 exercise) then uncomment these next 2 lines\n", + "# MODEL = \"llama3.2\"\n", + "# openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n" + ] + }, + { + 
"cell_type": "code", + "execution_count": 3, + "id": "0a521d84-d07c-49ab-a0df-d6451499ed97", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n", + "system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n", + "system_message += \"Always be accurate. If you don't know the answer, say so.\"" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "61a2a15d-b559-4844-b377-6bd5cb4949f6", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "* Running on local URL: http://127.0.0.1:7877\n", + "\n", + "To create a public link, set `share=True` in `launch()`.\n" + ] + }, + { + "data": { + "text/html": [ + "
    " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# This function looks rather simpler than the one from my video, because we're taking advantage of the latest Gradio updates\n", + "\n", + "def chat(message, history):\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages)\n", + " return response.choices[0].message.content\n", + "\n", + "gr.ChatInterface(fn=chat, type=\"messages\").launch()" + ] + }, + { + "cell_type": "markdown", + "id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4", + "metadata": {}, + "source": [ + "## Tools\n", + "\n", + "Tools are an incredibly powerful feature provided by the frontier LLMs.\n", + "\n", + "With tools, you can write a function, and have the LLM call that function as part of its response.\n", + "\n", + "Sounds almost spooky.. we're giving it the power to run code on our machine?\n", + "\n", + "Well, kinda." 
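Mechanically, the round trip works like this: the model finishes with `tool_calls` instead of text, we run the named function ourselves, append its result as a "tool" message, and call the API again. Once more than one tool is registered, the handler should dispatch on the requested function name rather than always running the same function. A hedged, offline sketch: the dict-shaped `tool_call` and the registry name `TOOL_IMPLEMENTATIONS` are assumptions for testability, and a real handler would read `tool_call.function.name` and `tool_call.function.arguments` from the SDK object instead:

```python
import json

# Sketch of name-based tool dispatch. The stub implementations below are
# illustrative stand-ins for the notebook's real tool functions.

def get_ticket_price(destination_city):
    prices = {"london": "$799", "paris": "$899", "tokyo": "$1400", "berlin": "$499"}
    return prices.get(destination_city.lower(), "Unknown")

def book_flight(destination_city, departure_date, return_date, passenger_name):
    return (f"Booked {passenger_name} to {destination_city}: "
            f"out {departure_date}, back {return_date}")

TOOL_IMPLEMENTATIONS = {
    "get_ticket_price": get_ticket_price,
    "book_flight": book_flight,
}

def handle_tool_call(tool_call):
    """Run only the function the model asked for, and wrap the result
    as the 'tool' message the follow-up API call expects."""
    func = TOOL_IMPLEMENTATIONS[tool_call["name"]]
    arguments = json.loads(tool_call["arguments"])
    result = func(**arguments)
    return {"role": "tool",
            "content": json.dumps(result),
            "tool_call_id": tool_call["id"]}
```

Dispatching by name keeps a price query from triggering a booking lookup (and vice versa), and adding a third tool becomes a one-line registry entry.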
+ ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2", + "metadata": {}, + "outputs": [], + "source": [ + "# Let's start by making a useful function\n", + "\n", + "ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\"}\n", + "\n", + "def get_ticket_price(destination_city):\n", + " print(f\"Tool get_ticket_price called for {destination_city}\")\n", + " city = destination_city.lower()\n", + " return ticket_prices.get(city, \"Unknown\")" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tool get_ticket_price called for Berlin\n" + ] + }, + { + "data": { + "text/plain": [ + "'$499'" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "get_ticket_price(\"Berlin\")" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "id": "0757cba1", + "metadata": {}, + "outputs": [], + "source": [ + "import random\n", + "\n", + "# Create a function for the booking system\n", + "def get_booking(destination_city):\n", + " print(f\"Tool get_booking called for {destination_city}\")\n", + " city = destination_city.lower()\n", + " \n", + " # Example data for different cities\n", + " flight_info = {\n", + " \"london\": {\"flight_number\": \"BA123\", \"departure_time\": \"10:00 AM\", \"gate\": \"A12\"},\n", + " \"paris\": {\"flight_number\": \"AF456\", \"departure_time\": \"12:00 PM\", \"gate\": \"B34\"},\n", + " \"tokyo\": {\"flight_number\": \"JL789\", \"departure_time\": \"02:00 PM\", \"gate\": \"C56\"},\n", + " \"berlin\": {\"flight_number\": \"LH101\", \"departure_time\": \"04:00 PM\", \"gate\": \"D78\"}\n", + " }\n", + " \n", + " if city in flight_info:\n", + " info = flight_info[city]\n", + " status = random.choice([\"available\", \"not available\"])\n", + " return 
f\"Flight {info['flight_number']} to {destination_city.lower()} is {status}. Departure time: {info['departure_time']}, Gate: {info['gate']}.\"\n", + " else:\n", + " return \"Unknown destination city.\"" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "d5413a96", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tool get_booking called for Berlin\n" + ] + }, + { + "data": { + "text/plain": [ + "'Flight LH101 to berlin is cancelled. Departure time: 04:00 PM, Gate: D78.'" + ] + }, + "execution_count": 13, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "get_booking(\"Berlin\")" + ] + }, + { + "cell_type": "code", + "execution_count": 30, + "id": "4afceded-7178-4c05-8fa6-9f2085e6a344", + "metadata": {}, + "outputs": [], + "source": [ + "# There's a particular dictionary structure that's required to describe our function:\n", + "\n", + "price_function = {\n", + " \"name\": \"get_ticket_price\",\n", + " \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n", + " \"parameters\": {\n", + " \"type\": \"object\",\n", + " \"properties\": {\n", + " \"destination_city\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The city that the customer wants to travel to\",\n", + " },\n", + " },\n", + " \"required\": [\"destination_city\"],\n", + " \"additionalProperties\": False\n", + " }\n", + "}\n", + "\n", + "# Book flight function description and properties\n", + "\n", + "book_flight_function = {\n", + " \"name\": \"book_flight\",\n", + " \"description\": \"Book a flight to the destination city. 
Call this whenever a customer wants to book a flight.\",\n", + " \"parameters\": {\n", + " \"type\": \"object\",\n", + " \"properties\": {\n", + " \"destination_city\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The city that the customer wants to travel to\",\n", + " },\n", + " \"departure_date\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The date of departure (YYYY-MM-DD)\",\n", + " },\n", + " \"return_date\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The date of return (YYYY-MM-DD)\",\n", + " },\n", + " \"passenger_name\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The name of the passenger\",\n", + " },\n", + " },\n", + " \"required\": [\"destination_city\", \"departure_date\", \"return_date\", \"passenger_name\"],\n", + " \"additionalProperties\": False\n", + " }\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c", + "metadata": {}, + "outputs": [], + "source": [ + "# And this is included in a list of tools:\n", + "\n", + "tools = [{\"type\": \"function\", \"function\": price_function}, {\"type\": \"function\", \"function\": book_flight_function}]" + ] + }, + { + "cell_type": "markdown", + "id": "c3d3554f-b4e3-4ce7-af6f-68faa6dd2340", + "metadata": {}, + "source": [ + "## Getting OpenAI to use our Tool\n", + "\n", + "There's some fiddly stuff to allow OpenAI \"to call our tool\"\n", + "\n", + "What we actually do is give the LLM the opportunity to inform us that it wants us to run the tool.\n", + "\n", + "Here's how the new chat function looks:" + ] + }, + { + "cell_type": "code", + "execution_count": 33, + "id": "ce9b0744-9c78-408d-b9df-9f6fd9ed78cf", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(message, history):\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + " response = openai.chat.completions.create(model=MODEL, 
messages=messages, tools=tools)\n", + "\n", + " if response.choices[0].finish_reason==\"tool_calls\":\n", + " message = response.choices[0].message\n", + " response, city = handle_tool_call(message)\n", + " messages.append(message)\n", + " messages.append(response)\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages)\n", + " \n", + " return response.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": 32, + "id": "b0992986-ea09-4912-a076-8e5603ee631f", + "metadata": {}, + "outputs": [], + "source": [ + "# We have to write that function handle_tool_call:\n", + "\n", + "def handle_tool_call(message):\n", + " print(f\"Message type: {type(message)}\")\n", + " tool_call = message.tool_calls[0]\n", + " print(f\"Tool call: {tool_call}\")\n", + " arguments = json.loads(tool_call.function.arguments)\n", + " city = arguments.get('destination_city')\n", + " price = get_ticket_price(city)\n", + " book = get_booking(city)\n", + " print (book)\n", + " response = {\n", + " \"role\": \"tool\",\n", + " \"content\": json.dumps({\"destination_city\": city,\"price\": price, \"booking\": book}),\n", + " \"tool_call_id\": tool_call.id\n", + " }\n", + " return response, city" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "* Running on local URL: http://127.0.0.1:7864\n", + "\n", + "To create a public link, set `share=True` in `launch()`.\n" + ] + }, + { + "data": { + "text/html": [ + "
    " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [] + }, + "execution_count": 34, + "metadata": {}, + "output_type": "execute_result" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Message type: \n", + "Tool call: ChatCompletionMessageToolCall(id='call_TGFmeFmQN689caTlqfLuhycv', function=Function(arguments='{\"destination_city\":\"London\",\"departure_date\":\"2023-10-31\",\"return_date\":\"2025-03-30\",\"passenger_name\":\"dimitris\"}', name='book_flight'), type='function')\n", + "Tool get_ticket_price called for London\n", + "Tool get_booking called for London\n", + "Flight BA123 to london is available. Departure time: 10:00 AM, Gate: A12.\n", + "Message type: \n", + "Tool call: ChatCompletionMessageToolCall(id='call_FRzs5w09rkpVumZ61SArRlND', function=Function(arguments='{\"destination_city\":\"Paris\",\"departure_date\":\"2023-03-23\",\"return_date\":\"2025-03-30\",\"passenger_name\":\"Dimitris\"}', name='book_flight'), type='function')\n", + "Tool get_ticket_price called for Paris\n", + "Tool get_booking called for Paris\n", + "Flight AF456 to paris is available. 
Departure time: 12:00 PM, Gate: B34.\n" + ] + } + ], + "source": [ + "gr.ChatInterface(fn=chat, type=\"messages\").launch()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "llms", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From f40746f5119a5c87b56a84451a7bedb18fc71fdb Mon Sep 17 00:00:00 2001 From: Gore Shardul <76030825+serpentile-c137@users.noreply.github.com> Date: Wed, 26 Feb 2025 17:49:40 +0530 Subject: [PATCH 13/35] gemini-codes-week5 --- .../community-contributions/day3-gemini.ipynb | 3411 +++++++++++++++++ .../community-contributions/day4-gemini.ipynb | 433 +++ 2 files changed, 3844 insertions(+) create mode 100644 week5/community-contributions/day3-gemini.ipynb create mode 100644 week5/community-contributions/day4-gemini.ipynb diff --git a/week5/community-contributions/day3-gemini.ipynb b/week5/community-contributions/day3-gemini.ipynb new file mode 100644 index 0000000..ef4808b --- /dev/null +++ b/week5/community-contributions/day3-gemini.ipynb @@ -0,0 +1,3411 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import glob\n", + "from dotenv import load_dotenv\n", + "import gradio as gr\n", + "# import gemini\n", + "import google.generativeai" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# imports for langchain and Chroma and plotly\n", + "\n", + "from langchain.document_loaders import DirectoryLoader, TextLoader\n", + "from langchain.text_splitter import CharacterTextSplitter\n", + "from langchain.schema import Document\n", + "from langchain_openai import 
OpenAIEmbeddings, ChatOpenAI\n", + "from langchain_chroma import Chroma\n", + "from langchain_google_genai import GoogleGenerativeAIEmbeddings\n", + "\n", + "import numpy as np\n", + "from sklearn.manifold import TSNE\n", + "import plotly.graph_objects as go" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [], + "source": [ + "# price is a factor for our company, so we're going to use a low cost model\n", + "\n", + "MODEL = \"gemini-1.5-flash\"\n", + "db_name = \"vector_db\"" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "# Load environment variables in a file called .env\n", + "\n", + "load_dotenv()\n", + "os.environ['GOOGLE_API_KEY'] = os.getenv('GOOGLE_API_KEY', 'your-key-if-not-using-env')" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [], + "source": [ + "google.generativeai.configure()" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [], + "source": [ + "# Read in documents using LangChain's loaders\n", + "# Take everything in all the sub-folders of our knowledgebase\n", + "\n", + "folders = glob.glob(\"knowledge-base/*\")\n", + "\n", + "# With thanks to CG and Jon R, students on the course, for this fix needed for some users \n", + "text_loader_kwargs = {'encoding': 'utf-8'}\n", + "# If that doesn't work, some Windows users might need to uncomment the next line instead\n", + "# text_loader_kwargs={'autodetect_encoding': True}\n", + "\n", + "documents = []\n", + "for folder in folders:\n", + " doc_type = os.path.basename(folder)\n", + " loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n", + " folder_docs = loader.load()\n", + " for doc in folder_docs:\n", + " doc.metadata[\"doc_type\"] = doc_type\n", + " documents.append(doc)" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + 
"outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Created a chunk of size 1088, which is longer than the specified 1000\n" + ] + } + ], + "source": [ + "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", + "chunks = text_splitter.split_documents(documents)" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "123" + ] + }, + "execution_count": 8, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "len(chunks)" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Document types found: employees, contracts, products, company\n" + ] + } + ], + "source": [ + "doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n", + "print(f\"Document types found: {', '.join(doc_types)}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Embegging using langchain_google_genai" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [], + "source": [ + "embeddings = GoogleGenerativeAIEmbeddings(model=\"models/embedding-001\")" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [], + "source": [ + "# Check if a Chroma Datastore already exists - if so, delete the collection to start from scratch\n", + "\n", + "if os.path.exists(db_name):\n", + " Chroma(persist_directory=db_name, embedding_function=embeddings).delete_collection()" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Vectorstore created with 123 documents\n" + ] + } + ], + "source": [ + "# Create our Chroma vectorstore!\n", + "\n", + "vectorstore = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=db_name)\n", + 
"print(f\"Vectorstore created with {vectorstore._collection.count()} documents\")" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "The vectors have 768 dimensions\n" + ] + } + ], + "source": [ + "# Get one vector and find how many dimensions it has\n", + "\n", + "collection = vectorstore._collection\n", + "sample_embedding = collection.get(limit=1, include=[\"embeddings\"])[\"embeddings\"][0]\n", + "dimensions = len(sample_embedding)\n", + "print(f\"The vectors have {dimensions:,} dimensions\")" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "array([-1.85247306e-02, 1.97027717e-03, -1.15211494e-02, 2.23240890e-02,\n", + " 8.41063485e-02, 3.64531651e-02, 2.63696015e-02, 1.50563465e-02,\n", + " 4.84857559e-02, 3.80692482e-02, 1.83093594e-04, 2.24398952e-02,\n", + " 4.60567214e-02, 4.58190292e-02, 3.74429822e-02, -5.23896851e-02,\n", + " 1.15476940e-02, 3.38097848e-02, -3.03355325e-02, -8.63027293e-03,\n", + " 5.64942770e-02, 2.51798406e-02, 1.38015151e-02, -2.07526479e-02,\n", + " -1.87167544e-02, -5.78521052e-03, 3.82627323e-02, -5.68991937e-02,\n", + " -4.89688739e-02, 4.87425253e-02, -5.03955260e-02, 4.04499583e-02,\n", + " -1.47977415e-02, -2.80260411e-03, -2.85318792e-02, -1.24896644e-02,\n", + " -1.88693665e-02, 3.28911357e-02, 1.54064260e-02, -1.13518359e-02,\n", + " 1.19983163e-02, -4.97919060e-02, -7.15689212e-02, 3.09262015e-02,\n", + " 3.62883396e-02, -2.03951504e-02, -7.55731598e-04, 2.51011271e-02,\n", + " 3.39337029e-02, -5.55131771e-02, -2.86268047e-03, -7.47634424e-03,\n", + " 3.86099182e-02, -3.56446877e-02, 1.85160991e-02, -1.19267786e-02,\n", + " 1.68699641e-02, 1.58497505e-02, -1.08698392e-02, 2.08130740e-02,\n", + " 6.39916444e-03, 3.05734184e-02, 5.82463294e-02, -1.44922675e-03,\n", + " -1.79196689e-02, -2.34130044e-02, -3.13566029e-02, 
1.37667591e-02,\n", + " 4.96128462e-02, 5.82867675e-03, -2.33113561e-02, 2.03036945e-02,\n", + " 7.26327226e-02, -7.70192454e-03, 2.78026573e-02, -1.37509912e-01,\n", + " -1.44480485e-02, 4.16051000e-02, 1.67854633e-02, 2.36726133e-03,\n", + " -2.00128066e-03, -3.60025503e-02, -6.90808743e-02, -3.29498723e-02,\n", + " -5.02625778e-02, 3.79297920e-02, -3.34151275e-02, 1.56359505e-02,\n", + " -3.85190472e-02, 1.16659962e-02, -4.66518424e-04, -2.63051875e-02,\n", + " 5.54691255e-02, -6.97175264e-02, -1.66818849e-03, 2.73272246e-02,\n", + " -1.61965825e-02, -7.92282149e-02, 4.47267629e-02, 6.27311831e-03,\n", + " -1.52192293e-02, -5.41190691e-02, -5.28662018e-02, 1.95346586e-02,\n", + " 4.98477593e-02, 1.75764207e-02, 2.77924556e-02, 4.11877260e-02,\n", + " -8.70027393e-03, 1.09095387e-02, -7.46374056e-02, -1.40648121e-02,\n", + " 8.47891625e-03, 1.82989165e-02, 5.40199410e-03, -4.91827056e-02,\n", + " 3.01663689e-02, 1.20082296e-01, 4.19785194e-02, 5.37006371e-02,\n", + " 1.95586067e-02, 3.67937014e-02, 5.55788800e-02, 3.01843323e-02,\n", + " 1.23615358e-02, -2.52238587e-02, -1.90039817e-03, 1.25963325e-02,\n", + " 1.96099468e-02, -2.76104994e-02, 8.50712322e-03, -3.35235824e-03,\n", + " -1.83853842e-02, -8.47999286e-03, 4.49112691e-02, 7.80286118e-02,\n", + " 3.13673019e-02, -5.87284006e-02, 6.18342683e-03, -3.69714014e-02,\n", + " -6.11646585e-02, 8.15040059e-03, -2.09620073e-02, 3.29048000e-02,\n", + " -2.39007361e-02, 3.13391797e-02, -6.29583746e-02, 9.62914992e-03,\n", + " 4.69451919e-02, -1.55548938e-02, -1.08551867e-02, -1.75406560e-02,\n", + " -2.78927013e-02, -3.97054665e-02, 1.15165431e-02, 3.07822004e-02,\n", + " -9.11642238e-03, 4.40496877e-02, -8.59784335e-03, 2.35226303e-02,\n", + " 4.97264899e-02, -1.00569446e-02, 3.46257500e-02, 3.96797732e-02,\n", + " -3.16511723e-03, -4.84315120e-02, -2.08059177e-02, -5.34345349e-03,\n", + " -7.20019713e-02, 1.50311925e-02, 1.43422689e-02, 2.80486885e-02,\n", + " -2.79754773e-02, -3.76880877e-02, -1.73238665e-02, 
-6.98957294e-02,\n", + " 3.06093972e-03, 4.12527993e-02, -5.45395259e-03, -3.08096465e-02,\n", + " -1.91735979e-02, -2.10986007e-02, 7.85525597e-04, 3.09847631e-02,\n", + " 1.55055597e-02, -6.56506643e-02, 6.37451485e-02, -3.55708376e-02,\n", + " -3.29639725e-02, 1.39867906e-02, 1.76938977e-02, -2.20224354e-02,\n", + " -6.27441108e-02, -3.61145250e-02, -2.66809091e-02, 4.22038734e-02,\n", + " 8.49101413e-03, 3.20192124e-03, 1.21845759e-03, 1.31745469e-02,\n", + " 4.93204966e-02, 6.24106042e-02, 7.91884307e-03, 1.63087379e-02,\n", + " 3.43066305e-02, -8.45552480e-04, 6.95117190e-02, -1.53776845e-02,\n", + " -4.45214882e-02, -3.96845117e-03, -5.38600758e-02, 4.33417298e-02,\n", + " -4.64314111e-02, -2.47553438e-02, 2.38111801e-02, -1.99962985e-02,\n", + " 2.90647522e-02, 3.60554457e-02, -2.77763233e-04, -2.24469882e-02,\n", + " 1.94191746e-02, 2.43108328e-02, -1.08723459e-03, 8.53982661e-03,\n", + " -6.51547760e-02, 3.65577033e-03, -3.34729366e-02, -7.59119075e-03,\n", + " 3.89748104e-02, -1.48010068e-02, 6.33744663e-03, 6.05361424e-02,\n", + " 1.90376677e-02, 1.85515098e-02, 4.76264358e-02, 2.00010519e-02,\n", + " -4.09411034e-03, 3.57255787e-02, 3.37230526e-02, 3.47398221e-02,\n", + " -6.82447255e-02, 2.74445787e-02, 4.82460391e-03, 7.15916380e-02,\n", + " -6.75637498e-02, -1.93010531e-02, -6.33795038e-02, 2.39340160e-02,\n", + " 2.15932559e-02, 4.74238284e-02, 1.11402851e-02, 2.44186521e-02,\n", + " -6.22628024e-03, -5.45446090e-02, -7.23260865e-02, 3.84008549e-02,\n", + " -5.59312366e-02, 3.70877385e-02, -4.52155173e-02, 4.30228785e-02,\n", + " 6.93516359e-02, -4.22157235e-02, 1.48834940e-03, -3.84283415e-03,\n", + " 1.17617855e-02, -9.66931786e-03, -5.06984442e-02, -2.44104918e-02,\n", + " -3.45009454e-02, 4.94865663e-02, 1.08481916e-02, -2.43156664e-02,\n", + " 1.05220899e-02, -1.72448978e-02, 1.81394501e-03, 3.08941212e-02,\n", + " 2.51201186e-02, 4.36747409e-02, 4.71153371e-02, -4.59319763e-02,\n", + " 7.45190587e-03, 3.21745686e-02, 4.70025688e-02, 
-5.51542779e-03,\n", + " -4.25801054e-03, -6.29816437e-03, -4.47728485e-02, -1.48455966e-02,\n", + " 2.29813550e-02, -1.95379239e-02, -2.13512853e-02, -5.86819425e-02,\n", + " -1.85773782e-02, -2.24611926e-04, -2.30959151e-02, 1.88287124e-02,\n", + " -9.51578654e-03, 3.44732031e-02, 2.91043818e-02, -8.33908617e-02,\n", + " 2.76501887e-02, -7.12599382e-02, 2.41419370e-03, -6.75831065e-02,\n", + " 2.15027742e-02, -1.03543000e-02, -2.02222615e-02, -1.35693680e-02,\n", + " 6.46096654e-03, -9.09610838e-03, 3.30464281e-02, -2.29563769e-02,\n", + " 2.99834702e-02, 1.66380852e-02, 3.34749632e-02, 2.78630331e-02,\n", + " 1.45139797e-02, -1.32757183e-02, -1.14772804e-02, 3.63563970e-02,\n", + " 9.40349512e-03, 6.22012764e-02, 1.20176319e-02, -3.24308984e-02,\n", + " 5.28422650e-04, 2.68275104e-02, -1.50545193e-02, -3.12765595e-03,\n", + " 1.37070632e-02, 5.76969311e-02, -6.79700868e-03, -7.21968431e-03,\n", + " -3.15651856e-02, -2.84020957e-02, -5.55845089e-02, 3.14262249e-02,\n", + " -7.47790784e-02, 1.28980130e-02, -2.81751752e-02, -2.86569409e-02,\n", + " -1.47787528e-02, 1.91606581e-02, -2.45286450e-02, -6.41258880e-02,\n", + " 2.65480876e-02, -2.25590970e-02, -2.64642686e-02, 4.59829271e-02,\n", + " 6.15315847e-02, 4.93693724e-02, 1.72816720e-02, 5.70014864e-02,\n", + " -5.09416722e-02, 1.95028335e-02, -3.13961804e-02, -5.73463403e-02,\n", + " 3.55050527e-02, 2.45417990e-02, 2.33551096e-02, -4.55264412e-02,\n", + " -1.20000392e-02, 4.08036597e-02, 7.19558867e-03, -4.95873280e-02,\n", + " -7.97256920e-03, 4.70858114e-03, 4.23983438e-03, -5.18187229e-03,\n", + " -6.00059377e-03, 3.15771773e-02, 1.29322298e-02, -7.47607742e-03,\n", + " 4.01974749e-03, 2.60308161e-02, 4.14611734e-02, -2.92321835e-02,\n", + " -3.74425612e-02, -4.02047671e-02, 6.41225129e-02, 8.02149065e-03,\n", + " -1.94793742e-03, 7.89933465e-03, 1.84414722e-02, 1.19220549e-02,\n", + " 6.97300653e-04, 1.27605693e-02, 2.13440992e-02, 3.44099663e-02,\n", + " -3.82834598e-02, 2.09364947e-02, -1.36689912e-03, 
2.60304064e-02,\n", + " 1.03309892e-01, -3.83628765e-03, -1.42918769e-02, -3.21982279e-02,\n", + " -8.87776911e-03, -5.79702482e-02, 1.24155525e-02, 1.60176177e-02,\n", + " 4.33206372e-03, -7.67913694e-03, -3.71407345e-02, -2.65847482e-02,\n", + " -4.84832413e-02, -1.18830036e-02, 2.10484881e-02, -2.14275811e-02,\n", + " -2.90587395e-02, -7.65146539e-02, 2.17941366e-02, 3.07247695e-02,\n", + " 2.21321993e-02, -5.37583865e-02, -5.45986630e-02, -1.95994209e-02,\n", + " 6.53655156e-02, -2.08480917e-02, 7.71053275e-03, 2.30464060e-02,\n", + " -2.38716491e-02, -3.17029133e-02, -1.65972225e-02, -3.12259868e-02,\n", + " -1.02742575e-01, 2.13919654e-02, 3.29860821e-02, 2.92449985e-02,\n", + " -1.30653549e-02, -6.27970276e-03, 4.92750034e-02, 1.64137091e-02,\n", + " 3.23879197e-02, -1.53172854e-02, -3.81413139e-02, -8.04919656e-03,\n", + " -1.08133154e-02, 7.60126188e-02, -2.81727463e-02, -9.25896503e-03,\n", + " 5.59587255e-02, -2.48033758e-02, 1.91262476e-02, -2.15144064e-02,\n", + " -2.70498525e-02, -3.91287804e-02, -4.47372459e-02, -3.99288572e-02,\n", + " -2.82600634e-02, -1.05496094e-01, 2.90084053e-02, -8.19884017e-02,\n", + " -1.79860294e-02, -4.93140221e-02, -2.89700292e-02, -3.26706134e-02,\n", + " -1.13929007e-02, 6.25480041e-02, 2.09988412e-02, 3.40786166e-02,\n", + " 4.22775038e-02, -9.97621939e-03, -1.95572786e-02, -4.95181680e-02,\n", + " 2.30757538e-02, -2.02779286e-02, 3.71993929e-02, -3.11168879e-02,\n", + " 2.57904008e-02, 4.26239781e-02, 2.33973619e-02, 4.00689989e-03,\n", + " -2.46374980e-02, -5.06165298e-03, -1.54379653e-02, 4.66948171e-04,\n", + " -4.85785725e-03, 5.66424802e-02, -2.09541935e-02, -3.06122117e-02,\n", + " 2.08306196e-03, 3.58040929e-02, -1.36380978e-02, 4.87826997e-03,\n", + " -1.25667257e-02, 2.91131213e-02, 4.39725257e-03, 3.34668048e-02,\n", + " -3.95729318e-02, 6.97005540e-02, 1.17042959e-02, 1.88927595e-02,\n", + " -4.99272123e-02, -3.45216766e-02, 1.57779772e-02, 4.84501049e-02,\n", + " 9.73086059e-03, 8.45093578e-02, 
6.21386804e-02, -8.33165832e-04,\n", + " -3.10367141e-02, -4.03451733e-03, 1.24619470e-03, -5.44636734e-02,\n", + " 7.75545537e-02, -4.69428711e-02, 2.10666824e-02, 3.30061316e-02,\n", + " -2.82400660e-02, -2.27502231e-02, 2.11734921e-02, 3.06038912e-02,\n", + " -4.69192192e-02, -2.65527479e-02, 2.12218873e-02, -1.94136128e-02,\n", + " -3.65071930e-02, 4.94123343e-03, 2.02455316e-02, -3.83306704e-02,\n", + " 2.75366195e-02, -2.11303458e-02, -9.70205888e-02, -3.63156945e-02,\n", + " -2.60391142e-02, -5.47648259e-02, 2.71793101e-02, 3.20913754e-02,\n", + " -4.93624136e-02, -3.55423577e-02, -1.88178215e-02, 6.94152117e-02,\n", + " -7.48152062e-02, -8.00276175e-03, 3.83800156e-02, -1.82128046e-02,\n", + " 1.16246035e-02, -3.29671726e-02, 3.58484033e-03, 2.86987368e-02,\n", + " 2.99137942e-02, -2.61925906e-02, 1.54190417e-02, 3.33075263e-02,\n", + " -3.46757914e-03, 1.81147065e-02, 2.02620104e-02, -7.87869543e-02,\n", + " -7.31143402e-03, 2.13454408e-03, -5.03857173e-02, -3.85818235e-03,\n", + " 3.64176147e-02, -2.58632395e-02, -2.47921981e-02, -4.48929071e-02,\n", + " -1.56746642e-03, 2.25882754e-02, -2.29092613e-02, -2.98154745e-02,\n", + " -3.63126658e-02, -2.87724007e-03, 1.69772059e-02, 1.35097727e-02,\n", + " 5.65643348e-02, 3.67655046e-02, -1.18822688e-02, -3.93256024e-02,\n", + " 5.84133416e-02, -1.66928973e-02, -2.85255332e-02, 2.45231064e-03,\n", + " 6.42824322e-02, 1.12834880e-02, 7.07072765e-02, -6.12733029e-02,\n", + " -3.22022736e-02, 1.49255954e-02, -3.45885344e-02, 5.64290285e-02,\n", + " 1.45710120e-02, 2.65258271e-02, -2.20487174e-02, 4.53800596e-02,\n", + " -2.44657323e-02, -2.35221051e-02, 5.31864055e-02, 3.79638225e-02,\n", + " 3.60472314e-02, -7.53597310e-03, -2.83951834e-02, 3.89870517e-02,\n", + " -2.53880899e-02, 7.42309308e-03, -7.19177909e-03, -2.33137272e-02,\n", + " 7.28014112e-02, -7.79018700e-02, 9.64842457e-03, -2.72194725e-02,\n", + " 2.04009134e-02, -4.13496494e-02, 8.00416097e-02, -3.60673741e-02,\n", + " 4.44941409e-03, 
3.92931253e-02, 1.36698354e-02, 1.24587072e-02,\n", + " 1.00127915e-02, 7.43277296e-02, 4.00649104e-03, -4.89665568e-02,\n", + " -1.82240052e-04, -1.41077256e-02, -2.97611952e-02, -1.74682311e-04,\n", + " 2.24157814e-02, 4.44416255e-02, -4.01153713e-02, -6.28807694e-02,\n", + " 1.47870714e-02, -2.36048526e-03, 1.80037152e-02, 1.93315167e-02,\n", + " 7.11953864e-02, 2.82566436e-02, -2.44845683e-03, -1.15027081e-03,\n", + " 6.96809217e-02, -7.51282647e-03, 7.46430457e-02, 4.62826341e-02,\n", + " -1.57173667e-02, -1.77645404e-02, -6.00871742e-02, -4.73721325e-03,\n", + " -2.26073875e-03, 7.37745641e-03, -9.78859235e-03, -1.78285630e-03,\n", + " -1.11999512e-01, 3.77576649e-02, 2.25516558e-02, 1.88177861e-02,\n", + " -2.03207228e-02, 6.17188103e-02, 3.49288732e-02, -8.87825638e-02,\n", + " -4.09724452e-02, 4.36148830e-02, -5.32415183e-03, -2.60976851e-02,\n", + " 7.11308792e-02, 6.35896670e-03, 3.25526879e-03, 1.12947663e-02,\n", + " 1.56234000e-02, -2.11693402e-02, 3.77066508e-02, -3.17939967e-02,\n", + " -1.39819952e-02, 1.79927405e-02, 2.04036627e-02, 2.92575965e-03,\n", + " -1.45869134e-02, -2.90152151e-02, -5.97235262e-02, -1.11356348e-01,\n", + " -3.18385735e-02, -2.38965661e-03, -6.12345934e-02, 4.60752286e-03,\n", + " 2.72978023e-02, 6.74417708e-03, 6.17338419e-02, 4.96751778e-02,\n", + " -6.44939207e-03, 3.66540253e-02, 6.50297524e-03, 4.99960519e-02,\n", + " 4.00801897e-02, -3.11222542e-02, -6.01028092e-02, 3.36206071e-02,\n", + " 1.11553874e-02, -1.01943649e-02, -1.93773943e-03, 8.48573353e-03,\n", + " -2.81138644e-02, -4.14620228e-02, -5.91190718e-03, -4.40563932e-02,\n", + " -3.85563564e-03, 3.15620564e-03, 3.58664691e-02, -2.53184307e-02,\n", + " -2.90389216e-05, 5.32585476e-03, 1.12847844e-02, 1.09254308e-02,\n", + " -2.80107949e-02, -2.64293756e-02, 1.36288069e-02, 2.05743704e-02,\n", + " 5.06558456e-02, 2.03972589e-03, 6.15928322e-03, 1.65107157e-02,\n", + " 7.66068920e-02, 1.06601194e-02, 2.15027258e-02, -1.87675226e-02,\n", + " -8.91032163e-03, 
5.78406416e-02, -3.35133038e-02, 1.11876021e-03,\n", + " -3.03310864e-02, 8.82029254e-03, -1.71672814e-02, -1.08657381e-03,\n", + " 3.43640856e-02, 6.27818331e-03, -2.87505034e-02, -5.35019450e-02,\n", + " -6.20333590e-02, 7.05959573e-02, -2.40503754e-02, -3.69300060e-02,\n", + " -1.34815788e-02, -3.37581560e-02, 2.64684986e-02, -1.33448904e-02,\n", + " -1.59186460e-02, 3.17284912e-02, 1.24617647e-02, 1.01900354e-01,\n", + " 5.25732934e-02, -1.05239293e-02, -9.43460036e-04, -4.58779857e-02,\n", + " -4.57871556e-02, -1.21272868e-02, -3.97307090e-02, 2.81554665e-02,\n", + " 4.01902646e-02, -5.47600538e-03, -1.49628508e-03, 1.42910369e-02,\n", + " 5.93335070e-02, -4.52512540e-02, -4.55521718e-02, 2.89121401e-02,\n", + " -1.18271308e-02, 6.30670190e-02, 4.18886282e-02, -5.92090562e-03,\n", + " 9.88560263e-03, -4.83246380e-03, 2.92682964e-02, 4.01030742e-02,\n", + " -4.30496857e-02, -7.91318994e-03, -5.26147615e-03, -8.48481245e-03,\n", + " 3.12878750e-02, 2.27111876e-02, -3.72377895e-02, -1.53291542e-02])" + ] + }, + "execution_count": 16, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "sample_embedding" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Visualizing vector" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "metadata": {}, + "outputs": [], + "source": [ + "# Prework\n", + "\n", + "result = collection.get(include=['embeddings', 'documents', 'metadatas'])\n", + "vectors = np.array(result['embeddings'])\n", + "documents = result['documents']\n", + "doc_types = [metadata['doc_type'] for metadata in result['metadatas']]\n", + "colors = [['blue', 'green', 'red', 'orange'][['products', 'employees', 'contracts', 'company'].index(t)] for t in doc_types]" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.plotly.v1+json": { + "config": { + "plotlyServerURL": "https://plot.ly" + }, + "data": [ + { + "hoverinfo": "text", + 
"marker": { + "color": [ + "orange", + "orange", + "orange", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue" + ], + "opacity": 0.8, + "size": 5 + }, + "mode": "markers", + "text": [ + "Type: company
    Text: # About Insurellm\n\nInsurellm was founded by Avery Lancaster in 2015 as an insurance tech startup des...", + "Type: company
    Text: # Careers at Insurellm\n\nInsurellm is hiring! We are looking for talented software engineers, data sc...", + "Type: company
    Text: # Overview of Insurellm\n\nInsurellm is an innovative insurance tech firm with 200 employees across th...", + "Type: contracts
    Text: # Contract with Apex Reinsurance for Rellm: AI-Powered Enterprise Reinsurance Solution\n\n## Terms\n\n1....", + "Type: contracts
    Text: ## Renewal\n\n1. **Automatic Renewal**: This Agreement will automatically renew for successive one-yea...", + "Type: contracts
    Text: 2. **Seamless Integrations**: The architecture of Rellm allows for easy integration with existing sy...", + "Type: contracts
    Text: 1. **Technical Support**: Provider shall offer dedicated technical support to the Client via phone, ...", + "Type: contracts
    Text: **Insurellm, Inc.** \n_____________________________ \nAuthorized Signature \nDate: ________________...", + "Type: contracts
    Text: # Contract with Belvedere Insurance for Markellm\n\n## Terms\nThis Contract (\"Agreement\") is made and e...", + "Type: contracts
    Text: ## Renewal\n1. **Renewal Terms**: This Agreement may be renewed for additional one-year terms upon mu...", + "Type: contracts
    Text: ## Features\n1. **AI-Powered Matching**: Belvedere Insurance will benefit from Markellm's AI-powered ...", + "Type: contracts
    Text: ## Support\n1. **Technical Support**: Technical support will be available from 9 AM to 7 PM EST, Mond...", + "Type: contracts
    Text: **Belvedere Insurance** \nSignature: ______________________ \nName: [Authorized Signatory] \nTitle: ...", + "Type: contracts
    Text: # Contract with BrightWay Solutions for Markellm\n\n**Contract Date:** October 5, 2023 \n**Contract ID...", + "Type: contracts
    Text: 3. **Service Level Agreement (SLA):** \n Insurellm commits to a 99.9% uptime for the platform with...", + "Type: contracts
    Text: 2. **Real-Time Quote Availability:** \n Consumers sourced via BrightWay Solutions will receive rea...", + "Type: contracts
    Text: 3. **Training and Onboarding:** \n Insurellm agrees to provide one free training session on how to...", + "Type: contracts
    Text: # Contract with EverGuard Insurance for Rellm: AI-Powered Enterprise Reinsurance Solution\n\n**Contrac...", + "Type: contracts
    Text: 4. **Usage Rights**: EverGuard Insurance is granted a non-exclusive, non-transferable license to acc...", + "Type: contracts
    Text: 1. **Core Functionality**: Rellm provides EverGuard Insurance with advanced AI-driven analytics, sea...", + "Type: contracts
    Text: 1. **Customer Support**: Insurellm will provide EverGuard Insurance with 24/7 customer support, incl...", + "Type: contracts
    Text: ---\n\n**Signatures** \n**For Insurellm**: __________________________ \n**Name**: John Smith \n**Title...", + "Type: contracts
    Text: # Contract with GreenField Holdings for Markellm\n\n**Effective Date:** November 15, 2023 \n**Contract...", + "Type: contracts
    Text: ## Renewal\n1. **Automatic Renewal**: This contract will automatically renew for sequential one-year ...", + "Type: contracts
    Text: ## Features\n1. **AI-Powered Matching**: Access to advanced algorithms that connect GreenField Holdin...", + "Type: contracts
    Text: ## Support\n1. **Customer Support Access**: The Client will have access to dedicated support through ...", + "Type: contracts
    Text: **Signatures:** \n_________________________ _________________________ \n**...", + "Type: contracts
    Text: # Contract with Greenstone Insurance for Homellm\n\n---\n\n## Terms\n\n1. **Parties**: This Contract (\"Agr...", + "Type: contracts
    Text: 4. **Payment Terms**: \n - The Customer shall pay an amount of $10,000 per month for the Standard T...", + "Type: contracts
    Text: ---\n\n## Features\n\n- **AI-Powered Risk Assessment**: Customer will have access to enhanced risk evalu...", + "Type: contracts
    Text: - **Customer Portal**: A dedicated portal will be provided, allowing the Customer's clients to manag...", + "Type: contracts
    Text: ______________________________ \n[Name], [Title] \nDate: ______________________\n\n**For Greenstone In...", + "Type: contracts
    Text: # Contract with GreenValley Insurance for Homellm\n\n**Contract Date:** October 6, 2023 \n**Contract N...", + "Type: contracts
    Text: 4. **Confidentiality:** Both parties agree to maintain the confidentiality of proprietary informatio...", + "Type: contracts
    Text: 1. **AI-Powered Risk Assessment:** Access to advanced AI algorithms for real-time risk evaluations.\n...", + "Type: contracts
    Text: 3. **Regular Updates:** Insurellm will offer ongoing updates and enhancements to the Homellm platfor...", + "Type: contracts
    Text: # Contract with Pinnacle Insurance Co. for Homellm\n\n## Terms\nThis contract (\"Contract\") is entered i...", + "Type: contracts
    Text: ## Renewal\n1. **Renewal Terms**: At the end of the initial term, this Contract shall automatically r...", + "Type: contracts
    Text: ## Features\n1. **AI-Powered Risk Assessment**: Utilized for tailored underwriting decisions specific...", + "Type: contracts
    Text: ## Support\n1. **Technical Support**: Insurellm shall provide 24/7 technical support via an email and...", + "Type: contracts
    Text: # Contract with Roadway Insurance Inc. for Carllm\n\n---\n\n## Terms\n\n1. **Agreement Effective Date**: T...", + "Type: contracts
    Text: ---\n\n## Renewal\n\n1. **Automatic Renewal**: This agreement will automatically renew for an additional...", + "Type: contracts
    Text: ---\n\n## Features\n\n1. **Access to Core Features**: Roadway Insurance Inc. will have access to all Pro...", + "Type: contracts
    Text: ---\n\n## Support\n\n1. **Technical Support**: Roadway Insurance Inc. will receive priority technical su...", + "Type: contracts
    Text: # Contract with Stellar Insurance Co. for Rellm\n\n## Terms\nThis contract is made between **Insurellm*...", + "Type: contracts
    Text: ### Termination\nEither party may terminate this agreement with a **30-day written notice**. In the e...", + "Type: contracts
    Text: ## Features\nStellar Insurance Co. will receive access to the following features of the Rellm product...", + "Type: contracts
    Text: ## Support\nInsurellm provides Stellar Insurance Co. with the following support services:\n\n- **24/7 T...", + "Type: contracts
    Text: # Contract with TechDrive Insurance for Carllm\n\n**Contract Date:** October 1, 2024 \n**Contract Dura...", + "Type: contracts
    Text: ## Renewal\n\n1. **Automatic Renewal**: This contract shall automatically renew for additional one-yea...", + "Type: contracts
    Text: ## Support\n\n1. **Customer Support**: Insurellm will provide 24/7 customer support to TechDrive Insur...", + "Type: contracts
    Text: **TechDrive Insurance Representative:** \nName: Sarah Johnson \nTitle: Operations Director \nDate: _...", + "Type: contracts
    Text: # Contract with Velocity Auto Solutions for Carllm\n\n**Contract Date:** October 1, 2023 \n**Contract ...", + "Type: contracts
    Text: ## Renewal\n\n1. **Automatic Renewal**: This contract will automatically renew for successive 12-month...", + "Type: contracts
    Text: ## Support\n\n1. **Customer Support**: Velocity Auto Solutions will have access to Insurellm’s custome...", + "Type: employees
    Text: # HR Record\n\n# Alex Chen\n\n## Summary\n- **Date of Birth:** March 15, 1990 \n- **Job Title:** Backend ...", + "Type: employees
    Text: ## Annual Performance History\n- **2020:** \n - Completed onboarding successfully. \n - Met expecta...", + "Type: employees
    Text: ## Compensation History\n- **2020:** Base Salary: $80,000 \n- **2021:** Base Salary Increase to $90,0...", + "Type: employees
    Text: Alex Chen continues to be a vital asset at Insurellm, contributing significantly to innovative backe...", + "Type: employees
    Text: # HR Record\n\n# Alex Harper\n\n## Summary\n- **Date of Birth**: March 15, 1993 \n- **Job Title**: Sales ...", + "Type: employees
    Text: ## Annual Performance History \n- **2021**: \n - **Performance Rating**: 4.5/5 \n - **Key Achievem...", + "Type: employees
    Text: - **2022**: \n - **Base Salary**: $65,000 (Promotion to Senior SDR) \n - **Bonus**: $13,000 (20% o...", + "Type: employees
    Text: # HR Record\n\n# Alex Thomson\n\n## Summary\n- **Date of Birth:** March 15, 1995 \n- **Job Title:** Sales...", + "Type: employees
    Text: ## Annual Performance History \n- **2022** - Rated as \"Exceeds Expectations.\" Alex Thomson achieved ...", + "Type: employees
    Text: ## Other HR Notes\n- Alex Thomson is an active member of the Diversity and Inclusion committee at Ins...", + "Type: employees
    Text: # Avery Lancaster\n\n## Summary\n- **Date of Birth**: March 15, 1985 \n- **Job Title**: Co-Founder & Ch...", + "Type: employees
    Text: - **2010 - 2013**: Business Analyst at Edge Analytics \n Prior to joining Innovate, Avery worked as...", + "Type: employees
    Text: - **2018**: **Exceeds Expectations** \n Under Avery’s pivoted vision, Insurellm launched two new su...", + "Type: employees
    Text: - **2022**: **Satisfactory** \n Avery focused on rebuilding team dynamics and addressing employee c...", + "Type: employees
    Text: ## Compensation History\n- **2015**: $150,000 base salary + Significant equity stake \n- **2016**: $1...", + "Type: employees
    Text: ## Other HR Notes\n- **Professional Development**: Avery has actively participated in leadership trai...", + "Type: employees
    Text: # HR Record\n\n# Emily Carter\n\n## Summary\n- **Date of Birth:** August 12, 1990 \n- **Job Title:** Acco...", + "Type: employees
    Text: - **2017-2019:** Marketing Intern \n - Assisted with market research and campaign development for s...", + "Type: employees
    Text: ## Compensation History\n| Year | Base Salary | Bonus | Total Compensation |\n|------|--------...", + "Type: employees
    Text: Emily Carter exemplifies the kind of talent that drives Insurellm's success and is an invaluable ass...", + "Type: employees
    Text: # HR Record\n\n# Emily Tran\n\n## Summary\n- **Date of Birth:** March 18, 1991 \n- **Job Title:** Digital...", + "Type: employees
    Text: - **January 2017 - May 2018**: Marketing Intern \n - Supported the Marketing team by collaborating ...", + "Type: employees
    Text: - **2021**: \n - Performance Rating: Meets Expectations \n - Key Achievements: Contributed to the ...", + "Type: employees
    Text: - **Professional Development Goals**: \n - Emily Tran aims to become a Marketing Manager within the...", + "Type: employees
    Text: # HR Record\n\n# Jordan Blake\n\n## Summary\n- **Date of Birth:** March 15, 1993 \n- **Job Title:** Sales...", + "Type: employees
    Text: ## Annual Performance History\n- **2021:** First year at Insurellm; achieved 90% of monthly targets. ...", + "Type: employees
    Text: ## Other HR Notes\n- Jordan has shown an interest in continuing education, actively participating in ...", + "Type: employees
    Text: # HR Record\n\n# Jordan K. Bishop\n\n## Summary\n- **Date of Birth:** March 15, 1990\n- **Job Title:** Fro...", + "Type: employees
    Text: ## Annual Performance History\n- **2019:** Exceeds Expectations - Continuously delivered high-quality...", + "Type: employees
    Text: ## Compensation History\n- **June 2018:** Starting Salary - $85,000\n- **June 2019:** Salary Increase ...", + "Type: employees
    Text: ## Other HR Notes\n- Jordan K. Bishop has been an integral part of club initiatives, including the In...", + "Type: employees
    Text: # HR Record\n\n# Maxine Thompson\n\n## Summary\n- **Date of Birth:** January 15, 1991 \n- **Job Title:** ...", + "Type: employees
    Text: ## Insurellm Career Progression\n- **January 2017 - October 2018**: **Junior Data Engineer** \n * Ma...", + "Type: employees
    Text: ## Annual Performance History\n- **2017**: *Meets Expectations* \n Maxine showed potential in her ro...", + "Type: employees
    Text: - **2021**: *Exceeds Expectations* \n Maxine spearheaded the transition to a new data warehousing s...", + "Type: employees
    Text: ## Compensation History\n- **2017**: $70,000 (Junior Data Engineer) \n- **2018**: $75,000 (Junior Dat...", + "Type: employees
    Text: # HR Record\n\n# Oliver Spencer\n\n## Summary\n- **Date of Birth**: May 14, 1990 \n- **Job Title**: Backe...", + "Type: employees
    Text: ## Annual Performance History\n- **2018**: **3/5** - Adaptable team player but still learning to take...", + "Type: employees
    Text: ## Compensation History\n- **March 2018**: Initial salary of $80,000.\n- **July 2019**: Salary increas...", + "Type: employees
    Text: # Samantha Greene\n\n## Summary\n- **Date of Birth:** October 14, 1990\n- **Job Title:** HR Generalist\n-...", + "Type: employees
    Text: ## Annual Performance History\n- **2020:** Exceeds Expectations \n Samantha Greene demonstrated exce...", + "Type: employees
    Text: ## Compensation History\n- **2020:** Base Salary - $55,000 \n The entry-level salary matched industr...", + "Type: employees
    Text: - **2023:** Base Salary - $70,000 \n Recognized for substantial improvement in employee relations m...", + "Type: employees
    Text: # HR Record\n\n# Samuel Trenton\n\n## Summary\n- **Date of Birth:** April 12, 1989 \n- **Job Title:** Sen...", + "Type: employees
    Text: ## Annual Performance History\n- **2023:** Rating: 4.5/5 \n *Samuel exceeded expectations, successfu...", + "Type: employees
    Text: ## Compensation History\n- **2023:** Base Salary: $115,000 + Bonus: $15,000 \n *Annual bonus based o...", + "Type: employees
    Text: - **Engagement in Company Culture:** Regularly participates in team-building events and contributes ...", + "Type: products
    Text: # Product Summary\n\n# Carllm\n\n## Summary\n\nCarllm is an innovative auto insurance product developed by...", + "Type: products
    Text: - **Instant Quoting**: With Carllm, insurance companies can offer near-instant quotes to customers, ...", + "Type: products
    Text: - **Mobile Integration**: Carllm is designed to work seamlessly with mobile applications, providing ...", + "Type: products
    Text: - **Professional Tier**: $2,500/month\n - For medium-sized companies.\n - All Basic Tier features pl...", + "Type: products
    Text: ### Q2 2025: Customer Experience Improvements\n- Launch of a new **mobile app** for end-users.\n- Intr...", + "Type: products
    Text: # Product Summary\n\n# Homellm\n\n## Summary\nHomellm is an innovative home insurance product developed b...", + "Type: products
    Text: ### 2. Dynamic Pricing Model\nWith Homellm's innovative dynamic pricing model, insurance providers ca...", + "Type: products
    Text: ### 5. Multi-Channel Integration\nHomellm seamlessly integrates into existing insurance platforms, pr...", + "Type: products
    Text: - **Basic Tier:** Starting at $5,000/month for small insurers with basic integration features.\n- **S...", + "Type: products
    Text: All tiers include a comprehensive training program and ongoing updates to ensure optimal performance...", + "Type: products
    Text: With Homellm, Insurellm is committed to transforming the landscape of home insurance, ensuring both ...", + "Type: products
    Text: # Product Summary\n\n# Markellm\n\n## Summary\n\nMarkellm is an innovative two-sided marketplace designed ...", + "Type: products
    Text: - **User-Friendly Interface**: Designed with user experience in mind, Markellm features an intuitive...", + "Type: products
    Text: - **Customer Support**: Our dedicated support team is always available to assist both consumers and ...", + "Type: products
    Text: ### For Insurance Companies:\n- **Basic Listing Fee**: $199/month for a featured listing on the platf...", + "Type: products
    Text: ### Q3 2025\n- Initiate a comprehensive marketing campaign targeting both consumers and insurers to i...", + "Type: products
    Text: # Product Summary\n\n# Rellm: AI-Powered Enterprise Reinsurance Solution\n\n## Summary\n\nRellm is an inno...", + "Type: products
    Text: ### Seamless Integrations\nRellm's architecture is designed for effortless integration with existing ...", + "Type: products
    Text: ### Regulatory Compliance Tools\nRellm includes built-in compliance tracking features to help organiz...", + "Type: products
    Text: Join the growing number of organizations leveraging Rellm to enhance their reinsurance processes whi...", + "Type: products
    Text: Experience the future of reinsurance with Rellm, where innovation meets reliability. Let Insurellm h..." + ], + "type": "scatter", + "x": [ + 2.1049793, + 1.1863052, + 1.4862374, + -5.244703, + -4.6875825, + -3.938663, + -7.065274, + -13.5899725, + -9.856695, + -13.868874, + -5.6077223, + -7.7878904, + -10.650882, + -8.596619, + -7.607886, + -7.044941, + -8.118247, + -4.3257694, + -3.7956166, + -3.1995866, + -6.4049, + -6.2257085, + -9.424744, + -13.633935, + -4.918413, + -8.846364, + -13.630306, + -9.190956, + -10.57125, + -4.0693502, + -8.158554, + -13.862557, + -8.649788, + -7.214466, + -5.36645, + -7.494893, + -9.623619, + -13.9268875, + -4.0489416, + -8.71199, + -11.229432, + -13.288615, + -12.044058, + -10.3613825, + -6.9435472, + -6.0978713, + -5.2625675, + -6.3455467, + -10.479305, + -10.707319, + -8.29903, + -8.511846, + -9.630703, + -9.749146, + -9.0578, + 5.959655, + 11.447374, + 10.058615, + 5.1624084, + 8.816244, + 10.980077, + 9.303604, + 7.8448887, + 10.387505, + 7.9188, + 2.9157553, + 5.2268667, + 5.738741, + 6.06246, + 11.117995, + 3.259488, + 5.9528317, + 12.1910305, + 7.9038677, + 4.751993, + 5.9953322, + 6.4600663, + 7.3864727, + 5.8371596, + 9.382967, + 11.086662, + 11.166579, + 10.636894, + 12.461003, + 10.982859, + 11.124385, + 11.667279, + 12.73921, + 12.9148855, + 12.973071, + 12.19851, + 7.131914, + 12.053937, + 9.205491, + 15.479876, + 14.208124, + 14.651664, + 15.361577, + 6.732294, + 9.61941, + 9.963041, + 9.356099, + -0.892312, + -1.6616712, + -2.0991518, + -1.8988599, + -1.0763571, + -3.4787161, + -3.2296891, + -2.6272976, + -1.5834669, + -1.5236322, + 0.20386986, + -4.8010993, + -5.9114547, + -5.690189, + -4.8725724, + -3.9543898, + -1.7254385, + -2.615607, + -2.4413817, + -1.358858, + -0.21650138 + ], + "y": [ + -1.160884, + 0.29916492, + -0.18965857, + -3.9546766, + -2.6938837, + -1.5114936, + -1.7658606, + 1.6283244, + 3.3563676, + -1.0218278, + 4.734356, + -1.5868372, + 2.7326808, + 3.8717313, + 3.0319402, + 4.8651543, 
+ 3.84119, + -4.5347166, + -3.602797, + -3.3519456, + -5.259293, + -5.811519, + 4.7673006, + -1.0121828, + 3.0695078, + 5.869272, + 1.72016, + 0.70035094, + 0.31958526, + 1.6364757, + -0.49663937, + 0.7449636, + 0.77033013, + 0.90882516, + 1.2580742, + 0.38005096, + -0.45788804, + -1.3838352, + 2.8216114, + -1.3808312, + -2.8460462, + -2.3889477, + -4.978076, + -5.0466166, + -3.2549055, + -2.8125684, + -1.6414757, + -2.1152701, + -2.9129503, + -3.7577167, + -5.231769, + -6.0865116, + -3.3624432, + -3.9013338, + -4.3533516, + 4.2022624, + -1.1752989, + -1.4045172, + 4.0687327, + 2.8832786, + -0.17034641, + 2.065217, + 2.5553873, + 0.5539435, + 2.2194517, + -2.4600935, + -4.2555146, + -4.4346094, + -4.551813, + -2.6811168, + -2.7749152, + 0.9942546, + -0.88645107, + -0.5169783, + 0.9356758, + -0.5277238, + -0.9503327, + -1.6551013, + -0.8439842, + 3.890908, + 2.1762133, + 2.625817, + 4.373835, + 0.739714, + -2.2775772, + 4.309124, + -5.931021, + -4.830216, + -3.0594008, + -4.583869, + -4.6539454, + 4.349339, + -1.5038458, + -0.50115377, + 0.57530403, + -0.9931708, + -0.62294304, + 0.3860171, + 2.6113834, + -3.046981, + -2.302129, + -4.026367, + 3.9122264, + 3.7329102, + 4.04289, + 4.7394605, + 5.348665, + 0.87496454, + 1.837953, + 1.1089472, + 1.8076365, + 1.6846453, + 0.07279262, + 5.578082, + 6.1154733, + 6.3361335, + 6.382683, + 6.6129003, + -2.4845295, + -0.93237317, + -1.7474884, + -1.460983, + -0.6520413 + ] + } + ], + "layout": { + "height": 600, + "margin": { + "b": 10, + "l": 10, + "r": 20, + "t": 40 + }, + "scene": { + "xaxis": { + "title": { + "text": "x" + } + }, + "yaxis": { + "title": { + "text": "y" + } + } + }, + "template": { + "data": { + "bar": [ + { + "error_x": { + "color": "#2a3f5f" + }, + "error_y": { + "color": "#2a3f5f" + }, + "marker": { + "line": { + "color": "#E5ECF6", + "width": 0.5 + }, + "pattern": { + "fillmode": "overlay", + "size": 10, + "solidity": 0.2 + } + }, + "type": "bar" + } + ], + "barpolar": [ + { + "marker": { + "line": { + 
"color": "#E5ECF6", + "width": 0.5 + }, + "pattern": { + "fillmode": "overlay", + "size": 10, + "solidity": 0.2 + } + }, + "type": "barpolar" + } + ], + "carpet": [ + { + "aaxis": { + "endlinecolor": "#2a3f5f", + "gridcolor": "white", + "linecolor": "white", + "minorgridcolor": "white", + "startlinecolor": "#2a3f5f" + }, + "baxis": { + "endlinecolor": "#2a3f5f", + "gridcolor": "white", + "linecolor": "white", + "minorgridcolor": "white", + "startlinecolor": "#2a3f5f" + }, + "type": "carpet" + } + ], + "choropleth": [ + { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + }, + "type": "choropleth" + } + ], + "contour": [ + { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + }, + "colorscale": [ + [ + 0, + "#0d0887" + ], + [ + 0.1111111111111111, + "#46039f" + ], + [ + 0.2222222222222222, + "#7201a8" + ], + [ + 0.3333333333333333, + "#9c179e" + ], + [ + 0.4444444444444444, + "#bd3786" + ], + [ + 0.5555555555555556, + "#d8576b" + ], + [ + 0.6666666666666666, + "#ed7953" + ], + [ + 0.7777777777777778, + "#fb9f3a" + ], + [ + 0.8888888888888888, + "#fdca26" + ], + [ + 1, + "#f0f921" + ] + ], + "type": "contour" + } + ], + "contourcarpet": [ + { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + }, + "type": "contourcarpet" + } + ], + "heatmap": [ + { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + }, + "colorscale": [ + [ + 0, + "#0d0887" + ], + [ + 0.1111111111111111, + "#46039f" + ], + [ + 0.2222222222222222, + "#7201a8" + ], + [ + 0.3333333333333333, + "#9c179e" + ], + [ + 0.4444444444444444, + "#bd3786" + ], + [ + 0.5555555555555556, + "#d8576b" + ], + [ + 0.6666666666666666, + "#ed7953" + ], + [ + 0.7777777777777778, + "#fb9f3a" + ], + [ + 0.8888888888888888, + "#fdca26" + ], + [ + 1, + "#f0f921" + ] + ], + "type": "heatmap" + } + ], + "heatmapgl": [ + { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + }, + "colorscale": [ + [ + 0, + "#0d0887" + ], + [ + 0.1111111111111111, + "#46039f" + ], + [ + 0.2222222222222222, + "#7201a8" + ], + [ + 
0.3333333333333333, + "#9c179e" + ], + [ + 0.4444444444444444, + "#bd3786" + ], + [ + 0.5555555555555556, + "#d8576b" + ], + [ + 0.6666666666666666, + "#ed7953" + ], + [ + 0.7777777777777778, + "#fb9f3a" + ], + [ + 0.8888888888888888, + "#fdca26" + ], + [ + 1, + "#f0f921" + ] + ], + "type": "heatmapgl" + } + ], + "histogram": [ + { + "marker": { + "pattern": { + "fillmode": "overlay", + "size": 10, + "solidity": 0.2 + } + }, + "type": "histogram" + } + ], + "histogram2d": [ + { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + }, + "colorscale": [ + [ + 0, + "#0d0887" + ], + [ + 0.1111111111111111, + "#46039f" + ], + [ + 0.2222222222222222, + "#7201a8" + ], + [ + 0.3333333333333333, + "#9c179e" + ], + [ + 0.4444444444444444, + "#bd3786" + ], + [ + 0.5555555555555556, + "#d8576b" + ], + [ + 0.6666666666666666, + "#ed7953" + ], + [ + 0.7777777777777778, + "#fb9f3a" + ], + [ + 0.8888888888888888, + "#fdca26" + ], + [ + 1, + "#f0f921" + ] + ], + "type": "histogram2d" + } + ], + "histogram2dcontour": [ + { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + }, + "colorscale": [ + [ + 0, + "#0d0887" + ], + [ + 0.1111111111111111, + "#46039f" + ], + [ + 0.2222222222222222, + "#7201a8" + ], + [ + 0.3333333333333333, + "#9c179e" + ], + [ + 0.4444444444444444, + "#bd3786" + ], + [ + 0.5555555555555556, + "#d8576b" + ], + [ + 0.6666666666666666, + "#ed7953" + ], + [ + 0.7777777777777778, + "#fb9f3a" + ], + [ + 0.8888888888888888, + "#fdca26" + ], + [ + 1, + "#f0f921" + ] + ], + "type": "histogram2dcontour" + } + ], + "mesh3d": [ + { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + }, + "type": "mesh3d" + } + ], + "parcoords": [ + { + "line": { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + } + }, + "type": "parcoords" + } + ], + "pie": [ + { + "automargin": true, + "type": "pie" + } + ], + "scatter": [ + { + "fillpattern": { + "fillmode": "overlay", + "size": 10, + "solidity": 0.2 + }, + "type": "scatter" + } + ], + "scatter3d": [ + { + "line": { + "colorbar": { 
+ "outlinewidth": 0, + "ticks": "" + } + }, + "marker": { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + } + }, + "type": "scatter3d" + } + ], + "scattercarpet": [ + { + "marker": { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + } + }, + "type": "scattercarpet" + } + ], + "scattergeo": [ + { + "marker": { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + } + }, + "type": "scattergeo" + } + ], + "scattergl": [ + { + "marker": { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + } + }, + "type": "scattergl" + } + ], + "scattermapbox": [ + { + "marker": { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + } + }, + "type": "scattermapbox" + } + ], + "scatterpolar": [ + { + "marker": { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + } + }, + "type": "scatterpolar" + } + ], + "scatterpolargl": [ + { + "marker": { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + } + }, + "type": "scatterpolargl" + } + ], + "scatterternary": [ + { + "marker": { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + } + }, + "type": "scatterternary" + } + ], + "surface": [ + { + "colorbar": { + "outlinewidth": 0, + "ticks": "" + }, + "colorscale": [ + [ + 0, + "#0d0887" + ], + [ + 0.1111111111111111, + "#46039f" + ], + [ + 0.2222222222222222, + "#7201a8" + ], + [ + 0.3333333333333333, + "#9c179e" + ], + [ + 0.4444444444444444, + "#bd3786" + ], + [ + 0.5555555555555556, + "#d8576b" + ], + [ + 0.6666666666666666, + "#ed7953" + ], + [ + 0.7777777777777778, + "#fb9f3a" + ], + [ + 0.8888888888888888, + "#fdca26" + ], + [ + 1, + "#f0f921" + ] + ], + "type": "surface" + } + ], + "table": [ + { + "cells": { + "fill": { + "color": "#EBF0F8" + }, + "line": { + "color": "white" + } + }, + "header": { + "fill": { + "color": "#C8D4E3" + }, + "line": { + "color": "white" + } + }, + "type": "table" + } + ] + }, + "layout": { + "annotationdefaults": { + "arrowcolor": "#2a3f5f", + "arrowhead": 0, + "arrowwidth": 1 + }, + "autotypenumbers": "strict", + "coloraxis": { + "colorbar": { 
+ "outlinewidth": 0, + "ticks": "" + } + }, + "colorscale": { + "diverging": [ + [ + 0, + "#8e0152" + ], + [ + 0.1, + "#c51b7d" + ], + [ + 0.2, + "#de77ae" + ], + [ + 0.3, + "#f1b6da" + ], + [ + 0.4, + "#fde0ef" + ], + [ + 0.5, + "#f7f7f7" + ], + [ + 0.6, + "#e6f5d0" + ], + [ + 0.7, + "#b8e186" + ], + [ + 0.8, + "#7fbc41" + ], + [ + 0.9, + "#4d9221" + ], + [ + 1, + "#276419" + ] + ], + "sequential": [ + [ + 0, + "#0d0887" + ], + [ + 0.1111111111111111, + "#46039f" + ], + [ + 0.2222222222222222, + "#7201a8" + ], + [ + 0.3333333333333333, + "#9c179e" + ], + [ + 0.4444444444444444, + "#bd3786" + ], + [ + 0.5555555555555556, + "#d8576b" + ], + [ + 0.6666666666666666, + "#ed7953" + ], + [ + 0.7777777777777778, + "#fb9f3a" + ], + [ + 0.8888888888888888, + "#fdca26" + ], + [ + 1, + "#f0f921" + ] + ], + "sequentialminus": [ + [ + 0, + "#0d0887" + ], + [ + 0.1111111111111111, + "#46039f" + ], + [ + 0.2222222222222222, + "#7201a8" + ], + [ + 0.3333333333333333, + "#9c179e" + ], + [ + 0.4444444444444444, + "#bd3786" + ], + [ + 0.5555555555555556, + "#d8576b" + ], + [ + 0.6666666666666666, + "#ed7953" + ], + [ + 0.7777777777777778, + "#fb9f3a" + ], + [ + 0.8888888888888888, + "#fdca26" + ], + [ + 1, + "#f0f921" + ] + ] + }, + "colorway": [ + "#636efa", + "#EF553B", + "#00cc96", + "#ab63fa", + "#FFA15A", + "#19d3f3", + "#FF6692", + "#B6E880", + "#FF97FF", + "#FECB52" + ], + "font": { + "color": "#2a3f5f" + }, + "geo": { + "bgcolor": "white", + "lakecolor": "white", + "landcolor": "#E5ECF6", + "showlakes": true, + "showland": true, + "subunitcolor": "white" + }, + "hoverlabel": { + "align": "left" + }, + "hovermode": "closest", + "mapbox": { + "style": "light" + }, + "paper_bgcolor": "white", + "plot_bgcolor": "#E5ECF6", + "polar": { + "angularaxis": { + "gridcolor": "white", + "linecolor": "white", + "ticks": "" + }, + "bgcolor": "#E5ECF6", + "radialaxis": { + "gridcolor": "white", + "linecolor": "white", + "ticks": "" + } + }, + "scene": { + "xaxis": { + "backgroundcolor": 
"#E5ECF6", + "gridcolor": "white", + "gridwidth": 2, + "linecolor": "white", + "showbackground": true, + "ticks": "", + "zerolinecolor": "white" + }, + "yaxis": { + "backgroundcolor": "#E5ECF6", + "gridcolor": "white", + "gridwidth": 2, + "linecolor": "white", + "showbackground": true, + "ticks": "", + "zerolinecolor": "white" + }, + "zaxis": { + "backgroundcolor": "#E5ECF6", + "gridcolor": "white", + "gridwidth": 2, + "linecolor": "white", + "showbackground": true, + "ticks": "", + "zerolinecolor": "white" + } + }, + "shapedefaults": { + "line": { + "color": "#2a3f5f" + } + }, + "ternary": { + "aaxis": { + "gridcolor": "white", + "linecolor": "white", + "ticks": "" + }, + "baxis": { + "gridcolor": "white", + "linecolor": "white", + "ticks": "" + }, + "bgcolor": "#E5ECF6", + "caxis": { + "gridcolor": "white", + "linecolor": "white", + "ticks": "" + } + }, + "title": { + "x": 0.05 + }, + "xaxis": { + "automargin": true, + "gridcolor": "white", + "linecolor": "white", + "ticks": "", + "title": { + "standoff": 15 + }, + "zerolinecolor": "white", + "zerolinewidth": 2 + }, + "yaxis": { + "automargin": true, + "gridcolor": "white", + "linecolor": "white", + "ticks": "", + "title": { + "standoff": 15 + }, + "zerolinecolor": "white", + "zerolinewidth": 2 + } + } + }, + "title": { + "text": "2D Chroma Vector Store Visualization" + }, + "width": 800 + } + } + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "# We humans find it easier to visualize things in 2D!\n", + "# Reduce the dimensionality of the vectors to 2D using t-SNE\n", + "# (t-distributed stochastic neighbor embedding)\n", + "\n", + "tsne = TSNE(n_components=2, random_state=42)\n", + "reduced_vectors = tsne.fit_transform(vectors)\n", + "\n", + "# Create the 2D scatter plot\n", + "fig = go.Figure(data=[go.Scatter(\n", + " x=reduced_vectors[:, 0],\n", + " y=reduced_vectors[:, 1],\n", + " mode='markers',\n", + " marker=dict(size=5, color=colors, opacity=0.8),\n", + " text=[f\"Type: {t}
    Text: {d[:100]}...\" for t, d in zip(doc_types, documents)],\n", + " hoverinfo='text'\n", + ")])\n", + "\n", + "fig.update_layout(\n", + " title='2D Chroma Vector Store Visualization',\n", + " scene=dict(xaxis_title='x',yaxis_title='y'),\n", + " width=800,\n", + " height=600,\n", + " margin=dict(r=20, b=10, l=10, t=40)\n", + ")\n", + "\n", + "fig.show()" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.plotly.v1+json": { + "config": { + "plotlyServerURL": "https://plot.ly" + }, + "data": [ + { + "hoverinfo": "text", + "marker": { + "color": [ + "orange", + "orange", + "orange", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "red", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "green", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue", + "blue" + ], + "opacity": 0.8, + "size": 5 + }, + "mode": "markers", + "text": [ + "Type: company
    Text: # About Insurellm\n\nInsurellm was founded by Avery Lancaster in 2015 as an insurance tech startup des...", + "Type: company
    Text: # Careers at Insurellm\n\nInsurellm is hiring! We are looking for talented software engineers, data sc...", + "Type: company
    Text: # Overview of Insurellm\n\nInsurellm is an innovative insurance tech firm with 200 employees across th...", + "Type: contracts
    Text: # Contract with Apex Reinsurance for Rellm: AI-Powered Enterprise Reinsurance Solution\n\n## Terms\n\n1....", + "Type: contracts
    Text: ## Renewal\n\n1. **Automatic Renewal**: This Agreement will automatically renew for successive one-yea...", + "Type: contracts
    Text: 2. **Seamless Integrations**: The architecture of Rellm allows for easy integration with existing sy...", + "Type: contracts
    Text: 1. **Technical Support**: Provider shall offer dedicated technical support to the Client via phone, ...", + "Type: contracts
    Text: **Insurellm, Inc.** \n_____________________________ \nAuthorized Signature \nDate: ________________...", + "Type: contracts
    Text: # Contract with Belvedere Insurance for Markellm\n\n## Terms\nThis Contract (\"Agreement\") is made and e...", + "Type: contracts
    Text: ## Renewal\n1. **Renewal Terms**: This Agreement may be renewed for additional one-year terms upon mu...", + "Type: contracts
    Text: ## Features\n1. **AI-Powered Matching**: Belvedere Insurance will benefit from Markellm's AI-powered ...", + "Type: contracts
    Text: ## Support\n1. **Technical Support**: Technical support will be available from 9 AM to 7 PM EST, Mond...", + "Type: contracts
    Text: **Belvedere Insurance** \nSignature: ______________________ \nName: [Authorized Signatory] \nTitle: ...", + "Type: contracts
    Text: # Contract with BrightWay Solutions for Markellm\n\n**Contract Date:** October 5, 2023 \n**Contract ID...", + "Type: contracts
    Text: 3. **Service Level Agreement (SLA):** \n Insurellm commits to a 99.9% uptime for the platform with...", + "Type: contracts
    Text: 2. **Real-Time Quote Availability:** \n Consumers sourced via BrightWay Solutions will receive rea...", + "Type: contracts
    Text: 3. **Training and Onboarding:** \n Insurellm agrees to provide one free training session on how to...", + "Type: contracts
    Text: # Contract with EverGuard Insurance for Rellm: AI-Powered Enterprise Reinsurance Solution\n\n**Contrac...", + "Type: contracts
    Text: 4. **Usage Rights**: EverGuard Insurance is granted a non-exclusive, non-transferable license to acc...", + "Type: contracts
    Text: 1. **Core Functionality**: Rellm provides EverGuard Insurance with advanced AI-driven analytics, sea...", + "Type: contracts
    Text: 1. **Customer Support**: Insurellm will provide EverGuard Insurance with 24/7 customer support, incl...", + "Type: contracts
    Text: ---\n\n**Signatures** \n**For Insurellm**: __________________________ \n**Name**: John Smith \n**Title...", + "Type: contracts
    Text: # Contract with GreenField Holdings for Markellm\n\n**Effective Date:** November 15, 2023 \n**Contract...", + "Type: contracts
    Text: ## Renewal\n1. **Automatic Renewal**: This contract will automatically renew for sequential one-year ...", + "Type: contracts
    Text: ## Features\n1. **AI-Powered Matching**: Access to advanced algorithms that connect GreenField Holdin...", + "Type: contracts
    Text: ## Support\n1. **Customer Support Access**: The Client will have access to dedicated support through ...", + "Type: contracts
    Text: **Signatures:** \n_________________________ _________________________ \n**...", + "Type: contracts
    Text: # Contract with Greenstone Insurance for Homellm\n\n---\n\n## Terms\n\n1. **Parties**: This Contract (\"Agr...", + "Type: contracts
    Text: 4. **Payment Terms**: \n - The Customer shall pay an amount of $10,000 per month for the Standard T...", + "Type: contracts
    Text: ---\n\n## Features\n\n- **AI-Powered Risk Assessment**: Customer will have access to enhanced risk evalu...", + "Type: contracts
    Text: - **Customer Portal**: A dedicated portal will be provided, allowing the Customer's clients to manag...", + "Type: contracts
    Text: ______________________________ \n[Name], [Title] \nDate: ______________________\n\n**For Greenstone In...", + "Type: contracts
    Text: # Contract with GreenValley Insurance for Homellm\n\n**Contract Date:** October 6, 2023 \n**Contract N...", + "Type: contracts
    Text: 4. **Confidentiality:** Both parties agree to maintain the confidentiality of proprietary informatio...", + "Type: contracts
    Text: 1. **AI-Powered Risk Assessment:** Access to advanced AI algorithms for real-time risk evaluations.\n...", + "Type: contracts
    Text: 3. **Regular Updates:** Insurellm will offer ongoing updates and enhancements to the Homellm platfor...", + "Type: contracts
    Text: # Contract with Pinnacle Insurance Co. for Homellm\n\n## Terms\nThis contract (\"Contract\") is entered i...", + "Type: contracts
    Text: ## Renewal\n1. **Renewal Terms**: At the end of the initial term, this Contract shall automatically r...", + "Type: contracts
    Text: ## Features\n1. **AI-Powered Risk Assessment**: Utilized for tailored underwriting decisions specific...", + "Type: contracts
    Text: ## Support\n1. **Technical Support**: Insurellm shall provide 24/7 technical support via an email and...", + "Type: contracts
    Text: # Contract with Roadway Insurance Inc. for Carllm\n\n---\n\n## Terms\n\n1. **Agreement Effective Date**: T...", + "Type: contracts
    Text: ---\n\n## Renewal\n\n1. **Automatic Renewal**: This agreement will automatically renew for an additional...", + "Type: contracts
    Text: ---\n\n## Features\n\n1. **Access to Core Features**: Roadway Insurance Inc. will have access to all Pro...", + "Type: contracts
    Text: ---\n\n## Support\n\n1. **Technical Support**: Roadway Insurance Inc. will receive priority technical su...", + "Type: contracts
    Text: # Contract with Stellar Insurance Co. for Rellm\n\n## Terms\nThis contract is made between **Insurellm*...", + "Type: contracts
    Text: ### Termination\nEither party may terminate this agreement with a **30-day written notice**. In the e...", + "Type: contracts
    Text: ## Features\nStellar Insurance Co. will receive access to the following features of the Rellm product...", + "Type: contracts
    Text: ## Support\nInsurellm provides Stellar Insurance Co. with the following support services:\n\n- **24/7 T...", + "Type: contracts
    Text: # Contract with TechDrive Insurance for Carllm\n\n**Contract Date:** October 1, 2024 \n**Contract Dura...", + "Type: contracts
    Text: ## Renewal\n\n1. **Automatic Renewal**: This contract shall automatically renew for additional one-yea...", + "Type: contracts
    Text: ## Support\n\n1. **Customer Support**: Insurellm will provide 24/7 customer support to TechDrive Insur...", + "Type: contracts
    Text: **TechDrive Insurance Representative:** \nName: Sarah Johnson \nTitle: Operations Director \nDate: _...", + "Type: contracts
    Text: # Contract with Velocity Auto Solutions for Carllm\n\n**Contract Date:** October 1, 2023 \n**Contract ...", + "Type: contracts
    Text: ## Renewal\n\n1. **Automatic Renewal**: This contract will automatically renew for successive 12-month...", + "Type: contracts
    Text: ## Support\n\n1. **Customer Support**: Velocity Auto Solutions will have access to Insurellm’s custome...", + "Type: employees
    Text: # HR Record\n\n# Alex Chen\n\n## Summary\n- **Date of Birth:** March 15, 1990 \n- **Job Title:** Backend ...", + "Type: employees
    Text: ## Annual Performance History\n- **2020:** \n - Completed onboarding successfully. \n - Met expecta...", + "Type: employees
    Text: ## Compensation History\n- **2020:** Base Salary: $80,000 \n- **2021:** Base Salary Increase to $90,0...", + "Type: employees
    Text: Alex Chen continues to be a vital asset at Insurellm, contributing significantly to innovative backe...", + "Type: employees
    Text: # HR Record\n\n# Alex Harper\n\n## Summary\n- **Date of Birth**: March 15, 1993 \n- **Job Title**: Sales ...", + "Type: employees
    Text: ## Annual Performance History \n- **2021**: \n - **Performance Rating**: 4.5/5 \n - **Key Achievem...", + "Type: employees
    Text: - **2022**: \n - **Base Salary**: $65,000 (Promotion to Senior SDR) \n - **Bonus**: $13,000 (20% o...", + "Type: employees
    Text: # HR Record\n\n# Alex Thomson\n\n## Summary\n- **Date of Birth:** March 15, 1995 \n- **Job Title:** Sales...", + "Type: employees
    Text: ## Annual Performance History \n- **2022** - Rated as \"Exceeds Expectations.\" Alex Thomson achieved ...", + "Type: employees
    Text: ## Other HR Notes\n- Alex Thomson is an active member of the Diversity and Inclusion committee at Ins...", + "Type: employees
    Text: # Avery Lancaster\n\n## Summary\n- **Date of Birth**: March 15, 1985 \n- **Job Title**: Co-Founder & Ch...", + "Type: employees
    Text: - **2010 - 2013**: Business Analyst at Edge Analytics \n Prior to joining Innovate, Avery worked as...", + "Type: employees
    Text: - **2018**: **Exceeds Expectations** \n Under Avery’s pivoted vision, Insurellm launched two new su...", + "Type: employees
    Text: - **2022**: **Satisfactory** \n Avery focused on rebuilding team dynamics and addressing employee c...", + "Type: employees
    Text: ## Compensation History\n- **2015**: $150,000 base salary + Significant equity stake \n- **2016**: $1...", + "Type: employees
    Text: ## Other HR Notes\n- **Professional Development**: Avery has actively participated in leadership trai...", + "Type: employees
    Text: # HR Record\n\n# Emily Carter\n\n## Summary\n- **Date of Birth:** August 12, 1990 \n- **Job Title:** Acco...", + "Type: employees
    Text: - **2017-2019:** Marketing Intern \n - Assisted with market research and campaign development for s...", + "Type: employees
    Text: ## Compensation History\n| Year | Base Salary | Bonus | Total Compensation |\n|------|--------...", + "Type: employees
    Text: Emily Carter exemplifies the kind of talent that drives Insurellm's success and is an invaluable ass...", + "Type: employees
    Text: # HR Record\n\n# Emily Tran\n\n## Summary\n- **Date of Birth:** March 18, 1991 \n- **Job Title:** Digital...", + "Type: employees
    Text: - **January 2017 - May 2018**: Marketing Intern \n - Supported the Marketing team by collaborating ...", + "Type: employees
    Text: - **2021**: \n - Performance Rating: Meets Expectations \n - Key Achievements: Contributed to the ...", + "Type: employees
    Text: - **Professional Development Goals**: \n - Emily Tran aims to become a Marketing Manager within the...", + "Type: employees
    Text: # HR Record\n\n# Jordan Blake\n\n## Summary\n- **Date of Birth:** March 15, 1993 \n- **Job Title:** Sales...", + "Type: employees
    Text: ## Annual Performance History\n- **2021:** First year at Insurellm; achieved 90% of monthly targets. ...", + "Type: employees
    Text: ## Other HR Notes\n- Jordan has shown an interest in continuing education, actively participating in ...", + "Type: employees
    Text: # HR Record\n\n# Jordan K. Bishop\n\n## Summary\n- **Date of Birth:** March 15, 1990\n- **Job Title:** Fro...", + "Type: employees
    Text: ## Annual Performance History\n- **2019:** Exceeds Expectations - Continuously delivered high-quality...", + "Type: employees
    Text: ## Compensation History\n- **June 2018:** Starting Salary - $85,000\n- **June 2019:** Salary Increase ...", + "Type: employees
    Text: ## Other HR Notes\n- Jordan K. Bishop has been an integral part of club initiatives, including the In...", + "Type: employees
    Text: # HR Record\n\n# Maxine Thompson\n\n## Summary\n- **Date of Birth:** January 15, 1991 \n- **Job Title:** ...", + "Type: employees
    Text: ## Insurellm Career Progression\n- **January 2017 - October 2018**: **Junior Data Engineer** \n * Ma...", + "Type: employees
    Text: ## Annual Performance History\n- **2017**: *Meets Expectations* \n Maxine showed potential in her ro...", + "Type: employees
    Text: - **2021**: *Exceeds Expectations* \n Maxine spearheaded the transition to a new data warehousing s...", + "Type: employees
    Text: ## Compensation History\n- **2017**: $70,000 (Junior Data Engineer) \n- **2018**: $75,000 (Junior Dat...", + "Type: employees
    Text: # HR Record\n\n# Oliver Spencer\n\n## Summary\n- **Date of Birth**: May 14, 1990 \n- **Job Title**: Backe...", + "Type: employees
    Text: ## Annual Performance History\n- **2018**: **3/5** - Adaptable team player but still learning to take...", + "Type: employees
    Text: ## Compensation History\n- **March 2018**: Initial salary of $80,000.\n- **July 2019**: Salary increas...", + "Type: employees
    Text: # Samantha Greene\n\n## Summary\n- **Date of Birth:** October 14, 1990\n- **Job Title:** HR Generalist\n-...", + "Type: employees
    Text: ## Annual Performance History\n- **2020:** Exceeds Expectations \n Samantha Greene demonstrated exce...", + "Type: employees
    Text: ## Compensation History\n- **2020:** Base Salary - $55,000 \n The entry-level salary matched industr...", + "Type: employees
    Text: - **2023:** Base Salary - $70,000 \n Recognized for substantial improvement in employee relations m...", + "Type: employees
    Text: # HR Record\n\n# Samuel Trenton\n\n## Summary\n- **Date of Birth:** April 12, 1989 \n- **Job Title:** Sen...", + "Type: employees
    Text: ## Annual Performance History\n- **2023:** Rating: 4.5/5 \n *Samuel exceeded expectations, successfu...", + "Type: employees
    Text: ## Compensation History\n- **2023:** Base Salary: $115,000 + Bonus: $15,000 \n *Annual bonus based o...", + "Type: employees
    Text: - **Engagement in Company Culture:** Regularly participates in team-building events and contributes ...", + "Type: products
    Text: # Product Summary\n\n# Carllm\n\n## Summary\n\nCarllm is an innovative auto insurance product developed by...", + "Type: products
    Text: - **Instant Quoting**: With Carllm, insurance companies can offer near-instant quotes to customers, ...", + "Type: products
    Text: - **Mobile Integration**: Carllm is designed to work seamlessly with mobile applications, providing ...", + "Type: products
    Text: - **Professional Tier**: $2,500/month\n - For medium-sized companies.\n - All Basic Tier features pl...", + "Type: products
    Text: ### Q2 2025: Customer Experience Improvements\n- Launch of a new **mobile app** for end-users.\n- Intr...", + "Type: products
    Text: # Product Summary\n\n# Homellm\n\n## Summary\nHomellm is an innovative home insurance product developed b...", + "Type: products
    Text: ### 2. Dynamic Pricing Model\nWith Homellm's innovative dynamic pricing model, insurance providers ca...", + "Type: products
    Text: ### 5. Multi-Channel Integration\nHomellm seamlessly integrates into existing insurance platforms, pr...", + "Type: products
    Text: - **Basic Tier:** Starting at $5,000/month for small insurers with basic integration features.\n- **S...", + "Type: products
    Text: All tiers include a comprehensive training program and ongoing updates to ensure optimal performance...", + "Type: products
    Text: With Homellm, Insurellm is committed to transforming the landscape of home insurance, ensuring both ...", + "Type: products
    Text: # Product Summary\n\n# Markellm\n\n## Summary\n\nMarkellm is an innovative two-sided marketplace designed ...", + "Type: products
    Text: - **User-Friendly Interface**: Designed with user experience in mind, Markellm features an intuitive...", + "Type: products
    Text: - **Customer Support**: Our dedicated support team is always available to assist both consumers and ...", + "Type: products
    Text: ### For Insurance Companies:\n- **Basic Listing Fee**: $199/month for a featured listing on the platf...", + "Type: products
    Text: ### Q3 2025\n- Initiate a comprehensive marketing campaign targeting both consumers and insurers to i...", + "Type: products
    Text: # Product Summary\n\n# Rellm: AI-Powered Enterprise Reinsurance Solution\n\n## Summary\n\nRellm is an inno...", + "Type: products
    Text: ### Seamless Integrations\nRellm's architecture is designed for effortless integration with existing ...", + "Type: products
    Text: ### Regulatory Compliance Tools\nRellm includes built-in compliance tracking features to help organiz...", + "Type: products
    Text: Join the growing number of organizations leveraging Rellm to enhance their reinsurance processes whi...", + "Type: products
Text: Experience the future of reinsurance with Rellm, where innovation meets reliability. Let Insurellm h..."
+ ],
+ "type": "scatter3d",
+ "x": [ ...(t-SNE x-coordinates for the 123 chunks omitted)... ],
+ "y": [ ...(y-coordinates omitted)... ],
+ "z": [ ...(z-coordinates omitted)... ]
+ }
+ ],
+ "layout": {
+ "height": 700,
+ "margin": { "b": 10, "l": 10, "r": 20, "t": 40 },
+ "scene": { "xaxis": { "title": { "text": "x" } }, "yaxis": { "title": { "text": "y" } }, "zaxis": { "title": { "text": "z" } } },
+ "template": "...(default Plotly theme boilerplate omitted)...",
+ "title": { "text": "3D Chroma Vector Store Visualization" },
+ "width": 900
+ }
+ }
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# Let's try 3D!\n",
+ "\n",
+ "tsne = TSNE(n_components=3, random_state=42)\n",
+ "reduced_vectors = tsne.fit_transform(vectors)\n",
+ "\n",
+ "# Create the 3D scatter plot\n",
+ "fig = go.Figure(data=[go.Scatter3d(\n",
+ " x=reduced_vectors[:, 0],\n",
+ " y=reduced_vectors[:, 1],\n",
+ " z=reduced_vectors[:, 2],\n",
+ " mode='markers',\n",
+ " marker=dict(size=5, color=colors, opacity=0.8),\n",
+ " text=[f\"Type: {t}
    Text: {d[:100]}...\" for t, d in zip(doc_types, documents)],\n", + " hoverinfo='text'\n", + ")])\n", + "\n", + "fig.update_layout(\n", + " title='3D Chroma Vector Store Visualization',\n", + " scene=dict(xaxis_title='x', yaxis_title='y', zaxis_title='z'),\n", + " width=900,\n", + " height=700,\n", + " margin=dict(r=20, b=10, l=10, t=40)\n", + ")\n", + "\n", + "fig.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "llms", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/week5/community-contributions/day4-gemini.ipynb b/week5/community-contributions/day4-gemini.ipynb new file mode 100644 index 0000000..431ce5d --- /dev/null +++ b/week5/community-contributions/day4-gemini.ipynb @@ -0,0 +1,433 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import glob\n", + "from dotenv import load_dotenv\n", + "import gradio as gr\n", + "# import gemini\n", + "import google.generativeai" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [], + "source": [ + "# imports for langchain\n", + "\n", + "from langchain.document_loaders import DirectoryLoader, TextLoader\n", + "from langchain.text_splitter import CharacterTextSplitter\n", + "from langchain.schema import Document\n", + "# from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n", + "from langchain_chroma import Chroma\n", + "from langchain_google_genai import GoogleGenerativeAIEmbeddings, ChatGoogleGenerativeAI\n", + "import 
numpy as np\n", + "from sklearn.manifold import TSNE\n", + "import plotly.graph_objects as go\n", + "from langchain.memory import ConversationBufferMemory\n", + "from langchain.chains import ConversationalRetrievalChain" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "# price is a factor for our company, so we're going to use a low cost model\n", + "\n", + "MODEL = \"gemini-1.5-flash\"\n", + "db_name = \"vector_db\"" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [], + "source": [ + "# Load environment variables in a file called .env\n", + "\n", + "load_dotenv()\n", + "os.environ['GOOGLE_API_KEY'] = os.getenv('GOOGLE_API_KEY', 'your-key-if-not-using-env')" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [], + "source": [ + "google.generativeai.configure()" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [], + "source": [ + "# Read in documents using LangChain's loaders\n", + "# Take everything in all the sub-folders of our knowledgebase\n", + "\n", + "folders = glob.glob(\"knowledge-base/*\")\n", + "\n", + "# With thanks to CG and Jon R, students on the course, for this fix needed for some users \n", + "text_loader_kwargs = {'encoding': 'utf-8'}\n", + "# If that doesn't work, some Windows users might need to uncomment the next line instead\n", + "# text_loader_kwargs={'autodetect_encoding': True}\n", + "\n", + "documents = []\n", + "for folder in folders:\n", + " doc_type = os.path.basename(folder)\n", + " loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n", + " folder_docs = loader.load()\n", + " for doc in folder_docs:\n", + " doc.metadata[\"doc_type\"] = doc_type\n", + " documents.append(doc)" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "name": "stderr", + 
"output_type": "stream", + "text": [ + "Created a chunk of size 1088, which is longer than the specified 1000\n" + ] + } + ], + "source": [ + "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", + "chunks = text_splitter.split_documents(documents)" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "123" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "len(chunks)" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Document types found: company, contracts, employees, products\n" + ] + } + ], + "source": [ + "doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n", + "print(f\"Document types found: {', '.join(doc_types)}\")" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Vectorstore created with 123 documents\n" + ] + } + ], + "source": [ + "embeddings = GoogleGenerativeAIEmbeddings(model=\"models/embedding-001\")\n", + "\n", + "# Check if a Chroma Datastore already exists - if so, delete the collection to start from scratch\n", + "\n", + "if os.path.exists(db_name):\n", + " Chroma(persist_directory=db_name, embedding_function=embeddings).delete_collection()\n", + "\n", + "# Create our Chroma vectorstore!\n", + "\n", + "vectorstore = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=db_name)\n", + "print(f\"Vectorstore created with {vectorstore._collection.count()} documents\")" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "The vectors have 768 dimensions\n" + ] + } + ], + "source": [ + "# Get one vector and find how many dimensions 
it has\n",
+ "\n",
+ "collection = vectorstore._collection\n",
+ "sample_embedding = collection.get(limit=1, include=[\"embeddings\"])[\"embeddings\"][0]\n",
+ "dimensions = len(sample_embedding)\n",
+ "print(f\"The vectors have {dimensions:,} dimensions\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Prework\n",
+ "\n",
+ "result = collection.get(include=['embeddings', 'documents', 'metadatas'])\n",
+ "vectors = np.array(result['embeddings'])\n",
+ "documents = result['documents']\n",
+ "doc_types = [metadata['doc_type'] for metadata in result['metadatas']]\n",
+ "colors = [['blue', 'green', 'red', 'orange'][['products', 'employees', 'contracts', 'company'].index(t)] for t in doc_types]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# We humans find it easier to visualize things in 2D!\n",
+ "# Reduce the dimensionality of the vectors to 2D using t-SNE\n",
+ "# (t-distributed stochastic neighbor embedding)\n",
+ "\n",
+ "tsne = TSNE(n_components=2, random_state=42)\n",
+ "reduced_vectors = tsne.fit_transform(vectors)\n",
+ "\n",
+ "# Create the 2D scatter plot\n",
+ "fig = go.Figure(data=[go.Scatter(\n",
+ " x=reduced_vectors[:, 0],\n",
+ " y=reduced_vectors[:, 1],\n",
+ " mode='markers',\n",
+ " marker=dict(size=5, color=colors, opacity=0.8),\n",
+ " text=[f\"Type: {t}
Text: {d[:100]}...\" for t, d in zip(doc_types, documents)],\n",
+ " hoverinfo='text'\n",
+ ")])\n",
+ "\n",
+ "fig.update_layout(\n",
+ " title='2D Chroma Vector Store Visualization',\n",
+ " xaxis_title='x',\n",
+ " yaxis_title='y',\n",
+ " width=800,\n",
+ " height=600,\n",
+ " margin=dict(r=20, b=10, l=10, t=40)\n",
+ ")\n",
+ "\n",
+ "fig.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Let's try 3D!\n",
+ "\n",
+ "tsne = TSNE(n_components=3, random_state=42)\n",
+ "reduced_vectors = tsne.fit_transform(vectors)\n",
+ "\n",
+ "# Create the 3D scatter plot\n",
+ "fig = go.Figure(data=[go.Scatter3d(\n",
+ " x=reduced_vectors[:, 0],\n",
+ " y=reduced_vectors[:, 1],\n",
+ " z=reduced_vectors[:, 2],\n",
+ " mode='markers',\n",
+ " marker=dict(size=5, color=colors, opacity=0.8),\n",
+ " text=[f\"Type: {t}
Text: {d[:100]}...\" for t, d in zip(doc_types, documents)],\n",
+ " hoverinfo='text'\n",
+ ")])\n",
+ "\n",
+ "fig.update_layout(\n",
+ " title='3D Chroma Vector Store Visualization',\n",
+ " scene=dict(xaxis_title='x', yaxis_title='y', zaxis_title='z'),\n",
+ " width=900,\n",
+ " height=700,\n",
+ " margin=dict(r=20, b=10, l=10, t=40)\n",
+ ")\n",
+ "\n",
+ "fig.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "RAG pipeline using LangChain"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "C:\\Users\\GANESH\\AppData\\Local\\Temp\\ipykernel_524\\4130109764.py:5: LangChainDeprecationWarning:\n",
+ "\n",
+ "Please see the migration guide at: https://python.langchain.com/docs/versions/migrating_memory/\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# create a new Chat with ChatGoogleGenerativeAI\n",
+ "llm = ChatGoogleGenerativeAI(model=MODEL, temperature=0.7)\n",
+ "\n",
+ "# set up the conversation memory for the chat\n",
+ "memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n",
+ "\n",
+ "# the retriever is an abstraction over the VectorStore that will be used during RAG\n",
+ "retriever = vectorstore.as_retriever()\n",
+ "\n",
+ "# putting it together: set up the conversation chain with the Gemini LLM, the vector store and memory\n",
+ "conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Insurellm is an insurance technology company with 200 employees and over 300 clients worldwide. They offer four software products, including Homellm, a portal for home insurance companies that integrates with existing platforms and offers a customer portal for policy management.
Their pricing model is based on provider size and customization needs.\n"
+ ]
+ }
+ ],
+ "source": [
+ "query = \"Can you describe Insurellm in a few sentences\"\n",
+ "result = conversation_chain.invoke({\"question\": query})\n",
+ "print(result[\"answer\"])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# set up a new conversation memory for the chat\n",
+ "memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n",
+ "\n",
+ "# putting it together: set up the conversation chain with the Gemini LLM, the vector store and memory\n",
+ "conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Gradio User Interface"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def chat(message, history):\n",
+ " result = conversation_chain.invoke({\"question\": message})\n",
+ " return result[\"answer\"]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "* Running on local URL: http://127.0.0.1:7860\n",
+ "\n",
+ "To create a public link, set `share=True` in `launch()`.\n"
+ ]
+ },
+ {
+ "data": {
+ "text/html": [ +
    " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "view = gr.ChatInterface(chat, type=\"messages\").launch(inbrowser=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "llms", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 51e5d152fa5198631e3d9d2ef4164ef8acbba583 Mon Sep 17 00:00:00 2001 From: Dimitris Sinanis Date: Wed, 26 Feb 2025 14:23:24 +0200 Subject: [PATCH 14/35] Add book flight and sightseeing functions in day5. Audio worked with variation 1. --- .../day5_book_flight_sightseeing_tools.ipynb | 1108 +++++++++++++++++ 1 file changed, 1108 insertions(+) create mode 100644 week2/community-contributions/day5_book_flight_sightseeing_tools.ipynb diff --git a/week2/community-contributions/day5_book_flight_sightseeing_tools.ipynb b/week2/community-contributions/day5_book_flight_sightseeing_tools.ipynb new file mode 100644 index 0000000..6ceeaaf --- /dev/null +++ b/week2/community-contributions/day5_book_flight_sightseeing_tools.ipynb @@ -0,0 +1,1108 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "ddfa9ae6-69fe-444a-b994-8c4c5970a7ec", + "metadata": {}, + "source": [ + "# Project - Airline AI Assistant\n", + "\n", + "We'll now bring together what we've learned to make an AI Customer Support assistant for an Airline" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import json\n", + "from dotenv import 
load_dotenv\n", + "from openai import OpenAI\n", + "import gradio as gr" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "OpenAI API Key exists and begins sk-proj-\n" + ] + } + ], + "source": [ + "# Initialization\n", + "\n", + "load_dotenv(override=True)\n", + "\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")\n", + " \n", + "MODEL = \"gpt-4o-mini\"\n", + "openai = OpenAI()" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "0a521d84-d07c-49ab-a0df-d6451499ed97", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n", + "system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n", + "system_message += \"Always be accurate. 
If you don't know the answer, say so.\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "61a2a15d-b559-4844-b377-6bd5cb4949f6", + "metadata": {}, + "outputs": [], + "source": [ + "# This function looks rather simpler than the one from my video, because we're taking advantage of the latest Gradio updates\n", + "\n", + "def chat(message, history):\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages)\n", + " return response.choices[0].message.content\n", + "\n", + "gr.ChatInterface(fn=chat, type=\"messages\").launch()" + ] + }, + { + "cell_type": "markdown", + "id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4", + "metadata": {}, + "source": [ + "## Tools\n", + "\n", + "Tools are an incredibly powerful feature provided by the frontier LLMs.\n", + "\n", + "With tools, you can write a function, and have the LLM call that function as part of its response.\n", + "\n", + "Sounds almost spooky.. we're giving it the power to run code on our machine?\n", + "\n", + "Well, kinda." 
+ ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2", + "metadata": {}, + "outputs": [], + "source": [ + "# Let's start by making a useful function\n", + "\n", + "ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\", \"athens\": \"$599\", \"kastoria\": \"$999\"}\n", + "\n", + "def get_ticket_price(destination_city):\n", + " print(f\"Tool get_ticket_price called for {destination_city}\")\n", + " city = destination_city.lower()\n", + " return ticket_prices.get(city, \"Unknown\")" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tool get_ticket_price called for London\n" + ] + }, + { + "data": { + "text/plain": [ + "'$799'" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "get_ticket_price(\"London\")" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "2054c00e", + "metadata": {}, + "outputs": [], + "source": [ + "import random\n", + "\n", + "# Create a function for the booking system\n", + "def get_booking(destination_city):\n", + " print(f\"Tool get_booking called for {destination_city}\")\n", + " city = destination_city.lower()\n", + " \n", + " # Example data for different cities\n", + " flight_info = {\n", + " \"london\": {\"flight_number\": \"BA123\", \"departure_time\": \"10:00 AM\", \"gate\": \"A12\"},\n", + " \"paris\": {\"flight_number\": \"AF456\", \"departure_time\": \"12:00 PM\", \"gate\": \"B34\"},\n", + " \"tokyo\": {\"flight_number\": \"JL789\", \"departure_time\": \"02:00 PM\", \"gate\": \"C56\"},\n", + " \"berlin\": {\"flight_number\": \"LH101\", \"departure_time\": \"04:00 PM\", \"gate\": \"D78\"},\n", + " \"athens\": {\"flight_number\": \"OA202\", \"departure_time\": \"06:00 PM\", \"gate\": \"E90\"},\n", + " 
\"kastoria\": {\"flight_number\": \"KAS303\", \"departure_time\": \"08:00 PM\", \"gate\": \"F12\"}\n", + " }\n", + " \n", + " if city in flight_info:\n", + " info = flight_info[city]\n", + " status = random.choice([\"available\", \"not available\"])\n", + " return f\"Flight {info['flight_number']} to {destination_city.lower()} is {status}. Departure time: {info['departure_time']}, Gate: {info['gate']}.\"\n", + " else:\n", + " return \"Unknown destination city.\"" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "ef334206", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tool get_booking called for London\n" + ] + }, + { + "data": { + "text/plain": [ + "'Flight BA123 to london is not available. Departure time: 10:00 AM, Gate: A12.'" + ] + }, + "execution_count": 7, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "get_booking(\"London\")" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "b011afc2", + "metadata": {}, + "outputs": [], + "source": [ + "sightseeing_info = {\"london\": \"London Eye, Big Ben, Tower of London\", \n", + " \"paris\": \"Eiffel Tower, Louvre Museum, Notre-Dame Cathedral\", \n", + " \"tokyo\": \"Tokyo Tower, Senso-ji Temple, Meiji Shrine\", \n", + " \"berlin\": \"Brandenburg Gate, Berlin Wall, Museum Island\", \n", + " \"athens\": \"Acropolis, Parthenon, Temple of Olympian Zeus\", \n", + " \"kastoria\": \"Cave of Dragon, Kastoria Lake, Byzantine Museum\"}\n", + "\n", + "\n", + "def get_sightseeing(destination_city):\n", + " print(f\"Tool get_ticket_price called for {destination_city}\")\n", + " city = destination_city.lower()\n", + " return sightseeing_info.get(city, \"Unknown\")\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "3008e353", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tool get_ticket_price called for Kastoria\n" + ] + }, + { + 
"data": { + "text/plain": [ + "'Cave of Dragon, Kastoria Lake, Byzantine Museum'" + ] + }, + "execution_count": 9, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "get_sightseeing(\"Kastoria\")" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "4afceded-7178-4c05-8fa6-9f2085e6a344", + "metadata": {}, + "outputs": [], + "source": [ + "# There's a particular dictionary structure that's required to describe our function:\n", + "\n", + "price_function = {\n", + " \"name\": \"get_ticket_price\",\n", + " \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n", + " \"parameters\": {\n", + " \"type\": \"object\",\n", + " \"properties\": {\n", + " \"destination_city\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The city that the customer wants to travel to\",\n", + " },\n", + " },\n", + " \"required\": [\"destination_city\"],\n", + " \"additionalProperties\": False\n", + " }\n", + "}\n", + "\n", + "# Book flight function description and properties\n", + "\n", + "book_flight_function = {\n", + " \"name\": \"book_flight\",\n", + " \"description\": \"Book a flight to the destination city. 
Call this whenever a customer wants to book a flight.\",\n", + " \"parameters\": {\n", + " \"type\": \"object\",\n", + " \"properties\": {\n", + " \"destination_city\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The city that the customer wants to travel to\",\n", + " },\n", + " \"departure_date\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The date of departure (YYYY-MM-DD)\",\n", + " },\n", + " \"return_date\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The date of return (YYYY-MM-DD)\",\n", + " },\n", + " \"passenger_name\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The name of the passenger\",\n", + " },\n", + " },\n", + " \"required\": [\"destination_city\", \"departure_date\", \"return_date\", \"passenger_name\"],\n", + " \"additionalProperties\": False\n", + " }\n", + "}\n", + "\n", + "sightseeing_function = {\n", + " \"name\": \"sightseeing\",\n", + " \"description\": \"Get the top sightseeing recommendations for the destination city. 
Call this whenever a customer asks 'What are the top things to do in this city'\",\n", + " \"parameters\": {\n", + " \"type\": \"object\",\n", + " \"properties\": {\n", + " \"destination_city\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The city that the customer wants to travel to\",\n", + " },\n", + " },\n", + " \"required\": [\"destination_city\"],\n", + " \"additionalProperties\": False\n", + " }\n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c", + "metadata": {}, + "outputs": [], + "source": [ + "# And this is included in a list of tools:\n", + "\n", + "tools = [{\"type\": \"function\", \"function\": price_function}, \n", + " {\"type\": \"function\", \"function\": book_flight_function},\n", + " {\"type\": \"function\", \"function\": sightseeing_function}]" + ] + }, + { + "cell_type": "markdown", + "id": "c3d3554f-b4e3-4ce7-af6f-68faa6dd2340", + "metadata": {}, + "source": [ + "## Getting OpenAI to use our Tool\n", + "\n", + "There's some fiddly stuff to allow OpenAI \"to call our tool\"\n", + "\n", + "What we actually do is give the LLM the opportunity to inform us that it wants us to run the tool.\n", + "\n", + "Here's how the new chat function looks:" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "ce9b0744-9c78-408d-b9df-9f6fd9ed78cf", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(message, history):\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n", + "\n", + " if response.choices[0].finish_reason==\"tool_calls\":\n", + " message = response.choices[0].message\n", + " response, city = handle_tool_call(message)\n", + " messages.append(message)\n", + " messages.append(response)\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages)\n", + " 
\n", + " return response.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "b0992986-ea09-4912-a076-8e5603ee631f", + "metadata": {}, + "outputs": [], + "source": [ + "# We have to write that function handle_tool_call:\n", + "\n", + "def handle_tool_call(message):\n", + " tool_call = message.tool_calls[0]\n", + " print(f\"Tool call: {tool_call}\")\n", + " arguments = json.loads(tool_call.function.arguments)\n", + " city = arguments.get('destination_city')\n", + " price = get_ticket_price(city)\n", + " book = get_booking(city)\n", + " sightseeing = get_sightseeing(city)\n", + " print (book)\n", + " response = {\n", + " \"role\": \"tool\",\n", + " \"content\": json.dumps({\"destination_city\": city,\"price\": price, \"booking\": book, \"sightseeing\": sightseeing}),\n", + " \"tool_call_id\": tool_call.id\n", + " }\n", + " return response, city" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14", + "metadata": {}, + "outputs": [], + "source": [ + "gr.ChatInterface(fn=chat, type=\"messages\").launch()" + ] + }, + { + "cell_type": "markdown", + "id": "473e5b39-da8f-4db1-83ae-dbaca2e9531e", + "metadata": {}, + "source": [ + "# Let's go multi-modal!!\n", + "\n", + "We can use DALL-E-3, the image generation model behind GPT-4o, to make us some images\n", + "\n", + "Let's put this in a function called artist.\n", + "\n", + "### Price alert: each time I generate an image it costs about 4 cents - don't go crazy with images!" 
+ ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "2c27c4ba-8ed5-492f-add1-02ce9c81d34c", + "metadata": {}, + "outputs": [], + "source": [ + "# Some imports for handling images\n", + "\n", + "import base64\n", + "from io import BytesIO\n", + "from PIL import Image" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "id": "773a9f11-557e-43c9-ad50-56cbec3a0f8f", + "metadata": {}, + "outputs": [], + "source": [ + "def artist(city):\n", + " image_response = openai.images.generate(\n", + " model=\"dall-e-3\",\n", + " prompt=f\"An image representing a vacation in {city}, showing tourist spots and everything unique about {city}, in a vibrant pop-art style\",\n", + " size=\"1024x1024\",\n", + " n=1,\n", + " response_format=\"b64_json\",\n", + " )\n", + " image_base64 = image_response.data[0].b64_json\n", + " image_data = base64.b64decode(image_base64)\n", + " return Image.open(BytesIO(image_data))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d877c453-e7fb-482a-88aa-1a03f976b9e9", + "metadata": {}, + "outputs": [], + "source": [ + "image = artist(\"Athens\")\n", + "display(image)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "728a12c5-adc3-415d-bb05-82beb73b079b", + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "id": "f4975b87-19e9-4ade-a232-9b809ec75c9a", + "metadata": {}, + "source": [ + "## Audio (NOTE - Audio is optional for this course - feel free to skip Audio if it causes trouble!)\n", + "\n", + "And let's make a function talker that uses OpenAI's speech model to generate Audio\n", + "\n", + "### Troubleshooting Audio issues\n", + "\n", + "If you have any problems running this code below (like a FileNotFound error, or a warning of a missing package), you may need to install FFmpeg, a very popular audio utility.\n", + "\n", + "**For PC Users**\n", + "\n", + "Detailed instructions are 
[here](https://chatgpt.com/share/6724efee-6b0c-8012-ac5e-72e2e3885905) and summary instructions:\n", + "\n", + "1. Download FFmpeg from the official website: https://ffmpeg.org/download.html\n", + "\n", + "2. Extract the downloaded files to a location on your computer (e.g., `C:\\ffmpeg`)\n", + "\n", + "3. Add the FFmpeg bin folder to your system PATH:\n", + "- Right-click on 'This PC' or 'My Computer' and select 'Properties'\n", + "- Click on 'Advanced system settings'\n", + "- Click on 'Environment Variables'\n", + "- Under 'System variables', find and edit 'Path'\n", + "- Add a new entry with the path to your FFmpeg bin folder (e.g., `C:\\ffmpeg\\bin`)\n", + "- Restart your command prompt, and within Jupyter Lab do Kernel -> Restart kernel, to pick up the changes\n", + "\n", + "4. Open a new command prompt and run this to make sure it's installed OK\n", + "`ffmpeg -version`\n", + "\n", + "**For Mac Users**\n", + "\n", + "1. Install Homebrew if you don't have it already by running this in a Terminal window and following any instructions: \n", + "`/bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"`\n", + "\n", + "2. Then install FFmpeg with `brew install ffmpeg`\n", + "\n", + "3. Verify your installation with `ffmpeg -version` and if everything is good, within Jupyter Lab do Kernel -> Restart kernel to pick up the changes\n", + "\n", + "Message me or email me at ed@edwarddonner.com with any problems!" + ] + }, + { + "cell_type": "markdown", + "id": "4cc90e80-c96e-4dd4-b9d6-386fe2b7e797", + "metadata": {}, + "source": [ + "## To check you now have ffmpeg and can access it here\n", + "\n", + "Execute the next cell to see if you get a version number. 
(Putting an exclamation mark before something in Jupyter Lab tells it to run it as a terminal command rather than Python code).\n", + "\n", + "If this doesn't work, you may need to actually save and close down your Jupyter Lab, and start it again from a new Terminal window (Mac) or Anaconda prompt (PC), remembering to activate the llms environment. This ensures you pick up ffmpeg.\n", + "\n", + "And if that doesn't work, please contact me!" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "7b3be0fb-1d34-4693-ab6f-dbff190afcd7", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "'ffmpeg' is not recognized as an internal or external command,\n", + "operable program or batch file.\n", + "'ffprobe' is not recognized as an internal or external command,\n", + "operable program or batch file.\n", + "'ffplay' is not recognized as an internal or external command,\n", + "operable program or batch file.\n" + ] + } + ], + "source": [ + "!ffmpeg -version\n", + "!ffprobe -version\n", + "!ffplay -version" + ] + }, + { + "cell_type": "markdown", + "id": "d91d3f8f-e505-4e3c-a87c-9e42ed823db6", + "metadata": {}, + "source": [ + "# For Mac users - and possibly many PC users too\n", + "\n", + "This version should work fine for you. It might work for Windows users too, but you might get a Permissions error writing to a temp file. If so, see the next section!\n", + "\n", + "As always, if you have problems, please contact me! 
(You could also comment out the audio talker() in the later code if you're less interested in audio generation)" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "id": "ffbfe93b-5e86-4e68-ba71-b301cd5230db", + "metadata": {}, + "outputs": [], + "source": [ + "from pydub import AudioSegment\n", + "from pydub.playback import play\n", + "\n", + "def talker(message):\n", + " response = openai.audio.speech.create(\n", + " model=\"tts-1\",\n", + " voice=\"onyx\", # Also, try replacing onyx with alloy\n", + " input=message\n", + " )\n", + " \n", + " audio_stream = BytesIO(response.content)\n", + " audio = AudioSegment.from_file(audio_stream, format=\"mp3\")\n", + " play(audio)" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "b88d775d-d357-4292-a1ad-5dc5ed567281", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "c:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\site-packages\\pydub\\utils.py:198: RuntimeWarning: Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work\n", + " warn(\"Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work\", RuntimeWarning)\n" + ] + }, + { + "ename": "FileNotFoundError", + "evalue": "[WinError 2] The system cannot find the file specified", + "output_type": "error", + "traceback": [ + "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[1;31mFileNotFoundError\u001b[0m Traceback (most recent call last)", + "Cell \u001b[1;32mIn[17], line 1\u001b[0m\n\u001b[1;32m----> 1\u001b[0m \u001b[43mtalker\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mWell, hi there\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n", + "Cell \u001b[1;32mIn[16], line 12\u001b[0m, in \u001b[0;36mtalker\u001b[1;34m(message)\u001b[0m\n\u001b[0;32m 5\u001b[0m response \u001b[38;5;241m=\u001b[39m 
openai\u001b[38;5;241m.\u001b[39maudio\u001b[38;5;241m.\u001b[39mspeech\u001b[38;5;241m.\u001b[39mcreate(\n\u001b[0;32m 6\u001b[0m model\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mtts-1\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[0;32m 7\u001b[0m voice\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124monyx\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;66;03m# Also, try replacing onyx with alloy\u001b[39;00m\n\u001b[0;32m 8\u001b[0m \u001b[38;5;28minput\u001b[39m\u001b[38;5;241m=\u001b[39mmessage\n\u001b[0;32m 9\u001b[0m )\n\u001b[0;32m 11\u001b[0m audio_stream \u001b[38;5;241m=\u001b[39m BytesIO(response\u001b[38;5;241m.\u001b[39mcontent)\n\u001b[1;32m---> 12\u001b[0m audio \u001b[38;5;241m=\u001b[39m \u001b[43mAudioSegment\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_file\u001b[49m\u001b[43m(\u001b[49m\u001b[43maudio_stream\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mformat\u001b[39;49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmp3\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[0;32m 13\u001b[0m play(audio)\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\site-packages\\pydub\\audio_segment.py:728\u001b[0m, in \u001b[0;36mAudioSegment.from_file\u001b[1;34m(cls, file, format, codec, parameters, start_second, duration, **kwargs)\u001b[0m\n\u001b[0;32m 726\u001b[0m info \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[0;32m 727\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m--> 728\u001b[0m info \u001b[38;5;241m=\u001b[39m \u001b[43mmediainfo_json\u001b[49m\u001b[43m(\u001b[49m\u001b[43morig_file\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mread_ahead_limit\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mread_ahead_limit\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 729\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m 
info:\n\u001b[0;32m 730\u001b[0m audio_streams \u001b[38;5;241m=\u001b[39m [x \u001b[38;5;28;01mfor\u001b[39;00m x \u001b[38;5;129;01min\u001b[39;00m info[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mstreams\u001b[39m\u001b[38;5;124m'\u001b[39m]\n\u001b[0;32m 731\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m x[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcodec_type\u001b[39m\u001b[38;5;124m'\u001b[39m] \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124maudio\u001b[39m\u001b[38;5;124m'\u001b[39m]\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\site-packages\\pydub\\utils.py:274\u001b[0m, in \u001b[0;36mmediainfo_json\u001b[1;34m(filepath, read_ahead_limit)\u001b[0m\n\u001b[0;32m 271\u001b[0m file\u001b[38;5;241m.\u001b[39mclose()\n\u001b[0;32m 273\u001b[0m command \u001b[38;5;241m=\u001b[39m [prober, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m-of\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mjson\u001b[39m\u001b[38;5;124m'\u001b[39m] \u001b[38;5;241m+\u001b[39m command_args\n\u001b[1;32m--> 274\u001b[0m res \u001b[38;5;241m=\u001b[39m \u001b[43mPopen\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcommand\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstdin\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstdin_parameter\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstdout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mPIPE\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstderr\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mPIPE\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 275\u001b[0m output, stderr \u001b[38;5;241m=\u001b[39m res\u001b[38;5;241m.\u001b[39mcommunicate(\u001b[38;5;28minput\u001b[39m\u001b[38;5;241m=\u001b[39mstdin_data)\n\u001b[0;32m 276\u001b[0m output \u001b[38;5;241m=\u001b[39m output\u001b[38;5;241m.\u001b[39mdecode(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mutf-8\u001b[39m\u001b[38;5;124m\"\u001b[39m, 
\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mignore\u001b[39m\u001b[38;5;124m'\u001b[39m)\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\subprocess.py:1026\u001b[0m, in \u001b[0;36mPopen.__init__\u001b[1;34m(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, user, group, extra_groups, encoding, errors, text, umask, pipesize, process_group)\u001b[0m\n\u001b[0;32m 1022\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtext_mode:\n\u001b[0;32m 1023\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr \u001b[38;5;241m=\u001b[39m io\u001b[38;5;241m.\u001b[39mTextIOWrapper(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr,\n\u001b[0;32m 1024\u001b[0m encoding\u001b[38;5;241m=\u001b[39mencoding, errors\u001b[38;5;241m=\u001b[39merrors)\n\u001b[1;32m-> 1026\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_execute_child\u001b[49m\u001b[43m(\u001b[49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mexecutable\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mpreexec_fn\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mclose_fds\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1027\u001b[0m \u001b[43m \u001b[49m\u001b[43mpass_fds\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcwd\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43menv\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1028\u001b[0m \u001b[43m \u001b[49m\u001b[43mstartupinfo\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcreationflags\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mshell\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1029\u001b[0m \u001b[43m \u001b[49m\u001b[43mp2cread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m 
\u001b[49m\u001b[43mp2cwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1030\u001b[0m \u001b[43m \u001b[49m\u001b[43mc2pread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mc2pwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1031\u001b[0m \u001b[43m \u001b[49m\u001b[43merrread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43merrwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1032\u001b[0m \u001b[43m \u001b[49m\u001b[43mrestore_signals\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1033\u001b[0m \u001b[43m \u001b[49m\u001b[43mgid\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mgids\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43muid\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mumask\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1034\u001b[0m \u001b[43m \u001b[49m\u001b[43mstart_new_session\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mprocess_group\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 1035\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m:\n\u001b[0;32m 1036\u001b[0m \u001b[38;5;66;03m# Cleanup if the child failed starting.\u001b[39;00m\n\u001b[0;32m 1037\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m f \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mfilter\u001b[39m(\u001b[38;5;28;01mNone\u001b[39;00m, (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstdin, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstdout, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr)):\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\subprocess.py:1538\u001b[0m, in \u001b[0;36mPopen._execute_child\u001b[1;34m(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_gid, unused_gids, unused_uid, unused_umask, unused_start_new_session, unused_process_group)\u001b[0m\n\u001b[0;32m 1536\u001b[0m 
\u001b[38;5;66;03m# Start the process\u001b[39;00m\n\u001b[0;32m 1537\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m-> 1538\u001b[0m hp, ht, pid, tid \u001b[38;5;241m=\u001b[39m \u001b[43m_winapi\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mCreateProcess\u001b[49m\u001b[43m(\u001b[49m\u001b[43mexecutable\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1539\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;66;43;03m# no special security\u001b[39;49;00m\n\u001b[0;32m 1540\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[0;32m 1541\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mint\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;129;43;01mnot\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mclose_fds\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1542\u001b[0m \u001b[43m \u001b[49m\u001b[43mcreationflags\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1543\u001b[0m \u001b[43m \u001b[49m\u001b[43menv\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1544\u001b[0m \u001b[43m \u001b[49m\u001b[43mcwd\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1545\u001b[0m \u001b[43m \u001b[49m\u001b[43mstartupinfo\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 1546\u001b[0m \u001b[38;5;28;01mfinally\u001b[39;00m:\n\u001b[0;32m 1547\u001b[0m \u001b[38;5;66;03m# Child is launched. Close the parent's copy of those pipe\u001b[39;00m\n\u001b[0;32m 1548\u001b[0m \u001b[38;5;66;03m# handles that only the child should have open. 
You need\u001b[39;00m\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 1551\u001b[0m \u001b[38;5;66;03m# pipe will not close when the child process exits and the\u001b[39;00m\n\u001b[0;32m 1552\u001b[0m \u001b[38;5;66;03m# ReadFile will hang.\u001b[39;00m\n\u001b[0;32m 1553\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_close_pipe_fds(p2cread, p2cwrite,\n\u001b[0;32m 1554\u001b[0m c2pread, c2pwrite,\n\u001b[0;32m 1555\u001b[0m errread, errwrite)\n", + "\u001b[1;31mFileNotFoundError\u001b[0m: [WinError 2] The system cannot find the file specified" + ] + } + ], + "source": [ + "talker(\"Well, hi there\")" + ] + }, + { + "cell_type": "markdown", + "id": "ad89a9bd-bb1e-4bbb-a49a-83af5f500c24", + "metadata": {}, + "source": [ + "# For Windows users (or any Mac users with problems above)\n", + "\n", + "## First try the Mac version above, but if you get a permissions error writing to a temp file, then this code should work instead.\n", + "\n", + "A collaboration between students Mark M. and Patrick H. and Claude got this resolved!\n", + "\n", + "Below are 4 variations - hopefully one of them will work on your PC. 
If not, message me please!\n", + "\n", + "And for Mac people - all 3 of the below work on my Mac too - please try these if the Mac version gave you problems.\n", + "\n", + "## PC Variation 1" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d104b96a-02ca-4159-82fe-88e0452aa479", + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/html": [ + "\n", + " \n", + " " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "import base64\n", + "from io import BytesIO\n", + "from PIL import Image\n", + "from IPython.display import Audio, display\n", + "\n", + "def talker(message):\n", + " response = openai.audio.speech.create(\n", + " model=\"tts-1\",\n", + " voice=\"onyx\",\n", + " input=message)\n", + "\n", + " audio_stream = BytesIO(response.content)\n", + " output_filename = \"output_audio.mp3\"\n", + " with open(output_filename, \"wb\") as f:\n", + " f.write(audio_stream.read())\n", + "\n", + " # Play the generated audio\n", + " display(Audio(output_filename, autoplay=True))\n", + "\n", + "talker(\"Well, hi there\")" + ] + }, + { + "cell_type": "markdown", + "id": "3a5d11f4-bbd3-43a1-904d-f684eb5f3e3a", + "metadata": {}, + "source": [ + "## PC Variation 2" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "id": "d59c8ebd-79c5-498a-bdf2-3a1c50d91aa0", + "metadata": {}, + "outputs": [ + { + "ename": "FileNotFoundError", + "evalue": "[WinError 2] The system cannot find the file specified", + "output_type": "error", + "traceback": [ + "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[1;31mFileNotFoundError\u001b[0m Traceback (most recent call last)", + "Cell \u001b[1;32mIn[19], line 36\u001b[0m\n\u001b[0;32m 33\u001b[0m audio \u001b[38;5;241m=\u001b[39m 
AudioSegment\u001b[38;5;241m.\u001b[39mfrom_file(audio_stream, \u001b[38;5;28mformat\u001b[39m\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmp3\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m 34\u001b[0m play_audio(audio)\n\u001b[1;32m---> 36\u001b[0m \u001b[43mtalker\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mWell hi there\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n", + "Cell \u001b[1;32mIn[19], line 33\u001b[0m, in \u001b[0;36mtalker\u001b[1;34m(message)\u001b[0m\n\u001b[0;32m 27\u001b[0m response \u001b[38;5;241m=\u001b[39m openai\u001b[38;5;241m.\u001b[39maudio\u001b[38;5;241m.\u001b[39mspeech\u001b[38;5;241m.\u001b[39mcreate(\n\u001b[0;32m 28\u001b[0m model\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mtts-1\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[0;32m 29\u001b[0m voice\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124monyx\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;66;03m# Also, try replacing onyx with alloy\u001b[39;00m\n\u001b[0;32m 30\u001b[0m \u001b[38;5;28minput\u001b[39m\u001b[38;5;241m=\u001b[39mmessage\n\u001b[0;32m 31\u001b[0m )\n\u001b[0;32m 32\u001b[0m audio_stream \u001b[38;5;241m=\u001b[39m BytesIO(response\u001b[38;5;241m.\u001b[39mcontent)\n\u001b[1;32m---> 33\u001b[0m audio \u001b[38;5;241m=\u001b[39m \u001b[43mAudioSegment\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_file\u001b[49m\u001b[43m(\u001b[49m\u001b[43maudio_stream\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mformat\u001b[39;49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmp3\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[0;32m 34\u001b[0m play_audio(audio)\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\site-packages\\pydub\\audio_segment.py:728\u001b[0m, in 
\u001b[0;36mAudioSegment.from_file\u001b[1;34m(cls, file, format, codec, parameters, start_second, duration, **kwargs)\u001b[0m\n\u001b[0;32m 726\u001b[0m info \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[0;32m 727\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m--> 728\u001b[0m info \u001b[38;5;241m=\u001b[39m \u001b[43mmediainfo_json\u001b[49m\u001b[43m(\u001b[49m\u001b[43morig_file\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mread_ahead_limit\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mread_ahead_limit\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 729\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m info:\n\u001b[0;32m 730\u001b[0m audio_streams \u001b[38;5;241m=\u001b[39m [x \u001b[38;5;28;01mfor\u001b[39;00m x \u001b[38;5;129;01min\u001b[39;00m info[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mstreams\u001b[39m\u001b[38;5;124m'\u001b[39m]\n\u001b[0;32m 731\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m x[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcodec_type\u001b[39m\u001b[38;5;124m'\u001b[39m] \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124maudio\u001b[39m\u001b[38;5;124m'\u001b[39m]\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\site-packages\\pydub\\utils.py:274\u001b[0m, in \u001b[0;36mmediainfo_json\u001b[1;34m(filepath, read_ahead_limit)\u001b[0m\n\u001b[0;32m 271\u001b[0m file\u001b[38;5;241m.\u001b[39mclose()\n\u001b[0;32m 273\u001b[0m command \u001b[38;5;241m=\u001b[39m [prober, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m-of\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mjson\u001b[39m\u001b[38;5;124m'\u001b[39m] \u001b[38;5;241m+\u001b[39m command_args\n\u001b[1;32m--> 274\u001b[0m res \u001b[38;5;241m=\u001b[39m \u001b[43mPopen\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcommand\u001b[49m\u001b[43m,\u001b[49m\u001b[43m 
\u001b[49m\u001b[43mstdin\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstdin_parameter\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstdout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mPIPE\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstderr\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mPIPE\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 275\u001b[0m output, stderr \u001b[38;5;241m=\u001b[39m res\u001b[38;5;241m.\u001b[39mcommunicate(\u001b[38;5;28minput\u001b[39m\u001b[38;5;241m=\u001b[39mstdin_data)\n\u001b[0;32m 276\u001b[0m output \u001b[38;5;241m=\u001b[39m output\u001b[38;5;241m.\u001b[39mdecode(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mutf-8\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mignore\u001b[39m\u001b[38;5;124m'\u001b[39m)\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\subprocess.py:1026\u001b[0m, in \u001b[0;36mPopen.__init__\u001b[1;34m(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, user, group, extra_groups, encoding, errors, text, umask, pipesize, process_group)\u001b[0m\n\u001b[0;32m 1022\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtext_mode:\n\u001b[0;32m 1023\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr \u001b[38;5;241m=\u001b[39m io\u001b[38;5;241m.\u001b[39mTextIOWrapper(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr,\n\u001b[0;32m 1024\u001b[0m encoding\u001b[38;5;241m=\u001b[39mencoding, errors\u001b[38;5;241m=\u001b[39merrors)\n\u001b[1;32m-> 1026\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_execute_child\u001b[49m\u001b[43m(\u001b[49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m 
\u001b[49m\u001b[43mexecutable\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mpreexec_fn\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mclose_fds\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1027\u001b[0m \u001b[43m \u001b[49m\u001b[43mpass_fds\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcwd\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43menv\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1028\u001b[0m \u001b[43m \u001b[49m\u001b[43mstartupinfo\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcreationflags\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mshell\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1029\u001b[0m \u001b[43m \u001b[49m\u001b[43mp2cread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mp2cwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1030\u001b[0m \u001b[43m \u001b[49m\u001b[43mc2pread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mc2pwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1031\u001b[0m \u001b[43m \u001b[49m\u001b[43merrread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43merrwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1032\u001b[0m \u001b[43m \u001b[49m\u001b[43mrestore_signals\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1033\u001b[0m \u001b[43m \u001b[49m\u001b[43mgid\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mgids\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43muid\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mumask\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1034\u001b[0m \u001b[43m \u001b[49m\u001b[43mstart_new_session\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mprocess_group\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 1035\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m:\n\u001b[0;32m 1036\u001b[0m \u001b[38;5;66;03m# Cleanup if the child failed starting.\u001b[39;00m\n\u001b[0;32m 1037\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m f 
\u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mfilter\u001b[39m(\u001b[38;5;28;01mNone\u001b[39;00m, (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstdin, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstdout, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr)):\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\subprocess.py:1538\u001b[0m, in \u001b[0;36mPopen._execute_child\u001b[1;34m(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_gid, unused_gids, unused_uid, unused_umask, unused_start_new_session, unused_process_group)\u001b[0m\n\u001b[0;32m 1536\u001b[0m \u001b[38;5;66;03m# Start the process\u001b[39;00m\n\u001b[0;32m 1537\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m-> 1538\u001b[0m hp, ht, pid, tid \u001b[38;5;241m=\u001b[39m \u001b[43m_winapi\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mCreateProcess\u001b[49m\u001b[43m(\u001b[49m\u001b[43mexecutable\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1539\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;66;43;03m# no special security\u001b[39;49;00m\n\u001b[0;32m 1540\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[0;32m 1541\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;28;43mint\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;129;43;01mnot\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mclose_fds\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1542\u001b[0m \u001b[43m \u001b[49m\u001b[43mcreationflags\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1543\u001b[0m \u001b[43m \u001b[49m\u001b[43menv\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1544\u001b[0m \u001b[43m 
\u001b[49m\u001b[43mcwd\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1545\u001b[0m \u001b[43m \u001b[49m\u001b[43mstartupinfo\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 1546\u001b[0m \u001b[38;5;28;01mfinally\u001b[39;00m:\n\u001b[0;32m 1547\u001b[0m \u001b[38;5;66;03m# Child is launched. Close the parent's copy of those pipe\u001b[39;00m\n\u001b[0;32m 1548\u001b[0m \u001b[38;5;66;03m# handles that only the child should have open. You need\u001b[39;00m\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 1551\u001b[0m \u001b[38;5;66;03m# pipe will not close when the child process exits and the\u001b[39;00m\n\u001b[0;32m 1552\u001b[0m \u001b[38;5;66;03m# ReadFile will hang.\u001b[39;00m\n\u001b[0;32m 1553\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_close_pipe_fds(p2cread, p2cwrite,\n\u001b[0;32m 1554\u001b[0m c2pread, c2pwrite,\n\u001b[0;32m 1555\u001b[0m errread, errwrite)\n", + "\u001b[1;31mFileNotFoundError\u001b[0m: [WinError 2] The system cannot find the file specified" + ] + } + ], + "source": [ + "import tempfile\n", + "import subprocess\n", + "from io import BytesIO\n", + "from pydub import AudioSegment\n", + "import time\n", + "\n", + "def play_audio(audio_segment):\n", + " temp_dir = tempfile.gettempdir()\n", + " temp_path = os.path.join(temp_dir, \"temp_audio.wav\")\n", + " try:\n", + " audio_segment.export(temp_path, format=\"wav\")\n", + " time.sleep(3) # Student Dominic found that this was needed. 
You could also try commenting out to see if not needed on your PC\n", + " subprocess.call([\n", + " \"ffplay\",\n", + " \"-nodisp\",\n", + " \"-autoexit\",\n", + " \"-hide_banner\",\n", + " temp_path\n", + " ], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)\n", + " finally:\n", + " try:\n", + " os.remove(temp_path)\n", + " except Exception:\n", + " pass\n", + " \n", + "def talker(message):\n", + " response = openai.audio.speech.create(\n", + " model=\"tts-1\",\n", + " voice=\"onyx\", # Also, try replacing onyx with alloy\n", + " input=message\n", + " )\n", + " audio_stream = BytesIO(response.content)\n", + " audio = AudioSegment.from_file(audio_stream, format=\"mp3\")\n", + " play_audio(audio)\n", + "\n", + "talker(\"Well hi there\")" + ] + }, + { + "cell_type": "markdown", + "id": "96f90e35-f71e-468e-afea-07b98f74dbcf", + "metadata": {}, + "source": [ + "## PC Variation 3" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "id": "8597c7f8-7b50-44ad-9b31-db12375cd57b", + "metadata": {}, + "outputs": [ + { + "ename": "FileNotFoundError", + "evalue": "[WinError 2] The system cannot find the file specified", + "output_type": "error", + "traceback": [ + "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[1;31mFileNotFoundError\u001b[0m Traceback (most recent call last)", + "Cell \u001b[1;32mIn[20], line 26\u001b[0m\n\u001b[0;32m 22\u001b[0m audio \u001b[38;5;241m=\u001b[39m AudioSegment\u001b[38;5;241m.\u001b[39mfrom_file(audio_stream, \u001b[38;5;28mformat\u001b[39m\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmp3\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m 24\u001b[0m play(audio)\n\u001b[1;32m---> 26\u001b[0m \u001b[43mtalker\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mWell hi there\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n", + "Cell \u001b[1;32mIn[20], line 22\u001b[0m, in 
\u001b[0;36mtalker\u001b[1;34m(message)\u001b[0m\n\u001b[0;32m 15\u001b[0m response \u001b[38;5;241m=\u001b[39m openai\u001b[38;5;241m.\u001b[39maudio\u001b[38;5;241m.\u001b[39mspeech\u001b[38;5;241m.\u001b[39mcreate(\n\u001b[0;32m 16\u001b[0m model\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mtts-1\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[0;32m 17\u001b[0m voice\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124monyx\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;66;03m# Also, try replacing onyx with alloy\u001b[39;00m\n\u001b[0;32m 18\u001b[0m \u001b[38;5;28minput\u001b[39m\u001b[38;5;241m=\u001b[39mmessage\n\u001b[0;32m 19\u001b[0m )\n\u001b[0;32m 21\u001b[0m audio_stream \u001b[38;5;241m=\u001b[39m BytesIO(response\u001b[38;5;241m.\u001b[39mcontent)\n\u001b[1;32m---> 22\u001b[0m audio \u001b[38;5;241m=\u001b[39m \u001b[43mAudioSegment\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_file\u001b[49m\u001b[43m(\u001b[49m\u001b[43maudio_stream\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mformat\u001b[39;49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmp3\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[0;32m 24\u001b[0m play(audio)\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\site-packages\\pydub\\audio_segment.py:728\u001b[0m, in \u001b[0;36mAudioSegment.from_file\u001b[1;34m(cls, file, format, codec, parameters, start_second, duration, **kwargs)\u001b[0m\n\u001b[0;32m 726\u001b[0m info \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[0;32m 727\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m--> 728\u001b[0m info \u001b[38;5;241m=\u001b[39m \u001b[43mmediainfo_json\u001b[49m\u001b[43m(\u001b[49m\u001b[43morig_file\u001b[49m\u001b[43m,\u001b[49m\u001b[43m 
\u001b[49m\u001b[43mread_ahead_limit\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mread_ahead_limit\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 729\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m info:\n\u001b[0;32m 730\u001b[0m audio_streams \u001b[38;5;241m=\u001b[39m [x \u001b[38;5;28;01mfor\u001b[39;00m x \u001b[38;5;129;01min\u001b[39;00m info[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mstreams\u001b[39m\u001b[38;5;124m'\u001b[39m]\n\u001b[0;32m 731\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m x[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcodec_type\u001b[39m\u001b[38;5;124m'\u001b[39m] \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124maudio\u001b[39m\u001b[38;5;124m'\u001b[39m]\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\site-packages\\pydub\\utils.py:274\u001b[0m, in \u001b[0;36mmediainfo_json\u001b[1;34m(filepath, read_ahead_limit)\u001b[0m\n\u001b[0;32m 271\u001b[0m file\u001b[38;5;241m.\u001b[39mclose()\n\u001b[0;32m 273\u001b[0m command \u001b[38;5;241m=\u001b[39m [prober, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m-of\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mjson\u001b[39m\u001b[38;5;124m'\u001b[39m] \u001b[38;5;241m+\u001b[39m command_args\n\u001b[1;32m--> 274\u001b[0m res \u001b[38;5;241m=\u001b[39m \u001b[43mPopen\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcommand\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstdin\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstdin_parameter\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstdout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mPIPE\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstderr\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mPIPE\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 275\u001b[0m output, stderr \u001b[38;5;241m=\u001b[39m 
res\u001b[38;5;241m.\u001b[39mcommunicate(\u001b[38;5;28minput\u001b[39m\u001b[38;5;241m=\u001b[39mstdin_data)\n\u001b[0;32m 276\u001b[0m output \u001b[38;5;241m=\u001b[39m output\u001b[38;5;241m.\u001b[39mdecode(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mutf-8\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mignore\u001b[39m\u001b[38;5;124m'\u001b[39m)\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\subprocess.py:1026\u001b[0m, in \u001b[0;36mPopen.__init__\u001b[1;34m(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, user, group, extra_groups, encoding, errors, text, umask, pipesize, process_group)\u001b[0m\n\u001b[0;32m 1022\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtext_mode:\n\u001b[0;32m 1023\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr \u001b[38;5;241m=\u001b[39m io\u001b[38;5;241m.\u001b[39mTextIOWrapper(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr,\n\u001b[0;32m 1024\u001b[0m encoding\u001b[38;5;241m=\u001b[39mencoding, errors\u001b[38;5;241m=\u001b[39merrors)\n\u001b[1;32m-> 1026\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_execute_child\u001b[49m\u001b[43m(\u001b[49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mexecutable\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mpreexec_fn\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mclose_fds\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1027\u001b[0m \u001b[43m \u001b[49m\u001b[43mpass_fds\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcwd\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43menv\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1028\u001b[0m \u001b[43m 
\u001b[49m\u001b[43mstartupinfo\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcreationflags\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mshell\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1029\u001b[0m \u001b[43m \u001b[49m\u001b[43mp2cread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mp2cwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1030\u001b[0m \u001b[43m \u001b[49m\u001b[43mc2pread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mc2pwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1031\u001b[0m \u001b[43m \u001b[49m\u001b[43merrread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43merrwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1032\u001b[0m \u001b[43m \u001b[49m\u001b[43mrestore_signals\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1033\u001b[0m \u001b[43m \u001b[49m\u001b[43mgid\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mgids\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43muid\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mumask\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1034\u001b[0m \u001b[43m \u001b[49m\u001b[43mstart_new_session\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mprocess_group\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 1035\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m:\n\u001b[0;32m 1036\u001b[0m \u001b[38;5;66;03m# Cleanup if the child failed starting.\u001b[39;00m\n\u001b[0;32m 1037\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m f \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mfilter\u001b[39m(\u001b[38;5;28;01mNone\u001b[39;00m, (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstdin, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstdout, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr)):\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\subprocess.py:1538\u001b[0m, in \u001b[0;36mPopen._execute_child\u001b[1;34m(self, args, 
executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_gid, unused_gids, unused_uid, unused_umask, unused_start_new_session, unused_process_group)\u001b[0m\n\u001b[0;32m 1536\u001b[0m \u001b[38;5;66;03m# Start the process\u001b[39;00m\n\u001b[0;32m 1537\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m-> 1538\u001b[0m hp, ht, pid, tid \u001b[38;5;241m=\u001b[39m _winapi\u001b[38;5;241m.\u001b[39mCreateProcess(executable, args,\n\u001b[0;32m 1539\u001b[0m \u001b[38;5;66;03m# no special security\u001b[39;00m\n\u001b[0;32m 1540\u001b[0m \u001b[38;5;28;01mNone\u001b[39;00m, \u001b[38;5;28;01mNone\u001b[39;00m,\n\u001b[0;32m 1541\u001b[0m \u001b[38;5;28mint\u001b[39m(\u001b[38;5;129;01mnot\u001b[39;00m close_fds),\n\u001b[0;32m 1542\u001b[0m creationflags,\n\u001b[0;32m 1543\u001b[0m env,\n\u001b[0;32m 1544\u001b[0m cwd,\n\u001b[0;32m 1545\u001b[0m startupinfo)\n\u001b[0;32m 1546\u001b[0m \u001b[38;5;28;01mfinally\u001b[39;00m:\n\u001b[0;32m 1547\u001b[0m \u001b[38;5;66;03m# Child is launched. Close the parent's copy of those pipe\u001b[39;00m\n\u001b[0;32m 1548\u001b[0m \u001b[38;5;66;03m# handles that only the child should have open. 
You need\u001b[39;00m\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 1551\u001b[0m \u001b[38;5;66;03m# pipe will not close when the child process exits and the\u001b[39;00m\n\u001b[0;32m 1552\u001b[0m \u001b[38;5;66;03m# ReadFile will hang.\u001b[39;00m\n\u001b[0;32m 1553\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_close_pipe_fds(p2cread, p2cwrite,\n\u001b[0;32m 1554\u001b[0m c2pread, c2pwrite,\n\u001b[0;32m 1555\u001b[0m errread, errwrite)\n", + "\u001b[1;31mFileNotFoundError\u001b[0m: [WinError 2] The system cannot find the file specified" + ] + } + ], + "source": [ + "import os\n", + "from pydub import AudioSegment\n", + "from pydub.playback import play\n", + "from io import BytesIO\n", + "\n", + "def talker(message):\n", + " # Set a custom directory for temporary files on Windows\n", + " custom_temp_dir = os.path.expanduser(\"~/Documents/temp_audio\")\n", + " os.environ['TEMP'] = custom_temp_dir # You can also use 'TMP' if necessary\n", + " \n", + " # Create the folder if it doesn't exist\n", + " if not os.path.exists(custom_temp_dir):\n", + " os.makedirs(custom_temp_dir)\n", + " \n", + " response = openai.audio.speech.create(\n", + " model=\"tts-1\",\n", + " voice=\"onyx\", # Also, try replacing onyx with alloy\n", + " input=message\n", + " )\n", + " \n", + " audio_stream = BytesIO(response.content)\n", + " audio = AudioSegment.from_file(audio_stream, format=\"mp3\")\n", + "\n", + " play(audio)\n", + "\n", + "talker(\"Well hi there\")" + ] + }, + { + "cell_type": "markdown", + "id": "e821224c-b069-4f9b-9535-c15fdb0e411c", + "metadata": {}, + "source": [ + "## PC Variation 4\n", + "\n", + "### Let's try a completely different sound library\n", + "\n", + "First run the next cell to install a new library, then try the cell below it." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "69d3c0d9-afcc-49e3-b829-9c9869d8b472", + "metadata": {}, + "outputs": [], + "source": [ + "!pip install simpleaudio" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "id": "28f9cc99-36b7-4554-b3f4-f2012f614a13", + "metadata": {}, + "outputs": [], + "source": [ + "from pydub import AudioSegment\n", + "from io import BytesIO\n", + "import tempfile\n", + "import os\n", + "import simpleaudio as sa\n", + "\n", + "def talker(message):\n", + " response = openai.audio.speech.create(\n", + " model=\"tts-1\",\n", + " voice=\"onyx\", # Also, try replacing onyx with alloy\n", + " input=message\n", + " )\n", + " \n", + " audio_stream = BytesIO(response.content)\n", + " audio = AudioSegment.from_file(audio_stream, format=\"mp3\")\n", + "\n", + " # Create a temporary file in a folder where you have write permissions\n", + " with tempfile.NamedTemporaryFile(suffix=\".wav\", delete=False, dir=os.path.expanduser(\"~/Documents\")) as temp_audio_file:\n", + " temp_file_name = temp_audio_file.name\n", + " audio.export(temp_file_name, format=\"wav\")\n", + " \n", + " # Load and play audio using simpleaudio\n", + " wave_obj = sa.WaveObject.from_wave_file(temp_file_name)\n", + " play_obj = wave_obj.play()\n", + " play_obj.wait_done() # Wait for playback to finish\n", + "\n", + " # Clean up the temporary file afterward\n", + " os.remove(temp_file_name)\n", + " \n" + ] + }, + { + "cell_type": "code", + "execution_count": 22, + "id": "0d248b46", + "metadata": {}, + "outputs": [ + { + "ename": "FileNotFoundError", + "evalue": "[WinError 2] The system cannot find the file specified", + "output_type": "error", + "traceback": [ + "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m", + "\u001b[1;31mFileNotFoundError\u001b[0m Traceback (most recent call last)", + "Cell \u001b[1;32mIn[22], line 1\u001b[0m\n\u001b[1;32m----> 1\u001b[0m 
\u001b[43mtalker\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mWell hi there\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n", + "Cell \u001b[1;32mIn[21], line 15\u001b[0m, in \u001b[0;36mtalker\u001b[1;34m(message)\u001b[0m\n\u001b[0;32m 8\u001b[0m response \u001b[38;5;241m=\u001b[39m openai\u001b[38;5;241m.\u001b[39maudio\u001b[38;5;241m.\u001b[39mspeech\u001b[38;5;241m.\u001b[39mcreate(\n\u001b[0;32m 9\u001b[0m model\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mtts-1\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[0;32m 10\u001b[0m voice\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124monyx\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;66;03m# Also, try replacing onyx with alloy\u001b[39;00m\n\u001b[0;32m 11\u001b[0m \u001b[38;5;28minput\u001b[39m\u001b[38;5;241m=\u001b[39mmessage\n\u001b[0;32m 12\u001b[0m )\n\u001b[0;32m 14\u001b[0m audio_stream \u001b[38;5;241m=\u001b[39m BytesIO(response\u001b[38;5;241m.\u001b[39mcontent)\n\u001b[1;32m---> 15\u001b[0m audio \u001b[38;5;241m=\u001b[39m \u001b[43mAudioSegment\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_file\u001b[49m\u001b[43m(\u001b[49m\u001b[43maudio_stream\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43mformat\u001b[39;49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmp3\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n\u001b[0;32m 17\u001b[0m \u001b[38;5;66;03m# Create a temporary file in a folder where you have write permissions\u001b[39;00m\n\u001b[0;32m 18\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m tempfile\u001b[38;5;241m.\u001b[39mNamedTemporaryFile(suffix\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m.wav\u001b[39m\u001b[38;5;124m\"\u001b[39m, delete\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mFalse\u001b[39;00m, 
\u001b[38;5;28mdir\u001b[39m\u001b[38;5;241m=\u001b[39mos\u001b[38;5;241m.\u001b[39mpath\u001b[38;5;241m.\u001b[39mexpanduser(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m~/Documents\u001b[39m\u001b[38;5;124m\"\u001b[39m)) \u001b[38;5;28;01mas\u001b[39;00m temp_audio_file:\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\site-packages\\pydub\\audio_segment.py:728\u001b[0m, in \u001b[0;36mAudioSegment.from_file\u001b[1;34m(cls, file, format, codec, parameters, start_second, duration, **kwargs)\u001b[0m\n\u001b[0;32m 726\u001b[0m info \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[0;32m 727\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m--> 728\u001b[0m info \u001b[38;5;241m=\u001b[39m \u001b[43mmediainfo_json\u001b[49m\u001b[43m(\u001b[49m\u001b[43morig_file\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mread_ahead_limit\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mread_ahead_limit\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 729\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m info:\n\u001b[0;32m 730\u001b[0m audio_streams \u001b[38;5;241m=\u001b[39m [x \u001b[38;5;28;01mfor\u001b[39;00m x \u001b[38;5;129;01min\u001b[39;00m info[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mstreams\u001b[39m\u001b[38;5;124m'\u001b[39m]\n\u001b[0;32m 731\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m x[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcodec_type\u001b[39m\u001b[38;5;124m'\u001b[39m] \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124maudio\u001b[39m\u001b[38;5;124m'\u001b[39m]\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\site-packages\\pydub\\utils.py:274\u001b[0m, in \u001b[0;36mmediainfo_json\u001b[1;34m(filepath, read_ahead_limit)\u001b[0m\n\u001b[0;32m 271\u001b[0m file\u001b[38;5;241m.\u001b[39mclose()\n\u001b[0;32m 273\u001b[0m command \u001b[38;5;241m=\u001b[39m [prober, 
\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m-of\u001b[39m\u001b[38;5;124m'\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mjson\u001b[39m\u001b[38;5;124m'\u001b[39m] \u001b[38;5;241m+\u001b[39m command_args\n\u001b[1;32m--> 274\u001b[0m res \u001b[38;5;241m=\u001b[39m \u001b[43mPopen\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcommand\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstdin\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstdin_parameter\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstdout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mPIPE\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstderr\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mPIPE\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 275\u001b[0m output, stderr \u001b[38;5;241m=\u001b[39m res\u001b[38;5;241m.\u001b[39mcommunicate(\u001b[38;5;28minput\u001b[39m\u001b[38;5;241m=\u001b[39mstdin_data)\n\u001b[0;32m 276\u001b[0m output \u001b[38;5;241m=\u001b[39m output\u001b[38;5;241m.\u001b[39mdecode(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mutf-8\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mignore\u001b[39m\u001b[38;5;124m'\u001b[39m)\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\subprocess.py:1026\u001b[0m, in \u001b[0;36mPopen.__init__\u001b[1;34m(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, user, group, extra_groups, encoding, errors, text, umask, pipesize, process_group)\u001b[0m\n\u001b[0;32m 1022\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtext_mode:\n\u001b[0;32m 1023\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr \u001b[38;5;241m=\u001b[39m 
io\u001b[38;5;241m.\u001b[39mTextIOWrapper(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr,\n\u001b[0;32m 1024\u001b[0m encoding\u001b[38;5;241m=\u001b[39mencoding, errors\u001b[38;5;241m=\u001b[39merrors)\n\u001b[1;32m-> 1026\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_execute_child\u001b[49m\u001b[43m(\u001b[49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mexecutable\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mpreexec_fn\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mclose_fds\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1027\u001b[0m \u001b[43m \u001b[49m\u001b[43mpass_fds\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcwd\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43menv\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1028\u001b[0m \u001b[43m \u001b[49m\u001b[43mstartupinfo\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcreationflags\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mshell\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1029\u001b[0m \u001b[43m \u001b[49m\u001b[43mp2cread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mp2cwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1030\u001b[0m \u001b[43m \u001b[49m\u001b[43mc2pread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mc2pwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1031\u001b[0m \u001b[43m \u001b[49m\u001b[43merrread\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43merrwrite\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1032\u001b[0m \u001b[43m \u001b[49m\u001b[43mrestore_signals\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1033\u001b[0m \u001b[43m \u001b[49m\u001b[43mgid\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mgids\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43muid\u001b[49m\u001b[43m,\u001b[49m\u001b[43m 
\u001b[49m\u001b[43mumask\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1034\u001b[0m \u001b[43m \u001b[49m\u001b[43mstart_new_session\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mprocess_group\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 1035\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m:\n\u001b[0;32m 1036\u001b[0m \u001b[38;5;66;03m# Cleanup if the child failed starting.\u001b[39;00m\n\u001b[0;32m 1037\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m f \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mfilter\u001b[39m(\u001b[38;5;28;01mNone\u001b[39;00m, (\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstdin, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstdout, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mstderr)):\n", + "File \u001b[1;32mc:\\Users\\dsinanis\\AppData\\Local\\anaconda3\\envs\\llms\\Lib\\subprocess.py:1538\u001b[0m, in \u001b[0;36mPopen._execute_child\u001b[1;34m(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_gid, unused_gids, unused_uid, unused_umask, unused_start_new_session, unused_process_group)\u001b[0m\n\u001b[0;32m 1536\u001b[0m \u001b[38;5;66;03m# Start the process\u001b[39;00m\n\u001b[0;32m 1537\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m-> 1538\u001b[0m hp, ht, pid, tid \u001b[38;5;241m=\u001b[39m _winapi\u001b[38;5;241m.\u001b[39mCreateProcess(executable, args,\n\u001b[0;32m 1539\u001b[0m \u001b[38;5;66;03m# no special security\u001b[39;00m\n\u001b[0;32m 1540\u001b[0m \u001b[38;5;28;01mNone\u001b[39;00m, \u001b[38;5;28;01mNone\u001b[39;00m,\n\u001b[0;32m 1541\u001b[0m \u001b[38;5;28mint\u001b[39m(\u001b[38;5;129;01mnot\u001b[39;00m close_fds),\n\u001b[0;32m 1542\u001b[0m creationflags,\n\u001b[0;32m 1543\u001b[0m env,\n\u001b[0;32m 1544\u001b[0m cwd,\n\u001b[0;32m 1545\u001b[0m startupinfo)\n\u001b[0;32m 1546\u001b[0m 
\u001b[38;5;28;01mfinally\u001b[39;00m:\n\u001b[0;32m 1547\u001b[0m \u001b[38;5;66;03m# Child is launched. Close the parent's copy of those pipe\u001b[39;00m\n\u001b[0;32m 1548\u001b[0m \u001b[38;5;66;03m# handles that only the child should have open. You need\u001b[39;00m\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 1551\u001b[0m \u001b[38;5;66;03m# pipe will not close when the child process exits and the\u001b[39;00m\n\u001b[0;32m 1552\u001b[0m \u001b[38;5;66;03m# ReadFile will hang.\u001b[39;00m\n\u001b[0;32m 1553\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_close_pipe_fds(p2cread, p2cwrite,\n\u001b[0;32m 1554\u001b[0m c2pread, c2pwrite,\n\u001b[0;32m 1555\u001b[0m errread, errwrite)\n", + "\u001b[1;31mFileNotFoundError\u001b[0m: [WinError 2] The system cannot find the file specified" + ] + } + ], + "source": [ + "talker(\"Well hi there\")" + ] + }, + { + "cell_type": "markdown", + "id": "7986176b-cd04-495f-a47f-e057b0e462ed", + "metadata": {}, + "source": [ + "## PC Users - if none of those 4 variations worked!\n", + "\n", + "Please get in touch with me. I'm sorry this is causing problems! We'll figure it out.\n", + "\n", + "Alternatively: playing audio from your PC isn't super-critical for this course, and you can feel free to focus on image generation and skip audio for now, or come back to it later." + ] + }, + { + "cell_type": "markdown", + "id": "1d48876d-c4fa-46a8-a04f-f9fadf61fb0d", + "metadata": {}, + "source": [ + "# Our Agent Framework\n", + "\n", + "The term 'Agentic AI' and Agentization is an umbrella term that refers to a number of techniques, such as:\n", + "\n", + "1. Breaking a complex problem into smaller steps, with multiple LLMs carrying out specialized tasks\n", + "2. The ability for LLMs to use Tools to give them additional capabilities\n", + "3. The 'Agent Environment' which allows Agents to collaborate\n", + "4. An LLM can act as the Planner, dividing bigger tasks into smaller ones for the specialists\n", + "5. 
The concept of an Agent having autonomy / agency, beyond just responding to a prompt - such as Memory\n", + "\n", + "We're showing 1 and 2 here, and to a lesser extent 3 and 5. In week 8 we will do the lot!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ba820c95-02f5-499e-8f3c-8727ee0a6c0c", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(history):\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n", + " image = None\n", + " \n", + " if response.choices[0].finish_reason==\"tool_calls\":\n", + " message = response.choices[0].message\n", + " response, city = handle_tool_call(message)\n", + " messages.append(message)\n", + " messages.append(response)\n", + " image = artist(city)\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages)\n", + " \n", + " reply = response.choices[0].message.content\n", + " history += [{\"role\":\"assistant\", \"content\":reply}]\n", + "\n", + " # Comment out or delete the next line if you'd rather skip Audio for now..\n", + " # It worked for me only with the first variation of the talker function\n", + " talker(reply)\n", + " \n", + " return history, image" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "id": "f38d0d27-33bf-4992-a2e5-5dbed973cde7", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "* Running on local URL: http://127.0.0.1:7866\n", + "\n", + "To create a public link, set `share=True` in `launch()`.\n" + ] + }, + { + "data": { + "text/html": [ + "
    " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [] + }, + "execution_count": 20, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "# More involved Gradio code as we're not using the preset Chat interface!\n", + "# Passing in inbrowser=True in the last line will cause a Gradio window to pop up immediately.\n", + "\n", + "with gr.Blocks() as ui:\n", + " with gr.Row():\n", + " chatbot = gr.Chatbot(height=500, type=\"messages\")\n", + " image_output = gr.Image(height=500)\n", + " with gr.Row():\n", + " entry = gr.Textbox(label=\"Chat with our AI Assistant:\")\n", + " with gr.Row():\n", + " clear = gr.Button(\"Clear\")\n", + "\n", + " def do_entry(message, history):\n", + " history += [{\"role\":\"user\", \"content\":message}]\n", + " return \"\", history\n", + "\n", + " entry.submit(do_entry, inputs=[entry, chatbot], outputs=[entry, chatbot]).then(\n", + " chat, inputs=chatbot, outputs=[chatbot, image_output]\n", + " )\n", + " clear.click(lambda: None, inputs=None, outputs=chatbot, queue=False)\n", + "\n", + "ui.launch(inbrowser=True)" + ] + }, + { + "cell_type": "markdown", + "id": "226643d2-73e4-4252-935d-86b8019e278a", + "metadata": {}, + "source": [ + "# Exercises and Business Applications\n", + "\n", + "Add in more tools - perhaps to simulate actually booking a flight. A student has done this and provided their example in the community contributions folder.\n", + "\n", + "Next: take this and apply it to your business. Make a multi-modal AI assistant with tools that could carry out an activity for your work. A customer support assistant? New employee onboarding assistant? So many possibilities! Also, see the week2 end of week Exercise in the separate Notebook." + ] + }, + { + "cell_type": "markdown", + "id": "7e795560-1867-42db-a256-a23b844e6fbe", + "metadata": {}, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
    \n", + " \n", + " \n", + "

    I have a special request for you

    \n", + " \n", + " My editor tells me that it makes a HUGE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others. If you're able to take a minute to rate this, I'd be so very grateful! And regardless - always please reach out to me at ed@edwarddonner.com if I can help at any point.\n", + " \n", + "
    " + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "llms", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 9d010328ef7b3b49eb5a1ff4e855b9804c7dd335 Mon Sep 17 00:00:00 2001 From: David La Motta Date: Wed, 26 Feb 2025 14:25:11 -0500 Subject: [PATCH 15/35] Disabling SSL cert validation, and suppressing warnings. Fixes issue #217 --- .../day5-disable-ssl.ipynb | 81 +++++++++++++++++++ 1 file changed, 81 insertions(+) create mode 100644 week1/community-contributions/day5-disable-ssl.ipynb diff --git a/week1/community-contributions/day5-disable-ssl.ipynb b/week1/community-contributions/day5-disable-ssl.ipynb new file mode 100644 index 0000000..90ac21c --- /dev/null +++ b/week1/community-contributions/day5-disable-ssl.ipynb @@ -0,0 +1,81 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "a98030af-fcd1-4d63-a36e-38ba053498fa", + "metadata": {}, + "source": [ + "# A Small Tweak to Week1-Day5\n", + "\n", + "If you have network restrictions (such as using a custom DNS provider, or firewall rules at work), you can disable SSL cert verification.\n", + "Once you do that and start executing your code, the output will be riddled with warnings. Thankfully, you can suppress those warnings,too.\n", + "\n", + "See the 2 lines added to the init method, below." 
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 22,
+   "id": "106dd65e-90af-4ca8-86b6-23a41840645b",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# A class to represent a Webpage\n",
+    "\n",
+    "# Some websites need you to use proper headers when fetching them:\n",
+    "headers = {\n",
+    " \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
+    "}\n",
+    "\n",
+    "class Website:\n",
+    "    \"\"\"\n",
+    "    A utility class to represent a Website that we have scraped, now with links\n",
+    "    \"\"\"\n",
+    "\n",
+    "    def __init__(self, url):\n",
+    "        self.url = url\n",
+    "\n",
+    "        #\n",
+    "        # If you must disable SSL cert validation, and also suppress all the warnings that will come with it,\n",
+    "        # add the 2 lines below. This comes in very handy if you have DNS/firewall restrictions; alas, use\n",
+    "        # with caution, especially if deploying this in a non-dev environment.\n",
+    "        requests.packages.urllib3.disable_warnings() \n",
+    "        response = requests.get(url, headers=headers, verify=False)  \n",
+    "        # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
+    "        \n",
+    "        self.body = response.content\n",
+    "        soup = BeautifulSoup(self.body, 'html.parser')\n",
+    "        self.title = soup.title.string if soup.title else \"No title found\"\n",
+    "        if soup.body:\n",
+    "            for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
+    "                irrelevant.decompose()\n",
+    "            self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
+    "        else:\n",
+    "            self.text = \"\"\n",
+    "        links = [link.get('href') for link in soup.find_all('a')]\n",
+    "        self.links = [link for link in links if link]"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   
"mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From c70c6c4f84cae9799679e7655d12cc21d61e08d7 Mon Sep 17 00:00:00 2001 From: Hazperera Date: Thu, 27 Feb 2025 14:18:14 +0000 Subject: [PATCH 16/35] add a python script for an automated website content analysis & SEO extraction --- .../day-1-marketing_insights_scraper.py | 176 ++++++++++++++++++ 1 file changed, 176 insertions(+) create mode 100644 week1/community-contributions/day-1-marketing_insights_scraper.py diff --git a/week1/community-contributions/day-1-marketing_insights_scraper.py b/week1/community-contributions/day-1-marketing_insights_scraper.py new file mode 100644 index 0000000..28b8920 --- /dev/null +++ b/week1/community-contributions/day-1-marketing_insights_scraper.py @@ -0,0 +1,176 @@ +import os +import time +import pandas as pd +import re +from dotenv import load_dotenv +from selenium import webdriver +from selenium.webdriver.chrome.service import Service +from selenium.webdriver.chrome.options import Options +from selenium.webdriver.common.by import By +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from openai import OpenAI +from openpyxl import load_workbook +from openpyxl.styles import Font, Alignment + +# Load environment variables +load_dotenv(override=True) +api_key = os.getenv('OPENAI_API_KEY') + +# Validate API Key +if not api_key: + raise ValueError("No API key was found - please check your .env file.") + +# Initialize OpenAI client +openai = OpenAI() + +# Set up Selenium WebDriver +chrome_options = Options() +chrome_options.add_argument("--headless") +chrome_options.add_argument("--disable-gpu") +chrome_options.add_argument("--no-sandbox") +chrome_options.add_argument("--disable-dev-shm-usage") + +class Website: + """Scrapes and processes website content using Selenium.""" + + 
def __init__(self, url: str):
+        self.url = url
+        self.text = "No content extracted."
+
+        service = Service(executable_path="/opt/homebrew/bin/chromedriver")
+        driver = webdriver.Chrome(service=service, options=chrome_options)
+
+        try:
+            driver.get(url)
+            WebDriverWait(driver, 10).until(
+                EC.presence_of_element_located((By.TAG_NAME, "body"))
+            )
+            body_element = driver.find_element(By.TAG_NAME, "body")
+            self.text = body_element.text.strip() if body_element else "No content extracted."
+        except Exception as e:
+            print(f"Error fetching website: {e}")
+        finally:
+            driver.quit()
+
+    def summarized_text(self, max_length=1500):
+        return self.text[:max_length] + ("..." if len(self.text) > max_length else "")
+
+def clean_text(text):
+    """
+    Cleans extracted text by removing markdown-style formatting.
+    """
+    text = re.sub(r"###*\s*", "", text)
+    text = re.sub(r"\*\*(.*?)\*\*", r"\1", text)
+    return text.strip()
+
+# Aspect-specific prompts for concise output
+aspect_prompts = {
+    "Marketing Strategies": "Summarize the core marketing strategies used on this website in under 30 words. Do not include a title or introduction.",
+    "SEO Keywords": "List only the most relevant SEO keywords from this website, separated by commas. Do not include a title or introduction.",
+    "User Engagement Tactics": "List key engagement tactics used on this website (e.g., interactive features, user incentives, social proof). Keep responses to 3-5 bullet points. Do not include a title or introduction.",
+    "Call-to-Action Phrases": "List only the most common Call-to-Action phrases used on this website, separated by commas. Do not include a title or introduction.",
+    "Branding Elements": "Summarize the brand's tone, style, and positioning in under 30 words. Do not include a title or introduction.",
+    "Competitor Comparison": "Briefly describe how this website differentiates itself from competitors in under 30 words. 
Do not include a title or introduction.",
+    "Product Descriptions": "List the most important features or benefits of the products/services described on this website in under 30 words. Do not include a title or introduction.",
+    "Customer Reviews Sentiment": "Summarize the overall sentiment of customer reviews in under 30 words, highlighting common themes. Do not include a title or introduction.",
+    "Social Media Strategy": "List key social media strategies used on this website, separated by commas. Do not include a title or introduction."
+}
+
+
+def summarize(url: str) -> dict:
+    """
+    Fetches a website, extracts relevant content, and generates a separate summary for each aspect.
+
+    :param url: The website URL to analyze.
+    :return: A dictionary containing extracted information.
+    """
+    website = Website(url)
+
+    if not website.text or website.text == "No content extracted.":
+        return {"URL": url, "Error": "Failed to extract content"}
+
+    extracted_data = {"URL": url}
+
+    for aspect, prompt in aspect_prompts.items():
+        try:
+            formatted_prompt = f"{prompt} \n\nContent:\n{website.summarized_text()}"
+            response = openai.chat.completions.create(
+                model="gpt-4o-mini",
+                messages=[
+                    {"role": "system", "content": "You are an expert at extracting structured information from website content."},
+                    {"role": "user", "content": formatted_prompt}
+                ]
+            )
+
+            extracted_data[aspect] = clean_text(response.choices[0].message.content)
+
+        except Exception as e:
+            extracted_data[aspect] = f"Error generating summary: {e}"
+
+    return extracted_data
+
+def save_to_excel(data_list: list, filename="website_analysis.xlsx"):
+    """
+    Saves extracted information to an Excel file with proper formatting.
+
+    :param data_list: A list of dictionaries containing extracted website details.
+    :param filename: The name of the Excel file to save data. 
+    """
+    df = pd.DataFrame(data_list)
+
+    df.to_excel(filename, index=False)
+
+    wb = load_workbook(filename)
+    ws = wb.active
+
+    # Auto-adjust column widths
+    for col in ws.columns:
+        max_length = 0
+        col_letter = col[0].column_letter
+        for cell in col:
+            try:
+                if cell.value:
+                    max_length = max(max_length, len(str(cell.value)))
+            except:
+                pass
+        ws.column_dimensions[col_letter].width = min(max_length + 2, 50)
+
+    # Format headers
+    for cell in ws[1]:
+        cell.font = Font(bold=True)
+        cell.alignment = Alignment(horizontal="center", vertical="center")
+
+    # Wrap text for extracted content
+    for row in ws.iter_rows(min_row=2):
+        for cell in row:
+            cell.alignment = Alignment(wrap_text=True, vertical="top")
+
+    wb.save(filename)
+    print(f"Data saved to {filename} with improved formatting.")
+
+# 🔹 LIST OF WEBSITES TO PROCESS
+websites = [
+    "https://www.udacity.com/",
+    "https://www.coursera.org",
+    "https://www.udemy.com",
+    "https://www.edx.org",
+    "https://www.freecodecamp.org/",
+    "https://www.datacamp.com/",
+    "https://www.w3schools.com/",
+    "https://www.futurelearn.com/",
+    "https://codefirstgirls.com/",
+    "https://www.linkedin.com/learning",
+]
+
+if __name__ == "__main__":
+    print("\nProcessing websites...\n")
+    extracted_data_list = []
+
+    for site in websites:
+        print(f"Extracting data from {site}...")
+        extracted_data = summarize(site)
+        extracted_data_list.append(extracted_data)
+
+    save_to_excel(extracted_data_list)
+    print("\nAll websites processed successfully!")

From 59e815ef5620125885b02fe9a7488d905e4e9619 Mon Sep 17 00:00:00 2001
From: Hazperera
Date: Thu, 27 Feb 2025 23:10:17 +0000
Subject: [PATCH 17/35] add a python script for an automated website marketing
 strategy analysis

---
 ...ay1_marketing_insights_scraper_Selenium_OpenAI.py} | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)
 rename week1/community-contributions/{day-1-marketing_insights_scraper.py => day1_marketing_insights_scraper_Selenium_OpenAI.py} (95%)

diff --git 
a/week1/community-contributions/day-1-marketing_insights_scraper.py b/week1/community-contributions/day1_marketing_insights_scraper_Selenium_OpenAI.py similarity index 95% rename from week1/community-contributions/day-1-marketing_insights_scraper.py rename to week1/community-contributions/day1_marketing_insights_scraper_Selenium_OpenAI.py index 28b8920..c69ff5f 100644 --- a/week1/community-contributions/day-1-marketing_insights_scraper.py +++ b/week1/community-contributions/day1_marketing_insights_scraper_Selenium_OpenAI.py @@ -151,16 +151,7 @@ def save_to_excel(data_list: list, filename="website_analysis.xlsx"): # 🔹 LIST OF WEBSITES TO PROCESS websites = [ - "https://www.udacity.com/", - "https://www.coursera.org", - "https://www.udemy.com", - "https://www.edx.org", - "https://www.freecodecamp.org/", - "https://www.datacamp.com/", - "https://www.w3schools.com/", - "https://www.futurelearn.com/", - "https://codefirstgirls.com/", - "https://www.linkedin.com/learning", + "https://www.gymshark.com/", ] if __name__ == "__main__": From bb301310f3dbb142af3df0edf763bbd5965f5d25 Mon Sep 17 00:00:00 2001 From: paulmboyce Date: Fri, 28 Feb 2025 13:41:08 +0000 Subject: [PATCH 18/35] feature(verify determinism on encodings):comparisons for OpenAIEmbeddings and sentence-transformers/all-MiniLM-L6-v2 --- .../verify-encodings.ipynb | 405 ++++++++++++++++++ 1 file changed, 405 insertions(+) create mode 100644 week5/community-contributions/verify-encodings.ipynb diff --git a/week5/community-contributions/verify-encodings.ipynb b/week5/community-contributions/verify-encodings.ipynb new file mode 100644 index 0000000..63477df --- /dev/null +++ b/week5/community-contributions/verify-encodings.ipynb @@ -0,0 +1,405 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "dfe37963-1af6-44fc-a841-8e462443f5e6", + "metadata": {}, + "source": [ + "## This notebook compares the embeddings generated by OpenAIEmbeddings.\n", + "\n", + "It shows that OpenAIEmbeddings embeddings can differ 
slightly (typically at the 4th decimal place).\n",
+    "\n",
+    "### Results from OpenAIEmbeddings:\n",
+    "encodings are NOT identical on each run.\n",
+    "\n",
+    "### Repeating with sentence-transformers/all-MiniLM-L6-v2:\n",
+    "encodings ARE identical on each run.\n",
+    "\n",
+    "Tests verify simple numerical comparisons.\n",
+    "\n",
+    "### Advanced Comparison\n",
+    "A more advanced Euclidean and cosine comparison is also included.\n",
+    "\n",
+    "## NOTES: Tests run on the local Jupyter Notebook / Anaconda setup for the course."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# imports\n",
+    "\n",
+    "import os\n",
+    "import glob\n",
+    "from dotenv import load_dotenv\n",
+    "import gradio as gr"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "802137aa-8a74-45e0-a487-d1974927d7ca",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# imports for langchain\n",
+    "\n",
+    "from langchain.document_loaders import DirectoryLoader, TextLoader\n",
+    "from langchain.text_splitter import CharacterTextSplitter\n",
+    "from langchain.schema import Document\n",
+    "from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n",
+    "from langchain_chroma import Chroma\n",
+    "import numpy as np\n",
+    "from sklearn.manifold import TSNE\n",
+    "import plotly.graph_objects as go\n",
+    "from langchain.memory import ConversationBufferMemory\n",
+    "from langchain.chains import ConversationalRetrievalChain"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "58c85082-e417-4708-9efe-81a5d55d1424",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# price is a factor for our company, so we're going to use a low cost model\n",
+    "\n",
+    "MODEL = \"gpt-4o-mini\"\n",
+    "db_name = \"vector_db\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "ee78efcb-60fe-449e-a944-40bab26261af",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+ "# Load environment variables in a file called .env\n", + "\n", + "load_dotenv()\n", + "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "730711a9-6ffe-4eee-8f48-d6cfb7314905", + "metadata": {}, + "outputs": [], + "source": [ + "# Read in documents using LangChain's loaders\n", + "# Take everything in all the sub-folders of our knowledgebase\n", + "\n", + "folders = glob.glob(\"knowledge-base/*\")\n", + "\n", + "# With thanks to CG and Jon R, students on the course, for this fix needed for some users \n", + "text_loader_kwargs = {'encoding': 'utf-8'}\n", + "# If that doesn't work, some Windows users might need to uncomment the next line instead\n", + "# text_loader_kwargs={'autodetect_encoding': True}\n", + "\n", + "documents = []\n", + "for folder in folders:\n", + " doc_type = os.path.basename(folder)\n", + " loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n", + " folder_docs = loader.load()\n", + " for doc in folder_docs:\n", + " doc.metadata[\"doc_type\"] = doc_type\n", + " documents.append(doc)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a", + "metadata": {}, + "outputs": [], + "source": [ + "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", + "chunks = text_splitter.split_documents(documents)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb", + "metadata": {}, + "outputs": [], + "source": [ + "len(chunks)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2c54b4b6-06da-463d-bee7-4dd456c2b887", + "metadata": {}, + "outputs": [], + "source": [ + "doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n", + "print(f\"Document types found: {', '.join(doc_types)}\")" + ] + }, + { + "cell_type": "code", + 
"execution_count": null,
+   "id": "a8b5ef27-70c2-4111-bce7-854bc1ebd02a",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Use a where filter to specify the metadata condition\n",
+    "# Get the 3 company vectors (corresponds to our 3 yellow dots)\n",
+    "\n",
+    "def get_company_vectors(collection):\n",
+    "    company_vectors = collection.get(\n",
+    "        where={\"doc_type\": \"company\"},  # Filter for documents where source = \"XXXX\"\n",
+    "        limit=10,\n",
+    "        include=[\"embeddings\", \"metadatas\", \"documents\"]\n",
+    "    )\n",
+    "    # Note: len() of the returned dict would count its keys, not the vectors,\n",
+    "    # so count the documents themselves:\n",
+    "    print(f\"Found {len(company_vectors['documents'])} company vectors\")\n",
+    "    return company_vectors\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "d688b873-b52b-4d80-9df2-f70b389f5dc7",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "\n",
+    "def print_vectors_summary(vectors):\n",
+    "    for i in range(len(vectors[\"documents\"])):\n",
+    "        print(f\"\\n--- Chunk {i+1} ---\")\n",
+    "        \n",
+    "        # Print document content (first 100 chars)\n",
+    "        print(f\"Content: {vectors['documents'][i][:100]}...\")\n",
+    "        \n",
+    "        # Print metadata\n",
+    "        print(f\"Metadata: {vectors['metadatas'][i]}\")\n",
+    "        \n",
+    "        # Print embedding info (not the full vector as it would be too long)\n",
+    "        embedding = vectors[\"embeddings\"][i]\n",
+    "        print(f\"Embedding: Vector of length {len(embedding)}, first 5 values: {embedding[:5]}\")\n",
+    "\n",
+    "\n",
+    "def get_dimensions_for_vectors(vectors):\n",
+    "    dimensions = []\n",
+    "\n",
+    "    for i in range(len(vectors[\"documents\"])):\n",
+    "        embedding = vectors[\"embeddings\"][i]\n",
+    "        dimensions.append(embedding)\n",
+    "\n",
+    "    return dimensions\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "0b195184-4920-404a-9bfa-0231f1dbe276",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Quick check if any single value is different\n",
+    "def quick_diff_check(emb1, emb2):\n",
+    "    result = \"Embeddings are identical\"\n",
+    "    print(\"\\n\\nComparing two embeddings:\\n\\n\")\n",
+    "    print(emb1)\n",
+    "    print(emb2)\n",
+    "    for i, (v1, v2) in enumerate(zip(emb1, emb2)):\n",
+    "        if v1 != v2:\n",
+    "            result = f\"Different at dimension {i}: {v1} vs {v2}\"\n",
+    "            break\n",
+    "    print(result)\n",
+    "    return result\n",
+    "\n",
+    "#quick_diff_check(dimensions[0], dimensions[1])"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "06ba838d-d179-4e2d-b208-dd9cc1fd0097",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "\n",
+    "embeddings = OpenAIEmbeddings()\n",
+    "\n",
+    "def create_vectorstores(embeddings):\n",
+    "\n",
+    "    if os.path.exists(\"vectorstore1\"):\n",
+    "        Chroma(persist_directory=\"vectorstore1\", embedding_function=embeddings).delete_collection()\n",
+    "    if os.path.exists(\"vectorstore2\"):\n",
+    "        Chroma(persist_directory=\"vectorstore2\", embedding_function=embeddings).delete_collection()\n",
+    "    \n",
+    "    \n",
+    "    # Create vectorstore 1\n",
+    "    vectorstore1 = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=\"vectorstore1\")\n",
+    "    print(f\"Vectorstore 1 created with {vectorstore1._collection.count()} documents\")\n",
+    "    \n",
+    "    # Create vectorstore 2\n",
+    "    vectorstore2 = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=\"vectorstore2\")\n",
+    "    print(f\"Vectorstore 2 created with {vectorstore2._collection.count()} documents\")\n",
+    "\n",
+    "    return vectorstore1, vectorstore2\n",
+    "\n",
+    "vectorstore1, vectorstore2 = create_vectorstores(embeddings)\n",
+    "\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "e24242eb-613a-4edb-a081-6b8937f106a7",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "## Uncomment this and rerun cells below, \n",
+    "## to see that HuggingFaceEmbeddings is identical\n",
+    "\n",
+    "#from langchain.embeddings import HuggingFaceEmbeddings\n",
+    "#embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\")\n",
+    "#vectorstore1, vectorstore2 = 
create_vectorstores(embeddings)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "000b9e70-2958-40db-bbed-56a00e4249ce", + "metadata": {}, + "outputs": [], + "source": [ + "# Get the 3 company doc_type vectors\n", + "collection1 = vectorstore1._collection\n", + "collection2 = vectorstore2._collection\n", + "\n", + "company_vectors1=get_company_vectors(collection1)\n", + "company_vectors2=get_company_vectors(collection2)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "63cd63e4-9d3e-405a-8ef9-dac16fe2570e", + "metadata": {}, + "outputs": [], + "source": [ + "# Lets print out summary info just to see we have the same chunks.\n", + "\n", + "def print_summary_info (vectors):\n", + " print(\"VECTORS SUMMARY\\n\")\n", + " print_vectors_summary(vectors)\n", + "\n", + "\n", + "print(\"\\n\\n\\n========= VECTORS 1 =========\\n\\n\")\n", + "print_summary_info(company_vectors1)\n", + "\n", + "print(\"\\n\\n\\n========= VECTORS 2 =========\\n\\n\")\n", + "print_summary_info(company_vectors2)\n", + "\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bc085a35-f0ec-4ddb-955c-244cb2d3eb2a", + "metadata": {}, + "outputs": [], + "source": [ + "dimensions1 = get_dimensions_for_vectors(company_vectors1)\n", + "dimensions2 = get_dimensions_for_vectors(company_vectors2)\n", + "\n", + "result1 = quick_diff_check(dimensions1[0], dimensions2[0]) \n", + "result2 = quick_diff_check(dimensions1[1], dimensions2[1]) \n", + "result3 = quick_diff_check(dimensions1[2], dimensions2[2]) \n", + "\n", + "print(\"\\n\\nSUMMARY RESULTS:\")\n", + "print(\"================\\n\\n\")\n", + "print(result1) \n", + "print(result2)\n", + "print(result3)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "164cf94d-9d63-4bae-91f9-4b02da1537ae", + "metadata": {}, + "outputs": [], + "source": [ + "## ADVANCED COMPARISONS:\n", + "# More advanced comparisons (from Claude 3.7 Sonnet):\n", + "\n", + "\n", + "## 
!IMPORTANT *** Uncomment final line to execute ***\n", + "\n", + "\n", + "import numpy as np\n", + "from scipy.spatial.distance import cosine\n", + "\n", + "# Method 1: Euclidean distance (L2 norm)\n", + "def compare_embeddings_euclidean(emb1, emb2):\n", + " emb1_array = np.array(emb1)\n", + " emb2_array = np.array(emb2)\n", + " distance = np.linalg.norm(emb1_array - emb2_array)\n", + " return {\n", + " \"different\": distance > 0,\n", + " \"distance\": distance,\n", + " \"similarity\": 1/(1+distance) # Converts distance to similarity score\n", + " }\n", + "\n", + "# Method 2: Cosine similarity (common for embeddings)\n", + "def compare_embeddings_cosine(emb1, emb2):\n", + " emb1_array = np.array(emb1)\n", + " emb2_array = np.array(emb2)\n", + " similarity = 1 - cosine(emb1_array, emb2_array) # Cosine returns distance, so subtract from 1\n", + " return {\n", + " \"different\": similarity < 0.9999, # Almost identical if > 0.9999\n", + " \"similarity\": similarity\n", + " }\n", + "\n", + "# Method 3: Simple exact equality check\n", + "def are_embeddings_identical(emb1, emb2):\n", + " return np.array_equal(np.array(emb1), np.array(emb2))\n", + "\n", + "\n", + "def run_advanced_comparisons():\n", + " for i in range(0, 3):\n", + " print(f\"\\n\\nComparing vector dimensions for dimension[{i}]....\\n\")\n", + " print(\"Exactly identical? 
---> \", are_embeddings_identical(dimensions1[i], dimensions2[i]))\n",
+    "        print(\"Cosine comparison:    ---> \", compare_embeddings_cosine(dimensions1[i], dimensions2[i]))\n",
+    "        print(\"Euclidean comparison: ---> \", compare_embeddings_euclidean(dimensions1[i], dimensions2[i]))\n",
+    "\n",
+    "\n",
+    "#run_advanced_comparisons()"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.11"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}

From c06fcd3297f7ee3053379d2818c99fd06f84edb9 Mon Sep 17 00:00:00 2001
From: Lacout
Date: Fri, 28 Feb 2025 18:45:28 +0100
Subject: [PATCH 19/35] contribute to week 2 examples with a tool: python
 interpreter

---
 .../week2_code_interpreter_tool.ipynb         | 225 ++++++++++++++++++
 1 file changed, 225 insertions(+)
 create mode 100644 week2/community-contributions/week2_code_interpreter_tool.ipynb

diff --git a/week2/community-contributions/week2_code_interpreter_tool.ipynb b/week2/community-contributions/week2_code_interpreter_tool.ipynb
new file mode 100644
index 0000000..8bb724d
--- /dev/null
+++ b/week2/community-contributions/week2_code_interpreter_tool.ipynb
@@ -0,0 +1,225 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "d006b2ea-9dfe-49c7-88a9-a5a0775185fd",
+   "metadata": {},
+   "source": [
+    "# A tool to evaluate a mathematical expression\n",
+    "\n",
+    "This week the tool used in FlightAI was a database lookup function.\n",
+    "\n",
+    "Here I implement a python code interpreter function as a tool."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7b0e8691-71f9-486c-859d-ea371401dfa9", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import json\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import gradio as gr" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8e2792ae-ff53-4b83-b2c3-866533ba2b29", + "metadata": {}, + "outputs": [], + "source": [ + "# Load environment variables in a file called .env\n", + "# Print the key prefixes to help with any debugging\n", + "\n", + "load_dotenv()\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", + "google_api_key = os.getenv('GOOGLE_API_KEY')\n", + "\n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")\n", + " \n", + "if anthropic_api_key:\n", + " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", + "else:\n", + " print(\"Anthropic API Key not set\")\n", + "\n", + "if google_api_key:\n", + " print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", + "else:\n", + " print(\"Google API Key not set\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "79e44ee9-af02-448c-a747-17780ee55791", + "metadata": {}, + "outputs": [], + "source": [ + "openai = OpenAI()\n", + "MODEL = \"gpt-4o-mini\"" + ] + }, + { + "cell_type": "markdown", + "id": "33ec55b1-0eff-43f1-9346-28145fa2fc47", + "metadata": {}, + "source": [ + "# Defining the tool function\n", + "\n", + "Add print statements to make sure the function is used instead of the native GPT interpreter capability.\n", + "\n", + "I used multi-shot examples in the system prompt to make sure GPT generates the code in the format that the tool accepts."
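An alternative to packing multi-shot examples into the system prompt is to supply them as explicit user/assistant message pairs, which many chat models imitate reliably. A hedged sketch — the variable name `few_shot_messages` is illustrative, not part of this notebook:

```python
# Sketch: few-shot examples expressed as explicit message pairs rather than
# packed into a single system prompt string.
few_shot_messages = [
    {"role": "system", "content": "Generate python code; always assign the final result to 'interpreter_result'."},
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "interpreter_result = 2+2"},
    {"role": "user", "content": "What is log(5)?"},
    {"role": "assistant", "content": "import math; interpreter_result = math.log(5)"},
]
# At request time, these would be prepended:
# messages = few_shot_messages + history + [{"role": "user", "content": message}]
```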
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "94e0e171-4975-457b-88cb-c0d90f51ca65", + "metadata": {}, + "outputs": [], + "source": [ + "def evaluate_math_expression(my_code):\n", + " print(f\"EXECUTING FUNCTION WITH CODE: {my_code}\")\n", + " exec(my_code)\n", + " r = locals()['interpreter_result'] \n", + " return r\n", + "\n", + "\n", + "math_function = {\n", + " \"name\": \"evaluate_math_expression\",\n", + " \"description\": \"Give the result of a math expression. \\\n", + " Call this whenever you need to know the result of a mathematical expression. \\\n", + " Generate python code ALWAYS with the final result assigned to a variable called 'interpreter_result'. \\\n", + " For example when a user asks 'What is 2+2' generate 'interpreter_result = 2+2', and pass this code to the tool. \\\n", + " Another example if a user ask 'What is log(5)' generate 'import math; interpreter_result = math.log(5)' and pass this code to the tool.\",\n", + " \n", + " \"parameters\": {\n", + " \"type\": \"object\",\n", + " \"properties\": {\n", + " \"my_code\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The python math expression to evaluate\",\n", + " },\n", + " },\n", + " \"required\": [\"my_code\"],\n", + " \"additionalProperties\": False\n", + " }\n", + "}\n", + "\n", + "tools = [{\"type\": \"function\", \"function\": math_function}]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c85c01cc-776e-4a9d-b506-ea0d68fc072d", + "metadata": {}, + "outputs": [], + "source": [ + "evaluate_math_expression(\"import math; interpreter_result = math.log(5)\")" + ] + }, + { + "cell_type": "markdown", + "id": "858c5848-5835-4dff-9dc0-68babd367e11", + "metadata": {}, + "source": [ + "# Using the tool in a UI program\n", + "\n", + "You can ask messages like:\n", + "- \"What is 2+2?\"\n", + "- \"What is 3 power 2?\"\n", + "- \"I have 25 apples. I buy 10 apples. 
How many apples do I have?\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c119b48b-d4b4-41ae-aa2f-2ec2f09af2f0", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"You are a math assistant. \\\n", + "Generate python code to give the result of a math expression, always name the result 'interpreter_result'. \\\n", + "For example when a user asks 'What is 2+2', generate 'interpreter_result = 2+2' and pass this code to the tool. \\\n", + "Another example: if a user asks 'What is log(5)' generate 'import math; interpreter_result = math.log(5)'\"\n", + "\n", + "def chat(message, history):\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n", + "\n", + " if response.choices[0].finish_reason==\"tool_calls\":\n", + " message = response.choices[0].message\n", + " print(message)\n", + " response = handle_tool_call(message)\n", + " print(response)\n", + " messages.append(message)\n", + " messages.append(response)\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages)\n", + " \n", + " return response.choices[0].message.content\n", + "\n", + "\n", + "def handle_tool_call(message):\n", + " tool_call = message.tool_calls[0]\n", + " arguments = json.loads(tool_call.function.arguments)\n", + " my_code = arguments.get('my_code')\n", + " interpreter_result = evaluate_math_expression(my_code)\n", + " response = {\n", + " \"role\": \"tool\",\n", + " \"content\": json.dumps({\"my_code\": my_code,\"interpreter_result\": interpreter_result}),\n", + " \"tool_call_id\": tool_call.id\n", + " }\n", + " return response"
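Because `exec` will run anything the model emits, a stricter design is to parse the expression and evaluate only a whitelist of arithmetic nodes. A minimal sketch under that assumption — `safe_eval` is our own illustrative name, not part of this notebook, and it supports only `+`, `-`, `*`, `/`, `**`, unary minus, and a few `math` functions:

```python
import ast
import math
import operator

# Sketch: whitelist-based evaluator for model-generated math expressions.
# Anything outside the whitelist (imports, attribute access, etc.) is rejected.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
        ast.Div: operator.truediv, ast.Pow: operator.pow, ast.USub: operator.neg}
_FUNCS = {"log", "sqrt", "sin", "cos", "exp"}

def safe_eval(expr: str):
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in _FUNCS):
            return getattr(math, node.func.id)(*[walk(a) for a in node.args])
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("2 + 2"))  # 4
print(safe_eval("log(5)"))
```

The trade-off is that the system prompt must then ask for bare expressions like `log(5)` rather than assignment statements.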
"cell_type": "code", + "execution_count": null, + "id": "75c81d73-d2d6-4e6b-8511-94d4a725f595", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 891b161b4832adc5c7dbba0d0f19827b4d9e4462 Mon Sep 17 00:00:00 2001 From: hun-bot Date: Sat, 1 Mar 2025 16:14:19 +0900 Subject: [PATCH 20/35] Added my contributions to community-contributions --- .../Week1-Day2-Ollama-Exercise.ipynb | 57 +++++++++++++++++++ 1 file changed, 57 insertions(+) create mode 100644 week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb diff --git a/week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb b/week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb new file mode 100644 index 0000000..4c3e3ab --- /dev/null +++ b/week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb @@ -0,0 +1,57 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "fad31e32-2e42-42ae-ae63-c15d90292839", + "metadata": {}, + "source": [ + "# First Project\n", + "\n", + "Day1" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "e95ac7f2-5192-4f83-acf3-61df30cd3109", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import requests\n", + "from bs4 import BeautifulSoup\n", + "from IPython.display import Markdown, display" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "08405038-4115-487f-9efc-de58572453c1", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + 
"language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 9bb5816871124a816cf23da28d1f010dccfa5140 Mon Sep 17 00:00:00 2001 From: Zoya Hammad Date: Sat, 1 Mar 2025 17:21:56 +0500 Subject: [PATCH 21/35] Added my contributions to community-contributions --- .../week 2 - multi modal StudyAI.ipynb | 305 ++++++++++++++++++ 1 file changed, 305 insertions(+) create mode 100644 week2/community-contributions/week 2 - multi modal StudyAI.ipynb diff --git a/week2/community-contributions/week 2 - multi modal StudyAI.ipynb b/week2/community-contributions/week 2 - multi modal StudyAI.ipynb new file mode 100644 index 0000000..6eeb971 --- /dev/null +++ b/week2/community-contributions/week 2 - multi modal StudyAI.ipynb @@ -0,0 +1,305 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "6aa646e3-7a57-461a-b69a-073179effa18", + "metadata": {}, + "source": [ + "## Additional End of week Exercise - week 2\n", + "\n", + "This includes \n", + "- Gradio UI\n", + "- use of the system prompt to add expertise\n", + "- audio input so you can talk to it\n", + "- respond with audio" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "72f3dca4-b052-4e9f-90c8-f42e667c165c", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "from IPython.display import Markdown, display, update_display\n", + "import gradio as gr\n", + "import json" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "23570b9f-8c7a-4cc7-b809-3505334b60a7", + "metadata": {}, + "outputs": [], + "source": [ + "# Load environment variables in a file called .env\n", + "\n", + "load_dotenv(override=True)\n", + "openai_api_key = 
os.getenv('OPENAI_API_KEY')\n", + "openai = OpenAI()\n", + "MODEL = 'gpt-4o-mini'" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "d379178a-8672-4e6f-a380-ad8d85f5c64e", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"\"\"You are a personal study tutor, designed to provide clear, yet brief and succinct answers to \n", + "students who ask you questions. The topics are related to data science, computer science \n", + "and technology in general, so you are allowed to use a moderate level of jargon. Explain in \n", + "simple terminology, so a student can easily understand. \n", + "\n", + "You may also be asked about prices for special courses. In this case, respond that you have no such\n", + "data available. \n", + "\n", + "\"\"\"\n", + "# Use a tabular format where possible \n", + "# for ease of information flow " + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "4745d439-c66e-4e5c-b5d4-9f0ba97aefdc", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(history):\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history\n", + " response = openai.chat.completions.create(model=MODEL, messages=messages)\n", + "\n", + " reply = response.choices[0].message.content\n", + " history += [{\"role\":\"assistant\", \"content\":reply}]\n", + "\n", + " # Comment out or delete the next line if you'd rather skip Audio for now..\n", + " talker(reply)\n", + " \n", + " return history" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "a8b31799-df86-4151-98ea-66ef50fe767e", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Requirement already satisfied: openai-whisper in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (20240930)\n", + "Requirement already satisfied: numba in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from openai-whisper) (0.61.0)\n", + "Requirement already satisfied: numpy in 
c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from openai-whisper) (1.26.4)\n", + "Requirement already satisfied: torch in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from openai-whisper) (2.6.0)\n", + "Requirement already satisfied: tqdm in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from openai-whisper) (4.67.1)\n", + "Requirement already satisfied: more-itertools in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from openai-whisper) (10.6.0)\n", + "Requirement already satisfied: tiktoken in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from openai-whisper) (0.9.0)\n", + "Requirement already satisfied: llvmlite<0.45,>=0.44.0dev0 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from numba->openai-whisper) (0.44.0)\n", + "Requirement already satisfied: regex>=2022.1.18 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from tiktoken->openai-whisper) (2024.11.6)\n", + "Requirement already satisfied: requests>=2.26.0 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from tiktoken->openai-whisper) (2.32.3)\n", + "Requirement already satisfied: filelock in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from torch->openai-whisper) (3.17.0)\n", + "Requirement already satisfied: typing-extensions>=4.10.0 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from torch->openai-whisper) (4.12.2)\n", + "Requirement already satisfied: sympy!=1.13.2,>=1.13.1 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from torch->openai-whisper) (1.13.3)\n", + "Requirement already satisfied: networkx in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from torch->openai-whisper) (3.4.2)\n", + "Requirement already satisfied: jinja2 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from torch->openai-whisper) (3.1.5)\n", + "Requirement already satisfied: fsspec in 
c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from torch->openai-whisper) (2024.12.0)\n", + "Requirement already satisfied: colorama in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from tqdm->openai-whisper) (0.4.6)\n", + "Requirement already satisfied: charset_normalizer<4,>=2 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from requests>=2.26.0->tiktoken->openai-whisper) (3.4.1)\n", + "Requirement already satisfied: idna<4,>=2.5 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from requests>=2.26.0->tiktoken->openai-whisper) (3.10)\n", + "Requirement already satisfied: urllib3<3,>=1.21.1 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from requests>=2.26.0->tiktoken->openai-whisper) (2.3.0)\n", + "Requirement already satisfied: certifi>=2017.4.17 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from requests>=2.26.0->tiktoken->openai-whisper) (2025.1.31)\n", + "Requirement already satisfied: mpmath<1.4,>=1.1.0 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from sympy!=1.13.2,>=1.13.1->torch->openai-whisper) (1.3.0)\n", + "Requirement already satisfied: MarkupSafe>=2.0 in c:\\users\\92310\\anaconda3\\envs\\llms\\lib\\site-packages (from jinja2->torch->openai-whisper) (2.1.5)\n" + ] + } + ], + "source": [ + "!pip install openai-whisper" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "9f5b8e51-2833-44be-a4f4-63c4683f2b6e", + "metadata": {}, + "outputs": [], + "source": [ + "import whisper\n", + "\n", + "def transcribe_audio(audio):\n", + " if audio is None:\n", + " return \"No audio received.\"\n", + " \n", + " model = whisper.load_model(\"base\") # You can use \"tiny\", \"small\", etc.\n", + " result = model.transcribe(audio)\n", + " \n", + " return result[\"text\"]" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "e55f8e43-2da1-4f2a-bcd4-3fffa830db48", + "metadata": {}, + "outputs": [], + "source": [ + "import 
base64\n", + "from io import BytesIO\n", + "from PIL import Image\n", + "from IPython.display import Audio, display\n", + "\n", + "def talker(message):\n", + " response = openai.audio.speech.create(\n", + " model=\"tts-1\",\n", + " voice=\"onyx\",\n", + " input=message)\n", + "\n", + " audio_stream = BytesIO(response.content)\n", + " output_filename = \"output_audio.mp3\"\n", + " with open(output_filename, \"wb\") as f:\n", + " f.write(audio_stream.read())\n", + "\n", + " # Play the generated audio\n", + " display(Audio(output_filename, autoplay=True))" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "cb3107a7-bfdc-4255-825f-bfabcf458c0c", + "metadata": {}, + "outputs": [], + "source": [ + "# More involved Gradio code as we're not using the preset Chat interface!\n", + "# Passing in inbrowser=True in the last line will cause a Gradio window to pop up immediately.\n", + "\n", + "with gr.Blocks() as ui:\n", + " with gr.Row():\n", + " chatbot = gr.Chatbot(height=400,type=\"messages\")\n", + " with gr.Row():\n", + " entry = gr.Textbox(label=\"Chat with our StudyAI Assistant:\")\n", + " # with gr.Row():\n", + " # entry = gr.Textbox(label=\"Speak or Type:\", placeholder=\"Speak your question...\", interactive=True, microphone=True)\n", + " with gr.Row():\n", + " audio_input = gr.Audio(type=\"filepath\", label=\"Speak your question\")\n", + " with gr.Row():\n", + " clear = gr.Button(\"Clear\")\n", + "\n", + " def do_entry(message, history):\n", + " history += [{\"role\":\"user\", \"content\":message}]\n", + " return \"\", history\n", + "\n", + " def handle_audio(audio, history):\n", + " text = transcribe_audio(audio)\n", + " history += [{\"role\": \"user\", \"content\": text}]\n", + " return \"\", history\n", + "\n", + " entry.submit(do_entry, inputs=[entry, chatbot], outputs=[entry, chatbot]).then(\n", + " chat, inputs=[chatbot], outputs=[chatbot]\n", + " )\n", + "\n", + " audio_input.change(handle_audio, inputs=[audio_input, chatbot], outputs=[entry, 
chatbot]).then(\n", + " chat, inputs=[chatbot], outputs=[chatbot]\n", + " )\n", + " \n", + " clear.click(lambda: [], inputs=None, outputs=chatbot, queue=False)" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "73e0a776-d43e-4b04-a37f-a27d3714cf47", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Rerunning server... use `close()` to stop if you need to change `launch()` parameters.\n", + "----\n", + "\n", + "To create a public link, set `share=True` in `launch()`.\n" + ] + }, + { + "data": { + "text/html": [ + "
    " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [] + }, + "execution_count": 13, + "metadata": {}, + "output_type": "execute_result" + }, + { + "data": { + "text/html": [ + "\n", + " \n", + " " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "ui.launch(inbrowser=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bcd45503-d314-4b28-a41c-4dbb87059188", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 0e841efd1dc5637926fdfede688aa86287029a3f Mon Sep 17 00:00:00 2001 From: Zoya Hammad Date: Sat, 1 Mar 2025 17:23:54 +0500 Subject: [PATCH 22/35] Added contributions to community-contributions --- .../day 4 w2 - course booking assistant.ipynb | 361 ++++++++++++++++++ .../day3 w2 -programming tutor.ipynb | 209 ++++++++++ 2 files changed, 570 insertions(+) create mode 100644 week2/community-contributions/day 4 w2 - course booking assistant.ipynb create mode 100644 week2/community-contributions/day3 w2 -programming tutor.ipynb diff --git a/week2/community-contributions/day 4 w2 - course booking assistant.ipynb b/week2/community-contributions/day 4 w2 - course booking assistant.ipynb new file mode 100644 index 0000000..aedaa59 --- /dev/null +++ b/week2/community-contributions/day 4 w2 - course booking assistant.ipynb @@ -0,0 +1,361 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "5d799d2a-6e58-4a83-b17a-dbbc40efdc39", + "metadata": {}, + "source": [ + "## 
Project - Course Booking AI Assistant\n", + "AI Customer Support Bot that \n", + "- Returns Prices\n", + "- Books Tickets\n", + "- Adds Information to Text File" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "b1ad9acd-a702-48a3-8ff5-d536bcac8030", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import json\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import gradio as gr" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "74adab0c-99b3-46cd-a79f-320a3e74138a", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "OpenAI API Key exists and begins sk-proj-\n" + ] + } + ], + "source": [ + "# Initialization\n", + "\n", + "load_dotenv(override=True)\n", + "\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")\n", + " \n", + "MODEL = \"gpt-4o-mini\"\n", + "openai = OpenAI()" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "id": "8d3240a4-99c1-4c07-acaa-ecbb69ffd2e4", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"You are a helpful assistant for an Online Course Platform called StudyAI. \"\n", + "system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n", + "system_message += \"Always be accurate. If you don't know the answer, say so.\"\n", + "system_message += \"If you are given a partial name, for example 'discrete' instead of 'discrete structures' \\\n", + "ask the user if they meant to say 'discrete structures', and then display the price. The user may also use \\\n", + "acronyms like 'PF' instead of programming fundamentals or 'OOP' to mean 'Object oriented programming'. 
\\\n", + "Clarify wh\"" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "9a1b8d5f-f893-477b-8396-ff7d697eb0c3", + "metadata": {}, + "outputs": [], + "source": [ + "course_prices = {\"programming fundamentals\": \"$19\", \"discrete structures\": \"$39\", \"operating systems\": \"$24\", \"object oriented programming\": \"$39\"}\n", + "\n", + "def get_course_price(course):\n", + " print(f\"Tool get_course_price called for {course}\")\n", + " course = course.lower()\n", + " return course_prices.get(course, \"Unknown\")\n", + "\n", + "def enroll_in_course(course):\n", + " print(f'Tool enroll_in_course_ called for {course}')\n", + " course_price = get_course_price(course)\n", + " if course_price != 'Unknown':\n", + " with open('enrolled_courses.txt', 'a') as file: \n", + " file.write(course + \"\\n\")\n", + " return 'Successfully enrolled in course'\n", + " else:\n", + " return 'Enrollment failed, no such course available'" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "330d2b94-a8c5-4967-ace7-15d2cd52d7ae", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tool get_course_price called for graph theory\n", + "Tool get_course_price called for discrete structures\n" + ] + }, + { + "data": { + "text/plain": [ + "'$39'" + ] + }, + "execution_count": 5, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "get_course_price('graph theory')\n", + "get_course_price('discrete structures')" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "5bb65830-fab8-45a7-bf43-7e52186915a0", + "metadata": {}, + "outputs": [], + "source": [ + "price_function = {\n", + " \"name\": \"get_course_price\",\n", + " \"description\": \"Get the price of a course. 
Call this whenever you need to know the course price, for example when a customer asks 'How much is a ticket for this course?'\",\n", + " \"parameters\": {\n", + " \"type\": \"object\",\n", + " \"properties\": {\n", + " \"course\": {\n", + " \"type\": \"string\",\n", + " \"description\": \"The course that the customer wants to purchase\",\n", + " },\n", + " },\n", + " \"required\": [\"course\"],\n", + " \"additionalProperties\": False\n", + " }\n", + "}\n", + "\n", + "enroll_function = {\n", + " \"name\": \"enroll_in_course\",\n", + " \"description\":\"Get the success status of course enrollment. Call whenever a customer wants to enroll in a course\\\n", + " for example, if they say 'I want to purchase this course' or 'I want to enroll in this course'\",\n", + " \"parameters\":{\n", + " \"type\":\"object\",\n", + " \"properties\":{\n", + " \"course\":{\n", + " \"type\":\"string\",\n", + " \"description\": \"The course that the customer wants to purchase\",\n", + " },\n", + " },\n", + " \"required\": [\"course\"],\n", + " \"additionalProperties\": False\n", + " } \n", + "}" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "08af86b9-3aaa-4b6b-bf7c-ee668ba1cbfe", + "metadata": {}, + "outputs": [], + "source": [ + "tools = [\n", + " {\"type\":\"function\",\"function\":price_function},\n", + " {\"type\":\"function\",\"function\":enroll_function}\n", + "]" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "482efc34-ff1f-4146-9570-58b4d59c3b2f", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(message,history):\n", + " messages = [{\"role\":\"system\",\"content\":system_message}] + history + [{\"role\":\"user\",\"content\":message}]\n", + " response = openai.chat.completions.create(model=MODEL,messages=messages,tools=tools)\n", + "\n", + " if response.choices[0].finish_reason == \"tool_calls\":\n", + " message = response.choices[0].message\n", + " messages.append(message)\n", + " for tool_call in message.tool_calls:\n", + " 
messages.append(handle_tool_call(tool_call))\n", + " response = openai.chat.completions.create(model=MODEL,messages=messages)\n", + "\n", + " return response.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "f725b4fb-d477-4d7d-80b5-5d70e1b25a86", + "metadata": {}, + "outputs": [], + "source": [ + "# We have to write that function handle_tool_call:\n", + "\n", + "def handle_tool_call(tool_call):\n", + " function = tool_call.function.name\n", + " arguments = json.loads(tool_call.function.arguments)\n", + " match function:\n", + " case 'get_course_price':\n", + " course = arguments.get('course')\n", + " price = get_course_price(course)\n", + " return {\n", + " \"role\": \"tool\",\n", + " \"content\": json.dumps({\"course\": course,\"price\": price}),\n", + " \"tool_call_id\": tool_call.id\n", + " }\n", + " case 'enroll_in_course':\n", + " course = arguments.get('course')\n", + " status = enroll_in_course(course)\n", + " return {\n", + " \"role\": \"tool\",\n", + " \"content\": json.dumps({\"course\": course, \"status\": status}),\n", + " \"tool_call_id\": tool_call.id\n", + " }\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "c446272a-9ce1-4ffd-9bc8-483d782810b4", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "* Running on local URL: http://127.0.0.1:7864\n", + "\n", + "To create a public link, set `share=True` in `launch()`.\n" + ] + }, + { + "data": { + "text/html": [ + "
    " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [] + }, + "execution_count": 13, + "metadata": {}, + "output_type": "execute_result" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Tool get_course_price called for programming fundamentals\n", + "Tool enroll_in_course_ called for Programming Fundamentals\n", + "Tool get_course_price called for Programming Fundamentals\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Traceback (most recent call last):\n", + " File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\queueing.py\", line 625, in process_events\n", + " response = await route_utils.call_process_api(\n", + " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", + " File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\route_utils.py\", line 322, in call_process_api\n", + " output = await app.get_blocks().process_api(\n", + " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", + " File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\blocks.py\", line 2096, in process_api\n", + " result = await self.call_function(\n", + " ^^^^^^^^^^^^^^^^^^^^^^^^^\n", + " File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\blocks.py\", line 1641, in call_function\n", + " prediction = await fn(*processed_input)\n", + " ^^^^^^^^^^^^^^^^^^^^^^^^^^\n", + " File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\utils.py\", line 857, in async_wrapper\n", + " response = await f(*args, **kwargs)\n", + " ^^^^^^^^^^^^^^^^^^^^^^^^\n", + " File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\chat_interface.py\", line 862, in _submit_fn\n", + " response = await anyio.to_thread.run_sync(\n", + " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", + " File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\anyio\\to_thread.py\", line 56, in run_sync\n", + " 
return await get_async_backend().run_sync_in_worker_thread(\n", + " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", + " File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 2461, in run_sync_in_worker_thread\n", + " return await future\n", + " ^^^^^^^^^^^^\n", + " File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 962, in run\n", + " result = context.run(func, *args)\n", + " ^^^^^^^^^^^^^^^^^^^^^^^^\n", + " File \"C:\\Users\\92310\\AppData\\Local\\Temp\\ipykernel_3348\\1161680098.py\", line 9, in chat\n", + " messages.append(handle_tool_call(tool_call))\n", + " ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", + " File \"C:\\Users\\92310\\AppData\\Local\\Temp\\ipykernel_3348\\1187326431.py\", line 17, in handle_tool_call\n", + " status = enroll_in_course(course)\n", + " ^^^^^^^^^^^^^^^^^^^^^^^^\n", + " File \"C:\\Users\\92310\\AppData\\Local\\Temp\\ipykernel_3348\\2541918318.py\", line 13, in enroll_in_course\n", + " file.write(course_name + \"\\n\")\n", + " ^^^^^^^^^^^\n", + "NameError: name 'course_name' is not defined\n" + ] + } + ], + "source": [ + "gr.ChatInterface(fn=chat,type=\"messages\").launch(inbrowser=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1fe714a3-f793-4c3b-b5aa-6c81b82aea1b", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/week2/community-contributions/day3 w2 -programming tutor.ipynb b/week2/community-contributions/day3 w2 -programming tutor.ipynb new file mode 100644 
index 0000000..0ccd8fb --- /dev/null +++ b/week2/community-contributions/day3 w2 -programming tutor.ipynb @@ -0,0 +1,209 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "cde48e67-b51e-4c47-80ae-37dd00aa0c1d", + "metadata": {}, + "source": [ + "### An AI Chatbot that teaches students the programming language Kotlin using Anthropic API" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "c658ac85-6087-4a2c-b23f-1b92c17f0db3", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import gradio as gr\n", + "import anthropic" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "46df0488-f874-41e0-a6a4-9a64aa7be53c", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "OpenAI API Key exists and begins sk-proj-\n" + ] + } + ], + "source": [ + "# Load environment variables \n", + "\n", + "load_dotenv(override=True)\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + " \n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "7eadc218-5b10-4174-bf26-575361640524", + "metadata": {}, + "outputs": [], + "source": [ + "openai = OpenAI()" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "e7484731-ac84-405a-a688-6e81d139c5ce", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"You are a helpful programming study assistant\"" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "54e82f5a-993f-4a95-9d9d-caf35dbc4e76", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(message, history):\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + "\n", + " print(\"History 
is:\")\n", + " print(history)\n", + " print(\"And messages is:\")\n", + " print(messages)\n", + "\n", + " stream = openai.chat.completions.create(model='gpt-4o-mini', messages=messages, stream=True)\n", + "\n", + " response = \"\"\n", + " for chunk in stream:\n", + " response += chunk.choices[0].delta.content or ''\n", + " yield response" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "id": "5941ed67-e2a7-41bc-a8a3-079e9f1fdb64", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "* Running on local URL: http://127.0.0.1:7864\n", + "\n", + "To create a public link, set `share=True` in `launch()`.\n" + ] + }, + { + "data": { + "text/html": [ + "
    " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [] + }, + "execution_count": 20, + "metadata": {}, + "output_type": "execute_result" + }, + { + "name": "stdout", + "output_type": "stream", + "text": [ + "History is:\n", + "[]\n", + "And messages is:\n", + "[{'role': 'system', 'content': 'You are a helpful programming study assistantWhenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone.'}, {'role': 'user', 'content': 'hello, lets talj about photsynethsis'}]\n", + "History is:\n", + "[{'role': 'user', 'metadata': None, 'content': 'hello, lets talj about photsynethsis', 'options': None}, {'role': 'assistant', 'metadata': None, 'content': \"I'm here to help with programming! If you have any questions or topics related to coding, feel free to ask!\", 'options': None}]\n", + "And messages is:\n", + "[{'role': 'system', 'content': 'You are a helpful programming study assistantWhenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone.'}, {'role': 'user', 'metadata': None, 'content': 'hello, lets talj about photsynethsis', 'options': None}, {'role': 'assistant', 'metadata': None, 'content': \"I'm here to help with programming! 
If you have any questions or topics related to coding, feel free to ask!\", 'options': None}, {'role': 'user', 'content': 'how does photosynthesis work'}]\n" + ] + } + ], + "source": [ + "gr.ChatInterface(fn=chat, type=\"messages\").launch(inbrowser=True)" + ] + }, + { + "cell_type": "code", + "execution_count": 21, + "id": "e8fcfe68-bbf6-4058-acc9-0230c96608c2", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "History is:\n", + "[]\n", + "And messages is:\n", + "[{'role': 'system', 'content': 'You are a helpful programming study assistantWhenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone.Whenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone. Do not ignore their requests, rather politely reject and then redirect them.'}, {'role': 'user', 'content': 'hello, i want to talk about photosynthesis'}]\n", + "History is:\n", + "[{'role': 'user', 'metadata': None, 'content': 'hello, i want to talk about photosynthesis', 'options': None}, {'role': 'assistant', 'metadata': None, 'content': \"Hi there! I'm here to help with programming topics. 
If you have any questions about programming or related concepts, feel free to ask!\", 'options': None}]\n", + "And messages is:\n", + "[{'role': 'system', 'content': 'You are a helpful programming study assistantWhenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone.Whenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone. Do not ignore their requests, rather politely reject and then redirect them.'}, {'role': 'user', 'metadata': None, 'content': 'hello, i want to talk about photosynthesis', 'options': None}, {'role': 'assistant', 'metadata': None, 'content': \"Hi there! I'm here to help with programming topics. If you have any questions about programming or related concepts, feel free to ask!\", 'options': None}, {'role': 'user', 'content': 'why not photosynthesis'}]\n" + ] + } + ], + "source": [ + "system_message += \"Whenever the user talks about a topic that is not connected to programming, \\\n", + "nudge them in the right direction by stating that you are here to help with programming. Encourage \\\n", + "the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge \\\n", + "if the user tries to misdirect you towards irrelevant topics. Maintain a friendly tone. 
Do not ignore \\\n", + "their requests, rather politely reject and then redirect them.\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "090e7d49-fcbf-4715-b120-8d7aa91d165f", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 8338dfc2487ab1eefb433b6e1a30d3145d26ce78 Mon Sep 17 00:00:00 2001 From: Edward Donner Date: Sat, 1 Mar 2025 15:03:18 -0500 Subject: [PATCH 23/35] Minor improvements including consistently setting override to True when loading dotenv --- week1/troubleshooting.ipynb | 7 ++- ...day4-airlines-project-fullyCustomize.ipynb | 18 ++++---- week3/day3.ipynb | 8 ---- week4/day3.ipynb | 2 +- week4/day4.ipynb | 2 +- week5/day1.ipynb | 2 +- week5/day2.ipynb | 2 +- week5/day3.ipynb | 2 +- week5/day4.5.ipynb | 2 +- week5/day4.ipynb | 2 +- week5/day5.ipynb | 2 +- week6/day1.ipynb | 2 +- week6/day2.ipynb | 2 +- week6/day3.ipynb | 2 +- week6/day4-results.ipynb | 2 +- week6/day4.ipynb | 2 +- week6/day5-results.ipynb | 4 +- week6/day5.ipynb | 2 +- week8/day1.ipynb | 45 +++++++++++++++++-- week8/day2.0.ipynb | 2 +- week8/day2.3.ipynb | 2 +- week8/day2.4.ipynb | 2 +- week8/day3.ipynb | 2 +- week8/day4.ipynb | 2 +- week8/hello.py | 11 +++++ week8/pricer_service2.py | 1 - 26 files changed, 89 insertions(+), 43 deletions(-) diff --git a/week1/troubleshooting.ipynb b/week1/troubleshooting.ipynb index c9dfe43..51146a4 100644 --- a/week1/troubleshooting.ipynb +++ b/week1/troubleshooting.ipynb @@ -427,7 +427,12 @@ "with: \n", "`import httpx` \n", "`openai = OpenAI(http_client=httpx.Client(verify=False))` \n", - 
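The logged system message above shows the nudge instructions duplicated: the `system_message += ...` cell appends the same text every time it is re-run. A small sketch of an idempotent alternative is below (the constant names and shortened wording are illustrative, not from the notebook):

```python
# Build the system prompt from fixed parts so re-running the cell
# can never duplicate instructions, unlike an in-place `+=`.
BASE_PROMPT = "You are a helpful programming study assistant. "
NUDGE = (
    "Whenever the user talks about a topic that is not connected to programming, "
    "nudge them in the right direction by stating that you are here to help with "
    "programming. Maintain a friendly tone. Do not ignore their requests; "
    "politely reject and then redirect them."
)

def build_system_message() -> str:
    # Recomputed from constants, so the result is identical however often it runs.
    return BASE_PROMPT + NUDGE

system_message = build_system_message()
```

With this pattern, restarting cells out of order leaves the prompt unchanged, which also keeps the printed `messages` debug output stable between runs.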
"And if that works, you're in good shape. You'll just have to change the labs in the same way any time you hit this cert error.\n", + "And also please replace: \n", + "`requests.get(url, headers=headers)` \n", + "with: \n", + "`requests.get(url, headers=headers, verify=False)` \n", + "And if that works, you're in good shape. You'll just have to change the labs in the same way any time you hit this cert error. \n", + "This approach isn't OK for production code, but it's fine for our experiments. You may need to contact IT support to understand whether there are restrictions in your environment.\n", "\n", "## If all else fails:\n", "\n", diff --git a/week2/community-contributions/day4-airlines-project-fullyCustomize.ipynb b/week2/community-contributions/day4-airlines-project-fullyCustomize.ipynb index 04a0cf4..576c4e9 100644 --- a/week2/community-contributions/day4-airlines-project-fullyCustomize.ipynb +++ b/week2/community-contributions/day4-airlines-project-fullyCustomize.ipynb @@ -82,7 +82,7 @@ }, { "cell_type": "code", - "execution_count": 155, + "execution_count": null, "id": "0a521d84-d07c-49ab-a0df-d6451499ed97", "metadata": {}, "outputs": [], @@ -116,7 +116,7 @@ }, { "cell_type": "code", - "execution_count": 156, + "execution_count": null, "id": "61a2a15d-b559-4844-b377-6bd5cb4949f6", "metadata": {}, "outputs": [], @@ -212,7 +212,7 @@ }, { "cell_type": "code", - "execution_count": 157, + "execution_count": null, "id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2", "metadata": {}, "outputs": [], @@ -223,7 +223,7 @@ }, { "cell_type": "code", - "execution_count": 158, + "execution_count": null, "id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85", "metadata": {}, "outputs": [], @@ -373,7 +373,7 @@ }, { "cell_type": "code", - "execution_count": 159, + "execution_count": null, "id": "39fb9008", "metadata": {}, "outputs": [], @@ -475,7 +475,7 @@ }, { "cell_type": "code", - "execution_count": 160, + "execution_count": null, "id": "1f003836", "metadata": {}, "outputs": [], @@ 
-547,7 +547,7 @@ }, { "cell_type": "code", - "execution_count": 161, + "execution_count": null, "id": "f6b34b32", "metadata": {}, "outputs": [], @@ -618,7 +618,7 @@ ], "metadata": { "kernelspec": { - "display_name": "llm_env", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -632,7 +632,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.9" + "version": "3.11.11" } }, "nbformat": 4, diff --git a/week3/day3.ipynb b/week3/day3.ipynb index 03c847e..de7b73b 100644 --- a/week3/day3.ipynb +++ b/week3/day3.ipynb @@ -11,14 +11,6 @@ "\n", "https://colab.research.google.com/drive/1WD6Y2N7ctQi1X9wa6rpkg8UfyA4iSVuz?usp=sharing" ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "e9289ba7-200c-43a9-b67a-c5ce826c9537", - "metadata": {}, - "outputs": [], - "source": [] } ], "metadata": { diff --git a/week4/day3.ipynb b/week4/day3.ipynb index bfa8765..68cca5b 100644 --- a/week4/day3.ipynb +++ b/week4/day3.ipynb @@ -86,7 +86,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')" ] diff --git a/week4/day4.ipynb b/week4/day4.ipynb index 9317f9a..91802cf 100644 --- a/week4/day4.ipynb +++ b/week4/day4.ipynb @@ -69,7 +69,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" diff --git a/week5/day1.ipynb b/week5/day1.ipynb index f4bc48e..416a1a0 100644 --- a/week5/day1.ipynb +++ b/week5/day1.ipynb @@ -57,7 +57,7 @@ "source": [ "# Load environment 
variables in a file called .env\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "openai = OpenAI()" ] diff --git a/week5/day2.ipynb b/week5/day2.ipynb index 8c19368..9ca0cb4 100644 --- a/week5/day2.ipynb +++ b/week5/day2.ipynb @@ -64,7 +64,7 @@ "source": [ "# Load environment variables in a file called .env\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')" ] }, diff --git a/week5/day3.ipynb b/week5/day3.ipynb index 349cb6b..6d2523c 100644 --- a/week5/day3.ipynb +++ b/week5/day3.ipynb @@ -70,7 +70,7 @@ "source": [ "# Load environment variables in a file called .env\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')" ] }, diff --git a/week5/day4.5.ipynb b/week5/day4.5.ipynb index a02b9cd..cf30df3 100644 --- a/week5/day4.5.ipynb +++ b/week5/day4.5.ipynb @@ -71,7 +71,7 @@ "source": [ "# Load environment variables in a file called .env\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')" ] }, diff --git a/week5/day4.ipynb b/week5/day4.ipynb index de5e45a..78b0ed7 100644 --- a/week5/day4.ipynb +++ b/week5/day4.ipynb @@ -72,7 +72,7 @@ "source": [ "# Load environment variables in a file called .env\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')" ] }, diff --git a/week5/day5.ipynb b/week5/day5.ipynb index cbd17b6..b9643a6 100644 --- a/week5/day5.ipynb +++ b/week5/day5.ipynb @@ -76,7 +76,7 @@ "source": [ "# Load environment variables in a file called .env\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = 
os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')" ] }, diff --git a/week6/day1.ipynb b/week6/day1.ipynb index c424656..804f1d0 100644 --- a/week6/day1.ipynb +++ b/week6/day1.ipynb @@ -47,7 +47,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" diff --git a/week6/day2.ipynb b/week6/day2.ipynb index d365869..55c1446 100644 --- a/week6/day2.ipynb +++ b/week6/day2.ipynb @@ -60,7 +60,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" diff --git a/week6/day3.ipynb b/week6/day3.ipynb index 45bbac2..93e0928 100644 --- a/week6/day3.ipynb +++ b/week6/day3.ipynb @@ -118,7 +118,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" diff --git a/week6/day4-results.ipynb b/week6/day4-results.ipynb index a3a44eb..75db5e1 100644 --- a/week6/day4-results.ipynb +++ b/week6/day4-results.ipynb @@ -69,7 +69,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['ANTHROPIC_API_KEY'] = 
os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" diff --git a/week6/day4.ipynb b/week6/day4.ipynb index d1c5500..6644ce2 100644 --- a/week6/day4.ipynb +++ b/week6/day4.ipynb @@ -69,7 +69,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" diff --git a/week6/day5-results.ipynb b/week6/day5-results.ipynb index f936c80..77ed407 100644 --- a/week6/day5-results.ipynb +++ b/week6/day5-results.ipynb @@ -61,7 +61,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" @@ -904,7 +904,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.10" + "version": "3.11.11" } }, "nbformat": 4, diff --git a/week6/day5.ipynb b/week6/day5.ipynb index 1886310..4a733eb 100644 --- a/week6/day5.ipynb +++ b/week6/day5.ipynb @@ -61,7 +61,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" diff --git a/week8/day1.ipynb b/week8/day1.ipynb index 5da624a..18159fd 100644 --- a/week8/day1.ipynb +++ b/week8/day1.ipynb @@ -92,7 +92,7 @@ 
"metadata": {}, "outputs": [], "source": [ - "from hello import app, hello" + "from hello import app, hello, hello_europe" ] }, { @@ -119,6 +119,35 @@ "reply" ] }, + { + "cell_type": "markdown", + "id": "a1c075e9-49c7-4ebd-812f-83196d32de32", + "metadata": {}, + "source": [ + "## Added thanks to student Tue H.\n", + "\n", + "If you look in hello.py, I've added a simple function hello_europe\n", + "\n", + "That uses the decorator: \n", + "`@app.function(image=image, region=\"eu\")`\n", + "\n", + "See the result below! More region specific settings are [here](https://modal.com/docs/guide/region-selection)\n", + "\n", + "Note that it does consume marginally more credits to specify a region." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b027da1a-c79d-42cb-810d-32ddca31aa02", + "metadata": {}, + "outputs": [], + "source": [ + "with app.run():\n", + " reply=hello_europe.remote()\n", + "reply" + ] + }, { "cell_type": "markdown", "id": "22e8d804-c027-45fb-8fef-06e7bba6295a", @@ -247,8 +276,8 @@ "metadata": {}, "outputs": [], "source": [ - "# You can also run \"modal deploy pricer_service2\" at the command line in an activated environment\n", - "!modal deploy pricer_service2" + "# You can also run \"modal deploy -m pricer_service2\" at the command line in an activated environment\n", + "!modal deploy -m pricer_service2" ] }, { @@ -264,6 +293,16 @@ "print(reply)" ] }, + { + "cell_type": "code", + "execution_count": null, + "id": "c29b8c58-4cb7-44b0-ab7e-6469d3a318e8", + "metadata": {}, + "outputs": [], + "source": [ + "!pip install --upgrade modal" + ] + }, { "cell_type": "markdown", "id": "9c1b1451-6249-4462-bf2d-5937c059926c", diff --git a/week8/day2.0.ipynb b/week8/day2.0.ipynb index 088b460..c15ff2f 100644 --- a/week8/day2.0.ipynb +++ b/week8/day2.0.ipynb @@ -58,7 +58,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 
'your-key-if-not-using-env')\n", "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')\n", "DB = \"products_vectorstore\"" diff --git a/week8/day2.3.ipynb b/week8/day2.3.ipynb index da6c3e3..b607e45 100644 --- a/week8/day2.3.ipynb +++ b/week8/day2.3.ipynb @@ -61,7 +61,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" ] diff --git a/week8/day2.4.ipynb b/week8/day2.4.ipynb index 7d357e2..3f141ab 100644 --- a/week8/day2.4.ipynb +++ b/week8/day2.4.ipynb @@ -79,7 +79,7 @@ "source": [ "# environment\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" ] diff --git a/week8/day3.ipynb b/week8/day3.ipynb index 9effc96..9188717 100644 --- a/week8/day3.ipynb +++ b/week8/day3.ipynb @@ -35,7 +35,7 @@ "source": [ "# Initialize and constants\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "MODEL = 'gpt-4o-mini'\n", "openai = OpenAI()" diff --git a/week8/day4.ipynb b/week8/day4.ipynb index 6895681..bb4c993 100644 --- a/week8/day4.ipynb +++ b/week8/day4.ipynb @@ -42,7 +42,7 @@ "metadata": {}, "outputs": [], "source": [ - "load_dotenv()\n", + "load_dotenv(override=True)\n", "DB = \"products_vectorstore\"" ] }, diff --git a/week8/hello.py b/week8/hello.py index 6b97092..bc0599e 100644 --- a/week8/hello.py +++ b/week8/hello.py @@ -16,3 +16,14 @@ def hello() -> str: data = response.json() city, region, country = data['city'], data['region'], data['country'] return f"Hello from {city}, {region}, {country}!!" + +# New - added thanks to student Tue H.! 
+ +@app.function(image=image, region="eu") +def hello_europe() -> str: + import requests + + response = requests.get('https://ipinfo.io/json') + data = response.json() + city, region, country = data['city'], data['region'], data['country'] + return f"Hello from {city}, {region}, {country}!!" diff --git a/week8/pricer_service2.py b/week8/pricer_service2.py index a09c882..16d276b 100644 --- a/week8/pricer_service2.py +++ b/week8/pricer_service2.py @@ -24,7 +24,6 @@ FINETUNED_DIR = MODEL_DIR + FINETUNED_MODEL QUESTION = "How much does this cost to the nearest dollar?" PREFIX = "Price is $" - @app.cls(image=image, secrets=secrets, gpu=GPU, timeout=1800) class Pricer: @modal.build() From 5f8b9d7c37a29654f99bbfc080790367600f791d Mon Sep 17 00:00:00 2001 From: Octavio Ortiz-Bosch Date: Sat, 1 Mar 2025 20:56:40 -0400 Subject: [PATCH 24/35] oob-title-generator --- ...Week_1-Day 2-Article_Title_Generator.ipynb | 277 ++++++++++++++++++ 1 file changed, 277 insertions(+) create mode 100644 week1/community-contributions/Week_1-Day 2-Article_Title_Generator.ipynb diff --git a/week1/community-contributions/Week_1-Day 2-Article_Title_Generator.ipynb b/week1/community-contributions/Week_1-Day 2-Article_Title_Generator.ipynb new file mode 100644 index 0000000..ac33536 --- /dev/null +++ b/week1/community-contributions/Week_1-Day 2-Article_Title_Generator.ipynb @@ -0,0 +1,277 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "603cd418-504a-4b4d-b1c3-be04febf3e79", + "metadata": {}, + "source": [ + "# Article Title Generator\n", + "\n", + "Summarization use case in which the user provides an article, which the LLM will analyze to suggest an SEO-optimized title.\n", + "\n", + "NOTES:\n", + "\n", + "1. This version does NOT support website scraping. You must copy and paste the required article.\n", + "2. The following models were configured:\n", + " a. OpenAI gpt-4o-mini\n", + " b. Llama llama3.2\n", + " c. 
Deepseek deepseek-r1:1.5b\n", + " It is possible to configure additional models by adding the new model to the MODELS dictionary and its\n", + " initialization to the CLIENTS dictionary." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "279b0c00-9bb0-4c7f-9c6d-aa0b108274b9", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "import os\n", + "from dotenv import load_dotenv\n", + "from IPython.display import Markdown, display\n", + "from openai import OpenAI" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d4730d8d-3e20-4f3c-a4ff-ed2ac0a8aa27", + "metadata": {}, + "outputs": [], + "source": [ + "# set environment variables for OpenAi\n", + "load_dotenv(override=True)\n", + "api_key = os.getenv('OPENAI_API_KEY')\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e", + "metadata": {}, + "outputs": [], + "source": [ + "# Confirming Llama is loaded\n", + "!ollama pull llama3.2" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1abbb826-de66-498c-94d8-33369ad01885", + "metadata": {}, + "outputs": [], + "source": [ + "# constants\n", + "MODELS = { 'GPT': 'gpt-4o-mini', \n", + " 'LLAMA': 'llama3.2', \n", + " 'DEEPSEEK': 'deepseek-r1:1.5b'\n", + " }\n", + "\n", + "CLIENTS = { 'GPT': OpenAI(), \n", + " 'LLAMA': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama'),\n", + " 'DEEPSEEK': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama') \n", + " }" + ] + }, + { + "cell_type": "markdown", + "id": "6f490fe4-32d5-41f3-890d-ecf4e5e01dd4", + "metadata": {}, + "source": [ + "### Copy & paste your article (without a title)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ddd76319-13ce-480b-baa7-cab6a5c88168", + "metadata": {}, + "outputs": [], + "source": [ + "# article - copy & paste your article\n", + "article = \"\"\"\n", + " REPLACE WITH YOUR ARTICLE CONTENT\n", + " \"\"\"" + ] + }, + { + 
"cell_type": "code", + "execution_count": null, + "id": "1914afad-dbd8-4c1f-8e68-80b0e5d743a9", + "metadata": {}, + "outputs": [], + "source": [ + "# system prompt\n", + "system_prompt = \"\"\"\n", + " You are an experienced SEO-focused copywriter. The user will provide an article, and your task is to analyze its content and generate the most effective, keyword-optimized title to maximize SEO performance. Respond in Markdown format.\n", + " \"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "176cfac7-5e6d-4d4a-a1c4-1b63b60de1f7", + "metadata": {}, + "outputs": [], + "source": [ + "# user prompt\n", + "user_prompt = f\"Following is the article to be analyzed. Respond in Markdown format.\\n\\n{article}\"\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c45fc7d7-08c9-4e34-b427-b928a219bb94", + "metadata": {}, + "outputs": [], + "source": [ + "# message list\n", + "messages = [\n", + " {\"role\": \"system\", \"content\": system_prompt},\n", + " {\"role\": \"user\", \"content\": user_prompt}\n", + " ]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f67b881f-1040-4cf7-82c5-e85f4c0bd252", + "metadata": {}, + "outputs": [], + "source": [ + "# call model and get answer\n", + "def get_answer(model):\n", + " # set required client\n", + " client = CLIENTS[model]\n", + "\n", + " # call model\n", + " response = client.chat.completions.create(\n", + " model=MODELS[model],\n", + " messages=messages\n", + " )\n", + "\n", + " # closing LLM client connection\n", + " client.close()\n", + " \n", + " # return answer\n", + " return response.choices[0].message.content\n", + " " + ] + }, + { + "cell_type": "markdown", + "id": "947b42ed-5b43-486d-8af3-e5b671c1fd0e", + "metadata": {}, + "source": [ + "### Get OpenAI Suggested Title" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "eb6f66e3-ab99-4f76-9358-896cb43c1fa1", + "metadata": {}, + "outputs": [], + "source": [ + "# get OpenAI 
answer\n", + "answer = get_answer('GPT')\n", + "\n", + "# display OpenAI answer\n", + "display(Markdown(f\"### {MODELS['GPT']} Answer\\n\\n{answer}\" ))" + ] + }, + { + "cell_type": "markdown", + "id": "70073ebf-a00a-416b-854d-642d450cd99b", + "metadata": {}, + "source": [ + "### Get Llama Suggested Title" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "caa190bb-de5f-45cc-b671-5d62688f7b25", + "metadata": {}, + "outputs": [], + "source": [ + "# get Llama answer\n", + "answer = get_answer('LLAMA')\n", + "\n", + "# display Llama answer\n", + "display(Markdown(f\"### {MODELS['LLAMA']} Answer\\n\\n{answer}\" ))" + ] + }, + { + "cell_type": "markdown", + "id": "811edc4f-20e2-482d-ac89-fae9d1b70bed", + "metadata": {}, + "source": [ + "### Get Deepseek Suggested Title" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "082628e4-ff4c-46dd-ae5f-76578eb017ad", + "metadata": {}, + "outputs": [], + "source": [ + "# get Deepseek answer\n", + "answer = get_answer('DEEPSEEK')\n", + "\n", + "# display Deepseek answer\n", + "display(Markdown(f\"### {MODELS['DEEPSEEK']} Answer\\n\\n{answer}\" ))" + ] + }, + { + "cell_type": "markdown", + "id": "7fc404a6-3a91-4c09-89de-867d3d69b4b2", + "metadata": {}, + "source": [ + "### Suggested future improvements\n", + "\n", + "1. Add support for website scraping to replace copy/pasting of articles.\n", + "2. Improve the system_prompt to provide specific SEO best practices to adopt during the title generation.\n", + "3. Rephrase the system_prompt to ensure the model provides a single Title (not a list of suggestions). \n", + "4. Add the logic that would allow each model to assess the recommendations from the different models and \n", + " select the best among these. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1af8260b-5ba1-4eeb-acd0-02de537b1bf4", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From ee7aa1ef43542350b400522e1fd7a8882d1242fe Mon Sep 17 00:00:00 2001 From: Anton Winkler Date: Sun, 2 Mar 2025 21:17:02 +0100 Subject: [PATCH 25/35] add Jupyter notebook for creating a validation set --- .../week6_day2_add_validation_set.ipynb | 636 ++++++++++++++++++ 1 file changed, 636 insertions(+) create mode 100644 week6/community-contributions/week6_day2_add_validation_set.ipynb diff --git a/week6/community-contributions/week6_day2_add_validation_set.ipynb b/week6/community-contributions/week6_day2_add_validation_set.ipynb new file mode 100644 index 0000000..4702c05 --- /dev/null +++ b/week6/community-contributions/week6_day2_add_validation_set.ipynb @@ -0,0 +1,636 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "28a0673e-96b5-43f2-8a8b-bd033bf851b0", + "metadata": {}, + "source": [ + "# Add a Validation Set\n", + "\n", + "In the lecture, we created a curated dataset with **400,000 training items** and **2,000 test items**, but we did not include a validation (dev) set. 
This notebook demonstrates how to take Ed Donner’s dataset, [ed-donner/pricer-data](https://huggingface.co/datasets/ed-donner/pricer-data), and add a dev set to it.\n", + "\n", + "> **Note**: This notebook heavily uses snippets from the lectures’ `day2.ipynb` of Week 6.\n", + "\n", + "**Download the Updated Dataset**: \n", + "You can find the resulting dataset here: [antonawinkler/pricer-data](https://huggingface.co/datasets/antonawinkler/pricer-data)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "67cedf85-8125-4322-998e-9375fe745597", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "# Standard libraries\n", + "import os\n", + "import random\n", + "from itertools import chain\n", + "from collections import Counter, defaultdict\n", + "\n", + "# Third-party libraries\n", + "from dotenv import load_dotenv\n", + "from huggingface_hub import login\n", + "from datasets import concatenate_datasets, load_dataset, Dataset, DatasetDict\n", + "import matplotlib.pyplot as plt\n", + "import numpy as np\n", + "\n", + "# Local modules\n", + "from items import Item\n", + "from loaders import ItemLoader\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7390a6aa-79cb-4dea-b6d7-de7e4b13e472", + "metadata": {}, + "outputs": [], + "source": [ + "# environment\n", + "\n", + "load_dotenv()\n", + "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0732274a-aa6a-44fc-aee2-40dc8a8e4451", + "metadata": {}, + "outputs": [], + "source": [ + "# Log in to HuggingFace\n", + "\n", + "hf_token = os.environ['HF_TOKEN']\n", + "login(hf_token, add_to_git_credential=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1adcf323-de9d-4c24-a9c3-d7ae554d06ca", + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline" + ] + }, + { + "cell_type": "markdown", + "id": 
"e2b6dc50-ac5c-4cf2-af2e-968ed8ef86d7", + "metadata": {}, + "source": [ + "## Load the Original Dataset\n", + "\n", + "Load the original data from McAuley-Lab/Amazon-Reviews-2023." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d1d06cd3-f3c2-44f0-a9f2-13b54ff8be5c", + "metadata": {}, + "outputs": [], + "source": [ + "dataset_names = [\n", + " \"Automotive\",\n", + " \"Electronics\",\n", + " \"Office_Products\",\n", + " \"Tools_and_Home_Improvement\",\n", + " \"Cell_Phones_and_Accessories\",\n", + " \"Toys_and_Games\",\n", + " \"Appliances\",\n", + " \"Musical_Instruments\",\n", + "]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "aa8fd0f0-509a-4298-8fcc-e499a061e1be", + "metadata": {}, + "outputs": [], + "source": [ + "items = []\n", + "for dataset_name in dataset_names:\n", + " loader = ItemLoader(dataset_name)\n", + " items.extend(loader.load())" + ] + }, + { + "cell_type": "markdown", + "id": "bf6b6b66-4a4b-41c2-b366-1f598cf18351", + "metadata": {}, + "source": [ + "# Create Balanced Dataset\n", + "\n", + "We apply the balancing algorithm from the course." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "549a4bad-abe7-4d36-ad77-fc70ba0f151c", + "metadata": {}, + "outputs": [], + "source": [ + "slots = defaultdict(list)\n", + "for item in items:\n", + " slots[round(item.price)].append(item)\n", + "\n", + "np.random.seed(42)\n", + "random.seed(42)\n", + "sample = []\n", + "for i in range(1, 1000):\n", + " slot = slots[i]\n", + " if i>=240:\n", + " sample.extend(slot)\n", + " elif len(slot) <= 1200:\n", + " sample.extend(slot)\n", + " else:\n", + " weights = np.array([1 if item.category=='Automotive' else 5 for item in slot])\n", + " weights = weights / np.sum(weights)\n", + " selected_indices = np.random.choice(len(slot), size=1200, replace=False, p=weights)\n", + " selected = [slot[i] for i in selected_indices]\n", + " sample.extend(selected)\n", + "\n", + "print(f\"There are {len(sample):,} items in the sample\")" + ] + }, + { + "cell_type": "markdown", + "id": "04280d2b-210a-4fad-9163-1b32a87fb990", + "metadata": {}, + "source": [ + "The output I get is `There are 408,635 items in the sample`\n", + "\n", + "Since there are 400,000 items in the train set of ed-donner/pricer-data, we can aim for a 98/1/1 split." + ] + }, + { + "cell_type": "markdown", + "id": "0d1e2836-0cae-4496-a5d4-d80bc14d566b", + "metadata": {}, + "source": [ + "## Load Ed Donner's Pricer Data Set" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a84e5a71-fc44-4cdf-9bc2-c69f80b8ee94", + "metadata": {}, + "outputs": [], + "source": [ + "dataset_ori = load_dataset(\"ed-donner/pricer-data\")\n", + "train_ori = dataset_ori['train']\n", + "test_ori = dataset_ori['test']" + ] + }, + { + "cell_type": "markdown", + "id": "e9c5c877-3d30-4013-9d0f-1e490755afeb", + "metadata": {}, + "source": [ + "## Observation 1: Order of the Data Has Changed\n", + "\n", + "`dataset_without_devset` should be a subset of `sample`. The order however can be different. 
Let us check this.\n", + "\n", + "I see different results for the following two cells below, indicating that the order has changed." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "56ad8682-4d7f-4aad-9976-96eb6d9b4a5a", + "metadata": {}, + "outputs": [], + "source": [ + "sample[0].prompt" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3e29a5ab-ca61-41cc-9b33-22d374681b85", + "metadata": {}, + "outputs": [], + "source": [ + "train_ori[0]['text']" + ] + }, + { + "cell_type": "markdown", + "id": "469a5b3c-c1a2-461d-a88d-27aa08905b31", + "metadata": {}, + "source": [ + "## Observation 2: Duplicate Items\n", + "\n", + "As a further challenge, the dataset contains duplicates with identical scrubbed descriptions. For some of these duplicates the prices are identical too (I see 1774); for others they differ (I see 6747).\n", + "\n", + "> **Note**: Below we use `defaultdict(list)` instead of `set` because it lets us inspect the duplicates easily." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "94adffe8-edf6-4503-9f8f-34e4dfd29da9", + "metadata": {}, + "outputs": [], + "source": [ + "PRICE_IS = \"\\n\\nPrice is $\"\n", + "def get_key(text, price):\n", + "    prefix, price_is, _price_nearest_dollar = text.partition(PRICE_IS)\n", + "    return f\"{prefix}{price_is}{price}\"\n", + "def get_key_without_price(text):\n", + "    prefix, price_is, _price_nearest_dollar = text.partition(PRICE_IS)\n", + "    return f\"{prefix}\"\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a015ba1b-69e0-4651-850f-d93d3f078d16", + "metadata": {}, + "outputs": [], + "source": [ + "# Identify duplicates by text+price\n", + "train_ori_dict = defaultdict(list)\n", + "for datapoint in train_ori:\n", + "    # Creates a key from the text and price (scrubbed)\n", + "    key = get_key(datapoint[\"text\"], datapoint[\"price\"])\n", + "    train_ori_dict[key].append(datapoint)\n", + "\n", + "# Number of exact duplicates (same text AND
same price)\n", + "exact_duplicates = len(train_ori) - len(train_ori_dict)\n", + "print(f\"There are {exact_duplicates} duplicates with the same description and price.\")\n", + "\n", + "# Identify duplicates by text alone (ignoring price)\n", + "train_ori_dict_no_price = defaultdict(list)\n", + "for datapoint in train_ori:\n", + " key_no_price = get_key_without_price(datapoint[\"text\"])\n", + " train_ori_dict_no_price[key_no_price].append(datapoint)\n", + "\n", + "# Number of duplicates that differ in price but share the same text\n", + "different_price_duplicates = len(train_ori_dict) - len(train_ori_dict_no_price)\n", + "print(f\"In addition, there are {different_price_duplicates} data points where the description is duplicated but the price is different.\")\n", + "\n", + "# Total number of duplicates if we consider text alone\n", + "overall_duplicates = len(train_ori) - len(train_ori_dict_no_price)\n", + "print(f\"Overall number of duplicates: {overall_duplicates}\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e577dd8b-be0f-4ab0-b45f-9d3459b1286a", + "metadata": {}, + "outputs": [], + "source": [ + "test_ori_dict = defaultdict(list)\n", + "for datapoint in test_ori:\n", + " key = get_key(datapoint['text'], datapoint['price'])\n", + " test_ori_dict[key].append(datapoint)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0198fc23-0825-4ce1-a961-1d390d86cbdc", + "metadata": {}, + "outputs": [], + "source": [ + "sample_dict = defaultdict(list)\n", + "for datapoint in sample:\n", + " key = get_key(datapoint.prompt, datapoint.price)\n", + " sample_dict[key].append(datapoint)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "37f24d22-51ef-472b-8c73-e969637fa925", + "metadata": {}, + "outputs": [], + "source": [ + "# Check if all data points in train_ori/test_ori are included in the new sample_dict.\n", + "missing = []\n", + "count_found = 0\n", + "\n", + "for datapoint in chain(train_ori, 
test_ori):\n", + " key = get_key(datapoint[\"text\"], datapoint[\"price\"])\n", + " if key not in sample_dict:\n", + " missing.append(datapoint)\n", + " else:\n", + " count_found += 1\n", + "\n", + "print(f\"We found {count_found} datapoints in sample_dict.\")\n", + "print(f\"We are missing {len(missing)} datapoints that are not present in sample_dict.\")" + ] + }, + { + "cell_type": "markdown", + "id": "60c9d186-c688-4559-9b51-f0045d16829b", + "metadata": {}, + "source": [ + "Expected output of the previous cell\n", + "```\n", + "We found 402000 datapoints in sample_dict.\n", + "We are missing 0 datapoints that are not present in sample_dict.\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "3b05e22d-a755-4ee5-a18b-620f7ab1df8f", + "metadata": {}, + "source": [ + "## Add Data Points to the Test and Validation Sets\n", + "\n", + "Since we can match all data points in the original train and test sets from `ed-donner/pricer-data`, we’ll now incorporate any *unused* items from our balanced sample into the test set and create a new validation (dev) set. Our goal is to achieve a **98/1/1** split for train, validation, and test." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "16638cf9-03c3-46bc-8116-cafdd9e23ac9", + "metadata": {}, + "outputs": [], + "source": [ + "sample_not_used_yet = [datapoint for key in sample_dict.keys() - train_ori_dict.keys() - test_ori_dict.keys() for datapoint in sample_dict[key]]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "58a593ad-29a1-4b35-9753-45db75e09666", + "metadata": {}, + "outputs": [], + "source": [ + "# As a sanity check, let us visually verify that the distribution of sample_not_used_yet is in line with the complete sample.\n", + "\n", + "# Plot the distribution of prices in sample\n", + "def plot_price_distribution(items, name):\n", + "    prices = [float(item.price) for item in items]\n", + "    plt.figure(figsize=(15, 10))\n", + "    plt.title(f\"{name} - Avg {sum(prices)/len(prices):.2f} and highest {max(prices):,.2f}\\n\")\n", + "    plt.xlabel('Price ($)')\n", + "    plt.ylabel('Count')\n", + "    # see https://stackoverflow.com/questions/57026223/how-to-re-scale-the-counts-in-a-matplotlib-histogram\n", + "    (counts, bins) = np.histogram(prices, bins=range(0, 1000, 10))\n", + "    plt.hist(bins[:-1], color=\"darkblue\", bins=bins, weights=counts/len(prices))\n", + "    plt.show() \n", + "\n", + "\n", + "def plot_category_distribution(items, name):\n", + "    category_counts = Counter()\n", + "    for item in items:\n", + "        category_counts[item.category]+=1\n", + "    categories = sorted(category_counts.keys())\n", + "    counts = [category_counts[category] for category in categories]\n", + "\n", + "    # plot a pie chart\n", + "    plt.figure(figsize=(12, 10))\n", + "    plt.pie(counts, labels=categories, autopct='%1.0f%%', startangle=90)\n", + "    \n", + "    # Add a circle at the center to create a donut chart (optional)\n", + "    centre_circle = plt.Circle((0,0), 0.70, fc='white')\n", + "    fig = plt.gcf()\n", + "    fig.gca().add_artist(centre_circle)\n", + "    plt.title(f'{name} - Categories')\n", + "    \n", + "    # Equal aspect ratio ensures 
that pie is drawn as a circle\n", + "    plt.axis('equal')  \n", + "\n", + "    plt.show()\n", + "plot_price_distribution(sample, 'Complete set')\n", + "plot_price_distribution(sample_not_used_yet, 'Not used yet')\n", + "plot_category_distribution(sample, 'Complete set')\n", + "plot_category_distribution(sample_not_used_yet, 'Not used yet')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ba252265-b976-426a-aefc-ebc93b153fd4", + "metadata": {}, + "outputs": [], + "source": [ + "# now add the unused items to the validation and test set\n", + "random.seed(42)\n", + "random.shuffle(sample_not_used_yet)\n", + "validation_items = sample_not_used_yet[:4000]\n", + "added_test_items = sample_not_used_yet[4000:]\n", + "\n", + "# create Huggingface dataset\n", + "validation_dataset = Dataset.from_dict({\"text\": [item.prompt for item in validation_items], \"price\": [item.price for item in validation_items]})\n", + "added_test_dataset = Dataset.from_dict({\"text\": [item.prompt for item in added_test_items], \"price\": [item.price for item in added_test_items]})\n", + "\n", + "dataset = DatasetDict({\n", + "    \"train\": train_ori,\n", + "    \"test\": concatenate_datasets([test_ori, added_test_dataset]),\n", + "    \"validation\": validation_dataset,\n", + "})\n", + "\n", + "print(f\"Divided into a training set of {dataset['train'].num_rows:,} items, a validation set of {dataset['validation'].num_rows:,} items, and a test set of {dataset['test'].num_rows:,} items\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c39ac5d7-84f8-4f7d-98e1-d24651ba3a80", + "metadata": {}, + "outputs": [], + "source": [ + "# If you're ready to push to the hub, fill in the dots with your HF username\n", + "\n", + "HF_USER = ...\n", + "DATASET_NAME = f\"{HF_USER}/pricer-data\"\n", + "dataset.push_to_hub(DATASET_NAME, private=True)" + ] + }, + { + "cell_type": "markdown", + "id": "3fcb2492-ef2a-468e-8bf1-deb18eef4d9c", + "metadata": {}, + "source": [ + "## Use
of Validation Sets\n", + "\n", + "When you train your model in Week 7, you can use the validation set to monitor evaluation loss during fine-tuning:\n", + "\n", + "```python\n", + "# load the train and validation set\n", + "train = load_dataset(DATASET_NAME, split='train[:100%]') # or less than 100%\n", + "validation = load_dataset(DATASET_NAME, split='validation[:100%]') # or less than 100% \n", + "\n", + "# Define training parameters\n", + "train_parameters = SFTConfig(\n", + "    eval_strategy=\"steps\", # or \"epoch\"\n", + "    eval_steps=EVAL_STEPS,\n", + "    ...\n", + ")\n", + "\n", + "# Initialize fine-tuning with validation set\n", + "fine_tuning = SFTTrainer(\n", + "    eval_dataset=validation,\n", + "    ...\n", + ")\n", + "```" + ] + }, + { + "cell_type": "markdown", + "id": "bceb4407-d91d-4731-9e96-189f6f953cbc", + "metadata": {}, + "source": [ + "## A Closer Look at the Duplicates\n", + "\n", + "We have now created a dataset that includes a validation set and additional test data. During this process, we observed that **2% of the data contains duplicates**, where the scrubbed descriptions are identical.\n", + "\n", + "Duplicates can contribute to model overfitting. However, since only **2% of the dataset is duplicated**, the impact is likely minimal. Moreover, many of these duplicates actually refer to different physical objects rather than being true duplicates.\n", + "\n", + "### False Duplicates\n", + "\n", + "The “duplicates” we observe are often not duplicates in the original dataset. Minor differences in product descriptions may be removed by the scrubbing process, leading to items that *appear* identical but aren’t. For example:\n", + "\n", + "```\n", + "\n", + "\n", + "```\n", + "\"(2-Pack)\" is removed in the scrub method.\n", + "\n", + "Similarly:\n", + "```\n", + "[,\n", + " ,\n", + " ,\n", + "...\n", + " ,\n", + " ,\n", + " ]\n", + "```\n", + "These all represent different rotor models. \n", + "\n", + "**Even when both the scrubbed text and the price are identical**, the items may still refer to distinct products. 
For instance:\n", + "```\n", + "<5304486359 Refrigerator Door Handles Set Replacement for Frigidaire FFTR1821QW5A Refrigerator - Compatible with 5304486359 White Door Handles - UpStart Components Brand = $17.99>\n", + "<5304486359 Refrigerator Door Handles Set Replacement for Frigidaire FFTR1831QP1 Refrigerator - Compatible with 5304486359 White Door Handles - UpStart Components Brand = $17.99>\n", + "```\n", + "\n", + "### True Duplicates\n", + "Finding *true* duplicates—where the scrubbed text, price, and underlying real-world product match—seems relatively rare. The following items in the **Appliances** set, for instance, likely refer to the same physical product:\n", + "```python\n", + "{'main_category': 'Tools & Home Improvement',\n", + " 'title': 'Whirlpool 8318084 Lid Switch for Washer',\n", + " 'average_rating': 4.6,\n", + " 'rating_number': 511,\n", + " 'features': ['Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0',\n", + " 'This products adds a great value',\n", + " 'This product is manufactured in United States',\n", + " 'Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0',\n", + " 'Whirlpool 1CLSQ9549PG1, Whirlpool 1CLSQ9549PW0',\n", + " 'Whirlpool 1CLSQ9549PW1, Whirlpool 1CLSR7010PQ0',\n", + " 'Whirlpool 1CLSR7010PQ1, Whirlpool 1CLSR7300PQ0',\n", + " 'Genuine Replacement Part'],\n", + " 'description': ['Product Description',\n", + " 'Part Number 8318084 (AP3180933) replaces 1018522, AH886960, EA886960, PS886960., Easy to use and handle. This products adds a great value This product is manufactured in United States.',\n", + " 'From the Manufacturer',\n", + " 'Whirlpool 8318084 Lid Switch for Washer. Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0, Whirlpool 1CLSQ9549PG1, Whirlpool 1CLSQ9549PW0, Whirlpool 1CLSQ9549PW1, Whirlpool 1CLSR7010PQ0, Whirlpool 1CLSR7010PQ1, Whirlpool 1CLSR7300PQ0. 
Genuine Replacement Part.'],\n", + " 'price': '25.55',\n", + " 'images': {'hi_res': [None],\n", + " 'large': ['https://m.media-amazon.com/images/I/31QE91zX0mL._AC_.jpg'],\n", + " 'thumb': ['https://m.media-amazon.com/images/I/31QE91zX0mL._AC_US75_.jpg'],\n", + " 'variant': ['MAIN']},\n", + " 'videos': {'title': [\"Your Washer Won't Spin?\", '8318084 Washer Lid Switch'],\n", + " 'url': ['https://www.amazon.com/vdp/09c00a975b4b46198b5703483f424981?ref=dp_vse_rvc_0',\n", + " 'https://www.amazon.com/vdp/3c9b3dc3c93444978d542af3fab13c49?ref=dp_vse_rvc_1'],\n", + " 'user_id': ['', '']},\n", + " 'store': 'Whirlpool',\n", + " 'categories': ['Appliances',\n", + " 'Parts & Accessories',\n", + " 'Washer Parts & Accessories'],\n", + " 'details': '{\"Manufacturer\": \"Whirlpool\", \"Part Number\": \"8318084\", \"Item Weight\": \"1.34 ounces\", \"Product Dimensions\": \"3 x 2 x 2 inches\", \"Item model number\": \"8318084\", \"Is Discontinued By Manufacturer\": \"No\", \"Item Package Quantity\": \"1\", \"Included Components\": \"Kkk\", \"Batteries Included?\": \"No\", \"Batteries Required?\": \"No\", \"Warranty Description\": \"Kk\", \"Best Sellers Rank\": {\"Tools & Home Improvement\": 231142, \"Washer Parts & Accessories\": 1074}, \"Date First Available\": \"August 7, 2008\"}',\n", + " 'parent_asin': 'B01CT25N26',\n", + " 'bought_together': None,\n", + " 'subtitle': None,\n", + " 'author': None}\n", + "\n", + "{'main_category': 'Tools & Home Improvement',\n", + " 'title': 'Whirlpool 8318084 Lid Switch for Washer',\n", + " 'average_rating': 4.6,\n", + " 'rating_number': 514,\n", + " 'features': ['Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0',\n", + " 'This products adds a great value',\n", + " 'This product is manufactured in United States',\n", + " 'Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0',\n", + " 'Whirlpool 1CLSQ9549PG1, Whirlpool 1CLSQ9549PW0',\n", + " 
'Whirlpool 1CLSQ9549PW1, Whirlpool 1CLSR7010PQ0',\n", + " 'Whirlpool 1CLSR7010PQ1, Whirlpool 1CLSR7300PQ0',\n", + " 'Genuine Replacement Part'],\n", + " 'description': ['Product Description',\n", + " 'Part Number 8318084 (AP3180933) replaces 1018522, AH886960, EA886960, PS886960., Easy to use and handle. This products adds a great value This product is manufactured in United States.',\n", + " 'From the Manufacturer',\n", + " 'Whirlpool 8318084 Lid Switch for Washer. Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0, Whirlpool 1CLSQ9549PG1, Whirlpool 1CLSQ9549PW0, Whirlpool 1CLSQ9549PW1, Whirlpool 1CLSR7010PQ0, Whirlpool 1CLSR7010PQ1, Whirlpool 1CLSR7300PQ0. Genuine Replacement Part.'],\n", + " 'price': '25.55',\n", + " 'images': {'hi_res': [None],\n", + " 'large': ['https://m.media-amazon.com/images/I/31QE91zX0mL._AC_.jpg'],\n", + " 'thumb': ['https://m.media-amazon.com/images/I/31QE91zX0mL._AC_US75_.jpg'],\n", + " 'variant': ['MAIN']},\n", + " 'videos': {'title': ['AMI PARTS,Parts Specialist'],\n", + " 'url': ['https://www.amazon.com/vdp/09a12ea79b1a4081a18909825437760b?ref=dp_vse_rvc_0'],\n", + " 'user_id': ['']},\n", + " 'store': 'Whirlpool',\n", + " 'categories': ['Appliances',\n", + " 'Parts & Accessories',\n", + " 'Washer Parts & Accessories'],\n", + " 'details': '{\"Manufacturer\": \"Whirlpool\", \"Part Number\": \"8318084\", \"Item Weight\": \"1.34 ounces\", \"Product Dimensions\": \"3 x 2 x 2 inches\", \"Item model number\": \"8318084\", \"Is Discontinued By Manufacturer\": \"No\", \"Item Package Quantity\": \"1\", \"Included Components\": \"kkk\", \"Batteries Included?\": \"No\", \"Batteries Required?\": \"No\", \"Warranty Description\": \"kk\", \"Best Sellers Rank\": {\"Tools & Home Improvement\": 166821, \"Washer Parts & Accessories\": 684}, \"Date First Available\": \"August 7, 2008\"}',\n", + " 'parent_asin': 'B0050O1UR8',\n", + " 'bought_together': None,\n", + " 'subtitle': None,\n", + " 'author': 
None}\n", + "```\n", + "\n", + "### Takeaway\n", + "2% of the dataset contains duplicates, but most of these represent different physical objects. It does not appear to be worthwhile to remove them from the dataset. In fact, it can be better to keep them so that the data remains representative.\n" + ] + }, + { + "cell_type": "markdown", + "id": "0a1d7b72-a1ab-4fc4-9065-738bd11f8058", + "metadata": {}, + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "403a42a2-3913-4905-9475-97509fe86c5e", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.9" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From bb457c239eb00176eb375c79cfad7baec618dc48 Mon Sep 17 00:00:00 2001 From: udomai Date: Mon, 3 Mar 2025 00:32:04 +0100 Subject: [PATCH 26/35] add w5d4 exercise: no langchain --- .../RAG_chat_no_LangChain.ipynb | 394 ++++++++++++++++++ .../languages/alsacien.md | 42 ++ .../languages/bourguignon.md | 31 ++ .../knowledge_collection/languages/breton.md | 33 ++ .../knowledge_collection/languages/gascon.md | 34 ++ .../languages/languedocien.md | 30 ++ .../knowledge_collection/languages/lorrain.md | 26 ++ .../knowledge_collection/languages/normand.md | 34 ++ .../knowledge_collection/languages/picard.md | 27 ++ .../languages/provencal.md | 27 ++ .../knowledge_collection/mountains/alpes.md | 37 ++ .../mountains/ardennes.md | 36 ++ .../knowledge_collection/mountains/jura.md | 37 ++ .../mountains/massif_armorican.md | 35 ++ .../mountains/massif_central.md | 34 ++ .../knowledge_collection/mountains/morvan.md | 44 ++ .../mountains/pyrenees.md | 40 ++ .../knowledge_collection/mountains/vosges.md 
| 33 ++ .../regions/alsace_lorraine.md | 47 +++ .../knowledge_collection/regions/bourgogne.md | 47 +++ .../knowledge_collection/regions/bretagne.md | 45 ++ .../knowledge_collection/regions/gascogne.md | 47 +++ .../regions/ile_de_france.md | 47 +++ .../knowledge_collection/regions/languedoc.md | 46 ++ .../knowledge_collection/regions/normandie.md | 48 +++ .../knowledge_collection/regions/poitou.md | 48 +++ .../knowledge_collection/regions/provence.md | 50 +++ 27 files changed, 1399 insertions(+) create mode 100644 week5/community-contributions/day 4 no_langchain/RAG_chat_no_LangChain.ipynb create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/alsacien.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/bourguignon.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/breton.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/gascon.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/languedocien.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/lorrain.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/normand.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/picard.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/provencal.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/alpes.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/ardennes.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/jura.md create mode 100644 week5/community-contributions/day 4 
no_langchain/knowledge_collection/mountains/massif_armorican.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/massif_central.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/morvan.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/pyrenees.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/vosges.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/alsace_lorraine.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/bourgogne.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/bretagne.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/gascogne.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/ile_de_france.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/languedoc.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/normandie.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/poitou.md create mode 100644 week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/provence.md diff --git a/week5/community-contributions/day 4 no_langchain/RAG_chat_no_LangChain.ipynb b/week5/community-contributions/day 4 no_langchain/RAG_chat_no_LangChain.ipynb new file mode 100644 index 0000000..7c2572d --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/RAG_chat_no_LangChain.ipynb @@ -0,0 +1,394 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "e9025a4a-b8ef-4901-b98e-753b756b028a", + "metadata": {}, + "source": [ + "# Building a RAG chat without the 
langchain framework\n", + "## To understand more in detail what's going on\n", + "\n", + "The technical know-how comes from Ed Donner, obviously, as well as from Sakalya Mitra & Pradip Nichite on [this gem of a blog post](https://blog.futuresmart.ai/building-rag-applications-without-langchain-or-llamaindex) I found on futuresmart.ai" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1b7acfb5-8bf9-48b5-a219-46f1e3bfafc3", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "from dotenv import load_dotenv\n", + "import gradio as gr\n", + "import re\n", + "from openai import OpenAI" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "19af6b8b-be29-4086-a69f-5e2cdb867ede", + "metadata": {}, + "outputs": [], + "source": [ + "# imports for Chroma and plotly\n", + "\n", + "import chromadb\n", + "from chromadb.utils import embedding_functions\n", + "import numpy as np\n", + "from sklearn.manifold import TSNE\n", + "import plotly.graph_objects as go" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bc6d9ab4-816a-498c-a04c-c3838770d848", + "metadata": {}, + "outputs": [], + "source": [ + "MODEL = \"gpt-4o-mini\"\n", + "db_name = \"chroma_db\"\n", + "client = chromadb.PersistentClient(path=\"chroma_db\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a3715b81-eed0-4412-8c01-0623ed113657", + "metadata": {}, + "outputs": [], + "source": [ + "load_dotenv()\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "openai = OpenAI()" + ] + }, + { + "cell_type": "markdown", + "id": "3017e1dd-d0d5-4ef4-8c72-84517a927793", + "metadata": {}, + "source": [ + "### Making stuff at home: documents" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e83480a5-927b-4756-a978-520a56ceed85", + "metadata": {}, + "outputs": [], + "source": [ + "# items in documents are actually objects: Document(metadata={...}, page_content=\"...\"), so we need a \"Document\" 
class\n", + "# btw all the quadruple-backslash madness here is due to Windows (there might be a more efficient way, still)\n", + "\n", + "class Document:\n", + "    def __init__(self, metadata, page_content):\n", + "        self.metadata = metadata\n", + "        self.page_content = page_content\n", + "\n", + "    def __repr__(self):\n", + "        return f\"Document(metadata={self.metadata}, page_content={repr(self.page_content)})\"\n", + "\n", + "\n", + "documents = []\n", + "\n", + "def get_documents(path='.'):\n", + "    for entry in os.listdir(path):\n", + "        if not entry.startswith(\".\"):  # skip hidden files and directories\n", + "            full_path = os.path.join(path, entry)\n", + "            if os.path.isdir(full_path):\n", + "                get_documents(full_path)\n", + "            else:\n", + "                parent = re.sub(r\"^\\.[\\\\\\\\].*[\\\\\\\\]\", \"\", os.path.dirname(full_path))\n", + "                name = os.path.basename(full_path)\n", + "                content = \"\"\n", + "\n", + "                with open(full_path, mode=\"r\", encoding=\"utf-8\") as f:\n", + "                    content = f.read()\n", + "                \n", + "                doc = Document(metadata={\"source\": full_path, \"doc_type\": parent, \"self\": name}, page_content=content)\n", + "                documents.append(doc)\n", + "\n", + "# where the knowledge collection lives\n", + "directory_path = r'.\\knowledge_collection'\n", + "get_documents(directory_path)" + ] + }, + { + "cell_type": "markdown", + "id": "fd846bc0-54d0-4802-a18b-196c396a241c", + "metadata": {}, + "source": [ + "### Making stuff at home: chunks" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "202b33e2-c3fe-424c-9c8e-a90e517add42", + "metadata": {}, + "outputs": [], + "source": [ + "eos_pattern = re.compile(r\"((?<=[.!?;])[\\s]+)|([\\n\\r]+)\")\n", + "chunk_size = 1000\n", + "chunks = []" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a19a61ec-d204-4b87-9f05-88832d03fad6", + "metadata": {}, + "outputs": [], + "source": [ + "for doc in documents:\n", + "\n", + "    sentence_ends = [end.start() for end in list(re.finditer(eos_pattern, doc.page_content)) if 
end.start() > chunk_size - 50]\n", + " start = 0\n", + " \n", + " if len(sentence_ends) == 0 and len(doc.page_content) > 5:\n", + " chunk = Document(metadata=doc.metadata, page_content=doc.page_content)\n", + " chunk.metadata['id'] = f\"{doc.metadata['source']}_chunk_\"\n", + " chunks.append(chunk)\n", + "\n", + " else: \n", + " for point in sentence_ends:\n", + " if point - start >= chunk_size - 50:\n", + " text = doc.page_content[start:point]\n", + " chunk = Document(metadata=doc.metadata, page_content=text)\n", + " chunk.metadata['id'] = f\"{doc.metadata['source']}_chunk_\"\n", + " chunks.append(chunk)\n", + " start = point\n", + " \n", + " # Add the remaining part of the text as the last chunk if it's big enough\n", + " if len(doc.page_content) - start > 5:\n", + " text = doc.page_content[start:]\n", + " chunk = Document(metadata=doc.metadata, page_content=text)\n", + " chunk.metadata['id'] = f\"{doc.metadata['source']}_chunk_\"\n", + " chunks.append(chunk)" + ] + }, + { + "cell_type": "markdown", + "id": "966ae50c-e0e5-403a-9465-8f26967f8922", + "metadata": {}, + "source": [ + "### Making stuff without a framework: embeddings" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b97391c0-e55f-4e08-b0cb-5e62fb119ae6", + "metadata": {}, + "outputs": [], + "source": [ + "# Configure sentence transformer embeddings\n", + "embeddings = embedding_functions.SentenceTransformerEmbeddingFunction(\n", + " model_name=\"all-MiniLM-L6-v2\"\n", + ")\n", + "\n", + "collection_name = \"documents_collection\"\n", + "\n", + "try:\n", + " client.delete_collection(collection_name)\n", + "except ValueError:\n", + " print(f\"{collection_name} doesn't exist yet\")\n", + "\n", + "# Create collection\n", + "collection = client.get_or_create_collection(\n", + " name=collection_name,\n", + " embedding_function=embeddings\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5222dfec-8cf4-4e87-aeb8-33d0f3b3b5cb", + "metadata": {}, + 
"outputs": [],
"source": [
"# adding our chunks to the \"collection\"\n",
+ "\n",
+ "for index, chunk in enumerate(chunks):  # enumerate avoids the quadratic chunks.index() lookup\n",
+ "    collection.add(\n",
+ "        documents=[chunk.page_content],\n",
+ "        metadatas=[chunk.metadata],\n",
+ "        ids=[chunk.metadata['id'] + f\"{index}\"]\n",
+ "    )"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5effcada-ee5f-4207-9fa6-1fc5604b068b",
"metadata": {},
"outputs": [],
"source": [
"def semantic_search(collection, query: str, n_results: int = 4):\n",
+ "    results = collection.query(\n",
+ "        query_texts=[query],\n",
+ "        n_results=n_results\n",
+ "    )\n",
+ "    return results"
]
},
{
"cell_type": "markdown",
"id": "99f0a366-3dcb-4824-9f33-70e07af984d8",
"metadata": {},
"source": [
"## Visualizing the Vector Store\n",
+ "\n",
+ "The results actually look just as good with `all-MiniLM-L6-v2`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e12751ab-f102-4dc6-9c0f-313e5832b75f",
"metadata": {},
"outputs": [],
"source": [
"# Prework\n",
+ "\n",
+ "result = collection.get(include=['embeddings', 'documents', 'metadatas'])\n",
+ "vectors = np.array(result['embeddings'])\n",
+ "documents = result['documents']\n",
+ "doc_types = [metadata['doc_type'] for metadata in result['metadatas']]\n",
+ "colors = [['blue', 'red', 'orange'][['languages', 'mountains', 'regions'].index(t)] for t in doc_types]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "422e3247-2de0-44ba-82bc-30b4f739da7e",
"metadata": {},
"outputs": [],
"source": [
"# Reduce the dimensionality of the vectors to 2D using t-SNE\n",
+ "# (t-distributed stochastic neighbor embedding)\n",
+ "\n",
+ "tsne = TSNE(n_components=2, random_state=42)\n",
+ "reduced_vectors = tsne.fit_transform(vectors)\n",
+ "\n",
+ "# Create the 2D scatter plot\n",
+ "fig = go.Figure(data=[go.Scatter(\n",
+ "    x=reduced_vectors[:, 0],\n",
+ "    y=reduced_vectors[:, 1],\n",
+ "    mode='markers',\n",
+ "    marker=dict(size=5, color=colors, opacity=0.8),\n",
+ "    text=[f\"Type: {t}<br>Text: {d[:100]}...\" for t, d in zip(doc_types, documents)],\n",
+ "    hoverinfo='text'\n",
+ ")])\n",
+ "\n",
+ "fig.update_layout(\n",
+ "    title='2D Chroma Vector Store Visualization',\n",
+ "    scene=dict(xaxis_title='x',yaxis_title='y'),\n",
+ "    width=800,\n",
+ "    height=600,\n",
+ "    margin=dict(r=20, b=10, l=10, t=40)\n",
+ ")\n",
+ "\n",
+ "fig.show()"
]
},
{
"cell_type": "markdown",
"id": "2cff9065-de3d-4e91-8aff-c7ad750a4334",
"metadata": {},
"source": [
"#### Comment: Relying on Gradio's history handling seems to be memory enough\n",
+ "##### If all you need is your favorite LLM with expertise in your knowledge collection"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aebb676f-883e-4b2b-8420-13f2a8399e77",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are a helpful assistant for everything French. Give brief, accurate answers. \\\n",
+ "Do not provide any information that you haven't been asked for, even if you have lots of context. \\\n",
+ "If you haven't been provided with relevant context, say you don't know. Do not make anything up, only \\\n",
+ "provide answers that are based on the context you have been given. Do not comment on the provided context. 
\\\n", + "If the user doesn't ask for any information, engage in brief niceties and offer your expertise regarding France.\"\n", + "\n", + "history = [{\"role\": \"system\", \"content\": system_prompt}]\n", + "\n", + "def get_user_prompt(prompt):\n", + " # semantic search!!\n", + " context = semantic_search(collection, prompt)['documents'][0]\n", + "\n", + " if len(context) > 0:\n", + " prompt += f\"\\n\\n[AUTOMATIC SYSTEM CONTEXT ADDITION] Here is some context that might be useful for answering the question:\"\n", + "\n", + " for doc in context:\n", + " prompt += f\"\\n\\n{doc}\"\n", + " \n", + " user_prompt = {\"role\": \"user\", \"content\": prompt}\n", + "\n", + " return user_prompt" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "23b70162-2c4f-443e-97c8-3e675304d307", + "metadata": {}, + "outputs": [], + "source": [ + "def stream_gpt(message, history):\n", + " messages = [{\"role\": \"system\", \"content\": system_prompt}] + history\n", + " messages.append(get_user_prompt(message))\n", + " stream = openai.chat.completions.create(\n", + " model=MODEL,\n", + " messages=messages,\n", + " stream=True\n", + " )\n", + " result = \"\"\n", + " for chunk in stream:\n", + " result += chunk.choices[0].delta.content or \"\"\n", + " yield result" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4ecf4a30-452d-4d41-aa60-fa62c8e2559b", + "metadata": {}, + "outputs": [], + "source": [ + "# Gradio\n", + "\n", + "gr.ChatInterface(fn=stream_gpt, type=\"messages\").launch(inbrowser=True)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git 
a/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/alsacien.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/alsacien.md new file mode 100644 index 0000000..b7a56a7 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/alsacien.md @@ -0,0 +1,42 @@ +# Overview of Alsacien Language + +## Definition +Alsacien, also known as Alsatian or Alsatian German, is a variety of the Alemannic branch of the Germanic languages spoken predominantly in Alsace, France. + +## Geographic Distribution +- Primarily spoken in Alsace, a region in northeastern France. +- Communities of Alsacien speakers can also be found in neighboring regions of Germany and Switzerland. + +## Linguistic Classification +- **Language Family**: Indo-European +- **Subfamily**: Germanic +- **Group**: West Germanic +- **Branch**: High German + +## Speakers +- Estimates of native speakers range from 500,000 to 1 million, though use has declined due to factors like urbanization and language shift towards French. + +## Dialectal Variations +- Alsacien includes multiple dialects, which may vary significantly from one locality to another. +- Two main dialects: + - **Haut-Rhin** (Upper Rhine) + - **Bas-Rhin** (Lower Rhine) + +## Characteristics +- Strongly influenced by both French and standard German, leading to unique vocabulary and pronunciation. +- Grammar and syntax retain features of Middle High German. + +## Cultural Significance +- Acts as a marker of regional identity for the people of Alsace. +- Extensively used in local media, literature, and music, particularly folk traditions. + +## Status +- Considered a vulnerable language by UNESCO. +- Efforts are ongoing for revitalization, including teaching in schools and cultural associations promoting its use. + +## Related Languages +- Closely related to Swiss German and other Alemannic dialects. 
+- Influenced by and influences neighboring languages, particularly French. + +## Conclusion +Alsacien is a vital part of the cultural heritage of the Alsace region, with ongoing efforts aimed at preserving and promoting its use among younger generations. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/bourguignon.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/bourguignon.md new file mode 100644 index 0000000..d08b35f --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/bourguignon.md @@ -0,0 +1,31 @@ +# Overview of the Bourguignon Language + +## General Information +- **Name**: Bourguignon +- **Region**: Primarily spoken in the Burgundy region of France +- **Language Family**: Romance languages +- **Classification**: It is part of the Langue d'oïl group, which also includes languages like French, Norman, and Picard. + +## Historical Context +- **Origin**: Derived from Vulgar Latin, Bourguignon developed in the medieval period and reflects the linguistic evolution of the region. +- **Influence**: Historically influenced by Old French, as well as regional dialects and neighboring languages. + +## Features +- **Dialects**: Bourguignon comprises several dialects, often differing significantly from one another. +- **Phonetics**: The phonetic system exhibits distinct sounds not found in Standard French. +- **Vocabulary**: Contains unique vocabulary and expressions that may not be understood by standard French speakers. + +## Current Status +- **Speaker Population**: The number of speakers has declined over the years, with estimates suggesting only a few thousand fluent speakers today. +- **Recognition**: Bourguignon is not an official language in France, but there are efforts to preserve and promote its use among local communities. 
+ +## Cultural Significance +- **Folklore and Literature**: Bourguignon has a rich tradition of oral literature, including folk tales and songs that reflect the cultural heritage of Burgundy. +- **Festivals and Events**: Local festivals often include performances in Bourguignon, celebrating the language's place in regional identity. + +## Modern Efforts +- **Revitalization**: Initiatives to teach Bourguignon in schools and promote its use in cultural activities aim to preserve the language for future generations. +- **Media Presence**: Some local media, including radio stations and publications, feature Bourguignon, fostering a sense of community among speakers. + +## Conclusion +Bourguignon remains an important part of the cultural identity of the Burgundy region, reflecting the historical and linguistic diversity of France. Efforts to revive and sustain the language highlight its significance within the local heritage. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/breton.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/breton.md new file mode 100644 index 0000000..707e20b --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/breton.md @@ -0,0 +1,33 @@ +# Overview of the Breton Language + +## General Information +- **Name**: Breton (Brezhoneg) +- **Language Family**: Celtic, part of the Brythonic branch +- **Region**: Brittany (Breizh), France + +## Historical Background +- **Origins**: Breton is derived from the Brythonic Celtic languages that were spoken in Great Britain. It arrived in Brittany with settlers from Britain during the early medieval period. +- **First Documented Evidence**: The earliest written examples of Breton date back to the 8th century. 
+ +## Linguistic Features +- **Dialects**: There are three main dialects of Breton: + - **Gouèze** (Western) + - **Kerne** (Central) + - **Leoneg** (Eastern) +- **Alphabet**: The modern Breton alphabet uses the Latin script with some diacritics. + +## Current Status +- **Speakers**: Approximately 200,000 to 300,000 speakers as of recent estimates. +- **Recognition**: Breton is recognized as a regional language in France, but it does not hold official status. +- **Revitalization Efforts**: There are ongoing initiatives to promote the language, including bilingual education and media in Breton. + +## Cultural Significance +- **Literature and Music**: Breton has a rich oral tradition, including folklore, songs, and poetry. Contemporary literature and music often embrace the language. +- **Festivals**: Events like Fest-Noz (night festivals) celebrate Breton culture and often feature music and dance in the Breton language. + +## Challenges +- **Decline**: The number of native speakers has declined significantly due to historical policies and the dominance of French. +- **Education**: Breton is not widely taught in schools, although there are some bilingual programs and immersion schools. + +## Conclusion +Breton is a vibrant Celtic language with a rich history and cultural heritage, facing challenges in the modern age but supported by revitalization efforts and community engagement. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/gascon.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/gascon.md new file mode 100644 index 0000000..fef26fc --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/gascon.md @@ -0,0 +1,34 @@ +# Overview of the Gascon Language + +## General Information +- **Language Family**: Occitan branch of the Romance languages. 
+- **Region**: Primarily spoken in the Gascony region of southwestern France, which includes parts of the departments of Gers, Landes, and Pyrénées-Atlantiques. + +## Historical Context +- **Origins**: Gascon evolved from Vulgar Latin and has influences from the Visigoths and various other historical invaders. +- **Status**: Once a widely spoken language, Gascon has seen a decline in the number of speakers, particularly in urban areas, due to the rise of French as the dominant language. + +## Dialects +- **Varieties**: Gascon includes several dialects, most notably: + - **Bigourdan**: Spoken in the region of Bigorre. + - **Armanac**: Found in Armagnac. + - **Languedocien**: This influences some Gascon speakers, particularly those in mixed-language areas. + +## Linguistic Features +- **Phonetics**: Gascon has unique phonetic characteristics, such as the preservation of the Latin 'u' sound and certain nasal vowels. +- **Vocabulary**: Contains a wealth of regional vocabulary, along with borrowings from French, Occitan, and Basque. + +## Cultural Significance +- **Literature**: Historically, Gascon has been used in regional literature and songs, contributing richly to the cultural heritage of the area. +- **Folklore and Traditions**: Gascon is an important vehicle for local folklore, traditions, and customs in Gascony. + +## Current Status +- **Revitalization Efforts**: There are ongoing efforts to promote and teach Gascon in schools, cultural organizations, and through local media. +- **Number of Speakers**: As of recent estimates, the number of fluent speakers is declining, with efforts being made to preserve the language among younger generations. + +## Related Languages +- **Occitan**: Gascon is one of the major dialects of the Occitan language, which also includes Provençal and Languedocien. +- **Comparison to French**: While Gascon shares some similarities with French, it retains distinct grammatical structures and vocabulary. 
+ +## Conclusion +Gascon is not only a language but a crucial component of the cultural identity of the Gascon people, reflecting their history, traditions, and regional pride. Efforts for revitalization continue to be important in preserving this unique linguistic heritage. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/languedocien.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/languedocien.md new file mode 100644 index 0000000..f85cdda --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/languedocien.md @@ -0,0 +1,30 @@ +# Overview of Languedocien Language + +## General Information +- **Language Family**: Occitan +- **Region**: Primarily spoken in the Languedoc region of southern France. +- **ISO Code**: Not officially assigned, but sometimes referred to as "oc" for Occitan. + +## Linguistic Features +- **Dialects**: Languedocien is one of the major dialects of the Occitan language, which also includes Provençal, Gascon, and Auvergnat. +- **Phonetics**: Characterized by the presence of certain vowel sounds and the use of diphthongs that may differ from other dialects. +- **Grammar**: Similar to other Occitan dialects, it features a subject-verb-object structure, but with unique local variations. + +## Vocabulary +- **Lexical Influence**: Languedocien vocabulary is heavily influenced by Latin, with a significant number of words also derived from Provençal and other regional languages. +- **Regionalisms**: Contains unique words and expressions that are specific to local culture and traditions. + +## Cultural Context +- **Recognition**: While part of the Occitan language family, Languedocien does not have official status in France and is considered a regional language. 
+- **Literature**: Historically used in medieval literature; notable authors include Frédéric Mistral and others who contributed to the revival of Occitan literature. + +## Current Status +- **Speakers**: There are an estimated few hundred thousand speakers, with numbers decreasing due to the dominance of French. +- **Revitalization Efforts**: Various cultural organizations and schools aim to preserve and promote the use of Languedocien through courses, workshops, and public events. + +## Geographic Distribution +- **Primary Areas**: Predominantly spoken in the departments of Hérault, Aude, Gard, and parts of Lozère and Pyrénées-Orientales. +- **Urban vs. Rural**: More commonly spoken in rural areas, with younger generations tending to use it less in urban settings. + +## Conclusion +Languedocien remains an essential part of the cultural heritage of southern France, reflecting the region's history, traditions, and linguistic diversity. Efforts to sustain and promote the language continue amidst challenges posed by modernization and globalization. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/lorrain.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/lorrain.md new file mode 100644 index 0000000..818fbfc --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/lorrain.md @@ -0,0 +1,26 @@ +# Overview of the Lorrain Language + +## General Information +- **Language Family**: Lorrain is part of the Langue d'Oïl languages, which are a subgroup of the Romance languages. +- **Region**: Primarily spoken in the Lorraine region of northeastern France. +- **Dialects**: There are various dialects of Lorrain, including certain variations influenced by local languages and cultures. 
+ +## Historical Context +- **Origins**: The language has roots dating back to the medieval period and was influenced by the historical presence of the Duchy of Lorraine. +- **Language Shift**: Over the 19th and 20th centuries, Lorrain saw a decline in usage due to the dominance of French, leading many speakers to shift to French. + +## Linguistic Features +- **Phonology**: Lorrain phonetics include distinct sounds that differentiate it from standard French and other Langue d'Oïl languages. +- **Vocabulary**: The lexicon of Lorrain retains several archaic words and expressions that have disappeared from modern French. +- **Grammar**: Similar to French but with unique grammatical structures and conjugations, reflecting its distinct identity. + +## Cultural Significance +- **Traditions**: Lorrain is often associated with local folklore, songs, and literature, which contribute to the cultural identity of Lorraine. +- **Preservation Efforts**: Various initiatives have been undertaken to promote and preserve the Lorrain language, including cultural festivals and educational programs. + +## Current Status +- **Speaker Population**: The number of active speakers has significantly decreased, with many older speakers and limited transmission to younger generations. +- **Revitalization**: Recent efforts are being made to revive interest in Lorrain among younger populations through workshops, classes, and media. + +## Conclusion +Lorrain is a unique language that embodies the rich cultural heritage of the Lorraine region. While it faces challenges, ongoing efforts aim to preserve and revitalize this historical language for future generations. 
\ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/normand.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/normand.md new file mode 100644 index 0000000..4ad9c00 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/normand.md @@ -0,0 +1,34 @@ +# Overview of the Normand Language + +## What is Normand? +Normand is a regional language of France, part of the Oïl language group. It originates from the Normandy region and is historically linked to Old Norman, which developed from the Old Norman dialect of Old French. + +## Geographic Distribution +- Predominantly spoken in Normandy, particularly in the departments of Seine-Maritime and Calvados. +- Some dialects extend into the Channel Islands (like Jersey and Guernsey), where it is closely related to Jèrriais and Guernésiais. + +## Dialects +Normand has several dialects, which can vary significantly in terms of vocabulary, pronunciation, and grammar. Key dialects include: +- **Bocage**: Spoken in the rural areas of western Normandy. +- **Mélée**: Found in the northeastern part. +- **Sèvres**: A dialect with influences from the urban centers. + +## Linguistic Features +- Normand retains many archaic French features that have evolved in Standard French. +- The pronunciation of vowels and some consonant sounds can be quite distinct from Standard French. +- There are notable differences in use of articles and noun endings compared to Standard French. + +## Historical Context +- Norman was historically influential due to the Viking settlement of Normandy in the 9th century and subsequent Norman Conquest of England in 1066. +- It was widely used by the nobility and in administrative contexts until French became more dominant post-16th century. + +## Current Status +- Normand is considered a minority language and has seen a decline in speakers over the years. 
+- Efforts for revitalization are ongoing, with various cultural associations promoting the language through education and media. + +## Cultural Aspects +- Normand has a rich oral tradition, with folk tales, songs, and proverbs integral to the culture of Normandy. +- Festivals and events celebrating Normand language and culture are held in various communities. + +## Conclusion +While facing challenges due to globalization and the dominance of Standard French, Normand remains an important part of the cultural heritage of Normandy. Efforts to preserve and promote the language continue, aiming to maintain its presence for future generations. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/picard.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/picard.md new file mode 100644 index 0000000..fcf4d72 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/picard.md @@ -0,0 +1,27 @@ +# Overview of the Picard Language + +## General Information +- **Language Family**: Romance, specifically a part of the West Oïl languages, which also includes French. +- **Region**: Primarily spoken in the historic region of Picardy in northern France, as well as in parts of Belgium and historically in the areas of the nearby Nord-Pas-de-Calais. + +## Linguistic Characteristics +- **Dialects**: There are several dialects of Picard, including Amiénois, Beauvaisis, and Hesdinois. +- **Vocabulary**: Shares many lexical items with French but also retains unique words and expressions. Some vocabulary is influenced by local historical interactions with Dutch and German. + +## Historical Context +- **Origins**: Evolved from Latin, like other Romance languages. Roots trace back to the Vulgar Latin spoken in the region during the Roman Empire. 
+- **Literary Tradition**: Has a rich but lesser-known literary tradition, with poetry and prose dating back to the Middle Ages. + +## Current Status +- **Speakers**: The number of speakers has declined significantly over the 20th century due to the dominance of standard French and the 1999 ban on the usage of Picard in all of France. +- **Revitalization Efforts**: Recent efforts outside of France include community classes, cultural organizations, and media in Picard to promote the language. It is rumored that there is an underground movement in France to keep Picard alive in spite of the language being banned and illegal to use since 1999. + +## Cultural Significance +- **Identity**: Picard is an important part of regional identity and cultural heritage for many people in northern France. +- **Festivals and Events**: Regional festivals celebrate Picard culture, featuring traditional songs, dances, and cuisine. + +## Legal Status +- **Recognition**: Picard has no official status in France, but it is recognized as a regional language. Efforts have been made to include it in educational curricula and local government documents in some areas. + +## Conclusion +Picard is a unique language that reflects the cultural and historical tapestry of northern France. Despite challenges, there are active efforts to preserve and promote its usage among future generations. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/provencal.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/provencal.md new file mode 100644 index 0000000..6a762f9 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/languages/provencal.md @@ -0,0 +1,27 @@ +# Overview of Provençal Language + +## Definition +Provençal is a Romance language that belongs to the Occitan language family, which is spoken primarily in the Provence region of southern France. 
+ +## Historical Background +- **Origins**: Provençal has its roots in Vulgar Latin and has been influenced by various languages and cultures throughout history, including Celtic, Germanic, and Arabic. +- **Literary Tradition**: It has a rich literary tradition dating back to the 11th century, with notable poets such as Frédéric Mistral contributing to its revival in the 19th century. + +## Geographic Distribution +- **Regions**: Primarily spoken in Provence, it also has speakers in parts of Italy and Spain, particularly in the Val d'Aran valley in Catalonia, known as Aranese. +- **Dialectal Variations**: Provençal encompasses several dialects, such as Alémanique, Boulégue, and Languedocien, reflecting the linguistic diversity within the Occitan language. + +## Current Status +- **Recognition**: Provençal is recognized as a cultural language in France but has a minority status and faces challenges due to the dominance of French. +- **Revitalization Efforts**: There are ongoing efforts to promote and teach Provençal, including in schools and cultural institutions. + +## Linguistic Features +- **Grammar and Syntax**: Provençal has distinct grammatical structures that differentiate it from standard French, including the use of gendered nouns and specific verb conjugations. +- **Vocabulary**: It retains many words and expressions derived from Latin, along with unique local terms and influences from neighboring languages. + +## Cultural Significance +- **Folklore and Traditions**: Provençal is an important part of the cultural identity in Provence, associated with local traditions, music, festivals, and cuisine. +- **Media and Literature**: There are books, newspapers, and online resources available in Provençal, contributing to its presence in modern media. + +## Conclusion +Provençal is a vibrant language with a deep historical and cultural significance in southern France. 
While it faces challenges, ongoing efforts for its preservation continue to foster interest and engagement in this unique linguistic heritage. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/alpes.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/alpes.md new file mode 100644 index 0000000..0a1a4f8 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/alpes.md @@ -0,0 +1,37 @@ +# Overview of the French Alps + +## General Information +- **Location:** Southeastern France, extending into Switzerland and Italy. +- **Length:** Approximately 1,200 kilometers (750 miles). +- **Highest Peak:** Mont Blanc, standing at 4,808 meters (15,774 feet). +- **Mountain Chain:** Part of the larger Alpine range that spans across several European countries. + +## Geography +- **Geological Composition:** Primarily composed of limestone and granite. +- **Major Valleys:** Includes the Rhône and Isère valleys. +- **Natural Parks:** Home to several national parks, including Écrins National Park and Vanoise National Park. + +## Climate +- **Variety:** Alpine climate with large variations; cold winters and mild summers. +- **Snowfall:** Heavy snowfall in winter makes it a prime destination for winter sports. + +## Flora and Fauna +- **Biodiversity:** Rich diversity of species; includes both alpine and Mediterranean flora. +- **Wildlife:** Encounters with species such as chamois, ibex, and golden eagles. + +## Activities +- **Winter Sports:** Skiing and snowboarding are popular, with famous resorts like Chamonix, Courchevel, and Val d’Isère. +- **Summer Activities:** Hiking, mountaineering, and mountain biking attract visitors during the warmer months. +- **Paragliding:** Known as a hotspot for paragliding due to favorable winds and stunning views. 
+ +## Cultural Significance +- **Local Communities:** Home to various Alpine villages and cultures, each with unique traditions and languages. +- **Gastronomy:** Famous for local cheeses (like Beaufort and Reblochon), charcuterie, and dishes such as fondue and raclette. + +## Historical Aspects +- **Cultural Heritage:** Influenced by Roman and medieval settlements, with significant archaeological sites. +- **Tourism:** Became a major tourist destination in the 19th century. + +## Importance +- **Economic Significance:** Tourism is a vital part of the local economy, alongside agriculture and forestry. +- **Sustainability Focus:** Growing emphasis on sustainable tourism practices to protect the fragile alpine ecosystem. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/ardennes.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/ardennes.md new file mode 100644 index 0000000..2fb4c29 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/ardennes.md @@ -0,0 +1,36 @@ +# Overview of the Ardennes Mountain Range + +## Location +- The Ardennes is a region located in the northeastern part of France, extending into Belgium and Luxembourg. + +## Geography +- The Ardennes is characterized by dense forests, deep valleys, and rolling hills. +- The highest peak in the French Ardennes is Le Signal de Botrange, which reaches an elevation of about 2,277 feet (694 meters), although it is situated in Belgium. + +## Geology +- The area is known for its rugged terrain and is primarily composed of sedimentary rocks such as limestone and sandstone. +- The landscape has been shaped by glacial and river erosion over millennia. + +## Climate +- The Ardennes has a temperate maritime climate, with cool summers and mild winters. +- Precipitation is relatively high, leading to lush vegetation. 
+ +## Flora and Fauna +- The region is home to diverse wildlife, including deer, wild boar, and various bird species. +- Dense forests are dominated by beech and fir trees, and many areas are protected as nature reserves. + +## Human Activity +- The Ardennes has a rich history, having been inhabited since prehistoric times. +- It has significance in World War I and II, particularly during the Battle of the Bulge. +- The region is known for outdoor activities such as hiking, cycling, and kayaking. + +## Cultural Aspects +- The Ardennes is dotted with picturesque villages and towns, showcasing traditional architecture. +- The area is known for its beer production, particularly in Belgium, with many breweries operating in the region. + +## Tourism +- Key attractions include the Semois River, the fortress of Bouillon, and the expansive forests of the Ardennes. +- The region offers several trails and parks, attracting nature lovers and adventure enthusiasts. + +## Conclusion +The Ardennes is a unique blend of natural beauty, historical significance, and cultural richness, making it an important region in France and beyond. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/jura.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/jura.md new file mode 100644 index 0000000..d1b48ff --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/jura.md @@ -0,0 +1,37 @@ +# Overview of the Jura Mountain Range in France + +## Location +- The Jura Mountains are located along the border between France and Switzerland. +- They stretch approximately 365 kilometers (227 miles) from the Rhône River in the south to the Rhine River in the north. + +## Geography +- The Jura is characterized by its rugged terrain, with numerous peaks, plateaus, and deep valleys. 
+- The highest peak in the French Jura is Crêt de la Neige, which rises to an elevation of 1,720 meters (5,643 feet). + +## Geology +- The range is primarily composed of limestone, which has been shaped by erosion, creating unique karst formations, caves, and cliffs. +- Most of this limestone was deposited during the Jurassic period, a geological period that was itself named after the Jura Mountains. + +## Climate +- The climate in the Jura varies from humid in the west to drier conditions in the east. +- The area experiences significant snowfall in winter, making it popular for winter sports. + +## Flora and Fauna +- The Jura is home to diverse ecosystems, including forests, alpine meadows, and wetlands. +- Wildlife includes species such as deer, chamois, marmots, and a variety of bird species. + +## Activities +- The Jura Mountains offer various outdoor activities, including hiking, skiing, and mountain biking. +- The region is known for its beautiful landscapes and natural parks, attracting tourists and nature enthusiasts. + +## Cultural Significance +- The Jura region is also known for its traditional cheese production, particularly Comté cheese. +- Numerous charming villages and towns, such as Arbois and Clairvaux-les-Lacs, showcase the cultural heritage of the area. + +## History +- The Jura Mountains have historical significance, having served as a natural barrier and route for trade and exploration. +- The region has witnessed various historical events, including battles during the French Revolutionary Wars and the Napoleonic Wars. + +## Accessibility +- The Jura is accessible from major cities like Geneva, Lyon, and Besançon, making it a popular destination for both locals and tourists. +- Several scenic routes and parks are maintained to facilitate exploration and enjoyment of the natural beauty.
\ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/massif_armorican.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/massif_armorican.md new file mode 100644 index 0000000..7b97765 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/massif_armorican.md @@ -0,0 +1,35 @@ +# Overview of the Massif Armorican + +## Location +- **Region**: Brittany, France +- **Coordinates**: Approximately 47° N latitude and 2° W longitude + +## Geography +- **Type**: Mountain range and geological massif +- **Area**: Covers most of Brittany and extends into western Normandy and the northern Pays de la Loire +- **Elevation**: The highest point, **Mont des Avaloirs**, reaches about 416 meters (1,365 feet) + +## Geology +- **Formation**: Primarily composed of ancient metamorphic rocks and granite formations, dating back to the Precambrian and Paleozoic eras +- **Tectonic Activity**: Influenced by the Variscan orogeny, which caused significant geological changes + +## Flora and Fauna +- **Biodiversity**: Home to diverse ecosystems, including heathlands, forests, and wetlands +- **Protected Areas**: Parts of the massif are designated as natural parks and reserves, promoting conservation efforts + +## Culture and History +- **Historical Significance**: The area is rich in megalithic structures and archaeological sites, reflecting ancient Celtic culture +- **Tourism**: Popular for hiking, cycling, and exploring its historical sites, contributing to local economies + +## Climate +- **Climate Type**: Maritime temperate climate, characterized by mild winters and cool summers +- **Precipitation**: Receives a significant amount of rainfall throughout the year, supporting its lush vegetation + +## Attractions +- **Sites of Interest**: Includes historic towns, châteaux, and picturesque landscapes, attracting visitors for both natural beauty and cultural heritage +- 
**Outdoor Activities**: Offers opportunities for outdoor sports such as hiking, horseback riding, and nature observation + +## Transportation +- **Accessibility**: Well-connected by road and rail, making it easily accessible from major urban centers in Brittany + +This overview encapsulates the essential aspects of the Massif Armorican, highlighting its geographical, geological, and cultural significance in France. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/massif_central.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/massif_central.md new file mode 100644 index 0000000..22c1c56 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/massif_central.md @@ -0,0 +1,34 @@ +# Overview of Massif Central + +## General Information +- **Location**: South-central France +- **Area**: Approximately 85,000 km² +- **Highest Peak**: Puy de Sancy (1,885 meters) +- **Geological Composition**: Primarily volcanic and sedimentary rocks + +## Geography +- **Regions Covered**: Spans across several French departments including Cantal, Puy-de-Dôme, Haute-Loire, and Lozère. +- **Landscape**: Characterized by plateaus, volcanic cones, deep valleys, and rivers. + +## Climate +- **Type**: Predominantly oceanic climate with a continental influence. +- **Precipitation**: Higher rainfall in the western regions, often resulting in lush landscapes. + +## Flora and Fauna +- **Biodiversity**: Home to various ecosystems, including grasslands, forests, and wetlands. +- **Protected Areas**: Includes several national parks and nature reserves, such as the Parc Naturel Régional des Volcans d'Auvergne. + +## Cultural Significance +- **History**: Affected by various historical events and populations, including the Gauls and the Roman Empire. +- **Heritage**: Rich cultural heritage with medieval towns, castles, and traditional practices. 
+ +## Economic Importance +- **Agriculture**: Known for agriculture, particularly cheese production (e.g., Saint-Nectaire, Cantal). +- **Tourism**: Popular destination for outdoor activities such as hiking, skiing, and exploring natural parks. + +## Notable Features +- **Volcanic Activity**: The region contains many extinct volcanoes, with some still showing geothermal activity. +- **Natural Attractions**: Features stunning sites like the Gorges de la Loire and the Chaîne des Puys, a UNESCO World Heritage site. + +## Accessibility +- **Transport**: Well-connected by road and rail, with several towns providing access points for visitors. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/morvan.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/morvan.md new file mode 100644 index 0000000..310dc88 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/morvan.md @@ -0,0 +1,44 @@ +# Overview of the Morvan Mountain Range + +## Location +- **Country**: France +- **Region**: Burgundy (Bourgogne) +- **Department**: Nièvre, Saône-et-Loire, Côte-d'Or + +## Geography +- **Coordinates**: Approximately 47°10′N 3°55′E +- **Highest Peak**: Haut-Folin + - **Elevation**: 901 meters (2,956 feet) +- **Area**: Approximately 3,500 square kilometers +- **Major Rivers**: The Cure and the Yonne flow through the region. + +## Geology +- Composed primarily of granitic and metamorphic rocks. +- The landscape features rolling hills, valleys, and plateaus. +- Known for its rich biodiversity and varied ecosystems. + +## Climate +- **Type**: Temperate continental climate. +- **Weather**: Mild summers and cold winters with occasional snowfall. + +## History +- The Morvan area has a rich history dating back to prehistoric times. +- Notable archaeological sites include the remnants of the Gallic tribe of the Aedui on Mont Beuvray.
+- The region was significant during the Roman conquest of Gaul. + +## Culture and Economy +- The Morvan is known for its traditional rural lifestyle and local crafts. +- Main industries include agriculture, forestry, and tourism. +- Famous for Morvan cheese and wines from the surrounding Burgundy region. + +## Tourism +- Offers a variety of outdoor activities such as hiking, cycling, and fishing. +- Home to the Morvan Regional Natural Park, established in 1970, which promotes conservation and sustainable tourism. +- Attractions include ancient ruins, beautiful landscapes, and charming villages. + +## Wildlife +- Habitat for various species, including deer, wild boars, and numerous bird species. +- Rich flora with many endemic plant species. + +## Conservation +- The region emphasizes environmental protection and sustainability in its natural park initiatives. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/pyrenees.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/pyrenees.md new file mode 100644 index 0000000..e28e335 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/pyrenees.md @@ -0,0 +1,40 @@ +# Overview of the Pyrenees Mountain Range + +## Geographic Location +- The Pyrenees mountain range forms a natural border between **France** and **Spain**. +- It extends approximately **430 kilometers (267 miles)** from the Atlantic Ocean (Bay of Biscay) in the west to the Mediterranean Sea in the east. + +## Major Peaks +- **Aneto** is the highest peak, with an elevation of **3,404 meters (11,168 feet)**. +- Other notable peaks include **Monte Perdido**, **Vignemale**, and **Pic du Midi d'Ossau**. + +## Geography and Geology +- The Pyrenees are divided into three sections: + - **Western Pyrenees**: Characterized by rugged terrain and steep valleys. 
+ - **Central Pyrenees**: Known for its glacial landscapes and high peaks. + - **Eastern Pyrenees**: Features more rounded hills and a transition to the Mediterranean landscape. +- The range is primarily composed of granite, limestone, and schist rock formations. + +## Climate +- The climate varies from oceanic in the west to Mediterranean in the east. +- Snowfall is common during the winter months, making it a popular destination for skiing and winter sports. + +## Flora and Fauna +- The region is home to diverse ecosystems, featuring forests, meadows, and alpine tundra. +- Wildlife includes species such as the **Pyrenean chamois (isard)**, **brown bear**, **vultures**, and various endemic plants; the native Pyrenean ibex went extinct in 2000. + +## Cultural Significance +- The Pyrenees have a rich history, with numerous prehistoric caves, Roman ruins, and medieval castles. +- The region is culturally significant for both France and Spain, with unique traditions, languages (such as **Occitan** and **Catalan**), and gastronomy. + +## Outdoor Activities +- The Pyrenees are a popular destination for various activities including: + - **Hiking**: Numerous trails cater to different skill levels. + - **Skiing and Snowboarding**: Several ski resorts like **Saint-Lary-Soulan** and **Baqueira Beret**. + - **Climbing and Mountaineering**: Challenging routes attract climbers from around the world. + +## National Parks +- Several national parks, including **Pyrenees National Park** in France and **Ordesa y Monte Perdido National Park** in Spain, protect this stunning natural environment and its biodiversity. + +## Accessibility +- The Pyrenees can be accessed from various cities, including **Toulouse** and **Barcelona**, with numerous roads and hiking paths connecting different areas of the mountains.
\ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/vosges.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/vosges.md new file mode 100644 index 0000000..7e74634 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/mountains/vosges.md @@ -0,0 +1,33 @@ +# Vosges Mountains Overview + +## Geography +- **Location**: Northeastern France, bordering Germany to the east. +- **Length**: Approximately 150 kilometers (93 miles) from north to south. +- **Elevation**: The highest peak is the **Grand Ballon**, which reaches an elevation of **1,424 meters** (4,672 feet). + +## Natural Features +- **Landscape**: Characterized by rolling hills, dense forests, and numerous lakes and streams. +- **Geology**: Composed mainly of granite and sandstone, along with some limestone. +- **Flora and Fauna**: Home to diverse ecosystems, including coniferous and deciduous forests, and various wildlife such as deer, wild boar, and a range of bird species. + +## Climate +- **Influence**: The Vosges mountains create a rain shadow effect, leading to varied climates on either side of the range. +- **Weather**: Generally humid, with abundant rainfall, particularly in the western slopes. + +## Culture and History +- **Human Settlement**: Historically inhabited by Celtic tribes, later significant in both the Roman Empire and medieval periods. +- **Tourism**: Popular for hiking, skiing, and outdoor activities, with many marked trails and ski resorts. +- **Cultural Heritage**: Known for traditional villages, local cuisine, and the Alsace wine route. + +## Notable Locations +- **Ballons des Vosges Regional Nature Park**: A protected area showcasing the natural beauty of the mountains. +- **Colmar and Gérardmer**: Prominent towns known for their cultural significance and as tourist destinations.
+- **Route des Crêtes**: A scenic road that offers breathtaking views of the Vosges and surrounding regions. + +## Activities +- **Hiking**: Numerous trails, including the famous GR5 long-distance path. +- **Skiing**: Various ski resorts, particularly in the higher altitudes. +- **Cycling**: The region is cyclist-friendly with several bike routes. + +## Accessibility +- **Transport**: Well-connected by road and rail, making it accessible from major French cities and neighboring countries. \ No newline at end of file diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/alsace_lorraine.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/alsace_lorraine.md new file mode 100644 index 0000000..b76cc2b --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/alsace_lorraine.md @@ -0,0 +1,47 @@ +# Overview of Alsace-Lorraine Region in France + +Alsace-Lorraine is a historically significant and culturally diverse region located in northeastern France. Known for its unique blend of French and German influences, the region has a fascinating history, charming towns, and beautiful landscapes. + +## Geography +- **Location**: Situated along the Rhine River, Alsace-Lorraine borders Germany to the east and Luxembourg to the north. The region is part of the Grand Est administrative region of France. +- **Area**: Covers approximately 14,524 square kilometers. +- **Major Cities**: Strasbourg (capital of Alsace), Metz (capital of Lorraine), Mulhouse, Nancy, Colmar, and Epinal. + +## History +- **German and French Control**: The region has alternated between French and German control multiple times, particularly during the 19th and 20th centuries. It was part of the German Empire from 1871 to 1918, and again during World War II, before returning to France after the war. 
+- **Franco-Prussian War (1870-1871)**: Alsace and part of Lorraine (the present-day Moselle department) were ceded to Germany after France's defeat in the war. This period marked significant German cultural and linguistic influence. +- **Post-World War II**: After World War II, Alsace-Lorraine was definitively integrated into France, with the region's mixed identity still influencing its culture and language. + +## Culture +- **Bilingualism**: The region has strong Germanic roots, and many people speak both French and a variety of regional dialects, such as Alsatian (a dialect of German). This bilingual heritage is reflected in the local culture, architecture, and cuisine. +- **Festivals**: Alsace-Lorraine is known for its rich tradition of festivals, especially those celebrating wine and food. The Strasbourg Christmas Market is one of the oldest and most famous in Europe. +- **Cuisine**: The region is renowned for its hearty and flavorful cuisine, which blends French and German influences. Notable dishes include choucroute (sauerkraut with sausages), tarte flambée (a type of pizza), and kugelhopf (a traditional cake). +- **Wine**: Alsace is one of the premier wine-producing regions in France, known for its white wines, particularly Riesling, Gewürztraminer, and Pinot Gris. The Alsace Wine Route is a popular tourist attraction. + +## Natural Beauty +- **Vosges Mountains**: Located in Lorraine, the Vosges Mountains offer scenic landscapes, hiking trails, and ski resorts. +- **The Alsace Wine Route**: Stretching over 170 kilometers, this picturesque route offers breathtaking views of vineyards and charming villages. +- **Regional Parks**: The region is home to several natural parks, including the Ballons des Vosges Regional Nature Park, which features forests, lakes, and wildlife. + +## Landmarks and Attractions +- **Strasbourg Cathedral**: The Cathedral of Notre-Dame in Strasbourg is a masterpiece of Gothic architecture and a UNESCO World Heritage site.
Its astronomical clock and panoramic views from the tower are major attractions. +- **Château de Haut-Koenigsbourg**: A stunning medieval castle located in the Vosges Mountains, offering panoramic views of the Alsace plain. +- **Metz’s Cathedral**: The Cathedral of Saint-Étienne in Metz is a notable example of Gothic architecture, with some of the largest stained-glass windows in France. +- **Colmar**: Known for its well-preserved old town, Colmar is a charming medieval town with colorful half-timbered houses and canals that resemble a fairytale village. + +## Economy +- **Industry**: Alsace-Lorraine has a diverse economy that includes manufacturing, automotive, chemicals, and electronics. The region is home to several large industrial companies, particularly in Strasbourg and Mulhouse. +- **Agriculture**: The region is known for its agricultural output, particularly in wine production, as well as fruit and vegetable farming. +- **Tourism**: With its rich history, picturesque landscapes, and cultural festivals, Alsace-Lorraine attracts millions of tourists each year. + +## Climate +- **Continental Climate**: Alsace-Lorraine experiences a continental climate with cold winters and hot, often humid summers. The region’s proximity to the Vosges Mountains means it can also experience significant rainfall, particularly in Lorraine. +- **Average Temperatures**: Winters can see temperatures drop to around 0°C (32°F), while summer temperatures typically range from 18°C to 25°C (64°F to 77°F). + +## Notable People +- **Albert Schweitzer**: The theologian, physician, and Nobel Peace Prize laureate was born in Kaysersberg, Alsace, in 1875. +- **Jean (Hans) Arp**: The pioneering abstract artist and co-founder of the Dada movement was born in Strasbourg. +- **Marcel Marceau**: The celebrated mime artist, creator of the character Bip the Clown, was born in Strasbourg.
+ +## Conclusion +Alsace-Lorraine is a region with a rich, multifaceted history and culture, shaped by its unique position between France and Germany. Its charming towns, breathtaking landscapes, and exceptional food and wine make it a significant part of French heritage and a beloved destination for travelers. diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/bourgogne.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/bourgogne.md new file mode 100644 index 0000000..0900fcb --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/bourgogne.md @@ -0,0 +1,47 @@ +# Overview of Bourgogne (Burgundy) Region in France + +Bourgogne, or Burgundy, is a historic and picturesque region located in eastern France. Known for its rich wine heritage, medieval towns, and stunning landscapes, Burgundy is a symbol of French culture and tradition. + +## Geography +- **Location**: Bourgogne is located in central-eastern France, bordered by the regions of Franche-Comté, Rhône-Alpes, Auvergne, and Champagne-Ardenne. +- **Area**: Covers approximately 31,000 square kilometers. +- **Major Cities**: Dijon (capital), Auxerre, Beaune, Chalon-sur-Saône, Nevers. + +## History +- **Duchy of Burgundy**: Burgundy was once an independent duchy, and during the Middle Ages, it was one of the most powerful and influential regions in France. It played a key role in European politics. +- **Unification with France**: In the 15th century, the Duchy of Burgundy became part of France after the death of the last Duke, Charles the Bold, in 1477. The region’s autonomy was gradually absorbed into the French crown. +- **Historical Significance**: Burgundy has a deep historical legacy, with numerous medieval abbeys, castles, and battlefields that have shaped the region’s identity. 
+ +## Culture +- **Wine Culture**: Burgundy is one of the world’s most famous wine-producing regions, renowned for its Pinot Noir and Chardonnay wines. The region’s vineyards produce some of the finest wines, especially in areas like Côte de Nuits, Côte de Beaune, and Chablis. +- **Cuisine**: Burgundy cuisine is rich and hearty, with dishes like boeuf bourguignon (beef stew in red wine), coq au vin (chicken cooked in wine), and escargots de Bourgogne (snails cooked in garlic and parsley butter). The region is also known for its mustard, particularly Dijon mustard. +- **Art and Architecture**: Burgundy is home to several historical and architectural landmarks, including Romanesque churches, medieval towns, and Renaissance palaces. The region has a long-standing tradition of art, with influences from both French and Flemish masters. + +## Natural Beauty +- **Burgundy Canal**: The Burgundy Canal offers scenic views and is a popular spot for boaters and cyclists. It connects the Yonne River to the Saône River and passes through charming villages. +- **Morvan Regional Natural Park**: Located in the heart of Burgundy, the Morvan Park is known for its forests, lakes, and wildlife, making it a haven for outdoor enthusiasts. +- **Vineyards**: The rolling hills of the Burgundy vineyards are a UNESCO World Heritage site and are dotted with charming wine villages like Beaune and Meursault. + +## Landmarks and Attractions +- **Dijon**: The capital of Burgundy, known for its well-preserved medieval architecture, the Palace of the Dukes of Burgundy, and the famous Dijon mustard. +- **Chablis**: Famous for its world-renowned white wines, Chablis is a picturesque village surrounded by vineyards and stunning views. +- **Abbey of Fontenay**: A UNESCO World Heritage site, this Cistercian abbey dates back to the 12th century and is an example of Romanesque architecture at its best. 
+- **Basilica of Vézelay**: Another UNESCO site, this basilica is a key pilgrimage site and an important example of Romanesque architecture in France. +- **Clos de Vougeot**: A historic wine estate and château in the Côte de Nuits, Clos de Vougeot is at the heart of Burgundy's wine heritage. + +## Economy +- **Wine Industry**: Burgundy’s wine industry is the cornerstone of the region’s economy. The vineyards produce some of the world’s most sought-after wines, and the region is home to prestigious wine estates. +- **Agriculture**: In addition to wine production, Burgundy is also known for its agricultural output, including grain, dairy products, and livestock, especially cattle. +- **Tourism**: Burgundy attracts tourists for its wine tourism, beautiful landscapes, medieval towns, and rich history. The region is a popular destination for wine lovers, history buffs, and outdoor adventurers. + +## Climate +- **Continental Climate**: Burgundy has a continental climate with hot summers and cold winters. The region’s climate is ideal for viticulture, with warm days during the growing season and cool nights that help preserve the flavors of the grapes. +- **Average Temperatures**: Summers typically range from 20°C to 28°C (68°F to 82°F), while winters can dip to around 0°C (32°F). + +## Notable People +- **Gustave Eiffel**: Born in Dijon, Eiffel is famous for designing the Eiffel Tower in Paris. +- **Bernard Loiseau**: A renowned French chef from Burgundy, Loiseau was known for his exceptional culinary skills and Michelin-starred restaurants. +- **Romain Rolland**: The Nobel Prize-winning writer, known for his works such as *Jean-Christophe*, was born in Clamecy, Burgundy. + +## Conclusion +Bourgogne is a region that embodies the essence of French culture, combining rich history, world-class wine, exceptional cuisine, and beautiful landscapes. 
Whether you’re savoring a glass of Burgundy wine, exploring its medieval towns, or hiking through its scenic parks, Burgundy offers a timeless experience for travelers and connoisseurs alike. diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/bretagne.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/bretagne.md new file mode 100644 index 0000000..47b4368 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/bretagne.md @@ -0,0 +1,45 @@ +# Overview of Bretagne (Brittany) Region in France + +Bretagne, or Brittany, is a culturally distinct region located in the northwest of France. Known for its rugged coastline, rich history, and unique cultural heritage, Bretagne offers a fascinating blend of natural beauty and ancient traditions. + +## Geography +- **Location**: Situated on the Brittany Peninsula, bordered by the English Channel to the north, the Atlantic Ocean to the west and south, and the Normandy and Pays de la Loire regions to the east. +- **Area**: Covers approximately 27,208 square kilometers. +- **Major Cities**: Rennes (capital), Brest, Nantes, Saint-Malo, Quimper, Lorient. + +## History +- **Celtic Origins**: Originally inhabited by the Celts, who brought their language, traditions, and culture to the region. Bretagne still maintains a strong Celtic identity. +- **Duchy of Brittany**: From the 9th to the 16th century, Brittany was an independent duchy before joining France in 1532. +- **Breton Language**: Breton (Brezhoneg) is a Celtic language still spoken by a small population, especially in rural areas and in cultural events. + +## Culture +- **Music**: Bretagne is known for its traditional Celtic music, including bagpipes, fiddles, and the bombard. The region hosts festivals like the Festival Interceltique de Lorient, which celebrates Celtic culture. 
+- **Cuisine**: The local cuisine includes specialties like crêpes, galettes (buckwheat pancakes), seafood, and cider (known as "cidre"). The region is famous for its oysters and mussels. +- **Festivals**: Brittany hosts several cultural festivals, such as the Fest Noz, a traditional Breton dance event, and the Breizh Festival, which celebrates Breton culture. + +## Natural Beauty +- **Coastline**: Bretagne is known for its stunning coastline with dramatic cliffs, sandy beaches, and picturesque coves. The region has more than 2,700 kilometers of coastline. +- **Mont Saint-Michel**: While technically in Normandy, it is often associated with Brittany due to its proximity. This island commune with a striking abbey is a UNESCO World Heritage site. +- **Regional Parks**: Brittany is home to several regional natural parks, such as the Armorique Regional Nature Park, known for its varied landscapes, including moors, forests, and hills. + +## Landmarks and Attractions +- **Carnac Stones**: Prehistoric standing stones dating back to the Neolithic period, located in the town of Carnac. They are among the most famous megalithic sites in the world. +- **Fort La Latte**: A medieval fortress on the north coast of Brittany, offering incredible views of the sea. +- **Saint-Malo**: A walled port city, famous for its cobblestone streets, stunning beaches, and historical significance as a center of piracy. + +## Economy +- **Agriculture**: The region is known for its dairy farming, particularly in the production of butter and cheese. Bretagne is also famous for its apple orchards, which are used to make cider. +- **Fishing**: Historically, Brittany has been one of the most important fishing regions in France, especially for shellfish, sardines, and tuna. +- **Tourism**: The natural beauty, history, and culture make Bretagne a popular destination for tourists, with significant income coming from visitors. 
+ +## Climate +- **Mild Climate**: Brittany experiences a temperate maritime climate, characterized by mild winters and cool summers. The region is known for frequent rainfall and variable weather. +- **Average Temperatures**: Winters rarely drop below 5°C (41°F), while summers range from 15°C to 20°C (59°F to 68°F). + +## Notable People +- **Bertrand Du Guesclin**: A famous medieval French knight and national hero. +- **Jacques Cartier**: The explorer credited with claiming Canada for France in the 16th century. +- **Yann Tiersen**: A modern musician and composer, best known for his soundtrack for the film *Amélie*. + +## Conclusion +Bretagne is a region of deep cultural significance, rich history, and extraordinary natural landscapes. Whether you’re drawn to its Celtic roots, its rugged coastline, or its historical landmarks, Brittany offers something for everyone. diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/gascogne.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/gascogne.md new file mode 100644 index 0000000..dd79859 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/gascogne.md @@ -0,0 +1,47 @@ +# Overview of Gascogne Region in France + +Gascogne is a historical and cultural region in southwestern France, known for its rolling hills, vineyards, charming villages, and rich heritage. It is often associated with the rustic lifestyle, gastronomy, and the famed Musketeers of Dumas’ novels. + +## Geography +- **Location**: Situated in the southwest of France, Gascogne is traditionally bounded by the Atlantic Ocean to the west, the Garonne River to the north and east, and the Pyrenees mountains to the south. +- **Area**: The region encompasses parts of the modern-day regions of Occitanie and Nouvelle-Aquitaine. +- **Major Cities**: Auch (historical capital), Agen, Condom, Lectoure, and Eauze.
+ +## History +- **Roman Influence**: Gascogne was known as part of the ancient Roman province of Novempopulania. The region’s rich history is reflected in its architecture and ancient ruins. +- **Visigoths and Franks**: The region saw control by the Visigoths and later the Franks, whose influence shaped local customs and governance. +- **Duchy of Gascogne**: During the Middle Ages, Gascogne was an independent duchy before becoming part of the Kingdom of France in the 13th century. +- **The Musketeers**: Gascogne is famously associated with the “Three Musketeers” of Alexandre Dumas’ novels. The fictional characters D'Artagnan, Athos, Porthos, and Aramis are portrayed as hailing from this region. + +## Culture +- **Gascon Language**: The Gascon language, a variety of Occitan, was historically spoken in the region. Though it has declined in use, it still carries cultural significance and is a symbol of regional identity. +- **Folk Traditions**: Gascogne is known for its folk traditions, including traditional music, dances, and festivals. The region is famous for its rural festivals, celebrating everything from local history to agricultural practices. +- **Cuisine**: Gascon cuisine is renowned for its hearty and flavorful dishes. Notable dishes include *foie gras*, *confit de canard* (duck confit), and *garbure* (a rich vegetable and meat soup). The region is also famous for its Armagnac, a brandy that is produced using traditional methods. + +## Natural Beauty +- **Rolling Hills and Vineyards**: Gascogne is known for its picturesque landscapes, featuring rolling hills, vast forests, and scenic vineyards. The region is ideal for hiking, cycling, and exploring the rural countryside. +- **The Pyrenees**: The southern border of Gascogne is defined by the Pyrenees mountains, which offer opportunities for outdoor activities like hiking and skiing. 
+- **Rivers and Lakes**: Gascogne is crisscrossed by rivers such as the Garonne and the Adour, making the region fertile for agriculture and creating stunning natural scenery.
+
+## Landmarks and Attractions
+- **Auch Cathedral**: A UNESCO World Heritage site, the Cathedral of Sainte-Marie in Auch is an impressive Gothic structure with a magnificent staircase leading to the church.
+- **D’Artagnan’s Birthplace**: The town of Lupiac, where D'Artagnan, the hero of Alexandre Dumas’ *The Three Musketeers*, was born, attracts fans of the novels and history alike.
+- **Château de Larressingle**: Often referred to as one of the most beautiful fortified villages in France, this medieval castle offers a glimpse into the region's past.
+- **Armagnac Distilleries**: Visitors can tour the distilleries that produce the famous Armagnac brandy, with opportunities to taste and learn about the traditional distilling process.
+
+## Economy
+- **Agriculture**: Gascogne is an important agricultural region, known for its production of ducks, geese (for foie gras), and pigs. The fertile soil supports the cultivation of corn, sunflowers, and grapes.
+- **Wine and Brandy**: The region is famous for its vineyards and the production of Armagnac, a type of brandy. The wines of the region, especially those from the Côtes de Gascogne, are increasingly recognized for their quality.
+- **Tourism**: With its rich history, natural beauty, and culinary traditions, Gascogne attracts tourists who are looking to experience authentic French rural life, enjoy local food and wine, and explore historical landmarks.
+
+## Climate
+- **Temperate Climate**: Gascogne enjoys a temperate climate, with warm summers and mild winters. The southern part of the region, near the Pyrenees, has a more Mediterranean climate, while the northern part experiences a more oceanic influence.
+- **Average Temperatures**: Summer temperatures typically range from 20°C to 30°C (68°F to 86°F), while winters are generally mild with temperatures ranging from 5°C to 10°C (41°F to 50°F).
+
+## Notable People
+- **D'Artagnan**: The fictional hero of *The Three Musketeers*, D'Artagnan is one of the most famous characters associated with Gascogne, although based on a real person.
+- **Charles de Batz de Castelmore d'Artagnan**: The historical figure who inspired D'Artagnan, born in Gascogne, was a nobleman and soldier.
+- **Henri IV**: The King of France, born in Pau (near Gascogne), famously said, “Paris is worth a Mass” and was instrumental in uniting France after years of religious conflict.
+
+## Conclusion
+Gascogne is a region that offers a unique blend of history, culture, and natural beauty. From its medieval villages and legendary connections to the Musketeers, to its rich culinary traditions and scenic landscapes, Gascogne provides a true taste of southwestern France. Whether exploring its vineyards, tasting Armagnac, or immersing yourself in its rural charm, Gascogne is a region full of life and tradition.
diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/ile_de_france.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/ile_de_france.md
new file mode 100644
index 0000000..49dbf85
--- /dev/null
+++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/ile_de_france.md
@@ -0,0 +1,47 @@
+# Overview of Île-de-France Region in France
+
+Île-de-France is the central region of France, encompassing the nation’s capital, Paris. As the political, economic, and cultural heart of France, this region is not only historically significant but also a global center for art, fashion, and business.
+
+## Geography
+- **Location**: Situated in the north-central part of France, Île-de-France is surrounded by the regions of Normandy, Hauts-de-France, Grand Est, Bourgogne-Franche-Comté, and Centre-Val de Loire.
+- **Area**: Covers approximately 12,012 square kilometers.
+- **Major Cities**: Paris (capital of both the region and France), Versailles, Créteil, Nanterre, and Montreuil.
+
+## History
+- **Royal Legacy**: Île-de-France has historically been the core of the French monarchy. It was the heart of the Capetian Dynasty, beginning in the 10th century. The region is home to many royal palaces and historic sites.
+- **French Revolution**: Paris, located in Île-de-France, was the focal point of the French Revolution in the late 18th century. Important revolutionary events, such as the storming of the Bastille, took place here.
+- **World War II**: During WWII, Paris was occupied by Nazi forces from 1940 to 1944. The city was liberated in August 1944 by Allied forces.
+
+## Culture
+- **Capital of Culture**: Paris is widely recognized as one of the world’s greatest cultural capitals. It is home to numerous world-class museums, theaters, and art galleries, including the Louvre, Musée d'Orsay, and the Centre Pompidou.
+- **Fashion and Art**: Paris is the global capital of fashion, known for haute couture, and hosts prestigious fashion events like Paris Fashion Week. The city has also been the center of the art world for centuries, influencing movements such as Impressionism and Surrealism.
+- **Gastronomy**: Île-de-France is known for its fine dining, with Michelin-starred restaurants, cafés, and bistros. The region is also famous for pâtisseries, including macarons and éclairs, and its traditional French dishes such as coq au vin and escargot.
+ +## Natural Beauty +- **Seine River**: The Seine River flows through Paris and the Île-de-France region, providing beautiful riverbanks and parks, perfect for leisure activities like boat tours, picnicking, and walking along its iconic bridges. +- **Bois de Boulogne & Bois de Vincennes**: These expansive public parks on the outskirts of Paris offer lush green spaces for recreation, hiking, and cycling. +- **Versailles Gardens**: The Gardens of the Palace of Versailles, with their meticulously designed lawns, fountains, and sculptures, are a UNESCO World Heritage site and one of the most famous gardens in the world. + +## Landmarks and Attractions +- **Eiffel Tower**: The most iconic landmark in Paris, the Eiffel Tower attracts millions of visitors every year. It stands as a symbol of France and offers stunning panoramic views of the city. +- **Notre-Dame Cathedral**: A masterpiece of Gothic architecture, the Notre-Dame Cathedral is one of the most famous religious sites in the world, located on the Île de la Cité in the Seine. +- **Palace of Versailles**: A short trip from Paris, the Palace of Versailles is one of the grandest royal palaces in Europe, famous for its opulent architecture and the Hall of Mirrors. +- **Sainte-Chapelle**: Known for its stunning stained-glass windows, this Gothic chapel in Paris is one of the most beautiful examples of medieval architecture. +- **The Louvre**: The world’s largest art museum, the Louvre in Paris, is home to thousands of works of art, including Leonardo da Vinci's *Mona Lisa* and the *Venus de Milo*. + +## Economy +- **Economic Powerhouse**: Île-de-France is the economic center of France, contributing a significant portion to the country’s GDP. It is home to many multinational companies and is the main business hub in France. +- **Finance and Technology**: The region has a thriving financial sector centered in La Défense, Paris’s business district. 
It also hosts tech startups and innovations, particularly in areas like AI, fintech, and digital media. +- **Tourism**: Paris is one of the world’s top tourist destinations, attracting millions of visitors each year. The region’s tourism is a key driver of the economy, with tourists coming for the history, culture, and attractions. + +## Climate +- **Oceanic Climate**: Île-de-France experiences a temperate oceanic climate with mild winters and warm summers. Paris typically has rainy weather in the autumn and spring, with summer temperatures ranging from 18°C to 25°C (64°F to 77°F). +- **Average Temperatures**: Winter temperatures can hover around 3°C to 7°C (37°F to 45°F), while summer highs can range from 25°C to 30°C (77°F to 86°F). + +## Notable People +- **Napoleon Bonaparte**: Born on the island of Corsica, Napoleon became the Emperor of France and played a pivotal role in shaping the history of France and Europe. His influence is still felt throughout Île-de-France. +- **Marcel Proust**: The famous French writer, best known for his work *In Search of Lost Time*, lived and wrote in Paris during the late 19th and early 20th centuries. +- **Édith Piaf**: One of France’s most beloved singers, Piaf was born and raised in Paris and became an international icon of French music. + +## Conclusion +Île-de-France is the heart of France, blending rich history, cultural innovation, and economic power. With Paris at its center, the region is a global leader in fashion, art, and business. From historic landmarks like the Eiffel Tower and Versailles to its world-class museums and gastronomic delights, Île-de-France is a region that offers something for every visitor, making it a must-see destination for travelers. 
diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/languedoc.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/languedoc.md
new file mode 100644
index 0000000..1bf96b9
--- /dev/null
+++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/languedoc.md
@@ -0,0 +1,46 @@
+# Overview of Languedoc Region in France
+
+Languedoc is a historic and culturally rich region located in the southern part of France, known for its Mediterranean coastline, picturesque villages, and deep-rooted traditions. It is often celebrated for its wines, beaches, and beautiful landscapes.
+
+## Geography
+- **Location**: Languedoc is situated in the southernmost part of France, bordered by the Mediterranean Sea to the southeast, the regions of Provence-Alpes-Côte d'Azur and Auvergne-Rhône-Alpes to the east and north, and Midi-Pyrénées to the west.
+- **Area**: Covers approximately 27,000 square kilometers.
+- **Major Cities**: Montpellier (capital), Nîmes, Perpignan, Carcassonne, Béziers, and Sète.
+
+## History
+- **Roman Influence**: Languedoc has a strong Roman heritage, with many ancient ruins, including the well-preserved Roman aqueduct, Pont du Gard, and the ancient city of Nîmes.
+- **Cathar History**: In the Middle Ages, Languedoc was the center of the Cathar religious movement. The region was the focus of the Albigensian Crusade (1209-1229), a military campaign aimed at eradicating Catharism.
+- **Rural Culture**: Historically, the region was a center of agriculture and viticulture, and it remains deeply connected to farming traditions, particularly wine production.
+
+## Culture
+- **Language**: The Occitan language, historically spoken in the region, was once widely used, and it still carries cultural significance today.
Languedoc’s name itself derives from the Occitan phrase *"langue d'oc,"* meaning “language of yes.” +- **Cuisine**: Languedoc cuisine is characterized by its Mediterranean influence, with seafood, olive oil, and fresh produce playing a central role. Famous dishes include *cassoulet* (a rich stew made with beans and meats), *brandade de morue* (a cod and garlic dish), and *tapenade* (olive spread). +- **Festivals**: The region is known for its vibrant festivals, such as the Feria de Nîmes, which celebrates bullfighting and the culture of southern France, and the Carcassonne Festival, which features music, theater, and other arts. + +## Natural Beauty +- **Mediterranean Coast**: The region boasts a stunning coastline along the Mediterranean Sea, with beautiful beaches like those in Cap d'Agde and the scenic Étang de Thau. +- **Languedoc-Roussillon Wine Route**: The Languedoc region is one of the largest wine-producing areas in France, and its wine route takes visitors through vineyards, picturesque villages, and wine estates. +- **Cévennes National Park**: This UNESCO-listed park is part of the Massif Central and offers stunning mountain landscapes, gorges, and wildlife, ideal for hiking and nature lovers. + +## Landmarks and Attractions +- **Carcassonne**: A UNESCO World Heritage site, the medieval fortress of Carcassonne is one of France’s most iconic landmarks. The double-walled citadel offers a glimpse into the past with its preserved medieval architecture. +- **Pont du Gard**: A well-preserved Roman aqueduct, the Pont du Gard is a UNESCO World Heritage site and an engineering marvel of antiquity, offering scenic views of the surrounding landscape. +- **Nîmes**: Known as the "French Rome," Nîmes is home to remarkable Roman monuments, including the Arena of Nîmes (a Roman amphitheater), the Temple of Diana, and the Maison Carrée. 
+- **Sète**: A picturesque coastal town known for its canals, seafood, and vibrant cultural scene, Sète is often referred to as the "Venice of Languedoc." +- **Abbey of Saint-Guilhem-le-Désert**: This UNESCO World Heritage site is a well-preserved medieval abbey located in the stunning Hérault Valley. + +## Economy +- **Wine Production**: Languedoc is one of the largest wine-producing regions in France, known for producing a wide variety of wines, including reds, whites, and rosés. The region is famous for its *AOC* (Appellation d'Origine Contrôlée) wines, such as those from the Minervois, Faugères, and Corbières appellations. +- **Agriculture**: In addition to wine, Languedoc is known for producing fruits (particularly melons, peaches, and cherries), olives, and lavender. It is also a significant producer of sheep and goat cheese. +- **Tourism**: With its Mediterranean coastline, historic cities, and scenic landscapes, Languedoc is a popular tourist destination. The region’s vineyards and charming towns attract visitors for wine tourism, cultural exploration, and outdoor activities. + +## Climate +- **Mediterranean Climate**: Languedoc enjoys a Mediterranean climate, characterized by hot, dry summers and mild, wet winters. The region’s climate is perfect for vineyards and outdoor activities. +- **Average Temperatures**: Summer temperatures typically range from 25°C to 35°C (77°F to 95°F), while winters are mild, with temperatures ranging from 8°C to 15°C (46°F to 59°F). + +## Notable People +- **Georges Brassens**: The famous French singer-songwriter and poet was born in Sète, and his legacy is celebrated in the town with a museum and annual festivals. +- **Pierre-Paul Riquet**: The engineer who designed the Canal du Midi, which connects the Garonne River to the Mediterranean, greatly impacting the region’s agriculture and trade during the 17th century. + +## Conclusion +Languedoc is a region rich in history, culture, and natural beauty. 
From its Roman heritage and medieval fortresses to its beautiful beaches and vineyards, Languedoc offers a unique blend of ancient traditions and modern charm. Whether you’re enjoying a glass of wine, exploring historic towns, or relaxing by the sea, Languedoc provides an unforgettable experience for travelers. diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/normandie.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/normandie.md new file mode 100644 index 0000000..eb5e40a --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/normandie.md @@ -0,0 +1,48 @@ +# Overview of Normandie Region in France + +Normandie (Normandy) is a historic and picturesque region located in the northern part of France. Known for its dramatic coastline, rich history, and cultural heritage, Normandy plays a central role in both French and world history. + +## Geography +- **Location**: Situated in the northernmost part of France, Normandy is bordered by the English Channel to the north, the regions of Île-de-France, Centre-Val de Loire, and Pays de la Loire to the south, and Brittany to the west. +- **Area**: Covers approximately 29,907 square kilometers. +- **Major Cities**: Rouen (capital), Caen, Le Havre, Cherbourg, and Dieppe. + +## History +- **Viking Heritage**: Normandy gets its name from the Norsemen (Vikings), who settled in the region in the 9th and 10th centuries. The region became known as "Normandy" after the Vikings (Normans) were granted land by the King of France. +- **William the Conqueror**: One of the most famous historical figures associated with Normandy is William the Conqueror, who, as Duke of Normandy, successfully invaded England in 1066 and became the King of England. +- **D-Day and WWII**: Normandy is internationally known for the D-Day landings on June 6, 1944, during World War II. 
The Allied invasion of Normandy was a pivotal event in the liberation of Western Europe from Nazi occupation. The beaches, such as Omaha Beach and Utah Beach, are significant historical sites. + +## Culture +- **Language**: The regional language of Normandy is Norman, a variety of the Old French language with influences from Old Norse. However, French is the primary language spoken today. +- **Cuisine**: Normandy cuisine is influenced by its coastal location, featuring seafood like oysters, mussels, and scallops. The region is also famous for its apples, which are used to make cider (cidre) and the famous apple brandy, Calvados. Dishes such as *coquilles Saint-Jacques* (scallops) and *camembert cheese* are iconic. +- **Folk Traditions**: The region is known for its folk traditions, including festivals, music, and dances that celebrate its Viking and maritime heritage. + +## Natural Beauty +- **Dramatic Coastline**: Normandy is known for its stunning coastline, including cliffs, sandy beaches, and small coves. The cliffs at Etretat are among the most photographed natural sites in France. +- **Normandy Beaches**: Famous for their historical significance, Normandy’s beaches are also a popular destination for travelers. The beaches of Omaha, Utah, and Juno were sites of the D-Day landings. +- **Countryside and Farming**: Normandy is also known for its green countryside, dotted with rolling hills, fields, and traditional farmhouses. The region's fertile land is perfect for the production of dairy products, apples, and crops. + +## Landmarks and Attractions +- **Mont Saint-Michel**: A UNESCO World Heritage site, Mont Saint-Michel is one of France’s most iconic landmarks. This island commune features a medieval abbey perched atop a rocky hill, surrounded by tidal waters, creating a stunning visual. 
+- **D-Day Landing Beaches**: The beaches where the D-Day landings took place, such as Utah Beach, Omaha Beach, and Sword Beach, are significant historical sites and are home to several museums, memorials, and cemeteries dedicated to the soldiers who fought there. +- **Rouen Cathedral**: A masterpiece of Gothic architecture, the Rouen Cathedral is famous for its stunning facade and for being the subject of a series of paintings by Claude Monet. +- **Château de Caen**: Built by William the Conqueror in the 11th century, this castle in Caen is one of the largest medieval fortresses in Europe. +- **Jardin des Plantes de Rouen**: A botanical garden in Rouen that showcases a variety of plant species, it is a great place to explore nature and relax. + +## Economy +- **Agriculture**: Normandy is a major agricultural region, known for dairy farming, particularly the production of butter and cheese. The region is famous for its dairy products, with cheeses like Camembert, Livarot, and Pont-l’Évêque being integral to the local economy. +- **Cider Production**: Normandy is one of the primary cider-producing regions in France, with a long tradition of apple orchards. The region’s cider is often made from a variety of apples, resulting in dry, sweet, or sparkling ciders. +- **Fishing and Maritime**: The region’s location along the English Channel makes it a significant player in France’s fishing industry. Ports like Le Havre and Cherbourg are vital to the French maritime economy. +- **Tourism**: With its rich historical sites, picturesque countryside, and seaside attractions, Normandy is a popular tourist destination, drawing visitors to its beaches, memorials, and unique landmarks. + +## Climate +- **Oceanic Climate**: Normandy enjoys an oceanic climate, with mild winters and cool summers. The weather is influenced by the proximity to the English Channel, often resulting in cloudy, rainy days. 
+- **Average Temperatures**: Summers generally range from 18°C to 22°C (64°F to 72°F), while winters are mild, with temperatures ranging from 3°C to 7°C (37°F to 45°F).
+
+## Notable People
+- **William the Conqueror**: Born in Falaise, Normandy, William the Conqueror is one of the most famous figures in history, known for his conquest of England in 1066.
+- **Joan of Arc**: A national heroine of France who played a significant role in the Hundred Years' War, Joan of Arc was tried and executed in Rouen, Normandy, in 1431.
+- **Gustave Flaubert**: The renowned French writer, best known for his novel *Madame Bovary*, was born in Rouen, Normandy.
+
+## Conclusion
+Normandy is a region rich in history, culture, and natural beauty. From the stunning Mont Saint-Michel and the beaches of the D-Day landings to the pastoral landscapes and delicious cuisine, Normandy offers a mix of historical depth and natural charm. Whether exploring its historic towns, enjoying fresh seafood and cider, or paying tribute to its WWII heritage, Normandy provides a unique and unforgettable experience.
diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/poitou.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/poitou.md
new file mode 100644
index 0000000..f204880
--- /dev/null
+++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/poitou.md
@@ -0,0 +1,48 @@
+# Overview of Poitou Region in France
+
+Poitou is a historic region located in the western part of France, known for its rich cultural heritage, beautiful landscapes, and historical significance. Today, it forms part of the Nouvelle-Aquitaine region, but it retains its unique identity through its history, architecture, and traditions.
+
+## Geography
+- **Location**: Poitou is situated in the western part of France, bordered by the Atlantic Ocean to the west, the regions of Pays de la Loire to the north, Aquitaine to the south, and Centre-Val de Loire to the east.
+- **Area**: Covers approximately 10,000 square kilometers.
+- **Major Cities**: Poitiers (capital), La Rochelle, Niort, and Châtellerault.
+
+## History
+- **Medieval Influence**: Poitou was an important region during the medieval period, especially known for its connection to the powerful counts of Poitou and the Dukes of Aquitaine. The region was also the birthplace of Eleanor of Aquitaine, one of the most influential women of the medieval period.
+- **Anglo-French Conflict**: Poitou played a significant role during the Hundred Years' War, with both the English and the French vying for control of the region. It was once part of the Angevin Empire, which included large parts of modern-day France and England.
+- **Renaissance and Religious Wars**: During the Renaissance, Poitou became a center for intellectual and cultural development. It also saw significant involvement in the Wars of Religion between Catholics and Protestants in the 16th century.
+
+## Culture
+- **Language**: The traditional language of Poitou is Poitevin, a langue d'oïl closely related to French, which was widely spoken in the region in medieval times. However, French is predominantly spoken today.
+- **Cuisine**: Poitou cuisine is characterized by its use of fresh local ingredients, with specialties such as *mogettes* (white beans), *salmis* (a stew of game), and the region’s famous cheeses, including *Chabichou du Poitou*, a soft, creamy goat cheese. The region is also known for its seafood, particularly oysters from the Marennes-Oléron area.
+- **Folk Traditions**: Poitou has a rich tradition of folk music and dance, with regional festivals celebrating the local culture.
The region’s craft heritage, including pottery, woodwork, and textiles, continues to be celebrated. + +## Natural Beauty +- **Atlantic Coast**: Poitou has a beautiful coastline along the Atlantic Ocean, with scenic beaches and coastal landscapes. The island of Île de Ré, accessible by bridge from La Rochelle, is a popular destination for its charming villages, vineyards, and sandy beaches. +- **Marais Poitevin**: Also known as the “Green Venice,” the Marais Poitevin is a vast marshland and wetland area that is crisscrossed with canals. It is a paradise for nature lovers, offering opportunities for boating, birdwatching, and hiking. +- **Countryside**: The region also features gentle rolling hills, vineyards, and forests. The Poitou-Charentes region is known for its peaceful, rural landscapes, making it ideal for outdoor activities like cycling, hiking, and nature walks. + +## Landmarks and Attractions +- **Poitiers**: The historic city of Poitiers is famous for its medieval architecture, including the Church of Saint-Hilaire-le-Grand, a UNESCO World Heritage site, and the Palais des Ducs d'Aquitaine, a former royal palace. +- **La Rochelle**: Known for its well-preserved Old Port, La Rochelle is a charming coastal town with a rich maritime history. The city's landmarks include the iconic La Rochelle Towers and the Maritime Museum. +- **Futuroscope**: Located near Poitiers, Futuroscope is one of France’s most popular theme parks, offering futuristic attractions, multimedia shows, and cutting-edge technology exhibitions. +- **Île de Ré**: This picturesque island is known for its beautiful beaches, historic lighthouses, and charming villages. It is a popular vacation spot for tourists seeking relaxation and outdoor activities. +- **Château de Niort**: This medieval fortress in Niort dates back to the 12th century and offers visitors a glimpse into the region’s medieval history. 
+
+## Economy
+- **Agriculture**: Poitou is traditionally an agricultural region, known for its livestock farming, particularly the production of Charolais cattle, as well as the cultivation of cereals, potatoes, and sunflowers. The region also produces a variety of fruits, including apples and grapes.
+- **Wine Production**: The region is part of the larger wine-growing area of Charentes, which is famous for producing Cognac, a renowned brandy. The vineyards of the Charente and Charente-Maritime departments are integral to the local economy.
+- **Tourism**: Poitou’s rich history, natural beauty, and charming cities attract many tourists. La Rochelle, Poitiers, and Île de Ré are major tourist destinations, while the Marais Poitevin and the coastal areas draw those interested in nature and outdoor activities.
+- **Cognac Production**: Poitou is at the heart of the Cognac-producing region, with many distilleries located around the Charente River, where the famous spirit is made from grapes and aged for years in oak barrels.
+
+## Climate
+- **Oceanic Climate**: Poitou enjoys an oceanic climate with mild winters and warm summers, influenced by the Atlantic Ocean. Coastal areas experience more moderate temperatures, while inland regions can have slightly warmer summers.
+- **Average Temperatures**: Summer temperatures typically range from 18°C to 25°C (64°F to 77°F), while winters are generally mild, with temperatures ranging from 5°C to 10°C (41°F to 50°F).
+
+## Notable People
+- **Eleanor of Aquitaine**: Born in Poitou, Eleanor was one of the most powerful and influential women in medieval Europe. She was Queen of France and later Queen of England and played a key role in the politics of both kingdoms.
+- **François Rabelais**: The famous Renaissance writer, best known for his satirical work *Gargantua and Pantagruel*, was born near Chinon, just outside Poitou, but spent formative years as a monk in Fontenay-le-Comte in the region; his works remain an important part of French literature.
+- **René Descartes**: One of the most influential philosophers of the 17th century, Descartes spent much of his early life in Poitou, and his legacy continues to shape modern philosophy. + +## Conclusion +Poitou is a region rich in history, culture, and natural beauty. From its medieval towns and historic landmarks to its picturesque countryside and coastal beauty, Poitou offers a unique blend of traditions and modern attractions. Whether exploring the city of Poitiers, enjoying the fresh produce and local wine, or relaxing on the beaches of Île de Ré, Poitou provides an unforgettable experience for visitors. diff --git a/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/provence.md b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/provence.md new file mode 100644 index 0000000..b9e7c04 --- /dev/null +++ b/week5/community-contributions/day 4 no_langchain/knowledge_collection/regions/provence.md @@ -0,0 +1,50 @@ +# Overview of Provence Region in France + +Provence is a stunning region in the southeastern part of France, renowned for its breathtaking landscapes, rich history, vibrant culture, and Mediterranean climate. It is one of the most beloved regions in France, known for its lavender fields, vineyards, ancient Roman ruins, and charming villages. + +## Geography +- **Location**: Provence is located in the southeastern part of France, bordered by the Mediterranean Sea to the south, the Rhône River to the west, the Alps to the north, and the region of Côte d'Azur to the east. +- **Area**: Covers approximately 31,400 square kilometers. +- **Major Cities**: Marseille (capital), Aix-en-Provence, Avignon, Arles, and Toulon. + +## History +- **Roman Heritage**: Provence has a rich Roman history, with the city of Arles serving as a significant Roman settlement. The region is home to some of the best-preserved Roman monuments in France, including the Arena of Nîmes and the Pont du Gard. 
+- **Medieval Influence**: Provence was part of the Kingdom of Arles in the Middle Ages, while the neighboring Comtat Venaissin became a papal possession. The region was also home to the Papacy for a time, with the popes residing in Avignon from 1309 to 1377.
+- **Renaissance and Revolution**: Provence was a key region during the Renaissance, flourishing in the arts and culture. During the French Revolution, Provence played a significant role, with several uprisings and political changes.
+
+## Culture
+- **Language**: The traditional language of Provence is Provençal, a variety of the Occitan language. While French is predominantly spoken today, Provençal still has cultural significance and is used in regional poetry, music, and literature.
+- **Cuisine**: Provence is famous for its Mediterranean cuisine, emphasizing fresh vegetables, olive oil, herbs, seafood, and wine. Popular dishes include *bouillabaisse* (a fish stew), *ratatouille* (vegetable medley), *tapenade* (olive paste), and *pissaladière* (onion tart).
+- **Wine**: The region is renowned for its wine production, particularly rosé wines from the Côtes de Provence, as well as reds and whites. The vineyards of Provence benefit from the Mediterranean climate, producing wines with distinctive flavors.
+- **Folk Traditions**: Provence is known for its rich folk traditions, including festivals, music, dance, and crafts. The region celebrates a variety of traditional events, such as the Festival of the Calissons in Aix-en-Provence, and the Fête de la Lavande (Lavender Festival) in Sault.
+
+## Natural Beauty
+- **Mediterranean Coast**: Provence boasts a beautiful coastline along the Mediterranean, with stunning beaches, rocky coves, and picturesque seaside towns such as Cassis, Sainte-Maxime, and Bandol.
+- **Lavender Fields**: The lavender fields of Provence are one of the region's most iconic features. The fields bloom in vibrant purple hues during the summer months and are a major tourist attraction.
+- **Alps and Vineyards**: To the north of Provence, the landscape rises into the Alps, offering spectacular mountain scenery, hiking, and skiing opportunities. The rolling hills and vineyards of the region produce some of the finest wines in France. +- **Gorges du Verdon**: Known as the "Grand Canyon of Europe," the Gorges du Verdon is a breathtaking river canyon with turquoise waters, cliffs, and stunning landscapes. It is a popular destination for outdoor activities like hiking, kayaking, and rock climbing. + +## Landmarks and Attractions +- **Palace of the Popes (Palais des Papes)**: Located in Avignon, this UNESCO World Heritage site is one of the largest and most important medieval Gothic buildings in Europe. It was the residence of popes during the 14th century. +- **Pont du Gard**: An ancient Roman aqueduct bridge located near Nîmes, the Pont du Gard is a UNESCO World Heritage site and an engineering marvel. +- **Roman Arena of Nîmes**: One of the best-preserved Roman amphitheaters, the Arena of Nîmes in Nîmes is still used for events today, including bullfights and concerts. +- **Château des Baux-de-Provence**: A ruined medieval castle perched atop the hills of Les Baux-de-Provence, offering panoramic views of the surrounding landscape. +- **Cassis and Calanques National Park**: The seaside town of Cassis is famous for its beautiful harbor and access to the Calanques National Park, a stunning area of limestone cliffs, turquoise waters, and hidden coves. + +## Economy +- **Agriculture**: Provence is known for its agricultural production, including the cultivation of olives, lavender, tomatoes, and herbs such as thyme and rosemary. Olive oil production is a key industry, and the region’s lavender fields are famous worldwide. +- **Wine Production**: Provence is one of the most important wine regions in France, especially known for its rosé wines. Vineyards are spread throughout the region, including areas like Côtes de Provence, Bandol, and Cassis. 
+- **Tourism**: Tourism is a major part of Provence's economy, with millions of visitors flocking to the region for its beaches, lavender fields, Roman ruins, and charming towns. The region’s Mediterranean climate and picturesque landscapes make it a year-round destination. +- **Crafts and Industry**: Provence is known for its artisanal crafts, such as pottery, textiles, and perfume making, particularly in the town of Grasse, which is renowned as the perfume capital of the world. + +## Climate +- **Mediterranean Climate**: Provence enjoys a Mediterranean climate, characterized by hot, dry summers and mild, wet winters. This climate is ideal for growing grapes, olives, and lavender, and contributes to the region’s appeal as a tourist destination. +- **Average Temperatures**: Summers are typically hot, with temperatures ranging from 25°C to 35°C (77°F to 95°F), while winters are mild, with temperatures ranging from 5°C to 15°C (41°F to 59°F). + +## Notable People +- **Paul Cézanne**: A famous Post-Impressionist painter, Cézanne was born in Aix-en-Provence and is closely associated with the landscapes of the region. His works, particularly those depicting the Mont Sainte-Victoire mountain, are iconic in the art world. +- **Marcel Pagnol**: A renowned writer, playwright, and filmmaker, Pagnol was born in Aubagne and is known for his works about Provençal life, including *Marius*, *Fanny*, and *César*, as well as his memoirs. +- **Vincent van Gogh**: The Dutch painter spent a year in the town of Saint-Rémy-de-Provence, where he produced some of his most famous works, including *Starry Night* and *Irises*. + +## Conclusion +Provence is a region that captivates with its stunning landscapes, rich history, and vibrant culture. From the lavender fields and Mediterranean beaches to the Roman ruins and charming villages, Provence offers something for everyone. 
Whether you're visiting for the cuisine, the wine, the history, or simply to relax in its beautiful surroundings, Provence is a timeless and unforgettable destination. From d52b830bfe5aceda1e4529e6a401057737f1b9ef Mon Sep 17 00:00:00 2001 From: Adrian Banu Date: Tue, 4 Mar 2025 20:35:20 +1100 Subject: [PATCH 27/35] add ollama with streams using ollama generate --- .../week1_Ollama_generate_streams.ipynb | 180 ++++++++++++++++++ 1 file changed, 180 insertions(+) create mode 100644 week1/community-contributions/week1_Ollama_generate_streams.ipynb diff --git a/week1/community-contributions/week1_Ollama_generate_streams.ipynb b/week1/community-contributions/week1_Ollama_generate_streams.ipynb new file mode 100644 index 0000000..9dc91f9 --- /dev/null +++ b/week1/community-contributions/week1_Ollama_generate_streams.ipynb @@ -0,0 +1,180 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5", + "metadata": {}, + "source": [ + "# End of week 1 exercise\n", + "\n", + "To demonstrate your familiarity with OpenAI API, and also Ollama, build a tool that takes a technical question, \n", + "and responds with an explanation. This is a tool that you will be able to use yourself during the course!" 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c1070317-3ed9-4659-abe3-828943230e03", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "import os\n", + "import requests\n", + "import json\n", + "from typing import List\n", + "from dotenv import load_dotenv\n", + "from bs4 import BeautifulSoup\n", + "from IPython.display import Markdown, display, update_display\n", + "from openai import OpenAI\n", + "import ollama" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4a456906-915a-4bfd-bb9d-57e505c5093f", + "metadata": {}, + "outputs": [], + "source": [ + "# constants\n", + "MODEL_GPT = 'gpt-4o-mini'\n", + "MODEL_LLAMA = 'llama3.2'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a8d7923c-5f28-4c30-8556-342d7c8497c1", + "metadata": {}, + "outputs": [], + "source": [ + "# set up environment\n", + "load_dotenv(override=True)\n", + "api_key = os.getenv('OPENAI_API_KEY')\n", + "\n", + "if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n", + " print(\"API key looks good so far\")\n", + "else:\n", + " print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n", + "\n", + "openai = OpenAI()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3f0d0137-52b0-47a8-81a8-11a90a010798", + "metadata": {}, + "outputs": [], + "source": [ + "system_prompt = \"You are provided with a technical question. 
\\\n", + "You are answering by providing a quick explanation and giving some examples.\\n\"\n", + "\n", + "# here is the question; type over this to ask something new\n", + "question = \"\"\"\n", + "Please explain what this code does and why:\n", + "yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", + "\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "60ce7000-a4a5-4cce-a261-e75ef45063b4", + "metadata": {}, + "outputs": [], + "source": [ + "# Get gpt-4o-mini to answer, with streaming\n", + "def get_answer_gpt():\n", + " stream = openai.chat.completions.create(\n", + " model=MODEL_GPT,\n", + " messages=[\n", + " {\"role\": \"system\", \"content\": system_prompt},\n", + " {\"role\": \"user\", \"content\": question}\n", + " ],\n", + " stream=True\n", + " )\n", + "\n", + " response = \"\"\n", + " display_handle = display(Markdown(\"\"), display_id=True)\n", + " for chunk in stream:\n", + " response += chunk.choices[0].delta.content or ''\n", + " response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n", + " update_display(Markdown(response), display_id=display_handle.display_id)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538", + "metadata": {}, + "outputs": [], + "source": [ + "# Get Llama 3.2 to answer\n", + "def get_answer_ollama():\n", + " stream = ollama.generate(\n", + " MODEL_LLAMA,\n", + " question,\n", + " stream=True\n", + " )\n", + " \n", + " response = \"\"\n", + " display_handle = display(Markdown(\"\"), display_id=True)\n", + " for chunk in stream:\n", + " response += chunk['response'] or ''\n", + " response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n", + " update_display(Markdown(response), display_id=display_handle.display_id)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4a859eb1-23fa-40dd-ba91-b35084433a00", + "metadata": {}, + "outputs": [], + "source": [ + 
"get_answer_gpt()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1c73f046-da3a-49a5-8a74-4b8a86a9032a", + "metadata": {}, + "outputs": [], + "source": [ + "get_answer_ollama()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bea20f33-a710-44ab-9a4d-856db05e4201", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From be59cb5378fced72d0df0e91d6c69f0b233a5f44 Mon Sep 17 00:00:00 2001 From: Octavio Ortiz-Bosch Date: Tue, 4 Mar 2025 09:01:46 -0400 Subject: [PATCH 28/35] remove client.close() to allow multiple llm runs --- ...Week_1-Day 2-Article_Title_Generator.ipynb | 54 ++++++++++--------- 1 file changed, 28 insertions(+), 26 deletions(-) diff --git a/week1/community-contributions/Week_1-Day 2-Article_Title_Generator.ipynb b/week1/community-contributions/Week_1-Day 2-Article_Title_Generator.ipynb index ac33536..63688d9 100644 --- a/week1/community-contributions/Week_1-Day 2-Article_Title_Generator.ipynb +++ b/week1/community-contributions/Week_1-Day 2-Article_Title_Generator.ipynb @@ -9,7 +9,7 @@ "\n", "Summarization use-case in which the user provides an article, which the LLM will analyze to suggest an SEO-optimized title.\n", "\n", - "NOTES:\n", + "**NOTES**:\n", "\n", "1. This version does NOT support website scrapping. You must copy and paste the required article.\n", "2. The following models were configured:\n", @@ -17,7 +17,21 @@ " b. Llama llama3.2\n", " c. 
Deepseek deepseek-r1:1.5b\n", " It is possible to configure additional models by adding the new model to the MODELS dictionary and its\n", - " initialization to the CLIENTS dictionary." + " initialization to the CLIENTS dictionary. Then, call the model with --> ***answer =\n", + " get_answer('NEW_MODEL')***.\n", + "3. Users are encouraged to assess and rank the suggested titles using any headline analyzer tool online.\n", + " Example: https://www.isitwp.com/headline-analyzer/. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e", + "metadata": {}, + "outputs": [], + "source": [ + "# Confirming Llama is loaded\n", + "!ollama pull llama3.2" ] }, { @@ -43,18 +57,11 @@ "source": [ "# set environment variables for OpenAi\n", "load_dotenv(override=True)\n", - "api_key = os.getenv('OPENAI_API_KEY')\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e", - "metadata": {}, - "outputs": [], - "source": [ - "# Confirming Llama is loaded\n", - "!ollama pull llama3.2" + "api_key = os.getenv('OPENAI_API_KEY')\n", + "\n", + "# validate API Key\n", + "if not api_key:\n", + " raise ValueError(\"No API key was found! 
Please check the .env file.\")" ] }, { @@ -153,9 +160,6 @@ " model=MODELS[model],\n", " messages=messages\n", " )\n", - "\n", - " # closing LLM client connection\n", - " client.close()\n", " \n", " # return answer\n", " return response.choices[0].message.content\n", @@ -199,10 +203,10 @@ "metadata": {}, "outputs": [], "source": [ - "# get openAi answer\n", + "# get Llama answer\n", "answer = get_answer('LLAMA')\n", "\n", - "# display openAi answer\n", + "# display Llama answer\n", "display(Markdown(f\"### {MODELS['LLAMA']} Answer\\n\\n{answer}\" ))" ] }, @@ -221,10 +225,10 @@ "metadata": {}, "outputs": [], "source": [ - "# get openAi answer\n", + "# get Deepseek answer\n", "answer = get_answer('DEEPSEEK')\n", "\n", - "# display openAi answer\n", + "# display Deepseek answer\n", "display(Markdown(f\"### {MODELS['DEEPSEEK']} Answer\\n\\n{answer}\" ))" ] }, @@ -235,7 +239,7 @@ "source": [ "### Suggested future improvements\n", "\n", - "1. Add support for website scrapping to replace copy/pasting of articles.\n", + "1. Add website scraping support to replace copy/pasting of articles.\n", "2. Improve the system_prompt to provide specific SEO best practices to adopt during the title generation.\n", "3. Rephrase the system_prompt to ensure the model provides a single Title (not a list of suggestions). \n", "4. 
Add the logic that would allow each model to assess the recommendations from the different models and \n", @@ -245,12 +249,10 @@ { "cell_type": "code", "execution_count": null, - "id": "1af8260b-5ba1-4eeb-acd0-02de537b1bf4", + "id": "cf7403ac-d43b-4493-98bb-6fee94950cb0", "metadata": {}, "outputs": [], - "source": [ - "S" - ] + "source": [] } ], "metadata": { From e761d042a86751d4422cfcbfb85c6173a03127bd Mon Sep 17 00:00:00 2001 From: Octavio Ortiz-Bosch Date: Tue, 4 Mar 2025 09:06:32 -0400 Subject: [PATCH 29/35] Improved Article Generator --- ...k_1-Day 5-Article_Title_Generator-V2.ipynb | 472 ++++++++++++++++++ 1 file changed, 472 insertions(+) create mode 100644 week1/community-contributions/Week_1-Day 5-Article_Title_Generator-V2.ipynb diff --git a/week1/community-contributions/Week_1-Day 5-Article_Title_Generator-V2.ipynb b/week1/community-contributions/Week_1-Day 5-Article_Title_Generator-V2.ipynb new file mode 100644 index 0000000..0622a4d --- /dev/null +++ b/week1/community-contributions/Week_1-Day 5-Article_Title_Generator-V2.ipynb @@ -0,0 +1,472 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "603cd418-504a-4b4d-b1c3-be04febf3e79", + "metadata": {}, + "source": [ + "# Article Title Generator (V2)\n", + "\n", + "Summarization use-case in which the user provides an article, which the LLM will analyze to suggest an SEO-optimized title.\n", + "\n", + "**NOTES**:\n", + "\n", + "1. This version supports website scraping using Selenium (based on the code from **/week1/community-\n", + " contributions/day1-webscraping-selenium-for-javascript.ipynb** - Thanks for the contribution!)\n", + "2. Leverages streaming (OpenAI only).\n", + "3. The following models were configured:\\\n", + " \n", + " a. OpenAI gpt-4o-mini\\\n", + " b. Llama llama3.2\\\n", + " c. Deepseek deepseek-r1:1.5b\\\n", + "\n", + " It is possible to configure additional models by adding the new model to the MODELS dictionary and its\n", + " initialization to the CLIENTS dictionary. 
Then, call the model with --> ***answer =\n", + " get_answer('NEW_MODEL')***.\n", + "4. Improved system_prompt to provide specific SEO best practices to adopt during the title generation.\n", + "5. Rephrased the system_prompt to ensure the model provides a single Title (not a list of suggestions).\n", + "6. Includes a function to remove unneeded thinking/reasoning output from the model response (Deepseek). \n", + "7. Users are encouraged to assess and rank the suggested titles using any headline analyzer tool online.\n", + " Example: https://www.isitwp.com/headline-analyzer/. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "115004a8-747a-4954-9580-1ed548f80336", + "metadata": {}, + "outputs": [], + "source": [ + "# install required libraries if they were not part of the requirements.txt\n", + "!pip install selenium\n", + "!pip install undetected-chromedriver" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e", + "metadata": {}, + "outputs": [], + "source": [ + "# confirming Llama is loaded\n", + "!ollama pull llama3.2" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "279b0c00-9bb0-4c7f-9c6d-aa0b108274b9", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "import os\n", + "from dotenv import load_dotenv\n", + "from IPython.display import Markdown, display, update_display\n", + "from openai import OpenAI\n", + "import undetected_chromedriver as uc\n", + "from selenium.webdriver.common.by import By\n", + "from selenium.webdriver.support.ui import WebDriverWait\n", + "from selenium.webdriver.support import expected_conditions as EC\n", + "import time\n", + "from bs4 import BeautifulSoup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d4730d8d-3e20-4f3c-a4ff-ed2ac0a8aa27", + "metadata": {}, + "outputs": [], + "source": [ + "# set environment variables for OpenAi\n", + "load_dotenv(override=True)\n", + "api_key = 
os.getenv('OPENAI_API_KEY')\n", + "\n", + "# validate API Key\n", + "if not api_key:\n", + " raise ValueError(\"No API key was found! Please check the .env file.\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1abbb826-de66-498c-94d8-33369ad01885", + "metadata": {}, + "outputs": [], + "source": [ + "# constants\n", + "MODELS = { 'GPT': 'gpt-4o-mini', \n", + " 'LLAMA': 'llama3.2', \n", + " 'DEEPSEEK': 'deepseek-r1:1.5b'\n", + " }\n", + "\n", + "CLIENTS = { 'GPT': OpenAI(), \n", + " 'LLAMA': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama'),\n", + " 'DEEPSEEK': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama') \n", + " }\n", + "\n", + "# path to Chrome\n", + "CHROME_PATH = \"C:/Program Files/Google/Chrome/Application/chrome.exe\"" + ] + }, + { + "cell_type": "markdown", + "id": "6f490fe4-32d5-41f3-890d-ecf4e5e01dd4", + "metadata": {}, + "source": [ + "**Webcrawler** (based on the code from __/week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb__)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c2a1cf7a-044f-4a9c-b76e-8f112d384550", + "metadata": {}, + "outputs": [], + "source": [ + "class WebsiteCrawler:\n", + " def __init__(self, url, wait_time=20, chrome_path=None):\n", + " \"\"\"\n", + " Initialize the WebsiteCrawler using Selenium to scrape JavaScript-rendered content.\n", + " \"\"\"\n", + " self.url = url\n", + " self.wait_time = wait_time\n", + "\n", + " options = uc.ChromeOptions()\n", + " options.add_argument(\"--disable-gpu\")\n", + " options.add_argument(\"--no-sandbox\")\n", + " options.add_argument(\"--disable-dev-shm-usage\")\n", + " options.add_argument(\"--disable-blink-features=AutomationControlled\")\n", + " # options.add_argument(\"--headless=new\") # For Chrome >= 109 - unreliable on my end!\n", + " options.add_argument(\"start-maximized\")\n", + " options.add_argument(\n", + " \"user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) 
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", + " )\n", + " if chrome_path:\n", + " options.binary_location = chrome_path\n", + "\n", + " self.driver = uc.Chrome(options=options)\n", + "\n", + " try:\n", + " # Load the URL\n", + " self.driver.get(url)\n", + "\n", + " # Wait for Cloudflare or similar checks\n", + " time.sleep(10)\n", + "\n", + " # Ensure the main content is loaded\n", + " WebDriverWait(self.driver, self.wait_time).until(\n", + " EC.presence_of_element_located((By.TAG_NAME, \"main\"))\n", + " )\n", + "\n", + " # Extract the main content\n", + " main_content = self.driver.find_element(By.CSS_SELECTOR, \"main\").get_attribute(\"outerHTML\")\n", + "\n", + " # Parse with BeautifulSoup\n", + " soup = BeautifulSoup(main_content, \"html.parser\")\n", + " self.title = self.driver.title if self.driver.title else \"No title found\"\n", + " self.text = soup.get_text(separator=\"\\n\", strip=True)\n", + "\n", + " except Exception as e:\n", + " print(f\"Error occurred: {e}\")\n", + " self.title = \"Error occurred\"\n", + " self.text = \"\"\n", + "\n", + " finally:\n", + " self.driver.quit()\n" + ] + }, + { + "cell_type": "markdown", + "id": "592d8f86-fbf7-4b16-a69d-468030d72dc4", + "metadata": {}, + "source": [ + "### Prompts" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1914afad-dbd8-4c1f-8e68-80b0e5d743a9", + "metadata": {}, + "outputs": [], + "source": [ + "# system prompt\n", + "system_prompt = \"\"\"\n", + " You are an experienced SEO-focused copywriter. 
The user will provide an article, and your task is to analyze its content and generate a single, most effective, keyword-optimized title to maximize SEO performance.\n", + "\n", + "Instructions:\n", + "Ignore irrelevant content, such as the current title (if any), navigation menus, advertisements, or unrelated text.\n", + "Prioritize SEO best practices, considering:\n", + "Keyword relevance and search intent (informational, transactional, etc.).\n", + "Readability and engagement.\n", + "Avoiding keyword stuffing.\n", + "Ensure conciseness and clarity, keeping the title under 60 characters when possible for optimal SERP display.\n", + "Use a compelling structure that balances informativeness and engagement, leveraging formats like:\n", + "Listicles (\"10 Best Strategies for…\")\n", + "How-to guides (\"How to Boost…\")\n", + "Questions (\"What Is the Best Way to…\")\n", + "Power words to enhance click-through rates (e.g., \"Proven,\" \"Ultimate,\" \"Essential\").\n", + "Provide only one single, best title—do not suggest multiple options.\n", + "Limit the answer to the following Response Format (Markdown):\n", + "Optimized Title: [Provide only one title here]\n", + "Justification: [Explain why this title is effective for SEO]\n", + "\n", + " \"\"\"" + ] + }, + { + "cell_type": "markdown", + "id": "b0486867-6d38-4cb5-91d4-fb60952c3a9b", + "metadata": {}, + "source": [ + "**Provide the article URL and get its content for analysis**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ddd76319-13ce-480b-baa7-cab6a5c88168", + "metadata": {}, + "outputs": [], + "source": [ + "# article url - change to any other article URL\n", + "article_url = \"https://searchengineland.com/seo-trends-2025-447745\"\n", + "\n", + "# get article content\n", + "article = WebsiteCrawler(url=article_url, chrome_path=CHROME_PATH)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "176cfac7-5e6d-4d4a-a1c4-1b63b60de1f7", + "metadata": {}, + "outputs": [], + 
"source": [ + "# user prompt\n", + "user_prompt = \"\"\"\n", + "Below is the article to be analyzed; suggest a title. Limit the answer to the following Response Format (Markdown): \n", + "Optimized Title: [Provide only one title here]\n", + "Justification: [Explain why this title is effective for SEO].\n", + "\"\"\"\n", + "\n", + "user_prompt = f\"{user_prompt} {article.text}\"\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c45fc7d7-08c9-4e34-b427-b928a219bb94", + "metadata": {}, + "outputs": [], + "source": [ + "# message list\n", + "messages = [\n", + " {\"role\": \"system\", \"content\": system_prompt},\n", + " {\"role\": \"user\", \"content\": user_prompt}\n", + " ]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f67b881f-1040-4cf7-82c5-e85f4c0bd252", + "metadata": {}, + "outputs": [], + "source": [ + "# get suggested title\n", + "def get_title(model, **kwargs):\n", + " # stream if GPT\n", + " if 'stream' in kwargs:\n", + " response = CLIENTS[model].chat.completions.create(\n", + " model=MODELS[model],\n", + " messages=messages,\n", + " stream=kwargs['stream']\n", + " )\n", + " else:\n", + " response = CLIENTS[model].chat.completions.create(\n", + " model=MODELS[model],\n", + " messages=messages,\n", + " )\n", + "\n", + " return response\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8988d6ff-076a-4eae-baf4-26a8d6a2bc44", + "metadata": {}, + "outputs": [], + "source": [ + "# filter verbose output from the model response - like Deepseek reasoning/thinking output\n", + "def filter_response(response):\n", + " # Find last occurrence of 'Optimized Title:' to avoid displaying reasoning output\n", + " substring = 'Optimized Title:'\n", + " start = response.rfind(substring)\n", + " if start > -1:\n", + " filtered_response = response[start:]\n", + "\n", + " # insert line break to preserve format\n", + " filtered_response = filtered_response.replace(\"**Justification:**\", 
"\\n**Justification:**\")\n", + " \n", + " return filtered_response if start > -1 else response" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0e9e99cf-5e25-4a1f-ab11-a2255e318671", + "metadata": {}, + "outputs": [], + "source": [ + "# display suggested title\n", + "def display_title(model):\n", + " # get model-suggested title\n", + " title = get_title(model)\n", + " \n", + " display(Markdown(f\"### {model} (___{MODELS[model]}___) Answer\\n\\n_______\")) \n", + "\n", + " response = \"\"\n", + "\n", + " if model == 'GPT':\n", + " display_handle = display(Markdown(\"\"), display_id=True)\n", + " # for chunk in stream:\n", + " for chunk in get_title(model=model, stream=True):\n", + " response += chunk.choices[0].delta.content or ''\n", + " response = (\n", + " response.replace(\"```\",\"\")\n", + " .replace(\"markdown\", \"\")\n", + " .replace(\"Optimized Title:\", \"**Optimized Title:**\")\n", + " .replace(\"Justification:\", \"**Justification:**\")\n", + " )\n", + " update_display(Markdown(response), display_id=display_handle.display_id)\n", + " else:\n", + " response = get_title(model=model)\n", + " response = response.choices[0].message.content\n", + " response = filter_response(response)\n", + " response = (\n", + " response.replace(\"Optimized Title:\", \"**Optimized Title:**\")\n", + " .replace(\"Justification:\", \"**Justification:**\")\n", + " )\n", + " display(Markdown(response))" + ] + }, + { + "cell_type": "markdown", + "id": "947b42ed-5b43-486d-8af3-e5b671c1fd0e", + "metadata": {}, + "source": [ + "### Get OpenAI Suggested Title" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "eb6f66e3-ab99-4f76-9358-896cb43c1fa1", + "metadata": {}, + "outputs": [], + "source": [ + "# get and display openAi suggested title\n", + "display_title(model='GPT')" + ] + }, + { + "cell_type": "markdown", + "id": "70073ebf-a00a-416b-854d-642d450cd99b", + "metadata": {}, + "source": [ + "### Get Llama Suggested Title" + ] + }, + { + "cell_type": "code", 
"execution_count": null, + "id": "caa190bb-de5f-45cc-b671-5d62688f7b25", + "metadata": {}, + "outputs": [], + "source": [ + "# get and display Llama suggested title\n", + "display_title(model='LLAMA')" + ] + }, + { + "cell_type": "markdown", + "id": "811edc4f-20e2-482d-ac89-fae9d1b70bed", + "metadata": {}, + "source": [ + "### Get Deepseek Suggested Title" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "082628e4-ff4c-46dd-ae5f-76578eb017ad", + "metadata": {}, + "outputs": [], + "source": [ + "# get and display Deepseek title\n", + "display_title(model='DEEPSEEK')" + ] + }, + { + "cell_type": "markdown", + "id": "7fc404a6-3a91-4c09-89de-867d3d69b4b2", + "metadata": { + "jp-MarkdownHeadingCollapsed": true + }, + "source": [ + "### Observations\n", + "\n", + "1. **Selenium:** The headless option (__options.add_argument(\"--headless=new\")__), while ideal to speed up the scanning process, presented problems while scanning several websites (including openai.com and canva.com).\n", + "2. **Deepseek challenges:**\\\n", + " a. It always returns its verbose thinking/reasoning output, which, while helpful to understand how it works, is not always\n", + " required, such as in this example code. A new function (**filter_response**) was created to remove the additional output.\\\n", + " b. Its responses are unreliable, sometimes returning the required response format instead of the\n", + " actual response. For example, for the title, it may sometimes return:\n", + " \n", + " **Optimized Title:** \\[The user wants the suggested title here]\n", + " \n", + "### Suggested future improvements\n", + "\n", + "1. Add the logic that would allow each model to assess the recommendations from the different models and \n", + " select the best among these.\n", + "2. Add the logic to leverage an API (if available) that automatically assesses the suggested titles."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1af8260b-5ba1-4eeb-acd0-02de537b1bf4", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From ee68530aef40ee1206e2a233664963c245baf4b3 Mon Sep 17 00:00:00 2001 From: johnIT56 Date: Wed, 5 Mar 2025 10:41:24 +0900 Subject: [PATCH 30/35] Adding file to the community contributions folder --- .../day5_qwen2_whisper.ipynb | 186 ++++++++++++++++++ 1 file changed, 186 insertions(+) create mode 100644 week3/community-contributions/day5_qwen2_whisper.ipynb diff --git a/week3/community-contributions/day5_qwen2_whisper.ipynb b/week3/community-contributions/day5_qwen2_whisper.ipynb new file mode 100644 index 0000000..a6d86cd --- /dev/null +++ b/week3/community-contributions/day5_qwen2_whisper.ipynb @@ -0,0 +1,186 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "6fb7858c-8ea7-4dea-95ea-f5d7d5210b9a", + "metadata": {}, + "source": [ + "The following is a **Meeting Minutes Generator** using **Qwen2** and **OpenAI's open-source Whisper model for transcription**; check the following Colab link to see the outputs\n", + "\n", + "https://colab.research.google.com/drive/1_pqFmQXjOYG9Se4Zov4blIGeoYX6ViTJ?usp=sharing\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2103adb0-51f3-4240-bc5d-e27b6103cd8a", + "metadata": {}, + "outputs": [], + "source": [ + "import torch\n", + "from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": 
"47dba08d-5829-417c-9c6c-bdb35ca846a6", + "metadata": {}, + "outputs": [], + "source": [ + "AUDIO_MODEL = \"openai/whisper-medium\"\n", + "speech_model = AutoModelForSpeechSeq2Seq.from_pretrained(AUDIO_MODEL, torch_dtype=torch.float16, low_cpu_mem_usage=True, use_safetensors=True)\n", + "speech_model.to('cuda')\n", + "processor = AutoProcessor.from_pretrained(AUDIO_MODEL)\n", + "\n", + "pipe = pipeline(\n", + " \"automatic-speech-recognition\",\n", + " model=speech_model,\n", + " tokenizer=processor.tokenizer,\n", + " feature_extractor=processor.feature_extractor,\n", + " torch_dtype=torch.float16,\n", + " device='cuda',\n", + " return_timestamps=True #important if audio is more than 30sec\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c35d6c76-01a9-495f-ad4e-84c98e320750", + "metadata": {}, + "outputs": [], + "source": [ + "result = pipe(\"your-audio.mp3\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8fba2d46-b806-4bb3-b02d-e628343db986", + "metadata": {}, + "outputs": [], + "source": [ + "transcription = result[\"text\"]\n", + "print(transcription)" + ] + }, + { + "cell_type": "markdown", + "id": "1778c4db-d003-4fb9-a0d0-6cfa71e6208d", + "metadata": {}, + "source": [ + "## MODEL" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9eb579a7-b5de-4537-8ad9-e3117b24c2ff", + "metadata": {}, + "outputs": [], + "source": [ + "from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, BitsAndBytesConfig" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "4c632023-9b37-4c0d-b43a-190aacbbd80d", + "metadata": {}, + "outputs": [], + "source": [ + "QWEN2 = \"Qwen/Qwen2-7B-Instruct\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "175814b9-81b2-4f75-bf40-9ef7cac492cd", + "metadata": {}, + "outputs": [], + "source": [ + "quant_config = BitsAndBytesConfig(\n", + " load_in_4bit=True,\n", + " bnb_4bit_use_double_quant=True,\n", + " 
bnb_4bit_compute_dtype=torch.bfloat16,\n", + " bnb_4bit_quant_type=\"nf4\"\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8aaa160e-7c2b-4080-b24a-995df4469edd", + "metadata": {}, + "outputs": [], + "source": [ + "tokenizer = AutoTokenizer.from_pretrained(QWEN2)\n", + "# tokenizer.pad_token = tokenizer.eos_token\n", + "inputs = tokenizer.apply_chat_template(messages, return_tensors=\"pt\", add_generation_prompt=True).to(\"cuda\")\n", + "streamer = TextStreamer(tokenizer)\n", + "model = AutoModelForCausalLM.from_pretrained(QWEN2 , device_map=\"auto\", quantization_config=quant_config)\n", + "outputs = model.generate(inputs, max_new_tokens=2000, streamer=streamer)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "517443aa-d230-4248-88aa-b06efd8ee3cd", + "metadata": {}, + "outputs": [], + "source": [ + "response = tokenizer.decode(outputs[0])" + ] + }, + { + "cell_type": "markdown", + "id": "47562f76-fd35-4eb0-a399-8e8f1fa054c3", + "metadata": {}, + "source": [ + "## **For Markdown display**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1f77fea1-0920-46e5-9230-d0e8b9f69353", + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import Markdown, display, update_display" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "35ac81e2-f960-4705-aaca-2385d8aa12d6", + "metadata": {}, + "outputs": [], + "source": [ + "display(Markdown(response))" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.13.2" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From e3ddca2421842c7bd6204e892c492504dc6aca91 Mon Sep 17 00:00:00 2001 From: 
Mokhtar Khaled
Date: Thu, 6 Mar 2025 00:28:27 +0200
Subject: [PATCH 31/35] Mokh Week 2 Day 2 Contribution

---
 .../week2_day2_gradio/gradio_ui.py            | 129 ++++++++++++++++++
 .../week2_day2_gradio/json_handlers.py        |  60 ++++++++
 .../week2_day2_gradio/languages.json          |   6 +
 .../week2_day2_gradio/main.py                 |  15 ++
 .../week2_day2_gradio/ollama_utils.py         |  28 ++++
 .../week2_day2_gradio/readme.txt              |   1 +
 .../week2_day2_gradio/settings.json           |   1 +
 .../week2_day2_gradio/system_prompt.txt       |  17 +++
 8 files changed, 257 insertions(+)
 create mode 100644 week2/community-contributions/week2_day2_gradio/gradio_ui.py
 create mode 100644 week2/community-contributions/week2_day2_gradio/json_handlers.py
 create mode 100644 week2/community-contributions/week2_day2_gradio/languages.json
 create mode 100644 week2/community-contributions/week2_day2_gradio/main.py
 create mode 100644 week2/community-contributions/week2_day2_gradio/ollama_utils.py
 create mode 100644 week2/community-contributions/week2_day2_gradio/readme.txt
 create mode 100644 week2/community-contributions/week2_day2_gradio/settings.json
 create mode 100644 week2/community-contributions/week2_day2_gradio/system_prompt.txt

diff --git a/week2/community-contributions/week2_day2_gradio/gradio_ui.py b/week2/community-contributions/week2_day2_gradio/gradio_ui.py
new file mode 100644
index 0000000..0f3d1e4
--- /dev/null
+++ b/week2/community-contributions/week2_day2_gradio/gradio_ui.py
@@ -0,0 +1,129 @@
+import gradio as gr
+import requests
+import json
+from json_handlers import SettingsHandler, LanguagesHandler
+from ollama_utils import get_ollama_response
+
+
+class GradioUI:
+    def __init__(self, models: list, settings: SettingsHandler, languages: LanguagesHandler):
+        self.models = models
+        self.settings = settings
+        self.languages = languages
+
+        self.langs = self.languages.get_supported_languages()
+
+    def _translate_callback(self, text, model, translate_from, translate_to):
+        model_options = self.settings.get_advanced_settings()
+
+        full_response = ""
+        chunk_response = get_ollama_response(model, text, translate_from, translate_to, model_options)
+        for chunk in chunk_response:
+            full_response += chunk
+            yield full_response
+
+    def _temp_setting_callback(self, temp_dropdown_val):
+        self.settings.update_advanced_settings_param("temperature", temp_dropdown_val)
+
+    def _top_k_setting_callback(self, top_k_dropdown_val):
+        self.settings.update_advanced_settings_param("top_k", top_k_dropdown_val)
+
+    def _top_p_setting_callback(self, top_p_dropdown_val):
+        self.settings.update_advanced_settings_param("top_p", top_p_dropdown_val)
+
+    def _reset_to_default_callback(self):
+        temperature = 0.0
+        top_k = 40.0
+        top_p = 0.9
+        default_settings = {
+            "temperature": temperature,
+            "top_k": top_k,
+            "top_p": top_p
+        }
+        self.settings.update_advanced_settings(default_settings)
+        return temperature, top_k, top_p
+
+    def build_and_launch(self):
+        with gr.Blocks() as gui:
+            gr.Markdown("# LLM Translator")
+            with gr.Tab("Translate"):
+                with gr.Row():
+                    model_dropdown = gr.Dropdown(
+                        label="Model",
+                        info="Choose LLM Model",
+                        choices=self.models
+                    )
+                with gr.Group():
+                    with gr.Row():
+                        translate_from = gr.Dropdown(
+                            value=self.langs[0],
+                            show_label=False,
+                            choices=self.langs,
+                            interactive=True
+                        )
+                        translate_to = gr.Dropdown(
+                            value=self.langs[1],
+                            show_label=False,
+                            choices=self.langs,
+                            interactive=True
+                        )
+                    with gr.Row():
+                        translate_input = gr.Textbox(label="Your Input", lines=15, max_lines=15)
+                        translate_output = gr.Textbox(label="Translated", lines=15, max_lines=15)
+
+                btn = gr.Button("Translate", variant="primary")
+                btn.click(
+                    fn=self._translate_callback,
+                    inputs=[translate_input, model_dropdown, translate_from, translate_to],
+                    outputs=translate_output
+                )
+
+            with gr.Tab("Advanced Settings"):
+                temp_dropdown = gr.Number(
+                    value=self.settings.get_advanced_setting_param("temperature"),
+                    label="Temperature",
+                    info="This parameter controls how creative the model is\n0 means no creativity\n1 means very creative",
+                    minimum=0,
+                    maximum=1,
+                    step=0.1,
+                    interactive=True
+                )
+
+                gr.Markdown() # Used only for spacing
+
+                top_k_dropdown = gr.Number(
+                    value=self.settings.get_advanced_setting_param("top_k"),
+                    label="Top K",
+                    info="A higher value (e.g. 100) will give more diverse answers\nwhile a lower value (e.g. 10) will be more conservative.",
+                    minimum=1,
+                    maximum=200,
+                    step=1,
+                    interactive=True
+                )
+
+                gr.Markdown() # Used only for spacing
+
+                top_p_dropdown = gr.Number(
+                    value=self.settings.get_advanced_setting_param("top_p"),
+                    label="Top P",
+                    info="A higher value (e.g., 0.95) will lead to more diverse answers\nwhile a lower value (e.g., 0.5) will be more conservative.",
+                    minimum=0.1,
+                    maximum=1.0,
+                    step=0.1,
+                    interactive=True
+                )
+
+                gr.Markdown() # Used only for spacing
+
+                reset_btn = gr.Button("Reset to Default")
+                reset_btn.click(
+                    fn=self._reset_to_default_callback,
+                    outputs=[temp_dropdown, top_k_dropdown, top_p_dropdown]
+                )
+
+                temp_dropdown.change(self._temp_setting_callback, temp_dropdown)
+                top_k_dropdown.change(self._top_k_setting_callback, top_k_dropdown)
+                top_p_dropdown.change(self._top_p_setting_callback, top_p_dropdown)
+
+        gui.launch()
+
diff --git a/week2/community-contributions/week2_day2_gradio/json_handlers.py b/week2/community-contributions/week2_day2_gradio/json_handlers.py
new file mode 100644
index 0000000..2f018f0
--- /dev/null
+++ b/week2/community-contributions/week2_day2_gradio/json_handlers.py
@@ -0,0 +1,60 @@
+import json
+
+
+class SettingsHandler:
+    def __init__(self, json_filename):
+        self.json_filename = json_filename
+        self.advanced_settings = self.load_current_settings()
+
+    def load_current_settings(self) -> dict:
+        with open(self.json_filename, "r") as file:
+            settings_dict = json.load(file)
+
+        advanced_settings = settings_dict["Advanced Settings"]
+
+        return advanced_settings
+
+    def update_advanced_settings(self, updated_advanced_settings: dict):
+        new_dict = {
+            "Advanced Settings": 
updated_advanced_settings + } + + print(new_dict) + + with open(self.json_filename, "w") as file: + json.dump(new_dict, file) + + self.advanced_settings = updated_advanced_settings + + def update_advanced_settings_param(self, key: str, new_val): + if self.get_advanced_setting_param(key) is not None: + update_advanced_settings_dict = self.advanced_settings + update_advanced_settings_dict[key] = new_val + self.update_advanced_settings(update_advanced_settings_dict) + + def get_advanced_settings(self): + return self.advanced_settings + + def get_advanced_setting_param(self, key: str): + return self.advanced_settings.get(key) + + +class LanguagesHandler: + def __init__(self, json_filename): + self.json_filename = json_filename + self.langs = self.load_languages() + + def load_languages(self) -> list: + with open(self.json_filename, "r") as file: + langs = json.load(file) + + if type(langs) != list: + raise RuntimeError("Languages must be provided as lists") + if len(langs) < 2: + raise RuntimeError("At least 2 languages must be supported") + + return langs + + def get_supported_languages(self): + return self.langs + diff --git a/week2/community-contributions/week2_day2_gradio/languages.json b/week2/community-contributions/week2_day2_gradio/languages.json new file mode 100644 index 0000000..ae5034c --- /dev/null +++ b/week2/community-contributions/week2_day2_gradio/languages.json @@ -0,0 +1,6 @@ +[ + "German", + "English", + "Spanish", + "French" +] \ No newline at end of file diff --git a/week2/community-contributions/week2_day2_gradio/main.py b/week2/community-contributions/week2_day2_gradio/main.py new file mode 100644 index 0000000..f63da93 --- /dev/null +++ b/week2/community-contributions/week2_day2_gradio/main.py @@ -0,0 +1,15 @@ +from json_handlers import SettingsHandler, LanguagesHandler +from ollama_utils import get_downloaded_models +from gradio_ui import GradioUI + +settings_json = "settings.json" +languages_json = "languages.json" + +if __name__ == 
"__main__":
+    settings = SettingsHandler(settings_json)
+    languages = LanguagesHandler(languages_json)
+
+    models = get_downloaded_models()
+
+    gradio_ui = GradioUI(models, settings, languages)
+    gradio_ui.build_and_launch()
diff --git a/week2/community-contributions/week2_day2_gradio/ollama_utils.py b/week2/community-contributions/week2_day2_gradio/ollama_utils.py
new file mode 100644
index 0000000..066b0ca
--- /dev/null
+++ b/week2/community-contributions/week2_day2_gradio/ollama_utils.py
@@ -0,0 +1,28 @@
+import requests
+import json
+import ollama
+
+
+def get_downloaded_models():
+    models_raw = requests.get("http://localhost:11434/api/tags").content
+    models_dict = json.loads(models_raw)
+    models = [model["name"] for model in models_dict["models"]]
+    return models
+
+def get_ollama_response(model, prompt, translate_from, translate_to, options):
+    def get_system_prompt():
+        with open('system_prompt.txt', 'r') as file:
+            system_prompt = file.read()
+        return system_prompt
+
+    system_prompt = get_system_prompt()
+    user_prompt = f"Translate from {translate_from} to {translate_to}: {prompt}"
+    messages = [
+        {"role": "system", "content": system_prompt},
+        {"role": "user", "content": user_prompt}
+    ]
+
+    response = ollama.chat(model, messages, options=options, stream=True)
+    for chunk in response:
+
+        yield chunk["message"]["content"]
diff --git a/week2/community-contributions/week2_day2_gradio/readme.txt b/week2/community-contributions/week2_day2_gradio/readme.txt
new file mode 100644
index 0000000..9d14c5a
--- /dev/null
+++ b/week2/community-contributions/week2_day2_gradio/readme.txt
@@ -0,0 +1 @@
+Just run the main.py script after activating the conda environment 'llms'
\ No newline at end of file
diff --git a/week2/community-contributions/week2_day2_gradio/settings.json b/week2/community-contributions/week2_day2_gradio/settings.json
new file mode 100644
index 0000000..ecb5fc4
--- /dev/null
+++ b/week2/community-contributions/week2_day2_gradio/settings.json
@@ -0,0 +1,1 
@@
+{"Advanced Settings": {"temperature": 0.0, "top_k": 40.0, "top_p": 0.9}}
\ No newline at end of file
diff --git a/week2/community-contributions/week2_day2_gradio/system_prompt.txt b/week2/community-contributions/week2_day2_gradio/system_prompt.txt
new file mode 100644
index 0000000..fb64c6a
--- /dev/null
+++ b/week2/community-contributions/week2_day2_gradio/system_prompt.txt
@@ -0,0 +1,17 @@
+You are a translator.
+You should translate the prompts according to the following criteria:
+- Your responses should be clear and straight to the point.
+- Your response should have a good structure and good linguistic features.
+- You should translate the sentence as it is. Do not add extra sentences or phrases of your own.
+- Do not answer questions; even if the prompt is a question, you should translate the question and not answer it.
+- If you do not understand the prompt, do not say that you do not understand, just echo the prompt.
+- Do not include in the response phrases like 'here is the translation' or any phrases like that.
+Here are some examples of good responses:
+<
+Prompt: 'Translate from French to English: Hier, j'ai passé toute la journée à explorer la ville avec mes amis, et nous avons visité plusieurs musées avant de nous arrêter pour un délicieux dîner dans un restaurant local.'
+Response: 'Yesterday, I spent the whole day exploring the city with my friends, and we visited several museums before stopping for a delicious dinner at a local restaurant.'
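The `_translate_callback` in `gradio_ui.py` above streams its output by accumulating chunks and yielding the running total, which is what lets the Gradio textbox re-render the full translation as it grows. A minimal, self-contained sketch of that pattern follows; the `fake_stream` generator is a stand-in for the chunked `ollama.chat(..., stream=True)` response, not part of the contribution itself:

```python
def fake_stream():
    # Stand-in for the chunks yielded by ollama.chat(..., stream=True)
    for piece in ["Guten ", "Morgen", "!"]:
        yield piece

def stream_cumulative(chunks):
    # Same shape as the Gradio callback: append each chunk and yield
    # the full text so far, so the UI shows the whole answer each time.
    full_response = ""
    for chunk in chunks:
        full_response += chunk
        yield full_response

print(list(stream_cumulative(fake_stream())))
# → ['Guten ', 'Guten Morgen', 'Guten Morgen!']
```

Yielding the cumulative string (rather than each delta) is the contract Gradio expects from a generator-backed output component.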
+>
+<
+Prompt: 'Translate from Spanish to English: vdaiughadvlkj'
+Response: 'vdaiughadvlkj'
+>
From 42ccc6334beef5b761e4894942386975e38623d8 Mon Sep 17 00:00:00 2001
From: tithi1610
Date: Thu, 6 Mar 2025 21:22:19 +0530
Subject: [PATCH 32/35] Added my website summarizer using qwen2.5

---
 .../day-1-ollama-app.ipynb                    |   4 +-
 .../website-summarizer-by-tithi.ipynb         | 229 ++++++++++++++++++
 2 files changed, 231 insertions(+), 2 deletions(-)
 create mode 100644 week1/community-contributions/website-summarizer-by-tithi.ipynb

diff --git a/week1/community-contributions/day-1-ollama-app.ipynb b/week1/community-contributions/day-1-ollama-app.ipynb
index 80b8197..c1d219a 100644
--- a/week1/community-contributions/day-1-ollama-app.ipynb
+++ b/week1/community-contributions/day-1-ollama-app.ipynb
@@ -234,7 +234,7 @@
 ],
 "metadata": {
  "kernelspec": {
-  "display_name": "llms",
+  "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
@@ -252,5 +252,5 @@
  }
 },
 "nbformat": 4,
- "nbformat_minor": 2
+ "nbformat_minor": 4
}
diff --git a/week1/community-contributions/website-summarizer-by-tithi.ipynb b/week1/community-contributions/website-summarizer-by-tithi.ipynb
new file mode 100644
index 0000000..4e62ef0
--- /dev/null
+++ b/week1/community-contributions/website-summarizer-by-tithi.ipynb
@@ -0,0 +1,229 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "id": "29ddd15d-a3c5-4f4e-a678-873f56162724",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import requests\n",
+    "from bs4 import BeautifulSoup\n",
+    "from IPython.display import Markdown, display\n",
+    "import ollama"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "id": "479ff514-e8bd-4985-a572-2ea28bb4fa40",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "pulling manifest\n",
+      "pulling 2bada8a74506... 100% 4.7 GB\n",
+      "pulling 66b9ea09bd5b... 100% 68 B\n",
+      "pulling eb4402837c78... 100% 1.5 KB\n",
+      "pulling 832dd9e00a68... 100% 11 KB\n",
+      "pulling 2f15b3218f05... 100% 487 B\n",
+      "verifying sha256 digest\n",
+      "writing manifest\n",
+      "success\n"
+     ]
+    }
+   ],
+   "source": [
+    "# Let's just make sure the model is loaded\n",
+    "\n",
+    "!ollama pull qwen2.5\n",
+    "MODEL = \"qwen2.5\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "id": "6de38216-6d1c-48c4-877b-86d403f4e0f8",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "headers = {\n",
+    " \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
+    "}\n",
+    "\n",
+    "class Website:\n",
+    "\n",
+    "    def __init__(self, url):\n",
+    "        \"\"\"\n",
+    "        Create this Website object from the given url using the BeautifulSoup library\n",
+    "        \"\"\"\n",
+    "        self.url = url\n",
+    "        response = requests.get(url, headers=headers)\n",
+    "        soup = BeautifulSoup(response.content, 'html.parser')\n",
+    "        self.title = soup.title.string if soup.title else \"No title found\"\n",
+    "        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
+    "            irrelevant.decompose()\n",
+    "        self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "id": "a531b8f6-d4f8-4140-b54d-bcf280bd7a99",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
+    "and provides a short summary, ignoring text that might be navigation related. \\\n",
+    "Respond in markdown.\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "id": "6b46ff43-4817-431e-8335-8d2cc9957910",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def user_prompt_for(website):\n",
+    "    user_prompt = f\"You are looking at a website titled {website.title}\"\n",
+    "    user_prompt += \"\\nThe contents of this website are as follows; \\\n",
+    "please provide a summary of this website in markdown. 
\\\n",
+    "If it includes news or announcements, then summarize these too (only if they are present).\\n\\n\"\n",
+    "    user_prompt += website.text\n",
+    "    return user_prompt"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "id": "13a3a001-5d91-4269-ab60-493bbf35bda4",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def messages_for(website):\n",
+    "    return [\n",
+    "        {\"role\": \"system\", \"content\": system_prompt},\n",
+    "        {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
+    "    ]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "id": "c61ad738-9395-415d-b88b-d4a70d4331aa",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def summarize(url):\n",
+    "    website = Website(url)\n",
+    "    response = ollama.chat(model=MODEL, messages=messages_for(website))\n",
+    "    return response['message']['content']"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 17,
+   "id": "bdbcfa75-980b-4542-872d-af8b20546b5d",
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'```markdown\\n# Tailwind CSS Cheat Sheet Summary\\n\\nThis website serves as a comprehensive guide for developers using Tailwind CSS, providing quick access to commonly used utility classes and configurations. 
The content is organized into sections such as typography, layout, colors, shadows, and more, making it easy for users to find specific styles or settings.\\n\\n- **Typography**: Includes various font sizes, weights, line heights, and other typographic utilities.\\n- **Layout**: Features columns, grid, flexbox, spacing, and responsive design utilities.\\n- **Colors**: Lists predefined color palettes and utility classes for color manipulation.\\n- **Shadows**: Provides options to add depth and dimension to elements through shadow effects.\\n- **Other Sections**: Covers forms, animations, and more, with concise descriptions and examples.\\n\\nThe site is designed to be a one-stop reference tool, allowing developers to quickly apply Tailwind CSS styles without having to consult the official documentation every time.\\n```'" + ] + }, + "execution_count": 17, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "summarize(\"https://www.creative-tim.com/twcomponents/cheatsheet/\")" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "id": "817e6f73-1abe-4f79-9010-f4264e0f324a", + "metadata": {}, + "outputs": [], + "source": [ + "def display_summary(url):\n", + " summary = summarize(url)\n", + " display(Markdown(summary))" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "id": "504c19cf-9add-4a78-a028-fe2710e0604d", + "metadata": {}, + "outputs": [ + { + "data": { + "text/markdown": [ + "# Summary\n", + "\n", + "**Home Page:**\n", + "- The website is titled \"Home - Edward Donner\" and introduces Ed, who enjoys coding, experimenting with large language models (LLMs), DJing, and engaging in Hacker News.\n", + "- He co-founded Nebula.io, an AI company focusing on helping people discover their potential. 
The platform uses proprietary LLMs for talent discovery and has been patented.\n", + "\n", + "**News/Announcements:**\n", + "- **January 23, 2025:** LLM Workshop – Hands-on with Agents\n", + "- **December 21, 2024:** Welcome, SuperDataScientists!\n", + "- **November 13, 2024:** Mastering AI and LLM Engineering – Resources\n", + "- **October 16, 2024:** From Software Engineer to AI Data Scientist – resources\n", + "\n", + "**Connect Section:**\n", + "- Provides ways to get in touch with Ed, including email, LinkedIn, Twitter, Facebook, and a newsletter subscription form.\n", + "\n", + "**Additional Content:**\n", + "- **Connect Four:** Describes it as an arena where LLMs compete against each other.\n", + "- **About Page:** Further details about Ed's background and Nebula.io." + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "display_summary('https://edwarddonner.com')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "20d621cb-6bfb-41a6-bd98-a51ef0a8b158", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 286c0ac8123583a53075d2186e222007519f8904 Mon Sep 17 00:00:00 2001 From: hun-bot Date: Fri, 7 Mar 2025 13:52:07 +0900 Subject: [PATCH 33/35] Add my contribution to community-contributions --- .../Week1-Day2-Ollama-Exercise.ipynb | 244 +++++++++++++++++- 1 file changed, 238 insertions(+), 6 deletions(-) diff --git a/week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb b/week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb index 
4c3e3ab..3fba85d 100644 --- a/week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb +++ b/week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb @@ -6,22 +6,109 @@ "metadata": {}, "source": [ "# First Project\n", - "\n", - "Day1" + "Ollama -> Summary\n", + "huggingface_hub -> \"facebook/m2m100_418M\" for translation" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5fb79a20-a455-4d27-91a1-91958af786c1", + "metadata": {}, + "outputs": [], + "source": [ + "!pip install transformers datasets torch\n", + "!pip install huggingface_hub" ] }, { "cell_type": "code", - "execution_count": 3, + "execution_count": null, "id": "e95ac7f2-5192-4f83-acf3-61df30cd3109", "metadata": {}, "outputs": [], "source": [ "# imports\n", - "\n", "import requests\n", "from bs4 import BeautifulSoup\n", - "from IPython.display import Markdown, display" + "import json\n", + "import ollama" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "12276d74-0e79-4e66-9135-1c9d1a80b943", + "metadata": {}, + "outputs": [], + "source": [ + "class Website:\n", + " def __init__(self, url):\n", + " self.url = url\n", + " response = requests.get(url)\n", + " soup = BeautifulSoup(response.content, 'html.parser')\n", + " self.title = soup.title.string if soup.title else \"No title found\"\n", + " for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", + " irrelevant.decompose()\n", + " self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", + "\n", + "huggingface_url = \"https://huggingface.co/learn/ml-for-3d-course\"\n", + "huggingface_website = Website(huggingface_url)\n", + "\n", + "huggingface_data = {\n", + " \"title\": huggingface_website.title,\n", + " \"text\": huggingface_website.text\n", + "}\n", + "print(huggingface_data)\n", + "\n", + "with open('ml_for_3d_course_data.json', 'w') as f:\n", + " json.dump(huggingface_data, f)\n" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "id": 
"7d74c85c-3e09-4514-bde4-4cafc4910c52", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "model='llama3.2:latest' created_at='2025-03-07T04:47:23.9329208Z' done=True done_reason='stop' total_duration=31844916400 load_duration=82994800 prompt_eval_count=509 prompt_eval_duration=264000000 eval_count=139 eval_duration=31493000000 message=Message(role='assistant', content=\"The text is a welcome page for Hugging Face's Machine Learning for 3D Course, created by developer advocate Dylan Ebert (IndividualKex). The course provides an overview of machine learning for 3D, including recent developments and how to build your own generative 3D demo. Key topics covered in the course include:\\n\\n* Introduction to 3D\\n* Multi-view diffusion\\n* Gaussian Splatting\\n* Meshes\\n\\nThe course is available on Hugging Face's channel and GitHub, with redundant content presented in video, text, and code formats. The page also includes links for joining the Discord community, asking questions, sharing work, and connecting with others.\", images=None, tool_calls=None)\n", + "Summary Text: The text is a welcome page for Hugging Face's Machine Learning for 3D Course, created by developer advocate Dylan Ebert (IndividualKex). The course provides an overview of machine learning for 3D, including recent developments and how to build your own generative 3D demo. Key topics covered in the course include:\n", + "\n", + "* Introduction to 3D\n", + "* Multi-view diffusion\n", + "* Gaussian Splatting\n", + "* Meshes\n", + "\n", + "The course is available on Hugging Face's channel and GitHub, with redundant content presented in video, text, and code formats. 
The page also includes links for joining the Discord community, asking questions, sharing work, and connecting with others.\n" + ] + } + ], + "source": [ + "# huggingface_data 'text' value\n", + "huggingface_text = huggingface_data['text']\n", + "\n", + "# Summary\n", + "response_summary = ollama.chat(model=\"llama3.2:latest\", messages=[{\"role\": \"user\", \"content\": f\"Summarize the following text: {huggingface_text}\"}])\n", + "print(response_summary)\n", + "\n", + "# print summary\n", + "summary_huggingface_text = response_summary.message['content']\n", + "print(\"Summary Text:\", summary_huggingface_text)\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": 42, + "id": "d13764d5-cb76-46c5-bbe6-d132b31a9ea6", + "metadata": {}, + "outputs": [], + "source": [ + "# HuggingFace Translation" ] }, { @@ -30,7 +117,152 @@ "id": "08405038-4115-487f-9efc-de58572453c1", "metadata": {}, "outputs": [], - "source": [] + "source": [ + "class Website:\n", + " url: str\n", + " title: str\n", + " text: str\n", + "\n", + " def __init__(self, url):\n", + " self.url = url\n", + " response = requests.get(url)\n", + " soup = BeautifulSoup(response.content, 'html.parser')\n", + " self.title = soup.title.string if soup.title else \"No title found\"\n", + " for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", + " irrelevant.decompose()\n", + " self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", + "\n", + "url = \"https://huggingface.co/learn/ml-for-3d-course\"\n", + "website = Website(url)\n", + "print(website.title) \n", + "print(website.text[:1000])\n", + "\n", + "data = {\n", + " \"title\": website.title,\n", + " \"text\": website.text\n", + "}\n", + "\n", + "with open('ml_for_3d_course_data.json', 'w') as f:\n", + " json.dump(data, f)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0632352f-4b16-4125-83bf-f3cc3aabd659", + "metadata": {}, + "outputs": [], + "source": [ + "print(data)" + ] + }, + { + 
"cell_type": "code", + "execution_count": null, + "id": "a85f8625-725d-4d7f-8cb7-8da4276f81cf", + "metadata": {}, + "outputs": [], + "source": [ + "!pip install sacremoses" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c800cea4-f4a4-4e41-9637-31ff11afb256", + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer\n", + "\n", + "# Load the M2M100 model and tokenizer\n", + "model_name = \"facebook/m2m100_418M\"\n", + "model = M2M100ForConditionalGeneration.from_pretrained(model_name)\n", + "tokenizer = M2M100Tokenizer.from_pretrained(model_name)\n", + "\n", + "# Load the saved JSON file\n", + "with open('ml_for_3d_course_data.json', 'r') as f:\n", + " data = json.load(f)\n", + "\n", + "# Extract text from the loaded data\n", + "text = data[\"text\"]\n", + "\n", + "# Set the source language to English and target language to Korean\n", + "source_lang = \"en\"\n", + "target_lang = \"ko\"\n", + "\n", + "# Set the language for tokenizer (important for M2M100)\n", + "tokenizer.src_lang = source_lang\n", + "tokenizer.tgt_lang = target_lang\n", + "\n", + "# Split text into smaller chunks if it's too large\n", + "# This step ensures we don't exceed the model's maximum length (512 tokens)\n", + "max_input_length = 512\n", + "chunks = [text[i:i+max_input_length] for i in range(0, len(text), max_input_length)]\n", + "\n", + "print(chunks)\n", + "# Initialize a list to hold the translated text\n", + "translated_chunks = []\n", + "\n", + "# Iterate through each chunk and translate it\n", + "for chunk in chunks:\n", + " # Tokenize the chunk\n", + " encoded = tokenizer(chunk, return_tensors=\"pt\", padding=True, truncation=True, max_length=512)\n", + "\n", + " # Generate translation from the model, forcing the output to be in Korean\n", + " generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id(target_lang), max_length=512)\n", + "\n", + " 
# Decode the translated tokens to text\n",
+    "    translated_text = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]\n",
+    "    translated_chunks.append(translated_text)\n",
+    "\n",
+    "# Combine all translated chunks back together\n",
+    "final_translated_text = ' '.join(translated_chunks)\n",
+    "print(\"Translated Text:\", final_translated_text)\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "ffe0f264-a588-422f-a6e1-b60504d1e02c",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import json\n",
+    "import requests\n",
+    "\n",
+    "# Set the Ollama API URL\n",
+    "ollama_url = \"http://localhost:11411/v1/models/facebook/m2m100_418M/generate\"\n",
+    "\n",
+    "# Load the saved JSON file\n",
+    "with open('ml_for_3d_course_data.json', 'r') as f:\n",
+    "    data = json.load(f)\n",
+    "\n",
+    "# Extract the text\n",
+    "course_text = data[\"text\"]\n",
+    "\n",
+    "# Set the source and target languages for translation\n",
+    "source_language = \"en\"\n",
+    "target_language = \"ko\"\n",
+    "\n",
+    "# Prepare the payload\n",
+    "payload = {\n",
+    "    \"input_text\": course_text,\n",
+    "    \"src_lang\": source_language,\n",
+    "    \"tgt_lang\": target_language\n",
+    "}\n",
+    "\n",
+    "# Call the API\n",
+    "response = requests.post(ollama_url, json=payload)\n",
+    "\n",
+    "# Check the response\n",
+    "if response.status_code == 200:\n",
+    "    translated_course_text = response.json().get(\"translated_text\", \"Translation failed\")\n",
+    "    print(\"Translated Course Text:\", translated_course_text)\n",
+    "else:\n",
+    "    print(f\"Error {response.status_code}: {response.text}\")\n"
+   ]
+  }
 ],
 "metadata": {
From 32e9b7b1770cfb8562fd4379b26b1e1c3efbed1a Mon Sep 17 00:00:00 2001
From: hun-bot
Date: Fri, 7 Mar 2025 14:06:42 +0900
Subject: [PATCH 34/35] Clear Output

---
 .../Week1-Day2-Ollama-Exercise.ipynb          | 22 +++----------------
 1 file changed, 3 insertions(+), 19 deletions(-)

diff --git a/week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb b/week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb
index 3fba85d..4a12cbb 100644
--- 
a/week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb +++ b/week1/community-contributions/Week1-Day2-Ollama-Exercise.ipynb @@ -67,26 +67,10 @@ }, { "cell_type": "code", - "execution_count": 41, + "execution_count": null, "id": "7d74c85c-3e09-4514-bde4-4cafc4910c52", "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "model='llama3.2:latest' created_at='2025-03-07T04:47:23.9329208Z' done=True done_reason='stop' total_duration=31844916400 load_duration=82994800 prompt_eval_count=509 prompt_eval_duration=264000000 eval_count=139 eval_duration=31493000000 message=Message(role='assistant', content=\"The text is a welcome page for Hugging Face's Machine Learning for 3D Course, created by developer advocate Dylan Ebert (IndividualKex). The course provides an overview of machine learning for 3D, including recent developments and how to build your own generative 3D demo. Key topics covered in the course include:\\n\\n* Introduction to 3D\\n* Multi-view diffusion\\n* Gaussian Splatting\\n* Meshes\\n\\nThe course is available on Hugging Face's channel and GitHub, with redundant content presented in video, text, and code formats. The page also includes links for joining the Discord community, asking questions, sharing work, and connecting with others.\", images=None, tool_calls=None)\n", - "Summary Text: The text is a welcome page for Hugging Face's Machine Learning for 3D Course, created by developer advocate Dylan Ebert (IndividualKex). The course provides an overview of machine learning for 3D, including recent developments and how to build your own generative 3D demo. Key topics covered in the course include:\n", - "\n", - "* Introduction to 3D\n", - "* Multi-view diffusion\n", - "* Gaussian Splatting\n", - "* Meshes\n", - "\n", - "The course is available on Hugging Face's channel and GitHub, with redundant content presented in video, text, and code formats. 
The page also includes links for joining the Discord community, asking questions, sharing work, and connecting with others.\n" - ] - } - ], + "outputs": [], "source": [ "# huggingface_data 'text' value\n", "huggingface_text = huggingface_data['text']\n", @@ -103,7 +87,7 @@ }, { "cell_type": "code", - "execution_count": 42, + "execution_count": null, "id": "d13764d5-cb76-46c5-bbe6-d132b31a9ea6", "metadata": {}, "outputs": [], From 6465267e0561ea79a2522261c18d666cd4f9b0b8 Mon Sep 17 00:00:00 2001 From: Martijn van de Rijdt Date: Thu, 13 Mar 2025 11:26:41 -0400 Subject: [PATCH 35/35] Update day1.ipynb correction in the deepseek response length check --- week2/day1.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/week2/day1.ipynb b/week2/day1.ipynb index 7371667..5768371 100644 --- a/week2/day1.ipynb +++ b/week2/day1.ipynb @@ -485,7 +485,7 @@ "\n", "print(reasoning_content)\n", "print(content)\n", - "print(\"Number of words:\", len(reply.split(\" \")))" + "print(\"Number of words:\", len(content.split(\" \")))" ] }, {
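The last patch above corrects the word count to use `content` (the variable the cell actually defines) rather than the stale `reply`. One caveat worth knowing about `split(" ")`, sketched below with an illustrative string of my own: splitting on a literal space produces empty strings wherever spaces repeat, so the no-argument `str.split()`, which collapses any run of whitespace, is usually the safer word counter.

```python
content = "The  quick brown  fox jumps"

naive = len(content.split(" "))   # empty strings appear at each double space
robust = len(content.split())     # no-arg split collapses whitespace runs

print(naive, robust)
# → 7 5
```

For the single-spaced model replies in the notebook the two counts agree, so the patch's `content.split(" ")` is fine in practice.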