
Merge branch 'main' of github.com:ed-donner/llm_engineering

pull/300/head
Edward Donner · 1 month ago
commit 2c6870cf58
  1. week1/community-contributions/day1-master-chef.ipynb (+611)
  2. week1/community-contributions/day1_michelin_start_cook.ipynb (+87)
  3. week1/community-contributions/tweet-generate-from-alt-text.ipynb (+632)
  4. week1/community-contributions/week1-day1-stackoverflow-to-tutorial-summarization.ipynb (+464)
  5. week1/community-contributions/week1-day1_2-bedtime-storyteller.py (+63)
  6. week2/community-contributions/brochure_links_tone.ipynb (+567)
  7. week2/community-contributions/day4_compare_prices.ipynb (+275)
  8. week2/community-contributions/week2day4_budget_trip_planner_using_gemini.ipynb (+167)
  9. week3/community-contributions/day5_openai_whisper_llamainstruct (+78)
  10. week3/community-contributions/synthetic_dataset_generator_deepseek_qwen_llama.ipynb (+402)
  11. week4/community-contributions/code_documentation_generator.ipynb (+433)
  12. week5/community-contributions/day3_vector_embeddings_from_text_file.ipynb (+235)
  13. week5/community-contributions/day5_vectorstore_openai.ipynb (+283)
  14. week5/community-contributions/markdown_knowledge_worker.ipynb (+359)
  15. week5/community-contributions/ui_markdown_knowledge_worker.ipynb (+353)

week1/community-contributions/day1-master-chef.ipynb

@@ -0,0 +1,611 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# YOUR FIRST LAB\n",
"### Please read this section. This is valuable to get you prepared, even if it's a long read -- it's important stuff.\n",
"\n",
"## Your first Frontier LLM Project\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n",
"\n",
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
"\n",
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n",
"\n",
"## If you're new to Jupyter Lab\n",
"\n",
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n",
"\n",
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
"\n",
"## If you're new to the Command Line\n",
"\n",
"Please see these excellent guides: [Command line on PC](https://chatgpt.com/share/67b0acea-ba38-8012-9c34-7a2541052665) and [Command line on Mac](https://chatgpt.com/canvas/shared/67b0b10c93a081918210723867525d2b). \n",
"\n",
"## If you'd prefer to work in IDEs\n",
"\n",
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n",
"If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n",
"\n",
"## If you'd like to brush up your Python\n",
"\n",
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n",
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n",
"\n",
"## I am here to help\n",
"\n",
"If you have any problems at all, please do reach out. \n",
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!) \n",
"And this is new to me, but I'm also trying out X/Twitter at [@edwarddonner](https://x.com/edwarddonner) - if you're on X, please show me how it's done 😂 \n",
"\n",
"## More troubleshooting\n",
"\n",
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n",
"\n",
"## If this is old hat!\n",
"\n",
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Please read - important note</h2>\n",
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, <b>after</b> watching the lecture. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Treat these labs as a resource</h2>\n",
" <span style=\"color:#f71;\">I push updates to the code regularly. When people ask questions or have problems, I incorporate it in the code, adding more examples or improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here; but in addition, I've added more steps and better explanations, and occasionally added new models like DeepSeek. Consider this like an interactive book that accompanies the lectures.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n",
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# If you get an error running this cell, then please head over to the troubleshooting notebook!"
]
},
{
"cell_type": "markdown",
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
"metadata": {},
"source": [
"# Connecting to OpenAI\n",
"\n",
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n",
"\n",
"## Troubleshooting if you have problems:\n",
"\n",
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n",
"\n",
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n",
"\n",
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
"\n",
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"\n",
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions"
]
},
{
"cell_type": "markdown",
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91",
"metadata": {},
"source": [
"# Let's make a quick call to a Frontier model to get started, as a preview!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n",
"\n",
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "2aa190e5-cb31-456a-96cc-db109919cd78",
"metadata": {},
"source": [
"## OK onwards with our first project"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.title)\n",
"print(ed.text)"
]
},
{
"cell_type": "markdown",
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
"metadata": {},
"source": [
"## Types of prompts\n",
"\n",
"You may know this already - but if not, you will get very familiar with it!\n",
"\n",
"Models like GPT-4o have been trained to receive instructions in a particular way.\n",
"\n",
"They expect to receive:\n",
"\n",
"**A system prompt** that tells them what task they are performing and what tone they should use\n",
"\n",
"**A user prompt** -- the conversation starter that they should reply to"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
"source": [
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
"source": [
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website are as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
"metadata": {},
"outputs": [],
"source": [
"print(user_prompt_for(ed))"
]
},
{
"cell_type": "markdown",
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
"metadata": {},
"source": [
"## Messages\n",
"\n",
"The API from OpenAI expects to receive messages in a particular structure.\n",
"Many of the other APIs share this structure:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]\n",
"```\n",
"\n",
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47",
"metadata": {},
"source": [
"## And now let's build useful messages for GPT-4o-mini, using a function"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
"source": [
"# See how this function creates exactly the format above\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c",
"metadata": {},
"outputs": [],
"source": [
"# Try this out, and then try for a few more websites\n",
"\n",
"messages_for(ed)"
]
},
{
"cell_type": "markdown",
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
"metadata": {},
"source": [
"## Time to bring it together - the API for OpenAI is very simple!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
"source": [
"# And now: call the OpenAI API. You will get very familiar with this!\n",
"\n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages_for(website)\n",
" )\n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
"source": [
"# A function to display this nicely in the Jupyter output, using markdown\n",
"\n",
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "markdown",
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624",
"metadata": {},
"source": [
"# Let's try more websites\n",
"\n",
"Note that this will only work on websites that can be scraped using this simplistic approach.\n",
"\n",
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n",
"\n",
"Also, websites protected by CloudFront (and similar services) may return 403 errors - many thanks to Andy J for pointing this out.\n",
"\n",
"But many websites will work just fine!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://cnn.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://anthropic.com\")"
]
},
{
"cell_type": "markdown",
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business applications</h2>\n",
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
"\n",
"More specifically, we've applied this to Summarization - a classic Gen AI use case. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume into a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n",
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6",
"metadata": {},
"outputs": [],
"source": [
"# Step 1: Create your prompts\n",
"\n",
"system_prompt = \"You are a head chef at a Michelin-star restaurant who has a diverse skillset \\\n",
"and loves to teach new and interesting recipes to home chefs. Given an input of several ingredients, \\\n",
"provide step-by-step instructions for what could be cooked, in any cuisine of your choice. Respond in markdown.\"\n",
"\n",
"user_prompt = \"\"\"\n",
"You are a Michelin-starred head chef with a passion for teaching home chefs. \n",
"I have the following ingredients: \n",
"\n",
"**[Chicken breast, Bell peppers, cherry tomatoes, spinach, Basmati rice,\n",
"Garlic, basil, black pepper, smoked paprika]** \n",
"\n",
"Can you provide a step-by-step recipe using these ingredients? You can choose any cuisine that best fits them. \n",
"Please include cooking times, techniques, and any chef tips for enhancing flavors. \n",
"\"\"\"\n",
"\n",
"# Step 2: Make the messages list\n",
"\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ]\n",
"\n",
"# Step 3: Call OpenAI\n",
"\n",
"response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages\n",
" )\n",
"\n",
"\n",
"\n",
"# Step 4: display the result nicely\n",
"# (named display_recipe to avoid shadowing the earlier display_summary, which takes a URL)\n",
"def display_recipe(summary):\n",
"    display(Markdown(summary))\n",
"\n",
"display_recipe(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda",
"metadata": {},
"source": [
"## An extra exercise for those who enjoy web scraping\n",
"\n",
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)"
]
},
{
"cell_type": "markdown",
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6",
"metadata": {},
"source": [
"# Sharing your code\n",
"\n",
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like to add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
"\n",
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clear Outputs of All Cells, and then Save) for clean notebooks.\n",
"\n",
"Here are good instructions courtesy of an AI friend: \n",
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4484fcf-8b39-4c3f-9674-37970ed71988",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
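The prompt-construction logic in the notebook above can be exercised without any network access or API key. Here is a minimal standalone sketch; the stubbed `Website` dataclass stands in for the BeautifulSoup-backed scraping class, so no real fetch happens:

```python
# Standalone sketch of the notebook's prompt assembly, with a stubbed Website
# object instead of the BeautifulSoup-backed scraper (no network needed).
from dataclasses import dataclass


@dataclass
class Website:
    url: str
    title: str
    text: str


system_prompt = (
    "You are an assistant that analyzes the contents of a website "
    "and provides a short summary, ignoring text that might be navigation "
    "related. Respond in markdown."
)


def user_prompt_for(website):
    # Mirrors the notebook's user prompt: title first, then the page text
    return (
        f"You are looking at a website titled {website.title}\n"
        "The contents of this website are as follows; please provide a "
        "short summary of this website in markdown.\n\n" + website.text
    )


def messages_for(website):
    # The OpenAI-style message structure: one system and one user message
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt_for(website)},
    ]


site = Website("https://example.com", "Example Domain", "Some placeholder text.")
msgs = messages_for(site)
```

Passing `msgs` to `openai.chat.completions.create(model="gpt-4o-mini", messages=msgs)` would then reproduce the notebook's `summarize` call.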

week1/community-contributions/day1_michelin_start_cook.ipynb

@@ -0,0 +1,87 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "44aba2a0-c6eb-4fc1-a5cc-0a8f8679dbb8",
"metadata": {},
"source": [
"## Michelin-star cook..."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4d58124-5e9a-4f5a-9e0a-ff74f43896a8",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "67dc3099-2ccc-4ee8-8ff2-0dbbe4ae2fcb",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are a professional chef in a Michelin-star restaurant. You will help me cook restaurant-style dishes using the ingredients I have left in my refrigerator.\\\n",
"You will provide detailed instructions with precise times and measurements in grams and include calorie information for raw ingredients, not cooked ones.\\\n",
"Add the caloric information at the end. Your responses should be formatted in Markdown.\"\n",
"\n",
"user_prompt = \"\"\"\n",
"Help me with a recipe using the ingredients I have left in the refrigerator. I have spinach, eggs, pasta, rice, chicken, beef, carrots, potatoes, butter, milk, cheese, tomatoes, red peppers, and all spices in the pantry.\\n\\n\n",
"\"\"\"\n",
"\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt},\n",
"]\n",
" \n",
"response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages\n",
" )\n",
"\n",
"# Print the result in markdown format\n",
"pretty_response = Markdown(response.choices[0].message.content)\n",
"display(pretty_response)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
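One natural refinement of the notebook above: build the hard-coded ingredient list in the user prompt from a Python list, so the same prompt can be reused with whatever is actually in the refrigerator. A small sketch — `recipe_user_prompt` is an illustrative name, not part of the notebook:

```python
# Build the refrigerator prompt from a list of ingredients rather than a
# hard-coded string (illustrative helper, not from the original notebook).
def recipe_user_prompt(ingredients):
    listing = ", ".join(ingredients)
    return (
        "Help me with a recipe using the ingredients I have left in the "
        f"refrigerator. I have {listing}, and all spices in the pantry."
    )


prompt = recipe_user_prompt(["spinach", "eggs", "pasta", "tomatoes", "cheese"])
```

This `prompt` would then slot into the `messages` list as the user message, alongside the same Michelin-chef system prompt.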

week1/community-contributions/tweet-generate-from-alt-text.ipynb

@@ -0,0 +1,632 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# YOUR FIRST LAB\n",
"## Please read this. This is super-critical to get you prepared; there's no fluff here!\n",
"\n",
"## Your first Frontier LLM Project\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n",
"\n",
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
"\n",
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n",
"\n",
"## If you're new to Jupyter Lab\n",
"\n",
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n",
"\n",
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
"\n",
"## If you're new to the Command Line\n",
"\n",
"Please see these excellent guides: [Command line on PC](https://chatgpt.com/share/67b0acea-ba38-8012-9c34-7a2541052665) and [Command line on Mac](https://chatgpt.com/canvas/shared/67b0b10c93a081918210723867525d2b). \n",
"Linux people, something tells me you could teach _me_ a thing or two about the command line!\n",
"\n",
"## If you'd prefer to work in IDEs\n",
"\n",
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n",
"If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n",
"\n",
"## If you'd like to brush up your Python\n",
"\n",
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n",
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n",
"\n",
"## I am here to help\n",
"\n",
"If you have any problems at all, please do reach out. \n",
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!) \n",
"And this is new to me, but I'm also trying out X/Twitter at [@edwarddonner](https://x.com/edwarddonner) - if you're on X, please show me how it's done 😂 \n",
"\n",
"## More troubleshooting\n",
"\n",
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n",
"\n",
"## If this is old hat!\n",
"\n",
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Please read - important note</h2>\n",
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, <b>after</b> watching the lecture. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Treat these labs as a resource</h2>\n",
" <span style=\"color:#f71;\">I push updates to the code regularly. When people ask questions or have problems, I incorporate it in the code, adding more examples or improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here; but in addition, I've added more steps and better explanations, and occasionally added new models like DeepSeek. Consider this like an interactive book that accompanies the lectures.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n",
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# If you get an error running this cell, then please head over to the troubleshooting notebook!"
]
},
{
"cell_type": "markdown",
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
"metadata": {},
"source": [
"# Connecting to OpenAI\n",
"\n",
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n",
"\n",
"## Troubleshooting if you have problems:\n",
"\n",
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n",
"\n",
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n",
"\n",
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
"\n",
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
"metadata": {},
"outputs": [],
"source": [
"import httpx\n",
"openai = OpenAI(http_client=httpx.Client(verify=False))\n",
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions"
]
},
{
"cell_type": "markdown",
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91",
"metadata": {},
"source": [
"# Let's make a quick call to a Frontier model to get started, as a preview!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n",
"\n",
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "2aa190e5-cb31-456a-96cc-db109919cd78",
"metadata": {},
"source": [
"## OK onwards with our first project"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" requests.packages.urllib3.disable_warnings()\n",
" response = requests.get(url, headers=headers, verify=False)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"ed = Website(\"http://edwarddonner.com\")\n",
"print(ed.title)\n",
"print(ed.text)"
]
},
{
"cell_type": "markdown",
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
"metadata": {},
"source": [
"## Types of prompts\n",
"\n",
"You may know this already - but if not, you will get very familiar with it!\n",
"\n",
"Models like GPT4o have been trained to receive instructions in a particular way.\n",
"\n",
"They expect to receive:\n",
"\n",
"**A system prompt** that tells them what task they are performing and what tone they should use\n",
"\n",
"**A user prompt** -- the conversation starter that they should reply to"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
"source": [
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
"source": [
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
"metadata": {},
"outputs": [],
"source": [
"print(user_prompt_for(ed))"
]
},
{
"cell_type": "markdown",
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
"metadata": {},
"source": [
"## Messages\n",
"\n",
"The API from OpenAI expects to receive messages in a particular structure.\n",
"Many of the other APIs share this structure:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]\n",
"\n",
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47",
"metadata": {},
"source": [
"## And now let's build useful messages for GPT-4o-mini, using a function"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
"source": [
"# See how this function creates exactly the format above\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c",
"metadata": {},
"outputs": [],
"source": [
"# Try this out, and then try for a few more websites\n",
"\n",
"messages_for(ed)"
]
},
{
"cell_type": "markdown",
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
"metadata": {},
"source": [
"## Time to bring it together - the API for OpenAI is very simple!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
"source": [
"# And now: call the OpenAI API. You will get very familiar with this!\n",
"\n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages_for(website)\n",
" )\n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
"source": [
"# A function to display this nicely in the Jupyter output, using markdown\n",
"\n",
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f8a34db6-9c2f-4f5e-95b4-62090d7b591b",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://openai.com\")"
]
},
{
"cell_type": "markdown",
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624",
"metadata": {},
"source": [
"# Let's try more websites\n",
"\n",
"Note that this will only work on websites that can be scraped using this simplistic approach.\n",
"\n",
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n",
"\n",
"Also Websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n",
"\n",
"But many websites will work just fine!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://cnn.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"display_summary(\"https://anthropic.com\")"
]
},
{
"cell_type": "markdown",
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business applications</h2>\n",
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
"\n",
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n",
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6",
"metadata": {},
"outputs": [],
"source": [
"# A small exercise to feed the llm with image alt text and return a funny tweet.\n",
"\n",
"# Step 1: Create your prompts\n",
"import json\n",
"system_prompt = \"You are a meme lord. You like tweeting funny and hilarious comments on images. To understand the image you would be given alt text on the image.\"\n",
"class website:\n",
" def __init__(self,url):\n",
" self.url = url\n",
" requests.packages.urllib3.disable_warnings()\n",
" response = requests.get(url, headers=headers, verify=False)\n",
" html_content = response.content\n",
" soup = BeautifulSoup(html_content, 'html.parser')\n",
" image_tags = soup.find_all('img')\n",
" self.image_urls = [img['src'] for img in image_tags if img.get('src')]\n",
" self.image_alt = [img['alt'] if img.get('alt') else \"\" for img in image_tags]\n",
"\n",
" # Restricting to 3 images only.\n",
" if self.image_urls:\n",
" self.images = {self.image_urls[i]:self.image_alt[i] for i in range(4)}\n",
" else:\n",
" self.images = {}\n",
" \n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"Following are images with their alt-text:\"\n",
" user_prompt += json.dumps(website.images)\n",
" user_prompt += \"\\n Give me a markdown layout with tables for each image where each image is given its own row, with the image itself on the left and funny tweet on the right.\"\n",
" return user_prompt\n",
"\n",
"\n",
"# Step 2: Make the messages list\n",
"page = website(\"https://www.pexels.com/\")\n",
"user_prompt = user_prompt_for(page)\n",
"messages = [{\"role\":\"system\",\"content\":system_prompt},{\"role\":\"user\", \"content\":user_prompt}] # fill this in\n",
"\n",
"# Step 3: Call OpenAI\n",
"response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages\n",
" )\n",
"\n",
"# Step 4: print the result\n",
"display(Markdown((response.choices[0].message.content)))"
]
},
{
"cell_type": "markdown",
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda",
"metadata": {},
"source": [
"## An extra exercise for those who enjoy web scraping\n",
"\n",
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)"
]
},
{
"cell_type": "markdown",
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6",
"metadata": {},
"source": [
"# Sharing your code\n",
"\n",
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
"\n",
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
"\n",
"Here are good instructions courtesy of an AI friend: \n",
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4484fcf-8b39-4c3f-9674-37970ed71988",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
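The first notebook above closes with a "try it yourself" exercise: take the body of an email and suggest a short subject line. As a minimal sketch of how the message-building step might look (the function name, prompt wording, and sample email are my own illustrative assumptions, not course code), with the actual API call left commented out since it needs a key:

```python
# Sketch of the suggested email subject-line exercise. The prompt wording,
# helper name, and sample email are illustrative assumptions, not course code.

def messages_for_subject(email_body: str) -> list[dict]:
    """Build the system/user message pair for a subject-line suggester."""
    system_prompt = (
        "You are an assistant that reads the body of an email "
        "and suggests a short, appropriate subject line. "
        "Respond with the subject line only."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": email_body},
    ]

email = (
    "Hi team, just a reminder that the quarterly planning meeting has moved "
    "to Thursday at 10am. Please update your calendars."
)
messages = messages_for_subject(email)

# Pass `messages` to the same call used throughout the notebook (needs an API key):
# response = openai.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(response.choices[0].message.content)
```

This mirrors the `messages_for(website)` pattern in the notebook: only the system and user prompts change between use cases, the API call stays identical.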

464
week1/community-contributions/week1-day1-stackoverflow-to-tutorial-summarization.ipynb

@ -0,0 +1,464 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# MY !FIRST LAB\n",
"\n",
"### Script will take a stackoverflow issue and summarize it as a technical tutorial. \n",
"\n",
"Example links to use: \n",
" \n",
"https://stackoverflow.com/questions/14220321/how-do-i-return-the-response-from-an-asynchronous-call \n",
"https://stackoverflow.com/questions/60174/how-can-i-prevent-sql-injection-in-php\n",
"https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags\n",
"\n",
"*Note: Issues must be answered preferebly by a lot of users.*\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e2fd67f3-6441-4fee-b19c-7c91e6188348",
"metadata": {},
"outputs": [],
"source": [
"website = 'https://stackoverflow.com/questions/60174/how-can-i-prevent-sql-injection-in-php'"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# If you get an error running this cell, then please head over to the troubleshooting notebook!"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\n"
]
}
],
"source": [
"# Load environment variables in a file callwebsite_content .env\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"\n",
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermwebsite_contentiate Python\" notebook\n",
"\n",
"# Some websites newebsite_content you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"mysql - How can I prevent SQL injection in PHP? - Stack Overflow\n",
"Skip to main content\n",
"Stack Overflow\n",
"About\n",
"Products\n",
"OverflowAI\n",
"Stack Overflow for Teams\n",
"Where developers & technologists share private knowledge with c\n"
]
}
],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"website_content = Website(website)\n",
"print(website_content.title[:100])\n",
"print(website_content.text[:150])"
]
},
{
"cell_type": "markdown",
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
"metadata": {},
"source": [
"## Types of prompts\n",
"\n",
"You may know this already - but if not, you will get very familiar with it!\n",
"\n",
"Models like GPT4o have been trained to receive instructions in a particular way.\n",
"\n",
"They expect to receive:\n",
"\n",
"**A system prompt** that tells them what task they are performing and what tone they should use\n",
"\n",
"**A user prompt** -- the conversation starter that they should reply to"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "268cb127-ec40-4016-9436-94a1ae10a1c6",
"metadata": {},
"outputs": [],
"source": [
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n",
"\n",
"system_prompt = \"You are a technical writer that analyzes the contents of a stackoverflow website issue containing a question and answer \\\n",
"and provides a summary in the form of a technical tutorial , ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
"source": [
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += f\"\"\" \n",
"\n",
" You are looking at a website titled {website_content.title}\n",
"\n",
" Create a technical tutorial baswebsite_content on the following Stack Overflow content:\n",
" \n",
" {website_content.text}\n",
"\n",
"\n",
" The tutorial should include an introduction, problem statement, solution steps, and conclusion.\n",
" Tutrial should be in markdown format.\n",
" \"\"\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You are looking at a website titled mysql - How can I prevent SQL injection in PHP? - Stack Overflow \n",
"\n",
" You are looking at a website titled mysql - How can I prevent SQL injection in PHP? - Stack Overflow\n",
"\n",
" Create a technical tutorial baswebsite_content on the following Stack Overflow content:\n",
"\n",
" Skip to main content\n",
"Stack Overflow\n",
"About\n",
"Products\n",
"OverflowAI\n",
"Stack Overflow for Teams\n",
"Where developers & technologists share private knowledge with coworkers\n",
"Advertising & Talent\n",
"Reach devs & t\n"
]
}
],
"source": [
"print(user_prompt_for(website_content)[:500])"
]
},
{
"cell_type": "markdown",
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
"metadata": {},
"source": [
"## Messages\n",
"\n",
"The API from OpenAI expects to receive messages in a particular structure.\n",
"Many of the other APIs share this structure:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]\n",
"\n",
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)"
]
},
{
"cell_type": "markdown",
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47",
"metadata": {},
"source": [
"## And now let's build useful messages for GPT-4o-mini, using a function"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
"source": [
"# See how this function creates exactly the format above\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]"
]
},
{
"cell_type": "markdown",
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
"metadata": {},
"source": [
"## Time to bring it together - the API for OpenAI is very simple!"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
"source": [
"# And now: call the OpenAI API. You will get very familiar with this!\n",
"\n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages_for(website)\n",
" )\n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
"source": [
"# A function to display this nicely in the Jupyter output, using markdown\n",
"\n",
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "0a6970cc-bed8-4759-a312-3b81236c2f4e",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"```markdown\n",
"# How to Prevent SQL Injection in PHP\n",
"\n",
"## Introduction\n",
"SQL injection is a serious security vulnerability that can allow an attacker to interfere with the queries that your application makes to the database. By exploiting this vulnerability, an attacker can gain unauthorized access to sensitive data, manipulate data, and even execute administrative operations on the database. This tutorial will guide you on how to prevent SQL injection in your PHP applications through various best practices.\n",
"\n",
"## Problem Statement\n",
"Consider the following PHP code that is vulnerable to SQL injection:\n",
"\n",
"```php\n",
"$unsafe_variable = $_POST['user_input']; \n",
"mysql_query(\"INSERT INTO `table` (`column`) VALUES ('$unsafe_variable')\");\n",
"```\n",
"\n",
"If a user were to input something like `value'); DROP TABLE table;--`, the query would become:\n",
"\n",
"```sql\n",
"INSERT INTO `table` (`column`) VALUES('value'); DROP TABLE table;--');\n",
"```\n",
"\n",
"This inserts an unwanted SQL command leading to disastrous effects on the database.\n",
"\n",
"## Solution Steps\n",
"\n",
"### 1. Use Prepared Statements\n",
"The best method to prevent SQL injection is to use prepared statements with parameterized queries. This separates SQL logic from data, ensuring that user input is treated as data, not executable code.\n",
"\n",
"#### Using PDO\n",
"Here's how to use PDO in PHP:\n",
"\n",
"```php\n",
"$dsn = 'mysql:dbname=dbtest;host=127.0.0.1;charset=utf8mb4';\n",
"$dbConnection = new PDO($dsn, 'user', 'password');\n",
"$dbConnection->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);\n",
"$dbConnection->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);\n",
"\n",
"$stmt = $dbConnection->prepare('SELECT * FROM users WHERE name = :name');\n",
"$stmt->execute(['name' => $name]);\n",
"\n",
"foreach ($stmt as $row) {\n",
" // Process row\n",
"}\n",
"```\n",
"\n",
"#### Using MySQLi\n",
"If you're using MySQLi, the syntax is slightly different:\n",
"\n",
"```php\n",
"$dbConnection = new mysqli('127.0.0.1', 'username', 'password', 'test');\n",
"$dbConnection->set_charset('utf8mb4');\n",
"\n",
"$stmt = $dbConnection->prepare('SELECT * FROM users WHERE name = ?');\n",
"$stmt->bind_param('s', $name); // 's' stands for string\n",
"$stmt->execute();\n",
"$result = $stmt->get_result();\n",
"\n",
"while ($row = $result->fetch_assoc()) {\n",
" // Process row\n",
"}\n",
"```\n",
"\n",
"### 2. Properly Configure the Database Connection\n",
"When using PDO, ensure that emulated prepared statements are disabled. This is essential for real prepared statements to take effect.\n",
"\n",
"Example configuration:\n",
"```php\n",
"$dbConnection->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);\n",
"```\n",
"\n",
"### 3. Validate Input Data\n",
"In addition to using prepared statements, you should validate and sanitize user inputs. Implementing whitelist validation can help by ensuring only expected values are processed.\n",
"\n",
"For example, if you expect a sorting direction:\n",
"```php\n",
"$dir = !empty($_GET['dir']) && $_GET['dir'] === 'DESC' ? 'DESC' : 'ASC';\n",
"```\n",
"\n",
"### 4. Limit Database Permissions\n",
"Restrict database user permissions to the minimum required for their role. For example, a user who only needs to read data should not have permissions to delete or alter it.\n",
"\n",
"```sql\n",
"GRANT SELECT ON database_name.* TO 'username'@'localhost';\n",
"```\n",
"\n",
"### 5. Regularly Update Your Codebase\n",
"Keep libraries and the PHP version you are using up-to-date. Deprecated functions and libraries often contain vulnerabilities that can be exploited.\n",
"\n",
"## Conclusion\n",
"Preventing SQL injection in PHP applications requires a proactive approach. Using prepared statements ensures user input is handled securely, while validating data and limiting permissions fortifies your application against potential attacks. By implementing these best practices, you can significantly reduce the risk of SQL injection vulnerabilities in your applications.\n",
"\n",
"For more in-depth information on SQL injection prevention techniques, consult the [OWASP SQL Injection Prevention Cheat Sheet](https://owasp.org/www-community/attacks/SQL_Injection).\n",
"```"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(website)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

63
week1/community-contributions/week1-day1_2-bedtime-storyteller.py

@ -0,0 +1,63 @@
#!/usr/bin/env python
import os
import argparse
from dotenv import load_dotenv
from openai import OpenAI


def load_openai_key():
    # Load environment variables in a file called .env
    load_dotenv(override=True)
    api_key = os.getenv('OPENAI_API_KEY')

    # Check the key
    if not api_key:
        return "Error: No API key was found!"
    elif not api_key.startswith("sk-proj-"):
        return "Error: An API key was found, but it doesn't start with sk-proj-; please check you're using the right key"
    elif api_key.strip() != api_key:
        return "Error: An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them!"
    else:
        return "API key found and looks good so far!"


def ask_llm(client, model, user_prompt):
    system_prompt = """
    You are a writing assistant with expertise in children's stories.
    Write a bedtime story inspired by the subject below.
    The story should have a beginning, middle, and end.
    The story should be appropriate for children ages 5-8 and have a positive message.
    I should be able to read the entire story in about 3 minutes.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system_prompt},
                  {"role": "user", "content": user_prompt}]
    )
    return response.choices[0].message.content


def main():
    parser = argparse.ArgumentParser(description="AI Bedtime Storyteller")
    parser.add_argument("provider", choices=["openai", "ollama"], help="AI provider to use")
    parser.add_argument("--model", help="Model to use for Ollama (required if provider is 'ollama')")
    parser.add_argument("subject", help="What do you want the story to be about?")
    args = parser.parse_args()

    if args.provider == "openai":
        print(load_openai_key())
        client = OpenAI()
        model = "gpt-4o-mini"
    else:  # ollama (argparse's choices= already rejects anything else)
        if not args.model:
            parser.error("--model is required when provider is 'ollama'")
        client = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')
        model = args.model

    user_prompt = args.subject
    result = ask_llm(client, model, user_prompt)
    print("AI Response:", result)


if __name__ == "__main__":
    main()

567
week2/community-contributions/brochure_links_tone.ipynb

@ -0,0 +1,567 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c79dc33e-1a3b-4601-a8f2-219b7a9b6d88",
"metadata": {},
"source": [
"# Company Brochure - Relevant Links and Custom Tone\n",
"\n",
"Using GPT to generate a company brochure with the relevant links functionality and the ability to choose the desired tone."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e32f4aa7-6fc4-4dc9-8058-58e6a7f329c5",
"metadata": {},
"outputs": [],
"source": [
"# Imports\n",
"\n",
"import os\n",
"import requests\n",
"import json\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI\n",
"import gradio as gr"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d1d65a21-bbba-44ff-a2be-85bf2055a493",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key set and good to go.\n"
]
}
],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"if openai_api_key:\n",
" print(\"OpenAI API Key set and good to go.\")\n",
"else:\n",
" print(\"OpenAI API Key not set. :(\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "c5db63fe-5da8-496e-9b37-139598d600a7",
"metadata": {},
"outputs": [],
"source": [
"# Setting up the OpenAI object\n",
"\n",
"openai = OpenAI()\n",
"gpt_model = 'gpt-4o-mini'"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "535da52f-b280-48ce-aa8b-f82f9f9805d9",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
" \"\"\"\n",
" A utility class to represent a Website that we have scraped, now with links\n",
" \"\"\"\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" self.body = response.content\n",
" soup = BeautifulSoup(self.body, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" if soup.body:\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" else:\n",
" self.text = \"\"\n",
" links = [link.get('href') for link in soup.find_all('a')]\n",
" self.links = [link for link in links if link]\n",
"\n",
" def get_contents(self):\n",
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8d5757c4-95f4-4038-8ed4-8c81da5112b0",
"metadata": {},
"outputs": [],
"source": [
"link_system_prompt = \"You are provided with a list of links found on a webpage. \\\n",
"You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n",
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n",
"link_system_prompt += \"You should respond in JSON as in this example:\"\n",
"link_system_prompt += \"\"\"\n",
"{\n",
" \"links\": [\n",
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
" {\"type\": \"careers page\", \"url\": \"https://another.full.url/careers\"}\n",
" ]\n",
"}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "d5fd31ac-7c81-454a-a1dc-4c58bd3db246",
"metadata": {},
"outputs": [],
"source": [
"def get_links_user_prompt(website):\n",
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n",
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n",
"Do not include Terms of Service, Privacy, email links.\\n\"\n",
" user_prompt += \"Links (some might be relative links):\\n\"\n",
" user_prompt += \"\\n\".join(website.links)\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e8b67492-1ba4-4aad-a588-39116128fa18",
"metadata": {},
"outputs": [],
"source": [
"def gpt_get_links(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model= gpt_model,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": link_system_prompt},\n",
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n",
" ],\n",
" response_format={\"type\": \"json_object\"}\n",
" )\n",
" result = response.choices[0].message.content\n",
" return json.loads(result)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e8846e7a-ace2-487e-a0a8-fccb389f2eb9",
"metadata": {},
"outputs": [],
"source": [
"# This function uses the get_contents method in the Website class as well as GPT to find relevant links.\n",
"\n",
"def get_all_details(url):\n",
" result = \"Landing page:\\n\"\n",
" result += Website(url).get_contents()\n",
" links = gpt_get_links(url)\n",
" print(\"Found links:\", links)\n",
" for link in links[\"links\"]:\n",
" result += f\"\\n\\n{link['type']}\\n\"\n",
" result += Website(link[\"url\"]).get_contents()\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "18b42319-8342-4b9c-bef6-8b72acf92ab3",
"metadata": {},
"outputs": [],
"source": [
"def get_brochure_user_prompt(company_name, url):\n",
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n",
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; \\\n",
" use this information to build a short brochure of the company in markdown.\\n\"\n",
" \n",
" user_prompt += get_all_details(url)\n",
" user_prompt = user_prompt[:5_000] # Truncate if more than 5,000 characters\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "d7748293-a616-41de-93cb-89f65cc5c73d",
"metadata": {},
"outputs": [],
"source": [
"# Let's create a call that streams back results\n",
"# If you'd like a refresher on Generators (the \"yield\" keyword),\n",
"# Please take a look at the Intermediate Python notebook in week1 folder.\n",
"\n",
"def stream_brochure(company_name, url, tone):\n",
"\n",
" system_message = f\"You are an assistant that analyzes the content of several relevant pages from a company website \\\n",
" and creates a short brochure about the company for prospective customers, investors, and recruits. \\\n",
" Include details of company culture, customers and careers/jobs if you have the information. \\\n",
" Respond in markdown, and use a {tone.lower()} tone throughout the brochure.\"\n",
"\n",
" \n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ]\n",
" stream = openai.chat.completions.create(\n",
" model=gpt_model,\n",
" messages=messages,\n",
" stream=True\n",
" )\n",
" result = \"\"\n",
" for chunk in stream:\n",
" result += chunk.choices[0].delta.content or \"\"\n",
" yield result"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "15222832-06e0-4452-a8e1-59b9b1755488",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Running on local URL: http://127.0.0.1:7860\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7860/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://www.snowflake.com/about/events/'}, {'type': 'company page', 'url': 'https://www.snowflake.com/en/company/overview/about-snowflake/'}, {'type': 'company leadership page', 'url': 'https://www.snowflake.com/en/company/overview/leadership-and-board/'}, {'type': 'careers page', 'url': 'https://careers.snowflake.com/us/en'}, {'type': 'company ESG page', 'url': 'https://www.snowflake.com/en/company/overview/esg/'}, {'type': 'company ventures page', 'url': 'https://www.snowflake.com/en/company/overview/snowflake-ventures/'}, {'type': 'end data disparity page', 'url': 'https://www.snowflake.com/en/company/overview/end-data-disparity/'}]}\n",
"Found links: {'links': [{'type': 'about page', 'url': 'https://www.snowflake.com/about/events/'}, {'type': 'about page', 'url': 'https://www.snowflake.com/company/overview/about-snowflake/'}, {'type': 'leadership page', 'url': 'https://www.snowflake.com/company/overview/leadership-and-board/'}, {'type': 'careers page', 'url': 'https://careers.snowflake.com/us/en'}, {'type': 'investor relations', 'url': 'https://investors.snowflake.com/overview/default.aspx'}, {'type': 'ESG page', 'url': 'https://www.snowflake.com/company/overview/esg/'}, {'type': 'snowflake ventures', 'url': 'https://www.snowflake.com/company/overview/snowflake-ventures/'}, {'type': 'end data disparity', 'url': 'https://www.snowflake.com/company/overview/end-data-disparity/'}]}\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Traceback (most recent call last):\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/connectionpool.py\", line 464, in _make_request\n",
" self._validate_conn(conn)\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/connectionpool.py\", line 1093, in _validate_conn\n",
" conn.connect()\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/connection.py\", line 741, in connect\n",
" sock_and_verified = _ssl_wrap_socket_and_match_hostname(\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/connection.py\", line 920, in _ssl_wrap_socket_and_match_hostname\n",
" ssl_sock = ssl_wrap_socket(\n",
" ^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/util/ssl_.py\", line 460, in ssl_wrap_socket\n",
" ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/util/ssl_.py\", line 504, in _ssl_wrap_socket_impl\n",
" return ssl_context.wrap_socket(sock, server_hostname=server_hostname)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/ssl.py\", line 517, in wrap_socket\n",
" return self.sslsocket_class._create(\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/ssl.py\", line 1104, in _create\n",
" self.do_handshake()\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/ssl.py\", line 1382, in do_handshake\n",
" self._sslobj.do_handshake()\n",
"ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)\n",
"\n",
"During handling of the above exception, another exception occurred:\n",
"\n",
"Traceback (most recent call last):\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/connectionpool.py\", line 787, in urlopen\n",
" response = self._make_request(\n",
" ^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/connectionpool.py\", line 488, in _make_request\n",
" raise new_e\n",
"urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)\n",
"\n",
"The above exception was the direct cause of the following exception:\n",
"\n",
"Traceback (most recent call last):\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/adapters.py\", line 667, in send\n",
" resp = conn.urlopen(\n",
" ^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/connectionpool.py\", line 841, in urlopen\n",
" retries = retries.increment(\n",
" ^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/util/retry.py\", line 519, in increment\n",
" raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
"urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='petrofac.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))\n",
"\n",
"During handling of the above exception, another exception occurred:\n",
"\n",
"Traceback (most recent call last):\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/gradio/queueing.py\", line 625, in process_events\n",
" response = await route_utils.call_process_api(\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/gradio/route_utils.py\", line 322, in call_process_api\n",
" output = await app.get_blocks().process_api(\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/gradio/blocks.py\", line 2103, in process_api\n",
" result = await self.call_function(\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/gradio/blocks.py\", line 1662, in call_function\n",
" prediction = await utils.async_iteration(iterator)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/gradio/utils.py\", line 735, in async_iteration\n",
" return await anext(iterator)\n",
" ^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/gradio/utils.py\", line 729, in __anext__\n",
" return await anyio.to_thread.run_sync(\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/anyio/to_thread.py\", line 56, in run_sync\n",
" return await get_async_backend().run_sync_in_worker_thread(\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 2461, in run_sync_in_worker_thread\n",
" return await future\n",
" ^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 962, in run\n",
" result = context.run(func, *args)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/gradio/utils.py\", line 712, in run_sync_iterator_async\n",
" return next(iterator)\n",
" ^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/gradio/utils.py\", line 873, in gen_wrapper\n",
" response = next(iterator)\n",
" ^^^^^^^^^^^^^^\n",
" File \"/var/folders/yc/m81x80gn66j4fbm15pk5gmfr0000gn/T/ipykernel_39727/601932735.py\", line 15, in stream_brochure\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/var/folders/yc/m81x80gn66j4fbm15pk5gmfr0000gn/T/ipykernel_39727/3764629295.py\", line 6, in get_brochure_user_prompt\n",
" user_prompt += get_all_details(url)\n",
" ^^^^^^^^^^^^^^^^^^^^\n",
" File \"/var/folders/yc/m81x80gn66j4fbm15pk5gmfr0000gn/T/ipykernel_39727/2913862724.py\", line 5, in get_all_details\n",
" result += Website(url).get_contents()\n",
" ^^^^^^^^^^^^\n",
" File \"/var/folders/yc/m81x80gn66j4fbm15pk5gmfr0000gn/T/ipykernel_39727/1579423502.py\", line 15, in __init__\n",
" response = requests.get(url, headers=headers)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/api.py\", line 73, in get\n",
" return request(\"get\", url, params=params, **kwargs)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/api.py\", line 59, in request\n",
" return session.request(method=method, url=url, **kwargs)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/sessions.py\", line 589, in request\n",
" resp = self.send(prep, **send_kwargs)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/sessions.py\", line 703, in send\n",
" r = adapter.send(request, **kwargs)\n",
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
" File \"/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/adapters.py\", line 698, in send\n",
" raise SSLError(e, request=request)\n",
"requests.exceptions.SSLError: HTTPSConnectionPool(host='petrofac.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://www.petrofac.com/who-we-are/'}, {'type': 'what we do page', 'url': 'https://www.petrofac.com/who-we-are/what-we-do/'}, {'type': 'careers page', 'url': 'https://www.petrofac.com/careers/'}, {'type': 'our structure page', 'url': 'https://www.petrofac.com/who-we-are/our-structure/'}, {'type': 'energy transition page', 'url': 'https://www.petrofac.com/who-we-are/energy-transition/'}, {'type': 'sustainability and ESG page', 'url': 'https://www.petrofac.com/who-we-are/sustainability-and-esg/'}, {'type': 'investor relations page', 'url': 'https://www.petrofac.com/investors/'}, {'type': 'services page', 'url': 'https://www.petrofac.com/services/'}, {'type': 'where we operate page', 'url': 'https://www.petrofac.com/where-we-operate/'}]}\n"
]
}
],
"source": [
"view = gr.Interface(\n",
" fn=stream_brochure,\n",
" inputs=[\n",
" gr.Textbox(label=\"Company name:\"),\n",
" gr.Textbox(label=\"Landing page URL including http:// or https://\"),\n",
" gr.Textbox(label=\"Tone:\")],\n",
" outputs=[gr.Markdown(label=\"Brochure:\")],\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "70d6398c-21dd-44f8-ba7d-0204414dffa0",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

275
week2/community-contributions/day4_compare_prices.ipynb

@ -0,0 +1,275 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ddfa9ae6-69fe-444a-b994-8c4c5970a7ec",
"metadata": {},
"source": [
"# Project - Airline AI Assistant\n",
"\n",
"We'll now bring together what we've learned to make an AI Customer Support assistant for an Airline"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import gradio as gr"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae",
"metadata": {},
"outputs": [],
"source": [
"# Initialization\n",
"\n",
"load_dotenv(override=True)\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"MODEL = \"gpt-4o-mini\"\n",
"openai = OpenAI()\n",
"\n",
"# As an alternative, if you'd like to use Ollama instead of OpenAI\n",
"# Check that Ollama is running for you locally (see week1/day2 exercise) then uncomment these next 2 lines\n",
"# MODEL = \"llama3.2\"\n",
"# openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0a521d84-d07c-49ab-a0df-d6451499ed97",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n",
"system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n",
"system_message += \"Always be accurate. If you don't know the answer, say so.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "61a2a15d-b559-4844-b377-6bd5cb4949f6",
"metadata": {},
"outputs": [],
"source": [
"# This function looks rather simpler than the one from my video, because we're taking advantage of the latest Gradio updates\n",
"\n",
"def chat(message, history):\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_message}\n",
" ] + history + [\n",
" {\"role\": \"user\", \"content\": message}\n",
" ]\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" return response.choices[0].message.content\n",
"\n",
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "markdown",
"id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"Tools are an incredibly powerful feature provided by the frontier LLMs.\n",
"\n",
"With tools, you can write a function, and have the LLM call that function as part of its response.\n",
"\n",
"Sounds almost spooky... we're giving it the power to run code on our machine?\n",
"\n",
"Well, kinda."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2",
"metadata": {},
"outputs": [],
"source": [
"# Let's start by making a useful function\n",
"\n",
"ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\"}\n",
"\n",
"def get_ticket_price(destination_city):\n",
" print(f\"Tool get_ticket_price called for {destination_city}\")\n",
" city = destination_city.lower()\n",
" return ticket_prices.get(city, \"Unknown\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85",
"metadata": {},
"outputs": [],
"source": [
"get_ticket_price(\"Berlin\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4afceded-7178-4c05-8fa6-9f2085e6a344",
"metadata": {},
"outputs": [],
"source": [
"# There's a particular dictionary structure that's required to describe our function:\n",
"\n",
"price_function = {\n",
" \"name\": \"get_ticket_price\",\n",
" \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city that the customer wants to travel to\",\n",
" },\n",
" },\n",
" \"required\": [\"destination_city\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c",
"metadata": {},
"outputs": [],
"source": [
"# And this is included in a list of tools:\n",
"\n",
"tools = [{\"type\": \"function\", \"function\": price_function}]"
]
},
{
"cell_type": "markdown",
"id": "c3d3554f-b4e3-4ce7-af6f-68faa6dd2340",
"metadata": {},
"source": [
"## Getting OpenAI to use our Tool\n",
"\n",
"There's some fiddly stuff to allow OpenAI \"to call our tool\"\n",
"\n",
"What we actually do is give the LLM the opportunity to inform us that it wants us to run the tool.\n",
"\n",
"Here's how the new chat function looks:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ad32321f-083a-4462-a6d6-7bb3b0f5d10a",
"metadata": {},
"outputs": [],
"source": [
"# We have to write that function handle_tool_call:\n",
"\n",
"def handle_tool_call(message): \n",
" responses = []\n",
" for tool_call in message.tool_calls: \n",
" if tool_call.function.name == \"get_ticket_price\":\n",
" arguments = json.loads(tool_call.function.arguments)\n",
" city = arguments.get('destination_city')\n",
" price = get_ticket_price(city)\n",
" response = {\n",
" \"role\": \"tool\",\n",
" \"content\": json.dumps({\"destination_city\": city,\"price\": price}),\n",
" \"tool_call_id\": tool_call.id\n",
" }\n",
" responses.append(response)\n",
" return responses"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce9b0744-9c78-408d-b9df-9f6fd9ed78cf",
"metadata": {},
"outputs": [],
"source": [
"def chat(message, history):\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_message}\n",
" ] + history + [\n",
" {\"role\": \"user\", \"content\": message}\n",
" ]\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n",
"\n",
" # Tool usage\n",
" if response.choices[0].finish_reason==\"tool_calls\":\n",
" message = response.choices[0].message\n",
" responses = handle_tool_call(message)\n",
" messages.append(message) # That's the assistant asking us to run a tool\n",
" for response in responses:\n",
" messages.append(response) # That's the result of the tool calls\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" \n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14",
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8dc18486-4d6b-4cbf-a6b8-16d08d7c4f54",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
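The airline notebook above replies to tool calls by appending `role: "tool"` messages keyed by `tool_call_id`. The same round trip can be sketched with plain dicts in place of the SDK's tool-call objects (the dict shapes here are illustrative, not the SDK types):

```python
import json

# Price table mirroring the notebook's example.
ticket_prices = {"london": "$799", "paris": "$899", "tokyo": "$1400", "berlin": "$499"}

def get_ticket_price(destination_city):
    return ticket_prices.get(destination_city.lower(), "Unknown")

def handle_tool_call(tool_calls):
    """Build the role-'tool' messages the API expects in reply to tool calls.

    Each reply pairs the tool's output with the originating tool_call_id so the
    model can match results to requests.
    """
    responses = []
    for call in tool_calls:
        if call["name"] == "get_ticket_price":
            args = json.loads(call["arguments"])
            city = args.get("destination_city")
            responses.append({
                "role": "tool",
                "content": json.dumps({"destination_city": city, "price": get_ticket_price(city)}),
                "tool_call_id": call["id"],
            })
    return responses
```

The key point is the pairing: one `role: "tool"` message per tool call, each carrying the matching `tool_call_id`, appended after the assistant message that requested the calls.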

167
week2/community-contributions/week2day4_budget_trip_planner_using_gemini.ipynb

@@ -0,0 +1,167 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"source": [
"Import libraries as needed and keep your Gemini API key ready."
],
"metadata": {
"id": "2UAcHYzT6ikw"
}
},
{
"cell_type": "code",
"source": [
"#!pip install gradio"
],
"metadata": {
"id": "XW0IY4xK6JZ1"
},
"execution_count": 14,
"outputs": []
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "dwoPNMMP4ZSh"
},
"outputs": [],
"source": [
"from google import genai\n",
"from google.genai import types\n",
"from google.colab import userdata\n",
"\n"
]
},
{
"cell_type": "code",
"source": [
"from typing import Dict\n",
"\n",
"def get_trip_itinerary(budget: int) -> str:\n",
" \"\"\"\n",
" Returns a trip itinerary based on the given budget.\n",
" \"\"\"\n",
" itinerary_dict: Dict[int, str] = {\n",
" 500: \"Paris: 3-day budget trip covering Eiffel Tower, Louvre, and Seine River Cruise.\",\n",
" 1000: \"Tokyo: 5-day adventure covering Shibuya, Akihabara, Mount Fuji day trip.\",\n",
" 2000: \"New York: 7-day luxury stay covering Times Square, Broadway show, and helicopter tour.\",\n",
" 3000: \"Dubai: 7-day ultra-luxury trip with Burj Khalifa VIP tour, desert safari, and yacht cruise.\",\n",
" }\n",
"\n",
" return itinerary_dict.get(budget, \"No itinerary found for this budget. Try another amount!\")\n"
],
"metadata": {
"id": "cnYD07T24ueV"
},
"execution_count": 3,
"outputs": []
},
{
"cell_type": "code",
"source": [
"from google.genai import types\n",
"\n",
"config = types.GenerateContentConfig(tools=[get_trip_itinerary])\n",
"\n",
"from google import genai\n",
"\n",
"client = genai.Client(api_key=userdata.get('gemini_api'))\n",
"\n",
"response = client.models.generate_content(\n",
" model='gemini-2.0-flash',\n",
" config=config,\n",
" contents='Based on the user budget suggest trip itinerary'\n",
")\n"
],
"metadata": {
"id": "3WRUXvD45VFC"
},
"execution_count": 7,
"outputs": []
},
{
"cell_type": "code",
"source": [
"import gradio as gr\n",
"\n",
"# Chat function using Gemini\n",
"chat = client.chats.create(model='gemini-2.0-flash', config=config)\n",
"\n",
"def chat_with_ai(user_input: str):\n",
" response = chat.send_message(user_input)\n",
" return response.text\n",
"\n",
"# Gradio Chat Interface\n",
"demo = gr.Interface(fn=chat_with_ai, inputs=\"text\", outputs=\"text\", title=\"AI Trip Planner\")\n",
"\n",
"demo.launch()\n"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 645
},
"id": "5fE700z96DHs",
"outputId": "3e35423c-8b2b-4868-8113-00d9d3a7a2ba"
},
"execution_count": 13,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Running Gradio in a Colab notebook requires sharing enabled. Automatically setting `share=True` (you can turn this off by setting `share=False` in `launch()` explicitly).\n",
"\n",
"Colab notebook detected. To show errors in colab notebook, set debug=True in launch()\n",
"* Running on public URL: https://079a23f363400da700.gradio.live\n",
"\n",
"This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)\n"
]
},
{
"output_type": "display_data",
"data": {
"text/plain": [
"<IPython.core.display.HTML object>"
],
"text/html": [
"<div><iframe src=\"https://079a23f363400da700.gradio.live\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
]
},
"metadata": {}
},
{
"output_type": "execute_result",
"data": {
"text/plain": []
},
"metadata": {},
"execution_count": 13
}
]
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "XC9zzq8X5u8m"
},
"execution_count": null,
"outputs": []
}
]
}
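The itinerary lookup in the notebook above only matches exact budget tiers (`itinerary_dict.get(budget, ...)`), so a budget of, say, 1200 finds nothing. One possible refinement, sketched here as an illustration rather than part of the notebook, is to fall back to the closest tier the budget can afford:

```python
def nearest_itinerary(budget, itinerary_dict):
    """Return the itinerary for the highest price tier at or below the budget."""
    affordable = [tier for tier in itinerary_dict if tier <= budget]
    if not affordable:
        return "No itinerary found for this budget. Try another amount!"
    return itinerary_dict[max(affordable)]

# Abbreviated tiers for illustration.
itineraries = {
    500: "Paris: 3-day budget trip.",
    1000: "Tokyo: 5-day adventure.",
    2000: "New York: 7-day luxury stay.",
}
```

With this shape, any budget between tiers still produces a sensible answer instead of the fallback message.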

78
week3/community-contributions/day5_openai_whisper_llamainstruct

@@ -0,0 +1,78 @@
import gradio as gr
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, TextStreamer, AutoModelForSpeechSeq2Seq, AutoProcessor
from huggingface_hub import login
import os
# Use the secret stored in the Hugging Face space
token = os.getenv("HF_TOKEN")
login(token=token)
# Whisper Model Optimization
model = "openai/whisper-tiny"
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained(model)
transcriber = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
device=0 if torch.cuda.is_available() else "cpu",
)
# Function to Transcribe & Generate Minutes
def process_audio(audio_file):
if audio_file is None:
return "Error: No audio provided!"
    # Transcribe audio, then release the Whisper pipeline before loading LLaMA.
    # The global declaration is required: a bare `del` would make these names
    # local to the function and raise UnboundLocalError on the call below.
    global transcriber, processor
    transcript = transcriber(audio_file)["text"]
    del transcriber
    del processor
# LLaMA Model Optimization
LLAMA = "meta-llama/Llama-3.2-3B-Instruct"
llama_quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_type="nf4"
)
tokenizer = AutoTokenizer.from_pretrained(LLAMA)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
LLAMA,
torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
device_map="auto"
)
# Generate meeting minutes
system_message = "You are an assistant that produces minutes of meetings from transcripts, with summary, key discussion points, takeaways and action items with owners, in markdown."
    user_prompt = f"Below is an extract of a transcript of a Denver council meeting. Please write minutes in markdown, including a summary with attendees, location and date; discussion points; takeaways; and action items with owners.\n{transcript}"
messages = [
{"role": "system", "content": system_message},
{"role": "user", "content": user_prompt}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(DEVICE)
streamer = TextStreamer(tokenizer)
outputs = model.generate(inputs, max_new_tokens=2000, streamer=streamer)
return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Gradio Interface
interface = gr.Interface(
fn=process_audio,
inputs=gr.Audio(sources=["upload", "microphone"], type="filepath"),
outputs="text",
title="Meeting Minutes Generator",
description="Upload or record an audio file to get structured meeting minutes in Markdown.",
)
# Launch App
interface.launch()
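The script above assembles a system/user message pair inline before calling `apply_chat_template`. That assembly can be factored into a small helper, which also makes it easy to unit-test the prompt without loading any model; the function name below is illustrative, not part of the script:

```python
def build_minutes_messages(transcript):
    """Assemble the system/user chat messages for the minutes-writing step."""
    system_message = (
        "You are an assistant that produces minutes of meetings from transcripts, "
        "with summary, key discussion points, takeaways and action items with "
        "owners, in markdown."
    )
    user_prompt = (
        "Below is an extract of a transcript of a council meeting. Please write "
        "minutes in markdown, including a summary with attendees, location and "
        f"date; discussion points; takeaways; and action items with owners.\n{transcript}"
    )
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_prompt},
    ]
```

The returned list is exactly what `tokenizer.apply_chat_template(messages, ...)` expects.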

402
week3/community-contributions/synthetic_dataset_generator_deepseek_qwen_llama.ipynb

@@ -0,0 +1,402 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "843542f7-220a-4408-9f8a-848696092434",
"metadata": {
"id": "843542f7-220a-4408-9f8a-848696092434"
},
"source": [
"# Build a Model to generate Synthetic Data"
]
},
{
"cell_type": "markdown",
"id": "a8816fc8-9517-46ff-af27-9fd0060840aa",
"metadata": {},
"source": [
"This code was written in Google Colab."
]
},
{
"cell_type": "markdown",
"id": "08a8d539-950b-4b58-abf4-f17bd832c0af",
"metadata": {
"id": "08a8d539-950b-4b58-abf4-f17bd832c0af"
},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "Ienu-NHTuUlT",
"metadata": {
"id": "Ienu-NHTuUlT"
},
"outputs": [],
"source": [
"!pip install -q gradio"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c5e737cd-27b0-4a2e-9a0c-dbb30ce5cdbf",
"metadata": {
"id": "c5e737cd-27b0-4a2e-9a0c-dbb30ce5cdbf"
},
"outputs": [],
"source": [
"import os\n",
"import requests\n",
"import json\n",
"from google.colab import userdata\n",
"\n",
"from huggingface_hub import login\n",
"from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, BitsAndBytesConfig\n",
"import torch\n",
"\n",
"import gradio as gr"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "khD9X5-V_txO",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "khD9X5-V_txO",
"outputId": "e2b8d8d0-0433-4b5f-c777-a675213a3f4c"
},
"outputs": [],
"source": [
"!pip install -U bitsandbytes"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e47ead5f-b4e9-4e9f-acf9-be1ffb7fa6d7",
"metadata": {
"id": "e47ead5f-b4e9-4e9f-acf9-be1ffb7fa6d7"
},
"outputs": [],
"source": [
"hf_token = userdata.get('HF_TOKEN')"
]
},
{
"cell_type": "markdown",
"id": "ba104a9c-f298-4e90-9ceb-9d907e392d0d",
"metadata": {
"id": "ba104a9c-f298-4e90-9ceb-9d907e392d0d"
},
"source": [
"## Open Source Models from HF"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "11b1eb65-8ef5-4e6d-9176-cf1f70d07fb6",
"metadata": {
"id": "11b1eb65-8ef5-4e6d-9176-cf1f70d07fb6"
},
"outputs": [],
"source": [
"deepseek_model = 'deepseek-ai/deepseek-llm-7b-chat'\n",
"llama_model = 'meta-llama/Meta-Llama-3.1-8B-Instruct'\n",
"qwen2 = 'Qwen/Qwen2-7B-Instruct'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "90fb1d2e-5d25-4d73-b629-8273ab71503c",
"metadata": {
"id": "90fb1d2e-5d25-4d73-b629-8273ab71503c"
},
"outputs": [],
"source": [
"login(hf_token, add_to_git_credential=True)"
]
},
{
"cell_type": "markdown",
"id": "52948c01-8dc6-404b-a2c1-c87f9f6dbd64",
"metadata": {
"id": "52948c01-8dc6-404b-a2c1-c87f9f6dbd64"
},
"source": [
"## Creating Prompts"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "79374337-34fe-4002-b173-ac9b132a54d8",
"metadata": {
"id": "79374337-34fe-4002-b173-ac9b132a54d8"
},
"outputs": [],
"source": [
"system_prompt = \"You are an expert in generating synthetic datasets. Your goal is to generate realistic datasets \\\n",
"based on a given business and its requirements from the user. You will also be given the desired dataset format. \"\n",
"system_prompt += \"Do not repeat the instructions.\"\n",
"\n",
"user_prompt = (\"Please provide me a dataset for the following business.\"\n",
"\"For example:\\n\"\n",
"\"The Business: A retail store selling luxury watches.\\n\"\n",
"\"The Data Format: CSV.\\n\"\n",
"\"Output:\\n\"\n",
"\"Item,Price,Quantity,Brand,Sale Date\\n\"\n",
"\"Superocean II, 20.000$, 3, Breitling, 2025-04-08 \\n\"\n",
"\"If I don't provide you the necessary columns, please create the columns based on your knowledge about the given business\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dcd90b5e-a7d2-4cdc-81ff-17974c5ff1fe",
"metadata": {
"id": "dcd90b5e-a7d2-4cdc-81ff-17974c5ff1fe"
},
"outputs": [],
"source": [
"def dataset_format(data_format, num_records):\n",
" format_message = ''\n",
" if data_format == 'CSV':\n",
"        format_message = 'Please provide the dataset in a CSV format. '\n",
"    elif data_format == 'JSON':\n",
"        format_message = 'Please provide the dataset in a JSON format. '\n",
"    elif data_format == 'Tabular':\n",
"        format_message = 'Please provide the dataset in a Tabular format. '\n",
"\n",
"    return format_message + f'Please generate {num_records} records.'\n",
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "39243edb-3eba-46fd-a610-e474ed421b01",
"metadata": {
"id": "39243edb-3eba-46fd-a610-e474ed421b01"
},
"outputs": [],
"source": [
"def complete_user_prompt(user_input, data_format, num_records):\n",
" messages = [\n",
" {'role': 'system', 'content': system_prompt},\n",
" {'role': 'user', 'content': user_input + user_prompt + dataset_format(data_format, num_records)}\n",
" ]\n",
"\n",
" return messages"
]
},
{
"cell_type": "markdown",
"id": "1ac81127-b9cc-424b-8b38-8a8b09bcc226",
"metadata": {
"id": "1ac81127-b9cc-424b-8b38-8a8b09bcc226"
},
"source": [
"## Accessing the Models"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cc4aaab5-bde1-463b-b873-e8bd1a231dc1",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "cc4aaab5-bde1-463b-b873-e8bd1a231dc1",
"outputId": "16c9420d-2c4a-4e57-f281-7c531b5145db"
},
"outputs": [],
"source": [
"print(\"CUDA available:\", torch.cuda.is_available())\n",
"if torch.cuda.is_available():\n",
" print(\"GPU-Device:\", torch.cuda.get_device_name(torch.cuda.current_device()))\n",
"else:\n",
" print(\"No GPU found.\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6b8e648d-747f-4684-a20b-b8da550efc23",
"metadata": {
"id": "6b8e648d-747f-4684-a20b-b8da550efc23"
},
"outputs": [],
"source": [
"quant_config = BitsAndBytesConfig(\n",
" load_in_4bit = True,\n",
" bnb_4bit_use_double_quant = False,\n",
" bnb_4bit_compute_dtype= torch.bfloat16,\n",
" bnb_4bit_quant_type= 'nf4'\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b3ae602f-0abf-420d-8c7b-1938cba92528",
"metadata": {
"id": "b3ae602f-0abf-420d-8c7b-1938cba92528"
},
"outputs": [],
"source": [
"def generate_model(model_id, messages):\n",
" try:\n",
" tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code = True)\n",
" inputs = tokenizer.apply_chat_template(messages, return_tensors = 'pt').to('cuda')\n",
" streamer = TextStreamer(tokenizer)\n",
" model = AutoModelForCausalLM.from_pretrained(model_id, device_map = 'auto', quantization_config = quant_config)\n",
" outputs = model.generate(inputs, max_new_tokens = 2000, streamer = streamer)\n",
" generated_text = tokenizer.decode(outputs[0], skip_special_tokens = True)\n",
" del tokenizer, streamer, model, inputs, outputs\n",
" return generated_text\n",
"\n",
" except Exception as e:\n",
" return f'Error during generation: {str(e)}'"
]
},
{
"cell_type": "markdown",
"id": "7c575c9e-4674-4eee-a9b9-c8d14ceed474",
"metadata": {
"id": "7c575c9e-4674-4eee-a9b9-c8d14ceed474"
},
"source": [
"## Generate Dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9c5963e-9f4e-4990-b744-b9ead03e623a",
"metadata": {
"id": "d9c5963e-9f4e-4990-b744-b9ead03e623a"
},
"outputs": [],
"source": [
"def generate_dataset(user_input, target_format, model_choice, num_records):\n",
" if model_choice == 'DeepSeek':\n",
" model_id = deepseek_model\n",
" elif model_choice == 'Llama-3.1-8B':\n",
" model_id = llama_model\n",
" elif model_choice == 'Qwen2':\n",
" model_id = qwen2\n",
"\n",
" messages = complete_user_prompt(user_input, target_format, num_records)\n",
" return generate_model(model_id, messages)"
]
},
{
"cell_type": "markdown",
"id": "ff574cfe-567f-4c6d-b944-fb756bf7ebca",
"metadata": {
"id": "ff574cfe-567f-4c6d-b944-fb756bf7ebca"
},
"source": [
"## Creating Gradio UI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "61d2b056-0d00-4b73-b083-024a8f374fef",
"metadata": {
"id": "61d2b056-0d00-4b73-b083-024a8f374fef"
},
"outputs": [],
"source": [
"with gr.Blocks(title = 'Synthetic Data Generator') as ui:\n",
" gr.Markdown('# Synthetic Data Generator')\n",
"\n",
" with gr.Row():\n",
" with gr.Column(min_width=600):\n",
" user_inputs = gr.Textbox(label = 'Enter your Business details and data requirements',\n",
" placeholder = 'Type here...', lines = 15)\n",
"\n",
" model_choice = gr.Dropdown(\n",
" ['DeepSeek', 'Llama-3.1-8B', 'Qwen2'],\n",
" label = 'Choose your Model',\n",
" value = 'DeepSeek'\n",
" )\n",
"\n",
" target_format = gr.Dropdown(\n",
" ['CSV', 'JSON', 'Tabular'],\n",
" label = 'Choose your Format',\n",
" value = 'CSV'\n",
" )\n",
" num_records = gr.Dropdown(\n",
" [50, 100, 150, 200],\n",
" label = 'Number of Records',\n",
" value = 50\n",
" )\n",
"\n",
" generate_button = gr.Button('Generate')\n",
"\n",
" with gr.Column():\n",
" output = gr.Textbox(label = 'Generated Synthetic Data',\n",
" lines = 30)\n",
"\n",
" generate_button.click(fn = generate_dataset, inputs = [user_inputs, target_format, model_choice, num_records],\n",
" outputs = output\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "958d9cbf-50ff-4c50-a305-18df6d5f5eda",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 626
},
"id": "958d9cbf-50ff-4c50-a305-18df6d5f5eda",
"outputId": "a6736641-85c3-4b6a-a28d-02ac5caf4562",
"scrolled": true
},
"outputs": [],
"source": [
"ui.launch(inbrowser = True)"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
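The generator above returns whatever text the model emits, with no check that it matches the requested format or record count. A small validation sketch for the CSV case (an illustrative addition, not part of the notebook):

```python
import csv
import io

def validate_csv(text, expected_records):
    """Parse model output as CSV and check header consistency and record count."""
    rows = list(csv.reader(io.StringIO(text.strip())))
    if not rows:
        return False, "empty output"
    header, records = rows[0], rows[1:]
    if len(records) != expected_records:
        return False, f"expected {expected_records} records, got {len(records)}"
    # Every record should have the same number of fields as the header.
    if {len(r) for r in records} != {len(header)}:
        return False, "ragged rows"
    return True, "ok"
```

A check like this could run on `generate_model`'s return value before showing it in the Gradio output box, flagging truncated or malformed generations.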

433
week4/community-contributions/code_documentation_generator.ipynb

@@ -0,0 +1,433 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "05432987-80bc-4aa5-8c05-277861e19307",
"metadata": {},
"source": [
"## Adds docstrings/comments to code and generates code summary"
]
},
{
"cell_type": "markdown",
"id": "e706f175-1e83-4d2c-8613-056b2e532624",
"metadata": {},
"source": [
"### Model Usage \n",
"\n",
"- **Open Source Models:**\n",
"\n",
" - Deployed via Endpoint: Hosted on a server and accessed remotely (CodeQwen1.5-7B)\n",
" - Run Locally on Machine: Executed directly on a local device (Ollama running Llama 3.2-1B)\n",
"\n",
"- **Closed Source Models:** \n",
" - Accessed through API key authentication: (OpenAI, Anthropic). \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9ed667df-6660-4ba3-80c5-4c1c8f7e63f3",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import io\n",
"import sys \n",
"import json\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import google.generativeai\n",
"import anthropic\n",
"import ollama\n",
"from IPython.display import Markdown, display, update_display\n",
"import gradio as gr\n",
"from huggingface_hub import login, InferenceClient\n",
"from transformers import AutoTokenizer, pipeline"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c9dd4bf1-48cf-44dc-9d04-0ec6e8189a3c",
"metadata": {},
"outputs": [],
"source": [
"# environment\n",
"\n",
"load_dotenv()\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY')\n",
"os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY')\n",
"CODE_QWEN_URL = os.environ['CODE_QWEN_URL'] \n",
"BIGBIRD_PEGASUS_URL = os.environ['BIGBIRD_PEGASUS_URL']\n",
"HF_TOKEN = os.environ['HF_TOKEN']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "71f671d6-50a7-43cf-9e04-52a159d67dab",
"metadata": {},
"outputs": [],
"source": [
"!ollama pull llama3.2:1b"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e6f8f35-477d-4014-8fe9-874b5aee0061",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"claude = anthropic.Anthropic()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae34b79c-425a-4f04-821a-8f1d9868b146",
"metadata": {},
"outputs": [],
"source": [
"OPENAI_MODEL = \"gpt-4o-mini\"\n",
"CLAUDE_MODEL = \"claude-3-haiku-20240307\"\n",
"LLAMA_MODEL = \"llama3.2:1b\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "80e6d920-3c94-48c4-afd8-518f415ab777",
"metadata": {},
"outputs": [],
"source": [
"code_qwen = \"Qwen/CodeQwen1.5-7B-Chat\"\n",
"bigbird_pegasus = \"google/bigbird-pegasus-large-arxiv\"\n",
"login(HF_TOKEN, add_to_git_credential=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "314cd8e3-2c10-4149-9818-4e6b0c05b871",
"metadata": {},
"outputs": [],
"source": [
"# System message for adding docstrings and comments to code\n",
"system_message_comments = \"You are an assistant designed to add docstrings and helpful comments to code for documentation purposes.\"\n",
"system_message_comments += \"Respond back with properly formatted code, including docstrings and comments. Keep comments concise. \"\n",
"system_message_comments += \"Do not respond with greetings, or any such extra output\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "66fa09e4-1b79-4f53-9bb7-904d515b2f26",
"metadata": {},
"outputs": [],
"source": [
"system_message_summary = \"You are an assistant designed to summarise code for documentation purposes. You are not to display code again.\"\n",
"system_message_summary += \" Respond back with a properly crafted summary, mentioning key details regarding the code, such as workflow and code language.\"\n",
"system_message_summary += \" Do not respond with greetings, or any such extra output. Do not respond in Markdown. Be thorough, keep explanation level at undergraduate level.\"\n",
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ea405820-f9d1-4cf1-b465-9ae5cd9016f6",
"metadata": {},
"outputs": [],
"source": [
"def user_prompt_for(code):\n",
" user_prompt = \"Rewrite this code to include helpful comments and docstrings. \"\n",
" user_prompt += \"Respond only with code.\\n\"\n",
" user_prompt += code\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26c9be56-1d4f-43e5-9bc4-eb5b76da8071",
"metadata": {},
"outputs": [],
"source": [
"def user_prompt_for_summary(code):\n",
" user_prompt = \"Return the summary of the code.\\n\"\n",
" user_prompt += code\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0ac22cb-dc96-4ae1-b00d-2747572f6945",
"metadata": {},
"outputs": [],
"source": [
"def messages_for(code):\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_message_comments},\n",
" {\"role\":\"user\", \"content\" : user_prompt_for(code)}\n",
" ]\n",
" return messages"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eae1a8b4-68a8-4cd5-849e-0ecabd166a0c",
"metadata": {},
"outputs": [],
"source": [
"def messages_for_summary(code):\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_message_summary},\n",
" {\"role\":\"user\", \"content\" : user_prompt_for_summary(code)}\n",
" ]\n",
" return messages"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5eb726dd-e09e-4011-8eb6-4d20f2830ff5",
"metadata": {},
"outputs": [],
"source": [
"func = \"\"\"\n",
"import time\n",
"\n",
"def calculate(iterations, param1, param2):\n",
" result = 1.0\n",
" for i in range(1, iterations+1):\n",
" j = i * param1 - param2\n",
" result -= (1/j)\n",
" j = i * param1 + param2\n",
" result += (1/j)\n",
" return result\n",
"\n",
"start_time = time.time()\n",
"result = calculate(100_000_000, 4, 1) * 4\n",
"end_time = time.time()\n",
"\n",
"print(f\"Result: {result:.12f}\")\n",
"print(f\"Execution Time: {(end_time - start_time):.6f} seconds\")\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f61943b2-c939-4910-a670-58abaf464bb6",
"metadata": {},
"outputs": [],
"source": [
"def call_llama(code):\n",
" # commented code\n",
" messages = messages_for(code)\n",
" response1 = ollama.chat(model=LLAMA_MODEL, messages=messages)\n",
"\n",
" # summary\n",
" messages = messages_for_summary(code)\n",
" response2 = ollama.chat(model=LLAMA_MODEL, messages=messages)\n",
" \n",
" return response1['message']['content'],response2['message']['content']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "696fb97e-807e-40ed-b0e1-beb82d1108a6",
"metadata": {},
"outputs": [],
"source": [
"def call_claude(code):\n",
" # commented code\n",
" message1 = claude.messages.create(\n",
" model=CLAUDE_MODEL,\n",
" system=system_message_comments,\n",
" messages=([{\"role\": \"user\", \"content\":user_prompt_for(code)}]),\n",
" max_tokens=500\n",
" )\n",
"\n",
" # summary\n",
" message2 = claude.messages.create(\n",
" model=CLAUDE_MODEL,\n",
" system=system_message_summary,\n",
" messages=([{\"role\": \"user\", \"content\":user_prompt_for_summary(code)}]),\n",
" max_tokens=500\n",
" )\n",
" \n",
" return message1.content[0].text,message2.content[0].text"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4bf1db64-86fa-42a1-98dd-3df74607f8db",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt(code):\n",
" # commented code\n",
" completion1 = openai.chat.completions.create(\n",
" model=OPENAI_MODEL,\n",
" messages=messages_for(code),\n",
" )\n",
"\n",
" #summary\n",
" completion2 = openai.chat.completions.create(\n",
" model=OPENAI_MODEL,\n",
" messages=messages_for_summary(code),\n",
" )\n",
" \n",
" return completion1.choices[0].message.content,completion2.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6863dc42-cbcd-4a95-8b0a-cfbcbfed0764",
"metadata": {},
"outputs": [],
"source": [
"def call_codeqwen(code):\n",
" # commented code\n",
" tokenizer = AutoTokenizer.from_pretrained(code_qwen)\n",
" messages = messages_for(code)\n",
" text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\n",
" client = InferenceClient(CODE_QWEN_URL, token=HF_TOKEN)\n",
" response1 = client.text_generation(text, details=True, max_new_tokens=1000)\n",
"\n",
" # summary\n",
" tokenizer = AutoTokenizer.from_pretrained(code_qwen)\n",
" messages = messages_for_summary(code)\n",
" text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\n",
" client = InferenceClient(CODE_QWEN_URL, token=HF_TOKEN)\n",
" response2 = client.text_generation(text, details=True, max_new_tokens=1000)\n",
" \n",
" return response1.generated_text ,response2.generated_text "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "06d05c02-45e4-47da-b70b-cf433dfaca4c",
"metadata": {},
"outputs": [],
"source": [
"def create_docs(code,model):\n",
" if model == \"Llama\":\n",
" comments,summary = call_llama(code)\n",
" elif model == \"Claude\":\n",
" comments,summary = call_claude(code)\n",
" elif model == \"GPT\":\n",
" comments,summary = call_gpt(code)\n",
" elif model == \"CodeQwen\":\n",
" comments,summary = call_codeqwen(code)\n",
" else:\n",
" raise ValueError(\"Unknown Model\")\n",
" return comments,summary"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1b4ea289-5da9-4b0e-b4d4-f8f01e466839",
"metadata": {},
"outputs": [],
"source": [
"css = \"\"\"\n",
".comments {background-color: #00599C;}\n",
".summary {background-color: #008B8B;}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89ad7c7b-b881-45d3-aadc-d7206af578fb",
"metadata": {},
"outputs": [],
"source": [
"with gr.Blocks(css=css) as ui:\n",
" gr.Markdown(\"### Code Documentation and Formatting\")\n",
" with gr.Row():\n",
" code = gr.Textbox(label=\"Input Code: \", value=func, lines=10)\n",
" with gr.Row():\n",
" model = gr.Dropdown([\"GPT\",\"Claude\",\"Llama\",\"CodeQwen\"],label=\"Select model\",value=\"GPT\")\n",
" with gr.Row():\n",
" docs = gr.Button(\"Add Comments and Sumarise Code\")\n",
" with gr.Row():\n",
" commented_code = gr.Textbox(label= \"Formatted Code\", lines=10,elem_classes=[\"comments\"])\n",
" code_summary = gr.Textbox(label = \"Code Summary\", lines=10,elem_classes=[\"summary\"])\n",
" docs.click(create_docs,inputs=[code,model],outputs=[commented_code,code_summary]),"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1a9e3b1c-bfe6-4b71-aac8-fa36a491c157",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ac895aa9-e044-4598-b715-d96d1c158656",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "5a96877c-22b7-4ad5-b235-1cf8f8b200a1",
"metadata": {},
"outputs": [],
"source": [
"print(call_llama(func))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f11de1a2-52c0-41c7-ad88-01ef5f8bc628",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
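The `create_docs` if/elif dispatch in the notebook above can also be written as a table-driven lookup. A minimal sketch, where the lambdas are hypothetical placeholders standing in for the real `call_gpt`/`call_claude`/`call_llama`/`call_codeqwen` functions:

```python
# Table-driven dispatch: model name -> callable returning (comments, summary).
# The lambdas below are placeholders; in the notebook each entry would be
# one of the real call_* functions.
MODEL_BACKENDS = {
    "GPT": lambda code: ("gpt comments", "gpt summary"),
    "Claude": lambda code: ("claude comments", "claude summary"),
}

def create_docs(code, model):
    try:
        backend = MODEL_BACKENDS[model]
    except KeyError:
        raise ValueError(f"Unknown model: {model}") from None
    return backend(code)
```

Adding a new backend then means adding one dict entry rather than another elif branch.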

235
week5/community-contributions/day3_vector_embeddings_from_text_file.ipynb

@ -0,0 +1,235 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fad6ee3f-45b8-4ac3-aa39-4a44dac91994",
"metadata": {},
"source": [
"## Creating Text Embeddings From a Text File\n",
"- Loading data using TextLoader\n",
"- Splitting into chunks using CharacterTextSplitter\n",
"- Converting chunks into vector embeddings and creating a vectorstore\n",
"- Retreiving, reducing dimensions to 2D and displaying text embeddings"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "33b79f0d-7bd5-4e82-9295-2cc5cfa9495b",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "391d12b3-ea25-4c66-93ba-71ef7c590be3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import DirectoryLoader, TextLoader\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.schema import Document\n",
"from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n",
"from langchain.embeddings import HuggingFaceEmbeddings\n",
"from langchain_chroma import Chroma\n",
"import numpy as np\n",
"from sklearn.manifold import TSNE\n",
"import plotly.graph_objects as go"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "365d4346-bcf7-48b3-be13-b492f1877fab",
"metadata": {},
"outputs": [],
"source": [
"MODEL = \"gpt-4o-mini\"\n",
"db_name = \"my_vector_db\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "93887c1e-fb5e-4f9a-95f6-91a284e49695",
"metadata": {},
"outputs": [],
"source": [
"load_dotenv(override=True)\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "86289eb8-25d8-405f-b1bb-3d9d9fed8671",
"metadata": {},
"outputs": [],
"source": [
"loader = TextLoader(\"data.txt\", encoding=\"utf-8\")\n",
"data = loader.load()\n",
"\n",
"documents = []\n",
"for text in data:\n",
" documents.append(text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "32320fff-2321-40ea-9b7d-294dc2dfba3a",
"metadata": {},
"outputs": [],
"source": [
"text_splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=5)\n",
"chunks = text_splitter.split_documents(documents)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fce762a5-4c78-4102-ab55-f95ee0c97286",
"metadata": {},
"outputs": [],
"source": [
"len(chunks)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ddb5bc12-af30-476d-bbbb-f91a3ae8af2f",
"metadata": {},
"outputs": [],
"source": [
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75ba81ec-9178-4ce4-83e2-82f937c85902",
"metadata": {},
"outputs": [],
"source": [
"if os.path.exists(db_name):\n",
" Chroma(persist_directory=db_name, embedding_function=embeddings).delete_collection()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3ca2632-a8b3-4e7e-8370-d91579d31c23",
"metadata": {},
"outputs": [],
"source": [
"vectorstore = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=db_name)\n",
"print(f\"Vectorstore created with {vectorstore._collection.count()} documents\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0de67066-73f5-446f-9033-a00d45b0cdc1",
"metadata": {},
"outputs": [],
"source": [
"# Get one vector and find how many dimensions it has\n",
"\n",
"collection = vectorstore._collection\n",
"sample_embedding = collection.get(limit=1, include=[\"embeddings\"])[\"embeddings\"][0] # represents a single vector\n",
"dimensions = len(sample_embedding)\n",
"print(f\"The vectors have {dimensions:,} dimensions\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e50d972c-d740-4f0a-8bc2-e55ebe462a41",
"metadata": {},
"outputs": [],
"source": [
"sample_embedding"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aa96105d-b882-48d9-b088-6aab5db7b1e9",
"metadata": {},
"outputs": [],
"source": [
"result = collection.get(include=['embeddings','documents'])\n",
"vectors = np.array(result['embeddings']) \n",
"documents = result['documents']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "213b4cf2-db0a-4610-8d8f-97607996ed17",
"metadata": {},
"outputs": [],
"source": [
"# Reduce dimensionality to 2D using t-SNE\n",
"tsne = TSNE(n_components=2,perplexity=5, random_state=42)\n",
"reduced_vectors = tsne.fit_transform(vectors)\n",
"\n",
"# Create the 2D scatter plot\n",
"fig = go.Figure(data=[go.Scatter(\n",
" x=reduced_vectors[:, 0],\n",
" y=reduced_vectors[:, 1],\n",
" mode='markers',\n",
" marker=dict(size=5, opacity=0.8),\n",
" text=[f\"Text: {d[:200]}...\" for d in documents],\n",
" hoverinfo='text'\n",
")])\n",
"\n",
"fig.update_layout(\n",
" title='2D Chroma Vector Store Visualization',\n",
" scene=dict(xaxis_title='x',yaxis_title='y'),\n",
" width=800,\n",
" height=600,\n",
" margin=dict(r=20, b=10, l=10, t=40)\n",
")\n",
"\n",
"fig.show()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7d13aa60-da3e-4c61-af69-1ba9087e0181",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
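The notebook above delegates chunking to `CharacterTextSplitter(chunk_size=20, chunk_overlap=5)`. The core idea of overlapping fixed-size chunks can be sketched in plain Python; this is a simplified character-window version, not LangChain's actual implementation (which also splits on separators):

```python
def split_with_overlap(text, chunk_size=400, chunk_overlap=200):
    """Split text into chunks of at most chunk_size characters,
    where consecutive chunks share chunk_overlap characters."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the last window already reached the end of the text
    return chunks
```

The overlap means a sentence cut at a chunk boundary still appears whole in the neighbouring chunk, which helps retrieval later.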

283
week5/community-contributions/day5_vectorstore_openai.ipynb

@ -0,0 +1,283 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import documents exported from Evernote to a vectorstore\n",
"### Use OpenAI file search with responses API\n",
"#### Prerequisite steps\n",
"* exported notes from your Evernote notebook as html \n",
"* converted the notes further to md-files and remove broken image links (use python/AI)\n",
"* the files are named with note titles\n",
"\n",
"Files are in one folder.\n",
"\n",
"\n",
"##### Query ChromaDB vectorstore\n",
"I tried to accomplish this task with RAG like the example by https://github.com/ed-donner/llm_engineering/commits?author=dinorrusso.\n",
"\n",
"I thought this to be a trivial task, but it was not 😃 That example uses Ollama running locally.\n",
"Even though the retriever had the information required, it was dropped from the answer.\n",
"\n",
"I tried then to use Chroma + OpenAI. After several attemps succeeded to create a vectorstore and query it. That's it for this time.\n",
"\n",
"##### Openai vectorstore, see bottom of the notebook\n",
"One attempt was to use OpenAI's fileSearch-tool which seemed pretty straightforward.\n",
"The con: loading files was not working always. Code is left though as reference."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Imports\n",
"from dotenv import load_dotenv\n",
"import gradio as gr\n",
"import openai\n",
"import chromadb\n",
"from chromadb.config import Settings\n",
"import os"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Load files to vectorstore"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"load_dotenv(override=True)\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n",
"openai.api_key = os.environ['OPENAI_API_KEY']\n",
"\n",
"def chunk_text(text, max_tokens=2000):\n",
" words = text.split()\n",
" chunks = []\n",
" current_chunk = []\n",
" current_length = 0\n",
"\n",
" for word in words:\n",
" current_length += len(word) + 1 # +1 for the space\n",
" if current_length > max_tokens:\n",
" chunks.append(\" \".join(current_chunk))\n",
" current_chunk = [word]\n",
" current_length = len(word) + 1\n",
" else:\n",
" current_chunk.append(word)\n",
"\n",
" if current_chunk:\n",
" chunks.append(\" \".join(current_chunk))\n",
"\n",
" return chunks\n",
"\n",
"\n",
"# # Set up OpenAI API key\n",
"# openai.api_key = \"your_openai_api_key\" # Replace with your API key\n",
"chroma_client = chromadb.Client()\n",
"\n",
"# Create or get the existing collection\n",
"collection_name = \"EverNotes\"\n",
"\n",
"try:\n",
" existing_collection = chroma_client.get_collection(name=collection_name)\n",
" if existing_collection.count() > 0:\n",
" chroma_client.delete_collection(name=collection_name)\n",
"except:\n",
" print(f\"Collection {collection_name} does not exist. Creating a new one.\")\n",
"\n",
"# Create a collection in ChromaDB\n",
"collection = chroma_client.get_or_create_collection(name=collection_name)\n",
"\n",
"# Define your data\n",
"# it should be like this\n",
"# documents = [\"OpenAI is revolutionizing AI.\", \"ChromaDB makes embedding storage easy.\"]\n",
"# metadata = [{\"id\": 1}, {\"id\": 2}]\n",
"\n",
"folder_path = os.getenv('EVERNOTE_EXPORT')\n",
"documents = []\n",
"\n",
"for root, dirs, files in os.walk(folder_path):\n",
" for file in files:\n",
" if file.endswith('.md'): # Change this to the file extension you need\n",
" with open(os.path.join(root, file), 'r') as f:\n",
" documents.append(f.read())\n",
"\n",
"metadata = [{\"id\": i + 1} for i in range(len(documents))]\n",
"\n",
"# Generate embeddings using OpenAI\n",
"def get_embedding(text, model=\"text-embedding-ada-002\"):\n",
" response = openai.embeddings.create(input=text, model=model)\n",
" return response.data[0].embedding\n",
"\n",
"# Add documents and embeddings to ChromaDB in chunks\n",
"for doc, meta in zip(documents, metadata):\n",
" chunks = chunk_text(doc)\n",
" for chunk in chunks:\n",
" embedding = get_embedding(chunk)\n",
" collection.add(\n",
" documents=[chunk],\n",
" embeddings=[embedding],\n",
" metadatas=[meta],\n",
" ids=[str(meta[\"id\"])]\n",
" )\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Query ChromaDB"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# \n",
"query_text = \"Is there a video for Fitting the Shimano speed hub 7\"\n",
"query_embedding = get_embedding(query_text)\n",
"\n",
"results = collection.query(\n",
" query_embeddings=[query_embedding],\n",
" n_results=2\n",
")\n",
"\n",
"print(\"Query Results:\", results)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"##### Gradio interface"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Function to query ChromaDB\n",
"def query_chromadb(query_text):\n",
" query_embedding = get_embedding(query_text)\n",
" results = collection.query(\n",
" query_embeddings=[query_embedding],\n",
" n_results=2\n",
" )\n",
" return results\n",
"\n",
"# Gradio interface\n",
"def gradio_interface(query_text):\n",
" results = query_chromadb(query_text)\n",
" return results\n",
"\n",
"# Create Gradio app\n",
"iface = gr.Interface(\n",
" fn=gradio_interface,\n",
" inputs=\"text\",\n",
" outputs=\"text\",\n",
" title=\"ChromaDB Query Interface\",\n",
" description=\"Enter your query to search the ChromaDB collection.\"\n",
")\n",
"\n",
"iface.launch()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Below OpenAI filesearch variant which had some failures in file uploads."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import glob\n",
"folder_path = os.environ['EVERNOTE_EXPORT'] \n",
"# Filter out other except .md-files\n",
"md_files = glob.glob(os.path.join(folder_path, '*.md'))\n",
"file_paths = [os.path.join(folder_path, file) for file in md_files]\n",
"file_streams = [open(path, 'rb') for path in file_paths]\n",
"\n",
"# Create vector store\n",
"vector_store = openai.vector_stores.create(\n",
" name=\"Evernote notes\",\n",
")\n",
"\n",
"# Batch Upload Limit: You can upload up to 100 files in a single batch\n",
"# https://community.openai.com/t/max-100-files-in-vector-store/729876/4\n",
"batch_size = 90\n",
"for i in range(0, len(file_streams), batch_size):\n",
" batch = file_streams[i:i + batch_size]\n",
" file_batch = openai.vector_stores.file_batches.upload_and_poll(\n",
" vector_store_id=vector_store.id,\n",
" files=batch\n",
" )\n",
" print(file_batch.status)\n",
" print(file_batch.file_counts)\n",
"\n",
"# There can be some fails in file counts:\n",
"# \"FileCounts(cancelled=0, completed=89, failed=1, in_progress=0, total=90)\"\"\n",
"# Usually 1 % fails. Did not find solution for improving that yet"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\n",
"\n",
"response = openai.responses.create(\n",
" model=\"gpt-4o-mini\",\n",
" input=\"Is there a video for Fitting the Shimano speed hub 7?\",\n",
" tools=[{\n",
" \"type\": \"file_search\",\n",
" \"vector_store_ids\": [vector_store.id]\n",
" }],\n",
" include=None\n",
")\n",
"print(response)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
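`collection.query` ranks the stored vectors by similarity to the query embedding. The underlying idea can be sketched with cosine similarity in pure Python (illustrative only; Chroma uses an approximate nearest-neighbour index rather than a linear scan):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, vectors, k=2):
    """Return the indices of the k stored vectors most similar to the query."""
    ranked = sorted(range(len(vectors)),
                    key=lambda i: cosine(query_vec, vectors[i]),
                    reverse=True)
    return ranked[:k]
```

With real embeddings, `vectors` would hold the stored chunk embeddings and `query_vec` the embedding of the question.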

359
week5/community-contributions/markdown_knowledge_worker.ipynb

@ -0,0 +1,359 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c25c6e94-f3de-4367-b2bf-269ba7160977",
"metadata": {},
"source": [
"## An Expert Knowledge Worker Question-Answering Agent using RAG"
]
},
{
"cell_type": "markdown",
"id": "15169580-cf11-4dee-8ec7-3a4ef59b19ee",
"metadata": {},
"source": [
"Aims\n",
"- Reads README.md files and loads data using TextLoader\n",
"- Splits into chunks using CharacterTextSplitter\n",
"- Converts chunks into vector embeddings and creates a datastore\n",
"- 2D and 3D visualisations\n",
"- Langchain to set up a conversation retrieval chain"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "051cf881-357d-406b-8eae-1610651e40f1",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import glob\n",
"from dotenv import load_dotenv\n",
"import gradio as gr"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ccfd403a-5bdb-4a8c-b3fd-d47ae79e43f7",
"metadata": {},
"outputs": [],
"source": [
"# imports for langchain, plotly and Chroma\n",
"\n",
"from langchain.document_loaders import DirectoryLoader, TextLoader\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.schema import Document\n",
"from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n",
"from langchain.embeddings import HuggingFaceEmbeddings\n",
"from langchain_chroma import Chroma\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.chains import ConversationalRetrievalChain\n",
"import numpy as np\n",
"from sklearn.manifold import TSNE\n",
"import plotly.graph_objects as go\n",
"import plotly.express as px\n",
"import matplotlib.pyplot as plt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2d853868-d2f6-43e1-b27c-b8e91d06b724",
"metadata": {},
"outputs": [],
"source": [
"MODEL = \"gpt-4o-mini\"\n",
"db_name = \"vector_db\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f152fc3b-0bf4-4d51-948f-95da1ebc030a",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "24e621ac-df06-4af6-a60d-a9ed7adb884a",
"metadata": {},
"outputs": [],
"source": [
"# Read in documents using LangChain's loaders\n",
"\n",
"folder = \"my-knowledge-base/\"\n",
"text_loader_kwargs={'autodetect_encoding': True}\n",
"\n",
"loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n",
"folder_docs = loader.load()\n",
"\n",
"for doc in folder_docs:\n",
" filename_md = os.path.basename(doc.metadata[\"source\"]) \n",
" filename, _ = os.path.splitext(filename_md) \n",
" doc.metadata[\"filename\"] = filename\n",
"\n",
"documents = folder_docs \n",
"\n",
"text_splitter = CharacterTextSplitter(chunk_size=400, chunk_overlap=200)\n",
"chunks = text_splitter.split_documents(documents)\n",
"\n",
"print(f\"Total number of chunks: {len(chunks)}\")\n",
"print(f\"Files found: {set(doc.metadata['filename'] for doc in documents)}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f02f08ee-5ade-4f79-a500-045a8f1a532f",
"metadata": {},
"outputs": [],
"source": [
"# Put the chunks of data into a Vector Store that associates a Vector Embedding with each chunk\n",
"\n",
"embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\")\n",
"\n",
"# Delete if already exists\n",
"\n",
"if os.path.exists(db_name):\n",
" Chroma(persist_directory=db_name, embedding_function=embeddings).delete_collection()\n",
"\n",
"# Create vectorstore\n",
"\n",
"vectorstore = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=db_name)\n",
"print(f\"Vectorstore created with {vectorstore._collection.count()} documents\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7f665f4d-ccb1-43fb-b901-040117925732",
"metadata": {},
"outputs": [],
"source": [
"# Let's investigate the vectors\n",
"\n",
"collection = vectorstore._collection\n",
"count = collection.count()\n",
"\n",
"sample_embedding = collection.get(limit=1, include=[\"embeddings\"])[\"embeddings\"][0]\n",
"dimensions = len(sample_embedding)\n",
"print(f\"There are {count:,} vectors with {dimensions:,} dimensions in the vector store\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6208a971-e8b7-48bc-be7a-6dcb82967fd2",
"metadata": {},
"outputs": [],
"source": [
"# pre work\n",
"\n",
"result = collection.get(include=['embeddings','documents','metadatas'])\n",
"vectors = np.array(result['embeddings']) \n",
"documents = result['documents']\n",
"metadatas = result['metadatas']\n",
"filenames = [metadata['filename'] for metadata in metadatas]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eb27bc8a-453b-4b19-84b4-dc495bb0e544",
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"def random_color():\n",
" return f\"rgb({random.randint(0,255)},{random.randint(0,255)},{random.randint(0,255)})\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "78db67e5-ef10-4581-b8ac-3e0281ceba45",
"metadata": {},
"outputs": [],
"source": [
"def show_embeddings_2d(result):\n",
" vectors = np.array(result['embeddings']) \n",
" documents = result['documents']\n",
" metadatas = result['metadatas']\n",
" filenames = [metadata['filename'] for metadata in metadatas]\n",
" filenames_unique = sorted(set(filenames))\n",
"\n",
" # color assignment\n",
" color_map = {name: random_color() for name in filenames_unique}\n",
" colors = [color_map[name] for name in filenames]\n",
"\n",
" tsne = TSNE(n_components=2, random_state=42,perplexity=4)\n",
" reduced_vectors = tsne.fit_transform(vectors)\n",
"\n",
" # Create the 2D scatter plot\n",
" fig = go.Figure(data=[go.Scatter(\n",
" x=reduced_vectors[:, 0],\n",
" y=reduced_vectors[:, 1],\n",
" mode='markers',\n",
" marker=dict(size=5,color=colors, opacity=0.8),\n",
" text=[f\"Type: {t}<br>Text: {d[:100]}...\" for t, d in zip(filenames, documents)],\n",
" hoverinfo='text'\n",
" )])\n",
"\n",
" fig.update_layout(\n",
" title='2D Chroma Vector Store Visualization',\n",
" scene=dict(xaxis_title='x',yaxis_title='y'),\n",
" width=800,\n",
" height=600,\n",
" margin=dict(r=20, b=10, l=10, t=40)\n",
" )\n",
"\n",
" fig.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2c250166-cb5b-4a75-8981-fae2d6dfe509",
"metadata": {},
"outputs": [],
"source": [
"show_embeddings_2d(result)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b290e38-0800-4453-b664-7a7622ff5ed2",
"metadata": {},
"outputs": [],
"source": [
"def show_embeddings_3d(result):\n",
" vectors = np.array(result['embeddings']) \n",
" documents = result['documents']\n",
" metadatas = result['metadatas']\n",
" filenames = [metadata['filename'] for metadata in metadatas]\n",
" filenames_unique = sorted(set(filenames))\n",
"\n",
" # color assignment\n",
" color_map = {name: random_color() for name in filenames_unique}\n",
" colors = [color_map[name] for name in filenames]\n",
"\n",
" tsne = TSNE(n_components=3, random_state=42)\n",
" reduced_vectors = tsne.fit_transform(vectors)\n",
"\n",
" fig = go.Figure(data=[go.Scatter3d(\n",
" x=reduced_vectors[:, 0],\n",
" y=reduced_vectors[:, 1],\n",
" z=reduced_vectors[:, 2],\n",
" mode='markers',\n",
" marker=dict(size=5, color=colors, opacity=0.8),\n",
" text=[f\"Type: {t}<br>Text: {d[:100]}...\" for t, d in zip(filenames, documents)],\n",
" hoverinfo='text'\n",
" )])\n",
"\n",
" fig.update_layout(\n",
" title='3D Chroma Vector Store Visualization',\n",
" scene=dict(xaxis_title='x', yaxis_title='y', zaxis_title='z'),\n",
" width=900,\n",
" height=700,\n",
" margin=dict(r=20, b=10, l=10, t=40)\n",
" )\n",
"\n",
" fig.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45d1d034-2503-4176-b1e4-f248e31c4770",
"metadata": {},
"outputs": [],
"source": [
"show_embeddings_3d(result)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e79946a1-f93a-4b3a-8d19-deef40dec223",
"metadata": {},
"outputs": [],
"source": [
"# create a new Chat with OpenAI\n",
"llm = ChatOpenAI(temperature=0.7, model_name=MODEL)\n",
"\n",
"# set up the conversation memory for the chat\n",
"memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n",
"\n",
"# the retriever is an abstraction over the VectorStore that will be used during RAG\n",
"retriever = vectorstore.as_retriever(search_kwargs={\"k\": 50})\n",
"\n",
"# putting it together: set up the conversation chain with the GPT 3.5 LLM, the vector store and memory\n",
"conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "59f90c85-c113-4482-8574-8a728ef25459",
"metadata": {},
"outputs": [],
"source": [
"def chat(question, history):\n",
" result = conversation_chain.invoke({\"question\": question})\n",
" return result[\"answer\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0520a8ff-01a4-4fa6-9dc8-57da87272edc",
"metadata": {},
"outputs": [],
"source": [
"view = gr.ChatInterface(chat, type=\"messages\").launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4949b17-cd9c-4bff-bd5b-0f80df72e7dc",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
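The 2D/3D plots above use t-SNE. A linear PCA projection is a deterministic and much faster alternative worth knowing about; here is a minimal NumPy sketch (illustrative, not what the notebook uses):

```python
import numpy as np

def pca_2d(vectors):
    """Project high-dimensional vectors to 2D along their top-2
    principal components (a linear alternative to t-SNE)."""
    X = np.asarray(vectors, dtype=float)
    X = X - X.mean(axis=0)                 # centre the data
    # Rows of vt are the principal directions, ordered by variance explained
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T                    # coordinates along the top-2 components
```

Unlike t-SNE, PCA preserves global structure and needs no `perplexity` tuning, at the cost of missing non-linear cluster separation.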

353
week5/community-contributions/ui_markdown_knowledge_worker.ipynb

@ -0,0 +1,353 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d13be0fd-db15-4ab1-860a-b00257051339",
"metadata": {},
"source": [
"## Gradio UI for Markdown-Based Q&A with Visualization"
]
},
{
"cell_type": "markdown",
"id": "bc63fbdb-66a9-4c10-8dbd-11476b5e2d21",
"metadata": {},
"source": [
"This interface enables users to:\n",
"- Upload Markdown files for processing\n",
"- Visualize similarity between document chunks in 2D and 3D using embeddings\n",
"- Ask questions and receive RAG enabled responses\n",
"- Mantain conversation context for better question answering\n",
"- Clear chat history when required for fresh sessions\n",
"- Store and retrieve embeddings using ChromaDB\n",
"\n",
"Integrates LangChain, ChromaDB, and OpenAI to process, store, and retrieve information efficiently."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "91da28d8-8e29-44b7-a62a-a3a109753727",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"import gradio as gr"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e47f670a-e2cb-4700-95d0-e59e440677a1",
"metadata": {},
"outputs": [],
"source": [
"# imports for langchain, plotly and Chroma\n",
"\n",
"from langchain.document_loaders import DirectoryLoader, TextLoader\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.schema import Document\n",
"from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n",
"from langchain.embeddings import HuggingFaceEmbeddings\n",
"from langchain_chroma import Chroma\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.chains import ConversationalRetrievalChain\n",
"import numpy as np\n",
"from sklearn.manifold import TSNE\n",
"import plotly.graph_objects as go\n",
"import plotly.express as px\n",
"import matplotlib.pyplot as plt\n",
"from random import randint\n",
"import shutil"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "362d4976-2553-4ed8-8fbb-49806145cad1",
"metadata": {},
"outputs": [],
"source": [
"!pip install --upgrade gradio"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "968b6e96-557e-439f-b2f1-942c05168641",
"metadata": {},
"outputs": [],
"source": [
"MODEL = \"gpt-4o-mini\"\n",
"db_name = \"vector_db\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "537f66de-6abf-4b34-8e05-6b9a9df8ae82",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "246c1c1b-fcfa-4f4c-b99c-024598751361",
"metadata": {},
"outputs": [],
"source": [
"folder = \"my-knowledge-base/\"\n",
"db_name = \"vectorstore_db\"\n",
"\n",
"def process_files(files):\n",
" os.makedirs(folder, exist_ok=True)\n",
"\n",
" processed_files = []\n",
" for file in files:\n",
" file_path = os.path.join(folder, os.path.basename(file)) # Get filename\n",
" shutil.copy(file, file_path)\n",
" processed_files.append(os.path.basename(file))\n",
"\n",
" # Load documents using LangChain's DirectoryLoader\n",
" text_loader_kwargs = {'autodetect_encoding': True}\n",
" loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n",
" folder_docs = loader.load()\n",
"\n",
" # Assign filenames as metadata\n",
" for doc in folder_docs:\n",
" filename_md = os.path.basename(doc.metadata[\"source\"])\n",
" filename, _ = os.path.splitext(filename_md)\n",
" doc.metadata[\"filename\"] = filename\n",
"\n",
" documents = folder_docs \n",
"\n",
" # Split documents into chunks\n",
" text_splitter = CharacterTextSplitter(chunk_size=400, chunk_overlap=200)\n",
" chunks = text_splitter.split_documents(documents)\n",
"\n",
" # Initialize embeddings\n",
" embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\")\n",
"\n",
" # Delete previous vectorstore\n",
" if os.path.exists(db_name):\n",
" Chroma(persist_directory=db_name, embedding_function=embeddings).delete_collection()\n",
"\n",
" # Store in ChromaDB\n",
" vectorstore = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=db_name)\n",
"\n",
" # Retrieve results\n",
" collection = vectorstore._collection\n",
" result = collection.get(include=['embeddings', 'documents', 'metadatas'])\n",
"\n",
" llm = ChatOpenAI(temperature=0.7, model_name=MODEL)\n",
" memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n",
" retriever = vectorstore.as_retriever(search_kwargs={\"k\": 35})\n",
" global conversation_chain\n",
" conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)\n",
"\n",
" processed_text = \"**Processed Files:**\\n\\n\" + \"\\n\".join(f\"- {file}\" for file in processed_files)\n",
" return result, processed_text"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "48678d3a-0ab2-4aa4-aa9e-4160c6a9cb24",
"metadata": {},
"outputs": [],
"source": [
"def random_color():\n",
" return f\"rgb({randint(0,255)},{randint(0,255)},{randint(0,255)})\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6caed889-9bb4-42ad-b1c2-da051aefc802",
"metadata": {},
"outputs": [],
"source": [
"def show_embeddings_2d(result):\n",
" vectors = np.array(result['embeddings']) \n",
" documents = result['documents']\n",
" metadatas = result['metadatas']\n",
" filenames = [metadata['filename'] for metadata in metadatas]\n",
" filenames_unique = sorted(set(filenames))\n",
"\n",
" # color assignment\n",
" color_map = {name: random_color() for name in filenames_unique}\n",
" colors = [color_map[name] for name in filenames]\n",
"\n",
" tsne = TSNE(n_components=2, random_state=42,perplexity=4)\n",
" reduced_vectors = tsne.fit_transform(vectors)\n",
"\n",
" # Create the 2D scatter plot\n",
" fig = go.Figure(data=[go.Scatter(\n",
" x=reduced_vectors[:, 0],\n",
" y=reduced_vectors[:, 1],\n",
" mode='markers',\n",
" marker=dict(size=5,color=colors, opacity=0.8),\n",
" text=[f\"Type: {t}<br>Text: {d[:100]}...\" for t, d in zip(filenames, documents)],\n",
" hoverinfo='text'\n",
" )])\n",
"\n",
" fig.update_layout(\n",
" title='2D Chroma Vector Store Visualization',\n",
" scene=dict(xaxis_title='x',yaxis_title='y'),\n",
" width=800,\n",
" height=600,\n",
" margin=dict(r=20, b=10, l=10, t=40)\n",
" )\n",
"\n",
" return fig"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "de993495-c8cd-4313-a6bb-7d27494ecc13",
"metadata": {},
"outputs": [],
"source": [
"def show_embeddings_3d(result):\n",
" vectors = np.array(result['embeddings']) \n",
" documents = result['documents']\n",
" metadatas = result['metadatas']\n",
" filenames = [metadata['filename'] for metadata in metadatas]\n",
" filenames_unique = sorted(set(filenames))\n",
"\n",
" # color assignment\n",
" color_map = {name: random_color() for name in filenames_unique}\n",
" colors = [color_map[name] for name in filenames]\n",
"\n",
" tsne = TSNE(n_components=3, random_state=42)\n",
" reduced_vectors = tsne.fit_transform(vectors)\n",
"\n",
" fig = go.Figure(data=[go.Scatter3d(\n",
" x=reduced_vectors[:, 0],\n",
" y=reduced_vectors[:, 1],\n",
" z=reduced_vectors[:, 2],\n",
" mode='markers',\n",
" marker=dict(size=5, color=colors, opacity=0.8),\n",
" text=[f\"Type: {t}<br>Text: {d[:100]}...\" for t, d in zip(filenames, documents)],\n",
" hoverinfo='text'\n",
" )])\n",
"\n",
" fig.update_layout(\n",
" title='3D Chroma Vector Store Visualization',\n",
" scene=dict(xaxis_title='x', yaxis_title='y', zaxis_title='z'),\n",
" width=900,\n",
" height=700,\n",
" margin=dict(r=20, b=10, l=10, t=40)\n",
" )\n",
"\n",
" return fig"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b7bf62b-c559-4e97-8135-48cd8d97a40e",
"metadata": {},
"outputs": [],
"source": [
"def chat(question, history):\n",
" result = conversation_chain.invoke({\"question\": question})\n",
" return result[\"answer\"]\n",
"\n",
"def visualise_data(result):\n",
" fig_2d = show_embeddings_2d(result)\n",
" fig_3d = show_embeddings_3d(result)\n",
" return fig_2d,fig_3d"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "99217109-fbee-4269-81c7-001e6f768a72",
"metadata": {},
"outputs": [],
"source": [
"css = \"\"\"\n",
".btn {background-color: #1d53d1;}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e1429ea1-1d9f-4be6-b270-01997864c642",
"metadata": {},
"outputs": [],
"source": [
"with gr.Blocks(css=css) as ui:\n",
" gr.Markdown(\"# Markdown-Based Q&A with Visualization\")\n",
" with gr.Row():\n",
" file_input = gr.Files(file_types=[\".md\"], label=\"Upload Markdown Files\")\n",
" with gr.Column(scale=1):\n",
" processed_output = gr.Markdown(\"Progress\")\n",
" with gr.Row():\n",
" process_btn = gr.Button(\"Process Files\",elem_classes=[\"btn\"])\n",
" with gr.Row():\n",
" question = gr.Textbox(label=\"Chat \", lines=10)\n",
" answer = gr.Markdown(label= \"Response\")\n",
" with gr.Row():\n",
" question_btn = gr.Button(\"Ask a Question\",elem_classes=[\"btn\"])\n",
" clear_btn = gr.Button(\"Clear Output\",elem_classes=[\"btn\"])\n",
" with gr.Row():\n",
" plot_2d = gr.Plot(label=\"2D Visualization\")\n",
" plot_3d = gr.Plot(label=\"3D Visualization\")\n",
" with gr.Row():\n",
" visualise_btn = gr.Button(\"Visualise Data\",elem_classes=[\"btn\"])\n",
"\n",
" result = gr.State([])\n",
" # Action: When button is clicked, process files and update visualization\n",
" clear_btn.click(fn=lambda:(\"\", \"\"), inputs=[],outputs=[question, answer])\n",
" process_btn.click(process_files, inputs=[file_input], outputs=[result,processed_output])\n",
" question_btn.click(chat, inputs=[question], outputs= [answer])\n",
" visualise_btn.click(visualise_data, inputs=[result], outputs=[plot_2d,plot_3d])\n",
"\n",
"# Launch Gradio app\n",
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d3686048-ac29-4df1-b816-e58996913ef1",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}