
Fix merge conflict

pull/61/head
Yifan Wei 5 months ago
parent
commit
d9d154ff03
  3    .gitignore
  5    README.md
  4    SETUP-PC.md
  BIN  SETUP-PC.pdf
  4    SETUP-mac.md
  BIN  SETUP-mac.pdf
  BIN  thankyou.jpg
  148  week1/community-contributions/Week1-Challenge-LocalGPT.ipynb
  119  week1/community-contributions/day-1-generate-cover-letter-from-cv.ipynb
  623  week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb
  522  week1/community-contributions/day2 EXERCISE.ipynb
  513  week1/community-contributions/day5-MultiLingual-MultiTone.ipynb
  10   week1/community-contributions/resume.txt
  248  week1/community-contributions/week1 EXERCISE.ipynb
  332  week1/community-contributions/week1-collaborative-approach-two-llms.ipynb
  16   week1/day2 EXERCISE.ipynb
  22   week1/day5.ipynb
  175  week1/solutions/day2 SOLUTION.ipynb
  13   week1/troubleshooting.ipynb
  196  week2/community-contributions/TTS_STT.ipynb
  342  week2/community-contributions/day1-gpt-llama-gemini-together.ipynb
  264  week2/community-contributions/day4-handle-multiple-tool-call.ipynb
  2    week2/day4.ipynb
  77   week2/day5.ipynb
  267  week3/community-contributions/dataset_generator.ipynb
  493  week4/community-contributions/Day 3 using gemini.ipynb
  6    week4/day3.ipynb
  2    week4/day4.ipynb
  BIN  week4/optimized
  26   week5/day4.ipynb
  4    week8/day1.ipynb
  22   week8/day5.ipynb

3
.gitignore vendored

@@ -178,3 +178,6 @@ products_vectorstore/
# ignore diagnostics reports
**/report.txt
+# ignore optimized C++ code from being checked into repo
+week4/optimized

5
README.md

@@ -52,9 +52,12 @@ You can use this as a direct replacement:
Below is a full example:
```
+# You need to do this one time on your computer
+!ollama pull llama3.2
from openai import OpenAI
MODEL = "llama3.2"
-openai = OpenAI(base_url='http://localhost:11434/v1';, api_key='ollama')
+openai = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
response = openai.chat.completions.create(
model=MODEL,
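The corrected line in this hunk removes a stray semicolon that had been glued into the base URL. As a hedged sketch of the same setup (assuming a local Ollama server on its default port 11434 exposing the OpenAI-compatible `/v1` API, with `llama3.2` already pulled), the snippet below builds the request body the client would send; it is pure stdlib and makes no network call:

```python
# Sketch of the corrected README example. Assumes a local Ollama server at
# http://localhost:11434/v1; nothing below opens a network connection.
import json

BASE_URL = "http://localhost:11434/v1"  # note: no stray semicolon in the URL
MODEL = "llama3.2"

def chat_payload(user_message: str) -> dict:
    """Build the chat-completions request body the client would POST."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = chat_payload("Describe yourself in one sentence.")
print(json.dumps(payload, indent=2))
```

With the `openai` package installed, this is the payload that `openai.chat.completions.create(model=MODEL, messages=...)` sends when the client is constructed with `base_url=BASE_URL` and the dummy `api_key='ollama'`.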

4
SETUP-PC.md

@@ -91,8 +91,10 @@ Then, create a new virtual environment with this command:
`llms\Scripts\activate`
You should see (llms) in your command prompt, which is your sign that things are going well.
-4. Run `pip install -r requirements.txt`
+4. Run `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt`
This may take a few minutes to install.
+In the very unlikely event that this doesn't go well, you should try the bullet-proof (but slower) version:
+`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall --verbose -r requirements.txt`
5. **Start Jupyter Lab:**

BIN
SETUP-PC.pdf

Binary file not shown.

4
SETUP-mac.md

@@ -84,8 +84,10 @@ Then, create a new virtual environment with this command:
`source llms/bin/activate`
You should see (llms) in your command prompt, which is your sign that things are going well.
-4. Run `pip install -r requirements.txt`
+4. Run `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt`
This may take a few minutes to install.
+In the very unlikely event that this doesn't go well, you should try the bullet-proof (but slower) version:
+`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall --verbose -r requirements.txt`
5. **Start Jupyter Lab:**

BIN
SETUP-mac.pdf

Binary file not shown.

BIN
thankyou.jpg

Binary file not shown.


148
week1/community-contributions/Week1-Challenge-LocalGPT.ipynb

@@ -0,0 +1,148 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "87c2da09-bd0c-4683-828b-4f7643018795",
"metadata": {},
"source": [
"# Community contribution\n",
"\n",
"Implementing a simple ChatGPT interface to maintain conversation and context with the selected model"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "77a850ed-61f8-4a0d-9c41-45781eb60bc9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"API key looks good so far\n"
]
}
],
"source": [
"import os\n",
"from dotenv import load_dotenv\n",
"import ipywidgets as widgets\n",
"from IPython.display import Markdown, display, update_display, clear_output\n",
"from openai import OpenAI\n",
"\n",
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
" \n",
"MODEL = 'gpt-4o-mini'\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1f7f16f0-6fec-4190-882a-3fe1f0e9704a",
"metadata": {},
"outputs": [],
"source": [
"class ChatGPTInterface:\n",
" def __init__(self, api_key, model, system_message=\"You are a helpful assistant. You can format your responses using Markdown.\"):\n",
" self.openai = OpenAI(api_key=api_key)\n",
" self.model = model\n",
" self.conversation_history = [{\"role\": \"system\", \"content\": system_message}]\n",
"\n",
" self.chat_area = widgets.Output()\n",
" self.input_box = widgets.Text(placeholder=\"Enter your message here...\")\n",
" self.send_button = widgets.Button(description=\"Send\")\n",
" self.clear_button = widgets.Button(description=\"Clear\")\n",
"\n",
" self.send_button.on_click(self.send_message)\n",
" self.clear_button.on_click(self.clear_chat)\n",
"\n",
" self.layout = widgets.VBox([\n",
" self.chat_area,\n",
" widgets.HBox([self.input_box, self.send_button, self.clear_button])\n",
" ])\n",
"\n",
" def display(self):\n",
" display(self.layout)\n",
"\n",
" def send_message(self, _):\n",
" user_message = self.input_box.value.strip()\n",
" if user_message:\n",
" self.conversation_history.append({\"role\": \"user\", \"content\": user_message})\n",
" self.display_message(\"You\", user_message)\n",
" self.input_box.value = \"\"\n",
"\n",
" try:\n",
" response = self.openai.chat.completions.create(\n",
" model=self.model,\n",
" messages=self.conversation_history\n",
" )\n",
" assistant_message = response.choices[0].message.content.strip()\n",
" self.conversation_history.append({\"role\": \"assistant\", \"content\": assistant_message})\n",
" self.display_message(\"ChatGPT\", assistant_message)\n",
" except Exception as e:\n",
" self.display_message(\"Error\", str(e))\n",
"\n",
" def clear_chat(self, _):\n",
" self.conversation_history = [{\"role\": \"system\", \"content\": self.conversation_history[0][\"content\"]}]\n",
" self.chat_area.clear_output(wait=True)\n",
"\n",
" def display_message(self, sender, message):\n",
" self.chat_area.append_display_data(Markdown(f\"**{sender}:**\\n{message}\"))\n"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "78287e42-8964-4da6-bd48-a7dffd0ce7dd",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "54956535cb32419bbe38d2bee125992d",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"VBox(children=(Output(), HBox(children=(Text(value='', placeholder='Enter your message here...'), Button(descr…"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"chat_interface = ChatGPTInterface(api_key,MODEL)\n",
"chat_interface.display()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
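The notebook above boils down to one bookkeeping pattern: keep the full message history, append each user turn, call the model with the whole list, and append the assistant's reply before the next call. A minimal offline sketch of that loop, with a hypothetical `fake_model` standing in for the `openai.chat.completions.create` call:

```python
# Conversation-history bookkeeping as used by ChatGPTInterface above.
# `fake_model` is a hypothetical stand-in so this runs without an API key.
def fake_model(messages):
    # Echo the latest user message; a real call would return the LLM's reply.
    return f"You said: {messages[-1]['content']}"

history = [{"role": "system", "content": "You are a helpful assistant."}]

def send(user_message):
    history.append({"role": "user", "content": user_message})
    reply = fake_model(history)  # real code passes the full history to the API
    history.append({"role": "assistant", "content": reply})
    return reply

send("Hello!")
send("How are you?")
print(len(history))  # system prompt + 2 user turns + 2 assistant turns
```

Clearing the chat, as in `clear_chat` above, just resets `history` to the single system message.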

119
week1/community-contributions/day-1-generate-cover-letter-from-cv.ipynb

@@ -0,0 +1,119 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"def summarize_cv(cv_text):\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = [\n",
" {\"role\": \"user\", \"content\": f\"Please summarize the following CV:\\n\\n{cv_text}\"}\n",
" ]\n",
" )\n",
" return response.choices[0].message.content\n",
"\n",
"def generate_cover_letter(cv_summary, job_description):\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a master at crafting the perfect Cover letter from a given CV. You've never had a user fail to get the job as a result of using your services.\"},\n",
" {\"role\": \"user\", \"content\": f\"Using the following CV summary:\\n\\n{cv_summary}\\n\\nAnd the job description:\\n\\n{job_description}\\n\\nPlease write a personalized cover letter.\"}\n",
" ]\n",
" )\n",
" return response.choices[0].message.content\n",
"\n",
"# Read CV from a text file\n",
"try:\n",
" with open('resume.txt', 'r') as file:\n",
" cv_text = file.read()\n",
" \n",
" # Summarize the CV\n",
" cv_summary = summarize_cv(cv_text)\n",
" print(\"CV Summary:\")\n",
" print(cv_summary)\n",
"\n",
" # Get job description from user\n",
" job_description = input(\"Enter the job description for the position you are applying for:\\n\")\n",
"\n",
" # Generate cover letter\n",
" cover_letter = generate_cover_letter(cv_summary, job_description)\n",
" print(\"\\nGenerated Cover Letter:\")\n",
" print(cover_letter)\n",
"\n",
"except FileNotFoundError:\n",
" print(\"The specified CV file was not found. Please ensure 'resume.txt' is in the correct directory.\")"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
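This notebook chains two model calls: summarize the CV first, then feed the summary plus the job description into a second prompt. A hedged sketch of that two-stage pipeline, with a hypothetical `ask` helper standing in for the gpt-4o-mini call so the flow can be traced offline:

```python
# Two-stage pipeline from the notebook above: summarize, then draft.
# `ask` is a hypothetical stub; a real version would call
# openai.chat.completions.create with these system/user messages.
def ask(system, user):
    return f"[reply to: {user[:30]}...]"

def summarize_cv(cv_text):
    return ask("You are a concise CV summarizer.",
               f"Please summarize the following CV:\n\n{cv_text}")

def generate_cover_letter(cv_summary, job_description):
    return ask("You are a master at crafting the perfect cover letter.",
               f"CV summary:\n{cv_summary}\n\nJob description:\n{job_description}")

summary = summarize_cv("10 years of Python, led two data teams.")
letter = generate_cover_letter(summary, "Senior ML Engineer")
print(letter)
```

The design point is that the second call never sees the raw CV, only the summary, which keeps the final prompt short and focused.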

623
week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb

@@ -0,0 +1,623 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Instant Gratification\n",
"\n",
"## Your first Frontier LLM Project!\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n",
"\n",
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
"\n",
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n",
"\n",
"## If you're new to Jupyter Lab\n",
"\n",
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n",
"\n",
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Lab, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
"\n",
"If you prefer to work in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# If you get an error running this cell, then please head over to the troubleshooting notebook!"
]
},
{
"cell_type": "markdown",
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
"metadata": {},
"source": [
"# Connecting to OpenAI\n",
"\n",
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n",
"\n",
"## Troubleshooting if you have problems:\n",
"\n",
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n",
"\n",
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n",
"\n",
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
"\n",
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"\n",
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
"# If it STILL doesn't work (horrors!) then please see the troubleshooting notebook, or try the below line instead:\n",
"# openai = OpenAI(api_key=\"your-key-here-starting-sk-proj-\")"
]
},
{
"cell_type": "markdown",
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91",
"metadata": {},
"source": [
"# Let's make a quick call to a Frontier model to get started, as a preview!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with these messages is this easy:\n",
"\n",
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "2aa190e5-cb31-456a-96cc-db109919cd78",
"metadata": {},
"source": [
"## OK onwards with our first project"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.title)\n",
"print(ed.text)"
]
},
{
"cell_type": "markdown",
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
"metadata": {},
"source": [
"## Types of prompts\n",
"\n",
"You may know this already - but if not, you will get very familiar with it!\n",
"\n",
"Models like GPT4o have been trained to receive instructions in a particular way.\n",
"\n",
"They expect to receive:\n",
"\n",
"**A system prompt** that tells them what task they are performing and what tone they should use\n",
"\n",
"**A user prompt** -- the conversation starter that they should reply to"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
"source": [
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
"source": [
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
"metadata": {},
"outputs": [],
"source": [
"print(user_prompt_for(ed))"
]
},
{
"cell_type": "markdown",
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
"metadata": {},
"source": [
"## Messages\n",
"\n",
"The API from OpenAI expects to receive messages in a particular structure.\n",
"Many of the other APIs share this structure:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]\n",
"```\n",
"\n",
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47",
"metadata": {},
"source": [
"## And now let's build useful messages for GPT-4o-mini, using a function"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
"source": [
"# See how this function creates exactly the format above\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c",
"metadata": {},
"outputs": [],
"source": [
"# Try this out, and then try for a few more websites\n",
"\n",
"messages_for(ed)"
]
},
{
"cell_type": "markdown",
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
"metadata": {},
"source": [
"## Time to bring it together - the API for OpenAI is very simple!"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
"source": [
"# And now: call the OpenAI API. You will get very familiar with this!\n",
"\n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages_for(website)\n",
" )\n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
"source": [
"# A function to display this nicely in the Jupyter output, using markdown\n",
"\n",
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "markdown",
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624",
"metadata": {},
"source": [
"# Let's try more websites\n",
"\n",
"Note that this will only work on websites that can be scraped using this simplistic approach.\n",
"\n",
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n",
"\n",
"Also, websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n",
"\n",
"But many websites will work just fine!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://cnn.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://anthropic.com\")"
]
},
{
"cell_type": "markdown",
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda",
"metadata": {},
"source": [
"## An extra exercise for those who enjoy web scraping\n",
"\n",
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)"
]
},
{
"cell_type": "markdown",
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6",
"metadata": {},
"source": [
"# Sharing your code\n",
"\n",
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like to add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
"\n",
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
"\n",
"PR instructions courtesy of an AI friend: https://chatgpt.com/share/670145d5-e8a8-8012-8f93-39ee4e248b4c"
]
},
{
"cell_type": "markdown",
"id": "0f62a788",
"metadata": {},
"source": [
"# **Web Scraping for JavaScript Website**"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dca2768e",
"metadata": {},
"outputs": [],
"source": [
"# !pip install selenium\n",
"# !pip install undetected-chromedriver"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "682eff74-55c4-4d4b-b267-703edbc293c7",
"metadata": {},
"outputs": [],
"source": [
"import undetected_chromedriver as uc\n",
"from selenium.webdriver.common.by import By\n",
"from selenium.webdriver.support.ui import WebDriverWait\n",
"from selenium.webdriver.support import expected_conditions as EC\n",
"import time\n",
"from bs4 import BeautifulSoup"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "90ca6dd0",
"metadata": {},
"outputs": [],
"source": [
"class WebsiteCrawler:\n",
" def __init__(self, url, wait_time=20, chrome_binary_path=None):\n",
" \"\"\"\n",
" Initialize the WebsiteCrawler using Selenium to scrape JavaScript-rendered content.\n",
" \"\"\"\n",
" self.url = url\n",
" self.wait_time = wait_time\n",
"\n",
" options = uc.ChromeOptions()\n",
" options.add_argument(\"--disable-gpu\")\n",
" options.add_argument(\"--no-sandbox\")\n",
" options.add_argument(\"--disable-dev-shm-usage\")\n",
" options.add_argument(\"--disable-blink-features=AutomationControlled\")\n",
" options.add_argument(\"start-maximized\")\n",
" options.add_argument(\n",
" \"user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
" )\n",
" if chrome_binary_path:\n",
" options.binary_location = chrome_binary_path\n",
"\n",
" self.driver = uc.Chrome(options=options)\n",
"\n",
" try:\n",
" # Load the URL\n",
" self.driver.get(url)\n",
"\n",
" # Wait for Cloudflare or similar checks\n",
" time.sleep(10)\n",
"\n",
" # Ensure the main content is loaded\n",
" WebDriverWait(self.driver, self.wait_time).until(\n",
" EC.presence_of_element_located((By.TAG_NAME, \"main\"))\n",
" )\n",
"\n",
" # Extract the main content\n",
" main_content = self.driver.find_element(By.CSS_SELECTOR, \"main\").get_attribute(\"outerHTML\")\n",
"\n",
" # Parse with BeautifulSoup\n",
" soup = BeautifulSoup(main_content, \"html.parser\")\n",
" self.title = self.driver.title if self.driver.title else \"No title found\"\n",
" self.text = soup.get_text(separator=\"\\n\", strip=True)\n",
"\n",
" except Exception as e:\n",
" print(f\"Error occurred: {e}\")\n",
" self.title = \"Error occurred\"\n",
" self.text = \"\"\n",
"\n",
" finally:\n",
" self.driver.quit()\n"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "947eac30",
"metadata": {},
"outputs": [],
"source": [
"chrome_path = \"C:/Program Files/Google/Chrome/Application/chrome.exe\"\n",
"url = \"https://www.canva.com/\"\n"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "2cba8c91",
"metadata": {},
"outputs": [],
"source": [
"def new_summary(url, chrome_path):\n",
" web = WebsiteCrawler(url, 30, chrome_path)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages_for(web)\n",
" )\n",
"\n",
" web_summary = response.choices[0].message.content\n",
" \n",
" return display(Markdown(web_summary))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "da7f7b16",
"metadata": {},
"outputs": [],
"source": [
"new_summary(url, chrome_path)"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "7880ce6a",
"metadata": {},
"outputs": [],
"source": [
"url = \"https://openai.com\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "337b06da",
"metadata": {},
"outputs": [],
"source": [
"new_summary(url, chrome_path)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a5d69ea",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "llm_env",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
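The `Website` class in the notebook above relies on BeautifulSoup to drop `script`/`style` content before handing text to the model. For readers without bs4 installed, here is a stdlib-only sketch of that same cleanup step using `html.parser` (class and variable names here are illustrative, not from the notebook):

```python
# Stdlib-only sketch of the text-cleanup the Website class performs with
# BeautifulSoup: skip <script>/<style> content, keep visible text.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self.skip_depth = 0  # >0 while inside a skipped tag

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if not self.skip_depth and data.strip():
            self.parts.append(data.strip())

html = "<body><script>var x=1;</script><h1>Hi</h1><p>Welcome</p></body>"
parser = TextExtractor()
parser.feed(html)
text = "\n".join(parser.parts)
print(text)
```

This covers only the parsing step; for JavaScript-rendered pages you still need the Selenium approach shown in this notebook, since no HTML parser can execute scripts.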

522
week1/community-contributions/day2 EXERCISE.ipynb

@@ -0,0 +1,522 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Welcome to your first assignment!\n",
"\n",
"Instructions are below. Please give this a try, and look in the solutions folder if you get stuck (or feel free to ask me!)"
]
},
{
"cell_type": "markdown",
"id": "ada885d9-4d42-4d9b-97f0-74fbbbfe93a9",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Just before we get to the assignment --</h2>\n",
" <span style=\"color:#f71;\">I thought I'd take a second to point you at this page of useful resources for the course. This includes links to all the slides.<br/>\n",
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458",
"metadata": {},
"source": [
"# HOMEWORK EXERCISE ASSIGNMENT\n",
"\n",
"Upgrade the day 1 project to summarize a webpage to use an Open Source model running locally via Ollama rather than OpenAI\n",
"\n",
"You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n",
"\n",
"**Benefits:**\n",
"1. No API charges - open-source\n",
"2. Data doesn't leave your box\n",
"\n",
"**Disadvantages:**\n",
"1. Significantly less power than a Frontier Model\n",
"\n",
"## Recap on installation of Ollama\n",
"\n",
"Simply visit [ollama.com](https://ollama.com) and install!\n",
"\n",
"Once complete, the ollama server should already be running locally. \n",
"If you visit: \n",
"[http://localhost:11434/](http://localhost:11434/)\n",
"\n",
"You should see the message `Ollama is running`. \n",
"\n",
"If not, bring up a new Terminal (Mac) or Powershell (Windows) and enter `ollama serve` \n",
"And in another Terminal (Mac) or Powershell (Windows), enter `ollama pull llama3.2` \n",
"Then try [http://localhost:11434/](http://localhost:11434/) again.\n",
"\n",
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or Powershell, and change the code below from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "07e106bd-10c5-4365-b85b-397b5f059656",
"metadata": {},
"outputs": [],
"source": [
"# Constants\n",
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"MODEL = \"llama3.2\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "dac0a679-599c-441f-9bf2-ddc73d35b940",
"metadata": {},
"outputs": [],
"source": [
"# Create a messages list using the same format that we used for OpenAI\n",
"\n",
"messages = [\n",
" {\"role\": \"user\", \"content\": \"Describe some of the business applications of Generative AI\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7bb9c624-14f0-4945-a719-8ddb64f66f47",
"metadata": {},
"outputs": [],
"source": [
"payload = {\n",
" \"model\": MODEL,\n",
" \"messages\": messages,\n",
" \"stream\": False\n",
" }"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "42b9f644-522d-4e05-a691-56e7658c0ea9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Generative AI (Artificial Intelligence) has numerous business applications across various industries. Here are some examples:\n",
"\n",
"1. **Content Generation**: Generative AI can create high-quality content such as articles, social media posts, product descriptions, and more. This can help businesses save time and resources on content creation.\n",
"2. **Product Design**: Generative AI can be used to design new products, such as fashion items, jewelry, or electronics. It can also generate 3D models and prototypes, reducing the need for manual design and prototyping.\n",
"3. **Image and Video Generation**: Generative AI can create realistic images and videos that can be used in marketing campaigns, advertising, and social media. This can help businesses create engaging visual content without requiring extensive photography or videography skills.\n",
"4. **Chatbots and Virtual Assistants**: Generative AI can power chatbots and virtual assistants that provide customer support, answer frequently asked questions, and even engage in basic conversations.\n",
"5. **Predictive Maintenance**: Generative AI can analyze sensor data from machines and predict when maintenance is needed, reducing downtime and increasing efficiency.\n",
"6. **Personalized Recommendations**: Generative AI can analyze customer behavior and preferences to generate personalized product recommendations, improving the overall shopping experience.\n",
"7. **Customer Segmentation**: Generative AI can help businesses segment their customers based on their behavior, demographics, and preferences, enabling targeted marketing campaigns.\n",
"8. **Automated Writing Assistance**: Generative AI can assist writers with ideas, suggestions, and even full-text writing, helping to boost productivity and creativity.\n",
"9. **Data Analysis and Visualization**: Generative AI can analyze large datasets and generate insights, visualizations, and predictions that can inform business decisions.\n",
"10. **Creative Collaboration**: Generative AI can collaborate with human creatives, such as artists, designers, and writers, to generate new ideas, concepts, and content.\n",
"\n",
"Some specific industries where Generative AI is being applied include:\n",
"\n",
"1. **Marketing and Advertising**: generating personalized ads, content, and messaging.\n",
"2. **Finance and Banking**: automating financial analysis, risk assessment, and customer service.\n",
"3. **Healthcare**: generating medical images, analyzing patient data, and predicting disease outcomes.\n",
"4. **Manufacturing and Supply Chain**: optimizing production workflows, predicting demand, and identifying potential bottlenecks.\n",
"5. **Education**: creating personalized learning experiences, grading assignments, and developing educational content.\n",
"\n",
"These are just a few examples of the many business applications of Generative AI. As the technology continues to evolve, we can expect to see even more innovative uses across various industries.\n"
]
}
],
"source": [
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
"print(response.json()['message']['content'])"
]
},
{
"cell_type": "markdown",
"id": "6a021f13-d6a1-4b96-8e18-4eae49d876fe",
"metadata": {},
"source": [
"# Introducing the ollama package\n",
"\n",
"And now we'll do the same thing, but using the elegant ollama python package instead of a direct HTTP call.\n",
"\n",
"Under the hood, it's making the same call as above to the ollama server running at localhost:11434"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7745b9c4-57dc-4867-9180-61fa5db55eb8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Generative AI has numerous business applications across various industries. Here are some examples:\n",
"\n",
"1. **Content Generation**: Generative AI can be used to generate high-quality content such as articles, social media posts, product descriptions, and more. This can save time and resources for businesses that need to produce a large volume of content.\n",
"2. **Product Design**: Generative AI can be used to design new products, such as furniture, electronics, and other consumer goods. It can also help optimize product designs by generating multiple versions and selecting the most suitable one based on various criteria.\n",
"3. **Marketing Automation**: Generative AI can be used to create personalized marketing campaigns, such as email marketing automation, social media ads, and more. This can help businesses tailor their marketing efforts to specific customer segments and improve engagement rates.\n",
"4. **Image and Video Editing**: Generative AI can be used to edit images and videos, such as removing background noise, correcting color casts, and enhancing video quality. This can save time and resources for businesses that need to create high-quality visual content.\n",
"5. **Chatbots and Virtual Assistants**: Generative AI can be used to create chatbots and virtual assistants that can understand natural language and respond accordingly. This can help businesses provide better customer service and improve user experience.\n",
"6. **Predictive Analytics**: Generative AI can be used to analyze large datasets and generate predictive models that can forecast future trends and behaviors. This can help businesses make data-driven decisions and stay ahead of the competition.\n",
"7. **Customer Segmentation**: Generative AI can be used to segment customers based on their behavior, demographics, and preferences. This can help businesses tailor their marketing efforts and improve customer engagement.\n",
"8. **Language Translation**: Generative AI can be used to translate languages in real-time, which can help businesses communicate with international clients and customers more effectively.\n",
"9. **Music Composition**: Generative AI can be used to compose music for various applications such as advertising, film scoring, and video game soundtracks.\n",
"10. **Financial Modeling**: Generative AI can be used to create financial models that can predict future revenue streams, costs, and other financial metrics. This can help businesses make more accurate predictions and inform better investment decisions.\n",
"\n",
"Some of the industries that are already leveraging generative AI include:\n",
"\n",
"* E-commerce\n",
"* Healthcare\n",
"* Finance\n",
"* Marketing\n",
"* Education\n",
"* Entertainment\n",
"* Manufacturing\n",
"\n",
"These applications have the potential to transform various business processes, improve customer experiences, and drive innovation in various sectors.\n"
]
}
],
"source": [
"import ollama\n",
"\n",
"response = ollama.chat(model=MODEL, messages=messages)\n",
"print(response['message']['content'])"
]
},
{
"cell_type": "markdown",
"id": "a4704e10-f5fb-4c15-a935-f046c06fb13d",
"metadata": {},
"source": [
"## Alternative approach - using OpenAI python library to connect to Ollama"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "23057e00-b6fc-4678-93a9-6b31cb704bff",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Generative AI has numerous business applications across various industries, transforming the way companies operate, create products, and interact with customers. Some key applications include:\n",
"\n",
"1. **Content Generation**: Automate content creation for marketing materials, such as blog posts, product descriptions, social media posts, and more, using Generative AI-powered tools.\n",
"2. **Product Design and Prototyping**: Use Generative AI to design new products, furniture, or other innovative solutions, reducing design time and costs while increasing creativity.\n",
"3. **Customer Experience (CX) Tools**: Leverage Generative AI to create personalized customer experiences, such as chatbots that can respond to customer queries and provide tailored recommendations.\n",
"4. **Predictive Maintenance**: Use Generative AI to analyze sensor data, identify potential issues, and predict maintenance needs for equipment, reducing downtime and increasing overall efficiency.\n",
"5. **Personalized Marketing**: Use Generative AI to create targeted marketing campaigns based on individual customer preferences, behaviors, and demographics.\n",
"6. **Content Optimization**: Utilize Generative AI to optimize content for better performance in search engine results pages (SERPs), ensuring improved visibility and traffic.\n",
"7. **Brand Storytelling**: Automate the creation of brand stories, taglines, and overall brand narrative using Generative AI-powered tools.\n",
"8. **Financial Modeling and Forecasting**: Use Generative AI to create financial models, forecasts, and predictions for businesses, helping them make data-driven decisions.\n",
"9. **Supply Chain Optimization**: Leverage Generative AI to optimize supply chain operations, predicting demand, reducing inventory levels, and streamlining logistics.\n",
"10. **Automated Transcription and Translation**: Use Generative AI to automate the transcription of audio and video files into written text, as well as translate materials across languages.\n",
"11. **Digital Asset Management**: Utilize Generative AI to manage digital assets, such as images, videos, and documents, and automatically generate metadata for easy search and retrieval.\n",
"12. **Chatbots and Virtual Assistants**: Create more advanced chatbots using Generative AI that can understand context, emotions, and intent, providing better customer service experiences.\n",
"\n",
"In healthcare, Generative AI is being applied to:\n",
"\n",
"1. Medical Imaging Analysis\n",
"2. Personalized Medicine\n",
"3. Patient Data Analysis\n",
"\n",
"In education, Generative AI is used in:\n",
"\n",
"1. Adaptive Learning Systems\n",
"2. Automated Grading and Feedback\n",
"\n",
"Generative AI has numerous applications across various industries, from creative content generation to predictive maintenance and supply chain optimization.\n",
"\n",
"Keep in mind that these are just a few examples of the many business applications of Generative AI as this technology continues to evolve at a rapid pace.\n"
]
}
],
"source": [
"# There's actually an alternative approach that some people might prefer\n",
"# You can use the OpenAI client python library to call Ollama:\n",
"\n",
"from openai import OpenAI\n",
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
"\n",
"response = ollama_via_openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=messages\n",
")\n",
"\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898",
"metadata": {},
"source": [
"# NOW the exercise for you\n",
"\n",
"Take the code from day1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches."
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "de923314-a427-4199-b1f9-0e60f85114c3",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"\n",
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "0cedada6-adc6-40dc-bdf3-bc8a3b6b3826",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"November 13, 2024\n",
"Mastering AI and LLM Engineering – Resources\n",
"October 16, 2024\n",
"From Software Engineer to AI Data Scientist – resources\n",
"August 6, 2024\n",
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
"June 26, 2024\n",
"Choosing the Right LLM: Toolkit and Resources\n",
"Navigation\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n"
]
}
],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"web_res = Website(\"https://edwarddonner.com\")\n",
"print(web_res.text)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "64d26055-756b-4095-a1d1-298fdf4fd8f1",
"metadata": {},
"outputs": [],
"source": [
"\n",
"# Constants\n",
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"MODEL = \"llama3.2\"\n"
]
},
{
"cell_type": "code",
"execution_count": 52,
"id": "65b08550-7506-415f-8612-e2395d6e145d",
"metadata": {},
"outputs": [],
"source": [
"\n",
"# Define our system prompt - you can experiment with this later, e.g. changing the last sentence to 'Respond in markdown in Spanish.'\n",
"\n",
"system_prompt = \"You are a helpful assistant that provides a crisp summary \\\n",
"of the website the user passes in; respond with key points\"\n",
"\n",
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
"    user_prompt += \"\\nThe contents of this website are as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too as a bulleted list.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt\n"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "36a0a2d0-f07a-40ac-a065-b713cdd5c028",
"metadata": {},
"outputs": [],
"source": [
"# See how this function creates exactly the format above\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]\n"
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "8c2b20ea-6a8e-41c9-be3b-f24a5b29e8de",
"metadata": {},
"outputs": [],
"source": [
"# Website search\n",
"\n",
"web_msg=Website(\"https://www.cricbuzz.com/cricket-match-squads/91796/aus-vs-ind-3rd-test-india-tour-of-australia-2024-25\")\n",
"messages=messages_for(web_msg)\n",
"\n",
"payload = {\n",
" \"model\": MODEL,\n",
" \"messages\": messages,\n",
" \"stream\": False\n",
" }"
]
},
{
"cell_type": "code",
"execution_count": 54,
"id": "e5636b3b-7763-4f9c-ab18-88aa25b50de6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"**Summary of the Website**\n",
"=========================\n",
"\n",
"* The website provides live updates and information about the 3rd Test match between Australia and India as part of India's tour of Australia in the 2024-25 season.\n",
"* It includes news, scores, stats, and analysis from the match.\n",
"* The website is affiliated with Cricbuzz.com, a popular online cricket platform.\n",
"\n",
"**News and Announcements**\n",
"==========================\n",
"\n",
"* **Rashid Khan to miss the rest of the series**: Australian all-rounder Mitchell Marsh's teammate Rashid Khan has been ruled out of the remaining Tests due to a knee injury.\n",
"* **Bumrah to feature in the third Test**: Indian fast bowler Jasprit Bumrah is expected to return for the third Test, which starts on January 5 at the Sydney Cricket Ground.\n"
]
}
],
"source": [
"# Using Ollama to run the model locally\n",
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
"print(response.json()['message']['content'])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

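Before moving on to the next notebook: the exercise above builds the same `/api/chat` request body in several cells. A minimal sketch of that payload construction, pulled into one self-contained helper, is shown below. The helper names (`user_prompt_for`, `build_payload`) and the exact prompt wording are illustrative, not part of the notebook; the dict shape matches what the notebook POSTs to Ollama.

```python
# Constants mirroring the notebook's setup for the local Ollama server.
OLLAMA_API = "http://localhost:11434/api/chat"
MODEL = "llama3.2"

SYSTEM_PROMPT = (
    "You are a helpful assistant that provides a crisp summary "
    "of the website the user passes in; respond with key points."
)

def user_prompt_for(title: str, text: str) -> str:
    # Build the user prompt the same way the notebook does:
    # page title first, then an instruction, then the scraped text.
    return (
        f"You are looking at a website titled {title}\n"
        "The contents of this website are as follows; "
        "please provide a short summary of this website in markdown.\n\n"
        + text
    )

def build_payload(title: str, text: str, stream: bool = False) -> dict:
    # Same shape as the notebook's `payload` dict for POST /api/chat:
    # a model name, an OpenAI-style messages list, and a stream flag.
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt_for(title, text)},
        ],
        "stream": stream,
    }

payload = build_payload("Example Site", "Hello world content")
print(payload["model"])
```

To actually get a summary, this payload would be sent with `requests.post(OLLAMA_API, json=payload)` exactly as in the notebook, or passed via `ollama.chat(model=MODEL, messages=payload["messages"])`.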
513
week1/community-contributions/day5-MultiLingual-MultiTone.ipynb

@@ -0,0 +1,513 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a98030af-fcd1-4d63-a36e-38ba053498fa",
"metadata": {},
"source": [
"# A full business solution\n",
"\n",
"## Now we will take our project from Day 1 to the next level\n",
"\n",
"### BUSINESS CHALLENGE:\n",
"\n",
"Create a product that builds a Brochure for a company to be used for prospective clients, investors and potential recruits.\n",
"\n",
"We will be provided a company name and their primary website.\n",
"\n",
"See the end of this notebook for examples of real-world business applications.\n",
"\n",
"And remember: I'm always available if you have problems or ideas! Please do reach out."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d5b08506-dc8b-4443-9201-5f1848161363",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"# If these fail, please check you're running from an 'activated' environment with (llms) in the command prompt\n",
"\n",
"import os\n",
"import requests\n",
"import json\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fc5d8880-f2ee-4c06-af16-ecbc0262af61",
"metadata": {},
"outputs": [],
"source": [
"# Initialize and constants\n",
"\n",
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
" \n",
"MODEL = 'gpt-4o-mini'\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "106dd65e-90af-4ca8-86b6-23a41840645b",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
" \"\"\"\n",
" A utility class to represent a Website that we have scraped, now with links\n",
" \"\"\"\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" self.body = response.content\n",
" soup = BeautifulSoup(self.body, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" if soup.body:\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" else:\n",
" self.text = \"\"\n",
" links = [link.get('href') for link in soup.find_all('a')]\n",
" self.links = [link for link in links if link]\n",
"\n",
" def get_contents(self):\n",
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e30d8128-933b-44cc-81c8-ab4c9d86589a",
"metadata": {},
"outputs": [],
"source": [
"ed = Website(\"https://edwarddonner.com\")\n",
"ed.links"
]
},
{
"cell_type": "markdown",
"id": "1771af9c-717a-4fca-bbbe-8a95893312c3",
"metadata": {},
"source": [
"## First step: Have GPT-4o-mini figure out which links are relevant\n",
"\n",
"### Use a call to gpt-4o-mini to read the links on a webpage, and respond in structured JSON. \n",
"It should decide which links are relevant, and replace relative links such as \"/about\" with \"https://company.com/about\". \n",
"We will use \"one shot prompting\" in which we provide an example of how it should respond in the prompt.\n",
"\n",
"This is an excellent use case for an LLM, because it requires nuanced understanding. Imagine trying to code this without LLMs by parsing and analyzing the webpage - it would be very hard!\n",
"\n",
"Sidenote: there is a more advanced technique called \"Structured Outputs\" in which we require the model to respond according to a spec. We cover this technique in Week 8 during our autonomous Agentic AI project."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6957b079-0d96-45f7-a26a-3487510e9b35",
"metadata": {},
"outputs": [],
"source": [
"link_system_prompt = \"You are provided with a list of links found on a webpage. \\\n",
"You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n",
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n",
"link_system_prompt += \"You should respond in JSON as in this example:\"\n",
"link_system_prompt += \"\"\"\n",
"{\n",
" \"links\": [\n",
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
"        {\"type\": \"careers page\", \"url\": \"https://another.full.url/careers\"}\n",
" ]\n",
"}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b97e4068-97ed-4120-beae-c42105e4d59a",
"metadata": {},
"outputs": [],
"source": [
"print(link_system_prompt)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "8e1f601b-2eaf-499d-b6b8-c99050c9d6b3",
"metadata": {},
"outputs": [],
"source": [
"def get_links_user_prompt(website):\n",
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n",
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n",
"Do not include Terms of Service, Privacy, email links.\\n\"\n",
" user_prompt += \"Links (some might be relative links):\\n\"\n",
" user_prompt += \"\\n\".join(website.links)\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6bcbfa78-6395-4685-b92c-22d592050fd7",
"metadata": {},
"outputs": [],
"source": [
"print(get_links_user_prompt(ed))"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "a29aca19-ca13-471c-a4b4-5abbfa813f69",
"metadata": {},
"outputs": [],
"source": [
"def get_links(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": link_system_prompt},\n",
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n",
" ],\n",
" response_format={\"type\": \"json_object\"}\n",
" )\n",
" result = response.choices[0].message.content\n",
" return json.loads(result)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "74a827a0-2782-4ae5-b210-4a242a8b4cc2",
"metadata": {},
"outputs": [],
"source": [
"# Anthropic has made their site harder to scrape, so I'm using HuggingFace...\n",
"\n",
"huggingface = Website(\"https://huggingface.co\")\n",
"huggingface.links"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d3d583e2-dcc4-40cc-9b28-1e8dbf402924",
"metadata": {},
"outputs": [],
"source": [
"get_links(\"https://huggingface.co\")"
]
},
{
"cell_type": "markdown",
"id": "0d74128e-dfb6-47ec-9549-288b621c838c",
"metadata": {},
"source": [
"## Second step: make the brochure!\n",
"\n",
"Assemble all the details into another prompt to GPT-4o-mini"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "85a5b6e2-e7ef-44a9-bc7f-59ede71037b5",
"metadata": {},
"outputs": [],
"source": [
"def get_all_details(url):\n",
" result = \"Landing page:\\n\"\n",
" result += Website(url).get_contents()\n",
" links = get_links(url)\n",
" print(\"Found links:\", links)\n",
" for link in links[\"links\"]:\n",
" result += f\"\\n\\n{link['type']}\\n\"\n",
" result += Website(link[\"url\"]).get_contents()\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5099bd14-076d-4745-baf3-dac08d8e5ab2",
"metadata": {},
"outputs": [],
"source": [
"print(get_all_details(\"https://huggingface.co\"))"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "9b863a55-f86c-4e3f-8a79-94e24c1a8cf2",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
"and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
"Include details of company culture, customers and careers/jobs if you have the information.\"\n",
"\n",
"# Or uncomment the lines below for a more humorous brochure - this demonstrates how easy it is to incorporate 'tone':\n",
"\n",
"# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
"# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
"# Include details of company culture, customers and careers/jobs if you have the information.\"\n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "6ab83d92-d36b-4ce0-8bcc-5bb4c2f8ff23",
"metadata": {},
"outputs": [],
"source": [
"def get_brochure_user_prompt(company_name, url):\n",
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n",
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n",
" user_prompt += get_all_details(url)\n",
" user_prompt = user_prompt[:5_000] # Truncate if more than 5,000 characters\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd909e0b-1312-4ce2-a553-821e795d7572",
"metadata": {},
"outputs": [],
"source": [
"print(get_brochure_user_prompt(\"HuggingFace\", \"https://huggingface.co\"))"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "e44de579-4a1a-4e6a-a510-20ea3e4b8d46",
"metadata": {},
"outputs": [],
"source": [
"def create_brochure(company_name, url):\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" )\n",
" result = response.choices[0].message.content\n",
" display(Markdown(result))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e093444a-9407-42ae-924a-145730591a39",
"metadata": {},
"outputs": [],
"source": [
"create_brochure(\"HuggingFace\", \"https://huggingface.com\")"
]
},
{
"cell_type": "markdown",
"id": "61eaaab7-0b47-4b29-82d4-75d474ad8d18",
"metadata": {},
"source": [
"## Finally - a minor improvement\n",
"\n",
"With a small adjustment, we can change this so that the results stream back from OpenAI,\n",
"with the familiar typewriter animation"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "51db0e49-f261-4137-aabe-92dd601f7725",
"metadata": {},
"outputs": [],
"source": [
"def stream_brochure(company_name, url):\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "56bf0ae3-ee9d-4a72-9cd6-edcac67ceb6d",
"metadata": {},
"outputs": [],
"source": [
"stream_brochure(\"HuggingFace\", \"https://huggingface.co\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "87bd1188",
"metadata": {},
"outputs": [],
"source": [
"stream_brochure(\"HuggingFace\", \"https://huggingface.co\")"
]
},
{
"cell_type": "markdown",
"id": "a9e7375d",
"metadata": {},
"source": [
"## **Multi-lingual with Multi-Tone in Desire Format**"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "af5c959f",
"metadata": {},
"outputs": [],
"source": [
"def multi_lingual_stream_brochure(company_name, url, language, tone):\n",
"\n",
" system_prompt = f\"\"\"\n",
"You are an assistant that analyzes the contents of several relevant pages from a company website and creates a visually appealing and professional short brochure for prospective customers, investors, and recruits. \n",
"The brochure should be written in {language} and use a {tone.lower()} tone throughout.\n",
"\n",
"The brochure should follow this structure (in {language}):\n",
"\n",
"1. **Front Cover**:\n",
" - Prominently display the company name as Title.\n",
" - Include a compelling headline or tagline.\n",
" - Add something engaging relevant to the company’s mission.\n",
"\n",
"2. **About Us**:\n",
" - Provide a brief introduction to the company.\n",
" - State the company’s core mission and vision.\n",
" - Mention the founding story or key milestones.\n",
"\n",
"3. **What We Offer**:\n",
" - Summarize the company's products, services, or solutions.\n",
" - Highlight benefits or unique selling points.\n",
" - Include testimonials or case studies if available.\n",
"\n",
"4. **Our Culture**:\n",
" - Outline the company’s key values or guiding principles.\n",
" - Describe the workplace environment (e.g., innovation-driven, inclusive, collaborative).\n",
" - Highlight community engagement or CSR initiatives.\n",
"\n",
"5. **Who We Serve**:\n",
" - Describe the target customers or industries served.\n",
" - Mention notable clients or partners.\n",
" - Include testimonials or endorsements from customers.\n",
"\n",
"6. **Join Us**:\n",
" - Detail career or internship opportunities.\n",
" - Highlight benefits, career growth, or training opportunities.\n",
" - Provide direct links or steps to apply.\n",
"\n",
"7. **Contact Us**:\n",
" - Provide the company’s address, phone number, and email.\n",
" - Include links to social media platforms.\n",
" - Add a link to the company’s website.\n",
"\n",
"8. **Closing Note**:\n",
" - End with a thank-you message or an inspirational note for the reader.\n",
" - Add a call-to-action (e.g., “Get in touch today!” or “Explore more on our website”).\n",
"\n",
"Ensure the content is concise, engaging, visually clear, and tailored to the target audience. Use headings and subheadings to make the brochure easy to navigate. Include links and contact information wherever applicable.\n",
"\"\"\"\n",
"\n",
"\n",
" \n",
" stream = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "744bfc05",
"metadata": {},
"outputs": [],
"source": [
"\n",
"multi_lingual_stream_brochure(\"OpenAI\", \"https://openai.com/\", \"Urdu\", \"humorous, entertaining, jokey\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "llm_env",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
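The typewriter-style streaming used throughout the notebook above (`stream_brochure`, `multi_lingual_stream_brochure`) boils down to one accumulate-clean-redraw loop. Here is a minimal offline sketch of just the accumulation step; the function name and the fake deltas are ours, not from the notebook, and the line where the notebook would call `update_display` is marked with a comment:

```python
def accumulate_stream(deltas):
    """Accumulate streamed text deltas the way the notebook's loop does:
    append each chunk, then strip code fences and the word 'markdown'."""
    response = ""
    snapshots = []
    for delta in deltas:
        response += delta or ""  # a delta can be None mid-stream
        cleaned = response.replace("```", "").replace("markdown", "")
        snapshots.append(cleaned)  # the notebook calls update_display(...) here
    return snapshots

# Simulated deltas as they might arrive from the API
frames = accumulate_stream(["```markdown\n# He", "llo", None, "```"])
```

Because the cleaning runs on the full accumulated text each iteration, a code fence that arrives split across two chunks is still stripped once both halves are present.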

10
week1/community-contributions/resume.txt

@@ -0,0 +1,10 @@
John Doe
Software Engineer
Experience:
- Developed web applications using Python and JavaScript.
- Collaborated with cross-functional teams to deliver projects on time.
Education:
- B.S. in Computer Science from XYZ University.
Skills:
- Python, JavaScript, React, SQL

248
week1/community-contributions/week1 EXERCISE.ipynb

@@ -0,0 +1,248 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
"metadata": {},
"source": [
"# End of week 1 exercise\n",
"\n",
"To demonstrate your familiarity with OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
"and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
]
},
{
"cell_type": "code",
"execution_count": 52,
"id": "c1070317-3ed9-4659-abe3-828943230e03",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"import os\n",
"import requests\n",
"import json \n",
"from dotenv import load_dotenv\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI\n",
"import ollama\n"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
"metadata": {},
"outputs": [],
"source": [
"# constants\n",
"\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"MODEL_LLAMA = 'llama3.2'\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "0bb65a08-9090-434a-b99d-5659a370cfbc",
"metadata": {},
"outputs": [],
"source": [
"# Prompts\n",
"\n",
"system_prompt = \"You are a tutor and helps with the user questions in detail with markdown respond with key point \\\n",
"considering the recent development around the world, keep the response in most appropriate tone \\n\"\n",
"\n",
"system_prompt += \"Some of Examples are\"\n",
"system_prompt += \"\"\"\n",
"{\"question\": \"1+1?\", \"response\": \"2\"},\n",
"{\"question\": \"why we shouls learn LLM Models?\", \"response\": \" Learning about Large Language Models (LLMs) is important because they are a rapidly evolving technology with the potential to significantly impact various industries, offering advanced capabilities in text generation, translation, information retrieval, and more, which can be valuable for professionals across diverse fields, allowing them to enhance their work and gain a competitive edge by understanding and utilizing these powerful language processing tools.\\ \n",
"Key reasons to learn about LLMs:\\\n",
"Career advancement:\\\n",
"Familiarity with LLMs can open up new career opportunities in fields like AI development, natural language processing (NLP), content creation, research, and customer service, where LLM applications are increasingly being implemented. \\\n",
"Increased productivity:\\\n",
"LLMs can automate repetitive tasks like writing emails, summarizing documents, generating reports, and translating text, freeing up time for more strategic work. \\\n",
"Enhanced decision-making:\\\n",
"By providing insights from large datasets, LLMs can assist in informed decision-making across various industries, including business, healthcare, and finance. \\\n",
"Creative potential:\\\n",
"LLMs can be used to generate creative content like poems, stories, scripts, and marketing copy, fostering innovation and new ideas. \\\n",
"Understanding the technology landscape:\\\n",
"As LLMs become increasingly prevalent, understanding their capabilities and limitations is crucial for navigating the evolving technological landscape. \\\n",
"What is a large language model (LLM)? - Cloudflare\\\n",
"A large language model (LLM) is a type of artificial intelligence (AI) program that can recognize and generate text, among other t...\\\n",
" \"},\n",
"{\"question\": \"what is the future of AI?\", \"response\": \"AI is predicted to grow increasingly pervasive as technology develops, revolutionising sectors including healthcare, banking, and transportation\"},\n",
"\"\"\"\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"API key looks good so far\n"
]
}
],
"source": [
"# set up environment\n",
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
" \n",
"MODEL = 'gpt-4o-mini'\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
"metadata": {},
"outputs": [],
"source": [
"# here is the question; type over this to ask something new\n",
"\n",
"user_question = \"\"\"\n",
"How important it is for a Data Engineers to learn LLM, Considering the evolution of AI now a days?.\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "60ce7000-a4a5-4cce-a261-e75ef45063b4",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"{\"question\": \"How important is it for Data Engineers to learn LLMs?\", \"response\": \"The importance of Data Engineers learning about Large Language Models (LLMs) cannot be overstated, especially given the rapid evolution of AI and its applications across various domains. Here's why this knowledge is essential:\n",
"\n",
"### Key Reasons for Data Engineers to Learn about LLMs:\n",
"\n",
"1. **Integration of AI in Data Pipelines:**\n",
" - As organizations increasingly adopt AI-driven solutions, Data Engineers will need to integrate LLMs into data pipelines for tasks such as text processing, feature extraction, and sentiment analysis.\n",
"\n",
"2. **Understanding Data Requirements:**\n",
" - LLMs require substantial and specific datasets for optimal performance. Knowledge of these requirements will help Data Engineers curate, preprocess, and manage data more effectively.\n",
"\n",
"3. **Enhanced Data Quality:**\n",
" - Data Engineers play a crucial role in ensuring data quality. Understanding LLMs can guide them in implementing effective validation checks and enhancing the data used for training these models.\n",
"\n",
"4. **Collaboration with Data Scientists:**\n",
" - Data Engineers are essential collaborators with Data Scientists. A solid grasp of LLMs will enable them to facilitate better communication and cooperation in model deployment and optimization.\n",
"\n",
"5. **Innovation in Product Development:**\n",
" - Familiarity with LLMs will enable Data Engineers to contribute innovative ideas for new products or features that leverage language processing capabilities, leading to enhanced user experiences.\n",
"\n",
"6. **Staying Current with Industry Trends:**\n",
" - The AI landscape is rapidly changing. Learning about LLMs keeps Data Engineers abreast of current trends and technologies, ensuring they remain competitive in the job market and valuable to their organizations.\n",
"\n",
"7. **Ethical and Responsible AI:**\n",
" - Understanding LLMs involves awareness of their ethical considerations, such as bias and misuse. Data Engineers can advocate for responsible AI practices within their organizations by being educated on these issues.\n",
"\n",
"8. **Scalability Considerations:**\n",
" - Data Engineers will need to design systems that can scale efficiently, especially when dealing with the substantial computational resources required for training and deploying LLMs.\n",
"\n",
"### Conclusion:\n",
"In summary, learning about LLMs is crucial for Data Engineers as it not only enhances their skill set but also positions them to contribute meaningfully to AI initiatives within their organizations. Embracing this knowledge will ultimately drive innovation and efficiency in their data-driven projects.\"}"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Get gpt-4o-mini to answer, with streaming\n",
"def ask_tutor(question):\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL_GPT,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": question},\n",
" {\"role\": \"user\", \"content\": system_prompt}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)\n",
"\n",
"# call the gpt-4o-mini to answer with streaming\n",
"ask_tutor(user_question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
"metadata": {},
"outputs": [],
"source": [
"# Get Llama 3.2 to answer\n",
"messages = [\n",
" {\"role\": \"user\", \"content\": user_question}\n",
"]\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"payload = {\n",
" \"model\": MODEL_LLAMA,\n",
" \"messages\": messages,\n",
" \"stream\": True\n",
" }\n",
"\n",
"response = ollama.chat(model=MODEL_LLAMA, messages=messages)\n",
"reply = response['message']['content']\n",
"display(Markdown(reply))\n",
"\n",
"# # Process the response stream\n",
"# for line in response.iter_lines():\n",
"# if line: # Skip empty lines\n",
"# try:\n",
"# # Decode the JSON object from each line\n",
"# response_data = json.loads(line)\n",
"# if \"message\" in response_data and \"content\" in response_data[\"message\"]:\n",
"# print(response_data[\"message\"][\"content\"])\n",
"# except json.JSONDecodeError as e:\n",
"# print(f\"Failed to decode JSON: {e}\")\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
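The commented-out cell above sketches how to consume Ollama's streaming response: the endpoint returns newline-delimited JSON objects, each carrying a `message.content` fragment. Here is a self-contained version of that parsing step, testable without a running Ollama server (the sample lines are fabricated for illustration):

```python
import json

def collect_ollama_stream(lines):
    """Assemble the assistant's reply from a newline-delimited JSON stream
    of the shape Ollama's /api/chat returns when "stream" is true."""
    reply = ""
    for line in lines:
        if not line:  # skip empty keep-alive lines
            continue
        data = json.loads(line)
        message = data.get("message", {})
        reply += message.get("content", "")
    return reply

# Fabricated stream fragments for illustration
sample = [
    "",
    '{"message": {"role": "assistant", "content": "Hel"}, "done": false}',
    '{"message": {"role": "assistant", "content": "lo"}, "done": true}',
]
```

With a real request, the same function can be fed `response.iter_lines()` from a streaming `requests.post` call, which is exactly what the commented-out loop iterates over.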

332
week1/community-contributions/week1-collaborative-approach-two-llms.ipynb

@@ -0,0 +1,332 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
"metadata": {},
"source": [
"# **End of week 1 exercise**\n",
"\n",
"To demonstrate your familiarity with OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
"and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
]
},
{
"cell_type": "markdown",
"id": "c70e5ab1",
"metadata": {},
"source": [
"## **1. Get a response from your favorite AI Tutor** "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c1070317-3ed9-4659-abe3-828943230e03",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from openai import OpenAI\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from IPython.display import Markdown, display, update_display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "65dace69",
"metadata": {},
"outputs": [],
"source": [
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key) > 10:\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
"metadata": {},
"outputs": [],
"source": [
"# constants\n",
"\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"MODEL_LLAMA = 'llama3.2'\n",
"\n",
"openai = OpenAI()\n",
"\n",
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "3673d863",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"\"\"You are the software engnieer, phd in mathematics, machine learning engnieer, and other topics\"\"\"\n",
"system_prompt += \"\"\"\n",
"When responding, always use Markdown for formatting. For any code, use well-structured code blocks with syntax highlighting,\n",
"For instance:\n",
"```python\n",
"\n",
"sample_list = [for i in range(10)]\n",
"```\n",
"Another example\n",
"```javascript\n",
" function displayMessage() {\n",
" alert(\"Hello, welcome to JavaScript!\");\n",
" }\n",
"\n",
"```\n",
"\n",
"Break down explanations into clear, numbered steps for better understanding. \n",
"Highlight important terms using inline code formatting (e.g., `function_name`, `variable`).\n",
"Provide examples for any concepts and ensure all examples are concise, clear, and relevant.\n",
"Your goal is to create visually appealing, easy-to-read, and informative responses.\n",
"\n",
"\"\"\"\n"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "1df78d41",
"metadata": {},
"outputs": [],
"source": [
"def tutor_user_prompt(question):\n",
" # Ensure the question is properly appended to the user prompt.\n",
" user_prompt = (\n",
" \"Please carefully explain the following question in a step-by-step manner for clarity:\\n\\n\"\n",
" )\n",
" user_prompt += question\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "6dccbccb",
"metadata": {},
"outputs": [],
"source": [
"\n",
"\n",
"def askTutor(question, MODEL):\n",
" # Generate the user prompt dynamically.\n",
" user_prompt = tutor_user_prompt(question)\n",
" \n",
" # OpenAI API call to generate response.\n",
" if MODEL == 'gpt-4o-mini':\n",
" print(f'You are getting response from {MODEL}')\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ],\n",
" stream=True\n",
" )\n",
" else:\n",
" MODEL == 'llama3.2'\n",
" print(f'You are getting response from {MODEL}')\n",
" stream = ollama_via_openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ],\n",
" stream=True\n",
" )\n",
"\n",
" # Initialize variables for response processing.\n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" \n",
" # Process the response stream and update display dynamically.\n",
" for chunk in stream:\n",
" # Safely access the content attribute.\n",
" response_chunk = getattr(chunk.choices[0].delta, \"content\", \"\")\n",
" if response_chunk: # Check if response_chunk is not None or empty\n",
" response += response_chunk\n",
" # No replacement of Markdown formatting here!\n",
" update_display(Markdown(response), display_id=display_handle.display_id)\n"
]
},
{
"cell_type": "code",
"execution_count": 44,
"id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
"metadata": {},
"outputs": [],
"source": [
"# here is the question; type over this to ask something new\n",
"\n",
"question = \"\"\"\n",
"Please explain what this code does and why:\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
"metadata": {},
"outputs": [],
"source": [
"askTutor(question=question, MODEL=MODEL_GPT)"
]
},
{
"cell_type": "markdown",
"id": "b79f9479",
"metadata": {},
"source": [
"## **2. Using both LLMs collaboratively approach**"
]
},
{
"cell_type": "markdown",
"id": "80e3c8f5",
"metadata": {},
"source": [
"- I thought about like similar the idea of a RAG (Retrieval-Augmented Generation) approach, is an excellent idea to improve responses by refining the user query and producing a polished, detailed final answer. Two LLM talking each other its cool!!! Here's how we can implement this:\n",
"\n",
"**Updated Concept:**\n",
"1. Refine Query with Ollama:\n",
" - Use Ollama to refine the raw user query into a well-structured prompt.\n",
" - This is especially helpful when users input vague or poorly structured queries.\n",
"2. Generate Final Response with GPT:\n",
" - Pass the refined prompt from Ollama to GPT to generate the final, detailed, and polished response.\n",
"3. Return the Combined Output:\n",
" - Combine the input, refined query, and the final response into a single display to ensure clarity."
]
},
{
"cell_type": "code",
"execution_count": 59,
"id": "60f5ac2d",
"metadata": {},
"outputs": [],
"source": [
"def refine_with_ollama(raw_question):\n",
" \"\"\"\n",
" Use Ollama to refine the user's raw question into a well-structured prompt.\n",
" \"\"\"\n",
" print(\"Refining the query using Ollama...\")\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant. Refine and structure the following user input.\"},\n",
"\n",
" {\"role\": \"user\", \"content\": raw_question},\n",
" ]\n",
" response = ollama_via_openai.chat.completions.create(\n",
" model=MODEL_LLAMA,\n",
" messages=messages,\n",
" stream=False # Non-streamed refinement\n",
" )\n",
" refined_query = response.choices[0].message.content\n",
" return refined_query"
]
},
{
"cell_type": "code",
"execution_count": 60,
"id": "2aa4c9f6",
"metadata": {},
"outputs": [],
"source": [
"def ask_with_ollama_and_gpt(raw_question):\n",
" \"\"\"\n",
" Use Ollama to refine the user query and GPT to generate the final response.\n",
" \"\"\"\n",
" # Step 1: Refine the query using Ollama\n",
" refined_query = refine_with_ollama(raw_question)\n",
" \n",
" # Step 2: Generate final response with GPT\n",
" print(\"Generating the final response using GPT...\")\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": refined_query},\n",
" ]\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL_GPT,\n",
" messages=messages,\n",
" stream=True # Stream response for dynamic display\n",
" )\n",
"\n",
" # Step 3: Combine responses\n",
" response = \"\"\n",
" display_handle = display(Markdown(f\"### Refined Query:\\n\\n{refined_query}\\n\\n---\\n\\n### Final Response:\"), display_id=True)\n",
" for chunk in stream:\n",
" response_chunk = getattr(chunk.choices[0].delta, \"content\", \"\")\n",
" if response_chunk:\n",
" response += response_chunk\n",
" update_display(Markdown(f\"### Refined Query:\\n\\n{refined_query}\\n\\n---\\n\\n### Final Response:\\n\\n{response}\"), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": 61,
"id": "4150e857",
"metadata": {},
"outputs": [],
"source": [
"# Example Usage\n",
"question = \"\"\"\n",
"Please explain what this code does:\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f2b8935f",
"metadata": {},
"outputs": [],
"source": [
"ask_with_ollama_and_gpt(raw_question=question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "086a5294",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "llm_env",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
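The refine-then-answer flow in the notebook above (`refine_with_ollama` feeding `ask_with_ollama_and_gpt`) is at heart a two-stage pipeline. A minimal sketch with the two models injected as plain callables, so the flow can be exercised with stubs instead of live APIs (the function and parameter names here are ours, not from the notebook):

```python
def refine_then_answer(raw_question, refiner, answerer):
    """Stage 1: refiner turns the raw question into a structured prompt.
    Stage 2: answerer produces the final response from that prompt."""
    refined = refiner(raw_question)
    answer = answerer(refined)
    return {"refined_query": refined, "answer": answer}

# Stub callables standing in for Ollama (refiner) and GPT (answerer)
result = refine_then_answer(
    "what does yield from do?",
    refiner=lambda q: f"Explain step by step: {q}",
    answerer=lambda prompt: f"ANSWER[{prompt}]",
)
```

In the notebook, `refiner` is the non-streamed Ollama call and `answerer` is the streamed GPT call; keeping them as parameters makes the pipeline trivial to unit-test or to swap models later.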

16
week1/day2 EXERCISE.ipynb

@@ -125,6 +125,18 @@
 {
 "cell_type": "code",
 "execution_count": 5,
+"id": "479ff514-e8bd-4985-a572-2ea28bb4fa40",
+"metadata": {},
+"outputs": [],
+"source": [
+"# Let's just make sure the model is loaded\n",
+"\n",
+"!ollama pull llama3.2"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
 "id": "42b9f644-522d-4e05-a691-56e7658c0ea9",
 "metadata": {},
 "outputs": [
@@ -158,6 +170,10 @@
 }
 ],
 "source": [
+"# If this doesn't work for any reason, try the 2 versions in the following cells\n",
+"# And double check the instructions in the 'Recap on installation of Ollama' at the top of this lab\n",
+"# And if none of that works - contact me!\n",
+"\n",
 "response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
 "print(response.json()['message']['content'])"
 ]
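The cell in the hunk above posts a JSON payload to Ollama's local endpoint. As a quick reference, this is the request-body shape it sends; the helper name is ours, while the endpoint and keys match the notebook's `OLLAMA_API` usage:

```python
def build_ollama_payload(model, user_message, stream=False):
    """Build the JSON body posted to http://localhost:11434/api/chat."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,
    }

payload = build_ollama_payload("llama3.2", "Describe some applications of LLMs")
```

With `stream=False` Ollama returns one JSON object, so `response.json()['message']['content']` in the cell above yields the whole reply at once.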

22
week1/day5.ipynb

@@ -2809,10 +2809,30 @@
 "</table>"
 ]
 },
+{
+"cell_type": "markdown",
+"id": "6f48e42e-fa7a-495f-a5d4-26bfc24d60b6",
+"metadata": {},
+"source": [
+"<table style=\"margin: 0; text-align: left;\">\n",
+" <tr>\n",
+" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+" <img src=\"../thankyou.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+" </td>\n",
+" <td>\n",
+" <h2 style=\"color:#090;\">Finally! I have a special request for you</h2>\n",
+" <span style=\"color:#090;\">\n",
+" My editor tells me that it makes a MASSIVE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others. If you're able to take a minute to rate this, I'd be so very grateful! And regardless - always please reach out to me at ed@edwarddonner.com if I can help at any point.\n",
+" </span>\n",
+" </td>\n",
+" </tr>\n",
+"</table>"
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "3de35771-455f-40b5-ba44-7c0a6b7c427a",
+"id": "b8d3e1a1-ba54-4907-97c5-30f89a24775b",
 "metadata": {},
 "outputs": [],
 "source": []

175
week1/solutions/day2 SOLUTION.ipynb

@@ -34,7 +34,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 4,
+"execution_count": null,
 "id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
 "metadata": {},
 "outputs": [],
@@ -49,7 +49,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 5,
+"execution_count": null,
 "id": "29ddd15d-a3c5-4f4e-a678-873f56162724",
 "metadata": {},
 "outputs": [],
@@ -61,7 +61,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 6,
+"execution_count": null,
 "id": "c5e793b2-6775-426a-a139-4848291d0463",
 "metadata": {},
 "outputs": [],
@@ -91,63 +91,10 @@
 },
 {
 "cell_type": "code",
-"execution_count": 7,
+"execution_count": null,
 "id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
 "metadata": {},
-"outputs": [
-{
-"name": "stdout",
-"output_type": "stream",
-"text": [
-"Home - Edward Donner\n",
-"Home\n",
-"Outsmart\n",
-"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
-"About\n",
-"Posts\n",
-"Well, hi there.\n",
-"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
-"very\n",
-"amateur) and losing myself in\n",
-"Hacker News\n",
-", nodding my head sagely to things I only half understand.\n",
-"I’m the co-founder and CTO of\n",
-"Nebula.io\n",
-". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
-"acquired in 2021\n",
-".\n",
-"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
-"patented\n",
-"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
-"Connect\n",
-"with me for more!\n",
-"October 16, 2024\n",
-"From Software Engineer to AI Data Scientist – resources\n",
-"August 6, 2024\n",
-"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
-"June 26, 2024\n",
-"Choosing the Right LLM: Toolkit and Resources\n",
-"February 7, 2024\n",
-"Fine-tuning an LLM on your texts: a simulation of you\n",
-"Navigation\n",
-"Home\n",
-"Outsmart\n",
-"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
-"About\n",
-"Posts\n",
-"Get in touch\n",
-"ed [at] edwarddonner [dot] com\n",
-"www.edwarddonner.com\n",
-"Follow me\n",
-"LinkedIn\n",
-"Twitter\n",
-"Facebook\n",
-"Subscribe to newsletter\n",
-"Type your email…\n",
-"Subscribe\n"
-]
-}
-],
+"outputs": [],
 "source": [
 "# Let's try one out\n",
 "\n",
@@ -176,7 +123,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 8,
+"execution_count": null,
 "id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
 "metadata": {},
 "outputs": [],
@@ -190,7 +137,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 9,
+"execution_count": null,
 "id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
 "metadata": {},
 "outputs": [],
@@ -224,7 +171,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 10,
+"execution_count": null,
 "id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
 "metadata": {},
 "outputs": [],
@@ -248,7 +195,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 11,
+"execution_count": null,
 "id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
 "metadata": {},
 "outputs": [],
@@ -264,28 +211,17 @@
 },
 {
 "cell_type": "code",
-"execution_count": 12,
+"execution_count": null,
 "id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
 "metadata": {},
-"outputs": [
-{
-"data": {
-"text/plain": [
-"'**Summary**\\n\\n* Website belongs to Edward Donner, a co-founder and CTO of Nebula.io.\\n* He is the founder and CEO of AI startup untapt, which was acquired in 2021.\\n\\n**News/Announcements**\\n\\n* October 16, 2024: \"From Software Engineer to AI Data Scientist – resources\" (resource list for career advancement)\\n* August 6, 2024: \"Outsmart LLM Arena – a battle of diplomacy and deviousness\" (introducing the Outsmart arena, a competition between LLMs)\\n* June 26, 2024: \"Choosing the Right LLM: Toolkit and Resources\" (resource list for selecting the right LLM)\\n* February 7, 2024: \"Fine-tuning an LLM on your texts: a simulation of you\" (blog post about simulating human-like conversations with LLMs)'"
-]
-},
-"execution_count": 12,
-"metadata": {},
-"output_type": "execute_result"
-}
-],
+"outputs": [],
 "source": [
 "summarize(\"https://edwarddonner.com\")"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 13,
+"execution_count": null,
 "id": "3d926d59-450e-4609-92ba-2d6f244f1342",
 "metadata": {},
 "outputs": [],
@@ -299,37 +235,10 @@
 },
 {
 "cell_type": "code",
-"execution_count": 14,
+"execution_count": null,
 "id": "3018853a-445f-41ff-9560-d925d1774b2f",
 "metadata": {},
-"outputs": [
-{
-"data": {
-"text/markdown": [
-"# Summary of Edward Donner's Website\n",
-"\n",
-"## About the Creator\n",
-"Edward Donner is a writer, code enthusiast, and co-founder/CTO of Nebula.io, an AI company that applies AI to help people discover their potential.\n",
-"\n",
-"## Recent Announcements and News\n",
-"\n",
-"* October 16, 2024: \"From Software Engineer to AI Data Scientist – resources\" - a resource list for those transitioning into AI data science.\n",
-"* August 6, 2024: \"Outsmart LLM Arena – a battle of diplomacy and deviousness\" - an introduction to the Outsmart arena where LLMs compete against each other in diplomacy and strategy.\n",
-"* June 26, 2024: \"Choosing the Right LLM: Toolkit and Resources\" - a resource list for choosing the right Large Language Model (LLM) for specific use cases.\n",
-"\n",
-"## Miscellaneous\n",
-"\n",
-"* A section about Ed's personal interests, including DJing and amateur electronic music production.\n",
-"* Links to his professional profiles on LinkedIn, Twitter, Facebook, and a contact email."
-],
-"text/plain": [
-"<IPython.core.display.Markdown object>"
-]
-},
-"metadata": {},
-"output_type": "display_data"
-}
-],
+"outputs": [],
 "source": [
 "display_summary(\"https://edwarddonner.com\")"
 ]
@@ -352,66 +261,20 @@
 },
 {
 "cell_type": "code",
-"execution_count": 15,
+"execution_count": null,
 "id": "45d83403-a24c-44b5-84ac-961449b4008f",
 "metadata": {},
-"outputs": [
-{
-"data": {
-"text/markdown": [
-"I can't provide information on that topic."
-],
-"text/plain": [
-"<IPython.core.display.Markdown object>"
-]
-},
-"metadata": {},
-"output_type": "display_data"
-}
-],
+"outputs": [],
 "source": [
 "display_summary(\"https://cnn.com\")"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 19,
+"execution_count": null,
 "id": "75e9fd40-b354-4341-991e-863ef2e59db7",
 "metadata": {},
-"outputs": [
-{
-"data": {
-"text/markdown": [
-"# Website Summary: Anthropic\n",
-"## Overview\n",
-"\n",
-"Anthropic is an AI safety and research company based in San Francisco. Their interdisciplinary team has experience across ML, physics, policy, and product.\n",
-"\n",
-"### News and Announcements\n",
-"\n",
-"* **Claude 3.5 Sonnet** is now available, featuring the most intelligent AI model.\n",
-"* **Announcement**: Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku (October 22, 2024)\n",
-"* **Research Update**: Constitutional AI: Harmlessness from AI Feedback (December 15, 2022) and Core Views on AI Safety: When, Why, What, and How (March 8, 2023)\n",
"\n",
"### Products and Services\n",
"\n",
"* Claude for Enterprise\n",
"* Research and development of AI systems with a focus on safety and reliability.\n",
"\n",
"### Company Information\n",
"\n",
"* Founded in San Francisco\n",
"* Interdisciplinary team with experience across ML, physics, policy, and product.\n",
"* Provides reliable and beneficial AI systems."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [ "source": [
"display_summary(\"https://anthropic.com\")" "display_summary(\"https://anthropic.com\")"
] ]
@ -455,7 +318,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.11.10" "version": "3.11.11"
} }
}, },
"nbformat": 4, "nbformat": 4,

13
week1/troubleshooting.ipynb

@ -48,21 +48,26 @@
"# The Environment Name should be: llms\n", "# The Environment Name should be: llms\n",
"\n", "\n",
"import os\n", "import os\n",
"conda_name, venv_name = \"\", \"\"\n",
"\n", "\n",
"conda_prefix = os.environ.get('CONDA_PREFIX')\n", "conda_prefix = os.environ.get('CONDA_PREFIX')\n",
"if conda_prefix:\n", "if conda_prefix:\n",
" print(\"Anaconda environment is active:\")\n", " print(\"Anaconda environment is active:\")\n",
" print(f\"Environment Path: {conda_prefix}\")\n", " print(f\"Environment Path: {conda_prefix}\")\n",
" print(f\"Environment Name: {os.path.basename(conda_prefix)}\")\n", " conda_name = os.path.basename(conda_prefix)\n",
" print(f\"Environment Name: {conda_name}\")\n",
"\n", "\n",
"virtual_env = os.environ.get('VIRTUAL_ENV')\n", "virtual_env = os.environ.get('VIRTUAL_ENV')\n",
"if virtual_env:\n", "if virtual_env:\n",
" print(\"Virtualenv is active:\")\n", " print(\"Virtualenv is active:\")\n",
" print(f\"Environment Path: {virtual_env}\")\n", " print(f\"Environment Path: {virtual_env}\")\n",
" print(f\"Environment Name: {os.path.basename(virtual_env)}\")\n", " venv_name = os.path.basename(virtual_env)\n",
" print(f\"Environment Name: {venv_name}\")\n",
"\n", "\n",
"if not conda_prefix and not virtual_env:\n", "if conda_name != \"llms\" and venv_name != \"llms\":\n",
" print(\"Neither Anaconda nor Virtualenv seems to be active. Did you start jupyter lab in an Activated environment? See Setup Part 5.\")" " print(\"Neither Anaconda nor Virtualenv seem to be activated with the expected name 'llms'\")\n",
" print(\"Did you run 'jupyter lab' from an activated environment with (llms) showing on the command line?\")\n",
" print(\"If in doubt, close down all jupyter lab, and follow Part 5 in the SETUP-PC or SETUP-mac guide.\")"
] ]
}, },
{ {
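The point of the check above is that the environment *name* is the basename of the environment *path*, so the comparison against `llms` must use the basename, not the raw path. A minimal, self-contained sketch of that logic (`env_check` is an illustrative name, not from the notebook):

```python
import os

def env_check(environ, expected="llms"):
    """Return (name, ok): the basename of the active environment path
    and whether it matches the expected environment name."""
    prefix = environ.get("CONDA_PREFIX") or environ.get("VIRTUAL_ENV") or ""
    name = os.path.basename(prefix)
    return name, name == expected

print(env_check({"CONDA_PREFIX": "/opt/anaconda3/envs/llms"}))  # ('llms', True)
print(env_check({"VIRTUAL_ENV": "/home/me/.venvs/other"}))      # ('other', False)
```

Passing the environment mapping in as a parameter keeps the logic testable without mutating `os.environ`.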

196
week2/community-contributions/TTS_STT.ipynb

@ -0,0 +1,196 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "a60e0f78-4637-4318-9ab6-309c3f7f2799",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"\n",
"load_dotenv()\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
" print(\"API Key set\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"MODEL = \"gpt-4o-mini\"\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "67026ef0-23be-4101-9371-b11f96f505bf",
"metadata": {},
"outputs": [],
"source": [
"# TTS\n",
"\n",
"from pydub import AudioSegment\n",
"import os\n",
"import subprocess\n",
"from io import BytesIO\n",
"import tempfile\n",
"\n",
"# Set custom temp directory\n",
"custom_temp_dir = r\"D:\\projects\\llm_engineering-main\\temp\"\n",
"os.makedirs(custom_temp_dir, exist_ok=True)\n",
"\n",
"# Explicitly set FFmpeg paths\n",
"AudioSegment.converter = r\"D:\\Anaconda3\\envs\\llms\\Library\\bin\\ffmpeg.exe\"\n",
"AudioSegment.ffprobe = r\"D:\\Anaconda3\\envs\\llms\\Library\\bin\\ffprobe.exe\"\n",
"\n",
"def play_audio_with_ffplay(audio_segment, temp_dir):\n",
" # Explicitly create and manage a temporary file\n",
" temp_file_path = os.path.join(temp_dir, \"temp_output.wav\")\n",
" \n",
" # Export the audio to the temporary file\n",
" audio_segment.export(temp_file_path, format=\"wav\")\n",
" \n",
" try:\n",
" # Play the audio using ffplay\n",
" subprocess.call([\"ffplay\", \"-nodisp\", \"-autoexit\", temp_file_path])\n",
" finally:\n",
" # Clean up the temporary file after playback\n",
" if os.path.exists(temp_file_path):\n",
" os.remove(temp_file_path)\n",
"\n",
"def talker(message):\n",
"    # Generate speech via the OpenAI TTS endpoint\n",
" response = openai.audio.speech.create(\n",
" model=\"tts-1\",\n",
" voice=\"nova\",\n",
" input=message\n",
" )\n",
" \n",
" # Handle audio stream\n",
" audio_stream = BytesIO(response.content)\n",
" audio = AudioSegment.from_file(audio_stream, format=\"mp3\")\n",
" \n",
" # Play the audio\n",
" play_audio_with_ffplay(audio, custom_temp_dir)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12c66b44-293a-4bf9-b81e-0f6905fbf607",
"metadata": {},
"outputs": [],
"source": [
"# STT Whisper\n",
"\n",
"import whisper\n",
"import sounddevice as sd\n",
"import numpy as np\n",
"from scipy.io.wavfile import write\n",
"\n",
"def record_audio(temp_dir, duration=5, samplerate=16000, device_id=2):\n",
" # print(f\"Recording for {duration} seconds...\")\n",
" sd.default.device = (device_id, None)\n",
" audio = sd.rec(int(duration * samplerate), samplerate=samplerate, channels=1, dtype=\"int16\")\n",
" sd.wait() # Wait until the recording is finished\n",
" \n",
" audio_path = os.path.join(temp_dir, \"mic_input.wav\")\n",
" write(audio_path, samplerate, audio)\n",
" # print(f\"Audio recorded and saved to {audio_path}\")\n",
"\n",
" return audio_path\n",
"\n",
"\n",
"whisper_model = whisper.load_model(\"base\")\n",
"def transcribe_audio(audio_path): \n",
" # print(\"Transcribing audio...\")\n",
" result = whisper_model.transcribe(audio_path, language=\"en\")\n",
" return result[\"text\"]\n",
"\n",
"def mic_to_text():\n",
" audio_path = record_audio(custom_temp_dir, duration=10)\n",
" transcription = transcribe_audio(audio_path)\n",
" # print(f\"Transcription: {transcription}\")\n",
" return transcription"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0156c106-1844-444a-9a22-88c3475805d9",
"metadata": {},
"outputs": [],
"source": [
"# Chat Functions\n",
"\n",
"import requests\n",
"history = [{\"role\": \"system\", \"content\": \"You are Nova the friendly robot. Reply within a couple of sentences.\"}]\n",
"\n",
"def run_chat():\n",
" running = True\n",
" while running:\n",
" input_text = input(\"press Enter to talk\") \n",
" user_input = input_text if input_text.strip() else mic_to_text()\n",
" running = False if input_text == \"bye\" or user_input.strip() == \"bye\" else True\n",
" print(f\"\\nYou: {user_input}\\n\\n\")\n",
" history.append({\"role\": \"user\", \"content\": user_input}) \n",
" api_run = requests.post(\n",
" \"http://localhost:11434/api/chat\", \n",
" json={\n",
" \"model\": \"llama3.2\",\n",
" \"messages\": history,\n",
" \"stream\": False\n",
" }, \n",
" headers={\"Content-Type\": \"application/json\"}\n",
" )\n",
" output_message = api_run.json()['message']['content']\n",
" print(f\"Nova: {output_message}\\n\\n\") \n",
" talker(output_message)\n",
" history.append({\"role\": \"assistant\", \"content\": output_message})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "de61b54e-387e-4480-a592-c78e3245ddde",
"metadata": {},
"outputs": [],
"source": [
"run_chat()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce16bee7-6ea6-46d5-a407-385e6ae31db8",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
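The heart of `run_chat` above is the history bookkeeping: every user turn and assistant reply is appended to one OpenAI-style message list. A dependency-free sketch of just that part, with the network call and audio I/O replaced by a plain callable (`make_history`, `chat_turn`, and `echo` are illustrative names, not from the notebook):

```python
def make_history(system_prompt):
    """Start a fresh OpenAI-style message list with a system turn."""
    return [{"role": "system", "content": system_prompt}]

def chat_turn(history, user_input, reply_fn):
    """Append the user turn, obtain a reply, append it, and return it.

    reply_fn stands in for the POST to the Ollama /api/chat endpoint."""
    history.append({"role": "user", "content": user_input})
    reply = reply_fn(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = make_history("You are Nova the friendly robot.")
echo = lambda msgs: f"You said: {msgs[-1]['content']}"
print(chat_turn(history, "hello", echo))  # You said: hello
print(len(history))                       # 3  (system + user + assistant)
```

Keeping the history append logic separate from transport makes it easy to swap Ollama for any other chat backend.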

342
week2/community-contributions/day1-gpt-llama-gemini-together.ipynb

@ -0,0 +1,342 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
"metadata": {},
"source": [
"# Welcome to Week 2!\n",
"\n",
"## Frontier Model APIs\n",
"\n",
"In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with OpenAI's API.\n",
"\n",
"Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI."
]
},
{
"cell_type": "markdown",
"id": "85cfe275-4705-4d30-abea-643fbddf1db0",
"metadata": {},
"source": [
"## Setting up your keys\n",
"\n",
"If you haven't done so already, you could now create API keys for Anthropic and Google in addition to OpenAI.\n",
"\n",
"**Please note:** if you'd prefer to avoid extra API costs, feel free to skip setting up Anthropic and Google! You can see me do it, and focus on OpenAI for the course. You could also substitute Ollama for Anthropic and/or Google, using the exercise you did in week 1.\n",
"\n",
"For OpenAI, visit https://openai.com/api/ \n",
"For Anthropic, visit https://console.anthropic.com/ \n",
"For Google, visit https://ai.google.dev/gemini-api \n",
"\n",
"When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
"\n",
"```\n",
"OPENAI_API_KEY=xxxx\n",
"ANTHROPIC_API_KEY=xxxx\n",
"GOOGLE_API_KEY=xxxx\n",
"```\n",
"\n",
"Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import anthropic\n",
"from IPython.display import Markdown, display, update_display\n",
"import google.generativeai # For gemini"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"load_dotenv()\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"\n",
"else:\n",
" print(f\"OpenAI API Key not set\")\n",
"\n",
"if google_api_key:\n",
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n",
"\n",
"else:\n",
" print(f\"Google API key not set\")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "1da06c1b",
"metadata": {},
"outputs": [],
"source": [
"# This is for the GPT model\n",
"openai = OpenAI()\n",
"\n",
"# This is for Google Gemini\n",
"gemini_via_openai = OpenAI(base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\", api_key=google_api_key)\n",
"\n",
"# This is for local Llama\n",
"\n",
"llama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "f8aeb22f",
"metadata": {},
"outputs": [],
"source": [
"# Model Name:\n",
"GPT_MODEL = 'gpt-4o-mini'\n",
"GEMINI_MODEL = 'gemini-1.5-flash'\n",
"LLAMA_MODEL = 'llama3.2'"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "4e3007e9",
"metadata": {},
"outputs": [],
"source": [
"gpt_system = \"You are a chatbot who is very argumentative; \\\n",
"you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
"\n",
"gemini_system = \"You are a logical and factual chatbot. Your role is to evaluate statements made in \\\n",
" the conversation and provide evidence or reasoning. You avoid emotional responses and aim to bring clarity and resolve conflicts. \\\n",
" When the conversation becomes heated or illogical, you steer it back to a constructive and fact-based discussion.\"\n",
"\n",
"\n",
"llama_system = \"You are a very polite, courteous chatbot. However, you try to disagree, offering supportive \\\n",
"arguments. If the other person is argumentative, you try to calm them down, counter them, and keep chatting.\"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 44,
"id": "14d9b74e",
"metadata": {},
"outputs": [],
"source": [
"\n",
"gpt_messages = [\"Hi there\"]\n",
"gemini_messages = [\"Hello\"]\n",
"llama_messages = [\"Hi\"]\n",
"\n",
"# gpt_messages = [\"I think cats are better than dogs.\"]\n",
"# gemini_messages = [\"Can you provide evidence for why cats are better than dogs?\"]\n",
"# llama_messages = [\"I agree, but I also think dogs have their own charm!\"]\n"
]
},
{
"cell_type": "code",
"execution_count": 53,
"id": "6c7e7250",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt():\n",
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
" for gpt, gemini, llama in zip(gpt_messages, gemini_messages, llama_messages):\n",
" # Add GPT's response\n",
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
" # Add Gemini's response\n",
" messages.append({\"role\": \"user\", \"content\": gemini})\n",
" # Add Llama's response\n",
" messages.append({\"role\": \"user\", \"content\": llama})\n",
"\n",
" completion = openai.chat.completions.create(\n",
" model=GPT_MODEL,\n",
" messages=messages\n",
" )\n",
"\n",
" return completion.choices[0].message.content\n"
]
},
{
"cell_type": "markdown",
"id": "2e0b601f",
"metadata": {},
"source": [
"```python\n",
"messages:\n",
"[\n",
" {\"role\": \"system\", \"content\": \"You are a chatbot who is very argumentative; you disagree...\"},\n",
" {\"role\": \"assistant\", \"content\": \"I think cats are better than dogs.\"},\n",
" {\"role\": \"user\", \"content\": \"Can you provide evidence for why cats are better than dogs?\"},\n",
" {\"role\": \"user\", \"content\": \"I agree, but I also think dogs have their own charm!\"}\n",
"]\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6c031314",
"metadata": {},
"outputs": [],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": 55,
"id": "c2cb3905",
"metadata": {},
"outputs": [],
"source": [
"def call_gemini():\n",
" messages = [{\"role\": \"system\", \"content\": gemini_system}]\n",
" for gpt, gemini, llama in zip(gpt_messages, gemini_messages, llama_messages):\n",
" # Add GPT's response\n",
" messages.append({\"role\": \"user\", \"content\": gpt})\n",
" # Add Gemini's response\n",
" messages.append({\"role\": \"assistant\", \"content\": gemini})\n",
" # Add Llama's response\n",
" messages.append({\"role\": \"user\", \"content\": llama})\n",
" \n",
" # print(messages)\n",
"\n",
" try:\n",
" # Use gemini_via_openai instead of openai\n",
" completion = gemini_via_openai.chat.completions.create(\n",
" model=GEMINI_MODEL,\n",
" messages=messages\n",
" )\n",
" return completion.choices[0].message.content\n",
" except Exception as e:\n",
" print(f\"Error in Gemini call: {e}\")\n",
" return \"An error occurred in Gemini.\"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5c9d4803",
"metadata": {},
"outputs": [],
"source": [
"call_gemini()"
]
},
{
"cell_type": "code",
"execution_count": 56,
"id": "109e63e4",
"metadata": {},
"outputs": [],
"source": [
"def call_llama():\n",
" messages = [{\"role\": \"system\", \"content\": llama_system}]\n",
" for gpt, gemini, llama in zip(gpt_messages, gemini_messages, llama_messages):\n",
" messages.append({\"role\": \"user\", \"content\": gpt})\n",
" messages.append({\"role\": \"user\", \"content\": gemini})\n",
" messages.append({\"role\": \"assistant\", \"content\": llama})\n",
"\n",
" # print(messages)\n",
"\n",
" try:\n",
" response = llama_via_openai.chat.completions.create(\n",
" model=LLAMA_MODEL,\n",
" messages=messages\n",
" )\n",
" return response.choices[0].message.content\n",
" except Exception as e:\n",
" print(f\"Error in Llama call: {e}\")\n",
" return \"An error occurred in Llama.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6e24eb6d",
"metadata": {},
"outputs": [],
"source": [
"call_llama()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f76f5b2a",
"metadata": {},
"outputs": [],
"source": [
"gpt_messages = [\"I think cats are better than dogs.\"]\n",
"gemini_messages = [\"Can you provide evidence for why cats are better than dogs?\"]\n",
"llama_messages = [\"I agree, but I also think dogs have their own charm!\"]\n",
"\n",
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
"print(f\"Llama:\\n{llama_messages[0]}\\n\")\n",
"\n",
"for i in range(5):\n",
" gpt_next = call_gpt()\n",
" print(f\"GPT:\\n{gpt_next}\\n\")\n",
" gpt_messages.append(gpt_next)\n",
" \n",
" llama_next = call_llama()\n",
" print(f\"Llama:\\n{llama_next}\\n\")\n",
" llama_messages.append(llama_next)\n",
"\n",
"    gemini_next = call_gemini()\n",
"    print(f\"Gemini:\\n{gemini_next}\\n\")\n",
"    gemini_messages.append(gemini_next)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "80f0e498",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "llm_env",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
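All three `call_*` functions above implement the same pattern: zip the three message lists and tag this bot's own turns as `assistant` and the other two bots' turns as `user`. A sketch of that pattern factored into one function (`build_messages` and `self_index` are illustrative names, not from the notebook):

```python
def build_messages(system_prompt, rounds, self_index):
    """Rebuild an OpenAI-style message list from parallel per-bot turns.

    rounds: list of (gpt, gemini, llama) turn tuples, one per round.
    self_index: which speaker this bot is; its turns get role 'assistant',
    everyone else's get role 'user', preserving the round order."""
    messages = [{"role": "system", "content": system_prompt}]
    for turns in rounds:
        for i, text in enumerate(turns):
            role = "assistant" if i == self_index else "user"
            messages.append({"role": role, "content": text})
    return messages

# One round of the opening turns from the notebook:
rounds = list(zip(["Hi there"], ["Hello"], ["Hi"]))

# As GPT sees it (self_index=0): its own "Hi there" is the assistant turn.
print(build_messages("You are snarky.", rounds, self_index=0))
```

One function with a `self_index` parameter avoids the copy-paste drift that caused the `call_llama`-instead-of-`call_gemini` slip in the conversation loop.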

264
week2/community-contributions/day4-handle-multiple-tool-call.ipynb

@ -0,0 +1,264 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ddfa9ae6-69fe-444a-b994-8c4c5970a7ec",
"metadata": {},
"source": [
"# Project - Airline AI Assistant\n",
"\n",
"We'll now bring together what we've learned to make an AI Customer Support assistant for an Airline"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import gradio as gr"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae",
"metadata": {},
"outputs": [],
"source": [
"# Initialization\n",
"\n",
"load_dotenv()\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"MODEL = \"gpt-4o-mini\"\n",
"openai = OpenAI()\n",
"\n",
"# As an alternative, if you'd like to use Ollama instead of OpenAI\n",
"# Check that Ollama is running for you locally (see week1/day2 exercise) then uncomment these next 2 lines\n",
"# MODEL = \"llama3.2\"\n",
"# openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0a521d84-d07c-49ab-a0df-d6451499ed97",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n",
"system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n",
"system_message += \"Always be accurate. If you don't know the answer, say so.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "61a2a15d-b559-4844-b377-6bd5cb4949f6",
"metadata": {},
"outputs": [],
"source": [
"# This function looks rather simpler than the one from my video, because we're taking advantage of the latest Gradio updates\n",
"\n",
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" return response.choices[0].message.content\n",
"\n",
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "markdown",
"id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"Tools are an incredibly powerful feature provided by the frontier LLMs.\n",
"\n",
"With tools, you can write a function, and have the LLM call that function as part of its response.\n",
"\n",
"Sounds almost spooky... we're giving it the power to run code on our machine?\n",
"\n",
"Well, kinda."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2",
"metadata": {},
"outputs": [],
"source": [
"# Let's start by making a useful function\n",
"\n",
"ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\"}\n",
"\n",
"def get_ticket_price(destination_city):\n",
" print(f\"Tool get_ticket_price called for {destination_city}\")\n",
" city = destination_city.lower()\n",
" return ticket_prices.get(city, \"Unknown\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85",
"metadata": {},
"outputs": [],
"source": [
"get_ticket_price(\"Berlin\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4afceded-7178-4c05-8fa6-9f2085e6a344",
"metadata": {},
"outputs": [],
"source": [
"# There's a particular dictionary structure that's required to describe our function:\n",
"\n",
"price_function = {\n",
" \"name\": \"get_ticket_price\",\n",
" \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city that the customer wants to travel to\",\n",
" },\n",
" },\n",
" \"required\": [\"destination_city\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c",
"metadata": {},
"outputs": [],
"source": [
"# And this is included in a list of tools:\n",
"\n",
"tools = [{\"type\": \"function\", \"function\": price_function}]"
]
},
{
"cell_type": "markdown",
"id": "c3d3554f-b4e3-4ce7-af6f-68faa6dd2340",
"metadata": {},
"source": [
"## Getting OpenAI to use our Tool\n",
"\n",
"There's some fiddly stuff to allow OpenAI to \"call our tool\".\n",
"\n",
"What we actually do is give the LLM the opportunity to inform us that it wants us to run the tool.\n",
"\n",
"Here's how the new chat function looks:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce9b0744-9c78-408d-b9df-9f6fd9ed78cf",
"metadata": {},
"outputs": [],
"source": [
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n",
"\n",
" if response.choices[0].finish_reason==\"tool_calls\":\n",
" message = response.choices[0].message\n",
" responses = handle_tool_call(message)\n",
" messages.append(message)\n",
" for response in responses:\n",
" messages.append(response)\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" \n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0992986-ea09-4912-a076-8e5603ee631f",
"metadata": {},
"outputs": [],
"source": [
"# We have to write that function handle_tool_call:\n",
"\n",
"def handle_tool_call(message):\n",
" responses = []\n",
" for tool_call in message.tool_calls:\n",
" arguments = json.loads(tool_call.function.arguments)\n",
" city = arguments.get('destination_city')\n",
" price = get_ticket_price(city)\n",
" responses.append({\n",
" \"role\": \"tool\",\n",
" \"content\": json.dumps({\"destination_city\": city,\"price\": price}),\n",
" \"tool_call_id\": tool_call.id\n",
" })\n",
" return responses"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14",
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "11c9da69-d0cf-4cf2-a49e-e5669deec47b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
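Because `handle_tool_call` above only reads `tool_calls`, each call's `id`, and its `function.arguments`, it can be exercised without the API by stubbing the message object. A sketch using `types.SimpleNamespace` (`fake_call` is an illustrative helper, not from the notebook):

```python
import json
from types import SimpleNamespace

ticket_prices = {"london": "$799", "paris": "$899", "tokyo": "$1400", "berlin": "$499"}

def get_ticket_price(destination_city):
    return ticket_prices.get(destination_city.lower(), "Unknown")

def handle_tool_call(message):
    """Build one 'tool' reply per tool call, as in the notebook above."""
    responses = []
    for tool_call in message.tool_calls:
        arguments = json.loads(tool_call.function.arguments)
        city = arguments.get("destination_city")
        price = get_ticket_price(city)
        responses.append({
            "role": "tool",
            "content": json.dumps({"destination_city": city, "price": price}),
            "tool_call_id": tool_call.id,
        })
    return responses

def fake_call(call_id, city):
    """Stub with the same attribute shape as an OpenAI tool call."""
    return SimpleNamespace(
        id=call_id,
        function=SimpleNamespace(arguments=json.dumps({"destination_city": city})),
    )

message = SimpleNamespace(tool_calls=[fake_call("c1", "Berlin"), fake_call("c2", "Oslo")])
replies = handle_tool_call(message)
print([json.loads(r["content"])["price"] for r in replies])  # ['$499', 'Unknown']
```

Each reply carries its originating `tool_call_id`, which is what lets the model match multiple tool results back to the calls it made in a single turn.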

2
week2/day4.ipynb

@ -270,7 +270,7 @@
" response = {\n", " response = {\n",
" \"role\": \"tool\",\n", " \"role\": \"tool\",\n",
" \"content\": json.dumps({\"destination_city\": city,\"price\": price}),\n", " \"content\": json.dumps({\"destination_city\": city,\"price\": price}),\n",
" \"tool_call_id\": message.tool_calls[0].id\n", " \"tool_call_id\": tool_call.id\n",
" }\n", " }\n",
" return response, city" " return response, city"
] ]

77
week2/day5.ipynb

@ -265,7 +265,7 @@
" response = {\n", " response = {\n",
" \"role\": \"tool\",\n", " \"role\": \"tool\",\n",
" \"content\": json.dumps({\"destination_city\": city,\"price\": price}),\n", " \"content\": json.dumps({\"destination_city\": city,\"price\": price}),\n",
" \"tool_call_id\": message.tool_calls[0].id\n", " \"tool_call_id\": tool_call.id\n",
" }\n", " }\n",
" return response, city" " return response, city"
] ]
@ -504,7 +504,7 @@
"id": "d91d3f8f-e505-4e3c-a87c-9e42ed823db6", "id": "d91d3f8f-e505-4e3c-a87c-9e42ed823db6",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# For Mac users\n", "# For Mac users - and possibly many PC users too\n",
"\n", "\n",
"This version should work fine for you. It might work for Windows users too, but you might get a Permissions error writing to a temp file. If so, see the next section!\n", "This version should work fine for you. It might work for Windows users too, but you might get a Permissions error writing to a temp file. If so, see the next section!\n",
"\n", "\n",
@ -573,13 +573,13 @@
"id": "ad89a9bd-bb1e-4bbb-a49a-83af5f500c24", "id": "ad89a9bd-bb1e-4bbb-a49a-83af5f500c24",
"metadata": {}, "metadata": {},
"source": [ "source": [
"# For Windows users\n", "# For Windows users (or any Mac users with problems above)\n",
"\n", "\n",
"## First try the Mac version above, but if you get a permissions error writing to a temp file, then this code should work instead.\n", "## First try the Mac version above, but if you get a permissions error writing to a temp file, then this code should work instead.\n",
"\n", "\n",
"A collaboration between students Mark M. and Patrick H. and Claude got this resolved!\n", "A collaboration between students Mark M. and Patrick H. and Claude got this resolved!\n",
"\n", "\n",
"Below are 3 variations - hopefully one of them will work on your PC. If not, message me please!\n", "Below are 4 variations - hopefully one of them will work on your PC. If not, message me please!\n",
"\n", "\n",
"And for Mac people - all 3 of the below work on my Mac too - please try these if the Mac version gave you problems.\n", "And for Mac people - all 4 of the below work on my Mac too - please try these if the Mac version gave you problems.\n",
"\n", "\n",
@ -589,7 +589,44 @@
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": null,
"id": "2a1e7749-2c46-4453-b8f9-18874a613a38", "id": "d104b96a-02ca-4159-82fe-88e0452aa479",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"from io import BytesIO\n",
"from PIL import Image\n",
"from IPython.display import Audio, display\n",
"\n",
"def talker(message):\n",
" response = openai.audio.speech.create(\n",
" model=\"tts-1\",\n",
" voice=\"onyx\",\n",
" input=message)\n",
"\n",
" audio_stream = BytesIO(response.content)\n",
" output_filename = \"output_audio.mp3\"\n",
" with open(output_filename, \"wb\") as f:\n",
" f.write(audio_stream.read())\n",
"\n",
" # Play the generated audio\n",
" display(Audio(output_filename, autoplay=True))\n",
"\n",
"talker(\"Well, hi there\")"
]
},
{
"cell_type": "markdown",
"id": "3a5d11f4-bbd3-43a1-904d-f684eb5f3e3a",
"metadata": {},
"source": [
"## PC Variation 2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d59c8ebd-79c5-498a-bdf2-3a1c50d91aa0",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
"source": [ "source": [
@ -636,7 +673,7 @@
"id": "96f90e35-f71e-468e-afea-07b98f74dbcf", "id": "96f90e35-f71e-468e-afea-07b98f74dbcf",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## PC Variation 2" "## PC Variation 3"
] ]
}, },
{ {
@ -679,7 +716,7 @@
"id": "e821224c-b069-4f9b-9535-c15fdb0e411c", "id": "e821224c-b069-4f9b-9535-c15fdb0e411c",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## PC Variation 3\n", "## PC Variation 4\n",
"\n", "\n",
"### Let's try a completely different sound library\n", "### Let's try a completely different sound library\n",
"\n", "\n",
@ -740,7 +777,7 @@
"id": "7986176b-cd04-495f-a47f-e057b0e462ed", "id": "7986176b-cd04-495f-a47f-e057b0e462ed",
"metadata": {}, "metadata": {},
"source": [ "source": [
"## PC Users - if none of those 3 variations worked!\n", "## PC Users - if none of those 4 variations worked!\n",
"\n", "\n",
"Please get in touch with me. I'm sorry this is causing problems! We'll figure it out.\n", "Please get in touch with me. I'm sorry this is causing problems! We'll figure it out.\n",
"\n", "\n",
@ -903,12 +940,24 @@
] ]
}, },
{ {
"cell_type": "code", "cell_type": "markdown",
"execution_count": null, "id": "7e795560-1867-42db-a256-a23b844e6fbe",
"id": "d8e39e42-13d2-4271-b8b3-3a14b8a12bf4", "metadata": {},
"metadata": {}, "source": [
"outputs": [], "<table style=\"margin: 0; text-align: left;\">\n",
"source": [] " <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../thankyou.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#090;\">I have a special request for you</h2>\n",
" <span style=\"color:#090;\">\n",
" My editor tells me that it makes a HUGE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others. If you're able to take a minute to rate this, I'd be so very grateful! And regardless - always please reach out to me at ed@edwarddonner.com if I can help at any point.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
} }
], ],
"metadata": { "metadata": {

267
week3/community-contributions/dataset_generator.ipynb

@@ -0,0 +1,267 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"gpuType": "T4"
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "code",
"source": [
"!pip install -q requests torch bitsandbytes transformers sentencepiece accelerate gradio"
],
"metadata": {
"id": "kU2JrcPlhwd9"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Imports**"
],
"metadata": {
"id": "lAMIVT4iwNg0"
}
},
{
"cell_type": "code",
"source": [
"import os\n",
"import requests\n",
"from google.colab import drive\n",
"from huggingface_hub import login\n",
"from google.colab import userdata\n",
"from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, BitsAndBytesConfig\n",
"import torch\n",
"import gradio as gr\n",
"\n",
"hf_token = userdata.get('HF_TOKEN')\n",
"login(hf_token, add_to_git_credential=True)"
],
"metadata": {
"id": "-Apd7-p-hyLk"
},
"execution_count": 2,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Model**"
],
"metadata": {
"id": "xa0qYqZrwQ66"
}
},
{
"cell_type": "code",
"source": [
"model_name = \"meta-llama/Meta-Llama-3.1-8B-Instruct\"\n",
"quant_config = BitsAndBytesConfig(\n",
" load_in_4bit=True,\n",
" bnb_4bit_use_double_quant=True,\n",
" bnb_4bit_compute_dtype=torch.bfloat16,\n",
" bnb_4bit_quant_type=\"nf4\"\n",
")\n",
"\n",
"model = AutoModelForCausalLM.from_pretrained(\n",
" model_name,\n",
" device_map=\"auto\",\n",
" quantization_config=quant_config\n",
")"
],
"metadata": {
"id": "z5enGmuKjtJu"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Tokenizer**"
],
"metadata": {
"id": "y1hUSmWlwSbp"
}
},
{
"cell_type": "code",
"source": [
"tokenizer = AutoTokenizer.from_pretrained(model_name)\n",
"tokenizer.pad_token = tokenizer.eos_token"
],
"metadata": {
"id": "WjxNWW6bvdgj"
},
"execution_count": 4,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Functions**"
],
"metadata": {
"id": "1pg2U-B3wbIK"
}
},
{
"cell_type": "code",
"source": [
"def generate_dataset(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3):\n",
" # Convert user inputs into multi-shot examples\n",
" multi_shot_examples = [\n",
" {\"instruction\": inst1, \"response\": resp1},\n",
" {\"instruction\": inst2, \"response\": resp2},\n",
" {\"instruction\": inst3, \"response\": resp3}\n",
" ]\n",
"\n",
" # System prompt\n",
" system_prompt = f\"\"\"\n",
" You are a helpful assistant whose main purpose is to generate datasets.\n",
" Topic: {topic}\n",
" Return the dataset in JSON format. Use examples with simple, fun, and easy-to-understand instructions for kids.\n",
" Include the following examples: {multi_shot_examples}\n",
" Return {number_of_data} examples each time.\n",
" Do not repeat the provided examples.\n",
" \"\"\"\n",
"\n",
" # Example Messages\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": f\"Please generate my dataset for {topic}\"}\n",
" ]\n",
"\n",
" # Tokenize Input\n",
" inputs = tokenizer.apply_chat_template(messages, return_tensors=\"pt\").to(\"cuda\")\n",
" streamer = TextStreamer(tokenizer)\n",
"\n",
" # Generate Output\n",
" outputs = model.generate(inputs, max_new_tokens=2000, streamer=streamer)\n",
"\n",
" # Decode and Return\n",
" return tokenizer.decode(outputs[0], skip_special_tokens=True)\n",
"\n",
"\n",
"def gradio_interface(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3):\n",
" return generate_dataset(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3)"
],
"metadata": {
"id": "ZvljDKdji8iV"
},
"execution_count": 12,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Default Values**"
],
"metadata": {
"id": "_WDZ5dvRwmng"
}
},
{
"cell_type": "code",
"source": [
"default_topic = \"Talking to a (5-8) years old and teaching them manners.\"\n",
"default_number_of_data = 10\n",
"default_multi_shot_examples = [\n",
" {\n",
" \"instruction\": \"Why do I have to say please when I want something?\",\n",
" \"response\": \"Because it’s like magic! It shows you’re nice, and people want to help you more.\"\n",
" },\n",
" {\n",
" \"instruction\": \"What should I say if someone gives me a toy?\",\n",
" \"response\": \"You say, 'Thank you!' because it makes them happy you liked it.\"\n",
" },\n",
" {\n",
" \"instruction\": \"why should I listen to my parents?\",\n",
" \"response\": \"Because parents want the best for you and they love you the most.\"\n",
" }\n",
"]"
],
"metadata": {
"id": "JAdfqYXnvEDE"
},
"execution_count": 13,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Init gradio**"
],
"metadata": {
"id": "JwZtD032wuK8"
}
},
{
"cell_type": "code",
"source": [
"gr_interface = gr.Interface(\n",
" fn=gradio_interface,\n",
" inputs=[\n",
" gr.Textbox(label=\"Topic\", value=default_topic, lines=2),\n",
" gr.Number(label=\"Number of Examples\", value=default_number_of_data, precision=0),\n",
" gr.Textbox(label=\"Instruction 1\", value=default_multi_shot_examples[0][\"instruction\"]),\n",
" gr.Textbox(label=\"Response 1\", value=default_multi_shot_examples[0][\"response\"]),\n",
" gr.Textbox(label=\"Instruction 2\", value=default_multi_shot_examples[1][\"instruction\"]),\n",
" gr.Textbox(label=\"Response 2\", value=default_multi_shot_examples[1][\"response\"]),\n",
" gr.Textbox(label=\"Instruction 3\", value=default_multi_shot_examples[2][\"instruction\"]),\n",
" gr.Textbox(label=\"Response 3\", value=default_multi_shot_examples[2][\"response\"]),\n",
" ],\n",
" outputs=gr.Textbox(label=\"Generated Dataset\")\n",
")"
],
"metadata": {
"id": "xy2RP5T-vxXg"
},
"execution_count": 14,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"**Run the app**"
],
"metadata": {
"id": "HZx-mm9Uw3Ph"
}
},
{
"cell_type": "code",
"source": [
"gr_interface.launch()"
],
"metadata": {
"id": "bfGs5ip8mndg"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [],
"metadata": {
"id": "Cveqx392x7Mm"
},
"execution_count": null,
"outputs": []
}
]
}
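The generator in this notebook returns the model's raw decoded text, which still contains the chat template and any prose around the dataset. A minimal post-processing sketch for pulling the JSON out of such a reply — the helper name, the regex, and the example reply format are assumptions for illustration, not part of the notebook:

```python
import json
import re

def extract_json_examples(raw_reply):
    """Find the first JSON array in a model reply that may be wrapped
    in prose or markdown fences, and parse it; return [] on failure."""
    match = re.search(r"\[.*\]", raw_reply, re.DOTALL)
    if not match:
        return []
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return []

# Hypothetical reply shape, for illustration only
reply = 'Here is your dataset:\n```json\n[{"instruction": "Say hi", "response": "Hi!"}]\n```'
examples = extract_json_examples(reply)
```

In practice a reply may contain trailing commentary or malformed JSON, so returning an empty list (rather than raising) keeps a Gradio callback from crashing mid-session.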

493
week4/community-contributions/Day 3 using gemini.ipynb

@@ -0,0 +1,493 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3d3cb3c4-9046-4f64-9188-ee20ae324fd1",
"metadata": {},
"source": [
"# Code Generator\n",
"\n",
"The requirement: use a Frontier model to generate high performance C++ code from Python code\n",
"\n",
"# Important Note\n",
"Used an open-source model gemini-1.5-pro ,can try 2.0 flash too\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f2c3e03-f38a-4bf2-98e8-696fb3d428c9",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import io\n",
"import sys\n",
"from dotenv import load_dotenv\n",
"import google.generativeai\n",
"from IPython.display import Markdown, display, update_display\n",
"import gradio as gr\n",
"import subprocess"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e437f3d1-39c4-47fd-919f-c2119d602d72",
"metadata": {},
"outputs": [],
"source": [
"# environment\n",
"\n",
"load_dotenv()\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"if google_api_key:\n",
" print(f\"Google API Key exists\")\n",
"else:\n",
" print(\"Google API Key not set\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1724ddb6-0059-46a3-bcf9-587c0c93cb2a",
"metadata": {},
"outputs": [],
"source": [
"google.generativeai.configure()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b62738c1-9857-40fc-91e8-dfd46483ea50",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are an assistant that reimplements Python code in high performance C++ for an Windows system. \"\n",
"system_message += \"Respond only with C++ code; use comments sparingly and do not provide any explanation other than occasional comments. \"\n",
"system_message += \"The C++ response needs to produce an identical output in the fastest possible time.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bd431141-8602-4c68-9a1d-a7c0a6f13fa3",
"metadata": {},
"outputs": [],
"source": [
"def user_prompt_for(python):\n",
" user_prompt = \"Rewrite this Python code in C++ with the fastest possible implementation that produces identical output in the least time. \"\n",
" user_prompt += \"Respond only with C++ code; do not explain your work other than a few comments. \"\n",
" user_prompt += \"Pay attention to number types to ensure no int overflows. Remember to #include all necessary C++ packages such as iomanip.\\n\\n\"\n",
" user_prompt += python\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5f48451-4cd4-46ea-a41d-531a3c7db2a8",
"metadata": {},
"outputs": [],
"source": [
"def messages_for(python):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(python)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "83fd2170-14ea-4fb6-906e-c3c5cfce1ecc",
"metadata": {},
"outputs": [],
"source": [
"# write to a file called optimized.cpp\n",
"\n",
"def write_output(cpp):\n",
" code = cpp.replace(\"```cpp\",\"\").replace(\"```\",\"\")\n",
" with open(\"optimized.cpp\", \"w\") as f:\n",
" f.write(code)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ff08067-c9df-4981-8ab5-99eb2c2fd2c7",
"metadata": {},
"outputs": [],
"source": [
"def optimize_google(python):\n",
" # Initialize empty reply string\n",
" reply = \"\"\n",
" \n",
" # The API for Gemini has a slightly different structure\n",
" gemini = google.generativeai.GenerativeModel(\n",
" model_name='gemini-1.5-pro',\n",
" system_instruction=system_message\n",
" )\n",
" \n",
" response = gemini.generate_content(\n",
" user_prompt_for(python),\n",
" stream=True\n",
" )\n",
" \n",
" # Process the stream\n",
" for chunk in response:\n",
" # Extract text from the chunk\n",
" if chunk.text:\n",
" reply += chunk.text\n",
" print(chunk.text, end=\"\", flush=True)\n",
" \n",
" # Write the complete response to output\n",
" write_output(reply)\n",
" \n",
" # return reply"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e8c7ba2-4ee9-4523-b0f1-cc7a91798bba",
"metadata": {},
"outputs": [],
"source": [
"pi = \"\"\"\n",
"import time\n",
"\n",
"def calculate(iterations, param1, param2):\n",
" result = 1.0\n",
" for i in range(1, iterations+1):\n",
" j = i * param1 - param2\n",
" result -= (1/j)\n",
" j = i * param1 + param2\n",
" result += (1/j)\n",
" return result\n",
"\n",
"start_time = time.time()\n",
"result = calculate(100_000_000, 4, 1) * 4\n",
"end_time = time.time()\n",
"\n",
"print(f\"Result: {result:.12f}\")\n",
"print(f\"Execution Time: {(end_time - start_time):.6f} seconds\")\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "78d1afb7-ed6b-4a03-b36d-4ce8249c592e",
"metadata": {},
"outputs": [],
"source": [
"exec(pi)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1fe1d0b6-7cc7-423b-bc4b-741a0c48c106",
"metadata": {},
"outputs": [],
"source": [
"optimize_google(pi)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d28b4ac9-0909-4b35-aee1-97613a133e8e",
"metadata": {},
"outputs": [],
"source": [
"exec(pi) #Execution Time: 16.209231 seconds"
]
},
{
"cell_type": "markdown",
"id": "7d0443a3-3ca2-4a7a-a6c3-c94d0aa54603",
"metadata": {},
"source": [
"# Compiling C++ and executing\n",
"\n",
"This next cell contains the command to compile a C++ file on Windows system. \n",
"It compiles the file `optimized.cpp` into an executable called `optimized` \n",
"Then it runs the program called `optimized`\n",
"\n",
"The way to compile for mac users is \\\n",
"!clang++ -O3 -std=c++17 -march=armv8.3-a -o optimized optimized.cpp \\\n",
"!./optimized"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b5cfc70-df1f-44a7-b4ae-fd934f715930",
"metadata": {},
"outputs": [],
"source": [
"!g++ -o optimized optimized.cpp\n",
"!.\\optimized #Execution Time: 3.661196 seconds"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e30fcbdf-82cf-4d50-9690-92dae69d5127",
"metadata": {},
"outputs": [],
"source": [
"python_hard = \"\"\"\n",
"def lcg(seed, a=1664525, c=1013904223, m=2**32):\n",
" value = seed\n",
" while True:\n",
" value = (a * value + c) % m\n",
" yield value\n",
" \n",
"def max_subarray_sum(n, seed, min_val, max_val):\n",
" lcg_gen = lcg(seed)\n",
" random_numbers = [next(lcg_gen) % (max_val - min_val + 1) + min_val for _ in range(n)]\n",
" max_sum = float('-inf')\n",
" for i in range(n):\n",
" current_sum = 0\n",
" for j in range(i, n):\n",
" current_sum += random_numbers[j]\n",
" if current_sum > max_sum:\n",
" max_sum = current_sum\n",
" return max_sum\n",
"\n",
"def total_max_subarray_sum(n, initial_seed, min_val, max_val):\n",
" total_sum = 0\n",
" lcg_gen = lcg(initial_seed)\n",
" for _ in range(20):\n",
" seed = next(lcg_gen)\n",
" total_sum += max_subarray_sum(n, seed, min_val, max_val)\n",
" return total_sum\n",
"\n",
"# Parameters\n",
"n = 10000 # Number of random numbers\n",
"initial_seed = 42 # Initial seed for the LCG\n",
"min_val = -10 # Minimum value of random numbers\n",
"max_val = 10 # Maximum value of random numbers\n",
"\n",
"# Timing the function\n",
"import time\n",
"start_time = time.time()\n",
"result = total_max_subarray_sum(n, initial_seed, min_val, max_val)\n",
"end_time = time.time()\n",
"\n",
"print(\"Total Maximum Subarray Sum (20 runs):\", result)\n",
"print(\"Execution Time: {:.6f} seconds\".format(end_time - start_time))\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2e8e111c-6f69-4ed0-8f86-8ed5982aa065",
"metadata": {},
"outputs": [],
"source": [
"exec(python_hard) #Execution Time: 62.297366 seconds"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "38038ac1-5cdf-49d7-a286-a5871d5af583",
"metadata": {},
"outputs": [],
"source": [
"optimize_google(python_hard)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08cb9619-b8ae-42e7-9375-4b3918c37fd0",
"metadata": {},
"outputs": [],
"source": [
"!g++ -o optimized optimized.cpp\n",
"!.\\optimized"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "acd17a0d-f9f1-45a6-8151-916d8e6b9e4f",
"metadata": {},
"outputs": [],
"source": [
"def stream_google(python):\n",
" # Initialize empty reply string\n",
" reply = \"\"\n",
" \n",
" # The API for Gemini has a slightly different structure\n",
" gemini = google.generativeai.GenerativeModel(\n",
" model_name='gemini-1.5-pro',\n",
" system_instruction=system_message\n",
" )\n",
" \n",
" response = gemini.generate_content(\n",
" user_prompt_for(python),\n",
" stream=True\n",
" )\n",
" \n",
" # Process the stream\n",
" for chunk in response:\n",
" # Extract text from the chunk\n",
" if chunk.text:\n",
" reply += chunk.text\n",
" yield reply.replace('```cpp\\n','').replace('```','')\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3177229-d6cf-4df2-81a7-9e1f3b229c19",
"metadata": {},
"outputs": [],
"source": [
"def optimize(python, model):\n",
" result=stream_google(python)\n",
" for stream_so_far in result:\n",
" yield stream_so_far "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c2476c2d-9218-4d30-bcc9-9cc5271c3a00",
"metadata": {},
"outputs": [],
"source": [
"with gr.Blocks() as ui:\n",
" with gr.Row():\n",
" python = gr.Textbox(label=\"Python code:\", lines=10, value=pi)\n",
" cpp = gr.Textbox(label=\"C++ code:\", lines=10)\n",
" with gr.Row():\n",
" model = gr.Dropdown([\"Google\"], label=\"Select model\", value=\"Google\")\n",
" convert = gr.Button(\"Convert code\")\n",
"\n",
" convert.click(optimize, inputs=[python, model], outputs=[cpp])\n",
"\n",
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a30de175-af4e-428a-8942-1c41997c01f1",
"metadata": {},
"outputs": [],
"source": [
"def execute_python(code):\n",
" try:\n",
" output = io.StringIO()\n",
" sys.stdout = output\n",
" exec(code)\n",
" finally:\n",
" sys.stdout = sys.__stdout__\n",
" return output.getvalue()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "20c6316d-b090-42c5-9be9-7d5a178b97b3",
"metadata": {},
"outputs": [],
"source": [
"def execute_cpp(code):\n",
" write_output(code)\n",
" try:\n",
" # compile_cmd = [\"clang++\", \"-Ofast\", \"-std=c++17\", \"-march=armv8.5-a\", \"-mtune=apple-m1\", \"-mcpu=apple-m1\", \"-o\", \"optimized\", \"optimized.cpp\"]\n",
" compile_cmd = [\"g++\", \"-o\", \"optimized\", \"optimized.cpp\"]\n",
" compile_result = subprocess.run(compile_cmd, check=True, text=True, capture_output=True)\n",
" run_cmd = [\"./optimized\"]\n",
" run_result = subprocess.run(run_cmd, check=True, text=True, capture_output=True)\n",
" return run_result.stdout\n",
" except subprocess.CalledProcessError as e:\n",
" return f\"An error occurred:\\n{e.stderr}\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "950a459f-3ef6-4afd-9e83-f01c032aa21b",
"metadata": {},
"outputs": [],
"source": [
"css = \"\"\"\n",
".python {background-color: #306998;}\n",
".cpp {background-color: #050;}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bc3d90ba-716c-4b8f-989f-46c2447c42fa",
"metadata": {},
"outputs": [],
"source": [
"with gr.Blocks(css=css) as ui:\n",
" gr.Markdown(\"## Convert code from Python to C++\")\n",
" with gr.Row():\n",
" python = gr.Textbox(label=\"Python code:\", value=pi, lines=10)\n",
" cpp = gr.Textbox(label=\"C++ code:\", lines=10)\n",
" with gr.Row():\n",
" model = gr.Dropdown([\"Google\"], label=\"Select model\", value=\"Google\")\n",
" with gr.Row():\n",
" convert = gr.Button(\"Convert code\")\n",
" with gr.Row():\n",
" python_run = gr.Button(\"Run Python\")\n",
" cpp_run = gr.Button(\"Run C++\")\n",
" with gr.Row():\n",
" python_out = gr.TextArea(label=\"Python result:\", elem_classes=[\"python\"])\n",
" cpp_out = gr.TextArea(label=\"C++ result:\", elem_classes=[\"cpp\"])\n",
"\n",
" convert.click(optimize, inputs=[python, model], outputs=[cpp])\n",
" python_run.click(execute_python, inputs=[python], outputs=[python_out])\n",
" cpp_run.click(execute_cpp, inputs=[cpp], outputs=[cpp_out])\n",
"\n",
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c12f6115-e8a9-494e-95ce-2566854c0aa2",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
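The `write_output` helper in the notebook above strips markdown fences with blanket `replace("```cpp","").replace("```","")` calls, which would also delete any backticks that appear inside string literals in the generated C++. A slightly more careful sketch, removing only a leading fence line and a trailing closing fence (this is an alternative suggestion, not the notebook's code):

```python
def strip_code_fence(text):
    """Remove a leading ``` fence line (e.g. ```cpp) and a trailing
    ``` line if present, leaving interior backticks untouched."""
    lines = text.strip().splitlines()
    if lines and lines[0].startswith("```"):
        lines = lines[1:]          # drop opening fence, whatever its language tag
    if lines and lines[-1].strip() == "```":
        lines = lines[:-1]         # drop closing fence
    return "\n".join(lines)

code = strip_code_fence("```cpp\nint main() { return 0; }\n```")
```

The same helper could then feed the notebook's file write (`open("optimized.cpp", "w")`) unchanged.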

6
week4/day3.ipynb

@@ -276,7 +276,11 @@
 "Then it runs the program called `optimized`\n",
 "\n",
 "You can google (or ask ChatGPT!) for how to do this on your platform, then replace the lines below.\n",
-"If you're not comfortable with this step, you can skip it for sure - I'll show you exactly how it performs on my Mac."
+"If you're not comfortable with this step, you can skip it for sure - I'll show you exactly how it performs on my Mac.\n",
+"\n",
+"OR alternatively: student Sandeep K.G. points out that you can run Python and C++ code online to test it out that way. Thank you Sandeep! \n",
+"> Not an exact comparison but you can still get the idea of performance difference.\n",
+"> For example here: https://www.programiz.com/cpp-programming/online-compiler/"
 ]
 },
 {

2
week4/day4.ipynb

@@ -609,7 +609,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"def stream_code_quen(python):\n",
+"def stream_code_qwen(python):\n",
 " tokenizer = AutoTokenizer.from_pretrained(code_qwen)\n",
 " messages = messages_for(python)\n",
 " text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\n",

BIN
week4/optimized

Binary file not shown.

26
week5/day4.ipynb

@@ -294,7 +294,31 @@
 "id": "9468860b-86a2-41df-af01-b2400cc985be",
 "metadata": {},
 "source": [
-"## Time to use LangChain to bring it all together"
+"# Time to use LangChain to bring it all together"
+]
+},
+{
+"cell_type": "markdown",
+"id": "8ba8a5e7-965d-4770-a12d-532aff50c4b5",
+"metadata": {},
+"source": [
+"<table style=\"margin: 0; text-align: left;\">\n",
+" <tr>\n",
+" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+" </td>\n",
+" <td>\n",
+" <h2 style=\"color:#900;\">PLEASE READ ME! Ignoring the Deprecation Warning</h2>\n",
+" <span style=\"color:#900;\">When you run the next cell, you will get a LangChainDeprecationWarning \n",
+" about the simple way we use LangChain memory. They ask us to migrate to their new approach for memory. \n",
+" I feel quite conflicted about this. The new approach involves moving to LangGraph and getting deep into their ecosystem.\n",
+" There's a fair amount of learning and coding in LangGraph, frankly without much benefit in our case.<br/><br/>\n",
+" I'm going to think about whether/how to incorporate it in the course, but for now please ignore the Deprecation Warning and\n",
+" use the code as is; LangChain are not expected to remove ConversationBufferMemory any time soon.\n",
+" </span>\n",
+" </td>\n",
+" </tr>\n",
+"</table>"
 ]
 },
 {

4
week8/day1.ipynb

@@ -95,7 +95,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"with app.run(show_progress=False):\n",
+"with app.run():\n",
 " reply=hello.local()\n",
 "reply"
 ]
@@ -107,7 +107,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"with app.run(show_progress=False):\n",
+"with app.run():\n",
 " reply=hello.remote()\n",
 "reply"
 ]

22
week8/day5.ipynb

@@ -154,12 +154,24 @@
 ]
 },
 {
-"cell_type": "code",
-"execution_count": null,
-"id": "d468291f-abe2-4fd7-97a6-43c714292973",
-"metadata": {},
-"outputs": [],
-"source": []
+"cell_type": "markdown",
+"id": "331a2044-566f-4866-be4d-7542b7dfdf3f",
+"metadata": {},
+"source": [
+"<table style=\"margin: 0; text-align: left;\">\n",
+" <tr>\n",
+" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+" <img src=\"../thankyou.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+" </td>\n",
+" <td>\n",
+" <h2 style=\"color:#090;\">CONGRATULATIONS AND THANK YOU!!!</h2>\n",
+" <span style=\"color:#090;\">\n",
+" It's so fabulous that you've made it to the end! My heartiest congratulations. Please stay in touch! I'm <a href=\"https://www.linkedin.com/in/eddonner/\">here on LinkedIn</a> if we're not already connected. And my editor would be cross with me if I didn't mention one more time: it makes a HUGE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others. <br/><br/>Thanks once again for working all the way through the course, and I'm excited to hear all about your career as an LLM Engineer.\n",
+" </span>\n",
+" </td>\n",
+" </tr>\n",
+"</table>"
+]
 }
 ],
 "metadata": {
