
Merge branch 'main' into yifwei/week1-anatomy-translator

pull/42/head
Yifan Wei 5 months ago
parent
commit
61c2ebb7a9
  1. README.md (25 lines changed)
  2. requirements.txt (2 lines changed)
  3. week1/community-contributions/day-1-research-paper-summarizer-using -openai-api.ipynb (297 lines changed)
  4. week1/community-contributions/day-1-to-do-list using-ollama.ipynb (206 lines changed)
  5. week1/day1.ipynb (7 lines changed)
  6. week1/day5.ipynb (7 lines changed)
  7. week2/community-contributions/day3.upsell.ipynb (355 lines changed)
  8. week2/community-contributions/day4-with-discount-tool.ipynb (291 lines changed)
  9. week2/community-contributions/week2_multimodal_chatbot_with_audio.ipynb (475 lines changed)
  10. week2/day3.ipynb (10 lines changed)
  11. week6/day3.ipynb (2 lines changed)
  12. week6/day5.ipynb (2 lines changed)

README.md (25 lines changed)

@@ -39,9 +39,30 @@ Hopefully I've done a decent job of making these guides bulletproof - but please
During the course, I'll suggest you try out the leading models at the forefront of progress, known as the Frontier models. I'll also suggest you run open-source models using Google Colab. These services have some charges, but I'll keep cost minimal - like, a few cents at a time. And I'll provide alternatives if you'd prefer not to use them.
Please do monitor your API usage to ensure you're comfortable with spend; I've included links below. There's no need to spend anything more than a couple of dollars for the entire course. Some AI providers such as OpenAI require a minimum credit like \$5 or local equivalent; we should only spend a fraction of it, and you'll have plenty of opportunity to put it to good use in your own projects. During Week 7 you have an option to spend a bit more if you're enjoying the process - I spend about $10 myself and the results make me very happy indeed! But it's not necessary in the least; the important part is that you focus on learning.
Please do monitor your API usage to ensure you're comfortable with spend; I've included links below. There's no need to spend anything more than a couple of dollars for the entire course. Some AI providers such as OpenAI require a minimum credit like \$5 or local equivalent; we should only spend a fraction of it, and you'll have plenty of opportunity to put it to good use in your own projects. During Week 7 you have an option to spend a bit more if you're enjoying the process - I spend about \$10 myself and the results make me very happy indeed! But it's not necessary in the least; the important part is that you focus on learning.
I'll also show you an alternative if you'd rather not spend anything on APIs.
### Free alternative to Paid APIs
Early in the course, I show you an alternative if you'd rather not spend anything on APIs:
Any time that we have code like:
`openai = OpenAI()`
You can use this as a direct replacement:
`openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')`
Below is a full example:
```
from openai import OpenAI

MODEL = "llama3.2"
openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')
response = openai.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "What is 2 + 2?"}]
)
print(response.choices[0].message.content)
```
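As an optional helper (my own sketch, not part of the course materials), you can pick between the two backends automatically: use OpenAI when an API key is present, otherwise fall back to the local Ollama endpoint. The model names here are just examples; use whatever you've pulled locally:

```python
import os

def client_settings():
    """Choose OpenAI settings when OPENAI_API_KEY is set,
    otherwise fall back to a local Ollama server via its
    OpenAI-compatible endpoint."""
    if os.environ.get("OPENAI_API_KEY"):
        # Real OpenAI: the client reads the key from the environment
        return {"model": "gpt-4o-mini"}
    # Ollama ignores the API key, but the OpenAI client requires a non-empty one
    return {
        "base_url": "http://localhost:11434/v1",
        "api_key": "ollama",
        "model": "llama3.2",
    }
```

You would then do something like `settings = client_settings()`, `model = settings.pop("model")`, and `openai = OpenAI(**settings)` before making calls as usual.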
### How this Repo is organized

requirements.txt (2 lines changed)

@@ -23,7 +23,7 @@ langchain[docarray]
datasets
sentencepiece
matplotlib
google.generativeai
google-generativeai
anthropic
scikit-learn
unstructured

week1/community-contributions/day-1-research-paper-summarizer-using -openai-api.ipynb (297 lines changed)

@@ -0,0 +1,297 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 3,
"id": "52dc600c-4c45-4803-81cb-f06347f4b2c3",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4082f16f-d843-41c7-9137-cdfec093b2d4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"API key found and looks good so far\n"
]
}
],
"source": [
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if not api_key:\n",
" print('No API key was found')\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"API key is found but is not in the proper format\")\n",
"else:\n",
" print(\"API key found and looks good so far\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "16c295ce-c57d-429e-8c03-f6610a8ddd42",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "9a548a52-0f7e-4fdf-ad68-0138b2445935",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"\"\"You are a research summarizer that summarizes the content of a research paper in no more than 1000 words. The summary you provide should include the following:\n",
"1) Title and Authors - Identify the study and contributors.\n",
"2) Objective/Problem - State the research goal or question.\n",
"3) Background - Briefly explain the context and significance.\n",
"4) Methods - Summarize the approach or methodology.\n",
"5) Key Findings - Highlight the main results or insights.\n",
"6) Conclusion - Provide the implications or contributions of the study.\n",
"7) Future Directions - Suggest areas for further research or exploration.\n",
"8) Limitations - Highlight constraints or challenges in the study.\n",
"9) Potential Applications - Discuss how the findings can be applied in real-world scenarios.\n",
"Keep all points concise, clear, and focused and generate output in markdown.\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "66b4411f-172e-46be-b6cd-a9e5b857fb28",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: ipywidgets in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (8.1.5)\n",
"Requirement already satisfied: pdfplumber in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (0.11.4)\n",
"Requirement already satisfied: comm>=0.1.3 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (0.2.2)\n",
"Requirement already satisfied: ipython>=6.1.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (8.30.0)\n",
"Requirement already satisfied: traitlets>=4.3.1 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (5.14.3)\n",
"Requirement already satisfied: widgetsnbextension~=4.0.12 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (4.0.13)\n",
"Requirement already satisfied: jupyterlab_widgets~=3.0.12 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (3.0.13)\n",
"Requirement already satisfied: pdfminer.six==20231228 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfplumber) (20231228)\n",
"Requirement already satisfied: Pillow>=9.1 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfplumber) (11.0.0)\n",
"Requirement already satisfied: pypdfium2>=4.18.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfplumber) (4.30.0)\n",
"Requirement already satisfied: charset-normalizer>=2.0.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfminer.six==20231228->pdfplumber) (3.4.0)\n",
"Requirement already satisfied: cryptography>=36.0.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfminer.six==20231228->pdfplumber) (44.0.0)\n",
"Requirement already satisfied: colorama in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.4.6)\n",
"Requirement already satisfied: decorator in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (5.1.1)\n",
"Requirement already satisfied: jedi>=0.16 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.19.2)\n",
"Requirement already satisfied: matplotlib-inline in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.1.7)\n",
"Requirement already satisfied: prompt_toolkit<3.1.0,>=3.0.41 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (3.0.48)\n",
"Requirement already satisfied: pygments>=2.4.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (2.18.0)\n",
"Requirement already satisfied: stack_data in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.6.3)\n",
"Requirement already satisfied: typing_extensions>=4.6 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (4.12.2)\n",
"Requirement already satisfied: cffi>=1.12 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from cryptography>=36.0.0->pdfminer.six==20231228->pdfplumber) (1.17.1)\n",
"Requirement already satisfied: parso<0.9.0,>=0.8.4 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from jedi>=0.16->ipython>=6.1.0->ipywidgets) (0.8.4)\n",
"Requirement already satisfied: wcwidth in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from prompt_toolkit<3.1.0,>=3.0.41->ipython>=6.1.0->ipywidgets) (0.2.13)\n",
"Requirement already satisfied: executing>=1.2.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from stack_data->ipython>=6.1.0->ipywidgets) (2.1.0)\n",
"Requirement already satisfied: asttokens>=2.1.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from stack_data->ipython>=6.1.0->ipywidgets) (3.0.0)\n",
"Requirement already satisfied: pure_eval in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from stack_data->ipython>=6.1.0->ipywidgets) (0.2.3)\n",
"Requirement already satisfied: pycparser in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from cffi>=1.12->cryptography>=36.0.0->pdfminer.six==20231228->pdfplumber) (2.22)\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"pip install ipywidgets pdfplumber"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "d8cd8556-ad86-4949-9f15-09de2b8c712b",
"metadata": {},
"outputs": [],
"source": [
"import pdfplumber\n",
"from ipywidgets import widgets\n",
"from io import BytesIO"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "0eba3cee-d85c-4d75-9b27-70c8cd7587b1",
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import display, Markdown"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "53e270e1-c2e6-4bcc-9ada-90c059cd5a51",
"metadata": {},
"outputs": [],
"source": [
"def messages_for(user_prompt):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "2f1807ec-c10b-4d26-9bee-89bd7a4bbb95",
"metadata": {},
"outputs": [],
"source": [
"def summarize(user_prompt):\n",
" # Generate messages using the user_prompt\n",
" messages = messages_for(user_prompt)\n",
" try:\n",
" response = openai.chat.completions.create(\n",
"model=\"gpt-4o-mini\",\n",
"messages=messages, # Pass the generated messages\n",
"max_tokens=1000 # Cap the length of the summary\n",
" )\n",
" # Return the content from the API response correctly\n",
" return response.choices[0].message.content\n",
" except Exception as e:\n",
" # Instead of printing, return an error message that can be displayed\n",
" return f\"Error in OpenAI API call: {e}\""
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "0dee8345-4eec-4a9c-ac4e-ad70e13cea44",
"metadata": {},
"outputs": [],
"source": [
"upload_widget = widgets.FileUpload(\n",
" accept='.pdf', \n",
" multiple=False,\n",
" description='Upload PDF',\n",
" layout=widgets.Layout(width='300px',height = '100px', border='2px dashed #cccccc', padding='10px')\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "1ff9c7b9-1a3a-4128-a33f-0e5bb2a93d33",
"metadata": {},
"outputs": [],
"source": [
"def extract_text_and_generate_summary(change):\n",
" print(\"extracting text\")\n",
" if upload_widget.value:\n",
" # Extract the first uploaded file\n",
" uploaded_file = list(upload_widget.value)[0]\n",
" pdf_file = uploaded_file['content']\n",
"\n",
" # Extract text from the PDF\n",
" try:\n",
" with pdfplumber.open(BytesIO(pdf_file)) as pdf:\n",
" extracted_text = \"\\n\".join(page.extract_text() for page in pdf.pages)\n",
"\n",
" # Generate the user prompt\n",
" user_prompt = (\n",
" f\"You are looking at the text from a research paper. Summarize it in no more than 1000 words. \"\n",
" f\"The output should be in markdown.\\n\\n{extracted_text}\"\n",
" )\n",
"\n",
" # Get the summarized response\n",
" response = summarize(user_prompt)\n",
" \n",
" if response:\n",
" # Use IPython's display method to show markdown below the cell\n",
" display(Markdown(response))\n",
" \n",
" except Exception as e:\n",
" # If there's an error, display it using Markdown\n",
" display(Markdown(f\"**Error:** {str(e)}\"))\n",
"\n",
" # Reset the upload widget\n",
" upload_widget.value = ()"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "0c16fe3f-704e-4a87-acd9-42c4e6b0d2fa",
"metadata": {},
"outputs": [],
"source": [
"upload_widget.observe(extract_text_and_generate_summary, names='value')"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "c2c2d2b2-1264-42d9-9271-c4700b4df80a",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "7304350377d845e78a9a758235e5eba1",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"FileUpload(value=(), accept='.pdf', description='Upload PDF', layout=Layout(border_bottom='2px dashed #cccccc'…"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display(upload_widget)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "70c76b90-e626-44b3-8d1f-6e995e8a938d",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

week1/community-contributions/day-1-to-do-list using-ollama.ipynb (206 lines changed)

@@ -0,0 +1,206 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 208,
"id": "f61139a1-40e1-4273-b9a6-5a0a9d63a9bd",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"import json\n",
"from reportlab.lib.pagesizes import letter\n",
"from reportlab.pdfgen import canvas\n",
"from IPython.display import display, FileLink\n",
"from IPython.display import display, HTML, FileLink\n",
"from reportlab.lib.pagesizes import A4"
]
},
{
"cell_type": "code",
"execution_count": 80,
"id": "e0858b96-fd41-4911-a333-814e4ed23279",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Collecting reportlab\n",
" Downloading reportlab-4.2.5-py3-none-any.whl.metadata (1.5 kB)\n",
"Requirement already satisfied: pillow>=9.0.0 in c:\\users\\legion\\anaconda3\\envs\\to_do_list\\lib\\site-packages (from reportlab) (11.0.0)\n",
"Collecting chardet (from reportlab)\n",
" Downloading chardet-5.2.0-py3-none-any.whl.metadata (3.4 kB)\n",
"Downloading reportlab-4.2.5-py3-none-any.whl (1.9 MB)\n",
" ---------------------------------------- 0.0/1.9 MB ? eta -:--:--\n",
" ---------------- ----------------------- 0.8/1.9 MB 6.7 MB/s eta 0:00:01\n",
" ---------------------------------------- 1.9/1.9 MB 11.9 MB/s eta 0:00:00\n",
"Downloading chardet-5.2.0-py3-none-any.whl (199 kB)\n",
"Installing collected packages: chardet, reportlab\n",
"Successfully installed chardet-5.2.0 reportlab-4.2.5\n"
]
}
],
"source": [
"!pip install reportlab"
]
},
{
"cell_type": "code",
"execution_count": 220,
"id": "62cc9d37-c801-4e8a-ad2c-7b1450725a10",
"metadata": {},
"outputs": [],
"source": [
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\":\"application/json\"}\n",
"MODEL = \"llama3.2\""
]
},
{
"cell_type": "code",
"execution_count": 249,
"id": "525a81e7-30f8-4db7-bc8d-29948195bd4f",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"\"\"You are a to-do list generator. Based on the user's input, you will create a clear and descriptive to-do\n",
"list using bullet points. Only generate the to-do list as bullet points with some explanation and a time frame only if asked for and nothing else. \n",
"Be a little descriptive.\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 315,
"id": "7fca3303-3add-468a-a6bd-be7a4d72c811",
"metadata": {},
"outputs": [],
"source": [
"def generate_to_do_list(task_description):\n",
" payload = {\n",
" \"model\": MODEL,\n",
" \"messages\": [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": task_description}\n",
" ],\n",
" \"stream\": False\n",
" }\n",
"\n",
" response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
"\n",
" if response.status_code == 200:\n",
" try:\n",
" json_response = response.json()\n",
" to_do_list = json_response.get(\"message\", {}).get(\"content\", \"No to-do list found.\")\n",
" \n",
" formatted_output = \"Your To-Do List:\\n\\n\" + to_do_list\n",
" file_name = \"to_do_list.txt\"\n",
" \n",
" with open(file_name, \"w\", encoding=\"utf-8\") as file:\n",
" file.write(formatted_output)\n",
"\n",
" return file_name\n",
" \n",
" except Exception as e:\n",
" return f\"Error parsing JSON: {e}\"\n",
" else:\n",
" return f\"Error: {response.status_code} - {response.text}\""
]
},
{
"cell_type": "code",
"execution_count": 316,
"id": "d45d6c7e-0e89-413e-8f30-e4975ea6d043",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Enter the task description of the to-do list: Give me a 4-week to-do list plan for a wedding reception party.\n"
]
}
],
"source": [
"task_description = input(\"Enter the task description of the to-do list:\")"
]
},
{
"cell_type": "code",
"execution_count": 317,
"id": "5493da44-e254-4d06-b973-a8069c2fc625",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"result = generate_to_do_list(task_description)"
]
},
{
"cell_type": "code",
"execution_count": 318,
"id": "5e95c722-ce1a-4630-b21a-1e00e7ba6ab9",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<p>You can download your to-do list by clicking the link below:</p>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"<a href='to_do_list.txt' target='_blank'>to_do_list.txt</a><br>"
],
"text/plain": [
"C:\\Users\\Legion\\to-do list using ollama\\to_do_list.txt"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display(HTML(\"<p>You can download your to-do list by clicking the link below:</p>\"))\n",
"display(FileLink(result))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f3d0a44e-bca4-4944-8593-1761c2f73a70",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

week1/day1.ipynb (7 lines changed)

@@ -200,6 +200,11 @@
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
@@ -207,7 +212,7 @@
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url)\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",

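The header change in this diff can be used anywhere you fetch pages with `requests`; a minimal standalone sketch (the User-Agent string is just an example of a browser-like value):

```python
import requests

# Some sites return 403s or empty pages to the default "python-requests"
# User-Agent; sending a browser-like header usually gets the real page.
HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36"
    )
}

def fetch(url: str) -> requests.Response:
    # Same as requests.get(url), but identifying as a browser
    return requests.get(url, headers=HEADERS, timeout=10)
```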
week1/day5.ipynb (7 lines changed)

@@ -78,6 +78,11 @@
"source": [
"# A class to represent a Webpage\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
" \"\"\"\n",
" A utility class to represent a Website that we have scraped, now with links\n",
@@ -85,7 +90,7 @@
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url)\n",
" response = requests.get(url, headers=headers)\n",
" self.body = response.content\n",
" soup = BeautifulSoup(self.body, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",

week2/community-contributions/day3.upsell.ipynb (355 lines changed)

@@ -0,0 +1,355 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "75e2ef28-594f-4c18-9d22-c6b8cd40ead2",
"metadata": {},
"source": [
"# Day 3 - Conversational AI - aka Chatbot!"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "70e39cd8-ec79-4e3e-9c26-5659d42d0861",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import gradio as gr"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "231605aa-fccb-447e-89cf-8b187444536a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key exists and begins sk-proj-\n",
"Anthropic API Key exists and begins sk-ant-\n",
"Google API Key exists and begins AIzaSyA-\n"
]
}
],
"source": [
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
"load_dotenv()\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"if anthropic_api_key:\n",
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
"else:\n",
" print(\"Anthropic API Key not set\")\n",
"\n",
"if google_api_key:\n",
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n",
"else:\n",
" print(\"Google API Key not set\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6541d58e-2297-4de1-b1f7-77da1b98b8bb",
"metadata": {},
"outputs": [],
"source": [
"# Initialize\n",
"\n",
"openai = OpenAI()\n",
"MODEL = 'gpt-4o-mini'"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e16839b5-c03b-4d9d-add6-87a0f6f37575",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are a helpful assistant\""
]
},
{
"cell_type": "markdown",
"id": "98e97227-f162-4d1a-a0b2-345ff248cbe7",
"metadata": {},
"source": [
"# Please read this! A change from the video:\n",
"\n",
"In the video, I explain how we now need to write a function called:\n",
"\n",
"`chat(message, history)`\n",
"\n",
"Which expects to receive `history` in a particular format, which we need to map to the OpenAI format before we call OpenAI:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message here\"},\n",
" {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
" {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
" {\"role\": \"user\", \"content\": \"the new user prompt\"},\n",
"]\n",
"```\n",
"\n",
"But Gradio has been upgraded! Now it will pass in `history` in the exact OpenAI format, perfect for us to send straight to OpenAI.\n",
"\n",
"So our work just got easier!\n",
"\n",
"We will write a function `chat(message, history)` where: \n",
"**message** is the prompt to use \n",
"**history** is the past conversation, in OpenAI format \n",
"\n",
"We will combine the system message, history and latest message, then call OpenAI."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1eacc8a4-4b48-4358-9e06-ce0020041bc1",
"metadata": {},
"outputs": [],
"source": [
"# Simpler than in my video - we can easily create this function that calls OpenAI\n",
"# It's now just 1 line of code to prepare the input to OpenAI!\n",
"\n",
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
"\n",
" print(\"History is:\")\n",
" print(history)\n",
" print(\"And messages is:\")\n",
" print(messages)\n",
"\n",
" stream = openai.chat.completions.create(model=MODEL, messages=messages, stream=True)\n",
"\n",
" response = \"\"\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" yield response"
]
},
{
"cell_type": "markdown",
"id": "1334422a-808f-4147-9c4c-57d63d9780d0",
"metadata": {},
"source": [
"## And then enter Gradio's magic!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0866ca56-100a-44ab-8bd0-1568feaf6bf2",
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "1f91b414-8bab-472d-b9c9-3fa51259bdfe",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are a helpful assistant in a clothes store. You should try to gently encourage \\\n",
"the customer to try items that are on sale. Hats are 60% off, and most other items are 50% off. \\\n",
"For example, if the customer says 'I'm looking to buy a hat', \\\n",
"you could reply something like, 'Wonderful - we have lots of hats - including several that are part of our sales event.'\\\n",
"Encourage the customer to buy hats if they are unsure what to get.\""
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "4e5be3ec-c26c-42bc-ac16-c39d369883f6",
"metadata": {},
"outputs": [],
"source": [
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
"\n",
" stream = openai.chat.completions.create(model=MODEL, messages=messages, stream=True)\n",
"\n",
" response = \"\"\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" yield response"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "413e9e4e-7836-43ac-a0c3-e1ab5ed6b136",
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "d75f0ffa-55c8-4152-b451-945021676837",
"metadata": {},
"outputs": [],
"source": [
"system_message += \"\\nIf the customer asks for shoes, you should respond that shoes are not on sale today, \\\n",
"but remind the customer to look at hats!\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c602a8dd-2df7-4eb7-b539-4e01865a6351",
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "0a987a66-1061-46d6-a83a-a30859dc88bf",
"metadata": {},
"outputs": [],
"source": [
"# Fixed a bug in this function brilliantly identified by student Gabor M.!\n",
"# I've also improved the structure of this function\n",
"# Paul Goodwin added \"Buy One get one free offer\" for a bit of fun\n",
"\n",
"def chat(message, history):\n",
"\n",
" relevant_system_message = system_message\n",
" keywords = ['discount', 'offer', 'promotion'] # Define words that imply customer is looking for a better deal\n",
"\n",
" if 'belt' in message.strip().lower():\n",
" relevant_system_message += (\n",
" \" The store does not sell belts; if you are asked for belts, be sure to point out other items on sale.\"\n",
" )\n",
" elif any(word in message.strip().lower() for word in keywords): # Use elif for clarity\n",
" relevant_system_message += (\n",
" \" If the customer asks for more money off the selling price, the store is currently running 'buy 2 get one free' campaign, so be sure to mention this.\"\n",
" )\n",
"\n",
" messages = [{\"role\": \"system\", \"content\": relevant_system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
"\n",
" stream = openai.chat.completions.create(model=MODEL, messages=messages, stream=True)\n",
"\n",
" response = \"\"\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" yield response"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "20570de2-eaad-42cc-a92c-c779d71b48b6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Running on local URL: http://127.0.0.1:7862\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7862/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "markdown",
"id": "82a57ee0-b945-48a7-a024-01b56a5d4b3e",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business Applications</h2>\n",
" <span style=\"color:#181;\">Conversational Assistants are of course a hugely common use case for Gen AI, and the latest frontier models are remarkably good at nuanced conversation. And Gradio makes it easy to have a user interface. Another crucial skill we covered is how to use prompting to provide context, information and examples.\n",
"<br/><br/>\n",
"Consider how you could apply an AI Assistant to your business, and make yourself a prototype. Use the system prompt to give context on your business, and set the tone for the LLM.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6dfb9e21-df67-4c2b-b952-5e7e7961b03d",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
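The history handling explained in the day3 notebook above boils down to a single list concatenation; as a standalone sketch of that one-liner:

```python
def build_messages(system_message, history, message):
    """Combine the system prompt, the prior turns (already in OpenAI
    format, since Gradio's type="messages" passes history that way),
    and the newest user turn into one messages list."""
    return (
        [{"role": "system", "content": system_message}]
        + history
        + [{"role": "user", "content": message}]
    )
```

The resulting list can be sent directly as the `messages` argument of a chat completions call.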

week2/community-contributions/day4-with-discount-tool.ipynb (291 lines changed)

@@ -0,0 +1,291 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ddfa9ae6-69fe-444a-b994-8c4c5970a7ec",
"metadata": {},
"source": [
"# Project - Airline AI Assistant\n",
"\n",
"We'll now bring together what we've learned to make an AI Customer Support assistant for an Airline"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import gradio as gr"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae",
"metadata": {},
"outputs": [],
"source": [
"# Initialization\n",
"\n",
"load_dotenv()\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"MODEL = \"gpt-4o-mini\"\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0a521d84-d07c-49ab-a0df-d6451499ed97",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n",
"system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n",
"system_message += \"Always be accurate. If you don't know the answer, say so.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "61a2a15d-b559-4844-b377-6bd5cb4949f6",
"metadata": {},
"outputs": [],
"source": [
"# This function looks rather simpler than the one from my video, because we're taking advantage of the latest Gradio updates\n",
"\n",
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" return response.choices[0].message.content\n",
"\n",
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "markdown",
"id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"Tools are an incredibly powerful feature provided by the frontier LLMs.\n",
"\n",
"With tools, you can write a function, and have the LLM call that function as part of its response.\n",
"\n",
"Sounds almost spooky.. we're giving it the power to run code on our machine?\n",
"\n",
"Well, kinda."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2",
"metadata": {},
"outputs": [],
"source": [
"# Let's start by making a useful function\n",
"\n",
"ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\"}\n",
"ticket_discounts={\"london\":5, \"tokyo\":15}\n",
"\n",
"def get_ticket_price(destination_city):\n",
" print(f\"Tool get_ticket_price called for {destination_city}\")\n",
" city = destination_city.lower()\n",
" return ticket_prices.get(city, \"Unknown\")\n",
"def get_ticket_discount(destination_city):\n",
" print(f\"Tool get_ticket_discount called for {destination_city}\")\n",
" city = destination_city.lower()\n",
" return ticket_discounts.get(city,0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85",
"metadata": {},
"outputs": [],
"source": [
"get_ticket_price(\"Berlin\")\n",
"get_ticket_discount(\"Berlin\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4afceded-7178-4c05-8fa6-9f2085e6a344",
"metadata": {},
"outputs": [],
"source": [
"# There's a particular dictionary structure that's required to describe our function:\n",
"\n",
"price_function = {\n",
" \"name\": \"get_ticket_price\",\n",
" \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city that the customer wants to travel to\",\n",
" },\n",
" },\n",
" \"required\": [\"destination_city\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}\n",
"\n",
"discount_function = {\n",
" \"name\": \"get_ticket_discount\",\n",
" \"description\": \"Get the discount on price of a return ticket to the destination city. Call this whenever you need to know the discount on the ticket price, for example when a customer asks 'Is there a discount on the price on the ticket to this city'\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The discount on price to the city that the customer wants to travel to\",\n",
" },\n",
" },\n",
" \"required\": [\"destination_city\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c",
"metadata": {},
"outputs": [],
"source": [
"# And this is included in a list of tools:\n",
"\n",
"tools = [{\"type\": \"function\", \"function\": price_function},\n",
" {\"type\":\"function\", \"function\": discount_function}]\n",
"tools_functions_map = {\n",
" \"get_ticket_price\":get_ticket_price,\n",
" \"get_ticket_discount\":get_ticket_discount\n",
"}"
]
},
{
"cell_type": "markdown",
"id": "c3d3554f-b4e3-4ce7-af6f-68faa6dd2340",
"metadata": {},
"source": [
"## Getting OpenAI to use our Tool\n",
"\n",
"There's some fiddly stuff to allow OpenAI \"to call our tool\"\n",
"\n",
"What we actually do is give the LLM the opportunity to inform us that it wants us to run the tool.\n",
"\n",
"Here's how the new chat function looks:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce9b0744-9c78-408d-b9df-9f6fd9ed78cf",
"metadata": {},
"outputs": [],
"source": [
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n",
"\n",
" if response.choices[0].finish_reason==\"tool_calls\":\n",
" message = response.choices[0].message\n",
" tool_responses, city = handle_tool_call(message)\n",
" messages.append(message)\n",
" for tool_response in tool_responses:\n",
" messages.append(tool_response)\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" \n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0992986-ea09-4912-a076-8e5603ee631f",
"metadata": {},
"outputs": [],
"source": [
"# We have to write that function handle_tool_call:\n",
"\n",
"def handle_tool_call(message):\n",
" tool_calls = message.tool_calls;\n",
" arguments = json.loads(tool_calls[0].function.arguments)\n",
" city = arguments.get('destination_city')\n",
" responses=[]\n",
" \n",
" for tool_call in tool_calls:\n",
" name = tool_call.function.name\n",
" if name in tools_functions_map:\n",
" key = \"price\" if \"price\" in name else \"discount\"\n",
" value = tools_functions_map[name](city)\n",
" responses.append({\n",
" \"role\": \"tool\",\n",
" \"content\": json.dumps({\"destination_city\": city, key : value}),\n",
" \"tool_call_id\": tool_call.id\n",
" })\n",
" return responses, city"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14",
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "11c9da69-d0cf-4cf2-a49e-e5669deec47b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
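The dispatch pattern in `handle_tool_call` above can be exercised offline. This is a minimal sketch with no API calls: the `SimpleNamespace` objects are stand-ins for the SDK's tool-call objects, and `handle_tool_calls` is an illustrative name, not part of the notebook. The key point is that each call's JSON arguments are parsed individually, so two calls targeting different cities both get correct answers:

```python
import json
from types import SimpleNamespace

ticket_prices = {"london": "$799", "paris": "$899", "tokyo": "$1400", "berlin": "$499"}
ticket_discounts = {"london": 5, "tokyo": 15}

def get_ticket_price(destination_city):
    return ticket_prices.get(destination_city.lower(), "Unknown")

def get_ticket_discount(destination_city):
    return ticket_discounts.get(destination_city.lower(), 0)

tools_functions_map = {
    "get_ticket_price": get_ticket_price,
    "get_ticket_discount": get_ticket_discount,
}

def handle_tool_calls(tool_calls):
    # Dispatch each tool call, parsing its own arguments rather than reusing the first call's
    responses = []
    for tc in tool_calls:
        args = json.loads(tc.function.arguments)
        city = args.get("destination_city")
        name = tc.function.name
        if name in tools_functions_map:
            key = "price" if "price" in name else "discount"
            responses.append({
                "role": "tool",
                "content": json.dumps({"destination_city": city, key: tools_functions_map[name](city)}),
                "tool_call_id": tc.id,
            })
    return responses

# Stand-ins shaped like the SDK's tool-call objects (id + function.name + function.arguments)
calls = [
    SimpleNamespace(id="call_1", function=SimpleNamespace(
        name="get_ticket_price", arguments='{"destination_city": "Tokyo"}')),
    SimpleNamespace(id="call_2", function=SimpleNamespace(
        name="get_ticket_discount", arguments='{"destination_city": "London"}')),
]
for r in handle_tool_calls(calls):
    print(r["content"])
```

Each `"role": "tool"` message carries the matching `tool_call_id`, which is what lets the follow-up chat-completions request pair responses with calls.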

475
week2/community-contributions/week2_multimodal_chatbot_with_audio.ipynb

@ -0,0 +1,475 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ad900e1c-b4a9-4f05-93d5-e364fae208dd",
"metadata": {},
"source": [
"# Multimodal Expert Tutor\n",
"\n",
"An AI assistant which leverages expertise from other sources for you.\n",
"\n",
"Features:\n",
"- Multimodal\n",
"- Uses tools\n",
"- Streams responses\n",
"- Reads out the responses after streaming\n",
"- Coverts voice to text during input\n",
"\n",
"Scope for Improvement\n",
"- Read response faster (as streaming starts)\n",
"- code optimization\n",
"- UI enhancements\n",
"- Make it more real time"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c1070317-3ed9-4659-abe3-828943230e03",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI\n",
"import gradio as gr\n",
"import google.generativeai\n",
"import anthropic"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
"metadata": {},
"outputs": [],
"source": [
"# constants\n",
"\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"MODEL_CLAUDE = 'claude-3-5-sonnet-20240620'\n",
"MODEL_GEMINI = 'gemini-1.5-flash'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
"metadata": {},
"outputs": [],
"source": [
"# set up environment\n",
"\n",
"load_dotenv()\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n",
"os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n",
"os.environ['GOOGLE_API_KEY'] = os.getenv('GOOGLE_API_KEY', 'your-key-if-not-using-env')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a6fd8538-0be6-4539-8add-00e42133a641",
"metadata": {},
"outputs": [],
"source": [
"# Connect to OpenAI, Anthropic and Google\n",
"\n",
"openai = OpenAI()\n",
"\n",
"claude = anthropic.Anthropic()\n",
"\n",
"google.generativeai.configure()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "852faee9-79aa-4741-a676-4f5145ccccdc",
"metadata": {},
"outputs": [],
"source": [
"import tempfile\n",
"import subprocess\n",
"from io import BytesIO\n",
"from pydub import AudioSegment\n",
"import time\n",
"\n",
"def play_audio(audio_segment):\n",
" temp_dir = tempfile.gettempdir()\n",
" temp_path = os.path.join(temp_dir, \"temp_audio.wav\")\n",
" try:\n",
" audio_segment.export(temp_path, format=\"wav\")\n",
" subprocess.call([\n",
" \"ffplay\",\n",
" \"-nodisp\",\n",
" \"-autoexit\",\n",
" \"-hide_banner\",\n",
" temp_path\n",
" ], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)\n",
" finally:\n",
" try:\n",
" os.remove(temp_path)\n",
" except Exception:\n",
" pass\n",
" \n",
"def talker(message):\n",
" response = openai.audio.speech.create(\n",
" model=\"tts-1\",\n",
" voice=\"onyx\", # Also, try replacing onyx with alloy\n",
" input=message\n",
" )\n",
" audio_stream = BytesIO(response.content)\n",
" audio = AudioSegment.from_file(audio_stream, format=\"mp3\")\n",
" play_audio(audio)\n",
"\n",
"talker(\"Well hi there\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8595807b-8ae2-4e1b-95d9-e8532142e8bb",
"metadata": {},
"outputs": [],
"source": [
"# prompts\n",
"general_prompt = \"Please be as technical as possible with your answers.\\\n",
"Only answer questions about topics you have expertise in.\\\n",
"If you do not know something say so.\"\n",
"\n",
"additional_prompt_gpt = \"Analyze the user query and determine if the content is primarily related to \\\n",
"coding, software engineering, data science and LLMs. \\\n",
"If so please answer it yourself else if it is primarily related to \\\n",
"physics, chemistry or biology get answers from tool ask_gemini or \\\n",
"if it belongs to subject related to finance, business or economics get answers from tool ask_claude.\"\n",
"\n",
"system_prompt_gpt = \"You are a helpful technical tutor who is an expert in \\\n",
"coding, software engineering, data science and LLMs.\"+ additional_prompt_gpt + general_prompt\n",
"system_prompt_gemini = \"You are a helpful technical tutor who is an expert in physics, chemistry and biology.\" + general_prompt\n",
"system_prompt_claude = \"You are a helpful technical tutor who is an expert in finance, business and economics.\" + general_prompt\n",
"\n",
"def get_user_prompt(question):\n",
" return \"Please give a detailed explanation to the following question: \" + question"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "24d4a313-60b0-4696-b455-6cfef95ad2fe",
"metadata": {},
"outputs": [],
"source": [
"def call_claude(question):\n",
" result = claude.messages.create(\n",
" model=MODEL_CLAUDE,\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" system=system_prompt_claude,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": get_user_prompt(question)},\n",
" ],\n",
" )\n",
" \n",
" return result.content[0].text"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd5d5345-54ab-470b-9b5b-5611a7981458",
"metadata": {},
"outputs": [],
"source": [
"def call_gemini(question):\n",
" gemini = google.generativeai.GenerativeModel(\n",
" model_name=MODEL_GEMINI,\n",
" system_instruction=system_prompt_gemini\n",
" )\n",
" response = gemini.generate_content(get_user_prompt(question))\n",
" response = response.text\n",
" return response"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f74da8f-56d1-405e-bc81-040f5428d296",
"metadata": {},
"outputs": [],
"source": [
"# tools and functions\n",
"\n",
"def ask_claude(question):\n",
" print(f\"Tool ask_claude called for {question}\")\n",
" return call_claude(question)\n",
"def ask_gemini(question):\n",
" print(f\"Tool ask_gemini called for {question}\")\n",
" return call_gemini(question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c469304d-99b4-42ee-ab02-c9216b61594b",
"metadata": {},
"outputs": [],
"source": [
"ask_claude_function = {\n",
" \"name\": \"ask_claude\",\n",
" \"description\": \"Get the answer to the question related to a topic this agent is faimiliar with. Call this whenever you need to answer something related to finance, marketing, sales or business in general.For example 'What is gross margin' or 'Explain stock market'\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"question_for_topic\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The question which is related to finance, business or economics.\",\n",
" },\n",
" },\n",
" \"required\": [\"question_for_topic\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}\n",
"\n",
"ask_gemini_function = {\n",
" \"name\": \"ask_gemini\",\n",
" \"description\": \"Get the answer to the question related to a topic this agent is faimiliar with. Call this whenever you need to answer something related to physics, chemistry or biology.Few examples: 'What is gravity','How do rockets work?', 'What is ATP'\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"question_for_topic\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The question which is related to physics, chemistry or biology\",\n",
" },\n",
" },\n",
" \"required\": [\"question_for_topic\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "73a60096-c49b-401f-bfd3-d1d40f4563d2",
"metadata": {},
"outputs": [],
"source": [
"tools = [{\"type\": \"function\", \"function\": ask_claude_function},\n",
" {\"type\": \"function\", \"function\": ask_gemini_function}]\n",
"tools_functions_map = {\n",
" \"ask_claude\":ask_claude,\n",
" \"ask_gemini\":ask_gemini\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d54e758-42b2-42f2-a8eb-49c35d44acc6",
"metadata": {},
"outputs": [],
"source": [
"def chat(history):\n",
" messages = [{\"role\": \"system\", \"content\": system_prompt_gpt}] + history\n",
" stream = openai.chat.completions.create(model=MODEL_GPT, messages=messages, tools=tools, stream=True)\n",
" \n",
" full_response = \"\"\n",
" history += [{\"role\":\"assistant\", \"content\":full_response}]\n",
" \n",
" tool_call_accumulator = \"\" # Accumulator for JSON fragments of tool call arguments\n",
" tool_call_id = None # Current tool call ID\n",
" tool_call_function_name = None # Function name\n",
" tool_calls = [] # List to store complete tool calls\n",
"\n",
" for chunk in stream:\n",
" if chunk.choices[0].delta.content:\n",
" full_response += chunk.choices[0].delta.content or \"\"\n",
" history[-1]['content']=full_response\n",
" yield history\n",
" \n",
" if chunk.choices[0].delta.tool_calls:\n",
" message = chunk.choices[0].delta\n",
" for tc in chunk.choices[0].delta.tool_calls:\n",
" if tc.id: # New tool call detected here\n",
" tool_call_id = tc.id\n",
" if tool_call_function_name is None:\n",
" tool_call_function_name = tc.function.name\n",
" \n",
" tool_call_accumulator += tc.function.arguments if tc.function.arguments else \"\"\n",
" \n",
" # When the accumulated JSON string seems complete then:\n",
" try:\n",
" func_args = json.loads(tool_call_accumulator)\n",
" \n",
" # Handle tool call and get response\n",
" tool_response, tool_call = handle_tool_call(tool_call_function_name, func_args, tool_call_id)\n",
" \n",
" tool_calls.append(tool_call)\n",
"\n",
" # Add tool call and tool response to messages this is required by openAI api\n",
" messages.append({\n",
" \"role\": \"assistant\",\n",
" \"tool_calls\": tool_calls\n",
" })\n",
" messages.append(tool_response)\n",
" \n",
" # Create new response with full context\n",
" response = openai.chat.completions.create(\n",
" model=MODEL_GPT, \n",
" messages=messages, \n",
" stream=True\n",
" )\n",
" \n",
" # Reset and accumulate new full response\n",
" full_response = \"\"\n",
" for chunk in response:\n",
" if chunk.choices[0].delta.content:\n",
" full_response += chunk.choices[0].delta.content or \"\"\n",
" history[-1]['content'] = full_response\n",
" yield history\n",
" \n",
" # Reset tool call accumulator and related variables\n",
" tool_call_accumulator = \"\"\n",
" tool_call_id = None\n",
" tool_call_function_name = None\n",
" tool_calls = []\n",
"\n",
" except json.JSONDecodeError:\n",
" # Incomplete JSON; continue accumulating\n",
" pass\n",
"\n",
" # trigger text-to-audio once full response available\n",
" talker(full_response)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "515d3774-cc2c-44cd-af9b-768a63ed90dc",
"metadata": {},
"outputs": [],
"source": [
"# We have to write that function handle_tool_call:\n",
"def handle_tool_call(function_name, arguments, tool_call_id):\n",
" question = arguments.get('question_for_topic')\n",
" \n",
" # Prepare tool call information\n",
" tool_call = {\n",
" \"id\": tool_call_id,\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": function_name,\n",
" \"arguments\": json.dumps(arguments)\n",
" }\n",
" }\n",
" \n",
" if function_name in tools_functions_map:\n",
" answer = tools_functions_map[function_name](question)\n",
" response = {\n",
" \"role\": \"tool\",\n",
" \"content\": json.dumps({\"question\": question, \"answer\" : answer}),\n",
" \"tool_call_id\": tool_call_id\n",
" }\n",
"\n",
" return response, tool_call"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5d7cc622-8635-4693-afa3-b5bcc2f9a63d",
"metadata": {},
"outputs": [],
"source": [
"def transcribe_audio(audio_file_path):\n",
" try:\n",
" audio_file = open(audio_file_path, \"rb\")\n",
" response = openai.audio.transcriptions.create(model=\"whisper-1\", file=audio_file) \n",
" return response.text\n",
" except Exception as e:\n",
" return f\"An error occurred: {e}\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4ded9b3f-83e1-4971-9714-4894f2982b5a",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"with gr.Blocks() as ui:\n",
" with gr.Row():\n",
" chatbot = gr.Chatbot(height=500, type=\"messages\", label=\"Multimodal Technical Expert Chatbot\")\n",
" with gr.Row():\n",
" entry = gr.Textbox(label=\"Ask our technical expert anything:\")\n",
" audio_input = gr.Audio(\n",
" sources=\"microphone\", \n",
" type=\"filepath\",\n",
" label=\"Record audio\",\n",
" editable=False,\n",
" waveform_options=gr.WaveformOptions(\n",
" show_recording_waveform=False,\n",
" ),\n",
" )\n",
"\n",
" # Add event listener for audio stop recording and show text on input area\n",
" audio_input.stop_recording(\n",
" fn=transcribe_audio, \n",
" inputs=audio_input, \n",
" outputs=entry\n",
" )\n",
" \n",
" with gr.Row():\n",
" clear = gr.Button(\"Clear\")\n",
"\n",
" def do_entry(message, history):\n",
" history += [{\"role\":\"user\", \"content\":message}]\n",
" yield \"\", history\n",
" \n",
" entry.submit(do_entry, inputs=[entry, chatbot], outputs=[entry,chatbot]).then(\n",
" chat, inputs=chatbot, outputs=chatbot)\n",
" \n",
" clear.click(lambda: None, inputs=None, outputs=chatbot, queue=False)\n",
"\n",
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "532cb948-7733-4323-b85f-febfe2631e66",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
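The streaming `chat` function above accumulates tool-call argument fragments in `tool_call_accumulator` and repeatedly tries `json.loads` until the JSON is complete. That core idea can be sketched offline; this is a simplified illustration (the function name `accumulate_tool_args` and the sample fragments are mine, not from the notebook or the SDK):

```python
import json

def accumulate_tool_args(fragments):
    """Concatenate streamed argument fragments until they parse as complete JSON.

    Returns the parsed arguments dict, or None if the stream ended mid-object.
    """
    buffer = ""
    for fragment in fragments:
        buffer += fragment
        try:
            return json.loads(buffer)   # complete JSON: we have the full arguments
        except json.JSONDecodeError:
            continue                    # incomplete so far: keep accumulating
    return None

# Fragments as a streaming API might deliver them, split mid-key and mid-value
chunks = ['{"question', '_for_topic": ', '"What is ', 'gravity?"}']
print(accumulate_tool_args(chunks))
```

This is why the notebook's `except json.JSONDecodeError: pass` branch is not an error path at all: a decode failure simply means more deltas are still on their way.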

10
week2/day3.ipynb

@ -224,14 +224,16 @@
"metadata": {},
"outputs": [],
"source": [
"# Fixed a bug in this function brilliantly identified by student Gabor M.!\n",
"# I've also improved the structure of this function\n",
"\n",
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
"\n",
" relevant_system_message = system_message\n",
" if 'belt' in message:\n",
" messages.append({\"role\": \"system\", \"content\": \"For added context, the store does not sell belts, \\\n",
"but be sure to point out other items on sale\"})\n",
" relevant_system_message += \" The store does not sell belts; if you are asked for belts, be sure to point out other items on sale.\"\n",
" \n",
" messages.append({\"role\": \"user\", \"content\": message})\n",
" messages = [{\"role\": \"system\", \"content\": relevant_system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
"\n",
" stream = openai.chat.completions.create(model=MODEL, messages=messages, stream=True)\n",
"\n",

2
week6/day3.ipynb

@ -893,7 +893,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,

2
week6/day5.ipynb

@ -547,7 +547,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,
