diff --git a/README.md b/README.md index 94285b3..5b133d8 100644 --- a/README.md +++ b/README.md @@ -39,9 +39,30 @@ Hopefully I've done a decent job of making these guides bulletproof - but please During the course, I'll suggest you try out the leading models at the forefront of progress, known as the Frontier models. I'll also suggest you run open-source models using Google Colab. These services have some charges, but I'll keep cost minimal - like, a few cents at a time. And I'll provide alternatives if you'd prefer not to use them. -Please do monitor your API usage to ensure you're comfortable with spend; I've included links below. There's no need to spend anything more than a couple of dollars for the entire course. Some AI providers such as OpenAI require a minimum credit like \$5 or local equivalent; we should only spend a fraction of it, and you'll have plenty of opportunity to put it to good use in your own projects. During Week 7 you have an option to spend a bit more if you're enjoying the process - I spend about $10 myself and the results make me very happy indeed! But it's not necessary in the least; the important part is that you focus on learning. +Please do monitor your API usage to ensure you're comfortable with spend; I've included links below. There's no need to spend anything more than a couple of dollars for the entire course. Some AI providers such as OpenAI require a minimum credit like \$5 or local equivalent; we should only spend a fraction of it, and you'll have plenty of opportunity to put it to good use in your own projects. During Week 7 you have an option to spend a bit more if you're enjoying the process - I spend about \$10 myself and the results make me very happy indeed! But it's not necessary in the least; the important part is that you focus on learning. -I'll also show you an alternative if you'd rather not spend anything on APIs. 
+### Free alternative to Paid APIs + +Early in the course, I show you an alternative if you'd rather not spend anything on APIs: +Any time we have code like: +`openai = OpenAI()` +You can use this as a direct replacement: +`openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')` + +Below is a full example: + +```python +from openai import OpenAI +MODEL = "llama3.2" +openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama') + +response = openai.chat.completions.create( + model=MODEL, + messages=[{"role": "user", "content": "What is 2 + 2?"}] +) + +print(response.choices[0].message.content) +``` ### How this Repo is organized diff --git a/requirements.txt b/requirements.txt index 5dd33ad..5d110bf 100644 --- a/requirements.txt +++ b/requirements.txt @@ -23,7 +23,7 @@ langchain[docarray] datasets sentencepiece matplotlib -google.generativeai +google-generativeai anthropic scikit-learn unstructured diff --git a/week1/community-contributions/day-1-research-paper-summarizer-using -openai-api.ipynb b/week1/community-contributions/day-1-research-paper-summarizer-using -openai-api.ipynb new file mode 100644 index 0000000..45d0914 --- /dev/null +++ b/week1/community-contributions/day-1-research-paper-summarizer-using -openai-api.ipynb @@ -0,0 +1,297 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 3, + "id": "52dc600c-4c45-4803-81cb-f06347f4b2c3", + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import requests\n", + "from dotenv import load_dotenv\n", + "from IPython.display import Markdown, display\n", + "from openai import OpenAI" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "4082f16f-d843-41c7-9137-cdfec093b2d4", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "API key found and looks good so far\n" + ] + } + ], + "source": [ + "load_dotenv()\n", + "api_key = os.getenv('OPENAI_API_KEY')\n", + "\n", + "if not api_key:\n", + " 
print('No API key was found')\n", + "elif not api_key.startswith(\"sk-proj-\"):\n", + " print(\"API key is found but is not in the proper format\")\n", + "else:\n", + " print(\"API key found and looks good so far\")" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "16c295ce-c57d-429e-8c03-f6610a8ddd42", + "metadata": {}, + "outputs": [], + "source": [ + "openai = OpenAI()" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "id": "9a548a52-0f7e-4fdf-ad68-0138b2445935", + "metadata": {}, + "outputs": [], + "source": [ + "system_prompt = \"\"\"You are a research summarizer that summarizes the content of a research paper in no more than 1000 words. The research summary that you provide should include the following:\n", + "1) Title and Authors - Identify the study and contributors.\n", + "2) Objective/Problem - State the research goal or question.\n", + "3) Background - Briefly explain the context and significance.\n", + "4) Methods - Summarize the approach or methodology.\n", + "5) Key Findings - Highlight the main results or insights.\n", + "6) Conclusion - Provide the implications or contributions of the study.\n", + "7) Future Directions - Suggest areas for further research or exploration.\n", + "8) Limitations - Highlight constraints or challenges in the study.\n", + "9) Potential Applications - Discuss how the findings can be applied in real-world scenarios.\n", + "Keep all points concise, clear, and focused, and generate the output in markdown.\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "66b4411f-172e-46be-b6cd-a9e5b857fb28", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Requirement already satisfied: ipywidgets in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (8.1.5)\n", + "Requirement already satisfied: pdfplumber in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (0.11.4)\n", + "Requirement 
already satisfied: comm>=0.1.3 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (0.2.2)\n", + "Requirement already satisfied: ipython>=6.1.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (8.30.0)\n", + "Requirement already satisfied: traitlets>=4.3.1 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (5.14.3)\n", + "Requirement already satisfied: widgetsnbextension~=4.0.12 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (4.0.13)\n", + "Requirement already satisfied: jupyterlab_widgets~=3.0.12 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (3.0.13)\n", + "Requirement already satisfied: pdfminer.six==20231228 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfplumber) (20231228)\n", + "Requirement already satisfied: Pillow>=9.1 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfplumber) (11.0.0)\n", + "Requirement already satisfied: pypdfium2>=4.18.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfplumber) (4.30.0)\n", + "Requirement already satisfied: charset-normalizer>=2.0.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfminer.six==20231228->pdfplumber) (3.4.0)\n", + "Requirement already satisfied: cryptography>=36.0.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfminer.six==20231228->pdfplumber) (44.0.0)\n", + "Requirement already satisfied: colorama in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.4.6)\n", + "Requirement already satisfied: decorator in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (5.1.1)\n", + "Requirement already 
satisfied: jedi>=0.16 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.19.2)\n", + "Requirement already satisfied: matplotlib-inline in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.1.7)\n", + "Requirement already satisfied: prompt_toolkit<3.1.0,>=3.0.41 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (3.0.48)\n", + "Requirement already satisfied: pygments>=2.4.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (2.18.0)\n", + "Requirement already satisfied: stack_data in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.6.3)\n", + "Requirement already satisfied: typing_extensions>=4.6 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (4.12.2)\n", + "Requirement already satisfied: cffi>=1.12 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from cryptography>=36.0.0->pdfminer.six==20231228->pdfplumber) (1.17.1)\n", + "Requirement already satisfied: parso<0.9.0,>=0.8.4 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from jedi>=0.16->ipython>=6.1.0->ipywidgets) (0.8.4)\n", + "Requirement already satisfied: wcwidth in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from prompt_toolkit<3.1.0,>=3.0.41->ipython>=6.1.0->ipywidgets) (0.2.13)\n", + "Requirement already satisfied: executing>=1.2.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from stack_data->ipython>=6.1.0->ipywidgets) (2.1.0)\n", + "Requirement already satisfied: asttokens>=2.1.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from stack_data->ipython>=6.1.0->ipywidgets) (3.0.0)\n", + "Requirement 
already satisfied: pure_eval in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from stack_data->ipython>=6.1.0->ipywidgets) (0.2.3)\n", + "Requirement already satisfied: pycparser in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from cffi>=1.12->cryptography>=36.0.0->pdfminer.six==20231228->pdfplumber) (2.22)\n", + "Note: you may need to restart the kernel to use updated packages.\n" + ] + } + ], + "source": [ + "%pip install ipywidgets pdfplumber" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "d8cd8556-ad86-4949-9f15-09de2b8c712b", + "metadata": {}, + "outputs": [], + "source": [ + "import pdfplumber\n", + "from ipywidgets import widgets\n", + "from io import BytesIO" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "0eba3cee-d85c-4d75-9b27-70c8cd7587b1", + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import display, Markdown" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "53e270e1-c2e6-4bcc-9ada-90c059cd5a51", + "metadata": {}, + "outputs": [], + "source": [ + "def messages_for(user_prompt):\n", + " return [\n", + " {\"role\": \"system\", \"content\": system_prompt},\n", + " {\"role\": \"user\", \"content\": user_prompt}\n", + " ]" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "2f1807ec-c10b-4d26-9bee-89bd7a4bbb95", + "metadata": {}, + "outputs": [], + "source": [ + "def summarize(user_prompt):\n", + " # Generate messages using the user_prompt\n", + " messages = messages_for(user_prompt)\n", + " try:\n", + " response = openai.chat.completions.create(\n", + " model=\"gpt-4o-mini\",\n", + " messages=messages,\n", + " max_tokens=1000 # Cap the length of the summary\n", + " )\n", + " # Return the content from the API response\n", + " return response.choices[0].message.content\n", + " except Exception as e:\n", + " # Instead of printing, return an error message that can be 
displayed\n", + " return f\"Error in OpenAI API call: {e}\"" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "0dee8345-4eec-4a9c-ac4e-ad70e13cea44", + "metadata": {}, + "outputs": [], + "source": [ + "upload_widget = widgets.FileUpload(\n", + " accept='.pdf', \n", + " multiple=False,\n", + " description='Upload PDF',\n", + " layout=widgets.Layout(width='300px',height = '100px', border='2px dashed #cccccc', padding='10px')\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "1ff9c7b9-1a3a-4128-a33f-0e5bb2a93d33", + "metadata": {}, + "outputs": [], + "source": [ + "def extract_text_and_generate_summary(change):\n", + " print(\"extracting text\")\n", + " if upload_widget.value:\n", + " # Extract the first uploaded file\n", + " uploaded_file = list(upload_widget.value)[0]\n", + " pdf_file = uploaded_file['content']\n", + "\n", + " # Extract text from the PDF\n", + " try:\n", + " with pdfplumber.open(BytesIO(pdf_file)) as pdf:\n", + " extracted_text = \"\\n\".join(page.extract_text() for page in pdf.pages)\n", + "\n", + " # Generate the user prompt\n", + " user_prompt = (\n", + " f\"You are looking at the text from a research paper. Summarize it in no more than 1000 words. 
\"\n", + " f\"The output should be in markdown.\\n\\n{extracted_text}\"\n", + " )\n", + "\n", + " # Get the summarized response\n", + " response = summarize(user_prompt)\n", + " \n", + " if response:\n", + " # Use IPython's display method to show markdown below the cell\n", + " display(Markdown(response))\n", + " \n", + " except Exception as e:\n", + " # If there's an error, display it using Markdown\n", + " display(Markdown(f\"**Error:** {str(e)}\"))\n", + "\n", + " # Reset the upload widget\n", + " upload_widget.value = ()" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "id": "0c16fe3f-704e-4a87-acd9-42c4e6b0d2fa", + "metadata": {}, + "outputs": [], + "source": [ + "upload_widget.observe(extract_text_and_generate_summary, names='value')" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "id": "c2c2d2b2-1264-42d9-9271-c4700b4df80a", + "metadata": {}, + "outputs": [ + { + "data": { + "application/vnd.jupyter.widget-view+json": { + "model_id": "7304350377d845e78a9a758235e5eba1", + "version_major": 2, + "version_minor": 0 + }, + "text/plain": [ + "FileUpload(value=(), accept='.pdf', description='Upload PDF', layout=Layout(border_bottom='2px dashed #cccccc'…" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "display(upload_widget)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "70c76b90-e626-44b3-8d1f-6e995e8a938d", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/week1/community-contributions/day-1-to-do-list using-ollama.ipynb 
b/week1/community-contributions/day-1-to-do-list using-ollama.ipynb new file mode 100644 index 0000000..e01b5df --- /dev/null +++ b/week1/community-contributions/day-1-to-do-list using-ollama.ipynb @@ -0,0 +1,206 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 208, + "id": "f61139a1-40e1-4273-b9a6-5a0a9d63a9bd", + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "import json\n", + "from reportlab.lib.pagesizes import letter\n", + "from reportlab.pdfgen import canvas\n", + "from IPython.display import display, FileLink\n", + "from IPython.display import display, HTML, FileLink\n", + "from reportlab.lib.pagesizes import A4" + ] + }, + { + "cell_type": "code", + "execution_count": 80, + "id": "e0858b96-fd41-4911-a333-814e4ed23279", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Collecting reportlab\n", + " Downloading reportlab-4.2.5-py3-none-any.whl.metadata (1.5 kB)\n", + "Requirement already satisfied: pillow>=9.0.0 in c:\\users\\legion\\anaconda3\\envs\\to_do_list\\lib\\site-packages (from reportlab) (11.0.0)\n", + "Collecting chardet (from reportlab)\n", + " Downloading chardet-5.2.0-py3-none-any.whl.metadata (3.4 kB)\n", + "Downloading reportlab-4.2.5-py3-none-any.whl (1.9 MB)\n", + " ---------------------------------------- 0.0/1.9 MB ? 
eta -:--:--\n", + " ---------------- ----------------------- 0.8/1.9 MB 6.7 MB/s eta 0:00:01\n", + " ---------------------------------------- 1.9/1.9 MB 11.9 MB/s eta 0:00:00\n", + "Downloading chardet-5.2.0-py3-none-any.whl (199 kB)\n", + "Installing collected packages: chardet, reportlab\n", + "Successfully installed chardet-5.2.0 reportlab-4.2.5\n" + ] + } + ], + "source": [ + "!pip install reportlab" + ] + }, + { + "cell_type": "code", + "execution_count": 220, + "id": "62cc9d37-c801-4e8a-ad2c-7b1450725a10", + "metadata": {}, + "outputs": [], + "source": [ + "OLLAMA_API = \"http://localhost:11434/api/chat\"\n", + "HEADERS = {\"Content-Type\":\"application/json\"}\n", + "MODEL = \"llama3.2\"" + ] + }, + { + "cell_type": "code", + "execution_count": 249, + "id": "525a81e7-30f8-4db7-bc8d-29948195bd4f", + "metadata": {}, + "outputs": [], + "source": [ + "system_prompt = \"\"\"You are a to-do list generator. Based on the user's input, you will create a clear and descriptive to-do\n", + "list using bullet points. Only generate the to-do list as bullet points with some explanation, and a time frame only if asked for, and nothing else. 
\n", + "Be a little descriptive.\"\"\"" + ] + }, + { + "cell_type": "code", + "execution_count": 315, + "id": "7fca3303-3add-468a-a6bd-be7a4d72c811", + "metadata": {}, + "outputs": [], + "source": [ + "def generate_to_do_list(task_description):\n", + " payload = {\n", + " \"model\": MODEL,\n", + " \"messages\": [\n", + " {\"role\": \"system\", \"content\": system_prompt},\n", + " {\"role\": \"user\", \"content\": task_description}\n", + " ],\n", + " \"stream\": False\n", + " }\n", + "\n", + " response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n", + "\n", + " if response.status_code == 200:\n", + " try:\n", + " json_response = response.json()\n", + " to_do_list = json_response.get(\"message\", {}).get(\"content\", \"No to-do list found.\")\n", + " \n", + " formatted_output = \"Your To-Do List:\\n\\n\" + to_do_list\n", + " file_name = \"to_do_list.txt\"\n", + " \n", + " with open(file_name, \"w\", encoding=\"utf-8\") as file:\n", + " file.write(formatted_output)\n", + "\n", + " return file_name\n", + " \n", + " except Exception as e:\n", + " return f\"Error parsing JSON: {e}\"\n", + " else:\n", + " return f\"Error: {response.status_code} - {response.text}\"" + ] + }, + { + "cell_type": "code", + "execution_count": 316, + "id": "d45d6c7e-0e89-413e-8f30-e4975ea6d043", + "metadata": {}, + "outputs": [ + { + "name": "stdin", + "output_type": "stream", + "text": [ + "Enter the task description of the to-do list: Give me a 4-week to-do list plan for a wedding reception party.\n" + ] + } + ], + "source": [ + "task_description = input(\"Enter the task description of the to-do list:\")" + ] + }, + { + "cell_type": "code", + "execution_count": 317, + "id": "5493da44-e254-4d06-b973-a8069c2fc625", + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "result = generate_to_do_list(task_description)" + ] + }, + { + "cell_type": "code", + "execution_count": 318, + "id": "5e95c722-ce1a-4630-b21a-1e00e7ba6ab9", + "metadata": {}, + "outputs": [ + { 
+ "data": { + "text/html": [ + "You can download your to-do list by clicking the link below:" + ], + "text/plain": [ + "<IPython.core.display.HTML object>" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "display(HTML(\"You can download your to-do list by clicking the link below:
\"))\n", + "display(FileLink(result))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f3d0a44e-bca4-4944-8593-1761c2f73a70", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/week1/day1.ipynb b/week1/day1.ipynb index b8209e1..c232515 100644 --- a/week1/day1.ipynb +++ b/week1/day1.ipynb @@ -200,6 +200,11 @@ "# A class to represent a Webpage\n", "# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", "\n", + "# Some websites need you to use proper headers when fetching them:\n", + "headers = {\n", + " \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", + "}\n", + "\n", "class Website:\n", "\n", " def __init__(self, url):\n", @@ -207,7 +212,7 @@ " Create this Website object from the given url using the BeautifulSoup library\n", " \"\"\"\n", " self.url = url\n", - " response = requests.get(url)\n", + " response = requests.get(url, headers=headers)\n", " soup = BeautifulSoup(response.content, 'html.parser')\n", " self.title = soup.title.string if soup.title else \"No title found\"\n", " for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", diff --git a/week1/day5.ipynb b/week1/day5.ipynb index 2fc38ac..4dd70ca 100644 --- a/week1/day5.ipynb +++ b/week1/day5.ipynb @@ -78,6 +78,11 @@ "source": [ "# A class to represent a Webpage\n", "\n", + "# Some websites need you to use proper headers when fetching them:\n", + "headers = {\n", + " \"User-Agent\": \"Mozilla/5.0 
(Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", + "}\n", + "\n", "class Website:\n", " \"\"\"\n", " A utility class to represent a Website that we have scraped, now with links\n", @@ -85,7 +90,7 @@ "\n", " def __init__(self, url):\n", " self.url = url\n", - " response = requests.get(url)\n", + " response = requests.get(url, headers=headers)\n", " self.body = response.content\n", " soup = BeautifulSoup(self.body, 'html.parser')\n", " self.title = soup.title.string if soup.title else \"No title found\"\n", diff --git a/week2/community-contributions/day3.upsell.ipynb b/week2/community-contributions/day3.upsell.ipynb new file mode 100644 index 0000000..dd2bd06 --- /dev/null +++ b/week2/community-contributions/day3.upsell.ipynb @@ -0,0 +1,355 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "75e2ef28-594f-4c18-9d22-c6b8cd40ead2", + "metadata": {}, + "source": [ + "# Day 3 - Conversational AI - aka Chatbot!" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "70e39cd8-ec79-4e3e-9c26-5659d42d0861", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import gradio as gr" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "231605aa-fccb-447e-89cf-8b187444536a", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "OpenAI API Key exists and begins sk-proj-\n", + "Anthropic API Key exists and begins sk-ant-\n", + "Google API Key exists and begins AIzaSyA-\n" + ] + } + ], + "source": [ + "# Load environment variables in a file called .env\n", + "# Print the key prefixes to help with any debugging\n", + "\n", + "load_dotenv()\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", + "google_api_key = os.getenv('GOOGLE_API_KEY')\n", + "\n", + "if openai_api_key:\n", + 
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")\n", + " \n", + "if anthropic_api_key:\n", + " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", + "else:\n", + " print(\"Anthropic API Key not set\")\n", + "\n", + "if google_api_key:\n", + " print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", + "else:\n", + " print(\"Google API Key not set\")" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "6541d58e-2297-4de1-b1f7-77da1b98b8bb", + "metadata": {}, + "outputs": [], + "source": [ + "# Initialize\n", + "\n", + "openai = OpenAI()\n", + "MODEL = 'gpt-4o-mini'" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "e16839b5-c03b-4d9d-add6-87a0f6f37575", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"You are a helpful assistant\"" + ] + }, + { + "cell_type": "markdown", + "id": "98e97227-f162-4d1a-a0b2-345ff248cbe7", + "metadata": {}, + "source": [ + "# Please read this! A change from the video:\n", + "\n", + "In the video, I explain how we now need to write a function called:\n", + "\n", + "`chat(message, history)`\n", + "\n", + "Which expects to receive `history` in a particular format, which we need to map to the OpenAI format before we call OpenAI:\n", + "\n", + "```\n", + "[\n", + " {\"role\": \"system\", \"content\": \"system message here\"},\n", + " {\"role\": \"user\", \"content\": \"first user prompt here\"},\n", + " {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n", + " {\"role\": \"user\", \"content\": \"the new user prompt\"},\n", + "]\n", + "```\n", + "\n", + "But Gradio has been upgraded! 
Now it will pass in `history` in the exact OpenAI format, perfect for us to send straight to OpenAI.\n", + "\n", + "So our work just got easier!\n", + "\n", + "We will write a function `chat(message, history)` where: \n", + "**message** is the prompt to use \n", + "**history** is the past conversation, in OpenAI format \n", + "\n", + "We will combine the system message, history and latest message, then call OpenAI." + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "1eacc8a4-4b48-4358-9e06-ce0020041bc1", + "metadata": {}, + "outputs": [], + "source": [ + "# Simpler than in my video - we can easily create this function that calls OpenAI\n", + "# It's now just 1 line of code to prepare the input to OpenAI!\n", + "\n", + "def chat(message, history):\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + "\n", + " print(\"History is:\")\n", + " print(history)\n", + " print(\"And messages is:\")\n", + " print(messages)\n", + "\n", + " stream = openai.chat.completions.create(model=MODEL, messages=messages, stream=True)\n", + "\n", + " response = \"\"\n", + " for chunk in stream:\n", + " response += chunk.choices[0].delta.content or ''\n", + " yield response" + ] + }, + { + "cell_type": "markdown", + "id": "1334422a-808f-4147-9c4c-57d63d9780d0", + "metadata": {}, + "source": [ + "## And then enter Gradio's magic!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0866ca56-100a-44ab-8bd0-1568feaf6bf2", + "metadata": {}, + "outputs": [], + "source": [ + "gr.ChatInterface(fn=chat, type=\"messages\").launch()" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "1f91b414-8bab-472d-b9c9-3fa51259bdfe", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"You are a helpful assistant in a clothes store. You should try to gently encourage \\\n", + "the customer to try items that are on sale. 
Hats are 60% off, and most other items are 50% off. \\\n", + "For example, if the customer says 'I'm looking to buy a hat', \\\n", + "you could reply something like, 'Wonderful - we have lots of hats - including several that are part of our sales event.'\\\n", + "Encourage the customer to buy hats if they are unsure what to get.\"" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "4e5be3ec-c26c-42bc-ac16-c39d369883f6", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(message, history):\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + "\n", + " stream = openai.chat.completions.create(model=MODEL, messages=messages, stream=True)\n", + "\n", + " response = \"\"\n", + " for chunk in stream:\n", + " response += chunk.choices[0].delta.content or ''\n", + " yield response" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "413e9e4e-7836-43ac-a0c3-e1ab5ed6b136", + "metadata": {}, + "outputs": [], + "source": [ + "gr.ChatInterface(fn=chat, type=\"messages\").launch()" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "d75f0ffa-55c8-4152-b451-945021676837", + "metadata": {}, + "outputs": [], + "source": [ + "system_message += \"\\nIf the customer asks for shoes, you should respond that shoes are not on sale today, \\\n", + "but remind the customer to look at hats!\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c602a8dd-2df7-4eb7-b539-4e01865a6351", + "metadata": {}, + "outputs": [], + "source": [ + "gr.ChatInterface(fn=chat, type=\"messages\").launch()" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "id": "0a987a66-1061-46d6-a83a-a30859dc88bf", + "metadata": {}, + "outputs": [], + "source": [ + "# Fixed a bug in this function brilliantly identified by student Gabor M.!\n", + "# I've also improved the structure of this function\n", + "# Paul Goodwin added \"Buy One get one free 
offer\" for a bit of fun\n", + "\n", + "def chat(message, history):\n", + "\n", + " relevant_system_message = system_message\n", + " keywords = ['discount', 'offer', 'promotion'] # Define words that imply customer is looking for a better deal\n", + "\n", + " if 'belt' in message.strip().lower():\n", + " relevant_system_message += (\n", + " \" The store does not sell belts; if you are asked for belts, be sure to point out other items on sale.\"\n", + " )\n", + " elif any(word in message.strip().lower() for word in keywords): # Use elif for clarity\n", + " relevant_system_message += (\n", + " \" If the customer asks for more money off the selling price, the store is currently running 'buy 2 get one free' campaign, so be sure to mention this.\"\n", + " )\n", + "\n", + " messages = [{\"role\": \"system\", \"content\": relevant_system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", + "\n", + " stream = openai.chat.completions.create(model=MODEL, messages=messages, stream=True)\n", + "\n", + " response = \"\"\n", + " for chunk in stream:\n", + " response += chunk.choices[0].delta.content or ''\n", + " yield response" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "id": "20570de2-eaad-42cc-a92c-c779d71b48b6", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "* Running on local URL: http://127.0.0.1:7862\n", + "\n", + "To create a public link, set `share=True` in `launch()`.\n" + ] + }, + { + "data": { + "text/html": [ + "" + ], + "text/plain": [ + "\n",
+ "# Business Applications\n", + "\n", + "Conversational Assistants are of course a hugely common use case for Gen AI, and the latest frontier models are remarkably good at nuanced conversation. And Gradio makes it easy to have a user interface. Another crucial skill we covered is how to use prompting to provide context, information and examples.\n", + "\n", + "Consider how you could apply an AI Assistant to your business, and make yourself a prototype. Use the system prompt to give context on your business, and set the tone for the LLM.\n",
+ "