\n",
" \n",
- " \n",
+ " \n",
" | \n",
" \n",
" Important Note - Please read me\n",
@@ -41,7 +41,7 @@
"\n",
" \n",
" \n",
- " \n",
+ " \n",
" | \n",
" \n",
" Reminder about the resources page\n",
@@ -610,7 +610,7 @@
"\n",
" \n",
" \n",
- " \n",
+ " \n",
" | \n",
" \n",
" Before you continue\n",
@@ -646,7 +646,7 @@
"\n",
" \n",
" \n",
- " \n",
+ " \n",
" | \n",
" \n",
" Business relevance\n",
@@ -667,7 +667,7 @@
],
"metadata": {
"kernelspec": {
- "display_name": ".venv",
+ "display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -681,7 +681,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.9.6"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week2/community-contributions/day1-with-3way.ipynb b/week2/community-contributions/day1-with-3way.ipynb
index 5681e64..2cf96ba 100644
--- a/week2/community-contributions/day1-with-3way.ipynb
+++ b/week2/community-contributions/day1-with-3way.ipynb
@@ -641,7 +641,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week2/community-contributions/day2.ipynb b/week2/community-contributions/day2.ipynb
index f39ffae..05d02bf 100644
--- a/week2/community-contributions/day2.ipynb
+++ b/week2/community-contributions/day2.ipynb
@@ -466,7 +466,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week2/community-contributions/day4-with-discount-tool.ipynb b/week2/community-contributions/day4-with-discount-tool.ipynb
new file mode 100644
index 0000000..eedb86f
--- /dev/null
+++ b/week2/community-contributions/day4-with-discount-tool.ipynb
@@ -0,0 +1,291 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "ddfa9ae6-69fe-444a-b994-8c4c5970a7ec",
+ "metadata": {},
+ "source": [
+ "# Project - Airline AI Assistant\n",
+ "\n",
+ "We'll now bring together what we've learned to make an AI Customer Support assistant for an Airline"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "\n",
+ "import os\n",
+ "import json\n",
+ "from dotenv import load_dotenv\n",
+ "from openai import OpenAI\n",
+ "import gradio as gr"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Initialization\n",
+ "\n",
+ "load_dotenv()\n",
+ "\n",
+ "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
+ "if openai_api_key:\n",
+ " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
+ "else:\n",
+ " print(\"OpenAI API Key not set\")\n",
+ " \n",
+ "MODEL = \"gpt-4o-mini\"\n",
+ "openai = OpenAI()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0a521d84-d07c-49ab-a0df-d6451499ed97",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n",
+ "system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n",
+ "system_message += \"Always be accurate. If you don't know the answer, say so.\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "61a2a15d-b559-4844-b377-6bd5cb4949f6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This function looks rather simpler than the one from my video, because we're taking advantage of the latest Gradio updates\n",
+ "\n",
+ "def chat(message, history):\n",
+ " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
+ " response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
+ " return response.choices[0].message.content\n",
+ "\n",
+ "gr.ChatInterface(fn=chat, type=\"messages\").launch()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4",
+ "metadata": {},
+ "source": [
+ "## Tools\n",
+ "\n",
+ "Tools are an incredibly powerful feature provided by the frontier LLMs.\n",
+ "\n",
+ "With tools, you can write a function, and have the LLM call that function as part of its response.\n",
+ "\n",
+ "Sounds almost spooky.. we're giving it the power to run code on our machine?\n",
+ "\n",
+ "Well, kinda."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Let's start by making a useful function\n",
+ "\n",
+ "ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\"}\n",
+ "ticket_discounts={\"london\":5, \"tokyo\":15}\n",
+ "\n",
+ "def get_ticket_price(destination_city):\n",
+ " print(f\"Tool get_ticket_price called for {destination_city}\")\n",
+ " city = destination_city.lower()\n",
+ " return ticket_prices.get(city, \"Unknown\")\n",
+ "def get_ticket_discount(destination_city):\n",
+ " print(f\"Tool get_ticket_discount called for {destination_city}\")\n",
+ " city = destination_city.lower()\n",
+ " return ticket_discounts.get(city,0)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "get_ticket_price(\"Berlin\")\n",
+ "get_ticket_discount(\"Berlin\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4afceded-7178-4c05-8fa6-9f2085e6a344",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# There's a particular dictionary structure that's required to describe our function:\n",
+ "\n",
+ "price_function = {\n",
+ " \"name\": \"get_ticket_price\",\n",
+ " \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"destination_city\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"The city that the customer wants to travel to\",\n",
+ " },\n",
+ " },\n",
+ " \"required\": [\"destination_city\"],\n",
+ " \"additionalProperties\": False\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "discount_function = {\n",
+ " \"name\": \"get_ticket_discount\",\n",
+ " \"description\": \"Get the discount on price of a return ticket to the destination city. Call this whenever you need to know the discount on the ticket price, for example when a customer asks 'Is there a discount on the price on the ticket to this city'\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"destination_city\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"The discount on price to the city that the customer wants to travel to\",\n",
+ " },\n",
+ " },\n",
+ " \"required\": [\"destination_city\"],\n",
+ " \"additionalProperties\": False\n",
+ " }\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# And this is included in a list of tools:\n",
+ "\n",
+ "tools = [{\"type\": \"function\", \"function\": price_function},\n",
+ " {\"type\":\"function\", \"function\": discount_function}]\n",
+ "tools_functions_map = {\n",
+ " \"get_ticket_price\":get_ticket_price,\n",
+ " \"get_ticket_discount\":get_ticket_discount\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c3d3554f-b4e3-4ce7-af6f-68faa6dd2340",
+ "metadata": {},
+ "source": [
+ "## Getting OpenAI to use our Tool\n",
+ "\n",
+ "There's some fiddly stuff to allow OpenAI \"to call our tool\"\n",
+ "\n",
+ "What we actually do is give the LLM the opportunity to inform us that it wants us to run the tool.\n",
+ "\n",
+ "Here's how the new chat function looks:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ce9b0744-9c78-408d-b9df-9f6fd9ed78cf",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def chat(message, history):\n",
+ " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
+ " response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n",
+ "\n",
+ " if response.choices[0].finish_reason==\"tool_calls\":\n",
+ " message = response.choices[0].message\n",
+ " tool_responses, city = handle_tool_call(message)\n",
+ " messages.append(message)\n",
+ " for tool_response in tool_responses:\n",
+ " messages.append(tool_response)\n",
+ " response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
+ " \n",
+ " return response.choices[0].message.content"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b0992986-ea09-4912-a076-8e5603ee631f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# We have to write that function handle_tool_call:\n",
+ "\n",
+ "def handle_tool_call(message):\n",
+ " tool_calls = message.tool_calls;\n",
+ " arguments = json.loads(tool_calls[0].function.arguments)\n",
+ " city = arguments.get('destination_city')\n",
+ " responses=[]\n",
+ " \n",
+ " for tool_call in tool_calls:\n",
+ " name = tool_call.function.name\n",
+ " if name in tools_functions_map:\n",
+ " key = \"price\" if \"price\" in name else \"discount\"\n",
+ " value = tools_functions_map[name](city)\n",
+ " responses.append({\n",
+ " \"role\": \"tool\",\n",
+ " \"content\": json.dumps({\"destination_city\": city, key : value}),\n",
+ " \"tool_call_id\": tool_call.id\n",
+ " })\n",
+ " return responses, city"
+ ]
+ },
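+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2c7e4f9a-1d3b-4a6c-9e8f-5b0d7a2c4e61",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional sanity check - a sketch, not part of the original flow.\n",
+ "# SimpleNamespace mimics the attributes (tool_calls, function.name, function.arguments, id)\n",
+ "# of the message object OpenAI returns, so we can exercise handle_tool_call without an API call.\n",
+ "\n",
+ "from types import SimpleNamespace\n",
+ "\n",
+ "fake_message = SimpleNamespace(tool_calls=[\n",
+ "    SimpleNamespace(id=\"call_1\", function=SimpleNamespace(name=\"get_ticket_price\", arguments='{\"destination_city\": \"Tokyo\"}')),\n",
+ "    SimpleNamespace(id=\"call_2\", function=SimpleNamespace(name=\"get_ticket_discount\", arguments='{\"destination_city\": \"Tokyo\"}'))\n",
+ "])\n",
+ "\n",
+ "handle_tool_call(fake_message)"
+ ]
+ },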
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "gr.ChatInterface(fn=chat, type=\"messages\").launch()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "11c9da69-d0cf-4cf2-a49e-e5669deec47b",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/week2/community-contributions/day4.ipynb b/week2/community-contributions/day4.ipynb
index f942756..3621ebb 100644
--- a/week2/community-contributions/day4.ipynb
+++ b/week2/community-contributions/day4.ipynb
@@ -292,7 +292,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week2/community-contributions/task1.ipynb b/week2/community-contributions/task1.ipynb
index 9ca08a3..2758f89 100644
--- a/week2/community-contributions/task1.ipynb
+++ b/week2/community-contributions/task1.ipynb
@@ -315,7 +315,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week2/community-contributions/week2_multimodal_chatbot_with_audio.ipynb b/week2/community-contributions/week2_multimodal_chatbot_with_audio.ipynb
new file mode 100644
index 0000000..eb7c377
--- /dev/null
+++ b/week2/community-contributions/week2_multimodal_chatbot_with_audio.ipynb
@@ -0,0 +1,475 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "ad900e1c-b4a9-4f05-93d5-e364fae208dd",
+ "metadata": {},
+ "source": [
+ "# Multimodal Expert Tutor\n",
+ "\n",
+ "An AI assistant which leverages expertise from other sources for you.\n",
+ "\n",
+ "Features:\n",
+ "- Multimodal\n",
+ "- Uses tools\n",
+ "- Streams responses\n",
+ "- Reads out the responses after streaming\n",
+ "- Coverts voice to text during input\n",
+ "\n",
+ "Scope for Improvement\n",
+ "- Read response faster (as streaming starts)\n",
+ "- code optimization\n",
+ "- UI enhancements\n",
+ "- Make it more real time"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "c1070317-3ed9-4659-abe3-828943230e03",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "\n",
+ "import os\n",
+ "import json\n",
+ "from dotenv import load_dotenv\n",
+ "from IPython.display import Markdown, display, update_display\n",
+ "from openai import OpenAI\n",
+ "import gradio as gr\n",
+ "import google.generativeai\n",
+ "import anthropic"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# constants\n",
+ "\n",
+ "MODEL_GPT = 'gpt-4o-mini'\n",
+ "MODEL_CLAUDE = 'claude-3-5-sonnet-20240620'\n",
+ "MODEL_GEMINI = 'gemini-1.5-flash'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# set up environment\n",
+ "\n",
+ "load_dotenv()\n",
+ "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n",
+ "os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n",
+ "os.environ['GOOGLE_API_KEY'] = os.getenv('GOOGLE_API_KEY', 'your-key-if-not-using-env')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a6fd8538-0be6-4539-8add-00e42133a641",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Connect to OpenAI, Anthropic and Google\n",
+ "\n",
+ "openai = OpenAI()\n",
+ "\n",
+ "claude = anthropic.Anthropic()\n",
+ "\n",
+ "google.generativeai.configure()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "852faee9-79aa-4741-a676-4f5145ccccdc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import tempfile\n",
+ "import subprocess\n",
+ "from io import BytesIO\n",
+ "from pydub import AudioSegment\n",
+ "import time\n",
+ "\n",
+ "def play_audio(audio_segment):\n",
+ " temp_dir = tempfile.gettempdir()\n",
+ " temp_path = os.path.join(temp_dir, \"temp_audio.wav\")\n",
+ " try:\n",
+ " audio_segment.export(temp_path, format=\"wav\")\n",
+ " subprocess.call([\n",
+ " \"ffplay\",\n",
+ " \"-nodisp\",\n",
+ " \"-autoexit\",\n",
+ " \"-hide_banner\",\n",
+ " temp_path\n",
+ " ], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)\n",
+ " finally:\n",
+ " try:\n",
+ " os.remove(temp_path)\n",
+ " except Exception:\n",
+ " pass\n",
+ " \n",
+ "def talker(message):\n",
+ " response = openai.audio.speech.create(\n",
+ " model=\"tts-1\",\n",
+ " voice=\"onyx\", # Also, try replacing onyx with alloy\n",
+ " input=message\n",
+ " )\n",
+ " audio_stream = BytesIO(response.content)\n",
+ " audio = AudioSegment.from_file(audio_stream, format=\"mp3\")\n",
+ " play_audio(audio)\n",
+ "\n",
+ "talker(\"Well hi there\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8595807b-8ae2-4e1b-95d9-e8532142e8bb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# prompts\n",
+ "general_prompt = \"Please be as technical as possible with your answers.\\\n",
+ "Only answer questions about topics you have expertise in.\\\n",
+ "If you do not know something say so.\"\n",
+ "\n",
+ "additional_prompt_gpt = \"Analyze the user query and determine if the content is primarily related to \\\n",
+ "coding, software engineering, data science and LLMs. \\\n",
+ "If so please answer it yourself else if it is primarily related to \\\n",
+ "physics, chemistry or biology get answers from tool ask_gemini or \\\n",
+ "if it belongs to subject related to finance, business or economics get answers from tool ask_claude.\"\n",
+ "\n",
+ "system_prompt_gpt = \"You are a helpful technical tutor who is an expert in \\\n",
+ "coding, software engineering, data science and LLMs.\"+ additional_prompt_gpt + general_prompt\n",
+ "system_prompt_gemini = \"You are a helpful technical tutor who is an expert in physics, chemistry and biology.\" + general_prompt\n",
+ "system_prompt_claude = \"You are a helpful technical tutor who is an expert in finance, business and economics.\" + general_prompt\n",
+ "\n",
+ "def get_user_prompt(question):\n",
+ " return \"Please give a detailed explanation to the following question: \" + question"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "24d4a313-60b0-4696-b455-6cfef95ad2fe",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def call_claude(question):\n",
+ " result = claude.messages.create(\n",
+ " model=MODEL_CLAUDE,\n",
+ " max_tokens=200,\n",
+ " temperature=0.7,\n",
+ " system=system_prompt_claude,\n",
+ " messages=[\n",
+ " {\"role\": \"user\", \"content\": get_user_prompt(question)},\n",
+ " ],\n",
+ " )\n",
+ " \n",
+ " return result.content[0].text"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "cd5d5345-54ab-470b-9b5b-5611a7981458",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def call_gemini(question):\n",
+ " gemini = google.generativeai.GenerativeModel(\n",
+ " model_name=MODEL_GEMINI,\n",
+ " system_instruction=system_prompt_gemini\n",
+ " )\n",
+ " response = gemini.generate_content(get_user_prompt(question))\n",
+ " response = response.text\n",
+ " return response"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6f74da8f-56d1-405e-bc81-040f5428d296",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# tools and functions\n",
+ "\n",
+ "def ask_claude(question):\n",
+ " print(f\"Tool ask_claude called for {question}\")\n",
+ " return call_claude(question)\n",
+ "def ask_gemini(question):\n",
+ " print(f\"Tool ask_gemini called for {question}\")\n",
+ " return call_gemini(question)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c469304d-99b4-42ee-ab02-c9216b61594b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ask_claude_function = {\n",
+ " \"name\": \"ask_claude\",\n",
+ " \"description\": \"Get the answer to the question related to a topic this agent is faimiliar with. Call this whenever you need to answer something related to finance, marketing, sales or business in general.For example 'What is gross margin' or 'Explain stock market'\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"question_for_topic\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"The question which is related to finance, business or economics.\",\n",
+ " },\n",
+ " },\n",
+ " \"required\": [\"question_for_topic\"],\n",
+ " \"additionalProperties\": False\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "ask_gemini_function = {\n",
+ " \"name\": \"ask_gemini\",\n",
+ " \"description\": \"Get the answer to the question related to a topic this agent is faimiliar with. Call this whenever you need to answer something related to physics, chemistry or biology.Few examples: 'What is gravity','How do rockets work?', 'What is ATP'\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"question_for_topic\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"The question which is related to physics, chemistry or biology\",\n",
+ " },\n",
+ " },\n",
+ " \"required\": [\"question_for_topic\"],\n",
+ " \"additionalProperties\": False\n",
+ " }\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "73a60096-c49b-401f-bfd3-d1d40f4563d2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "tools = [{\"type\": \"function\", \"function\": ask_claude_function},\n",
+ " {\"type\": \"function\", \"function\": ask_gemini_function}]\n",
+ "tools_functions_map = {\n",
+ " \"ask_claude\":ask_claude,\n",
+ " \"ask_gemini\":ask_gemini\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9d54e758-42b2-42f2-a8eb-49c35d44acc6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def chat(history):\n",
+ " messages = [{\"role\": \"system\", \"content\": system_prompt_gpt}] + history\n",
+ " stream = openai.chat.completions.create(model=MODEL_GPT, messages=messages, tools=tools, stream=True)\n",
+ " \n",
+ " full_response = \"\"\n",
+ " history += [{\"role\":\"assistant\", \"content\":full_response}]\n",
+ " \n",
+ " tool_call_accumulator = \"\" # Accumulator for JSON fragments of tool call arguments\n",
+ " tool_call_id = None # Current tool call ID\n",
+ " tool_call_function_name = None # Function name\n",
+ " tool_calls = [] # List to store complete tool calls\n",
+ "\n",
+ " for chunk in stream:\n",
+ " if chunk.choices[0].delta.content:\n",
+ " full_response += chunk.choices[0].delta.content or \"\"\n",
+ " history[-1]['content']=full_response\n",
+ " yield history\n",
+ " \n",
+ " if chunk.choices[0].delta.tool_calls:\n",
+ " message = chunk.choices[0].delta\n",
+ " for tc in chunk.choices[0].delta.tool_calls:\n",
+ " if tc.id: # New tool call detected here\n",
+ " tool_call_id = tc.id\n",
+ " if tool_call_function_name is None:\n",
+ " tool_call_function_name = tc.function.name\n",
+ " \n",
+ " tool_call_accumulator += tc.function.arguments if tc.function.arguments else \"\"\n",
+ " \n",
+ " # When the accumulated JSON string seems complete then:\n",
+ " try:\n",
+ " func_args = json.loads(tool_call_accumulator)\n",
+ " \n",
+ " # Handle tool call and get response\n",
+ " tool_response, tool_call = handle_tool_call(tool_call_function_name, func_args, tool_call_id)\n",
+ " \n",
+ " tool_calls.append(tool_call)\n",
+ "\n",
+ " # Add tool call and tool response to messages this is required by openAI api\n",
+ " messages.append({\n",
+ " \"role\": \"assistant\",\n",
+ " \"tool_calls\": tool_calls\n",
+ " })\n",
+ " messages.append(tool_response)\n",
+ " \n",
+ " # Create new response with full context\n",
+ " response = openai.chat.completions.create(\n",
+ " model=MODEL_GPT, \n",
+ " messages=messages, \n",
+ " stream=True\n",
+ " )\n",
+ " \n",
+ " # Reset and accumulate new full response\n",
+ " full_response = \"\"\n",
+ " for chunk in response:\n",
+ " if chunk.choices[0].delta.content:\n",
+ " full_response += chunk.choices[0].delta.content or \"\"\n",
+ " history[-1]['content'] = full_response\n",
+ " yield history\n",
+ " \n",
+ " # Reset tool call accumulator and related variables\n",
+ " tool_call_accumulator = \"\"\n",
+ " tool_call_id = None\n",
+ " tool_call_function_name = None\n",
+ " tool_calls = []\n",
+ "\n",
+ " except json.JSONDecodeError:\n",
+ " # Incomplete JSON; continue accumulating\n",
+ " pass\n",
+ "\n",
+ " # trigger text-to-audio once full response available\n",
+ " talker(full_response)"
+ ]
+ },
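+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7e4d2b9a-1c3f-4a5e-8b6d-2f0a9c8e7d61",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A minimal sketch (illustration only) of the accumulate-then-parse pattern used in chat() above:\n",
+ "# streamed tool-call arguments arrive as JSON fragments, and json.loads only succeeds once the\n",
+ "# fragments form a complete object - until then it raises json.JSONDecodeError.\n",
+ "\n",
+ "fragments = ['{\"question', '_for_topic\": \"What', ' is gravity?\"}']\n",
+ "accumulator = \"\"\n",
+ "for fragment in fragments:\n",
+ "    accumulator += fragment\n",
+ "    try:\n",
+ "        print(\"complete:\", json.loads(accumulator))\n",
+ "        accumulator = \"\"\n",
+ "    except json.JSONDecodeError:\n",
+ "        print(\"still accumulating...\")"
+ ]
+ },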
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "515d3774-cc2c-44cd-af9b-768a63ed90dc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# We have to write that function handle_tool_call:\n",
+ "def handle_tool_call(function_name, arguments, tool_call_id):\n",
+ " question = arguments.get('question_for_topic')\n",
+ " \n",
+ " # Prepare tool call information\n",
+ " tool_call = {\n",
+ " \"id\": tool_call_id,\n",
+ " \"type\": \"function\",\n",
+ " \"function\": {\n",
+ " \"name\": function_name,\n",
+ " \"arguments\": json.dumps(arguments)\n",
+ " }\n",
+ " }\n",
+ " \n",
+ " if function_name in tools_functions_map:\n",
+ " answer = tools_functions_map[function_name](question)\n",
+ " response = {\n",
+ " \"role\": \"tool\",\n",
+ " \"content\": json.dumps({\"question\": question, \"answer\" : answer}),\n",
+ " \"tool_call_id\": tool_call_id\n",
+ " }\n",
+ "\n",
+ " return response, tool_call"
+ ]
+ },
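+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6a9d8c2b-3e5f-4d7a-8b1c-0f2e4a6c8d93",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional sanity check - a sketch with hand-built arguments, showing the shapes that\n",
+ "# handle_tool_call returns. Note: this really invokes ask_gemini, so it makes a Gemini API call.\n",
+ "\n",
+ "response, tool_call = handle_tool_call(\"ask_gemini\", {\"question_for_topic\": \"What is gravity?\"}, \"call_0\")\n",
+ "print(tool_call)\n",
+ "print(response[\"role\"], response[\"tool_call_id\"])"
+ ]
+ },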
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5d7cc622-8635-4693-afa3-b5bcc2f9a63d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def transcribe_audio(audio_file_path):\n",
+ " try:\n",
+ " audio_file = open(audio_file_path, \"rb\")\n",
+ " response = openai.audio.transcriptions.create(model=\"whisper-1\", file=audio_file) \n",
+ " return response.text\n",
+ " except Exception as e:\n",
+ " return f\"An error occurred: {e}\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4ded9b3f-83e1-4971-9714-4894f2982b5a",
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "with gr.Blocks() as ui:\n",
+ " with gr.Row():\n",
+ " chatbot = gr.Chatbot(height=500, type=\"messages\", label=\"Multimodal Technical Expert Chatbot\")\n",
+ " with gr.Row():\n",
+ " entry = gr.Textbox(label=\"Ask our technical expert anything:\")\n",
+ " audio_input = gr.Audio(\n",
+ " sources=\"microphone\", \n",
+ " type=\"filepath\",\n",
+ " label=\"Record audio\",\n",
+ " editable=False,\n",
+ " waveform_options=gr.WaveformOptions(\n",
+ " show_recording_waveform=False,\n",
+ " ),\n",
+ " )\n",
+ "\n",
+ " # Add event listener for audio stop recording and show text on input area\n",
+ " audio_input.stop_recording(\n",
+ " fn=transcribe_audio, \n",
+ " inputs=audio_input, \n",
+ " outputs=entry\n",
+ " )\n",
+ " \n",
+ " with gr.Row():\n",
+ " clear = gr.Button(\"Clear\")\n",
+ "\n",
+ " def do_entry(message, history):\n",
+ " history += [{\"role\":\"user\", \"content\":message}]\n",
+ " yield \"\", history\n",
+ " \n",
+ " entry.submit(do_entry, inputs=[entry, chatbot], outputs=[entry,chatbot]).then(\n",
+ " chat, inputs=chatbot, outputs=chatbot)\n",
+ " \n",
+ " clear.click(lambda: None, inputs=None, outputs=chatbot, queue=False)\n",
+ "\n",
+ "ui.launch(inbrowser=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "532cb948-7733-4323-b85f-febfe2631e66",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/week2/day1.ipynb b/week2/day1.ipynb
index dda6516..fe515bc 100644
--- a/week2/day1.ipynb
+++ b/week2/day1.ipynb
@@ -104,8 +104,8 @@
"outputs": [],
"source": [
"# import for google\n",
- "# in rare cases, this seems to give an error on some systems. Please reach out to me if this happens,\n",
- "# or you can feel free to skip Gemini - it's the lowest priority of the frontier models that we use\n",
+ "# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n",
+ "# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n",
"\n",
"import google.generativeai"
]
@@ -148,14 +148,22 @@
"metadata": {},
"outputs": [],
"source": [
- "# Connect to OpenAI, Anthropic and Google\n",
- "# All 3 APIs are similar\n",
- "# Having problems with API files? You can use openai = OpenAI(api_key=\"your-key-here\") and same for claude\n",
- "# Having problems with Google Gemini setup? Then just skip Gemini; you'll get all the experience you need from GPT and Claude.\n",
+ "# Connect to OpenAI, Anthropic\n",
"\n",
"openai = OpenAI()\n",
"\n",
- "claude = anthropic.Anthropic()\n",
+ "claude = anthropic.Anthropic()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "425ed580-808d-429b-85b0-6cba50ca1d0c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This is the set up code for Gemini\n",
+ "# Having problems with Google Gemini setup? Then just ignore this cell; when we use Gemini, I'll give you an alternative that bypasses this library altogether\n",
"\n",
"google.generativeai.configure()"
]
@@ -308,7 +316,9 @@
"metadata": {},
"outputs": [],
"source": [
- "# The API for Gemini has a slightly different structure\n",
+ "# The API for Gemini has a slightly different structure.\n",
+ "# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n",
+ "# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\n",
"\n",
"gemini = google.generativeai.GenerativeModel(\n",
" model_name='gemini-1.5-flash',\n",
@@ -318,6 +328,28 @@
"print(response.text)"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "49009a30-037d-41c8-b874-127f61c4aa3a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# As an alternative way to use Gemini that bypasses Google's python API library,\n",
+ "# Google has recently released new endpoints that means you can use Gemini via the client libraries for OpenAI!\n",
+ "\n",
+ "gemini_via_openai_client = OpenAI(\n",
+ " api_key=google_api_key, \n",
+ " base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
+ ")\n",
+ "\n",
+ "response = gemini_via_openai_client.chat.completions.create(\n",
+ " model=\"gemini-1.5-flash\",\n",
+ " messages=prompts\n",
+ ")\n",
+ "print(response.choices[0].message.content)"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
@@ -534,7 +566,7 @@
"\n",
"Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
"\n",
- "Try doing this yourself before you look at the solutions.\n",
+ "Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above).\n",
"\n",
"## Additional exercise\n",
"\n",
@@ -584,7 +616,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week2/day2.ipynb b/week2/day2.ipynb
index 4c63192..bf5367f 100644
--- a/week2/day2.ipynb
+++ b/week2/day2.ipynb
@@ -186,6 +186,7 @@
"source": [
"# Adding share=True means that it can be accessed publically\n",
"# A more permanent hosting is available using a platform called Spaces from HuggingFace, which we will touch on next week\n",
+ "# NOTE: Some Anti-virus software and Corporate Firewalls might not like you using share=True. If you're at work on on a work network, I suggest skip this test.\n",
"\n",
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\").launch(share=True)"
]
@@ -565,7 +566,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week2/day3.ipynb b/week2/day3.ipynb
index 2be75a4..2dd936b 100644
--- a/week2/day3.ipynb
+++ b/week2/day3.ipynb
@@ -224,14 +224,16 @@
"metadata": {},
"outputs": [],
"source": [
+ "# Fixed a bug in this function brilliantly identified by student Gabor M.!\n",
+ "# I've also improved the structure of this function\n",
+ "\n",
"def chat(message, history):\n",
- " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
"\n",
+ " relevant_system_message = system_message\n",
" if 'belt' in message:\n",
- " messages.append({\"role\": \"system\", \"content\": \"For added context, the store does not sell belts, \\\n",
- "but be sure to point out other items on sale\"})\n",
+ " relevant_system_message += \" The store does not sell belts; if you are asked for belts, be sure to point out other items on sale.\"\n",
" \n",
- " messages.append({\"role\": \"user\", \"content\": message})\n",
+ " messages = [{\"role\": \"system\", \"content\": relevant_system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
"\n",
" stream = openai.chat.completions.create(model=MODEL, messages=messages, stream=True)\n",
"\n",
@@ -296,7 +298,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week2/day4.ipynb b/week2/day4.ipynb
index 06c3904..811d116 100644
--- a/week2/day4.ipynb
+++ b/week2/day4.ipynb
@@ -44,7 +44,12 @@
" print(\"OpenAI API Key not set\")\n",
" \n",
"MODEL = \"gpt-4o-mini\"\n",
- "openai = OpenAI()"
+ "openai = OpenAI()\n",
+ "\n",
+ "# As an alternative, if you'd like to use Ollama instead of OpenAI\n",
+ "# Check that Ollama is running for you locally (see week1/day2 exercise) then uncomment these next 2 lines\n",
+ "# MODEL = \"llama3.2\"\n",
+ "# openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n"
]
},
{
@@ -249,7 +254,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week2/day5.ipynb b/week2/day5.ipynb
index 8cfbc20..5722305 100644
--- a/week2/day5.ipynb
+++ b/week2/day5.ipynb
@@ -296,7 +296,7 @@
"id": "f4975b87-19e9-4ade-a232-9b809ec75c9a",
"metadata": {},
"source": [
- "## Audio\n",
+ "## Audio (NOTE - Audio is optional for this course - feel free to skip Audio if it causes trouble!)\n",
"\n",
"And let's make a function talker that uses OpenAI's speech model to generate Audio\n",
"\n",
@@ -412,12 +412,14 @@
"source": [
"# For Windows users\n",
"\n",
- "## if you get a permissions error writing to a temp file, then this code should work instead.\n",
+ "## First try the Mac version above, but if you get a permissions error writing to a temp file, then this code should work instead.\n",
"\n",
"A collaboration between students Mark M. and Patrick H. and Claude got this resolved!\n",
"\n",
"Below are 3 variations - hopefully one of them will work on your PC. If not, message me please!\n",
"\n",
+ "And for Mac people - all 3 of the below work on my Mac too - please try these if the Mac version gave you problems.\n",
+ "\n",
"## PC Variation 1"
]
},
@@ -697,7 +699,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week2/week2 EXERCISE.ipynb b/week2/week2 EXERCISE.ipynb
index 99d83cb..d97f5cb 100644
--- a/week2/week2 EXERCISE.ipynb
+++ b/week2/week2 EXERCISE.ipynb
@@ -43,7 +43,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week3/day1.ipynb b/week3/day1.ipynb
index b1bea54..2d76b3d 100644
--- a/week3/day1.ipynb
+++ b/week3/day1.ipynb
@@ -41,7 +41,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week3/day2.ipynb b/week3/day2.ipynb
index 9c4e01f..eab737e 100644
--- a/week3/day2.ipynb
+++ b/week3/day2.ipynb
@@ -41,7 +41,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week3/day3.ipynb b/week3/day3.ipynb
index 535f3ac..03c847e 100644
--- a/week3/day3.ipynb
+++ b/week3/day3.ipynb
@@ -37,7 +37,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week3/day4.ipynb b/week3/day4.ipynb
index 3cbd556..13aac73 100644
--- a/week3/day4.ipynb
+++ b/week3/day4.ipynb
@@ -31,7 +31,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week3/day5.ipynb b/week3/day5.ipynb
index 70690f7..d068f79 100644
--- a/week3/day5.ipynb
+++ b/week3/day5.ipynb
@@ -43,7 +43,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week4/day3.ipynb b/week4/day3.ipynb
index 481e869..69188c4 100644
--- a/week4/day3.ipynb
+++ b/week4/day3.ipynb
@@ -505,13 +505,13 @@
"outputs": [],
"source": [
"def execute_python(code):\n",
- " try:\n",
- " output = io.StringIO()\n",
- " sys.stdout = output\n",
- " exec(code)\n",
- " finally:\n",
- " sys.stdout = sys.__stdout__\n",
- " return output.getvalue()"
+ " try:\n",
+ " output = io.StringIO()\n",
+ " sys.stdout = output\n",
+ " exec(code)\n",
+ " finally:\n",
+ " sys.stdout = sys.__stdout__\n",
+ " return output.getvalue()"
]
},
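+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b5e1f2a3-4c6d-4e8f-9a0b-1c2d3e4f5a6b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Quick illustrative check (a sketch): execute_python redirects sys.stdout into a StringIO\n",
+ "# while exec-ing the code, so anything the code prints comes back as the return value.\n",
+ "execute_python(\"print(2 + 2)\")  # expected: '4\\n'"
+ ]
+ },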
{
@@ -581,14 +581,6 @@
"\n",
"ui.launch(inbrowser=True)"
]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "77a80857-4632-4de8-a28f-b614bcbe2f40",
- "metadata": {},
- "outputs": [],
- "source": []
}
],
"metadata": {
@@ -607,7 +599,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week4/day4.ipynb b/week4/day4.ipynb
index ea195fa..722a233 100644
--- a/week4/day4.ipynb
+++ b/week4/day4.ipynb
@@ -696,7 +696,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week4/optimized b/week4/optimized
index 526d5b0..c7745a1 100755
Binary files a/week4/optimized and b/week4/optimized differ
diff --git a/week4/optimized.cpp b/week4/optimized.cpp
index bdc9d5b..365cbee 100644
--- a/week4/optimized.cpp
+++ b/week4/optimized.cpp
@@ -1,51 +1,72 @@
 #include <iostream>
-#include <random>
+#include <vector>
 #include <chrono>
+#include <limits>
 #include <iomanip>
-// Function to generate random numbers using Mersenne Twister
-std::mt19937 gen(42);
+using namespace std;
+using namespace chrono;
+
+class LCG {
+private:
+ uint64_t value;
+ static const uint64_t a = 1664525;
+ static const uint64_t c = 1013904223;
+ static const uint64_t m = 1ULL << 32;
+
+public:
+ LCG(uint64_t seed) : value(seed) {}
+
+ uint64_t next() {
+ value = (a * value + c) % m;
+ return value;
+ }
+};
+
+int64_t max_subarray_sum(int n, uint64_t seed, int min_val, int max_val) {
+ LCG lcg(seed);
+ vector<int> random_numbers(n);
+ for (int i = 0; i < n; ++i) {
+ random_numbers[i] = lcg.next() % (max_val - min_val + 1) + min_val;
+ }
+
+ int64_t max_sum = numeric_limits<int64_t>::min();
+ int64_t current_sum = 0;
+ int64_t min_sum = 0;
-// Function to calculate maximum subarray sum
-int max_subarray_sum(int n, int min_val, int max_val) {
- std::uniform_int_distribution<> dis(min_val, max_val);
- int max_sum = std::numeric_limits<int>::min();
- int current_sum = 0;
for (int i = 0; i < n; ++i) {
- current_sum += dis(gen);
- if (current_sum > max_sum) {
- max_sum = current_sum;
- }
- if (current_sum < 0) {
- current_sum = 0;
- }
+ current_sum += random_numbers[i];
+ max_sum = max(max_sum, current_sum - min_sum);
+ min_sum = min(min_sum, current_sum);
}
+
return max_sum;
}
-// Function to calculate total maximum subarray sum
-int total_max_subarray_sum(int n, int initial_seed, int min_val, int max_val) {
- gen.seed(initial_seed);
- int total_sum = 0;
+int64_t total_max_subarray_sum(int n, uint64_t initial_seed, int min_val, int max_val) {
+ int64_t total_sum = 0;
+ LCG lcg(initial_seed);
for (int i = 0; i < 20; ++i) {
- total_sum += max_subarray_sum(n, min_val, max_val);
+ uint64_t seed = lcg.next();
+ total_sum += max_subarray_sum(n, seed, min_val, max_val);
}
return total_sum;
}
int main() {
- int n = 10000; // Number of random numbers
- int initial_seed = 42; // Initial seed for the Mersenne Twister
- int min_val = -10; // Minimum value of random numbers
- int max_val = 10; // Maximum value of random numbers
-
- // Timing the function
- auto start_time = std::chrono::high_resolution_clock::now();
- int result = total_max_subarray_sum(n, initial_seed, min_val, max_val);
- auto end_time = std::chrono::high_resolution_clock::now();
-
- std::cout << "Total Maximum Subarray Sum (20 runs): " << result << std::endl;
- std::cout << "Execution Time: " << std::setprecision(6) << std::fixed << std::chrono::duration(end_time - start_time).count() << " seconds" << std::endl;
+ const int n = 10000;
+ const uint64_t initial_seed = 42;
+ const int min_val = -10;
+ const int max_val = 10;
+
+ auto start_time = high_resolution_clock::now();
+ int64_t result = total_max_subarray_sum(n, initial_seed, min_val, max_val);
+ auto end_time = high_resolution_clock::now();
+
+ auto duration = duration_cast<microseconds>(end_time - start_time);
+
+ cout << "Total Maximum Subarray Sum (20 runs): " << result << endl;
+ cout << "Execution Time: " << fixed << setprecision(6) << duration.count() / 1e6 << " seconds" << endl;
return 0;
}
\ No newline at end of file
diff --git a/week5/community-contributions/day3 - extended for Obsidian files and separate ingestion.ipynb b/week5/community-contributions/day3 - extended for Obsidian files and separate ingestion.ipynb
index 2230c68..161eb8d 100644
--- a/week5/community-contributions/day3 - extended for Obsidian files and separate ingestion.ipynb
+++ b/week5/community-contributions/day3 - extended for Obsidian files and separate ingestion.ipynb
@@ -388,7 +388,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week5/community-contributions/day4 - taking advantage of separate ingestion.ipynb b/week5/community-contributions/day4 - taking advantage of separate ingestion.ipynb
index 188bc8a..bb16478 100644
--- a/week5/community-contributions/day4 - taking advantage of separate ingestion.ipynb
+++ b/week5/community-contributions/day4 - taking advantage of separate ingestion.ipynb
@@ -421,7 +421,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week5/day1.ipynb b/week5/day1.ipynb
index 1ccdd33..f4bc48e 100644
--- a/week5/day1.ipynb
+++ b/week5/day1.ipynb
@@ -256,7 +256,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week5/day2.ipynb b/week5/day2.ipynb
index 824c75b..8c19368 100644
--- a/week5/day2.ipynb
+++ b/week5/day2.ipynb
@@ -169,14 +169,6 @@
" print(chunk)\n",
" print(\"_________\")"
]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "6965971c-fb97-482c-a497-4e81a0ac83df",
- "metadata": {},
- "outputs": [],
- "source": []
}
],
"metadata": {
@@ -195,7 +187,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week5/day3.ipynb b/week5/day3.ipynb
index a092bbb..764f13c 100644
--- a/week5/day3.ipynb
+++ b/week5/day3.ipynb
@@ -352,7 +352,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week5/day4.5.ipynb b/week5/day4.5.ipynb
index 13de8d7..a02b9cd 100644
--- a/week5/day4.5.ipynb
+++ b/week5/day4.5.ipynb
@@ -214,7 +214,9 @@
"source": [
"## Visualizing the Vector Store\n",
"\n",
- "Let's take a minute to look at the documents and their embedding vectors to see what's going on."
+ "Let's take a minute to look at the documents and their embedding vectors to see what's going on.\n",
+ "\n",
+ "(As a sidenote, what we're really looking at here is the distribution of the Vectors generated by OpenAIEmbeddings, retrieved from FAISS. So there's no surprise that they look the same whether they are \"from\" FAISS or Chroma.)"
]
},
{
@@ -326,6 +328,17 @@
"print(result[\"answer\"])"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "987dadc5-5d09-4059-8f2e-733d66ecc696",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n",
+ "conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)"
+ ]
+ },
{
"cell_type": "markdown",
"id": "bbbcb659-13ce-47ab-8a5e-01b930494964",
@@ -387,7 +400,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week5/day4.ipynb b/week5/day4.ipynb
index d3d1ad0..43aa358 100644
--- a/week5/day4.ipynb
+++ b/week5/day4.ipynb
@@ -404,7 +404,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week5/day5.ipynb b/week5/day5.ipynb
index 141b518..5c29d40 100644
--- a/week5/day5.ipynb
+++ b/week5/day5.ipynb
@@ -50,7 +50,8 @@
"import numpy as np\n",
"import plotly.graph_objects as go\n",
"from langchain.memory import ConversationBufferMemory\n",
- "from langchain.chains import ConversationalRetrievalChain"
+ "from langchain.chains import ConversationalRetrievalChain\n",
+ "from langchain.embeddings import HuggingFaceEmbeddings"
]
},
{
@@ -147,6 +148,10 @@
"\n",
"embeddings = OpenAIEmbeddings()\n",
"\n",
+ "# If you would rather use the free Vector Embeddings from HuggingFace sentence-transformers\n",
+ "# Then uncomment this line instead\n",
+ "# embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\")\n",
+ "\n",
"# Delete if already exists\n",
"\n",
"if os.path.exists(db_name):\n",
@@ -289,6 +294,9 @@
"# create a new Chat with OpenAI\n",
"llm = ChatOpenAI(temperature=0.7, model_name=MODEL)\n",
"\n",
+ "# Alternative - if you'd like to use Ollama locally, uncomment this line instead\n",
+ "# llm = ChatOpenAI(temperature=0.7, model_name='llama3.2', base_url='http://localhost:11434/v1', api_key='ollama')\n",
+ "\n",
"# set up the conversation memory for the chat\n",
"memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n",
"\n",
@@ -427,7 +435,7 @@
"metadata": {},
"outputs": [],
"source": [
- "view = gr.ChatInterface(chat).launch()"
+ "view = gr.ChatInterface(chat, type=\"messages\").launch(inbrowser=True)"
]
},
{
@@ -465,7 +473,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week6/day1.ipynb b/week6/day1.ipynb
index 0d50223..c424656 100644
--- a/week6/day1.ipynb
+++ b/week6/day1.ipynb
@@ -419,7 +419,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week6/day3.ipynb b/week6/day3.ipynb
index 4132ae3..62345ac 100644
--- a/week6/day3.ipynb
+++ b/week6/day3.ipynb
@@ -893,7 +893,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week6/day5.ipynb b/week6/day5.ipynb
index 2c4d61e..1886310 100644
--- a/week6/day5.ipynb
+++ b/week6/day5.ipynb
@@ -547,7 +547,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week8/agents/messaging_agent.py b/week8/agents/messaging_agent.py
index 70e74d0..7494703 100644
--- a/week8/agents/messaging_agent.py
+++ b/week8/agents/messaging_agent.py
@@ -1,10 +1,11 @@
import os
-from twilio.rest import Client
+# from twilio.rest import Client
from agents.deals import Opportunity
import http.client
import urllib
from agents.agent import Agent
+# Uncomment the Twilio lines if you wish to use Twilio
DO_TEXT = False
DO_PUSH = True
@@ -26,7 +27,7 @@ class MessagingAgent(Agent):
auth_token = os.getenv('TWILIO_AUTH_TOKEN', 'your-auth-if-not-using-env')
self.me_from = os.getenv('TWILIO_FROM', 'your-phone-number-if-not-using-env')
self.me_to = os.getenv('MY_PHONE_NUMBER', 'your-phone-number-if-not-using-env')
- self.client = Client(account_sid, auth_token)
+ # self.client = Client(account_sid, auth_token)
self.log("Messaging Agent has initialized Twilio")
if DO_PUSH:
self.pushover_user = os.getenv('PUSHOVER_USER', 'your-pushover-user-if-not-using-env')
diff --git a/week8/day1.ipynb b/week8/day1.ipynb
index 4e5f0ab..0836b59 100644
--- a/week8/day1.ipynb
+++ b/week8/day1.ipynb
@@ -317,7 +317,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week8/day2.0.ipynb b/week8/day2.0.ipynb
index d93b1f3..27424e6 100644
--- a/week8/day2.0.ipynb
+++ b/week8/day2.0.ipynb
@@ -264,7 +264,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week8/day2.1.ipynb b/week8/day2.1.ipynb
index 0b29b76..fac26d8 100644
--- a/week8/day2.1.ipynb
+++ b/week8/day2.1.ipynb
@@ -174,7 +174,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week8/day2.2.ipynb b/week8/day2.2.ipynb
index 6ef641b..f55ae2a 100644
--- a/week8/day2.2.ipynb
+++ b/week8/day2.2.ipynb
@@ -166,7 +166,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week8/day2.3.ipynb b/week8/day2.3.ipynb
index 5b2b970..bb9a217 100644
--- a/week8/day2.3.ipynb
+++ b/week8/day2.3.ipynb
@@ -391,7 +391,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week8/day2.4.ipynb b/week8/day2.4.ipynb
index 6333ae6..7d357e2 100644
--- a/week8/day2.4.ipynb
+++ b/week8/day2.4.ipynb
@@ -400,7 +400,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week8/day3.ipynb b/week8/day3.ipynb
index 01936d3..9effc96 100644
--- a/week8/day3.ipynb
+++ b/week8/day3.ipynb
@@ -227,7 +227,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week8/day4.ipynb b/week8/day4.ipynb
index 26d6132..4cd5d8f 100644
--- a/week8/day4.ipynb
+++ b/week8/day4.ipynb
@@ -133,7 +133,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week8/day5.ipynb b/week8/day5.ipynb
index 400a11e..e30130d 100644
--- a/week8/day5.ipynb
+++ b/week8/day5.ipynb
@@ -133,12 +133,32 @@
"And now we'll move to the price_is_right.py code, followed by price_is_right_final.py"
]
},
+ {
+ "cell_type": "markdown",
+ "id": "d783af8a-08a8-4e59-886a-7ca32f16bcf5",
+ "metadata": {},
+ "source": [
+ "# Running the final product\n",
+ "\n",
+ "## Just hit shift + enter in the next cell, and let the deals flow in!!"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
"id": "48506465-1c7a-433f-a665-b277a8b4665c",
"metadata": {},
"outputs": [],
+ "source": [
+ "!python price_is_right_final.py"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d468291f-abe2-4fd7-97a6-43c714292973",
+ "metadata": {},
+ "outputs": [],
"source": []
}
],
@@ -158,7 +178,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week8/memory.json b/week8/memory.json
index 2fb4bd1..8705760 100644
--- a/week8/memory.json
+++ b/week8/memory.json
@@ -16,5 +16,23 @@
},
"estimate": 930.8824204895075,
"discount": 225.88242048950747
+ },
+ {
+ "deal": {
+ "product_description": "The Insignia Class F30 Series NS-55F301NA25 is a 55\" 4K HDR UHD Smart TV with a native resolution of 3840x2160. Featuring HDR support, it enhances color and contrast for a more dynamic viewing experience. The TV integrates seamlessly with Amazon Fire TV, working with both Amazon Alexa and Google Home for voice control. It offers three HDMI ports for multiple device connections, making it a perfect entertainment hub for your living space.",
+ "price": 200.0,
+ "url": "https://www.dealnews.com/products/Insignia/Insignia-Class-F30-Series-NS-55-F301-NA25-55-4-K-HDR-LED-UHD-Smart-TV/467523.html?iref=rss-f1912"
+ },
+ "estimate": 669.1921927283588,
+ "discount": 469.1921927283588
+ },
+ {
+ "deal": {
+ "product_description": "The Samsung 27-Cu. Ft. Mega Capacity 3-Door French Door Counter Depth Refrigerator combines style with spacious organization. This model features a dual auto ice maker, which ensures you always have ice on hand, and adjustable shelves that provide versatile storage options for your groceries. Designed with a sleek, fingerprint resistant finish, it not only looks modern but also simplifies cleaning. With its generous capacity, this refrigerator is perfect for large households or those who love to entertain.",
+ "price": 1299.0,
+ "url": "https://www.dealnews.com/products/Samsung/Samsung-27-Cu-Ft-Mega-Capacity-3-Door-French-Door-Counter-Depth-Refrigerator/454702.html?iref=rss-c196"
+ },
+ "estimate": 2081.647127763905,
+ "discount": 782.6471277639048
}
]
\ No newline at end of file