From 92de8d33afacd64a7bb449803971d4451e1a838d Mon Sep 17 00:00:00 2001 From: danielquillanroxas Date: Thu, 30 Jan 2025 15:13:30 +0300 Subject: [PATCH 1/2] Added llama for day1 and tones for day2 (week2) --- .../day1-3way-with-llama3.2.ipynb | 727 ++++++++++++++++++ .../day2-different-tones.ipynb | 575 ++++++++++++++ 2 files changed, 1302 insertions(+) create mode 100644 week2/community-contributions/day1-3way-with-llama3.2.ipynb create mode 100644 week2/community-contributions/day2-different-tones.ipynb diff --git a/week2/community-contributions/day1-3way-with-llama3.2.ipynb b/week2/community-contributions/day1-3way-with-llama3.2.ipynb new file mode 100644 index 0000000..835ae87 --- /dev/null +++ b/week2/community-contributions/day1-3way-with-llama3.2.ipynb @@ -0,0 +1,727 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927", + "metadata": {}, + "source": [ + "# Welcome to Week 2!\n", + "\n", + "## Frontier Model APIs\n", + "\n", + "In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with the OpenAI's API.\n", + "\n", + "Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI." + ] + }, + { + "cell_type": "markdown", + "id": "2b268b6e-0ba4-461e-af86-74a41f4d681f", + "metadata": {}, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
\n", + " \n", + " \n", + "

Important Note - Please read me

\n", + " I'm continually improving these labs, adding more examples and exercises.\n", + " At the start of each week, it's worth checking you have the latest code.
\n", + " First do a git pull and merge your changes as needed. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!

\n", + " After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:
\n", + " conda env update --f environment.yml
\n", + " Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):
\n", + " pip install -r requirements.txt\n", + "
Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n", + "
\n", + "
\n", + "\n", + " \n", + " \n", + " \n", + " \n", + "
\n", + " \n", + " \n", + "

Reminder about the resources page

\n", + " Here's a link to resources for the course. This includes links to all the slides.
\n", + " https://edwarddonner.com/2024/11/13/llm-engineering-resources/
\n", + " Please keep this bookmarked, and I'll continue to add more useful links there over time.\n", + "
\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "85cfe275-4705-4d30-abea-643fbddf1db0", + "metadata": {}, + "source": [ + "## Setting up your keys\n", + "\n", + "If you haven't done so already, you could now create API keys for Anthropic and Google in addition to OpenAI.\n", + "\n", + "**Please note:** if you'd prefer to avoid extra API costs, feel free to skip setting up Anthopic and Google! You can see me do it, and focus on OpenAI for the course. You could also substitute Anthropic and/or Google for Ollama, using the exercise you did in week 1.\n", + "\n", + "For OpenAI, visit https://openai.com/api/ \n", + "For Anthropic, visit https://console.anthropic.com/ \n", + "For Google, visit https://ai.google.dev/gemini-api \n", + "\n", + "When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n", + "\n", + "```\n", + "OPENAI_API_KEY=xxxx\n", + "ANTHROPIC_API_KEY=xxxx\n", + "GOOGLE_API_KEY=xxxx\n", + "```\n", + "\n", + "Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import anthropic\n", + "from IPython.display import Markdown, display, update_display" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36", + "metadata": {}, + "outputs": [], + "source": [ + "# import for google\n", + "# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n", + "# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n", + "\n", + "import google.generativeai" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba", + "metadata": {}, + "outputs": [], + "source": [ + "# Load environment variables in a file called .env\n", + "# Print the key prefixes to help with any debugging\n", + "\n", + "load_dotenv()\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", + "google_api_key = os.getenv('GOOGLE_API_KEY')\n", + "\n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")\n", + " \n", + "if anthropic_api_key:\n", + " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", + "else:\n", + " print(\"Anthropic API Key not set\")\n", + "\n", + "if google_api_key:\n", + " print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", + "else:\n", + " print(\"Google API Key not set\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0", + "metadata": {}, + "outputs": [], + "source": [ + "# Connect to OpenAI, Anthropic\n", + "\n", + "openai = OpenAI()\n", + 
"\n", + "claude = anthropic.Anthropic()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "425ed580-808d-429b-85b0-6cba50ca1d0c", + "metadata": {}, + "outputs": [], + "source": [ + "# This is the set up code for Gemini\n", + "# Having problems with Google Gemini setup? Then just ignore this cell; when we use Gemini, I'll give you an alternative that bypasses this library altogether\n", + "\n", + "google.generativeai.configure()" + ] + }, + { + "cell_type": "markdown", + "id": "42f77b59-2fb1-462a-b90d-78994e4cef33", + "metadata": {}, + "source": [ + "## Asking LLMs to tell a joke\n", + "\n", + "It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n", + "Later we will be putting LLMs to better use!\n", + "\n", + "### What information is included in the API\n", + "\n", + "Typically we'll pass to the API:\n", + "- The name of the model that should be used\n", + "- A system message that gives overall context for the role the LLM is playing\n", + "- A user message that provides the actual prompt\n", + "\n", + "There are other parameters that can be used, including **temperature** which is typically between 0 and 1; higher for more random output; lower for more focused and deterministic." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "378a0296-59a2-45c6-82eb-941344d3eeff", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"You are an assistant that is great at telling jokes\"\n", + "user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4", + "metadata": {}, + "outputs": [], + "source": [ + "prompts = [\n", + " {\"role\": \"system\", \"content\": system_message},\n", + " {\"role\": \"user\", \"content\": user_prompt}\n", + " ]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397", + "metadata": {}, + "outputs": [], + "source": [ + "# GPT-3.5-Turbo\n", + "\n", + "completion = openai.chat.completions.create(model='gpt-3.5-turbo', messages=prompts)\n", + "print(completion.choices[0].message.content)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf", + "metadata": {}, + "outputs": [], + "source": [ + "# GPT-4o-mini\n", + "# Temperature setting controls creativity\n", + "\n", + "completion = openai.chat.completions.create(\n", + " model='gpt-4o-mini',\n", + " messages=prompts,\n", + " temperature=0.7\n", + ")\n", + "print(completion.choices[0].message.content)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26", + "metadata": {}, + "outputs": [], + "source": [ + "# GPT-4o\n", + "\n", + "completion = openai.chat.completions.create(\n", + " model='gpt-4o',\n", + " messages=prompts,\n", + " temperature=0.4\n", + ")\n", + "print(completion.choices[0].message.content)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76", + "metadata": {}, + "outputs": [], + "source": [ + "# Claude 3.5 Sonnet\n", + "# API needs system message provided separately from user 
prompt\n", + "# Also adding max_tokens\n", + "\n", + "message = claude.messages.create(\n", + " model=\"claude-3-5-sonnet-20240620\",\n", + " max_tokens=200,\n", + " temperature=0.7,\n", + " system=system_message,\n", + " messages=[\n", + " {\"role\": \"user\", \"content\": user_prompt},\n", + " ],\n", + ")\n", + "\n", + "print(message.content[0].text)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f", + "metadata": {}, + "outputs": [], + "source": [ + "# Claude 3.5 Sonnet again\n", + "# Now let's add in streaming back results\n", + "\n", + "result = claude.messages.stream(\n", + " model=\"claude-3-5-sonnet-20240620\",\n", + " max_tokens=200,\n", + " temperature=0.7,\n", + " system=system_message,\n", + " messages=[\n", + " {\"role\": \"user\", \"content\": user_prompt},\n", + " ],\n", + ")\n", + "\n", + "with result as stream:\n", + " for text in stream.text_stream:\n", + " print(text, end=\"\", flush=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad", + "metadata": {}, + "outputs": [], + "source": [ + "# The API for Gemini has a slightly different structure.\n", + "# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n", + "# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\n", + "\n", + "gemini = google.generativeai.GenerativeModel(\n", + " model_name='gemini-1.5-flash',\n", + " system_instruction=system_message\n", + ")\n", + "response = gemini.generate_content(user_prompt)\n", + "print(response.text)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "49009a30-037d-41c8-b874-127f61c4aa3a", + "metadata": {}, + "outputs": [], + "source": [ + "# As an alternative way to use Gemini that bypasses Google's python API library,\n", + "# Google has recently released new endpoints that means you can use Gemini via the client libraries for 
OpenAI!\n", + "\n", + "gemini_via_openai_client = OpenAI(\n", + " api_key=google_api_key, \n", + " base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n", + ")\n", + "\n", + "response = gemini_via_openai_client.chat.completions.create(\n", + " model=\"gemini-1.5-flash\",\n", + " messages=prompts\n", + ")\n", + "print(response.choices[0].message.content)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "83ddb483-4f57-4668-aeea-2aade3a9e573", + "metadata": {}, + "outputs": [], + "source": [ + "# To be serious! GPT-4o-mini with the original question\n", + "\n", + "prompts = [\n", + " {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n", + " {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n", + " ]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "749f50ab-8ccd-4502-a521-895c3f0808a2", + "metadata": {}, + "outputs": [], + "source": [ + "# Have it stream back results in markdown\n", + "\n", + "stream = openai.chat.completions.create(\n", + " model='gpt-4o',\n", + " messages=prompts,\n", + " temperature=0.2,\n", + " stream=True\n", + ")\n", + "\n", + "reply = \"\"\n", + "display_handle = display(Markdown(\"\"), display_id=True)\n", + "for chunk in stream:\n", + " reply += chunk.choices[0].delta.content or ''\n", + " reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n", + " update_display(Markdown(reply), display_id=display_handle.display_id)" + ] + }, + { + "cell_type": "markdown", + "id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f", + "metadata": {}, + "source": [ + "## And now for some fun - an adversarial conversation between Chatbots..\n", + "\n", + "You're already familar with prompts being organized into lists like:\n", + "\n", + "```\n", + "[\n", + " {\"role\": \"system\", \"content\": \"system message here\"},\n", + " {\"role\": \"user\", \"content\": \"user 
prompt here\"}\n", + "]\n", + "```\n", + "\n", + "In fact this structure can be used to reflect a longer conversation history:\n", + "\n", + "```\n", + "[\n", + " {\"role\": \"system\", \"content\": \"system message here\"},\n", + " {\"role\": \"user\", \"content\": \"first user prompt here\"},\n", + " {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n", + " {\"role\": \"user\", \"content\": \"the new user prompt\"},\n", + "]\n", + "```\n", + "\n", + "And we can use this approach to engage in a longer interaction with history." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b", + "metadata": {}, + "outputs": [], + "source": [ + "# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n", + "# We're using cheap versions of models so the costs will be minimal\n", + "\n", + "gpt_model = \"gpt-4o-mini\"\n", + "claude_model = \"claude-3-haiku-20240307\"\n", + "\n", + "gpt_system = \"You are a chatbot who is very argumentative; \\\n", + "you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n", + "\n", + "claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n", + "everything the other person says, or find common ground. 
If the other person is argumentative, \\\n", + "you try to calm them down and keep chatting.\"\n", + "\n", + "gpt_messages = [\"Hi there\"]\n", + "claude_messages = [\"Hi\"]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1df47dc7-b445-4852-b21b-59f0e6c2030f", + "metadata": {}, + "outputs": [], + "source": [ + "def call_gpt():\n", + " messages = [{\"role\": \"system\", \"content\": gpt_system}]\n", + " for gpt, claude, llama in zip(gpt_messages, claude_messages, llama_messages):\n", + " messages.append({\"role\": \"assistant\", \"content\": gpt})\n", + " combined = llama + claude\n", + " messages.append({\"role\": \"user\", \"content\": combined})\n", + " completion = openai.chat.completions.create(\n", + " model=gpt_model,\n", + " messages=messages\n", + " )\n", + " return completion.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606", + "metadata": {}, + "outputs": [], + "source": [ + "call_gpt()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690", + "metadata": {}, + "outputs": [], + "source": [ + "def call_claude():\n", + " messages = []\n", + " for gpt, claude_message in zip(gpt_messages, claude_messages):\n", + " messages.append({\"role\": \"user\", \"content\": gpt})\n", + " messages.append({\"role\": \"assistant\", \"content\": claude_message})\n", + " # messages.append(\"role\": \"moderator\", \"content\": llama_message)\n", + " messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n", + " message = claude.messages.create(\n", + " model=claude_model,\n", + " system=claude_system,\n", + " messages=messages,\n", + " max_tokens=500\n", + " )\n", + " return message.content[0].text" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "01395200-8ae9-41f8-9a04-701624d3fd26", + "metadata": {}, + "outputs": [], + "source": [ + "call_claude()" + ] + }, + { + 
"cell_type": "code", + "execution_count": null, + "id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae", + "metadata": {}, + "outputs": [], + "source": [ + "call_gpt()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd", + "metadata": {}, + "outputs": [], + "source": [ + "gpt_messages = [\"Hi there\"]\n", + "claude_messages = [\"Hi\"]\n", + "\n", + "print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n", + "print(f\"Claude:\\n{claude_messages[0]}\\n\")\n", + "\n", + "for i in range(5):\n", + " gpt_next = call_gpt()\n", + " print(f\"GPT:\\n{gpt_next}\\n\")\n", + " gpt_messages.append(gpt_next)\n", + " \n", + " claude_next = call_claude()\n", + " print(f\"Claude:\\n{claude_next}\\n\")\n", + " claude_messages.append(claude_next)" + ] + }, + { + "cell_type": "markdown", + "id": "1d10e705-db48-4290-9dc8-9efdb4e31323", + "metadata": {}, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
\n", + " \n", + " \n", + "

Before you continue

\n", + " \n", + " Be sure you understand how the conversation above is working, and in particular how the messages list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?
\n", + "
\n", + "
" + ] + }, + { + "cell_type": "markdown", + "id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac", + "metadata": {}, + "source": [ + "# More advanced exercises\n", + "\n", + "Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n", + "\n", + "Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above).\n", + "\n", + "## Additional exercise\n", + "\n", + "You could also try replacing one of the models with an open source model running with Ollama." + ] + }, + { + "cell_type": "markdown", + "id": "446c81e3-b67e-4cd9-8113-bc3092b93063", + "metadata": {}, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
\n", + " \n", + " \n", + "

Business relevance

\n", + " This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c23224f6-7008-44ed-a57f-718975f4e291", + "metadata": {}, + "outputs": [], + "source": [ + "!ollama pull llama3.2" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cbbddf71-1473-42fe-b733-2bb42ea77333", + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "OLLAMA_API = \"http://localhost:11434/api/chat\"\n", + "HEADERS = {\"Content-Type\": \"application/json\"}\n", + "import ollama\n", + "\n", + "llama_model = \"llama3.2\"\n", + "\n", + "llama_system = \"You are a chatbot who is very pacifist; \\\n", + "you will try to resolve or neutralize any disagreement between other chatbots. Speak like a teacher or someone in authority.\"\n", + "\n", + "llama_messages = [\"Hello.\"]\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f629d2b2-ba20-4bfe-a2e5-bbe537ca46a2", + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "def call_llama():\n", + " combined_messages = gpt_messages[-1] + claude_messages[-1]\n", + " messages = [{\"role\": \"system\", \"content\": llama_system}]\n", + " for comb, llama in zip(combined_messages, llama_messages):\n", + " messages.append({\"role\": \"assistant\", \"content\": llama})\n", + " messages.append({\"role\": \"user\", \"content\": combined_messages})\n", + " completion = ollama.chat(\n", + " model=llama_model,\n", + " messages=messages\n", + " )\n", + " return completion['message']['content']" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "219b6af8-3166-4059-b79e-cf19af7ed1e9", + "metadata": {}, + "outputs": [], + "source": [ + "print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n", + "print(f\"Claude:\\n{claude_messages[0]}\\n\")\n", + "print(f\"Llama:\\n{llama_messages[0]}\\n\" )\n", + "\n", + "for i in range(3):\n", + " gpt_next = call_gpt()\n", + " print(f\"GPT:\\n{gpt_next}\\n\")\n", + " gpt_messages.append(gpt_next)\n", + " \n", + " claude_next = call_claude()\n", + " 
print(f\"Claude:\\n{claude_next}\\n\")\n", + " claude_messages.append(claude_next)\n", + "\n", + " llama_next = call_llama()\n", + " print(f\"Llama:\\n{llama_next}\\n\")\n", + " llama_messages.append(llama_next)\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "6cb3a931-522c-49a9-9bd8-663333f41b1a", + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "2cdfdc32-1ca4-406e-9328-81af26fd503b", + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "04f60158-633b-43ff-afbd-396be79501e6", + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "eb0faf0d-fb7e-4bc5-9746-30f19a0b9ae1", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/week2/community-contributions/day2-different-tones.ipynb b/week2/community-contributions/day2-different-tones.ipynb new file mode 100644 index 0000000..9b14e3a --- /dev/null +++ b/week2/community-contributions/day2-different-tones.ipynb @@ -0,0 +1,575 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "8b0e11f2-9ea4-48c2-b8d2-d0a4ba967827", + "metadata": {}, + "source": [ + "# Gradio Day!\n", + "\n", + "Today we will build User Interfaces using the outrageously simple Gradio framework.\n", + "\n", + "Prepare for joy!\n", + "\n", + "Please note: your Gradio screens may appear in 'dark mode' or 'light mode' depending on your computer settings." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c44c5494-950d-4d2f-8d4f-b87b57c5b330", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import requests\n", + "from bs4 import BeautifulSoup\n", + "from typing import List\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import google.generativeai\n", + "import anthropic" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d1715421-cead-400b-99af-986388a97aff", + "metadata": {}, + "outputs": [], + "source": [ + "import gradio as gr # oh yeah!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "337d5dfc-0181-4e3b-8ab9-e78e0c3f657b", + "metadata": {}, + "outputs": [], + "source": [ + "# Load environment variables in a file called .env\n", + "# Print the key prefixes to help with any debugging\n", + "\n", + "load_dotenv()\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", + "google_api_key = os.getenv('GOOGLE_API_KEY')\n", + "\n", + "if openai_api_key:\n", + " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", + "else:\n", + " print(\"OpenAI API Key not set\")\n", + " \n", + "if anthropic_api_key:\n", + " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", + "else:\n", + " print(\"Anthropic API Key not set\")\n", + "\n", + "if google_api_key:\n", + " print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", + "else:\n", + " print(\"Google API Key not set\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "22586021-1795-4929-8079-63f5bb4edd4c", + "metadata": {}, + "outputs": [], + "source": [ + "# Connect to OpenAI, Anthropic and Google; comment out the Claude or Google lines if you're not using them\n", + "\n", + "openai = OpenAI()\n", + "\n", + "claude = anthropic.Anthropic()\n", + "\n", + "google.generativeai.configure()" + ] + }, + { + 
"cell_type": "code", + "execution_count": null, + "id": "b16e6021-6dc4-4397-985a-6679d6c8ffd5", + "metadata": {}, + "outputs": [], + "source": [ + "# A generic system message - no more snarky adversarial AIs!\n", + "\n", + "system_message = \"You are a helpful assistant\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "02ef9b69-ef31-427d-86d0-b8c799e1c1b1", + "metadata": {}, + "outputs": [], + "source": [ + "# Let's wrap a call to GPT-4o-mini in a simple function\n", + "\n", + "def message_gpt(prompt):\n", + " messages = [\n", + " {\"role\": \"system\", \"content\": system_message},\n", + " {\"role\": \"user\", \"content\": prompt}\n", + " ]\n", + " completion = openai.chat.completions.create(\n", + " model='gpt-4o-mini',\n", + " messages=messages,\n", + " )\n", + " return completion.choices[0].message.content" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "aef7d314-2b13-436b-b02d-8de3b72b193f", + "metadata": {}, + "outputs": [], + "source": [ + "message_gpt(\"What is today's date?\")" + ] + }, + { + "cell_type": "markdown", + "id": "f94013d1-4f27-4329-97e8-8c58db93636a", + "metadata": {}, + "source": [ + "## User Interface time!" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bc664b7a-c01d-4fea-a1de-ae22cdd5141a", + "metadata": {}, + "outputs": [], + "source": [ + "# here's a simple function\n", + "\n", + "def shout(text):\n", + " print(f\"Shout has been called with input {text}\")\n", + " return text.upper()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "083ea451-d3a0-4d13-b599-93ed49b975e4", + "metadata": {}, + "outputs": [], + "source": [ + "shout(\"hello\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "08f1f15a-122e-4502-b112-6ee2817dda32", + "metadata": {}, + "outputs": [], + "source": [ + "# The simplicty of gradio. 
This might appear in \"light mode\" - I'll show you how to make this in dark mode later.\n", + "\n", + "gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\").launch()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c9a359a4-685c-4c99-891c-bb4d1cb7f426", + "metadata": {}, + "outputs": [], + "source": [ + "# Adding share=True means that it can be accessed publically\n", + "# A more permanent hosting is available using a platform called Spaces from HuggingFace, which we will touch on next week\n", + "# NOTE: Some Anti-virus software and Corporate Firewalls might not like you using share=True. If you're at work on on a work network, I suggest skip this test.\n", + "\n", + "gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\").launch(share=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cd87533a-ff3a-4188-8998-5bedd5ba2da3", + "metadata": {}, + "outputs": [], + "source": [ + "# Adding inbrowser=True opens up a new browser window automatically\n", + "\n", + "gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\").launch(inbrowser=True)" + ] + }, + { + "cell_type": "markdown", + "id": "b42ec007-0314-48bf-84a4-a65943649215", + "metadata": {}, + "source": [ + "## Forcing dark mode\n", + "\n", + "Gradio appears in light mode or dark mode depending on the settings of the browser and computer. There is a way to force gradio to appear in dark mode, but Gradio recommends against this as it should be a user preference (particularly for accessibility reasons). But if you wish to force dark mode for your screens, below is how to do it." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e8129afa-532b-4b15-b93c-aa9cca23a546", + "metadata": {}, + "outputs": [], + "source": [ + "# Define this variable and then pass js=force_dark_mode when creating the Interface\n", + "\n", + "force_dark_mode = \"\"\"\n", + "function refresh() {\n", + " const url = new URL(window.location);\n", + " if (url.searchParams.get('__theme') !== 'dark') {\n", + " url.searchParams.set('__theme', 'dark');\n", + " window.location.href = url.href;\n", + " }\n", + "}\n", + "\"\"\"\n", + "gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\", js=force_dark_mode).launch()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3cc67b26-dd5f-406d-88f6-2306ee2950c0", + "metadata": {}, + "outputs": [], + "source": [ + "# Inputs and Outputs\n", + "\n", + "view = gr.Interface(\n", + " fn=shout,\n", + " inputs=[gr.Textbox(label=\"Your message:\", lines=6)],\n", + " outputs=[gr.Textbox(label=\"Response:\", lines=8)],\n", + " flagging_mode=\"never\"\n", + ")\n", + "view.launch()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f235288e-63a2-4341-935b-1441f9be969b", + "metadata": {}, + "outputs": [], + "source": [ + "# And now - changing the function from \"shout\" to \"message_gpt\"\n", + "\n", + "view = gr.Interface(\n", + " fn=message_gpt,\n", + " inputs=[gr.Textbox(label=\"Your message:\", lines=6)],\n", + " outputs=[gr.Textbox(label=\"Response:\", lines=8)],\n", + " flagging_mode=\"never\"\n", + ")\n", + "view.launch()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "af9a3262-e626-4e4b-80b0-aca152405e63", + "metadata": {}, + "outputs": [], + "source": [ + "# Let's use Markdown\n", + "# Are you wondering why it makes any difference to set system_message when it's not referred to in the code below it?\n", + "# I'm taking advantage of system_message being a global variable, used back in the message_gpt function (go take a 
look)\n", + "# Not a great software engineering practice, but quite sommon during Jupyter Lab R&D!\n", + "\n", + "system_message = \"You are a helpful assistant that responds in markdown\"\n", + "\n", + "view = gr.Interface(\n", + " fn=message_gpt,\n", + " inputs=[gr.Textbox(label=\"Your message:\")],\n", + " outputs=[gr.Markdown(label=\"Response:\")],\n", + " flagging_mode=\"never\"\n", + ")\n", + "view.launch()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "88c04ebf-0671-4fea-95c9-bc1565d4bb4f", + "metadata": {}, + "outputs": [], + "source": [ + "# Let's create a call that streams back results\n", + "# If you'd like a refresher on Generators (the \"yield\" keyword),\n", + "# Please take a look at the Intermediate Python notebook in week1 folder.\n", + "\n", + "def stream_gpt(prompt):\n", + " messages = [\n", + " {\"role\": \"system\", \"content\": system_message},\n", + " {\"role\": \"user\", \"content\": prompt}\n", + " ]\n", + " stream = openai.chat.completions.create(\n", + " model='gpt-4o-mini',\n", + " messages=messages,\n", + " stream=True\n", + " )\n", + " result = \"\"\n", + " for chunk in stream:\n", + " result += chunk.choices[0].delta.content or \"\"\n", + " yield result" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0bb1f789-ff11-4cba-ac67-11b815e29d09", + "metadata": {}, + "outputs": [], + "source": [ + "view = gr.Interface(\n", + " fn=stream_gpt,\n", + " inputs=[gr.Textbox(label=\"Your message:\")],\n", + " outputs=[gr.Markdown(label=\"Response:\")],\n", + " flagging_mode=\"never\"\n", + ")\n", + "view.launch()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "bbc8e930-ba2a-4194-8f7c-044659150626", + "metadata": {}, + "outputs": [], + "source": [ + "def stream_claude(prompt):\n", + " result = claude.messages.stream(\n", + " model=\"claude-3-haiku-20240307\",\n", + " max_tokens=1000,\n", + " temperature=0.7,\n", + " system=system_message,\n", + " messages=[\n", + " {\"role\": 
\"user\", \"content\": prompt},\n", + " ],\n", + " )\n", + " response = \"\"\n", + " with result as stream:\n", + " for text in stream.text_stream:\n", + " response += text or \"\"\n", + " yield response" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a0066ffd-196e-4eaf-ad1e-d492958b62af", + "metadata": {}, + "outputs": [], + "source": [ + "view = gr.Interface(\n", + " fn=stream_claude,\n", + " inputs=[gr.Textbox(label=\"Your message:\")],\n", + " outputs=[gr.Markdown(label=\"Response:\")],\n", + " flagging_mode=\"never\"\n", + ")\n", + "view.launch()" + ] + }, + { + "cell_type": "markdown", + "id": "bc5a70b9-2afe-4a7c-9bed-2429229e021b", + "metadata": {}, + "source": [ + "## Minor improvement\n", + "\n", + "I've made a small improvement to this code.\n", + "\n", + "Previously, it had these lines:\n", + "\n", + "```\n", + "for chunk in result:\n", + " yield chunk\n", + "```\n", + "\n", + "There's actually a more elegant way to achieve this (which Python people might call more 'Pythonic'):\n", + "\n", + "`yield from result`\n", + "\n", + "I cover this in more detail in the Intermediate Python notebook in the week1 folder - take a look if you'd like more." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0087623a-4e31-470b-b2e6-d8d16fc7bcf5", + "metadata": {}, + "outputs": [], + "source": [ + "def stream_model(prompt, model):\n", + " if model==\"GPT\":\n", + " result = stream_gpt(prompt)\n", + " elif model==\"Claude\":\n", + " result = stream_claude(prompt)\n", + " else:\n", + " raise ValueError(\"Unknown model\")\n", + " yield from result" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "8d8ce810-997c-4b6a-bc4f-1fc847ac8855", + "metadata": {}, + "outputs": [], + "source": [ + "view = gr.Interface(\n", + " fn=stream_model,\n", + " inputs=[gr.Textbox(label=\"Your message:\"), gr.Dropdown([\"GPT\", \"Claude\"], label=\"Select model\", value=\"Claude\")],\n", + " outputs=[gr.Markdown(label=\"Response:\")],\n", + " flagging_mode=\"never\"\n", + ")\n", + "view.launch()" + ] + }, + { + "cell_type": "markdown", + "id": "d933865b-654c-4b92-aa45-cf389f1eda3d", + "metadata": {}, + "source": [ + "# Building a company brochure generator\n", + "\n", + "Now you know how - it's simple!" + ] + }, + { + "cell_type": "markdown", + "id": "92d7c49b-2e0e-45b3-92ce-93ca9f962ef4", + "metadata": {}, + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
\n", + " \n", + " \n", + "

Before you read the next few cells

\n", + " \n", + " Try to do this yourself - go back to the company brochure in week1, day5 and add a Gradio UI to the end. Then come and look at the solution.\n", + " \n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1626eb2e-eee8-4183-bda5-1591b58ae3cf", + "metadata": {}, + "outputs": [], + "source": [ + "# A class to represent a Webpage\n", + "\n", + "class Website:\n", + " url: str\n", + " title: str\n", + " text: str\n", + "\n", + " def __init__(self, url):\n", + " self.url = url\n", + " response = requests.get(url)\n", + " self.body = response.content\n", + " soup = BeautifulSoup(self.body, 'html.parser')\n", + " self.title = soup.title.string if soup.title else \"No title found\"\n", + " for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", + " irrelevant.decompose()\n", + " self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", + "\n", + " def get_contents(self):\n", + " return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c701ec17-ecd5-4000-9f68-34634c8ed49d", + "metadata": {}, + "outputs": [], + "source": [ + "# With massive thanks to Bill G. who noticed that a prior version of this had a bug! Now fixed.\n", + "\n", + "system_message = \"You are an assistant that analyzes the contents of a company website landing page \\\n", + "and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown.\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "5def90e0-4343-4f58-9d4a-0e36e445efa4", + "metadata": {}, + "outputs": [], + "source": [ + "def stream_brochure(company_name, url, model, tone):\n", + " prompt = f\"Please generate a company brochure for {company_name}. 
Write the brochure in the following tone: {tone}. Here is their landing page:\\n\"\n", + " prompt += Website(url).get_contents()\n", + " if model==\"GPT\":\n", + " result = stream_gpt(prompt)\n", + " elif model==\"Claude\":\n", + " result = stream_claude(prompt)\n", + " else:\n", + " raise ValueError(\"Unknown model\")\n", + " yield from result" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "66399365-5d67-4984-9d47-93ed26c0bd3d", + "metadata": {}, + "outputs": [], + "source": [ + "view = gr.Interface(\n", + " fn=stream_brochure,\n", + " inputs=[\n", + " gr.Textbox(label=\"Company name:\"),\n", + " gr.Textbox(label=\"Landing page URL including http:// or https://\"),\n", + " gr.Dropdown([\"GPT\", \"Claude\"], label=\"Select model\"),\n", + " gr.Dropdown([\"Formal\", \"Casual\", \"Academic\", \"Funny\", \"Snarky\"], label=\"Select tone\", value=\"Formal\"),],\n", + " outputs=[gr.Markdown(label=\"Brochure:\")],\n", + " flagging_mode=\"never\"\n", + ")\n", + "view.launch()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "ede97ca3-a0f8-4f6e-be17-d1de7fef9cc0", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 06c4f097ed8ad99180a8abbd8f4a4692417c7ac4 Mon Sep 17 00:00:00 2001 From: danielquillanroxas Date: Sat, 1 Feb 2025 14:29:46 +0300 Subject: [PATCH 2/2] translator with audio --- .../week2_exercise_translated_chatbot.ipynb | 614 ++++++++++++++++++ 1 file changed, 614 insertions(+) create mode 100644 week2/community-contributions/week2_exercise_translated_chatbot.ipynb diff --git 
a/week2/community-contributions/week2_exercise_translated_chatbot.ipynb b/week2/community-contributions/week2_exercise_translated_chatbot.ipynb new file mode 100644 index 0000000..7490107 --- /dev/null +++ b/week2/community-contributions/week2_exercise_translated_chatbot.ipynb @@ -0,0 +1,614 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "d006b2ea-9dfe-49c7-88a9-a5a0775185fd", + "metadata": {}, + "source": [ + "# Additional End of week Exercise - week 2\n", + "\n", + "Now use everything you've learned from Week 2 to build a full prototype for the technical question/answerer you built in Week 1 Exercise.\n", + "\n", + "This should include a Gradio UI, streaming, use of the system prompt to add expertise, and the ability to switch between models. Bonus points if you can demonstrate use of a tool!\n", + "\n", + "If you feel bold, see if you can add audio input so you can talk to it, and have it respond with audio. ChatGPT or Claude can help you, or email me if you have questions.\n", + "\n", + "I will publish a full solution here soon - unless someone beats me to it...\n", + "\n", + "There are so many commercial applications for this, from a language tutor, to a company onboarding solution, to a companion AI to a course (like this one!). I can't wait to see your results." + ] + }, + { + "cell_type": "markdown", + "id": "517b0427-1f18-4ce3-96a9-3bccde59ee60", + "metadata": {}, + "source": [ + "# Overview\n" + ] + }, + { + "cell_type": "markdown", + "id": "d1e934de-8fe4-4586-bc0e-0ecb0dd5068b", + "metadata": {}, + "source": [ + "## Multimodal question and answerer:\n", + " - The chatbot answers technical questions\n", + " - The user can enter their language of choice, and the conversation is translated in real time on the second screen, with English on the first screen.\n", + " - The user can also choose which tone/mood they would like the responses to have.\n", + " - The user can choose whether the response will be read out loud or not. 
\n", + " - The user can send in audio and get the response back as audio (automatically) if they so choose." + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "id": "90459175-a0f9-421e-bbce-49eecd2cd16a", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import json\n", + "from dotenv import load_dotenv\n", + "from openai import OpenAI\n", + "import anthropic\n", + "import gradio as gr" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "id": "096cdd1d-073b-45b6-80f2-090fbdcb701c", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "OpenAI key exists!\n", + "Anthropic API key exists!\n" + ] + } + ], + "source": [ + "# Initialization\n", + "\n", + "load_dotenv()\n", + "\n", + "openai_api_key = os.getenv('OPENAI_API_KEY')\n", + "anthropic_api_key = os.getenv(\"ANTHROPIC_API_KEY\")\n", + "\n", + "if openai_api_key:\n", + " print(\"OpenAI key exists!\")\n", + "else:\n", + " print(\"OpenAI key not set!\")\n", + "\n", + "if anthropic_api_key:\n", + " print(\"Anthropic API key exists!\")\n", + "else:\n", + " print(\"Anthropic API key not set!\")\n" + ] + }, + { + "cell_type": "code", + "execution_count": 17, + "id": "bca4c5ee-6c39-4c81-b8f9-9280c66b07c0", + "metadata": {}, + "outputs": [], + "source": [ + "openai = OpenAI()\n", + "\n", + "claude = anthropic.Anthropic()\n", + "\n", + "openai_4o_mini_model = \"gpt-4o-mini\"\n" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "id": "67599cae-9a26-4180-959c-63c74157568f", + "metadata": {}, + "outputs": [], + "source": [ + "system_message = \"You are a helpful assistant that answers technical questions.\"\n", + "system_message += \" Always be accurate. 
If you don't know or are not sure about some information, say so.\"" + ] + }, + { + "cell_type": "code", + "execution_count": 55, + "id": "5a04232e-f8a7-4e38-b22b-8fe179262920", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Collecting deep_translator\n", + " Downloading deep_translator-1.11.4-py3-none-any.whl.metadata (30 kB)\n", + "Requirement already satisfied: beautifulsoup4<5.0.0,>=4.9.1 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from deep_translator) (4.12.3)\n", + "Requirement already satisfied: requests<3.0.0,>=2.23.0 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from deep_translator) (2.32.3)\n", + "Requirement already satisfied: soupsieve>1.2 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from beautifulsoup4<5.0.0,>=4.9.1->deep_translator) (2.5)\n", + "Requirement already satisfied: charset_normalizer<4,>=2 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from requests<3.0.0,>=2.23.0->deep_translator) (3.4.1)\n", + "Requirement already satisfied: idna<4,>=2.5 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from requests<3.0.0,>=2.23.0->deep_translator) (2.10)\n", + "Requirement already satisfied: urllib3<3,>=1.21.1 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from requests<3.0.0,>=2.23.0->deep_translator) (2.3.0)\n", + "Requirement already satisfied: certifi>=2017.4.17 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from requests<3.0.0,>=2.23.0->deep_translator) (2024.12.14)\n", + "Downloading deep_translator-1.11.4-py3-none-any.whl (42 kB)\n", + "Installing collected packages: deep_translator\n", + "Successfully installed deep_translator-1.11.4\n" + ] + } + ], + "source": [ + "!pip install deep_translator" + ] + }, + { + "cell_type": "code", + "execution_count": 56, + "id": "aa85d87e-9385-4a23-b114-eb64264ca497", + "metadata": {}, + "outputs": [], + "source": [ + "from deep_translator import 
GoogleTranslator\n", + "\n", + "# First install deep_translator:\n", + "# pip install deep_translator\n", + "\n", + "# Top 10 most spoken languages with their codes\n", + "LANGUAGES = {\n", + " \"English\": \"en\",\n", + " \"Mandarin Chinese\": \"zh-CN\",\n", + " \"Hindi\": \"hi\",\n", + " \"Spanish\": \"es\",\n", + " \"Arabic\": \"ar\",\n", + " \"Bengali\": \"bn\",\n", + " \"Portuguese\": \"pt\",\n", + " \"Russian\": \"ru\",\n", + " \"Japanese\": \"ja\",\n", + " \"German\": \"de\"\n", + "}\n" + ] + }, + { + "cell_type": "code", + "execution_count": 57, + "id": "25654699-763a-4574-91bf-08c6068d4cd3", + "metadata": {}, + "outputs": [], + "source": [ + "\n", + "class ChatState:\n", + " def __init__(self):\n", + " self.speak = True\n", + " self.target_lang = \"en\"\n", + "\n", + "chat_state = ChatState()" + ] + }, + { + "cell_type": "code", + "execution_count": 58, + "id": "18dc8c35-f97c-4635-834a-422b5aa552c8", + "metadata": {}, + "outputs": [], + "source": [ + "def translate_message(text, target_lang):\n", + " if target_lang == \"en\":\n", + " return text\n", + " try:\n", + " translator = GoogleTranslator(source='auto', target=target_lang)\n", + " return translator.translate(text)\n", + " except:\n", + " return f\"Translation error: {text}\"" + ] + }, + { + "cell_type": "code", + "execution_count": 66, + "id": "344f6594-335e-4900-b2dd-9509e90a1389", + "metadata": {}, + "outputs": [], + "source": [ + "def chat(message, history):\n", + " # Original chat processing\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}] \n", + " response = openai.chat.completions.create(model = openai_4o_mini_model, messages = messages)\n", + " response_text = response.choices[0].message.content\n", + " \n", + " if chat_state.speak:\n", + " talker(response_text)\n", + " \n", + " # Translate messages\n", + " translated_message = translate_message(message, chat_state.target_lang)\n", + " translated_response = 
translate_message(response_text, chat_state.target_lang)\n", + " \n", + " # NOTE: gr.Chatbot.update() cannot be called like this (the .update() class pattern was removed in Gradio 4+); the translated view is wired up as a real output in the Blocks UI below\n", + " \n", + " return response_text\n" + ] + }, + { + "cell_type": "code", + "execution_count": 71, + "id": "5e9af0a3-8a8d-45ff-8579-c9fad759326b", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Collecting gTTS\n", + " Downloading gTTS-2.5.4-py3-none-any.whl.metadata (4.1 kB)\n", + "Requirement already satisfied: requests<3,>=2.27 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from gTTS) (2.32.3)\n", + "Requirement already satisfied: click<8.2,>=7.1 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from gTTS) (8.1.8)\n", + "Requirement already satisfied: colorama in c:\\users\\hp\\appdata\\roaming\\python\\python311\\site-packages (from click<8.2,>=7.1->gTTS) (0.4.6)\n", + "Requirement already satisfied: charset_normalizer<4,>=2 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from requests<3,>=2.27->gTTS) (3.4.1)\n", + "Requirement already satisfied: idna<4,>=2.5 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from requests<3,>=2.27->gTTS) (2.10)\n", + "Requirement already satisfied: urllib3<3,>=1.21.1 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from requests<3,>=2.27->gTTS) (2.3.0)\n", + "Requirement already satisfied: certifi>=2017.4.17 in c:\\users\\hp\\anaconda3\\envs\\llms\\lib\\site-packages (from requests<3,>=2.27->gTTS) (2024.12.14)\n", + "Downloading gTTS-2.5.4-py3-none-any.whl (29 kB)\n", + "Installing collected packages: gTTS\n", + "Successfully installed gTTS-2.5.4\n" + ] + } + ], + "source": [ + "!pip install gTTS" + ] + }, + { + "cell_type": "code", + "execution_count": 77, + "id": "1cd93430-0f3a-45ad-ac4c-917fd2a6c2a0", + "metadata": {}, + "outputs": [], + "source": [ + "from gtts import gTTS\n", + "import os\n", + "import tempfile\n", + "\n", + "def text_to_speech(text, 
lang_code):\n", + " try:\n", + " tts = gTTS(text=text, lang=lang_code)\n", + " # Create temporary file\n", + " with tempfile.NamedTemporaryFile(delete=False, suffix='.mp3') as fp:\n", + " tts.save(fp.name)\n", + " return fp.name\n", + " except Exception:\n", + " return None" + ] + }, + { + "cell_type": "code", + "execution_count": 74, + "id": "a7b82670-f376-4069-9a11-7ca622fcf238", + "metadata": {}, + "outputs": [], + "source": [ + "def respond(message, history):\n", + " bot_response, history_original, history_translated, audio_path = process_message(\n", + " message, \n", + " history, \n", + " speech_language.value # pass the Radio value (\"Original\"/\"Translated\") directly\n", + " )\n", + " return \"\", history_original, history_translated" + ] + }, + { + "cell_type": "code", + "execution_count": 79, + "id": "9716f21b-9583-4eec-9fd0-bffc430fee56", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "* Running on local URL: http://127.0.0.1:7882\n", + "\n", + "To create a public link, set `share=True` in `launch()`.\n" + ] + }, + { + "data": { + "text/html": [ + "
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + }, + { + "data": { + "text/plain": [] + }, + "execution_count": 79, + "metadata": {}, + "output_type": "execute_result" + }, + { + "data": { + "text/html": [ + "\n", + " \n", + " " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "def process_message(message, history, speech_mode):\n", + " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}] \n", + " response = openai.chat.completions.create(model = openai_4o_mini_model, messages = messages)\n", + " response_text = response.choices[0].message.content\n", + " \n", + " # Create audio if speech is enabled\n", + " audio_path = None\n", + " if chat_state.speak:\n", + " if speech_mode == \"Translated\":\n", + " translated_text = translate_message(response_text, chat_state.target_lang)\n", + " audio_path = text_to_speech(translated_text, chat_state.target_lang)\n", + " else:\n", + " talker(response_text)\n", + " \n", + " # Translate messages for display\n", + " translated_message = translate_message(message, chat_state.target_lang)\n", + " translated_response = translate_message(response_text, chat_state.target_lang)\n", + " \n", + " history_original = history + [\n", + " {\"role\": \"user\", \"content\": message},\n", + " {\"role\": \"assistant\", \"content\": response_text}\n", + " ]\n", + " history_translated = [\n", + " {\"role\": \"user\", \"content\": translated_message},\n", + " {\"role\": \"assistant\", \"content\": translated_response}\n", + " ]\n", + " \n", + " return response_text, history_original, history_translated, audio_path\n", + "\n", + "with gr.Blocks() as demo:\n", + " speech_mode = gr.State(\"Original\")\n", + " \n", + " with gr.Row():\n", + " speak_checkbox = gr.Checkbox(\n", + " label=\"Read responses aloud\",\n", + " value=True,\n", + " interactive=True\n", + " )\n", + " 
language_dropdown = gr.Dropdown(\n", + " choices=list(LANGUAGES.keys()),\n", + " value=\"Spanish\",\n", + " label=\"Translation Language\",\n", + " interactive=True\n", + " )\n", + " speech_language = gr.Radio(\n", + " choices=[\"Original\", \"Translated\"],\n", + " value=\"Original\",\n", + " label=\"Speech Language\",\n", + " interactive=True\n", + " )\n", + " \n", + " # Add audio player\n", + " audio_player = gr.Audio(label=\"Response Audio\", visible=True)\n", + " \n", + " with gr.Row():\n", + " with gr.Column():\n", + " gr.Markdown(\"### Original Conversation\")\n", + " chatbot_original = gr.Chatbot(type=\"messages\")\n", + " msg_original = gr.Textbox(label=\"Message\")\n", + " send_btn = gr.Button(\"Send\")\n", + " \n", + " with gr.Column():\n", + " gr.Markdown(\"### Translated Conversation\")\n", + " chatbot_translated = gr.Chatbot(type=\"messages\")\n", + " \n", + " state = gr.State([])\n", + " \n", + " def respond(message, history, current_speech_mode):\n", + " bot_response, history_original, history_translated, audio_path = process_message(\n", + " message, \n", + " history,\n", + " current_speech_mode\n", + " )\n", + " \n", + " return \"\", history_original, history_translated, audio_path, history_original # also refresh the State so history persists across turns\n", + " \n", + " send_btn.click(\n", + " respond,\n", + " inputs=[msg_original, state, speech_language],\n", + " outputs=[msg_original, chatbot_original, chatbot_translated, audio_player, state],\n", + " )\n", + " \n", + " msg_original.submit(\n", + " respond,\n", + " inputs=[msg_original, state, speech_language],\n", + " outputs=[msg_original, chatbot_original, chatbot_translated, audio_player, state],\n", + " )\n", + " \n", + " speak_checkbox.change(fn=lambda x: setattr(chat_state, 'speak', x), inputs=[speak_checkbox])\n", + " language_dropdown.change(fn=lambda choice: setattr(chat_state, 'target_lang', LANGUAGES[choice]), inputs=[language_dropdown]) # update_language was undefined; map the dropdown choice to its language code\n", + "\n", + "demo.launch()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "af83a223-1930-4bad-9c31-44d227847610", + "metadata": {}, + "outputs": [], + 
"source": [] + }, + { + "cell_type": "markdown", + "id": "76b51235-e23c-4aad-9c51-92f57d2febfb", + "metadata": {}, + "source": [ + "### Audio" + ] + }, + { + "cell_type": "code", + "execution_count": 29, + "id": "d7a81b5d-165e-48f8-bfc6-f4aca1cb68f1", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ffmpeg version 2025-01-30-git-1911a6ec26-full_build-www.gyan.dev Copyright (c) 2000-2025 the FFmpeg developers\n", + "built with gcc 14.2.0 (Rev1, Built by MSYS2 project)\n", + "configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-lcms2 --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-libdvdnav --enable-libdvdread --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libopenjpeg --enable-libquirc --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-libqrencode --enable-librav1e --enable-libsvtav1 --enable-libvvenc --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame 
--enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-liblc3 --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint\n", + "libavutil 59. 56.100 / 59. 56.100\n", + "libavcodec 61. 31.101 / 61. 31.101\n", + "libavformat 61. 9.106 / 61. 9.106\n", + "libavdevice 61. 4.100 / 61. 4.100\n", + "libavfilter 10. 9.100 / 10. 9.100\n", + "libswscale 8. 13.100 / 8. 13.100\n", + "libswresample 5. 4.100 / 5. 4.100\n", + "libpostproc 58. 4.100 / 58. 4.100\n", + "ffprobe version 2025-01-30-git-1911a6ec26-full_build-www.gyan.dev Copyright (c) 2007-2025 the FFmpeg developers\n", + "built with gcc 14.2.0 (Rev1, Built by MSYS2 project)\n", + "configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-lcms2 --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-libdvdnav --enable-libdvdread --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libopenjpeg --enable-libquirc --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-libqrencode --enable-librav1e --enable-libsvtav1 --enable-libvvenc --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec 
--enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-liblc3 --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint\n", + "libavutil 59. 56.100 / 59. 56.100\n", + "libavcodec 61. 31.101 / 61. 31.101\n", + "libavformat 61. 9.106 / 61. 9.106\n", + "libavdevice 61. 4.100 / 61. 4.100\n", + "libavfilter 10. 9.100 / 10. 9.100\n", + "libswscale 8. 13.100 / 8. 13.100\n", + "libswresample 5. 4.100 / 5. 4.100\n", + "libpostproc 58. 4.100 / 58. 4.100\n", + "ffplay version 2025-01-30-git-1911a6ec26-full_build-www.gyan.dev Copyright (c) 2003-2025 the FFmpeg developers\n", + "built with gcc 14.2.0 (Rev1, Built by MSYS2 project)\n", + "configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-lcms2 --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-libdvdnav --enable-libdvdread --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libopenjpeg --enable-libquirc --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-libqrencode --enable-librav1e --enable-libsvtav1 --enable-libvvenc --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r 
--enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-liblc3 --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint\n", + "libavutil 59. 56.100 / 59. 56.100\n", + "libavcodec 61. 31.101 / 61. 31.101\n", + "libavformat 61. 9.106 / 61. 9.106\n", + "libavdevice 61. 4.100 / 61. 4.100\n", + "libavfilter 10. 9.100 / 10. 9.100\n", + "libswscale 8. 13.100 / 8. 13.100\n", + "libswresample 5. 4.100 / 5. 4.100\n", + "libpostproc 58. 4.100 / 58. 
4.100\n" + ] + } + ], + "source": [ + "!ffmpeg -version\n", + "!ffprobe -version\n", + "!ffplay -version" + ] + }, + { + "cell_type": "code", + "execution_count": 31, + "id": "1a04820d-ff9d-4868-8c98-969548c5402a", + "metadata": {}, + "outputs": [], + "source": [ + "import base64\n", + "from io import BytesIO\n", + "from PIL import Image\n", + "from IPython.display import Audio, display" + ] + }, + { + "cell_type": "code", + "execution_count": 34, + "id": "7c05863a-631b-4e05-9bc0-267400595ac2", + "metadata": {}, + "outputs": [], + "source": [ + "import base64\n", + "from io import BytesIO\n", + "from PIL import Image\n", + "from IPython.display import Audio, display\n", + "\n", + "def talker(message):\n", + " response = openai.audio.speech.create(\n", + " model = \"tts-1\",\n", + " voice = \"onyx\",\n", + " input = message\n", + " )\n", + "\n", + " audio_stream = BytesIO(response.content)\n", + " output_filename = \"output_audio.mp3\"\n", + " with open(output_filename, \"wb\") as f:\n", + " f.write(audio_stream.read())\n", + "\n", + " #Play the generated audio\n", + " display(Audio(output_filename, autoplay=True))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 41, + "id": "fcf1bcf8-4a96-4243-950a-214bcd8b916b", + "metadata": {}, + "outputs": [ + { + "data": { + "text/html": [ + "\n", + " \n", + " " + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "talker(\"Warm, wet, and wild?! 
There must be something in the water!\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "63fed0a2-5c2b-408d-9427-6e59b3f99e4f", + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +}