{ "cells": [ { "cell_type": "markdown", "id": "75e2ef28-594f-4c18-9d22-c6b8cd40ead2", "metadata": {}, "source": [ "# Day 3 - Conversational AI - aka Chatbot!" ] }, { "cell_type": "code", "execution_count": 40, "id": "70e39cd8-ec79-4e3e-9c26-5659d42d0861", "metadata": {}, "outputs": [], "source": [ "# imports\n", "\n", "import os\n", "import requests\n", "from bs4 import BeautifulSoup\n", "from typing import List\n", "from dotenv import load_dotenv\n", "from openai import OpenAI\n", "import google.generativeai\n", "# import anthropic\n", "import gradio as gr" ] }, { "cell_type": "code", "execution_count": 41, "id": "231605aa-fccb-447e-89cf-8b187444536a", "metadata": {}, "outputs": [], "source": [ "# Load environment variables in a file called .env\n", "\n", "load_dotenv()\n", "os.environ['GOOGLE_API_KEY'] = os.getenv('GOOGLE_API_KEY', 'your-key-if-not-using-env')" ] }, { "cell_type": "code", "execution_count": 3, "id": "6541d58e-2297-4de1-b1f7-77da1b98b8bb", "metadata": {}, "outputs": [], "source": [ "google.generativeai.configure()" ] }, { "cell_type": "code", "execution_count": 4, "id": "e16839b5-c03b-4d9d-add6-87a0f6f37575", "metadata": {}, "outputs": [], "source": [ "system_message = \"You are a helpful assistant\"" ] }, { "cell_type": "code", "execution_count": null, "id": "ba2123e7-77ed-43b4-8c37-03658fb42b78", "metadata": {}, "outputs": [], "source": [ "system_message = \"You are an assistant that is great at telling jokes\"\n", "user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\"\n", "\n", "prompts = [\n", " {\"role\": \"system\", \"content\": system_message},\n", " {\"role\": \"user\", \"content\": user_prompt}\n", " ]\n", "\n", "# The API for Gemini has a slightly different structure.\n", "# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n", "# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\n", "\n", "gemini = google.generativeai.GenerativeModel(\n", " model_name='gemini-1.5-flash',\n", " system_instruction=system_message\n", ")\n", "response = gemini.generate_content(user_prompt)\n", "print(response.text)" ] }, { "cell_type": "code", "execution_count": 7, "id": "7b933ff3", "metadata": {}, "outputs": [], "source": [ "import google.generativeai as genai\n", "\n", "model = genai.GenerativeModel('gemini-1.5-flash')" ] }, { "cell_type": "code", "execution_count": null, "id": "91578b16", "metadata": {}, "outputs": [], "source": [ "chat = model.start_chat(history=[])\n", "response = chat.send_message('Hello! My name is Shardul.')\n", "print(response.text)" ] }, { "cell_type": "code", "execution_count": null, "id": "7c4bc38f", "metadata": {}, "outputs": [], "source": [ "response = chat.send_message('Can you tell something interesting about star wars?')\n", "print(response.text)" ] }, { "cell_type": "code", "execution_count": null, "id": "337bee91", "metadata": {}, "outputs": [], "source": [ "response = chat.send_message('Do you remember what my name is?')\n", "print(response.text)" ] }, { "cell_type": "code", "execution_count": null, "id": "bcaf4d95", "metadata": {}, "outputs": [], "source": [ "chat.history" ] }, { "cell_type": "markdown", "id": "98e97227-f162-4d1a-a0b2-345ff248cbe7", "metadata": {}, "source": [ "# Please read this! 
{ "cell_type": "markdown", "id": "98e97227-f162-4d1a-a0b2-345ff248cbe7", "metadata": {}, "source": [ "# Please read this! A change from the video:\n", "\n", "In the video, I explain how we now need to write a function called:\n", "\n", "`chat(message, history)`\n", "\n", "It expects to receive `history` in a particular format, which we previously had to map to the OpenAI format before calling the API:\n", "\n", "```\n", "[\n", "    {\"role\": \"system\", \"content\": \"system message here\"},\n", "    {\"role\": \"user\", \"content\": \"first user prompt here\"},\n", "    {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n", "    {\"role\": \"user\", \"content\": \"the new user prompt\"},\n", "]\n", "```\n", "\n", "But Gradio has been upgraded! Now it will pass in `history` in the exact OpenAI format, perfect for us to send straight to the model.\n", "\n", "So our work just got easier!\n", "\n", "We will write a function `chat(message, history)` where:  \n", "**message** is the prompt to use  \n", "**history** is the past conversation, in OpenAI format  \n", "\n", "We will combine the system message, history and latest message, then call the LLM. Since this notebook uses Gemini, we map the OpenAI-format history to Gemini's format before sending it." ] }, { "cell_type": "code", "execution_count": 6, "id": "1eacc8a4-4b48-4358-9e06-ce0020041bc1", "metadata": {}, "outputs": [], "source": [ "def chat(message, history):\n", "    relevant_system_message = system_message\n", "    if 'belt' in message:\n", "        relevant_system_message += \" The store does not sell belts; if you are asked for belts, be sure to point out other items on sale.\"\n", "\n", "    # Gradio passes history in OpenAI format; map it to Gemini's format\n", "    gemini_history = []\n", "    for item in history:\n", "        role = \"model\" if item[\"role\"] == \"assistant\" else \"user\"\n", "        gemini_history.append({\"role\": role, \"parts\": [{\"text\": item[\"content\"]}]})\n", "\n", "    # Create a model with the relevant system instruction and replay the history\n", "    chat_model = google.generativeai.GenerativeModel(\n", "        model_name='gemini-1.5-flash',\n", "        system_instruction=relevant_system_message\n", "    )\n", "    conversation = chat_model.start_chat(history=gemini_history)\n", "\n", "    stream = conversation.send_message(message, safety_settings=[\n", "        {\"category\": \"HARM_CATEGORY_DANGEROUS_CONTENT\", \"threshold\": \"BLOCK_NONE\"},\n", "        {\"category\": \"HARM_CATEGORY_SEXUALLY_EXPLICIT\", \"threshold\": \"BLOCK_NONE\"},\n", "        {\"category\": \"HARM_CATEGORY_HATE_SPEECH\", \"threshold\": \"BLOCK_NONE\"},\n", "        {\"category\": \"HARM_CATEGORY_HARASSMENT\", \"threshold\": \"BLOCK_NONE\"}], stream=True)\n", "\n", "    response = \"\"\n", "    for chunk in stream:\n", "        response += chunk.text\n", "        yield response" ] },
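{ "cell_type": "markdown", "id": "c5f3e4d6-3333-4333-8333-9cc2d3e4f5a6", "metadata": {}, "source": [ "Then we hand `chat` to Gradio. A minimal sketch: `type=\"messages\"` asks Gradio to pass `history` in the OpenAI format described above." ] }, { "cell_type": "code", "execution_count": null, "id": "d6a4f5e7-4444-4444-8444-9dd3e4f5a6b7", "metadata": {}, "outputs": [], "source": [ "# Launch a chat UI; because chat is a generator, Gradio streams the partial responses\n", "gr.ChatInterface(fn=chat, type=\"messages\").launch()" ] },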
{ "cell_type": "code", "execution_count": null, "id": "f6e745e1", "metadata": {}, "outputs": [], "source": [ "chat_model = genai.GenerativeModel('gemini-1.5-flash')\n", "chat = chat_model.start_chat()\n", "\n", "msg = \"what is gen ai\"\n", "stream = chat.send_message(msg, stream=True)\n", "for chunk in stream:\n", "    print(chunk.text)\n" ] }, { "cell_type": "code", "execution_count": null, "id": "dce941ee", "metadata": {}, "outputs": [], "source": [ "import time\n", "\n", "chat = model.start_chat(history=[])\n", "\n", "# Transform Gradio history (pairs of user/assistant messages) to Gemini format\n", "def transform_history(history):\n", "    new_history = []\n", "    for user_message, model_message in history:\n", "        new_history.append({\"parts\": [{\"text\": user_message}], \"role\": \"user\"})\n", "        new_history.append({\"parts\": [{\"text\": model_message}], \"role\": \"model\"})\n", "    return new_history\n", "\n", "def response(message, history):\n", "    global chat\n", "    # Keep the Gemini session in sync with Gradio's history, so the 'Undo' and 'Clear' buttons work correctly\n", "    chat.history = transform_history(history)\n", "    reply = chat.send_message(message)\n", "    reply.resolve()\n", "\n", "    # Display the answer character by character, to simulate typing\n", "    for i in range(len(reply.text)):\n", "        time.sleep(0.01)\n", "        yield reply.text[: i+1]\n", "\n", "gr.ChatInterface(response,\n", "                 textbox=gr.Textbox(placeholder=\"Question to Gemini\")).launch(debug=True)" ] },
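{ "cell_type": "markdown", "id": "e7b5a6f8-5555-4555-8555-9ee4f5a6b7c8", "metadata": {}, "source": [ "The cell above simulates typing by yielding one character at a time. As an alternative, here is a minimal sketch that streams real chunks from Gemini instead, reusing `transform_history` and the `chat` session defined above (`response_streaming` is just an illustrative name):" ] }, { "cell_type": "code", "execution_count": null, "id": "f8c6b7a9-6666-4666-8666-9ff5a6b7c8d9", "metadata": {}, "outputs": [], "source": [ "def response_streaming(message, history):\n", "    global chat\n", "    chat.history = transform_history(history)\n", "    partial = \"\"\n", "    # stream=True yields chunks as Gemini generates them\n", "    for chunk in chat.send_message(message, stream=True):\n", "        partial += chunk.text\n", "        yield partial\n", "\n", "# Uncomment to launch this version instead of the one above:\n", "# gr.ChatInterface(response_streaming,\n", "#                  textbox=gr.Textbox(placeholder=\"Question to Gemini\")).launch(debug=True)" ] },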
\n",
" ![]() | \n",
" \n",
" Business Applications\n", " Conversational Assistants are of course a hugely common use case for Gen AI, and the latest frontier models are remarkably good at nuanced conversation. And Gradio makes it easy to have a user interface. Another crucial skill we covered is how to use prompting to provide context, information and examples.\n", "\n", "Consider how you could apply an AI Assistant to your business, and make yourself a prototype. Use the system prompt to give context on your business, and set the tone for the LLM.\n", " | \n",
"