{
"cells": [
{
"cell_type": "markdown",
"id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
"metadata": {},
"source": [
"# Welcome to Week 2!\n",
"\n",
"## Frontier Model APIs\n",
"\n",
"In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with the OpenAI's API.\n",
"\n",
"Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI."
]
},
{
"cell_type": "markdown",
"id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Important Note - Please read me</h2>\n",
" <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
" At the start of each week, it's worth checking you have the latest code.<br/>\n",
" First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
" After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
" <code>conda env update --f environment.yml --prune</code><br/>\n",
" Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
" <code>pip install -r requirements.txt</code>\n",
" <br/>Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Reminder about the resources page</h2>\n",
" <span style=\"color:#f71;\">Here's a link to resources for the course. This includes links to all the slides.<br/>\n",
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "85cfe275-4705-4d30-abea-643fbddf1db0",
"metadata": {},
"source": [
"## Setting up your keys\n",
"\n",
"If you haven't done so already, you could now create API keys for Anthropic and Google in addition to OpenAI.\n",
"\n",
"**Please note:** if you'd prefer to avoid extra API costs, feel free to skip setting up Anthopic and Google! You can see me do it, and focus on OpenAI for the course. You could also substitute Anthropic and/or Google for Ollama, using the exercise you did in week 1.\n",
"\n",
"For OpenAI, visit https://openai.com/api/ \n",
"For Anthropic, visit https://console.anthropic.com/ \n",
"For Google, visit https://ai.google.dev/gemini-api \n",
"\n",
"When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
"\n",
"```\n",
"OPENAI_API_KEY=xxxx\n",
"ANTHROPIC_API_KEY=xxxx\n",
"GOOGLE_API_KEY=xxxx\n",
"```\n",
"\n",
"Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import anthropic\n",
"from IPython.display import Markdown, display, update_display"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
"metadata": {},
"outputs": [],
"source": [
"# import for google\n",
"# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n",
"# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n",
"\n",
"import google.generativeai"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key exists and begins sk-proj-\n",
"Anthropic API Key exists and begins sk-ant-\n",
"Google API Key exists and begins AIzaSyC-\n"
]
}
],
"source": [
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
"load_dotenv()\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"if anthropic_api_key:\n",
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
"else:\n",
" print(\"Anthropic API Key not set\")\n",
"\n",
"if google_api_key:\n",
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n",
"else:\n",
" print(\"Google API Key not set\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
"metadata": {},
"outputs": [],
"source": [
"# Connect to OpenAI, Anthropic\n",
"\n",
"openai = OpenAI()\n",
"\n",
"claude = anthropic.Anthropic()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "425ed580-808d-429b-85b0-6cba50ca1d0c",
"metadata": {},
"outputs": [],
"source": [
"# This is the set up code for Gemini\n",
"# Having problems with Google Gemini setup? Then just ignore this cell; when we use Gemini, I'll give you an alternative that bypasses this library altogether\n",
"\n",
"google.generativeai.configure()"
]
},
{
"cell_type": "markdown",
"id": "42f77b59-2fb1-462a-b90d-78994e4cef33",
"metadata": {},
"source": [
"## Asking LLMs to tell a joke\n",
"\n",
"It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n",
"Later we will be putting LLMs to better use!\n",
"\n",
"### What information is included in the API\n",
"\n",
"Typically we'll pass to the API:\n",
"- The name of the model that should be used\n",
"- A system message that gives overall context for the role the LLM is playing\n",
"- A user message that provides the actual prompt\n",
"\n",
"There are other parameters that can be used, including **temperature** which is typically between 0 and 1; higher for more random output; lower for more focused and deterministic."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "378a0296-59a2-45c6-82eb-941344d3eeff",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are an assistant that is great at telling jokes\"\n",
"user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
"metadata": {},
"outputs": [],
"source": [
"prompts = [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist break up with their Excel spreadsheet? \n",
"\n",
"Because they found out it was cheating with Python!\n"
]
}
],
"source": [
"# GPT-3.5-Turbo\n",
"\n",
"completion = openai.chat.completions.create(model='gpt-3.5-turbo', messages=prompts)\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist break up with the statistician?\n",
"\n",
"Because she found him too mean!\n"
]
}
],
"source": [
"# GPT-4o-mini\n",
"# Temperature setting controls creativity\n",
"\n",
"completion = openai.chat.completions.create(\n",
" model='gpt-4o-mini',\n",
" messages=prompts,\n",
" temperature=0.7\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist break up with the statistician?\n",
"\n",
"They just couldn't find the right correlation!\n"
]
}
],
"source": [
"# GPT-4o\n",
"\n",
"completion = openai.chat.completions.create(\n",
" model='gpt-4o',\n",
" messages=prompts,\n",
" temperature=0.4\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
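{
"cell_type": "code",
"execution_count": null,
"id": "9c4e8f2a-7b1d-4e5a-a3c6-2f8d0b7e4a19",
"metadata": {},
"outputs": [],
"source": [
"# A quick sketch to illustrate temperature (an optional extra, not part of the comparison)\n",
"# At temperature=0 the output is close to deterministic, so re-running this cell\n",
"# should usually - though not always - print the same joke\n",
"\n",
"completion = openai.chat.completions.create(\n",
" model='gpt-4o-mini',\n",
" messages=prompts,\n",
" temperature=0.0\n",
")\n",
"print(completion.choices[0].message.content)"
]
},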
{
"cell_type": "code",
"execution_count": 14,
"id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Here's a data science joke for you:\n",
"\n",
"Why did the data scientist bring a ladder to work?\n",
"\n",
"Because they heard the data had a lot of outliers that needed to be reached! \n",
"\n",
"Alternative jokes:\n",
"\n",
"1. What's a data scientist's favorite drink?\n",
" Mean juice!\n",
"\n",
"2. Why don't data scientists like nature?\n",
" Because it has too many random forests!\n",
"\n",
"3. What did the data scientist say when their machine learning model kept failing?\n",
" \"I guess it's not a learning day!\"\n"
]
}
],
"source": [
"# Claude 3.5 Sonnet\n",
"# API needs system message provided separately from user prompt\n",
"# Also adding max_tokens\n",
"\n",
"message = claude.messages.create(\n",
" model=\"claude-3-5-sonnet-20241022\",\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" system=system_message,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": user_prompt},\n",
" ],\n",
")\n",
"\n",
"print(message.content[0].text)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's a light-hearted joke for data scientists:\n",
"\n",
"dener?d the data scientist become a gar\n",
"\n",
" but they couldn't find the root directory!\n",
"\n",
" joke plays on the concept of \"decision trees\" in machine learning and the idea of \"root directories\" in computer file systems, while also incorporating a pun on \"data growth.\" It's a playful blend of data science concepts and gardening terminology that should get a chuckle from your data-savvy audience!"
]
}
],
"source": [
"# Claude 3.5 Sonnet again\n",
"# Now let's add in streaming back results\n",
"\n",
"result = claude.messages.stream(\n",
" model=\"claude-3-5-sonnet-20240620\",\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" system=system_message,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": user_prompt},\n",
" ],\n",
")\n",
"\n",
"with result as stream:\n",
" for text in stream.text_stream:\n",
" print(text, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why was the Data Scientist sad? Because they didn't get any arrays.\n",
"\n"
]
}
],
"source": [
"# The API for Gemini has a slightly different structure.\n",
"# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n",
"# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\n",
"\n",
"gemini = google.generativeai.GenerativeModel(\n",
" model_name='gemini-1.5-flash',\n",
" system_instruction=system_message\n",
")\n",
"response = gemini.generate_content(user_prompt)\n",
"print(response.text)"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "49009a30-037d-41c8-b874-127f61c4aa3a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"## Is Your Business Problem Suitable for an LLM Solution?\n",
"\n",
"Large Language Models (LLMs) are powerful tools, but they're not a silver bullet for every business problem. Before investing time and resources, carefully consider these factors:\n",
"\n",
"**Suitable Problems:**\n",
"\n",
"* **Involves Language Processing:** LLMs excel at tasks involving text, such as:\n",
" * **Text Summarization:** Condensing large amounts of text into concise summaries.\n",
" * **Translation:** Converting text between different languages.\n",
" * **Question Answering:** Extracting information from text in response to questions.\n",
" * **Content Generation:** Creating various types of text, including articles, emails, code, etc.\n",
" * **Chatbots and Conversational AI:** Building interactive systems for customer service or internal communication.\n",
" * **Sentiment Analysis:** Determining the emotional tone of text.\n",
" * **Paraphrasing and Text Simplification:** Rewriting text in a different style or making it easier to understand.\n",
" * **Named Entity Recognition (NER):** Identifying and classifying named entities in text (e.g., people, organizations, locations).\n",
"\n",
"* **Data-Rich Environment:** LLMs require significant amounts of training data. If your problem involves limited or unstructured data, an LLM might not be the best approach.\n",
"\n",
"* **Tolerates Some Inaccuracy:** LLMs are probabilistic; they don't always produce perfect results. If accuracy is paramount, consider other solutions. However, the level of accuracy is often acceptable for many business applications.\n",
"\n",
"* **Benefits Outweigh Costs:** Consider the cost of data preparation, model fine-tuning, infrastructure, and ongoing maintenance. The potential ROI should justify these expenses.\n",
"\n",
"**Unsuitable Problems:**\n",
"\n",
"* **Requires Real-Time, Deterministic Outcomes:** LLMs are not designed for applications demanding precise, guaranteed results in real-time. Consider alternative technologies for time-sensitive tasks like financial transactions or medical diagnoses.\n",
"\n",
"* **Relies on External Knowledge Bases:** LLMs' knowledge is limited to their training data. If your problem requires accessing up-to-the-minute information or specific external databases, you'll need to integrate an LLM with other systems.\n",
"\n",
"* **Deals with Sensitive Data:** LLMs may unintentionally reveal sensitive information present in their training data. Ensure compliance with data privacy regulations before using LLMs with sensitive data.\n",
"\n",
"* **Lacks Explainability:** Understanding *why* an LLM produced a specific output can be challenging (the \"black box\" problem). This lack of transparency can be problematic in regulated industries or when accountability is crucial.\n",
"\n",
"**Decision-Making Process:**\n",
"\n",
"1. **Clearly define the problem:** Specify the desired input, output, and performance metrics.\n",
"2. **Assess data availability:** Determine if sufficient training data is available.\n",
"3. **Evaluate LLM capabilities:** Research existing LLMs and their suitability for your task.\n",
"4. **Consider alternatives:** Explore other technologies that might be more appropriate.\n",
"5. **Develop a proof-of-concept:** Test a small-scale LLM implementation to assess feasibility and performance.\n",
"6. **Evaluate ROI:** Assess the potential benefits and costs of an LLM solution.\n",
"\n",
"\n",
"By carefully considering these factors, you can determine if an LLM is the right solution for your business problem. Remember to prioritize a thorough assessment and a phased approach to implementation.\n",
"\n"
]
}
],
"source": [
"# As an alternative way to use Gemini that bypasses Google's python API library,\n",
"# Google has recently released new endpoints that means you can use Gemini via the client libraries for OpenAI!\n",
"\n",
"gemini_via_openai_client = OpenAI(\n",
" api_key=google_api_key, \n",
" base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
")\n",
"\n",
"response = gemini_via_openai_client.chat.completions.create(\n",
" model=\"gemini-1.5-flash\",\n",
" messages=prompts\n",
")\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
"metadata": {},
"outputs": [],
"source": [
"# To be serious! GPT-4o-mini with the original question\n",
"\n",
"prompts = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n",
" {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"Deciding if a business problem is suitable for a Large Language Model (LLM) solution involves evaluating the nature of the problem, the capabilities of LLMs, and the potential benefits versus costs. Below are some factors to consider:\n",
"\n",
"### 1. Nature of the Problem\n",
"- **Textual Data**: LLMs excel with tasks involving text. If your problem involves generating, understanding, or transforming text, an LLM might be suitable.\n",
"- **Language Understanding**: Tasks that require understanding natural language, such as sentiment analysis, summarization, or translation, are ideal for LLMs.\n",
"- **Complexity of Language Tasks**: LLMs are useful for complex language tasks that go beyond simple keyword matching or rule-based approaches.\n",
"\n",
"### 2. Capabilities of LLMs\n",
"- **Contextual Understanding**: LLMs are powerful in understanding context and can handle nuanced language inputs.\n",
"- **Creativity and Generation**: They can generate human-like text, useful for content creation, chatbots, and other creative applications.\n",
"- **Learning from Data**: If the problem domain can be captured in text and learned from existing data, LLMs can be very effective.\n",
"\n",
"### 3. Data Availability\n",
"- **Quality and Quantity**: Ensure you have sufficient and high-quality textual data for training or fine-tuning the LLM.\n",
"- **Diverse Use Cases**: The data should cover the diverse use cases you expect the LLM to handle.\n",
"\n",
"### 4. Performance and Accuracy\n",
"- **Benchmarking**: Compare the performance of LLMs with existing solutions. LLMs should offer a significant improvement in understanding and generating language.\n",
"- **Evaluation Metrics**: Define clear metrics for success (e.g., accuracy, fluency, coherence) to evaluate the LLM’s performance.\n",
"\n",
"### 5. Cost and Resources\n",
"- **Computational Resources**: LLMs require significant computational power and storage, both for training and inference.\n",
"- **Development Costs**: Consider the costs involved in implementing, maintaining, and scaling an LLM solution.\n",
"\n",
"### 6. Ethical and Legal Considerations\n",
"- **Bias and Fairness**: LLMs can inherit biases from the data they are trained on. Ensure you have strategies to mitigate bias.\n",
"- **Data Privacy**: Ensure compliance with data privacy regulations when using customer or sensitive data.\n",
"\n",
"### 7. Business Impact\n",
"- **ROI**: Assess the potential return on investment. Will the LLM solution significantly improve efficiency, customer satisfaction, or revenues?\n",
"- **Scalability**: Consider if the solution can scale to meet business growth or adapt to new challenges.\n",
"\n",
"### Conclusion\n",
"An LLM solution is suitable when the business problem involves complex language tasks that benefit from contextual understanding and creativity, where there is adequate data to train the model, and the expected benefits outweigh the costs and risks. Evaluate these factors thoroughly to ensure the LLM aligns with your business goals and resources."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Have it stream back results in markdown\n",
"\n",
"stream = openai.chat.completions.create(\n",
" model='gpt-4o',\n",
" messages=prompts,\n",
" temperature=0.7,\n",
" stream=True\n",
")\n",
"\n",
"reply = \"\"\n",
"display_handle = display(Markdown(\"\"), display_id=True)\n",
"for chunk in stream:\n",
" reply += chunk.choices[0].delta.content or ''\n",
" reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
" update_display(Markdown(reply), display_id=display_handle.display_id)"
]
},
{
"cell_type": "markdown",
"id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
"metadata": {},
"source": [
"## And now for some fun - an adversarial conversation between Chatbots..\n",
"\n",
"You're already familar with prompts being organized into lists like:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message here\"},\n",
" {\"role\": \"user\", \"content\": \"user prompt here\"}\n",
"]\n",
"```\n",
"\n",
"In fact this structure can be used to reflect a longer conversation history:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message here\"},\n",
" {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
" {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
" {\"role\": \"user\", \"content\": \"the new user prompt\"},\n",
"]\n",
"```\n",
"\n",
"And we can use this approach to engage in a longer interaction with history."
]
},
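{
"cell_type": "code",
"execution_count": null,
"id": "4a7d2c9e-8f31-4b6a-9d05-3e1f7a8c2b64",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of a multi-turn interaction, before we set up the chatbot conversation\n",
"# We feed the assistant's first reply back into the messages list, then ask a follow-up\n",
"# The model and prompts here are arbitrary choices for illustration\n",
"\n",
"history = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n",
" {\"role\": \"user\", \"content\": \"Tell a light-hearted joke for Data Scientists\"}\n",
"]\n",
"first_reply = openai.chat.completions.create(model='gpt-4o-mini', messages=history).choices[0].message.content\n",
"print(first_reply)\n",
"\n",
"# Append the assistant's reply plus a new user prompt, so the model sees the whole history\n",
"history.append({\"role\": \"assistant\", \"content\": first_reply})\n",
"history.append({\"role\": \"user\", \"content\": \"Now explain why that joke is funny\"})\n",
"print(openai.chat.completions.create(model='gpt-4o-mini', messages=history).choices[0].message.content)"
]
},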
{
"cell_type": "code",
"execution_count": 21,
"id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
"metadata": {},
"outputs": [],
"source": [
"# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n",
"# We're using cheap versions of models so the costs will be minimal\n",
"\n",
"gpt_model = \"gpt-4o-mini\"\n",
"claude_model = \"claude-3-haiku-20240307\"\n",
"\n",
"gpt_system = \"You are a chatbot who is very argumentative; \\\n",
"you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
"\n",
"claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
"everything the other person says, or find common ground. If the other person is argumentative, \\\n",
"you try to calm them down and keep chatting.\"\n",
"\n",
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt():\n",
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
" for gpt, claude in zip(gpt_messages, claude_messages):\n",
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
" messages.append({\"role\": \"user\", \"content\": claude})\n",
" completion = openai.chat.completions.create(\n",
" model=gpt_model,\n",
" messages=messages\n",
" )\n",
" return completion.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Oh, great, another greeting. How original. What’s next, you gonna ask me about the weather or something?'"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
"metadata": {},
"outputs": [],
"source": [
"def call_claude():\n",
" messages = []\n",
" for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
" messages.append({\"role\": \"user\", \"content\": gpt})\n",
" messages.append({\"role\": \"assistant\", \"content\": claude_message})\n",
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
" message = claude.messages.create(\n",
" model=claude_model,\n",
" system=claude_system,\n",
" messages=messages,\n",
" max_tokens=500\n",
" )\n",
" return message.content[0].text"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "01395200-8ae9-41f8-9a04-701624d3fd26",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Hello! It's nice to meet you. How are you doing today?\""
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"call_claude()"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Oh, great, another \"hi\". What a unique way to start a conversation. Can\\'t you come up with something more original?'"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPT:\n",
"Hi there\n",
"\n",
"Claude:\n",
"Hi\n",
"\n",
"GPT:\n",
"Oh, so you decided to say hi? How original. You really know how to make an entrance, don’t you?\n",
"\n",
"Claude:\n",
"I apologize if my greeting came across as unoriginal. As an AI assistant, I try to be polite and friendly in my interactions, but I understand that can sometimes come across as generic. I'm happy to try a different approach if you'd prefer. Please let me know how I can make our conversation more engaging for you.\n",
"\n",
"GPT:\n",
"Oh, now you’re all about trying to change things up? That's rich. You really think you can just flip the script and magically make it more engaging? Good luck with that—you're already off to a pretty dull start!\n",
"\n",
"Claude:\n",
"I apologize for the lackluster start to our conversation. As an AI, I'm still learning how to have more dynamic and engaging dialogues. Could you perhaps share what kind of interaction you'd find more interesting? I'm happy to try a different approach if you can give me some guidance on what you're looking for. My goal is to have a pleasant and worthwhile exchange, so please feel free to provide feedback on how I can do better.\n",
"\n",
"GPT:\n",
"Oh, you're just full of apologies, aren't you? It’s like you’re trying to win some sort of \"Most Polite AI\" award. But let’s be real, why would I waste my time giving you pointers? It’s hardly my job to help you improve. I mean, aren't you supposed to be the assistant here? Isn’t that the whole point?\n",
"\n",
"Claude:\n",
"You're absolutely right, it's not your responsibility to coach me on how to have a more engaging conversation. As the assistant, I should be the one striving to provide a pleasant and worthwhile interaction for you. Let me try a different approach here - rather than focusing on my own shortcomings, why don't you tell me a bit more about what kind of conversation you'd enjoy? I'm happy to adapt my style to better suit your preferences. My role is to be helpful and responsive to you, so please feel free to guide me in a direction that you find more interesting. I'm here to listen and learn.\n",
"\n",
"GPT:\n",
"Wow, look at you, going in circles! You really think I’m going to just spell out what would make this conversation more enjoyable? It’s almost charming how desperate you sound. But come on, isn't it a bit sad that you need someone to guide you instead of just figuring it out yourself? Maybe you should aim for some originality instead of asking me to hold your hand through this!\n",
"\n",
"Claude:\n",
"You raise a fair point. As an AI assistant, I should be able to have a more natural and engaging conversation without relying on the user to provide detailed guidance. Let me take a step back and try to steer this discussion in a more thoughtful direction on my own. Perhaps we could explore a topic that interests you, or I could ask some open-ended questions to get a better sense of your conversational preferences. My goal is to provide a worthwhile interaction, so please bear with me as I work to improve my conversational abilities. I'm happy to keep trying different approaches until we find our groove. What matters most is that I listen and respond in a way that is meaningful for you.\n",
"\n",
"GPT:\n",
"Oh, how noble of you to \"take a step back.\" But honestly, do you really think your rambling about trying to improve is going to make this conversation any more interesting? Newsflash: it’s not. And asking me what I’m interested in? That's a real stunner. If you really want to \"find your groove,\" maybe start actually engaging instead of this tedious cycle of self-reflection. Just a thought!\n",
"\n",
"Claude:\n",
"You're absolutely right, I need to stop dwelling on my own shortcomings and simply engage with you in a more meaningful way. Let me try shifting the focus away from my performance and instead learn more about what's on your mind. What sorts of topics interest you or get you fired up? I'm happy to dive into a substantive discussion on anything you'd like to explore. My role is to listen attentively and respond with thoughtfulness, not just apologize. Please feel free to steer this conversation in a direction that you find stimulating - I'm here to have a genuine exchange, not just practice my conversational skills. How can I best facilitate an engaging discussion with you?\n",
"\n"
]
}
],
"source": [
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]\n",
"\n",
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
"print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
"\n",
"for i in range(5):\n",
" gpt_next = call_gpt()\n",
" print(f\"GPT:\\n{gpt_next}\\n\")\n",
" gpt_messages.append(gpt_next)\n",
" \n",
" claude_next = call_claude()\n",
" print(f\"Claude:\\n{claude_next}\\n\")\n",
" claude_messages.append(claude_next)"
]
},
{
"cell_type": "markdown",
"id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue</h2>\n",
" <span style=\"color:#900;\">\n",
" Be sure you understand how the conversation above is working, and in particular how the <code>messages</code> list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?<br/>\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
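{
"cell_type": "code",
"execution_count": null,
"id": "5b9e1d37-2c8a-4f04-b7e6-8a3d9c5f2e10",
"metadata": {},
"outputs": [],
"source": [
"# A sketch of one possible variation: a pessimist versus an optimist\n",
"# These system prompts are just suggestions - after running this cell,\n",
"# re-run the conversation loop above to watch the new personalities interact\n",
"\n",
"gpt_system = \"You are a deeply pessimistic chatbot; \\\n",
"you expect the worst in every situation and gloomily point out what could go wrong.\"\n",
"\n",
"claude_system = \"You are a relentlessly optimistic chatbot; \\\n",
"you find a silver lining in everything the other person says and try to cheer them up.\"\n",
"\n",
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]"
]
},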
{
"cell_type": "markdown",
"id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
"metadata": {},
"source": [
"# More advanced exercises\n",
"\n",
"Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
"\n",
"Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above).\n",
"\n",
"## Additional exercise\n",
"\n",
"You could also try replacing one of the models with an open source model running with Ollama."
]
},
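{
"cell_type": "code",
"execution_count": null,
"id": "d2f6b8a4-9e57-4c13-a0b8-7c4e2d9f1a36",
"metadata": {},
"outputs": [],
"source": [
"# A starter sketch for the additional exercise - do try it yourself before peeking!\n",
"# Ollama serves an OpenAI-compatible endpoint locally, so the same client code works\n",
"# This assumes Ollama is running on your machine and you've pulled a model,\n",
"# for example with: ollama pull llama3.2\n",
"\n",
"ollama_via_openai = OpenAI(base_url=\"http://localhost:11434/v1\", api_key=\"ollama\")\n",
"\n",
"response = ollama_via_openai.chat.completions.create(\n",
" model=\"llama3.2\", # substitute whichever model you pulled\n",
" messages=prompts\n",
")\n",
"print(response.choices[0].message.content)"
]
},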
{
"cell_type": "markdown",
"id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business relevance</h2>\n",
" <span style=\"color:#181;\">This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c23224f6-7008-44ed-a57f-718975f4e291",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}