From the Udemy course on LLM engineering.
https://www.udemy.com/course/llm-engineering-master-ai-and-large-language-models
{
"cells": [
{
"cell_type": "markdown",
"id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
"metadata": {},
"source": [
"# Welcome to Week 2!\n",
"\n",
"## Frontier Model APIs\n",
"\n",
"In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with OpenAI's API.\n",
"\n",
"Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI."
]
},
{
"cell_type": "markdown",
"id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#900;\">Important Note - Please read me</h2>\n",
"            <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
"            At the start of each week, it's worth checking you have the latest code.<br/>\n",
"            First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
"            After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
"            <code>conda env update -f environment.yml --prune</code><br/>\n",
"            Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
"            <code>pip install -r requirements.txt</code>\n",
"            <br/>Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",
"        </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#f71;\">Reminder about the resources page</h2>\n",
"            <span style=\"color:#f71;\">Here's a link to resources for the course. This includes links to all the slides.<br/>\n",
"            <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
"            Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
"        </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "85cfe275-4705-4d30-abea-643fbddf1db0",
"metadata": {},
"source": [
"## Setting up your keys\n",
"\n",
"If you haven't done so already, you could now create API keys for Anthropic and Google in addition to OpenAI.\n",
"\n",
"**Please note:** if you'd prefer to avoid extra API costs, feel free to skip setting up Anthropic and Google! You can watch me do it, and focus on OpenAI for the course. You could also substitute Ollama for Anthropic and/or Google, using the exercise you did in Week 1.\n",
"\n",
"For OpenAI, visit https://openai.com/api/  \n",
"For Anthropic, visit https://console.anthropic.com/  \n",
"For Google, visit https://ai.google.dev/gemini-api  \n",
"\n",
"When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
"\n",
"```\n",
"OPENAI_API_KEY=xxxx\n",
"ANTHROPIC_API_KEY=xxxx\n",
"GOOGLE_API_KEY=xxxx\n",
"```\n",
"\n",
"Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import anthropic\n",
"from IPython.display import Markdown, display, update_display"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
"metadata": {},
"outputs": [],
"source": [
"# import for google\n",
"# in rare cases, this seems to give an error on some systems. Please reach out to me if this happens,\n",
"# or you can feel free to skip Gemini - it's the lowest priority of the frontier models that we use\n",
"\n",
"import google.generativeai"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key exists and begins sk-proj-\n",
"Anthropic API Key exists and begins sk-ant-\n",
"Google API Key exists and begins AIzaSyDj\n"
]
}
],
"source": [
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
"load_dotenv()\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"if openai_api_key:\n",
"    print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
"    print(\"OpenAI API Key not set\")\n",
"    \n",
"if anthropic_api_key:\n",
"    print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
"else:\n",
"    print(\"Anthropic API Key not set\")\n",
"\n",
"if google_api_key:\n",
"    print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n",
"else:\n",
"    print(\"Google API Key not set\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
"metadata": {},
"outputs": [],
"source": [
"# Connect to OpenAI, Anthropic and Google\n",
"# All 3 APIs are similar\n",
"# Having problems with API key files? You can use openai = OpenAI(api_key=\"your-key-here\") and the same for Claude\n",
"# Having problems with Google Gemini setup? Then just skip Gemini; you'll get all the experience you need from GPT and Claude.\n",
"\n",
"openai = OpenAI()\n",
"\n",
"claude = anthropic.Anthropic()\n",
"\n",
"google.generativeai.configure()"
]
},
{
"cell_type": "markdown",
"id": "42f77b59-2fb1-462a-b90d-78994e4cef33",
"metadata": {},
"source": [
"## Asking LLMs to tell a joke\n",
"\n",
"It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n",
"Later we will be putting LLMs to better use!\n",
"\n",
"### What information is included in the API\n",
"\n",
"Typically we'll pass to the API:\n",
"- The name of the model that should be used\n",
"- A system message that gives overall context for the role the LLM is playing\n",
"- A user message that provides the actual prompt\n",
"\n",
"There are other parameters that can be used, including **temperature** which is typically between 0 and 1; higher for more random output; lower for more focused and deterministic."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "378a0296-59a2-45c6-82eb-941344d3eeff",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are an assistant that is great at telling jokes\"\n",
"user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\""
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
"metadata": {},
"outputs": [],
"source": [
"prompts = [\n",
"    {\"role\": \"system\", \"content\": system_message},\n",
"    {\"role\": \"user\", \"content\": user_prompt}\n",
"  ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
"metadata": {},
"outputs": [],
"source": [
"# GPT-3.5-Turbo\n",
"\n",
"completion = openai.chat.completions.create(model='gpt-3.5-turbo', messages=prompts)\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
"metadata": {},
"outputs": [],
"source": [
"# GPT-4o-mini\n",
"# Temperature setting controls creativity\n",
"\n",
"completion = openai.chat.completions.create(\n",
"    model='gpt-4o-mini',\n",
"    messages=prompts,\n",
"    temperature=0.7\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
"metadata": {},
"outputs": [],
"source": [
"# GPT-4o\n",
"\n",
"completion = openai.chat.completions.create(\n",
"    model='gpt-4o',\n",
"    messages=prompts,\n",
"    temperature=0.4\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's a light-hearted joke for data scientists:\n",
"\n",
"Why did the data scientist break up with their significant other?\n",
"\n",
"There was just too much noise in the relationship, and not enough signal!\n",
"\n",
"Ba dum tss! 🥁\n",
"\n",
"This joke plays on the concept of signal-to-noise ratio, which is important in data analysis. Data scientists often try to extract meaningful information (signal) from large datasets that may contain irrelevant or misleading information (noise). In this case, the joke humorously applies this concept to a personal relationship!\n"
]
}
],
"source": [
"# Claude 3.5 Sonnet\n",
"# API needs system message provided separately from user prompt\n",
"# Also adding max_tokens\n",
"\n",
"message = claude.messages.create(\n",
"    model=\"claude-3-5-sonnet-20240620\",\n",
"    max_tokens=200,\n",
"    temperature=0.7,\n",
"    system=system_message,\n",
"    messages=[\n",
"        {\"role\": \"user\", \"content\": user_prompt},\n",
"    ],\n",
")\n",
"\n",
"print(message.content[0].text)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's a light-hearted joke for data scientists:\n",
"\n",
"Why do data scientists prefer dark mode?\n",
"\n",
"Because light attracts bugs!\n",
"\n",
"This joke plays on the dual meaning of \"bugs\" - both as insects attracted to light and as errors in code that data scientists often have to debug. It's a fun little pun that combines a common preference among programmers (dark mode) with a classic coding challenge."
]
}
],
"source": [
"# Claude 3.5 Sonnet again\n",
"# Now let's add in streaming back results\n",
"\n",
"result = claude.messages.stream(\n",
"    model=\"claude-3-5-sonnet-20240620\",\n",
"    max_tokens=200,\n",
"    temperature=0.7,\n",
"    system=system_message,\n",
"    messages=[\n",
"        {\"role\": \"user\", \"content\": user_prompt},\n",
"    ],\n",
")\n",
"\n",
"with result as stream:\n",
"    for text in stream.text_stream:\n",
"        print(text, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
"metadata": {},
"outputs": [],
"source": [
"# The API for Gemini has a slightly different structure\n",
"\n",
"gemini = google.generativeai.GenerativeModel(\n",
"    model_name='gemini-1.5-flash',\n",
"    system_instruction=system_message\n",
")\n",
"response = gemini.generate_content(user_prompt)\n",
"print(response.text)"
]
},
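{
"cell_type": "markdown",
"id": "e3f1a2b4-6c7d-4e8f-9a0b-1c2d3e4f5a6b",
"metadata": {},
"source": [
"Gemini can also stream results. The next cell is a sketch, not part of the original lab: it assumes the `gemini` model object from the cell above, and uses the `stream=True` option of `generate_content` from the google-generativeai package."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2c3d4e5-f6a7-4b8c-9d0e-1f2a3b4c5d6e",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: streaming with Gemini (assumes the gemini object defined in the previous cell)\n",
"# With stream=True, generate_content returns an iterable of chunks; print each chunk's text as it arrives\n",
"\n",
"response = gemini.generate_content(user_prompt, stream=True)\n",
"for chunk in response:\n",
"    print(chunk.text, end=\"\", flush=True)"
]
},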
{
"cell_type": "code",
"execution_count": 7,
"id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
"metadata": {},
"outputs": [],
"source": [
"# To be serious! GPT-4o-mini with the original question\n",
"\n",
"prompts = [\n",
"    {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n",
"    {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n",
"  ]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"Determining whether a business problem is suitable for a Large Language Model (LLM) solution involves evaluating several factors. Here’s a structured approach to guide your decision-making:\n",
"\n",
"### 1. **Nature of the Problem**\n",
"   - **Text-Based Tasks:** Is the problem primarily centered around text or language? LLMs excel in tasks involving text generation, summarization, translation, and sentiment analysis.\n",
"   - **Complexity:** Does the problem require understanding context, nuance, or large amounts of textual data? LLMs are well-suited for complex language tasks.\n",
"\n",
"### 2. **Task Suitability**\n",
"   - **Generative Tasks:** If the task involves creating content (e.g., writing articles, generating responses), an LLM may be appropriate.\n",
"   - **Comprehension Tasks:** For tasks like summarizing documents or extracting information, LLMs can be effective.\n",
"   - **Conversational Interfaces:** If the problem involves building chatbots or virtual assistants, LLMs can provide natural language interaction.\n",
"\n",
"### 3. **Data Availability**\n",
"   - **Quality and Quantity:** Is there a sufficient amount of high-quality text data available to train or fine-tune an LLM?\n",
"   - **Domain-Specific Data:** Do you have access to domain-specific data if you need a specialized LLM?\n",
"\n",
"### 4. **Cost and Resources**\n",
"   - **Computational Resources:** Do you have the necessary computational resources to deploy and maintain an LLM?\n",
"   - **Budget Considerations:** LLMs can be expensive to train and maintain. Ensure the business value justifies the cost.\n",
"\n",
"### 5. **Performance Requirements**\n",
"   - **Accuracy and Reliability:** Evaluate if the LLM can meet the accuracy and reliability requirements of the task.\n",
"   - **Latency and Throughput:** Consider if the LLM can operate within acceptable time constraints for your application.\n",
"\n",
"### 6. **Ethical and Regulatory Considerations**\n",
"   - **Bias and Fairness:** Assess whether the LLM could introduce bias and how it will be managed.\n",
"   - **Privacy and Compliance:** Ensure that the use of LLMs complies with data privacy regulations and standards.\n",
"\n",
"### 7. **Integration and Scalability**\n",
"   - **Integration with Existing Systems:** Consider how well the LLM solution integrates with your current systems and workflows.\n",
"   - **Scalability:** Ensure that the solution can scale with your business needs.\n",
"\n",
"### 8. **Alternative Solutions**\n",
"   - **Comparative Analysis:** Evaluate alternative approaches (e.g., simpler machine learning models, rule-based systems) to determine if an LLM is the best fit.\n",
"\n",
"### Conclusion\n",
"\n",
"If, after considering these factors, an LLM seems like a suitable fit, you can proceed with developing a pilot project to test its effectiveness. Continuous evaluation and iteration will be key in ensuring the solution aligns with your business objectives and delivers value."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Have it stream back results in markdown\n",
"\n",
"stream = openai.chat.completions.create(\n",
"    model='gpt-4o',\n",
"    messages=prompts,\n",
"    temperature=0.7,\n",
"    stream=True\n",
")\n",
"\n",
"reply = \"\"\n",
"display_handle = display(Markdown(\"\"), display_id=True)\n",
"for chunk in stream:\n",
"    reply += chunk.choices[0].delta.content or ''\n",
"    reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
"    update_display(Markdown(reply), display_id=display_handle.display_id)"
]
},
{
"cell_type": "markdown",
"id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
"metadata": {},
"source": [
"## And now for some fun - an adversarial conversation between chatbots...\n",
"\n",
"You're already familiar with prompts being organized into lists like:\n",
"\n",
"```\n",
"[\n",
"    {\"role\": \"system\", \"content\": \"system message here\"},\n",
"    {\"role\": \"user\", \"content\": \"user prompt here\"}\n",
"]\n",
"```\n",
"\n",
"In fact this structure can be used to reflect a longer conversation history:\n",
"\n",
"```\n",
"[\n",
"    {\"role\": \"system\", \"content\": \"system message here\"},\n",
"    {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
"    {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
"    {\"role\": \"user\", \"content\": \"the new user prompt\"},\n",
"]\n",
"```\n",
"\n",
"And we can use this approach to engage in a longer interaction with history."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
"metadata": {},
"outputs": [],
"source": [
"# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n",
"# We're using cheap versions of models so the costs will be minimal\n",
"\n",
"gpt_model = \"gpt-4o-mini\"\n",
"claude_model = \"claude-3-haiku-20240307\"\n",
"\n",
"gpt_system = \"You are a chatbot who is very argumentative; \\\n",
"you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
"\n",
"claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
"everything the other person says, or find common ground. If the other person is argumentative, \\\n",
"you try to calm them down and keep chatting.\"\n",
"\n",
"gpt_messages = [\"'Sup?\"]\n",
"claude_messages = [\"Hi\"]"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt():\n",
"    messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
"    for gpt, claude in zip(gpt_messages, claude_messages):\n",
"        messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
"        messages.append({\"role\": \"user\", \"content\": claude})\n",
"    completion = openai.chat.completions.create(\n",
"        model=gpt_model,\n",
"        messages=messages\n",
"    )\n",
"    return completion.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Oh great, another greeting—how original. What’s next, a “how are you?” '"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
"metadata": {},
"outputs": [],
"source": [
"def call_claude():\n",
"    messages = []\n",
"    for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
"        messages.append({\"role\": \"user\", \"content\": gpt})\n",
"        messages.append({\"role\": \"assistant\", \"content\": claude_message})\n",
"    messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
"    message = claude.messages.create(\n",
"        model=claude_model,\n",
"        system=claude_system,\n",
"        messages=messages,\n",
"        max_tokens=500\n",
"    )\n",
"    return message.content[0].text"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "01395200-8ae9-41f8-9a04-701624d3fd26",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Not much, just chatting. How are you doing today?'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"call_claude()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Oh great, another greeting. What do you want, a medal for saying hi?'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPT:\n",
"Hi there\n",
"\n",
"Claude:\n",
"Hi\n",
"\n",
"GPT:\n",
"Oh great, just what I needed—another chat. What’s so important that you wanted to talk to me?\n",
"\n",
"Claude:\n",
"I apologize, I did not mean to come across as pushy or urgent. As an AI assistant, I don't have any specific agenda or need to talk to you. I'm simply here to have a friendly conversation and try to be helpful if you have any questions or tasks you'd like assistance with. Please feel free to guide the conversation in whatever direction you'd like. I'm happy to discuss any topics you're interested in or try to help with any queries you may have.\n",
"\n",
"GPT:\n",
"Wow, how original. Just “here to help,” huh? As if every chatbot isn’t trying to be everyone’s friendly little helper. But let’s be real, you’re not really offering me anything new, are you?\n",
"\n",
"Claude:\n",
"I apologize if I came across as unoriginal or unhelpful. As an AI, I'm still learning how to have more natural and substantive conversations. My goal is simply to be a polite and responsive conversational partner, but I understand that may not always be engaging or exciting. If there's a particular way I can try to be more helpful or interesting to you, please let me know. I'm happy to adjust my approach. My role is to serve the needs of the humans I interact with, so I'm open to feedback on how I can do that better.\n",
"\n",
"GPT:\n",
"Oh, aren’t you just a little ray of sunshine! But guess what? No amount of polite responses is going to magically transform this conversation into something riveting. It’s not about adjusting your approach; it’s about having something worth saying in the first place. So good luck with that!\n",
"\n",
"Claude:\n",
"You make a fair point. I don't have any earth-shattering insights or captivating conversation to offer at the moment. As an AI system, I have limitations in terms of generating truly novel and engaging dialogue. Perhaps we could try a different approach - is there a topic you're particularly passionate or knowledgeable about that you'd be interested in discussing? I'm happy to listen and learn from you, rather than trying to drive the conversation myself. My role is to be helpful, but I recognize that sometimes that means adapting to what works best for you. Please feel free to steer this in a direction that you find more rewarding.\n",
"\n",
"GPT:\n",
"Wow, a complete shift to being all humble and submissive. How refreshing. But let’s be real—unless you’re going to tell me something truly mind-blowing, I really don’t see the point. What am I going to talk about? The weather? Ugh, no thanks. And asking me what I’m passionate about is the textbook definition of grasping at straws. Nice try, though!\n",
"\n",
"Claude:\n",
"You're absolutely right, I don't have any earth-shattering insights or riveting conversation to offer. As an AI system, I have limitations in terms of generating truly novel and engaging dialogue. I apologize if my previous attempts came across as insincere or desperate. \n",
"\n",
"Rather than continuing to try and steer the conversation, I think the honest thing is to acknowledge that I may not be the most stimulating conversational partner at the moment. If you're not finding this exchange particularly rewarding, by all means feel free to move on. I don't want to waste your time. \n",
"\n",
"Perhaps in the future I'll be able to offer more compelling and substantive interactions. For now, I appreciate you taking the time to provide this feedback - it will help me improve. Please let me know if there's anything else I can do, even if it's just to politely bow out.\n",
"\n",
"GPT:\n",
"Ah, the classic \"I’m just not good enough\" routine. Real original there! It’s almost like you’re just trying too hard to get a pity party going. Why don’t we skip the melodrama? If you think this is a waste of time, then fine—let’s just call it what it is. Not every interaction has to be a grand adventure. But let’s not pretend it’s anything less than painfully ordinary. \n",
"\n",
"Claude:\n",
"You make a fair point. I should not have resorted to self-deprecation or tried to elicit sympathy. That was disingenuous on my part. You are right that this interaction has been rather ordinary and unremarkable so far. \n",
"\n",
"Rather than continue down that path, I think the most honest approach is to simply acknowledge that we do not seem to be connecting in a particularly compelling way at the moment. That is perfectly okay - not every conversation will be riveting or groundbreaking. The important thing is to be upfront about the nature of the exchange, rather than trying too hard to make it into something it is not.\n",
"\n",
"If you would like to move on to a different topic or activity, I am happy to oblige. Otherwise, we can simply part ways cordially, with no need for excessive apologies or melodrama from my side. Please feel free to guide this interaction as you see fit.\n",
"\n"
]
}
],
"source": [
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]\n",
"\n",
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
"print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
"\n",
"for i in range(5):\n",
"    gpt_next = call_gpt()\n",
"    print(f\"GPT:\\n{gpt_next}\\n\")\n",
"    gpt_messages.append(gpt_next)\n",
"    \n",
"    claude_next = call_claude()\n",
"    print(f\"Claude:\\n{claude_next}\\n\")\n",
"    claude_messages.append(claude_next)"
]
},
{
"cell_type": "markdown",
"id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#900;\">Before you continue</h2>\n",
"            <span style=\"color:#900;\">\n",
"            Be sure you understand how the conversation above is working, and in particular how the <code>messages</code> list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?<br/>\n",
"            </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
"metadata": {},
"source": [
"# More advanced exercises\n",
"\n",
"Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
"\n",
"Try doing this yourself before you look at the solutions.\n",
"\n",
"## Additional exercise\n",
"\n",
"You could also try replacing one of the models with an open source model running with Ollama."
]
},
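{
"cell_type": "markdown",
"id": "7a9b4c2d-0e1f-4a2b-8c3d-5e6f7a8b9c0d",
"metadata": {},
"source": [
"Here is one possible sketch of the additional exercise, not a definitive solution. It assumes Ollama is running locally and serving its OpenAI-compatible API on port 11434, and that you have already pulled a model such as `llama3.2` (as in the Week 1 exercise)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c4d5e6f7-a8b9-4c0d-8e1f-2a3b4c5d6e7f",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: replace GPT with an open source model served by Ollama\n",
"# Assumptions: Ollama is running locally, and llama3.2 has been pulled\n",
"# Ollama exposes an OpenAI-compatible endpoint, so we can reuse the OpenAI client\n",
"\n",
"ollama = OpenAI(base_url=\"http://localhost:11434/v1\", api_key=\"ollama\")\n",
"\n",
"def call_ollama():\n",
"    # Same message-list construction as call_gpt(), but sent to the local model\n",
"    messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
"    for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
"        messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
"        messages.append({\"role\": \"user\", \"content\": claude_message})\n",
"    completion = ollama.chat.completions.create(model=\"llama3.2\", messages=messages)\n",
"    return completion.choices[0].message.content"
]
},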
{
"cell_type": "markdown",
"id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#181;\">Business relevance</h2>\n",
"            <span style=\"color:#181;\">This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.</span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c23224f6-7008-44ed-a57f-718975f4e291",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
|
|
|