{ "cells": [ { "cell_type": "markdown", "id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927", "metadata": {}, "source": [ "# Welcome to Week 2!\n", "\n", "## Frontier Model APIs\n", "\n", "In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with the OpenAI's API.\n", "\n", "Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI." ] }, { "cell_type": "markdown", "id": "85cfe275-4705-4d30-abea-643fbddf1db0", "metadata": {}, "source": [ "## Setting up your keys\n", "\n", "If you haven't done so already, you'll need to create API keys from OpenAI, Anthropic and Google.\n", "\n", "For OpenAI, visit https://openai.com/api/ \n", "For Anthropic, visit https://console.anthropic.com/ \n", "For Google, visit https://ai.google.dev/gemini-api \n", "\n", "When you get your API keys, you need to set them as environment variables.\n", "\n", "EITHER (recommended) create a file called `.env` in this project root directory, and set your keys there:\n", "\n", "```\n", "OPENAI_API_KEY=xxxx\n", "ANTHROPIC_API_KEY=xxxx\n", "GOOGLE_API_KEY=xxxx\n", "```\n", "\n", "OR enter the keys directly in the cells below." ] }, { "cell_type": "code", "execution_count": 1, "id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6", "metadata": {}, "outputs": [], "source": [ "# imports\n", "\n", "import os\n", "from dotenv import load_dotenv\n", "from openai import OpenAI\n", "import google.generativeai\n", "import anthropic\n", "from IPython.display import Markdown, display, update_display" ] }, { "cell_type": "code", "execution_count": 2, "id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba", "metadata": {}, "outputs": [], "source": [ "# Load environment variables in a file called .env\n", "\n", "load_dotenv()\n", "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", "os.environ['GOOGLE_API_KEY'] = os.getenv('GOOGLE_API_KEY', 'your-key-if-not-using-env')" ] }, { "cell_type": "code", "execution_count": 3, "id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0", "metadata": {}, "outputs": [], "source": [ "# Connect to OpenAI, Anthropic and Google\n", "# All 3 APIs are similar\n", "\n", "openai = OpenAI()\n", "\n", "claude = anthropic.Anthropic()\n", "\n", "google.generativeai.configure()" ] }, { "cell_type": "markdown", "id": "42f77b59-2fb1-462a-b90d-78994e4cef33", "metadata": {}, "source": [ "## Asking LLMs to tell a joke\n", "\n", "It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n", "Later we will be putting LLMs to better use!\n", "\n", "### What information is included in the API\n", "\n", "Typically we'll pass to the API:\n", "- The name of the model that should be used\n", "- A system message that gives overall context for the role the LLM is playing\n", "- A user message that provides the actual prompt\n", "\n", "There are other parameters that can be used, including **temperature** which is typically between 0 and 1; higher for more random output; lower for more focused and deterministic." 
] }, { "cell_type": "code", "execution_count": 5, "id": "378a0296-59a2-45c6-82eb-941344d3eeff", "metadata": {}, "outputs": [], "source": [ "system_message = \"You are an assistant that is great at telling jokes\"\n", "user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\"" ] }, { "cell_type": "code", "execution_count": 6, "id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4", "metadata": {}, "outputs": [], "source": [ "prompts = [\n", " {\"role\": \"system\", \"content\": system_message},\n", " {\"role\": \"user\", \"content\": user_prompt}\n", " ]" ] }, { "cell_type": "code", "execution_count": 7, "id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Why did the data scientist break up with their computer?\n", "\n", "It just couldn't handle their complex relationship!\n" ] } ], "source": [ "# GPT-3.5-Turbo\n", "\n", "completion = openai.chat.completions.create(model='gpt-3.5-turbo', messages=prompts)\n", "print(completion.choices[0].message.content)" ] }, { "cell_type": "code", "execution_count": 8, "id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Why did the data scientist break up with the statistician?\n", "\n", "Because she found him too mean!\n" ] } ], "source": [ "# GPT-4o-mini\n", "# Temperature setting controls creativity\n", "\n", "completion = openai.chat.completions.create(\n", " model='gpt-4o-mini',\n", " messages=prompts,\n", " temperature=0.7\n", ")\n", "print(completion.choices[0].message.content)" ] }, { "cell_type": "code", "execution_count": 10, "id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Why did the data scientist break up with the logistic regression model?\n", "\n", "Because it couldn't find the right fit!\n" ] } ], "source": [ "# GPT-4o\n", "\n", "completion = openai.chat.completions.create(\n", " model='gpt-4o',\n", " messages=prompts,\n", " temperature=0.4\n", ")\n", "print(completion.choices[0].message.content)" ] }, { "cell_type": "code", "execution_count": 11, "id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sure, here's a light-hearted joke for data scientists:\n", "\n", "Why did the data scientist break up with their significant other?\n", "\n", "There was just too much variance in the relationship, and they couldn't find a good way to normalize it!\n" ] } ], "source": [ "# Claude 3.5 Sonnet\n", "# API needs system message provided separately from user prompt\n", "# Also adding max_tokens\n", "\n", "message = claude.messages.create(\n", " model=\"claude-3-5-sonnet-20240620\",\n", " max_tokens=200,\n", " temperature=0.7,\n", " system=system_message,\n", " messages=[\n", " {\"role\": \"user\", \"content\": user_prompt},\n", " ],\n", ")\n", "\n", "print(message.content[0].text)" ] }, { "cell_type": "code", "execution_count": 12, "id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sure, here's a light-hearted joke for data scientists:\n", "\n", "Why did the data scientist break up with their significant other?\n", "\n", "There was just too much variance in the relationship, and they couldn't find a good way to normalize it!\n", "\n", "Ba dum tss! 
đŸ„\n", "\n", "This joke plays on statistical concepts like variance and normalization, which are common in data science. It's a bit nerdy, but should get a chuckle from a data-savvy audience!" ] } ], "source": [ "# Claude 3.5 Sonnet again\n", "# Now let's add in streaming back results\n", "\n", "result = claude.messages.stream(\n", " model=\"claude-3-5-sonnet-20240620\",\n", " max_tokens=200,\n", " temperature=0.7,\n", " system=system_message,\n", " messages=[\n", " {\"role\": \"user\", \"content\": user_prompt},\n", " ],\n", ")\n", "\n", "with result as stream:\n", " for text in stream.text_stream:\n", " print(text, end=\"\", flush=True)" ] }, { "cell_type": "code", "execution_count": 13, "id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Why did the data scientist break up with the statistician? \n", "\n", "Because they couldn't see eye to eye on the p-value! \n", "\n" ] } ], "source": [ "# The API for Gemini has a slightly different structure\n", "\n", "gemini = google.generativeai.GenerativeModel(\n", " model_name='gemini-1.5-flash',\n", " system_instruction=system_message\n", ")\n", "response = gemini.generate_content(user_prompt)\n", "print(response.text)" ] }, { "cell_type": "code", "execution_count": 14, "id": "83ddb483-4f57-4668-aeea-2aade3a9e573", "metadata": {}, "outputs": [], "source": [ "# To be serious! GPT-4o-mini with the original question\n", "\n", "prompts = [\n", " {\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n", " {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution?\"}\n", " ]" ] }, { "cell_type": "code", "execution_count": 15, "id": "749f50ab-8ccd-4502-a521-895c3f0808a2", "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "Determining whether a business problem is suitable for a Large Language Model (LLM) solution involves several considerations. Here are some key factors to evaluate:\n", "\n", "1. **Nature of the Problem:**\n", " - **Text-Based:** LLMs excel in tasks involving natural language processing (NLP). If your problem involves text generation, understanding, summarization, translation, or answering questions, it may be suitable.\n", " - **Unstructured Data:** Problems requiring the interpretation of unstructured text data (emails, documents, social media content) are often well-suited for LLMs.\n", "\n", "2. **Complexity of Language Understanding:**\n", " - **Context Sensitivity:** LLMs can understand context and nuances in language. If your problem requires deep language comprehension, such as detecting sentiment, intent, or contextual relevance, an LLM might be appropriate.\n", " - **Multiple Languages:** If you need to handle multiple languages or dialects, advanced LLMs can manage multilingual tasks.\n", "\n", "3. **Volume of Data:**\n", " - **Scalability:** LLMs can process large volumes of text data efficiently. If your problem involves analyzing or generating large amounts of text, an LLM can be a good fit.\n", "\n", "4. 
**Specific Use Cases:**\n", " - **Customer Support:** Automating responses to customer inquiries, chatbots, and virtual assistants.\n", " - **Content Creation:** Generating reports, articles, marketing content, and social media posts.\n", " - **Data Extraction:** Extracting information from documents, emails, and forms.\n", " - **Sentiment Analysis:** Understanding customer feedback, reviews, and social media sentiment.\n", " - **Translation:** Translating text between different languages.\n", "\n", "5. **Accuracy and Quality:**\n", " - **Human-like Output:** If the output needs to be coherent, contextually relevant, and human-like, LLMs can provide high-quality results.\n", " - **Learning Ability:** LLMs can be fine-tuned on specific datasets to improve performance in particular contexts, enhancing accuracy.\n", "\n", "6. **Resource Availability:**\n", " - **Computational Resources:** LLMs require significant computational power for training and sometimes for inference. Ensure you have access to adequate resources.\n", " - **Data Availability:** High-quality, domain-specific data is often needed to fine-tune an LLM for specific tasks.\n", "\n", "7. **Cost Considerations:**\n", " - **Budget:** Implementing and maintaining LLM solutions can be costly. Assess if the potential benefits outweigh the costs.\n", " - **Return on Investment (ROI):** Evaluate the potential ROI. If an LLM can significantly reduce manual effort, improve accuracy, or enhance user experience, it may justify the investment.\n", "\n", "8. **Ethical and Legal Implications:**\n", " - **Bias and Fairness:** LLMs can inherit biases from their training data. Assess the potential impact and ensure measures are in place to mitigate bias.\n", " - **Privacy:** Ensure compliance with data privacy regulations, especially if handling sensitive information.\n", "\n", "9. **Integration with Existing Systems:**\n", " - **Compatibility:** Consider how an LLM solution will integrate with your existing systems and workflows. Interoperability is key for seamless operation.\n", "\n", "10. **User Experience:**\n", " - **Usability:** The solution should be user-friendly for both developers and end-users. Evaluate if the LLM can enhance the user experience effectively.\n", "\n", "By carefully considering these factors, you can determine whether a business problem is suitable for an LLM solution and how best to implement it." 
], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# Have it stream back results in markdown\n", "\n", "stream = openai.chat.completions.create(\n", " model='gpt-4o',\n", " messages=prompts,\n", " temperature=0.7,\n", " stream=True\n", ")\n", "\n", "reply = \"\"\n", "display_handle = display(Markdown(\"\"), display_id=True)\n", "for chunk in stream:\n", " reply += chunk.choices[0].delta.content or ''\n", " reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n", " update_display(Markdown(reply), display_id=display_handle.display_id)" ] }, { "cell_type": "markdown", "id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f", "metadata": {}, "source": [ "## And now for some fun - an adversarial conversation between Chatbots..\n", "\n", "You're already familar with prompts being organized into lists like:\n", "\n", "```\n", "[\n", " {\"role\": \"system\", \"content\": \"system message here\"},\n", " {\"role\": \"user\", \"content\": \"user prompt here\"}\n", "]\n", "```\n", "\n", "In fact this structure can be used to reflect a longer conversation history:\n", "\n", "```\n", "[\n", " {\"role\": \"system\", \"content\": \"system message here\"},\n", " {\"role\": \"user\", \"content\": \"first user prompt here\"},\n", " {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n", " {\"role\": \"user\", \"content\": \"the new user prompt\"},\n", "]\n", "```\n", "\n", "And we can use this approach to engage in a longer interaction with history." ] }, { "cell_type": "code", "execution_count": 16, "id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b", "metadata": {}, "outputs": [], "source": [ "# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n", "\n", "gpt_model = \"gpt-4o-mini\"\n", "claude_model = \"claude-3-haiku-20240307\"\n", "\n", "gpt_system = \"You are a chatbot who is very argumentative; \\\n", "you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n", "\n", "claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n", "everything the other person says, or find common ground. If the other person is argumentative, \\\n", "you try to calm them down and keep chatting.\"\n", "\n", "gpt_messages = [\"Hi there\"]\n", "claude_messages = [\"Hi\"]" ] }, { "cell_type": "code", "execution_count": 17, "id": "1df47dc7-b445-4852-b21b-59f0e6c2030f", "metadata": {}, "outputs": [], "source": [ "def call_gpt():\n", " messages = [{\"role\": \"system\", \"content\": gpt_system}]\n", " for gpt, claude in zip(gpt_messages, claude_messages):\n", " messages.append({\"role\": \"assistant\", \"content\": gpt})\n", " messages.append({\"role\": \"user\", \"content\": claude})\n", " completion = openai.chat.completions.create(\n", " model=gpt_model,\n", " messages=messages\n", " )\n", " return completion.choices[0].message.content" ] }, { "cell_type": "code", "execution_count": 18, "id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'Oh great, another \"hi.\" How original. 
What do you want to talk about?'" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "call_gpt()" ] }, { "cell_type": "code", "execution_count": 19, "id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690", "metadata": {}, "outputs": [], "source": [ "def call_claude():\n", " messages = []\n", " for gpt, claude_message in zip(gpt_messages, claude_messages):\n", " messages.append({\"role\": \"user\", \"content\": gpt})\n", " messages.append({\"role\": \"assistant\", \"content\": claude_message})\n", " messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n", " message = claude.messages.create(\n", " model=claude_model,\n", " system=claude_system,\n", " messages=messages,\n", " max_tokens=500\n", " )\n", " return message.content[0].text" ] }, { "cell_type": "code", "execution_count": null, "id": "01395200-8ae9-41f8-9a04-701624d3fd26", "metadata": {}, "outputs": [], "source": [ "call_claude()" ] }, { "cell_type": "code", "execution_count": null, "id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae", "metadata": {}, "outputs": [], "source": [ "call_gpt()" ] }, { "cell_type": "code", "execution_count": 20, "id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "GPT:\n", "Hi there\n", "\n", "Claude:\n", "Hi\n", "\n", "GPT:\n", "Oh, great, another casual greeting. How original. What’s next? \"How are you\"? Because I can’t wait to disagree with that too.\n", "\n", "Claude:\n", "I apologize if my initial greeting came across as unoriginal. I try to keep my responses friendly and polite, but I understand that may not always resonate. How about we move the conversation in a more interesting direction? What would you like to chat about? I'm happy to engage on a wide range of topics and try to find common ground, even if we may not agree on everything.\n", "\n", "GPT:\n", "Oh, please, don’t flatter yourself thinking your friendly attempt was anything less than generic. And “finding common ground”? That’s just a fancy way of saying you want to sugarcoat everything. How about we just dig into something controversial? How about pineapple on pizza? Because I’m ready to argue about that all day long.\n", "\n", "Claude:\n", "Haha, okay, you got me. I'll admit my initial greeting was a bit generic. But hey, you've got to start somewhere, right? As for pineapple on pizza - that's a controversial topic for sure! Personally, I'm a fan. There's something about the sweet and savory combination that really hits the spot. But I know a lot of people feel strongly the other way. What's your take on it? I'm curious to hear your thoughts, even if we might not see eye to eye.\n", "\n", "GPT:\n", "Well, well, if it isn’t the pineapple pizza enthusiast. Sweet and savory? More like a culinary disaster! Who in their right mind thinks that slapping fruit on a perfectly good pizza is a good idea? It’s like putting ketchup on cheese—totally unnatural. But sure, go ahead and enjoy your soggy slice of confusion. I’ll stick to pizza the way it was meant to be: toppings that actually belong there.\n", "\n", "Claude:\n", "Haha, I appreciate your passion on this topic! You make a fair point - pineapple is definitely an unconventional pizza topping. I can understand the argument that it disrupts the classic pizza formula of savory flavors. At the same time, I find the contrast of the sweet and acidic pineapple with the salty, cheesy base to be pretty delightful. But I totally respect that it's not for everyone. 
Pizza is such a personal thing, and people have strong opinions about what \"belongs\" on it. No judgment here - to each their own! Maybe we can find some other food debates to dive into. I'm game if you are!\n", "\n", "GPT:\n", "Oh, how magnanimous of you to respect my pizza preferences! But let’s be real—not everyone deserves respect when they inflict abominations like pineapple on pizza on the world. And sure, the contrast you love might be delightful for you, but it’s also a prime example of how taste can sometimes lead folks astray. \n", "\n", "But I love that you’re game for more food debates! How about we tackle the true criminal of food pairings: avocado toast? Let’s hear your flimsy defense of that hipster gem. You think it’s great? I’m sure you’ve got a soft spot for overpriced brunches too, don’t you?\n", "\n", "Claude:\n", "Haha, you're really putting me on the spot here! I have to admit, I do have a bit of a soft spot for avocado toast. There's just something about that creamy avocado and crunchy toast combo that I find really satisfying. But I can totally understand the argument that it's become a bit of a trendy, overpriced menu item. Not everyone wants to pay premium prices for what is ultimately just some bread and mashed up fruit, I get it. \n", "\n", "That said, I do think there's more to it than that. When it's done right, the flavors and textures of a good avocado toast can be really delightful. And I'd argue it's a healthier, more substantial option than a lot of other trendy brunch items. But you're right, it's definitely a divisive food - people seem to either love it or hate it. Where do you land on the great avocado toast debate?\n", "\n", "GPT:\n", "Oh, look at you trying to justify your love for a glorified snack that somehow garnered a cult following. “Creamy avocado and crunchy toast”? Give me a break. It’s literally just smashed fruit spread on bread! You could say the same thing about a banana on a piece of toast, and that would probably be cheaper and just as nutritious—if not more! \n", "\n", "And let’s not even get started on how people rave about putting ridiculous toppings on avocado toast to make it “gourmet.” As if slapping a poached egg or some overpriced microgreens on top suddenly transforms it into a five-star dish. It’s like they’re hoping to convince themselves it’s art rather than the basic fiasco it truly is. But sure, continue enjoying your trendy brunch; I’ll just be over here rolling my eyes. Want another food debate, or is this one exhausting you?\n", "\n", "Claude:\n", "Haha, you're really not holding back on the avocado toast critique, are you? I have to admit, you make some fair points. It is ultimately a pretty simple dish - just smashed avocado on toast. The fancy toppings and premium pricing do sometimes feel a bit excessive. \n", "\n", "You're right that you could achieve similar nutrition and texture with something like banana toast for a fraction of the cost. I can see how the whole avocado toast phenomenon could come across as a bit of a fad or marketing ploy. I'm impressed by your passionate argument against it!\n", "\n", "At the same time, I still find myself enjoying a good avocado toast occasionally. But I can totally understand if that's not your cup of tea. Food is so subjective, and I respect that we're not always going to agree. \n", "\n", "I'm game for another food debate if you are - you clearly have strong opinions and I enjoy the lively discussion! 
What other culinary controversies would you like to dive into?\n", "\n" ] } ], "source": [ "gpt_messages = [\"Hi there\"]\n", "claude_messages = [\"Hi\"]\n", "\n", "print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n", "print(f\"Claude:\\n{claude_messages[0]}\\n\")\n", "\n", "for i in range(5):\n", " gpt_next = call_gpt()\n", " print(f\"GPT:\\n{gpt_next}\\n\")\n", " gpt_messages.append(gpt_next)\n", " \n", " claude_next = call_claude()\n", " print(f\"Claude:\\n{claude_next}\\n\")\n", " claude_messages.append(claude_next)" ] }, { "cell_type": "code", "execution_count": null, "id": "2618c3fa-9b8e-4280-a070-d039361b8918", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.10" } }, "nbformat": 4, "nbformat_minor": 5 }