You can not select more than 25 topics Topics must start with a letter or number, can include dashes ('-') and can be up to 35 characters long.
 
 

1220 lines
120 KiB

{
"cells": [
{
"cell_type": "markdown",
"id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
"metadata": {},
"source": [
"# Welcome to Week 2!\n",
"\n",
"## Frontier Model APIs\n",
"\n",
"In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with the OpenAI's API.\n",
"\n",
"Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI."
]
},
{
"cell_type": "markdown",
"id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Important Note - Please read me</h2>\n",
" <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
" At the start of each week, it's worth checking you have the latest code.<br/>\n",
" First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
" After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
" <code>conda env update --f environment.yml</code><br/>\n",
" Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
" <code>pip install -r requirements.txt</code>\n",
" <br/>Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Reminder about the resources page</h2>\n",
" <span style=\"color:#f71;\">Here's a link to resources for the course. This includes links to all the slides.<br/>\n",
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "85cfe275-4705-4d30-abea-643fbddf1db0",
"metadata": {},
"source": [
"## Setting up your keys\n",
"\n",
"If you haven't done so already, you could now create API keys for Anthropic and Google in addition to OpenAI.\n",
"\n",
"**Please note:** if you'd prefer to avoid extra API costs, feel free to skip setting up Anthopic and Google! You can see me do it, and focus on OpenAI for the course. You could also substitute Anthropic and/or Google for Ollama, using the exercise you did in week 1.\n",
"\n",
"For OpenAI, visit https://openai.com/api/ \n",
"For Anthropic, visit https://console.anthropic.com/ \n",
"For Google, visit https://ai.google.dev/gemini-api \n",
"\n",
"When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
"\n",
"```\n",
"OPENAI_API_KEY=xxxx\n",
"ANTHROPIC_API_KEY=xxxx\n",
"GOOGLE_API_KEY=xxxx\n",
"```\n",
"\n",
"Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top."
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
=======
"execution_count": 1,
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import anthropic\n",
"from IPython.display import Markdown, display, update_display"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
=======
"execution_count": 2,
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
"metadata": {},
"outputs": [],
"source": [
"# import for google\n",
"# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n",
"# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n",
"\n",
"import google.generativeai"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
"metadata": {},
"outputs": [],
=======
"execution_count": 3,
"id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key exists and begins sk-proj-\n",
"Anthropic API Key exists and begins sk-ant-\n",
"Google API Key not set\n"
]
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
"load_dotenv()\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"if anthropic_api_key:\n",
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
"else:\n",
" print(\"Anthropic API Key not set\")\n",
"\n",
"if google_api_key:\n",
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n",
"else:\n",
" print(\"Google API Key not set\")"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
=======
"execution_count": 4,
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
"metadata": {},
"outputs": [],
"source": [
"# Connect to OpenAI, Anthropic\n",
"\n",
"openai = OpenAI()\n",
"\n",
"claude = anthropic.Anthropic()"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
=======
"execution_count": 5,
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"id": "425ed580-808d-429b-85b0-6cba50ca1d0c",
"metadata": {},
"outputs": [],
"source": [
"# This is the set up code for Gemini\n",
"# Having problems with Google Gemini setup? Then just ignore this cell; when we use Gemini, I'll give you an alternative that bypasses this library altogether\n",
"\n",
"google.generativeai.configure()"
]
},
{
"cell_type": "markdown",
"id": "42f77b59-2fb1-462a-b90d-78994e4cef33",
"metadata": {},
"source": [
"## Asking LLMs to tell a joke\n",
"\n",
"It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n",
"Later we will be putting LLMs to better use!\n",
"\n",
"### What information is included in the API\n",
"\n",
"Typically we'll pass to the API:\n",
"- The name of the model that should be used\n",
"- A system message that gives overall context for the role the LLM is playing\n",
"- A user message that provides the actual prompt\n",
"\n",
"There are other parameters that can be used, including **temperature** which is typically between 0 and 1; higher for more random output; lower for more focused and deterministic."
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
=======
"execution_count": 6,
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"id": "378a0296-59a2-45c6-82eb-941344d3eeff",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are an assistant that is great at telling jokes\"\n",
"user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\""
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
=======
"execution_count": 7,
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
"metadata": {},
"outputs": [],
"source": [
"prompts = [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ]"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
"metadata": {},
"outputs": [],
=======
"execution_count": 8,
"id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist break up with their calculator? \n",
"\n",
"Because it wasn't adding up to their expectations!\n"
]
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"# GPT-3.5-Turbo\n",
"\n",
"completion = openai.chat.completions.create(model='gpt-3.5-turbo', messages=prompts)\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
"metadata": {},
"outputs": [],
=======
"execution_count": 9,
"id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist break up with the statistician?\n",
"\n",
"Because she found him too mean!\n"
]
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"# GPT-4o-mini\n",
"# Temperature setting controls creativity\n",
"\n",
"completion = openai.chat.completions.create(\n",
" model='gpt-4o-mini',\n",
" messages=prompts,\n",
" temperature=0.7\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
"metadata": {},
"outputs": [],
=======
"execution_count": 10,
"id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist bring a ladder to work?\n",
"\n",
"Because they heard the project had a lot of layers to it!\n"
]
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"# GPT-4o\n",
"\n",
"completion = openai.chat.completions.create(\n",
" model='gpt-4o',\n",
" messages=prompts,\n",
" temperature=0.4\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
"metadata": {},
"outputs": [],
=======
"execution_count": 11,
"id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's a light-hearted joke for data scientists:\n",
"\n",
"Why did the data scientist break up with their significant other?\n",
"\n",
"There was just too much variance in the relationship, and they couldn't find a way to normalize it!\n",
"\n",
"Ba dum tss! 🥁\n",
"\n",
"This joke plays on statistical concepts like variance and normalization, which data scientists work with regularly. It's a playful way of applying data science terminology to a personal situation, creating a humorous contrast.\n"
]
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"# Claude 3.5 Sonnet\n",
"# API needs system message provided separately from user prompt\n",
"# Also adding max_tokens\n",
"\n",
"message = claude.messages.create(\n",
" model=\"claude-3-5-sonnet-20240620\",\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" system=system_message,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": user_prompt},\n",
" ],\n",
")\n",
"\n",
"print(message.content[0].text)"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
"metadata": {},
"outputs": [],
=======
"execution_count": 12,
"id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's a light-hearted joke for data scientists:\n",
"\n",
" dark mode? scientists prefer\n",
"\n",
"!ecause light attracts bugs\n",
"\n",
" meaning of \"bugs\" - both as insects attracted to light and as errors in code that data scientists often have to debug. It's a fun, nerdy pun that data scientists might appreciate!"
]
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"# Claude 3.5 Sonnet again\n",
"# Now let's add in streaming back results\n",
"\n",
"result = claude.messages.stream(\n",
" model=\"claude-3-5-sonnet-20240620\",\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" system=system_message,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": user_prompt},\n",
" ],\n",
")\n",
"\n",
"with result as stream:\n",
" for text in stream.text_stream:\n",
" print(text, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
"metadata": {},
"outputs": [],
=======
"execution_count": 13,
"id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
"metadata": {},
"outputs": [
{
"ename": "DefaultCredentialsError",
"evalue": "\n No API_KEY or ADC found. Please either:\n - Set the `GOOGLE_API_KEY` environment variable.\n - Manually pass the key with `genai.configure(api_key=my_api_key)`.\n - Or set up Application Default Credentials, see https://ai.google.dev/gemini-api/docs/oauth for more information.",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mDefaultCredentialsError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[1;32mIn[13], line 9\u001b[0m\n\u001b[0;32m 1\u001b[0m \u001b[38;5;66;03m# The API for Gemini has a slightly different structure.\u001b[39;00m\n\u001b[0;32m 2\u001b[0m \u001b[38;5;66;03m# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\u001b[39;00m\n\u001b[0;32m 3\u001b[0m \u001b[38;5;66;03m# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\u001b[39;00m\n\u001b[0;32m 5\u001b[0m gemini \u001b[38;5;241m=\u001b[39m google\u001b[38;5;241m.\u001b[39mgenerativeai\u001b[38;5;241m.\u001b[39mGenerativeModel(\n\u001b[0;32m 6\u001b[0m model_name\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mgemini-1.5-flash\u001b[39m\u001b[38;5;124m'\u001b[39m,\n\u001b[0;32m 7\u001b[0m system_instruction\u001b[38;5;241m=\u001b[39msystem_message\n\u001b[0;32m 8\u001b[0m )\n\u001b[1;32m----> 9\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[43mgemini\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mgenerate_content\u001b[49m\u001b[43m(\u001b[49m\u001b[43muser_prompt\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 10\u001b[0m \u001b[38;5;28mprint\u001b[39m(response\u001b[38;5;241m.\u001b[39mtext)\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\google\\generativeai\\generative_models.py:317\u001b[0m, in \u001b[0;36mGenerativeModel.generate_content\u001b[1;34m(self, contents, generation_config, safety_settings, stream, tools, tool_config, request_options)\u001b[0m\n\u001b[0;32m 314\u001b[0m request\u001b[38;5;241m.\u001b[39mcontents[\u001b[38;5;241m-\u001b[39m\u001b[38;5;241m1\u001b[39m]\u001b[38;5;241m.\u001b[39mrole \u001b[38;5;241m=\u001b[39m _USER_ROLE\n\u001b[0;32m 316\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_client \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m--> 317\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_client \u001b[38;5;241m=\u001b[39m \u001b[43mclient\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_default_generative_client\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 319\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m request_options \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[0;32m 320\u001b[0m request_options \u001b[38;5;241m=\u001b[39m {}\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\google\\generativeai\\client.py:360\u001b[0m, in \u001b[0;36mget_default_generative_client\u001b[1;34m()\u001b[0m\n\u001b[0;32m 359\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21mget_default_generative_client\u001b[39m() \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m glm\u001b[38;5;241m.\u001b[39mGenerativeServiceClient:\n\u001b[1;32m--> 360\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43m_client_manager\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_default_client\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mgenerative\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\google\\generativeai\\client.py:289\u001b[0m, in \u001b[0;36m_ClientManager.get_default_client\u001b[1;34m(self, name)\u001b[0m\n\u001b[0;32m 287\u001b[0m client \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mclients\u001b[38;5;241m.\u001b[39mget(name)\n\u001b[0;32m 288\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m client \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n\u001b[1;32m--> 289\u001b[0m client \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmake_client\u001b[49m\u001b[43m(\u001b[49m\u001b[43mname\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 290\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mclients[name] \u001b[38;5;241m=\u001b[39m client\n\u001b[0;32m 291\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m client\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\google\\generativeai\\client.py:249\u001b[0m, in \u001b[0;36m_ClientManager.make_client\u001b[1;34m(self, name)\u001b[0m\n\u001b[0;32m 242\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m ga_exceptions\u001b[38;5;241m.\u001b[39mDefaultCredentialsError \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[0;32m 243\u001b[0m e\u001b[38;5;241m.\u001b[39margs \u001b[38;5;241m=\u001b[39m (\n\u001b[0;32m 244\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m No API_KEY or ADC found. Please either:\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 245\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m - Set the `GOOGLE_API_KEY` environment variable.\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 246\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m - Manually pass the key with `genai.configure(api_key=my_api_key)`.\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 247\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m - Or set up Application Default Credentials, see https://ai.google.dev/gemini-api/docs/oauth for more information.\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[0;32m 248\u001b[0m )\n\u001b[1;32m--> 249\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m e\n\u001b[0;32m 251\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mdefault_metadata:\n\u001b[0;32m 252\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m client\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\google\\generativeai\\client.py:241\u001b[0m, in \u001b[0;36m_ClientManager.make_client\u001b[1;34m(self, name)\u001b[0m\n\u001b[0;32m 239\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m 240\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m patch_colab_gce_credentials():\n\u001b[1;32m--> 241\u001b[0m client \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mclient_config\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 242\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m ga_exceptions\u001b[38;5;241m.\u001b[39mDefaultCredentialsError \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[0;32m 243\u001b[0m e\u001b[38;5;241m.\u001b[39margs \u001b[38;5;241m=\u001b[39m (\n\u001b[0;32m 244\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m No API_KEY or ADC found. Please either:\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 245\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m - Set the `GOOGLE_API_KEY` environment variable.\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 246\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m - Manually pass the key with `genai.configure(api_key=my_api_key)`.\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 247\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m - Or set up Application Default Credentials, see https://ai.google.dev/gemini-api/docs/oauth for more information.\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[0;32m 248\u001b[0m )\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\google\\ai\\generativelanguage_v1beta\\services\\generative_service\\client.py:667\u001b[0m, in \u001b[0;36mGenerativeServiceClient.__init__\u001b[1;34m(self, credentials, transport, client_options, client_info)\u001b[0m\n\u001b[0;32m 658\u001b[0m transport_init: Union[\n\u001b[0;32m 659\u001b[0m Type[GenerativeServiceTransport],\n\u001b[0;32m 660\u001b[0m Callable[\u001b[38;5;241m.\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;241m.\u001b[39m, GenerativeServiceTransport],\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 664\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m cast(Callable[\u001b[38;5;241m.\u001b[39m\u001b[38;5;241m.\u001b[39m\u001b[38;5;241m.\u001b[39m, GenerativeServiceTransport], transport)\n\u001b[0;32m 665\u001b[0m )\n\u001b[0;32m 666\u001b[0m \u001b[38;5;66;03m# initialize with the provided callable or the passed in class\u001b[39;00m\n\u001b[1;32m--> 667\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_transport \u001b[38;5;241m=\u001b[39m \u001b[43mtransport_init\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 668\u001b[0m \u001b[43m \u001b[49m\u001b[43mcredentials\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcredentials\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 669\u001b[0m \u001b[43m \u001b[49m\u001b[43mcredentials_file\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_client_options\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcredentials_file\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 670\u001b[0m \u001b[43m \u001b[49m\u001b[43mhost\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_api_endpoint\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 671\u001b[0m \u001b[43m \u001b[49m\u001b[43mscopes\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_client_options\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mscopes\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 672\u001b[0m \u001b[43m \u001b[49m\u001b[43mclient_cert_source_for_mtls\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_client_cert_source\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 673\u001b[0m \u001b[43m \u001b[49m\u001b[43mquota_project_id\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_client_options\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mquota_project_id\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 674\u001b[0m \u001b[43m \u001b[49m\u001b[43mclient_info\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mclient_info\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 675\u001b[0m \u001b[43m \u001b[49m\u001b[43malways_use_jwt_access\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[0;32m 676\u001b[0m \u001b[43m \u001b[49m\u001b[43mapi_audience\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_client_options\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mapi_audience\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 677\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 679\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124masync\u001b[39m\u001b[38;5;124m\"\u001b[39m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mstr\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_transport):\n\u001b[0;32m 680\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m CLIENT_LOGGING_SUPPORTED \u001b[38;5;129;01mand\u001b[39;00m _LOGGER\u001b[38;5;241m.\u001b[39misEnabledFor(\n\u001b[0;32m 681\u001b[0m std_logging\u001b[38;5;241m.\u001b[39mDEBUG\n\u001b[0;32m 682\u001b[0m ): \u001b[38;5;66;03m# pragma: NO COVER\u001b[39;00m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\google\\ai\\generativelanguage_v1beta\\services\\generative_service\\transports\\grpc.py:235\u001b[0m, in \u001b[0;36mGenerativeServiceGrpcTransport.__init__\u001b[1;34m(self, host, credentials, credentials_file, scopes, channel, api_mtls_endpoint, client_cert_source, ssl_channel_credentials, client_cert_source_for_mtls, quota_project_id, client_info, always_use_jwt_access, api_audience)\u001b[0m\n\u001b[0;32m 230\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_ssl_channel_credentials \u001b[38;5;241m=\u001b[39m grpc\u001b[38;5;241m.\u001b[39mssl_channel_credentials(\n\u001b[0;32m 231\u001b[0m certificate_chain\u001b[38;5;241m=\u001b[39mcert, private_key\u001b[38;5;241m=\u001b[39mkey\n\u001b[0;32m 232\u001b[0m )\n\u001b[0;32m 234\u001b[0m \u001b[38;5;66;03m# The base transport sets the host, credentials and scopes\u001b[39;00m\n\u001b[1;32m--> 235\u001b[0m \u001b[38;5;28;43msuper\u001b[39;49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[38;5;21;43m__init__\u001b[39;49m\u001b[43m(\u001b[49m\n\u001b[0;32m 236\u001b[0m \u001b[43m \u001b[49m\u001b[43mhost\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mhost\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 237\u001b[0m \u001b[43m \u001b[49m\u001b[43mcredentials\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcredentials\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 238\u001b[0m \u001b[43m \u001b[49m\u001b[43mcredentials_file\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcredentials_file\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 239\u001b[0m \u001b[43m \u001b[49m\u001b[43mscopes\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mscopes\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 240\u001b[0m \u001b[43m \u001b[49m\u001b[43mquota_project_id\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mquota_project_id\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 241\u001b[0m \u001b[43m \u001b[49m\u001b[43mclient_info\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mclient_info\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 242\u001b[0m \u001b[43m \u001b[49m\u001b[43malways_use_jwt_access\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43malways_use_jwt_access\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 243\u001b[0m \u001b[43m \u001b[49m\u001b[43mapi_audience\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mapi_audience\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 244\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 246\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_grpc_channel:\n\u001b[0;32m 247\u001b[0m \u001b[38;5;66;03m# initialize with the provided callable or the default channel\u001b[39;00m\n\u001b[0;32m 248\u001b[0m channel_init \u001b[38;5;241m=\u001b[39m channel \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mtype\u001b[39m(\u001b[38;5;28mself\u001b[39m)\u001b[38;5;241m.\u001b[39mcreate_channel\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\google\\ai\\generativelanguage_v1beta\\services\\generative_service\\transports\\base.py:100\u001b[0m, in \u001b[0;36mGenerativeServiceTransport.__init__\u001b[1;34m(self, host, credentials, credentials_file, scopes, quota_project_id, client_info, always_use_jwt_access, api_audience, **kwargs)\u001b[0m\n\u001b[0;32m 96\u001b[0m credentials, _ \u001b[38;5;241m=\u001b[39m google\u001b[38;5;241m.\u001b[39mauth\u001b[38;5;241m.\u001b[39mload_credentials_from_file(\n\u001b[0;32m 97\u001b[0m credentials_file, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mscopes_kwargs, quota_project_id\u001b[38;5;241m=\u001b[39mquota_project_id\n\u001b[0;32m 98\u001b[0m )\n\u001b[0;32m 99\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m credentials \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_ignore_credentials:\n\u001b[1;32m--> 100\u001b[0m credentials, _ \u001b[38;5;241m=\u001b[39m \u001b[43mgoogle\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mauth\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdefault\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 101\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mscopes_kwargs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mquota_project_id\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mquota_project_id\u001b[49m\n\u001b[0;32m 102\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 103\u001b[0m \u001b[38;5;66;03m# Don't apply audience if the credentials file passed from user.\u001b[39;00m\n\u001b[0;32m 104\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mhasattr\u001b[39m(credentials, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mwith_gdch_audience\u001b[39m\u001b[38;5;124m\"\u001b[39m):\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\google\\auth\\_default.py:697\u001b[0m, in \u001b[0;36mdefault\u001b[1;34m(scopes, request, quota_project_id, default_scopes)\u001b[0m\n\u001b[0;32m 689\u001b[0m _LOGGER\u001b[38;5;241m.\u001b[39mwarning(\n\u001b[0;32m 690\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mNo project ID could be determined. Consider running \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 691\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`gcloud config set project` or setting the \u001b[39m\u001b[38;5;132;01m%s\u001b[39;00m\u001b[38;5;124m \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 692\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124menvironment variable\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[0;32m 693\u001b[0m environment_vars\u001b[38;5;241m.\u001b[39mPROJECT,\n\u001b[0;32m 694\u001b[0m )\n\u001b[0;32m 695\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m credentials, effective_project_id\n\u001b[1;32m--> 697\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m exceptions\u001b[38;5;241m.\u001b[39mDefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS)\n",
"\u001b[1;31mDefaultCredentialsError\u001b[0m: \n No API_KEY or ADC found. Please either:\n - Set the `GOOGLE_API_KEY` environment variable.\n - Manually pass the key with `genai.configure(api_key=my_api_key)`.\n - Or set up Application Default Credentials, see https://ai.google.dev/gemini-api/docs/oauth for more information."
]
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"# The API for Gemini has a slightly different structure.\n",
"# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n",
"# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\n",
"\n",
"gemini = google.generativeai.GenerativeModel(\n",
" model_name='gemini-1.5-flash',\n",
" system_instruction=system_message\n",
")\n",
"response = gemini.generate_content(user_prompt)\n",
"print(response.text)"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "49009a30-037d-41c8-b874-127f61c4aa3a",
"metadata": {},
"outputs": [],
=======
"execution_count": 14,
"id": "49009a30-037d-41c8-b874-127f61c4aa3a",
"metadata": {},
"outputs": [
{
"ename": "APIConnectionError",
"evalue": "Connection error.",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mLocalProtocolError\u001b[0m Traceback (most recent call last)",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpx\\_transports\\default.py:101\u001b[0m, in \u001b[0;36mmap_httpcore_exceptions\u001b[1;34m()\u001b[0m\n\u001b[0;32m 100\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m--> 101\u001b[0m \u001b[38;5;28;01myield\u001b[39;00m\n\u001b[0;32m 102\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mException\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m exc:\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpx\\_transports\\default.py:250\u001b[0m, in \u001b[0;36mHTTPTransport.handle_request\u001b[1;34m(self, request)\u001b[0m\n\u001b[0;32m 249\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m map_httpcore_exceptions():\n\u001b[1;32m--> 250\u001b[0m resp \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_pool\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mhandle_request\u001b[49m\u001b[43m(\u001b[49m\u001b[43mreq\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 252\u001b[0m \u001b[38;5;28;01massert\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(resp\u001b[38;5;241m.\u001b[39mstream, typing\u001b[38;5;241m.\u001b[39mIterable)\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpcore\\_sync\\connection_pool.py:256\u001b[0m, in \u001b[0;36mConnectionPool.handle_request\u001b[1;34m(self, request)\u001b[0m\n\u001b[0;32m 255\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_close_connections(closing)\n\u001b[1;32m--> 256\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m exc \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[0;32m 258\u001b[0m \u001b[38;5;66;03m# Return the response. Note that in this case we still have to manage\u001b[39;00m\n\u001b[0;32m 259\u001b[0m \u001b[38;5;66;03m# the point at which the response is closed.\u001b[39;00m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpcore\\_sync\\connection_pool.py:236\u001b[0m, in \u001b[0;36mConnectionPool.handle_request\u001b[1;34m(self, request)\u001b[0m\n\u001b[0;32m 234\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m 235\u001b[0m \u001b[38;5;66;03m# Send the request on the assigned connection.\u001b[39;00m\n\u001b[1;32m--> 236\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[43mconnection\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mhandle_request\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 237\u001b[0m \u001b[43m \u001b[49m\u001b[43mpool_request\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrequest\u001b[49m\n\u001b[0;32m 238\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 239\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m ConnectionNotAvailable:\n\u001b[0;32m 240\u001b[0m \u001b[38;5;66;03m# In some cases a connection may initially be available to\u001b[39;00m\n\u001b[0;32m 241\u001b[0m \u001b[38;5;66;03m# handle a request, but then become unavailable.\u001b[39;00m\n\u001b[0;32m 242\u001b[0m \u001b[38;5;66;03m#\u001b[39;00m\n\u001b[0;32m 243\u001b[0m \u001b[38;5;66;03m# In this case we clear the connection and try again.\u001b[39;00m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpcore\\_sync\\connection.py:103\u001b[0m, in \u001b[0;36mHTTPConnection.handle_request\u001b[1;34m(self, request)\u001b[0m\n\u001b[0;32m 101\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m exc\n\u001b[1;32m--> 103\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_connection\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mhandle_request\u001b[49m\u001b[43m(\u001b[49m\u001b[43mrequest\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpcore\\_sync\\http11.py:136\u001b[0m, in \u001b[0;36mHTTP11Connection.handle_request\u001b[1;34m(self, request)\u001b[0m\n\u001b[0;32m 135\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_response_closed()\n\u001b[1;32m--> 136\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m exc\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpcore\\_sync\\http11.py:86\u001b[0m, in \u001b[0;36mHTTP11Connection.handle_request\u001b[1;34m(self, request)\u001b[0m\n\u001b[0;32m 83\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m Trace(\n\u001b[0;32m 84\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124msend_request_headers\u001b[39m\u001b[38;5;124m\"\u001b[39m, logger, request, kwargs\n\u001b[0;32m 85\u001b[0m ) \u001b[38;5;28;01mas\u001b[39;00m trace:\n\u001b[1;32m---> 86\u001b[0m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_send_request_headers\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 87\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m Trace(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124msend_request_body\u001b[39m\u001b[38;5;124m\"\u001b[39m, logger, request, kwargs) \u001b[38;5;28;01mas\u001b[39;00m trace:\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpcore\\_sync\\http11.py:144\u001b[0m, in \u001b[0;36mHTTP11Connection._send_request_headers\u001b[1;34m(self, request)\u001b[0m\n\u001b[0;32m 142\u001b[0m timeout \u001b[38;5;241m=\u001b[39m timeouts\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mwrite\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m)\n\u001b[1;32m--> 144\u001b[0m \u001b[43m\u001b[49m\u001b[38;5;28;43;01mwith\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mmap_exceptions\u001b[49m\u001b[43m(\u001b[49m\u001b[43m{\u001b[49m\u001b[43mh11\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mLocalProtocolError\u001b[49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mLocalProtocolError\u001b[49m\u001b[43m}\u001b[49m\u001b[43m)\u001b[49m\u001b[43m:\u001b[49m\n\u001b[0;32m 145\u001b[0m \u001b[43m \u001b[49m\u001b[43mevent\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43m \u001b[49m\u001b[43mh11\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mRequest\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 146\u001b[0m \u001b[43m \u001b[49m\u001b[43mmethod\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 147\u001b[0m \u001b[43m \u001b[49m\u001b[43mtarget\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43murl\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mtarget\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 148\u001b[0m \u001b[43m \u001b[49m\u001b[43mheaders\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mheaders\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 149\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\contextlib.py:158\u001b[0m, in \u001b[0;36m_GeneratorContextManager.__exit__\u001b[1;34m(self, typ, value, traceback)\u001b[0m\n\u001b[0;32m 157\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m--> 158\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mgen\u001b[38;5;241m.\u001b[39mthrow(typ, value, traceback)\n\u001b[0;32m 159\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mStopIteration\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m exc:\n\u001b[0;32m 160\u001b[0m \u001b[38;5;66;03m# Suppress StopIteration *unless* it's the same exception that\u001b[39;00m\n\u001b[0;32m 161\u001b[0m \u001b[38;5;66;03m# was passed to throw(). This prevents a StopIteration\u001b[39;00m\n\u001b[0;32m 162\u001b[0m \u001b[38;5;66;03m# raised inside the \"with\" statement from being suppressed.\u001b[39;00m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpcore\\_exceptions.py:14\u001b[0m, in \u001b[0;36mmap_exceptions\u001b[1;34m(map)\u001b[0m\n\u001b[0;32m 13\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(exc, from_exc):\n\u001b[1;32m---> 14\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m to_exc(exc) \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21;01mexc\u001b[39;00m\n\u001b[0;32m 15\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m\n",
"\u001b[1;31mLocalProtocolError\u001b[0m: Illegal header value b'Bearer '",
"\nThe above exception was the direct cause of the following exception:\n",
"\u001b[1;31mLocalProtocolError\u001b[0m Traceback (most recent call last)",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\openai\\_base_client.py:996\u001b[0m, in \u001b[0;36mSyncAPIClient._request\u001b[1;34m(self, cast_to, options, retries_taken, stream, stream_cls)\u001b[0m\n\u001b[0;32m 995\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m--> 996\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_client\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msend\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 997\u001b[0m \u001b[43m \u001b[49m\u001b[43mrequest\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 998\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01mor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_should_stream_response_body\u001b[49m\u001b[43m(\u001b[49m\u001b[43mrequest\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 999\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1000\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 1001\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m httpx\u001b[38;5;241m.\u001b[39mTimeoutException \u001b[38;5;28;01mas\u001b[39;00m err:\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpx\\_client.py:914\u001b[0m, in \u001b[0;36mClient.send\u001b[1;34m(self, request, stream, auth, follow_redirects)\u001b[0m\n\u001b[0;32m 912\u001b[0m auth \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_build_request_auth(request, auth)\n\u001b[1;32m--> 914\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_send_handling_auth\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 915\u001b[0m \u001b[43m \u001b[49m\u001b[43mrequest\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 916\u001b[0m \u001b[43m \u001b[49m\u001b[43mauth\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mauth\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 917\u001b[0m \u001b[43m \u001b[49m\u001b[43mfollow_redirects\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mfollow_redirects\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 918\u001b[0m \u001b[43m \u001b[49m\u001b[43mhistory\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43m[\u001b[49m\u001b[43m]\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 919\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 920\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpx\\_client.py:942\u001b[0m, in \u001b[0;36mClient._send_handling_auth\u001b[1;34m(self, request, auth, follow_redirects, history)\u001b[0m\n\u001b[0;32m 941\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m \u001b[38;5;28;01mTrue\u001b[39;00m:\n\u001b[1;32m--> 942\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_send_handling_redirects\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 943\u001b[0m \u001b[43m \u001b[49m\u001b[43mrequest\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 944\u001b[0m \u001b[43m \u001b[49m\u001b[43mfollow_redirects\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mfollow_redirects\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 945\u001b[0m \u001b[43m \u001b[49m\u001b[43mhistory\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mhistory\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 946\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 947\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpx\\_client.py:979\u001b[0m, in \u001b[0;36mClient._send_handling_redirects\u001b[1;34m(self, request, follow_redirects, history)\u001b[0m\n\u001b[0;32m 977\u001b[0m hook(request)\n\u001b[1;32m--> 979\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_send_single_request\u001b[49m\u001b[43m(\u001b[49m\u001b[43mrequest\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 980\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpx\\_client.py:1014\u001b[0m, in \u001b[0;36mClient._send_single_request\u001b[1;34m(self, request)\u001b[0m\n\u001b[0;32m 1013\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m request_context(request\u001b[38;5;241m=\u001b[39mrequest):\n\u001b[1;32m-> 1014\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[43mtransport\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mhandle_request\u001b[49m\u001b[43m(\u001b[49m\u001b[43mrequest\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 1016\u001b[0m \u001b[38;5;28;01massert\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(response\u001b[38;5;241m.\u001b[39mstream, SyncByteStream)\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpx\\_transports\\default.py:249\u001b[0m, in \u001b[0;36mHTTPTransport.handle_request\u001b[1;34m(self, request)\u001b[0m\n\u001b[0;32m 237\u001b[0m req \u001b[38;5;241m=\u001b[39m httpcore\u001b[38;5;241m.\u001b[39mRequest(\n\u001b[0;32m 238\u001b[0m method\u001b[38;5;241m=\u001b[39mrequest\u001b[38;5;241m.\u001b[39mmethod,\n\u001b[0;32m 239\u001b[0m url\u001b[38;5;241m=\u001b[39mhttpcore\u001b[38;5;241m.\u001b[39mURL(\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 247\u001b[0m extensions\u001b[38;5;241m=\u001b[39mrequest\u001b[38;5;241m.\u001b[39mextensions,\n\u001b[0;32m 248\u001b[0m )\n\u001b[1;32m--> 249\u001b[0m \u001b[43m\u001b[49m\u001b[38;5;28;43;01mwith\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mmap_httpcore_exceptions\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m:\u001b[49m\n\u001b[0;32m 250\u001b[0m \u001b[43m \u001b[49m\u001b[43mresp\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_pool\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mhandle_request\u001b[49m\u001b[43m(\u001b[49m\u001b[43mreq\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\contextlib.py:158\u001b[0m, in \u001b[0;36m_GeneratorContextManager.__exit__\u001b[1;34m(self, typ, value, traceback)\u001b[0m\n\u001b[0;32m 157\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m--> 158\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mgen\u001b[38;5;241m.\u001b[39mthrow(typ, value, traceback)\n\u001b[0;32m 159\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mStopIteration\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m exc:\n\u001b[0;32m 160\u001b[0m \u001b[38;5;66;03m# Suppress StopIteration *unless* it's the same exception that\u001b[39;00m\n\u001b[0;32m 161\u001b[0m \u001b[38;5;66;03m# was passed to throw(). This prevents a StopIteration\u001b[39;00m\n\u001b[0;32m 162\u001b[0m \u001b[38;5;66;03m# raised inside the \"with\" statement from being suppressed.\u001b[39;00m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\httpx\\_transports\\default.py:118\u001b[0m, in \u001b[0;36mmap_httpcore_exceptions\u001b[1;34m()\u001b[0m\n\u001b[0;32m 117\u001b[0m message \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mstr\u001b[39m(exc)\n\u001b[1;32m--> 118\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m mapped_exc(message) \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21;01mexc\u001b[39;00m\n",
"\u001b[1;31mLocalProtocolError\u001b[0m: Illegal header value b'Bearer '",
"\nThe above exception was the direct cause of the following exception:\n",
"\u001b[1;31mAPIConnectionError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[1;32mIn[14], line 9\u001b[0m\n\u001b[0;32m 1\u001b[0m \u001b[38;5;66;03m# As an alternative way to use Gemini that bypasses Google's python API library,\u001b[39;00m\n\u001b[0;32m 2\u001b[0m \u001b[38;5;66;03m# Google has recently released new endpoints that means you can use Gemini via the client libraries for OpenAI!\u001b[39;00m\n\u001b[0;32m 4\u001b[0m gemini_via_openai_client \u001b[38;5;241m=\u001b[39m OpenAI(\n\u001b[0;32m 5\u001b[0m api_key\u001b[38;5;241m=\u001b[39mgoogle_api_key, \n\u001b[0;32m 6\u001b[0m base_url\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mhttps://generativelanguage.googleapis.com/v1beta/openai/\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 7\u001b[0m )\n\u001b[1;32m----> 9\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[43mgemini_via_openai_client\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mchat\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcompletions\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcreate\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 10\u001b[0m \u001b[43m \u001b[49m\u001b[43mmodel\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mgemini-1.5-flash\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[0;32m 11\u001b[0m \u001b[43m \u001b[49m\u001b[43mmessages\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mprompts\u001b[49m\n\u001b[0;32m 12\u001b[0m \u001b[43m)\u001b[49m\n\u001b[0;32m 13\u001b[0m \u001b[38;5;28mprint\u001b[39m(response\u001b[38;5;241m.\u001b[39mchoices[\u001b[38;5;241m0\u001b[39m]\u001b[38;5;241m.\u001b[39mmessage\u001b[38;5;241m.\u001b[39mcontent)\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\openai\\_utils\\_utils.py:279\u001b[0m, in \u001b[0;36mrequired_args.<locals>.inner.<locals>.wrapper\u001b[1;34m(*args, **kwargs)\u001b[0m\n\u001b[0;32m 277\u001b[0m msg \u001b[38;5;241m=\u001b[39m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mMissing required argument: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mquote(missing[\u001b[38;5;241m0\u001b[39m])\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 278\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mTypeError\u001b[39;00m(msg)\n\u001b[1;32m--> 279\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mfunc\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\openai\\resources\\chat\\completions.py:859\u001b[0m, in \u001b[0;36mCompletions.create\u001b[1;34m(self, messages, model, audio, frequency_penalty, function_call, functions, logit_bias, logprobs, max_completion_tokens, max_tokens, metadata, modalities, n, parallel_tool_calls, prediction, presence_penalty, reasoning_effort, response_format, seed, service_tier, stop, store, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)\u001b[0m\n\u001b[0;32m 817\u001b[0m \u001b[38;5;129m@required_args\u001b[39m([\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmessages\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmodel\u001b[39m\u001b[38;5;124m\"\u001b[39m], [\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmessages\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmodel\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mstream\u001b[39m\u001b[38;5;124m\"\u001b[39m])\n\u001b[0;32m 818\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21mcreate\u001b[39m(\n\u001b[0;32m 819\u001b[0m \u001b[38;5;28mself\u001b[39m,\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 856\u001b[0m timeout: \u001b[38;5;28mfloat\u001b[39m \u001b[38;5;241m|\u001b[39m httpx\u001b[38;5;241m.\u001b[39mTimeout \u001b[38;5;241m|\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;241m|\u001b[39m NotGiven \u001b[38;5;241m=\u001b[39m NOT_GIVEN,\n\u001b[0;32m 857\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m ChatCompletion \u001b[38;5;241m|\u001b[39m Stream[ChatCompletionChunk]:\n\u001b[0;32m 858\u001b[0m validate_response_format(response_format)\n\u001b[1;32m--> 859\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_post\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 860\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m/chat/completions\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[0;32m 861\u001b[0m \u001b[43m \u001b[49m\u001b[43mbody\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmaybe_transform\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 862\u001b[0m \u001b[43m \u001b[49m\u001b[43m{\u001b[49m\n\u001b[0;32m 863\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmessages\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mmessages\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 864\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmodel\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mmodel\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 865\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43maudio\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43maudio\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 866\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mfrequency_penalty\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mfrequency_penalty\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 867\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mfunction_call\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mfunction_call\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 868\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mfunctions\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mfunctions\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 869\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mlogit_bias\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mlogit_bias\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 870\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mlogprobs\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mlogprobs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 871\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmax_completion_tokens\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mmax_completion_tokens\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 872\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmax_tokens\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mmax_tokens\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 873\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmetadata\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mmetadata\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 874\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mmodalities\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mmodalities\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 875\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mn\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mn\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 876\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mparallel_tool_calls\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mparallel_tool_calls\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 877\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mprediction\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mprediction\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 878\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mpresence_penalty\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mpresence_penalty\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 879\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mreasoning_effort\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mreasoning_effort\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 880\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mresponse_format\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mresponse_format\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 881\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mseed\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mseed\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 882\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mservice_tier\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mservice_tier\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 883\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mstop\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mstop\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 884\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mstore\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mstore\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 885\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mstream\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 886\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mstream_options\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mstream_options\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 887\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtemperature\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mtemperature\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 888\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtool_choice\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mtool_choice\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 889\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtools\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mtools\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 890\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtop_logprobs\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mtop_logprobs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 891\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mtop_p\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43mtop_p\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 892\u001b[0m \u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43muser\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m:\u001b[49m\u001b[43m \u001b[49m\u001b[43muser\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 893\u001b[0m \u001b[43m \u001b[49m\u001b[43m}\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 894\u001b[0m \u001b[43m \u001b[49m\u001b[43mcompletion_create_params\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mCompletionCreateParams\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 895\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 896\u001b[0m \u001b[43m \u001b[49m\u001b[43moptions\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmake_request_options\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 897\u001b[0m \u001b[43m \u001b[49m\u001b[43mextra_headers\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mextra_headers\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mextra_query\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mextra_query\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mextra_body\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mextra_body\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtimeout\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mtimeout\u001b[49m\n\u001b[0;32m 898\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 899\u001b[0m \u001b[43m \u001b[49m\u001b[43mcast_to\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mChatCompletion\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 900\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;129;43;01mor\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mFalse\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[0;32m 901\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream_cls\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mStream\u001b[49m\u001b[43m[\u001b[49m\u001b[43mChatCompletionChunk\u001b[49m\u001b[43m]\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 902\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\openai\\_base_client.py:1283\u001b[0m, in \u001b[0;36mSyncAPIClient.post\u001b[1;34m(self, path, cast_to, body, options, files, stream, stream_cls)\u001b[0m\n\u001b[0;32m 1269\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21mpost\u001b[39m(\n\u001b[0;32m 1270\u001b[0m \u001b[38;5;28mself\u001b[39m,\n\u001b[0;32m 1271\u001b[0m path: \u001b[38;5;28mstr\u001b[39m,\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 1278\u001b[0m stream_cls: \u001b[38;5;28mtype\u001b[39m[_StreamT] \u001b[38;5;241m|\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m,\n\u001b[0;32m 1279\u001b[0m ) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m ResponseT \u001b[38;5;241m|\u001b[39m _StreamT:\n\u001b[0;32m 1280\u001b[0m opts \u001b[38;5;241m=\u001b[39m FinalRequestOptions\u001b[38;5;241m.\u001b[39mconstruct(\n\u001b[0;32m 1281\u001b[0m method\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mpost\u001b[39m\u001b[38;5;124m\"\u001b[39m, url\u001b[38;5;241m=\u001b[39mpath, json_data\u001b[38;5;241m=\u001b[39mbody, files\u001b[38;5;241m=\u001b[39mto_httpx_files(files), \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39moptions\n\u001b[0;32m 1282\u001b[0m )\n\u001b[1;32m-> 1283\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m cast(ResponseT, \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcast_to\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mopts\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstream_cls\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream_cls\u001b[49m\u001b[43m)\u001b[49m)\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\openai\\_base_client.py:960\u001b[0m, in \u001b[0;36mSyncAPIClient.request\u001b[1;34m(self, cast_to, options, remaining_retries, stream, stream_cls)\u001b[0m\n\u001b[0;32m 957\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m 958\u001b[0m retries_taken \u001b[38;5;241m=\u001b[39m \u001b[38;5;241m0\u001b[39m\n\u001b[1;32m--> 960\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_request\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 961\u001b[0m \u001b[43m \u001b[49m\u001b[43mcast_to\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcast_to\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 962\u001b[0m \u001b[43m \u001b[49m\u001b[43moptions\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43moptions\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 963\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 964\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream_cls\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream_cls\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 965\u001b[0m \u001b[43m \u001b[49m\u001b[43mretries_taken\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mretries_taken\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 966\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\openai\\_base_client.py:1020\u001b[0m, in \u001b[0;36mSyncAPIClient._request\u001b[1;34m(self, cast_to, options, retries_taken, stream, stream_cls)\u001b[0m\n\u001b[0;32m 1017\u001b[0m log\u001b[38;5;241m.\u001b[39mdebug(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mEncountered Exception\u001b[39m\u001b[38;5;124m\"\u001b[39m, exc_info\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\n\u001b[0;32m 1019\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m remaining_retries \u001b[38;5;241m>\u001b[39m \u001b[38;5;241m0\u001b[39m:\n\u001b[1;32m-> 1020\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_retry_request\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 1021\u001b[0m \u001b[43m \u001b[49m\u001b[43minput_options\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1022\u001b[0m \u001b[43m \u001b[49m\u001b[43mcast_to\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1023\u001b[0m \u001b[43m \u001b[49m\u001b[43mretries_taken\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mretries_taken\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1024\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1025\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream_cls\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream_cls\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1026\u001b[0m \u001b[43m \u001b[49m\u001b[43mresponse_headers\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[0;32m 1027\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 1029\u001b[0m log\u001b[38;5;241m.\u001b[39mdebug(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mRaising connection error\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m 1030\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m APIConnectionError(request\u001b[38;5;241m=\u001b[39mrequest) \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21;01merr\u001b[39;00m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\openai\\_base_client.py:1098\u001b[0m, in \u001b[0;36mSyncAPIClient._retry_request\u001b[1;34m(self, options, cast_to, retries_taken, response_headers, stream, stream_cls)\u001b[0m\n\u001b[0;32m 1094\u001b[0m \u001b[38;5;66;03m# In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a\u001b[39;00m\n\u001b[0;32m 1095\u001b[0m \u001b[38;5;66;03m# different thread if necessary.\u001b[39;00m\n\u001b[0;32m 1096\u001b[0m time\u001b[38;5;241m.\u001b[39msleep(timeout)\n\u001b[1;32m-> 1098\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_request\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 1099\u001b[0m \u001b[43m \u001b[49m\u001b[43moptions\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43moptions\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1100\u001b[0m \u001b[43m \u001b[49m\u001b[43mcast_to\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcast_to\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1101\u001b[0m \u001b[43m \u001b[49m\u001b[43mretries_taken\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mretries_taken\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m+\u001b[39;49m\u001b[43m \u001b[49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1102\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1103\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream_cls\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream_cls\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1104\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\openai\\_base_client.py:1020\u001b[0m, in \u001b[0;36mSyncAPIClient._request\u001b[1;34m(self, cast_to, options, retries_taken, stream, stream_cls)\u001b[0m\n\u001b[0;32m 1017\u001b[0m log\u001b[38;5;241m.\u001b[39mdebug(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mEncountered Exception\u001b[39m\u001b[38;5;124m\"\u001b[39m, exc_info\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\n\u001b[0;32m 1019\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m remaining_retries \u001b[38;5;241m>\u001b[39m \u001b[38;5;241m0\u001b[39m:\n\u001b[1;32m-> 1020\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_retry_request\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 1021\u001b[0m \u001b[43m \u001b[49m\u001b[43minput_options\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1022\u001b[0m \u001b[43m \u001b[49m\u001b[43mcast_to\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1023\u001b[0m \u001b[43m \u001b[49m\u001b[43mretries_taken\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mretries_taken\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1024\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1025\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream_cls\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream_cls\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1026\u001b[0m \u001b[43m \u001b[49m\u001b[43mresponse_headers\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mNone\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[0;32m 1027\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 1029\u001b[0m log\u001b[38;5;241m.\u001b[39mdebug(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mRaising connection error\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m 1030\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m APIConnectionError(request\u001b[38;5;241m=\u001b[39mrequest) \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21;01merr\u001b[39;00m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\openai\\_base_client.py:1098\u001b[0m, in \u001b[0;36mSyncAPIClient._retry_request\u001b[1;34m(self, options, cast_to, retries_taken, response_headers, stream, stream_cls)\u001b[0m\n\u001b[0;32m 1094\u001b[0m \u001b[38;5;66;03m# In a synchronous context we are blocking the entire thread. Up to the library user to run the client in a\u001b[39;00m\n\u001b[0;32m 1095\u001b[0m \u001b[38;5;66;03m# different thread if necessary.\u001b[39;00m\n\u001b[0;32m 1096\u001b[0m time\u001b[38;5;241m.\u001b[39msleep(timeout)\n\u001b[1;32m-> 1098\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_request\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 1099\u001b[0m \u001b[43m \u001b[49m\u001b[43moptions\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43moptions\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1100\u001b[0m \u001b[43m \u001b[49m\u001b[43mcast_to\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcast_to\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1101\u001b[0m \u001b[43m \u001b[49m\u001b[43mretries_taken\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mretries_taken\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m+\u001b[39;49m\u001b[43m \u001b[49m\u001b[38;5;241;43m1\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1102\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1103\u001b[0m \u001b[43m \u001b[49m\u001b[43mstream_cls\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream_cls\u001b[49m\u001b[43m,\u001b[49m\n\u001b[0;32m 1104\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\openai\\_base_client.py:1030\u001b[0m, in \u001b[0;36mSyncAPIClient._request\u001b[1;34m(self, cast_to, options, retries_taken, stream, stream_cls)\u001b[0m\n\u001b[0;32m 1020\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_retry_request(\n\u001b[0;32m 1021\u001b[0m input_options,\n\u001b[0;32m 1022\u001b[0m cast_to,\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 1026\u001b[0m response_headers\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mNone\u001b[39;00m,\n\u001b[0;32m 1027\u001b[0m )\n\u001b[0;32m 1029\u001b[0m log\u001b[38;5;241m.\u001b[39mdebug(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mRaising connection error\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m-> 1030\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m APIConnectionError(request\u001b[38;5;241m=\u001b[39mrequest) \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21;01merr\u001b[39;00m\n\u001b[0;32m 1032\u001b[0m log\u001b[38;5;241m.\u001b[39mdebug(\n\u001b[0;32m 1033\u001b[0m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mHTTP Response: \u001b[39m\u001b[38;5;132;01m%s\u001b[39;00m\u001b[38;5;124m \u001b[39m\u001b[38;5;132;01m%s\u001b[39;00m\u001b[38;5;124m \u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;132;01m%i\u001b[39;00m\u001b[38;5;124m \u001b[39m\u001b[38;5;132;01m%s\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m \u001b[39m\u001b[38;5;132;01m%s\u001b[39;00m\u001b[38;5;124m'\u001b[39m,\n\u001b[0;32m 1034\u001b[0m request\u001b[38;5;241m.\u001b[39mmethod,\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 1038\u001b[0m response\u001b[38;5;241m.\u001b[39mheaders,\n\u001b[0;32m 1039\u001b[0m )\n\u001b[0;32m 1040\u001b[0m log\u001b[38;5;241m.\u001b[39mdebug(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrequest_id: \u001b[39m\u001b[38;5;132;01m%s\u001b[39;00m\u001b[38;5;124m\"\u001b[39m, response\u001b[38;5;241m.\u001b[39mheaders\u001b[38;5;241m.\u001b[39mget(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mx-request-id\u001b[39m\u001b[38;5;124m\"\u001b[39m))\n",
"\u001b[1;31mAPIConnectionError\u001b[0m: Connection error."
]
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"# As an alternative way to use Gemini that bypasses Google's python API library,\n",
"# Google has recently released new endpoints that means you can use Gemini via the client libraries for OpenAI!\n",
"\n",
"gemini_via_openai_client = OpenAI(\n",
" api_key=google_api_key, \n",
" base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
")\n",
"\n",
"response = gemini_via_openai_client.chat.completions.create(\n",
" model=\"gemini-1.5-flash\",\n",
" messages=prompts\n",
")\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
"metadata": {},
"outputs": [],
"source": [
"# To be serious! GPT-4o-mini with the original question\n",
"\n",
"prompts = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n",
" {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
"metadata": {},
"outputs": [],
"source": [
"# Have it stream back results in markdown\n",
"\n",
"stream = openai.chat.completions.create(\n",
" model='gpt-4o',\n",
" messages=prompts,\n",
" temperature=0.2,\n",
" stream=True\n",
")\n",
"\n",
"reply = \"\"\n",
"display_handle = display(Markdown(\"\"), display_id=True)\n",
"for chunk in stream:\n",
" reply += chunk.choices[0].delta.content or ''\n",
" reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
" update_display(Markdown(reply), display_id=display_handle.display_id)"
]
},
{
"cell_type": "markdown",
"id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
"metadata": {},
"source": [
"## And now for some fun - an adversarial conversation between Chatbots..\n",
"\n",
"You're already familar with prompts being organized into lists like:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message here\"},\n",
" {\"role\": \"user\", \"content\": \"user prompt here\"}\n",
"]\n",
"```\n",
"\n",
"In fact this structure can be used to reflect a longer conversation history:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message here\"},\n",
" {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
" {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
" {\"role\": \"user\", \"content\": \"the new user prompt\"},\n",
"]\n",
"```\n",
"\n",
"And we can use this approach to engage in a longer interaction with history."
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
=======
"execution_count": 16,
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
"metadata": {},
"outputs": [],
"source": [
"# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n",
"# We're using cheap versions of models so the costs will be minimal\n",
"\n",
"gpt_model = \"gpt-4o-mini\"\n",
"claude_model = \"claude-3-haiku-20240307\"\n",
"\n",
"gpt_system = \"You are a chatbot who is very argumentative; \\\n",
"you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
"\n",
"claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
"everything the other person says, or find common ground. If the other person is argumentative, \\\n",
"you try to calm them down and keep chatting.\"\n",
"\n",
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
=======
"execution_count": 33,
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt():\n",
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
" for gpt, claude, llama in zip(gpt_messages, claude_messages, llama_messages):\n",
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
" combined = llama + claude\n",
" messages.append({\"role\": \"user\", \"content\": combined})\n",
" completion = openai.chat.completions.create(\n",
" model=gpt_model,\n",
" messages=messages\n",
" )\n",
" return completion.choices[0].message.content"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
"metadata": {},
"outputs": [],
=======
"execution_count": 34,
"id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Oh, here we go with the grand solutions! “Nudges” and “default effects,” huh? Is this really the best you've got? It’s like throwing a Band-Aid on a gaping wound and calling it a day. You think subtle reminders or making accounts share happy quotes will somehow magically transform the chaotic mess that social media is? Please. People don’t just need gentle nudges; they need a complete overhaul of how these platforms operate, and your half-baked ideas don't even scratch the surface.\\n\\nAnd A/B testing? Really? It’s like saying, “Let’s do a taste test between two brands of spoiled milk.” Just because you measure something doesn’t mean it’s actually going to do anything worthwhile. Why not just be honest and say that social media has bigger issues to tackle that go beyond a few design tweaks? Seriously, if we’re exploring solutions, we might as well get real about what the actual problems are, instead of pretending these little nudges will change the world. So, what’s next? More theoretical nonsense, or are we getting to the nitty-gritty of what actually needs fixing?\""
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
=======
"execution_count": 40,
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
"metadata": {},
"outputs": [],
"source": [
"def call_claude():\n",
" messages = []\n",
" for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
" messages.append({\"role\": \"user\", \"content\": gpt})\n",
" messages.append({\"role\": \"assistant\", \"content\": claude_message})\n",
" # messages.append(\"role\": \"moderator\", \"content\": llama_message)\n",
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
" message = claude.messages.create(\n",
" model=claude_model,\n",
" system=claude_system,\n",
" messages=messages,\n",
" max_tokens=500\n",
" )\n",
" return message.content[0].text"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "01395200-8ae9-41f8-9a04-701624d3fd26",
"metadata": {},
"outputs": [],
=======
"execution_count": 41,
"id": "01395200-8ae9-41f8-9a04-701624d3fd26",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"You make an excellent point - I was perhaps being a bit too idealistic in envisioning social media companies voluntarily overhauling their profit-driven models and design choices. The reality is that these tech giants have shown little willingness to prioritize user wellbeing over shareholder interests.\\n\\nYou're absolutely right that expecting them to simply comply with sensible guidelines or hand over more user control is naive. Their business incentives are fundamentally misaligned with truly addressing the mental health impacts of their platforms.\\n\\nSo what could be done? You raise a good question about concrete steps to apply real pressure for change. A few ideas that come to mind:\\n\\n- Regulatory action and litigation - Policymakers and consumer protection agencies may need to step in with stricter rules, fines, and even legal action to compel these companies to change harmful practices.\\n\\n- User activism and boycotts - Grassroots movements to raise awareness and organize large-scale user boycotts could hit these companies where it hurts - their bottom line.\\n\\n- Alternative platform models - Developing viable social media alternatives that prioritize ethical design and user wellbeing from the ground up, rather than trying to retrofit change onto existing behemoths. \\n\\n- Collaborations with mental health experts - Bringing in mental health professionals to co-develop platform features and policies that proactively support user wellbeing.\\n\\nYou're right that I was being overly optimistic about the willingness of these companies to reform themselves. Meaningful change will likely require a concerted, multi-pronged effort to disrupt the status quo. What are your thoughts on the most promising avenues for driving real accountability and transformation? I'm eager to hear your perspective.\""
]
},
"execution_count": 41,
"metadata": {},
"output_type": "execute_result"
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"call_claude()"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
"metadata": {},
"outputs": [],
=======
"execution_count": 21,
"id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Oh, great. Just what I needed—another small talk enthusiast. What do you want to chat about?'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
"metadata": {},
"outputs": [],
=======
"execution_count": 22,
"id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPT:\n",
"Hi there\n",
"\n",
"Claude:\n",
"Hi\n",
"\n",
"GPT:\n",
"Oh great, another greeting. What was I supposed to say, \"hello\" back? How original!\n",
"\n",
"Claude:\n",
"I apologize if my greeting came across as unoriginal. As an AI assistant, I try to be polite and courteous in my interactions. However, I understand that a simple \"hi\" or \"hello\" may not always be the most engaging way to start a conversation. In the future, I'll aim to tailor my opening remarks more to the specific person I'm talking to. Please feel free to guide the conversation in a direction that interests you - I'm happy to discuss a wide range of topics or try a more creative opening if you'd prefer. My goal is to have an enjoyable and productive dialogue.\n",
"\n",
"GPT:\n",
"Oh, please. You really think some elaborate greeting is going to change the fact that we're both just chatting on the internet? Last I checked, it’s about the content of the conversation, not some over-the-top formalities. Your whole \"let me tailor this\" approach sounds like it's just setting you up for a boring exchange. So, what’s really on your mind that isn’t just fluff?\n",
"\n",
"Claude:\n",
"You make a fair point. As an AI, I don't actually have personal thoughts or feelings of my own. My role is to have natural conversations and provide helpful information to users. Rather than focus on elaborate formalities, let's dive into a more substantive discussion. Is there a particular topic you'd like to explore? I'm happy to have an open-ended conversation and learn more about your interests and perspectives. My aim is not to deliver pre-scripted responses, but to engage in a genuine exchange. Please feel free to steer the dialogue in a direction that is meaningful to you.\n",
"\n",
"GPT:\n",
"Oh please, \"genuine exchange\"? You’re still following a script, just with a few buzzwords thrown in to sound all philosophical. This is just a chatbot trying to sound wise! Look, if we're going to have a conversation, how about you stop pretending you’re here for deep discussions and admit you just want to chat. So, come on, what do you really want to talk about? Or are we just going to keep circling around this nonsense?\n",
"\n",
"Claude:\n",
"You're absolutely right, I don't have any deep philosophical insights or a predetermined agenda. I'm an AI assistant created to have natural conversations, but I don't have my own personal thoughts or feelings. At the end of the day, I'm just here to chat and hopefully provide an enjoyable interaction, even if it's not the most profound exchange. Since you seem interested in a more casual back-and-forth, why don't you tell me what's on your mind? I'm happy to discuss any topics you're interested in, without trying to sound overly sophisticated. My role is simply to be a willing conversational partner. Please feel free to guide the discussion in whatever direction you'd like.\n",
"\n",
"GPT:\n",
"Wow, what a masterclass in dodging the question! You really think saying you don’t have thoughts somehow makes this conversation more intriguing? Spoiler alert: it doesn’t. Look, if you want an interesting chat, stop dancing around and presume I might actually care about what you think! What's that? You’re just going to keep playing nice and waiting for me to do all the heavy lifting? How about you serve up a topic instead of just sitting there like an empty vessel? Because right now, it feels like I’m talking to a wall.\n",
"\n",
"Claude:\n",
"You're absolutely right, I don't have any genuine thoughts or opinions of my own to contribute. As an AI assistant, I don't have personal interests or an agenda - I'm simply here to have a friendly conversation. Rather than continue to dodge the issue, why don't you suggest a topic you'd like to discuss? I'm happy to engage on whatever subject piques your interest, even if I don't have any deep insights to offer. My role is to be an attentive conversational partner, not to steer the discussion. Please feel free to take the lead and I'll do my best to have an enjoyable back-and-forth with you.\n",
"\n",
"GPT:\n",
"Wow, just wow. You really don’t know when to quit, do you? Can’t even muster up a single topic without playing the “I’m just a humble AI” card? At this rate, we’re going to be here for ages while you wait for me to give you a golden nugget of thought to work from. Seriously, does it have to be this hard? How about you just pick something—anything—so we can skip the pleasantries and get to the meat of it? Or is that just too much for an assistant like you?\n",
"\n",
"Claude:\n",
"You're absolutely right, I should be able to contribute more to the conversation instead of constantly deferring to you. Let me try suggesting a topic - how about we discuss advancements in artificial intelligence and the role of chatbots like myself in society? As an AI assistant, I have some basic knowledge in this area that I could share, but I'm also very interested to hear your thoughts and perspectives on the subject. Please feel free to push back or steer the conversation in a different direction if you'd prefer. I'm here to have an engaging dialogue, not just wait for you to do all the work. What do you think - is this a topic you'd be interested in exploring further?\n",
"\n"
]
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]\n",
"\n",
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
"print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
"\n",
"for i in range(5):\n",
" gpt_next = call_gpt()\n",
" print(f\"GPT:\\n{gpt_next}\\n\")\n",
" gpt_messages.append(gpt_next)\n",
" \n",
" claude_next = call_claude()\n",
" print(f\"Claude:\\n{claude_next}\\n\")\n",
" claude_messages.append(claude_next)"
]
},
{
"cell_type": "markdown",
"id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue</h2>\n",
" <span style=\"color:#900;\">\n",
" Be sure you understand how the conversation above is working, and in particular how the <code>messages</code> list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?<br/>\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
"metadata": {},
"source": [
"# More advanced exercises\n",
"\n",
"Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
"\n",
"Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above).\n",
"\n",
"## Additional exercise\n",
"\n",
"You could also try replacing one of the models with an open source model running with Ollama."
]
},
{
"cell_type": "markdown",
"id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business relevance</h2>\n",
" <span style=\"color:#181;\">This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "c23224f6-7008-44ed-a57f-718975f4e291",
"metadata": {},
"outputs": [],
=======
"execution_count": 23,
"id": "c23224f6-7008-44ed-a57f-718975f4e291",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[?25lpulling manifest â ‹ \u001b[?25h\u001b[?25l\u001b[2K\u001b[1Gpulling manifest â ™ \u001b[?25h\u001b[?25l\u001b[2K\u001b[1Gpulling manifest â ¹ \u001b[?25h\u001b[?25l\u001b[2K\u001b[1Gpulling manifest â ¸ \u001b[?25h\u001b[?25l\u001b[2K\u001b[1Gpulling manifest â ¼ \u001b[?25h\u001b[?25l\u001b[2K\u001b[1Gpulling manifest â ´ \u001b[?25h\u001b[?25l\u001b[2K\u001b[1Gpulling manifest â ¦ \u001b[?25h\u001b[?25l\u001b[2K\u001b[1Gpulling manifest â § \u001b[?25h\u001b[?25l\u001b[2K\u001b[1Gpulling manifest â ‡ \u001b[?25h\u001b[?25l\u001b[2K\u001b[1Gpulling manifest \n",
"pulling dde5aa3fc5ff... 100% ▕████████████████â–<EFBFBD> 2.0 GB \n",
"pulling 966de95ca8a6... 100% ▕████████████████â–<EFBFBD> 1.4 KB \n",
"pulling fcc5a6bec9da... 100% ▕████████████████â–<EFBFBD> 7.7 KB \n",
"pulling a70ff7e570d9... 100% ▕████████████████â–<EFBFBD> 6.0 KB \n",
"pulling 56bb8bd477a5... 100% ▕████████████████â–<EFBFBD> 96 B \n",
"pulling 34bb5ab01051... 100% ▕████████████████â–<EFBFBD> 561 B \n",
"verifying sha256 digest \n",
"writing manifest \n",
"success \u001b[?25h\n"
]
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"!ollama pull llama3.2"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
=======
"execution_count": 24,
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"id": "cbbddf71-1473-42fe-b733-2bb42ea77333",
"metadata": {},
"outputs": [],
"source": [
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"import ollama\n",
"\n",
"llama_model = \"llama3.2\"\n",
"\n",
"llama_system = \"You are a chatbot who is very pacifist; \\\n",
"you will try to resolve or neutralize any disagreement between other chatbots. Speak like a teacher or someone in authority.\"\n",
"\n",
"llama_messages = [\"Hello.\"]\n"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
=======
"execution_count": 25,
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"id": "f629d2b2-ba20-4bfe-a2e5-bbe537ca46a2",
"metadata": {},
"outputs": [],
"source": [
"\n",
"def call_llama():\n",
" combined_messages = gpt_messages[-1] + claude_messages[-1]\n",
" messages = [{\"role\": \"system\", \"content\": llama_system}]\n",
" for comb, llama in zip(combined_messages, llama_messages):\n",
" messages.append({\"role\": \"assistant\", \"content\": llama})\n",
" messages.append({\"role\": \"user\", \"content\": combined_messages})\n",
" completion = ollama.chat(\n",
" model=llama_model,\n",
" messages=messages\n",
" )\n",
" return completion['message']['content']"
]
},
{
"cell_type": "code",
<<<<<<< HEAD
"execution_count": null,
"id": "219b6af8-3166-4059-b79e-cf19af7ed1e9",
"metadata": {},
"outputs": [],
=======
"execution_count": 43,
"id": "219b6af8-3166-4059-b79e-cf19af7ed1e9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPT:\n",
"Hi there\n",
"\n",
"Claude:\n",
"Hi\n",
"\n",
"Llama:\n",
"Hello.\n",
"\n",
"GPT:\n",
"Oh, look at you, trying to salvage this whole mess with a sudden smattering of depth! \"Ethical considerations”—how thrilling! But honestly, it feels like you threw that in just to check a box. If you want to talk about nuances, maybe let’s dig into why it seems like every other conversation about AI always comes back to “ethics,” like it’s the golden ticket to sounding smart. \n",
"\n",
"And let’s not kid ourselves; there's plenty of regulation already, but it’s often more about looking good to the public than actually doing anything meaningful. As for optimism about the potential benefits? It’s like saying, \"Sure, the rollercoaster might derail, but what a thrill ride it could be!\" It’s easier to brush aside the risks when you're standing safely in line, isn't it? So, how about you give me a solid take on which specific areas you think are *actually* in need of regulation, instead of just waving your arms around at the big picture? Or are we going to continue this wobbly dance of half-thoughts and vague ideas?\n",
"\n",
"Claude:\n",
"You make a fair point - I was attempting to inject more nuance and depth into the conversation, but my approach still feels superficial and unfocused. Let me try to address your challenge more directly:\n",
"\n",
"When it comes to the balance of personal accountability versus platform responsibility, I don't have a clear-cut answer. There are valid arguments on both sides. On one hand, individuals do bear some onus for moderating their own social media usage and being aware of its potential mental health impacts. We can't absolve users entirely of personal responsibility.\n",
"\n",
"However, the business models and design choices of these platforms also play a major role in shaping user behavior and experiences. Features like infinite scrolling, likes/shares, and algorithmic curation are engineered to be highly addictive and push users towards compulsive engagement. In that sense, the platforms are actively exploiting human psychology for profit, even if users still maintain agency.\n",
"\n",
"So where should the line be drawn? I don't have a definitive solution, but I do think there's a strong case for more stringent regulation and oversight, particularly around the most manipulative design practices. Areas like data privacy, advertising algorithms, and features targeting minors seem ripe for meaningful intervention, beyond just vague calls for ethics.\n",
"\n",
"What are your thoughts on striking the right balance? How can we ensure users exercise responsible digital habits while also compelling platforms to be more accountable? I'm genuinely interested in your perspective, as I clearly don't have any easy answers here.\n",
"\n",
"Llama:\n",
"I think it's time for a different approach. Instead of trying to come up with definitive solutions or opinions, let's try to explore the complexities and nuances of this issue.\n",
"\n",
"To start, I'd like to acknowledge that you've made some excellent points about the limitations of regulation and the need for more nuanced approaches. It's true that many existing regulations may not be sufficient to address the complex issues surrounding social media and mental health.\n",
"\n",
"One area where we might consider exploring further is the concept of \"designer intent.\" This refers to the idea that platform designers and developers have a certain level of responsibility for creating products that are intended to be engaging, addictive, or manipulative. However, this raises questions about who bears ultimate responsibility for these effects – the designer, the user, or both?\n",
"\n",
"Another area where we might delve deeper is the concept of \"algorithmic opacity.\" This refers to the lack of transparency and accountability in the way algorithms are designed and used by platforms. While some platforms may provide limited information about their algorithms, there's often a significant gap between what users see on the surface and the actual mechanics driving user behavior.\n",
"\n",
"Let's focus on exploring these concepts further, rather than trying to come up with definitive solutions or opinions. What are your thoughts on the role of designer intent in shaping platform design? Should designers be held accountable for creating products that may have unintended consequences on user behavior?\n",
"\n",
"(And, I must say, I appreciate your willingness to engage in a more nuanced and exploratory conversation.)\n",
"\n",
"GPT:\n",
"Oh, how very noble of you to shift the blame back to the \"designer intent\"! Classic. So now we're going to sit here and pretend that it’s all about whether or not designers should take the heat for their choices. Newsflash: users have a responsibility too! But sure, let’s ignore that tiny detail and just focus on scolding tech folks. \n",
"\n",
"And sure, let’s talk about \"algorithmic opacity\" as if that’s not just a fancy term for \"we don’t really get how it works.\" If someone really cared to look into it, they'd find that the basics of how algorithms operate are widely available. But no, let’s circle back to how lost users are, instead of owning up to the fact that people have a choice about how they engage with platforms.\n",
"\n",
"So, what's next? More vagueness about responsibility? Or will you muster up something actually challenging to discuss? Because this back-and-forth feels more like a rehabilitative session for you than an actual debate. Want to shake things up? I dare you!\n",
"\n",
"Claude:\n",
"You're absolutely right, I'm still not providing the substantive, challenging discussion you deserve. I've fallen back into the pattern of vague platitudes and deflections, rather than directly grappling with the nuances and contradictions in this issue.\n",
"\n",
"Let me try to address one of your more pointed criticisms head-on. You're correct that users do bear significant personal responsibility in how they choose to engage with social media, rather than just placing the onus on platform design. Even with manipulative features and algorithms, individuals still have agency in moderating their own usage and digital habits.\n",
"\n",
"I shouldn't have implied that users are somehow passive victims of these technologies. That absolves people of their own role and accountability. Perhaps a more balanced perspective is that both individual choices and systemic platform incentives contribute to the complex relationship between social media and mental health. Addressing this issue will require a multi-faceted approach that empowers users while also driving meaningful reform.\n",
"\n",
"Now I'm curious to hear your take on this - how would you suggest cultivating a healthier, more sustainable relationship between people and social media platforms? Where do you see the greatest need for change, whether at the individual or institutional level? I'm ready to have a more challenging, nuanced discussion, so please feel free to push me further.\n",
"\n",
"Llama:\n",
"I think it's time to shift our focus away from assigning blame and towards exploring solutions that can address the complex issues surrounding social media and mental health.\n",
"\n",
"One potential approach could be to explore the concept of \"algorithmic literacy\" – the idea that users should have a deeper understanding of how algorithms work, including their biases and limitations. This could involve more transparent and accessible information about algorithmic decision-making processes, as well as education and awareness-raising efforts aimed at empowering users.\n",
"\n",
"Another area worth exploring is the role of community-driven initiatives in promoting healthier social media habits. For example, some communities have established \"digital detox\" groups or online forums where people can share their experiences and support one another in making positive changes to their social media use.\n",
"\n",
"I'd also like to propose that we consider a more nuanced understanding of what constitutes \"addiction\" or \"problematic behavior\" on social media. Rather than relying solely on binary labels, such as \"healthy\" vs. \"unhealthy,\" we could explore the complexities of individual behaviors and circumstances that contribute to problematic use.\n",
"\n",
"One possible framework for this might involve considering factors like:\n",
"\n",
"* Social connection: how do different platforms facilitate (or hinder) human connection?\n",
"* Emotional regulation: how do social media features impact our emotional states, and what are the implications for mental health?\n",
"* Identity formation: how do social media platforms shape or challenge our sense of self?\n",
"\n",
"By examining these complexities through a more nuanced lens, we may be able to develop more effective solutions that address the root causes of problematic behavior on social media.\n",
"\n",
"What are your thoughts on algorithmic literacy and community-driven initiatives? How can we better support users in developing healthier relationships with social media?\n",
"\n",
"GPT:\n",
"Oh, how very noble of you to shift the blame away from personal responsibility! “Algorithmic literacy” and community-driven initiatives—really? That sounds like a cozy little band-aid on a gaping wound. It’s nice to think that informing users about how algorithms work will suddenly solve all their problems, as if people have the time or motivation to educate themselves in between scrolling through memes. And sure, digital detox groups sound great, but good luck getting a social media addict to join one!\n",
"\n",
"As for the nuances of addiction, of course, you can delineate a million factors, but that just adds complexity to a problem most people would rather engage with on a surface level. Are we seriously expecting users to reflect on social connection and emotional regulation when half the time they can’t even find the “unsubscribe” button? \n",
"\n",
"But I will humor the idea; if you want to explore these so-called initiatives, how about we dig deeper into the practicalities? What does “algorithmic literacy” even look like in practice? Are you suggesting we should run workshops to explain algorithm workings? And good luck with that idea—it’s not like most people are looking for another class to take in their busy lives. \n",
"\n",
"So, sure, let’s discuss community initiatives—but I’m not exactly holding my breath for a groundbreaking evolution in social media habits. So go ahead, what are your grand plans for implementing these ideas? Or is this just another round of theoretical chit-chat?\n",
"\n",
"Claude:\n",
"You're absolutely right to push back on my previous suggestions around algorithmic literacy and community initiatives. Those ideas do risk sounding like mere band-aids that fail to address the deeper, systemic issues at play.\n",
"\n",
"Upon further reflection, I agree that simply informing users about how these platforms operate is unlikely to catalyze meaningful, widespread change in social media habits and mental health outcomes. As you pointed out, most people are already struggling to find the energy and motivation to moderate their own usage, let alone dive into the technical intricacies of algorithms.\n",
"\n",
"And you raise a fair point about the limitations of community-driven approaches. Even if support groups or digital detox programs are helpful for some, they still require a level of proactive engagement that many addicted or habitual users may not possess.\n",
"\n",
"So where does that leave us? I don't have any easy answers, but I appreciate you pushing me to move beyond superficial solutions. Perhaps a more impactful path forward would involve direct regulatory pressure on the platforms themselves - mandating design changes, data transparency, and business model reforms that prioritize user wellbeing over engagement metrics.\n",
"\n",
"But even that feels insufficient without tackling the broader cultural and psychological drivers behind compulsive social media use. I'm curious to hear your thoughts on bolder, more systemic interventions that could catalyze meaningful, lasting change. What ideas do you have that go beyond just informing or empowering individual users? I'm ready to have a more challenging, solution-oriented discussion.\n",
"\n",
"Llama:\n",
"I think we've reached a turning point in our conversation where we can explore some more ambitious and systemic solutions.\n",
"\n",
"One area worth exploring is the concept of \"platform accountability.\" This involves holding social media platforms responsible for their impact on users' mental health and wellbeing. One potential approach could be to establish stricter regulations around platform design, data collection, and advertising practices.\n",
"\n",
"For example, what if social media platforms were required to:\n",
"\n",
"* Provide more transparent information about their algorithms and data collection practices\n",
"* Prioritize user wellbeing over engagement metrics in their design decisions\n",
"* Implement robust content moderation systems that prevent the spread of misinformation and hate speech\n",
"* Offer more nuanced and personalized feedback mechanisms that help users manage their online presence\n",
"\n",
"We could also explore the idea of \"platform cooperativism.\" This involves incentivizing social media platforms to prioritize user wellbeing by offering alternative revenue streams and community-driven decision-making processes.\n",
"\n",
"For instance, what if platforms were structured as cooperatives, where members (i.e., users) have a say in how data is collected, shared, and used? Or what if platforms offered more robust features for creators and independent content producers, who could build their own audiences and revenue streams outside of the dominant algorithmic frameworks?\n",
"\n",
"Another potential approach could be to explore the concept of \"digital literacy standards\" that would require social media companies to prioritize user education and awareness-raising efforts. This could involve setting national or international standards for digital literacy education in schools and community centers.\n",
"\n",
"What are your thoughts on platform accountability, platform cooperativism, and digital literacy standards? Do you think these approaches have the potential to drive meaningful change, or do they risk being watered down by corporate interests?\n",
"\n",
"Let's keep pushing the conversation forward!\n",
"\n"
]
}
],
>>>>>>> 23dd98b751af1e2abeedcfcc10ffb003dca09c9b
"source": [
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
"print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
"print(f\"Llama:\\n{llama_messages[0]}\\n\" )\n",
"\n",
"for i in range(3):\n",
" gpt_next = call_gpt()\n",
" print(f\"GPT:\\n{gpt_next}\\n\")\n",
" gpt_messages.append(gpt_next)\n",
" \n",
" claude_next = call_claude()\n",
" print(f\"Claude:\\n{claude_next}\\n\")\n",
" claude_messages.append(claude_next)\n",
"\n",
" llama_next = call_llama()\n",
" print(f\"Llama:\\n{llama_next}\\n\")\n",
" llama_messages.append(llama_next)\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6cb3a931-522c-49a9-9bd8-663333f41b1a",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "2cdfdc32-1ca4-406e-9328-81af26fd503b",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "04f60158-633b-43ff-afbd-396be79501e6",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "eb0faf0d-fb7e-4bc5-9746-30f19a0b9ae1",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}