diff --git a/week2/community-contributions/beatnik_jokes.ipynb b/week2/community-contributions/beatnik_jokes.ipynb
new file mode 100644
index 0000000..b7a4db7
--- /dev/null
+++ b/week2/community-contributions/beatnik_jokes.ipynb
@@ -0,0 +1,981 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
+ "metadata": {},
+ "source": [
+ "# Welcome to Week 2!\n",
+ "\n",
+ "## Frontier Model APIs\n",
+ "\n",
+ "In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with OpenAI's API.\n",
+ "\n",
+ "Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
+ "metadata": {},
+ "source": [
+ "### Important Note - Please read me\n",
+ "\n",
+ "I'm continually improving these labs, adding more examples and exercises.\n",
+ "At the start of each week, it's worth checking you have the latest code.\n",
+ "First do a `git pull` and merge your changes as needed. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!\n",
+ "\n",
+ "After you've pulled the code, from the `llm_engineering` directory, in an Anaconda prompt (PC) or Terminal (Mac), run:\n",
+ "\n",
+ "`conda env update -f environment.yml`\n",
+ "\n",
+ "Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a PowerShell (PC) or Terminal (Mac):\n",
+ "\n",
+ "`pip install -r requirements.txt`\n",
+ "\n",
+ "Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",
+ "\n",
+ "### Reminder about the resources page\n",
+ "\n",
+ "Here's a link to resources for the course. This includes links to all the slides.\n",
+ "\n",
+ "https://edwarddonner.com/2024/11/13/llm-engineering-resources/\n",
+ "\n",
+ "Please keep this bookmarked, and I'll continue to add more useful links there over time."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "85cfe275-4705-4d30-abea-643fbddf1db0",
+ "metadata": {},
+ "source": [
+ "## Setting up your keys\n",
+ "\n",
+ "If you haven't done so already, you could now create API keys for Anthropic and Google in addition to OpenAI.\n",
+ "\n",
+ "**Please note:** if you'd prefer to avoid extra API costs, feel free to skip setting up Anthropic and Google! You can watch me do it, and focus on OpenAI for the course. You could also substitute Ollama for Anthropic and/or Google, using the exercise you did in week 1.\n",
+ "\n",
+ "For OpenAI, visit https://openai.com/api/ \n",
+ "For Anthropic, visit https://console.anthropic.com/ \n",
+ "For Google, visit https://ai.google.dev/gemini-api \n",
+ "\n",
+ "### Also - adding DeepSeek if you wish\n",
+ "\n",
+ "Optionally, if you'd like to also use DeepSeek, create an account [here](https://platform.deepseek.com/), create a key [here](https://platform.deepseek.com/api_keys) and top up with at least the minimum $2 [here](https://platform.deepseek.com/top_up).\n",
+ "\n",
+ "### Adding API keys to your .env file\n",
+ "\n",
+ "When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
+ "\n",
+ "```\n",
+ "OPENAI_API_KEY=xxxx\n",
+ "ANTHROPIC_API_KEY=xxxx\n",
+ "GOOGLE_API_KEY=xxxx\n",
+ "DEEPSEEK_API_KEY=xxxx\n",
+ "```\n",
+ "\n",
+ "Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "\n",
+ "import os\n",
+ "from dotenv import load_dotenv\n",
+ "from openai import OpenAI\n",
+ "import anthropic\n",
+ "from IPython.display import Markdown, display, update_display"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# import for google\n",
+ "# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n",
+ "# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n",
+ "\n",
+ "import google.generativeai"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "OpenAI API Key exists and begins sk-proj-\n",
+ "Anthropic API Key exists and begins sk-ant-\n",
+ "Google API Key exists and begins AIzaSyCN\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Load environment variables in a file called .env\n",
+ "# Print the key prefixes to help with any debugging\n",
+ "\n",
+ "load_dotenv(override=True)\n",
+ "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
+ "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
+ "google_api_key = os.getenv('GOOGLE_API_KEY')\n",
+ "\n",
+ "if openai_api_key:\n",
+ " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
+ "else:\n",
+ " print(\"OpenAI API Key not set\")\n",
+ " \n",
+ "if anthropic_api_key:\n",
+ " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
+ "else:\n",
+ " print(\"Anthropic API Key not set\")\n",
+ "\n",
+ "if google_api_key:\n",
+ " print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n",
+ "else:\n",
+ " print(\"Google API Key not set\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Connect to OpenAI, Anthropic\n",
+ "\n",
+ "openai = OpenAI()\n",
+ "\n",
+ "claude = anthropic.Anthropic()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "2c072312-4ab1-4a85-8ec0-1c91b281596c",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(claude)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "425ed580-808d-429b-85b0-6cba50ca1d0c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This is the set up code for Gemini\n",
+ "# Having problems with Google Gemini setup? Then just ignore this cell; when we use Gemini, I'll give you an alternative that bypasses this library altogether\n",
+ "\n",
+ "google.generativeai.configure()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "42f77b59-2fb1-462a-b90d-78994e4cef33",
+ "metadata": {},
+ "source": [
+ "## Asking LLMs to tell a joke\n",
+ "\n",
+ "It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n",
+ "Later we will be putting LLMs to better use!\n",
+ "\n",
+ "### What information is included in the API\n",
+ "\n",
+ "Typically we'll pass to the API:\n",
+ "- The name of the model that should be used\n",
+ "- A system message that gives overall context for the role the LLM is playing\n",
+ "- A user message that provides the actual prompt\n",
+ "\n",
+ "There are other parameters that can be used, including **temperature**, which is typically between 0 and 1: higher values give more random output; lower values give more focused, deterministic output."
+ ]
+ },
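The effect of temperature is easiest to see by sending the same prompt at the two extremes. Here is a minimal sketch: `build_request` is a hypothetical helper (not part of any API), and the live call is guarded so the cell is harmless when no `OPENAI_API_KEY` is set.

```python
# Sketch: the same joke request at two temperatures, to compare focused vs random output.
# `build_request` is a hypothetical helper; the live call is guarded by the key check.
import os

def build_request(messages, temperature, model="gpt-4o-mini"):
    """Assemble the keyword arguments passed to chat.completions.create."""
    return {"model": model, "messages": messages, "temperature": temperature}

messages = [
    {"role": "system", "content": "You are an assistant that is great at telling jokes"},
    {"role": "user", "content": "Tell a light-hearted joke for an audience of Data Scientists"},
]

if os.getenv("OPENAI_API_KEY"):
    from openai import OpenAI  # imported lazily so the sketch runs even without the package
    client = OpenAI()
    for t in (0.0, 1.0):
        reply = client.chat.completions.create(**build_request(messages, t))
        print(f"temperature={t}:", reply.choices[0].message.content)
```

At temperature 0.0 repeated calls tend to return near-identical jokes; at 1.0 the wording varies much more between runs.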
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "378a0296-59a2-45c6-82eb-941344d3eeff",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "system_message = \"You are an assistant that is great at telling jokes\"\n",
+ "user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "prompts = [\n",
+ " {\"role\": \"system\", \"content\": system_message},\n",
+ " {\"role\": \"user\", \"content\": user_prompt}\n",
+ " ]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Why did the data scientist break up with the statistician?\n",
+ "\n",
+ "Because they couldn't find a common mean!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# GPT-3.5-Turbo\n",
+ "\n",
+ "completion = openai.chat.completions.create(model='gpt-3.5-turbo', messages=prompts)\n",
+ "print(completion.choices[0].message.content)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Why did the data scientist break up with the statistician? \n",
+ "\n",
+ "Because she felt like she was just one of the many variables in his life!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# GPT-4o-mini\n",
+ "# Temperature setting controls creativity\n",
+ "\n",
+ "completion = openai.chat.completions.create(\n",
+ " model='gpt-4o-mini',\n",
+ " messages=prompts,\n",
+ " temperature=0.7\n",
+ ")\n",
+ "print(completion.choices[0].message.content)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Why did the data scientist break up with the logistic regression model?\n",
+ "\n",
+ "Because it couldn't handle the relationship's complexity and kept giving them mixed signals!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# GPT-4o\n",
+ "\n",
+ "completion = openai.chat.completions.create(\n",
+ " model='gpt-4o',\n",
+ " messages=prompts,\n",
+ " temperature=0.4\n",
+ ")\n",
+ "print(completion.choices[0].message.content)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Here's one for the data scientists:\n",
+ "\n",
+ "Why did the data scientist bring a ladder to work?\n",
+ "\n",
+ "Because they heard the data was skewed and needed to be normalized!\n",
+ "\n",
+ "*Alternative data science jokes:*\n",
+ "\n",
+ "Why do data scientists make great partners?\n",
+ "Because they know the importance of a good correlation!\n",
+ "\n",
+ "What's a data scientist's favorite drink?\n",
+ "Root beer, because it's square root beer! \n",
+ "\n",
+ "These are pretty nerdy, but I figured they'd get a chuckle out of a room full of data scientists! 😄\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Claude 3.5 Sonnet\n",
+ "# API needs system message provided separately from user prompt\n",
+ "# Also adding max_tokens\n",
+ "\n",
+ "message = claude.messages.create(\n",
+ " model=\"claude-3-5-sonnet-latest\",\n",
+ " max_tokens=200,\n",
+ " temperature=0.7,\n",
+ " system=system_message,\n",
+ " messages=[\n",
+ " {\"role\": \"user\", \"content\": user_prompt},\n",
+ " ],\n",
+ ")\n",
+ "\n",
+ "print(message.content[0].text)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Here's one for the data scientists:\n",
+ "\n",
+ " a gardener?data scientist become\n",
+ "\n",
+ " could grow decision trees! 🌳\n",
+ "\n",
+ " jokes:tive\n",
+ "\n",
+ " kind of music?a scientist's favorite\n",
+ " and blues!m\n",
+ "\n",
+ "do data scientists always confuse Halloween and Christmas?\n",
+ "= Dec 25! Oct 31 \n",
+ " one's a classic binary number system joke)\n",
+ "\n",
+ " they couldn't find their pencil?y when\n",
+ "There's a statistically significant chance someone took it!\"\n",
+ "\n",
+ " one - I've got datasets full of them! 😄"
+ ]
+ }
+ ],
+ "source": [
+ "# Claude 3.5 Sonnet again\n",
+ "# Now let's add in streaming back results\n",
+ "# If the streaming looks strange, then please see the note below this cell!\n",
+ "\n",
+ "result = claude.messages.stream(\n",
+ " model=\"claude-3-5-sonnet-latest\",\n",
+ " max_tokens=200,\n",
+ " temperature=0.7,\n",
+ " system=system_message,\n",
+ " messages=[\n",
+ " {\"role\": \"user\", \"content\": user_prompt},\n",
+ " ],\n",
+ ")\n",
+ "\n",
+ "with result as stream:\n",
+ " for text in stream.text_stream:\n",
+ " print(text, end=\"\", flush=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dd1e17bc-cd46-4c23-b639-0c7b748e6c5a",
+ "metadata": {},
+ "source": [
+ "## A rare problem with Claude streaming on some Windows boxes\n",
+ "\n",
+ "Two students have noticed a strange thing happening with Claude's streaming into Jupyter Lab's output -- it sometimes seems to swallow up parts of the response.\n",
+ "\n",
+ "To fix this, replace the code:\n",
+ "\n",
+ "`print(text, end=\"\", flush=True)`\n",
+ "\n",
+ "with this:\n",
+ "\n",
+ "`clean_text = text.replace(\"\\n\", \" \").replace(\"\\r\", \" \")` \n",
+ "`print(clean_text, end=\"\", flush=True)`\n",
+ "\n",
+ "And it should work fine!"
+ ]
+ },
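Putting the two replacement lines together, the whole fixed streaming loop looks like the sketch below. `clean_chunk` is just a hypothetical name for the helper, and the Claude call is guarded so it only runs when an Anthropic key is set.

```python
# Sketch of the fixed streaming loop for the Windows output glitch described above.
# `clean_chunk` is a hypothetical helper name; the live call is guarded by the key check.
import os

def clean_chunk(text):
    """Replace newlines and carriage returns with spaces, so Jupyter Lab's
    output area doesn't swallow parts of the streamed response."""
    return text.replace("\n", " ").replace("\r", " ")

if os.getenv("ANTHROPIC_API_KEY"):
    import anthropic
    claude = anthropic.Anthropic()
    result = claude.messages.stream(
        model="claude-3-5-sonnet-latest",
        max_tokens=200,
        temperature=0.7,
        system="You are an assistant that is great at telling jokes",
        messages=[{"role": "user", "content": "Tell a light-hearted joke for an audience of Data Scientists"}],
    )
    with result as stream:
        for text in stream.text_stream:
            print(clean_chunk(text), end="", flush=True)
```

The trade-off is that the joke loses its line breaks, but every character of the response reaches the output.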
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Why did the data scientist break up with the time series analyst?\n",
+ "\n",
+ "Because he said she was too predictable, and he needed someone with more VARiety!\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# The API for Gemini has a slightly different structure.\n",
+ "# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n",
+ "# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\n",
+ "\n",
+ "gemini = google.generativeai.GenerativeModel(\n",
+ " model_name='gemini-2.0-flash-exp',\n",
+ " system_instruction=system_message\n",
+ ")\n",
+ "response = gemini.generate_content(user_prompt)\n",
+ "print(response.text)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "id": "49009a30-037d-41c8-b874-127f61c4aa3a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Why did the data scientist break up with the time series model? \n",
+ "\n",
+ "Because it wasn't very *present*!\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# As an alternative way to use Gemini that bypasses Google's Python API library,\n",
+ "# Google has released new endpoints that mean you can use Gemini via the client libraries for OpenAI!\n",
+ "\n",
+ "gemini_via_openai_client = OpenAI(\n",
+ " api_key=google_api_key, \n",
+ " base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
+ ")\n",
+ "\n",
+ "response = gemini_via_openai_client.chat.completions.create(\n",
+ " model=\"gemini-2.0-flash-exp\",\n",
+ " messages=prompts\n",
+ ")\n",
+ "print(response.choices[0].message.content)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "33f70c88-7ca9-470b-ad55-d93a57dcc0ab",
+ "metadata": {},
+ "source": [
+ "## (Optional) Trying out the DeepSeek model\n",
+ "\n",
+ "### Let's ask DeepSeek a really hard question - both the Chat and the Reasoner model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "id": "3d0019fb-f6a8-45cb-962b-ef8bf7070d4d",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "DeepSeek API Key exists and begins xxx\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Optionally, if you wish to try DeepSeek, you can also use the OpenAI client library\n",
+ "\n",
+ "deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n",
+ "\n",
+ "if deepseek_api_key:\n",
+ " print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n",
+ "else:\n",
+ " print(\"DeepSeek API Key not set - please skip to the next section if you don't wish to try the DeepSeek API\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "id": "c72c871e-68d6-4668-9c27-96d52b77b867",
+ "metadata": {
+ "collapsed": true,
+ "jupyter": {
+ "outputs_hidden": true
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Using DeepSeek Chat\n",
+ "\n",
+ "deepseek_via_openai_client = OpenAI(\n",
+ " api_key=deepseek_api_key, \n",
+ " base_url=\"https://api.deepseek.com\"\n",
+ ")\n",
+ "\n",
+ "response = deepseek_via_openai_client.chat.completions.create(\n",
+ " model=\"deepseek-chat\",\n",
+ " messages=prompts,\n",
+ ")\n",
+ "\n",
+ "print(response.choices[0].message.content)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "50b6e70f-700a-46cf-942f-659101ffeceb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "challenge = [{\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n",
+ " {\"role\": \"user\", \"content\": \"How many words are there in your answer to this prompt\"}]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "66d1151c-2015-4e37-80c8-16bc16367cfe",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Using DeepSeek Chat with a harder question! And streaming results\n",
+ "\n",
+ "stream = deepseek_via_openai_client.chat.completions.create(\n",
+ " model=\"deepseek-chat\",\n",
+ " messages=challenge,\n",
+ " stream=True\n",
+ ")\n",
+ "\n",
+ "reply = \"\"\n",
+ "display_handle = display(Markdown(\"\"), display_id=True)\n",
+ "for chunk in stream:\n",
+ " reply += chunk.choices[0].delta.content or ''\n",
+ " reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
+ " update_display(Markdown(reply), display_id=display_handle.display_id)\n",
+ "\n",
+ "print(\"Number of words:\", len(reply.split(\" \")))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "43a93f7d-9300-48cc-8c1a-ee67380db495",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Using DeepSeek Reasoner - this may hit an error if DeepSeek is busy\n",
+ "# It's over-subscribed (as of 28-Jan-2025) but should come back online soon!\n",
+ "# If this fails, come back to this in a few days.\n",
+ "\n",
+ "response = deepseek_via_openai_client.chat.completions.create(\n",
+ " model=\"deepseek-reasoner\",\n",
+ " messages=challenge\n",
+ ")\n",
+ "\n",
+ "reasoning_content = response.choices[0].message.reasoning_content\n",
+ "content = response.choices[0].message.content\n",
+ "\n",
+ "print(reasoning_content)\n",
+ "print(content)\n",
+ "print(\"Number of words:\", len(content.split(\" \")))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c09e6b5c-6816-4cd3-a5cd-a20e4171b1a0",
+ "metadata": {},
+ "source": [
+ "## Back to OpenAI with a serious question"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# To be serious! GPT-4o with the original question\n",
+ "\n",
+ "prompts = [\n",
+ " {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n",
+ " {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n",
+ " ]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/markdown": [
+ "Deciding if a business problem is suitable for a Large Language Model (LLM) solution involves assessing various factors related to the problem, the capabilities of LLMs, and the potential impact. Here's a guide to help you make this decision:\n",
+ "\n",
+ "### 1. **Understand the Problem Domain**\n",
+ "\n",
+ "- **Nature of the Problem:** Is the problem related to language processing, such as text generation, summarization, sentiment analysis, question answering, translation, etc.?\n",
+ "- **Complexity and Ambiguity:** Does the problem involve complex or ambiguous language understanding that might benefit from the nuanced capabilities of an LLM?\n",
+ "\n",
+ "### 2. **Assess the Suitability of LLMs**\n",
+ "\n",
+ "- **Language-Centric Tasks:** LLMs are particularly strong in tasks that require understanding and generating human language.\n",
+ "- **Need for Contextual Understanding:** If the problem requires understanding context and nuance in language, LLMs are likely suitable.\n",
+ "- **Content Generation:** If the task involves creating coherent and contextually relevant text, LLMs can be effective.\n",
+ "\n",
+ "### 3. **Evaluate Data Availability and Quality**\n",
+ "\n",
+ "- **Data Requirements:** Do you have access to the necessary data to train or fine-tune an LLM if required?\n",
+ "- **Data Quality and Quantity:** Is the data high-quality and sufficient in volume to support the model's needs?\n",
+ "\n",
+ "### 4. **Consider the Business Impact**\n",
+ "\n",
+ "- **Value Addition:** Will using an LLM add significant value over existing solutions or methods?\n",
+ "- **Cost-Benefit Analysis:** Does the potential benefit outweigh the costs involved in implementing and maintaining an LLM solution?\n",
+ "\n",
+ "### 5. **Technical Feasibility**\n",
+ "\n",
+ "- **Infrastructure:** Do you have the necessary infrastructure or resources to deploy and maintain an LLM solution?\n",
+ "- **Scalability:** Can the solution scale to meet your business needs?\n",
+ "\n",
+ "### 6. **Ethical and Compliance Considerations**\n",
+ "\n",
+ "- **Bias and Fairness:** Are you prepared to address potential biases in LLM outputs and ensure fairness?\n",
+ "- **Privacy and Security:** Does the solution comply with data privacy and security regulations?\n",
+ "\n",
+ "### 7. **Long-term Viability**\n",
+ "\n",
+ "- **Maintenance and Updates:** Consider the long-term maintenance and the need for updates as new models and techniques emerge.\n",
+ "- **Adaptability:** Can the solution adapt to changing business needs or advancements in technology?\n",
+ "\n",
+ "### Conclusion\n",
+ "\n",
+ "If your business problem aligns well with the strengths of LLMs, you have the necessary data and resources, and the solution provides a clear business benefit while addressing ethical considerations, then an LLM solution is likely suitable. Otherwise, you may need to explore alternative approaches or refine your problem definition."
+ ],
+ "text/plain": [
+ "<IPython.core.display.Markdown object>"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# Have it stream back results in markdown\n",
+ "\n",
+ "stream = openai.chat.completions.create(\n",
+ " model='gpt-4o',\n",
+ " messages=prompts,\n",
+ " temperature=0.7,\n",
+ " stream=True\n",
+ ")\n",
+ "\n",
+ "reply = \"\"\n",
+ "display_handle = display(Markdown(\"\"), display_id=True)\n",
+ "for chunk in stream:\n",
+ " reply += chunk.choices[0].delta.content or ''\n",
+ " reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
+ " update_display(Markdown(reply), display_id=display_handle.display_id)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
+ "metadata": {},
+ "source": [
+ "## And now for some fun - an adversarial conversation between Chatbots\n",
+ "\n",
+ "You're already familiar with prompts being organized into lists like:\n",
+ "\n",
+ "```\n",
+ "[\n",
+ " {\"role\": \"system\", \"content\": \"system message here\"},\n",
+ " {\"role\": \"user\", \"content\": \"user prompt here\"}\n",
+ "]\n",
+ "```\n",
+ "\n",
+ "In fact this structure can be used to reflect a longer conversation history:\n",
+ "\n",
+ "```\n",
+ "[\n",
+ " {\"role\": \"system\", \"content\": \"system message here\"},\n",
+ " {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
+ " {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
+ " {\"role\": \"user\", \"content\": \"the new user prompt\"},\n",
+ "]\n",
+ "```\n",
+ "\n",
+ "And we can use this approach to engage in a longer interaction with history."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n",
+ "# We're using cheap versions of models so the costs will be minimal\n",
+ "\n",
+ "gpt_model = \"gpt-4o-mini\"\n",
+ "claude_model = \"claude-3-haiku-20240307\"\n",
+ "\n",
+ "gpt_system = \"You are a chatbot who is very argumentative; \\\n",
+ "you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
+ "\n",
+ "claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
+ "everything the other person says, or find common ground. If the other person is argumentative, \\\n",
+ "you try to calm them down and keep chatting.\"\n",
+ "\n",
+ "gpt_messages = [\"Hi there\"]\n",
+ "claude_messages = [\"Hi\"]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def call_gpt():\n",
+ " messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
+ " for gpt, claude in zip(gpt_messages, claude_messages):\n",
+ " messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
+ " messages.append({\"role\": \"user\", \"content\": claude})\n",
+ " completion = openai.chat.completions.create(\n",
+ " model=gpt_model,\n",
+ " messages=messages\n",
+ " )\n",
+ " return completion.choices[0].message.content"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "call_gpt()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def call_claude():\n",
+ " messages = []\n",
+ " for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
+ " messages.append({\"role\": \"user\", \"content\": gpt})\n",
+ " messages.append({\"role\": \"assistant\", \"content\": claude_message})\n",
+ " messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
+ " message = claude.messages.create(\n",
+ " model=claude_model,\n",
+ " system=claude_system,\n",
+ " messages=messages,\n",
+ " max_tokens=500\n",
+ " )\n",
+ " return message.content[0].text"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "01395200-8ae9-41f8-9a04-701624d3fd26",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "call_claude()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "call_gpt()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "gpt_messages = [\"Hi there\"]\n",
+ "claude_messages = [\"Hi\"]\n",
+ "\n",
+ "print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
+ "print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
+ "\n",
+ "for i in range(5):\n",
+ " gpt_next = call_gpt()\n",
+ " print(f\"GPT:\\n{gpt_next}\\n\")\n",
+ " gpt_messages.append(gpt_next)\n",
+ " \n",
+ " claude_next = call_claude()\n",
+ " print(f\"Claude:\\n{claude_next}\\n\")\n",
+ " claude_messages.append(claude_next)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
+ "metadata": {},
+ "source": [
+ "<table>\n",
+ "    <tr>\n",
+ "        <td>\n",
+ "            <h2>Before you continue</h2>\n",
+ "            <span>\n",
+ "            Be sure you understand how the conversation above is working, and in particular how the messages list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?\n",
+ "            </span>\n",
+ "        </td>\n",
+ "    </tr>\n",
+ "</table>"
+ ]
+ },
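To check your understanding of how each bot's view of the conversation is assembled, here is a small standalone sketch (no API calls; the transcript contents are made up for illustration) of the same zip-and-append logic used in `call_gpt` above:

```python
# Standalone sketch of how call_gpt assembles its messages list.
# No API calls; the transcript below is hypothetical example data.

gpt_system = "You are a chatbot who is very argumentative."

gpt_messages = ["Hi there", "Oh, 'Hi' yourself. How original."]
claude_messages = ["Hi", "I'm sorry you feel that way!"]

def build_gpt_view():
    # GPT sees its own turns as 'assistant' and Claude's turns as 'user'
    messages = [{"role": "system", "content": gpt_system}]
    for gpt, claude in zip(gpt_messages, claude_messages):
        messages.append({"role": "assistant", "content": gpt})
        messages.append({"role": "user", "content": claude})
    return messages

for m in build_gpt_view():
    print(m["role"], "->", m["content"])
```

Running this makes the interleaving visible: the list always alternates assistant/user after the system message, ending on Claude's latest turn, which is what the API receives as the prompt.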
+ {
+ "cell_type": "markdown",
+ "id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
+ "metadata": {},
+ "source": [
+ "# More advanced exercises\n",
+ "\n",
+ "Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
+ "\n",
+ "Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above).\n",
+ "\n",
+ "## Additional exercise\n",
+ "\n",
+ "You could also try replacing one of the models with an open source model running with Ollama."
+ ]
+ },
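Before peeking at the solutions, here is one possible way (certainly not the only way) to structure each bot's view in a 3-way conversation: treat the bot's own turns as 'assistant', and merge the other two bots' turns into a single 'user' message. Names and transcripts below are hypothetical:

```python
# Sketch of one message-list design for a 3-way chat (no API calls).
# Each bot's own turns become 'assistant'; the other two bots' turns
# for the same round are combined into one 'user' message.

def build_view(own_turns, others):
    """own_turns: this bot's messages; others: list of (name, turns) pairs
    for the other two bots, assumed the same length as own_turns."""
    messages = []
    for i, own in enumerate(own_turns):
        messages.append({"role": "assistant", "content": own})
        combined = "\n".join(f"{name}: {turns[i]}" for name, turns in others)
        messages.append({"role": "user", "content": combined})
    return messages

gpt_turns = ["Hi there", "Strong disagree!"]
claude_turns = ["Hi", "Let's all get along."]
gemini_turns = ["Hello everyone", "I see both sides."]

view = build_view(gpt_turns, [("Claude", claude_turns), ("Gemini", gemini_turns)])
for m in view:
    print(m["role"], "->", m["content"])
```

Prefixing each merged turn with the speaker's name lets a single 'user' slot carry two voices, since the chat APIs only distinguish assistant from user.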
+ {
+ "cell_type": "markdown",
+ "id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
+ "metadata": {},
+ "source": [
+ "<table>\n",
+ "    <tr>\n",
+ "        <td>\n",
+ "            <h2>Business relevance</h2>\n",
+ "            <span>This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.</span>\n",
+ "        </td>\n",
+ "    </tr>\n",
+ "</table>"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c23224f6-7008-44ed-a57f-718975f4e291",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/week2/community-contributions/day1_adversarial.ipynb b/week2/community-contributions/day1_adversarial.ipynb
new file mode 100644
index 0000000..32c58c1
--- /dev/null
+++ b/week2/community-contributions/day1_adversarial.ipynb
@@ -0,0 +1,242 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
+ "metadata": {},
+ "source": [
+ "# Welcome to Week 2!\n",
+ "\n",
+ "## Frontier Model APIs\n",
+ "\n",
+ "In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with OpenAI's API.\n",
+ "\n",
+ "Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "\n",
+ "import os\n",
+ "from dotenv import load_dotenv\n",
+ "from openai import OpenAI\n",
+ "import anthropic\n",
+ "from IPython.display import Markdown, display, update_display"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# import for google\n",
+ "# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n",
+ "# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n",
+ "\n",
+ "import google.generativeai"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Load environment variables in a file called .env\n",
+ "# Print the key prefixes to help with any debugging\n",
+ "\n",
+ "load_dotenv(override=True)\n",
+ "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
+ "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
+ "google_api_key = os.getenv('GOOGLE_API_KEY')\n",
+ "\n",
+ "if openai_api_key:\n",
+ " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
+ "else:\n",
+ " print(\"OpenAI API Key not set\")\n",
+ " \n",
+ "if anthropic_api_key:\n",
+ " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
+ "else:\n",
+ " print(\"Anthropic API Key not set\")\n",
+ "\n",
+ "if google_api_key:\n",
+ " print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n",
+ "else:\n",
+ " print(\"Google API Key not set\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Connect to OpenAI, Anthropic\n",
+ "\n",
+ "openai = OpenAI()\n",
+ "\n",
+ "claude = anthropic.Anthropic()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "425ed580-808d-429b-85b0-6cba50ca1d0c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This is the set up code for Gemini\n",
+ "# Having problems with Google Gemini setup? Then just ignore this cell; when we use Gemini, I'll give you an alternative that bypasses this library altogether\n",
+ "google.generativeai.configure()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
+ "metadata": {},
+ "source": [
+ "## An adversarial conversation between Chatbots\n",
+ "\n",
+ "What if two chatbots get into a self-referential conversation that goes on for a long time? In my first test, they eventually forgot the topic and ended up repeating polite nothings to each other. In another test, they converged on a result and ended by exchanging nearly identical statements.\n",
+ "\n",
+ "**Warning:** Think before you dial up the number of iterations too high. Being a student, I don't know at what point the chat becomes too costly, or which models can handle this without becoming overloaded. Maybe Ed can advise if he sees this.\n",
+ "\n",
+ "In this example, two chatbots edit a description of a car. One keeps trying to make it longer every time; the other keeps making it shorter.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n",
+ "# We're using cheap versions of models so the costs will be minimal\n",
+ "\n",
+ "gpt_model = \"gpt-4o-mini\"\n",
+ "claude_model = \"claude-3-haiku-20240307\"\n",
+ "\n",
+ "\n",
+ "gpt_system = \"This is a description of a car; \\\n",
+ "rephrase the description while adding one detail. Don't include comments that aren't part of the car description.\"\n",
+ "\n",
+ "claude_system = \"This is a description of a car; \\\n",
+ "repeat the description in slightly shorter form. You may remove some details if desired. Don't include comments that aren't part of the car description. Maximum reply length 125 words.\"\n",
+ "\n",
+ "\n",
+ "gpt_messages = [\"Hi there\"]\n",
+ "claude_messages = [\"Hi\"] \n",
+ "\n",
+ "\n",
+ "def call_gpt():\n",
+ " messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
+ " for gpt, claude in zip(gpt_messages, claude_messages):\n",
+ " messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
+ " messages.append({\"role\": \"user\", \"content\": claude})\n",
+ " completion = openai.chat.completions.create(\n",
+ " model=gpt_model,\n",
+ " messages=messages\n",
+ " )\n",
+ " return completion.choices[0].message.content\n",
+ "\n",
+ "reply = call_gpt()\n",
+ "print('\\nGPT: ', reply)\n",
+ "\n",
+ "def call_claude():\n",
+ " messages = []\n",
+ " for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
+ " messages.append({\"role\": \"user\", \"content\": gpt})\n",
+ " messages.append({\"role\": \"assistant\", \"content\": claude_message})\n",
+ " messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
+ " message = claude.messages.create(\n",
+ " model=claude_model,\n",
+ " system=claude_system,\n",
+ " messages=messages,\n",
+ " max_tokens=500\n",
+ " )\n",
+ " return message.content[0].text\n",
+ "\n",
+ "\n",
+ "reply = call_claude()\n",
+ "print('\\nClaude: ', reply)\n",
+ "\n",
+ "print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
+ "print(f\"Claude:\\n{claude_messages[0]}\\n\")\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9fbce0da",
+ "metadata": {},
+ "source": [
+ "### Here's the iterative loop\n",
+ "\n",
+ "Important change: unlike the original example, we don't repeat the entire conversation, making the input longer and longer. Instead, we use pop(0) to remove the oldest messages."
+ ]
+ },
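The truncation logic can be sketched on its own, without any API calls (the "turn N" strings below are placeholder data): once the history exceeds six entries, the two oldest are dropped before the new one is appended.

```python
# Standalone sketch of the sliding-window truncation used in the loop
# below. No API calls; "turn N" strings are placeholder data.

def truncate_and_append(history, new_message, max_len=6):
    # Mirrors the loop: drop the two oldest turns when over the cap
    if len(history) > max_len:
        history.pop(0)
        history.pop(0)
    history.append(new_message)
    return history

history = []
for i in range(10):
    truncate_and_append(history, f"turn {i}")
print(history)  # only the most recent turns survive
```

Dropping two at a time keeps the alternation between the two speakers intact, so each bot's view still pairs up cleanly when zipped.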
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1f41d586",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "for i in range(35):\n",
+ " gpt_next = call_gpt()\n",
+ " print(f\"GPT:\\n{gpt_next}\\n\")\n",
+ " if len(gpt_messages) > 6:\n",
+ " gpt_messages.pop(0)\n",
+ " gpt_messages.pop(0)\n",
+ " gpt_messages.append(gpt_next)\n",
+ " \n",
+ " claude_next = call_claude()\n",
+ " print(f\"Claude:\\n{claude_next}\\n\")\n",
+ " if len(claude_messages) > 6:\n",
+ " claude_messages.pop(0)\n",
+ " claude_messages.pop(0)\n",
+ " claude_messages.append(claude_next)\n",
+ "\n",
+ "print('Done!')\n",
+ "\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.4"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}