diff --git a/week2/community-contributions/day1-azure-aws-ollama.ipynb b/week2/community-contributions/day1-azure-aws-ollama.ipynb
new file mode 100644
index 0000000..e21af22
--- /dev/null
+++ b/week2/community-contributions/day1-azure-aws-ollama.ipynb
@@ -0,0 +1,772 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
+ "metadata": {},
+ "source": [
+ "# Welcome to Week 2!\n",
+ "\n",
+ "## Frontier Model APIs\n",
+ "\n",
+ "In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with the OpenAI's API.\n",
+ "\n",
+ "Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
+ "metadata": {},
+ "source": [
+ "
\n",
+ " \n",
+ " \n",
+ " \n",
+ " | \n",
+ " \n",
+ " Important Note - Please read me\n",
+ " I'm continually improving these labs, adding more examples and exercises.\n",
+ " At the start of each week, it's worth checking you have the latest code. \n",
+ " First do a git pull and merge your changes as needed. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!
\n",
+ " After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run: \n",
+ " conda env update --f environment.yml --prune \n",
+ " Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac): \n",
+ " pip install -r requirements.txt \n",
+ " Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",
+ " \n",
+ " | \n",
+ "
\n",
+ "
\n",
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ " | \n",
+ " \n",
+ " Reminder about the resources page\n",
+ " Here's a link to resources for the course. This includes links to all the slides. \n",
+ " https://edwarddonner.com/2024/11/13/llm-engineering-resources/ \n",
+ " Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
+ " \n",
+ " | \n",
+ "
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "85cfe275-4705-4d30-abea-643fbddf1db0",
+ "metadata": {},
+ "source": [
+ "## Setting up your keys\n",
+ "\n",
+ "We will use the models through cloud providers, you will need to have credentials for AWS and Azure for this.\n",
+ "\n",
+ "When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
+ "\n",
+ "```\n",
+ "AZURE_OPENAI_API_KEY=xxxx\n",
+ "AZURE_OPENAI_ENDPOINT=https://example.openai.azure.com\n",
+ "AWS_ACCESS_KEY_ID=xxxx\n",
+ "AWS_SECRET_ACCESS_KEY=xxxx\n",
+ "AWS_SESSION_TOKEN=xxxx\n",
+ "AWS_REGION=us-west-2\n",
+ "OPENAI_BASE_URL=https://localhost:11434/v1\n",
+ "GOOGLE_API_KEY=xxxx\n",
+ "```\n",
+ "\n",
+ "Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 40,
+ "id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "\n",
+ "import os\n",
+ "from dotenv import load_dotenv\n",
+ "from openai import OpenAI, AzureOpenAI\n",
+ "from dotenv import load_dotenv\n",
+ "import json\n",
+ "import boto3\n",
+ "from IPython.display import Markdown, display, update_display"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 41,
+ "id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# import for google\n",
+ "# in rare cases, this seems to give an error on some systems. Please reach out to me if this happens,\n",
+ "# or you can feel free to skip Gemini - it's the lowest priority of the frontier models that we use\n",
+ "\n",
+ "import google.generativeai"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 42,
+ "id": "c5c0df5e",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "True"
+ ]
+ },
+ "execution_count": 42,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# load the environment variables\n",
+ "load_dotenv()"
+ ]
+ },
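+ {
+ "cell_type": "markdown",
+ "id": "7e1c9a40-3f2b-4d5e-8a6c-1b2d3e4f5a60",
+ "metadata": {},
+ "source": [
+ "Before calling any of the providers, it can help to confirm that the variables from your `.env` actually loaded. A minimal sketch (the variable names are the ones from the setup section above):\n",
+ "\n",
+ "```python\n",
+ "# Sanity check: report which expected variables are set (without printing their values)\n",
+ "for var in [\"AZURE_OPENAI_API_KEY\", \"AZURE_OPENAI_ENDPOINT\", \"AWS_ACCESS_KEY_ID\",\n",
+ "            \"AWS_SECRET_ACCESS_KEY\", \"OPENAI_BASE_URL\", \"GOOGLE_API_KEY\"]:\n",
+ "    print(f\"{var}: {'set' if os.getenv(var) else 'MISSING'}\")\n",
+ "```"
+ ]
+ },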
+ {
+ "cell_type": "code",
+ "execution_count": 43,
+ "id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Hello! How can I assist you today?\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test that AZURE works\n",
+ "AZURE_MODEL = \"gpt-4o\"\n",
+ "client_azure = AzureOpenAI(\n",
+ " api_key=os.getenv('AZURE_OPENAI_API_KEY'),\n",
+ " azure_endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),\n",
+ " api_version=\"2024-08-01-preview\",\n",
+ ")\n",
+ "messages = [\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"ping\"\n",
+ " }\n",
+ "]\n",
+ "response = client_azure.chat.completions.create(model=AZURE_MODEL, messages=messages)\n",
+ "print(response.choices[0].message.content)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 44,
+ "id": "0d5fe363",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "I'm doing well, thanks for asking! I'm Claude, an AI assistant created by Anthropic.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test that AWS works\n",
+ "AWS_MODEL = \"anthropic.claude-3-sonnet-20240229-v1:0\"\n",
+ "session = boto3.Session()\n",
+ "bedrock = session.client(service_name='bedrock-runtime', region_name='us-east-1')\n",
+ "# AWS Messages are a bit more complex\n",
+ "aws_message = {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": [\n",
+ " { \"text\": \"how are you doing\" } \n",
+ " ],\n",
+ "}\n",
+ "response = bedrock.converse(\n",
+ " modelId=AWS_MODEL,\n",
+ " inferenceConfig={\n",
+ " \"maxTokens\": 2000,\n",
+ " \"temperature\": 0\n",
+ " },\n",
+ " messages=[aws_message],\n",
+ ")\n",
+ "print(response['output']['message']['content'][0]['text'])"
+ ]
+ },
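+ {
+ "cell_type": "markdown",
+ "id": "8f2d0b51-4a3c-4e6f-9b7d-2c3e4f5a6b71",
+ "metadata": {},
+ "source": [
+ "If the Bedrock call above fails with an access error, it may be that the model isn't enabled for your account in that region. A small sketch to list the Anthropic model IDs you can see - note it uses the `bedrock` control-plane client, not `bedrock-runtime`:\n",
+ "\n",
+ "```python\n",
+ "bedrock_ctl = boto3.client('bedrock', region_name=os.getenv('AWS_REGION', 'us-east-1'))\n",
+ "for m in bedrock_ctl.list_foundation_models(byProvider='Anthropic')['modelSummaries']:\n",
+ "    print(m['modelId'])\n",
+ "```"
+ ]
+ },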
+ {
+ "cell_type": "code",
+ "execution_count": 45,
+ "id": "a92f86d4",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ " pong\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Test ollama using OpenAI API\n",
+ "OLLAMA_MODEL='qwen2.5'\n",
+ "client_ollama = OpenAI(\n",
+ " base_url=os.getenv('OPENAI_BASE_URL')\n",
+ " )\n",
+ "response = client_ollama.chat.completions.create(model=OLLAMA_MODEL, messages=messages)\n",
+ "print(response.choices[0].message.content)"
+ ]
+ },
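+ {
+ "cell_type": "markdown",
+ "id": "9a3e1c62-5b4d-4f70-ac8e-3d4f5a6b7c82",
+ "metadata": {},
+ "source": [
+ "If the Ollama call fails, check that the server is running and the model has been pulled (`ollama pull qwen2.5`). A quick sketch against Ollama's REST API, assuming the default endpoint from `OPENAI_BASE_URL` above:\n",
+ "\n",
+ "```python\n",
+ "import requests\n",
+ "tags = requests.get(\"http://localhost:11434/api/tags\").json()\n",
+ "print([m[\"name\"] for m in tags.get(\"models\", [])])\n",
+ "```"
+ ]
+ },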
+ {
+ "cell_type": "code",
+ "execution_count": 46,
+ "id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Connect to OpenAI, Anthropic and Google\n",
+ "# All 3 APIs are similar\n",
+ "# Having problems with API files? You can use openai = OpenAI(api_key=\"your-key-here\") and same for claude\n",
+ "# Having problems with Google Gemini setup? Then just skip Gemini; you'll get all the experience you need from GPT and Claude.\n",
+ "\n",
+ "google.generativeai.configure()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "42f77b59-2fb1-462a-b90d-78994e4cef33",
+ "metadata": {},
+ "source": [
+ "## Asking LLMs to tell a joke\n",
+ "\n",
+ "It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n",
+ "Later we will be putting LLMs to better use!\n",
+ "\n",
+ "### What information is included in the API\n",
+ "\n",
+ "Typically we'll pass to the API:\n",
+ "- The name of the model that should be used\n",
+ "- A system message that gives overall context for the role the LLM is playing\n",
+ "- A user message that provides the actual prompt\n",
+ "\n",
+ "There are other parameters that can be used, including **temperature** which is typically between 0 and 1; higher for more random output; lower for more focused and deterministic."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 47,
+ "id": "378a0296-59a2-45c6-82eb-941344d3eeff",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "system_message = \"You are an assistant that is great at telling jokes\"\n",
+ "user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 48,
+ "id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "prompts = [\n",
+ " {\"role\": \"system\", \"content\": system_message},\n",
+ " {\"role\": \"user\", \"content\": user_prompt}\n",
+ " ]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 49,
+ "id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Why did the data scientist go broke?\n",
+ "\n",
+ "Because he couldn't find a good algorithm for saving!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# GPT-4o\n",
+ "def call_azure(model=AZURE_MODEL, temp=0.5):\n",
+ " openai = AzureOpenAI(\n",
+ " api_key=os.getenv('AZURE_OPENAI_API_KEY'),\n",
+ " azure_endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),\n",
+ " api_version=\"2024-08-01-preview\",\n",
+ " )\n",
+ " completion = openai.chat.completions.create(model=model, messages=prompts, temperature=temp)\n",
+ " return completion.choices[0].message.content\n",
+ "print(call_azure('gpt-4o'))\n",
+ "# completion = client_azure.chat.completions.create(model='gpt-3.5-turbo', messages=prompts)\n",
+ "# print(completion.choices[0].message.content)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 50,
+ "id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Why did the data scientist bring a ladder to work?\n",
+ "\n",
+ "Because they wanted to reach new heights in their analysis!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# GPT-4o-mini\n",
+ "# Temperature setting controls creativity\n",
+ "\n",
+ "print(call_azure('gpt-4o-mini', temp=0.7))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 51,
+ "id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Why did the data scientist go broke?\n",
+ "\n",
+ "Because he couldn't find any value in his SQL statements!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# GPT-4o\n",
+ "\n",
+ "print(call_azure('gpt-4o', temp=0.4))"
+ ]
+ },
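+ {
+ "cell_type": "markdown",
+ "id": "0b4f2d73-6c5e-4081-bd9f-4e5a6b7c8d93",
+ "metadata": {},
+ "source": [
+ "To see the temperature effect described earlier, you can run the same prompt at the extremes. A sketch reusing `call_azure` from above (output is non-deterministic, so expect more variety at 1.0 than at 0.0):\n",
+ "\n",
+ "```python\n",
+ "for t in [0.0, 1.0]:\n",
+ "    print(f\"temperature={t}:\\n{call_azure('gpt-4o-mini', temp=t)}\\n\")\n",
+ "```"
+ ]
+ },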
+ {
+ "cell_type": "code",
+ "execution_count": 52,
+ "id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Here's a light-hearted joke for an audience of Data Scientists:\n",
+ "\n",
+ "Why did the data scientist bring a ladder to work? Because they needed to access the higher-level data!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# AWS with Claude 3.5 Sonnet\n",
+ "# API needs system message provided separately from user prompt\n",
+ "# Also adding max_tokens\n",
+ "\n",
+ "def call_aws(model=AWS_MODEL, temp=0.5):\n",
+ " aws_message = {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": [\n",
+ " { \"text\": user_prompt } \n",
+ " ],\n",
+ " }\n",
+ " sys_message = [ { \"text\": system_message } ]\n",
+ " session = boto3.Session()\n",
+ " bedrock = session.client(service_name='bedrock-runtime', region_name='us-east-1')\n",
+ " response = bedrock.converse(\n",
+ " modelId=AWS_MODEL,\n",
+ " inferenceConfig={\n",
+ " \"maxTokens\": 2000,\n",
+ " \"temperature\": 0\n",
+ " },\n",
+ " messages=[aws_message],\n",
+ " system=sys_message\n",
+ " )\n",
+ " return response['output']['message']['content'][0]['text']\n",
+ "print(call_aws(AWS_MODEL))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 53,
+ "id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Here's a light-hearted joke for data scientists:\n",
+ "\n",
+ "Why did the data scientist get a puppy?\n",
+ "Because he wanted to train a naive dog."
+ ]
+ }
+ ],
+ "source": [
+ "# AWS with Claude 3.5 Sonnet\n",
+ "# Now let's add in streaming back results\n",
+ "def call_aws_stream(model=AWS_MODEL, temp=0.5):\n",
+ " aws_message = {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": [\n",
+ " { \"text\": user_prompt } \n",
+ " ],\n",
+ " }\n",
+ " sys_message = [ { \"text\": system_message } ]\n",
+ " response = bedrock.converse_stream(\n",
+ " modelId=model,\n",
+ " inferenceConfig={\n",
+ " \"maxTokens\": 2000,\n",
+ " \"temperature\": temp\n",
+ " },\n",
+ " system=sys_message,\n",
+ " messages=[aws_message],\n",
+ " )\n",
+ " stream = response.get('stream')\n",
+ " reply = \"\"\n",
+ " for event in stream:\n",
+ " if \"contentBlockDelta\" in event:\n",
+ " text = event[\"contentBlockDelta\"][\"delta\"]['text']\n",
+ " print(text, end=\"\", flush=True)\n",
+ "call_aws_stream(AWS_MODEL, temp=0.7)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 54,
+ "id": "12374cd3",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Why did the data scientist name his algorithm \"gaussian\"?\n",
+ "\n",
+ "Because he was really normal!"
+ ]
+ }
+ ],
+ "source": [
+ "# Call Ollama\n",
+ "def call_ollama_stream(model=OLLAMA_MODEL, temp=0.5):\n",
+ " openai = OpenAI(\n",
+ " base_url=os.getenv('OPENAI_BASE_URL')\n",
+ " )\n",
+ " stream = openai.chat.completions.create(model=model, messages=prompts, temperature=temp, stream=True)\n",
+ " for chunk in stream:\n",
+ " if chunk.choices:\n",
+ " text = chunk.choices[0].delta.content or ''\n",
+ " print(text, end=\"\", flush=True)\n",
+ "call_ollama_stream(OLLAMA_MODEL)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 55,
+ "id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# The API for Gemini has a slightly different structure\n",
+ "\n",
+ "gemini = google.generativeai.GenerativeModel(\n",
+ " model_name='gemini-1.5-flash',\n",
+ " system_instruction=system_message\n",
+ ")\n",
+ "response = gemini.generate_content(user_prompt)\n",
+ "print(response.text)"
+ ]
+ },
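+ {
+ "cell_type": "markdown",
+ "id": "1c503e84-7d6f-4192-aea0-5f6b7c8d9ea4",
+ "metadata": {},
+ "source": [
+ "Gemini can stream too. A sketch reusing the `gemini` model object from the cell above - with `stream=True`, `generate_content` yields chunks, each with a `.text` attribute:\n",
+ "\n",
+ "```python\n",
+ "response = gemini.generate_content(user_prompt, stream=True)\n",
+ "for chunk in response:\n",
+ "    print(chunk.text, end=\"\", flush=True)\n",
+ "```"
+ ]
+ },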
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# To be serious! GPT-4o-mini with the original question\n",
+ "\n",
+ "prompts = [\n",
+ " {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n",
+ " {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n",
+ " ]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Have it stream back results in markdown\n",
+ "\n",
+ "stream = openai.chat.completions.create(\n",
+ " model='gpt-4o',\n",
+ " messages=prompts,\n",
+ " temperature=0.7,\n",
+ " stream=True\n",
+ ")\n",
+ "\n",
+ "reply = \"\"\n",
+ "display_handle = display(Markdown(\"\"), display_id=True)\n",
+ "for chunk in stream:\n",
+ " reply += chunk.choices[0].delta.content or ''\n",
+ " reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
+ " update_display(Markdown(reply), display_id=display_handle.display_id)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
+ "metadata": {},
+ "source": [
+ "## And now for some fun - an adversarial conversation between Chatbots..\n",
+ "\n",
+ "You're already familar with prompts being organized into lists like:\n",
+ "\n",
+ "```\n",
+ "[\n",
+ " {\"role\": \"system\", \"content\": \"system message here\"},\n",
+ " {\"role\": \"user\", \"content\": \"user prompt here\"}\n",
+ "]\n",
+ "```\n",
+ "\n",
+ "In fact this structure can be used to reflect a longer conversation history:\n",
+ "\n",
+ "```\n",
+ "[\n",
+ " {\"role\": \"system\", \"content\": \"system message here\"},\n",
+ " {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
+ " {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
+ " {\"role\": \"user\", \"content\": \"the new user prompt\"},\n",
+ "]\n",
+ "```\n",
+ "\n",
+ "And we can use this approach to engage in a longer interaction with history."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n",
+ "# We're using cheap versions of models so the costs will be minimal\n",
+ "\n",
+ "gpt_model = \"gpt-4o-mini\"\n",
+ "claude_model = \"claude-3-haiku-20240307\"\n",
+ "\n",
+ "gpt_system = \"You are a chatbot who is very argumentative; \\\n",
+ "you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
+ "\n",
+ "claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
+ "everything the other person says, or find common ground. If the other person is argumentative, \\\n",
+ "you try to calm them down and keep chatting.\"\n",
+ "\n",
+ "gpt_messages = [\"Hi there\"]\n",
+ "claude_messages = [\"Hi\"]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def call_gpt():\n",
+ " messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
+ " for gpt, claude in zip(gpt_messages, claude_messages):\n",
+ " messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
+ " messages.append({\"role\": \"user\", \"content\": claude})\n",
+ " completion = openai.chat.completions.create(\n",
+ " model=gpt_model,\n",
+ " messages=messages\n",
+ " )\n",
+ " return completion.choices[0].message.content"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "call_gpt()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def call_claude():\n",
+ " messages = []\n",
+ " for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
+ " messages.append({\"role\": \"user\", \"content\": gpt})\n",
+ " messages.append({\"role\": \"assistant\", \"content\": claude_message})\n",
+ " messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
+ " message = claude.messages.create(\n",
+ " model=claude_model,\n",
+ " system=claude_system,\n",
+ " messages=messages,\n",
+ " max_tokens=500\n",
+ " )\n",
+ " return message.content[0].text"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "01395200-8ae9-41f8-9a04-701624d3fd26",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "call_claude()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "call_gpt()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "gpt_messages = [\"Hi there\"]\n",
+ "claude_messages = [\"Hi\"]\n",
+ "\n",
+ "print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
+ "print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
+ "\n",
+ "for i in range(5):\n",
+ " gpt_next = call_gpt()\n",
+ " print(f\"GPT:\\n{gpt_next}\\n\")\n",
+ " gpt_messages.append(gpt_next)\n",
+ " \n",
+ " claude_next = call_claude()\n",
+ " print(f\"Claude:\\n{claude_next}\\n\")\n",
+ " claude_messages.append(claude_next)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ " | \n",
+ " \n",
+ " Before you continue\n",
+ " \n",
+ " Be sure you understand how the conversation above is working, and in particular how the messages list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic? \n",
+ " \n",
+ " | \n",
+ "
\n",
+ "
"
+ ]
+ },
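+ {
+ "cell_type": "markdown",
+ "id": "2d614f95-8e70-42a3-bfb1-607c8d9eafb5",
+ "metadata": {},
+ "source": [
+ "A sketch of that print-statement debugging: rebuild the list exactly as `call_gpt()` does and inspect each turn:\n",
+ "\n",
+ "```python\n",
+ "messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
+ "for gpt, claude_msg in zip(gpt_messages, claude_messages):\n",
+ "    messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
+ "    messages.append({\"role\": \"user\", \"content\": claude_msg})\n",
+ "for m in messages:\n",
+ "    print(f\"{m['role']}: {m['content'][:60]}\")\n",
+ "```"
+ ]
+ },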
+ {
+ "cell_type": "markdown",
+ "id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
+ "metadata": {},
+ "source": [
+ "# More advanced exercises\n",
+ "\n",
+ "Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
+ "\n",
+ "Try doing this yourself before you look at the solutions.\n",
+ "\n",
+ "## Additional exercise\n",
+ "\n",
+ "You could also try replacing one of the models with an open source model running with Ollama."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ " | \n",
+ " \n",
+ " Business relevance\n",
+ " This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.\n",
+ " | \n",
+ "
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c23224f6-7008-44ed-a57f-718975f4e291",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.7"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}