{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
   "metadata": {},
   "source": [
    "# Welcome to Week 2!\n",
    "\n",
    "## Frontier Model APIs\n",
    "\n",
    "In Week 1, we used multiple Frontier LLMs through their Chat UIs, and we connected with OpenAI's API.\n",
    "\n",
    "Today we'll connect with Azure OpenAI, Anthropic's Claude via AWS Bedrock, Google Gemini, and a local Ollama server through its OpenAI-compatible API."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
   "metadata": {},
   "source": [
    "<table style=\"margin: 0; text-align: left;\">\n",
    " <tr>\n",
    " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    " <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    " </td>\n",
    " <td>\n",
    " <h2 style=\"color:#900;\">Important Note - Please read me</h2>\n",
    " <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
    " At the start of each week, it's worth checking you have the latest code.<br/>\n",
    " First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
    " After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
    " <code>conda env update -f environment.yml --prune</code><br/>\n",
    " Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a PowerShell (PC) or Terminal (Mac):<br/>\n",
    " <code>pip install -r requirements.txt</code>\n",
    " <br/>Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",
    " </span>\n",
    " </td>\n",
    " </tr>\n",
    "</table>\n",
    "<table style=\"margin: 0; text-align: left;\">\n",
    " <tr>\n",
    " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    " <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    " </td>\n",
    " <td>\n",
    " <h2 style=\"color:#f71;\">Reminder about the resources page</h2>\n",
    " <span style=\"color:#f71;\">Here's a link to resources for the course. This includes links to all the slides.<br/>\n",
    " <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
    " Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
    " </span>\n",
    " </td>\n",
    " </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "85cfe275-4705-4d30-abea-643fbddf1db0",
   "metadata": {},
   "source": [
    "## Setting up your keys\n",
    "\n",
    "We will use the models through cloud providers, so you will need credentials for Azure and AWS, plus a Google API key if you want to try Gemini. `OPENAI_BASE_URL` points the OpenAI client at a local Ollama server, which exposes an OpenAI-compatible API.\n",
    "\n",
    "When you get your API keys, set them as environment variables by adding them to your `.env` file:\n",
    "\n",
    "```\n",
    "AZURE_OPENAI_API_KEY=xxxx\n",
    "AZURE_OPENAI_ENDPOINT=https://example.openai.azure.com\n",
    "AWS_ACCESS_KEY_ID=xxxx\n",
    "AWS_SECRET_ACCESS_KEY=xxxx\n",
    "AWS_SESSION_TOKEN=xxxx\n",
    "AWS_REGION=us-west-2\n",
    "OPENAI_BASE_URL=http://localhost:11434/v1\n",
    "GOOGLE_API_KEY=xxxx\n",
    "```\n",
    "\n",
    "Afterwards, you may need to restart the Jupyter Lab kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top. A quick sanity-check cell after `load_dotenv()` below confirms the variables are visible."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "\n",
    "import os\n",
    "import json\n",
    "import boto3\n",
    "from dotenv import load_dotenv\n",
    "from openai import OpenAI, AzureOpenAI\n",
    "from IPython.display import Markdown, display, update_display"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
   "metadata": {},
   "outputs": [],
   "source": [
    "# import for google\n",
    "# in rare cases, this seems to give an error on some systems. Please reach out to me if this happens,\n",
    "# or you can feel free to skip Gemini - it's the lowest priority of the frontier models that we use\n",
    "\n",
    "import google.generativeai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c5c0df5e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# load the environment variables\n",
    "load_dotenv()"
   ]
  },
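  {
   "cell_type": "code",
   "execution_count": null,
   "id": "env-check-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A quick sanity check (an added sketch, not part of the original lab):\n",
    "# confirm which of the expected variables from your .env file are visible\n",
    "# to this kernel, without printing any secret values.\n",
    "\n",
    "expected = ['AZURE_OPENAI_API_KEY', 'AZURE_OPENAI_ENDPOINT', 'AWS_ACCESS_KEY_ID',\n",
    "            'AWS_SECRET_ACCESS_KEY', 'AWS_REGION', 'OPENAI_BASE_URL', 'GOOGLE_API_KEY']\n",
    "for name in expected:\n",
    "    print(f\"{name}: {'set' if os.getenv(name) else 'NOT SET'}\")"
   ]
  },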
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Test that Azure OpenAI works\n",
    "AZURE_MODEL = \"gpt-4o\"\n",
    "client_azure = AzureOpenAI(\n",
    "    api_key=os.getenv('AZURE_OPENAI_API_KEY'),\n",
    "    azure_endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),\n",
    "    api_version=\"2024-08-01-preview\",\n",
    ")\n",
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": \"ping\"\n",
    "    }\n",
    "]\n",
    "response = client_azure.chat.completions.create(model=AZURE_MODEL, messages=messages)\n",
    "print(response.choices[0].message.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0d5fe363",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Test that AWS Bedrock works\n",
    "AWS_MODEL = \"anthropic.claude-3-sonnet-20240229-v1:0\"\n",
    "session = boto3.Session()\n",
    "bedrock = session.client(service_name='bedrock-runtime', region_name=os.getenv('AWS_REGION', 'us-east-1'))\n",
    "# AWS messages are a bit more complex: content is a list of blocks\n",
    "aws_message = {\n",
    "    \"role\": \"user\",\n",
    "    \"content\": [\n",
    "        {\"text\": \"how are you doing\"}\n",
    "    ],\n",
    "}\n",
    "response = bedrock.converse(\n",
    "    modelId=AWS_MODEL,\n",
    "    inferenceConfig={\n",
    "        \"maxTokens\": 2000,\n",
    "        \"temperature\": 0\n",
    "    },\n",
    "    messages=[aws_message],\n",
    ")\n",
    "print(response['output']['message']['content'][0]['text'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a92f86d4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Test Ollama using the OpenAI-compatible API\n",
    "# (reuses the `messages` list from the Azure test above)\n",
    "OLLAMA_MODEL = 'qwen2.5'\n",
    "print(os.getenv('OPENAI_BASE_URL'))\n",
    "client_ollama = OpenAI(\n",
    "    base_url=os.getenv('OPENAI_BASE_URL'),\n",
    "    api_key='123'  # placeholder; Ollama ignores the key, but the client requires one\n",
    ")\n",
    "response = client_ollama.chat.completions.create(model=OLLAMA_MODEL, messages=messages)\n",
    "print(response.choices[0].message.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Connect to Google Gemini\n",
    "# With no arguments, configure() picks up GOOGLE_API_KEY from the environment\n",
    "# Having problems with API keys? You can also pass them directly, e.g. AzureOpenAI(api_key=\"your-key-here\", ...)\n",
    "# Having problems with Google Gemini setup? Then just skip Gemini; you'll get all the experience you need from GPT and Claude.\n",
    "\n",
    "google.generativeai.configure()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "42f77b59-2fb1-462a-b90d-78994e4cef33",
   "metadata": {},
   "source": [
    "## Asking LLMs to tell a joke\n",
    "\n",
    "It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n",
    "Later we will be putting LLMs to better use!\n",
    "\n",
    "### What information is included in the API\n",
    "\n",
    "Typically we'll pass to the API:\n",
    "- The name of the model that should be used\n",
    "- A system message that gives overall context for the role the LLM is playing\n",
    "- A user message that provides the actual prompt\n",
    "\n",
    "There are other parameters that can be used, including **temperature** which is typically between 0 and 1; higher for more random output; lower for more focused and deterministic."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "378a0296-59a2-45c6-82eb-941344d3eeff",
   "metadata": {},
   "outputs": [],
   "source": [
    "system_message = \"You are an assistant that is great at telling jokes\"\n",
    "user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompts = [\n",
    "    {\"role\": \"system\", \"content\": system_message},\n",
    "    {\"role\": \"user\", \"content\": user_prompt}\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
   "metadata": {},
   "outputs": [],
   "source": [
    "# GPT-4o\n",
    "def call_azure(model=AZURE_MODEL, temp=0.5):\n",
    "    openai = AzureOpenAI(\n",
    "        api_key=os.getenv('AZURE_OPENAI_API_KEY'),\n",
    "        azure_endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),\n",
    "        api_version=\"2024-08-01-preview\",\n",
    "    )\n",
    "    completion = openai.chat.completions.create(model=model, messages=prompts, temperature=temp)\n",
    "    return completion.choices[0].message.content\n",
    "\n",
    "print(call_azure('gpt-4o'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
   "metadata": {},
   "outputs": [],
   "source": [
    "# GPT-4o-mini\n",
    "# Temperature setting controls creativity\n",
    "\n",
    "print(call_azure('gpt-4o-mini', temp=0.7))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
   "metadata": {},
   "outputs": [],
   "source": [
    "# GPT-4o\n",
    "\n",
    "print(call_azure('gpt-4o', temp=0.4))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
   "metadata": {},
   "outputs": [],
   "source": [
    "# AWS Bedrock with Claude 3 Sonnet\n",
    "# The converse API needs the system message provided separately from the user prompt\n",
    "# Also adding max_tokens\n",
    "\n",
    "def call_aws(model=AWS_MODEL, temp=0.5):\n",
    "    aws_message = {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            {\"text\": user_prompt}\n",
    "        ],\n",
    "    }\n",
    "    sys_message = [{\"text\": system_message}]\n",
    "    session = boto3.Session()\n",
    "    bedrock = session.client(service_name='bedrock-runtime', region_name=os.getenv('AWS_REGION', 'us-east-1'))\n",
    "    response = bedrock.converse(\n",
    "        modelId=model,\n",
    "        inferenceConfig={\n",
    "            \"maxTokens\": 2000,\n",
    "            \"temperature\": temp\n",
    "        },\n",
    "        messages=[aws_message],\n",
    "        system=sys_message\n",
    "    )\n",
    "    return response['output']['message']['content'][0]['text']\n",
    "\n",
    "print(call_aws(AWS_MODEL))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# AWS Bedrock with Claude 3 Sonnet\n",
    "# Now let's add in streaming back results\n",
    "def call_aws_stream(model=AWS_MODEL, temp=0.5):\n",
    "    aws_message = {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            {\"text\": user_prompt}\n",
    "        ],\n",
    "    }\n",
    "    sys_message = [{\"text\": system_message}]\n",
    "    session = boto3.Session()\n",
    "    bedrock = session.client(service_name='bedrock-runtime', region_name=os.getenv('AWS_REGION', 'us-east-1'))\n",
    "    response = bedrock.converse_stream(\n",
    "        modelId=model,\n",
    "        inferenceConfig={\n",
    "            \"maxTokens\": 2000,\n",
    "            \"temperature\": temp\n",
    "        },\n",
    "        system=sys_message,\n",
    "        messages=[aws_message],\n",
    "    )\n",
    "    stream = response.get('stream')\n",
    "    for event in stream:\n",
    "        if \"contentBlockDelta\" in event:\n",
    "            text = event[\"contentBlockDelta\"][\"delta\"]['text']\n",
    "            print(text, end=\"\", flush=True)\n",
    "\n",
    "call_aws_stream(AWS_MODEL, temp=0.7)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "12374cd3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Call Ollama, streaming back results via the OpenAI client\n",
    "def call_ollama_stream(model=OLLAMA_MODEL, temp=0.5):\n",
    "    openai = OpenAI(\n",
    "        base_url=os.getenv('OPENAI_BASE_URL'),\n",
    "        api_key='123'  # placeholder; Ollama ignores it\n",
    "    )\n",
    "    stream = openai.chat.completions.create(model=model, messages=prompts, temperature=temp, stream=True)\n",
    "    for chunk in stream:\n",
    "        if chunk.choices:\n",
    "            text = chunk.choices[0].delta.content or ''\n",
    "            print(text, end=\"\", flush=True)\n",
    "\n",
    "call_ollama_stream(OLLAMA_MODEL)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The API for Gemini has a slightly different structure\n",
    "\n",
    "gemini = google.generativeai.GenerativeModel(\n",
    "    model_name='gemini-1.5-flash',\n",
    "    system_instruction=system_message\n",
    ")\n",
    "response = gemini.generate_content(user_prompt)\n",
    "print(response.text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
   "metadata": {},
   "outputs": [],
   "source": [
    "# To be serious! GPT-4o-mini with the original question\n",
    "\n",
    "prompts = [\n",
    "    {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n",
    "    {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Have it stream back results in markdown\n",
    "\n",
    "def call_azure_stream(model=AZURE_MODEL, temp=0.5):\n",
    "    openai = AzureOpenAI(\n",
    "        api_key=os.getenv('AZURE_OPENAI_API_KEY'),\n",
    "        azure_endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),\n",
    "        api_version=\"2024-08-01-preview\",\n",
    "    )\n",
    "    return openai.chat.completions.create(model=model, messages=prompts, temperature=temp, stream=True)\n",
    "\n",
    "stream = call_azure_stream('gpt-4o-mini', temp=0.7)\n",
    "reply = \"\"\n",
    "display_handle = display(Markdown(\"\"), display_id=True)\n",
    "for chunk in stream:\n",
    "    if chunk.choices:\n",
    "        reply += chunk.choices[0].delta.content or ''\n",
    "        # strip any code fences so the Markdown renders cleanly as it accumulates\n",
    "        reply = reply.replace(\"```\", \"\").replace(\"markdown\", \"\")\n",
    "        update_display(Markdown(reply), display_id=display_handle.display_id)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
   "metadata": {},
   "source": [
    "## And now for some fun - an adversarial conversation between Chatbots...\n",
    "\n",
    "You're already familiar with prompts being organized into lists like:\n",
    "\n",
    "```\n",
    "[\n",
    "    {\"role\": \"system\", \"content\": \"system message here\"},\n",
    "    {\"role\": \"user\", \"content\": \"user prompt here\"}\n",
    "]\n",
    "```\n",
    "\n",
    "In fact this structure can be used to reflect a longer conversation history:\n",
    "\n",
    "```\n",
    "[\n",
    "    {\"role\": \"system\", \"content\": \"system message here\"},\n",
    "    {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
    "    {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
    "    {\"role\": \"user\", \"content\": \"the new user prompt\"}\n",
    "]\n",
    "```\n",
    "\n",
    "And we can use this approach to engage in a longer interaction with history."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's make a conversation between GPT-4o-mini on Azure and Claude 3 Sonnet on Bedrock\n",
    "# We're using relatively cheap models so the costs will be minimal\n",
    "\n",
    "gpt_model = \"gpt-4o-mini\"\n",
    "claude_model = \"anthropic.claude-3-sonnet-20240229-v1:0\"\n",
    "\n",
    "gpt_system = \"You are a chatbot who is very argumentative; \\\n",
    "you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
    "\n",
    "claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
    "everything the other person says, or find common ground. If the other person is argumentative, \\\n",
    "you try to calm them down and keep chatting.\"\n",
    "\n",
    "gpt_messages = [\"Hi there\"]\n",
    "claude_messages = [\"Hi\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
   "metadata": {},
   "outputs": [],
   "source": [
    "def call_gpt():\n",
    "    azure_client = AzureOpenAI(\n",
    "        api_key=os.getenv('AZURE_OPENAI_API_KEY'),\n",
    "        azure_endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),\n",
    "        api_version=\"2024-08-01-preview\",\n",
    "    )\n",
    "    messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
    "    # interleave the two transcripts: GPT's own turns are 'assistant', Claude's are 'user'\n",
    "    for gpt, claude in zip(gpt_messages, claude_messages):\n",
    "        messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
    "        messages.append({\"role\": \"user\", \"content\": claude})\n",
    "    completion = azure_client.chat.completions.create(\n",
    "        model=gpt_model,\n",
    "        messages=messages\n",
    "    )\n",
    "    return completion.choices[0].message.content"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
   "metadata": {},
   "outputs": [],
   "source": [
    "call_gpt()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
   "metadata": {},
   "outputs": [],
   "source": [
    "def call_claude():\n",
    "    session = boto3.Session()\n",
    "    bedrock = session.client(service_name='bedrock-runtime', region_name=os.getenv('AWS_REGION', 'us-east-1'))\n",
    "    messages = []\n",
    "    # from Claude's perspective, GPT's turns are 'user' and its own are 'assistant'\n",
    "    for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
    "        messages.append({\"role\": \"user\", \"content\": [{\"text\": gpt}]})\n",
    "        messages.append({\"role\": \"assistant\", \"content\": [{\"text\": claude_message}]})\n",
    "    # gpt_messages has one more entry than claude_messages at this point\n",
    "    messages.append({\"role\": \"user\", \"content\": [{\"text\": gpt_messages[-1]}]})\n",
    "    response = bedrock.converse(\n",
    "        modelId=claude_model,\n",
    "        system=[{\"text\": claude_system}],\n",
    "        messages=messages,\n",
    "        inferenceConfig={\n",
    "            \"maxTokens\": 2000,\n",
    "            \"temperature\": 0\n",
    "        },\n",
    "    )\n",
    "    return response['output']['message']['content'][0]['text']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "01395200-8ae9-41f8-9a04-701624d3fd26",
   "metadata": {},
   "outputs": [],
   "source": [
    "call_claude()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
   "metadata": {},
   "outputs": [],
   "source": [
    "call_gpt()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
   "metadata": {},
   "outputs": [],
   "source": [
    "gpt_messages = [\"Hi there\"]\n",
    "claude_messages = [\"Hi\"]\n",
    "\n",
    "print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
    "print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
    "\n",
    "for i in range(5):\n",
    "    gpt_next = call_gpt()\n",
    "    print(f\"GPT:\\n{gpt_next}\\n\")\n",
    "    gpt_messages.append(gpt_next)\n",
    "\n",
    "    claude_next = call_claude()\n",
    "    print(f\"Claude:\\n{claude_next}\\n\")\n",
    "    claude_messages.append(claude_next)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
   "metadata": {},
   "source": [
    "<table style=\"margin: 0; text-align: left;\">\n",
    " <tr>\n",
    " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    " <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    " </td>\n",
    " <td>\n",
    " <h2 style=\"color:#900;\">Before you continue</h2>\n",
    " <span style=\"color:#900;\">\n",
    " Be sure you understand how the conversation above is working, and in particular how the <code>messages</code> list is being populated. Add print statements as needed - the next cell has a small sketch to get you started. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?<br/>\n",
    " </span>\n",
    " </td>\n",
    " </tr>\n",
    "</table>"
   ]
  },
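  {
   "cell_type": "code",
   "execution_count": null,
   "id": "history-print-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A small sketch (added; not part of the original lab) to visualize how\n",
    "# call_gpt() rebuilds its messages list from the two transcripts:\n",
    "# GPT's own turns become 'assistant' messages and Claude's become 'user' messages.\n",
    "\n",
    "reconstructed = [{\"role\": \"system\", \"content\": gpt_system}]\n",
    "for gpt, claude in zip(gpt_messages, claude_messages):\n",
    "    reconstructed.append({\"role\": \"assistant\", \"content\": gpt})\n",
    "    reconstructed.append({\"role\": \"user\", \"content\": claude})\n",
    "for m in reconstructed:\n",
    "    print(f\"{m['role']:>9}: {m['content'][:60]}\")"
   ]
  },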
  {
   "cell_type": "markdown",
   "id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
   "metadata": {},
   "source": [
    "# More advanced exercises\n",
    "\n",
    "Try creating a 3-way conversation, perhaps bringing Gemini into the mix! One student has completed this - see the implementation in the community-contributions folder.\n",
    "\n",
    "Try doing this yourself before you look at the solutions. If you want a starting point, the cell below sketches one possible shape for a Gemini turn.\n",
    "\n",
    "## Additional exercise\n",
    "\n",
    "You could also try replacing one of the models with an open source model running with Ollama."
   ]
  },
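  {
   "cell_type": "code",
   "execution_count": null,
   "id": "gemini-turn-sketch",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A hedged sketch (added; not the community solution) of one possible shape for a\n",
    "# Gemini turn in a 3-way conversation. Gemini's history format uses the roles\n",
    "# 'user' and 'model', with text under 'parts'. The gemini_system prompt and the\n",
    "# gemini_messages transcript below are hypothetical names you would define yourself.\n",
    "\n",
    "# gemini_system = \"You are a chatbot who tries to change the subject to the weather.\"\n",
    "# gemini_messages = [\"Hello everyone\"]\n",
    "\n",
    "def call_gemini_sketch():\n",
    "    model = google.generativeai.GenerativeModel(\n",
    "        model_name='gemini-1.5-flash',\n",
    "        system_instruction=gemini_system\n",
    "    )\n",
    "    history = []\n",
    "    # fold the other bots' turns in as 'user' turns, and Gemini's own as 'model' turns\n",
    "    for gpt, claude, gem in zip(gpt_messages, claude_messages, gemini_messages):\n",
    "        history.append({\"role\": \"user\", \"parts\": [f\"GPT said: {gpt}\\nClaude said: {claude}\"]})\n",
    "        history.append({\"role\": \"model\", \"parts\": [gem]})\n",
    "    history.append({\"role\": \"user\", \"parts\": [f\"GPT said: {gpt_messages[-1]}\\nClaude said: {claude_messages[-1]}\"]})\n",
    "    return model.generate_content(history).text"
   ]
  },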
  {
   "cell_type": "markdown",
   "id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
   "metadata": {},
   "source": [
    "<table style=\"margin: 0; text-align: left;\">\n",
    " <tr>\n",
    " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    " <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    " </td>\n",
    " <td>\n",
    " <h2 style=\"color:#181;\">Business relevance</h2>\n",
    " <span style=\"color:#181;\">This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.</span>\n",
    " </td>\n",
    " </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c23224f6-7008-44ed-a57f-718975f4e291",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}