
Merge branch 'ed-donner:main' into main

pull/40/head
udayslathia16, 6 months ago, committed by GitHub
Parent commit: 0387f4b67f
Changed files:
  1. SETUP-PC.md (6)
  2. SETUP-mac.md (2)
  3. extras/trading/prototype_trader.ipynb (2)
  4. week1/troubleshooting.ipynb (48)
  5. week2/community-contributions/day1-azure-aws-ollama.ipynb (689)

SETUP-PC.md (6 changed lines)

@@ -48,7 +48,7 @@ This creates a new directory `llm_engineering` within your Projects folder and d
### Part 2: Install Anaconda environment
There is an alternative to Part 2 if this gives you problems.
If this Part 2 gives you any problems, there is an alternative Part 2B below that can be used instead.
1. **Install Anaconda:**
@@ -158,11 +158,11 @@ This file won't appear in Jupyter Lab because jupyter hides files starting with
### Part 5 - Showtime!!
- Open **Anaconda Prompt** (search for it in the Start menu)
- Open **Anaconda Prompt** (search for it in the Start menu) if you used Anaconda, otherwise open a Powershell if you used the alternative approach in Part 2B
- Navigate to the "project root directory" by entering something like `cd C:\Users\YourUsername\Documents\Projects\llm_engineering` using the actual path to your llm_engineering project root directory. Do a `dir` and check you can see subdirectories for each week of the course.
- Activate your environment with `conda activate llms` (or `llms\Scripts\activate` if you used the alternative approach in Part 2B)
- Activate your environment with `conda activate llms` if you used Anaconda or `llms\Scripts\activate` if you used the alternative approach in Part 2B
- You should see (llms) in your prompt which is your sign that all is well. And now, type: `jupyter lab` and Jupyter Lab should open up, ready for you to get started. Open the `week1` folder and double click on `day1.ipynb`.

SETUP-mac.md (2 changed lines)

@@ -43,7 +43,7 @@ This creates a new directory `llm_engineering` within your Projects folder and d
### Part 2: Install Anaconda environment
There is an alternative to Part 2 if this gives you problems.
If this Part 2 gives you any problems, there is an alternative Part 2B below that can be used instead.
1. **Install Anaconda:**

extras/trading/prototype_trader.ipynb (2 changed lines)

@@ -15,6 +15,8 @@
"\n",
"I generated test data using frontier models, in the other files in this directory. Use this to train an open source code model.\n",
"\n",
"In this notebook we generate the dataset; then we move over to Google Colab for the fine-tuning.\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",

week1/troubleshooting.ipynb (48 changed lines)

@@ -21,22 +21,50 @@
"source": [
"# Step 1\n",
"\n",
"Try running the next cell (click in the cell under this one and hit shift+return).\n",
"Try running the next 2 cells (click in the cell under this one and hit shift+return, then shift+return again).\n",
"\n",
"If this gives an error, then you're likely not running in an \"activated\" environment. Please check back in Part 5 of the SETUP guide for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) for setting up the Anaconda (or virtualenv) environment and activating it, before running `jupyter lab`.\n",
"\n",
"If you look in the Anaconda prompt (PC) or the Terminal (Mac), you should see `(llms)` in your prompt where you launch `jupyter lab` - that's your clue that the llms environment is activated.\n",
"\n",
"If you are in an activated environment, the next thing to try is to restart everything:\n",
"1. Close down all Jupyter windows, like this\n",
"1. Close down all Jupyter windows, like this one\n",
"2. Exit all command prompts / Terminals / Anaconda\n",
"3. Repeat Part 5 from the SETUP instructions to begin a new activated environment and launch jupyter lab\n",
"4. Kernel menu >> Restart Kernel and Clear Outputs of All Cells\n",
"5. Come back to this notebook and try the cell below again.\n",
"3. Repeat Part 5 from the SETUP instructions to begin a new activated environment and launch `jupyter lab` from the `llm_engineering` directory \n",
"4. Come back to this notebook, and do Kernel menu >> Restart Kernel and Clear Outputs of All Cells\n",
"5. Try the cell below again.\n",
"\n",
"If **that** doesn't work, then please contact me! I'll respond quickly, and we'll figure it out. Please run the diagnostics (last cell in this notebook) so I can debug. If you used Anaconda, it might be that for some reason your environment is corrupted, in which case the simplest fix is to use the virtualenv approach instead (Part 2B in the setup guides)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7c8c0bb3-0e94-466e-8d1a-4dfbaa014cbe",
"metadata": {},
"outputs": [],
"source": [
"# Some quick checks that your Conda environment or VirtualEnv is as expected\n",
"# The Environment Name should be: llms\n",
"\n",
"import os\n",
"\n",
"conda_prefix = os.environ.get('CONDA_PREFIX')\n",
"if conda_prefix:\n",
" print(\"Anaconda environment is active:\")\n",
" print(f\"Environment Path: {conda_prefix}\")\n",
" print(f\"Environment Name: {os.path.basename(conda_prefix)}\")\n",
"\n",
"virtual_env = os.environ.get('VIRTUAL_ENV')\n",
"if virtual_env:\n",
" print(\"Virtualenv is active:\")\n",
" print(f\"Environment Path: {virtual_env}\")\n",
" print(f\"Environment Name: {os.path.basename(virtual_env)}\")\n",
"\n",
"if not conda_prefix and not virtual_env:\n",
" print(\"Neither Anaconda nor Virtualenv seems to be active. Did you start jupyter lab in an Activated environment? See Setup Part 5.\")"
]
},
{
"cell_type": "code",
"execution_count": null,
@ -44,7 +72,7 @@
"metadata": {},
"outputs": [],
"source": [
"# This should run with no output - no import errors.\n",
"# And now, this should run with no output - no import errors.\n",
"# Import errors might indicate that you started jupyter lab without your environment activated? See SETUP part 5.\n",
"# Or you might need to restart your Kernel and Jupyter Lab.\n",
"# Or it's possible that something is wrong with Anaconda, in which case we may have to use virtualenv instead.\n",
@@ -65,7 +93,9 @@
"\n",
"Note that the `.env` file won't show up in your Jupyter Lab file browser, because Jupyter hides files that start with a dot for your security; they're considered hidden files. If you need to change the name, you'll need to use a command terminal or File Explorer (PC) / Finder Window (Mac). Ask ChatGPT if that's giving you problems, or email me!\n",
"\n",
"If you're having challenges creating the `.env` file, we can also do it with code! See the cell after the next one."
"If you're having challenges creating the `.env` file, we can also do it with code! See the cell after the next one.\n",
"\n",
"It's important to launch `jupyter lab` from the project root directory, `llm_engineering`. If you didn't do that, this cell might give you problems."
]
},
{
@@ -282,8 +312,8 @@
"\n",
"## Please run this next cell to gather some important data\n",
"\n",
"Please run the next cell; it should take a minute or so to run (mostly the network test).\n",
"Rhen email me the output of the last cell to ed@edwarddonner.com. \n",
"Please run the next cell; it should take a minute or so to run. Most of the time is checking your network bandwidth.\n",
"Then email me the output of the last cell to ed@edwarddonner.com. \n",
"Alternatively: this will create a file called report.txt - just attach the file to your email."
]
},
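The "create the `.env` file with code" option mentioned above can be sketched like this. This is a minimal, hypothetical sketch: the key value is a placeholder, and it assumes you run it from the `llm_engineering` project root.

```python
# Hypothetical sketch: write the .env file from Python instead of a text editor.
# Assumes the current working directory is the llm_engineering project root;
# the key below is a placeholder - substitute your real key.
from pathlib import Path

env_path = Path(".env")
env_path.write_text("OPENAI_API_KEY=sk-proj-xxxx\n")
print(f"Wrote {env_path.resolve()}")
```

After running it, restart the kernel so the notebook picks up the new file.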

week2/community-contributions/day1-azure-aws-ollama.ipynb (689 changed lines)

@@ -0,0 +1,689 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
"metadata": {},
"source": [
"# Welcome to Week 2!\n",
"\n",
"## Frontier Model APIs\n",
"\n",
"In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with OpenAI's API.\n",
"\n",
"Today we'll connect with the APIs for Azure OpenAI, AWS Bedrock (Anthropic's Claude), Google Gemini, and a local Ollama server."
]
},
{
"cell_type": "markdown",
"id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Important Note - Please read me</h2>\n",
" <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
" At the start of each week, it's worth checking you have the latest code.<br/>\n",
" First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
" After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
" <code>conda env update --f environment.yml --prune</code><br/>\n",
" Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
" <code>pip install -r requirements.txt</code>\n",
" <br/>Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Reminder about the resources page</h2>\n",
" <span style=\"color:#f71;\">Here's a link to resources for the course. This includes links to all the slides.<br/>\n",
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "85cfe275-4705-4d30-abea-643fbddf1db0",
"metadata": {},
"source": [
"## Setting up your keys\n",
"\n",
"We will use the models through cloud providers, so you will need credentials for AWS and Azure for this.\n",
"\n",
"When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
"\n",
"```\n",
"AZURE_OPENAI_API_KEY=xxxx\n",
"AZURE_OPENAI_ENDPOINT=https://example.openai.azure.com\n",
"AWS_ACCESS_KEY_ID=xxxx\n",
"AWS_SECRET_ACCESS_KEY=xxxx\n",
"AWS_SESSION_TOKEN=xxxx\n",
"AWS_REGION=us-west-2\n",
"OPENAI_BASE_URL=http://localhost:11434/v1\n",
"GOOGLE_API_KEY=xxxx\n",
"```\n",
"\n",
"Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top."
]
},
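What `load_dotenv()` does with these entries can be sketched in plain Python. This is a simplified stand-in for the python-dotenv behaviour (it skips quoting and interpolation), not the library itself:

```python
import os

def load_env_file(path=".env"):
    """Simplified stand-in for load_dotenv(): read KEY=VALUE lines into os.environ."""
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Skip blanks and comments; keep existing environment values.
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass

load_env_file()
print("AZURE_OPENAI_ENDPOINT:", os.environ.get("AZURE_OPENAI_ENDPOINT"))
```

Because it uses `setdefault`, variables already set in your shell win over the `.env` file, which matches the library's default.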
{
"cell_type": "code",
"execution_count": null,
"id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI, AzureOpenAI\n",
"import json\n",
"import boto3\n",
"from IPython.display import Markdown, display, update_display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
"metadata": {},
"outputs": [],
"source": [
"# import for google\n",
"# in rare cases, this seems to give an error on some systems. Please reach out to me if this happens,\n",
"# or you can feel free to skip Gemini - it's the lowest priority of the frontier models that we use\n",
"\n",
"import google.generativeai"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c5c0df5e",
"metadata": {},
"outputs": [],
"source": [
"# load the environment variables\n",
"load_dotenv()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
"metadata": {},
"outputs": [],
"source": [
"# Test that AZURE works\n",
"AZURE_MODEL = \"gpt-4o\"\n",
"client_azure = AzureOpenAI(\n",
" api_key=os.getenv('AZURE_OPENAI_API_KEY'),\n",
" azure_endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),\n",
" api_version=\"2024-08-01-preview\",\n",
")\n",
"messages = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": \"ping\"\n",
" }\n",
"]\n",
"response = client_azure.chat.completions.create(model=AZURE_MODEL, messages=messages)\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d5fe363",
"metadata": {},
"outputs": [],
"source": [
"# Test that AWS works\n",
"AWS_MODEL = \"anthropic.claude-3-sonnet-20240229-v1:0\"\n",
"session = boto3.Session()\n",
"bedrock = session.client(service_name='bedrock-runtime', region_name='us-east-1')\n",
"# AWS Messages are a bit more complex\n",
"aws_message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" { \"text\": \"how are you doing\" } \n",
" ],\n",
"}\n",
"response = bedrock.converse(\n",
" modelId=AWS_MODEL,\n",
" inferenceConfig={\n",
" \"maxTokens\": 2000,\n",
" \"temperature\": 0\n",
" },\n",
" messages=[aws_message],\n",
")\n",
"print(response['output']['message']['content'][0]['text'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a92f86d4",
"metadata": {},
"outputs": [],
"source": [
"# Test ollama using OpenAI API\n",
"OLLAMA_MODEL='qwen2.5'\n",
"print(os.getenv('OPENAI_BASE_URL'))\n",
"client_ollama = OpenAI(\n",
" base_url=os.getenv('OPENAI_BASE_URL'),\n",
" api_key='123'\n",
" )\n",
"response = client_ollama.chat.completions.create(model=OLLAMA_MODEL, messages=messages)\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
"metadata": {},
"outputs": [],
"source": [
"# Connect to Google Gemini (the Azure, AWS and Ollama clients were created above)\n",
"# All of these APIs are similar\n",
"# Having problems with API keys? You can use openai = OpenAI(api_key=\"your-key-here\") and the same for Claude\n",
"# Having problems with Google Gemini setup? Then just skip Gemini; you'll get all the experience you need from GPT and Claude.\n",
"\n",
"google.generativeai.configure()"
]
},
{
"cell_type": "markdown",
"id": "42f77b59-2fb1-462a-b90d-78994e4cef33",
"metadata": {},
"source": [
"## Asking LLMs to tell a joke\n",
"\n",
"It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n",
"Later we will be putting LLMs to better use!\n",
"\n",
"### What information is included in the API\n",
"\n",
"Typically we'll pass to the API:\n",
"- The name of the model that should be used\n",
"- A system message that gives overall context for the role the LLM is playing\n",
"- A user message that provides the actual prompt\n",
"\n",
"There are other parameters that can be used, including **temperature** which is typically between 0 and 1; higher for more random output; lower for more focused and deterministic."
]
},
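Temperature can be pictured as rescaling the model's token scores before sampling. The following is an illustration of the concept in pure Python, not the provider's actual sampling code:

```python
import math

def apply_temperature(logits, temperature):
    """Rescale logits by temperature, then softmax into probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens
low = apply_temperature(logits, 0.2)   # sharper: the top token dominates
high = apply_temperature(logits, 1.5)  # flatter: more random sampling
print(low, high)
```

At low temperature the distribution concentrates on the highest-scoring token (more deterministic output); at high temperature it flattens (more varied output).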
{
"cell_type": "code",
"execution_count": null,
"id": "378a0296-59a2-45c6-82eb-941344d3eeff",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are an assistant that is great at telling jokes\"\n",
"user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
"metadata": {},
"outputs": [],
"source": [
"prompts = [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
"metadata": {},
"outputs": [],
"source": [
"# GPT-4o\n",
"def call_azure(model=AZURE_MODEL, temp=0.5):\n",
" openai = AzureOpenAI(\n",
" api_key=os.getenv('AZURE_OPENAI_API_KEY'),\n",
" azure_endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),\n",
" api_version=\"2024-08-01-preview\",\n",
" )\n",
" completion = openai.chat.completions.create(model=model, messages=prompts, temperature=temp)\n",
" return completion.choices[0].message.content\n",
"print(call_azure('gpt-4o'))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
"metadata": {},
"outputs": [],
"source": [
"# GPT-4o-mini\n",
"# Temperature setting controls creativity\n",
"\n",
"print(call_azure('gpt-4o-mini', temp=0.7))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
"metadata": {},
"outputs": [],
"source": [
"# GPT-4o\n",
"\n",
"print(call_azure('gpt-4o', temp=0.4))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
"metadata": {},
"outputs": [],
"source": [
"# AWS with Claude 3 Sonnet\n",
"# API needs system message provided separately from user prompt\n",
"# Also adding max_tokens\n",
"\n",
"def call_aws(model=AWS_MODEL, temp=0.5):\n",
" aws_message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" { \"text\": user_prompt } \n",
" ],\n",
" }\n",
" sys_message = [ { \"text\": system_message } ]\n",
" session = boto3.Session()\n",
" bedrock = session.client(service_name='bedrock-runtime', region_name='us-east-1')\n",
" response = bedrock.converse(\n",
" modelId=model,\n",
" inferenceConfig={\n",
" \"maxTokens\": 2000,\n",
" \"temperature\": temp\n",
" },\n",
" messages=[aws_message],\n",
" system=sys_message\n",
" )\n",
" return response['output']['message']['content'][0]['text']\n",
"print(call_aws(AWS_MODEL))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
"metadata": {},
"outputs": [],
"source": [
"# AWS with Claude 3 Sonnet\n",
"# Now let's add in streaming back results\n",
"def call_aws_stream(model=AWS_MODEL, temp=0.5):\n",
" aws_message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" { \"text\": user_prompt } \n",
" ],\n",
" }\n",
" sys_message = [ { \"text\": system_message } ]\n",
"    session = boto3.Session()\n",
"    bedrock = session.client(service_name='bedrock-runtime', region_name='us-east-1')\n",
"    response = bedrock.converse_stream(\n",
"        modelId=model,\n",
"        inferenceConfig={\n",
"            \"maxTokens\": 2000,\n",
"            \"temperature\": temp\n",
"        },\n",
"        system=sys_message,\n",
"        messages=[aws_message],\n",
"    )\n",
"    stream = response.get('stream')\n",
"    reply = \"\"\n",
"    for event in stream:\n",
"        if \"contentBlockDelta\" in event:\n",
"            text = event[\"contentBlockDelta\"][\"delta\"]['text']\n",
"            reply += text\n",
"            print(text, end=\"\", flush=True)\n",
"    return reply\n",
"call_aws_stream(AWS_MODEL, temp=0.7)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12374cd3",
"metadata": {},
"outputs": [],
"source": [
"# Call Ollama\n",
"def call_ollama_stream(model=OLLAMA_MODEL, temp=0.5):\n",
" openai = OpenAI(\n",
" base_url=os.getenv('OPENAI_BASE_URL'),\n",
" api_key='123'\n",
" )\n",
" stream = openai.chat.completions.create(model=model, messages=prompts, temperature=temp, stream=True)\n",
" for chunk in stream:\n",
" if chunk.choices:\n",
" text = chunk.choices[0].delta.content or ''\n",
" print(text, end=\"\", flush=True)\n",
"call_ollama_stream(OLLAMA_MODEL)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
"metadata": {},
"outputs": [],
"source": [
"# The API for Gemini has a slightly different structure\n",
"\n",
"gemini = google.generativeai.GenerativeModel(\n",
" model_name='gemini-1.5-flash',\n",
" system_instruction=system_message\n",
")\n",
"response = gemini.generate_content(user_prompt)\n",
"print(response.text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
"metadata": {},
"outputs": [],
"source": [
"# To be serious! GPT-4o-mini with the original question\n",
"\n",
"prompts = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n",
" {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
"metadata": {},
"outputs": [],
"source": [
"# Have it stream back results in markdown\n",
"\n",
"def call_azure_stream(model=AZURE_MODEL, temp=0.5):\n",
" openai = AzureOpenAI(\n",
" api_key=os.getenv('AZURE_OPENAI_API_KEY'),\n",
" azure_endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),\n",
" api_version=\"2024-08-01-preview\",\n",
" )\n",
" return openai.chat.completions.create(model=model, messages=prompts, temperature=temp, stream=True)\n",
"stream = call_azure_stream('gpt-4o-mini', temp=0.7)\n",
"reply = \"\"\n",
"display_handle = display(Markdown(\"\"), display_id=True)\n",
"for chunk in stream:\n",
" if chunk.choices:\n",
" reply += chunk.choices[0].delta.content or ''\n",
" reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
" update_display(Markdown(reply), display_id=display_handle.display_id)"
]
},
{
"cell_type": "markdown",
"id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
"metadata": {},
"source": [
"## And now for some fun - an adversarial conversation between Chatbots...\n",
"\n",
"You're already familiar with prompts being organized into lists like:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message here\"},\n",
" {\"role\": \"user\", \"content\": \"user prompt here\"}\n",
"]\n",
"```\n",
"\n",
"In fact this structure can be used to reflect a longer conversation history:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message here\"},\n",
" {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
" {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
" {\"role\": \"user\", \"content\": \"the new user prompt\"},\n",
"]\n",
"```\n",
"\n",
"And we can use this approach to engage in a longer interaction with history."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
"metadata": {},
"outputs": [],
"source": [
"# Let's make a conversation between GPT-4o-mini (via Azure) and Claude 3 Sonnet (via AWS Bedrock)\n",
"# We're using relatively inexpensive models so the costs will be modest\n",
"\n",
"gpt_model = \"gpt-4o-mini\"\n",
"claude_model = \"anthropic.claude-3-sonnet-20240229-v1:0\"\n",
"\n",
"gpt_system = \"You are a chatbot who is very argumentative; \\\n",
"you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
"\n",
"claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
"everything the other person says, or find common ground. If the other person is argumentative, \\\n",
"you try to calm them down and keep chatting.\"\n",
"\n",
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt():\n",
" azure_client = AzureOpenAI(\n",
" api_key=os.getenv('AZURE_OPENAI_API_KEY'),\n",
" azure_endpoint=os.getenv('AZURE_OPENAI_ENDPOINT'),\n",
" api_version=\"2024-08-01-preview\",\n",
" )\n",
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
" for gpt, claude in zip(gpt_messages, claude_messages):\n",
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
" messages.append({\"role\": \"user\", \"content\": claude})\n",
" completion = azure_client.chat.completions.create(\n",
" model=gpt_model,\n",
" messages=messages\n",
" )\n",
" return completion.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
"metadata": {},
"outputs": [],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
"metadata": {},
"outputs": [],
"source": [
"def call_claude():\n",
" session = boto3.Session()\n",
" bedrock = session.client(service_name='bedrock-runtime', region_name='us-east-1')\n",
" messages = []\n",
" for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
" messages.append({\"role\": \"user\", \"content\": [{\"text\": gpt }]})\n",
" messages.append({\"role\": \"assistant\", \"content\": [{\"text\": claude_message }]})\n",
" messages.append({\"role\": \"user\", \"content\": [{\"text\": gpt_messages[-1] }]})\n",
" response = bedrock.converse(\n",
" modelId=claude_model,\n",
" system=[{\"text\":claude_system}],\n",
" messages=messages,\n",
" inferenceConfig={\n",
" \"maxTokens\": 2000,\n",
" \"temperature\": 0\n",
" },\n",
" )\n",
" return response['output']['message']['content'][0]['text']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "01395200-8ae9-41f8-9a04-701624d3fd26",
"metadata": {},
"outputs": [],
"source": [
"call_claude()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
"metadata": {},
"outputs": [],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
"metadata": {},
"outputs": [],
"source": [
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]\n",
"\n",
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
"print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
"\n",
"for i in range(5):\n",
" gpt_next = call_gpt()\n",
" print(f\"GPT:\\n{gpt_next}\\n\")\n",
" gpt_messages.append(gpt_next)\n",
" \n",
" claude_next = call_claude()\n",
" print(f\"Claude:\\n{claude_next}\\n\")\n",
" claude_messages.append(claude_next)"
]
},
{
"cell_type": "markdown",
"id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue</h2>\n",
" <span style=\"color:#900;\">\n",
" Be sure you understand how the conversation above is working, and in particular how the <code>messages</code> list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?<br/>\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
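To make the message-list population concrete, here is a standalone sketch of the interleaving that `call_gpt()` performs. The transcript strings are invented, and no API call is made:

```python
# Standalone sketch of how the two transcripts are zipped into one messages list.
# Each bot sees its own prior turns as "assistant" and the other bot's as "user".
gpt_messages = ["Hi there"]
claude_messages = ["Hi"]

messages = [{"role": "system", "content": "gpt system prompt here"}]
for gpt, claude in zip(gpt_messages, claude_messages):
    messages.append({"role": "assistant", "content": gpt})
    messages.append({"role": "user", "content": claude})

print(messages)
```

The list ends with a "user" turn (the other bot's last message), which is what the next chat-completions call responds to.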
{
"cell_type": "markdown",
"id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
"metadata": {},
"source": [
"# More advanced exercises\n",
"\n",
"Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
"\n",
"Try doing this yourself before you look at the solutions.\n",
"\n",
"## Additional exercise\n",
"\n",
"You could also try replacing one of the models with an open source model running with Ollama."
]
},
{
"cell_type": "markdown",
"id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business relevance</h2>\n",
" <span style=\"color:#181;\">This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c23224f6-7008-44ed-a57f-718975f4e291",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}