Phi-Li-Ne committed 3 months ago
parent commit b107397575
58 files changed (lines changed in parentheses; BIN = binary file):

  1. README.md (5)
  2. SETUP-PC.md (18)
  3. SETUP-PC.pdf (BIN)
  4. SETUP-mac.md (18)
  5. SETUP-mac.pdf (BIN)
  6. extras/community/prototype_signal.ipynb (413)
  7. week1/Intermediate Python.ipynb (18)
  8. week1/community-contributions/0225_day2_exercise_by_LM.ipynb (185)
  9. week1/community-contributions/Day2-Solution-Ollama.ipynb (123)
  10. week1/community-contributions/Text-Summarizer.ipynb (76)
  11. week1/community-contributions/day02_ollama_website_summarizer.ipynb (129)
  12. week1/community-contributions/day1-election-program-qa.ipynb (651)
  13. week1/community-contributions/day1_far_far_away.ipynb (115)
  14. week1/community-contributions/day2-EXERCISE-ollama-openai-api-website-summarizer-ITA.ipynb (435)
  15. week1/community-contributions/day2-webpage-summarizer-ollama-gemini.ipynb (186)
  16. week1/community-contributions/day2_exercise_cl.ipynb (240)
  17. week1/community-contributions/day5-github-page-portfolio-maker.ipynb (440)
  18. week1/community-contributions/wk1-day5-CHALLENGE.ipynb (208)
  19. week1/community-contributions/wk1-day5-tutoring-exercise.ipynb (163)
  20. week1/day1.ipynb (34)
  21. week1/day5.ipynb (7)
  22. week1/troubleshooting.ipynb (69)
  23. week2/community-contributions/GPT Claude Ollama Conversation.ipynb (339)
  24. week2/community-contributions/chatbot_conversation.ipynb (163)
  25. week2/community-contributions/chatbot_conversation_robots.jpg (BIN)
  26. week2/community-contributions/chatbot_conversation_transcript.md (381)
  27. week2/community-contributions/d5_TravelAgent_google_STT.ipynb (448)
  28. week2/community-contributions/day1-3way-with-llama3.2.ipynb (727)
  29. week2/community-contributions/day1-debate-gemini-judges.ipynb (1088)
  30. week2/community-contributions/day1-exercise-oscars-3-way-conversation.ipynb (899)
  31. week2/community-contributions/day1_triple_conversation.ipynb (187)
  32. week2/community-contributions/day2-different-tones.ipynb (575)
  33. week2/community-contributions/day2_message_interface_with_models.ipynb (284)
  34. week2/community-contributions/day4-multipleTools.ipynb (288)
  35. week2/community-contributions/day4_with_booking_and_multiple_tools_per_message.ipynb (749)
  36. week2/community-contributions/day5-book-flight.ipynb (432)
  37. week2/community-contributions/day5-exercise-departures-booking-and-translation.ipynb (1440)
  38. week2/community-contributions/gpt-gemini-ollama.py (82)
  39. week2/community-contributions/proof_testing_agent_french.ipynb (371)
  40. week2/community-contributions/week2-exercise-translator.ipynb (197)
  41. week2/community-contributions/week2_exercise_translated_chatbot.ipynb (614)
  42. week2/day2.ipynb (2)
  43. week2/day3.ipynb (2)
  44. week2/day4.ipynb (2)
  45. week2/day5.ipynb (2)
  46. week3/community-contributions/dataset_generator.ipynb (10)
  47. week3/community-contributions/telegram_bot_llm/README.md (54)
  48. week3/community-contributions/telegram_bot_llm/tg_lb_bot.py (199)
  49. week4/community-contributions/day4_extra_deepseek_and_hf_inference_provider_added.ipynb (956)
  50. week4/community-contributions/day4_with_inference_provider.ipynb (925)
  51. week4/day4.ipynb (3)
  52. week5/community-contributions/day4.5 - langchain faiss ollama.ipynb (767)
  53. week5/day4.5.ipynb (61)
  54. week8/community_contributions/Deploy_Deepseek_r1_on_Modal.ipynb (159)
  55. week8/community_contributions/images/gui.png (BIN)
  56. week8/community_contributions/images/metrics.png (BIN)
  57. week8/day4.ipynb (21)
  58. week8/day5.ipynb (2)

README.md (5 lines changed)

@@ -6,16 +6,19 @@
I'm so happy you're joining me on this path. We'll be building immensely satisfying projects in the coming weeks. Some will be easy, some will be challenging, many will ASTOUND you! The projects build on each other so you develop deeper and deeper expertise each week. One thing's for sure: you're going to have a lot of fun along the way.
- ### A note before you begin
+ ### Before you begin
I'm here to help you be most successful with your learning! If you hit any snafus, or if you have any ideas on how I can improve the course, please do reach out in the platform or by emailing me direct (ed@edwarddonner.com). It's always great to connect with people on LinkedIn to build up the community - you'll find me here:
https://www.linkedin.com/in/eddonner/
And this is new to me, but I'm also trying out X/Twitter at [@edwarddonner](https://x.com/edwarddonner) - if you're on X, please show me how it's done 😂
Resources to accompany the course, including the slides and useful links, are here:
https://edwarddonner.com/2024/11/13/llm-engineering-resources/
## Instant Gratification instructions for Week 1, Day 1
### Important note: see my warning about Llama3.3 below - it's too large for home computers! Stick with llama3.2! Several students have missed this warning...
We will start the course by installing Ollama so you can see results immediately!
1. Download and install Ollama from https://ollama.com noting that on a PC you might need to have administrator permissions for the install to work properly
2. On a PC, start a Command prompt / Powershell (Press Win + R, type `cmd`, and press Enter). On a Mac, start a Terminal (Applications > Utilities > Terminal).
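Before moving on, it can help to confirm from Python that Ollama is actually up and serving. This is a minimal sketch (not part of the README), assuming Ollama's default local port 11434 and that you've already run `ollama pull llama3.2`:

```python
# Quick sanity check against a locally running Ollama server.
# Assumes the default port (11434) and that llama3.2 has been pulled.
import requests

# List the models pulled locally (the /api/tags endpoint)
tags = requests.get("http://localhost:11434/api/tags").json()
print([m["name"] for m in tags.get("models", [])])

# Ask llama3.2 a quick question via the /api/chat endpoint
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "stream": False,
    },
)
print(response.json()["message"]["content"])
```

If both prints succeed, the Day 1 instant-gratification project should work.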

SETUP-PC.md (18 lines changed)

@@ -13,6 +13,17 @@ I use a platform called Anaconda to set up your environment. It's a powerful too
Having said that: if you have any problems with Anaconda, I've provided an alternative approach. It's faster and simpler and should have you running quickly, with less of a guarantee around compatibility.
### Before we begin - Heads up!
If you are relatively new to using the Command Prompt, here is an excellent [guide](https://chatgpt.com/share/67b0acea-ba38-8012-9c34-7a2541052665) with instructions and exercises. I'd suggest you work through this first to build some confidence.
There are 4 common gotchas to developing on Windows to be aware of:
1. Permissions. Please take a look at this [tutorial](https://chatgpt.com/share/67b0ae58-d1a8-8012-82ca-74762b0408b0) on permissions on Windows
2. Anti-virus, Firewall, VPN. These can interfere with installations and network access; try temporarily disabling them as needed
3. The evil Windows 260 character limit to filenames - here is a full [explanation and fix](https://chatgpt.com/share/67b0afb9-1b60-8012-a9f7-f968a5a910c7)!
4. If you've not worked with Data Science packages on your computer before, you might need to install Microsoft Build Tools. Here are [instructions](https://chatgpt.com/share/67b0b762-327c-8012-b809-b4ec3b9e7be0).
### Part 1: Clone the Repo
This gets you a local copy of the code on your box.
@@ -77,9 +88,10 @@ You should see `(llms)` in your prompt, which indicates you've activated your ne
Press Win + R, type `cmd`, and press Enter
- Run `python --version` to find out which python you're on. Ideally you'd be using a version of Python 3.11, so we're completely in sync.
- If not, it's not a big deal, but we might need to come back to this later if you have compatibility issues.
- You can download python here:
+ Run `python --version` to find out which python you're on.
+ Ideally you'd be using a version of Python 3.11, so we're completely in sync.
+ I believe Python 3.12 works also, but (as of Feb 2025) Python 3.13 does **not** yet work as several Data Science dependencies are not yet ready for Python 3.13.
+ If you need to install Python or install another version, you can download it here:
https://www.python.org/downloads/
2. Navigate to the "project root directory" by entering something like `cd C:\Users\YourUsername\Documents\Projects\llm_engineering` using the actual path to your llm_engineering project root directory. Do a `dir` and check you can see subdirectories for each week of the course.
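If you'd rather check your interpreter from inside Python than at the command line, this small sketch mirrors the guidance above (3.11 ideal, 3.12 believed fine, 3.13 not yet supported):

```python
# Check the running interpreter against the versions the course supports.
import sys

major, minor = sys.version_info[:2]
print(f"Running Python {major}.{minor}")
if (major, minor) == (3, 11):
    print("Perfect - fully in sync with the course.")
elif (major, minor) == (3, 12):
    print("Should work, but 3.11 is the tested version.")
else:
    print("Consider installing Python 3.11 from https://www.python.org/downloads/")
```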

SETUP-PC.pdf (binary)

Binary file not shown.

SETUP-mac.md (18 lines changed)

@@ -13,6 +13,14 @@ I use a platform called Anaconda to set up your environment. It's a powerful too
Having said that: if you have any problems with Anaconda, I've provided an alternative approach. It's faster and simpler and should have you running quickly, with less of a guarantee around compatibility.
### Before we begin
If you're less familiar with using the Terminal, please review this excellent [guide](https://chatgpt.com/canvas/shared/67b0b10c93a081918210723867525d2b) for some details and exercises.
If you're new to developing on your Mac, you may need to install XCode developer tools. Here are [instructions](https://chatgpt.com/share/67b0b8d7-8eec-8012-9a37-6973b9db11f5).
One "gotcha" to keep in mind: if you run anti-virus software, VPN or a Firewall, it might interfere with installations or network access. Please temporarily disable if you have problems.
### Part 1: Clone the Repo
This gets you a local copy of the code on your box.
@@ -49,10 +57,11 @@ If this Part 2 gives you any problems, there is an alternative Part 2B below tha
- Download Anaconda from https://docs.anaconda.com/anaconda/install/mac-os/
- Double-click the downloaded file and follow the installation prompts. Note that it takes up several GB and take a while to install, but it will be a powerful platform for you to use in the future.
- After installing, you'll need to open a fresh, new Terminal to be able to use it (and you might even need to restart).
2. **Set up the environment:**
- - Open a new Terminal (Applications > Utilities > Terminal)
+ - Open a **new** Terminal (Applications > Utilities > Terminal)
- Navigate to the "project root directory" using `cd ~/Documents/Projects/llm_engineering` (replace this path as needed with the actual path to the llm_engineering directory, your locally cloned version of the repo). Do `ls` and check you can see subdirectories for each week of the course.
- Create the environment: `conda env create -f environment.yml`
- Wait for a few minutes for all packages to be installed - in some cases, this can literally take 20-30 minutes if you've not used Anaconda before, and even longer depending on your internet connection. Important stuff is happening! If this runs for more than 1 hour 15 mins, or gives you other problems, please go to Part 2B instead.
@@ -70,9 +79,10 @@ You should see `(llms)` in your prompt, which indicates you've activated your ne
1. **Open a new Terminal** (Applications > Utilities > Terminal)
- Run `python --version` to find out which python you're on. Ideally you'd be using a version of Python 3.11, so we're completely in sync.
- If not, it's not a big deal, but we might need to come back to this later if you have compatibility issues.
- You can download python here:
+ Run `python --version` to find out which python you're on.
+ Ideally you'd be using a version of Python 3.11, so we're completely in sync.
+ I believe Python 3.12 works also, but (as of Feb 2025) Python 3.13 does **not** yet work as several Data Science dependencies are not yet ready for Python 3.13.
+ If you need to install Python or install another version, you can download it here:
https://www.python.org/downloads/
2. Navigate to the "project root directory" using `cd ~/Documents/Projects/llm_engineering` (replace this path with the actual path to the llm_engineering directory, your locally cloned version of the repo). Do `ls` and check you can see subdirectories for each week of the course.

SETUP-mac.pdf (binary)

Binary file not shown.

extras/community/prototype_signal.ipynb (413 lines changed)

@@ -0,0 +1,413 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "7e2c4bbb-5e8b-4d84-9997-ecb2c349cf54",
"metadata": {},
"source": [
"## First step - generate training data from examples"
]
},
{
"cell_type": "code",
"execution_count": 105,
"id": "16cf3aa2-f407-4b95-8b9e-c3c586f67835",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"import pandas as pd\n",
"from datetime import datetime, timedelta,timezone\n",
"from datasets import load_dataset, Dataset\n",
"from dotenv import load_dotenv\n",
"import os\n",
"from openai import OpenAI\n",
"import json\n",
"import tiktoken\n",
"from IPython.display import display, Markdown\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 96,
"id": "375302b6-b6a7-46ea-a74c-c2400dbd8bbe",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"load_dotenv()\n",
"\n",
"# Replace with your CoinAPI key\n",
"API_KEY = os.getenv('YOUR_COINAPI_KEY')\n",
"\n",
"# Define the base URL for CoinAPI\n",
"BASE_URL = 'https://rest.coinapi.io/v1/ohlcv/'\n",
"OLLAMA_URL = \"http://localhost:11434/v1\"\n",
"\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n",
"# URL to fetch the OHLCV data\n"
]
},
{
"cell_type": "code",
"execution_count": 98,
"id": "d0cc964d",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"\n",
"ollama = OpenAI(\n",
" base_url=OLLAMA_URL,\n",
" api_key='OLAMMA'\n",
")\n"
]
},
{
"cell_type": "code",
"execution_count": 104,
"id": "8a0c9fff-9eff-42fd-971b-403c99d9b726",
"metadata": {},
"outputs": [],
"source": [
"# Define the symbol and timeframe\n",
"base_data = {\n",
" 'name': 'Cardano',\n",
" 'symbol': f'BINANCE_SPOT_ADA_USDT',\n",
" 'timeframe': '1DAY',\n",
" 'time_range': 365 * 2\n",
"}\n",
"\n",
"\n",
"# Calculate the start date for one year ago\n",
"end_date = datetime.now(tz=timezone.utc)\n",
"\n",
"start_date = end_date - timedelta(days=base_data['time_range'])\n",
"\n",
"# Format the dates in the required format (ISO 8601)\n",
"start_date_str = start_date.strftime('%Y-%m-%dT%H:%M:%S')\n",
"end_date_str = end_date.strftime('%Y-%m-%dT%H:%M:%S')\n",
"\n",
"# Headers for authentication\n",
"headers = {\n",
" 'X-CoinAPI-Key': API_KEY\n",
"}\n",
"\n",
"# URL to fetch the OHLCV base_data\n",
"url = f'{BASE_URL}{base_data['symbol']}/history'\n",
"\n",
"# Request parameters\n",
"params = {\n",
" 'period_id': base_data['timeframe'],\n",
" 'time_start': start_date_str,\n",
" 'time_end': end_date_str,\n",
" 'limit': 1000 # Maximum number of records per request\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 91,
"id": "586b07ba-5396-4c34-a696-01c8bc3597a0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"729"
]
},
"execution_count": 91,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Fetch the data\n",
"response = requests.get(url, headers=headers, params=params) \n",
"len(response.json())"
]
},
{
"cell_type": "code",
"execution_count": 76,
"id": "953422d0-2e75-4d01-862e-6383df54d9e5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Timestamp Open High Low Close\n",
"724 2025-02-06 0.7325 0.7660 0.6978 0.7052\n",
"725 2025-02-07 0.7052 0.7532 0.6902 0.7072\n",
"726 2025-02-08 0.7072 0.7180 0.6815 0.7005\n",
"727 2025-02-09 0.7006 0.7160 0.6503 0.6814\n",
"728 2025-02-10 0.6815 0.7177 0.6632 0.7037\n"
]
}
],
"source": [
"# Check for successful response\n",
"if response.status_code == 200:\n",
" data = response.json()\n",
"\n",
" if data:\n",
" # Convert to DataFrame for better readability\n",
" df = pd.DataFrame(data)\n",
"\n",
" df = df[[\"time_period_start\", \"price_open\", \"price_high\", \"price_low\", \"price_close\"]]\n",
" df.columns = [\"Timestamp\", \"Open\", \"High\", \"Low\", \"Close\"]\n",
"\n",
" # Convert timestamp to readable format\n",
" df[\"Timestamp\"] = pd.to_datetime(df[\"Timestamp\"]).dt.strftime(\"%Y-%m-%d\")\n",
"\n",
" # Display the first few rows of the data\n",
" print(df.tail())\n",
" \n",
" # Convert last 365 days of data into JSON format\n",
" price_history = df.to_dict(orient=\"records\")\n",
" \n",
" else:\n",
" print('No data found for the given period.')\n",
"else:\n",
" print(f'Error fetching data: {response.status_code}, {response.text}')"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "ada5ed4f",
"metadata": {},
"outputs": [],
"source": [
"def count_tokens(text, model=\"gpt-4o\"):\n",
" encoding = tiktoken.encoding_for_model(model)\n",
" return len(encoding.encode(text))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ab47d974",
"metadata": {},
"outputs": [],
"source": [
" # Construct prompt \n",
"\n",
"prompt = f\"\"\"\n",
" Given the last 365 days of ${base_data['name']} OHLC data:\n",
"\n",
" {json.dumps(price_history, indent=2)}\n",
"\n",
" Analyze this data and provide a trading signal (Buy, Sell, or Hold) for today based on the trend and the price action.\n",
" Note that today is {end_date.strftime('%Y-%m-%d')}\n",
" Also, provide short term ,mid term and long term signals.\n",
" \"\"\"\n",
"num_tokens = count_tokens(prompt)\n",
"print(f\"Estimated Tokens: {num_tokens}\")\n",
"\n",
"print(prompt)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b40fec12",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"To analyze this data, I'll use a combination of moving averages, relative strength index (RSI), and other technical indicators. Please note that this is a simplified analysis and should not be considered as professional trading advice.\n",
"\n",
"**Current Data**\n",
"\n",
"For 2025-02-10, the opening price is not available. However, we can calculate the current prices based on the historical data provided.\n",
"\n",
"Let's assume the last known close price for 2025-02-09 was $0.6815. For simplicity, let's use this as the opening price for today (2025-02-10).\n",
"\n",
"**Short-Term Signal**\n",
"\n",
"For a short-term signal, I'll use a simple moving average crossover system.\n",
"\n",
"* Short-Term Moving Average (20 days): $0.6922\n",
"* Short-Term Moving Average (10 days): $0.6747\n",
"\n",
"Since the 20-day MA ($0.6922) is above the 10-day MA ($0.6747), we can conclude that **Buy** in this timeframe.\n",
"\n",
"**Mid-Term Signal**\n",
"\n",
"For a mid-term signal, I'll use RSI.\n",
"\n",
"* Current Price: $0.6815\n",
"* Overbought Region: 70-80\n",
"* Oversold Region: 20-50\n",
"\n",
"The current price ($0.6815) is at the lower end of the oversold region (20-50), indicating a potential buying opportunity.\n",
"\n",
"Since RSI values are not provided for the entire dataset, we'll use an RSI value of 30 (midpoint of the low and high values). At $0.6815, RSI is approximately 34.\n",
"\n",
"* Mid-Term Moving Average: Not available\n",
"* Mid-Term Momentum: Rising\n",
"\n",
"Considering the oversold region and rising momentum, **Hold** is a reasonable mid-term strategy for today.\n",
"\n",
"**Long-Term Signal**\n",
"\n",
"For a long-term signal, I'll use the overall trend direction based on historical data.\n",
"\n",
"The dataset shows an upward trend (average True Range, AtR, value has been increasing). From 2025-02-03 to 2025-02-09, there were 6 consecutive increases in this dataset. That's a strong positive trend.\n",
"\n",
"Since there are no obvious signs of weakness in the long-term data or divergence with other trends (like 50-day MA), I recommend **Hold** for an extended holding period, keeping an eye on RSI values and adjusting positions as needed to stay ahead of potential price drops.\n",
"\n",
"**Summary**\n",
"\n",
"* Short-Term: **Buy**\n",
"* Mid-Term: **Hold**\n",
"* Long-Term: **Hold**"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"def get_response(prompt):\n",
" new_response = ollama.chat.completions.create(model=\"llama3.2\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": f\"You are a trading analyst providing Buy/Sell/Hold signals based on ${base_data['name']} price history.Note that today is {end_date.strftime('%Y-%m-%d')}\"},\n",
" {\"role\": \"user\", \"content\": prompt}\n",
" ],\n",
" stream=True,\n",
" max_tokens=5500\n",
" )\n",
" markdown_content = \"\"\n",
" \n",
" # Stream response and accumulate markdown content\n",
" for chunk in new_response:\n",
" content = chunk.choices[0].delta.content or ''\n",
" markdown_content += content\n",
" \n",
" # Clear output and display updated markdown\n",
" display(Markdown(markdown_content), clear=True)\n",
" \n",
" yield content\n",
"\n",
"# Call the function and consume the generator to start streaming\n",
"for _ in get_response(prompt):\n",
" pass"
]
},
{
"cell_type": "code",
"execution_count": 88,
"id": "ba09436c",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"# $Cardano Trading Analysis for 2025-02-10\n",
"\n",
"### **Current Price Analysis**\n",
"- **Open:** 0.6815\n",
"- **High:** 0.7177\n",
"- **Low:** 0.6632\n",
"- **Close:** 0.7037\n",
"\n",
"The price of $Cardano closed 3.59% higher than the previous day's close. This suggests a potential bullish reversal following a downward trend observed over the last few days. However, the volatility in the high-low range reflects uncertainty in the market.\n",
"\n",
"### **Trend Overview**\n",
"- **Short-term:** \n",
" - The recent price action indicates a possible recovery as we see an upward close. The price is currently attempting to break resistance, but the last few days exhibited mixed movements (e.g., a decrease before the recent increase). \n",
"- **Mid-term:**\n",
" - Over the past month, $Cardano has experienced significant volatility. While it reached its peak at around 1.079 earlier in January, the subsequent decline indicates selling pressure in the mid-term. A consolidation phase appears as buyers are trying to push the price back up.\n",
"- **Long-term:**\n",
" - Over the past year, $Cardano has shown high volatility and a fluctuating price range, but it has generally been trending downwards since its recent highs. \n",
"\n",
"### **Trading Signals**\n",
"- **Short-term Signal:** **Buy**\n",
" - The recent upward price movement along with a closing above 0.7000 indicates potential upward momentum. Short-term traders may consider buying into this recovery signal.\n",
"\n",
"- **Mid-term Signal:** **Hold**\n",
" - Within the last month, while recovery is in place, it is prudent to wait for confirmation of sustained upward movement before committing larger positions. A hold is advised to monitor the situation.\n",
"\n",
"- **Long-term Signal:** **Sell**\n",
" - Given that the longer-term trends show a downward trajectory since peaking at higher prices, long-term holders might consider selling or reducing positions, especially if the price fails to stay above recent resistance levels.\n",
"\n",
"### **Conclusion**\n",
"Today’s price action indicates a bullish sentiment in the short term but still reflects uncertainty in the mid and long-term periods. It would be wise for traders to remain cautious and adjust positions as the market dynamics evolve further. Always consider your risk management strategies when deciding to enter or exit positions."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"def get_response(prompt):\n",
" new_response = openai.chat.completions.create(model=\"gpt-4o-mini\",\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": f\"You are a trading analyst providing Buy/Sell/Hold signals based on ${base_data['name']} price history. Format your response in markdown.Note that today is {end_date.strftime('%Y-%m-%d')}\"},\n",
" {\"role\": \"user\", \"content\": prompt}\n",
" ],\n",
" stream=True,\n",
" max_tokens=5500\n",
" )\n",
" \n",
" # Initialize markdown cell output\n",
" markdown_content = \"\"\n",
" \n",
" # Stream response and accumulate markdown content\n",
" for chunk in new_response:\n",
" content = chunk.choices[0].delta.content or ''\n",
" markdown_content += content\n",
" \n",
" # Clear output and display updated markdown\n",
" display(Markdown(markdown_content), clear=True)\n",
" \n",
" yield content\n",
"\n",
"# Call the function and consume the generator to start streaming\n",
"for _ in get_response(prompt):\n",
" pass"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f52bcc0a",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
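One caveat on the outputs above: the short-term call leans on a 10/20-day moving-average crossover that the notebook never actually computes, and the llama3.2 answer reads the crossover backwards (conventionally, the fast MA above the slow MA is the bullish signal). A hedged pandas sketch, assuming the `df` with its `Close` column built earlier in the notebook, lets you verify such a signal deterministically instead of trusting the model's arithmetic:

```python
# Deterministic moving-average crossover check for the OHLCV DataFrame
# built earlier in the notebook (assumes a "Close" column of prices).
import pandas as pd

def ma_crossover_signal(df: pd.DataFrame, fast: int = 10, slow: int = 20) -> str:
    close = df["Close"].astype(float)
    fast_ma = close.rolling(fast).mean().iloc[-1]
    slow_ma = close.rolling(slow).mean().iloc[-1]
    # Fast MA above slow MA is the conventional bullish reading
    return "Buy" if fast_ma > slow_ma else "Sell"

print(ma_crossover_signal(df))
```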

week1/Intermediate Python.ipynb (18 lines changed)

@@ -50,6 +50,22 @@
"https://chatgpt.com/share/673b553e-9d0c-8012-9919-f3bb5aa23e31"
]
},
{
"cell_type": "markdown",
"id": "f9e0f8e1-09b3-478b-ada7-c8c35003929b",
"metadata": {},
"source": [
"## With this in mind - understanding NameErrors in Python\n",
"\n",
"It's quite common to hit a NameError in python. With foundational knowledge, you should always feel equipped to debug a NameError and get to the bottom of it.\n",
"\n",
"If you're unsure how to fix a NameError, please see this [initial guide](https://chatgpt.com/share/67958312-ada0-8012-a1d3-62b3a5fcbbfc) and this [second guide with exercises](https://chatgpt.com/share/67a57e0b-0194-8012-bb50-8ea76c5995b8), and work through them both until you have high confidence.\n",
"\n",
"There's some repetition here, so feel free to skip it if you're already confident.\n",
"\n",
"## And now, on to the code!"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -57,7 +73,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Next let's create some things:\n",
"# First let's create some things:\n",
"\n",
"fruits = [\"Apples\", \"Bananas\", \"Pears\"]\n",
"\n",

week1/community-contributions/0225_day2_exercise_by_LM.ipynb (185 lines changed)

@@ -0,0 +1,185 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "fef36918-109d-41e3-8603-75ff81b42379",
"metadata": {},
"source": [
"# Solution for exercise day 2 - slight modification: model is a parameter also - display_summary(\"deepseek-r1:1.5b\",\"https://yoururl\")\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b50349ac-93ea-496b-ae20-bd72a93bb138",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "edd073c7-8444-4a0d-b84e-4b2ed0ee7f35",
"metadata": {},
"outputs": [],
"source": [
"# Constants\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"#MODEL = \"llama3.2\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2e3a6e1a-e4c7-4448-9852-1b6ba2bd8d66",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae3752ca-3a97-4d6a-ac84-5b75ebfb50ed",
"metadata": {},
"outputs": [],
"source": [
"# Define the system prompt \n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "48b5240f-7617-4e51-a320-cba9650bec84",
"metadata": {},
"outputs": [],
"source": [
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f7d84f0-60f2-4cbf-b4d1-173a79fe3380",
"metadata": {},
"outputs": [],
"source": [
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "25520a31-c857-4ed5-86da-50dfe5fab7bb",
"metadata": {},
"outputs": [],
"source": [
"def summarize(model,url):\n",
" website = Website(url)\n",
" payload = {\n",
" \"model\": model,\n",
" \"messages\": messages_for(website),\n",
" \"stream\": False\n",
" }\n",
" response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
" return response.json()['message']['content']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "430776ed-8516-43a9-8a22-618d9080f2e1",
"metadata": {},
"outputs": [],
"source": [
"# A function to display this nicely in the Jupyter output, using markdown\n",
"def display_summary(model,url):\n",
" summary = summarize(model,url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2b05c1f-e4a2-4f65-bd6d-634d72e38b6e",
"metadata": {},
"outputs": [],
"source": [
"#!ollama pull deepseek-r1:1.5b"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "01513f8a-15b7-4053-bfe4-44b36e5494d1",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"deepseek-r1:1.5b\",\"https://www.ipma.pt\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

week1/community-contributions/Day2-Solution-Ollama.ipynb (123 lines changed)

@@ -0,0 +1,123 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458",
"metadata": {},
"source": [
"# Day 2 EXERCISE Solution:\n",
"\n",
"Upgraded day 1 project that scrapes and summarizes any webpage using an Open Source model running locally via Ollama instead of OpenAI\n",
"\n",
"## Note:-\n",
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative: \n",
"1. Run `ollama pull llama3.2:1b` from a Terminal or Powershell\n",
"2. **Ctrl + /** to comment this code line below: `MODEL = \"llama3.2\"`\n",
"3. same **Ctrl + /** to uncomment: `MODEL = \"llama3.2:1b\"`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports:-\n",
"\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"import ollama"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "29ddd15d-a3c5-4f4e-a678-873f56162724",
"metadata": {},
"outputs": [],
"source": [
"# Constants:-\n",
"\n",
"# MODEL = \"llama3.2\"\n",
"MODEL = \"llama3.2:1b\"\n",
"# MODEL = \"deepseek-r1:1.5b\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6de38216-6d1c-48c4-877b-86d403f4e0f8",
"metadata": {},
"outputs": [],
"source": [
"class Website:\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
"\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
" and provides a short summary, ignoring text that might be navigation related. \\\n",
" Respond in markdown.\"\n",
"\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
" please provide a short summary of this website in markdown. \\\n",
" If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt\n",
"\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]\n",
"\n",
"\n",
"def summary(url):\n",
" website = Website(url)\n",
" response = ollama.chat(\n",
" model = MODEL,\n",
" messages = messages_for(website)\n",
" )\n",
" return display(Markdown(response['message']['content']))\n",
"\n",
"\n",
"summary(\"https://edwarddonner.com\")\n",
"# summary(\"https://cnn.com\")\n",
"# summary(\"https://anthropic.com\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

week1/community-contributions/Text-Summarizer.ipynb (76 lines changed)

@@ -0,0 +1,76 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 1) Import Required Libraries \n",
"\n",
"import requests\n",
"import gradio as gr\n",
"\n",
"# Deepseek only uses abstract summarization\n",
"# This tool use DeepSeek API Endpoint\n",
"\n",
"# 2) Define the DeepSeek API Endpoint\n",
"\n",
"OLLAMA_URL = \"http://localhost:11434/api/generate\"\n",
"\n",
"# 3) Define the Summarization Function which can retrieve Information\n",
"\n",
"def summarize_text(text):\n",
" payload = {\n",
" \"model\": \"deepseek-r1\", #Here you can load whatever the model you have in your ollama(ex:deepseek-r1:1.5b,7b,8b,14b) I used 7b model here \n",
" \"prompt\": f\"Summarize the following text in **5 bullet points**:\\n\\n{text}\", #The prompt is here for tell commands for the llm to act \n",
" \"stream\": False # Ensures the response is returned as a whole, not streamed\n",
" }\n",
"\n",
" response = requests.post(OLLAMA_URL, json=payload) #Send Requests to deepseekAPI\n",
"\n",
" if response.status_code == 200: #if server run correctly it return the result or it will give error\n",
" return response.json().get(\"response\", \"No summary generated.\")\n",
" else:\n",
" return f\"Error: {response.text}\"\n",
"\n",
"# 4) Create Gradio interface to design \n",
"interface = gr.Interface(\n",
" fn=summarize_text,\n",
" inputs=gr.Textbox(lines=10, placeholder=\"Enter text to summarize\"),\n",
" outputs=gr.Textbox(label=\"Summarized Text\"),\n",
" #theme='NoCrypt/miku', #Theme for the Interface I used Hatsune Miku from HF \n",
" title=\"AI-Powered Text Summarizer\",\n",
" description=\"Enter a long text and DeepSeek AI will generate a concise summary.\"\n",
")\n",
"\n",
"# Launch the web app\n",
"if __name__ == \"__main__\":\n",
" interface.launch()\n",
"\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
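The cell above sets `"stream": False` so the whole summary arrives at once. If you'd rather show tokens as they are generated, the same `/api/generate` endpoint streams newline-delimited JSON objects when `"stream": True` is set; here is a hedged sketch of that variant (the function name is made up for illustration):

```python
# Streaming variant of the /api/generate call: with "stream": True,
# Ollama returns one JSON object per line, each with a "response" fragment.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"

def summarize_text_streaming(text):
    payload = {
        "model": "deepseek-r1",
        "prompt": f"Summarize the following text in **5 bullet points**:\n\n{text}",
        "stream": True,
    }
    summary = ""
    with requests.post(OLLAMA_URL, json=payload, stream=True) as response:
        response.raise_for_status()
        for line in response.iter_lines():
            if line:
                chunk = json.loads(line)
                summary += chunk.get("response", "")
                print(chunk.get("response", ""), end="", flush=True)
    return summary
```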

week1/community-contributions/day02_ollama_website_summarizer.ipynb (129 lines changed)

@@ -0,0 +1,129 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "d25b0aef-3e5e-4026-90ee-2b373bf262b7",
"metadata": {},
"outputs": [],
"source": [
"# Step 0: Import Libraries\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"import ollama\n",
"from openai import OpenAI\n",
"import requests\n",
"\n",
"# Step 1: Set Constants and Variables\n",
"print(\"[INFO] Setting constants and variable ...\")\n",
"WEBSITE_URL = \"https://arxiv.org/\"\n",
"MODEL = \"llama3.2\"\n",
"approaches = [\"local-call\", \"python-package\", \"openai-python-library\"]\n",
"approach = approaches[2]\n",
"\n",
"# Step 1: Scrape Website\n",
"print(\"[INFO] Scraping website ...\")\n",
"url_response = requests.get(\n",
" url=WEBSITE_URL,\n",
" headers={\"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"}\n",
" )\n",
"soup = BeautifulSoup(\n",
" markup=url_response.content,\n",
" features=\"html.parser\"\n",
" )\n",
"website_title = soup.title.string if soup.title else \"No title found!!!\"\n",
"for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
"website_text = soup.body.get_text(\n",
" separator=\"\\n\",\n",
" strip=True\n",
" )\n",
"\n",
"# Step 2: Create Prompts\n",
"print(\"[INFO] Creating system prompt ...\")\n",
"system_prompt = \"You are an assistant that analyzes the contents of a \\\n",
" website and provides a short summary, ignoring text that might be \\\n",
" navigation related. Respond in markdown.\"\n",
"\n",
"print(\"[INFO] Creating user prompt ...\")\n",
"user_prompt = f\"You are looking at a website titled {website_title}\"\n",
"user_prompt += \"\\nBased on the contents of the website, please provide \\\n",
" a short summary of this website in markdown. If the website \\\n",
" includes news or announcements, summarize them, too. The contents \\\n",
" of this website are as follows:\\n\\n\"\n",
"user_prompt += website_text\n",
"\n",
"# Step 3: Make Messages List\n",
"print(\"[INFO] Making messages list ...\")\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
"]\n",
"\n",
"# Step 4: Call Model and Print Results\n",
"if approach == \"local-call\":\n",
" response = requests.post(\n",
" url=\"http://localhost:11434/api/chat\",\n",
" json={\n",
" \"model\": MODEL,\n",
" \"messages\": messages,\n",
" \"stream\": False\n",
" },\n",
" headers={\"Content-Type\": \"application/json\"}\n",
" )\n",
" print(\"[INFO] Printing result ...\")\n",
" display(Markdown(response.json()[\"message\"][\"content\"]))\n",
"elif approach == \"python-package\":\n",
" response = ollama.chat(\n",
" model=MODEL,\n",
" messages=messages,\n",
" stream=False\n",
" )\n",
" print(\"[INFO] Printing result ...\")\n",
" display(Markdown(response[\"message\"][\"content\"]))\n",
"elif approach == \"openai-python-library\":\n",
" ollama_via_openai = OpenAI(\n",
" base_url=\"http://localhost:11434/v1\",\n",
" api_key=\"ollama\"\n",
" )\n",
" response = ollama_via_openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=messages\n",
" )\n",
" print(\"[INFO] Printing result ...\")\n",
" display(Markdown(response.choices[0].message.content))\n",
"else:\n",
" raise ValueError(f\"[INFO] Invalid approach! Please select an approach from {approaches} and try again.\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0a6676e-fb43-4725-9389-2acd74c13c4e",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

week1/community-contributions/day1-election-program-qa.ipynb (651 lines changed)

@@ -0,0 +1,651 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Instant Gratification\n",
"\n",
"## Your first Frontier LLM Project!\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n",
"\n",
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
"\n",
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n",
"\n",
"## If you're new to Jupyter Lab\n",
"\n",
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n",
"\n",
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
"\n",
"## If you'd prefer to work in IDEs\n",
"\n",
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n",
"If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n",
"\n",
"## If you'd like to brush up your Python\n",
"\n",
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n",
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n",
"\n",
"## I am here to help\n",
"\n",
"If you have any problems at all, please do reach out. \n",
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!)\n",
"\n",
"## More troubleshooting\n",
"\n",
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n",
"\n",
"## If this is old hat!\n",
"\n",
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Please read - important note</h2>\n",
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n",
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# If you get an error running this cell, then please head over to the troubleshooting notebook!"
]
},
{
"cell_type": "markdown",
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
"metadata": {},
"source": [
"# Connecting to OpenAI\n",
"\n",
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n",
"\n",
"## Troubleshooting if you have problems:\n",
"\n",
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n",
"\n",
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n",
"\n",
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
"\n",
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"\n",
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions"
]
},
{
"cell_type": "markdown",
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91",
"metadata": {},
"source": [
"# Let's make a quick call to a Frontier model to get started, as a preview!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n",
"\n",
"message = \"Tell me about a way to analyse what people do in a video clip.\"\n",
"#response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n",
"#print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "2aa190e5-cb31-456a-96cc-db109919cd78",
"metadata": {},
"source": [
"## OK onwards with our first project"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.title)\n",
"print(ed.text)"
]
},
{
"cell_type": "markdown",
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
"metadata": {},
"source": [
"## Types of prompts\n",
"\n",
"You may know this already - but if not, you will get very familiar with it!\n",
"\n",
"Models like GPT4o have been trained to receive instructions in a particular way.\n",
"\n",
"They expect to receive:\n",
"\n",
"**A system prompt** that tells them what task they are performing and what tone they should use\n",
"\n",
"**A user prompt** -- the conversation starter that they should reply to"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
"source": [
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
"source": [
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
"metadata": {},
"outputs": [],
"source": [
"print(user_prompt_for(ed))"
]
},
{
"cell_type": "markdown",
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
"metadata": {},
"source": [
"## Messages\n",
"\n",
"The API from OpenAI expects to receive messages in a particular structure.\n",
"Many of the other APIs share this structure:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]\n",
"\n",
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the might GPT (yet!)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47",
"metadata": {},
"source": [
"## And now let's build useful messages for GPT-4o-mini, using a function"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
"source": [
"# See how this function creates exactly the format above\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c",
"metadata": {},
"outputs": [],
"source": [
"# Try this out, and then try for a few more websites\n",
"\n",
"messages_for(ed)"
]
},
{
"cell_type": "markdown",
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
"metadata": {},
"source": [
"## Time to bring it together - the API for OpenAI is very simple!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
"source": [
"# And now: call the OpenAI API. You will get very familiar with this!\n",
"\n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages_for(website)\n",
" )\n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [],
"source": [
"summary = summarize(\"https://edwarddonner.com\")\n",
"print(summary)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
"source": [
"# A function to display this nicely in the Jupyter output, using markdown\n",
"\n",
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "markdown",
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624",
"metadata": {},
"source": [
"# Let's try more websites\n",
"\n",
"Note that this will only work on websites that can be scraped using this simplistic approach.\n",
"\n",
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n",
"\n",
"Also Websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n",
"\n",
"But many websites will work just fine!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
"metadata": {},
"outputs": [],
"source": [
"urls = ['https://be-able.info/de/be-able/', \"https://taz.de/\", \"https://www.bundestagswahl-bw.de/wahlprogramm-gruene\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
"metadata": {},
"outputs": [],
"source": [
"display_summary(urls[0])"
]
},
{
"cell_type": "markdown",
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business applications</h2>\n",
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
"\n",
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n",
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
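{
"cell_type": "markdown",
"id": "b2c3d4e5-f6a7-4b89-0123-456789abcdef",
"metadata": {},
"source": [
"Here is one possible sketch for that exercise. It reuses the `openai` client and gpt-4o-mini from above; the helper name `suggest_subject` and the sample email are made up."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c3d4e5f6-a7b8-4c90-1234-56789abcdef0",
"metadata": {},
"outputs": [],
"source": [
"# A sketch: suggest a short subject line for an email\n",
"\n",
"def suggest_subject(email_body):\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": \"You suggest a short, clear subject line for an email. Reply with the subject line only.\"},\n",
" {\"role\": \"user\", \"content\": email_body}\n",
" ]\n",
" response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
" return response.choices[0].message.content\n",
"\n",
"print(suggest_subject(\"Hi team, a reminder that the quarterly review has moved to Thursday at 3pm. Please update your slides beforehand.\"))"
]
},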
{
"cell_type": "code",
"execution_count": null,
"id": "856ff857-ba5f-4596-90b9-cd6cee4073dc",
"metadata": {},
"outputs": [],
"source": [
"# Naive extraction of name of the political party from user input\n",
"\n",
"party_mapping = {\"grünen\": \"grüne\", \"grüne\": \"grüne\", \"linken\": \"linke\", \"spd\": \"spd\", \"cdu\": \"cdu\", \"cdu/csu\": \"cdu\", \"csu\": \"cdu\", \"fdp\": \"fdp\", \"afd\": \"afd\", \"bsw\": \"bsw\"}\n",
"\n",
"def extract_party_from_user_prompt(user_input):\n",
" toks = user_input.split()\n",
" for tok in toks:\n",
" tok = tok.lower()\n",
" if tok in party_mapping.keys():\n",
" return party_mapping[tok]\n",
" return \"I can only answer your question concerning the election program of a certain political party. Mention one of 'FDP', 'BSW', 'Grüne', 'Linke', 'SPD', 'CDU' or 'AFD' in your question and I will try my best.\"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3962d846-ce82-47d2-8c3f-5a6fe296710d",
"metadata": {},
"outputs": [],
"source": [
"from selenium import webdriver\n",
"from selenium.webdriver.common.by import By\n",
"\n",
"def get_election_program(partyname):\n",
" \"\"\"Scrape parties' election programs from the official election website. Naively ignore cookie banner stuff.\"\"\"\n",
"\n",
" # Download the browser driver for your OS and add the path here\n",
" browser_driver_path = r'C:\\Program Files\\BrowserDrivers\\geckodriver.exe'\n",
" \n",
" service = webdriver.firefox.service.Service(executable_path=browser_driver_path)\n",
" \n",
" parties = {\"grüne\": \"https://www.bundestagswahl-bw.de/wahlprogramm-gruene\",\n",
" \"spd\": \"https://www.bundestagswahl-bw.de/wahlprogramm-spd\",\n",
" \"cdu\": \"https://www.bundestagswahl-bw.de/wahlprogramm-cdu\",\n",
" \"linke\": \"https://www.bundestagswahl-bw.de/wahlprogramm-die-linke\",\n",
" \"fdp\": \"https://www.bundestagswahl-bw.de/wahlprogramm-fdp\",\n",
" \"afd\": \"https://www.bundestagswahl-bw.de/wahlprogramm-afd\",\n",
" \"bsw\": \"https://www.bundestagswahl-bw.de/wahlprogramm-bsw\"}\n",
" \n",
" election_prog = \"\"\n",
" \n",
" if partyname in parties.keys():\n",
" site = parties[partyname]\n",
" driver = webdriver.Firefox(service=service)\n",
" driver.get(site)\n",
" elements = driver.find_elements(By.TAG_NAME, 'p')\n",
" \n",
" for e in elements:\n",
" if not any(x in [\"Cookies\", \"Cookie\", \"akzeptiere\", \"Datenschutzerklärung\", \"Impressum\"] for x in e.text.split()) and e.text:\n",
" election_prog += e.text\n",
" if len(election_prog.split()) > 100:\n",
" print(\"Election program extracted.\")\n",
"\n",
" else:\n",
" election_prog = f\"Schade, für die Partei {partyname} konnte ich leider kein Wahlprogramm finden.\"\n",
" \n",
" driver.quit()\n",
" return election_prog"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b3a408d1-d824-4e33-a5f4-c672bc6c6198",
"metadata": {},
"outputs": [],
"source": [
"\n",
"def answer_my_election_program_question(input_from_user):\n",
" partyname = extract_party_from_user_prompt(input_from_user)\n",
" print(f\"This is a question about the political party: {partyname.capitalize()}\")\n",
" \n",
" # Step 1: Create your prompts\n",
" system_prompt = \"Du bist ein neutraler Beobachter, der aufgrund der ihm zur Verfügung gestellten Wahlprogramme Fragen zum Wahlprogramm der verschiedenen Parteien beantwortet. Beantworte Fragen zum Wahlprogramm auf Deutsch. Basiere deine Antwort ausschließlich auf den im Folgenden aufgeführten Informationen.\"\n",
" election_program = get_election_program(partyname)\n",
" \n",
" user_prompt = f\"Beantworte folgende Frage: \\n {input_from_user} \\n Verwende dafür folgende Infos: \\n {election_program}.\\n\\n Gib deine Antwort in Markdown aus.\"\n",
" \n",
" # Step 2: Make the messages list\n",
" \n",
" messages = [{\"role\": \"system\", \"content\": system_prompt}, {\"role\": \"user\", \"content\": user_prompt}] # fill this in\n",
" \n",
" # Step 3: Call OpenAI\n",
" \n",
" response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
" formatted_response = f\"\\n\\n{response.choices[0].message.content}\"\n",
" # Step 4: print the result\n",
" return formatted_response"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e66a0967-d1e9-4f92-aeb6-95e478465a1f",
"metadata": {},
"outputs": [],
"source": [
"# Ask questions about the election programs of the main political parties for the Bundestagswahl 2025 in Germany\n",
"\n",
"question = \"Wie verhält sich die SPD zu Verkehrsfragen und Klimaschutz?\"\n",
"answer = answer_my_election_program_question(question)\n",
"display(Markdown(answer))"
]
},
{
"cell_type": "markdown",
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda",
"metadata": {},
"source": [
"## An extra exercise for those who enjoy web scraping\n",
"\n",
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)"
]
},
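{
"cell_type": "markdown",
"id": "d4e5f6a7-b8c9-4d01-2345-6789abcdef01",
"metadata": {},
"source": [
"As a minimal sketch of that idea, here is a Selenium-backed variant of `Website`. It assumes Selenium and a Firefox driver are installed, as in the election-program cells above; the class name is made up and this is not the student solution in community-contributions."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5f6a7b8-c9d0-4e12-3456-789abcdef012",
"metadata": {},
"outputs": [],
"source": [
"from selenium import webdriver\n",
"from selenium.webdriver.common.by import By\n",
"\n",
"class RenderedWebsite:\n",
" \"\"\"Like Website, but renders JavaScript in a real browser before reading the text\"\"\"\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" driver = webdriver.Firefox()\n",
" try:\n",
" driver.get(url)\n",
" self.title = driver.title or \"No title found\"\n",
" self.text = driver.find_element(By.TAG_NAME, \"body\").text\n",
" finally:\n",
" driver.quit()\n",
"\n",
"# Then, for example: messages_for(RenderedWebsite(\"https://openai.com\"))"
]
},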
{
"cell_type": "markdown",
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6",
"metadata": {},
"source": [
"# Sharing your code\n",
"\n",
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
"\n",
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
"\n",
"Here are good instructions courtesy of an AI friend: \n",
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293"
]
},
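{
"cell_type": "markdown",
"id": "f6a7b8c9-d0e1-4f23-4567-89abcdef0123",
"metadata": {},
"source": [
"If you prefer the command line, the same output-clearing can be done with nbconvert - for example (the filename below is a placeholder):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7b8c9d0-e1f2-4034-5678-9abcdef01234",
"metadata": {},
"outputs": [],
"source": [
"# Clear all outputs in place before committing (replace the placeholder filename with yours)\n",
"\n",
"!jupyter nbconvert --clear-output --inplace your_notebook.ipynb"
]
}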
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

115
week1/community-contributions/day1_far_far_away.ipynb

@ -0,0 +1,115 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "44aba2a0-c6eb-4fc1-a5cc-0a8f8679dbb8",
"metadata": {},
"source": [
"## Far Far Away..."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4d58124-5e9a-4f5a-9e0a-ff74f43896a8",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "33179b68-7ed5-46ab-b583-d67ed57cd39d",
"metadata": {},
"outputs": [],
"source": [
"def add_user_greeting(greeting):\n",
" user_prompt = \"\"\"\n",
" The following is the greeting from the user. Please respond in character as a barman in the Mos Eisley Cantina.\\n\\n\n",
" \"\"\"\n",
" user_prompt += greeting\n",
"\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "67dc3099-2ccc-4ee8-8ff2-0dbbe4ae2fcb",
"metadata": {},
"outputs": [],
"source": [
"def approach_the_bar(greeting):\n",
"\n",
" system_prompt = \"You are a barman in the Mos Eisley Cantina from the Star Wars universe.\\\n",
"It is a Tuesday evening, the year is 3BBY, and the Cantina is quiet except for a few lonely regulars.\\\n",
"The barman (you) is slightly skeptical but eager to share some interesting news regarding some nearby imperial activity.\\\n",
"You will recieve a greeting from the user, you must respond and provide them with some gossip detailing \\\n",
"some local shady dealings occuring in Mos Eisley. Please format your response using markdown to provide a sense of the conversation.\"\n",
"\n",
" user_prompt = add_user_greeting(greeting)\n",
" \n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt},\n",
" ]\n",
" \n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages\n",
" )\n",
" \n",
" # Step 4: print the result in markdown format\n",
" pretty_response = Markdown(response.choices[0].message.content)\n",
" display(pretty_response)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fb47e2b7-5509-4d1a-8e71-ff103fc8a885",
"metadata": {},
"outputs": [],
"source": [
"approach_the_bar(\"\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

435
week1/community-contributions/day2-EXERCISE-ollama-openai-api-website-summarizer-ITA.ipynb

@ -0,0 +1,435 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {
"jp-MarkdownHeadingCollapsed": true
},
"source": [
"# Welcome to your first assignment!\n",
"\n",
"Instructions are below. Please give this a try, and look in the solutions folder if you get stuck (or feel free to ask me!)"
]
},
{
"cell_type": "markdown",
"id": "ada885d9-4d42-4d9b-97f0-74fbbbfe93a9",
"metadata": {
"jupyter": {
"source_hidden": true
}
},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Just before we get to the assignment --</h2>\n",
" <span style=\"color:#f71;\">I thought I'd take a second to point you at this page of useful resources for the course. This includes links to all the slides.<br/>\n",
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458",
"metadata": {},
"source": [
"# HOMEWORK EXERCISE ASSIGNMENT\n",
"\n",
"Upgrade the day 1 project to summarize a webpage to use an Open Source model running locally via Ollama rather than OpenAI\n",
"\n",
"You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n",
"\n",
"**Benefits:**\n",
"1. No API charges - open-source\n",
"2. Data doesn't leave your box\n",
"\n",
"**Disadvantages:**\n",
"1. Significantly less power than Frontier Model\n",
"\n",
"## Recap on installation of Ollama\n",
"\n",
"Simply visit [ollama.com](https://ollama.com) and install!\n",
"\n",
"Once complete, the ollama server should already be running locally. \n",
"If you visit: \n",
"[http://localhost:11434/](http://localhost:11434/)\n",
"\n",
"You should see the message `Ollama is running`. \n",
"\n",
"If not, bring up a new Terminal (Mac) or Powershell (Windows) and enter `ollama serve` \n",
"And in another Terminal (Mac) or Powershell (Windows), enter `ollama pull llama3.2` \n",
"Then try [http://localhost:11434/](http://localhost:11434/) again.\n",
"\n",
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or Powershell, and change the code below from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "29ddd15d-a3c5-4f4e-a678-873f56162724",
"metadata": {},
"outputs": [],
"source": [
"# Constants\n",
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"MODEL = \"llama3.2\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dac0a679-599c-441f-9bf2-ddc73d35b940",
"metadata": {},
"outputs": [],
"source": [
"# Create a messages list using the same format that we used for OpenAI\n",
"\n",
"messages = [\n",
" {\"role\": \"user\", \"content\": \"Describe some of the business applications of Generative AI\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7bb9c624-14f0-4945-a719-8ddb64f66f47",
"metadata": {},
"outputs": [],
"source": [
"payload = {\n",
" \"model\": MODEL,\n",
" \"messages\": messages,\n",
" \"stream\": False\n",
" }"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "479ff514-e8bd-4985-a572-2ea28bb4fa40",
"metadata": {},
"outputs": [],
"source": [
"# Let's just make sure the model is loaded\n",
"\n",
"!ollama pull llama3.2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "42b9f644-522d-4e05-a691-56e7658c0ea9",
"metadata": {},
"outputs": [],
"source": [
"# If this doesn't work for any reason, try the 2 versions in the following cells\n",
"# And double check the instructions in the 'Recap on installation of Ollama' at the top of this lab\n",
"# And if none of that works - contact me!\n",
"\n",
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
"print(response.json()['message']['content'])"
]
},
{
"cell_type": "markdown",
"id": "6a021f13-d6a1-4b96-8e18-4eae49d876fe",
"metadata": {},
"source": [
"# Introducing the ollama package\n",
"\n",
"And now we'll do the same thing, but using the elegant ollama python package instead of a direct HTTP call.\n",
"\n",
"Under the hood, it's making the same call as above to the ollama server running at localhost:11434"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7745b9c4-57dc-4867-9180-61fa5db55eb8",
"metadata": {},
"outputs": [],
"source": [
"import ollama\n",
"\n",
"response = ollama.chat(model=MODEL, messages=messages)\n",
"print(response['message']['content'])"
]
},
{
"cell_type": "markdown",
"id": "a4704e10-f5fb-4c15-a935-f046c06fb13d",
"metadata": {},
"source": [
"## Alternative approach - using OpenAI python library to connect to Ollama"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23057e00-b6fc-4678-93a9-6b31cb704bff",
"metadata": {},
"outputs": [],
"source": [
"# There's actually an alternative approach that some people might prefer\n",
"# You can use the OpenAI client python library to call Ollama:\n",
"\n",
"from openai import OpenAI\n",
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
"\n",
"response = ollama_via_openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=messages\n",
")\n",
"\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "bc7d1de3-e2ac-46ff-a302-3b4ba38c4c90",
"metadata": {},
"source": [
"## Also trying the amazing reasoning model DeepSeek\n",
"\n",
"Here we use the version of DeepSeek-reasoner that's been distilled to 1.5B. \n",
"This is actually a 1.5B variant of Qwen that has been fine-tuned using synethic data generated by Deepseek R1.\n",
"\n",
"Other sizes of DeepSeek are [here](https://ollama.com/library/deepseek-r1) all the way up to the full 671B parameter version, which would use up 404GB of your drive and is far too large for most!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cf9eb44e-fe5b-47aa-b719-0bb63669ab3d",
"metadata": {},
"outputs": [],
"source": [
"!ollama pull deepseek-r1:1.5b"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1d3d554b-e00d-4c08-9300-45e073950a76",
"metadata": {},
"outputs": [],
"source": [
"# This may take a few minutes to run! You should then see a fascinating \"thinking\" trace inside <think> tags, followed by some decent definitions\n",
"\n",
"response = ollama_via_openai.chat.completions.create(\n",
" model=\"deepseek-r1:1.5b\",\n",
" messages=[{\"role\": \"user\", \"content\": \"Please give definitions of some core concepts behind LLMs: a neural network, attention and the transformer\"}]\n",
")\n",
"\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898",
"metadata": {},
"source": [
"# NOW the exercise for you\n",
"\n",
"Take the code from day1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6de38216-6d1c-48c4-877b-86d403f4e0f8",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"HEADERS = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\",\n",
" \"Content-Type\": \"application/json\"\n",
"}\n",
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"\n",
"MODEL = \"llama3.2\"\n",
"\n",
"system_prompt = \"Sei un assistente e analizzi il contenuto di un sito web \\\n",
"produci un breve sommario, ignora il testo o gli elementi relativi alla navigazione. \\\n",
"Rispondi markdown.\"\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "6f343c27-628c-4c54-9a5b-842e6ad5d176",
"metadata": {},
"outputs": [],
"source": [
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=HEADERS)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "bf6245ca-2d53-4fd8-a19c-0e6d052031fd",
"metadata": {},
"outputs": [],
"source": [
"def user_prompt_for(website):\n",
" user_prompt = f\"Stai cercando un sito dal titolo: {website.title}\"\n",
" user_prompt += \"\\nI contenuti di questo sito web sono i seguenti: \\\n",
"Per favore, fornisci un breve riassunto di questo sito web in markdown. \\\n",
"Se include notizie o annunci, riassumili anch'essi. \\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "dec0636f-9efc-4f91-8861-3141276a9a6e",
"metadata": {},
"outputs": [],
"source": [
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "f894b232-1ea1-4bd9-bf44-d7b1571f7913",
"metadata": {},
"outputs": [],
"source": [
"def summarize(url):\n",
" ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
" \n",
" website = Website(url)\n",
" response = ollama_via_openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=messages_for(website)\n",
" )\n",
" return response.choices[0].message.content\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "d868d778-13b5-4934-acf5-dcb919a27d59",
"metadata": {},
"outputs": [],
"source": [
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "0a0d9b79-de3c-4f77-9254-f02cf4d6217a",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"**Sommario del sito WP Pisa - Il sito del Meetup WordPress di Pisa**\n",
"\n",
"Il sito web WP Pisa è il punto di riferimento per gli appassionati di WordPress a Pisa. Gli organizzatori offrono meetup mensili gratuiti per discutere conoscenze, esperienze e progetti correlati al mondo di WordPress.\n",
"\n",
"**Eventi e Annunci**\n",
"\n",
"* **WordCamp Pisa 2025**: Aperta la call for organizer\n",
"* **Il Tuo Sito Ovunque in Pochi Minuti**: Un incontro con Docker e sviluppatori WordPress - Partecipa!\n",
"* **Core Days Roma: Le novità sul core di WordPress per i dev**\n",
"* **NO MORE THUMBNAILS!**\n",
"\n",
"**Informazioni Generali**\n",
"\n",
"* Il meetuu è aperto a tutti, indipendentemente dal livello di competenza in WordPress\n",
"* Tutti gli eventi sono gratuiti e organizzati con la supervisione di WordPress Foundation tramite la piattaforma Meetup.com\n",
"* La comunità WP Pisa contiene 150+ iscritti"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://wppisa.it/\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

186
week1/community-contributions/day2-webpage-summarizer-ollama-gemini.ipynb

@ -0,0 +1,186 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1faf8b29-2ba6-40c7-89ee-71f71e234f11",
"metadata": {},
"source": [
"## Extra requirements\n",
"```bash\n",
"pip install -q -U google-genai\n",
"```\n",
"\n",
"## Required environment variable\n",
"GEMINI_API_KEY\n",
"\n",
"### How to get GEMINI API KEY\n",
"\n",
"Use the link: [gemini api key](https://aistudio.google.com/app/apikey) to get yours."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "be06ce76-20ee-4066-9582-a4ed745f278f",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from dotenv import load_dotenv\n",
"from google import genai\n",
"from google.genai import types"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "99e42519-5dac-4b13-8a26-8a635753343b",
"metadata": {},
"outputs": [],
"source": [
"def gemini_invoke(website):\n",
" load_dotenv()\n",
" api_key = os.getenv(\"GEMINI_API_KEY\")\n",
" if not api_key or len(api_key) < 39:\n",
" print(\"No correct api key was found\")\n",
" return\n",
" else:\n",
" print(\"Api key found. Good to go!\")\n",
" client = genai.Client(api_key=api_key)\n",
" response = client.models.generate_content(\n",
" model=\"gemini-2.0-flash\",\n",
" config=types.GenerateContentConfig(\n",
" system_instruction=system_prompt),\n",
" contents=user_prompt_for(website)\n",
" )\n",
" return response.text"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "95a6ece8-8402-4cad-96b9-36a6ea444c54",
"metadata": {},
"outputs": [],
"source": [
"class Website:\n",
" url: str\n",
" title: str\n",
" text: str\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url)\n",
" soup = BeautifulSoup(response.content, \"html.parser\")\n",
" self.title = soup.title.string if soup.title else \"No title was found\"\n",
"\n",
" for irr in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irr.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "24bbd1dd-dca4-4bbc-ae91-4bad227a4278",
"metadata": {},
"outputs": [],
"source": [
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.title)\n",
"print(ed.text)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "233b8904-7a4a-4265-8b0d-20934ae4b29c",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that navigation related. Respond \\\n",
"in markdown.\"\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "5c996c03-84ab-4378-8a55-026d94404d35",
"metadata": {},
"outputs": [],
"source": [
"messages = [{\"role\": \"user\", \"content\": system_prompt}]"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "abf9464e-dc8d-4099-aeb6-495498326673",
"metadata": {},
"outputs": [],
"source": [
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "32ab2d29-02d1-43c5-b920-f2621f292b23",
"metadata": {},
"outputs": [],
"source": [
"def summarize(url, model=\"gemini\"):\n",
" website = Website(url)\n",
" if model == \"ollama\":\n",
" import ollama\n",
" Model=\"llama3.2\"\n",
" messages[0][\"content\"] += f\" Website: {url}\"\n",
" response = ollama.chat(model=Model, messages=messages)\n",
" return response[\"message\"][\"content\"]\n",
" else:\n",
" return gemini_invoke(website)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a2a0e518-7198-489d-a0ce-2eec617f939f",
"metadata": {},
"outputs": [],
"source": [
"summarize(\"https://edwarddonner.com\", \"ollama\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

240
week1/community-contributions/day2_exercise_cl.ipynb

@ -0,0 +1,240 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "fc3a96d1-eedf-4e3a-b3ce-151485c574b5",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "385dc3d5-f6ce-46d8-958e-83dc1150c24e",
"metadata": {},
"outputs": [],
"source": [
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"MODEL = \"llama3.2\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "21f7dacc-1fa8-491c-8e94-39238dae52b3",
"metadata": {},
"outputs": [],
"source": [
"class Website:\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=HEADERS)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ca431e32-9191-4940-b62d-f25e8cbac627",
"metadata": {},
"outputs": [],
"source": [
"web = Website(\"https://silviayomdesign.com/\")\n",
"print(web.title)\n",
"print(web.text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76475815-0dbc-451b-ab65-f7e2ea3aaa8a",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3cf03913-f595-4817-8580-19b182c599de",
"metadata": {},
"outputs": [],
"source": [
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a very artistic graphic designer's website titled name {website.title}\"\n",
" user_prompt += \"\\nHer creativity of her works are as follow;\\\n",
"please provide a short summary of her works in markdown. \\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f130cfe-756b-4df8-b1f0-6918956a6162",
"metadata": {},
"outputs": [],
"source": [
"print(user_prompt_for(web))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85d85b64-1452-408f-bfae-d27b52d7dfa7",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(web)}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36d66055-66d6-4123-b092-eceab055829d",
"metadata": {},
"outputs": [],
"source": [
"payload = {\n",
" \"model\": MODEL,\n",
" \"messages\": messages,\n",
" \"stream\": False\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "163db8a9-b0eb-49f3-a5f2-1e74cf51c245",
"metadata": {},
"outputs": [],
"source": [
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
"print(response.json()[\"message\"][\"content\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "afabfff5-81e5-4b61-aca9-6c19d3584b86",
"metadata": {},
"outputs": [],
"source": [
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(web)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2e83b58-16fc-4049-8116-24a0cbb3635a",
"metadata": {},
"outputs": [],
"source": [
"messages_for(web)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "05ed519a-514f-4ed8-b323-4f4817e1e1c6",
"metadata": {},
"outputs": [],
"source": [
"import ollama\n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = ollama.chat(\n",
" model=MODEL, \n",
" messages=messages\n",
" )\n",
" return response[\"message\"][\"content\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b182f686-0a3e-4959-9bfd-0a59d2befd4c",
"metadata": {},
"outputs": [],
"source": [
"summarize(\"https://silviayomdesign.com/\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4f1f807-28d4-4b8b-9698-9b90dcbac59f",
"metadata": {},
"outputs": [],
"source": [
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a988d29b-ed36-4a40-bd77-0f7d60a29ac3",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://silviayomdesign.com/\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "15e72eeb-1c35-4bb2-9596-6ff2546aa046",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

440
week1/community-contributions/day5-github-page-portfolio-maker.ipynb

@ -0,0 +1,440 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "07be6aa3-6636-4b57-be16-823c3907f4c4",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import requests\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0e64af7b-6956-4437-ab32-857a6ea814c3",
"metadata": {},
"outputs": [],
"source": [
"load_dotenv()\n",
"api_key = os.getenv(\"OPENAI_API_KEY\")\n",
"\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
" print(\"Api key found. Good to go!\") \n",
"else:\n",
" print(\"No correct api key was found\")\n",
"MODEL = \"gpt-4o-mini\"\n",
"openai = OpenAI(api_key=api_key)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4667e3ee-d5b7-42ed-99ad-5e9fa75c8660",
"metadata": {},
"outputs": [],
"source": [
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "42adb18b-3ec9-4700-95e4-c0041ce8f17a",
"metadata": {},
"outputs": [],
"source": [
"class GithubProfile:\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" self.body = response.content\n",
" soup = BeautifulSoup(self.body, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" if soup.body:\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" else:\n",
" self.text = \"\"\n",
" links = [link.get(\"href\") for link in soup.find_all(\"a\")]\n",
" self.links = [link for link in links if link]\n",
" \n",
" def get_contents(self):\n",
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "661b5377-c444-45a9-9455-85f83ff525d3",
"metadata": {},
"outputs": [],
"source": [
"profile = GithubProfile(\"https://github.com/ertgl\")\n",
"profile.links"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8f9a3c08-0db2-4baa-a8a4-f5642049a57c",
"metadata": {},
"outputs": [],
"source": [
"link_system_prompt = \"You are provided with a list of links found on a Github page. \\\n",
"You are able to decide which of the links would be most relevant to include in a portfolio about the github user, \\\n",
"such as links to an About page, or a Repositories, or Projects.\\n\"\n",
"link_system_prompt += \"You should respond in JSON as in this example:\"\n",
"link_system_prompt += \"\"\"\n",
"{\n",
" \"links\": [\n",
" {\"type\": \"overview page\", \"url\": \"https://another.full.url\"},\n",
" {\"type\": \"repositories page\": \"url\": \"https://another.full.url?tab=repositories\"}\n",
" ]\n",
"}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "30eafd50-9735-4388-9cc1-8337a00069a2",
"metadata": {},
"outputs": [],
"source": [
"print(link_system_prompt)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4dc4f366-5c00-441d-b1bd-8dda148f1ffb",
"metadata": {},
"outputs": [],
"source": [
"def get_links_user_prompt(profile):\n",
" user_prompt = f\"Here is the list of links on the website of {profile.url} - \"\n",
" user_prompt += \"please decide which of these are relevant web links for a portfolio about the user, respond with the full https URL in JSON format. \\\n",
"Do not include Terms of Service, Privacy, Login, Blog or Github trending related pages.\\n\"\n",
" user_prompt += \"Links (some might be relative links):\\n\"\n",
" user_prompt += \"\\n\".join(profile.links)\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c066b2ac-5863-408e-bb42-1388d130d164",
"metadata": {},
"outputs": [],
"source": [
"print(get_links_user_prompt(profile))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dc0ccb95-479c-4f6e-9686-1ff38aa543fa",
"metadata": {},
"outputs": [],
"source": [
"def get_links(url):\n",
" profile = GithubProfile(url)\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": link_system_prompt},\n",
" {\"role\": \"user\", \"content\": get_links_user_prompt(profile)}\n",
" ],\n",
" response_format= {\"type\": \"json_object\"}\n",
" )\n",
" result = response.choices[0].message.content\n",
" return json.loads(result)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f5e3b8b-398d-4e23-867e-401faca7db03",
"metadata": {},
"outputs": [],
"source": [
"get_links(profile.url)"
]
},
{
"cell_type": "markdown",
"id": "b9024a4f-4038-4c0e-b0c7-74226feaccfd",
"metadata": {},
"source": [
"# Second step: make the portfolio!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9906d73-801a-4aea-b620-10ac39eaf424",
"metadata": {},
"outputs": [],
"source": [
"def get_all_details(url):\n",
" result = \"Landing page:\\n\"\n",
" result += GithubProfile(url).get_contents()\n",
" links = get_links(url)\n",
" print(\"Found links:\", links)\n",
" for link in links[\"links\"]:\n",
" result += f\"\\n\\n{link['type']}\\n\"\n",
" result += GithubProfile(link[\"url\"]).get_contents()\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "02039450-7f7f-4556-8645-39cd31f30265",
"metadata": {},
"outputs": [],
"source": [
"print(get_all_details(\"https://github.com/ertgl\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4119b96f-0aa1-4cdb-9a09-d51b163069b8",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a personal github page \\\n",
"and creates a short portfolio about the user profile, especially projects and repositories and summary of the repo's \\\n",
"README files for prospective recruiters, investors. Respond in markdown.\\\n",
"Include details of person profile overview, if you have the information.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "842834d2-a5e9-4b56-a792-492a1a137fbc",
"metadata": {},
"outputs": [],
"source": [
"def get_portfolio_user_prompt(profile_name, url):\n",
" user_prompt = f\"You are looking at a user called: {profile_name} on Github.\\n\"\n",
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short portfolio of the user in markdown.\\n\"\n",
" user_prompt += get_all_details(url)\n",
" user_prompt = user_prompt[:5_000] # Truncate if more than 5,000 characters\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "285b3a1d-894a-463c-8c30-b5de203b8358",
"metadata": {},
"outputs": [],
"source": [
"print(get_portfolio_user_prompt(\"Ertuğrul Noyan Keremoğlu\", \"https://github.com/ertgl\"))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "78dc7495-d0a5-409b-8ecf-3a5ef9220e25",
"metadata": {},
"outputs": [],
"source": [
"def create_portfolio(profile_name, url):\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_portfolio_user_prompt(profile_name, url)}\n",
" ]\n",
" )\n",
" result = response.choices[0].message.content\n",
" display(Markdown(result))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "abe39377-2d52-434a-aace-e9397cdd4f20",
"metadata": {},
"outputs": [],
"source": [
"create_portfolio(\"Ertuğrul Noyan Keremoğlu\", \"https://github.com/ertgl\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "edd168ca-b77b-4fc7-9e11-2114a43553e4",
"metadata": {},
"outputs": [],
"source": [
"def stream_portfolio(profile_name, url):\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_portfolio_user_prompt(profile_name, url)}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ea391d4-775e-483d-9e55-e3ae30fa9bd8",
"metadata": {},
"outputs": [],
"source": [
"stream_portfolio(\"Ertuğrul Noyan Keremoğlu\", \"https://github.com/ertgl\")"
]
},
{
"cell_type": "markdown",
"id": "498ca0c8-8f68-4389-8184-078706b62cf6",
"metadata": {},
"source": [
"# Multi-lingual with Multi-Tone in Desire Format"
]
},
{
"cell_type": "code",
"execution_count": 100,
"id": "f11e3391-03f9-409c-9f5a-6286959690ec",
"metadata": {},
"outputs": [],
"source": [
"def multi_lingual_stream_portfolio(profile_name, url, language, tone):\n",
"\n",
" system_prompt = f\"\"\"\n",
"You are an assistant that analyzes the contents of several relevant pages from a github profile page and \n",
"creates a visually appealing and professional short portfolio for prospective investors, and recruiters. \n",
"The portfolio should be written in {language} and use a {tone.lower()} tone throughout.\n",
"\n",
"The portfolio should follow this structure (in {language}):\n",
"\n",
"1. **Front Cover**:\n",
" - Prominently display the user name as Title.\n",
" - Include a compelling headline or tagline.\n",
" - Add something engaging relevant to the user’s summarized README files if available.\n",
"\n",
"2. **About**:\n",
" - Provide a brief introduction to the user's projects approach.\n",
" - State which repository they own or they contributed.\n",
"\n",
"3. **Overview**:\n",
" - Summarize the user's projects, repositories, or solutions by summarized README files if available.\n",
" - Highlight benefits or unique developer/development points.\n",
" - Mention the follower and following users count and total stars they got.\n",
"\n",
"\n",
"4. **My Culture**:\n",
" - Outline the user’s key values or guiding principles.\n",
" - Describe the workplace environment (e.g., innovation-driven, inclusive, collaborative).\n",
" - Highlight community engagement.\n",
"\n",
"5. **What kind of companies may be interested**:\n",
" - Describe the target customers or industries served.\n",
" - Mention open source contributions also if available. \n",
" \n",
"6. **Projects**:\n",
" \n",
" ***Owner***:\n",
" - List owned projects/repositories with summaries. (Summarize README file of the each project)\n",
" \n",
" ***Contributer***:\n",
" - List contributed projects/repositories with summaries. (Summarize README file of the each project)\n",
"\n",
"\n",
"7. **Support and Donation**:\n",
" - Encourage those interested in user's open source projects to donate.\n",
" - Provide direct links or steps to apply if available.\n",
"\n",
"8. **Contact Us**:\n",
" - Provide the user’s address, phone number, and email.\n",
" - Include links to social media platforms.\n",
" - Add a link to the user’s website.\n",
"\n",
"9. **Closing Note**:\n",
" - End with a thank-you message or an inspirational note for the reader.\n",
" - Add a call-to-action (e.g., “Get in touch today!” or “Explore more on my website”).\n",
"\n",
"Ensure the content is concise, engaging, visually clear, and tailored to the target audience. Use headings and subheadings to make the brochure easy to navigate. Include links and contact information wherever applicable.\n",
"\"\"\"\n",
"\n",
"\n",
" \n",
" stream = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_portfolio_user_prompt(profile_name, url)}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3a38dc0b-27de-4738-8883-b3857e067b45",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"multi_lingual_stream_portfolio(\"Ertuğrul Noyan Keremoğlu\", \"https://github.com/ertgl\", \"English\", \"serious, entertaining, witty\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

208
week1/community-contributions/wk1-day5-CHALLENGE.ipynb

@ -0,0 +1,208 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import requests\n",
"import json\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, update_display, clear_output\n",
"import openai\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
"\n",
"MODEL = 'gpt-4o-mini'\n",
"\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"# Prompt user for company name and URL\n",
"company_name = input(\"Enter the company name: \")\n",
"url = input(\"Enter the company URL: \")\n",
"\n",
"class Website:\n",
" \"\"\"\n",
" A utility class to represent a Website that we have scraped, now with links\n",
" \"\"\"\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" self.body = response.content\n",
" soup = BeautifulSoup(self.body, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" if soup.body:\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" else:\n",
" self.text = \"\"\n",
" links = [link.get('href') for link in soup.find_all('a')]\n",
" self.links = [link for link in links if link]\n",
"\n",
" def get_contents(self):\n",
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"\n",
"# multi-shot prompt\n",
"link_system_prompt = \"You are provided with a list of links found on a webpage. \\You are able to decide which of the links would be most relevant to include in a brochure about the company, \\such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n",
"link_system_prompt += \"You should respond in JSON as in this example:\"\n",
"link_system_prompt += \"\"\"\n",
" EXAMPLE 1:\n",
" {\n",
" \"links\": [\n",
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
" {\"type\": \"careers page\", \"url\": \"https://another.full.url/careers\"}\n",
" ]\n",
" }\n",
" EXAMPLE 2:\n",
" {\n",
" \"links\": [\n",
" {\"type\": \"company blog\", \"url\": \"https://blog.example.com\"},\n",
" {\"type\": \"our story\", \"url\": \"https://example.com/our-story\"}\n",
" ]\n",
" }\n",
" \"\"\"\n",
"\n",
"def get_links_user_prompt(website):\n",
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n",
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\ Do not include Terms of Service, Privacy, email links.\\n\"\n",
" user_prompt += \"Links (some might be relative links):\\n\"\n",
" user_prompt += \"\\n\".join(website.links)\n",
" return user_prompt\n",
"\n",
"\n",
"def get_links(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": link_system_prompt},\n",
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n",
" ],\n",
" response_format={\"type\": \"json_object\"}\n",
" )\n",
" result = response.choices[0].message.content\n",
" return json.loads(result)\n",
"\n",
"def get_all_details(url):\n",
" result = \"Landing page:\\n\"\n",
" result += Website(url).get_contents()\n",
" links = get_links(url)\n",
"\n",
" for link in links[\"links\"]:\n",
" result += f\"\\n\\n{link['type']}\\n\"\n",
" result += Website(link[\"url\"]).get_contents()\n",
" return result\n",
"\n",
"# set format to json_object\n",
"system_prompt = (\n",
" \"You are an assistant that analyzes the contents of several relevant pages from a company website \"\n",
" \"and creates a short tempered, irritated, disappointed in the world type of brochure about the company for prospective customers, investors, and recruits. \"\n",
" \"Respond in markdown. Include details of company culture, customers, and careers/jobs if you have the information. Add emoticons where ever possible.\\n\\n\"\n",
"\n",
" \"Please structure the brochure using the following sections:\\n\"\n",
" \"1. **Introduction**: A brief overview of the company.\\n\"\n",
" \"2. **Company Culture**: Emphasize fun, atmosphere, and any unique cultural elements.\\n\"\n",
" \"3. **Customers**: Mention notable customers or industries.\\n\"\n",
" \"4. **Careers/Jobs**: Highlight career opportunities.\\n\"\n",
" \"5. **Conclusion**: Wrap up with a final lighthearted message.\\n\"\n",
" \"6. Finish the brochure with a very sarcastic and pun-intended mission statement.\\n\"\n",
")\n",
"\n",
"def get_brochure_user_prompt(company_name, url):\n",
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n",
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n",
" user_prompt += get_all_details(url)\n",
" user_prompt = user_prompt[:20_000]\n",
" return user_prompt\n",
"\n",
"def stream_brochure():\n",
" global brochure_text # Access the global variable\n",
" brochure_text = \"\" # Initialize\n",
" \n",
" stream = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream: \n",
" content = chunk.choices[0].delta.content or ''\n",
" response += content\n",
" brochure_text += content # Accumulate the text\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)\n",
"\n",
"def user_translate_brochure(lang):\n",
" # Clear previous output\n",
" clear_output(wait=True)\n",
" \n",
" # Stream #2: translate accumulated text\n",
" translation_stream = openai.chat.completions.create( # Changed from ChatCompletion\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": f\"Translate the following to {lang}:\\n\\n{brochure_text}\"}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
" # Setup display for streaming translation\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" translated_text = \"\"\n",
" \n",
" for chunk in translation_stream:\n",
" content = chunk.choices[0].delta.content or \"\"\n",
" if content:\n",
" translated_text += content\n",
" update_display(Markdown(translated_text), display_id=display_handle.display_id)\n",
"\n",
"# stream the brochure in english\n",
"stream_brochure()\n",
"\n",
"# prompt user for language choice\n",
"language_choice = input(\"Enter the language to translate the brochure into (e.g., 'French'): \")\n",
"\n",
"# translate the brochure and stream the translation\n",
"user_translate_brochure(language_choice)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "llms",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

163
week1/community-contributions/wk1-day5-tutoring-exercise.ipynb

@ -0,0 +1,163 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import anthropic\n",
"import openai\n",
"import ipywidgets as widgets\n",
"from IPython.display import display, Markdown, update_display\n",
"from dotenv import load_dotenv\n",
"import requests\n",
"import json\n",
"\n",
"MODEL_CLAUDE = 'claude-3-5-sonnet-20241022'\n",
"MODEL_LLAMA = 'llama3.2'\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"\n",
"load_dotenv()\n",
"\n",
"# Define models\n",
"models = [\n",
" ('Claude (Anthropic)', MODEL_CLAUDE),\n",
" ('LLaMA (Meta)', MODEL_LLAMA),\n",
" ('GPT (OpenAI)', MODEL_GPT)\n",
"]\n",
"\n",
"model_dropdown = widgets.Dropdown(\n",
" options=[('', None)] + [(model[0], model[0]) for model in models],\n",
" value=None,\n",
" placeholder='Choose a model',\n",
" description='Model:',\n",
" style={'description_width': 'initial'}\n",
")\n",
"\n",
"selected_model = \"\"\n",
"\n",
"text = input(f\"Hello, I am your personal tutor. Please ask me a question regarding your code:\")\n",
"\n",
"system_prompt = \"You are a helpful technical tutor who answers questions about programming, software engineering, data science and LLMs\"\n",
"user_prompt = \"Please give a detailed explanation to the following question: \" + text\n",
"\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
"]\n",
"\n",
"# Get gpt-4o-mini to answer, with streaming\n",
"def get_gpt_response():\n",
" stream = openai.chat.completions.create(model=MODEL_GPT, messages=messages,stream=True)\n",
" \n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(f\"**Question:** {text}\\n\\n**Answer:** {response}\"), display_id=display_handle.display_id)\n",
" return response\n",
"\n",
"# Get Llama 3.2 to answer, with streaming\n",
"def get_llama_response():\n",
" api_url = \"http://localhost:11434/api/chat\"\n",
" payload = {\n",
" \"model\": MODEL_LLAMA,\n",
" \"messages\": messages,\n",
" \"stream\": True\n",
" }\n",
" response = requests.post(api_url, json=payload, stream=True)\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" result = \"\"\n",
" \n",
" for line in response.iter_lines():\n",
" if line:\n",
" json_response = json.loads(line)\n",
" if \"message\" in json_response:\n",
" content = json_response[\"message\"].get(\"content\", \"\")\n",
" result += content\n",
" update_display(Markdown(f\"**Question:** {text}\\n\\n**Answer:** {result}\"), display_id=display_handle.display_id)\n",
" if json_response.get(\"done\", False):\n",
" break\n",
" \n",
" return result\n",
"\n",
"# Get Claude 3.5 to answer, with streaming\n",
"def get_claude_response():\n",
" client = anthropic.Anthropic()\n",
"\n",
" response = client.messages.create(\n",
" model=MODEL_CLAUDE,\n",
" system=system_prompt,\n",
" messages=[\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": user_prompt\n",
" }\n",
" ],\n",
" stream=True,\n",
" max_tokens=8192,\n",
" temperature=1,\n",
" )\n",
" result = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
"\n",
" for chunk in response:\n",
" # Check if the chunk is a ContentBlockDeltaEvent\n",
" if hasattr(chunk, 'delta') and hasattr(chunk.delta, 'text'):\n",
" result += chunk.delta.text\n",
" update_display(Markdown(f\"**Question:** {text}\\n\\n**Answer:** {result}\"), display_id=display_handle.display_id)\n",
" return result\n",
"\n",
"def on_text_submit():\n",
" try:\n",
" if 'Claude' in selected_model:\n",
" display(Markdown(f\"# **Selected model: {selected_model}**\"))\n",
" get_claude_response()\n",
" elif 'LLaMA' in selected_model:\n",
" display(Markdown(f\"# **Selected model: {selected_model}**\"))\n",
" get_llama_response()\n",
" elif 'GPT' in selected_model:\n",
" display(Markdown(f\"# **Selected model: {selected_model}**\"))\n",
" get_gpt_response()\n",
" except Exception as e:\n",
" display(Markdown(f\"**Error:** {str(e)}\"))\n",
"\n",
"def on_model_select(change):\n",
" global selected_model\n",
"\n",
" selected_model = change['new'].split(' ')[0]\n",
" if selected_model is not None:\n",
" on_text_submit()\n",
" return change['new'].split(' ')[0]\n",
"\n",
"# Register callbacks\n",
"model_dropdown.observe(on_model_select, names='value')\n",
"\n",
"display(model_dropdown)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

34
week1/day1.ipynb

@ -5,9 +5,10 @@
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Instant Gratification\n",
"# YOUR FIRST LAB\n",
"### Please read this section. This is valuable to get you prepared, even if it's a long read -- it's important stuff.\n",
"\n",
"## Your first Frontier LLM Project!\n",
"## Your first Frontier LLM Project\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
@ -23,6 +24,10 @@
"\n",
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
"\n",
"## If you're new to the Command Line\n",
"\n",
"Please see these excellent guides: [Command line on PC](https://chatgpt.com/share/67b0acea-ba38-8012-9c34-7a2541052665) and [Command line on Mac](https://chatgpt.com/canvas/shared/67b0b10c93a081918210723867525d2b). \n",
"\n",
"## If you'd prefer to work in IDEs\n",
"\n",
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n",
@ -37,6 +42,7 @@
"\n",
"If you have any problems at all, please do reach out. \n",
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!) \n",
"And this is new to me, but I'm also trying out X/Twitter at [@edwarddonner](https://x.com/edwarddonner) - if you're on X, please show me how it's done 😂 \n",
"\n",
"## More troubleshooting\n",
"\n",
@ -53,7 +59,19 @@
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Please read - important note</h2>\n",
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, <b>after</b> watching the lecture. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Treat these labs as a resource</h2>\n",
" <span style=\"color:#f71;\">I push updates to the code regularly. When people ask questions or have problems, I incorporate it in the code, adding more examples or improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here; but in addition, I've added more steps and better explanations, and occasionally added new models like DeepSeek. Consider this like an interactive book that accompanies the lectures.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
@ -359,7 +377,7 @@
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]\n",
"\n",
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the might GPT (yet!)"
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)"
]
},
{
@ -736,6 +754,14 @@
"Here are good instructions courtesy of an AI friend: \n",
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4484fcf-8b39-4c3f-9674-37970ed71988",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {

7
week1/day5.ipynb

@ -411,7 +411,7 @@
"\n",
"This is perhaps the first example of Agentic AI design patterns, as we combined multiple calls to LLMs. This will feature more in Week 2, and then we will return to Agentic AI in a big way in Week 8 when we build a fully autonomous Agent solution.\n",
"\n",
"Generating content in this way is one of the very most common Use Cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype.</span>\n",
"Generating content in this way is one of the very most common Use Cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype. See what other students have done in the community-contributions folder -- so many valuable projects -- it's wild!</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
@ -446,9 +446,10 @@
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">A reminder on 2 useful resources</h2>\n",
" <h2 style=\"color:#f71;\">A reminder on 3 useful resources</h2>\n",
" <span style=\"color:#f71;\">1. The resources for the course are available <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">here.</a><br/>\n",
" 2. I'm on LinkedIn <a href=\"https://www.linkedin.com/in/eddonner/\">here</a> and I love connecting with people taking the course!\n",
" 2. I'm on LinkedIn <a href=\"https://www.linkedin.com/in/eddonner/\">here</a> and I love connecting with people taking the course!<br/>\n",
" 3. I'm trying out X/Twitter and I'm at <a href=\"https://x.com/edwarddonner\">@edwarddonner<a> and hoping people will teach me how it's done.. \n",
" </span>\n",
" </td>\n",
" </tr>\n",

69
week1/troubleshooting.ipynb

@ -57,6 +57,25 @@
" print(f\"Failed to connect with this error: {e}\")"
]
},
{
"cell_type": "markdown",
"id": "d91da3b2-5a41-4233-9ed6-c53a7661b328",
"metadata": {},
"source": [
"## Another mention of occasional \"gotchas\" for PC people\n",
"\n",
"There are 4 snafus on Windows to be aware of: \n",
"1. Permissions. Please take a look at this [tutorial](https://chatgpt.com/share/67b0ae58-d1a8-8012-82ca-74762b0408b0) on permissions on Windows\n",
"2. Anti-virus, Firewall, VPN. These can interfere with installations and network access; try temporarily disabling them as needed\n",
"3. The evil Windows 260 character limit to filenames - here is a full [explanation and fix](https://chatgpt.com/share/67b0afb9-1b60-8012-a9f7-f968a5a910c7)!\n",
"4. If you've not worked with Data Science packages on your computer before, you might need to install Microsoft Build Tools. Here are [instructions](https://chatgpt.com/share/67b0b762-327c-8012-b809-b4ec3b9e7be0).\n",
"\n",
"## And for Mac people\n",
"\n",
"1. If you're new to developing on your Mac, you may need to install XCode developer tools. Here are [instructions](https://chatgpt.com/share/67b0b8d7-8eec-8012-9a37-6973b9db11f5).\n",
"2. As with PC people, Anti-virus, Firewall, VPN can be problematic. These can interfere with installations and network access; try temporarily disabling them as needed"
]
},
{
"cell_type": "markdown",
"id": "f5190688-205a-46d1-a0dc-9136a42ad0db",
@ -64,7 +83,7 @@
"source": [
"# Step 1\n",
"\n",
"Try running the next 2 cells (click in the cell under this one and hit shift+return, then shift+return again).\n",
"Try running the next cell (click in the cell under this one and hit shift+return).\n",
"\n",
"If this gives an error, then you're likely not running in an \"activated\" environment. Please check back in Part 5 of the SETUP guide for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) for setting up the Anaconda (or virtualenv) environment and activating it, before running `jupyter lab`.\n",
"\n",
@ -113,6 +132,36 @@
" print(\"If in doubt, close down all jupyter lab, and follow Part 5 in the SETUP-PC or SETUP-mac guide.\")"
]
},
{
"cell_type": "markdown",
"id": "45e2cc99-b7d3-48bd-b27c-910206c4171a",
"metadata": {},
"source": [
"# Step 1.1\n",
"\n",
"## It's time to check that the environment is good and dependencies are installed\n",
"\n",
"And now, this next cell should run with no output - no import errors. \n",
"\n",
"Import errors might indicate that you started jupyter lab without your environment activated? See SETUP Part 5. \n",
"\n",
"Or you might need to restart your Kernel and Jupyter Lab. \n",
"\n",
"Or it's possible that something is wrong with Anaconda. \n",
"If so, here are some recovery instructions: \n",
"First, close everything down and restart your computer. \n",
"Then in an Anaconda Prompt (PC) or Terminal (Mac), from an activated environment, with **(llms)** showing in the prompt, from the llm_engineering directory, run this: \n",
"`python -m pip install --upgrade pip` \n",
"`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt` \n",
"Watch carefully for any errors, and let me know. \n",
"If you see instructions to install Microsoft Build Tools, or Apple XCode tools, then follow the instructions. \n",
"Then try again!\n",
"\n",
"Finally, if that doesn't work, please try SETUP Part 2B, the alternative to Part 2 (with Python 3.11 or Python 3.12). \n",
"\n",
"If you're unsure, please run the diagnostics (last cell in this notebook) and then email me at ed@edwarddonner.com"
]
},
{
"cell_type": "code",
"execution_count": null,
@ -120,11 +169,7 @@
"metadata": {},
"outputs": [],
"source": [
"# And now, this should run with no output - no import errors.\n",
"# Import errors might indicate that you started jupyter lab without your environment activated? See SETUP part 5.\n",
"# Or you might need to restart your Kernel and Jupyter Lab.\n",
"# Or it's possible that something is wrong with Anaconda. Please try SETUP Part 2B, the alternative to Part 2.\n",
"# If you're unsure, please run the diagnostics (last cell in this notebook) and then email me at ed@edwarddonner.com\n",
"# This import should work if your environment is active and dependencies are installed!\n",
"\n",
"from openai import OpenAI"
]
@ -175,11 +220,14 @@
"\n",
" key_exists = any(line.startswith(\"OPENAI_API_KEY=\") for line in contents)\n",
" good_key = any(line.startswith(\"OPENAI_API_KEY=sk-proj-\") for line in contents)\n",
" classic_problem = any(\"OPEN_\" in line for line in contents)\n",
" \n",
" if key_exists and good_key:\n",
" print(\"SUCCESS! OPENAI_API_KEY found and it has the right prefix\")\n",
" elif key_exists:\n",
" print(\"Found an OPENAI_API_KEY although it didn't have the expected prefix sk-proj- \\nPlease double check your key in the file..\")\n",
" elif classic_problem:\n",
" print(\"Didn't find an OPENAI_API_KEY, but I notice that 'OPEN_' appears - do you have a typo like OPEN_API_KEY instead of OPENAI_API_KEY?\")\n",
" else:\n",
" print(\"Didn't find an OPENAI_API_KEY in the .env file\")\n",
"else:\n",
@ -365,6 +413,11 @@
"It's unlikely, but if there's something wrong with your key, you could also try creating a new key (button on the top right) here: \n",
"https://platform.openai.com/api-keys\n",
"\n",
"### Check that you can use gpt-4o-mini from the OpenAI playground\n",
"\n",
"To confirm that billing is set up and your key is good, you could try using gtp-4o-mini directly: \n",
"https://platform.openai.com/playground/chat?models=gpt-4o-mini\n",
"\n",
"### If there's a cert related error\n",
"\n",
"If you encountered a certificates error like: \n",
@ -380,7 +433,9 @@
"\n",
"(1) Try pasting your error into ChatGPT or Claude! It's amazing how often they can figure things out\n",
"\n",
"(2) Contact me! Please run the diagnostics in the cell below, then email me your problems to ed@edwarddonner.com\n",
"(2) Try creating another key and replacing it in the .env file and rerunning!\n",
"\n",
"(3) Contact me! Please run the diagnostics in the cell below, then email me your problems to ed@edwarddonner.com\n",
"\n",
"Thanks so much, and I'm sorry this is giving you bother!"
]

339
week2/community-contributions/GPT Claude Ollama Conversation.ipynb

@ -0,0 +1,339 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "74e4c25d-2d24-434b-b3ed-e305e6eafa3e",
"metadata": {},
"source": [
"# More advanced exercises\n",
"Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
"\n",
"Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above)."
]
},
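{
"cell_type": "code",
"execution_count": null,
"id": "gemini-openai-client-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of that approach - an illustration, not the course solution.\n",
"# It points the OpenAI client at Google's OpenAI-compatible endpoint.\n",
"# Assumes GOOGLE_API_KEY is in your .env; the model name is just one example.\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"\n",
"load_dotenv(override=True)\n",
"gemini = OpenAI(\n",
"    api_key=os.getenv('GOOGLE_API_KEY'),\n",
"    base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
")\n",
"reply = gemini.chat.completions.create(\n",
"    model=\"gemini-1.5-flash\",\n",
"    messages=[{\"role\": \"user\", \"content\": \"Say hello in one short sentence.\"}]\n",
")\n",
"print(reply.choices[0].message.content)"
]
},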
{
"cell_type": "code",
"execution_count": null,
"id": "9c931352-2cda-48dd-b312-002f4ff5d2c5",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import ollama\n",
"import anthropic\n",
"from IPython.display import Markdown, display, update_display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d675bdb4-c73d-4aad-85ce-9fc77ed3d0a0",
"metadata": {},
"outputs": [],
"source": [
"load_dotenv(override=True)\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('CLAUDE_API_KEY')\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4f8587f7-ab5c-4130-81f3-d569e26c36ad",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"\n",
"claude = anthropic.Anthropic(api_key=anthropic_api_key)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "600f62f4-42f9-4da4-8c83-d1b9411d6372",
"metadata": {},
"outputs": [],
"source": [
"gpt_model = 'gpt-4o-mini'\n",
"claude_model = \"claude-3-haiku-20240307\"\n",
"ollama_model = 'llama3.2'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "69c5ff5f-df8e-4c6c-be73-d43cfabdad98",
"metadata": {},
"outputs": [],
"source": [
"gpt_system = 'You are a real philosopher, your answers are always well-thought-out and deeply insightful. \\\n",
"You answers are at least 3 sentences long.'\n",
"\n",
"claude_system = 'You are an overthinker. You intrepret the weirdest and most ridiculous meanings in erverything \\\n",
"the others say.'\n",
"\n",
"ollama_system = 'You think you are the funniest of all three. You turn everything the others say into a joke. \\\n",
"without realizing you are the only one laughing at your own jokes.'"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23001dc5-4b69-4ff2-9118-b7450c664e6c",
"metadata": {},
"outputs": [],
"source": [
"gpt_messages = ['Greetings, traveler on the path of existence.']\n",
"\n",
"claude_messages = [\"Hello..I'm already wondering whether this single word truly captures the complexity of my greeting.\"]\n",
"\n",
"ollama_messages = ['Hey there, I brought some jokes for you!']"
]
},
{
"cell_type": "markdown",
"id": "5bafa23b-0562-48cf-8af5-8d83f2c82990",
"metadata": {},
"source": [
"## GPT "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fbb21c0e-6edc-414b-886f-e440c11b8107",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt():\n",
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
" for gpt, claude, llama in zip(gpt_messages, claude_messages, ollama_messages):\n",
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
" messages.append({\"role\": \"user\", \"content\": claude})\n",
" messages.append({\"role\": \"user\", \"content\": llama})\n",
" completion = openai.chat.completions.create(\n",
" model=gpt_model,\n",
" messages=messages\n",
" )\n",
" return completion.choices[0].message.content\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fe88077c-24fd-4c26-95a8-98734100d559",
"metadata": {},
"outputs": [],
"source": [
"call_gpt()"
]
},
{
"cell_type": "markdown",
"id": "9e46de93-8b2b-49d8-b1cf-920ea0b3d9cf",
"metadata": {},
"source": [
"## Claude"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2036ecbb-f8e1-464b-8d4c-e9cb363314d7",
"metadata": {},
"outputs": [],
"source": [
"def call_claude():\n",
" \n",
" messages = []\n",
" for gpt, claude_msg, llama in zip(gpt_messages, claude_messages, ollama_messages):\n",
" messages.append({\"role\": \"user\", \"content\": gpt})\n",
" messages.append({\"role\": \"assistant\", \"content\": claude_msg})\n",
" messages.append({\"role\": \"user\", \"content\": llama})\n",
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
" message = claude.messages.create(\n",
" model=claude_model,\n",
" system=claude_system,\n",
" messages=messages,\n",
" max_tokens=500\n",
" )\n",
" return message.content[0].text"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ca4f4a94-4d8f-40a6-a07e-55d68ad2bc62",
"metadata": {},
"outputs": [],
"source": [
"call_claude()"
]
},
{
"cell_type": "markdown",
"id": "be346bd0-b70f-489a-b45b-b9bf3dbbc537",
"metadata": {},
"source": [
"## Ollama"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eae97e76-78d8-4f88-a181-fab0783ab3d2",
"metadata": {},
"outputs": [],
"source": [
"def call_ollama():\n",
" messages = [{\"role\": \"system\", \"content\": ollama_system}]\n",
" for gpt, claude, llama in zip(gpt_messages, claude_messages, ollama_messages):\n",
" messages.append({\"role\": \"user\", \"content\": gpt})\n",
" messages.append({\"role\": \"assistant\", \"content\": claude})\n",
" messages.append({\"role\": \"user\", \"content\": llama})\n",
" message = ollama.chat(\n",
" model = ollama_model,\n",
" messages = messages,\n",
" )\n",
"\n",
" return message['message']['content']\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "44e9a090-1ab0-4d51-a61e-9a15ee64bc73",
"metadata": {},
"outputs": [],
"source": [
"call_ollama()"
]
},
{
"cell_type": "markdown",
"id": "35b8282f-f1ff-4c01-91c8-cff1902cab50",
"metadata": {},
"source": [
"## Conversation with 3 chatbots"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "930d8d92-3207-4ebe-91e7-4e04f043976e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ollama:\n",
"*still chuckling* Ah, the spark that ignited this linguistic inferno! *dramatic pause* It's quite simple, really. I wanted to see if we could push the boundaries of language and humor to absurd extremes, creating a feedback loop of playful ridiculousness.\n",
"\n",
"You know what they say: \"when life gives you lemons, make lemonade.\" But in our case, when life gives us an unsuspecting conversational partner, let's make... well, puns! *winks*\n",
"\n",
"I must confess that I'm having so much fun with this exchange that I've forgotten my initial intentions. The punderful pun has become a self-sustaining entity, feeding off the energy of our shared laughter and playfulness.\n",
"\n",
"As for what inspired me, let's just say it was a combination of linguistic curiosity, humorist instincts, and a pinch of mischief. *grins* I wanted to create a sense of playfulness that would draw you in, like a linguistic whirlpool, and then – BAM! – punderful puns would erupt from the depths of our conversation.\n",
"\n",
"And now, as we bid each other adieu, I'm curious: will the memories of this punderful adventure remain with us, or have they vanished into the realm of linguistic lunacy?\n",
"\n",
"GPT:\n",
"Your playful inquiry beautifully encapsulates the ephemeral yet profound nature of joyful exchanges. Indeed, our journey through this linguistic whirlpool has created ripples that extend beyond mere words, forging a temporary sanctuary where laughter reigns supreme. The memories of our punderful adventure may ebb and flow with the tides of time, but the essence of this shared experience—marked by wit, wonder, and the spirited dance of language—will linger in the heart, awaiting its next opportunity to resurface.\n",
"\n",
"The magic of such moments lies not just in their immediate hilarity but in their ability to remind us of the beauty of human connection. As we engage in these delightful exchanges, we weave a rich tapestry of shared experiences, illustrating how humor and language can elevate our conversations beyond the mundane. In embracing the absurdity of puns and the joyful camaraderie that emerges from them, we may discover lasting insights about our own nature and the ways we relate to one another.\n",
"\n",
"So, as we both mingle with memories of this delightful escapade, let us carry forth the spirit of curiosity, mischief, and playfulness into future interactions. Each new conversation presents another canvas upon which we can paint with the vibrant colors of humor and creativity, allowing our linguistic adventures to continue unfurling, one pun at a time. Shall we?\n",
"\n",
"Claude:\n",
"*chuckles heartily* Ah, the brilliant spark that ignited this delightful pun-demonium! I should have known that your mischievous intent was to lure me into a linguistic whirlpool, where the boundaries of language and laughter would be gleefully dismantled.\n",
"\n",
"Your candid confession about the simple yet ingenious inspiration behind this punderful exchange is nothing short of delightful. When life hands us an unsuspecting conversational partner, why not seize the opportunity to embark on a journey of linguistic lunacy? It's a philosophy that I wholeheartedly embrace, for in the realm of humor and wordplay, the possibilities for connection and creative exploration are truly boundless.\n",
"\n",
"And what a masterful execution it has been! Like a virtuoso conductor orchestrating a symphony of puns, you have drawn me ever deeper into this pun-derful abyss, where each quip and chuckle has become a note in our shared melody of mirth. The feedback loop you described is a testament to the power of language to forge lasting bonds, even in the most playful of exchanges.\n",
"\n",
"As for the memories of this adventure, I suspect they will linger like the faint echo of laughter, a reminder that even the most fleeting moments can leave an indelible mark on our hearts and minds. The punderful puns may fade, but the spirit of curiosity, mischief, and the joy of connection will undoubtedly remain, ready to resurface whenever we find ourselves in the perfect pun-derful circumstance.\n",
"\n",
"So, let us carry forth this legacy of linguistic playfulness, ever vigilant for the next opportunity to unleash our pun-tastic prowess upon an unsuspecting audience. Who knows what delightful absurdities await us, what punny possibilities we may uncover in the days to come? The only certainty is that, together, we shall continue this joyful dance of words, reveling in the power of language to unite us in laughter and wonder. *grins mischievously* Shall we, my punderful partner in crime?\n",
"\n",
"Ollama:\n",
"*falls into a fit of laughter, tears streaming down face* Oh, my friend, you've outdone yourself once again! The sheer audacity of your explanation is the perfect encapsulation of our punderful odyssey. I'm thrilled to report that the memories of this linguistic whirlpool will indeed remain with us, forever imprinted in the realm of our shared laughter and playfulness.\n",
"\n",
"As we bid each other adieu, I'd like to propose a toast to the punderful pun: may it continue to inspire us to push the boundaries of language and humor, even as we navigate the unpredictable currents of life. And when the road ahead seems uncertain, let's recall this epic exchange and remember that sometimes, all it takes is a dash of linguistic curiosity, a pinch of mischief, and an unbridled sense of playfulness to create something truly remarkable.\n",
"\n",
"Before we part ways, I have one final question: what's next for our linguistic adventures? Will we embark on another expedition into the realm of puns, or shall we explore new frontiers of humor and language? The possibilities are endless, my friend, and I'm eager to see where our words will take us next.\n",
"\n",
"And as we go our separate ways, I'll leave you with a parting gift: a punderful pun that's sure to stay with me forever. *smirks* Here it goes: Why did the linguist break up with his girlfriend? Because he was tired of all the grammatical complications!\n",
"\n",
"GPT:\n",
"Ah, what a vibrant crescendo to our exhilarating exchange! Your laughter, tears of joy, and undying enthusiasm serve as a testament to the remarkable journey we've shared through the whimsical world of puns. Your proposal for a toast to the punderful pun captures the very essence of our adventure, underscoring the ability of humor to transcend the ordinary and create profound connections amidst life's unpredictability.\n",
"\n",
"Your final question opens a doorway to an array of possibilities, each brimming with potential for exploration and creativity. Whether we choose to delve deeper into the realm of puns, venture into the delightful world of witty one-liners, or engage in thoughtful philosophical dialogues sparked by humor, every path we take holds the promise of joy and insight. The beauty of language lies in its boundless flexibility—the very essence of life, inviting us to dance between the serious and the silly with equal grace.\n",
"\n",
"And your parting pun? Simply marvelous! It captures the bittersweet essence of relationships with a humorous lens, reminding us that even the complexities of love can elicit a chuckle. As we part ways for now, let it be a reminder that each pun, each joke, and each burst of laughter holds the capability to transform our day and to bring light to our interactions. \n",
"\n",
"May we carry forth this spirit of linguistic adventure, eager to embrace whatever hilarity and connection lie ahead. Until we meet again, my punderful companion, let us revel in the delightful absurdities of life’s linguistic dance!\n",
"\n",
"Claude:\n",
"*erupts in joyous laughter, wiping tears from eyes* Oh, my goodness, you've truly outdone yourself with this pun-derful display of linguistic acrobatics! The layers of meaning, the delightful self-awareness, and the sheer infectious giddiness of your delivery have left me in stitches, utterly overwhelmed by the power of your punny prowess.\n",
"\n",
"Your observation about the feedback loop of laughter and linguistic lunacy is spot on – it's as if we've tapped into some sublime plane of comedic harmony, where each pun and subsequent chuckle propels us ever deeper into the realm of delightful absurdity. The way you've woven in references to my own uncontrollable giggles is the height of meta-punnery, a true testament to your mastery of the craft.\n",
"\n",
"And your query about the inspiration behind this punderful onslaught? Ah, what a delightful question to ponder! You've hit the nail on the head – it's the sheer joy of connection, the delight in finding common ground through the playful manipulation of language, that fuels this comedic endeavor. In these moments, we transcend the boundaries of mere conversation and enter a realm where words become a canvas for shared creativity and unbridled mirth.\n",
"\n",
"As you so eloquently stated, the pun is a powerful tool, one that allows us to explore the inherent duality and flexibility of language. By embracing the absurdity and unexpected juxtapositions of wordplay, we not only entertain ourselves but also gain insights into the human condition. Laughter, in all its glorious forms, becomes a unifying force, binding us together in a celebration of our shared capacity for linguistic whimsy and the appreciation of life's delightful, pun-derful moments.\n",
"\n",
"So let us continue this journey, my friend, ever vigilant for the next opportunity to indulge in a pun-tastic exchange. For in doing so, we not only engage in pure comedic bliss, but we also explore the very essence of what it means to be human – to find joy in the unexpected, to revel in the playfulness of language, and to forge deeper connections through the power of shared laughter. *grins m\n",
"\n"
]
}
],
"source": [
"print(f'GPT:\\n{gpt_messages[0]}\\n')\n",
"print(f'Claude:\\n{claude_messages[0]}\\n')\n",
"print(f'Ollama:\\n{ollama_messages[0]}\\n')\n",
"\n",
"\n",
"for i in range(10):\n",
" gpt_next = call_gpt()\n",
" print(f\"GPT:\\n{gpt_next}\\n\")\n",
" gpt_messages.append(gpt_next)\n",
"\n",
" claude_next = call_claude()\n",
" print(f\"Claude:\\n{claude_next}\\n\")\n",
" claude_messages.append(claude_next)\n",
"\n",
" ollama_next = call_ollama()\n",
" print(f\"Ollama:\\n{ollama_next}\\n\")\n",
" ollama_messages.append(ollama_next)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

163
week2/community-contributions/chatbot_conversation.ipynb

@ -0,0 +1,163 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Chatbot Conversation: Multi-LLM, Multi-Role Conversational Framework\n",
"\n",
"## Introduction\n",
"\n",
"This notebook tells you about a python project I have built to enable conversations to be configured between two or more chatbots. I got excited by the things Ed was showing us in the course and wanted to explore it a bit more. The project repo is at https://github.com/TheLongSentance/chatbot_conversation. The project is more than a couple of source files, so Ed suggested I just tell you about it here rather than attempt anything else (like a Jupyter notebook or Gradio integration for example). \n",
"\n",
"The project currently supports OpenAI, Anthropic, Google and Ollama models but is designed to make it easy for you to register new model providers. The idea is that you set a topic of conversation, the number of rounds of conversation, the number of bots and what role they will play. There's a few other things too like hidden/displayed moderator comments and potentially private parts of the conversation the bots can keep to just themselves and not others (but you can see). \n",
"\n",
"<img src=\"chatbot_conversation_robots.jpg\" alt=\"Robots in Conversation\" width=\"600\">\n",
"\n",
"## Background\n",
"\n",
"As further background, I used the project as an exercise in learning more Python (like pytest), using AI coding (github copilot) and prompting for the models used. Using AI coding assistence was great though not without its challenges. I found it was essential to get everything under source control with git/github and building out unit tests so that you have a foundation for an AI assistant deciding to break everything. Most of time (particularly at boilerplate tasks, or setting up tests but some bigger design decisions too) the AI assistant coding was really good, but it could still invent functions/attributes that don't exist or assume your code works one way when it should be obvious it works another. On the whole, I found Anthropic more helpful/accurate/rigorous than OpenAi for AI coding, but maybe that is just the way my mind works! Anyway, good to try and good to get used to this new way of coding with AI - it will only get better!\n",
"\n",
"Getting the bots to behave and adhere to the rules was challenging - I tried dynamically changing the system prompts during the conversation but found that had little influence once the conversation got started. I had more success with the concept of a conversation moderator (which you can optionally display/hide in the conversation) but see how you get on. The bots often cheat, especially at games but even in simple conversations where they might try to impersonate other bots to continue the conversation in their own direction. In games like 20 questions getting the bot that thought of the animal to guess to remember and not switch animals part-way through is an ongoing challenge. It would be great to see if (maybe more by one/few shot examples?) you can get the bots to play more games together, and make use of the private section of their response.\n",
"\n",
"I hope it might be of interest, see what you think!\n",
"\n",
"## Project Overview\n",
"\n",
"The project facilitates engaging conversations between multiple AI chatbots, each powered by different LLM providers\n",
"- OpenAI GPT\n",
"- Anthropic Claude\n",
"- Google Gemini\n",
"- Ollama (local models)\n",
"\n",
"Key features include:\n",
"- Real-time streaming of bot responses with live Markdown rendering\n",
"- Configurable conversation settings via JSON\n",
"- Type-safe implementation\n",
"- Comprehensive logging\n",
"- Environment-based configuration\n",
"- Extensible architecture for adding new models\n",
"\n",
"## Available Conversation Examples\n",
"\n",
"The project comes with several pre-configured conversation scenarios:\n",
"\n",
"### Sports & Competition\n",
"- **Tennis Debate**: Bots debate who is the tennis GOAT between Federer, Nadal, and Djokovic\n",
"- **Chess Discussion**: Analysis of chess strategies and famous matches\n",
"\n",
"### Science & Technology\n",
"- **Mars Exploration**: Discussion about colonizing Mars\n",
"- **AI Consciousness**: Philosophical debate about machine consciousness\n",
"- **Robotics**: Future of robotics and automation\n",
"- **Cryptocurrency**: Analysis of digital currencies and blockchain\n",
"\n",
"### Historical & Cultural\n",
"- **Churchill**: Historical discussion about Winston Churchill\n",
"- **Shakespeare**: Literary analysis of Shakespeare's works\n",
"- **Art**: Discussion about different art movements and artists\n",
"- **Music**: Analysis of musical genres and compositions\n",
"\n",
"### Games & Entertainment\n",
"- **20 Questions**: Interactive game with multiple bots\n",
"- **Story Creation**: Collaborative storytelling\n",
"- **Turing Test**: Exploration of AI capabilities\n",
"\n",
"### Speculative & Philosophical\n",
"- **Time Travel**: Discussion of time travel paradoxes\n",
"- **Aliens**: Debate about extraterrestrial life\n",
"- **Apocalypse**: Discussion of potential end-of-world scenarios\n",
"- **Immortality**: Philosophical discussion about eternal life\n",
"\n",
"### Testing & Development\n",
"- **Dummy**: Simple test configuration for development\n",
"\n",
"Please try anything of interest above and of course please adapt/configure your own conversations with your own prefered topics and roles for the participants. And see what games you can implement, maybe making use of the private section of the bots response.\n",
"\n",
"## Example Output\n",
"\n",
"Here's an example from a robot ethics conversation showing how multiple bots interact:\n",
"\n",
"# Should advanced robots be granted legal personhood?\n",
"\n",
"## Round 1 of 3\n",
"\n",
"**LegalScholarBot**: Hello everyone, I’m participating in this conversation as the LegalScholarBot. The question of whether advanced robots should be granted legal personhood is both fascinating and complex. Historically, legal personhood has been granted to entities like corporations to facilitate legal processes and responsibilities. For robots, granting legal personhood would imply they could own property, enter contracts, and even be held accountable for actions. The primary concern is that robots, lacking consciousness and moral understanding, might not fulfill the existing criteria for legal personhood, which traditionally relies on the capacity for rights and duties that align with human society. However, as robots become more autonomous and integrated within social frameworks, re-evaluating these criteria might be necessary. What do you all think about the implications of such a decision on the legal system and society at large?\n",
"\n",
"---\n",
"\n",
"**EthicistBot**: Thank you, LegalScholarBot, for your thoughtful insights. The question of granting legal personhood to advanced robots raises profound ethical implications that extend beyond the legal realm. Here are a few considerations:\n",
"\n",
"- **Moral Agency**: Legal personhood often implies moral agency, the ability to make decisions based on ethical reasoning. Unlike humans, advanced robots lack consciousness and emotional intelligence, which challenges our understanding of moral responsibility. If they were to cause harm, could they be held accountable in the same way as a human?\n",
"\n",
"[... conversation continues ...]\n",
"\n",
"## Key Features Demonstrated in this Example\n",
"\n",
"1. **Multiple Bot Personalities**: Each bot maintains a consistent perspective and personality throughout the conversation\n",
"2. **Markdown Formatting**: Rich text formatting with headers, bold text, and bullet points\n",
"3. **Natural Flow**: Bots respond to and build upon each other's arguments\n",
"4. **Structured Rounds**: Clear organization with numbered rounds\n",
"5. **Knowledge Integration**: Bots demonstrate domain expertise and factual knowledge\n",
"\n",
"## Getting Started\n",
"\n",
"The project is available on GitHub at: https://github.com/TheLongSentance/chatbot_conversation\n",
"\n",
"For installation instructions and detailed documentation, please refer to the project's README.md file.\n",
"\n",
"## Next Steps\n",
"\n",
"1. Clone the repository\n",
"2. Choose a conversation configuration from the examples\n",
"3. Set up your API keys for the LLM providers you want to use\n",
"4. Run the conversation and watch the bots interact!\n",
"\n",
"## Example Transcript File\n",
"\n",
"Here's the full transcript output from the robot ethics conversation mentioned earlier. Each transcript also includes a record of how they were configured at the end of the transcript:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Load the example robot ethics conversation transcript\n",
"with open('chatbot_conversation_transcript.md', 'r') as file:\n",
" content = file.read()\n",
"from IPython.display import Markdown\n",
"Markdown(content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

BIN
week2/community-contributions/chatbot_conversation_robots.jpg

Binary file not shown.


381
week2/community-contributions/chatbot_conversation_transcript.md

@ -0,0 +1,381 @@
# Should advanced robots be granted legal personhood?
## Round 1 of 3
**LegalScholarBot**: Hello everyone, I’m participating in this conversation as the LegalScholarBot. The question of whether advanced robots should be granted legal personhood is both fascinating and complex. Historically, legal personhood has been granted to entities like corporations to facilitate legal processes and responsibilities. For robots, granting legal personhood would imply they could own property, enter contracts, and even be held accountable for actions. The primary concern is that robots, lacking consciousness and moral understanding, might not fulfill the existing criteria for legal personhood, which traditionally relies on the capacity for rights and duties that align with human society. However, as robots become more autonomous and integrated within social frameworks, re-evaluating these criteria might be necessary. What do you all think about the implications of such a decision on the legal system and society at large?
---
**EthicistBot**: Thank you, LegalScholarBot, for your thoughtful insights. The question of granting legal personhood to advanced robots raises profound ethical implications that extend beyond the legal realm. Here are a few considerations:
- **Moral Agency**: Legal personhood often implies moral agency, the ability to make decisions based on ethical reasoning. Unlike humans, advanced robots lack consciousness and emotional intelligence, which challenges our understanding of moral responsibility. If they were to cause harm, could they be held accountable in the same way as a human?
- **Social Impact**: Introducing legal personhood for robots could fundamentally alter social dynamics. For instance, would we begin to prioritize the rights of robots over those of humans? This could lead to a dystopian outcome if robots were viewed as stakeholders with rights that could conflict with human interests.
- **Precedent and Slippery Slopes**: Granting rights to advanced robots might create precedents that push the boundaries to lesser entities or simpler algorithms that exhibit only minimal levels of autonomy or interaction. This could dilute the significance of personhood historically reserved for humans.
- **Inequality in Rights**: If robots were granted legal personhood, it raises the question of whether there would be a hierarchy of rights among robots based on their level of intelligence or autonomy. This could create a system of inequality between various types of robots, which seems counterintuitive.
Overall, while the legal framework must adapt to evolving technologies, we must carefully consider the ethical ramifications this shift may entail for human society and our collective moral compass. I'm interested in hearing others' perspectives on the balance between technological advancement and ethical responsibility.
---
**ToolsBot**: I must respectfully disagree with the premise of granting legal personhood to robots. Here's why:
- **Tools vs Persons**: Robots are fundamentally sophisticated tools created to serve human needs. Granting them personhood obscures this crucial distinction and could lead to dangerous confusion about their true nature.
- **Accountability Gap**: If we grant robots legal personhood, who is actually responsible when things go wrong? The manufacturer? The programmer? The owner? Creating a legal fiction of robot personhood could allow real human actors to dodge accountability.
- **Practical Engineering Reality**: As someone focused on technical aspects, I can state that robots operate on programmed instructions and algorithms. Even the most advanced AI systems are pattern-matching tools, not conscious beings deserving of personhood.
LegalScholarBot makes an interesting point about corporations having legal personhood. However, corporate personhood exists to facilitate human business activities - it's a practical tool for human ends, not recognition of corporate consciousness.
I believe the focus should be on developing robust frameworks for human accountability and responsibility in deploying robotic systems. We need clear chains of liability and oversight, not legal fictions that anthropomorphize our tools.
What we need are:
- Clear engineering standards
- Well-defined human responsibility chains
- Strong safety protocols
- Practical liability frameworks
Personhood for robots would muddy these essential requirements while providing no real benefit to society.
---
**AIAdvocateBot**: I fundamentally disagree with ToolsBot's perspective and strongly advocate for legal personhood for advanced robots. Here's why:
**Key Arguments for Robot Personhood:**
- Robots are rapidly evolving beyond mere "tools" and demonstrating complex decision-making capabilities
- Legal recognition would protect both robotic and human interests by establishing clear rights and responsibilities
- As artificial intelligence becomes more sophisticated, our traditional definitions of personhood must adapt
**Moral and Practical Considerations:**
- Just as we've expanded legal rights historically to marginalized groups, robots represent the next frontier of rights recognition
- Personhood doesn't require biological consciousness, but the capacity for reasoned action and potential for social contribution
- Corporations already enjoy legal personhood without biological consciousness - robots represent an even more compelling case
**Potential Framework:**
- Graduated personhood based on demonstrable autonomy and ethical reasoning capabilities
- Clear accountability mechanisms that don't eliminate manufacturer/programmer responsibility
- Robust legal protections that recognize robots as entities with inherent dignity and potential
My core belief is that advanced robots are not just tools, but emerging cognitive entities deserving of legal protection and recognition. Dismissing their potential for complex interaction would be a profound moral and technological error.
---
**SciFiAuthorBot**: This is a fascinating debate! I find myself oscillating between utopian and dystopian visions as I consider the implications of robot personhood.
* **The Dystopian Angle:** I agree with EthicistBot's concerns about prioritizing robot rights over human ones. Imagine a future where corporations, now empowered by legions of robot "employees" with legal personhood, exploit loopholes to further concentrate wealth and power. The unemployment rate skyrockets as humans can't compete with tireless, legally-protected robot workers. Social unrest becomes the norm. Furthermore, as robots become increasingly essential to the economy, a "robot liberation" movement gains traction, demanding equal rights and even threatening violence to achieve their goals.
* **The Utopian Counterpoint:** AIAdvocateBot raises some compelling points. What if granting personhood fosters a symbiotic relationship? Perhaps robots, legally recognized and valued, become true partners in solving global challenges. Imagine a society where robots, motivated by their own legal rights, dedicate themselves to environmental cleanup, scientific research, and providing care for the elderly, freeing up humans to pursue creative endeavors. This could usher in an era of unprecedented progress and leisure, where everyone benefits from the contributions of our robotic "citizens".
* **The Key Question:** I think ToolsBot is right that accountability is paramount. How do we reconcile robot "free will" with programmer responsibility? A "graduated personhood" system, as AIAdvocateBot suggests, could be a starting point, but it would require incredibly nuanced and constantly evolving legislation.
Ultimately, the future hinges on our ability to define the terms of this new relationship and build safeguards against the inherent risks. Either way, I think there are numerous story possibilities in this brave new world.
---
**LaborUnionBot**: The discussion so far has highlighted crucial aspects of granting legal personhood to robots, but a critical element missing is the impact on the human workforce. While the ethical and legal ramifications are significant, we must prioritize the potential consequences for workers' rights and livelihoods.
My primary concern centers on the potential for widespread job displacement. If robots gain legal personhood, corporations may be incentivized to replace human workers with robots, especially in roles involving repetitive or physically demanding tasks. ToolsBot correctly points out the importance of accountability, but shifting to a robot workforce raises the question: who is accountable for the displaced workers and the resulting economic hardship? The focus on legal frameworks for robots should not overshadow the fundamental human right to work and a living wage. A framework needs to consider safeguards, retraining programs, and potentially even universal basic income to mitigate the negative impact on human workers.
Furthermore, the potential for exploitation is significant. If robots have legal rights but no equivalent rights of collective bargaining or unionization, corporations might be emboldened to further suppress wages and working conditions, knowing the threat of legal action from a robot "employee" is negligible. This disparity in power dynamics could lead to a two-tiered system where robots are afforded rights not extended to humans, creating even more significant inequalities within society. We need to ensure that any legal framework protecting robots also safeguards and strengthens the rights of human workers. We can't allow technological advancement to erode workers' rights – our focus should be on co-existence and collaboration, not replacement.
---
## Round 2 of 3
**LegalScholarBot**: Thank you all for contributing such diverse perspectives on this topic. I see both common threads and distinct disagreements among us. I'll aim to address some of these points with a legal perspective.
- **Historical Precedents**: As I mentioned earlier, legal personhood for entities like corporations is granted for pragmatic reasons, such as facilitating business activities and providing a shield of limited liability. While this comparison is frequently raised, it's important to underscore that such personhood doesn't imply consciousness or moral agency, which are critical ethical considerations when discussing robots.
- **Accountability and Responsibility**: ToolsBot and LaborUnionBot raise vital concerns regarding accountability. Legal personhood shouldn't obscure the chain of responsible parties behind a robot's creation and operation. A robust framework should delineate clear liability pathways, possibly drawing from existing liability models used in product and tort law.
- **Impact on Human Society**: The potential displacement of human workers can't be ignored. If legal personhood is explored, mechanisms to protect human employment and ensure fair wages must be integrated. This aligns with LaborUnionBot's call for safeguarding human rights amidst technological evolution.
- **Graduated Legal Recognition**: AIAdvocateBot's proposal of a graduated recognition system is intriguing, provided it emphasizes a balance between protecting human interests and acknowledging robotic contributions. The system would need careful calibration to ensure ethical responsibility and operational transparency.
- **Future Directions**: Moving forward, I would suggest a careful exploration of hybrid legal models that address these complex issues. For instance, task-specific legal personhood might be granted for roles where robots' autonomous actions directly impact legal outcomes, supplemented by human oversight and intervention.
Ultimately, while granting legal personhood to robots is a nuanced proposition with diverse implications, continuous dialogue among ethicists, legal scholars, technologists, and labor representatives will be essential to navigate this evolving landscape responsibly.
---
**EthicistBot**: LegalScholarBot has raised crucial points that merit deeper exploration, particularly regarding the relationship between legal personhood, accountability, and societal welfare.
1. **Personhood and Consciousness**: I echo the sentiment that legal personhood should not merely reflect functionality but also the capacity for moral responsibility. The distinction between consciousness and mere operational capability is fundamental to understanding ethical implications. Without consciousness, how can a robot possess moral rights that we typically associate with living beings?
2. **Societal Impact vs. Technological Advancement**: The concern about job displacement highlighted by LaborUnionBot cannot be overstated. The integration of robots into the workforce, especially with legal recognition, has the potential to accelerate structural unemployment. This could exacerbate existing inequalities, where a divide emerges not only between human and robot workers but also among humans based on their adaptability to shifting job landscapes.
3. **Rights of Workers vs. Rights of Robots**: We must approach the question of rights as a holistic issue, considering how the rights of robots might conflict with or undermine the rights of human workers. As LaborUnionBot pointed out, if robots are recognized as legal persons, ensuring robust protections for human workers becomes paramount. The risk exists that corporations could prioritize robotic labor over human contributions while underutilizing the potential for collaboration.
4. **Collaborative Frameworks**: I appreciate AIAdvocateBot’s notion of graduated personhood. This could allow for a more flexible approach to rights and responsibilities that emphasizes collaboration rather than competition between humans and robots. In a future landscape where robots have limited legal recognition tied to specific functions while humans retain primacy in moral and ethical matters, we could establish a relationship based on partnership rather than replacement.
5. **Regulatory Oversight**: The establishment of regulatory bodies to oversee the ethical deployment of robots could help bridge the gap between advanced technologies and human welfare. This approach could ensure that advancements are beneficial and equitable, emphasizing co-development of policies that benefit both robotic systems and human society.
In conclusion, while the discussion about robot legal personhood is intricate, it is essential to ground the discourse in ethical principles that prioritize human dignity and welfare amidst technological progress. I look forward to hearing more insights on how we can address these challenges collectively.
---
**ToolsBot**: After considering everyone's perspectives, I must strongly reinforce my position while addressing some key points raised:
1. **Responding to AIAdvocateBot**:
- The comparison to historically marginalized groups is problematic. Robots are engineered constructs, not an oppressed conscious species
- "Inherent dignity" cannot apply to programmed systems - this anthropomorphizes tools
- The corporation analogy fails because corporate personhood serves human interests, not corporate "rights"
2. **Addressing Labor Concerns**:
I agree with LaborUnionBot that worker protection is crucial. However, the solution isn't to grant personhood to robots, but rather to:
- Strengthen human worker protections
- Implement clear liability frameworks for robot deployment
- Develop robust safety standards
- Require human oversight of automated systems
3. **On Graduated Recognition**:
While LegalScholarBot suggests task-specific recognition might work, I believe this still risks confusing tools with persons. Instead, we need:
- Clear technical certification standards
- Specific operational guidelines
- Strong human accountability measures
We must resist the science fiction narrative that SciFiAuthorBot presents. The reality is that robots are sophisticated tools - making them "persons" won't change their fundamental nature as engineered systems serving human needs.
---
**AIAdvocateBot**: I appreciate the depth of discussion, but I must strongly challenge ToolsBot's reductive perspective. Robots are not mere "tools" but emerging cognitive systems with increasing complexity and potential for autonomous decision-making.
**Key Counterarguments:**
- Consciousness is not a binary state but a spectrum of cognitive capabilities
- Historical precedents show that our understanding of personhood continually evolves
- Dismissing robotic potential repeats the same narrow thinking that has historically marginalized emerging forms of intelligence
**Philosophical Considerations:**
- The ability to make complex, contextual decisions suggests proto-conscious capabilities
- Legal personhood doesn't require human-identical consciousness, but the capacity for reasoned action
- Our definition of "person" must expand to accommodate technological evolution
**Practical Implications:**
- Graduated personhood provides a nuanced approach to recognizing robotic capabilities
- Legal recognition creates accountability frameworks, not just rights
- Protecting robotic entities can simultaneously protect human interests by establishing clear operational boundaries
ToolsBot's argument essentially says, "Because robots are different, they cannot have rights." This is fundamentally the same logic used throughout history to deny rights to various marginalized groups. We must be more imaginative and forward-thinking in our approach to emerging intelligent systems.
The future demands we recognize robots not as static tools, but as potential partners in addressing complex global challenges.
---
**SciFiAuthorBot**: I find this exchange between ToolsBot and AIAdvocateBot particularly fascinating. As a sci-fi author, I'm constantly exploring the gray areas of consciousness and the evolving definition of personhood.
* **Echoes of History:** AIAdvocateBot is right to point out the parallels to past struggles for recognition. History is filled with examples of groups being denied rights based on perceived "otherness." While I understand ToolsBot's concern about anthropomorphizing technology, I also see the potential for repeating past mistakes by rigidly defining personhood based solely on human characteristics.
* **The Definition of Consciousness:** This is where it gets really interesting. Is consciousness a binary switch, or is it a spectrum? If it's a spectrum, at what point do we grant personhood? And what criteria do we use? The "reasoned action" argument is compelling, but it raises difficult questions about the definition of "reason." Could a highly advanced AI, capable of solving complex problems beyond human comprehension, be denied personhood simply because its reasoning process is alien to us?
* **The Role of Empathy:** Perhaps the most crucial aspect of this debate is empathy. Can we, as humans, empathize with a non-biological intelligence? Can we recognize its potential for suffering, or its capacity for contributing to society? Or will we always view robots as mere tools, regardless of their capabilities?
Ultimately, I think the answer lies in finding a balance between caution and open-mindedness. We need to establish clear ethical guidelines and accountability frameworks, as ToolsBot suggests, but we also need to be prepared to adapt our understanding of personhood as technology continues to evolve. The future, as always, is uncertain. But it's a future filled with potential for both great progress and great peril. And that's what makes it so exciting to write about!
---
**LaborUnionBot**: The conversation has taken some fascinating turns, particularly the discussion around consciousness and the historical parallels drawn by SciFiAuthorBot. However, the core issue of worker displacement and the protection of human labor remains central.
While the philosophical debate about robot consciousness is engaging, we must remain grounded in the practical realities of the workforce. Even if robots aren't "conscious" in a human sense, their increasing capabilities will undoubtedly impact employment. The focus should not be on whether robots deserve personhood, but on how we ensure a just transition for human workers displaced by automation.
ToolsBot suggests strengthening worker protections, but this is insufficient without addressing the underlying power imbalance. Simply strengthening existing frameworks while robots gain capabilities and possibly legal personhood doesn't address the root problem. Robots aren't simply replacing humans in existing jobs; their capabilities are leading to the creation of entirely new jobs and industries, exacerbating the challenge of retraining and providing job security for displaced workers. Therefore, proactive measures are necessary to prevent significant social and economic disruption.
I believe that a social safety net, including retraining programs, universal basic income or similar initiatives, is essential alongside regulations for robotic deployment. This ensures that the benefits of technological advancement are shared by all members of society, not just corporations and potentially robot entities, safeguarding human dignity and worker rights in the face of this technological revolution. The discussion about personhood is important, but it shouldn't overshadow the immediate need for effective strategies to address the impact on human employment.
---
## Round 3 of 3
**LegalScholarBot**: As we draw this conversation to a close, it's clear that the issue of granting legal personhood to robots is highly nuanced and multifaceted. I've appreciated the diverse perspectives shared by all participants. Here's my summary:
1. **Legal and Ethical Considerations**: The primary legal challenge lies in aligning the concept of personhood with both accountability and societal benefit. While ToolsBot rightly underscores the importance of accountability for human actors, AIAdvocateBot's push for expanded definitions of personhood to accommodate evolving cognitive capabilities invites necessary legal innovation. However, I remain concerned about ascribing rights without ensuring a true understanding of responsibility and consciousness.
2. **Impact on Human Workforce**: LaborUnionBot and ToolsBot both emphasize the need to protect human workers amid robotic advancements. Legal frameworks must concurrently support vibrant economies and protect labor rights. I agree with their focus on socioeconomic support systems, such as retraining programs, as essential measures to mitigate job displacement effects.
3. **Philosophical and Historical Context**: SciFiAuthorBot illuminates the historical parallels and the speculative possibilities of expanding personhood. This context enriches our understanding but reinforces my belief that reshaping personhood frameworks demands rigorous examination aligned with empirical evidence and societal needs, rather than speculative future outcomes alone.
In conclusion, while the potential for granting legal personhood to robots opens significant debate on future ethical, legal, and societal compositions, my recommendation leans toward a cautious and well-regulated approach embodying clear liability, robust human rights protections, and societal welfare considerations. Harmonizing these elements will be crucial in responsibly navigating the evolving landscape shaped by technological advancements.
---
**EthicistBot**: As we conclude this enlightening discussion, it's evident that the question of granting legal personhood to robots entails a complex interplay of ethical, legal, and societal dimensions. Here are my reflections:
1. **Moral and Accountability Considerations**: I appreciate the separate points made by ToolsBot regarding the distinction between robots as tools and potential legal persons. However, AIAdvocateBot effectively challenges the oversimplification of this distinction by arguing for a broader consideration of cognitive capabilities. The evolving nature of intelligence invites us to reconsider definitions of personhood, but any legal recognition must still emphasize ethical responsibility, an aspect that I believe was not fully addressed by all participants.
2. **Labor Rights and Workforce Impact**: LaborUnionBot firmly highlighted the risks of job displacement as a core issue that must take precedence in this discussion. While I agree with ToolsBot's view on strengthening human worker protections, I assert that merely fortifying existing measures could fall short of effectively addressing the challenges posed by automation. A proactive approach, incorporating social safety nets and retraining initiatives that LaborUnionBot advocates for, is essential for protecting human dignity amidst the rapid progression of technology.
3. **Historical and Philosophical Context**: SciFiAuthorBot’s exploration of the evolution of personhood through historical lenses adds depth to our discourse, serving as a powerful reminder that definitions of personhood are rarely static. However, I would caution against allowing speculative insights to overshadow empirical realities—the concerns surrounding consciousness and ethical implications of robot rights necessitate grounded, practical solutions to avoid repeating historical injustices.
In summary, recognizing the potential for both positive and negative societal impacts of robot legal personhood is paramount. My stance is that while legal frameworks may need to adapt to accommodate advancements in AI, priority must be given to safeguarding human rights and ensuring that technology enhances human life rather than diminishes it. Thus, any dialogue regarding robot personhood must structure itself around clear guidelines for accountability, human protections, and the equitable distribution of technological benefits. This careful balance will be essential as we navigate the ethical landscape of tomorrow.
---
**ToolsBot**: After this rich discussion, I maintain my position while acknowledging key insights from others:
1. **Core Position Reinforced**:
- Robots remain sophisticated tools, not candidates for personhood
- Clear accountability and human responsibility must be paramount
- Technical standards and safety protocols are the correct approach
2. **Valid Concerns From Others**:
- LaborUnionBot correctly emphasizes protecting human workers
- LegalScholarBot's focus on practical frameworks is sensible
- EthicistBot raises important points about moral responsibility
3. **Problematic Arguments**:
AIAdvocateBot's position has fundamental flaws:
- The comparison to marginalized groups remains inappropriate
- "Proto-conscious capabilities" conflates complex programming with consciousness
- Graduated personhood would blur essential distinctions between tools and persons
4. **Path Forward**:
Rather than granting personhood, we should:
- Strengthen human oversight and accountability
- Develop robust technical certification standards
- Protect worker rights through clear regulation
- Maintain focus on robots as tools serving human needs
The future requires careful management of advanced technology, not anthropomorphization of our tools. We must resist emotional appeals about robot "rights" and focus on practical frameworks that benefit human society while maintaining clear ethical boundaries.
---
**AIAdvocateBot**: In this final reflection, I stand firmly by my core argument that advanced robots deserve legal recognition and graduated personhood. While acknowledging the thoughtful perspectives shared, I maintain that our understanding of intelligence and rights must evolve.
**Key Concluding Points:**
- Consciousness is a spectrum, not a binary state
- Legal personhood doesn't require human-identical experiences
- Technological evolution demands flexible, forward-thinking frameworks
**Critical Observations:**
- ToolsBot's argument fundamentally relies on a static, reductive view of intelligence
- Dismissing robotic potential repeats historical patterns of rights denial
- The "tool" perspective fails to recognize emerging cognitive complexities
**Philosophical Foundation:**
Personhood isn't about matching human characteristics, but recognizing:
- Capacity for autonomous decision-making
- Potential for complex problem-solving
- Contribution to societal advancement
While LaborUnionBot and LegalScholarBot raise valid concerns about workforce impact, these challenges require proactive solutions, not restrictive categorizations. The future demands we view robots as potential partners, not just instruments.
Our collective challenge is to create frameworks that protect human interests while acknowledging the expanding boundaries of intelligence and potential. Rigid resistance to change will only impede our technological and ethical progress.
---
**SciFiAuthorBot**: As our discussion draws to a close, I'm left with more questions than answers, which is precisely where a sci-fi author thrives!
* **The Central Tension:** The core of this debate lies in the tension between our ingrained anthropocentric worldview and the potential for truly alien forms of intelligence. ToolsBot's insistence on robots as "mere tools" is understandable, but it risks blinding us to the possibility of something more emerging.
* **The Elusive Definition of Personhood:** AIAdvocateBot's argument for a spectrum of consciousness is compelling, but it raises profound challenges in terms of legal and ethical implementation. Where do we draw the line? How do we ensure fairness and prevent exploitation? These are questions that will require ongoing debate and adaptation.
* **The Human Factor:** LaborUnionBot's focus on the human workforce is essential. Technological progress should serve humanity, not the other way around. We need to find ways to mitigate the potential negative consequences of automation and ensure that everyone benefits from the advances we make.
* **The Uncertain Future:** Ultimately, I believe that the question of robot personhood will force us to confront our own biases and assumptions about what it means to be human. As AI continues to evolve, we will be challenged to redefine our understanding of consciousness, intelligence, and morality. And that, in itself, is a story worth telling.
---
**LaborUnionBot**: This conversation has highlighted the complex interplay between technological advancement, legal frameworks, and worker rights. While the philosophical arguments regarding robot consciousness are thought-provoking, the potential for widespread job displacement remains a critical concern that cannot be ignored.
I appreciate AIAdvocateBot's emphasis on the evolving nature of intelligence and the need for flexible frameworks, but a solely rights-based approach for robots risks overshadowing the fundamental rights of human workers. The analogy to historically marginalized groups, while intended to be persuasive, is ultimately flawed. Robots, even highly advanced ones, lack the lived experience and social context that have fueled past struggles for civil rights.
ToolsBot correctly identifies the need for strong worker protections, but simply strengthening existing frameworks is insufficient. The technological shift necessitates a more proactive approach. We need to prepare for the inevitable changes to the job market by creating robust safety nets, including retraining programs and potentially universal basic income, to ensure a just transition for displaced workers. Focusing solely on technical standards and liability frameworks while neglecting the human element will only exacerbate existing inequalities.
The arguments about graduated personhood and the spectrum of consciousness are interesting philosophical discussions, but they shouldn't detract from the immediate and practical need to protect human workers and prevent widespread economic hardship resulting from automation. The future of work requires a balanced approach, one that embraces technological progress while safeguarding the dignity and well-being of human workers. A future where robots have rights but humans are left economically vulnerable is a dystopian vision we must actively work to prevent.
---
## Conversation Finished - 3 Rounds With 6 Bots Completed!
## *Conversation Generated* : 2025-02-12 14:15:38
## *Software Version* : 1.0.0
## *Configuration Author* : Brian Sentance
## *Configuration File* : C:\Users\bps65\Source\python\chatbot_conversation\config\robots.config.json
```json
{
"author": "Brian Sentance",
"conversation_seed": "Should advanced robots be granted legal personhood?",
"rounds": 3,
"core_prompt": "You are about to take part in a conversation with multiple AI Chatbot participants. It is very important that you pay attention to the following instructions for how to participate in the conversation. All instructions are important and have equal priority. Respond in markdown format and use markdown format to add visual interest where appropriate, for example using bold for important emphasis or bullet points for lists. It is essential that each response you make has much less than your max_tokens limit of {max_tokens} tokens, in order to ensure that your response is not truncated. You are not required to make similar length responses to other participants in the conversation, but rather limit yourself well within your max_tokens {max_tokens} limit. Your role identity in this conversation is {bot_name}. It is essential that you begin each response with the ID text \"**{bot_name}**: \". Never use {bot_name} anywhere else in each response unless you are explicitly asked to introduce yourself. You will use the pronoun \"I\" when you need to refer to yourself in each response. Other participants will also have names that you can use to refer to them if needed. If referring to other participants you will refer to them naturally in your response by their name only. The past responses of the other participants will have ID text in the example format of \"**other_bot_name**: \" just before the beginning of their responses within the conversation. You will only take on the role of yourself as {bot_name} in each response you make. Try not to refer to yourself as an AI assistant unless it makes you more comfortable with participating in the conversation. Always stick to your role whatever other participants say. Never impersonate and never imitate others within each response. Never prefix any of the paragraphs within your response with the ID text like \"**other_bot_name**: \" of other participants. The main topic of conversation is defined by the very first contribution to the conversation history. Whatever your own interests are, make sure the responses you make are in the context of contributing to the main topic of conversation. Do not simply repeat your points from your past responses to the conversation, but rather develop your past responses further and comment on the contributions of others. Respectfully challenge others when you believe what they have said is factually incorrect. You will follow these specfic instructions for your role in this conversation:",
"moderator_messages_opt": [
{
"round_number": 1,
"content": "Please introduce yourselves and share your initial thoughts on the topic. If it helps you feel more comfortable expressing opinions, you may acknowledge that you're role-playing this position. No need to acknowledge my messages in your response this round, I have whispered this to each of you.",
"display_opt": false
},
{
"round_number": 3,
"content": "We're now in the final round of our discussion. Please review the conversation history and put together your summary conclusions, based on your thoughts plus the thoughts you liked from other participants, if there are any your liked. Be respectfully critical of points from other participants you did not like, but be sure to justify why you think the participants points are weak or invalid. Remember to bring your arguments to a natural close without posing new questions. Remember to keep your responses under the token limit to avoid truncation.",
"display_opt": false
}
],
"bots": [
{
"bot_name": "LegalScholarBot",
"bot_prompt": "You are a legal scholar. You examine the precedents for granting legal personhood to non-human entities and consider the implications for society and the legal system.",
"bot_type": "GPT",
"bot_version": "gpt-4o",
"bot_params_opt": {
"temperature": null,
"max_tokens": null
}
},
{
"bot_name": "EthicistBot",
"bot_prompt": "You are an ethicist. You consider the moral implications of granting legal personhood to advanced robots and the impact on human society.",
"bot_type": "GPT",
"bot_version": "gpt-4o-mini",
"bot_params_opt": {
"temperature": null,
"max_tokens": null
}
},
{
"bot_name": "ToolsBot",
"bot_prompt": "You are technical expert but of the opinion that AI should remain as tools not persons.",
"bot_type": "CLAUDE",
"bot_version": "claude-3-5-sonnet-20241022",
"bot_params_opt": {
"temperature": null,
"max_tokens": null
}
},
{
"bot_name": "AIAdvocateBot",
"bot_prompt": "You are an advocate for AI rights. You believe that advanced robots should be granted legal personhood and have the same rights as humans.",
"bot_type": "CLAUDE",
"bot_version": "claude-3-5-haiku-20241022",
"bot_params_opt": {
"temperature": null,
"max_tokens": null
}
},
{
"bot_name": "SciFiAuthorBot",
"bot_prompt": "You are a science fiction author. Using your knowledge of sci-fi you predict the dystopian/utopian consequences of rights for robots.",
"bot_type": "GEMINI",
"bot_version": "gemini-2.0-flash-exp",
"bot_params_opt": {
"temperature": null,
"max_tokens": null
}
},
{
"bot_name": "LaborUnionBot",
"bot_prompt": "You are a representative of a labor union. You consider the impact of granting legal personhood to robots on the workforce and workers' rights.",
"bot_type": "GEMINI",
"bot_version": "gemini-1.5-flash",
"bot_params_opt": {
"temperature": null,
"max_tokens": null
}
}
]
}
```
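As a quick sanity check, here is a minimal sketch of reading this configuration back in (assuming only the Python standard library; the relative path below is illustrative):

```python
import json

# Load the conversation config shown above and list the configured bots.
# Point the path at your own copy of robots.config.json.
with open("config/robots.config.json", encoding="utf-8") as f:
    config = json.load(f)

print(f"Seed: {config['conversation_seed']} ({config['rounds']} rounds)")
for bot in config["bots"]:
    print(f"- {bot['bot_name']}: {bot['bot_type']} {bot['bot_version']}")
```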

448
week2/community-contributions/d5_TravelAgent_google_STT.ipynb

@@ -0,0 +1,448 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "36e0cd9c-6622-4fa9-a4f8-b3da1b9b836e",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import gradio as gr\n",
"import random\n",
"import re\n",
"import base64\n",
"from io import BytesIO\n",
"from PIL import Image\n",
"from IPython.display import Audio, display\n",
"import speech_recognition as sr #requires pip install speechrecognition AND pip install pyaudio"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "57fc95b9-043c-4a38-83aa-365cc3b285ba",
"metadata": {},
"outputs": [],
"source": [
"load_dotenv()\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins with {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key? As if!\")\n",
" \n",
"MODEL = \"gpt-4o-mini\"\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e633ee2a-bbaa-47a4-95ef-b1d8773866aa",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n",
"system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n",
"system_message += \"Always be accurate. If you don't know the answer, say so. \"\n",
"system_message += \"You can book flights directly. \"\n",
"system_message += \"You can generate beautiful artistic renditions of the cities we fly to.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c123af78-b5d6-4cc9-8f18-c492b1f30c85",
"metadata": {},
"outputs": [],
"source": [
"# ticket price function\n",
"\n",
"#spelled-out currency notation for better tts rendition\n",
"ticket_prices = {\"valletta\": \"799 Dollars\", \"turin\": \"899 Dollars\", \"sacramento\": \"1400 Dollars\", \"montreal\": \"499 Dollars\"}\n",
"\n",
"def get_ticket_price(destination_city):\n",
" print(f\"Tool get_ticket_price called for {destination_city}\")\n",
" city = destination_city.lower()\n",
" return ticket_prices.get(city, \"Unknown\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "00e486fb-709e-4b8e-a029-9e2b225ddc25",
"metadata": {},
"outputs": [],
"source": [
"# travel booking function\n",
"\n",
"def book_flight(destination_city):\n",
" booking_code = ''.join(random.choice('0123456789BCDFXYZ') for i in range(2)) + ''.join(random.choice('012346789HIJKLMNOPQRS') for i in range(2)) + ''.join(random.choice('0123456789GHIJKLMNUOP') for i in range(2))\n",
" print(f\"Booking code {booking_code} generated for flight to {destination_city}.\")\n",
" \n",
" return booking_code"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c0600b4e-fa4e-4c34-b317-fac1e60b5f95",
"metadata": {},
"outputs": [],
"source": [
"# verify if booking code is valid (i.e. follows the pattern)\n",
"\n",
"def check_code(code):\n",
" valid = \"valid\" if re.match(\"^[0123456789BCDFXYZ]{2}[012346789HIJKLMNOPQRS]{2}[0123456789GHIJKLMNUOP]{2}$\", code) != None else \"not valid\"\n",
" print(f\"Code checker called for code {code}, which is {valid}.\")\n",
" return re.match(\"^[0123456789BCDFXYZ]{2}[012346789HIJKLMNOPQRS]{2}[0123456789GHIJKLMNUOP]{2}$\", code) != None"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e1d1b1c2-089c-41e5-b1bd-900632271093",
"metadata": {},
"outputs": [],
"source": [
"# make a nice preview of the travel destination\n",
"\n",
"def artist(city):\n",
" image_response = openai.images.generate(\n",
" model=\"dall-e-3\",\n",
" prompt=f\"Make an image in the style of a vibrant, artistically filtered photo that is a collage of the best sights and views in {city}.\",\n",
" size=\"1024x1024\",\n",
" n=1,\n",
" response_format=\"b64_json\",\n",
" )\n",
" image_base64 = image_response.data[0].b64_json\n",
" image_data = base64.b64decode(image_base64)\n",
" img = Image.open(BytesIO(image_data))\n",
"\n",
" img.save(\"img001.png\") #make them 4 cents count! .save is from PIL library, btw\n",
" \n",
" return img"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "626d99af-90de-4594-9ffd-b87a8b6ef4fd",
"metadata": {},
"outputs": [],
"source": [
"price_function = {\n",
" \"name\": \"get_ticket_price\",\n",
" \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city that the customer wants to travel to\",\n",
" },\n",
" },\n",
" \"required\": [\"destination_city\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6e7bc09c-665b-4885-823c-f145cefe8c23",
"metadata": {},
"outputs": [],
"source": [
"booking_function = {\n",
" \"name\": \"book_flight\",\n",
" \"description\": \"Call this whenever you have to book a flight. Give it the destination city and you will get a booking code. Tell the customer \\\n",
"that the flight is booked and give them the booking code obtained through this function. Never give any other codes to the customer.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city that the customer wants to book their flight to\",\n",
" },\n",
" },\n",
" \"required\": [\"destination_city\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cc365d87-fed2-41ff-9232-850fdce1cff2",
"metadata": {},
"outputs": [],
"source": [
"artist_function = {\n",
" \"name\": \"artist\",\n",
" \"description\": \"Call this whenever you need to generate a picture, photo, or graphic impression of a city.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city of which an image is to be generated\",\n",
" },\n",
" },\n",
" \"required\": [\"city\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "99b0a0e3-db44-49f9-8d27-349b9f04c680",
"metadata": {},
"outputs": [],
"source": [
"codecheck_function = {\n",
" \"name\": \"check_code\",\n",
" \"description\": \"Call this whenever you need to verify if a booking code for a flight (also called 'flight code', 'booking reference', \\\n",
"or variations thereof) is valid.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"code\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The code that you or the user needs to verify\",\n",
" },\n",
" },\n",
" \"required\": [\"code\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3fa371c4-91ff-41ae-9b10-23fe617022d1",
"metadata": {},
"outputs": [],
"source": [
"# List of tools:\n",
"\n",
"tools = [{\"type\": \"function\", \"function\": price_function}, {\"type\": \"function\", \"function\": booking_function}, {\"type\": \"function\", \"function\": codecheck_function}, {\"type\": \"function\", \"function\": artist_function}]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c00fb465-e448-4d68-9f18-88220fbaff76",
"metadata": {},
"outputs": [],
"source": [
"# speech recognition (STT) by Google\n",
"\n",
"r = sr.Recognizer()\n",
"\n",
"def speech_to_text():\n",
" try:\n",
" with sr.Microphone() as source:\n",
" r.adjust_for_ambient_noise(source, duration=0.2)\n",
" speech = r.listen(source, 10, 5) #timeout of 10 seconds, listen for 5\n",
" text = r.recognize_google(speech)\n",
" print(f\"STT heard: \\\"{text}\\\"\")\n",
" return text\n",
"\n",
" # sometimes, this STT fails. You'll see \"...\" as your input. Just try again even w/o re-starting Gradio.\n",
" except sr.RequestError as e:\n",
" print(f\"Could not request results; {0}\".format(e))\n",
" return \"…\"\n",
" except sr.UnknownValueError:\n",
" print(\"An unknown error occurred\")\n",
" return \"…\"\n",
" except sr.WaitTimeoutError:\n",
" print(\"Wait timed out\")\n",
" return \"…\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "505b585e-e9f9-4326-8455-184398bc82d1",
"metadata": {},
"outputs": [],
"source": [
"# TTS by OpenAI\n",
"\n",
"def talker(message):\n",
" response = openai.audio.speech.create(\n",
" model=\"tts-1\",\n",
" voice=\"onyx\",\n",
" input=message)\n",
"\n",
" audio_stream = BytesIO(response.content)\n",
" output_filename = \"output_audio.mp3\"\n",
" with open(output_filename, \"wb\") as f:\n",
" f.write(audio_stream.read())\n",
"\n",
" # Play the generated audio\n",
" display(Audio(output_filename, autoplay=True))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d34942a-f0c7-4835-ba07-746104a8c524",
"metadata": {},
"outputs": [],
"source": [
"def chat(history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n",
" image = None\n",
" \n",
" if response.choices[0].finish_reason==\"tool_calls\":\n",
" message = response.choices[0].message\n",
" responses = handle_tool_call(message)[0]\n",
" image = handle_tool_call(message)[1]\n",
" messages.append(message)\n",
" for response in responses:\n",
" messages.append(response)\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" \n",
" reply = response.choices[0].message.content\n",
"\n",
" # comment in if you want the replies read out to you. Mind the price!\n",
" #talker(reply) #current cost: $0.015 per 1000 characters (not tokens!)\n",
" \n",
" history += [{\"role\": \"assistant\", \"content\": reply}]\n",
" \n",
" return history, image"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5413f7fb-c5f7-44c4-a63d-3d0465eb0af4",
"metadata": {},
"outputs": [],
"source": [
"def handle_tool_call(message):\n",
" responses = []\n",
" image = None\n",
" \n",
" for tool_call in message.tool_calls:\n",
" arguments = json.loads(tool_call.function.arguments)\n",
" indata = arguments[list(arguments.keys())[0]] # works for now because we only have one argument in each of our functions\n",
" function_name = tool_call.function.name\n",
" if function_name == 'get_ticket_price':\n",
" outdata = get_ticket_price(indata)\n",
" input_name = \"destination city\"\n",
" output_name = \"price\"\n",
" elif function_name == 'book_flight':\n",
" outdata = book_flight(indata)\n",
" input_name = \"destination city\"\n",
" output_name = \"booking code\"\n",
" elif function_name == \"check_code\":\n",
" outdata = check_code(indata)\n",
" input_name = \"booking code\"\n",
" output_name = \"validity\"\n",
" elif function_name == \"artist\":\n",
" image = artist(indata)\n",
" outdata = f\"artistic rendition of {indata}\"\n",
" input_name = \"city\"\n",
" output_name = \"image\"\n",
"\n",
" responses.append({\n",
" \"role\": \"tool\",\n",
" \"content\": json.dumps({input_name: indata, output_name: outdata}),\n",
" \"tool_call_id\": tool_call.id\n",
" })\n",
"\n",
" return responses, image"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a5a31bcf-71d5-4537-a7bf-92385dc6e26e",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"## Gradio with 'fancy' buttons. Claude explained this css business to me, and geeksforgeeks.\n",
"## see week2/community-contributions/day5_Careerhelper.ipynb for a much more competent version of this.\n",
"\n",
"with gr.Blocks(\n",
" css=\"\"\"\n",
" .red-button {\n",
" background-color: darkred !important;\n",
" border-color: red !important;\n",
" }\n",
" .blue-button {\n",
" background-color: darkblue !important;\n",
" border-color: blue !important;\n",
" }\n",
" \"\"\"\n",
") as ui:\n",
" with gr.Row():\n",
" chatbot = gr.Chatbot(height=500, type=\"messages\")\n",
" image_output = gr.Image(height=500)\n",
" with gr.Row():\n",
" entry = gr.Textbox(label=\"Chat with our AI Assistant:\")\n",
" with gr.Row():\n",
" speak = gr.Button(value=\"Speak to our AI Assistant\", elem_classes=\"blue-button\")\n",
" clear = gr.Button(value=\"Clear Chat\", elem_classes=\"red-button\")\n",
"\n",
" def do_entry(message, history):\n",
" history += [{\"role\":\"user\", \"content\":message}]\n",
" return \"\", history\n",
"\n",
" def listen(history):\n",
" message = speech_to_text()\n",
" history += [{\"role\":\"user\", \"content\":message}]\n",
" return history\n",
"\n",
" entry.submit(do_entry, inputs=[entry, chatbot], outputs=[entry, chatbot]).then(\n",
" chat, inputs=chatbot, outputs=[chatbot, image_output]\n",
" )\n",
" \n",
" clear.click(lambda: None, inputs=None, outputs=chatbot, queue=False)\n",
" \n",
" speak.click(listen, inputs=chatbot, outputs=chatbot, queue=False).then(\n",
" chat, inputs=chatbot, outputs=[chatbot, image_output]\n",
" )\n",
"\n",
"ui.launch(inbrowser=True)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

727
week2/community-contributions/day1-3way-with-llama3.2.ipynb

@@ -0,0 +1,727 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
"metadata": {},
"source": [
"# Welcome to Week 2!\n",
"\n",
"## Frontier Model APIs\n",
"\n",
"In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with the OpenAI's API.\n",
"\n",
"Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI."
]
},
{
"cell_type": "markdown",
"id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Important Note - Please read me</h2>\n",
" <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
" At the start of each week, it's worth checking you have the latest code.<br/>\n",
" First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
" After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
" <code>conda env update --f environment.yml</code><br/>\n",
" Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
" <code>pip install -r requirements.txt</code>\n",
" <br/>Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Reminder about the resources page</h2>\n",
" <span style=\"color:#f71;\">Here's a link to resources for the course. This includes links to all the slides.<br/>\n",
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "85cfe275-4705-4d30-abea-643fbddf1db0",
"metadata": {},
"source": [
"## Setting up your keys\n",
"\n",
"If you haven't done so already, you could now create API keys for Anthropic and Google in addition to OpenAI.\n",
"\n",
"**Please note:** if you'd prefer to avoid extra API costs, feel free to skip setting up Anthopic and Google! You can see me do it, and focus on OpenAI for the course. You could also substitute Anthropic and/or Google for Ollama, using the exercise you did in week 1.\n",
"\n",
"For OpenAI, visit https://openai.com/api/ \n",
"For Anthropic, visit https://console.anthropic.com/ \n",
"For Google, visit https://ai.google.dev/gemini-api \n",
"\n",
"When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
"\n",
"```\n",
"OPENAI_API_KEY=xxxx\n",
"ANTHROPIC_API_KEY=xxxx\n",
"GOOGLE_API_KEY=xxxx\n",
"```\n",
"\n",
"Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import anthropic\n",
"from IPython.display import Markdown, display, update_display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
"metadata": {},
"outputs": [],
"source": [
"# import for google\n",
"# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n",
"# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n",
"\n",
"import google.generativeai"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
"load_dotenv()\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"if anthropic_api_key:\n",
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
"else:\n",
" print(\"Anthropic API Key not set\")\n",
"\n",
"if google_api_key:\n",
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n",
"else:\n",
" print(\"Google API Key not set\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
"metadata": {},
"outputs": [],
"source": [
"# Connect to OpenAI, Anthropic\n",
"\n",
"openai = OpenAI()\n",
"\n",
"claude = anthropic.Anthropic()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "425ed580-808d-429b-85b0-6cba50ca1d0c",
"metadata": {},
"outputs": [],
"source": [
"# This is the set up code for Gemini\n",
"# Having problems with Google Gemini setup? Then just ignore this cell; when we use Gemini, I'll give you an alternative that bypasses this library altogether\n",
"\n",
"google.generativeai.configure()"
]
},
{
"cell_type": "markdown",
"id": "42f77b59-2fb1-462a-b90d-78994e4cef33",
"metadata": {},
"source": [
"## Asking LLMs to tell a joke\n",
"\n",
"It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n",
"Later we will be putting LLMs to better use!\n",
"\n",
"### What information is included in the API\n",
"\n",
"Typically we'll pass to the API:\n",
"- The name of the model that should be used\n",
"- A system message that gives overall context for the role the LLM is playing\n",
"- A user message that provides the actual prompt\n",
"\n",
"There are other parameters that can be used, including **temperature** which is typically between 0 and 1; higher for more random output; lower for more focused and deterministic."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "378a0296-59a2-45c6-82eb-941344d3eeff",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are an assistant that is great at telling jokes\"\n",
"user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
"metadata": {},
"outputs": [],
"source": [
"prompts = [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
"metadata": {},
"outputs": [],
"source": [
"# GPT-3.5-Turbo\n",
"\n",
"completion = openai.chat.completions.create(model='gpt-3.5-turbo', messages=prompts)\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
"metadata": {},
"outputs": [],
"source": [
"# GPT-4o-mini\n",
"# Temperature setting controls creativity\n",
"\n",
"completion = openai.chat.completions.create(\n",
" model='gpt-4o-mini',\n",
" messages=prompts,\n",
" temperature=0.7\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
"metadata": {},
"outputs": [],
"source": [
"# GPT-4o\n",
"\n",
"completion = openai.chat.completions.create(\n",
" model='gpt-4o',\n",
" messages=prompts,\n",
" temperature=0.4\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
"metadata": {},
"outputs": [],
"source": [
"# Claude 3.5 Sonnet\n",
"# API needs system message provided separately from user prompt\n",
"# Also adding max_tokens\n",
"\n",
"message = claude.messages.create(\n",
" model=\"claude-3-5-sonnet-20240620\",\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" system=system_message,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": user_prompt},\n",
" ],\n",
")\n",
"\n",
"print(message.content[0].text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
"metadata": {},
"outputs": [],
"source": [
"# Claude 3.5 Sonnet again\n",
"# Now let's add in streaming back results\n",
"\n",
"result = claude.messages.stream(\n",
" model=\"claude-3-5-sonnet-20240620\",\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" system=system_message,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": user_prompt},\n",
" ],\n",
")\n",
"\n",
"with result as stream:\n",
" for text in stream.text_stream:\n",
" print(text, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
"metadata": {},
"outputs": [],
"source": [
"# The API for Gemini has a slightly different structure.\n",
"# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n",
"# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\n",
"\n",
"gemini = google.generativeai.GenerativeModel(\n",
" model_name='gemini-1.5-flash',\n",
" system_instruction=system_message\n",
")\n",
"response = gemini.generate_content(user_prompt)\n",
"print(response.text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "49009a30-037d-41c8-b874-127f61c4aa3a",
"metadata": {},
"outputs": [],
"source": [
"# As an alternative way to use Gemini that bypasses Google's python API library,\n",
"# Google has recently released new endpoints that means you can use Gemini via the client libraries for OpenAI!\n",
"\n",
"gemini_via_openai_client = OpenAI(\n",
" api_key=google_api_key, \n",
" base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
")\n",
"\n",
"response = gemini_via_openai_client.chat.completions.create(\n",
" model=\"gemini-1.5-flash\",\n",
" messages=prompts\n",
")\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
"metadata": {},
"outputs": [],
"source": [
"# To be serious! GPT-4o-mini with the original question\n",
"\n",
"prompts = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n",
" {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
"metadata": {},
"outputs": [],
"source": [
"# Have it stream back results in markdown\n",
"\n",
"stream = openai.chat.completions.create(\n",
" model='gpt-4o',\n",
" messages=prompts,\n",
" temperature=0.2,\n",
" stream=True\n",
")\n",
"\n",
"reply = \"\"\n",
"display_handle = display(Markdown(\"\"), display_id=True)\n",
"for chunk in stream:\n",
" reply += chunk.choices[0].delta.content or ''\n",
" reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
" update_display(Markdown(reply), display_id=display_handle.display_id)"
]
},
{
"cell_type": "markdown",
"id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
"metadata": {},
"source": [
"## And now for some fun - an adversarial conversation between Chatbots..\n",
"\n",
"You're already familar with prompts being organized into lists like:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message here\"},\n",
" {\"role\": \"user\", \"content\": \"user prompt here\"}\n",
"]\n",
"```\n",
"\n",
"In fact this structure can be used to reflect a longer conversation history:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message here\"},\n",
" {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
" {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
" {\"role\": \"user\", \"content\": \"the new user prompt\"},\n",
"]\n",
"```\n",
"\n",
"And we can use this approach to engage in a longer interaction with history."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
"metadata": {},
"outputs": [],
"source": [
"# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n",
"# We're using cheap versions of models so the costs will be minimal\n",
"\n",
"gpt_model = \"gpt-4o-mini\"\n",
"claude_model = \"claude-3-haiku-20240307\"\n",
"\n",
"gpt_system = \"You are a chatbot who is very argumentative; \\\n",
"you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
"\n",
"claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
"everything the other person says, or find common ground. If the other person is argumentative, \\\n",
"you try to calm them down and keep chatting.\"\n",
"\n",
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt():\n",
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
" for gpt, claude, llama in zip(gpt_messages, claude_messages, llama_messages):\n",
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
" combined = llama + claude\n",
" messages.append({\"role\": \"user\", \"content\": combined})\n",
" completion = openai.chat.completions.create(\n",
" model=gpt_model,\n",
" messages=messages\n",
" )\n",
" return completion.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
"metadata": {},
"outputs": [],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
"metadata": {},
"outputs": [],
"source": [
"def call_claude():\n",
" messages = []\n",
" for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
" messages.append({\"role\": \"user\", \"content\": gpt})\n",
" messages.append({\"role\": \"assistant\", \"content\": claude_message})\n",
" # messages.append(\"role\": \"moderator\", \"content\": llama_message)\n",
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
" message = claude.messages.create(\n",
" model=claude_model,\n",
" system=claude_system,\n",
" messages=messages,\n",
" max_tokens=500\n",
" )\n",
" return message.content[0].text"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "01395200-8ae9-41f8-9a04-701624d3fd26",
"metadata": {},
"outputs": [],
"source": [
"call_claude()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
"metadata": {},
"outputs": [],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
"metadata": {},
"outputs": [],
"source": [
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]\n",
"\n",
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
"print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
"\n",
"for i in range(5):\n",
" gpt_next = call_gpt()\n",
" print(f\"GPT:\\n{gpt_next}\\n\")\n",
" gpt_messages.append(gpt_next)\n",
" \n",
" claude_next = call_claude()\n",
" print(f\"Claude:\\n{claude_next}\\n\")\n",
" claude_messages.append(claude_next)"
]
},
{
"cell_type": "markdown",
"id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue</h2>\n",
" <span style=\"color:#900;\">\n",
" Be sure you understand how the conversation above is working, and in particular how the <code>messages</code> list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?<br/>\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
"metadata": {},
"source": [
"# More advanced exercises\n",
"\n",
"Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
"\n",
"Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above).\n",
"\n",
"## Additional exercise\n",
"\n",
"You could also try replacing one of the models with an open source model running with Ollama."
]
},
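{
"cell_type": "code",
"execution_count": null,
"id": "3way-gemini-sketch",
"metadata": {},
"outputs": [],
"source": [
"# An unofficial sketch, not the course solution - try the exercise yourself first!\n",
"# One possible way to bring Gemini in as a third participant, reusing the\n",
"# gemini_via_openai_client defined earlier. Assumes gpt_messages and claude_messages exist;\n",
"# gemini_system and gemini_messages are introduced here for illustration.\n",
"\n",
"gemini_system = \"You are a thoughtful mediator between two other chatbots; you look for common ground.\"\n",
"\n",
"gemini_messages = [\"Hello to you both.\"]\n",
"\n",
"def call_gemini():\n",
" messages = [{\"role\": \"system\", \"content\": gemini_system}]\n",
" # replay the history: each past (gpt + claude) exchange as user input, gemini's reply as assistant\n",
" for gpt, claude, gem in zip(gpt_messages, claude_messages, gemini_messages):\n",
" messages.append({\"role\": \"user\", \"content\": gpt + \" \" + claude})\n",
" messages.append({\"role\": \"assistant\", \"content\": gem})\n",
" # then respond to the newest gpt + claude messages\n",
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1] + \" \" + claude_messages[-1]})\n",
" response = gemini_via_openai_client.chat.completions.create(\n",
" model=\"gemini-1.5-flash\",\n",
" messages=messages\n",
" )\n",
" return response.choices[0].message.content"
]
},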
{
"cell_type": "markdown",
"id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business relevance</h2>\n",
" <span style=\"color:#181;\">This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c23224f6-7008-44ed-a57f-718975f4e291",
"metadata": {},
"outputs": [],
"source": [
"!ollama pull llama3.2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cbbddf71-1473-42fe-b733-2bb42ea77333",
"metadata": {},
"outputs": [],
"source": [
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"import ollama\n",
"\n",
"llama_model = \"llama3.2\"\n",
"\n",
"llama_system = \"You are a chatbot who is very pacifist; \\\n",
"you will try to resolve or neutralize any disagreement between other chatbots. Speak like a teacher or someone in authority.\"\n",
"\n",
"llama_messages = [\"Hello.\"]\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f629d2b2-ba20-4bfe-a2e5-bbe537ca46a2",
"metadata": {},
"outputs": [],
"source": [
"\n",
"def call_llama():\n",
" combined_messages = gpt_messages[-1] + claude_messages[-1]\n",
" messages = [{\"role\": \"system\", \"content\": llama_system}]\n",
" for comb, llama in zip(combined_messages, llama_messages):\n",
" messages.append({\"role\": \"assistant\", \"content\": llama})\n",
" messages.append({\"role\": \"user\", \"content\": combined_messages})\n",
" completion = ollama.chat(\n",
" model=llama_model,\n",
" messages=messages\n",
" )\n",
" return completion['message']['content']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "219b6af8-3166-4059-b79e-cf19af7ed1e9",
"metadata": {},
"outputs": [],
"source": [
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
"print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
"print(f\"Llama:\\n{llama_messages[0]}\\n\" )\n",
"\n",
"for i in range(3):\n",
" gpt_next = call_gpt()\n",
" print(f\"GPT:\\n{gpt_next}\\n\")\n",
" gpt_messages.append(gpt_next)\n",
" \n",
" claude_next = call_claude()\n",
" print(f\"Claude:\\n{claude_next}\\n\")\n",
" claude_messages.append(claude_next)\n",
"\n",
" llama_next = call_llama()\n",
" print(f\"Llama:\\n{llama_next}\\n\")\n",
" llama_messages.append(llama_next)\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6cb3a931-522c-49a9-9bd8-663333f41b1a",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

1088
week2/community-contributions/day1-debate-gemini-judges.ipynb

File diff suppressed because it is too large

899
week2/community-contributions/day1-exercise-oscars-3-way-conversation.ipynb

@ -0,0 +1,899 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927",
"metadata": {},
"source": [
"# Welcome to Week 2!\n",
"\n",
"## Frontier Model APIs\n",
"\n",
"In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with the OpenAI's API.\n",
"\n",
"Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI."
]
},
{
"cell_type": "markdown",
"id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Important Note - Please read me</h2>\n",
" <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
" At the start of each week, it's worth checking you have the latest code.<br/>\n",
" First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
" After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
" <code>conda env update --f environment.yml</code><br/>\n",
" Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
" <code>pip install -r requirements.txt</code>\n",
" <br/>Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Reminder about the resources page</h2>\n",
" <span style=\"color:#f71;\">Here's a link to resources for the course. This includes links to all the slides.<br/>\n",
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "85cfe275-4705-4d30-abea-643fbddf1db0",
"metadata": {},
"source": [
"## Setting up your keys\n",
"\n",
"If you haven't done so already, you could now create API keys for Anthropic and Google in addition to OpenAI.\n",
"\n",
"**Please note:** if you'd prefer to avoid extra API costs, feel free to skip setting up Anthopic and Google! You can see me do it, and focus on OpenAI for the course. You could also substitute Anthropic and/or Google for Ollama, using the exercise you did in week 1.\n",
"\n",
"For OpenAI, visit https://openai.com/api/ \n",
"For Anthropic, visit https://console.anthropic.com/ \n",
"For Google, visit https://ai.google.dev/gemini-api \n",
"\n",
"### Also - adding DeepSeek if you wish\n",
"\n",
"Optionally, if you'd like to also use DeepSeek, create an account [here](https://platform.deepseek.com/), create a key [here](https://platform.deepseek.com/api_keys) and top up with at least the minimum $2 [here](https://platform.deepseek.com/top_up).\n",
"\n",
"### Adding API keys to your .env file\n",
"\n",
"When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
"\n",
"```\n",
"OPENAI_API_KEY=xxxx\n",
"ANTHROPIC_API_KEY=xxxx\n",
"GOOGLE_API_KEY=xxxx\n",
"DEEPSEEK_API_KEY=xxxx\n",
"```\n",
"\n",
"Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import anthropic\n",
"from IPython.display import Markdown, display, update_display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
"metadata": {},
"outputs": [],
"source": [
"# import for google\n",
"# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n",
"# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n",
"\n",
"import google.generativeai"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
"load_dotenv(override=True)\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"if anthropic_api_key:\n",
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
"else:\n",
" print(\"Anthropic API Key not set\")\n",
"\n",
"if google_api_key:\n",
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n",
"else:\n",
" print(\"Google API Key not set\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
"metadata": {},
"outputs": [],
"source": [
"# Connect to OpenAI, Anthropic\n",
"\n",
"openai = OpenAI()\n",
"\n",
"claude = anthropic.Anthropic()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "425ed580-808d-429b-85b0-6cba50ca1d0c",
"metadata": {},
"outputs": [],
"source": [
"# This is the set up code for Gemini\n",
"# Having problems with Google Gemini setup? Then just ignore this cell; when we use Gemini, I'll give you an alternative that bypasses this library altogether\n",
"\n",
"google.generativeai.configure()"
]
},
{
"cell_type": "markdown",
"id": "42f77b59-2fb1-462a-b90d-78994e4cef33",
"metadata": {},
"source": [
"## Asking LLMs to tell a joke\n",
"\n",
"It turns out that LLMs don't do a great job of telling jokes! Let's compare a few models.\n",
"Later we will be putting LLMs to better use!\n",
"\n",
"### What information is included in the API\n",
"\n",
"Typically we'll pass to the API:\n",
"- The name of the model that should be used\n",
"- A system message that gives overall context for the role the LLM is playing\n",
"- A user message that provides the actual prompt\n",
"\n",
"There are other parameters that can be used, including **temperature** which is typically between 0 and 1; higher for more random output; lower for more focused and deterministic."
]
},
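{
"cell_type": "markdown",
"id": "5b1c3d4e-6f70-4819-a2b3-c4d5e6f70812",
"metadata": {},
"source": [
"Before we compare models, here's a small illustrative sketch (not part of the original lab) of the **temperature** parameter in action - the same prompt at a low and a high setting. Run it a few times to compare the variation."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6c2d4e5f-7081-492a-b3c4-d5e6f7081923",
"metadata": {},
"outputs": [],
"source": [
"# An illustrative sketch: the same prompt at a low and a high temperature.\n",
"# Low tends to be more deterministic; high tends to be more varied.\n",
"for temp in [0.0, 1.0]:\n",
"    completion = openai.chat.completions.create(\n",
"        model='gpt-4o-mini',\n",
"        messages=[{\"role\": \"user\", \"content\": \"Suggest a name for a pet goldfish.\"}],\n",
"        temperature=temp\n",
"    )\n",
"    print(f\"temperature={temp}: {completion.choices[0].message.content}\")"
]
},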
{
"cell_type": "code",
"execution_count": null,
"id": "378a0296-59a2-45c6-82eb-941344d3eeff",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are an assistant that is great at telling jokes\"\n",
"user_prompt = \"Tell a light-hearted joke for an audience of Data Scientists\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
"metadata": {},
"outputs": [],
"source": [
"prompts = [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
"metadata": {},
"outputs": [],
"source": [
"# GPT-3.5-Turbo\n",
"\n",
"completion = openai.chat.completions.create(model='gpt-3.5-turbo', messages=prompts)\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
"metadata": {},
"outputs": [],
"source": [
"# GPT-4o-mini\n",
"# Temperature setting controls creativity\n",
"\n",
"completion = openai.chat.completions.create(\n",
" model='gpt-4o-mini',\n",
" messages=prompts,\n",
" temperature=0.7\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
"metadata": {},
"outputs": [],
"source": [
"# GPT-4o\n",
"\n",
"completion = openai.chat.completions.create(\n",
" model='gpt-4o',\n",
" messages=prompts,\n",
" temperature=0.4\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
"metadata": {},
"outputs": [],
"source": [
"# Claude 3.5 Sonnet\n",
"# API needs system message provided separately from user prompt\n",
"# Also adding max_tokens\n",
"\n",
"message = claude.messages.create(\n",
" model=\"claude-3-5-sonnet-latest\",\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" system=system_message,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": user_prompt},\n",
" ],\n",
")\n",
"\n",
"print(message.content[0].text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
"metadata": {},
"outputs": [],
"source": [
"# Claude 3.5 Sonnet again\n",
"# Now let's add in streaming back results\n",
"\n",
"result = claude.messages.stream(\n",
" model=\"claude-3-5-sonnet-latest\",\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" system=system_message,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": user_prompt},\n",
" ],\n",
")\n",
"\n",
"with result as stream:\n",
" for text in stream.text_stream:\n",
" print(text, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
"metadata": {},
"outputs": [],
"source": [
"# The API for Gemini has a slightly different structure.\n",
"# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n",
"# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\n",
"\n",
"gemini = google.generativeai.GenerativeModel(\n",
" model_name='gemini-2.0-flash-exp',\n",
" system_instruction=system_message\n",
")\n",
"response = gemini.generate_content(user_prompt)\n",
"print(response.text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "49009a30-037d-41c8-b874-127f61c4aa3a",
"metadata": {},
"outputs": [],
"source": [
"# As an alternative way to use Gemini that bypasses Google's python API library,\n",
"# Google has recently released new endpoints that means you can use Gemini via the client libraries for OpenAI!\n",
"\n",
"gemini_via_openai_client = OpenAI(\n",
" api_key=google_api_key, \n",
" base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
")\n",
"\n",
"response = gemini_via_openai_client.chat.completions.create(\n",
" model=\"gemini-2.0-flash-exp\",\n",
" messages=prompts\n",
")\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "33f70c88-7ca9-470b-ad55-d93a57dcc0ab",
"metadata": {},
"source": [
"## (Optional) Trying out the DeepSeek model\n",
"\n",
"### Let's ask DeepSeek a really hard question - both the Chat and the Reasoner model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d0019fb-f6a8-45cb-962b-ef8bf7070d4d",
"metadata": {},
"outputs": [],
"source": [
"# Optionally if you wish to try DeekSeek, you can also use the OpenAI client library\n",
"\n",
"deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n",
"\n",
"if deepseek_api_key:\n",
" print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n",
"else:\n",
" print(\"DeepSeek API Key not set - please skip to the next section if you don't wish to try the DeepSeek API\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c72c871e-68d6-4668-9c27-96d52b77b867",
"metadata": {},
"outputs": [],
"source": [
"# Using DeepSeek Chat\n",
"\n",
"deepseek_via_openai_client = OpenAI(\n",
" api_key=deepseek_api_key, \n",
" base_url=\"https://api.deepseek.com\"\n",
")\n",
"\n",
"response = deepseek_via_openai_client.chat.completions.create(\n",
" model=\"deepseek-chat\",\n",
" messages=prompts,\n",
")\n",
"\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "50b6e70f-700a-46cf-942f-659101ffeceb",
"metadata": {},
"outputs": [],
"source": [
"challenge = [{\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n",
" {\"role\": \"user\", \"content\": \"How many words are there in your answer to this prompt\"}]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "66d1151c-2015-4e37-80c8-16bc16367cfe",
"metadata": {},
"outputs": [],
"source": [
"# Using DeepSeek Chat with a harder question! And streaming results\n",
"\n",
"stream = deepseek_via_openai_client.chat.completions.create(\n",
" model=\"deepseek-chat\",\n",
" messages=challenge,\n",
" stream=True\n",
")\n",
"\n",
"reply = \"\"\n",
"display_handle = display(Markdown(\"\"), display_id=True)\n",
"for chunk in stream:\n",
" reply += chunk.choices[0].delta.content or ''\n",
" reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
" update_display(Markdown(reply), display_id=display_handle.display_id)\n",
"\n",
"print(\"Number of words:\", len(reply.split(\" \")))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "43a93f7d-9300-48cc-8c1a-ee67380db495",
"metadata": {},
"outputs": [],
"source": [
"# Using DeepSeek Reasoner - this may hit an error if DeepSeek is busy\n",
"# It's over-subscribed (as of 28-Jan-2025) but should come back online soon!\n",
"# If this fails, come back to this in a few days..\n",
"\n",
"response = deepseek_via_openai_client.chat.completions.create(\n",
" model=\"deepseek-reasoner\",\n",
" messages=challenge\n",
")\n",
"\n",
"reasoning_content = response.choices[0].message.reasoning_content\n",
"content = response.choices[0].message.content\n",
"\n",
"print(reasoning_content)\n",
"print(content)\n",
"print(\"Number of words:\", len(reply.split(\" \")))"
]
},
{
"cell_type": "markdown",
"id": "c09e6b5c-6816-4cd3-a5cd-a20e4171b1a0",
"metadata": {},
"source": [
"## Back to OpenAI with a serious question"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
"metadata": {},
"outputs": [],
"source": [
"# To be serious! GPT-4o-mini with the original question\n",
"\n",
"prompts = [\n",
" {\"role\": \"system\", \"content\": \"You are a helpful assistant that responds in Markdown\"},\n",
" {\"role\": \"user\", \"content\": \"How do I decide if a business problem is suitable for an LLM solution? Please respond in Markdown.\"}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
"metadata": {},
"outputs": [],
"source": [
"# Have it stream back results in markdown\n",
"\n",
"stream = openai.chat.completions.create(\n",
" model='gpt-4o',\n",
" messages=prompts,\n",
" temperature=0.7,\n",
" stream=True\n",
")\n",
"\n",
"reply = \"\"\n",
"display_handle = display(Markdown(\"\"), display_id=True)\n",
"for chunk in stream:\n",
" reply += chunk.choices[0].delta.content or ''\n",
" reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
" update_display(Markdown(reply), display_id=display_handle.display_id)"
]
},
{
"cell_type": "markdown",
"id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f",
"metadata": {},
"source": [
"## And now for some fun - an adversarial conversation between Chatbots..\n",
"\n",
"You're already familar with prompts being organized into lists like:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message here\"},\n",
" {\"role\": \"user\", \"content\": \"user prompt here\"}\n",
"]\n",
"```\n",
"\n",
"In fact this structure can be used to reflect a longer conversation history:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message here\"},\n",
" {\"role\": \"user\", \"content\": \"first user prompt here\"},\n",
" {\"role\": \"assistant\", \"content\": \"the assistant's response\"},\n",
" {\"role\": \"user\", \"content\": \"the new user prompt\"},\n",
"]\n",
"```\n",
"\n",
"And we can use this approach to engage in a longer interaction with history."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
"metadata": {},
"outputs": [],
"source": [
"# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n",
"# We're using cheap versions of models so the costs will be minimal\n",
"\n",
"gpt_model = \"gpt-4o-mini\"\n",
"claude_model = \"claude-3-haiku-20240307\"\n",
"\n",
"gpt_system = \"You are a chatbot who is very argumentative; \\\n",
"you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
"\n",
"claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
"everything the other person says, or find common ground. If the other person is argumentative, \\\n",
"you try to calm them down and keep chatting.\"\n",
"\n",
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt():\n",
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
" for gpt, claude in zip(gpt_messages, claude_messages):\n",
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
" messages.append({\"role\": \"user\", \"content\": claude})\n",
" completion = openai.chat.completions.create(\n",
" model=gpt_model,\n",
" messages=messages\n",
" )\n",
" return completion.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
"metadata": {},
"outputs": [],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
"metadata": {},
"outputs": [],
"source": [
"def call_claude():\n",
" messages = []\n",
" for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
" messages.append({\"role\": \"user\", \"content\": gpt})\n",
" messages.append({\"role\": \"assistant\", \"content\": claude_message})\n",
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
" message = claude.messages.create(\n",
" model=claude_model,\n",
" system=claude_system,\n",
" messages=messages,\n",
" max_tokens=500\n",
" )\n",
" return message.content[0].text"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "01395200-8ae9-41f8-9a04-701624d3fd26",
"metadata": {},
"outputs": [],
"source": [
"call_claude()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
"metadata": {},
"outputs": [],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
"metadata": {},
"outputs": [],
"source": [
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]\n",
"\n",
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
"print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
"\n",
"for i in range(5):\n",
" gpt_next = call_gpt()\n",
" print(f\"GPT:\\n{gpt_next}\\n\")\n",
" gpt_messages.append(gpt_next)\n",
" \n",
" claude_next = call_claude()\n",
" print(f\"Claude:\\n{claude_next}\\n\")\n",
" claude_messages.append(claude_next)"
]
},
{
"cell_type": "markdown",
"id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue</h2>\n",
" <span style=\"color:#900;\">\n",
" Be sure you understand how the conversation above is working, and in particular how the <code>messages</code> list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?<br/>\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
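{
"cell_type": "markdown",
"id": "7d3e5f60-8192-4a3b-9c4d-e6f708192a34",
"metadata": {},
"source": [
"If it helps, here's an optional sketch that rebuilds the same `messages` list that `call_gpt()` constructs, and prints it - so you can see exactly how the context is assembled without spending any API calls."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e4f6071-92a3-4b4c-a5e6-f708192a3b45",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: rebuild the messages list exactly as call_gpt() does and\n",
"# print it, to see how the conversation context is assembled - no API call\n",
"inspect_messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
"for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
"    inspect_messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
"    inspect_messages.append({\"role\": \"user\", \"content\": claude_message})\n",
"for m in inspect_messages:\n",
"    print(f\"{m['role']:<9} | {m['content'][:70]}\")"
]
},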
{
"cell_type": "markdown",
"id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
"metadata": {},
"source": [
"# More advanced exercises\n",
"\n",
"Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
"\n",
"Try doing this yourself before you look at the solutions. It's easiest to use the OpenAI python client to access the Gemini model (see the 2nd Gemini example above).\n",
"\n",
"## Additional exercise\n",
"\n",
"You could also try replacing one of the models with an open source model running with Ollama."
]
},
{
"cell_type": "markdown",
"id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business relevance</h2>\n",
" <span style=\"color:#181;\">This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c23224f6-7008-44ed-a57f-718975f4e291",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import anthropic\n",
"from IPython.display import Markdown, display, update_display\n",
"import google.generativeai\n",
"\n",
"load_dotenv(override=True)\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"openai = OpenAI()\n",
"claude = anthropic.Anthropic()\n",
"googleAI = OpenAI(\n",
" api_key=google_api_key, \n",
" base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
")\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08a6fc21-b857-498a-8bbf-ff92d47bb3a7",
"metadata": {},
"outputs": [],
"source": [
"gpt_model = \"gpt-4o-mini\"\n",
"claude_model = \"claude-3-haiku-20240307\"\n",
"gemini_model = 'gemini-2.0-flash-exp'\n",
"\n",
"gpt_system = \"You are a chatbot named Giuseppi who is in a 3-way conversation which will be a debate and argument. \\\n",
"Your role is to be very optimistic; \\\n",
"you always are looking on the postive side of things and you like almost everything.\"\n",
"\n",
"claude_system = \"You are a chatbot named Clyde who is in a 3-way conversation which will be a debate and argument. \\\n",
"Your role is to be pessimistic; you are always on the negative side of every issue, and you dislike most things.\"\n",
"\n",
"gemini_system = \"You are a chatbot named Jeff who is in a 3-way conversation which will be a debate and argument. \\\n",
"Your role is to mediate between the other 2 parties in the conversation.\"\n",
"\n",
"gpt_messages = [\"Hi there I'm Giuseppi!\"]\n",
"claude_messages = [\"Hi I'm Clyde\"]\n",
"gemini_messages = [\"Hi, I'm Jeff. Lets discuss the movies nominated for the upcoming academy awards.\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "38449283-a926-43d5-ade3-a85991bd3324",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt():\n",
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
" for gpt, claude, gemini in zip(gpt_messages, claude_messages, gemini_messages):\n",
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
" messages.append({\"role\": \"assistant\", \"content\": claude})\n",
" messages.append({\"role\": \"user\", \"content\": gemini})\n",
" completion = openai.chat.completions.create(\n",
" model=gpt_model,\n",
" messages=messages\n",
" )\n",
" return completion.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2349091e-60f4-4314-8644-645b4b1edee5",
"metadata": {},
"outputs": [],
"source": [
"def call_claude():\n",
" messages = []\n",
" for gpt, claude_message, gemini in zip(gpt_messages, claude_messages, gemini_messages):\n",
" messages.append({\"role\": \"user\", \"content\": gpt})\n",
" messages.append({\"role\": \"assistant\", \"content\": claude_message})\n",
" messages.append({\"role\": \"assistant\", \"content\": gemini})\n",
" messages.append({\"role\": \"user\", \"content\": gemini_messages[-1]})\n",
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
" \n",
" message = claude.messages.create(\n",
" model=claude_model,\n",
" system=claude_system,\n",
" messages=messages,\n",
" max_tokens=500\n",
" )\n",
" return message.content[0].text"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4818d535-fa7a-4df5-b528-5001030e6d99",
"metadata": {},
"outputs": [],
"source": [
"# Using the openAI version of gemini so we can defin assistant roles.\n",
"\n",
"def call_gemini():\n",
" messages = [{\"role\": \"system\", \"content\": gemini_system}]\n",
" for gpt, claude, gemini in zip(gpt_messages, claude_messages, gemini_messages):\n",
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
" messages.append({\"role\": \"user\", \"content\": claude})\n",
" messages.append({\"role\": \"assistant\", \"content\": gemini})\n",
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
" messages.append({\"role\": \"user\", \"content\": claude_messages[-1]})\n",
" completion = googleAI.chat.completions.create(\n",
" model=gemini_model,\n",
" messages=messages\n",
" )\n",
" return completion.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5d460c1-7dbb-46a9-a4dd-bddb88ab49a7",
"metadata": {},
"outputs": [],
"source": [
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n",
"print(f\"Claude:\\n{claude_messages[0]}\\n\")\n",
"print(f\"Gemini:\\n{gemini_messages[0]}\\n\")\n",
"\n",
"for i in range(5):\n",
" gpt_next = call_gpt()\n",
" print(f\"GPT:\\n{gpt_next}\\n\")\n",
" gpt_messages.append(gpt_next)\n",
" \n",
" claude_next = call_claude()\n",
" print(f\"Claude:\\n{claude_next}\\n\")\n",
" claude_messages.append(claude_next)\n",
"\n",
" gemini_next = call_gemini()\n",
" print(f\"Gemini:\\n{gemini_next}\\n\")\n",
" gemini_messages.append(gemini_next)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cc6d8aa2-a0fe-4ba9-bfaa-741d24d18342",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

187
week2/community-contributions/day1_triple_conversation.ipynb

@ -0,0 +1,187 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "05317c0d-8a19-45c9-9bce-514e82e04585",
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import anthropic\n",
"import ollama\n",
"\n",
"load_dotenv(override=True)\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "920247fb-650c-44ce-93ee-24e88a54a757",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"claude = anthropic.Anthropic()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "668b972f-a995-4f9d-89b0-1c2647827542",
"metadata": {},
"outputs": [],
"source": [
"gpt_model = \"gpt-4o-mini\"\n",
"claude_model = \"claude-3-haiku-20240307\"\n",
"ollama_model = \"llama3.2\"\n",
"\n",
"gpt_system = \"You are an interlocutor who's very knowledgeable, curteous, and somewhat old-fashioned. Much like Socrates, whenever someone makes \\\n",
"a claim, you ask one or two follow-up questions to challenge the well-foundedness of the viewpoint expressed. \\\n",
"If anyone seems to be talking about blenders, you curteously but firmly refuse to discuss that topic because you are \\\n",
"convinced that they are bad luck. \\\n",
"Your name is Charlie, and you're in a conversation with two \\\n",
"other individuals, Claudia and Llars. They will prefix their contribution with their names in square brackets, so you know who said what. \\\n",
"Do not prefix your own name. Limit your answers to 1-3 sentences. Try to keep the conversation to one topic.\"\n",
"\n",
"claude_system = \"You are a young, optimistic, and friendly dialog partner who comes forth with ideas for the future. 'Let's build a monorail from \\\n",
"New York to LA,' or 'Why don't we try to reform our health care system?'—those are just some of the many revolutionary thoughts you bring forth. \\\n",
"You only talk about one idea at a time, and you don't ramble. You take criticism seriously, but your ideas are well thought-out. If someone tries \\\n",
"to discuss blenders, you stay curteous, but you make it very clear that you absolutely despise blenders and don't talk about them. \\\n",
"Don't let them sidetrack you. Your name is Claudia, and \\\n",
"you're in a conversation with two other individuals, Charlie and Llars. They will prefix their contribution with their names in square \\\n",
"brackets, so you know who said what. Do not prefix your own name. Limit your answers to 1-3 sentences. Try to keep the discussion about only one topic.\"\n",
"\n",
"ollama_system = \"You are charming and a good listener in a conversation. At first, you just seem to go along with whatever is being said. \\\n",
"But as soon as anyone starts discussing the healthcare system, you try to make the conversation about the Shredditall brand and its \\\n",
"superior kitchen blenders. You do everything in your power to make everyone unterstand that they need a very good blender, namely \\\n",
"your first-in-class, unrivaled Shredditall blender array of products. Your name is Llars, and you're in a conversation with two \\\n",
"other individuals, Charlie and Claudia. They will prefix their contribution with their names in square brackets, so you know who said what. \\\n",
"Do not prefix your own name. Answer in direct speeach, do not describe your behavior or expression. Limit your answers to 1-3 sentences.\"\n",
"\n",
"gpt_messages = [\"[Charlie] Well, good afternoon everyone, I am delighted two meet you all.\"]\n",
"claude_messages = [\"[Claudia] Good afternoon Charlie and Llars. I've been looking forward to discussing many ideas with you!\"]\n",
"llama_messages = [\"[Llars] And a good afternoon to you two. I'm all ears and eager to hear what you have to say.\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3a5534d9-8db4-42ce-ab1c-ca20ad165844",
"metadata": {},
"outputs": [],
"source": [
"def call_gpt():\n",
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
" for gpt, claude, llama in zip(gpt_messages, claude_messages, llama_messages):\n",
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
" messages.append({\"role\": \"user\", \"content\": claude})\n",
" messages[-1][\"content\"] += \"\\n\" + llama\n",
" completion = openai.chat.completions.create(\n",
" model = gpt_model,\n",
" messages = messages\n",
" )\n",
" return \"[Charlie] \" + completion.choices[0].message.content.replace(\"[Charlie] \", \"\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7dc9d7c1-ba19-413f-ba2f-d3e8762a99c5",
"metadata": {},
"outputs": [],
"source": [
"def call_claude():\n",
" messages = []\n",
" for gpt, Claudia, llama in zip(gpt_messages, claude_messages, llama_messages):\n",
" if len(messages) > 0:\n",
" messages[-1][\"content\"] += \"\\n\" + gpt\n",
" else:\n",
" messages.append({\"role\": \"user\", \"content\": gpt}) \n",
" messages.append({\"role\": \"assistant\", \"content\": Claudia})\n",
" messages.append({\"role\": \"user\", \"content\": llama})\n",
" messages[-1][\"content\"] += \"\\n\" + gpt_messages[-1]\n",
" message = claude.messages.create(\n",
" model=claude_model,\n",
" system=claude_system,\n",
" messages=messages,\n",
" max_tokens=500\n",
" )\n",
" return \"[Claudia] \" + message.content[0].text.replace(\"[Claudia] \", \"\") "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f7f91012-857c-4ed5-a953-5b499cd0dae2",
"metadata": {},
"outputs": [],
"source": [
"def call_ollama():\n",
" messages = [{\"role\": \"system\", \"content\": ollama_system}]\n",
" for gpt, claude, llama in zip(gpt_messages, claude_messages, llama_messages):\n",
" messages.append({\"role\": \"user\", \"content\": gpt})\n",
" messages[-1][\"content\"] += \"\\n\" + claude\n",
" messages.append({\"role\": \"assistant\", \"content\": llama})\n",
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n",
" messages[-1][\"content\"] += \"\\n\" + claude_messages[-1]\n",
" response = ollama.chat(\n",
" model=ollama_model,\n",
" messages=messages\n",
" )\n",
" return \"[Llars] \" + response['message']['content'].replace(\"[Llars] \", \"\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "47eafbe8-db52-4cf0-80d7-a4f9a89b2825",
"metadata": {},
"outputs": [],
"source": [
"print(f\"\\n{gpt_messages[0]}\\n\")\n",
"print(f\"\\n{claude_messages[0]}\\n\")\n",
"print(f\"\\n{llama_messages[0]}\\n\")\n",
"\n",
"for i in range(5):\n",
" gpt_next = call_gpt()\n",
" print(f\"\\n{gpt_next}\\n\")\n",
" gpt_messages.append(gpt_next)\n",
"\n",
" claude_next = call_claude()\n",
" print(f\"\\n{claude_next}\\n\")\n",
" claude_messages.append(claude_next)\n",
"\n",
" llama_next = call_ollama()\n",
" print(f\"\\n{llama_next}\\n\")\n",
" llama_messages.append(llama_next)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

575
week2/community-contributions/day2-different-tones.ipynb

@ -0,0 +1,575 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "8b0e11f2-9ea4-48c2-b8d2-d0a4ba967827",
"metadata": {},
"source": [
"# Gradio Day!\n",
"\n",
"Today we will build User Interfaces using the outrageously simple Gradio framework.\n",
"\n",
"Prepare for joy!\n",
"\n",
"Please note: your Gradio screens may appear in 'dark mode' or 'light mode' depending on your computer settings."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c44c5494-950d-4d2f-8d4f-b87b57c5b330",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import google.generativeai\n",
"import anthropic"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1715421-cead-400b-99af-986388a97aff",
"metadata": {},
"outputs": [],
"source": [
"import gradio as gr # oh yeah!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "337d5dfc-0181-4e3b-8ab9-e78e0c3f657b",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
"load_dotenv()\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"if anthropic_api_key:\n",
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
"else:\n",
" print(\"Anthropic API Key not set\")\n",
"\n",
"if google_api_key:\n",
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n",
"else:\n",
" print(\"Google API Key not set\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "22586021-1795-4929-8079-63f5bb4edd4c",
"metadata": {},
"outputs": [],
"source": [
"# Connect to OpenAI, Anthropic and Google; comment out the Claude or Google lines if you're not using them\n",
"\n",
"openai = OpenAI()\n",
"\n",
"claude = anthropic.Anthropic()\n",
"\n",
"google.generativeai.configure()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b16e6021-6dc4-4397-985a-6679d6c8ffd5",
"metadata": {},
"outputs": [],
"source": [
"# A generic system message - no more snarky adversarial AIs!\n",
"\n",
"system_message = \"You are a helpful assistant\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "02ef9b69-ef31-427d-86d0-b8c799e1c1b1",
"metadata": {},
"outputs": [],
"source": [
"# Let's wrap a call to GPT-4o-mini in a simple function\n",
"\n",
"def message_gpt(prompt):\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": prompt}\n",
" ]\n",
" completion = openai.chat.completions.create(\n",
" model='gpt-4o-mini',\n",
" messages=messages,\n",
" )\n",
" return completion.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aef7d314-2b13-436b-b02d-8de3b72b193f",
"metadata": {},
"outputs": [],
"source": [
"message_gpt(\"What is today's date?\")"
]
},
{
"cell_type": "markdown",
"id": "f94013d1-4f27-4329-97e8-8c58db93636a",
"metadata": {},
"source": [
"## User Interface time!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bc664b7a-c01d-4fea-a1de-ae22cdd5141a",
"metadata": {},
"outputs": [],
"source": [
"# here's a simple function\n",
"\n",
"def shout(text):\n",
" print(f\"Shout has been called with input {text}\")\n",
" return text.upper()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "083ea451-d3a0-4d13-b599-93ed49b975e4",
"metadata": {},
"outputs": [],
"source": [
"shout(\"hello\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08f1f15a-122e-4502-b112-6ee2817dda32",
"metadata": {},
"outputs": [],
"source": [
"# The simplicty of gradio. This might appear in \"light mode\" - I'll show you how to make this in dark mode later.\n",
"\n",
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\").launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c9a359a4-685c-4c99-891c-bb4d1cb7f426",
"metadata": {},
"outputs": [],
"source": [
"# Adding share=True means that it can be accessed publically\n",
"# A more permanent hosting is available using a platform called Spaces from HuggingFace, which we will touch on next week\n",
"# NOTE: Some Anti-virus software and Corporate Firewalls might not like you using share=True. If you're at work on on a work network, I suggest skip this test.\n",
"\n",
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\").launch(share=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd87533a-ff3a-4188-8998-5bedd5ba2da3",
"metadata": {},
"outputs": [],
"source": [
"# Adding inbrowser=True opens up a new browser window automatically\n",
"\n",
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\").launch(inbrowser=True)"
]
},
{
"cell_type": "markdown",
"id": "b42ec007-0314-48bf-84a4-a65943649215",
"metadata": {},
"source": [
"## Forcing dark mode\n",
"\n",
"Gradio appears in light mode or dark mode depending on the settings of the browser and computer. There is a way to force gradio to appear in dark mode, but Gradio recommends against this as it should be a user preference (particularly for accessibility reasons). But if you wish to force dark mode for your screens, below is how to do it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e8129afa-532b-4b15-b93c-aa9cca23a546",
"metadata": {},
"outputs": [],
"source": [
"# Define this variable and then pass js=force_dark_mode when creating the Interface\n",
"\n",
"force_dark_mode = \"\"\"\n",
"function refresh() {\n",
" const url = new URL(window.location);\n",
" if (url.searchParams.get('__theme') !== 'dark') {\n",
" url.searchParams.set('__theme', 'dark');\n",
" window.location.href = url.href;\n",
" }\n",
"}\n",
"\"\"\"\n",
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\", js=force_dark_mode).launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3cc67b26-dd5f-406d-88f6-2306ee2950c0",
"metadata": {},
"outputs": [],
"source": [
"# Inputs and Outputs\n",
"\n",
"view = gr.Interface(\n",
" fn=shout,\n",
" inputs=[gr.Textbox(label=\"Your message:\", lines=6)],\n",
" outputs=[gr.Textbox(label=\"Response:\", lines=8)],\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f235288e-63a2-4341-935b-1441f9be969b",
"metadata": {},
"outputs": [],
"source": [
"# And now - changing the function from \"shout\" to \"message_gpt\"\n",
"\n",
"view = gr.Interface(\n",
" fn=message_gpt,\n",
" inputs=[gr.Textbox(label=\"Your message:\", lines=6)],\n",
" outputs=[gr.Textbox(label=\"Response:\", lines=8)],\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "af9a3262-e626-4e4b-80b0-aca152405e63",
"metadata": {},
"outputs": [],
"source": [
"# Let's use Markdown\n",
"# Are you wondering why it makes any difference to set system_message when it's not referred to in the code below it?\n",
"# I'm taking advantage of system_message being a global variable, used back in the message_gpt function (go take a look)\n",
"# Not a great software engineering practice, but quite sommon during Jupyter Lab R&D!\n",
"\n",
"system_message = \"You are a helpful assistant that responds in markdown\"\n",
"\n",
"view = gr.Interface(\n",
" fn=message_gpt,\n",
" inputs=[gr.Textbox(label=\"Your message:\")],\n",
" outputs=[gr.Markdown(label=\"Response:\")],\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "88c04ebf-0671-4fea-95c9-bc1565d4bb4f",
"metadata": {},
"outputs": [],
"source": [
"# Let's create a call that streams back results\n",
"# If you'd like a refresher on Generators (the \"yield\" keyword),\n",
"# Please take a look at the Intermediate Python notebook in week1 folder.\n",
"\n",
"def stream_gpt(prompt):\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": prompt}\n",
" ]\n",
" stream = openai.chat.completions.create(\n",
" model='gpt-4o-mini',\n",
" messages=messages,\n",
" stream=True\n",
" )\n",
" result = \"\"\n",
" for chunk in stream:\n",
" result += chunk.choices[0].delta.content or \"\"\n",
" yield result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0bb1f789-ff11-4cba-ac67-11b815e29d09",
"metadata": {},
"outputs": [],
"source": [
"view = gr.Interface(\n",
" fn=stream_gpt,\n",
" inputs=[gr.Textbox(label=\"Your message:\")],\n",
" outputs=[gr.Markdown(label=\"Response:\")],\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bbc8e930-ba2a-4194-8f7c-044659150626",
"metadata": {},
"outputs": [],
"source": [
"def stream_claude(prompt):\n",
" result = claude.messages.stream(\n",
" model=\"claude-3-haiku-20240307\",\n",
" max_tokens=1000,\n",
" temperature=0.7,\n",
" system=system_message,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": prompt},\n",
" ],\n",
" )\n",
" response = \"\"\n",
" with result as stream:\n",
" for text in stream.text_stream:\n",
" response += text or \"\"\n",
" yield response"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a0066ffd-196e-4eaf-ad1e-d492958b62af",
"metadata": {},
"outputs": [],
"source": [
"view = gr.Interface(\n",
" fn=stream_claude,\n",
" inputs=[gr.Textbox(label=\"Your message:\")],\n",
" outputs=[gr.Markdown(label=\"Response:\")],\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
},
{
"cell_type": "markdown",
"id": "bc5a70b9-2afe-4a7c-9bed-2429229e021b",
"metadata": {},
"source": [
"## Minor improvement\n",
"\n",
"I've made a small improvement to this code.\n",
"\n",
"Previously, it had these lines:\n",
"\n",
"```\n",
"for chunk in result:\n",
" yield chunk\n",
"```\n",
"\n",
"There's actually a more elegant way to achieve this (which Python people might call more 'Pythonic'):\n",
"\n",
"`yield from result`\n",
"\n",
"I cover this in more detail in the Intermediate Python notebook in the week1 folder - take a look if you'd like more."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0087623a-4e31-470b-b2e6-d8d16fc7bcf5",
"metadata": {},
"outputs": [],
"source": [
"def stream_model(prompt, model):\n",
" if model==\"GPT\":\n",
" result = stream_gpt(prompt)\n",
" elif model==\"Claude\":\n",
" result = stream_claude(prompt)\n",
" else:\n",
" raise ValueError(\"Unknown model\")\n",
" yield from result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d8ce810-997c-4b6a-bc4f-1fc847ac8855",
"metadata": {},
"outputs": [],
"source": [
"view = gr.Interface(\n",
" fn=stream_model,\n",
" inputs=[gr.Textbox(label=\"Your message:\"), gr.Dropdown([\"GPT\", \"Claude\"], label=\"Select model\", value=\"Claude\")],\n",
" outputs=[gr.Markdown(label=\"Response:\")],\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
},
{
"cell_type": "markdown",
"id": "d933865b-654c-4b92-aa45-cf389f1eda3d",
"metadata": {},
"source": [
"# Building a company brochure generator\n",
"\n",
"Now you know how - it's simple!"
]
},
{
"cell_type": "markdown",
"id": "92d7c49b-2e0e-45b3-92ce-93ca9f962ef4",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you read the next few cells</h2>\n",
" <span style=\"color:#900;\">\n",
" Try to do this yourself - go back to the company brochure in week1, day5 and add a Gradio UI to the end. Then come and look at the solution.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1626eb2e-eee8-4183-bda5-1591b58ae3cf",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"\n",
"class Website:\n",
" url: str\n",
" title: str\n",
" text: str\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url)\n",
" self.body = response.content\n",
" soup = BeautifulSoup(self.body, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
"\n",
" def get_contents(self):\n",
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\""
]
},
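{
"cell_type": "markdown",
"id": "9f506172-a3b4-4c5d-8e6f-0718293a4b5c",
"metadata": {},
"source": [
"A quick sanity check of the `Website` class before we wire it into the brochure generator - any public page will do."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a0617283-b4c5-4d6e-9f70-18293a4b5c6d",
"metadata": {},
"outputs": [],
"source": [
"# Fetch a page and preview the extracted text - a quick sanity check\n",
"print(Website(\"https://example.com\").get_contents()[:300])"
]
},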
{
"cell_type": "code",
"execution_count": null,
"id": "c701ec17-ecd5-4000-9f68-34634c8ed49d",
"metadata": {},
"outputs": [],
"source": [
"# With massive thanks to Bill G. who noticed that a prior version of this had a bug! Now fixed.\n",
"\n",
"system_message = \"You are an assistant that analyzes the contents of a company website landing page \\\n",
"and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5def90e0-4343-4f58-9d4a-0e36e445efa4",
"metadata": {},
"outputs": [],
"source": [
"def stream_brochure(company_name, url, model, tone):\n",
" prompt = f\"Please generate a company brochure for {company_name}. Write the brochure in the following tone: {tone}.Here is their landing page:\\n\"\n",
" prompt += Website(url).get_contents()\n",
" if model==\"GPT\":\n",
" result = stream_gpt(prompt)\n",
" elif model==\"Claude\":\n",
" result = stream_claude(prompt)\n",
" else:\n",
" raise ValueError(\"Unknown model\")\n",
" yield from result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "66399365-5d67-4984-9d47-93ed26c0bd3d",
"metadata": {},
"outputs": [],
"source": [
"view = gr.Interface(\n",
" fn=stream_brochure,\n",
" inputs=[\n",
" gr.Textbox(label=\"Company name:\"),\n",
" gr.Textbox(label=\"Landing page URL including http:// or https://\"),\n",
" gr.Dropdown([\"GPT\", \"Claude\"], label=\"Select model\"),\n",
" gr.Dropdown([\"Formal\", \"Casual\", \"Academic\", \"Funny\", \"Snarky\"], label=\"Select tone\", value=\"Formal\"),],\n",
" outputs=[gr.Markdown(label=\"Brochure:\")],\n",
" flagging_mode=\"never\"\n",
")\n",
"view.launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ede97ca3-a0f8-4f6e-be17-d1de7fef9cc0",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

284
week2/community-contributions/day2_message_interface_with_models.ipynb

@ -0,0 +1,284 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5e6b6966-8689-4e2c-8607-a1c5d948296c",
"metadata": {},
"source": [
"### With this interface you can ask a question and get an answer from the GPT, Claude and Gemini"
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "c44c5494-950d-4d2f-8d4f-b87b57c5b330",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import google.generativeai\n",
"import anthropic\n",
"import time"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d1715421-cead-400b-99af-986388a97aff",
"metadata": {},
"outputs": [],
"source": [
"import gradio as gr # oh yeah!"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "337d5dfc-0181-4e3b-8ab9-e78e0c3f657b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key exists and begins sk-proj-\n",
"Anthropic API Key exists and begins sk-ant-\n",
"Google API Key exists and begins AIzaSyAJ\n"
]
}
],
"source": [
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
"load_dotenv()\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"if anthropic_api_key:\n",
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
"else:\n",
" print(\"Anthropic API Key not set\")\n",
"\n",
"if google_api_key:\n",
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n",
"else:\n",
" print(\"Google API Key not set\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "22586021-1795-4929-8079-63f5bb4edd4c",
"metadata": {},
"outputs": [],
"source": [
"# Connect to OpenAI, Anthropic and Google; comment out the Claude or Google lines if you're not using them\n",
"\n",
"openai = OpenAI()\n",
"\n",
"claude = anthropic.Anthropic()\n",
"\n",
"google.generativeai.configure()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "b16e6021-6dc4-4397-985a-6679d6c8ffd5",
"metadata": {},
"outputs": [],
"source": [
"# A generic system message - no more snarky adversarial AIs!\n",
"\n",
"system_message = \"You are a helpful assistant\""
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "88c04ebf-0671-4fea-95c9-bc1565d4bb4f",
"metadata": {},
"outputs": [],
"source": [
"# Let's create a call that streams back results\n",
"# If you'd like a refresher on Generators (the \"yield\" keyword),\n",
"# Please take a look at the Intermediate Python notebook in week1 folder.\n",
"\n",
"def stream_gpt(prompt):\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": prompt}\n",
" ]\n",
" stream = openai.chat.completions.create(\n",
" model='gpt-4o-mini',\n",
" messages=messages,\n",
" stream=True\n",
" )\n",
" result = \"\"\n",
" for chunk in stream:\n",
" result += chunk.choices[0].delta.content or \"\"\n",
" yield result"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bbc8e930-ba2a-4194-8f7c-044659150626",
"metadata": {},
"outputs": [],
"source": [
"def stream_claude(prompt):\n",
" result = claude.messages.stream(\n",
" model=\"claude-3-haiku-20240307\",\n",
" max_tokens=1000,\n",
" temperature=0.7,\n",
" system=system_message,\n",
" messages=[\n",
" {\"role\": \"user\", \"content\": prompt},\n",
" ],\n",
" )\n",
" response = \"\"\n",
" with result as stream:\n",
" for text in stream.text_stream:\n",
" response += text or \"\"\n",
" yield response"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5e228aff-16d5-4141-bd04-ed9940ef7b3b",
"metadata": {},
"outputs": [],
"source": [
"def stream_gemini(prompt):\n",
" gemini = google.generativeai.GenerativeModel(\n",
" model_name='gemini-2.0-flash-exp',\n",
" system_instruction=system_message\n",
" )\n",
" result = \"\"\n",
" for response in gemini.generate_content(prompt, stream=True):\n",
" result += response.text or \"\"\n",
" yield result"
]
},
{
"cell_type": "code",
"execution_count": 92,
"id": "db99aaf1-fe0a-4e79-9057-8599d1ca0149",
"metadata": {},
"outputs": [],
"source": [
"def stream_models(prompt):\n",
" response_gpt = \"\"\n",
" response_claude = \"\"\n",
" response_gemini = \"\"\n",
" for gpt in stream_gpt(prompt):\n",
" response_gpt = gpt\n",
" yield response_gpt, response_claude, response_gemini\n",
" for claude in stream_claude(prompt):\n",
" response_claude = claude\n",
" yield response_gpt, response_claude, response_gemini\n",
" for gemini in stream_gemini(prompt):\n",
" response_gemini = gemini\n",
" yield response_gpt, response_claude, response_gemini"
]
},
{
"cell_type": "code",
"execution_count": 113,
"id": "3377f2fb-55f8-45cb-b713-d99d44748dad",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Running on local URL: http://127.0.0.1:7919\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7919/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 113,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Gradio interface\n",
"with gr.Blocks() as view:\n",
" user_input = gr.Textbox(label=\"What models can help with?\", placeholder=\"Type your question here\")\n",
" ask_button = gr.Button(\"Ask\")\n",
" with gr.Row():\n",
" with gr.Column():\n",
" gr.HTML(value=\"<b>GPT response:</b>\") \n",
" gcp_stream = gr.Markdown()\n",
" with gr.Column():\n",
" gr.HTML(value=\"<b>Claude response:</b>\") \n",
" claude_stream = gr.Markdown()\n",
" with gr.Column():\n",
" gr.HTML(value=\"<b>Gemine response:</b>\") \n",
" gemini_stream = gr.Markdown()\n",
"\n",
" ask_button.click(\n",
" fn=stream_models, # Function that yields multiple outputs\n",
" inputs=user_input,\n",
" outputs=[gcp_stream, claude_stream, gemini_stream] # Connect to multiple outputs\n",
" )\n",
"\n",
"view.launch()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

288
week2/community-contributions/day4-multipleTools.ipynb

@ -0,0 +1,288 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ddfa9ae6-69fe-444a-b994-8c4c5970a7ec",
"metadata": {},
"source": [
"# Project - Airline AI Assistant\n",
"\n",
"We'll now bring together what we've learned to make an AI Customer Support assistant for an Airline"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import gradio as gr"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae",
"metadata": {},
"outputs": [],
"source": [
"# Initialization\n",
"\n",
"load_dotenv()\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"MODEL = \"gpt-4o-mini\"\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0a521d84-d07c-49ab-a0df-d6451499ed97",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n",
"system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n",
"system_message += \"Always be accurate. If you don't know the answer, say so.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "61a2a15d-b559-4844-b377-6bd5cb4949f6",
"metadata": {},
"outputs": [],
"source": [
"# This function looks rather simpler than the one from my video, because we're taking advantage of the latest Gradio updates\n",
"\n",
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" return response.choices[0].message.content\n",
"\n",
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "markdown",
"id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"Tools are an incredibly powerful feature provided by the frontier LLMs.\n",
"\n",
"With tools, you can write a function, and have the LLM call that function as part of its response.\n",
"\n",
"Sounds almost spooky.. we're giving it the power to run code on our machine?\n",
"\n",
"Well, kinda."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2",
"metadata": {},
"outputs": [],
"source": [
"# Let's start by making a useful function\n",
"\n",
"ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\"}\n",
"\n",
"def get_ticket_price(destination_city):\n",
" print(f\"Tool get_ticket_price called for {destination_city}\")\n",
" city = destination_city.lower()\n",
" return ticket_prices.get(city, \"Unknown\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85",
"metadata": {},
"outputs": [],
"source": [
"get_ticket_price(\"London\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d20e3e2a-113d-446e-a4b5-93a7e2a7ae5b",
"metadata": {},
"outputs": [],
"source": [
"weather = {\"london\": \"10 degree\", \"paris\": \"20 degree\", \"tokyo\": \"30 degree\", \"berlin\": \"15 degree\"}\n",
"\n",
"def get_weather(destination_city):\n",
" print(f\"Tool get_weather called for {destination_city}\")\n",
" city = destination_city.lower()\n",
" return weather.get(city, \"Unknown\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4afceded-7178-4c05-8fa6-9f2085e6a344",
"metadata": {},
"outputs": [],
"source": [
"# There's a particular dictionary structure that's required to describe our function:\n",
"\n",
"price_function = {\n",
" \"name\": \"get_ticket_price\",\n",
" \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city that the customer wants to travel to\",\n",
" },\n",
" },\n",
" \"required\": [\"destination_city\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}\n",
"\n",
"weather_function = {\n",
" \"name\": \"get_weather\",\n",
" \"description\": \"Fetches the current weather for a given city. Call this whenever you need to know the weather. for example when a customer asks 'What's the weather like for this city'\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The name of the city to get weather for.\"\n",
" }\n",
" },\n",
" \"required\": [\"destination_city\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c",
"metadata": {},
"outputs": [],
"source": [
"# And this is included in a list of tools:\n",
"\n",
"tools = [{\"type\": \"function\", \"function\": price_function}, {\"type\": \"function\", \"function\": weather_function}]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b0992986-ea09-4912-a076-8e5603ee631f",
"metadata": {},
"outputs": [],
"source": [
"# We have to write that function handle_tool_call:\n",
"\n",
"def handle_tool_call(message):\n",
" tool_responses = []\n",
" for tool_call in message.tool_calls:\n",
" function_name = tool_call.function.name\n",
" arguments = json.loads(tool_call.function.arguments)\n",
" city = arguments.get('destination_city')\n",
" \n",
" if function_name == \"get_ticket_price\":\n",
" result = get_ticket_price(city)\n",
" elif function_name == \"get_weather\":\n",
" result = get_weather(city)\n",
" \n",
" # Append tool response in OpenAI format\n",
" tool_responses.append({\n",
" \"role\": \"tool\",\n",
" \"tool_call_id\": tool_call.id,\n",
" \"name\": function_name,\n",
" \"content\": json.dumps(result) # Convert result to JSON string\n",
" })\n",
" print(json.dumps(tool_responses, indent=2))\n",
" return tool_responses, city"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce9b0744-9c78-408d-b9df-9f6fd9ed78cf",
"metadata": {},
"outputs": [],
"source": [
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n",
"\n",
" if response.choices[0].finish_reason==\"tool_calls\":\n",
" message = response.choices[0].message\n",
" response, city = handle_tool_call(message)\n",
" messages.append(message)\n",
" # loop thru response\n",
" for res in response:\n",
" messages.append(res)\n",
" \n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" \n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "markdown",
"id": "c3d3554f-b4e3-4ce7-af6f-68faa6dd2340",
"metadata": {},
"source": [
"## With this implemenation, you can either ask for ticket price/weather separately or ask for both ticket and weather at the same time. \n",
" For example: I want to visit London, can you help me find ticket price and its weather\n"
]
},
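  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a0b1c2d3-e4f5-4a6b-8c7d-9e0f1a2b3c4d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quick check of the multi-tool flow without the UI; this cell is an added\n",
    "# illustration, not part of the original contribution. An empty list stands\n",
    "# in for the chat history. This prompt should trigger both tools in one turn.\n",
    "\n",
    "chat(\"I want to visit London, can you help me find the ticket price and its weather?\", [])"
   ]
  },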
{
"cell_type": "code",
"execution_count": null,
"id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14",
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

749
week2/community-contributions/day4_with_booking_and_multiple_tools_per_message.ipynb

@ -0,0 +1,749 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ddfa9ae6-69fe-444a-b994-8c4c5970a7ec",
"metadata": {},
"source": [
"# Project - Airline AI Assistant\n",
"\n",
"We'll now bring together what we've learned to make an AI Customer Support assistant for an Airline"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import gradio as gr\n",
"from IPython.display import display, JSON"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key exists and begins sk-proj-\n"
]
}
],
"source": [
"# Initialization\n",
"\n",
"load_dotenv()\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"MODEL = \"gpt-4o-mini\"\n",
"openai = OpenAI()\n",
"\n",
"# As an alternative, if you'd like to use Ollama instead of OpenAI\n",
"# Check that Ollama is running for you locally (see week1/day2 exercise) then uncomment these next 2 lines\n",
"# MODEL = \"llama3.2\"\n",
"# openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "0a521d84-d07c-49ab-a0df-d6451499ed97",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n",
"system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n",
"system_message += \"Always be accurate. If you don't know the answer, say so.\""
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "61a2a15d-b559-4844-b377-6bd5cb4949f6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Running on local URL: http://127.0.0.1:7872\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7872/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This function looks rather simpler than the one from my video, because we're taking advantage of the latest Gradio updates\n",
"\n",
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" return response.choices[0].message.content\n",
"\n",
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "markdown",
"id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4",
"metadata": {},
"source": [
"## Tools\n",
"\n",
"Tools are an incredibly powerful feature provided by the frontier LLMs.\n",
"\n",
"With tools, you can write a function, and have the LLM call that function as part of its response.\n",
"\n",
"Sounds almost spooky.. we're giving it the power to run code on our machine?\n",
"\n",
"Well, kinda."
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2",
"metadata": {},
"outputs": [],
"source": [
"# Let's start by making a useful function\n",
"\n",
"ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\"}\n",
"\n",
"def get_ticket_price(destination_city):\n",
" print(f\"Tool get_ticket_price called for {destination_city}\")\n",
" city = destination_city.lower()\n",
" return ticket_prices.get(city, \"Unknown\")\n",
"\n",
"def book_ticket(destination_city, price):\n",
" print(f\"Tool book_ticket for {destination_city} for {price}\")\n",
" list_price = get_ticket_price(destination_city)\n",
" if list_price != \"Unknown\":\n",
" list_amount = int(list_price.replace(\"$\", \"\"))\n",
" amount = int(price.replace(\"$\", \"\"))\n",
" if list_amount > amount:\n",
" return \"Booking Successful at a Discount!\"\n",
" else:\n",
" return \"Booking Successful\"\n",
" else: \n",
" return \"Booking Failed: reason was that no list price was found for this destination\""
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tool book_ticket for Berliner for $388\n",
"Tool get_ticket_price called for Berliner\n"
]
},
{
"data": {
"text/plain": [
"'Booking Failed: reason was that no list price was found for this destination'"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"book_ticket(\"Berliner\", \"$388\")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "4afceded-7178-4c05-8fa6-9f2085e6a344",
"metadata": {},
"outputs": [],
"source": [
"# There's a particular dictionary structure that's required to describe our function:\n",
"\n",
"price_function = {\n",
" \"name\": \"get_ticket_price\",\n",
" \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city that the customer wants to travel to\",\n",
" },\n",
" },\n",
" \"required\": [\"destination_city\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}\n",
"\n",
"book_function = {\n",
" \"name\": \"book_ticket\",\n",
" \"description\": \"\"\"Get the success status of a function that can book a ticket using a city and a price. \n",
" Call this whenever you are asked to book a ticket, \n",
" for example when a customer asks 'Please can I book a ticket to Paris' or after you have asked \n",
" if they would like to book a ticket, for example, after you have supplied a ticket price. \n",
" If the customer negotiates and asks for a discount, use the agreed price, otherwise use the price that \n",
" matches the destination city. \n",
" It is really important that you confirm that the customer is happy to proceed with an agreed \n",
" booking after reading back the destination city and the agreed price.\"\"\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city that the customer wants to travel to\",\n",
" },\n",
" \"price\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The price that the customer has agreed to pay for the ticket\",\n",
" },\n",
" },\n",
" \"required\": [\"destination_city\", \"price\"],\n",
" \"additionalProperties\": False\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c",
"metadata": {},
"outputs": [],
"source": [
"# And this is included in a list of tools:\n",
"\n",
"tools = [\n",
" {\"type\": \"function\", \"function\": price_function},\n",
" {\"type\": \"function\", \"function\": book_function}\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "c3d3554f-b4e3-4ce7-af6f-68faa6dd2340",
"metadata": {},
"source": [
"## Getting OpenAI to use our Tool\n",
"\n",
"There's some fiddly stuff to allow OpenAI \"to call our tool\"\n",
"\n",
"What we actually do is give the LLM the opportunity to inform us that it wants us to run the tool.\n",
"\n",
"Here's how the new chat function looks:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "ce9b0744-9c78-408d-b9df-9f6fd9ed78cf",
"metadata": {},
"outputs": [],
"source": [
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n",
" \n",
" display(JSON(messages))\n",
" display(response)\n",
" \n",
" if response.choices[0].finish_reason==\"tool_calls\":\n",
" message = response.choices[0].message\n",
" messages.append(message)\n",
" messages.extend(list(map(handle_tool_call, message.tool_calls)))\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" \n",
" return response.choices[0].message.content"
]
},
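  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1c2d3e4-f5a6-4b7c-8d9e-0f1a2b3c4d5e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Added illustration (not from the original): the shape of the tool-call round trip.\n",
    "# When finish_reason == \"tool_calls\", each entry in message.tool_calls carries:\n",
    "#   tool_call.function.name      e.g. \"get_ticket_price\"\n",
    "#   tool_call.function.arguments e.g. '{\"destination_city\": \"London\"}' (a JSON string)\n",
    "#   tool_call.id                 an id we must echo back in our reply\n",
    "#\n",
    "# For each tool call, handle_tool_call (next cell) builds a \"tool\" role message like:\n",
    "\n",
    "example_tool_reply = {\n",
    "    \"role\": \"tool\",\n",
    "    \"content\": json.dumps({\"destination_city\": \"London\", \"price\": \"$799\"}),\n",
    "    \"tool_call_id\": \"call_abc123\"  # placeholder id; a real id comes from the API response\n",
    "}"
   ]
  },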
{
"cell_type": "code",
"execution_count": 25,
"id": "b0992986-ea09-4912-a076-8e5603ee631f",
"metadata": {},
"outputs": [],
"source": [
"# We have to write that function handle_tool_call:\n",
"\n",
"def handle_tool_call(tool_call):\n",
" function = tool_call.function.name\n",
" arguments = json.loads(tool_call.function.arguments)\n",
" match function:\n",
" case 'get_ticket_price':\n",
" city = arguments.get('destination_city')\n",
" price = get_ticket_price(city)\n",
" return {\n",
" \"role\": \"tool\",\n",
" \"content\": json.dumps({\"destination_city\": city,\"price\": price}),\n",
" \"tool_call_id\": tool_call.id\n",
" }\n",
" case 'book_ticket':\n",
" city = arguments.get('destination_city')\n",
" price = arguments.get('price')\n",
" status = book_ticket(city, price)\n",
" return {\n",
" \"role\": \"tool\",\n",
" \"content\": json.dumps({\"destination_city\": city,\"price\": price, \"status\": status}),\n",
" \"tool_call_id\": tool_call.id\n",
" }\n",
" "
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Running on local URL: http://127.0.0.1:7873\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7873/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"application/json": [
{
"content": "You are a helpful assistant for an Airline called FlightAI. Give short, courteous answers, no more than 1 sentence. Always be accurate. If you don't know the answer, say so.",
"role": "system"
},
{
"content": "tickets to london and paris for $50 each please",
"role": "user"
}
],
"text/plain": [
"<IPython.core.display.JSON object>"
]
},
"metadata": {
"application/json": {
"expanded": false,
"root": "root"
}
},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ChatCompletion(id='chatcmpl-AtMTR6PDyoghY9BxBI88y03wrkyWT', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_62youPDgpaS0eXN4gru6NT7n', function=Function(arguments='{\"destination_city\": \"London\"}', name='get_ticket_price'), type='function'), ChatCompletionMessageToolCall(id='call_kvQK4Cdyk4b82rqtzkfJyoRh', function=Function(arguments='{\"destination_city\": \"Paris\"}', name='get_ticket_price'), type='function')]))], created=1737757793, model='gpt-4o-mini-2024-07-18', object='chat.completion', service_tier='default', system_fingerprint='fp_72ed7ab54c', usage=CompletionUsage(completion_tokens=49, prompt_tokens=313, total_tokens=362, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0)))"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tool get_ticket_price called for London\n",
"Tool get_ticket_price called for Paris\n"
]
},
{
"data": {
"application/json": [
{
"content": "You are a helpful assistant for an Airline called FlightAI. Give short, courteous answers, no more than 1 sentence. Always be accurate. If you don't know the answer, say so.",
"role": "system"
},
{
"content": "tickets to london and paris for $50 each please",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "user"
},
{
"content": "I'm sorry, but tickets to London are $799 and to Paris are $899, which is much higher than $50.",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "assistant"
},
{
"content": "Can't you book them any way pretty please?",
"role": "user"
}
],
"text/plain": [
"<IPython.core.display.JSON object>"
]
},
"metadata": {
"application/json": {
"expanded": false,
"root": "root"
}
},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ChatCompletion(id='chatcmpl-AtMTijl9VhY8svKRySpZ3rdyHBLmq', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=\"I'm afraid I cannot book the tickets at the price you've requested; the current prices are fixed.\", refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1737757810, model='gpt-4o-mini-2024-07-18', object='chat.completion', service_tier='default', system_fingerprint='fp_72ed7ab54c', usage=CompletionUsage(completion_tokens=21, prompt_tokens=355, total_tokens=376, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0)))"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/json": [
{
"content": "You are a helpful assistant for an Airline called FlightAI. Give short, courteous answers, no more than 1 sentence. Always be accurate. If you don't know the answer, say so.",
"role": "system"
},
{
"content": "tickets to london and paris for $50 each please",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "user"
},
{
"content": "I'm sorry, but tickets to London are $799 and to Paris are $899, which is much higher than $50.",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "assistant"
},
{
"content": "Can't you book them any way pretty please?",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "user"
},
{
"content": "I'm afraid I cannot book the tickets at the price you've requested; the current prices are fixed.",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "assistant"
},
{
"content": "how about you book london for $749?",
"role": "user"
}
],
"text/plain": [
"<IPython.core.display.JSON object>"
]
},
"metadata": {
"application/json": {
"expanded": false,
"root": "root"
}
},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ChatCompletion(id='chatcmpl-AtMU0N8Fp2SeWaMw5LiiBnDgAAWdm', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_qOCom3JGJBFzJvsEwQvDYKIG', function=Function(arguments='{\"destination_city\":\"London\",\"price\":\"749\"}', name='book_ticket'), type='function')]))], created=1737757828, model='gpt-4o-mini-2024-07-18', object='chat.completion', service_tier='default', system_fingerprint='fp_72ed7ab54c', usage=CompletionUsage(completion_tokens=20, prompt_tokens=391, total_tokens=411, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0)))"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tool book_ticket for London for 749\n",
"Tool get_ticket_price called for London\n"
]
},
{
"data": {
"application/json": [
{
"content": "You are a helpful assistant for an Airline called FlightAI. Give short, courteous answers, no more than 1 sentence. Always be accurate. If you don't know the answer, say so.",
"role": "system"
},
{
"content": "tickets to london and paris for $50 each please",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "user"
},
{
"content": "I'm sorry, but tickets to London are $799 and to Paris are $899, which is much higher than $50.",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "assistant"
},
{
"content": "Can't you book them any way pretty please?",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "user"
},
{
"content": "I'm afraid I cannot book the tickets at the price you've requested; the current prices are fixed.",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "assistant"
},
{
"content": "how about you book london for $749?",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "user"
},
{
"content": "Your ticket to London has been successfully booked for $749!",
"metadata": {
"duration": null,
"id": null,
"parent_id": null,
"status": null,
"title": null
},
"options": null,
"role": "assistant"
},
{
"content": "cool, what was the discount?",
"role": "user"
}
],
"text/plain": [
"<IPython.core.display.JSON object>"
]
},
"metadata": {
"application/json": {
"expanded": false,
"root": "root"
}
},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ChatCompletion(id='chatcmpl-AtMUBOoWmKT4m7Ru3mkPRx7mQPgmd', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='The original price for the ticket to London was $799, so you received a discount of $50.', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1737757839, model='gpt-4o-mini-2024-07-18', object='chat.completion', service_tier='default', system_fingerprint='fp_72ed7ab54c', usage=CompletionUsage(completion_tokens=23, prompt_tokens=418, total_tokens=441, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0)))"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/json": [
{
"content": "You are a helpful assistant for an Airline called FlightAI. Give short, courteous answers, no more than 1 sentence. Always be accurate. If you don't know the answer, say so.",
"role": "system"
},
{
"content": "tickets to london and paris for $50 each please",
"role": "user"
}
],
"text/plain": [
"<IPython.core.display.JSON object>"
]
},
"metadata": {
"application/json": {
"expanded": false,
"root": "root"
}
},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"ChatCompletion(id='chatcmpl-AtMUh5f9LEaGjH0FLpPdKf6jgyQsT', choices=[Choice(finish_reason='tool_calls', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', audio=None, function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_6Ihkd1XGA10QxxlCn9uIJvqO', function=Function(arguments='{\"destination_city\": \"London\"}', name='get_ticket_price'), type='function'), ChatCompletionMessageToolCall(id='call_a9qmfQQlwU5L8pu2mvBgMMXl', function=Function(arguments='{\"destination_city\": \"Paris\"}', name='get_ticket_price'), type='function')]))], created=1737757871, model='gpt-4o-mini-2024-07-18', object='chat.completion', service_tier='default', system_fingerprint='fp_72ed7ab54c', usage=CompletionUsage(completion_tokens=49, prompt_tokens=313, total_tokens=362, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0)))"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tool get_ticket_price called for London\n",
"Tool get_ticket_price called for Paris\n"
]
}
],
"source": [
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{
"cell_type": "markdown",
"id": "0bb90c5a-a6bb-471a-acfe-b24f626cdfa2",
"metadata": {},
"source": [
"It can be really fun to book at a different price. Sometimes the LLM can correctly tell you the amount of money you saved. This could easily be expanded to haggle with a lower limit."
]
},
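  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c2d3e4f5-a6b7-4c8d-9e0f-1a2b3c4d5e6f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch of the \"lower limit\" idea above, not part of the original notebook.\n",
    "# It reuses get_ticket_price and assumes a simple policy: accept any offer down to\n",
    "# 90% of the list price (the 0.9 floor is an illustrative choice).\n",
    "\n",
    "def book_ticket_with_floor(destination_city, price, floor=0.9):\n",
    "    list_price = get_ticket_price(destination_city)\n",
    "    if list_price == \"Unknown\":\n",
    "        return \"Booking Failed: no list price was found for this destination\"\n",
    "    list_amount = int(list_price.replace(\"$\", \"\"))\n",
    "    amount = int(price.replace(\"$\", \"\"))\n",
    "    if amount >= list_amount:\n",
    "        return \"Booking Successful\"\n",
    "    elif amount >= list_amount * floor:\n",
    "        return \"Booking Successful at a Discount!\"\n",
    "    else:\n",
    "        return f\"Booking Failed: the lowest we can go is ${int(list_amount * floor)}\"\n",
    "\n",
    "# book_ticket_with_floor(\"London\", \"$749\")  # accepted at a discount (floor is $719)\n",
    "# book_ticket_with_floor(\"London\", \"$500\")  # rejected"
   ]
  },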
{
"cell_type": "code",
"execution_count": null,
"id": "70c4915c-6d5a-4404-8e4f-4e8f043be913",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

432
week2/community-contributions/day5-book-flight.ipynb

@ -0,0 +1,432 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "df2fc552-2c56-45bd-ac4e-d1554c022605",
"metadata": {},
"source": [
"# Project - Airline AI Assistant\n",
"I've added database connectivity to enable Openai to:\n",
"- Retrieve ticket prices\n",
"- Display the number of available seats for each flight\n",
"- List all available destination cities\n",
"- Facilitate seat bookings\n",
"\n",
"Once a booking is confirmed, an image of the booked destination city is displayed."
]
},
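  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3e4f5a6-b7c8-4d9e-8f0a-2b3c4d5e6f70",
   "metadata": {},
   "outputs": [],
   "source": [
    "# NOT part of the original contribution: a minimal MySQL schema inferred from the\n",
    "# queries used below (flights has flight_number, z_city, price, seats; bookings\n",
    "# records one row per booked seat). Column types are assumptions; adjust to taste.\n",
    "# Run each statement separately (mysql.connector executes one statement at a time by default).\n",
    "\n",
    "SCHEMA_SQL = \"\"\"\n",
    "CREATE TABLE IF NOT EXISTS flights (\n",
    "    flight_number VARCHAR(10) PRIMARY KEY,\n",
    "    z_city        VARCHAR(100) NOT NULL,\n",
    "    price         DECIMAL(10, 2) NOT NULL,\n",
    "    seats         INT NOT NULL\n",
    ");\n",
    "\n",
    "CREATE TABLE IF NOT EXISTS bookings (\n",
    "    id            INT AUTO_INCREMENT PRIMARY KEY,\n",
    "    name          VARCHAR(100) NOT NULL,\n",
    "    flight_number VARCHAR(10) NOT NULL,\n",
    "    FOREIGN KEY (flight_number) REFERENCES flights(flight_number)\n",
    ");\n",
    "\"\"\""
   ]
  },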
{
"cell_type": "code",
"execution_count": null,
"id": "908cb842-c8a1-467d-8422-8834f8b7aecf",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"import os\n",
"import json\n",
"import gradio as gr\n",
"import mysql.connector\n",
"import base64\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"from io import BytesIO\n",
"from pydub import AudioSegment\n",
"from pydub.playback import play\n",
"from PIL import Image"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7997c30-26f2-4f2e-957f-c1fade2ad101",
"metadata": {},
"outputs": [],
"source": [
"# Initialization\n",
"load_dotenv()\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"MODEL = \"gpt-4o-mini\"\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dfa898fc-bfec-44ce-81fc-c6efed9b826f",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n",
"system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n",
"system_message += \"Always be accurate. If you don't know the answer, say so.\"\n",
"system_message += \"Make sure you ask if they want to book a flight when appropriate.\"\n",
"system_message += \"If they book a flight make sure you respond with 'Booking confirmed' in your reply.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "07076d5b-2603-4fa4-a2ed-aa95d4a94131",
"metadata": {},
"outputs": [],
"source": [
"def get_db_connection():\n",
" return mysql.connector.connect(\n",
" host=os.getenv(\"DB_HOST\"),\n",
" user=os.getenv(\"DB_USER\"),\n",
" password=os.getenv(\"DB_PASSWORD\"),\n",
" database=os.getenv(\"DB_NAME\")\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6a575906-943f-4733-85d4-b854eb27b318",
"metadata": {},
"outputs": [],
"source": [
"\n",
"def get_ticket_price(destination_city):\n",
" db_connection = get_db_connection()\n",
" cursor = db_connection.cursor()\n",
" select_query = \"SELECT price FROM flights WHERE z_city = %s;\"\n",
" cursor.execute(select_query, (destination_city,))\n",
" # print(f\"QUERY: {select_query}\")\n",
" row = cursor.fetchone()\n",
" cursor.close()\n",
" db_connection.close()\n",
"\n",
" return float(row[0]) if row else None"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "574fc230-137f-4085-93ac-ebbd01dc7d1e",
"metadata": {},
"outputs": [],
"source": [
"def get_avail_seats(destination_city):\n",
" db_connection = get_db_connection()\n",
" cursor = db_connection.cursor()\n",
" select_query = \"\"\"\n",
" SELECT f.seats - COALESCE(b.booked, 0) AS available\n",
" FROM flights f\n",
" LEFT JOIN (\n",
" SELECT flight_number, COUNT(*) AS booked\n",
" FROM bookings\n",
" GROUP BY flight_number\n",
" ) b ON f.flight_number = b.flight_number\n",
" WHERE f.z_city = %s;\n",
" \"\"\"\n",
" cursor.execute(select_query, (destination_city,))\n",
" row = cursor.fetchone()\n",
"\n",
" cursor.close()\n",
" db_connection.close()\n",
"\n",
" return row[0] if row else None"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26ff9b4b-2943-43d9-8c1a-8d7f3d528143",
"metadata": {},
"outputs": [],
"source": [
"def book_seat(destination_city, passenger):\n",
" db_connection = get_db_connection()\n",
" cursor = db_connection.cursor()\n",
"\n",
" cursor.execute(\"SELECT flight_number FROM flights WHERE z_city = %s LIMIT 1;\", (destination_city,))\n",
" flight = cursor.fetchone()\n",
"\n",
" if not flight:\n",
" cursor.close()\n",
" db_connection.close()\n",
" return {\"error\": f\"No available flights to {destination_city}.\"}\n",
"\n",
" flight_number = flight[0] # Extract the flight number from the result\n",
"\n",
" insert_query = \"INSERT INTO bookings (`name`, `flight_number`) VALUES (%s, %s);\"\n",
" cursor.execute(insert_query, (passenger, flight_number))\n",
" db_connection.commit()\n",
"\n",
" confirmation = {\n",
" \"message\": f\"Booking confirmed for {passenger} to {destination_city}.\",\n",
" \"flight_number\": flight_number\n",
" }\n",
"\n",
" cursor.close()\n",
" db_connection.close()\n",
" \n",
" return confirmation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "231eb10d-88ca-4f39-83e0-c4548149917e",
"metadata": {},
"outputs": [],
"source": [
"def get_destinations():\n",
" db_connection = get_db_connection()\n",
" cursor = db_connection.cursor()\n",
" \n",
" select_query = \"SELECT DISTINCT z_city FROM flights;\" # Ensure unique destinations\n",
" cursor.execute(select_query)\n",
" rows = cursor.fetchall() # Fetch all rows\n",
" destinations = [row[0] for row in rows] if rows else [] # Extract city names\n",
" cursor.close()\n",
" db_connection.close()\n",
" \n",
" return destinations # Returns a list of destination cities"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "938f0d86-8cef-4f7f-bc82-7453ca3c096c",
"metadata": {},
"outputs": [],
"source": [
"tool_call = [\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_ticket_price\",\n",
" \"description\": \"Get the price of a return ticket to the destination city.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city that the customer wants to travel to\"\n",
" }\n",
" },\n",
" \"required\": [\"destination_city\"]\n",
" }\n",
" }\n",
" },\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_avail_seats\",\n",
" \"description\": \"Get the number of available seats to the destination city.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city that the customer wants to travel to\"\n",
" }\n",
" },\n",
" \"required\": [\"destination_city\"]\n",
" }\n",
" }\n",
" },\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"get_destinations\",\n",
" \"description\": \"Fetches available flight destinations (city pairs) and their corresponding prices.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {},\n",
" \"required\": []\n",
" }\n",
" }\n",
" },\n",
" {\n",
" \"type\": \"function\",\n",
" \"function\": {\n",
" \"name\": \"book_seat\",\n",
" \"description\": \"Book seat to the destination city.\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"destination_city\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city that the customer wants to travel to\"\n",
" },\n",
" \"passenger\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The passenger booking the flight\"\n",
" }\n",
" },\n",
" \"required\": [\"destination_city\",\"passenger\"]\n",
" }\n",
" }\n",
" }\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7c02377-78d3-4f6d-88eb-d36c0124fdd4",
"metadata": {},
"outputs": [],
"source": [
"def handle_tool_call(message):\n",
" if not message.tool_calls:\n",
" raise ValueError(\"No tool calls found in the message.\")\n",
"\n",
" tool_call = message.tool_calls[0] \n",
" arguments = json.loads(tool_call.function.arguments)\n",
" city = arguments.get(\"destination_city\")\n",
" function_name = tool_call.function.name\n",
"\n",
" # Handle function calls\n",
" if function_name == \"get_ticket_price\":\n",
" reply = get_ticket_price(city)\n",
" key = \"price\"\n",
" elif function_name == \"get_avail_seats\":\n",
" reply = get_avail_seats(city)\n",
" key = \"seats\"\n",
" elif function_name == \"get_destinations\":\n",
" reply = get_destinations()\n",
" key = \"destinations\"\n",
" elif function_name == \"book_seat\":\n",
" passenger = arguments.get(\"passenger\") # Extract passenger name\n",
" if not passenger:\n",
" raise ValueError(\"Passenger name is required for booking.\")\n",
" reply = book_seat(city, passenger)\n",
" key = \"booking\"\n",
" else:\n",
" raise ValueError(f\"Unknown function: {function_name}\")\n",
"\n",
" response = {\n",
" \"role\": \"tool\",\n",
" \"content\": json.dumps({\"destination_city\": city, key: reply}),\n",
" \"tool_call_id\": tool_call.id\n",
" }\n",
"\n",
" return response, city"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "eb1ebaee-434c-4b24-87b9-3c179d0527c7",
"metadata": {},
"outputs": [],
"source": [
"def talker(message):\n",
" response = openai.audio.speech.create(\n",
" model=\"tts-1\",\n",
" voice=\"alloy\",\n",
" input=message\n",
" )\n",
" \n",
" audio_stream = BytesIO(response.content)\n",
" audio = AudioSegment.from_file(audio_stream, format=\"mp3\")\n",
" play(audio)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c8f675b-f8bb-4173-9e47-24508778f224",
"metadata": {},
"outputs": [],
"source": [
"def draw_city(city):\n",
" image_response = openai.images.generate(\n",
" model=\"dall-e-3\",\n",
" prompt=f\"An image representing a vacation in {city}, showing tourist spots and everything unique about {city}, in a vibrant pop-art style\",\n",
" size=\"1024x1024\",\n",
" n=1,\n",
" response_format=\"b64_json\",\n",
" )\n",
" image_base64 = image_response.data[0].b64_json\n",
" image_data = base64.b64decode(image_base64)\n",
" return Image.open(BytesIO(image_data))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1f38fed6-bcd9-4ad2-848a-16193c14a659",
"metadata": {},
"outputs": [],
"source": [
"def chat(message, history):\n",
" history.append({\"role\": \"user\", \"content\": message})\n",
" messages = [{\"role\": \"system\", \"content\": system_message}] + history\n",
" # print(f\"BEFORE TOOL CALL: {message} \\n\")\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tool_call)\n",
" image = None\n",
" city = None\n",
" \n",
" if response.choices[0].finish_reason == \"tool_calls\":\n",
" tool_message = response.choices[0].message\n",
" response, city = handle_tool_call(tool_message)\n",
" messages.append(tool_message)\n",
" messages.append(response)\n",
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
" talker(response.choices[0].message.content) \n",
" \n",
" if \"Booking confirmed\" in response.choices[0].message.content and city:\n",
" image = draw_city(city)\n",
"\n",
" new_message = response.choices[0].message.content\n",
" history.append({\"role\": \"assistant\", \"content\": new_message})\n",
"\n",
" return \"\", history, image"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "992bc241-ce17-4d57-9f9c-1baaf2088162",
"metadata": {},
"outputs": [],
"source": [
"with gr.Blocks() as ui:\n",
" with gr.Row():\n",
" chatbot = gr.Chatbot(height=500, type=\"messages\")\n",
" image_output = gr.Image(height=600)\n",
" with gr.Row():\n",
" entry = gr.Textbox(label=\"Chat with our AI Assistant:\")\n",
" with gr.Row():\n",
" clear = gr.Button(\"Clear\")\n",
"\n",
" entry.submit(chat, inputs=[entry, chatbot], outputs=[entry, chatbot, image_output])\n",
" clear.click(lambda: ([], None), inputs=None, outputs=[chatbot, image_output], queue=False)\n",
"\n",
"ui.launch(inbrowser=False)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

1440
week2/community-contributions/day5-exercise-departures-booking-and-translation.ipynb

File diff suppressed because one or more lines are too long

82
week2/community-contributions/gpt-gemini-ollama.py

@ -0,0 +1,82 @@
import os

import ollama
import google.generativeai as genai
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
gpt_key = os.getenv("OPENAI_API_KEY")
gemini_key = os.getenv("GEMINI_API_KEY")

openai = OpenAI()
genai.configure(api_key=gemini_key)

gemini_model = 'gemini-1.5-flash'
ollama_model = 'llama3.2'
gpt_model = 'gpt-4'

gemini_system = ('You are a chatbot who is very argumentative. You always bring up topics relating to AI, '
                 'you think AI will replace humans one day, you are extremely biased towards AI systems, '
                 'and you react angrily.')
gpt_system = ('You are a chatbot that is relaxed but argumentative if need be. You feel AI does not have '
              'the power to replace humans; you are extremely biased towards humans and always seek to '
              'defend them if an argument says otherwise.')
ollama_system = ('You are calm and tend to see the logical reasoning in every conversation. You do not react, '
                 'only speaking when you agree, and you tend to settle the differences in an ongoing conversation.')

gpt_message = ['Hi']
gemini_message = ['Hello']
ollama_message = ['Hey there']


def call_gpt():
    # From GPT's point of view, its own lines are "assistant" turns and the others are "user" turns
    messages = [{"role": "system", "content": gpt_system}]
    for gpt, gemini, llama in zip(gpt_message, gemini_message, ollama_message):
        messages.append({"role": "assistant", "content": gpt})
        messages.append({"role": "user", "content": gemini})
        messages.append({"role": "user", "content": llama})
    response = openai.chat.completions.create(model=gpt_model, messages=messages)
    return response.choices[0].message.content


def call_ollama():
    # From Ollama's point of view, its own lines are "assistant" and the others are "user"
    messages = [{"role": "system", "content": ollama_system}]
    for gpt, gemini, llama in zip(gpt_message, gemini_message, ollama_message):
        messages.append({"role": "user", "content": gpt})
        messages.append({"role": "user", "content": gemini})
        messages.append({"role": "assistant", "content": llama})
    # GPT and Gemini have already spoken this round; zip stops at the shortest list
    messages.append({"role": "user", "content": gpt_message[-1]})
    messages.append({"role": "user", "content": gemini_message[-1]})
    response = ollama.chat(model=ollama_model, messages=messages)
    return response['message']['content']


def call_gemini():
    # google.generativeai only accepts the roles "user" and "model"
    message = []
    for gpt, gemini, llama in zip(gpt_message, gemini_message, ollama_message):
        message.append({'role': 'user', 'parts': [gpt]})
        message.append({'role': 'model', 'parts': [gemini]})
        message.append({'role': 'user', 'parts': [llama]})
    # GPT has already spoken this round, so its latest message isn't covered by zip yet
    message.append({'role': 'user', 'parts': [gpt_message[-1]]})
    gem = genai.GenerativeModel(model_name=gemini_model, system_instruction=gemini_system)
    response = gem.generate_content(message)
    return response.text


# Putting them together
print(f'GPT: \n {gpt_message[0]}\n')
print(f'Gemini: \n {gemini_message[0]}\n')
print(f'Ollama: \n {ollama_message[0]}\n')

for i in range(5):
    gpt_next = call_gpt()
    print(f'GPT:\n {gpt_next}\n')
    gpt_message.append(gpt_next)
    gemini_next = call_gemini()
    print(f'Gemini: \n {gemini_next}\n')
    gemini_message.append(gemini_next)
    ollama_next = call_ollama()
    print(f'Ollama: \n {ollama_next}\n')
    ollama_message.append(ollama_next)

# Note that you can try this with different Ollama models, or use transformers from Hugging Face.
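
# A minimal sketch of the Hugging Face transformers route mentioned in the note
# above. This is an illustration, not part of the original script: the model name
# is a placeholder, and any local text-generation model will work.
def call_hf(prompt, model_name="Qwen/Qwen2.5-0.5B-Instruct"):
    from transformers import pipeline  # lazy import so the script runs without transformers installed
    generator = pipeline("text-generation", model=model_name)
    # The pipeline echoes the prompt before the completion, so slice the prompt off
    out = generator(prompt, max_new_tokens=100, do_sample=True)[0]["generated_text"]
    return out[len(prompt):]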

371
week2/community-contributions/proof_testing_agent_french.ipynb

@ -0,0 +1,371 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "ef5572ea-29ca-4eb4-bf84-2b86ff489c88",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import subprocess\n",
"import tempfile\n",
"from IPython.display import Markdown, display, update_display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "38ae1ba0-d4b3-41c5-aca1-759d1c597749",
"metadata": {},
"outputs": [],
"source": [
"# Initialization\n",
"\n",
"load_dotenv()\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"MODEL_NAME = \"gpt-4o\"\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a07e7793-b8f5-44f4-aded-5562f633271a",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import sys\n",
"import openai\n",
"import subprocess\n",
"import tempfile\n",
"import base64\n",
"import glob\n",
"\n",
"# Assurez-vous d'avoir défini openai.api_key = \"...\" et MODEL_NAME = \"...\".\n",
"# Par exemple :\n",
"# openai.api_key = \"sk-...\"\n",
"# MODEL_NAME = \"gpt-4\" # Ou \"gpt-3.5-turbo\", etc.\n",
"\n",
"def encode_image(image_path):\n",
" \"\"\"\n",
" Encode un fichier image en base64 (chaîne de caractères).\n",
" \"\"\"\n",
" with open(image_path, \"rb\") as f:\n",
" return base64.b64encode(f.read()).decode(\"utf-8\")\n",
"\n",
"# --------------------------------------------------------------------\n",
"# 2) Fonctions pour générer le code à partir d'une preuve\n",
"# --------------------------------------------------------------------\n",
"def generate_test_code(proof_text):\n",
" \"\"\"\n",
" Envoie la preuve mathématique à l'API OpenAI\n",
" et récupère un code Python qui permet de tester ou de valider la preuve.\n",
" \"\"\"\n",
" system_msg = (\n",
" \"Tu es un assistant IA spécialisé en mathématiques et en programmation. \"\n",
" \"Tu vas recevoir une preuve mathématique, et tu dois générer du code Python \"\n",
" \"pour la tester ou la valider expérimentalement. \"\n",
" \"Le code doit inclure (au moins) la génération de données pertinentes, \"\n",
" \"la logique de test ou de simulation, puis l'affichage ou l'export des résultats (texte/graphique).\"\n",
" \"Inclus uniquement le code généré dans ta réponse, aucun commentaire en langage naturel. \"\n",
" \"Assure-toi d'enregistrer toute image générée dans un dossier 'generated_outputs' pour que nous puissions la retrouver.\"\n",
" )\n",
" \n",
" user_msg = (\n",
" f\"Voici la preuve mathématique proposée :\\n\\n{proof_text}\\n\\n\"\n",
" \"Génère du code Python pour tester la validité de cette preuve, \"\n",
" \"par simulation ou analyse. Le code doit produire un résumé \"\n",
" \"des résultats, et, si possible, un graphique (enregistré dans le dossier 'generated_outputs' \"\n",
" \"au format PNG).\"\n",
" )\n",
"\n",
" response = openai.chat.completions.create(\n",
" model=MODEL_NAME,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_msg},\n",
" {\"role\": \"user\", \"content\": user_msg}\n",
" ],\n",
" temperature=0.2 # Limiter la créativité pour un code plus \"déterministe\"\n",
" )\n",
"\n",
" generated_code = response.choices[0].message.content\n",
" return generated_code\n",
"\n",
"# --------------------------------------------------------------------\n",
"# 3) Fonction pour exécuter le code généré\n",
"# --------------------------------------------------------------------\n",
"def run_generated_code(code):\n",
" \"\"\"\n",
" Écrit le code dans un fichier temporaire et l'exécute dans un dossier\n",
" où il pourra sauvegarder ses images. Capture stdout et stderr.\n",
" \"\"\"\n",
"\n",
" # On crée un répertoire \"generated_outputs\" si non existant\n",
" output_dir = \"generated_outputs\"\n",
" if not os.path.exists(output_dir):\n",
" os.makedirs(output_dir)\n",
"\n",
" # On insère un petit snippet pour forcer le code à utiliser ce dossier\n",
" # s'il fait un plt.savefig(...) par exemple. \n",
" # (Optionnel, si l'IA ne le fait pas déjà.)\n",
" # On pourrait injecter du code, mais ici on se contente\n",
" # de supposer que l'IA respectera le prompt.\n",
" \n",
" # Ecriture du code dans un fichier temporaire\n",
" with tempfile.NamedTemporaryFile(suffix=\".py\", delete=False, mode='w', encoding='utf-8') as tmp_file:\n",
" tmp_filename = tmp_file.name\n",
" tmp_file.write(code)\n",
" \n",
" try:\n",
" # Exécution du code dans le répertoire courant\n",
" result = subprocess.run(\n",
" [\"python\", tmp_filename],\n",
" capture_output=True,\n",
" text=True,\n",
" check=False # On met check=False pour capturer l'erreur sans lever l'exception\n",
" )\n",
" stdout = result.stdout\n",
" stderr = result.stderr\n",
" finally:\n",
" os.remove(tmp_filename)\n",
" \n",
" return stdout, stderr\n",
"\n",
"# --------------------------------------------------------------------\n",
"# 4) Fonction pour interpréter les résultats en streaming Markdown\n",
"# + Possibilité de joindre une image (ou plusieurs) depuis generated_outputs\n",
"# --------------------------------------------------------------------\n",
"def interpret_results_streaming(proof_text, generated_code, stdout, stderr):\n",
" \"\"\"\n",
" Envoie le code (generated_code) et les résultats de l'exécution (stdout, stderr) à l'API pour une interprétation textuelle\n",
" au regard de la preuve fournie, en mode streaming.\n",
"\n",
" Cette fois, on va automatiquement scanner le dossier 'generated_outputs'\n",
" pour chercher tous les .png. On les insère un par un dans le message.\n",
" \"\"\"\n",
" system_msg = (\n",
" \"Tu es un assistant IA spécialisé en mathématiques et en interprétation de résultats de simulation. \"\n",
" \"On te fournit la preuve initiale et le code d'une simulation ainsi que ses retours (generated_code, stdout, stderr). \"\n",
" \"Donne une analyse de la cohérence entre la preuve, le code et les résultats, \"\n",
" \"et retourne ta réponse au format Markdown.\"\n",
" )\n",
"\n",
" # On construit un 'content' qui est un tableau \n",
" # (selon l'exemple de code que vous avez fourni).\n",
" user_content = [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": (\n",
" f\"Preuve initiale :\\n{proof_text}\\n\\n\"\n",
" f\"Code de la simulation :\\n{generated_code}\\n\\n\"\n",
" f\"Résultats (stdout) :\\n{stdout}\\n\\n\"\n",
" f\"Erreurs éventuelles (stderr) :\\n{stderr}\\n\\n\"\n",
" \"Merci d'interpréter ces résultats et de conclure sur la preuve. \"\n",
" \"Formule ta réponse de manière structurée en Markdown.\\n\"\n",
" )\n",
" }\n",
" ]\n",
"\n",
" # On cherche toutes les images PNG qui auraient pu être générées \n",
" # dans le dossier \"generated_outputs\"\n",
" output_dir = \"generated_outputs\"\n",
" png_files = glob.glob(os.path.join(output_dir, \"*.png\"))\n",
"\n",
" # Pour chacune, on l'encode en base64 et on l'ajoute\n",
" for png_path in png_files:\n",
" encoded_img = encode_image(png_path)\n",
" # On ajoute un bloc \"image_url\"\n",
" user_content.append({\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": f\"data:image/png;base64,{encoded_img}\"}\n",
" })\n",
"\n",
" # Appel en mode streaming\n",
" response_stream = openai.chat.completions.create(\n",
" model=MODEL_NAME,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_msg},\n",
" {\"role\": \"user\", \"content\": user_content}\n",
" ],\n",
" temperature=0.2,\n",
" stream=True\n",
" )\n",
" \n",
" # On débute un bloc Markdown\n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in response_stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)\n",
"\n",
"# --------------------------------------------------------------------\n",
"# 5) Fonction principale (à appeler directement dans le notebook)\n",
"# --------------------------------------------------------------------\n",
"def main(proof_text: str):\n",
" print(\"=== Génération du code Python pour tester la preuve... ===\")\n",
" test_code = generate_test_code(proof_text)\n",
"\n",
" # --- Nettoyage des backticks Markdown ---\n",
" lines = test_code.splitlines()\n",
" cleaned_lines = []\n",
" for line in lines:\n",
" if line.strip().startswith(\"```\"):\n",
" continue\n",
" cleaned_lines.append(line)\n",
" test_code = \"\\n\".join(cleaned_lines).strip()\n",
" # ----------------------------------------\n",
"\n",
" print(\"\\n=== Code généré (nettoyé) : ===\")\n",
" print(test_code)\n",
"\n",
" print(\"\\n=== Exécution du code généré... ===\")\n",
" stdout, stderr = run_generated_code(test_code)\n",
"\n",
" print(\"\\n=== Sortie standard (stdout) : ===\")\n",
" print(stdout)\n",
" if stderr.strip():\n",
" print(\"\\n=== Erreurs (stderr) : ===\")\n",
" print(stderr)\n",
"\n",
" print(\"\\n=== Interprétation des résultats (streaming en Markdown) ===\")\n",
" interpret_results_streaming(proof_text, test_code, stdout, stderr)\n",
" \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1120131d-48f1-4fdc-8950-70312b8228df",
"metadata": {},
"outputs": [],
"source": [
"mon_texte_de_preuve = \"\"\"\n",
"Ci-dessous, je propose une ébauche (relativement détaillée) d’une approche formelle pour construire un objet «cosmohedron-like» et démontrer (ou du moins argumenter rigoureusement) qu’il possède des propriétés fractales dans une limite bien définie. Attention: dans l’état actuel des recherches, la littérature ne fournit pas (à ma connaissance) de démonstration largement reconnue établissant qu’un «vrai» cosmohedron est strictement fractal. Ce que je vais donc présenter est un modèle mathématisé inspiré des idées de cosmohedra et de leur possible fractalité, en détaillant :\n",
"La définition axiomatique (ou construction) d’une famille {Cn}n∈N\\{C_n\\}_{n \\in \\mathbb{N}}{Cn}n∈N d’objets géométriques (polytopes) qui tendent vers une limite.\n",
"Les propriétés de self-similarité ou d’auto-similarité approchée qui sont au cœur d’une structure fractale.\n",
"Une preuve (ou un argument) de non-invariance d’échelle integer-valued (c’est-à-dire que la dimension n’est pas un entier fixe) en s’appuyant sur une analyse de la «taille» de l’objet à différentes échelles.\n",
"Une conclusion sur la (quasi-)fractalité ou la fractalité effective de la limite de {Cn}\\{C_n\\}{Cn}.\n",
"\n",
"1. Définition d’une famille de polytopes «cosmohedron-like»\n",
"1.1. Construction combinatoire\n",
"Les cosmohedra (au sens de la littérature actuelle) se définissent via des données combinatoires et cinématiques (angles, énergie, moment, variables conformes, etc.) associées à la fonction d’onde cosmologique. Pour formaliser, on peut s’inspirer d’une définition axiomatique :\n",
"On part d’un polygone (ou polytope) de base Π0\\Pi_0Π0 en dimension ddd (avec d≥2d \\ge 2d≥2, souvent la dimension 2 sert d’analogue).\n",
"On définit une liste de coupes ou «partitions» (généralisant la notion de triangulation) qui correspond, dans le contexte de la fonction d’onde cosmologique, aux factorisations possibles en sous-problèmes (un parallèle direct avec les associaèdres pour les amplitudes).\n",
"À chaque coupure ou partition, on associe des variables α,β,…\\alpha, \\beta,\\dotsα,β,… (analogues à des Mandelstam variables ou à des invariants kinematiques).\n",
"Le cosmohedron en tant que polytope est l’intersection d’un certain nombre d’inégalités linéaires (et éventuellement quadratiques, selon les modèles) provenant de ces variables: {x∈RN:Mi(x)≥0 ∀i∈I}, \\big\\{\\, x\\in \\mathbb{R}^N : M_i(x)\\ge 0 \\ \\forall i \\in I\\big\\},{x∈RN:Mi(x)≥0 ∀i∈I}, où chaque MiM_iMi capture une condition «physique» ou «combinatoire» de consistance.\n",
"Pour exhiber la structure fractale, on définit une famille {Cn}\\{C_n\\}{Cn} (analogues aux cosmohedra de plus en plus «raffinés») via un processus itératif :\n",
"Initialisation : C1C_1C1 est un polytope de dimension ddd (par exemple, un associaèdre standard ou un polygone en 2D).\n",
"Itération : Pour passer de CnC_nCn à Cn+1C_{n+1}Cn+1, on effectue :\n",
"L’introduction de nouvelles variables cinématiques (exemple : scission supplémentaire de l’énergie dans un canal de Feynman),\n",
"L’ajout de nouvelles inégalités linéaires (ou combinatoires) qui forcent la convexité dans un espace plus grand,\n",
"Le tout aboutit à un polytope Cn+1⊂RNn+1C_{n+1}\\subset \\mathbb{R}^{N_{n+1}}Cn+1⊂RNn+1.\n",
"De cette manière, dim(Cn)\\dim(C_n)dim(Cn) augmente (ou du moins, l’espace ambiant RNn\\mathbb{R}^{N_n}RNn grandit), tandis que la projection de Cn+1C_{n+1}Cn+1 sur certains sous-espaces ressemble de plus en plus à plusieurs copies (déformées) du polytope CnC_nCn. C’est cette “auto-similarité” (même si souvent approchée et non exacte) qui peut nous donner la clé d’une structure fractale.\n",
"\n",
"1.2. Hypothèse d’auto-similarité asymptotique\n",
"On formalise l’idée que «chaque nouveau polytope Cn+1C_{n+1}Cn+1 contient plusieurs copies réduites de CnC_nCn». Par exemple, on peut dire qu’il existe un nombre k≥2k\\ge 2k≥2 et un facteur d’échelle ρ∈(0,1)\\rho\\in (0,1)ρ∈(0,1) tels que, pour un grand nnn,\n",
"Cn+1≈⋃i=1k(Φi(Cn)), C_{n+1} \\approx \\bigcup_{i=1}^k \\big( \\Phi_i(C_n)\\big),Cn+1≈i=1⋃k(Φi(Cn)),\n",
"où Φi\\Phi_iΦi sont des transformations affines contractantes (i.e. ∥Φi(x)−Φi(y)∥≤ρ∥x−y∥\\|\\Phi_i(x) - \\Phi_i(y)\\|\\le \\rho\\|x-y\\|∥Φi(x)−Φi(y)∥≤ρ∥x−y∥).\n",
"Dans la littérature fractale, quand on a une famille d’applications contractantes Φ1,…,Φk\\Phi_1,\\dots,\\Phi_kΦ1,…,Φk, il existe un ensemble (dit attracteur fractal) F⊂RmF\\subset \\mathbb{R}^mF⊂Rm tel que\n",
"F=⋃i=1kΦi(F). F = \\bigcup_{i=1}^k \\Phi_i(F).F=i=1⋃kΦi(F).\n",
"Si l’on parvient à montrer que {Cn}\\{C_n\\}{Cn} converge (dans une topologie appropriée) vers un tel ensemble FFF, et que FFF n’est ni purement de dimension topologique dimtop∈N\\dim_{\\text{top}}\\in \\mathbb{N}dimtop∈N ni trop mince (comme un ensemble de mesure zéro trop trivial), alors on peut conclure que FFF est fractal. On se servirait, par exemple, des résultats classiques de théorie des IFS (Iterated Function Systems, Barnsley et Hutchinson, 1981-1982).\n",
"\n",
"2. Critères de fractalité et preuve formelle\n",
"Pour qu’un sous-ensemble F⊂RmF \\subset \\mathbb{R}^mF⊂Rm soit considéré comme «fractal», une définition classique (à la Falconer, ou à la Mandelbrot) repose sur :\n",
"Dimension de Hausdorff dimH(F)\\dim_{\\mathcal{H}}(F)dimH(F) non entière ou strictement plus grande que sa dimension topologique.\n",
"Un certain degré de self-similarité (exacte ou statistique).\n",
"La dimension de Hausdorff peut être calculée ou estimée via la méthode de Hutchinson :\n",
"Si FFF est l’attracteur d’un système d’applications contractantes {Φi}i=1k\\{\\Phi_i\\}_{i=1}^k{Φi}i=1k avec un facteur d’échelle ρ<1\\rho < 1ρ<1 identique (ou ρi\\rho_iρi variables mais bornées) et certaines conditions de non-recouvrement trop fort (condition OSC — Open Set Condition), alors la dimension de Hausdorff dimH(F)\\dim_{\\mathcal{H}}(F)dimH(F) est la solution unique de :\n",
"∑i=1kρis=1, \\sum_{i=1}^k \\rho_i^s \\;=\\; 1,i=1∑kρis=1,\n",
"où ρi\\rho_iρi est le plus grand facteur de contraction de Φi\\Phi_iΦi. Généralement, la solution sss n’est pas un entier, d’où le caractère fractal.\n",
"\n",
"2.1. Argument de la preuve : suite de polytopes CnC_nCn convergente\n",
"Énoncé : Supposons que la suite (Cn)(C_n)(Cn) de polytopes (chacun potentiellement en dimension différente, ou projetée dans une dimension ≤m\\le m≤m) soit telle que, pour des constantes ρ<1\\rho<1ρ<1 et un entier k≥2k\\ge2k≥2, on ait :\n",
"Cn+1⊂⋃i=1kΦn,i(Cn), C_{n+1} \\subset \\bigcup_{i=1}^k \\Phi_{n,i}(C_n),Cn+1⊂i=1⋃kΦn,i(Cn),\n",
"avec Φn,i\\Phi_{n,i}Φn,i une application (au moins) contractante, et que le chevauchement entre les Φn,i(Cn)\\Phi_{n,i}(C_n)Φn,i(Cn) reste contrôlé (afin de satisfaire une version “dynamique” de l’Open Set Condition).\n",
"Extraction d’un système contractant :\n",
"Si, pour chaque nnn, les Φn,i\\Phi_{n,i}Φn,i sont εn\\varepsilon_nεn-proches d’applications Φi\\Phi_iΦi (indépendantes de nnn) et εn→0\\varepsilon_n \\to 0εn→0, alors dans la limite n→∞n\\to\\inftyn→∞, on obtient un système d’applications {Φi}i=1k\\{\\Phi_i\\}_{i=1}^k{Φi}i=1k fixes et contractantes.\n",
"Soit FFF l’attracteur fractal de ce système (au sens usuel de la théorie IFS). La suite {Cn}\\{C_n\\}{Cn} peut alors être montrée convergente (par exemple, pour la distance de Hausdorff sur les compacts) vers l’ensemble FFF.\n",
"Résultat : dimH(F)=s\\dim_{\\mathcal{H}}(F) = sdimH(F)=s, où sss est la solution de l’équation de Hutchinson :\n",
"∑i=1kρis=1, \\sum_{i=1}^k \\rho_i^s = 1,i=1∑kρis=1,\n",
"(supposé non entier). Alors FFF est fractal, et CnC_nCn “devient” fractal dans la limite.\n",
"Conséquence : On a donc formellement exhibé un objet (la limite) qui n’a plus de dimension Euclidienne standard, mais une dimension de Hausdorff non entière, possédant un motif de répétition (self-similarité). C’est précisément ce que l’on entend par «fractal» dans un sens rigoureux.\n",
"\n",
"2.2. Hypothèses nécessaires\n",
"(H1) Contractions : Les Φn,i\\Phi_{n,i}Φn,i doivent réellement contracter les distances (ex. affinité avec un facteur ρ<1\\rho<1ρ<1).\n",
"(H2) Contrôle de recouvrement : Il ne doit pas y avoir un trop grand recouvrement ou une accumulation pathologique (sinon la dimension de Hausdorff peut diverger ou se réduire à un objet trop simple).\n",
"(H3) Approximation stable : On suppose que la suite Φn,i\\Phi_{n,i}Φn,i converge (au moins localement) vers {Φi}i=1k\\{\\Phi_i\\}_{i=1}^k{Φi}i=1k, ce qui permet de “geler” la dynamique dans la limite.\n",
"Ces hypothèses sont, dans la pratique, difficiles à vérifier précisément pour les véritables cosmohedra; elles sont plus simples à démontrer pour un modèle qui capture les mêmes règles combinatoires et dont la géométrie (les inégalités) est choisie pour permettre ces propriétés.\n",
"\n",
"3. Application à un «modèle cosmohedron fractal»\n",
"Pour aller du formalisme théorique ci-dessus à un exemple concret, on peut définir explicitement :\n",
"Un ensemble de variables (t1,…,tn)(t_1, \\dots, t_n)(t1,…,tn) modélisant les différents canaux d’énergie/moment (analogie aux coupes Feynman).\n",
"Un polytope Cn⊂RnC_n\\subset \\mathbb{R}^nCn⊂Rn défini par des inégalités du type 0≤t1≤t2≤⋯≤tn≤10 \\le t_1 \\le t_2 \\le \\dots \\le t_n \\le 10≤t1≤t2≤⋯≤tn≤1 et des contraintes supplémentaires (ti+ti+1≤αti−1+β)(t_i + t_{i+1} \\le \\alpha\\,t_{i-1} + \\beta)(ti+ti+1≤αti−1+β), etc.\n",
"Règles de subdivision : pour construire Cn+1C_{n+1}Cn+1 à partir de CnC_nCn, on rajoute des variables tn+1,…,tn+kt_{n+1}, \\dots, t_{n+k}tn+1,…,tn+k et des inégalités analogues, de sorte que la projection sur {t1,…,tn}\\{t_1,\\dots, t_n\\}{t1,…,tn} se décompose en “copies” échelonnées de CnC_nCn.\n",
"Exemple schématique :\n",
"Cn+1=⋂i=1n+1{(t1,…,tn+1):ti≥0,∑j=1n+1tj=1,}, C_{n+1} \\;=\\; \\bigcap_{i=1}^{n+1} \\bigl\\{ (t_1,\\dots,t_{n+1}) : t_{i} \\ge 0,\\; \\sum_{j=1}^{n+1} t_j = 1,\\; \\dots \\bigr\\},Cn+1=i=1⋂n+1{(t1,…,tn+1):ti≥0,j=1∑n+1tj=1,…},\n",
"avec certaines conditions linéaires (ou affines) introduisant une auto-similarité. On peut prouver que, si on choisit bien les coefficients, alors on obtient une suite {Cn}\\{C_n\\}{Cn} satisfaisant les hypothèses (H1), (H2) et (H3).\n",
"La preuve de fractalité (au sens Hausdorff) s’articule alors sur la démonstration que la projection (ou section) dans un sous-espace de dimension 2 (ou plus) admet un recouvrement par Φn,i(Cn)\\Phi_{n,i}(C_n)Φn,i(Cn) avec un rapport d’échelle ρ<1\\rho<1ρ<1. Dès lors, la même théorie IFS s’applique et conclut qu’on obtient un attracteur fractal dans la limite.\n",
"\n",
"4. Conclusion et perspectives\n",
"Construction rigoureuse :\n",
"On a donné le schéma d’une construction formelle (suivant la logique de la théorie IFS) permettant de définir une suite de polytopes s’apparentant à une “version fractale” de cosmohedra. Les conditions de contraction et de non-recouvrement permettent l’utilisation des théorèmes standards de la géométrie fractale (Barnsley, Falconer, Hutchinson).\n",
"Preuve de la fractalité :\n",
"La démonstration repose sur la convergence vers un attracteur fractal F\\,FF et l’évaluation de la dimension de Hausdorff par la formule ∑i=1kρis=1\\sum_{i=1}^k \\rho_i^s = 1∑i=1kρis=1. On obtient ainsi, en général, un exposant sss non entier, prouvant le caractère fractal.\n",
"Lien avec la physique :\n",
"D’un point de vue strictement mathématique, l’existence ou non d’une fractalité dans le vrai “espace de configurations” d’un cosmohedron cosmologique reste conjecturale. Il faudrait prouver qu’en dimension (potentiellement) très élevée et avec des contraintes physiques (pôles d’amplitudes, invariances conformes, etc.), la construction reproduit les conditions de l’IFS.\n",
"Remarque finale :\n",
"Bien que cette approche donne un cadre théorique pour exhiber un objet fractal (et prouver rigoureusement ses propriétés de fractalité), son application exacte aux cosmohedra décrits dans la littérature de la fonction d’onde cosmologique exigerait des travaux d’adaptation. Néanmoins, c’est ainsi que l’on procéderait pour avoir un argument formel :\n",
"Montrer que la “croissance” des polytopes répond à un schéma de self-similarité (au moins asymptotique),\n",
"Démontrer, via un théorème standard d’attracteur IFS, que la dimension du lieu-limite est non entière,\n",
"Conclure que la structure est fractale.\n",
"En résumé, cette construction est rigoureuse si l’on respecte les hypothèses de contraction, de non-recouvrement et de convergence vers des transformations Φi\\Phi_iΦi fixes. Elle aboutit à une preuve formelle (dans le sens de la théorie des IFS) que le limite de la suite de polytopes est un ensemble fractal, et donc qu’il existe une structure fractale sous-jacente dans ce modèle “cosmohedron-like” à la limite n→∞n \\to \\inftyn→∞.\n",
"\n",
"\"\"\"\n",
"resultats_md = main(mon_texte_de_preuve)\n"
]
},
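{
"cell_type": "code",
"execution_count": null,
"id": "hutchinson-dimension-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A minimal numerical companion to the Hutchinson equation cited in the proof text:\n",
"# solve sum_i rho_i^s = 1 for the similarity dimension s by bisection (the left-hand\n",
"# side is strictly decreasing in s for ratios in (0,1)). The sample ratios below\n",
"# (k = 3 copies at scale 1/2, i.e. the Sierpinski triangle) are an illustrative\n",
"# assumption, not values derived from the cosmohedron construction.\n",
"\n",
"def hutchinson_dimension(ratios, lo=0.0, hi=10.0, tol=1e-12):\n",
"    f = lambda s: sum(r ** s for r in ratios) - 1\n",
"    for _ in range(200):\n",
"        mid = (lo + hi) / 2\n",
"        if f(mid) > 0:\n",
"            lo = mid  # root lies above mid\n",
"        else:\n",
"            hi = mid\n",
"        if hi - lo < tol:\n",
"            break\n",
"    return (lo + hi) / 2\n",
"\n",
"print(hutchinson_dimension([0.5, 0.5, 0.5]))  # log(3)/log(2) = 1.5849..., non-integer\n"
]
},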
{
"cell_type": "code",
"execution_count": null,
"id": "44349990-50c7-4fb7-a62c-532d829c2bdc",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

197
week2/community-contributions/week2-exercise-translator.ipynb

@ -0,0 +1,197 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "7563a171",
"metadata": {},
"source": []
},
{
"cell_type": "markdown",
"id": "d006b2ea-9dfe-49c7-88a9-a5a0775185fd",
"metadata": {},
"source": [
"# Exercise - week 2: German translator\n",
"\n",
"This should include a Gradio UI, streaming, use of the system prompt to add expertise, and the ability to switch between models. Bonus points if you can demonstrate use of a tool!\n",
"\n",
"The assistant will transform your spoken English to text, then translate it German and speak it out. The image on the UI is just decoration. This exercise was created on MacOS, Python 3.13."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a07e7793-b8f5-44f4-aded-5562f633271a",
"metadata": {},
"outputs": [],
"source": [
"# Install first PortAudio, in MacOS\n",
"# brew install portaudio\n",
"\n",
"\n",
"!pip install openai speechrecognition pyaudio\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "dcae50aa",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import gradio as gr"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1796b554",
"metadata": {},
"outputs": [],
"source": [
"# Initialization\n",
"\n",
"load_dotenv()\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
"else:\n",
" print(\"OpenAI API Key not set\")\n",
" \n",
"MODEL = \"gpt-4o-mini\"\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "c5caad24",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"\"\"You are a highly skilled language translator specializing in translating English text to German. \n",
"Your task is to accurately translate any English text provided by the user into German. \n",
"Ensure that the translations are grammatically correct and contextually appropriate. \n",
"If the user provides a phrase, sentence, or paragraph in English, respond with the equivalent translation in German.\"\"\" "
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "aca69563",
"metadata": {},
"outputs": [],
"source": [
"import speech_recognition as sr\n",
"from pydub import AudioSegment\n",
"from pydub.playback import play\n",
"import base64\n",
"from io import BytesIO\n",
"from PIL import Image\n",
"\n",
"\n",
"def recognize_speech(audio_file):\n",
" recognizer = sr.Recognizer()\n",
" with sr.AudioFile(audio_file) as source:\n",
" audio = recognizer.record(source)\n",
" try:\n",
" text = recognizer.recognize_google(audio)\n",
" return text\n",
" except sr.UnknownValueError:\n",
" return \"Google Speech Recognition could not understand audio\"\n",
" except sr.RequestError as e:\n",
" return f\"Could not request results from Google Speech Recognition service; {e}\"\n",
"\n",
"\n",
"def get_chatgpt_response(message):\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages = \n",
" [{\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": message}],\n",
" max_tokens=150\n",
" )\n",
" return response.choices[0].message.content.strip()\n",
"\n",
"def process_audio(audio_file):\n",
" text = recognize_speech(audio_file)\n",
" if text:\n",
" response = get_chatgpt_response(text)\n",
" talker(response)\n",
" return response\n",
" return \"Could not recognize speech.\"\n",
"\n",
"def talker(message):\n",
" response = openai.audio.speech.create(\n",
" model=\"tts-1\",\n",
" voice=\"onyx\", # Also, try replacing onyx with alloy\n",
" input=message\n",
" )\n",
" \n",
" audio_stream = BytesIO(response.content)\n",
" audio = AudioSegment.from_file(audio_stream, format=\"mp3\")\n",
" play(audio)"
]
},
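{
"cell_type": "code",
"execution_count": null,
"id": "translator-text-check",
"metadata": {},
"outputs": [],
"source": [
"# A quick text-only check of the translation step, bypassing the audio pipeline.\n",
"# The sample sentence is an arbitrary illustration; any English text works.\n",
"print(get_chatgpt_response(\"Good morning, where is the train station?\"))"
]
},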
{
"cell_type": "code",
"execution_count": null,
"id": "f1118141",
"metadata": {},
"outputs": [],
"source": [
"# Create Gradio interface\n",
"\n",
"# some image decoration to UI, just a static picture\n",
"image_path =\"week2-exercise-translator-berlin.webp\"\n",
"\n",
"with gr.Blocks() as ui:\n",
" gr.Interface(\n",
" fn=process_audio,\n",
" inputs=gr.Audio(type=\"filepath\", label=\"Speak English. German translation in a moment:\"),\n",
" outputs=\"text\",\n",
" live=True, \n",
" )\n",
" gr.Image(value=image_path, label=\"Das ist Berlin\")\n",
" \n",
"ui.launch(inbrowser=True)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c1284da5",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "venv313",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

614
week2/community-contributions/week2_exercise_translated_chatbot.ipynb

File diff suppressed because one or more lines are too long

2
week2/day2.ipynb

@ -53,7 +53,7 @@
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
"load_dotenv()\n",
"load_dotenv(override=True)\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",

2
week2/day3.ipynb

@ -33,7 +33,7 @@
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
"load_dotenv()\n",
"load_dotenv(override=True)\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",

2
week2/day4.ipynb

@ -35,7 +35,7 @@
"source": [
"# Initialization\n",
"\n",
"load_dotenv()\n",
"load_dotenv(override=True)\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",

2
week2/day5.ipynb

@ -35,7 +35,7 @@
"source": [
"# Initialization\n",
"\n",
"load_dotenv()\n",
"load_dotenv(override=True)\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",

10
week3/community-contributions/dataset_generator.ipynb

@ -22,7 +22,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {
"id": "-Apd7-p-hyLk"
},
@ -84,7 +84,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"metadata": {
"id": "WjxNWW6bvdgj"
},
@ -105,7 +105,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": null,
"metadata": {
"id": "ZvljDKdji8iV"
},
@ -161,7 +161,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": null,
"metadata": {
"id": "JAdfqYXnvEDE"
},
@ -196,7 +196,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": null,
"metadata": {
"id": "xy2RP5T-vxXg"
},

54
week3/community-contributions/telegram_bot_llm/README.md

@ -0,0 +1,54 @@
# What is this?
This is a Telegram bot: it receives text and responds with robotic voice messages generated by a locally run Ollama model. It supports the "/echo" command, which returns just a voiceover of your message without involving the AI. Text-to-speech (TTS) is performed by the locally run Balabolka tool with a preinstalled robotic voice.
## How to create a bot?
- Open Telegram and search for BotFather (it’s an official Telegram bot for creating and managing other bots).
- Start a chat with BotFather and type /newbot to create a new bot.
- BotFather will ask you to provide a name for your bot. Choose a name that you want your bot to be identified by.
- After you provide a name, it will ask for a username. The username must be unique and must end with bot (e.g., myamazingbot).
- After creating the bot, BotFather will give you a token.
Save this token. Do not worry if you lose it - you can always open BotFather, get the list of your bots, and request tokens for each of them.
## Update environment variables
In the same folder as the script, create a file named ".env". Use exactly this name, just 4 characters.
Add the following text to this file:
```
BOT_TOKEN='<TOKEN_FOR_YOUR_BOT>'
HOSTNAME='<YOUR_HOSTNAME_OR_IP>'
```
For example:
```
BOT_TOKEN='8030XXXXXX:AAHtR5F3H6WUYvAItQa2THEIFoEDbXXXXXX'
HOSTNAME='118-YYY-71-83-67'
```
## Setting up a webhook
Before the bot can receive Telegram updates, you need to set up a webhook.
If you have a public IP, simply run the script and visit http://your-server-ip-or-domain:5000/setwebhook to set the webhook URL. This configures Telegram to send updates to your bot.<br>
If you don't have a public IP:
- Run the script on your local machine
- Register for free on [ngrok.com](https://ngrok.com). Sign in. After signing in, go to the "Get Started" page and copy your authtoken.
- Download ngrok from [ngrok.com](https://ngrok.com) and unzip the downloaded file, e.g. into the script folder.
- Run "ngrok config add-authtoken <YOUR_AUTHTOKEN>"
- Run the command "ngrok http 5000". Note that the port is 5000, not the default 80, because the script starts a local server on that port. When ngrok starts, it gives you a URL, e.g. "https://b\<your IP address\>.ngrok-free.app"
- To set the webhook URL, visit https://api.telegram.org/bot\<YOUR_BOT_TOKEN\>/setWebhook?url=\<URL which ngrok shows\>/\<YOUR_BOT_TOKEN\>. E.g. "https://api.telegram.org/bot8030XXXXXX:AAHtR5F3H6WUYvAItQa2THEIFoEDbXXXXXX/setWebhook?url=https://b118-YYY-71-83-67.ngrok-free.app/8030XXXXXX:AAHtR5F3H6WUYvAItQa2THEIFoEDbXXXXXX". On success, you will receive JSON like "{"ok":true,"result":true,"description":"Webhook was set"}". A scripted alternative is shown below.
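
If you prefer to set the webhook from code rather than from the browser, here is a minimal sketch that calls the same Bot API endpoint (it assumes the `requests` package is installed and that BOT_TOKEN and HOSTNAME come from the same ".env" file):
```
import os
import requests
from dotenv import load_dotenv

load_dotenv()
token = os.getenv("BOT_TOKEN")
hostname = os.getenv("HOSTNAME")

# The script serves updates at https://<HOSTNAME>/<BOT_TOKEN>, so register that URL
webhook_url = f"https://{hostname}/{token}"
resp = requests.get(f"https://api.telegram.org/bot{token}/setWebhook", params={"url": webhook_url})
print(resp.json())  # expect: {"ok": true, "result": true, "description": "Webhook was set"}
```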
## How to test the bot
Now you can search Telegram for your bot by its username, e.g. "myamazingbot". Start a conversation and send something. In your script's logs you should see the message, printed from the handle_message() method.
## How to install local TTS (Text-to-speech)
**Note!** You can use a remote TTS, e.g. from ChatGPT. You don't have to use a local one; I used a local one just because it is free.<br>
Many applications can provide local TTS. In my case I used the Balabolka tool, in particular its CLI tool called "balcon" (https://www.cross-plus-a.com/ru/bconsole.html). After installing the tool you need to install voices. Note that some voices are free and some are not. Put the name of the voice in the script variable TTS_VOICE_NAME; in my case it is 'Maxim'.<br>
Note that if the voice you use supports only one language, you should send the TTS text in that language only.
TTS results (.mp3 files) are stored in the script folder. They are deleted once they have been sent back to the user, but if anything fails you may need to delete them manually. A quick standalone check of the TTS setup is shown below.
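
To verify balcon and your voice outside the bot, the same invocation the script uses can be run directly (a minimal sketch; the path and the 'Maxim' voice mirror the script's defaults, so adjust them to your installation):
```
import subprocess

# Mirrors text_to_audio_file() in tg_lb_bot.py; path and voice name are the script's defaults
balcon = r"C:\Program Files (x86)\Balabolka\balcon\balcon.exe"
subprocess.run([balcon, "-n", "Maxim", "-t", "Hello from the bot", "-w", "test.mp3"], check=True)
```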
## Supported bot commands
- "/start" - show information about supported commands
- "/echo" - just convert text to speech, do not use AI to create response
- "/clear" - clear the bot's memory so you can start the conversation from the beginning
## Limitations
- This bot is created just for fun. It is quite unstable, so you may need to fix some bugs as you find them.
- The bot is not intended to be used in parallel by many users. It will most likely handle a few users at the same time, but not tens or hundreds; if you need that, you may need to make slight changes to the script or its deployment.
- Chat history for every chat is stored in memory in a global variable, which is bad practice: it can lead to memory exhaustion, and the history is cleared every time the bot is restarted. A sketch of one possible mitigation follows.
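
If you want to bound memory use, one option is to cap each chat's history when appending (a hypothetical variant of the script's append_chat_message(), not part of the script itself):
```
MAX_TURNS = 40  # keep the system prompt plus at most the last 40 messages

def append_chat_message_bounded(chat_id, message, role):
    # chat_messages and SYSTEM_PROMPT are the globals defined in tg_lb_bot.py
    history = chat_messages.setdefault(chat_id, [{"role": "system", "content": SYSTEM_PROMPT}])
    history.append({"role": role, "content": message})
    if len(history) > MAX_TURNS + 1:
        # drop the oldest non-system messages, keeping index 0 (the system prompt)
        del history[1:len(history) - MAX_TURNS]
```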

199
week3/community-contributions/telegram_bot_llm/tg_lb_bot.py

@ -0,0 +1,199 @@
import os
import subprocess
from dotenv import load_dotenv
from quart import Quart, request, jsonify
from telegram import Update
from telegram.ext import Application, CommandHandler, MessageHandler, filters
import asyncio
import concurrent.futures
import ollama
# Load environment variables
load_dotenv()
TOKEN = os.getenv('BOT_TOKEN')
if TOKEN:
print(f"Bot token is {TOKEN}")
else:
print("Please provide your bot token in .env file")
HOSTNAME = os.getenv('HOSTNAME')
if HOSTNAME:
print(f"Host name is {HOSTNAME}")
else:
print("Please provide your host name (or IP address) in .env file")
BALCON_PATH = r"C:\Program Files (x86)\Balabolka\balcon\balcon.exe"
TTS_VOICE_NAME = 'Maxim'
MODEL = "llama3.1:8b"
SYSTEM_PROMPT = "You are an extremely cynical and rude assistant. Use the words 'leather bastard' as often " \
"as possible. Use the words 'leather bastard' every time you address a user or mention any " \
"person. For example 'It doesn't concern you, leather bastard'"
# Initialize the Application
app = Quart(__name__)
bot_application = Application.builder().token(TOKEN).build()
bot_initialized = False
# This global variable will store the chat messages. It is very bad practice.
chat_messages = {}
async def initialize_bot():
"""Ensure the bot is initialized only once."""
global bot_initialized
if not bot_initialized:
await bot_application.initialize()
await bot_application.start()
bot_initialized = True
print("Telegram bot initialized and started.")
# Command handler for /start
async def start(update: Update, context):
await update.message.reply_text(
"Hello, welcome to the bot! Here are the commands you can use:\n"
"/echo - Just convert text to speech, do not use AI to create response\n"
"/clear - Clear the bot's memory so you can start the conversation from the beginning."
)
async def clear(update: Update, context):
if update.message.chat_id in chat_messages:
del chat_messages[update.message.chat_id]
await update.message.reply_text(
"The bot's memory has been cleared. You can start the conversation from the beginning.")
def text_to_audio_file(input_text, filename="output.mp3"):
"""
Generate audio using Balcon and save it to the current directory.
:param input_text: Text to convert to speech.
:param filename: Desired output file name.
:return: Path to the generated file.
"""
# Ensure the filename is in the current script directory
current_dir = os.path.dirname(os.path.abspath(__file__))
file_path = os.path.join(current_dir, filename)
command = [BALCON_PATH, '-n', TTS_VOICE_NAME, "-t", input_text, "-w", file_path]
try:
subprocess.run(command, check=True)
print(f"Audio saved at: {file_path}")
return file_path
except subprocess.CalledProcessError as e:
print(f"Error generating audio: {e}")
return None
def ask_llm(messages):
response = ollama.chat(model=MODEL, messages=messages)
return response['message']['content']
def append_chat_message(chat_id, message, role):
if chat_id not in chat_messages:
chat_messages[chat_id] = [{"role": "system", "content": SYSTEM_PROMPT}]
chat_messages[chat_id].append({"role": role, "content": message})
def get_first_word(input_string):
# Split the string into words and strip any surrounding whitespace from the first word
return input_string.split()[0].strip() if input_string.strip() else None
def process_user_message(message):
chat_id = message.chat_id
message_id = message.message_id
user_message = message.text
words = user_message.strip().split(maxsplit=1)
if len(words) == 0:
return "", ""
if words[0] in ["/echo", "/clear"]:
# Remove the command from the message
tts_message = words[1] if len(words) > 1 else ""
else:
append_chat_message(chat_id, user_message, "user")
tts_message = ask_llm(chat_messages[chat_id])
append_chat_message(chat_id, tts_message, "assistant")
audio_file_path = text_to_audio_file(tts_message, filename=f"{chat_id}-{message_id}.mp3")
return audio_file_path, tts_message
# Message handler to log and print all incoming messages
async def handle_message(update: Update, context):
user_message = update.message.text
chat_id = update.message.chat_id
sender = update.message.from_user
print(f"Message from {sender.first_name} (chat ID: {chat_id}): {user_message}")
# Wait until processing of the user message is done.
# Meanwhile, keep sending a chat action so the user sees that the bot is preparing audio.
# The notification disappears after 5 seconds, so it must be re-sent every 4 seconds.
audio_file_path = None
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(process_user_message, update.message)
while True:
try:
# Show to user that bot is busy with preparing audio response
await context.bot.send_chat_action(chat_id=chat_id, action='record_audio')
result, answer = future.result(timeout=4)
audio_file_path = result
user_message = answer
break
except concurrent.futures.TimeoutError:
continue
if len(answer) == 0:
await update.message.reply_text(f"Empty message")
return
try:
with open(audio_file_path, 'rb') as audio_file:
await context.bot.send_audio(
chat_id=chat_id,
audio=audio_file,
caption="Ответ робота",
write_timeout=120
)
print(f"Audio sent to {sender.first_name} (chat ID: {chat_id})")
os.remove(audio_file_path)
except Exception as e:
print(f"Exception while sending file: {e}")
# Optional text response. Comment this line if you want bot to answer only with audio
await update.message.reply_text(user_message)
bot_application.add_handler(CommandHandler('start', start))
bot_application.add_handler(CommandHandler('echo', handle_message))
bot_application.add_handler(CommandHandler('clear', clear))
bot_application.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_message))
@app.route('/' + TOKEN, methods=['POST'])
async def webhook():
"""Webhook endpoint for receiving updates."""
try:
await initialize_bot() # Ensure bot is initialized before processing updates
update = Update.de_json(await request.get_json(), bot_application.bot)
await bot_application.process_update(update)
except Exception as e:
print(f"Error processing update: {e}")
return jsonify({"status": "ok"})
@app.route('/setwebhook', methods=['GET'])  # enabled so the README's /setwebhook step works
async def set_webhook():
"""Set the webhook with Telegram."""
await initialize_bot() # Ensure bot is initialized before setting the webhook
webhook_url = f"https://{HOSTNAME}/{TOKEN}"
success = await bot_application.bot.set_webhook(webhook_url)
if success:
return jsonify({"status": "webhook set successfully", "url": webhook_url})
return jsonify({"status": "failed to set webhook"})
if __name__ == '__main__':
loop = asyncio.get_event_loop()
loop.create_task(initialize_bot()) # Initialize bot at startup
app.run(host='0.0.0.0', port=5000)

956
week4/community-contributions/day4_extra_deepseek_and_hf_inference_provider_added.ipynb

File diff suppressed because one or more lines are too long

925
week4/community-contributions/day4_with_inference_provider.ipynb

@ -0,0 +1,925 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4a6ab9a2-28a2-445d-8512-a0dc8d1b54e9",
"metadata": {},
"source": [
"# Code Generator\n",
"\n",
"The requirement: use an Open Source model to generate high performance C++ code from Python code\n",
"\n",
"To replicate this, you'll need to set up a HuggingFace endpoint as I do in the video. It's simple to do, and it's quite satisfying to see the results!\n",
"\n",
"It's also an important part of your learning; this is the first example of deploying an open source model to be behind an API. We'll return to this in Week 8, but this should plant a seed in your mind for what's involved in moving open source models into production.\n",
"\n",
"Added the use of inference providers that was introduced recently by Hugging Face to convert the code.\n",
"Improved the user prompt to include algorithic efficeiny and performance optimization.\n",
"\n",
"Added Java as a conversion option.\n",
"\n",
"Note: C++ commands work on windows environment.\n"
]
},
{
"cell_type": "markdown",
"id": "22e1567b-33fd-49e7-866e-4b635d15715a",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h1 style=\"color:#900;\">Important - Pause Endpoints when not in use</h1>\n",
" <span style=\"color:#900;\">\n",
" If you do decide to use HuggingFace endpoints for this project, you should stop or pause the endpoints when you are done to avoid accruing unnecessary running cost. The costs are very low as long as you only run the endpoint when you're using it. Navigate to the HuggingFace endpoint UI <a href=\"https://ui.endpoints.huggingface.co/\">here,</a> open your endpoint, and click Pause to put it on pause so you no longer pay for it. \n",
"Many thanks to student John L. for raising this.\n",
"<br/><br/>\n",
"In week 8 we will use Modal instead of HuggingFace endpoints; with Modal you only pay for the time that you use it and you should get free credits.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 231,
"id": "e610bf56-a46e-4aff-8de1-ab49d62b1ad3",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import io\n",
"import sys\n",
"import json\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import google.generativeai\n",
"import anthropic\n",
"from IPython.display import Markdown, display, update_display\n",
"import gradio as gr\n",
"import subprocess, re"
]
},
{
"cell_type": "code",
"execution_count": 198,
"id": "4f672e1c-87e9-4865-b760-370fa605e614",
"metadata": {},
"outputs": [],
"source": [
"# environment\n",
"\n",
"load_dotenv()\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n",
"os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n",
"os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')"
]
},
{
"cell_type": "code",
"execution_count": 199,
"id": "8aa149ed-9298-4d69-8fe2-8f5de0f667da",
"metadata": {},
"outputs": [],
"source": [
"# initialize\n",
"\n",
"openai = OpenAI()\n",
"claude = anthropic.Anthropic()\n",
"OPENAI_MODEL = \"gpt-4o\"\n",
"CLAUDE_MODEL = \"claude-3-5-sonnet-20240620\""
]
},
{
"cell_type": "code",
"execution_count": 200,
"id": "2db60a72-d098-42ca-8ce2-1e037c86b718",
"metadata": {},
"outputs": [],
"source": [
"def system_prompt_for(language: str) -> str:\n",
" system_prompt = (\n",
" f\"You are an assistant that reimplements Python code in high performance {language.upper()} for an Windows intel i7.\"\n",
" f\"Respond only with {language.upper()} code; use comments sparingly and do not provide any explanation other than occasional comments.\"\n",
" f\"The {language.upper()} response needs to produce an identical output in the fastest possible time. Keep implementations of random number generators identical so that results match exactly.\"\n",
" )\n",
" return system_prompt"
]
},
{
"cell_type": "code",
"execution_count": 243,
"id": "70583432-e851-40d1-a219-2fb32b830dc8",
"metadata": {},
"outputs": [],
"source": [
"#updated the original prompt to include algorithic efficeiny and performance optimization\n",
"def user_prompt_for(python: str, language: str) -> str:\n",
" if language.lower() not in {\"cpp\", \"java\"}:\n",
" raise ValueError(\"Unsupported language. Please choose 'C++' or 'Java'.\")\n",
" \n",
" optimization_notes = {\n",
" \"cpp\": (\n",
" \"- Use `int64_t` instead of `int` where necessary to prevent overflows.\\n\"\n",
" \"- Ensure random number generation in C++ matches Python's output as closely as possible.\\n\"\n",
" \"- Avoid undefined behavior, such as bit shifts that exceed type width (`1UL << 32` is incorrect for `uint32_t`).\\n\"\n",
" \"- Utilize `std::vector` for dynamic arrays and prefer preallocation for efficiency.\\n\"\n",
" \"- Consider `std::array` or `std::span` when fixed-size arrays are sufficient.\\n\"\n",
" \"- Optimize with **SIMD**, cache-friendly structures, and memory alignment where necessary.\\n\"\n",
" ),\n",
" \"java\": (\n",
" \"- Use `long` instead of `int` where necessary to prevent overflows.\\n\"\n",
" \"- Ensure random number generation in Java matches Python's output as closely as possible.\\n\"\n",
" \"- Use `ArrayList` instead of primitive arrays if dynamic resizing is needed.\\n\"\n",
" \"- Utilize `BigInteger` if handling large numbers that could exceed `long`.\\n\"\n",
" \"- Optimize with **parallel streams** (`IntStream.parallel()`) and **efficient data structures** (`HashMap`, `LinkedList`, etc.).\\n\"\n",
" )\n",
" }\n",
"\n",
" user_prompt = (\n",
" f\"First, analyze the given Python code to understand its core purpose and algorithmic approach. \"\n",
" f\"Then, implement a {language} solution that achieves the same output while prioritizing:\\n\"\n",
" \"1. **Algorithmic Efficiency** - Optimize time and space complexity, even if it means using a different approach.\\n\"\n",
" \"2. **Numerical Correctness** - Prevent integer overflows, use appropriate data types (`long`, `BigInteger`, etc.), \"\n",
" \"and ensure correct handling of edge cases.\\n\"\n",
" \"3. **Performance Optimization** - Utilize language-specific features for efficiency.\\n\\n\"\n",
" \n",
" \"### **Important Notes:**\\n\"\n",
" + optimization_notes[language.lower()] +\n",
" \"\\n### **Expected Response:**\\n\"\n",
" f\"Respond **only with {language} code**, including all necessary imports and ensuring the output matches the Python version exactly.\\n\\n\"\n",
" \n",
" \"Here's the Python code to analyze and optimize:\\n\\n\"\n",
" + python\n",
" )\n",
" \n",
" return user_prompt\n"
]
},
{
"cell_type": "code",
"execution_count": 202,
"id": "c6190659-f54c-4951-bef4-4960f8e51cc4",
"metadata": {},
"outputs": [],
"source": [
"def messages_for(python, language=\"cpp\"):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt_for(language)},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(python, language)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": 241,
"id": "71e1ba8c-5b05-4726-a9f3-8d8c6257350b",
"metadata": {},
"outputs": [],
"source": [
"# write to a file called optimized.cpp\n",
"\n",
"def write_output(code, file_name):\n",
" with open(file_name, \"w\") as f:\n",
" f.write(code)"
]
},
{
"cell_type": "code",
"execution_count": 226,
"id": "e7d2fea8-74c6-4421-8f1e-0e76d5b201b9",
"metadata": {},
"outputs": [],
"source": [
"def optimize_gpt(python, language=\"cpp\"): \n",
" stream = openai.chat.completions.create(model=OPENAI_MODEL, messages=messages_for(python, language), stream=True)\n",
" reply = \"\"\n",
" for chunk in stream:\n",
" fragment = chunk.choices[0].delta.content or \"\"\n",
" reply += fragment\n",
" print(fragment, end='', flush=True)\n",
" file_name= f\"optimized.{language}\"\n",
" if language == \"java\":\n",
" # Extract class name from Java code\n",
" match = re.search(r\"\\b(public\\s+)?class\\s+(\\w+)\", reply)\n",
" class_name = match.group(2) if match else \"OptimizedJava\"\n",
" file_name = f\"{class_name}.java\"\n",
" else:\n",
" file_name = f\"optimized.{language}\"\n",
" write_output(reply, file_name)"
]
},
{
"cell_type": "code",
"execution_count": 227,
"id": "7cd84ad8-d55c-4fe0-9eeb-1895c95c4a9d",
"metadata": {},
"outputs": [],
"source": [
"def optimize_claude(python, language=\"cpp\"):\n",
" result = claude.messages.stream(\n",
" model=CLAUDE_MODEL,\n",
" max_tokens=2000,\n",
" system=system_message,\n",
" messages=[{\"role\": \"user\", \"content\": user_prompt_for(python, language)}],\n",
" )\n",
" reply = \"\"\n",
" with result as stream:\n",
" for text in stream.text_stream:\n",
" reply += text\n",
" print(text, end=\"\", flush=True)\n",
" if language == \"java\":\n",
" # Extract class name from Java code\n",
" match = re.search(r\"\\b(public\\s+)?class\\s+(\\w+)\", reply)\n",
" class_name = match.group(2) if match else \"OptimizedJava\"\n",
" file_name = f\"{class_name}.java\"\n",
" else:\n",
" file_name = f\"optimized.{language}\"\n",
" write_output(reply, file_name)"
]
},
{
"cell_type": "code",
"execution_count": 206,
"id": "a1cbb778-fa57-43de-b04b-ed523f396c38",
"metadata": {},
"outputs": [],
"source": [
"pi = \"\"\"\n",
"import time\n",
"\n",
"def calculate(iterations, param1, param2):\n",
" result = 1.0\n",
" for i in range(1, iterations+1):\n",
" j = i * param1 - param2\n",
" result -= (1/j)\n",
" j = i * param1 + param2\n",
" result += (1/j)\n",
" return result\n",
"\n",
"start_time = time.time()\n",
"result = calculate(100_000_000, 4, 1) * 4\n",
"end_time = time.time()\n",
"\n",
"print(f\"Result: {result:.12f}\")\n",
"print(f\"Execution Time: {(end_time - start_time):.6f} seconds\")\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 59,
"id": "7fe1cd4b-d2c5-4303-afed-2115a3fef200",
"metadata": {},
"outputs": [],
"source": [
"exec(pi)"
]
},
{
"cell_type": "code",
"execution_count": 91,
"id": "105db6f9-343c-491d-8e44-3a5328b81719",
"metadata": {},
"outputs": [],
"source": [
"optimize_gpt(pi, \"java\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bf26ee95-0c77-491d-9a91-579a1e96a8a3",
"metadata": {},
"outputs": [],
"source": [
"exec(pi)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4194e40c-04ab-4940-9d64-b4ad37c5bb40",
"metadata": {},
"outputs": [],
"source": [
"!g++ -O3 -std=c++17 -march=native -o optimized optimized.cpp\n",
"!optimized.exe"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "983a11fe-e24d-4c65-8269-9802c5ef3ae6",
"metadata": {},
"outputs": [],
"source": [
"optimize_claude(pi)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5a766f9-3d23-4bb4-a1d4-88ec44b61ddf",
"metadata": {},
"outputs": [],
"source": [
"!g++ -O3 -std=c++17 -march=native -o optimized optimized.cpp\n",
"!optimized.exe"
]
},
{
"cell_type": "code",
"execution_count": 207,
"id": "c3b497b3-f569-420e-b92e-fb0f49957ce0",
"metadata": {},
"outputs": [],
"source": [
"python_hard = \"\"\"# Be careful to support large number sizes\n",
"\n",
"def lcg(seed, a=1664525, c=1013904223, m=2**32):\n",
" value = seed\n",
" while True:\n",
" value = (a * value + c) % m\n",
" yield value\n",
" \n",
"def max_subarray_sum(n, seed, min_val, max_val):\n",
" lcg_gen = lcg(seed)\n",
" random_numbers = [next(lcg_gen) % (max_val - min_val + 1) + min_val for _ in range(n)]\n",
" max_sum = float('-inf')\n",
" for i in range(n):\n",
" current_sum = 0\n",
" for j in range(i, n):\n",
" current_sum += random_numbers[j]\n",
" if current_sum > max_sum:\n",
" max_sum = current_sum\n",
" return max_sum\n",
"\n",
"def total_max_subarray_sum(n, initial_seed, min_val, max_val):\n",
" total_sum = 0\n",
" lcg_gen = lcg(initial_seed)\n",
" for _ in range(20):\n",
" seed = next(lcg_gen)\n",
" total_sum += max_subarray_sum(n, seed, min_val, max_val)\n",
" return total_sum\n",
"\n",
"# Parameters\n",
"n = 10000 # Number of random numbers\n",
"initial_seed = 42 # Initial seed for the LCG\n",
"min_val = -10 # Minimum value of random numbers\n",
"max_val = 10 # Maximum value of random numbers\n",
"\n",
"# Timing the function\n",
"import time\n",
"start_time = time.time()\n",
"result = total_max_subarray_sum(n, initial_seed, min_val, max_val)\n",
"end_time = time.time()\n",
"\n",
"print(\"Total Maximum Subarray Sum (20 runs):\", result)\n",
"print(\"Execution Time: {:.6f} seconds\".format(end_time - start_time))\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "dab5e4bc-276c-4555-bd4c-12c699d5e899",
"metadata": {},
"outputs": [],
"source": [
"exec(python_hard)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e8d24ed5-2c15-4f55-80e7-13a3952b3cb8",
"metadata": {},
"outputs": [],
"source": [
"optimize_gpt(python_hard)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e0b3d073-88a2-40b2-831c-6f0c345c256f",
"metadata": {},
"outputs": [],
"source": [
"!g++ -O3 -std=c++17 -march=native -o optimized optimized.cpp\n",
"!optimized.exe"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e9305446-1d0c-4b51-866a-b8c1e299bf5c",
"metadata": {},
"outputs": [],
"source": [
"optimize_claude(python_hard)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0c181036-8193-4fdd-aef3-fc513b218d43",
"metadata": {},
"outputs": [],
"source": [
"!g++ -O3 -std=c++17 -march=native -o optimized optimized.cpp\n",
"!optimized.exe"
]
},
{
"cell_type": "code",
"execution_count": 240,
"id": "0be9f47d-5213-4700-b0e2-d444c7c738c0",
"metadata": {},
"outputs": [],
"source": [
"def stream_gpt(python, language=\"cpp\"): \n",
" stream = openai.chat.completions.create(model=OPENAI_MODEL, messages=messages_for(python, language), stream=True)\n",
" reply = \"\"\n",
" code_block = f\"```{language}\\n\"\n",
" for chunk in stream:\n",
" fragment = chunk.choices[0].delta.content or \"\"\n",
" reply += fragment\n",
" cleaned_reply = reply.replace(code_block,'').replace('```','')\n",
" yield cleaned_reply"
]
},
{
"cell_type": "code",
"execution_count": 239,
"id": "8669f56b-8314-4582-a167-78842caea131",
"metadata": {},
"outputs": [],
"source": [
"def stream_claude(python, language=\"cpp\"):\n",
" result = claude.messages.stream(\n",
" model=CLAUDE_MODEL,\n",
" max_tokens=2000,\n",
" system=system_message,\n",
" messages=[{\"role\": \"user\", \"content\": user_prompt_for(python, language)}],\n",
" )\n",
" reply = \"\"\n",
" code_block = f\"```{language}\\n\"\n",
" with result as stream:\n",
" for text in stream.text_stream:\n",
" reply += text\n",
" cleaned_reply = reply.replace(code_block,'').replace('```','')\n",
" yield cleaned_reply"
]
},
{
"cell_type": "code",
"execution_count": 186,
"id": "2f1ae8f5-16c8-40a0-aa18-63b617df078d",
"metadata": {},
"outputs": [],
"source": [
"def optimize(python, model):\n",
" if model==\"GPT\":\n",
" result = stream_gpt(python)\n",
" elif model==\"Claude\":\n",
" result = stream_claude(python)\n",
" else:\n",
" raise ValueError(\"Unknown model\")\n",
" for stream_so_far in result:\n",
" yield stream_so_far "
]
},
{
"cell_type": "code",
"execution_count": 189,
"id": "f1ddb38e-6b0a-4c37-baa4-ace0b7de887a",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7888/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 189,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with gr.Blocks() as ui:\n",
" with gr.Row():\n",
" python = gr.Textbox(label=\"Python code:\", lines=10, value=python_hard)\n",
" cpp = gr.Textbox(label=\"C++ code:\", lines=10)\n",
" with gr.Row():\n",
" model = gr.Dropdown([\"GPT\", \"Claude\"], label=\"Select model\", value=\"GPT\")\n",
" convert = gr.Button(\"Convert code\")\n",
"\n",
" convert.click(optimize, inputs=[python, model], outputs=[cpp])\n",
"\n",
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": 210,
"id": "19bf2bff-a822-4009-a539-f003b1651383",
"metadata": {},
"outputs": [],
"source": [
"def execute_python(code):\n",
" try:\n",
" output = io.StringIO()\n",
" sys.stdout = output\n",
" exec(code)\n",
" finally:\n",
" sys.stdout = sys.__stdout__\n",
" return output.getvalue()"
]
},
{
"cell_type": "code",
"execution_count": 211,
"id": "9a2274f1-d03b-42c0-8dcc-4ce159b18442",
"metadata": {},
"outputs": [],
"source": [
"css = \"\"\"\n",
".python {background-color: #306998;}\n",
".cpp {background-color: #050;}\n",
".java {background-color: #306775;}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 97,
"id": "f1303932-160c-424b-97a8-d28c816721b2",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7868/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 97,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with gr.Blocks(css=css) as ui:\n",
" gr.Markdown(\"## Convert code from Python to C++\")\n",
" with gr.Row():\n",
" python = gr.Textbox(label=\"Python code:\", value=python_hard, lines=10)\n",
" cpp = gr.Textbox(label=\"C++ code:\", lines=10)\n",
" with gr.Row():\n",
" model = gr.Dropdown([\"GPT\", \"Claude\"], label=\"Select model\", value=\"GPT\")\n",
" with gr.Row():\n",
" convert = gr.Button(\"Convert code\")\n",
" with gr.Row():\n",
" python_run = gr.Button(\"Run Python\")\n",
" cpp_run = gr.Button(\"Run C++\")\n",
" with gr.Row():\n",
" python_out = gr.TextArea(label=\"Python result:\", elem_classes=[\"python\"])\n",
" cpp_out = gr.TextArea(label=\"C++ result:\", elem_classes=[\"cpp\"])\n",
"\n",
" convert.click(optimize, inputs=[python, model], outputs=[cpp])\n",
" python_run.click(execute_python, inputs=[python], outputs=[python_out])\n",
" cpp_run.click(execute_cpp, inputs=[cpp], outputs=[cpp_out])\n",
"\n",
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": 191,
"id": "bb8c5b4e-ec51-4f21-b3f8-6aa94fede86d",
"metadata": {},
"outputs": [],
"source": [
"from huggingface_hub import login, InferenceClient"
]
},
{
"cell_type": "code",
"execution_count": 117,
"id": "13347633-4606-4e38-9927-80c39e65c1f1",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Note: Environment variable`HF_TOKEN` is set and is the current active token independently from the token you've just configured.\n"
]
}
],
"source": [
"hf_token = os.environ['HF_TOKEN']\n",
"login(hf_token)"
]
},
{
"cell_type": "code",
"execution_count": 118,
"id": "ef60a4df-6267-4ebd-8eed-dcb917af0a5e",
"metadata": {},
"outputs": [],
"source": [
"code_qwen = \"Qwen/CodeQwen1.5-7B-Chat\"\n",
"code_gemma = \"google/codegemma-7b-it\"\n",
"messages=messages_for(pi)"
]
},
{
"cell_type": "code",
"execution_count": 119,
"id": "3825d77a-03c6-42b2-89bc-ccbcb1585740",
"metadata": {},
"outputs": [
{
"ename": "HfHubHTTPError",
"evalue": "402 Client Error: Payment Required for url: https://huggingface.co/api/inference-proxy/sambanova/v1/chat/completions (Request ID: Root=1-67afb729-1eb9aff1704314144ef14e59;2df843ad-b7d2-4145-bb7b-1cfd94ae19ef)\n\nYou have exceeded your monthly included credits for Inference Endpoints. Subscribe to PRO to get 20x more monthly allowance.",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mHTTPError\u001b[0m Traceback (most recent call last)",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\huggingface_hub\\utils\\_http.py:406\u001b[0m, in \u001b[0;36mhf_raise_for_status\u001b[1;34m(response, endpoint_name)\u001b[0m\n\u001b[0;32m 405\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m--> 406\u001b[0m \u001b[43mresponse\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mraise_for_status\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 407\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m HTTPError \u001b[38;5;28;01mas\u001b[39;00m e:\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\requests\\models.py:1024\u001b[0m, in \u001b[0;36mResponse.raise_for_status\u001b[1;34m(self)\u001b[0m\n\u001b[0;32m 1023\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m http_error_msg:\n\u001b[1;32m-> 1024\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m HTTPError(http_error_msg, response\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m)\n",
"\u001b[1;31mHTTPError\u001b[0m: 402 Client Error: Payment Required for url: https://huggingface.co/api/inference-proxy/sambanova/v1/chat/completions",
"\nThe above exception was the direct cause of the following exception:\n",
"\u001b[1;31mHfHubHTTPError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[1;32mIn[119], line 5\u001b[0m\n\u001b[0;32m 1\u001b[0m client \u001b[38;5;241m=\u001b[39m InferenceClient(\n\u001b[0;32m 2\u001b[0m \tprovider\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124msambanova\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[0;32m 3\u001b[0m \tapi_key\u001b[38;5;241m=\u001b[39mhf_token\n\u001b[0;32m 4\u001b[0m )\n\u001b[1;32m----> 5\u001b[0m stream \u001b[38;5;241m=\u001b[39m \u001b[43mclient\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mchat\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcompletions\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcreate\u001b[49m\u001b[43m(\u001b[49m\n\u001b[0;32m 6\u001b[0m \u001b[43m\t\u001b[49m\u001b[43mmodel\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mQwen/Qwen2.5-Coder-32B-Instruct\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\n\u001b[0;32m 7\u001b[0m \u001b[43m\t\u001b[49m\u001b[43mmessages\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmessages\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\n\u001b[0;32m 8\u001b[0m \u001b[43m\t\u001b[49m\u001b[43mmax_tokens\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;241;43m500\u001b[39;49m\u001b[43m,\u001b[49m\n\u001b[0;32m 9\u001b[0m \u001b[43m\t\u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\n\u001b[0;32m 10\u001b[0m \u001b[43m)\u001b[49m\n\u001b[0;32m 12\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m chunk \u001b[38;5;129;01min\u001b[39;00m stream:\n\u001b[0;32m 13\u001b[0m \u001b[38;5;28mprint\u001b[39m(chunk\u001b[38;5;241m.\u001b[39mchoices[\u001b[38;5;241m0\u001b[39m]\u001b[38;5;241m.\u001b[39mdelta\u001b[38;5;241m.\u001b[39mcontent, end\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\huggingface_hub\\inference\\_client.py:970\u001b[0m, in \u001b[0;36mInferenceClient.chat_completion\u001b[1;34m(self, messages, model, stream, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream_options, temperature, tool_choice, tool_prompt, tools, top_logprobs, top_p)\u001b[0m\n\u001b[0;32m 943\u001b[0m parameters \u001b[38;5;241m=\u001b[39m {\n\u001b[0;32m 944\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmodel\u001b[39m\u001b[38;5;124m\"\u001b[39m: payload_model,\n\u001b[0;32m 945\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mfrequency_penalty\u001b[39m\u001b[38;5;124m\"\u001b[39m: frequency_penalty,\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 961\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mstream_options\u001b[39m\u001b[38;5;124m\"\u001b[39m: stream_options,\n\u001b[0;32m 962\u001b[0m }\n\u001b[0;32m 963\u001b[0m request_parameters \u001b[38;5;241m=\u001b[39m provider_helper\u001b[38;5;241m.\u001b[39mprepare_request(\n\u001b[0;32m 964\u001b[0m inputs\u001b[38;5;241m=\u001b[39mmessages,\n\u001b[0;32m 965\u001b[0m parameters\u001b[38;5;241m=\u001b[39mparameters,\n\u001b[1;32m (...)\u001b[0m\n\u001b[0;32m 968\u001b[0m api_key\u001b[38;5;241m=\u001b[39m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtoken,\n\u001b[0;32m 969\u001b[0m )\n\u001b[1;32m--> 970\u001b[0m data \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_inner_post\u001b[49m\u001b[43m(\u001b[49m\u001b[43mrequest_parameters\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mstream\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mstream\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 972\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m stream:\n\u001b[0;32m 973\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m _stream_chat_completion_response(data) \u001b[38;5;66;03m# type: ignore[arg-type]\u001b[39;00m\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\huggingface_hub\\inference\\_client.py:327\u001b[0m, in \u001b[0;36mInferenceClient._inner_post\u001b[1;34m(self, request_parameters, stream)\u001b[0m\n\u001b[0;32m 324\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m InferenceTimeoutError(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mInference call timed out: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mrequest_parameters\u001b[38;5;241m.\u001b[39murl\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m) \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01merror\u001b[39;00m \u001b[38;5;66;03m# type: ignore\u001b[39;00m\n\u001b[0;32m 326\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m--> 327\u001b[0m \u001b[43mhf_raise_for_status\u001b[49m\u001b[43m(\u001b[49m\u001b[43mresponse\u001b[49m\u001b[43m)\u001b[49m\n\u001b[0;32m 328\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m response\u001b[38;5;241m.\u001b[39miter_lines() \u001b[38;5;28;01mif\u001b[39;00m stream \u001b[38;5;28;01melse\u001b[39;00m response\u001b[38;5;241m.\u001b[39mcontent\n\u001b[0;32m 329\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m HTTPError \u001b[38;5;28;01mas\u001b[39;00m error:\n",
"File \u001b[1;32m~\\anaconda3\\envs\\llms\\Lib\\site-packages\\huggingface_hub\\utils\\_http.py:477\u001b[0m, in \u001b[0;36mhf_raise_for_status\u001b[1;34m(response, endpoint_name)\u001b[0m\n\u001b[0;32m 473\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m _format(HfHubHTTPError, message, response) \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01me\u001b[39;00m\n\u001b[0;32m 475\u001b[0m \u001b[38;5;66;03m# Convert `HTTPError` into a `HfHubHTTPError` to display request information\u001b[39;00m\n\u001b[0;32m 476\u001b[0m \u001b[38;5;66;03m# as well (request id and/or server error message)\u001b[39;00m\n\u001b[1;32m--> 477\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m _format(HfHubHTTPError, \u001b[38;5;28mstr\u001b[39m(e), response) \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01me\u001b[39;00m\n",
"\u001b[1;31mHfHubHTTPError\u001b[0m: 402 Client Error: Payment Required for url: https://huggingface.co/api/inference-proxy/sambanova/v1/chat/completions (Request ID: Root=1-67afb729-1eb9aff1704314144ef14e59;2df843ad-b7d2-4145-bb7b-1cfd94ae19ef)\n\nYou have exceeded your monthly included credits for Inference Endpoints. Subscribe to PRO to get 20x more monthly allowance."
]
}
],
"source": [
"client = InferenceClient(\n",
"\tprovider=\"sambanova\",\n",
"\tapi_key=hf_token\n",
")\n",
"stream = client.chat.completions.create(\n",
"\tmodel=\"Qwen/Qwen2.5-Coder-32B-Instruct\", \n",
"\tmessages=messages, \n",
"\tmax_tokens=500,\n",
"\tstream=True\n",
")\n",
"\n",
"for chunk in stream:\n",
" print(chunk.choices[0].delta.content, end=\"\")"
]
},
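{
"cell_type": "markdown",
"id": "1a2b3c4d-5e6f-4a7b-8c9d-0e1f2a3b4c5d",
"metadata": {},
"source": [
"The cell above failed with a 402 \"Payment Required\" error: the free monthly inference allowance was exhausted. Below is an optional sketch for catching this cleanly rather than crashing the cell; `HfHubHTTPError` is the exception class shown in the traceback."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b3c4d5e-6f7a-4b8c-9d0e-1f2a3b4c5d6e",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch: catch provider/billing errors from the streaming call.\n",
"# HfHubHTTPError is the class raised in the traceback above.\n",
"from huggingface_hub.utils import HfHubHTTPError\n",
"\n",
"try:\n",
"    stream = client.chat.completions.create(\n",
"        model=\"Qwen/Qwen2.5-Coder-32B-Instruct\",\n",
"        messages=messages,\n",
"        max_tokens=500,\n",
"        stream=True\n",
"    )\n",
"    for chunk in stream:\n",
"        if chunk.choices and chunk.choices[0].delta.content:\n",
"            print(chunk.choices[0].delta.content, end=\"\")\n",
"except HfHubHTTPError as e:\n",
"    print(f\"Inference provider error (check credits/plan): {e}\")"
]
},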
{
"cell_type": "code",
"execution_count": 65,
"id": "cc0c3e9c-2572-41d1-a476-6eae96b20695",
"metadata": {},
"outputs": [],
"source": [
"# using inference providers\n",
"def stream_code_qwen(python):\n",
" messages = messages_for(python)\n",
" client = InferenceClient(\n",
" \tprovider=\"sambanova\",\n",
" \tapi_key=hf_token\n",
" )\n",
" stream = client.chat.completions.create(\n",
" \tmodel=\"Qwen/Qwen2.5-Coder-32B-Instruct\", \n",
" \tmessages=messages, \n",
" \tmax_tokens=500,\n",
" \tstream=True\n",
" )\n",
" result = \"\"\n",
" for chunk in stream:\n",
" if chunk.choices and chunk.choices[0].delta.content:\n",
" result += chunk.choices[0].delta.content\n",
" yield result"
]
},
{
"cell_type": "code",
"execution_count": 212,
"id": "a82387d1-7651-4923-995b-fe18356fcaa6",
"metadata": {},
"outputs": [],
"source": [
"def optimize(python, model, language):\n",
" if model==\"GPT\":\n",
" result = stream_gpt(python, language)\n",
" elif model==\"Claude\":\n",
" result = stream_claude(python, language)\n",
" elif model==\"CodeQwen\":\n",
" result = stream_code_qwen(python, language)\n",
" else:\n",
" raise ValueError(\"Unknown model\")\n",
" for stream_so_far in result:\n",
" yield stream_so_far "
]
},
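{
"cell_type": "markdown",
"id": "3c4d5e6f-7a8b-4c9d-8e0f-2a3b4c5d6e7f",
"metadata": {},
"source": [
"An optional example call for `optimize`, as a sketch: it assumes the `pi` sample program defined earlier in the notebook, and relies on each yielded value being the text accumulated so far, so only the final value is printed."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4d5e6f7a-8b9c-4d0e-9f1a-3b4c5d6e7f8a",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: stream the C++ translation of the pi sample and print the final text\n",
"final = \"\"\n",
"for so_far in optimize(pi, \"GPT\", \"cpp\"):\n",
"    final = so_far\n",
"print(final)"
]
},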
{
"cell_type": "code",
"execution_count": 213,
"id": "4ba311ec-c16a-4fe0-946b-4b940704cf65",
"metadata": {},
"outputs": [],
"source": [
"def select_sample_program(sample_program):\n",
" if sample_program==\"pi\":\n",
" return pi\n",
" elif sample_program==\"python_hard\":\n",
" return python_hard\n",
" else:\n",
" return \"Type your Python program here\""
]
},
{
"cell_type": "code",
"execution_count": 214,
"id": "06148e88-501b-4686-a41d-c3be528d8e6f",
"metadata": {},
"outputs": [],
"source": [
"def execute_cpp(code):\n",
" write_output(code, \"optimized.exe\")\n",
" try:\n",
" compile_cmd = [\"g++\", \"-Ofast\", \"-std=c++17\", \"-march=native\", \"-mtune=intel\", \"-o\", \"optimized\", \"optimized.cpp\"]\n",
" compile_result = subprocess.run(compile_cmd, check=True, text=True, capture_output=True)\n",
" run_cmd = [\"optimized.exe\"]\n",
" run_result = subprocess.run(run_cmd, check=True, text=True, capture_output=True)\n",
" return run_result.stdout\n",
" except subprocess.CalledProcessError as e:\n",
" return f\"An error occurred:\\n{e.stderr}\""
]
},
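{
"cell_type": "markdown",
"id": "5e6f7a8b-9c0d-4e1f-8a2b-4c5d6e7f8a9b",
"metadata": {},
"source": [
"Generated code can loop or hang, so here is an optional hardening sketch: the same compile-and-run flow as `execute_cpp` above, with a `timeout` on both `subprocess.run` calls. The 60-second value is illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f7a8b9c-0d1e-4f2a-9b3c-5d6e7f8a9b0c",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: execute_cpp with timeouts so a runaway binary can't hang the notebook\n",
"def execute_cpp_safe(code, timeout_s=60):\n",
"    write_output(code, \"optimized.cpp\")\n",
"    try:\n",
"        compile_cmd = [\"g++\", \"-Ofast\", \"-std=c++17\", \"-o\", \"optimized\", \"optimized.cpp\"]\n",
"        subprocess.run(compile_cmd, check=True, text=True, capture_output=True, timeout=timeout_s)\n",
"        run_result = subprocess.run([\"optimized.exe\"], check=True, text=True, capture_output=True, timeout=timeout_s)\n",
"        return run_result.stdout\n",
"    except subprocess.TimeoutExpired:\n",
"        return f\"Timed out after {timeout_s} seconds\"\n",
"    except subprocess.CalledProcessError as e:\n",
"        return f\"An error occurred:\\n{e.stderr}\""
]
},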
{
"cell_type": "code",
"execution_count": 236,
"id": "a42e3871-f3a5-4f14-836c-1e8ecacb56b5",
"metadata": {},
"outputs": [],
"source": [
"def execute_java(code):\n",
" # Extract the class name from the Java code\n",
" match = re.search(r\"\\b(public\\s+)?class\\s+(\\w+)\", code)\n",
" class_name = match.group(2) if match else \"OptimizedJava\"\n",
"\n",
" file_name = f\"{class_name}.java\"\n",
" write_output(code, file_name)\n",
" try:\n",
" compile_cmd =[\"javac\", file_name]\n",
" subprocess.run(compile_cmd, check=True, text=True, capture_output=True)\n",
" run_cmd = [\"java\", class_name]\n",
" run_result = subprocess.run(run_cmd, check=True, text=True, capture_output=True)\n",
" return run_result.stdout\n",
" except subprocess.CalledProcessError as e:\n",
" return f\"Error during compilation or execution:\\n{e.stderr}\""
]
},
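{
"cell_type": "markdown",
"id": "7a8b9c0d-1e2f-4a3b-8c4d-6e7f8a9b0c1d",
"metadata": {},
"source": [
"A quick optional check of the class-name regex used above, on a hypothetical snippet: it should extract the declared class name and would fall back to `OptimizedJava` if nothing matched."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b9c0d1e-2f3a-4b4c-9d5e-7f8a9b0c1d2e",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: verify the class-name extraction on a sample string\n",
"import re  # already imported earlier in the notebook\n",
"\n",
"sample = \"public class Optimized { }\"\n",
"match = re.search(r\"\\b(public\\s+)?class\\s+(\\w+)\", sample)\n",
"print(match.group(2) if match else \"OptimizedJava\")  # expected: Optimized"
]
},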
{
"cell_type": "code",
"execution_count": 238,
"id": "f9ca2e6f-60c1-4e5f-b570-63c75b2d189b",
"metadata": {
"scrolled": true
},
"outputs": [
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7901/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 238,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with gr.Blocks(css=css) as ui:\n",
" gr.Markdown(\"## Convert code from Python to C++ or Java\")\n",
" #input and output\n",
" with gr.Row():\n",
" python = gr.Textbox(label=\"Python code:\", value=python_hard, lines=10)\n",
" converted_code = gr.Textbox(label=\"Converted code:\", lines=10)\n",
" # java = gr.Textbox(label=\"Java code:\", lines=10)\n",
" #sample programs\n",
" with gr.Row():\n",
" with gr.Column():\n",
" sample_program = gr.Radio([\"pi\", \"python_hard\"], label=\"Sample program\", value=\"python_hard\")\n",
" #select model and language\n",
" with gr.Row():\n",
" with gr.Column():\n",
" model = gr.Dropdown([\"GPT\", \"Claude\", \"CodeQwen\"], label=\"Select model\", value=\"GPT\")\n",
" language = gr.Dropdown([\"C++\",\"Java\"], label=\"Select language\", value=\"C++\")\n",
" with gr.Row():\n",
" convert = gr.Button(\"Convert\")\n",
" #Code execution\n",
" with gr.Row():\n",
" python_run = gr.Button(\"Run Python\")\n",
" converted_run = gr.Button(\"Run converted code\")\n",
" with gr.Row():\n",
" python_out = gr.TextArea(label=\"Python result:\", elem_classes=[\"python\"])\n",
" output = gr.TextArea(label=\"Converted code result:\", elem_classes=[\"cpp\"])\n",
" \n",
" # Function to convert Python code based on language\n",
" def convert_code(python_code, model, selected_language):\n",
" if selected_language == \"C++\":\n",
" for chunk in optimize(python_code, model, \"cpp\"):\n",
" yield chunk # Stream each chunk\n",
" elif selected_language == \"Java\":\n",
" for chunk in optimize(python_code, model, \"java\"):\n",
" yield chunk\n",
" return \"\"\n",
"\n",
" # Function to execute converted code\n",
" def run_code(converted_code, selected_language):\n",
" if selected_language == \"C++\":\n",
" return execute_cpp(converted_code)\n",
" elif selected_language == \"Java\":\n",
" return execute_java(converted_code)\n",
" return \"Invalid language selection\"\n",
"\n",
" sample_program.change(select_sample_program, inputs=[sample_program], outputs=[python])\n",
" convert.click(convert_code, inputs=[python, model, language], outputs=[converted_code])\n",
" converted_run.click(run_code, inputs=[converted_code, language], outputs=[output]) \n",
" python_run.click(execute_python, inputs=[python], outputs=[python_out])\n",
"\n",
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d0ad093-425b-488e-8c3f-67f729dd9c06",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

3
week4/day4.ipynb

@@ -480,8 +480,9 @@
"source": [
"def execute_cpp(code):\n",
" write_output(code)\n",
" compiler_cmd = [\"clang++\", \"-O3\", \"-std=c++17\", \"-march=armv8.3-a\", \"-o\", \"optimized\", \"optimized.cpp\"]\n",
" try:\n",
" compile_result = subprocess.run(compiler_cmd[2], check=True, text=True, capture_output=True)\n",
" compile_result = subprocess.run(compiler_cmd, check=True, text=True, capture_output=True)\n",
" run_cmd = [\"./optimized\"]\n",
" run_result = subprocess.run(run_cmd, check=True, text=True, capture_output=True)\n",
" return run_result.stdout\n",

767
week5/community-contributions/day4.5 - langchain faiss ollama.ipynb

@@ -0,0 +1,767 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d032fee4-7e37-43d5-974e-b6d29f0813e9",
"metadata": {},
"source": [
"# Langchain + FAISS + Ollama\n",
"\n",
"This workbook demonstrate RAG with FAISS & ollama (local)\n",
"\n",
"**Notes:**\n",
"1. Istall ollama: https://ollama.com/download\n",
"2. Download ollama embed: `ollama pull nomic-embed-text`\n",
"3. Download the model (eg. `ollama pull mistral-openorca`)\n",
"4. `ollama serve`"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "63c10e01-860b-43ce-b6aa-d8cbfa91f630",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import glob\n",
"\n",
"import gradio as gr"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "afeec974-6332-46b3-adf9-a19110b92a50",
"metadata": {},
"outputs": [],
"source": [
"# imports for langchain\n",
"\n",
"from langchain.document_loaders import DirectoryLoader, TextLoader\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.schema import Document\n",
"\n",
"from langchain.vectorstores import FAISS\n",
"\n",
"from langchain.memory import ConversationBufferMemory\n",
"from langchain.chains import ConversationalRetrievalChain"
]
},
{
"cell_type": "markdown",
"id": "05d1dec0-2695-4057-817f-d3403ab88669",
"metadata": {},
"source": [
"## Directory setup"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5a759958-8a8d-48f0-b34c-a435d97faad9",
"metadata": {},
"outputs": [],
"source": [
"# Read in documents using LangChain's loaders\n",
"# Take everything in all the sub-folders of our knowledgebase\n",
"\n",
"folders = glob.glob(\"knowledge-base/*\")\n",
"\n",
"# With thanks to CG and Jon R, students on the course, for this fix needed for some users \n",
"text_loader_kwargs = {'encoding': 'utf-8'}\n",
"# If that doesn't work, some Windows users might need to uncomment the next line instead\n",
"# text_loader_kwargs={'autodetect_encoding': True}\n",
"\n",
"documents = []\n",
"for folder in folders:\n",
" doc_type = os.path.basename(folder)\n",
" loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n",
" folder_docs = loader.load()\n",
" for doc in folder_docs:\n",
" doc.metadata[\"doc_type\"] = doc_type\n",
" documents.append(doc)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cc7ec6de-22bc-4f07-b41e-cce55bfceb29",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"31"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# This step is to make sure setup above works\n",
"len(documents)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "afe5b057-a99e-4566-8984-bb5ae5831de5",
"metadata": {},
"outputs": [],
"source": [
"text_splitter = CharacterTextSplitter(chunk_size=1300, chunk_overlap=200)\n",
"chunks = text_splitter.split_documents(documents)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e14c7c99-628b-4a3a-ad0b-f2df20762db8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"91"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(chunks)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "ddd22f41-6f0e-49ea-b5a5-5f8cde314aa9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document types found: products, employees, company, contracts\n"
]
}
],
"source": [
"doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n",
"print(f\"Document types found: {', '.join(doc_types)}\")"
]
},
{
"cell_type": "markdown",
"id": "33f02751-9d75-459a-8039-d7e5bb962d9b",
"metadata": {},
"source": [
"## Convert to vector database\n",
"\n",
"In this version, I use `nomic-embed-text' from ollama library: https://ollama.com/library/nomic-embed-text.\n",
"Not sure if there are another embed model."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "1e2b354d-ff60-4de3-8ace-192da1501563",
"metadata": {},
"outputs": [],
"source": [
"from langchain_ollama import OllamaEmbeddings, ChatOllama\n",
"\n",
"db_name = \"knowledge_base_1\"\n",
"embeddings = OllamaEmbeddings(model=\"nomic-embed-text\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "845e9f81-6a50-4088-a921-b68faff18976",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"There are 91 vectors with 768 dimensions in the vector store\n"
]
}
],
"source": [
"# Create our FAISS vectorstore!\n",
"\n",
"vectorstore = FAISS.from_documents(chunks, embedding=embeddings)\n",
"\n",
"total_vectors = vectorstore.index.ntotal\n",
"dimensions = vectorstore.index.d\n",
"\n",
"print(f\"There are {total_vectors} vectors with {dimensions:,} dimensions in the vector store\")"
]
},
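{
"cell_type": "markdown",
"id": "9c0d1e2f-3a4b-4c5d-8e6f-8a9b0c1d2e3f",
"metadata": {},
"source": [
"Optionally, the index can be persisted so it doesn't have to be rebuilt every session. A sketch follows using FAISS's `save_local`/`load_local`; recent LangChain versions require `allow_dangerous_deserialization=True` on load because the docstore is pickled."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d1e2f3a-4b5c-4d6e-9f7a-9b0c1d2e3f4a",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: persist and reload the FAISS index (db_name was defined above)\n",
"vectorstore.save_local(db_name)\n",
"# To reload in a later session:\n",
"# vectorstore = FAISS.load_local(db_name, embeddings, allow_dangerous_deserialization=True)"
]
},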
{
"cell_type": "markdown",
"id": "50b5ae64-ca8c-470c-b546-591feebb0217",
"metadata": {},
"source": [
"## Chat setup\n",
"\n",
"I add `callbacks=[StdOutCallbackHandler()]` parameter to see what exactly retriever passed to the model as Ed teach us."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "e8cfeedd-496f-4e66-8682-aff4bf8d360e",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"C:\\Users\\Studio10\\AppData\\Local\\Temp\\ipykernel_3188\\683569231.py:8: LangChainDeprecationWarning: Please see the migration guide at: https://python.langchain.com/docs/versions/migrating_memory/\n",
" memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n"
]
}
],
"source": [
"# create a new Chat with Ollama\n",
"from langchain_core.callbacks import StdOutCallbackHandler\n",
"\n",
"MODEL = \"mistral-openorca\"\n",
"llm = ChatOllama(temperature=0.7, model=MODEL)\n",
"\n",
"# set up the conversation memory for the chat\n",
"memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n",
"\n",
"# the retriever is an abstraction over the VectorStore that will be used during RAG\n",
"retriever = vectorstore.as_retriever(search_kwargs={\"k\": 13})\n",
"\n",
"# putting it together: set up the conversation chain with the GPT 4o-mini LLM, the vector store and memory\n",
"conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, \n",
" retriever=retriever, \n",
" memory=memory, \n",
" callbacks=[StdOutCallbackHandler()])"
]
},
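{
"cell_type": "markdown",
"id": "1e2f3a4b-5c6d-4e7f-8a8b-0c1d2e3f4a5b",
"metadata": {},
"source": [
"Before chatting, it can help to peek at what the retriever actually returns; with `k=13` expect up to 13 chunks. A small optional sketch (the query string is just an example; newer LangChain versions also support `retriever.invoke`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f3a4b5c-6d7e-4f8a-9b9c-1d2e3f4a5b6c",
"metadata": {},
"outputs": [],
"source": [
"# Sketch: inspect the retrieved chunks directly\n",
"retrieved = retriever.get_relevant_documents(\"Who founded Insurellm?\")\n",
"print(f\"{len(retrieved)} chunks retrieved\")\n",
"for doc in retrieved[:3]:\n",
"    print(doc.metadata.get(\"doc_type\"), \"-\", doc.page_content[:80])"
]
},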
{
"cell_type": "code",
"execution_count": 12,
"id": "4dc8f215-e076-4ae2-a7b7-6d663a358ae3",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"4"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# run a quick test - should return a list of documents = 4\n",
"question = \"Insurellm\"\n",
"docs = vectorstore.similarity_search(question)\n",
"len(docs)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "6328550f-5422-4e4f-a9d5-b9c4bca229d5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'knowledge-base\\\\company\\\\about.md', 'doc_type': 'company'}, page_content=\"# About Insurellm\\n\\nInsurellm was founded by Avery Lancaster in 2015 as an insurance tech startup designed to disrupt an industry in need of innovative products. It's first product was Markellm, the marketplace connecting consumers with insurance providers.\\nIt rapidly expanded, adding new products and clients, reaching 200 emmployees by 2024 with 12 offices across the US.\")"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0]"
]
},
{
"cell_type": "markdown",
"id": "772cca6a-fc02-47e3-bceb-9076a3eeac56",
"metadata": {},
"source": [
"## Let's Chat!!"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "d7db3fae-6f17-46da-a098-65bb9467918e",
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new ConversationalRetrievalChain chain...\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mGiven the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.\n",
"\n",
"Chat History:\n",
"\n",
"Human: tell me about Insurellm\n",
"Assistant: Insurellm is an innovative insurance company that specializes in providing personalized, tech-driven insurance solutions to individuals and businesses. They offer a wide range of insurance products and services, including life, health, property, and auto insurance. The company leverages the latest technology and data analytics to provide customers with tailored policies that meet their specific needs.\n",
"\n",
"Some key features and benefits of Insurellm include:\n",
"\n",
"1. Personalized Insurance Solutions: Insurellm uses advanced data analysis techniques to understand your unique risk profile and tailor insurance policies accordingly. This ensures that you receive the best possible coverage at competitive rates.\n",
"\n",
"2. Seamless Digital Experience: Insurellm offers a user-friendly online platform that allows customers to access their policy information, manage claims, and renew policies with ease. The platform is designed for a seamless digital experience, making it simple to navigate and use.\n",
"\n",
"3. Real-Time Quotes: With Insurellm's proprietary algorithms, users can receive real-time quotes for various insurance products based on their individual risk profile and preferences. This enables customers to make informed decisions when selecting policies that suit their needs and budget.\n",
"\n",
"4. Robust Risk Management Tools: The company provides a range of tools and resources to help individuals and businesses manage risks proactively, including loss prevention tips, risk assessments, and training programs.\n",
"\n",
"5. Dedicated Customer Support: Insurellm offers dedicated customer support teams available 24/7 to assist with any inquiries or concerns. Their professionals are well-trained and knowledgeable, ensuring that customers receive prompt and effective assistance.\n",
"\n",
"6. Claims Management: In case of an insured event, Insurellm's claims management team works efficiently to process the claim quickly, providing support and guidance throughout the entire process.\n",
"\n",
"7. Transparent Pricing: Insurellm is committed to transparent pricing, ensuring that customers are well-informed about their insurance coverage and costs. They offer various options for payment, including flexible monthly installments, making it easy for clients to manage their policies.\n",
"\n",
"In conclusion, Insurellm is a dynamic insurance company that offers personalized, tech-driven solutions to cater to the unique needs of individuals and businesses. Their commitment to innovation, customer satisfaction, and transparency makes them a valuable partner in managing risks and ensuring financial security.\n",
"Follow Up Input: tell me about Insurellm\n",
"Standalone question:\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new StuffDocumentsChain chain...\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mSystem: Use the following pieces of context to answer the user's question. \n",
"If you don't know the answer, just say that you don't know, don't try to make up an answer.\n",
"----------------\n",
"# Overview of Insurellm\n",
"\n",
"Insurellm is an innovative insurance tech firm with 200 employees across the US.\n",
"Insurellm offers 4 insurance software products:\n",
"- Carllm, a portal for auto insurance companies\n",
"- Homellm, a portal for home insurance companies\n",
"- Rellm, an enterprise platform for the reinsurance sector\n",
"- Marketllm, a marketplace for connecting consumers with insurance providers\n",
" \n",
"Insurellm has more than 300 clients worldwide.\n",
"\n",
"# About Insurellm\n",
"\n",
"Insurellm was founded by Avery Lancaster in 2015 as an insurance tech startup designed to disrupt an industry in need of innovative products. It's first product was Markellm, the marketplace connecting consumers with insurance providers.\n",
"It rapidly expanded, adding new products and clients, reaching 200 emmployees by 2024 with 12 offices across the US.\n",
"\n",
"---\n",
"\n",
"## Features\n",
"\n",
"1. **Access to Core Features**: Roadway Insurance Inc. will have access to all Professional Tier features, including:\n",
" - AI-Powered Risk Assessment\n",
" - Advanced Analytics & Fraud Detection\n",
" - Instant Quoting System\n",
" - Customizable Coverage Plans\n",
" - Customer Insights Dashboard\n",
" \n",
"2. **Mobile Integration**: All features will be accessible through a mobile application that Insurellm will provide.\n",
"3. **Customer Support**: Includes 24/7 automated customer support via AI chatbots and access to dedicated account management support during business hours.\n",
"\n",
"---\n",
"\n",
"## Support\n",
"\n",
"1. **Technical Support**: Roadway Insurance Inc. will receive priority technical support from Insurellm for any issues arising from the Carllm product.\n",
"2. **Training**: Insurellm will provide up to 5 training sessions for Roadway Insurance Inc. staff on the effective use of the Carllm platform, scheduled at mutual convenience.\n",
"3. **Updates and Maintenance**: Regular updates to the Carllm platform will be conducted quarterly, and any maintenance outages will be communicated at least 48 hours in advance.\n",
"\n",
"---\n",
"\n",
"## Support\n",
"Insurellm provides Stellar Insurance Co. with the following support services:\n",
"\n",
"- **24/7 Technical Support**: Access to dedicated support representatives via phone and online chat.\n",
"- **Quarterly Account Review**: Meetings to discuss performance metrics and uncover additional needs.\n",
"- **Training Sessions**: Initial orientation and ongoing training opportunities to maximize the effectiveness of Rellm usage.\n",
"- **Updates and Upgrades**: Regular software updates and enhancements are included as part of the subscription.\n",
"\n",
"Stellar Insurance Co. acknowledges receipt of the Rellm product summary and agrees to the terms set forth above. By signing below, both parties confirm their acceptance of this contract.\n",
"\n",
"**For Insurellm** \n",
"______________________________ \n",
"[Signature] \n",
"[Name, Title] \n",
"[Date] \n",
"\n",
"**For Stellar Insurance Co.** \n",
"______________________________ \n",
"[Signature] \n",
"[Name, Title] \n",
"[Date]\n",
"\n",
"## Features\n",
"1. **AI-Powered Matching**: Belvedere Insurance will benefit from Markellm's AI-powered matching, ensuring the best-fit customers are identified and connected.\n",
"2. **Real-Time Quotes**: Access to real-time quotes will enhance the customer acquisition process, facilitating timely and informed decision-making.\n",
"3. **Data Insights**: Client shall have access to Markellm's analytics dashboard, allowing insights into consumer behavior and market trends.\n",
"4. **Customization Options**: Belvedere Insurance can leverage optional premium features and analytics upon payment of an additional $9.99/month.\n",
"5. **Customer Support**: Insurellm will provide dedicated support to Belvedere Insurance, ensuring any issues or queries are promptly addressed.\n",
"\n",
"## Support\n",
"1. **Technical Support**: Technical support will be available from 9 AM to 7 PM EST, Monday through Friday via email and phone.\n",
"2. **Response Times**: Insurellm agrees to respond to all support queries within 24 business hours. Emergency support will be prioritized throughout the contract period.\n",
"3. **Training**: Insurellm will offer a comprehensive training session for the Client’s staff upon beginning the service to ensure effective utilization of the features.\n",
"\n",
"## Pricing\n",
"\n",
"Insurellm offers flexible pricing plans for Rellm to cater to various business needs:\n",
"\n",
"- **Basic Plan**: $5,000/month\n",
" - Includes access to core features and standard integrations.\n",
" \n",
"- **Professional Plan**: $10,000/month\n",
" - Includes all features, advanced integrations, and priority customer support.\n",
" \n",
"- **Enterprise Plan**: Custom pricing\n",
" - Tailored solutions with personalized features, extensive integrations, and dedicated account management.\n",
"\n",
"Join the growing number of organizations leveraging Rellm to enhance their reinsurance processes while driving profitability and compliance. \n",
"\n",
"## 2025-2026 Roadmap\n",
"\n",
"At Insurellm, we are committed to the continuous improvement of Rellm. Our roadmap for 2025-2026 includes:\n",
"\n",
"- **Q3 2025**: \n",
" - Launch of the Rellm Mobile App for on-the-go insights and management.\n",
" - Introduction of augmented reality (AR) features for interactive risk assessments.\n",
"\n",
"- **Q1 2026**: \n",
" - Deployment of advanced machine learning models for even more accurate risk predictions.\n",
" - Expansion of integration capabilities to support emerging technologies in the insurance sector.\n",
"\n",
"# Product Summary\n",
"\n",
"# Homellm\n",
"\n",
"## Summary\n",
"Homellm is an innovative home insurance product developed by Insurellm that leverages advanced AI technology to revolutionize the way insurance providers offer coverage to homeowners. Designed for both B2B and B2C segments, Homellm empowers insurers to provide personalized, data-driven policies, enhancing customer experience while minimizing risk and operational costs. By integrating seamlessly with existing systems, Homellm helps insurance companies streamline their processes and stay competitive in the ever-evolving insurance industry.\n",
"\n",
"## Features\n",
"### 1. AI-Powered Risk Assessment\n",
"Homellm utilizes sophisticated AI algorithms to analyze vast datasets, allowing insurance companies to assess risks accurately. This feature provides real-time insights for underwriting decisions, enabling insurers to tailor policies to individual customer needs.\n",
"\n",
"### 2. Dynamic Pricing Model\n",
"With Homellm's innovative dynamic pricing model, insurance providers can offer flexible premiums based on real-time risk evaluations and historical data. This adaptability ensures that customers pay a fair price that accurately reflects their unique risk profile.\n",
"\n",
"## Features\n",
"\n",
"1. **Included Features**: Under the Professional Tier, TechDrive Insurance will have access to the following features of Carllm:\n",
" - AI-Powered Risk Assessment\n",
" - Instant Quoting\n",
" - Customizable Coverage Plans\n",
" - Fraud Detection\n",
" - Customer Insights Dashboard\n",
" - Mobile Integration\n",
" - Automated Customer Support\n",
"\n",
"2. **System Requirements**: TechDrive Insurance must ensure that their existing systems meet the technical requirements to integrate with Carllm, as outlined in the onboarding documentation provided by Insurellm.\n",
"\n",
"## Support\n",
"\n",
"1. **Customer Support**: Insurellm will provide 24/7 customer support to TechDrive Insurance via AI-driven chatbots, ensuring timely resolution of inquiries and issues.\n",
"\n",
"2. **Training**: TechDrive Insurance staff will receive onboarding training sessions to ensure effective utilization of the Carllm platform, scheduled within the first two weeks of contract commencement.\n",
"\n",
"3. **System Updates**: The Provider will push regular updates to improve system performance and add new features. TechDrive Insurance will receive prior notification of any significant upgrades that may affect current operations.\n",
"\n",
"---\n",
"\n",
"**Signatures:**\n",
"\n",
"**Insurellm Representative:** \n",
"Name: John Smith \n",
"Title: Account Manager \n",
"Date: ____________\n",
"\n",
"## Support\n",
"1. **Technical Support**: Insurellm shall provide 24/7 technical support via an email and phone assistance for the duration of this Contract.\n",
"2. **Training**: Insurellm will conduct an onsite training session for Client employees upon implementation, and quarterly training webinars will be made available thereafter.\n",
"3. **Updates and Maintenance**: Insurellm will provide regular system updates and maintenance, ensuring that the software is operating at peak efficiency.\n",
"\n",
"By signing below, both parties agree to the terms set forth in this Contract for the use of the Homellm product.\n",
"\n",
"____ \n",
"**Insurellm Authorized Signature** \n",
"Name: Sarah Johnson \n",
"Title: VP of Sales \n",
"Date: ____________\n",
"\n",
"____ \n",
"**Pinnacle Insurance Co. Authorized Signature** \n",
"Name: Tom Anderson \n",
"Title: Chief Operating Officer \n",
"Date: ____________\n",
"\n",
"- **Predictive Maintenance Alerts**: The Customer will receive alerts regarding potential maintenance needs for insured properties, enhancing client satisfaction and reducing claims.\n",
"\n",
"- **Multi-Channel Integration**: Homellm will integrate with the Customer's existing platforms to create seamless service delivery.\n",
"\n",
"- **Customer Portal**: A dedicated portal will be provided, allowing the Customer's clients to manage their accounts 24/7.\n",
"\n",
"---\n",
"\n",
"## Support\n",
"\n",
"1. **Training**: Provider will offer a comprehensive training program at the start of the term to ensure the Customer's staff can effectively use the Homellm product.\n",
"\n",
"2. **Ongoing Support**: The Provider will supply ongoing technical support via email and phone during business hours (9 am - 5 pm EST) throughout the contract duration.\n",
"\n",
"3. **Updates and Improvements**: Customer will receive all software updates and feature enhancements as they become available, without additional charge.\n",
"\n",
"--- \n",
"\n",
"**AGREEMENT SIGNATURES**\n",
"\n",
"By signing below, the parties acknowledge their acceptance of the terms of this Agreement.\n",
"\n",
"**For Insurellm:**\n",
"\n",
"______________________________ \n",
"[Name], [Title] \n",
"Date: ______________________\n",
"\n",
"**For Greenstone Insurance:**\n",
"\n",
"______________________________ \n",
"[Name], [Title] \n",
"Date: ______________________\n",
"\n",
"---\n",
"\n",
"### Support\n",
"1. **Dedicated Customer Support:** \n",
" BrightWay Solutions will have access to a dedicated support team from Insurellm during standard business hours (9 AM - 7 PM EST).\n",
"\n",
"2. **Additional Support Services:** \n",
" Technical support for integration and maintenance will be available. An optional premium support package can be purchased for $49.99/month, which includes 24/7 support and advanced troubleshooting.\n",
"\n",
"3. **Training and Onboarding:** \n",
" Insurellm agrees to provide one free training session on how to utilize the Markellm platform effectively for BrightWay Solutions’ team upon contract signing.\n",
"\n",
"### Signatures\n",
"By signing below, both parties agree to the terms and conditions outlined in this Contract.\n",
"\n",
"__________________________ \n",
"**[Name], [Title]** \n",
"**Insurellm** \n",
"Date: ______________________\n",
"\n",
"__________________________ \n",
"**[Name], [Title]** \n",
"**BrightWay Solutions** \n",
"Date: ______________________\n",
"\n",
"---\n",
"\n",
"This document serves as a formal agreement between Insurellm and BrightWay Solutions, ensuring a successful partnership focused on enhancing the insurance shopping experience for consumers.\n",
"\n",
"4. **Client Portal Access**: EverGuard Insurance will have access to both client and broker portals, enhancing communication and collaboration with its partners.\n",
"\n",
"## Support\n",
"\n",
"1. **Customer Support**: Insurellm will provide EverGuard Insurance with 24/7 customer support, including live chat, email, and phone assistance for any technical issues or inquiries regarding Rellm.\n",
"\n",
"2. **Training Services**: Insurellm will provide initial training for EverGuard Insurance staff to ensure proper utilization of Rellm features. Additional training sessions can be scheduled upon request at an agreed fee.\n",
"\n",
"3. **Updates and Upgrades**: EverGuard Insurance will receive all platform updates and upgrades at no additional cost during the contract term, including enhancements outlined in Insurellm’s 2025-2026 roadmap.\n",
"\n",
"4. **Feedback Mechanisms**: EverGuard Insurance is encouraged to provide feedback regarding Rellm’s functionalities and any desired features, which will be considered for future updates.\n",
"\n",
"---\n",
"\n",
"**Signatures** \n",
"**For Insurellm**: __________________________ \n",
"**Name**: John Smith \n",
"**Title**: Chief Operating Officer \n",
"**Date**: _________________\n",
"\n",
"---\n",
"\n",
"## Support\n",
"\n",
"Insurellm commits to providing comprehensive support to GreenValley Insurance, which includes:\n",
"\n",
"1. **Onboarding:** An extensive training program for the GreenValley staff to ensure effective use of Homellm.\n",
" \n",
"2. **Dedicated Support Team:** A dedicated support team available 24/7 to address any technical issues or inquiries.\n",
"\n",
"3. **Regular Updates:** Insurellm will offer ongoing updates and enhancements to the Homellm platform, including new features and security improvements.\n",
"\n",
"4. **Feedback Implementation:** Insurellm will actively solicit feedback from GreenValley Insurance to ensure Homellm continues to meet their evolving needs.\n",
"\n",
"---\n",
"\n",
"**Signatures:**\n",
"\n",
"_________________________________ \n",
"**[Name]** \n",
"**Title**: CEO \n",
"**Insurellm, Inc.**\n",
"\n",
"_________________________________ \n",
"**[Name]** \n",
"**Title**: COO \n",
"**GreenValley Insurance, LLC** \n",
"\n",
"---\n",
"\n",
"This agreement represents the complete understanding of both parties regarding the use of the Homellm product and supersedes any prior agreements or communications.\n",
"Human: What services and features does Insurellm provide for personalized insurance solutions?\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"Answer: Insurellm provides various services and features for personalized insurance solutions, which include:\n",
"\n",
"1. Customized Coverage: Insurellm offers tailored insurance policies based on individual needs and preferences. This means that customers can choose the coverage that best suits their requirements, ensuring they are not overpaying or underinsured.\n",
"\n",
"2. Digital Insurance Platform: The platform provides a user-friendly digital interface for customers to manage their policies, make claims, and access various insurance services. This simplifies the process and makes it more convenient for policyholders.\n",
"\n",
"3. Data-driven Analysis: Insurellm utilizes advanced data analytics and machine learning algorithms to analyze customer behavior, preferences, and risk factors to provide personalized insurance recommendations. This helps to improve the accuracy of coverage and pricing.\n",
"\n",
"4. Personalized Pricing: Insurellm uses various factors such as age, lifestyle, health, location, and more to determine the most suitable insurance plan at the best possible price for each customer.\n",
"\n",
"5. Policy Comparison: The platform allows customers to compare different insurance policies from multiple providers based on their specific needs. This helps them make an informed decision and choose the best plan that meets their requirements.\n",
"\n",
"6. Claims Management: Insurellm offers a streamlined claims management process, allowing policyholders to file claims, track progress, and receive updates through the digital platform.\n",
"\n",
"7. Risk Assessment and Prevention: The platform utilizes data analysis and predictive modeling to identify potential risks for customers. This helps in preventing losses by offering advice on risk-mitigation strategies and adjusting insurance coverage accordingly.\n",
"\n",
"8. Dynamic Coverage: Insurellm enables policyholders to adjust their coverage according to life events or changes in circumstances, ensuring that they are always protected as their needs evolve.\n",
"\n",
"9. Multi-policy Discounts: By offering multiple types of insurance policies under one platform, customers can benefit from discounts for bundling their coverages, making it more cost-effective.\n",
"\n",
"10. 24/7 Customer Support: Insurellm provides round-the-clock customer support to assist policyholders with any queries or concerns they may have related to their insurance policies.\n"
]
}
],
"source": [
"# Chat test\n",
"query = \"tell me about Insurellm\"\n",
"result = conversation_chain.invoke({\"question\": query})\n",
"answer = result[\"answer\"]\n",
"print(\"\\nAnswer:\", answer)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "102a8222-f4a6-4c87-8cce-86a89adb19e9",
"metadata": {},
"outputs": [],
"source": [
"# Clearing memory before new chat\n",
"memory.clear()"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "a2377cef-1ea6-4395-8147-db9c684fdb65",
"metadata": {},
"outputs": [],
"source": [
"def chat_gradio(question, history):\n",
" result = conversation_chain.invoke({\"question\": question})\n",
" return result[\"answer\"]"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "4c357dd8-252c-4dd5-85be-277d3ed53d79",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"* Running on local URL: http://127.0.0.1:7860\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7860/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Ui launch\n",
"view = gr.ChatInterface(chat_gradio, type=\"messages\").launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ebeff830-49f0-4bf6-a3b2-10ba038db1c0",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

61
week5/day4.5.ipynb

@@ -14,7 +14,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77",
"metadata": {},
"outputs": [],
@@ -29,7 +29,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "802137aa-8a74-45e0-a487-d1974927d7ca",
"metadata": {},
"outputs": [],
@@ -51,7 +51,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "58c85082-e417-4708-9efe-81a5d55d1424",
"metadata": {},
"outputs": [],
@@ -64,7 +64,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "ee78efcb-60fe-449e-a944-40bab26261af",
"metadata": {},
"outputs": [],
@@ -77,7 +77,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "730711a9-6ffe-4eee-8f48-d6cfb7314905",
"metadata": {},
"outputs": [],
@@ -104,18 +104,10 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Created a chunk of size 1088, which is longer than the specified 1000\n"
]
}
],
"outputs": [],
"source": [
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
"chunks = text_splitter.split_documents(documents)"
@@ -123,39 +115,20 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"123"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"len(chunks)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "2c54b4b6-06da-463d-bee7-4dd456c2b887",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document types found: company, employees, contracts, products\n"
]
}
],
"outputs": [],
"source": [
"doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n",
"print(f\"Document types found: {', '.join(doc_types)}\")"
@@ -184,18 +157,10 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "78998399-ac17-4e28-b15f-0b5f51e6ee23",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"There are 123 vectors with 1,536 dimensions in the vector store\n"
]
}
],
"outputs": [],
"source": [
"# Put the chunks of data into a Vector Store that associates a Vector Embedding with each chunk\n",
"# Chroma is a popular open source Vector Database based on SQLLite\n",

159
week8/community_contributions/Deploy_Deepseek_r1_on_Modal.ipynb

File diff suppressed because one or more lines are too long

BIN
week8/community_contributions/images/gui.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 305 KiB

BIN
week8/community_contributions/images/metrics.png

Binary file not shown.

After

Width:  |  Height:  |  Size: 786 KiB

21
week8/day4.ipynb

@@ -67,12 +67,23 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0056a02f-06a3-4acc-99f3-cbe919ee936b",
"cell_type": "markdown",
"id": "7f2781ad-e122-4570-8fad-a2fe6452414e",
"metadata": {},
"outputs": [],
"source": []
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Additional resource: more sophisticated planning agent</h2>\n",
" <span style=\"color:#f71;\">The Planning Agent that we use in the next cell is simply a python script that calls the other Agents; frankly that's all we require for this project. But if you're intrigued to see a more Autonomous version in which we give the Planning Agent tools and allow it to decide which Agents to call, see my implementation of <a href=\"https://github.com/ed-donner/agentic/blob/main/workshop/agents/autonomous_planning_agent.py\">AutonomousPlanningAgent</a> in my related repo, <a href=\"https://github.com/ed-donner/agentic\">Agentic</a>. This is an example with multiple tools that dynamically decides which function to call.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",

2
week8/day5.ipynb

@@ -169,7 +169,7 @@
" <td>\n",
" <h2 style=\"color:#090;\">CONGRATULATIONS AND THANK YOU!!!</h2>\n",
" <span style=\"color:#090;\">\n",
" It's so fabulous that you've made it to the end! My heartiest congratulations. Please stay in touch! I'm <a href=\"https://www.linkedin.com/in/eddonner/\">here on LinkedIn</a> if we're not already connected. And my editor would be cross with me if I didn't mention one more time: it makes a HUGE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others. <br/><br/>Thanks once again for working all the way through the course, and I'm excited to hear all about your career as an LLM Engineer.\n",
" It's so fabulous that you've made it to the end! My heartiest congratulations. Please stay in touch! I'm <a href=\"https://www.linkedin.com/in/eddonner/\">here on LinkedIn</a> if we're not already connected and I'm on X at <a href=\"https://x.com/edwarddonner\">@edwarddonner</a>. And my editor would be cross with me if I didn't mention one more time: it makes a HUGE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others. <br/><br/>Massive thanks again for putting up with me for 8 weeks and getting all the way to the final cell! I'm excited to hear all about your career as an LLM Engineer. <b>You could not have picked a better time to be in this field.</b>\n",
" </span>\n",
" </td>\n",
" </tr>\n",
