{
"cells": [
{
"cell_type": "markdown",
"id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
"metadata": {},
"source": [
"# End of week 1 exercise\n",
"\n",
"To demonstrate your familiarity with OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
"and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c1070317-3ed9-4659-abe3-828943230e03",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"# If these fail, please check you're running from an 'activated' environment with (llms) in the command prompt\n",
"\n",
"import os\n",
"import requests\n",
"import json\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9ff145c5-e272-43cd-8a55-0fb7a887c2ae",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"API key looks good so far\n"
]
}
],
"source": [
"# Initialize and constants\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
" \n",
"MODEL = 'gpt-4o-mini'\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
"metadata": {},
"outputs": [],
"source": [
"# constants\n",
"\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"MODEL_LLAMA = 'llama3.2'"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
"metadata": {},
"outputs": [],
"source": [
"# set up environment\n",
"system_prompt = \"You are a technical assistant. Your will receive some technical code snippits and explain them in detail.\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
"metadata": {},
"outputs": [],
"source": [
"# here is the question; type over this to ask something new\n",
"\n",
"question = \"\"\"\n",
"Please explain what this code does and why:\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\"\"\""
]
},
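{
"cell_type": "code",
"execution_count": null,
"id": "f3b1c6aa-demo-unique-authors",
"metadata": {},
"outputs": [],
"source": [
"# Optional: a minimal, runnable illustration of the snippet in the question above.\n",
"# The small 'books' list here is made up for demonstration (the real 'books' would come from\n",
"# whatever data you are working with), so you can see for yourself what the generator yields.\n",
"\n",
"books = [\n",
"    {\"title\": \"Book A\", \"author\": \"Alice\"},\n",
"    {\"title\": \"Book B\", \"author\": \"Bob\"},\n",
"    {\"title\": \"Book C\"},                    # no author -> filtered out by the 'if'\n",
"    {\"title\": \"Book D\", \"author\": \"Alice\"}  # duplicate author -> de-duplicated by the set\n",
"]\n",
"\n",
"def unique_authors():\n",
"    # yields each distinct, non-missing author exactly once\n",
"    yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\n",
"print(sorted(unique_authors()))  # ['Alice', 'Bob']"
]
},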
{
"cell_type": "code",
"execution_count": 9,
"id": "60ce7000-a4a5-4cce-a261-e75ef45063b4",
"metadata": {},
"outputs": [],
"source": [
"# Get gpt-4o-mini to answer, with streaming\n",
"def get_answer_gpt():\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL_GPT,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": question}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "364cecc2-8460-4eda-ab63-7971efbb0e74",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"The code snippet you provided is a Python expression that utilizes the `yield from` statement along with a set comprehension. Let's break it down step by step:\n",
"\n",
"1. **`books`**: This implies that `books` is likely a list (or any iterable) of dictionaries, where each dictionary represents a book. Each book dictionary presumably contains various key-value pairs, one of which is `\"author\"`.\n",
"\n",
"2. **Set Comprehension**: The expression `{book.get(\"author\") for book in books if book.get(\"author\")}` is a set comprehension. \n",
"\n",
" - It iterates over each `book` in the `books` collection.\n",
" - The `book.get(\"author\")` method is called to retrieve the value associated with the `\"author\"` key for each `book`. Using `get()` is beneficial because it will return `None` instead of throwing an error if the `\"author\"` key doesn’t exist in a particular `book`.\n",
" - The `if book.get(\"author\")` conditional ensures that only books that have a valid (non-`None`) author name are included in the set. This means that if the author is not specified or is `None`, that book will be skipped.\n",
"\n",
"3. **Result of the Set Comprehension**: The set comprehension will produce a set of unique author values found in the `books`. Since it’s a set, it automatically handles duplicates – if multiple books have the same author, that author will only appear once in the resulting set.\n",
"\n",
"4. **`yield from` Statement**: The `yield from` statement is used within generator functions to yield all values from an iterable. It simplifies the process of yielding values from sub-generators.\n",
"\n",
" - In this context, the code is effectively using `yield from` to yield each unique author from the set generated in the previous step.\n",
"\n",
"### Summary\n",
"\n",
"In summary, the entire expression:\n",
"python\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\n",
"does the following:\n",
"- It generates a set of unique author names from a list of books, avoiding any `None` values (i.e., books without an author).\n",
"- The authors are then yielded one by one, suggesting that this code is likely part of a generator function. This allows the caller to iterate over unique authors efficiently.\n",
"\n",
"This design is useful in scenarios where you want to process or work with unique items derived from a larger collection while maintaining memory efficiency and cleaner code through the use of generators."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"get_answer_gpt()"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"ename": "NameError",
"evalue": "name 'stream' is not defined",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mNameError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[1;32mIn[13], line 20\u001b[0m\n\u001b[0;32m 18\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 19\u001b[0m display_handle \u001b[38;5;241m=\u001b[39m display(Markdown(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m), display_id\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\n\u001b[1;32m---> 20\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m chunk \u001b[38;5;129;01min\u001b[39;00m \u001b[43mstream\u001b[49m:\n\u001b[0;32m 21\u001b[0m response \u001b[38;5;241m+\u001b[39m\u001b[38;5;241m=\u001b[39m chunk\u001b[38;5;241m.\u001b[39mchoices[\u001b[38;5;241m0\u001b[39m]\u001b[38;5;241m.\u001b[39mdelta\u001b[38;5;241m.\u001b[39mcontent \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;124m'\u001b[39m\u001b[38;5;124m'\u001b[39m\n\u001b[0;32m 22\u001b[0m response \u001b[38;5;241m=\u001b[39m response\u001b[38;5;241m.\u001b[39mreplace(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m```\u001b[39m\u001b[38;5;124m\"\u001b[39m,\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m)\u001b[38;5;241m.\u001b[39mreplace(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmarkdown\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n",
"\u001b[1;31mNameError\u001b[0m: name 'stream' is not defined"
]
}
],
"source": [
"# Get Llama 3.2 to answer\n",
"\n",
"# There's actually an alternative approach that some people might prefer\n",
"# You can use the OpenAI client python library to call Ollama:\n",
"\n",
"from openai import OpenAI\n",
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
"\n",
"response = ollama_via_openai.chat.completions.create(\n",
" model=MODEL_LLAMA,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": question}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
"response = \"\"\n",
"display_handle = display(Markdown(\"\"), display_id=True)\n",
"for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)\n",
" \n",
"print(response.choices[0].message.content)"
]
},
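{
"cell_type": "code",
"execution_count": null,
"id": "c4d5e6f7-ollama-native-api",
"metadata": {},
"outputs": [],
"source": [
"# For comparison, here is a sketch of the same call via Ollama's native REST API using requests\n",
"# (already imported above). This assumes Ollama is running locally on its default port 11434\n",
"# and that the llama3.2 model has been pulled with 'ollama pull llama3.2'.\n",
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"\n",
"payload = {\n",
"    \"model\": MODEL_LLAMA,\n",
"    \"messages\": [\n",
"        {\"role\": \"system\", \"content\": system_prompt},\n",
"        {\"role\": \"user\", \"content\": question}\n",
"    ],\n",
"    \"stream\": False\n",
"}\n",
"\n",
"reply = requests.post(OLLAMA_API, json=payload)\n",
"display(Markdown(reply.json()[\"message\"][\"content\"]))"
]
},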
{
"cell_type": "code",
"execution_count": 14,
"id": "d871650c-0752-43b0-ae5b-975438d7c55a",
"metadata": {},
"outputs": [],
"source": [
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
"\n",
"def get_ans_llama():\n",
" stream = ollama_via_openai.chat.completions.create(\n",
" model=MODEL_LLAMA,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": question}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "de10638e-2435-4675-bb8f-ad6d171a5545",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"**Iterating over Nested Data Structures using `yield from`**\n",
"\n",
"This line of code uses the `yield from` syntax to iterate over a nested data structure. Let's break it down:\n",
"\n",
"* `{ book.get(\"author\") for book in [books] }`: This is a generator expression that iterates over each item in `books`, retrieves the value associated with the key `\"author\"` using the `get()` method, and yields those values.\n",
"* `yield from { ... }`: The outer syntax is `yield from`, which is used to delegate to another iterable or generator.\n",
"\n",
"**What does it do?**\n",
"\n",
"This code iterates over the list of books (`books`), retrieving only the `\"author\"` values associated with each book. Here's a step-by-step explanation:\n",
"\n",
"1. The code iterates over each item in the `books` list, typically accessed through an object like a dictionary or a pandas DataFrame (in real Python applications).\n",
" python\n",
"for book in books:\n",
"\n",
"\n",
"2. For each book, it uses tuple unpacking (`book.get(\"author\")`) to retrieve the value associated with the key `\"author\"`, if present.\n",
" python\n",
"book.get(\"author\")\n",
"\n",
"\n",
" This generates an iterable containing all the authors (if any) from each book.\n",
"\n",
"3. Then, `yield from` delegates this generator to a larger context; it essentially merges their iterables into a single sequence. As such, its output is another iterator that yields values, one from each of those smaller generators.\n",
" python\n",
"yield from { ... }\n",
"\n",
"\n",
"**When would you use this approach?**\n",
"\n",
"This syntax is useful when you need to iterate over the results of multiple, independent iterators (such as database queries or file processes). You can apply it in many scenarios:\n",
"\n",
"* **Data processing:** When you have multiple data sources (like CSV files) and want to combine their contents.\n",
"* **Database queries:** If a single query retrieves data from multiple tables and need to yield values from each table separately.\n",
"\n",
"Here is a more complete Python code example which uses these concepts, demonstrating its applicability:\n",
"\n",
"python\n",
"import pandas as pd\n",
"\n",
"# Generating nested dataset by merging three separate CSV files into one, each having 'books' key.\n",
"frames = [pd.DataFrame(columns=[\"title\", \"author\"]), \n",
" pd.DataFrame([{\"book1\": {\"title\": \"Book 1 - Volume 1 of 2\",\"author\": \"Author A1\"},\n",
" \"book1\": {\"title\": \"Book 1 - Volume 2 of 2\", \"author\":\"Author A2\"},{\"book2\":{\"title\": \"Volume B of 3 of 6\", \"author\": \"Aurora\"}],\n",
" \"books\":{\"volume_3_Authors\":[\"Author B\"}]}]),\n",
" pd.DataFrame(columns=[\"Title\", \"Author\"])]\n",
"dataframe = pd.concat(frames, ignore_index=True)\n",
"# Iterate and extract results in a generator form from dataframe or a list of objects stored in separate files\n",
"for title, book_author in [{book: \"Book Name\"}.get(key) for key, book in dataframe.iterrows()] : yield(book_author)\n",
"\n",
"This code example shows how to process the data returned by an external query (a SQL database). It yields results one at a time as needed."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"get_ans_llama()"
]
},
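{
"cell_type": "code",
"execution_count": null,
"id": "a7b8c9d0-explain-tool-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Putting it together: a sketch of the 'tool' described at the top of the notebook.\n",
"# The helper name explain() and its backend argument are just suggestions -- it reuses the\n",
"# clients, constants and system_prompt defined above and streams the answer from either model.\n",
"\n",
"def explain(question_text, backend='gpt'):\n",
"    client, model = (openai, MODEL_GPT) if backend == 'gpt' else (ollama_via_openai, MODEL_LLAMA)\n",
"    stream = client.chat.completions.create(\n",
"        model=model,\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": system_prompt},\n",
"            {\"role\": \"user\", \"content\": question_text}\n",
"        ],\n",
"        stream=True\n",
"    )\n",
"    response = \"\"\n",
"    display_handle = display(Markdown(\"\"), display_id=True)\n",
"    for chunk in stream:\n",
"        response += chunk.choices[0].delta.content or ''\n",
"        response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
"        update_display(Markdown(response), display_id=display_handle.display_id)\n",
"\n",
"# Example usage (uncomment to try):\n",
"# explain(question, backend='llama')"
]
},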
{
"cell_type": "code",
"execution_count": null,
"id": "409fb200-b88e-4a04-b2f3-8db4d18bb844",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}