From the Udemy course on LLM engineering:
https://www.udemy.com/course/llm-engineering-master-ai-and-large-language-models
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
   "metadata": {},
   "source": [
    "# End of week 1 exercise\n",
    "\n",
    "To demonstrate your familiarity with the OpenAI API, and also Ollama, build a tool that takes a technical question,\n",
    "and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
   ]
  },
|
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "c1070317-3ed9-4659-abe3-828943230e03",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "import os\n",
    "from dotenv import load_dotenv\n",
    "from openai import OpenAI\n",
    "from IPython.display import Markdown, display, update_display\n",
    "import ollama"
   ]
  },
|
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "4ed79945-0582-4f22-a210-b21e5448991e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "API key looks good so far\n"
     ]
    }
   ],
   "source": [
    "# OpenAI constants\n",
    "\n",
    "load_dotenv()\n",
    "api_key = os.getenv('OPENAI_API_KEY')\n",
    "\n",
    "if api_key and api_key.startswith('sk-proj-') and len(api_key) > 10:\n",
    "    print(\"API key looks good so far\")\n",
    "else:\n",
    "    print(\"There might be a problem with your API key. Please visit the troubleshooting notebook!\")\n",
    "\n",
    "MODEL_GPT = 'gpt-4o-mini'"
   ]
  },
|
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Ollama constants\n",
    "\n",
    "# OLLAMA_API is kept for reference only; the ollama Python package used below\n",
    "# talks to the local server at http://localhost:11434 directly.\n",
    "OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
    "#OLLAMA_HEADERS = {\"Content-Type\": \"application/json\"}\n",
    "MODEL_LLAMA = 'llama3.2'"
   ]
  },
|
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# initialization\n",
    "openai = OpenAI()\n"
   ]
  },
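  {
   "cell_type": "markdown",
   "id": "ollama-check-note",
   "metadata": {},
   "source": [
    "Optional sanity check (an added sketch, not part of the original exercise): before asking Llama anything, confirm the local Ollama server is reachable. This assumes the `ollama` Python package exposes `list()` for querying the local server; if the call fails, start the server with `ollama serve` and fetch the model with `ollama pull llama3.2`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ollama-check-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sketch: verify the local Ollama server responds before calling it\n",
    "try:\n",
    "    ollama.list()  # any response means the server at localhost:11434 answered\n",
    "    print(\"Ollama server is reachable\")\n",
    "except Exception as e:\n",
    "    print(f\"Could not reach Ollama - is 'ollama serve' running? ({e})\")"
   ]
  },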
|
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
   "metadata": {},
   "outputs": [],
   "source": [
    "# here is the question; type over this to ask something new\n",
    "system_prompt = \"You are a technical specialist who explains technical questions clearly. You should answer in Markdown.\"\n",
    "question = \"\"\"\n",
    "Please explain what this code does and why:\n",
    "yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
    "\"\"\"\n",
    "user_prompt = f\"Please explain this question to me: {question}\""
   ]
  },
|
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "144c16d8-4e55-46e4-82f0-753375119ff3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# messages\n",
    "prompt_messages = [\n",
    "    {\"role\": \"system\", \"content\": system_prompt},\n",
    "    {\"role\": \"user\", \"content\": user_prompt}\n",
    "]"
   ]
  },
|
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "60ce7000-a4a5-4cce-a261-e75ef45063b4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Get gpt-4o-mini to answer, with streaming\n",
    "def askOpenAi():\n",
    "    stream = openai.chat.completions.create(\n",
    "        model=MODEL_GPT,\n",
    "        messages=prompt_messages,\n",
    "        stream=True\n",
    "    )\n",
    "    response = \"\"\n",
    "    display_handle = display(Markdown(\"\"), display_id=True)\n",
    "    # Accumulate the streamed chunks and re-render the Markdown as they arrive\n",
    "    for chunk in stream:\n",
    "        response += chunk.choices[0].delta.content or ''\n",
    "        update_display(Markdown(response), display_id=display_handle.display_id)"
   ]
  },
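  {
   "cell_type": "markdown",
   "id": "non-streaming-note",
   "metadata": {},
   "source": [
    "For comparison, here is a minimal non-streaming sketch (added for illustration, not part of the original exercise; the helper name `askOpenAiOnce` is just for this sketch). It reuses `MODEL_GPT` and `prompt_messages` from above and simply waits for the full completion before rendering it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "non-streaming-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Non-streaming variant (sketch): one request, one complete response\n",
    "def askOpenAiOnce():\n",
    "    completion = openai.chat.completions.create(\n",
    "        model=MODEL_GPT,\n",
    "        messages=prompt_messages\n",
    "    )\n",
    "    # The full answer is in the first choice once the call returns\n",
    "    display(Markdown(completion.choices[0].message.content))"
   ]
  },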
|
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "2190d7aa-f6a1-4bca-a2ff-8ce2b0db79c5",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "Sure! Let's break down the code you've provided:\n",
       "\n",
       "```python\n",
       "yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
       "```\n",
       "\n",
       "### Explanation of the Code\n",
       "\n",
       "1. **Set Comprehension:**\n",
       "   - The code `{book.get(\"author\") for book in books if book.get(\"author\")}` is a **set comprehension**. This is a concise way to create a set in Python.\n",
       "   - It iterates over a collection named `books`.\n",
       "\n",
       "2. **Accessing the \"author\" Key:**\n",
       "   - `book.get(\"author\")`: For each `book` in the `books` collection, it attempts to get the value associated with the key `\"author\"`.\n",
       "   - The `get()` method is used to safely access dictionary keys. If the key does not exist, it returns `None` instead of throwing an error.\n",
       "\n",
       "3. **Filtering Authors:**\n",
       "   - The clause `if book.get(\"author\")` acts as a filter. It ensures that only books with a valid (non-`None`) author get included in the resulting set.\n",
       "   - Therefore, this part: `{book.get(\"author\") for book in books if book.get(\"author\")}` creates a set of unique authors from the `books` collection that have valid author values.\n",
       "\n",
       "4. **Yielding Results:**\n",
       "   - The `yield from` statement is used to yield values from the set comprehension created previously.\n",
       "   - This means that the containing function will return each unique author one at a time as they are requested (similar to a generator).\n",
       "\n",
       "### Summary\n",
       "\n",
       "- **What the Code Does:**\n",
       "  - It generates a set of unique authors from a list of books, filtering out any entries that do not have an author. It then yields each of these authors.\n",
       "\n",
       "- **Why It's Useful:**\n",
       "  - This code is particularly useful when dealing with collections of books where some might not have an author specified. It safely retrieves the authors and ensures that each author is only returned once.\n",
       "  - Using `yield from` makes it memory efficient, as it does not create an intermediate list of authors but generates them one at a time on demand.\n",
       "\n",
       "### Example\n",
       "\n",
       "If you had a list of books like this:\n",
       "\n",
       "```python\n",
       "books = [\n",
       "    {\"title\": \"Book 1\", \"author\": \"Author A\"},\n",
       "    {\"title\": \"Book 2\", \"author\": \"Author B\"},\n",
       "    {\"title\": \"Book 3\", \"author\": None},\n",
       "    {\"title\": \"Book 4\", \"author\": \"Author A\"},\n",
       "]\n",
       "```\n",
       "\n",
       "The output of the code would be:\n",
       "\n",
       "```\n",
       "Author A\n",
       "Author B\n",
       "```\n",
       "\n",
       "In this example, `Author A` is listed only once, even though there are multiple books by that author."
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "askOpenAi()"
   ]
  },
|
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Get Llama 3.2 to answer, with streaming\n",
    "def askOllama():\n",
    "    stream = ollama.chat(model=MODEL_LLAMA, messages=prompt_messages, stream=True)\n",
    "    response = \"\"\n",
    "    display_handle = display(Markdown(\"\"), display_id=True)\n",
    "    # Accumulate the streamed chunks and re-render the Markdown as they arrive\n",
    "    for chunk in stream:\n",
    "        response += chunk['message']['content'] or ''\n",
    "        update_display(Markdown(response), display_id=display_handle.display_id)"
   ]
  },
|
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "acc0116c-4506-4391-89d5-1add700d3d55",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "**Yielding Authors from a List of Books**\n",
       "=====================================\n",
       "\n",
       "This code snippet is written in Python and utilizes the `yield from` statement, which was introduced in Python 3.3.\n",
       "\n",
       "### What does it do?\n",
       "\n",
       "The code takes two main inputs:\n",
       "\n",
       "* A list of dictionaries (`books`) where each dictionary represents a book.\n",
       "* Another dictionary (`book`) that contains information about an author.\n",
       "\n",
       "It generates a sequence of authors from the `books` list and yields them one by one, while also applying the condition that the book has a valid \"author\" key in its dictionary.\n",
       "\n",
       "Here's a step-by-step breakdown:\n",
       "\n",
       "1. `{book.get(\"author\") for book in books if book.get(\"author\")}`:\n",
       "   * This is an expression that generates a sequence of authors.\n",
       "   * `for book in books` iterates over each book in the `books` list.\n",
       "   * `if book.get(\"author\")` filters out books without an \"author\" key, to prevent errors and ensure only valid data is processed.\n",
       "\n",
       "2. `yield from ...`:\n",
       "   * This statement is used to delegate a sub-generator or iterator.\n",
       "   * In this case, it's delegating the sequence of authors generated in step 1.\n",
       "\n",
       "**Why does it yield authors?**\n",
       "\n",
       "The use of `yield from` serves two main purposes:\n",
       "\n",
       "* **Efficiency**: Instead of creating a new list with all the authors, this code yields each author one by one. This approach is more memory-efficient and can be particularly beneficial when dealing with large datasets.\n",
       "* **Flexibility**: By using `yield from`, you can create generators that produce values on-the-fly, allowing for lazy evaluation.\n",
       "\n",
       "### Example Usage\n",
       "\n",
       "Here's an example of how you might use this code:\n",
       "\n",
       "```python\n",
       "books = [\n",
       "    {\"title\": \"Book 1\", \"author\": \"Author A\"},\n",
       "    {\"title\": \"Book 2\", \"author\": \"Author B\"},\n",
       "    {\"title\": \"Book 3\"}\n",
       "]\n",
       "\n",
       "def get_authors(books):\n",
       "    \"\"\"Yields authors from the books.\"\"\"\n",
       "    yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
       "\n",
       "# Get all unique authors\n",
       "authors = set(get_authors(books))\n",
       "print(authors)  # Output: {'Author A', 'Author B'}\n",
       "```\n",
       "\n",
       "In this example, `get_authors` is a generator function that yields unique authors from the `books` list. The generated values are collected in a set (`authors`) to eliminate duplicates."
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "askOllama()"
   ]
  },
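  {
   "cell_type": "markdown",
   "id": "combined-ask-note",
   "metadata": {},
   "source": [
    "The two helpers above differ only in which backend they call. As a small added sketch (not part of the original exercise), the wrapper below picks the backend by name so the tool can be driven from one place; the function name `ask` and the `use_ollama` flag are just illustrative choices."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "combined-ask-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Convenience wrapper (sketch): route the question to either backend\n",
    "def ask(use_ollama=False):\n",
    "    # Both helpers read the prompt_messages list defined earlier in the notebook\n",
    "    if use_ollama:\n",
    "        askOllama()\n",
    "    else:\n",
    "        askOpenAi()\n",
    "\n",
    "# Example: ask(use_ollama=True) answers with the local Llama 3.2 model"
   ]
  },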
|
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9233ca17-160f-4afd-aef6-a7e02f069a50",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
|
|
|