{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
   "metadata": {},
   "source": [
    "# End of week 1 exercise\n",
    "\n",
    "To demonstrate your familiarity with OpenAI API, and also Ollama, build a tool that takes a technical question,  \n",
    "and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "c1070317-3ed9-4659-abe3-828943230e03",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "\n",
    "import os\n",
    "import requests\n",
    "import json\n",
    "from typing import List\n",
    "from dotenv import load_dotenv\n",
    "from bs4 import BeautifulSoup\n",
    "from IPython.display import Markdown, display, update_display\n",
    "from openai import OpenAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# constants\n",
    "\n",
    "MODEL_GPT = 'gpt-4o-mini'\n",
    "MODEL_LLAMA = 'llama3.2'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "API key found and looks good so far!\n"
     ]
    }
   ],
   "source": [
    "# set up environment\n",
    "load_dotenv(override=True)\n",
    "api_key = os.getenv('OPENAI_API_KEY')\n",
    "\n",
    "# Check the key\n",
    "\n",
    "if not api_key:\n",
    "    print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
    "elif not api_key.startswith(\"sk-proj-\"):\n",
    "    print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
    "elif api_key.strip() != api_key:\n",
    "    print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
    "else:\n",
    "    print(\"API key found and looks good so far!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
   "metadata": {},
   "outputs": [],
   "source": [
    "# here is the question; type over this to ask something new\n",
    "\n",
    "question = \"\"\"\n",
    "Please explain what this code does and why:\n",
    "yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "60ce7000-a4a5-4cce-a261-e75ef45063b4",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/markdown": [
       "Certainly! This line of code is using a combination of Python set comprehension and the `yield from` statement. Let's break it down step-by-step:\n",
       "\n",
       "1. **Set Comprehension**: \n",
       "   - `{book.get(\"author\") for book in books if book.get(\"author\")}` is a set comprehension. It constructs a set containing the authors from the `books` collection.\n",
       "   - `books` is assumed to be an iterable, such as a list or tuple, of dictionaries, where each dictionary represents a book with various attributes, including an \"author\".\n",
       "   - `book.get(\"author\")` attempts to retrieve the value associated with the \"author\" key from each book dictionary.\n",
       "   - The `if book.get(\"author\")` part ensures that only books with a non-None value for the \"author\" key are included in the set. This helps in filtering out entries where the \"author\" key doesn't exist or is set to `None`.\n",
       "\n",
       "2. **Yield From**: \n",
       "   - `yield from` is a statement in Python that delegates part of a generator's operations to another iterable.\n",
       "   - `yield from` in this context is used to yield each element from the set constructed by the set comprehension one at a time.\n",
       "\n",
       "3. **Purpose**:\n",
       "   - The combination of set comprehension and `yield from` results in yielding each distinct author found in the list of books, but importantly, without duplicates, since sets naturally eliminate duplicates.\n",
       "   - This might be part of a generator function, which is a function allowing iteration over the unique authors.\n",
       "\n",
       "In summary, this code is used to retrieve and yield each unique author's name from a list of book dictionaries, one by one, eliminating any duplicates. The use of `yield from` makes it efficient for building generators that process data as an iterable sequence."
      ],
      "text/plain": [
       "<IPython.core.display.Markdown object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Get gpt-4o-mini to answer, with streaming\n",
    "from IPython.display import display, Markdown\n",
    "import openai\n",
    "\n",
    "# Your model\n",
    "MODEL = \"gpt-4o\"  \n",
    "# System prompt (sets the assistant's role)\n",
    "system_prompt = \"You are a helpful Python expert who explains code clearly and concisely.\"\n",
    "\n",
    "# The user question\n",
    "question = \"\"\"\n",
    "Please explain what this code does and why:\n",
    "yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
    "\"\"\"\n",
    "\n",
    "# Streaming function for explanation\n",
    "def stream_code_explanation(code_question):\n",
    "    stream = openai.chat.completions.create(\n",
    "        model=MODEL,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": code_question}\n",
    "        ],\n",
    "        stream=True\n",
    "    )\n",
    "    \n",
    "    response = \"\"\n",
    "    display_handle = display(Markdown(\"\"), display_id=True)\n",
    "    \n",
    "    for chunk in stream:\n",
    "        content = chunk.choices[0].delta.content or \"\"\n",
    "        response += content\n",
    "        response_cleaned = response.replace(\"```\", \"\").replace(\"markdown\", \"\")\n",
    "        display_handle.update(Markdown(response_cleaned))\n",
    "\n",
    "# Run it!\n",
    "stream_code_explanation(question)\n"
   ]
  },
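  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1b2c3d4-yield-from-demo",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A quick, self-contained check of the construct explained above, using a small\n",
    "# hypothetical books list made up for illustration (not part of the course data)\n",
    "\n",
    "def unique_authors(books):\n",
    "    # yields each distinct, non-empty author exactly once\n",
    "    yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
    "\n",
    "books = [\n",
    "    {\"title\": \"A\", \"author\": \"Alice\"},\n",
    "    {\"title\": \"B\", \"author\": \"Bob\"},\n",
    "    {\"title\": \"C\", \"author\": \"Alice\"},\n",
    "    {\"title\": \"D\"},\n",
    "]\n",
    "\n",
    "print(sorted(unique_authors(books)))  # expected: ['Alice', 'Bob']"
   ]
  },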
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Get Llama 3.2 to answer"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.0rc2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}