diff --git a/README.md b/README.md index 16272a5..d5a843a 100644 --- a/README.md +++ b/README.md @@ -8,11 +8,17 @@ I'm so happy you're joining me on this path. We'll be building immensely satisfy ### A note before you begin -I'm here to help you be most successful with your learning! If you hit any snafus, or if you have any ideas on how I can improve the course, please do reach out in the platform or by emailing me direct (ed at edwarddonner dot com). It's always great to connect with people on LinkedIn to build up the community - you'll find me here: +I'm here to help you be most successful with your learning! If you hit any snafus, or if you have any ideas on how I can improve the course, please do reach out in the platform or by emailing me directly (ed@edwarddonner.com). It's always great to connect with people on LinkedIn to build up the community - you'll find me here: https://www.linkedin.com/in/eddonner/ I'm still polishing up the last couple of weeks of code, but it's looking really terrific and I'll push it in the coming days. +### An important point on API costs + +During the course, I'll suggest you try out the leading models at the forefront of progress, known as the Frontier models. I'll also suggest you run open-source models using Google Colab. These services have some charges, but I'll keep costs minimal - like, a few cents at a time. + +Please do monitor your API usage to ensure you're comfortable with spend; I've included links below. There's no need to spend anything more than a couple of dollars for the entire course. During Week 7 you have an option to spend a bit more if you're enjoying the process - I spent about $10 myself, and the results made me very happy indeed! But it's not necessary in the least; the important part is that you focus on learning. + ### How this Jupyter Lab is organized There are folders for each of the "weeks", representing modules of the class. @@ -52,19 +58,21 @@ https://docs.anaconda.com/anaconda/install/ ### When we get to it, creating your API keys -Particularly during weeks 1 and 2 of the course, you'll be writing code to call the APIs of Frontier models. You'll need to join me in setting up accounts and API keys. +Particularly during weeks 1 and 2 of the course, you'll be writing code to call the APIs of Frontier models (models at the forefront of progress). You'll need to join me in setting up accounts and API keys. - [GPT API](https://platform.openai.com/) from OpenAI - [Claude API](https://console.anthropic.com/) from Anthropic - [Gemini API](https://ai.google.dev/gemini-api) from Google -Initially we'll only use OpenAI, so you can start with that, and we'll cover the others soon afterwards. +Initially we'll only use OpenAI, so you can start with that, and we'll cover the others soon afterwards. See the extra note on API costs below if that's a concern. One student mentioned to me that OpenAI can take a few minutes to register; if you initially get an error about being out of quota, wait a few minutes and try again. If it's still a problem, message me! + +Later in the course you'll be using the fabulous HuggingFace platform; an account is available for free at [HuggingFace](https://huggingface.co) - you can create an API token from the Avatar menu >> Settings >> Access Tokens. -Later in the course you'll be using a HuggingFace account, which is available for free at https://huggingface.co - you'll need to create an API token from the Avatar menu >> Settings >> Access Tokens. 
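+Once your HuggingFace token is saved in the `.env` file described just below, signing in from a notebook looks roughly like this - a minimal sketch of the pattern the week 6 notebooks use, assuming the `python-dotenv` and `huggingface_hub` packages from the course environment:
+
+```
+# Sketch: authenticate to HuggingFace using the HF_TOKEN from your .env file
+import os
+from dotenv import load_dotenv
+from huggingface_hub import login
+
+load_dotenv()  # reads the .env file in your project root
+login(os.environ['HF_TOKEN'], add_to_git_credential=True)
+```
+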
+And in Week 6/7 you'll be using the terrific [Weights & Biases](https://wandb.ai) platform to watch over your training batches. Accounts are also free, and you can set up a token in a similar way. -When you have these keys, please create a new file called `.env` in your project root directory. (For more detailed instructions on creating the `.env` file, I've added a guide at the bottom of this README.) +When you have these keys, please create a new file called `.env` in your project root directory. This file won't appear in Jupyter Lab because it's a hidden file; you should create it using something like Notepad (PC) or nano (Mac / Linux). I've put detailed instructions at the end of this README. -It should have contents like this: +It should have contents like this, and to start with you only need the first line: ``` OPENAI_API_KEY=xxxx @@ -74,9 +82,12 @@ HF_TOKEN=xxxx ``` This file is listed in the `.gitignore` file, so it won't get checked in and your keys stay safe. +If you have any problems with this process, there's a simple workaround which I explain in the video. ### Starting in Week 3, we'll also be using Google Colab for running with GPUs +You should be able to use the free tier or minimal spend to complete all the projects in the class. I personally signed up for Colab Pro+ and I'm loving it - but it's not required. + The colab links are in the Week folders and also here: - For week 3 day 1, this Google Colab shows what [colab can do](https://colab.research.google.com/drive/1DjcrYDZldAXKJ08x1uYIVCtItoLPk1Wr?usp=sharing) - For week 3 day 2, here is a colab for the HuggingFace [pipelines API](https://colab.research.google.com/drive/1aMaEw8A56xs0bRM4lu8z7ou18jqyybGm?usp=sharing) @@ -84,6 +95,15 @@ The colab links are in the Week folders and also here: - For week 3 day 4, we go to a colab with HuggingFace [models](https://colab.research.google.com/drive/1hhR9Z-yiqjUe7pJjVQw4c74z_V3VchLy?usp=sharing) - For week 3 day 5, we return to colab to make our [Meeting Minutes product](https://colab.research.google.com/drive/1KSMxOCprsl1QRpt_Rq0UqCAyMtPqDQYx?usp=sharing) +### Monitoring API charges + +You can keep your API spend very low throughout this course; you can monitor spend at these dashboards: [here](https://platform.openai.com/usage) for OpenAI, [here](https://console.anthropic.com/settings/cost) for Anthropic and [here](https://console.cloud.google.com/apis/api/generativelanguage.googleapis.com/cost) for Google Gemini. + +The charges for the exercises in this course should always be quite low, but if you'd prefer to keep them minimal, then be sure to always choose the cheapest versions of models: +1. For OpenAI: Always use model `gpt-4o-mini` in the code instead of `gpt-4o` +2. For Anthropic: Always use model `claude-3-haiku-20240307` in the code instead of the other Claude models +3. During week 7, look out for my instructions for using the cheaper dataset + ## And that's it! Happy coding! ### Alternative Setup Instructions if you're a die-hard virtualenv-er @@ -163,4 +183,4 @@ Control + X to exit the editor And confirm that the `.env` file is there. -Please do message me or email me at ed at `edwarddonner dot com` if this doesn't work or if I can help with anything. I can't wait to hear how you get on. \ No newline at end of file +Please do message me or email me at ed@edwarddonner.com if this doesn't work or if I can help with anything. I can't wait to hear how you get on. 
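+To make the cost guidance above concrete, here's a minimal sketch of a low-cost call that follows the advice to stick with `gpt-4o-mini` (assuming your `.env` file is set up as described; the hard-coded `api_key=` form is the fallback mentioned in the week 1 notebook):
+
+```
+# Sketch: a minimal, low-cost call using gpt-4o-mini
+import os
+from dotenv import load_dotenv
+from openai import OpenAI
+
+load_dotenv()
+openai = OpenAI()  # or OpenAI(api_key="your-key-here") as a temporary fallback
+
+response = openai.chat.completions.create(
+    model="gpt-4o-mini",  # the cheapest GPT model recommended above
+    messages=[{"role": "user", "content": "Say hello in five words"}]
+)
+print(response.choices[0].message.content)
+```
+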
\ No newline at end of file diff --git a/week1/day1.ipynb b/week1/day1.ipynb index 9378e86..aedc23b 100644 --- a/week1/day1.ipynb +++ b/week1/day1.ipynb @@ -31,6 +31,24 @@ "from openai import OpenAI" ] }, + { + "cell_type": "markdown", + "id": "6900b2a8-6384-4316-8aaa-5e519fca4254", + "metadata": {}, + "source": [ + "# Connecting to OpenAI\n", + "\n", + "The next cell is where we load the environment variables from your `.env` file and connect to OpenAI.\n", + "\n", + "Troubleshooting if you have problems:\n", + "\n", + "1. OpenAI takes a few minutes to register after you set up an account. If you receive an error about being over quota, try waiting a few minutes and try again.\n", + "2. As a fallback, replace the line `openai = OpenAI()` with `openai = OpenAI(api_key=\"your-key-here\")` - while it's not recommended to hard-code tokens in Jupyter Lab (you then can't share your notebook with others), it's a fine workaround for now.\n", + "3. Contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", + "\n", + "Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point." ] }, { "cell_type": "code", "execution_count": null, diff --git a/week2/day1.ipynb b/week2/day1.ipynb index ed76272..abdc15a 100644 --- a/week2/day1.ipynb +++ b/week2/day1.ipynb @@ -81,6 +81,8 @@ "source": [ "# Connect to OpenAI, Anthropic and Google\n", "# All 3 APIs are similar\n", + "# Having problems with API keys? You can use openai = OpenAI(api_key=\"your-key-here\"), and the same for Claude\n", + "# Having problems with Google Gemini setup? Then just skip Gemini; you'll get all the experience you need from GPT and Claude.\n", "\n", "openai = OpenAI()\n", "\n", @@ -324,6 +326,7 @@ "outputs": [], "source": [ "# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n", + "# We're using cheap versions of models so the costs will be minimal\n", "\n", "gpt_model = \"gpt-4o-mini\"\n", "claude_model = \"claude-3-haiku-20240307\"\n", diff --git a/week4/day3.ipynb b/week4/day3.ipynb index ffa1cbe..c50f1e7 100644 --- a/week4/day3.ipynb +++ b/week4/day3.ipynb @@ -7,7 +7,11 @@ "source": [ "# Code Generator\n", "\n", - "The requirement: use a Frontier model to generate high performance C++ code from Python code" + "The requirement: use a Frontier model to generate high-performance C++ code from Python code\n", + "\n", + "# Important Note\n", + "\n", + "In the exercise I use GPT-4o and Claude-3.5-Sonnet, which are the slightly higher-priced versions. The costs are still low, but if you'd prefer to keep costs ultra-low, please make the suggested switches to the models (3 cells down from here)." ] }, { @@ -53,11 +57,16 @@ "outputs": [], "source": [ "# initialize\n", + "# NOTE - option to use ultra-low cost models by uncommenting last 2 lines\n", "\n", "openai = OpenAI()\n", "claude = anthropic.Anthropic()\n", "OPENAI_MODEL = \"gpt-4o\"\n", - "CLAUDE_MODEL = \"claude-3-5-sonnet-20240620\"" + "CLAUDE_MODEL = \"claude-3-5-sonnet-20240620\"\n", + "\n", + "# Want to keep costs ultra-low? Uncomment these lines:\n", + "# OPENAI_MODEL = \"gpt-4o-mini\"\n", + "# CLAUDE_MODEL = \"claude-3-haiku-20240307\"" ] }, { @@ -213,6 +222,21 @@ "exec(pi)" ] }, + { + "cell_type": "markdown", + "id": "bf8f8018-f64d-425c-a0e1-d7862aa9592d", + "metadata": {}, + "source": [ + "# Compiling C++ and executing\n", + "\n", + "This next cell contains the command to compile a C++ file on my M1 Mac. 
\n", + "It compiles the file `optimized.cpp` into an executable called `optimized` \n", + "Then it runs the program called `optimized`\n", + "\n", + "You can google (or ask ChatGPT!) for how to do this on your platform, then replace the lines below.\n", + "If you're not comfortable with this step, you can skip it for sure - I'll show you exactly how it performs on my Mac." + ] + }, { "cell_type": "code", "execution_count": null, @@ -220,6 +244,8 @@ "metadata": {}, "outputs": [], "source": [ + "# Compile C++ and run the executable\n", + "\n", "!clang++ -O3 -std=c++17 -march=armv8.3-a -o optimized optimized.cpp\n", "!./optimized" ] @@ -241,6 +267,8 @@ "metadata": {}, "outputs": [], "source": [ + "# Repeat for Claude - again, use the right approach for your platform\n", + "\n", "!clang++ -O3 -std=c++17 -march=armv8.3-a -o optimized optimized.cpp\n", "!./optimized" ] @@ -323,6 +351,8 @@ "metadata": {}, "outputs": [], "source": [ + "# Replace this with the right C++ compile + execute command for your platform\n", + "\n", "!clang++ -O3 -std=c++17 -march=armv8.3-a -o optimized optimized.cpp\n", "!./optimized" ] @@ -344,6 +374,8 @@ "metadata": {}, "outputs": [], "source": [ + "# Replace this with the right C++ compile + execute command for your platform\n", + "\n", "!clang++ -O3 -std=c++17 -march=armv8.3-a -o optimized optimized.cpp\n", "!./optimized" ] @@ -447,6 +479,12 @@ "metadata": {}, "outputs": [], "source": [ + "# You'll need to change the code in the try block to compile the C++ code for your platform\n", + "# I pasted this into Claude's chat UI with a request for it to give me a version for an Intel PC,\n", + "# and it responded with something that looks perfect - you can try a similar approach for your platform.\n", + "\n", + "# M1 Mac version to compile and execute optimized C++ code:\n", + "\n", "def execute_cpp(code):\n", " write_output(code)\n", " try:\n", diff --git a/week6/day2.ipynb b/week6/day2.ipynb index df36e61..f59a4e9 100644 --- a/week6/day2.ipynb +++ b/week6/day2.ipynb @@ -17,7 +17,13 @@ "https://huggingface.co/datasets/McAuley-Lab/Amazon-Reviews-2023\n", "\n", "And the folder with all the product datasets is here: \n", - "https://huggingface.co/datasets/McAuley-Lab/Amazon-Reviews-2023/tree/main/raw/meta_categories" + "https://huggingface.co/datasets/McAuley-Lab/Amazon-Reviews-2023/tree/main/raw/meta_categories\n", + "\n", + "## Important Note - read me first please\n", + "\n", + "We are about to craft a massive dataset of 400,000 items covering multiple types of product. In Week 7 we will be using this data to train our own model. It's a pretty big dataset, and depending on the GPU you select, training could take 20+ hours. It will be really good fun, but it could cost a few dollars in compute units.\n", + "\n", + "As an alternative, if you want to keep things quick & low cost, you can work with a smaller dataset focused only on Home Appliances. You'll be able to cover the same learning points; the results will be good -- not quite as good as the full dataset, but still pretty amazing! If you'd prefer to do this, I've set up an alternative jupyter notebook in this folder called `lite.ipynb` that you should use in place of this one." 
] }, { @@ -552,7 +558,10 @@ "metadata": {}, "outputs": [], "source": [ - "# DATASET_NAME = \"ed-donner/pricer-data\"\n", + "# Uncomment these lines if you're ready to push to the hub, and replace my name with your HF username\n", + "\n", + "# HF_USER = \"ed-donner\"\n", + "# DATASET_NAME = f\"{HF_USER}/pricer-data\"\n", "# dataset.push_to_hub(DATASET_NAME, private=True)" ] }, diff --git a/week6/day4.ipynb b/week6/day4.ipynb index aeb513b..3f4897f 100644 --- a/week6/day4.ipynb +++ b/week6/day4.ipynb @@ -313,8 +313,6 @@ "metadata": {}, "outputs": [], "source": [ - "# The function for gpt-4o - the August model\n", - "\n", "def gpt_4o_frontier(item):\n", " response = openai.chat.completions.create(\n", " model=\"gpt-4o-2024-08-06\", \n", @@ -333,6 +331,10 @@ "metadata": {}, "outputs": [], "source": [ + "# The function for gpt-4o - the August model\n", + "# Note that it cost me about 1-2 cents to run this (pricing may vary by region)\n", + "# You can skip this and look at my results instead\n", + "\n", "Tester.test(gpt_4o_frontier, test)" ] }, @@ -364,6 +366,10 @@ "metadata": {}, "outputs": [], "source": [ + "# The function for Claude 3.5 Sonnet\n", + "# It also cost me about 1-2 cents to run this (pricing may vary by region)\n", + "# You can skip this and look at my results instead\n", + "\n", "Tester.test(claude_3_point_5_sonnet, test)" ] }, diff --git a/week6/day5.ipynb b/week6/day5.ipynb index f15be59..51c6d7e 100644 --- a/week6/day5.ipynb +++ b/week6/day5.ipynb @@ -532,9 +532,72 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 320, "id": "03ff4b48-3788-4370-9e34-6592f23d1bce", "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "DNS resolution for api.gradio.app: 54.68.118.249\n", + "Gradio API Status: 500\n", + "Gradio API Response: Internal Server Error\n", + "HuggingFace CDN Status: 403\n" + ] + } + ], + "source": [ + "import requests\n", + "import socket\n", + "\n", + "def check_connectivity():\n", + " try:\n", + " # Check DNS resolution\n", + " ip = socket.gethostbyname('api.gradio.app')\n", + " print(f\"DNS resolution for api.gradio.app: {ip}\")\n", + "\n", + " # Check connection to Gradio API\n", + " response = requests.get(\"https://api.gradio.app/v2/tunnel/\", timeout=5)\n", + " print(f\"Gradio API Status: {response.status_code}\")\n", + " print(f\"Gradio API Response: {response.text}\")\n", + "\n", + " # Check connection to HuggingFace CDN\n", + " cdn_response = requests.get(\"https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_aarch64\", timeout=5)\n", + " print(f\"HuggingFace CDN Status: {cdn_response.status_code}\")\n", + " except Exception as e:\n", + " print(f\"Error in connectivity check: {e}\")\n", + "\n", + "check_connectivity()" + ] + }, + { + "cell_type": "code", + "execution_count": 323, + "id": "f7d4eec4-da5e-4fbf-ba3e-fbbcfb399d6c", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "'4.44.0'" + ] + }, + "execution_count": 323, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "import gradio\n", + "gradio.__version__" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "cad08a54-912b-43d2-9280-f00b5a7775a6", + "metadata": {}, "outputs": [], "source": [] } diff --git a/week6/lite.ipynb b/week6/lite.ipynb new file mode 100644 index 0000000..e359219 --- /dev/null +++ b/week6/lite.ipynb @@ -0,0 +1,424 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "28a0673e-96b5-43f2-8a8b-bd033bf851b0", + "metadata": {}, + "source": [ 
+ "# The Product Pricer Continued\n", + "\n", + "A model that can estimate how much something costs, from its description.\n", + "\n", + "## Data Curation Part 2\n", + "\n", + "Today we'll extend our dataset to a greater coverage, and craft it into an excellent dataset for training.\n", + "\n", + "The dataset is here: \n", + "https://huggingface.co/datasets/McAuley-Lab/Amazon-Reviews-2023\n", + "\n", + "And the folder with all the product datasets is here: \n", + "https://huggingface.co/datasets/McAuley-Lab/Amazon-Reviews-2023/tree/main/raw/meta_categories\n", + "\n", + "## The Lite dataset\n", + "\n", + "This notebook is an alternative to `day2.ipynb` that creates a smaller dataset for Home Appliances only, to keep training fast and costs low. You may need to update names of future notebooks to reflect that you have built the \"lite\" dataset not the full dataset." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "67cedf85-8125-4322-998e-9375fe745597", + "metadata": {}, + "outputs": [], + "source": [ + "# imports\n", + "\n", + "import os\n", + "import random\n", + "from dotenv import load_dotenv\n", + "from huggingface_hub import login\n", + "from datasets import load_dataset, Dataset, DatasetDict\n", + "from items import Item\n", + "from loaders import ItemLoader\n", + "import matplotlib.pyplot as plt\n", + "from collections import Counter, defaultdict\n", + "import numpy as np\n", + "import pickle" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7390a6aa-79cb-4dea-b6d7-de7e4b13e472", + "metadata": {}, + "outputs": [], + "source": [ + "# environment\n", + "\n", + "load_dotenv()\n", + "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", + "os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", + "os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "0732274a-aa6a-44fc-aee2-40dc8a8e4451", + "metadata": {}, + "outputs": [], + "source": [ + "# Log in to HuggingFace\n", + "\n", + "hf_token = os.environ['HF_TOKEN']\n", + "login(hf_token, add_to_git_credential=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "1adcf323-de9d-4c24-a9c3-d7ae554d06ca", + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline" + ] + }, + { + "cell_type": "markdown", + "id": "01065d69-765c-42c8-9f90-68b8c8754068", + "metadata": {}, + "source": [ + "## The ItemLoader code\n", + "\n", + "Look in loaders.py - there's some useful code to make life easier for us" + ] + }, + { + "cell_type": "markdown", + "id": "e2b6dc50-ac5c-4cf2-af2e-968ed8ef86d7", + "metadata": {}, + "source": [ + "## Now to SCALE UP\n", + "\n", + "Let's look at all datasets of all the items that you might find in a large home retail store - electrical, electronic, office and related, but not clothes / beauty / books." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d1d06cd3-f3c2-44f0-a9f2-13b54ff8be5c", + "metadata": {}, + "outputs": [], + "source": [ + "dataset_names = [\n", + " # \"Automotive\",\n", + " # \"Electronics\",\n", + " # \"Office_Products\",\n", + " # \"Tools_and_Home_Improvement\",\n", + " # \"Cell_Phones_and_Accessories\",\n", + " # \"Toys_and_Games\",\n", + " \"Appliances\",\n", + " # \"Musical_Instruments\",\n", + "]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "aa8fd0f0-509a-4298-8fcc-e499a061e1be", + "metadata": {}, + "outputs": [], + "source": [ + "items = []\n", + "for dataset_name in dataset_names:\n", + " loader = ItemLoader(dataset_name)\n", + " items.extend(loader.load())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3e29a5ab-ca61-41cc-9b33-22d374681b85", + "metadata": {}, + "outputs": [], + "source": [ + "print(f\"A grand total of {len(items):,} items\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "89078cb1-9679-4eb0-b295-599b8586bcd1", + "metadata": {}, + "outputs": [], + "source": [ + "# Plot the distribution of token counts again\n", + "\n", + "tokens = [item.token_count for item in items]\n", + "plt.figure(figsize=(15, 6))\n", + "plt.title(f\"Token counts: Avg {sum(tokens)/len(tokens):,.1f} and highest {max(tokens):,}\\n\")\n", + "plt.xlabel('Length (tokens)')\n", + "plt.ylabel('Count')\n", + "plt.hist(tokens, rwidth=0.7, color=\"skyblue\", bins=range(0, 300, 10))\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "c38e0c43-9f7a-450e-a911-c94d37d9b9c3", + "metadata": {}, + "outputs": [], + "source": [ + "# Plot the distribution of prices\n", + "\n", + "prices = [item.price for item in items]\n", + "plt.figure(figsize=(15, 6))\n", + "plt.title(f\"Prices: Avg {sum(prices)/len(prices):,.1f} and highest {max(prices):,}\\n\")\n", + "plt.xlabel('Price ($)')\n", + "plt.ylabel('Count')\n", + "plt.hist(prices, rwidth=0.7, color=\"blueviolet\", bins=range(0, 1000, 10))\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "id": "ac046cc1-2717-415b-96ad-b73b2950d235", + "metadata": {}, + "source": [ + "# Dataset Curated!\n", + "\n", + "We've crafted an excellent dataset.\n", + "\n", + "Let's do some final checks" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "70219e99-22cc-4e08-9121-51f9707caef0", + "metadata": {}, + "outputs": [], + "source": [ + "# How does the price vary with the character count of the prompt?\n", + "\n", + "sample = items\n", + "\n", + "sizes = [len(item.prompt) for item in sample]\n", + "prices = [item.price for item in sample]\n", + "\n", + "# Create the scatter plot\n", + "plt.figure(figsize=(15, 8))\n", + "plt.scatter(sizes, prices, s=0.2, color=\"red\")\n", + "\n", + "# Add labels and title\n", + "plt.xlabel('Size')\n", + "plt.ylabel('Price')\n", + "plt.title('Is there a simple correlation?')\n", + "\n", + "# Display the plot\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "30ae1453-b9fc-40db-8310-65d850c4b1da", + "metadata": {}, + "outputs": [], + "source": [ + "def report(item):\n", + " prompt = item.prompt\n", + " tokens = Item.tokenizer.encode(item.prompt)\n", + " print(prompt)\n", + " print(tokens[-10:])\n", + " print(Item.tokenizer.batch_decode(tokens[-10:]))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d9998b8d-d746-4541-9ac2-701108e0e8fb", + "metadata": {}, + "outputs": [], + "source": [ + 
"report(sample[50])" + ] + }, + { + "cell_type": "markdown", + "id": "7aa0a3fc-d2fe-4e6e-8fdb-96913df2f588", + "metadata": {}, + "source": [ + "## Observation\n", + "\n", + "An interesting thing about the Llama tokenizer is that every number from 1 to 999 gets mapped to 1 token, much as we saw with gpt-4o. The same is not true of qwen2, gemma and phi3, which all map individual digits to tokens. This does turn out to be a bit useful for our project, although it's not an essential requirement." + ] + }, + { + "cell_type": "markdown", + "id": "0f03c0ee-3103-4603-af5c-b484884a3aa2", + "metadata": {}, + "source": [ + "# Finally\n", + "\n", + "It's time to break down our data into a training, test and validation dataset.\n", + "\n", + "It's typical to use 5%-10% of your data for testing purposes, but actually we have far more than we need at this point. We'll take 25,000 points for training, and we'll reserve 2,000 for testing, although we won't use all of them.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "3b163ca2-18ef-4c26-8e9d-88eb55f114f6", + "metadata": {}, + "outputs": [], + "source": [ + "random.seed(42)\n", + "random.shuffle(sample)\n", + "train = sample[:25_000]\n", + "test = sample[25_000:27_000]\n", + "print(f\"Divided into a training set of {len(train):,} items and test set of {len(test):,} items\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "299b9816-8885-4798-829a-69d66d60eb01", + "metadata": {}, + "outputs": [], + "source": [ + "print(train[0].prompt)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "97222da3-9f2c-4d15-a5cd-5e5f8dbde6cc", + "metadata": {}, + "outputs": [], + "source": [ + "print(test[0].test_prompt())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "7a116369-335a-412b-b70c-2add6675c2e3", + "metadata": {}, + "outputs": [], + "source": [ + "# Plot the distribution of prices in the first 250 test points\n", + "\n", + "prices = [float(item.price) for item in test[:250]]\n", + "plt.figure(figsize=(15, 6))\n", + "plt.title(f\"Avg {sum(prices)/len(prices):.2f} and highest {max(prices):,.2f}\\n\")\n", + "plt.xlabel('Price ($)')\n", + "plt.ylabel('Count')\n", + "plt.hist(prices, rwidth=0.7, color=\"darkblue\", bins=range(0, 1000, 10))\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "id": "d522d752-6f66-4786-a4dc-8ef51842558c", + "metadata": {}, + "source": [ + "# Finally - upload your brand new dataset\n", + "\n", + "Convert to prompts and upload to HuggingFace hub" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "fa11b3e5-fcf4-4efc-a573-f6f67fec3e73", + "metadata": {}, + "outputs": [], + "source": [ + "train_prompts = [item.prompt for item in train]\n", + "train_prices = [item.price for item in train]\n", + "test_prompts = [item.test_prompt() for item in test]\n", + "test_prices = [item.price for item in test]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b020ab1b-7153-4e5f-b8a3-d5bc2fafb6df", + "metadata": {}, + "outputs": [], + "source": [ + "# Create a Dataset from the lists\n", + "\n", + "train_dataset = Dataset.from_dict({\"text\": train_prompts, \"price\": train_prices})\n", + "test_dataset = Dataset.from_dict({\"text\": test_prompts, \"price\": test_prices})\n", + "dataset = DatasetDict({\n", + " \"train\": train_dataset,\n", + " \"test\": test_dataset\n", + "})" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "17639641-fb55-44e2-a463-b0b394d00f32", + "metadata": {}, + 
"outputs": [], + "source": [ + "DATASET_NAME = \"ed-donner/lite-data\"\n", + "dataset.push_to_hub(DATASET_NAME, private=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "b85733ba-d165-4f07-b055-46803543edfe", + "metadata": {}, + "outputs": [], + "source": [ + "# One more thing!\n", + "# Let's pickle the training and test dataset so we don't have to execute all this code next time!\n", + "\n", + "with open('train_lite.pkl', 'wb') as file:\n", + " pickle.dump(train, file)\n", + "\n", + "with open('test_lite.pkl', 'wb') as file:\n", + " pickle.dump(test, file)" + ] + }, + { + "cell_type": "markdown", + "id": "2b58dc61-747f-46f7-b9e0-c205db4f3e5e", + "metadata": {}, + "source": [ + "## Todos for you:\n", + "\n", + "- Investigate the dataset more!\n", + "- Confirm that the tokenizer tokenizes all 3 digit prices into 1 token" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.10" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +}