
Merge branch 'main' into SD_enhancements

pull/40/head
Ed Donner, 7 months ago, committed by GitHub
commit 01691fdd79
44 changed files (lines changed per file in parentheses):

  1. .gitignore (1)
  2. README.md (135)
  3. environment.yml (6)
  4. week1/day1.ipynb (99)
  5. week1/day5.ipynb (10)
  6. week2/day5.ipynb (38)
  7. week3/day4.ipynb (10)
  8. week5/day1.ipynb (2)
  9. week5/day2.ipynb (5)
  10. week5/day3.ipynb (5)
  11. week5/day4.5.ipynb (5)
  12. week5/day4.ipynb (42)
  13. week5/day5.ipynb (1511)
  14. week6/day3.ipynb (6)
  15. week8/agents/agent.py (33)
  16. week8/agents/deals.py (109)
  17. week8/agents/ensemble_agent.py (48)
  18. week8/agents/frontier_agent.py (105)
  19. week8/agents/messaging_agent.py (78)
  20. week8/agents/planning_agent.py (57)
  21. week8/agents/random_forest_agent.py (37)
  22. week8/agents/scanner_agent.py (94)
  23. week8/agents/specialist_agent.py (29)
  24. week8/day1.ipynb (268)
  25. week8/day2.0.ipynb (258)
  26. week8/day2.1.ipynb (174)
  27. week8/day2.2.ipynb (174)
  28. week8/day2.3.ipynb (397)
  29. week8/day2.4.ipynb (408)
  30. week8/day3.ipynb (235)
  31. week8/day4.ipynb (141)
  32. week8/day5.ipynb (163)
  33. week8/deal_agent_framework.py (94)
  34. week8/hello.py (18)
  35. week8/items.py (101)
  36. week8/llama.py (44)
  37. week8/log_utils.py (35)
  38. week8/memory.json (164)
  39. week8/price_is_right.py (61)
  40. week8/price_is_right_final.py (180)
  41. week8/pricer_ephemeral.py (66)
  42. week8/pricer_service.py (66)
  43. week8/pricer_service2.py (84)
  44. week8/testing.py (75)

.gitignore: 1 changed line

@@ -165,6 +165,7 @@ model_cache/
 # Ignore Chroma vector database
 vector_db/
+products_vectorstore/
 # And ignore any pickle files made during the course
 *.pkl

README.md: 135 changed lines

@@ -11,7 +11,7 @@ I'm so happy you're joining me on this path. We'll be building immensely satisfy
 I'm here to help you be most successful with your learning! If you hit any snafus, or if you have any ideas on how I can improve the course, please do reach out in the platform or by emailing me direct (ed@edwarddonner.com). It's always great to connect with people on LinkedIn to build up the community - you'll find me here:
 https://www.linkedin.com/in/eddonner/
-I'm still polishing up the last couple of weeks of code, but it's looking really terrific and I'll push it in the coming days.
+I'm still polishing up Week 8's code, but it's looking really terrific and I'll push it in the coming days.
 ### An important point on API costs
@@ -19,7 +19,7 @@ During the course, I'll suggest you try out the leading models at the forefront
 Please do monitor your API usage to ensure you're comfortable with spend; I've included links below. There's no need to spend anything more than a couple of dollars for the entire course. During Week 7 you have an option to spend a bit more if you're enjoying the process - I spend about $10 myself and the results make me very happy indeed! But it's not necessary in the least; the important part is that you focus on learning.
-### How this Jupyter Lab is organized
+### How this Repo is organized
 There are folders for each of the "weeks", representing modules of the class.
 Follow the setup instructions below, then open the Week 1 folder and prepare for joy.
@@ -32,29 +32,105 @@ The mantra of the course is: the best way to learn is by **DOING**. You should w
 By far the recommended approach is to use Anaconda for your environment. Even if you've never used it before, it makes such a difference. Anaconda ensures that you're working with the right version of Python and all your packages are compatible with mine, even if we're on different platforms.
-### Getting ready to set up
-Clone this repo by clicking on the dropdown in the green 'Code' button in Github, copying the URL to the clip board, and entering `git clone <url>` in your terminal.
-Then if you've not used Anaconda before, install it for your platform. You will thank me! It's the best.
-Link to install Anaconda:
-https://docs.anaconda.com/anaconda/install/
-### Setup instructions in 4 steps
-1. Create a new Anaconda environment for this project. It's like virtualenv, only infinitely better.
-`conda env create -f environment.yml`
-2. Activate the environment:
-`conda activate llms`
-3. Start your Jupyter Lab
-`jupyter lab`
-4. Get a celebratory cup of coffee and prepare for coding!
+**Update** Some people have had problems with Anaconda - horrors! The idea of Anaconda is to make it really smooth and simple to be working with the same environment. If you hit any problems with the instructions below, please skip to near the end of this README for the alternative approach using `pip`, and hopefully you'll be up and running fast. And please do message me if I can help with anything.
+We'll be mostly using Jupyter Lab in this course. For those new to Jupyter Lab / Jupyter Notebook, it's a delightful Data Science environment where you can simply hit shift+enter in any cell to run it; start at the top and work your way down! When we move to Google Colab in Week 3, you'll experience the same interface for Python runtimes in the cloud.
+### For PC Users
+1. **Install Git** (if not already installed):
+- Download Git from https://git-scm.com/download/win
+- Run the installer and follow the prompts, using default options
+2. **Open Command Prompt:**
+- Press Win + R, type `cmd`, and press Enter
+3. **Navigate to your projects folder:**
+If you have a specific folder for projects, navigate to it using the cd command. For example:
+`cd C:\Users\YourUsername\Documents\Projects`
+If you don't have a projects folder, you can create one:
+```
+mkdir C:\Users\YourUsername\Documents\Projects
+cd C:\Users\YourUsername\Documents\Projects
+```
+(Replace YourUsername with your actual Windows username)
+4. **Clone the repository:**
+- Go to the course's GitHub page
+- Click the green 'Code' button and copy the URL
+- In the Command Prompt, type: `git clone <paste-url-here>`
+5. **Install Anaconda:**
+- Download Anaconda from https://docs.anaconda.com/anaconda/install/windows/
+- Run the installer and follow the prompts
+- A student mentioned that if you are prompted to upgrade Anaconda to a newer version during the install, you shouldn't do it, as there might be problems with the very latest update for PC. (Thanks for the pro-tip!)
+6. **Set up the environment:**
+- Open Anaconda Prompt (search for it in the Start menu)
+- Navigate to the cloned repository folder using `cd path\to\repo`
+- Create the environment: `conda env create -f environment.yml`
+- Wait a few minutes for all packages to be installed
+- Activate the environment: `conda activate llms`
+You should see `(llms)` in your prompt, which indicates you've activated your new environment.
+7. **Start Jupyter Lab:**
+- In the Anaconda Prompt, type: `jupyter lab`
+Congratulations! You're now ready to start coding. Enjoy your celebratory cup of coffee!
+### For Mac Users
+1. **Install Git** if not already installed (it will be in most cases)
+- Open Terminal (Applications > Utilities > Terminal)
+- Type `git --version`. If not installed, you'll be prompted to install it
+2. **Navigate to your projects folder:**
+If you have a specific folder for projects, navigate to it using the cd command. For example:
+`cd ~/Documents/Projects`
+If you don't have a projects folder, you can create one:
+```
+mkdir ~/Documents/Projects
+cd ~/Documents/Projects
+```
+3. **Clone the repository:**
+- Go to the course's GitHub page
+- Click the green 'Code' button and copy the URL
+- In Terminal, type: `git clone <paste-url-here>`
+4. **Install Anaconda:**
+- Download Anaconda from https://docs.anaconda.com/anaconda/install/mac-os/
+- Double-click the downloaded file and follow the installation prompts
+5. **Set up the environment:**
+- Open Terminal
+- Navigate to the cloned repository folder using `cd path/to/repo`
+- Create the environment: `conda env create -f environment.yml`
+- Wait a few minutes for all packages to be installed
+- Activate the environment: `conda activate llms`
+You should see `(llms)` in your prompt, which indicates you've activated your new environment.
+6. **Start Jupyter Lab:**
+- In Terminal, type: `jupyter lab`
+Congratulations! You're now ready to start coding. Enjoy your celebratory cup of coffee!
 ### When we get to it, creating your API keys
@@ -64,7 +140,7 @@ Particularly during weeks 1 and 2 of the course, you'll be writing code to call
 - [Claude API](https://console.anthropic.com/) from Anthropic
 - [Gemini API](https://ai.google.dev/gemini-api) from Google
-Initially we'll only use OpenAI, so you can start with that, and we'll cover the others soon afterwards. See the extra note on API costs below if that's a concern. One student mentioned to me that OpenAI can take a few minutes to register; if you initially get an error about being out of quota, wait a few minutes and try again. If it's still a problem, message me!
+Initially we'll only use OpenAI, so you can start with that, and we'll cover the others soon afterwards. The webpage where you set up your OpenAI key is [here](https://platform.openai.com/api-keys). See the extra note on API costs below if that's a concern. One student mentioned to me that OpenAI can take a few minutes to register; if you initially get an error about being out of quota, wait a few minutes and try again. Another reason you might encounter the out-of-quota error is if you haven't yet added a valid payment method to your OpenAI account. You can do this by clicking your profile picture on the OpenAI website, then clicking "Your profile." Once you are redirected to your profile page, choose "Billing" on the left-pane menu. You will need to enter a valid payment method and charge your account with a small advance payment. It is recommended that you **disable** the automatic recharge as an extra failsafe. If it's still a problem, see more troubleshooting tips in the Week 1 Day 1 notebook, and/or message me!
 Later in the course you'll be using the fabulous HuggingFace platform; an account is available for free at [HuggingFace](https://huggingface.co) - you can create an API token from the Avatar menu >> Settings >> Access Tokens.
@@ -88,6 +164,8 @@ If you have any problems with this process, there's a simple workaround which I
 You should be able to use the free tier or minimal spend to complete all the projects in the class. I personally signed up for Colab Pro+ and I'm loving it - but it's not required.
+Learn about Google Colab and set up a Google account (if you don't already have one) [here](https://colab.research.google.com/)
 The colab links are in the Week folders and also here:
 - For week 3 day 1, this Google Colab shows what [colab can do](https://colab.research.google.com/drive/1DjcrYDZldAXKJ08x1uYIVCtItoLPk1Wr?usp=sharing)
 - For week 3 day 2, here is a colab for the HuggingFace [pipelines API](https://colab.research.google.com/drive/1aMaEw8A56xs0bRM4lu8z7ou18jqyybGm?usp=sharing)
@@ -107,17 +185,24 @@ The charges for the exercises in this course should always be quite low, but if
 ## And that's it! Happy coding!
-### Alternative Setup Instructions if you're a die-hard virtualenv-er
-Well if you must! Just be sure to be running python 3.11, or we might hit compatibility snags.
+### Alternative Setup Instructions if Anaconda is giving you problems
+First please run:
+`python --version`
+To find out which python you're on. Ideally you'd be using Python 3.11.x, so we're completely in sync. You can download python at
+https://www.python.org/downloads/
 Here are the steps:
-After cloning the repo:
-1. Create a new virtual environment using something like `python3 -m venv /path/to/new/virtual/environment`
-2. Activate the virtual environment with `source /path/to/new/virtual/environment/bin/activate`
-3. Create a file called `.env` in the project root directory (this is .gitignored) and add any private API keys, such as below.
+After cloning the repo, cd into the project root directory `llm_engineering`.
+Then:
+1. Create a new virtual environment: `python -m venv venv`
+2. Activate the virtual environment with
+On a Mac: `source venv/bin/activate`
+On a PC: `venv\Scripts\activate`
+3. Run `pip install -r requirements.txt`
+4. Create a file called `.env` in the project root directory and add any private API keys, such as below. (The next section has more detailed instructions for this, if you prefer.)
 ```
 OPENAI_API_KEY=xxxx
@@ -126,11 +211,9 @@ ANTHROPIC_API_KEY=xxxx
 HF_TOKEN=xxxx
 ```
-4. From the repo root directory, run `pip install -r requirements.txt`
 5. Run `jupyter lab` to launch Jupyter and head over to the intro folder to get started.
-Let me know if you hit problems, and try looking in the environment.yml file to see if there are clues for any other packages that need to be installed in your system.
-Or... try Anaconda!!
+Let me know if you hit problems.
 ### Guide to creating the `.env` file
@@ -184,4 +267,4 @@ Control + X to exit the editor
 And confirm that the `.env` file is there.
 Please do message me or email me at ed@edwarddonner.com if this doesn't work or if I can help with anything. I can't wait to hear how you get on.

environment.yml: 6 changed lines

@@ -30,8 +30,12 @@ dependencies:
 - tiktoken
 - jupyter-dash
 - plotly
+- twilio
+- duckdb
+- feedparser
 - pip:
   - transformers
+  - sentence-transformers
   - datasets
   - accelerate
   - sentencepiece
@@ -39,3 +43,5 @@ dependencies:
   - openai
   - gradio
   - gensim
+  - modal
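The merge adds five dependencies (twilio, duckdb, feedparser, sentence-transformers, modal) to the environment. If you created the `llms` environment before this commit, a quick sketch like this can check whether they are installed (the module list is ours; note pip's "sentence-transformers" imports as `sentence_transformers`):

```python
import importlib.util

# Import names for the packages this commit adds to environment.yml
NEW_MODULES = ["twilio", "duckdb", "feedparser", "sentence_transformers", "modal"]

def missing_modules(names=NEW_MODULES):
    # Return the modules that can't be found in the current environment
    return [n for n in names if importlib.util.find_spec(n) is None]

print(missing_modules())
```

If anything is listed, `conda env update -f environment.yml` (or `pip install -r requirements.txt` for the pip route) should pick up the new dependencies.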

week1/day1.ipynb: 99 changed lines

@@ -11,7 +11,13 @@
     "\n",
     "Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
     "\n",
-    "Before starting, be sure to have followed the instructions in the \"README\" file, including creating your API key with OpenAI and adding it to the `.env` file."
+    "Before starting, be sure to have followed the instructions in the \"README\" file, including creating your API key with OpenAI and adding it to the `.env` file.\n",
+    "\n",
+    "## If you're new to Jupyter Lab\n",
+    "\n",
+    "Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, like the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations.\n",
+    "\n",
+    "If you need to start again, go to Kernel menu >> Restart kernel."
    ]
   },
   {
@@ -40,11 +46,16 @@
     "\n",
     "The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n",
     "\n",
-    "Troubleshooting if you have problems:\n",
+    "## Troubleshooting if you have problems:\n",
     "\n",
     "1. OpenAI takes a few minutes to register after you set up an account. If you receive an error about being over quota, try waiting a few minutes and try again.\n",
-    "2. As a fallback, replace the line `openai = OpenAI()` with `openai = OpenAI(api_key=\"your-key-here\")` - while it's not recommended to hard code tokens in Jupyter lab, because then you can't share your lab with others, it's a workaround for now\n",
-    "3. Contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
+    "2. Also, double check you have the right kind of API token with the right permissions. You should find it on [this webpage](https://platform.openai.com/api-keys) and it should show with Permissions of \"All\". If not, try creating another key by:\n",
+    "- Pressing \"Create new secret key\" on the top right\n",
+    "- Select **Owned by:** you, **Project:** Default project, **Permissions:** All\n",
+    "- Click Create secret key, and use that new key in the code and the `.env` file (it might take a few minutes to activate)\n",
+    "- Do a Kernel >> Restart kernel, and execute the cells in this Jupyter lab starting at the top\n",
+    "3. As a fallback, replace the line `openai = OpenAI()` with `openai = OpenAI(api_key=\"your-key-here\")` - while it's not recommended to hard code tokens in Jupyter lab, because then you can't share your lab with others, it's a workaround for now\n",
+    "4. Contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
     "\n",
     "Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point."
    ]
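The troubleshooting list above centres on the API key. A small sanity check like this sketch (the function is illustrative, not from the notebook) can catch a missing or malformed key before any OpenAI call fails:

```python
import os

def check_openai_key():
    # Illustrative sanity check: confirm the key that load_dotenv should have set
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set - check your .env file")
    if not key.startswith("sk-"):
        raise RuntimeError("OPENAI_API_KEY doesn't look like an OpenAI key (they start with sk-)")
    return key
```

Running this right after `load_dotenv()` gives a clearer error than an opaque quota or authentication failure later on.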
@@ -396,84 +407,14 @@
    ]
   },
   {
-   "cell_type": "code",
-   "execution_count": 19,
-   "id": "49c4315f-340b-4371-b6cd-2a772f4b7bdd",
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/markdown": [
-       "# Summary of Visit Singapore Official Site\n",
-       "\n",
-       "The **Visit Singapore Official Site** serves as a comprehensive guide for tourists and locals eager to explore the myriad attractions that Singapore has to offer. The website features detailed information on various categories including:\n",
-       "\n",
-       "- **Top Attractions**: Highlights of popular places to visit, such as Gardens by the Bay, Sentosa Island, and Universal Studios Singapore.\n",
-       "- **Cultural Experiences**: Insights into Singapore's diverse heritage and cultural festivals.\n",
-       "- **Dining Options**: Recommendations for local cuisine, hawker centers, and fine dining establishments.\n",
-       "- **Shopping**: Guides on where to shop, including famous shopping streets and malls.\n",
-       "- **Events and Festivals**: Information on upcoming events and annual festivals that showcase Singapore’s vibrant lifestyle.\n",
-       "\n",
-       "The site also emphasizes the city’s safety and cleanliness, making it an appealing destination for travelers.\n",
-       "\n",
-       "### News and Announcements\n",
-       "No specific news or announcements were highlighted in the provided content."
-      ],
-      "text/plain": [
-       "<IPython.core.display.Markdown object>"
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    }
-   ],
-   "source": [
-    "display_summary(\"https://www.visitsingapore.com\")"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 20,
-   "id": "7586494d-d2d7-4e08-952b-b07420b12edc",
-   "metadata": {},
-   "outputs": [
-    {
-     "data": {
-      "text/markdown": [
-       "# Gardens by the Bay - Summary\n",
-       "\n",
-       "Gardens by the Bay is a premier horticultural attraction located in the heart of Singapore, renowned for its diverse collection of over 1.5 million plants from around the world, excluding Antarctica. The site features iconic structures and attractions such as the Flower Dome, Cloud Forest, OCBC Skyway, and Supertree Observatory, creating a unique blend of nature and architecture.\n",
-       "\n",
-       "## Highlights\n",
-       "- **Attractions**: Visitors can explore various themed conservatories, interact with art sculptures, and enjoy panoramic views from the Skyway.\n",
-       "- **Events**: Noteworthy upcoming events include the \"Carnival of Flowers\" running from September 23 to November 17, 2024, and seasonal craft activities in the Flower Dome.\n",
-       "- **Sustainability**: The gardens emphasize sustainability through innovative architecture and eco-friendly practices.\n",
-       "\n",
-       "## Promotions and Membership\n",
-       "- Current promotions include a 15% discount on Friends of the Gardens membership for DBS/POSB cardholders until October 31, 2024, and ongoing deals for dining within the attraction.\n",
-       "- A chance to win air tickets to Europe is offered for new Friends of the Gardens members from September 1, 2024, to May 31, 2025.\n",
-       "\n",
-       "The website serves as a comprehensive guide for planning visits, offers educational resources for schools, and encourages engagement through social media platforms."
-      ],
-      "text/plain": [
-       "<IPython.core.display.Markdown object>"
-      ]
-     },
-     "metadata": {},
-     "output_type": "display_data"
-    }
-   ],
-   "source": [
-    "display_summary(\"https://www.gardensbythebay.com.sg/\")"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "79f8471d-46a7-4250-a550-dab379bb9263",
-   "metadata": {},
-   "outputs": [],
-   "source": []
+   "cell_type": "markdown",
+   "id": "36ed9f14-b349-40e9-a42c-b367e77f8bda",
+   "metadata": {},
+   "source": [
+    "## An extra exercise for those who enjoy web scraping\n",
+    "\n",
+    "You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. Please push your code afterwards so I can share it with other students!"
+   ]
   }
  ],
 "metadata": {

week1/day5.ipynb: 10 changed lines

@@ -1076,8 +1076,14 @@
    "outputs": [],
    "source": [
     "system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
-    "and creates a brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
-    "Include details of company culture, customers and careers/jobs if you have the information.\""
+    "and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
+    "Include details of company culture, customers and careers/jobs if you have the information.\"\n",
+    "\n",
+    "# Or uncomment the lines below for a more humorous brochure:\n",
+    "\n",
+    "# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
+    "# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
+    "# Include details of company culture, customers and careers/jobs if you have the information.\"\n"
    ]
   },
   {
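Either system prompt above gets paired with a user prompt in the standard chat-completions message format. As a reminder of that structure (the helper name is ours, not the notebook's):

```python
def build_messages(system_prompt, user_prompt):
    # OpenAI chat-completions format: a system message sets behaviour,
    # then the user message carries the actual request
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]
```

Swapping between the serious and humorous brochure is then just a matter of which system prompt you pass in; the rest of the call is unchanged.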

week2/day5.ipynb: 38 changed lines

@@ -298,7 +298,39 @@
    "source": [
     "## Audio\n",
     "\n",
-    "And let's make a function talker that uses OpenAI's speech model to generate Audio"
+    "And let's make a function talker that uses OpenAI's speech model to generate Audio\n",
+    "\n",
+    "### Troubleshooting Audio issues\n",
+    "\n",
+    "If you have any problems running this code below (like a FileNotFound error, or a warning of a missing package), you may need to install FFmpeg, a very popular audio utility.\n",
+    "\n",
+    "**For PC Users**\n",
+    "\n",
+    "1. Download FFmpeg from the official website: https://ffmpeg.org/download.html\n",
+    "\n",
+    "2. Extract the downloaded files to a location on your computer (e.g., `C:\\ffmpeg`)\n",
+    "\n",
+    "3. Add the FFmpeg bin folder to your system PATH:\n",
+    "- Right-click on 'This PC' or 'My Computer' and select 'Properties'\n",
+    "- Click on 'Advanced system settings'\n",
+    "- Click on 'Environment Variables'\n",
+    "- Under 'System variables', find and edit 'Path'\n",
+    "- Add a new entry with the path to your FFmpeg bin folder (e.g., `C:\\ffmpeg\\bin`)\n",
+    "- Restart your command prompt, and within Jupyter Lab do Kernel -> Restart kernel, to pick up the changes\n",
+    "\n",
+    "4. Open a new command prompt and run this to make sure it's installed OK\n",
+    "`ffmpeg -version`\n",
+    "\n",
+    "**For Mac Users**\n",
+    "\n",
+    "1. Install homebrew if you don't have it already by running this in a Terminal window and following any instructions: \n",
+    "`/bin/bash -c \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\"`\n",
+    "\n",
+    "2. Then install FFmpeg with `brew install ffmpeg`\n",
+    "\n",
+    "3. Verify your installation with `ffmpeg -version` and if everything is good, within Jupyter Lab do Kernel -> Restart kernel to pick up the changes\n",
+    "\n",
+    "Message me or email me at ed@edwarddonner.com with any problems!"
    ]
   },
   {
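Before running the `talker` cell, you can confirm the FFmpeg install from within Python too - this sketch simply checks whether the binary is on your PATH, which is what the `ffmpeg -version` step above verifies from the shell:

```python
import shutil

def ffmpeg_on_path():
    # shutil.which returns the full path to the binary, or None if not found
    return shutil.which("ffmpeg")

print(ffmpeg_on_path() or "ffmpeg not found - see the install steps above")
```

Remember to restart the Jupyter kernel after changing PATH, or the notebook's process won't see the update.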
@@ -314,7 +346,7 @@
    "def talker(message):\n",
    "    response = openai.audio.speech.create(\n",
    "        model=\"tts-1\",\n",
-    "        voice=\"onyx\", #alloy onyx\n",
+    "        voice=\"onyx\", # Also, try replacing onyx with alloy\n",
    "        input=message\n",
    "    )\n",
    "    \n",
@@ -348,7 +380,7 @@
    "4. An LLM can act as the Planner, dividing bigger tasks into smaller ones for the specialists\n",
    "5. The concept of an Agent having autonomy / agency, beyond just responding to a prompt - such as Memory\n",
    "\n",
-    "We're showing 1 and 2 here, and to a lesser extent 3 and 5."
+    "We're showing 1 and 2 here, and to a lesser extent 3 and 5. In week 8 we will do the lot!"
    ]
   },
   {
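When an LLM "uses a tool", the model only names a function and its arguments; our own code performs the actual call and returns the result. A minimal sketch of that dispatch side (the tool table, city names and prices here are all invented for illustration):

```python
# Illustrative tool registry: map the tool name the LLM returns
# to an ordinary Python function
TOOLS = {
    "get_ticket_price": lambda city: {"London": "$799", "Paris": "$899"}.get(city, "Unknown"),
}

def handle_tool_call(name, **kwargs):
    # Look the named tool up and call it with the LLM-supplied arguments
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    return TOOLS[name](**kwargs)
```

The real notebook parses the tool name and arguments out of the model's response before a dispatch step like this; the registry pattern keeps adding new tools to a single dictionary entry.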

week3/day4.ipynb: 10 changed lines

@@ -9,18 +9,10 @@
    "\n",
    "And now - this colab unveils the heart (or the brains?) of the transformers library - the models:\n",
    "\n",
-    "https://colab.research.google.com/drive/1WD6Y2N7ctQi1X9wa6rpkg8UfyA4iSVuz?usp=sharing\n",
+    "https://colab.research.google.com/drive/1hhR9Z-yiqjUe7pJjVQw4c74z_V3VchLy?usp=sharing\n",
    "\n",
    "This should run nicely on a low-cost or free T4 box."
    ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "e9289ba7-200c-43a9-b67a-c5ce826c9537",
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
 "metadata": {

week5/day1.ipynb: 2 changed lines

@@ -97,7 +97,7 @@
    "products = glob.glob(\"knowledge-base/products/*\")\n",
    "\n",
    "for product in products:\n",
-    "    name = product.split('/')[-1][:-3]\n",
+    "    name = product.split(os.sep)[-1][:-3]\n",
    "    doc = \"\"\n",
    "    with open(product, \"r\") as f:\n",
    "        doc = f.read()\n",
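The fix above swaps `'/'` for `os.sep` so the split works on Windows paths too. An alternative sketch that avoids referencing the separator at all (the function name and sample path are ours for illustration):

```python
import os

def product_name(path):
    # basename takes the final path component (handling \ as well on Windows),
    # and splitext drops the ".md" extension - no hardcoded separator or [:-3]
    return os.path.splitext(os.path.basename(path))[0]

print(product_name("knowledge-base/products/example.md"))
```

This also avoids the `[:-3]` slice silently mangling names if a file ever has a different extension length.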

week5/day2.ipynb: 5 changed lines

@@ -80,10 +80,13 @@
    "\n",
    "folders = glob.glob(\"knowledge-base/*\")\n",
    "\n",
+    "# With thanks to Jon R, a student on the course, for this fix needed for some users \n",
+    "text_loader_kwargs={'autodetect_encoding': True}\n",
+    "\n",
    "documents = []\n",
    "for folder in folders:\n",
    "    doc_type = os.path.basename(folder)\n",
-    "    loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader)\n",
+    "    loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n",
    "    folder_docs = loader.load()\n",
    "    for doc in folder_docs:\n",
    "        doc.metadata[\"doc_type\"] = doc_type\n",

week5/day3.ipynb: 5 changed lines

@@ -86,10 +86,13 @@
    "\n",
    "folders = glob.glob(\"knowledge-base/*\")\n",
    "\n",
+    "# With thanks to Jon R, a student on the course, for this fix needed for some users \n",
+    "text_loader_kwargs={'autodetect_encoding': True}\n",
+    "\n",
    "documents = []\n",
    "for folder in folders:\n",
    "    doc_type = os.path.basename(folder)\n",
-    "    loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader)\n",
+    "    loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n",
    "    folder_docs = loader.load()\n",
    "    for doc in folder_docs:\n",
    "        doc.metadata[\"doc_type\"] = doc_type\n",

week5/day4.5.ipynb: 5 changed lines

@@ -87,10 +87,13 @@
    "\n",
    "folders = glob.glob(\"knowledge-base/*\")\n",
    "\n",
+    "# With thanks to Jon R, a student on the course, for this fix needed for some users \n",
+    "text_loader_kwargs={'autodetect_encoding': True}\n",
+    "\n",
    "documents = []\n",
    "for folder in folders:\n",
    "    doc_type = os.path.basename(folder)\n",
-    "    loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader)\n",
+    "    loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n",
    "    folder_docs = loader.load()\n",
    "    for doc in folder_docs:\n",
    "        doc.metadata[\"doc_type\"] = doc_type\n",

42
week5/day4.ipynb

@ -16,7 +16,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": 1,
"id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77", "id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
@ -31,7 +31,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": 2,
"id": "802137aa-8a74-45e0-a487-d1974927d7ca", "id": "802137aa-8a74-45e0-a487-d1974927d7ca",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
@ -52,7 +52,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": 3,
"id": "58c85082-e417-4708-9efe-81a5d55d1424", "id": "58c85082-e417-4708-9efe-81a5d55d1424",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
@ -65,7 +65,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": 4,
"id": "ee78efcb-60fe-449e-a944-40bab26261af", "id": "ee78efcb-60fe-449e-a944-40bab26261af",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
@ -78,7 +78,7 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": 5,
"id": "730711a9-6ffe-4eee-8f48-d6cfb7314905", "id": "730711a9-6ffe-4eee-8f48-d6cfb7314905",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [],
@ -88,10 +88,13 @@
"\n", "\n",
"folders = glob.glob(\"knowledge-base/*\")\n", "folders = glob.glob(\"knowledge-base/*\")\n",
"\n", "\n",
"# With thanks to Jon R, a student on the course, for this fix needed for some users \n",
"text_loader_kwargs={'autodetect_encoding': True}\n",
"\n",
"documents = []\n", "documents = []\n",
"for folder in folders:\n", "for folder in folders:\n",
" doc_type = os.path.basename(folder)\n", " doc_type = os.path.basename(folder)\n",
" loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader)\n", " loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n",
" folder_docs = loader.load()\n", " folder_docs = loader.load()\n",
" for doc in folder_docs:\n", " for doc in folder_docs:\n",
" doc.metadata[\"doc_type\"] = doc_type\n", " doc.metadata[\"doc_type\"] = doc_type\n",
@ -100,10 +103,18 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": 6,
"id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a", "id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Created a chunk of size 1088, which is longer than the specified 1000\n"
]
}
],
"source": [ "source": [
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
"chunks = text_splitter.split_documents(documents)" "chunks = text_splitter.split_documents(documents)"
@ -111,10 +122,21 @@
}, },
{ {
"cell_type": "code", "cell_type": "code",
"execution_count": null, "execution_count": 7,
"id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb", "id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb",
"metadata": {}, "metadata": {},
"outputs": [], "outputs": [
{
"data": {
"text/plain": [
"123"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [ "source": [
"len(chunks)" "len(chunks)"
] ]

1511
week5/day5.ipynb

File diff suppressed because one or more lines are too long

6
week6/day3.ipynb

@ -1341,12 +1341,6 @@
"np.random.seed(42)\n", "np.random.seed(42)\n",
"\n", "\n",
"# Separate features and target\n", "# Separate features and target\n",
"feature_columns = [col for col in train_df.columns if col != 'price']\n",
"X_train = train_df[feature_columns]\n",
"y_train = train_df['price']\n",
"X_test = test_df[feature_columns]\n",
"y_test = test_df['price']\n",
"\n",
"feature_columns = ['weight', 'rank', 'text_length', 'is_top_electronics_brand']\n", "feature_columns = ['weight', 'rank', 'text_length', 'is_top_electronics_brand']\n",
"\n", "\n",
"X_train = train_df[feature_columns]\n", "X_train = train_df[feature_columns]\n",

33
week8/agents/agent.py

@ -0,0 +1,33 @@
import logging
class Agent:
"""
An abstract superclass for Agents
Used to log messages in a way that can identify each Agent
"""
# Foreground colors
RED = '\033[31m'
GREEN = '\033[32m'
YELLOW = '\033[33m'
BLUE = '\033[34m'
MAGENTA = '\033[35m'
CYAN = '\033[36m'
WHITE = '\033[37m'
# Background color
BG_BLACK = '\033[40m'
# Reset code to return to default color
RESET = '\033[0m'
name: str = ""
color: str = '\033[37m'
def log(self, message):
"""
Log this as an info message, identifying the agent
"""
color_code = self.BG_BLACK + self.color
message = f"[{self.name}] {message}"
logging.info(color_code + message + self.RESET)

109
week8/agents/deals.py

@ -0,0 +1,109 @@
from pydantic import BaseModel
from typing import List, Dict, Self
from bs4 import BeautifulSoup
import re
import feedparser
from tqdm import tqdm
import requests
import time
feeds = [
"https://www.dealnews.com/c142/Electronics/?rss=1",
"https://www.dealnews.com/c39/Computers/?rss=1",
"https://www.dealnews.com/c238/Automotive/?rss=1",
"https://www.dealnews.com/f1912/Smart-Home/?rss=1",
"https://www.dealnews.com/c196/Home-Garden/?rss=1",
]
def extract(html_snippet: str) -> str:
"""
Use Beautiful Soup to clean up this HTML snippet and extract useful text
"""
soup = BeautifulSoup(html_snippet, 'html.parser')
snippet_div = soup.find('div', class_='snippet summary')
if snippet_div:
description = snippet_div.get_text(strip=True)
description = BeautifulSoup(description, 'html.parser').get_text()
description = re.sub('<[^<]+?>', '', description)
result = description.strip()
else:
result = html_snippet
return result.replace('\n', ' ')
class ScrapedDeal:
"""
A class to represent a Deal retrieved from an RSS feed
"""
category: str
title: str
summary: str
url: str
details: str
features: str
def __init__(self, entry: Dict[str, str]):
"""
Populate this instance based on the provided dict
"""
self.title = entry['title']
self.summary = extract(entry['summary'])
self.url = entry['links'][0]['href']
stuff = requests.get(self.url).content
soup = BeautifulSoup(stuff, 'html.parser')
content = soup.find('div', class_='content-section').get_text()
content = content.replace('\nmore', '').replace('\n', ' ')
if "Features" in content:
self.details, self.features = content.split("Features")
else:
self.details = content
self.features = ""
def __repr__(self):
"""
Return a string to describe this deal
"""
return f"<{self.title}>"
def describe(self):
"""
Return a longer string to describe this deal for use in calling a model
"""
return f"Title: {self.title}\nDetails: {self.details.strip()}\nFeatures: {self.features.strip()}\nURL: {self.url}"
@classmethod
def fetch(cls, show_progress : bool = False) -> List[Self]:
"""
Retrieve all deals from the selected RSS feeds
"""
deals = []
feed_iter = tqdm(feeds) if show_progress else feeds
for feed_url in feed_iter:
feed = feedparser.parse(feed_url)
for entry in feed.entries[:10]:
deals.append(cls(entry))
time.sleep(0.5)
return deals
class Deal(BaseModel):
"""
A class to represent a Deal with a summary description
"""
product_description: str
price: float
url: str
class DealSelection(BaseModel):
"""
A class to represent a list of Deals
"""
deals: List[Deal]
class Opportunity(BaseModel):
"""
A class to represent a possible opportunity: a Deal where we estimate
it should cost more than it's being offered
"""
deal: Deal
estimate: float
discount: float
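The Opportunity arithmetic at the end of this file (discount = estimate minus asking price) can be sketched with stdlib dataclasses standing in for the pydantic models; the product and the numbers are hypothetical:

```python
from dataclasses import dataclass

# Stdlib stand-ins for the pydantic Deal and Opportunity models above
@dataclass
class Deal:
    product_description: str
    price: float
    url: str

@dataclass
class Opportunity:
    deal: Deal
    estimate: float
    discount: float

deal = Deal("Hypothetical USB-C condenser microphone", price=49.99, url="https://example.com/deal")
estimate = 129.99  # what an ensemble pricer might predict for this item
opp = Opportunity(deal=deal, estimate=estimate, discount=estimate - deal.price)
print(f"Discount: ${opp.discount:.2f}")  # Discount: $80.00
```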

48
week8/agents/ensemble_agent.py

@ -0,0 +1,48 @@
import pandas as pd
from sklearn.linear_model import LinearRegression
import joblib
from agents.agent import Agent
from agents.specialist_agent import SpecialistAgent
from agents.frontier_agent import FrontierAgent
from agents.random_forest_agent import RandomForestAgent
class EnsembleAgent(Agent):
name = "Ensemble Agent"
color = Agent.YELLOW
def __init__(self, collection):
"""
Create an instance of Ensemble, by creating each of the models
And loading the weights of the Ensemble
"""
self.log("Initializing Ensemble Agent")
self.specialist = SpecialistAgent()
self.frontier = FrontierAgent(collection)
self.random_forest = RandomForestAgent()
self.model = joblib.load('ensemble_model.pkl')
self.log("Ensemble Agent is ready")
def price(self, description: str) -> float:
"""
Run this ensemble model
Ask each of the models to price the product
Then use the Linear Regression model to return the weighted price
:param description: the description of a product
:return: an estimate of its price
"""
self.log("Running Ensemble Agent - collaborating with specialist, frontier and random forest agents")
specialist = self.specialist.price(description)
frontier = self.frontier.price(description)
random_forest = self.random_forest.price(description)
X = pd.DataFrame({
'Specialist': [specialist],
'Frontier': [frontier],
'RandomForest': [random_forest],
'Min': [min(specialist, frontier, random_forest)],
'Max': [max(specialist, frontier, random_forest)],
})
y = self.model.predict(X)[0]
self.log(f"Ensemble Agent complete - returning ${y:.2f}")
return y
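What `price()` does above can be sketched without pandas or scikit-learn: build the same five features, then apply linear coefficients. The weights and intercept here are made-up stand-ins for the trained `ensemble_model.pkl`:

```python
# Three base-model estimates for the same product (hypothetical values)
specialist, frontier, random_forest = 120.0, 110.0, 95.0

# The same five features EnsembleAgent.price() assembles in its DataFrame
features = {
    'Specialist': specialist,
    'Frontier': frontier,
    'RandomForest': random_forest,
    'Min': min(specialist, frontier, random_forest),
    'Max': max(specialist, frontier, random_forest),
}

# Made-up coefficients standing in for the trained LinearRegression
weights = {'Specialist': 0.4, 'Frontier': 0.4, 'RandomForest': 0.1, 'Min': 0.05, 'Max': 0.05}
intercept = 1.0

estimate = intercept + sum(weights[k] * v for k, v in features.items())
print(f"${estimate:.2f}")  # $113.25
```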

105
week8/agents/frontier_agent.py

@ -0,0 +1,105 @@
# imports
import os
import re
import math
import json
from typing import List, Dict
from openai import OpenAI
from sentence_transformers import SentenceTransformer
from datasets import load_dataset
import chromadb
from items import Item
from testing import Tester
from agents.agent import Agent
class FrontierAgent(Agent):
name = "Frontier Agent"
color = Agent.BLUE
MODEL = "gpt-4o-mini"
def __init__(self, collection):
"""
Set up this instance by connecting to OpenAI, to the Chroma Datastore,
And setting up the vector encoding model
"""
self.log("Initializing Frontier Agent")
self.openai = OpenAI()
self.collection = collection
self.model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
self.log("Frontier Agent is ready")
def make_context(self, similars: List[str], prices: List[float]) -> str:
"""
Create context that can be inserted into the prompt
:param similars: similar products to the one being estimated
:param prices: prices of the similar products
:return: text to insert in the prompt that provides context
"""
message = "To provide some context, here are some other items that might be similar to the item you need to estimate.\n\n"
for similar, price in zip(similars, prices):
message += f"Potentially related product:\n{similar}\nPrice is ${price:.2f}\n\n"
return message
def messages_for(self, description: str, similars: List[str], prices: List[float]) -> List[Dict[str, str]]:
"""
Create the message list to be included in a call to OpenAI
With the system and user prompt
:param description: a description of the product
:param similars: similar products to this one
:param prices: prices of similar products
:return: the list of messages in the format expected by OpenAI
"""
system_message = "You estimate prices of items. Reply only with the price, no explanation"
user_prompt = self.make_context(similars, prices)
user_prompt += "And now the question for you:\n\n"
user_prompt += "How much does this cost?\n\n" + description
return [
{"role": "system", "content": system_message},
{"role": "user", "content": user_prompt},
{"role": "assistant", "content": "Price is $"}
]
def find_similars(self, description: str):
"""
Return a list of items similar to the given one by looking in the Chroma datastore
"""
self.log("Frontier Agent is performing a RAG search of the Chroma datastore to find 5 similar products")
vector = self.model.encode([description])
results = self.collection.query(query_embeddings=vector.astype(float).tolist(), n_results=5)
documents = results['documents'][0][:]
prices = [m['price'] for m in results['metadatas'][0][:]]
self.log("Frontier Agent has found similar products")
return documents, prices
def get_price(self, s) -> float:
"""
A utility that plucks a floating point number out of a string
"""
s = s.replace('$','').replace(',','')
match = re.search(r"[-+]?\d*\.\d+|\d+", s)
return float(match.group()) if match else 0.0
def price(self, description: str) -> float:
"""
Make a call to OpenAI to estimate the price of the described product,
by looking up 5 similar products and including them in the prompt to give context
:param description: a description of the product
:return: an estimate of the price
"""
documents, prices = self.find_similars(description)
self.log("Frontier Agent is about to call OpenAI with context including 5 similar products")
response = self.openai.chat.completions.create(
model=self.MODEL,
messages=self.messages_for(description, documents, prices),
seed=42,
max_tokens=5
)
reply = response.choices[0].message.content
result = self.get_price(reply)
self.log(f"Frontier Agent completed - predicting ${result:.2f}")
return result
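The `get_price` helper is pure stdlib and easy to exercise on its own; the regex below is the same one used above:

```python
import re

def get_price(s: str) -> float:
    """Pluck the first number out of a model reply such as 'Price is $1,299.99'."""
    s = s.replace('$', '').replace(',', '')
    match = re.search(r"[-+]?\d*\.\d+|\d+", s)
    return float(match.group()) if match else 0.0

print(get_price("Price is $1,299.99"))  # 1299.99
print(get_price("around 45 dollars"))   # 45.0
print(get_price("no idea"))             # 0.0
```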

78
week8/agents/messaging_agent.py

@ -0,0 +1,78 @@
import os
from twilio.rest import Client
from agents.deals import Opportunity
import http.client
import urllib
from agents.agent import Agent
DO_TEXT = False
DO_PUSH = True
class MessagingAgent(Agent):
name = "Messaging Agent"
color = Agent.WHITE
def __init__(self):
"""
Set up this object to either do push notifications via Pushover,
or SMS via Twilio,
whichever is specified in the constants
"""
self.log("Messaging Agent is initializing")
if DO_TEXT:
account_sid = os.getenv('TWILIO_ACCOUNT_SID', 'your-sid-if-not-using-env')
auth_token = os.getenv('TWILIO_AUTH_TOKEN', 'your-auth-if-not-using-env')
self.me_from = os.getenv('TWILIO_FROM', 'your-phone-number-if-not-using-env')
self.me_to = os.getenv('MY_PHONE_NUMBER', 'your-phone-number-if-not-using-env')
self.client = Client(account_sid, auth_token)
self.log("Messaging Agent has initialized Twilio")
if DO_PUSH:
self.pushover_user = os.getenv('PUSHOVER_USER', 'your-pushover-user-if-not-using-env')
self.pushover_token = os.getenv('PUSHOVER_TOKEN', 'your-pushover-token-if-not-using-env')
self.log("Messaging Agent has initialized Pushover")
def message(self, text):
"""
Send an SMS message using the Twilio API
"""
self.log("Messaging Agent is sending a text message")
message = self.client.messages.create(
from_=self.me_from,
body=text,
to=self.me_to
)
def push(self, text):
"""
Send a Push Notification using the Pushover API
"""
self.log("Messaging Agent is sending a push notification")
conn = http.client.HTTPSConnection("api.pushover.net:443")
conn.request("POST", "/1/messages.json",
urllib.parse.urlencode({
"token": self.pushover_token,
"user": self.pushover_user,
"message": text,
"sound": "cashregister"
}), { "Content-type": "application/x-www-form-urlencoded" })
conn.getresponse()
def alert(self, opportunity: Opportunity):
"""
Make an alert about the specified Opportunity
"""
text = f"Deal Alert! Price=${opportunity.deal.price:.2f}, "
text += f"Estimate=${opportunity.estimate:.2f}, "
text += f"Discount=${opportunity.discount:.2f} :"
text += opportunity.deal.product_description[:10]+'... '
text += opportunity.deal.url
if DO_TEXT:
self.message(text)
if DO_PUSH:
self.push(text)
self.log("Messaging Agent has completed")

57
week8/agents/planning_agent.py

@ -0,0 +1,57 @@
from typing import Optional, List
from agents.agent import Agent
from agents.deals import ScrapedDeal, DealSelection, Deal, Opportunity
from agents.scanner_agent import ScannerAgent
from agents.ensemble_agent import EnsembleAgent
from agents.messaging_agent import MessagingAgent
class PlanningAgent(Agent):
name = "Planning Agent"
color = Agent.GREEN
DEAL_THRESHOLD = 50
def __init__(self, collection):
"""
Create instances of the 3 Agents that this planner coordinates across
"""
self.log("Planning Agent is initializing")
self.scanner = ScannerAgent()
self.ensemble = EnsembleAgent(collection)
self.messenger = MessagingAgent()
self.log("Planning Agent is ready")
def run(self, deal: Deal) -> Opportunity:
"""
Run the workflow for a particular deal
:param deal: the deal, summarized from an RSS scrape
:returns: an opportunity including the discount
"""
self.log("Planning Agent is pricing up a potential deal")
estimate = self.ensemble.price(deal.product_description)
discount = estimate - deal.price
self.log(f"Planning Agent has processed a deal with discount ${discount:.2f}")
return Opportunity(deal=deal, estimate=estimate, discount=discount)
def plan(self, memory: List[str] = []) -> Optional[Opportunity]:
"""
Run the full workflow:
1. Use the ScannerAgent to find deals from RSS feeds
2. Use the EnsembleAgent to estimate them
3. Use the MessagingAgent to send a notification of deals
:param memory: a list of Opportunity objects surfaced in the past
:return: an Opportunity if one was surfaced, otherwise None
"""
self.log("Planning Agent is kicking off a run")
selection = self.scanner.scan(memory=memory)
if selection:
opportunities = [self.run(deal) for deal in selection.deals[:5]]
opportunities.sort(key=lambda opp: opp.discount, reverse=True)
best = opportunities[0]
self.log(f"Planning Agent has identified the best deal has discount ${best.discount:.2f}")
if best.discount > self.DEAL_THRESHOLD:
self.messenger.alert(best)
self.log("Planning Agent has completed a run")
return best if best.discount > self.DEAL_THRESHOLD else None
return None
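The selection logic in `plan()` (sort candidates by discount, alert only when the best clears `DEAL_THRESHOLD`) can be sketched in isolation; the deals and discounts here are hypothetical:

```python
DEAL_THRESHOLD = 50

# (description, discount) pairs standing in for priced Opportunity objects
opportunities = [("kettle", 12.50), ("laptop", 83.00), ("mic", 41.75)]
opportunities.sort(key=lambda opp: opp[1], reverse=True)

best = opportunities[0]
result = best if best[1] > DEAL_THRESHOLD else None
print(result)  # ('laptop', 83.0)
```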

37
week8/agents/random_forest_agent.py

@ -0,0 +1,37 @@
# imports
import os
import re
from typing import List
from sentence_transformers import SentenceTransformer
import joblib
from agents.agent import Agent
class RandomForestAgent(Agent):
name = "Random Forest Agent"
color = Agent.MAGENTA
def __init__(self):
"""
Initialize this object by loading in the saved model weights
and the SentenceTransformer vector encoding model
"""
self.log("Random Forest Agent is initializing")
self.vectorizer = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
self.model = joblib.load('random_forest_model.pkl')
self.log("Random Forest Agent is ready")
def price(self, description: str) -> float:
"""
Use a Random Forest model to estimate the price of the described item
:param description: the product to be estimated
:return: the price as a float
"""
self.log("Random Forest Agent is starting a prediction")
vector = self.vectorizer.encode([description])
result = max(0, self.model.predict(vector)[0])
self.log(f"Random Forest Agent completed - predicting ${result:.2f}")
return result

94
week8/agents/scanner_agent.py

@ -0,0 +1,94 @@
import os
import json
from typing import Optional, List
from openai import OpenAI
from agents.deals import ScrapedDeal, DealSelection
from agents.agent import Agent
class ScannerAgent(Agent):
MODEL = "gpt-4o-mini"
SYSTEM_PROMPT = """You identify and summarize the 5 most detailed deals from a list, by selecting deals that have the most detailed, high quality description and the most clear price.
Respond strictly in JSON with no explanation, using this format. You should provide the price as a number derived from the description. If the price of a deal isn't clear, do not include that deal in your response.
Most important is that you respond with the 5 deals that have the most detailed product description with price. It's not important to mention the terms of the deal; most important is a thorough description of the product.
Be careful with products that are described as "$XXX off" or "reduced by $XXX" - this isn't the actual price of the product. Only respond with products when you are highly confident about the price.
{"deals": [
{
"product_description": "Your clearly expressed summary of the product in 4-5 sentences. Details of the item are much more important than why it's a good deal. Avoid mentioning discounts and coupons; focus on the item itself. There should be a paragraph of text for each item you choose.",
"price": 99.99,
"url": "the url as provided"
},
...
]}"""
USER_PROMPT_PREFIX = """Respond with the most promising 5 deals from this list, selecting those which have the most detailed, high quality product description and a clear price that is greater than 0.
Respond strictly in JSON, and only JSON. You should rephrase the description to be a summary of the product itself, not the terms of the deal.
Remember to respond with a paragraph of text in the product_description field for each of the 5 items that you select.
Be careful with products that are described as "$XXX off" or "reduced by $XXX" - this isn't the actual price of the product. Only respond with products when you are highly confident about the price.
Deals:
"""
USER_PROMPT_SUFFIX = "\n\nStrictly respond in JSON and include exactly 5 deals, no more."
name = "Scanner Agent"
color = Agent.CYAN
def __init__(self):
"""
Set up this instance by initializing OpenAI
"""
self.log("Scanner Agent is initializing")
self.openai = OpenAI()
self.log("Scanner Agent is ready")
def fetch_deals(self, memory) -> List[ScrapedDeal]:
"""
Look up deals published on RSS feeds
Return any new deals that are not already in the memory provided
"""
self.log("Scanner Agent is about to fetch deals from RSS feed")
urls = [opp.deal.url for opp in memory]
scraped = ScrapedDeal.fetch()
result = [scrape for scrape in scraped if scrape.url not in urls]
self.log(f"Scanner Agent received {len(result)} deals not already scraped")
return result
def make_user_prompt(self, scraped) -> str:
"""
Create a user prompt for OpenAI based on the scraped deals provided
"""
user_prompt = self.USER_PROMPT_PREFIX
user_prompt += '\n\n'.join([scrape.describe() for scrape in scraped])
user_prompt += self.USER_PROMPT_SUFFIX
return user_prompt
def scan(self, memory: List[str]=[]) -> Optional[DealSelection]:
"""
Call OpenAI to provide a high potential list of deals with good descriptions and prices
Use StructuredOutputs to ensure it conforms to our specifications
:param memory: a list of Opportunity objects already raised; their deal URLs are skipped
:return: a selection of good deals, or None if there aren't any
"""
scraped = self.fetch_deals(memory)
if scraped:
user_prompt = self.make_user_prompt(scraped)
self.log("Scanner Agent is calling OpenAI using Structured Output")
result = self.openai.beta.chat.completions.parse(
model=self.MODEL,
messages=[
{"role": "system", "content": self.SYSTEM_PROMPT},
{"role": "user", "content": user_prompt}
],
response_format=DealSelection
)
result = result.choices[0].message.parsed
result.deals = [deal for deal in result.deals if deal.price>0]
self.log(f"Scanner Agent received {len(result.deals)} selected deals with price>0 from OpenAI")
return result
return None
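The dedup step in `fetch_deals()` (drop anything whose URL has been surfaced before) reduces to a simple comprehension; the deals here are hypothetical dicts standing in for ScrapedDeal objects:

```python
# URLs already surfaced in past runs (drawn from the memory of Opportunities)
seen_urls = {"https://example.com/deal/1"}

scraped = [
    {"title": "Deal A", "url": "https://example.com/deal/1"},
    {"title": "Deal B", "url": "https://example.com/deal/2"},
]

# Keep only deals whose URL has not been seen before
new_deals = [s for s in scraped if s["url"] not in seen_urls]
print([d["title"] for d in new_deals])  # ['Deal B']
```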

29
week8/agents/specialist_agent.py

@ -0,0 +1,29 @@
import modal
from agents.agent import Agent
class SpecialistAgent(Agent):
"""
An Agent that calls our fine-tuned LLM running remotely on Modal
"""
name = "Specialist Agent"
color = Agent.RED
def __init__(self):
"""
Set up this Agent by creating an instance of the modal class
"""
self.log("Specialist Agent is initializing - connecting to modal")
Pricer = modal.Cls.lookup("pricer-service", "Pricer")
self.pricer = Pricer()
self.log("Specialist Agent is ready")
def price(self, description: str) -> float:
"""
Make a remote call to return the estimate of the price of this item
"""
self.log("Specialist Agent is calling remote fine-tuned model")
result = self.pricer.price.remote(description)
self.log(f"Specialist Agent completed - predicting ${result:.2f}")
return result

268
week8/day1.ipynb

@ -0,0 +1,268 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e426cd04-c053-43e8-b505-63cee7956a53",
"metadata": {},
"source": [
"# Welcome to a very busy Week 8 folder\n",
"\n",
"## We have lots to do this week!\n",
"\n",
"We'll move at a faster pace than usual, particularly as you're becoming proficient LLM engineers.\n",
"\n",
"One quick admin thing: I've added a number of packages to the environment.yml file during September. To make sure you have the latest repo with the latest code, it's worth doing this from the `llm_engineering` project folder:\n",
"\n",
"```\n",
"git pull\n",
"conda env update -f environment.yml --prune\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bc0e1c1c-be6a-4395-bbbd-eeafc9330d7e",
"metadata": {},
"outputs": [],
"source": [
"# Just one import to start with!!\n",
"\n",
"import modal"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d240622-8422-4c99-8464-c04d063e4cb6",
"metadata": {},
"outputs": [],
"source": [
"# !modal setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3b133701-f550-44a1-a67f-eb7ccc4769a9",
"metadata": {},
"outputs": [],
"source": [
"from hello import app, hello"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0f3f73ae-1295-49f3-9099-b8b41fc3429b",
"metadata": {},
"outputs": [],
"source": [
"with app.run(show_progress=False):\n",
" reply=hello.local()\n",
"reply"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c1d8c6f9-edc7-4e52-9b3a-c07d7cff1ac7",
"metadata": {},
"outputs": [],
"source": [
"with app.run(show_progress=False):\n",
" reply=hello.remote()\n",
"reply"
]
},
{
"cell_type": "markdown",
"id": "22e8d804-c027-45fb-8fef-06e7bba6295a",
"metadata": {},
"source": [
"# Before we move on -\n",
"\n",
"## We need to set your HuggingFace Token as a secret in Modal\n",
"\n",
"1. Go to modal.com, sign in and go to your dashboard\n",
"2. Click on Secrets in the nav bar\n",
"3. Create new secret, click on Hugging Face\n",
"4. Fill in your HF_TOKEN where it prompts you\n",
"\n",
"### And now back to business: time to work with Llama"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb8b6c41-8259-4329-b1c4-a1f67d26d1be",
"metadata": {},
"outputs": [],
"source": [
"from llama import app, generate"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "db4a718a-d95d-4f61-9688-c9df21d88fe6",
"metadata": {},
"outputs": [],
"source": [
"with modal.enable_output():\n",
" with app.run():\n",
" result=generate.remote(\"Life is a mystery, everyone must stand alone, I hear\")\n",
"result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a9a6844-29ec-4264-8e72-362d976b3968",
"metadata": {},
"outputs": [],
"source": [
"import modal\n",
"from pricer_ephemeral import app, price"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "50e6cf99-8959-4ae3-ba02-e325cb7fff94",
"metadata": {},
"outputs": [],
"source": [
"with modal.enable_output():\n",
" with app.run():\n",
" result=price.remote(\"Quadcast HyperX condenser mic, connects via usb-c to your computer for crystal clear audio\")\n",
"result"
]
},
{
"cell_type": "markdown",
"id": "04d8747f-8452-4077-8af6-27e03888508a",
"metadata": {},
"source": [
"## Transitioning From Ephemeral Apps to Deployed Apps\n",
"\n",
"From a command line, `modal deploy xxx` will deploy your code as a Deployed App\n",
"\n",
"This is how you could package your AI service behind an API to be used in a Production System.\n",
"\n",
"You can also build REST endpoints easily, although we won't cover that as we'll be calling direct from Python."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7f90d857-2f12-4521-bb90-28efd917f7d1",
"metadata": {},
"outputs": [],
"source": [
"!modal deploy pricer_service"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1dec70ff-1986-4405-8624-9bbbe0ce1f4a",
"metadata": {},
"outputs": [],
"source": [
"pricer = modal.Function.lookup(\"pricer-service\", \"price\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "17776139-0d9e-4ad0-bcd0-82d3a92ca61f",
"metadata": {},
"outputs": [],
"source": [
"pricer.remote(\"Quadcast HyperX condenser mic, connects via usb-c to your computer for crystal clear audio\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f56d1e55-2a03-4ce2-bb47-2ab6b9175a02",
"metadata": {},
"outputs": [],
"source": [
"!modal deploy pricer_service2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9e19daeb-1281-484b-9d2f-95cc6fed2622",
"metadata": {},
"outputs": [],
"source": [
"Pricer = modal.Cls.lookup(\"pricer-service\", \"Pricer\")\n",
"pricer = Pricer()\n",
"reply = pricer.price.remote(\"Quadcast HyperX condenser mic, connects via usb-c to your computer for crystal clear audio\")\n",
"print(reply)"
]
},
{
"cell_type": "markdown",
"id": "3754cfdd-ae28-47c8-91f2-6e060e2c91b3",
"metadata": {},
"source": [
"## And now introducing our Agent class"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba9aedca-6a7b-4d30-9f64-59d76f76fb6d",
"metadata": {},
"outputs": [],
"source": [
"from agents.specialist_agent import SpecialistAgent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fe5843e5-e958-4a65-8326-8f5b4686de7f",
"metadata": {},
"outputs": [],
"source": [
"agent = SpecialistAgent()\n",
"agent.price(\"iPad Pro 2nd generation\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f5a3181b-1310-4102-8d7d-52caf4c00538",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

258
week8/day2.0.ipynb

@ -0,0 +1,258 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "046fd8f8-ad14-4c7f-b759-fec52f5b5306",
"metadata": {},
"source": [
"# The Price is Right\n",
"\n",
"Today we build a more complex solution for estimating prices of goods.\n",
"\n",
"1. This notebook: create a RAG database with our 400,000 training items\n",
"2. Day 2.1 notebook: visualize in 2D\n",
"3. Day 2.2 notebook: visualize in 3D\n",
"4. Day 2.3 notebook: build and test a RAG pipeline with GPT-4o-mini\n",
"5. Day 2.4 notebook: (a) bring back our Random Forest pricer (b) Create an Ensemble pricer that allows contributions from all the pricers\n",
"\n",
"Phew! That's a lot to get through in one day!\n",
"\n",
"## PLEASE NOTE:\n",
"\n",
"We already have a very powerful product estimator with our proprietary, fine-tuned LLM. Most people would be very satisfied with that! The main reason we're adding these extra steps is to deepen your expertise with RAG and with Agentic workflows.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "993a2a24-1a58-42be-8034-6d116fb8d786",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import re\n",
"import math\n",
"import json\n",
"from tqdm import tqdm\n",
"import random\n",
"from dotenv import load_dotenv\n",
"from huggingface_hub import login\n",
"import numpy as np\n",
"import pickle\n",
"from sentence_transformers import SentenceTransformer\n",
"from datasets import load_dataset\n",
"import chromadb\n",
"from items import Item\n",
"from sklearn.manifold import TSNE\n",
"import plotly.graph_objects as go"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2359ccc0-dbf2-4b1e-9473-e472b32f548b",
"metadata": {},
"outputs": [],
"source": [
"# environment\n",
"\n",
"load_dotenv()\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n",
"os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')\n",
"DB = \"products_vectorstore\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "645167e6-cf0d-42d2-949f-1089a25a2841",
"metadata": {},
"outputs": [],
"source": [
"# Log in to HuggingFace\n",
"\n",
"hf_token = os.environ['HF_TOKEN']\n",
"login(hf_token, add_to_git_credential=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "688bd995-ec3e-43cd-8179-7fe14b275877",
"metadata": {},
"outputs": [],
"source": [
"# Let's avoid curating all our data again! Load in the pickle files:\n",
"\n",
"with open('train.pkl', 'rb') as file:\n",
" train = pickle.load(file)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2817eaf5-4302-4a18-9148-d1062e3b3dbb",
"metadata": {},
"outputs": [],
"source": [
"train[0].prompt"
]
},
{
"cell_type": "markdown",
"id": "9ae1ba16-7e80-4096-ac88-64ef8edcc80c",
"metadata": {},
"source": [
"# Now create a Chroma Datastore\n",
"\n",
"In Week 5, we created a Chroma datastore with 123 documents representing chunks of objects from our fictional company Insurellm.\n",
"\n",
"Now we will create a Chroma datastore with 400,000 products from our training dataset! It's getting real!\n",
"\n",
"Note that we won't be using LangChain, but the API is very straightforward and consistent with before."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4aab95e-d719-4476-b6e7-e248120df25a",
"metadata": {},
"outputs": [],
"source": [
"client = chromadb.PersistentClient(path=DB)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5f95dafd-ab80-464e-ba8a-dec7a2424780",
"metadata": {},
"outputs": [],
"source": [
"# Check if the collection exists and delete it if it does\n",
"collection_name = \"products\"\n",
"existing_collection_names = [collection.name for collection in client.list_collections()]\n",
"if collection_name in existing_collection_names:\n",
" client.delete_collection(collection_name)\n",
" print(f\"Deleted existing collection: {collection_name}\")\n",
"\n",
"collection = client.create_collection(collection_name)"
]
},
{
"cell_type": "markdown",
"id": "d392ed28-203d-4e73-be87-ac1390bdf722",
"metadata": {},
"source": [
"# Introducing the SentenceTransformer\n",
"\n",
"all-MiniLM is a very useful model from HuggingFace that maps sentences & paragraphs to a 384-dimensional dense vector space and is ideal for tasks like semantic search.\n",
"\n",
"https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2\n",
"\n",
"It can run pretty quickly locally.\n",
"\n",
"Last time we used OpenAI embeddings to produce vector embeddings. Benefits of all-MiniLM compared to OpenAI embeddings:\n",
"1. It's free and fast!\n",
"2. We can run it locally, so the data never leaves our box - might be useful if you're building a personal RAG\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a87db200-d19d-44bf-acbd-15c45c70f5c9",
"metadata": {},
"outputs": [],
"source": [
"model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b23a025-4c35-4d3a-96ad-b956cad37b0a",
"metadata": {},
"outputs": [],
"source": [
"# Pass in a list of texts, get back a numpy array of vectors\n",
"\n",
"vector = model.encode([\"Well hi there\"])[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8adde63f-e732-4f7c-bba9-f8b2a469f14e",
"metadata": {},
"outputs": [],
"source": [
"vector"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "38de1bf8-c9b5-45b4-9f4b-86af93b3f80d",
"metadata": {},
"outputs": [],
"source": [
"def description(item):\n",
" text = item.prompt.replace(\"How much does this cost to the nearest dollar?\\n\\n\", \"\")\n",
" return text.split(\"\\n\\nPrice is $\")[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c1205bd-4692-44ef-8ea4-69f255354537",
"metadata": {},
"outputs": [],
"source": [
"description(train[0])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c79e2fe-1f50-4ebf-9a93-34f3088f2996",
"metadata": {},
"outputs": [],
"source": [
"for i in tqdm(range(0, len(train), 1000)):\n",
" documents = [description(item) for item in train[i: i+1000]]\n",
" vectors = model.encode(documents).astype(float).tolist()\n",
" metadatas = [{\"category\": item.category, \"price\": item.price} for item in train[i: i+1000]]\n",
"    ids = [f\"doc_{j}\" for j in range(i, i+len(documents))] # sized to the chunk, so a final partial batch stays consistent\n",
" collection.add(\n",
" ids=ids,\n",
" documents=documents,\n",
" embeddings=vectors,\n",
" metadatas=metadatas\n",
" )"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
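The ingestion loop in day2.0 above adds records 1,000 at a time, which keeps each `collection.add` call to a manageable batch size. A minimal stdlib-only sketch of that batching pattern, with a hypothetical `add_batch` callback standing in for the real `collection.add`, and with the id range sized to the chunk so a final partial batch stays consistent:

```python
def batched(items, batch_size=1000):
    """Yield (start_index, chunk) pairs covering items in fixed-size slices."""
    for i in range(0, len(items), batch_size):
        yield i, items[i:i + batch_size]

def bulk_insert(items, add_batch, batch_size=1000):
    """Insert items in batches; add_batch stands in for collection.add."""
    for start, chunk in batched(items, batch_size):
        ids = [f"doc_{j}" for j in range(start, start + len(chunk))]
        add_batch(ids=ids, documents=chunk)

# Record each call instead of hitting a real vector store
calls = []
bulk_insert(list(range(2500)), lambda ids, documents: calls.append((ids, documents)))
```

With 2,500 items this makes three calls of 1,000, 1,000, and 500 records, and the ids of the last batch run `doc_2000` through `doc_2499` rather than overrunning the data.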

174
week8/day2.1.ipynb

@ -0,0 +1,174 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b577c1be-f7a4-4549-8d27-30cb35407225",
"metadata": {},
"source": [
"# The Price is Right\n",
"\n",
"Today we build a more complex solution for estimating prices of goods.\n",
"\n",
"1. Day 2.0 notebook: create a RAG database with our 400,000 training data points\n",
"2. Day 2.1 notebook: visualize in 2D\n",
"3. Day 2.2 notebook: visualize in 3D\n",
"4. Day 2.3 notebook: build and test a RAG pipeline with GPT-4o-mini\n",
"5. Day 2.4 notebook: (a) bring back our Random Forest pricer (b) create an Ensemble pricer that allows contributions from all the pricers\n",
"\n",
"Phew! That's a lot to get through in one day!\n",
"\n",
"## PLEASE NOTE:\n",
"\n",
"We already have a very powerful product estimator with our proprietary, fine-tuned LLM. Most people would be very satisfied with that! The main reason we're adding these extra steps is to deepen your expertise with RAG and with Agentic workflows.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "993a2a24-1a58-42be-8034-6d116fb8d786",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import re\n",
"import math\n",
"import json\n",
"from tqdm import tqdm\n",
"import random\n",
"from dotenv import load_dotenv\n",
"from huggingface_hub import login\n",
"import numpy as np\n",
"import pickle\n",
"from sentence_transformers import SentenceTransformer\n",
"from datasets import load_dataset\n",
"import chromadb\n",
"from items import Item\n",
"from sklearn.manifold import TSNE\n",
"import plotly.graph_objects as go"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1cc1fe53-612f-4228-aa02-8758f4c2098f",
"metadata": {},
"outputs": [],
"source": [
"# It is very fun turning this up to 400_000 and seeing the full dataset visualized,\n",
"# but it almost crashes my box every time so do that at your own risk!! 10_000 is safe!\n",
"\n",
"MAXIMUM_DATAPOINTS = 10_000"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4aab95e-d719-4476-b6e7-e248120df25a",
"metadata": {},
"outputs": [],
"source": [
"DB = \"products_vectorstore\"\n",
"client = chromadb.PersistentClient(path=DB)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5f95dafd-ab80-464e-ba8a-dec7a2424780",
"metadata": {},
"outputs": [],
"source": [
"collection = client.get_or_create_collection('products')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "525fc313-8a16-4ac0-8c42-6a6d1ba1c9b8",
"metadata": {},
"outputs": [],
"source": [
"CATEGORIES = ['Appliances', 'Automotive', 'Cell_Phones_and_Accessories', 'Electronics', 'Musical_Instruments', 'Office_Products', 'Tools_and_Home_Improvement', 'Toys_and_Games']\n",
"COLORS = ['red', 'blue', 'brown', 'orange', 'yellow', 'green', 'purple', 'cyan']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a4cf1c9a-1ced-48d4-974c-3c850905034e",
"metadata": {},
"outputs": [],
"source": [
"# Prework\n",
"result = collection.get(include=['embeddings', 'documents', 'metadatas'], limit=MAXIMUM_DATAPOINTS)\n",
"vectors = np.array(result['embeddings'])\n",
"documents = result['documents']\n",
"categories = [metadata['category'] for metadata in result['metadatas']]\n",
"colors = [COLORS[CATEGORIES.index(c)] for c in categories]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c54df150-c8d8-4bc3-8877-6759691eeb42",
"metadata": {},
"outputs": [],
"source": [
"# Let's try a 2D chart\n",
"\n",
"tsne = TSNE(n_components=2, random_state=42, n_jobs=-1)\n",
"reduced_vectors = tsne.fit_transform(vectors)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e8fb2a63-24c5-4dce-9e63-aa208272f82d",
"metadata": {},
"outputs": [],
"source": [
"\n",
"# Create the 2D scatter plot\n",
"fig = go.Figure(data=[go.Scatter(\n",
" x=reduced_vectors[:, 0],\n",
" y=reduced_vectors[:, 1],\n",
" mode='markers',\n",
" marker=dict(size=2, color=colors, opacity=0.7),\n",
")])\n",
"\n",
"fig.update_layout(\n",
" title='2D Chroma Vectorstore Visualization',\n",
" scene=dict(xaxis_title='x', yaxis_title='y'),\n",
" width=1200,\n",
" height=800,\n",
" margin=dict(r=20, b=10, l=10, t=40)\n",
")\n",
"\n",
"fig.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
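A small note on the color lookup in day2.1 above: `COLORS[CATEGORIES.index(c)]` scans the category list once per datapoint. A dict built with `zip` gives the same mapping with O(1) lookups, which matters at tens of thousands of points. A sketch:

```python
CATEGORIES = ['Appliances', 'Automotive', 'Cell_Phones_and_Accessories', 'Electronics',
              'Musical_Instruments', 'Office_Products', 'Tools_and_Home_Improvement', 'Toys_and_Games']
COLORS = ['red', 'blue', 'brown', 'orange', 'yellow', 'green', 'purple', 'cyan']

# One dict build replaces a linear scan per lookup
color_for = dict(zip(CATEGORIES, COLORS))

categories = ['Electronics', 'Appliances', 'Toys_and_Games']
colors = [color_for[c] for c in categories]  # ['orange', 'red', 'cyan']
```

The behavior is identical to the index-based version; only the per-point cost changes.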

174
week8/day2.2.ipynb

@ -0,0 +1,174 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4e60bd8a-a4da-4db9-86a8-ac8c03f3e367",
"metadata": {},
"source": [
"# The Price is Right\n",
"\n",
"Today we build a more complex solution for estimating prices of goods.\n",
"\n",
"1. Day 2.0 notebook: create a RAG database with our 400,000 training data points\n",
"2. Day 2.1 notebook: visualize in 2D\n",
"3. Day 2.2 notebook: visualize in 3D\n",
"4. Day 2.3 notebook: build and test a RAG pipeline with GPT-4o-mini\n",
"5. Day 2.4 notebook: (a) bring back our Random Forest pricer (b) create an Ensemble pricer that allows contributions from all the pricers\n",
"\n",
"Phew! That's a lot to get through in one day!\n",
"\n",
"## PLEASE NOTE:\n",
"\n",
"We already have a very powerful product estimator with our proprietary, fine-tuned LLM. Most people would be very satisfied with that! The main reason we're adding these extra steps is to deepen your expertise with RAG and with Agentic workflows."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "993a2a24-1a58-42be-8034-6d116fb8d786",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import re\n",
"import math\n",
"import json\n",
"from tqdm import tqdm\n",
"import random\n",
"from dotenv import load_dotenv\n",
"from huggingface_hub import login\n",
"import numpy as np\n",
"import pickle\n",
"from sentence_transformers import SentenceTransformer\n",
"from datasets import load_dataset\n",
"import chromadb\n",
"from items import Item\n",
"from sklearn.manifold import TSNE\n",
"import plotly.graph_objects as go"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1cc1fe53-612f-4228-aa02-8758f4c2098f",
"metadata": {},
"outputs": [],
"source": [
"# Turn this up at your own risk! 10_000 is safe!\n",
"\n",
"MAXIMUM_DATAPOINTS = 10_000"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4aab95e-d719-4476-b6e7-e248120df25a",
"metadata": {},
"outputs": [],
"source": [
"DB = \"products_vectorstore\"\n",
"client = chromadb.PersistentClient(path=DB)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5f95dafd-ab80-464e-ba8a-dec7a2424780",
"metadata": {},
"outputs": [],
"source": [
"collection = client.get_or_create_collection('products')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "525fc313-8a16-4ac0-8c42-6a6d1ba1c9b8",
"metadata": {},
"outputs": [],
"source": [
"CATEGORIES = ['Appliances', 'Automotive', 'Cell_Phones_and_Accessories', 'Electronics', 'Musical_Instruments', 'Office_Products', 'Tools_and_Home_Improvement', 'Toys_and_Games']\n",
"COLORS = ['red', 'blue', 'brown', 'orange', 'yellow', 'green', 'purple', 'cyan']"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a4cf1c9a-1ced-48d4-974c-3c850905034e",
"metadata": {},
"outputs": [],
"source": [
"# Prework\n",
"result = collection.get(include=['embeddings', 'documents', 'metadatas'], limit=MAXIMUM_DATAPOINTS)\n",
"vectors = np.array(result['embeddings'])\n",
"documents = result['documents']\n",
"categories = [metadata['category'] for metadata in result['metadatas']]\n",
"colors = [COLORS[CATEGORIES.index(c)] for c in categories]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c54df150-c8d8-4bc3-8877-6759691eeb42",
"metadata": {},
"outputs": [],
"source": [
"# Let's try 3D!\n",
"\n",
"tsne = TSNE(n_components=3, random_state=42, n_jobs=-1)\n",
"reduced_vectors = tsne.fit_transform(vectors)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e8fb2a63-24c5-4dce-9e63-aa208272f82d",
"metadata": {},
"outputs": [],
"source": [
"\n",
"# Create the 3D scatter plot\n",
"fig = go.Figure(data=[go.Scatter3d(\n",
" x=reduced_vectors[:, 0],\n",
" y=reduced_vectors[:, 1],\n",
" z=reduced_vectors[:, 2],\n",
" mode='markers',\n",
" marker=dict(size=3, color=colors, opacity=0.7),\n",
")])\n",
"\n",
"fig.update_layout(\n",
" title='3D Chroma Vector Store Visualization',\n",
" scene=dict(xaxis_title='x', yaxis_title='y', zaxis_title='z'),\n",
" width=1200,\n",
" height=800,\n",
" margin=dict(r=20, b=10, l=10, t=40)\n",
")\n",
"\n",
"fig.show()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
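The t-SNE plots above project the 384-dimensional all-MiniLM vectors down to 2D/3D for inspection; the semantic search in the next notebook relies on a similarity measure over the full vectors. A stdlib-only sketch of cosine similarity, a common measure for sentence embeddings (the vector store computes its own distance metric internally, so this is purely illustrative):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Identical (or parallel) vectors score 1.0, orthogonal vectors score 0.0, which is why nearby points in the t-SNE projection tend to be products with similar descriptions.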

397
week8/day2.3.ipynb

@ -0,0 +1,397 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2a0f44a9-37cd-4aa5-9b20-cfc0dc8dfc0a",
"metadata": {},
"source": [
"# The Price is Right\n",
"\n",
"Today we build a more complex solution for estimating prices of goods.\n",
"\n",
"1. Day 2.0 notebook: create a RAG database with our 400,000 training data points\n",
"2. Day 2.1 notebook: visualize in 2D\n",
"3. Day 2.2 notebook: visualize in 3D\n",
"4. Day 2.3 notebook: build and test a RAG pipeline with GPT-4o-mini\n",
"5. Day 2.4 notebook: (a) bring back our Random Forest pricer (b) create an Ensemble pricer that allows contributions from all the pricers\n",
"\n",
"Phew! That's a lot to get through in one day!\n",
"\n",
"## PLEASE NOTE:\n",
"\n",
"We already have a very powerful product estimator with our proprietary, fine-tuned LLM. Most people would be very satisfied with that! The main reason we're adding these extra steps is to deepen your expertise with RAG and with Agentic workflows.\n",
"\n",
"## We will go fast today! Hold on to your hat.."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fbcdfea8-7241-46d7-a771-c0381a3e7063",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import re\n",
"import math\n",
"import json\n",
"from tqdm import tqdm\n",
"import random\n",
"from dotenv import load_dotenv\n",
"from huggingface_hub import login\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"import pickle\n",
"from openai import OpenAI\n",
"from sentence_transformers import SentenceTransformer\n",
"from datasets import load_dataset\n",
"import chromadb\n",
"from items import Item\n",
"from testing import Tester"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "98666e73-938e-469d-8987-e6e55ba5e034",
"metadata": {},
"outputs": [],
"source": [
"# environment\n",
"\n",
"load_dotenv()\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n",
"os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a25a5cf-8f6c-4b5d-ad98-fdd096f5adf8",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dc696493-0b6f-48aa-9fa8-b1ae0ecaf3cd",
"metadata": {},
"outputs": [],
"source": [
"# Load in the test pickle file:\n",
"\n",
"with open('test.pkl', 'rb') as file:\n",
" test = pickle.load(file)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "33d38a06-0c0d-4e96-94d1-35ee183416ce",
"metadata": {},
"outputs": [],
"source": [
"def make_context(similars, prices):\n",
" message = \"To provide some context, here are some other items that might be similar to the item you need to estimate.\\n\\n\"\n",
" for similar, price in zip(similars, prices):\n",
" message += f\"Potentially related product:\\n{similar}\\nPrice is ${price:.2f}\\n\\n\"\n",
" return message"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "61f203b7-63b6-48ed-869b-e393b5bfcad3",
"metadata": {},
"outputs": [],
"source": [
"def messages_for(item, similars, prices):\n",
" system_message = \"You estimate prices of items. Reply only with the price, no explanation\"\n",
" user_prompt = make_context(similars, prices)\n",
" user_prompt += \"And now the question for you:\\n\\n\"\n",
" user_prompt += item.test_prompt().replace(\" to the nearest dollar\",\"\").replace(\"\\n\\nPrice is $\",\"\")\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_prompt},\n",
" {\"role\": \"assistant\", \"content\": \"Price is $\"}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b26f405d-6e1f-4caa-b97f-1f62cd9d1ebc",
"metadata": {},
"outputs": [],
"source": [
"DB = \"products_vectorstore\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d26a1104-cd11-4361-ab25-85fb576e0582",
"metadata": {},
"outputs": [],
"source": [
"client = chromadb.PersistentClient(path=DB)\n",
"collection = client.get_or_create_collection('products')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e339760-96d8-4485-bec7-43fadcd30c4d",
"metadata": {},
"outputs": [],
"source": [
"def description(item):\n",
" text = item.prompt.replace(\"How much does this cost to the nearest dollar?\\n\\n\", \"\")\n",
" return text.split(\"\\n\\nPrice is $\")[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a1bd0c87-8bad-43d9-9461-bb69a9e0e22c",
"metadata": {},
"outputs": [],
"source": [
"description(test[0])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f759bd2-7a7e-4c1a-80a0-e12470feca89",
"metadata": {},
"outputs": [],
"source": [
"model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e44dbd25-fb95-4b6b-bbbb-8da5fc817105",
"metadata": {},
"outputs": [],
"source": [
"def vector(item):\n",
" return model.encode([description(item)])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ffd5ee47-db5d-4263-b0d9-80d568c91341",
"metadata": {},
"outputs": [],
"source": [
"def find_similars(item):\n",
" results = collection.query(query_embeddings=vector(item).astype(float).tolist(), n_results=5)\n",
" documents = results['documents'][0][:]\n",
" prices = [m['price'] for m in results['metadatas'][0][:]]\n",
" return documents, prices"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6f7b9ff9-fd90-4627-bb17-7c2f7bbd21f3",
"metadata": {},
"outputs": [],
"source": [
"test[1].prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ff1b2659-cc6b-47aa-a797-dd1cd3d1d6c3",
"metadata": {},
"outputs": [],
"source": [
"documents, prices = find_similars(test[1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "24756d4d-edac-41ce-bb80-c3b6f1cea7ee",
"metadata": {},
"outputs": [],
"source": [
"print(make_context(documents, prices))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0b81eca2-0b58-4fe8-9dd6-47f13ba5f8ee",
"metadata": {},
"outputs": [],
"source": [
"print(messages_for(test[1], documents, prices))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d11f1c8d-7480-4d64-a274-b030d701f1b8",
"metadata": {},
"outputs": [],
"source": [
"def get_price(s):\n",
" s = s.replace('$','').replace(',','')\n",
" match = re.search(r\"[-+]?\\d*\\.\\d+|\\d+\", s)\n",
" return float(match.group()) if match else 0"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a919cf7d-b3d3-4968-8c96-54a0da0b0219",
"metadata": {},
"outputs": [],
"source": [
"# The function for gpt-4o-mini\n",
"\n",
"def gpt_4o_mini_rag(item):\n",
" documents, prices = find_similars(item)\n",
" response = openai.chat.completions.create(\n",
" model=\"gpt-4o-mini\", \n",
" messages=messages_for(item, documents, prices),\n",
" seed=42,\n",
" max_tokens=5\n",
" )\n",
" reply = response.choices[0].message.content\n",
" return get_price(reply)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3e519e26-ff15-4425-90bb-bfbf55deb39b",
"metadata": {},
"outputs": [],
"source": [
"gpt_4o_mini_rag(test[1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ce78741b-2966-41d2-9831-cbf8f8d176be",
"metadata": {},
"outputs": [],
"source": [
"test[1].price"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "16d90455-ff7d-4f5f-8b8c-8e061263d1c7",
"metadata": {},
"outputs": [],
"source": [
"Tester.test(gpt_4o_mini_rag, test)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e6d5deb3-6a2a-4484-872c-37176c5e1f07",
"metadata": {},
"outputs": [],
"source": [
"from agents.frontier_agent import FrontierAgent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "56e8dd5d-ed36-49d8-95f7-dc82e548255b",
"metadata": {},
"outputs": [],
"source": [
"agent = FrontierAgent(collection)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "980dd126-f675-4499-8817-0cc0bb73e247",
"metadata": {},
"outputs": [],
"source": [
"agent.price(\"Quadcast HyperX condenser mic for high quality podcasting\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "66c18a06-d0f1-4ec9-8aff-ec3ca294dd09",
"metadata": {},
"outputs": [],
"source": [
"from agents.specialist_agent import SpecialistAgent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba672fb4-2c3e-42ee-9ea0-21bfcfc5260c",
"metadata": {},
"outputs": [],
"source": [
"agent2 = SpecialistAgent()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a5a97004-95b4-46ea-b12d-a4ead22fcb2a",
"metadata": {},
"outputs": [],
"source": [
"agent2.price(\"Quadcast HyperX condenser mic for high quality podcasting\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26d5ddc6-baa6-4760-a430-05671847ac47",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
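The `get_price` helper in day2.3 above does the load-bearing work of turning a model reply into a float, so it is worth exercising on its own. The same function, stdlib-only, with a few representative inputs:

```python
import re

def get_price(s):
    """Extract the first number from a model reply like 'Price is $1,234.56'."""
    s = s.replace('$', '').replace(',', '')
    match = re.search(r"[-+]?\d*\.\d+|\d+", s)
    return float(match.group()) if match else 0

print(get_price("Price is $1,234.56"))  # 1234.56
print(get_price("$99"))                 # 99.0
print(get_price("no price given"))      # 0
```

Note the fallback: a reply with no digits at all yields 0 rather than raising, which keeps a batch evaluation run alive when the model occasionally misbehaves.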

408
week8/day2.4.ipynb

@ -0,0 +1,408 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "40d49349-faaa-420c-9b65-0bdc9edfabce",
"metadata": {},
"source": [
"# The Price is Right\n",
"\n",
"Today we build a more complex solution for estimating prices of goods.\n",
"\n",
"1. Day 2.0 notebook: create a RAG database with our 400,000 training data points\n",
"2. Day 2.1 notebook: visualize in 2D\n",
"3. Day 2.2 notebook: visualize in 3D\n",
"4. Day 2.3 notebook: build and test a RAG pipeline with GPT-4o-mini\n",
"5. Day 2.4 notebook: (a) bring back our Random Forest pricer (b) create an Ensemble pricer that allows contributions from all the pricers\n",
"\n",
"Phew! That's a lot to get through in one day!\n",
"\n",
"## PLEASE NOTE:\n",
"\n",
"We already have a very powerful product estimator with our proprietary, fine-tuned LLM. Most people would be very satisfied with that! The main reason we're adding these extra steps is to deepen your expertise with RAG and with Agentic workflows.\n",
"\n",
"## Finishing off with Random Forests & Ensemble"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fbcdfea8-7241-46d7-a771-c0381a3e7063",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import re\n",
"import math\n",
"import json\n",
"from tqdm import tqdm\n",
"import random\n",
"from dotenv import load_dotenv\n",
"from huggingface_hub import login\n",
"import numpy as np\n",
"import pickle\n",
"from openai import OpenAI\n",
"from sentence_transformers import SentenceTransformer\n",
"from datasets import load_dataset\n",
"import chromadb\n",
"from items import Item\n",
"from testing import Tester\n",
"import pandas as pd\n",
"from sklearn.ensemble import RandomForestRegressor\n",
"from sklearn.linear_model import LinearRegression\n",
"from sklearn.metrics import mean_squared_error, r2_score\n",
"import joblib\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e6e88bd1-f89c-4b98-92fa-aa4bc1575bca",
"metadata": {},
"outputs": [],
"source": [
"# CONSTANTS\n",
"\n",
"QUESTION = \"How much does this cost to the nearest dollar?\\n\\n\"\n",
"DB = \"products_vectorstore\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "98666e73-938e-469d-8987-e6e55ba5e034",
"metadata": {},
"outputs": [],
"source": [
"# environment\n",
"\n",
"load_dotenv()\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n",
"os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dc696493-0b6f-48aa-9fa8-b1ae0ecaf3cd",
"metadata": {},
"outputs": [],
"source": [
"# Load in the test pickle file:\n",
"\n",
"with open('test.pkl', 'rb') as file:\n",
" test = pickle.load(file)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d26a1104-cd11-4361-ab25-85fb576e0582",
"metadata": {},
"outputs": [],
"source": [
"client = chromadb.PersistentClient(path=DB)\n",
"collection = client.get_or_create_collection('products')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e00b82a9-a8dc-46f1-8ea9-2f07cbc8e60d",
"metadata": {},
"outputs": [],
"source": [
"result = collection.get(include=['embeddings', 'documents', 'metadatas'])\n",
"vectors = np.array(result['embeddings'])\n",
"documents = result['documents']\n",
"prices = [metadata['price'] for metadata in result['metadatas']]"
]
},
{
"cell_type": "markdown",
"id": "bf6492cb-b11a-4ad5-859b-a71a78ffb949",
"metadata": {},
"source": [
"# Random Forest\n",
"\n",
"We will now train a Random Forest model.\n",
"\n",
"Can you spot the difference from what we did in Week 6? In Week 6 we used the word2vec model to form vectors; this time we'll use the vectors we already have in Chroma, from the SentenceTransformer model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "48894777-101f-4fe5-998c-47079407f340",
"metadata": {},
"outputs": [],
"source": [
"# This next line takes an hour on my M1 Mac!\n",
"\n",
"rf_model = RandomForestRegressor(n_estimators=100, random_state=42, n_jobs=-1)\n",
"rf_model.fit(vectors, prices)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62eb7ddf-e1da-481e-84c6-1256547566bd",
"metadata": {},
"outputs": [],
"source": [
"# Save the model to a file\n",
"\n",
"joblib.dump(rf_model, 'random_forest_model.pkl')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d281dc5e-761e-4a5e-86b3-29d9c0a33d4a",
"metadata": {},
"outputs": [],
"source": [
"# Load it back in again\n",
"\n",
"rf_model = joblib.load('random_forest_model.pkl')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5d438dec-8e5b-4e60-bb6f-c3f82e522dd9",
"metadata": {},
"outputs": [],
"source": [
"from agents.specialist_agent import SpecialistAgent\n",
"from agents.frontier_agent import FrontierAgent\n",
"from agents.random_forest_agent import RandomForestAgent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "afc39369-b97b-4a90-b17e-b20ef501d3c9",
"metadata": {},
"outputs": [],
"source": [
"specialist = SpecialistAgent()\n",
"frontier = FrontierAgent(collection)\n",
"random_forest = RandomForestAgent()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e2d0d0a-8bb8-4b39-b046-322828c39244",
"metadata": {},
"outputs": [],
"source": [
"def description(item):\n",
" return item.prompt.split(\"to the nearest dollar?\\n\\n\")[1].split(\"\\n\\nPrice is $\")[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bfe0434f-b29e-4cc0-bad9-b07624665727",
"metadata": {},
"outputs": [],
"source": [
"def rf(item):\n",
" return random_forest.price(description(item))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cdf233ec-264f-4b34-9f2b-27c39692137b",
"metadata": {},
"outputs": [],
"source": [
"Tester.test(rf, test)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f759bd2-7a7e-4c1a-80a0-e12470feca89",
"metadata": {},
"outputs": [],
"source": [
"product = \"Quadcast HyperX condenser mic for high quality audio for podcasting\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e44dbd25-fb95-4b6b-bbbb-8da5fc817105",
"metadata": {},
"outputs": [],
"source": [
"print(specialist.price(product))\n",
"print(frontier.price(product))\n",
"print(random_forest.price(product))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1779b353-e2bb-4fc7-be7c-93057e4d688a",
"metadata": {},
"outputs": [],
"source": [
"specialists = []\n",
"frontiers = []\n",
"random_forests = []\n",
"prices = []\n",
"for item in tqdm(test[1000:1250]):\n",
" text = description(item)\n",
" specialists.append(specialist.price(text))\n",
" frontiers.append(frontier.price(text))\n",
" random_forests.append(random_forest.price(text))\n",
" prices.append(item.price)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0bca725-4e34-405b-8d90-41d67086a25d",
"metadata": {},
"outputs": [],
"source": [
"mins = [min(s,f,r) for s,f,r in zip(specialists, frontiers, random_forests)]\n",
"maxes = [max(s,f,r) for s,f,r in zip(specialists, frontiers, random_forests)]\n",
"\n",
"X = pd.DataFrame({\n",
" 'Specialist': specialists,\n",
" 'Frontier': frontiers,\n",
" 'RandomForest': random_forests,\n",
" 'Min': mins,\n",
" 'Max': maxes,\n",
"})\n",
"\n",
"# Convert y to a Series\n",
"y = pd.Series(prices)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1be5be8a-3e7f-42a2-be54-0c7e380f7cc4",
"metadata": {},
"outputs": [],
"source": [
"# Train a Linear Regression\n",
"np.random.seed(42)\n",
"\n",
"lr = LinearRegression()\n",
"lr.fit(X, y)\n",
"\n",
"feature_columns = X.columns.tolist()\n",
"\n",
"for feature, coef in zip(feature_columns, lr.coef_):\n",
" print(f\"{feature}: {coef:.2f}\")\n",
"print(f\"Intercept={lr.intercept_:.2f}\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0bdf6e68-28a3-4ed2-b17e-de0ede923d34",
"metadata": {},
"outputs": [],
"source": [
"joblib.dump(lr, 'ensemble_model.pkl')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e762441a-9470-4dd7-8a8f-ec0430e908c7",
"metadata": {},
"outputs": [],
"source": [
"from agents.ensemble_agent import EnsembleAgent\n",
"ensemble = EnsembleAgent(collection)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1a29f03c-8010-43b7-ae7d-1bc85ca6e8e2",
"metadata": {},
"outputs": [],
"source": [
"ensemble.price(product)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e6a5e226-a508-43d5-aa42-cefbde72ffdf",
"metadata": {},
"outputs": [],
"source": [
"def ensemble_pricer(item):\n",
" return ensemble.price(description(item))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8397b1ef-2ea3-4af8-bb34-36594e0600cc",
"metadata": {},
"outputs": [],
"source": [
"Tester.test(ensemble_pricer, test)"
]
},
{
"cell_type": "markdown",
"id": "347c5350-d4b5-42ae-96f6-ec94f6ab41d7",
"metadata": {},
"source": [
"# WHAT A DAY!\n",
"\n",
"We got so much done - a Frontier RAG pipeline, a Random Forest model using transformer-based encodings, and an Ensemble model.\n",
"\n",
"You can do better, for sure!\n",
"\n",
"Tweak this, and try adding components into the ensemble, to beat my performance."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "85009065-851e-44a2-b39f-4c116f7fbd22",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
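The ensemble in day2.4 above is simply a linear combination of the three pricers plus their min and max. A stdlib-only sketch of the prediction step, with made-up coefficients (the real ones come from the `LinearRegression` fit in the notebook and will differ):

```python
def ensemble_features(specialist, frontier, forest):
    """Feature vector matching the columns used to fit the ensemble."""
    return {
        'Specialist': specialist,
        'Frontier': frontier,
        'RandomForest': forest,
        'Min': min(specialist, frontier, forest),
        'Max': max(specialist, frontier, forest),
    }

def ensemble_predict(features, coefs, intercept):
    """Linear model: intercept plus the sum of coefficient * feature."""
    return intercept + sum(coefs[name] * value for name, value in features.items())

# Illustrative coefficients only -- not the fitted values
coefs = {'Specialist': 0.6, 'Frontier': 0.3, 'RandomForest': 0.1, 'Min': 0.0, 'Max': 0.0}
features = ensemble_features(100.0, 120.0, 110.0)
price = ensemble_predict(features, coefs, intercept=2.0)  # 109.0
```

Because it is just a weighted sum, the printed coefficients in the notebook tell you directly how much each pricer contributes to the final estimate.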

235
week8/day3.ipynb

@ -0,0 +1,235 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0df0d850-49eb-4a0b-a27a-146969db710d",
"metadata": {},
"source": [
"# The Price is Right\n",
"\n",
"Today we'll build another piece of the puzzle: a ScanningAgent that looks for promising deals by subscribing to RSS feeds."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d3763a79-8a5a-4300-8de4-93e85475af10",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import json\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"from agents.deals import ScrapedDeal, DealSelection"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c6469e32-16c3-4443-9475-ade710ef6933",
"metadata": {},
"outputs": [],
"source": [
"# Initialize and constants\n",
"\n",
"load_dotenv()\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n",
"MODEL = 'gpt-4o-mini'\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "afece9db-8cd4-46be-ac57-0b472e84da7d",
"metadata": {},
"outputs": [],
"source": [
"deals = ScrapedDeal.fetch(show_progress=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8cd15c4d-eb44-4601-bf0c-f945c1d8e3ec",
"metadata": {},
"outputs": [],
"source": [
"len(deals)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4259f30a-6455-49ed-8863-2f9ddd4776cb",
"metadata": {},
"outputs": [],
"source": [
"deals[44].describe()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8100e5ac-38f5-40c1-a712-08ae12c85038",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"\"\"You identify and summarize the 5 most detailed deals from a list, by selecting deals that have the most detailed, high quality description and the most clear price.\n",
"Respond strictly in JSON with no explanation, using this format. You should provide the price as a number derived from the description. If the price of a deal isn't clear, do not include that deal in your response.\n",
"Most important is that you respond with the 5 deals that have the most detailed product description with price. It's not important to mention the terms of the deal; most important is a thorough description of the product.\n",
"Be careful with products that are described as \"$XXX off\" or \"reduced by $XXX\" - this isn't the actual price of the product. Only respond with products when you are highly confident about the price. \n",
"\n",
"{\"deals\": [\n",
" {\n",
"    \"product_description\": \"Your clearly expressed summary of the product in 4-5 sentences. Details of the item are much more important than why it's a good deal. Avoid mentioning discounts and coupons; focus on the item itself. There should be a paragraph of text for each item you choose.\",\n",
" \"price\": 99.99,\n",
" \"url\": \"the url as provided\"\n",
" },\n",
" ...\n",
"]}\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4bca170-af71-40c9-9597-1d72980c74d8",
"metadata": {},
"outputs": [],
"source": [
"user_prompt = \"\"\"Respond with the most promising 5 deals from this list, selecting those which have the most detailed, high quality product description and a clear price.\n",
"Respond strictly in JSON, and only JSON. You should rephrase the description to be a summary of the product itself, not the terms of the deal.\n",
"Remember to respond with a paragraph of text in the product_description field for each of the 5 items that you select.\n",
"Be careful with products that are described as \"$XXX off\" or \"reduced by $XXX\" - this isn't the actual price of the product. Only respond with products when you are highly confident about the price. \n",
"\n",
"Deals:\n",
"\n",
"\"\"\"\n",
"user_prompt += '\\n\\n'.join([deal.describe() for deal in deals])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "020947a6-561b-417b-98a0-a085e31d2ce3",
"metadata": {},
"outputs": [],
"source": [
"print(user_prompt[:2000])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7de46f74-868c-4127-8a68-cf2da7d600bb",
"metadata": {},
"outputs": [],
"source": [
"def get_recommendations():\n",
" completion = openai.beta.chat.completions.parse(\n",
"        model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ],\n",
" response_format=DealSelection\n",
" )\n",
" result = completion.choices[0].message.parsed\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4c06270d-8c17-4d5a-9cfe-b6cefe788d5e",
"metadata": {},
"outputs": [],
"source": [
"result = get_recommendations()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "84e62845-3338-441a-8161-c70097af4773",
"metadata": {},
"outputs": [],
"source": [
"len(result.deals)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5554a0a-ae40-4684-ad3e-faa3d22e030c",
"metadata": {},
"outputs": [],
"source": [
"result.deals[1]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8bdc57fb-7497-47af-a643-6ba5a21cc17e",
"metadata": {},
"outputs": [],
"source": [
"from agents.scanner_agent import ScannerAgent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "132278bc-217a-43a6-b6c4-724140c6a225",
"metadata": {},
"outputs": [],
"source": [
"agent = ScannerAgent()\n",
"result = agent.scan()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2e1d013a-c930-4dad-901b-41433379e14b",
"metadata": {},
"outputs": [],
"source": [
"result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5ee2e837-1f1d-42d4-8bc4-51cccc343006",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
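The `DealSelection` response format in day3 relies on OpenAI's structured-outputs helper, which validates the model's JSON against a Pydantic schema. A minimal offline sketch of that validation step, assuming pydantic v2 (the sample payload is illustrative; `model_validate_json` is the same parsing the `parse()` helper ultimately performs):

```python
from pydantic import BaseModel


class Deal(BaseModel):
    product_description: str
    price: float
    url: str


class DealSelection(BaseModel):
    deals: list[Deal]


# A payload shaped like the JSON the system prompt asks for
raw = '{"deals": [{"product_description": "A 15.6-inch gaming laptop.", "price": 560.0, "url": "https://example.com/deal"}]}'
selection = DealSelection.model_validate_json(raw)
print(selection.deals[0].price)  # → 560.0
```

Because the schema is enforced, a missing `price` or a non-numeric value raises a validation error instead of silently producing a malformed deal.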

141
week8/day4.ipynb

@ -0,0 +1,141 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "23f53670-1a73-46ba-a754-4a497e8e0e64",
"metadata": {},
"source": [
"# The Price is Right\n",
"\n",
"First we'll polish off 2 more simple agents:\n",
"\n",
"The **Messaging Agent** to send push notifications\n",
"\n",
"The **Planning Agent** to coordinate activities\n",
"\n",
"Then we'll put it all together into an Agent Framework.\n",
"\n",
"For the Push Notification, we will be using a nifty platform called Pushover. \n",
"You'll need to set up a free account and add 2 tokens to your `.env` file:\n",
"\n",
"```\n",
"PUSHOVER_USER=xxx\n",
"PUSHOVER_TOKEN=xxx\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "80d683d9-9e92-44ae-af87-a413ca84db21",
"metadata": {},
"outputs": [],
"source": [
"from dotenv import load_dotenv\n",
"from agents.messaging_agent import MessagingAgent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5ba769cc-5301-4810-b01f-cab584cfb3b3",
"metadata": {},
"outputs": [],
"source": [
"load_dotenv()\n",
"DB = \"products_vectorstore\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e05cc427-3d2c-4792-ade1-d356f95a82a9",
"metadata": {},
"outputs": [],
"source": [
"agent = MessagingAgent()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5ec518f5-dae4-44b1-a185-d7eaf853ec00",
"metadata": {},
"outputs": [],
"source": [
"agent.push(\"MASSIVE NEWS!!!\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0056a02f-06a3-4acc-99f3-cbe919ee936b",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "57b3a014-0b15-425a-a29b-6fefc5006dee",
"metadata": {},
"outputs": [],
"source": [
"import chromadb\n",
"DB = \"products_vectorstore\"\n",
"client = chromadb.PersistentClient(path=DB)\n",
"collection = client.get_or_create_collection('products')\n",
"from agents.planning_agent import PlanningAgent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a5c31c39-e357-446e-9cec-b4775c298941",
"metadata": {},
"outputs": [],
"source": [
"planner = PlanningAgent(collection)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9ac771b-ea12-41c0-a7ce-05f12e27ad9e",
"metadata": {},
"outputs": [],
"source": [
"planner.plan()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8dd94a70-3202-452b-9ef0-551d6feb159b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
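The `MessagingAgent.push` call in day4 goes to Pushover's REST API, authenticated with the two tokens from `.env`. A hedged, stdlib-only sketch of what such a push looks like (the helper name `build_payload` is illustrative, not part of the course code):

```python
import os
import urllib.parse
import urllib.request

PUSHOVER_URL = "https://api.pushover.net/1/messages.json"


def build_payload(token: str, user: str, message: str) -> dict:
    # Pushover's messages endpoint expects these three fields
    return {"token": token, "user": user, "message": message}


def push(message: str) -> None:
    # POST the form-encoded payload to Pushover
    payload = build_payload(os.environ["PUSHOVER_TOKEN"], os.environ["PUSHOVER_USER"], message)
    data = urllib.parse.urlencode(payload).encode()
    urllib.request.urlopen(urllib.request.Request(PUSHOVER_URL, data=data))
```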

163
week8/day5.ipynb

@ -0,0 +1,163 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a71ed017-e1b0-4299-88b3-f0eb05adc4df",
"metadata": {},
"source": [
"# The Price is Right\n",
"\n",
"The final step is to build a User Interface.\n",
"\n",
"We will use more advanced aspects of Gradio - building piece by piece."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "614c6202-4575-448d-98ee-78b735775d2b",
"metadata": {},
"outputs": [],
"source": [
"import gradio as gr\n",
"from deal_agent_framework import DealAgentFramework\n",
"from agents.deals import Opportunity, Deal"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0534e714-5a9c-45c6-998c-3472ac0bb8b5",
"metadata": {},
"outputs": [],
"source": [
"with gr.Blocks(title=\"The Price is Right\", fill_width=True) as ui:\n",
"\n",
" with gr.Row():\n",
" gr.Markdown('<div style=\"text-align: center;font-size:24px\">The Price is Right - Deal Hunting Agentic AI</div>')\n",
" with gr.Row():\n",
" gr.Markdown('<div style=\"text-align: center;font-size:14px\">Autonomous agent framework that finds online deals, collaborating with a proprietary fine-tuned LLM deployed on Modal, and a RAG pipeline with a frontier model and Chroma.</div>')\n",
" \n",
"\n",
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18c12c10-750c-4da3-8df5-f2bc3393f9e0",
"metadata": {},
"outputs": [],
"source": [
"with gr.Blocks(title=\"The Price is Right\", fill_width=True) as ui:\n",
"\n",
" initial_deal = Deal(product_description=\"Example description\", price=100.0, url=\"https://cnn.com\")\n",
" initial_opportunity = Opportunity(deal=initial_deal, estimate=200.0, discount=100.0)\n",
" opportunities = gr.State([initial_opportunity])\n",
"\n",
" def get_table(opps):\n",
" return [[opp.deal.product_description, opp.deal.price, opp.estimate, opp.discount, opp.deal.url] for opp in opps]\n",
"\n",
" with gr.Row():\n",
" gr.Markdown('<div style=\"text-align: center;font-size:24px\">\"The Price is Right\" - Deal Hunting Agentic AI</div>')\n",
" with gr.Row():\n",
" gr.Markdown('<div style=\"text-align: center;font-size:14px\">Deals surfaced so far:</div>')\n",
" with gr.Row():\n",
" opportunities_dataframe = gr.Dataframe(\n",
" headers=[\"Description\", \"Price\", \"Estimate\", \"Discount\", \"URL\"],\n",
" wrap=True,\n",
" column_widths=[4, 1, 1, 1, 2],\n",
" row_count=10,\n",
" col_count=5,\n",
" height=400,\n",
" )\n",
"\n",
" ui.load(get_table, inputs=[opportunities], outputs=[opportunities_dataframe])\n",
"\n",
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "87106328-a17a-447e-90b9-c547613468da",
"metadata": {},
"outputs": [],
"source": [
"agent_framework = DealAgentFramework()\n",
"\n",
"with gr.Blocks(title=\"The Price is Right\", fill_width=True) as ui:\n",
"\n",
" initial_deal = Deal(product_description=\"Example description\", price=100.0, url=\"https://cnn.com\")\n",
" initial_opportunity = Opportunity(deal=initial_deal, estimate=200.0, discount=100.0)\n",
" opportunities = gr.State([initial_opportunity])\n",
"\n",
" def get_table(opps):\n",
" return [[opp.deal.product_description, opp.deal.price, opp.estimate, opp.discount, opp.deal.url] for opp in opps]\n",
"\n",
" def do_select(opportunities, selected_index: gr.SelectData):\n",
" row = selected_index.index[0]\n",
" opportunity = opportunities[row]\n",
" agent_framework.planner.messenger.alert(opportunity)\n",
"\n",
" with gr.Row():\n",
" gr.Markdown('<div style=\"text-align: center;font-size:24px\">\"The Price is Right\" - Deal Hunting Agentic AI</div>')\n",
" with gr.Row():\n",
" gr.Markdown('<div style=\"text-align: center;font-size:14px\">Deals surfaced so far:</div>')\n",
" with gr.Row():\n",
" opportunities_dataframe = gr.Dataframe(\n",
" headers=[\"Description\", \"Price\", \"Estimate\", \"Discount\", \"URL\"],\n",
" wrap=True,\n",
" column_widths=[4, 1, 1, 1, 2],\n",
" row_count=10,\n",
" col_count=5,\n",
" height=400,\n",
" )\n",
"\n",
" ui.load(get_table, inputs=[opportunities], outputs=[opportunities_dataframe])\n",
" opportunities_dataframe.select(do_select, inputs=[opportunities], outputs=[])\n",
"\n",
"ui.launch(inbrowser=True)"
]
},
{
"cell_type": "markdown",
"id": "ecfed67b-ebcb-4e17-ad15-a7151f940119",
"metadata": {},
"source": [
"# Time for the code\n",
"\n",
"And now we'll move to the price_is_right.py code, followed by price_is_right_final.py"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "48506465-1c7a-433f-a665-b277a8b4665c",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
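The `get_table` function in day5 flattens each `Opportunity` into one row matching the Dataframe headers. A self-contained sketch of that mapping, using stand-in dataclasses rather than the pydantic models from `agents.deals`:

```python
from dataclasses import dataclass


@dataclass
class Deal:
    product_description: str
    price: float
    url: str


@dataclass
class Opportunity:
    deal: Deal
    estimate: float
    discount: float


def get_table(opps):
    # One row per opportunity: Description, Price, Estimate, Discount, URL
    return [[o.deal.product_description, o.deal.price, o.estimate, o.discount, o.deal.url] for o in opps]


rows = get_table([Opportunity(Deal("Example description", 100.0, "https://cnn.com"), 200.0, 100.0)])
```

The `do_select` handler then recovers the clicked row via `selected_index.index[0]` and indexes back into the same list held in `gr.State`.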

94
week8/deal_agent_framework.py

@ -0,0 +1,94 @@
import os
import sys
import logging
import json
from typing import List, Optional
from dotenv import load_dotenv
import chromadb
from agents.planning_agent import PlanningAgent
from agents.deals import Opportunity
from sklearn.manifold import TSNE
import numpy as np
# Colors for logging
BG_BLUE = '\033[44m'
WHITE = '\033[37m'
RESET = '\033[0m'
# Colors for plot
CATEGORIES = ['Appliances', 'Automotive', 'Cell_Phones_and_Accessories', 'Electronics','Musical_Instruments', 'Office_Products', 'Tools_and_Home_Improvement', 'Toys_and_Games']
COLORS = ['red', 'blue', 'brown', 'orange', 'yellow', 'green' , 'purple', 'cyan']
def init_logging():
root = logging.getLogger()
root.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.INFO)
formatter = logging.Formatter(
"[%(asctime)s] [Agents] [%(levelname)s] %(message)s",
datefmt="%Y-%m-%d %H:%M:%S %z",
)
handler.setFormatter(formatter)
root.addHandler(handler)
class DealAgentFramework:
DB = "products_vectorstore"
MEMORY_FILENAME = "memory.json"
def __init__(self):
init_logging()
self.log("Initializing Agent Framework")
load_dotenv()
client = chromadb.PersistentClient(path=self.DB)
self.memory = self.read_memory()
self.collection = client.get_or_create_collection('products')
self.planner = PlanningAgent(self.collection)
self.log("Agent Framework is ready")
def read_memory(self) -> List[Opportunity]:
if os.path.exists(self.MEMORY_FILENAME):
with open(self.MEMORY_FILENAME, "r") as file:
data = json.load(file)
opportunities = [Opportunity(**item) for item in data]
return opportunities
return []
def write_memory(self) -> None:
data = [opportunity.dict() for opportunity in self.memory]
with open(self.MEMORY_FILENAME, "w") as file:
json.dump(data, file, indent=2)
def log(self, message: str):
text = BG_BLUE + WHITE + "[Agent Framework] " + message + RESET
logging.info(text)
def run(self) -> Optional[Opportunity]:
logging.info("Kicking off Planning Agent")
result = self.planner.plan(memory=self.memory)
logging.info(f"Planning Agent has completed and returned: {result}")
if result:
self.memory.append(result)
self.write_memory()
return result
@classmethod
def get_plot_data(cls, max_datapoints=10000):
client = chromadb.PersistentClient(path=cls.DB)
collection = client.get_or_create_collection('products')
result = collection.get(include=['embeddings', 'documents', 'metadatas'], limit=max_datapoints)
vectors = np.array(result['embeddings'])
documents = result['documents']
categories = [metadata['category'] for metadata in result['metadatas']]
colors = [COLORS[CATEGORIES.index(c)] for c in categories]
tsne = TSNE(n_components=3, random_state=42, n_jobs=-1)
reduced_vectors = tsne.fit_transform(vectors)
return documents, reduced_vectors, colors
if __name__=="__main__":
DealAgentFramework().run()
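`read_memory` and `write_memory` above persist surfaced opportunities as a JSON list so the framework skips deals it has already seen. A minimal round-trip sketch of the same pattern, using plain dicts in a temporary directory rather than `Opportunity` objects:

```python
import json
import os
import tempfile


def write_memory(path: str, opportunities: list) -> None:
    # Serialize the list of opportunity dicts to disk
    with open(path, "w") as file:
        json.dump(opportunities, file, indent=2)


def read_memory(path: str) -> list:
    # Return the stored list, or an empty list on first run
    if os.path.exists(path):
        with open(path, "r") as file:
            return json.load(file)
    return []


memory_path = os.path.join(tempfile.mkdtemp(), "memory.json")
print(read_memory(memory_path))  # → [] (nothing persisted yet)
write_memory(memory_path, [{"estimate": 200.0, "discount": 100.0}])
```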

18
week8/hello.py

@ -0,0 +1,18 @@
import modal
from modal import App, Image
# Setup
app = modal.App("hello")
image = Image.debian_slim().pip_install("requests")
# Hello!
@app.function(image=image)
def hello() -> str:
import requests
response = requests.get('https://ipinfo.io/json')
data = response.json()
city, region, country = data['city'], data['region'], data['country']
return f"Hello from {city}, {region}, {country}!!"
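The remote function above builds its greeting from ipinfo's JSON response. The string-formatting step can be sketched and exercised offline without Modal or the network (the sample payload below is illustrative):

```python
import json


def format_greeting(payload: str) -> str:
    # Pull city/region/country out of an ipinfo-style JSON payload
    data = json.loads(payload)
    return f"Hello from {data['city']}, {data['region']}, {data['country']}!!"


sample = '{"city": "New York", "region": "New York", "country": "US"}'
print(format_greeting(sample))  # → Hello from New York, New York, US!!
```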

101
week8/items.py

@ -0,0 +1,101 @@
from typing import Optional
from transformers import AutoTokenizer
import re
BASE_MODEL = "meta-llama/Meta-Llama-3.1-8B"
MIN_TOKENS = 150
MAX_TOKENS = 160
MIN_CHARS = 300
CEILING_CHARS = MAX_TOKENS * 7
class Item:
"""
An Item is a cleaned, curated datapoint of a Product with a Price
"""
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)
PREFIX = "Price is $"
QUESTION = "How much does this cost to the nearest dollar?"
REMOVALS = ['"Batteries Included?": "No"', '"Batteries Included?": "Yes"', '"Batteries Required?": "No"', '"Batteries Required?": "Yes"', "By Manufacturer", "Item", "Date First", "Package", ":", "Number of", "Best Sellers", "Number", "Product "]
title: str
price: float
category: str
token_count: int = 0
details: Optional[str]
prompt: Optional[str] = None
include = False
def __init__(self, data, price):
self.title = data['title']
self.price = price
self.parse(data)
def scrub_details(self):
"""
Clean up the details string by removing common text that doesn't add value
"""
details = self.details
for remove in self.REMOVALS:
details = details.replace(remove, "")
return details
def scrub(self, stuff):
"""
Clean up the provided text by removing unnecessary characters and whitespace
Also remove words that are 7+ chars and contain numbers, as these are likely irrelevant product numbers
"""
stuff = re.sub(r'[:\[\]"{}【】\s]+', ' ', stuff).strip()
stuff = stuff.replace(" ,", ",").replace(",,,",",").replace(",,",",")
words = stuff.split(' ')
select = [word for word in words if len(word)<7 or not any(char.isdigit() for char in word)]
return " ".join(select)
def parse(self, data):
"""
Parse this datapoint and if it fits within the allowed Token range,
then set include to True
"""
contents = '\n'.join(data['description'])
if contents:
contents += '\n'
features = '\n'.join(data['features'])
if features:
contents += features + '\n'
self.details = data['details']
if self.details:
contents += self.scrub_details() + '\n'
if len(contents) > MIN_CHARS:
contents = contents[:CEILING_CHARS]
text = f"{self.scrub(self.title)}\n{self.scrub(contents)}"
tokens = self.tokenizer.encode(text, add_special_tokens=False)
if len(tokens) > MIN_TOKENS:
tokens = tokens[:MAX_TOKENS]
text = self.tokenizer.decode(tokens)
self.make_prompt(text)
self.include = True
def make_prompt(self, text):
"""
Set the prompt instance variable to be a prompt appropriate for training
"""
self.prompt = f"{self.QUESTION}\n\n{text}\n\n"
self.prompt += f"{self.PREFIX}{str(round(self.price))}.00"
self.token_count = len(self.tokenizer.encode(self.prompt, add_special_tokens=False))
def test_prompt(self):
"""
Return a prompt suitable for testing, with the actual price removed
"""
return self.prompt.split(self.PREFIX)[0] + self.PREFIX
def __repr__(self):
"""
Return a String version of this Item
"""
return f"<{self.title} = ${self.price}>"
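`scrub` above collapses punctuation and drops words of 7+ characters that contain digits, since these are usually part numbers that cost tokens without adding pricing signal. A self-contained copy of that cleaning step for experimentation:

```python
import re


def scrub(stuff: str) -> str:
    # Collapse punctuation and whitespace runs to single spaces
    stuff = re.sub(r'[:\[\]"{}【】\s]+', ' ', stuff).strip()
    stuff = stuff.replace(" ,", ",").replace(",,,", ",").replace(",,", ",")
    # Drop 7+ character words containing digits (likely product numbers)
    words = stuff.split(' ')
    select = [word for word in words if len(word) < 7 or not any(char.isdigit() for char in word)]
    return " ".join(select)


print(scrub('Widget "Model" ABC12345 blue'))  # → Widget Model blue
```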

44
week8/llama.py

@ -0,0 +1,44 @@
import modal
from modal import App, Volume, Image
# Setup
app = modal.App("llama")
image = Image.debian_slim().pip_install("torch", "transformers", "bitsandbytes", "accelerate")
secrets = [modal.Secret.from_name("hf-secret")]
GPU = "T4"
MODEL_NAME = "meta-llama/Meta-Llama-3.1-8B"
@app.function(image=image, secrets=secrets, gpu=GPU)
def generate(prompt: str) -> str:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, set_seed
# Quant Config
quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_type="nf4"
)
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
model = AutoModelForCausalLM.from_pretrained(
MODEL_NAME,
quantization_config=quant_config,
device_map="auto"
)
set_seed(42)
inputs = tokenizer.encode(prompt, return_tensors="pt").to("cuda")
attention_mask = torch.ones(inputs.shape, device="cuda")
outputs = model.generate(inputs, attention_mask=attention_mask, max_new_tokens=5, num_return_sequences=1)
return tokenizer.decode(outputs[0])

35
week8/log_utils.py

@ -0,0 +1,35 @@
# Foreground colors
RED = '\033[31m'
GREEN = '\033[32m'
YELLOW = '\033[33m'
BLUE = '\033[34m'
MAGENTA = '\033[35m'
CYAN = '\033[36m'
WHITE = '\033[37m'
# Background color
BG_BLACK = '\033[40m'
BG_BLUE = '\033[44m'
# Reset code to return to default color
RESET = '\033[0m'
mapper = {
BG_BLACK+RED: "#dd0000",
BG_BLACK+GREEN: "#00dd00",
BG_BLACK+YELLOW: "#dddd00",
BG_BLACK+BLUE: "#0000ee",
BG_BLACK+MAGENTA: "#aa00dd",
BG_BLACK+CYAN: "#00dddd",
BG_BLACK+WHITE: "#87CEEB",
BG_BLUE+WHITE: "#ff7800"
}
def reformat(message):
for key, value in mapper.items():
message = message.replace(key, f'<span style="color: {value}">')
message = message.replace(RESET, '</span>')
return message
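`reformat` converts the ANSI color codes used for terminal logging into HTML spans for display in a web log panel. A trimmed, self-contained version with a single color mapping:

```python
# ANSI escape sequences matching the logging colors above
BG_BLUE = '\033[44m'
WHITE = '\033[37m'
RESET = '\033[0m'

mapper = {BG_BLUE + WHITE: '#ff7800'}


def reformat(message: str) -> str:
    # Replace each ANSI color prefix with an opening HTML span,
    # and the reset code with the closing tag
    for key, value in mapper.items():
        message = message.replace(key, f'<span style="color: {value}">')
    return message.replace(RESET, '</span>')


html = reformat(BG_BLUE + WHITE + "[Agent Framework] ready" + RESET)
```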

164
week8/memory.json

@ -0,0 +1,164 @@
[
{
"deal": {
"product_description": "The Samsung Galaxy Watch Ultra is a premium 47mm LTE Titanium smartwatch designed for both style and functionality. It features a circular display made with durable materials suitable for outdoor activities, providing GPS tracking, health monitoring, and custom apps for various needs. The robust design integrates a range of smart features including notifications, music control, and heart rate tracking, making it an ideal companion for fitness enthusiasts and tech-savvy users alike.",
"price": 350.0,
"url": "https://www.dealnews.com/Samsung-Galaxy-Watch-Ultra-47-mm-LTE-Titanium-Smartwatch-up-to-350-off-w-Trade-in-free-shipping/21663266.html?iref=rss-c142"
},
"estimate": 773.8138460593241,
"discount": 423.8138460593241
},
{
"deal": {
"product_description": "The Refurbished Unlocked Apple iPhone 14 Pro Max offers an impressive 256GB storage and a huge display, perfect for both media consumption and productivity. Enjoy advanced camera technology for stunning photos. This model is designed to provide a seamless user experience with 5G capabilities for faster downloads and streaming. Refurbished to high standards, it comes in various colors and can support all the latest apps from the App Store, accommodating any Apple enthusiast's needs.",
"price": 705.0,
"url": "https://www.dealnews.com/products/Apple/Unlocked-Apple-iPhone-14-Pro-Max-256-GB-Smartphone/462808.html?iref=rss-c142"
},
"estimate": 930.8824204895075,
"discount": 225.88242048950747
},
{
"deal": {
"product_description": "The Certified Refurbished Acer Nitro V laptop boasts a powerful 13th Generation Intel Core i5 processor, perfect for gaming and multitasking. With its 15.6-inch 1080p IPS display and NVIDIA GeForce RTX 4050 GPU, expect stunning visuals and smooth performance for all your gaming needs. It comes with 8GB RAM and a 512GB SSD, ensuring fast load times and ample storage. This model is backed by a 2-year warranty from Allstate, ensuring reliable performance for years to come.",
"price": 560.0,
"url": "https://www.dealnews.com/products/Acer/Nitro-V-13-th-Gen-i5-15-6-Laptop-w-NVIDIA-Ge-Force-RTX-4050/480447.html?iref=rss-c39"
},
"estimate": 925.1468647365509,
"discount": 365.1468647365509
},
{
"deal": {
"product_description": "This EcoFlow DELTA 2 (950) Portable Power Station is designed to meet all your mobile energy needs. With a capacity of 1024Wh, it features six AC outlets and multiple USB ports, allowing you to charge various devices simultaneously. The included 800W alternator charger ensures quick recharging, making it an ideal power solution for camping trips or emergency situations. Its Wi-Fi and Bluetooth capabilities provide easy control and monitoring.",
"price": 699.0,
"url": "https://www.dealnews.com/Eco-Flow-DELTA-2-950-Portable-Power-Station-800-W-Alternator-Charger-for-699-free-shipping/21671420.html?iref=rss-c142"
},
"estimate": 869.9627492224658,
"discount": 170.96274922246585
},
{
"deal": {
"product_description": "This 12V 280Ah LiFePO4 battery offers advanced lithium iron phosphate technology, providing lightweight yet robust power storage for a variety of applications including solar, RV, and marine use. Its long cycle life ensures you can rely on it for consistent performance over time. With built-in BMS (Battery Management System) for safe operation and efficiency, this battery is designed for those who prioritize both reliability and energy efficiency.",
"price": 400.0,
"url": "https://www.dealnews.com/12-V-280-Ah-Li-Fe-PO4-Battery-for-400-free-shipping/21676885.html?iref=rss-c142"
},
"estimate": 744.6978640830646,
"discount": 344.6978640830646
},
{
"deal": {
"product_description": "This certified refurbished Apple iPhone 14 features 128GB of storage, providing ample space for apps, photos, and videos. Known for its performance, the iPhone 14 includes advanced camera capabilities and a long-lasting battery, setting standards in smartphone technology. The device comes with a one-year warranty from Allstate, ensuring reliability and support for customers. This refurbished option is an excellent choice for users looking for quality at a lower price point.",
"price": 425.0,
"url": "https://www.dealnews.com/products/Apple/Refurb-Unlocked-Apple-iPhone-14-128-GB-Phone-Excellent-Cond/472441.html?iref=rss-c142"
},
"estimate": 667.340418267145,
"discount": 242.34041826714497
},
{
"deal": {
"product_description": "The iRobot Roomba j7+ is a highly efficient WiFi self-emptying robot vacuum designed to make cleaning effortless. With advanced smart mapping technology, it can identify and avoid obstacles, ensuring a thorough cleaning of your home. This model offers a self-emptying base, eliminating the need for manual bag changes. A runtime of 90 minutes allows it to cover large areas, making it ideal for busy households.",
"price": 250.0,
"url": "https://www.dealnews.com/products/iRobot/iRobot-Roomba-j7-Wi-Fi-Self-Emptying-Robot-Vacuum/293669.html?iref=rss-f1912"
},
"estimate": 621.4089582478217,
"discount": 371.4089582478217
},
{
"deal": {
"product_description": "The certified refurbished iRobot Roomba 692 is a smart cleaning device that simplifies household upkeep. This vacuum operates with three-stage cleaning technology and has a compatibility feature with smart assistants such as Alexa and Google Home. It is designed for efficient cleaning across hard floors and carpets, providing a 90-minute runtime per charge. This model comes with a 2-year warranty from Allstate, ensuring peace of mind along with top-notch functionality for your cleaning needs.",
"price": 120.0,
"url": "https://www.dealnews.com/products/iRobot/iRobot-Roomba-692-Robot-Vacuum/143125.html?iref=rss-f1912"
},
"estimate": 304.1034980572389,
"discount": 184.10349805723888
},
{
"deal": {
"product_description": "The Certified Refurb Acer Swift Edge Ryzen 7 Laptop features an advanced AMD Ryzen 7 7735U 8-core processor, offering exceptional performance for both work and play. It is equipped with a 16GB RAM and a spacious 1TB SSD, ensuring smooth multitasking and ample storage for your files. The 16-inch display boasts a stunning 3840x2400 resolution, making it ideal for streaming and content creation. This laptop is certified refurbished, meaning it comes backed by a 2-year warranty for peace of mind.",
"price": 760.0,
"url": "https://www.dealnews.com/Certified-Refurb-Acer-Swift-Edge-Ryzen-7-16-Laptop-for-760-free-shipping/21682096.html?iref=rss-c39"
},
"estimate": 985.5902213719573,
"discount": 225.59022137195734
},
{
"deal": {
"product_description": "The Klipsch T5 II True Wireless ANC Earphones offer a high-fidelity audio experience combined with active noise-canceling technology. These earbuds come with six sizes of patented, color-coded oval ear tips to ensure a comfortable fit for all users. The device is equipped with a two-mic hybrid design, enhancing call quality while reducing background noise. With intuitive head gesture controls, you can easily manage your audio playback without needing to reach for your device.",
"price": 68.0,
"url": "https://www.dealnews.com/products/Klipsch/Klipsch-T5-II-True-Wireless-ANC-Earphones/482823.html?iref=rss-c142"
},
"estimate": 309.56921733594834,
"discount": 241.56921733594834
},
{
"deal": {
"product_description": "The Certified Refurbished Acer Aspire 3 laptop boasts a powerful 6th-generation AMD Ryzen 5 processor and a spacious 15.6-inch touchscreen display with full HD resolution. It is equipped with 16GB of RAM and a 1TB SSD, providing ample storage and fast performance. This model comes with Windows 11 Home and is backed by a two-year warranty.",
"price": 288.0,
"url": "https://www.dealnews.com/products/Acer/Acer-Aspire-3-6-th-Gen-Ryzen-5-15-6-Touchscreen-Laptop/476367.html?iref=rss-c39"
},
"estimate": 558.5214284656016,
"discount": 270.5214284656016
},
{
"deal": {
"product_description": "The Eufy eufyCam S330 (eufyCam 3) 4-Camera kit comes equipped with advanced security features, making it a top choice for home monitoring. Each camera offers high-resolution 4K video and includes an integrated solar panel for continuous powering, thus eliminating the need for frequent recharges. The system utilizes artificial intelligence for facial recognition, effectively distinguishing between family and strangers. Furthermore, it includes a 1TB hard drive for expandable local storage, ensuring that you have ample room to save recorded footage without additional fees.",
"price": 650.0,
"url": "https://www.dealnews.com/products/Eufy/eufy-Cam-S330-eufy-Cam-3-4-Camera-Kit-1-TB-HDD/464094.html?iref=rss-c196"
},
"estimate": 901.053559336033,
"discount": 251.053559336033
},
{
"deal": {
"product_description": "The Shark IQ Robot Vacuum is designed to simplify your cleaning routine with its 60-day capacity base, which allows it to store more dirt before needing to be emptied. It comes equipped with advanced navigation technology to efficiently clean your floors while avoiding obstacles. This self-emptying feature not only saves you time but also ensures your home remains clean with minimal effort on your part. It's a perfect solution for busy households looking for convenience.",
"price": 267.0,
"url": "https://www.dealnews.com/Shark-IQ-Robot-Vacuum-w-60-Day-Capacity-Base-for-267-free-shipping/21668563.html?iref=rss-f1912"
},
"estimate": 495.68476864134675,
"discount": 228.68476864134675
},
{
"deal": {
"product_description": "The Dell Inspiron 15 Laptop is equipped with a 12th Generation Intel Core i5 processor and a 15.6-inch touchscreen display, providing a dynamic computing experience for work and entertainment. With 8GB of RAM and a 512GB SSD, it offers sufficient memory for multitasking and fast boot-up times. This laptop runs Windows 11 Home in S Mode, enhancing its performance for everyday tasks. Its sleek design and robust features make it a practical choice for students and professionals alike.",
"price": 350.0,
"url": "https://www.dealnews.com/products/Dell/Dell-Inspiron-15-12-th-Gen-i5-15-6-Touchscreen-Laptop-w-512-GB-SSD/479057.html?iref=rss-c39"
},
"estimate": 577.1076076116793,
"discount": 227.10760761167933
},
{
"deal": {
"product_description": "The EcoFlow DELTA 2 (950) Portable Power Station is designed for convenience and reliability, boasting a 1024Wh capacity that can power multiple devices simultaneously. It features six AC outlets, two USB-A, and two USB-C ports, making it versatile for outdoor adventures, emergencies, or everyday use. With both Wi-Fi and Bluetooth connectivity, you can monitor the status of your power station right from your smartphone. This bundle also comes with an 800W alternator charger, ensuring that you have everything you need to stay powered up wherever you go.",
"price": 699.0,
"url": "https://www.dealnews.com/Eco-Flow-DELTA-2-950-Portable-Power-Station-800-W-Alternator-Charger-for-699-free-shipping/21673983.html?iref=rss-c142"
},
"estimate": 963.8626028683989,
"discount": 264.8626028683989
},
{
"deal": {
"product_description": "The Certified Refurb iRobot Roomba i4 EVO WiFi Robot Vacuum combines smart technology with powerful cleaning capabilities. This robot vacuum is designed to navigate effortlessly through your home, providing thorough cleaning on multiple surfaces. With WiFi connectivity, users can control the vacuum remotely via a smartphone app. The three-stage cleaning system ensures a deep clean, making it a perfect addition for busy households.",
"price": 130.0,
"url": "https://www.dealnews.com/products/iRobot/iRobot-Roomba-i4-EVO-Wi-Fi-Robot-Vacuum/431157.html?iref=rss-f1912"
},
"estimate": 341.23175777017946,
"discount": 211.23175777017946
},
{
"deal": {
"product_description": "The ZimaBoard 832 Single Board Server is a versatile and compact solution for home or office use, equipped with robust processing power suitable for a variety of applications, including media streaming and file servers. It is designed for ease of use, with capabilities for expansion and customization. Its lightweight and energy-efficient design makes it an ideal selection for developers and tech enthusiasts seeking a reliable platform for programming and digital projects.",
"price": 140.0,
"url": "https://www.dealnews.com/Zima-Board-832-Single-Board-Server-for-140-free-shipping/21676871.html?iref=rss-c39"
},
"estimate": 292.84135797094723,
"discount": 152.84135797094723
},
{
"deal": {
"product_description": "The EcoFlow DELTA 2 (950) Portable Power Station is a robust solution for all your electrical needs while on the go. With a 1024Wh capacity, this power station is versatile enough to charge multiple devices simultaneously, thanks to its six AC outlets. Additionally, it features two USB-A and two USB-C ports, enabling you to charge laptops, phones, and other electronics quickly. Bundled with an 800W Alternator Charger, it's designed to ensure you have power, wherever your adventures may lead you.",
"price": 699.0,
"url": "https://www.dealnews.com/Eco-Flow-DELTA-2-950-Portable-Power-Station-800-W-Alternator-Charger-for-699-free-shipping/21676798.html?iref=rss-c142"
},
"estimate": 870.8927901823207,
"discount": 171.8927901823207
}
]
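Each entry in memory.json pairs a scraped deal with the model's price estimate; the `discount` field is simply the estimate minus the asking price. A minimal sketch, using values copied from the Shark IQ entry above:

```python
# Sketch: how the "discount" field in memory.json relates to the others.
# Values mirror the Shark IQ Robot Vacuum entry above.
price = 267.0
estimate = 495.68476864134675
discount = estimate - price
print(discount)  # 228.68476864134675 -- matches the stored "discount"
```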

61
week8/price_is_right.py

@@ -0,0 +1,61 @@

import gradio as gr
from deal_agent_framework import DealAgentFramework
from agents.deals import Opportunity, Deal
class App:
def __init__(self):
self.agent_framework = None
def run(self):
with gr.Blocks(title="The Price is Right", fill_width=True) as ui:
def table_for(opps):
return [[opp.deal.product_description, f"${opp.deal.price:.2f}", f"${opp.estimate:.2f}", f"${opp.discount:.2f}", opp.deal.url] for opp in opps]
def start():
self.agent_framework = DealAgentFramework()
opportunities = self.agent_framework.memory
table = table_for(opportunities)
return table
def go():
self.agent_framework.run()
new_opportunities = self.agent_framework.memory
table = table_for(new_opportunities)
return table
def do_select(selected_index: gr.SelectData):
opportunities = self.agent_framework.memory
row = selected_index.index[0]
opportunity = opportunities[row]
self.agent_framework.planner.messenger.alert(opportunity)
with gr.Row():
gr.Markdown('<div style="text-align: center;font-size:24px">"The Price is Right" - Deal Hunting Agentic AI</div>')
with gr.Row():
gr.Markdown('<div style="text-align: center;font-size:14px">Autonomous agent framework that finds online deals, collaborating with a proprietary fine-tuned LLM deployed on Modal, and a RAG pipeline with a frontier model and Chroma.</div>')
with gr.Row():
gr.Markdown('<div style="text-align: center;font-size:14px">Deals surfaced so far:</div>')
with gr.Row():
opportunities_dataframe = gr.Dataframe(
headers=["Description", "Price", "Estimate", "Discount", "URL"],
wrap=True,
column_widths=[4, 1, 1, 1, 2],
row_count=10,
col_count=5,
height=400,
)
ui.load(start, inputs=[], outputs=[opportunities_dataframe])
timer = gr.Timer(value=60)
timer.tick(go, inputs=[], outputs=[opportunities_dataframe])
opportunities_dataframe.select(do_select)
ui.launch(share=False, inbrowser=True)
if __name__=="__main__":
App().run()
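The `table_for` helper above flattens each `Opportunity` into a row of display strings for the Gradio dataframe. A self-contained sketch of the same mapping, with simplified stand-ins for the `agents.deals` classes (the real ones are Pydantic models):

```python
from dataclasses import dataclass

# Simplified stand-ins for agents.deals.Deal and Opportunity
@dataclass
class Deal:
    product_description: str
    price: float
    url: str

@dataclass
class Opportunity:
    deal: Deal
    estimate: float
    discount: float

def table_for(opps):
    # One display row per opportunity, prices formatted as dollars
    return [[opp.deal.product_description, f"${opp.deal.price:.2f}",
             f"${opp.estimate:.2f}", f"${opp.discount:.2f}", opp.deal.url]
            for opp in opps]

opp = Opportunity(Deal("Robot vacuum", 130.0, "https://example.com"), 341.23, 211.23)
print(table_for([opp])[0][:4])  # ['Robot vacuum', '$130.00', '$341.23', '$211.23']
```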

180
week8/price_is_right_final.py

@@ -0,0 +1,180 @@
import logging
import queue
import threading
import time
import gradio as gr
from deal_agent_framework import DealAgentFramework
from agents.deals import Opportunity, Deal
from log_utils import reformat
import plotly.graph_objects as go
class QueueHandler(logging.Handler):
def __init__(self, log_queue):
super().__init__()
self.log_queue = log_queue
def emit(self, record):
self.log_queue.put(self.format(record))
def html_for(log_data):
output = '<br>'.join(log_data[-18:])
return f"""
<div id="scrollContent" style="height: 400px; overflow-y: auto; border: 1px solid #ccc; background-color: #222229; padding: 10px;">
{output}
</div>
"""
def setup_logging(log_queue):
handler = QueueHandler(log_queue)
formatter = logging.Formatter(
"[%(asctime)s] %(message)s",
datefmt="%Y-%m-%d %H:%M:%S %z",
)
handler.setFormatter(formatter)
logger = logging.getLogger()
logger.addHandler(handler)
logger.setLevel(logging.INFO)
class App:
def __init__(self):
self.agent_framework = None
def run(self):
with gr.Blocks(title="The Price is Right", fill_width=True) as ui:
log_data = gr.State([])
def table_for(opps):
return [[opp.deal.product_description, f"${opp.deal.price:.2f}", f"${opp.estimate:.2f}", f"${opp.discount:.2f}", opp.deal.url] for opp in opps]
def update_output(log_data, log_queue, result_queue):
final_result = None
while True:
try:
message = log_queue.get_nowait()
log_data.append(reformat(message))
yield log_data, html_for(log_data), final_result
except queue.Empty:
try:
final_result = result_queue.get_nowait()
yield log_data, html_for(log_data), final_result
except queue.Empty:
if final_result is not None:
break
time.sleep(0.1)
def get_initial_plot():
fig = go.Figure()
fig.update_layout(
title='Loading vector DB...',
height=400,
)
return fig
def get_plot():
documents, vectors, colors = DealAgentFramework.get_plot_data(max_datapoints=1000)
# Create the 3D scatter plot
fig = go.Figure(data=[go.Scatter3d(
x=vectors[:, 0],
y=vectors[:, 1],
z=vectors[:, 2],
mode='markers',
marker=dict(size=2, color=colors, opacity=0.7),
)])
fig.update_layout(
scene=dict(xaxis_title='x',
yaxis_title='y',
zaxis_title='z',
aspectmode='manual',
aspectratio=dict(x=2.2, y=2.2, z=1),  # Stretch x and y to roughly twice the z extent
camera=dict(
eye=dict(x=1.6, y=1.6, z=0.8) # Adjust camera position
)),
height=400,
margin=dict(r=5, b=1, l=5, t=2)
)
return fig
def start():
self.agent_framework = DealAgentFramework()
self.agent_framework.run()
opportunities = self.agent_framework.memory
table = table_for(opportunities)
return table
def do_run():
if not self.agent_framework:
self.agent_framework = DealAgentFramework()
self.agent_framework.run()
new_opportunities = self.agent_framework.memory
table = table_for(new_opportunities)
return table
def do_with_logging(function, initial_log_data):
log_queue = queue.Queue()
result_queue = queue.Queue()
setup_logging(log_queue)
def worker():
result = function()
result_queue.put(result)
thread = threading.Thread(target=worker)
thread.start()
for log_data, output, final_result in update_output(initial_log_data, log_queue, result_queue):
yield log_data, output, final_result
def start_with_logging(initial_log_data):
for log_data, output, final_result in do_with_logging(start, initial_log_data):
yield log_data, output, final_result
def run_with_logging(initial_log_data):
for log_data, output, final_result in do_with_logging(do_run, initial_log_data):
yield log_data, output, final_result
def do_select(selected_index: gr.SelectData):
opportunities = self.agent_framework.memory
row = selected_index.index[0]
opportunity = opportunities[row]
self.agent_framework.planner.messenger.alert(opportunity)
with gr.Row():
gr.Markdown('<div style="text-align: center;font-size:24px"><strong>The Price is Right</strong> - Autonomous Agent Framework that hunts for deals</div>')
with gr.Row():
gr.Markdown('<div style="text-align: center;font-size:14px">A proprietary fine-tuned LLM deployed on Modal and a RAG pipeline with a frontier model collaborate to send push notifications with great online deals.</div>')
with gr.Row():
opportunities_dataframe = gr.Dataframe(
headers=["Deals found so far", "Price", "Estimate", "Discount", "URL"],
wrap=True,
column_widths=[6, 1, 1, 1, 3],
row_count=10,
col_count=5,
height=400,
)
with gr.Row():
with gr.Column(scale=1):
logs = gr.HTML()
with gr.Column(scale=1):
plot = gr.Plot(value=get_plot(), show_label=False)
ui.load(start_with_logging, inputs=[log_data], outputs=[log_data, logs, opportunities_dataframe])
timer = gr.Timer(value=300, active=True)
timer.tick(run_with_logging, inputs=[log_data], outputs=[log_data, logs, opportunities_dataframe])
# timer2 = gr.Timer(value=5, active=True)
# timer2.tick(get_plot, inputs=[], outputs=[plot, timer2])
opportunities_dataframe.select(do_select)
ui.launch(share=False, inbrowser=True)
if __name__=="__main__":
App().run()
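The `QueueHandler` at the top of this file is the piece that lets the background worker thread stream log lines into the Gradio UI: records go onto a `queue.Queue`, and `update_output` drains them. The pattern can be exercised in isolation (the logger name and message here are illustrative, not from the app):

```python
import logging
import queue

# Minimal demo of the QueueHandler pattern above: formatted log records
# are pushed onto a queue so another thread can drain and render them.
class QueueHandler(logging.Handler):
    def __init__(self, log_queue):
        super().__init__()
        self.log_queue = log_queue

    def emit(self, record):
        self.log_queue.put(self.format(record))

log_queue = queue.Queue()
handler = QueueHandler(log_queue)
handler.setFormatter(logging.Formatter("[%(levelname)s] %(message)s"))
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Scanning for deals...")
print(log_queue.get_nowait())  # [INFO] Scanning for deals...
```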

66
week8/pricer_ephemeral.py

@@ -0,0 +1,66 @@
import modal
from modal import App, Image
# Setup
app = modal.App("pricer")
image = Image.debian_slim().pip_install("torch", "transformers", "bitsandbytes", "accelerate", "peft")
secrets = [modal.Secret.from_name("hf-secret")]
# Constants
GPU = "T4"
BASE_MODEL = "meta-llama/Meta-Llama-3.1-8B"
PROJECT_NAME = "pricer"
HF_USER = "ed-donner" # your HF name here! Or use mine if you just want to reproduce my results.
RUN_NAME = "2024-09-13_13.04.39"
PROJECT_RUN_NAME = f"{PROJECT_NAME}-{RUN_NAME}"
REVISION = "e8d637df551603dc86cd7a1598a8f44af4d7ae36"
FINETUNED_MODEL = f"{HF_USER}/{PROJECT_RUN_NAME}"
@app.function(image=image, secrets=secrets, gpu=GPU)
def price(description: str) -> float:
import os
import re
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, set_seed
from peft import PeftModel
QUESTION = "How much does this cost to the nearest dollar?"
PREFIX = "Price is $"
prompt = f"{QUESTION}\n{description}\n{PREFIX}"
# Quant Config
quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_type="nf4"
)
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
base_model = AutoModelForCausalLM.from_pretrained(
BASE_MODEL,
quantization_config=quant_config,
device_map="auto"
)
fine_tuned_model = PeftModel.from_pretrained(base_model, FINETUNED_MODEL, revision=REVISION)
set_seed(42)
inputs = tokenizer.encode(prompt, return_tensors="pt").to("cuda")
attention_mask = torch.ones(inputs.shape, device="cuda")
outputs = fine_tuned_model.generate(inputs, attention_mask=attention_mask, max_new_tokens=5, num_return_sequences=1)
result = tokenizer.decode(outputs[0])
contents = result.split("Price is $")[1]
contents = contents.replace(',','')
match = re.search(r"[-+]?\d*\.\d+|\d+", contents)
return float(match.group()) if match else 0
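The tail of `price()` pulls a number out of the generated text: split on the `"Price is $"` prefix (which the model echoes back from the prompt), strip thousands separators, then regex-match the first number. The same extraction can be exercised on its own:

```python
import re

# Mirrors the extraction at the end of price(): take everything after the
# "Price is $" prefix, drop commas, grab the first number found.
def extract_price(result: str) -> float:
    contents = result.split("Price is $")[1]
    contents = contents.replace(',', '')
    match = re.search(r"[-+]?\d*\.\d+|\d+", contents)
    return float(match.group()) if match else 0

print(extract_price("How much does this cost?\nPrice is $1,299.99 for it"))  # 1299.99
```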

66
week8/pricer_service.py

@@ -0,0 +1,66 @@
import modal
from modal import App, Image
# Setup - define our infrastructure with code!
app = modal.App("pricer-service")
image = Image.debian_slim().pip_install("torch", "transformers", "bitsandbytes", "accelerate", "peft")
secrets = [modal.Secret.from_name("hf-secret")]
# Constants
GPU = "T4"
BASE_MODEL = "meta-llama/Meta-Llama-3.1-8B"
PROJECT_NAME = "pricer"
HF_USER = "ed-donner" # your HF name here! Or use mine if you just want to reproduce my results.
RUN_NAME = "2024-09-13_13.04.39"
PROJECT_RUN_NAME = f"{PROJECT_NAME}-{RUN_NAME}"
REVISION = "e8d637df551603dc86cd7a1598a8f44af4d7ae36"
FINETUNED_MODEL = f"{HF_USER}/{PROJECT_RUN_NAME}"
@app.function(image=image, secrets=secrets, gpu=GPU)
def price(description: str) -> float:
import os
import re
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, set_seed
from peft import PeftModel
QUESTION = "How much does this cost to the nearest dollar?"
PREFIX = "Price is $"
prompt = f"{QUESTION}\n{description}\n{PREFIX}"
# Quant Config
quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_type="nf4"
)
# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
base_model = AutoModelForCausalLM.from_pretrained(
BASE_MODEL,
quantization_config=quant_config,
device_map="auto"
)
fine_tuned_model = PeftModel.from_pretrained(base_model, FINETUNED_MODEL, revision=REVISION)
set_seed(42)
inputs = tokenizer.encode(prompt, return_tensors="pt").to("cuda")
attention_mask = torch.ones(inputs.shape, device="cuda")
outputs = fine_tuned_model.generate(inputs, attention_mask=attention_mask, max_new_tokens=5, num_return_sequences=1)
result = tokenizer.decode(outputs[0])
contents = result.split("Price is $")[1]
contents = contents.replace(',','')
match = re.search(r"[-+]?\d*\.\d+|\d+", contents)
return float(match.group()) if match else 0

84
week8/pricer_service2.py

@@ -0,0 +1,84 @@
import modal
from modal import App, Volume, Image
# Setup - define our infrastructure with code!
app = modal.App("pricer-service")
image = Image.debian_slim().pip_install("huggingface", "torch", "transformers", "bitsandbytes", "accelerate", "peft")
secrets = [modal.Secret.from_name("hf-secret")]
# Constants
GPU = "T4"
BASE_MODEL = "meta-llama/Meta-Llama-3.1-8B"
PROJECT_NAME = "pricer"
HF_USER = "ed-donner" # your HF name here! Or use mine if you just want to reproduce my results.
RUN_NAME = "2024-09-13_13.04.39"
PROJECT_RUN_NAME = f"{PROJECT_NAME}-{RUN_NAME}"
REVISION = "e8d637df551603dc86cd7a1598a8f44af4d7ae36"
FINETUNED_MODEL = f"{HF_USER}/{PROJECT_RUN_NAME}"
QUESTION = "How much does this cost to the nearest dollar?"
PREFIX = "Price is $"
@app.cls(image=image, secrets=secrets, gpu=GPU)
class Pricer:
@modal.build()
def download_model_to_folder(self):
from huggingface_hub import snapshot_download
import os
MODEL_DIR = os.path.expanduser("~/.cache/huggingface/hub/")  # expand "~" explicitly; os.makedirs would otherwise create a literal "~" directory
os.makedirs(MODEL_DIR, exist_ok=True)
snapshot_download(BASE_MODEL, local_dir=MODEL_DIR)
snapshot_download(FINETUNED_MODEL, revision=REVISION, local_dir=MODEL_DIR)
@modal.enter()
def setup(self):
import os
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, set_seed
from peft import PeftModel
# Quant Config
quant_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_type="nf4"
)
# Load model and tokenizer
self.tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
self.tokenizer.pad_token = self.tokenizer.eos_token
self.tokenizer.padding_side = "right"
self.base_model = AutoModelForCausalLM.from_pretrained(
BASE_MODEL,
quantization_config=quant_config,
device_map="auto"
)
self.fine_tuned_model = PeftModel.from_pretrained(self.base_model, FINETUNED_MODEL, revision=REVISION)
@modal.method()
def price(self, description: str) -> float:
import os
import re
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, set_seed
from peft import PeftModel
set_seed(42)
prompt = f"{QUESTION}\n\n{description}\n\n{PREFIX}"
inputs = self.tokenizer.encode(prompt, return_tensors="pt").to("cuda")
attention_mask = torch.ones(inputs.shape, device="cuda")
outputs = self.fine_tuned_model.generate(inputs, attention_mask=attention_mask, max_new_tokens=5, num_return_sequences=1)
result = self.tokenizer.decode(outputs[0])
contents = result.split("Price is $")[1]
contents = contents.replace(',','')
match = re.search(r"[-+]?\d*\.\d+|\d+", contents)
return float(match.group()) if match else 0
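A rough back-of-envelope check (my assumption, not a figure from the source) of why this quant config fits the chosen T4 GPU: NF4 stores about half a byte per weight, so the 8B-parameter base model needs on the order of 4 GB for weights, leaving headroom on a 16 GB T4 for the LoRA adapter, activations, and KV cache.

```python
# Back-of-envelope estimate (assumption, not from the course): 4-bit NF4
# weights take ~0.5 bytes per parameter, so an 8B-parameter model needs
# roughly 4 GB of GPU memory for weights alone.
params = 8_000_000_000
bytes_per_param_nf4 = 0.5
weight_gb = params * bytes_per_param_nf4 / 1e9
print(f"{weight_gb:.1f} GB")  # 4.0 GB
```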

75
week8/testing.py

@@ -0,0 +1,75 @@
import math
import matplotlib.pyplot as plt
GREEN = "\033[92m"
YELLOW = "\033[93m"
RED = "\033[91m"
RESET = "\033[0m"
COLOR_MAP = {"red":RED, "orange": YELLOW, "green": GREEN}
class Tester:
def __init__(self, predictor, data, title=None, size=250):
self.predictor = predictor
self.data = data
self.title = title or predictor.__name__.replace("_", " ").title()
self.size = size
self.guesses = []
self.truths = []
self.errors = []
self.sles = []
self.colors = []
def color_for(self, error, truth):
if error<40 or error/truth < 0.2:
return "green"
elif error<80 or error/truth < 0.4:
return "orange"
else:
return "red"
def run_datapoint(self, i):
datapoint = self.data[i]
guess = self.predictor(datapoint)
truth = datapoint.price
error = abs(guess - truth)
log_error = math.log(truth+1) - math.log(guess+1)
sle = log_error ** 2
color = self.color_for(error, truth)
title = datapoint.title if len(datapoint.title) <= 40 else datapoint.title[:40]+"..."
self.guesses.append(guess)
self.truths.append(truth)
self.errors.append(error)
self.sles.append(sle)
self.colors.append(color)
print(f"{COLOR_MAP[color]}{i+1}: Guess: ${guess:,.2f} Truth: ${truth:,.2f} Error: ${error:,.2f} SLE: {sle:,.2f} Item: {title}{RESET}")
def chart(self, title):
max_error = max(self.errors)
plt.figure(figsize=(12, 8))
max_val = max(max(self.truths), max(self.guesses))
plt.plot([0, max_val], [0, max_val], color='deepskyblue', lw=2, alpha=0.6)
plt.scatter(self.truths, self.guesses, s=3, c=self.colors)
plt.xlabel('Ground Truth')
plt.ylabel('Model Estimate')
plt.xlim(0, max_val)
plt.ylim(0, max_val)
plt.title(title)
plt.show()
def report(self):
average_error = sum(self.errors) / self.size
rmsle = math.sqrt(sum(self.sles) / self.size)
hits = sum(1 for color in self.colors if color=="green")
title = f"{self.title} Error=${average_error:,.2f} RMSLE={rmsle:,.2f} Hits={hits/self.size*100:.1f}%"
self.chart(title)
def run(self):
self.error = 0
for i in range(self.size):
self.run_datapoint(i)
self.report()
@classmethod
def test(cls, function, data):
cls(function, data).run()
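The Tester's color bands can be checked in isolation: a guess is green when the absolute error is under $40 or under 20% of the truth, orange when under $80 or 40%, red otherwise, with squared log error (SLE) tracked alongside. A small sketch mirroring `color_for` and the SLE computation in `run_datapoint`:

```python
import math

# Mirrors Tester.color_for and the SLE computation in run_datapoint
def color_for(error, truth):
    if error < 40 or error / truth < 0.2:
        return "green"
    elif error < 80 or error / truth < 0.4:
        return "orange"
    else:
        return "red"

guess, truth = 120.0, 100.0
error = abs(guess - truth)  # 20.0 -> within $40, so green
sle = (math.log(truth + 1) - math.log(guess + 1)) ** 2
print(color_for(error, truth), round(sle, 4))  # green 0.0326
```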