Commit 7ea4ba144d by MuhammedDele, 5 months ago (branch pull/34/head)
Changed files:
  1. README.md (260 lines changed)
  2. environment.yml (41 lines changed)
  3. requirements.txt (11 lines changed)
  4. week1/community-contributions/day1-selenium-for-javascript-sites.ipynb (2 lines changed)
  5. week1/day1.ipynb (385 lines changed)
  6. week1/day5.ipynb (1513 lines changed)
  7. week1/solutions/week1 SOLUTION.ipynb (76 lines changed)

README.md: 260 lines changed

@@ -11,48 +11,154 @@ I'm so happy you're joining me on this path. We'll be building immensely satisfy
I'm here to help you be most successful with your learning! If you hit any snafus, or if you have any ideas on how I can improve the course, please do reach out in the platform or by emailing me direct (ed@edwarddonner.com). It's always great to connect with people on LinkedIn to build up the community - you'll find me here:
https://www.linkedin.com/in/eddonner/
Resources to accompany the course, including the slides and useful links, are here:
https://edwarddonner.com/2024/11/13/llm-engineering-resources/
### An important point on API costs
## Instant Gratification instructions for Week 1, Day 1
During the course, I'll suggest you try out the leading models at the forefront of progress, known as the Frontier models. I'll also suggest you run open-source models using Google Colab. These services have some charges, but I'll keep cost minimal - like, a few cents at a time.
We will start the course by installing Ollama so you can see results immediately!
1. Download and install Ollama from https://ollama.com noting that on a PC you might need to have administrator permissions for the install to work properly
2. On a PC, start a Command prompt / Powershell (Press Win + R, type `cmd`, and press Enter). On a Mac, start a Terminal (Applications > Utilities > Terminal).
3. Run `ollama run llama3.2` or for smaller machines try `ollama run llama3.2:1b`
4. If this doesn't work, you may need to run `ollama serve` in another Powershell (Windows) or Terminal (Mac), and try step 3 again
5. And if that doesn't work on your box, I've set up this on the cloud. This is on Google Colab, which will need you to have a Google account to sign in, but is free: https://colab.research.google.com/drive/1-_f5XZPsChvfU1sJ0QqCePtIuc55LSdu?usp=sharing
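Once the steps above have the Ollama server running, you can also reach it from Python. Here's a minimal sketch assuming Ollama's default local REST endpoint (`http://localhost:11434/api/chat`) and the `llama3.2` model pulled in step 3; only stdlib is used, and the actual network call is left commented out since it requires the server to be running:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_payload(prompt, model="llama3.2"):
    # Request body for /api/chat; stream=False asks for a single JSON reply
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(prompt, model="llama3.2"):
    # Only works while `ollama serve` (or the desktop app) is running locally
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# With the server running, you could then do:
# print(chat("Say hello in five words."))
print(build_payload("Say hello in five words."))
```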
Please do monitor your API usage to ensure you're comfortable with spend; I've included links below. There's no need to spend anything more than a couple of dollars for the entire course. During Week 7 you have an option to spend a bit more if you're enjoying the process - I spend about $10 myself and the results make me very happy indeed! But it's not necessary in the least; the important part is that you focus on learning.
Any problems, please contact me!
### How this Repo is organized
## Then, Setup instructions
There are folders for each of the "weeks", representing modules of the class, culminating in a powerful autonomous Agentic AI solution in Week 8 that draws on many of the prior weeks.
Follow the setup instructions below, then open the Week 1 folder and prepare for joy.
After we do the Ollama quick project, and after I introduce myself and the course, we get to work with the full environment setup.
### The most important part
Hopefully I've done a decent job of making these guides bulletproof - but please contact me right away if you hit roadblocks:
The mantra of the course is: the best way to learn is by **DOING**. You should work along with me, running each cell, inspecting the objects to get a detailed understanding of what's happening. Then tweak the code and make it your own. There are juicy challenges for you throughout the course. I'd love it if you wanted to push your code so I can follow along with your progress, and I can make your solutions available to others so we share in your progress. While the projects are enjoyable, they are first and foremost designed to be _educational_, teaching you business skills that can be put into practice in your work.
- PC people please follow the instructions in [SETUP-PC.md](SETUP-PC.md)
- Mac people please follow the instructions in [SETUP-mac.md](SETUP-mac.md)
- Linux people, the Mac instructions should be close enough!
## Setup instructions
### An important point on API costs (which are optional! No need to spend if you don't wish)
The recommended approach is to use Anaconda for your environment. Even if you've never used it before, it makes such a difference. Anaconda ensures that you're working with the right version of Python and all your packages are compatible with mine, even if we're on different platforms.
During the course, I'll suggest you try out the leading models at the forefront of progress, known as the Frontier models. I'll also suggest you run open-source models using Google Colab. These services have some charges, but I'll keep cost minimal - like, a few cents at a time. And I'll provide alternatives if you'd prefer not to use them.
**Update** Some people have had problems with Anaconda - horrors! The idea of Anaconda is to make it really smooth and simple to be working with the same environment. If you hit any problems with the instructions below, please skip to near the end of this README for the alternative approach using `pip`, and hopefully you'll be up and running fast. And please do message me if I can help with anything.
Please do monitor your API usage to ensure you're comfortable with spend; I've included links below. There's no need to spend anything more than a couple of dollars for the entire course. Some AI providers such as OpenAI require a minimum credit like \$5 or local equivalent; we should only spend a fraction of it, and you'll have plenty of opportunity to put it to good use in your own projects. During Week 7 you have an option to spend a bit more if you're enjoying the process - I spend about $10 myself and the results make me very happy indeed! But it's not necessary in the least; the important part is that you focus on learning.
We'll be mostly using Jupyter Lab in this course. For those new to Jupyter Lab / Jupyter Notebook, it's a delightful Data Science environment where you can simply hit shift+return in any cell to run it; start at the top and work your way down! When we move to Google Colab in Week 3, you'll experience the same interface for Python runtimes in the cloud.
I'll also show you an alternative if you'd rather not spend anything on APIs.
### For Windows Users
### How this Repo is organized
1. **Install Git** (if not already installed):
There are folders for each of the "weeks", representing modules of the class, culminating in a powerful autonomous Agentic AI solution in Week 8 that draws on many of the prior weeks.
Follow the setup instructions above, then open the Week 1 folder and prepare for joy.
- Download Git from https://git-scm.com/download/win
- Run the installer and follow the prompts, using default options
### The most important part
2. **Open Command Prompt:**
- Press Win + R, type `cmd`, and press Enter
3. **Navigate to your projects folder:**
If you have a specific folder for projects, navigate to it using the cd command. For example:
`cd C:\Users\YourUsername\Documents\Projects`
If you don't have a projects folder, you can create one:
```
mkdir C:\Users\YourUsername\Documents\Projects
cd C:\Users\YourUsername\Documents\Projects
```
(Replace YourUsername with your actual Windows username)
4. **Clone the repository:**
- Go to the course's GitHub page
- Click the green 'Code' button and copy the URL
- In the Command Prompt, type: `git clone <paste-url-here>`
5. **Install Anaconda:**
- Download Anaconda from https://docs.anaconda.com/anaconda/install/windows/
- Run the installer and follow the prompts
- A student mentioned that if you are prompted to upgrade Anaconda to a newer version during the install, you shouldn't do it, as there might be problems with the very latest update for PC. (Thanks for the pro-tip!)
6. **Set up the environment:**
- Open Anaconda Prompt (search for it in the Start menu)
- Navigate to the cloned repository folder using `cd path\to\repo` (replace `path\to\repo` with the actual path to the llm_engineering directory, your locally cloned version of the repo)
- Create the environment: `conda env create -f environment.yml`
- Wait for a few minutes for all packages to be installed
- Activate the environment: `conda activate llms`
You should see `(llms)` in your prompt, which indicates you've activated your new environment.
7. **Start Jupyter Lab:**
- In the Anaconda Prompt, from within the `llm_engineering` folder, type: `jupyter lab`
...and Jupyter Lab should open up, ready for you to get started. Open the `week1` folder and double click on `day1.ipynb`.
### For Mac Users
1. **Install Git** if not already installed (it will be in most cases)
- Open Terminal (Applications > Utilities > Terminal)
- Type `git --version`. If it's not installed, you'll be prompted to install it
2. **Navigate to your projects folder:**
If you have a specific folder for projects, navigate to it using the cd command. For example:
`cd ~/Documents/Projects`
If you don't have a projects folder, you can create one:
```
mkdir ~/Documents/Projects
cd ~/Documents/Projects
```
3. **Clone the repository**
- Go to the course's GitHub page
- Click the green 'Code' button and copy the URL
- In Terminal, type: `git clone <paste-url-here>`
4. **Install Anaconda:**
- Download Anaconda from https://docs.anaconda.com/anaconda/install/mac-os/
- Double-click the downloaded file and follow the installation prompts
5. **Set up the environment:**
- Open Terminal
- Navigate to the cloned repository folder using `cd path/to/repo` (replace `path/to/repo` with the actual path to the llm_engineering directory, your locally cloned version of the repo)
- Create the environment: `conda env create -f environment.yml`
- Wait for a few minutes for all packages to be installed
- Activate the environment: `conda activate llms`
You should see `(llms)` in your prompt, which indicates you've activated your new environment.
6. **Start Jupyter Lab:**
- In Terminal, from within the `llm_engineering` folder, type: `jupyter lab`
...and Jupyter Lab should open up, ready for you to get started. Open the `week1` folder and double click on `day1.ipynb`.
### When we get to it, creating your API keys
Particularly during weeks 1 and 2 of the course, you'll be writing code to call the APIs of Frontier models (models at the forefront of progress). You'll need to join me in setting up accounts and API keys.
The mantra of the course is: the best way to learn is by **DOING**. I don't type all the code during the course; I execute it for you to see the results. You should work along with me or after each lecture, running each cell, inspecting the objects to get a detailed understanding of what's happening. Then tweak the code and make it your own. There are juicy challenges for you throughout the course. I'd love it if you wanted to push your code so I can follow along with your progress, and I can make your solutions available to others so we share in your progress. While the projects are enjoyable, they are first and foremost designed to be _educational_, teaching you business skills that can be put into practice in your work.
- [GPT API](https://platform.openai.com/) from OpenAI
- [Claude API](https://console.anthropic.com/) from Anthropic
- [Gemini API](https://ai.google.dev/gemini-api) from Google
## Starting in Week 3, we'll also be using Google Colab for running with GPUs
Initially we'll only use OpenAI, so you can start with that, and we'll cover the others soon afterwards. The webpage where you set up your OpenAI key is [here](https://platform.openai.com/api-keys). See the extra note on API costs below if that's a concern. One student mentioned to me that OpenAI can take a few minutes to register; if you initially get an error about being out of quota, wait a few minutes and try again. Another reason you might encounter the out of quota error is if you haven't yet added a valid payment method to your OpenAI account. You can do this by clicking your profile picture on the OpenAI website then clicking "Your profile." Once you are redirected to your profile page, choose "Billing" on the left-pane menu. You will need to enter a valid payment method and charge your account with a small advance payment. It is recommended that you **disable** the automatic recharge as an extra failsafe. If it's still a problem, see more troubleshooting tips in the Week 1 Day 1 notebook, and/or message me!
Later in the course you'll be using the fabulous HuggingFace platform; an account is available for free at [HuggingFace](https://huggingface.co) - you can create an API token from the Avatar menu >> Settings >> Access Tokens.
And in Week 6/7 you'll be using the terrific [Weights & Biases](https://wandb.ai) platform to watch over your training batches. Accounts are also free, and you can set up a token in a similar way.
When you have these keys, please create a new file called `.env` in your project root directory. This file won't appear in Jupyter Lab because it's a hidden file; you should create it using something like Notepad (PC) or nano (Mac / Linux). I've put detailed instructions at the end of this README.
It should have contents like this, and to start with you only need the first line:
```
OPENAI_API_KEY=xxxx
GOOGLE_API_KEY=xxxx
ANTHROPIC_API_KEY=xxxx
HF_TOKEN=xxxx
```
This file is listed in the `.gitignore` file, so it won't get checked in and your keys stay safe.
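If you're curious what loading this file actually does, here's a minimal stand-in for python-dotenv's behaviour: read `KEY=value` lines into `os.environ`, without overriding variables that are already set. The `load_env_file` helper and the `EXAMPLE_API_KEY` name are my own illustrations, not part of the course code or the dotenv library:

```python
import os
import tempfile

def load_env_file(path):
    # Minimal sketch of what python-dotenv's load_dotenv does: read KEY=value
    # lines into os.environ, skipping blanks and comments, never overriding
    # variables that already exist in the environment
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo with a throwaway file standing in for your real .env
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("EXAMPLE_API_KEY=xxxx\n# a comment line, ignored\n")
    demo_path = f.name

load_env_file(demo_path)
print(os.environ["EXAMPLE_API_KEY"])  # xxxx
os.unlink(demo_path)
```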
If you have any problems with this process, there's a simple workaround which I explain in the video.
### Starting in Week 3, we'll also be using Google Colab for running with GPUs
You should be able to use the free tier or minimal spend to complete all the projects in the class. I personally signed up for Colab Pro+ and I'm loving it - but it's not required.
@@ -75,19 +181,91 @@ The charges for the exercises in this course should always be quite low, but if
2. For Anthropic: Always use model `claude-3-haiku-20240307` in the code instead of the other Claude models
3. During week 7, look out for my instructions for using the cheaper dataset
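One way to make the budget choices above hard to forget is to keep the model names in a single place. This is a hypothetical helper of my own, not part of the course code; it uses `claude-3-haiku-20240307` from the tip above and `gpt-4o-mini`, the low-cost OpenAI model used elsewhere in the course notebooks:

```python
# Hypothetical helper: centralise the budget model names so every API call
# can pick them up, instead of hard-coding a model string cell by cell
BUDGET_MODELS = {
    "openai": "gpt-4o-mini",                 # low-cost OpenAI model
    "anthropic": "claude-3-haiku-20240307",  # the cheap Claude named above
}

def budget_model(provider):
    return BUDGET_MODELS[provider]

print(budget_model("anthropic"))  # claude-3-haiku-20240307
```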
Please do message me or email me at ed@edwarddonner.com if this doesn't work or if I can help with anything. I can't wait to hear how you get on.
## And that's it! Happy coding!
### Alternative Setup Instructions if Anaconda is giving you problems
First please run:
`python --version`
To find out which python you're on. Ideally you'd be using Python 3.11.x, so we're completely in sync. You can download python at
https://www.python.org/downloads/
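You can also check the version from inside Python itself. A tiny sketch (the `version_ok` helper is my own, not course code):

```python
import sys

def version_ok(version=None, wanted=(3, 11)):
    # Compare the (major, minor) of the running interpreter against the
    # course target of Python 3.11.x
    if version is None:
        version = (sys.version_info.major, sys.version_info.minor)
    return tuple(version[:2]) == wanted

print(sys.version.split()[0], "- OK" if version_ok() else "- consider installing 3.11.x")
```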
Here are the steps:
After cloning the repo, cd into the project root directory `llm_engineering`.
Then:
1. Create a new virtual environment: `python -m venv venv`
2. Activate the virtual environment with
On a Mac: `source venv/bin/activate`
On a PC: `venv\Scripts\activate`
3. Run `pip install -r requirements.txt`
4. Create a file called `.env` in the project root directory and add any private API keys, such as below. (The next section has more detailed instructions for this, if you prefer.)
```
OPENAI_API_KEY=xxxx
GOOGLE_API_KEY=xxxx
ANTHROPIC_API_KEY=xxxx
HF_TOKEN=xxxx
```
5. Run `jupyter lab` to launch Jupyter and head over to the intro folder to get started.
Let me know if you hit problems.
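To double-check that the virtual environment from step 2 really activated, here's a quick stdlib sketch that works on both Mac and PC (the `in_virtualenv` helper is my own illustration):

```python
import sys

def in_virtualenv():
    # Inside an activated venv, sys.prefix points at the venv folder while
    # sys.base_prefix still points at the base Python installation
    return sys.prefix != sys.base_prefix

print("venv active" if in_virtualenv() else "venv NOT active - run the activate step again")
```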
### Guide to creating the `.env` file
**For PC users:**
1. Open Notepad (press Win + R to open the Run box, type `notepad`, and press Enter)
<table style="margin: 0; text-align: left;">
<tr>
<td style="width: 150px; height: 150px; vertical-align: middle;">
<img src="resources.jpg" width="150" height="150" style="display: block;" />
</td>
<td>
<h2 style="color:#f71;">Other resources</h2>
<span style="color:#f71;">I've put together this webpage with useful resources for the course. This includes links to all the slides.<br/>
<a href="https://edwarddonner.com/2024/11/13/llm-engineering-resources/">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>
Please keep this bookmarked, and I'll continue to add more useful links there over time.
</span>
</td>
</tr>
</table>
2. In the Notepad, type the contents of the file, such as:
```
OPENAI_API_KEY=xxxx
GOOGLE_API_KEY=xxxx
ANTHROPIC_API_KEY=xxxx
HF_TOKEN=xxxx
```
Double check there are no spaces before or after the `=` sign, and no spaces at the end of the key.
3. Go to File > Save As. In the "Save as type" dropdown, select All Files. In the "File name" field, type ".env". Choose the root of the project folder (the folder called `llm_engineering`) and click Save.
4. Navigate to the folder where you saved the file in Explorer and ensure it was saved as ".env" and not ".env.txt" - if necessary, rename it to ".env". You might need to turn "Show file extensions" on so that you can see file extensions. Message or email me if that doesn't make sense!
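If you'd like to sanity-check the file programmatically, here's a small validator sketch of my own (not part of the course code) for the no-spaces rule above:

```python
import re

# A well-formed line is NAME=value with no spaces around '=' and none trailing
ENV_LINE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*=\S+$")

def check_env_line(line):
    # True for a clean KEY=value line; False if spaces sneak in anywhere
    return bool(ENV_LINE.match(line.rstrip("\r\n")))

for candidate in ["OPENAI_API_KEY=xxxx", "OPENAI_API_KEY = xxxx", "HF_TOKEN=xxxx "]:
    print(repr(candidate), "OK" if check_env_line(candidate) else "BAD")
```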
**For Mac users:**
1. Open Terminal (Command + Space to open Spotlight, type Terminal and press Enter)
2. `cd` to your project root directory:
`cd /path/to/your/project`
(in other words, change to a directory like `/Users/your_name/Projects/llm_engineering`, or wherever you have cloned llm_engineering).
3. Create the `.env` file with:
`nano .env`
4. Then type your API keys into nano:
```
OPENAI_API_KEY=xxxx
GOOGLE_API_KEY=xxxx
ANTHROPIC_API_KEY=xxxx
HF_TOKEN=xxxx
```
5. Save the file:
Press Control + O, then Enter to confirm the save, then Control + X to exit the editor.
6. Use this command to list the files in your folder:
`ls -a`
and confirm that the `.env` file is there.
Please do message me or email me at ed@edwarddonner.com if this doesn't work or if I can help with anything. I can't wait to hear how you get on.

environment.yml: 41 lines changed

@@ -7,44 +7,41 @@ dependencies:
- pip
- python-dotenv
- requests
- beautifulsoup4
- pydub
- numpy
- pandas
- scipy
- pytorch
- jupyterlab
- ipywidgets
- pyarrow
- anthropic
- google-generativeai
- matplotlib
- scikit-learn
- chromadb
- jupyter-dash
- sentencepiece
- pyarrow
- langchain
- langchain-text-splitters
- langchain-openai
- langchain-experimental
- langchain-chroma
- faiss-cpu
- pip:
- beautifulsoup4
- tiktoken
- jupyter-dash
- plotly
- bitsandbytes
- twilio
- duckdb
- feedparser
- pip:
- transformers
- sentence-transformers
- datasets
- accelerate
- sentencepiece
- bitsandbytes
- openai
- anthropic
- google-generativeai
- gradio
- gensim
- modal
- ollama
- psutil
- setuptools
- speedtest-cli
- langchain
- langchain-core
- langchain-text-splitters
- langchain-openai
- langchain-chroma
- langchain-community
- faiss-cpu
- feedparser
- twilio
- pydub

requirements.txt: 11 lines changed

@@ -10,6 +10,9 @@ matplotlib
gensim
torch
transformers
accelerate
sentencepiece
bitsandbytes
tqdm
openai
gradio
@@ -32,11 +35,3 @@ plotly
jupyter-dash
beautifulsoup4
pydub
modal
ollama
accelerate
sentencepiece
bitsandbytes
psutil
setuptools
speedtest-cli

week1/community-contributions/day1-selenium-for-javascript-sites.ipynb: 2 lines changed

@@ -376,7 +376,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
"version": "3.11.4"
}
},
"nbformat": 4,

week1/day1.ipynb: 385 lines changed

@@ -5,9 +5,7 @@
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Instant Gratification\n",
"\n",
"## Your first Frontier LLM Project!\n",
"# Instant Gratification!\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
@@ -15,61 +13,39 @@
"\n",
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
"\n",
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n",
"Before starting, be sure to have followed the instructions in the \"README\" file, including creating your API key with OpenAI and adding it to the `.env` file.\n",
"\n",
"## If you're new to Jupyter Lab\n",
"\n",
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n",
"\n",
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
"\n",
"If you prefer to work in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n",
"If you need to start a 'notebook' again, go to Kernel menu >> Restart kernel. \n",
"\n",
"## If you'd like to brush up your Python\n",
"If you want to become a pro at Jupyter Lab, you can read their tutorial [here](https://jupyterlab.readthedocs.io/en/latest/). But this isn't required for our course; just a good technique for hitting Shift + Return and enjoying the result!\n",
"\n",
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n",
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n",
"If you prefer to work in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n",
"\n",
"## I am here to help\n",
"\n",
"If you have any problems at all, please do reach out. \n",
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!)\n",
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect.\n",
"\n",
"## More troubleshooting\n",
"\n",
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n",
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder for more ideas!\n",
"\n",
"## If this is old hat!\n",
"\n",
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Please read - important note</h2>\n",
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n",
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
"## Business value of these exercises\n",
"\n",
"A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
@@ -81,9 +57,7 @@
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# If you get an error running this cell, then please head over to the troubleshooting notebook!"
"from openai import OpenAI"
]
},
{
@@ -97,18 +71,23 @@
"\n",
"## Troubleshooting if you have problems:\n",
"\n",
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n",
"\n",
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n",
"\n",
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
"1. OpenAI takes a few minutes to register after you set up an account. If you receive an error about being over quota, try waiting a few minutes and try again.\n",
"2. You'll need to set up billing and add the minimum amount of credit at this page [here](https://platform.openai.com/settings/organization/billing/overview). OpenAI requires a minimum of $5 to get started in the U.S. right now - this might be different for your region. You'll only need to use a fraction for this course. In my view, this is well worth the investment for your education and future projects - but if you have any concerns, you can skip this and watch me using OpenAI instead. In week 3 we will start to use free open-source models!\n",
"3. Also, double check you have the right kind of API token with the right permissions. You should find it on [this webpage](https://platform.openai.com/api-keys) and it should show with Permissions of \"All\". If not, try creating another key by:\n",
"- Pressing \"Create new secret key\" on the top right\n",
"- Select **Owned by:** you, **Project:** Default project, **Permissions:** All\n",
"- Click Create secret key, and use that new key in the code and the `.env` file (it might take a few minutes to activate)\n",
"- Do a Kernel >> Restart kernel, and execute the cells in this Jupyter lab starting at the top\n",
"4. As a fallback, replace the line `openai = OpenAI()` with `openai = OpenAI(api_key=\"your-key-here\")` - while it's not recommended to hard code tokens in Jupyter lab, because then you can't share your lab with others, it's a workaround for now\n",
"5. See the [troubleshooting](troubleshooting.ipynb) notebook in this folder for more instructions\n",
"6. Contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
"\n",
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2."
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [],
@@ -116,75 +95,29 @@
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
"metadata": {},
"outputs": [],
"source": [
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n",
"openai = OpenAI()\n",
"\n",
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
"# If it STILL doesn't work (horrors!) then please see the troubleshooting notebook, or try the below line instead:\n",
"# openai = OpenAI(api_key=\"your-key-here-starting-sk-proj-\")"
]
},
{
"cell_type": "markdown",
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91",
"metadata": {},
"source": [
"# Let's make a quick call to a Frontier model to get started, as a preview!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with these messages is this easy:\n",
"\n",
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "2aa190e5-cb31-456a-96cc-db109919cd78",
"metadata": {},
"source": [
"## OK onwards with our first project"
"# See the troubleshooting notebook, or try the line below instead if this gives you any problems:\n",
"# openai = OpenAI(api_key=\"your-key-here\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"class Website:\n",
" \"\"\"\n",
" A utility class to represent a Website that we have scraped\n",
" \"\"\"\n",
" url: str\n",
" title: str\n",
" text: str\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
@@ -201,12 +134,65 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Home - Edward Donner\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"November 13, 2024\n",
"Mastering AI and LLM Engineering – Resources\n",
"October 16, 2024\n",
"From Software Engineer to AI Data Scientist – resources\n",
"August 6, 2024\n",
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
"June 26, 2024\n",
"Choosing the Right LLM: Toolkit and Resources\n",
"Navigation\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n"
]
}
],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"# Let's try one out\n",
"\n",
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.title)\n",
@ -233,7 +219,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
@ -247,7 +233,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
@ -256,23 +242,13 @@
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
" user_prompt += \"The contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
"metadata": {},
"outputs": [],
"source": [
"print(user_prompt_for(ed))"
]
},
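If you'd like to see what `user_prompt_for` produces without scraping a live site, here's a standalone sketch. The `FakeSite` class is purely illustrative (not part of the course code), and the function is re-stated so the snippet runs on its own:

```python
# A tiny stand-in for the Website class, so the prompt builder can be run
# without scraping anything (FakeSite is illustrative, not part of the course code)
class FakeSite:
    def __init__(self, title, text):
        self.title = title
        self.text = text

def user_prompt_for(website):
    # Same shape as the notebook's version: title line, instructions, then the page text
    user_prompt = f"You are looking at a website titled {website.title}"
    user_prompt += "\nThe contents of this website is as follows; \
please provide a short summary of this website in markdown. \
If it includes news or announcements, then summarize these too.\n\n"
    user_prompt += website.text
    return user_prompt

site = FakeSite("Example Site", "Hello world. Announcement: v2 ships today!")
prompt = user_prompt_for(site)
print(prompt.splitlines()[0])  # -> You are looking at a website titled Example Site
```

This is a handy way to sanity-check prompt assembly before spending any API credits.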
{
"cell_type": "markdown",
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
@ -287,48 +263,12 @@
"[\n",
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]\n",
"\n",
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the might GPT (yet!)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47",
"metadata": {},
"source": [
"## And now let's build useful messages for GPT-4o-mini, using a function"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
@ -342,18 +282,6 @@
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c",
"metadata": {},
"outputs": [],
"source": [
"# Try this out, and then try for a few more websites\n",
"\n",
"messages_for(ed)"
]
},
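If it helps to see the message structure concretely, here is a self-contained sketch of the same pattern. The prompts here are placeholders; the notebook's real `messages_for` uses `system_prompt` and `user_prompt_for(website)`:

```python
system_prompt = "You are an assistant that summarizes websites in markdown."  # placeholder

def messages_for(title, page_text):
    # The OpenAI chat API expects a list of {"role": ..., "content": ...} dicts,
    # with the system message first and the user message after it
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Summarize the site titled {title}:\n\n{page_text}"},
    ]

msgs = messages_for("Example Site", "Hello world.")
print([m["role"] for m in msgs])  # -> ['system', 'user']
```

Keeping the message-building logic in a function like this makes it easy to reuse the same structure across different models later in the course.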
{
"cell_type": "markdown",
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
@ -364,7 +292,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
@ -382,17 +310,28 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 11,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"\"# Summary of Edward Donner's Website\\n\\nEdward Donner's website highlights his interests and expertise in programming, experimenting with Large Language Models (LLMs), and electronic music production. He serves as the co-founder and CTO of **Nebula.io**, which focuses on using AI to help individuals discover their potential in the talent acquisition sector. The site also notes his previous role as the founder and CEO of **untapt**, an AI startup acquired in 2021.\\n\\n## Recent Posts\\n- **October 16, 2024:** Resources for transitioning from Software Engineer to AI Data Scientist.\\n- **August 6, 2024:** Announcement of the *Outsmart LLM Arena*, a competitive platform for LLMs.\\n- **June 26, 2024:** Guidance on choosing the right LLM with suggested tools and resources.\\n- **February 7, 2024:** Insights on fine-tuning an LLM to simulate personal writing styles.\\n\\nThe website encourages visitors to connect with Ed for further collaboration or discussions.\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
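To trace the whole `summarize` pipeline without a network call or API key, here's a stubbed sketch. The `FakeClient` mimics only the `.choices[0].message.content` shape of the SDK response and is purely illustrative:

```python
from types import SimpleNamespace

class FakeClient:
    """Stands in for the OpenAI client; returns a canned completion."""
    def create(self, model, messages):
        user_text = messages[-1]["content"]
        reply = f"# Summary\n\nA short summary of: {user_text[:20]}..."
        # Mimic response.choices[0].message.content from the real SDK
        return SimpleNamespace(
            choices=[SimpleNamespace(message=SimpleNamespace(content=reply))]
        )

def summarize(page_text, client):
    # Same flow as the notebook: build messages, call the model, return the text
    messages = [
        {"role": "system", "content": "You summarize websites in markdown."},
        {"role": "user", "content": page_text},
    ]
    response = client.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

print(summarize("Home - Edward Donner ...", FakeClient()).splitlines()[0])  # -> # Summary
```

Stubbing the client like this is also a useful trick for writing unit tests around LLM-calling code.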
{
"cell_type": "code",
"execution_count": null,
"execution_count": 12,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
@ -406,12 +345,54 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 15,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/markdown": [
"# İstanbul Nişantaşı Üniversitesi\n",
"\n",
"İstanbul Nişantaşı Üniversitesi, çeşitli akademik alanlarda eğitim veren bir yükseköğretim kurumudur. Temel misyonu, öğrencilere nitelikli eğitim ve araştırma fırsatları sunmaktır. Üniversitede tıp, mühendislik, sanat ve sosyal bilimler gibi birçok fakülte bulunmaktadır.\n",
"\n",
"## Kurumsal Bilgiler\n",
"- **Misyon/Vizyon**: Nişantaşı Eğitim Vakfı tarafından kurulan üniversite, kaliteli eğitim sunmayı hedeflemektedir.\n",
"- **Yönetim**: Rektör, senato üyeleri ve yönetim kurulu hakkında bilgiler mevcuttur.\n",
"- **Kalite Yönetimi**: Kalite ve yönetişim, sürekli eğitim ve bilimsel faaliyetler ile ilgili koordinatörlük birimleri bulunmaktadır.\n",
"\n",
"## Akademik Yapılar\n",
"- **Fakülteler**: Tıp, Diş Hekimliği, Mühendislik ve Mimarlık, İktisadi İdari ve Sosyal Bilimler, Sanat ve Tasarım, Sağlık Bilimleri.\n",
"- **Yüksekokullar ve Meslek Yüksekokulları**: Spor, Sivil Havacılık, Uygulamalı Bilimler ve Konservatuvar gibi farklı alanlarda yüksekokul programları sunulmaktadır.\n",
"- **Araştırma Merkezleri**: Ağız ve Diş Sağlığı, Finans Ekonomi gibi çeşitli araştırma merkezleri bulunmaktadır.\n",
"\n",
"## Öğrenci Kaynakları\n",
"- Öğrenci kulüpleri, spor faaliyetleri, psikolojik danışmanlık ve sağlık birimi gibi destek hizmetleri mevcut.\n",
"- Akademik takvim, ders programları ve sıkça sorulan sorular gibi öğrenci kaynakları sağlanmaktadır.\n",
"\n",
"## Güncel Haberler\n",
"- **Cumhuriyetin 101. Yılı**: İstanbul Nişantaşı Üniversitesi, ilkokul öğrencilerini ağırlamıştır.\n",
"- **Seminerler**: Ekonomik verilerin analizi ve yapay zeka ile öğrencilik eğitimi üzerine seminerler gerçekleştirilmiştir.\n",
"\n",
"## Etkinlikler ve Duyurular\n",
"- **29 Ekim Kutlamaları** ve **İlk Yardım Semineri** gibi çeşitli etkinlikler düzenlenmektedir.\n",
"- 29 Ekim resmi tatili ve öğretim görevlisi değerlendirme gibi duyurular yapılmıştır.\n",
"\n",
"## Başarılar\n",
"Üniversitenin spor takımları ulusal ve uluslararası düzeyde çeşitli başarılar elde etmiştir, örneğin masa tenisi takımı Avrupa Şampiyonu olmuştur.\n",
"\n",
"İstanbul Nişantaşı Üniversitesi, öğrencilere kapsamlı bir akademik ve sosyal deneyim sunmayı amaçlamaktadır."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://edwarddonner.com\")"
"display_summary(\"https://www.nisantasi.edu.tr/\")"
]
},
{
@ -455,59 +436,11 @@
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business applications</h2>\n",
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
"\n",
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n",
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6",
"metadata": {},
"outputs": [],
"source": [
"# Step 1: Create your prompts\n",
"\n",
"system_prompt = \"something here\"\n",
"user_prompt = \"\"\"\n",
" Lots of text\n",
" Can be pasted here\n",
"\"\"\"\n",
"\n",
"# Step 2: Make the messages list\n",
"\n",
"messages = [] # fill this in\n",
"\n",
"# Step 3: Call OpenAI\n",
"\n",
"response =\n",
"## Business Applications\n",
"\n",
"# Step 4: print the result\n",
"In this exercise, you experienced calling the API of a Frontier Model (a leading model at the frontier of AI) for the first time. This is broadly applicable across Gen AI use cases and we will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
"\n",
"print("
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution."
]
},
{
@ -517,7 +450,7 @@
"source": [
"## An extra exercise for those who enjoy web scraping\n",
"\n",
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)"
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them."
]
},
{
@ -559,7 +492,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
"version": "3.11.4"
}
},
"nbformat": 4,

1513
week1/day5.ipynb

File diff suppressed because it is too large

76
week1/solutions/week1 SOLUTION.ipynb

@ -15,7 +15,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "c1070317-3ed9-4659-abe3-828943230e03",
"metadata": {},
"outputs": [],
@ -25,12 +25,12 @@
"from dotenv import load_dotenv\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI\n",
"import ollama"
"# import ollama"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 3,
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
"metadata": {},
"outputs": [],
@ -43,7 +43,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
"metadata": {},
"outputs": [],
@ -56,7 +56,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
"metadata": {},
"outputs": [],
@ -71,7 +71,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "8595807b-8ae2-4e1b-95d9-e8532142e8bb",
"metadata": {},
"outputs": [],
@ -84,7 +84,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"id": "9605cbb6-3d3f-4969-b420-7f4cae0b9328",
"metadata": {},
"outputs": [],
@ -99,10 +99,66 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"id": "60ce7000-a4a5-4cce-a261-e75ef45063b4",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/markdown": [
"Certainly! Let's break down the code snippet you've provided to understand what it does and why it operates that way.\n",
"\n",
"The code snippet is:\n",
"\n",
"python\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\n",
"\n",
"### Explanation of Components\n",
"\n",
"1. **Set Comprehension**:\n",
" - `{book.get(\"author\") for book in books if book.get(\"author\")}`\n",
" - This is a set comprehension, which is a concise way to create a set in Python.\n",
" - **Iterates**: It iterates over a collection named `books`.\n",
" - **Extraction**: `book.get(\"author\")` attempts to retrieve the value associated with the key `\"author\"` from each `book` dictionary.\n",
" - **Filtering**: The `if book.get(\"author\")` condition filters out any books that do not have an `\"author\"` key or where the value is `None` or an empty string. This means only those books that have a valid author will be included in the set.\n",
"\n",
"2. **Set**:\n",
" - The output of the set comprehension is a set of unique author names from the `books` collection. \n",
" - Sets automatically handle duplicates, so if multiple books have the same author, their name will only appear once in the resulting set.\n",
"\n",
"3. **Yielding with `yield from`**:\n",
" - `yield from` is a syntax used in Python to delegate part of a generator’s operations to another generator.\n",
" - In this context, it means that each element obtained from the set (the unique authors) will be yielded one by one.\n",
"\n",
"### Overall Functionality\n",
"\n",
"- This code essentially extracts all unique authors from the list of `books` (where each `book` is presumably a dictionary containing various attributes) and yields each author one at a time from a generator function.\n",
"\n",
"### Practical Implications\n",
"\n",
"- When this line of code is executed within a generator function, it allows the generator to yield each unique author efficiently, enabling the caller to iterate over them.\n",
"- Utilizing a set ensures that authors are only returned once, even if they appear multiple times across different books.\n",
"\n",
"### Summary\n",
"\n",
"To summarize, the code snippet:\n",
"\n",
"1. Extracts authors from a list of book dictionaries.\n",
"2. Filters out any entries without an author.\n",
"3. Collects unique authors in a set.\n",
"4. Uses `yield from` to yield each author one at a time.\n",
"\n",
"This approach is efficient and concise, leveraging the power of Python's set comprehension and generator functions to handle potentially large data sets with an emphasis on uniqueness and iteration simplicity."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
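The explanation above can be verified with a tiny runnable sketch (the `books` data is made up for illustration):

```python
# Demonstrating: yield from {book.get("author") for book in books if book.get("author")}
books = [
    {"title": "A", "author": "Ed"},
    {"title": "B"},                   # no author key -> filtered out
    {"title": "C", "author": "Ed"},   # duplicate author -> deduplicated by the set
    {"title": "D", "author": "Grace"},
]

def unique_authors(books):
    # The set comprehension collects unique, non-empty authors;
    # yield from then emits them one at a time from this generator
    yield from {book.get("author") for book in books if book.get("author")}

print(sorted(unique_authors(books)))  # -> ['Ed', 'Grace']
```

Note that sets are unordered, so `sorted()` is used here purely to make the output deterministic.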
"source": [
"# Get gpt-4o-mini to answer, with streaming\n",
"\n",
@ -154,7 +210,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "llms",
"language": "python",
"name": "python3"
},
