diff --git a/README.md b/README.md
index a712e19..91a8a95 100644
--- a/README.md
+++ b/README.md
@@ -6,16 +6,19 @@
I'm so happy you're joining me on this path. We'll be building immensely satisfying projects in the coming weeks. Some will be easy, some will be challenging, many will ASTOUND you! The projects build on each other so you develop deeper and deeper expertise each week. One thing's for sure: you're going to have a lot of fun along the way.
-### A note before you begin
+### Before you begin
I'm here to help you be most successful with your learning! If you hit any snafus, or if you have any ideas on how I can improve the course, please do reach out in the platform or by emailing me direct (ed@edwarddonner.com). It's always great to connect with people on LinkedIn to build up the community - you'll find me here:
-https://www.linkedin.com/in/eddonner/
+https://www.linkedin.com/in/eddonner/
+And this is new to me, but I'm also trying out X/Twitter at [@edwarddonner](https://x.com/edwarddonner) - if you're on X, please show me how it's done 😂
Resources to accompany the course, including the slides and useful links, are here:
https://edwarddonner.com/2024/11/13/llm-engineering-resources/
## Instant Gratification instructions for Week 1, Day 1
+### Important note: see my warning about Llama3.3 below - it's too large for home computers! Stick with llama3.2! Several students have missed this warning...
+
We will start the course by installing Ollama so you can see results immediately!
1. Download and install Ollama from https://ollama.com noting that on a PC you might need to have administrator permissions for the install to work properly
2. On a PC, start a Command prompt / Powershell (Press Win + R, type `cmd`, and press Enter). On a Mac, start a Terminal (Applications > Utilities > Terminal).
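Once Ollama is installed and you've pulled a model (e.g. `ollama run llama3.2`), here's a minimal sketch (my own, not part of the course code, and assuming the `requests` package is available) to confirm the local Ollama server is answering on its default port:

```python
# Ask the local Ollama server (default port 11434) for a quick reply from llama3.2
import requests

payload = {
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
    "stream": False,
}
response = requests.post("http://localhost:11434/api/chat", json=payload)
print(response.json()["message"]["content"])
```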
diff --git a/SETUP-PC.md b/SETUP-PC.md
index e87d0ff..1261a5e 100644
--- a/SETUP-PC.md
+++ b/SETUP-PC.md
@@ -13,6 +13,17 @@ I use a platform called Anaconda to set up your environment. It's a powerful too
Having said that: if you have any problems with Anaconda, I've provided an alternative approach. It's faster and simpler and should have you running quickly, with less of a guarantee around compatibility.
+### Before we begin - Heads up!
+
+If you are relatively new to using the Command Prompt, here is an excellent [guide](https://chatgpt.com/share/67b0acea-ba38-8012-9c34-7a2541052665) with instructions and exercises. I'd suggest you work through this first to build some confidence.
+
+There are 4 common gotchas to be aware of when developing on Windows:
+
+1. Permissions. Please take a look at this [tutorial](https://chatgpt.com/share/67b0ae58-d1a8-8012-82ca-74762b0408b0) on permissions on Windows
+2. Anti-virus, Firewall, VPN. These can interfere with installations and network access; try temporarily disabling them as needed
+3. The evil Windows 260 character limit to filenames - here is a full [explanation and fix](https://chatgpt.com/share/67b0afb9-1b60-8012-a9f7-f968a5a910c7)!
+4. If you've not worked with Data Science packages on your computer before, you might need to install Microsoft Build Tools. Here are [instructions](https://chatgpt.com/share/67b0b762-327c-8012-b809-b4ec3b9e7be0).
+
### Part 1: Clone the Repo
This gets you a local copy of the code on your box.
@@ -77,9 +88,10 @@ You should see `(llms)` in your prompt, which indicates you've activated your ne
Press Win + R, type `cmd`, and press Enter
-Run `python --version` to find out which python you're on. Ideally you'd be using a version of Python 3.11, so we're completely in sync.
-If not, it's not a big deal, but we might need to come back to this later if you have compatibility issues.
-You can download python here:
+Run `python --version` to find out which Python you're on.
+Ideally you'd be using a version of Python 3.11, so we're completely in sync.
+I believe Python 3.12 works also, but (as of Feb 2025) Python 3.13 does **not** yet work, as several Data Science dependencies are not yet ready for Python 3.13.
+If you need to install Python, or switch to another version, you can download it here (there's also a quick version check sketched just after these steps):
https://www.python.org/downloads/
2. Navigate to the "project root directory" by entering something like `cd C:\Users\YourUsername\Documents\Projects\llm_engineering` using the actual path to your llm_engineering project root directory. Do a `dir` and check you can see subdirectories for each week of the course.
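As a companion to the Python version note in step 1, here's a tiny check (my own sketch, not part of the setup scripts) that you can paste into a `python` session to confirm you're on a supported version:

```python
# Confirm this interpreter is a version the course dependencies support (as of Feb 2025)
import sys

major, minor = sys.version_info[:2]
if (major, minor) in ((3, 11), (3, 12)):
    print(f"Python {major}.{minor} - in sync with the course")
else:
    print(f"Python {major}.{minor} - consider installing Python 3.11 or 3.12 from python.org")
```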
diff --git a/SETUP-PC.pdf b/SETUP-PC.pdf
index 4a23be1..080346f 100644
Binary files a/SETUP-PC.pdf and b/SETUP-PC.pdf differ
diff --git a/SETUP-mac.md b/SETUP-mac.md
index a97d700..e78d73e 100644
--- a/SETUP-mac.md
+++ b/SETUP-mac.md
@@ -13,6 +13,14 @@ I use a platform called Anaconda to set up your environment. It's a powerful too
Having said that: if you have any problems with Anaconda, I've provided an alternative approach. It's faster and simpler and should have you running quickly, with less of a guarantee around compatibility.
+### Before we begin
+
+If you're less familiar with using the Terminal, please review this excellent [guide](https://chatgpt.com/canvas/shared/67b0b10c93a081918210723867525d2b) for some details and exercises.
+
+If you're new to developing on your Mac, you may need to install XCode developer tools. Here are [instructions](https://chatgpt.com/share/67b0b8d7-8eec-8012-9a37-6973b9db11f5).
+
+One "gotcha" to keep in mind: if you run anti-virus software, a VPN or a Firewall, it might interfere with installations or network access. Please temporarily disable it if you have problems.
+
### Part 1: Clone the Repo
This gets you a local copy of the code on your box.
@@ -49,10 +57,11 @@ If this Part 2 gives you any problems, there is an alternative Part 2B below tha
- Download Anaconda from https://docs.anaconda.com/anaconda/install/mac-os/
- Double-click the downloaded file and follow the installation prompts. Note that it takes up several GB and takes a while to install, but it will be a powerful platform for you to use in the future.
+- After installing, you'll need to open a fresh, new Terminal to be able to use it (and you might even need to restart).
2. **Set up the environment:**
-- Open a new Terminal (Applications > Utilities > Terminal)
+- Open a **new** Terminal (Applications > Utilities > Terminal)
- Navigate to the "project root directory" using `cd ~/Documents/Projects/llm_engineering` (replace this path as needed with the actual path to the llm_engineering directory, your locally cloned version of the repo). Do `ls` and check you can see subdirectories for each week of the course.
- Create the environment: `conda env create -f environment.yml`
- Wait for a few minutes for all packages to be installed - in some cases, this can literally take 20-30 minutes if you've not used Anaconda before, and even longer depending on your internet connection. Important stuff is happening! If this runs for more than 1 hour 15 mins, or gives you other problems, please go to Part 2B instead.
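Once the environment is created and activated, here's a small sketch (my own, not part of the setup) that you can run inside `python` or a notebook to double-check that the `llms` environment is the interpreter actually in use:

```python
# Report which interpreter and conda environment are active
import os
import sys

print("Interpreter:", sys.executable)
print("Conda env  :", os.environ.get("CONDA_DEFAULT_ENV", "not set - is your environment activated?"))
```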
@@ -70,9 +79,10 @@ You should see `(llms)` in your prompt, which indicates you've activated your ne
1. **Open a new Terminal** (Applications > Utilities > Terminal)
-Run `python --version` to find out which python you're on. Ideally you'd be using a version of Python 3.11, so we're completely in sync.
-If not, it's not a big deal, but we might need to come back to this later if you have compatibility issues.
-You can download python here:
+Run `python --version` to find out which Python you're on.
+Ideally you'd be using a version of Python 3.11, so we're completely in sync.
+I believe Python 3.12 works also, but (as of Feb 2025) Python 3.13 does **not** yet work, as several Data Science dependencies are not yet ready for Python 3.13.
+If you need to install Python, or switch to another version, you can download it here:
https://www.python.org/downloads/
2. Navigate to the "project root directory" using `cd ~/Documents/Projects/llm_engineering` (replace this path with the actual path to the llm_engineering directory, your locally cloned version of the repo). Do `ls` and check you can see subdirectories for each week of the course.
diff --git a/SETUP-mac.pdf b/SETUP-mac.pdf
index 82d7a08..a9bc223 100644
Binary files a/SETUP-mac.pdf and b/SETUP-mac.pdf differ
diff --git a/week1/Intermediate Python.ipynb b/week1/Intermediate Python.ipynb
index 3cb4d4a..aaaa0e3 100644
--- a/week1/Intermediate Python.ipynb
+++ b/week1/Intermediate Python.ipynb
@@ -50,6 +50,22 @@
"https://chatgpt.com/share/673b553e-9d0c-8012-9919-f3bb5aa23e31"
]
},
+ {
+ "cell_type": "markdown",
+ "id": "f9e0f8e1-09b3-478b-ada7-c8c35003929b",
+ "metadata": {},
+ "source": [
+ "## With this in mind - understanding NameErrors in Python\n",
+ "\n",
+ "It's quite common to hit a NameError in python. With foundational knowledge, you should always feel equipped to debug a NameError and get to the bottom of it.\n",
+ "\n",
+ "If you're unsure how to fix a NameError, please see this [initial guide](https://chatgpt.com/share/67958312-ada0-8012-a1d3-62b3a5fcbbfc) and this [second guide with exercises](https://chatgpt.com/share/67a57e0b-0194-8012-bb50-8ea76c5995b8), and work through them both until you have high confidence.\n",
+ "\n",
+ "There's some repetition here, so feel free to skip it if you're already confident.\n",
+ "\n",
+ "## And now, on to the code!"
+ ]
+ },
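To make the NameError idea concrete, here's a hypothetical three-liner (not from the notebook) showing the most common cause - using a name before it exists, often because of a typo:

```python
# print(greetnig)   # would raise: NameError: name 'greetnig' is not defined (a typo)
greeting = "hello"
print(greeting)     # defining the name first (and spelling it correctly) resolves it
```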
{
"cell_type": "code",
"execution_count": null,
@@ -57,7 +73,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# Next let's create some things:\n",
+ "# First let's create some things:\n",
"\n",
"fruits = [\"Apples\", \"Bananas\", \"Pears\"]\n",
"\n",
diff --git a/week1/day1.ipynb b/week1/day1.ipynb
index 74cc2ce..6395917 100644
--- a/week1/day1.ipynb
+++ b/week1/day1.ipynb
@@ -5,9 +5,10 @@
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
- "# Instant Gratification\n",
+ "# YOUR FIRST LAB\n",
+ "## Please read this. This is super-critical to get you prepared; there's no fluff here!\n",
"\n",
- "## Your first Frontier LLM Project!\n",
+ "## Your first Frontier LLM Project\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
@@ -23,6 +24,11 @@
"\n",
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
"\n",
+ "## If you're new to the Command Line\n",
+ "\n",
+ "Please see these excellent guides: [Command line on PC](https://chatgpt.com/share/67b0acea-ba38-8012-9c34-7a2541052665) and [Command line on Mac](https://chatgpt.com/canvas/shared/67b0b10c93a081918210723867525d2b). \n",
+ "Linux people, something tells me you could teach _me_ a thing or two about the command line!\n",
+ "\n",
"## If you'd prefer to work in IDEs\n",
"\n",
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n",
@@ -36,7 +42,8 @@
"## I am here to help\n",
"\n",
"If you have any problems at all, please do reach out. \n",
- "I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!)\n",
+ "I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!) \n",
+    "And this is new to me, but I'm also trying out X/Twitter at [@edwarddonner](https://x.com/edwarddonner) - if you're on X, please show me how it's done 😂 \n",
"\n",
"## More troubleshooting\n",
"\n",
@@ -53,7 +60,19 @@
" \n",
"
\n",
" Please read - important note\n",
- " The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n",
+ " The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, after watching the lecture. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n",
+ " | \n",
+ " \n",
+ "\n",
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ " | \n",
+ " \n",
+ " Treat these labs as a resource\n",
+ " I push updates to the code regularly. When people ask questions or have problems, I incorporate it in the code, adding more examples or improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here; but in addition, I've added more steps and better explanations, and occasionally added new models like DeepSeek. Consider this like an interactive book that accompanies the lectures.\n",
+ " \n",
" | \n",
"
\n",
"
\n",
@@ -736,6 +755,14 @@
"Here are good instructions courtesy of an AI friend: \n",
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293"
]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f4484fcf-8b39-4c3f-9674-37970ed71988",
+ "metadata": {},
+ "outputs": [],
+ "source": []
}
],
"metadata": {
diff --git a/week1/day5.ipynb b/week1/day5.ipynb
index f39a4b2..2d02cdf 100644
--- a/week1/day5.ipynb
+++ b/week1/day5.ipynb
@@ -411,7 +411,7 @@
"\n",
"This is perhaps the first example of Agentic AI design patterns, as we combined multiple calls to LLMs. This will feature more in Week 2, and then we will return to Agentic AI in a big way in Week 8 when we build a fully autonomous Agent solution.\n",
"\n",
- "Generating content in this way is one of the very most common Use Cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype.\n",
+    "Generating content in this way is one of the most common use cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype. See what other students have done in the community-contributions folder -- so many valuable projects -- it's wild!\n",
" \n",
" \n",
""
@@ -446,9 +446,10 @@
"
\n",
" \n",
" \n",
- " A reminder on 2 useful resources\n",
+ " A reminder on 3 useful resources\n",
" 1. The resources for the course are available here. \n",
- " 2. I'm on LinkedIn here and I love connecting with people taking the course!\n",
+ " 2. I'm on LinkedIn here and I love connecting with people taking the course! \n",
+ " 3. I'm trying out X/Twitter and I'm at @edwarddonner and hoping people will teach me how it's done.. \n",
" \n",
" | \n",
" \n",
diff --git a/week1/troubleshooting.ipynb b/week1/troubleshooting.ipynb
index 03032fc..c9dfe43 100644
--- a/week1/troubleshooting.ipynb
+++ b/week1/troubleshooting.ipynb
@@ -57,6 +57,25 @@
" print(f\"Failed to connect with this error: {e}\")"
]
},
+ {
+ "cell_type": "markdown",
+ "id": "d91da3b2-5a41-4233-9ed6-c53a7661b328",
+ "metadata": {},
+ "source": [
+ "## Another mention of occasional \"gotchas\" for PC people\n",
+ "\n",
+ "There are 4 snafus on Windows to be aware of: \n",
+ "1. Permissions. Please take a look at this [tutorial](https://chatgpt.com/share/67b0ae58-d1a8-8012-82ca-74762b0408b0) on permissions on Windows\n",
+ "2. Anti-virus, Firewall, VPN. These can interfere with installations and network access; try temporarily disabling them as needed\n",
+ "3. The evil Windows 260 character limit to filenames - here is a full [explanation and fix](https://chatgpt.com/share/67b0afb9-1b60-8012-a9f7-f968a5a910c7)!\n",
+ "4. If you've not worked with Data Science packages on your computer before, you might need to install Microsoft Build Tools. Here are [instructions](https://chatgpt.com/share/67b0b762-327c-8012-b809-b4ec3b9e7be0).\n",
+ "\n",
+ "## And for Mac people\n",
+ "\n",
+ "1. If you're new to developing on your Mac, you may need to install XCode developer tools. Here are [instructions](https://chatgpt.com/share/67b0b8d7-8eec-8012-9a37-6973b9db11f5).\n",
+ "2. As with PC people, Anti-virus, Firewall, VPN can be problematic. These can interfere with installations and network access; try temporarily disabling them as needed"
+ ]
+ },
{
"cell_type": "markdown",
"id": "f5190688-205a-46d1-a0dc-9136a42ad0db",
@@ -64,7 +83,7 @@
"source": [
"# Step 1\n",
"\n",
- "Try running the next 2 cells (click in the cell under this one and hit shift+return, then shift+return again).\n",
+ "Try running the next cell (click in the cell under this one and hit shift+return).\n",
"\n",
"If this gives an error, then you're likely not running in an \"activated\" environment. Please check back in Part 5 of the SETUP guide for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) for setting up the Anaconda (or virtualenv) environment and activating it, before running `jupyter lab`.\n",
"\n",
@@ -113,6 +132,36 @@
" print(\"If in doubt, close down all jupyter lab, and follow Part 5 in the SETUP-PC or SETUP-mac guide.\")"
]
},
+ {
+ "cell_type": "markdown",
+ "id": "45e2cc99-b7d3-48bd-b27c-910206c4171a",
+ "metadata": {},
+ "source": [
+ "# Step 1.1\n",
+ "\n",
+ "## It's time to check that the environment is good and dependencies are installed\n",
+ "\n",
+ "And now, this next cell should run with no output - no import errors. \n",
+ "\n",
+    "Import errors might indicate that you started jupyter lab without your environment activated - see SETUP Part 5. \n",
+ "\n",
+ "Or you might need to restart your Kernel and Jupyter Lab. \n",
+ "\n",
+ "Or it's possible that something is wrong with Anaconda. \n",
+ "If so, here are some recovery instructions: \n",
+ "First, close everything down and restart your computer. \n",
+    "Then in an Anaconda Prompt (PC) or Terminal (Mac), with your environment activated so that **(llms)** shows in the prompt, and from the llm_engineering directory, run this: \n",
+ "`python -m pip install --upgrade pip` \n",
+ "`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt` \n",
+ "Watch carefully for any errors, and let me know. \n",
+ "If you see instructions to install Microsoft Build Tools, or Apple XCode tools, then follow the instructions. \n",
+ "Then try again!\n",
+ "\n",
+ "Finally, if that doesn't work, please try SETUP Part 2B, the alternative to Part 2 (with Python 3.11 or Python 3.12). \n",
+ "\n",
+ "If you're unsure, please run the diagnostics (last cell in this notebook) and then email me at ed@edwarddonner.com"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
@@ -120,11 +169,7 @@
"metadata": {},
"outputs": [],
"source": [
- "# And now, this should run with no output - no import errors.\n",
- "# Import errors might indicate that you started jupyter lab without your environment activated? See SETUP part 5.\n",
- "# Or you might need to restart your Kernel and Jupyter Lab.\n",
- "# Or it's possible that something is wrong with Anaconda. Please try SETUP Part 2B, the alternative to Part 2.\n",
- "# If you're unsure, please run the diagnostics (last cell in this notebook) and then email me at ed@edwarddonner.com\n",
+ "# This import should work if your environment is active and dependencies are installed!\n",
"\n",
"from openai import OpenAI"
]
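If that import fails, here's a small diagnostic sketch (my own, separate from the diagnostics cell at the end of this notebook) that prints which interpreter is running and whether a few key packages resolve in the active environment:

```python
# Report the active interpreter and the installed versions of a few course dependencies
import sys
from importlib.metadata import version, PackageNotFoundError

print("Python executable:", sys.executable)
for package in ("openai", "python-dotenv", "jupyterlab"):
    try:
        print(f"{package}: {version(package)}")
    except PackageNotFoundError:
        print(f"{package}: NOT INSTALLED in this environment")
```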
@@ -175,11 +220,14 @@
"\n",
" key_exists = any(line.startswith(\"OPENAI_API_KEY=\") for line in contents)\n",
" good_key = any(line.startswith(\"OPENAI_API_KEY=sk-proj-\") for line in contents)\n",
+ " classic_problem = any(\"OPEN_\" in line for line in contents)\n",
" \n",
" if key_exists and good_key:\n",
" print(\"SUCCESS! OPENAI_API_KEY found and it has the right prefix\")\n",
" elif key_exists:\n",
" print(\"Found an OPENAI_API_KEY although it didn't have the expected prefix sk-proj- \\nPlease double check your key in the file..\")\n",
+ " elif classic_problem:\n",
+ " print(\"Didn't find an OPENAI_API_KEY, but I notice that 'OPEN_' appears - do you have a typo like OPEN_API_KEY instead of OPENAI_API_KEY?\")\n",
" else:\n",
" print(\"Didn't find an OPENAI_API_KEY in the .env file\")\n",
"else:\n",
@@ -365,6 +413,11 @@
"It's unlikely, but if there's something wrong with your key, you could also try creating a new key (button on the top right) here: \n",
"https://platform.openai.com/api-keys\n",
"\n",
+ "### Check that you can use gpt-4o-mini from the OpenAI playground\n",
+ "\n",
+    "To confirm that billing is set up and your key is good, you could try using gpt-4o-mini directly: \n",
+ "https://platform.openai.com/playground/chat?models=gpt-4o-mini\n",
+ "\n",
"### If there's a cert related error\n",
"\n",
"If you encountered a certificates error like: \n",
@@ -380,7 +433,9 @@
"\n",
"(1) Try pasting your error into ChatGPT or Claude! It's amazing how often they can figure things out\n",
"\n",
- "(2) Contact me! Please run the diagnostics in the cell below, then email me your problems to ed@edwarddonner.com\n",
+ "(2) Try creating another key and replacing it in the .env file and rerunning!\n",
+ "\n",
+ "(3) Contact me! Please run the diagnostics in the cell below, then email me your problems to ed@edwarddonner.com\n",
"\n",
"Thanks so much, and I'm sorry this is giving you bother!"
]
diff --git a/week2/day2.ipynb b/week2/day2.ipynb
index 133ca0f..c2a8084 100644
--- a/week2/day2.ipynb
+++ b/week2/day2.ipynb
@@ -53,7 +53,7 @@
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
- "load_dotenv()\n",
+ "load_dotenv(override=True)\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
diff --git a/week2/day3.ipynb b/week2/day3.ipynb
index bad0605..c3e13a1 100644
--- a/week2/day3.ipynb
+++ b/week2/day3.ipynb
@@ -33,7 +33,7 @@
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
- "load_dotenv()\n",
+ "load_dotenv(override=True)\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
diff --git a/week2/day4.ipynb b/week2/day4.ipynb
index bc3f0df..1a8d72f 100644
--- a/week2/day4.ipynb
+++ b/week2/day4.ipynb
@@ -35,7 +35,7 @@
"source": [
"# Initialization\n",
"\n",
- "load_dotenv()\n",
+ "load_dotenv(override=True)\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
diff --git a/week2/day5.ipynb b/week2/day5.ipynb
index df71896..70cee59 100644
--- a/week2/day5.ipynb
+++ b/week2/day5.ipynb
@@ -35,7 +35,7 @@
"source": [
"# Initialization\n",
"\n",
- "load_dotenv()\n",
+ "load_dotenv(override=True)\n",
"\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"if openai_api_key:\n",
diff --git a/week3/community-contributions/dataset_generator.ipynb b/week3/community-contributions/dataset_generator.ipynb
index 0802303..095d836 100644
--- a/week3/community-contributions/dataset_generator.ipynb
+++ b/week3/community-contributions/dataset_generator.ipynb
@@ -22,7 +22,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": null,
"metadata": {
"id": "-Apd7-p-hyLk"
},
@@ -84,7 +84,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": null,
"metadata": {
"id": "WjxNWW6bvdgj"
},
@@ -105,7 +105,7 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": null,
"metadata": {
"id": "ZvljDKdji8iV"
},
@@ -161,7 +161,7 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": null,
"metadata": {
"id": "JAdfqYXnvEDE"
},
@@ -196,7 +196,7 @@
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": null,
"metadata": {
"id": "xy2RP5T-vxXg"
},
diff --git a/week5/day4.5.ipynb b/week5/day4.5.ipynb
index 9027a28..a02b9cd 100644
--- a/week5/day4.5.ipynb
+++ b/week5/day4.5.ipynb
@@ -14,7 +14,7 @@
},
{
"cell_type": "code",
- "execution_count": 1,
+ "execution_count": null,
"id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77",
"metadata": {},
"outputs": [],
@@ -29,7 +29,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": null,
"id": "802137aa-8a74-45e0-a487-d1974927d7ca",
"metadata": {},
"outputs": [],
@@ -51,7 +51,7 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": null,
"id": "58c85082-e417-4708-9efe-81a5d55d1424",
"metadata": {},
"outputs": [],
@@ -64,7 +64,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": null,
"id": "ee78efcb-60fe-449e-a944-40bab26261af",
"metadata": {},
"outputs": [],
@@ -77,7 +77,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": null,
"id": "730711a9-6ffe-4eee-8f48-d6cfb7314905",
"metadata": {},
"outputs": [],
@@ -104,18 +104,10 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": null,
"id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a",
"metadata": {},
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "Created a chunk of size 1088, which is longer than the specified 1000\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
"chunks = text_splitter.split_documents(documents)"
@@ -123,39 +115,20 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": null,
"id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "123"
- ]
- },
- "execution_count": 7,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
+ "outputs": [],
"source": [
"len(chunks)"
]
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": null,
"id": "2c54b4b6-06da-463d-bee7-4dd456c2b887",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Document types found: company, employees, contracts, products\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n",
"print(f\"Document types found: {', '.join(doc_types)}\")"
@@ -184,18 +157,10 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": null,
"id": "78998399-ac17-4e28-b15f-0b5f51e6ee23",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "There are 123 vectors with 1,536 dimensions in the vector store\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"# Put the chunks of data into a Vector Store that associates a Vector Embedding with each chunk\n",
"# Chroma is a popular open source Vector Database based on SQLLite\n",
diff --git a/week8/day4.ipynb b/week8/day4.ipynb
index 4cd5d8f..6895681 100644
--- a/week8/day4.ipynb
+++ b/week8/day4.ipynb
@@ -67,12 +67,23 @@
]
},
{
- "cell_type": "code",
- "execution_count": null,
- "id": "0056a02f-06a3-4acc-99f3-cbe919ee936b",
+ "cell_type": "markdown",
+ "id": "7f2781ad-e122-4570-8fad-a2fe6452414e",
"metadata": {},
- "outputs": [],
- "source": []
+ "source": [
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ " | \n",
+ " \n",
+ " Additional resource: more sophisticated planning agent\n",
+ " The Planning Agent that we use in the next cell is simply a python script that calls the other Agents; frankly that's all we require for this project. But if you're intrigued to see a more Autonomous version in which we give the Planning Agent tools and allow it to decide which Agents to call, see my implementation of AutonomousPlanningAgent in my related repo, Agentic. This is an example with multiple tools that dynamically decides which function to call.\n",
+ " \n",
+ " | \n",
+ "
\n",
+ "
"
+ ]
},
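To make the distinction concrete, here's a purely illustrative sketch of the "planner as a plain Python script" pattern described above - the agent names and methods are hypothetical, not the course's actual classes:

```python
# The planner coordinates the other agents with ordinary control flow - no LLM tool-calling needed
class ScannerAgent:
    def scan(self):
        return ["deal 1", "deal 2"]

class PricerAgent:
    def price(self, deal):
        return {"deal": deal, "estimate": 99.0}

class PlanningAgent:
    def __init__(self):
        self.scanner = ScannerAgent()
        self.pricer = PricerAgent()

    def plan(self):
        deals = self.scanner.scan()
        return [self.pricer.price(deal) for deal in deals]

if __name__ == "__main__":
    print(PlanningAgent().plan())
```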
{
"cell_type": "code",
diff --git a/week8/day5.ipynb b/week8/day5.ipynb
index 22edec5..d9c0513 100644
--- a/week8/day5.ipynb
+++ b/week8/day5.ipynb
@@ -169,7 +169,7 @@
" \n",
" CONGRATULATIONS AND THANK YOU!!!\n",
" \n",
- " It's so fabulous that you've made it to the end! My heartiest congratulations. Please stay in touch! I'm here on LinkedIn if we're not already connected. And my editor would be cross with me if I didn't mention one more time: it makes a HUGE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others.
Thanks once again for working all the way through the course, and I'm excited to hear all about your career as an LLM Engineer.\n",
+ " It's so fabulous that you've made it to the end! My heartiest congratulations. Please stay in touch! I'm here on LinkedIn if we're not already connected and I'm on X at @edwarddonner. And my editor would be cross with me if I didn't mention one more time: it makes a HUGE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others.
Massive thanks again for putting up with me for 8 weeks and getting all the way to the final cell! I'm excited to hear all about your career as an LLM Engineer. You could not have picked a better time to be in this field.\n",
" \n",
" | \n",
" \n",