diff --git a/README.md b/README.md index a712e19..408885f 100644 --- a/README.md +++ b/README.md @@ -9,7 +9,8 @@ I'm so happy you're joining me on this path. We'll be building immensely satisfy ### A note before you begin I'm here to help you be most successful with your learning! If you hit any snafus, or if you have any ideas on how I can improve the course, please do reach out in the platform or by emailing me direct (ed@edwarddonner.com). It's always great to connect with people on LinkedIn to build up the community - you'll find me here: -https://www.linkedin.com/in/eddonner/ +https://www.linkedin.com/in/eddonner/ +And this is new to me, but I'm also trying out X/Twitter at [@edwarddonner](https://x.com/edwarddonner) - if you're on X, please show me how it's done 😂 Resources to accompany the course, including the slides and useful links, are here: https://edwarddonner.com/2024/11/13/llm-engineering-resources/ diff --git a/SETUP-PC.md b/SETUP-PC.md index e87d0ff..b0534dc 100644 --- a/SETUP-PC.md +++ b/SETUP-PC.md @@ -77,9 +77,10 @@ You should see `(llms)` in your prompt, which indicates you've activated your ne Press Win + R, type `cmd`, and press Enter -Run `python --version` to find out which python you're on. Ideally you'd be using a version of Python 3.11, so we're completely in sync. -If not, it's not a big deal, but we might need to come back to this later if you have compatibility issues. -You can download python here: +Run `python --version` to find out which python you're on. +Ideally you'd be using a version of Python 3.11, so we're completely in sync. +I believe Python 3.12 works also, but (as of Feb 2025) Python 3.13 does **not** yet work as several Data Science dependencies are not yet ready for Python 3.13. +If you need to install Python or install another version, you can download it here: https://www.python.org/downloads/ 2. 
Navigate to the "project root directory" by entering something like `cd C:\Users\YourUsername\Documents\Projects\llm_engineering` using the actual path to your llm_engineering project root directory. Do a `dir` and check you can see subdirectories for each week of the course. diff --git a/SETUP-mac.md b/SETUP-mac.md index a97d700..d72c312 100644 --- a/SETUP-mac.md +++ b/SETUP-mac.md @@ -70,9 +70,10 @@ You should see `(llms)` in your prompt, which indicates you've activated your ne 1. **Open a new Terminal** (Applications > Utilities > Terminal) -Run `python --version` to find out which python you're on. Ideally you'd be using a version of Python 3.11, so we're completely in sync. -If not, it's not a big deal, but we might need to come back to this later if you have compatibility issues. -You can download python here: +Run `python --version` to find out which python you're on. +Ideally you'd be using a version of Python 3.11, so we're completely in sync. +I believe Python 3.12 works also, but (as of Feb 2025) Python 3.13 does **not** yet work as several Data Science dependencies are not yet ready for Python 3.13. +If you need to install Python or install another version, you can download it here: https://www.python.org/downloads/ 2. Navigate to the "project root directory" using `cd ~/Documents/Projects/llm_engineering` (replace this path with the actual path to the llm_engineering directory, your locally cloned version of the repo). Do `ls` and check you can see subdirectories for each week of the course. diff --git a/week1/day1.ipynb b/week1/day1.ipynb index d1823b1..ce72b0f 100644 --- a/week1/day1.ipynb +++ b/week1/day1.ipynb @@ -36,7 +36,8 @@ "## I am here to help\n", "\n", "If you have any problems at all, please do reach out. 
\n", - "I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!)\n", + "I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!) \n", + "And this is new to me, but I'm also trying out X/Twitter at [@edwarddonner](https://x.com/edwarddonner) - if you're on X, please show me how it's done 😂 \n", "\n", "## More troubleshooting\n", "\n", @@ -53,7 +54,19 @@ " \n", " \n", " 

Please read - important note

\n", - " The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n", + " The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, after watching the lecture. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n", + " \n", + " \n", + "\n", + "\n", + " \n", + " \n", + " \n", " \n", "
\n", + " \n", + " \n", + "

Treat these labs as a resource

\n", + " I push updates to the code regularly. When people ask questions or have problems, I incorporate the answers into the code, adding more examples or improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here; but in addition, I've added more steps and better explanations. Consider these labs like an interactive book that accompanies the lectures.\n", + " 
\n", @@ -146,6 +159,21 @@ "# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions" ] }, + { + "cell_type": "code", + "execution_count": null, + "id": "36647a1c-9970-4486-81e0-7c41e81e18cf", + "metadata": {}, + "outputs": [], + "source": [ + "# Never hard-code a real API key in a notebook - load it from your .env file instead\n", + "api_key = os.getenv(\"OPENAI_API_KEY\")\n", + "openai = OpenAI(api_key=api_key)\n", + "message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n", + "response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n", + "print(response.choices[0].message.content)" ] }, { "cell_type": "markdown", "id": "442fc84b-0815-4f40-99ab-d9a5da6bda91", @@ -541,6 +569,14 @@ "Here are good instructions courtesy of an AI friend: \n", "https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293" ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "f4484fcf-8b39-4c3f-9674-37970ed71988", + "metadata": {}, + "outputs": [], + "source": [] } ], "metadata": { diff --git a/week1/day5.ipynb b/week1/day5.ipynb index f39a4b2..2d02cdf 100644 --- a/week1/day5.ipynb +++ b/week1/day5.ipynb @@ -411,7 +411,7 @@ "\n", "This is perhaps the first example of Agentic AI design patterns, as we combined multiple calls to LLMs. This will feature more in Week 2, and then we will return to Agentic AI in a big way in Week 8 when we build a fully autonomous Agent solution.\n", "\n", - "Generating content in this way is one of the very most common Use Cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. 
Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype.\n", + "Generating content in this way is one of the most common use cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype. See what other students have done in the community-contributions folder -- so many valuable projects -- it's wild!\n", " \n", " \n", "" @@ -446,9 +446,10 @@ " \n", " \n", " \n", - "

A reminder on 2 useful resources

\n", + "

A reminder on 3 useful resources

\n", " 1. The resources for the course are available here.
\n", - " 2. I'm on LinkedIn here and I love connecting with people taking the course!\n", + " 2. I'm on LinkedIn here and I love connecting with people taking the course!
\n", + " 3. I'm trying out X/Twitter and I'm at @edwarddonner and hoping people will teach me how it's done.. \n", "
\n", " \n", " \n", diff --git a/week1/troubleshooting.ipynb b/week1/troubleshooting.ipynb index 03032fc..40db0f1 100644 --- a/week1/troubleshooting.ipynb +++ b/week1/troubleshooting.ipynb @@ -113,6 +113,36 @@ " print(\"If in doubt, close down all jupyter lab, and follow Part 5 in the SETUP-PC or SETUP-mac guide.\")" ] }, + { + "cell_type": "markdown", + "id": "45e2cc99-b7d3-48bd-b27c-910206c4171a", + "metadata": {}, + "source": [ + "# Step 1.1\n", + "\n", + "## It's time to check that the environment is good and dependencies are installed\n", + "\n", + "And now, this next cell should run with no output - no import errors. \n", + "\n", + "Import errors might indicate that you started jupyter lab without your environment activated - see SETUP Part 5. \n", + "\n", + "Or you might need to restart your Kernel and Jupyter Lab. \n", + "\n", + "Or it's possible that something is wrong with Anaconda. \n", + "If so, here are some recovery instructions: \n", + "First, close everything down and restart your computer. \n", + "Then in an Anaconda Prompt (PC) or Terminal (Mac), from an activated environment, with **(llms)** showing in the prompt, from the llm_engineering directory, run this: \n", + "`python -m pip install --upgrade pip` \n", + "`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt` \n", + "Watch carefully for any errors, and let me know. \n", + "If you see instructions to install Microsoft Build Tools, or Apple Xcode tools, then follow the instructions. \n", + "Then try again!\n", + "\n", + "Finally, if that doesn't work, please try SETUP Part 2B, the alternative to Part 2 (with Python 3.11 or Python 3.12). 
\n", + "\n", + "If you're unsure, please run the diagnostics (last cell in this notebook) and then email me at ed@edwarddonner.com" + ] + }, { "cell_type": "code", "execution_count": null, @@ -120,11 +150,7 @@ "metadata": {}, "outputs": [], "source": [ - "# And now, this should run with no output - no import errors.\n", - "# Import errors might indicate that you started jupyter lab without your environment activated? See SETUP part 5.\n", - "# Or you might need to restart your Kernel and Jupyter Lab.\n", - "# Or it's possible that something is wrong with Anaconda. Please try SETUP Part 2B, the alternative to Part 2.\n", - "# If you're unsure, please run the diagnostics (last cell in this notebook) and then email me at ed@edwarddonner.com\n", + "# This import should work if your environment is active and dependencies are installed!\n", "\n", "from openai import OpenAI" ] @@ -175,11 +201,14 @@ "\n", " key_exists = any(line.startswith(\"OPENAI_API_KEY=\") for line in contents)\n", " good_key = any(line.startswith(\"OPENAI_API_KEY=sk-proj-\") for line in contents)\n", + " classic_problem = any(\"OPEN_\" in line for line in contents)\n", " \n", " if key_exists and good_key:\n", " print(\"SUCCESS! 
OPENAI_API_KEY found and it has the right prefix\")\n", " elif key_exists:\n", " print(\"Found an OPENAI_API_KEY although it didn't have the expected prefix sk-proj- \\nPlease double check your key in the file..\")\n", + " elif classic_problem:\n", + " print(\"Didn't find an OPENAI_API_KEY, but I notice that 'OPEN_' appears - do you have a typo like OPEN_API_KEY instead of OPENAI_API_KEY?\")\n", " else:\n", " print(\"Didn't find an OPENAI_API_KEY in the .env file\")\n", "else:\n", @@ -365,6 +394,11 @@ "It's unlikely, but if there's something wrong with your key, you could also try creating a new key (button on the top right) here: \n", "https://platform.openai.com/api-keys\n", "\n", + "### Check that you can use gpt-4o-mini from the OpenAI playground\n", + "\n", + "To confirm that billing is set up and your key is good, you could try using gpt-4o-mini directly: \n", + "https://platform.openai.com/playground/chat?models=gpt-4o-mini\n", + "\n", "### If there's a cert related error\n", "\n", "If you encountered a certificates error like: \n", @@ -380,7 +414,9 @@ "\n", "(1) Try pasting your error into ChatGPT or Claude! It's amazing how often they can figure things out\n", "\n", - "(2) Contact me! Please run the diagnostics in the cell below, then email me your problems to ed@edwarddonner.com\n", + "(2) Try creating another key and replacing it in the .env file and rerunning!\n", + "\n", + "(3) Contact me! Please run the diagnostics in the cell below, then email your problems to me at ed@edwarddonner.com\n", "\n", "Thanks so much, and I'm sorry this is giving you bother!" 
] diff --git a/week2/day2.ipynb b/week2/day2.ipynb index 133ca0f..c2a8084 100644 --- a/week2/day2.ipynb +++ b/week2/day2.ipynb @@ -53,7 +53,7 @@ "# Load environment variables in a file called .env\n", "# Print the key prefixes to help with any debugging\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "openai_api_key = os.getenv('OPENAI_API_KEY')\n", "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", "google_api_key = os.getenv('GOOGLE_API_KEY')\n", diff --git a/week2/day3.ipynb b/week2/day3.ipynb index bad0605..c3e13a1 100644 --- a/week2/day3.ipynb +++ b/week2/day3.ipynb @@ -33,7 +33,7 @@ "# Load environment variables in a file called .env\n", "# Print the key prefixes to help with any debugging\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "openai_api_key = os.getenv('OPENAI_API_KEY')\n", "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", "google_api_key = os.getenv('GOOGLE_API_KEY')\n", diff --git a/week2/day4.ipynb b/week2/day4.ipynb index bc3f0df..1a8d72f 100644 --- a/week2/day4.ipynb +++ b/week2/day4.ipynb @@ -35,7 +35,7 @@ "source": [ "# Initialization\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "\n", "openai_api_key = os.getenv('OPENAI_API_KEY')\n", "if openai_api_key:\n", diff --git a/week2/day5.ipynb b/week2/day5.ipynb index df71896..70cee59 100644 --- a/week2/day5.ipynb +++ b/week2/day5.ipynb @@ -35,7 +35,7 @@ "source": [ "# Initialization\n", "\n", - "load_dotenv()\n", + "load_dotenv(override=True)\n", "\n", "openai_api_key = os.getenv('OPENAI_API_KEY')\n", "if openai_api_key:\n", diff --git a/week3/community-contributions/dataset_generator.ipynb b/week3/community-contributions/dataset_generator.ipynb index 0802303..095d836 100644 --- a/week3/community-contributions/dataset_generator.ipynb +++ b/week3/community-contributions/dataset_generator.ipynb @@ -22,7 +22,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": null, "metadata": { "id": "-Apd7-p-hyLk" }, @@ -84,7 
+84,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": null, "metadata": { "id": "WjxNWW6bvdgj" }, @@ -105,7 +105,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": null, "metadata": { "id": "ZvljDKdji8iV" }, @@ -161,7 +161,7 @@ }, { "cell_type": "code", - "execution_count": 13, + "execution_count": null, "metadata": { "id": "JAdfqYXnvEDE" }, @@ -196,7 +196,7 @@ }, { "cell_type": "code", - "execution_count": 14, + "execution_count": null, "metadata": { "id": "xy2RP5T-vxXg" }, diff --git a/week5/day4.5.ipynb b/week5/day4.5.ipynb index 9027a28..a02b9cd 100644 --- a/week5/day4.5.ipynb +++ b/week5/day4.5.ipynb @@ -14,7 +14,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77", "metadata": {}, "outputs": [], @@ -29,7 +29,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": null, "id": "802137aa-8a74-45e0-a487-d1974927d7ca", "metadata": {}, "outputs": [], @@ -51,7 +51,7 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": null, "id": "58c85082-e417-4708-9efe-81a5d55d1424", "metadata": {}, "outputs": [], @@ -64,7 +64,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": null, "id": "ee78efcb-60fe-449e-a944-40bab26261af", "metadata": {}, "outputs": [], @@ -77,7 +77,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": null, "id": "730711a9-6ffe-4eee-8f48-d6cfb7314905", "metadata": {}, "outputs": [], @@ -104,18 +104,10 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": null, "id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a", "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "Created a chunk of size 1088, which is longer than the specified 1000\n" - ] - } - ], + "outputs": [], "source": [ "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", "chunks = 
text_splitter.split_documents(documents)" @@ -123,39 +115,20 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": null, "id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb", "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "123" - ] - }, - "execution_count": 7, - "metadata": {}, - "output_type": "execute_result" - } - ], + "outputs": [], "source": [ "len(chunks)" ] }, { "cell_type": "code", - "execution_count": 8, + "execution_count": null, "id": "2c54b4b6-06da-463d-bee7-4dd456c2b887", "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "Document types found: company, employees, contracts, products\n" - ] - } - ], + "outputs": [], "source": [ "doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n", "print(f\"Document types found: {', '.join(doc_types)}\")" @@ -184,18 +157,10 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": null, "id": "78998399-ac17-4e28-b15f-0b5f51e6ee23", "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "There are 123 vectors with 1,536 dimensions in the vector store\n" - ] - } - ], + "outputs": [], "source": [ "# Put the chunks of data into a Vector Store that associates a Vector Embedding with each chunk\n", "# Chroma is a popular open source Vector Database based on SQLLite\n", diff --git a/week8/day4.ipynb b/week8/day4.ipynb index 4cd5d8f..6895681 100644 --- a/week8/day4.ipynb +++ b/week8/day4.ipynb @@ -67,12 +67,23 @@ ] }, { - "cell_type": "code", - "execution_count": null, - "id": "0056a02f-06a3-4acc-99f3-cbe919ee936b", + "cell_type": "markdown", + "id": "7f2781ad-e122-4570-8fad-a2fe6452414e", "metadata": {}, - "outputs": [], - "source": [] + "source": [ + "\n", + " \n", + " \n", + " \n", + " \n", + "
\n", + " \n", + " \n", + "

Additional resource: more sophisticated planning agent

\n", + " The Planning Agent that we use in the next cell is simply a Python script that calls the other Agents; frankly that's all we require for this project. But if you're intrigued to see a more autonomous version in which we give the Planning Agent tools and allow it to decide which Agents to call, see my implementation of AutonomousPlanningAgent in my related repo, Agentic. This is an example with multiple tools that dynamically decides which function to call.\n", + " \n", + "
" + ] }, { "cell_type": "code", diff --git a/week8/day5.ipynb b/week8/day5.ipynb index 22edec5..d9c0513 100644 --- a/week8/day5.ipynb +++ b/week8/day5.ipynb @@ -169,7 +169,7 @@ " \n", "

CONGRATULATIONS AND THANK YOU!!!

\n", " \n", - " It's so fabulous that you've made it to the end! My heartiest congratulations. Please stay in touch! I'm
here on LinkedIn if we're not already connected. And my editor would be cross with me if I didn't mention one more time: it makes a HUGE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others.

Thanks once again for working all the way through the course, and I'm excited to hear all about your career as an LLM Engineer.\n", + " It's so fabulous that you've made it to the end! My heartiest congratulations. Please stay in touch! I'm here on LinkedIn if we're not already connected, and I'm on X at @edwarddonner. And my editor would be cross with me if I didn't mention one more time: it makes a HUGE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others.

Massive thanks again for putting up with me for 8 weeks and getting all the way to the final cell! I'm excited to hear all about your career as an LLM Engineer. You could not have picked a better time to be in this field.\n", " \n", " \n", " \n",
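
Editor's note on the recurring `load_dotenv(override=True)` change in the week 2 hunks above: with `override=True`, values in the `.env` file win over variables already exported in the shell, which is exactly the "stale key" failure mode the troubleshooting notebook warns about. The real behaviour lives in the `python-dotenv` package; the sketch below is a hypothetical stdlib-only helper (`load_env_line`) that just illustrates the override semantics:

```python
import os

def load_env_line(line: str, override: bool = False) -> None:
    """Apply one KEY=VALUE line from a .env file to os.environ.

    Mimics python-dotenv's override semantics: by default an existing
    environment variable wins; with override=True the .env value wins.
    """
    key, sep, value = line.strip().partition("=")
    if not key or not sep:
        return  # blank or malformed line - ignore it
    if override or key not in os.environ:
        os.environ[key] = value

# Simulate a stale key already exported in the shell:
os.environ["OPENAI_API_KEY"] = "sk-old-stale-key"

# Without override, the stale shell value silently wins...
load_env_line("OPENAI_API_KEY=sk-proj-fresh-key")
print(os.environ["OPENAI_API_KEY"])  # sk-old-stale-key

# ...with override=True, the .env value replaces it.
load_env_line("OPENAI_API_KEY=sk-proj-fresh-key", override=True)
print(os.environ["OPENAI_API_KEY"])  # sk-proj-fresh-key
```

This is why a student who once exported an old `OPENAI_API_KEY` in their shell profile sees the wrong key picked up until `override=True` is passed.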