From e334b841ca0c9ff7c52f9e9ace32ed4558436feb Mon Sep 17 00:00:00 2001 From: Edward Donner Date: Sun, 5 Jan 2025 12:51:20 -0500 Subject: [PATCH] Improvements to explanations and minor edits --- README.md | 2 +- SETUP-PC.md | 2 +- SETUP-linux.md | 2 +- SETUP-mac.md | 2 +- week1/day1.ipynb | 11 ++--------- week1/solutions/week1 SOLUTION.ipynb | 2 +- week2/day1.ipynb | 2 +- week4/day4.ipynb | 2 +- week5/day3.ipynb | 8 +++++++- week5/day4.ipynb | 6 ++++++ week5/day5.ipynb | 4 +++- week6/day2.ipynb | 9 ++++++--- week6/day3.ipynb | 16 +++++++++++++++- week6/day4-results.ipynb | 2 +- week7/day1.ipynb | 2 +- week8/day1.ipynb | 13 +++++++++++-- 16 files changed, 59 insertions(+), 26 deletions(-) diff --git a/README.md b/README.md index 2302d82..8f79257 100644 --- a/README.md +++ b/README.md @@ -76,7 +76,7 @@ Follow the setup instructions above, then open the Week 1 folder and prepare for ### The most important part -The mantra of the course is: the best way to learn is by **DOING**. I don't type all the code during the course; I execute it for you to see the results. You should work along with me or after each lecture, running each cell, inspecting the objects to get a detailed understanding of what's happening. Then tweak the code and make it your own. There are juicy challenges for you throughout the course. I'd love it if you wanted to push your code so I can follow along with your progress, and I can make your solutions available to others so we share in your progress. While the projects are enjoyable, they are first and foremost designed to be _educational_, teaching you business skills that can be put into practice in your work. +The mantra of the course is: the best way to learn is by **DOING**. I don't type all the code during the course; I execute it for you to see the results. You should work along with me or after each lecture, running each cell, inspecting the objects to get a detailed understanding of what's happening. Then tweak the code and make it your own. There are juicy challenges for you throughout the course. I'd love it if you wanted to submit a Pull Request for your code (instructions [here](https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293)) and I can make your solutions available to others so we share in your progress; as an added benefit, you'll be recognized in GitHub for your contribution to the repo. While the projects are enjoyable, they are first and foremost designed to be _educational_, teaching you business skills that can be put into practice in your work. ## Starting in Week 3, we'll also be using Google Colab for running with GPUs diff --git a/SETUP-PC.md b/SETUP-PC.md index 055ddef..ab6ebb9 100644 --- a/SETUP-PC.md +++ b/SETUP-PC.md @@ -94,7 +94,7 @@ You should see (llms) in your command prompt, which is your sign that things are 4. Run `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt` This may take a few minutes to install. In the very unlikely event that this doesn't go well, you should try the bullet-proof (but slower) version: -`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall --verbose -r requirements.txt` +`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt` 5. **Start Jupyter Lab:** diff --git a/SETUP-linux.md b/SETUP-linux.md index f777cb6..45218ea 100644 --- a/SETUP-linux.md +++ b/SETUP-linux.md @@ -101,7 +101,7 @@ Your prompt should now display `(llms)`, indicating the environment is active. 
Run: `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt`. If issues occur, try the fallback: -`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall --verbose -r requirements.txt` +`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt` 6. **Start Jupyter Lab:** diff --git a/SETUP-mac.md b/SETUP-mac.md index 1778ae2..2c0e566 100644 --- a/SETUP-mac.md +++ b/SETUP-mac.md @@ -87,7 +87,7 @@ You should see (llms) in your command prompt, which is your sign that things are 4. Run `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt` This may take a few minutes to install. In the very unlikely event that this doesn't go well, you should try the bullet-proof (but slower) version: -`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall --verbose -r requirements.txt` +`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt` 5. **Start Jupyter Lab:** diff --git a/week1/day1.ipynb b/week1/day1.ipynb index f7db47b..d1823b1 100644 --- a/week1/day1.ipynb +++ b/week1/day1.ipynb @@ -538,16 +538,9 @@ "\n", "If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n", "\n", - "PR instructions courtesy of an AI friend: https://chatgpt.com/share/670145d5-e8a8-8012-8f93-39ee4e248b4c" + "Here are good instructions courtesy of an AI friend: \n", + "https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293" ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "682eff74-55c4-4d4b-b267-703edbc293c7", - "metadata": {}, - "outputs": [], - "source": [] } ], "metadata": { diff --git a/week1/solutions/week1 SOLUTION.ipynb b/week1/solutions/week1 SOLUTION.ipynb index 5a7f2a7..df9efaa 100644 --- a/week1/solutions/week1 SOLUTION.ipynb +++ b/week1/solutions/week1 SOLUTION.ipynb @@ -168,7 +168,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.10" + "version": "3.11.11" } }, "nbformat": 4, diff --git a/week2/day1.ipynb b/week2/day1.ipynb index fe515bc..8a2640d 100644 --- a/week2/day1.ipynb +++ b/week2/day1.ipynb @@ -30,7 +30,7 @@ " At the start of each week, it's worth checking you have the latest code.
\n", " First do a git pull and merge your changes as needed. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!

\n", " After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:
\n", - " conda env update --f environment.yml --prune
\n", + " conda env update --f environment.yml
\n", " Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):
\n", " pip install -r requirements.txt\n", "
Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",
diff --git a/week4/day4.ipynb b/week4/day4.ipynb
index e628a50..24df7b5 100644
--- a/week4/day4.ipynb
+++ b/week4/day4.ipynb
@@ -839,7 +839,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.11.10"
+   "version": "3.11.11"
  }
 },
 "nbformat": 4,
diff --git a/week5/day3.ipynb b/week5/day3.ipynb
index 764f13c..349cb6b 100644
--- a/week5/day3.ipynb
+++ b/week5/day3.ipynb
@@ -180,7 +180,13 @@
   "source": [
    "# Put the chunks of data into a Vector Store that associates a Vector Embedding with each chunk\n",
    "\n",
-    "embeddings = OpenAIEmbeddings()"
+    "embeddings = OpenAIEmbeddings()\n",
+    "\n",
+    "# If you would rather use the free Vector Embeddings from HuggingFace sentence-transformers\n",
+    "# Then replace embeddings = OpenAIEmbeddings()\n",
+    "# with:\n",
+    "# from langchain.embeddings import HuggingFaceEmbeddings\n",
+    "# embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\")"
   ]
  },
  {
diff --git a/week5/day4.ipynb b/week5/day4.ipynb
index 3e2cc00..de5e45a 100644
--- a/week5/day4.ipynb
+++ b/week5/day4.ipynb
@@ -168,6 +168,12 @@
    "\n",
    "embeddings = OpenAIEmbeddings()\n",
    "\n",
+    "# If you would rather use the free Vector Embeddings from HuggingFace sentence-transformers\n",
+    "# Then replace embeddings = OpenAIEmbeddings()\n",
+    "# with:\n",
+    "# from langchain.embeddings import HuggingFaceEmbeddings\n",
+    "# embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\")\n",
+    "\n",
    "# Delete if already exists\n",
    "\n",
    "if os.path.exists(db_name):\n",
diff --git a/week5/day5.ipynb b/week5/day5.ipynb
index 5c29d40..cbd17b6 100644
--- a/week5/day5.ipynb
+++ b/week5/day5.ipynb
@@ -149,7 +149,9 @@
    "embeddings = OpenAIEmbeddings()\n",
    "\n",
    "# If you would rather use the free Vector Embeddings from HuggingFace sentence-transformers\n",
-    "# Then uncomment this line instead\n",
+    "# Then replace embeddings = OpenAIEmbeddings()\n",
+    "# with:\n",
+    "# from langchain.embeddings import HuggingFaceEmbeddings\n",
    "# embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\")\n",
    "\n",
    "# Delete if already exists\n",
diff --git a/week6/day2.ipynb b/week6/day2.ipynb
index f59a4e9..d365869 100644
--- a/week6/day2.ipynb
+++ b/week6/day2.ipynb
@@ -11,7 +11,8 @@
    "\n",
    "## Data Curation Part 2\n",
    "\n",
-    "Today we'll extend our dataset to a greater coverage, and craft it into an excellent dataset for training.\n",
+    "Today we'll extend our dataset to broader coverage and craft it into an excellent dataset for training. \n",
+    "Data curation can seem less exciting than other things we work on, but it's a crucial part of an LLM engineer's responsibility and an important craft to hone, so that you can build your own commercial solutions with high-quality datasets.\n",
    "\n",
    "The dataset is here: \n",
    "https://huggingface.co/datasets/McAuley-Lab/Amazon-Reviews-2023\n",
@@ -23,7 +24,9 @@
    "\n",
    "We are about to craft a massive dataset of 400,000 items covering multiple types of product. In Week 7 we will be using this data to train our own model. It's a pretty big dataset, and depending on the GPU you select, training could take 20+ hours. It will be really good fun, but it could cost a few dollars in compute units.\n",
    "\n",
-    "As an alternative, if you want to keep things quick & low cost, you can work with a smaller dataset focused only on Home Appliances. 
You'll be able to cover the same learning points; the results will be good -- not quite as good as the full dataset, but still pretty amazing! If you'd prefer to do this, I've set up an alternative jupyter notebook in this folder called `lite.ipynb` that you should use in place of this one."
+    "As an alternative, if you want to keep things quick & low cost, you can work with a smaller dataset focused only on Home Appliances. You'll be able to cover the same learning points; the results will be good -- not quite as good as the full dataset, but still pretty amazing! If you'd prefer to do this, I've set up an alternative Jupyter notebook in this folder called `lite.ipynb` that you should use in place of this one.\n",
+    "\n",
+    "Also, if you'd prefer, you can skip running all this data curation by downloading the pickle files that we save in the last cell. The pickle files are available here: https://drive.google.com/drive/folders/1f_IZGybvs9o0J5sb3xmtTEQB3BXllzrW"
   ]
  },
  {
diff --git a/week6/day3.ipynb b/week6/day3.ipynb
index 62345ac..45bbac2 100644
--- a/week6/day3.ipynb
+++ b/week6/day3.ipynb
@@ -52,6 +52,20 @@
    "from sklearn.preprocessing import StandardScaler"
   ]
  },
+ {
+  "cell_type": "markdown",
+  "id": "b3c87c11-8dbe-4b8c-8989-01e3d3a60026",
+  "metadata": {},
+  "source": [
+    "## NLP imports\n",
+    "\n",
+    "In the next cell, we have more imports for our NLP-related machine learning. \n",
+    "If the gensim import gives you an error like \"Cannot import name 'triu' from 'scipy.linalg'\", then please run this in another cell: \n",
+    "`!pip install \"scipy<1.13\"` \n",
+    "as described on StackOverflow [here](https://stackoverflow.com/questions/78279136/importerror-cannot-import-name-triu-from-scipy-linalg-when-importing-gens). \n",
+    "Many thanks to students Arnaldo G and Ard V for sorting this."
+  ]
+ },
  {
   "cell_type": "code",
   "execution_count": null,
@@ -59,7 +73,7 @@
   "metadata": {},
   "outputs": [],
   "source": [
-    "# And more imports for our NLP related machine learning\n",
+    "# NLP-related imports\n",
    "\n",
    "from sklearn.feature_extraction.text import CountVectorizer\n",
    "from gensim.models import Word2Vec\n",
diff --git a/week6/day4-results.ipynb b/week6/day4-results.ipynb
index 097d396..a3a44eb 100644
--- a/week6/day4-results.ipynb
+++ b/week6/day4-results.ipynb
@@ -1508,7 +1508,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.11.10"
+   "version": "3.11.11"
  }
 },
 "nbformat": 4,
diff --git a/week7/day1.ipynb b/week7/day1.ipynb
index 9c9d90d..161e686 100644
--- a/week7/day1.ipynb
+++ b/week7/day1.ipynb
@@ -31,7 +31,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-   "version": "3.11.10"
+   "version": "3.11.11"
  }
 },
 "nbformat": 4,
diff --git a/week8/day1.ipynb b/week8/day1.ipynb
index f6c5da1..a6e54b2 100644
--- a/week8/day1.ipynb
+++ b/week8/day1.ipynb
@@ -189,9 +189,21 @@
    "\n",
    "You can also build REST endpoints easily, although we won't cover that as we'll be calling direct from Python.\n",
    "\n",
-    "## Important note:\n",
+    "## Important note for Windows people:\n",
    "\n",
-    "On the next line, I call `modal deploy` from within Jupyter lab; I've heard that on some versions of Windows this gives a strange unicode error because modal prints emojis to the output which can't be displayed. 
If that happens to you, simply use an Anaconda Prompt window or a Powershell instead, with your environment activated, and type `modal deploy pricer_service` there. Follow the same approach the next time we do !modal deploy too."
+    "On the next line, I call `modal deploy` from within Jupyter Lab; I've heard that on some versions of Windows this gives a strange Unicode error because modal prints emojis to the output which can't be displayed. If that happens to you, simply use an Anaconda Prompt window or PowerShell instead, with your environment activated, and type `modal deploy pricer_service` there. Follow the same approach the next time we do `!modal deploy` too.\n",
+    "\n",
+    "As an alternative, a few students have mentioned that running this code first allows `!modal deploy` to work from within Jupyter Lab:\n",
+    "```\n",
+    "import os\n",
+    "import locale\n",
+    "\n",
+    "# Check the default encoding\n",
+    "print(locale.getpreferredencoding())  # Should print 'UTF-8'\n",
+    "\n",
+    "# Ensure UTF-8 encoding\n",
+    "os.environ['PYTHONIOENCODING'] = 'utf-8'\n",
+    "```"
   ]
  },
  {
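A note on that last cell: if `!modal deploy` still trips over Windows encodings even with `PYTHONIOENCODING` set, another route is to shell out to the modal CLI from Python with UTF-8 forced explicitly on the child process and on the captured output. This is a hedged sketch rather than anything from the course materials -- it assumes the `modal` CLI is installed in your environment, and it reuses the `pricer_service` target named above.

```
# A sketch only (not from the repo): deploy via the modal CLI from Python,
# forcing UTF-8 so modal's emoji output can't raise UnicodeEncodeError on
# a legacy Windows codepage.
import os
import subprocess

env = dict(os.environ, PYTHONIOENCODING="utf-8", PYTHONUTF8="1")

result = subprocess.run(
    ["modal", "deploy", "pricer_service"],  # same target as the notebook cell
    capture_output=True,
    text=True,
    encoding="utf-8",
    errors="replace",  # substitute undecodable characters instead of crashing
    env=env,
)
print(result.stdout)
if result.returncode != 0:
    print(result.stderr)
```

The `errors="replace"` setting trades a perfect transcript for robustness: any byte the decoder can't handle becomes the U+FFFD replacement character rather than an exception, which is usually the right trade for deploy logs.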