
Improvements to explanations and minor edits

Branch: pull/79/head
Edward Donner, 4 months ago
Commit e334b841ca
16 changed files:

1. README.md (2 changes)
2. SETUP-PC.md (2 changes)
3. SETUP-linux.md (2 changes)
4. SETUP-mac.md (2 changes)
5. week1/day1.ipynb (11 changes)
6. week1/solutions/week1 SOLUTION.ipynb (2 changes)
7. week2/day1.ipynb (2 changes)
8. week4/day4.ipynb (2 changes)
9. week5/day3.ipynb (8 changes)
10. week5/day4.ipynb (6 changes)
11. week5/day5.ipynb (4 changes)
12. week6/day2.ipynb (7 changes)
13. week6/day3.ipynb (16 changes)
14. week6/day4-results.ipynb (2 changes)
15. week7/day1.ipynb (2 changes)
16. week8/day1.ipynb (13 changes)

README.md (2 changes)

@@ -76,7 +76,7 @@ Follow the setup instructions above, then open the Week 1 folder and prepare for
 ### The most important part
-The mantra of the course is: the best way to learn is by **DOING**. I don't type all the code during the course; I execute it for you to see the results. You should work along with me or after each lecture, running each cell, inspecting the objects to get a detailed understanding of what's happening. Then tweak the code and make it your own. There are juicy challenges for you throughout the course. I'd love it if you wanted to push your code so I can follow along with your progress, and I can make your solutions available to others so we share in your progress. While the projects are enjoyable, they are first and foremost designed to be _educational_, teaching you business skills that can be put into practice in your work.
+The mantra of the course is: the best way to learn is by **DOING**. I don't type all the code during the course; I execute it for you to see the results. You should work along with me or after each lecture, running each cell, inspecting the objects to get a detailed understanding of what's happening. Then tweak the code and make it your own. There are juicy challenges for you throughout the course. I'd love it if you wanted to submit a Pull Request for your code (instructions [here](https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293)) and I can make your solutions available to others so we share in your progress; as an added benefit, you'll be recognized in GitHub for your contribution to the repo. While the projects are enjoyable, they are first and foremost designed to be _educational_, teaching you business skills that can be put into practice in your work.
 ## Starting in Week 3, we'll also be using Google Colab for running with GPUs

SETUP-PC.md (2 changes)

@@ -94,7 +94,7 @@ You should see (llms) in your command prompt, which is your sign that things are
 4. Run `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt`
 This may take a few minutes to install.
 In the very unlikely event that this doesn't go well, you should try the bullet-proof (but slower) version:
-`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall --verbose -r requirements.txt`
+`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt`
 5. **Start Jupyter Lab:**

SETUP-linux.md (2 changes)

@@ -101,7 +101,7 @@ Your prompt should now display `(llms)`, indicating the environment is active.
 Run: `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt`.
 If issues occur, try the fallback:
-`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall --verbose -r requirements.txt`
+`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt`
 6. **Start Jupyter Lab:**

SETUP-mac.md (2 changes)

@@ -87,7 +87,7 @@ You should see (llms) in your command prompt, which is your sign that things are
 4. Run `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt`
 This may take a few minutes to install.
 In the very unlikely event that this doesn't go well, you should try the bullet-proof (but slower) version:
-`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall --verbose -r requirements.txt`
+`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt`
 5. **Start Jupyter Lab:**

week1/day1.ipynb (11 changes)

@@ -538,16 +538,9 @@
 "\n",
 "If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
 "\n",
-"PR instructions courtesy of an AI friend: https://chatgpt.com/share/670145d5-e8a8-8012-8f93-39ee4e248b4c"
+"Here are good instructions courtesy of an AI friend: \n",
+"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293"
 ]
 },
-{
-"cell_type": "code",
-"execution_count": null,
-"id": "682eff74-55c4-4d4b-b267-703edbc293c7",
-"metadata": {},
-"outputs": [],
-"source": []
-}
 ],
 "metadata": {

week1/solutions/week1 SOLUTION.ipynb (2 changes)

@@ -168,7 +168,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.10"
+"version": "3.11.11"
 }
 },
 "nbformat": 4,

week2/day1.ipynb (2 changes)

@@ -30,7 +30,7 @@
 " At the start of each week, it's worth checking you have the latest code.<br/>\n",
 " First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
 " After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
-" <code>conda env update --f environment.yml --prune</code><br/>\n",
+" <code>conda env update --f environment.yml</code><br/>\n",
 " Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
 " <code>pip install -r requirements.txt</code>\n",
 " <br/>Then restart the kernel (Kernel menu >> Restart Kernel and Clear Outputs Of All Cells) to pick up the changes.\n",

week4/day4.ipynb (2 changes)

@@ -839,7 +839,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.10"
+"version": "3.11.11"
 }
 },
 "nbformat": 4,

week5/day3.ipynb (8 changes)

@@ -180,7 +180,13 @@
 "source": [
 "# Put the chunks of data into a Vector Store that associates a Vector Embedding with each chunk\n",
 "\n",
-"embeddings = OpenAIEmbeddings()"
+"embeddings = OpenAIEmbeddings()\n",
+"\n",
+"# If you would rather use the free Vector Embeddings from HuggingFace sentence-transformers\n",
+"# Then replace embeddings = OpenAIEmbeddings()\n",
+"# with:\n",
+"# from langchain.embeddings import HuggingFaceEmbeddings\n",
+"# embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\")"
 ]
 },
 {
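Spelled out as runnable code, the swap described in those comments looks like this — a sketch, assuming sentence-transformers is installed; note that newer LangChain releases import the class from langchain_community.embeddings instead:

```python
# Free local embeddings in place of OpenAIEmbeddings, as the comments above describe.
# Assumes: pip install sentence-transformers
# On newer LangChain versions use:
#   from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)

# Quick smoke test: embed one string and check the vector dimension
# (all-MiniLM-L6-v2 produces 384-dimensional vectors)
vector = embeddings.embed_query("hello vector store")
print(len(vector))
```

The rest of the notebook is unchanged either way: the vector store simply calls `embed_documents(...)` and `embed_query(...)` on whichever embeddings object you chose.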

week5/day4.ipynb (6 changes)

@@ -168,6 +168,12 @@
 "\n",
 "embeddings = OpenAIEmbeddings()\n",
 "\n",
+"# If you would rather use the free Vector Embeddings from HuggingFace sentence-transformers\n",
+"# Then replace embeddings = OpenAIEmbeddings()\n",
+"# with:\n",
+"# from langchain.embeddings import HuggingFaceEmbeddings\n",
+"# embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\")\n",
+"\n",
 "# Delete if already exists\n",
 "\n",
 "if os.path.exists(db_name):\n",

week5/day5.ipynb (4 changes)

@@ -149,7 +149,9 @@
 "embeddings = OpenAIEmbeddings()\n",
 "\n",
 "# If you would rather use the free Vector Embeddings from HuggingFace sentence-transformers\n",
-"# Then uncomment this line instead\n",
+"# Then replace embeddings = OpenAIEmbeddings()\n",
+"# with:\n",
+"# from langchain.embeddings import HuggingFaceEmbeddings\n",
 "# embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\")\n",
 "\n",
 "# Delete if already exists\n",

week6/day2.ipynb (7 changes)

@@ -12,6 +12,7 @@
 "## Data Curation Part 2\n",
 "\n",
 "Today we'll extend our dataset to a greater coverage, and craft it into an excellent dataset for training. \n",
+"Data curation can seem less exciting than other things we work on, but it's a crucial part of the LLM engineers' responsibility and an important craft to hone, so that you can build your own commercial solutions with high quality datasets.\n",
 "\n",
 "The dataset is here: \n",
 "https://huggingface.co/datasets/McAuley-Lab/Amazon-Reviews-2023\n",
@@ -23,7 +24,9 @@
 "\n",
 "We are about to craft a massive dataset of 400,000 items covering multiple types of product. In Week 7 we will be using this data to train our own model. It's a pretty big dataset, and depending on the GPU you select, training could take 20+ hours. It will be really good fun, but it could cost a few dollars in compute units.\n",
 "\n",
-"As an alternative, if you want to keep things quick & low cost, you can work with a smaller dataset focused only on Home Appliances. You'll be able to cover the same learning points; the results will be good -- not quite as good as the full dataset, but still pretty amazing! If you'd prefer to do this, I've set up an alternative jupyter notebook in this folder called `lite.ipynb` that you should use in place of this one."
+"As an alternative, if you want to keep things quick & low cost, you can work with a smaller dataset focused only on Home Appliances. You'll be able to cover the same learning points; the results will be good -- not quite as good as the full dataset, but still pretty amazing! If you'd prefer to do this, I've set up an alternative jupyter notebook in this folder called `lite.ipynb` that you should use in place of this one.\n",
+"\n",
+"Also, if you'd prefer, you can shortcut running all this data curation by downloading the pickle files that we save in the last cell. The pickle files are available here: https://drive.google.com/drive/folders/1f_IZGybvs9o0J5sb3xmtTEQB3BXllzrW"
 ]
 },
 {
@@ -610,7 +613,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.10"
+"version": "3.11.11"
 }
 },
 "nbformat": 4,

week6/day3.ipynb (16 changes)

@@ -52,6 +52,20 @@
 "from sklearn.preprocessing import StandardScaler"
 ]
 },
+{
+"cell_type": "markdown",
+"id": "b3c87c11-8dbe-4b8c-8989-01e3d3a60026",
+"metadata": {},
+"source": [
+"## NLP imports\n",
+"\n",
+"In the next cell, we have more imports for our NLP related machine learning. \n",
+"If the gensim import gives you an error like \"Cannot import name 'triu' from 'scipy.linalg'\" then please run in another cell: \n",
+"`!pip install \"scipy<1.13\"` \n",
+"As described on StackOverflow [here](https://stackoverflow.com/questions/78279136/importerror-cannot-import-name-triu-from-scipy-linalg-when-importing-gens). \n",
+"Many thanks to students Arnaldo G and Ard V for sorting this."
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -59,7 +73,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# And more imports for our NLP related machine learning\n",
+"# NLP related imports\n",
 "\n",
 "from sklearn.feature_extraction.text import CountVectorizer\n",
 "from gensim.models import Word2Vec\n",

week6/day4-results.ipynb (2 changes)

@@ -1508,7 +1508,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.10"
+"version": "3.11.11"
 }
 },
 "nbformat": 4,

week7/day1.ipynb (2 changes)

@@ -31,7 +31,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.10"
+"version": "3.11.11"
 }
 },
 "nbformat": 4,

week8/day1.ipynb (13 changes)

@@ -189,9 +189,18 @@
 "\n",
 "You can also build REST endpoints easily, although we won't cover that as we'll be calling direct from Python.\n",
 "\n",
-"## Important note:\n",
+"## Important note for Windows people:\n",
 "\n",
-"On the next line, I call `modal deploy` from within Jupyter lab; I've heard that on some versions of Windows this gives a strange unicode error because modal prints emojis to the output which can't be displayed. If that happens to you, simply use an Anaconda Prompt window or a Powershell instead, with your environment activated, and type `modal deploy pricer_service` there. Follow the same approach the next time we do !modal deploy too."
+"On the next line, I call `modal deploy` from within Jupyter lab; I've heard that on some versions of Windows this gives a strange unicode error because modal prints emojis to the output which can't be displayed. If that happens to you, simply use an Anaconda Prompt window or a Powershell instead, with your environment activated, and type `modal deploy pricer_service` there. Follow the same approach the next time we do `!modal deploy` too.\n",
+"\n",
+"As an alternative, a few students have mentioned you can run this code within Jupyter Lab if you want to run it from here:\n",
+"```\n",
+"# Check the default encoding\n",
+"print(locale.getpreferredencoding()) # Should print 'UTF-8'\n",
+"\n",
+"# Ensure UTF-8 encoding\n",
+"os.environ['PYTHONIOENCODING'] = 'utf-8'\n",
+"```"
 ]
 },
 {
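Note that the student snippet quoted in that cell references `locale` and `os` without importing them; a self-contained version of the same workaround would be:

```python
# Self-contained version of the encoding workaround quoted above.
import locale
import os

# Check the default encoding -- should print 'UTF-8'
print(locale.getpreferredencoding())

# Force UTF-8 so modal's emoji output doesn't trigger the Windows unicode error
os.environ["PYTHONIOENCODING"] = "utf-8"
```

Since `!modal deploy` runs as a subprocess of the notebook kernel, the environment variable only helps if it is set before that command is invoked.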
