Follow the setup instructions above, then open the Week 1 folder and prepare for joy.
### The most important part
The mantra of the course is: the best way to learn is by **DOING**. I don't type all the code during the course; I execute it for you to see the results. You should work along with me or after each lecture, running each cell, inspecting the objects to get a detailed understanding of what's happening. Then tweak the code and make it your own. There are juicy challenges for you throughout the course. I'd love it if you wanted to submit a Pull Request for your code (instructions [here](https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293)) and I can make your solutions available to others so we share in your progress; as an added benefit, you'll be recognized on GitHub for your contribution to the repo. While the projects are enjoyable, they are first and foremost designed to be _educational_, teaching you business skills that can be put into practice in your work.
## Starting in Week 3, we'll also be using Google Colab for running with GPUs
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
"\n",
"\n",
"PR instructions courtesy of an AI friend: https://chatgpt.com/share/670145d5-e8a8-8012-8f93-39ee4e248b4c"
"Here are good instructions courtesy of an AI friend: \n",
" At the start of each week, it's worth checking you have the latest code.<br/>\n",
" At the start of each week, it's worth checking you have the latest code.<br/>\n",
" First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
" First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
" After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
" After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
"Today we'll extend our dataset to a greater coverage, and craft it into an excellent dataset for training. \n",
"Today we'll extend our dataset to a greater coverage, and craft it into an excellent dataset for training. \n",
"Data curation can seem less exciting than other things we work on, but it's a crucial part of the LLM engineers' responsibility and an important craft to hone, so that you can build your own commercial solutions with high quality datasets.\n",
"We are about to craft a massive dataset of 400,000 items covering multiple types of product. In Week 7 we will be using this data to train our own model. It's a pretty big dataset, and depending on the GPU you select, training could take 20+ hours. It will be really good fun, but it could cost a few dollars in compute units.\n",
"We are about to craft a massive dataset of 400,000 items covering multiple types of product. In Week 7 we will be using this data to train our own model. It's a pretty big dataset, and depending on the GPU you select, training could take 20+ hours. It will be really good fun, but it could cost a few dollars in compute units.\n",
"\n",
"\n",
"As an alternative, if you want to keep things quick & low cost, you can work with a smaller dataset focused only on Home Appliances. You'll be able to cover the same learning points; the results will be good -- not quite as good as the full dataset, but still pretty amazing! If you'd prefer to do this, I've set up an alternative jupyter notebook in this folder called `lite.ipynb` that you should use in place of this one."
"As an alternative, if you want to keep things quick & low cost, you can work with a smaller dataset focused only on Home Appliances. You'll be able to cover the same learning points; the results will be good -- not quite as good as the full dataset, but still pretty amazing! If you'd prefer to do this, I've set up an alternative jupyter notebook in this folder called `lite.ipynb` that you should use in place of this one.\n",
"\n",
"Also, if you'd prefer, you can shortcut running all this data curation by downloading the pickle files that we save in the last cell. The pickle files are available here: https://drive.google.com/drive/folders/1f_IZGybvs9o0J5sb3xmtTEQB3BXllzrW"
"In the next cell, we have more imports for our NLP related machine learning. \n",
"If the gensim import gives you an error like \"Cannot import name 'triu' from 'scipy.linalg' then please run in another cell: \n",
"`!pip install \"scipy<1.13\"` \n",
"As described on StackOverflow [here](https://stackoverflow.com/questions/78279136/importerror-cannot-import-name-triu-from-scipy-linalg-when-importing-gens). \n",
"Many thanks to students Arnaldo G and Ard V for sorting this."
"# And more imports for our NLP related machine learning\n",
"You can also build REST endpoints easily, although we won't cover that as we'll be calling direct from Python.\n",
"You can also build REST endpoints easily, although we won't cover that as we'll be calling direct from Python.\n",
"\n",
"\n",
"## Important note:\n",
"## Important note for Windows people:\n",
"\n",
"\n",
"On the next line, I call `modal deploy` from within Jupyter lab; I've heard that on some versions of Windows this gives a strange unicode error because modal prints emojis to the output which can't be displayed. If that happens to you, simply use an Anaconda Prompt window or a Powershell instead, with your environment activated, and type `modal deploy pricer_service` there. Follow the same approach the next time we do !modal deploy too."
"On the next line, I call `modal deploy` from within Jupyter lab; I've heard that on some versions of Windows this gives a strange unicode error because modal prints emojis to the output which can't be displayed. If that happens to you, simply use an Anaconda Prompt window or a Powershell instead, with your environment activated, and type `modal deploy pricer_service` there. Follow the same approach the next time we do `!modal deploy` too.\n",
"\n",
"As an alternative, a few students have mentioned you can run this code within Jupyter Lab if you want to run it from here:\n",
"```\n",
"# Check the default encoding\n",
"print(locale.getpreferredencoding()) # Should print 'UTF-8'\n",