
again, the editor bug

Branch: pull/150/head
Author: udomai, 3 months ago
Commit: 4f9c482208
7 changed files:

1. README.md (2 changed lines)
2. SETUP-PC.md (10 changed lines)
3. week2/day1.ipynb (22 changed lines)
4. week2/day2.ipynb (2 changed lines)
5. week3/community-contributions/dataset_generator.ipynb (198 changed lines)
6. week5/day4.5.ipynb (61 changed lines)
7. week8/day1.ipynb (13 changed lines)

README.md (2 changed lines)

@@ -20,7 +20,7 @@ We will start the course by installing Ollama so you can see results immediately
 1. Download and install Ollama from https://ollama.com noting that on a PC you might need to have administrator permissions for the install to work properly
 2. On a PC, start a Command prompt / Powershell (Press Win + R, type `cmd`, and press Enter). On a Mac, start a Terminal (Applications > Utilities > Terminal).
 3. Run `ollama run llama3.2` or for smaller machines try `ollama run llama3.2:1b` - **please note** steer clear of Meta's latest model llama3.3 because at 70B parameters that's way too large for most home computers!
-4. If this doesn't work, you may need to run `ollama serve` in another Powershell (Windows) or Terminal (Mac), and try step 3 again
+4. If this doesn't work: you may need to run `ollama serve` in another Powershell (Windows) or Terminal (Mac), and try step 3 again. On a PC, you may need to be running in an Admin instance of Powershell.
 5. And if that doesn't work on your box, I've set up this on the cloud. This is on Google Colab, which will need you to have a Google account to sign in, but is free: https://colab.research.google.com/drive/1-_f5XZPsChvfU1sJ0QqCePtIuc55LSdu?usp=sharing
 Any problems, please contact me!
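Once step 3 or 4 works, the local server can also be checked programmatically. A minimal sketch, assuming Ollama's documented REST API on its default port 11434; the helper names here are illustrative, not part of the course materials:

```python
# Hypothetical helper for talking to a local Ollama server (default port 11434).
# Endpoint and payload shape follow Ollama's documented /api/generate REST API.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for a single JSON response instead of a stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, model: str = "llama3.2") -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # This call requires `ollama serve` (or the Ollama app) to be running
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(build_payload("llama3.2", "Why is the sky blue?"))
```

Only call `ask_ollama` after the server is up, or the HTTP request will fail with a connection error.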

SETUP-PC.md (10 changed lines)

@@ -93,10 +93,16 @@ You should see (llms) in your command prompt, which is your sign that things are
 4. Run `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt`
 This may take a few minutes to install.
-In the very unlikely event that this doesn't go well, you should try the bullet-proof (but slower) version:
+If you see an error like this:
+> Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": [https://visualstudio.microsoft.com/visual-cpp-build-tools/](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
+Then please follow the link and install Microsoft C++ Build Tools.
+In the very unlikely event that this step doesn't go well, you should try the bullet-proof (but slower) version:
 `pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt`
-5. **Start Jupyter Lab:**
+6. **Start Jupyter Lab:**
 From within the `llm_engineering` folder, type: `jupyter lab`
 ...and Jupyter Lab should open up, ready for you to get started. Open the `week1` folder and double click on `day1.ipynb`. Success! Now close down jupyter lab and move on to Part 3.
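After the `pip install` step, a quick way to confirm that packages actually landed in the active environment is to query their installed versions. A sketch; the package names passed in below are placeholders, not the course's exact requirements list:

```python
# Check which packages are importable-by-metadata in the current environment.
from importlib.metadata import version, PackageNotFoundError

def check_packages(names):
    """Return a dict of package -> installed version, or None if missing."""
    found = {}
    for name in names:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None  # not installed in this environment
    return found

# "pip" should always be present; the second name is deliberately bogus
print(check_packages(["pip", "definitely-not-installed-xyz"]))
```

Run this inside the activated `(llms)` environment; any `None` entries point at packages the install step missed.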

week2/day1.ipynb (22 changed lines)

@@ -300,6 +300,7 @@
    "source": [
     "# Claude 3.5 Sonnet again\n",
     "# Now let's add in streaming back results\n",
+    "# If the streaming looks strange, then please see the note below this cell!\n",
     "\n",
     "result = claude.messages.stream(\n",
     " model=\"claude-3-5-sonnet-latest\",\n",
@@ -316,6 +317,27 @@
     " print(text, end=\"\", flush=True)"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "dd1e17bc-cd46-4c23-b639-0c7b748e6c5a",
+   "metadata": {},
+   "source": [
+    "## A rare problem with Claude streaming on some Windows boxes\n",
+    "\n",
+    "2 students have noticed a strange thing happening with Claude's streaming into Jupyter Lab's output -- it sometimes seems to swallow up parts of the response.\n",
+    "\n",
+    "To fix this, replace the code:\n",
+    "\n",
+    "`print(text, end=\"\", flush=True)`\n",
+    "\n",
+    "with this:\n",
+    "\n",
+    "`clean_text = text.replace(\"\\n\", \" \").replace(\"\\r\", \" \")` \n",
+    "`print(clean_text, end=\"\", flush=True)`\n",
+    "\n",
+    "And it should work fine!"
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,

week2/day2.ipynb (2 changed lines)

@@ -130,6 +130,8 @@
    "metadata": {},
    "outputs": [],
    "source": [
+    "# This can reveal the \"training cut off\", or the most recent date in the training data\n",
+    "\n",
     "message_gpt(\"What is today's date?\")"
    ]
   },

week3/community-contributions/dataset_generator.ipynb (198 changed lines)

@@ -1,43 +1,32 @@
 {
- "nbformat": 4,
- "nbformat_minor": 0,
- "metadata": {
-  "colab": {
-   "provenance": [],
-   "gpuType": "T4"
-  },
-  "kernelspec": {
-   "name": "python3",
-   "display_name": "Python 3"
-  },
-  "language_info": {
-   "name": "python"
-  },
-  "accelerator": "GPU"
- },
 "cells": [
  {
   "cell_type": "code",
-   "source": [
-    "!pip install -q requests torch bitsandbytes transformers sentencepiece accelerate gradio"
-   ],
+   "execution_count": null,
   "metadata": {
    "id": "kU2JrcPlhwd9"
   },
-   "execution_count": null,
-   "outputs": []
+   "outputs": [],
+   "source": [
+    "!pip install -q requests torch bitsandbytes transformers sentencepiece accelerate gradio"
+   ]
  },
  {
   "cell_type": "markdown",
-   "source": [
-    "**Imports**"
-   ],
   "metadata": {
    "id": "lAMIVT4iwNg0"
-   }
+   },
+   "source": [
+    "**Imports**"
+   ]
  },
  {
   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {
+    "id": "-Apd7-p-hyLk"
+   },
+   "outputs": [],
   "source": [
    "import os\n",
    "import requests\n",
@@ -50,24 +39,24 @@
    "\n",
    "hf_token = userdata.get('HF_TOKEN')\n",
    "login(hf_token, add_to_git_credential=True)"
-   ],
-   "metadata": {
-    "id": "-Apd7-p-hyLk"
-   },
-   "execution_count": 2,
-   "outputs": []
+   ]
  },
  {
   "cell_type": "markdown",
-   "source": [
-    "**Model**"
-   ],
   "metadata": {
    "id": "xa0qYqZrwQ66"
-   }
+   },
+   "source": [
+    "**Model**"
+   ]
  },
  {
   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "z5enGmuKjtJu"
+   },
+   "outputs": [],
   "source": [
    "model_name = \"meta-llama/Meta-Llama-3.1-8B-Instruct\"\n",
    "quant_config = BitsAndBytesConfig(\n",
@@ -82,45 +71,45 @@
    " device_map=\"auto\",\n",
    " quantization_config=quant_config\n",
    ")"
-   ],
-   "metadata": {
-    "id": "z5enGmuKjtJu"
-   },
-   "execution_count": null,
-   "outputs": []
+   ]
  },
  {
   "cell_type": "markdown",
-   "source": [
-    "**Tokenizer**"
-   ],
   "metadata": {
    "id": "y1hUSmWlwSbp"
-   }
+   },
+   "source": [
+    "**Tokenizer**"
+   ]
  },
  {
   "cell_type": "code",
-   "source": [
-    "tokenizer = AutoTokenizer.from_pretrained(model_name)\n",
-    "tokenizer.pad_token = tokenizer.eos_token"
-   ],
+   "execution_count": 4,
   "metadata": {
    "id": "WjxNWW6bvdgj"
   },
-   "execution_count": 4,
-   "outputs": []
+   "outputs": [],
+   "source": [
+    "tokenizer = AutoTokenizer.from_pretrained(model_name)\n",
+    "tokenizer.pad_token = tokenizer.eos_token"
+   ]
  },
  {
   "cell_type": "markdown",
-   "source": [
-    "**Functions**"
-   ],
   "metadata": {
    "id": "1pg2U-B3wbIK"
-   }
+   },
+   "source": [
+    "**Functions**"
+   ]
  },
  {
   "cell_type": "code",
+   "execution_count": 12,
+   "metadata": {
+    "id": "ZvljDKdji8iV"
+   },
+   "outputs": [],
   "source": [
    "def generate_dataset(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3):\n",
    " # Convert user inputs into multi-shot examples\n",
@@ -159,24 +148,24 @@
    "\n",
    "def gradio_interface(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3):\n",
    " return generate_dataset(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3)"
-   ],
-   "metadata": {
-    "id": "ZvljDKdji8iV"
-   },
-   "execution_count": 12,
-   "outputs": []
+   ]
  },
  {
   "cell_type": "markdown",
-   "source": [
-    "**Default Values**"
-   ],
   "metadata": {
    "id": "_WDZ5dvRwmng"
-   }
+   },
+   "source": [
+    "**Default Values**"
+   ]
  },
  {
   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {
+    "id": "JAdfqYXnvEDE"
+   },
+   "outputs": [],
   "source": [
    "default_topic = \"Talking to a (5-8) years old and teaching them manners.\"\n",
    "default_number_of_data = 10\n",
@@ -194,24 +183,24 @@
    " \"response\": \"Because parents want the best for you and they love you the most.\"\n",
    " }\n",
    "]"
-   ],
-   "metadata": {
-    "id": "JAdfqYXnvEDE"
-   },
-   "execution_count": 13,
-   "outputs": []
+   ]
  },
  {
   "cell_type": "markdown",
-   "source": [
-    "**Init gradio**"
-   ],
   "metadata": {
    "id": "JwZtD032wuK8"
-   }
+   },
+   "source": [
+    "**Init gradio**"
+   ]
  },
  {
   "cell_type": "code",
+   "execution_count": 14,
+   "metadata": {
+    "id": "xy2RP5T-vxXg"
+   },
+   "outputs": [],
   "source": [
    "gr_interface = gr.Interface(\n",
    " fn=gradio_interface,\n",
@@ -227,41 +216,62 @@
    " ],\n",
    " outputs=gr.Textbox(label=\"Generated Dataset\")\n",
    ")"
-   ],
-   "metadata": {
-    "id": "xy2RP5T-vxXg"
-   },
-   "execution_count": 14,
-   "outputs": []
+   ]
  },
  {
   "cell_type": "markdown",
-   "source": [
-    "**Run the app**"
-   ],
   "metadata": {
    "id": "HZx-mm9Uw3Ph"
-   }
+   },
+   "source": [
+    "**Run the app**"
+   ]
  },
  {
   "cell_type": "code",
-   "source": [
-    "gr_interface.launch()"
-   ],
+   "execution_count": null,
   "metadata": {
    "id": "bfGs5ip8mndg"
   },
-   "execution_count": null,
-   "outputs": []
+   "outputs": [],
+   "source": [
+    "gr_interface.launch()"
+   ]
  },
  {
   "cell_type": "code",
-   "source": [],
+   "execution_count": null,
   "metadata": {
    "id": "Cveqx392x7Mm"
   },
-   "execution_count": null,
-   "outputs": []
+   "outputs": [],
+   "source": []
  }
- ]
+ ],
+ "metadata": {
+  "accelerator": "GPU",
+  "colab": {
+   "gpuType": "T4",
+   "provenance": []
+  },
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.11"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
 }
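The `generate_dataset` function in this notebook builds a multi-shot prompt from user-supplied instruction/response pairs. A stripped-down sketch of that idea; the prompt wording and helper name here are illustrative, not the notebook's exact code:

```python
# Build a single prompt containing several worked examples (multi-shot prompting).
# The wording below is a placeholder for the notebook's actual prompt template.
def build_multishot_prompt(topic, examples):
    lines = [f"Generate more data like the examples below, about: {topic}", ""]
    for ex in examples:
        lines.append(f"Instruction: {ex['instruction']}")
        lines.append(f"Response: {ex['response']}")
        lines.append("")  # blank line between examples
    return "\n".join(lines)

examples = [
    {"instruction": "Say hello politely.", "response": "Hello! Nice to meet you."},
]
print(build_multishot_prompt("teaching manners to young children", examples))
```

The resulting string is what gets tokenized and handed to the quantized Llama model; more example pairs generally steer the generated dataset's style more strongly.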

week5/day4.5.ipynb (61 changed lines)

@@ -14,7 +14,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 1,
    "id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77",
    "metadata": {},
    "outputs": [],
@@ -29,7 +29,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 2,
    "id": "802137aa-8a74-45e0-a487-d1974927d7ca",
    "metadata": {},
    "outputs": [],
@@ -51,7 +51,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 3,
    "id": "58c85082-e417-4708-9efe-81a5d55d1424",
    "metadata": {},
    "outputs": [],
@@ -64,7 +64,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 4,
    "id": "ee78efcb-60fe-449e-a944-40bab26261af",
    "metadata": {},
    "outputs": [],
@@ -77,7 +77,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 5,
    "id": "730711a9-6ffe-4eee-8f48-d6cfb7314905",
    "metadata": {},
    "outputs": [],
@@ -104,10 +104,18 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 6,
    "id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a",
    "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "Created a chunk of size 1088, which is longer than the specified 1000\n"
+     ]
+    }
+   ],
    "source": [
     "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
     "chunks = text_splitter.split_documents(documents)"
@@ -115,20 +123,39 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 7,
    "id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb",
    "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "123"
+      ]
+     },
+     "execution_count": 7,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
    "source": [
     "len(chunks)"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 8,
    "id": "2c54b4b6-06da-463d-bee7-4dd456c2b887",
    "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "Document types found: company, employees, contracts, products\n"
+     ]
+    }
+   ],
    "source": [
     "doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n",
     "print(f\"Document types found: {', '.join(doc_types)}\")"
@@ -157,10 +184,18 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 9,
    "id": "78998399-ac17-4e28-b15f-0b5f51e6ee23",
    "metadata": {},
-   "outputs": [],
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "There are 123 vectors with 1,536 dimensions in the vector store\n"
+     ]
+    }
+   ],
    "source": [
    "# Put the chunks of data into a Vector Store that associates a Vector Embedding with each chunk\n",
    "# Chroma is a popular open source Vector Database based on SQLLite\n",

week8/day1.ipynb (13 changed lines)

@@ -58,13 +58,20 @@
    "# Setting up the modal tokens\n",
    "\n",
    "The first time you run this, please uncomment the next line and execute it. \n",
-    "This is the same as running `modal setup` from the command line. It connects with Modal and installs your tokens.\n",
+    "This is the same as running `modal setup` from the command line. It connects with Modal and installs your tokens. \n",
    "\n",
-    "A student on Windows mentioned that on Windows, you might also need to run this command from a command prompt afterwards: \n",
+    "## Debugging some common problems on Windows\n",
+    "\n",
+    "If this command fails in the next cell, or if any of the modal commands with the `!` fail, please try running them directly on the command line in an activated environment (without the `!`)\n",
+    "\n",
+    "A student on Windows mentioned that on Windows, you might also need to run this command from a command prompt in an activated environment afterwards: \n",
    "`modal token new` \n",
    "(Thank you Ed B. for that!)\n",
    "\n",
-    "And I've also heard that in some situations, you might need to restart the Kernel of this jupyter notebook after running this. (Kernel menu >> Restart Kernel and Clear Outputs of All Cells)."
+    "Also, a student David S. mentioned the following: \n",
+    "> In case anyone else using Windows hits this problem: Along with having to run `modal token new` from a command prompt, you have to move the generated token file. It will deploy the token file (.modal.toml) to your Windows profile folder. The virtual environment couldn't see that location (strangely, it couldn't even after I set environment variables for it and rebooted). I moved that token file to the folder I'm operating out of for the lab and it stopped throwing auth errors.\n",
+    "\n",
+    "Finally: I've also heard that in some situations, you might need to restart the Kernel of this jupyter notebook after running this. (Kernel menu >> Restart Kernel and Clear Outputs of All Cells)."
    ]
   },
   {
