
Improvements to descriptions and links

pull/275/head
Edward Donner 2 months ago
parent commit c80065df86
  1. SETUP-PC.md (4)
  2. week1/day2 EXERCISE.ipynb (30)
  3. week1/troubleshooting.ipynb (4)
  4. week3/community-contributions/day4_OCR_model_exercise.ipynb (124)
  5. week3/community-contributions/day5_with_Gradio.ipynb (184)
  6. week3/community-contributions/en-de-fr_dataset_generator.ipynb (112)
  7. week3/community-contributions/synthetic_data_generator.ipynb (6)
  8. week5/community-contributions/day 5 - ollama_rag_1.ipynb (6)
  9. week5/community-contributions/day3-gemini.ipynb (4)
  10. week5/day2.ipynb (2)
  11. week6/day1.ipynb (22)
  12. week6/day2.ipynb (13)
  13. week6/day3.ipynb (13)
  14. week6/day4.ipynb (28)
  15. week6/day5.ipynb (28)
  16. week8/agents/ensemble_agent.py (2)
  17. week8/day1.ipynb (15)
  18. week8/day2.4.ipynb (2)
  19. week8/day5.ipynb (24)

SETUP-PC.md (4)

@@ -22,7 +22,7 @@ There are 4 common gotchas to developing on Windows to be aware of:
 1. Permissions. Please take a look at this [tutorial](https://chatgpt.com/share/67b0ae58-d1a8-8012-82ca-74762b0408b0) on permissions on Windows
 2. Anti-virus, Firewall, VPN. These can interfere with installations and network access; try temporarily disabling them as needed
 3. The evil Windows 260 character limit to filenames - here is a full [explanation and fix](https://chatgpt.com/share/67b0afb9-1b60-8012-a9f7-f968a5a910c7)!
-4. If you've not worked with Data Science packages on your computer before, you might need to install Microsoft Build Tools. Here are [instructions](https://chatgpt.com/share/67b0b762-327c-8012-b809-b4ec3b9e7be0).
+4. If you've not worked with Data Science packages on your computer before, you might need to install Microsoft Build Tools. Here are [instructions](https://chatgpt.com/share/67b0b762-327c-8012-b809-b4ec3b9e7be0). A student also mentioned that [these instructions](https://github.com/bycloudai/InstallVSBuildToolsWindows) might be helpful for people on Windows 11.
 ### Part 1: Clone the Repo
@@ -109,7 +109,7 @@ If you see an error like this:
 > Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": [https://visualstudio.microsoft.com/visual-cpp-build-tools/](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
-Then please follow the link and install Microsoft C++ Build Tools.
+Then please follow the link and install Microsoft C++ Build Tools. A student also mentioned that [these instructions](https://github.com/bycloudai/InstallVSBuildToolsWindows) might be helpful for people on Windows 11.
 In the very unlikely event that this step doesn't go well, you should try the bullet-proof (but slower) version:
 `pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt`

week1/day2 EXERCISE.ipynb (30)

@@ -203,6 +203,36 @@
 "print(response.choices[0].message.content)"
 ]
 },
+{
+"cell_type": "markdown",
+"id": "9f9e22da-b891-41f6-9ac9-bd0c0a5f4f44",
+"metadata": {},
+"source": [
+"## Are you confused about why that works?\n",
+"\n",
+"It seems strange, right? We just used OpenAI code to call Ollama?? What's going on?!\n",
+"\n",
+"Here's the scoop:\n",
+"\n",
+"The python class `OpenAI` is simply code written by OpenAI engineers that makes calls over the internet to an endpoint. \n",
+"\n",
+"When you call `openai.chat.completions.create()`, this python code just makes a web request to the following url: \"https://api.openai.com/v1/chat/completions\"\n",
+"\n",
+"Code like this is known as a \"client library\" - it's just wrapper code that runs on your machine to make web requests. The actual power of GPT is running on OpenAI's cloud behind this API, not on your computer!\n",
+"\n",
+"OpenAI was so popular, that lots of other AI providers provided identical web endpoints, so you could use the same approach.\n",
+"\n",
+"So Ollama has an endpoint running on your local box at http://localhost:11434/v1/chat/completions \n",
+"And in week 2 we'll discover that lots of other providers do this too, including Gemini and DeepSeek.\n",
+"\n",
+"And then the team at OpenAI had a great idea: they can extend their client library so you can specify a different 'base url', and use their library to call any compatible API.\n",
+"\n",
+"That's it!\n",
+"\n",
+"So when you say: `ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')` \n",
+"Then this will make the same endpoint calls, but to Ollama instead of OpenAI."
+]
+},
 {
 "cell_type": "markdown",
 "id": "bc7d1de3-e2ac-46ff-a302-3b4ba38c4c90",

week1/troubleshooting.ipynb (4)

@@ -68,7 +68,7 @@
 "1. Permissions. Please take a look at this [tutorial](https://chatgpt.com/share/67b0ae58-d1a8-8012-82ca-74762b0408b0) on permissions on Windows\n",
 "2. Anti-virus, Firewall, VPN. These can interfere with installations and network access; try temporarily disabling them as needed\n",
 "3. The evil Windows 260 character limit to filenames - here is a full [explanation and fix](https://chatgpt.com/share/67b0afb9-1b60-8012-a9f7-f968a5a910c7)!\n",
-"4. If you've not worked with Data Science packages on your computer before, you might need to install Microsoft Build Tools. Here are [instructions](https://chatgpt.com/share/67b0b762-327c-8012-b809-b4ec3b9e7be0).\n",
+"4. If you've not worked with Data Science packages on your computer before, you might need to install Microsoft Build Tools. Here are [instructions](https://chatgpt.com/share/67b0b762-327c-8012-b809-b4ec3b9e7be0). A student also mentioned that [these instructions](https://github.com/bycloudai/InstallVSBuildToolsWindows) might be helpful for people on Windows 11. \n",
 "\n",
 "## And for Mac people\n",
 "\n",
@@ -127,7 +127,7 @@
 " print(f\"Environment Name: {venv_name}\")\n",
 "\n",
 "if conda_name != \"llms\" and venv_name != \"llms\" and venv_name != \"venv\":\n",
-" print(\"Neither Anaconda nor Virtualenv seem to be activated with the expected name 'llms'\")\n",
+" print(\"Neither Anaconda nor Virtualenv seem to be activated with the expected name 'llms' or 'venv'\")\n",
 " print(\"Did you run 'jupyter lab' from an activated environment with (llms) showing on the command line?\")\n",
 " print(\"If in doubt, close down all jupyter lab, and follow Part 5 in the SETUP-PC or SETUP-mac guide.\")"
 ]

week3/community-contributions/day4_OCR_model_exercise.ipynb (124)

@@ -1,37 +1,25 @@
 {
-"nbformat": 4,
-"nbformat_minor": 0,
-"metadata": {
-"colab": {
-"provenance": [],
-"gpuType": "T4",
-"authorship_tag": "ABX9TyPtAT7Yq5xd4vDcJEZtg69J"
-},
-"kernelspec": {
-"name": "python3",
-"display_name": "Python 3"
-},
-"language_info": {
-"name": "python"
-},
-"accelerator": "GPU"
-},
 "cells": [
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "6gGKXU5RXORf"
+},
+"outputs": [],
 "source": [
 "# getting the latest transformers first, since this will require a restart\n",
 "\n",
 "!pip install git+https://github.com/huggingface/transformers.git"
-],
-"metadata": {
-"id": "6gGKXU5RXORf"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "yCRrF4aiXPPo"
+},
+"outputs": [],
 "source": [
 "# imports\n",
 "\n",
@@ -40,26 +28,21 @@
 "from huggingface_hub import login\n",
 "from transformers import AutoProcessor, AutoModelForImageTextToText\n",
 "from google.colab import files"
-],
-"metadata": {
-"id": "yCRrF4aiXPPo"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "AAlOQuCbXcrv"
+},
+"outputs": [],
 "source": [
 "# logging in to HF\n",
 "\n",
 "hf_token = userdata.get('HF_TOKEN')\n",
 "login(hf_token, add_to_git_credential=True)"
-],
-"metadata": {
-"id": "AAlOQuCbXcrv"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
@@ -77,6 +60,11 @@
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "V_UAuSSkXBKh"
+},
+"outputs": [],
 "source": [
 "'''\n",
 "ChatGPT and Gemini explain the following part roughly like so:\n",
@@ -90,29 +78,29 @@
 "image = uploaded[image_path]\n",
 "with open(image_path, \"wb\") as f:\n",
 " f.write(image)"
-],
-"metadata": {
-"id": "V_UAuSSkXBKh"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "AiFP-mQtXrpV"
+},
+"outputs": [],
 "source": [
 "# from HF model instructions\n",
 "device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
 "model = AutoModelForImageTextToText.from_pretrained(\"stepfun-ai/GOT-OCR-2.0-hf\", device_map=device)\n",
 "processor = AutoProcessor.from_pretrained(\"stepfun-ai/GOT-OCR-2.0-hf\")"
-],
-"metadata": {
-"id": "AiFP-mQtXrpV"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "7Adr8HB_YNf5"
+},
+"outputs": [],
 "source": [
 "# also from HF documentation about this model, see https://huggingface.co/stepfun-ai/GOT-OCR-2.0-hf\n",
 "\n",
@@ -126,25 +114,47 @@
 " stop_strings=\"<|im_end|>\",\n",
 " max_new_tokens=4096,\n",
 ")"
-],
-"metadata": {
-"id": "7Adr8HB_YNf5"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "nRsRUIIuYdJ9"
+},
+"outputs": [],
 "source": [
 "# prints out the recognized text. This can read my handwriting pretty well! And it works super quick on the free T4 GPU server here.\n",
 "\n",
 "print(processor.decode(ocr[0, inputs[\"input_ids\"].shape[1]:], skip_special_tokens=True))"
 ]
 }
 ],
 "metadata": {
-"id": "nRsRUIIuYdJ9"
-},
-"execution_count": null,
-"outputs": []
+"accelerator": "GPU",
+"colab": {
+"authorship_tag": "ABX9TyPtAT7Yq5xd4vDcJEZtg69J",
+"gpuType": "T4",
+"provenance": []
+},
+"kernelspec": {
+"display_name": "Python 3 (ipykernel)",
+"language": "python",
+"name": "python3"
+},
+"language_info": {
+"codemirror_mode": {
+"name": "ipython",
+"version": 3
+},
+"file_extension": ".py",
+"mimetype": "text/x-python",
+"name": "python",
+"nbconvert_exporter": "python",
+"pygments_lexer": "ipython3",
+"version": "3.11.11"
+}
 }
-]
+},
+"nbformat": 4,
+"nbformat_minor": 4
 }

week3/community-contributions/day5_with_Gradio.ipynb (184)

@@ -1,23 +1,10 @@
 {
-"nbformat": 4,
-"nbformat_minor": 0,
-"metadata": {
-"colab": {
-"provenance": [],
-"gpuType": "T4"
-},
-"kernelspec": {
-"name": "python3",
-"display_name": "Python 3"
-},
-"language_info": {
-"name": "python"
-},
-"accelerator": "GPU"
-},
 "cells": [
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "It89APiAtTUF"
+},
 "source": [
 "# Create meeting minutes from an Audio file\n",
 "\n",
@@ -38,21 +25,18 @@
 "> ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
 "gcsfs 2024.10.0 requires fsspec==2024.10.0, but you have fsspec 2024.9.0 which is incompatible.\n",
 "\n"
-],
-"metadata": {
-"id": "It89APiAtTUF"
-}
+]
 },
 {
 "cell_type": "code",
-"source": [
-"!pip install -q requests torch bitsandbytes transformers sentencepiece accelerate openai httpx==0.27.2 gradio"
-],
+"execution_count": null,
 "metadata": {
 "id": "f2vvgnFpHpID"
 },
-"execution_count": null,
-"outputs": []
+"outputs": [],
+"source": [
+"!pip install -q requests torch bitsandbytes transformers sentencepiece accelerate openai httpx==0.27.2 gradio"
+]
 },
 {
 "cell_type": "code",
@@ -77,35 +61,38 @@
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "q3D1_T0uG_Qh"
+},
+"outputs": [],
 "source": [
 "# Constants\n",
 "\n",
 "AUDIO_MODEL = \"whisper-1\"\n",
 "LLAMA = \"meta-llama/Meta-Llama-3.1-8B-Instruct\""
-],
-"metadata": {
-"id": "q3D1_T0uG_Qh"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "Es9GkQ0FGCMt"
+},
+"outputs": [],
 "source": [
 "# New capability - connect this Colab to my Google Drive\n",
 "# See immediately below this for instructions to obtain denver_extract.mp3\n",
 "\n",
 "drive.mount(\"/content/drive\")\n",
 "audio_filename = \"/content/drive/MyDrive/llms/denver_extract.mp3\""
-],
-"metadata": {
-"id": "Es9GkQ0FGCMt"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "markdown",
+"metadata": {
+"id": "HTl3mcjyzIEE"
+},
 "source": [
 "# Download denver_extract.mp3\n",
 "\n",
@@ -113,41 +100,43 @@
 "\n",
 "If you want to use the same as me, then please download my extract here, and put this on your Google Drive: \n",
 "https://drive.google.com/file/d/1N_kpSojRR5RYzupz6nqM8hMSoEF_R7pU/view?usp=sharing\n"
-],
-"metadata": {
-"id": "HTl3mcjyzIEE"
-}
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "xYW8kQYtF-3L"
+},
+"outputs": [],
 "source": [
 "# Sign in to HuggingFace Hub\n",
 "\n",
 "hf_token = userdata.get('HF_TOKEN')\n",
 "login(hf_token, add_to_git_credential=True)"
-],
-"metadata": {
-"id": "xYW8kQYtF-3L"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "qP6OB2OeGC2C"
+},
+"outputs": [],
 "source": [
 "# Sign in to OpenAI using Secrets in Colab\n",
 "\n",
 "openai_api_key = userdata.get('OPENAI_API_KEY')\n",
 "openai = OpenAI(api_key=openai_api_key)"
-],
-"metadata": {
-"id": "qP6OB2OeGC2C"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "hgQBeIYUyaqj"
+},
+"outputs": [],
 "source": [
 "# Initialize Llama model and tokenizer\n",
 "\n",
@@ -166,15 +155,15 @@
 " device_map=\"auto\",\n",
 " quantization_config=quant_config\n",
 ")"
-],
-"metadata": {
-"id": "hgQBeIYUyaqj"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "u9aFA7tjy3Ri"
+},
+"outputs": [],
 "source": [
 "# Generate meeting minutes\n",
 "\n",
@@ -198,15 +187,15 @@
 " response = response.split(\"<|end_header_id|>\")[-1].strip().replace(\"<|eot_id|>\",\"\")\n",
 "\n",
 " return response"
-],
-"metadata": {
-"id": "u9aFA7tjy3Ri"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "OEuqR90Vy4AZ"
+},
+"outputs": [],
 "source": [
 "# Transcribe the uploaded audio file using OpenAI's Whisper model\n",
 "\n",
@@ -223,15 +212,15 @@
 " return transcription\n",
 " except Exception as e:\n",
 " return f\"Error during transcription: {str(e)}\""
-],
-"metadata": {
-"id": "OEuqR90Vy4AZ"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "lmdsy2iDy5d7"
+},
+"outputs": [],
 "source": [
 "# Process the uploaded audio file, transcribe it, and generate meeting minutes\n",
 "\n",
@@ -258,15 +247,15 @@
 "\n",
 " except Exception as e:\n",
 " return f\"Error processing file: {str(e)}\""
-],
-"metadata": {
-"id": "lmdsy2iDy5d7"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "k2U2bWtey7Yo"
+},
+"outputs": [],
 "source": [
 "# Create Gradio interface\n",
 "\n",
@@ -278,25 +267,46 @@
 " description=\"Upload an MP3 recording of your meeting to get AI-generated meeting minutes. This process may take a few minutes.\",\n",
 " flagging_mode=\"never\"\n",
 ")"
-],
-"metadata": {
-"id": "k2U2bWtey7Yo"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "X3JbzRNRy9oG"
+},
+"outputs": [],
 "source": [
 "# Launch Gradio interface\n",
 "\n",
 "interface.launch()"
+]
+}
 ],
 "metadata": {
-"id": "X3JbzRNRy9oG"
-},
-"execution_count": null,
-"outputs": []
+"accelerator": "GPU",
+"colab": {
+"gpuType": "T4",
+"provenance": []
+},
+"kernelspec": {
+"display_name": "Python 3 (ipykernel)",
+"language": "python",
+"name": "python3"
+},
+"language_info": {
+"codemirror_mode": {
+"name": "ipython",
+"version": 3
+},
+"file_extension": ".py",
+"mimetype": "text/x-python",
+"name": "python",
+"nbconvert_exporter": "python",
+"pygments_lexer": "ipython3",
+"version": "3.11.11"
+}
 }
-]
+},
+"nbformat": 4,
+"nbformat_minor": 4
 }

week3/community-contributions/en-de-fr_dataset_generator.ipynb (112)

@@ -1,21 +1,4 @@
 {
-"nbformat": 4,
-"nbformat_minor": 0,
-"metadata": {
-"colab": {
-"provenance": [],
-"gpuType": "T4",
-"authorship_tag": "ABX9TyPxJzufoQPtui+nhl1J1xiR"
-},
-"kernelspec": {
-"name": "python3",
-"display_name": "Python 3"
-},
-"language_info": {
-"name": "python"
-},
-"accelerator": "GPU"
-},
 "cells": [
 {
 "cell_type": "code",
@@ -30,6 +13,11 @@
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "eyfvQrLxdkGT"
+},
+"outputs": [],
 "source": [
 "import os\n",
 "import requests\n",
@@ -42,43 +30,43 @@
 "import torch\n",
 "import gradio as gr\n",
 "import re"
-],
-"metadata": {
-"id": "eyfvQrLxdkGT"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "WW-cSZk7dnp6"
+},
+"outputs": [],
 "source": [
 "# one can always add more models, of course\n",
 "\n",
 "LLAMA = \"meta-llama/Meta-Llama-3.1-8B-Instruct\"\n",
 "OPENAI_MODEL = \"gpt-4o-mini\""
-],
-"metadata": {
-"id": "WW-cSZk7dnp6"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "XG7Iam6Rdw8F"
+},
+"outputs": [],
 "source": [
 "hf_token = userdata.get('HF_TOKEN')\n",
 "login(hf_token, add_to_git_credential=True)\n",
 "openai_api_key = userdata.get('OPENAI_API_KEY')\n",
 "openai = OpenAI(api_key=openai_api_key)"
-],
-"metadata": {
-"id": "XG7Iam6Rdw8F"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "Ov7WSdx9dzSt"
+},
+"outputs": [],
 "source": [
 "force_dark_mode = \"\"\"\n",
 "function refresh() {\n",
@@ -89,15 +77,15 @@
 " }\n",
 "}\n",
 "\"\"\""
-],
-"metadata": {
-"id": "Ov7WSdx9dzSt"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "bEF8w_Mdd2Nb"
+},
+"outputs": [],
 "source": [
 "def dataset_generator(model, nature, shots, volume, language):\n",
 "\n",
@@ -174,15 +162,15 @@
 " sentences = response.choices[0].message.content\n",
 "\n",
 " return sentences"
-],
-"metadata": {
-"id": "bEF8w_Mdd2Nb"
-},
-"execution_count": null,
-"outputs": []
+]
 },
 {
 "cell_type": "code",
+"execution_count": null,
+"metadata": {
+"id": "VRKdu0fEt8mg"
+},
+"outputs": [],
 "source": [
 "global data\n",
 "data = \"\"\n",
@@ -311,12 +299,34 @@
 " save.click(saveData, inputs=outputPath, outputs=None).then(lambda: gr.update(value=\"Your data has been saved\", elem_classes=\"green-button\"), [], [save])\n",
 "\n",
 "view.launch(share=True) #, debug=True)"
+]
+}
 ],
 "metadata": {
-"id": "VRKdu0fEt8mg"
-},
-"execution_count": null,
-"outputs": []
+"accelerator": "GPU",
+"colab": {
+"authorship_tag": "ABX9TyPxJzufoQPtui+nhl1J1xiR",
+"gpuType": "T4",
+"provenance": []
+},
+"kernelspec": {
+"display_name": "Python 3 (ipykernel)",
+"language": "python",
+"name": "python3"
+},
+"language_info": {
+"codemirror_mode": {
+"name": "ipython",
+"version": 3
+},
+"file_extension": ".py",
+"mimetype": "text/x-python",
+"name": "python",
+"nbconvert_exporter": "python",
+"pygments_lexer": "ipython3",
+"version": "3.11.11"
+}
 }
-]
+},
+"nbformat": 4,
+"nbformat_minor": 4
 }

week3/community-contributions/synthetic_data_generator.ipynb (6)

@@ -387,7 +387,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "llm_engineering-yg2xCEUG",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -401,9 +401,9 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.8"
+"version": "3.11.11"
 }
 },
 "nbformat": 4,
-"nbformat_minor": 2
+"nbformat_minor": 4
 }

week5/community-contributions/day 5 - ollama_rag_1.ipynb (6)

@@ -202,7 +202,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "venv",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -216,9 +216,9 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.12.5"
+"version": "3.11.11"
 }
 },
 "nbformat": 4,
-"nbformat_minor": 2
+"nbformat_minor": 4
 }

week5/community-contributions/day3-gemini.ipynb (4)

@@ -3389,7 +3389,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "llms",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -3407,5 +3407,5 @@
 }
 },
 "nbformat": 4,
-"nbformat_minor": 2
+"nbformat_minor": 4
 }

week5/day2.ipynb (2)

@@ -78,7 +78,7 @@
 "# Read in documents using LangChain's loaders\n",
 "# Take everything in all the sub-folders of our knowledgebase\n",
 "\n",
-"folders = glob.glob(\"knowledge-base/*\")\n",
+"folders = glob.glob(\"knowledge-base/*/\")\n",
 "\n",
 "# With thanks to CG and Jon R, students on the course, for this fix needed for some users \n",
 "text_loader_kwargs = {'encoding': 'utf-8'}\n",

week6/day1.ipynb (22)

@@ -66,6 +66,22 @@
 "login(hf_token, add_to_git_credential=True)"
 ]
 },
+{
+"cell_type": "markdown",
+"id": "e7cb2e20-7fac-44c1-8a4b-131dd37ee06e",
+"metadata": {},
+"source": [
+"## One more import - the Item class\n",
+"\n",
+"If you get an error that you need to agree to Meta's terms when you run this, then follow the link it provides you and follow their instructions. You should get approved by Meta within minutes.\n",
+"\n",
+"See the last cell in [this colab](https://colab.research.google.com/drive/1deJO03YZTXUwcq2vzxWbiBhrRuI29Vo8?usp=sharing#scrollTo=FqyF5jZQkIl_) for steps to take if Meta doesn't approve.\n",
+"\n",
+"Any problems - message me or email me! \n",
+"\n",
+"With thanks to student Dr John S. for pointing out that this import needs to come after signing in to HF"
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -73,12 +89,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# One more import - the Item class\n",
-"# If you get an error that you need to agree to Meta's terms when you run this, then follow the link it provides you and follow their instructions\n",
-"# You should get approved by Meta within minutes\n",
-"# Any problems - message me or email me!\n",
-"# With thanks to student Dr John S. for pointing out that this import needs to come after signing in to HF\n",
-"\n",
 "from items import Item"
 ]
 },
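This reordering matters because `from items import Item` does work at import time that only succeeds after signing in to Hugging Face. A self-contained sketch of that failure mode, where the `needs_token` module and `DEMO_TOKEN` variable are hypothetical stand-ins for `items` and the HF token:

```python
import os
import sys
import tempfile
import textwrap

# A hypothetical module that, like items.py, does work at import time
# which only succeeds after you have "signed in".
module_dir = tempfile.mkdtemp()
with open(os.path.join(module_dir, "needs_token.py"), "w") as f:
    f.write(textwrap.dedent("""
        import os
        TOKEN = os.environ["DEMO_TOKEN"]  # raises KeyError until the token is set
    """))
sys.path.insert(0, module_dir)
os.environ.pop("DEMO_TOKEN", None)

try:
    import needs_token  # too early: we haven't "logged in" yet
except KeyError as e:
    print("import failed:", e)

os.environ["DEMO_TOKEN"] = "demo"  # the equivalent of login(hf_token, ...)
import needs_token                 # now the import-time work succeeds
print(needs_token.TOKEN)           # prints: demo
```

A module that raises during import is removed from `sys.modules`, which is why retrying the import after "logging in" works.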

week6/day2.ipynb (13)

@@ -43,7 +43,6 @@
 "from dotenv import load_dotenv\n",
 "from huggingface_hub import login\n",
 "from datasets import load_dataset, Dataset, DatasetDict\n",
-"from items import Item\n",
 "from loaders import ItemLoader\n",
 "import matplotlib.pyplot as plt\n",
 "from collections import Counter, defaultdict\n",
@@ -79,6 +78,18 @@
 "login(hf_token, add_to_git_credential=True)"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "6746144c-2e19-485a-8086-368c144722b4",
+"metadata": {},
+"outputs": [],
+"source": [
+"# One more import after HF login\n",
+"\n",
+"from items import Item"
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,

week6/day3.ipynb (13)

@@ -29,7 +29,6 @@
 "import random\n",
 "from dotenv import load_dotenv\n",
 "from huggingface_hub import login\n",
-"from items import Item\n",
 "import matplotlib.pyplot as plt\n",
 "import numpy as np\n",
 "import pickle\n",
@@ -137,6 +136,18 @@
 "login(hf_token, add_to_git_credential=True)"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "ff3942d8-b010-46b5-a665-15554eae9776",
+"metadata": {},
+"outputs": [],
+"source": [
+"# One more import after logging in\n",
+"\n",
+"from items import Item"
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,

week6/day4.ipynb (28)

@@ -38,7 +38,6 @@
 "import random\n",
 "from dotenv import load_dotenv\n",
 "from huggingface_hub import login\n",
-"from items import Item\n",
 "import matplotlib.pyplot as plt\n",
 "import numpy as np\n",
 "import pickle\n",
@@ -47,19 +46,6 @@
 "from anthropic import Anthropic"
 ]
 },
-{
-"cell_type": "code",
-"execution_count": null,
-"id": "21a3833e-4093-43b0-8f7b-839c50b911ea",
-"metadata": {},
-"outputs": [],
-"source": [
-"# moved our Tester into a separate package\n",
-"# call it with Tester.test(function_name, test_dataset)\n",
-"\n",
-"from testing import Tester"
-]
-},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -88,6 +74,20 @@
 "login(hf_token, add_to_git_credential=True)"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "6985bdc7-fa45-49a3-ae97-84bdeb9b2083",
+"metadata": {},
+"outputs": [],
+"source": [
+"# moved our Tester into a separate package\n",
+"# call it with Tester.test(function_name, test_dataset)\n",
+"\n",
+"from items import Item\n",
+"from testing import Tester"
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,

28
week6/day5.ipynb

@@ -30,7 +30,6 @@
 "import random\n",
 "from dotenv import load_dotenv\n",
 "from huggingface_hub import login\n",
-"from items import Item\n",
 "import matplotlib.pyplot as plt\n",
 "import numpy as np\n",
 "import pickle\n",
@@ -39,19 +38,6 @@
 "from anthropic import Anthropic"
 ]
 },
-{
-"cell_type": "code",
-"execution_count": null,
-"id": "21a3833e-4093-43b0-8f7b-839c50b911ea",
-"metadata": {},
-"outputs": [],
-"source": [
-"# moved our Tester into a separate package\n",
-"# call it with Tester.test(function_name, test_dataset)\n",
-"\n",
-"from testing import Tester"
-]
-},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -80,6 +66,20 @@
 "login(hf_token, add_to_git_credential=True)"
 ]
 },
+{
+"cell_type": "code",
+"execution_count": null,
+"id": "884a50bd-8cae-425e-8e56-f079fc3e65ce",
+"metadata": {},
+"outputs": [],
+"source": [
+"# moved our Tester into a separate package\n",
+"# call it with Tester.test(function_name, test_dataset)\n",
+"\n",
+"from items import Item\n",
+"from testing import Tester"
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,

2
week8/agents/ensemble_agent.py

@@ -43,6 +43,6 @@ class EnsembleAgent(Agent):
             'Min': [min(specialist, frontier, random_forest)],
             'Max': [max(specialist, frontier, random_forest)],
         })
-        y = self.model.predict(X)[0]
+        y = max(0, self.model.predict(X)[0])
        self.log(f"Ensemble Agent complete - returning ${y:.2f}")
        return y
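The one-line change above clamps the ensemble's output at zero before logging and returning it. The motivation (a reasonable inference, since the commit doesn't explain it) is that a linear blend of estimates can extrapolate below zero, but a price can't be negative. A minimal sketch, with weights invented purely for illustration rather than taken from the trained model:

```python
def clamp_prediction(raw: float) -> float:
    """A linear ensemble can extrapolate below zero; prices can't."""
    return max(0, raw)

# An unconstrained weighted blend of specialist / frontier / random-forest
# estimates (these weights are made up for the example) can dip negative:
raw = 0.6 * 2.50 + 0.7 * 1.00 - 1.1 * 3.00  # = -1.1
print(clamp_prediction(raw))  # prints 0
```

The same guard appears again below in `week8/day2.4.ipynb`, applied at the call site of `ensemble.price(...)`.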

15
week8/day1.ipynb

@@ -68,9 +68,22 @@
 "`modal token new` \n",
 "(Thank you Ed B. for that!)\n",
 "\n",
+"Another Windows student Minh N. mentioned you may need to use this approach, from an activated environment in the command line: \n",
+"`modal token set --token-id <your_token_id> --token-secret <your_token_secret>`\n",
+"\n",
 "Also, a student David S. mentioned the following: \n",
 "> In case anyone else using Windows hits this problem: Along with having to run `modal token new` from a command prompt, you have to move the generated token file. It will deploy the token file (.modal.toml) to your Windows profile folder. The virtual environment couldn't see that location (strangely, it couldn't even after I set environment variables for it and rebooted). I moved that token file to the folder I'm operating out of for the lab and it stopped throwing auth errors.\n",
 "\n",
+"And another Windows student (Robert M. - thank you!!) added another possible step:\n",
+"\n",
+"\n",
+"> I could not get modal to see my tokens (resulting in an 'auth error'), even after copying the \".modal.toml\" file to the \"week8\" folder and restarting JupyterLab. The fix was to manually set the environment variables (the standard way). This config method is explained by modal on their [web site](https://modal.com/docs/reference/modal.config) \n",
+"```\n",
+"import os\n",
+"os.environ[\"MODAL_TOKEN_ID\"] = \"xxx\"\n",
+"os.environ[\"MODAL_TOKEN_SECRET\"] = \"yyy\" \n",
+"```\n",
+"\n",
 "Finally: I've also heard that in some situations, you might need to restart the Kernel of this jupyter notebook after running this. (Kernel menu >> Restart Kernel and Clear Outputs of All Cells)."
 ]
 },
@@ -81,7 +94,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Remove the '# ' from the next line and run the cell\n",
+"# Remove the '# ' from the next line and run the cell, or run this command without the exclamation mark from an activated command prompt\n",
 "# !modal setup"
 ]
 },
}, },

2
week8/day2.4.ipynb

@@ -348,7 +348,7 @@
 "outputs": [],
 "source": [
 "def ensemble_pricer(item):\n",
-"    return ensemble.price(description(item))"
+"    return max(0,ensemble.price(description(item)))"
 ]
 },
 {

24
week8/day5.ipynb

@@ -156,6 +156,28 @@
 "!python price_is_right_final.py"
 ]
 },
+{
+"cell_type": "markdown",
+"id": "242d1243-fbec-4807-988b-8f70c8c9b806",
+"metadata": {},
+"source": [
+"<table style=\"margin: 0; text-align: left;\">\n",
+" <tr>\n",
+" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+" </td>\n",
+" <td>\n",
+" <h2 style=\"color:#900;\">But wait!! There's more..</h2>\n",
+" <span style=\"color:#900;\">If you're not fed up of product prices yet 😂 I've built this out some more!<br/>\n",
+" If you look in my repo <a href=\"https://github.com/ed-donner/tech2ai\">tech2ai</a>, in segment3/lab1 is a neural network implementation of the pricer in pure PyTorch. It does pretty well..<br/>\n",
+" And in segment4/agents is this same Agent project taken further. There's a new version of the PlanningAgent called AutonomousPlanningAgent that uses multiple Tools, and a MessagingAgent that uses claude-3.7 to write texts.<br/>\n",
+" You could experiment with similar ideas to build out this framework.\n",
+" </span>\n",
+" </td>\n",
+" </tr>\n",
+"</table>"
+]
+},
 {
 "cell_type": "markdown",
 "id": "331a2044-566f-4866-be4d-7542b7dfdf3f",
@@ -169,7 +191,7 @@
 " <td>\n",
 " <h2 style=\"color:#090;\">CONGRATULATIONS AND THANK YOU!!!</h2>\n",
 " <span style=\"color:#090;\">\n",
-" It's so fabulous that you've made it to the end! My heartiest congratulations. Please stay in touch! I'm <a href=\"https://www.linkedin.com/in/eddonner/\">here on LinkedIn</a> if we're not already connected and I'm on X at <a href=\"https://x.com/edwarddonner\">@edwarddonner</a>. And my editor would be cross with me if I didn't mention one more time: it makes a HUGE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others. <br/><br/>Massive thanks again for putting up with me for 8 weeks and getting all the way to the final cell! I'm excited to hear all about your career as an LLM Engineer. <b>You could not have picked a better time to be in this field.</b>\n",
+" It's so fabulous that you've made it to the very end! My heartiest congratulations. Please stay in touch! I'm <a href=\"https://www.linkedin.com/in/eddonner/\">here on LinkedIn</a> if we're not already connected and I'm on X at <a href=\"https://x.com/edwarddonner\">@edwarddonner</a>. And my editor would be cross with me if I didn't mention one more time: it makes a HUGE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others. <br/><br/>Massive thanks again for putting up with me for 8 weeks and getting all the way to the final cell! I'm excited to hear all about your career as an LLM Engineer. If you post on LinkedIn about completing the course and tag me, then I'll weigh in to amplify your achievement. <br/><b>You could not have picked a better time to be in this field.</b>\n",
 " </span>\n",
 " </td>\n",
 " </tr>\n",
