diff --git a/README.md b/README.md index 8f79257..a712e19 100644 --- a/README.md +++ b/README.md @@ -20,7 +20,7 @@ We will start the course by installing Ollama so you can see results immediately 1. Download and install Ollama from https://ollama.com noting that on a PC you might need to have administrator permissions for the install to work properly 2. On a PC, start a Command prompt / Powershell (Press Win + R, type `cmd`, and press Enter). On a Mac, start a Terminal (Applications > Utilities > Terminal). 3. Run `ollama run llama3.2` or for smaller machines try `ollama run llama3.2:1b` - **please note** steer clear of Meta's latest model llama3.3 because at 70B parameters that's way too large for most home computers! -4. If this doesn't work, you may need to run `ollama serve` in another Powershell (Windows) or Terminal (Mac), and try step 3 again +4. If this doesn't work, you may need to run `ollama serve` in another Powershell (Windows) or Terminal (Mac), and try step 3 again. On a PC, you may need to be running in an Admin instance of Powershell. 5. And if that doesn't work on your box, I've set up this on the cloud. This is on Google Colab, which will need you to have a Google account to sign in, but is free: https://colab.research.google.com/drive/1-_f5XZPsChvfU1sJ0QqCePtIuc55LSdu?usp=sharing Any problems, please contact me! diff --git a/SETUP-PC.md b/SETUP-PC.md index de3af5c..e87d0ff 100644 --- a/SETUP-PC.md +++ b/SETUP-PC.md @@ -92,11 +92,17 @@ Then, create a new virtual environment with this command: You should see (llms) in your command prompt, which is your sign that things are going well. 4. Run `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt` -This may take a few minutes to install. -In the very unlikely event that this doesn't go well, you should try the bullet-proof (but slower) version: +This may take a few minutes to install. +5. If you see an error like this: + +> Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": [https://visualstudio.microsoft.com/visual-cpp-build-tools/](https://visualstudio.microsoft.com/visual-cpp-build-tools/) + +Then please follow the link and install Microsoft C++ Build Tools. + +In the very unlikely event that the `pip install` doesn't go well, you should try the bullet-proof (but slower) version: `pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt` -5. **Start Jupyter Lab:** +6. **Start Jupyter Lab:** From within the `llm_engineering` folder, type: `jupyter lab` ...and Jupyter Lab should open up, ready for you to get started. Open the `week1` folder and double click on `day1.ipynb`. Success! Now close down jupyter lab and move on to Part 3.
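If you'd like to sanity-check the Ollama install from Python rather than the command line, here is a minimal sketch that calls Ollama's local REST API directly. It assumes the default port (11434) and that you pulled `llama3.2` in step 3:

```python
# Minimal check that the local Ollama server is up and responding.
# Assumes the default port 11434 and the llama3.2 model from step 3.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Say hello!", "stream": False},
)
response.raise_for_status()
print(response.json()["response"])  # the model's reply as plain text
```

A connection error here means the server isn't running, which is exactly the situation that `ollama serve` in step 4 is meant to fix.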
diff --git a/week2/day1.ipynb b/week2/day1.ipynb index 3a7a79b..7371667 100644 --- a/week2/day1.ipynb +++ b/week2/day1.ipynb @@ -300,6 +300,7 @@ "source": [ "# Claude 3.5 Sonnet again\n", "# Now let's add in streaming back results\n", + "# If the streaming looks strange, then please see the note below this cell!\n", "\n", "result = claude.messages.stream(\n", " model=\"claude-3-5-sonnet-latest\",\n", @@ -316,6 +317,27 @@ " print(text, end=\"\", flush=True)" ] }, + { + "cell_type": "markdown", + "id": "dd1e17bc-cd46-4c23-b639-0c7b748e6c5a", + "metadata": {}, + "source": [ + "## A rare problem with Claude streaming on some Windows boxes\n", + "\n", + "2 students have noticed a strange thing happening with Claude's streaming into Jupyter Lab's output -- it sometimes seems to swallow up parts of the response.\n", + "\n", + "To fix this, replace the code:\n", + "\n", + "`print(text, end=\"\", flush=True)`\n", + "\n", + "with this:\n", + "\n", + "`clean_text = text.replace(\"\\n\", \" \").replace(\"\\r\", \" \")` \n", + "`print(clean_text, end=\"\", flush=True)`\n", + "\n", + "And it should work fine!" + ] + }, { "cell_type": "code", "execution_count": null, diff --git a/week2/day2.ipynb b/week2/day2.ipynb index bf5367f..133ca0f 100644 --- a/week2/day2.ipynb +++ b/week2/day2.ipynb @@ -130,6 +130,8 @@ "metadata": {}, "outputs": [], "source": [ + "# This can reveal the \"training cut off\", or the most recent date in the training data\n", + "\n", "message_gpt(\"What is today's date?\")" ] }, diff --git a/week3/community-contributions/dataset_generator.ipynb b/week3/community-contributions/dataset_generator.ipynb index eda1b9f..0802303 100644 --- a/week3/community-contributions/dataset_generator.ipynb +++ b/week3/community-contributions/dataset_generator.ipynb @@ -1,267 +1,277 @@ { - "nbformat": 4, - "nbformat_minor": 0, - "metadata": { - "colab": { - "provenance": [], - "gpuType": "T4" - }, - "kernelspec": { - "name": "python3", - "display_name": "Python 3" - }, - "language_info": { - "name": "python" - }, - "accelerator": "GPU" + "cells": [ + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "kU2JrcPlhwd9" + }, + "outputs": [], + "source": [ + "!pip install -q requests torch bitsandbytes transformers sentencepiece accelerate gradio" + ] }, - "cells": [ - { - "cell_type": "code", - "source": [ - "!pip install -q requests torch bitsandbytes transformers sentencepiece accelerate gradio" - ], - "metadata": { - "id": "kU2JrcPlhwd9" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "**Imports**" - ], - "metadata": { - "id": "lAMIVT4iwNg0" - } - }, - { - "cell_type": "code", - "source": [ - "import os\n", - "import requests\n", - "from google.colab import drive\n", - "from huggingface_hub import login\n", - "from google.colab import userdata\n", - "from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, BitsAndBytesConfig\n", - "import torch\n", - "import gradio as gr\n", - "\n", - "hf_token = userdata.get('HF_TOKEN')\n", - "login(hf_token, add_to_git_credential=True)" - ], - "metadata": { - "id": "-Apd7-p-hyLk" - }, - "execution_count": 2, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "**Model**" - ], - "metadata": { - "id": "xa0qYqZrwQ66" - } - }, - { - "cell_type": "code", - "source": [ - "model_name = \"meta-llama/Meta-Llama-3.1-8B-Instruct\"\n", - "quant_config = BitsAndBytesConfig(\n", - " load_in_4bit=True,\n", - " bnb_4bit_use_double_quant=True,\n", - " 
bnb_4bit_compute_dtype=torch.bfloat16,\n", - " bnb_4bit_quant_type=\"nf4\"\n", - ")\n", - "\n", - "model = AutoModelForCausalLM.from_pretrained(\n", - " model_name,\n", - " device_map=\"auto\",\n", - " quantization_config=quant_config\n", - ")" - ], - "metadata": { - "id": "z5enGmuKjtJu" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "**Tokenizer**" - ], - "metadata": { - "id": "y1hUSmWlwSbp" - } - }, - { - "cell_type": "code", - "source": [ - "tokenizer = AutoTokenizer.from_pretrained(model_name)\n", - "tokenizer.pad_token = tokenizer.eos_token" - ], - "metadata": { - "id": "WjxNWW6bvdgj" - }, - "execution_count": 4, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "**Functions**" - ], - "metadata": { - "id": "1pg2U-B3wbIK" - } - }, - { - "cell_type": "code", - "source": [ - "def generate_dataset(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3):\n", - " # Convert user inputs into multi-shot examples\n", - " multi_shot_examples = [\n", - " {\"instruction\": inst1, \"response\": resp1},\n", - " {\"instruction\": inst2, \"response\": resp2},\n", - " {\"instruction\": inst3, \"response\": resp3}\n", - " ]\n", - "\n", - " # System prompt\n", - " system_prompt = f\"\"\"\n", - " You are a helpful assistant whose main purpose is to generate datasets.\n", - " Topic: {topic}\n", - " Return the dataset in JSON format. Use examples with simple, fun, and easy-to-understand instructions for kids.\n", - " Include the following examples: {multi_shot_examples}\n", - " Return {number_of_data} examples each time.\n", - " Do not repeat the provided examples.\n", - " \"\"\"\n", - "\n", - " # Example Messages\n", - " messages = [\n", - " {\"role\": \"system\", \"content\": system_prompt},\n", - " {\"role\": \"user\", \"content\": f\"Please generate my dataset for {topic}\"}\n", - " ]\n", - "\n", - " # Tokenize Input\n", - " inputs = tokenizer.apply_chat_template(messages, return_tensors=\"pt\").to(\"cuda\")\n", - " streamer = TextStreamer(tokenizer)\n", - "\n", - " # Generate Output\n", - " outputs = model.generate(inputs, max_new_tokens=2000, streamer=streamer)\n", - "\n", - " # Decode and Return\n", - " return tokenizer.decode(outputs[0], skip_special_tokens=True)\n", - "\n", - "\n", - "def gradio_interface(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3):\n", - " return generate_dataset(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3)" - ], - "metadata": { - "id": "ZvljDKdji8iV" - }, - "execution_count": 12, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "**Default Values**" - ], - "metadata": { - "id": "_WDZ5dvRwmng" - } - }, - { - "cell_type": "code", - "source": [ - "default_topic = \"Talking to a (5-8) years old and teaching them manners.\"\n", - "default_number_of_data = 10\n", - "default_multi_shot_examples = [\n", - " {\n", - " \"instruction\": \"Why do I have to say please when I want something?\",\n", - " \"response\": \"Because it’s like magic! It shows you’re nice, and people want to help you more.\"\n", - " },\n", - " {\n", - " \"instruction\": \"What should I say if someone gives me a toy?\",\n", - " \"response\": \"You say, 'Thank you!' 
because it makes them happy you liked it.\"\n", - " },\n", - " {\n", - " \"instruction\": \"why should I listen to my parents?\",\n", - " \"response\": \"Because parents want the best for you and they love you the most.\"\n", - " }\n", - "]" - ], - "metadata": { - "id": "JAdfqYXnvEDE" - }, - "execution_count": 13, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "**Init gradio**" - ], - "metadata": { - "id": "JwZtD032wuK8" - } - }, - { - "cell_type": "code", - "source": [ - "gr_interface = gr.Interface(\n", - " fn=gradio_interface,\n", - " inputs=[\n", - " gr.Textbox(label=\"Topic\", value=default_topic, lines=2),\n", - " gr.Number(label=\"Number of Examples\", value=default_number_of_data, precision=0),\n", - " gr.Textbox(label=\"Instruction 1\", value=default_multi_shot_examples[0][\"instruction\"]),\n", - " gr.Textbox(label=\"Response 1\", value=default_multi_shot_examples[0][\"response\"]),\n", - " gr.Textbox(label=\"Instruction 2\", value=default_multi_shot_examples[1][\"instruction\"]),\n", - " gr.Textbox(label=\"Response 2\", value=default_multi_shot_examples[1][\"response\"]),\n", - " gr.Textbox(label=\"Instruction 3\", value=default_multi_shot_examples[2][\"instruction\"]),\n", - " gr.Textbox(label=\"Response 3\", value=default_multi_shot_examples[2][\"response\"]),\n", - " ],\n", - " outputs=gr.Textbox(label=\"Generated Dataset\")\n", - ")" - ], - "metadata": { - "id": "xy2RP5T-vxXg" - }, - "execution_count": 14, - "outputs": [] - }, - { - "cell_type": "markdown", - "source": [ - "**Run the app**" - ], - "metadata": { - "id": "HZx-mm9Uw3Ph" - } - }, - { - "cell_type": "code", - "source": [ - "gr_interface.launch()" - ], - "metadata": { - "id": "bfGs5ip8mndg" - }, - "execution_count": null, - "outputs": [] - }, - { - "cell_type": "code", - "source": [], - "metadata": { - "id": "Cveqx392x7Mm" - }, - "execution_count": null, - "outputs": [] - } - ] -} \ No newline at end of file + { + "cell_type": "markdown", + "metadata": { + "id": "lAMIVT4iwNg0" + }, + "source": [ + "**Imports**" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": { + "id": "-Apd7-p-hyLk" + }, + "outputs": [], + "source": [ + "import os\n", + "import requests\n", + "from google.colab import drive\n", + "from huggingface_hub import login\n", + "from google.colab import userdata\n", + "from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, BitsAndBytesConfig\n", + "import torch\n", + "import gradio as gr\n", + "\n", + "hf_token = userdata.get('HF_TOKEN')\n", + "login(hf_token, add_to_git_credential=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "xa0qYqZrwQ66" + }, + "source": [ + "**Model**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "z5enGmuKjtJu" + }, + "outputs": [], + "source": [ + "model_name = \"meta-llama/Meta-Llama-3.1-8B-Instruct\"\n", + "quant_config = BitsAndBytesConfig(\n", + " load_in_4bit=True,\n", + " bnb_4bit_use_double_quant=True,\n", + " bnb_4bit_compute_dtype=torch.bfloat16,\n", + " bnb_4bit_quant_type=\"nf4\"\n", + ")\n", + "\n", + "model = AutoModelForCausalLM.from_pretrained(\n", + " model_name,\n", + " device_map=\"auto\",\n", + " quantization_config=quant_config\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "y1hUSmWlwSbp" + }, + "source": [ + "**Tokenizer**" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": { + "id": "WjxNWW6bvdgj" + }, + "outputs": [], + "source": [ + "tokenizer = 
AutoTokenizer.from_pretrained(model_name)\n", + "tokenizer.pad_token = tokenizer.eos_token" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "1pg2U-B3wbIK" + }, + "source": [ + "**Functions**" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": { + "id": "ZvljDKdji8iV" + }, + "outputs": [], + "source": [ + "def generate_dataset(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3):\n", + " # Convert user inputs into multi-shot examples\n", + " multi_shot_examples = [\n", + " {\"instruction\": inst1, \"response\": resp1},\n", + " {\"instruction\": inst2, \"response\": resp2},\n", + " {\"instruction\": inst3, \"response\": resp3}\n", + " ]\n", + "\n", + " # System prompt\n", + " system_prompt = f\"\"\"\n", + " You are a helpful assistant whose main purpose is to generate datasets.\n", + " Topic: {topic}\n", + " Return the dataset in JSON format. Use examples with simple, fun, and easy-to-understand instructions for kids.\n", + " Include the following examples: {multi_shot_examples}\n", + " Return {number_of_data} examples each time.\n", + " Do not repeat the provided examples.\n", + " \"\"\"\n", + "\n", + " # Example Messages\n", + " messages = [\n", + " {\"role\": \"system\", \"content\": system_prompt},\n", + " {\"role\": \"user\", \"content\": f\"Please generate my dataset for {topic}\"}\n", + " ]\n", + "\n", + " # Tokenize Input\n", + " inputs = tokenizer.apply_chat_template(messages, return_tensors=\"pt\").to(\"cuda\")\n", + " streamer = TextStreamer(tokenizer)\n", + "\n", + " # Generate Output\n", + " outputs = model.generate(inputs, max_new_tokens=2000, streamer=streamer)\n", + "\n", + " # Decode and Return\n", + " return tokenizer.decode(outputs[0], skip_special_tokens=True)\n", + "\n", + "\n", + "def gradio_interface(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3):\n", + " return generate_dataset(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "_WDZ5dvRwmng" + }, + "source": [ + "**Default Values**" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": { + "id": "JAdfqYXnvEDE" + }, + "outputs": [], + "source": [ + "default_topic = \"Talking to a (5-8) years old and teaching them manners.\"\n", + "default_number_of_data = 10\n", + "default_multi_shot_examples = [\n", + " {\n", + " \"instruction\": \"Why do I have to say please when I want something?\",\n", + " \"response\": \"Because it’s like magic! It shows you’re nice, and people want to help you more.\"\n", + " },\n", + " {\n", + " \"instruction\": \"What should I say if someone gives me a toy?\",\n", + " \"response\": \"You say, 'Thank you!' 
because it makes them happy you liked it.\"\n", + " },\n", + " {\n", + " \"instruction\": \"why should I listen to my parents?\",\n", + " \"response\": \"Because parents want the best for you and they love you the most.\"\n", + " }\n", + "]" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JwZtD032wuK8" + }, + "source": [ + "**Init gradio**" + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": { + "id": "xy2RP5T-vxXg" + }, + "outputs": [], + "source": [ + "gr_interface = gr.Interface(\n", + " fn=gradio_interface,\n", + " inputs=[\n", + " gr.Textbox(label=\"Topic\", value=default_topic, lines=2),\n", + " gr.Number(label=\"Number of Examples\", value=default_number_of_data, precision=0),\n", + " gr.Textbox(label=\"Instruction 1\", value=default_multi_shot_examples[0][\"instruction\"]),\n", + " gr.Textbox(label=\"Response 1\", value=default_multi_shot_examples[0][\"response\"]),\n", + " gr.Textbox(label=\"Instruction 2\", value=default_multi_shot_examples[1][\"instruction\"]),\n", + " gr.Textbox(label=\"Response 2\", value=default_multi_shot_examples[1][\"response\"]),\n", + " gr.Textbox(label=\"Instruction 3\", value=default_multi_shot_examples[2][\"instruction\"]),\n", + " gr.Textbox(label=\"Response 3\", value=default_multi_shot_examples[2][\"response\"]),\n", + " ],\n", + " outputs=gr.Textbox(label=\"Generated Dataset\")\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "HZx-mm9Uw3Ph" + }, + "source": [ + "**Run the app**" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "bfGs5ip8mndg" + }, + "outputs": [], + "source": [ + "gr_interface.launch()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "Cveqx392x7Mm" + }, + "outputs": [], + "source": [] + } + ], + "metadata": { + "accelerator": "GPU", + "colab": { + "gpuType": "T4", + "provenance": [] + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.11" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/week5/day4.5.ipynb b/week5/day4.5.ipynb index a02b9cd..9027a28 100644 --- a/week5/day4.5.ipynb +++ b/week5/day4.5.ipynb @@ -14,7 +14,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77", "metadata": {}, "outputs": [], @@ -29,7 +29,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 2, "id": "802137aa-8a74-45e0-a487-d1974927d7ca", "metadata": {}, "outputs": [], @@ -51,7 +51,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "58c85082-e417-4708-9efe-81a5d55d1424", "metadata": {}, "outputs": [], @@ -64,7 +64,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "id": "ee78efcb-60fe-449e-a944-40bab26261af", "metadata": {}, "outputs": [], @@ -77,7 +77,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "id": "730711a9-6ffe-4eee-8f48-d6cfb7314905", "metadata": {}, "outputs": [], @@ -104,10 +104,18 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stderr", + 
"output_type": "stream", + "text": [ + "Created a chunk of size 1088, which is longer than the specified 1000\n" + ] + } + ], "source": [ "text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", "chunks = text_splitter.split_documents(documents)" @@ -115,20 +123,39 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 7, "id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb", "metadata": {}, - "outputs": [], + "outputs": [ + { + "data": { + "text/plain": [ + "123" + ] + }, + "execution_count": 7, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ "len(chunks)" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "id": "2c54b4b6-06da-463d-bee7-4dd456c2b887", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Document types found: company, employees, contracts, products\n" + ] + } + ], "source": [ "doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n", "print(f\"Document types found: {', '.join(doc_types)}\")" @@ -157,10 +184,18 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 9, "id": "78998399-ac17-4e28-b15f-0b5f51e6ee23", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "There are 123 vectors with 1,536 dimensions in the vector store\n" + ] + } + ], "source": [ "# Put the chunks of data into a Vector Store that associates a Vector Embedding with each chunk\n", "# Chroma is a popular open source Vector Database based on SQLLite\n", diff --git a/week8/day1.ipynb b/week8/day1.ipynb index a6e54b2..5da624a 100644 --- a/week8/day1.ipynb +++ b/week8/day1.ipynb @@ -58,13 +58,20 @@ "# Setting up the modal tokens\n", "\n", "The first time you run this, please uncomment the next line and execute it. \n", - "This is the same as running `modal setup` from the command line. It connects with Modal and installs your tokens.\n", + "This is the same as running `modal setup` from the command line. It connects with Modal and installs your tokens. \n", "\n", - "A student on Windows mentioned that on Windows, you might also need to run this command from a command prompt afterwards: \n", + "## Debugging some common problems on Windows\n", + "\n", + "If this command fails in the next cell, or if any of the modal commands with the `!` fail, please try running them directly on the command line in an activated environment (without the `!`)\n", + "\n", + "A student on Windows mentioned that on Windows, you might also need to run this command from a command prompt in an activated environment afterwards: \n", "`modal token new` \n", "(Thank you Ed B. for that!)\n", "\n", - "And I've also heard that in some situations, you might need to restart the Kernel of this jupyter notebook after running this. (Kernel menu >> Restart Kernel and Clear Outputs of All Cells)." + "Also, a student David S. mentioned the following: \n", + "> In case anyone else using Windows hits this problem: Along with having to run `modal token new` from a command prompt, you have to move the generated token file. It will deploy the token file (.modal.toml) to your Windows profile folder. The virtual environment couldn't see that location (strangely, it couldn't even after I set environment variables for it and rebooted). 
I moved that token file to the folder I'm operating out of for the lab and it stopped throwing auth errors.\n", + "\n", + "Finally: I've also heard that in some situations, you might need to restart the kernel of this Jupyter notebook after running this (Kernel menu >> Restart Kernel and Clear Outputs of All Cells)." ] }, {
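**A note on the week2/day1 streaming fix:** for reference, here is what the whole streaming cell looks like with the Windows-safe replacement applied. This is a sketch: `claude`, `system_message` and `user_prompt` are assumed to be defined earlier in that notebook, and the parameter values mirror the cell in the diff.

```python
# The week2/day1 streaming loop with the Windows-safe print applied.
# Assumes `claude` is an anthropic.Anthropic() client, and that system_message
# and user_prompt are defined earlier in the notebook.
result = claude.messages.stream(
    model="claude-3-5-sonnet-latest",
    max_tokens=200,
    temperature=0.7,
    system=system_message,
    messages=[{"role": "user", "content": user_prompt}],
)
with result as stream:
    for text in stream.text_stream:
        # Replace the characters that trip up some Windows consoles
        clean_text = text.replace("\n", " ").replace("\r", " ")
        print(clean_text, end="", flush=True)
```

The trade-off is that the streamed output loses its line breaks, but the text itself arrives intact.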
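**A note on the week2/day2 training cut-off cell:** `message_gpt` is defined earlier in that notebook; a rough sketch of such a helper is below, with the model name being an assumption here. Because the model has no clock, asking it for today's date tends to surface the most recent dates in its training data, which is what the new comment is pointing out.

```python
# A sketch of a message_gpt-style helper, assuming the OpenAI Python SDK and
# an OPENAI_API_KEY in the environment; the model name is an assumption.
from openai import OpenAI

openai = OpenAI()

def message_gpt(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    completion = openai.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return completion.choices[0].message.content

print(message_gpt("What is today's date?"))
```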
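**A note on the week5/day4.5 chunking warning:** the captured stderr line (`Created a chunk of size 1088, which is longer than the specified 1000`) is expected behaviour rather than a bug. `CharacterTextSplitter` only cuts at its separator (blank lines by default), so a single passage longer than `chunk_size` comes through as one oversized chunk. If you want the limit enforced more aggressively, LangChain's recursive splitter falls back to progressively smaller separators; a sketch, assuming the same `documents` list as in the notebook:

```python
# RecursiveCharacterTextSplitter tries "\n\n", then "\n", then spaces, then
# single characters, so chunks stay much closer to the requested chunk_size.
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = text_splitter.split_documents(documents)
```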
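**A note on the week5/day4.5 vector store output:** the `1,536 dimensions` in the captured output is the embedding width of OpenAI's `text-embedding-ada-002` (and `text-embedding-3-small`) models. Below is a sketch of the kind of check that produces that line, assuming a LangChain Chroma store built from the `chunks` above; note that `_collection` reaches into Chroma's underlying collection, which is technically a private attribute.

```python
# Build the vector store (one embedding per chunk) and inspect its size.
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma.from_documents(
    documents=chunks, embedding=OpenAIEmbeddings(), persist_directory="vector_db"
)

collection = vectorstore._collection  # the underlying chromadb collection
sample = collection.get(limit=1, include=["embeddings"])["embeddings"][0]
print(f"There are {collection.count():,} vectors with {len(sample):,} dimensions in the vector store")
```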
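**A note on the week8/day1 Modal token problem:** besides moving `.modal.toml` as David S. describes, Modal also supports supplying credentials through the `MODAL_TOKEN_ID` and `MODAL_TOKEN_SECRET` environment variables. Since David S. reported that environment variables didn't work on his machine, treat this as a second thing to try rather than a guaranteed fix. The values below are hypothetical placeholders; paste the id and secret that `modal token new` printed for you.

```python
# Hypothetical placeholder values -- substitute your real token id and secret.
import os

os.environ["MODAL_TOKEN_ID"] = "ak-..."
os.environ["MODAL_TOKEN_SECRET"] = "as-..."

import modal  # import after setting the variables so Modal picks them up
```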