diff --git a/week1/day2 EXERCISE_ollama_llama3.ipynb b/week1/community-contributions/day2 EXERCISE_ollama_llama3.ipynb
similarity index 100%
rename from week1/day2 EXERCISE_ollama_llama3.ipynb
rename to week1/community-contributions/day2 EXERCISE_ollama_llama3.ipynb
diff --git a/week1/day2 EXERCISE.ipynb b/week1/day2 EXERCISE.ipynb
new file mode 100644
index 0000000..81077ed
--- /dev/null
+++ b/week1/day2 EXERCISE.ipynb	
@@ -0,0 +1,286 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
+   "metadata": {},
+   "source": [
+    "# Welcome to your first assignment!\n",
+    "\n",
+    "Instructions are below. Please give this a try, and look in the solutions folder if you get stuck (or feel free to ask me!)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "ada885d9-4d42-4d9b-97f0-74fbbbfe93a9",
+   "metadata": {},
+   "source": [
+    "<table style=\"margin: 0; text-align: left;\">\n",
+    "    <tr>\n",
+    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+    "            <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+    "        </td>\n",
+    "        <td>\n",
+    "            <h2 style=\"color:#f71;\">Just before we get to the assignment --</h2>\n",
+    "            <span style=\"color:#f71;\">I thought I'd take a second to point you at this page of useful resources for the course. This includes links to all the slides.<br/>\n",
+    "            <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
+    "            Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
+    "            </span>\n",
+    "        </td>\n",
+    "    </tr>\n",
+    "</table>"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458",
+   "metadata": {},
+   "source": [
+    "# HOMEWORK EXERCISE ASSIGNMENT\n",
+    "\n",
+    "Upgrade the day 1 project to summarize a webpage to use an Open Source model running locally via Ollama rather than OpenAI\n",
+    "\n",
+    "You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n",
+    "\n",
+    "**Benefits:**\n",
+    "1. No API charges - open-source\n",
+    "2. Data doesn't leave your box\n",
+    "\n",
+    "**Disadvantages:**\n",
+    "1. Significantly less power than Frontier Model\n",
+    "\n",
+    "## Recap on installation of Ollama\n",
+    "\n",
+    "Simply visit [ollama.com](https://ollama.com) and install!\n",
+    "\n",
+    "Once complete, the ollama server should already be running locally.  \n",
+    "If you visit:  \n",
+    "[http://localhost:11434/](http://localhost:11434/)\n",
+    "\n",
+    "You should see the message `Ollama is running`.  \n",
+    "\n",
+    "If not, bring up a new Terminal (Mac) or Powershell (Windows) and enter `ollama serve`  \n",
+    "And in another Terminal (Mac) or Powershell (Windows), enter `ollama pull llama3.2`  \n",
+    "Then try [http://localhost:11434/](http://localhost:11434/) again.\n",
+    "\n",
+    "If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or Powershell, and change the code below from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`"
+   ]
+  },
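+  {
+   "cell_type": "markdown",
+   "id": "9c7f3e2a-1b4d-4c8e-a5f6-2d3e4b5c6a70",
+   "metadata": {},
+   "source": [
+    "Before moving on, it's worth checking the server from Python too. Below is a minimal sketch (assuming Ollama is on its default port 11434): it should print `Ollama is running`, and a connection error means it's back to the recap above."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "5e8d2c4b-7a6f-4b3e-9c1d-8f0a2b4c6e81",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Quick connectivity check - assumes the default Ollama port 11434\n",
+    "\n",
+    "import requests\n",
+    "\n",
+    "print(requests.get(\"http://localhost:11434/\").text)"
+   ]
+  },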
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# imports\n",
+    "\n",
+    "import requests\n",
+    "from bs4 import BeautifulSoup\n",
+    "from IPython.display import Markdown, display"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "29ddd15d-a3c5-4f4e-a678-873f56162724",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Constants\n",
+    "\n",
+    "OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
+    "HEADERS = {\"Content-Type\": \"application/json\"}\n",
+    "MODEL = \"llama3.2\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "dac0a679-599c-441f-9bf2-ddc73d35b940",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Create a messages list using the same format that we used for OpenAI\n",
+    "\n",
+    "messages = [\n",
+    "    {\"role\": \"user\", \"content\": \"Describe some of the business applications of Generative AI\"}\n",
+    "]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "7bb9c624-14f0-4945-a719-8ddb64f66f47",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "payload = {\n",
+    "        \"model\": MODEL,\n",
+    "        \"messages\": messages,\n",
+    "        \"stream\": False\n",
+    "    }"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "479ff514-e8bd-4985-a572-2ea28bb4fa40",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Let's just make sure the model is loaded\n",
+    "\n",
+    "!ollama pull llama3.2"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "42b9f644-522d-4e05-a691-56e7658c0ea9",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# If this doesn't work for any reason, try the 2 versions in the following cells\n",
+    "# And double check the instructions in the 'Recap on installation of Ollama' at the top of this lab\n",
+    "# And if none of that works - contact me!\n",
+    "\n",
+    "response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
+    "print(response.json()['message']['content'])"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "6a021f13-d6a1-4b96-8e18-4eae49d876fe",
+   "metadata": {},
+   "source": [
+    "# Introducing the ollama package\n",
+    "\n",
+    "And now we'll do the same thing, but using the elegant ollama python package instead of a direct HTTP call.\n",
+    "\n",
+    "Under the hood, it's making the same call as above to the ollama server running at localhost:11434"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "7745b9c4-57dc-4867-9180-61fa5db55eb8",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import ollama\n",
+    "\n",
+    "response = ollama.chat(model=MODEL, messages=messages)\n",
+    "print(response['message']['content'])"
+   ]
+  },
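+  {
+   "cell_type": "markdown",
+   "id": "2e5f8a1b-9c0d-4e2f-a3b4-c5d6e7f8a9b0",
+   "metadata": {},
+   "source": [
+    "The package can also stream the reply as it's generated, which feels much more responsive for local models. Here's a short sketch reusing the `MODEL` and `messages` defined above; with `stream=True`, `ollama.chat` returns a generator of partial responses."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "9d4c3b2a-1e0f-4a5b-8c7d-6e5f4a3b2c10",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Streaming variant - each chunk carries a fragment of the reply\n",
+    "\n",
+    "for chunk in ollama.chat(model=MODEL, messages=messages, stream=True):\n",
+    "    print(chunk['message']['content'], end='', flush=True)"
+   ]
+  },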
+  {
+   "cell_type": "markdown",
+   "id": "a4704e10-f5fb-4c15-a935-f046c06fb13d",
+   "metadata": {},
+   "source": [
+    "## Alternative approach - using OpenAI python library to connect to Ollama"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "23057e00-b6fc-4678-93a9-6b31cb704bff",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# There's actually an alternative approach that some people might prefer\n",
+    "# You can use the OpenAI client python library to call Ollama:\n",
+    "\n",
+    "from openai import OpenAI\n",
+    "ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
+    "\n",
+    "response = ollama_via_openai.chat.completions.create(\n",
+    "    model=MODEL,\n",
+    "    messages=messages\n",
+    ")\n",
+    "\n",
+    "print(response.choices[0].message.content)"
+   ]
+  },
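+  {
+   "cell_type": "markdown",
+   "id": "c1d2e3f4-a5b6-4c7d-8e9f-0a1b2c3d4e5f",
+   "metadata": {},
+   "source": [
+    "One detail worth noting: the OpenAI client insists on some `api_key` value, but Ollama's OpenAI-compatible endpoint ignores it, so any placeholder string such as `'ollama'` works. The only thing that really matters is pointing `base_url` at your local server."
+   ]
+  },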
+  {
+   "cell_type": "markdown",
+   "id": "bc7d1de3-e2ac-46ff-a302-3b4ba38c4c90",
+   "metadata": {},
+   "source": [
+    "## Also trying the amazing reasoning model DeepSeek\n",
+    "\n",
+    "Here we use the version of DeepSeek-reasoner that's been distilled to 1.5B.  \n",
+    "This is actually a 1.5B variant of Qwen that has been fine-tuned using synethic data generated by Deepseek R1.\n",
+    "\n",
+    "Other sizes of DeepSeek are [here](https://ollama.com/library/deepseek-r1) all the way up to the full 671B parameter version, which would use up 404GB of your drive and is far too large for most!"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "cf9eb44e-fe5b-47aa-b719-0bb63669ab3d",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!ollama pull deepseek-r1:1.5b"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "1d3d554b-e00d-4c08-9300-45e073950a76",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# This may take a few minutes to run! You should then see a fascinating \"thinking\" trace inside <think> tags, followed by some decent definitions\n",
+    "\n",
+    "response = ollama_via_openai.chat.completions.create(\n",
+    "    model=\"deepseek-r1:1.5b\",\n",
+    "    messages=[{\"role\": \"user\", \"content\": \"Please give definitions of some core concepts behind LLMs: a neural network, attention and the transformer\"}]\n",
+    ")\n",
+    "\n",
+    "print(response.choices[0].message.content)"
+   ]
+  },
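+  {
+   "cell_type": "markdown",
+   "id": "e7f8a9b0-c1d2-4e3f-a4b5-c6d7e8f9a0b1",
+   "metadata": {},
+   "source": [
+    "The `<think>...</think>` trace is just part of the model's text output, so you can separate it from the final answer with ordinary string handling. A rough sketch (assuming the previous cell's reply is still in `response`, and that the model emitted a single think block, which isn't guaranteed):"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "5a6b7c8d-9e0f-4a1b-8c2d-3e4f5a6b7c8d",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Split off the reasoning trace and render just the final answer\n",
+    "\n",
+    "import re\n",
+    "\n",
+    "reply = response.choices[0].message.content\n",
+    "match = re.search(r\"<think>(.*?)</think>\", reply, re.DOTALL)\n",
+    "answer = reply[match.end():].strip() if match else reply\n",
+    "display(Markdown(answer))"
+   ]
+  },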
+  {
+   "cell_type": "markdown",
+   "id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898",
+   "metadata": {},
+   "source": [
+    "# NOW the exercise for you\n",
+    "\n",
+    "Take the code from day1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches."
+   ]
+  },
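+  {
+   "cell_type": "markdown",
+   "id": "4f5a6b7c-8d9e-4f0a-b1c2-d3e4f5a6b7c8",
+   "metadata": {},
+   "source": [
+    "If you'd like a starting point, here is one possible shape of a solution - a minimal sketch, not the official answer from the solutions folder. The `summarize` helper, its prompts and the example URL are illustrative choices; the day 1 version includes more robust request headers and error handling."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "0a9b8c7d-6e5f-4d3c-a2b1-c0d9e8f7a6b5",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# A minimal sketch of the exercise - helper name and prompts are illustrative\n",
+    "\n",
+    "def summarize(url):\n",
+    "    # Fetch the page and strip out non-content tags before prompting the model\n",
+    "    soup = BeautifulSoup(requests.get(url).content, 'html.parser')\n",
+    "    for tag in soup(['script', 'style', 'img', 'input']):\n",
+    "        tag.decompose()\n",
+    "    title = soup.title.string if soup.title else 'No title found'\n",
+    "    text = soup.get_text(separator='\\n', strip=True)\n",
+    "    messages = [\n",
+    "        {\"role\": \"system\", \"content\": \"You are an assistant that summarizes websites in markdown, ignoring navigation.\"},\n",
+    "        {\"role\": \"user\", \"content\": f\"Website title: {title}\\nWebsite contents:\\n{text}\"}\n",
+    "    ]\n",
+    "    response = ollama.chat(model=MODEL, messages=messages)\n",
+    "    display(Markdown(response['message']['content']))\n",
+    "\n",
+    "summarize(\"https://edwarddonner.com\")"
+   ]
+  },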
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "6de38216-6d1c-48c4-877b-86d403f4e0f8",
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.11"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}