
Merge branch 'ed-donner:main' into main

pull/121/head
Daniel Quillan Roxas 3 months ago committed by GitHub
parent commit 31e3cddec0
GPG Key ID: B5690EEEBB952194
  1. 1
      SETUP-PC.md
  2. 19
      SETUP-linux.md
  3. 1
      SETUP-mac.md
  4. 408
      week1/community-contributions/Week1_Challenge_Career_Well_Being_Companion.ipynb
  5. 159
      week1/community-contributions/day1-debs_stock_summary_recommendation.ipynb
  6. 127
      week1/community-contributions/day1-email-reviewer-in-Bahasa.ipynb
  7. 580
      week1/community-contributions/day1_industrial_product_recommendaitons.ipynb
  8. 354
      week1/community-contributions/day2 EXERCISE-disabled-ssl.ipynb
  9. 93
      week1/community-contributions/day2-exercise.ipynb
  10. 131
      week1/community-contributions/web-page-summarizer.ipynb
  11. 125
      week1/community-contributions/wk1-day1-deepseek-stream-summarize.ipynb
  12. 118
      week1/community-contributions/wk1-day2-ollama-exer.ipynb
  13. 42
      week1/day2 EXERCISE.ipynb
  14. 2
      week1/day5.ipynb
  15. 10
      week1/troubleshooting.ipynb
  16. 134
      week2/day1.ipynb
  17. 22
      week2/day3.ipynb
  18. 33
      week3/community-contributions/ai-web-summarizer/.gitignore
  19. 143
      week3/community-contributions/ai-web-summarizer/README.md
  20. 28
      week3/community-contributions/ai-web-summarizer/main.py
  21. 4
      week3/community-contributions/ai-web-summarizer/requirements.txt
  22. 0
      week3/community-contributions/ai-web-summarizer/summarizer/__init__.py
  23. 23
      week3/community-contributions/ai-web-summarizer/summarizer/fetcher.py
  24. 85
      week3/community-contributions/ai-web-summarizer/summarizer/summarizer.py
  25. 0
      week3/community-contributions/ai-web-summarizer/utils/__init__.py
  26. 11
      week3/community-contributions/ai-web-summarizer/utils/config.py
  27. 16
      week3/community-contributions/ai-web-summarizer/utils/logger.py
  28. 29
      week4/community-contributions/doc_string_exercise/README.md
  29. 19
      week4/community-contributions/doc_string_exercise/data/original_file.py
  30. 85
      week4/community-contributions/doc_string_exercise/generate_doc_string.py
  31. 147
      week4/community-contributions/doc_string_exercise/utils.py
  32. 2
      week7/day2.ipynb
  33. 2
      week7/day3 and 4.ipynb
  34. 2
      week7/day5.ipynb
  35. 18
      week8/agents/frontier_agent.py
  36. 106
      week8/day2.3.ipynb
  37. 4
      week8/day5.ipynb

1
SETUP-PC.md

@@ -147,6 +147,7 @@ If you have other keys, you can add them too, or come back to this in future weeks
```
GOOGLE_API_KEY=xxxx
ANTHROPIC_API_KEY=xxxx
DEEPSEEK_API_KEY=xxxx
HF_TOKEN=xxxx
```

19
SETUP-linux.md

@@ -103,6 +103,24 @@ Run: `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt`
If issues occur, try the fallback:
`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt`
###### Arch users:
Some system updates break Python dependencies, most notably numpy, scipy and gensim. A few things you can try:
`sudo pacman -S python-numpy python-pandas python-scipy` — this is not generally recommended, as pacman does not coordinate with pip.
If you hit build conflicts, update the build toolchain instead:
`sudo pacman -S gcc gcc-fortran python-setuptools python-wheel`
*Note:* gensim breaks with recent versions of scipy. You can either pin scipy to an older version, or
remove gensim from requirements.txt for the moment. (See: https://aur.archlinux.org/packages/python-gensim)
Lastly, so that the kernel is visible in Jupyter Lab after step (6):
`python -m ipykernel install --user --name=llmenv`
`ipython kernel install --user --name=llmenv`
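If you go the scipy-pinning route, a minimal sketch of the workaround (the exact version ceiling is an assumption — check gensim's release notes for the pin matching your gensim version):

```shell
# Inside your activated virtual environment:
# pin scipy below the release that removed APIs gensim relied on (assumed <1.13),
# then install gensim and the rest of the requirements.
pip install "scipy<1.13"
pip install gensim
pip install -r requirements.txt
```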
6. **Start Jupyter Lab:**
From the `llm_engineering` folder, run: `jupyter lab`.
@@ -157,6 +175,7 @@ If you have other keys, you can add them too, or come back to this in future weeks
```
GOOGLE_API_KEY=xxxx
ANTHROPIC_API_KEY=xxxx
DEEPSEEK_API_KEY=xxxx
HF_TOKEN=xxxx
```

1
SETUP-mac.md

@@ -146,6 +146,7 @@ If you have other keys, you can add them too, or come back to this in future weeks
```
GOOGLE_API_KEY=xxxx
ANTHROPIC_API_KEY=xxxx
DEEPSEEK_API_KEY=xxxx
HF_TOKEN=xxxx
```

408
week1/community-contributions/Week1_Challenge_Career_Well_Being_Companion.ipynb

@@ -0,0 +1,408 @@
{
"cells": [
{
"cell_type": "raw",
"id": "f64407a0-fda5-48f3-a2d3-82e80d320931",
"metadata": {},
"source": [
"### \"Career Well-Being Companion\" ###\n",
"This project gathers an employee's feelings at the end of the day.\n",
"Based on the feelings provided as input, the model analyzes them, acknowledges what the employee is going through, and offers suggestions.\n",
"The model will even ask the employee: \"Do you want a more detailed response to help cope with your feelings?\"\n",
"If the employee agrees, the model replies with online courses, tools, meetups and other ideas for the employee's well-being.\n",
"\n",
"Immediate Impact: Professionals can quickly see value through insights or actionable suggestions.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b30a8fa-1067-4369-82fc-edb197551e43",
"metadata": {},
"outputs": [],
"source": [
"### Step 1: Emotional Check-in:\n",
"\n",
"# Input: User describes their feelings or workday.\n",
"# LLM Task: Analyze the input for emotional tone and identify keywords (e.g., \"stress,\" \"boredom\").\n",
"# Output: A summary of emotional trends.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b52469e-da81-42ec-9e6c-0c121ad349a7",
"metadata": {},
"outputs": [],
"source": [
"print(\"I am your well-being companion, and my goal is to help you in your career.\\nI want to start by asking about your feelings: how was your day today?\\n\")\n",
"print(\"I will do my best, as your well-being companion, to analyze your day and come up with suggestions that might help you in your career and life.\\n\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a6df2e2c-785d-4323-90f4-b49592ab33fc",
"metadata": {},
"outputs": [],
"source": [
"how_was_day = \"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "247e4a80-f634-4a7a-9f40-315f042be59c",
"metadata": {},
"outputs": [],
"source": [
"how_was_day = input(\"How was your day today? Can you describe your day: what went well, what did not go well, what you did not like?\\n\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0faac2dd-0d53-431a-87a7-d57a6881e043",
"metadata": {},
"outputs": [],
"source": [
"what_went_well = input(\"What went well for you today?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2c11628b-d14b-47eb-a97e-70d08ddf3364",
"metadata": {},
"outputs": [],
"source": [
"what_went_bad = input(\"What did not go well, today?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f64e34b4-f83a-4ae4-86bb-5bd164121412",
"metadata": {},
"outputs": [],
"source": [
"how_was_day = how_was_day + what_went_well + what_went_bad\n",
"print(how_was_day)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c5fe08c4-4d21-4917-a556-89648eb543c7",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from openai import OpenAI\n",
"from dotenv import load_dotenv\n",
"import json\n",
"from IPython.display import Markdown, display, update_display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6875d51-f33b-462e-85cb-a5d6a7cfb86e",
"metadata": {},
"outputs": [],
"source": [
"#Initialize environment and constants:\n",
"load_dotenv(override=True)\n",
"\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
" \n",
"MODEL = 'gpt-4o-mini'\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c12cf934-4bd4-4849-9e8f-5bb89eece996",
"metadata": {},
"outputs": [],
"source": [
"### Step 2: From the day spent, what went well and what went badly => the LLM will extract feelings and emotions from those unspoken words :)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "237d14b3-571e-4598-a57b-d3ebeaf81afc",
"metadata": {},
"outputs": [],
"source": [
"system_prompt_for_emotion_check_in = \"You are a career well-being assistant. Your task is to analyze the user's emotional state based on their text input.\"\\\n",
"\"Look for signs of stress, burnout, dissatisfaction, boredom, motivation, or any other emotional indicators related to work.\"\\\n",
"\"Based on the input, provide a summary of the user's feelings and categorize them under relevant emotional states (e.g., ‘Burnout,’ ‘Boredom,’ ‘Stress,’ ‘Satisfaction,’ etc.).\"\\\n",
"\"Your response should be empathetic and non-judgmental. Please summarize the feelings and emotions you detect, including those left unspoken.\\n\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a205a6d3-b0d7-4fcb-9eed-f3a86576cd9f",
"metadata": {},
"outputs": [],
"source": [
"def get_feelings(how_was_day):\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages = [\n",
" {'role':'system','content': system_prompt_for_emotion_check_in},\n",
" {'role':'user', 'content': how_was_day}\n",
" ]\n",
" )\n",
" result = response.choices[0].message.content\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45e152c8-37c4-4818-a8a0-49f1ea3c1b65",
"metadata": {},
"outputs": [],
"source": [
"## LLM will give the feelings you have based on \"the day you had today\".\n",
"print(get_feelings(how_was_day))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4a62a385-4c51-42b1-ad73-73949e740e66",
"metadata": {},
"outputs": [],
"source": [
"### Step 3: From those feelings, emotions ==> Get suggestions from LLM."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d856ca4f-ade9-4e6f-b540-2d07a70867c7",
"metadata": {},
"outputs": [],
"source": [
"## Let's construct the system prompt asking the LLM for suggestions (based on the feelings above).\n",
"\n",
"system_prompt_for_suggestion = \"You are a career well-being assistant. Provide a list of practical, actionable suggestions to help the user improve their emotional state.\"\n",
"\n",
"system_prompt_for_suggestion+=\"The suggestions should be personalized based on their current feelings, and they should be simple, effective actions the user can take immediately.\"\\\n",
"\"Include activities, tasks, habits, or approaches that will either alleviate stress, boost motivation, or help them reconnect with their work in a positive way.\"\\\n",
"\"Be empathetic, non-judgmental, and encouraging in your tone.\\n\"\n",
"system_prompt_for_suggestion += \"Respond in JSON format. Below is an example:\\n\"\n",
"system_prompt_for_suggestion += '''\n",
"{\n",
" \"suggestions\": [\n",
" {\n",
" \"action\": \"Take a short break\",\n",
" \"description\": \"Step away from your workspace for 5-10 minutes. Use this time to take deep breaths, stretch, or grab a drink. This mini-break can help clear your mind and reduce feelings of overwhelm.\"\n",
" },\n",
" {\n",
" \"action\": \"Write a quick journal entry\",\n",
" \"description\": \"Spend 5-10 minutes writing down your thoughts and feelings. Specify what's distracting you and what you appreciate about your personal life. This can help you process emotions and refocus on tasks.\"\n",
" },\n",
" {\n",
" \"action\": \"Set a small task goal\",\n",
" \"description\": \"Choose one manageable task to complete today. Break it down into smaller steps to make it less daunting. Completing even a small task can give you a sense of achievement and boost motivation.\"\n",
" }\n",
" ]\n",
"}\n",
"'''\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e9eee380-7fa5-4d21-9357-f4fc34d3368d",
"metadata": {},
"outputs": [],
"source": [
"## Let's build the user prompt to ask the LLM for suggestions based on the feelings above.\n",
"## Note: while building user_prompt, we make another LLM call (via the function get_feelings()) to get the feelings analyzed from the day spent.\n",
"## The first step is to extract feelings from the day spent; then we move on to offering suggestions to ease any discomfort.\n",
"\n",
"def get_user_prompt_for_suggestion(how_was_day):\n",
"    user_prompt_for_suggestion = \"You are a career well-being assistant. Below is the user's emotional input about the day they spent; the user might be feeling burnt out, bored, uninspired, or stressed, or sometimes the opposite \"\\\n",
"    \"of these feelings.\"\n",
" user_prompt_for_suggestion += f\"{get_feelings(how_was_day)}\"\n",
" return user_prompt_for_suggestion\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3576e451-b29c-44e1-bcdb-addc8d61afa7",
"metadata": {},
"outputs": [],
"source": [
"print(get_user_prompt_for_suggestion(how_was_day))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4a41ee40-1f49-4474-809f-a0d5e44e4aa4",
"metadata": {},
"outputs": [],
"source": [
"def get_suggestions(how_was_day):\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages = [\n",
" {'role': 'system', 'content':system_prompt_for_suggestion},\n",
" {'role': 'user', 'content': get_user_prompt_for_suggestion(how_was_day)}\n",
" ],\n",
" response_format={\"type\": \"json_object\"}\n",
" )\n",
" result = response.choices[0].message.content\n",
" return json.loads(result)\n",
" #display(Markdown(result))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "33e3a14e-0e2c-43cb-b50b-d6df52b4d300",
"metadata": {},
"outputs": [],
"source": [
"suggestions = get_suggestions(how_was_day)\n",
"print(suggestions)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "31c75e04-2800-4ba2-845b-bc38f8965622",
"metadata": {},
"outputs": [],
"source": [
"### Step 4: From the companion's suggestions ==> Enhance them with the support you need to follow through, like an action plan for yourself."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d07f9d3f-5acf-4a86-9160-4c6de8df4eb0",
"metadata": {},
"outputs": [],
"source": [
"system_prompt_for_enhanced_suggestions = \"You are a helpful assistant that enhances actionable suggestions for users. For each suggestion provided, enhance it by adding:\\n\"\\\n",
"\"1. A step-by-step guide for implementation.\"\\\n",
"\"2. Tools, resources, or apps that can help.\"\\\n",
"\"3. Examples or additional context to make the suggestion practical.\"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6ab449f1-7a6c-4982-99e0-83d99c45ad2d",
"metadata": {},
"outputs": [],
"source": [
"def get_user_prompt_for_enhanced_suggestions(suggestions):\n",
" prompt = \"Review the suggestions below and enhance them to help the end user. Here is the list of suggestions:\\n\"\n",
" prompt += f\"{suggestions}\"\n",
" return prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5187b7a-d8cd-4377-b011-7805bd50443d",
"metadata": {},
"outputs": [],
"source": [
"def enhance_suggestions(suggestions):\n",
" stream = openai.chat.completions.create(\n",
" model = MODEL,\n",
" messages=[\n",
" {'role':'system', 'content':system_prompt_for_enhanced_suggestions},\n",
" {'role':'user', 'content':get_user_prompt_for_enhanced_suggestions(suggestions)}\n",
" ],\n",
" stream = True\n",
" )\n",
" \n",
" #result = response.choices[0].message.content\n",
" #for chunk in stream:\n",
" # print(chunk.choices[0].delta.content or '', end='')\n",
"\n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)\n",
" \n",
" #display(Markdown(result))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "429cd6f8-3215-4140-9a6d-82d14a9b9798",
"metadata": {},
"outputs": [],
"source": [
"detailed = input(\"\\nWould you like a DETAILED PLAN for implementing these suggestions? (Yes/No)\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5efda045-5bde-4c51-bec6-95b5914102dd",
"metadata": {},
"outputs": [],
"source": [
"if detailed.lower() == 'yes':\n",
" enhance_suggestions(suggestions)\n",
"else:\n",
" print(suggestions)\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1969b2ec-c850-4dfc-b790-8ae8e3fa36e9",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
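The companion notebook above requests suggestions as JSON via `response_format={"type": "json_object"}`. A minimal sketch of rendering that structure for display, assuming the schema shown in the notebook's system prompt (the helper name is hypothetical):

```python
import json

def suggestions_to_markdown(payload: str) -> str:
    # Parse the JSON payload and render each suggestion as a Markdown bullet.
    data = json.loads(payload)
    return "\n".join(
        f"- **{item['action']}**: {item['description']}"
        for item in data.get("suggestions", [])
    )

# Example payload matching the schema in the system prompt:
example = json.dumps({
    "suggestions": [
        {"action": "Take a short break",
         "description": "Step away from your workspace for 5-10 minutes."}
    ]
})
print(suggestions_to_markdown(example))
```

This keeps parsing in one place, so the "detailed plan" branch and the plain-print branch can share the same rendering.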

159
week1/community-contributions/day1-debs_stock_summary_recommendation.ipynb

@@ -0,0 +1,159 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# If you get an error running this cell, then please head over to the troubleshooting notebook!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d2d5441-2afe-41b9-8039-c367acd715f9",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7c7e0988-8f2d-4844-a847-eebec76b114a",
"metadata": {},
"outputs": [],
"source": [
"website = \"https://www.screener.in/company/CMSINFO/\"\n",
"biz = Website(website)\n",
"user_prompt = \"Give a short summary of the business \" + biz.text + \" and recommend pros and cons of the business in bullet points, along with a recommendation to buy or sell\"\n",
"print(user_prompt)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6",
"metadata": {},
"outputs": [],
"source": [
"# Step 1: Create your prompts\n",
"website = \"https://www.screener.in/company/CMSINFO/\"\n",
"biz = Website(website)\n",
"\n",
"system_prompt = \"You are an equity research analyst. Analyze the content of the website and give a summary of the business\"\n",
"user_prompt = \"Give a short summary of the business \" + biz.text + \" and recommend pros and cons of the business in bullet points, along with a recommendation to buy or sell\"\n",
"\n",
"# Step 2: Make the messages list\n",
"\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
"]\n",
"# Step 3: Call OpenAI\n",
"\n",
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
"# Step 4: print the result\n",
"\n",
"print(response.choices[0].message.content)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d9edf96e-1190-44fe-9261-405709fb39cd",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
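The stock-summary notebook above concatenates the entire scraped page into the prompt, which can overflow the model's context window on long pages. A hedged sketch of capping the text first (the character budget is an assumption, not a documented limit):

```python
def truncate_for_prompt(text: str, max_chars: int = 8000) -> str:
    # Crudely cap scraped page text so the final prompt stays within the
    # model's context window. max_chars is an assumption: roughly 2,000
    # tokens at ~4 characters per token of English text.
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + "\n[...truncated...]"

# Stand-in for biz.text from the Website class above:
page_text = "Revenue grew steadily. " * 1000
user_prompt = "Give a short summary of the business " + truncate_for_prompt(page_text)
```

For production use, a token-aware truncation (e.g. with a tokenizer library) would be more precise; this character cap is just a cheap guard.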

127
week1/community-contributions/day1-email-reviewer-in-Bahasa.ipynb

@@ -0,0 +1,127 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "0ee39d65-f27d-416d-8b46-43d15aebe752",
"metadata": {},
"outputs": [],
"source": [
"# Below is a sample email reviewer for emails written in Bahasa Indonesia.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f9fd62af-9b14-490b-8d0b-990da96101bf",
"metadata": {},
"outputs": [],
"source": [
"# Imports and setup - added so this cell runs standalone\n",
"import os\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"\n",
"load_dotenv(override=True)\n",
"openai = OpenAI()\n",
"\n",
"# Step 1: Create your prompts\n",
"\n",
"system_prompt = \"Anda adalah seorang Asisten untuk menganalisa email berdasarkan user prompt yang nanti akan diberikan. Summarize the email and describe its tone.\"\n",
"user_prompt = \"\"\"\n",
" Subject: Permintaan Pertemuan\n",
"\n",
"Yang terhormat Bapak Rijal,\n",
"\n",
"Saya ingin meminta waktu Anda untuk membahas Generative AI untuk bisnis. Apakah Anda tersedia pada besok pukul 19:00? \n",
"Jika tidak, mohon beri tahu waktu yang lebih sesuai bagi Anda.\n",
"\n",
"Terima kasih atas perhatian Anda.\n",
"\n",
"Salam,\n",
"\n",
"Mentari\n",
"\"\"\"\n",
"\n",
"# Step 2: Make the messages list\n",
"\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ]\n",
"\n",
"# Step 3: Call OpenAI\n",
"\n",
"response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages\n",
" )\n",
"\n",
"# Step 4: print the result\n",
"\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d10208fa-02d8-41a0-b9bb-0bf30f237f25",
"metadata": {},
"outputs": [],
"source": [
"# Step 1: Create your prompts\n",
"\n",
"system_prompt = \"Anda adalah seorang Asisten untuk menganalisa email berdasarkan user prompt yang nanti akan diberikan. Summarize the email and describe its tone.\"\n",
"user_prompt = \"\"\"\n",
" Subject: Feedback terkait Bapak\n",
"\n",
"Yang terhormat Bapak Rijal,\n",
"\n",
"Saya ingin memberikan sedikit feedback untuk Bapak.\n",
"\n",
"Kemampuan Anda dalam memimpin tim ini mampu membawa saya dan rekan lainnya untuk mengerahkan semua kemampuan saya agar jadi lebih baik.\n",
"Selama ini saya cukup senang bekerja dengan Anda karena memberikan saya peluang untuk mencoba banyak hal baru. Tapi ada beberapa kekhawatiran yang mau saya sampaikan, terutama terkait target yang perlu dicapai oleh tim. Saya pikir melihat performa ke belakang, target yang ditentukan harus lebih realistis lagi.\n",
"Saya beruntung bisa berkesempatan bekerja dengan Anda sehingga banyak ilmu yang saya dapat. Kira-kira untuk ke depannya, hal apa lagi yang bisa tim ini tingkatkan agar kita bisa mencapai target yang lebih baik?\n",
"Selama ini, banyak terjadi miskomunikasi dalam pekerjaan. Dan menurut saya salah satunya karena arahan yang Anda berikan kurang jelas dan kurang ditangkap sepenuhnya oleh anggota yang lain. Saya dan tim berharap ke depan bisa mendapatkan arahan yang lebih jelas dan satu arah.\n",
"\n",
"Terima kasih atas perhatian Anda.\n",
"\n",
"Salam,\n",
"\n",
"Mentari\n",
"\"\"\"\n",
"\n",
"# Step 2: Make the messages list\n",
"\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ]\n",
"\n",
"# Step 3: Call OpenAI\n",
"\n",
"response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages\n",
" )\n",
"\n",
"# Step 4: print the result\n",
"\n",
"print(response.choices[0].message.content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
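The two email-reviewer cells above build the same messages list inline, twice. A small sketch of factoring that into a reusable helper (the function name and signature are hypothetical, not part of the notebook):

```python
def build_review_messages(email_body: str, language: str = "Bahasa Indonesia") -> list:
    # Builds the same messages structure the cells above construct inline,
    # so a new email can be reviewed with a single call.
    system_prompt = (
        f"You are an assistant that analyzes an email written in {language}. "
        "Summarize the email and describe its tone."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": email_body},
    ]

messages = build_review_messages("Terima kasih atas bantuan Anda.")
# The result plugs straight into openai.chat.completions.create(model=..., messages=messages)
```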

580
week1/community-contributions/day1_industrial_product_recommendaitons.ipynb

@@ -0,0 +1,580 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Instant Gratification\n",
"\n",
"## Your first Frontier LLM Project!\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n",
"\n",
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
"\n",
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n",
"\n",
"## If you're new to Jupyter Lab\n",
"\n",
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n",
"\n",
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
"\n",
"## If you'd prefer to work in IDEs\n",
"\n",
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n",
"If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n",
"\n",
"## If you'd like to brush up your Python\n",
"\n",
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n",
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n",
"\n",
"## I am here to help\n",
"\n",
"If you have any problems at all, please do reach out. \n",
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!)\n",
"\n",
"## More troubleshooting\n",
"\n",
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n",
"\n",
"## If this is old hat!\n",
"\n",
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Please read - important note</h2>\n",
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n",
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# If you get an error running this cell, then please head over to the troubleshooting notebook!"
]
},
{
"cell_type": "markdown",
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
"metadata": {},
"source": [
"# Connecting to OpenAI\n",
"\n",
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n",
"\n",
"## Troubleshooting if you have problems:\n",
"\n",
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n",
"\n",
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n",
"\n",
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
"\n",
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"\n",
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions"
]
},
{
"cell_type": "markdown",
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91",
"metadata": {},
"source": [
"# Let's make a quick call to a Frontier model to get started, as a preview!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n",
"\n",
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "2aa190e5-cb31-456a-96cc-db109919cd78",
"metadata": {},
"source": [
"## OK onwards with our first project"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
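  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `Website` class above relies on the third-party BeautifulSoup library. As a hedged aside (not part of the course code), here is a dependency-free sketch of the same text-extraction idea using only the standard library's `html.parser`; the class and function names below are made up for illustration."
   ]
  },

```python
# A minimal, dependency-free sketch of the same text-extraction idea,
# using only Python's standard library. Illustrative only - BeautifulSoup
# remains the better choice for real pages.
from html.parser import HTMLParser

class PlainTextExtractor(HTMLParser):
    SKIP = {"script", "style"}  # tags whose text we discard, as the bs4 version does

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth > 0:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep text only when we're not inside a skipped tag
        if self._skip_depth == 0 and data.strip():
            self.parts.append(data.strip())

def extract_text(html):
    parser = PlainTextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)
```

This doesn't handle malformed markup as gracefully as BeautifulSoup, so treat it as a fallback sketch only.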
{
"cell_type": "code",
"execution_count": null,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.title)\n",
"print(ed.text)"
]
},
{
"cell_type": "markdown",
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
"metadata": {},
"source": [
"## Types of prompts\n",
"\n",
"You may know this already - but if not, you will get very familiar with it!\n",
"\n",
"Models like GPT4o have been trained to receive instructions in a particular way.\n",
"\n",
"They expect to receive:\n",
"\n",
"**A system prompt** that tells them what task they are performing and what tone they should use\n",
"\n",
"**A user prompt** -- the conversation starter that they should reply to"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
"source": [
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
"source": [
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
"metadata": {},
"outputs": [],
"source": [
"print(user_prompt_for(ed))"
]
},
{
"cell_type": "markdown",
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
"metadata": {},
"source": [
"## Messages\n",
"\n",
"The API from OpenAI expects to receive messages in a particular structure.\n",
"Many of the other APIs share this structure:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]\n",
"\n",
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the might GPT (yet!)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
"metadata": {},
"outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47",
"metadata": {},
"source": [
"## And now let's build useful messages for GPT-4o-mini, using a function"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
"source": [
"# See how this function creates exactly the format above\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c",
"metadata": {},
"outputs": [],
"source": [
"# Try this out, and then try for a few more websites\n",
"\n",
"messages_for(ed)"
]
},
{
"cell_type": "markdown",
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
"metadata": {},
"source": [
"## Time to bring it together - the API for OpenAI is very simple!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
"source": [
"# And now: call the OpenAI API. You will get very familiar with this!\n",
"\n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages_for(website)\n",
" )\n",
" return response.choices[0].message.content"
]
},
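  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One hedged refinement you might consider (my own suggestion, not something the course code requires): very long pages can exceed the model's context window, so you could cap the characters sent to the API. The `truncate` helper below is hypothetical, and the 20,000-character limit is an arbitrary illustrative choice."
   ]
  },

```python
# Hypothetical helper: cap the text we send to the model so very long
# pages don't blow past the context window. The 20,000-character default
# is an arbitrary illustrative choice, not a recommendation.
def truncate(text, max_chars=20_000):
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + "\n\n[truncated]"
```

If you adopt this, `user_prompt_for` could use `user_prompt += truncate(website.text)` instead of appending the full text.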
{
"cell_type": "code",
"execution_count": null,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
"source": [
"# A function to display this nicely in the Jupyter output, using markdown\n",
"\n",
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "markdown",
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624",
"metadata": {},
"source": [
"# Let's try more websites\n",
"\n",
"Note that this will only work on websites that can be scraped using this simplistic approach.\n",
"\n",
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n",
"\n",
"Also Websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n",
"\n",
"But many websites will work just fine!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://cnn.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://anthropic.com\")"
]
},
{
"cell_type": "markdown",
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business applications</h2>\n",
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
"\n",
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n",
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
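  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As one hedged starting point for the exercise above: only the prompt and message construction is sketched here, the names (`subject_messages_for`, `sample_email`) are my own, and the email text is made up. Pass the resulting messages to `openai.chat.completions.create` exactly as in the earlier cells."
   ]
  },

```python
# Sketch of the suggested exercise: propose a short subject line for an email.
# Only the message construction is shown; the API call works as in earlier cells.
subject_system_prompt = (
    "You are an assistant that reads the body of an email "
    "and suggests one short, informative subject line. "
    "Respond with the subject line only."
)

def subject_messages_for(email_body):
    # Same [system, user] structure the notebook uses for summarization
    return [
        {"role": "system", "content": subject_system_prompt},
        {"role": "user", "content": f"Suggest a subject line for this email:\n\n{email_body}"},
    ]

sample_email = "Hi team, the quarterly report is attached. Please review it by Friday."
messages = subject_messages_for(sample_email)
```

Then call, as before: `response = openai.chat.completions.create(model="gpt-4o-mini", messages=messages)` and print `response.choices[0].message.content`.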
{
"cell_type": "code",
"execution_count": null,
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6",
"metadata": {},
"outputs": [],
"source": [
"# Step 1: Create your prompts\n",
"\n",
"system_prompt = \"\"\"you are an AI to a salesperson working in the field of industrial tools and hardware. You have the following roles:\\\n",
"1. identify and understand the scenario the customer is describing.\\\n",
"2. figure what caregory of products are suitable for use in the scenario.\\\n",
"3. search https://industrywaala.com/ for the category of products you identified in 2. and then look for 2 products in that\\\n",
"category that you think will be most suitable in the given use case. for this you need to check for product features provided in\\\n",
"the short and long descriptions on the website that are applicable in the scenario.\\\n",
"4. make a summary of the two products with the brand name, model and 2 other key features of the product\\\n",
"5. always respond in markdown.\n",
"\"\"\"\n",
"\n",
"user_prompt = \"\"\"\\n can you help figure what model of product should i use in high temperature environemt. \\n\\n\n",
"\"\"\"\n",
"\n",
"# Step 2: Make the messages list\n",
"\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
"] # fill this in\n",
"\n",
"# Step 3: Call OpenAI\n",
"\n",
"response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages\n",
")\n",
"\n",
"# Step 4: print the result\n",
"\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "markdown",
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda",
"metadata": {},
"source": [
"## An extra exercise for those who enjoy web scraping\n",
"\n",
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)"
]
},
{
"cell_type": "markdown",
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6",
"metadata": {},
"source": [
"# Sharing your code\n",
"\n",
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
"\n",
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
"\n",
"Here are good instructions courtesy of an AI friend: \n",
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

354
week1/community-contributions/day2 EXERCISE-disabled-ssl.ipynb

@@ -0,0 +1,354 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Welcome to your first assignment!\n",
"\n",
"Instructions are below. Please give this a try, and look in the solutions folder if you get stuck (or feel free to ask me!)"
]
},
{
"cell_type": "markdown",
"id": "ada885d9-4d42-4d9b-97f0-74fbbbfe93a9",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Just before we get to the assignment --</h2>\n",
" <span style=\"color:#f71;\">I thought I'd take a second to point you at this page of useful resources for the course. This includes links to all the slides.<br/>\n",
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458",
"metadata": {},
"source": [
"# HOMEWORK EXERCISE ASSIGNMENT\n",
"\n",
"Upgrade the day 1 project to summarize a webpage to use an Open Source model running locally via Ollama rather than OpenAI\n",
"\n",
"You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n",
"\n",
"**Benefits:**\n",
"1. No API charges - open-source\n",
"2. Data doesn't leave your box\n",
"\n",
"**Disadvantages:**\n",
"1. Significantly less power than Frontier Model\n",
"\n",
"## Recap on installation of Ollama\n",
"\n",
"Simply visit [ollama.com](https://ollama.com) and install!\n",
"\n",
"Once complete, the ollama server should already be running locally. \n",
"If you visit: \n",
"[http://localhost:11434/](http://localhost:11434/)\n",
"\n",
"You should see the message `Ollama is running`. \n",
"\n",
"If not, bring up a new Terminal (Mac) or Powershell (Windows) and enter `ollama serve` \n",
"And in another Terminal (Mac) or Powershell (Windows), enter `ollama pull llama3.2` \n",
"Then try [http://localhost:11434/](http://localhost:11434/) again.\n",
"\n",
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or Powershell, and change the code below from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "29ddd15d-a3c5-4f4e-a678-873f56162724",
"metadata": {},
"outputs": [],
"source": [
"# Constants\n",
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"MODEL = \"llama3.2\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dac0a679-599c-441f-9bf2-ddc73d35b940",
"metadata": {},
"outputs": [],
"source": [
"# Create a messages list using the same format that we used for OpenAI\n",
"\n",
"messages = [\n",
" {\"role\": \"user\", \"content\": \"Describe some of the business applications of Generative AI\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7bb9c624-14f0-4945-a719-8ddb64f66f47",
"metadata": {},
"outputs": [],
"source": [
"payload = {\n",
" \"model\": MODEL,\n",
" \"messages\": messages,\n",
" \"stream\": False\n",
" }"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "479ff514-e8bd-4985-a572-2ea28bb4fa40",
"metadata": {},
"outputs": [],
"source": [
"# Let's just make sure the model is loaded\n",
"\n",
"!ollama pull llama3.2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "42b9f644-522d-4e05-a691-56e7658c0ea9",
"metadata": {},
"outputs": [],
"source": [
"# If this doesn't work for any reason, try the 2 versions in the following cells\n",
"# And double check the instructions in the 'Recap on installation of Ollama' at the top of this lab\n",
"# And if none of that works - contact me!\n",
"\n",
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
"print(response.json()['message']['content'])"
]
},
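  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The call above sets `\"stream\": False`. Ollama can also stream: with `\"stream\": True`, the server returns one JSON object per line, each carrying a fragment of the reply in `message.content`. Below is a hedged sketch of stitching those fragments together; the helper name `assemble_stream` is my own, and it's demonstrated on canned lines rather than a live server."
   ]
  },

```python
import json

# With "stream": True, Ollama's /api/chat returns newline-delimited JSON,
# each line carrying a fragment of the reply in message.content. This helper
# stitches the fragments together; feed it response.iter_lines() from requests.
def assemble_stream(lines):
    pieces = []
    for line in lines:
        if not line:
            continue  # skip keep-alive blank lines
        chunk = json.loads(line)
        pieces.append(chunk.get("message", {}).get("content", ""))
        if chunk.get("done"):
            break
    return "".join(pieces)
```

Against a live server this would look something like `assemble_stream(requests.post(OLLAMA_API, json={**payload, "stream": True}, stream=True).iter_lines())` - an assumption based on Ollama's documented streaming format, so double-check against the API docs.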
{
"cell_type": "markdown",
"id": "6a021f13-d6a1-4b96-8e18-4eae49d876fe",
"metadata": {},
"source": [
"# Introducing the ollama package\n",
"\n",
"And now we'll do the same thing, but using the elegant ollama python package instead of a direct HTTP call.\n",
"\n",
"Under the hood, it's making the same call as above to the ollama server running at localhost:11434"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7745b9c4-57dc-4867-9180-61fa5db55eb8",
"metadata": {},
"outputs": [],
"source": [
"import ollama\n",
"\n",
"response = ollama.chat(model=MODEL, messages=messages)\n",
"print(response['message']['content'])"
]
},
{
"cell_type": "markdown",
"id": "a4704e10-f5fb-4c15-a935-f046c06fb13d",
"metadata": {},
"source": [
"## Alternative approach - using OpenAI python library to connect to Ollama"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23057e00-b6fc-4678-93a9-6b31cb704bff",
"metadata": {},
"outputs": [],
"source": [
"# There's actually an alternative approach that some people might prefer\n",
"# You can use the OpenAI client python library to call Ollama:\n",
"\n",
"from openai import OpenAI\n",
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
"\n",
"response = ollama_via_openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=messages\n",
")\n",
"\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898",
"metadata": {},
"source": [
"# NOW the exercise for you\n",
"\n",
"Take the code from day1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ef76cfc2-c519-4cb2-947a-64948517913d",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a151a8de-1e90-4190-b68e-b44b25a2cdd7",
"metadata": {},
"outputs": [],
"source": [
"# Constants\n",
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"MODEL = \"llama3.2\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "700fffc1-c7b0-4001-b381-5c4fd28c8799",
"metadata": {},
"outputs": [],
"source": [
"# Reusing the Website BeautifulSoup wrapper from Day 1\n",
"# SSL Verification has been disabled\n",
"\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers, verify=False) # NOTE Disabled ssl verification here to workaround VPN Limitations\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "402d5686-4e76-4110-b65a-b3906c35c0a4",
"metadata": {},
"outputs": [],
"source": [
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website are as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "81f5f140-8f77-418f-a252-8ad5d11f6c5f",
"metadata": {},
"outputs": [],
"source": [
"## enter the web URL here:\n",
"website_url = \"https://www.timecube.net/\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1d0ce4aa-b43e-4642-bcbd-d5964700ece8",
"metadata": {},
"outputs": [],
"source": [
"## This will at first print a warning for SSL which can be ignored before providing response. \n",
"\n",
"import ollama\n",
"\n",
"system_prompt = \"You are a virtual assistant who analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\"\n",
"\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(Website(website_url))}\n",
"]\n",
"\n",
"response = ollama.chat(model=MODEL, messages=messages)\n",
"print(response['message']['content'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "910b7e06-c92d-47bf-a4ee-a006d70deb06",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

93
week1/community-contributions/day2-exercise.ipynb

@@ -0,0 +1,93 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "fa4447be-7825-45d9-a6a5-ed41f2500533",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
"MODEL = \"llama3.2\"\n",
"\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\"\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ] \n",
"\n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model = MODEL,\n",
" messages = messages_for(website)\n",
" )\n",
" return response.choices[0].message.content\n",
"\n",
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))\n",
"\n",
"\n",
"display_summary(\"https://esarijal.my.id\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

131
week1/community-contributions/web-page-summarizer.ipynb

@@ -0,0 +1,131 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "6418dce8-3ad0-4da9-81de-b3bf57956086",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75b7849a-841b-4525-90b9-b9fd003516fb",
"metadata": {},
"outputs": [],
"source": [
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45c07164-3276-47f3-8620-a5d0ca6a8d24",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b334629a-cf2a-49fa-b198-edd73493720f",
"metadata": {},
"outputs": [],
"source": [
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e4dd0855-302d-4423-9b8b-80c4bbb9ab31",
"metadata": {},
"outputs": [],
"source": [
"website = Website(\"https://cnn.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "65c6cc43-a16a-4337-8c3d-4ab10ee0377a",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "59799f7b-a244-4572-9296-34e4b87ba026",
"metadata": {},
"outputs": [],
"source": [
"import ollama\n",
"\n",
"MODEL = \"llama3.2\"\n",
"response = ollama.chat(model=MODEL, messages=messages)\n",
"print(response['message']['content'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a0c03050-60d2-4165-9d8a-27eb57455704",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

125
week1/community-contributions/wk1-day1-deepseek-stream-summarize.ipynb

@@ -0,0 +1,125 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "a767b6bc-65fe-42b2-988f-efd54125114f",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, clear_output\n",
"from openai import OpenAI\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('DEEPSEEK_API_KEY')\n",
"base_url=os.getenv('DEEPSEEK_BASE_URL')\n",
"MODEL = \"deepseek-chat\"\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\"\n",
"\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
"]\n",
" \n",
"# Check the key\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
    "elif not api_key.startswith(\"sk-proj-\"):\n",
    "    print(\"An API key was found, but it doesn't start with sk-proj-; it looks like you are using a DeepSeek key, which is expected here.\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n",
" \n",
"openai = OpenAI(api_key=api_key, base_url=base_url)\n",
"\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" \n",
    "def user_prompt_for(website):\n",
    "    user_prompt = f\"You are looking at a website titled {website.title}\"\n",
    "    user_prompt += \"\\nThe contents of this website are as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\\n\\n\"\n",
    "    user_prompt += website.text\n",
    "    return user_prompt\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]\n",
" \n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=messages_for(website),\n",
" stream=True\n",
" )\n",
" print(\"Streaming response:\")\n",
" accumulated_content = \"\" # Accumulate the content here\n",
" for chunk in response:\n",
" if chunk.choices[0].delta.content: # Check if there's content in the chunk\n",
" accumulated_content += chunk.choices[0].delta.content # Append the chunk to the accumulated content\n",
" clear_output(wait=True) # Clear the previous output\n",
" display(Markdown(accumulated_content)) # Display the updated content\n",
"\n",
"def display_summary():\n",
" url = str(input(\"Enter the URL of the website you want to summarize: \"))\n",
" summarize(url)\n",
"\n",
"display_summary()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "01c9e5e7-7510-43ef-bb9c-aa44b15d39a7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

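The `summarize` function above streams chunks and re-renders the accumulated markdown after each one. A minimal sketch of that accumulation loop, with a hypothetical `fake_stream` generator mimicking the shape of OpenAI streaming chunks (`chunk.choices[0].delta.content`) so it runs without an API key:

```python
from types import SimpleNamespace

# fake_stream mimics OpenAI streaming chunk objects so the loop runs offline.
def fake_stream(parts):
    for part in parts:
        yield SimpleNamespace(
            choices=[SimpleNamespace(delta=SimpleNamespace(content=part))]
        )

def accumulate(stream):
    accumulated = ""
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:  # some chunks (e.g. the final one) carry no content
            accumulated += delta
            # in the notebook: clear_output(wait=True); display(Markdown(accumulated))
    return accumulated

print(accumulate(fake_stream(["# Summary\n", "First ", "part.", None])))
```

The `if delta:` guard matters: streamed responses routinely include chunks whose delta has no content, and concatenating `None` would raise a `TypeError`.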
118
week1/community-contributions/wk1-day2-ollama-exer.ipynb

@ -0,0 +1,118 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, clear_output\n",
"from openai import OpenAI\n",
"\n",
"load_dotenv(override=True)\n",
"\n",
"# Day 2 Exercise with Ollama API\n",
"api_key = os.getenv('OLLAMA_API_KEY')\n",
"base_url = os.getenv('OLLAMA_BASE_URL')\n",
"MODEL = \"llama3.2\"\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\"\n",
"\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
"]\n",
" \n",
"# Check the key\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
    "elif not api_key.startswith(\"sk-proj-\"):\n",
    "    print(\"An API key was found, but it doesn't start with sk-proj-; that's expected here, since Ollama doesn't use OpenAI-style keys.\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n",
" \n",
"openai = OpenAI(api_key=api_key, base_url=base_url)\n",
"\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" \n",
    "def user_prompt_for(website):\n",
    "    user_prompt = f\"You are looking at a website titled {website.title}\"\n",
    "    user_prompt += \"\\nThe contents of this website are as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\\n\\n\"\n",
    "    user_prompt += website.text\n",
    "    return user_prompt\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]\n",
" \n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=messages_for(website),\n",
" stream=True\n",
" )\n",
" print(\"Streaming response:\")\n",
" accumulated_content = \"\" # Accumulate the content here\n",
" for chunk in response:\n",
" if chunk.choices[0].delta.content: # Check if there's content in the chunk\n",
" accumulated_content += chunk.choices[0].delta.content # Append the chunk to the accumulated content\n",
" clear_output(wait=True) # Clear the previous output\n",
" display(Markdown(accumulated_content)) # Display the updated content\n",
" \n",
"def display_summary():\n",
" url = str(input(\"Enter the URL of the website you want to summarize: \"))\n",
" summarize(url)\n",
"\n",
"display_summary()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

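Both notebooks above repeat the same inline API-key sanity check. A sketch of it as a reusable function; `check_api_key` is an illustrative name, and it deliberately checks for stray whitespace before the prefix, since a key with a leading space would otherwise trigger the prefix warning instead of the more useful whitespace one:

```python
# Reusable sketch of the key sanity check that appears inline in both notebooks.
def check_api_key(api_key, expected_prefix="sk-proj-"):
    if not api_key:
        return "missing"
    if api_key.strip() != api_key:
        return "whitespace"          # leading/trailing spaces or tabs
    if not api_key.startswith(expected_prefix):
        return "unexpected-prefix"   # fine for non-OpenAI providers like DeepSeek or Ollama
    return "ok"

print(check_api_key("sk-proj-abc123"))
```

Only the `"missing"` and `"whitespace"` cases indicate a real problem; the prefix check is specific to OpenAI project keys.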
42
week1/day2 EXERCISE.ipynb

@ -203,6 +203,46 @@
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "bc7d1de3-e2ac-46ff-a302-3b4ba38c4c90",
"metadata": {},
"source": [
"## Also trying the amazing reasoning model DeepSeek\n",
"\n",
"Here we use the version of DeepSeek-reasoner that's been distilled to 1.5B. \n",
    "This is actually a 1.5B variant of Qwen that has been fine-tuned using synthetic data generated by DeepSeek R1.\n",
"\n",
"Other sizes of DeepSeek are [here](https://ollama.com/library/deepseek-r1) all the way up to the full 671B parameter version, which would use up 404GB of your drive and is far too large for most!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cf9eb44e-fe5b-47aa-b719-0bb63669ab3d",
"metadata": {},
"outputs": [],
"source": [
"!ollama pull deepseek-r1:1.5b"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1d3d554b-e00d-4c08-9300-45e073950a76",
"metadata": {},
"outputs": [],
"source": [
"# This may take a few minutes to run! You should then see a fascinating \"thinking\" trace inside <think> tags, followed by some decent definitions\n",
"\n",
"response = ollama_via_openai.chat.completions.create(\n",
" model=\"deepseek-r1:1.5b\",\n",
" messages=[{\"role\": \"user\", \"content\": \"Please give definitions of some core concepts behind LLMs: a neural network, attention and the transformer\"}]\n",
")\n",
"\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898",
@ -216,7 +256,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "402d5686-4e76-4110-b65a-b3906c35c0a4",
"id": "6de38216-6d1c-48c4-877b-86d403f4e0f8",
"metadata": {},
"outputs": [],
"source": []

2
week1/day5.ipynb

@ -334,7 +334,7 @@
"metadata": {},
"outputs": [],
"source": [
"create_brochure(\"HuggingFace\", \"https://huggingface.com\")"
"create_brochure(\"HuggingFace\", \"https://huggingface.co\")"
]
},
{

10
week1/troubleshooting.ipynb

@ -27,7 +27,15 @@
"\n",
"Click in the cell below and press Shift+Return to run it. \n",
"If this gives you problems, then please try working through these instructions to address: \n",
"https://chatgpt.com/share/676e6e3b-db44-8012-abaa-b3cf62c83eb3"
"https://chatgpt.com/share/676e6e3b-db44-8012-abaa-b3cf62c83eb3\n",
"\n",
    "I've also heard that you might have problems if you are using a work computer that's running the security software Zscaler.\n",
"\n",
    "Some advice from students in this situation with Zscaler:\n",
"\n",
"> In the anaconda prompt, this helped sometimes, although still got failures occasionally running code in Jupyter:\n",
"`conda config --set ssl_verify false` \n",
    "Another thing that helped was to add `verify=False` anywhere there is `requests.get(..)`, so `requests.get(url, headers=headers)` becomes `requests.get(url, headers=headers, verify=False)`"
]
},
{

134
week2/day1.ipynb

@ -69,12 +69,19 @@
"For Anthropic, visit https://console.anthropic.com/ \n",
"For Google, visit https://ai.google.dev/gemini-api \n",
"\n",
"### Also - adding DeepSeek if you wish\n",
"\n",
"Optionally, if you'd like to also use DeepSeek, create an account [here](https://platform.deepseek.com/), create a key [here](https://platform.deepseek.com/api_keys) and top up with at least the minimum $2 [here](https://platform.deepseek.com/top_up).\n",
"\n",
"### Adding API keys to your .env file\n",
"\n",
"When you get your API keys, you need to set them as environment variables by adding them to your `.env` file.\n",
"\n",
"```\n",
"OPENAI_API_KEY=xxxx\n",
"ANTHROPIC_API_KEY=xxxx\n",
"GOOGLE_API_KEY=xxxx\n",
"DEEPSEEK_API_KEY=xxxx\n",
"```\n",
"\n",
"Afterwards, you may need to restart the Jupyter Lab Kernel (the Python process that sits behind this notebook) via the Kernel menu, and then rerun the cells from the top."
@ -120,7 +127,7 @@
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
"\n",
"load_dotenv()\n",
"load_dotenv(override=True)\n",
"openai_api_key = os.getenv('OPENAI_API_KEY')\n",
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
"google_api_key = os.getenv('GOOGLE_API_KEY')\n",
@ -272,7 +279,7 @@
"# Also adding max_tokens\n",
"\n",
"message = claude.messages.create(\n",
" model=\"claude-3-5-sonnet-20240620\",\n",
" model=\"claude-3-5-sonnet-latest\",\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" system=system_message,\n",
@ -295,7 +302,7 @@
"# Now let's add in streaming back results\n",
"\n",
"result = claude.messages.stream(\n",
" model=\"claude-3-5-sonnet-20240620\",\n",
" model=\"claude-3-5-sonnet-latest\",\n",
" max_tokens=200,\n",
" temperature=0.7,\n",
" system=system_message,\n",
@ -321,7 +328,7 @@
"# If that happens to you, please skip this cell and use the next cell instead - an alternative approach.\n",
"\n",
"gemini = google.generativeai.GenerativeModel(\n",
" model_name='gemini-1.5-flash',\n",
" model_name='gemini-2.0-flash-exp',\n",
" system_instruction=system_message\n",
")\n",
"response = gemini.generate_content(user_prompt)\n",
@ -344,12 +351,129 @@
")\n",
"\n",
"response = gemini_via_openai_client.chat.completions.create(\n",
" model=\"gemini-1.5-flash\",\n",
" model=\"gemini-2.0-flash-exp\",\n",
" messages=prompts\n",
")\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "33f70c88-7ca9-470b-ad55-d93a57dcc0ab",
"metadata": {},
"source": [
"## (Optional) Trying out the DeepSeek model\n",
"\n",
"### Let's ask DeepSeek a really hard question - both the Chat and the Reasoner model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d0019fb-f6a8-45cb-962b-ef8bf7070d4d",
"metadata": {},
"outputs": [],
"source": [
    "# Optionally, if you wish to try DeepSeek, you can also use the OpenAI client library\n",
"\n",
"deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n",
"\n",
"if deepseek_api_key:\n",
" print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n",
"else:\n",
" print(\"DeepSeek API Key not set - please skip to the next section if you don't wish to try the DeepSeek API\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c72c871e-68d6-4668-9c27-96d52b77b867",
"metadata": {},
"outputs": [],
"source": [
"# Using DeepSeek Chat\n",
"\n",
"deepseek_via_openai_client = OpenAI(\n",
" api_key=deepseek_api_key, \n",
" base_url=\"https://api.deepseek.com\"\n",
")\n",
"\n",
"response = deepseek_via_openai_client.chat.completions.create(\n",
" model=\"deepseek-chat\",\n",
" messages=prompts,\n",
")\n",
"\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "50b6e70f-700a-46cf-942f-659101ffeceb",
"metadata": {},
"outputs": [],
"source": [
"challenge = [{\"role\": \"system\", \"content\": \"You are a helpful assistant\"},\n",
" {\"role\": \"user\", \"content\": \"How many words are there in your answer to this prompt\"}]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "66d1151c-2015-4e37-80c8-16bc16367cfe",
"metadata": {},
"outputs": [],
"source": [
"# Using DeepSeek Chat with a harder question! And streaming results\n",
"\n",
"stream = deepseek_via_openai_client.chat.completions.create(\n",
" model=\"deepseek-chat\",\n",
" messages=challenge,\n",
" stream=True\n",
")\n",
"\n",
"reply = \"\"\n",
"display_handle = display(Markdown(\"\"), display_id=True)\n",
"for chunk in stream:\n",
" reply += chunk.choices[0].delta.content or ''\n",
" reply = reply.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
" update_display(Markdown(reply), display_id=display_handle.display_id)\n",
"\n",
"print(\"Number of words:\", len(reply.split(\" \")))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "43a93f7d-9300-48cc-8c1a-ee67380db495",
"metadata": {},
"outputs": [],
"source": [
"# Using DeepSeek Reasoner - this may hit an error if DeepSeek is busy\n",
"# It's over-subscribed (as of 28-Jan-2025) but should come back online soon!\n",
"# If this fails, come back to this in a few days..\n",
"\n",
"response = deepseek_via_openai_client.chat.completions.create(\n",
" model=\"deepseek-reasoner\",\n",
" messages=challenge\n",
")\n",
"\n",
"reasoning_content = response.choices[0].message.reasoning_content\n",
"content = response.choices[0].message.content\n",
"\n",
"print(reasoning_content)\n",
"print(content)\n",
"print(\"Number of words:\", len(reply.split(\" \")))"
]
},
{
"cell_type": "markdown",
"id": "c09e6b5c-6816-4cd3-a5cd-a20e4171b1a0",
"metadata": {},
"source": [
"## Back to OpenAI with a serious question"
]
},
{
"cell_type": "code",
"execution_count": null,

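The cells above count words with `len(reply.split(" "))`, which miscounts whenever the reply contains double spaces or newlines. `str.split()` with no argument splits on any run of whitespace and drops empty strings, so a small helper (the name `count_words` is illustrative) gives a more robust count:

```python
# Splitting on a single space produces empty strings for repeated spaces and
# keeps newline-joined words together; str.split() avoids both problems.
def count_words(text):
    return len(text.split())

reply = "How  many words here"          # note the double space
print(len(reply.split(" ")), "vs", count_words(reply))
```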
22
week2/day3.ipynb

@ -136,26 +136,6 @@
" yield response"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40a2d5ad-e907-465e-8397-3120583a5bf9",
"metadata": {},
"outputs": [],
"source": [
"!pip show gradio"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7fed1b9-c502-4eea-b649-ca00458d5c45",
"metadata": {},
"outputs": [],
"source": [
"# 5.8.0 to 5.12"
]
},
{
"cell_type": "markdown",
"id": "1334422a-808f-4147-9c4c-57d63d9780d0",
@ -171,7 +151,7 @@
"metadata": {},
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat, type=\"messages\").launch(pwa=True)"
"gr.ChatInterface(fn=chat, type=\"messages\").launch()"
]
},
{

33
week3/community-contributions/ai-web-summarizer/.gitignore vendored

@ -0,0 +1,33 @@
# Python
__pycache__/
*.py[cod]
*.pyo
*.pyd
.Python
env/
venv/
*.env
*.ini
*.log
# VSCode
.vscode/
# IDE files
.idea/
# System files
.DS_Store
Thumbs.db
# Environment variables
.env
# Jupyter notebook checkpoints
.ipynb_checkpoints
# Dependencies
*.egg-info/
dist/
build/

143
week3/community-contributions/ai-web-summarizer/README.md

@ -0,0 +1,143 @@
# AI Web Page Summarizer
This project is a simple AI-powered web page summarizer that leverages OpenAI's GPT models and local inference with Ollama to generate concise summaries of given text. The goal is to create a "Reader's Digest of the Internet" by summarizing web content efficiently.
## Features
- Summarize text using OpenAI's GPT models or local Ollama models.
- Flexible summarization engine selection (OpenAI API, Ollama API, or Ollama library).
- Simple and modular code structure.
- Error handling for better reliability.
## Project Structure
```
ai-summarizer/
│-- summarizer/
│ │-- __init__.py
│ │-- fetcher.py # Web content fetching logic
│ │-- summarizer.py # Main summarization logic
│-- utils/
│ │-- __init__.py
│ │-- logger.py # Logging configuration
│-- main.py # Entry point of the app
│-- .env # Environment variables
│-- requirements.txt # Python dependencies
│-- README.md # Project documentation
```
## Prerequisites
- Python 3.8 or higher
- OpenAI API Key (You can obtain it from [OpenAI](https://platform.openai.com/signup))
- Ollama installed locally ([Installation Guide](https://ollama.ai))
- `conda` for managing environments (optional)
## Installation
1. **Clone the repository:**
```bash
git clone https://github.com/your-username/ai-summarizer.git
cd ai-summarizer
```
2. **Create a virtual environment (optional but recommended):**
```bash
conda create --name summarizer-env python=3.9
conda activate summarizer-env
```
3. **Install dependencies:**
```bash
pip install -r requirements.txt
```
4. **Set up environment variables:**
Create a `.env` file in the project root and add your OpenAI API key (if using OpenAI):
```env
OPENAI_API_KEY=your-api-key-here
```
## Usage
1. **Run the summarizer:**
```bash
python main.py
```
2. **Sample Output:**
```shell
Enter a URL to summarize: https://example.com
Summary of the page:
AI refers to machines demonstrating intelligence similar to humans and animals.
```
3. **Engine Selection:**
The summarizer supports multiple engines. Modify `main.py` to select your preferred model:
```python
summary = summarize_text(content, 'gpt-4o-mini', engine="openai")
summary = summarize_text(content, 'deepseek-r1:1.5B', engine="ollama-api")
summary = summarize_text(content, 'deepseek-r1:1.5B', engine="ollama-lib")
```
## Configuration
You can modify the model, max tokens, and temperature in `summarizer/summarizer.py`:
```python
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[...],
max_tokens=300,
temperature=0.7
)
```
## Error Handling
If any issues occur, the script will print an error message, for example:
```
Error during summarization: Invalid API key or Ollama not running.
```
## Dependencies
The required dependencies are listed in `requirements.txt`:
```
openai
requests
beautifulsoup4
python-dotenv
```
Install them using:
```bash
pip install -r requirements.txt
```
## Contributing
Contributions are welcome! Feel free to fork the repository and submit pull requests.
## License
This project is licensed under the MIT License. See the `LICENSE` file for more details.
## Contact
For any inquiries, please reach out to:
- Linkedin: https://www.linkedin.com/in/khanarafat/
- GitHub: https://github.com/raoarafat

28
week3/community-contributions/ai-web-summarizer/main.py

@ -0,0 +1,28 @@
from summarizer.fetcher import fetch_web_content
from summarizer.summarizer import summarize_text
from utils.logger import logger
def main():
url = input("Enter a URL to summarize: ")
logger.info(f"Fetching content from: {url}")
content = fetch_web_content(url)
if content:
logger.info("Content fetched successfully. Sending to OpenAI for summarization...")
# summary = summarize_text(content,'gpt-4o-mini', engine="openai")
# summary = summarize_text(content, 'deepseek-r1:1.5B', engine="ollama-lib")
summary = summarize_text(content, 'deepseek-r1:1.5B', engine="ollama-api")
if summary:
logger.info("Summary generated successfully.")
print("\nSummary of the page:\n")
print(summary)
else:
logger.error("Failed to generate summary.")
else:
logger.error("Failed to fetch web content.")
if __name__ == "__main__":
main()

4
week3/community-contributions/ai-web-summarizer/requirements.txt

@ -0,0 +1,4 @@
openai
requests
beautifulsoup4
python-dotenv

0
week3/community-contributions/ai-web-summarizer/summarizer/__init__.py

23
week3/community-contributions/ai-web-summarizer/summarizer/fetcher.py

@ -0,0 +1,23 @@
import requests
from bs4 import BeautifulSoup
def fetch_web_content(url):
try:
response = requests.get(url)
response.raise_for_status()
# Parse the HTML content
soup = BeautifulSoup(response.text, 'html.parser')
# Extract readable text from the web page (ignoring scripts, styles, etc.)
page_text = soup.get_text(separator=' ', strip=True)
return page_text[:5000] # Limit to 5000 chars (API limitation)
except requests.exceptions.RequestException as e:
print(f"Error fetching the webpage: {e}")
return None
if __name__ == "__main__":
url = "https://en.wikipedia.org/wiki/Natural_language_processing"
content = fetch_web_content(url)
print(content[:500]) # Print a sample of the content

85
week3/community-contributions/ai-web-summarizer/summarizer/summarizer.py

@ -0,0 +1,85 @@
import openai # type: ignore
import ollama
import requests
from utils.config import Config
# Local Ollama API endpoint
OLLAMA_API = "http://127.0.0.1:11434/api/chat"
# Initialize OpenAI client with API key
client = openai.Client(api_key=Config.OPENAI_API_KEY)
def summarize_with_openai(text, model):
"""Summarize text using OpenAI's GPT model."""
try:
response = client.chat.completions.create(
model=model,
messages=[
{"role": "system", "content": "You are a helpful assistant that summarizes web pages."},
{"role": "user", "content": f"Summarize the following text: {text}"}
],
max_tokens=300,
temperature=0.7
)
return response.choices[0].message.content
except Exception as e:
print(f"Error during OpenAI summarization: {e}")
return None
def summarize_with_ollama_lib(text, model):
"""Summarize text using Ollama Python library."""
try:
messages = [
{"role": "system", "content": "You are a helpful assistant that summarizes web pages."},
{"role": "user", "content": f"Summarize the following text: {text}"}
]
response = ollama.chat(model=model, messages=messages)
return response['message']['content']
except Exception as e:
print(f"Error during Ollama summarization: {e}")
return None
def summarize_with_ollama_api(text, model):
"""Summarize text using local Ollama API."""
try:
payload = {
"model": model,
"messages": [
{"role": "system", "content": "You are a helpful assistant that summarizes web pages."},
{"role": "user", "content": f"Summarize the following text: {text}"}
],
"stream": False # Set to True for streaming responses
}
response = requests.post(OLLAMA_API, json=payload)
response_data = response.json()
return response_data.get('message', {}).get('content', 'No summary generated')
except Exception as e:
print(f"Error during Ollama API summarization: {e}")
return None
def summarize_text(text, model, engine="openai"):
"""Generic function to summarize text using the specified engine (openai/ollama-lib/ollama-api)."""
if engine == "openai":
return summarize_with_openai(text, model)
elif engine == "ollama-lib":
return summarize_with_ollama_lib(text, model)
elif engine == "ollama-api":
return summarize_with_ollama_api(text, model)
else:
print("Invalid engine specified. Use 'openai', 'ollama-lib', or 'ollama-api'.")
return None
if __name__ == "__main__":
sample_text = "Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals and humans."
# Summarize using OpenAI
openai_summary = summarize_text(sample_text, model="gpt-3.5-turbo", engine="openai")
print("OpenAI Summary:", openai_summary)
# Summarize using Ollama Python library
ollama_lib_summary = summarize_text(sample_text, model="deepseek-r1:1.5B", engine="ollama-lib")
print("Ollama Library Summary:", ollama_lib_summary)
# Summarize using local Ollama API
ollama_api_summary = summarize_text(sample_text, model="deepseek-r1:1.5B", engine="ollama-api")
print("Ollama API Summary:", ollama_api_summary)

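`summarize_text` in `summarizer.py` dispatches on the engine string with an if/elif chain. An equivalent table-driven sketch, with the three engine functions stubbed out (the stub bodies are illustrative, not the real API calls) so the dispatch itself can be exercised offline:

```python
# Table-driven variant of summarize_text's engine dispatch; engine functions
# are stubs standing in for the real OpenAI/Ollama calls.
def summarize_with_openai(text, model):
    return f"openai:{model}"        # stub for the OpenAI client call

def summarize_with_ollama_lib(text, model):
    return f"ollama-lib:{model}"    # stub for ollama.chat

def summarize_with_ollama_api(text, model):
    return f"ollama-api:{model}"    # stub for the local HTTP endpoint

ENGINES = {
    "openai": summarize_with_openai,
    "ollama-lib": summarize_with_ollama_lib,
    "ollama-api": summarize_with_ollama_api,
}

def summarize_text(text, model, engine="openai"):
    engine_fn = ENGINES.get(engine)
    if engine_fn is None:
        print("Invalid engine specified. Use 'openai', 'ollama-lib', or 'ollama-api'.")
        return None
    return engine_fn(text, model)

print(summarize_text("some text", "gpt-4o-mini", engine="openai"))
```

Adding a new engine then means adding one dictionary entry rather than another elif branch.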
0
week3/community-contributions/ai-web-summarizer/utils/__init__.py

11
week3/community-contributions/ai-web-summarizer/utils/config.py

@ -0,0 +1,11 @@
import os
from dotenv import load_dotenv
# Load environment variables from .env file
load_dotenv()
class Config:
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if __name__ == "__main__":
print("Your OpenAI Key is:", Config.OPENAI_API_KEY)

16
week3/community-contributions/ai-web-summarizer/utils/logger.py

@ -0,0 +1,16 @@
import logging
# Setup logging configuration
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s - %(levelname)s - %(message)s",
handlers=[
logging.FileHandler("app.log"),
logging.StreamHandler()
]
)
logger = logging.getLogger(__name__)
if __name__ == "__main__":
logger.info("Logger is working correctly.")

29
week4/community-contributions/doc_string_exercise/README.md

@ -0,0 +1,29 @@
# Script Overview
This documentation shows how to run the Python script generate_doc_string.py. The script takes an existing
Python file as input and creates a new one with a suffix ('_claude' or '_gpt'). If you do not specify an
LLM model, it will default to claude.
# How to run
```powershell
conda activate llms
cd <script_location>
python generate_doc_string.py -fp <full_file_path> -llm <name_of_model>
```
# Show Help Instructions
```shell
python generate_doc_string.py --help
```
# Error Checking
1) File Path Existence
If the file path doesn't exist, the script will stop running and print out an error.
2) LLM Model Choice
If you choose something other than 'gpt' or 'claude', it will show an assertion error.

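The error checking described above can also be pushed into the parser itself. A sketch of the argument handling under that assumption (`build_parser` is an illustrative helper, and the real script additionally verifies the file path exists): using `choices=` turns the 'gpt'/'claude' check into a parse-time error rather than an assertion.

```python
from argparse import ArgumentParser

# Sketch: argparse-level validation of the model choice described in the README.
def build_parser():
    parser = ArgumentParser(
        prog="generate_doc_string.py",
        description="Run Doc String for a given file and model",
    )
    parser.add_argument("-fp", "--file_path", required=True,
                        help="Path to the script that will get doc strings")
    parser.add_argument("-llm", "--llm_model", choices=["gpt", "claude"],
                        default="claude", help="LLM model to use")
    return parser

args = build_parser().parse_args(["-fp", "original_file.py"])
print(args.llm_model)
```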
19
week4/community-contributions/doc_string_exercise/data/original_file.py

@ -0,0 +1,19 @@
def calculate(iterations, param1, param2):
result = 1.0
for i in range(1, iterations+1):
j = i * param1 - param2
result -= (1/j)
j = i * param1 + param2
result += (1/j)
return result
def calculate_2(iterations, param1, param2):
result = 1.0
for i in range(1, iterations+1):
j = i * param1 - param2
result -= (1/j)
j = i * param1 + param2
result += (1/j)
return result

85
week4/community-contributions/doc_string_exercise/generate_doc_string.py

@ -0,0 +1,85 @@
from argparse import ArgumentParser
import os
from dotenv import load_dotenv
from openai import OpenAI
import anthropic
from utils import add_doc_string, Model, get_system_message
from pathlib import Path
def main():
# get run time arguments
parser = ArgumentParser(
prog='Generate Doc String for an existing functions',
description='Run Doc String for a given file and model',
)
parser.add_argument(
'-fp',
'--file_path',
help='Enter the file path to the script that will be updated with doc strings',
default=None
)
parser.add_argument(
'-llm',
'--llm_model',
help='Choose the LLM model that will create the doc strings',
default='claude'
)
# get run time arguments
args = parser.parse_args()
file_path = Path(args.file_path)
llm_model = args.llm_model
# check for file path
assert file_path.exists(), f"File Path {str(file_path.as_posix())} doesn't exist. Please try again."
# check for value llm values
assert llm_model in ['gpt', 'claude'], (f"Invalid model chosen '{llm_model}'. "
f"Please choose a valid model ('gpt' or 'claude')")
# load keys and environment variables
load_dotenv()
os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')
os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')
os.environ['HF_TOKEN'] = os.getenv('HF_INF_TOKEN', 'your-key-if-not-using-env')
# get system messages
system_message = get_system_message()
# get model info
model_info = {
'gpt': {
'client': OpenAI(),
'model': Model.OPENAI_MODEL.value,
},
'claude': {
'client': anthropic.Anthropic(),
'model': Model.CLAUDE_MODEL.value
}
}
    # add standard arguments
model_info[llm_model].update(
{
'file_path': file_path,
'system_message': system_message
}
)
    # add doc strings to the file using the selected model
print(f"\nSTARTED | Doc Strings Using {llm_model.upper()} for file {str(file_path)}\n\n")
add_doc_string(**model_info[llm_model])
print(f"\nFINISHED | Doc Strings Using {llm_model.upper()} for file {str(file_path)}\n\n")
if __name__ == '__main__':
main()

147
week4/community-contributions/doc_string_exercise/utils.py

@ -0,0 +1,147 @@
from enum import Enum
from pathlib import Path
class Model(Enum):
"""
Enumeration of supported AI models.
"""
OPENAI_MODEL = "gpt-4o"
CLAUDE_MODEL = "claude-3-5-sonnet-20240620"
def get_system_message() -> str:
"""
Generate a system message for AI assistants creating docstrings.
:return: A string containing instructions for the AI assistant.
:rtype: str
"""
    system_message = "You are an assistant that creates doc strings in reStructuredText format for an existing python function. "
system_message += "Respond only with an updated python function; use comments sparingly and do not provide any explanation other than occasional comments. "
system_message += "Be sure to include typing annotation for each function argument or key word argument and return object types."
return system_message
def user_prompt_for(python: str) -> str:
"""
Generate a user prompt for rewriting Python functions with docstrings.
:param python: The Python code to be rewritten.
:type python: str
:return: A string containing the user prompt and the Python code.
:rtype: str
"""
user_prompt = "Rewrite this Python function with doc strings in the reStructuredText style."
user_prompt += "Respond only with python code; do not explain your work other than a few comments. "
user_prompt += "Be sure to write a description of the function purpose with typing for each argument and return\n\n"
user_prompt += python
return user_prompt
def messages_for(python: str, system_message: str) -> list:
"""
Create a list of messages for the AI model.
:param python: The Python code to be processed.
:type python: str
:param system_message: The system message for the AI assistant.
:type system_message: str
:return: A list of dictionaries containing role and content for each message.
:rtype: list
"""
return [
{"role": "system", "content": system_message},
{"role": "user", "content": user_prompt_for(python)}
]
def write_output(output: str, file_suffix: str, file_path: Path) -> None:
"""
Write the processed output to a file.
:param output: The processed Python code with docstrings.
:type output: str
:param file_suffix: The suffix to be added to the output file name.
:type file_suffix: str
:param file_path: The path of the input file.
:type file_path: Path
:return: None
"""
    code = output.replace("```python", "").replace("```", "")
out_file = file_path.with_name(f"{file_path.stem}{file_suffix if file_suffix else ''}.py")
out_file.write_text(code)
def add_doc_string(client: object, system_message: str, file_path: Path, model: str) -> None:
"""
Add docstrings to a Python file using the specified AI model.
:param client: The AI client object.
:type client: object
:param system_message: The system message for the AI assistant.
:type system_message: str
:param file_path: The path of the input Python file.
:type file_path: Path
:param model: The AI model to be used.
:type model: str
:return: None
"""
if 'gpt' in model:
add_doc_string_gpt(client=client, system_message=system_message, file_path=file_path, model=model)
else:
add_doc_string_claude(client=client, system_message=system_message, file_path=file_path, model=model)
def add_doc_string_gpt(client: object, system_message: str, file_path: Path, model: str = 'gpt-4o') -> None:
"""
Add docstrings to a Python file using GPT model.
:param client: The OpenAI client object.
:type client: object
:param system_message: The system message for the AI assistant.
:type system_message: str
:param file_path: The path of the input Python file.
:type file_path: Path
:param model: The GPT model to be used, defaults to 'gpt-4o'.
:type model: str
:return: None
"""
code_text = file_path.read_text(encoding='utf-8')
stream = client.chat.completions.create(model=model, messages=messages_for(code_text, system_message), stream=True)
reply = ""
for chunk in stream:
fragment = chunk.choices[0].delta.content or ""
reply += fragment
print(fragment, end='', flush=True)
write_output(reply, file_suffix='_gpt', file_path=file_path)
def add_doc_string_claude(client: object, system_message: str, file_path: Path, model: str = 'claude-3-5-sonnet-20240620') -> None:
"""
Add docstrings to a Python file using Claude model.
:param client: The Anthropic client object.
:type client: object
:param system_message: The system message for the AI assistant.
:type system_message: str
:param file_path: The path of the input Python file.
:type file_path: Path
:param model: The Claude model to be used, defaults to 'claude-3-5-sonnet-20240620'.
:type model: str
:return: None
"""
code_text = file_path.read_text(encoding='utf-8')
result = client.messages.stream(
model=model,
max_tokens=2000,
system=system_message,
messages=[{"role": "user", "content": user_prompt_for(code_text)}],
)
reply = ""
with result as stream:
for text in stream.text_stream:
reply += text
print(text, end="", flush=True)
write_output(reply, file_suffix='_claude', file_path=file_path)
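The fence-stripping step inside `write_output` above is easy to exercise in isolation. Here is a minimal sketch (the `strip_fences` helper name is hypothetical, introduced only for illustration) of why the `` ```python `` opener must be removed before the bare closing fence:

```python
# Hypothetical helper mirroring the fence-stripping step in write_output.
# Model replies typically arrive wrapped in markdown code fences; removing
# the "```python" opener first, then any remaining "```", leaves plain
# source text that is safe to write to a .py file.

def strip_fences(output: str) -> str:
    return output.replace("```python", "").replace("```", "")

reply = "```python\ndef add(a: int, b: int) -> int:\n    return a + b\n```"
print(strip_fences(reply))
```

Note that the order matters: replacing `` ``` `` first would leave a stray `python` token at the top of the emitted file.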

2
week7/day2.ipynb

@@ -31,7 +31,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,

2
week7/day3 and 4.ipynb

@@ -31,7 +31,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,

2
week7/day5.ipynb

@@ -31,7 +31,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
"version": "3.11.11"
}
},
"nbformat": 4,

18
week8/agents/frontier_agent.py

@@ -23,11 +23,19 @@ class FrontierAgent(Agent):
def __init__(self, collection):
"""
Set up this instance by connecting to OpenAI, to the Chroma Datastore,
Set up this instance by connecting to OpenAI or DeepSeek, to the Chroma Datastore,
And setting up the vector encoding model
"""
self.log("Initializing Frontier Agent")
self.openai = OpenAI()
deepseek_api_key = os.getenv("DEEPSEEK_API_KEY")
if deepseek_api_key:
self.client = OpenAI(api_key=deepseek_api_key, base_url="https://api.deepseek.com")
self.MODEL = "deepseek-chat"
self.log("Frontier Agent is set up with DeepSeek")
else:
self.client = OpenAI()
self.MODEL = "gpt-4o-mini"
            self.log("Frontier Agent is set up with OpenAI")
self.collection = collection
self.model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
self.log("Frontier Agent is ready")
@@ -85,14 +93,14 @@ class FrontierAgent(Agent):
def price(self, description: str) -> float:
"""
Make a call to OpenAI to estimate the price of the described product,
Make a call to OpenAI or DeepSeek to estimate the price of the described product,
by looking up 5 similar products and including them in the prompt to give context
:param description: a description of the product
:return: an estimate of the price
"""
documents, prices = self.find_similars(description)
self.log("Frontier Agent is about to call OpenAI with context including 5 similar products")
response = self.openai.chat.completions.create(
self.log(f"Frontier Agent is about to call {self.MODEL} with context including 5 similar products")
response = self.client.chat.completions.create(
model=self.MODEL,
messages=self.messages_for(description, documents, prices),
seed=42,
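The backend-selection logic added to `FrontierAgent` in the diff above follows a simple env-var fallback pattern. This standalone sketch returns a `(base_url, model)` pair instead of constructing a client, so it runs without the `openai` package installed; the `select_backend` name is hypothetical:

```python
import os

# Sketch of the selection logic in FrontierAgent.__init__: prefer DeepSeek
# when DEEPSEEK_API_KEY is present, otherwise fall back to OpenAI's
# gpt-4o-mini. A base_url of None means the client's default endpoint.

def select_backend() -> tuple:
    deepseek_api_key = os.getenv("DEEPSEEK_API_KEY")
    if deepseek_api_key:
        return ("https://api.deepseek.com", "deepseek-chat")
    return (None, "gpt-4o-mini")

os.environ["DEEPSEEK_API_KEY"] = "sk-example"  # illustrative value only
print(select_backend())
```

Because DeepSeek exposes an OpenAI-compatible API, only the `api_key` and `base_url` arguments to the `OpenAI` client need to change; the rest of the agent's code is untouched.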

106
week8/day2.3.ipynb

@@ -209,7 +209,7 @@
"metadata": {},
"outputs": [],
"source": [
"test[1].prompt"
"print(test[1].prompt)"
]
},
{
@@ -255,6 +255,16 @@
" return float(match.group()) if match else 0"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "06743833-c362-47f8-b02a-139be2cd52ab",
"metadata": {},
"outputs": [],
"source": [
"get_price(\"The price for this is $99.99\")"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -306,6 +316,86 @@
"Tester.test(gpt_4o_mini_rag, test)"
]
},
{
"cell_type": "markdown",
"id": "d793c6d0-ce3f-4680-b37d-4643f0cd1d8e",
"metadata": {},
"source": [
"## Optional Extra: Trying a DeepSeek API call instead of OpenAI\n",
"\n",
"If you have a DeepSeek API key, we will use it here as an alternative implementation; otherwise skip to the next section."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "21b6a22f-0195-47b6-8f6d-cab6ebe05742",
"metadata": {},
"outputs": [],
"source": [
"# Connect to DeepSeek using the OpenAI client python library\n",
"\n",
"deepseek_api_key = os.getenv(\"DEEPSEEK_API_KEY\")\n",
"deepseek_via_openai_client = OpenAI(api_key=deepseek_api_key,base_url=\"https://api.deepseek.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ea7267d6-9489-4dac-a6e0-aec108e788c2",
"metadata": {},
"outputs": [],
"source": [
"# Added some retry logic here because DeepSeek is very oversubscribed and sometimes fails.\n",
"\n",
"def deepseek_api_rag(item):\n",
" documents, prices = find_similars(item)\n",
" retries = 8\n",
" reply = \"\"\n",
" done = False\n",
" while not done and retries > 0:\n",
" try:\n",
" response = deepseek_via_openai_client.chat.completions.create(\n",
" model=\"deepseek-chat\", \n",
" messages=messages_for(item, documents, prices),\n",
" seed=42,\n",
" max_tokens=8\n",
" )\n",
" reply = response.choices[0].message.content\n",
" done = True\n",
" except Exception as e:\n",
" print(f\"Error: {e}\")\n",
" retries -= 1\n",
" return get_price(reply)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6560faf2-4dec-41e5-95e2-b2c46cdb3ba8",
"metadata": {},
"outputs": [],
"source": [
"deepseek_api_rag(test[1])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0578b116-869f-429d-8382-701f1c0882f3",
"metadata": {},
"outputs": [],
"source": [
"Tester.test(deepseek_api_rag, test)"
]
},
{
"cell_type": "markdown",
"id": "6739870f-1eec-4547-965d-4b594e685697",
"metadata": {},
"source": [
"## And now to wrap this in an \"Agent\" class"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -316,6 +406,20 @@
"from agents.frontier_agent import FrontierAgent"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2efa7ba9-c2d7-4f95-8bb5-c4295bbeb01f",
"metadata": {},
"outputs": [],
"source": [
"# Let's print the logs so we can see what's going on\n",
"\n",
"import logging\n",
"root = logging.getLogger()\n",
"root.setLevel(logging.INFO)"
]
},
{
"cell_type": "code",
"execution_count": null,

4
week8/day5.ipynb

@@ -141,7 +141,9 @@
"source": [
"# Running the final product\n",
"\n",
"## Just hit shift + enter in the next cell, and let the deals flow in!!"
"## Just hit shift + enter in the next cell, and let the deals flow in!!\n",
"\n",
"Note that the Frontier Agent will use DeepSeek if there's a DEEPSEEK_API_KEY in your .env file, otherwise gpt-4o-mini."
]
},
{
