w01d01 website summary with Llama and OpenAI

pull/338/head
Andres Mendoza, 3 weeks ago
commit 14067a9a2b
  1. week1/community-contributions/day1-mine.ipynb (836 lines changed)
  2. week1/community-contributions/day1_mine_simple.ipynb (199 lines changed)
  3. week1/community-contributions/website-summary/doc/LLM_INTEGRATION_GUIDE.md (739 lines changed)
  4. week1/community-contributions/website-summary/doc/README.md (307 lines changed)
  5. week1/community-contributions/website-summary/setup_environment.py (48 lines changed)
  6. week1/community-contributions/website-summary/src/config/constants.py (29 lines changed)
  7. week1/community-contributions/website-summary/src/example_usage.ipynb (455 lines changed)
  8. week1/community-contributions/website-summary/src/helper/display_utils.py (30 lines changed)
  9. week1/community-contributions/website-summary/src/helper/env_utils.py (43 lines changed)
  10. week1/community-contributions/website-summary/src/helper/web_scraper.py (236 lines changed)
  11. week1/community-contributions/website-summary/src/llm/base_client.py (76 lines changed)
  12. week1/community-contributions/website-summary/src/llm/helper/prompt_utils.py (62 lines changed)
  13. week1/community-contributions/website-summary/src/llm/helper/validation_utils.py (63 lines changed)
  14. week1/community-contributions/website-summary/src/llm/llama/helper/check_ollama_models.py (26 lines changed)
  15. week1/community-contributions/website-summary/src/llm/llama/llama_client.py (156 lines changed)
  16. week1/community-contributions/website-summary/src/llm/llm_factory.py (43 lines changed)
  17. week1/community-contributions/website-summary/src/llm/open_api/openai_client.py (118 lines changed)
  18. week1/community-contributions/website-summary/src/main_summarize.py (144 lines changed)
  19. week1/community-contributions/website-summary/src/structures/models.py (22 lines changed)
  20. week1/day1.ipynb (2 lines changed)

836
week1/community-contributions/day1-mine.ipynb

@@ -0,0 +1,836 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# YOUR FIRST LAB\n",
"### Please read this section. This is valuable to get you prepared, even if it's a long read -- it's important stuff.\n",
"\n",
"## Your first Frontier LLM Project\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n",
"\n",
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
"\n",
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n",
"\n",
"## If you're new to Jupyter Lab\n",
"\n",
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n",
"\n",
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
"\n",
"## If you're new to the Command Line\n",
"\n",
"Please see these excellent guides: [Command line on PC](https://chatgpt.com/share/67b0acea-ba38-8012-9c34-7a2541052665) and [Command line on Mac](https://chatgpt.com/canvas/shared/67b0b10c93a081918210723867525d2b). \n",
"\n",
"## If you'd prefer to work in IDEs\n",
"\n",
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n",
"If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n",
"\n",
"## If you'd like to brush up your Python\n",
"\n",
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n",
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n",
"\n",
"## I am here to help\n",
"\n",
"If you have any problems at all, please do reach out. \n",
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!) \n",
"And this is new to me, but I'm also trying out X/Twitter at [@edwarddonner](https://x.com/edwarddonner) - if you're on X, please show me how it's done 😂 \n",
"\n",
"## More troubleshooting\n",
"\n",
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n",
"\n",
"## If this is old hat!\n",
"\n",
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Please read - important note</h2>\n",
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, <b>after</b> watching the lecture. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">Treat these labs as a resource</h2>\n",
" <span style=\"color:#f71;\">I push updates to the code regularly. When people ask questions or have problems, I incorporate it in the code, adding more examples or improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here; but in addition, I've added more steps and better explanations, and occasionally added new models like DeepSeek. Consider this like an interactive book that accompanies the lectures.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n",
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# If you get an error running this cell, then please head over to the troubleshooting notebook!"
]
},
{
"cell_type": "markdown",
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
"metadata": {},
"source": [
"# Connecting to OpenAI (or Ollama)\n",
"\n",
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI. \n",
"\n",
"If you'd like to use free Ollama instead, please see the README section \"Free Alternative to Paid APIs\", and if you're not sure how to do this, there's a full solution in the solutions folder (day1_with_ollama.ipynb).\n",
"\n",
"## Troubleshooting if you have problems:\n",
"\n",
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n",
"\n",
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n",
"\n",
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
"\n",
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"API key found and looks good so far!\n"
]
}
],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"\n",
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions"
]
},
{
"cell_type": "markdown",
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91",
"metadata": {},
"source": [
"# Let's make a quick call to a Frontier model to get started, as a preview!"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! Welcome! I'm glad you're here. How can I assist you today?\n"
]
}
],
"source": [
"# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n",
"\n",
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "2aa190e5-cb31-456a-96cc-db109919cd78",
"metadata": {},
"source": [
"## OK onwards with our first project"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Home - Edward Donner\n",
"Home\n",
"Connect Four\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"January 23, 2025\n",
"LLM Workshop – Hands-on with Agents – resources\n",
"December 21, 2024\n",
"Welcome, SuperDataScientists!\n",
"November 13, 2024\n",
"Mastering AI and LLM Engineering – Resources\n",
"October 16, 2024\n",
"From Software Engineer to AI Data Scientist – resources\n",
"Navigation\n",
"Home\n",
"Connect Four\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n"
]
}
],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.title)\n",
"print(ed.text)"
]
},
{
"cell_type": "markdown",
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
"metadata": {},
"source": [
"## Types of prompts\n",
"\n",
"You may know this already - but if not, you will get very familiar with it!\n",
"\n",
"Models like GPT4o have been trained to receive instructions in a particular way.\n",
"\n",
"They expect to receive:\n",
"\n",
"**A system prompt** that tells them what task they are performing and what tone they should use\n",
"\n",
"**A user prompt** -- the conversation starter that they should reply to"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
"source": [
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
"source": [
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You are looking at a website titled Home - Edward Donner\n",
"The contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\n",
"\n",
"Home\n",
"Connect Four\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"January 23, 2025\n",
"LLM Workshop – Hands-on with Agents – resources\n",
"December 21, 2024\n",
"Welcome, SuperDataScientists!\n",
"November 13, 2024\n",
"Mastering AI and LLM Engineering – Resources\n",
"October 16, 2024\n",
"From Software Engineer to AI Data Scientist – resources\n",
"Navigation\n",
"Home\n",
"Connect Four\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n"
]
}
],
"source": [
"print(user_prompt_for(ed))"
]
},
{
"cell_type": "markdown",
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
"metadata": {},
"source": [
"## Messages\n",
"\n",
"The API from OpenAI expects to receive messages in a particular structure.\n",
"Many of the other APIs share this structure:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]\n",
"\n",
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Oh, that's a tough one! Let me think... It's 4! Surprised? You shouldn't be.\n"
]
}
],
"source": [
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47",
"metadata": {},
"source": [
"## And now let's build useful messages for GPT-4o-mini, using a function"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
"source": [
"# See how this function creates exactly the format above\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'role': 'system',\n",
" 'content': 'You are an assistant that analyzes the contents of a website and provides a short summary, ignoring text that might be navigation related. Respond in markdown.'},\n",
" {'role': 'user',\n",
" 'content': 'You are looking at a website titled Home - Edward Donner\\nThe contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\\n\\nHome\\nConnect Four\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nWell, hi there.\\nI’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\\nvery\\namateur) and losing myself in\\nHacker News\\n, nodding my head sagely to things I only half understand.\\nI’m the co-founder and CTO of\\nNebula.io\\n. We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\\nacquired in 2021\\n.\\nWe work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\\npatented\\nour matching model, and our award-winning platform has happy customers and tons of press coverage.\\nConnect\\nwith me for more!\\nJanuary 23, 2025\\nLLM Workshop – Hands-on with Agents – resources\\nDecember 21, 2024\\nWelcome, SuperDataScientists!\\nNovember 13, 2024\\nMastering AI and LLM Engineering – Resources\\nOctober 16, 2024\\nFrom Software Engineer to AI Data Scientist – resources\\nNavigation\\nHome\\nConnect Four\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nGet in touch\\ned [at] edwarddonner [dot] com\\nwww.edwarddonner.com\\nFollow me\\nLinkedIn\\nTwitter\\nFacebook\\nSubscribe to newsletter\\nType your email…\\nSubscribe'}]"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Try this out, and then try for a few more websites\n",
"\n",
"messages_for(ed)"
]
},
{
"cell_type": "markdown",
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
"metadata": {},
"source": [
"## Time to bring it together - the API for OpenAI is very simple!"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
"source": [
"# And now: call the OpenAI API. You will get very familiar with this!\n",
"\n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages_for(website)\n",
" )\n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"# Summary of Edward Donner's Website\\n\\nEdward Donner's website serves as a hub for his interests in coding, AI, and LLMs (Large Language Models). He is the co-founder and CTO of Nebula.io, which aims to apply AI in talent management to help individuals realize their potential. Previously, he founded the AI startup untapt, acquired in 2021. In addition to technology, Ed enjoys DJing and amateur music production.\\n\\n## Recent Posts and Announcements\\n- **January 23, 2025**: LLM Workshop – Hands-on with Agents – resources available.\\n- **December 21, 2024**: Welcome to SuperDataScientists!\\n- **November 13, 2024**: Mastering AI and LLM Engineering – Resources shared.\\n- **October 16, 2024**: From Software Engineer to AI Data Scientist – resources accessible.\\n\\nEd encourages connections via email and social media.\""
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
"source": [
"# A function to display this nicely in the Jupyter output, using markdown\n",
"\n",
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "markdown",
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624",
"metadata": {},
"source": [
"# Let's try more websites\n",
"\n",
"Note that this will only work on websites that can be scraped using this simplistic approach.\n",
"\n",
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n",
"\n",
"Also Websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n",
"\n",
"But many websites will work just fine!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://cnn.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
"metadata": {},
"outputs": [],
"source": [
"display_summary(\"https://anthropic.com\")"
]
},
{
"cell_type": "markdown",
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business applications</h2>\n",
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
"\n",
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n",
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"**INT. COFFEE SHOP - DAY**\n",
"\n",
"*George and Elaine are seated at a small table, sipping coffee. Across from them sit Ross and Phoebe, both animated, comfortable in their opinions. The atmosphere is lively with the chatter of other patrons in the background.*\n",
"\n",
"**GEORGE**\n",
"Listen, Elaine, you know it’s obvious that Seinfeld is the quintessential show about nothing. We took the mundane and made it extraordinary! Who else could elevate a conversation about a muffin top to such heights?\n",
"\n",
"**ELAINE**\n",
"Exactly! And let’s not forget the iconic catchphrases! \"Not that there's anything wrong with that.\" It's legendary! Plus, we had real-life social commentary, unlike that \"will they, won’t they\" situation with Ross and Rachel.\n",
"\n",
"**ROSS**\n",
"Oh, please! Friends captured real relationships and struggles. It depicted what it’s like to navigate life in your 20s and 30s. Plus, we had amazing story arcs and character development that brought out real emotions.\n",
"\n",
"**PHOEBE**\n",
"Yeah! And think about it – I mean, I had \"Smelly Cat!\" Seinfeld never had a character who could turn that kind of cringe into a cultural phenomenon. We had laughter, friendship, and even some healthy doses of insanity!\n",
"\n",
"*George rolls his eyes dramatically.*\n",
"\n",
"**GEORGE**\n",
"Healthy doses of insanity?! You mean \"Bing-ing\" instead of \"calling\"? Please! We had Newman, we had the Soup Nazi... we had real insanity!\n",
"\n",
"**ELAINE**\n",
"And what about the \"Festivus\" tradition? That's way more relatable than whatever weird holiday you guys came up with!\n",
"\n",
"**ROSS**\n",
"Relatable? Sure, if you want a holiday based on grievances! At least we know how to have a good time with our Thanksgiving dinners... despite the turkey being burnt.\n",
"\n",
"*Just then, KRAMER bursts in, his hair wild and energy infectious. He stands between the tables.*\n",
"\n",
"**KRAMER**\n",
"Hey! What’s all this ruckus? Seinfeld vs. Friends? This is a no-brainer! Friends is all about camaraderie, man! The laughter, the love… it just speaks to the soul!\n",
"\n",
"*JOEY saunters in behind KRAMER, looking confused.*\n",
"\n",
"**JOEY**\n",
"Wait, are we talking about shows? Dude, Seinfeld is where it’s at! The humor is on a whole other level. I mean, we had the best side characters! George, Elaine… they’re classic!\n",
"\n",
"*George smiles smugly while Elaine nods in agreement.*\n",
"\n",
"**KRAMER**\n",
"No, no, no! Friends brought warmth and connection! You guys were just waiting for your next cynical observation, while we were celebrating each other!\n",
"\n",
"**PHOEBE**\n",
"Plus, let’s not forget our iconic theme song! \"I'll be there for you!\" That’s a mantra! Not some whiny neuroses about a confusing relationship!\n",
"\n",
"**GEORGE**\n",
"(leaning in)\n",
"You mean a song that reminds you that you're always gonna be alone? That's not a mantra! That's a sad reminder of what friendship should feel like!\n",
"\n",
"*Everyone laughs, the tension easing slightly.*\n",
"\n",
"**JOEY**\n",
"But seriously… we had the best pizza in New York. That’s like a big deal.\n",
"\n",
"**ELAINE**\n",
"Oh, give me a break, Joey. You think pizza can save that show?\n",
"\n",
"*They all burst into laughter, the debate continuing but now with a more playful tone. As the scene fades out, they exchange playful jabs and mock arguments, each unwilling to concede.*\n",
"\n",
"**FADE OUT.**"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Step 1: Create your prompts\n",
"\n",
"system_prompt = \"You are script producer for tv series\"\n",
"user_prompt = \"\"\"\n",
" Create a scene where George Constanza and Elaine from Seinfeld debate with Ross and Pheobe from Friends about which show is better. Each side defends it's own tv show.\n",
" After a while of discussion, Kramer and Joey enter the same; in this case, Kramer roots for friends and Joey roots for Seinfeld.\n",
" \n",
" Try to be a script with 4 interventions of each at most.\n",
"\"\"\"\n",
"\n",
"# Step 2: Make the messages list\n",
"\n",
"messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
"]\n",
"\n",
"# Step 3: Call OpenAI\n",
"\n",
"response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages\n",
" )\n",
"\n",
"# Step 4: print the result\n",
"\n",
"display(Markdown(response.choices[0].message.content))"
]
},
{
"cell_type": "markdown",
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda",
"metadata": {},
"source": [
"## An extra exercise for those who enjoy web scraping\n",
"\n",
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)"
]
},
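{
"cell_type": "markdown",
"id": "selenium-sketch-note",
"metadata": {},
"source": [
"A hedged sketch, not the course's reference solution: the cell below shows one way a Javascript-capable variant of the `Website` class might look, using Selenium. It assumes `pip install selenium` and a Chrome driver on your PATH; the class name `JSWebsite` and the `--headless=new` flag are illustrative assumptions."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "selenium-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"from selenium import webdriver\n",
"from selenium.webdriver.chrome.options import Options\n",
"\n",
"class JSWebsite:\n",
"\n",
"    def __init__(self, url):\n",
"        self.url = url\n",
"        options = Options()\n",
"        options.add_argument(\"--headless=new\")\n",
"        driver = webdriver.Chrome(options=options)\n",
"        try:\n",
"            driver.get(url)\n",
"            html = driver.page_source  # rendered HTML, after Javascript has run\n",
"        finally:\n",
"            driver.quit()\n",
"        soup = BeautifulSoup(html, 'html.parser')\n",
"        self.title = soup.title.string if soup.title else \"No title found\"\n",
"        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
"            irrelevant.decompose()\n",
"        self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
"\n",
"# To try it, adapt summarize() to accept a ready-made Website-like object."
]
},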
{
"cell_type": "markdown",
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6",
"metadata": {},
"source": [
"# Sharing your code\n",
"\n",
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
"\n",
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
"\n",
"Here are good instructions courtesy of an AI friend: \n",
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f4484fcf-8b39-4c3f-9674-37970ed71988",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

199
week1/community-contributions/day1_mine_simple.ipynb

@@ -0,0 +1,199 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# YOUR FIRST LAB\n",
"### Please read this section. This is valuable to get you prepared, even if it's a long read -- it's important stuff.\n",
"\n",
"## Your first Frontier LLM Project\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"API key found and looks good so far!\n",
"Hello! It’s great to hear from you! How can I assist you today?\n"
]
},
{
"data": {
"text/markdown": [
"# Edward Donner Website Summary\n",
"\n",
"Edward Donner's website serves as a personal platform where he shares insights related to code, LLMs (Large Language Models), and his interests in music production and technology. \n",
"\n",
"## About Ed\n",
"- Ed is the co-founder and CTO of Nebula.io, an AI-driven company focused on talent discovery and engagement.\n",
"- He previously founded untapt, an AI startup that was acquired in 2021.\n",
"\n",
"## Features\n",
"- **Connect Four**: A unique arena designed for LLMs to engage in simulated diplomacy and strategy.\n",
"- **Outsmart**: Another interactive feature related to LLMs.\n",
"\n",
"## News and Announcements\n",
"- **January 23, 2025**: Announcement of a workshop titled \"LLM Workshop – Hands-on with Agents\" providing resources for participants.\n",
"- **December 21, 2024**: Welcoming message for \"SuperDataScientists.\"\n",
"- **November 13, 2024**: Resources available for the topic \"Mastering AI and LLM Engineering.\"\n",
"- **October 16, 2024**: Resources provided for transitioning from a software engineer to an AI data scientist.\n",
"\n",
"The website emphasizes Ed's passion for technology and AI, highlighting his professional achievements and ongoing projects."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"#!/usr/bin/env python\n",
"# coding: utf-8\n",
"\n",
"# # YOUR FIRST LAB\n",
"# ## Your first Frontier LLM Project\n",
"# \n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# # Connecting to OpenAI (or Ollama)\n",
"# Load environment variables in a file called .env\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n",
"\n",
"# Open API connection\n",
"openai = OpenAI()\n",
"\n",
"# To give you a preview -- calling OpenAI with these messages is this easy: openai.chat.completions.create(model, messages)\n",
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n",
"response = openai.chat.completions.create(\n",
" model=\"gpt-4o-mini\", \n",
" messages=[\n",
" {\"role\":\"user\", \"content\":message}\n",
" ]\n",
")\n",
"print(response.choices[0].message.content)\n",
"\n",
"# A class to represent a Webpage\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
"\n",
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\"\n",
"\n",
"user_prompt_content = \"\\nThe contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
"\n",
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += user_prompt_content\n",
" user_prompt += website.text\n",
" return user_prompt\n",
"\n",
"# See how this function creates exactly the format above\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]\n",
"\n",
"# Try this out, and then try for a few more websites\n",
"# messages_for(ed)\n",
"\n",
"# And now: call the OpenAI API. You will get very familiar with this!\n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages_for(website)\n",
" )\n",
" return response.choices[0].message.content\n",
"\n",
"# A function to display this nicely in the Jupyter output, using markdown\n",
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))\n",
"\n",
"website_url = \"https://edwarddonner.com\"\n",
"\n",
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"ed = Website(website_url)\n",
"display_summary(website_url)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2714e98-8f45-443e-99f8-918e2d61ae36",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

739
week1/community-contributions/website-summary/doc/LLM_INTEGRATION_GUIDE.md

@@ -0,0 +1,739 @@
# LLM API Integration Guide
This guide explains how to use both OpenAI and Llama (via Ollama) APIs in Python applications, specifically for the Website Summary Tool.
## 1. Overview of Available LLM Clients
The application supports multiple LLM providers through a unified interface:
```python
from llm.llm_factory import LLMFactory
# Create an OpenAI client
openai_client = LLMFactory.create_client("openai")
# Create a Llama client (via Ollama)
llama_client = LLMFactory.create_client("llama")
```
Each client implements the same interface, making it easy to switch between providers.
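For orientation, here is a minimal sketch of what such a factory might look like. This is an assumption based on the repository layout (`llm/llm_factory.py`, `llm/open_api/openai_client.py`, `llm/llama/llama_client.py`); the real factory may register providers differently, and the `_registry` dict and error message are illustrative:
```python
from llm.open_api.openai_client import OpenAIClient
from llm.llama.llama_client import LlamaClient

class LLMFactory:
    # Hypothetical registry mapping provider names to client classes
    _registry = {"openai": OpenAIClient, "llama": LlamaClient}

    @staticmethod
    def create_client(provider):
        client_cls = LLMFactory._registry.get(provider.lower())
        if client_cls is None:
            raise ValueError(f"Unsupported LLM provider: {provider}")
        # Each client exposes initialize() to load its configuration (see below)
        return client_cls().initialize()
```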
## 2. OpenAI Integration
### 2.1 Loading the OpenAI API Key
The first step in using the OpenAI API is to load your API key:
```python
import os
from dotenv import load_dotenv
def load_api_key():
"""Load environment variables from .env file and return the API key."""
load_dotenv(override=True)
return os.getenv('OPENAI_API_KEY')
```
This function:
- Uses `dotenv` to load environment variables from a `.env` file
- Returns the API key from the environment variables
- The `override=True` parameter ensures that environment variables in the `.env` file take precedence
### 2.2 Initializing the OpenAI Client
Initialize the OpenAI client to make API calls:
```python
from openai import OpenAI
def initialize_openai_client():
"""Initialize the OpenAI client."""
load_dotenv(override=True) # Load environment variables including OPENAI_API_KEY
return OpenAI() # The client automatically uses OPENAI_API_KEY from environment
```
**Important Note**: The newer versions of the OpenAI Python library automatically load the API key from the environment variable `OPENAI_API_KEY`. You don't need to explicitly pass the API key when creating the client or making requests. When you call `load_dotenv(override=True)`, it loads the API key into the environment, and the OpenAI client uses it automatically.
If you want to explicitly set the API key instead of relying on environment variables, you can do:
```python
from openai import OpenAI
client = OpenAI(api_key="your-api-key-here")
```
### 2.3 Formatting Messages for OpenAI
The `OpenAIClient` implements the `format_messages` method from the `BaseLLMClient` abstract class:
```python
def format_messages(self, messages):
"""
Format messages for OpenAI API.
Args:
messages: List of message dictionaries with role and content
Returns:
list: The messages formatted for OpenAI
"""
# OpenAI already uses the format we're using, so we can return as-is
return messages
```
Since our internal message format already matches what OpenAI expects, this implementation simply returns the messages unchanged.
### 2.4 Making OpenAI API Requests
Make a request to the OpenAI API:
```python
def generate_content(self, messages, model=None, **kwargs):
"""Generate content from OpenAI."""
# Format messages appropriately for OpenAI
formatted_messages = self.format_messages(messages)
response = self.client.chat.completions.create(
model=model,
messages=formatted_messages,
**kwargs
)
return response.choices[0].message.content
```
The API key is automatically used from the environment variables - you don't need to pass it in each request.
## 3. Llama Integration (via Ollama)
### 3.1 Loading Llama Configuration
Configure the connection to a local Ollama server:
```python
def _load_config(self):
"""Load Llama configuration from .env file."""
load_dotenv(override=True)
self.api_base = os.getenv('LLAMA_API_URL', 'http://localhost:11434')
```
The default URL for Ollama is `http://localhost:11434`, but you can customize it in your `.env` file.
### 3.2 Initializing the Llama Client
Initialize the Llama client to connect to Ollama:
```python
def initialize(self):
"""Initialize the Llama client by loading config."""
self._load_config()
return self
```
### 3.3 Formatting Messages for Llama
The `LlamaClient` implements the `format_messages` method to convert the standard message format to what Ollama expects:
```python
def format_messages(self, messages):
"""
Format messages for Ollama API.
Args:
messages: List of message dictionaries with role and content
Returns:
str: The messages formatted as a prompt string for Ollama
"""
return self._convert_messages_to_prompt(messages)
```
The actual conversion is done by the `_convert_messages_to_prompt` method:
```python
def _convert_messages_to_prompt(self, messages):
"""Convert standard messages to Ollama prompt format."""
prompt = ""
for msg in messages:
role = msg.get("role", "").lower()
content = msg.get("content", "")
if role == "system":
prompt += f"<system>\n{content}\n</system>\n\n"
elif role == "user":
prompt += f"User: {content}\n\n"
elif role == "assistant":
prompt += f"Assistant: {content}\n\n"
else:
prompt += f"{content}\n\n"
# Add final prompt for assistant response
prompt += "Assistant: "
return prompt
```
### 3.4 Making Llama API Requests
Make a request to the Llama API via Ollama:
```python
def generate_content(self, messages, model=None, **kwargs):
    """Generate content from Llama."""
    # Convert messages to Ollama format
    prompt = self.format_messages(messages)
    payload = {
        "model": model or self.default_model,
        "prompt": prompt,
        "stream": False
    }
    try:
        response = requests.post(
            f"{self.api_base}/api/generate",
            headers={"Content-Type": "application/json"},
            json=payload,
            timeout=60
        )
        if response.status_code == 200:
            return response.json().get("response", "")
        return f"Error: {response.status_code}, {response.text}"
    except requests.RequestException as e:
        return f"Error: could not reach Ollama server ({e})"
```
## 4. Creating Message Structure
To interact with either API, you need to structure your messages in a specific format:
```python
def create_user_prompt(self, website):
return self.user_prompt_template.format(title=website.title, text=website.text)
def create_messages(self, website):
return [
{"role": "system", "content": self.system_prompt},
{"role": "user", "content": self.create_user_prompt(website)}
]
```
System and user prompt examples:
```python
DEFAULT_SYSTEM_PROMPT = ("You are an assistant that analyzes the contents of a website "
"and provides a short summary, ignoring text that might be navigation related. "
"Respond in markdown.")
DEFAULT_USER_PROMPT_TEMPLATE = """
You are looking at a website titled {title}
The contents of this website is as follows;
please provide a short summary of this website in markdown.
If it includes news or announcements, then summarize these too.
{text}
"""
```
This format includes:
- A system message that sets the behavior of the AI assistant
- A user message containing the actual content to process
- The website object is used to insert relevant content into the user prompt template; a worked example follows
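As a concrete illustration, a sketch using the default prompts defined above, with stand-in title and text values (the values themselves are hypothetical):
```python
title = "Example Domain"
text = "This domain is for use in illustrative examples in documents."

messages = [
    {"role": "system", "content": DEFAULT_SYSTEM_PROMPT},
    {"role": "user", "content": DEFAULT_USER_PROMPT_TEMPLATE.format(title=title, text=text)},
]
```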
## 5. Complete Integration Flow
Here's the complete flow for integrating with either LLM API:
```python
# Create the appropriate client
client = LLMFactory.create_client("openai") # or "llama"
# Validate credentials
is_valid, message = client.validate_credentials()
if not is_valid:
print(message)
exit(1)
# Optional: Test connectivity
test_response = client.test_connection("Hello, this is a test message.")
print("Test API response:", test_response)
# Create a prompt manager (or use default)
prompt_manager = PromptManager() # Customize if needed
# Prepare website content
website = fetch_website_content(url)
# Generate summary with the LLM API
summary = client.generate_content(
prompt_manager.create_messages(website),
model=None, # Uses default model for the client
temperature=0.7 # Optional parameter
)
```
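The flow above assumes a `fetch_website_content` helper. A minimal sketch, mirroring the `requests` + `BeautifulSoup` approach used in the course notebooks (the real `web_scraper.py` is more elaborate, so treat this as illustrative):
```python
import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "Mozilla/5.0"}  # some sites reject requests without a browser-like agent

class Website:
    def __init__(self, url):
        self.url = url
        response = requests.get(url, headers=HEADERS)
        soup = BeautifulSoup(response.content, "html.parser")
        self.title = soup.title.string if soup.title else "No title found"
        for irrelevant in soup.body(["script", "style", "img", "input"]):
            irrelevant.decompose()
        self.text = soup.body.get_text(separator="\n", strip=True)

def fetch_website_content(url):
    """Fetch and clean a page so it can be fed into the prompt templates."""
    return Website(url)
```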
## A Key Note On The Abstract Interface
The system now uses an abstract base class (`BaseLLMClient`) that defines the common interface for all LLM clients. Each provider-specific client implements this interface, including the format_messages method that handles converting the standard message format to the provider's expected format.
This approach eliminates the need to first create messages in OpenAI format and then translate them. Instead, each client knows how to format messages appropriately for its specific provider.
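For reference, a minimal sketch of the shape of `BaseLLMClient`. The method list is inferred from the calls used in this guide (`initialize`, `format_messages`, `generate_content`, `validate_credentials`, `test_connection`) and may not match `base_client.py` exactly:
```python
from abc import ABC, abstractmethod

class BaseLLMClient(ABC):

    @abstractmethod
    def initialize(self):
        """Load configuration (API keys, base URLs) and return self."""

    @abstractmethod
    def format_messages(self, messages):
        """Convert the standard message list into the provider's native format."""

    @abstractmethod
    def generate_content(self, messages, model=None, **kwargs):
        """Send the formatted messages to the provider and return the generated text."""

    def validate_credentials(self):
        """Return (is_valid, message); providers override this as needed."""
        return True, "No credentials required"

    def test_connection(self, test_message):
        """Round-trip a trivial message to confirm the provider is reachable."""
        return self.generate_content([{"role": "user", "content": test_message}])
```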
## 6. Additional API Parameters
### OpenAI Parameters
```python
response = client.generate_content(
messages=messages,
model="gpt-4o-mini",
temperature=0.7, # Controls randomness (0-1)
max_tokens=1500, # Maximum length of response
frequency_penalty=0.0, # Reduces repetition of token sequences
presence_penalty=0.0, # Reduces talking about the same topics
stop=None # Sequences where the API will stop generating
)
```
### Llama/Ollama Parameters
```python
response = client.generate_content(
messages=messages,
model="llama3.2:latest",
temperature=0.7, # Controls randomness (0-1)
# Other parameters supported by Ollama
)
```
## 7. Example Usage
Here's an example using both providers:
```python
# Example with OpenAI
openai_client = LLMFactory.create_client("openai")
is_valid, message = openai_client.validate_credentials()
if is_valid:
print("OpenAI credentials validated successfully")
url_to_summarize = "https://example.com"
print(f"Fetching and summarizing content from {url_to_summarize}")
summary = summarize_url(openai_client, url_to_summarize)
print("Summary from OpenAI:", summary)
# Example with Llama
llama_client = LLMFactory.create_client("llama")
is_valid, message = llama_client.validate_credentials()
if is_valid:
print("Llama credentials validated successfully")
url_to_summarize = "https://example.com"
print(f"Fetching and summarizing content from {url_to_summarize}")
summary = summarize_url(llama_client, url_to_summarize)
print("Summary from Llama:", summary)
```
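The examples above assume a `summarize_url` helper that ties the pieces together. A sketch under the same assumptions as Section 5 (the `PromptManager` import path is assumed from the repository layout):
```python
from llm.helper.prompt_utils import PromptManager  # assumed import path

def summarize_url(client, url):
    """Fetch a page and ask the given LLM client to summarize it."""
    website = fetch_website_content(url)
    prompt_manager = PromptManager()
    return client.generate_content(prompt_manager.create_messages(website))
```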
## 8. Environment Setup
Create a `.env` file in your project root with:
```
# OpenAI Configuration
OPENAI_API_KEY=sk-your-openai-api-key
# Llama Configuration (optional, defaults to http://localhost:11434)
LLAMA_API_URL=http://localhost:11434
```
Make sure to install Ollama locally if you want to use Llama models: [Ollama Installation Guide](https://github.com/ollama/ollama)
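Before running the Llama client, it can help to confirm the Ollama server is actually reachable. A small sketch using Ollama's `/api/tags` endpoint, which lists locally installed models (this mirrors what a helper like `check_ollama_models.py` might do):
```python
import requests

def check_ollama(api_base="http://localhost:11434"):
    """Return the list of locally installed Ollama models, or None if unreachable."""
    try:
        response = requests.get(f"{api_base}/api/tags", timeout=5)
        response.raise_for_status()
        return [m["name"] for m in response.json().get("models", [])]
    except requests.RequestException:
        return None

models = check_ollama()
print("Ollama models:", models if models is not None else "server not reachable")
```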
# Annex: OpenAI vs Llama Side-by-Side Comparison
This annex provides a clear comparison between OpenAI and Llama (via Ollama) implementations for each critical step in the integration process.
## 1. Import Statements
**OpenAI:**
```python
import os
from dotenv import load_dotenv
from openai import OpenAI
```
**Llama (Ollama):**
```python
import os
import requests
from dotenv import load_dotenv
```
## 2. Client Initialization
**OpenAI:**
```python
# Load environment variables
load_dotenv(override=True)
# Initialize client (automatically uses OPENAI_API_KEY from environment)
client = OpenAI()
# Alternative with explicit API key
api_key = os.getenv('OPENAI_API_KEY')
client = OpenAI(api_key=api_key)
```
**Llama (Ollama):**
```python
# Load environment variables
load_dotenv(override=True)
# Get base URL (defaults to localhost if not specified)
api_base = os.getenv('LLAMA_API_URL', 'http://localhost:11434')
# No client object is created - direct API calls are made via requests
```
## 3. Message Formatting
**OpenAI:**
```python
# The BaseLLMClient abstract method implemented for OpenAI
def format_messages(self, messages):
"""
Format messages for OpenAI API.
Args:
messages: List of message dictionaries with role and content
Returns:
list: The messages formatted for OpenAI
"""
# OpenAI already uses the format we're using, so we can return as-is
return messages
```
Examples:
```python
# OpenAI uses a structured format with role-based messages
messages = [
{"role": "system", "content": "You are an assistant that analyzes websites."},
{"role": "user", "content": f"Summarize this website: {website_content}"}
]
# Simple concrete example:
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user","content": "What is machine learning?"},
{"role": "assistant", "content": "Machine learning is a branch of artificial intelligence that focuses on building systems that learn from data."},
{"role": "user", "content": "Can you give me a simple example?"}
]
```
**Llama (Ollama):**
```python
# The BaseLLMClient abstract method implemented for Llama
def format_messages(self, messages):
"""
Format messages for Ollama API.
Args:
messages: List of message dictionaries with role and content
Returns:
str: The messages formatted as a prompt string for Ollama
"""
return self._convert_messages_to_prompt(messages)
def _convert_messages_to_prompt(self, messages):
"""Convert standard messages to Ollama prompt format."""
prompt = ""
for msg in messages:
role = msg.get("role", "").lower()
content = msg.get("content", "")
if role == "system":
prompt += f"<system>\n{content}\n</system>\n\n"
elif role == "user":
prompt += f"User: {content}\n\n"
elif role == "assistant":
prompt += f"Assistant: {content}\n\n"
else:
prompt += f"{content}\n\n"
# Add final prompt for assistant response
prompt += "Assistant: "
return prompt
```
Examples:
```python
# Start with the same OpenAI-style messages as in the OpenAI example above
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is machine learning?"},
    {"role": "assistant", "content": "Machine learning is a branch of artificial intelligence that focuses on building systems that learn from data."},
    {"role": "user", "content": "Can you give me a simple example?"}
]

# Convert OpenAI-style messages to Ollama format
prompt = convert_messages_to_prompt(messages)

# After conversion, the prompt looks like this:
converted_prompt = """<system>
You are a helpful assistant.
</system>

User: What is machine learning?

Assistant: Machine learning is a branch of artificial intelligence that focuses on building systems that learn from data.

User: Can you give me a simple example?

Assistant: """
```
## 4. Making Requests
**OpenAI:**
```python
def generate_content(self, messages, model=None, **kwargs):
"""Generate content from OpenAI."""
# Format messages appropriately for OpenAI
formatted_messages = self.format_messages(messages)
response = self.client.chat.completions.create(
model=model or self.default_model,
messages=formatted_messages,
**kwargs
)
return response.choices[0].message.content
```
**Llama (Ollama):**
```python
def generate_content(self, messages, model=None, **kwargs):
    """Generate content from Llama."""
    # Format messages appropriately for Llama/Ollama
    prompt = self.format_messages(messages)
    payload = {
        "model": model or self.default_model,
        "prompt": prompt,
        "stream": False,
        **kwargs
    }
    response = requests.post(
        f"{self.api_base}/api/generate",
        headers={"Content-Type": "application/json"},
        json=payload,
        timeout=60
    )
    if response.status_code == 200:
        return response.json().get("response", "")
    else:
        return f"Error: {response.status_code}, {response.text}"
```
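The sketch above targets Ollama's `/api/generate` endpoint, which takes a single prompt string. Ollama also exposes an `/api/chat` endpoint that accepts role-based messages directly (this is what the project's `llama_client.py` does via the `ollama` Python package), so the prompt conversion can be skipped entirely. A minimal sketch, assuming a local Ollama server:
```python
import requests

# /api/chat accepts OpenAI-style role/content messages as-is
payload = {
    "model": "llama3",
    "messages": [
        {"role": "system", "content": "You are an assistant that analyzes websites."},
        {"role": "user", "content": "Summarize https://example.com in one paragraph."},
    ],
    "stream": False,
}
response = requests.post("http://localhost:11434/api/chat", json=payload, timeout=60)
if response.status_code == 200:
    # The reply lives under message.content rather than response
    content = response.json().get("message", {}).get("content", "")
```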
## 5. Processing Response
**OpenAI:**
```python
# Response structure
"""
{
    "id": "chatcmpl-123abc",
    "object": "chat.completion",
    "created": 1677858242,
    "model": "gpt-4o-mini",
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "This website is about..."
            },
            "index": 0,
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 13,
        "completion_tokens": 7,
        "total_tokens": 20
    }
}
"""
# Extracting content
content = response.choices[0].message.content
```
**Llama (Ollama):**
```python
# Response structure (if response.status_code == 200)
"""
{
    "model": "llama3",
    "response": "This website is about...",
    "done": true
}
"""
# Extracting content
if response.status_code == 200:
    content = response.json().get("response", "")
else:
    content = f"Error: {response.status_code}, {response.text}"
```
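Before the full side-by-side example, it is worth making the error-handling asymmetry concrete: the OpenAI SDK raises typed exceptions, while the raw Ollama HTTP call only signals failure through the status code. A minimal sketch, using exception classes from the openai>=1.0 SDK:
```python
from openai import OpenAI, APIConnectionError, APIError

client = OpenAI()
try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    content = response.choices[0].message.content
except APIConnectionError as e:
    content = f"Network error: {e}"  # server unreachable, DNS issues, timeouts
except APIError as e:
    content = f"API error: {e}"      # non-2xx responses surfaced by the SDK
```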
## 6. Complete Side-by-Side Example With New Architecture
**OpenAI:**
```python
import os
from dotenv import load_dotenv
from openai import OpenAI
from llm.base_client import BaseLLMClient

class OpenAIClient(BaseLLMClient):
    def __init__(self):
        self.client = None
        self.default_model = "gpt-4o-mini"

    def initialize(self):
        load_dotenv(override=True)
        self.client = OpenAI()  # Uses OPENAI_API_KEY from environment
        return self

    def format_messages(self, messages):
        # OpenAI already uses our format, so return as-is
        return messages

    def generate_content(self, messages, model=None, **kwargs):
        formatted_messages = self.format_messages(messages)
        response = self.client.chat.completions.create(
            model=model or self.default_model,
            messages=formatted_messages,
            **kwargs
        )
        return response.choices[0].message.content
```
**Llama (Ollama):**
```python
import os
import requests
from dotenv import load_dotenv
from llm.base_client import BaseLLMClient

class LlamaClient(BaseLLMClient):
    def __init__(self):
        self.api_base = None
        self.default_model = "llama3"

    def initialize(self):
        load_dotenv(override=True)
        self.api_base = os.getenv('LLAMA_API_URL', 'http://localhost:11434')
        return self

    def format_messages(self, messages):
        # Convert standard message format to Ollama prompt
        return self._convert_messages_to_prompt(messages)

    def _convert_messages_to_prompt(self, messages):
        prompt = ""
        for msg in messages:
            role = msg.get("role", "").lower()
            content = msg.get("content", "")
            if role == "system":
                prompt += f"<system>\n{content}\n</system>\n\n"
            elif role == "user":
                prompt += f"User: {content}\n\n"
            elif role == "assistant":
                prompt += f"Assistant: {content}\n\n"
            else:
                prompt += f"{content}\n\n"
        prompt += "Assistant: "
        return prompt

    def generate_content(self, messages, model=None, **kwargs):
        prompt = self.format_messages(messages)
        payload = {
            "model": model or self.default_model,
            "prompt": prompt,
            "stream": False
        }
        response = requests.post(
            f"{self.api_base}/api/generate",
            headers={"Content-Type": "application/json"},
            json=payload,
            timeout=60
        )
        if response.status_code == 200:
            return response.json().get("response", "")
        else:
            return f"Error: {response.status_code}, {response.text}"
```
## 7. Key Differences Summary
| Aspect | OpenAI | Llama (Ollama) |
|--------|--------|----------------|
| **Authentication** | API key in environment or explicitly passed | No authentication, just URL to local server |
| **Client Library** | Official Python SDK | Standard HTTP requests |
| **Message Format Implementation** | Returns standard messages as-is | Converts to text-based prompt format |
| **Format Method Return Type** | List of dictionaries | String |
| **Request Format** | Client method calls | Direct HTTP POST requests |
| **Response Format** | Structured object with choices | Simple JSON with response field |
| **Streaming** | Supported via stream parameter | Supported via stream parameter |
| **Error Handling** | SDK throws exceptions | Need to check HTTP status codes |
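The streaming row deserves a concrete illustration. A minimal sketch of both styles, assuming an initialized OpenAI client and a local Ollama server (model names are placeholders):
```python
import json
import requests
from openai import OpenAI

client = OpenAI()

# OpenAI: stream=True yields chunks carrying incremental deltas
stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this website: ..."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

# Ollama: "stream": True returns one JSON object per line
with requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Summarize this website: ...", "stream": True},
    stream=True,
) as response:
    for line in response.iter_lines():
        if line:
            print(json.loads(line).get("response", ""), end="")
```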
## 8. The Abstract Base Class
```python
from abc import ABC, abstractmethod

class BaseLLMClient(ABC):
    """Abstract base class for LLM clients."""

    @abstractmethod
    def initialize(self):
        """Initialize the LLM client."""
        pass

    @abstractmethod
    def validate_credentials(self):
        """
        Validate API credentials.

        Returns:
            tuple: (is_valid, message)
        """
        pass

    @abstractmethod
    def format_messages(self, messages):
        """
        Format messages according to the provider's requirements.

        Args:
            messages: List of message dictionaries with role and content

        Returns:
            The properly formatted messages for this specific provider
        """
        pass

    @abstractmethod
    def generate_content(self, messages, model=None, **kwargs):
        """
        Generate content from the LLM.

        Args:
            messages: The messages to send
            model: The model to use for generation
            **kwargs: Additional provider-specific parameters

        Returns:
            str: The generated content
        """
        pass
```
This abstract base class ensures that all LLM clients implement the same interface, making it easy to switch between providers. The `format_messages` method is a key part of this architecture, as it allows each client to format messages appropriately for its specific provider.
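To close, a short sketch of the payoff, assuming the `LLMFactory` defined elsewhere in this project: the same calling code works unchanged for either provider.
```python
from llm.llm_factory import LLMFactory

messages = [
    {"role": "system", "content": "You are an assistant that analyzes websites."},
    {"role": "user", "content": "Summarize https://example.com in one paragraph."},
]

# Each factory call returns an initialized BaseLLMClient; the loop body
# never needs to know which provider it is talking to.
for provider in ("openai", "llama"):
    client = LLMFactory.create_client(provider)
    print(f"--- {provider} ---")
    print(client.generate_content(messages))
```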

307
week1/community-contributions/website-summary/doc/README.md

@ -0,0 +1,307 @@
# Website Summary Tool
A Python tool that generates concise summaries of websites using different Large Language Models (LLMs). This tool supports both OpenAI's API and local Llama models via Ollama.
## 1. How to Use - Run the Example Code
### Start the environment
#### Start Anaconda
Go to the project folder, e.g.:
```Bash
cd /Users/andresmendoza/data/mydev/_apps/ai/llm_engineer_course/llm_engineering
```
Start the environment and launch the Jupyter webapp.
```Bash
conda activate llms
jupyter lab
```
#### Start Local Llama
Open a new terminal and, from any directory, run:
```Bash
ollama run llama3.2
```
Note: to shut it down, type `/bye` in the interactive console, or just press `Ctrl+C`.
For more details see: [Data Science environment - Setup](https://docs.google.com/document/d/1z2Go6Eo29knpe1e35MULCuk8EISwLFNopxlzUCDcEM8/edit?usp=sharing)
### Run the sample file
With the environment up and running (Llama running locally and Jupyter Lab open in the browser), go to where the sample notebook is located:
`/Users/andresmendoza/data/mydev/_apps/ai/llm_engineer_course/llm_engineering/week1/community-contributions/website-summary/src/example_usage.ipynb`
You can launch it from:
- Jupyter Lab: select the notebook file and run cells with Shift+Enter
- Terminal: `jupyter nbconvert --to notebook --execute example_usage.ipynb`
## 2. How to Use - Code Your Own Main File
### Basic Usage
```python
from llm.llm_factory import LLMFactory
from main_summarize import summarize_url
# Create an OpenAI client
openai_client = LLMFactory.create_client("openai")
# Validate credentials
is_valid, message = openai_client.validate_credentials()
if is_valid:
    # Summarize a website
    url = "https://example.com"
    summary = summarize_url(openai_client, url)
    print(summary)
```
### Choosing an LLM Provider
You can easily switch between OpenAI and Llama:
```python
# Use OpenAI
client = LLMFactory.create_client("openai")
# Or use Llama (via Ollama)
client = LLMFactory.create_client("llama")
```
### Customizing Prompts
You can customize how the tool interacts with the LLM by modifying the system and user prompts:
```python
from llm.helper.prompt_utils import PromptManager
# Create a custom prompt manager
custom_system_prompt = "You are a tech documentation specialist. Analyze this website and provide a technical summary."
custom_user_prompt = """
You are reviewing a tech website titled {title}.
Analyze the content below and provide:
1. A brief technical summary (2-3 sentences)
2. Key technical features (max 3 bullet points)
3. Target audience
Content:
{text}
"""
# Initialize custom prompt manager
prompt_manager = PromptManager(custom_system_prompt, custom_user_prompt)
# Use custom prompts for summarization
summary = summarize_url(client, url, prompt_manager=prompt_manager)
```
#### PromptManager Parameters
The `PromptManager` class accepts the following parameters:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `system_prompt` | str | DEFAULT_SYSTEM_PROMPT | The system prompt that sets the behavior of the AI assistant |
| `user_prompt_template` | str | DEFAULT_USER_PROMPT_TEMPLATE | The template for user messages that will be populated with website content |
The system prompt is sent as a system message to the LLM, while the user prompt template is formatted with the website's title and text before being sent as a user message.
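To make that concrete, here is a sketch of what `create_messages` produces, based on the `PromptManager` implementation in `src/llm/helper/prompt_utils.py`:
```python
from llm.helper.prompt_utils import PromptManager
from structures.models import Website

site = Website("https://example.com", "Example Domain", "Example body text...")
pm = PromptManager()

messages = pm.create_messages(site)
# Equivalent to:
# [
#     {"role": "system", "content": pm.system_prompt},
#     {"role": "user", "content": pm.user_prompt_template.format(title=site.title, text=site.text)},
# ]
```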
### Advanced Options
You can pass additional parameters to the LLM when generating content:
```python
from main_summarize import summarize_url_with_options
# For OpenAI
summary = summarize_url_with_options(
    openai_client,
    url,
    model="gpt-4o",      # Use a specific model
    temperature=0.3,     # Lower temperature for more deterministic outputs
    max_tokens=1000      # Limit response length
)

# For Llama
summary = summarize_url_with_options(
    llama_client,
    url,
    model="llama3.2:latest",
    temperature=0.5
)
```
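Note that `summarize_url_with_options` is not part of `main_summarize.py` as shipped (that module only exports `summarize_url`); a minimal wrapper you could add yourself might look like this, relying on the fact that `BaseLLMClient.generate_content` already forwards `**kwargs`:
```python
from helper.web_scraper import fetch_website_content
from llm.helper.prompt_utils import PromptManager

def summarize_url_with_options(client, url, use_selenium=None, model=None,
                               prompt_manager=None, **kwargs):
    """Hypothetical helper: summarize_url plus provider-specific options."""
    website = fetch_website_content(url, use_selenium)
    prompt_manager = prompt_manager or PromptManager()
    messages = prompt_manager.create_messages(website)
    return client.generate_content(messages, model=model, **kwargs)
```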
## 3. Setup Requirements
### Prerequisites
- Python 3.10 or higher
- Anaconda or Miniconda (recommended for environment management)
- An OpenAI API key (if using OpenAI)
- Ollama installed locally (if using Llama)
### Installation
1. Clone the repository:
```bash
git clone https://github.com/yourusername/website-summary-tool.git
cd website-summary-tool
```
2. Create and activate a Conda environment:
```bash
conda create -n website-summary python=3.11
conda activate website-summary
```
3. Install required packages:
```bash
pip install -r requirements.txt
```
4. Create a `.env` file in the project root:
```
# OpenAI Configuration (required for OpenAI)
OPENAI_API_KEY=sk-your-openai-api-key
# Llama Configuration (optional, defaults to http://localhost:11434)
LLAMA_API_URL=http://localhost:11434
```
### Setting Up Ollama (for Llama models)
1. Install Ollama from [ollama.ai](https://ollama.ai/)
2. Pull the Llama model:
```bash
ollama pull llama3.2:latest
```
3. Start the Ollama server:
```bash
ollama serve
```
You can also start Ollama programmatically from your Python code:
```python
import subprocess
import time
def start_ollama():
"""Start the Ollama server as a subprocess."""
try:
print("Starting Ollama server...")
# Start Ollama as a background process
process = subprocess.Popen(
["ollama", "serve"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True
)
# Wait a moment for the server to start
time.sleep(5)
# Check if the process is running
if process.poll() is None:
print("Ollama server started successfully!")
return process
else:
print("Failed to start Ollama server.")
return None
except Exception as e:
print(f"Error starting Ollama server: {str(e)}")
return None
# Usage
ollama_process = start_ollama()
# When you're done with the program
if ollama_process:
ollama_process.terminate()
print("Ollama server stopped.")
```
### Starting Jupyter Lab
To run the example notebook:
```bash
conda activate website-summary
cd src
jupyter lab
```
Then open `example_usage.ipynb` to experiment with the tool.
## 4. Code Structure Overview
```
website-summary/
├── src/
│ ├── example_usage.ipynb # Example notebook demonstrating usage
│ ├── main_summarize.py # Main functions for website summarization
│ ├── config/ # Configuration constants
│ │ ├── __init__.py
│ │ └── constants.py
│ ├── helper/ # Helper utilities
│ │ ├── __init__.py
│ │ ├── prompt_utils.py # Utility for managing LLM prompts
│ │ └── web_scraper.py # Web scraping functionality
│ ├── llm/ # LLM integration code
│ │ ├── __init__.py
│ │ ├── base_client.py # Abstract base class for LLM clients
│ │ ├── llm_factory.py # Factory for creating LLM clients
│ │ ├── llama/ # Llama-specific code
│ │ │ ├── llama_client.py # Llama client implementation
│ │ │ └── helper/
│ │ ├── open_api/ # OpenAI-specific code
│ │ │ └── openai_client.py # OpenAI client implementation
│ │ └── helper/
│ │ └── prompt_utils.py # Prompt utilities
│ └── structures/ # Data structures
│ ├── __init__.py
│ └── models.py # Data models including Website class
```
### Key Components
- **LLMFactory**: Creates the appropriate LLM client based on the provider name.
- **BaseLLMClient**: Abstract base class that defines the interface for all LLM clients.
- **OpenAIClient**: Implementation of the client for the OpenAI API.
- **LlamaClient**: Implementation of the client for Llama models via Ollama.
- **PromptManager**: Manages the system and user prompts for LLM interactions.
- **WebScraper**: Extracts content from websites.
- **Website**: Data model that holds the title and text content of a website.
## Workflow Diagram
```
+--------------+      +-------------+      +----------------+
|  URL Input   |----->| Web Scraper |----->| Website Object |
+--------------+      +-------------+      +----------------+
                                                   |
                                                   v
+--------------+      +-------------+      +----------------+
| LLM Response |<-----| LLM Client  |<-----| Prompt Manager |
+--------------+      +-------------+      +----------------+
## Example Implementation
The example notebook demonstrates how to:
1. Create LLM clients for both OpenAI and Llama
2. Validate credentials for each provider
3. Fetch and summarize website content
4. Use custom prompts for specialized summaries
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the LICENSE file for details.

48
week1/community-contributions/website-summary/setup_environment.py

@ -0,0 +1,48 @@
def ensure_dependencies():
"""Check for required packages and install them if not found."""
import importlib
import subprocess
import sys
# Define required packages
required_packages = {
'requests': 'requests',
'beautifulsoup4': 'bs4',
'selenium': 'selenium',
'openai': 'openai',
'webdriver-manager': 'webdriver_manager',
# Add any other required packages here
}
missing_packages = []
# Check which packages are missing
for package_name, import_name in required_packages.items():
try:
importlib.import_module(import_name)
print(f"{package_name} is already installed")
except ImportError:
missing_packages.append(package_name)
print(f"{package_name} needs to be installed")
# Install missing packages
if missing_packages:
print("\nInstalling missing packages...")
for package in missing_packages:
print(f"Installing {package}...")
try:
subprocess.check_call([sys.executable, "-m", "pip", "install", package])
print(f"Successfully installed {package}")
except subprocess.CalledProcessError:
print(f"Failed to install {package}")
# Verify all packages are now installed
all_installed = True
for package_name, import_name in required_packages.items():
try:
importlib.import_module(import_name)
except ImportError:
all_installed = False
print(f" {package_name} installation failed")
return all_installed

29
week1/community-contributions/website-summary/src/config/constants.py

@ -0,0 +1,29 @@
#!/usr/bin/env python
# coding: utf-8
"""
Constants for the Website Summary Tool
"""
# OpenAI Configuration
DEFAULT_MODEL = "gpt-4o-mini"
DEFAULT_TEST_MESSAGE = "Hello, GPT! This is my first ever message to you! Hi!"
# Web Scraping Configuration
HTTP_HEADERS = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36"
}
# # Default prompts
# DEFAULT_TEST_SYSTEM_PROMPT = ("You are an assistant that analyzes the contents of a website "
# "and provides a short summary, ignoring text that might be navigation related. "
# "Respond in markdown.")
# DEFAULT_TEST_USER_PROMPT_TEMPLATE = """
# You are looking at a website titled {title}
# The contents of this website is as follows;
# please provide a short summary of this website in markdown.
# If it includes news or announcements, then summarize these too.
# {text}
# """

455
week1/community-contributions/website-summary/src/example_usage.ipynb

@ -0,0 +1,455 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"## OPEN API AND LLAMA\n",
"### This project makes calls to 2 LLMs, Open API and Llama to generate, website summaries."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# First cell: Set up environment and imports\n",
"import sys\n",
"import os\n",
"from pathlib import Path\n",
"\n",
"\n",
"# Add the src directory to sys.path instead of project_root\n",
"current_path = Path(os.path.abspath(''))\n",
"src_path = current_path if current_path.name == 'src' else current_path / 'src'\n",
"sys.path.insert(0, str(src_path))\n",
"\n",
"# Import without the src prefix\n",
"from llm.llm_factory import LLMFactory\n",
"from main_summarize import summarize_url\n",
"from helper.display_utils import display_summary_markdown\n",
"from llm.helper.prompt_utils import PromptManager\n",
"from IPython.display import Markdown, display\n",
"\n",
"# Helper function to display section headers\n",
"def display_section(title):\n",
" display(Markdown(f\"## {title}\"))\n",
" print(\"-\" * 80)\n",
"\n",
"# Helper function to display results\n",
"def display_result(provider, url, summary):\n",
" print(f\"\\n🌐 URL: {url}\")\n",
" print(f\"🤖 Provider: {provider}\\n\")\n",
" display_summary_markdown(summary)\n",
" print(\"-\" * 80)\n",
"\n",
"# Make sure we have all required dependencies\n",
"import importlib\n",
"import subprocess\n",
"import sys\n",
"\n",
"def ensure_dependencies():\n",
" required_packages = {\n",
" 'requests': 'requests',\n",
" 'beautifulsoup4': 'bs4',\n",
" 'selenium': 'selenium',\n",
" 'openai': 'openai',\n",
" 'webdriver-manager': 'webdriver_manager',\n",
" }\n",
"\n",
" missing_packages = []\n",
" for package_name, import_name in required_packages.items():\n",
" try:\n",
" importlib.import_module(import_name)\n",
" print(f\"✓ {package_name} is already installed\")\n",
" except ImportError:\n",
" missing_packages.append(package_name)\n",
" print(f\"✗ {package_name} needs to be installed\")\n",
"\n",
" if missing_packages:\n",
" print(\"\\nInstalling missing packages...\")\n",
" for package in missing_packages:\n",
" print(f\"Installing {package}...\")\n",
" try:\n",
" subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", package])\n",
" print(f\"Successfully installed {package}\")\n",
" except subprocess.CalledProcessError:\n",
" print(f\"Failed to install {package}\")\n",
"\n",
" all_installed = True\n",
" for package_name, import_name in required_packages.items():\n",
" try:\n",
" importlib.import_module(import_name)\n",
" except ImportError:\n",
" all_installed = False\n",
" print(f\"⚠ {package_name} installation failed\")\n",
"\n",
" return all_installed\n",
"\n",
"ensure_dependencies()"
]
},
{
"cell_type": "markdown",
"id": "91d9959b-21c0-4329-9a35-221f9c977e95",
"metadata": {},
"source": [
"### Run Example 1: Basic Website Summary using OpenAI and Llama"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8c5ce489-6cc7-493c-b6c9-9cebfd21a096",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# %% [markdown]\n",
"# ## Example 1: Basic Website Summary using OpenAI and Llama\n",
"\n",
"display_section(\"Example 1: Basic Website Summary\")\n",
"\n",
"# Default system and user prompts (showing the literal values)\n",
"DEFAULT_SYSTEM_PROMPT = (\"You are an assistant that analyzes the contents of a website \"\n",
" \"and provides a short summary, ignoring text that might be navigation related. \"\n",
" \"Respond in markdown.\")\n",
"\n",
"DEFAULT_USER_PROMPT_TEMPLATE = \"\"\"\n",
"You are looking at a website titled {title}\n",
"The contents of this website is as follows; \n",
"please provide a short summary of this website in markdown.\n",
"If it includes news or announcements, then summarize these too.\n",
"\n",
"{text}\n",
"\"\"\"\n",
"\n",
"prompt_manager = PromptManager(DEFAULT_SYSTEM_PROMPT, DEFAULT_USER_PROMPT_TEMPLATE)\n",
"\n",
"# Define the URL to analyze\n",
"# sample_url = \"https://example.com\"\n",
"sample_url = \"https://andresmendoza.dev\"\n",
"\n",
"# OpenAI commented; it's expensive\n",
"# ===== OpenAI Example (can be commented out to avoid API charges) =====\n",
"try:\n",
" # Initialize and validate OpenAI client\n",
" openai_client = LLMFactory.create_client(\"openai\")\n",
" is_valid, message = openai_client.validate_credentials()\n",
" \n",
" if is_valid:\n",
" print(\"✅ OpenAI credentials validated successfully\")\n",
" \n",
" # Generate summary\n",
" openai_summary = summarize_url(\n",
" openai_client, \n",
" sample_url, \n",
" prompt_manager=prompt_manager,\n",
" use_selenium=False # Default, explicitly shown for clarity\n",
" )\n",
" \n",
" # Display results\n",
" display_result(\"OpenAI\", sample_url, openai_summary)\n",
" else:\n",
" print(f\"❌ OpenAI validation failed: {message}\")\n",
"except Exception as e:\n",
" print(f\"❌ Error with OpenAI: {str(e)}\")\n",
"\n",
"# ===== Llama Example =====\n",
"try:\n",
" # Initialize and validate Llama client\n",
" llama_client = LLMFactory.create_client(\"llama\")\n",
" is_valid, message = llama_client.validate_credentials()\n",
" \n",
" if is_valid:\n",
" print(\"✅ Llama credentials validated successfully\")\n",
" \n",
" # Generate summary\n",
" llama_summary = summarize_url(\n",
" llama_client, \n",
" sample_url, \n",
" prompt_manager=prompt_manager,\n",
" use_selenium=False # Default, explicitly shown for clarity\n",
" )\n",
" \n",
" # Display results\n",
" display_result(\"Llama\", sample_url, llama_summary)\n",
" else:\n",
" print(f\"❌ Llama validation failed: {message}\")\n",
"except Exception as e:\n",
" print(f\"❌ Error with Llama: {str(e)}\")\n"
]
},
{
"cell_type": "markdown",
"id": "62911913-94b1-4c84-ae78-607aec3fa1e5",
"metadata": {},
"source": [
"### Run Example 2: Technical Documentation Summary using Custom Prompts"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dcb95d51-bcc0-41f1-83f7-c1b618511caa",
"metadata": {},
"outputs": [],
"source": [
"# %% [markdown]\n",
"# ## Example 2: Technical Documentation Summary using Custom Prompts\n",
"\n",
"display_section(\"Example 2: Technical Documentation Summary using Custom Prompts\")\n",
"\n",
"# Create custom prompt manager for technical documentation\n",
"TECH_SYSTEM_PROMPT = \"\"\"You are a technical documentation specialist. \n",
"Analyze the provided website content and generate a concise technical summary.\n",
"Focus on technical details, features, and specifications while ignoring navigation elements.\n",
"Respond using well-structured markdown.\"\"\"\n",
"\n",
"TECH_USER_PROMPT = \"\"\"\n",
"You're analyzing a technical website titled: {title}\n",
"\n",
"Please provide:\n",
"1. A brief overview (2-3 sentences)\n",
"2. Key technical features or specifications (3-5 bullet points)\n",
"3. Target audience or use cases\n",
"4. Any technical requirements mentioned\n",
"\n",
"Website content:\n",
"{text}\n",
"\"\"\"\n",
"\n",
"tech_prompt_manager = PromptManager(TECH_SYSTEM_PROMPT, TECH_USER_PROMPT)\n",
"\n",
"# From now on, we'll use Llama (to avoid OpenAI costs)\n",
"personal_website_url = \"https://andresmendoza.dev\"\n",
"\n",
"try:\n",
" # We've already validated Llama client above, so we can reuse it\n",
" if 'llama_client' in locals() and is_valid:\n",
" print(\"🔍 Analyzing personal website with technical documentation prompts\")\n",
" \n",
" # Generate technical summary with custom prompts\n",
" tech_summary = summarize_url(\n",
" llama_client,\n",
" personal_website_url,\n",
" prompt_manager=tech_prompt_manager,\n",
" use_selenium=False # Most personal websites don't need Selenium\n",
" )\n",
" \n",
" # Display results\n",
" display_result(\"Llama (Technical Analysis)\", personal_website_url, tech_summary)\n",
" else:\n",
" print(\"❌ Llama client not available or invalid\")\n",
"except Exception as e:\n",
" print(f\"❌ Error analyzing personal website: {str(e)}\")\n",
"\n"
]
},
{
"cell_type": "markdown",
"id": "2d1bff5c-7553-4e6b-a13f-d36498082de7",
"metadata": {
"jp-MarkdownHeadingCollapsed": true
},
"source": [
"### Run Example 3: Analyzing JavaScript-Heavy SPA with Selenium"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae25d77d-bd53-4d0c-a3e9-bdfa54b98141",
"metadata": {},
"outputs": [],
"source": [
"# Examples 3 and 4 take too long; commented\n",
"# %% [markdown]\n",
"# ## Example 3: Analyzing JavaScript-Heavy SPA with Selenium\n",
"\n",
"display_section(\"Example 3: Analyzing JavaScript-Heavy SPA with Selenium\")\n",
"\n",
"# Create a prompt manager for SPA analysis\n",
"SPA_SYSTEM_PROMPT = \"\"\"You are a frontend analysis specialist. \n",
"Analyze this Single Page Application (SPA) website and provide insights about its structure,\n",
"features, and user experience. Focus on identifying framework-specific elements and patterns.\n",
"Respond in well-formatted markdown.\"\"\"\n",
"\n",
"SPA_USER_PROMPT = \"\"\"\n",
"You're analyzing a Single Page Application titled: {title}\n",
"\n",
"Based on the extracted content, please provide:\n",
"1. A brief overview of the site's purpose and functionality\n",
"2. Identification of likely frontend framework used (Vue, React, Angular, etc.)\n",
"3. Notable UI/UX features and interaction patterns\n",
"4. Performance observations (if any indicators are present)\n",
"\n",
"Extracted SPA content:\n",
"{text}\n",
"\"\"\"\n",
"\n",
"spa_prompt_manager = PromptManager(SPA_SYSTEM_PROMPT, SPA_USER_PROMPT)\n",
"\n",
"# Choose a JavaScript framework site to analyze\n",
"framework_urls = {\n",
" \"vue\": \"https://vuejs.org/\",\n",
" \"angular\": \"https://angular.io/\",\n",
" \"react\": \"https://reactjs.org/\"\n",
"}\n",
"\n",
"# Let's analyze the Vue.js website as an example\n",
"spa_url = framework_urls[\"vue\"]\n",
"\n",
"try:\n",
" if 'llama_client' in locals() and is_valid:\n",
" print(\"🔍 Analyzing JavaScript framework website (requires Selenium)\")\n",
" \n",
" # Generate summary for SPA site with Selenium enabled\n",
" spa_summary = summarize_url(\n",
" llama_client,\n",
" spa_url,\n",
" prompt_manager=spa_prompt_manager,\n",
" use_selenium=True # Enable Selenium for JavaScript-heavy sites\n",
" )\n",
" \n",
" # Display results\n",
" display_result(\"Llama (SPA Analysis with Selenium)\", spa_url, spa_summary)\n",
" else:\n",
" print(\"❌ Llama client not available or invalid\")\n",
"except Exception as e:\n",
" print(f\"❌ Error analyzing SPA website: {str(e)}\")"
]
},
{
"cell_type": "markdown",
"id": "267bafae-9ab3-4868-a08d-227182cd4baf",
"metadata": {
"jp-MarkdownHeadingCollapsed": true
},
"source": [
"### Run Example 4: Comparative Analysis of Multiple Websites"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "14b08e77-821c-45f3-ae43-2effa644c2a5",
"metadata": {},
"outputs": [],
"source": [
"# %% [markdown]\n",
"# ## Example 4: Comparative Analysis of Multiple Websites\n",
"\n",
"display_section(\"Example 4: Comparative Analysis of Multiple Websites\")\n",
"\n",
"# Create a prompt manager for comparative analysis\n",
"COMPARATIVE_SYSTEM_PROMPT = \"\"\"You are a digital content analyst specializing in comparative analysis.\n",
"Your task is to analyze website content and place it in context with similar sites in its category.\n",
"Focus on identifying unique selling points, target audience, and distinguishing features.\n",
"Respond in well-formatted markdown.\"\"\"\n",
"\n",
"COMPARATIVE_USER_PROMPT = \"\"\"\n",
"You're analyzing a website titled: {title}\n",
"\n",
"Based on the content, please provide:\n",
"1. A concise summary of the site's purpose and content\n",
"2. How this site appears to differentiate itself from competitors\n",
"3. Target audience analysis\n",
"4. Content quality assessment (professionalism, clarity, comprehensiveness)\n",
"\n",
"Website content:\n",
"{text}\n",
"\"\"\"\n",
"\n",
"comparative_prompt_manager = PromptManager(COMPARATIVE_SYSTEM_PROMPT, COMPARATIVE_USER_PROMPT)\n",
"\n",
"# Let's analyze a different JavaScript framework for comparison\n",
"comparison_url = framework_urls[\"angular\"]\n",
"\n",
"try:\n",
" if 'llama_client' in locals() and is_valid:\n",
" print(\"🔍 Performing comparative analysis of another framework website\")\n",
" \n",
" # Generate comparative summary\n",
" comparative_summary = summarize_url(\n",
" llama_client,\n",
" comparison_url,\n",
" prompt_manager=comparative_prompt_manager,\n",
" use_selenium=True # Enable Selenium for JavaScript-heavy sites\n",
" )\n",
" \n",
" # Display results\n",
" display_result(\"Llama (Comparative Analysis)\", comparison_url, comparative_summary)\n",
" else:\n",
" print(\"❌ Llama client not available or invalid\")\n",
"except Exception as e:\n",
" print(f\"❌ Error performing comparative analysis: {str(e)}\")\n",
"\n",
"print(\"*\" * 80)\n",
"print(\"END\")\n",
"print(\"*\" * 80)\n",
"\n",
"# %% [markdown]\n",
"# ## Summary of Examples\n",
"# \n",
"# This notebook demonstrated:\n",
"# \n",
"# 1. Basic website summarization with both OpenAI and Llama\n",
"# 2. Technical documentation analysis with custom prompts\n",
"# 3. SPA website analysis using Selenium for JavaScript-heavy sites\n",
"# 4. Comparative website analysis\n",
"# \n",
"# Each section can be run independently or commented out as needed."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

30
week1/community-contributions/website-summary/src/helper/display_utils.py

@ -0,0 +1,30 @@
#!/usr/bin/env python
# coding: utf-8
"""
Display utilities for the Website Summary Tool
"""
from IPython.display import Markdown, display
def display_summary_markdown(summary):
"""
Display the summary as markdown in Jupyter.
Args:
summary: The summary to display
"""
display(Markdown(summary))
def print_validation_result(is_valid, message):
"""
Print the result of API key validation.
Args:
is_valid: Whether the API key is valid
message: The validation message
"""
print(message)
return is_valid

43
week1/community-contributions/website-summary/src/helper/env_utils.py

@ -0,0 +1,43 @@
# src/helper/env_utils.py
import os
from pathlib import Path
from dotenv import load_dotenv
def find_and_load_env_file():
"""
Find and load the .env file from the project structure.
Returns:
bool: True if a .env file was found and loaded, False otherwise
"""
# Start with the current file's directory
current_file = Path(os.path.abspath(__file__))
# Navigate up from helper/env_utils.py:
# helper/ -> src/ -> website-summary/
website_summary_dir = current_file.parent.parent.parent
# Check for .env in website-summary/
project_env_path = website_summary_dir / '.env'
# Check for .env in LLM_ENGINEERING/ (much higher up the directory tree)
llm_engineering_dir = website_summary_dir.parent.parent.parent
llm_engineering_env_path = llm_engineering_dir / '.env'
# List of potential locations to check for .env file
potential_paths = [
project_env_path, # website-summary/.env
llm_engineering_env_path, # LLM_ENGINEERING/.env
Path(os.getcwd()) / '.env', # Current working directory/.env
]
# Search for .env in the potential paths
for env_path in potential_paths:
if env_path.exists():
print(f"✅ Found .env file at: {env_path.absolute()}")
load_dotenv(env_path, override=True)
return True
print("❌ No .env file found in any of the checked locations!")
return False

236
week1/community-contributions/website-summary/src/helper/web_scraper.py

@ -0,0 +1,236 @@
#!/usr/bin/env python
# coding: utf-8
"""
Web scraping functionality for the Website Summary Tool
"""
import re
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from config.constants import HTTP_HEADERS
from structures.models import Website
def fetch_website_content_simple(url):
"""
Fetch website content using requests and BeautifulSoup.
Args:
url: The URL to fetch
Returns:
tuple: (Website object containing the parsed content, needs_selenium)
"""
response = requests.get(url, headers=HTTP_HEADERS)
soup = BeautifulSoup(response.content, 'html.parser')
# Extract title
title = soup.title.string if soup.title else "No title found"
# Check if the page might need Selenium
needs_selenium = detect_needs_selenium(soup, response.text)
# Clean up content
if soup.body:
for irrelevant in soup.body(["script", "style", "img", "input"]):
irrelevant.decompose()
text = soup.body.get_text(separator="\n", strip=True)
else:
text = "No body content found"
return Website(url, title, text), needs_selenium
def detect_needs_selenium(soup, html_content):
"""
Detect if a webpage likely needs Selenium for proper rendering.
Args:
soup: BeautifulSoup object
html_content: Raw HTML content
Returns:
bool: True if the page likely needs Selenium
"""
# Check for SPA frameworks
spa_indicators = [
'ng-app', 'ng-controller', # Angular
'react', 'reactjs', # React
'vue', 'v-app', 'v-if', # Vue
'ember' # Ember
]
# Check for loading indicators or placeholders
loading_indicators = [
'loading', 'please wait', 'spinner',
'content is loading', 'loading content'
]
# Check for scripts that might dynamically load content
scripts = soup.find_all('script')
dynamic_content_indicators = [
'document.write', '.innerHTML', 'appendChild',
'fetch(', 'XMLHttpRequest', 'ajax', '.load(',
'getElementById', 'querySelector'
]
# Check for minimal text content
text_content = soup.get_text().strip()
min_content_length = 500 # Arbitrary threshold
# Check for meta tags indicating spa/js app
meta_tags = soup.find_all('meta')
meta_spa_indicators = ['single page application', 'javascript application', 'react application', 'vue application']
# SPA framework indicators
for attr in spa_indicators:
if attr in html_content.lower():
return True
# Check meta tags
for meta in meta_tags:
content = meta.get('content', '').lower()
for indicator in meta_spa_indicators:
if indicator in content:
return True
# Check for loading indicators or placeholders
for indicator in loading_indicators:
if indicator in html_content.lower():
return True
# Check for dynamic content loading scripts
script_text = ' '.join([script.string for script in scripts if script.string])
for indicator in dynamic_content_indicators:
if indicator in script_text:
return True
# Check if the page has minimal text content but many scripts
if len(text_content) < min_content_length and len(scripts) > 5:
return True
# Check for lazy-loaded content
if re.search(r'lazy[\s-]load|lazyload', html_content, re.IGNORECASE):
return True
return False
def fetch_website_content_selenium(url):
"""
Fetch website content using Selenium for JavaScript-heavy websites.
Args:
url: The URL to fetch
Returns:
Website: A Website object containing the parsed content
"""
options = Options()
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")
options.add_argument("--headless=new") # Updated headless mode syntax
# Use the built-in Selenium Manager instead of explicit driver path
driver = webdriver.Chrome(options=options)
try:
driver.get(url)
# Wait for dynamic content to load
driver.implicitly_wait(5) # Wait up to 5 seconds
# Check if there's a verification or captcha
if detect_verification_needed(driver):
# Switch to interactive mode if verification is needed
driver.quit()
# Restart with visible browser for user interaction
options = Options()
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")
driver = webdriver.Chrome(options=options)
driver.get(url)
input("Please complete the verification in the browser and press Enter to continue...")
page_source = driver.page_source
finally:
driver.quit()
soup = BeautifulSoup(page_source, 'html.parser')
title = soup.title.string if soup.title else "No title found"
for irrelevant in soup(["script", "style", "img", "input"]):
irrelevant.decompose()
text = soup.get_text(separator="\n", strip=True)
return Website(url, title, text)
def detect_verification_needed(driver):
"""
Detect if the page requires human verification.
Args:
driver: Selenium WebDriver instance
Returns:
bool: True if verification appears to be needed
"""
page_source = driver.page_source.lower()
verification_indicators = [
'captcha', 'recaptcha', 'human verification',
'verify you are human', 'bot check', 'security check',
'prove you are human', 'complete the security check',
'verification required', 'verification needed'
]
for indicator in verification_indicators:
if indicator in page_source:
return True
# Check for typical captcha elements
try:
captcha_elements = driver.find_elements(By.XPATH,
"//*[contains(@id, 'captcha') or contains(@class, 'captcha') or contains(@name, 'captcha')]"
)
if captcha_elements:
return True
except Exception:
pass
return False
def fetch_website_content(url, use_selenium=None):
"""
Fetch website content using the appropriate method.
Args:
url: The URL to fetch
use_selenium: Whether to use Selenium for JavaScript-heavy websites.
If None, automatic detection is used.
If True, always use Selenium.
If False, never use Selenium.
Returns:
Website: A Website object containing the parsed content
"""
# If explicit user preference is provided, respect it
if use_selenium is True:
return fetch_website_content_selenium(url)
elif use_selenium is False:
website, _ = fetch_website_content_simple(url)
return website
# Otherwise, use automatic detection
website, needs_selenium = fetch_website_content_simple(url)
if needs_selenium:
print(f"Detected JavaScript-heavy website, switching to Selenium for better content extraction...")
return fetch_website_content_selenium(url)
return website

76
week1/community-contributions/website-summary/src/llm/base_client.py

@ -0,0 +1,76 @@
# src/llm/base_client.py
"""
Base LLM client interface for the Website Summary Tool
"""
from abc import ABC, abstractmethod
class BaseLLMClient(ABC):
"""Abstract base class for LLM clients."""
@abstractmethod
def initialize(self):
"""Initialize the LLM client."""
pass
@abstractmethod
def validate_credentials(self):
"""
Validate API credentials.
Returns:
tuple: (is_valid, message)
"""
pass
@abstractmethod
def test_connection(self, test_message):
"""
Send a test message to verify API connectivity.
Args:
test_message: The message to send
Returns:
str: The response from the model
"""
pass
@abstractmethod
def format_messages(self, messages):
"""
Format messages according to the provider's requirements.
Args:
messages: List of message dictionaries with role and content
Returns:
The properly formatted messages for this specific provider
"""
pass
@abstractmethod
def generate_content(self, messages, model=None, **kwargs):
"""
Generate content from the LLM.
Args:
messages: The messages to send
model: The model to use for generation
**kwargs: Additional provider-specific parameters
Returns:
str: The generated content
"""
pass
@abstractmethod
def get_available_models(self):
"""
Get available models from this provider.
Returns:
list: Available model names
"""
pass

62
week1/community-contributions/website-summary/src/llm/helper/prompt_utils.py

@ -0,0 +1,62 @@
# src/llm/helper/prompt_utils.py
"""
Prompt management utilities for the Website Summary Tool
"""
from structures.models import Website
# Default prompts
DEFAULT_SYSTEM_PROMPT = ("You are an assistant that analyzes the contents of a website "
"and provides a short summary, ignoring text that might be navigation related. "
"Respond in markdown.")
DEFAULT_USER_PROMPT_TEMPLATE = """
You are looking at a website titled {title}
The contents of this website is as follows;
please provide a short summary of this website in markdown.
If it includes news or announcements, then summarize these too.
{text}
"""
class PromptManager:
"""Class to manage prompts for LLM interactions."""
def __init__(self, system_prompt=None, user_prompt_template=None):
"""
Initialize a PromptManager with customizable prompts.
Args:
system_prompt: Custom system prompt (uses default if None)
user_prompt_template: Custom user prompt template (uses default if None)
"""
self.system_prompt = system_prompt if system_prompt is not None else DEFAULT_SYSTEM_PROMPT
self.user_prompt_template = user_prompt_template if user_prompt_template is not None else DEFAULT_USER_PROMPT_TEMPLATE
def create_user_prompt(self, website):
"""
Create a user prompt that includes website information.
Args:
website: A Website object containing parsed content
Returns:
str: The formatted user prompt
"""
return self.user_prompt_template.format(title=website.title, text=website.text)
def create_messages(self, website):
"""
Create the messages array for the LLM API call.
Args:
website: A Website object containing parsed content
Returns:
list: The messages list for the API call
"""
return [
{"role": "system", "content": self.system_prompt},
{"role": "user", "content": self.create_user_prompt(website)}
]

63
week1/community-contributions/website-summary/src/llm/helper/validation_utils.py

@ -0,0 +1,63 @@
# src/llm/helper/validation_utils.py
"""
Validation utilities for LLM clients
"""
class LLMValidator:
"""Helper class for validating LLM client credentials and connections."""
@staticmethod
def validate_openai_key(api_key):
"""
Validate OpenAI API key format.
Args:
api_key: The API key to validate
Returns:
tuple: (is_valid, message)
"""
if not api_key:
return False, "No OpenAI API key was found - please add OPENAI_API_KEY to your .env file"
elif not api_key.startswith("sk-"):
return False, "An OpenAI API key was found, but it doesn't start with sk-; please check you're using the right key"
elif api_key.strip() != api_key:
return False, "An OpenAI API key was found, but it looks like it might have space or tab characters at the start or end"
return True, "OpenAI API key found and looks good so far!"
@staticmethod
def validate_ollama_models(models_data, target_model):
"""
Validate Ollama models response contains the target model.
Args:
models_data: The response from ollama.list()
target_model: The model name we're looking for (or prefix)
Returns:
tuple: (found_model, is_valid, message)
"""
model_found = False
found_model = target_model
# Handle the various response formats from ollama
if hasattr(models_data, 'models'):
# For newer versions of ollama client that return objects
for model in models_data.models:
if hasattr(model, 'model') and target_model.split(':')[0] in model.model:
found_model = model.model # Use the actual name
model_found = True
break
elif isinstance(models_data, dict) and 'models' in models_data:
# For older versions that return dictionaries
for model in models_data.get('models', []):
if 'name' in model and target_model.split(':')[0] in model['name']:
found_model = model['name'] # Use the actual name
model_found = True
break
if model_found:
return found_model, True, f"Found model {found_model}"
else:
return target_model, False, f"No matching model found for {target_model}"

26
week1/community-contributions/website-summary/src/llm/llama/helper/check_ollama_models.py

@ -0,0 +1,26 @@
# src/llm/llama/helper/check_ollama_models.py
import requests
def list_ollama_models():
"""List all models available in Ollama with their exact names"""
try:
response = requests.get("http://localhost:11434/api/tags")
if response.status_code == 200:
models_data = response.json()
print("Available models in Ollama:")
if "models" in models_data and models_data["models"]:
for i, model in enumerate(models_data["models"], 1):
print(f"{i}. {model['name']}")
print("\nTo use a specific model, update the default_model in llama_client.py")
else:
print("No models found in the response")
print(f"Raw response: {models_data}")
else:
print(f"Error: {response.status_code} - {response.text}")
except Exception as e:
print(f"Error connecting to Ollama API: {str(e)}")
if __name__ == "__main__":
list_ollama_models()

156
week1/community-contributions/website-summary/src/llm/llama/llama_client.py

@ -0,0 +1,156 @@
# src/llm/llama/llama_client.py
"""
Llama API interaction for the Website Summary Tool
"""
import os
import ollama
from llm.base_client import BaseLLMClient
from helper.env_utils import find_and_load_env_file
from llm.helper.validation_utils import LLMValidator
class LlamaClient(BaseLLMClient):
"""Client for the Llama API (locally hosted through Ollama)."""
def __init__(self):
"""Initialize the Llama client."""
self.api_base = None
self.client = None
self.available_models = ["llama3.2:latest"]
self.default_model = "llama3.2:latest"
def initialize(self):
"""Initialize the Llama client by loading config."""
# Load .env file and set API URL
find_and_load_env_file()
# Get the API base URL from environment variables
self.api_base = os.getenv('LLAMA_API_URL', 'http://localhost:11434')
print(f"LLAMA_API_URL: {self.api_base}")
# Create an explicit client bound to the configured host
# (assigning ollama.host has no effect; the module-level functions use a default client)
self.client = ollama.Client(host=self.api_base)
return self
def validate_credentials(self):
"""
Validate that the Llama API is accessible.
Returns:
tuple: (is_valid, message)
"""
if not self.api_base:
return False, "No Llama API URL found - please add LLAMA_API_URL to your .env file"
try:
# Get the list of models from Ollama
models_data = self.client.list()
# Print the raw models data for debugging
print(f"Raw Ollama models data: {models_data}")
# Validate models data contains our target model
found_model, is_valid, message = LLMValidator.validate_ollama_models(
models_data, self.default_model
)
if is_valid:
self.default_model = found_model # Update with the exact model name
return True, f"Ollama API connection successful! Found model {self.default_model}"
else:
return False, f"Connected to Ollama API but no llama3.x model found. Please run 'ollama pull llama3.2'"
except Exception as e:
return False, f"Error connecting to Ollama API: {str(e)}"
def test_connection(self, test_message="Hello, this is a test message."):
"""
Send a test message to verify API connectivity.
Args:
test_message: The message to send
Returns:
str: The response from the model
"""
try:
response = self.client.chat(
model=self.default_model,
messages=[{"role": "user", "content": test_message}]
)
return response["message"]["content"]
except Exception as e:
return f"Error connecting to Ollama API: {str(e)}"
def format_messages(self, messages):
"""
Format messages for Llama API.
Args:
messages: List of message dictionaries with role and content
Returns:
list: A formatted messages list for Ollama
"""
# The ollama.chat API accepts messages in the same format as OpenAI
return messages
def generate_content(self, messages, model=None, **kwargs):
"""
Generate content from Llama.
Args:
messages: The messages to send
model: The model to use for generation
**kwargs: Additional Llama-specific parameters
Returns:
str: The generated content
"""
model = model or self.default_model
formatted_messages = self.format_messages(messages)
try:
# Create options dictionary for additional parameters
options = {}
if "temperature" in kwargs:
options["temperature"] = kwargs["temperature"]
# Call ollama.chat with our messages and options
response = self.client.chat(
model=model,
messages=formatted_messages,
options=options
)
return response["message"]["content"]
except Exception as e:
if "connection" in str(e).lower():
raise Exception(f"Could not connect to Ollama at {self.api_base}. Is the Ollama server running?")
else:
raise Exception(f"Error with Ollama API: {str(e)}")
def get_available_models(self):
"""
Get available models from Ollama.
Returns:
list: Available model names
"""
try:
models_data = self.client.list()
# Extract model names based on response format
if hasattr(models_data, 'models'):
model_names = [model.model for model in models_data.models if hasattr(model, 'model')]
elif isinstance(models_data, dict) and 'models' in models_data:
model_names = [model.get('name') for model in models_data.get('models', [])]
else:
model_names = []
# Filter for our specific model
filtered_models = [name for name in model_names if self.default_model.split(':')[0] in name]
return filtered_models if filtered_models else self.available_models
except Exception as e:
print(f"Error getting available models: {str(e)}")
return self.available_models

43
week1/community-contributions/website-summary/src/llm/llm_factory.py

@ -0,0 +1,43 @@
# src/llm/llm_factory.py
"""
Factory for creating LLM clients
"""
from llm.open_api.openai_client import OpenAIClient
from llm.llama.llama_client import LlamaClient
class LLMFactory:
"""Factory for creating LLM clients."""
@staticmethod
def get_providers():
"""
Get available LLM providers.
Returns:
dict: Dictionary of provider name to display name
"""
return {
"openai": "OpenAI",
"llama": "Llama (Local)"
}
@staticmethod
def create_client(provider_name):
"""
Create an LLM client based on provider name.
Args:
provider_name: The name of the provider
Returns:
BaseLLMClient: The initialized LLM client
"""
if provider_name == "openai":
return OpenAIClient().initialize()
elif provider_name == "llama":
return LlamaClient().initialize()
else:
raise ValueError(f"Unknown provider: {provider_name}")

118
week1/community-contributions/website-summary/src/llm/open_api/openai_client.py

@ -0,0 +1,118 @@
# src/llm/open_api/openai_client.py
"""
OpenAI API interaction for the Website Summary Tool
"""
import os
from openai import OpenAI
from llm.base_client import BaseLLMClient
from helper.env_utils import find_and_load_env_file
from llm.helper.validation_utils import LLMValidator
class OpenAIClient(BaseLLMClient):
"""Client for the OpenAI API."""
def __init__(self):
"""Initialize the OpenAI client."""
self.api_key = None
self.client = None
self.available_models = [
"gpt-4o-mini",
"gpt-4o",
"gpt-3.5-turbo"
]
self.default_model = "gpt-4o-mini"
def initialize(self):
"""Initialize the OpenAI client."""
# Load .env file and set API key
find_and_load_env_file()
self.api_key = os.getenv('OPENAI_API_KEY')
if self.api_key:
print("✅ OPENAI_API_KEY found in environment variables")
self.client = OpenAI(api_key=self.api_key)
else:
print("❌ OPENAI_API_KEY not found in environment variables")
# Try alternative approach as seen in example_usage.ipynb
print("Attempting alternative method to find OpenAI API key...")
# Create client without explicit key - it may find it elsewhere
self.client = OpenAI()
return self
def validate_credentials(self):
"""
Validate that the API key exists and has correct formatting.
Returns:
tuple: (is_valid, message)
"""
return LLMValidator.validate_openai_key(self.api_key)
def test_connection(self, test_message="Hello, GPT! This is a test message."):
"""
Send a test message to verify API connectivity.
Args:
test_message: The message to send
Returns:
str: The response from the model
"""
try:
response = self.client.chat.completions.create(
model=self.default_model,
messages=[
{"role": "user", "content": test_message}
]
)
return response.choices[0].message.content
except Exception as e:
return f"Error connecting to OpenAI API: {str(e)}"
def format_messages(self, messages):
"""
Format messages for OpenAI API.
Args:
messages: List of message dictionaries with role and content
Returns:
list: The messages formatted for OpenAI
"""
# OpenAI already uses the format we're using, so we can return as-is
return messages
def generate_content(self, messages, model=None, **kwargs):
"""
Generate content from OpenAI.
Args:
messages: The messages to send
model: The model to use for generation
**kwargs: Additional OpenAI-specific parameters
Returns:
str: The generated content
"""
model = model or self.default_model
formatted_messages = self.format_messages(messages)
response = self.client.chat.completions.create(
model=model,
messages=formatted_messages,
**kwargs
)
return response.choices[0].message.content
def get_available_models(self):
"""
Get available models from OpenAI.
Returns:
list: Available model names
"""
return self.available_models

144
week1/community-contributions/website-summary/src/main_summarize.py

@ -0,0 +1,144 @@
#!/usr/bin/env python
# coding: utf-8
"""
Website Summary Tool
A tool to fetch website content and generate summaries using multiple LLM providers.
"""
from llm.llm_factory import LLMFactory
from helper.web_scraper import fetch_website_content
from llm.helper.prompt_utils import PromptManager
from helper.display_utils import display_summary_markdown, print_validation_result
def summarize_url(client, url, use_selenium=None, model=None, prompt_manager=None):
"""
Fetch a website and generate a summary.
Args:
client: The LLM client
url: The URL to summarize
use_selenium: Whether to use Selenium for JavaScript-heavy websites.
If None, automatic detection is used.
model: The model to use for generation
prompt_manager: Optional PromptManager to customize prompts
Returns:
str: The generated summary
"""
website = fetch_website_content(url, use_selenium)
# Use default PromptManager if none provided
if prompt_manager is None:
prompt_manager = PromptManager()
messages = prompt_manager.create_messages(website)
return client.generate_content(messages, model=model)
def main():
"""Main function to run the website summary tool."""
# Get available providers
providers = LLMFactory.get_providers()
# Choose provider
print("Available LLM providers:")
for i, (key, name) in enumerate(providers.items(), 1):
print(f"{i}. {name}")
choice = input(f"Select provider (1-{len(providers)}, default: 1): ").strip()
try:
idx = int(choice) - 1 if choice else 0
if idx < 0 or idx >= len(providers):
raise ValueError()
provider_key = list(providers.keys())[idx]
except (ValueError, IndexError):
print(f"Invalid choice. Using {list(providers.values())[0]}.")
provider_key = list(providers.keys())[0]
# Create LLM client
try:
client = LLMFactory.create_client(provider_key)
except Exception as e:
print(f"Error creating {providers[provider_key]} client: {str(e)}")
return
# Validate credentials
is_valid, message = client.validate_credentials()
if not print_validation_result(is_valid, message):
return
# Test API connection
print(f"Testing connection to {providers[provider_key]}...")
test_response = client.test_connection()
print("Test API response:")
print(test_response)
# Choose model
available_models = client.get_available_models()
if len(available_models) > 1:
print("\nAvailable models:")
for i, model_name in enumerate(available_models, 1):
print(f"{i}. {model_name}")
model_choice = input(f"Select model (1-{len(available_models)}, default: 1): ").strip()
try:
model_idx = int(model_choice) - 1 if model_choice else 0
if model_idx < 0 or model_idx >= len(available_models):
raise ValueError()
model = available_models[model_idx]
except (ValueError, IndexError):
print(f"Invalid choice. Using {available_models[0]}.")
model = available_models[0]
else:
model = available_models[0] if available_models else None
# Define website URL to summarize
website_url = input("Enter the URL of the website to summarize: ")
# Prompt customization option
customize_prompts = input("Do you want to customize the prompts? (y/n, default: n): ").lower()
prompt_manager = None
if customize_prompts == 'y':
print("Current system prompt: ")
print(PromptManager().system_prompt)
new_system_prompt = input("Enter new system prompt (leave empty to keep default): ").strip()
print("\nCurrent user prompt template: ")
print(PromptManager().user_prompt_template)
new_user_prompt = input("Enter new user prompt template (leave empty to keep default): ").strip()
# Create custom prompt manager if needed
if new_system_prompt or new_user_prompt:
system_prompt = new_system_prompt if new_system_prompt else None
user_prompt = new_user_prompt if new_user_prompt else None
prompt_manager = PromptManager(system_prompt, user_prompt)
# Ask if user wants to override automatic detection
override = input("Do you want to override automatic detection of JavaScript-heavy websites? (y/n, default: n): ").lower()
use_selenium = None # Default to automatic detection
if override == 'y':
use_selenium_input = input("Use Selenium for this website? (y/n): ").lower()
use_selenium = use_selenium_input == 'y'
# Generate and display summary
print(f"Fetching and summarizing content from {website_url}...")
summary = summarize_url(client, website_url, use_selenium, model, prompt_manager)
print("\nSummary:")
print(summary)
# In Jupyter notebook
try:
display_summary_markdown(summary)
except Exception:
pass # Not in Jupyter
if __name__ == "__main__":
main()

22
week1/community-contributions/website-summary/src/structures/models.py

@ -0,0 +1,22 @@
#!/usr/bin/env python
# coding: utf-8
"""
Data models for the Website Summary Tool
"""
class Website:
"""Class to represent a webpage with its content."""
def __init__(self, url, title, text):
"""
Initialize a Website object with parsed content.
Args:
url: The URL of the website
title: The title of the webpage
text: The parsed text content of the webpage
"""
self.url = url
self.title = title
self.text = text

2
week1/day1.ipynb

@ -587,7 +587,7 @@
"name": "python", "name": "python",
"nbconvert_exporter": "python", "nbconvert_exporter": "python",
"pygments_lexer": "ipython3", "pygments_lexer": "ipython3",
"version": "3.11.11" "version": "3.11.12"
} }
}, },
"nbformat": 4, "nbformat": 4,
