{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Instant Gratification!\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n",
"\n",
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
"\n",
"Before starting, be sure to have followed the instructions in the \"README\" file, including creating your API key with OpenAI and adding it to the `.env` file.\n",
"\n",
"## If you're new to Jupyter Lab\n",
"\n",
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations.\n",
"\n",
"If you need to start a 'notebook' again, go to Kernel menu >> Restart kernel.\n",
"\n",
"## I am here to help\n",
"\n",
"If you have any problems at all, please do reach out. \n",
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect.\n",
"\n",
"## More troubleshooting\n",
"\n",
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder for more ideas!\n",
"\n",
"## Business value of these exercises\n",
"\n",
"A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"from selenium import webdriver\n",
"from selenium.webdriver.chrome.service import Service as ChromeService\n",
"from webdriver_manager.chrome import ChromeDriverManager\n",
"import time"
]
},
{
"cell_type": "markdown",
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
"metadata": {},
"source": [
"# Connecting to OpenAI\n",
"\n",
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n",
"\n",
"## Troubleshooting if you have problems:\n",
"\n",
"1. OpenAI takes a few minutes to register after you set up an account. If you receive an error about being over quota, try waiting a few minutes and try again.\n",
"2. Also, double check you have the right kind of API token with the right permissions. You should find it on [this webpage](https://platform.openai.com/api-keys) and it should show with Permissions of \"All\". If not, try creating another key by:\n",
"- Pressing \"Create new secret key\" on the top right\n",
"- Select **Owned by:** you, **Project:** Default project, **Permissions:** All\n",
"- Click Create secret key, and use that new key in the code and the `.env` file (it might take a few minutes to activate)\n",
"- Do a Kernel >> Restart kernel, and execute the cells in this Jupyter lab starting at the top\n",
"4. As a fallback, replace the line `openai = OpenAI()` with `openai = OpenAI(api_key=\"your-key-here\")` - while it's not recommended to hard code tokens in Jupyter lab, because then you can't share your lab with others, it's a workaround for now\n",
"5. See the [troubleshooting](troubleshooting.ipynb) notebook in this folder for more instructions\n",
"6. Contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
"\n",
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv()\n",
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n",
"openai = OpenAI()\n",
"\n",
"# Uncomment the below line if this gives you any problems:\n",
"# openai = OpenAI(api_key=\"your-key-here\")"
]
},
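{
"cell_type": "code",
"execution_count": null,
"id": "f3a1c9e2-7d45-4b8a-9c1e-0a2b3c4d5e6f",
"metadata": {},
"outputs": [],
"source": [
"# A quick, optional sanity check on the key (a sketch - the helper name\n",
"# looks_like_openai_key is our own, not part of the OpenAI library).\n",
"# OpenAI secret keys currently start with \"sk-\", and stray whitespace from\n",
"# copy-pasting into .env is a very common cause of auth errors.\n",
"\n",
"def looks_like_openai_key(key):\n",
"    return bool(key) and key == key.strip() and key.startswith(\"sk-\")\n",
"\n",
"key = os.environ.get('OPENAI_API_KEY', '')\n",
"if looks_like_openai_key(key):\n",
"    print(\"API key found and looks plausible\")\n",
"else:\n",
"    print(\"Check your .env - the key is missing, has stray whitespace, or doesn't start with sk-\")"
]
},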
{
"cell_type": "code",
"execution_count": 4,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"\n",
"class Website:\n",
" url: str\n",
" title: str\n",
" text: str\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" try:\n",
" driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()))\n",
" driver.get(url)\n",
" \n",
" last_height = driver.execute_script(\"return document.body.scrollHeight\")\n",
" while True:\n",
" driver.execute_script(\"window.scrollTo(0, document.body.scrollHeight);\")\n",
" time.sleep(4)\n",
" new_height = driver.execute_script(\"return document.body.scrollHeight\")\n",
" if new_height == last_height:\n",
" break\n",
" last_height = new_height\n",
" \n",
" soup = BeautifulSoup(driver.page_source, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No Title Found\"\n",
" for useless in soup.body(['script','style','img','input','noscript']):\n",
" useless.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" except Exception as e:\n",
" print(f\"An error occurred: {e}\")\n",
" finally:\n",
" driver.quit()"
]
},
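{
"cell_type": "code",
"execution_count": null,
"id": "b7e2d4f1-3a59-4c6b-8d0e-1f2a3b4c5d6e",
"metadata": {},
"outputs": [],
"source": [
"# An optional variation (a sketch, not part of the course code): run Chrome\n",
"# headless so no browser window pops up while Website scrapes a page.\n",
"# Pass the options into webdriver.Chrome alongside the service argument.\n",
"\n",
"from selenium.webdriver.chrome.options import Options\n",
"\n",
"chrome_options = Options()\n",
"chrome_options.add_argument(\"--headless=new\")  # the newer headless mode in recent Chrome versions\n",
"\n",
"# Example usage inside Website.__init__:\n",
"# driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()), options=chrome_options)"
]
},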
{
"cell_type": "code",
"execution_count": 5,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"https://kiran.me.uk\n",
"KIRAN RAMBHA\n",
"Kiran Rambha\n",
"Hi, I'm Kiran Rambha\n",
".\n",
"About\n",
"A full-stack web developer with over 7 years of experience, currently working with different JavaScript technologies like\n",
"NodeJS\n",
",\n",
"ReactJS\n",
"and many other cloud services in\n",
"GCP\n",
",\n",
"AWS\n",
"and\n",
"Azure\n",
"Stack. I also have experience in developing high-quality large scale web application using\n",
"ASP.NET CORE\n",
",\n",
"C#\n",
"as well as developing Amazon Alexa skills using\n",
"Alexa Skill Kit\n",
"and\n",
"AWS Lambda\n",
".\n",
"Along with being a full-stack developer, I really enjoy photography especially landscape and street photography. I recently started exploring the world of film photography and I love learning all the ins and outs of different film stocks and film cameras. Aside from photography I like to spend my free time at the gym or taking long walks at the local park. I also like to travel and explore food and culture from different countries.\n",
"Experience\n",
"Senior Software Engineer\n",
"Compare the Market\n",
"Specialized in:\n",
"Node Js, GraphqlQL, GitLab, React Js, Kubernetes, AWS, Microservices\n",
"London, UK\n",
"Dec '22 - Present\n",
"Senior Software Engineer\n",
"Profile Pensions\n",
"Specialized in:\n",
"Node Js, React Js, GraphQL, Kubernetes, GCP, Microservice Architecture\n",
"London, UK\n",
"Jul '21 - Nov '22\n",
"Senior Software Engineer Analyst\n",
"Accenture UK\n",
"Specialized in:\n",
"Node Js, React Js, Alexa Skill Kit, AWS\n",
"London, UK\n",
"Dec '19 - Jul '21\n",
"Software Engineer Analyst\n",
"Accenture UK\n",
"Specialized in:\n",
"Microservice Architecture, Chatbots\n",
"London, UK\n",
"Dec '18 - Nov '19\n",
"Software Engineer Associate\n",
"Accenture UK\n",
"Specialized in:\n",
"Murex, Python, HTML, JavaScript, CSS, Shell\n",
"London, UK\n",
"Oct '17 - Nov '18\n",
"Software Engineer (Industrial Placement)\n",
"Accenture UK\n",
"Specialized in:\n",
"C#, ASP.NET, HTML, CSS, jQuery, AJAX\n",
"London, UK\n",
"Jun '15 - Sep '16\n",
"Projects\n",
"Stock Hub - Alexa Skill\n",
"Get the latest price of any stock, When a company is reporting their quarterly earnings and much much more using Alexa...\n",
"Technologies:\n",
"Node Js, Alexa Skill Kit, Lambda, IEX Cloud, Zacks etc.\n",
"GitHub (Access on Request)\n",
"Nov '20 - Present\n",
"Kiran Rambha Website\n",
"My personal website built using React, React Hooks & TailwindCSS while following all responsive web design standards.\n",
"Technologies:\n",
"React Js, Tailwind CSS, Firebase\n",
"GitHub\n",
"Nov '20 - Present\n",
"Just Stream - Alexa Skill\n",
"A movie search engine that lets people find where a movie or a tv show is streaming using their Alexa. Currently this skill supports Netflix, Amazon Prime Video and Apple TV+ streaming services.\n",
"Technologies:\n",
"Node Js, Alexa Skill Kit, etc.\n",
"GitHub (Access on Request)\n",
"Apr '20 - Present\n",
"Local Exchange Trading System\n",
"LETS is a web application where members can exchange goods and services among themselves using a built in local currency (LETS Credit)\n",
"Technologies:\n",
"C#, ASP.NET MVC\n",
"GitHub\n",
"2016 - 2017\n",
"Education\n",
"Royal Holloway, University of London\n",
"BSc Computer Science (Year in Industry)\n",
"London, UK\n",
"2013 - 2017\n",
"Sri Chaitanya Raman Bhavan\n",
"Intermediate Education (A-Level)\n",
"Vijayawada, AP, India\n",
"2011 - 2013\n",
"Gowtham Concept School\n",
"High School\n",
"Gudivada, AP, India\n",
"2007 - 2011\n",
"Skills\n",
"Node Js\n",
"React\n",
"NLP/Chatbots\n",
"Alexa Skill Kit\n",
"Angular\n",
"Java\n",
"AWS\n",
"Mongo Db\n",
"HTML\n",
"CSS\n",
"JavaScript\n",
"Docker\n",
"C-Sharp\n",
"Postgresql\n",
"GIT\n",
"Jenkins\n",
"©\n",
"Copyright 2024\n",
"KIRAN RAMBHA\n",
"Made with ❤ in\n",
"London\n"
]
}
],
"source": [
"# Let's try one out\n",
"\n",
"ed = Website(\"https://kiran.me.uk\")\n",
"print(ed.url)\n",
"print(ed.title)\n",
"print(ed.text)"
]
},
{
"cell_type": "markdown",
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
"metadata": {},
"source": [
"## Types of prompts\n",
"\n",
"You may know this already - but if not, you will get very familiar with it!\n",
"\n",
"Models like GPT4o have been trained to receive instructions in a particular way.\n",
"\n",
"They expect to receive:\n",
"\n",
"**A system prompt** that tells them what task they are performing and what tone they should use\n",
"\n",
"**A user prompt** -- the conversation starter that they should reply to"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
"source": [
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"The contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "markdown",
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
"metadata": {},
"source": [
"## Messages\n",
"\n",
"The API from OpenAI expects to receive messages in a particular structure.\n",
"Many of the other APIs share this structure:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
"source": [
"def get_openai_message_format(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]"
]
},
{
"cell_type": "markdown",
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
"metadata": {},
"source": [
"## Time to bring it together - the API for OpenAI is very simple!"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
"source": [
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = get_openai_message_format(website)\n",
" )\n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"# Summary of Edward Donner's Website\\n\\nThis website is created by Ed Donner, a technology enthusiast with a focus on coding and large language models (LLMs). He is the co-founder and CTO of Nebula.io, a company dedicated to using AI in talent discovery and management. Ed has a background as the founder and CEO of the AI startup untapt, which was acquired in 2021.\\n\\n## Key Features:\\n\\n- **Outsmart LLM Arena**: A unique platform that pits LLMs against each other in diplomatic and cunning challenges.\\n \\n- **Blog Posts**: The website includes several informative posts, including:\\n - **From Software Engineer to AI Data Scientist – resources** (October 16, 2024)\\n - **Outsmart LLM Arena – a battle of diplomacy and deviousness** (August 6, 2024)\\n - **Choosing the Right LLM: Toolkit and Resources** (June 26, 2024)\\n - **Fine-tuning an LLM on your texts: a simulation of you** (February 7, 2024)\\n\\nEd invites visitors to connect and share interests in coding, LLMs, and electronic music.\""
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
"source": [
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"# Kiran Rambha\n",
"\n",
"## Overview\n",
"Kiran Rambha is a full-stack web developer with over 7 years of experience in JavaScript technologies, cloud services, and web application development. He specializes in NodeJS, ReactJS, and has a solid background in ASP.NET CORE and Alexa Skill development. Kiran is also passionate about photography, particularly landscape and street photography, and enjoys traveling, gym workouts, and exploring different cultures through food.\n",
"\n",
"## Experience\n",
"Kiran has held various senior software engineering positions at notable companies:\n",
"- **Compare the Market** (Dec '22 - Present) - Focused on NodeJS, ReactJS, and microservices.\n",
"- **Profile Pensions** (Jul '21 - Nov '22) - Worked with NodeJS, ReactJS, and GCP.\n",
"- **Accenture UK** (Dec '19 - Jul '21) - Developed Alexa Skills among other technologies.\n",
"\n",
"## Projects\n",
"1. **Stock Hub - Alexa Skill**: Provides stock price updates and company earnings reports.\n",
"2. **Kiran Rambha Website**: A personal site built with React and TailwindCSS, adhering to responsive design.\n",
"3. **Just Stream - Alexa Skill**: A search engine for finding streaming availability of movies on platforms like Netflix and Amazon Prime.\n",
"4. **Local Exchange Trading System**: A web application for exchanging goods and services with a built-in currency.\n",
"\n",
"## Education\n",
"- **BSc Computer Science** from Royal Holloway, University of London (2013 - 2017)\n",
"\n",
"## Skills\n",
"Kiran possesses a wide array of technical skills including NodeJS, React, AWS, Docker, and various programming languages.\n",
"\n",
"## Additional Information\n",
"Kiran has expressed a growing interest in film photography and actively pursues learning about film stocks and cameras."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://kiran.me.uk\")"
]
},
{
"cell_type": "markdown",
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624",
"metadata": {},
"source": [
"# Let's try more websites\n",
"\n",
"Note that this will only work on websites that can be scraped using this simplistic approach.\n",
"\n",
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this.\n",
"\n",
"Also Websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n",
"\n",
"But many websites will work just fine!"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"# Summary of CNN Website\n",
"\n",
"CNN is a prominent news outlet that provides a broad range of news coverage, including US and world news, politics, business, health, entertainment, sports, science, climate, and weather updates. The website features video content, live updates, and various analysis pieces on current events.\n",
"\n",
"## Notable News Updates:\n",
"- **Israel-Iran Conflict**: The Israeli military has conducted strikes on air defense batteries in Iran amid escalating tensions. Analysts suggest the outcome may be influenced by the upcoming US elections.\n",
" \n",
"- **Ukraine-Russia War**: Ongoing updates regarding the conflict, including its impact on humanitarian conditions.\n",
" \n",
"- **Recent Domestic Events**: Michelle Obama criticizes Trump regarding his reactions and political maneuvers amid the ongoing war in Gaza. Additionally, significant elections are occurring in Georgia with allegations from opposition parties about electoral misconduct.\n",
"\n",
"- **Global Incidents**: \n",
" - **Flooding in the Philippines**: Over 126 people are dead or missing due to severe flooding and landslides.\n",
" - **Hoax Bomb Threats in India**: These incidents are causing chaos ahead of the Diwali festival.\n",
" - **SpaceX**: An astronaut from the Crew-8 mission has been hospitalized post-splashdown, but is reported to be in stable condition.\n",
"\n",
"- **Human Interest Stories**: Coverage includes a young climber who became the youngest person to summit the world’s highest peaks and heartwarming community initiatives.\n",
"\n",
"The CNN website is a hub for staying informed on global and local events, along with engaging multimedia content and expert analyses."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://cnn.com\")"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"# Summary of Anthropic Website\n",
"\n",
"Anthropic is an AI safety and research company based in San Francisco, dedicated to developing reliable and beneficial AI systems. The website showcases their flagship AI model, **Claude**, with the latest release being **Claude 3.5 Sonnet**, which is described as their most intelligent model. Claude is designed to enhance efficiency and create new revenue streams through its API.\n",
"\n",
"## Recent Announcements\n",
"- **October 22, 2024**: Introduction of Claude 3.5 Sonnet and Claude 3.5 Haiku, along with updates to general computer use.\n",
"- **September 4, 2024**: Launch of **Claude for Enterprise**, aimed at providing tailored AI solutions for businesses.\n",
"- **December 15, 2022**: Research published on **Constitutional AI**, focusing on ensuring AI harmlessness through feedback.\n",
"- **March 8, 2023**: Release of insights on **AI Safety**, exploring essential considerations for the field.\n",
"\n",
"The website also includes sections related to the company's team, research initiatives, career opportunities, and more detailed information on the Claude API and pricing."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://anthropic.com\")"
]
},
{
"cell_type": "code",
"execution_count": 86,
"id": "2e2b2f1f",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"# OpenAI Website Summary\n",
"\n",
"OpenAI is dedicated to creating safe artificial general intelligence (AGI) that benefits all of humanity. The website outlines various AI research advancements and product innovations while providing tools for users, teams, and enterprises to leverage AI technologies.\n",
"\n",
"## Key Features:\n",
"- **OpenAI o1**: A new series of AI models focused on improving reasoning time before responding.\n",
"- **ChatGPT**: A versatile AI tool capable of various tasks such as planning, teaching, translating, and more.\n",
"- **Sora**: A video generation model designed to create realistic and imaginative video content from text input.\n",
"- **Canvas**: A new method for writing and coding with ChatGPT.\n",
"- **SearchGPT**: A prototype for new AI search features.\n",
"\n",
"## Recent News & Announcements:\n",
"- **Partnership with Apple**: OpenAI and Apple announced a collaboration to integrate ChatGPT into Apple's platforms.\n",
"- **ChatGPT Enhancements**: The AI can now \"see, hear, and speak,\" expanding its interactive capabilities.\n",
"- **GPT Store**: Introduction of a store feature that enhances the accessibility of different AI tools and models.\n",
"\n",
"## Research Highlights:\n",
"- Studies focusing on improving model safety and reasoning capabilities are ongoing, with notable projects such as the development of an early warning system for potential AI misuse.\n",
"\n",
"The site serves as a hub for users interested in accessing cutting-edge AI technologies while keeping abreast of ongoing research and product developments."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://openai.com\")"
]
},
{
"cell_type": "markdown",
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c",
"metadata": {},
"source": [
"## Business Applications\n",
"\n",
"In this exercise, you experienced calling the API of a Frontier Model (a leading model at the frontier of AI) for the first time. This is broadly applicable across Gen AI use cases and we will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
"\n",
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution."
]
},
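{
"cell_type": "code",
"execution_count": null,
"id": "c8d3e5a2-9b16-4f7c-a0d1-2e3f4a5b6c7d",
"metadata": {},
"outputs": [],
"source": [
"# A sketch of applying the same pattern to plain text instead of a URL -\n",
"# e.g. a resume or an earnings report. The helper messages_for_text is our\n",
"# own illustration, not part of the OpenAI library; pass its result to\n",
"# openai.chat.completions.create exactly as summarize() does.\n",
"\n",
"def messages_for_text(text, subject=\"a document\"):\n",
"    system = \"You are an assistant that summarizes documents for busy readers. Respond in markdown.\"\n",
"    user = f\"Please provide a short summary of {subject}:\\n\\n{text}\"\n",
"    return [\n",
"        {\"role\": \"system\", \"content\": system},\n",
"        {\"role\": \"user\", \"content\": user}\n",
"    ]\n",
"\n",
"# messages = messages_for_text(open('report.txt').read(), 'an earnings report')\n",
"# response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)"
]
},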
{
"cell_type": "markdown",
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda",
"metadata": {},
"source": [
"## An extra exercise for those who enjoy web scraping\n",
"\n",
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them."
]
},
{
"cell_type": "markdown",
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6",
"metadata": {},
"source": [
"# Sharing your code\n",
"\n",
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
"\n",
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
"\n",
"PR instructions courtesy of an AI friend: https://chatgpt.com/share/670145d5-e8a8-8012-8f93-39ee4e248b4c"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "682eff74-55c4-4d4b-b267-703edbc293c7",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "llms",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}