{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Instant Gratification\n",
"\n",
"## Your first Frontier LLM Project!\n",
"\n",
"Let's build a useful LLM solution - in a matter of minutes.\n",
"\n",
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n",
"\n",
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
"\n",
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n",
"\n",
"## If you're new to Jupyter Lab\n",
"\n",
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n",
"\n",
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
"\n",
"## If you'd prefer to work in IDEs\n",
"\n",
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n",
"If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n",
"\n",
"## If you'd like to brush up your Python\n",
"\n",
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n",
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n",
"\n",
"## I am here to help\n",
"\n",
"If you have any problems at all, please do reach out. \n",
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!)\n",
"\n",
"## More troubleshooting\n",
"\n",
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n",
"\n",
"## If this is old hat!\n",
"\n",
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Please read - important note</h2>\n",
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n",
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"\n",
"# If you get an error running this cell, then please head over to the troubleshooting notebook!"
]
},
{
"cell_type": "markdown",
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
"metadata": {},
"source": [
"# Connecting to OpenAI\n",
"\n",
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n",
"\n",
"## Troubleshooting if you have problems:\n",
"\n",
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n",
"\n",
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n",
"\n",
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
"\n",
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"API key found and looks good so far!\n"
]
}
],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv(override=True)\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"\n",
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions"
]
},
{
"cell_type": "markdown",
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91",
"metadata": {},
"source": [
"# Let's make a quick call to a Frontier model to get started, as a preview!"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! Welcome to our chat! What can I help you with today?\n"
]
}
],
"source": [
"# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n",
"\n",
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n",
"response = openai.chat.completions.create(model=\"gpt-3.5-turbo\", messages=[{\"role\":\"user\", \"content\":message}])\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "2aa190e5-cb31-456a-96cc-db109919cd78",
"metadata": {},
"source": [
"## OK onwards with our first project"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Website object from the given url using the BeautifulSoup library\n",
" \"\"\"\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" soup = BeautifulSoup(response.content, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Kanit Vural | Data Scientist & ML Engineer\n",
"Home\n",
"About\n",
"Skills\n",
"Projects\n",
"Blog\n",
"Contact\n",
"Kanıt Vural\n",
"|\n",
"Transforming data into actionable insights and building intelligent solutions\n",
"Get in Touch\n",
"About Me\n",
"Data Scientist & Mine Research & Development Engineer with expertise in AI-driven solutions\n",
"He began his career in 2008 at Erdemir Mining Company as a Mining R&D Engineer, part of Oyak Mining Metallurgy Group in Divriği, Turkey. Early on, he worked on geophysical gravity and magnetic iron ore exploration and learned Surpac software to create 3D solid models from drill data. He enhanced his skills in Geostatistics through training at Hacettepe University.\n",
"With this expertise, he created block models and conducted reserve classifications, improving the company’s cost-efficiency and profits. He played a key role in discovering new fields and developing existing reserves, contributing to significant financial gains.\n",
"In 2020, he joined Tosyalı Iron Steel Angola, a subsidiary of Tosyalı Holding, in Jamba, Angola. He continued reserve classifications using Datamine software and discovered new iron and gold fields. He also mentored junior engineers by providing Datamine training.\n",
"Software has always been his passion, starting in high school and continuing throughout his career. He pursued courses in web development, mobile app development, cybersecurity, and data science, eventually discovering his true passion for data science. He took a career break to intensively train in this field, and continues to learn and work on projects daily.\n",
"Key Achievements\n",
"Discovered iron ore deposits totaling more than 300 million tons across multiple sites.\n",
"Applied AI-driven approaches to his work, boosting efficiency.\n",
"Passionately mentored junior engineers, empowering them to grow and reach their full potential.\n",
"With all the knowledge and experience gained over 15 years, he is ready to create added value by applying it in both the mining and IT industries.\n",
"Download CV\n",
"GitHub\n",
"AI/ML\n",
"Cloud\n",
"Data\n",
"MLOps\n",
"Skills & Expertise\n",
"Data Analysis\n",
"Statistical Analysis\n",
"Data Visualization\n",
"CRM Analytics\n",
"Machine & Deep Learning\n",
"Machine Learning Models\n",
"Computer Vision\n",
"Natural Language Processing\n",
"Cloud & Infrastructure\n",
"AWS Services\n",
"MLOps\n",
"Data Engineering\n",
"Generative AI\n",
"Large Language Models\n",
"Prompt Engineering\n",
"AI Applications\n",
"Mine Research & Development\n",
"Mine Exploration\n",
"Geostatistics\n",
"Solid & Block Modeling\n",
"Technologies I Work With\n",
"Python\n",
"NumPy\n",
"Pandas\n",
"Scikit-learn\n",
"TensorFlow\n",
"PyTorch\n",
"PySpark\n",
"Power BI\n",
"ChatGPT\n",
"Claude\n",
"LangChain\n",
"HuggingFace\n",
"FastAPI\n",
"Streamlit\n",
"Gradio\n",
"PostgreSQL\n",
"MLflow\n",
"Docker\n",
"Kubernetes\n",
"Git\n",
"GitHub\n",
"Red Hat\n",
"Jenkins\n",
"AWS\n",
"Terraform\n",
"Hadoop\n",
"Kafka\n",
"Airflow\n",
"JavaScript\n",
"Node.js\n",
"Datamine\n",
"Surpac\n",
"Qgis\n",
"Featured Projects\n",
"Smile-Based Face Recognition Access Control System\n",
"A facial recognition application using AWS infrastructure that activates with your smile and grants\n",
" access to registered users. Features email notifications, entry logging, and optional ChatGPT\n",
" integration.\n",
"AWS\n",
"Terraform\n",
"Python\n",
"Face Recognition\n",
"Learn More\n",
"Voice2Image AI Generator\n",
"An innovative application that transforms voice into images using AI. Record your voice to generate\n",
" text via\n",
" OpenAI's Whisper, create images with DALL·E, and enhance results using Gemini 1.5 Pro for\n",
" regeneration.\n",
"OpenAI\n",
"DALL·E\n",
"Python\n",
"Gemini\n",
"Learn More\n",
"Chat with YouTube Video\n",
"A powerful application that allows you to interact with YouTube videos by converting them into text\n",
" and asking\n",
" questions about their content. Uses OpenAI's Whisper for speech-to-text, LangChain's RAG for Q&A, and\n",
" Gemini\n",
" Pro for chat.\n",
"OpenAI Whisper\n",
"LangChain\n",
"Gemini Pro\n",
"Streamlit\n",
"Learn More\n",
"Data Analyzer with LLM Agents\n",
"An intelligent application that analyzes CSV files using advanced language models. Features automatic\n",
" descriptive statistics, data visualization, and LLM-powered Q&A about datasets. Supports multiple\n",
" models like\n",
" Gemini, Claude, and GPT.\n",
"LangChain\n",
"Streamlit\n",
"Data Analysis\n",
"LLM Agents\n",
"Learn More\n",
"Evolution of Sentiment Analysis\n",
"A comprehensive exploration of NLP techniques from rule-based to transformer models, analyzing IMDB\n",
" reviews.\n",
" Features machine learning, deep learning (LSTM, CNN), and BERT implementations with detailed\n",
" performance comparisons.\n",
"NLP\n",
"BERT\n",
"Deep Learning\n",
"TensorFlow\n",
"Learn More\n",
"Fish Species Classification with ANN\n",
"An image classification project using Artificial Neural Networks to identify 9 different fish\n",
" species. Features\n",
" smart cropping, PCA dimensionality reduction, and K-means clustering for image preprocessing,\n",
" achieving 91%\n",
" accuracy.\n",
"TensorFlow\n",
"Computer Vision\n",
"Neural Networks\n",
"Image Processing\n",
"Learn More\n",
"Cardiovascular Disease Prediction\n",
"A machine learning model for predicting cardiovascular diseases using patient attributes. Features\n",
" MLflow for\n",
" model tracking, Gradio for UI, and FastAPI backend. Analyzes various health metrics including ECG\n",
" results,\n",
" blood pressure, and cholesterol levels.\n",
"MLflow\n",
"FastAPI\n",
"Gradio\n",
"Machine Learning\n",
"Learn More\n",
"Vegetable Image Classification\n",
"A deep learning project that classifies 15 different types of vegetables using transfer learning with\n",
" EfficientNet B0. Features a Gradio interface for easy interaction, PyTorch implementation, and high\n",
" accuracy\n",
" image recognition.\n",
"PyTorch\n",
"EfficientNet\n",
"Gradio\n",
"Transfer Learning\n",
"Learn More\n",
"CRM Analytics & Customer Segmentation\n",
"A comprehensive CRM analysis project featuring cohort analysis, customer lifetime value prediction\n",
" using\n",
" BG-NBD and Gamma-Gamma models, RFM analysis, and purchase propensity prediction. Includes customer\n",
" segmentation and targeted marketing strategies.\n",
"Customer Analytics\n",
"Machine Learning\n",
"RFM Analysis\n",
"CLTV Prediction\n",
"Learn More\n",
"Amazon Multi-Model Analysis\n",
"A comprehensive project combining sentiment analysis (LSTM with self-attention), image classification\n",
" (EfficientNetB0), and recommendation systems. Features transfer learning, BERT embeddings, and\n",
" FAISS/ChromaDB\n",
" for similarity search.\n",
"Deep Learning\n",
"BERT\n",
"AWS\n",
"TensorFlow\n",
"Learn More\n",
"Latest Blog Posts\n",
"Tracing the Evolution of Natural Language Processing Through Sentiment Analysis\n",
"An exploration of NLP's journey and its applications in sentiment\n",
" analysis...\n",
"Read on Medium\n",
"Building a Smile-Based Access Control System Using AWS\n",
"Let your smile be your password - A unique approach to access\n",
" control\n",
" using AWS services and facial recognition...\n",
"Read on Medium\n",
"Get in Touch\n",
"Interested in collaboration? Let's connect!\n",
"[email protected]\n",
"© 2025 Kanıt Vural. All rights reserved.\n"
]
}
],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"kanit = Website(\"https://kanitvural.com\")\n",
"print(kanit.title)\n",
"print(kanit.text)"
]
},
{
"cell_type": "markdown",
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
"metadata": {},
"source": [
"## Types of prompts\n",
"\n",
"You may know this already - but if not, you will get very familiar with it!\n",
"\n",
"Models like GPT4o have been trained to receive instructions in a particular way.\n",
"\n",
"They expect to receive:\n",
"\n",
"**A system prompt** that tells them what task they are performing and what tone they should use\n",
"\n",
"**A user prompt** -- the conversation starter that they should reply to"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
"source": [
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
"source": [
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
" user_prompt += website.text\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "47a22222",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You are looking at a website titled Kanit Vural | Data Scientist & ML Engineer\n",
"The contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\n",
"\n",
"Home\n",
"About\n",
"Skills\n",
"Projects\n",
"Blog\n",
"Contact\n",
"Kanıt Vural\n",
"|\n",
"Transforming data into actionable insights and building intelligent solutions\n",
"Get in Touch\n",
"About Me\n",
"Data Scientist & Mine Research & Development Engineer with expertise in AI-driven solutions\n",
"He began his career in 2008 at Erdemir Mining Company as a Mining R&D Engineer, part of Oyak Mining Metallurgy Group in Divriği, Turkey. Early on, he worked on geophysical gravity and magnetic iron ore exploration and learned Surpac software to create 3D solid models from drill data. He enhanced his skills in Geostatistics through training at Hacettepe University.\n",
"With this expertise, he created block models and conducted reserve classifications, improving the company’s cost-efficiency and profits. He played a key role in discovering new fields and developing existing reserves, contributing to significant financial gains.\n",
"In 2020, he joined Tosyalı Iron Steel Angola, a subsidiary of Tosyalı Holding, in Jamba, Angola. He continued reserve classifications using Datamine software and discovered new iron and gold fields. He also mentored junior engineers by providing Datamine training.\n",
"Software has always been his passion, starting in high school and continuing throughout his career. He pursued courses in web development, mobile app development, cybersecurity, and data science, eventually discovering his true passion for data science. He took a career break to intensively train in this field, and continues to learn and work on projects daily.\n",
"Key Achievements\n",
"Discovered iron ore deposits totaling more than 300 million tons across multiple sites.\n",
"Applied AI-driven approaches to his work, boosting efficiency.\n",
"Passionately mentored junior engineers, empowering them to grow and reach their full potential.\n",
"With all the knowledge and experience gained over 15 years, he is ready to create added value by applying it in both the mining and IT industries.\n",
"Download CV\n",
"GitHub\n",
"AI/ML\n",
"Cloud\n",
"Data\n",
"MLOps\n",
"Skills & Expertise\n",
"Data Analysis\n",
"Statistical Analysis\n",
"Data Visualization\n",
"CRM Analytics\n",
"Machine & Deep Learning\n",
"Machine Learning Models\n",
"Computer Vision\n",
"Natural Language Processing\n",
"Cloud & Infrastructure\n",
"AWS Services\n",
"MLOps\n",
"Data Engineering\n",
"Generative AI\n",
"Large Language Models\n",
"Prompt Engineering\n",
"AI Applications\n",
"Mine Research & Development\n",
"Mine Exploration\n",
"Geostatistics\n",
"Solid & Block Modeling\n",
"Technologies I Work With\n",
"Python\n",
"NumPy\n",
"Pandas\n",
"Scikit-learn\n",
"TensorFlow\n",
"PyTorch\n",
"PySpark\n",
"Power BI\n",
"ChatGPT\n",
"Claude\n",
"LangChain\n",
"HuggingFace\n",
"FastAPI\n",
"Streamlit\n",
"Gradio\n",
"PostgreSQL\n",
"MLflow\n",
"Docker\n",
"Kubernetes\n",
"Git\n",
"GitHub\n",
"Red Hat\n",
"Jenkins\n",
"AWS\n",
"Terraform\n",
"Hadoop\n",
"Kafka\n",
"Airflow\n",
"JavaScript\n",
"Node.js\n",
"Datamine\n",
"Surpac\n",
"Qgis\n",
"Featured Projects\n",
"Smile-Based Face Recognition Access Control System\n",
"A facial recognition application using AWS infrastructure that activates with your smile and grants\n",
" access to registered users. Features email notifications, entry logging, and optional ChatGPT\n",
" integration.\n",
"AWS\n",
"Terraform\n",
"Python\n",
"Face Recognition\n",
"Learn More\n",
"Voice2Image AI Generator\n",
"An innovative application that transforms voice into images using AI. Record your voice to generate\n",
" text via\n",
" OpenAI's Whisper, create images with DALL·E, and enhance results using Gemini 1.5 Pro for\n",
" regeneration.\n",
"OpenAI\n",
"DALL·E\n",
"Python\n",
"Gemini\n",
"Learn More\n",
"Chat with YouTube Video\n",
"A powerful application that allows you to interact with YouTube videos by converting them into text\n",
" and asking\n",
" questions about their content. Uses OpenAI's Whisper for speech-to-text, LangChain's RAG for Q&A, and\n",
" Gemini\n",
" Pro for chat.\n",
"OpenAI Whisper\n",
"LangChain\n",
"Gemini Pro\n",
"Streamlit\n",
"Learn More\n",
"Data Analyzer with LLM Agents\n",
"An intelligent application that analyzes CSV files using advanced language models. Features automatic\n",
" descriptive statistics, data visualization, and LLM-powered Q&A about datasets. Supports multiple\n",
" models like\n",
" Gemini, Claude, and GPT.\n",
"LangChain\n",
"Streamlit\n",
"Data Analysis\n",
"LLM Agents\n",
"Learn More\n",
"Evolution of Sentiment Analysis\n",
"A comprehensive exploration of NLP techniques from rule-based to transformer models, analyzing IMDB\n",
" reviews.\n",
" Features machine learning, deep learning (LSTM, CNN), and BERT implementations with detailed\n",
" performance comparisons.\n",
"NLP\n",
"BERT\n",
"Deep Learning\n",
"TensorFlow\n",
"Learn More\n",
"Fish Species Classification with ANN\n",
"An image classification project using Artificial Neural Networks to identify 9 different fish\n",
" species. Features\n",
" smart cropping, PCA dimensionality reduction, and K-means clustering for image preprocessing,\n",
" achieving 91%\n",
" accuracy.\n",
"TensorFlow\n",
"Computer Vision\n",
"Neural Networks\n",
"Image Processing\n",
"Learn More\n",
"Cardiovascular Disease Prediction\n",
"A machine learning model for predicting cardiovascular diseases using patient attributes. Features\n",
" MLflow for\n",
" model tracking, Gradio for UI, and FastAPI backend. Analyzes various health metrics including ECG\n",
" results,\n",
" blood pressure, and cholesterol levels.\n",
"MLflow\n",
"FastAPI\n",
"Gradio\n",
"Machine Learning\n",
"Learn More\n",
"Vegetable Image Classification\n",
"A deep learning project that classifies 15 different types of vegetables using transfer learning with\n",
" EfficientNet B0. Features a Gradio interface for easy interaction, PyTorch implementation, and high\n",
" accuracy\n",
" image recognition.\n",
"PyTorch\n",
"EfficientNet\n",
"Gradio\n",
"Transfer Learning\n",
"Learn More\n",
"CRM Analytics & Customer Segmentation\n",
"A comprehensive CRM analysis project featuring cohort analysis, customer lifetime value prediction\n",
" using\n",
" BG-NBD and Gamma-Gamma models, RFM analysis, and purchase propensity prediction. Includes customer\n",
" segmentation and targeted marketing strategies.\n",
"Customer Analytics\n",
"Machine Learning\n",
"RFM Analysis\n",
"CLTV Prediction\n",
"Learn More\n",
"Amazon Multi-Model Analysis\n",
"A comprehensive project combining sentiment analysis (LSTM with self-attention), image classification\n",
" (EfficientNetB0), and recommendation systems. Features transfer learning, BERT embeddings, and\n",
" FAISS/ChromaDB\n",
" for similarity search.\n",
"Deep Learning\n",
"BERT\n",
"AWS\n",
"TensorFlow\n",
"Learn More\n",
"Latest Blog Posts\n",
"Tracing the Evolution of Natural Language Processing Through Sentiment Analysis\n",
"An exploration of NLP's journey and its applications in sentiment\n",
" analysis...\n",
"Read on Medium\n",
"Building a Smile-Based Access Control System Using AWS\n",
"Let your smile be your password - A unique approach to access\n",
" control\n",
" using AWS services and facial recognition...\n",
"Read on Medium\n",
"Get in Touch\n",
"Interested in collaboration? Let's connect!\n",
"[email protected]\n",
"© 2025 Kanıt Vural. All rights reserved.\n"
]
}
],
"source": [
"print(user_prompt_for(kanit))"
]
},
{
"cell_type": "markdown",
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
"metadata": {},
"source": [
"## Messages\n",
"\n",
"The API from OpenAI expects to receive messages in a particular structure.\n",
"Many of the other APIs share this structure:\n",
"\n",
"```\n",
"[\n",
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
"]\n",
"\n",
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the might GPT (yet!)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hmm, let me put on my thinking cap for this one... *drumroll*... 2 + 2 equals 4! So exciting, right?\n"
]
}
],
"source": [
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-3.5-turbo\", messages=messages)\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47",
"metadata": {},
"source": [
"## And now let's build useful messages for GPT-4o-mini, using a function"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
"source": [
"# See how this function creates exactly the format above\n",
"\n",
"def messages_for(website):\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "26bf8bb9",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'role': 'system',\n",
" 'content': 'You are an assistant that analyzes the contents of a website and provides a short summary, ignoring text that might be navigation related. Respond in markdown.'},\n",
" {'role': 'user',\n",
" 'content': \"You are looking at a website titled Kanit Vural | Data Scientist & ML Engineer\\nThe contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\\n\\nHome\\nAbout\\nSkills\\nProjects\\nBlog\\nContact\\nKanıt Vural\\n|\\nTransforming data into actionable insights and building intelligent solutions\\nGet in Touch\\nAbout Me\\nData Scientist & Mine Research & Development Engineer with expertise in AI-driven solutions\\nHe began his career in 2008 at Erdemir Mining Company as a Mining R&D Engineer, part of Oyak Mining Metallurgy Group in Divriği, Turkey. Early on, he worked on geophysical gravity and magnetic iron ore exploration and learned Surpac software to create 3D solid models from drill data. He enhanced his skills in Geostatistics through training at Hacettepe University.\\nWith this expertise, he created block models and conducted reserve classifications, improving the company’s cost-efficiency and profits. He played a key role in discovering new fields and developing existing reserves, contributing to significant financial gains.\\nIn 2020, he joined Tosyalı Iron Steel Angola, a subsidiary of Tosyalı Holding, in Jamba, Angola. He continued reserve classifications using Datamine software and discovered new iron and gold fields. He also mentored junior engineers by providing Datamine training.\\nSoftware has always been his passion, starting in high school and continuing throughout his career. He pursued courses in web development, mobile app development, cybersecurity, and data science, eventually discovering his true passion for data science. He took a career break to intensively train in this field, and continues to learn and work on projects daily.\\nKey Achievements\\nDiscovered iron ore deposits totaling more than 300 million tons across multiple sites.\\nApplied AI-driven approaches to his work, boosting efficiency.\\nPassionately mentored junior engineers, empowering them to grow and reach their full potential.\\nWith all the knowledge and experience gained over 15 years, he is ready to create added value by applying it in both the mining and IT industries.\\nDownload CV\\nGitHub\\nAI/ML\\nCloud\\nData\\nMLOps\\nSkills & Expertise\\nData Analysis\\nStatistical Analysis\\nData Visualization\\nCRM Analytics\\nMachine & Deep Learning\\nMachine Learning Models\\nComputer Vision\\nNatural Language Processing\\nCloud & Infrastructure\\nAWS Services\\nMLOps\\nData Engineering\\nGenerative AI\\nLarge Language Models\\nPrompt Engineering\\nAI Applications\\nMine Research & Development\\nMine Exploration\\nGeostatistics\\nSolid & Block Modeling\\nTechnologies I Work With\\nPython\\nNumPy\\nPandas\\nScikit-learn\\nTensorFlow\\nPyTorch\\nPySpark\\nPower BI\\nChatGPT\\nClaude\\nLangChain\\nHuggingFace\\nFastAPI\\nStreamlit\\nGradio\\nPostgreSQL\\nMLflow\\nDocker\\nKubernetes\\nGit\\nGitHub\\nRed Hat\\nJenkins\\nAWS\\nTerraform\\nHadoop\\nKafka\\nAirflow\\nJavaScript\\nNode.js\\nDatamine\\nSurpac\\nQgis\\nFeatured Projects\\nSmile-Based Face Recognition Access Control System\\nA facial recognition application using AWS infrastructure that activates with your smile and grants\\n access to registered users. Features email notifications, entry logging, and optional ChatGPT\\n integration.\\nAWS\\nTerraform\\nPython\\nFace Recognition\\nLearn More\\nVoice2Image AI Generator\\nAn innovative application that transforms voice into images using AI. 
Record your voice to generate\\n text via\\n OpenAI's Whisper, create images with DALL·E, and enhance results using Gemini 1.5 Pro for\\n regeneration.\\nOpenAI\\nDALL·E\\nPython\\nGemini\\nLearn More\\nChat with YouTube Video\\nA powerful application that allows you to interact with YouTube videos by converting them into text\\n and asking\\n questions about their content. Uses OpenAI's Whisper for speech-to-text, LangChain's RAG for Q&A, and\\n Gemini\\n Pro for chat.\\nOpenAI Whisper\\nLangChain\\nGemini Pro\\nStreamlit\\nLearn More\\nData Analyzer with LLM Agents\\nAn intelligent application that analyzes CSV files using advanced language models. Features automatic\\n descriptive statistics, data visualization, and LLM-powered Q&A about datasets. Supports multiple\\n models like\\n Gemini, Claude, and GPT.\\nLangChain\\nStreamlit\\nData Analysis\\nLLM Agents\\nLearn More\\nEvolution of Sentiment Analysis\\nA comprehensive exploration of NLP techniques from rule-based to transformer models, analyzing IMDB\\n reviews.\\n Features machine learning, deep learning (LSTM, CNN), and BERT implementations with detailed\\n performance comparisons.\\nNLP\\nBERT\\nDeep Learning\\nTensorFlow\\nLearn More\\nFish Species Classification with ANN\\nAn image classification project using Artificial Neural Networks to identify 9 different fish\\n species. Features\\n smart cropping, PCA dimensionality reduction, and K-means clustering for image preprocessing,\\n achieving 91%\\n accuracy.\\nTensorFlow\\nComputer Vision\\nNeural Networks\\nImage Processing\\nLearn More\\nCardiovascular Disease Prediction\\nA machine learning model for predicting cardiovascular diseases using patient attributes. Features\\n MLflow for\\n model tracking, Gradio for UI, and FastAPI backend. Analyzes various health metrics including ECG\\n results,\\n blood pressure, and cholesterol levels.\\nMLflow\\nFastAPI\\nGradio\\nMachine Learning\\nLearn More\\nVegetable Image Classification\\nA deep learning project that classifies 15 different types of vegetables using transfer learning with\\n EfficientNet B0. Features a Gradio interface for easy interaction, PyTorch implementation, and high\\n accuracy\\n image recognition.\\nPyTorch\\nEfficientNet\\nGradio\\nTransfer Learning\\nLearn More\\nCRM Analytics & Customer Segmentation\\nA comprehensive CRM analysis project featuring cohort analysis, customer lifetime value prediction\\n using\\n BG-NBD and Gamma-Gamma models, RFM analysis, and purchase propensity prediction. Includes customer\\n segmentation and targeted marketing strategies.\\nCustomer Analytics\\nMachine Learning\\nRFM Analysis\\nCLTV Prediction\\nLearn More\\nAmazon Multi-Model Analysis\\nA comprehensive project combining sentiment analysis (LSTM with self-attention), image classification\\n (EfficientNetB0), and recommendation systems. Features transfer learning, BERT embeddings, and\\n FAISS/ChromaDB\\n for similarity search.\\nDeep Learning\\nBERT\\nAWS\\nTensorFlow\\nLearn More\\nLatest Blog Posts\\nTracing the Evolution of Natural Language Processing Through Sentiment Analysis\\nAn exploration of NLP's journey and its applications in sentiment\\n analysis...\\nRead on Medium\\nBuilding a Smile-Based Access Control System Using AWS\\nLet your smile be your password - A unique approach to access\\n control\\n using AWS services and facial recognition...\\nRead on Medium\\nGet in Touch\\nInterested in collaboration? Let's connect!\\n[email\\xa0protected]\\n© 2025 Kanıt Vural. All rights reserved.\"}]"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages_for(kanit)"
]
},
{
"cell_type": "markdown",
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
"metadata": {},
"source": [
"## Time to bring it together - the API for OpenAI is very simple!"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
"source": [
"# And now: call the OpenAI API. You will get very familiar with this!\n",
"\n",
"def summarize(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = messages_for(website)\n",
" )\n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"# Summary of Kanit Vural's Website\\n\\nKanit Vural is a Data Scientist and Machine Learning Engineer with over 15 years of experience, particularly in the mining industry. He began his career in 2008 at Erdemir Mining Company, focusing on geophysical exploration and geostatistics, ultimately leading to significant financial improvements through new field discoveries. Later, he joined Tosyalı Iron Steel Angola, continuing similar work and mentoring junior engineers.\\n\\n## Key Achievements:\\n- Discovered over 300 million tons of iron ore deposits.\\n- Applied AI-driven solutions to enhance operational efficiency.\\n- Provided mentorship and training in mining technologies.\\n\\n## Skills and Expertise:\\n- **Data Science**: Data analysis, machine learning, computer vision, and NLP.\\n- **Cloud Technologies**: AWS services, MLOps, and data engineering.\\n- **Software Development**: Proficient in Python and various frameworks and libraries for data science and machine learning.\\n\\n## Featured Projects:\\n1. **Smile-Based Face Recognition Access Control System**: Uses AWS to grant access based on user smiles.\\n2. **Voice2Image AI Generator**: Converts voice recordings into images using AI technologies.\\n3. **Data Analyzer with LLM Agents**: Analyzes CSV files using language models.\\n4. **Evolution of Sentiment Analysis**: Explores NLP techniques for analyzing reviews.\\n5. **Cardiovascular Disease Prediction**: Machine learning model to predict health outcomes based on patient data.\\n\\n## Latest Blog Posts:\\n- **Tracing the Evolution of Natural Language Processing Through Sentiment Analysis**: Discusses the development of NLP and its uses.\\n- **Building a Smile-Based Access Control System Using AWS**: Outlines the innovative approach to security using facial recognition.\\n\\nKanit Vural aims to leverage his knowledge across both the mining and IT sectors to create impactful solutions.\""
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"summarize(\"https://kanitvural.com\")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
"source": [
"# A function to display this nicely in the Jupyter output, using markdown\n",
"\n",
"def display_summary(url):\n",
" summary = summarize(url)\n",
" display(Markdown(summary))"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"## Summary of Kanit Vural's Website\n",
"\n",
"**Kanit Vural** is a Data Scientist and Machine Learning Engineer with over 15 years of experience, primarily in the mining and IT industries. His career began at Erdemir Mining Company, where he specialized in geophysical exploration and reserve classifications, contributing to substantial financial improvements. Vural has worked with various mining companies, including Tosyalı Iron Steel Angola, and has been integral in discovering significant mineral deposits.\n",
"\n",
"### Skills and Expertise\n",
"Vural possesses a diverse skill set that includes:\n",
"- **Data Science**: Data analysis, statistical analysis, machine learning, natural language processing (NLP), and machine learning model development.\n",
"- **Cloud & Infrastructure**: Proficient in AWS services and MLOps.\n",
"- **Technologies**: Experienced in programming languages and frameworks such as Python, TensorFlow, PyTorch, and various data engineering tools.\n",
"\n",
"### Featured Projects\n",
"1. **Smile-Based Face Recognition Access Control System**: An innovative access control application that uses facial recognition based on smiles.\n",
"2. **Voice2Image AI Generator**: Transforms voice input into images using a combination of OpenAI technologies.\n",
"3. **Chat with YouTube Video**: Converts YouTube videos into an interactive question-and-answer format.\n",
"4. **Data Analyzer with LLM Agents**: An application for analyzing CSV files using advanced language models.\n",
"5. **Evolution of Sentiment Analysis**: An exploration of NLP techniques for analyzing movie reviews.\n",
"6. **Fish Species Classification**: Uses artificial neural networks for classifying fish species based on images.\n",
"7. **Cardiovascular Disease Prediction**: A machine learning model to predict heart diseases using patient data.\n",
"8. **Vegetable Image Classification**: Classifies vegetable images using deep learning techniques.\n",
"9. **CRM Analytics & Customer Segmentation**: Focuses on customer behavior and targeted marketing strategies.\n",
"10. **Amazon Multi-Model Analysis**: Integrates sentiment analysis, image classification, and recommendation systems.\n",
"\n",
"### Blog\n",
"The latest blog posts tackle topics such as:\n",
"- The evolution of natural language processing through sentiment analysis.\n",
"- Building a smile-based access control system utilizing AWS services.\n",
"\n",
"For collaboration opportunities or more information, Vural encourages users to reach out through contact options provided on the site."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://kanitvural.com\")"
]
},
{
"cell_type": "markdown",
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624",
"metadata": {},
"source": [
"# Let's try more websites\n",
"\n",
"Note that this will only work on websites that can be scraped using this simplistic approach.\n",
"\n",
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n",
"\n",
"Also Websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n",
"\n",
"But many websites will work just fine!"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"# Summary of CNN Website\n",
"\n",
"The CNN website offers a comprehensive source for breaking news and diverse topics spanning across various categories including US, World, Politics, Business, Health, Entertainment, and Sports. It features live updates, videos, and a wide range of articles addressing current events, analyses, and in-depth reports on significant global issues. \n",
"\n",
"## Key Highlights\n",
"\n",
"- **Current Events**: Live updates on pressing stories such as wildfires in LA, the Israel-Hamas War, and the Ukraine-Russia War.\n",
"- **Politics**: Ongoing coverage of Trump's legal challenges, Biden's final military aid package for Ukraine, and notable political events.\n",
"- **Health & Science**: Insights into health studies and environmental concerns, including climate-related articles and global health issues.\n",
"- **Entertainment & Lifestyle**: Articles on celebrity news, fashion trends, and travel destinations.\n",
"\n",
"Additionally, the site encourages user feedback to enhance the online experience and offers contact options for further inquiries or technical issues. Overall, CNN serves as a pivotal resource for up-to-date news and analysis."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://cnn.com\")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"# Summary of Anthropic Website\n",
"\n",
"Anthropic is an AI safety and research company based in San Francisco, focused on developing reliable and beneficial AI systems. Their flagship product is Claude, with the latest version being Claude 3.5 Sonnet, which is now available for use. The company emphasizes the importance of AI safety and has an interdisciplinary team background in machine learning, physics, policy, and product development.\n",
"\n",
"## Recent Announcements\n",
"- **October 22, 2024:** Introduction of computer use, new models Claude 3.5 Sonnet and Claude 3.5 Haiku.\n",
"- **September 4, 2024:** Updates related to Claude for Enterprise.\n",
"- **March 8, 2023:** Shared core views on AI safety, focusing on the timing and methodology of AI implementation and its impacts.\n",
"\n",
"Users can leverage the capabilities of Claude through an API to create custom AI-powered applications. The site also highlights ongoing research efforts in AI alignment and safety practices. \n",
"\n",
"For potential employees, Anthropic lists open roles indicating a commitment to expanding their team."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://anthropic.com\")"
]
},
{
"cell_type": "markdown",
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business applications</h2>\n",
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
"\n",
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n",
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6",
"metadata": {},
"outputs": [],
"source": [
"# Step 1: Create your prompts\n",
"\n",
"system_prompt = \"something here\"\n",
"user_prompt = \"\"\"\n",
" Lots of text\n",
" Can be pasted here\n",
"\"\"\"\n",
"\n",
"# Step 2: Make the messages list\n",
"\n",
"messages = [] # fill this in\n",
"\n",
"# Step 3: Call OpenAI\n",
"\n",
"response =\n",
"\n",
"# Step 4: print the result\n",
"\n",
"print("
]
},
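{
"cell_type": "code",
"execution_count": null,
"id": "1c7e9b2a-6d4f-4b8e-9a3c-2f5d8e7a1b90",
"metadata": {},
"outputs": [],
"source": [
"# One possible take on the exercise above, sticking with the summarization theme:\n",
"# suggest a subject line for an email. This is just a sketch - the email below is made up,\n",
"# so paste in any text you like and tweak the prompts to taste.\n",
"\n",
"subject_system_prompt = \"You are an assistant that reads the body of an email \\\n",
"and suggests a short, informative subject line for it. Respond only with the subject line.\"\n",
"\n",
"sample_email = \"\"\"\n",
"Hi team,\n",
"\n",
"Just a reminder that the quarterly planning meeting has moved to Thursday at 2pm.\n",
"Please update your calendars and send me any agenda items by Wednesday lunchtime.\n",
"\n",
"Thanks,\n",
"Jordan\n",
"\"\"\"\n",
"\n",
"subject_messages = [\n",
"    {\"role\": \"system\", \"content\": subject_system_prompt},\n",
"    {\"role\": \"user\", \"content\": sample_email}\n",
"]\n",
"\n",
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=subject_messages)\n",
"print(response.choices[0].message.content)"
]
},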
{
"cell_type": "markdown",
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda",
"metadata": {},
"source": [
"## An extra exercise for those who enjoy web scraping\n",
"\n",
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)"
]
},
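{
"cell_type": "code",
"execution_count": null,
"id": "7f2a4c6e-9b1d-4e5a-8c3f-0d6b2a9e4c71",
"metadata": {},
"outputs": [],
"source": [
"# A rough sketch of a JavaScript-capable version of the Website class, using Selenium.\n",
"# This isn't the official solution (see community-contributions for student versions), and it\n",
"# assumes you have run `pip install selenium` and have Chrome installed; recent Selenium\n",
"# releases fetch a matching chromedriver for you automatically via Selenium Manager.\n",
"\n",
"from selenium import webdriver\n",
"from selenium.webdriver.chrome.options import Options\n",
"\n",
"class JSWebsite:\n",
"\n",
"    def __init__(self, url):\n",
"        \"\"\"\n",
"        Like Website, but renders the page in a headless Chrome browser first\n",
"        \"\"\"\n",
"        self.url = url\n",
"        options = Options()\n",
"        options.add_argument(\"--headless=new\")\n",
"        driver = webdriver.Chrome(options=options)\n",
"        try:\n",
"            driver.get(url)\n",
"            html = driver.page_source\n",
"        finally:\n",
"            driver.quit()\n",
"        soup = BeautifulSoup(html, \"html.parser\")\n",
"        self.title = soup.title.string if soup.title else \"No title found\"\n",
"        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
"            irrelevant.decompose()\n",
"        self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
"\n",
"# For example, you could then try:\n",
"# display(Markdown(openai.chat.completions.create(model=\"gpt-4o-mini\",\n",
"#     messages=messages_for(JSWebsite(\"https://openai.com\"))).choices[0].message.content))"
]
},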
{
"cell_type": "markdown",
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6",
"metadata": {},
"source": [
"# Sharing your code\n",
"\n",
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
"\n",
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
"\n",
"Here are good instructions courtesy of an AI friend: \n",
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}