{
"cells": [
{
"cell_type": "markdown",
"id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
"metadata": {},
"source": [
"# End of week 1 exercise\n",
"\n",
"To demonstrate your familiarity with OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
"and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c1070317-3ed9-4659-abe3-828943230e03",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"import os\n",
"import requests\n",
"import json\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
"metadata": {},
"outputs": [],
"source": [
"# constants\n",
"\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"MODEL_LLAMA = 'llama3.2'"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"API key looks good so far\n"
]
}
],
"source": [
"# set up environment\n",
"# Initialize and constants\n",
"\n",
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
" \n",
"MODEL = 'gpt-4o-mini'\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "38f13b72-eb43-4dbb-b80f-34f1625b6db8",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"\n",
"class Website:\n",
" \"\"\"\n",
" A utility class to represent a Website that we have scraped, now with links\n",
" \"\"\"\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url)\n",
" self.body = response.content\n",
" soup = BeautifulSoup(self.body, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" if soup.body:\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" else:\n",
" self.text = \"\"\n",
" links = [link.get('href') for link in soup.find_all('a')]\n",
" self.links = [link for link in links if link]\n",
"\n",
" def get_contents(self):\n",
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\""
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "1d853b19-28d7-49fe-a2af-b53b080b37bf",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['https://edwarddonner.com/',\n",
" 'https://edwarddonner.com/outsmart/',\n",
" 'https://edwarddonner.com/about-me-and-about-nebula/',\n",
" 'https://edwarddonner.com/posts/',\n",
" 'https://edwarddonner.com/',\n",
" 'https://news.ycombinator.com',\n",
" 'https://nebula.io/?utm_source=ed&utm_medium=referral',\n",
" 'https://www.prnewswire.com/news-releases/wynden-stark-group-acquires-nyc-venture-backed-tech-startup-untapt-301269512.html',\n",
" 'https://patents.google.com/patent/US20210049536A1/',\n",
" 'https://www.linkedin.com/in/eddonner/',\n",
" 'https://edwarddonner.com/2024/11/13/llm-engineering-resources/',\n",
" 'https://edwarddonner.com/2024/11/13/llm-engineering-resources/',\n",
" 'https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/',\n",
" 'https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/',\n",
" 'https://edwarddonner.com/2024/08/06/outsmart/',\n",
" 'https://edwarddonner.com/2024/08/06/outsmart/',\n",
" 'https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/',\n",
" 'https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/',\n",
" 'https://edwarddonner.com/',\n",
" 'https://edwarddonner.com/outsmart/',\n",
" 'https://edwarddonner.com/about-me-and-about-nebula/',\n",
" 'https://edwarddonner.com/posts/',\n",
" 'mailto:hello@mygroovydomain.com',\n",
" 'https://www.linkedin.com/in/eddonner/',\n",
" 'https://twitter.com/edwarddonner',\n",
" 'https://www.facebook.com/edward.donner.52']"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ed=Website(\"https://www.edwarddonner.com\")\n",
"ed.links"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "a64ccffe-06c5-49d9-ae06-067b6052f1ec",
"metadata": {},
"outputs": [],
"source": [
"#multi shots prmopting \n",
"#shot 1) you should respond in JSON... \n",
"#shot 2) you should also respond in multi colored text\n",
"\n",
"link_system_prompt = \"You are provided with a list of links found on a webpage. \\\n",
"You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n",
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n",
"link_system_prompt += \"You should respond in yellow text.\"\n",
"link_system_prompt += \"You should respond in JSON as in this example:\"\n",
"link_system_prompt += \"\"\"\n",
"{\n",
" \"links\": [\n",
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
" {\"type\": \"careers page\": \"url\": \"https://another.full.url/careers\"}\n",
" ]\n",
"}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "38c058d0-d326-40dd-9925-1644288865b1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You are provided with a list of links found on a webpage. You are able to decide which of the links would be most relevant to include in a brochure about the company, such as links to an About page, or a Company page, or Careers/Jobs pages.\n",
"You should respond in yellow text.You should respond in JSON as in this example:\n",
"{\n",
" \"links\": [\n",
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
" {\"type\": \"careers page\": \"url\": \"https://another.full.url/careers\"}\n",
" ]\n",
"}\n",
"\n"
]
}
],
"source": [
"print(link_system_prompt)"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "ea606951-a65a-4075-a473-58e4cddaf096",
"metadata": {},
"outputs": [],
"source": [
"def get_links_user_prompt(website):\n",
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n",
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n",
"Do not include Terms of Service, Privacy, email links.\\n\"\n",
" user_prompt += \"print output in yellow text\\n\"\n",
" user_prompt += \"Links (some might be relative links):\\n\"\n",
" user_prompt += \"\\n\".join(website.links)\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "96893ce7-865e-47e5-b637-2fab13305d5c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Here is the list of links on the website of https://www.edwarddonner.com - please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. Do not include Terms of Service, Privacy, email links.\n",
"print output in yellow text\n",
"Links (some might be relative links):\n",
"https://edwarddonner.com/\n",
"https://edwarddonner.com/outsmart/\n",
"https://edwarddonner.com/about-me-and-about-nebula/\n",
"https://edwarddonner.com/posts/\n",
"https://edwarddonner.com/\n",
"https://news.ycombinator.com\n",
"https://nebula.io/?utm_source=ed&utm_medium=referral\n",
"https://www.prnewswire.com/news-releases/wynden-stark-group-acquires-nyc-venture-backed-tech-startup-untapt-301269512.html\n",
"https://patents.google.com/patent/US20210049536A1/\n",
"https://www.linkedin.com/in/eddonner/\n",
"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\n",
"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\n",
"https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/\n",
"https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/\n",
"https://edwarddonner.com/2024/08/06/outsmart/\n",
"https://edwarddonner.com/2024/08/06/outsmart/\n",
"https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/\n",
"https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/\n",
"https://edwarddonner.com/\n",
"https://edwarddonner.com/outsmart/\n",
"https://edwarddonner.com/about-me-and-about-nebula/\n",
"https://edwarddonner.com/posts/\n",
"mailto:hello@mygroovydomain.com\n",
"https://www.linkedin.com/in/eddonner/\n",
"https://twitter.com/edwarddonner\n",
"https://www.facebook.com/edward.donner.52\n"
]
}
],
"source": [
"print(get_links_user_prompt(ed))"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "716b214d-497e-49fb-a0de-5bf4edb0f6bd",
"metadata": {},
"outputs": [],
"source": [
"def get_links(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": link_system_prompt},\n",
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n",
" ],\n",
" response_format={\"type\": \"json_object\"}\n",
" )\n",
" result = response.choices[0].message.content\n",
" return json.loads(result)"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "553b3d1c-4956-42d9-bd86-abaf764e3b5e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['https://edwarddonner.com/',\n",
" 'https://edwarddonner.com/outsmart/',\n",
" 'https://edwarddonner.com/about-me-and-about-nebula/',\n",
" 'https://edwarddonner.com/posts/',\n",
" 'https://edwarddonner.com/',\n",
" 'https://news.ycombinator.com',\n",
" 'https://nebula.io/?utm_source=ed&utm_medium=referral',\n",
" 'https://www.prnewswire.com/news-releases/wynden-stark-group-acquires-nyc-venture-backed-tech-startup-untapt-301269512.html',\n",
" 'https://patents.google.com/patent/US20210049536A1/',\n",
" 'https://www.linkedin.com/in/eddonner/',\n",
" 'https://edwarddonner.com/2024/11/13/llm-engineering-resources/',\n",
" 'https://edwarddonner.com/2024/11/13/llm-engineering-resources/',\n",
" 'https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/',\n",
" 'https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/',\n",
" 'https://edwarddonner.com/2024/08/06/outsmart/',\n",
" 'https://edwarddonner.com/2024/08/06/outsmart/',\n",
" 'https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/',\n",
" 'https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/',\n",
" 'https://edwarddonner.com/',\n",
" 'https://edwarddonner.com/outsmart/',\n",
" 'https://edwarddonner.com/about-me-and-about-nebula/',\n",
" 'https://edwarddonner.com/posts/',\n",
" 'mailto:hello@mygroovydomain.com',\n",
" 'https://www.linkedin.com/in/eddonner/',\n",
" 'https://twitter.com/edwarddonner',\n",
" 'https://www.facebook.com/edward.donner.52']"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"anthropic = Website(\"https://edwarddonner.com\")\n",
"anthropic.links"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "0d7b198d-a39f-4553-9432-0aaa8abbb0ec",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'links': [{'type': 'about page',\n",
" 'url': 'https://edwarddonner.com/about-me-and-about-nebula/'},\n",
" {'type': 'company page', 'url': 'https://edwarddonner.com/outsmart/'},\n",
" {'type': 'posts page', 'url': 'https://edwarddonner.com/posts/'}]}"
]
},
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_links(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": 41,
"id": "9d160a9e-a129-4d4f-be34-4ccc4e570c03",
"metadata": {},
"outputs": [],
"source": [
"#make a brouchore now\n",
"def get_all_details(url):\n",
" result = \"Landing page:\\n\"\n",
" result += Website(url).get_contents()\n",
" links = get_links(url)\n",
" print(\"Found links:\", links)\n",
" for link in links[\"links\"]:\n",
" result += f\"\\n\\n{link['type']}\\n\"\n",
" result += Website(link[\"url\"]).get_contents()\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "6ce3de06-f228-4f8c-ad65-522b25c1dcf5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://edwarddonner.com/about-me-and-about-nebula/'}]}\n",
"Landing page:\n",
"Webpage Title:\n",
"Home - Edward Donner\n",
"Webpage Contents:\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"November 13, 2024\n",
"Mastering AI and LLM Engineering – Resources\n",
"October 16, 2024\n",
"From Software Engineer to AI Data Scientist – resources\n",
"August 6, 2024\n",
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
"June 26, 2024\n",
"Choosing the Right LLM: Toolkit and Resources\n",
"Navigation\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n",
"\n",
"\n",
"\n",
"about page\n",
"Webpage Title:\n",
"About - Edward Donner\n",
"Webpage Contents:\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"About\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We help recruiters source, understand, engage and manage talent, using Generative AI and other forms of machine learning. Our\n",
"patented\n",
"model matches people with roles with greater accuracy and speed than previously imaginable — no keywords required. Take a look for yourself; it’s completely free to try.\n",
"Our long term goal is to help people discover their potential and pursue their reason for being, motivated by a concept called\n",
"Ikigai\n",
". We help people find roles where they will be most fulfilled and successful; as a result, we will raise the level of human prosperity. It sounds grandiose, but since 77% of people don’t consider themselves\n",
"inspired or engaged\n",
"at work, it’s completely within our reach.\n",
"I sometimes have to pinch myself. I’m incredibly lucky to be working in the field of AI at a time when it’s rewriting the boundaries of human and tech possibilities. I’m personally most drawn to applying AI to real world problems, and the specific problem of hiring has plagued me throughout my career. So I’m in the business of finding people their dream jobs — and conveniently, I’m in my dream job myself.\n",
"Before Nebula.io\n",
"You can trace the roots of Nebula.io to an AI startup I founded in 2013 called\n",
"untapt\n",
". We built talent marketplaces and data science software for recruitment firms. To start with, we specialized on tech roles in financial services, where there was a huge supply/demand gap.\n",
"We were selected to be part of a prestigious accelerator program – the\n",
"Accenture FinTech Innovation Lab\n",
"– and we were an\n",
"American Banker Top 20 Company To Watch\n",
". We were covered in\n",
"Fast Company\n",
",\n",
"Forbes\n",
"and\n",
"American Banker\n",
", and I was interviewed on the floor of the New York Stock Exchange and Nasdaq:\n",
"After a 20 year career in Financial Services, the most rewarding thing to me about untapt was that we were tackling\n",
"tangible real-world problems\n",
"faced by everyone. I loved that we had billboard ads in train stations and we got to speak to our end-users every day. One of my proudest moments: at an Amazon pitch event to people in tech, we were voted the ‘startup most likely to grow exponentially’.\n",
"And then, our path to exponential growth was accelerated suddenly and wonderfully. Our top client, recruitment powerhouse GQR, was interested in a deeper partnership. In 2021 untapt was\n",
"acquired\n",
"by GQR’s parent company, and shortly afterwards, Nebula.io was born.\n",
"My request to you\n",
"My New Year’s Resolution is to do a better job of networking. That’s where you come in. If any of this sounds interesting, please\n",
"connect with me\n",
"for a virtual coffee. Or even a real coffee, if you’re in NYC.\n",
"I have broad expertise that spans software engineering, data science, technology leadership, entrepreneurship, and anything made by Apple. My notable prowess in these areas is only surpassed by my inability to perform anything requiring hand/eye coordination. Do not be fooled by my final pictures: if you’re looking for someone to join your Amazing Race team, or your America’s Got Talent crew, or really anything that requires functioning outdoors.. you probably want anyone but me.\n",
"Loading Comments...\n",
"Write a Comment...\n",
"Email (Required)\n",
"Name (Required)\n",
"Website\n",
"\n",
"\n"
]
}
],
"source": [
"print(get_all_details(\"https://edwarddonner.com\"))"
]
},
{
"cell_type": "code",
"execution_count": 78,
"id": "7c019742-9586-4061-96ff-912af9802bb5",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
"and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
"Include details of company culture, customers and careers/jobs if you have the information.\\\n",
"Output should be displayed in mindmap diagram format.\\\n",
"Also output should be in Hindi lanaguage.\"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 79,
"id": "8152a14f-91e8-4abc-89c7-6062934611fa",
"metadata": {},
"outputs": [],
"source": [
"def get_brochure_user_prompt(company_name, url):\n",
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n",
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n",
" user_prompt += get_all_details(url)\n",
" user_prompt = user_prompt[:20_000] # Truncate if more than 20,000 characters\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": 80,
"id": "009ab063-eb0e-4d6e-b833-406277a0c70f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://edwarddonner.com/about-me-and-about-nebula/'}]}\n"
]
},
{
"data": {
"text/plain": [
"'You are looking at a company called: Edward Donner\\nHere are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\nLanding page:\\nWebpage Title:\\nHome - Edward Donner\\nWebpage Contents:\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nWell, hi there.\\nI’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\\nvery\\namateur) and losing myself in\\nHacker News\\n, nodding my head sagely to things I only half understand.\\nI’m the co-founder and CTO of\\nNebula.io\\n. We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\\nacquired in 2021\\n.\\nWe work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\\npatented\\nour matching model, and our award-winning platform has happy customers and tons of press coverage.\\nConnect\\nwith me for more!\\nNovember 13, 2024\\nMastering AI and LLM Engineering – Resources\\nOctober 16, 2024\\nFrom Software Engineer to AI Data Scientist – resources\\nAugust 6, 2024\\nOutsmart LLM Arena – a battle of diplomacy and deviousness\\nJune 26, 2024\\nChoosing the Right LLM: Toolkit and Resources\\nNavigation\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nGet in touch\\ned [at] edwarddonner [dot] com\\nwww.edwarddonner.com\\nFollow me\\nLinkedIn\\nTwitter\\nFacebook\\nSubscribe to newsletter\\nType your email…\\nSubscribe\\n\\n\\n\\nabout page\\nWebpage Title:\\nAbout - Edward Donner\\nWebpage Contents:\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nAbout\\nI’m the co-founder and CTO of\\nNebula.io\\n. We help recruiters source, understand, engage and manage talent, using Generative AI and other forms of machine learning. Our\\npatented\\nmodel matches people with roles with greater accuracy and speed than previously imaginable — no keywords required. Take a look for yourself; it’s completely free to try.\\nOur long term goal is to help people discover their potential and pursue their reason for being, motivated by a concept called\\nIkigai\\n. We help people find roles where they will be most fulfilled and successful; as a result, we will raise the level of human prosperity. It sounds grandiose, but since 77% of people don’t consider themselves\\ninspired or engaged\\nat work, it’s completely within our reach.\\nI sometimes have to pinch myself. I’m incredibly lucky to be working in the field of AI at a time when it’s rewriting the boundaries of human and tech possibilities. I’m personally most drawn to applying AI to real world problems, and the specific problem of hiring has plagued me throughout my career. So I’m in the business of finding people their dream jobs — and conveniently, I’m in my dream job myself.\\nBefore Nebula.io\\nYou can trace the roots of Nebula.io to an AI startup I founded in 2013 called\\nuntapt\\n. We built talent marketplaces and data science software for recruitment firms. 
To start with, we specialized on tech roles in financial services, where there was a huge supply/demand gap.\\nWe were selected to be part of a prestigious accelerator program – the\\nAccenture FinTech Innovation Lab\\n– and we were an\\nAmerican Banker Top 20 Company To Watch\\n. We were covered in\\nFast Company\\n,\\nForbes\\nand\\nAmerican Banker\\n, and I was interviewed on the floor of the New York Stock Exchange and Nasdaq:\\nAfter a 20 year career in Financial Services, the most rewarding thing to me about untapt was that we were tackling\\ntangible real-world problems\\nfaced by everyone. I loved that we had billboard ads in train stations and we got to speak to our end-users every day. One of my proudest moments: at an Amazon pitch event to people in tech, we were voted the ‘startup most likely to grow exponentially’.\\nAnd then, our path to exponential growth was accelerated suddenly and wonderfully. Our top client, recruitment powerhouse GQR, was interested in a deeper partnership. In 2021 untapt was\\nacquired\\nby GQR’s parent company, and shortly afterwards, Nebula.io was born.\\nMy request to you\\nMy New Year’s Resolution is to do a better job of networking. That’s where you come in. If any of this sounds interesting, please\\nconnect with me\\nfor a virtual coffee. Or even a real coffee, if you’re in NYC.\\nI have broad expertise that spans software engineering, data science, technology leadership, entrepreneurship, and anything made by Apple. My notable prowess in these areas is only surpassed by my inability to perform anything requiring hand/eye coordination. Do not be fooled by my final pictures: if you’re looking for someone to join your Amazing Race team, or your America’s Got Talent crew, or really anything that requires functioning outdoors.. you probably want anyone but me.\\nLoading Comments...\\nWrite a Comment...\\nEmail (Required)\\nName (Required)\\nWebsite\\n\\n'"
]
},
"execution_count": 80,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_brochure_user_prompt(\"Edward Donner\", \"https://edwarddonner.com\") "
]
},
{
"cell_type": "code",
"execution_count": 82,
"id": "bcccadb8-11d6-4ddd-a24f-6ac1c2101419",
"metadata": {},
"outputs": [],
"source": [
"def create_brochure(company_name, url):\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" )\n",
" result = response.choices[0].message.content\n",
" display(Markdown(result))"
]
},
{
"cell_type": "code",
"execution_count": 83,
"id": "3606b154-8d28-49ca-8028-c064f196fe20",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://anthropic.com/company'}, {'type': 'careers page', 'url': 'https://anthropic.com/careers'}, {'type': 'team page', 'url': 'https://anthropic.com/team'}]}\n"
]
},
{
"data": {
"text/markdown": [
"```markdown\n",
"# Anthropic कपनशर\n",
"\n",
"## कपननक\n",
"- **नम:** Anthropic\n",
"- **मलय:** सन फि\n",
"- **वर परकर:** AI सरक और अनन कपन\n",
"- **उदय:** विव म AI क सकमक परभव सिित करन \n",
"\n",
"## हम उतद\n",
"- **Claude 3.5 Sonnet:** Intelligent AI मडल\n",
"- **Claude API:** वयवसिए AI क शकि उपयग कर\n",
"- **Claude for Enterprise:** विष रप सवसिक उपयग किए डिइन कि गय\n",
"\n",
"## कपनि\n",
"- **उचच विस:** ईमनद, समझद और सहयग पर आधित ववरण।\n",
"- **एक बडम:** सभच सहयग और वि आदन-परदन।\n",
"- **सरलत पर जर:** जटिलत बचन और ववहिक समन पर धन कित करन।\n",
"- **सरक एक विन:** उतरक तकन उपयग और स करन।\n",
"\n",
"## गहक\n",
"- वििध इडसज: वयवसय, गर-लभकगठन और नगरिक समज समह\n",
"- **उपभडबक:** वयवसयकषमत बढिए Claude क इसल करनहक सकमक अनभव।\n",
"\n",
"## करियर और नकरि\n",
"- **ओपन रस:** अनन, इियरिग, नि, और सलन मििध पठभििए।\n",
"- **लभ और भत:**\n",
" - सय, दत और दि\n",
" - 22 सपह कगतन कि गय-पि अवकश\n",
" - परतिपरतन और शयर पज\n",
" - दरसथ कम कलकत\n",
"\n",
"## करियर मिल क\n",
"1. **रि सबमिट कर**\n",
"2. **चर कर:** आपकि और अनभव क।\n",
"3. **कशल आकलन:** तकन और रजनिक भििए परषण।\n",
"\n",
"## सपरक जनक\n",
"- **सशल मि:** टिटर, लिडइन, यब\n",
"- **वबसइट:** [Anthropic](https://www.anthropic.com)\n",
"\n",
"Anthropic मिल ह और एआई क भवियवरकित करनिए आइए!\n",
"```\n",
"\n",
"### मनसिक मनचिर\n",
"```plaintext\n",
"Anthropic\n",
"│\n",
"├── कपननक\n",
"│ ├── नम: Anthropic\n",
"│ ├── मलय: सन फि\n",
"│ └── उदय: AI क सकमक परभव\n",
"│\n",
"├── उतद\n",
"│ ├── Claude 3.5 Sonnet\n",
"│ ├── Claude API\n",
"│ └── Claude for Enterprise\n",
"│\n",
"├── कपनि\n",
"│ ├── उचच विस\n",
"│ ├── एक बडम\n",
"│ ├── सरलत पर जर\n",
"│ └── सरक एक विन\n",
"│\n",
"├── गहक\n",
"│ ├── वििध इडसज\n",
"│ └── उपभडबक\n",
"│\n",
"└── करियर और नकरि\n",
" ├── ओपन रस\n",
" ├── लभ और भत\n",
" └── शिल हरकि\n",
"``` \n",
"\n",
"इस बशर क लकय सित गहक, निशक और करमचि Anthropic कनकरदन करन और इसकभदयक और सहयमक सि उजगर करन।"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"create_brochure(\"Anthropic\", \"https://anthropic.com\")"
]
},
{
"cell_type": "code",
"execution_count": 84,
"id": "757b9f43-9c23-477b-9928-d6bbbf0394bb",
"metadata": {},
"outputs": [],
"source": [
"#stream brochure\n",
"def stream_brochure(company_name, url):\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)\n"
]
},
{
"cell_type": "code",
"execution_count": 89,
"id": "ea4408f4-dbdc-4168-a843-034b9620fb38",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.com/huggingface'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'enterprise page', 'url': 'https://huggingface.com/enterprise'}, {'type': 'pricing page', 'url': 'https://huggingface.com/pricing'}, {'type': 'blog page', 'url': 'https://huggingface.com/blog'}]}\n"
]
},
{
"data": {
"text/markdown": [
"\n",
"# Hugging Face Brochure\n",
"\n",
"## 🤖 कपन परिचय\n",
"- **नम:** Hugging Face\n",
"- **सप:** Hugging Face एक AI समय ह भविय किण कर रह। यह मशन लरिग समय किए एक ऐसटफम ह, जहडल, डट और अनरय पर सहयग कि सकत।\n",
"\n",
"## 🌟 हमिषत\n",
"- **मडल:** 400k+ मडलस क।\n",
"- **डस:** 100k+ डस उपलबध ह।\n",
"- **सस:** 150k+ एपिशनस करह।\n",
"- **तकनक:** ओपन-सस तकनक कथ मशन लरिग म सहयग बढ।\n",
"- **एनटरपइज समन:** 20 डलर परति उपयगकररतिह स।\n",
"\n",
"## 🌍 गहक\n",
"- **उदग:** 50,000+ सगठन Hugging Face क उपयग करत, जिनम Meta, Google, Microsoft ज बडम शिल ह।\n",
"\n",
"## 🌈 कपनि\n",
"- **लकय:** अच मशन लरिग ककतिक बनिशन।\n",
"- **समय:** सभ, स करन और सहयग करन आमित कि।\n",
"\n",
"## 💼 करियर\n",
"- **सवन:** सय और विक सतर पर कई रजगर अवसर उपलबध ह।\n",
"- **सि:** एक सहयमक ववरण जिसम नवर और सिक सखन पर जर दि।\n",
"\n",
"## 💬 सपरक\n",
"- **वबसइट:** [Hugging Face](https://huggingface.co)\n",
"- **सशल मि:** GitHub, Twitter, LinkedIn और Discord पर हम कर।\n",
"\n",
"## 📝 मय निरण\n",
"- **फ:** बिक उपयग किए हमत।\n",
"- **प:** $9 परतिह किए एडवस फचरस।\n",
"- **एटरपइज:** सरक और विष समरथन कथ सरवशठ पटफम।\n",
"\n",
"---\n",
"\n",
"### 👥 ज\n",
"आप हमम कि बन सकत और इस ऊर भर समय मिल ह सकत। यदि आप अच मशन लरिग विस करनि रखत, त **आज हिल ह!**\n",
"\n"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"stream_brochure(\"HuggingFace\", \"https://huggingface.com\")"
]
},
{
"cell_type": "code",
"execution_count": 90,
"id": "256765bb-a307-4fe0-9582-f03403a25e8d",
"metadata": {},
"outputs": [],
"source": [
"#define new system prompt for the question below\n",
"\n",
"system_prompt = \"Output should be in both English and Hindi lanaguage.\""
]
},
{
"cell_type": "code",
"execution_count": 99,
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
"metadata": {},
"outputs": [],
"source": [
"# here is the question; type over this to ask something new , i.e. user_prompt\n",
"\n",
"user_prompt = question = \"\"\"\n",
"Please explain what this code does and why:\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 100,
"id": "60ce7000-a4a5-4cce-a261-e75ef45063b4",
"metadata": {},
"outputs": [],
"source": [
"# Get gpt-4o-mini to answer, with streaming\n",
"#stream result\n",
"def stream_code_explanation():\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
" response = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response += chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": 101,
"id": "a7c79f95-6a4f-48b1-afed-42848e5d5975",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"This code snippet uses a generator expression combined with a set comprehension. Let's break down what it does:\n",
"\n",
"### Explanation:\n",
"\n",
"1. **`{book.get(\"author\") for book in books if book.get(\"author\")}`**: \n",
" - This part is a set comprehension. It iterates through each `book` in the `books` collection (which is assumed to be a list or similar iterable).\n",
" - For each `book`, it retrieves the value associated with the key `\"author\"` using the `book.get(\"author\")` method.\n",
" - The `if book.get(\"author\")` condition ensures that only books that have an author value (i.e., not `None` or an empty string) are considered.\n",
" - Since this is a set comprehension, it will only include unique authors in the resulting set.\n",
"\n",
"2. **`yield from`**: \n",
" - The `yield from` statement is used in a generator function to yield all values from the iterable that follows it. In this case, it yields all the unique authors obtained from the set comprehension.\n",
" \n",
"### Purpose and Use:\n",
"The purpose of this code is to create a generator that yields unique authors from a list of books, excluding any entries that lack an author.\n",
"\n",
"### Example Use Case:\n",
"Suppose you have a collection of book records, and you want to create a list of distinct authors for further processing or display. This code effectively filters out any entries without valid author names and provides a way to iterate over only unique authors.\n",
"\n",
"### Hindi Explanation:\n",
"\n",
"यह कड सिट एक जनरटर एकसपशन और सट कमिशन क उपयग करत। आइए इसिित कर:\n",
"\n",
"1. **`{book.get(\"author\") for book in books if book.get(\"author\")}`**: \n",
" - यह एक सट कमिशन ह। यह `books` सरह (ज एक स समन इटरबल समझ रह) म हर `book` किएiterate करत।\n",
" - हर `book` किए, यह `book.get(\"author\")` विि उपयग करक `\"author\"` कित मन पत करत।\n",
" - `if book.get(\"author\")` शरत यह सिित करतिवल वहतकिनकखक मन (य, `None` य एक खिग नह) पर विर किए।\n",
" - चि यह एक सट कमिशन ह, यह अदिय लखक परिट मिल कर।\n",
"\n",
"2. **`yield from`**: \n",
" - `yield from` वश जनरटर फशन म एक इटरबल स सभिलनिए परयग कि इसकद आत। इस ममल, यह सट कमिशन सत अदिय लखक सभरदन करत।\n",
"\n",
"### उदय और उपयग:\n",
"इस कड क उदय पतक अदिय लखक उतपनन करनिए एक जनरटर बन, जिसमिरवििहर रख गयिनमखक नह।\n",
"\n",
"### उदहरण उपयग ममल:\n",
"मन लिए कि आपकस पतक रिड क एक सरह ह, और आप वििट लखक एक स बनहति उस आगरकिरदरशन किए सि सक। यह कड कवल मय लखक नरदरित करक आपकवल अदिय लखक पर इटरट करन एक तररदन करत।"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"stream_code_explanation()"
]
},
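{
"cell_type": "markdown",
"id": "yield-from-demo-note",
"metadata": {},
"source": [
"A small, self-contained demo of the construct explained above. The `unique_authors` helper and the `sample_books` data are invented here purely for illustration; they are not part of the original question."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "yield-from-demo-code",
"metadata": {},
"outputs": [],
"source": [
"# Demo of the set comprehension + yield from pattern with made-up sample data\n",
"\n",
"def unique_authors(books):\n",
"    # The set comprehension collects each non-empty \"author\" value exactly once;\n",
"    # yield from then hands those values out one at a time.\n",
"    yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\n",
"sample_books = [\n",
"    {\"title\": \"Book A\", \"author\": \"Ada\"},\n",
"    {\"title\": \"Book B\", \"author\": \"Grace\"},\n",
"    {\"title\": \"Book C\"},                  # no author -> filtered out\n",
"    {\"title\": \"Book D\", \"author\": \"Ada\"}  # duplicate author -> appears only once\n",
"]\n",
"\n",
"print(list(unique_authors(sample_books)))"
]
},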
{
"cell_type": "code",
"execution_count": 124,
"id": "350eb627-23a3-4215-82be-e5b8f99280e2",
"metadata": {},
"outputs": [],
"source": [
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}"
]
},
{
"cell_type": "code",
"execution_count": 132,
"id": "652739e1-6edd-4b9d-8a44-8ea8191f45a4",
"metadata": {},
"outputs": [],
"source": [
"# Create a messages list using the same format that we used for OpenAI\n",
"messages = [\n",
" {\"role\": \"user\", \"content\": \"Please explain what this code does and why: yield from {book.get(\\\"author\\\") for book in books if book.get(\\\"author\\\")}\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 133,
"id": "0d1b895b-1ba2-4ea2-95a5-fe7a798a4157",
"metadata": {},
"outputs": [],
"source": [
"payload = {\n",
" \"model\": MODEL_LLAMA,\n",
" \"messages\": messages,\n",
" \"stream\": False\n",
" }"
]
},
{
"cell_type": "code",
"execution_count": 135,
"id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This line of code is written in Python and utilizes a feature called \"yield from\" which was introduced in Python 3.3.\n",
"\n",
"**What it does:**\n",
"\n",
"The `yield from` statement is used to yield results from another iterable. In this specific case, it's used to generate an iterator that yields the authors of books found in the `books` collection.\n",
"\n",
"Here's a breakdown:\n",
"\n",
"- `{book.get(\"author\") for book in books if book.get(\"author\")}`: This is a generator expression. It creates an iterable (an iterator) that generates values from each iteration of the loop.\n",
" - `for book in books`: Loops through each item (`book`) in the `books` collection.\n",
" - `if book.get(\"author\")`: Only includes items where the \"author\" key exists and its value is not empty or None. This is to filter out any dictionaries that don't have an author.\n",
" - `book.get(\"author\")`: Retrieves the value of the \"author\" key from each filtered dictionary.\n",
"\n",
"- `yield from ...`: Yields all values generated by the inner iterable (the generator expression).\n",
"\n",
"So, putting it together, this line of code generates a sequence of authors for books in the `books` collection, but only includes authors that are present in the books' data.\n",
"\n",
"**Why:**\n",
"\n",
"Using `yield from` is more efficient than using a loop with an append method when dealing with large datasets. Here's why:\n",
"\n",
"- Without `yield from`, you'd have to create a list and then yield each item one by one, which would be memory-intensive.\n",
"- With `yield from`, the function only yields values once they're generated, so it doesn't need to store them in memory beforehand.\n",
"\n",
"In this case, if the `books` collection is very large, using `yield from` would save a lot of memory and make your code more efficient.\n"
]
}
],
"source": [
"# Get Llama 3.2 to answer\n",
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
"print(response.json()['message']['content'])"
]
},
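{
"cell_type": "markdown",
"id": "ollama-streaming-note",
"metadata": {},
"source": [
"The call above returns the whole answer in one go. Below is a minimal streaming sketch, assuming the documented behaviour of Ollama's /api/chat endpoint: with `\"stream\": True` the server returns one JSON object per line, each carrying a partial `message` and a final `done` flag. The function name `stream_llama_answer` is invented for this example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ollama-streaming-code",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: stream the Llama 3.2 answer into a live Markdown display,\n",
"# mirroring the OpenAI streaming helper used earlier in the notebook.\n",
"\n",
"def stream_llama_answer():\n",
"    stream_payload = {\n",
"        \"model\": MODEL_LLAMA,\n",
"        \"messages\": messages,\n",
"        \"stream\": True\n",
"    }\n",
"    reply = \"\"\n",
"    display_handle = display(Markdown(\"\"), display_id=True)\n",
"    with requests.post(OLLAMA_API, json=stream_payload, headers=HEADERS, stream=True) as r:\n",
"        for line in r.iter_lines():\n",
"            if not line:\n",
"                continue\n",
"            chunk = json.loads(line)  # each non-empty line is assumed to be one JSON object\n",
"            reply += chunk.get(\"message\", {}).get(\"content\", \"\")\n",
"            update_display(Markdown(reply), display_id=display_handle.display_id)\n",
"            if chunk.get(\"done\"):\n",
"                break\n",
"\n",
"# stream_llama_answer()"
]
},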
{
"cell_type": "code",
"execution_count": null,
"id": "fd9b22dd-cc77-4f1d-80ca-da45fe122dab",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}