
{
"cells": [
{
"cell_type": "markdown",
"id": "a98030af-fcd1-4d63-a36e-38ba053498fa",
"metadata": {},
"source": [
"# A full business solution\n",
"\n",
"## Now we will take our project from Day 1 to the next level\n",
"\n",
"### BUSINESS CHALLENGE:\n",
"\n",
"Create a product that builds a Brochure for a company to be used for prospective clients, investors and potential recruits.\n",
"\n",
"We will be provided a company name and their primary website.\n",
"\n",
"See the end of this notebook for examples of real-world business applications.\n",
"\n",
"And remember: I'm always available if you have problems or ideas! Please do reach out."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d5b08506-dc8b-4443-9201-5f1848161363",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"# If these fail, please check you're running from an 'activated' environment with (llms) in the command prompt\n",
"\n",
"import os\n",
"import requests\n",
"import json\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "fc5d8880-f2ee-4c06-af16-ecbc0262af61",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"API key looks good so far\n"
]
}
],
"source": [
"# Initialization and constants\n",
"\n",
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
" \n",
"MODEL = 'gpt-4o-mini'\n",
"openai = OpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "106dd65e-90af-4ca8-86b6-23a41840645b",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
" \"\"\"\n",
" A utility class to represent a Website that we have scraped, now with links\n",
" \"\"\"\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url, headers=headers)\n",
" self.body = response.content\n",
" soup = BeautifulSoup(self.body, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" if soup.body:\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" else:\n",
" self.text = \"\"\n",
" links = [link.get('href') for link in soup.find_all('a')]\n",
" self.links = [link for link in links if link]\n",
"\n",
" def get_contents(self):\n",
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\""
]
},
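The `Website` class above leans on BeautifulSoup for the parsing. As a point of comparison, here is a minimal stdlib-only sketch of the same href-collection step — the `LinkCollector` name is illustrative, not part of the project:

```python
# Stdlib-only sketch of the href collection the Website class performs.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag fed to the parser."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:  # skip anchors without an href, as the class does
                self.links.append(href)

collector = LinkCollector()
collector.feed('<a href="/about">About</a><a name="top">no href</a>')
print(collector.links)  # ['/about']
```

BeautifulSoup is still the better tool for messy real-world HTML; this just shows the logic is small.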
{
"cell_type": "code",
"execution_count": 5,
"id": "e30d8128-933b-44cc-81c8-ab4c9d86589a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Webpage Title:\n",
"Home - Edward Donner\n",
"Webpage Contents:\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"November 13, 2024\n",
"Mastering AI and LLM Engineering – Resources\n",
"October 16, 2024\n",
"From Software Engineer to AI Data Scientist – resources\n",
"August 6, 2024\n",
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
"June 26, 2024\n",
"Choosing the Right LLM: Toolkit and Resources\n",
"Navigation\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n",
"\n",
"\n",
"['https://edwarddonner.com/', 'https://edwarddonner.com/outsmart/', 'https://edwarddonner.com/about-me-and-about-nebula/', 'https://edwarddonner.com/posts/', 'https://edwarddonner.com/', 'https://news.ycombinator.com', 'https://nebula.io/?utm_source=ed&utm_medium=referral', 'https://www.prnewswire.com/news-releases/wynden-stark-group-acquires-nyc-venture-backed-tech-startup-untapt-301269512.html', 'https://patents.google.com/patent/US20210049536A1/', 'https://www.linkedin.com/in/eddonner/', 'https://edwarddonner.com/2024/11/13/llm-engineering-resources/', 'https://edwarddonner.com/2024/11/13/llm-engineering-resources/', 'https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/', 'https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/', 'https://edwarddonner.com/2024/08/06/outsmart/', 'https://edwarddonner.com/2024/08/06/outsmart/', 'https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/', 'https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/', 'https://edwarddonner.com/', 'https://edwarddonner.com/outsmart/', 'https://edwarddonner.com/about-me-and-about-nebula/', 'https://edwarddonner.com/posts/', 'mailto:hello@mygroovydomain.com', 'https://www.linkedin.com/in/eddonner/', 'https://twitter.com/edwarddonner', 'https://www.facebook.com/edward.donner.52']\n"
]
}
],
"source": [
"# The result for my LinkedIn page is empty - server-side rendering? JavaScript? Security? No links?\n",
"# A potential solution is discussed in the previous lecture\n",
"# yifan = Website(\"https://www.linkedin.com/in/yifan-wei-a1576882\")\n",
"# yifan.links\n",
"\n",
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.get_contents())\n",
"print(ed.links)"
]
},
{
"cell_type": "markdown",
"id": "1771af9c-717a-4fca-bbbe-8a95893312c3",
"metadata": {},
"source": [
"## First step: Have GPT-4o-mini figure out which links are relevant\n",
"\n",
"### Use a call to gpt-4o-mini to read the links on a webpage, and respond in structured JSON. \n",
"It should decide which links are relevant, and replace relative links such as \"/about\" with \"https://company.com/about\". \n",
"We will use \"one-shot prompting\" in which we provide an example of how it should respond in the prompt.\n",
"\n",
"This is an excellent use case for an LLM, because it requires nuanced understanding. Imagine trying to code this without LLMs by parsing and analyzing the webpage - it would be very hard!\n",
"\n",
"Sidenote: there is a more advanced technique called \"Structured Outputs\" in which we require the model to respond according to a spec. We cover this technique in Week 8 during our autonomous Agentic AI project."
]
},
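Before handing relative links to the model, it's worth knowing how they resolve. A quick sketch using the standard library's `urllib.parse.urljoin` (the base URL here is made up for illustration):

```python
# urljoin resolves relative links against a base URL; absolute URLs pass through.
from urllib.parse import urljoin

base = "https://company.com/products/"
print(urljoin(base, "/about"))    # https://company.com/about
print(urljoin(base, "careers"))   # https://company.com/products/careers
print(urljoin(base, "https://other.example/x"))  # https://other.example/x
```

In this project we ask the LLM to do this resolution for us, but `urljoin` is a handy fallback if you ever want to do it deterministically.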
{
"cell_type": "code",
"execution_count": 6,
"id": "6957b079-0d96-45f7-a26a-3487510e9b35",
"metadata": {},
"outputs": [],
"source": [
"link_system_prompt = \"You are provided with a list of links found on a webpage. \\\n",
"You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n",
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n",
"link_system_prompt += \"You should respond in JSON as in this example:\"\n",
"link_system_prompt += \"\"\"\n",
"{\n",
" \"links\": [\n",
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
"        {\"type\": \"careers page\", \"url\": \"https://another.full.url/careers\"}\n",
" ]\n",
"}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "b97e4068-97ed-4120-beae-c42105e4d59a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You are provided with a list of links found on a webpage. You are able to decide which of the links would be most relevant to include in a brochure about the company, such as links to an About page, or a Company page, or Careers/Jobs pages.\n",
"You should respond in JSON as in this example:\n",
"{\n",
" \"links\": [\n",
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
"        {\"type\": \"careers page\", \"url\": \"https://another.full.url/careers\"}\n",
" ]\n",
"}\n",
"\n"
]
}
],
"source": [
"print(link_system_prompt)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "8e1f601b-2eaf-499d-b6b8-c99050c9d6b3",
"metadata": {},
"outputs": [],
"source": [
"def get_links_user_prompt(website):\n",
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n",
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n",
"Do not include Terms of Service, Privacy, email links.\\n\"\n",
" user_prompt += \"Links (some might be relative links):\\n\"\n",
" user_prompt += \"\\n\".join(website.links)\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "6bcbfa78-6395-4685-b92c-22d592050fd7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Here is the list of links on the website of https://edwarddonner.com - please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. Do not include Terms of Service, Privacy, email links.\n",
"Links (some might be relative links):\n",
"https://edwarddonner.com/\n",
"https://edwarddonner.com/outsmart/\n",
"https://edwarddonner.com/about-me-and-about-nebula/\n",
"https://edwarddonner.com/posts/\n",
"https://edwarddonner.com/\n",
"https://news.ycombinator.com\n",
"https://nebula.io/?utm_source=ed&utm_medium=referral\n",
"https://www.prnewswire.com/news-releases/wynden-stark-group-acquires-nyc-venture-backed-tech-startup-untapt-301269512.html\n",
"https://patents.google.com/patent/US20210049536A1/\n",
"https://www.linkedin.com/in/eddonner/\n",
"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\n",
"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\n",
"https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/\n",
"https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/\n",
"https://edwarddonner.com/2024/08/06/outsmart/\n",
"https://edwarddonner.com/2024/08/06/outsmart/\n",
"https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/\n",
"https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/\n",
"https://edwarddonner.com/\n",
"https://edwarddonner.com/outsmart/\n",
"https://edwarddonner.com/about-me-and-about-nebula/\n",
"https://edwarddonner.com/posts/\n",
"mailto:hello@mygroovydomain.com\n",
"https://www.linkedin.com/in/eddonner/\n",
"https://twitter.com/edwarddonner\n",
"https://www.facebook.com/edward.donner.52\n"
]
}
],
"source": [
"print(get_links_user_prompt(ed))"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "a29aca19-ca13-471c-a4b4-5abbfa813f69",
"metadata": {},
"outputs": [],
"source": [
"def get_links(url):\n",
" website = Website(url)\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": link_system_prompt},\n",
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n",
" ],\n",
" response_format={\"type\": \"json_object\"}\n",
" )\n",
" result = response.choices[0].message.content\n",
" return json.loads(result)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "74a827a0-2782-4ae5-b210-4a242a8b4cc2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['/',\n",
" '/models',\n",
" '/datasets',\n",
" '/spaces',\n",
" '/posts',\n",
" '/docs',\n",
" '/enterprise',\n",
" '/pricing',\n",
" '/login',\n",
" '/join',\n",
" '/meta-llama/Llama-3.3-70B-Instruct',\n",
" '/Datou1111/shou_xin',\n",
" '/tencent/HunyuanVideo',\n",
" '/black-forest-labs/FLUX.1-dev',\n",
" '/CohereForAI/c4ai-command-r7b-12-2024',\n",
" '/models',\n",
" '/spaces/JeffreyXiang/TRELLIS',\n",
" '/spaces/multimodalart/flux-style-shaping',\n",
" '/spaces/Kwai-Kolors/Kolors-Virtual-Try-On',\n",
" '/spaces/lllyasviel/iclight-v2',\n",
" '/spaces/ginipick/FLUXllama',\n",
" '/spaces',\n",
" '/datasets/HuggingFaceFW/fineweb-2',\n",
" '/datasets/fka/awesome-chatgpt-prompts',\n",
" '/datasets/CohereForAI/Global-MMLU',\n",
" '/datasets/O1-OPEN/OpenO1-SFT',\n",
" '/datasets/aiqtech/kolaw',\n",
" '/datasets',\n",
" '/join',\n",
" '/pricing#endpoints',\n",
" '/pricing#spaces',\n",
" '/pricing',\n",
" '/enterprise',\n",
" '/enterprise',\n",
" '/enterprise',\n",
" '/enterprise',\n",
" '/enterprise',\n",
" '/enterprise',\n",
" '/enterprise',\n",
" '/allenai',\n",
" '/facebook',\n",
" '/amazon',\n",
" '/google',\n",
" '/Intel',\n",
" '/microsoft',\n",
" '/grammarly',\n",
" '/Writer',\n",
" '/docs/transformers',\n",
" '/docs/diffusers',\n",
" '/docs/safetensors',\n",
" '/docs/huggingface_hub',\n",
" '/docs/tokenizers',\n",
" '/docs/peft',\n",
" '/docs/transformers.js',\n",
" '/docs/timm',\n",
" '/docs/trl',\n",
" '/docs/datasets',\n",
" '/docs/text-generation-inference',\n",
" '/docs/accelerate',\n",
" '/models',\n",
" '/datasets',\n",
" '/spaces',\n",
" '/tasks',\n",
" 'https://ui.endpoints.huggingface.co',\n",
" '/chat',\n",
" '/huggingface',\n",
" '/brand',\n",
" '/terms-of-service',\n",
" '/privacy',\n",
" 'https://apply.workable.com/huggingface/',\n",
" 'mailto:press@huggingface.co',\n",
" '/learn',\n",
" '/docs',\n",
" '/blog',\n",
" 'https://discuss.huggingface.co',\n",
" 'https://status.huggingface.co/',\n",
" 'https://github.com/huggingface',\n",
" 'https://twitter.com/huggingface',\n",
" 'https://www.linkedin.com/company/huggingface/',\n",
" '/join/discord']"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Anthropic has made their site harder to scrape, so I'm using Hugging Face instead.\n",
"\n",
"huggingface = Website(\"https://huggingface.co\")\n",
"huggingface.links"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "d3d583e2-dcc4-40cc-9b28-1e8dbf402924",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'links': [{'type': 'about page', 'url': 'https://huggingface.co'},\n",
" {'type': 'enterprise page', 'url': 'https://huggingface.co/enterprise'},\n",
" {'type': 'pricing page', 'url': 'https://huggingface.co/pricing'},\n",
" {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'},\n",
" {'type': 'join page', 'url': 'https://huggingface.co/join'},\n",
" {'type': 'blog page', 'url': 'https://huggingface.co/blog'},\n",
" {'type': 'community discussion', 'url': 'https://discuss.huggingface.co'},\n",
" {'type': 'GitHub page', 'url': 'https://github.com/huggingface'},\n",
" {'type': 'Twitter page', 'url': 'https://twitter.com/huggingface'},\n",
" {'type': 'LinkedIn page',\n",
" 'url': 'https://www.linkedin.com/company/huggingface/'}]}"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_links(\"https://huggingface.co\")"
]
},
{
"cell_type": "markdown",
"id": "0d74128e-dfb6-47ec-9549-288b621c838c",
"metadata": {},
"source": [
"## Second step: make the brochure!\n",
"\n",
"Assemble all the details into another prompt to GPT-4o-mini"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "85a5b6e2-e7ef-44a9-bc7f-59ede71037b5",
"metadata": {},
"outputs": [],
"source": [
"def get_all_details(url):\n",
" result = \"Landing page:\\n\"\n",
" result += Website(url).get_contents()\n",
" links = get_links(url)\n",
" print(\"Found links:\", links)\n",
" for link in links[\"links\"]:\n",
" result += f\"\\n\\n{link['type']}\\n\"\n",
" result += Website(link[\"url\"]).get_contents()\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "5099bd14-076d-4745-baf3-dac08d8e5ab2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.co/huggingface'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'enterprise page', 'url': 'https://huggingface.co/enterprise'}, {'type': 'pricing page', 'url': 'https://huggingface.co/pricing'}, {'type': 'blog page', 'url': 'https://huggingface.co/blog'}, {'type': 'documentation page', 'url': 'https://huggingface.co/docs'}]}\n",
"Landing page:\n",
"Webpage Title:\n",
"Hugging Face – The AI community building the future.\n",
"Webpage Contents:\n",
"Hugging Face\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Posts\n",
"Docs\n",
"Enterprise\n",
"Pricing\n",
"Log In\n",
"Sign Up\n",
"The AI community building the future.\n",
"The platform where the machine learning community collaborates on models, datasets, and applications.\n",
"Trending on\n",
"this week\n",
"Models\n",
"meta-llama/Llama-3.3-70B-Instruct\n",
"Updated\n",
"5 days ago\n",
"•\n",
"147k\n",
"•\n",
"1.03k\n",
"Datou1111/shou_xin\n",
"Updated\n",
"7 days ago\n",
"•\n",
"15.3k\n",
"•\n",
"411\n",
"tencent/HunyuanVideo\n",
"Updated\n",
"8 days ago\n",
"•\n",
"4.39k\n",
"•\n",
"1.04k\n",
"black-forest-labs/FLUX.1-dev\n",
"Updated\n",
"Aug 16\n",
"•\n",
"1.36M\n",
"•\n",
"7.28k\n",
"CohereForAI/c4ai-command-r7b-12-2024\n",
"Updated\n",
"1 day ago\n",
"•\n",
"1.2k\n",
"•\n",
"185\n",
"Browse 400k+ models\n",
"Spaces\n",
"Running\n",
"on\n",
"Zero\n",
"1.35k\n",
"🏢\n",
"TRELLIS\n",
"Scalable and Versatile 3D Generation from images\n",
"Running\n",
"on\n",
"L40S\n",
"296\n",
"🚀\n",
"Flux Style Shaping\n",
"Optical illusions and style transfer with FLUX\n",
"Running\n",
"on\n",
"CPU Upgrade\n",
"5.98k\n",
"👕\n",
"Kolors Virtual Try-On\n",
"Running\n",
"on\n",
"Zero\n",
"841\n",
"📈\n",
"IC Light V2\n",
"Running\n",
"on\n",
"Zero\n",
"319\n",
"🦀🏆\n",
"FLUXllama\n",
"FLUX 4-bit Quantization(just 8GB VRAM)\n",
"Browse 150k+ applications\n",
"Datasets\n",
"HuggingFaceFW/fineweb-2\n",
"Updated\n",
"7 days ago\n",
"•\n",
"42.5k\n",
"•\n",
"302\n",
"fka/awesome-chatgpt-prompts\n",
"Updated\n",
"Sep 3\n",
"•\n",
"7k\n",
"•\n",
"6.53k\n",
"CohereForAI/Global-MMLU\n",
"Updated\n",
"3 days ago\n",
"•\n",
"6.77k\n",
"•\n",
"88\n",
"O1-OPEN/OpenO1-SFT\n",
"Updated\n",
"24 days ago\n",
"•\n",
"1.44k\n",
"•\n",
"185\n",
"aiqtech/kolaw\n",
"Updated\n",
"Apr 26\n",
"•\n",
"102\n",
"•\n",
"42\n",
"Browse 100k+ datasets\n",
"The Home of Machine Learning\n",
"Create, discover and collaborate on ML better.\n",
"The collaboration platform\n",
"Host and collaborate on unlimited public models, datasets and applications.\n",
"Move faster\n",
"With the HF Open source stack.\n",
"Explore all modalities\n",
"Text, image, video, audio or even 3D.\n",
"Build your portfolio\n",
"Share your work with the world and build your ML profile.\n",
"Sign Up\n",
"Accelerate your ML\n",
"We provide paid Compute and Enterprise solutions.\n",
"Compute\n",
"Deploy on optimized\n",
"Inference Endpoints\n",
"or update your\n",
"Spaces applications\n",
"to a GPU in a few clicks.\n",
"View pricing\n",
"Starting at $0.60/hour for GPU\n",
"Enterprise\n",
"Give your team the most advanced platform to build AI with enterprise-grade security, access controls and\n",
"\t\t\tdedicated support.\n",
"Getting started\n",
"Starting at $20/user/month\n",
"Single Sign-On\n",
"Regions\n",
"Priority Support\n",
"Audit Logs\n",
"Resource Groups\n",
"Private Datasets Viewer\n",
"More than 50,000 organizations are using Hugging Face\n",
"Ai2\n",
"Enterprise\n",
"non-profit\n",
"•\n",
"366 models\n",
"•\n",
"1.72k followers\n",
"AI at Meta\n",
"Enterprise\n",
"company\n",
"•\n",
"2.05k models\n",
"•\n",
"3.76k followers\n",
"Amazon Web Services\n",
"company\n",
"•\n",
"21 models\n",
"•\n",
"2.42k followers\n",
"Google\n",
"company\n",
"•\n",
"911 models\n",
"•\n",
"5.5k followers\n",
"Intel\n",
"company\n",
"•\n",
"217 models\n",
"•\n",
"2.05k followers\n",
"Microsoft\n",
"company\n",
"•\n",
"352 models\n",
"•\n",
"6.13k followers\n",
"Grammarly\n",
"company\n",
"•\n",
"10 models\n",
"•\n",
"98 followers\n",
"Writer\n",
"Enterprise\n",
"company\n",
"•\n",
"16 models\n",
"•\n",
"180 followers\n",
"Our Open Source\n",
"We are building the foundation of ML tooling with the community.\n",
"Transformers\n",
"136,317\n",
"State-of-the-art ML for Pytorch, TensorFlow, and JAX.\n",
"Diffusers\n",
"26,646\n",
"State-of-the-art diffusion models for image and audio generation in PyTorch.\n",
"Safetensors\n",
"2,954\n",
"Simple, safe way to store and distribute neural networks weights safely and quickly.\n",
"Hub Python Library\n",
"2,165\n",
"Client library for the HF Hub: manage repositories from your Python runtime.\n",
"Tokenizers\n",
"9,153\n",
"Fast tokenizers, optimized for both research and production.\n",
"PEFT\n",
"16,713\n",
"Parameter efficient finetuning methods for large models.\n",
"Transformers.js\n",
"12,349\n",
"State-of-the-art Machine Learning for the web. Run Transformers directly in your browser, with no need for a server.\n",
"timm\n",
"32,608\n",
"State-of-the-art computer vision models, layers, optimizers, training/evaluation, and utilities.\n",
"TRL\n",
"10,312\n",
"Train transformer language models with reinforcement learning.\n",
"Datasets\n",
"19,354\n",
"Access and share datasets for computer vision, audio, and NLP tasks.\n",
"Text Generation Inference\n",
"9,451\n",
"Toolkit to serve Large Language Models.\n",
"Accelerate\n",
"8,054\n",
"Easily train and use PyTorch models with multi-GPU, TPU, mixed-precision.\n",
"Website\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Tasks\n",
"Inference Endpoints\n",
"HuggingChat\n",
"Company\n",
"About\n",
"Brand assets\n",
"Terms of service\n",
"Privacy\n",
"Jobs\n",
"Press\n",
"Resources\n",
"Learn\n",
"Documentation\n",
"Blog\n",
"Forum\n",
"Service Status\n",
"Social\n",
"GitHub\n",
"Twitter\n",
"LinkedIn\n",
"Discord\n",
"\n",
"\n",
"\n",
"about page\n",
"Webpage Title:\n",
"huggingface (Hugging Face)\n",
"Webpage Contents:\n",
"Hugging Face\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Posts\n",
"Docs\n",
"Enterprise\n",
"Pricing\n",
"Log In\n",
"Sign Up\n",
"Hugging Face\n",
"Enterprise\n",
"company\n",
"Verified\n",
"https://huggingface.co\n",
"huggingface\n",
"huggingface\n",
"Follow\n",
"7,721\n",
"AI & ML interests\n",
"The AI community building the future.\n",
"Team members\n",
"224\n",
"+190\n",
"+177\n",
"+156\n",
"+146\n",
"+126\n",
"Organization Card\n",
"Community\n",
"About org cards\n",
"👋 Hi!\n",
"We are on a mission to democratize\n",
"good\n",
"machine learning, one commit at a time.\n",
"If that sounds like something you should be doing, why don't you\n",
"join us\n",
"!\n",
"For press enquiries, you can\n",
"✉ contact our team here\n",
".\n",
"Collections\n",
"1\n",
"DistilBERT release\n",
"Original DistilBERT model, checkpoints obtained from using teacher-student learning from the original BERT checkpoints.\n",
"distilbert/distilbert-base-cased\n",
"Fill-Mask\n",
"•\n",
"Updated\n",
"May 6\n",
"•\n",
"388k\n",
"•\n",
"35\n",
"distilbert/distilbert-base-uncased\n",
"Fill-Mask\n",
"•\n",
"Updated\n",
"May 6\n",
"•\n",
"15M\n",
"•\n",
"571\n",
"distilbert/distilbert-base-multilingual-cased\n",
"Fill-Mask\n",
"•\n",
"Updated\n",
"May 6\n",
"•\n",
"500k\n",
"•\n",
"147\n",
"distilbert/distilbert-base-uncased-finetuned-sst-2-english\n",
"Text Classification\n",
"•\n",
"Updated\n",
"Dec 19, 2023\n",
"•\n",
"7.37M\n",
"•\n",
"•\n",
"642\n",
"spaces\n",
"23\n",
"Sort: \n",
"\t\tRecently updated\n",
"pinned\n",
"Running\n",
"21\n",
"📈\n",
"Number Tokenization Blog\n",
"Running\n",
"313\n",
"😻\n",
"Open Source Ai Year In Review 2024\n",
"What happened in open-source AI this year, and what’s next?\n",
"Running\n",
"194\n",
"⚡\n",
"paper-central\n",
"Running\n",
"42\n",
"🔋\n",
"Inference Playground\n",
"Running\n",
"on\n",
"TPU v5e\n",
"5\n",
"💬\n",
"Keras Chatbot Battle\n",
"Running\n",
"101\n",
"⚡\n",
"Modelcard Creator\n",
"Expand 23\n",
"\t\t\t\t\t\t\tspaces\n",
"models\n",
"16\n",
"Sort: \n",
"\t\tRecently updated\n",
"huggingface/timesfm-tourism-monthly\n",
"Updated\n",
"6 days ago\n",
"•\n",
"24\n",
"huggingface/CodeBERTa-language-id\n",
"Text Classification\n",
"•\n",
"Updated\n",
"Mar 29\n",
"•\n",
"498\n",
"•\n",
"54\n",
"huggingface/falcon-40b-gptq\n",
"Text Generation\n",
"•\n",
"Updated\n",
"Jun 14, 2023\n",
"•\n",
"12\n",
"•\n",
"12\n",
"huggingface/autoformer-tourism-monthly\n",
"Updated\n",
"May 24, 2023\n",
"•\n",
"1.79k\n",
"•\n",
"9\n",
"huggingface/distilbert-base-uncased-finetuned-mnli\n",
"Text Classification\n",
"•\n",
"Updated\n",
"Mar 22, 2023\n",
"•\n",
"1.7k\n",
"•\n",
"2\n",
"huggingface/informer-tourism-monthly\n",
"Updated\n",
"Feb 24, 2023\n",
"•\n",
"1.26k\n",
"•\n",
"5\n",
"huggingface/time-series-transformer-tourism-monthly\n",
"Updated\n",
"Feb 23, 2023\n",
"•\n",
"2.22k\n",
"•\n",
"18\n",
"huggingface/the-no-branch-repo\n",
"Text-to-Image\n",
"•\n",
"Updated\n",
"Feb 10, 2023\n",
"•\n",
"6\n",
"•\n",
"3\n",
"huggingface/CodeBERTa-small-v1\n",
"Fill-Mask\n",
"•\n",
"Updated\n",
"Jun 27, 2022\n",
"•\n",
"38.7k\n",
"•\n",
"71\n",
"huggingface/test-model-repo\n",
"Updated\n",
"Nov 19, 2021\n",
"•\n",
"1\n",
"Expand 16\n",
"\t\t\t\t\t\t\tmodels\n",
"datasets\n",
"31\n",
"Sort: \n",
"\t\tRecently updated\n",
"huggingface/community-science-paper-v2\n",
"Viewer\n",
"•\n",
"Updated\n",
"11 minutes ago\n",
"•\n",
"4.93k\n",
"•\n",
"338\n",
"•\n",
"6\n",
"huggingface/paper-central-data\n",
"Viewer\n",
"•\n",
"Updated\n",
"2 days ago\n",
"•\n",
"113k\n",
"•\n",
"487\n",
"•\n",
"7\n",
"huggingface/documentation-images\n",
"Viewer\n",
"•\n",
"Updated\n",
"2 days ago\n",
"•\n",
"44\n",
"•\n",
"2.48M\n",
"•\n",
"42\n",
"huggingface/transformers-metadata\n",
"Viewer\n",
"•\n",
"Updated\n",
"2 days ago\n",
"•\n",
"1.51k\n",
"•\n",
"643\n",
"•\n",
"13\n",
"huggingface/policy-docs\n",
"Updated\n",
"3 days ago\n",
"•\n",
"905\n",
"•\n",
"6\n",
"huggingface/diffusers-metadata\n",
"Viewer\n",
"•\n",
"Updated\n",
"5 days ago\n",
"•\n",
"56\n",
"•\n",
"441\n",
"•\n",
"4\n",
"huggingface/my-distiset-3f5a230e\n",
"Updated\n",
"24 days ago\n",
"•\n",
"16\n",
"huggingface/cookbook-images\n",
"Viewer\n",
"•\n",
"Updated\n",
"Nov 14\n",
"•\n",
"1\n",
"•\n",
"44.9k\n",
"•\n",
"6\n",
"huggingface/vllm-metadata\n",
"Updated\n",
"Oct 8\n",
"•\n",
"10\n",
"huggingface/paper-central-data-2\n",
"Viewer\n",
"•\n",
"Updated\n",
"Oct 4\n",
"•\n",
"58.3k\n",
"•\n",
"69\n",
"•\n",
"2\n",
"Expand 31\n",
"\t\t\t\t\t\t\tdatasets\n",
"Company\n",
"© Hugging Face\n",
"TOS\n",
"Privacy\n",
"About\n",
"Jobs\n",
"Website\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Pricing\n",
"Docs\n",
"\n",
"\n",
"\n",
"careers page\n",
"Webpage Title:\n",
"Hugging Face - Current Openings\n",
"Webpage Contents:\n",
"\n",
"\n",
"\n",
"\n",
"enterprise page\n",
"Webpage Title:\n",
"Enterprise Hub - Hugging Face\n",
"Webpage Contents:\n",
"Hugging Face\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Posts\n",
"Docs\n",
"Enterprise\n",
"Pricing\n",
"Log In\n",
"Sign Up\n",
"Enterprise Hub\n",
"Enterprise-ready version of the world’s leading AI platform\n",
"Subscribe to\n",
"Enterprise Hub\n",
"for $20/user/month with your Hub organization\n",
"Give your organization the most advanced platform to build AI with enterprise-grade security, access controls,\n",
"\t\t\tdedicated support and more.\n",
"Single Sign-On\n",
"Connect securely to your identity provider with SSO integration.\n",
"Regions\n",
"Select, manage, and audit the location of your repository data.\n",
"Audit Logs\n",
"Stay in control with comprehensive logs that report on actions taken.\n",
"Resource Groups\n",
"Accurately manage access to repositories with granular access control.\n",
"Token Management\n",
"Centralized token control and custom approval policies for organization access.\n",
"Analytics\n",
"Track and analyze repository usage data in a single dashboard.\n",
"Advanced Compute Options\n",
"Increase scalability and performance with more compute options like ZeroGPU for Spaces.\n",
"Private Datasets Viewer\n",
"Enable the Dataset Viewer on your private datasets for easier collaboration.\n",
"Advanced security\n",
"Configure organization-wide security policies and default repository visibility.\n",
"Billing\n",
"Control your budget effectively with managed billing and yearly commit options.\n",
"Priority Support\n",
"Maximize your platform usage with priority support from the Hugging Face team.\n",
"Join the most forward-thinking AI organizations\n",
"Everything you already know and love about Hugging Face in Enterprise mode.\n",
"Subscribe to\n",
"Enterprise Hub\n",
"or\n",
"Talk to sales\n",
"AI at Meta\n",
"Enterprise\n",
"company\n",
"•\n",
"2.05k models\n",
"•\n",
"3.76k followers\n",
"Nerdy Face\n",
"Enterprise\n",
"company\n",
"•\n",
"1 model\n",
"•\n",
"234 followers\n",
"ServiceNow-AI\n",
"Enterprise\n",
"company\n",
"•\n",
"109 followers\n",
"Deutsche Telekom AG\n",
"Enterprise\n",
"company\n",
"•\n",
"7 models\n",
"•\n",
"112 followers\n",
"Chegg Inc\n",
"Enterprise\n",
"company\n",
"•\n",
"77 followers\n",
"Lightricks\n",
"Enterprise\n",
"company\n",
"•\n",
"3 models\n",
"•\n",
"368 followers\n",
"Aledade Inc\n",
"Enterprise\n",
"company\n",
"•\n",
"53 followers\n",
"Virtusa Corporation\n",
"Enterprise\n",
"company\n",
"•\n",
"48 followers\n",
"HiddenLayer\n",
"Enterprise\n",
"company\n",
"•\n",
"49 followers\n",
"Ekimetrics\n",
"Enterprise\n",
"company\n",
"•\n",
"47 followers\n",
"Johnson & Johnson\n",
"Enterprise\n",
"company\n",
"•\n",
"36 followers\n",
"Vectara\n",
"Enterprise\n",
"company\n",
"•\n",
"1 model\n",
"•\n",
"54 followers\n",
"HOVER External\n",
"Enterprise\n",
"company\n",
"•\n",
"26 followers\n",
"Qualcomm\n",
"Enterprise\n",
"company\n",
"•\n",
"153 models\n",
"•\n",
"353 followers\n",
"Meta Llama\n",
"Enterprise\n",
"company\n",
"•\n",
"57 models\n",
"•\n",
"13.4k followers\n",
"Orange\n",
"Enterprise\n",
"company\n",
"•\n",
"4 models\n",
"•\n",
"148 followers\n",
"Writer\n",
"Enterprise\n",
"company\n",
"•\n",
"16 models\n",
"•\n",
"180 followers\n",
"Toyota Research Institute\n",
"Enterprise\n",
"company\n",
"•\n",
"8 models\n",
"•\n",
"91 followers\n",
"H2O.ai\n",
"Enterprise\n",
"company\n",
"•\n",
"71 models\n",
"•\n",
"360 followers\n",
"Mistral AI_\n",
"Enterprise\n",
"company\n",
"•\n",
"21 models\n",
"•\n",
"3.4k followers\n",
"IBM Granite\n",
"Enterprise\n",
"company\n",
"•\n",
"56 models\n",
"•\n",
"609 followers\n",
"Liberty Mutual\n",
"Enterprise\n",
"company\n",
"•\n",
"41 followers\n",
"Arcee AI\n",
"Enterprise\n",
"company\n",
"•\n",
"130 models\n",
"•\n",
"264 followers\n",
"Gretel.ai\n",
"Enterprise\n",
"company\n",
"•\n",
"8 models\n",
"•\n",
"72 followers\n",
"Gsk-tech\n",
"Enterprise\n",
"company\n",
"•\n",
"33 followers\n",
"BCG X\n",
"Enterprise\n",
"company\n",
"•\n",
"29 followers\n",
"StepStone Online Recruiting\n",
"Enterprise\n",
"company\n",
"•\n",
"32 followers\n",
"Prezi\n",
"Enterprise\n",
"company\n",
"•\n",
"30 followers\n",
"Shopify\n",
"Enterprise\n",
"company\n",
"•\n",
"371 followers\n",
"Together\n",
"Enterprise\n",
"company\n",
"•\n",
"27 models\n",
"•\n",
"462 followers\n",
"Bloomberg\n",
"Enterprise\n",
"company\n",
"•\n",
"2 models\n",
"•\n",
"133 followers\n",
"Fidelity Investments\n",
"Enterprise\n",
"company\n",
"•\n",
"114 followers\n",
"Jusbrasil\n",
"Enterprise\n",
"company\n",
"•\n",
"77 followers\n",
"Technology Innovation Institute\n",
"Enterprise\n",
"company\n",
"•\n",
"25 models\n",
"•\n",
"981 followers\n",
"Stability AI\n",
"Enterprise\n",
"company\n",
"•\n",
"95 models\n",
"•\n",
"8.67k followers\n",
"Nutanix\n",
"Enterprise\n",
"company\n",
"•\n",
"245 models\n",
"•\n",
"38 followers\n",
"Kakao Corp.\n",
"Enterprise\n",
"company\n",
"•\n",
"41 followers\n",
"creditkarma\n",
"Enterprise\n",
"company\n",
"•\n",
"32 followers\n",
"Mercedes-Benz AG\n",
"Enterprise\n",
"company\n",
"•\n",
"80 followers\n",
"Widn AI\n",
"Enterprise\n",
"company\n",
"•\n",
"27 followers\n",
"Liquid AI\n",
"Enterprise\n",
"company\n",
"•\n",
"85 followers\n",
"BRIA AI\n",
"Enterprise\n",
"company\n",
"•\n",
"28 models\n",
"•\n",
"980 followers\n",
"Compliance & Certifications\n",
"GDPR Compliant\n",
"SOC 2 Type 2\n",
"Website\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Tasks\n",
"Inference Endpoints\n",
"HuggingChat\n",
"Company\n",
"About\n",
"Brand assets\n",
"Terms of service\n",
"Privacy\n",
"Jobs\n",
"Press\n",
"Resources\n",
"Learn\n",
"Documentation\n",
"Blog\n",
"Forum\n",
"Service Status\n",
"Social\n",
"GitHub\n",
"Twitter\n",
"LinkedIn\n",
"Discord\n",
"\n",
"\n",
"\n",
"pricing page\n",
"Webpage Title:\n",
"Hugging Face – Pricing\n",
"Webpage Contents:\n",
"Hugging Face\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Posts\n",
"Docs\n",
"Enterprise\n",
"Pricing\n",
"Log In\n",
"Sign Up\n",
"Pricing\n",
"Leveling up AI collaboration and compute.\n",
"Users and organizations already use the Hub as a collaboration platform,\n",
"we’re making it easy to seamlessly and scalably launch ML compute directly from the Hub.\n",
"HF Hub\n",
"Collaborate on Machine Learning\n",
"Host unlimited public models, datasets\n",
"Create unlimited orgs with no member limits\n",
"Access the latest ML tools and open source\n",
"Community support\n",
"Forever\n",
"Free\n",
"PRO\n",
"Pro Account\n",
"Unlock advanced HF features\n",
"ZeroGPU and Dev Mode for Spaces\n",
"Higher rate limits for serverless inference\n",
"Get early access to upcoming features\n",
"Show your support with a Pro badge\n",
"Subscribe for\n",
"$9\n",
"/month\n",
"Enterprise Hub\n",
"Accelerate your AI roadmap\n",
"SSO and SAML support\n",
"Select data location with Storage Regions\n",
"Precise actions reviews with Audit logs\n",
"Granular access control with Resource groups\n",
"Centralized token control and approval\n",
"Dataset Viewer for private datasets\n",
"Advanced compute options for Spaces\n",
"Deploy Inference on your own Infra\n",
"Managed billing with yearly commits\n",
"Priority support\n",
"Starting at\n",
"$20\n",
"per user per month\n",
"Spaces Hardware\n",
"Upgrade your Space compute\n",
"Free CPUs\n",
"Build more advanced Spaces\n",
"7 optimized hardware available\n",
"From CPU to GPU to Accelerators\n",
"Starting at\n",
"$0\n",
"/hour\n",
"Inference Endpoints\n",
"Deploy models on fully managed infrastructure\n",
"Deploy dedicated Endpoints in seconds\n",
"Keep your costs low\n",
"Fully-managed autoscaling\n",
"Enterprise security\n",
"Starting at\n",
"$0.032\n",
"/hour\n",
"Need support to accelerate AI in your organization? View our\n",
"Expert Support\n",
".\n",
"Hugging Face Hub\n",
"free\n",
"The HF Hub is the central place to explore, experiment, collaborate and build technology with Machine\n",
"\t\t\t\t\tLearning.\n",
"Join the open source Machine Learning movement!\n",
"→\n",
"Sign Up\n",
"Create with ML\n",
"Packed with ML features, like model eval, dataset viewer and much more.\n",
"Collaborate\n",
"Git based and designed for collaboration at its core.\n",
"Play and learn\n",
"Learn by experimenting and sharing with our awesome community.\n",
"Build your ML portfolio\n",
"Share your work with the world and build your own ML profile.\n",
"Spaces Hardware\n",
"Starting at $0\n",
"Spaces are one of the most popular ways to share ML applications and demos with the world.\n",
"Upgrade your Spaces with our selection of custom on-demand hardware:\n",
"→\n",
"Get started with Spaces\n",
"Name\n",
"CPU\n",
"Memory\n",
"Accelerator\n",
"VRAM\n",
"Hourly price\n",
"CPU Basic\n",
"2 vCPU\n",
"16 GB\n",
"-\n",
"-\n",
"FREE\n",
"CPU Upgrade\n",
"8 vCPU\n",
"32 GB\n",
"-\n",
"-\n",
"$0.03\n",
"Nvidia T4 - small\n",
"4 vCPU\n",
"15 GB\n",
"Nvidia T4\n",
"16 GB\n",
"$0.40\n",
"Nvidia T4 - medium\n",
"8 vCPU\n",
"30 GB\n",
"Nvidia T4\n",
"16 GB\n",
"$0.60\n",
"1x Nvidia L4\n",
"8 vCPU\n",
"30 GB\n",
"Nvidia L4\n",
"24 GB\n",
"$0.80\n",
"4x Nvidia L4\n",
"48 vCPU\n",
"186 GB\n",
"Nvidia L4\n",
"96 GB\n",
"$3.80\n",
"1x Nvidia L40S\n",
"8 vCPU\n",
"62 GB\n",
"Nvidia L4\n",
"48 GB\n",
"$1.80\n",
"4x Nvidia L40S\n",
"48 vCPU\n",
"382 GB\n",
"Nvidia L4\n",
"192 GB\n",
"$8.30\n",
"8x Nvidia L40S\n",
"192 vCPU\n",
"1534 GB\n",
"Nvidia L4\n",
"384 GB\n",
"$23.50\n",
"Nvidia A10G - small\n",
"4 vCPU\n",
"15 GB\n",
"Nvidia A10G\n",
"24 GB\n",
"$1.00\n",
"Nvidia A10G - large\n",
"12 vCPU\n",
"46 GB\n",
"Nvidia A10G\n",
"24 GB\n",
"$1.50\n",
"2x Nvidia A10G - large\n",
"24 vCPU\n",
"92 GB\n",
"Nvidia A10G\n",
"48 GB\n",
"$3.00\n",
"4x Nvidia A10G - large\n",
"48 vCPU\n",
"184 GB\n",
"Nvidia A10G\n",
"96 GB\n",
"$5.00\n",
"Nvidia A100 - large\n",
"12 vCPU\n",
"142 GB\n",
"Nvidia A100\n",
"80 GB\n",
"$4.00\n",
"TPU v5e 1x1\n",
"22 vCPU\n",
"44 GB\n",
"Google TPU v5e\n",
"16 GB\n",
"$1.20\n",
"TPU v5e 2x2\n",
"110 vCPU\n",
"186 GB\n",
"Google TPU v5e\n",
"64 GB\n",
"$4.75\n",
"TPU v5e 2x4\n",
"220 vCPU\n",
"380 GB\n",
"Google TPU v5e\n",
"128 GB\n",
"$9.50\n",
"Custom\n",
"on demand\n",
"on demand\n",
"on demand\n",
"on demand\n",
"on demand\n",
"Spaces Persistent Storage\n",
"All Spaces get ephemeral storage for free but you can upgrade and add persistent storage at any time.\n",
"Name\n",
"Storage\n",
"Monthly price\n",
"Small\n",
"20 GB\n",
"$5\n",
"Medium\n",
"150 GB\n",
"$25\n",
"Large\n",
"1 TB\n",
"$100\n",
"Building something cool as a side project? We also offer community GPU grants.\n",
"Inference Endpoints\n",
"Starting at $0.033/hour\n",
"Inference Endpoints (dedicated) offers a secure production solution to easily deploy any ML model on dedicated\n",
"\t\t\t\t\tand autoscaling infrastructure, right from the HF Hub.\n",
"→\n",
"Learn more\n",
"CPU\n",
"instances\n",
"Provider\n",
"Architecture\n",
"vCPUs\n",
"Memory\n",
"Hourly rate\n",
"aws\n",
"Intel Sapphire Rapids\n",
"1\n",
"2GB\n",
"$0.03\n",
"2\n",
"4GB\n",
"$0.07\n",
"4\n",
"8GB\n",
"$0.13\n",
"8\n",
"16GB\n",
"$0.27\n",
"azure\n",
"Intel Xeon\n",
"1\n",
"2GB\n",
"$0.06\n",
"2\n",
"4GB\n",
"$0.12\n",
"4\n",
"8GB\n",
"$0.24\n",
"8\n",
"16GB\n",
"$0.48\n",
"gcp\n",
"Intel Sapphire Rapids\n",
"1\n",
"2GB\n",
"$0.05\n",
"2\n",
"4GB\n",
"$0.10\n",
"4\n",
"8GB\n",
"$0.20\n",
"8\n",
"16GB\n",
"$0.40\n",
"Accelerator\n",
"instances\n",
"Provider\n",
"Architecture\n",
"Topology\n",
"Accelerator Memory\n",
"Hourly rate\n",
"aws\n",
"Inf2\n",
"\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tNeuron\n",
"x1\n",
"14.5GB\n",
"$0.75\n",
"x12\n",
"760GB\n",
"$12.00\n",
"gcp\n",
"TPU\n",
"\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tv5e\n",
"1x1\n",
"16GB\n",
"$1.20\n",
"2x2\n",
"64GB\n",
"$4.75\n",
"2x4\n",
"128GB\n",
"$9.50\n",
"GPU\n",
"instances\n",
"Provider\n",
"Architecture\n",
"GPUs\n",
"GPU Memory\n",
"Hourly rate\n",
"aws\n",
"NVIDIA T4\n",
"1\n",
"14GB\n",
"$0.50\n",
"4\n",
"56GB\n",
"$3.00\n",
"aws\n",
"NVIDIA L4\n",
"1\n",
"24GB\n",
"$0.80\n",
"4\n",
"96GB\n",
"$3.80\n",
"aws\n",
"NVIDIA L40S\n",
"1\n",
"48GB\n",
"$1.80\n",
"4\n",
"192GB\n",
"$8.30\n",
"8\n",
"384GB\n",
"$23.50\n",
"aws\n",
"NVIDIA A10G\n",
"1\n",
"24GB\n",
"$1.00\n",
"4\n",
"96GB\n",
"$5.00\n",
"aws\n",
"NVIDIA A100\n",
"1\n",
"80GB\n",
"$4.00\n",
"2\n",
"160GB\n",
"$8.00\n",
"4\n",
"320GB\n",
"$16.00\n",
"8\n",
"640GB\n",
"$32.00\n",
"gcp\n",
"NVIDIA T4\n",
"1\n",
"16GB\n",
"$0.50\n",
"gcp\n",
"NVIDIA L4\n",
"1\n",
"24GB\n",
"$0.70\n",
"4\n",
"96GB\n",
"$3.80\n",
"gcp\n",
"NVIDIA A100\n",
"1\n",
"80GB\n",
"$3.60\n",
"2\n",
"160GB\n",
"$7.20\n",
"4\n",
"320GB\n",
"$14.40\n",
"8\n",
"640GB\n",
"$28.80\n",
"gcp\n",
"NVIDIA H100\n",
"1\n",
"80GB\n",
"$10.00\n",
"2\n",
"160GB\n",
"$20.00\n",
"4\n",
"320GB\n",
"$40.00\n",
"8\n",
"640GB\n",
"$80.00\n",
"Pro Account\n",
"PRO\n",
"A monthly subscription to access powerful features.\n",
"→\n",
"Get Pro\n",
"($9/month)\n",
"ZeroGPU\n",
": Get 5x usage quota and highest GPU queue priority\n",
"Spaces Hosting\n",
": Create ZeroGPU Spaces with A100 hardware\n",
"Spaces Dev Mode\n",
": Fast iterations via SSH/VS Code for Spaces\n",
"Dataset Viewer\n",
": Activate it on private datasets\n",
"Inference API\n",
": Get x20 higher rate limits on Serverless API\n",
"Blog Articles\n",
": Publish articles to the Hugging Face blog\n",
"Social Posts\n",
": Share short updates with the community\n",
"Features Preview\n",
": Get early access to upcoming\n",
"\t\t\t\t\t\t\t\t\t\tfeatures\n",
"PRO\n",
"Badge\n",
":\n",
"\t\t\t\t\t\t\t\t\t\tShow your support on your profile\n",
"Website\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Tasks\n",
"Inference Endpoints\n",
"HuggingChat\n",
"Company\n",
"About\n",
"Brand assets\n",
"Terms of service\n",
"Privacy\n",
"Jobs\n",
"Press\n",
"Resources\n",
"Learn\n",
"Documentation\n",
"Blog\n",
"Forum\n",
"Service Status\n",
"Social\n",
"GitHub\n",
"Twitter\n",
"LinkedIn\n",
"Discord\n",
"\n",
"\n",
"\n",
"blog page\n",
"Webpage Title:\n",
"Hugging Face – Blog\n",
"Webpage Contents:\n",
"Hugging Face\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Posts\n",
"Docs\n",
"Enterprise\n",
"Pricing\n",
"Log In\n",
"Sign Up\n",
"Blog, Articles, and discussions\n",
"New Article\n",
"Everything\n",
"community\n",
"guide\n",
"open source collab\n",
"partnerships\n",
"research\n",
"NLP\n",
"Audio\n",
"CV\n",
"RL\n",
"ethics\n",
"Diffusion\n",
"Game Development\n",
"RLHF\n",
"Leaderboard\n",
"Case Studies\n",
"LeMaterial: an open source initiative to accelerate materials discovery and research\n",
"By\n",
"AlexDuvalinho\n",
"December 10, 2024\n",
"guest\n",
"•\n",
"26\n",
"Community Articles\n",
"view all\n",
"Tutorial: Quantizing Llama 3+ Models for Efficient Deployment\n",
"By\n",
"theeseus-ai\n",
"•\n",
"14 minutes ago\n",
"•\n",
"1\n",
"AI Paradigms Explained: Instruct Models vs. Chat Models 🚀\n",
"By\n",
"theeseus-ai\n",
"•\n",
"20 minutes ago\n",
"How to Expand Your AI Music Generations of 30 Seconds to Several Minutes\n",
"By\n",
"theeseus-ai\n",
"•\n",
"2 days ago\n",
"•\n",
"1\n",
"🇪🇺✍ EU AI Act: Systemic Risks in the First CoP Draft Comments ✍🇪🇺\n",
"By\n",
"yjernite\n",
"•\n",
"3 days ago\n",
"•\n",
"6\n",
"The Intersection of CTMU and QCI: Implementing Emergent Intelligence\n",
"By\n",
"dimentox\n",
"•\n",
"3 days ago\n",
"Building an AI-powered search engine from scratch\n",
"By\n",
"as-cle-bert\n",
"•\n",
"4 days ago\n",
"•\n",
"5\n",
"**Build Your Own AI Server at Home: A Cost-Effective Guide Using Pre-Owned Components**\n",
"By\n",
"theeseus-ai\n",
"•\n",
"4 days ago\n",
"•\n",
"1\n",
"MotionLCM-V2: Improved Compression Rate for Multi-Latent-Token Diffusion\n",
"By\n",
"wxDai\n",
"•\n",
"4 days ago\n",
"•\n",
"10\n",
"RLHF 101: A Technical Dive into RLHF\n",
"By\n",
"GitBag\n",
"•\n",
"5 days ago\n",
"•\n",
"1\n",
"[Talk Arena](https://talkarena.org)\n",
"By\n",
"WillHeld\n",
"•\n",
"5 days ago\n",
"Multimodal RAG with Colpali, Milvus and VLMs\n",
"By\n",
"saumitras\n",
"•\n",
"5 days ago\n",
"In Honour of This Year's NeurIPs Test of Time Paper Awardees\n",
"By\n",
"Jaward\n",
"•\n",
"6 days ago\n",
"•\n",
"1\n",
"Power steering: Squeeze massive power from small LLMs\n",
"By\n",
"ucheog\n",
"•\n",
"6 days ago\n",
"•\n",
"4\n",
"Exploring the Power of KaibanJS v0.11.0 🚀\n",
"By\n",
"darielnoel\n",
"•\n",
"6 days ago\n",
"•\n",
"1\n",
"**Building a Custom Retrieval System with Motoko and Node.js**\n",
"By\n",
"theeseus-ai\n",
"•\n",
"6 days ago\n",
"Finding Moroccan Arabic (Darija) in Fineweb 2\n",
"By\n",
"omarkamali\n",
"•\n",
"7 days ago\n",
"•\n",
"18\n",
"Running Your Custom LoRA Fine-Tuned MusicGen Large Locally\n",
"By\n",
"theeseus-ai\n",
"•\n",
"9 days ago\n",
"•\n",
"1\n",
"Building a Local Vector Database Index with Annoy and Sentence Transformers\n",
"By\n",
"theeseus-ai\n",
"•\n",
"10 days ago\n",
"•\n",
"2\n",
"Practical Consciousness Theory for AI System Design\n",
"By\n",
"KnutJaegersberg\n",
"•\n",
"10 days ago\n",
"•\n",
"3\n",
"Releasing QwQ-LongCoT-130K\n",
"By\n",
"amphora\n",
"•\n",
"10 days ago\n",
"•\n",
"6\n",
"Hugging Face models in Amazon Bedrock\n",
"By\n",
"pagezyhf\n",
"December 9, 2024\n",
"•\n",
"5\n",
"Hugging Face Community Releases an Open Preference Dataset for Text-to-Image Generation\n",
"By\n",
"davidberenstein1957\n",
"December 9, 2024\n",
"•\n",
"45\n",
"Welcome PaliGemma 2 – New vision language models by Google\n",
"By\n",
"merve\n",
"December 5, 2024\n",
"•\n",
"105\n",
"“How good are LLMs at fixing their mistakes? A chatbot arena experiment with Keras and TPUs\n",
"By\n",
"martin-gorner\n",
"December 5, 2024\n",
"•\n",
"12\n",
"Rethinking LLM Evaluation with 3C3H: AraGen Benchmark and Leaderboard\n",
"By\n",
"alielfilali01\n",
"December 4, 2024\n",
"guest\n",
"•\n",
"24\n",
"Investing in Performance: Fine-tune small models with LLM insights - a CFM case study\n",
"By\n",
"oahouzi\n",
"December 3, 2024\n",
"•\n",
"24\n",
"Rearchitecting Hugging Face Uploads and Downloads\n",
"By\n",
"port8080\n",
"November 26, 2024\n",
"•\n",
"37\n",
"SmolVLM - small yet mighty Vision Language Model\n",
"By\n",
"andito\n",
"November 26, 2024\n",
"•\n",
"135\n",
"You could have designed state of the art positional encoding\n",
"By\n",
"FL33TW00D-HF\n",
"November 25, 2024\n",
"•\n",
"76\n",
"Letting Large Models Debate: The First Multilingual LLM Debate Competition\n",
"By\n",
"xuanricheng\n",
"November 20, 2024\n",
"guest\n",
"•\n",
"26\n",
"From Files to Chunks: Improving Hugging Face Storage Efficiency\n",
"By\n",
"jsulz\n",
"November 20, 2024\n",
"•\n",
"42\n",
"Faster Text Generation with Self-Speculative Decoding\n",
"By\n",
"ariG23498\n",
"November 20, 2024\n",
"•\n",
"43\n",
"Introduction to the Open Leaderboard for Japanese LLMs\n",
"By\n",
"akimfromparis\n",
"November 20, 2024\n",
"guest\n",
"•\n",
"26\n",
"Judge Arena: Benchmarking LLMs as Evaluators\n",
"By\n",
"kaikaidai\n",
"November 19, 2024\n",
"guest\n",
"•\n",
"47\n",
"Previous\n",
"1\n",
"2\n",
"3\n",
"...\n",
"36\n",
"Next\n",
"Company\n",
"© Hugging Face\n",
"TOS\n",
"Privacy\n",
"About\n",
"Jobs\n",
"Website\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Pricing\n",
"Docs\n",
"\n",
"\n",
"\n",
"documentation page\n",
"Webpage Title:\n",
"Hugging Face - Documentation\n",
"Webpage Contents:\n",
"Hugging Face\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Posts\n",
"Docs\n",
"Enterprise\n",
"Pricing\n",
"Log In\n",
"Sign Up\n",
"Documentations\n",
"Hub\n",
"Host Git-based models, datasets and Spaces on the Hugging Face Hub.\n",
"Transformers\n",
"State-of-the-art ML for Pytorch, TensorFlow, and JAX.\n",
"Diffusers\n",
"State-of-the-art diffusion models for image and audio generation in PyTorch.\n",
"Datasets\n",
"Access and share datasets for computer vision, audio, and NLP tasks.\n",
"Gradio\n",
"Build machine learning demos and other web apps, in just a few lines of Python.\n",
"Hub Python Library\n",
"Client library for the HF Hub: manage repositories from your Python runtime.\n",
"Huggingface.js\n",
"A collection of JS libraries to interact with Hugging Face, with TS types included.\n",
"Transformers.js\n",
"State-of-the-art Machine Learning for the web. Run Transformers directly in your browser, with no need for a server.\n",
"Inference API (serverless)\n",
"Experiment with over 200k models easily using the serverless tier of Inference Endpoints.\n",
"Inference Endpoints (dedicated)\n",
"Easily deploy models to production on dedicated, fully managed infrastructure.\n",
"PEFT\n",
"Parameter efficient finetuning methods for large models.\n",
"Accelerate\n",
"Easily train and use PyTorch models with multi-GPU, TPU, mixed-precision.\n",
"Optimum\n",
"Fast training and inference of HF Transformers with easy to use hardware optimization tools.\n",
"AWS Trainium & Inferentia\n",
"Train and Deploy Transformers & Diffusers with AWS Trainium and AWS Inferentia via Optimum.\n",
"Tokenizers\n",
"Fast tokenizers, optimized for both research and production.\n",
"Evaluate\n",
"Evaluate and report model performance easier and more standardized.\n",
"Tasks\n",
"All things about ML tasks: demos, use cases, models, datasets, and more!\n",
"Dataset viewer\n",
"API to access the contents, metadata and basic statistics of all Hugging Face Hub datasets.\n",
"TRL\n",
"Train transformer language models with reinforcement learning.\n",
"Amazon SageMaker\n",
"Train and Deploy Transformer models with Amazon SageMaker and Hugging Face DLCs.\n",
"timm\n",
"State-of-the-art computer vision models, layers, optimizers, training/evaluation, and utilities.\n",
"Safetensors\n",
"Simple, safe way to store and distribute neural networks weights safely and quickly.\n",
"Text Generation Inference\n",
"Toolkit to serve Large Language Models.\n",
"AutoTrain\n",
"AutoTrain API and UI.\n",
"Text Embeddings Inference\n",
"Toolkit to serve Text Embedding Models.\n",
"Competitions\n",
"Create your own competitions on Hugging Face.\n",
"Bitsandbytes\n",
"Toolkit to optimize and quantize models.\n",
"Sentence Transformers\n",
"Multilingual Sentence & Image Embeddings\n",
"Google Cloud\n",
"Train and Deploy Transformer models with Hugging Face DLCs on Google Cloud.\n",
"Google TPUs\n",
"Deploy models on Google TPUs via Optimum.\n",
"Chat UI\n",
"Open source chat frontend, powers the HuggingChat app.\n",
"Leaderboards\n",
"Create your own Leaderboards on Hugging Face.\n",
"Lighteval\n",
"Your all-in-one toolkit for evaluating LLMs across multiple backends.\n",
"Argilla\n",
"Collaboration tool for AI engineers and domain experts who need to build high quality datasets.\n",
"Distilabel\n",
"The framework for synthetic data generation and AI feedback.\n",
"Hugging Face Generative AI Services (HUGS)\n",
"Optimized, zero-configuration inference microservices designed to simplify and accelerate the development of AI applications with open models\n",
"Community\n",
"Blog\n",
"Learn\n",
"Discord\n",
"Forum\n",
"Github\n",
"Company\n",
"© Hugging Face\n",
"TOS\n",
"Privacy\n",
"About\n",
"Jobs\n",
"Website\n",
"Models\n",
"Datasets\n",
"Spaces\n",
"Pricing\n",
"Docs\n",
"\n",
"\n"
]
}
],
"source": [
"print(get_all_details(\"https://huggingface.co\"))"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "9b863a55-f86c-4e3f-8a79-94e24c1a8cf2",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
"and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
"Include details of company culture, customers and careers/jobs if you have the information.\"\n",
"\n",
"# Or uncomment the lines below for a more humorous brochure - this demonstrates how easy it is to incorporate 'tone':\n",
"\n",
"# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
"# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
"# Include details of company culture, customers and careers/jobs if you have the information.\"\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "6ab83d92-d36b-4ce0-8bcc-5bb4c2f8ff23",
"metadata": {},
"outputs": [],
"source": [
"def get_brochure_user_prompt(company_name, url):\n",
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n",
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n",
" user_prompt += get_all_details(url)\n",
" user_prompt = user_prompt[:5_000] # Truncate if more than 5,000 characters\n",
" return user_prompt"
]
},
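  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3c9f2b7e-1d4a-4f8b-9c6e-2a5d7e8f0b1c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sketch: truncate by tokens rather than characters.\n",
    "# Character counts only approximate what fits in the model's context window;\n",
    "# counting tokens is more precise. Assumes the tiktoken package is installed\n",
    "# and recognises the model name (recent tiktoken versions support gpt-4o-mini).\n",
    "\n",
    "import tiktoken\n",
    "\n",
    "def truncate_to_tokens(text, max_tokens=2_000, model=MODEL):\n",
    "    encoding = tiktoken.encoding_for_model(model)\n",
    "    tokens = encoding.encode(text)\n",
    "    return encoding.decode(tokens[:max_tokens])"
   ]
  },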
{
"cell_type": "code",
"execution_count": 17,
"id": "cd909e0b-1312-4ce2-a553-821e795d7572",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.co'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'enterprise page', 'url': 'https://huggingface.co/enterprise'}, {'type': 'pricing page', 'url': 'https://huggingface.co/pricing'}, {'type': 'blog page', 'url': 'https://huggingface.co/blog'}, {'type': 'community page', 'url': 'https://discuss.huggingface.co'}, {'type': 'GitHub page', 'url': 'https://github.com/huggingface'}, {'type': 'Twitter page', 'url': 'https://twitter.com/huggingface'}, {'type': 'LinkedIn page', 'url': 'https://www.linkedin.com/company/huggingface/'}]}\n"
]
},
{
"data": {
"text/plain": [
"'You are looking at a company called: HuggingFace\\nHere are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\nLanding page:\\nWebpage Title:\\nHugging Face – The AI community building the future.\\nWebpage Contents:\\nHugging Face\\nModels\\nDatasets\\nSpaces\\nPosts\\nDocs\\nEnterprise\\nPricing\\nLog In\\nSign Up\\nThe AI community building the future.\\nThe platform where the machine learning community collaborates on models, datasets, and applications.\\nTrending on\\nthis week\\nModels\\nmeta-llama/Llama-3.3-70B-Instruct\\nUpdated\\n5 days ago\\n•\\n147k\\n•\\n1.03k\\nDatou1111/shou_xin\\nUpdated\\n7 days ago\\n•\\n15.3k\\n•\\n411\\ntencent/HunyuanVideo\\nUpdated\\n8 days ago\\n•\\n4.39k\\n•\\n1.04k\\nblack-forest-labs/FLUX.1-dev\\nUpdated\\nAug 16\\n•\\n1.36M\\n•\\n7.28k\\nCohereForAI/c4ai-command-r7b-12-2024\\nUpdated\\n1 day ago\\n•\\n1.2k\\n•\\n185\\nBrowse 400k+ models\\nSpaces\\nRunning\\non\\nZero\\n1.35k\\n🏢\\nTRELLIS\\nScalable and Versatile 3D Generation from images\\nRunning\\non\\nL40S\\n296\\n🚀\\nFlux Style Shaping\\nOptical illusions and style transfer with FLUX\\nRunning\\non\\nCPU Upgrade\\n5.98k\\n👕\\nKolors Virtual Try-On\\nRunning\\non\\nZero\\n841\\n📈\\nIC Light V2\\nRunning\\non\\nZero\\n319\\n🦀🏆\\nFLUXllama\\nFLUX 4-bit Quantization(just 8GB VRAM)\\nBrowse 150k+ applications\\nDatasets\\nHuggingFaceFW/fineweb-2\\nUpdated\\n7 days ago\\n•\\n42.5k\\n•\\n302\\nfka/awesome-chatgpt-prompts\\nUpdated\\nSep 3\\n•\\n7k\\n•\\n6.53k\\nCohereForAI/Global-MMLU\\nUpdated\\n3 days ago\\n•\\n6.77k\\n•\\n88\\nO1-OPEN/OpenO1-SFT\\nUpdated\\n24 days ago\\n•\\n1.44k\\n•\\n185\\naiqtech/kolaw\\nUpdated\\nApr 26\\n•\\n102\\n•\\n42\\nBrowse 100k+ datasets\\nThe Home of Machine Learning\\nCreate, discover and collaborate on ML better.\\nThe collaboration platform\\nHost and collaborate on unlimited public models, datasets and applications.\\nMove faster\\nWith the HF Open 
source stack.\\nExplore all modalities\\nText, image, video, audio or even 3D.\\nBuild your portfolio\\nShare your work with the world and build your ML profile.\\nSign Up\\nAccelerate your ML\\nWe provide paid Compute and Enterprise solutions.\\nCompute\\nDeploy on optimized\\nInference Endpoints\\nor update your\\nSpaces applications\\nto a GPU in a few clicks.\\nView pricing\\nStarting at $0.60/hour for GPU\\nEnterprise\\nGive your team the most advanced platform to build AI with enterprise-grade security, access controls and\\n\\t\\t\\tdedicated support.\\nGetting started\\nStarting at $20/user/month\\nSingle Sign-On\\nRegions\\nPriority Support\\nAudit Logs\\nResource Groups\\nPrivate Datasets Viewer\\nMore than 50,000 organizations are using Hugging Face\\nAi2\\nEnterprise\\nnon-profit\\n•\\n366 models\\n•\\n1.72k followers\\nAI at Meta\\nEnterprise\\ncompany\\n•\\n2.05k models\\n•\\n3.76k followers\\nAmazon Web Services\\ncompany\\n•\\n21 models\\n•\\n2.42k followers\\nGoogle\\ncompany\\n•\\n911 models\\n•\\n5.5k followers\\nIntel\\ncompany\\n•\\n217 models\\n•\\n2.05k followers\\nMicrosoft\\ncompany\\n•\\n352 models\\n•\\n6.13k followers\\nGrammarly\\ncompany\\n•\\n10 models\\n•\\n98 followers\\nWriter\\nEnterprise\\ncompany\\n•\\n16 models\\n•\\n180 followers\\nOur Open Source\\nWe are building the foundation of ML tooling with the community.\\nTransformers\\n136,317\\nState-of-the-art ML for Pytorch, TensorFlow, and JAX.\\nDiffusers\\n26,646\\nState-of-the-art diffusion models for image and audio generation in PyTorch.\\nSafetensors\\n2,954\\nSimple, safe way to store and distribute neural networks weights safely and quickly.\\nHub Python Library\\n2,165\\nClient library for the HF Hub: manage repositories from your Python runtime.\\nTokenizers\\n9,153\\nFast tokenizers, optimized for both research and production.\\nPEFT\\n16,713\\nParameter efficient finetuning methods for large models.\\nTransformers.js\\n12,349\\nState-of-the-art Machine Learning for 
the web. Run Transformers directly in your browser, with no need for a server.\\ntimm\\n32,608\\nState-of-the-art computer vision models, layers, optimizers, training/evaluation, and utilities.\\nTRL\\n10,312\\nTrain transformer language models with reinforcement learning.\\nDatasets\\n19,354\\nAccess and share datasets for computer vision, audio, and NLP tasks.\\nText Generation Inference\\n9,451\\nToolkit to serve Large Language Models.\\nAccelerate\\n8,054\\nEasily train and use PyTorch models with multi-GPU, TPU, mixed-precision.\\nWebsite\\nModels\\nDatasets\\nSpaces\\nTasks\\nInference Endpoints\\nHuggingChat\\nCompany\\nAbout\\nBrand assets\\nTerms of service\\nPrivacy\\nJobs\\nPress\\nResources\\nLearn\\nDocumentation\\nBlog\\nForum\\nService Status\\nSocial\\nGitHub\\nTwitter\\nLinkedIn\\nDiscord\\n\\n\\n\\nabout page\\nWebpage Title:\\nHugging Face – The AI community building the future.\\nWebpage Contents:\\nHugging Face\\nModels\\nDatasets\\nSpaces\\nPosts\\nDocs\\nEnterprise\\nPricing\\nLog In\\nSign Up\\nThe AI community building the future.\\nThe platform where the machine learning community collaborates on models, datasets, and applications.\\nTrending on\\nthis week\\nModels\\nmeta-llama/Llama-3.3-70B-Instruct\\nUpdated\\n5 days ago\\n•\\n147k\\n•\\n1.03k\\nDatou1111/shou_xin\\nUpdated\\n7 days ago\\n•\\n15.3k\\n•\\n411\\ntencent/HunyuanVideo\\nUpdated\\n8 days ago\\n•\\n4.39k\\n•\\n1.04k\\nblack-forest-labs/FLUX.1-dev\\nUpdated\\nAug 16\\n•\\n1.36M\\n•\\n7.28k\\nCohereForAI/c4ai-command-r7b-12-2024\\nUpdated\\n1 day ago\\n•\\n1.2k\\n•\\n185\\nBrowse 400k+ models\\nSpaces\\nRunning\\non\\nZero\\n1.35k\\n🏢\\nTRELLIS\\nScalable and Versatile 3D Generat'"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_brochure_user_prompt(\"HuggingFace\", \"https://huggingface.co\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "e44de579-4a1a-4e6a-a510-20ea3e4b8d46",
"metadata": {},
"outputs": [],
"source": [
"def create_brochure(company_name, url):\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" )\n",
" result = response.choices[0].message.content\n",
" display(Markdown(result))"
]
},
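  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7e2a9c41-5b3d-4e6f-8a1b-9c0d2e3f4a5b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sketch: save the brochure to disk as well as displaying it.\n",
    "# Same chat.completions call as create_brochure above; the filename is illustrative.\n",
    "\n",
    "def save_brochure(company_name, url, filename=\"brochure.md\"):\n",
    "    response = openai.chat.completions.create(\n",
    "        model=MODEL,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
    "        ],\n",
    "    )\n",
    "    result = response.choices[0].message.content\n",
    "    with open(filename, \"w\", encoding=\"utf-8\") as f:\n",
    "        f.write(result)\n",
    "    display(Markdown(result))"
   ]
  },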
{
"cell_type": "code",
"execution_count": 19,
"id": "e093444a-9407-42ae-924a-145730591a39",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.com'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'company page', 'url': 'https://huggingface.com/enterprise'}, {'type': 'company page', 'url': 'https://huggingface.com/pricing'}, {'type': 'blog', 'url': 'https://huggingface.com/blog'}, {'type': 'community page', 'url': 'https://discuss.huggingface.co'}, {'type': 'GitHub page', 'url': 'https://github.com/huggingface'}, {'type': 'Twitter page', 'url': 'https://twitter.com/huggingface'}, {'type': 'LinkedIn page', 'url': 'https://www.linkedin.com/company/huggingface/'}]}\n"
]
},
{
"data": {
"text/markdown": [
"# 🤗 Hugging Face: The AI Community Building the Future! 🤖\n",
"\n",
"---\n",
"\n",
"**Welcome to Hugging Face**, where you can have a machine learning experience that’s as warm and snuggly as a cozy blanket… but with a whole lot more code! \n",
"\n",
"### Who Are We?\n",
"We are a community of brilliant minds and curious data enthusiasts determined to take machine learning from \"meh\" to \"WOW!\" Think of us as the social club for AI nerds, where everyone leaves smarter than when they came (and some also leave with questionable puns).\n",
"\n",
"### What Do We Offer?\n",
"- **Models, Models, Models!** \n",
" Browse through over **400k+ models**! Whether you want to fine-tune your own or borrow the brilliant minds of others, we've got the options! From the latest and greatest like _Llama-3.3-70B-Instruct_ to the under-appreciated gems (hey, every model deserves a little love).\n",
"\n",
"- **Datasets Galore!** \n",
" Love data? We've got **100k+ datasets**! From images of cats in spacesuits to audio of squirrels trying to read Shakespeare, we have it all! Just remember, if it squeaks or chirps, you might want to check it twice before using it for your model. \n",
"\n",
"- **Spaces for Creativity** \n",
" Our **Spaces** allow you to run applications that are as wild and creative as your imagination. Create optical illusions or even try-on virtual clothes! Who knew AI could help with your wardrobe choices? \n",
"\n",
"---\n",
"\n",
"### Join Us!\n",
"At **Hugging Face**, we believe in collaboration and innovation. Our culture can best be described as:\n",
"- **As Fun as a Data Party**: We work hard but know how to giggle over some neural network mishaps.\n",
"- **Open as an Open Source Project**: If you bring the code, we’ll bring the snacks (virtual or otherwise).\n",
"- **Supportive Like a Great Friend**: Whether you’re a seasoned ML pro or just curious about AI, we all lift each other up. Think of us as a friend who always carves out time to help you debug your life.\n",
"\n",
"---\n",
"\n",
"### Who Loves Us?\n",
"More than **50,000 organizations** love using Hugging Face, including giants like Google, Microsoft, and even Grammarly (because who doesn’t need an AI wingman?). They trust us to keep their AI dreams alive. They might not hug back, but they sure appreciate a good model!\n",
"\n",
"---\n",
"\n",
"### Careers at Hugging Face:\n",
"Are you looking for a career that combines your love for AI with a community that celebrates your inner geek? **Join us!**\n",
"- **Job Perks**: Flexible hours, remote work, and an office pet (currently unsure if it's a cat, dog, or sentient code).\n",
"- **Opportunity for Growth**: You’ll learn faster than a toddler on a sugar rush!\n",
"- **Be Part of Something Bigger**: Help us build out the next big thing in AI… or at least make those awkward Zoom meetings a little less awkward!\n",
"\n",
"---\n",
"\n",
"So, whether you're an **investor** looking for the next big opportunity, a **customer** seeking cutting-edge AI solutions, or a **recruit** ready to join the most fun-loving AI gang around, come discover the joy of Hugging Face!\n",
"\n",
"### Connect with Us\n",
"- 🌐 Website: [huggingface.co](https://huggingface.co)\n",
"- 🐦 Twitter: [@HuggingFace](https://twitter.com/huggingface)\n",
"- 👾 Discord: Join our friendly AI playground!\n",
"\n",
"---\n",
"\n",
"🤗 **Hugging Face** - Where AI Meets Cuddly Innovation!"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"create_brochure(\"HuggingFace\", \"https://huggingface.com\")"
]
},
{
"cell_type": "markdown",
"id": "61eaaab7-0b47-4b29-82d4-75d474ad8d18",
"metadata": {},
"source": [
"## Finally - a minor improvement\n",
"\n",
    "With a small adjustment, we can stream the results back from OpenAI,\n",
    "with the familiar typewriter animation"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "51db0e49-f261-4137-aabe-92dd601f7725",
"metadata": {},
"outputs": [],
"source": [
"def stream_brochure(company_name, url):\n",
" stream = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" stream=True\n",
" )\n",
" \n",
    "    response = \"\"\n",
    "    display_handle = display(Markdown(\"\"), display_id=True)\n",
    "    for chunk in stream:\n",
    "        response += chunk.choices[0].delta.content or ''\n",
    "        # Strip any code-fence wrapper the model adds, without deleting the word 'markdown' from the brochure text itself\n",
    "        response = response.replace(\"```markdown\", \"\").replace(\"```\", \"\")\n",
    "        # Re-render the accumulated markdown in place for the typewriter effect\n",
    "        update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
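  {
   "cell_type": "markdown",
   "id": "7a3f1c9e-5b2d-4e8f-9a6c-1d2e3f4a5b6c",
   "metadata": {},
   "source": [
    "If you're curious what the raw stream actually looks like, the next cell is an optional sketch: it prints each chunk's `delta.content` as plain text instead of re-rendering Markdown. The function name `stream_to_stdout` is just for illustration; it isn't used elsewhere in the notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8b4e2d0f-6c3e-4f9a-8b7d-2e3f4a5b6c7d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional illustration - print each streamed chunk as it arrives, with no Markdown rendering\n",
    "\n",
    "def stream_to_stdout(company_name, url):\n",
    "    stream = openai.chat.completions.create(\n",
    "        model=MODEL,\n",
    "        messages=[\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
    "        ],\n",
    "        stream=True\n",
    "    )\n",
    "    for chunk in stream:\n",
    "        # Each chunk carries an incremental delta; content can be None on the final chunk\n",
    "        print(chunk.choices[0].delta.content or '', end='')"
   ]
  },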
{
"cell_type": "code",
"execution_count": 21,
"id": "56bf0ae3-ee9d-4a72-9cd6-edcac67ceb6d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.co'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'company page', 'url': 'https://huggingface.co/brand'}, {'type': 'blog page', 'url': 'https://huggingface.co/blog'}, {'type': 'jobs page', 'url': 'https://huggingface.co/join'}]}\n"
]
},
{
"data": {
"text/markdown": [
"\n",
"# Welcome to Hugging Face!\n",
"### - The AI Community Building the Future (and Maybe Your Next AI Mood Ring)\n",
"\n",
"---\n",
"\n",
"## Who Are We?\n",
"At Hugging Face, we’re not just a bunch of data nerds holed up in a room with some magical algorithms (well, okay, maybe a little). We’re a vibrant **community** of AI enthusiasts dedicated to building the future, one dataset at a time while trying to figure out how to program humility into machines!\n",
"\n",
"### What Do We Do?\n",
"- **Models**: Whether it's Llama-3.3-70B-Instruct (yep, that’s a mouthful) or a plethora of innovative applications, we’re making machine learning as accessible as trying to explain how you lost three hours to cat videos on the internet.\n",
" \n",
"- **Datasets**: With over **100k datasets**, we’ve got more info than a squirrel at a nut factory. Perfect for all your data-hoarding needs!\n",
"\n",
"- **Spaces**: Why do all the fun stuff on your own? Use our **Spaces** to show off your 3D generation skills, or engage in that optical illusion project you’ve got bubbling in the back of your mind. \n",
"\n",
"---\n",
"\n",
"## Who's Using Us?\n",
"Join **50,000+ organizations** who have already discovered that building AI doesn't have to be an existential crisis. Major players like **Google**, **Microsoft**, and even **Grammarly** are part of our family. Yes, English is hard; that’s why they need us!\n",
"\n",
"---\n",
"\n",
"## Join the Culture!\n",
"At Hugging Face, we pride ourselves on a **collaborative culture** that embraces laughter, creativity, and the occasional debate over whether a hot dog is a sandwich. \n",
"\n",
"### Career Opportunities\n",
"Looking for a career where you can hug trees and code? We have positions ranging from **ML engineers** to **data wranglers**. We don't require you to know everything—just a passion for AI and a confidently apologetic expression when you break something! \n",
"\n",
"**Perks include:**\n",
"- A supportive environment that feels like family… minus the awkward Thanksgiving dinner conversations.\n",
"- Work-from-happy-places policies (aka our flexible work arrangements).\n",
"\n",
"---\n",
"\n",
"## How to Get Involved\n",
"- Sign up! Create and share your work on a platform with **400k+ models** (and counting)! \n",
"- Check out our *Enterprise Solutions* that are so advanced even your grandma might be interested (just kidding, but you get the point).\n",
"\n",
"---\n",
"\n",
"### Ready to Create?\n",
"Come on over, and let’s make some magic! Whether you're an AI wizard or just looking for a way to impress your relatives at the next family gathering, Hugging Face is here to turn your dreams into datasets—one hug at a time!\n",
"\n",
"**So… What are you waiting for?** \n",
"*Sign up today!* \n",
"\n",
"\n",
"---\n",
"\n"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"stream_brochure(\"HuggingFace\", \"https://huggingface.co\")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "fdb3f8d8-a3eb-41c8-b1aa-9f60686a653b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://www.nvidia.com/en-us/about-nvidia/'}, {'type': 'careers page', 'url': 'https://www.nvidia.com/en-us/about-nvidia/careers/'}, {'type': 'investor relations', 'url': 'https://investor.nvidia.com/home/default.aspx'}, {'type': 'company history', 'url': 'https://images.nvidia.com/pdf/NVIDIA-Story.pdf'}, {'type': 'executive insights', 'url': 'https://www.nvidia.com/en-us/executive-insights/'}]}\n"
]
},
{
"ename": "ChunkedEncodingError",
"evalue": "('Connection broken: IncompleteRead(170 bytes read, 18664049 more expected)', IncompleteRead(170 bytes read, 18664049 more expected))",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mIncompleteRead\u001b[0m Traceback (most recent call last)",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/response.py:748\u001b[0m, in \u001b[0;36mHTTPResponse._error_catcher\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 747\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 748\u001b[0m \u001b[38;5;28;01myield\u001b[39;00m\n\u001b[1;32m 750\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m SocketTimeout \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 751\u001b[0m \u001b[38;5;66;03m# FIXME: Ideally we'd like to include the url in the ReadTimeoutError but\u001b[39;00m\n\u001b[1;32m 752\u001b[0m \u001b[38;5;66;03m# there is yet no clean way to get at it from this context.\u001b[39;00m\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/response.py:894\u001b[0m, in \u001b[0;36mHTTPResponse._raw_read\u001b[0;34m(self, amt, read1)\u001b[0m\n\u001b[1;32m 884\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\n\u001b[1;32m 885\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39menforce_content_length\n\u001b[1;32m 886\u001b[0m \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlength_remaining \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 892\u001b[0m \u001b[38;5;66;03m# raised during streaming, so all calls with incorrect\u001b[39;00m\n\u001b[1;32m 893\u001b[0m \u001b[38;5;66;03m# Content-Length are caught.\u001b[39;00m\n\u001b[0;32m--> 894\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m IncompleteRead(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_fp_bytes_read, \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlength_remaining)\n\u001b[1;32m 895\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m read1 \u001b[38;5;129;01mand\u001b[39;00m (\n\u001b[1;32m 896\u001b[0m (amt \u001b[38;5;241m!=\u001b[39m \u001b[38;5;241m0\u001b[39m \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m data) \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mlength_remaining \u001b[38;5;241m==\u001b[39m \u001b[38;5;28mlen\u001b[39m(data)\n\u001b[1;32m 897\u001b[0m ):\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 900\u001b[0m \u001b[38;5;66;03m# `http.client.HTTPResponse`, so we close it here.\u001b[39;00m\n\u001b[1;32m 901\u001b[0m \u001b[38;5;66;03m# See https://github.com/python/cpython/issues/113199\u001b[39;00m\n",
"\u001b[0;31mIncompleteRead\u001b[0m: IncompleteRead(170 bytes read, 18664049 more expected)",
"\nThe above exception was the direct cause of the following exception:\n",
"\u001b[0;31mProtocolError\u001b[0m Traceback (most recent call last)",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/models.py:820\u001b[0m, in \u001b[0;36mResponse.iter_content.<locals>.generate\u001b[0;34m()\u001b[0m\n\u001b[1;32m 819\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 820\u001b[0m \u001b[38;5;28;01myield from\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mraw\u001b[38;5;241m.\u001b[39mstream(chunk_size, decode_content\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\n\u001b[1;32m 821\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m ProtocolError \u001b[38;5;28;01mas\u001b[39;00m e:\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/response.py:1060\u001b[0m, in \u001b[0;36mHTTPResponse.stream\u001b[0;34m(self, amt, decode_content)\u001b[0m\n\u001b[1;32m 1059\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m is_fp_closed(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_fp) \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_decoded_buffer) \u001b[38;5;241m>\u001b[39m \u001b[38;5;241m0\u001b[39m:\n\u001b[0;32m-> 1060\u001b[0m data \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mread\u001b[49m\u001b[43m(\u001b[49m\u001b[43mamt\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mamt\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mdecode_content\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mdecode_content\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1062\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m data:\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/response.py:977\u001b[0m, in \u001b[0;36mHTTPResponse.read\u001b[0;34m(self, amt, decode_content, cache_content)\u001b[0m\n\u001b[1;32m 973\u001b[0m \u001b[38;5;28;01mwhile\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_decoded_buffer) \u001b[38;5;241m<\u001b[39m amt \u001b[38;5;129;01mand\u001b[39;00m data:\n\u001b[1;32m 974\u001b[0m \u001b[38;5;66;03m# TODO make sure to initially read enough data to get past the headers\u001b[39;00m\n\u001b[1;32m 975\u001b[0m \u001b[38;5;66;03m# For example, the GZ file header takes 10 bytes, we don't want to read\u001b[39;00m\n\u001b[1;32m 976\u001b[0m \u001b[38;5;66;03m# it one byte at a time\u001b[39;00m\n\u001b[0;32m--> 977\u001b[0m data \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_raw_read\u001b[49m\u001b[43m(\u001b[49m\u001b[43mamt\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 978\u001b[0m decoded_data \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_decode(data, decode_content, flush_decoder)\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/response.py:872\u001b[0m, in \u001b[0;36mHTTPResponse._raw_read\u001b[0;34m(self, amt, read1)\u001b[0m\n\u001b[1;32m 870\u001b[0m fp_closed \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mgetattr\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_fp, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mclosed\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;28;01mFalse\u001b[39;00m)\n\u001b[0;32m--> 872\u001b[0m \u001b[43m\u001b[49m\u001b[38;5;28;43;01mwith\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_error_catcher\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\u001b[43m:\u001b[49m\n\u001b[1;32m 873\u001b[0m \u001b[43m \u001b[49m\u001b[43mdata\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43m \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_fp_read\u001b[49m\u001b[43m(\u001b[49m\u001b[43mamt\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mread1\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mread1\u001b[49m\u001b[43m)\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01mif\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;129;43;01mnot\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[43mfp_closed\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;28;43;01melse\u001b[39;49;00m\u001b[43m \u001b[49m\u001b[38;5;124;43mb\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/contextlib.py:158\u001b[0m, in \u001b[0;36m_GeneratorContextManager.__exit__\u001b[0;34m(self, typ, value, traceback)\u001b[0m\n\u001b[1;32m 157\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 158\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mgen\u001b[38;5;241m.\u001b[39mthrow(typ, value, traceback)\n\u001b[1;32m 159\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mStopIteration\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m exc:\n\u001b[1;32m 160\u001b[0m \u001b[38;5;66;03m# Suppress StopIteration *unless* it's the same exception that\u001b[39;00m\n\u001b[1;32m 161\u001b[0m \u001b[38;5;66;03m# was passed to throw(). This prevents a StopIteration\u001b[39;00m\n\u001b[1;32m 162\u001b[0m \u001b[38;5;66;03m# raised inside the \"with\" statement from being suppressed.\u001b[39;00m\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/urllib3/response.py:772\u001b[0m, in \u001b[0;36mHTTPResponse._error_catcher\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 771\u001b[0m arg \u001b[38;5;241m=\u001b[39m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mConnection broken: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00me\u001b[38;5;132;01m!r}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m--> 772\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m ProtocolError(arg, e) \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01me\u001b[39;00m\n\u001b[1;32m 774\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (HTTPException, \u001b[38;5;167;01mOSError\u001b[39;00m) \u001b[38;5;28;01mas\u001b[39;00m e:\n",
"\u001b[0;31mProtocolError\u001b[0m: ('Connection broken: IncompleteRead(170 bytes read, 18664049 more expected)', IncompleteRead(170 bytes read, 18664049 more expected))",
"\nDuring handling of the above exception, another exception occurred:\n",
"\u001b[0;31mChunkedEncodingError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[22], line 3\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[38;5;66;03m# Try changing the system prompt to the humorous version when you make the Brochure for Hugging Face:\u001b[39;00m\n\u001b[0;32m----> 3\u001b[0m \u001b[43mstream_brochure\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mNVIDIA\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mhttps://www.nvidia.com\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n",
"Cell \u001b[0;32mIn[20], line 6\u001b[0m, in \u001b[0;36mstream_brochure\u001b[0;34m(company_name, url)\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mstream_brochure\u001b[39m(company_name, url):\n\u001b[1;32m 2\u001b[0m stream \u001b[38;5;241m=\u001b[39m openai\u001b[38;5;241m.\u001b[39mchat\u001b[38;5;241m.\u001b[39mcompletions\u001b[38;5;241m.\u001b[39mcreate(\n\u001b[1;32m 3\u001b[0m model\u001b[38;5;241m=\u001b[39mMODEL,\n\u001b[1;32m 4\u001b[0m messages\u001b[38;5;241m=\u001b[39m[\n\u001b[1;32m 5\u001b[0m {\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrole\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124msystem\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcontent\u001b[39m\u001b[38;5;124m\"\u001b[39m: system_prompt},\n\u001b[0;32m----> 6\u001b[0m {\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mrole\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124muser\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcontent\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[43mget_brochure_user_prompt\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcompany_name\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m)\u001b[49m}\n\u001b[1;32m 7\u001b[0m ],\n\u001b[1;32m 8\u001b[0m stream\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m\n\u001b[1;32m 9\u001b[0m )\n\u001b[1;32m 11\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 12\u001b[0m display_handle \u001b[38;5;241m=\u001b[39m display(Markdown(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m), display_id\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\n",
"Cell \u001b[0;32mIn[16], line 4\u001b[0m, in \u001b[0;36mget_brochure_user_prompt\u001b[0;34m(company_name, url)\u001b[0m\n\u001b[1;32m 2\u001b[0m user_prompt \u001b[38;5;241m=\u001b[39m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mYou are looking at a company called: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mcompany_name\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 3\u001b[0m user_prompt \u001b[38;5;241m+\u001b[39m\u001b[38;5;241m=\u001b[39m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mHere are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m----> 4\u001b[0m user_prompt \u001b[38;5;241m+\u001b[39m\u001b[38;5;241m=\u001b[39m \u001b[43mget_all_details\u001b[49m\u001b[43m(\u001b[49m\u001b[43murl\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 5\u001b[0m user_prompt \u001b[38;5;241m=\u001b[39m user_prompt[:\u001b[38;5;241m5_000\u001b[39m] \u001b[38;5;66;03m# Truncate if more than 5,000 characters\u001b[39;00m\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m user_prompt\n",
"Cell \u001b[0;32mIn[13], line 8\u001b[0m, in \u001b[0;36mget_all_details\u001b[0;34m(url)\u001b[0m\n\u001b[1;32m 6\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m link \u001b[38;5;129;01min\u001b[39;00m links[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mlinks\u001b[39m\u001b[38;5;124m\"\u001b[39m]:\n\u001b[1;32m 7\u001b[0m result \u001b[38;5;241m+\u001b[39m\u001b[38;5;241m=\u001b[39m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;132;01m{\u001b[39;00mlink[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mtype\u001b[39m\u001b[38;5;124m'\u001b[39m]\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;130;01m\\n\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m----> 8\u001b[0m result \u001b[38;5;241m+\u001b[39m\u001b[38;5;241m=\u001b[39m \u001b[43mWebsite\u001b[49m\u001b[43m(\u001b[49m\u001b[43mlink\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43murl\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241m.\u001b[39mget_contents()\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m result\n",
"Cell \u001b[0;32mIn[4], line 10\u001b[0m, in \u001b[0;36mWebsite.__init__\u001b[0;34m(self, url)\u001b[0m\n\u001b[1;32m 8\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m__init__\u001b[39m(\u001b[38;5;28mself\u001b[39m, url):\n\u001b[1;32m 9\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39murl \u001b[38;5;241m=\u001b[39m url\n\u001b[0;32m---> 10\u001b[0m response \u001b[38;5;241m=\u001b[39m \u001b[43mrequests\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget\u001b[49m\u001b[43m(\u001b[49m\u001b[43murl\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 11\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mbody \u001b[38;5;241m=\u001b[39m response\u001b[38;5;241m.\u001b[39mcontent\n\u001b[1;32m 12\u001b[0m soup \u001b[38;5;241m=\u001b[39m BeautifulSoup(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mbody, \u001b[38;5;124m'\u001b[39m\u001b[38;5;124mhtml.parser\u001b[39m\u001b[38;5;124m'\u001b[39m)\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/api.py:73\u001b[0m, in \u001b[0;36mget\u001b[0;34m(url, params, **kwargs)\u001b[0m\n\u001b[1;32m 62\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21mget\u001b[39m(url, params\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mNone\u001b[39;00m, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs):\n\u001b[1;32m 63\u001b[0m \u001b[38;5;250m \u001b[39m\u001b[38;5;124mr\u001b[39m\u001b[38;5;124;03m\"\"\"Sends a GET request.\u001b[39;00m\n\u001b[1;32m 64\u001b[0m \n\u001b[1;32m 65\u001b[0m \u001b[38;5;124;03m :param url: URL for the new :class:`Request` object.\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 70\u001b[0m \u001b[38;5;124;03m :rtype: requests.Response\u001b[39;00m\n\u001b[1;32m 71\u001b[0m \u001b[38;5;124;03m \"\"\"\u001b[39;00m\n\u001b[0;32m---> 73\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mrequest\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mget\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mparams\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mparams\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/api.py:59\u001b[0m, in \u001b[0;36mrequest\u001b[0;34m(method, url, **kwargs)\u001b[0m\n\u001b[1;32m 55\u001b[0m \u001b[38;5;66;03m# By using the 'with' statement we are sure the session is closed, thus we\u001b[39;00m\n\u001b[1;32m 56\u001b[0m \u001b[38;5;66;03m# avoid leaving sockets open which can trigger a ResourceWarning in some\u001b[39;00m\n\u001b[1;32m 57\u001b[0m \u001b[38;5;66;03m# cases, and look like a memory leak in others.\u001b[39;00m\n\u001b[1;32m 58\u001b[0m \u001b[38;5;28;01mwith\u001b[39;00m sessions\u001b[38;5;241m.\u001b[39mSession() \u001b[38;5;28;01mas\u001b[39;00m session:\n\u001b[0;32m---> 59\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43msession\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrequest\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmethod\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mmethod\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43murl\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43murl\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/sessions.py:589\u001b[0m, in \u001b[0;36mSession.request\u001b[0;34m(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)\u001b[0m\n\u001b[1;32m 584\u001b[0m send_kwargs \u001b[38;5;241m=\u001b[39m {\n\u001b[1;32m 585\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mtimeout\u001b[39m\u001b[38;5;124m\"\u001b[39m: timeout,\n\u001b[1;32m 586\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mallow_redirects\u001b[39m\u001b[38;5;124m\"\u001b[39m: allow_redirects,\n\u001b[1;32m 587\u001b[0m }\n\u001b[1;32m 588\u001b[0m send_kwargs\u001b[38;5;241m.\u001b[39mupdate(settings)\n\u001b[0;32m--> 589\u001b[0m resp \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43msend\u001b[49m\u001b[43m(\u001b[49m\u001b[43mprep\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43msend_kwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 591\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m resp\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/sessions.py:746\u001b[0m, in \u001b[0;36mSession.send\u001b[0;34m(self, request, **kwargs)\u001b[0m\n\u001b[1;32m 743\u001b[0m \u001b[38;5;28;01mpass\u001b[39;00m\n\u001b[1;32m 745\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m stream:\n\u001b[0;32m--> 746\u001b[0m \u001b[43mr\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcontent\u001b[49m\n\u001b[1;32m 748\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m r\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/models.py:902\u001b[0m, in \u001b[0;36mResponse.content\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 900\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_content \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mNone\u001b[39;00m\n\u001b[1;32m 901\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[0;32m--> 902\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_content \u001b[38;5;241m=\u001b[39m \u001b[38;5;124mb\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;241m.\u001b[39mjoin(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39miter_content(CONTENT_CHUNK_SIZE)) \u001b[38;5;129;01mor\u001b[39;00m \u001b[38;5;124mb\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 904\u001b[0m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39m_content_consumed \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;01mTrue\u001b[39;00m\n\u001b[1;32m 905\u001b[0m \u001b[38;5;66;03m# don't need to release the connection; that's been handled by urllib3\u001b[39;00m\n\u001b[1;32m 906\u001b[0m \u001b[38;5;66;03m# since we exhausted the data.\u001b[39;00m\n",
"File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/models.py:822\u001b[0m, in \u001b[0;36mResponse.iter_content.<locals>.generate\u001b[0;34m()\u001b[0m\n\u001b[1;32m 820\u001b[0m \u001b[38;5;28;01myield from\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mraw\u001b[38;5;241m.\u001b[39mstream(chunk_size, decode_content\u001b[38;5;241m=\u001b[39m\u001b[38;5;28;01mTrue\u001b[39;00m)\n\u001b[1;32m 821\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m ProtocolError \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[0;32m--> 822\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m ChunkedEncodingError(e)\n\u001b[1;32m 823\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m DecodeError \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[1;32m 824\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m ContentDecodingError(e)\n",
"\u001b[0;31mChunkedEncodingError\u001b[0m: ('Connection broken: IncompleteRead(170 bytes read, 18664049 more expected)', IncompleteRead(170 bytes read, 18664049 more expected))"
]
}
],
"source": [
    "# Try another company. (You could also re-run the Hugging Face brochure with a humorous system prompt.)\n",
    "# Note: this NVIDIA run failed with a ChunkedEncodingError - see the traceback in the output below.\n",
"\n",
"stream_brochure(\"NVIDIA\", \"https://www.nvidia.com\")"
]
},
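  {
   "cell_type": "markdown",
   "id": "9c5f3e1a-7d4f-4a0b-9c8e-3f4a5b6c7d8e",
   "metadata": {},
   "source": [
    "Why did the NVIDIA call fail? The traceback shows a `ChunkedEncodingError`: only 170 bytes arrived of roughly 18 MB expected, so the server dropped the connection part-way through, likely because the page is huge and the request came from a non-browser client with no headers. A more defensive fetch - a browser-like `User-Agent`, a timeout, and error handling - usually helps. The next cell is a sketch with a hypothetical helper, `fetch_page_safely`; if you adopt the idea, fold it into the `Website` class defined earlier."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0d6a4f2b-8e5a-4b1c-9d0f-4a5b6c7d8e9f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A defensive fetch - a sketch, not a drop-in replacement for the Website class.\n",
    "# Some sites reject or cut off plain requests.get calls; headers, a timeout,\n",
    "# and catching RequestException make the scrape fail gracefully instead of crashing.\n",
    "\n",
    "def fetch_page_safely(url):\n",
    "    headers = {\"User-Agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)\"}\n",
    "    try:\n",
    "        response = requests.get(url, headers=headers, timeout=10)\n",
    "        response.raise_for_status()\n",
    "        return response.content\n",
    "    except requests.RequestException as e:\n",
    "        print(f\"Could not fetch {url}: {e}\")\n",
    "        return b\"\""
   ]
  },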
{
"cell_type": "markdown",
"id": "a27bf9e0-665f-4645-b66b-9725e2a959b5",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business applications</h2>\n",
" <span style=\"color:#181;\">In this exercise we extended the Day 1 code to make multiple LLM calls, and generate a document.\n",
"\n",
"This is perhaps the first example of Agentic AI design patterns, as we combined multiple calls to LLMs. This will feature more in Week 2, and then we will return to Agentic AI in a big way in Week 8 when we build a fully autonomous Agent solution.\n",
"\n",
"Generating content in this way is one of the very most common Use Cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "14b2454b-8ef8-4b5c-b928-053a15e0d553",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you move to Week 2 (which is tons of fun)</h2>\n",
" <span style=\"color:#900;\">Please see the week1 EXERCISE notebook for your challenge for the end of week 1. This will give you some essential practice working with Frontier APIs, and prepare you well for Week 2.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "17b64f0f-7d33-4493-985a-033d06e8db08",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#f71;\">A reminder on 2 useful resources</h2>\n",
" <span style=\"color:#f71;\">1. The resources for the course are available <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">here.</a><br/>\n",
" 2. I'm on LinkedIn <a href=\"https://www.linkedin.com/in/eddonner/\">here</a> and I love connecting with people taking the course!\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3de35771-455f-40b5-ba44-7c0a6b7c427a",
"metadata": {},
"outputs": [],
"source": []
  }
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}