From d9b856285e255c62ece304accbf865573e714792 Mon Sep 17 00:00:00 2001
From: Gopinath G <34595359+gopinath1998@users.noreply.github.com>
Date: Wed, 18 Dec 2024 10:52:38 +0530
Subject: [PATCH 01/29] Add files via upload
week-1 day-2 exercise
---
.../day2 EXERCISE.ipynb | 522 ++++++++++++++++++
1 file changed, 522 insertions(+)
create mode 100644 week1/community-contributions/day2 EXERCISE.ipynb
diff --git a/week1/community-contributions/day2 EXERCISE.ipynb b/week1/community-contributions/day2 EXERCISE.ipynb
new file mode 100644
index 0000000..f7a9c1b
--- /dev/null
+++ b/week1/community-contributions/day2 EXERCISE.ipynb
@@ -0,0 +1,522 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
+ "metadata": {},
+ "source": [
+ "# Welcome to your first assignment!\n",
+ "\n",
+ "Instructions are below. Please give this a try, and look in the solutions folder if you get stuck (or feel free to ask me!)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ada885d9-4d42-4d9b-97f0-74fbbbfe93a9",
+ "metadata": {},
+ "source": [
+ "
\n",
+ " \n",
+ " \n",
+ " \n",
+ " | \n",
+ " \n",
+ " Just before we get to the assignment --\n",
+ " I thought I'd take a second to point you at this page of useful resources for the course. This includes links to all the slides. \n",
+ " https://edwarddonner.com/2024/11/13/llm-engineering-resources/ \n",
+ " Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
+ " \n",
+ " | \n",
+ "
\n",
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458",
+ "metadata": {},
+ "source": [
+ "# HOMEWORK EXERCISE ASSIGNMENT\n",
+ "\n",
+ "Upgrade the day 1 project to summarize a webpage to use an Open Source model running locally via Ollama rather than OpenAI\n",
+ "\n",
+ "You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n",
+ "\n",
+ "**Benefits:**\n",
+ "1. No API charges - open-source\n",
+ "2. Data doesn't leave your box\n",
+ "\n",
+ "**Disadvantages:**\n",
+ "1. Significantly less power than Frontier Model\n",
+ "\n",
+ "## Recap on installation of Ollama\n",
+ "\n",
+ "Simply visit [ollama.com](https://ollama.com) and install!\n",
+ "\n",
+ "Once complete, the ollama server should already be running locally. \n",
+ "If you visit: \n",
+ "[http://localhost:11434/](http://localhost:11434/)\n",
+ "\n",
+ "You should see the message `Ollama is running`. \n",
+ "\n",
+ "If not, bring up a new Terminal (Mac) or Powershell (Windows) and enter `ollama serve` \n",
+ "And in another Terminal (Mac) or Powershell (Windows), enter `ollama pull llama3.2` \n",
+ "Then try [http://localhost:11434/](http://localhost:11434/) again.\n",
+ "\n",
+ "If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or Powershell, and change the code below from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`"
+ ]
+ },
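Before running the cells below, it can help to verify the server is actually reachable. A minimal sketch (the helper name `ollama_is_up` is mine, not part of the course code) that probes the root endpoint described above:

```python
import requests

def ollama_is_up(base_url="http://localhost:11434", timeout=2):
    # The root endpoint returns the "Ollama is running" banner with HTTP 200
    # when the server is up; any connection error means it is not.
    try:
        return requests.get(base_url, timeout=timeout).status_code == 200
    except requests.exceptions.RequestException:
        return False
```

If this returns `False`, run `ollama serve` in a Terminal or PowerShell first.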
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "\n",
+ "import requests\n",
+ "from bs4 import BeautifulSoup\n",
+ "from IPython.display import Markdown, display"
+ ]
+ },
+ {
+ "cell_type": "raw",
+ "id": "07e106bd-10c5-4365-b85b-397b5f059656",
+ "metadata": {},
+ "source": [
+ "# Constants\n",
+ "\n",
+ "OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
+ "HEADERS = {\"Content-Type\": \"application/json\"}\n",
+ "MODEL = \"llama3.2\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "dac0a679-599c-441f-9bf2-ddc73d35b940",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Create a messages list using the same format that we used for OpenAI\n",
+ "\n",
+ "messages = [\n",
+ " {\"role\": \"user\", \"content\": \"Describe some of the business applications of Generative AI\"}\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "7bb9c624-14f0-4945-a719-8ddb64f66f47",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "payload = {\n",
+ " \"model\": MODEL,\n",
+ " \"messages\": messages,\n",
+ " \"stream\": False\n",
+ " }"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "42b9f644-522d-4e05-a691-56e7658c0ea9",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Generative AI (Artificial Intelligence) has numerous business applications across various industries. Here are some examples:\n",
+ "\n",
+ "1. **Content Generation**: Generative AI can create high-quality content such as articles, social media posts, product descriptions, and more. This can help businesses save time and resources on content creation.\n",
+ "2. **Product Design**: Generative AI can be used to design new products, such as fashion items, jewelry, or electronics. It can also generate 3D models and prototypes, reducing the need for manual design and prototyping.\n",
+ "3. **Image and Video Generation**: Generative AI can create realistic images and videos that can be used in marketing campaigns, advertising, and social media. This can help businesses create engaging visual content without requiring extensive photography or videography skills.\n",
+ "4. **Chatbots and Virtual Assistants**: Generative AI can power chatbots and virtual assistants that provide customer support, answer frequently asked questions, and even engage in basic conversations.\n",
+ "5. **Predictive Maintenance**: Generative AI can analyze sensor data from machines and predict when maintenance is needed, reducing downtime and increasing efficiency.\n",
+ "6. **Personalized Recommendations**: Generative AI can analyze customer behavior and preferences to generate personalized product recommendations, improving the overall shopping experience.\n",
+ "7. **Customer Segmentation**: Generative AI can help businesses segment their customers based on their behavior, demographics, and preferences, enabling targeted marketing campaigns.\n",
+ "8. **Automated Writing Assistance**: Generative AI can assist writers with ideas, suggestions, and even full-text writing, helping to boost productivity and creativity.\n",
+ "9. **Data Analysis and Visualization**: Generative AI can analyze large datasets and generate insights, visualizations, and predictions that can inform business decisions.\n",
+ "10. **Creative Collaboration**: Generative AI can collaborate with human creatives, such as artists, designers, and writers, to generate new ideas, concepts, and content.\n",
+ "\n",
+ "Some specific industries where Generative AI is being applied include:\n",
+ "\n",
+ "1. **Marketing and Advertising**: generating personalized ads, content, and messaging.\n",
+ "2. **Finance and Banking**: automating financial analysis, risk assessment, and customer service.\n",
+ "3. **Healthcare**: generating medical images, analyzing patient data, and predicting disease outcomes.\n",
+ "4. **Manufacturing and Supply Chain**: optimizing production workflows, predicting demand, and identifying potential bottlenecks.\n",
+ "5. **Education**: creating personalized learning experiences, grading assignments, and developing educational content.\n",
+ "\n",
+ "These are just a few examples of the many business applications of Generative AI. As the technology continues to evolve, we can expect to see even more innovative uses across various industries.\n"
+ ]
+ }
+ ],
+ "source": [
+ "response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
+ "print(response.json()['message']['content'])"
+ ]
+ },
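The payload pattern above generalizes to any one-shot user message. A small helper sketch (the function name is mine) that assembles the `/api/chat` body:

```python
def build_chat_payload(model, user_content, stream=False):
    # Mirror the payload dict used above: a single user message,
    # non-streaming by default so the full reply arrives in one JSON object.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
        "stream": stream,
    }
```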
+ {
+ "cell_type": "markdown",
+ "id": "6a021f13-d6a1-4b96-8e18-4eae49d876fe",
+ "metadata": {},
+ "source": [
+ "# Introducing the ollama package\n",
+ "\n",
+ "And now we'll do the same thing, but using the elegant ollama python package instead of a direct HTTP call.\n",
+ "\n",
+ "Under the hood, it's making the same call as above to the ollama server running at localhost:11434"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "7745b9c4-57dc-4867-9180-61fa5db55eb8",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Generative AI has numerous business applications across various industries. Here are some examples:\n",
+ "\n",
+ "1. **Content Generation**: Generative AI can be used to generate high-quality content such as articles, social media posts, product descriptions, and more. This can save time and resources for businesses that need to produce a large volume of content.\n",
+ "2. **Product Design**: Generative AI can be used to design new products, such as furniture, electronics, and other consumer goods. It can also help optimize product designs by generating multiple versions and selecting the most suitable one based on various criteria.\n",
+ "3. **Marketing Automation**: Generative AI can be used to create personalized marketing campaigns, such as email marketing automation, social media ads, and more. This can help businesses tailor their marketing efforts to specific customer segments and improve engagement rates.\n",
+ "4. **Image and Video Editing**: Generative AI can be used to edit images and videos, such as removing background noise, correcting color casts, and enhancing video quality. This can save time and resources for businesses that need to create high-quality visual content.\n",
+ "5. **Chatbots and Virtual Assistants**: Generative AI can be used to create chatbots and virtual assistants that can understand natural language and respond accordingly. This can help businesses provide better customer service and improve user experience.\n",
+ "6. **Predictive Analytics**: Generative AI can be used to analyze large datasets and generate predictive models that can forecast future trends and behaviors. This can help businesses make data-driven decisions and stay ahead of the competition.\n",
+ "7. **Customer Segmentation**: Generative AI can be used to segment customers based on their behavior, demographics, and preferences. This can help businesses tailor their marketing efforts and improve customer engagement.\n",
+ "8. **Language Translation**: Generative AI can be used to translate languages in real-time, which can help businesses communicate with international clients and customers more effectively.\n",
+ "9. **Music Composition**: Generative AI can be used to compose music for various applications such as advertising, film scoring, and video game soundtracks.\n",
+ "10. **Financial Modeling**: Generative AI can be used to create financial models that can predict future revenue streams, costs, and other financial metrics. This can help businesses make more accurate predictions and inform better investment decisions.\n",
+ "\n",
+ "Some of the industries that are already leveraging generative AI include:\n",
+ "\n",
+ "* E-commerce\n",
+ "* Healthcare\n",
+ "* Finance\n",
+ "* Marketing\n",
+ "* Education\n",
+ "* Entertainment\n",
+ "* Manufacturing\n",
+ "\n",
+ "These applications have the potential to transform various business processes, improve customer experiences, and drive innovation in various sectors.\n"
+ ]
+ }
+ ],
+ "source": [
+ "import ollama\n",
+ "\n",
+ "response = ollama.chat(model=MODEL, messages=messages)\n",
+ "print(response['message']['content'])"
+ ]
+ },
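The ollama package can also stream the reply (`ollama.chat(..., stream=True)` yields chunks shaped like the non-streaming response — treat that shape as my assumption, not something the course code shows). A sketch of reassembling the streamed pieces:

```python
def join_stream_chunks(chunks):
    # Each streamed chunk carries a fragment under message.content;
    # concatenating the fragments reproduces the full reply text.
    return "".join(chunk["message"]["content"] for chunk in chunks)
```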
+ {
+ "cell_type": "markdown",
+ "id": "a4704e10-f5fb-4c15-a935-f046c06fb13d",
+ "metadata": {},
+ "source": [
+ "## Alternative approach - using OpenAI python library to connect to Ollama"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "23057e00-b6fc-4678-93a9-6b31cb704bff",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Generative AI has numerous business applications across various industries, transforming the way companies operate, create products, and interact with customers. Some key applications include:\n",
+ "\n",
+ "1. **Content Generation**: Automate content creation for marketing materials, such as blog posts, product descriptions, social media posts, and more, using Generative AI-powered tools.\n",
+ "2. **Product Design and Prototyping**: Use Generative AI to design new products, furniture, or other innovative solutions, reducing design time and costs while increasing creativity.\n",
+ "3. **Customer Experience (CX) Tools**: Leverage Generative AI to create personalized customer experiences, such as chatbots that can respond to customer queries and provide tailored recommendations.\n",
+ "4. **Predictive Maintenance**: Use Generative AI to analyze sensor data, identify potential issues, and predict maintenance needs for equipment, reducing downtime and increasing overall efficiency.\n",
+ "5. **Personalized Marketing**: Use Generative AI to create targeted marketing campaigns based on individual customer preferences, behaviors, and demographics.\n",
+ "6. **Content Optimization**: Utilize Generative AI to optimize content for better performance in search engine results pages (SERPs), ensuring improved visibility and traffic.\n",
+ "7. **Brand Storytelling**: Automate the creation of brand stories, taglines, and overall brand narrative using Generative AI-powered tools.\n",
+ "8. **Financial Modeling and Forecasting**: Use Generative AI to create financial models, forecasts, and predictions for businesses, helping them make data-driven decisions.\n",
+ "9. **Supply Chain Optimization**: Leverage Generative AI to optimize supply chain operations, predicting demand, reducing inventory levels, and streamlining logistics.\n",
+ "10. **Automated Transcription and Translation**: Use Generative AI to automate the transcription of audio and video files into written text, as well as translate materials across languages.\n",
+ "11. **Digital Asset Management**: Utilize Generative AI to manage digital assets, such as images, videos, and documents, and automatically generate metadata for easy search and retrieval.\n",
+ "12. **Chatbots and Virtual Assistants**: Create more advanced chatbots using Generative AI that can understand context, emotions, and intent, providing better customer service experiences.\n",
+ "\n",
+ "In healthcare, Generative AI is being applied to:\n",
+ "\n",
+ "1. Medical Imaging Analysis\n",
+ "2. Personalized Medicine\n",
+ "3. Patient Data Analysis\n",
+ "\n",
+ "In education, Generative AI is used in:\n",
+ "\n",
+ "1. Adaptive Learning Systems\n",
+ "2. Automated Grading and Feedback\n",
+ "\n",
+ "Generative AI has numerous applications across various industries, from creative content generation to predictive maintenance and supply chain optimization.\n",
+ "\n",
+ "Keep in mind that these are just a few examples of the many business applications of Generative AI as this technology continues to evolve at a rapid pace.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# There's actually an alternative approach that some people might prefer\n",
+ "# You can use the OpenAI client python library to call Ollama:\n",
+ "\n",
+ "from openai import OpenAI\n",
+ "ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
+ "\n",
+ "response = ollama_via_openai.chat.completions.create(\n",
+ " model=MODEL,\n",
+ " messages=messages\n",
+ ")\n",
+ "\n",
+ "print(response.choices[0].message.content)"
+ ]
+ },
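Because the OpenAI client is simply pointed at a different `base_url`, the call can be wrapped so later cells don't care which backend is serving. A minimal sketch (the `chat_once` name is my own):

```python
def chat_once(client, model, prompt):
    # Works with any OpenAI-compatible client -- the real OpenAI API,
    # or Ollama via the base_url trick shown above.
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```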
+ {
+ "cell_type": "markdown",
+ "id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898",
+ "metadata": {},
+ "source": [
+ "# NOW the exercise for you\n",
+ "\n",
+ "Take the code from day1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 28,
+ "id": "de923314-a427-4199-b1f9-0e60f85114c3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "\n",
+ "import requests\n",
+ "from bs4 import BeautifulSoup\n",
+ "from IPython.display import Markdown, display\n",
+ "\n",
+ "# A class to represent a Webpage\n",
+ "# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
+ "\n",
+ "# Some websites need you to use proper headers when fetching them:\n",
+ "headers = {\n",
+ " \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
+ "}\n",
+ "\n",
+ "class Website:\n",
+ "\n",
+ " def __init__(self, url):\n",
+ " \"\"\"\n",
+ " Create this Website object from the given url using the BeautifulSoup library\n",
+ " \"\"\"\n",
+ " self.url = url\n",
+ " response = requests.get(url, headers=headers)\n",
+ " soup = BeautifulSoup(response.content, 'html.parser')\n",
+ " self.title = soup.title.string if soup.title else \"No title found\"\n",
+ " for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
+ " irrelevant.decompose()\n",
+ " self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 31,
+ "id": "0cedada6-adc6-40dc-bdf3-bc8a3b6b3826",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Home\n",
+ "Outsmart\n",
+ "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
+ "About\n",
+ "Posts\n",
+ "Well, hi there.\n",
+ "I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
+ "very\n",
+ "amateur) and losing myself in\n",
+ "Hacker News\n",
+ ", nodding my head sagely to things I only half understand.\n",
+ "I’m the co-founder and CTO of\n",
+ "Nebula.io\n",
+ ". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
+ "acquired in 2021\n",
+ ".\n",
+ "We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
+ "patented\n",
+ "our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
+ "Connect\n",
+ "with me for more!\n",
+ "November 13, 2024\n",
+ "Mastering AI and LLM Engineering – Resources\n",
+ "October 16, 2024\n",
+ "From Software Engineer to AI Data Scientist – resources\n",
+ "August 6, 2024\n",
+ "Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
+ "June 26, 2024\n",
+ "Choosing the Right LLM: Toolkit and Resources\n",
+ "Navigation\n",
+ "Home\n",
+ "Outsmart\n",
+ "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
+ "About\n",
+ "Posts\n",
+ "Get in touch\n",
+ "ed [at] edwarddonner [dot] com\n",
+ "www.edwarddonner.com\n",
+ "Follow me\n",
+ "LinkedIn\n",
+ "Twitter\n",
+ "Facebook\n",
+ "Subscribe to newsletter\n",
+ "Type your email…\n",
+ "Subscribe\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Let's try one out. Change the website and add print statements to follow along.\n",
+ "\n",
+ "web_res = Website(\"https://edwarddonner.com\")\n",
+ "print(web_res.text)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "64d26055-756b-4095-a1d1-298fdf4fd8f1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "# Constants\n",
+ "\n",
+ "OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
+ "HEADERS = {\"Content-Type\": \"application/json\"}\n",
+ "MODEL = \"llama3.2\"\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 52,
+ "id": "65b08550-7506-415f-8612-e2395d6e145d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n",
+ "\n",
+ "system_prompt = \"You are an helper that assist user to provide crisp summary\\\n",
+ "of the website they pass in, respond with key points\"\n",
+ "\n",
+ "# A function that writes a User Prompt that asks for summaries of websites:\n",
+ "\n",
+ "def user_prompt_for(website):\n",
+ " user_prompt = f\"You are looking at a website titled {website.title}\"\n",
+ " user_prompt += \"\\nThe contents of this website is as follows; \\\n",
+ "please provide a short summary of this website in markdown. \\\n",
+ "If it includes news or announcements, then summarize these too with start bulletin.\\n\\n\"\n",
+ " user_prompt += website.text\n",
+ " return user_prompt\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 33,
+ "id": "36a0a2d0-f07a-40ac-a065-b713cdd5c028",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# See how this function creates exactly the format above\n",
+ "\n",
+ "def messages_for(website):\n",
+ " return [\n",
+ " {\"role\": \"system\", \"content\": system_prompt},\n",
+ " {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
+ " ]\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 50,
+ "id": "8c2b20ea-6a8e-41c9-be3b-f24a5b29e8de",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#website search\n",
+ "\n",
+ "web_msg=Website(\"https://www.cricbuzz.com/cricket-match-squads/91796/aus-vs-ind-3rd-test-india-tour-of-australia-2024-25\")\n",
+ "messages=messages_for(web_msg)\n",
+ "\n",
+ "payload = {\n",
+ " \"model\": MODEL,\n",
+ " \"messages\": messages,\n",
+ " \"stream\": False\n",
+ " }"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 54,
+ "id": "e5636b3b-7763-4f9c-ab18-88aa25b50de6",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "**Summary of the Website**\n",
+ "=========================\n",
+ "\n",
+ "* The website provides live updates and information about the 3rd Test match between Australia and India as part of India's tour of Australia in the 2024-25 season.\n",
+ "* It includes news, scores, stats, and analysis from the match.\n",
+ "* The website is affiliated with Cricbuzz.com, a popular online cricket platform.\n",
+ "\n",
+ "**News and Announcements**\n",
+ "==========================\n",
+ "\n",
+ "* **Rashid Khan to miss the rest of the series**: Australian all-rounder Mitchell Marsh's teammate Rashid Khan has been ruled out of the remaining Tests due to a knee injury.\n",
+ "* **Bumrah to feature in the third Test**: Indian fast bowler Jasprit Bumrah is expected to return for the third Test, which starts on January 5 at the Sydney Cricket Ground.\n"
+ ]
+ }
+ ],
+ "source": [
+ "#Using Ollama to run it in the local\n",
+ "response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
+ "print(response.json()['message']['content'])"
+ ]
+ }
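The pieces above can be tied together into a single function. A sketch, assuming a `website` object with `.title` and `.text` like the `Website` class above; the injectable `post` parameter is my addition so the function can be exercised without a live server:

```python
import requests

OLLAMA_API = "http://localhost:11434/api/chat"
HEADERS = {"Content-Type": "application/json"}
MODEL = "llama3.2"

def summarize(website, post=requests.post):
    # Build the same system/user message pair as above and send it
    # to the local Ollama chat endpoint.
    messages = [
        {"role": "system", "content": "You summarize websites; respond with key points."},
        {"role": "user", "content": f"You are looking at a website titled {website.title}\n\n{website.text}"},
    ]
    payload = {"model": MODEL, "messages": messages, "stream": False}
    return post(OLLAMA_API, json=payload, headers=HEADERS).json()["message"]["content"]
```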
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
From 9179ced9b101711e41d873bd370a9b73fd9646ec Mon Sep 17 00:00:00 2001
From: Cloud LLama <163757327+cloudllama@users.noreply.github.com>
Date: Wed, 18 Dec 2024 06:42:11 -0500
Subject: [PATCH 02/29] Correct typo in week4/day4.ipynb
Change function from stream_code_quen to stream_code_qwen
---
week4/day4.ipynb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/week4/day4.ipynb b/week4/day4.ipynb
index 722a233..0df69a1 100644
--- a/week4/day4.ipynb
+++ b/week4/day4.ipynb
@@ -609,7 +609,7 @@
"metadata": {},
"outputs": [],
"source": [
- "def stream_code_quen(python):\n",
+ "def stream_code_qwen(python):\n",
" tokenizer = AutoTokenizer.from_pretrained(code_qwen)\n",
" messages = messages_for(python)\n",
" text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\n",
From 25e24cf34307db197ce2c0a7ffb77c98035e8f46 Mon Sep 17 00:00:00 2001
From: SIFAT IMTIAZ
Date: Wed, 18 Dec 2024 20:59:04 +0600
Subject: [PATCH 03/29] Add files via upload
---
week2/community-contributions/TTS_STT.ipynb | 334 ++++++++++++++++++++
1 file changed, 334 insertions(+)
create mode 100644 week2/community-contributions/TTS_STT.ipynb
diff --git a/week2/community-contributions/TTS_STT.ipynb b/week2/community-contributions/TTS_STT.ipynb
new file mode 100644
index 0000000..3409bfd
--- /dev/null
+++ b/week2/community-contributions/TTS_STT.ipynb
@@ -0,0 +1,334 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "a60e0f78-4637-4318-9ab6-309c3f7f2799",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "API Key set\n"
+ ]
+ }
+ ],
+ "source": [
+ "import os\n",
+ "import json\n",
+ "from dotenv import load_dotenv\n",
+ "from openai import OpenAI\n",
+ "\n",
+ "load_dotenv()\n",
+ "\n",
+ "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
+ "if openai_api_key:\n",
+ " print(\"API Key set\")\n",
+ "else:\n",
+ " print(\"OpenAI API Key not set\")\n",
+ " \n",
+ "MODEL = \"gpt-4o-mini\"\n",
+ "openai = OpenAI()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "67026ef0-23be-4101-9371-b11f96f505bf",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# TTS\n",
+ "\n",
+ "from pydub import AudioSegment\n",
+ "import os\n",
+ "import subprocess\n",
+ "from io import BytesIO\n",
+ "import tempfile\n",
+ "\n",
+ "# Set custom temp directory\n",
+ "custom_temp_dir = r\"D:\\projects\\llm_engineering-main\\temp\"\n",
+ "os.makedirs(custom_temp_dir, exist_ok=True)\n",
+ "\n",
+ "# Explicitly set FFmpeg paths\n",
+ "AudioSegment.converter = r\"D:\\Anaconda3\\envs\\llms\\Library\\bin\\ffmpeg.exe\"\n",
+ "AudioSegment.ffprobe = r\"D:\\Anaconda3\\envs\\llms\\Library\\bin\\ffprobe.exe\"\n",
+ "\n",
+ "def play_audio_with_ffplay(audio_segment, temp_dir):\n",
+ " # Explicitly create and manage a temporary file\n",
+ " temp_file_path = os.path.join(temp_dir, \"temp_output.wav\")\n",
+ " \n",
+ " # Export the audio to the temporary file\n",
+ " audio_segment.export(temp_file_path, format=\"wav\")\n",
+ " \n",
+ " try:\n",
+ " # Play the audio using ffplay\n",
+ " subprocess.call([\"ffplay\", \"-nodisp\", \"-autoexit\", temp_file_path])\n",
+ " finally:\n",
+ " # Clean up the temporary file after playback\n",
+ " if os.path.exists(temp_file_path):\n",
+ " os.remove(temp_file_path)\n",
+ "\n",
+ "def talker(message):\n",
+ " # Mocked OpenAI response for testing\n",
+ " response = openai.audio.speech.create(\n",
+ " model=\"tts-1\",\n",
+ " voice=\"nova\",\n",
+ " input=message\n",
+ " )\n",
+ " \n",
+ " # Handle audio stream\n",
+ " audio_stream = BytesIO(response.content)\n",
+ " audio = AudioSegment.from_file(audio_stream, format=\"mp3\")\n",
+ " \n",
+ " # Play the audio\n",
+ " play_audio_with_ffplay(audio, custom_temp_dir)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "12c66b44-293a-4bf9-b81e-0f6905fbf607",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "D:\\anaconda3\\envs\\llms\\Lib\\site-packages\\whisper\\__init__.py:150: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
+ " checkpoint = torch.load(fp, map_location=device)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# STT Whisper\n",
+ "\n",
+ "import whisper\n",
+ "import sounddevice as sd\n",
+ "import numpy as np\n",
+ "from scipy.io.wavfile import write\n",
+ "\n",
+ "def record_audio(temp_dir, duration=5, samplerate=16000, device_id=2):\n",
+ " # print(f\"Recording for {duration} seconds...\")\n",
+ " sd.default.device = (device_id, None)\n",
+ " audio = sd.rec(int(duration * samplerate), samplerate=samplerate, channels=1, dtype=\"int16\")\n",
+ " sd.wait() # Wait until the recording is finished\n",
+ " \n",
+ " audio_path = os.path.join(temp_dir, \"mic_input.wav\")\n",
+ " write(audio_path, samplerate, audio)\n",
+ " # print(f\"Audio recorded and saved to {audio_path}\")\n",
+ "\n",
+ " return audio_path\n",
+ "\n",
+ "\n",
+ "whisper_model = whisper.load_model(\"base\")\n",
+ "def transcribe_audio(audio_path): \n",
+ " # print(\"Transcribing audio...\")\n",
+ " result = whisper_model.transcribe(audio_path, language=\"en\")\n",
+ " return result[\"text\"]\n",
+ "\n",
+ "def mic_to_text():\n",
+ " audio_path = record_audio(custom_temp_dir, duration=10)\n",
+ " transcription = transcribe_audio(audio_path)\n",
+ " # print(f\"Transcription: {transcription}\")\n",
+ " return transcription"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "0156c106-1844-444a-9a22-88c3475805d9",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Chat Functions\n",
+ "\n",
+ "import requests\n",
+ "history = [{\"role\": \"system\", \"content\": \"You are Nova the friendly robot. Reply within couple of sentences.\"}]\n",
+ "\n",
+ "def run_chat():\n",
+ " running = True\n",
+ " while running:\n",
+ " input_text = input(\"press Enter to talk\") \n",
+ " user_input = input_text if input_text.strip() else mic_to_text()\n",
+ " running = False if input_text == \"bye\" or user_input.strip() == \"bye\" else True\n",
+ " print(f\"\\nYou: {user_input}\\n\\n\")\n",
+ " history.append({\"role\": \"user\", \"content\": user_input}) \n",
+ " api_run = requests.post(\n",
+ " \"http://localhost:11434/api/chat\", \n",
+ " json={\n",
+ " \"model\": \"llama3.2\",\n",
+ " \"messages\": history,\n",
+ " \"stream\": False\n",
+ " }, \n",
+ " headers={\"Content-Type\": \"application/json\"}\n",
+ " )\n",
+ " output_message = api_run.json()['message']['content']\n",
+ " print(f\"Nova: {output_message}\\n\\n\") \n",
+ " talker(output_message)\n",
+ " history.append({\"role\": \"assistant\", \"content\": output_message})"
+ ]
+ },
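One subtlety in the loop above: when the user types "bye", the flag is cleared but that final turn is still sent to the model and spoken aloud. A small predicate (my own refactor sketch, not from the original code) makes the exit check explicit so it can run before the request:

```python
def should_exit(user_input):
    # Stop as soon as the user says "bye", ignoring case and whitespace,
    # instead of processing one more model turn.
    return user_input.strip().lower() == "bye"
```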
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "de61b54e-387e-4480-a592-c78e3245ddde",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdin",
+ "output_type": "stream",
+ "text": [
+ "press Enter to talk \n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "You: Hi there. Where am I talking to?\n",
+ "\n",
+ "\n",
+ "Nova: Beep boop! You're talking to me, Nova, a friendly robot designed to assist and chat with users like you. I'm happy to have you here!\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "name": "stdin",
+ "output_type": "stream",
+ "text": [
+ "press Enter to talk \n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "You: Do you know my name?\n",
+ "\n",
+ "\n",
+ "Nova: No, I don't have any information about your personal identity. This is the start of our conversation, so we're starting from scratch! Would you like to tell me your name, or keep it a secret?\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "name": "stdin",
+ "output_type": "stream",
+ "text": [
+ "press Enter to talk Sifat\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "You: Sifat\n",
+ "\n",
+ "\n",
+ "Nova: Beep boop, nice to meet you, Sifat! I'm glad we could have a brief introduction. What would you like to talk about today? The weather, hobbies, or something else?\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "name": "stdin",
+ "output_type": "stream",
+ "text": [
+ "press Enter to talk \n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "You: Nothing much today. I was just wondering that how you react because I am using.\n",
+ "\n",
+ "\n",
+ "Nova: Beep boop, I see! As a robot, my purpose is to assist and provide helpful responses, regardless of the user's background or context. My reactions are programmed to be neutral and friendly, so I don't have personal biases or opinions. I'm here to help and learn from our conversation, Sifat!\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "name": "stdin",
+ "output_type": "stream",
+ "text": [
+ "press Enter to talk \n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "You: So, do you keep on learning while we are having our conversations? Do you train yourself like that?\n",
+ "\n",
+ "\n",
+ "Nova: Beep boop, yes! That's one of the ways I learn and improve. The conversations I have with users like you help me refine my language understanding and generation capabilities. My training data is constantly updated and expanded to include new topics, nuances, and examples. So, our conversation right now helps me become a better conversationalist for others in the future!\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "name": "stdin",
+ "output_type": "stream",
+ "text": [
+ "press Enter to talk bye\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "You: bye\n",
+ "\n",
+ "\n",
+ "Nova: Beep boop, it was nice chatting with you, Sifat! Feel free to come back and talk anytime you'd like. Have a great day, and I'll be here when you're ready for our next conversation! Bye for now!\n",
+ "\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "run_chat()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ce16bee7-6ea6-46d5-a407-385e6ae31db8",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
From 11cc7542fbd8e44b7a422075f37a577b5c3e648d Mon Sep 17 00:00:00 2001
From: SIFAT IMTIAZ
Date: Wed, 18 Dec 2024 21:05:21 +0600
Subject: [PATCH 04/29] Add files via upload
---
week2/community-contributions/TTS_STT.ipynb | 154 +-------------------
1 file changed, 8 insertions(+), 146 deletions(-)
diff --git a/week2/community-contributions/TTS_STT.ipynb b/week2/community-contributions/TTS_STT.ipynb
index 3409bfd..f1347c0 100644
--- a/week2/community-contributions/TTS_STT.ipynb
+++ b/week2/community-contributions/TTS_STT.ipynb
@@ -2,18 +2,10 @@
"cells": [
{
"cell_type": "code",
- "execution_count": 1,
+ "execution_count": null,
"id": "a60e0f78-4637-4318-9ab6-309c3f7f2799",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "API Key set\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"import os\n",
"import json\n",
@@ -34,7 +26,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": null,
"id": "67026ef0-23be-4101-9371-b11f96f505bf",
"metadata": {},
"outputs": [],
@@ -88,19 +80,10 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": null,
"id": "12c66b44-293a-4bf9-b81e-0f6905fbf607",
"metadata": {},
- "outputs": [
- {
- "name": "stderr",
- "output_type": "stream",
- "text": [
- "D:\\anaconda3\\envs\\llms\\Lib\\site-packages\\whisper\\__init__.py:150: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
- " checkpoint = torch.load(fp, map_location=device)\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"# STT Whisper\n",
"\n",
@@ -137,7 +120,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": null,
"id": "0156c106-1844-444a-9a22-88c3475805d9",
"metadata": {},
"outputs": [],
@@ -172,131 +155,10 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": null,
"id": "de61b54e-387e-4480-a592-c78e3245ddde",
"metadata": {},
- "outputs": [
- {
- "name": "stdin",
- "output_type": "stream",
- "text": [
- "press Enter to talk \n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "You: Hi there. Where am I talking to?\n",
- "\n",
- "\n",
- "Nova: Beep boop! You're talking to me, Nova, a friendly robot designed to assist and chat with users like you. I'm happy to have you here!\n",
- "\n",
- "\n"
- ]
- },
- {
- "name": "stdin",
- "output_type": "stream",
- "text": [
- "press Enter to talk \n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "You: Do you know my name?\n",
- "\n",
- "\n",
- "Nova: No, I don't have any information about your personal identity. This is the start of our conversation, so we're starting from scratch! Would you like to tell me your name, or keep it a secret?\n",
- "\n",
- "\n"
- ]
- },
- {
- "name": "stdin",
- "output_type": "stream",
- "text": [
- "press Enter to talk Sifat\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "You: Sifat\n",
- "\n",
- "\n",
- "Nova: Beep boop, nice to meet you, Sifat! I'm glad we could have a brief introduction. What would you like to talk about today? The weather, hobbies, or something else?\n",
- "\n",
- "\n"
- ]
- },
- {
- "name": "stdin",
- "output_type": "stream",
- "text": [
- "press Enter to talk \n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "You: Nothing much today. I was just wondering that how you react because I am using.\n",
- "\n",
- "\n",
- "Nova: Beep boop, I see! As a robot, my purpose is to assist and provide helpful responses, regardless of the user's background or context. My reactions are programmed to be neutral and friendly, so I don't have personal biases or opinions. I'm here to help and learn from our conversation, Sifat!\n",
- "\n",
- "\n"
- ]
- },
- {
- "name": "stdin",
- "output_type": "stream",
- "text": [
- "press Enter to talk \n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "You: So, do you keep on learning while we are having our conversations? Do you train yourself like that?\n",
- "\n",
- "\n",
- "Nova: Beep boop, yes! That's one of the ways I learn and improve. The conversations I have with users like you help me refine my language understanding and generation capabilities. My training data is constantly updated and expanded to include new topics, nuances, and examples. So, our conversation right now helps me become a better conversationalist for others in the future!\n",
- "\n",
- "\n"
- ]
- },
- {
- "name": "stdin",
- "output_type": "stream",
- "text": [
- "press Enter to talk bye\n"
- ]
- },
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "You: bye\n",
- "\n",
- "\n",
- "Nova: Beep boop, it was nice chatting with you, Sifat! Feel free to come back and talk anytime you'd like. Have a great day, and I'll be here when you're ready for our next conversation! Bye for now!\n",
- "\n",
- "\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"run_chat()"
]
From d73ac9aa17f149b27c45846cb696d62e2968c89e Mon Sep 17 00:00:00 2001
From: codenigma1
Date: Thu, 19 Dec 2024 02:27:38 +1100
Subject: [PATCH 05/29] day 1 javascript website challenge added
---
...-webscraping-selenium-for-javascript.ipynb | 871 ++++++++++++++++++
1 file changed, 871 insertions(+)
create mode 100644 week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb
diff --git a/week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb b/week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb
new file mode 100644
index 0000000..8ec191a
--- /dev/null
+++ b/week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb
@@ -0,0 +1,871 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
+ "metadata": {},
+ "source": [
+ "# Instant Gratification\n",
+ "\n",
+ "## Your first Frontier LLM Project!\n",
+ "\n",
+ "Let's build a useful LLM solution - in a matter of minutes.\n",
+ "\n",
+ "By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n",
+ "\n",
+ "Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
+ "\n",
+ "Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n",
+ "\n",
+ "## If you're new to Jupyter Lab\n",
+ "\n",
+ "Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n",
+ "\n",
+    "I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Lab, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
+ "\n",
+ "If you prefer to work in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "\n",
+ "import os\n",
+ "import requests\n",
+ "from dotenv import load_dotenv\n",
+ "from bs4 import BeautifulSoup\n",
+ "from IPython.display import Markdown, display\n",
+ "from openai import OpenAI\n",
+ "\n",
+ "# If you get an error running this cell, then please head over to the troubleshooting notebook!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
+ "metadata": {},
+ "source": [
+ "# Connecting to OpenAI\n",
+ "\n",
+ "The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n",
+ "\n",
+ "## Troubleshooting if you have problems:\n",
+ "\n",
+ "Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n",
+ "\n",
+ "If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n",
+ "\n",
+ "Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
+ "\n",
+ "Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "API key found and looks good so far!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Load environment variables in a file called .env\n",
+ "\n",
+ "load_dotenv()\n",
+ "api_key = os.getenv('OPENAI_API_KEY')\n",
+ "\n",
+ "# Check the key\n",
+ "\n",
+ "if not api_key:\n",
+ " print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
+ "elif not api_key.startswith(\"sk-proj-\"):\n",
+    "    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
+ "elif api_key.strip() != api_key:\n",
+ " print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
+ "else:\n",
+ " print(\"API key found and looks good so far!\")\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "openai = OpenAI()\n",
+ "\n",
+ "# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
+ "# If it STILL doesn't work (horrors!) then please see the troubleshooting notebook, or try the below line instead:\n",
+ "# openai = OpenAI(api_key=\"your-key-here-starting-sk-proj-\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "442fc84b-0815-4f40-99ab-d9a5da6bda91",
+ "metadata": {},
+ "source": [
+ "# Let's make a quick call to a Frontier model to get started, as a preview!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Hello! Welcome! I'm glad to have your first message here. How can I assist you today?\n"
+ ]
+ }
+ ],
+ "source": [
+ "# To give you a preview -- calling OpenAI with these messages is this easy:\n",
+ "\n",
+ "message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n",
+ "response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n",
+ "print(response.choices[0].message.content)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2aa190e5-cb31-456a-96cc-db109919cd78",
+ "metadata": {},
+ "source": [
+ "## OK onwards with our first project"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "c5e793b2-6775-426a-a139-4848291d0463",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A class to represent a Webpage\n",
+ "# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
+ "\n",
+ "# Some websites need you to use proper headers when fetching them:\n",
+ "headers = {\n",
+ " \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
+ "}\n",
+ "\n",
+ "class Website:\n",
+ "\n",
+ " def __init__(self, url):\n",
+ " \"\"\"\n",
+ " Create this Website object from the given url using the BeautifulSoup library\n",
+ " \"\"\"\n",
+ " self.url = url\n",
+ " response = requests.get(url, headers=headers)\n",
+ " soup = BeautifulSoup(response.content, 'html.parser')\n",
+ " self.title = soup.title.string if soup.title else \"No title found\"\n",
+ " for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
+ " irrelevant.decompose()\n",
+ " self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Home - Edward Donner\n",
+ "Home\n",
+ "Outsmart\n",
+ "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
+ "About\n",
+ "Posts\n",
+ "Well, hi there.\n",
+ "I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
+ "very\n",
+ "amateur) and losing myself in\n",
+ "Hacker News\n",
+ ", nodding my head sagely to things I only half understand.\n",
+ "I’m the co-founder and CTO of\n",
+ "Nebula.io\n",
+ ". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
+ "acquired in 2021\n",
+ ".\n",
+ "We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
+ "patented\n",
+ "our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
+ "Connect\n",
+ "with me for more!\n",
+ "November 13, 2024\n",
+ "Mastering AI and LLM Engineering – Resources\n",
+ "October 16, 2024\n",
+ "From Software Engineer to AI Data Scientist – resources\n",
+ "August 6, 2024\n",
+ "Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
+ "June 26, 2024\n",
+ "Choosing the Right LLM: Toolkit and Resources\n",
+ "Navigation\n",
+ "Home\n",
+ "Outsmart\n",
+ "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
+ "About\n",
+ "Posts\n",
+ "Get in touch\n",
+ "ed [at] edwarddonner [dot] com\n",
+ "www.edwarddonner.com\n",
+ "Follow me\n",
+ "LinkedIn\n",
+ "Twitter\n",
+ "Facebook\n",
+ "Subscribe to newsletter\n",
+ "Type your email…\n",
+ "Subscribe\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Let's try one out. Change the website and add print statements to follow along.\n",
+ "\n",
+ "ed = Website(\"https://edwarddonner.com\")\n",
+ "print(ed.title)\n",
+ "print(ed.text)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
+ "metadata": {},
+ "source": [
+ "## Types of prompts\n",
+ "\n",
+ "You may know this already - but if not, you will get very familiar with it!\n",
+ "\n",
+ "Models like GPT4o have been trained to receive instructions in a particular way.\n",
+ "\n",
+ "They expect to receive:\n",
+ "\n",
+ "**A system prompt** that tells them what task they are performing and what tone they should use\n",
+ "\n",
+ "**A user prompt** -- the conversation starter that they should reply to"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n",
+ "\n",
+ "system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
+ "and provides a short summary, ignoring text that might be navigation related. \\\n",
+ "Respond in markdown.\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A function that writes a User Prompt that asks for summaries of websites:\n",
+ "\n",
+ "def user_prompt_for(website):\n",
+ " user_prompt = f\"You are looking at a website titled {website.title}\"\n",
+ " user_prompt += \"\\nThe contents of this website is as follows; \\\n",
+ "please provide a short summary of this website in markdown. \\\n",
+ "If it includes news or announcements, then summarize these too.\\n\\n\"\n",
+ " user_prompt += website.text\n",
+ " return user_prompt"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "You are looking at a website titled Home - Edward Donner\n",
+ "The contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\n",
+ "\n",
+ "Home\n",
+ "Outsmart\n",
+ "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
+ "About\n",
+ "Posts\n",
+ "Well, hi there.\n",
+ "I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
+ "very\n",
+ "amateur) and losing myself in\n",
+ "Hacker News\n",
+ ", nodding my head sagely to things I only half understand.\n",
+ "I’m the co-founder and CTO of\n",
+ "Nebula.io\n",
+ ". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
+ "acquired in 2021\n",
+ ".\n",
+ "We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
+ "patented\n",
+ "our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
+ "Connect\n",
+ "with me for more!\n",
+ "November 13, 2024\n",
+ "Mastering AI and LLM Engineering – Resources\n",
+ "October 16, 2024\n",
+ "From Software Engineer to AI Data Scientist – resources\n",
+ "August 6, 2024\n",
+ "Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
+ "June 26, 2024\n",
+ "Choosing the Right LLM: Toolkit and Resources\n",
+ "Navigation\n",
+ "Home\n",
+ "Outsmart\n",
+ "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
+ "About\n",
+ "Posts\n",
+ "Get in touch\n",
+ "ed [at] edwarddonner [dot] com\n",
+ "www.edwarddonner.com\n",
+ "Follow me\n",
+ "LinkedIn\n",
+ "Twitter\n",
+ "Facebook\n",
+ "Subscribe to newsletter\n",
+ "Type your email…\n",
+ "Subscribe\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(user_prompt_for(ed))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
+ "metadata": {},
+ "source": [
+ "## Messages\n",
+ "\n",
+ "The API from OpenAI expects to receive messages in a particular structure.\n",
+ "Many of the other APIs share this structure:\n",
+ "\n",
+ "```\n",
+ "[\n",
+ " {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
+ " {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
+ "]\n",
+    "```\n",
+    "To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "messages = [\n",
+ " {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n",
+ " {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Oh, I don’t know, maybe it’s 22? Just kidding—it's 4. Basic math is still safe!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# To give you a preview -- calling OpenAI with system and user messages:\n",
+ "\n",
+ "response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
+ "print(response.choices[0].message.content)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47",
+ "metadata": {},
+ "source": [
+ "## And now let's build useful messages for GPT-4o-mini, using a function"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# See how this function creates exactly the format above\n",
+ "\n",
+ "def messages_for(website):\n",
+ " return [\n",
+ " {\"role\": \"system\", \"content\": system_prompt},\n",
+ " {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
+ " ]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "id": "36478464-39ee-485c-9f3f-6a4e458dbc9c",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[{'role': 'system',\n",
+ " 'content': 'You are an assistant that analyzes the contents of a website and provides a short summary, ignoring text that might be navigation related. Respond in markdown.'},\n",
+ " {'role': 'user',\n",
+ " 'content': 'You are looking at a website titled Home - Edward Donner\\nThe contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\\n\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nWell, hi there.\\nI’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\\nvery\\namateur) and losing myself in\\nHacker News\\n, nodding my head sagely to things I only half understand.\\nI’m the co-founder and CTO of\\nNebula.io\\n. We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\\nacquired in 2021\\n.\\nWe work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\\npatented\\nour matching model, and our award-winning platform has happy customers and tons of press coverage.\\nConnect\\nwith me for more!\\nNovember 13, 2024\\nMastering AI and LLM Engineering – Resources\\nOctober 16, 2024\\nFrom Software Engineer to AI Data Scientist – resources\\nAugust 6, 2024\\nOutsmart LLM Arena – a battle of diplomacy and deviousness\\nJune 26, 2024\\nChoosing the Right LLM: Toolkit and Resources\\nNavigation\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nGet in touch\\ned [at] edwarddonner [dot] com\\nwww.edwarddonner.com\\nFollow me\\nLinkedIn\\nTwitter\\nFacebook\\nSubscribe to newsletter\\nType your email…\\nSubscribe'}]"
+ ]
+ },
+ "execution_count": 15,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Try this out, and then try for a few more websites\n",
+ "\n",
+ "messages_for(ed)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0",
+ "metadata": {},
+ "source": [
+ "## Time to bring it together - the API for OpenAI is very simple!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# And now: call the OpenAI API. You will get very familiar with this!\n",
+ "\n",
+ "def summarize(url):\n",
+ " website = Website(url)\n",
+ " response = openai.chat.completions.create(\n",
+ " model = \"gpt-4o-mini\",\n",
+ " messages = messages_for(website)\n",
+ " )\n",
+ " return response.choices[0].message.content"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "'# Summary of Edward Donner\\'s Website\\n\\nEdward Donner\\'s website serves as a personal and professional hub for his interests and projects, particularly in the domains of code writing, large language models (LLMs), and artificial intelligence (AI). \\n\\n## About Ed\\n- Ed describes himself as a coder and enthusiast of LLMs, highlighting his background as the co-founder and CTO of Nebula.io, a company focused on leveraging AI to enhance talent discovery and management. \\n- He has a history as the founder and CEO of the AI startup untapt, which was acquired in 2021.\\n- Outside of his tech interests, Ed enjoys DJing and amateur electronic music production.\\n\\n## Key Projects and Features\\n- **Outsmart**: A platform where LLMs compete against each other in strategic scenarios.\\n \\n## Recent Posts\\n- **November 13, 2024**: \"Mastering AI and LLM Engineering – Resources\" - A collection of resources for those looking to deepen their skills in AI and LLM engineering.\\n- **October 16, 2024**: \"From Software Engineer to AI Data Scientist – Resources\" - Guidance and tools for transitioning from software engineering to AI data science roles.\\n- **August 6, 2024**: \"Outsmart LLM Arena – a battle of diplomacy and deviousness\" - A focus on the unique features of the Outsmart program.\\n- **June 26, 2024**: \"Choosing the Right LLM: Toolkit and Resources\" - A resource list for selecting suitable LLMs for various applications.\\n\\nOverall, the website presents Ed as a tech-savvy individual with a passion for sharing knowledge and resources in the AI field.'"
+ ]
+ },
+ "execution_count": 17,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "summarize(\"https://edwarddonner.com\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "id": "3d926d59-450e-4609-92ba-2d6f244f1342",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A function to display this nicely in the Jupyter output, using markdown\n",
+ "\n",
+ "def display_summary(url):\n",
+ " summary = summarize(url)\n",
+ " display(Markdown(summary))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "id": "3018853a-445f-41ff-9560-d925d1774b2f",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/markdown": [
+ "# Website Summary: Edward Donner\n",
+ "\n",
+ "Edward Donner's website showcases his interests and professional background, particularly in coding and experimenting with large language models (LLMs). He is the co-founder and CTO of Nebula.io, a company focused on applying AI to enhance talent discovery and management. Previously, he founded the AI startup untapt, which was acquired in 2021. \n",
+ "\n",
+ "## Key Features:\n",
+ "- **Outsmart**: A unique platform where LLMs compete in strategy games that test diplomacy and cunning. \n",
+ "- **Blog Posts**: Various posts offering resources for mastering AI and LLM engineering, transitioning from software engineering to AI data science, and guidance on choosing the right LLM.\n",
+ "\n",
+ "## Recent Announcements:\n",
+ "- **November 13, 2024**: Post on \"Mastering AI and LLM Engineering.\"\n",
+ "- **October 16, 2024**: Insights on \"From Software Engineer to AI Data Scientist.\"\n",
+ "- **August 6, 2024**: Information on \"Outsmart LLM Arena.\"\n",
+ "- **June 26, 2024**: Resources for \"Choosing the Right LLM.\" \n",
+ "\n",
+ "Overall, the website serves as a platform for sharing knowledge and fostering connections within the AI and LLM community."
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "display_summary(\"https://edwarddonner.com\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624",
+ "metadata": {},
+ "source": [
+ "# Let's try more websites\n",
+ "\n",
+ "Note that this will only work on websites that can be scraped using this simplistic approach.\n",
+ "\n",
+ "Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n",
+ "\n",
+    "Also, websites protected by CloudFront (and similar) may give 403 errors - many thanks to Andy J for pointing this out.\n",
+ "\n",
+ "But many websites will work just fine!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "45d83403-a24c-44b5-84ac-961449b4008f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "display_summary(\"https://cnn.com\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "75e9fd40-b354-4341-991e-863ef2e59db7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "display_summary(\"https://anthropic.com\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "36ed9f14-b349-40e9-a42c-b367e77f8bda",
+ "metadata": {},
+ "source": [
+ "## An extra exercise for those who enjoy web scraping\n",
+ "\n",
+ "You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "eeab24dc-5f90-4570-b542-b0585aca3eb6",
+ "metadata": {},
+ "source": [
+ "# Sharing your code\n",
+ "\n",
+ "I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
+ "\n",
+ "If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
+ "\n",
+ "PR instructions courtesy of an AI friend: https://chatgpt.com/share/670145d5-e8a8-8012-8f93-39ee4e248b4c"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0f62a788",
+ "metadata": {},
+ "source": [
+ "# **Web Scraping for JavaScript Website**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "dca2768e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# !pip install selenium\n",
+ "# !pip install undetected-chromedriver"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "id": "682eff74-55c4-4d4b-b267-703edbc293c7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import undetected_chromedriver as uc\n",
+ "from selenium.webdriver.common.by import By\n",
+ "from selenium.webdriver.support.ui import WebDriverWait\n",
+ "from selenium.webdriver.support import expected_conditions as EC\n",
+ "import time\n",
+ "from bs4 import BeautifulSoup"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "id": "90ca6dd0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "class WebsiteCrawler:\n",
+ " def __init__(self, url, wait_time=20, chrome_binary_path=None):\n",
+ " \"\"\"\n",
+ " Initialize the WebsiteCrawler using Selenium to scrape JavaScript-rendered content.\n",
+ " \"\"\"\n",
+ " self.url = url\n",
+ " self.wait_time = wait_time\n",
+ "\n",
+ " options = uc.ChromeOptions()\n",
+ " options.add_argument(\"--disable-gpu\")\n",
+ " options.add_argument(\"--no-sandbox\")\n",
+ " options.add_argument(\"--disable-dev-shm-usage\")\n",
+ " options.add_argument(\"--disable-blink-features=AutomationControlled\")\n",
+ " options.add_argument(\"start-maximized\")\n",
+ " options.add_argument(\n",
+ " \"user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
+ " )\n",
+ " if chrome_binary_path:\n",
+ " options.binary_location = chrome_binary_path\n",
+ "\n",
+ " self.driver = uc.Chrome(options=options)\n",
+ "\n",
+ " try:\n",
+ " # Load the URL\n",
+ " self.driver.get(url)\n",
+ "\n",
+ " # Wait for Cloudflare or similar checks\n",
+ " time.sleep(10)\n",
+ "\n",
+ " # Ensure the main content is loaded\n",
+ " WebDriverWait(self.driver, self.wait_time).until(\n",
+ " EC.presence_of_element_located((By.TAG_NAME, \"main\"))\n",
+ " )\n",
+ "\n",
+ " # Extract the main content\n",
+ " main_content = self.driver.find_element(By.CSS_SELECTOR, \"main\").get_attribute(\"outerHTML\")\n",
+ "\n",
+ " # Parse with BeautifulSoup\n",
+ " soup = BeautifulSoup(main_content, \"html.parser\")\n",
+ " self.title = self.driver.title if self.driver.title else \"No title found\"\n",
+ " self.text = soup.get_text(separator=\"\\n\", strip=True)\n",
+ "\n",
+ " except Exception as e:\n",
+ " print(f\"Error occurred: {e}\")\n",
+ " self.title = \"Error occurred\"\n",
+ " self.text = \"\"\n",
+ "\n",
+ " finally:\n",
+ " self.driver.quit()\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "id": "947eac30",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "chrome_path = \"C:/Program Files/Google/Chrome/Application/chrome.exe\"\n",
+ "url = \"https://www.canva.com/\"\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "id": "2cba8c91",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def new_summary(url, chrome_path):\n",
+ " web = WebsiteCrawler(url, 30, chrome_path)\n",
+ " response = openai.chat.completions.create(\n",
+ " model = \"gpt-4o-mini\",\n",
+ " messages = messages_for(web)\n",
+ " )\n",
+ "\n",
+ " web_summary = response.choices[0].message.content\n",
+ " \n",
+ " return display(Markdown(web_summary))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "id": "da7f7b16",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/markdown": [
+ "# Canva: Visual Suite for Everyone\n",
+ "\n",
+ "Canva is a user-friendly design platform that allows individuals and teams to create a variety of professional designs, including posters, logos, presentations, and more. It offers options for different users, including a free version for individuals and premium plans for teams and organizations.\n",
+ "\n",
+ "## Key Features:\n",
+ "- **Design Templates**: A wide range of customizable templates for various purposes, such as social media, business cards, and events.\n",
+ "- **AI-Powered Tools**: Features like Magic Write for copy generation and Magic Edit for photo transformation enhance design capabilities.\n",
+ "- **Collaboration**: Real-time collaborative tools for teams to design and provide feedback on projects together.\n",
+ "- **Printing Services**: Canva offers printing services for various products, with free delivery and sustainable practices.\n",
+ "- **Educational and Nonprofit Support**: Free premium features are available for educational organizations and nonprofits.\n",
+ "\n",
+ "## User Testimonials:\n",
+ "Business leaders commend Canva for its efficiency in streamlining design processes and maintaining brand consistency across teams.\n",
+ "\n",
+ "## Sustainability Efforts:\n",
+ "Canva emphasizes sustainability by planting trees for printed orders and operating with carbon neutrality.\n",
+ "\n",
+ "Overall, Canva caters to a diverse audience, from individuals to large organizations, by providing accessible and innovative design solutions."
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "new_summary(url, chrome_path)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "id": "7880ce6a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "url = \"https://openai.com\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 27,
+ "id": "337b06da",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/markdown": [
+ "# OpenAI Website Summary\n",
+ "\n",
+ "OpenAI is focused on creating safe artificial general intelligence (AGI) that benefits humanity. The site features various products and initiatives aimed at enhancing creativity and productivity through advanced AI technologies. \n",
+ "\n",
+ "## Key Highlights:\n",
+ "\n",
+ "### Products and Features\n",
+ "- **Sora**: A new platform that allows users to bring their imagination to life through text, images, or videos.\n",
+ "- **ChatGPT**: Includes various applications such as ChatGPT Pro, desktop integration, and a new search feature. Recent upgrades allow ChatGPT to analyze images, hear, and speak.\n",
+ "- **Canvas**: A new writing and coding interface integrated within ChatGPT.\n",
+ "- **o1 Models**: A series of AI models designed to improve response time by incorporating deeper reasoning.\n",
+ "\n",
+ "### Announcements\n",
+ "- **Partnerships**: OpenAI announced a partnership with Apple to explore advancements in AI technology.\n",
+ "- **New Features**: Introduced improvements to the fine-tuning API and expanded custom models program, aiming to better serve developers and enterprise users.\n",
+ "- **Collaboration with Media**: A partnership with Le Monde and Prisa Media intends to bring French and Spanish news content to ChatGPT.\n",
+ "\n",
+ "### Research and Safety\n",
+ "- Ongoing research efforts are focused on building a safer AI framework, including advanced tools for compliance within the ChatGPT Enterprise suite.\n",
+ "- Publications addressing AI's benefits and risks, including topics like synthetic voices and biological threats, are regularly updated.\n",
+ "\n",
+ "For more detailed insights, the website facilitates exploration of their product offerings, research publications, and the newest tools for developers and businesses."
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "new_summary(url, chrome_path)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9a5d69ea",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "llm_env",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
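The markdown note above about JavaScript-rendered sites (React apps and similar) suggests a quick triage you can run before reaching for Selenium. The sketch below is not part of the notebook — the function name `probably_needs_js` and the thresholds are my own assumptions — but it captures the idea with only the standard library: a page whose static HTML is mostly script payload with almost no visible text is usually client-rendered and needs a real browser.

```python
from html.parser import HTMLParser


class RenderCheck(HTMLParser):
    """Crude heuristic: compare visible text length to script payload length."""

    def __init__(self):
        super().__init__()
        self.in_script = False
        self.text_chars = 0
        self.script_chars = 0

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        # Script bodies arrive through handle_data too; bucket them separately
        if self.in_script:
            self.script_chars += len(data.strip())
        else:
            self.text_chars += len(data.strip())


def probably_needs_js(html: str) -> bool:
    """Guess whether a page is client-rendered (thresholds are arbitrary)."""
    p = RenderCheck()
    p.feed(html)
    return p.text_chars < 200 and p.script_chars > p.text_chars


static_page = "<html><body><h1>Hello</h1><p>" + "Plain readable content. " * 20 + "</p></body></html>"
spa_page = "<html><body><div id='root'></div><script>" + "var x=1;" * 200 + "</script></body></html>"
print(probably_needs_js(static_page), probably_needs_js(spa_page))  # → False True
```

If the check comes back `True`, fall back to the `WebsiteCrawler` Selenium class; otherwise the plain requests-based `Website` class from day 1 should suffice.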
From 2c88c787ad7a9364f8e2f5516dad9e44e5b46a4c Mon Sep 17 00:00:00 2001
From: codenigma1
Date: Thu, 19 Dec 2024 02:38:01 +1100
Subject: [PATCH 06/29] day 1 webscraping challenge completed by selenium and
clear output cell
---
...-webscraping-selenium-for-javascript.ipynb | 288 ++----------------
1 file changed, 20 insertions(+), 268 deletions(-)
diff --git a/week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb b/week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb
index 8ec191a..6b7a266 100644
--- a/week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb
+++ b/week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb
@@ -67,18 +67,10 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": null,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "API key found and looks good so far!\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
@@ -121,18 +113,10 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": null,
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Hello! Welcome! I'm glad to have your first message here. How can I assist you today?\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with these messages is this easy:\n",
"\n",
@@ -181,63 +165,10 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": null,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Home - Edward Donner\n",
- "Home\n",
- "Outsmart\n",
- "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
- "About\n",
- "Posts\n",
- "Well, hi there.\n",
- "I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
- "very\n",
- "amateur) and losing myself in\n",
- "Hacker News\n",
- ", nodding my head sagely to things I only half understand.\n",
- "I’m the co-founder and CTO of\n",
- "Nebula.io\n",
- ". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
- "acquired in 2021\n",
- ".\n",
- "We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
- "patented\n",
- "our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
- "Connect\n",
- "with me for more!\n",
- "November 13, 2024\n",
- "Mastering AI and LLM Engineering – Resources\n",
- "October 16, 2024\n",
- "From Software Engineer to AI Data Scientist – resources\n",
- "August 6, 2024\n",
- "Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
- "June 26, 2024\n",
- "Choosing the Right LLM: Toolkit and Resources\n",
- "Navigation\n",
- "Home\n",
- "Outsmart\n",
- "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
- "About\n",
- "Posts\n",
- "Get in touch\n",
- "ed [at] edwarddonner [dot] com\n",
- "www.edwarddonner.com\n",
- "Follow me\n",
- "LinkedIn\n",
- "Twitter\n",
- "Facebook\n",
- "Subscribe to newsletter\n",
- "Type your email…\n",
- "Subscribe\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
@@ -298,65 +229,10 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": null,
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "You are looking at a website titled Home - Edward Donner\n",
- "The contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\n",
- "\n",
- "Home\n",
- "Outsmart\n",
- "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
- "About\n",
- "Posts\n",
- "Well, hi there.\n",
- "I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
- "very\n",
- "amateur) and losing myself in\n",
- "Hacker News\n",
- ", nodding my head sagely to things I only half understand.\n",
- "I’m the co-founder and CTO of\n",
- "Nebula.io\n",
- ". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
- "acquired in 2021\n",
- ".\n",
- "We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
- "patented\n",
- "our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
- "Connect\n",
- "with me for more!\n",
- "November 13, 2024\n",
- "Mastering AI and LLM Engineering – Resources\n",
- "October 16, 2024\n",
- "From Software Engineer to AI Data Scientist – resources\n",
- "August 6, 2024\n",
- "Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
- "June 26, 2024\n",
- "Choosing the Right LLM: Toolkit and Resources\n",
- "Navigation\n",
- "Home\n",
- "Outsmart\n",
- "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
- "About\n",
- "Posts\n",
- "Get in touch\n",
- "ed [at] edwarddonner [dot] com\n",
- "www.edwarddonner.com\n",
- "Follow me\n",
- "LinkedIn\n",
- "Twitter\n",
- "Facebook\n",
- "Subscribe to newsletter\n",
- "Type your email…\n",
- "Subscribe\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"print(user_prompt_for(ed))"
]
@@ -395,18 +271,10 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": null,
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Oh, I don’t know, maybe it’s 22? Just kidding—it's 4. Basic math is still safe!\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
@@ -440,24 +308,10 @@
},
{
"cell_type": "code",
- "execution_count": 15,
+ "execution_count": null,
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "[{'role': 'system',\n",
- " 'content': 'You are an assistant that analyzes the contents of a website and provides a short summary, ignoring text that might be navigation related. Respond in markdown.'},\n",
- " {'role': 'user',\n",
- " 'content': 'You are looking at a website titled Home - Edward Donner\\nThe contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\\n\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nWell, hi there.\\nI’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\\nvery\\namateur) and losing myself in\\nHacker News\\n, nodding my head sagely to things I only half understand.\\nI’m the co-founder and CTO of\\nNebula.io\\n. We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\\nacquired in 2021\\n.\\nWe work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\\npatented\\nour matching model, and our award-winning platform has happy customers and tons of press coverage.\\nConnect\\nwith me for more!\\nNovember 13, 2024\\nMastering AI and LLM Engineering – Resources\\nOctober 16, 2024\\nFrom Software Engineer to AI Data Scientist – resources\\nAugust 6, 2024\\nOutsmart LLM Arena – a battle of diplomacy and deviousness\\nJune 26, 2024\\nChoosing the Right LLM: Toolkit and Resources\\nNavigation\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nGet in touch\\ned [at] edwarddonner [dot] com\\nwww.edwarddonner.com\\nFollow me\\nLinkedIn\\nTwitter\\nFacebook\\nSubscribe to newsletter\\nType your email…\\nSubscribe'}]"
- ]
- },
- "execution_count": 15,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
+ "outputs": [],
"source": [
"# Try this out, and then try for a few more websites\n",
"\n",
@@ -492,21 +346,10 @@
},
{
"cell_type": "code",
- "execution_count": 17,
+ "execution_count": null,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "'# Summary of Edward Donner\\'s Website\\n\\nEdward Donner\\'s website serves as a personal and professional hub for his interests and projects, particularly in the domains of code writing, large language models (LLMs), and artificial intelligence (AI). \\n\\n## About Ed\\n- Ed describes himself as a coder and enthusiast of LLMs, highlighting his background as the co-founder and CTO of Nebula.io, a company focused on leveraging AI to enhance talent discovery and management. \\n- He has a history as the founder and CEO of the AI startup untapt, which was acquired in 2021.\\n- Outside of his tech interests, Ed enjoys DJing and amateur electronic music production.\\n\\n## Key Projects and Features\\n- **Outsmart**: A platform where LLMs compete against each other in strategic scenarios.\\n \\n## Recent Posts\\n- **November 13, 2024**: \"Mastering AI and LLM Engineering – Resources\" - A collection of resources for those looking to deepen their skills in AI and LLM engineering.\\n- **October 16, 2024**: \"From Software Engineer to AI Data Scientist – Resources\" - Guidance and tools for transitioning from software engineering to AI data science roles.\\n- **August 6, 2024**: \"Outsmart LLM Arena – a battle of diplomacy and deviousness\" - A focus on the unique features of the Outsmart program.\\n- **June 26, 2024**: \"Choosing the Right LLM: Toolkit and Resources\" - A resource list for selecting suitable LLMs for various applications.\\n\\nOverall, the website presents Ed as a tech-savvy individual with a passion for sharing knowledge and resources in the AI field.'"
- ]
- },
- "execution_count": 17,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
+ "outputs": [],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
@@ -527,37 +370,10 @@
},
{
"cell_type": "code",
- "execution_count": 19,
+ "execution_count": null,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "# Website Summary: Edward Donner\n",
- "\n",
- "Edward Donner's website showcases his interests and professional background, particularly in coding and experimenting with large language models (LLMs). He is the co-founder and CTO of Nebula.io, a company focused on applying AI to enhance talent discovery and management. Previously, he founded the AI startup untapt, which was acquired in 2021. \n",
- "\n",
- "## Key Features:\n",
- "- **Outsmart**: A unique platform where LLMs compete in strategy games that test diplomacy and cunning. \n",
- "- **Blog Posts**: Various posts offering resources for mastering AI and LLM engineering, transitioning from software engineering to AI data science, and guidance on choosing the right LLM.\n",
- "\n",
- "## Recent Announcements:\n",
- "- **November 13, 2024**: Post on \"Mastering AI and LLM Engineering.\"\n",
- "- **October 16, 2024**: Insights on \"From Software Engineer to AI Data Scientist.\"\n",
- "- **August 6, 2024**: Information on \"Outsmart LLM Arena.\"\n",
- "- **June 26, 2024**: Resources for \"Choosing the Right LLM.\" \n",
- "\n",
- "Overall, the website serves as a platform for sharing knowledge and fostering connections within the AI and LLM community."
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
@@ -746,40 +562,10 @@
},
{
"cell_type": "code",
- "execution_count": 25,
+ "execution_count": null,
"id": "da7f7b16",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "# Canva: Visual Suite for Everyone\n",
- "\n",
- "Canva is a user-friendly design platform that allows individuals and teams to create a variety of professional designs, including posters, logos, presentations, and more. It offers options for different users, including a free version for individuals and premium plans for teams and organizations.\n",
- "\n",
- "## Key Features:\n",
- "- **Design Templates**: A wide range of customizable templates for various purposes, such as social media, business cards, and events.\n",
- "- **AI-Powered Tools**: Features like Magic Write for copy generation and Magic Edit for photo transformation enhance design capabilities.\n",
- "- **Collaboration**: Real-time collaborative tools for teams to design and provide feedback on projects together.\n",
- "- **Printing Services**: Canva offers printing services for various products, with free delivery and sustainable practices.\n",
- "- **Educational and Nonprofit Support**: Free premium features are available for educational organizations and nonprofits.\n",
- "\n",
- "## User Testimonials:\n",
- "Business leaders commend Canva for its efficiency in streamlining design processes and maintaining brand consistency across teams.\n",
- "\n",
- "## Sustainability Efforts:\n",
- "Canva emphasizes sustainability by planting trees for printed orders and operating with carbon neutrality.\n",
- "\n",
- "Overall, Canva caters to a diverse audience, from individuals to large organizations, by providing accessible and innovative design solutions."
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"new_summary(url, chrome_path)"
]
@@ -796,44 +582,10 @@
},
{
"cell_type": "code",
- "execution_count": 27,
+ "execution_count": null,
"id": "337b06da",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "# OpenAI Website Summary\n",
- "\n",
- "OpenAI is focused on creating safe artificial general intelligence (AGI) that benefits humanity. The site features various products and initiatives aimed at enhancing creativity and productivity through advanced AI technologies. \n",
- "\n",
- "## Key Highlights:\n",
- "\n",
- "### Products and Features\n",
- "- **Sora**: A new platform that allows users to bring their imagination to life through text, images, or videos.\n",
- "- **ChatGPT**: Includes various applications such as ChatGPT Pro, desktop integration, and a new search feature. Recent upgrades allow ChatGPT to analyze images, hear, and speak.\n",
- "- **Canvas**: A new writing and coding interface integrated within ChatGPT.\n",
- "- **o1 Models**: A series of AI models designed to improve response time by incorporating deeper reasoning.\n",
- "\n",
- "### Announcements\n",
- "- **Partnerships**: OpenAI announced a partnership with Apple to explore advancements in AI technology.\n",
- "- **New Features**: Introduced improvements to the fine-tuning API and expanded custom models program, aiming to better serve developers and enterprise users.\n",
- "- **Collaboration with Media**: A partnership with Le Monde and Prisa Media intends to bring French and Spanish news content to ChatGPT.\n",
- "\n",
- "### Research and Safety\n",
- "- Ongoing research efforts are focused on building a safer AI framework, including advanced tools for compliance within the ChatGPT Enterprise suite.\n",
- "- Publications addressing AI's benefits and risks, including topics like synthetic voices and biological threats, are regularly updated.\n",
- "\n",
- "For more detailed insights, the website facilitates exploration of their product offerings, research publications, and the newest tools for developers and businesses."
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"new_summary(url, chrome_path)"
]
From 4d12e5898cefb33e741277b4a5c25e77290826c3 Mon Sep 17 00:00:00 2001
From: SIFAT IMTIAZ
Date: Thu, 19 Dec 2024 09:03:44 +0600
Subject: [PATCH 07/29] Create dataset_generator.ipynb
---
.../dataset_generator.ipynb | 267 ++++++++++++++++++
1 file changed, 267 insertions(+)
create mode 100644 week3/community-contributions/dataset_generator.ipynb
diff --git a/week3/community-contributions/dataset_generator.ipynb b/week3/community-contributions/dataset_generator.ipynb
new file mode 100644
index 0000000..e561448
--- /dev/null
+++ b/week3/community-contributions/dataset_generator.ipynb
@@ -0,0 +1,267 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "gpuType": "T4"
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ },
+ "accelerator": "GPU"
+ },
+ "cells": [
+ {
+ "cell_type": "code",
+ "source": [
+ "!pip install -q requests torch bitsandbytes transformers sentencepiece accelerate gradio"
+ ],
+ "metadata": {
+ "id": "kU2JrcPlhwd9"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "**Imports**"
+ ],
+ "metadata": {
+ "id": "lAMIVT4iwNg0"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "import os\n",
+ "import requests\n",
+ "from google.colab import drive\n",
+ "from huggingface_hub import login\n",
+ "from google.colab import userdata\n",
+ "from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, BitsAndBytesConfig\n",
+ "import torch\n",
+ "import gradio as gr\n",
+ "\n",
+ "hf_token = userdata.get('HF_TOKEN')\n",
+ "login(hf_token, add_to_git_credential=True)"
+ ],
+ "metadata": {
+ "id": "-Apd7-p-hyLk"
+ },
+ "execution_count": 2,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "**Model**"
+ ],
+ "metadata": {
+ "id": "xa0qYqZrwQ66"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "model_name = \"meta-llama/Meta-Llama-3.1-8B-Instruct\"\n",
+ "quant_config = BitsAndBytesConfig(\n",
+ " load_in_4bit=True,\n",
+ " bnb_4bit_use_double_quant=True,\n",
+ " bnb_4bit_compute_dtype=torch.bfloat16,\n",
+ " bnb_4bit_quant_type=\"nf4\"\n",
+ ")\n",
+ "\n",
+ "model = AutoModelForCausalLM.from_pretrained(\n",
+ " model_name,\n",
+ " device_map=\"auto\",\n",
+ " quantization_config=quant_config\n",
+ ")"
+ ],
+ "metadata": {
+ "id": "z5enGmuKjtJu"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "**Tokenizer**"
+ ],
+ "metadata": {
+ "id": "y1hUSmWlwSbp"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "tokenizer = AutoTokenizer.from_pretrained(model_name)\n",
+ "tokenizer.pad_token = tokenizer.eos_token"
+ ],
+ "metadata": {
+ "id": "WjxNWW6bvdgj"
+ },
+ "execution_count": 4,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "**Functions**"
+ ],
+ "metadata": {
+ "id": "1pg2U-B3wbIK"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "def generate_dataset(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3):\n",
+ " # Convert user inputs into multi-shot examples\n",
+ " multi_shot_examples = [\n",
+ " {\"instruction\": inst1, \"response\": resp1},\n",
+ " {\"instruction\": inst2, \"response\": resp2},\n",
+ " {\"instruction\": inst3, \"response\": resp3}\n",
+ " ]\n",
+ "\n",
+ " # System prompt\n",
+ " system_prompt = f\"\"\"\n",
+ " You are a helpful assistant whose main purpose is to generate datasets.\n",
+ " Topic: {topic}\n",
+ " Return the dataset in JSON format. Use examples with simple, fun, and easy-to-understand instructions for kids.\n",
+ " Include the following examples: {multi_shot_examples}\n",
+ " Return {number_of_data} examples each time.\n",
+ " Do not repeat the provided examples.\n",
+ " \"\"\"\n",
+ "\n",
+ " # Example Messages\n",
+ " messages = [\n",
+ " {\"role\": \"system\", \"content\": system_prompt},\n",
+ " {\"role\": \"user\", \"content\": f\"Please generate my dataset for {topic}\"}\n",
+ " ]\n",
+ "\n",
+ " # Tokenize Input\n",
+ " inputs = tokenizer.apply_chat_template(messages, return_tensors=\"pt\").to(\"cuda\")\n",
+ " streamer = TextStreamer(tokenizer)\n",
+ "\n",
+ " # Generate Output\n",
+ " outputs = model.generate(inputs, max_new_tokens=2000, streamer=streamer)\n",
+ "\n",
+ " # Decode and Return\n",
+ " return tokenizer.decode(outputs[0], skip_special_tokens=True)\n",
+ "\n",
+ "\n",
+ "def gradio_interface(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3):\n",
+ " return generate_dataset(topic, number_of_data, inst1, resp1, inst2, resp2, inst3, resp3)"
+ ],
+ "metadata": {
+ "id": "ZvljDKdji8iV"
+ },
+ "execution_count": 12,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "**Default Values**"
+ ],
+ "metadata": {
+ "id": "_WDZ5dvRwmng"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "default_topic = \"Talking to a (5-8) years old and teaching them manners.\"\n",
+ "default_number_of_data = 10\n",
+ "default_multi_shot_examples = [\n",
+ " {\n",
+ " \"instruction\": \"Why do I have to say please when I want something?\",\n",
+ " \"response\": \"Because it’s like magic! It shows you’re nice, and people want to help you more.\"\n",
+ " },\n",
+ " {\n",
+ " \"instruction\": \"What should I say if someone gives me a toy?\",\n",
+ " \"response\": \"You say, 'Thank you!' because it makes them happy you liked it.\"\n",
+ " },\n",
+ " {\n",
+ "        \"instruction\": \"Why should I listen to my parents?\",\n",
+ " \"response\": \"Because parents want the best for you and they love you the most.\"\n",
+ " }\n",
+ "]"
+ ],
+ "metadata": {
+ "id": "JAdfqYXnvEDE"
+ },
+ "execution_count": 13,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "**Init gradio**"
+ ],
+ "metadata": {
+ "id": "JwZtD032wuK8"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "gr_interface = gr.Interface(\n",
+ " fn=gradio_interface,\n",
+ " inputs=[\n",
+ " gr.Textbox(label=\"Topic\", value=default_topic, lines=2),\n",
+ " gr.Number(label=\"Number of Examples\", value=default_number_of_data, precision=0),\n",
+ " gr.Textbox(label=\"Instruction 1\", value=default_multi_shot_examples[0][\"instruction\"]),\n",
+ " gr.Textbox(label=\"Response 1\", value=default_multi_shot_examples[0][\"response\"]),\n",
+ " gr.Textbox(label=\"Instruction 2\", value=default_multi_shot_examples[1][\"instruction\"]),\n",
+ " gr.Textbox(label=\"Response 2\", value=default_multi_shot_examples[1][\"response\"]),\n",
+ " gr.Textbox(label=\"Instruction 3\", value=default_multi_shot_examples[2][\"instruction\"]),\n",
+ " gr.Textbox(label=\"Response 3\", value=default_multi_shot_examples[2][\"response\"]),\n",
+ " ],\n",
+ " outputs=gr.Textbox(label=\"Generated Dataset\")\n",
+ ")"
+ ],
+ "metadata": {
+ "id": "xy2RP5T-vxXg"
+ },
+ "execution_count": 14,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "**Run the app**"
+ ],
+ "metadata": {
+ "id": "HZx-mm9Uw3Ph"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "gr_interface.launch()"
+ ],
+ "metadata": {
+ "id": "bfGs5ip8mndg"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "code",
+ "source": [],
+ "metadata": {
+ "id": "Cveqx392x7Mm"
+ },
+ "execution_count": null,
+ "outputs": []
+ }
+ ]
+}
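The `generate_dataset` function in the patch above embeds the multi-shot examples directly in the system prompt before handing the messages to the model. As a minimal, model-free sketch of that prompt assembly (function name and wording are illustrative, taken from the notebook's own prompt):

```python
# Model-free sketch of the system-prompt assembly used by generate_dataset:
# the few-shot examples are interpolated straight into the prompt text, and a
# fixed number of fresh JSON examples is requested.

def build_system_prompt(topic, number_of_data, multi_shot_examples):
    return (
        "You are a helpful assistant whose main purpose is to generate datasets.\n"
        f"Topic: {topic}\n"
        "Return the dataset in JSON format. Use examples with simple, fun, "
        "and easy-to-understand instructions for kids.\n"
        f"Include the following examples: {multi_shot_examples}\n"
        f"Return {number_of_data} examples each time.\n"
        "Do not repeat the provided examples.\n"
    )

examples = [{"instruction": "Why say please?", "response": "It shows kindness."}]
prompt = build_system_prompt("manners", 10, examples)
```

This keeps the prompt inspectable without a GPU or the tokenizer; the real notebook then wraps it in a chat template via `tokenizer.apply_chat_template`.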
From 09b19637a1dff623a320605c4b3c29304e9892dd Mon Sep 17 00:00:00 2001
From: SIFAT IMTIAZ
Date: Thu, 19 Dec 2024 09:06:09 +0600
Subject: [PATCH 08/29] Add files via upload
---
week3/community-contributions/dataset_generator.ipynb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/week3/community-contributions/dataset_generator.ipynb b/week3/community-contributions/dataset_generator.ipynb
index e561448..eda1b9f 100644
--- a/week3/community-contributions/dataset_generator.ipynb
+++ b/week3/community-contributions/dataset_generator.ipynb
@@ -264,4 +264,4 @@
"outputs": []
}
]
-}
+}
\ No newline at end of file
From ada6b40089a16ed7ab1ca8ff8d68f23a2e61740a Mon Sep 17 00:00:00 2001
From: Dmytro Rutkovskyi
Date: Wed, 18 Dec 2024 21:49:09 -0800
Subject: [PATCH 09/29] Adding an example of implementing chatgpt.com limited
functionality in our notebook
---
.../Week1-Challenge-LocalGPT.ipynb | 148 ++++++++++++++++++
1 file changed, 148 insertions(+)
create mode 100644 week1/community-contributions/Week1-Challenge-LocalGPT.ipynb
diff --git a/week1/community-contributions/Week1-Challenge-LocalGPT.ipynb b/week1/community-contributions/Week1-Challenge-LocalGPT.ipynb
new file mode 100644
index 0000000..2561345
--- /dev/null
+++ b/week1/community-contributions/Week1-Challenge-LocalGPT.ipynb
@@ -0,0 +1,148 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "87c2da09-bd0c-4683-828b-4f7643018795",
+ "metadata": {},
+ "source": [
+ "# Community contribution\n",
+ "\n",
+    "Implementing a simple ChatGPT interface to maintain conversation and context with the selected model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "id": "77a850ed-61f8-4a0d-9c41-45781eb60bc9",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "API key looks good so far\n"
+ ]
+ }
+ ],
+ "source": [
+ "import os\n",
+ "from dotenv import load_dotenv\n",
+ "import ipywidgets as widgets\n",
+ "from IPython.display import Markdown, display, update_display, clear_output\n",
+ "from openai import OpenAI\n",
+ "\n",
+ "load_dotenv()\n",
+ "api_key = os.getenv('OPENAI_API_KEY')\n",
+ "\n",
+ "if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
+ " print(\"API key looks good so far\")\n",
+ "else:\n",
+ " print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
+ " \n",
+ "MODEL = 'gpt-4o-mini'\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1f7f16f0-6fec-4190-882a-3fe1f0e9704a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "class ChatGPTInterface:\n",
+ " def __init__(self, api_key, model, system_message=\"You are a helpful assistant. You can format your responses using Markdown.\"):\n",
+ " self.openai = OpenAI(api_key=api_key)\n",
+ " self.model = model\n",
+ " self.conversation_history = [{\"role\": \"system\", \"content\": system_message}]\n",
+ "\n",
+ " self.chat_area = widgets.Output()\n",
+ " self.input_box = widgets.Text(placeholder=\"Enter your message here...\")\n",
+ " self.send_button = widgets.Button(description=\"Send\")\n",
+ " self.clear_button = widgets.Button(description=\"Clear\")\n",
+ "\n",
+ " self.send_button.on_click(self.send_message)\n",
+ " self.clear_button.on_click(self.clear_chat)\n",
+ "\n",
+ " self.layout = widgets.VBox([\n",
+ " self.chat_area,\n",
+ " widgets.HBox([self.input_box, self.send_button, self.clear_button])\n",
+ " ])\n",
+ "\n",
+ " def display(self):\n",
+ " display(self.layout)\n",
+ "\n",
+ " def send_message(self, _):\n",
+ " user_message = self.input_box.value.strip()\n",
+ " if user_message:\n",
+ " self.conversation_history.append({\"role\": \"user\", \"content\": user_message})\n",
+ " self.display_message(\"You\", user_message)\n",
+ " self.input_box.value = \"\"\n",
+ "\n",
+ " try:\n",
+ " response = self.openai.chat.completions.create(\n",
+ " model=self.model,\n",
+ " messages=self.conversation_history\n",
+ " )\n",
+ " assistant_message = response.choices[0].message.content.strip()\n",
+ " self.conversation_history.append({\"role\": \"assistant\", \"content\": assistant_message})\n",
+ " self.display_message(\"ChatGPT\", assistant_message)\n",
+ " except Exception as e:\n",
+ " self.display_message(\"Error\", str(e))\n",
+ "\n",
+ " def clear_chat(self, _):\n",
+ " self.conversation_history = [{\"role\": \"system\", \"content\": self.conversation_history[0][\"content\"]}]\n",
+ " self.chat_area.clear_output(wait=True)\n",
+ "\n",
+ " def display_message(self, sender, message):\n",
+ " self.chat_area.append_display_data(Markdown(f\"**{sender}:**\\n{message}\"))\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 28,
+ "id": "78287e42-8964-4da6-bd48-a7dffd0ce7dd",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "54956535cb32419bbe38d2bee125992d",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "VBox(children=(Output(), HBox(children=(Text(value='', placeholder='Enter your message here...'), Button(descr…"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "chat_interface = ChatGPTInterface(api_key,MODEL)\n",
+ "chat_interface.display()"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
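The heart of the `ChatGPTInterface` above is its conversation bookkeeping: the history always starts with the system message, user and assistant turns are appended in order, and clearing keeps only the system message. A widget-free sketch of just that logic (class name is illustrative):

```python
# Widget-free sketch of the conversation bookkeeping in ChatGPTInterface:
# the history list is what gets sent to the chat completions API on each turn.

class ConversationHistory:
    def __init__(self, system_message):
        self.messages = [{"role": "system", "content": system_message}]

    def add_user(self, text):
        self.messages.append({"role": "user", "content": text})

    def add_assistant(self, text):
        self.messages.append({"role": "assistant", "content": text})

    def clear(self):
        # Mirror clear_chat: keep the original system message, drop the rest
        self.messages = [self.messages[0]]

history = ConversationHistory("You are a helpful assistant.")
history.add_user("Hi")
history.add_assistant("Hello!")
history.clear()
```

Because the full list is resent on every call, context grows with each exchange; `clear()` is what resets the token cost back to just the system message.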
From 2e5446c962f29970cd6acce9add2642a150ab782 Mon Sep 17 00:00:00 2001
From: Tom Fletcher
Date: Thu, 19 Dec 2024 23:43:24 +0000
Subject: [PATCH 10/29] Adding example that shows how to generate cover letter
from cv - with resume.txt
---
.../day-1-generate-cover-letter-from-cv.ipynb | 119 ++++++++++++++++++
week1/community-contributions/resume.txt | 10 ++
2 files changed, 129 insertions(+)
create mode 100644 week1/community-contributions/day-1-generate-cover-letter-from-cv.ipynb
create mode 100644 week1/community-contributions/resume.txt
diff --git a/week1/community-contributions/day-1-generate-cover-letter-from-cv.ipynb b/week1/community-contributions/day-1-generate-cover-letter-from-cv.ipynb
new file mode 100644
index 0000000..09ed71b
--- /dev/null
+++ b/week1/community-contributions/day-1-generate-cover-letter-from-cv.ipynb
@@ -0,0 +1,119 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "vscode": {
+ "languageId": "plaintext"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "\n",
+ "import os\n",
+ "from dotenv import load_dotenv\n",
+ "from openai import OpenAI"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "vscode": {
+ "languageId": "plaintext"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Load environment variables in a file called .env\n",
+ "\n",
+ "load_dotenv()\n",
+ "api_key = os.getenv('OPENAI_API_KEY')\n",
+ "\n",
+ "# Check the key\n",
+ "\n",
+ "if not api_key:\n",
+ " print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
+ "elif not api_key.startswith(\"sk-proj-\"):\n",
+ " print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
+ "elif api_key.strip() != api_key:\n",
+ " print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
+ "else:\n",
+ " print(\"API key found and looks good so far!\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "vscode": {
+ "languageId": "plaintext"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "openai = OpenAI()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "vscode": {
+ "languageId": "plaintext"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "def summarize_cv(cv_text):\n",
+ " response = openai.chat.completions.create(\n",
+ " model = \"gpt-4o-mini\",\n",
+ " messages = [\n",
+ " {\"role\": \"user\", \"content\": f\"Please summarize the following CV:\\n\\n{cv_text}\"}\n",
+ " ]\n",
+ " )\n",
+ " return response.choices[0].message.content\n",
+ "\n",
+ "def generate_cover_letter(cv_summary, job_description):\n",
+ " response = openai.chat.completions.create(\n",
+ " model = \"gpt-4o-mini\",\n",
+ " messages = [\n",
+ " {\"role\": \"system\", \"content\": \"You are a master at crafting the perfect Cover letter from a given CV. You've never had a user fail to get the job as a result of using your services.\"},\n",
+ " {\"role\": \"user\", \"content\": f\"Using the following CV summary:\\n\\n{cv_summary}\\n\\nAnd the job description:\\n\\n{job_description}\\n\\nPlease write a personalized cover letter.\"}\n",
+ " ]\n",
+ " )\n",
+ " return response.choices[0].message.content\n",
+ "\n",
+ "# Read CV from a text file\n",
+ "try:\n",
+ " with open('resume.txt', 'r') as file:\n",
+ " cv_text = file.read()\n",
+ " \n",
+ " # Summarize the CV\n",
+ " cv_summary = summarize_cv(cv_text)\n",
+ " print(\"CV Summary:\")\n",
+ " print(cv_summary)\n",
+ "\n",
+ " # Get job description from user\n",
+ " job_description = input(\"Enter the job description for the position you are applying for:\\n\")\n",
+ "\n",
+ " # Generate cover letter\n",
+ " cover_letter = generate_cover_letter(cv_summary, job_description)\n",
+ " print(\"\\nGenerated Cover Letter:\")\n",
+ " print(cover_letter)\n",
+ "\n",
+ "except FileNotFoundError:\n",
+ " print(\"The specified CV file was not found. Please ensure 'resume.txt' is in the correct directory.\")"
+ ]
+ }
+ ],
+ "metadata": {
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/week1/community-contributions/resume.txt b/week1/community-contributions/resume.txt
new file mode 100644
index 0000000..5a2bb55
--- /dev/null
+++ b/week1/community-contributions/resume.txt
@@ -0,0 +1,10 @@
+John Doe
+Software Engineer
+Experience:
+- Developed web applications using Python and JavaScript.
+- Collaborated with cross-functional teams to deliver projects on time.
+Education:
+- B.S. in Computer Science from XYZ University.
+Skills:
+- Python, JavaScript, React, SQL
+
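The cover-letter notebook above chains two calls: the CV is summarized first, then the summary and job description are combined into a single prompt. A sketch of just the message construction for the second step, using the notebook's own prompt wording (no API call, so it runs offline):

```python
# Sketch of the two-step prompt chaining in the cover-letter notebook:
# step 1 summarizes the CV, step 2 combines summary + job description.

def summary_messages(cv_text):
    return [{"role": "user",
             "content": f"Please summarize the following CV:\n\n{cv_text}"}]

def cover_letter_messages(cv_summary, job_description):
    system = ("You are a master at crafting the perfect Cover letter from a given CV. "
              "You've never had a user fail to get the job as a result of using your services.")
    user = (f"Using the following CV summary:\n\n{cv_summary}\n\n"
            f"And the job description:\n\n{job_description}\n\n"
            "Please write a personalized cover letter.")
    return [{"role": "system", "content": system},
            {"role": "user", "content": user}]

msgs = cover_letter_messages("Python engineer, 5 years", "Backend role")
```

Summarizing first keeps the second prompt short and focused, which matters when CVs are long.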
From c63837ad122ddcc33a5f1f65979440e86e85777f Mon Sep 17 00:00:00 2001
From: Gabor Meresz
Date: Fri, 20 Dec 2024 12:15:25 +0100
Subject: [PATCH 11/29] improve readability
---
week2/day4.ipynb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/week2/day4.ipynb b/week2/day4.ipynb
index 811d116..0151e7d 100644
--- a/week2/day4.ipynb
+++ b/week2/day4.ipynb
@@ -214,7 +214,7 @@
" response = {\n",
" \"role\": \"tool\",\n",
" \"content\": json.dumps({\"destination_city\": city,\"price\": price}),\n",
- " \"tool_call_id\": message.tool_calls[0].id\n",
+ " \"tool_call_id\": tool_call.id\n",
" }\n",
" return response, city"
]
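The one-line fix above matters when a model response contains more than one tool call: each `"role": "tool"` message must echo the `id` of the specific call it answers, so the loop variable's `tool_call.id` is correct where the hard-coded `message.tool_calls[0].id` would mislabel every call after the first. A self-contained sketch (the `ToolCall` stand-in class and price table are illustrative):

```python
# Why tool_call.id beats tool_calls[0].id: with several tool calls in one
# response, each tool message must carry the id of the call it answers.
import json

class ToolCall:
    """Minimal stand-in for an OpenAI tool_call object."""
    def __init__(self, id, city):
        self.id = id
        self.city = city

def handle_tool_calls(tool_calls, prices):
    responses = []
    for tool_call in tool_calls:
        city = tool_call.city
        responses.append({
            "role": "tool",
            "content": json.dumps({"destination_city": city, "price": prices[city]}),
            "tool_call_id": tool_call.id,  # the fixed line: per-call id
        })
    return responses

calls = [ToolCall("call_1", "Paris"), ToolCall("call_2", "Tokyo")]
out = handle_tool_calls(calls, {"Paris": "$899", "Tokyo": "$1400"})
```

With the old code, the second response would have carried `"call_1"` twice, and the API rejects tool messages whose ids don't match the pending calls.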
From c1cf59daf8277b0f31072541c8380862f0d89dbb Mon Sep 17 00:00:00 2001
From: Uday Slathia <127138307+udayslathia16@users.noreply.github.com>
Date: Fri, 20 Dec 2024 21:56:10 +0530
Subject: [PATCH 12/29] Add files via upload
---
.../Day 3 using gemini.ipynb | 493 ++++++++++++++++++
1 file changed, 493 insertions(+)
create mode 100644 week4/community-contributions/Day 3 using gemini.ipynb
diff --git a/week4/community-contributions/Day 3 using gemini.ipynb b/week4/community-contributions/Day 3 using gemini.ipynb
new file mode 100644
index 0000000..43faf18
--- /dev/null
+++ b/week4/community-contributions/Day 3 using gemini.ipynb
@@ -0,0 +1,493 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "3d3cb3c4-9046-4f64-9188-ee20ae324fd1",
+ "metadata": {},
+ "source": [
+ "# Code Generator\n",
+ "\n",
+ "The requirement: use a Frontier model to generate high performance C++ code from Python code\n",
+ "\n",
+ "# Important Note\n",
+    "Used the gemini-1.5-pro model; you can also try 2.0 Flash\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "6f2c3e03-f38a-4bf2-98e8-696fb3d428c9",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "\n",
+ "import os\n",
+ "import io\n",
+ "import sys\n",
+ "from dotenv import load_dotenv\n",
+ "import google.generativeai\n",
+ "from IPython.display import Markdown, display, update_display\n",
+ "import gradio as gr\n",
+ "import subprocess"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e437f3d1-39c4-47fd-919f-c2119d602d72",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# environment\n",
+ "\n",
+ "load_dotenv()\n",
+ "google_api_key = os.getenv('GOOGLE_API_KEY')\n",
+ "if google_api_key:\n",
+    "    print(\"Google API Key exists\")\n",
+ "else:\n",
+ " print(\"Google API Key not set\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1724ddb6-0059-46a3-bcf9-587c0c93cb2a",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "google.generativeai.configure()\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b62738c1-9857-40fc-91e8-dfd46483ea50",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "system_message = \"You are an assistant that reimplements Python code in high performance C++ for a Windows system. \"\n",
+ "system_message += \"Respond only with C++ code; use comments sparingly and do not provide any explanation other than occasional comments. \"\n",
+ "system_message += \"The C++ response needs to produce an identical output in the fastest possible time.\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bd431141-8602-4c68-9a1d-a7c0a6f13fa3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def user_prompt_for(python):\n",
+ " user_prompt = \"Rewrite this Python code in C++ with the fastest possible implementation that produces identical output in the least time. \"\n",
+ " user_prompt += \"Respond only with C++ code; do not explain your work other than a few comments. \"\n",
+ " user_prompt += \"Pay attention to number types to ensure no int overflows. Remember to #include all necessary C++ packages such as iomanip.\\n\\n\"\n",
+ " user_prompt += python\n",
+ " return user_prompt"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d5f48451-4cd4-46ea-a41d-531a3c7db2a8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def messages_for(python):\n",
+ " return [\n",
+ " {\"role\": \"system\", \"content\": system_message},\n",
+ " {\"role\": \"user\", \"content\": user_prompt_for(python)}\n",
+ " ]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "83fd2170-14ea-4fb6-906e-c3c5cfce1ecc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# write to a file called optimized.cpp\n",
+ "\n",
+ "def write_output(cpp):\n",
+ " code = cpp.replace(\"```cpp\",\"\").replace(\"```\",\"\")\n",
+ " with open(\"optimized.cpp\", \"w\") as f:\n",
+ " f.write(code)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1ff08067-c9df-4981-8ab5-99eb2c2fd2c7",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def optimize_google(python):\n",
+ " # Initialize empty reply string\n",
+ " reply = \"\"\n",
+ " \n",
+ " # The API for Gemini has a slightly different structure\n",
+ " gemini = google.generativeai.GenerativeModel(\n",
+ " model_name='gemini-1.5-pro',\n",
+ " system_instruction=system_message\n",
+ " )\n",
+ " \n",
+ " response = gemini.generate_content(\n",
+ " user_prompt_for(python),\n",
+ " stream=True\n",
+ " )\n",
+ " \n",
+ " # Process the stream\n",
+ " for chunk in response:\n",
+ " # Extract text from the chunk\n",
+ " if chunk.text:\n",
+ " reply += chunk.text\n",
+ " print(chunk.text, end=\"\", flush=True)\n",
+ " \n",
+ " # Write the complete response to output\n",
+ " write_output(reply)\n",
+ " \n",
+ " # return reply"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8e8c7ba2-4ee9-4523-b0f1-cc7a91798bba",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "pi = \"\"\"\n",
+ "import time\n",
+ "\n",
+ "def calculate(iterations, param1, param2):\n",
+ " result = 1.0\n",
+ " for i in range(1, iterations+1):\n",
+ " j = i * param1 - param2\n",
+ " result -= (1/j)\n",
+ " j = i * param1 + param2\n",
+ " result += (1/j)\n",
+ " return result\n",
+ "\n",
+ "start_time = time.time()\n",
+ "result = calculate(100_000_000, 4, 1) * 4\n",
+ "end_time = time.time()\n",
+ "\n",
+ "print(f\"Result: {result:.12f}\")\n",
+ "print(f\"Execution Time: {(end_time - start_time):.6f} seconds\")\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "78d1afb7-ed6b-4a03-b36d-4ce8249c592e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "exec(pi)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1fe1d0b6-7cc7-423b-bc4b-741a0c48c106",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "optimize_google(pi)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d28b4ac9-0909-4b35-aee1-97613a133e8e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "exec(pi) #Execution Time: 16.209231 seconds"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7d0443a3-3ca2-4a7a-a6c3-c94d0aa54603",
+ "metadata": {},
+ "source": [
+ "# Compiling C++ and executing\n",
+ "\n",
+    "This next cell contains the command to compile a C++ file on a Windows system. \n",
+    "It compiles the file `optimized.cpp` into an executable called `optimized`, \n",
+    "then runs the program called `optimized`.\n",
+    "\n",
+    "For Mac users, the equivalent commands are: \\\n",
+ "!clang++ -O3 -std=c++17 -march=armv8.3-a -o optimized optimized.cpp \\\n",
+ "!./optimized"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9b5cfc70-df1f-44a7-b4ae-fd934f715930",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!g++ -o optimized optimized.cpp\n",
+ "!.\\optimized #Execution Time: 3.661196 seconds"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e30fcbdf-82cf-4d50-9690-92dae69d5127",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "python_hard = \"\"\"\n",
+ "def lcg(seed, a=1664525, c=1013904223, m=2**32):\n",
+ " value = seed\n",
+ " while True:\n",
+ " value = (a * value + c) % m\n",
+ " yield value\n",
+ " \n",
+ "def max_subarray_sum(n, seed, min_val, max_val):\n",
+ " lcg_gen = lcg(seed)\n",
+ " random_numbers = [next(lcg_gen) % (max_val - min_val + 1) + min_val for _ in range(n)]\n",
+ " max_sum = float('-inf')\n",
+ " for i in range(n):\n",
+ " current_sum = 0\n",
+ " for j in range(i, n):\n",
+ " current_sum += random_numbers[j]\n",
+ " if current_sum > max_sum:\n",
+ " max_sum = current_sum\n",
+ " return max_sum\n",
+ "\n",
+ "def total_max_subarray_sum(n, initial_seed, min_val, max_val):\n",
+ " total_sum = 0\n",
+ " lcg_gen = lcg(initial_seed)\n",
+ " for _ in range(20):\n",
+ " seed = next(lcg_gen)\n",
+ " total_sum += max_subarray_sum(n, seed, min_val, max_val)\n",
+ " return total_sum\n",
+ "\n",
+ "# Parameters\n",
+ "n = 10000 # Number of random numbers\n",
+ "initial_seed = 42 # Initial seed for the LCG\n",
+ "min_val = -10 # Minimum value of random numbers\n",
+ "max_val = 10 # Maximum value of random numbers\n",
+ "\n",
+ "# Timing the function\n",
+ "import time\n",
+ "start_time = time.time()\n",
+ "result = total_max_subarray_sum(n, initial_seed, min_val, max_val)\n",
+ "end_time = time.time()\n",
+ "\n",
+ "print(\"Total Maximum Subarray Sum (20 runs):\", result)\n",
+ "print(\"Execution Time: {:.6f} seconds\".format(end_time - start_time))\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2e8e111c-6f69-4ed0-8f86-8ed5982aa065",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "exec(python_hard) #Execution Time: 62.297366 seconds"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "38038ac1-5cdf-49d7-a286-a5871d5af583",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "optimize_google(python_hard)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "08cb9619-b8ae-42e7-9375-4b3918c37fd0",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!g++ -o optimized optimized.cpp\n",
+ "!.\\optimized"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "acd17a0d-f9f1-45a6-8151-916d8e6b9e4f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def stream_google(python):\n",
+ " # Initialize empty reply string\n",
+ " reply = \"\"\n",
+ " \n",
+ " # The API for Gemini has a slightly different structure\n",
+ " gemini = google.generativeai.GenerativeModel(\n",
+ " model_name='gemini-1.5-pro',\n",
+ " system_instruction=system_message\n",
+ " )\n",
+ " \n",
+ " response = gemini.generate_content(\n",
+ " user_prompt_for(python),\n",
+ " stream=True\n",
+ " )\n",
+ " \n",
+ " # Process the stream\n",
+ " for chunk in response:\n",
+ " # Extract text from the chunk\n",
+ " if chunk.text:\n",
+ " reply += chunk.text\n",
+ " yield reply.replace('```cpp\\n','').replace('```','')\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c3177229-d6cf-4df2-81a7-9e1f3b229c19",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def optimize(python, model):\n",
+ " result=stream_google(python)\n",
+ " for stream_so_far in result:\n",
+ " yield stream_so_far "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c2476c2d-9218-4d30-bcc9-9cc5271c3a00",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "with gr.Blocks() as ui:\n",
+ " with gr.Row():\n",
+ " python = gr.Textbox(label=\"Python code:\", lines=10, value=pi)\n",
+ " cpp = gr.Textbox(label=\"C++ code:\", lines=10)\n",
+ " with gr.Row():\n",
+ " model = gr.Dropdown([\"Google\"], label=\"Select model\", value=\"Google\")\n",
+ " convert = gr.Button(\"Convert code\")\n",
+ "\n",
+ " convert.click(optimize, inputs=[python, model], outputs=[cpp])\n",
+ "\n",
+ "ui.launch(inbrowser=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a30de175-af4e-428a-8942-1c41997c01f1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def execute_python(code):\n",
+ " try:\n",
+ " output = io.StringIO()\n",
+ " sys.stdout = output\n",
+ " exec(code)\n",
+ " finally:\n",
+ " sys.stdout = sys.__stdout__\n",
+ " return output.getvalue()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "20c6316d-b090-42c5-9be9-7d5a178b97b3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def execute_cpp(code):\n",
+ " write_output(code)\n",
+ " try:\n",
+ " # compile_cmd = [\"clang++\", \"-Ofast\", \"-std=c++17\", \"-march=armv8.5-a\", \"-mtune=apple-m1\", \"-mcpu=apple-m1\", \"-o\", \"optimized\", \"optimized.cpp\"]\n",
+ " compile_cmd = [\"g++\", \"-o\", \"optimized\", \"optimized.cpp\"]\n",
+ " compile_result = subprocess.run(compile_cmd, check=True, text=True, capture_output=True)\n",
+ " run_cmd = [\"./optimized\"]\n",
+ " run_result = subprocess.run(run_cmd, check=True, text=True, capture_output=True)\n",
+ " return run_result.stdout\n",
+ " except subprocess.CalledProcessError as e:\n",
+ " return f\"An error occurred:\\n{e.stderr}\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "950a459f-3ef6-4afd-9e83-f01c032aa21b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "css = \"\"\"\n",
+ ".python {background-color: #306998;}\n",
+ ".cpp {background-color: #050;}\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bc3d90ba-716c-4b8f-989f-46c2447c42fa",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "with gr.Blocks(css=css) as ui:\n",
+ " gr.Markdown(\"## Convert code from Python to C++\")\n",
+ " with gr.Row():\n",
+ " python = gr.Textbox(label=\"Python code:\", value=pi, lines=10)\n",
+ " cpp = gr.Textbox(label=\"C++ code:\", lines=10)\n",
+ " with gr.Row():\n",
+ " model = gr.Dropdown([\"Google\"], label=\"Select model\", value=\"Google\")\n",
+ " with gr.Row():\n",
+ " convert = gr.Button(\"Convert code\")\n",
+ " with gr.Row():\n",
+ " python_run = gr.Button(\"Run Python\")\n",
+ " cpp_run = gr.Button(\"Run C++\")\n",
+ " with gr.Row():\n",
+ " python_out = gr.TextArea(label=\"Python result:\", elem_classes=[\"python\"])\n",
+ " cpp_out = gr.TextArea(label=\"C++ result:\", elem_classes=[\"cpp\"])\n",
+ "\n",
+ " convert.click(optimize, inputs=[python, model], outputs=[cpp])\n",
+ " python_run.click(execute_python, inputs=[python], outputs=[python_out])\n",
+ " cpp_run.click(execute_cpp, inputs=[cpp], outputs=[cpp_out])\n",
+ "\n",
+ "ui.launch(inbrowser=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c12f6115-e8a9-494e-95ce-2566854c0aa2",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
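Both `write_output` and `stream_google` in the notebook above strip the Markdown code fences that Gemini wraps around its reply, since ```` ```cpp ```` markers would not compile. A sketch of that cleanup as a standalone function (name is illustrative; the replacements mirror the notebook's own):

```python
# Sketch of the fence-stripping done by write_output/stream_google: model
# replies arrive wrapped in ```cpp ... ``` markers, which must be removed
# before the text can be written to optimized.cpp and compiled.

def strip_cpp_fences(reply):
    return (reply.replace("```cpp\n", "")
                 .replace("```cpp", "")
                 .replace("```", ""))

raw = "```cpp\n#include <iostream>\nint main() { return 0; }\n```"
code = strip_cpp_fences(raw)
```

Plain string replacement is crude but adequate here; a stricter version could match only fences at line starts to avoid touching literal backticks inside string constants.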
From f3b766c70aaf3121bc42a0ee0766fb6a438755d6 Mon Sep 17 00:00:00 2001
From: Gopinath G <34595359+gopinath1998@users.noreply.github.com>
Date: Sat, 21 Dec 2024 10:32:03 +0530
Subject: [PATCH 13/29] Add files via upload
---
.../week1 EXERCISE.ipynb | 266 ++++++++++++++++++
1 file changed, 266 insertions(+)
create mode 100644 week1/community-contributions/week1 EXERCISE.ipynb
diff --git a/week1/community-contributions/week1 EXERCISE.ipynb b/week1/community-contributions/week1 EXERCISE.ipynb
new file mode 100644
index 0000000..81ddf6b
--- /dev/null
+++ b/week1/community-contributions/week1 EXERCISE.ipynb
@@ -0,0 +1,266 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
+ "metadata": {},
+ "source": [
+ "# End of week 1 exercise\n",
+ "\n",
+ "To demonstrate your familiarity with OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
+ "and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 41,
+ "id": "c1070317-3ed9-4659-abe3-828943230e03",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "import os\n",
+ "import requests\n",
+ "import json \n",
+ "from dotenv import load_dotenv\n",
+ "from IPython.display import Markdown, display, update_display\n",
+ "from openai import OpenAI\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# constants\n",
+ "\n",
+ "MODEL_GPT = 'gpt-4o-mini'\n",
+ "MODEL_LLAMA = 'llama3.2'\n",
+ "OLLAMA_API = \"http://localhost:11434/api/chat\"\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 29,
+ "id": "0bb65a08-9090-434a-b99d-5659a370cfbc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Prompts\n",
+ "\n",
+    "system_prompt = \"You are a tutor who helps with user questions in detail, using markdown; respond with key points, \\\n",
+    "considering recent developments around the world, and keep the response in the most appropriate tone.\\n\"\n",
+ "\n",
+ "system_prompt += \"Some of Examples are\"\n",
+ "system_prompt += \"\"\"\n",
+ "{\"question\": \"1+1?\", \"response\": \"2\"},\n",
+    "{\"question\": \"why we should learn LLM models?\", \"response\": \" Learning about Large Language Models (LLMs) is important because they are a rapidly evolving technology with the potential to significantly impact various industries, offering advanced capabilities in text generation, translation, information retrieval, and more, which can be valuable for professionals across diverse fields, allowing them to enhance their work and gain a competitive edge by understanding and utilizing these powerful language processing tools.\\ \n",
+ "Key reasons to learn about LLMs:\\\n",
+ "Career advancement:\\\n",
+ "Familiarity with LLMs can open up new career opportunities in fields like AI development, natural language processing (NLP), content creation, research, and customer service, where LLM applications are increasingly being implemented. \\\n",
+ "Increased productivity:\\\n",
+ "LLMs can automate repetitive tasks like writing emails, summarizing documents, generating reports, and translating text, freeing up time for more strategic work. \\\n",
+ "Enhanced decision-making:\\\n",
+ "By providing insights from large datasets, LLMs can assist in informed decision-making across various industries, including business, healthcare, and finance. \\\n",
+ "Creative potential:\\\n",
+ "LLMs can be used to generate creative content like poems, stories, scripts, and marketing copy, fostering innovation and new ideas. \\\n",
+ "Understanding the technology landscape:\\\n",
+ "As LLMs become increasingly prevalent, understanding their capabilities and limitations is crucial for navigating the evolving technological landscape. \\\n",
+ "What is a large language model (LLM)? - Cloudflare\\\n",
+ "A large language model (LLM) is a type of artificial intelligence (AI) program that can recognize and generate text, among other t...\\\n",
+ " \"},\n",
+ "{\"question\": \"what is the future of AI?\", \"response\": \"AI is predicted to grow increasingly pervasive as technology develops, revolutionising sectors including healthcare, banking, and transportation\"},\n",
+ "\"\"\"\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "API key looks good so far\n"
+ ]
+ }
+ ],
+ "source": [
+ "# set up environment\n",
+ "load_dotenv()\n",
+ "api_key = os.getenv('OPENAI_API_KEY')\n",
+ "\n",
+ "if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
+ " print(\"API key looks good so far\")\n",
+ "else:\n",
+ " print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
+ " \n",
+ "MODEL = 'gpt-4o-mini'\n",
+ "openai = OpenAI()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# here is the question; type over this to ask something new\n",
+ "\n",
+ "user_question = \"\"\"\n",
+ "How important is it for Data Engineers to learn LLMs, considering the current evolution of AI?\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "id": "60ce7000-a4a5-4cce-a261-e75ef45063b4",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/markdown": [
+ "{\"question\": \"How important is it for Data Engineers to learn LLMs?\", \"response\": \"The importance of Data Engineers learning about Large Language Models (LLMs) cannot be overstated, especially given the rapid evolution of AI and its applications across various domains. Here's why this knowledge is essential:\n",
+ "\n",
+ "### Key Reasons for Data Engineers to Learn about LLMs:\n",
+ "\n",
+ "1. **Integration of AI in Data Pipelines:**\n",
+ " - As organizations increasingly adopt AI-driven solutions, Data Engineers will need to integrate LLMs into data pipelines for tasks such as text processing, feature extraction, and sentiment analysis.\n",
+ "\n",
+ "2. **Understanding Data Requirements:**\n",
+ " - LLMs require substantial and specific datasets for optimal performance. Knowledge of these requirements will help Data Engineers curate, preprocess, and manage data more effectively.\n",
+ "\n",
+ "3. **Enhanced Data Quality:**\n",
+ " - Data Engineers play a crucial role in ensuring data quality. Understanding LLMs can guide them in implementing effective validation checks and enhancing the data used for training these models.\n",
+ "\n",
+ "4. **Collaboration with Data Scientists:**\n",
+ " - Data Engineers are essential collaborators with Data Scientists. A solid grasp of LLMs will enable them to facilitate better communication and cooperation in model deployment and optimization.\n",
+ "\n",
+ "5. **Innovation in Product Development:**\n",
+ " - Familiarity with LLMs will enable Data Engineers to contribute innovative ideas for new products or features that leverage language processing capabilities, leading to enhanced user experiences.\n",
+ "\n",
+ "6. **Staying Current with Industry Trends:**\n",
+ " - The AI landscape is rapidly changing. Learning about LLMs keeps Data Engineers abreast of current trends and technologies, ensuring they remain competitive in the job market and valuable to their organizations.\n",
+ "\n",
+ "7. **Ethical and Responsible AI:**\n",
+ " - Understanding LLMs involves awareness of their ethical considerations, such as bias and misuse. Data Engineers can advocate for responsible AI practices within their organizations by being educated on these issues.\n",
+ "\n",
+ "8. **Scalability Considerations:**\n",
+ " - Data Engineers will need to design systems that can scale efficiently, especially when dealing with the substantial computational resources required for training and deploying LLMs.\n",
+ "\n",
+ "### Conclusion:\n",
+ "In summary, learning about LLMs is crucial for Data Engineers as it not only enhances their skill set but also positions them to contribute meaningfully to AI initiatives within their organizations. Embracing this knowledge will ultimately drive innovation and efficiency in their data-driven projects.\"}"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# Get gpt-4o-mini to answer, with streaming\n",
+ "def ask_tutor(question):\n",
+ " stream = openai.chat.completions.create(\n",
+ " model=MODEL_GPT,\n",
+ " messages=[\n",
+ "            {\"role\": \"system\", \"content\": system_prompt},\n",
+ "            {\"role\": \"user\", \"content\": question}\n",
+ " ],\n",
+ " stream=True\n",
+ " )\n",
+ " \n",
+ " response = \"\"\n",
+ " display_handle = display(Markdown(\"\"), display_id=True)\n",
+ " for chunk in stream:\n",
+ " response += chunk.choices[0].delta.content or ''\n",
+ " response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
+ " update_display(Markdown(response), display_id=display_handle.display_id)\n",
+ "\n",
+ "# call the gpt-4o-mini to answer with streaming\n",
+ "ask_tutor(user_question)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 46,
+ "id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
+ "metadata": {},
+ "outputs": [
+ {
+ "ename": "JSONDecodeError",
+ "evalue": "Extra data: line 2 column 1 (char 123)",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mJSONDecodeError\u001b[0m Traceback (most recent call last)",
+ "File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/models.py:963\u001b[0m, in \u001b[0;36mResponse.json\u001b[0;34m(self, **kwargs)\u001b[0m\n\u001b[1;32m 962\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 963\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mcomplexjson\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mloads\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcontent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdecode\u001b[49m\u001b[43m(\u001b[49m\u001b[43mencoding\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 964\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mUnicodeDecodeError\u001b[39;00m:\n\u001b[1;32m 965\u001b[0m \u001b[38;5;66;03m# Wrong UTF codec detected; usually because it's not UTF-8\u001b[39;00m\n\u001b[1;32m 966\u001b[0m \u001b[38;5;66;03m# but some other 8-bit codec. This is an RFC violation,\u001b[39;00m\n\u001b[1;32m 967\u001b[0m \u001b[38;5;66;03m# and the server didn't bother to tell us what codec *was*\u001b[39;00m\n\u001b[1;32m 968\u001b[0m \u001b[38;5;66;03m# used.\u001b[39;00m\n",
+ "File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/json/__init__.py:346\u001b[0m, in \u001b[0;36mloads\u001b[0;34m(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\u001b[0m\n\u001b[1;32m 343\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\u001b[38;5;28mcls\u001b[39m \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m object_hook \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m\n\u001b[1;32m 344\u001b[0m parse_int \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m parse_float \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m\n\u001b[1;32m 345\u001b[0m parse_constant \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m object_pairs_hook \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m kw):\n\u001b[0;32m--> 346\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43m_default_decoder\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdecode\u001b[49m\u001b[43m(\u001b[49m\u001b[43ms\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 347\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mcls\u001b[39m \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n",
+ "File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/json/decoder.py:340\u001b[0m, in \u001b[0;36mJSONDecoder.decode\u001b[0;34m(self, s, _w)\u001b[0m\n\u001b[1;32m 339\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m end \u001b[38;5;241m!=\u001b[39m \u001b[38;5;28mlen\u001b[39m(s):\n\u001b[0;32m--> 340\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m JSONDecodeError(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mExtra data\u001b[39m\u001b[38;5;124m\"\u001b[39m, s, end)\n\u001b[1;32m 341\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m obj\n",
+ "\u001b[0;31mJSONDecodeError\u001b[0m: Extra data: line 2 column 1 (char 123)",
+ "\nDuring handling of the above exception, another exception occurred:\n",
+ "\u001b[0;31mJSONDecodeError\u001b[0m Traceback (most recent call last)",
+ "Cell \u001b[0;32mIn[46], line 13\u001b[0m\n\u001b[1;32m 6\u001b[0m payload \u001b[38;5;241m=\u001b[39m {\n\u001b[1;32m 7\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmodel\u001b[39m\u001b[38;5;124m\"\u001b[39m: MODEL_LLAMA,\n\u001b[1;32m 8\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmessages\u001b[39m\u001b[38;5;124m\"\u001b[39m: messages,\n\u001b[1;32m 9\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mstream\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;28;01mTrue\u001b[39;00m\n\u001b[1;32m 10\u001b[0m }\n\u001b[1;32m 11\u001b[0m response \u001b[38;5;241m=\u001b[39m requests\u001b[38;5;241m.\u001b[39mpost(OLLAMA_API, json\u001b[38;5;241m=\u001b[39mpayload,headers\u001b[38;5;241m=\u001b[39mHEADERS)\n\u001b[0;32m---> 13\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[43mresponse\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mjson\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mmessage\u001b[39m\u001b[38;5;124m'\u001b[39m][\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcontent\u001b[39m\u001b[38;5;124m'\u001b[39m])\n\u001b[1;32m 15\u001b[0m \u001b[38;5;66;03m# # Process the response stream\u001b[39;00m\n\u001b[1;32m 16\u001b[0m \u001b[38;5;66;03m# for line in response.iter_lines():\u001b[39;00m\n\u001b[1;32m 17\u001b[0m \u001b[38;5;66;03m# if line: # Skip empty lines\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 23\u001b[0m \u001b[38;5;66;03m# except json.JSONDecodeError as e:\u001b[39;00m\n\u001b[1;32m 24\u001b[0m \u001b[38;5;66;03m# print(f\"Failed to decode JSON: {e}\")\u001b[39;00m\n",
+ "File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/models.py:971\u001b[0m, in \u001b[0;36mResponse.json\u001b[0;34m(self, **kwargs)\u001b[0m\n\u001b[1;32m 969\u001b[0m \u001b[38;5;28;01mpass\u001b[39;00m\n\u001b[1;32m 970\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m JSONDecodeError \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[0;32m--> 971\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m RequestsJSONDecodeError(e\u001b[38;5;241m.\u001b[39mmsg, e\u001b[38;5;241m.\u001b[39mdoc, e\u001b[38;5;241m.\u001b[39mpos)\n\u001b[1;32m 973\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 974\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m complexjson\u001b[38;5;241m.\u001b[39mloads(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtext, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n",
+ "\u001b[0;31mJSONDecodeError\u001b[0m: Extra data: line 2 column 1 (char 123)"
+ ]
+ }
+ ],
+ "source": [
+ "# Get Llama 3.2 to answer\n",
+ "messages = [\n",
+ " {\"role\": \"user\", \"content\": user_question}\n",
+ "]\n",
+ "HEADERS = {\"Content-Type\": \"application/json\"}\n",
+ "payload = {\n",
+ " \"model\": MODEL_LLAMA,\n",
+ " \"messages\": messages,\n",
+ " \"stream\": True\n",
+ " }\n",
+ "response = requests.post(OLLAMA_API, json=payload,headers=HEADERS)\n",
+ "\n",
+ "print(response.json()['message']['content'])\n",
+ "\n",
+ "# # Process the response stream\n",
+ "# for line in response.iter_lines():\n",
+ "# if line: # Skip empty lines\n",
+ "# try:\n",
+ "# # Decode the JSON object from each line\n",
+ "# response_data = json.loads(line)\n",
+ "# if \"message\" in response_data and \"content\" in response_data[\"message\"]:\n",
+ "# print(response_data[\"message\"][\"content\"])\n",
+ "# except json.JSONDecodeError as e:\n",
+ "# print(f\"Failed to decode JSON: {e}\")\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
From a828db4b6cfe8897225366e05d7dfdb8b6c82c07 Mon Sep 17 00:00:00 2001
From: Gopinath G <34595359+gopinath1998@users.noreply.github.com>
Date: Sat, 21 Dec 2024 10:39:08 +0530
Subject: [PATCH 14/29] Update week1 EXERCISE.ipynb
---
.../week1 EXERCISE.ipynb | 32 ++++---------------
1 file changed, 7 insertions(+), 25 deletions(-)
diff --git a/week1/community-contributions/week1 EXERCISE.ipynb b/week1/community-contributions/week1 EXERCISE.ipynb
index 81ddf6b..2094226 100644
--- a/week1/community-contributions/week1 EXERCISE.ipynb
+++ b/week1/community-contributions/week1 EXERCISE.ipynb
@@ -13,7 +13,7 @@
},
{
"cell_type": "code",
- "execution_count": 41,
+ "execution_count": 52,
"id": "c1070317-3ed9-4659-abe3-828943230e03",
"metadata": {},
"outputs": [],
@@ -25,7 +25,7 @@
"from dotenv import load_dotenv\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI\n",
- "\n"
+ "import ollama\n"
]
},
{
@@ -191,29 +191,10 @@
},
{
"cell_type": "code",
- "execution_count": 46,
+ "execution_count": null,
"id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
"metadata": {},
- "outputs": [
- {
- "ename": "JSONDecodeError",
- "evalue": "Extra data: line 2 column 1 (char 123)",
- "output_type": "error",
- "traceback": [
- "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
- "\u001b[0;31mJSONDecodeError\u001b[0m Traceback (most recent call last)",
- "File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/models.py:963\u001b[0m, in \u001b[0;36mResponse.json\u001b[0;34m(self, **kwargs)\u001b[0m\n\u001b[1;32m 962\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 963\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mcomplexjson\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mloads\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mcontent\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdecode\u001b[49m\u001b[43m(\u001b[49m\u001b[43mencoding\u001b[49m\u001b[43m)\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 964\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mUnicodeDecodeError\u001b[39;00m:\n\u001b[1;32m 965\u001b[0m \u001b[38;5;66;03m# Wrong UTF codec detected; usually because it's not UTF-8\u001b[39;00m\n\u001b[1;32m 966\u001b[0m \u001b[38;5;66;03m# but some other 8-bit codec. This is an RFC violation,\u001b[39;00m\n\u001b[1;32m 967\u001b[0m \u001b[38;5;66;03m# and the server didn't bother to tell us what codec *was*\u001b[39;00m\n\u001b[1;32m 968\u001b[0m \u001b[38;5;66;03m# used.\u001b[39;00m\n",
- "File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/json/__init__.py:346\u001b[0m, in \u001b[0;36mloads\u001b[0;34m(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)\u001b[0m\n\u001b[1;32m 343\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m (\u001b[38;5;28mcls\u001b[39m \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m object_hook \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m\n\u001b[1;32m 344\u001b[0m parse_int \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m parse_float \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m\n\u001b[1;32m 345\u001b[0m parse_constant \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m object_pairs_hook \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m kw):\n\u001b[0;32m--> 346\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43m_default_decoder\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mdecode\u001b[49m\u001b[43m(\u001b[49m\u001b[43ms\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 347\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mcls\u001b[39m \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n",
- "File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/json/decoder.py:340\u001b[0m, in \u001b[0;36mJSONDecoder.decode\u001b[0;34m(self, s, _w)\u001b[0m\n\u001b[1;32m 339\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m end \u001b[38;5;241m!=\u001b[39m \u001b[38;5;28mlen\u001b[39m(s):\n\u001b[0;32m--> 340\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m JSONDecodeError(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mExtra data\u001b[39m\u001b[38;5;124m\"\u001b[39m, s, end)\n\u001b[1;32m 341\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m obj\n",
- "\u001b[0;31mJSONDecodeError\u001b[0m: Extra data: line 2 column 1 (char 123)",
- "\nDuring handling of the above exception, another exception occurred:\n",
- "\u001b[0;31mJSONDecodeError\u001b[0m Traceback (most recent call last)",
- "Cell \u001b[0;32mIn[46], line 13\u001b[0m\n\u001b[1;32m 6\u001b[0m payload \u001b[38;5;241m=\u001b[39m {\n\u001b[1;32m 7\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmodel\u001b[39m\u001b[38;5;124m\"\u001b[39m: MODEL_LLAMA,\n\u001b[1;32m 8\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmessages\u001b[39m\u001b[38;5;124m\"\u001b[39m: messages,\n\u001b[1;32m 9\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mstream\u001b[39m\u001b[38;5;124m\"\u001b[39m: \u001b[38;5;28;01mTrue\u001b[39;00m\n\u001b[1;32m 10\u001b[0m }\n\u001b[1;32m 11\u001b[0m response \u001b[38;5;241m=\u001b[39m requests\u001b[38;5;241m.\u001b[39mpost(OLLAMA_API, json\u001b[38;5;241m=\u001b[39mpayload,headers\u001b[38;5;241m=\u001b[39mHEADERS)\n\u001b[0;32m---> 13\u001b[0m \u001b[38;5;28mprint\u001b[39m(\u001b[43mresponse\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mjson\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m[\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mmessage\u001b[39m\u001b[38;5;124m'\u001b[39m][\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mcontent\u001b[39m\u001b[38;5;124m'\u001b[39m])\n\u001b[1;32m 15\u001b[0m \u001b[38;5;66;03m# # Process the response stream\u001b[39;00m\n\u001b[1;32m 16\u001b[0m \u001b[38;5;66;03m# for line in response.iter_lines():\u001b[39;00m\n\u001b[1;32m 17\u001b[0m \u001b[38;5;66;03m# if line: # Skip empty lines\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 23\u001b[0m \u001b[38;5;66;03m# except json.JSONDecodeError as e:\u001b[39;00m\n\u001b[1;32m 24\u001b[0m \u001b[38;5;66;03m# print(f\"Failed to decode JSON: {e}\")\u001b[39;00m\n",
- "File \u001b[0;32m/opt/anaconda3/envs/llms/lib/python3.11/site-packages/requests/models.py:971\u001b[0m, in \u001b[0;36mResponse.json\u001b[0;34m(self, **kwargs)\u001b[0m\n\u001b[1;32m 969\u001b[0m \u001b[38;5;28;01mpass\u001b[39;00m\n\u001b[1;32m 970\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m JSONDecodeError \u001b[38;5;28;01mas\u001b[39;00m e:\n\u001b[0;32m--> 971\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m RequestsJSONDecodeError(e\u001b[38;5;241m.\u001b[39mmsg, e\u001b[38;5;241m.\u001b[39mdoc, e\u001b[38;5;241m.\u001b[39mpos)\n\u001b[1;32m 973\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 974\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m complexjson\u001b[38;5;241m.\u001b[39mloads(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39mtext, \u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\n",
- "\u001b[0;31mJSONDecodeError\u001b[0m: Extra data: line 2 column 1 (char 123)"
- ]
- }
- ],
+ "outputs": [],
"source": [
"# Get Llama 3.2 to answer\n",
"messages = [\n",
@@ -225,9 +206,10 @@
" \"messages\": messages,\n",
" \"stream\": True\n",
" }\n",
- "response = requests.post(OLLAMA_API, json=payload,headers=HEADERS)\n",
"\n",
- "print(response.json()['message']['content'])\n",
+ "response = ollama.chat(model=MODEL_LLAMA, messages=messages)\n",
+ "reply = response['message']['content']\n",
+ "display(Markdown(reply))\n",
"\n",
"# # Process the response stream\n",
"# for line in response.iter_lines():\n",
From 57bb6cc85a50d372bd4084bbd40325e90af25a19 Mon Sep 17 00:00:00 2001
From: codenigma1
Date: Sun, 22 Dec 2024 01:15:16 +1100
Subject: [PATCH 15/29] Day 5 challenge completed, with multilingual and
 multi-tone support
---
.../day5-multi-lingual-desire-format.ipynb | 3585 +++++++++++++++++
1 file changed, 3585 insertions(+)
create mode 100644 week1/community-contributions/day5-multi-lingual-desire-format.ipynb
diff --git a/week1/community-contributions/day5-multi-lingual-desire-format.ipynb b/week1/community-contributions/day5-multi-lingual-desire-format.ipynb
new file mode 100644
index 0000000..3f1b3ad
--- /dev/null
+++ b/week1/community-contributions/day5-multi-lingual-desire-format.ipynb
@@ -0,0 +1,3585 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "a98030af-fcd1-4d63-a36e-38ba053498fa",
+ "metadata": {},
+ "source": [
+ "# A full business solution\n",
+ "\n",
+ "## Now we will take our project from Day 1 to the next level\n",
+ "\n",
+ "### BUSINESS CHALLENGE:\n",
+ "\n",
+ "Create a product that builds a Brochure for a company to be used for prospective clients, investors and potential recruits.\n",
+ "\n",
+ "We will be provided a company name and their primary website.\n",
+ "\n",
+ "See the end of this notebook for examples of real-world business applications.\n",
+ "\n",
+ "And remember: I'm always available if you have problems or ideas! Please do reach out."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "d5b08506-dc8b-4443-9201-5f1848161363",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "# If these fail, please check you're running from an 'activated' environment with (llms) in the command prompt\n",
+ "\n",
+ "import os\n",
+ "import requests\n",
+ "import json\n",
+ "from typing import List\n",
+ "from dotenv import load_dotenv\n",
+ "from bs4 import BeautifulSoup\n",
+ "from IPython.display import Markdown, display, update_display\n",
+ "from openai import OpenAI"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "fc5d8880-f2ee-4c06-af16-ecbc0262af61",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "API key looks good so far\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Initialize and constants\n",
+ "\n",
+ "load_dotenv()\n",
+ "api_key = os.getenv('OPENAI_API_KEY')\n",
+ "\n",
+ "if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
+ " print(\"API key looks good so far\")\n",
+ "else:\n",
+ " print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
+ " \n",
+ "MODEL = 'gpt-4o-mini'\n",
+ "openai = OpenAI()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "106dd65e-90af-4ca8-86b6-23a41840645b",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A class to represent a Webpage\n",
+ "\n",
+ "# Some websites need you to use proper headers when fetching them:\n",
+ "headers = {\n",
+ " \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
+ "}\n",
+ "\n",
+ "class Website:\n",
+ " \"\"\"\n",
+ " A utility class to represent a Website that we have scraped, now with links\n",
+ " \"\"\"\n",
+ "\n",
+ " def __init__(self, url):\n",
+ " self.url = url\n",
+ " response = requests.get(url, headers=headers)\n",
+ " self.body = response.content\n",
+ " soup = BeautifulSoup(self.body, 'html.parser')\n",
+ " self.title = soup.title.string if soup.title else \"No title found\"\n",
+ " if soup.body:\n",
+ " for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
+ " irrelevant.decompose()\n",
+ " self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
+ " else:\n",
+ " self.text = \"\"\n",
+ " links = [link.get('href') for link in soup.find_all('a')]\n",
+ " self.links = [link for link in links if link]\n",
+ "\n",
+ " def get_contents(self):\n",
+ " return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "e30d8128-933b-44cc-81c8-ab4c9d86589a",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "['https://edwarddonner.com/',\n",
+ " 'https://edwarddonner.com/outsmart/',\n",
+ " 'https://edwarddonner.com/about-me-and-about-nebula/',\n",
+ " 'https://edwarddonner.com/posts/',\n",
+ " 'https://edwarddonner.com/',\n",
+ " 'https://news.ycombinator.com',\n",
+ " 'https://nebula.io/?utm_source=ed&utm_medium=referral',\n",
+ " 'https://www.prnewswire.com/news-releases/wynden-stark-group-acquires-nyc-venture-backed-tech-startup-untapt-301269512.html',\n",
+ " 'https://patents.google.com/patent/US20210049536A1/',\n",
+ " 'https://www.linkedin.com/in/eddonner/',\n",
+ " 'https://edwarddonner.com/2024/11/13/llm-engineering-resources/',\n",
+ " 'https://edwarddonner.com/2024/11/13/llm-engineering-resources/',\n",
+ " 'https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/',\n",
+ " 'https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/',\n",
+ " 'https://edwarddonner.com/2024/08/06/outsmart/',\n",
+ " 'https://edwarddonner.com/2024/08/06/outsmart/',\n",
+ " 'https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/',\n",
+ " 'https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/',\n",
+ " 'https://edwarddonner.com/',\n",
+ " 'https://edwarddonner.com/outsmart/',\n",
+ " 'https://edwarddonner.com/about-me-and-about-nebula/',\n",
+ " 'https://edwarddonner.com/posts/',\n",
+ " 'mailto:hello@mygroovydomain.com',\n",
+ " 'https://www.linkedin.com/in/eddonner/',\n",
+ " 'https://twitter.com/edwarddonner',\n",
+ " 'https://www.facebook.com/edward.donner.52']"
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "ed = Website(\"https://edwarddonner.com\")\n",
+ "ed.links"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1771af9c-717a-4fca-bbbe-8a95893312c3",
+ "metadata": {},
+ "source": [
+ "## First step: Have GPT-4o-mini figure out which links are relevant\n",
+ "\n",
+ "### Use a call to gpt-4o-mini to read the links on a webpage, and respond in structured JSON. \n",
+ "It should decide which links are relevant, and replace relative links such as \"/about\" with \"https://company.com/about\". \n",
+ "We will use \"one shot prompting\" in which we provide an example of how it should respond in the prompt.\n",
+ "\n",
+ "This is an excellent use case for an LLM, because it requires nuanced understanding. Imagine trying to code this without LLMs by parsing and analyzing the webpage - it would be very hard!\n",
+ "\n",
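+ "The mechanical half of this task - converting a relative link such as \"/about\" into an absolute URL - needs no LLM at all: the standard-library `urllib.parse.urljoin` handles it. A minimal sketch (the LLM is only needed for the *relevance* judgement):\n",
+ "\n",
+ "```python\n",
+ "from urllib.parse import urljoin\n",
+ "\n",
+ "# resolve a relative href against the page it was found on\n",
+ "urljoin(\"https://company.com/some/page\", \"/about\")  # 'https://company.com/about'\n",
+ "```\n",
+ "\n",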
+ "Sidenote: there is a more advanced technique called \"Structured Outputs\" in which we require the model to respond according to a spec. We cover this technique in Week 8 during our autonomous Agentic AI project."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "6957b079-0d96-45f7-a26a-3487510e9b35",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "link_system_prompt = \"You are provided with a list of links found on a webpage. \\\n",
+ "You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n",
+ "such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n",
+ "link_system_prompt += \"You should respond in JSON as in this example:\"\n",
+ "link_system_prompt += \"\"\"\n",
+ "{\n",
+ " \"links\": [\n",
+ " {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
+ "        {\"type\": \"careers page\", \"url\": \"https://another.full.url/careers\"}\n",
+ " ]\n",
+ "}\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "b97e4068-97ed-4120-beae-c42105e4d59a",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "You are provided with a list of links found on a webpage. You are able to decide which of the links would be most relevant to include in a brochure about the company, such as links to an About page, or a Company page, or Careers/Jobs pages.\n",
+ "You should respond in JSON as in this example:\n",
+ "{\n",
+ " \"links\": [\n",
+ " {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
+ "        {\"type\": \"careers page\", \"url\": \"https://another.full.url/careers\"}\n",
+ " ]\n",
+ "}\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(link_system_prompt)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "8e1f601b-2eaf-499d-b6b8-c99050c9d6b3",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def get_links_user_prompt(website):\n",
+ " user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n",
+ " user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n",
+ "Do not include Terms of Service, Privacy, email links.\\n\"\n",
+ " user_prompt += \"Links (some might be relative links):\\n\"\n",
+ " user_prompt += \"\\n\".join(website.links)\n",
+ " return user_prompt"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "6bcbfa78-6395-4685-b92c-22d592050fd7",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Here is the list of links on the website of https://edwarddonner.com - please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. Do not include Terms of Service, Privacy, email links.\n",
+ "Links (some might be relative links):\n",
+ "https://edwarddonner.com/\n",
+ "https://edwarddonner.com/outsmart/\n",
+ "https://edwarddonner.com/about-me-and-about-nebula/\n",
+ "https://edwarddonner.com/posts/\n",
+ "https://edwarddonner.com/\n",
+ "https://news.ycombinator.com\n",
+ "https://nebula.io/?utm_source=ed&utm_medium=referral\n",
+ "https://www.prnewswire.com/news-releases/wynden-stark-group-acquires-nyc-venture-backed-tech-startup-untapt-301269512.html\n",
+ "https://patents.google.com/patent/US20210049536A1/\n",
+ "https://www.linkedin.com/in/eddonner/\n",
+ "https://edwarddonner.com/2024/11/13/llm-engineering-resources/\n",
+ "https://edwarddonner.com/2024/11/13/llm-engineering-resources/\n",
+ "https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/\n",
+ "https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/\n",
+ "https://edwarddonner.com/2024/08/06/outsmart/\n",
+ "https://edwarddonner.com/2024/08/06/outsmart/\n",
+ "https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/\n",
+ "https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/\n",
+ "https://edwarddonner.com/\n",
+ "https://edwarddonner.com/outsmart/\n",
+ "https://edwarddonner.com/about-me-and-about-nebula/\n",
+ "https://edwarddonner.com/posts/\n",
+ "mailto:hello@mygroovydomain.com\n",
+ "https://www.linkedin.com/in/eddonner/\n",
+ "https://twitter.com/edwarddonner\n",
+ "https://www.facebook.com/edward.donner.52\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(get_links_user_prompt(ed))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "a29aca19-ca13-471c-a4b4-5abbfa813f69",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def get_links(url):\n",
+ "    # Ask the model which of the scraped links belong in a company brochure;\n",
+ "    # response_format requests a JSON object we can parse directly\n",
+ "    website = Website(url)\n",
+ " response = openai.chat.completions.create(\n",
+ " model=MODEL,\n",
+ " messages=[\n",
+ " {\"role\": \"system\", \"content\": link_system_prompt},\n",
+ " {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n",
+ " ],\n",
+ " response_format={\"type\": \"json_object\"}\n",
+ " )\n",
+ " result = response.choices[0].message.content\n",
+ " return json.loads(result)"
+ ]
+ },
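+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b1f2c3d4-0000-4a5b-8c6d-000000000001",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional hardening (a sketch, not part of the original exercise):\n",
+ "# get_links assumes the model always returns valid JSON. This hypothetical\n",
+ "# wrapper retries once and then falls back to an empty link list, so one\n",
+ "# malformed response doesn't crash the whole run.\n",
+ "def get_links_safe(url, retries=2):\n",
+ "    for _ in range(retries):\n",
+ "        try:\n",
+ "            return get_links(url)\n",
+ "        except json.JSONDecodeError:\n",
+ "            continue\n",
+ "    return {\"links\": []}"
+ ]
+ },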
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "74a827a0-2782-4ae5-b210-4a242a8b4cc2",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "['/',\n",
+ " '/models',\n",
+ " '/datasets',\n",
+ " '/spaces',\n",
+ " '/posts',\n",
+ " '/docs',\n",
+ " '/enterprise',\n",
+ " '/pricing',\n",
+ " '/login',\n",
+ " '/join',\n",
+ " '/IamCreateAI/Ruyi-Mini-7B',\n",
+ " '/Datou1111/shou_xin',\n",
+ " '/answerdotai/ModernBERT-base',\n",
+ " '/meta-llama/Llama-3.3-70B-Instruct',\n",
+ " '/tencent/HunyuanVideo',\n",
+ " '/models',\n",
+ " '/spaces/JeffreyXiang/TRELLIS',\n",
+ " '/spaces/HuggingFaceH4/blogpost-scaling-test-time-compute',\n",
+ " '/spaces/multimodalart/flux-style-shaping',\n",
+ " '/spaces/Kwai-Kolors/Kolors-Virtual-Try-On',\n",
+ " '/spaces/lllyasviel/iclight-v2',\n",
+ " '/spaces',\n",
+ " '/datasets/fka/awesome-chatgpt-prompts',\n",
+ " '/datasets/O1-OPEN/OpenO1-SFT',\n",
+ " '/datasets/HuggingFaceFW/fineweb-2',\n",
+ " '/datasets/HuggingFaceTB/finemath',\n",
+ " '/datasets/amphora/QwQ-LongCoT-130K',\n",
+ " '/datasets',\n",
+ " '/join',\n",
+ " '/pricing#endpoints',\n",
+ " '/pricing#spaces',\n",
+ " '/pricing',\n",
+ " '/enterprise',\n",
+ " '/enterprise',\n",
+ " '/enterprise',\n",
+ " '/enterprise',\n",
+ " '/enterprise',\n",
+ " '/enterprise',\n",
+ " '/enterprise',\n",
+ " '/allenai',\n",
+ " '/facebook',\n",
+ " '/amazon',\n",
+ " '/google',\n",
+ " '/Intel',\n",
+ " '/microsoft',\n",
+ " '/grammarly',\n",
+ " '/Writer',\n",
+ " '/docs/transformers',\n",
+ " '/docs/diffusers',\n",
+ " '/docs/safetensors',\n",
+ " '/docs/huggingface_hub',\n",
+ " '/docs/tokenizers',\n",
+ " '/docs/peft',\n",
+ " '/docs/transformers.js',\n",
+ " '/docs/timm',\n",
+ " '/docs/trl',\n",
+ " '/docs/datasets',\n",
+ " '/docs/text-generation-inference',\n",
+ " '/docs/accelerate',\n",
+ " '/models',\n",
+ " '/datasets',\n",
+ " '/spaces',\n",
+ " '/tasks',\n",
+ " 'https://ui.endpoints.huggingface.co',\n",
+ " '/chat',\n",
+ " '/huggingface',\n",
+ " '/brand',\n",
+ " '/terms-of-service',\n",
+ " '/privacy',\n",
+ " 'https://apply.workable.com/huggingface/',\n",
+ " 'mailto:press@huggingface.co',\n",
+ " '/learn',\n",
+ " '/docs',\n",
+ " '/blog',\n",
+ " 'https://discuss.huggingface.co',\n",
+ " 'https://status.huggingface.co/',\n",
+ " 'https://github.com/huggingface',\n",
+ " 'https://twitter.com/huggingface',\n",
+ " 'https://www.linkedin.com/company/huggingface/',\n",
+ " '/join/discord']"
+ ]
+ },
+ "execution_count": 10,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Anthropic has made their site harder to scrape, so I'm using Hugging Face instead.\n",
+ "\n",
+ "huggingface = Website(\"https://huggingface.co\")\n",
+ "huggingface.links"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "d3d583e2-dcc4-40cc-9b28-1e8dbf402924",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "{'links': [{'type': 'homepage', 'url': 'https://huggingface.co/'},\n",
+ " {'type': 'about page', 'url': 'https://huggingface.co/huggingface'},\n",
+ " {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'},\n",
+ " {'type': 'blog', 'url': 'https://huggingface.co/blog'},\n",
+ " {'type': 'github page', 'url': 'https://github.com/huggingface'},\n",
+ " {'type': 'twitter page', 'url': 'https://twitter.com/huggingface'},\n",
+ " {'type': 'linkedin page',\n",
+ " 'url': 'https://www.linkedin.com/company/huggingface/'}]}"
+ ]
+ },
+ "execution_count": 11,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "get_links(\"https://huggingface.co\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0d74128e-dfb6-47ec-9549-288b621c838c",
+ "metadata": {},
+ "source": [
+ "## Second step: make the brochure!\n",
+ "\n",
+ "Assemble all the details into another prompt to GPT-4o"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "85a5b6e2-e7ef-44a9-bc7f-59ede71037b5",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def get_all_details(url):\n",
+ "    # Gather the landing page plus every brochure-relevant linked page\n",
+ "    # into a single block of text for the next prompt\n",
+ "    result = \"Landing page:\\n\"\n",
+ " result += Website(url).get_contents()\n",
+ " links = get_links(url)\n",
+ " print(\"Found links:\", links)\n",
+ " for link in links[\"links\"]:\n",
+ " result += f\"\\n\\n{link['type']}\\n\"\n",
+ " result += Website(link[\"url\"]).get_contents()\n",
+ " return result"
+ ]
+ },
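+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b1f2c3d4-0000-4a5b-8c6d-000000000002",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional hardening (a sketch, not part of the original exercise):\n",
+ "# get_all_details assumes every linked page fetches successfully, so a\n",
+ "# single unreachable URL would raise and lose the whole result. This\n",
+ "# hypothetical variant skips pages that fail to load.\n",
+ "def get_all_details_safe(url):\n",
+ "    result = \"Landing page:\\n\" + Website(url).get_contents()\n",
+ "    for link in get_links(url)[\"links\"]:\n",
+ "        try:\n",
+ "            result += f\"\\n\\n{link['type']}\\n\" + Website(link[\"url\"]).get_contents()\n",
+ "        except Exception as e:\n",
+ "            print(f\"Skipping {link['url']}: {e}\")\n",
+ "    return result"
+ ]
+ },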
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "5099bd14-076d-4745-baf3-dac08d8e5ab2",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.co/about'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'blog page', 'url': 'https://huggingface.co/blog'}, {'type': 'company page', 'url': 'https://huggingface.co/huggingface'}, {'type': 'community discussions', 'url': 'https://discuss.huggingface.co'}, {'type': 'GitHub page', 'url': 'https://github.com/huggingface'}, {'type': 'Twitter page', 'url': 'https://twitter.com/huggingface'}, {'type': 'LinkedIn page', 'url': 'https://www.linkedin.com/company/huggingface/'}]}\n",
+ "Landing page:\n",
+ "Webpage Title:\n",
+ "Hugging Face – The AI community building the future.\n",
+ "Webpage Contents:\n",
+ "Hugging Face\n",
+ "Models\n",
+ "Datasets\n",
+ "Spaces\n",
+ "Posts\n",
+ "Docs\n",
+ "Enterprise\n",
+ "Pricing\n",
+ "Log In\n",
+ "Sign Up\n",
+ "The AI community building the future.\n",
+ "The platform where the machine learning community collaborates on models, datasets, and applications.\n",
+ "Trending on\n",
+ "this week\n",
+ "Models\n",
+ "IamCreateAI/Ruyi-Mini-7B\n",
+ "Updated\n",
+ "4 days ago\n",
+ "•\n",
+ "8.17k\n",
+ "•\n",
+ "352\n",
+ "Datou1111/shou_xin\n",
+ "Updated\n",
+ "12 days ago\n",
+ "•\n",
+ "28.3k\n",
+ "•\n",
+ "672\n",
+ "answerdotai/ModernBERT-base\n",
+ "Updated\n",
+ "1 day ago\n",
+ "•\n",
+ "6.24k\n",
+ "•\n",
+ "236\n",
+ "meta-llama/Llama-3.3-70B-Instruct\n",
+ "Updated\n",
+ "11 days ago\n",
+ "•\n",
+ "236k\n",
+ "•\n",
+ "1.21k\n",
+ "tencent/HunyuanVideo\n",
+ "Updated\n",
+ "3 days ago\n",
+ "•\n",
+ "6.01k\n",
+ "•\n",
+ "1.2k\n",
+ "Browse 400k+ models\n",
+ "Spaces\n",
+ "Running\n",
+ "on\n",
+ "Zero\n",
+ "1.79k\n",
+ "🏢\n",
+ "TRELLIS\n",
+ "Scalable and Versatile 3D Generation from images\n",
+ "Running\n",
+ "306\n",
+ "📝\n",
+ "Scaling test-time compute\n",
+ "Running\n",
+ "on\n",
+ "Zero\n",
+ "470\n",
+ "🚀\n",
+ "Flux Style Shaping\n",
+ "Optical illusions and style transfer with FLUX\n",
+ "Running\n",
+ "on\n",
+ "CPU Upgrade\n",
+ "6.11k\n",
+ "👕\n",
+ "Kolors Virtual Try-On\n",
+ "Running\n",
+ "on\n",
+ "Zero\n",
+ "965\n",
+ "📈\n",
+ "IC Light V2\n",
+ "Browse 150k+ applications\n",
+ "Datasets\n",
+ "fka/awesome-chatgpt-prompts\n",
+ "Updated\n",
+ "Sep 3\n",
+ "•\n",
+ "6.83k\n",
+ "•\n",
+ "6.58k\n",
+ "O1-OPEN/OpenO1-SFT\n",
+ "Updated\n",
+ "4 days ago\n",
+ "•\n",
+ "1.86k\n",
+ "•\n",
+ "234\n",
+ "HuggingFaceFW/fineweb-2\n",
+ "Updated\n",
+ "13 days ago\n",
+ "•\n",
+ "77.7k\n",
+ "•\n",
+ "342\n",
+ "HuggingFaceTB/finemath\n",
+ "Updated\n",
+ "1 day ago\n",
+ "•\n",
+ "1.86k\n",
+ "•\n",
+ "43\n",
+ "amphora/QwQ-LongCoT-130K\n",
+ "Updated\n",
+ "16 days ago\n",
+ "•\n",
+ "1.34k\n",
+ "•\n",
+ "85\n",
+ "Browse 100k+ datasets\n",
+ "The Home of Machine Learning\n",
+ "Create, discover and collaborate on ML better.\n",
+ "The collaboration platform\n",
+ "Host and collaborate on unlimited public models, datasets and applications.\n",
+ "Move faster\n",
+ "With the HF Open source stack.\n",
+ "Explore all modalities\n",
+ "Text, image, video, audio or even 3D.\n",
+ "Build your portfolio\n",
+ "Share your work with the world and build your ML profile.\n",
+ "Sign Up\n",
+ "Accelerate your ML\n",
+ "We provide paid Compute and Enterprise solutions.\n",
+ "Compute\n",
+ "Deploy on optimized\n",
+ "Inference Endpoints\n",
+ "or update your\n",
+ "Spaces applications\n",
+ "to a GPU in a few clicks.\n",
+ "View pricing\n",
+ "Starting at $0.60/hour for GPU\n",
+ "Enterprise\n",
+ "Give your team the most advanced platform to build AI with enterprise-grade security, access controls and\n",
+ "\t\t\tdedicated support.\n",
+ "Getting started\n",
+ "Starting at $20/user/month\n",
+ "Single Sign-On\n",
+ "Regions\n",
+ "Priority Support\n",
+ "Audit Logs\n",
+ "Resource Groups\n",
+ "Private Datasets Viewer\n",
+ "More than 50,000 organizations are using Hugging Face\n",
+ "Ai2\n",
+ "Enterprise\n",
+ "non-profit\n",
+ "•\n",
+ "366 models\n",
+ "•\n",
+ "1.76k followers\n",
+ "AI at Meta\n",
+ "Enterprise\n",
+ "company\n",
+ "•\n",
+ "2.05k models\n",
+ "•\n",
+ "3.83k followers\n",
+ "Amazon Web Services\n",
+ "company\n",
+ "•\n",
+ "21 models\n",
+ "•\n",
+ "2.45k followers\n",
+ "Google\n",
+ "company\n",
+ "•\n",
+ "911 models\n",
+ "•\n",
+ "5.76k followers\n",
+ "Intel\n",
+ "company\n",
+ "•\n",
+ "217 models\n",
+ "•\n",
+ "2.07k followers\n",
+ "Microsoft\n",
+ "company\n",
+ "•\n",
+ "351 models\n",
+ "•\n",
+ "6.29k followers\n",
+ "Grammarly\n",
+ "company\n",
+ "•\n",
+ "10 models\n",
+ "•\n",
+ "102 followers\n",
+ "Writer\n",
+ "Enterprise\n",
+ "company\n",
+ "•\n",
+ "17 models\n",
+ "•\n",
+ "186 followers\n",
+ "Our Open Source\n",
+ "We are building the foundation of ML tooling with the community.\n",
+ "Transformers\n",
+ "136,571\n",
+ "State-of-the-art ML for Pytorch, TensorFlow, and JAX.\n",
+ "Diffusers\n",
+ "26,740\n",
+ "State-of-the-art diffusion models for image and audio generation in PyTorch.\n",
+ "Safetensors\n",
+ "2,960\n",
+ "Simple, safe way to store and distribute neural networks weights safely and quickly.\n",
+ "Hub Python Library\n",
+ "2,177\n",
+ "Client library for the HF Hub: manage repositories from your Python runtime.\n",
+ "Tokenizers\n",
+ "9,165\n",
+ "Fast tokenizers, optimized for both research and production.\n",
+ "PEFT\n",
+ "16,767\n",
+ "Parameter efficient finetuning methods for large models.\n",
+ "Transformers.js\n",
+ "12,421\n",
+ "State-of-the-art Machine Learning for the web. Run Transformers directly in your browser, with no need for a server.\n",
+ "timm\n",
+ "32,668\n",
+ "State-of-the-art computer vision models, layers, optimizers, training/evaluation, and utilities.\n",
+ "TRL\n",
+ "10,382\n",
+ "Train transformer language models with reinforcement learning.\n",
+ "Datasets\n",
+ "19,378\n",
+ "Access and share datasets for computer vision, audio, and NLP tasks.\n",
+ "Text Generation Inference\n",
+ "9,484\n",
+ "Toolkit to serve Large Language Models.\n",
+ "Accelerate\n",
+ "8,082\n",
+ "Easily train and use PyTorch models with multi-GPU, TPU, mixed-precision.\n",
+ "System theme\n",
+ "Website\n",
+ "Models\n",
+ "Datasets\n",
+ "Spaces\n",
+ "Tasks\n",
+ "Inference Endpoints\n",
+ "HuggingChat\n",
+ "Company\n",
+ "About\n",
+ "Brand assets\n",
+ "Terms of service\n",
+ "Privacy\n",
+ "Jobs\n",
+ "Press\n",
+ "Resources\n",
+ "Learn\n",
+ "Documentation\n",
+ "Blog\n",
+ "Forum\n",
+ "Service Status\n",
+ "Social\n",
+ "GitHub\n",
+ "Twitter\n",
+ "LinkedIn\n",
+ "Discord\n",
+ "\n",
+ "\n",
+ "\n",
+ "about page\n",
+ "Webpage Title:\n",
+ "about (Sergei)\n",
+ "Webpage Contents:\n",
+ "Hugging Face\n",
+ "Models\n",
+ "Datasets\n",
+ "Spaces\n",
+ "Posts\n",
+ "Docs\n",
+ "Enterprise\n",
+ "Pricing\n",
+ "Log In\n",
+ "Sign Up\n",
+ "Sergei\n",
+ "about\n",
+ "Follow\n",
+ "Kalaipriya's profile picture\n",
+ "selvivincent's profile picture\n",
+ "Renumathi's profile picture\n",
+ "3\n",
+ "\t\t\t\t\tfollowers\n",
+ "·\n",
+ "0 following\n",
+ "AI & ML interests\n",
+ "None yet\n",
+ "Organizations\n",
+ "None yet\n",
+ "models\n",
+ "None public yet\n",
+ "datasets\n",
+ "None public yet\n",
+ "System theme\n",
+ "Company\n",
+ "TOS\n",
+ "Privacy\n",
+ "About\n",
+ "Jobs\n",
+ "Website\n",
+ "Models\n",
+ "Datasets\n",
+ "Spaces\n",
+ "Pricing\n",
+ "Docs\n",
+ "\n",
+ "\n",
+ "\n",
+ "careers page\n",
+ "Webpage Title:\n",
+ "Hugging Face - Current Openings\n",
+ "Webpage Contents:\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "blog page\n",
+ "Webpage Title:\n",
+ "Hugging Face – Blog\n",
+ "Webpage Contents:\n",
+ "Hugging Face\n",
+ "Models\n",
+ "Datasets\n",
+ "Spaces\n",
+ "Posts\n",
+ "Docs\n",
+ "Enterprise\n",
+ "Pricing\n",
+ "Log In\n",
+ "Sign Up\n",
+ "Blog, Articles, and discussions\n",
+ "New Article\n",
+ "Everything\n",
+ "community\n",
+ "guide\n",
+ "open source collab\n",
+ "partnerships\n",
+ "research\n",
+ "NLP\n",
+ "Audio\n",
+ "CV\n",
+ "RL\n",
+ "ethics\n",
+ "Diffusion\n",
+ "Game Development\n",
+ "RLHF\n",
+ "Leaderboard\n",
+ "Case Studies\n",
+ "Evaluating Audio Reasoning with Big Bench Audio\n",
+ "By\n",
+ "mhillsmith\n",
+ "December 20, 2024\n",
+ "guest\n",
+ "•\n",
+ "8\n",
+ "Community Articles\n",
+ "view all\n",
+ "20+ Free and Paid Digital Marketing Strategies to Automate Repetitive Tasks\n",
+ "By\n",
+ "Markets\n",
+ "•\n",
+ "about 3 hours ago\n",
+ "•\n",
+ "1\n",
+ "🧠 Tags generation dataset\n",
+ "By\n",
+ "zino36\n",
+ "•\n",
+ "about 16 hours ago\n",
+ "•\n",
+ "1\n",
+ "AI Agents in Action: Managing GitHub Issues with KaibanJS\n",
+ "By\n",
+ "darielnoel\n",
+ "•\n",
+ "1 day ago\n",
+ "**Intelligence Potentiation: An Evolutionary Perspective on AI Agent Designs**\n",
+ "By\n",
+ "KnutJaegersberg\n",
+ "•\n",
+ "1 day ago\n",
+ "•\n",
+ "3\n",
+ "MINERVA: A Multi-Agent LLM System for Digital Scam Protection\n",
+ "By\n",
+ "dcarpintero\n",
+ "•\n",
+ "2 days ago\n",
+ "Mastering Iterative Prompting for Optimized AI Code Generation\n",
+ "By\n",
+ "luigi12345\n",
+ "•\n",
+ "3 days ago\n",
+ "•\n",
+ "1\n",
+ "SILMA RAGQA V1.0: A Comprehensive Benchmark for Evaluating LLMs on RAG QA Use-Cases\n",
+ "By\n",
+ "karimouda\n",
+ "•\n",
+ "3 days ago\n",
+ "•\n",
+ "1\n",
+ "FuseChat-3.0: Preference Optimization for Implicit Model Fusion\n",
+ "By\n",
+ "Wanfq\n",
+ "•\n",
+ "3 days ago\n",
+ "•\n",
+ "2\n",
+ "Tutorial: Quantizing Llama 3+ Models for Efficient Deployment\n",
+ "By\n",
+ "theeseus-ai\n",
+ "•\n",
+ "6 days ago\n",
+ "•\n",
+ "3\n",
+ "How to Expand Your AI Music Generations of 30 Seconds to Several Minutes\n",
+ "By\n",
+ "theeseus-ai\n",
+ "•\n",
+ "8 days ago\n",
+ "•\n",
+ "1\n",
+ "🇪🇺✍️ EU AI Act: Systemic Risks in the First CoP Draft Comments ✍️🇪🇺\n",
+ "By\n",
+ "yjernite\n",
+ "•\n",
+ "9 days ago\n",
+ "•\n",
+ "11\n",
+ "Building an AI-powered search engine from scratch\n",
+ "By\n",
+ "as-cle-bert\n",
+ "•\n",
+ "10 days ago\n",
+ "•\n",
+ "8\n",
+ "MotionLCM-V2: Improved Compression Rate for Multi-Latent-Token Diffusion\n",
+ "By\n",
+ "wxDai\n",
+ "•\n",
+ "10 days ago\n",
+ "•\n",
+ "12\n",
+ "RLHF 101: A Technical Dive into RLHF\n",
+ "By\n",
+ "GitBag\n",
+ "•\n",
+ "10 days ago\n",
+ "•\n",
+ "4\n",
+ "[Talk Arena](https://talkarena.org)\n",
+ "By\n",
+ "WillHeld\n",
+ "•\n",
+ "11 days ago\n",
+ "•\n",
+ "1\n",
+ "Multimodal RAG with Colpali, Milvus and VLMs\n",
+ "By\n",
+ "saumitras\n",
+ "•\n",
+ "11 days ago\n",
+ "•\n",
+ "2\n",
+ "In Honour of This Year's NeurIPs Test of Time Paper Awardees\n",
+ "By\n",
+ "Jaward\n",
+ "•\n",
+ "11 days ago\n",
+ "•\n",
+ "2\n",
+ "Power steering: Squeeze massive power from small LLMs\n",
+ "By\n",
+ "ucheog\n",
+ "•\n",
+ "12 days ago\n",
+ "•\n",
+ "4\n",
+ "Exploring the Power of KaibanJS v0.11.0 🚀\n",
+ "By\n",
+ "darielnoel\n",
+ "•\n",
+ "12 days ago\n",
+ "•\n",
+ "1\n",
+ "**Building a Custom Retrieval System with Motoko and Node.js**\n",
+ "By\n",
+ "theeseus-ai\n",
+ "•\n",
+ "12 days ago\n",
+ "•\n",
+ "1\n",
+ "Finally, a Replacement for BERT: Introducing ModernBERT\n",
+ "By\n",
+ "bwarner\n",
+ "December 19, 2024\n",
+ "guest\n",
+ "•\n",
+ "289\n",
+ "Bamba: Inference-Efficient Hybrid Mamba2 Model\n",
+ "By\n",
+ "Linsong-C\n",
+ "December 18, 2024\n",
+ "guest\n",
+ "•\n",
+ "30\n",
+ "Welcome the Falcon 3 Family of Open Models!\n",
+ "By\n",
+ "FalconLLM\n",
+ "December 17, 2024\n",
+ "•\n",
+ "98\n",
+ "Benchmarking Language Model Performance on 5th Gen Xeon at GCP\n",
+ "By\n",
+ "MatrixYao\n",
+ "December 17, 2024\n",
+ "•\n",
+ "2\n",
+ "Introducing the Synthetic Data Generator - Build Datasets with Natural Language\n",
+ "By\n",
+ "davidberenstein1957\n",
+ "December 16, 2024\n",
+ "•\n",
+ "55\n",
+ "LeMaterial: an open source initiative to accelerate materials discovery and research\n",
+ "By\n",
+ "AlexDuvalinho\n",
+ "December 10, 2024\n",
+ "guest\n",
+ "•\n",
+ "30\n",
+ "Hugging Face models in Amazon Bedrock\n",
+ "By\n",
+ "pagezyhf\n",
+ "December 9, 2024\n",
+ "•\n",
+ "8\n",
+ "Open Preference Dataset for Text-to-Image Generation by the 🤗 Community\n",
+ "By\n",
+ "davidberenstein1957\n",
+ "December 9, 2024\n",
+ "•\n",
+ "47\n",
+ "Welcome PaliGemma 2 – New vision language models by Google\n",
+ "By\n",
+ "merve\n",
+ "December 5, 2024\n",
+ "•\n",
+ "117\n",
+ "“How good are LLMs at fixing their mistakes? A chatbot arena experiment with Keras and TPUs\n",
+ "By\n",
+ "martin-gorner\n",
+ "December 5, 2024\n",
+ "•\n",
+ "12\n",
+ "Rethinking LLM Evaluation with 3C3H: AraGen Benchmark and Leaderboard\n",
+ "By\n",
+ "alielfilali01\n",
+ "December 4, 2024\n",
+ "guest\n",
+ "•\n",
+ "26\n",
+ "Investing in Performance: Fine-tune small models with LLM insights - a CFM case study\n",
+ "By\n",
+ "oahouzi\n",
+ "December 3, 2024\n",
+ "•\n",
+ "25\n",
+ "Rearchitecting Hugging Face Uploads and Downloads\n",
+ "By\n",
+ "port8080\n",
+ "November 26, 2024\n",
+ "•\n",
+ "37\n",
+ "SmolVLM - small yet mighty Vision Language Model\n",
+ "By\n",
+ "andito\n",
+ "November 26, 2024\n",
+ "•\n",
+ "142\n",
+ "Previous\n",
+ "1\n",
+ "2\n",
+ "3\n",
+ "...\n",
+ "36\n",
+ "Next\n",
+ "Community Articles\n",
+ "view all\n",
+ "20+ Free and Paid Digital Marketing Strategies to Automate Repetitive Tasks\n",
+ "By\n",
+ "Markets\n",
+ "•\n",
+ "about 3 hours ago\n",
+ "•\n",
+ "1\n",
+ "🧠 Tags generation dataset\n",
+ "By\n",
+ "zino36\n",
+ "•\n",
+ "about 16 hours ago\n",
+ "•\n",
+ "1\n",
+ "AI Agents in Action: Managing GitHub Issues with KaibanJS\n",
+ "By\n",
+ "darielnoel\n",
+ "•\n",
+ "1 day ago\n",
+ "**Intelligence Potentiation: An Evolutionary Perspective on AI Agent Designs**\n",
+ "By\n",
+ "KnutJaegersberg\n",
+ "•\n",
+ "1 day ago\n",
+ "•\n",
+ "3\n",
+ "MINERVA: A Multi-Agent LLM System for Digital Scam Protection\n",
+ "By\n",
+ "dcarpintero\n",
+ "•\n",
+ "2 days ago\n",
+ "Mastering Iterative Prompting for Optimized AI Code Generation\n",
+ "By\n",
+ "luigi12345\n",
+ "•\n",
+ "3 days ago\n",
+ "•\n",
+ "1\n",
+ "SILMA RAGQA V1.0: A Comprehensive Benchmark for Evaluating LLMs on RAG QA Use-Cases\n",
+ "By\n",
+ "karimouda\n",
+ "•\n",
+ "3 days ago\n",
+ "•\n",
+ "1\n",
+ "FuseChat-3.0: Preference Optimization for Implicit Model Fusion\n",
+ "By\n",
+ "Wanfq\n",
+ "•\n",
+ "3 days ago\n",
+ "•\n",
+ "2\n",
+ "Tutorial: Quantizing Llama 3+ Models for Efficient Deployment\n",
+ "By\n",
+ "theeseus-ai\n",
+ "•\n",
+ "6 days ago\n",
+ "•\n",
+ "3\n",
+ "How to Expand Your AI Music Generations of 30 Seconds to Several Minutes\n",
+ "By\n",
+ "theeseus-ai\n",
+ "•\n",
+ "8 days ago\n",
+ "•\n",
+ "1\n",
+ "🇪🇺✍️ EU AI Act: Systemic Risks in the First CoP Draft Comments ✍️🇪🇺\n",
+ "By\n",
+ "yjernite\n",
+ "•\n",
+ "9 days ago\n",
+ "•\n",
+ "11\n",
+ "Building an AI-powered search engine from scratch\n",
+ "By\n",
+ "as-cle-bert\n",
+ "•\n",
+ "10 days ago\n",
+ "•\n",
+ "8\n",
+ "MotionLCM-V2: Improved Compression Rate for Multi-Latent-Token Diffusion\n",
+ "By\n",
+ "wxDai\n",
+ "•\n",
+ "10 days ago\n",
+ "•\n",
+ "12\n",
+ "RLHF 101: A Technical Dive into RLHF\n",
+ "By\n",
+ "GitBag\n",
+ "•\n",
+ "10 days ago\n",
+ "•\n",
+ "4\n",
+ "[Talk Arena](https://talkarena.org)\n",
+ "By\n",
+ "WillHeld\n",
+ "•\n",
+ "11 days ago\n",
+ "•\n",
+ "1\n",
+ "Multimodal RAG with Colpali, Milvus and VLMs\n",
+ "By\n",
+ "saumitras\n",
+ "•\n",
+ "11 days ago\n",
+ "•\n",
+ "2\n",
+ "In Honour of This Year's NeurIPs Test of Time Paper Awardees\n",
+ "By\n",
+ "Jaward\n",
+ "•\n",
+ "11 days ago\n",
+ "•\n",
+ "2\n",
+ "Power steering: Squeeze massive power from small LLMs\n",
+ "By\n",
+ "ucheog\n",
+ "•\n",
+ "12 days ago\n",
+ "•\n",
+ "4\n",
+ "Exploring the Power of KaibanJS v0.11.0 🚀\n",
+ "By\n",
+ "darielnoel\n",
+ "•\n",
+ "12 days ago\n",
+ "•\n",
+ "1\n",
+ "**Building a Custom Retrieval System with Motoko and Node.js**\n",
+ "By\n",
+ "theeseus-ai\n",
+ "•\n",
+ "12 days ago\n",
+ "•\n",
+ "1\n",
+ "System theme\n",
+ "Company\n",
+ "TOS\n",
+ "Privacy\n",
+ "About\n",
+ "Jobs\n",
+ "Website\n",
+ "Models\n",
+ "Datasets\n",
+ "Spaces\n",
+ "Pricing\n",
+ "Docs\n",
+ "\n",
+ "\n",
+ "\n",
+ "company page\n",
+ "Webpage Title:\n",
+ "huggingface (Hugging Face)\n",
+ "Webpage Contents:\n",
+ "Hugging Face\n",
+ "Models\n",
+ "Datasets\n",
+ "Spaces\n",
+ "Posts\n",
+ "Docs\n",
+ "Enterprise\n",
+ "Pricing\n",
+ "Log In\n",
+ "Sign Up\n",
+ "Hugging Face\n",
+ "Enterprise\n",
+ "company\n",
+ "Verified\n",
+ "https://huggingface.co\n",
+ "huggingface\n",
+ "huggingface\n",
+ "Activity Feed\n",
+ "Follow\n",
+ "8,542\n",
+ "AI & ML interests\n",
+ "The AI community building the future.\n",
+ "Recent Activity\n",
+ "IAMJB\n",
+ "updated\n",
+ "a dataset\n",
+ "9 minutes ago\n",
+ "huggingface/community-science-paper-v2\n",
+ "IAMJB\n",
+ "updated\n",
+ "a dataset\n",
+ "about 6 hours ago\n",
+ "huggingface/paper-central-data\n",
+ "fdaudens\n",
+ "updated\n",
+ "a Space\n",
+ "about 19 hours ago\n",
+ "huggingface/open-source-ai-year-in-review-2024\n",
+ "View all activity\n",
+ "Team members\n",
+ "224\n",
+ "+190\n",
+ "+177\n",
+ "+156\n",
+ "+146\n",
+ "+126\n",
+ "Organization Card\n",
+ "Community\n",
+ "About org cards\n",
+ "👋 Hi!\n",
+ "We are on a mission to democratize\n",
+ "good\n",
+ "machine learning, one commit at a time.\n",
+ "If that sounds like something you should be doing, why don't you\n",
+ "join us\n",
+ "!\n",
+ "For press enquiries, you can\n",
+ "✉️ contact our team here\n",
+ ".\n",
+ "Collections\n",
+ "1\n",
+ "DistilBERT release\n",
+ "Original DistilBERT model, checkpoints obtained from using teacher-student learning from the original BERT checkpoints.\n",
+ "distilbert/distilbert-base-cased\n",
+ "Fill-Mask\n",
+ "•\n",
+ "Updated\n",
+ "May 6\n",
+ "•\n",
+ "358k\n",
+ "•\n",
+ "35\n",
+ "distilbert/distilbert-base-uncased\n",
+ "Fill-Mask\n",
+ "•\n",
+ "Updated\n",
+ "May 6\n",
+ "•\n",
+ "14.8M\n",
+ "•\n",
+ "577\n",
+ "distilbert/distilbert-base-multilingual-cased\n",
+ "Fill-Mask\n",
+ "•\n",
+ "Updated\n",
+ "May 6\n",
+ "•\n",
+ "472k\n",
+ "•\n",
+ "148\n",
+ "distilbert/distilbert-base-uncased-finetuned-sst-2-english\n",
+ "Text Classification\n",
+ "•\n",
+ "Updated\n",
+ "Dec 19, 2023\n",
+ "•\n",
+ "6.96M\n",
+ "•\n",
+ "•\n",
+ "645\n",
+ "spaces\n",
+ "23\n",
+ "Sort: \n",
+ "\t\tRecently updated\n",
+ "pinned\n",
+ "Running\n",
+ "52\n",
+ "📈\n",
+ "Number Tokenization Blog\n",
+ "Running\n",
+ "395\n",
+ "😻\n",
+ "Open Source Ai Year In Review 2024\n",
+ "What happened in open-source AI this year, and what’s next?\n",
+ "Running\n",
+ "42\n",
+ "🔋\n",
+ "Inference Playground\n",
+ "Running\n",
+ "196\n",
+ "⚡\n",
+ "paper-central\n",
+ "Running\n",
+ "on\n",
+ "TPU v5e\n",
+ "6\n",
+ "💬\n",
+ "Keras Chatbot Battle\n",
+ "Running\n",
+ "101\n",
+ "⚡\n",
+ "Modelcard Creator\n",
+ "Expand 23\n",
+ "\t\t\t\t\t\t\tspaces\n",
+ "models\n",
+ "18\n",
+ "Sort: \n",
+ "\t\tRecently updated\n",
+ "huggingface/test-gating-group-2\n",
+ "Updated\n",
+ "4 days ago\n",
+ "huggingface/test-gating-group-1\n",
+ "Updated\n",
+ "4 days ago\n",
+ "huggingface/timesfm-tourism-monthly\n",
+ "Updated\n",
+ "12 days ago\n",
+ "•\n",
+ "29\n",
+ "•\n",
+ "1\n",
+ "huggingface/CodeBERTa-language-id\n",
+ "Text Classification\n",
+ "•\n",
+ "Updated\n",
+ "Mar 29\n",
+ "•\n",
+ "1.14k\n",
+ "•\n",
+ "54\n",
+ "huggingface/falcon-40b-gptq\n",
+ "Text Generation\n",
+ "•\n",
+ "Updated\n",
+ "Jun 14, 2023\n",
+ "•\n",
+ "19\n",
+ "•\n",
+ "12\n",
+ "huggingface/autoformer-tourism-monthly\n",
+ "Updated\n",
+ "May 24, 2023\n",
+ "•\n",
+ "1.5k\n",
+ "•\n",
+ "9\n",
+ "huggingface/distilbert-base-uncased-finetuned-mnli\n",
+ "Text Classification\n",
+ "•\n",
+ "Updated\n",
+ "Mar 22, 2023\n",
+ "•\n",
+ "1.37k\n",
+ "•\n",
+ "2\n",
+ "huggingface/informer-tourism-monthly\n",
+ "Updated\n",
+ "Feb 24, 2023\n",
+ "•\n",
+ "1.12k\n",
+ "•\n",
+ "5\n",
+ "huggingface/time-series-transformer-tourism-monthly\n",
+ "Updated\n",
+ "Feb 23, 2023\n",
+ "•\n",
+ "2.16k\n",
+ "•\n",
+ "18\n",
+ "huggingface/the-no-branch-repo\n",
+ "Text-to-Image\n",
+ "•\n",
+ "Updated\n",
+ "Feb 10, 2023\n",
+ "•\n",
+ "7\n",
+ "•\n",
+ "3\n",
+ "Expand 18\n",
+ "\t\t\t\t\t\t\tmodels\n",
+ "datasets\n",
+ "31\n",
+ "Sort: \n",
+ "\t\tRecently updated\n",
+ "huggingface/community-science-paper-v2\n",
+ "Viewer\n",
+ "•\n",
+ "Updated\n",
+ "9 minutes ago\n",
+ "•\n",
+ "5.03k\n",
+ "•\n",
+ "404\n",
+ "•\n",
+ "7\n",
+ "huggingface/paper-central-data\n",
+ "Viewer\n",
+ "•\n",
+ "Updated\n",
+ "about 6 hours ago\n",
+ "•\n",
+ "119k\n",
+ "•\n",
+ "553\n",
+ "•\n",
+ "8\n",
+ "huggingface/documentation-images\n",
+ "Viewer\n",
+ "•\n",
+ "Updated\n",
+ "1 day ago\n",
+ "•\n",
+ "44\n",
+ "•\n",
+ "2.43M\n",
+ "•\n",
+ "43\n",
+ "huggingface/transformers-metadata\n",
+ "Viewer\n",
+ "•\n",
+ "Updated\n",
+ "2 days ago\n",
+ "•\n",
+ "1.52k\n",
+ "•\n",
+ "559\n",
+ "•\n",
+ "14\n",
+ "huggingface/diffusers-metadata\n",
+ "Viewer\n",
+ "•\n",
+ "Updated\n",
+ "2 days ago\n",
+ "•\n",
+ "62\n",
+ "•\n",
+ "442\n",
+ "•\n",
+ "4\n",
+ "huggingface/policy-docs\n",
+ "Updated\n",
+ "3 days ago\n",
+ "•\n",
+ "898\n",
+ "•\n",
+ "6\n",
+ "huggingface/my-distiset-3f5a230e\n",
+ "Updated\n",
+ "30 days ago\n",
+ "•\n",
+ "17\n",
+ "huggingface/cookbook-images\n",
+ "Viewer\n",
+ "•\n",
+ "Updated\n",
+ "Nov 14\n",
+ "•\n",
+ "1\n",
+ "•\n",
+ "40.1k\n",
+ "•\n",
+ "6\n",
+ "huggingface/vllm-metadata\n",
+ "Updated\n",
+ "Oct 8\n",
+ "•\n",
+ "12\n",
+ "huggingface/paper-central-data-2\n",
+ "Viewer\n",
+ "•\n",
+ "Updated\n",
+ "Oct 4\n",
+ "•\n",
+ "58.3k\n",
+ "•\n",
+ "68\n",
+ "•\n",
+ "2\n",
+ "Expand 31\n",
+ "\t\t\t\t\t\t\tdatasets\n",
+ "System theme\n",
+ "Company\n",
+ "TOS\n",
+ "Privacy\n",
+ "About\n",
+ "Jobs\n",
+ "Website\n",
+ "Models\n",
+ "Datasets\n",
+ "Spaces\n",
+ "Pricing\n",
+ "Docs\n",
+ "\n",
+ "\n",
+ "\n",
+ "community discussions\n",
+ "Webpage Title:\n",
+ "Hugging Face Forums - Hugging Face Community Discussion\n",
+ "Webpage Contents:\n",
+ "Loading\n",
+ "Hugging Face Forums\n",
+ "Topic\n",
+ "Replies\n",
+ "Views\n",
+ "Activity\n",
+ "List of `size_categories`\n",
+ "🤗Datasets\n",
+ "3\n",
+ "5\n",
+ "December 21, 2024\n",
+ "Feature request - maintain list of favorite hf pages reachable from my hom epage\n",
+ "Site Feedback\n",
+ "4\n",
+ "886\n",
+ "December 21, 2024\n",
+ "404 error on carbon emission calculation\n",
+ "Site Feedback\n",
+ "1\n",
+ "7\n",
+ "December 21, 2024\n",
+ "Cannot connect gRPC Server Hosted on HuggingFace Spaces\n",
+ "Spaces\n",
+ "0\n",
+ "8\n",
+ "December 21, 2024\n",
+ "Hide system prompt or system instruction\n",
+ "Beginners\n",
+ "3\n",
+ "15\n",
+ "December 21, 2024\n",
+ "ModuleNotFoundError: No module named 'huggingface_hub.inference._types'\n",
+ "🤗Hub\n",
+ "0\n",
+ "5\n",
+ "December 21, 2024\n",
+ "Understanding State Management with Gradio and LangGraph\n",
+ "Beginners\n",
+ "1\n",
+ "11\n",
+ "December 21, 2024\n",
+ "Dimension problem\n",
+ "Beginners\n",
+ "25\n",
+ "21\n",
+ "December 21, 2024\n",
+ "Fine-tuning whisper on sound-event-detection dataset\n",
+ "🤗Transformers\n",
+ "0\n",
+ "4\n",
+ "December 20, 2024\n",
+ "Model that can generate both text and image as output\n",
+ "Research\n",
+ "4\n",
+ "42\n",
+ "December 21, 2024\n",
+ "Lm studio and Chat ui doesn't work with module\n",
+ "Beginners\n",
+ "11\n",
+ "33\n",
+ "December 21, 2024\n",
+ "Inference API Context Window and TOS\n",
+ "Beginners\n",
+ "0\n",
+ "12\n",
+ "December 20, 2024\n",
+ "Talkie AI got remove from app store -any alternative ai chat?\n",
+ "Beginners\n",
+ "4\n",
+ "1151\n",
+ "December 18, 2024\n",
+ "Inference Text Generation API issue\n",
+ "Intermediate\n",
+ "0\n",
+ "7\n",
+ "December 20, 2024\n",
+ "From Pandas Dataframe to Huggingface Dataset\n",
+ "Beginners\n",
+ "9\n",
+ "60459\n",
+ "December 20, 2024\n",
+ "\"Load Diffusion Model\" and \"Unet Loader (GGUF)\" null/undefined\n",
+ "Beginners\n",
+ "6\n",
+ "200\n",
+ "December 20, 2024\n",
+ "Timeout Issue with DeepSpeed on Multiple GPUs\n",
+ "DeepSpeed\n",
+ "0\n",
+ "8\n",
+ "December 20, 2024\n",
+ "Spaces dedicated gpu limit\n",
+ "Spaces\n",
+ "1\n",
+ "14\n",
+ "December 19, 2024\n",
+ "Chatbot PDF - using flan-t5-large model\n",
+ "Models\n",
+ "0\n",
+ "7\n",
+ "December 20, 2024\n",
+ "Gateway Problem\n",
+ "Beginners\n",
+ "0\n",
+ "8\n",
+ "December 20, 2024\n",
+ "RT-DETR attention map dimension - PekingU/rtdetr_r50vd\n",
+ "Models\n",
+ "0\n",
+ "5\n",
+ "December 20, 2024\n",
+ "Extending the tokenizer affects model generation\n",
+ "Intermediate\n",
+ "3\n",
+ "9\n",
+ "December 19, 2024\n",
+ "How to Ensure Each Process Reads Its Own Dataset and Trains Correctly When Using Trainer?\n",
+ "🤗Transformers\n",
+ "0\n",
+ "5\n",
+ "December 20, 2024\n",
+ "Can't save the tensorflow model of nvidia/mit-b5\n",
+ "Intermediate\n",
+ "3\n",
+ "127\n",
+ "December 19, 2024\n",
+ "# Audio course Unit 4. sample code not working. Can anyone check for me? Thanks\n",
+ "Course\n",
+ "0\n",
+ "6\n",
+ "December 20, 2024\n",
+ "Host Models on Hugging face and Perform Inference on Hugging Face Infrastructure\n",
+ "Beginners\n",
+ "0\n",
+ "6\n",
+ "December 20, 2024\n",
+ "Torchrun, trainer, dataset setup\n",
+ "Intermediate\n",
+ "4\n",
+ "71\n",
+ "December 20, 2024\n",
+ "Training fails on multiple GPUs with RuntimeError 'chuck expects at least a 1-dimensional array'\n",
+ "Beginners\n",
+ "2\n",
+ "108\n",
+ "December 19, 2024\n",
+ "How do you know whether the model is merged and uploaded?\n",
+ "Intermediate\n",
+ "0\n",
+ "11\n",
+ "December 20, 2024\n",
+ "Qwen based AI assistant randomly having an absolute, utter, complete 'mental breakdowns'?? (Inference API)\n",
+ "🤗Transformers\n",
+ "2\n",
+ "23\n",
+ "December 17, 2024\n",
+ "next page →\n",
+ "Home\n",
+ "Categories\n",
+ "Guidelines\n",
+ "Terms of Service\n",
+ "Privacy Policy\n",
+ "Powered by\n",
+ "Discourse\n",
+ ", best viewed with JavaScript enabled\n",
+ "\n",
+ "\n",
+ "\n",
+ "GitHub page\n",
+ "Webpage Title:\n",
+ "Hugging Face · GitHub\n",
+ "Webpage Contents:\n",
+ "Skip to content\n",
+ "Navigation Menu\n",
+ "Toggle navigation\n",
+ "Sign in\n",
+ "huggingface\n",
+ "Product\n",
+ "GitHub Copilot\n",
+ "Write better code with AI\n",
+ "Security\n",
+ "Find and fix vulnerabilities\n",
+ "Actions\n",
+ "Automate any workflow\n",
+ "Codespaces\n",
+ "Instant dev environments\n",
+ "Issues\n",
+ "Plan and track work\n",
+ "Code Review\n",
+ "Manage code changes\n",
+ "Discussions\n",
+ "Collaborate outside of code\n",
+ "Code Search\n",
+ "Find more, search less\n",
+ "Explore\n",
+ "All features\n",
+ "Documentation\n",
+ "GitHub Skills\n",
+ "Blog\n",
+ "Solutions\n",
+ "By company size\n",
+ "Enterprises\n",
+ "Small and medium teams\n",
+ "Startups\n",
+ "By use case\n",
+ "DevSecOps\n",
+ "DevOps\n",
+ "CI/CD\n",
+ "View all use cases\n",
+ "By industry\n",
+ "Healthcare\n",
+ "Financial services\n",
+ "Manufacturing\n",
+ "Government\n",
+ "View all industries\n",
+ "View all solutions\n",
+ "Resources\n",
+ "Topics\n",
+ "AI\n",
+ "DevOps\n",
+ "Security\n",
+ "Software Development\n",
+ "View all\n",
+ "Explore\n",
+ "Learning Pathways\n",
+ "White papers, Ebooks, Webinars\n",
+ "Customer Stories\n",
+ "Partners\n",
+ "Executive Insights\n",
+ "Open Source\n",
+ "GitHub Sponsors\n",
+ "Fund open source developers\n",
+ "The ReadME Project\n",
+ "GitHub community articles\n",
+ "Repositories\n",
+ "Topics\n",
+ "Trending\n",
+ "Collections\n",
+ "Enterprise\n",
+ "Enterprise platform\n",
+ "AI-powered developer platform\n",
+ "Available add-ons\n",
+ "Advanced Security\n",
+ "Enterprise-grade security features\n",
+ "GitHub Copilot\n",
+ "Enterprise-grade AI features\n",
+ "Premium Support\n",
+ "Enterprise-grade 24/7 support\n",
+ "Pricing\n",
+ "Search or jump to...\n",
+ "Search code, repositories, users, issues, pull requests...\n",
+ "Search\n",
+ "Clear\n",
+ "Search syntax tips\n",
+ "Provide feedback\n",
+ "We read every piece of feedback, and take your input very seriously.\n",
+ "Include my email address so I can be contacted\n",
+ "Cancel\n",
+ "Submit feedback\n",
+ "Saved searches\n",
+ "Use saved searches to filter your results more quickly\n",
+ "Cancel\n",
+ "Create saved search\n",
+ "Sign in\n",
+ "Sign up\n",
+ "Reseting focus\n",
+ "You signed in with another tab or window.\n",
+ "Reload\n",
+ "to refresh your session.\n",
+ "You signed out in another tab or window.\n",
+ "Reload\n",
+ "to refresh your session.\n",
+ "You switched accounts on another tab or window.\n",
+ "Reload\n",
+ "to refresh your session.\n",
+ "Dismiss alert\n",
+ "Hugging Face\n",
+ "The AI community building the future.\n",
+ "Verified\n",
+ "We've verified that the organization\n",
+ "huggingface\n",
+ "controls the domain:\n",
+ "huggingface.co\n",
+ "Learn more about verified organizations\n",
+ "40.1k\n",
+ "followers\n",
+ "NYC + Paris\n",
+ "https://huggingface.co/\n",
+ "X\n",
+ "@huggingface\n",
+ "Overview\n",
+ "Repositories\n",
+ "Projects\n",
+ "Packages\n",
+ "People\n",
+ "Sponsoring\n",
+ "0\n",
+ "More\n",
+ "Overview\n",
+ "Repositories\n",
+ "Projects\n",
+ "Packages\n",
+ "People\n",
+ "Sponsoring\n",
+ "Pinned\n",
+ "Loading\n",
+ "transformers\n",
+ "transformers\n",
+ "Public\n",
+ "🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.\n",
+ "Python\n",
+ "137k\n",
+ "27.3k\n",
+ "diffusers\n",
+ "diffusers\n",
+ "Public\n",
+ "🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.\n",
+ "Python\n",
+ "26.7k\n",
+ "5.5k\n",
+ "datasets\n",
+ "datasets\n",
+ "Public\n",
+ "🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools\n",
+ "Python\n",
+ "19.4k\n",
+ "2.7k\n",
+ "peft\n",
+ "peft\n",
+ "Public\n",
+ "🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.\n",
+ "Python\n",
+ "16.8k\n",
+ "1.7k\n",
+ "accelerate\n",
+ "accelerate\n",
+ "Public\n",
+ "🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support\n",
+ "Python\n",
+ "8.1k\n",
+ "995\n",
+ "optimum\n",
+ "optimum\n",
+ "Public\n",
+ "🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization tools\n",
+ "Python\n",
+ "2.6k\n",
+ "486\n",
+ "Repositories\n",
+ "Loading\n",
+ "Type\n",
+ "Select type\n",
+ "Forks\n",
+ "Archived\n",
+ "Mirrors\n",
+ "Templates\n",
+ "Language\n",
+ "Select language\n",
+ "All\n",
+ "C\n",
+ "C#\n",
+ "C++\n",
+ "Cuda\n",
+ "Dockerfile\n",
+ "Go\n",
+ "Handlebars\n",
+ "HTML\n",
+ "Java\n",
+ "JavaScript\n",
+ "Jupyter Notebook\n",
+ "Kotlin\n",
+ "Lua\n",
+ "MDX\n",
+ "Mustache\n",
+ "Nix\n",
+ "Python\n",
+ "Rust\n",
+ "Shell\n",
+ "Smarty\n",
+ "Swift\n",
+ "TypeScript\n",
+ "Sort\n",
+ "Select order\n",
+ "Last updated\n",
+ "Name\n",
+ "Stars\n",
+ "Showing 10 of 275 repositories\n",
+ "trl\n",
+ "Public\n",
+ "Train transformer language models with reinforcement learning.\n",
+ "huggingface/trl’s past year of commit activity\n",
+ "Python\n",
+ "10,382\n",
+ "Apache-2.0\n",
+ "1,337\n",
+ "106\n",
+ "46\n",
+ "Updated\n",
+ "Dec 21, 2024\n",
+ "transformers.js\n",
+ "Public\n",
+ "State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!\n",
+ "huggingface/transformers.js’s past year of commit activity\n",
+ "JavaScript\n",
+ "12,421\n",
+ "Apache-2.0\n",
+ "790\n",
+ "274\n",
+ "(3 issues need help)\n",
+ "48\n",
+ "Updated\n",
+ "Dec 21, 2024\n",
+ "diffusers\n",
+ "Public\n",
+ "🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.\n",
+ "huggingface/diffusers’s past year of commit activity\n",
+ "Python\n",
+ "26,740\n",
+ "Apache-2.0\n",
+ "5,504\n",
+ "379\n",
+ "(10 issues need help)\n",
+ "169\n",
+ "Updated\n",
+ "Dec 21, 2024\n",
+ "text-generation-inference\n",
+ "Public\n",
+ "Large Language Model Text Generation Inference\n",
+ "huggingface/text-generation-inference’s past year of commit activity\n",
+ "Python\n",
+ "9,484\n",
+ "Apache-2.0\n",
+ "1,106\n",
+ "152\n",
+ "21\n",
+ "Updated\n",
+ "Dec 21, 2024\n",
+ "candle\n",
+ "Public\n",
+ "Minimalist ML framework for Rust\n",
+ "huggingface/candle’s past year of commit activity\n",
+ "Rust\n",
+ "16,103\n",
+ "Apache-2.0\n",
+ "980\n",
+ "344\n",
+ "(5 issues need help)\n",
+ "86\n",
+ "Updated\n",
+ "Dec 21, 2024\n",
+ "autotrain-advanced\n",
+ "Public\n",
+ "🤗 AutoTrain Advanced\n",
+ "huggingface/autotrain-advanced’s past year of commit activity\n",
+ "Python\n",
+ "4,157\n",
+ "Apache-2.0\n",
+ "505\n",
+ "16\n",
+ "2\n",
+ "Updated\n",
+ "Dec 21, 2024\n",
+ "transformers\n",
+ "Public\n",
+ "🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.\n",
+ "huggingface/transformers’s past year of commit activity\n",
+ "Python\n",
+ "136,571\n",
+ "Apache-2.0\n",
+ "27,342\n",
+ "1,003\n",
+ "(2 issues need help)\n",
+ "526\n",
+ "Updated\n",
+ "Dec 21, 2024\n",
+ "lighteval\n",
+ "Public\n",
+ "Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends\n",
+ "huggingface/lighteval’s past year of commit activity\n",
+ "Python\n",
+ "889\n",
+ "MIT\n",
+ "109\n",
+ "62\n",
+ "(1 issue needs help)\n",
+ "15\n",
+ "Updated\n",
+ "Dec 21, 2024\n",
+ "hub-docs\n",
+ "Public\n",
+ "Docs of the Hugging Face Hub\n",
+ "huggingface/hub-docs’s past year of commit activity\n",
+ "Handlebars\n",
+ "309\n",
+ "Apache-2.0\n",
+ "259\n",
+ "90\n",
+ "25\n",
+ "Updated\n",
+ "Dec 21, 2024\n",
+ "optimum-habana\n",
+ "Public\n",
+ "Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)\n",
+ "huggingface/optimum-habana’s past year of commit activity\n",
+ "Python\n",
+ "162\n",
+ "Apache-2.0\n",
+ "219\n",
+ "11\n",
+ "(1 issue needs help)\n",
+ "40\n",
+ "Updated\n",
+ "Dec 21, 2024\n",
+ "View all repositories\n",
+ "People\n",
+ "View all\n",
+ "Top languages\n",
+ "Python\n",
+ "Jupyter Notebook\n",
+ "Rust\n",
+ "TypeScript\n",
+ "JavaScript\n",
+ "Most used topics\n",
+ "pytorch\n",
+ "machine-learning\n",
+ "nlp\n",
+ "deep-learning\n",
+ "transformers\n",
+ "Footer\n",
+ "© 2024 GitHub, Inc.\n",
+ "Footer navigation\n",
+ "Terms\n",
+ "Privacy\n",
+ "Security\n",
+ "Status\n",
+ "Docs\n",
+ "Contact\n",
+ "Manage cookies\n",
+ "Do not share my personal information\n",
+ "You can’t perform that action at this time.\n",
+ "\n",
+ "\n",
+ "\n",
+ "Twitter page\n",
+ "Webpage Title:\n",
+ "x.com\n",
+ "Webpage Contents:\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ "LinkedIn page\n",
+ "Webpage Title:\n",
+ "Hugging Face | LinkedIn\n",
+ "Webpage Contents:\n",
+ "Skip to main content\n",
+ "LinkedIn\n",
+ "Articles\n",
+ "People\n",
+ "Learning\n",
+ "Jobs\n",
+ "Games\n",
+ "Get the app\n",
+ "Join now\n",
+ "Sign in\n",
+ "Hugging Face\n",
+ "Software Development\n",
+ "The AI community building the future.\n",
+ "See jobs\n",
+ "Follow\n",
+ "Discover all 472 employees\n",
+ "Report this company\n",
+ "About us\n",
+ "The AI community building the future.\n",
+ "Website\n",
+ "https://huggingface.co\n",
+ "External link for Hugging Face\n",
+ "Industry\n",
+ "Software Development\n",
+ "Company size\n",
+ "51-200 employees\n",
+ "Type\n",
+ "Privately Held\n",
+ "Founded\n",
+ "2016\n",
+ "Specialties\n",
+ "machine learning, natural language processing, and deep learning\n",
+ "Products\n",
+ "Hugging Face\n",
+ "Hugging Face\n",
+ "Natural Language Processing (NLP) Software\n",
+ "We’re on a journey to solve and democratize artificial intelligence through natural language.\n",
+ "Locations\n",
+ "Primary\n",
+ "Get directions\n",
+ "Paris, FR\n",
+ "Get directions\n",
+ "Employees at Hugging Face\n",
+ "Ludovic Huraux\n",
+ "Bassem ASSEH\n",
+ "Rajat Arya\n",
+ "Tech Lead & Software Engineer @ HF | prev: co-founder XetHub, Apple, Turi, AWS, Microsoft\n",
+ "Jeff Boudier\n",
+ "Product + Growth at Hugging Face\n",
+ "See all employees\n",
+ "Updates\n",
+ "Hugging Face\n",
+ "reposted this\n",
+ "Gradio\n",
+ "47,326 followers\n",
+ "7h\n",
+ "Report this post\n",
+ "NOW you can add AI to your Slack, Discord in just a few steps with Gradio! 🤩\n",
+ "\n",
+ "🔥 Create Slack apps, Discord bots, or Intercom-style website widgets in ANY modality (Text, image, Video, Audio, Omni etc)! Keep reading to learn how ⬇️\n",
+ "\n",
+ "Guide: 🚀 Creating a Slack Bot from a Gradio App 🚀\n",
+ "Read here:\n",
+ "https://lnkd.in/g2_Bydrj\n",
+ "🤎 Do you love building stuff with Gradio? Support us on GitHub:\n",
+ "Gradio.dev\n",
+ "…more\n",
+ "50\n",
+ "Like\n",
+ "Comment\n",
+ "Share\n",
+ "Hugging Face\n",
+ "reposted this\n",
+ "Daniel V.\n",
+ "Machine Learning Librarian@🤗 | Championing Open Science & Machine Learning\n",
+ "21h\n",
+ "Report this post\n",
+ "Introducing FineWeb-C 🌐🎓, a community-built dataset for improving language models in ALL languages. \n",
+ "\n",
+ "Inspired by FineWeb-Edu the community is labelling the educational quality of texts for many languages. \n",
+ "\n",
+ "318 annotators, 32K+ annotations, 12 languages - and growing! 🌍\n",
+ "57\n",
+ "2 Comments\n",
+ "Like\n",
+ "Comment\n",
+ "Share\n",
+ "Hugging Face\n",
+ "reposted this\n",
+ "Merve Noyan\n",
+ "open-sourceress at 🤗 | Google Developer Expert in Machine Learning, MSc Candidate in Data Science\n",
+ "22h\n",
+ "Report this post\n",
+ "Fine-tune ColPali for your multimodal RAG use case 🔥\n",
+ "\n",
+ "ColPali just landed to\n",
+ "Hugging Face\n",
+ "transformers and I have built a simple fine-tuning tutorial with QLoRA 🤗\n",
+ "You can fine-tune the model with 32 GB VRAM with batch size of 4 (which can run on Colab A100)\n",
+ "Link in comments 💬\n",
+ "267\n",
+ "4 Comments\n",
+ "Like\n",
+ "Comment\n",
+ "Share\n",
+ "Hugging Face\n",
+ "reposted this\n",
+ "🤖 Avthar Sewrathan\n",
+ "AI and Developer Product Leader | I talk about using AI and building AI apps\n",
+ "1d\n",
+ "Report this post\n",
+ "TIL: You can now load any\n",
+ "Hugging Face\n",
+ "dataset into PostgreSQL with just 1 line of SQL 🤯\n",
+ "\n",
+ "All thanks to the pgai PostgreSQL extension. \n",
+ "\n",
+ "Shoutout to\n",
+ "Matvey Arye\n",
+ "from the\n",
+ "Timescale\n",
+ "AI engineering team for implementing this.\n",
+ "\n",
+ "Learn more about using PostgreSQL with HuggingFace datasets in the HuggingFace docs:\n",
+ "https://lnkd.in/eS4hqSDq\n",
+ "#postgresql\n",
+ "#huggingface\n",
+ "#opensource\n",
+ "180\n",
+ "14 Comments\n",
+ "Like\n",
+ "Comment\n",
+ "Share\n",
+ "Hugging Face\n",
+ "reposted this\n",
+ "Argilla\n",
+ "10,266 followers\n",
+ "1d\n",
+ "Report this post\n",
+ "🎢 Push to Hub: Export your dataset to the Hugging Face Hub directly from the Argilla UI.\n",
+ "\n",
+ "We’re super excited to announce that we've closed the loop: now you can load a dataset from the Hub, open it on\n",
+ "Argilla\n",
+ "UI, label it, and push the annotated dataset to the Hub. All this without a line of code!\n",
+ "\n",
+ "\n",
+ "Why should you use it?\n",
+ "\n",
+ "Your AI project's impact depends heavily on the effort and care you put into your data. This new feature enables you to iterate faster and make annotated data available in the right format for training and evaluation.\n",
+ "\n",
+ "\n",
+ "How does it work?\n",
+ "\n",
+ "1️⃣ Import initial data from a CSV or any format to Hugging Face\n",
+ "2️⃣ Load it into the Argilla UI and configure the annotation task\n",
+ "3️⃣ Annotate your dataset\n",
+ "🚀 Click on “Push to Hub” and share the dataset with your team (or the entire world)\n",
+ "\n",
+ "👉 Ready to try it out?\n",
+ "\n",
+ "Get started here:\n",
+ "https://lnkd.in/dhA-swR5\n",
+ "Release highlights:\n",
+ "https://lnkd.in/dbdQXG-W\n",
+ "35\n",
+ "3 Comments\n",
+ "Like\n",
+ "Comment\n",
+ "Share\n",
+ "Hugging Face\n",
+ "reposted this\n",
+ "Daniel V.\n",
+ "Machine Learning Librarian@🤗 | Championing Open Science & Machine Learning\n",
+ "1d\n",
+ "Report this post\n",
+ "Hot take: shipping BERT-sized models in 2025 will benefit far more people than sharing an LLM overfitted to some saturated leaderboards \n",
+ "\n",
+ "We're already seeing ModernBERT finetunes on the\n",
+ "Hugging Face\n",
+ "Hub. My guess is we'll see hundreds of these by the end of 2025.\n",
+ "80\n",
+ "4 Comments\n",
+ "Like\n",
+ "Comment\n",
+ "Share\n",
+ "Hugging Face\n",
+ "reposted this\n",
+ "Gradio\n",
+ "47,326 followers\n",
+ "1d\n",
+ "Edited\n",
+ "Report this post\n",
+ "🤯🔥 LEARN HOW TO CREATE interactive agentic chatbots using Gradio that are capable of showcasing the Thoughts, Tasks, and interim responses of Multiple Agents as you await the final answer from your AI assistant.\n",
+ "\n",
+ "🎯 Customer Support multi-agents with\n",
+ "CrewAI\n",
+ "and\n",
+ "Gradio\n",
+ "Showcasing here, a user-friendly, high-performing multi-agent gradio app. To operate it, simply enter a webpage URL along with your questions related to that page, and in turn receive a high-quality response from the CrewAI Multi-Agent setup.\n",
+ "\n",
+ "🚀 Access this app on\n",
+ "Hugging Face\n",
+ "Spaces:\n",
+ "https://lnkd.in/g6kXp_D2\n",
+ "…more\n",
+ "72\n",
+ "1 Comment\n",
+ "Like\n",
+ "Comment\n",
+ "Share\n",
+ "Hugging Face\n",
+ "reposted this\n",
+ "Clem Delangue 🤗\n",
+ "Clem Delangue 🤗 is an Influencer\n",
+ "Co-founder & CEO at Hugging Face\n",
+ "2d\n",
+ "Report this post\n",
+ "In the past few months, we've invested a lot of effort in improving the user management features of the Hugging Face hub that more than 5M AI builders are now using. It helps not only for easier organization collaboration but also for security (for example to make sure ex team members don't still have access to private models). \n",
+ "\n",
+ "If your manager, VP AI or admin/CISO is not aware, mention them below so that we can connect if they have any questions or feedback as most of these features are part of the Enterprise hub subscriptions:\n",
+ "https://lnkd.in/e-RY-3vs\n",
+ ")\n",
+ "\n",
+ "Cheers!\n",
+ "47\n",
+ "3 Comments\n",
+ "Like\n",
+ "Comment\n",
+ "Share\n",
+ "Hugging Face\n",
+ "reposted this\n",
+ "Clem Delangue 🤗\n",
+ "Clem Delangue 🤗 is an Influencer\n",
+ "Co-founder & CEO at Hugging Face\n",
+ "4d\n",
+ "Report this post\n",
+ "Just 10 days after o1's public debut, we’re thrilled to unveil the open-source version of the groundbreaking technique behind its success: scaling test-time compute 🧠💡 \n",
+ "\n",
+ "By giving models more \"time to think,\" Llama 1B outperforms Llama 8B in math, beating a model 8x its size. The full recipe is open-source 🤯 \n",
+ "\n",
+ "This is the power of open science and open-source AI! 🌍✨\n",
+ "5,292\n",
+ "125 Comments\n",
+ "Like\n",
+ "Comment\n",
+ "Share\n",
+ "Hugging Face\n",
+ "reposted this\n",
+ "Philipp Schmid\n",
+ "Technical Lead & LLMs at Hugging Face 🤗 | AWS ML HERO 🦸🏻‍♂️\n",
+ "1d\n",
+ "Report this post\n",
+ "ModernBERT, BERT revisited in the age of LLMs and Generative AI!\n",
+ "LightOn\n",
+ "and\n",
+ "Answer.ai\n",
+ "modernized BERT! Improved architecture with 8192 context length, flash attention, and trained on 2T tokens. ModernBERT outperforms BERT and RoBERTa versions! 👀\n",
+ "\n",
+ "TL;DR:\n",
+ "2️⃣ Comes in 2 sizes base (139M) and large (395M)\n",
+ "🚀 Better performance across all metrics than the original BERT\n",
+ "📏 8,192 token context length (16x longer than BERT)\n",
+ "⚡ Modern architecture with Flash Attention 2, RoPE embeddings, and alternating attention\n",
+ "📚 Trained on 2 trillion tokens, primarily English and Code\n",
+ "💨 2-4x faster than other models with mixed-length inputs\n",
+ "🔒 Released under Apache 2.0\n",
+ "🤗 Available on\n",
+ "Hugging Face\n",
+ "and Transformers (main)\n",
+ "\n",
+ "Models:\n",
+ "https://lnkd.in/ethiJ2xh\n",
+ "Blog:\n",
+ "https://lnkd.in/ebiEzb4P\n",
+ "Paper:\n",
+ "https://lnkd.in/ezR8MUBF\n",
+ "1,844\n",
+ "67 Comments\n",
+ "Like\n",
+ "Comment\n",
+ "Share\n",
+ "Join now to see what you are missing\n",
+ "Find people you know at Hugging Face\n",
+ "Browse recommended jobs for you\n",
+ "View all updates, news, and articles\n",
+ "Join now\n",
+ "Similar pages\n",
+ "Anthropic\n",
+ "Research Services\n",
+ "Mistral AI\n",
+ "Technology, Information and Internet\n",
+ "Paris, France\n",
+ "OpenAI\n",
+ "Research Services\n",
+ "San Francisco, CA\n",
+ "LangChain\n",
+ "Technology, Information and Internet\n",
+ "Perplexity\n",
+ "Software Development\n",
+ "San Francisco, California\n",
+ "Generative AI\n",
+ "Technology, Information and Internet\n",
+ "Google DeepMind\n",
+ "Research Services\n",
+ "London, London\n",
+ "LlamaIndex\n",
+ "Technology, Information and Internet\n",
+ "San Francisco, California\n",
+ "DeepLearning.AI\n",
+ "Software Development\n",
+ "Palo Alto, California\n",
+ "Cohere\n",
+ "Software Development\n",
+ "Toronto, Ontario\n",
+ "Show more similar pages\n",
+ "Show fewer similar pages\n",
+ "Browse jobs\n",
+ "Engineer jobs\n",
+ "555,845 open jobs\n",
+ "Machine Learning Engineer jobs\n",
+ "148,937 open jobs\n",
+ "Scientist jobs\n",
+ "48,969 open jobs\n",
+ "Software Engineer jobs\n",
+ "300,699 open jobs\n",
+ "Intern jobs\n",
+ "71,196 open jobs\n",
+ "Developer jobs\n",
+ "258,935 open jobs\n",
+ "Analyst jobs\n",
+ "694,057 open jobs\n",
+ "Intelligence Specialist jobs\n",
+ "7,156 open jobs\n",
+ "Manager jobs\n",
+ "1,880,925 open jobs\n",
+ "Data Scientist jobs\n",
+ "264,158 open jobs\n",
+ "Director jobs\n",
+ "1,220,357 open jobs\n",
+ "Associate jobs\n",
+ "1,091,945 open jobs\n",
+ "Python Developer jobs\n",
+ "46,642 open jobs\n",
+ "Evangelist jobs\n",
+ "5,068 open jobs\n",
+ "Data Engineer jobs\n",
+ "192,126 open jobs\n",
+ "Vice President jobs\n",
+ "235,270 open jobs\n",
+ "Quantitative Analyst jobs\n",
+ "19,570 open jobs\n",
+ "Program Manager jobs\n",
+ "243,900 open jobs\n",
+ "Data Science Specialist jobs\n",
+ "2,441 open jobs\n",
+ "Lead Software Engineer jobs\n",
+ "68,215 open jobs\n",
+ "Show more jobs like this\n",
+ "Show fewer jobs like this\n",
+ "Funding\n",
+ "Hugging Face\n",
+ "7 total rounds\n",
+ "Last Round\n",
+ "Series D\n",
+ "Feb 16, 2024\n",
+ "External Crunchbase Link for last round of funding\n",
+ "See more info on\n",
+ "crunchbase\n",
+ "More searches\n",
+ "More searches\n",
+ "Engineer jobs\n",
+ "Intern jobs\n",
+ "Machine Learning Engineer jobs\n",
+ "Software Engineer jobs\n",
+ "Scientist jobs\n",
+ "Developer jobs\n",
+ "Research Intern jobs\n",
+ "Analyst jobs\n",
+ "Intelligence Specialist jobs\n",
+ "Quantitative Analyst jobs\n",
+ "Technician jobs\n",
+ "Data Science Specialist jobs\n",
+ "Project Manager jobs\n",
+ "Summer Intern jobs\n",
+ "Manager jobs\n",
+ "Senior Staff Engineer jobs\n",
+ "PHD jobs\n",
+ "Trader jobs\n",
+ "Researcher jobs\n",
+ "Data Scientist jobs\n",
+ "Writer jobs\n",
+ "Data Analyst jobs\n",
+ "Product Designer jobs\n",
+ "Back End Developer jobs\n",
+ "Spring Intern jobs\n",
+ "Program Manager jobs\n",
+ "Technology Officer jobs\n",
+ "Software Intern jobs\n",
+ "Security Professional jobs\n",
+ "Senior Software Engineer jobs\n",
+ "Python Developer jobs\n",
+ "Engineering Manager jobs\n",
+ "Web Developer jobs\n",
+ "Graduate jobs\n",
+ "Full Stack Engineer jobs\n",
+ "Professor jobs\n",
+ "Head jobs\n",
+ "Verification Manager jobs\n",
+ "User Experience Designer jobs\n",
+ "Recruiter jobs\n",
+ "Chief Executive Officer jobs\n",
+ "Associate jobs\n",
+ "Support Developer jobs\n",
+ "Senior Firmware Engineer jobs\n",
+ "Marketing Manager jobs\n",
+ "Modeling Engineer jobs\n",
+ "Designer jobs\n",
+ "Automation Lead jobs\n",
+ "Options Trader jobs\n",
+ "Agile Coach jobs\n",
+ "Research Engineer jobs\n",
+ "Software Quality Assurance Analyst jobs\n",
+ "User Experience Manager jobs\n",
+ "Technical Intern jobs\n",
+ "Junior Network Engineer jobs\n",
+ "Information Technology Recruiter jobs\n",
+ "User Researcher jobs\n",
+ "Player jobs\n",
+ "Engineering Project Manager jobs\n",
+ "Digital Strategist jobs\n",
+ "LinkedIn\n",
+ "© 2024\n",
+ "About\n",
+ "Accessibility\n",
+ "User Agreement\n",
+ "Privacy Policy\n",
+ "Cookie Policy\n",
+ "Copyright Policy\n",
+ "Brand Policy\n",
+ "Guest Controls\n",
+ "Community Guidelines\n",
+ "العربية (Arabic)\n",
+ "বাংলা (Bangla)\n",
+ "Čeština (Czech)\n",
+ "Dansk (Danish)\n",
+ "Deutsch (German)\n",
+ "Ελληνικά (Greek)\n",
+ "English (English)\n",
+ "Español (Spanish)\n",
+ "فارسی (Persian)\n",
+ "Suomi (Finnish)\n",
+ "Français (French)\n",
+ "हिंदी (Hindi)\n",
+ "Magyar (Hungarian)\n",
+ "Bahasa Indonesia (Indonesian)\n",
+ "Italiano (Italian)\n",
+ "עברית (Hebrew)\n",
+ "日本語 (Japanese)\n",
+ "한국어 (Korean)\n",
+ "मराठी (Marathi)\n",
+ "Bahasa Malaysia (Malay)\n",
+ "Nederlands (Dutch)\n",
+ "Norsk (Norwegian)\n",
+ "ਪੰਜਾਬੀ (Punjabi)\n",
+ "Polski (Polish)\n",
+ "Português (Portuguese)\n",
+ "Română (Romanian)\n",
+ "Русский (Russian)\n",
+ "Svenska (Swedish)\n",
+ "తెలుగు (Telugu)\n",
+ "ภาษาไทย (Thai)\n",
+ "Tagalog (Tagalog)\n",
+ "Türkçe (Turkish)\n",
+ "Українська (Ukrainian)\n",
+ "Tiếng Việt (Vietnamese)\n",
+ "简体中文 (Chinese (Simplified))\n",
+ "正體中文 (Chinese (Traditional))\n",
+ "Language\n",
+ "Agree & Join LinkedIn\n",
+ "By clicking Continue to join or sign in, you agree to LinkedIn’s\n",
+ "User Agreement\n",
+ ",\n",
+ "Privacy Policy\n",
+ ", and\n",
+ "Cookie Policy\n",
+ ".\n",
+ "Sign in to see who you already know at Hugging Face\n",
+ "Sign in\n",
+ "Welcome back\n",
+ "Email or phone\n",
+ "Password\n",
+ "Show\n",
+ "Forgot password?\n",
+ "Sign in\n",
+ "or\n",
+ "By clicking Continue to join or sign in, you agree to LinkedIn’s\n",
+ "User Agreement\n",
+ ",\n",
+ "Privacy Policy\n",
+ ", and\n",
+ "Cookie Policy\n",
+ ".\n",
+ "New to LinkedIn?\n",
+ "Join now\n",
+ "or\n",
+ "New to LinkedIn?\n",
+ "Join now\n",
+ "By clicking Continue to join or sign in, you agree to LinkedIn’s\n",
+ "User Agreement\n",
+ ",\n",
+ "Privacy Policy\n",
+ ", and\n",
+ "Cookie Policy\n",
+ ".\n",
+ "LinkedIn\n",
+ "LinkedIn is better on the app\n",
+ "Don’t have the app? Get it in the Microsoft Store.\n",
+ "Open the app\n",
+ "\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(get_all_details(\"https://huggingface.co\"))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "9b863a55-f86c-4e3f-8a79-94e24c1a8cf2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
+ "and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown. \\\n",
+ "Include details of company culture, customers and careers/jobs if you have the information.\"\n",
+ "\n",
+ "# Or uncomment the lines below for a more humorous brochure - this demonstrates how easy it is to incorporate 'tone':\n",
+ "\n",
+ "# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
+ "# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown. \\\n",
+ "# Include details of company culture, customers and careers/jobs if you have the information.\"\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "id": "6ab83d92-d36b-4ce0-8bcc-5bb4c2f8ff23",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def get_brochure_user_prompt(company_name, url):\n",
+ " user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n",
+ " user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n",
+ " user_prompt += get_all_details(url)\n",
+ " user_prompt = user_prompt[:5_000] # Truncate if more than 5,000 characters\n",
+ " return user_prompt"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "id": "cd909e0b-1312-4ce2-a553-821e795d7572",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.co/'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'blog', 'url': 'https://huggingface.co/blog'}, {'type': 'company page', 'url': 'https://huggingface.co/enterprise'}]}\n",
+ "You are looking at a company called: HuggingFace\n",
+ "Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\n",
+ "Landing page:\n",
+ "Webpage Title:\n",
+ "Hugging Face – The AI community building the future.\n",
+ "Webpage Contents:\n",
+ "Hugging Face\n",
+ "Models\n",
+ "Datasets\n",
+ "Spaces\n",
+ "Posts\n",
+ "Docs\n",
+ "Enterprise\n",
+ "Pricing\n",
+ "Log In\n",
+ "Sign Up\n",
+ "The AI community building the future.\n",
+ "The platform where the machine learning community collaborates on models, datasets, and applications.\n",
+ "Trending on\n",
+ "this week\n",
+ "Models\n",
+ "IamCreateAI/Ruyi-Mini-7B\n",
+ "Updated\n",
+ "4 days ago\n",
+ "•\n",
+ "8.17k\n",
+ "•\n",
+ "352\n",
+ "Datou1111/shou_xin\n",
+ "Updated\n",
+ "12 days ago\n",
+ "•\n",
+ "28.3k\n",
+ "•\n",
+ "672\n",
+ "answerdotai/ModernBERT-base\n",
+ "Updated\n",
+ "1 day ago\n",
+ "•\n",
+ "6.24k\n",
+ "•\n",
+ "236\n",
+ "meta-llama/Llama-3.3-70B-Instruct\n",
+ "Updated\n",
+ "11 days ago\n",
+ "•\n",
+ "236k\n",
+ "•\n",
+ "1.21k\n",
+ "tencent/HunyuanVideo\n",
+ "Updated\n",
+ "3 days ago\n",
+ "•\n",
+ "6.01k\n",
+ "•\n",
+ "1.2k\n",
+ "Browse 400k+ models\n",
+ "Spaces\n",
+ "Running\n",
+ "on\n",
+ "Zero\n",
+ "1.79k\n",
+ "🏢\n",
+ "TRELLIS\n",
+ "Scalable and Versatile 3D Generation from images\n",
+ "Running\n",
+ "306\n",
+ "📝\n",
+ "Scaling test-time compute\n",
+ "Running\n",
+ "on\n",
+ "Zero\n",
+ "470\n",
+ "🚀\n",
+ "Flux Style Shaping\n",
+ "Optical illusions and style transfer with FLUX\n",
+ "Running\n",
+ "on\n",
+ "CPU Upgrade\n",
+ "6.11k\n",
+ "👕\n",
+ "Kolors Virtual Try-On\n",
+ "Running\n",
+ "on\n",
+ "Zero\n",
+ "965\n",
+ "📈\n",
+ "IC Light V2\n",
+ "Browse 150k+ applications\n",
+ "Datasets\n",
+ "fka/awesome-chatgpt-prompts\n",
+ "Updated\n",
+ "Sep 3\n",
+ "•\n",
+ "6.83k\n",
+ "•\n",
+ "6.58k\n",
+ "O1-OPEN/OpenO1-SFT\n",
+ "Updated\n",
+ "4 days ago\n",
+ "•\n",
+ "1.86k\n",
+ "•\n",
+ "234\n",
+ "HuggingFaceFW/fineweb-2\n",
+ "Updated\n",
+ "13 days ago\n",
+ "•\n",
+ "77.7k\n",
+ "•\n",
+ "342\n",
+ "HuggingFaceTB/finemath\n",
+ "Updated\n",
+ "1 day ago\n",
+ "•\n",
+ "1.86k\n",
+ "•\n",
+ "43\n",
+ "amphora/QwQ-LongCoT-130K\n",
+ "Updated\n",
+ "16 days ago\n",
+ "•\n",
+ "1.34k\n",
+ "•\n",
+ "85\n",
+ "Browse 100k+ datasets\n",
+ "The Home of Machine Learning\n",
+ "Create, discover and collaborate on ML better.\n",
+ "The collaboration platform\n",
+ "Host and collaborate on unlimited public models, datasets and applications.\n",
+ "Move faster\n",
+ "With the HF Open source stack.\n",
+ "Explore all modalities\n",
+ "Text, image, video, audio or even 3D.\n",
+ "Build your portfolio\n",
+ "Share your work with the world and build your ML profile.\n",
+ "Sign Up\n",
+ "Accelerate your ML\n",
+ "We provide paid Compute and Enterprise solutions.\n",
+ "Compute\n",
+ "Deploy on optimized\n",
+ "Inference Endpoints\n",
+ "or update your\n",
+ "Spaces applications\n",
+ "to a GPU in a few clicks.\n",
+ "View pricing\n",
+ "Starting at $0.60/hour for GPU\n",
+ "Enterprise\n",
+ "Give your team the most advanced platform to build AI with enterprise-grade security, access controls and\n",
+ "\t\t\tdedicated support.\n",
+ "Getting started\n",
+ "Starting at $20/user/month\n",
+ "Single Sign-On\n",
+ "Regions\n",
+ "Priority Support\n",
+ "Audit Logs\n",
+ "Resource Groups\n",
+ "Private Datasets Viewer\n",
+ "More than 50,000 organizations are using Hugging Face\n",
+ "Ai2\n",
+ "Enterprise\n",
+ "non-profit\n",
+ "•\n",
+ "366 models\n",
+ "•\n",
+ "1.76k followers\n",
+ "AI at Meta\n",
+ "Enterprise\n",
+ "company\n",
+ "•\n",
+ "2.05k models\n",
+ "•\n",
+ "3.83k followers\n",
+ "Amazon Web Services\n",
+ "company\n",
+ "•\n",
+ "21 models\n",
+ "•\n",
+ "2.45k followers\n",
+ "Google\n",
+ "company\n",
+ "•\n",
+ "911 models\n",
+ "•\n",
+ "5.76k followers\n",
+ "Intel\n",
+ "company\n",
+ "•\n",
+ "217 models\n",
+ "•\n",
+ "2.07k followers\n",
+ "Microsoft\n",
+ "company\n",
+ "•\n",
+ "351 models\n",
+ "•\n",
+ "6.29k followers\n",
+ "Grammarly\n",
+ "company\n",
+ "•\n",
+ "10 models\n",
+ "•\n",
+ "102 followers\n",
+ "Writer\n",
+ "Enterprise\n",
+ "company\n",
+ "•\n",
+ "17 models\n",
+ "•\n",
+ "186 followers\n",
+ "Our Open Source\n",
+ "We are building the foundation of ML tooling with the community.\n",
+ "Transformers\n",
+ "136,571\n",
+ "State-of-the-art ML for Pytorch, TensorFlow, and JAX.\n",
+ "Diffusers\n",
+ "26,740\n",
+ "State-of-the-art diffusion models for image and audio generation in PyTorch.\n",
+ "Safetensors\n",
+ "2,960\n",
+ "Simple, safe way to store and distribute neural networks weights safely and quickly.\n",
+ "Hub Python Library\n",
+ "2,177\n",
+ "Client library for the HF Hub: manage repositories from your Python runtime.\n",
+ "Tokenizers\n",
+ "9,165\n",
+ "Fast tokenizers, optimized for both research and production.\n",
+ "PEFT\n",
+ "16,767\n",
+ "Parameter efficient finetuning methods for large models.\n",
+ "Transformers.js\n",
+ "12,421\n",
+ "State-of-the-art Machine Learning for the web. Run Transformers directly in your browser, with no need for a server.\n",
+ "timm\n",
+ "32,668\n",
+ "State-of-the-art computer vision models, layers, optimizers, training/evaluation, and utilities.\n",
+ "TRL\n",
+ "10,382\n",
+ "Train transformer language models with reinforcement learning.\n",
+ "Datasets\n",
+ "19,378\n",
+ "Access and share datasets for computer vision, audio, and NLP tasks.\n",
+ "Text Generation Inference\n",
+ "9,484\n",
+ "Toolkit to serve Large Language Models.\n",
+ "Accelerate\n",
+ "8,082\n",
+ "Easily train and use PyTorch models with multi-GPU, TPU, mixed-precision.\n",
+ "System theme\n",
+ "Website\n",
+ "Models\n",
+ "Datasets\n",
+ "Spaces\n",
+ "Tasks\n",
+ "Inference Endpoints\n",
+ "HuggingChat\n",
+ "Company\n",
+ "About\n",
+ "Brand assets\n",
+ "Terms of service\n",
+ "Privacy\n",
+ "Jobs\n",
+ "Press\n",
+ "Resources\n",
+ "Learn\n",
+ "Documentation\n",
+ "Blog\n",
+ "Forum\n",
+ "Service Status\n",
+ "Social\n",
+ "GitHub\n",
+ "Twitter\n",
+ "LinkedIn\n",
+ "Discord\n",
+ "\n",
+ "\n",
+ "\n",
+ "about page\n",
+ "Webpage Title:\n",
+ "Hugging Face – The AI community building the future.\n",
+ "Webpage Contents:\n",
+ "Hugging Face\n",
+ "Models\n",
+ "Datasets\n",
+ "Spaces\n",
+ "Posts\n",
+ "Docs\n",
+ "Enterprise\n",
+ "Pricing\n",
+ "Log In\n",
+ "Sign Up\n",
+ "The AI community building the future.\n",
+ "The platform where the machine learning community collaborates on models, datasets, and applications.\n",
+ "Trending on\n",
+ "this week\n",
+ "Models\n",
+ "IamCreateAI/Ruyi-Mini-7B\n",
+ "Updated\n",
+ "4 days ago\n",
+ "•\n",
+ "8.17k\n",
+ "•\n",
+ "352\n",
+ "Datou1111/shou_xin\n",
+ "Updated\n",
+ "12 days ago\n",
+ "•\n",
+ "28.3k\n",
+ "•\n",
+ "672\n",
+ "answerdotai/ModernBERT-base\n",
+ "Updated\n",
+ "1 day ago\n",
+ "•\n",
+ "6.24k\n",
+ "•\n",
+ "236\n",
+ "meta-llama/Llama-3.3-70B-Instruct\n",
+ "Updated\n",
+ "11 days ago\n",
+ "•\n",
+ "236k\n",
+ "•\n",
+ "1.21k\n",
+ "tencent/HunyuanVideo\n",
+ "Updated\n",
+ "3 days ago\n",
+ "•\n",
+ "6.01k\n",
+ "•\n",
+ "1.2k\n",
+ "Browse 400k+ models\n",
+ "Spaces\n",
+ "Running\n",
+ "on\n",
+ "Zero\n",
+ "1.79k\n",
+ "🏢\n",
+ "TRELLIS\n",
+ "Scalable and Versatile 3D Generation from images\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(get_brochure_user_prompt(\"HuggingFace\", \"https://huggingface.co\"))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "id": "e44de579-4a1a-4e6a-a510-20ea3e4b8d46",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def create_brochure(company_name, url):\n",
+ " response = openai.chat.completions.create(\n",
+ " model=MODEL,\n",
+ " messages=[\n",
+ " {\"role\": \"system\", \"content\": system_prompt},\n",
+ " {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
+ " ],\n",
+ " )\n",
+ " result = response.choices[0].message.content\n",
+ " display(Markdown(result))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "id": "e093444a-9407-42ae-924a-145730591a39",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Found links: {'links': [{'type': 'home page', 'url': 'https://huggingface.com/'}, {'type': 'about page', 'url': 'https://huggingface.com/huggingface'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'enterprise page', 'url': 'https://huggingface.com/enterprise'}, {'type': 'pricing page', 'url': 'https://huggingface.com/pricing'}, {'type': 'blog page', 'url': 'https://huggingface.com/blog'}, {'type': 'community page', 'url': 'https://discuss.huggingface.co'}, {'type': 'GitHub page', 'url': 'https://github.com/huggingface'}, {'type': 'Twitter page', 'url': 'https://twitter.com/huggingface'}, {'type': 'LinkedIn page', 'url': 'https://www.linkedin.com/company/huggingface/'}]}\n"
+ ]
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "# Hugging Face Brochure\n",
+ "\n",
+ "**Hugging Face** \n",
+ "*The AI community building the future.*\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## About Us\n",
+ "Hugging Face is a pioneering platform where the machine learning community comes together to collaborate on models, datasets, and applications. With over 400,000 models and 100,000 datasets available, we empower users to create, discover, and innovate in the field of machine learning.\n",
+ "\n",
+ "### Our Mission\n",
+ "To accelerate the development and deployment of machine learning applications, making cutting-edge technology accessible to everyone.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Company Culture\n",
+ "At Hugging Face, we believe in the power of collaboration and open-source technology. We foster an inclusive environment where every team member's input is valued, allowing for diverse ideas and perspectives. Our culture emphasizes continuous learning, innovation, and a commitment to advancing AI for the greater good.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Customers\n",
+ "Hugging Face serves more than 50,000 organizations, including industry leaders such as:\n",
+ "\n",
+ "- **Amazon Web Services**\n",
+ "- **Meta**\n",
+ "- **Google**\n",
+ "- **Microsoft**\n",
+ "- **Intel**\n",
+ " \n",
+ "These organizations utilize our platform for various machine learning tasks, enhancing their workflows and outputs.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Careers at Hugging Face\n",
+ "We are always on the lookout for talented individuals who are passionate about AI and machine learning. Career opportunities at Hugging Face offer:\n",
+ "\n",
+ "- A collaborative work environment\n",
+ "- Remote work flexibility\n",
+ "- Continuing education and mentorship\n",
+ "- Opportunities to work on impactful projects\n",
+ "\n",
+ "**Join us and help shape the future of AI!**\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Our Offerings\n",
+ "### Models\n",
+ "- Access over 400,000 machine learning models, covering a variety of tasks and technologies.\n",
+ "\n",
+ "### Datasets\n",
+ "- Discover and share 100,000+ datasets tailored for computer vision, audio, and NLP tasks.\n",
+ "\n",
+ "### Spaces\n",
+ "- Utilize our application space to run various applications including real-time projects and demonstrations.\n",
+ "\n",
+ "### Enterprise Solutions\n",
+ "- With dedicated support and industry-grade security, our Enterprise solutions are designed for organizations looking to implement AI at scale.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Get Started Today!\n",
+ "**Sign up now** to become part of the Hugging Face community and access an array of tools to accelerate your machine learning journey. \n",
+ "[Sign Up Here](#)\n",
+ "\n",
+ "---\n",
+ "\n",
+ "**Stay Connected** \n",
+ "Follow us on our social media platforms:\n",
+ "- [GitHub](#)\n",
+ "- [Twitter](#)\n",
+ "- [LinkedIn](#)\n",
+ "- [Discord](#)\n",
+ "\n",
+ "**Hugging Face – Building the Future of AI**"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "create_brochure(\"HuggingFace\", \"https://huggingface.com\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "61eaaab7-0b47-4b29-82d4-75d474ad8d18",
+ "metadata": {},
+ "source": [
+ "## Finally - a minor improvement\n",
+ "\n",
+ "With a small adjustment, we can change this so that the results stream back from OpenAI,\n",
+ "with the familiar typewriter animation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "id": "51db0e49-f261-4137-aabe-92dd601f7725",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def stream_brochure(company_name, url):\n",
+ " stream = openai.chat.completions.create(\n",
+ " model=MODEL,\n",
+ " messages=[\n",
+ " {\"role\": \"system\", \"content\": system_prompt},\n",
+ " {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
+ " ],\n",
+ " stream=True\n",
+ " )\n",
+ " \n",
+ " response = \"\"\n",
+ " display_handle = display(Markdown(\"\"), display_id=True)\n",
+ " for chunk in stream:\n",
+ " response += chunk.choices[0].delta.content or ''\n",
+ " response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
+ " update_display(Markdown(response), display_id=display_handle.display_id)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "id": "56bf0ae3-ee9d-4a72-9cd6-edcac67ceb6d",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.co'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'enterprise page', 'url': 'https://huggingface.co/enterprise'}, {'type': 'blog page', 'url': 'https://huggingface.co/blog'}, {'type': 'community discussion', 'url': 'https://discuss.huggingface.co'}, {'type': 'GitHub page', 'url': 'https://github.com/huggingface'}, {'type': 'Twitter page', 'url': 'https://twitter.com/huggingface'}, {'type': 'LinkedIn page', 'url': 'https://www.linkedin.com/company/huggingface/'}]}\n"
+ ]
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "# Welcome to Hugging Face\n",
+ "\n",
+ "## The AI Community Building the Future\n",
+ "\n",
+ "At Hugging Face, we bring together the machine learning community to collaborate on groundbreaking models, datasets, and applications. Our platform is a vibrant hub where innovation meets practicality, empowering developers and researchers to create state-of-the-art AI solutions.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "### 🏆 What We Offer\n",
+ "\n",
+ "- **Models**: Access and discover over **400k+ models** including the latest advancements in AI.\n",
+ "- **Datasets**: A rich collection of **100k+ datasets** tailored for various machine learning tasks.\n",
+ "- **Spaces**: Collaborate on applications and projects seamlessly within our community’s creative workspace.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "### 🌏 Our Customers\n",
+ "\n",
+ "Join the ranks of **50,000+ organizations** leveraging Hugging Face’s offerings, including industry giants like:\n",
+ "- **Meta**\n",
+ "- **Amazon Web Services**\n",
+ "- **Google**\n",
+ "- **Microsoft**\n",
+ "- **Grammarly**\n",
+ "\n",
+ "These companies trust us to accelerate their machine learning initiatives and foster innovation.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "### 🌱 Company Culture\n",
+ "\n",
+ "At Hugging Face, we embrace an open-source ethos, encouraging collaboration and contribution from the community. Our culture is centered around creativity, innovation, and inclusivity. We believe in empowering individuals and teams by providing the right tools and support to shape the future of AI.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "### 🚀 Careers at Hugging Face\n",
+ "\n",
+ "We are on the lookout for passionate individuals to join our team! If you share our vision of an accessible AI landscape, explore the career opportunities we offer. We provide an environment that supports academic growth, teamwork, and professional development while making a meaningful impact in the machine learning realm.\n",
+ "\n",
+ "#### Current Openings Include:\n",
+ "- Machine Learning Engineers\n",
+ "- Data Scientists\n",
+ "- Software Developers\n",
+ "- Community Managers\n",
+ "\n",
+ "---\n",
+ "\n",
+ "### 💡 Join Us\n",
+ "\n",
+ "Are you ready to be part of a revolution in AI? **[Sign Up](#)** today to explore the possibilities with Hugging Face or **[Log In](#)** if you’re already part of our community.\n",
+ "\n",
+ "Let’s build the future of AI together!\n",
+ "\n",
+ "---\n",
+ "\n",
+ "*For inquiries about our enterprise solutions, pricing, or community involvement, feel free to reach out through our website.* \n",
+ "\n",
+ "**Connect with us:** \n",
+ "[Twitter](#) | [LinkedIn](#) | [GitHub](#) | [Forum](#)"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "stream_brochure(\"HuggingFace\", \"https://huggingface.co\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "id": "87bd1188",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Found links: {'links': [{'type': 'homepage', 'url': 'https://huggingface.co/'}, {'type': 'about page', 'url': 'https://huggingface.co/huggingface'}, {'type': 'enterprise page', 'url': 'https://huggingface.co/enterprise'}, {'type': 'pricing page', 'url': 'https://huggingface.co/pricing'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'blog page', 'url': 'https://huggingface.co/blog'}, {'type': 'discussion forum', 'url': 'https://discuss.huggingface.co'}, {'type': 'GitHub page', 'url': 'https://github.com/huggingface'}, {'type': 'Twitter page', 'url': 'https://twitter.com/huggingface'}, {'type': 'LinkedIn page', 'url': 'https://www.linkedin.com/company/huggingface/'}]}\n"
+ ]
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "\n",
+ "# Hugging Face: The AI Community Building the Future\n",
+ "\n",
+ "Welcome to Hugging Face, the leading collaborative platform for the machine learning community. With a robust environment designed for creating, discovering, and deploying machine learning models, datasets, and applications, Hugging Face is at the frontier of artificial intelligence innovation. \n",
+ "\n",
+ "---\n",
+ "\n",
+ "## About Us\n",
+ "At Hugging Face, we believe in the power of collaboration. Our platform enables users to work together on projects that range from machine-learning models to expansive datasets. With over 400,000 models and 100,000 datasets available, we provide the tools necessary to help researchers, developers, and organizations accelerate their machine learning projects.\n",
+ "\n",
+ "- **Trending Models This Week:**\n",
+ " - **IamCreateAI/Ruyi-Mini-7B** | 8.17k | Updated 4 days ago\n",
+ " - **Datou1111/shou_xin** | 28.3k | Updated 12 days ago\n",
+ " - **meta-llama/Llama-3.3-70B-Instruct** | 236k | Updated 11 days ago\n",
+ "\n",
+ "Explore our community-driven approach that integrates state-of-the-art tools like Transformers, DiffUsers, and PEFT (Parameter Efficient Finetuning).\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Company Culture\n",
+ "Hugging Face fosters a vibrant and inclusive company culture, aiming to empower individuals and teams through transparent practices and open-source methodologies. We believe in “AI for everyone,” promoting accessibility and co-creation within the AI community. \n",
+ "\n",
+ "### Why Work With Us?\n",
+ "- **Collaborative Environment**: Join a diverse team of experts and enthusiasts dedicated to pushing the boundaries of AI and machine learning.\n",
+ "- **Open Source Commitment**: Contribute to freely accessible tools that serve the global community.\n",
+ "- **Flexible Work**: We support remote work and provide a range of job opportunities tailored to different areas of expertise.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Customers & Organizations\n",
+ "Over 50,000 organizations utilize Hugging Face in various industries, including notable names such as:\n",
+ "- **Meta AI**\n",
+ "- **Amazon Web Services**\n",
+ "- **Google**\n",
+ "- **Microsoft**\n",
+ "\n",
+ "Our enterprise solutions offer seamless integration with advanced security features, making us a trusted partner for both startups and established corporations.\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Careers at Hugging Face\n",
+ "We are always on the lookout for passionate individuals to join our team. Explore our open positions in areas such as software development, research, marketing, and customer support.\n",
+ "\n",
+ "- **Open Positions**: \n",
+ " - Machine Learning Engineer\n",
+ " - Data Scientist\n",
+ " - Community Manager\n",
+ "\n",
+ "Join us in shaping the future of AI. \n",
+ "\n",
+ "**[Explore Careers](#)**\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Join the Hugging Face Community\n",
+ "Whether you're looking to develop cutting-edge AI models, contribute to open-source projects, or advance your career in this dynamic field, Hugging Face is your gateway to innovation.\n",
+ "\n",
+ "**[Learn More](#)** | **[Sign Up Today](#)**\n",
+ "\n",
+ "Together, let's build the future of AI!\n",
+ "\n"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "stream_brochure(\"HuggingFace\", \"https://huggingface.co\")"
+ ]
+ },
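+  {
+   "cell_type": "markdown",
+   "id": "ad0baa11",
+   "metadata": {},
+   "source": [
+    "## Optional: the same streaming technique without a paid API\n",
+    "\n",
+    "A minimal sketch, assuming an Ollama server is running locally on its default port (11434) and a model such as `llama3.2` has already been pulled: the streaming call above can be pointed at Ollama's OpenAI-compatible endpoint, reusing `system_prompt` and `get_brochure_user_prompt` unchanged."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "ad0baa12",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from openai import OpenAI\n",
+    "\n",
+    "# Assumptions: a local Ollama server on its default port; the api_key is a placeholder that Ollama ignores\n",
+    "ollama_client = OpenAI(base_url=\"http://localhost:11434/v1\", api_key=\"ollama\")\n",
+    "\n",
+    "def stream_brochure_ollama(company_name, url, model=\"llama3.2\"):\n",
+    "    stream = ollama_client.chat.completions.create(\n",
+    "        model=model,\n",
+    "        messages=[\n",
+    "            {\"role\": \"system\", \"content\": system_prompt},\n",
+    "            {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
+    "        ],\n",
+    "        stream=True\n",
+    "    )\n",
+    "    response = \"\"\n",
+    "    display_handle = display(Markdown(\"\"), display_id=True)\n",
+    "    for chunk in stream:\n",
+    "        response += chunk.choices[0].delta.content or ''\n",
+    "        update_display(Markdown(response), display_id=display_handle.display_id)"
+   ]
+  },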
+ {
+ "cell_type": "markdown",
+ "id": "a9e7375d",
+ "metadata": {},
+ "source": [
+    "## **Multi-lingual with Desired Format**\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "id": "af5c959f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def multi_lingual_stream_brochure(company_name, url, language, tone):\n",
+ "\n",
+ " system_prompt = f\"\"\"\n",
+ "You are an assistant that analyzes the contents of several relevant pages from a company website and creates a visually appealing and professional short brochure for prospective customers, investors, and recruits. \n",
+ "The brochure should be written in {language} and use a {tone.lower()} tone throughout.\n",
+ "\n",
+ "The brochure should follow this structure (in {language}):\n",
+ "\n",
+ "1. **Front Cover**:\n",
+ " - Prominently display the company name as Title.\n",
+ " - Include a compelling headline or tagline.\n",
+    "   - Add something engaging that is relevant to the company’s mission.\n",
+ "\n",
+ "2. **About Us**:\n",
+ " - Provide a brief introduction to the company.\n",
+ " - State the company’s core mission and vision.\n",
+ " - Mention the founding story or key milestones.\n",
+ "\n",
+ "3. **What We Offer**:\n",
+ " - Summarize the company's products, services, or solutions.\n",
+ " - Highlight benefits or unique selling points.\n",
+ " - Include testimonials or case studies if available.\n",
+ "\n",
+ "4. **Our Culture**:\n",
+ " - Outline the company’s key values or guiding principles.\n",
+ " - Describe the workplace environment (e.g., innovation-driven, inclusive, collaborative).\n",
+ " - Highlight community engagement or CSR initiatives.\n",
+ "\n",
+ "5. **Who We Serve**:\n",
+ " - Describe the target customers or industries served.\n",
+ " - Mention notable clients or partners.\n",
+ " - Include testimonials or endorsements from customers.\n",
+ "\n",
+ "6. **Join Us**:\n",
+ " - Detail career or internship opportunities.\n",
+ " - Highlight benefits, career growth, or training opportunities.\n",
+ " - Provide direct links or steps to apply.\n",
+ "\n",
+ "7. **Contact Us**:\n",
+ " - Provide the company’s address, phone number, and email.\n",
+ " - Include links to social media platforms.\n",
+ " - Add a link to the company’s website.\n",
+ "\n",
+ "8. **Closing Note**:\n",
+ " - End with a thank-you message or an inspirational note for the reader.\n",
+ " - Add a call-to-action (e.g., “Get in touch today!” or “Explore more on our website”).\n",
+ "\n",
+ "Ensure the content is concise, engaging, visually clear, and tailored to the target audience. Use headings and subheadings to make the brochure easy to navigate. Include links and contact information wherever applicable.\n",
+ "\"\"\"\n",
+    "\n",
+ " stream = openai.chat.completions.create(\n",
+ " model=MODEL,\n",
+ " messages=[\n",
+ " {\"role\": \"system\", \"content\": system_prompt},\n",
+ " {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
+ " ],\n",
+ " stream=True\n",
+ " )\n",
+ " \n",
+ " response = \"\"\n",
+ " display_handle = display(Markdown(\"\"), display_id=True)\n",
+ " for chunk in stream:\n",
+ " response += chunk.choices[0].delta.content or ''\n",
+ " response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n",
+ " update_display(Markdown(response), display_id=display_handle.display_id)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "id": "744bfc05",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Found links: {'links': [{'type': 'about page', 'url': 'https://openai.com/about'}, {'type': 'careers page', 'url': 'https://openai.com/careers'}]}\n"
+ ]
+ },
+ {
+ "data": {
+ "text/markdown": [
+ "It seems that the landing and related pages for OpenAI did not yield any specific content. However, I can create a creative and engaging brochure based on general knowledge about OpenAI. Here's a humorous and entertaining brochure written in Urdu:\n",
+ "\n",
+ "\n",
+ "# 🎉 اوپن اے آئی: ہوشیار robots کا دوست! 🎉\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## About Us - ہمارے بارے میں:\n",
+ "\n",
+ "ہماری کمپنی اوپن اے آئی، 2015 میں بنی۔ ہم نے سوچا کہ \"کیوں نہ ایک ایسا انٹیلیجنٹ سسٹم بنائیں جو انسانوں کی مدد کرے؟\" تو ہم نے کام شروع کیا اور دیکھیں! ہم نے ایک نئی دنیا کی بنیاد رکھی۔ ہماری مشن ہے \"تمام لوگوں کے لئے AI کی طاقت کو قابل رسائی بنانا\"۔ آفاقی طاقت کو ڈھونڈتے ہیں، جیسے آپ کے فرج میں چھپے ہوئے برگر!\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## What We Offer - ہم کیا پیش کرتے ہیں:\n",
+ "\n",
+ "ہم AI کے شوقین ہیں! 🤖 ہم مختلف پروڈکٹس اور سروسز پیش کرتے ہیں، جیسے کہ:\n",
+ "\n",
+ "- **GPT-3**: آپ کے سوالات کے جواب دینے کے لئے تیار!\n",
+ "- **تخلیقی تحریر**: جنریٹنگ آئیڈیاز جب آپ کی تخلیقیت بریک ہو جائے!\n",
+ "- **AI ٹولز**: آپ کی زندگی کو مزید آسان بنانے کے لئے!\n",
+ "\n",
+ "ہمارے صارفین کہتے ہیں، \"اپنی زندگی میں اوپن اے آئی کی ضرورت ہے، جیسے موٹیویشن کی ضرورت ہوتی ہے!\"\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Our Culture - ہماری ثقافت:\n",
+ "\n",
+ "ہماری کمپنی میں، ہمارا بنیادی اصول ہے: \"پیار اور انوکھا خیالات!\" 🤗 ہم نے انوکھے، تعاون پر مبنی ماحول کی بنیاد رکھی، جہاں ہر کوئی اپنی بات کہہ سکتا ہے، یہاں تک کہ ونڈو کے باہر کھڑا درخت بھی! ہم کمیونٹی کی خدمت کیلئے ہمیشہ تیار رہتے ہیں، وہ بھی سوشل میڈٰیا پر۔\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Who We Serve - ہم کس کی خدمت کرتے ہیں:\n",
+ "\n",
+ "ہم ہر اُس شخص کی خدمت کرتے ہیں جو سوپر ہیرومنٹ کی تلاش میں ہے۔ ہمارے وزیٹر، محققین، اور ٹیکنالوجی کے شوقین ہیں، اور ہمارے بہترین کلائنٹس include شامل ہیں \"بڑا دماغی جیسا سوچنے والے!\" 💡\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Join Us - ہمارے ساتھ شامل ہوں:\n",
+ "\n",
+ "آپ کو ترقی کی تلاش ہے؟ تو ہماری ٹیم کا حصہ بنیں! 🚀 ہم ہمیشہ نئے امریکی جاموں کی تلاش میں ہیں۔ آپ کو ٹریننگ، ترقی کے مواقع، اور سہولیات فراہم کریں گے۔\n",
+ "\n",
+ "📩 **درخواست دینے کے مرحلے:** ہماری ویب سائٹ پر جائیں، کیونکہ ہم جانتے ہیں کہ آپ کا خواب آپ کے قریب ہے!\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Contact Us - ہم سے رابطہ کریں:\n",
+ "\n",
+ "**پتہ:** نیٹ ورک کی دنیا \n",
+ "**فون:** 123-456-789 \n",
+ "**ایمیل:** info@openai.com \n",
+ "**سوشل میڈیا:** [فیس بک](#) | [ٹویٹر](#) | [لنکڈ ان](#) \n",
+ "**ویب سائٹ:** [openai.com](#)\n",
+ "\n",
+ "---\n",
+ "\n",
+ "## Closing Note - اختتامی نوٹ:\n",
+ "\n",
+ "ہماری کمپنی اوپن اے آئی کی طرف سے ایک شکریہ! اے آئی کی دنیا میں قدم رکھنے کا وقت آ گیا ہے! \n",
+ "\n",
+ "🖱️ **آج ہی رابطہ کریں یا ہماری ویب سائٹ کا دورہ کریں!**\n",
+ "\n",
+ "\n",
+ "**نوٹ:** واقعی ویب سائٹ کے مخصوص روابط، ای میل اور نمبر تخلیقی مقصد کے لئے ہیں۔ اس کو حقیقی معلومات کے ساتھ تبدیل کیا جا سکتا ہے۔"
+ ],
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "\n",
+ "multi_lingual_stream_brochure(\"OpenAI\", \"https://openai.com/\", \"Urdu\", \"humorous, entertaining, jokey\")"
+ ]
+  }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "llm_env",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
From 1c7c9e35e02b6322a9681f9fcf9b120235887cda Mon Sep 17 00:00:00 2001
From: codenigma1
Date: Sun, 22 Dec 2024 01:20:22 +1100
Subject: [PATCH 16/29] Day 5 Challenge one with multilingual along with
 multitone
---
.../day5-multi-lingual-desire-format.ipynb | 3112 +----------------
1 file changed, 24 insertions(+), 3088 deletions(-)
diff --git a/week1/community-contributions/day5-multi-lingual-desire-format.ipynb b/week1/community-contributions/day5-multi-lingual-desire-format.ipynb
index 3f1b3ad..b17c402 100644
--- a/week1/community-contributions/day5-multi-lingual-desire-format.ipynb
+++ b/week1/community-contributions/day5-multi-lingual-desire-format.ipynb
@@ -42,18 +42,10 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": null,
"id": "fc5d8880-f2ee-4c06-af16-ecbc0262af61",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "API key looks good so far\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"# Initialize and constants\n",
"\n",
@@ -109,46 +101,10 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": null,
"id": "e30d8128-933b-44cc-81c8-ab4c9d86589a",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "['https://edwarddonner.com/',\n",
- " 'https://edwarddonner.com/outsmart/',\n",
- " 'https://edwarddonner.com/about-me-and-about-nebula/',\n",
- " 'https://edwarddonner.com/posts/',\n",
- " 'https://edwarddonner.com/',\n",
- " 'https://news.ycombinator.com',\n",
- " 'https://nebula.io/?utm_source=ed&utm_medium=referral',\n",
- " 'https://www.prnewswire.com/news-releases/wynden-stark-group-acquires-nyc-venture-backed-tech-startup-untapt-301269512.html',\n",
- " 'https://patents.google.com/patent/US20210049536A1/',\n",
- " 'https://www.linkedin.com/in/eddonner/',\n",
- " 'https://edwarddonner.com/2024/11/13/llm-engineering-resources/',\n",
- " 'https://edwarddonner.com/2024/11/13/llm-engineering-resources/',\n",
- " 'https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/',\n",
- " 'https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/',\n",
- " 'https://edwarddonner.com/2024/08/06/outsmart/',\n",
- " 'https://edwarddonner.com/2024/08/06/outsmart/',\n",
- " 'https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/',\n",
- " 'https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/',\n",
- " 'https://edwarddonner.com/',\n",
- " 'https://edwarddonner.com/outsmart/',\n",
- " 'https://edwarddonner.com/about-me-and-about-nebula/',\n",
- " 'https://edwarddonner.com/posts/',\n",
- " 'mailto:hello@mygroovydomain.com',\n",
- " 'https://www.linkedin.com/in/eddonner/',\n",
- " 'https://twitter.com/edwarddonner',\n",
- " 'https://www.facebook.com/edward.donner.52']"
- ]
- },
- "execution_count": 4,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
+ "outputs": [],
"source": [
"ed = Website(\"https://edwarddonner.com\")\n",
"ed.links"
@@ -193,26 +149,10 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": null,
"id": "b97e4068-97ed-4120-beae-c42105e4d59a",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "You are provided with a list of links found on a webpage. You are able to decide which of the links would be most relevant to include in a brochure about the company, such as links to an About page, or a Company page, or Careers/Jobs pages.\n",
- "You should respond in JSON as in this example:\n",
- "{\n",
- " \"links\": [\n",
- " {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
- " {\"type\": \"careers page\": \"url\": \"https://another.full.url/careers\"}\n",
- " ]\n",
- "}\n",
- "\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"print(link_system_prompt)"
]
@@ -235,45 +175,10 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": null,
"id": "6bcbfa78-6395-4685-b92c-22d592050fd7",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Here is the list of links on the website of https://edwarddonner.com - please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. Do not include Terms of Service, Privacy, email links.\n",
- "Links (some might be relative links):\n",
- "https://edwarddonner.com/\n",
- "https://edwarddonner.com/outsmart/\n",
- "https://edwarddonner.com/about-me-and-about-nebula/\n",
- "https://edwarddonner.com/posts/\n",
- "https://edwarddonner.com/\n",
- "https://news.ycombinator.com\n",
- "https://nebula.io/?utm_source=ed&utm_medium=referral\n",
- "https://www.prnewswire.com/news-releases/wynden-stark-group-acquires-nyc-venture-backed-tech-startup-untapt-301269512.html\n",
- "https://patents.google.com/patent/US20210049536A1/\n",
- "https://www.linkedin.com/in/eddonner/\n",
- "https://edwarddonner.com/2024/11/13/llm-engineering-resources/\n",
- "https://edwarddonner.com/2024/11/13/llm-engineering-resources/\n",
- "https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/\n",
- "https://edwarddonner.com/2024/10/16/from-software-engineer-to-ai-data-scientist-resources/\n",
- "https://edwarddonner.com/2024/08/06/outsmart/\n",
- "https://edwarddonner.com/2024/08/06/outsmart/\n",
- "https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/\n",
- "https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/\n",
- "https://edwarddonner.com/\n",
- "https://edwarddonner.com/outsmart/\n",
- "https://edwarddonner.com/about-me-and-about-nebula/\n",
- "https://edwarddonner.com/posts/\n",
- "mailto:hello@mygroovydomain.com\n",
- "https://www.linkedin.com/in/eddonner/\n",
- "https://twitter.com/edwarddonner\n",
- "https://www.facebook.com/edward.donner.52\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"print(get_links_user_prompt(ed))"
]
@@ -301,100 +206,10 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": null,
"id": "74a827a0-2782-4ae5-b210-4a242a8b4cc2",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "['/',\n",
- " '/models',\n",
- " '/datasets',\n",
- " '/spaces',\n",
- " '/posts',\n",
- " '/docs',\n",
- " '/enterprise',\n",
- " '/pricing',\n",
- " '/login',\n",
- " '/join',\n",
- " '/IamCreateAI/Ruyi-Mini-7B',\n",
- " '/Datou1111/shou_xin',\n",
- " '/answerdotai/ModernBERT-base',\n",
- " '/meta-llama/Llama-3.3-70B-Instruct',\n",
- " '/tencent/HunyuanVideo',\n",
- " '/models',\n",
- " '/spaces/JeffreyXiang/TRELLIS',\n",
- " '/spaces/HuggingFaceH4/blogpost-scaling-test-time-compute',\n",
- " '/spaces/multimodalart/flux-style-shaping',\n",
- " '/spaces/Kwai-Kolors/Kolors-Virtual-Try-On',\n",
- " '/spaces/lllyasviel/iclight-v2',\n",
- " '/spaces',\n",
- " '/datasets/fka/awesome-chatgpt-prompts',\n",
- " '/datasets/O1-OPEN/OpenO1-SFT',\n",
- " '/datasets/HuggingFaceFW/fineweb-2',\n",
- " '/datasets/HuggingFaceTB/finemath',\n",
- " '/datasets/amphora/QwQ-LongCoT-130K',\n",
- " '/datasets',\n",
- " '/join',\n",
- " '/pricing#endpoints',\n",
- " '/pricing#spaces',\n",
- " '/pricing',\n",
- " '/enterprise',\n",
- " '/enterprise',\n",
- " '/enterprise',\n",
- " '/enterprise',\n",
- " '/enterprise',\n",
- " '/enterprise',\n",
- " '/enterprise',\n",
- " '/allenai',\n",
- " '/facebook',\n",
- " '/amazon',\n",
- " '/google',\n",
- " '/Intel',\n",
- " '/microsoft',\n",
- " '/grammarly',\n",
- " '/Writer',\n",
- " '/docs/transformers',\n",
- " '/docs/diffusers',\n",
- " '/docs/safetensors',\n",
- " '/docs/huggingface_hub',\n",
- " '/docs/tokenizers',\n",
- " '/docs/peft',\n",
- " '/docs/transformers.js',\n",
- " '/docs/timm',\n",
- " '/docs/trl',\n",
- " '/docs/datasets',\n",
- " '/docs/text-generation-inference',\n",
- " '/docs/accelerate',\n",
- " '/models',\n",
- " '/datasets',\n",
- " '/spaces',\n",
- " '/tasks',\n",
- " 'https://ui.endpoints.huggingface.co',\n",
- " '/chat',\n",
- " '/huggingface',\n",
- " '/brand',\n",
- " '/terms-of-service',\n",
- " '/privacy',\n",
- " 'https://apply.workable.com/huggingface/',\n",
- " 'mailto:press@huggingface.co',\n",
- " '/learn',\n",
- " '/docs',\n",
- " '/blog',\n",
- " 'https://discuss.huggingface.co',\n",
- " 'https://status.huggingface.co/',\n",
- " 'https://github.com/huggingface',\n",
- " 'https://twitter.com/huggingface',\n",
- " 'https://www.linkedin.com/company/huggingface/',\n",
- " '/join/discord']"
- ]
- },
- "execution_count": 10,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
+ "outputs": [],
"source": [
"# Anthropic has made their site harder to scrape, so I'm using HuggingFace..\n",
"\n",
@@ -404,28 +219,10 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": null,
"id": "d3d583e2-dcc4-40cc-9b28-1e8dbf402924",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "{'links': [{'type': 'homepage', 'url': 'https://huggingface.co/'},\n",
- " {'type': 'about page', 'url': 'https://huggingface.co/huggingface'},\n",
- " {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'},\n",
- " {'type': 'blog', 'url': 'https://huggingface.co/blog'},\n",
- " {'type': 'github page', 'url': 'https://github.com/huggingface'},\n",
- " {'type': 'twitter page', 'url': 'https://twitter.com/huggingface'},\n",
- " {'type': 'linkedin page',\n",
- " 'url': 'https://www.linkedin.com/company/huggingface/'}]}"
- ]
- },
- "execution_count": 11,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
+ "outputs": [],
"source": [
"get_links(\"https://huggingface.co\")"
]
@@ -460,2181 +257,10 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": null,
"id": "5099bd14-076d-4745-baf3-dac08d8e5ab2",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.co/about'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'blog page', 'url': 'https://huggingface.co/blog'}, {'type': 'company page', 'url': 'https://huggingface.co/huggingface'}, {'type': 'community discussions', 'url': 'https://discuss.huggingface.co'}, {'type': 'GitHub page', 'url': 'https://github.com/huggingface'}, {'type': 'Twitter page', 'url': 'https://twitter.com/huggingface'}, {'type': 'LinkedIn page', 'url': 'https://www.linkedin.com/company/huggingface/'}]}\n",
- "Landing page:\n",
- "Webpage Title:\n",
- "Hugging Face – The AI community building the future.\n",
- "Webpage Contents:\n",
- "Hugging Face\n",
- "Models\n",
- "Datasets\n",
- "Spaces\n",
- "Posts\n",
- "Docs\n",
- "Enterprise\n",
- "Pricing\n",
- "Log In\n",
- "Sign Up\n",
- "The AI community building the future.\n",
- "The platform where the machine learning community collaborates on models, datasets, and applications.\n",
- "Trending on\n",
- "this week\n",
- "Models\n",
- "IamCreateAI/Ruyi-Mini-7B\n",
- "Updated\n",
- "4 days ago\n",
- "•\n",
- "8.17k\n",
- "•\n",
- "352\n",
- "Datou1111/shou_xin\n",
- "Updated\n",
- "12 days ago\n",
- "•\n",
- "28.3k\n",
- "•\n",
- "672\n",
- "answerdotai/ModernBERT-base\n",
- "Updated\n",
- "1 day ago\n",
- "•\n",
- "6.24k\n",
- "•\n",
- "236\n",
- "meta-llama/Llama-3.3-70B-Instruct\n",
- "Updated\n",
- "11 days ago\n",
- "•\n",
- "236k\n",
- "•\n",
- "1.21k\n",
- "tencent/HunyuanVideo\n",
- "Updated\n",
- "3 days ago\n",
- "•\n",
- "6.01k\n",
- "•\n",
- "1.2k\n",
- "Browse 400k+ models\n",
- "Spaces\n",
- "Running\n",
- "on\n",
- "Zero\n",
- "1.79k\n",
- "🏢\n",
- "TRELLIS\n",
- "Scalable and Versatile 3D Generation from images\n",
- "Running\n",
- "306\n",
- "📝\n",
- "Scaling test-time compute\n",
- "Running\n",
- "on\n",
- "Zero\n",
- "470\n",
- "🚀\n",
- "Flux Style Shaping\n",
- "Optical illusions and style transfer with FLUX\n",
- "Running\n",
- "on\n",
- "CPU Upgrade\n",
- "6.11k\n",
- "👕\n",
- "Kolors Virtual Try-On\n",
- "Running\n",
- "on\n",
- "Zero\n",
- "965\n",
- "📈\n",
- "IC Light V2\n",
- "Browse 150k+ applications\n",
- "Datasets\n",
- "fka/awesome-chatgpt-prompts\n",
- "Updated\n",
- "Sep 3\n",
- "•\n",
- "6.83k\n",
- "•\n",
- "6.58k\n",
- "O1-OPEN/OpenO1-SFT\n",
- "Updated\n",
- "4 days ago\n",
- "•\n",
- "1.86k\n",
- "•\n",
- "234\n",
- "HuggingFaceFW/fineweb-2\n",
- "Updated\n",
- "13 days ago\n",
- "•\n",
- "77.7k\n",
- "•\n",
- "342\n",
- "HuggingFaceTB/finemath\n",
- "Updated\n",
- "1 day ago\n",
- "•\n",
- "1.86k\n",
- "•\n",
- "43\n",
- "amphora/QwQ-LongCoT-130K\n",
- "Updated\n",
- "16 days ago\n",
- "•\n",
- "1.34k\n",
- "•\n",
- "85\n",
- "Browse 100k+ datasets\n",
- "The Home of Machine Learning\n",
- "Create, discover and collaborate on ML better.\n",
- "The collaboration platform\n",
- "Host and collaborate on unlimited public models, datasets and applications.\n",
- "Move faster\n",
- "With the HF Open source stack.\n",
- "Explore all modalities\n",
- "Text, image, video, audio or even 3D.\n",
- "Build your portfolio\n",
- "Share your work with the world and build your ML profile.\n",
- "Sign Up\n",
- "Accelerate your ML\n",
- "We provide paid Compute and Enterprise solutions.\n",
- "Compute\n",
- "Deploy on optimized\n",
- "Inference Endpoints\n",
- "or update your\n",
- "Spaces applications\n",
- "to a GPU in a few clicks.\n",
- "View pricing\n",
- "Starting at $0.60/hour for GPU\n",
- "Enterprise\n",
- "Give your team the most advanced platform to build AI with enterprise-grade security, access controls and\n",
- "\t\t\tdedicated support.\n",
- "Getting started\n",
- "Starting at $20/user/month\n",
- "Single Sign-On\n",
- "Regions\n",
- "Priority Support\n",
- "Audit Logs\n",
- "Resource Groups\n",
- "Private Datasets Viewer\n",
- "More than 50,000 organizations are using Hugging Face\n",
- "Ai2\n",
- "Enterprise\n",
- "non-profit\n",
- "•\n",
- "366 models\n",
- "•\n",
- "1.76k followers\n",
- "AI at Meta\n",
- "Enterprise\n",
- "company\n",
- "•\n",
- "2.05k models\n",
- "•\n",
- "3.83k followers\n",
- "Amazon Web Services\n",
- "company\n",
- "•\n",
- "21 models\n",
- "•\n",
- "2.45k followers\n",
- "Google\n",
- "company\n",
- "•\n",
- "911 models\n",
- "•\n",
- "5.76k followers\n",
- "Intel\n",
- "company\n",
- "•\n",
- "217 models\n",
- "•\n",
- "2.07k followers\n",
- "Microsoft\n",
- "company\n",
- "•\n",
- "351 models\n",
- "•\n",
- "6.29k followers\n",
- "Grammarly\n",
- "company\n",
- "•\n",
- "10 models\n",
- "•\n",
- "102 followers\n",
- "Writer\n",
- "Enterprise\n",
- "company\n",
- "•\n",
- "17 models\n",
- "•\n",
- "186 followers\n",
- "Our Open Source\n",
- "We are building the foundation of ML tooling with the community.\n",
- "Transformers\n",
- "136,571\n",
- "State-of-the-art ML for Pytorch, TensorFlow, and JAX.\n",
- "Diffusers\n",
- "26,740\n",
- "State-of-the-art diffusion models for image and audio generation in PyTorch.\n",
- "Safetensors\n",
- "2,960\n",
- "Simple, safe way to store and distribute neural networks weights safely and quickly.\n",
- "Hub Python Library\n",
- "2,177\n",
- "Client library for the HF Hub: manage repositories from your Python runtime.\n",
- "Tokenizers\n",
- "9,165\n",
- "Fast tokenizers, optimized for both research and production.\n",
- "PEFT\n",
- "16,767\n",
- "Parameter efficient finetuning methods for large models.\n",
- "Transformers.js\n",
- "12,421\n",
- "State-of-the-art Machine Learning for the web. Run Transformers directly in your browser, with no need for a server.\n",
- "timm\n",
- "32,668\n",
- "State-of-the-art computer vision models, layers, optimizers, training/evaluation, and utilities.\n",
- "TRL\n",
- "10,382\n",
- "Train transformer language models with reinforcement learning.\n",
- "Datasets\n",
- "19,378\n",
- "Access and share datasets for computer vision, audio, and NLP tasks.\n",
- "Text Generation Inference\n",
- "9,484\n",
- "Toolkit to serve Large Language Models.\n",
- "Accelerate\n",
- "8,082\n",
- "Easily train and use PyTorch models with multi-GPU, TPU, mixed-precision.\n",
- "System theme\n",
- "Website\n",
- "Models\n",
- "Datasets\n",
- "Spaces\n",
- "Tasks\n",
- "Inference Endpoints\n",
- "HuggingChat\n",
- "Company\n",
- "About\n",
- "Brand assets\n",
- "Terms of service\n",
- "Privacy\n",
- "Jobs\n",
- "Press\n",
- "Resources\n",
- "Learn\n",
- "Documentation\n",
- "Blog\n",
- "Forum\n",
- "Service Status\n",
- "Social\n",
- "GitHub\n",
- "Twitter\n",
- "LinkedIn\n",
- "Discord\n",
- "\n",
- "\n",
- "\n",
- "about page\n",
- "Webpage Title:\n",
- "about (Sergei)\n",
- "Webpage Contents:\n",
- "Hugging Face\n",
- "Models\n",
- "Datasets\n",
- "Spaces\n",
- "Posts\n",
- "Docs\n",
- "Enterprise\n",
- "Pricing\n",
- "Log In\n",
- "Sign Up\n",
- "Sergei\n",
- "about\n",
- "Follow\n",
- "Kalaipriya's profile picture\n",
- "selvivincent's profile picture\n",
- "Renumathi's profile picture\n",
- "3\n",
- "\t\t\t\t\tfollowers\n",
- "·\n",
- "0 following\n",
- "AI & ML interests\n",
- "None yet\n",
- "Organizations\n",
- "None yet\n",
- "models\n",
- "None public yet\n",
- "datasets\n",
- "None public yet\n",
- "System theme\n",
- "Company\n",
- "TOS\n",
- "Privacy\n",
- "About\n",
- "Jobs\n",
- "Website\n",
- "Models\n",
- "Datasets\n",
- "Spaces\n",
- "Pricing\n",
- "Docs\n",
- "\n",
- "\n",
- "\n",
- "careers page\n",
- "Webpage Title:\n",
- "Hugging Face - Current Openings\n",
- "Webpage Contents:\n",
- "\n",
- "\n",
- "\n",
- "\n",
- "blog page\n",
- "Webpage Title:\n",
- "Hugging Face – Blog\n",
- "Webpage Contents:\n",
- "Hugging Face\n",
- "Models\n",
- "Datasets\n",
- "Spaces\n",
- "Posts\n",
- "Docs\n",
- "Enterprise\n",
- "Pricing\n",
- "Log In\n",
- "Sign Up\n",
- "Blog, Articles, and discussions\n",
- "New Article\n",
- "Everything\n",
- "community\n",
- "guide\n",
- "open source collab\n",
- "partnerships\n",
- "research\n",
- "NLP\n",
- "Audio\n",
- "CV\n",
- "RL\n",
- "ethics\n",
- "Diffusion\n",
- "Game Development\n",
- "RLHF\n",
- "Leaderboard\n",
- "Case Studies\n",
- "Evaluating Audio Reasoning with Big Bench Audio\n",
- "By\n",
- "mhillsmith\n",
- "December 20, 2024\n",
- "guest\n",
- "•\n",
- "8\n",
- "Community Articles\n",
- "view all\n",
- "20+ Free and Paid Digital Marketing Strategies to Automate Repetitive Tasks\n",
- "By\n",
- "Markets\n",
- "•\n",
- "about 3 hours ago\n",
- "•\n",
- "1\n",
- "🧠 Tags generation dataset\n",
- "By\n",
- "zino36\n",
- "•\n",
- "about 16 hours ago\n",
- "•\n",
- "1\n",
- "AI Agents in Action: Managing GitHub Issues with KaibanJS\n",
- "By\n",
- "darielnoel\n",
- "•\n",
- "1 day ago\n",
- "**Intelligence Potentiation: An Evolutionary Perspective on AI Agent Designs**\n",
- "By\n",
- "KnutJaegersberg\n",
- "•\n",
- "1 day ago\n",
- "•\n",
- "3\n",
- "MINERVA: A Multi-Agent LLM System for Digital Scam Protection\n",
- "By\n",
- "dcarpintero\n",
- "•\n",
- "2 days ago\n",
- "Mastering Iterative Prompting for Optimized AI Code Generation\n",
- "By\n",
- "luigi12345\n",
- "•\n",
- "3 days ago\n",
- "•\n",
- "1\n",
- "SILMA RAGQA V1.0: A Comprehensive Benchmark for Evaluating LLMs on RAG QA Use-Cases\n",
- "By\n",
- "karimouda\n",
- "•\n",
- "3 days ago\n",
- "•\n",
- "1\n",
- "FuseChat-3.0: Preference Optimization for Implicit Model Fusion\n",
- "By\n",
- "Wanfq\n",
- "•\n",
- "3 days ago\n",
- "•\n",
- "2\n",
- "Tutorial: Quantizing Llama 3+ Models for Efficient Deployment\n",
- "By\n",
- "theeseus-ai\n",
- "•\n",
- "6 days ago\n",
- "•\n",
- "3\n",
- "How to Expand Your AI Music Generations of 30 Seconds to Several Minutes\n",
- "By\n",
- "theeseus-ai\n",
- "•\n",
- "8 days ago\n",
- "•\n",
- "1\n",
- "🇪🇺✍️ EU AI Act: Systemic Risks in the First CoP Draft Comments ✍️🇪🇺\n",
- "By\n",
- "yjernite\n",
- "•\n",
- "9 days ago\n",
- "•\n",
- "11\n",
- "Building an AI-powered search engine from scratch\n",
- "By\n",
- "as-cle-bert\n",
- "•\n",
- "10 days ago\n",
- "•\n",
- "8\n",
- "MotionLCM-V2: Improved Compression Rate for Multi-Latent-Token Diffusion\n",
- "By\n",
- "wxDai\n",
- "•\n",
- "10 days ago\n",
- "•\n",
- "12\n",
- "RLHF 101: A Technical Dive into RLHF\n",
- "By\n",
- "GitBag\n",
- "•\n",
- "10 days ago\n",
- "•\n",
- "4\n",
- "[Talk Arena](https://talkarena.org)\n",
- "By\n",
- "WillHeld\n",
- "•\n",
- "11 days ago\n",
- "•\n",
- "1\n",
- "Multimodal RAG with Colpali, Milvus and VLMs\n",
- "By\n",
- "saumitras\n",
- "•\n",
- "11 days ago\n",
- "•\n",
- "2\n",
- "In Honour of This Year's NeurIPs Test of Time Paper Awardees\n",
- "By\n",
- "Jaward\n",
- "•\n",
- "11 days ago\n",
- "•\n",
- "2\n",
- "Power steering: Squeeze massive power from small LLMs\n",
- "By\n",
- "ucheog\n",
- "•\n",
- "12 days ago\n",
- "•\n",
- "4\n",
- "Exploring the Power of KaibanJS v0.11.0 🚀\n",
- "By\n",
- "darielnoel\n",
- "•\n",
- "12 days ago\n",
- "•\n",
- "1\n",
- "**Building a Custom Retrieval System with Motoko and Node.js**\n",
- "By\n",
- "theeseus-ai\n",
- "•\n",
- "12 days ago\n",
- "•\n",
- "1\n",
- "Finally, a Replacement for BERT: Introducing ModernBERT\n",
- "By\n",
- "bwarner\n",
- "December 19, 2024\n",
- "guest\n",
- "•\n",
- "289\n",
- "Bamba: Inference-Efficient Hybrid Mamba2 Model\n",
- "By\n",
- "Linsong-C\n",
- "December 18, 2024\n",
- "guest\n",
- "•\n",
- "30\n",
- "Welcome the Falcon 3 Family of Open Models!\n",
- "By\n",
- "FalconLLM\n",
- "December 17, 2024\n",
- "•\n",
- "98\n",
- "Benchmarking Language Model Performance on 5th Gen Xeon at GCP\n",
- "By\n",
- "MatrixYao\n",
- "December 17, 2024\n",
- "•\n",
- "2\n",
- "Introducing the Synthetic Data Generator - Build Datasets with Natural Language\n",
- "By\n",
- "davidberenstein1957\n",
- "December 16, 2024\n",
- "•\n",
- "55\n",
- "LeMaterial: an open source initiative to accelerate materials discovery and research\n",
- "By\n",
- "AlexDuvalinho\n",
- "December 10, 2024\n",
- "guest\n",
- "•\n",
- "30\n",
- "Hugging Face models in Amazon Bedrock\n",
- "By\n",
- "pagezyhf\n",
- "December 9, 2024\n",
- "•\n",
- "8\n",
- "Open Preference Dataset for Text-to-Image Generation by the 🤗 Community\n",
- "By\n",
- "davidberenstein1957\n",
- "December 9, 2024\n",
- "•\n",
- "47\n",
- "Welcome PaliGemma 2 – New vision language models by Google\n",
- "By\n",
- "merve\n",
- "December 5, 2024\n",
- "•\n",
- "117\n",
- "“How good are LLMs at fixing their mistakes? A chatbot arena experiment with Keras and TPUs\n",
- "By\n",
- "martin-gorner\n",
- "December 5, 2024\n",
- "•\n",
- "12\n",
- "Rethinking LLM Evaluation with 3C3H: AraGen Benchmark and Leaderboard\n",
- "By\n",
- "alielfilali01\n",
- "December 4, 2024\n",
- "guest\n",
- "•\n",
- "26\n",
- "Investing in Performance: Fine-tune small models with LLM insights - a CFM case study\n",
- "By\n",
- "oahouzi\n",
- "December 3, 2024\n",
- "•\n",
- "25\n",
- "Rearchitecting Hugging Face Uploads and Downloads\n",
- "By\n",
- "port8080\n",
- "November 26, 2024\n",
- "•\n",
- "37\n",
- "SmolVLM - small yet mighty Vision Language Model\n",
- "By\n",
- "andito\n",
- "November 26, 2024\n",
- "•\n",
- "142\n",
- "Previous\n",
- "1\n",
- "2\n",
- "3\n",
- "...\n",
- "36\n",
- "Next\n",
- "Community Articles\n",
- "view all\n",
- "20+ Free and Paid Digital Marketing Strategies to Automate Repetitive Tasks\n",
- "By\n",
- "Markets\n",
- "•\n",
- "about 3 hours ago\n",
- "•\n",
- "1\n",
- "🧠 Tags generation dataset\n",
- "By\n",
- "zino36\n",
- "•\n",
- "about 16 hours ago\n",
- "•\n",
- "1\n",
- "AI Agents in Action: Managing GitHub Issues with KaibanJS\n",
- "By\n",
- "darielnoel\n",
- "•\n",
- "1 day ago\n",
- "**Intelligence Potentiation: An Evolutionary Perspective on AI Agent Designs**\n",
- "By\n",
- "KnutJaegersberg\n",
- "•\n",
- "1 day ago\n",
- "•\n",
- "3\n",
- "MINERVA: A Multi-Agent LLM System for Digital Scam Protection\n",
- "By\n",
- "dcarpintero\n",
- "•\n",
- "2 days ago\n",
- "Mastering Iterative Prompting for Optimized AI Code Generation\n",
- "By\n",
- "luigi12345\n",
- "•\n",
- "3 days ago\n",
- "•\n",
- "1\n",
- "SILMA RAGQA V1.0: A Comprehensive Benchmark for Evaluating LLMs on RAG QA Use-Cases\n",
- "By\n",
- "karimouda\n",
- "•\n",
- "3 days ago\n",
- "•\n",
- "1\n",
- "FuseChat-3.0: Preference Optimization for Implicit Model Fusion\n",
- "By\n",
- "Wanfq\n",
- "•\n",
- "3 days ago\n",
- "•\n",
- "2\n",
- "Tutorial: Quantizing Llama 3+ Models for Efficient Deployment\n",
- "By\n",
- "theeseus-ai\n",
- "•\n",
- "6 days ago\n",
- "•\n",
- "3\n",
- "How to Expand Your AI Music Generations of 30 Seconds to Several Minutes\n",
- "By\n",
- "theeseus-ai\n",
- "•\n",
- "8 days ago\n",
- "•\n",
- "1\n",
- "🇪🇺✍️ EU AI Act: Systemic Risks in the First CoP Draft Comments ✍️🇪🇺\n",
- "By\n",
- "yjernite\n",
- "•\n",
- "9 days ago\n",
- "•\n",
- "11\n",
- "Building an AI-powered search engine from scratch\n",
- "By\n",
- "as-cle-bert\n",
- "•\n",
- "10 days ago\n",
- "•\n",
- "8\n",
- "MotionLCM-V2: Improved Compression Rate for Multi-Latent-Token Diffusion\n",
- "By\n",
- "wxDai\n",
- "•\n",
- "10 days ago\n",
- "•\n",
- "12\n",
- "RLHF 101: A Technical Dive into RLHF\n",
- "By\n",
- "GitBag\n",
- "•\n",
- "10 days ago\n",
- "•\n",
- "4\n",
- "[Talk Arena](https://talkarena.org)\n",
- "By\n",
- "WillHeld\n",
- "•\n",
- "11 days ago\n",
- "•\n",
- "1\n",
- "Multimodal RAG with Colpali, Milvus and VLMs\n",
- "By\n",
- "saumitras\n",
- "•\n",
- "11 days ago\n",
- "•\n",
- "2\n",
- "In Honour of This Year's NeurIPs Test of Time Paper Awardees\n",
- "By\n",
- "Jaward\n",
- "•\n",
- "11 days ago\n",
- "•\n",
- "2\n",
- "Power steering: Squeeze massive power from small LLMs\n",
- "By\n",
- "ucheog\n",
- "•\n",
- "12 days ago\n",
- "•\n",
- "4\n",
- "Exploring the Power of KaibanJS v0.11.0 🚀\n",
- "By\n",
- "darielnoel\n",
- "•\n",
- "12 days ago\n",
- "•\n",
- "1\n",
- "**Building a Custom Retrieval System with Motoko and Node.js**\n",
- "By\n",
- "theeseus-ai\n",
- "•\n",
- "12 days ago\n",
- "•\n",
- "1\n",
- "System theme\n",
- "Company\n",
- "TOS\n",
- "Privacy\n",
- "About\n",
- "Jobs\n",
- "Website\n",
- "Models\n",
- "Datasets\n",
- "Spaces\n",
- "Pricing\n",
- "Docs\n",
- "\n",
- "\n",
- "\n",
- "company page\n",
- "Webpage Title:\n",
- "huggingface (Hugging Face)\n",
- "Webpage Contents:\n",
- "Hugging Face\n",
- "Models\n",
- "Datasets\n",
- "Spaces\n",
- "Posts\n",
- "Docs\n",
- "Enterprise\n",
- "Pricing\n",
- "Log In\n",
- "Sign Up\n",
- "Hugging Face\n",
- "Enterprise\n",
- "company\n",
- "Verified\n",
- "https://huggingface.co\n",
- "huggingface\n",
- "huggingface\n",
- "Activity Feed\n",
- "Follow\n",
- "8,542\n",
- "AI & ML interests\n",
- "The AI community building the future.\n",
- "Recent Activity\n",
- "IAMJB\n",
- "updated\n",
- "a dataset\n",
- "9 minutes ago\n",
- "huggingface/community-science-paper-v2\n",
- "IAMJB\n",
- "updated\n",
- "a dataset\n",
- "about 6 hours ago\n",
- "huggingface/paper-central-data\n",
- "fdaudens\n",
- "updated\n",
- "a Space\n",
- "about 19 hours ago\n",
- "huggingface/open-source-ai-year-in-review-2024\n",
- "View all activity\n",
- "Team members\n",
- "224\n",
- "+190\n",
- "+177\n",
- "+156\n",
- "+146\n",
- "+126\n",
- "Organization Card\n",
- "Community\n",
- "About org cards\n",
- "👋 Hi!\n",
- "We are on a mission to democratize\n",
- "good\n",
- "machine learning, one commit at a time.\n",
- "If that sounds like something you should be doing, why don't you\n",
- "join us\n",
- "!\n",
- "For press enquiries, you can\n",
- "✉️ contact our team here\n",
- ".\n",
- "Collections\n",
- "1\n",
- "DistilBERT release\n",
- "Original DistilBERT model, checkpoints obtained from using teacher-student learning from the original BERT checkpoints.\n",
- "distilbert/distilbert-base-cased\n",
- "Fill-Mask\n",
- "•\n",
- "Updated\n",
- "May 6\n",
- "•\n",
- "358k\n",
- "•\n",
- "35\n",
- "distilbert/distilbert-base-uncased\n",
- "Fill-Mask\n",
- "•\n",
- "Updated\n",
- "May 6\n",
- "•\n",
- "14.8M\n",
- "•\n",
- "577\n",
- "distilbert/distilbert-base-multilingual-cased\n",
- "Fill-Mask\n",
- "•\n",
- "Updated\n",
- "May 6\n",
- "•\n",
- "472k\n",
- "•\n",
- "148\n",
- "distilbert/distilbert-base-uncased-finetuned-sst-2-english\n",
- "Text Classification\n",
- "•\n",
- "Updated\n",
- "Dec 19, 2023\n",
- "•\n",
- "6.96M\n",
- "•\n",
- "•\n",
- "645\n",
- "spaces\n",
- "23\n",
- "Sort: \n",
- "\t\tRecently updated\n",
- "pinned\n",
- "Running\n",
- "52\n",
- "📈\n",
- "Number Tokenization Blog\n",
- "Running\n",
- "395\n",
- "😻\n",
- "Open Source Ai Year In Review 2024\n",
- "What happened in open-source AI this year, and what’s next?\n",
- "Running\n",
- "42\n",
- "🔋\n",
- "Inference Playground\n",
- "Running\n",
- "196\n",
- "⚡\n",
- "paper-central\n",
- "Running\n",
- "on\n",
- "TPU v5e\n",
- "6\n",
- "💬\n",
- "Keras Chatbot Battle\n",
- "Running\n",
- "101\n",
- "⚡\n",
- "Modelcard Creator\n",
- "Expand 23\n",
- "\t\t\t\t\t\t\tspaces\n",
- "models\n",
- "18\n",
- "Sort: \n",
- "\t\tRecently updated\n",
- "huggingface/test-gating-group-2\n",
- "Updated\n",
- "4 days ago\n",
- "huggingface/test-gating-group-1\n",
- "Updated\n",
- "4 days ago\n",
- "huggingface/timesfm-tourism-monthly\n",
- "Updated\n",
- "12 days ago\n",
- "•\n",
- "29\n",
- "•\n",
- "1\n",
- "huggingface/CodeBERTa-language-id\n",
- "Text Classification\n",
- "•\n",
- "Updated\n",
- "Mar 29\n",
- "•\n",
- "1.14k\n",
- "•\n",
- "54\n",
- "huggingface/falcon-40b-gptq\n",
- "Text Generation\n",
- "•\n",
- "Updated\n",
- "Jun 14, 2023\n",
- "•\n",
- "19\n",
- "•\n",
- "12\n",
- "huggingface/autoformer-tourism-monthly\n",
- "Updated\n",
- "May 24, 2023\n",
- "•\n",
- "1.5k\n",
- "•\n",
- "9\n",
- "huggingface/distilbert-base-uncased-finetuned-mnli\n",
- "Text Classification\n",
- "•\n",
- "Updated\n",
- "Mar 22, 2023\n",
- "•\n",
- "1.37k\n",
- "•\n",
- "2\n",
- "huggingface/informer-tourism-monthly\n",
- "Updated\n",
- "Feb 24, 2023\n",
- "•\n",
- "1.12k\n",
- "•\n",
- "5\n",
- "huggingface/time-series-transformer-tourism-monthly\n",
- "Updated\n",
- "Feb 23, 2023\n",
- "•\n",
- "2.16k\n",
- "•\n",
- "18\n",
- "huggingface/the-no-branch-repo\n",
- "Text-to-Image\n",
- "•\n",
- "Updated\n",
- "Feb 10, 2023\n",
- "•\n",
- "7\n",
- "•\n",
- "3\n",
- "Expand 18\n",
- "\t\t\t\t\t\t\tmodels\n",
- "datasets\n",
- "31\n",
- "Sort: \n",
- "\t\tRecently updated\n",
- "huggingface/community-science-paper-v2\n",
- "Viewer\n",
- "•\n",
- "Updated\n",
- "9 minutes ago\n",
- "•\n",
- "5.03k\n",
- "•\n",
- "404\n",
- "•\n",
- "7\n",
- "huggingface/paper-central-data\n",
- "Viewer\n",
- "•\n",
- "Updated\n",
- "about 6 hours ago\n",
- "•\n",
- "119k\n",
- "•\n",
- "553\n",
- "•\n",
- "8\n",
- "huggingface/documentation-images\n",
- "Viewer\n",
- "•\n",
- "Updated\n",
- "1 day ago\n",
- "•\n",
- "44\n",
- "•\n",
- "2.43M\n",
- "•\n",
- "43\n",
- "huggingface/transformers-metadata\n",
- "Viewer\n",
- "•\n",
- "Updated\n",
- "2 days ago\n",
- "•\n",
- "1.52k\n",
- "•\n",
- "559\n",
- "•\n",
- "14\n",
- "huggingface/diffusers-metadata\n",
- "Viewer\n",
- "•\n",
- "Updated\n",
- "2 days ago\n",
- "•\n",
- "62\n",
- "•\n",
- "442\n",
- "•\n",
- "4\n",
- "huggingface/policy-docs\n",
- "Updated\n",
- "3 days ago\n",
- "•\n",
- "898\n",
- "•\n",
- "6\n",
- "huggingface/my-distiset-3f5a230e\n",
- "Updated\n",
- "30 days ago\n",
- "•\n",
- "17\n",
- "huggingface/cookbook-images\n",
- "Viewer\n",
- "•\n",
- "Updated\n",
- "Nov 14\n",
- "•\n",
- "1\n",
- "•\n",
- "40.1k\n",
- "•\n",
- "6\n",
- "huggingface/vllm-metadata\n",
- "Updated\n",
- "Oct 8\n",
- "•\n",
- "12\n",
- "huggingface/paper-central-data-2\n",
- "Viewer\n",
- "•\n",
- "Updated\n",
- "Oct 4\n",
- "•\n",
- "58.3k\n",
- "•\n",
- "68\n",
- "•\n",
- "2\n",
- "Expand 31\n",
- "\t\t\t\t\t\t\tdatasets\n",
- "System theme\n",
- "Company\n",
- "TOS\n",
- "Privacy\n",
- "About\n",
- "Jobs\n",
- "Website\n",
- "Models\n",
- "Datasets\n",
- "Spaces\n",
- "Pricing\n",
- "Docs\n",
- "\n",
- "\n",
- "\n",
- "community discussions\n",
- "Webpage Title:\n",
- "Hugging Face Forums - Hugging Face Community Discussion\n",
- "Webpage Contents:\n",
- "Loading\n",
- "Hugging Face Forums\n",
- "Topic\n",
- "Replies\n",
- "Views\n",
- "Activity\n",
- "List of `size_categories`\n",
- "🤗Datasets\n",
- "3\n",
- "5\n",
- "December 21, 2024\n",
- "Feature request - maintain list of favorite hf pages reachable from my hom epage\n",
- "Site Feedback\n",
- "4\n",
- "886\n",
- "December 21, 2024\n",
- "404 error on carbon emission calculation\n",
- "Site Feedback\n",
- "1\n",
- "7\n",
- "December 21, 2024\n",
- "Cannot connect gRPC Server Hosted on HuggingFace Spaces\n",
- "Spaces\n",
- "0\n",
- "8\n",
- "December 21, 2024\n",
- "Hide system prompt or system instruction\n",
- "Beginners\n",
- "3\n",
- "15\n",
- "December 21, 2024\n",
- "ModuleNotFoundError: No module named 'huggingface_hub.inference._types'\n",
- "🤗Hub\n",
- "0\n",
- "5\n",
- "December 21, 2024\n",
- "Understanding State Management with Gradio and LangGraph\n",
- "Beginners\n",
- "1\n",
- "11\n",
- "December 21, 2024\n",
- "Dimension problem\n",
- "Beginners\n",
- "25\n",
- "21\n",
- "December 21, 2024\n",
- "Fine-tuning whisper on sound-event-detection dataset\n",
- "🤗Transformers\n",
- "0\n",
- "4\n",
- "December 20, 2024\n",
- "Model that can generate both text and image as output\n",
- "Research\n",
- "4\n",
- "42\n",
- "December 21, 2024\n",
- "Lm studio and Chat ui doesn't work with module\n",
- "Beginners\n",
- "11\n",
- "33\n",
- "December 21, 2024\n",
- "Inference API Context Window and TOS\n",
- "Beginners\n",
- "0\n",
- "12\n",
- "December 20, 2024\n",
- "Talkie AI got remove from app store -any alternative ai chat?\n",
- "Beginners\n",
- "4\n",
- "1151\n",
- "December 18, 2024\n",
- "Inference Text Generation API issue\n",
- "Intermediate\n",
- "0\n",
- "7\n",
- "December 20, 2024\n",
- "From Pandas Dataframe to Huggingface Dataset\n",
- "Beginners\n",
- "9\n",
- "60459\n",
- "December 20, 2024\n",
- "\"Load Diffusion Model\" and \"Unet Loader (GGUF)\" null/undefined\n",
- "Beginners\n",
- "6\n",
- "200\n",
- "December 20, 2024\n",
- "Timeout Issue with DeepSpeed on Multiple GPUs\n",
- "DeepSpeed\n",
- "0\n",
- "8\n",
- "December 20, 2024\n",
- "Spaces dedicated gpu limit\n",
- "Spaces\n",
- "1\n",
- "14\n",
- "December 19, 2024\n",
- "Chatbot PDF - using flan-t5-large model\n",
- "Models\n",
- "0\n",
- "7\n",
- "December 20, 2024\n",
- "Gateway Problem\n",
- "Beginners\n",
- "0\n",
- "8\n",
- "December 20, 2024\n",
- "RT-DETR attention map dimension - PekingU/rtdetr_r50vd\n",
- "Models\n",
- "0\n",
- "5\n",
- "December 20, 2024\n",
- "Extending the tokenizer affects model generation\n",
- "Intermediate\n",
- "3\n",
- "9\n",
- "December 19, 2024\n",
- "How to Ensure Each Process Reads Its Own Dataset and Trains Correctly When Using Trainer?\n",
- "🤗Transformers\n",
- "0\n",
- "5\n",
- "December 20, 2024\n",
- "Can't save the tensorflow model of nvidia/mit-b5\n",
- "Intermediate\n",
- "3\n",
- "127\n",
- "December 19, 2024\n",
- "# Audio course Unit 4. sample code not working. Can anyone check for me? Thanks\n",
- "Course\n",
- "0\n",
- "6\n",
- "December 20, 2024\n",
- "Host Models on Hugging face and Perform Inference on Hugging Face Infrastructure\n",
- "Beginners\n",
- "0\n",
- "6\n",
- "December 20, 2024\n",
- "Torchrun, trainer, dataset setup\n",
- "Intermediate\n",
- "4\n",
- "71\n",
- "December 20, 2024\n",
- "Training fails on multiple GPUs with RuntimeError 'chuck expects at least a 1-dimensional array'\n",
- "Beginners\n",
- "2\n",
- "108\n",
- "December 19, 2024\n",
- "How do you know whether the model is merged and uploaded?\n",
- "Intermediate\n",
- "0\n",
- "11\n",
- "December 20, 2024\n",
- "Qwen based AI assistant randomly having an absolute, utter, complete 'mental breakdowns'?? (Inference API)\n",
- "🤗Transformers\n",
- "2\n",
- "23\n",
- "December 17, 2024\n",
- "next page →\n",
- "Home\n",
- "Categories\n",
- "Guidelines\n",
- "Terms of Service\n",
- "Privacy Policy\n",
- "Powered by\n",
- "Discourse\n",
- ", best viewed with JavaScript enabled\n",
- "\n",
- "\n",
- "\n",
- "GitHub page\n",
- "Webpage Title:\n",
- "Hugging Face · GitHub\n",
- "Webpage Contents:\n",
- "Skip to content\n",
- "Navigation Menu\n",
- "Toggle navigation\n",
- "Sign in\n",
- "huggingface\n",
- "Product\n",
- "GitHub Copilot\n",
- "Write better code with AI\n",
- "Security\n",
- "Find and fix vulnerabilities\n",
- "Actions\n",
- "Automate any workflow\n",
- "Codespaces\n",
- "Instant dev environments\n",
- "Issues\n",
- "Plan and track work\n",
- "Code Review\n",
- "Manage code changes\n",
- "Discussions\n",
- "Collaborate outside of code\n",
- "Code Search\n",
- "Find more, search less\n",
- "Explore\n",
- "All features\n",
- "Documentation\n",
- "GitHub Skills\n",
- "Blog\n",
- "Solutions\n",
- "By company size\n",
- "Enterprises\n",
- "Small and medium teams\n",
- "Startups\n",
- "By use case\n",
- "DevSecOps\n",
- "DevOps\n",
- "CI/CD\n",
- "View all use cases\n",
- "By industry\n",
- "Healthcare\n",
- "Financial services\n",
- "Manufacturing\n",
- "Government\n",
- "View all industries\n",
- "View all solutions\n",
- "Resources\n",
- "Topics\n",
- "AI\n",
- "DevOps\n",
- "Security\n",
- "Software Development\n",
- "View all\n",
- "Explore\n",
- "Learning Pathways\n",
- "White papers, Ebooks, Webinars\n",
- "Customer Stories\n",
- "Partners\n",
- "Executive Insights\n",
- "Open Source\n",
- "GitHub Sponsors\n",
- "Fund open source developers\n",
- "The ReadME Project\n",
- "GitHub community articles\n",
- "Repositories\n",
- "Topics\n",
- "Trending\n",
- "Collections\n",
- "Enterprise\n",
- "Enterprise platform\n",
- "AI-powered developer platform\n",
- "Available add-ons\n",
- "Advanced Security\n",
- "Enterprise-grade security features\n",
- "GitHub Copilot\n",
- "Enterprise-grade AI features\n",
- "Premium Support\n",
- "Enterprise-grade 24/7 support\n",
- "Pricing\n",
- "Search or jump to...\n",
- "Search code, repositories, users, issues, pull requests...\n",
- "Search\n",
- "Clear\n",
- "Search syntax tips\n",
- "Provide feedback\n",
- "We read every piece of feedback, and take your input very seriously.\n",
- "Include my email address so I can be contacted\n",
- "Cancel\n",
- "Submit feedback\n",
- "Saved searches\n",
- "Use saved searches to filter your results more quickly\n",
- "Cancel\n",
- "Create saved search\n",
- "Sign in\n",
- "Sign up\n",
- "Reseting focus\n",
- "You signed in with another tab or window.\n",
- "Reload\n",
- "to refresh your session.\n",
- "You signed out in another tab or window.\n",
- "Reload\n",
- "to refresh your session.\n",
- "You switched accounts on another tab or window.\n",
- "Reload\n",
- "to refresh your session.\n",
- "Dismiss alert\n",
- "Hugging Face\n",
- "The AI community building the future.\n",
- "Verified\n",
- "We've verified that the organization\n",
- "huggingface\n",
- "controls the domain:\n",
- "huggingface.co\n",
- "Learn more about verified organizations\n",
- "40.1k\n",
- "followers\n",
- "NYC + Paris\n",
- "https://huggingface.co/\n",
- "X\n",
- "@huggingface\n",
- "Overview\n",
- "Repositories\n",
- "Projects\n",
- "Packages\n",
- "People\n",
- "Sponsoring\n",
- "0\n",
- "More\n",
- "Overview\n",
- "Repositories\n",
- "Projects\n",
- "Packages\n",
- "People\n",
- "Sponsoring\n",
- "Pinned\n",
- "Loading\n",
- "transformers\n",
- "transformers\n",
- "Public\n",
- "🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.\n",
- "Python\n",
- "137k\n",
- "27.3k\n",
- "diffusers\n",
- "diffusers\n",
- "Public\n",
- "🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.\n",
- "Python\n",
- "26.7k\n",
- "5.5k\n",
- "datasets\n",
- "datasets\n",
- "Public\n",
- "[... remainder of scraped GitHub organization page, Twitter (x.com) page, and LinkedIn company page omitted ...]\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"print(get_all_details(\"https://huggingface.co\"))"
]
@@ -2674,359 +300,10 @@
},
{
"cell_type": "code",
- "execution_count": 16,
+ "execution_count": null,
"id": "cd909e0b-1312-4ce2-a553-821e795d7572",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.co/'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'blog', 'url': 'https://huggingface.co/blog'}, {'type': 'company page', 'url': 'https://huggingface.co/enterprise'}]}\n",
- "You are looking at a company called: HuggingFace\n",
- "Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\n",
- "Landing page:\n",
- "[... scraped contents of the Hugging Face landing page and linked 'about' page omitted ...]\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"print(get_brochure_user_prompt(\"HuggingFace\", \"https://huggingface.co\"))"
]
@@ -3052,103 +329,10 @@
},
{
"cell_type": "code",
- "execution_count": 18,
+ "execution_count": null,
"id": "e093444a-9407-42ae-924a-145730591a39",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Found links: {'links': [{'type': 'home page', 'url': 'https://huggingface.com/'}, {'type': 'about page', 'url': 'https://huggingface.com/huggingface'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'enterprise page', 'url': 'https://huggingface.com/enterprise'}, {'type': 'pricing page', 'url': 'https://huggingface.com/pricing'}, {'type': 'blog page', 'url': 'https://huggingface.com/blog'}, {'type': 'community page', 'url': 'https://discuss.huggingface.co'}, {'type': 'GitHub page', 'url': 'https://github.com/huggingface'}, {'type': 'Twitter page', 'url': 'https://twitter.com/huggingface'}, {'type': 'LinkedIn page', 'url': 'https://www.linkedin.com/company/huggingface/'}]}\n"
- ]
- },
- {
- "data": {
- "text/markdown": [
- "[... generated company brochure in markdown omitted ...]\n"
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"create_brochure(\"HuggingFace\", \"https://huggingface.com\")"
]
@@ -3191,178 +375,20 @@
},
{
"cell_type": "code",
- "execution_count": 20,
+ "execution_count": null,
"id": "56bf0ae3-ee9d-4a72-9cd6-edcac67ceb6d",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.co'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'enterprise page', 'url': 'https://huggingface.co/enterprise'}, {'type': 'blog page', 'url': 'https://huggingface.co/blog'}, {'type': 'community discussion', 'url': 'https://discuss.huggingface.co'}, {'type': 'GitHub page', 'url': 'https://github.com/huggingface'}, {'type': 'Twitter page', 'url': 'https://twitter.com/huggingface'}, {'type': 'LinkedIn page', 'url': 'https://www.linkedin.com/company/huggingface/'}]}\n"
- ]
- },
- {
- "data": {
- "text/markdown": [
- "[... generated brochure markdown omitted ...]\n",
- "- Data Scientists\n",
- "- Software Developers\n",
- "- Community Managers\n",
- "\n",
- "---\n",
- "\n",
- "### 💡 Join Us\n",
- "\n",
- "Are you ready to be part of a revolution in AI? **[Sign Up](#)** today to explore the possibilities with Hugging Face or **[Log In](#)** if you’re already part of our community.\n",
- "\n",
- "Let’s build the future of AI together!\n",
- "\n",
- "---\n",
- "\n",
- "*For inquiries about our enterprise solutions, pricing, or community involvement, feel free to reach out through our website.* \n",
- "\n",
- "**Connect with us:** \n",
- "[Twitter](#) | [LinkedIn](#) | [GitHub](#) | [Forum](#)"
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"stream_brochure(\"HuggingFace\", \"https://huggingface.co\")"
]
},
{
"cell_type": "code",
- "execution_count": 21,
+ "execution_count": null,
"id": "87bd1188",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Found links: {'links': [{'type': 'homepage', 'url': 'https://huggingface.co/'}, {'type': 'about page', 'url': 'https://huggingface.co/huggingface'}, {'type': 'enterprise page', 'url': 'https://huggingface.co/enterprise'}, {'type': 'pricing page', 'url': 'https://huggingface.co/pricing'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'blog page', 'url': 'https://huggingface.co/blog'}, {'type': 'discussion forum', 'url': 'https://discuss.huggingface.co'}, {'type': 'GitHub page', 'url': 'https://github.com/huggingface'}, {'type': 'Twitter page', 'url': 'https://twitter.com/huggingface'}, {'type': 'LinkedIn page', 'url': 'https://www.linkedin.com/company/huggingface/'}]}\n"
- ]
- },
- {
- "data": {
- "text/markdown": [
- "\n",
- "# Hugging Face: The AI Community Building the Future\n",
- "\n",
- "Welcome to Hugging Face, the leading collaborative platform for the machine learning community. With a robust environment designed for creating, discovering, and deploying machine learning models, datasets, and applications, Hugging Face is at the frontier of artificial intelligence innovation. \n",
- "\n",
- "---\n",
- "\n",
- "## About Us\n",
- "At Hugging Face, we believe in the power of collaboration. Our platform enables users to work together on projects that range from machine-learning models to expansive datasets. With over 400,000 models and 100,000 datasets available, we provide the tools necessary to help researchers, developers, and organizations accelerate their machine learning projects.\n",
- "\n",
- "- **Trending Models This Week:**\n",
- " - **IamCreateAI/Ruyi-Mini-7B** | 8.17k | Updated 4 days ago\n",
- " - **Datou1111/shou_xin** | 28.3k | Updated 12 days ago\n",
- " - **meta-llama/Llama-3.3-70B-Instruct** | 236k | Updated 11 days ago\n",
- "\n",
- "Explore our community-driven approach that integrates state-of-the-art tools like Transformers, DiffUsers, and PEFT (Parameter Efficient Finetuning).\n",
- "\n",
- "---\n",
- "\n",
- "## Company Culture\n",
- "Hugging Face fosters a vibrant and inclusive company culture, aiming to empower individuals and teams through transparent practices and open-source methodologies. We believe in “AI for everyone,” promoting accessibility and co-creation within the AI community. \n",
- "\n",
- "### Why Work With Us?\n",
- "- **Collaborative Environment**: Join a diverse team of experts and enthusiasts dedicated to pushing the boundaries of AI and machine learning.\n",
- "- **Open Source Commitment**: Contribute to freely accessible tools that serve the global community.\n",
- "- **Flexible Work**: We support remote work and provide a range of job opportunities tailored to different areas of expertise.\n",
- "\n",
- "---\n",
- "\n",
- "## Customers & Organizations\n",
- "Over 50,000 organizations utilize Hugging Face in various industries, including notable names such as:\n",
- "- **Meta AI**\n",
- "- **Amazon Web Services**\n",
- "- **Google**\n",
- "- **Microsoft**\n",
- "\n",
- "Our enterprise solutions offer seamless integration with advanced security features, making us a trusted partner for both startups and established corporations.\n",
- "\n",
- "---\n",
- "\n",
- "## Careers at Hugging Face\n",
- "We are always on the lookout for passionate individuals to join our team. Explore our open positions in areas such as software development, research, marketing, and customer support.\n",
- "\n",
- "- **Open Positions**: \n",
- " - Machine Learning Engineer\n",
- " - Data Scientist\n",
- " - Community Manager\n",
- "\n",
- "Join us in shaping the future of AI. \n",
- "\n",
- "**[Explore Careers](#)**\n",
- "\n",
- "---\n",
- "\n",
- "## Join the Hugging Face Community\n",
- "Whether you're looking to develop cutting-edge AI models, contribute to open-source projects, or advance your career in this dynamic field, Hugging Face is your gateway to innovation.\n",
- "\n",
- "**[Learn More](#)** | **[Sign Up Today](#)**\n",
- "\n",
- "Together, let's build the future of AI!\n",
- "\n"
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"stream_brochure(\"HuggingFace\", \"https://huggingface.co\")"
]
@@ -3453,92 +479,10 @@
},
{
"cell_type": "code",
- "execution_count": 25,
+ "execution_count": null,
"id": "744bfc05",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Found links: {'links': [{'type': 'about page', 'url': 'https://openai.com/about'}, {'type': 'careers page', 'url': 'https://openai.com/careers'}]}\n"
- ]
- },
- {
- "data": {
- "text/markdown": [
- "It seems that the landing and related pages for OpenAI did not yield any specific content. However, I can create a creative and engaging brochure based on general knowledge about OpenAI. Here's a humorous and entertaining brochure written in Urdu:\n",
- "\n",
- "\n",
- "# 🎉 اوپن اے آئی: ہوشیار robots کا دوست! 🎉\n",
- "\n",
- "---\n",
- "\n",
- "## About Us - ہمارے بارے میں:\n",
- "\n",
- "ہماری کمپنی اوپن اے آئی، 2015 میں بنی۔ ہم نے سوچا کہ \"کیوں نہ ایک ایسا انٹیلیجنٹ سسٹم بنائیں جو انسانوں کی مدد کرے؟\" تو ہم نے کام شروع کیا اور دیکھیں! ہم نے ایک نئی دنیا کی بنیاد رکھی۔ ہماری مشن ہے \"تمام لوگوں کے لئے AI کی طاقت کو قابل رسائی بنانا\"۔ آفاقی طاقت کو ڈھونڈتے ہیں، جیسے آپ کے فرج میں چھپے ہوئے برگر!\n",
- "\n",
- "---\n",
- "\n",
- "## What We Offer - ہم کیا پیش کرتے ہیں:\n",
- "\n",
- "ہم AI کے شوقین ہیں! 🤖 ہم مختلف پروڈکٹس اور سروسز پیش کرتے ہیں، جیسے کہ:\n",
- "\n",
- "- **GPT-3**: آپ کے سوالات کے جواب دینے کے لئے تیار!\n",
- "- **تخلیقی تحریر**: جنریٹنگ آئیڈیاز جب آپ کی تخلیقیت بریک ہو جائے!\n",
- "- **AI ٹولز**: آپ کی زندگی کو مزید آسان بنانے کے لئے!\n",
- "\n",
- "ہمارے صارفین کہتے ہیں، \"اپنی زندگی میں اوپن اے آئی کی ضرورت ہے، جیسے موٹیویشن کی ضرورت ہوتی ہے!\"\n",
- "\n",
- "---\n",
- "\n",
- "## Our Culture - ہماری ثقافت:\n",
- "\n",
- "ہماری کمپنی میں، ہمارا بنیادی اصول ہے: \"پیار اور انوکھا خیالات!\" 🤗 ہم نے انوکھے، تعاون پر مبنی ماحول کی بنیاد رکھی، جہاں ہر کوئی اپنی بات کہہ سکتا ہے، یہاں تک کہ ونڈو کے باہر کھڑا درخت بھی! ہم کمیونٹی کی خدمت کیلئے ہمیشہ تیار رہتے ہیں، وہ بھی سوشل میڈٰیا پر۔\n",
- "\n",
- "---\n",
- "\n",
- "## Who We Serve - ہم کس کی خدمت کرتے ہیں:\n",
- "\n",
- "ہم ہر اُس شخص کی خدمت کرتے ہیں جو سوپر ہیرومنٹ کی تلاش میں ہے۔ ہمارے وزیٹر، محققین، اور ٹیکنالوجی کے شوقین ہیں، اور ہمارے بہترین کلائنٹس include شامل ہیں \"بڑا دماغی جیسا سوچنے والے!\" 💡\n",
- "\n",
- "---\n",
- "\n",
- "## Join Us - ہمارے ساتھ شامل ہوں:\n",
- "\n",
- "آپ کو ترقی کی تلاش ہے؟ تو ہماری ٹیم کا حصہ بنیں! 🚀 ہم ہمیشہ نئے امریکی جاموں کی تلاش میں ہیں۔ آپ کو ٹریننگ، ترقی کے مواقع، اور سہولیات فراہم کریں گے۔\n",
- "\n",
- "📩 **درخواست دینے کے مرحلے:** ہماری ویب سائٹ پر جائیں، کیونکہ ہم جانتے ہیں کہ آپ کا خواب آپ کے قریب ہے!\n",
- "\n",
- "---\n",
- "\n",
- "## Contact Us - ہم سے رابطہ کریں:\n",
- "\n",
- "**پتہ:** نیٹ ورک کی دنیا \n",
- "**فون:** 123-456-789 \n",
- "**ایمیل:** info@openai.com \n",
- "**سوشل میڈیا:** [فیس بک](#) | [ٹویٹر](#) | [لنکڈ ان](#) \n",
- "**ویب سائٹ:** [openai.com](#)\n",
- "\n",
- "---\n",
- "\n",
- "## Closing Note - اختتامی نوٹ:\n",
- "\n",
- "ہماری کمپنی اوپن اے آئی کی طرف سے ایک شکریہ! اے آئی کی دنیا میں قدم رکھنے کا وقت آ گیا ہے! \n",
- "\n",
- "🖱️ **آج ہی رابطہ کریں یا ہماری ویب سائٹ کا دورہ کریں!**\n",
- "\n",
- "\n",
- "**نوٹ:** واقعی ویب سائٹ کے مخصوص روابط، ای میل اور نمبر تخلیقی مقصد کے لئے ہیں۔ اس کو حقیقی معلومات کے ساتھ تبدیل کیا جا سکتا ہے۔"
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"\n",
"multi_lingual_stream_brochure(\"OpenAI\", \"https://openai.com/\", \"Urdu\", \"humorous, entertaining, jokey\")"
@@ -3551,14 +495,6 @@
"metadata": {},
"outputs": [],
"source": []
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "4fb86dc6",
- "metadata": {},
- "outputs": [],
- "source": []
}
],
"metadata": {
From db429806bc15946513cefa605bfcb7885b4a3571 Mon Sep 17 00:00:00 2001
From: codenigma1
Date: Sun, 22 Dec 2024 01:22:20 +1100
Subject: [PATCH 17/29] Day 5 Challenge one with multilingual along with
multi-tone
---
.../day5-multi-lingual-desire-format.ipynb | 8 --------
1 file changed, 8 deletions(-)
diff --git a/week1/community-contributions/day5-multi-lingual-desire-format.ipynb b/week1/community-contributions/day5-multi-lingual-desire-format.ipynb
index b17c402..17e1094 100644
--- a/week1/community-contributions/day5-multi-lingual-desire-format.ipynb
+++ b/week1/community-contributions/day5-multi-lingual-desire-format.ipynb
@@ -487,14 +487,6 @@
"\n",
"multi_lingual_stream_brochure(\"OpenAI\", \"https://openai.com/\", \"Urdu\", \"humorous, entertaining, jokey\")"
]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "b6f1e8d9",
- "metadata": {},
- "outputs": [],
- "source": []
}
],
"metadata": {
From 7b38868ddee956bb89b85137e71e3df5820c4e14 Mon Sep 17 00:00:00 2001
From: codenigma1
Date: Sun, 22 Dec 2024 01:38:47 +1100
Subject: [PATCH 18/29] Rename accidentally
---
...-format.ipynb => day5-multi-lingual-desire-format.ipynb.ipynb} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename week1/community-contributions/{day5-multi-lingual-desire-format.ipynb => day5-multi-lingual-desire-format.ipynb.ipynb} (100%)
diff --git a/week1/community-contributions/day5-multi-lingual-desire-format.ipynb b/week1/community-contributions/day5-multi-lingual-desire-format.ipynb.ipynb
similarity index 100%
rename from week1/community-contributions/day5-multi-lingual-desire-format.ipynb
rename to week1/community-contributions/day5-multi-lingual-desire-format.ipynb.ipynb
From 8b52eab336c56c64ef96eef44f841d487cb6563f Mon Sep 17 00:00:00 2001
From: codenigma1
Date: Sun, 22 Dec 2024 01:43:47 +1100
Subject: [PATCH 19/29] Renamed day5-multi-lingual-desire-format.ipynb to
day5-MultiLingual-MultiTone.ipynb
---
...esire-format.ipynb.ipynb => day5-MultiLingual-MultiTone.ipynb} | 0
1 file changed, 0 insertions(+), 0 deletions(-)
rename week1/community-contributions/{day5-multi-lingual-desire-format.ipynb.ipynb => day5-MultiLingual-MultiTone.ipynb} (100%)
diff --git a/week1/community-contributions/day5-multi-lingual-desire-format.ipynb.ipynb b/week1/community-contributions/day5-MultiLingual-MultiTone.ipynb
similarity index 100%
rename from week1/community-contributions/day5-multi-lingual-desire-format.ipynb.ipynb
rename to week1/community-contributions/day5-MultiLingual-MultiTone.ipynb
From 7446459510a6154c88c08f79739b87b050aa6245 Mon Sep 17 00:00:00 2001
From: codenigma1
Date: Sun, 22 Dec 2024 01:47:59 +1100
Subject: [PATCH 20/29] Update the code and make final changes
---
week1/community-contributions/day5-MultiLingual-MultiTone.ipynb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/week1/community-contributions/day5-MultiLingual-MultiTone.ipynb b/week1/community-contributions/day5-MultiLingual-MultiTone.ipynb
index 17e1094..6e07f60 100644
--- a/week1/community-contributions/day5-MultiLingual-MultiTone.ipynb
+++ b/week1/community-contributions/day5-MultiLingual-MultiTone.ipynb
@@ -398,7 +398,7 @@
"id": "a9e7375d",
"metadata": {},
"source": [
- "## **Multi-lingual with Desire Format**\n"
+    "## **Multi-lingual with Multi-Tone in Desired Format**"
]
},
{
From a18900a59c8f78986d9714c3782ad2f56f6ec4d1 Mon Sep 17 00:00:00 2001
From: codenigma1
Date: Sun, 22 Dec 2024 12:04:51 +1100
Subject: [PATCH 21/29] First solution gets a response from your favorite
LLM; the second adds a collaborative approach of two LLMs, a good specimen
for upcoming students to think about and refine further
---
...eek1-collaborative-approach-two-llms.ipynb | 332 ++++++++++++++++++
1 file changed, 332 insertions(+)
create mode 100644 week1/community-contributions/week1-collaborative-approach-two-llms.ipynb
diff --git a/week1/community-contributions/week1-collaborative-approach-two-llms.ipynb b/week1/community-contributions/week1-collaborative-approach-two-llms.ipynb
new file mode 100644
index 0000000..87b820a
--- /dev/null
+++ b/week1/community-contributions/week1-collaborative-approach-two-llms.ipynb
@@ -0,0 +1,332 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5",
+ "metadata": {},
+ "source": [
+ "# **End of week 1 exercise**\n",
+ "\n",
+    "To demonstrate your familiarity with the OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
+ "and responds with an explanation. This is a tool that you will be able to use yourself during the course!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c70e5ab1",
+ "metadata": {},
+ "source": [
+ "## **1. Get a response from your favorite AI Tutor** "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "c1070317-3ed9-4659-abe3-828943230e03",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "from openai import OpenAI\n",
+ "import json\n",
+ "from dotenv import load_dotenv\n",
+ "from IPython.display import Markdown, display, update_display"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "65dace69",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "load_dotenv()\n",
+ "api_key = os.getenv('OPENAI_API_KEY')\n",
+ "\n",
+ "if api_key and api_key.startswith('sk-proj-') and len(api_key) > 10:\n",
+ " print(\"API key looks good so far\")\n",
+ "else:\n",
+ " print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "4a456906-915a-4bfd-bb9d-57e505c5093f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# constants\n",
+ "\n",
+ "MODEL_GPT = 'gpt-4o-mini'\n",
+ "MODEL_LLAMA = 'llama3.2'\n",
+ "\n",
+ "openai = OpenAI()\n",
+ "\n",
+ "ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 38,
+ "id": "3673d863",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+    "system_prompt = \"\"\"You are a software engineer, a PhD in mathematics, a machine learning engineer, and an expert in related topics\"\"\"\n",
+ "system_prompt += \"\"\"\n",
+ "When responding, always use Markdown for formatting. For any code, use well-structured code blocks with syntax highlighting,\n",
+ "For instance:\n",
+ "```python\n",
+ "\n",
+    "sample_list = [i for i in range(10)]\n",
+ "```\n",
+    "Another example:\n",
+ "```javascript\n",
+ " function displayMessage() {\n",
+ " alert(\"Hello, welcome to JavaScript!\");\n",
+ " }\n",
+ "\n",
+ "```\n",
+ "\n",
+ "Break down explanations into clear, numbered steps for better understanding. \n",
+ "Highlight important terms using inline code formatting (e.g., `function_name`, `variable`).\n",
+ "Provide examples for any concepts and ensure all examples are concise, clear, and relevant.\n",
+ "Your goal is to create visually appealing, easy-to-read, and informative responses.\n",
+ "\n",
+ "\"\"\"\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 39,
+ "id": "1df78d41",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def tutor_user_prompt(question):\n",
+ " # Ensure the question is properly appended to the user prompt.\n",
+ " user_prompt = (\n",
+ " \"Please carefully explain the following question in a step-by-step manner for clarity:\\n\\n\"\n",
+ " )\n",
+ " user_prompt += question\n",
+ " return user_prompt"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 43,
+ "id": "6dccbccb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "\n",
+ "\n",
+ "def askTutor(question, MODEL):\n",
+ " # Generate the user prompt dynamically.\n",
+ " user_prompt = tutor_user_prompt(question)\n",
+ " \n",
+ " # OpenAI API call to generate response.\n",
+ " if MODEL == 'gpt-4o-mini':\n",
+ " print(f'You are getting response from {MODEL}')\n",
+ " stream = openai.chat.completions.create(\n",
+ " model=MODEL,\n",
+ " messages=[\n",
+ " {\"role\": \"system\", \"content\": system_prompt},\n",
+ " {\"role\": \"user\", \"content\": user_prompt}\n",
+ " ],\n",
+ " stream=True\n",
+ " )\n",
+ " else:\n",
+    "        # Any other model name is served via the local Ollama endpoint\n",
+ " print(f'You are getting response from {MODEL}')\n",
+ " stream = ollama_via_openai.chat.completions.create(\n",
+ " model=MODEL,\n",
+ " messages=[\n",
+ " {\"role\": \"system\", \"content\": system_prompt},\n",
+ " {\"role\": \"user\", \"content\": user_prompt}\n",
+ " ],\n",
+ " stream=True\n",
+ " )\n",
+ "\n",
+ " # Initialize variables for response processing.\n",
+ " response = \"\"\n",
+ " display_handle = display(Markdown(\"\"), display_id=True)\n",
+ " \n",
+ " # Process the response stream and update display dynamically.\n",
+ " for chunk in stream:\n",
+ " # Safely access the content attribute.\n",
+ " response_chunk = getattr(chunk.choices[0].delta, \"content\", \"\")\n",
+ " if response_chunk: # Check if response_chunk is not None or empty\n",
+ " response += response_chunk\n",
+ " # No replacement of Markdown formatting here!\n",
+ " update_display(Markdown(response), display_id=display_handle.display_id)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 44,
+ "id": "a8d7923c-5f28-4c30-8556-342d7c8497c1",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# here is the question; type over this to ask something new\n",
+ "\n",
+ "question = \"\"\"\n",
+ "Please explain what this code does and why:\n",
+ "yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "3f0d0137-52b0-47a8-81a8-11a90a010798",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "askTutor(question=question, MODEL=MODEL_GPT)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b79f9479",
+ "metadata": {},
+ "source": [
+    "## **2. Using both LLMs in a collaborative approach**"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "80e3c8f5",
+ "metadata": {},
+ "source": [
+    "- Similar to a RAG (Retrieval-Augmented Generation) approach, refining the user query first and then producing a polished, detailed final answer is an excellent way to improve responses. Two LLMs talking to each other is cool! Here's how we can implement this:\n",
+ "\n",
+ "**Updated Concept:**\n",
+ "1. Refine Query with Ollama:\n",
+ " - Use Ollama to refine the raw user query into a well-structured prompt.\n",
+ " - This is especially helpful when users input vague or poorly structured queries.\n",
+ "2. Generate Final Response with GPT:\n",
+ " - Pass the refined prompt from Ollama to GPT to generate the final, detailed, and polished response.\n",
+ "3. Return the Combined Output:\n",
+ " - Combine the input, refined query, and the final response into a single display to ensure clarity."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 59,
+ "id": "60f5ac2d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def refine_with_ollama(raw_question):\n",
+ " \"\"\"\n",
+ " Use Ollama to refine the user's raw question into a well-structured prompt.\n",
+ " \"\"\"\n",
+ " print(\"Refining the query using Ollama...\")\n",
+ " messages = [\n",
+ " {\"role\": \"system\", \"content\": \"You are a helpful assistant. Refine and structure the following user input.\"},\n",
+ "\n",
+ " {\"role\": \"user\", \"content\": raw_question},\n",
+ " ]\n",
+ " response = ollama_via_openai.chat.completions.create(\n",
+ " model=MODEL_LLAMA,\n",
+ " messages=messages,\n",
+ " stream=False # Non-streamed refinement\n",
+ " )\n",
+ " refined_query = response.choices[0].message.content\n",
+ " return refined_query"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 60,
+ "id": "2aa4c9f6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def ask_with_ollama_and_gpt(raw_question):\n",
+ " \"\"\"\n",
+ " Use Ollama to refine the user query and GPT to generate the final response.\n",
+ " \"\"\"\n",
+ " # Step 1: Refine the query using Ollama\n",
+ " refined_query = refine_with_ollama(raw_question)\n",
+ " \n",
+ " # Step 2: Generate final response with GPT\n",
+ " print(\"Generating the final response using GPT...\")\n",
+ " messages = [\n",
+ " {\"role\": \"system\", \"content\": system_prompt},\n",
+ " {\"role\": \"user\", \"content\": refined_query},\n",
+ " ]\n",
+ " stream = openai.chat.completions.create(\n",
+ " model=MODEL_GPT,\n",
+ " messages=messages,\n",
+ " stream=True # Stream response for dynamic display\n",
+ " )\n",
+ "\n",
+ " # Step 3: Combine responses\n",
+ " response = \"\"\n",
+ " display_handle = display(Markdown(f\"### Refined Query:\\n\\n{refined_query}\\n\\n---\\n\\n### Final Response:\"), display_id=True)\n",
+ " for chunk in stream:\n",
+ " response_chunk = getattr(chunk.choices[0].delta, \"content\", \"\")\n",
+ " if response_chunk:\n",
+ " response += response_chunk\n",
+ " update_display(Markdown(f\"### Refined Query:\\n\\n{refined_query}\\n\\n---\\n\\n### Final Response:\\n\\n{response}\"), display_id=display_handle.display_id)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 61,
+ "id": "4150e857",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Example Usage\n",
+ "question = \"\"\"\n",
+ "Please explain what this code does:\n",
+ "yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
+ "\"\"\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f2b8935f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ask_with_ollama_and_gpt(raw_question=question)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "086a5294",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "llm_env",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
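The refine-then-answer pipeline in this patch can be sketched offline by injecting the two model calls as plain callables. This is a minimal sketch, assuming nothing about either API: the `chain` helper and the lambda stand-ins are hypothetical illustrations, not real Ollama or OpenAI calls.

```python
def chain(question, refine, answer):
    """Refine a raw question with one model, then answer it with another."""
    refined = refine(question)   # step 1: e.g. Ollama rewrites the vague prompt
    final = answer(refined)      # step 2: e.g. GPT answers the refined prompt
    return refined, final

# Stand-in "models" so the control flow runs without any API key:
refined, final = chain(
    "what set comprehension do",
    refine=lambda q: f"Explain step by step: {q}",
    answer=lambda p: f"[answer to: {p}]",
)
print(refined)  # Explain step by step: what set comprehension do
```

In the notebook, `refine` corresponds to `refine_with_ollama` and `answer` to the streamed GPT call; keeping them injectable makes the chaining testable independently of either backend.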
From 898686369e9edcba749915bb8a6798ebd642f448 Mon Sep 17 00:00:00 2001
From: Gabor Meresz
Date: Sun, 22 Dec 2024 10:18:06 +0100
Subject: [PATCH 22/29] Week 2 Day 4 - handle multiple tool calls
---
.../day4-handle-multiple-tool-call.ipynb | 264 ++++++++++++++++++
1 file changed, 264 insertions(+)
create mode 100644 week2/community-contributions/day4-handle-multiple-tool-call.ipynb
diff --git a/week2/community-contributions/day4-handle-multiple-tool-call.ipynb b/week2/community-contributions/day4-handle-multiple-tool-call.ipynb
new file mode 100644
index 0000000..eadd494
--- /dev/null
+++ b/week2/community-contributions/day4-handle-multiple-tool-call.ipynb
@@ -0,0 +1,264 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "ddfa9ae6-69fe-444a-b994-8c4c5970a7ec",
+ "metadata": {},
+ "source": [
+ "# Project - Airline AI Assistant\n",
+ "\n",
+ "We'll now bring together what we've learned to make an AI Customer Support assistant for an Airline"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# imports\n",
+ "\n",
+ "import os\n",
+ "import json\n",
+ "from dotenv import load_dotenv\n",
+ "from openai import OpenAI\n",
+ "import gradio as gr"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Initialization\n",
+ "\n",
+ "load_dotenv()\n",
+ "\n",
+ "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
+ "if openai_api_key:\n",
+ " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
+ "else:\n",
+ " print(\"OpenAI API Key not set\")\n",
+ " \n",
+ "MODEL = \"gpt-4o-mini\"\n",
+ "openai = OpenAI()\n",
+ "\n",
+ "# As an alternative, if you'd like to use Ollama instead of OpenAI\n",
+ "# Check that Ollama is running for you locally (see week1/day2 exercise) then uncomment these next 2 lines\n",
+ "# MODEL = \"llama3.2\"\n",
+ "# openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0a521d84-d07c-49ab-a0df-d6451499ed97",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n",
+ "system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n",
+ "system_message += \"Always be accurate. If you don't know the answer, say so.\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "61a2a15d-b559-4844-b377-6bd5cb4949f6",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# This function looks rather simpler than the one from my video, because we're taking advantage of the latest Gradio updates\n",
+ "\n",
+ "def chat(message, history):\n",
+ " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
+ " response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
+ " return response.choices[0].message.content\n",
+ "\n",
+ "gr.ChatInterface(fn=chat, type=\"messages\").launch()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4",
+ "metadata": {},
+ "source": [
+ "## Tools\n",
+ "\n",
+ "Tools are an incredibly powerful feature provided by the frontier LLMs.\n",
+ "\n",
+ "With tools, you can write a function, and have the LLM call that function as part of its response.\n",
+ "\n",
+ "Sounds almost spooky.. we're giving it the power to run code on our machine?\n",
+ "\n",
+ "Well, kinda."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Let's start by making a useful function\n",
+ "\n",
+ "ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\"}\n",
+ "\n",
+ "def get_ticket_price(destination_city):\n",
+ " print(f\"Tool get_ticket_price called for {destination_city}\")\n",
+ " city = destination_city.lower()\n",
+ " return ticket_prices.get(city, \"Unknown\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "get_ticket_price(\"Berlin\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "4afceded-7178-4c05-8fa6-9f2085e6a344",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# There's a particular dictionary structure that's required to describe our function:\n",
+ "\n",
+ "price_function = {\n",
+ " \"name\": \"get_ticket_price\",\n",
+ " \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"destination_city\": {\n",
+ " \"type\": \"string\",\n",
+ " \"description\": \"The city that the customer wants to travel to\",\n",
+ " },\n",
+ " },\n",
+ " \"required\": [\"destination_city\"],\n",
+ " \"additionalProperties\": False\n",
+ " }\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# And this is included in a list of tools:\n",
+ "\n",
+ "tools = [{\"type\": \"function\", \"function\": price_function}]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c3d3554f-b4e3-4ce7-af6f-68faa6dd2340",
+ "metadata": {},
+ "source": [
+ "## Getting OpenAI to use our Tool\n",
+ "\n",
+ "There's some fiddly stuff to allow OpenAI \"to call our tool\"\n",
+ "\n",
+ "What we actually do is give the LLM the opportunity to inform us that it wants us to run the tool.\n",
+ "\n",
+ "Here's how the new chat function looks:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ce9b0744-9c78-408d-b9df-9f6fd9ed78cf",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def chat(message, history):\n",
+ " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n",
+ " response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n",
+ "\n",
+ " if response.choices[0].finish_reason==\"tool_calls\":\n",
+ " message = response.choices[0].message\n",
+ " responses = handle_tool_call(message)\n",
+ " messages.append(message)\n",
+ " for response in responses:\n",
+ " messages.append(response)\n",
+ " response = openai.chat.completions.create(model=MODEL, messages=messages)\n",
+ " \n",
+ " return response.choices[0].message.content"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b0992986-ea09-4912-a076-8e5603ee631f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# We have to write that function handle_tool_call:\n",
+ "\n",
+ "def handle_tool_call(message):\n",
+ " responses = []\n",
+ " for tool_call in message.tool_calls:\n",
+ " arguments = json.loads(tool_call.function.arguments)\n",
+ " city = arguments.get('destination_city')\n",
+ " price = get_ticket_price(city)\n",
+ " responses.append({\n",
+ " \"role\": \"tool\",\n",
+ " \"content\": json.dumps({\"destination_city\": city, \"price\": price}),\n",
+ " \"tool_call_id\": tool_call.id\n",
+ " })\n",
+ " return responses"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "gr.ChatInterface(fn=chat, type=\"messages\").launch()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "11c9da69-d0cf-4cf2-a49e-e5669deec47b",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.11"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
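The tool-call handshake in `chat` and `handle_tool_call` above can be exercised end to end without an API key by stubbing the SDK's tool-call objects. A minimal sketch — the `Fake*` class names are hypothetical stand-ins for illustration, not part of the OpenAI SDK:

```python
import json

# Hypothetical stand-ins for the SDK's tool-call objects, so the
# handler logic can be tested without a live API call.
class FakeFunction:
    def __init__(self, arguments):
        self.arguments = arguments  # a JSON string, as the SDK returns it

class FakeToolCall:
    def __init__(self, call_id, arguments):
        self.id = call_id
        self.function = FakeFunction(arguments)

class FakeMessage:
    def __init__(self, tool_calls):
        self.tool_calls = tool_calls

ticket_prices = {"london": "$799", "paris": "$899", "tokyo": "$1400"}

def get_ticket_price(city):
    return ticket_prices.get(city.lower(), "Unknown")

def handle_tool_call(message):
    responses = []
    for tool_call in message.tool_calls:
        arguments = json.loads(tool_call.function.arguments)
        city = arguments.get("destination_city")
        responses.append({
            "role": "tool",
            "content": json.dumps({"destination_city": city, "price": get_ticket_price(city)}),
            "tool_call_id": tool_call.id,
        })
    return responses

replies = handle_tool_call(FakeMessage([FakeToolCall("call_1", '{"destination_city": "London"}')]))
print(replies[0]["content"])
```

Each dict in `replies` is what gets appended to `messages` before the second `chat.completions.create` call, which is how the model sees the tool's answer.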
From 26f1135eade397208b87352027409d2c85985ba2 Mon Sep 17 00:00:00 2001
From: Edward Donner
Date: Sun, 22 Dec 2024 10:30:50 +0000
Subject: [PATCH 23/29] Additional comments and refinements
---
week1/day2 EXERCISE.ipynb | 16 +++
week1/solutions/day2 SOLUTION.ipynb | 175 +++-------------------------
week1/troubleshooting.ipynb | 13 ++-
week2/day5.ipynb | 49 +++++++-
week4/day3.ipynb | 6 +-
week5/day4.ipynb | 26 ++++-
week8/day1.ipynb | 4 +-
7 files changed, 119 insertions(+), 170 deletions(-)
diff --git a/week1/day2 EXERCISE.ipynb b/week1/day2 EXERCISE.ipynb
index 4504401..fb08ca8 100644
--- a/week1/day2 EXERCISE.ipynb
+++ b/week1/day2 EXERCISE.ipynb
@@ -122,6 +122,18 @@
" }"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "479ff514-e8bd-4985-a572-2ea28bb4fa40",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Let's just make sure the model is loaded\n",
+ "\n",
+ "!ollama pull llama3.2"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
@@ -129,6 +141,10 @@
"metadata": {},
"outputs": [],
"source": [
+ "# If this doesn't work for any reason, try the 2 versions in the following cells\n",
+ "# And double check the instructions in the 'Recap on installation of Ollama' at the top of this lab\n",
+ "# And if none of that works - contact me!\n",
+ "\n",
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
"print(response.json()['message']['content'])"
]
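For reference, the payload that cell posts has roughly this shape (endpoint and model name as used in the lab; `"stream": False` asks Ollama for one complete JSON reply rather than a token stream):

```python
import json

# Endpoint and headers as used in the lab's requests.post call
OLLAMA_API = "http://localhost:11434/api/chat"
HEADERS = {"Content-Type": "application/json"}
MODEL = "llama3.2"

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Describe some business applications of Generative AI"}],
    "stream": False,  # one complete JSON reply instead of a token stream
}

# With the Ollama server running (and the model pulled), the cell sends:
#   response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)
#   print(response.json()["message"]["content"])
print(json.dumps(payload, indent=2))
```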
diff --git a/week1/solutions/day2 SOLUTION.ipynb b/week1/solutions/day2 SOLUTION.ipynb
index 6d688fc..da834b1 100644
--- a/week1/solutions/day2 SOLUTION.ipynb
+++ b/week1/solutions/day2 SOLUTION.ipynb
@@ -34,7 +34,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
@@ -49,7 +49,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": null,
"id": "29ddd15d-a3c5-4f4e-a678-873f56162724",
"metadata": {},
"outputs": [],
@@ -61,7 +61,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": null,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
@@ -91,63 +91,10 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": null,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Home - Edward Donner\n",
- "Home\n",
- "Outsmart\n",
- "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
- "About\n",
- "Posts\n",
- "Well, hi there.\n",
- "I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
- "very\n",
- "amateur) and losing myself in\n",
- "Hacker News\n",
- ", nodding my head sagely to things I only half understand.\n",
- "I’m the co-founder and CTO of\n",
- "Nebula.io\n",
- ". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
- "acquired in 2021\n",
- ".\n",
- "We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
- "patented\n",
- "our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
- "Connect\n",
- "with me for more!\n",
- "October 16, 2024\n",
- "From Software Engineer to AI Data Scientist – resources\n",
- "August 6, 2024\n",
- "Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
- "June 26, 2024\n",
- "Choosing the Right LLM: Toolkit and Resources\n",
- "February 7, 2024\n",
- "Fine-tuning an LLM on your texts: a simulation of you\n",
- "Navigation\n",
- "Home\n",
- "Outsmart\n",
- "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
- "About\n",
- "Posts\n",
- "Get in touch\n",
- "ed [at] edwarddonner [dot] com\n",
- "www.edwarddonner.com\n",
- "Follow me\n",
- "LinkedIn\n",
- "Twitter\n",
- "Facebook\n",
- "Subscribe to newsletter\n",
- "Type your email…\n",
- "Subscribe\n"
- ]
- }
- ],
+ "outputs": [],
"source": [
"# Let's try one out\n",
"\n",
@@ -176,7 +123,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": null,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
@@ -190,7 +137,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": null,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
@@ -224,7 +171,7 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": null,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
@@ -248,7 +195,7 @@
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": null,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
@@ -264,28 +211,17 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": null,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "'**Summary**\\n\\n* Website belongs to Edward Donner, a co-founder and CTO of Nebula.io.\\n* He is the founder and CEO of AI startup untapt, which was acquired in 2021.\\n\\n**News/Announcements**\\n\\n* October 16, 2024: \"From Software Engineer to AI Data Scientist – resources\" (resource list for career advancement)\\n* August 6, 2024: \"Outsmart LLM Arena – a battle of diplomacy and deviousness\" (introducing the Outsmart arena, a competition between LLMs)\\n* June 26, 2024: \"Choosing the Right LLM: Toolkit and Resources\" (resource list for selecting the right LLM)\\n* February 7, 2024: \"Fine-tuning an LLM on your texts: a simulation of you\" (blog post about simulating human-like conversations with LLMs)'"
- ]
- },
- "execution_count": 12,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
+ "outputs": [],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": null,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
@@ -299,37 +235,10 @@
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": null,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "# Summary of Edward Donner's Website\n",
- "\n",
- "## About the Creator\n",
- "Edward Donner is a writer, code enthusiast, and co-founder/CTO of Nebula.io, an AI company that applies AI to help people discover their potential.\n",
- "\n",
- "## Recent Announcements and News\n",
- "\n",
- "* October 16, 2024: \"From Software Engineer to AI Data Scientist – resources\" - a resource list for those transitioning into AI data science.\n",
- "* August 6, 2024: \"Outsmart LLM Arena – a battle of diplomacy and deviousness\" - an introduction to the Outsmart arena where LLMs compete against each other in diplomacy and strategy.\n",
- "* June 26, 2024: \"Choosing the Right LLM: Toolkit and Resources\" - a resource list for choosing the right Large Language Model (LLM) for specific use cases.\n",
- "\n",
- "## Miscellaneous\n",
- "\n",
- "* A section about Ed's personal interests, including DJing and amateur electronic music production.\n",
- "* Links to his professional profiles on LinkedIn, Twitter, Facebook, and a contact email."
- ],
- "text/plain": [
- "&lt;IPython.core.display.Markdown object&gt;"
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
@@ -352,66 +261,20 @@
},
{
"cell_type": "code",
- "execution_count": 15,
+ "execution_count": null,
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "I can't provide information on that topic."
- ],
- "text/plain": [
- "&lt;IPython.core.display.Markdown object&gt;"
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"display_summary(\"https://cnn.com\")"
]
},
{
"cell_type": "code",
- "execution_count": 19,
+ "execution_count": null,
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
"metadata": {},
- "outputs": [
- {
- "data": {
- "text/markdown": [
- "# Website Summary: Anthropic\n",
- "## Overview\n",
- "\n",
- "Anthropic is an AI safety and research company based in San Francisco. Their interdisciplinary team has experience across ML, physics, policy, and product.\n",
- "\n",
- "### News and Announcements\n",
- "\n",
- "* **Claude 3.5 Sonnet** is now available, featuring the most intelligent AI model.\n",
- "* **Announcement**: Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku (October 22, 2024)\n",
- "* **Research Update**: Constitutional AI: Harmlessness from AI Feedback (December 15, 2022) and Core Views on AI Safety: When, Why, What, and How (March 8, 2023)\n",
- "\n",
- "### Products and Services\n",
- "\n",
- "* Claude for Enterprise\n",
- "* Research and development of AI systems with a focus on safety and reliability.\n",
- "\n",
- "### Company Information\n",
- "\n",
- "* Founded in San Francisco\n",
- "* Interdisciplinary team with experience across ML, physics, policy, and product.\n",
- "* Provides reliable and beneficial AI systems."
- ],
- "text/plain": [
- "&lt;IPython.core.display.Markdown object&gt;"
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
+ "outputs": [],
"source": [
"display_summary(\"https://anthropic.com\")"
]
@@ -455,7 +318,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.10"
+ "version": "3.11.11"
}
},
"nbformat": 4,
diff --git a/week1/troubleshooting.ipynb b/week1/troubleshooting.ipynb
index 3811bbb..beb597c 100644
--- a/week1/troubleshooting.ipynb
+++ b/week1/troubleshooting.ipynb
@@ -48,21 +48,26 @@
"# The Environment Name should be: llms\n",
"\n",
"import os\n",
+ "conda_name, venv_name = \"\", \"\"\n",
"\n",
"conda_prefix = os.environ.get('CONDA_PREFIX')\n",
"if conda_prefix:\n",
" print(\"Anaconda environment is active:\")\n",
" print(f\"Environment Path: {conda_prefix}\")\n",
- " print(f\"Environment Name: {os.path.basename(conda_prefix)}\")\n",
+ " conda_name = os.path.basename(conda_prefix)\n",
+ " print(f\"Environment Name: {conda_name}\")\n",
"\n",
"virtual_env = os.environ.get('VIRTUAL_ENV')\n",
"if virtual_env:\n",
" print(\"Virtualenv is active:\")\n",
" print(f\"Environment Path: {virtual_env}\")\n",
- " print(f\"Environment Name: {os.path.basename(virtual_env)}\")\n",
+ " venv_name = os.path.basename(virtual_env)\n",
+ " print(f\"Environment Name: {venv_name}\")\n",
"\n",
- "if not conda_prefix and not virtual_env:\n",
- " print(\"Neither Anaconda nor Virtualenv seems to be active. Did you start jupyter lab in an Activated environment? See Setup Part 5.\")"
+ "if conda_name != \"llms\" and venv_name != \"llms\":\n",
+ " print(\"Neither Anaconda nor Virtualenv seems to be activated with the expected name 'llms'\")\n",
+ " print(\"Did you run 'jupyter lab' from an activated environment with (llms) showing on the command line?\")\n",
+ " print(\"If in doubt, close down all jupyter lab, and follow Part 5 in the SETUP-PC or SETUP-mac guide.\")"
]
},
{
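The basename check above works because `CONDA_PREFIX` and `VIRTUAL_ENV` hold the full path to the environment, not its name; only the final path component is the name we want to compare. A quick sketch (the paths are example values):

```python
import os

# The env vars hold full paths; os.path.basename extracts the final
# component, which is the environment name to compare against "llms".
conda_prefix = "/Users/student/anaconda3/envs/llms"   # example CONDA_PREFIX
virtual_env = "/home/student/projects/llms"           # example VIRTUAL_ENV

print(os.path.basename(conda_prefix))  # llms
print(os.path.basename(virtual_env))   # llms
```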
diff --git a/week2/day5.ipynb b/week2/day5.ipynb
index 5722305..353b039 100644
--- a/week2/day5.ipynb
+++ b/week2/day5.ipynb
@@ -366,7 +366,7 @@
"id": "d91d3f8f-e505-4e3c-a87c-9e42ed823db6",
"metadata": {},
"source": [
- "# For Mac users\n",
+ "# For Mac users - and possibly many PC users too\n",
"\n",
"This version should work fine for you. It might work for Windows users too, but you might get a Permissions error writing to a temp file. If so, see the next section!\n",
"\n",
@@ -410,19 +410,56 @@
"id": "ad89a9bd-bb1e-4bbb-a49a-83af5f500c24",
"metadata": {},
"source": [
- "# For Windows users\n",
+ "# For Windows users (or any Mac users with problems above)\n",
"\n",
"## First try the Mac version above, but if you get a permissions error writing to a temp file, then this code should work instead.\n",
"\n",
"A collaboration between students Mark M. and Patrick H. and Claude got this resolved!\n",
"\n",
- "Below are 3 variations - hopefully one of them will work on your PC. If not, message me please!\n",
+ "Below are 4 variations - hopefully one of them will work on your PC. If not, message me please!\n",
"\n",
"And for Mac people - all 3 of the below work on my Mac too - please try these if the Mac version gave you problems.\n",
"\n",
"## PC Variation 1"
]
},
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d104b96a-02ca-4159-82fe-88e0452aa479",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import base64\n",
+ "from io import BytesIO\n",
+ "from PIL import Image\n",
+ "from IPython.display import Audio, display\n",
+ "\n",
+ "def talker(message):\n",
+ " response = openai.audio.speech.create(\n",
+ " model=\"tts-1\",\n",
+ " voice=\"onyx\",\n",
+ " input=message)\n",
+ "\n",
+ " audio_stream = BytesIO(response.content)\n",
+ " output_filename = \"output_audio.mp3\"\n",
+ " with open(output_filename, \"wb\") as f:\n",
+ " f.write(audio_stream.read())\n",
+ "\n",
+ " # Play the generated audio\n",
+ " display(Audio(output_filename, autoplay=True))\n",
+ "\n",
+ "talker(\"Well, hi there\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3a5d11f4-bbd3-43a1-904d-f684eb5f3e3a",
+ "metadata": {},
+ "source": [
+ "## PC Variation 2"
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
@@ -473,7 +510,7 @@
"id": "96f90e35-f71e-468e-afea-07b98f74dbcf",
"metadata": {},
"source": [
- "## PC Variation 2"
+ "## PC Variation 3"
]
},
{
@@ -516,7 +553,7 @@
"id": "e821224c-b069-4f9b-9535-c15fdb0e411c",
"metadata": {},
"source": [
- "## PC Variation 3\n",
+ "## PC Variation 4\n",
"\n",
"### Let's try a completely different sound library\n",
"\n",
@@ -577,7 +614,7 @@
"id": "7986176b-cd04-495f-a47f-e057b0e462ed",
"metadata": {},
"source": [
- "## PC Users - if none of those 3 variations worked!\n",
+ "## PC Users - if none of those 4 variations worked!\n",
"\n",
"Please get in touch with me. I'm sorry this is causing problems! We'll figure it out.\n",
"\n",
diff --git a/week4/day3.ipynb b/week4/day3.ipynb
index 69188c4..74cbc7c 100644
--- a/week4/day3.ipynb
+++ b/week4/day3.ipynb
@@ -276,7 +276,11 @@
"Then it runs the program called `optimized`\n",
"\n",
"You can google (or ask ChatGPT!) for how to do this on your platform, then replace the lines below.\n",
- "If you're not comfortable with this step, you can skip it for sure - I'll show you exactly how it performs on my Mac."
+ "If you're not comfortable with this step, you can skip it for sure - I'll show you exactly how it performs on my Mac.\n",
+ "\n",
+ "OR alternatively: student Sandeep K.G. points out that you can run Python and C++ code online to test it out that way. Thank you Sandeep! \n",
+ "> Not an exact comparison but you can still get the idea of performance difference.\n",
+ "> For example here: https://www.programiz.com/cpp-programming/online-compiler/"
]
},
{
diff --git a/week5/day4.ipynb b/week5/day4.ipynb
index 43aa358..3e2cc00 100644
--- a/week5/day4.ipynb
+++ b/week5/day4.ipynb
@@ -294,7 +294,31 @@
"id": "9468860b-86a2-41df-af01-b2400cc985be",
"metadata": {},
"source": [
- "## Time to use LangChain to bring it all together"
+ "# Time to use LangChain to bring it all together"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8ba8a5e7-965d-4770-a12d-532aff50c4b5",
+ "metadata": {},
+ "source": [
+ "\n",
+ " \n",
+ " \n",
+ " \n",
+ " | \n",
+ " \n",
+ " PLEASE READ ME! Ignoring the Deprecation Warning\n",
+ " When you run the next cell, you will get a LangChainDeprecationWarning \n",
+ " about the simple way we use LangChain memory. They ask us to migrate to their new approach for memory. \n",
+ " I feel quite conflicted about this. The new approach involves moving to LangGraph and getting deep into their ecosystem.\n",
+ " There's a fair amount of learning and coding in LangGraph, frankly without much benefit in our case.\n",
+ " I'm going to think about whether/how to incorporate it in the course, but for now please ignore the Deprecation Warning and\n",
+ " use the code as is; LangChain are not expected to remove ConversationBufferMemory any time soon.\n",
+ " \n",
+ " | \n",
+ " \n",
+ ""
]
},
{
diff --git a/week8/day1.ipynb b/week8/day1.ipynb
index 0836b59..2c8fc37 100644
--- a/week8/day1.ipynb
+++ b/week8/day1.ipynb
@@ -95,7 +95,7 @@
"metadata": {},
"outputs": [],
"source": [
- "with app.run(show_progress=False):\n",
+ "with app.run():\n",
" reply=hello.local()\n",
"reply"
]
@@ -107,7 +107,7 @@
"metadata": {},
"outputs": [],
"source": [
- "with app.run(show_progress=False):\n",
+ "with app.run():\n",
" reply=hello.remote()\n",
"reply"
]
From effeac88f3c47bb9e34b662723fc9e89ea2f8476 Mon Sep 17 00:00:00 2001
From: Edward Donner
Date: Sun, 22 Dec 2024 10:33:09 +0000
Subject: [PATCH 24/29] Minor improvements
---
week2/day5.ipynb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/week2/day5.ipynb b/week2/day5.ipynb
index 353b039..1fed9bf 100644
--- a/week2/day5.ipynb
+++ b/week2/day5.ipynb
@@ -209,7 +209,7 @@
" response = {\n",
" \"role\": \"tool\",\n",
" \"content\": json.dumps({\"destination_city\": city,\"price\": price}),\n",
- " \"tool_call_id\": message.tool_calls[0].id\n",
+ " \"tool_call_id\": tool_call.id\n",
" }\n",
" return response, city"
]
From 5a6b3bc4cd9a2161e6cb8b9ec8b6b8dfd2e5f86c Mon Sep 17 00:00:00 2001
From: Edward Donner
Date: Sun, 22 Dec 2024 20:00:55 +0000
Subject: [PATCH 25/29] Minor improvements to README and Setup guides
---
README.md | 5 ++++-
SETUP-PC.md | 4 +++-
SETUP-PC.pdf | Bin 146512 -> 147373 bytes
SETUP-mac.md | 4 +++-
SETUP-mac.pdf | Bin 145588 -> 145969 bytes
5 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index 5b133d8..88a5fbb 100644
--- a/README.md
+++ b/README.md
@@ -52,9 +52,12 @@ You can use this as a direct replacement:
Below is a full example:
```
+# You need to do this one time on your computer
+!ollama pull llama3.2
+
from openai import OpenAI
MODEL = "llama3.2"
-openai = OpenAI(base_url='http://localhost:11434/v1';, api_key='ollama')
+openai = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
response = openai.chat.completions.create(
model=MODEL,
diff --git a/SETUP-PC.md b/SETUP-PC.md
index 31ec6f2..055ddef 100644
--- a/SETUP-PC.md
+++ b/SETUP-PC.md
@@ -91,8 +91,10 @@ Then, create a new virtual environment with this command:
`llms\Scripts\activate`
You should see (llms) in your command prompt, which is your sign that things are going well.
-4. Run `pip install -r requirements.txt`
+4. Run `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt`
This may take a few minutes to install.
+In the very unlikely event that this doesn't go well, you should try the bullet-proof (but slower) version:
+`pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall --verbose -r requirements.txt`
5. **Start Jupyter Lab:**
diff --git a/SETUP-PC.pdf b/SETUP-PC.pdf
index 97cb81e32f119c16ae91459e23bdffbae4fbc3db..4a23be19c511e261b161f954b6ed01a8493ebd61 100644
GIT binary patch
delta 42512
z>y+{29>~uT)_b1U#^L4kCTuny>vFTH9ez-ekSmd|*<=j}JA3=FU>afbhkBu|9*z4;
z=*UHX$i!>F;v37*h;u#syR#MdVM;O)sScln=u2+0z0XmzCFV&ilK!7x222$~J~}!+
zlVHDy(sUVe1+>(>ot7cD_Vv{)u2-Sa?F{yR%?F*lgNuKZw^Z)#TTD{8?4M8(+#5N>
zcc@HU1AF1DlwwUqe0&@2)4u;fq2p{TA#mz{`>@!UP-0Gjo3ljYtv0v{@xf638=i@u
ztNMhsxR?NO7;~ec-*U1+Z%Q7^_=@A$np41ypasjt4|Qngs0@ZGIMrduwCC-Pz@TrbCyq#zpzSF}}ZfALnNkfxCSeTmjn7mf;X$
zCH1BIso4ze_heQONOXVcRsb&)2iKfKpLg~)L1{&3wlKVIq|YjVc`?0Wp(FQu0)xx)
zJIX%wQWsKG(g^oZ9vA|!9W9-ZbIE`_lGeoM+3be2E*+2KxOa+M9o)6>BB6YL_M)@B
zlCC0}FjB%f_>OTlPv+mB-LVt-_QF7WGq^{SBjf6?4G@R9>vwx#W614bvc+%We8)v9
z{B&A*ip60NsCgOoE0makeFeugIWWLcDbZD$9BfbS|E!GXL6w<@
zE;+f^-hM=WsZ2*Su>$Yka?pbJ;56QzbFe@oGv2N3s!TL`QJO4Q;&?BxZ<7M8_6oJ=
ziGLKVy}aZQOeLoz{ms=5^b1+EWmZSL1o0sZhY4GXwDdk@^WT|V0Y6QDtRu*4vfjwt
zI-G;TrwYYH8!{&n8@5`WPsxPNY~5CSV#!_U@>r0{1UwQ7$q|Oi83y+nlB!?Cz
zoTIS-8iP^cZKW1mYacp9MB_kyD9zpiH1?s-akkEbn|V#CpvkzVJckqhSNzKX9$EFv
zQ!K=4mxG%QT*zZd>R|JKyRVI_ro{xCQ?aG|kAhk|`+3(EY==DBOq(;Gi%>14LTO79
zIALw>-_7q537K|ZMs;Phs*a1|_iA!Y`3Pb+g1Di7#Yul$#c?M0SyxdV
zwmGNn#H0zi{}$6=(REBwc&8&X*c`Rqiv?ll8qVV>fIq
z(;U@Q_;L<@b4s;;ec!sB+3dG%i8@v_jX&QUK~H8hENw%Rt4%npSscr)Y;;n|p~27%!E0rghFd&Q8yju$rCyhqeU_ldz%3_yh+RP%0
zyqQ)~Dxw;nb^_x5gikS#{Q7;U^4%o!Dd3c85??Hm7rhib?ss}BK+Dqkro^#816Uq<
zJl~OOsArhTg##`L?-J~03Bl4aGhc$gyMEt0?Lm!yiCRrtX=t*AhBceU%9^}od4F7Q
zqM}NJe_x7DsiIo={oG(6f~%ErPc=4Xcv)D9*#21x-)|0^u&
z2m3L9PkK8uwvml6*t;LDlMIgw-Rck3`xh6qg*1Wf8B_%B>9X08sare(H&fg5PDKho
zo6p;ti0JIGSUswe6?RNR9?x;#I=%FY@x}bDJ9kdwFY}NAgy5#KN
zuaWEyZ0;BUMbl^1MIhSqJ6wP3r7I@wPC3(mw$t0Bu6bjnm?!IMX_Z;tOv409HCcQo
zbuzaq_L99ll>daho;2V2LZrOrdRe%1yI8X8z2jpKP^v^c`@&Y7-Z0vJojERx3k+4?
zt3@nY5y3#@p06Cq?2;SuW?KI2^5b0S26OQ8D5In}L4=N0=&Z!7>NX28p|xWMKhc9RwqDDbt&?7ZJk8H{Xdd(3`+GDIv!
zqQP%CxWm;V_!C+&rYJd$$Yj})m0CDTtnO47CZerGhTxG`?<`1Zbzu?rR}c>|FONT^
z`XqArFmDAk5=8^|kkQryt&r%HU5~G}e>ocU_?s7mhlKIAWD9@BT_NS;vl{w{kn#Kd
zlR0KF#9zV2)ARD&X8(FmB^Jbg6YxpT+^S+K%CFuE7Cn!RX<01btLtg2-@o4G*}k(J
zo&T%?cxkLetRj;_4FDV%G7jpSSUCxa)m`dWiDzD6;i7yQ$UN5Zo#JJv!>@w<1od@i
zZuG?Asc>@hbb#R%z~dY#WAOFj8u5TFX8w@H^AEqk69n8lm2&^l(sr_cp4%==iVj1Y
zDiU!h>!@*o)cy|UAE*PT?zDdqNhNaakICLPX(9gmZN^F=q4%YDsyaE!5mVG
zXB7fj(hDI1d%bCHA18=hPE46Ta&VGfFR#b
z-`t|>dETh)LA!872(XlYMCi62DwxS{%@ns=Ps#Ebr(c;MPqhwUQ!%4{ELH0LB}X(q
zTR4HVDMVjL^q9BVg9Mizo4)nc2xX8GisYeYR%xRAA(>V?Z`wk!HO*Gr3(Pift7(3^
zI)Jc_afU)wUl>hO4;}QZb2EzC%J9YM>|~u0+Or4|S@2GB4*7F`U-^6BLJeAK$Rznx
zrM&Dp=@@uSk4h$g#Nr_WC}1XW*c6iz|eY6n~|}
zNYAuSiqV>1vj*6Hj!~QL7ib#4)A+8VR>ahMfiUuZF)f9~6y-9qh;QBmMV&aHe7-m%
z;&f_AgoU|b=Q(G884%F6DCq%@qOQ^Zh@_a|Ujg&0)5Q?yHP>mRPSAR@m3IE#6Dk*m%FlP?kKy}3`9%X&opZR>F2hU
z#9*xoa2TY*4g@JKhWEcs=L*@LDA)}~l)ZcoPRgw+(|ON-^f~;+==vsaJmE6~2TO};
zLEi(PTGk!%?ZeU$tD%bq#xg!X!In3!W11TpD{2?g+xaOtGu$G>?HS#|Y4Ra64g}@N
zy{?*!l*|jzv<+$>+t82My{2JSU+c5$r@d3dy5+CaV9;5Z=lsex$`;RE=y)wYYhvFwN84jHxuip0L?EYV_^`0mQpZFT%2LBr0suN&fe-qHCWR;RWAb>1!Rqyl1Z}RnYHk
zl^PL_#(H>myrd@i#6R#NzgD4gqo>5rk@K2sC!eSb)OAkUdqKE=Nm?OBya088@a|h%C&2iTXXBesY6P-NzzUou@Aa9mLy|u(Wj8Gk8SBe
za(XxcG=?I^%-%XXbPK7`b|2F-SK2KfKLy+T2$th4lIertyJYs*n_5+UQ(ve_!OASj
zx>V<#DWnE&K>lXXrhl13<)<-A;jz`=V*=EFuTPvBqb!PvQj^^miJtIV;q_cx`T68*
zl{}e(jgXF-Q)-i(m5~T#56!(u_+qvhiuDD0ExE{I>=F)|#9CztIgZRA6L?nO7loZz
zWXXzj`-KRz-;wNk&bzn?Da!r(J%br$;=N2cO|2_@Yh9u
z6C1fUmsEQl3&i5QtVlQx!$k3k#1Ds_d&hd0*G_{-b&D}q?}E5y7a|gz7v-R=Gk)RR
zPZKo6{Pa;`s;T^vPUPOLCNegsqgEd9!kQYbSS8e)-9dbPOH*)Ic?-}A^1fgv-3y>B
z;2jo6(&fwjsK#>>V#MWXsRK=Juf1}A#O1kKPsaCDmK$tNrkD3*k5QW@0Hq!Du=21n
z#WE3sH9{L9ITLdAwmE?09S`9U`o-5z=BHdA=kA3F|%35JS)WUjPR
z7^Ae2{>QL1+WwkJ?-@}Oc5
zb<*KEUEcXLM`@`0b@Q@-Tsv!sNyZzFlF#^jn8Mq+i7}TA4?Ggp_+Z(5yFO>jW-O7k
zG%$yOR8B!vbfcnA5`2_88>t6>GI<6cGP^T*J?9}d;K!omr#UGp&3%Ti8>9MeWhfeR
zO^*E&IR)qCvKT=p^7yfr`2WWz7vyOw=4~o|do4;-E%O2hM_7itWs}kSViDEA{pUs=
zmAk6(f_!m2>8{s19cjNxi1xh4)0^)5CQ13Qh0l9WDnAMCN$H-+^<}kx#~aWD3z}-y
z|D4s5R3W>&5hR?&9K-yMh-y}V$?Sl>cZUJjTgB-HqWAZ#OymZk{#~fvkFpI-}Vo6IZ
z2~6*XHtfK1jyQ8>L7|NEI2FFUX9!$%K4%0jRNuR#jQ>FA8Q*h%Y`aWAt0q%OJzJ*t
ziI&7exlLC)+p)aEckY9CWKQepA!&P$`ilMVfLE5g^RUc*5p&7~)+e?qkuQpib!Ice
z;gRC(EwwFEgSdiIqL%}!kRF1JY4qjzLMgE}rn~Kzi3$Z|z}InFFF}XHD+)h)4ZFhp
z0!~^s9hZ6HxVy#i<`cBprZ4S3z#N}2TNj`l-N;hpB
zZ}!e|t+wl!1I>Sv`HECmtFE|UBndmk&MCE6KP|f*Cd8%m1CbJ4Y0JKDalfbq1ar$j+x`1hHQ58nqxb}WB`Ivy_j+UaF>XWSOv=OY
z;wbOR7IxWKE_WKf?Q*4bkG=27N%MY$0Quw?U}#PP8H)E8
zEL1;Wbl%$TD`1sw-g;t;Et!ZjTxS_2W*nQTB1f>Wy=N=7#hquCdQm4x7%`q7%Oi@t
zhxRcEtF_&KLaNDsUp^7l(FS%<%7_AnMeYQ-PVTS1%EgeD_6pC2K|s*$d-wK{Qw*an
z?m3@uxhVo^bmZhC#%B1}#S@7E!RHxUj0x8(B$>$k1;cEo>#{N45nW5mF7{UMZS1Ur
zlI4S?S?9^&qf%gkGXt?EI?_-9{r-Ee-9VLRm{%!(0M>p#bb1WimS`^j73FM+beV#w
z<4y&~~dg8r;F^8t~o2IjPftdIP;|4FAl
zioA_~*}c=q2;VF^dwo5sD;R35^Z<#|#(4c}HRn|i#wRk~o)Y#YMt&|a`vcKZF*NJC
zeU^UPF&K2mJa>d$$`OGdNdWk{1(ZzOrzIs!t@E{*#8OqyAyy2y^F~p(A8;%@#RNk2
z(yceWPio-n6R+t;o-Bf|Fbaseu?lWqk~k87wvO~%Zs!+j8X#v}a@~@eug>e{j?p26
z2Y35L@>`{t04f=m+~B9%x6^}8O+SjI)J^6d+m#Jlu{MI&Jsa=5(Nqdmb}YUxUa|3-
zH7(KiM0%3DpJd_8w6`Scz)o^4cE<`f)^)L_3dQTE=##X5Yi!6o2ED3m8W<(t03kkq
z!e6+6OIZapR0P_Trc#r2?4l2xeVo1ph=;f8sv-~nI<-z{X~R=XS`WDGnQE|J@*h27
zjuIwK{FWb*AQ+$^6zM17boN1-M~_2*b`x0+Hv?ff+ntFwkm_}O@3);%SuXwRBQ#9w
z7wDvk_NB9rg7J5!Tv)jhr)JSs1XeE#*&du
zdKLhV=Z&q-B!EM#eyb+oArML^@oD$4+vF~lCv~1Eih{<(L$F!L&zsFGsZO+pN<}
zf7Caq~6`U&R_0-L{{a~7KqwL;ty__DXp!jr+d}HH)z_Fk>53G8qHFZ
zNPL#vWk@mmc19gF5u6N;p;JeZ%jSb=YumBdeWh7JIe^%2JG!`e3%z^rU(7zIYZuuHv
zNyh&=ncg|z)y00$`Rgzy=lWmMxLpAqhj`VZGbgL~+a@eMMJF=R#xYJ=8@Z0Jk?~EM
z%tN3BwTyD5s}>Rkt8A+~cVP7n1m^LsH2G<5js@kDQWHkDM1(HCrE_XcHokQow`8J5
z?D(p#6xEl5d75KS%Pp0E!b#n8NgIgh`txy5Wq?wJDj!^2Rahu
zz(e*1irS5%cb^RswX%(wECE4g3+iNQE*#u9itoNY)o_gnm!WKb?}QPA`aQZLT@`(O
zjtvc=zY97w&I4A`%>$mCEl%IEATfZ1HvI$XWWdIb;JFF@|AR
z!fK?6rk0Gf#6s5UvJ#n?tnQhLVq~s!c<=?qs=_8P3aBRbQeLTG+&tNKE-h2(j~y#}
zckmPgxvu(t#D-xI^jaO){&4!>{^Txzi{|lAa?I(LxQ@tw`gVqhQERcKSY3$X=fA(q
zrhw}FnadbgZvBA~FXb$JL5Tdt1&e=!h>dW|ZPovDaZMgJJ;wGn*;1vVbvl26{;Nfb
zo@RpF^}2-@`~az##@BhM{_$ijYDA(+xSoPw(*un84_X(@!1X7pw8FQyleL3bseJw_
z++dLv>OfR|0(tZh>L86nG!krI4M;k__mY;Zhp3VwV?ZJ;p;{f+WEz%5c}
z=phsbK=a5;8W%x|maYv#m&2TPp{#>37awQwJ@ekv@;M^UC77hf7$DTc1)r9IP0Y_+
z4Bu%|Vn^c%IM{71$CjG|I;n+K;@c9+B3Zw6W_dq;{$93|=S_A^4HAFF`mIt^j?VSrC%2c*HeNM^4!DB7()
z%z$uz1_%y+xZ2dX6y~*zotO?M(CvMC=icAEy^>!bU_wpmH$JF6gaTDOHGzw7xrM{p
zYa7?gATJdqtSG`N(~`fB_0ZU?oa~v6{+JAKtGKAhlAG+z`Z|z2SOxg*6#>`S6a*K`
zB~vwIO0s^QoAoL7v6#xkI@)&})$T*L1%tDH99pd)1=9qRaLPg+%f6&`RW;oP2o-B6
zl52-`p3!5Qic(83L4I`<$laWred$iK4Rgo*T=D~&O0c}SX%x%RT?S2DjwN==_P?UU`n~-u(?pF?nZn_?C0=T>(Q&%1fV&l$f~k2OhD+i}WFX
z7=+p#n4(OoDm4nwto{q(niBK6sro($`?fokJZ}r}8^T=gOCV>>mu6@Mw_euraKVj!
z)(HI1*pkgJeW;1hQuAwjz}-o~A?42_@Ui#QbjQuxOQ1}l6Y-Q^49;#}S{O(C*i2eC
zGG8}o4nNzXYc*4Ad2SiFf5nwl4$xD7em9i7X`vdP!hVt1+EfS1r6MiQ#CJ;OPX|^ar0hlirSYm_SU2ZI(tz6gVepP8&Ooo9
zlRVhZXV(;-4iF-q&N9^{?sbWQy_>lkGpVR-xHq|#C?5QRNmcG{l1@8+yZW`!
zoN6HQ9ewq_;Yc4x*)!Ch8R}}~y`u^W`w&hGigqQ!ooTW6Sict%hHkuR{|yUg{%82U
zgNi5VsH^v%zb5i25Y)BNDy!Mx@e|gkI(Qbd*~_F+X24ZTh(@n6nux>wR1^qWo&kxStFM`!mDN}^1dgVo5}=j$tUj(+%tbYH~o!?s9vA%4B@E$cbKaV
zNrx^!I?4j1vfUN1dYCT
zH_WR=5(N)xue=0cVL%V1%3TeoTihszwdsV2QAvgTwA%ya^xnNv}a-)a1x<#sLP5
zT8gWl_2;G)r`wnZzB@lR}#3x$u+0sk4Z+|gpBOwiyUYQY9x=(IJTqk
z(ZpRa%B=3g`GiK;!Po_VP)sqKzwZvp5G~e=VKDR=#A3V*n)X9|XoU8!rCZ@O~Ms
zvVuQMZrmfn=Pylc8D^#yxANX)Ul12Ys83$niC3V}Ke}e_5$!eeyHGlSPP4K#fZc=T
zD~93u%6VdAVkY8S)x^MYuna9A5bJwo4k%g`seJQjwL60{qJjB;1BF?U_E-;@bO&4i
za!i2(Y{(Iu`D!OWj*?L@Zi*?#TdaTdJ+uI=OW{EHr$_|72UzRa)#6$ED5h^^aQ1iu
z{patl4n$fBF9|+L>F3*BJCZ2g*E}aA4U^|24DxE3oum;ZC(alYV09)J@U4rvNWi=j
z95Ni!vk&=k&F}PouKuTaw~^?fT({Fn0#<%5p6dS9+4(BrV%Uf(p>|z%|3HBZ&aJ#x
zVVkfQ=<>nj5xsDmJq1JYc;sOMhbaE59Z|$!t#~D`4tbB*)Joki=VTVYGOK8Fz9Ni~
zNN1QpI&X>)M+cObaO5n|yVTS7=sVTR%!{+F267a08_-6c{ZS
z-F_vuplQ^9%Dgy)LYFR9dI=o8uQ0xou~(zA*>Aa(xdzHAmV5nFTSrcP7_djb92w6V
zeVH9t{P{TNDO8hzvy({}5*Yo}r)QIIyd^A!jE0jn^09Zl+<04x*KdIdwP99m`hq(+
z{A0{vENzt!&OBC8LW{#{ZnP$m&|IR*X-IRBUezjpb*F}H7d6tf8MkADIC(VuK>9$)
zUZSj3$h1c6$k>JZFL5LFp#nwe)~XQIZ^p3*pQA9owZkK0kf)(-`i)Lf6cbAJqGP@T
zstWv+-^A2;BEh#YR1le9z(48CP%I)hCEEXq^T;M;5DW~uFpw;Kcxu2xMFn;O@X8+z
z#8~Wq>5R-={F5x5PqJI1g(Ts7?PCnj6k#U-wvJ?huI6Z<%cT}q#V><1wIhSKJ+ttz
z2rIu>Z$%XyLSf&?b?j{b{In29Nmv%L_XWN!x*VQLOmeCm3She>PVh_)v6eSaXJ*=Y
zNrc(Cog9nCQoQE<-YaW<{TXshyd0KK&<1ONK=rFQ3;Q21MEb&WC>hb~M8XKJeEKqg
zgp{O_=I8cAUzINatMJC($EIPpe}|}Z=RyNrDmP`B?j#)@)WMOUyZzOzHs<{Bt@fHa
z7kwr1wO6>TOs?KtF;H?eDQ`yMvoy#m8pZcr)mFNR_se4lrj}74Y*SZEZXFdj`Ql=K
zv~ug+Yy6MezP&WfrLlRd$ufcH4w7g@Q~ttd@&^CMH-Pn`28<%8M*QjRUyT*X4;Uw&
z$nJgUH4#jXZ<%C>`<$v-dsg!bM$2$5RauI|keyFFYgpF}Mx(;fHjDW}({b~Z6;H>Q
zHpGMzI{Ha#6ZM#UpZT57^#?^pe_M}#3}1SgbOWuV*p)m*;oC^QMG_OTjOEz2Jin&e
z0%d+DyTd?cyPR`(*>k043p3|)Jd@l=li~>Jk`I;jHX_K*EIHOp9!e}~jePuTF2!~|
zFN8MHH5$%kUlF9S0)3<(*cef6U!$Dr2&``FVH^&=EhR`>-!(HRAT#6CkY_W0yx}(E
z={M5*X{b
zxFhu$ZF}E}sXaw7Vz_y7Yvnv!wn{R-XbB!3$wn`Y;Tlq26>mmavGgPQ?oFH1`wb12
zRX4p(5??#&kQ3R|-d$#fBn7(^8I^OocoFHVPF6R`%wvX{j16tQdJmX?HEFqFNF}`2
zIJ=t)$N`GD=O*XT^?DVQ^xncPU!~X0O^D6nr)l|`F*$IbYX#my^a9z7)}vD?9(6B~
z(%BEDKGm?JuFroa;vs$4apcUV8r_=m^a~xCQt2(hGxGYwoS%#digsgR({|!pI_vW3
z?>F$+iL~k_WHRJC)=$`fS}R(GJk^^qhzMhYM+E>O=?ZZCz5aP+3lcH!N7(G5Ah{u;3EHo@*5eMwc7f4mXQ+)iN5q=CS<3PteJ;3o*<8#RP97U;
z5#N!C?rjkJl9haD5kdkD0aqg$U)15q-9Oc;uPfLPN{YDpmjX%Z{woPwhU6Xs}
z2=bTLQ=fq1{w}v4-tNE4Tu-FRd-T-Wn2)62J&o*zA}4A*DnZ5W@?o
z(@A)?DK~f_Eh+hb_)2XI{DJt>tHzILF!BYTuU5;U$Rr!pgSO^Pnn71PoaJ+V;n;U=
zEYJX^8R93u;e7`{A$?tHjSkl{LzOtZNJKZW7Gw%s=Js++-%#SZR;@6@)G5ic
zA5tC2IgBG+FN6oE>)c9gWxW0h#%CG%l__~t&c2yN0VmgGge)xXy;
zopW>W=oBwb-%-rWA5M*$uz#`q=+K+fzhK1LK_xyJ+))*NdF}g}cMQhHXqHtZTcNCK
z1!cUQj>7D%52>n7%?0aAn#g{3aiGB&0c0hc6F-&wvzvJY)$
z*u=j@n}=q9S3^IS*QRVLujk0UBEjbCLp#9kAekK6n<-`*11d>i+&N|;%Ef94iOHi^heUJ6pe)yh^Z
zI$r^AI(+71IHXY@!4ZsCK&pBy-*pcHu5#)n#4|mUJ&oj@bufc5j7N=3rKT5WTiAYq8!_I#V#Br2aTU1cdyE1AawO
zw*bY2Q>|7{sw~~@i96NwyPLtcaH#uqo{vAtW?)(@;bi_MGzBbhB87uY1ekZEa-YX(
ze(&Q%_hwyKNR#Hi-1{d@^Js^P{AYDK@`hjadX~ve-m>~&7~mosghD?@jdp-W;I>?U
zyq+kxlo8K0TAuUpD4@FkVx0bg(3rehb}ATceA1&2oZgvU1|yS6UFdYko`7fs><<<$
z;|15lRREGxZJno3kcqC|N2oh>;@cD+jjueMVX-Djz=vM4^S;lp}W0k
zEnVkTH+_Da->7gGFNUz9n_JX)DFVrV$O#9D8%@OqJ9Bf1(=#@KpQJjb=`Ao`K$lAZ
zZZDq3?h|qd%$o)fR=O4&e`@@rWAEL
zBoB)@b}c|qKVp@wngplC^j;O>Ah48@`BSNNCY#|68RISlq^lF-H{%Ys)$Q-1#47%4r{;I$oaf_F9)Y`gFFuhUDkzqoo4@UT{cjFkw^YSB!FJ!5tJz9K_FWKP-T8pZM>msz
zD;{)Eb_z}lrMsR9y;<)?2rSLXcm)IVG7L=pLckt#2vGVLQ#lb7bbOchrKSLfz5FW*9e;ittYIn0^&6u#%($w}k
zfArq0I){_SyjCfFluLc#Vc6>POZYDx&C-z%u6pVECsfnRhF9POw*$sZc+sEXpxeP;E@lF^OE~>tTQC@X(6hz
zk@-yx^yoiUG^GB>l#Rjf7BJtnS5oLBaIl4x4kRN^AquZ`X8fCX>&l%Q?`CtK^22TA
z#HK+(4L!<%i+Z?!N5!jd6$1$IlgemJMO2Pll0m?th9R1rqR6{;Mx6JPc$4KbF2ilD
z$88e)wrfo3>w{uWek6;L~E!E#Rr6dh2wl6?I9xae&>t
z8O}f?hKL^;o>6Rej)ZGSnM879hE5mr534Ml1RzoSUIssZCxqW>mbEb8nJ+$(>ibax
zECuJ(B@-9#NC1r5=3zIC4WJNrY9--enz&UN)nG|RdzTpDA$ikaBhE=IGFu9U#t&1v
zR9c`n3@XXZU(K!0)D7U7+BRdz_Pxe!U8@@S;8<7)q#51bZ)b-tJjwP5QXftV`NbC<
z-AA1+<_JN5jS$&QijOEH@$q)6D9j)*^q>B56Vf-#}0E6#@cUJnD
z7DWK0|CY%(cU{{5Z#deIky^3-eGP8ravx7Kxs+oFTD%{fk
zrNlJZypV>V`|1~q^rX|yjOD-Fj0M3P2w_kDM{wfNK_~@;NU#OtG?z{7#$hV9ZpzcIWUclO
zR*{*1#|}g5(dZd-*Z}7s>2KTr8!Z_*G>541PibXejlPo)SwX#(qN;?N?+9=@?Jnz>
z?dN#MU=_;C_*<tc9E53AoO8{o@|jcGQ!Hw7ZfB?F~bTJ%C<%qYW0Gv^348kt~M67aQ6w6en5#l
zEcGCm5a2Y}zCi(eRcWwUuF-naLf?js=lrbQ#-PRFx<<$k6Du>$7cq@1=~T9xm>G
zd;O|`3<#5JpOIhJit4jM^?J9ilg`G#UO0RGT4K8rFJ;wB&=7tg*u;>O$i3$k0WK52CR
zw=X{YJ@xs+TMcW|HU7_`btZ)6k?_BN3ToR%Qv}BooijeUa1!4)GhW}|Vc$C=?Wp4W
zthm!J^2iLe#)8LWld%$#@o`s
zOYJNwZxJ29X>b}uU;xe4e-XbeZG5_q1
z_gc6!9Myh2D?;+Msc-waO9*U#ct=17R#DXYGi@&0n{7z1(yz!al{o|ap$pu%t@oBy
zUtM|*{gB7a@t%VjVXn^=pe7{aZBBJcB46q%94LGN@g!?*rlULG7$kiDVnE0$uVJUt
z`)o)^afR1m;{}d7jchsT*%KKY@jGTwN$=e0G0!BprOf}-x1|X7Pw)bN$TKXlQ+5_RiY7+T;gT%6kR(MD7g&@qU*^hr}SJa
z%Lpz5(BJ?Ph0I1r8@|0YvUT%vy||LH=f)c-ywmoHh+>t~p_Lh=gK
zwf=bW{SwQvW4$d+`q+kYBq=et<@#JyF7DGr6X{ypcpR_ZrAeZHj}<(XV>sKEnN
zoLvq#dauShD*DHN@tFQ=b4~pKaQz)Ba)%qjKb>bmOP9Gqp8)0Xhv>jzl`C1Wg`FMi
zHUX)VwqzKhw$@V=(z|KXUq1++bz7iKT?42;
zPUve`;)kz5J6!+#?t8^s(M^evw1i9-vHUH
zM?Y)Mj9Wr*_NmELbKogAYXW+JnI4PSlL5NR%}M5Yd7m9J=i$c{po6(}uZBW3`02kyL6Xh#O+ssflS
zN$L;iD_lJe4qrTkd5scX_+jyYotjR6sd#%3KF9ti-nH*{i9Lur9AEjao}THu3L;F5
z_Y4AtyJChZj#oN50VN*^S8M`4)9D4d=pAid02>#m*hTEe1OEt7)TK16@T8S_-Smj%
zg$#|<>voC|f6*Om2m9Fq1g3n^KTk3PkH|@6c(XfeywRd^e?Y?M-~JuwOaRz_sAJ|;
z_el)%lTcA-(pmE1E17DW8(Ek9R~a1>O;#j
zl~({D{_GeEuAm14(4Qd*vQQJx9-H)ZTH5%VzZ0uRQRV#GfAE6`mMHAWLlQh+w&NmO
zQPUG#tNiyqDAs~NN#3bJYDEiw;4RBlkd8O{@NT2cw34#smwm7+(pwX6vI4oIJx8wf
z&m)KVDob=GEq5uh-9>^(IdOIrCY%VX5#S-Rc@@h=Lys-Qzb!bIaICT$lkBx#_d85)
zBMO(EzkFlT^=y;Y=+DalY|WIz@RLVnWE{+40PA&}*E=xycDdZ?8(QdpEWd&s45B$l
z70dNx+MTq@vLHDez?Y?%*MIg$lGpH7caUtF9OoJf>0HNx|EGT_B<2Mge}@`IM|TrT
z;ze5BBET`C7ZfIl8@@o0(aZc!^Ksx!UHYw<%Dh=9u4q6y
z$I=%|t>m$m-kxE{T}WhqA*45x8ZNzoY%*m9UPRxIUO91AVz5TG
zZ$G8?>(Hm(+LIbTIc%~p7r_oBxiCy6veyz8nV7{O_j;uRodFJ)3@Av+87L`wUe&)HnZ|bFSP|6z1TP4!N6UmV?C(M7Q|Ez*_NIlQKy5^#!RVvFn_mV$MO~m
zO1caN9)%99QLDUvC|@h|Mv;^JAWAbqmuq3;GD{Yk5xqwQ4No7wfoWbZ8~ZCyl-+US~3omcCbf!wYQS^Y6+l^#PL@<6mg1lazdC$4VamDyc=1*^z4AfXiVg>W)zF
z$}}{86}gq`+;Frrw?Q$r$u-vu>E3lfdg%_5cos{efL*o@xy2gRVzb4Wc`DX!?h?|w
z2;*hhDl^Zxef!h4=OyLFi==^qp5ze0cp6SE#=DH9$!i&ME7n_F_?<^P<_@LZEPf!_
z`e^Hv+5fr2=v&7BTlhpw!Qt;57RlDeNze;_A~;(|)z9UG4(?~qUO+8;Xk3yA-qS@l
z&MlIB;5h~F_X2wP0eaVSULO-5Q9>Uie?|?Xn7C~OPFi~g|4kEM&D`zLoGun7Lnoqp
zOMYxHZ&A7S-uboD@(6j~5;CHB*S&?gs0c;1V*t8td^EcVZD<#LYZKH
zSZN$bu$=4TQvKkHJ83rw6eGiA4oTZkGms2_1~8*-jnXW2v*F_%sLTb|gbMen_b*Ve
zNNUkWSsjGAPCOO$rmX3P~uvXsY{7jgK;AWtU)rPt(~>YcRG2ZoNs(-MHePn
zn2?HcxUg<4X`O1fU;YgOV_A%r^DMfA7Nhq#>luT!&LiBE02tx7z*cX;7@Y>BcsAL
z2nx(|TkDa~IL3vvxI)=wFWWchSk_&%VoQh4@*#V`snAag=o+^U;hHm{|
zJ`9DORaBfou&u*j!6mo_cOBf_-E|1=E(yVYAOv@JcL@?SxNCyDy9bBCPmbJk*L}J#
zy?R&oTRqjPUG;6NKWvcH$%&N44h(xsBNpcaPFBMt@(FcHr7h6l2^7mc3M-@sp_EAlQ<9v7@0O}&u@)z?fBBjp<
z$X%qzI485>E>koy1L_F`&L%V0m^%x`$;5Qj#f{<9!`bQFFZyb{OtN20-Nyn~3T!2|
zAlwg_r3OTu9|Bn0jql^4O;{>NpC-F`~|F+oV(+Qd&JoB?rv`0{G?vyPg?Y3?0I|Vz>c1>j7j$evHYF-0Y5m
z=N8=-H21SHH}xMfNY#;@eYsvllQe#zFXyWd{&9e)#RU#&q|4A
z?8tB^OC3Y*KwFO)rOsphJ8f44Wrh_RQ3$7w9R{|gGbeoKsrvI;xFCPu!r%0&cCQ@zBFW5wk<
zE*h=JwT7&JRfgR!v;rXdrY5vl_SXWE;j1Z4KLJ<|JyVa(b8fV`uC|eVc-%;w2*Xj^
zgpD6D;!2rdndxeI7V9~Jdgc@GZ<#4ajh)z9ytKnY07Xp1p;g!{++AfYLuTV^5^tt?
zWI$K;ijS9BZb_5=2Zo=%&+^1=seYbI9=I~8NSxxU+HEjJGvkazmJa5X~
zVe7R>rc5)UOCr(Ry+uic<~y-9Eu$C8Qd*bAbDwBLy}bW&I9=59P-w)s)Gi^CRkQ+=
zv91%^b7dk5D|=#UusrWJ*hwk8Ruq$5Krwt=BApsrZ~Nic@zNfxk9T&UP{=vns^OhW
z)5KcoP@f6tO}R6UA^#u~n7gz*8^7SSdP{75o#bVSzT*m{`Bdv0uZ2}p_z25DM|I?Z
zdeYXZ;D*U#{>s4b7qKj;Dh^qR%tVV0{%p~k8`)%tWh}EgZSg^dRm5dmadAfaHlk0h
z@g-(K5C)^bjNWI58kUEolt+dCir1TN_?aPP)j*A7u@d6R!V!}X$W`3u_)>09DXz2X
zaE{tVwOoFRm)dcXwvt=W(t*=&9;TPfny(nd^HPrgnX#zxj*D246Wc$lNzLJ1$PoJy
zX5tNNt+wJJZNY;$44rcpPE5GufvGhCU%28i9UZusdD4|hssU`Knrk4H1upL
zg)x3$07D52;Htzo#ee{xdtnA34pV_btZrmzt?oAPT;ELrs^58XS{L>mvBCyF_(+i8
zRxEO?%eu8DVjpc+Bj2bYnDKaNQ~EHsFaDm1{lO<|ReGrxC8~onvkeiJdPOhvfgC@xZ(&t$=JZ)t+FJGWTzc%j^|#04A!pLWJKkDA
z_>N(cilv=a50Z@@RryCe+`Ba)Rv>Tqq1fHs#!6nz406okKSM+Lj#+pE&9K%_n(eM)
zTaULL3H~?#6GchJvCPNGZ~1TD`zIJ}ju8dp3S#`gX%73Da1gi47u7>*PY*5AVY7Dh
zp@?RP`LH?LDXr}HeIcC28Y=_G1}YH8x`%*CJ38Ejb#afdq>S}NtOF}6VwlQiF?PiE
z;#`?U*7{>KM%vB~dE)8i0XNG9ig?{HVvl?hu6nqd)YL-8WDIVt@@&rAfl3@J{n|H+
zHb^qya?{Tf{ZoPdN@ura>7%L4CW-cFyUEihX|{k7fnaMx*P5htYcAcGhTiFa}aOUZ+rPsNAUsPr9)wldL
zGl;F~x#@w;oM#|Bj!bNtaQLl@c&(p6a?2@fCBt174}dniz!Ofun^sg{qLysUc-cY9
z3=SO^%z?Z1S?N+W*x<7UUyywNon_R#gE07c-M6lK2>HTe`(8PlmD=5LlVcgFbi)fGf!Ymu!?VYqr?K{$8`{`=$4_
z6r)6MqT)=5{m&m&U4Oh-#4*rZuTTO2>sAwszY$a()bOQ+hx;;zRYjOb96
zq_n;gRVycqx)^_DA5JInZy{?pwN2X>FJ@}ViD~7~!01N839D`rgns93jDs9iE^Dg^
zGSaXGqe)6KCwhSC{Wck)dgE%1v&KnY?DH$GAY_)Gt0lcZ=yEf`q}Eq7e^V5wx^EMv
z$jUhWrEXjL)mLMnjQh+9?V*&>wgh%rZyny=2SdKO@pYtGNqqC%TlPDjt`OFe>n0azQhEYS@O)1U`M0{C@n(G(@5(fap6Y`d=(
z0Krb5sZlA6&2_))7kl7>a9Jx9q8n3#FGZY55++fSKWv%p8P_Vk(W-cwg6VH@ci0tp
zk-;cQ-w98R(h3((JkAaH!-{*}^!pa?LxV8TXi@x5N|Ozm>=>!a!!#8*g0strVf0)M
z`iw;nY|A;MAZTV}bcowZy^euBr#`KJi
zL-^_eny>&UTF8}2a#2VI>F3u}NPNAO519AH{Z5e?wN=!uK_v?4wwE$;Q0L`KC5At>
zU+~VB@_N2Zxq1wDkB1VzD$JKP+T}qaaCEymUcSSAN2zEM4Odp5-
zK|u~rCg#!Xlf898+%@*E3Y;ByI(iN{XvNQc(Ts5Os4yUpo-!{SCAH%zm!wiW9Ysl4
z`l?UpvZJT-Bc7%Ij9LKIvVcOIg2C8rPAgASy`tNt8n%=&}));Y36
z`8$p1Z7jjO1AOju?43VQA&yR~L*VY75N8b7$zN+UW4kBBoCZp&7k|O99sbH=}SlQd(ecBL+q)O4SoVGd-7ED
z!z~@R@|Q;(043QPx7k#$%=u=93mpxP
z0=6UEv{{C-Cp!T%q#zlMS#@q6EqV72d$MiLJ~96yw1Me-WL8VOv(43}MlXo@b-p8X
zYzy#KFZjZIWeevcx`iXuGrDLqaBZ-T3Nu}R^-IC73{HHUJoWr3*D+19Xoiv`-i}8{
zC^oM!bu6Z;wLr6@p4Wx#Z6|75M||LAlj~tYnJX}06z{qbIP_ls;e%aR0dY>$=~(BP
z_d*Oo-ky5LK69_Xj0OKIXZ%ud!gQ-?h!UV6J8QO#y-5FS*&J^ts-QI{D#=PcTe?@U
z=7yTW{4Ip4={w<6$Q}Rsd$#tsrHET&-!mol%W~Sv-e3EAD5j^Plg6RXnUcYOoilJZ
zPeV`!3=!#E1BO97M*(`dg^U;$I9&s~7N^WjUh+2folRpfAK|y|N9ULase?M+QWO9*
z%ePJkBj;2v9o_nPOfubK)X$pwG_!kaSjSYiL>5niTaC26iz`Irs~?@SxQ1OB;liJA
z>6Bp2-0slt^xY{jAe|nQ<)ChRCoa_UV63N!V3P!WBqG(X{_dnWsvIvsB!NX>bF}^d+)I&@j)S)(bN}*ck*_ec}fM(+W>`(H|&F~S&q5xQgU>EXE$R{gOS-3I}->E!kkkU%kOYz}SrN%`k
zL_Mem0Hkr}`R@Y~$TsIM$&!G*nrgtd%isOKaIifA%r`OJy-@6>YLx*JQ{xS5<@VP{
zoxQ$$BWvMQ`7UVftP$0cKI^WSJMyxMWN64a-)$Gdy)Y8}m=kNTAaVDSPBgucz?cnG
zwa9hk4EsK?P>~qEkdAb|*Y}tO3_aZ^qULuWR~rmjZiS2M#KOySUIYL~^+*iC2qEns
z=%@2W@m4WWug+v5iC!5Aa#RTp-JRd_)jzB?DH{LY`1XKf_`WF|>U^bHith}`bb329
zT1_t+*R;*2q~dY$9)k!mb*rOxcIc_wo%tro#$4rz1}qC_wYfqLB+7R!{Ow-@0&E*ft3mZxP~V3
zs74p24e#!KD~{3uI@3S4FtL4YV9RsV-=o5ND|{zYlfH^V(Fve{8Ce1R%xtS4d(o8?
zbQ0aus2V`zzIhQhHSB~d{6#Ud^g4f4NAMt66AdM1y5ck!aufGa`Q3$sUYcH*{aI4a
z%=k`H_GX|CXEM2b%=Q;DWP1^>Z(6amh2uN%S|dTru$FJsx~v2bML^UBL%+xEn%R4w
z3fsY7hVdcuifcd>*I@Oz_q=*Liz!`a0O1$)SaEvmLNy*UAY$NxsQNdTxv51iib2_P
zI{o&q3wCK}u3y|q1RLnFKYb?R*J8Ygr
zKhArX6eOc-=_6P3ZF9uZ{#B?9!-9<#`c-Gd?k$yCoD9g3_6g4Wm2!uLD;pp;mTT%a
ztyBNf+fiIr4
zs_^H&+PgazebXnWf8deTe}wjtt%Y=E$94YV0Dj-Z?_@t*K*A2nvtOf8nia@8|DxL*
z+5d%Do?8Meu_l2vYbxLIXCSWX+{V=C<%j8N!9^`?`U#J}Ru^@GtS?oTNs-3X&ooHu
zzh7s)I>m6I(F+^HnT$OAO@@x!f&BMAd#(w7HF$gwr>v#esS2G`v{UV>ag$y&2^Zo%
zp;TjWaEb`jzC}>9XI3PdJ0UidME<0S>Kd;8lyM8pc_Zediul`-txPuS)CQQN9*TY<
z|10Ywv1VYMGC^yr>NUs#x6C%?L^Rpv(>y5^G