
Merge pull request #16 from udayslathia16/main

Added my_question as an input field for prompts to both the models
Ed Donner 5 months ago committed by GitHub
commit 6735a81bcf
  1. week1/community-contributions/day 5 personal tutor.ipynb (+580)
  2. week1/community-contributions/day5 company brochure.ipynb (+453)

week1/community-contributions/day 5 personal tutor.ipynb (+580 lines)

@@ -0,0 +1,580 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 8,
"id": "bdb801c9-e33a-4a41-bdb8-9cacb382535d",
"metadata": {},
"outputs": [],
"source": [
"#imports\n",
"from IPython.display import Markdown, display, update_display\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import ollama"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f5a8a43d-530e-4031-b42f-5b6bd09af34b",
"metadata": {},
"outputs": [],
"source": [
"# constants\n",
"\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"MODEL_LLAMA = 'llama3.2'"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ddfffcbf-d6e3-4e63-85dc-02fb916cee88",
"metadata": {},
"outputs": [],
"source": [
"#sset up enviornment\n",
"\n",
"load_dotenv()\n",
"openai=OpenAI()\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "048e5e7c-dd7a-469e-9ed5-0c6f75fb0193",
"metadata": {},
"outputs": [],
"source": [
"# here is the question; type over this to ask something new\n",
"\n",
"question = \"\"\"\n",
"Please explain what this code does and why:\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "22d989ab-d1e2-4b93-9893-87c40ccde3cf",
"metadata": {},
"outputs": [],
"source": [
"system_prompt=\"You are a helpful technical tutor who answers questions about python code, software engineering, data science and LLMs\"\n",
"user_prompt=\"Please give a detailed explanation to the following question: \" + question"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "90a02948-86cb-4adc-9d88-977e7ed99c5b",
"metadata": {},
"outputs": [],
"source": [
"# messages\n",
"\n",
"messages=[\n",
" {\"role\":\"system\",\"content\":system_prompt},\n",
" {\"role\":\"user\",\"content\":user_prompt}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "6819c2cd-80e8-4cba-8472-b5a5729d2530",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"Certainly! Let's dissect the code snippet you provided:\n",
"\n",
"python\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\n",
"\n",
"### Breakdown of the Code:\n",
"\n",
"1. **Context of `yield from`:**\n",
" - The expression starts with `yield from`, which is a syntax used in Python's generator functions. A generator function is a special type of function that returns an iterator and allows you to iterate over a sequence of values lazily (one value at a time) instead of returning them all at once.\n",
" - `yield from` is specifically used to delegate part of the generator's operations to another iterator. When you use `yield from`, the values from the iterator on the right-hand side are yielded to the caller of the generator function.\n",
"\n",
"2. **Understanding the Set Comprehension:**\n",
" - `{book.get(\"author\") for book in books if book.get(\"author\")}` is a set comprehension.\n",
" - It iterates over each `book` in a collection called `books`. In this context, `books` is expected to be a list (or another iterable) of dictionaries, where each dictionary represents a book and contains various attributes (like \"title\", \"author\", etc.).\n",
" - Within the set comprehension, it calls `book.get(\"author\")`, which attempts to retrieve the value associated with the key \"author\" from each `book` dictionary.\n",
" - The `if book.get(\"author\")` condition ensures that only books with a non-falsy author (e.g., not `None` or an empty string) are included in the resulting set.\n",
" - The result of the comprehension is a set of unique author names (since sets inherently do not allow duplicates).\n",
"\n",
"### Summary of Functionality:\n",
"\n",
"- The entire line of code is a compact way to extract unique author names from a list of books and yield each unique author to the caller of the generator function. \n",
"- If there are multiple books with the same author, that author will only appear once in the output since sets do not allow duplicate entries.\n",
"\n",
"### Why Use This Code?\n",
"\n",
"1. **Unique Values**: By using a set comprehension, this code efficiently ensures that the output consists only of unique author names, which is often desirable when you're interested in knowing all distinct authors.\n",
" \n",
"2. **Lazy Evaluation**: By using `yield from`, the authors are yielded one by one as the caller consumes them. This can be more memory efficient compared to creating a list and returning it all at once, especially if the dataset (`books`) is large.\n",
"\n",
"3. **Readable and Concise**: The use of comprehensions makes the code compact and, with a bit of familiarity, easy to read. It expresses the intention to filter and collect authors succinctly.\n",
"\n",
"### Example:\n",
"\n",
"Here's a simple example to illustrate how this might work in practice:\n",
"\n",
"python\n",
"books = [\n",
" {\"title\": \"Book 1\", \"author\": \"Author A\"},\n",
" {\"title\": \"Book 2\", \"author\": \"Author B\"},\n",
" {\"title\": \"Book 3\", \"author\": \"Author A\"},\n",
" {\"title\": \"Book 4\", \"author\": None},\n",
" {\"title\": \"Book 5\", \"author\": \"Author C\"},\n",
" {\"title\": \"Book 6\", \"author\": \"\"}\n",
"]\n",
"\n",
"def unique_authors(books):\n",
" yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\n",
"for author in unique_authors(books):\n",
" print(author)\n",
"\n",
"\n",
"In this example, the output would be:\n",
"\n",
"Author A\n",
"Author B\n",
"Author C\n",
"\n",
"\n",
"Notice that duplicate authors are eliminated, and any books without an author are ignored."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Get gpt-4o-mini to answer, with streaming\n",
"\n",
"stream=openai.chat.completions.create(model=MODEL_GPT, messages=messages,stream=True)\n",
"\n",
"response=\"\"\n",
"display_handle=display(Markdown(\"\"),display_id=True)\n",
"for chunk in stream:\n",
" response +=chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
" update_display(Markdown(response),display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "95c15975-ba7d-4964-b94a-5ce105ccc9e3",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"**Code Explanation**\n",
"\n",
"The given code snippet is written in Python 3.5+ syntax, which utilizes the `yield from` keyword to iterate over a generator expression.\n",
"\n",
"```python\n",
"from collections import namedtuple\n",
"\n",
"Book = namedtuple('Book', ['title', 'author'])\n",
"books = [\n",
" Book(\"Book1\", \"AuthorA\"),\n",
" Book(\"Book2\", \"AuthorB\"),\n",
" Book(\"Book3\", \"AuthorC\")\n",
"]\n",
"\n",
"authors = yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"```\n",
"\n",
"**Breaking Down the Code**\n",
"\n",
"Here's a step-by-step explanation of what the code does:\n",
"\n",
"1. **Define a named tuple `Book`**: The `namedtuple` function is used to create a lightweight, immutable data structure called `Book`. It has two attributes: `title` and `author`.\n",
"\n",
"2. **Create a list of `Book` objects**: A list of `Book` objects is created with some sample data.\n",
"\n",
"3. **Define an empty generator expression**: An empty generator expression is defined using the `{}` syntax, which will be used to yield values from another iterable.\n",
"\n",
"4. **Use `yield from` to delegate iteration**: The `yield from` keyword is used in conjunction with a dictionary comprehension. This allows us to \"delegate\" iteration over the values of the dictionary to an underlying iterable (in this case, the generator expression).\n",
"\n",
"5. **Filter books based on author presence**: Inside the dictionary comprehension, we use the `.get()` method to access the `author` attribute of each `Book` object. We then filter out any books that don't have an `author`.\n",
"\n",
"6. **Yield authors from filtered books**: The resulting generator expression yields the authors of only those books that have a valid author.\n",
"\n",
"**What Does it Do?**\n",
"\n",
"In essence, this code takes a list of `Book` objects and extracts their corresponding authors into a set (since sets automatically remove duplicates). It does so in an efficient manner by using generators to avoid loading all the data into memory at once.\n",
"\n",
"The output would be:\n",
"```python\n",
"{'AuthorA', 'AuthorB', 'AuthorC'}\n",
"```\n",
"This can be useful when working with large datasets where not all elements are required, or when you want to process data iteratively without loading everything into memory simultaneously."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Get Llama 3.2 to answer\n",
"\n",
"response = ollama.chat(model=MODEL_LLAMA, messages=messages)\n",
"reply = response['message']['content']\n",
"display(Markdown(reply))"
]
},
{
"cell_type": "markdown",
"id": "9eb0a013-c1f2-4f01-8b10-9f68325356e9",
"metadata": {},
"source": [
"# Modify\n",
"Update such that the question is taken as input and sent to the model for response"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "3f01b258-a293-4afc-a99c-d3cfb624b9eb",
"metadata": {},
"outputs": [],
"source": [
"def get_model_responses(question):\n",
" \"\"\"\n",
" Takes a question as input, queries GPT-4o-mini and Llama 3.2 models, \n",
" and displays their responses.\n",
" \n",
" Args:\n",
" question (str): The question to be processed by the models.\n",
" \"\"\"\n",
" # system_prompt is already declared above lets generate a new user prompt so that the input question can be sent\n",
" user_input_prompt = f\"Please give a detailed explanation to the following question: {question}\"\n",
"\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_input_prompt}\n",
" ]\n",
" # GPT-4o-mini Response with Streaming\n",
" print(\"Fetching response from GPT-4o-mini...\")\n",
" stream = openai.chat.completions.create(model=MODEL_GPT, messages=messages, stream=True)\n",
"\n",
" response_gpt = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response_gpt += chunk.choices[0].delta.content or ''\n",
" response_gpt = response_gpt.replace(\"```\", \"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response_gpt), display_id=display_handle.display_id)\n",
"\n",
" # Llama 3.2 Response\n",
" print(\"Fetching response from Llama 3.2...\")\n",
" response_llama = ollama.chat(model=MODEL_LLAMA, messages=messages)\n",
" reply_llama = response_llama['message']['content']\n",
" display(Markdown(reply_llama))"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "dd35ac5e-a934-4c20-9be9-657afef66c12",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Please enter your question: what are the various career paths of data science\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fetching response from GPT-4o-mini...\n"
]
},
{
"data": {
"text/markdown": [
"Data science is a diverse and rapidly evolving field that encompasses a wide range of roles and specializations. As organizations increasingly rely on data-driven decision-making, the demand for data professionals has surged, giving rise to various career paths within data science. Here are some of the primary career paths:\n",
"\n",
"### 1. Data Scientist\n",
"**Role Description:** Data scientists are experts in extracting insights and knowledge from structured and unstructured data. They apply various techniques from statistics, machine learning, and data analysis to solve complex business problems.\n",
"\n",
"**Skills Required:**\n",
"- Proficient in programming languages like Python and R.\n",
"- Knowledge of machine learning algorithms and libraries (e.g., Scikit-learn, TensorFlow).\n",
"- Strong statistical background.\n",
"- Data visualization skills using tools like Matplotlib, Seaborn, Tableau, or Power BI.\n",
"\n",
"### 2. Data Analyst\n",
"**Role Description:** Data analysts focus on interpreting data and generating actionable insights. They analyze data trends and patterns, create visualizations, and communicate findings to stakeholders.\n",
"\n",
"**Skills Required:**\n",
"- Proficiency in SQL for database querying.\n",
"- Experience with Excel and data visualization tools (Tableau, Power BI).\n",
"- Strong analytical and problem-solving skills.\n",
"- Basic knowledge of statistics and data modeling.\n",
"\n",
"### 3. Machine Learning Engineer\n",
"**Role Description:** Machine learning engineers develop, implement, and optimize machine learning models. They focus on creating algorithms that enable systems to learn from data and make predictions or decisions.\n",
"\n",
"**Skills Required:**\n",
"- Strong programming skills (Python, Java, C++).\n",
"- Deep understanding of machine learning frameworks (TensorFlow, PyTorch).\n",
"- Experience with model deployment and scaling.\n",
"- Knowledge of data preprocessing and feature engineering.\n",
"\n",
"### 4. Data Engineer\n",
"**Role Description:** Data engineers are responsible for designing, building, and maintaining the infrastructure for data generation, storage, and retrieval. They ensure that data pipelines are efficient and scalable.\n",
"\n",
"**Skills Required:**\n",
"- Proficiency in programming (Python, Java, Scala).\n",
"- Experience with ETL (Extract, Transform, Load) processes and tools.\n",
"- Familiarity with database systems (SQL, NoSQL).\n",
"- Knowledge of data warehousing solutions (Amazon Redshift, Google BigQuery).\n",
"\n",
"### 5. Business Intelligence (BI) Analyst/Developer\n",
"**Role Description:** BI analysts focus on analyzing business data to provide strategic insights. They create dashboards and reports to help stakeholders make informed decisions.\n",
"\n",
"**Skills Required:**\n",
"- Strong SQL and data visualization skills.\n",
"- Familiarity with BI tools (Tableau, Power BI, Looker).\n",
"- Good understanding of business metrics and KPIs.\n",
"- Ability to communicate complex data insights clearly.\n",
"\n",
"### 6. Statistician\n",
"**Role Description:** Statisticians apply statistical methods to collect, analyze, and interpret data. They use their expertise to inform decisions in various fields, including healthcare, finance, and government.\n",
"\n",
"**Skills Required:**\n",
"- Proficiency in statistical software (SAS, R, SPSS).\n",
"- Strong foundation in probability and statistical theories.\n",
"- Ability to design experiments and surveys.\n",
"- Good visualization and reporting skills.\n",
"\n",
"### 7. Data Architect\n",
"**Role Description:** Data architects design the data infrastructure and architecture to support data management and analytics. They ensure data is reliable, consistent, and accessible.\n",
"\n",
"**Skills Required:**\n",
"- Expertise in data modeling and database design.\n",
"- Knowledge of data warehousing solutions.\n",
"- Familiarity with big data technologies (Hadoop, Spark).\n",
"- Understanding of data governance and security best practices.\n",
"\n",
"### 8. Data Product Manager\n",
"**Role Description:** Data product managers focus on developing and managing products that rely on data. They bridge the gap between technical teams and business stakeholders, ensuring that data initiatives align with business goals.\n",
"\n",
"**Skills Required:**\n",
"- Strong understanding of data and analytics.\n",
"- Project management skills (Agile methodologies).\n",
"- Ability to communicate effectively with technical and non-technical stakeholders.\n",
"- Knowledge of market trends and customer needs.\n",
"\n",
"### 9. Research Scientist\n",
"**Role Description:** Research scientists in data science focus on advanced data mining and machine learning techniques. They conduct experiments and develop new algorithms to solve complex scientific problems or improve existing methodologies.\n",
"\n",
"**Skills Required:**\n",
"- Advanced degrees (Ph.D.) in a relevant field (computer science, mathematics).\n",
"- Strong research and analytical skills.\n",
"- Proficiency in programming and statistical analysis.\n",
"- Experience with scientific computing and software development.\n",
"\n",
"### 10. AI/Deep Learning Specialist\n",
"**Role Description:** Specialists in AI and deep learning focus on developing advanced algorithms that enable machines to learn from large datasets. This includes work on neural networks, natural language processing, and computer vision.\n",
"\n",
"**Skills Required:**\n",
"- Strong knowledge of deep learning frameworks (Keras, TensorFlow).\n",
"- Familiarity with architecture design for neural networks.\n",
"- Experience with big data processing.\n",
"- Ability to handle unstructured data types (text, images).\n",
"\n",
"### Career Path Considerations\n",
"When choosing a career path in data science, it’s important to consider factors such as your educational background, interests, strengths, and the specific needs of the industry you want to work in. Many roles may require cross-disciplinary skills, so gaining a broad range of competencies can help you adapt and find your niche in the expansive field of data science.\n",
"\n",
"### Conclusion\n",
"Data science offers various fulfilling career paths to suit different interests and skill sets. With continuous growth in data generation and analytics needs, professionals in this field can expect a dynamic and rewarding career landscape. Continuous learning and adaptation to emerging technologies are crucial for success in these roles."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fetching response from Llama 3.2...\n"
]
},
{
"data": {
"text/markdown": [
"Data Science is a multifaceted field that encompasses a wide range of career paths. Here's a comprehensive overview of the various careers in Data Science:\n",
"\n",
"**1. Data Analyst**\n",
"\n",
"* Job Description: Collect, analyze, and interpret complex data to identify trends and patterns, often using visualization tools.\n",
"* Responsibilities:\n",
"\t+ Cleaning and preprocessing datasets\n",
"\t+ Developing reports and dashboards for stakeholders\n",
"\t+ Conducting ad-hoc analysis to answer business questions\n",
"\t+ Collaborating with other teams (e.g., product management, marketing) to inform decisions\n",
"* Salary Range: $60,000 - $100,000 per year\n",
"\n",
"**2. Data Scientist**\n",
"\n",
"* Job Description: Develop and apply advanced statistical and machine learning models to extract insights from large datasets.\n",
"* Responsibilities:\n",
"\t+ Designing and implementing data pipelines for data preparation and processing\n",
"\t+ Building and training machine learning models using techniques such as supervised and unsupervised learning, deep learning, and natural language processing\n",
"\t+ Collaborating with cross-functional teams (e.g., product management, engineering) to integrate insights into products and services\n",
"\t+ Communicating complex results and insights to stakeholders through reports and presentations\n",
"* Salary Range: $100,000 - $160,000 per year\n",
"\n",
"**3. Business Analyst**\n",
"\n",
"* Job Description: Apply data analysis skills to drive business decisions and optimize organizational performance.\n",
"* Responsibilities:\n",
"\t+ Analyzing business data to identify trends and areas for improvement\n",
"\t+ Developing predictive models to forecast future business outcomes\n",
"\t+ Collaborating with stakeholders (e.g., product managers, sales teams) to design and implement solutions\n",
"\t+ Communicating insights and recommendations to senior leadership\n",
"* Salary Range: $80,000 - $120,000 per year\n",
"\n",
"**4. Quantitative Analyst**\n",
"\n",
"* Job Description: Apply mathematical and statistical techniques to analyze and optimize investment strategies.\n",
"* Responsibilities:\n",
"\t+ Developing and implementing quantitative models for portfolio optimization, risk management, and trading\n",
"\t+ Analyzing large datasets to identify trends and patterns in financial markets\n",
"\t+ Collaborating with other teams (e.g., product management, marketing) to integrate insights into products and services\n",
"\t+ Communicating complex results and recommendations to senior leadership\n",
"* Salary Range: $100,000 - $180,000 per year\n",
"\n",
"**5. Data Engineer**\n",
"\n",
"* Job Description: Design, build, and maintain large-scale data systems for scalability, reliability, and performance.\n",
"* Responsibilities:\n",
"\t+ Building data pipelines using languages like Python, Java, or Scala\n",
"\t+ Developing cloud-based data platforms (e.g., AWS, GCP) for data storage and processing\n",
"\t+ Ensuring data quality and integrity across different data sources\n",
"\t+ Collaborating with other teams (e.g., product management, marketing) to integrate insights into products and services\n",
"* Salary Range: $110,000 - $160,000 per year\n",
"\n",
"**6. Machine Learning Engineer**\n",
"\n",
"* Job Description: Design, build, and deploy machine learning models for production use cases.\n",
"* Responsibilities:\n",
"\t+ Developing and deploying deep learning models using frameworks like TensorFlow or PyTorch\n",
"\t+ Building data pipelines to collect, preprocess, and process large datasets\n",
"\t+ Collaborating with cross-functional teams (e.g., product management, engineering) to integrate insights into products and services\n",
"\t+ Communicating complex results and recommendations to senior leadership\n",
"* Salary Range: $120,000 - $180,000 per year\n",
"\n",
"**7. Data Architect**\n",
"\n",
"* Job Description: Design and implement data management systems for organizations.\n",
"* Responsibilities:\n",
"\t+ Developing data warehousing and business intelligence solutions\n",
"\t+ Building data governance frameworks for data quality, security, and compliance\n",
"\t+ Collaborating with other teams (e.g., product management, marketing) to integrate insights into products and services\n",
"\t+ Communicating technical designs and trade-offs to stakeholders\n",
"* Salary Range: $140,000 - $200,000 per year\n",
"\n",
"**8. Business Intelligence Analyst**\n",
"\n",
"* Job Description: Develop and maintain business intelligence solutions using data visualization tools.\n",
"* Responsibilities:\n",
"\t+ Creating reports and dashboards for stakeholders\n",
"\t+ Developing predictive models for forecasted outcomes\n",
"\t+ Collaborating with other teams (e.g., product management, sales) to design and implement solutions\n",
"\t+ Communicating insights and recommendations to senior leadership\n",
"* Salary Range: $80,000 - $120,000 per year\n",
"\n",
"**9. Operations Research Analyst**\n",
"\n",
"* Job Description: Apply advanced analytical techniques to optimize business processes and improve decision-making.\n",
"* Responsibilities:\n",
"\t+ Developing optimization models using linear programming and integer programming\n",
"\t+ Analyzing complex data sets to identify trends and patterns\n",
"\t+ Collaborating with other teams (e.g., product management, engineering) to integrate insights into products and services\n",
"\t+ Communicating results and recommendations to senior leadership\n",
"* Salary Range: $90,000 - $140,000 per year\n",
"\n",
"**10. Data Scientist (Specialized)**\n",
"\n",
"* Job Description: Focus on specialized areas like natural language processing, computer vision, or predictive analytics.\n",
"* Responsibilities:\n",
"\t+ Building and training machine learning models using deep learning techniques\n",
"\t+ Collaborating with cross-functional teams (e.g., product management, engineering) to integrate insights into products and services\n",
"\t+ Communicating complex results and insights to stakeholders through reports and presentations\n",
"\t+ Staying up-to-date with the latest advancements in specialized areas\n",
"* Salary Range: $100,000 - $160,000 per year\n",
"\n",
"Keep in mind that salaries can vary widely depending on factors like location, industry, experience level, and company size. Additionally, these roles often require a combination of technical skills, business acumen, and soft skills to be successful."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
" # Prompt user for their question\n",
"my_question = input(\"Please enter your question: \")\n",
"# Fetch and display responses from models\n",
"get_model_responses(my_question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4acf2af-635f-4216-9f5a-7c08d8313a07",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
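An editorial aside on the streaming cell in the tutor notebook above: its accumulate-and-strip loop can be exercised offline with canned chunks. This is a minimal sketch with invented chunk strings; no API call or real `delta.content` objects are involved, and a plain `print` stands in for `update_display`.

```python
# Offline sketch of the tutor notebook's streaming loop: accumulate
# incremental chunks, stripping any ``` fences and "markdown" tags
# before each re-render.
chunks = ["```markdown\n# Answer", "\nUnique authors only.", "\n```"]

response = ""
for chunk in chunks:
    response += chunk or ""
    response = response.replace("```", "").replace("markdown", "")

print(response)
```

Stripping the fences as the reply arrives keeps the Markdown renderer from showing a half-open code block mid-stream; the same pattern appears again inside `get_model_responses`.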

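One more aside ahead of the brochure notebook: its `get_links` call requests `response_format={"type": "json_object"}` alongside a one-shot example, then runs `json.loads` on the reply. A minimal sketch of that parsing step, using a hypothetical reply that mirrors the prompt's own placeholder URLs:

```python
import json

# Hypothetical model reply in the shape the one-shot example requests;
# get_links() would receive this as response.choices[0].message.content.
reply = '''{
    "links": [
        {"type": "about page", "url": "https://full.url/goes/here/about"},
        {"type": "careers page", "url": "https://another.full.url/careers"}
    ]
}'''

result = json.loads(reply)
for link in result["links"]:
    print(f"{link['type']}: {link['url']}")
```

JSON mode guarantees syntactically valid JSON but not a particular schema, so a hardened version might check that the `"links"` key exists before iterating.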
week1/community-contributions/day5 company brochure.ipynb (+453 lines)

@@ -0,0 +1,453 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e14248ff-07be-4ba8-a13c-d8c7f40ffb5f",
"metadata": {},
"source": [
"# A full business solution\n",
"## Now we will take our project from Day 1 to the next level\n",
"## BUSINESS CHALLENGE:\n",
"Create a product that builds a Brochure for a company to be used for prospective clients, investors and potential recruits.\n",
"\n",
"We will be provided a company name and their primary website.\n",
"\n",
"See the end of this notebook for examples of real-world business applications.\n",
"\n",
"And remember: I'm always available if you have problems or ideas! Please do reach out."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6c8dc88a-85d9-493b-965c-68895cdd93f2",
"metadata": {},
"outputs": [],
"source": [
"#imports \n",
"\n",
"import os\n",
"import requests\n",
"import json\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "131c483b-dd58-4faa-baf5-469ab6b00fbb",
"metadata": {},
"outputs": [],
"source": [
"# Initialize and constants\n",
"\n",
"load_dotenv()\n",
"api_key=os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key[:8]=='sk-proj-':\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? \")\n",
"\n",
"MODEL='gpt-4o-mini'\n",
"openai=OpenAI()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "196c0dee-7236-4f88-b7c2-f2a885190b19",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"\n",
"class Website:\n",
" \"\"\"\n",
" A utility class to represent a Website that we have scraped, now with links\n",
" \"\"\"\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url)\n",
" self.body = response.content\n",
" soup = BeautifulSoup(self.body, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" if soup.body:\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" else:\n",
" self.text = \"\"\n",
" links = [link.get('href') for link in soup.find_all('a')]\n",
" self.links = [link for link in links if link]\n",
"\n",
" def get_contents(self):\n",
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f1329717-3727-4987-ada7-75df87a10459",
"metadata": {},
"outputs": [],
"source": [
"ed=Website(\"https://www.anthropic.com/\")\n",
"print(ed.get_contents)\n",
"ed.links"
]
},
{
"cell_type": "markdown",
"id": "912d4f83-c8f1-437c-a01b-e21988af477c",
"metadata": {},
"source": [
"## First step: Have GPT-4o-mini figure out which links are relevant\n",
"\n",
"### Use a call to gpt-4o-mini to read the links on a webpage, and respond in structured JSON. \n",
"It should decide which links are relevant, and replace relative links such as \"/about\" with \"https://company.com/about\". \n",
"We will use \"one shot prompting\" in which we provide an example of how it should respond in the prompt.\n",
"\n",
"This is an excellent use case for an LLM, because it requires nuanced understanding. Imagine trying to code this without LLMs by parsing and analyzing the webpage - it would be very hard!\n",
"\n",
"Sidenote: there is a more advanced technique called \"Structured Outputs\" in which we require the model to respond according to a spec. We cover this technique in Week 8 during our autonomous Agentic AI project."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ed206771-df05-429d-8743-310bc86358ce",
"metadata": {},
"outputs": [],
"source": [
"link_system_prompt=\"You are provided with a list of links found on a webpage. \\\n",
"You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n",
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n",
"link_system_prompt+=\"You should respond in JSON as in this example:\"\n",
"link_system_prompt+=\"\"\"\n",
"{\n",
" \"links\":[\n",
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
" {\"type\": \"careers page\": \"url\": \"https://another.full.url/careers\"}\n",
" ]\n",
"}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ef835a85-9a48-42bd-979e-ca5f51bb1586",
"metadata": {},
"outputs": [],
"source": [
"print(link_system_prompt)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f2885e89-6455-4239-a98d-5599ea6e5947",
"metadata": {},
"outputs": [],
"source": [
"def get_links_user_prompt(website):\n",
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n",
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n",
"Do not include Terms of Service, Privacy, email links.\\n\"\n",
" user_prompt += \"Links (some might be relative links):\\n\"\n",
" user_prompt += \"\\n\".join(website.links)\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "da7e4468-a225-4263-a212-94b1c69d38da",
"metadata": {},
"outputs": [],
"source": [
"print(get_links_user_prompt(ed))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "53c59051-eed0-4292-8204-abbbd1d78df4",
"metadata": {},
"outputs": [],
"source": [
"def get_links(url):\n",
" website=Website(url)\n",
" response=openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": link_system_prompt},\n",
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n",
" ],\n",
" response_format={\"type\":\"json_object\"}\n",
" )\n",
" result=response.choices[0].message.content\n",
" return json.loads(result)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "76d3d68d-6534-4b04-8a26-a07a9e532665",
"metadata": {},
"outputs": [],
"source": [
"anthropic = Website(\"https://www.anthropic.com/\")\n",
"anthropic.links"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12ca6438-bc99-4b45-9603-54bee5d8bce2",
"metadata": {},
"outputs": [],
"source": [
"get_links(\"https://www.anthropic.com/\")"
]
},
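{
"cell_type": "markdown",
"id": "a3f1c5d7-2b4e-4f6a-9c8d-1e3f5a7b9c0d",
"metadata": {},
"source": [
"The call above assumes the model returns valid JSON. As an optional safeguard (a sketch, not part of the original flow), a small wrapper can fall back to an empty link list if parsing fails:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4e2d6f8-3c5a-4d7b-8e9f-2a4c6e8b0d1f",
"metadata": {},
"outputs": [],
"source": [
"def get_links_safe(url):\n",
"    # Defensive wrapper around get_links - falls back to no extra links if the JSON is malformed\n",
"    try:\n",
"        return get_links(url)\n",
"    except json.JSONDecodeError:\n",
"        print(f\"Could not parse link JSON for {url}; continuing with the landing page only\")\n",
"        return {\"links\": []}"
]
},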
{
"cell_type": "markdown",
"id": "4304d6e8-900e-4702-b84c-f202d6265459",
"metadata": {},
"source": [
"## Second step: make the brochure!\n",
"\n",
"Assemble all the details into another prompt to GPT-4o"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "91ac10e6-8a7a-4367-939b-ac537c1c6c67",
"metadata": {},
"outputs": [],
"source": [
"def get_all_details(url):\n",
"    result = \"Landing page:\\n\"\n",
"    result += Website(url).get_contents()\n",
"    links = get_links(url)\n",
"    print(\"Found links:\", links)\n",
"    for link in links[\"links\"]:\n",
"        result += f\"\\n\\n{link['type']}\\n\"\n",
"        result += Website(link[\"url\"]).get_contents()\n",
"    return result"
]
},
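{
"cell_type": "markdown",
"id": "c5f3e7a9-4d6b-4e8c-9f0a-3b5d7f9c1e2a",
"metadata": {},
"source": [
"Fetching every suggested link can fail on slow or broken pages. A hedged variant (an optional sketch, not the original flow) caps the number of links and skips any page that cannot be fetched:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d6a4f8b0-5e7c-4f9d-8a1b-4c6e8a0d2f3b",
"metadata": {},
"outputs": [],
"source": [
"def get_all_details_safe(url, max_links=5):\n",
"    # Variant of get_all_details that limits the link count and tolerates fetch errors\n",
"    result = \"Landing page:\\n\" + Website(url).get_contents()\n",
"    for link in get_links(url)[\"links\"][:max_links]:\n",
"        try:\n",
"            result += f\"\\n\\n{link['type']}\\n\" + Website(link[\"url\"]).get_contents()\n",
"        except Exception as e:\n",
"            print(f\"Skipping {link['url']}: {e}\")\n",
"    return result"
]
},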
{
"cell_type": "code",
"execution_count": null,
"id": "765e9c71-2bbc-4222-bce1-0f553d8d2b10",
"metadata": {},
"outputs": [],
"source": [
"print(get_all_details(\"https://anthropic.com\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7116adc1-6f5e-445f-9869-ffcf5fa6a9b8",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
"and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
"Include details of company culture, customers and careers/jobs if you have the information.\"\n",
"\n",
"# Or uncomment the lines below for a more humorous brochure - this demonstrates how easy it is to incorporate 'tone':\n",
"\n",
"# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
"# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
"# Include details of company culture, customers and careers/jobs if you have the information.\"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "02edb903-6352-417f-8c0f-85c2eee269b6",
"metadata": {},
"outputs": [],
"source": [
"def get_brochure_user_prompt(company_name, url):\n",
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n",
"    user_prompt += \"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n",
" user_prompt += get_all_details(url)\n",
" user_prompt = user_prompt[:20_000] # Truncate if more than 20,000 characters\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f760069-910e-4209-b357-b97e710f560d",
"metadata": {},
"outputs": [],
"source": [
"get_brochure_user_prompt(\"Anthropic\", \"https://anthropic.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "faf9d9cc-fe30-4441-9adc-aee5b4dc80ca",
"metadata": {},
"outputs": [],
"source": [
"def create_brochure(company_name, url):\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" )\n",
" result = response.choices[0].message.content\n",
" display(Markdown(result))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8a672f4-ee87-4e2a-a6b1-dfb46f344ef3",
"metadata": {},
"outputs": [],
"source": [
"create_brochure(\"Anthropic\", \"https://anthropic.com\")"
]
},
{
"cell_type": "markdown",
"id": "781fa1db-7acc-41fc-b26c-0d64964eb161",
"metadata": {},
"source": [
"## Finally - a minor improvement\n",
"\n",
"With a small adjustment, we can change this so that the results stream back from OpenAI,\n",
"with the familiar typewriter animation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b8359501-9f05-42bc-916c-7990ac910866",
"metadata": {},
"outputs": [],
"source": [
"def stream_brochure(company_name, url):\n",
"    stream = openai.chat.completions.create(\n",
"        model=MODEL,\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": system_prompt},\n",
"            {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
"        ],\n",
"        stream=True\n",
"    )\n",
"\n",
"    response = \"\"\n",
"    display_handle = display(Markdown(\"\"), display_id=True)\n",
"    for chunk in stream:\n",
"        response += chunk.choices[0].delta.content or ''\n",
"        # Strip only code fences, so the word 'markdown' in the brochure text itself survives\n",
"        cleaned = response.replace(\"```markdown\", \"\").replace(\"```\", \"\")\n",
"        update_display(Markdown(cleaned), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd834aa7-deda-40cd-97ab-5fa5117fc6e0",
"metadata": {},
"outputs": [],
"source": [
"stream_brochure(\"HuggingFace\", \"http://huggingface.co\")"
]
},
{
"cell_type": "markdown",
"id": "207068f8-d768-46b2-8b92-0ec78a9f71ae",
"metadata": {},
"source": [
"# Convert the brochure to a specified language\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e75be9e6-040d-4178-a5b3-1b7ae4460bc8",
"metadata": {},
"outputs": [],
"source": [
"def create_brochure_language(company_name, url, language):\n",
"    language_prompt = f\"You are a professional translator and writer specializing in creating and translating brochures. Write the brochure in {language} while maintaining its original tone, format, and purpose.\"\n",
"    # Feed the scraped page content so the model works from real data rather than inventing details from the URL alone\n",
"    user_language_prompt = get_brochure_user_prompt(company_name, url) + f\"\\nWrite the brochure in {language}.\"\n",
"    response = openai.chat.completions.create(\n",
"        model=MODEL,\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": language_prompt},\n",
"            {\"role\": \"user\", \"content\": user_language_prompt}\n",
"        ],\n",
"    )\n",
"    result = response.choices[0].message.content\n",
"    display(Markdown(result))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0748ec58-335b-4796-ae15-300dee7b24b0",
"metadata": {},
"outputs": [],
"source": [
"create_brochure_language(\"HuggingFace\", \"http://huggingface.co\",\"Hindi\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba54f80b-b2cd-4a50-b460-e0d042499c49",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "182f35da-d7b1-40f8-b1a7-74e0cd7fd6fe",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}