
Merge branch 'main' of github.com:ed-donner/llm_engineering

pull/33/head
Edward Donner 5 months ago
parent
commit
fc0305bf81
  1. 580
      week1/community-contributions/day 5 personal tutor.ipynb
  2. 189
      week1/community-contributions/day1-article-pdf-reader.ipynb
  3. 480
      week1/community-contributions/day1_first_llm_videotranscript_summary.ipynb
  4. 453
      week1/community-contributions/day5 company brochure.ipynb
  5. 2
      week2/community-contributions/day4.ipynb
  6. 4
      week2/day5.ipynb

580
week1/community-contributions/day 5 personal tutor.ipynb

@@ -0,0 +1,580 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 8,
"id": "bdb801c9-e33a-4a41-bdb8-9cacb382535d",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"from IPython.display import Markdown, display, update_display\n",
"from dotenv import load_dotenv\n",
"from openai import OpenAI\n",
"import ollama"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "f5a8a43d-530e-4031-b42f-5b6bd09af34b",
"metadata": {},
"outputs": [],
"source": [
"# constants\n",
"\n",
"MODEL_GPT = 'gpt-4o-mini'\n",
"MODEL_LLAMA = 'llama3.2'"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "ddfffcbf-d6e3-4e63-85dc-02fb916cee88",
"metadata": {},
"outputs": [],
"source": [
"# set up environment\n",
"\n",
"load_dotenv()\n",
"openai=OpenAI()\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "048e5e7c-dd7a-469e-9ed5-0c6f75fb0193",
"metadata": {},
"outputs": [],
"source": [
"# here is the question; type over this to ask something new\n",
"\n",
"question = \"\"\"\n",
"Please explain what this code does and why:\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "22d989ab-d1e2-4b93-9893-87c40ccde3cf",
"metadata": {},
"outputs": [],
"source": [
"system_prompt=\"You are a helpful technical tutor who answers questions about python code, software engineering, data science and LLMs\"\n",
"user_prompt=\"Please give a detailed explanation to the following question: \" + question"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "90a02948-86cb-4adc-9d88-977e7ed99c5b",
"metadata": {},
"outputs": [],
"source": [
"# messages\n",
"\n",
"messages=[\n",
" {\"role\":\"system\",\"content\":system_prompt},\n",
" {\"role\":\"user\",\"content\":user_prompt}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "6819c2cd-80e8-4cba-8472-b5a5729d2530",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"Certainly! Let's dissect the code snippet you provided:\n",
"\n",
"python\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\n",
"\n",
"### Breakdown of the Code:\n",
"\n",
"1. **Context of `yield from`:**\n",
" - The expression starts with `yield from`, which is a syntax used in Python's generator functions. A generator function is a special type of function that returns an iterator and allows you to iterate over a sequence of values lazily (one value at a time) instead of returning them all at once.\n",
" - `yield from` is specifically used to delegate part of the generator's operations to another iterator. When you use `yield from`, the values from the iterator on the right-hand side are yielded to the caller of the generator function.\n",
"\n",
"2. **Understanding the Set Comprehension:**\n",
" - `{book.get(\"author\") for book in books if book.get(\"author\")}` is a set comprehension.\n",
" - It iterates over each `book` in a collection called `books`. In this context, `books` is expected to be a list (or another iterable) of dictionaries, where each dictionary represents a book and contains various attributes (like \"title\", \"author\", etc.).\n",
" - Within the set comprehension, it calls `book.get(\"author\")`, which attempts to retrieve the value associated with the key \"author\" from each `book` dictionary.\n",
" - The `if book.get(\"author\")` condition ensures that only books with a non-falsy author (e.g., not `None` or an empty string) are included in the resulting set.\n",
" - The result of the comprehension is a set of unique author names (since sets inherently do not allow duplicates).\n",
"\n",
"### Summary of Functionality:\n",
"\n",
"- The entire line of code is a compact way to extract unique author names from a list of books and yield each unique author to the caller of the generator function. \n",
"- If there are multiple books with the same author, that author will only appear once in the output since sets do not allow duplicate entries.\n",
"\n",
"### Why Use This Code?\n",
"\n",
"1. **Unique Values**: By using a set comprehension, this code efficiently ensures that the output consists only of unique author names, which is often desirable when you're interested in knowing all distinct authors.\n",
" \n",
"2. **Lazy Evaluation**: By using `yield from`, the authors are yielded one by one as the caller consumes them. This can be more memory efficient compared to creating a list and returning it all at once, especially if the dataset (`books`) is large.\n",
"\n",
"3. **Readable and Concise**: The use of comprehensions makes the code compact and, with a bit of familiarity, easy to read. It expresses the intention to filter and collect authors succinctly.\n",
"\n",
"### Example:\n",
"\n",
"Here's a simple example to illustrate how this might work in practice:\n",
"\n",
"python\n",
"books = [\n",
" {\"title\": \"Book 1\", \"author\": \"Author A\"},\n",
" {\"title\": \"Book 2\", \"author\": \"Author B\"},\n",
" {\"title\": \"Book 3\", \"author\": \"Author A\"},\n",
" {\"title\": \"Book 4\", \"author\": None},\n",
" {\"title\": \"Book 5\", \"author\": \"Author C\"},\n",
" {\"title\": \"Book 6\", \"author\": \"\"}\n",
"]\n",
"\n",
"def unique_authors(books):\n",
" yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"\n",
"for author in unique_authors(books):\n",
" print(author)\n",
"\n",
"\n",
"In this example, the output would be:\n",
"\n",
"Author A\n",
"Author B\n",
"Author C\n",
"\n",
"\n",
"Notice that duplicate authors are eliminated, and any books without an author are ignored."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Get gpt-4o-mini to answer, with streaming\n",
"\n",
"stream=openai.chat.completions.create(model=MODEL_GPT, messages=messages,stream=True)\n",
"\n",
"response=\"\"\n",
"display_handle=display(Markdown(\"\"),display_id=True)\n",
"for chunk in stream:\n",
" response +=chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
" update_display(Markdown(response),display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "95c15975-ba7d-4964-b94a-5ce105ccc9e3",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"**Code Explanation**\n",
"\n",
"The given code snippet is written in Python 3.5+ syntax, which utilizes the `yield from` keyword to iterate over a generator expression.\n",
"\n",
"```python\n",
"from collections import namedtuple\n",
"\n",
"Book = namedtuple('Book', ['title', 'author'])\n",
"books = [\n",
" Book(\"Book1\", \"AuthorA\"),\n",
" Book(\"Book2\", \"AuthorB\"),\n",
" Book(\"Book3\", \"AuthorC\")\n",
"]\n",
"\n",
"authors = yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"```\n",
"\n",
"**Breaking Down the Code**\n",
"\n",
"Here's a step-by-step explanation of what the code does:\n",
"\n",
"1. **Define a named tuple `Book`**: The `namedtuple` function is used to create a lightweight, immutable data structure called `Book`. It has two attributes: `title` and `author`.\n",
"\n",
"2. **Create a list of `Book` objects**: A list of `Book` objects is created with some sample data.\n",
"\n",
"3. **Define an empty generator expression**: An empty generator expression is defined using the `{}` syntax, which will be used to yield values from another iterable.\n",
"\n",
"4. **Use `yield from` to delegate iteration**: The `yield from` keyword is used in conjunction with a dictionary comprehension. This allows us to \"delegate\" iteration over the values of the dictionary to an underlying iterable (in this case, the generator expression).\n",
"\n",
"5. **Filter books based on author presence**: Inside the dictionary comprehension, we use the `.get()` method to access the `author` attribute of each `Book` object. We then filter out any books that don't have an `author`.\n",
"\n",
"6. **Yield authors from filtered books**: The resulting generator expression yields the authors of only those books that have a valid author.\n",
"\n",
"**What Does it Do?**\n",
"\n",
"In essence, this code takes a list of `Book` objects and extracts their corresponding authors into a set (since sets automatically remove duplicates). It does so in an efficient manner by using generators to avoid loading all the data into memory at once.\n",
"\n",
"The output would be:\n",
"```python\n",
"{'AuthorA', 'AuthorB', 'AuthorC'}\n",
"```\n",
"This can be useful when working with large datasets where not all elements are required, or when you want to process data iteratively without loading everything into memory simultaneously."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Get Llama 3.2 to answer\n",
"\n",
"response = ollama.chat(model=MODEL_LLAMA, messages=messages)\n",
"reply = response['message']['content']\n",
"display(Markdown(reply))"
]
},
{
"cell_type": "markdown",
"id": "9eb0a013-c1f2-4f01-8b10-9f68325356e9",
"metadata": {},
"source": [
"# Modify\n",
"Update such that the question is taken as input and sent to the model for response"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "3f01b258-a293-4afc-a99c-d3cfb624b9eb",
"metadata": {},
"outputs": [],
"source": [
"def get_model_responses(question):\n",
" \"\"\"\n",
" Takes a question as input, queries GPT-4o-mini and Llama 3.2 models, \n",
" and displays their responses.\n",
" \n",
" Args:\n",
" question (str): The question to be processed by the models.\n",
" \"\"\"\n",
" # system_prompt is already declared above; generate a new user prompt so the input question can be sent\n",
" user_input_prompt = f\"Please give a detailed explanation to the following question: {question}\"\n",
"\n",
" messages = [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": user_input_prompt}\n",
" ]\n",
" # GPT-4o-mini Response with Streaming\n",
" print(\"Fetching response from GPT-4o-mini...\")\n",
" stream = openai.chat.completions.create(model=MODEL_GPT, messages=messages, stream=True)\n",
"\n",
" response_gpt = \"\"\n",
" display_handle = display(Markdown(\"\"), display_id=True)\n",
" for chunk in stream:\n",
" response_gpt += chunk.choices[0].delta.content or ''\n",
" response_gpt = response_gpt.replace(\"```\", \"\").replace(\"markdown\", \"\")\n",
" update_display(Markdown(response_gpt), display_id=display_handle.display_id)\n",
"\n",
" # Llama 3.2 Response\n",
" print(\"Fetching response from Llama 3.2...\")\n",
" response_llama = ollama.chat(model=MODEL_LLAMA, messages=messages)\n",
" reply_llama = response_llama['message']['content']\n",
" display(Markdown(reply_llama))"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "dd35ac5e-a934-4c20-9be9-657afef66c12",
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Please enter your question: what are the various career paths of data science\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fetching response from GPT-4o-mini...\n"
]
},
{
"data": {
"text/markdown": [
"Data science is a diverse and rapidly evolving field that encompasses a wide range of roles and specializations. As organizations increasingly rely on data-driven decision-making, the demand for data professionals has surged, giving rise to various career paths within data science. Here are some of the primary career paths:\n",
"\n",
"### 1. Data Scientist\n",
"**Role Description:** Data scientists are experts in extracting insights and knowledge from structured and unstructured data. They apply various techniques from statistics, machine learning, and data analysis to solve complex business problems.\n",
"\n",
"**Skills Required:**\n",
"- Proficient in programming languages like Python and R.\n",
"- Knowledge of machine learning algorithms and libraries (e.g., Scikit-learn, TensorFlow).\n",
"- Strong statistical background.\n",
"- Data visualization skills using tools like Matplotlib, Seaborn, Tableau, or Power BI.\n",
"\n",
"### 2. Data Analyst\n",
"**Role Description:** Data analysts focus on interpreting data and generating actionable insights. They analyze data trends and patterns, create visualizations, and communicate findings to stakeholders.\n",
"\n",
"**Skills Required:**\n",
"- Proficiency in SQL for database querying.\n",
"- Experience with Excel and data visualization tools (Tableau, Power BI).\n",
"- Strong analytical and problem-solving skills.\n",
"- Basic knowledge of statistics and data modeling.\n",
"\n",
"### 3. Machine Learning Engineer\n",
"**Role Description:** Machine learning engineers develop, implement, and optimize machine learning models. They focus on creating algorithms that enable systems to learn from data and make predictions or decisions.\n",
"\n",
"**Skills Required:**\n",
"- Strong programming skills (Python, Java, C++).\n",
"- Deep understanding of machine learning frameworks (TensorFlow, PyTorch).\n",
"- Experience with model deployment and scaling.\n",
"- Knowledge of data preprocessing and feature engineering.\n",
"\n",
"### 4. Data Engineer\n",
"**Role Description:** Data engineers are responsible for designing, building, and maintaining the infrastructure for data generation, storage, and retrieval. They ensure that data pipelines are efficient and scalable.\n",
"\n",
"**Skills Required:**\n",
"- Proficiency in programming (Python, Java, Scala).\n",
"- Experience with ETL (Extract, Transform, Load) processes and tools.\n",
"- Familiarity with database systems (SQL, NoSQL).\n",
"- Knowledge of data warehousing solutions (Amazon Redshift, Google BigQuery).\n",
"\n",
"### 5. Business Intelligence (BI) Analyst/Developer\n",
"**Role Description:** BI analysts focus on analyzing business data to provide strategic insights. They create dashboards and reports to help stakeholders make informed decisions.\n",
"\n",
"**Skills Required:**\n",
"- Strong SQL and data visualization skills.\n",
"- Familiarity with BI tools (Tableau, Power BI, Looker).\n",
"- Good understanding of business metrics and KPIs.\n",
"- Ability to communicate complex data insights clearly.\n",
"\n",
"### 6. Statistician\n",
"**Role Description:** Statisticians apply statistical methods to collect, analyze, and interpret data. They use their expertise to inform decisions in various fields, including healthcare, finance, and government.\n",
"\n",
"**Skills Required:**\n",
"- Proficiency in statistical software (SAS, R, SPSS).\n",
"- Strong foundation in probability and statistical theories.\n",
"- Ability to design experiments and surveys.\n",
"- Good visualization and reporting skills.\n",
"\n",
"### 7. Data Architect\n",
"**Role Description:** Data architects design the data infrastructure and architecture to support data management and analytics. They ensure data is reliable, consistent, and accessible.\n",
"\n",
"**Skills Required:**\n",
"- Expertise in data modeling and database design.\n",
"- Knowledge of data warehousing solutions.\n",
"- Familiarity with big data technologies (Hadoop, Spark).\n",
"- Understanding of data governance and security best practices.\n",
"\n",
"### 8. Data Product Manager\n",
"**Role Description:** Data product managers focus on developing and managing products that rely on data. They bridge the gap between technical teams and business stakeholders, ensuring that data initiatives align with business goals.\n",
"\n",
"**Skills Required:**\n",
"- Strong understanding of data and analytics.\n",
"- Project management skills (Agile methodologies).\n",
"- Ability to communicate effectively with technical and non-technical stakeholders.\n",
"- Knowledge of market trends and customer needs.\n",
"\n",
"### 9. Research Scientist\n",
"**Role Description:** Research scientists in data science focus on advanced data mining and machine learning techniques. They conduct experiments and develop new algorithms to solve complex scientific problems or improve existing methodologies.\n",
"\n",
"**Skills Required:**\n",
"- Advanced degrees (Ph.D.) in a relevant field (computer science, mathematics).\n",
"- Strong research and analytical skills.\n",
"- Proficiency in programming and statistical analysis.\n",
"- Experience with scientific computing and software development.\n",
"\n",
"### 10. AI/Deep Learning Specialist\n",
"**Role Description:** Specialists in AI and deep learning focus on developing advanced algorithms that enable machines to learn from large datasets. This includes work on neural networks, natural language processing, and computer vision.\n",
"\n",
"**Skills Required:**\n",
"- Strong knowledge of deep learning frameworks (Keras, TensorFlow).\n",
"- Familiarity with architecture design for neural networks.\n",
"- Experience with big data processing.\n",
"- Ability to handle unstructured data types (text, images).\n",
"\n",
"### Career Path Considerations\n",
"When choosing a career path in data science, it’s important to consider factors such as your educational background, interests, strengths, and the specific needs of the industry you want to work in. Many roles may require cross-disciplinary skills, so gaining a broad range of competencies can help you adapt and find your niche in the expansive field of data science.\n",
"\n",
"### Conclusion\n",
"Data science offers various fulfilling career paths to suit different interests and skill sets. With continuous growth in data generation and analytics needs, professionals in this field can expect a dynamic and rewarding career landscape. Continuous learning and adaptation to emerging technologies are crucial for success in these roles."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fetching response from Llama 3.2...\n"
]
},
{
"data": {
"text/markdown": [
"Data Science is a multifaceted field that encompasses a wide range of career paths. Here's a comprehensive overview of the various careers in Data Science:\n",
"\n",
"**1. Data Analyst**\n",
"\n",
"* Job Description: Collect, analyze, and interpret complex data to identify trends and patterns, often using visualization tools.\n",
"* Responsibilities:\n",
"\t+ Cleaning and preprocessing datasets\n",
"\t+ Developing reports and dashboards for stakeholders\n",
"\t+ Conducting ad-hoc analysis to answer business questions\n",
"\t+ Collaborating with other teams (e.g., product management, marketing) to inform decisions\n",
"* Salary Range: $60,000 - $100,000 per year\n",
"\n",
"**2. Data Scientist**\n",
"\n",
"* Job Description: Develop and apply advanced statistical and machine learning models to extract insights from large datasets.\n",
"* Responsibilities:\n",
"\t+ Designing and implementing data pipelines for data preparation and processing\n",
"\t+ Building and training machine learning models using techniques such as supervised and unsupervised learning, deep learning, and natural language processing\n",
"\t+ Collaborating with cross-functional teams (e.g., product management, engineering) to integrate insights into products and services\n",
"\t+ Communicating complex results and insights to stakeholders through reports and presentations\n",
"* Salary Range: $100,000 - $160,000 per year\n",
"\n",
"**3. Business Analyst**\n",
"\n",
"* Job Description: Apply data analysis skills to drive business decisions and optimize organizational performance.\n",
"* Responsibilities:\n",
"\t+ Analyzing business data to identify trends and areas for improvement\n",
"\t+ Developing predictive models to forecast future business outcomes\n",
"\t+ Collaborating with stakeholders (e.g., product managers, sales teams) to design and implement solutions\n",
"\t+ Communicating insights and recommendations to senior leadership\n",
"* Salary Range: $80,000 - $120,000 per year\n",
"\n",
"**4. Quantitative Analyst**\n",
"\n",
"* Job Description: Apply mathematical and statistical techniques to analyze and optimize investment strategies.\n",
"* Responsibilities:\n",
"\t+ Developing and implementing quantitative models for portfolio optimization, risk management, and trading\n",
"\t+ Analyzing large datasets to identify trends and patterns in financial markets\n",
"\t+ Collaborating with other teams (e.g., product management, marketing) to integrate insights into products and services\n",
"\t+ Communicating complex results and recommendations to senior leadership\n",
"* Salary Range: $100,000 - $180,000 per year\n",
"\n",
"**5. Data Engineer**\n",
"\n",
"* Job Description: Design, build, and maintain large-scale data systems for scalability, reliability, and performance.\n",
"* Responsibilities:\n",
"\t+ Building data pipelines using languages like Python, Java, or Scala\n",
"\t+ Developing cloud-based data platforms (e.g., AWS, GCP) for data storage and processing\n",
"\t+ Ensuring data quality and integrity across different data sources\n",
"\t+ Collaborating with other teams (e.g., product management, marketing) to integrate insights into products and services\n",
"* Salary Range: $110,000 - $160,000 per year\n",
"\n",
"**6. Machine Learning Engineer**\n",
"\n",
"* Job Description: Design, build, and deploy machine learning models for production use cases.\n",
"* Responsibilities:\n",
"\t+ Developing and deploying deep learning models using frameworks like TensorFlow or PyTorch\n",
"\t+ Building data pipelines to collect, preprocess, and process large datasets\n",
"\t+ Collaborating with cross-functional teams (e.g., product management, engineering) to integrate insights into products and services\n",
"\t+ Communicating complex results and recommendations to senior leadership\n",
"* Salary Range: $120,000 - $180,000 per year\n",
"\n",
"**7. Data Architect**\n",
"\n",
"* Job Description: Design and implement data management systems for organizations.\n",
"* Responsibilities:\n",
"\t+ Developing data warehousing and business intelligence solutions\n",
"\t+ Building data governance frameworks for data quality, security, and compliance\n",
"\t+ Collaborating with other teams (e.g., product management, marketing) to integrate insights into products and services\n",
"\t+ Communicating technical designs and trade-offs to stakeholders\n",
"* Salary Range: $140,000 - $200,000 per year\n",
"\n",
"**8. Business Intelligence Analyst**\n",
"\n",
"* Job Description: Develop and maintain business intelligence solutions using data visualization tools.\n",
"* Responsibilities:\n",
"\t+ Creating reports and dashboards for stakeholders\n",
"\t+ Developing predictive models for forecasted outcomes\n",
"\t+ Collaborating with other teams (e.g., product management, sales) to design and implement solutions\n",
"\t+ Communicating insights and recommendations to senior leadership\n",
"* Salary Range: $80,000 - $120,000 per year\n",
"\n",
"**9. Operations Research Analyst**\n",
"\n",
"* Job Description: Apply advanced analytical techniques to optimize business processes and improve decision-making.\n",
"* Responsibilities:\n",
"\t+ Developing optimization models using linear programming and integer programming\n",
"\t+ Analyzing complex data sets to identify trends and patterns\n",
"\t+ Collaborating with other teams (e.g., product management, engineering) to integrate insights into products and services\n",
"\t+ Communicating results and recommendations to senior leadership\n",
"* Salary Range: $90,000 - $140,000 per year\n",
"\n",
"**10. Data Scientist (Specialized)**\n",
"\n",
"* Job Description: Focus on specialized areas like natural language processing, computer vision, or predictive analytics.\n",
"* Responsibilities:\n",
"\t+ Building and training machine learning models using deep learning techniques\n",
"\t+ Collaborating with cross-functional teams (e.g., product management, engineering) to integrate insights into products and services\n",
"\t+ Communicating complex results and insights to stakeholders through reports and presentations\n",
"\t+ Staying up-to-date with the latest advancements in specialized areas\n",
"* Salary Range: $100,000 - $160,000 per year\n",
"\n",
"Keep in mind that salaries can vary widely depending on factors like location, industry, experience level, and company size. Additionally, these roles often require a combination of technical skills, business acumen, and soft skills to be successful."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Prompt user for their question\n",
"my_question = input(\"Please enter your question: \")\n",
"# Fetch and display responses from models\n",
"get_model_responses(my_question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4acf2af-635f-4216-9f5a-7c08d8313a07",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

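The tutor question in the notebook above asks both models to explain a `yield from` set comprehension. Their explanation can be checked directly; here is a minimal, self-contained version of the same snippet (the `books` data below is invented for illustration):

```python
# Sample data: one author repeats, one book has no author.
books = [
    {"title": "Book 1", "author": "Author A"},
    {"title": "Book 2", "author": "Author B"},
    {"title": "Book 3", "author": "Author A"},  # duplicate author
    {"title": "Book 4", "author": None},        # missing author
]

def unique_authors(books):
    # The set comprehension deduplicates authors and drops falsy values;
    # yield from delegates iteration over that set to the caller.
    yield from {book.get("author") for book in books if book.get("author")}

# Sets are unordered, so sort for a stable result.
print(sorted(unique_authors(books)))  # ['Author A', 'Author B']
```

As both model answers note, the duplicate "Author A" appears once and the authorless book is skipped.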
189
week1/community-contributions/day1-article-pdf-reader.ipynb

@@ -0,0 +1,189 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c7d95a7f-205a-4262-a1af-4579489025ff",
"metadata": {},
"source": [
"# Hello everyone."
]
},
{
"cell_type": "markdown",
"id": "bc815dbc-acf7-45f9-a043-5767184c44c6",
"metadata": {},
"source": [
"I completed the Day 1 first LLM experiment moments ago and found it really awesome. After the challenge was done, I wanted to chip in my two cents by making a PDF summarizer based on the code for the Website Summarizer. I want to share it in this contribution!\n",
"### To consider:\n",
"* To extract the contents of PDF files, I used the PyPDF2 library, which doesn't come with the default configuration of the virtual environment. To remedy this, follow these steps:\n",
"    1. Shut down Jupyter Lab. Pressing `CTRL-C` in the Anaconda terminal should achieve this.\n",
"    2. Run the following command: `pip install PyPDF2 --user`\n",
"    3. Restart Jupyter Lab with `jupyter lab`\n",
"* To find PDF files online, you can add `filetype:pdf` to your search query, e.g. searching the following can give you PDF files to add as input: `AI Engineering prompts filetype:pdf`!\n",
"\n",
"Without further ado, here's the PDF Summarizer!"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "06b63787-c6c8-4868-8a71-eb56b7618626",
"metadata": {},
"outputs": [],
"source": [
"# Import statements\n",
"import os\n",
"import requests\n",
"from dotenv import load_dotenv\n",
"from IPython.display import Markdown, display\n",
"from openai import OpenAI\n",
"from io import BytesIO\n",
"from PyPDF2 import PdfReader"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "284ca770-5da4-495c-b1cf-637727a8609f",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"\n",
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"# Check the key\n",
"\n",
"if not api_key:\n",
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
"elif not api_key.startswith(\"sk-proj-\"):\n",
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
"elif api_key.strip() != api_key:\n",
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
"else:\n",
" print(\"API key found and looks good so far!\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4c316d7-d9c9-4400-b03e-1dd629c6b2ad",
"metadata": {},
"outputs": [],
"source": [
"openai = OpenAI()\n",
"\n",
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
"# If it STILL doesn't work (horrors!) then please see the troubleshooting notebook, or try the below line instead:\n",
"# openai = OpenAI(api_key=\"your-key-here-starting-sk-proj-\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3a053092-f4f6-4156-8721-39353c8a9367",
"metadata": {},
"outputs": [],
"source": [
"# Step 0: Create article class\n",
"class Article:\n",
" def __init__(self, url):\n",
" \"\"\"\n",
" Create this Article object from the given url using the PyPDF2 library\n",
" \"\"\"\n",
" self.url = url \n",
" response = requests.get(self.url)\n",
" if response.status_code == 200:\n",
" pdf_bytes = BytesIO(response.content)\n",
" reader = PdfReader(pdf_bytes)\n",
" \n",
" text = \"\"\n",
" for page in reader.pages:\n",
" text += page.extract_text()\n",
" \n",
" self.text = text\n",
" self.title = reader.metadata.get(\"/Title\", \"No title found\")\n",
" else:\n",
" print(f\"Failed to fetch PDF. Status code: {response.status_code}\")\n",
" self.text = \"No text found\"\n",
" self.title = \"No title found\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "adc528f2-25ca-47b5-896e-9d417ba0195f",
"metadata": {},
"outputs": [],
"source": [
"# Step 1: Create your prompts\n",
"\n",
"def craft_user_prompt(article):\n",
" user_prompt = f\"You are looking at a research article titled {article.title}\\n Based on the body of the article, how are micro RNAs produced in the cell? State the function of the proteins \\\n",
" involved. The body of the article is as follows.\"\n",
" user_prompt += article.text\n",
" return user_prompt\n",
"\n",
"# Step 2: Make the messages list\n",
"def craft_messages(article):\n",
"system_prompt = \"You are an assistant that analyses the contents of a research article and provides answers to the question asked by the user in 250 words or less. \\\n",
" Ignore text that doesn't belong to the article, like headers or navigation related text. Respond in markdown. Structure your text in the form of question/answer.\"\n",
" return [\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": craft_user_prompt(article)}\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "81ab896e-1ba9-4964-a477-2a0608b7036c",
"metadata": {},
"outputs": [],
"source": [
"# Step 3: Call OpenAI\n",
"def summarize(url):\n",
" article = Article(url)\n",
" response = openai.chat.completions.create(\n",
" model = \"gpt-4o-mini\",\n",
" messages = craft_messages(article)\n",
" )\n",
" return response.choices[0].message.content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7a98cdf-0d3b-477d-8e39-a6a4264b9feb",
"metadata": {},
"outputs": [],
"source": [
"# Step 4: Print the result of an example pdf\n",
"summary = summarize(\"https://www.nature.com/articles/s12276-023-01050-9.pdf\")\n",
"display(Markdown(summary))"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

480
week1/community-contributions/day1_first_llm_videotranscript_summary.ipynb

@ -0,0 +1,480 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "387e2968-3bfd-48c6-a925-d315f4566623",
"metadata": {},
"source": [
"# Instant Gratification\n",
"## Your first Frontier LLM Project!\n",
"Using **Gemini API** to summarise transcripts from class videos. <br>\n",
"Tested with: *day_1_first_llm_experiment_summarization_project* transcript video. \n",
"## [Test_video](https://www.udemy.com/course/llm-engineering-master-ai-and-large-language-models/learn/lecture/46867741#questions)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9540582d-8d2a-4c14-b117-850823b634a0",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# imports\n",
"import os, sys\n",
"import google.generativeai as genai\n",
"from dotenv import load_dotenv\n",
"from IPython.display import HTML, Markdown, display"
]
},
{
"cell_type": "markdown",
"id": "2fe0d366-b183-415c-b6e1-4993afd82f2a",
"metadata": {},
"source": [
"# Connecting to Gemini API\n",
"\n",
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "89c1194c-715b-41ff-8cb7-6b6067c83ea5",
"metadata": {},
"outputs": [],
"source": [
"# Load environment variables in a file called .env\n",
"load_dotenv()\n",
"api_key = os.getenv('GOOGLE_API_KEY')\n",
"\n",
"# Check the key\n",
"if not api_key:\n",
" print(\"No API key was found!\")\n",
"else:\n",
" print(\"Great! API key found and looks good so far!\")"
]
},
{
"cell_type": "markdown",
"id": "6bc036e2-54c1-4206-a386-371a9705b190",
"metadata": {},
"source": [
"# Upload Daily or Weekly Transcriptions\n",
"If you have text files corresponding to your video transcripts, upload them by day or week. With the help of Cutting-edge LLM models, you will get accurate summaries, highlighting key topics."
]
},
{
"cell_type": "markdown",
"id": "8fcf4b72-49c9-49cd-8b1c-b5a4df38edf7",
"metadata": {},
"source": [
"## Read data from txt files"
]
},
{
"cell_type": "markdown",
"id": "00466898-68d9-43f7-b8d3-7d61696061de",
"metadata": {
"jp-MarkdownHeadingCollapsed": true
},
"source": [
"```\n",
"# Read the entire file using read() function\n",
"file = open(\"../day_1_first_llm_experiment_summarization_project.txt\", \"r\") # Your file path\n",
"file_content = file.read()\n",
"text = file_content\n",
"file.close()\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "37d15b83-786d-40e9-b730-4f654e8bec1e",
"metadata": {},
"source": [
"## Types of prompts\n",
"\n",
"You may know this already - but if not, you will get very familiar with it!\n",
"\n",
"Models like GPT4o have been trained to receive instructions in a particular way.\n",
"\n",
"They expect to receive:\n",
"\n",
"**A system prompt** that tells them what task they are performing and what tone they should use\n",
"\n",
"**A user prompt** -- the conversation starter that they should reply to\n"
]
},
{
"cell_type": "markdown",
"id": "846c63a4-14e0-4a3c-99ce-654a6928dc20",
"metadata": {},
"source": [
"### For this example, we will directly input the text file into the prompt."
]
},
{
"cell_type": "markdown",
"id": "e6ef537b-a660-44e3-a0c2-94f3b9e60b11",
"metadata": {},
"source": [
"## Messages"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1e96271-593a-4e16-bb17-81c834a59178",
"metadata": {},
"outputs": [],
"source": [
"system_message = \"You are an assistant that analyzes the contents of text files \\\n",
"and provides an accurate summary, ignoring text that might be irrelevant. \\\n",
"Respond in markdown.\"\n",
"\n",
"#user_prompt = file_content Use if you load your data\n",
"user_prompt = \"\"\"\n",
"It's time for our first LM experiment at this point.\n",
"So some of this you may know well, you may know very well already.\n",
"For some people this might be new, but let me just explain.\n",
"The models that we're going to be using.\n",
"These frontier models have been trained in a particular way.\n",
"That means that they expect two different types of instruction from us the user.\n",
"One of them is known as the system prompt, and one of them is known as the user prompt.\n",
"The system prompt is something which explains the context of this conversation.\n",
"It tells them what kind of task they're performing, what tone they should use, and we'll be experimenting\n",
"with what it means to to change a system prompt and what kind of information that you can include in\n",
"the system prompt throughout this course.\n",
"The user prompt is the actual conversation itself.\n",
"And in our case right now, it's going to just be the the conversation starter.\n",
"And the role of the LM of the large language model is to figure out what is the most likely way that\n",
"it should respond, given this user prompt.\n",
"If it's given this user prompt, and in the context of this system prompt, what is the most likely\n",
"next text that will come after it?\n",
"That would come from an assistant responding to this user.\n",
"So that's the difference between the system prompt that sets the context, the user prompt that is the\n",
"conversation starter.\n",
"So we're going to set a system prompt.\n",
"And this is what it's going to say.\n",
"It's going to say you are an assistant that analyzes the contents of a website and provides a short\n",
"summary, ignoring texts that might be navigation related.\n",
"Respond in markdown.\n",
"You'll see more of what that means in in just a second.\n",
"So that is our system prompt for the user prompt.\n",
"It's going to take as a we're going to write a function user prompt for.\n",
"And it's going to take a website as the argument to the function.\n",
"And it's going to say you are looking at a website titled The Website.\n",
"The contents of this website is as follows.\n",
"Please provide a short summary of the website in markdown if it includes news or announcements.\n",
"Summarize these two and we then take the text from the website object that Beautifulsoup plucked out\n",
"for us, and we add that into the user prompt and we return that user prompt.\n",
"So let's just quickly let's run that cell right now and let's just have a look now.\n",
"So after doing that, if I just look at what system Prompt has.\n",
"It has that text of course that we just said.\n",
"And now if you remember earlier on we created a new website object and we stored it in this variable\n",
"editor.\n",
"So if I come here I should be able to say user prompt for and then pass in the object Ed.\n",
"And what we'll get is a prompt.\n",
"It might be easier if I print this so that it prints out empty lines.\n",
"And here is the user prompt string that we've created.\n",
"It says you're looking at a website titled blah blah blah.\n",
"The contents of this website is as follows.\n",
"Please provide a short summary.\n",
"Look, it looks like we should have a space right here, otherwise it might be confusing.\n",
"Let's try that again.\n",
"That's always why it's worth printing things as you go, because you'll spot little inconsistencies\n",
"like that.\n",
"I think it'll be nicer, actually, now that I look at that.\n",
"If we have a carriage return there like so.\n",
"Let's have a look at this prompt.\n",
"Now you're looking at the website and there we go on a separate line that looks good okay.\n",
"So let's talk about the messages object.\n",
"So OpenAI expects to receive a conversation in a particular format.\n",
"It's a format that OpenAI came up with and they used for their APIs, and it became so well used that\n",
"all of the other major frontier models decided to adopt the same convention.\n",
"So this has gone from being originally OpenAI's way of using the API to being something of a standard\n",
"across many different models to use this approach.\n",
"And here's how it works.\n",
"When you're trying to describe a conversation, you describe it using a list a Python list of dictionaries.\n",
"So it's a list where each element in the list is a dictionary.\n",
"And that dictionary looks like this.\n",
"It's a dictionary with two elements.\n",
"One of them has a key of role, and here the value is either system or user, a key of role.\n",
"And the value is system a key of content.\n",
"And the value is of course the system message.\n",
"There's another Dictionary where there's a key of role.\n",
"The value is user because it's the user message.\n",
"The user prompt content is where the user message goes.\n",
"User message and user prompt are the same thing.\n",
"So hopefully I didn't explain it very well, but it makes sense when you see it visually like this.\n",
"It's just a dictionary which has role and content, system and system, message user and the user message.\n",
"And there are some other roles as well, but we're going to get to them in good time.\n",
"This is all we need for now.\n",
"So this is how messages are built.\n",
"And if you look at this next function def messages for hopefully it's super clear to you that this is\n",
"creating.\n",
"This here is creating exactly this construct using code.\n",
"It's going to do it's going to put in there the generic system prompt we came up with.\n",
"And it's going to create the user prompt for the website.\n",
"So let's run that.\n",
"And now, presumably it's clear that if I say messages for Ed, which is the object for my website,\n",
"let's print it so that we see empty lines and stuff.\n",
"Actually, sorry, in this case it might be better if we don't print it.\n",
"If we just do this, it might look a bit clearer.\n",
"There we go.\n",
"And now you can see that it is it's a list of two things role system.\n",
"And there's a system message role user.\n",
"And there is the user message.\n",
"Okay.\n",
"It's time to bring this together.\n",
"It's time to actually do it.\n",
"The API for OpenAI to make a call to a frontier model to do this for us is super simple, and we're\n",
"going to be using this API all the time.\n",
"So whereas now it might look like it's a few things to remember.\n",
"You're going to get so used to this, but we're going to make a function called summarize.\n",
"And that is that's going to do the business that's going to solve our problem and summarize a URL that's\n",
"passed in.\n",
"It will first create a website for that URL, just like we did for editor.\n",
"And this is where we call OpenAI.\n",
"We say OpenAI, which is the the OpenAI object.\n",
"We created OpenAI dot chat, dot completions, dot create.\n",
"And that for now you can just learn it by rote.\n",
"We'll understand a lot more about that later.\n",
"But as far as OpenAI is concerned, this is known as the completions API because we're asking it to\n",
"complete this conversation, predict what would be most likely to come next.\n",
"We pass in the name of the model we're going to use.\n",
"We're going to use a model called GPT four mini that you'll get very familiar with.\n",
"It is the light, cheap version of GPT four, the the one of the finest models on the planet, and this\n",
"will cost fractions of a cent to use.\n",
"This, um, you pass in the model and then you pass in the messages and the messages we pass in, use\n",
"this structure that we've just created and that is all it takes.\n",
"What comes back we put in this this object response.\n",
"And when we get back the response we call response dot choices zero dot message dot content.\n",
"Now I'm going to explain what this is another day we don't need to know.\n",
"For now.\n",
"We just need to know that we're going to do response dot choices zero dot message dot content.\n",
"That's going to be it.\n",
"That is our summarize function.\n",
"And with that let's try summarizing my website we're running.\n",
"It's now connecting to OpenAI in the cloud.\n",
"It's making the call and back.\n",
"Here is a summary of my website.\n",
"We have just uh, spent a fraction of a cent and we have just summarized my website.\n",
"We can do a little bit better because we can print this in a nice style.\n",
"Uh, GPT four, we've asked to respond in markdown, and that means that it's responded with various\n",
"characters to represent headings, things in bold and so on.\n",
"And we can use a feature of Jupyter Labs that we can ask it to actually show that in a nice markdown\n",
"format.\n",
"So let's do that.\n",
"Let's use this display summary function and try again.\n",
"Again we're going to GPT for a mini in the cloud.\n",
"And here is a summary of my website.\n",
"Uh, it says something about me.\n",
"Uh, and it's uh yeah, very nicely formatted, very nicely structured.\n",
"Pretty impressive.\n",
"And apparently it highlights my work with proprietary LMS, offers resources related to AI and LMS,\n",
"showcasing his commitment to advancing knowledge in this field.\n",
"Good for you, GPT for mini.\n",
"That's a very nice summary.\n",
"Okay.\n",
"And now we can try some more websites.\n",
"Let's try summarizing cnn.com.\n",
"Uh, we'll see what this happens.\n",
"Obviously, CNN is a much bigger, uh, result you've got here.\n",
"Uh, and, uh, we get some information about what's going on.\n",
"I'm actually recording this right now on the 5th of November at, uh, in the evening, which is the\n",
"date of the 2024 elections going on right now.\n",
"So that, of course, is featured on CNN's web page.\n",
"We can also summarize anthropic, which is the website for Claude.\n",
"And they have a nice page.\n",
"And here you go.\n",
"And you can read more about it in this nice little summary of their web page.\n",
"All right.\n",
"And that wraps up our first instant gratification.\n",
"It's it's juicy.\n",
"It's something where we've actually done something useful.\n",
"We've scraped the web.\n",
"We've summarized summarization is one of the most common AI use cases.\n",
"So common it's useful for all sorts of purposes.\n",
"We'll be doing it a few different ways during during this course, even in our week eight a sticky solution\n",
"will be using something that will do some summarization.\n",
"So it's a great, uh, thing to have experimented with already.\n",
"So there are so many other business applications of summarization.\n",
"This is something you should be able to put to good use.\n",
"You should be able to think of some ways you could apply this to your day job right away, or be building\n",
"a couple of example projects in GitHub that show summarization in action.\n",
"You could apply it to summarizing the news, summarizing financial performance from a financial report,\n",
"a resume, and a cover letter.\n",
"You could you could take a resume and generate a cover letter.\n",
"Uh, there are so many different things you can do with summarization of of documents.\n",
"And also adding on to that the scraping the web angle of it.\n",
"So have a think about how you would apply summarization to your business and try extending this to do\n",
"some summarization.\n",
"There's also uh, for for the more technically inclined, uh, one of the things that you'll discover\n",
"quite quickly when you use this is that there are many websites that cannot be summarized with this\n",
"approach, and that's because they use JavaScript to render the web page and are rather simplistic.\n",
"Approach has just taken the the just just made the requests the server call and taken what we get back.\n",
"But there's a solution.\n",
"And the solution is to use a platform like selenium or others like it, or playwright, which would\n",
"allow you to render the page and and do it that way.\n",
"So if you're technically inclined and have some background with that kind of thing, then a really interesting\n",
"challenge is to turn this into something that's a bit beefier and add selenium to the mix.\n",
"Um, as it happens, someone has already done that.\n",
"Uh, one of the students, thank you very much.\n",
"And if you go into this folder community contributions, you'll see a few different solutions.\n",
"And one of them is a selenium based solution.\n",
"So you can always go in and just just look at that yourself.\n",
"Or you can have a shot at doing it too.\n",
"And you'll find the solution in there.\n",
"And if you do come up with a solution to that or to anything, I would love it if you were willing to\n",
"share your code so that others can benefit from it.\n",
"Ideally, put it in the community contributions folder and be sure to clear the output.\n",
"So you go to kernel restart kernel and clear outputs of all cells.\n",
"Otherwise, everything that you've got in your output would also get checked into code which which would\n",
"just clutter things up a bit.\n",
"So so do that.\n",
"And then if you could submit a PR, a pull request, I can then merge that into the code.\n",
"And if that's a new thing for you, it is a bit of a process.\n",
"There is a write up here for exactly what you need to do to make that work.\n",
"Anyways, this was the first project, the first of many.\n",
"It's a simple project, but it's an important one.\n",
"A very important business use case.\n",
"I hope you found it worthwhile.\n",
"I will see you for the next video when we wrap up.\n",
"Week one.\n",
"Day one.\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ab80b7dd-4b07-4460-9bdd-90bb6ba9e285",
"metadata": {},
"outputs": [],
"source": [
"prompts = [\n",
" {\"role\": \"system\", \"content\": system_message},\n",
" {\"role\": \"user\", \"content\": user_prompt}\n",
" ]"
]
},
{
"cell_type": "markdown",
"id": "0a04ec8b-4d44-4a90-9d84-34fbf757bbe4",
"metadata": {},
"source": [
"## The structure to connect with Gemini API was taken from this contribution. \n",
"### [From this notebook](https://github.com/ed-donner/llm_engineering/blob/main/week2/community-contributions/day1-with-3way.ipynb)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3aecf1b4-786c-4834-8cae-0a2758ea3edd",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# The API for Gemini - Structure\n",
"genai.configure(api_key=api_key)\n",
"\n",
"gemini = genai.GenerativeModel(\n",
" model_name='gemini-1.5-flash',\n",
" system_instruction=system_message\n",
")\n",
"response = gemini.generate_content(user_prompt)\n",
"response = response.text\n",
"# response = str(response.text) Convert to string in order to save the response as text file\n",
"print(response)"
]
},
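{
"cell_type": "code",
"execution_count": null,
"id": "gemini-streaming-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Optional sketch (an addition, not required for the summary above):\n",
"# generate_content also accepts stream=True, which yields the reply in chunks.\n",
"# Assumes `gemini` and `user_prompt` are defined in the cells above.\n",
"stream = gemini.generate_content(user_prompt, stream=True)\n",
"for chunk in stream:\n",
"    print(chunk.text, end='')"
]
},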
{
"cell_type": "markdown",
"id": "cf9a97d9-d935-40de-9736-e566a26dff25",
"metadata": {},
"source": [
"## To save the processed text data as a file, utilize the following code:\n",
"\n",
"```\n",
"# This is a common pattern for writing text to a file in Python, \n",
"with open('data_transcript/pro_summary.txt', 'w') as fp:\n",
" fp.write(response)\n",
" fp.close()\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a36a01fa-5718-4bee-bb1b-ad742ab86d6a",
"metadata": {},
"outputs": [],
"source": [
"# Markdown(response.text) If you convert the data type of the variable \"response\" to a string\n",
"Markdown(response)"
]
},
{
"cell_type": "markdown",
"id": "fe959162-2c24-4077-b273-ea924e568731",
"metadata": {},
"source": [
"# Key Benefits of AI Summarization:\n",
"\n",
"__Time-Saving:__ Quickly process large volumes of text, such as research papers, reports, and news articles. <br>\n",
"__Improved Comprehension:__ Identify key points and insights more efficiently. <br>\n",
"__Enhanced Decision-Making:__ Make informed decisions based on accurate and concise information. <br>\n",
"__Cost Reduction:__ Reduce labor costs associated with manual summarization tasks. <br>\n",
"\n",
"# Potential Applications in Business Development:\n",
"\n",
"__Market Research:__ Quickly analyze market reports and competitor insights to identify trends and opportunities. <br>\n",
"__Sales and Marketing:__ Summarize customer feedback and product reviews to inform marketing strategies. <br>\n",
"__Customer Support:__ Quickly process customer inquiries and provide accurate answers. <br>\n",
"__Legal and Compliance:__ Analyze legal documents and contracts to identify key clauses and potential risks. <br>\n",
"__Human Resources:__ Summarize job applications and performance reviews to streamline hiring and evaluation processes."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "llm",
"language": "python",
"name": "llm"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

453
week1/community-contributions/day5 company brochure.ipynb

@ -0,0 +1,453 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e14248ff-07be-4ba8-a13c-d8c7f40ffb5f",
"metadata": {},
"source": [
"# A full business solution\n",
"## Now we will take our project from Day 1 to the next level\n",
"## BUSINESS CHALLENGE:\n",
"Create a product that builds a Brochure for a company to be used for prospective clients, investors and potential recruits.\n",
"\n",
"We will be provided a company name and their primary website.\n",
"\n",
"See the end of this notebook for examples of real-world business applications.\n",
"\n",
"And remember: I'm always available if you have problems or ideas! Please do reach out."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6c8dc88a-85d9-493b-965c-68895cdd93f2",
"metadata": {},
"outputs": [],
"source": [
"#imports \n",
"\n",
"import os\n",
"import requests\n",
"import json\n",
"from typing import List\n",
"from dotenv import load_dotenv\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display, update_display\n",
"from openai import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "131c483b-dd58-4faa-baf5-469ab6b00fbb",
"metadata": {},
"outputs": [],
"source": [
"# Initialize and constants\n",
"\n",
"load_dotenv()\n",
"api_key=os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key[:8]=='sk-proj-':\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? \")\n",
"\n",
"MODEL='gpt-4o-mini'\n",
"openai=OpenAI()\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "196c0dee-7236-4f88-b7c2-f2a885190b19",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"\n",
"class Website:\n",
" \"\"\"\n",
" A utility class to represent a Website that we have scraped, now with links\n",
" \"\"\"\n",
"\n",
" def __init__(self, url):\n",
" self.url = url\n",
" response = requests.get(url)\n",
" self.body = response.content\n",
" soup = BeautifulSoup(self.body, 'html.parser')\n",
" self.title = soup.title.string if soup.title else \"No title found\"\n",
" if soup.body:\n",
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
" irrelevant.decompose()\n",
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
" else:\n",
" self.text = \"\"\n",
" links = [link.get('href') for link in soup.find_all('a')]\n",
" self.links = [link for link in links if link]\n",
"\n",
" def get_contents(self):\n",
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\""
]
},
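{
"cell_type": "code",
"execution_count": null,
"id": "safe-website-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch (an addition, not part of the original notebook) showing one\n",
"# way to guard against network errors when scraping many links; the helper\n",
"# name `safe_website` is made up for illustration.\n",
"def safe_website(url):\n",
"    try:\n",
"        return Website(url)\n",
"    except requests.RequestException as e:\n",
"        print(f\"Skipping {url}: {e}\")\n",
"        return None"
]
},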
{
"cell_type": "code",
"execution_count": null,
"id": "f1329717-3727-4987-ada7-75df87a10459",
"metadata": {},
"outputs": [],
"source": [
"ed=Website(\"https://www.anthropic.com/\")\n",
"print(ed.get_contents)\n",
"ed.links"
]
},
{
"cell_type": "markdown",
"id": "912d4f83-c8f1-437c-a01b-e21988af477c",
"metadata": {},
"source": [
"## First step: Have GPT-4o-mini figure out which links are relevant\n",
"\n",
"### Use a call to gpt-4o-mini to read the links on a webpage, and respond in structured JSON. \n",
"It should decide which links are relevant, and replace relative links such as \"/about\" with \"https://company.com/about\". \n",
"We will use \"one shot prompting\" in which we provide an example of how it should respond in the prompt.\n",
"\n",
"This is an excellent use case for an LLM, because it requires nuanced understanding. Imagine trying to code this without LLMs by parsing and analyzing the webpage - it would be very hard!\n",
"\n",
"Sidenote: there is a more advanced technique called \"Structured Outputs\" in which we require the model to respond according to a spec. We cover this technique in Week 8 during our autonomous Agentic AI project."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ed206771-df05-429d-8743-310bc86358ce",
"metadata": {},
"outputs": [],
"source": [
"link_system_prompt=\"You are provided with a list of links found on a webpage. \\\n",
"You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n",
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n",
"link_system_prompt+=\"You should respond in JSON as in this example:\"\n",
"link_system_prompt+=\"\"\"\n",
"{\n",
" \"links\":[\n",
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n",
" {\"type\": \"careers page\": \"url\": \"https://another.full.url/careers\"}\n",
" ]\n",
"}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ef835a85-9a48-42bd-979e-ca5f51bb1586",
"metadata": {},
"outputs": [],
"source": [
"print(link_system_prompt)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f2885e89-6455-4239-a98d-5599ea6e5947",
"metadata": {},
"outputs": [],
"source": [
"def get_links_user_prompt(website):\n",
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n",
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n",
"Do not include Terms of Service, Privacy, email links.\\n\"\n",
" user_prompt += \"Links (some might be relative links):\\n\"\n",
" user_prompt += \"\\n\".join(website.links)\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "da7e4468-a225-4263-a212-94b1c69d38da",
"metadata": {},
"outputs": [],
"source": [
"print(get_links_user_prompt(ed))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "53c59051-eed0-4292-8204-abbbd1d78df4",
"metadata": {},
"outputs": [],
"source": [
"def get_links(url):\n",
" website=Website(url)\n",
" response=openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": link_system_prompt},\n",
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n",
" ],\n",
" response_format={\"type\":\"json_object\"}\n",
" )\n",
" result=response.choices[0].message.content\n",
" return json.loads(result)"
]
},
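{
"cell_type": "code",
"execution_count": null,
"id": "validate-links-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A hedged sketch (not from the course code): the model's JSON is not\n",
"# guaranteed to match the example exactly, so this hypothetical helper keeps\n",
"# only well-formed entries before we fetch them.\n",
"def valid_links(result):\n",
"    return [link for link in result.get(\"links\", [])\n",
"            if isinstance(link, dict) and \"url\" in link and \"type\" in link]"
]
},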
{
"cell_type": "code",
"execution_count": null,
"id": "76d3d68d-6534-4b04-8a26-a07a9e532665",
"metadata": {},
"outputs": [],
"source": [
"anthropic=Website(\"https://www.anthropic.com/\")\n",
"anthropic.links"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "12ca6438-bc99-4b45-9603-54bee5d8bce2",
"metadata": {},
"outputs": [],
"source": [
"get_links(\"https://www.anthropic.com/\")"
]
},
{
"cell_type": "markdown",
"id": "4304d6e8-900e-4702-b84c-f202d6265459",
"metadata": {},
"source": [
"## Second step: make the brochure!\n",
"\n",
"Assemble all the details into another prompt to GPT4-o"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "91ac10e6-8a7a-4367-939b-ac537c1c6c67",
"metadata": {},
"outputs": [],
"source": [
"def get_all_details(url):\n",
" result=\"Landing page:\\n\"\n",
" result+=Website(url).get_contents()\n",
" links=get_links(url)\n",
" print(\"Found links:\",links)\n",
" for link in links[\"links\"]:\n",
" result+=f\"\\n\\n{link['type']}\\n\"\n",
" result+=Website(link[\"url\"]).get_contents()\n",
" return result"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "765e9c71-2bbc-4222-bce1-0f553d8d2b10",
"metadata": {},
"outputs": [],
"source": [
"print(get_all_details(\"https://anthropic.com\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7116adc1-6f5e-445f-9869-ffcf5fa6a9b8",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
"and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
"Include details of company culture, customers and careers/jobs if you have the information.\"\n",
"\n",
"# Or uncomment the lines below for a more humorous brochure - this demonstrates how easy it is to incorporate 'tone':\n",
"\n",
"# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n",
"# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n",
"# Include details of company culture, customers and careers/jobs if you have the information.\"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "02edb903-6352-417f-8c0f-85c2eee269b6",
"metadata": {},
"outputs": [],
"source": [
"def get_brochure_user_prompt(company_name, url):\n",
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n",
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n",
" user_prompt += get_all_details(url)\n",
" user_prompt = user_prompt[:20_000] # Truncate if more than 20,000 characters\n",
" return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f760069-910e-4209-b357-b97e710f560d",
"metadata": {},
"outputs": [],
"source": [
"get_brochure_user_prompt(\"Anthropic\", \"https://anthropic.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "faf9d9cc-fe30-4441-9adc-aee5b4dc80ca",
"metadata": {},
"outputs": [],
"source": [
"def create_brochure(company_name, url):\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" )\n",
" result = response.choices[0].message.content\n",
" display(Markdown(result))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8a672f4-ee87-4e2a-a6b1-dfb46f344ef3",
"metadata": {},
"outputs": [],
"source": [
"create_brochure(\"Anthropic\", \"https://anthropic.com\")"
]
},
{
"cell_type": "markdown",
"id": "781fa1db-7acc-41fc-b26c-0d64964eb161",
"metadata": {},
"source": [
"## Finally - a minor improvement\n",
"\n",
"With a small adjustment, we can change this so that the results stream back from OpenAI,\n",
"with the familiar typewriter animation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b8359501-9f05-42bc-916c-7990ac910866",
"metadata": {},
"outputs": [],
"source": [
"def stream_brochure(company_name, url):\n",
" stream= openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": system_prompt},\n",
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n",
" ],\n",
" stream=True\n",
" )\n",
"\n",
" response=\"\"\n",
" display_handle=display(Markdown(\"\"),display_id=True)\n",
" for chunk in stream:\n",
" response +=chunk.choices[0].delta.content or ''\n",
" response = response.replace(\"```\",\"\").replace(\"markdown\",\"\")\n",
" update_display(Markdown(response),display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd834aa7-deda-40cd-97ab-5fa5117fc6e0",
"metadata": {},
"outputs": [],
"source": [
"stream_brochure(\"HuggingFace\", \"http://huggingface.co\")"
]
},
{
"cell_type": "markdown",
"id": "207068f8-d768-46b2-8b92-0ec78a9f71ae",
"metadata": {},
"source": [
"# Convert the brochure to a specified language\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e75be9e6-040d-4178-a5b3-1b7ae4460bc8",
"metadata": {},
"outputs": [],
"source": [
"def create_brochure_language(company_name, url, language):\n",
" language_prompt = f\"You are a professional translator and writer specializing in creating and translating brochures. Convert the brochure to {language} while maintaining its original tone, format, and purpose.\"\n",
" user_language_prompt = f\"Generate a brochure for the company '{company_name}' available at the URL: {url}, and translate it into {language}.\"\n",
" response = openai.chat.completions.create(\n",
" model=MODEL,\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": language_prompt},\n",
" {\"role\": \"user\", \"content\": user_language_prompt}\n",
" ],\n",
" )\n",
" result = response.choices[0].message.content\n",
" display(Markdown(result))\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0748ec58-335b-4796-ae15-300dee7b24b0",
"metadata": {},
"outputs": [],
"source": [
"create_brochure_language(\"HuggingFace\", \"http://huggingface.co\",\"Hindi\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba54f80b-b2cd-4a50-b460-e0d042499c49",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "182f35da-d7b1-40f8-b1a7-74e0cd7fd6fe",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.10"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
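
The streaming loop in `stream_brochure` accumulates partial deltas into one growing string and scrubs code fences on every pass. The same accumulation logic can be exercised without an API key by mocking the shape of OpenAI's streaming chunks; the sketch below is a minimal stand-in, assuming fake chunks in place of the real client (`fake_stream` and `accumulate` are hypothetical names, not part of the notebook).

```python
from types import SimpleNamespace

FENCE = "`" * 3  # literal triple backtick, built programmatically so it can't break rendering

def fake_stream(parts):
    # Mock stand-in mimicking the shape of OpenAI streaming chunks:
    # each chunk exposes chunk.choices[0].delta.content, which may be None.
    for part in parts:
        yield SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=part))])

def accumulate(stream):
    response = ""
    for chunk in stream:
        response += chunk.choices[0].delta.content or ''  # deltas can be None
        # drop code fences and the word "markdown" so the text renders cleanly
        response = response.replace(FENCE, "").replace("markdown", "")
    return response

print(accumulate(fake_stream([FENCE + "markdown\n", "# Hello", None, " world"])))
```

Note one quirk of this scrubbing approach: `replace("markdown", "")` deletes the word "markdown" wherever it appears, not just in fence labels, so a brochure that legitimately mentions Markdown would be silently mangled.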

2
week2/community-contributions/day4.ipynb

@@ -245,7 +245,7 @@
 " arguments = json.loads(tool_call.function.arguments)\n",
 " city = arguments.get('destination_city')\n",
 " price = get_ticket_price(city)\n",
-" if price != \"Unknow\":\n",
+" if price != \"Unknown\":\n",
 " ticket_booked = book_ticket(city)\n",
 " response = {\n",
 " \"role\": \"tool\",\n",

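The hunk above fixes a sentinel-string typo: the price lookup returns the string `"Unknown"` for unlisted cities, so comparing against the misspelled `"Unknow"` meant the check always passed and a ticket was booked even when no price existed. A minimal runnable sketch of that flow, assuming a mocked tool call and a hypothetical prices table (`ticket_prices`, `handle_tool_call`, and the city values are illustrative, not from the notebook):

```python
import json
from types import SimpleNamespace

# Hypothetical prices table; the real notebook defines its own get_ticket_price.
ticket_prices = {"london": "$799", "paris": "$899"}

def get_ticket_price(city):
    # Return the sentinel string "Unknown" (not None) for unlisted cities,
    # matching the convention the hunk above relies on.
    return ticket_prices.get(city.lower(), "Unknown")

def handle_tool_call(tool_call):
    arguments = json.loads(tool_call.function.arguments)
    city = arguments.get('destination_city')
    price = get_ticket_price(city)
    # With the typo ("Unknow"), this comparison was always True,
    # so tickets were "booked" even for cities with no known price.
    booked = price != "Unknown"
    return {
        "role": "tool",
        "content": json.dumps({"city": city, "price": price, "booked": booked}),
    }

# Simulate a tool call for an unlisted city: no booking should happen.
call = SimpleNamespace(function=SimpleNamespace(arguments='{"destination_city": "Tokyo"}'))
print(handle_tool_call(call))
```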
4
week2/day5.ipynb

@@ -356,7 +356,9 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"!ffmpeg -version"
+"!ffmpeg -version\n",
+"!ffprobe -version\n",
+"!ffplay -version"
 ]
 },
 {
