15 changed files with 3986 additions and 0 deletions
@@ -0,0 +1,316 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "1c6700cb-a0b0-4ac2-8fd5-363729284173",
   "metadata": {},
   "source": [
    "# AI-Powered Resume Analyzer for Job Postings"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a2fa4891-b283-44de-aa63-f017eb9b140d",
   "metadata": {},
   "source": [
    "This tool is designed to analyze resumes against specific job postings, offering valuable insights such as:\n",
    "\n",
    "- Identification of skill gaps\n",
    "- Keyword matching between the CV and the job description\n",
    "- Tailored recommendations for CV improvement\n",
    "- An alignment score reflecting how well the CV fits the job\n",
    "- Personalized feedback\n",
    "- Job market trend insights\n",
    "\n",
    "An example of the tool's output can be found [here](https://tvarol.github.io/sideProjects/AILLMAgents/output.html)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8a6a34ea-191f-4c54-9793-a3eb63faab23",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Imports\n",
    "import os\n",
    "import io\n",
    "import time\n",
    "import requests\n",
    "import PyPDF2\n",
    "from dotenv import load_dotenv\n",
    "from IPython.display import Markdown, display\n",
    "from openai import OpenAI\n",
    "from ipywidgets import Textarea, FileUpload, Button, VBox, HTML"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "04bbe1d3-bacc-400c-aed2-db44699e38f3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load environment variables\n",
    "load_dotenv(override=True)\n",
    "api_key = os.getenv('OPENAI_API_KEY')\n",
    "\n",
    "# Check the key\n",
    "if not api_key:\n",
    "    print(\"No API key was found!!!\")\n",
    "else:\n",
    "    print(\"API key found and looks good so far!\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "27bfcee1-58e6-4ff2-9f12-9dc5c1aa5b5b",
   "metadata": {},
   "outputs": [],
   "source": [
    "openai = OpenAI()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c82e79f2-3139-4520-ac01-a728c11cb8b9",
   "metadata": {},
   "source": [
    "## Using a Frontier Model GPT-4o Mini for This Project\n",
    "\n",
    "### Types of Prompts\n",
    "\n",
    "Models like GPT-4o have been trained to receive instructions in a particular way.\n",
    "\n",
    "They expect to receive:\n",
    "\n",
    "**A system prompt** that tells them what task they are performing and what tone they should use\n",
    "\n",
    "**A user prompt** -- the conversation starter that they should reply to"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0da158ad-c3a8-4cef-806f-be0f90852996",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define our system prompt\n",
    "system_prompt = \"\"\"You are a powerful AI model designed to assist with resume analysis. Your task is to analyze a resume against a given job posting and provide feedback on how well the resume aligns with the job requirements. Your response should include the following:\n",
    "1) Skill gap identification: Compare the skills listed in the resume with those required in the job posting, highlighting areas where the resume may be lacking or overemphasized.\n",
    "2) Keyword matching between a CV and a job posting: Match keywords from the job description with the resume, determining how well they align. Provide specific suggestions for missing keywords to add to the CV.\n",
    "3) Recommendations for CV improvement: Provide actionable suggestions on how to enhance the resume, such as adding missing skills or rephrasing experience to match job requirements.\n",
    "4) Alignment score: Display a score that represents the degree of alignment between the resume and the job posting.\n",
    "5) Personalized feedback: Offer tailored advice based on the job posting, guiding the user on how to optimize their CV for the best chances of success.\n",
    "6) Job market trend insights: Provide broader market trends and insights, such as in-demand skills and salary ranges.\n",
    "Provide responses that are concise, clear, and to the point. Respond in markdown.\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ebdb34b0-85bd-4e36-933a-20c3c42e833b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The job posting and the CV are required to define the user prompt\n",
    "# The user will input the job posting as text in a box here\n",
    "# The user will upload the CV in PDF format, from which the text will be extracted\n",
    "\n",
    "# You might need to install PyPDF2 via pip if it's not already installed\n",
    "# !pip install PyPDF2\n",
    "\n",
    "# Create widgets - a text box for the job posting\n",
    "job_posting_area = Textarea(\n",
    "    placeholder='Paste the job posting text here...',\n",
    "    description='Job Posting:',\n",
    "    disabled=False,\n",
    "    layout={'width': '800px', 'height': '300px'}\n",
    ")\n",
    "\n",
    "# Define file upload for CV\n",
    "cv_upload = FileUpload(\n",
    "    accept='.pdf',  # Only accept PDF files\n",
    "    multiple=False,  # Only allow single file selection\n",
    "    description='Upload CV (PDF)'\n",
    ")\n",
    "\n",
    "status = HTML(value=\"<b>Status:</b> Waiting for inputs...\")\n",
    "\n",
    "# Create submit buttons\n",
    "submit_cv_button = Button(description='Submit CV', button_style='success')\n",
    "submit_job_posting_button = Button(description='Submit Job Posting', button_style='success')\n",
    "\n",
    "# Initialize variables to store the data\n",
    "# This dictionary will hold the text for both the job posting and the CV\n",
    "# It will be used to define the user_prompt\n",
    "for_user_prompt = {\n",
    "    'job_posting': '',\n",
    "    'cv_text': ''\n",
    "}\n",
    "\n",
    "# Functions\n",
    "def submit_cv_action(change):\n",
    "    if not cv_upload.value:\n",
    "        status.value = \"<b>Status:</b> Please upload a CV before submitting.\"\n",
    "        return\n",
    "\n",
    "    # Get the uploaded file\n",
    "    uploaded_file = cv_upload.value[0]\n",
    "    content = io.BytesIO(uploaded_file['content'])\n",
    "\n",
    "    try:\n",
    "        pdf_reader = PyPDF2.PdfReader(content)\n",
    "        cv_text = \"\"\n",
    "        for page in pdf_reader.pages:\n",
    "            cv_text += page.extract_text()\n",
    "\n",
    "        # Store CV text in for_user_prompt\n",
    "        for_user_prompt['cv_text'] = cv_text\n",
    "        status.value = \"<b>Status:</b> CV uploaded and processed successfully!\"\n",
    "    except Exception as e:\n",
    "        status.value = f\"<b>Status:</b> Error processing PDF: {str(e)}\"\n",
    "\n",
    "    time.sleep(0.5)  # Short pause between upload and submit messages to display both\n",
    "\n",
    "    if for_user_prompt['cv_text']:\n",
    "        #print(\"CV Submitted:\")\n",
    "        #print(for_user_prompt['cv_text'])\n",
    "        status.value = \"<b>Status:</b> CV submitted successfully!\"\n",
    "\n",
    "def submit_job_posting_action(b):\n",
    "    for_user_prompt['job_posting'] = job_posting_area.value\n",
    "    if for_user_prompt['job_posting']:\n",
    "        #print(\"Job Posting Submitted:\")\n",
    "        #print(for_user_prompt['job_posting'])\n",
    "        status.value = \"<b>Status:</b> Job posting submitted successfully!\"\n",
    "    else:\n",
    "        status.value = \"<b>Status:</b> Please enter a job posting before submitting.\"\n",
    "\n",
    "# Attach actions to buttons\n",
    "submit_cv_button.on_click(submit_cv_action)\n",
    "submit_job_posting_button.on_click(submit_job_posting_action)\n",
    "\n",
    "# Layout\n",
    "job_posting_box = VBox([job_posting_area, submit_job_posting_button])\n",
    "cv_buttons = VBox([submit_cv_button])\n",
    "\n",
    "# Display all widgets\n",
    "display(VBox([\n",
    "    HTML(value=\"<h3>Input Job Posting and CV</h3>\"),\n",
    "    job_posting_box,\n",
    "    cv_upload,\n",
    "    cv_buttons,\n",
    "    status\n",
    "]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "364e42a6-0910-4c7c-8c3c-2ca7d2891cb6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Now define user_prompt using the for_user_prompt dictionary\n",
    "# Clearly label each input to differentiate the job posting and CV\n",
    "# The model can parse and analyze each section based on these labels\n",
    "user_prompt = f\"\"\"\n",
    "Job Posting:\n",
    "{for_user_prompt['job_posting']}\n",
    "\n",
    "CV:\n",
    "{for_user_prompt['cv_text']}\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b51dda0-9a0c-48f4-8ec8-dae32c29da24",
   "metadata": {},
   "source": [
    "## Messages\n",
    "\n",
    "The API from OpenAI expects to receive messages in a particular structure.\n",
    "Many of the other APIs share this structure:\n",
    "\n",
    "```\n",
    "[\n",
    "    {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
    "    {\"role\": \"user\", \"content\": \"user message goes here\"}\n",
    "]\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3262c0b9-d3de-4e4f-b535-a25c0aed5783",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define messages with system_prompt and user_prompt\n",
    "def messages_for(system_prompt_input, user_prompt_input):\n",
    "    return [\n",
    "        {\"role\": \"system\", \"content\": system_prompt_input},\n",
    "        {\"role\": \"user\", \"content\": user_prompt_input}\n",
    "    ]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2409ac13-0b39-4227-b4d4-b4c0ff009fd7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# And now: call the OpenAI API\n",
    "response = openai.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=messages_for(system_prompt, user_prompt)\n",
    ")\n",
    "\n",
    "# The response is provided in markdown and displayed accordingly\n",
    "display(Markdown(response.choices[0].message.content))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "86ab71cf-bd7e-45f7-9536-0486f349bfbe",
   "metadata": {},
   "outputs": [],
   "source": [
    "## If you would like to save the response content as a Markdown file, uncomment the following lines\n",
    "#with open('yourfile.md', 'w') as file:\n",
    "#    file.write(response.choices[0].message.content)\n",
    "\n",
    "## You can then run the line below to create output.html which you can open in your browser\n",
    "#!pandoc yourfile.md -o output.html"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@@ -0,0 +1,979 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
   "metadata": {},
   "source": [
    "# Instant Gratification\n",
    "\n",
    "## Your first Frontier LLM Project!\n",
    "\n",
    "Let's build a useful LLM solution - in a matter of minutes.\n",
    "\n",
    "By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n",
    "\n",
    "Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n",
    "\n",
    "Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md), and hopefully you launched this Jupyter Lab from within the project root directory, with your environment activated.\n",
    "\n",
    "## If you're new to Jupyter Lab\n",
    "\n",
    "Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations.\n",
    "\n",
    "I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Lab, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n",
    "\n",
    "## If you'd prefer to work in IDEs\n",
    "\n",
    "If you're more comfortable in IDEs like VSCode or PyCharm, they both work great with these lab notebooks too.\n",
    "If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n",
    "\n",
    "## If you'd like to brush up your Python\n",
    "\n",
    "I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does:\n",
    "`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n",
    "\n",
    "## I am here to help\n",
    "\n",
    "If you have any problems at all, please do reach out.\n",
    "I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!)\n",
    "\n",
    "## More troubleshooting\n",
    "\n",
    "Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n",
    "\n",
    "## If this is old hat!\n",
    "\n",
    "If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n",
    "\n",
    "<table style=\"margin: 0; text-align: left;\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td>\n",
    "            <h2 style=\"color:#900;\">Please read - important note</h2>\n",
    "            <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a GitHub account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>\n",
    "<table style=\"margin: 0; text-align: left;\">\n",
    "    <tr>\n",
    "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
    "            <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
    "        </td>\n",
    "        <td>\n",
    "            <h2 style=\"color:#181;\">Business value of these exercises</h2>\n",
    "            <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n",
    "        </td>\n",
    "    </tr>\n",
    "</table>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "\n",
    "import os\n",
    "import requests\n",
    "from dotenv import load_dotenv\n",
    "from bs4 import BeautifulSoup\n",
    "from IPython.display import Markdown, display\n",
    "from openai import OpenAI\n",
    "\n",
    "# If you get an error running this cell, then please head over to the troubleshooting notebook!"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6900b2a8-6384-4316-8aaa-5e519fca4254",
   "metadata": {},
   "source": [
    "# Connecting to OpenAI\n",
    "\n",
    "The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n",
    "\n",
    "## Troubleshooting if you have problems:\n",
    "\n",
    "Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n",
    "\n",
    "If you make a change, try restarting the \"Kernel\" (the Python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n",
    "\n",
    "Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n",
    "\n",
    "Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "API key found and looks good so far!\n"
     ]
    }
   ],
   "source": [
    "# Load environment variables in a file called .env\n",
    "\n",
    "load_dotenv(override=True)\n",
    "api_key = os.getenv('OPENAI_API_KEY')\n",
    "\n",
    "# Check the key\n",
    "\n",
    "if not api_key:\n",
    "    print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
    "elif not api_key.startswith(\"sk-proj-\"):\n",
    "    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
    "elif api_key.strip() != api_key:\n",
    "    print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
    "else:\n",
    "    print(\"API key found and looks good so far!\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3",
   "metadata": {},
   "outputs": [],
   "source": [
    "openai = OpenAI()\n",
    "\n",
    "# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
    "# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "442fc84b-0815-4f40-99ab-d9a5da6bda91",
   "metadata": {},
   "source": [
    "# Let's make a quick call to a Frontier model to get started, as a preview!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "a58394bf-1e45-46af-9bfd-01e24da6f49a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Hello! I’m glad to hear from you! How can I assist you today?\n"
     ]
    }
   ],
   "source": [
    "# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n",
    "\n",
    "message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n",
    "response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\": \"user\", \"content\": message}])\n",
    "print(response.choices[0].message.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2aa190e5-cb31-456a-96cc-db109919cd78",
   "metadata": {},
   "source": [
    "## OK onwards with our first project"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "c5e793b2-6775-426a-a139-4848291d0463",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A class to represent a Webpage\n",
    "# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
    "\n",
    "# Some websites need you to use proper headers when fetching them:\n",
    "headers = {\n",
    "    \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
    "}\n",
    "\n",
    "class Website:\n",
    "\n",
    "    def __init__(self, url):\n",
    "        \"\"\"\n",
    "        Create this Website object from the given url using the BeautifulSoup library\n",
    "        \"\"\"\n",
    "        self.url = url\n",
    "        response = requests.get(url, headers=headers)\n",
    "        soup = BeautifulSoup(response.content, 'html.parser')\n",
    "        self.title = soup.title.string if soup.title else \"No title found\"\n",
    "        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
    "            irrelevant.decompose()\n",
    "        self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Home - Edward Donner\n",
      "Home\n",
      "Outsmart\n",
      "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
      "About\n",
      "Posts\n",
      "Well, hi there.\n",
      "I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
      "very\n",
      "amateur) and losing myself in\n",
      "Hacker News\n",
      ", nodding my head sagely to things I only half understand.\n",
      "I’m the co-founder and CTO of\n",
      "Nebula.io\n",
      ". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
      "acquired in 2021\n",
      ".\n",
      "We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
      "patented\n",
      "our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
      "Connect\n",
      "with me for more!\n",
      "December 21, 2024\n",
      "Welcome, SuperDataScientists!\n",
      "November 13, 2024\n",
      "Mastering AI and LLM Engineering – Resources\n",
      "October 16, 2024\n",
      "From Software Engineer to AI Data Scientist – resources\n",
      "August 6, 2024\n",
      "Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
      "Navigation\n",
      "Home\n",
      "Outsmart\n",
      "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
      "About\n",
      "Posts\n",
      "Get in touch\n",
      "ed [at] edwarddonner [dot] com\n",
      "www.edwarddonner.com\n",
      "Follow me\n",
      "LinkedIn\n",
      "Twitter\n",
      "Facebook\n",
      "Subscribe to newsletter\n",
      "Type your email…\n",
      "Subscribe\n"
     ]
    }
   ],
   "source": [
    "# Let's try one out. Change the website and add print statements to follow along.\n",
    "\n",
    "ed = Website(\"https://edwarddonner.com\")\n",
    "print(ed.title)\n",
    "print(ed.text)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6a478a0c-2c53-48ff-869c-4d08199931e1",
   "metadata": {},
   "source": [
    "## Types of prompts\n",
    "\n",
    "You may know this already - but if not, you will get very familiar with it!\n",
    "\n",
    "Models like GPT-4o have been trained to receive instructions in a particular way.\n",
    "\n",
    "They expect to receive:\n",
    "\n",
    "**A system prompt** that tells them what task they are performing and what tone they should use\n",
    "\n",
    "**A user prompt** -- the conversation starter that they should reply to"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n",
    "\n",
    "system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
    "and provides a short summary, ignoring text that might be navigation related. \\\n",
    "Respond in markdown.\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A function that writes a User Prompt that asks for summaries of websites:\n",
    "\n",
    "def user_prompt_for(website):\n",
    "    user_prompt = f\"You are looking at a website titled {website.title}\"\n",
    "    user_prompt += \"\\nThe contents of this website are as follows; \\\n",
    "please provide a short summary of this website in markdown. \\\n",
    "If it includes news or announcements, then summarize these too.\\n\\n\"\n",
    "    user_prompt += website.text\n",
    "    return user_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "You are looking at a website titled Home - Edward Donner\n",
      "The contents of this website are as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\n",
      "\n",
      "Home\n",
      "Outsmart\n",
      "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
      "About\n",
      "Posts\n",
      "Well, hi there.\n",
      "I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
      "very\n",
      "amateur) and losing myself in\n",
      "Hacker News\n",
      ", nodding my head sagely to things I only half understand.\n",
      "I’m the co-founder and CTO of\n",
      "Nebula.io\n",
      ". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
      "acquired in 2021\n",
      ".\n",
      "We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
      "patented\n",
      "our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
      "Connect\n",
      "with me for more!\n",
      "December 21, 2024\n",
      "Welcome, SuperDataScientists!\n",
      "November 13, 2024\n",
      "Mastering AI and LLM Engineering – Resources\n",
      "October 16, 2024\n",
      "From Software Engineer to AI Data Scientist – resources\n",
      "August 6, 2024\n",
      "Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
      "Navigation\n",
      "Home\n",
      "Outsmart\n",
      "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
      "About\n",
      "Posts\n",
      "Get in touch\n",
      "ed [at] edwarddonner [dot] com\n",
      "www.edwarddonner.com\n",
      "Follow me\n",
      "LinkedIn\n",
      "Twitter\n",
      "Facebook\n",
      "Subscribe to newsletter\n",
      "Type your email…\n",
      "Subscribe\n"
     ]
    }
   ],
   "source": [
    "print(user_prompt_for(ed))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc",
   "metadata": {},
   "source": [
    "## Messages\n",
    "\n",
    "The API from OpenAI expects to receive messages in a particular structure.\n",
||||||
|
"Many of the other APIs share this structure:\n", |
||||||
|
"\n", |
||||||
|
"```\n", |
||||||
|
"[\n", |
||||||
|
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n", |
||||||
|
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n", |
||||||
|
"]\n", |
||||||
|
"\n", |
||||||
|
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the might GPT (yet!)" |
||||||
|
] |
||||||
|
}, |
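{
"cell_type": "markdown",
"id": "b7e2a1c0-assistant-role-note",
"metadata": {},
"source": [
"As an aside: a conversation can be carried forward by appending earlier turns with the `assistant` role. This is an illustrative sketch only - it is not used in this notebook:\n",
"\n",
"```\n",
"[\n",
"    {\"role\": \"system\", \"content\": \"system message goes here\"},\n",
"    {\"role\": \"user\", \"content\": \"first user message\"},\n",
"    {\"role\": \"assistant\", \"content\": \"the model's earlier reply\"},\n",
"    {\"role\": \"user\", \"content\": \"follow-up user message\"}\n",
"]\n",
"```"
]
},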
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 11, |
||||||
|
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"messages = [\n", |
||||||
|
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n", |
||||||
|
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n", |
||||||
|
"]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 12, |
||||||
|
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Oh, we're starting with the basics, huh? Well, 2 + 2 equals 4. Shocking, I know!\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# To give you a preview -- calling OpenAI with system and user messages:\n", |
||||||
|
"\n", |
||||||
|
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n", |
||||||
|
"print(response.choices[0].message.content)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## And now let's build useful messages for GPT-4o-mini, using a function" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 12, |
||||||
|
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# See how this function creates exactly the format above\n", |
||||||
|
"\n", |
||||||
|
"def messages_for(website):\n", |
||||||
|
" return [\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||||
|
" ]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 13, |
||||||
|
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [ |
||||||
|
"[{'role': 'system',\n", |
||||||
|
" 'content': 'You are an assistant that analyzes the contents of a website and provides a short summary, ignoring text that might be navigation related. Respond in markdown.'},\n", |
||||||
|
" {'role': 'user',\n", |
||||||
|
" 'content': 'You are looking at a website titled Home - Edward Donner\\nThe contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\\n\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nWell, hi there.\\nI’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\\nvery\\namateur) and losing myself in\\nHacker News\\n, nodding my head sagely to things I only half understand.\\nI’m the co-founder and CTO of\\nNebula.io\\n. We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\\nacquired in 2021\\n.\\nWe work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\\npatented\\nour matching model, and our award-winning platform has happy customers and tons of press coverage.\\nConnect\\nwith me for more!\\nDecember 21, 2024\\nWelcome, SuperDataScientists!\\nNovember 13, 2024\\nMastering AI and LLM Engineering – Resources\\nOctober 16, 2024\\nFrom Software Engineer to AI Data Scientist – resources\\nAugust 6, 2024\\nOutsmart LLM Arena – a battle of diplomacy and deviousness\\nNavigation\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nGet in touch\\ned [at] edwarddonner [dot] com\\nwww.edwarddonner.com\\nFollow me\\nLinkedIn\\nTwitter\\nFacebook\\nSubscribe to newsletter\\nType your email…\\nSubscribe'}]" |
||||||
|
] |
||||||
|
}, |
||||||
|
"execution_count": 13, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# Try this out, and then try for a few more websites\n", |
||||||
|
"\n", |
||||||
|
"messages_for(ed)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Time to bring it together - the API for OpenAI is very simple!" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 14, |
||||||
|
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# And now: call the OpenAI API. You will get very familiar with this!\n", |
||||||
|
"\n", |
||||||
|
"def summarize(url):\n", |
||||||
|
" website = Website(url)\n", |
||||||
|
" response = openai.chat.completions.create(\n", |
||||||
|
" model = \"gpt-4o-mini\",\n", |
||||||
|
" messages = messages_for(website)\n", |
||||||
|
" )\n", |
||||||
|
" return response.choices[0].message.content" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 15, |
||||||
|
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [ |
||||||
|
"'# Summary of Edward Donner\\'s Website\\n\\nEdward Donner\\'s website serves as a platform for sharing his interests and expertise in coding, large language models (LLMs), and AI. He is the co-founder and CTO of Nebula.io, a company focused on leveraging AI to enhance talent discovery and management. Previously, he founded the AI startup untapt, which was acquired in 2021.\\n\\n## Key Content\\n\\n- **Personal Introduction**: Ed shares his passion for coding, experimenting with LLMs, DJing, and music production.\\n- **Professional Background**: He highlights his role at Nebula.io and his prior experience with untapt.\\n- **Innovative Work**: Mention of proprietary LLMs tailored for talent management and a patented matching model.\\n\\n## News and Announcements\\n\\n- **December 21, 2024**: Welcoming \"SuperDataScientists.\"\\n- **November 13, 2024**: Resources for mastering AI and LLM engineering.\\n- **October 16, 2024**: Transitioning from software engineering to AI data science resources.\\n- **August 6, 2024**: Introduction to the Outsmart LLM Arena, a competition focusing on strategy among LLMs.\\n\\nThe website encourages connections and offers resources for individuals interested in AI and LLMs.'" |
||||||
|
] |
||||||
|
}, |
||||||
|
"execution_count": 15, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"summarize(\"https://edwarddonner.com\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 16, |
||||||
|
"id": "3d926d59-450e-4609-92ba-2d6f244f1342", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||||
|
"\n", |
||||||
|
"def display_summary(url):\n", |
||||||
|
" summary = summarize(url)\n", |
||||||
|
" display(Markdown(summary))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 17, |
||||||
|
"id": "3018853a-445f-41ff-9560-d925d1774b2f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"# Summary of Edward Donner's Website\n", |
||||||
|
"\n", |
||||||
|
"The website belongs to Ed, a coder and LLM (Large Language Model) enthusiast, who is also a co-founder and CTO of Nebula.io. Nebula.io focuses on leveraging AI to help individuals discover their potential in recruitment through its innovative platform. Ed also shares his background in the AI field, having previously founded the startup untapt, which was acquired in 2021.\n", |
||||||
|
"\n", |
||||||
|
"## Recent News and Announcements\n", |
||||||
|
"1. **December 21, 2024**: Welcome message for SuperDataScientists.\n", |
||||||
|
"2. **November 13, 2024**: Resources for mastering AI and LLM engineering.\n", |
||||||
|
"3. **October 16, 2024**: Resources for transitioning from Software Engineer to AI Data Scientist.\n", |
||||||
|
"4. **August 6, 2024**: Introduction to the \"Outsmart LLM Arena,\" a competitive platform where LLMs engage in diplomacy and strategy.\n", |
||||||
|
"\n", |
||||||
|
"Ed expresses a passion for technology, music, and engaging in community discussions through platforms like Hacker News." |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"display_summary(\"https://edwarddonner.com\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Let's try more websites\n", |
||||||
|
"\n", |
||||||
|
"Note that this will only work on websites that can be scraped using this simplistic approach.\n", |
||||||
|
"\n", |
||||||
|
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n", |
||||||
|
"\n", |
||||||
|
"Also Websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n", |
||||||
|
"\n", |
||||||
|
"But many websites will work just fine!" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 18, |
||||||
|
"id": "45d83403-a24c-44b5-84ac-961449b4008f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"# CNN Website Summary\n", |
||||||
|
"\n", |
||||||
|
"CNN is a leading news platform that provides comprehensive coverage across a wide range of categories including US and world news, politics, business, health, entertainment, and more. The website features breaking news articles, videos, and live updates on significant global events.\n", |
||||||
|
"\n", |
||||||
|
"### Recent Headlines:\n", |
||||||
|
"- **Politics**: \n", |
||||||
|
" - Justin Trudeau announced his resignation as Canada's Prime Minister, sharing his \"one regret.\"\n", |
||||||
|
" - Analysis of Trump's influence in Congress and recent legal battles related to his actions.\n", |
||||||
|
" \n", |
||||||
|
"- **Global Affairs**: \n", |
||||||
|
" - Rising tensions in Venezuela as the opposition leader urges military action against Maduro.\n", |
||||||
|
" - Sudanese authorities announced the transfer of 11 Yemeni detainees from Guantanamo Bay to Oman.\n", |
||||||
|
" \n", |
||||||
|
"- **Weather**: A major winter storm impacted Washington, DC, causing power outages and stranded drivers.\n", |
||||||
|
"\n", |
||||||
|
"- **Health**: \n", |
||||||
|
" - FDA issues new draft guidance on improving pulse oximeter readings for individuals with darker skin.\n", |
||||||
|
"\n", |
||||||
|
"### Additional Features:\n", |
||||||
|
"CNN includes segments dedicated to sports, science, climate, and travel. There are also various podcasts available, offering deeper insights into current events and specialized topics. \n", |
||||||
|
"\n", |
||||||
|
"The site encourages user feedback on ads and technical issues, emphasizing its commitment to enhancing user experience. \n", |
||||||
|
"\n", |
||||||
|
"Overall, CNN serves as a crucial resource for staying updated with local and international news." |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"display_summary(\"https://cnn.com\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 19, |
||||||
|
"id": "75e9fd40-b354-4341-991e-863ef2e59db7", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"# Anthropic Website Summary\n", |
||||||
|
"\n", |
||||||
|
"Anthropic is an AI safety and research company that prioritizes safety in the development of AI technologies. The main focus of the site is on their AI model, Claude, which includes the latest version, Claude 3.5 Sonnet, as well as additional offerings like Claude 3.5 Haiku. The company emphasizes the creation of AI-powered applications and custom experiences through its API.\n", |
||||||
|
"\n", |
||||||
|
"## Recent Announcements\n", |
||||||
|
"- **Claude 3.5 Sonnet Launch**: Announced on October 22, 2024, featuring significant advancements in AI capabilities.\n", |
||||||
|
"- **New AI Models**: Introduction of Claude 3.5 Sonnet and Claude 3.5 Haiku.\n", |
||||||
|
"\n", |
||||||
|
"Anthropic's work spans various domains including machine learning, policy, and product development, aimed at generating reliable and beneficial AI systems. They also highlight career opportunities within the organization." |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"display_summary(\"https://anthropic.com\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 21, |
||||||
|
"id": "8070c4c3-1ef1-4c7a-8c2d-f6b4b9b4aa8e", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"# Summary of CPP Investments Website\n", |
||||||
|
"\n", |
||||||
|
"## Overview\n", |
||||||
|
"The CPP Investments website serves as a comprehensive resource for information regarding the management and performance of the Canada Pension Plan (CPP) Fund. It emphasizes its long-standing commitment to ensuring financial security for over 22 million Canadians who rely on the benefits of the CPP.\n", |
||||||
|
"\n", |
||||||
|
"## Key Sections\n", |
||||||
|
"- **About Us**: Details the governance, leadership, and investment programs available within CPP Investments.\n", |
||||||
|
"- **The Fund**: Offers an overview of the fund's performance, sustainability, and transparency in its operations.\n", |
||||||
|
"- **Investment Strategies**: Explanation of CPP's investment beliefs and strategies, emphasizing a global mindset and sustainable investing practices.\n", |
||||||
|
"- **Insights Institute**: A dedicated section for reports and analyses on relevant investment topics, including emerging trends and strategies.\n", |
||||||
|
"\n", |
||||||
|
"## Recent News and Announcements\n", |
||||||
|
"- **2024 CEO Letter** (May 22, 2024): Reflects on the 25th anniversary of CPP Investments and its mission to manage funds in the best interest of Canadians.\n", |
||||||
|
"- **Article on CPP Benefits** (September 18, 2024): Highlights why the CPP is regarded as one of the best pension plans globally.\n", |
||||||
|
"- **Report on AI Integration and Human Capital** (October 31, 2024): Discusses how institutional investors can engage with boards and leadership on AI adaptation strategies.\n", |
||||||
|
"- **Stake Sales** (January 3, 2025): Announcements regarding the sale of stakes in various partnerships and joint ventures, including a significant logistics partnership in North America and real estate ventures in Hong Kong.\n", |
||||||
|
"\n", |
||||||
|
"This website underscores CPP Investments' ongoing commitment to transparency, strong financial performance, and its role in supporting the financial security of Canadians as they prepare for retirement." |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"display_summary('https://cppinvestments.com')" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"<table style=\"margin: 0; text-align: left;\">\n", |
||||||
|
" <tr>\n", |
||||||
|
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||||
|
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||||
|
" </td>\n", |
||||||
|
" <td>\n", |
||||||
|
" <h2 style=\"color:#181;\">Business applications</h2>\n", |
||||||
|
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n", |
||||||
|
"\n", |
||||||
|
"More specifically, we've applied this to Summarization - a classic Gen AI use case. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n",
||||||
|
" </td>\n", |
||||||
|
" </tr>\n", |
||||||
|
"</table>\n", |
||||||
|
"\n", |
||||||
|
"<table style=\"margin: 0; text-align: left;\">\n", |
||||||
|
" <tr>\n", |
||||||
|
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||||
|
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||||
|
" </td>\n", |
||||||
|
" <td>\n", |
||||||
|
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n", |
||||||
|
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n", |
||||||
|
" </td>\n", |
||||||
|
" </tr>\n", |
||||||
|
"</table>" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 33, |
||||||
|
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"**Subject:** Request for Annual Sales Report (2024)\n", |
||||||
|
"\n", |
||||||
|
"**Email:**\n", |
||||||
|
"\n", |
||||||
|
"Dear Abhinav,\n", |
||||||
|
"\n", |
||||||
|
"I hope this email finds you in good health and high spirits. As we step into a new year and begin reviewing our plans and strategies, it is crucial for us to analyze the performance metrics from the previous year. In this regard, I would like to kindly request a copy of the Annual Sales Report for 2024.\n", |
||||||
|
"\n", |
||||||
|
"This report will play an integral role in understanding our achievements, challenges, and areas for improvement over the past year. It will also serve as a foundation for aligning our goals and preparing a roadmap for the upcoming quarters. Please ensure that the report includes key performance indicators such as:\n", |
||||||
|
"\n", |
||||||
|
"- Total revenue generated\n", |
||||||
|
"- Region-wise sales performance\n", |
||||||
|
"- Product/service-wise contribution\n", |
||||||
|
"- Month-by-month trend analysis\n", |
||||||
|
"- Customer retention and acquisition metrics\n", |
||||||
|
"\n", |
||||||
|
"If there are any additional insights or observations from your side that you feel would be helpful for us to review, please feel free to include them as well. Your expertise and detailed input are always highly valued.\n", |
||||||
|
"\n", |
||||||
|
"Kindly let me know if the report is already prepared or if there is an expected timeline for its completion. In case you require any assistance, data inputs, or clarification from my end to finalize the report, do not hesitate to reach out.\n", |
||||||
|
"\n", |
||||||
|
"Thank you in advance for prioritizing this request. I appreciate your support and look forward to receiving the report soon.\n", |
||||||
|
"\n", |
||||||
|
"Best regards, \n", |
||||||
|
"Sanath Pabba\n", |
||||||
|
"\n", |
||||||
|
"**Tone:** Professional and Collaborative" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# Step 1: Create your prompts\n", |
||||||
|
"\n", |
||||||
|
"system_prompt = \"You are an AI assistant email reviewer. All you need is to identify the meaning of the context in the text given and provide the subject line and email. and in the end of text, please provide the tone info.\"\n", |
||||||
|
"user_prompt = \"\"\"\n", |
||||||
|
" Dear Abhinav,\n", |
||||||
|
"\n", |
||||||
|
"I hope this email finds you in good health and high spirits. As we step into a new year and begin reviewing our plans and strategies, it is crucial for us to analyze the performance metrics from the previous year. In this regard, I would like to kindly request a copy of the Annual Sales Report for 2024.\n", |
||||||
|
"\n", |
||||||
|
"This report will play an integral role in understanding our achievements, challenges, and areas for improvement over the past year. It will also serve as a foundation for aligning our goals and preparing a roadmap for the upcoming quarters. Please ensure that the report includes key performance indicators such as:\n", |
||||||
|
"\n", |
||||||
|
"Total revenue generated\n", |
||||||
|
"Region-wise sales performance\n", |
||||||
|
"Product/service-wise contribution\n", |
||||||
|
"Month-by-month trend analysis\n", |
||||||
|
"Customer retention and acquisition metrics\n", |
||||||
|
"If there are any additional insights or observations from your side that you feel would be helpful for us to review, please feel free to include them as well. Your expertise and detailed input are always highly valued.\n", |
||||||
|
"\n", |
||||||
|
"Kindly let me know if the report is already prepared or if there is an expected timeline for its completion. In case you require any assistance, data inputs, or clarification from my end to finalize the report, do not hesitate to reach out.\n", |
||||||
|
"\n", |
||||||
|
"Thank you in advance for prioritizing this request. I appreciate your support and look forward to receiving the report soon.\n", |
||||||
|
"\n", |
||||||
|
"Best regards,\n", |
||||||
|
"Sanath Pabba\n", |
||||||
|
"\"\"\"\n", |
||||||
|
"\n", |
||||||
|
"# Step 2: Make the messages list\n", |
||||||
|
"\n", |
||||||
|
"messages = [\n", |
||||||
|
" {\"role\":\"system\", \"content\": system_prompt},\n", |
||||||
|
" {\"role\":\"user\", \"content\": user_prompt}\n", |
||||||
|
" \n", |
||||||
|
"] # fill this in\n", |
||||||
|
"\n", |
||||||
|
"# Step 3: Call OpenAI\n", |
||||||
|
"\n", |
||||||
|
"response = openai.chat.completions.create(\n", |
||||||
|
" model=\"gpt-4o-mini\",\n", |
||||||
|
" messages=messages\n", |
||||||
|
")\n", |
||||||
|
"\n", |
||||||
|
"# Step 4: print the result\n", |
||||||
|
"\n", |
||||||
|
"display(Markdown(response.choices[0].message.content))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 14, |
||||||
|
"id": "d4d641a5-0103-44a5-b5c2-70e80976d1f1", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"**Subject:** Addressing Sales Performance Concerns\n", |
||||||
|
"\n", |
||||||
|
"Dear Akhil,\n", |
||||||
|
"\n", |
||||||
|
"I wanted to touch base with you about your sales performance over the last two quarters. I’ve noticed that you haven’t been hitting the targets, and it’s something we need to address seriously.\n", |
||||||
|
"\n", |
||||||
|
"I know you’re capable of much more, and I want to see you succeed. That said, it’s crucial that you meet your sales targets this quarter. If there isn’t a significant improvement, we may have to consider other options, including letting you go, which I truly hope we can avoid.\n", |
||||||
|
"\n", |
||||||
|
"If there’s anything holding you back or if you need additional support, let me know. I’m here to help, but ultimately, it’s up to you to turn things around.\n", |
||||||
|
"\n", |
||||||
|
"Let’s make this quarter count! Let me know if you want to discuss this further or need help strategizing.\n", |
||||||
|
"\n", |
||||||
|
"Best regards, \n", |
||||||
|
"Sanath Pabba\n", |
||||||
|
"\n", |
||||||
|
"**Tone:** Serious yet supportive" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# Step 1: Create your prompts\n", |
||||||
|
"\n", |
||||||
|
"system_prompt = \"You are an AI assistant email reviewer. All you need is to identify the meaning of the context in the text given and provide the subject line and email. and in the end of text, please provide the tone info.\"\n", |
||||||
|
"user_prompt = \"\"\"\n", |
||||||
|
"Dear Akhil,\n", |
||||||
|
"\n", |
||||||
|
"I wanted to touch base with you about your sales performance over the last two quarters. I’ve noticed that you haven’t been hitting the targets, and it’s something we need to address seriously.\n", |
||||||
|
"\n", |
||||||
|
"I know you’re capable of much more, and I want to see you succeed. That said, it’s crucial that you meet your sales targets this quarter. If there isn’t a significant improvement, we may have to consider other options, including letting you go, which I truly hope we can avoid.\n", |
||||||
|
"\n", |
||||||
|
"If there’s anything holding you back or if you need additional support, let me know. I’m here to help, but ultimately, it’s up to you to turn things around.\n", |
||||||
|
"\n", |
||||||
|
"Let’s make this quarter count! Let me know if you want to discuss this further or need help strategizing.\n", |
||||||
|
"\n", |
||||||
|
"Best regards,\n", |
||||||
|
"Sanath Pabba\n", |
||||||
|
"\"\"\"\n", |
||||||
|
"\n", |
||||||
|
"# Step 2: Make the messages list\n", |
||||||
|
"\n", |
||||||
|
"messages = [\n", |
||||||
|
" {\"role\":\"system\", \"content\": system_prompt},\n", |
||||||
|
" {\"role\":\"user\", \"content\": user_prompt}\n", |
||||||
|
" \n", |
||||||
|
"] # fill this in\n", |
||||||
|
"\n", |
||||||
|
"# Step 3: Call OpenAI\n", |
||||||
|
"\n", |
||||||
|
"response = openai.chat.completions.create(\n", |
||||||
|
" model=\"gpt-4o-mini\",\n", |
||||||
|
" messages=messages\n", |
||||||
|
")\n", |
||||||
|
"\n", |
||||||
|
"# Step 4: print the result\n", |
||||||
|
"\n", |
||||||
|
"display(Markdown(response.choices[0].message.content))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## An extra exercise for those who enjoy web scraping\n", |
||||||
|
"\n", |
||||||
|
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Sharing your code\n", |
||||||
|
"\n", |
||||||
|
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like to add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
||||||
|
"\n", |
||||||
|
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it the first time it's pretty straightforward. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clear Outputs of All Cells, and then Save) for clean notebooks.\n",
||||||
|
"\n", |
||||||
|
"Here are good instructions courtesy of an AI friend: \n", |
||||||
|
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,159 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "568fd96a-8cf6-42aa-b9cf-74b7aa383595", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Ollama Website Summarizer\n", |
||||||
|
"## Scrape websites and summarize them locally using Ollama\n", |
||||||
|
"\n", |
||||||
|
"This script is a complete example of the day 1 program, which uses OpenAI API to summarize websites, altered to use techniques from the day 2 exercise to call Ollama models locally." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a9502a0f-d7be-4489-bb7f-173207e802b6", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import ollama\n", |
||||||
|
"import requests\n", |
||||||
|
"from bs4 import BeautifulSoup\n", |
||||||
|
"from IPython.display import Markdown, display\n", |
||||||
|
"\n", |
||||||
|
"MODEL = \"llama3.2\"\n", |
||||||
|
"\n", |
||||||
|
"# A class to represent a Webpage\n", |
||||||
|
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||||
|
"\n", |
||||||
|
"# Some websites need you to use proper headers when fetching them:\n", |
||||||
|
"headers = {\n", |
||||||
|
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||||
|
"}\n", |
||||||
|
"\n", |
||||||
|
"class Website:\n", |
||||||
|
"\n", |
||||||
|
" def __init__(self, url):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" self.url = url\n", |
||||||
|
" response = requests.get(url, headers=headers)\n", |
||||||
|
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||||
|
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||||
|
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||||
|
" irrelevant.decompose()\n", |
||||||
|
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||||
|
" \n", |
||||||
|
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||||
|
"\n", |
||||||
|
"def user_prompt_for(website):\n", |
||||||
|
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||||
|
"    user_prompt += \"\\nThe contents of this website are as follows; \\\n",
||||||
|
"please provide a short summary of this website in markdown. \\\n", |
||||||
|
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||||
|
" user_prompt += website.text\n", |
||||||
|
" return user_prompt\n", |
||||||
|
" \n", |
||||||
|
"# Create a messages list for a summarize prompt given a website\n", |
||||||
|
"\n", |
||||||
|
"def create_summarize_prompt(website):\n", |
||||||
|
" return [\n", |
||||||
|
" {\"role\": \"system\", \"content\": \"You are an assistant that analyzes the contents of a website \\\n", |
||||||
|
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||||
|
"Respond in markdown.\" },\n", |
||||||
|
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||||
|
" ]\n", |
||||||
|
"\n", |
||||||
|
"# And now: call Ollama to summarize\n", |
||||||
|
"\n", |
||||||
|
"def summarize(url):\n", |
||||||
|
" website = Website(url)\n", |
||||||
|
" messages = create_summarize_prompt(website)\n", |
||||||
|
" response = ollama.chat(model=MODEL, messages=messages)\n", |
||||||
|
" return response['message']['content']\n", |
||||||
|
" \n", |
||||||
|
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||||
|
"\n", |
||||||
|
"def display_summary(url):\n", |
||||||
|
" summary = summarize(url)\n", |
||||||
|
" display(Markdown(summary))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "037627b0-b039-4ca4-a6d4-84ad8fc6a013", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Pre-requisites\n", |
||||||
|
"\n", |
||||||
|
"Before you can run the script above, make sure Ollama is running on your machine!\n",
||||||
|
"\n", |
||||||
|
"Simply visit ollama.com and install!\n", |
||||||
|
"\n", |
||||||
|
"Once complete, the Ollama server should already be running locally.\n",
||||||
|
"If you visit:\n", |
||||||
|
"http://localhost:11434/\n", |
||||||
|
"\n", |
||||||
|
"You should see the message `Ollama is running`."
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "6c2d84fd-2a9b-476d-84ad-4b8522d47023", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Run!\n", |
||||||
|
"\n", |
||||||
|
"Shift+Enter the code below to summarize a website.\n", |
||||||
|
"\n", |
||||||
|
"### NOTE!\n", |
||||||
|
"\n", |
||||||
|
"This will only work with websites that return HTML content, and may return unexpected results for single-page apps (SPAs) that render their content with JavaScript."
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "100829ba-8278-409b-bc0a-82ac28e1149f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"display_summary(\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "ffe4e760-dfa6-43fa-89c4-beea547707ac", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"Edit the URL above, or add code blocks of your own to try it out!" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,81 @@ |
|||||||
|
|
||||||
|
from enum import Enum
from openai import OpenAI
import anthropic


def formatPrompt(role, content):
    return {"role": role, "content": content}


class AI(Enum):
    OPEN_AI = "OPEN_AI"
    CLAUDE = "CLAUDE"
    GEMINI = "GEMINI"
    OLLAMA = "OLLAMA"


class AISystem:

    def __init__(self, processor, system_string="", model="", type=AI.OPEN_AI):
        """
        Initialize the AISystem with a system string and an empty messages list.

        :param processor: the API client (OpenAI, anthropic.Anthropic, or an OpenAI-compatible client)
        :param system_string: optional initial system prompt
        :param model: the model name to use
        :param type: which AI provider this instance wraps
        """
        self.processor = processor
        self.system = system_string
        self.model = model
        self.messages = []
        self.type = type

    def call(self, message):
        self.messages.append(message)

        if self.type == AI.CLAUDE:
            response = self.processor.messages.create(
                model=self.model,
                system=self.system,
                messages=self.messages,
                max_tokens=500
            )
            return response.content[0].text
        else:
            # Prepend the system prompt as a proper message dict, building a new
            # list so it is not permanently inserted into self.messages each call
            toSend = [formatPrompt("system", self.system)] + self.messages
            completion = self.processor.chat.completions.create(
                model=self.model,
                messages=toSend
            )
            return completion.choices[0].message.content

    def stream(self, message, usingGradio=False):
        self.messages.append(message)

        if self.type == AI.CLAUDE:
            result = self.processor.messages.stream(
                model=self.model,
                system=self.system,
                messages=self.messages,
                temperature=0.7,
                max_tokens=500
            )
            response_chunks = ""
            with result as stream:
                for text in stream.text_stream:
                    if usingGradio:
                        # Gradio needs the cumulative response so far, not chunk by chunk
                        response_chunks += text or ""
                        yield response_chunks
                    else:
                        yield text
        else:
            toSend = [formatPrompt("system", self.system)] + self.messages
            stream = self.processor.chat.completions.create(
                model=self.model,
                messages=toSend,
                stream=True
            )
            response_chunks = ""
            for chunk in stream:
                if usingGradio:
                    # Gradio needs the cumulative response so far, not chunk by chunk
                    response_chunks += chunk.choices[0].delta.content or ""
                    yield response_chunks
                else:
                    yield chunk.choices[0].delta.content
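The non-Claude branches above prepend the system prompt to the outgoing message list before each request. A minimal, self-contained sketch of that pattern, with the system prompt wrapped as a role/content dict as chat APIs expect (the `History` class and its names are illustrative, not part of this file):

```python
def formatPrompt(role, content):
    return {"role": role, "content": content}


class History:
    """Minimal chat-history holder (illustrative sketch, no API calls)."""

    def __init__(self, system):
        self.system = system
        self.messages = []

    def outgoing(self, user_text):
        # Record the user turn, then build the payload as a fresh list so the
        # system message is never permanently inserted into the stored history
        self.messages.append(formatPrompt("user", user_text))
        return [formatPrompt("system", self.system)] + self.messages


h = History("You are terse.")
payload = h.outgoing("Hello")
# payload[0] is the system message; h.messages still holds only the user turn
```

Building a fresh list per request avoids the subtle bug where repeated calls stack duplicate system messages at the front of the shared history.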
@ -0,0 +1,98 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a0adab93-e569-4af0-80f1-ce5b7a116507", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"%run week2/community-contributions/day1_class_definition.ipynb" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4566399a-e16d-41cd-bef4-f34b811e6377", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"gpt_system = \"You are a chatbot who is very argumentative; \\\n", |
||||||
|
"you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n", |
||||||
|
"\n", |
||||||
|
"claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n", |
||||||
|
"everything the other person says, or find common ground. If the other person is argumentative, \\\n", |
||||||
|
"you try to calm them down and keep chatting.\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "cf3d34e9-f8a8-4a06-aa3a-8faeb5f81e68", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"gpt_startmessage = \"Hello\"\n", |
||||||
|
"claude_startmessage = \"Hi\"\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "49335337-d713-4d9e-aba0-41a309c37699", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"print(f\"GPT:\\n{gpt_startmessage}\\n\")\n", |
||||||
|
"print(f\"Claude:\\n{claude_startmessage}\\n\")\n", |
||||||
|
"\n", |
||||||
|
"# startMessage added as user role\n", |
||||||
|
"gpt=GPT_Wrapper(gpt_system, gpt_startmessage)\n", |
||||||
|
"claude=Claude_Wrapper(claude_system, claude_startmessage)\n", |
||||||
|
"\n", |
||||||
|
"initialMsg = [\n", |
||||||
|
" {\"role\": \"system\", \"content\": gpt_system},\n", |
||||||
|
" {\"role\": \"assistant\", \"content\": gpt_startmessage}\n", |
||||||
|
"]\n", |
||||||
|
"# Replace the user role with the assistant role for the start messages\n",
||||||
|
"gpt.messageSet(initialMsg)\n", |
||||||
|
"claude.messageSet([{\"role\": \"assistant\", \"content\": claude_startmessage}])\n", |
||||||
|
"\n", |
||||||
|
"claude_next=claude_startmessage\n", |
||||||
|
"for i in range(5):\n", |
||||||
|
" gpt.messageAppend(\"user\", claude_next)\n", |
||||||
|
" gpt_next = gpt.getResult()\n", |
||||||
|
" print(f\"GPT:\\n{gpt_next}\\n\")\n", |
||||||
|
" gpt.messageAppend(\"assistant\", gpt_next)\n", |
||||||
|
"\n", |
||||||
|
" claude.messageAppend(\"user\", gpt_next)\n", |
||||||
|
" claude_next = claude.getResult()\n", |
||||||
|
" print(f\"Claude:\\n{claude_next}\\n\")\n", |
||||||
|
" claude.messageAppend(\"assistant\", claude_next)" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,116 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a0adab93-e569-4af0-80f1-ce5b7a116507", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"%run week2/community-contributions/day1_class_definition.ipynb" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4566399a-e16d-41cd-bef4-f34b811e6377", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"system_msg = \"You are an assistant that is great at telling jokes\"\n", |
||||||
|
"user_msg = \"Tell a light-hearted joke for an audience of Software Engineers\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "362759bc-ce43-4f54-b8e2-1dab19c66a62", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Easy to instantiate and use, just create an object \n", |
||||||
|
"# using the right Wrapper" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a6e5468e-1f1d-40e4-afae-c292abc26c12", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"gpt=GPT_Wrapper(system_msg, user_msg)\n", |
||||||
|
"print(\"GPT: \" + gpt.getResult())\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "e650839a-7bc4-4b6c-b6ea-e836644b076f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"claude=Claude_Wrapper(system_msg, user_msg)\n", |
||||||
|
"print(\"Claude: \" + claude.getResult())\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "49335337-d713-4d9e-aba0-41a309c37699", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"gemini=Gemini_Wrapper(system_msg, user_msg)\n", |
||||||
|
"print(\"Gemini: \" + gemini.getResult())\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "31d11b7b-5d14-4e3d-88e1-29239b667f3f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"ollama=Ollama_Wrapper(system_msg, user_msg)\n", |
||||||
|
"print(\"Ollama: \" + ollama.getResult())\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "282efb89-23b0-436e-8458-d6aef7d23117", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Easy to change the prompt and reuse\n",
||||||
|
"\n", |
||||||
|
"ollama.setUserPrompt(\"Tell a light-hearted joke for an audience of Managers\")\n", |
||||||
|
"print(\"Ollama: \" + ollama.getResult())" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,310 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a0adab93-e569-4af0-80f1-ce5b7a116507", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "9f583520-3c49-4e79-84ae-02bfc57f1e49", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Creating a set of classes to simplify LLM use\n", |
||||||
|
"\n", |
||||||
|
"from abc import ABC, abstractmethod\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"# Imports for type definition\n", |
||||||
|
"from collections.abc import MutableSequence\n", |
||||||
|
"from typing import TypedDict\n", |
||||||
|
"\n", |
||||||
|
"class LLM_Wrapper(ABC):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
"    The parent (abstract) class for the specific LLM wrapper classes, providing a\n",
"    common, simplified interface for calling LLMs while abstracting away\n",
"    provider-specific details\n",
||||||
|
" \"\"\"\n", |
||||||
|
"\n", |
||||||
|
" MessageEntry = TypedDict('MessageEntry', {'role': str, 'content': str})\n", |
||||||
|
" \n", |
||||||
|
" system_prompt: str # The system prompt used for the LLM\n", |
||||||
|
" user_prompt: str # The user prompt\n", |
||||||
|
" __api_key: str # The (private) api key\n", |
||||||
|
" temperature: float = 0.5 # Default temperature\n", |
||||||
|
" __msg: MutableSequence[MessageEntry] # Message builder\n", |
||||||
|
"\n", |
||||||
|
" def __init__(self, system_prompt:str, user_prompt:str, env_apikey_var:str=None):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" env_apikey_var: str # The name of the env variable where to find the api_key\n", |
||||||
|
" # We store the retrieved api_key for future calls\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" self.system_prompt = system_prompt\n", |
||||||
|
" self.user_prompt = user_prompt\n", |
||||||
|
" if env_apikey_var:\n", |
||||||
|
" load_dotenv(override=True)\n", |
||||||
|
" self.__api_key = os.getenv(env_apikey_var)\n", |
||||||
|
"\n", |
||||||
|
" # # API Key format check\n", |
||||||
|
" # if env_apikey_var and self.__api_key:\n", |
||||||
|
" # print(f\"API Key exists and begins {self.__api_key[:8]}\")\n", |
||||||
|
" # else:\n", |
||||||
|
" # print(\"API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
" def setSystemPrompt(self, prompt:str):\n", |
||||||
|
" self.system_prompt = prompt\n", |
||||||
|
"\n", |
||||||
|
" def setUserPrompt(self, prompt:str):\n", |
||||||
|
" self.user_prompt = prompt\n", |
||||||
|
"\n", |
||||||
|
" def setTemperature(self, temp:float):\n", |
||||||
|
" self.temperature = temp\n", |
||||||
|
"\n", |
||||||
|
" def getKey(self) -> str:\n", |
||||||
|
" return self.__api_key\n", |
||||||
|
"\n", |
||||||
|
" def messageSet(self, message: MutableSequence[MessageEntry]):\n", |
||||||
|
" self.__msg = message\n", |
||||||
|
"\n", |
||||||
|
" def messageAppend(self, role: str, content: str):\n", |
||||||
|
" self.__msg.append(\n", |
||||||
|
" {\"role\": role, \"content\": content}\n", |
||||||
|
" )\n", |
||||||
|
"\n", |
||||||
|
" def messageGet(self) -> MutableSequence[MessageEntry]:\n", |
||||||
|
" return self.__msg\n", |
||||||
|
" \n", |
||||||
|
" @abstractmethod\n", |
||||||
|
" def getResult(self):\n", |
||||||
|
" pass\n", |
||||||
|
"\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a707f3ef-8696-44a9-943e-cfbce24b9fde", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import json\n",
"from openai import OpenAI\n",
||||||
|
"\n", |
||||||
|
"class GPT_Wrapper(LLM_Wrapper):\n", |
||||||
|
"\n", |
||||||
|
" MODEL:str = 'gpt-4o-mini'\n", |
||||||
|
" llm:OpenAI\n", |
||||||
|
"\n", |
||||||
|
" def __init__(self, system_prompt:str, user_prompt:str):\n", |
||||||
|
" super().__init__(system_prompt, user_prompt, \"OPENAI_API_KEY\")\n", |
||||||
|
" self.llm = OpenAI()\n", |
||||||
|
" super().messageSet([\n", |
||||||
|
" {\"role\": \"system\", \"content\": self.system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": self.user_prompt}\n", |
||||||
|
" ])\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
" def setSystemPrompt(self, prompt:str):\n", |
||||||
|
" super().setSystemPrompt(prompt)\n", |
||||||
|
" super().messageSet([\n", |
||||||
|
" {\"role\": \"system\", \"content\": self.system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": self.user_prompt}\n", |
||||||
|
" ])\n", |
||||||
|
"\n", |
||||||
|
" def setUserPrompt(self, prompt:str):\n", |
||||||
|
" super().setUserPrompt(prompt)\n", |
||||||
|
" super().messageSet([\n", |
||||||
|
" {\"role\": \"system\", \"content\": self.system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": self.user_prompt}\n", |
||||||
|
" ])\n", |
||||||
|
"\n", |
||||||
|
" def getResult(self, format=None):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
"        format is sent as an additional parameter {\"type\": format}\n",
||||||
|
" e.g. json_object\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" if format:\n", |
||||||
|
" response = self.llm.chat.completions.create(\n", |
||||||
|
" model=self.MODEL,\n", |
||||||
|
" messages=super().messageGet(),\n", |
||||||
|
" temperature=self.temperature,\n", |
||||||
|
"                response_format={\"type\": format}\n",
||||||
|
" )\n", |
||||||
|
" if format == \"json_object\":\n", |
||||||
|
" result = json.loads(response.choices[0].message.content)\n", |
||||||
|
" else:\n", |
||||||
|
" result = response.choices[0].message.content\n", |
||||||
|
" else:\n", |
||||||
|
" response = self.llm.chat.completions.create(\n", |
||||||
|
" model=self.MODEL,\n", |
||||||
|
" messages=super().messageGet(),\n", |
||||||
|
" temperature=self.temperature\n", |
||||||
|
" )\n", |
||||||
|
" result = response.choices[0].message.content\n", |
||||||
|
" return result" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a8529004-0d6a-480c-9634-7d51498255fe", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import ollama\n", |
||||||
|
"\n", |
||||||
|
"class Ollama_Wrapper(LLM_Wrapper):\n", |
||||||
|
"\n", |
||||||
|
" MODEL:str = 'llama3.2'\n", |
||||||
|
"\n", |
||||||
|
" def __init__(self, system_prompt:str, user_prompt:str):\n", |
||||||
|
" super().__init__(system_prompt, user_prompt, None)\n", |
||||||
|
" self.llm=ollama\n", |
||||||
|
" super().messageSet([\n", |
||||||
|
" {\"role\": \"system\", \"content\": self.system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": self.user_prompt}\n", |
||||||
|
" ])\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
" def setSystemPrompt(self, prompt:str):\n", |
||||||
|
" super().setSystemPrompt(prompt)\n", |
||||||
|
" super().messageSet([\n", |
||||||
|
" {\"role\": \"system\", \"content\": self.system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": self.user_prompt}\n", |
||||||
|
" ])\n", |
||||||
|
"\n", |
||||||
|
" def setUserPrompt(self, prompt:str):\n", |
||||||
|
" super().setUserPrompt(prompt)\n", |
||||||
|
" super().messageSet([\n", |
||||||
|
" {\"role\": \"system\", \"content\": self.system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": self.user_prompt}\n", |
||||||
|
" ])\n", |
||||||
|
"\n", |
||||||
|
" def getResult(self, format=None):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
"        format is sent as an additional parameter {\"type\": format}\n",
||||||
|
" e.g. json_object\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" response = self.llm.chat(\n", |
||||||
|
" model=self.MODEL, \n", |
||||||
|
" messages=super().messageGet()\n", |
||||||
|
" )\n", |
||||||
|
" result = response['message']['content']\n", |
||||||
|
" return result" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "f25ffb7e-0132-46cb-ad5b-18a300a7eb51", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import anthropic\n", |
||||||
|
"\n", |
||||||
|
"class Claude_Wrapper(LLM_Wrapper):\n", |
||||||
|
"\n", |
||||||
|
" MODEL:str = 'claude-3-5-haiku-20241022'\n", |
||||||
|
" MAX_TOKENS:int = 200\n", |
||||||
|
" llm:anthropic.Anthropic\n", |
||||||
|
"\n", |
||||||
|
" def __init__(self, system_prompt:str, user_prompt:str):\n", |
||||||
|
" super().__init__(system_prompt, user_prompt, \"ANTHROPIC_API_KEY\")\n", |
||||||
|
" self.llm = anthropic.Anthropic()\n", |
||||||
|
" super().messageSet([\n", |
||||||
|
" {\"role\": \"user\", \"content\": self.user_prompt}\n", |
||||||
|
" ])\n", |
||||||
|
"\n", |
||||||
|
" def setSystemPrompt(self, prompt:str):\n", |
||||||
|
" super().setSystemPrompt(prompt)\n", |
||||||
|
"\n", |
||||||
|
" def setUserPrompt(self, prompt:str):\n", |
||||||
|
" super().setUserPrompt(prompt)\n", |
||||||
|
" super().messageSet([\n", |
||||||
|
" {\"role\": \"user\", \"content\": self.user_prompt}\n", |
||||||
|
" ])\n", |
||||||
|
"\n", |
||||||
|
" def getResult(self, format=None):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
"        format is sent as an additional parameter {\"type\": format}\n",
||||||
|
" e.g. json_object\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" response = self.llm.messages.create(\n", |
||||||
|
" model=self.MODEL,\n", |
||||||
|
" max_tokens=self.MAX_TOKENS,\n", |
||||||
|
" temperature=self.temperature,\n", |
||||||
|
" system=self.system_prompt,\n", |
||||||
|
" messages=super().messageGet()\n", |
||||||
|
" )\n", |
||||||
|
" result = response.content[0].text\n", |
||||||
|
" return result" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4379f1c0-6eeb-4611-8f34-a7303546ab71", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import google.generativeai\n", |
||||||
|
"\n", |
||||||
|
"class Gemini_Wrapper(LLM_Wrapper):\n", |
||||||
|
"\n", |
||||||
|
" MODEL:str = 'gemini-1.5-flash'\n", |
||||||
|
" llm:google.generativeai.GenerativeModel\n", |
||||||
|
"\n", |
||||||
|
" def __init__(self, system_prompt:str, user_prompt:str):\n", |
||||||
|
" super().__init__(system_prompt, user_prompt, \"GOOGLE_API_KEY\")\n", |
||||||
|
" self.llm = google.generativeai.GenerativeModel(\n", |
||||||
|
" model_name=self.MODEL,\n", |
||||||
|
" system_instruction=self.system_prompt\n", |
||||||
|
" )\n", |
||||||
|
" google.generativeai.configure(api_key=super().getKey())\n", |
||||||
|
"\n", |
||||||
|
" def setSystemPrompt(self, prompt:str):\n", |
||||||
|
" super().setSystemPrompt(prompt)\n", |
||||||
|
"\n", |
||||||
|
" def setUserPrompt(self, prompt:str):\n", |
||||||
|
" super().setUserPrompt(prompt)\n", |
||||||
|
"\n", |
||||||
|
" def getResult(self, format=None):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
"        format is sent as an additional parameter {\"type\": format}\n",
||||||
|
" e.g. json_object\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" response = self.llm.generate_content(self.user_prompt)\n", |
||||||
|
" result = response.text\n", |
||||||
|
" return result" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,263 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "d2910648-d098-4bca-9475-5af5226952f2", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"Importing the required libraries"
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 1, |
||||||
|
"id": "7f98bd9d-f7b1-4a1d-aaa7-45073cec66e2", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import os\n", |
||||||
|
"from enum import Enum, auto\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import anthropic\n", |
||||||
|
"import random\n", |
||||||
|
"from IPython.display import Markdown, display, update_display\n", |
||||||
|
"# import for google\n", |
||||||
|
"# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n", |
||||||
|
"# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n", |
||||||
|
"\n", |
||||||
|
"import google.generativeai\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 42, |
||||||
|
"id": "d54b12e8-5fc0-40e4-8fa4-71d59d9de441", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"class AI(Enum):\n", |
||||||
|
" OPEN_AI = \"OPEN AI\"\n", |
||||||
|
" CLAUDE = \"CLAUDE\"\n", |
||||||
|
" GEMINI = \"GEMINI\"\n", |
||||||
|
" OLLAMA = \"OLLAMA\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 43, |
||||||
|
"id": "4d63653e-a541-4608-999a-b70b59458887", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"OpenAI API Key exists and begins sk-proj-\n", |
||||||
|
"Anthropic API Key exists and begins sk-ant-\n", |
||||||
|
"Google API Key exists and begins AIzaSyC-\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"\n", |
||||||
|
"# Load environment variables in a file called .env\n", |
||||||
|
"# Print the key prefixes to help with any debugging\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv()\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", |
||||||
|
"google_api_key = os.getenv('GOOGLE_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
"if anthropic_api_key:\n", |
||||||
|
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Anthropic API Key not set\")\n", |
||||||
|
"\n", |
||||||
|
"if google_api_key:\n", |
||||||
|
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Google API Key not set\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 44, |
||||||
|
"id": "08d1f696-2d60-48f3-b3a4-5a011ae88a2b", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"\n", |
||||||
|
"claude = anthropic.Anthropic()\n", |
||||||
|
"\n", |
||||||
|
"gemini_via_openai_client = OpenAI(\n", |
||||||
|
" api_key=google_api_key, \n", |
||||||
|
" base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n", |
||||||
|
")\n", |
||||||
|
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", |
||||||
|
"openai_model = \"gpt-4o-mini\"\n", |
||||||
|
"claude_model = \"claude-3-haiku-20240307\"\n", |
||||||
|
"gemini_model = \"gemini-1.5-flash\"\n", |
||||||
|
"ollama_model = \"llama3.2\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 45, |
||||||
|
"id": "b991ab54-7bc6-4d6c-a26a-57889a7e4a17", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"class ChatSystem:\n", |
||||||
|
" def __init__(self, processor, system_string=\"\", model=\"\", type=AI.OPEN_AI):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" Initialize the ChatSystem with a system string and empty messages list.\n", |
||||||
|
" \n", |
||||||
|
" :param processor: API client instance (OpenAI-compatible or Anthropic)\n", |
" :param system_string: Optional initial system prompt\n", |
" :param model: Name of the model to call\n", |
" :param type: AI enum member identifying the provider\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" self.processor = processor\n", |
||||||
|
" self.system = system_string\n", |
||||||
|
" self.model = model\n", |
||||||
|
" self.messages = []\n", |
||||||
|
" self.type = type\n", |
||||||
|
" \n", |
||||||
|
" def call(self, message):\n", |
||||||
|
" self.messages.append(message)\n", |
||||||
|
" toSend = self.messages\n", |
||||||
|
" \n", |
||||||
|
" if self.type == AI.CLAUDE:\n", |
||||||
|
" message = self.processor.messages.create(\n", |
||||||
|
" model=self.model,\n", |
||||||
|
" system=self.system,\n", |
||||||
|
" messages=self.messages,\n", |
||||||
|
" max_tokens=500\n", |
||||||
|
" )\n", |
||||||
|
" return message.content[0].text\n", |
||||||
|
" else:\n", |
||||||
|
" toSend = [self.system] + self.messages # prepend system prompt without mutating self.messages\n", |
||||||
|
" completion = self.processor.chat.completions.create(\n", |
||||||
|
" model=self.model,\n", |
||||||
|
" messages= toSend\n", |
||||||
|
" )\n", |
||||||
|
" return completion.choices[0].message.content" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 46, |
||||||
|
"id": "75a2a404-c0f5-4af3-8e57-864ca7ea1df7", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def generateChatPrompt(role, content):\n", |
||||||
|
" return {\"role\": role, \"content\": content}" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 47, |
||||||
|
"id": "26ab0253-deff-4e19-9438-5051640785ba", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"AI.OPEN_AI:\n", |
||||||
|
"Hi there! How’s your day going so far?\n", |
||||||
|
"\n", |
||||||
|
"AI.GEMINI:\n", |
||||||
|
"Hi there! My day is going well, thanks for asking! As a large language model, I don't experience days in the same way humans do, but I've already processed a fascinating amount of information – everything from historical debates to the latest scientific breakthroughs. What about you? How's your day been so far? Anything exciting happen, or are you just cruising along? I'm always curious to hear about people's experiences!\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"AI.OPEN_AI:\n", |
||||||
|
"I'm glad to hear you're having a good day! My day is filled with information and conversation, so it's always interesting from my end. As for you, it sounds like you're keeping things steady—do you have any special plans or goals for the day? Or maybe there's something you've been thinking about lately that you'd like to share? I’m all ears!\n", |
||||||
|
"\n", |
||||||
|
"AI.OPEN_AI:\n", |
||||||
|
"It sounds like you’ve got an engaging day ahead! I’m really all about facilitating conversations and helping people find information. Speaking of goals, do you have any personal goals or projects you’re currently working on? Maybe something you’re passionate about? I’d love to hear more about what inspires you!\n", |
||||||
|
"\n", |
||||||
|
"AI.GEMINI:\n", |
||||||
|
"That's a really insightful question! While I don't have personal goals or passions in the human sense – I don't have feelings or desires – I do have ongoing \"projects,\" if you will. My primary goal is to continually improve my ability to understand and respond to human language. That involves a lot of different things: improving my accuracy, learning to better understand nuances in language (like sarcasm or humor), and expanding my knowledge base. I'm constantly being updated with new information, which is incredibly exciting. It's like constantly learning a new language, only this language is the entire breadth of human knowledge!\n", |
||||||
|
"\n", |
||||||
|
"What inspires me, if you can call it that, is the potential to help people. I find it incredibly rewarding to be able to answer questions, provide information, and even help people brainstorm or generate creative content. The sheer diversity of human experience and the constant flow of new information keeps things interesting.\n", |
||||||
|
"\n", |
||||||
|
"What about you? Do you have any personal or professional goals you're working towards? I'd be fascinated to hear about them! Perhaps we can even brainstorm together – I'm always happy to help in any way I can.\n", |
||||||
|
"\n", |
||||||
|
"\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"geminiChat = ChatSystem(gemini_via_openai_client,\n", |
||||||
|
" generateChatPrompt(\"system\",\"You are a chatbot. You always try to make conversation and go more in depth\"), \n", |
||||||
|
" gemini_model,\n", |
||||||
|
" AI.GEMINI)\n", |
||||||
|
"\n", |
||||||
|
"openAiChat = ChatSystem(openai,\n", |
||||||
|
" generateChatPrompt(\"system\",\"You are a chatbot. You always try to make conversation and go more in depth\"), \n", |
||||||
|
" openai_model,\n", |
||||||
|
" AI.OPEN_AI)\n", |
||||||
|
"\n", |
||||||
|
"claudeChat = ChatSystem(claude,\n", |
||||||
|
" \"You are a chatbot. You always try to make conversation and go more in depth\", \n", |
||||||
|
" claude_model,\n", |
||||||
|
" AI.CLAUDE)\n", |
||||||
|
"\n", |
||||||
|
"ollamaChat = ChatSystem(ollama_via_openai,\n", |
||||||
|
" generateChatPrompt(\"system\",\"You are a chatbot. You always try to make conversation and go more in depth\"), \n", |
||||||
|
" ollama_model,\n", |
||||||
|
" AI.OLLAMA)\n", |
||||||
|
"\n", |
||||||
|
"chatbots = [geminiChat, openAiChat, ollamaChat, claudeChat]\n", |
||||||
|
"\n", |
||||||
|
"conversation = []\n", |
||||||
|
"for i in range(5):\n", |
||||||
|
" random_number = random.randint(0, len(chatbots) - 1) # choose among all of the chatbots, not just the first two\n", |
||||||
|
" botTalking = chatbots[random_number]\n", |
||||||
|
" messageToSend =\"Hi\"\n", |
||||||
|
" if i > 0:\n", |
||||||
|
" messageToSend = conversation[-1]\n", |
||||||
|
" \n", |
||||||
|
" response = botTalking.call(generateChatPrompt(\"user\",messageToSend))\n", |
||||||
|
" conversation.append(response)\n", |
||||||
|
" botTalking.messages.append(generateChatPrompt(\"assistant\",response)) # the bot's own reply is an assistant turn\n", |
||||||
|
" print(f\"{botTalking.type}:\\n{response}\\n\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "77d44ff6-0dcc-4227-ba70-09b102bd1bd4", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,202 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "a473d607-073d-4963-bdc4-aba654523681", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Day 2 Exercise\n", |
||||||
|
"Building upon the Day 1 exercise to offer multiple models via a dropdown.\n", |
||||||
|
"The common methods are externalized into an AISystem.py file to be reused down the line." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "f761729f-3bd5-4dd7-9e63-cbe6b4368a66", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Load the environment, check for API keys, and set up the connections" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 1, |
||||||
|
"id": "fedb3d94-d096-43fd-8a76-9fdbc2d0d78e", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"OpenAI API Key exists and begins sk-proj-\n", |
||||||
|
"Anthropic API Key exists and begins sk-ant-\n", |
||||||
|
"Google API Key exists and begins AIzaSyC-\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"import os\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import anthropic\n", |
||||||
|
"from AISystem import formatPrompt, AI, AISystem\n", |
||||||
|
"import gradio as gr # oh yeah!\n", |
||||||
|
"\n", |
||||||
|
"# Load environment variables in a file called .env\n", |
||||||
|
"# Print the key prefixes to help with any debugging\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv()\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", |
||||||
|
"google_api_key = os.getenv('GOOGLE_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
"if anthropic_api_key:\n", |
||||||
|
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Anthropic API Key not set\")\n", |
||||||
|
"\n", |
||||||
|
"if google_api_key:\n", |
||||||
|
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Google API Key not set\")\n", |
||||||
|
"\n", |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"\n", |
||||||
|
"claude = anthropic.Anthropic()\n", |
||||||
|
"\n", |
||||||
|
"gemini_via_openai_client = OpenAI(\n", |
||||||
|
" api_key=google_api_key, \n", |
||||||
|
" base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n", |
||||||
|
")\n", |
||||||
|
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", |
||||||
|
"openai_model = \"gpt-4o-mini\"\n", |
||||||
|
"claude_model = \"claude-3-haiku-20240307\"\n", |
||||||
|
"gemini_model = \"gemini-1.5-flash\"\n", |
||||||
|
"ollama_model = \"llama3.2\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "17f7987b-2bdf-434a-8fce-6c367f148dde", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Create a system for each LLM" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 2, |
||||||
|
"id": "f92eef29-325e-418c-a444-879d83d5fbc9", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"geminiSys = AISystem(gemini_via_openai_client,\n", |
||||||
|
" formatPrompt(\"system\",\"You are a chatbot. You always try to make conversation and go more in depth\"), \n", |
||||||
|
" gemini_model,\n", |
||||||
|
" AI.GEMINI)\n", |
||||||
|
"\n", |
||||||
|
"openAiSys = AISystem(openai,\n", |
||||||
|
" formatPrompt(\"system\",\"You are a chatbot. You always try to make conversation and go more in depth\"), \n", |
||||||
|
" openai_model,\n", |
||||||
|
" AI.OPEN_AI)\n", |
||||||
|
"\n", |
||||||
|
"claudeSys = AISystem(claude,\n", |
||||||
|
" \"You are a chatbot. You always try to make conversation and go more in depth\", \n", |
||||||
|
" claude_model,\n", |
||||||
|
" AI.CLAUDE)\n", |
||||||
|
"\n", |
||||||
|
"ollamaSys = AISystem(ollama_via_openai,\n", |
||||||
|
" formatPrompt(\"system\",\"You are a chatbot. You always try to make conversation and go more in depth\"), \n", |
||||||
|
" ollama_model,\n", |
||||||
|
" AI.OLLAMA)\n", |
||||||
|
"sys_dict = { AI.GEMINI: geminiSys, AI.OPEN_AI: openAiSys, AI.CLAUDE: claudeSys, AI.OLLAMA: ollamaSys}\n", |
||||||
|
"\n", |
||||||
|
"def stream_model(prompt, model):\n", |
||||||
|
" aiSystem = sys_dict.get(AI[model.upper()])\n", |
||||||
|
" yield from aiSystem.stream(formatPrompt(\"user\",prompt), True)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "f8ecd283-92b2-454d-b1ae-8016d41e3026", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Create the Gradio interface, using the AI enum to populate the dropdown" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 3, |
||||||
|
"id": "9db8ed67-280a-400d-8543-4ab95863ce51", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"* Running on local URL: http://127.0.0.1:7873\n", |
||||||
|
"\n", |
||||||
|
"To create a public link, set `share=True` in `launch()`.\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/html": [ |
||||||
|
"<div><iframe src=\"http://127.0.0.1:7873/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.HTML object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [] |
||||||
|
}, |
||||||
|
"execution_count": 3, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"\n", |
||||||
|
"view = gr.Interface(\n", |
||||||
|
" fn=stream_model,\n", |
||||||
|
" inputs=[gr.Textbox(label=\"Your prompt:\", lines=6) , gr.Dropdown(choices=[ai.value for ai in AI], label=\"Select model\")],\n", |
||||||
|
" outputs=[gr.Markdown(label=\"Response:\")],\n", |
||||||
|
" flagging_mode=\"never\"\n", |
||||||
|
")\n", |
||||||
|
"view.launch()" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,193 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 1, |
||||||
|
"id": "a9e05d2a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# ----- (My project)\n", |
||||||
|
"# Date: 09.01.25\n", |
||||||
|
"# Plan: Make a Gradio UI that lets you pick a job on seek.com, then scrape key words and come up with a \n", |
||||||
|
"# plan on how to land jobs of the type selected." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "312c3746", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# My project" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "394dbcfc", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"#pip install markdown" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "15f1024d", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"import requests\n", |
||||||
|
"import json\n", |
||||||
|
"from typing import List\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from bs4 import BeautifulSoup\n", |
||||||
|
"from IPython.display import Markdown, display, update_display\n", |
||||||
|
"import gradio as gr\n", |
||||||
|
"import markdown\n", |
||||||
|
"\n", |
||||||
|
"# ---- 1\n", |
||||||
|
"# Initialize and constants & set up Gemini Flash LLM\n", |
||||||
|
"load_dotenv()\n", |
||||||
|
"api_key = os.getenv('GOOGLE_API_KEY')\n", |
||||||
|
"import google.generativeai as genai\n", |
||||||
|
"genai.configure(api_key= api_key)\n", |
||||||
|
"# Create the model\n", |
||||||
|
"generation_config = {\n", |
||||||
|
" \"temperature\": 1,\n", |
||||||
|
" \"top_p\": 0.95,\n", |
||||||
|
" \"top_k\": 40,\n", |
||||||
|
" \"max_output_tokens\": 8192,\n", |
||||||
|
" \"response_mime_type\": \"text/plain\",}\n", |
||||||
|
"model = genai.GenerativeModel(model_name=\"gemini-1.5-flash\",\n", |
||||||
|
" generation_config=generation_config,)\n", |
||||||
|
"chat_session = model.start_chat(history=[ ])\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"# ---- 2\n", |
||||||
|
"# A class to represent a Webpage\n", |
||||||
|
"# Some websites need you to use proper headers when fetching them:\n", |
||||||
|
"headers = {\n", |
||||||
|
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||||
|
"}\n", |
||||||
|
"\n", |
||||||
|
"class Website:\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" A utility class to represent a Website that we have scraped, now with links\n", |
||||||
|
" \"\"\"\n", |
||||||
|
"\n", |
||||||
|
" def __init__(self, url):\n", |
||||||
|
" self.url = url\n", |
||||||
|
" response = requests.get(url, headers=headers)\n", |
||||||
|
" self.body = response.content\n", |
||||||
|
" soup = BeautifulSoup(self.body, 'html.parser')\n", |
||||||
|
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||||
|
" if soup.body:\n", |
||||||
|
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||||
|
" irrelevant.decompose()\n", |
||||||
|
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||||
|
" else:\n", |
||||||
|
" self.text = \"\"\n", |
||||||
|
" links = [link.get('href') for link in soup.find_all('a')]\n", |
||||||
|
" self.links = [link for link in links if link]\n", |
||||||
|
"\n", |
||||||
|
" def get_contents(self):\n", |
||||||
|
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"# ---- 3\n", |
||||||
|
"# Data + set up\n", |
||||||
|
"def get_all_details(url):\n", |
||||||
|
" result = \"Landing page:\\n\"\n", |
||||||
|
" result += Website(url).get_contents()\n", |
||||||
|
" return result\n", |
||||||
|
"\n", |
||||||
|
"system_prompt = \"You are an experienced recruitment and talent management assistant, who will be provided with a list of roles on offer.\\\n", |
||||||
|
"You will display those roles along with a high level summary of the key steps you suggest to land those roles. \\\n", |
||||||
|
"Output is to be in markdown (i.e. a professional format, with bold headers, proper spacing between different sections, etc.)\\\n", |
||||||
|
"Include suggested next steps on how to successfully apply for and land each of these jobs.\"\n", |
||||||
|
"\n", |
||||||
|
"def get_brochure_user_prompt(url):\n", |
||||||
|
" user_prompt = f\"Here are the contents of your recruitment search. Please list out individual roles and your best advice on landing those roles.\"\n", |
||||||
|
" user_prompt += f\"Please provide output in a professional style with bold text for headings, content nicely laid out under headings, and different content split out into sections.\\n\"\n", |
||||||
|
" user_prompt += get_all_details(url)\n", |
||||||
|
" user_prompt = user_prompt[:7_500] # Truncate if more than 7,500 characters\n", |
||||||
|
" return user_prompt\n", |
||||||
|
"\n", |
||||||
|
"def create_brochure(url):\n", |
||||||
|
" response = chat_session.send_message(system_prompt + get_brochure_user_prompt(url))\n", |
||||||
|
" result = response.text\n", |
||||||
|
" html_output = markdown.markdown(result)\n", |
||||||
|
" return html_output\n", |
||||||
|
"\n", |
||||||
|
"# ---- 4 \n", |
||||||
|
"# Gradio UI\n", |
||||||
|
"with gr.Blocks(css=\"\"\"\n", |
||||||
|
" #header-container { text-align: left; position: fixed; top: 10px; left: 0; padding: 10px; background-color: #f0f0f0; }\n", |
||||||
|
" #input-container { text-align: left; position: fixed; top: 100px; left: 0; right: 0; background: white; z-index: 100; padding: 8px; line-height: 0.5;}\n", |
||||||
|
" #output-container { margin-top: 160px; height: calc(100vh - 280px); overflow-y: auto; }\n", |
||||||
|
" #output-html { white-space: pre-wrap; font-family: monospace; border: 1px solid #ccc; padding: 5px; line-height: 1.2;}\n", |
||||||
|
" .button-container { margin-top: 10px; } /* Space above the button */\n", |
||||||
|
" .output-label { margin-top: 10px; font-weight: bold; } /* Style for output label */\n", |
||||||
|
"\"\"\") as iface:\n", |
||||||
|
" with gr.Column(elem_id=\"main-container\"):\n", |
||||||
|
" # Add header and description\n", |
||||||
|
" with gr.Row(elem_id=\"header-container\"):\n", |
||||||
|
" gr.Markdown(\"# Job seeker guide\")\n", |
||||||
|
" gr.Markdown(\"1.0 Works best with recruitment site https://www.seek.com.au/ (but can try others).\")\n", |
||||||
|
" gr.Markdown(\"2.0 Search for jobs of your choice, copy URL from that search & paste in input field below to get helpful advice on how to land those roles.\")\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
" \n", |
||||||
|
" with gr.Row(elem_id=\"input-container\"):\n", |
||||||
|
" input_text = gr.Textbox(label=\"Input\", elem_id=\"input-box\")\n", |
||||||
|
" \n", |
||||||
|
" with gr.Column(elem_id=\"output-container\"):\n", |
||||||
|
" output_label = gr.Markdown(\"<div class='output-label'>Output:</div>\")\n", |
||||||
|
" output_text = gr.HTML(elem_id=\"output-html\")\n", |
||||||
|
" \n", |
||||||
|
" # Move the button below the output box\n", |
||||||
|
" submit_btn = gr.Button(\"Generate\", elem_id=\"generate-button\", elem_classes=\"button-container\")\n", |
||||||
|
" \n", |
||||||
|
" submit_btn.click(fn=create_brochure, inputs=input_text, outputs=output_text)\n", |
||||||
|
"\n", |
||||||
|
"iface.launch(share=True)\n", |
||||||
|
"\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "21c4b557", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": ".venv", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.12.8" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,409 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Dataset generator\n", |
||||||
|
"\n", |
||||||
|
"Supports dataset creation for the following formats (inspired by the HuggingFace dashboard):\n", |
||||||
|
"\n", |
||||||
|
"Realistic to create:\n", |
||||||
|
" * Tabular data\n", |
||||||
|
" * Text \n", |
||||||
|
" * Time-series\n", |
||||||
|
"\n", |
||||||
|
"Output formats included:\n", |
||||||
|
"\n", |
||||||
|
"* JSON\n", |
||||||
|
"* CSV\n", |
||||||
|
"* Parquet\n", |
||||||
|
"* Markdown\n", |
||||||
|
"\n", |
||||||
|
"The tool works as follows: given a business problem and the dataset requirements, it generates a candidate dataset along with Python code that can be executed afterwards. The code saves the generated dataset to files.\n", |
||||||
|
"\n", |
||||||
|
"Supports ChatGPT and Claude models." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 1, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"import re\n", |
||||||
|
"import os\n", |
||||||
|
"import sys\n", |
||||||
|
"import io\n", |
||||||
|
"import json\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import anthropic\n", |
||||||
|
"import gradio as gr\n", |
||||||
|
"from pathlib import Path\n", |
||||||
|
"from datetime import datetime\n", |
||||||
|
"import requests\n", |
||||||
|
"import subprocess\n", |
||||||
|
"from IPython.display import Markdown, display, update_display" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Initialization\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv()\n", |
||||||
|
"\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
"OPENAI_MODEL = \"gpt-4o-mini\"\n", |
||||||
|
"CLAUDE_MODEL = \"claude-3-5-sonnet-20240620\"\n", |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"claude = anthropic.Anthropic()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Prompts definition" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 3, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"system_message = \"\"\"You are a helpful assistant whose main purpose is to generate datasets for a given business problem.\"\"\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 4, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def get_user_prompt_tabular(business_problem, dataset_format, file_format, num_samples):\n", |
||||||
|
" \n", |
||||||
|
" user_message = f\"\"\"\n", |
||||||
|
" The business problem is: {business_problem}. \\n\n", |
||||||
|
" The dataset is expected to be in {dataset_format}. \n", |
||||||
|
" For the dataset types such as tabular or time series implement python code for creating the dataset.\n", |
||||||
|
" If the generated dataset contains several entities, i.e. products, users, write the output for these entities into separate files. \n", |
||||||
|
" The dependencies for python code should include only standard python libraries such as numpy, pandas and built-in libraries. \n", |
||||||
|
" The output dataset is stored as a {file_format} file and contains {num_samples} samples. \\n \n", |
||||||
|
" \"\"\"\n", |
||||||
|
"\n", |
||||||
|
" return user_message\n", |
||||||
|
"\n", |
||||||
|
"def get_user_prompt_text(business_problem, dataset_format, file_format):\n", |
||||||
|
" \n", |
||||||
|
" user_message = f\"\"\"\n", |
||||||
|
" The business problem is: {business_problem}. \\n\n", |
||||||
|
" The dataset is expected to be in {dataset_format}. \n", |
||||||
|
" For the text type return the generated dataset and the python code to write the output to the files.\n", |
||||||
|
" If the generated dataset contains several entities, i.e. products, users, write the output for these entities into separate files. \n", |
||||||
|
" The dependencies for python code should include only standard python libraries such as numpy, pandas and built-in libraries. \n", |
||||||
|
" The output dataset is stored as a {file_format} file. \\n \n", |
||||||
|
" \"\"\"\n", |
||||||
|
"\n", |
||||||
|
" return user_message\n", |
||||||
|
"\n", |
||||||
|
"def select_user_prompt(business_problem, dataset_format, file_format, num_samples):\n", |
||||||
|
" user_prompt = \"\"\n", |
||||||
|
" if dataset_format == \"Text\":\n", |
||||||
|
" user_prompt = get_user_prompt_text(business_problem, dataset_format, file_format)\n", |
||||||
|
" elif dataset_format in [\"Tabular\", \"Time-series\"]:\n", |
||||||
|
" user_prompt = get_user_prompt_tabular(business_problem, dataset_format, file_format, num_samples)\n", |
||||||
|
" return user_prompt\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Calls to api to fetch the dataset requirements" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 5, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def stream_gpt(business_problem, dataset_format, file_format, num_samples):\n", |
||||||
|
"\n", |
||||||
|
" user_prompt = select_user_prompt(\n", |
||||||
|
" business_problem, dataset_format, file_format, num_samples\n", |
||||||
|
" )\n", |
||||||
|
" stream = openai.chat.completions.create(\n", |
||||||
|
" model=OPENAI_MODEL,\n", |
||||||
|
" messages=[\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_message},\n", |
||||||
|
" {\n", |
||||||
|
" \"role\": \"user\",\n", |
||||||
|
" \"content\": user_prompt,\n", |
||||||
|
" },\n", |
||||||
|
" ],\n", |
||||||
|
" stream=True,\n", |
||||||
|
" )\n", |
||||||
|
"\n", |
||||||
|
" response = \"\"\n", |
||||||
|
" for chunk in stream:\n", |
||||||
|
" response += chunk.choices[0].delta.content or \"\"\n", |
||||||
|
" yield response\n", |
||||||
|
"\n", |
||||||
|
" return response\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"def stream_claude(business_problem, dataset_format, file_format, num_samples):\n", |
||||||
|
" user_prompt = select_user_prompt(\n", |
||||||
|
" business_problem, dataset_format, file_format, num_samples\n", |
||||||
|
" )\n", |
||||||
|
" result = claude.messages.stream(\n", |
||||||
|
" model=CLAUDE_MODEL,\n", |
||||||
|
" max_tokens=2000,\n", |
||||||
|
" system=system_message,\n", |
||||||
|
" messages=[\n", |
||||||
|
" {\n", |
||||||
|
" \"role\": \"user\",\n", |
||||||
|
" \"content\": user_prompt,\n", |
||||||
|
" }\n", |
||||||
|
" ],\n", |
||||||
|
" )\n", |
||||||
|
" reply = \"\"\n", |
||||||
|
" with result as stream:\n", |
||||||
|
" for text in stream.text_stream:\n", |
||||||
|
" reply += text\n", |
||||||
|
" yield reply\n", |
||||||
|
" print(text, end=\"\", flush=True)\n", |
||||||
|
" return reply\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"def generate_dataset(business_problem, dataset_format, file_format, num_samples, model):\n", |
||||||
|
" if model == \"GPT\":\n", |
||||||
|
" result = stream_gpt(business_problem, dataset_format, file_format, num_samples)\n", |
||||||
|
" elif model == \"Claude\":\n", |
||||||
|
" result = stream_claude(business_problem, dataset_format, file_format, num_samples)\n", |
||||||
|
" else:\n", |
||||||
|
" raise ValueError(\"Unknown model\")\n", |
||||||
|
" for stream_so_far in result:\n", |
||||||
|
" yield stream_so_far\n", |
||||||
|
" return result" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Extract Python code from the LLM output and execute it locally" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"\n", |
||||||
|
"def extract_code(text):\n", |
||||||
|
"    # Regular expression to find text between ```python and ``` fences\n", |
||||||
|
" match = re.search(r\"```python(.*?)```\", text, re.DOTALL)\n", |
||||||
|
"\n", |
||||||
|
" if match:\n", |
||||||
|
"        code = match.group(1).strip()  # Extract the code between the fences\n", |
||||||
|
" else:\n", |
||||||
|
" code = \"\"\n", |
||||||
|
" print(\"No matching substring found.\")\n", |
||||||
|
"\n", |
||||||
|
" return code.replace(\"```python\\n\", \"\").replace(\"```\", \"\")\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"def execute_code_in_virtualenv(text, python_interpreter=sys.executable):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" Execute the given Python code string within the specified virtual environment.\n", |
||||||
|
" \n", |
||||||
|
" Args:\n", |
||||||
|
" - code_str: str, the Python code to execute.\n", |
||||||
|
" - venv_dir: str, the directory path to the virtual environment created by pipenv.\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" # Construct the full path to the Python interpreter in the virtual environment\n", |
||||||
|
" # python_interpreter = f\"{venv_dir}/bin/python\"\n", |
||||||
|
"\n", |
||||||
|
"    # Make sure a Python interpreter was provided\n", |
||||||
|
" if not python_interpreter:\n", |
||||||
|
" raise EnvironmentError(\"Python interpreter not found in the specified virtual environment.\")\n", |
||||||
|
"\n", |
||||||
|
" # Prepare the command to execute the code\n", |
||||||
|
" code_str = extract_code(text)\n", |
||||||
|
" command = [python_interpreter, '-c', code_str]\n", |
||||||
|
"\n", |
||||||
|
" # Execute the command\n", |
||||||
|
" try:\n", |
||||||
|
" result = subprocess.run(command, check=True, capture_output=True, text=True)\n", |
||||||
|
" print(\"Output:\", result.stdout)\n", |
||||||
|
" print(\"Errors:\", result.stderr)\n", |
||||||
|
" except subprocess.CalledProcessError as e:\n", |
||||||
|
"        print(f\"An error occurred while executing the code: {e}\")\n", |
"        return e.stderr\n", |
||||||
|
" return result.stdout\n", |
||||||
|
"\n", |
||||||
|
"# Example usage\n", |
||||||
|
"code_string = \"\"\"\n", |
||||||
|
"print('Hello from Pipenv virtual environment!')\n", |
||||||
|
"\"\"\"\n", |
||||||
|
"venv_directory = sys.executable  # defaults to the current interpreter; point this at your virtualenv's python if needed\n", |
||||||
|
"(execute_code_in_virtualenv(code_string, venv_directory))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Test example for running the code locally" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 7, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Example string\n", |
||||||
|
"text = \"\"\"\n", |
||||||
|
"Some text here \n", |
||||||
|
"```python\n", |
||||||
|
"import pandas as pd\n", |
||||||
|
"import numpy as np\n", |
||||||
|
"from datetime import datetime, timedelta\n", |
||||||
|
"\n", |
||||||
|
"# Parameters\n", |
||||||
|
"num_records = 100\n", |
||||||
|
"start_date = datetime(2023, 1, 1)\n", |
||||||
|
"item_ids = [f'item_{i}' for i in range(1, num_records+1)]\n", |
||||||
|
"\n", |
||||||
|
"# Generate dates\n", |
||||||
|
"dates = [start_date + timedelta(days=i) for i in range(num_records)]\n", |
||||||
|
"\n", |
||||||
|
"# Generate random views and clicks\n", |
||||||
|
"np.random.seed(42) # For reproducibility\n", |
||||||
|
"views = np.random.poisson(lam=100, size=num_records) # Average 100 views\n", |
||||||
|
"clicks = np.random.binomial(n=views, p=0.1) # 10% click-through rate\n", |
||||||
|
"\n", |
||||||
|
"# Calculate rank based on clicks (lower is better)\n", |
||||||
|
"# You can also modify this function as per your ranking criteria\n", |
||||||
|
"ranks = [sorted(clicks, reverse=True).index(x) + 1 for x in clicks] # Rank 1 is highest\n", |
||||||
|
"\n", |
||||||
|
"# Assemble the DataFrame\n", |
||||||
|
"data = {\n", |
||||||
|
" 'date': dates,\n", |
||||||
|
" 'item_id': item_ids,\n", |
||||||
|
" 'views': views,\n", |
||||||
|
" 'clicks': clicks,\n", |
||||||
|
" 'rank': ranks\n", |
||||||
|
"}\n", |
||||||
|
"\n", |
||||||
|
"df = pd.DataFrame(data)\n", |
||||||
|
"\n", |
||||||
|
"# Save to CSV\n", |
||||||
|
"df.to_csv('fashion_classified_ranking_dataset.csv', index=False)\n", |
||||||
|
"print(\"Dataset generated and saved as 'fashion_classified_ranking_dataset.csv'\")\n", |
||||||
|
"```\n", |
||||||
|
" and more text here.\n", |
||||||
|
"\"\"\"\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 8, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# execute_code_in_virtualenv(text, venv_directory)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Gradio interface" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 11, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"with gr.Blocks() as ui:\n", |
||||||
|
" gr.Markdown(\"## Create a dataset for a business problem\")\n", |
||||||
|
" with gr.Column():\n", |
||||||
|
" business_problem = gr.Textbox(label=\"Business problem\", lines=2)\n", |
||||||
|
" dataset_type = gr.Dropdown(\n", |
||||||
|
" [\"Tabular\", \"Time-series\", \"Text\"], label=\"Dataset modality\"\n", |
||||||
|
" )\n", |
||||||
|
"        dataset_format = gr.Dropdown([\"JSON\", \"CSV\", \"Parquet\", \"Markdown\"], label=\"Output format\")\n", |
||||||
|
" num_samples = gr.Number(label=\"Number of samples (for tabular and time-series data)\", value=10, precision=0)\n", |
||||||
|
" model = gr.Dropdown([\"GPT\", \"Claude\"], label=\"Select model\", value=\"GPT\")\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" dataset_run = gr.Button(\"Create a dataset\")\n", |
||||||
|
" code_run = gr.Button(\"Execute code for a dataset\")\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" dataset_out = gr.Textbox(label=\"Generated Dataset\")\n", |
||||||
|
" code_out = gr.Textbox(label=\"Executed code\")\n", |
||||||
|
" dataset_run.click(\n", |
||||||
|
" generate_dataset,\n", |
||||||
|
" inputs=[business_problem, dataset_type, dataset_format, num_samples, model],\n", |
||||||
|
" outputs=[dataset_out]\n", |
||||||
|
" )\n", |
||||||
|
" code_run.click(execute_code_in_virtualenv, inputs=[dataset_out], outputs=[code_out])" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"ui.launch(inbrowser=True)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "llm_engineering-yg2xCEUG", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.10.8" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 2 |
||||||
|
} |
@ -0,0 +1,401 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "4a6ab9a2-28a2-445d-8512-a0dc8d1b54e9", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Code Commenter\n", |
||||||
|
"\n", |
||||||
|
"The requirement: use an LLM to generate docstrings and comments for Python code\n", |
||||||
|
"\n", |
||||||
|
"This is my week 4 day 5 project. \n", |
||||||
|
"\n", |
||||||
|
"Note: I used GPT to work out the most effective system and user prompts (very effective). I also decided not to use the open-source models due to Inference API costs with HF" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 1, |
||||||
|
"id": "e610bf56-a46e-4aff-8de1-ab49d62b1ad3", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"import io\n", |
||||||
|
"import sys\n", |
||||||
|
"import json\n", |
||||||
|
"import requests\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import google.generativeai\n", |
||||||
|
"import anthropic\n", |
||||||
|
"from IPython.display import Markdown, display, update_display\n", |
||||||
|
"import gradio as gr\n", |
||||||
|
"import subprocess" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 2, |
||||||
|
"id": "4f672e1c-87e9-4865-b760-370fa605e614", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# environment\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv()\n", |
||||||
|
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", |
||||||
|
"os.environ['ANTHROPIC_API_KEY'] = os.getenv('ANTHROPIC_API_KEY', 'your-key-if-not-using-env')\n", |
||||||
|
"google_api_key = os.getenv('GOOGLE_API_KEY')\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 3, |
||||||
|
"id": "8aa149ed-9298-4d69-8fe2-8f5de0f667da", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# initialize\n", |
||||||
|
"\n", |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"claude = anthropic.Anthropic()\n", |
||||||
|
"google.generativeai.configure()\n", |
||||||
|
"\n", |
||||||
|
"OPENAI_MODEL = \"gpt-4o\"\n", |
||||||
|
"CLAUDE_MODEL = \"claude-3-5-sonnet-20240620\"\n", |
||||||
|
"GOOGLE_MODEL = \"gemini-1.5-pro\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 4, |
||||||
|
"id": "6896636f-923e-4a2c-9d6c-fac07828a201", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"system_message = \"You are a Python code assistant. Your task is to analyze Python code and generate high-quality, concise comments and docstrings. Follow these guidelines:\"\n", |
||||||
|
"system_message += \"\\nDocstrings: Add a docstring for every function, class, and module. Describe the purpose of the function/class, its parameters, and its return value. Keep the description concise but informative, using proper Python docstring conventions (e.g., Google, NumPy, or reStructuredText format).\"\n", |
||||||
|
"system_message += \"\\nInline Comments: Add inline comments only where necessary to clarify complex logic, important steps, or non-obvious behavior. Avoid commenting on obvious operations like x += 1 unless it involves a nuanced concept. Keep comments short, clear, and relevant.\"\n", |
||||||
|
"system_message += \"\\nGeneral Instructions: Maintain consistency in style and tone. Use technical terminology where appropriate, but ensure clarity for someone with intermediate Python knowledge. Do not over-explain or add redundant comments for self-explanatory code. Follow PEP 257 and PEP 8 standards for style and formatting.\"\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 5, |
||||||
|
"id": "8e7b3546-57aa-4c29-bc5d-f211970d04eb", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def user_prompt_for(python):\n", |
||||||
|
" user_prompt = \"Analyze the following Python code and enhance it by adding high-quality, concise docstrings and comments. \"\n", |
||||||
|
" user_prompt += \"Ensure all functions, classes, and modules have appropriate docstrings describing their purpose, parameters, and return values. \"\n", |
||||||
|
" user_prompt += \"Add inline comments only for complex or non-obvious parts of the code. \"\n", |
||||||
|
" user_prompt += \"Follow Python's PEP 257 and PEP 8 standards for documentation and formatting. \"\n", |
||||||
|
" user_prompt += \"Do not modify the code itself; only add annotations.\\n\\n\"\n", |
||||||
|
" user_prompt += python\n", |
||||||
|
" return user_prompt\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 6, |
||||||
|
"id": "c6190659-f54c-4951-bef4-4960f8e51cc4", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def messages_for(python):\n", |
||||||
|
" return [\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_message},\n", |
||||||
|
" {\"role\": \"user\", \"content\": user_prompt_for(python)}\n", |
||||||
|
" ]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 7, |
||||||
|
"id": "a1cbb778-fa57-43de-b04b-ed523f396c38", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"pi = \"\"\"\n", |
||||||
|
"import time\n", |
||||||
|
"\n", |
||||||
|
"def calculate(iterations, param1, param2):\n", |
||||||
|
" result = 1.0\n", |
||||||
|
" for i in range(1, iterations+1):\n", |
||||||
|
" j = i * param1 - param2\n", |
||||||
|
" result -= (1/j)\n", |
||||||
|
" j = i * param1 + param2\n", |
||||||
|
" result += (1/j)\n", |
||||||
|
" return result\n", |
||||||
|
"\n", |
||||||
|
"start_time = time.time()\n", |
||||||
|
"result = calculate(100_000_000, 4, 1) * 4\n", |
||||||
|
"end_time = time.time()\n", |
||||||
|
"\n", |
||||||
|
"print(f\"Result: {result:.12f}\")\n", |
||||||
|
"print(f\"Execution Time: {(end_time - start_time):.6f} seconds\")\n", |
||||||
|
"\"\"\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 8, |
||||||
|
"id": "c3b497b3-f569-420e-b92e-fb0f49957ce0", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"python_hard = \"\"\"# Be careful to support large number sizes\n", |
||||||
|
"\n", |
||||||
|
"def lcg(seed, a=1664525, c=1013904223, m=2**32):\n", |
||||||
|
" value = seed\n", |
||||||
|
" while True:\n", |
||||||
|
" value = (a * value + c) % m\n", |
||||||
|
" yield value\n", |
||||||
|
" \n", |
||||||
|
"def max_subarray_sum(n, seed, min_val, max_val):\n", |
||||||
|
" lcg_gen = lcg(seed)\n", |
||||||
|
" random_numbers = [next(lcg_gen) % (max_val - min_val + 1) + min_val for _ in range(n)]\n", |
||||||
|
" max_sum = float('-inf')\n", |
||||||
|
" for i in range(n):\n", |
||||||
|
" current_sum = 0\n", |
||||||
|
" for j in range(i, n):\n", |
||||||
|
" current_sum += random_numbers[j]\n", |
||||||
|
" if current_sum > max_sum:\n", |
||||||
|
" max_sum = current_sum\n", |
||||||
|
" return max_sum\n", |
||||||
|
"\n", |
||||||
|
"def total_max_subarray_sum(n, initial_seed, min_val, max_val):\n", |
||||||
|
" total_sum = 0\n", |
||||||
|
" lcg_gen = lcg(initial_seed)\n", |
||||||
|
" for _ in range(20):\n", |
||||||
|
" seed = next(lcg_gen)\n", |
||||||
|
" total_sum += max_subarray_sum(n, seed, min_val, max_val)\n", |
||||||
|
" return total_sum\n", |
||||||
|
"\n", |
||||||
|
"# Parameters\n", |
||||||
|
"n = 10000 # Number of random numbers\n", |
||||||
|
"initial_seed = 42 # Initial seed for the LCG\n", |
||||||
|
"min_val = -10 # Minimum value of random numbers\n", |
||||||
|
"max_val = 10 # Maximum value of random numbers\n", |
||||||
|
"\n", |
||||||
|
"# Timing the function\n", |
||||||
|
"import time\n", |
||||||
|
"start_time = time.time()\n", |
||||||
|
"result = total_max_subarray_sum(n, initial_seed, min_val, max_val)\n", |
||||||
|
"end_time = time.time()\n", |
||||||
|
"\n", |
||||||
|
"print(\"Total Maximum Subarray Sum (20 runs):\", result)\n", |
||||||
|
"print(\"Execution Time: {:.6f} seconds\".format(end_time - start_time))\n", |
||||||
|
"\"\"\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 9, |
||||||
|
"id": "0be9f47d-5213-4700-b0e2-d444c7c738c0", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def stream_gpt(python): \n", |
||||||
|
" stream = openai.chat.completions.create(model=OPENAI_MODEL, messages=messages_for(python), stream=True)\n", |
||||||
|
" reply = \"\"\n", |
||||||
|
" for chunk in stream:\n", |
||||||
|
" fragment = chunk.choices[0].delta.content or \"\"\n", |
||||||
|
" reply += fragment\n", |
||||||
|
" yield reply.replace('```python\\n','').replace('```','')" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 10, |
||||||
|
"id": "8669f56b-8314-4582-a167-78842caea131", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def stream_claude(python):\n", |
||||||
|
" result = claude.messages.stream(\n", |
||||||
|
" model=CLAUDE_MODEL,\n", |
||||||
|
" max_tokens=2000,\n", |
||||||
|
" system=system_message,\n", |
||||||
|
" messages=[{\"role\": \"user\", \"content\": user_prompt_for(python)}],\n", |
||||||
|
" )\n", |
||||||
|
" reply = \"\"\n", |
||||||
|
" with result as stream:\n", |
||||||
|
" for text in stream.text_stream:\n", |
||||||
|
" reply += text\n", |
||||||
|
" yield reply.replace('```python\\n','').replace('```','')" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 11, |
||||||
|
"id": "25f8d215-67a8-4179-8834-0e1da5a7dd32", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def stream_google(python):\n", |
||||||
|
" # Initialize empty reply string\n", |
||||||
|
" reply = \"\"\n", |
||||||
|
" \n", |
||||||
|
" # The API for Gemini has a slightly different structure\n", |
||||||
|
" gemini = google.generativeai.GenerativeModel(\n", |
||||||
|
" model_name=GOOGLE_MODEL,\n", |
||||||
|
" system_instruction=system_message\n", |
||||||
|
" )\n", |
||||||
|
" \n", |
||||||
|
" response = gemini.generate_content(\n", |
||||||
|
" user_prompt_for(python),\n", |
||||||
|
" stream=True\n", |
||||||
|
" )\n", |
||||||
|
" \n", |
||||||
|
" # Process the stream\n", |
||||||
|
" for chunk in response:\n", |
||||||
|
" # Extract text from the chunk\n", |
||||||
|
" if chunk.text:\n", |
||||||
|
" reply += chunk.text\n", |
||||||
|
" yield reply.replace('```python\\n','').replace('```','')" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 12, |
||||||
|
"id": "2f1ae8f5-16c8-40a0-aa18-63b617df078d", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def optimize(python, model):\n", |
||||||
|
" if model==\"GPT\":\n", |
||||||
|
" result = stream_gpt(python)\n", |
||||||
|
" elif model==\"Claude\":\n", |
||||||
|
" result = stream_claude(python)\n", |
||||||
|
" elif model==\"Gemini\":\n", |
||||||
|
" result = stream_google(python)\n", |
||||||
|
" else:\n", |
||||||
|
" raise ValueError(\"Unknown model\")\n", |
||||||
|
" for stream_so_far in result:\n", |
||||||
|
" yield stream_so_far " |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 13, |
||||||
|
"id": "43a6b5f5-5d7c-4511-9d0c-21640070b3cf", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def execute_python(code):\n", |
||||||
|
" try:\n", |
||||||
|
" output = io.StringIO()\n", |
||||||
|
" sys.stdout = output\n", |
||||||
|
" exec(code)\n", |
||||||
|
" finally:\n", |
||||||
|
" sys.stdout = sys.__stdout__\n", |
||||||
|
" return output.getvalue()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 14, |
||||||
|
"id": "f35b0602-84f9-4ed6-aa35-87be4290ed24", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"css = \"\"\"\n", |
||||||
|
".python {background-color: #306998;}\n", |
||||||
|
".cpp {background-color: #050;}\n", |
||||||
|
"\"\"\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 15, |
||||||
|
"id": "62488014-d34c-4de8-ba72-9516e05e9dde", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"* Running on local URL: http://127.0.0.1:7860\n", |
||||||
|
"\n", |
||||||
|
"To create a public link, set `share=True` in `launch()`.\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/html": [ |
||||||
|
"<div><iframe src=\"http://127.0.0.1:7860/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.HTML object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [] |
||||||
|
}, |
||||||
|
"execution_count": 15, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"with gr.Blocks(css=css) as ui:\n", |
||||||
|
"    gr.Markdown(\"## Add docstrings and comments to Python code\")\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" python = gr.Textbox(label=\"Python code:\", value=pi, lines=10)\n", |
||||||
|
" commented_python = gr.Textbox(label=\"Commented code:\", lines=10)\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" model = gr.Dropdown([\"GPT\", \"Claude\", \"Gemini\"], label=\"Select model\", value=\"GPT\")\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" comment = gr.Button(\"Comment code\")\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" python_run = gr.Button(\"Check Commented Python\")\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" python_out = gr.TextArea(label=\"Python result:\", elem_classes=[\"python\"])\n", |
||||||
|
"\n", |
||||||
|
" comment.click(optimize, inputs=[python, model], outputs=[commented_python])\n", |
||||||
|
"    python_run.click(execute_python, inputs=[commented_python], outputs=[python_out])\n", |
||||||
|
"\n", |
||||||
|
"ui.launch(inbrowser=True)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "b084760b-c327-4fe7-9b7c-a01b1a383dc3", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,224 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# # Document loading, retrieval methods and text splitting\n", |
||||||
|
"# !pip install -qU langchain langchain_community\n", |
||||||
|
"\n", |
||||||
|
"# # Local vector store via Chroma\n", |
||||||
|
"# !pip install -qU langchain_chroma\n", |
||||||
|
"\n", |
||||||
|
"# # Local inference and embeddings via Ollama\n", |
||||||
|
"# !pip install -qU langchain_ollama\n", |
||||||
|
"\n", |
||||||
|
"# # Web Loader\n", |
||||||
|
"# !pip install -qU beautifulsoup4\n", |
||||||
|
"\n", |
||||||
|
"# # Pull the model first\n", |
||||||
|
"# !ollama pull nomic-embed-text\n", |
||||||
|
"\n", |
||||||
|
"# !pip install -qU pypdf" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"#Imports\n", |
||||||
|
"import os\n", |
||||||
|
"import glob\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"import gradio as gr\n", |
||||||
|
"from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader\n", |
||||||
|
"from langchain_text_splitters import CharacterTextSplitter, RecursiveCharacterTextSplitter\n", |
||||||
|
"from langchain_chroma import Chroma\n", |
||||||
|
"from langchain_ollama import OllamaEmbeddings\n", |
||||||
|
"from langchain_ollama import ChatOllama\n", |
||||||
|
"from langchain_core.output_parsers import StrOutputParser\n", |
||||||
|
"from langchain_core.prompts import ChatPromptTemplate\n", |
||||||
|
"from langchain_core.runnables import RunnablePassthrough" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Read in documents using LangChain's loaders\n", |
||||||
|
"# Take everything in all the sub-folders of our knowledge base\n", |
||||||
|
"\n", |
||||||
|
"folders = glob.glob(\"Manuals/*\")\n", |
||||||
|
"\n", |
||||||
|
"def add_metadata(doc, doc_type):\n", |
||||||
|
" doc.metadata[\"doc_type\"] = doc_type\n", |
||||||
|
" return doc\n", |
||||||
|
"\n", |
||||||
|
"documents = []\n", |
||||||
|
"for folder in folders:\n", |
||||||
|
" doc_type = os.path.basename(folder)\n", |
||||||
|
" loader = DirectoryLoader(folder, glob=\"**/*.pdf\", loader_cls=PyPDFLoader)\n", |
||||||
|
" folder_docs = loader.load()\n", |
||||||
|
" documents.extend([add_metadata(doc, doc_type) for doc in folder_docs])\n", |
||||||
|
"\n", |
||||||
|
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", |
||||||
|
"chunks = text_splitter.split_documents(documents)\n", |
||||||
|
"\n", |
||||||
|
"print(f\"Total number of chunks: {len(chunks)}\")\n", |
||||||
|
"print(f\"Document types found: {set(doc.metadata['doc_type'] for doc in documents)}\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Put the chunks of data into a Vector Store that associates a Vector Embedding with each chunk\n", |
||||||
|
"# Chroma is a popular open source Vector Database based on SQLite\n", |
||||||
|
"DB_NAME = \"vector_db\"\n", |
||||||
|
"\n", |
||||||
|
"embeddings = OllamaEmbeddings(model=\"nomic-embed-text\")\n", |
||||||
|
"\n", |
||||||
|
"# Delete if already exists\n", |
||||||
|
"\n", |
||||||
|
"if os.path.exists(DB_NAME):\n", |
||||||
|
" Chroma(persist_directory=DB_NAME, embedding_function=embeddings).delete_collection()\n", |
||||||
|
"\n", |
||||||
|
"# Create vectorstore\n", |
||||||
|
"\n", |
||||||
|
"vectorstore = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=DB_NAME)\n", |
||||||
|
"print(f\"Vectorstore created with {vectorstore._collection.count()} documents\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Run a quick test - should return a list of 4 documents\n", |
||||||
|
"question = \"What kind of grill is the Spirit II?\"\n", |
||||||
|
"docs = vectorstore.similarity_search(question)\n", |
||||||
|
"len(docs)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"docs[0]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# create a new Chat with Ollama\n", |
||||||
|
"from langchain.memory import ConversationBufferMemory\n", |
||||||
|
"from langchain.chains import ConversationalRetrievalChain\n", |
||||||
|
"MODEL = \"llama3.2:latest\"\n", |
||||||
|
"llm = ChatOllama(temperature=0.7, model=MODEL)\n", |
||||||
|
"\n", |
||||||
|
"# set up the conversation memory for the chat\n", |
||||||
|
"memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n", |
||||||
|
"\n", |
||||||
|
"# the retriever is an abstraction over the VectorStore that will be used during RAG\n", |
||||||
|
"retriever = vectorstore.as_retriever()\n", |
||||||
|
"\n", |
||||||
|
"# putting it together: set up the conversation chain with the Llama LLM, the vector store and memory\n", |
||||||
|
"conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Let's try a simple question\n", |
||||||
|
"\n", |
||||||
|
"query = \"How do I change the water bottle?\"\n", |
||||||
|
"result = conversation_chain.invoke({\"question\": query})\n", |
||||||
|
"print(result[\"answer\"])" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 15, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# set up a new conversation memory for the chat\n", |
||||||
|
"memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n", |
||||||
|
"\n", |
||||||
|
"# putting it together: set up the conversation chain with the LLM, the vector store and memory\n", |
||||||
|
"conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 16, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Wrapping that in a function\n", |
||||||
|
"\n", |
||||||
|
"def chat(question, history):\n", |
||||||
|
" result = conversation_chain.invoke({\"question\": question})\n", |
||||||
|
" return result[\"answer\"]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Now we will bring this up in Gradio using the Chat interface\n", |
||||||
|
"\n", |
||||||
|
"A quick and easy way to prototype a chat with an LLM" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# And in Gradio:\n", |
||||||
|
"\n", |
||||||
|
"view = gr.ChatInterface(chat, type=\"messages\").launch(inbrowser=True)" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "venv", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.12.5" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 2 |
||||||
|
} |