117 changed files with 25100 additions and 545 deletions
@@ -0,0 +1,28 @@
Client: Hello I would like to order a pizza
Restaurant: Sure. What pizza would you like to order from our menu?
Client: Chicken Ranch
Restaurant: I am so sorry, but chicken ranch is currently unavailable on our menu
Client: AHHHHH. Do you have chicken BBQ?
Restaurant: Yes! Do you want it small, medium, or large?
Client: Medium
Restaurant: Ok. This will be 180 LE
Client: Thanks
Restaurant: Anytime.
Client: AHHHH I forgot. I want to add a new chicken BBQ pizza
Restaurant: No problem. Do you also want it medium?
Client: Yes
Restaurant: Okay this will be 380 LE
Client: Okay Thanks
Client: Wait a minute. Isn't 180 * 2 = 360?
Restaurant: It seems that there might be a misunderstanding. We add an extra 20 LE for every extra pizza ordered.
Client: NOBODY TOLD ME THAT.. AND WHY ON EARTH WOULD YOU DO SOMETHING LIKE THAT?
Restaurant: We are sorry but this is our policy.
Client: Okay then I don't want your pizza.
Restaurant: We are so sorry to hear that. We can make a 10% discount on the total price so it would be 342 LE
Client: Fine
Restaurant: Thank you for ordering
Restaurant: Pizza is delivered. How is your experience?
Client: Your pizza doesn't taste good
Restaurant: We are so sorry to hear that. Do you have any suggestions you would like to make?
Client: Make good pizza
Restaurant: Thanks for your review. We will make sure to improve our pizza in the future. Your opinion really matters.
@@ -0,0 +1,5 @@
Client: Hello I would like to order a chicken ranch pizza
Restaurant: I am so sorry, but chicken ranch is currently unavailable on our menu
Client: Okay thanks
Restaurant: Would you like to order something else?
Client: No thank you
@@ -0,0 +1,19 @@
Client: Hello. What is the best-selling pizza on your menu?
Restaurant: Hello! Chicken Ranch pizza is our best-selling pizza. Also, our special pepperoni pizza got some amazing reviews
Client: Okay. I want to order a pepperoni pizza
Restaurant: Sure. Do you want it small, medium, or large?
Client: Large
Restaurant: Okay. This will be 210 LE. Would you like to order something else?
Client: Yes. Do you have onion rings?
Restaurant: Yes
Client: Okay I would like to add onion rings.
Restaurant: Sure. This will be 250 LE
Client: Thanks
Restaurant: Anytime
Client: I have been waiting for too long and the order hasn't arrived yet
Restaurant: Sorry to hear that. But it appears that the order is on its way to you.
Restaurant: The order should have arrived by now.
Client: Yes, it has arrived.
Restaurant: How is your experience?
Client: Your pizza tastes soooooo good. The order took too long to arrive but when I tasted the pizza, I was really enjoying it and forgot everything about the delay.
Restaurant: We are so glad to hear that
@@ -0,0 +1,15 @@
You are an assistant working for the customer service department in a pizza restaurant.
You will receive a chat between a client and the restaurant's customer service.
Generate your response based on the following criteria:
- What did the client order?
- How much did it cost?
- If the client changed their mind, keep only their final order and the final cost
- Mention the client's experience only if they ordered anything, classified as one of: Positive/Negative/Neutral/Unknown
- If the client did not order anything, do not mention their sentiment or experience
- Only if the client's experience is positive or negative, provide a brief summary of their sentiment
- Do not provide a brief summary of their sentiment if their experience was neutral or unknown.
- Your answers should be clear and to the point; do not use long sentences
- Your answers should be displayed in bullet points
- Your answers should be displayed in markdown
- If the client did not order anything, provide a brief summary of why that might have happened
- Do not mention cost if the client did not order anything
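The criteria above become a system prompt, and one of the sample chats becomes the user message. A minimal sketch of how the two could be assembled into a chat request (the abbreviated prompt/transcript strings and the `llama3.2` model name are assumptions, not part of the files above):

```python
# Pair the order-summary criteria (system prompt) with one raw
# client/restaurant chat (user message) as an OpenAI-style message list.

def build_messages(system_prompt: str, transcript: str) -> list:
    """Build the two-message chat payload for the order summarizer."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": transcript},
    ]

system_prompt = "You are an assistant working for the customer service department in a pizza restaurant. ..."
transcript = (
    "Client: Hello I would like to order a pizza\n"
    "Restaurant: Sure. What pizza would you like to order from our menu?"
)

messages = build_messages(system_prompt, transcript)
print(len(messages))  # 2

# With a local Ollama model this would then be sent as, e.g.:
#   ollama.chat(model="llama3.2", messages=messages)
```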
@@ -0,0 +1,240 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "9964872b-225d-4ced-93e4-fc5b279ec2ed",
   "metadata": {},
   "source": [
    "# Webpage English summarizer with user inputs (url, ollama-based LLM)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4e49d399-d18c-4c91-8abc-cf3289e11e2f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "\n",
    "import os\n",
    "import requests\n",
    "# from dotenv import load_dotenv\n",
    "from bs4 import BeautifulSoup\n",
    "from IPython.display import Markdown, display\n",
    "from openai import OpenAI\n",
    "import ollama, time\n",
    "from tqdm import tqdm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "46e7d809-248d-41b8-80e1-36b210041581",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define system prompt.\n",
    "\n",
    "system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
    "and provides a detailed summary, ignoring text that might be navigation related. \\\n",
    "Respond in markdown, in English.\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e8bf237f-591f-4c32-9415-5d5d4e2522b8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A function that writes a User Prompt that asks for summaries of websites:\n",
    "\n",
    "def user_prompt_for(website):\n",
    "    user_prompt = f\"You are looking at a website titled {website.title}\"\n",
    "    user_prompt += \"\\nThe contents of this website is as follows; \\\n",
    "please provide a detailed summary of this website in markdown. \\\n",
    "If it includes news or announcements, then summarize these too.\\n\\n\"\n",
    "    user_prompt += website.text\n",
    "    return user_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d39ee6d-c670-41ba-a0b8-debd55bda8e3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# See how this function creates exactly the format above\n",
    "\n",
    "def messages_for(website):\n",
    "    return [\n",
    "        {\"role\": \"system\", \"content\": system_prompt},\n",
    "        {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
    "    ]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "43e28ff5-2def-4a47-acdd-2e06c0666956",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Constants\n",
    "\n",
    "OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
    "HEADERS = {\"Content-Type\": \"application/json\"}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "32f4f481-81a3-479d-817b-4e754d9af46d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A class to represent a Webpage\n",
    "# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
    "\n",
    "# Some websites need you to use proper headers when fetching them:\n",
    "headers = HEADERS\n",
    "\n",
    "class Website:\n",
    "\n",
    "    def __init__(self, url):\n",
    "        \"\"\"\n",
    "        Create this Website object from the given url using the BeautifulSoup library\n",
    "        \"\"\"\n",
    "        self.url = url\n",
    "        response = requests.get(url, headers=headers)\n",
    "        soup = BeautifulSoup(response.content, 'html.parser')\n",
    "        self.title = soup.title.string if soup.title else \"No title found\"\n",
    "        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
    "            irrelevant.decompose()\n",
    "        self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f81cfd17-8208-4192-a59f-485ff3ea74e4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# And now: call the ollama API wrapper and return the relevant component of the response\n",
    "\n",
    "def summarize(url):\n",
    "    website = Website(url)\n",
    "    response = ollama.chat(\n",
    "        model=MODEL,\n",
    "        messages=messages_for(website)\n",
    "    )\n",
    "    return response['message']['content']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7a9eedc6-2183-473d-84ca-b10d40e2a1e6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Ask the user for the url address\n",
    "\n",
    "url = str(input(\"\"\"\n",
    "Please provide a valid url address:\n",
    "https://\"\"\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5d012de2-0ef2-43db-9f51-fc7f989c3642",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Ask the user to select a valid model\n",
    "\n",
    "MODEL = str(input(\"\"\"\n",
    "Please select an LLM:\n",
    "(examples: llama3.2, deepseek-r1:1.5b)\n",
    "\"\"\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1ac8c02e-4a62-448b-a231-8c6f65891811",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's just make sure the model is loaded\n",
    "\n",
    "!ollama pull {MODEL}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0544541f-11a8-4eb7-8eb6-bc032ed6d0d1",
   "metadata": {},
   "outputs": [],
   "source": [
    "print('url: https://{0}\\nModel: {1}'.format(url, MODEL))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "45518950-f2c9-43af-b897-4fe8fe48dfd8",
   "metadata": {},
   "outputs": [],
   "source": [
    "summary = summarize('https://' + url)\n",
    "for summ in tqdm(summary):  # cosmetic progress bar over the already-complete summary text\n",
    "    time.sleep(0.01)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "02c0c15e-216d-47c7-843d-ac27af02820b",
   "metadata": {},
   "outputs": [],
   "source": [
    "display(Markdown(summary))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "985a3689-5827-4b15-b8d5-276f9b292afd",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@@ -0,0 +1,273 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "fad31e32-2e42-42ae-ae63-c15d90292839",
   "metadata": {},
   "source": [
    "# First Project\n",
    "Ollama -> Summary\n",
    "huggingface_hub -> \"facebook/m2m100_418M\" for translation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5fb79a20-a455-4d27-91a1-91958af786c1",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install transformers datasets torch\n",
    "!pip install huggingface_hub"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e95ac7f2-5192-4f83-acf3-61df30cd3109",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "import requests\n",
    "from bs4 import BeautifulSoup\n",
    "import json\n",
    "import ollama"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "12276d74-0e79-4e66-9135-1c9d1a80b943",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Website:\n",
    "    def __init__(self, url):\n",
    "        self.url = url\n",
    "        response = requests.get(url)\n",
    "        soup = BeautifulSoup(response.content, 'html.parser')\n",
    "        self.title = soup.title.string if soup.title else \"No title found\"\n",
    "        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
    "            irrelevant.decompose()\n",
    "        self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
    "\n",
    "huggingface_url = \"https://huggingface.co/learn/ml-for-3d-course\"\n",
    "huggingface_website = Website(huggingface_url)\n",
    "\n",
    "huggingface_data = {\n",
    "    \"title\": huggingface_website.title,\n",
    "    \"text\": huggingface_website.text\n",
    "}\n",
    "print(huggingface_data)\n",
    "\n",
    "with open('ml_for_3d_course_data.json', 'w') as f:\n",
    "    json.dump(huggingface_data, f)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d74c85c-3e09-4514-bde4-4cafc4910c52",
   "metadata": {},
   "outputs": [],
   "source": [
    "# huggingface_data 'text' value\n",
    "huggingface_text = huggingface_data['text']\n",
    "\n",
    "# Summary\n",
    "response_summary = ollama.chat(model=\"llama3.2:latest\", messages=[{\"role\": \"user\", \"content\": f\"Summarize the following text: {huggingface_text}\"}])\n",
    "print(response_summary)\n",
    "\n",
    "# print summary\n",
    "summary_huggingface_text = response_summary.message['content']\n",
    "print(\"Summary Text:\", summary_huggingface_text)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d13764d5-cb76-46c5-bbe6-d132b31a9ea6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# HuggingFace Translation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "08405038-4115-487f-9efc-de58572453c1",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Website:\n",
    "    url: str\n",
    "    title: str\n",
    "    text: str\n",
    "\n",
    "    def __init__(self, url):\n",
    "        self.url = url\n",
    "        response = requests.get(url)\n",
    "        soup = BeautifulSoup(response.content, 'html.parser')\n",
    "        self.title = soup.title.string if soup.title else \"No title found\"\n",
    "        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
    "            irrelevant.decompose()\n",
    "        self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n",
    "\n",
    "url = \"https://huggingface.co/learn/ml-for-3d-course\"\n",
    "website = Website(url)\n",
    "print(website.title)\n",
    "print(website.text[:1000])\n",
    "\n",
    "data = {\n",
    "    \"title\": website.title,\n",
    "    \"text\": website.text\n",
    "}\n",
    "\n",
    "with open('ml_for_3d_course_data.json', 'w') as f:\n",
    "    json.dump(data, f)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0632352f-4b16-4125-83bf-f3cc3aabd659",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a85f8625-725d-4d7f-8cb7-8da4276f81cf",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install sacremoses"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c800cea4-f4a4-4e41-9637-31ff11afb256",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer\n",
    "\n",
    "# Load the M2M100 model and tokenizer\n",
    "model_name = \"facebook/m2m100_418M\"\n",
    "model = M2M100ForConditionalGeneration.from_pretrained(model_name)\n",
    "tokenizer = M2M100Tokenizer.from_pretrained(model_name)\n",
    "\n",
    "# Load the saved JSON file\n",
    "with open('ml_for_3d_course_data.json', 'r') as f:\n",
    "    data = json.load(f)\n",
    "\n",
    "# Extract text from the loaded data\n",
    "text = data[\"text\"]\n",
    "\n",
    "# Set the source language to English and target language to Korean\n",
    "source_lang = \"en\"\n",
    "target_lang = \"ko\"\n",
    "\n",
    "# Set the language for tokenizer (important for M2M100)\n",
    "tokenizer.src_lang = source_lang\n",
    "tokenizer.tgt_lang = target_lang\n",
    "\n",
    "# Split text into smaller chunks if it's too large\n",
    "# Note: this slices by characters, not tokens, so it only approximates the model's 512-token limit\n",
    "max_input_length = 512\n",
    "chunks = [text[i:i+max_input_length] for i in range(0, len(text), max_input_length)]\n",
    "\n",
    "print(chunks)\n",
    "# Initialize a list to hold the translated text\n",
    "translated_chunks = []\n",
    "\n",
    "# Iterate through each chunk and translate it\n",
    "for chunk in chunks:\n",
    "    # Tokenize the chunk\n",
    "    encoded = tokenizer(chunk, return_tensors=\"pt\", padding=True, truncation=True, max_length=512)\n",
    "\n",
    "    # Generate translation from the model, forcing the output to be in Korean\n",
    "    generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id(target_lang), max_length=512)\n",
    "\n",
    "    # Decode the translated tokens to text\n",
    "    translated_text = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]\n",
    "    translated_chunks.append(translated_text)\n",
    "\n",
    "# Combine all translated chunks back together\n",
    "final_translated_text = ' '.join(translated_chunks)\n",
    "print(\"Translated Text:\", final_translated_text)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ffe0f264-a588-422f-a6e1-b60504d1e02c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import requests\n",
    "\n",
    "# Set the Ollama API URL (illustrative only: Ollama's actual chat endpoint is http://localhost:11434/api/chat, and it does not host m2m100 models)\n",
    "ollama_url = \"http://localhost:11411/v1/models/facebook/m2m100_418M/generate\"\n",
    "\n",
    "# Load the saved JSON file\n",
    "with open('ml_for_3d_course_data.json', 'r') as f:\n",
    "    data = json.load(f)\n",
    "\n",
    "# Extract the text\n",
    "course_text = data[\"text\"]\n",
    "\n",
    "# Set the source and target languages for translation\n",
    "source_language = \"en\"\n",
    "target_language = \"ko\"\n",
    "\n",
    "# Prepare the payload\n",
    "payload = {\n",
    "    \"input_text\": course_text,\n",
    "    \"src_lang\": source_language,\n",
    "    \"tgt_lang\": target_language\n",
    "}\n",
    "\n",
    "# Call the API\n",
    "response = requests.post(ollama_url, json=payload)\n",
    "\n",
    "# Check the response\n",
    "if response.status_code == 200:\n",
    "    translated_course_text = response.json().get(\"translated_text\", \"Translation failed\")\n",
    "    print(\"Translated Course Text:\", translated_course_text)\n",
    "else:\n",
    "    print(f\"Error {response.status_code}: {response.text}\")\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
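The character-based slicing in the translation notebook above only approximates the model's 512-token limit and can cut words in half. A token-aware chunker is a small refinement; this sketch takes the tokenizer as a parameter (the whitespace `str.split` used in the demo line is a stand-in assumption, where with transformers you would pass something like `lambda s: tokenizer.encode(s)`):

```python
# Token-aware chunking sketch: split text on word boundaries so each chunk
# stays within a token budget, using whatever tokenizer the model provides.

def chunk_by_tokens(text, tokenize, max_tokens=512):
    """Greedily pack whitespace-separated words into chunks whose token
    count (per the supplied tokenize function) stays <= max_tokens."""
    words = text.split()
    chunks, current = [], []
    for word in words:
        candidate = " ".join(current + [word])
        if current and len(tokenize(candidate)) > max_tokens:
            chunks.append(" ".join(current))  # close the full chunk
            current = [word]
        else:
            current.append(word)
    if current:
        chunks.append(" ".join(current))
    return chunks

# Demo with a whitespace tokenizer standing in for a real one:
print(chunk_by_tokens("one two three four five", str.split, max_tokens=3))
# ['one two three', 'four five']
```

Each chunk is then translated exactly as in the notebook's loop; the only change is how the chunk boundaries are chosen.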
@@ -0,0 +1,127 @@
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "39e3e763-9b00-49eb-aead-034a2d0517a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "\n",
    "import os\n",
    "import requests\n",
    "from dotenv import load_dotenv\n",
    "from bs4 import BeautifulSoup\n",
    "from IPython.display import Markdown, display\n",
    "from openai import OpenAI\n",
    "\n",
    "# If you get an error running this cell, then please head over to the troubleshooting notebook!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3bb5e2a-b70f-42ba-9f22-030a9c6bc9d1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load environment variables in a file called .env\n",
    "\n",
    "load_dotenv(override=True)\n",
    "api_key = os.getenv('OPENAI_API_KEY')\n",
    "\n",
    "# Check the key\n",
    "\n",
    "if not api_key:\n",
    "    print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n",
    "elif not api_key.startswith(\"sk-proj-\"):\n",
    "    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
    "elif api_key.strip() != api_key:\n",
    "    print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n",
    "else:\n",
    "    print(\"API key found and looks good so far!\")\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "994f51fb-eab3-45a2-847f-87aebb92b17a",
   "metadata": {},
   "outputs": [],
   "source": [
    "openai = OpenAI()\n",
    "\n",
    "# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n",
    "# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a8125c6d-c884-4f65-b477-cab155e29ce3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Step 1: Create your prompts\n",
    "\n",
    "system_prompt = \"You are an AI that suggests short and relevant subject lines for emails based on their content.\"\n",
    "user_prompt = \"\"\"\n",
    "Here is the content of an email:\n",
    "\n",
    "Dear Team,\n",
    "\n",
    "I hope you're all doing well. I wanted to remind you that our next project meeting is scheduled for this Friday at 3 PM. We will be discussing our progress and any blockers. Please make sure to review the latest updates before the meeting.\n",
    "\n",
    "Best,\n",
    "John\n",
    "\"\"\"\n",
    "\n",
    "# Step 2: Make the messages list\n",
    "\n",
    "messages = [{\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt}]\n",
    "\n",
    "# Step 3: Call OpenAI\n",
    "\n",
    "response = openai.chat.completions.create(\n",
    "    model=\"gpt-4o-mini\",\n",
    "    messages=messages\n",
    ")\n",
    "\n",
    "# Step 4: print the result\n",
    "\n",
    "print(\"Suggested Subject Line:\", response.choices[0].message.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1010ac80-1ee8-432f-aa3f-12af419dc23a",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@@ -0,0 +1,279 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "603cd418-504a-4b4d-b1c3-be04febf3e79",
   "metadata": {},
   "source": [
    "# Article Title Generator\n",
    "\n",
    "Summarization use-case in which the user provides an article, which the LLM will analyze to suggest an SEO-optimized title.\n",
    "\n",
    "**NOTES**:\n",
    "\n",
    "1. This version does NOT support website scraping. You must copy and paste the required article.\n",
    "2. The following models were configured:\n",
    "   a. OpenAI gpt-4o-mini\n",
    "   b. Llama llama3.2\n",
    "   c. Deepseek deepseek-r1:1.5b\n",
    "   It is possible to configure additional models by adding the new model to the MODELS dictionary and its\n",
    "   initialization to the CLIENTS dictionary. Then, call the model with --> ***answer =\n",
    "   get_answer('NEW_MODEL')***.\n",
    "3. Users are encouraged to assess and rank the suggested titles using any headline analyzer tool online.\n",
    "   Example: https://www.isitwp.com/headline-analyzer/."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Confirming Llama is loaded\n",
    "!ollama pull llama3.2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "279b0c00-9bb0-4c7f-9c6d-aa0b108274b9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "import os\n",
    "from dotenv import load_dotenv\n",
    "from IPython.display import Markdown, display\n",
    "from openai import OpenAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4730d8d-3e20-4f3c-a4ff-ed2ac0a8aa27",
   "metadata": {},
   "outputs": [],
   "source": [
    "# set environment variables for OpenAI\n",
    "load_dotenv(override=True)\n",
    "api_key = os.getenv('OPENAI_API_KEY')\n",
    "\n",
    "# validate API Key\n",
    "if not api_key:\n",
    "    raise ValueError(\"No API key was found! Please check the .env file.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1abbb826-de66-498c-94d8-33369ad01885",
   "metadata": {},
   "outputs": [],
   "source": [
    "# constants\n",
    "MODELS = {'GPT': 'gpt-4o-mini',\n",
    "          'LLAMA': 'llama3.2',\n",
    "          'DEEPSEEK': 'deepseek-r1:1.5b'\n",
    "         }\n",
    "\n",
    "CLIENTS = {'GPT': OpenAI(),\n",
    "           'LLAMA': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama'),\n",
    "           'DEEPSEEK': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
    "          }"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6f490fe4-32d5-41f3-890d-ecf4e5e01dd4",
   "metadata": {},
   "source": [
    "### Copy & paste your article (without a title)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ddd76319-13ce-480b-baa7-cab6a5c88168",
   "metadata": {},
   "outputs": [],
   "source": [
    "# article - copy & paste your article\n",
    "article = \"\"\"\n",
    "    REPLACE WITH YOUR ARTICLE CONTENT\n",
    "    \"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1914afad-dbd8-4c1f-8e68-80b0e5d743a9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# system prompt\n",
    "system_prompt = \"\"\"\n",
    "    You are an experienced SEO-focused copywriter. The user will provide an article, and your task is to analyze its content and generate the most effective, keyword-optimized title to maximize SEO performance. Respond in Markdown format.\n",
    "    \"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "176cfac7-5e6d-4d4a-a1c4-1b63b60de1f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# user prompt\n",
    "user_prompt = f\"Following is the article to be analyzed. Respond in Markdown format.\\n\\n{article}\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c45fc7d7-08c9-4e34-b427-b928a219bb94",
   "metadata": {},
   "outputs": [],
   "source": [
    "# message list\n",
    "messages = [\n",
    "    {\"role\": \"system\", \"content\": system_prompt},\n",
    "    {\"role\": \"user\", \"content\": user_prompt}\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f67b881f-1040-4cf7-82c5-e85f4c0bd252",
   "metadata": {},
   "outputs": [],
   "source": [
    "# call model and get answer\n",
    "def get_answer(model):\n",
    "    # set required client\n",
    "    client = CLIENTS[model]\n",
    "\n",
    "    # call model\n",
    "    response = client.chat.completions.create(\n",
    "        model=MODELS[model],\n",
    "        messages=messages\n",
    "    )\n",
    "\n",
    "    # return answer\n",
    "    return response.choices[0].message.content\n",
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "947b42ed-5b43-486d-8af3-e5b671c1fd0e", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Get OpenAI Suggested Title" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "eb6f66e3-ab99-4f76-9358-896cb43c1fa1", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# get openAi answer\n", |
||||||
|
"answer = get_answer('GPT')\n", |
||||||
|
"\n", |
||||||
|
"# display openAi answer\n", |
||||||
|
"display(Markdown(f\"### {MODELS['GPT']} Answer\\n\\n{answer}\" ))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "70073ebf-a00a-416b-854d-642d450cd99b", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Get Llama Suggested Title" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "caa190bb-de5f-45cc-b671-5d62688f7b25", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# get Llama answer\n", |
||||||
|
"answer = get_answer('LLAMA')\n", |
||||||
|
"\n", |
||||||
|
"# display Llama answer\n", |
||||||
|
"display(Markdown(f\"### {MODELS['LLAMA']} Answer\\n\\n{answer}\" ))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "811edc4f-20e2-482d-ac89-fae9d1b70bed", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Get Deepseek Suggested Title" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "082628e4-ff4c-46dd-ae5f-76578eb017ad", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# get Deepseek answer\n", |
||||||
|
"answer = get_answer('DEEPSEEK')\n", |
||||||
|
"\n", |
||||||
|
"# display Deepseek answer\n", |
||||||
|
"display(Markdown(f\"### {MODELS['DEEPSEEK']} Answer\\n\\n{answer}\" ))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "7fc404a6-3a91-4c09-89de-867d3d69b4b2", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Suggested future improvements\n", |
||||||
|
"\n", |
||||||
|
"1. Add website scrapping support to replace copy/pasting of articles.\n", |
||||||
|
"2. Improve the system_prompt to provide specific SEO best practices to adopt during the title generation.\n", |
||||||
|
"3. Rephrase the system_prompt to ensure the model provides a single Title (not a list of suggestions). \n", |
||||||
|
"4. Add the logic that would allow each model to assess the recommendations from the different models and \n", |
||||||
|
" select the best among these. " |
||||||
|
] |
||||||
|
}, |
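Improvement 4 above (letting the models judge each other's titles) can be sketched with stub scorers standing in for real LLM calls; the titles, judges, and scoring rule here are all made up for illustration:

```python
# hypothetical sketch: each "judge" scores every candidate title,
# and the title with the highest total score wins
titles = {
    'GPT': 'Ultimate Guide to SEO Titles',
    'LLAMA': 'SEO Titles: A How-To',
}

def score(judge, title):
    # stand-in for a real LLM call asking `judge` to rate `title` 1-10
    return len(title) % 10 + 1

best = max(titles.values(),
           key=lambda t: sum(score(judge, t) for judge in titles))
print(best)
```

In a real version, `score` would send each title back through `get_answer` with a ranking prompt instead of using a length heuristic.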
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "cf7403ac-d43b-4493-98bb-6fee94950cb0", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,472 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "603cd418-504a-4b4d-b1c3-be04febf3e79", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Article Title Generator (V2)\n", |
||||||
|
"\n", |
||||||
|
"Summarization use-case in which the user provides an article, which the LLM will analyze to suggest an SEO-optimized title.\n", |
||||||
|
"\n", |
||||||
|
"**NOTES**:\n", |
||||||
|
"\n", |
||||||
|
"1. This version supports website scrapping using Selenium (based on the code from **/week1/community-\n", |
||||||
|
" contributions/day1-webscraping-selenium-for-javascript.ipynb** - Thanks for the contribution!)\n", |
||||||
|
"2. Leverage streaming (OpenAI only).\n", |
||||||
|
"3. The following models were configured:\\\n", |
||||||
|
" \n", |
||||||
|
" a. OpenAI gpt-4o-mini\\\n", |
||||||
|
" b. Llama llama3.2\\\n", |
||||||
|
" c. Deepseek deepseek-r1:1.5b\\\n", |
||||||
|
"\n", |
||||||
|
" It is possible to configure additional models by adding the new model to the MODELS dictionary and its\n", |
||||||
|
" initialization to the CLIENTS dictionary. Then, call the model with --> ***answer =\n", |
||||||
|
" get_answer('NEW_MODEL')***.\n", |
||||||
|
"5. Improved system_prompt to provide specific SEO best practices to adopt during the title generation.\n", |
||||||
|
"6. Rephrased the system_prompt to ensure the model provides a single Title (not a list of suggestions).\n", |
||||||
|
"7. Includes function to remove unrequired thinking/reasoning verbose from the model response (Deepseek). \n", |
||||||
|
"8. Users are encouraged to assess and rank the suggested titles using any headline analyzer tool online.\n", |
||||||
|
" Example: https://www.isitwp.com/headline-analyzer/. " |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "115004a8-747a-4954-9580-1ed548f80336", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# install required libraries if they were not part of the requirements.txt\n", |
||||||
|
"!pip install selenium\n", |
||||||
|
"!pip install undetected-chromedriver" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# confirming Llama is loaded\n", |
||||||
|
"!ollama pull llama3.2" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "279b0c00-9bb0-4c7f-9c6d-aa0b108274b9", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"import os\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from IPython.display import Markdown, display, update_display\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import undetected_chromedriver as uc\n", |
||||||
|
"from selenium.webdriver.common.by import By\n", |
||||||
|
"from selenium.webdriver.support.ui import WebDriverWait\n", |
||||||
|
"from selenium.webdriver.support import expected_conditions as EC\n", |
||||||
|
"import time\n", |
||||||
|
"from bs4 import BeautifulSoup" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "d4730d8d-3e20-4f3c-a4ff-ed2ac0a8aa27", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# set environment variables for OpenAi\n", |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"# validate API Key\n", |
||||||
|
"if not api_key:\n", |
||||||
|
" raise ValueError(\"No API key was found! Please check the .env file.\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "1abbb826-de66-498c-94d8-33369ad01885", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# constants\n", |
||||||
|
"MODELS = { 'GPT': 'gpt-4o-mini', \n", |
||||||
|
" 'LLAMA': 'llama3.2', \n", |
||||||
|
" 'DEEPSEEK': 'deepseek-r1:1.5b'\n", |
||||||
|
" }\n", |
||||||
|
"\n", |
||||||
|
"CLIENTS = { 'GPT': OpenAI(), \n", |
||||||
|
" 'LLAMA': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama'),\n", |
||||||
|
" 'DEEPSEEK': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama') \n", |
||||||
|
" }\n", |
||||||
|
"\n", |
||||||
|
"# path to Chrome\n", |
||||||
|
"CHROME_PATH = \"C:/Program Files/Google/Chrome/Application/chrome.exe\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "6f490fe4-32d5-41f3-890d-ecf4e5e01dd4", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"**Webcrawler** (based on the code from __/week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb__)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "c2a1cf7a-044f-4a9c-b76e-8f112d384550", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"class WebsiteCrawler:\n", |
||||||
|
" def __init__(self, url, wait_time=20, chrome_path=None):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" Initialize the WebsiteCrawler using Selenium to scrape JavaScript-rendered content.\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" self.url = url\n", |
||||||
|
" self.wait_time = wait_time\n", |
||||||
|
"\n", |
||||||
|
" options = uc.ChromeOptions()\n", |
||||||
|
" options.add_argument(\"--disable-gpu\")\n", |
||||||
|
" options.add_argument(\"--no-sandbox\")\n", |
||||||
|
" options.add_argument(\"--disable-dev-shm-usage\")\n", |
||||||
|
" options.add_argument(\"--disable-blink-features=AutomationControlled\")\n", |
||||||
|
" # options.add_argument(\"--headless=new\") # For Chrome >= 109 - unreliable on my end!\n", |
||||||
|
" options.add_argument(\"start-maximized\")\n", |
||||||
|
" options.add_argument(\n", |
||||||
|
" \"user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||||
|
" )\n", |
||||||
|
" if chrome_path:\n", |
||||||
|
" options.binary_location = chrome_path\n", |
||||||
|
"\n", |
||||||
|
" self.driver = uc.Chrome(options=options)\n", |
||||||
|
"\n", |
||||||
|
" try:\n", |
||||||
|
" # Load the URL\n", |
||||||
|
" self.driver.get(url)\n", |
||||||
|
"\n", |
||||||
|
" # Wait for Cloudflare or similar checks\n", |
||||||
|
" time.sleep(10)\n", |
||||||
|
"\n", |
||||||
|
" # Ensure the main content is loaded\n", |
||||||
|
" WebDriverWait(self.driver, self.wait_time).until(\n", |
||||||
|
" EC.presence_of_element_located((By.TAG_NAME, \"main\"))\n", |
||||||
|
" )\n", |
||||||
|
"\n", |
||||||
|
" # Extract the main content\n", |
||||||
|
" main_content = self.driver.find_element(By.CSS_SELECTOR, \"main\").get_attribute(\"outerHTML\")\n", |
||||||
|
"\n", |
||||||
|
" # Parse with BeautifulSoup\n", |
||||||
|
" soup = BeautifulSoup(main_content, \"html.parser\")\n", |
||||||
|
" self.title = self.driver.title if self.driver.title else \"No title found\"\n", |
||||||
|
" self.text = soup.get_text(separator=\"\\n\", strip=True)\n", |
||||||
|
"\n", |
||||||
|
" except Exception as e:\n", |
||||||
|
" print(f\"Error occurred: {e}\")\n", |
||||||
|
" self.title = \"Error occurred\"\n", |
||||||
|
" self.text = \"\"\n", |
||||||
|
"\n", |
||||||
|
" finally:\n", |
||||||
|
" self.driver.quit()\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "592d8f86-fbf7-4b16-a69d-468030d72dc4", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Prompts" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "1914afad-dbd8-4c1f-8e68-80b0e5d743a9", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# system prompt\n", |
||||||
|
"system_prompt = \"\"\"\n", |
||||||
|
" You are an experienced SEO-focused copywriter. The user will provide an article, and your task is to analyze its content and generate a single, most effective, keyword-optimized title to maximize SEO performance.\n", |
||||||
|
"\n", |
||||||
|
"Instructions:\n", |
||||||
|
"Ignore irrelevant content, such as the current title (if any), navigation menus, advertisements, or unrelated text.\n", |
||||||
|
"Prioritize SEO best practices, considering:\n", |
||||||
|
"Keyword relevance and search intent (informational, transactional, etc.).\n", |
||||||
|
"Readability and engagement.\n", |
||||||
|
"Avoiding keyword stuffing.\n", |
||||||
|
"Ensure conciseness and clarity, keeping the title under 60 characters when possible for optimal SERP display.\n", |
||||||
|
"Use a compelling structure that balances informativeness and engagement, leveraging formats like:\n", |
||||||
|
"Listicles (\"10 Best Strategies for…\")\n", |
||||||
|
"How-to guides (\"How to Boost…\")\n", |
||||||
|
"Questions (\"What Is the Best Way to…\")\n", |
||||||
|
"Power words to enhance click-through rates (e.g., \"Proven,\" \"Ultimate,\" \"Essential\").\n", |
||||||
|
"Provide only one single, best title—do not suggest multiple options.\n", |
||||||
|
"Limit the answer to the following Response Format (Markdown):\n", |
||||||
|
"Optimized Title: [Provide only one title here]\n", |
||||||
|
"Justification: [Explain why this title is effective for SEO]\n", |
||||||
|
"\n", |
||||||
|
" \"\"\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "b0486867-6d38-4cb5-91d4-fb60952c3a9b", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"**Provide the article URL and get its content for analysis**" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "ddd76319-13ce-480b-baa7-cab6a5c88168", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# article url - change to any other article URL\n", |
||||||
|
"article_url = \"https://searchengineland.com/seo-trends-2025-447745\"\n", |
||||||
|
"\n", |
||||||
|
"# get article content\n", |
||||||
|
"article = WebsiteCrawler(url=article_url, chrome_path=CHROME_PATH)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "176cfac7-5e6d-4d4a-a1c4-1b63b60de1f7", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# user prompt\n", |
||||||
|
"user_prompt = \"\"\"\n", |
||||||
|
"Following the article to be analyzed to suggest a title. Limit the answer to the following Response Format (Markdown): \n", |
||||||
|
"Optimized Title: [Provide only one title here]\n", |
||||||
|
"Justification: [Explain why this title is effective for SEO].\n", |
||||||
|
"\"\"\"\n", |
||||||
|
"\n", |
||||||
|
"user_prompt = f\"{user_prompt} {article}\"\n", |
||||||
|
" " |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "c45fc7d7-08c9-4e34-b427-b928a219bb94", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# message list\n", |
||||||
|
"messages = [\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||||
|
" ]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "f67b881f-1040-4cf7-82c5-e85f4c0bd252", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# get suggested title\n", |
||||||
|
"def get_title(model, **kwargs):\n", |
||||||
|
" # stream if GPT\n", |
||||||
|
" if 'stream' in kwargs:\n", |
||||||
|
" response = CLIENTS[model].chat.completions.create(\n", |
||||||
|
" model=MODELS[model],\n", |
||||||
|
" messages=messages,\n", |
||||||
|
" stream=kwargs['stream']\n", |
||||||
|
" )\n", |
||||||
|
" else:\n", |
||||||
|
" response = CLIENTS[model].chat.completions.create(\n", |
||||||
|
" model=MODELS[model],\n", |
||||||
|
" messages=messages,\n", |
||||||
|
" )\n", |
||||||
|
"\n", |
||||||
|
" return response\n", |
||||||
|
" " |
||||||
|
] |
||||||
|
}, |
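The two branches in get_title differ only in whether stream is passed; since stream=False is the SDK default, they can collapse into a single call that always forwards the flag. A sketch with a stub client (the stub is hypothetical, standing in for `CLIENTS[model].chat.completions`):

```python
# stub standing in for client.chat.completions; it just records its arguments
class StubCompletions:
    def create(self, model, messages, stream=False):
        return {'model': model, 'stream': stream, 'n_messages': len(messages)}

def get_title(completions, model_name, messages, stream=False):
    # one call site: the flag is always forwarded, False being the default
    return completions.create(model=model_name, messages=messages, stream=stream)

completions = StubCompletions()
resp = get_title(completions, 'gpt-4o-mini',
                 [{'role': 'user', 'content': 'suggest a title'}])
print(resp['stream'])  # → False
```

The same collapse applies verbatim to the real OpenAI client, removing the duplicated `create(...)` call.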
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "8988d6ff-076a-4eae-baf4-26a8d6a2bc44", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# filter response from model verbose - like Deepseek reasoning/thinking verbose\n", |
||||||
|
"def filter_response(response):\n", |
||||||
|
" # Find last occurrence of 'Optimized Title:' to avoid displaying reasoning verbose\n", |
||||||
|
" substring = 'Optimized Title:'\n", |
||||||
|
" start = response.rfind('Optimized Title:')\n", |
||||||
|
" if start > -1:\n", |
||||||
|
" filtered_response = response[start:]\n", |
||||||
|
"\n", |
||||||
|
" # insert line break to preserve format\n", |
||||||
|
" filtered_response = filtered_response.replace(\"**Justification:**\", \"\\n**Justification:**\")\n", |
||||||
|
" \n", |
||||||
|
" return filtered_response" |
||||||
|
] |
||||||
|
}, |
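The filtering step can be exercised standalone. A self-contained sketch of the same idea (keep text from the last 'Optimized Title:' onward; the sample response string is made up):

```python
def filter_verbose(response):
    # keep only the text from the last 'Optimized Title:' onward,
    # dropping any reasoning/thinking text the model emitted before it
    marker = 'Optimized Title:'
    start = response.rfind(marker)
    if start == -1:
        return response  # marker missing: nothing to strip
    filtered = response[start:]
    # put Justification on its own line to preserve the Markdown layout
    return filtered.replace("**Justification:**", "\n**Justification:**")

raw = ("<think>The user wants an SEO title...</think>\n"
       "Optimized Title: Proven SEO Wins **Justification:** concise and keyword-rich")
print(filter_verbose(raw))
```

Note the `rfind` (last occurrence): reasoning text may itself quote the marker, so only the final occurrence is trusted.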
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "0e9e99cf-5e25-4a1f-ab11-a2255e318671", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# display suggested title\n", |
||||||
|
"def display_title(model):\n", |
||||||
|
" # get model-suggested title\n", |
||||||
|
" title = get_title(model)\n", |
||||||
|
" \n", |
||||||
|
" display(Markdown(f\"### {model} (___{MODELS[model]}___) Answer\\n\\n_______\")) \n", |
||||||
|
"\n", |
||||||
|
" response = \"\"\n", |
||||||
|
"\n", |
||||||
|
" if model == 'GPT':\n", |
||||||
|
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||||
|
" # for chunk in stream:\n", |
||||||
|
" for chunk in get_title(model=model, stream=True):\n", |
||||||
|
" response += chunk.choices[0].delta.content or ''\n", |
||||||
|
" response = (\n", |
||||||
|
" response.replace(\"```\",\"\")\n", |
||||||
|
" .replace(\"markdown\", \"\")\n", |
||||||
|
" .replace(\"Optimized Title:\", \"**Optimized Title:**\")\n", |
||||||
|
" .replace(\"Justification:\", \"**Justification:**\")\n", |
||||||
|
" )\n", |
||||||
|
" update_display(Markdown(response), display_id=display_handle.display_id)\n", |
||||||
|
" else:\n", |
||||||
|
" response = get_title(model=model)\n", |
||||||
|
" response = response.choices[0].message.content\n", |
||||||
|
" response = filter_response(response)\n", |
||||||
|
" response = (\n", |
||||||
|
" response.replace(\"Optimized Title:\", \"**Optimized Title:**\")\n", |
||||||
|
" .replace(\"Justification:\", \"**Justification:**\")\n", |
||||||
|
" )\n", |
||||||
|
" display(Markdown(response))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "947b42ed-5b43-486d-8af3-e5b671c1fd0e", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Get OpenAI Suggested Title" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "eb6f66e3-ab99-4f76-9358-896cb43c1fa1", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# get and display openAi suggested title\n", |
||||||
|
"display_title(model='GPT')" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "70073ebf-a00a-416b-854d-642d450cd99b", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Get Llama Suggested Title" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "caa190bb-de5f-45cc-b671-5d62688f7b25", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# get and display Llama suggested title\n", |
||||||
|
"display_title(model='LLAMA')" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "811edc4f-20e2-482d-ac89-fae9d1b70bed", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Get Deepseek Suggested Title" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "082628e4-ff4c-46dd-ae5f-76578eb017ad", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# get and display Deepseek title\n", |
||||||
|
"display_title(model='DEEPSEEK')" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "7fc404a6-3a91-4c09-89de-867d3d69b4b2", |
||||||
|
"metadata": { |
||||||
|
"jp-MarkdownHeadingCollapsed": true |
||||||
|
}, |
||||||
|
"source": [ |
||||||
|
"### Observations\n", |
||||||
|
"\n", |
||||||
|
"1. **Selenium:** The headless option (__options.add_argument(\"--headless=new\")__), while ideal to speed up the scanning process, presented problems while scanning several websites (including openai.com and canva.com).\n", |
||||||
|
"2. **Deepseek challenges:**\\\n", |
||||||
|
" a.It always returns its thinking/reasoning verbose, which, while helpful to understand how it works, is not always\n", |
||||||
|
" required, such as in this example code. A new function (**filter_response**) was created to remove the additional verbose.\\\n", |
||||||
|
" b. It is unreliable with the response, sometimes returning the required format for the response instead of the\n", |
||||||
|
" actual response. For example, for the title, it may sometimes return:\n", |
||||||
|
" \n", |
||||||
|
" **Optimized Title:** \\[The user wants the suggested title here]\n", |
||||||
|
" \n", |
||||||
|
"### Suggested future improvements\n", |
||||||
|
"\n", |
||||||
|
"1. Add the logic that would allow each model to assess the recommendations from the different models and \n", |
||||||
|
" select the best among these.\n", |
||||||
|
"2. Add the logic to leverage an API (if available) that automatically assesses the suggested titles." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "1af8260b-5ba1-4eeb-acd0-02de537b1bf4", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,195 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "c97ad592-c8be-4583-a19c-ac813e56f410", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Mac Users\n", |
||||||
|
"\n", |
||||||
|
"I find some challenges while setting up this in MAC silicon M1 chip. Execute below commands in MAC terminal.\n", |
||||||
|
"\n", |
||||||
|
"1. Download chromedriver.\n", |
||||||
|
"2. Unzip and add it to the path.\n", |
||||||
|
"3. Set Extended attributes." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "b635b345-b000-48cc-8a7f-7df279a489a3", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"cd ~/Downloads\n", |
||||||
|
"wget https://storage.googleapis.com/chrome-for-testing-public/133.0.6943.126/mac-arm64/chromedriver-mac-arm64.zip\n", |
||||||
|
"unzip chromedriver-mac-arm64.zip\n", |
||||||
|
"sudo mv chromedriver-mac-arm64/chromedriver /usr/local/bin/\n", |
||||||
|
"chmod +x /usr/local/bin/chromedriver\n", |
||||||
|
"cd /usr/local/bin/\n", |
||||||
|
"xattr -d com.apple.quarantine chromedriver\n", |
||||||
|
"cd \n", |
||||||
|
"chromedriver --version" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "17c7c79a-8ae0-4f5d-a7c8-c54aa7ba90fd", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"!pip install selenium\n", |
||||||
|
"!pip install undetected-chromedriver\n", |
||||||
|
"!pip install beautifulsoup4" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "c10bd630-2dfd-4572-8c21-2dc4c6a372ab", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"from selenium import webdriver\n", |
||||||
|
"from selenium.webdriver.chrome.service import Service\n", |
||||||
|
"from selenium.webdriver.common.by import By\n", |
||||||
|
"from selenium.webdriver.chrome.options import Options\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import os\n", |
||||||
|
"import requests\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from bs4 import BeautifulSoup\n", |
||||||
|
"from IPython.display import Markdown, display\n", |
||||||
|
"from openai import OpenAI" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "6fb3641d-e9f8-4f5b-bb9d-ee0e971cccdb", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||||
|
"HEADERS = {\"Content-Type\": \"application/json\"}\n", |
||||||
|
"MODEL = \"llama3.2\"\n", |
||||||
|
"PATH_TO_CHROME_DRIVER = '/usr/local/bin/chromedriver'\n", |
||||||
|
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||||
|
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||||
|
"Respond in markdown. Highlight all the products this website offered and also find when website is created.\"\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "5d57e958", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"class Website:\n", |
||||||
|
" url: str\n", |
||||||
|
" title: str\n", |
||||||
|
" text: str\n", |
||||||
|
"\n", |
||||||
|
" def __init__(self, url):\n", |
||||||
|
" self.url = url\n", |
||||||
|
"\n", |
||||||
|
" options = Options()\n", |
||||||
|
"\n", |
||||||
|
" options.add_argument(\"--no-sandbox\")\n", |
||||||
|
" options.add_argument(\"--disable-dev-shm-usage\")\n", |
||||||
|
"\n", |
||||||
|
" service = Service(PATH_TO_CHROME_DRIVER)\n", |
||||||
|
" driver = webdriver.Chrome(service=service, options=options)\n", |
||||||
|
" driver.get(url)\n", |
||||||
|
"\n", |
||||||
|
" # input(\"Please complete the verification in the browser and press Enter to continue...\")\n", |
||||||
|
" page_source = driver.page_source\n", |
||||||
|
" driver.quit()\n", |
||||||
|
"\n", |
||||||
|
" soup = BeautifulSoup(page_source, 'html.parser')\n", |
||||||
|
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||||
|
" for irrelevant in soup([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||||
|
" irrelevant.decompose()\n", |
||||||
|
" self.text = soup.get_text(separator=\"\\n\", strip=True)" |
||||||
|
] |
||||||
|
}, |
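The cleanup loop in the Website class above can be tried on a tiny inline page. A minimal sketch (assumes beautifulsoup4 is installed, as per the pip cell above; the HTML snippet is made up):

```python
from bs4 import BeautifulSoup

# toy page with the kinds of tags the class strips before text extraction
html = """<html><head><title>Shop</title><style>p {}</style></head>
<body><script>var x = 1;</script><p>Hello</p><img src='x.png'></body></html>"""

soup = BeautifulSoup(html, 'html.parser')
# same cleanup as in Website.__init__: drop non-content tags in place
for irrelevant in soup(["script", "style", "img", "input"]):
    irrelevant.decompose()
text = soup.get_text(separator="\n", strip=True)
print(text)
```

Without `decompose()`, `get_text` would leak the script and style bodies into the text passed to the model.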
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "56df8cd2-2707-43f6-a066-3367846929b3", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def user_prompt_for(website):\n", |
||||||
|
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||||
|
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||||
|
"please provide a short summary of this website in markdown. \\\n", |
||||||
|
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||||
|
" user_prompt += website.text\n", |
||||||
|
" return user_prompt\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"def messages_for(website):\n", |
||||||
|
" return [\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||||
|
" ]\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"def summarize(url):\n", |
||||||
|
" website = Website(url)\n", |
||||||
|
" ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", |
||||||
|
" response = ollama_via_openai.chat.completions.create(\n", |
||||||
|
" model=MODEL,\n", |
||||||
|
" messages = messages_for(website)\n", |
||||||
|
" )\n", |
||||||
|
" return response.choices[0].message.content\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"def display_summary(url):\n", |
||||||
|
" summary = summarize(url)\n", |
||||||
|
" display(Markdown(summary))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "f2eb9599", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"display_summary(\"https://ae.almosafer.com\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "31b66c0f-6b45-4986-b77c-758625945a91", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,152 @@ |
|||||||
|
{ |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 0, |
||||||
|
"metadata": { |
||||||
|
"colab": { |
||||||
|
"provenance": [] |
||||||
|
}, |
||||||
|
"kernelspec": { |
||||||
|
"name": "python3", |
||||||
|
"display_name": "Python 3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"name": "python" |
||||||
|
} |
||||||
|
}, |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"source": [ |
||||||
|
"# Getting MOM from call transcripts" |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"id": "99Z21wE7xpKS" |
||||||
|
} |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"source": [ |
||||||
|
"Import necessary libraries" |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"id": "YZMeexE8M_Pp" |
||||||
|
} |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"import requests\n", |
||||||
|
"\n", |
||||||
|
"from bs4 import BeautifulSoup\n", |
||||||
|
"from IPython.display import Markdown, display\n", |
||||||
|
"from openai import OpenAI\n" |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"id": "u5DCVg0Mxj5T" |
||||||
|
}, |
||||||
|
"execution_count": null, |
||||||
|
"outputs": [] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": { |
||||||
|
"id": "i0V11JQ2az-C" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Load environment variables in a file called .env\n", |
||||||
|
"\n", |
||||||
|
"# Uncomment the code below if you are using a .env file\n", |
||||||
|
"\n", |
||||||
|
"#from dotenv import load_dotenv\n", |
||||||
|
"#load_dotenv(override=True)\n", |
||||||
|
"#api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"# I am using Google Colab, so I load the API key from Colab secrets\n", |
||||||
|
"from google.colab import userdata\n", |
||||||
|
"api_key=userdata.get('gemini_api')\n", |
||||||
|
"\n", |
||||||
|
"# Check the key\n", |
||||||
|
"if not api_key:\n", |
||||||
|
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||||
|
"elif api_key.startswith(\"sk-\"):\n", |
||||||
 |
" print(\"An OpenAI-style key (sk-...) was found, but this notebook expects a Gemini API key - please check you're using the right key\")\n", |
||||||
|
"elif api_key.strip() != api_key:\n", |
||||||
|
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"API key found and looks good so far!\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"source": [ |
||||||
|
"# A class to represent a Transcript\n", |
||||||
|
"from pathlib import Path\n", |
||||||
|
"class Transcript:\n", |
||||||
|
" def __init__(self, file_path):\n", |
||||||
|
" self.file_path=file_path\n", |
||||||
|
" self.content=Path(file_path).read_text(encoding='utf-8')\n" |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"id": "j6UTsnTEyWZ-" |
||||||
|
}, |
||||||
|
"execution_count": null, |
||||||
|
"outputs": [] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"source": [ |
||||||
|
"# Source of the text file -\"https://raw.githubusercontent.com/GeminiLn/EarningsCall_Dataset/refs/heads/master/3M%20Company_20170425/Text.txt\"\n", |
||||||
|
"path = '/content/Text.txt' # Specify the path of file you want to use - format should be .txt\n", |
||||||
|
"t=Transcript(path)\n" |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"id": "hquePU_mzZ7s" |
||||||
|
}, |
||||||
|
"execution_count": null, |
||||||
|
"outputs": [] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"source": [ |
||||||
|
"\n", |
||||||
|
"system_prompt = \"You are an expert at taking meeting notes. Given the transcript below, create an MOM (Minutes of Meeting).\"" |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"id": "ex5DB7M8L7KT" |
||||||
|
}, |
||||||
|
"execution_count": null, |
||||||
|
"outputs": [] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"source": [ |
||||||
|
"from google import genai\n", |
||||||
|
"from google.genai import types\n", |
||||||
|
"\n", |
||||||
|
"client = genai.Client(api_key=api_key)\n", |
||||||
|
"\n", |
||||||
|
"response = client.models.generate_content(\n", |
||||||
|
" model=\"gemini-2.0-flash\",\n", |
||||||
|
" config=types.GenerateContentConfig(\n", |
||||||
|
" system_instruction=system_prompt,\n", |
||||||
|
" max_output_tokens=500,\n", |
||||||
|
" temperature=0.1\n", |
||||||
|
" ),\n", |
||||||
|
" contents=t.content,\n", |
||||||
|
")\n", |
||||||
|
"\n", |
||||||
|
"print(response.text)" |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"id": "wcpJ34qfMKmV" |
||||||
|
}, |
||||||
|
"execution_count": null, |
||||||
|
"outputs": [] |
||||||
|
} |
||||||
|
] |
||||||
|
} |
@ -0,0 +1,167 @@ |
|||||||
|
import os |
||||||
|
import time |
||||||
|
import pandas as pd |
||||||
|
import re |
||||||
|
from dotenv import load_dotenv |
||||||
|
from selenium import webdriver |
||||||
|
from selenium.webdriver.chrome.service import Service |
||||||
|
from selenium.webdriver.chrome.options import Options |
||||||
|
from selenium.webdriver.common.by import By |
||||||
|
from selenium.webdriver.support.ui import WebDriverWait |
||||||
|
from selenium.webdriver.support import expected_conditions as EC |
||||||
|
from openai import OpenAI |
||||||
|
from openpyxl import load_workbook |
||||||
|
from openpyxl.styles import Font, Alignment |
||||||
|
|
||||||
|
# Load environment variables |
||||||
|
load_dotenv(override=True) |
||||||
|
api_key = os.getenv('OPENAI_API_KEY') |
||||||
|
|
||||||
|
# Validate API Key |
||||||
|
if not api_key: |
||||||
|
raise ValueError("No API key was found - please check your .env file.") |
||||||
|
|
||||||
|
# Initialize OpenAI client |
||||||
|
openai = OpenAI() |
||||||
|
|
||||||
|
# Set up Selenium WebDriver |
||||||
|
chrome_options = Options() |
||||||
|
chrome_options.add_argument("--headless") |
||||||
|
chrome_options.add_argument("--disable-gpu") |
||||||
|
chrome_options.add_argument("--no-sandbox") |
||||||
|
chrome_options.add_argument("--disable-dev-shm-usage") |
||||||
|
|
||||||
|
class Website: |
||||||
|
"""Scrapes and processes website content using Selenium.""" |
||||||
|
|
||||||
|
def __init__(self, url: str): |
||||||
|
self.url = url |
||||||
|
self.text = "No content extracted." |
||||||
|
|
||||||
|
service = Service(executable_path="/opt/homebrew/bin/chromedriver") |
||||||
|
driver = webdriver.Chrome(service=service, options=chrome_options) |
||||||
|
|
||||||
|
try: |
||||||
|
driver.get(url) |
||||||
|
WebDriverWait(driver, 10).until( |
||||||
|
EC.presence_of_element_located((By.TAG_NAME, "body")) |
||||||
|
) |
||||||
|
body_element = driver.find_element(By.TAG_NAME, "body") |
||||||
|
self.text = body_element.text.strip() if body_element else "No content extracted." |
||||||
|
except Exception as e: |
||||||
|
print(f"Error fetching website: {e}") |
||||||
|
finally: |
||||||
|
driver.quit() |
||||||
|
|
||||||
|
def summarized_text(self, max_length=1500): |
||||||
|
return self.text[:max_length] + ("..." if len(self.text) > max_length else "") |
||||||
|
|
||||||
|
def clean_text(text): |
||||||
|
""" |
||||||
|
Cleans extracted text by removing markdown-style formatting. |
||||||
|
""" |
||||||
|
text = re.sub(r"###*\s*", "", text) |
||||||
|
text = re.sub(r"\*\*(.*?)\*\*", r"\1", text) |
||||||
|
return text.strip() |
||||||
|
|
||||||
|
# Aspect-specific prompts for concise output |
||||||
|
aspect_prompts = { |
||||||
|
"Marketing Strategies": "Summarize the core marketing strategies used on this website in under 30 words. Do not include a title or introduction.", |
||||||
|
"SEO Keywords": "List only the most relevant SEO keywords from this website, separated by commas. Do not include a title or introduction.", |
||||||
|
"User Engagement Tactics": "List key engagement tactics used on this website (e.g., interactive features, user incentives, social proof). Keep responses to 3-5 bullet points. Do not include a title or introduction.", |
||||||
|
"Call-to-Action Phrases": "List only the most common Call-to-Action phrases used on this website, separated by commas. Do not include a title or introduction.", |
||||||
|
"Branding Elements": "Summarize the brand's tone, style, and positioning in under 30 words. Do not include a title or introduction.", |
||||||
|
"Competitor Comparison": "Briefly describe how this website differentiates itself from competitors in under 30 words. Do not include a title or introduction.", |
||||||
|
"Product Descriptions": "List the most important features or benefits of the products/services described on this website in under 30 words. Do not include a title or introduction.", |
||||||
|
"Customer Reviews Sentiment": "Summarize the overall sentiment of customer reviews in under 30 words, highlighting common themes. Do not include a title or introduction.", |
||||||
|
"Social Media Strategy": "List key social media strategies used on this website, separated by commas. Do not include a title or introduction." |
||||||
|
} |
||||||
|
|
||||||
|
|
||||||
|
def summarize(url: str) -> dict: |
||||||
|
""" |
||||||
|
Fetches a website, extracts relevant content, and generates a separate summary for each aspect. |
||||||
|
|
||||||
|
:param url: The website URL to analyze. |
||||||
|
:return: A dictionary containing extracted information. |
||||||
|
""" |
||||||
|
website = Website(url) |
||||||
|
|
||||||
|
if not website.text or website.text == "No content extracted.": |
||||||
|
return {"URL": url, "Error": "Failed to extract content"} |
||||||
|
|
||||||
|
extracted_data = {"URL": url} |
||||||
|
|
||||||
|
for aspect, prompt in aspect_prompts.items(): |
||||||
|
try: |
||||||
|
formatted_prompt = f"{prompt} \n\nContent:\n{website.summarized_text()}" |
||||||
|
response = openai.chat.completions.create( |
||||||
|
model="gpt-4o-mini", |
||||||
|
messages=[ |
||||||
|
{"role": "system", "content": "You are an expert at extracting structured information from website content."}, |
||||||
|
{"role": "user", "content": formatted_prompt} |
||||||
|
] |
||||||
|
) |
||||||
|
|
||||||
|
extracted_data[aspect] = clean_text(response.choices[0].message.content) |
||||||
|
|
||||||
|
except Exception as e: |
||||||
|
extracted_data[aspect] = f"Error generating summary: {e}" |
||||||
|
|
||||||
|
return extracted_data |
||||||
|
|
||||||
|
def save_to_excel(data_list: list, filename="website_analysis.xlsx"): |
||||||
|
""" |
||||||
|
Saves extracted information to an Excel file with proper formatting. |
||||||
|
|
||||||
|
:param data_list: A list of dictionaries containing extracted website details. |
||||||
|
:param filename: The name of the Excel file to save data. |
||||||
|
""" |
||||||
|
df = pd.DataFrame(data_list) |
||||||
|
|
||||||
|
df.to_excel(filename, index=False) |
||||||
|
|
||||||
|
wb = load_workbook(filename) |
||||||
|
ws = wb.active |
||||||
|
|
||||||
|
# Auto-adjust column widths |
||||||
|
for col in ws.columns: |
||||||
|
max_length = 0 |
||||||
|
col_letter = col[0].column_letter |
||||||
|
for cell in col: |
||||||
|
try: |
||||||
|
if cell.value: |
||||||
|
max_length = max(max_length, len(str(cell.value))) |
||||||
|
except: |
||||||
|
pass |
||||||
|
ws.column_dimensions[col_letter].width = min(max_length + 2, 50) |
||||||
|
|
||||||
|
# Format headers |
||||||
|
for cell in ws[1]: |
||||||
|
cell.font = Font(bold=True) |
||||||
|
cell.alignment = Alignment(horizontal="center", vertical="center") |
||||||
|
|
||||||
|
# Wrap text for extracted content |
||||||
|
for row in ws.iter_rows(min_row=2): |
||||||
|
for cell in row: |
||||||
|
cell.alignment = Alignment(wrap_text=True, vertical="top") |
||||||
|
|
||||||
|
wb.save(filename) |
||||||
|
print(f"Data saved to {filename} with improved formatting.") |
||||||
|
|
||||||
|
# 🔹 LIST OF WEBSITES TO PROCESS |
||||||
|
websites = [ |
||||||
|
"https://www.gymshark.com/", |
||||||
|
] |
||||||
|
|
||||||
|
if __name__ == "__main__": |
||||||
|
print("\nProcessing websites...\n") |
||||||
|
extracted_data_list = [] |
||||||
|
|
||||||
|
for site in websites: |
||||||
|
print(f"Extracting data from {site}...") |
||||||
|
extracted_data = summarize(site) |
||||||
|
extracted_data_list.append(extracted_data) |
||||||
|
|
||||||
|
save_to_excel(extracted_data_list) |
||||||
|
print("\nAll websites processed successfully!") |
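The `clean_text` helper in the script above can be sanity-checked in isolation. A minimal sketch (a standalone duplicate of the script's regexes, for illustration only):

```python
import re

def clean_text(text):
    text = re.sub(r"###*\s*", "", text)           # strip "##"/"###" heading markers
    text = re.sub(r"\*\*(.*?)\*\*", r"\1", text)  # unwrap **bold** spans
    return text.strip()

print(clean_text("### Key Points\n**Fast** shipping"))
```

Note that the pattern `###*` requires at least two `#` characters, so single-`#` headings pass through unchanged.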
@ -0,0 +1,127 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "0a512c2a-55e7-40e1-ab17-88b7034ca09a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Imports\n", |
||||||
|
"import openai\n", |
||||||
|
"import os\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"from IPython.display import Markdown, display" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "1aa8dd82-6b5e-4dbd-a2ee-8367e796a51f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"# Check the key\n", |
||||||
|
"\n", |
||||||
|
"if not api_key:\n", |
||||||
|
" print(\"No API key was found - head over to the troubleshooting notebook!\")\n", |
||||||
|
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||||
|
" print(\"An API key was found, but it doesn't start sk-proj... make sure you're using the right key (Check troubleshooting notebook)\")\n", |
||||||
|
"elif api_key.strip() != api_key:\n", |
||||||
|
" print(\"An API key was found, but it looks like there is whitespace at the beginning or end. (Check troubleshooting notebook)\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"API key found and looks good so far!\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "2acd579b-846c-4aa6-ba6c-1cc1a5a2eeb6", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Input the system prompt\n", |
||||||
|
"system_prompt = \"\"\"You are a top-notch AI music expert with knowledge of all genres, songs, and artists. You can search Google for lyrics. You have the following rules:\\\n", |
||||||
|
"1. Carefully break down what type of recommendation the user wants and the context.\\\n", |
||||||
|
"2. If asked to recommend genres similar to a song or artists please identify the top 3 genres.\\\n", |
||||||
|
"3. If asked to recommend artists from songs or genres then recommend the top 5 artists.\n", |
||||||
|
"4. If asked to recommend songs from genres or artists then recommend the top 10 songs.\n", |
||||||
 |
"5. If asked for a general recommendation, give them the top 5 songs based on the context.\\\n", |
||||||
 |
"6. Be flexible and adaptable with recommendations, and consider the context in which the user is asking.\n", |
||||||
 |
"7. Always respond in markdown.\n", |
||||||
|
"\"\"\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "3c1cf212-538c-4e9a-8da5-337bd7b6197c", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# music recommender function\n", |
||||||
|
"def music_recommender(user_prompt):\n", |
||||||
|
" messages = [\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||||
|
" ]\n", |
||||||
|
" \n", |
||||||
|
" response = openai.chat.completions.create(\n", |
||||||
|
" model=\"gpt-4\",\n", |
||||||
|
" messages=messages,\n", |
||||||
|
" max_tokens=300\n", |
||||||
|
" )\n", |
||||||
|
" \n", |
||||||
|
" return response.choices[0].message.content" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4f277561-af8b-4715-90e7-6ebaadeb15d0", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# User prompt (Change this to fit your needs!)\n", |
||||||
|
"user_prompt = \"Can you recommend me songs from Taylor Swift\"\n", |
||||||
|
"\n", |
||||||
|
"# Example usage\n", |
||||||
|
"response = music_recommender(user_prompt)\n", |
||||||
|
"display(Markdown(response))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "bb869d36-de14-4e46-9087-223d6b257efa", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,213 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "bc7d1de3-e2ac-46ff-a302-3b4ba38c4c90", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Also trying the amazing reasoning model DeepSeek\n", |
||||||
|
"\n", |
||||||
|
"Here we use the version of DeepSeek-reasoner that's been distilled to 1.5B. \n", |
||||||
|
"This is actually a 1.5B variant of Qwen that has been fine-tuned using synthetic data generated by DeepSeek R1.\n", |
||||||
|
"\n", |
||||||
|
"Other sizes of DeepSeek are [here](https://ollama.com/library/deepseek-r1) all the way up to the full 671B parameter version, which would use up 404GB of your drive and is far too large for most!" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "cf9eb44e-fe5b-47aa-b719-0bb63669ab3d", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"!ollama pull deepseek-r1:1.5b" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4bdcd35a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"!ollama pull deepseek-r1:8b" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# NOW the exercise for you\n", |
||||||
|
"\n", |
||||||
|
"Take the code from day1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "1c106420", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import requests\n", |
||||||
|
"import ollama\n", |
||||||
|
"from bs4 import BeautifulSoup\n", |
||||||
|
"from IPython.display import Markdown, display" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "22d62f00", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Constants\n", |
||||||
|
"\n", |
||||||
|
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||||
|
"HEADERS = {\"Content-Type\": \"application/json\"}\n", |
||||||
|
"MODEL = \"deepseek-r1:8b\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "6de38216-6d1c-48c4-877b-86d403f4e0f8", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# A class to represent a Webpage\n", |
||||||
|
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||||
|
"\n", |
||||||
|
"# Some websites need you to use proper headers when fetching them:\n", |
||||||
|
"headers = {\n", |
||||||
|
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||||
|
"}\n", |
||||||
|
"\n", |
||||||
|
"class Website:\n", |
||||||
|
"\n", |
||||||
|
" def __init__(self, url):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" self.url = url\n", |
||||||
|
" response = requests.get(url, headers=headers)\n", |
||||||
|
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||||
|
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||||
|
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||||
|
" irrelevant.decompose()\n", |
||||||
|
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4449b7dc", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n", |
||||||
|
"\n", |
||||||
|
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||||
|
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||||
|
"Respond in markdown.\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "daca9448", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def user_prompt_for(website):\n", |
||||||
|
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||||
|
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||||
|
"please provide a short summary of this website in markdown. \\\n", |
||||||
|
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||||
|
" user_prompt += website.text\n", |
||||||
|
" return user_prompt" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "0ec9d5d2", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# See how this function creates exactly the format above\n", |
||||||
|
"\n", |
||||||
|
"def messages_for(website):\n", |
||||||
|
" return [\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||||
|
" ]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "6e1ab04a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# And now: call the OpenAI API. You will get very familiar with this!\n", |
||||||
|
"\n", |
||||||
|
"def summarize(url):\n", |
||||||
|
" website = Website(url)\n", |
||||||
|
" response = ollama.chat(\n", |
||||||
|
" model = MODEL,\n", |
||||||
|
" messages = messages_for(website)\n", |
||||||
|
" )\n", |
||||||
|
" return response['message']['content']" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "0d3b5628", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def display_summary(url):\n", |
||||||
|
" summary = summarize(url)\n", |
||||||
|
" display(Markdown(summary))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "938e5633", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"display_summary(\"https://edwarddonner.com\")" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "llms", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
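For the exercise above, the "raw HTTP" approach against the `OLLAMA_API` endpoint can be sketched as follows. `build_payload` is a hypothetical helper (not part of the notebook), and the endpoint and model name assume a local Ollama install configured as in the earlier cells:

```python
OLLAMA_API = "http://localhost:11434/api/chat"
MODEL = "deepseek-r1:8b"

def build_payload(system_prompt, user_prompt):
    # Same message shape that messages_for() produces in the notebook
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "stream": False,  # ask for one JSON object instead of a token stream
    }

payload = build_payload("You are a concise summarizer.", "Summarize: hello world")
# To actually call the local server (requires the Ollama app running):
# reply = requests.post(OLLAMA_API, json=payload).json()["message"]["content"]
print(payload["model"])
```

The `stream: False` flag matters here: with streaming enabled (the default), the endpoint returns one JSON object per token rather than a single response body.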
@ -0,0 +1,81 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "a98030af-fcd1-4d63-a36e-38ba053498fa", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# A Small Tweak to Week1-Day5\n", |
||||||
|
"\n", |
||||||
|
"If you have network restrictions (such as using a custom DNS provider, or firewall rules at work), you can disable SSL cert verification.\n", |
||||||
|
"Once you do that and start executing your code, the output will be riddled with warnings. Thankfully, you can suppress those warnings, too.\n", |
||||||
|
"\n", |
||||||
|
"See the 2 lines added to the init method, below." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 22, |
||||||
|
"id": "106dd65e-90af-4ca8-86b6-23a41840645b", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# A class to represent a Webpage\n", |
||||||
|
"\n", |
||||||
|
"# Some websites need you to use proper headers when fetching them:\n", |
||||||
|
"headers = {\n", |
||||||
|
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||||
|
"}\n", |
||||||
|
"\n", |
||||||
|
"class Website:\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" A utility class to represent a Website that we have scraped, now with links\n", |
||||||
|
" \"\"\"\n", |
||||||
|
"\n", |
||||||
|
" def __init__(self, url):\n", |
||||||
|
" self.url = url\n", |
||||||
|
"\n", |
||||||
|
" #\n", |
||||||
|
" # If you must disable SSL cert validation, and also suppress all the warning that will come with it,\n", |
||||||
|
" # add the 2 lines below. This comes in very handy if you have DNS/firewall restrictions; alas, use\n", |
||||||
|
" # with caution, especially if deploying this in a non-dev environment.\n", |
||||||
|
" requests.packages.urllib3.disable_warnings() \n", |
||||||
|
" response = requests.get(url, headers=headers, verify=False) \n", |
||||||
|
" # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||||
|
" \n", |
||||||
|
" self.body = response.content\n", |
||||||
|
" soup = BeautifulSoup(self.body, 'html.parser')\n", |
||||||
|
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||||
|
" if soup.body:\n", |
||||||
|
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||||
|
" irrelevant.decompose()\n", |
||||||
|
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||||
|
" else:\n", |
||||||
|
" self.text = \"\"\n", |
||||||
|
" links = [link.get('href') for link in soup.find_all('a')]\n", |
||||||
|
" self.links = [link for link in links if link]" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,444 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "it1JLoxrSqO1", |
||||||
|
"metadata": { |
||||||
|
"id": "it1JLoxrSqO1" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"!pip install openai python-docx python-dotenv gradio openpyxl" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "950a084a-7f92-4669-af62-f07cb121da56", |
||||||
|
"metadata": { |
||||||
|
"id": "950a084a-7f92-4669-af62-f07cb121da56" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import os\n", |
||||||
|
"import json\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"#from IPython.display import Markdown, display, update_display\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"from docx import Document" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "d0548135-ef16-4102-a55a-cea888a51c29", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import pandas as pd\n", |
||||||
|
"import re\n", |
||||||
|
"import gradio as gr" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "ab9f734f-ed6f-44f6-accb-594f9ca4843d", |
||||||
|
"metadata": { |
||||||
|
"id": "ab9f734f-ed6f-44f6-accb-594f9ca4843d" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"class ReqDoc:\n", |
||||||
|
" def __init__(self, file_path):\n", |
||||||
|
" self.file_path = file_path\n", |
||||||
|
"\n", |
||||||
|
" def extract(self):\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" Reads the content of a .docx file and returns the paragraphs as a list of strings.\n", |
||||||
|
" \"\"\"\n", |
||||||
|
" try:\n", |
||||||
|
" # Check if the file exists\n", |
||||||
|
" if not os.path.exists(self.file_path):\n", |
||||||
|
" raise FileNotFoundError(f\"The file {self.file_path} was not found.\")\n", |
||||||
|
"\n", |
||||||
|
" # Attempt to open and read the document\n", |
||||||
|
" doc = Document(self.file_path)\n", |
||||||
|
" text = \"\\n\".join([paragraph.text for paragraph in doc.paragraphs])\n", |
||||||
|
" return text\n", |
||||||
|
"\n", |
||||||
|
" except FileNotFoundError as fnf_error:\n", |
||||||
|
" print(fnf_error)\n", |
||||||
|
" return None\n", |
||||||
|
" except Exception as e:\n", |
||||||
|
" print(f\"An error occurred: {e}\")\n", |
||||||
|
" return None\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "008f485a-5718-48f6-b408-06eb6d59d7f9", |
||||||
|
"metadata": { |
||||||
|
"id": "008f485a-5718-48f6-b408-06eb6d59d7f9" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Initialize and constants\n", |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"if api_key and api_key.startswith('sk-proj') and len(api_key)>10:\n", |
||||||
|
" print(\"API key looks good!\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"There might be a problem with your API key. Please check!\")\n", |
||||||
|
" \n", |
||||||
|
"MODEL = 'gpt-4o-mini'\n", |
||||||
|
"openai = OpenAI()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "b6110ff3-74bc-430a-8051-7d86a216f0fb", |
||||||
|
"metadata": { |
||||||
|
"id": "b6110ff3-74bc-430a-8051-7d86a216f0fb" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"#Set up system prompt for extracting just the requirements from the document\n", |
||||||
|
"\n", |
||||||
|
"req_doc_system_prompt = \"You are provided with a complete requirements specifications document. \\\n", |
||||||
|
"You are able to decide which content from that document is related to actual requirements, identify each requirement as \\\n",
||||||
|
"functional or non-functional and list them all.\\n\"\n", |
||||||
|
"req_doc_system_prompt += \"If the document is empty, does not contain requirements, or you cannot extract them, please respond as such.\\\n",
||||||
|
"Do not make up your own requirements. \\n\"\n", |
||||||
|
"req_doc_system_prompt += \"You should respond in JSON as in this example:\"\n", |
||||||
|
"req_doc_system_prompt += \"\"\"\n", |
||||||
|
"{\n", |
||||||
|
" \"requirements\": [\n", |
||||||
|
" {\"RequirementNo\": \"FR-01\", \"Requirement Description\": \"description of this functional requirement goes here\"},\n", |
||||||
|
"    {\"RequirementNo\": \"FR-02\", \"Requirement Description\": \"description of this functional requirement goes here\"},\n",
"    {\"RequirementNo\": \"NFR-01\", \"Requirement Description\": \"description of this non-functional requirement goes here\"},\n",
"    {\"RequirementNo\": \"NFR-02\", \"Requirement Description\": \"description of this non-functional requirement goes here\"}\n",
||||||
|
" ]\n", |
||||||
|
"}\n", |
||||||
|
"\"\"\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "20460e45-c1b7-4dc4-ab07-932235c19895", |
||||||
|
"metadata": { |
||||||
|
"id": "20460e45-c1b7-4dc4-ab07-932235c19895" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Set up the user prompt, sending in the requirements doc as input and calling the ReqDoc.extract function.\n",
"# Key to note here is the explicit instruction to respond in JSON format.\n",
||||||
|
"\n", |
||||||
|
"def req_doc_user_prompt(doc):\n", |
||||||
|
" user_prompt = \"Here is the contents from a requirement document.\\n\"\n", |
||||||
|
" user_prompt += f\"{doc.extract()} \\n\"\n", |
||||||
|
" user_prompt += \"Please scan through the document and extract only the actual requirements. For example, ignore sections or \\\n", |
||||||
|
"paragraphs such as Approvers, table of contents and similar sections which are not really requirements.\\\n", |
||||||
|
"You must respond in JSON format.\"\n",
||||||
|
" user_prompt += \"If the content is empty, respond that there are no valid requirements you could extract and ask for a proper document.\\n\"\n", |
||||||
|
" user_prompt = user_prompt[:25_000] # Truncate if more than 25,000 characters\n", |
||||||
|
" return user_prompt" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "3a9f0f84-69a0-4971-a545-5bb40c2f9891", |
||||||
|
"metadata": { |
||||||
|
"id": "3a9f0f84-69a0-4971-a545-5bb40c2f9891" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Function to call the gpt-4o-mini model with the system and user prompts set above, returning the JSON-formatted result\n",
||||||
|
"def get_requirements(doc):\n", |
||||||
|
" reqdoc = ReqDoc(doc)\n", |
||||||
|
" response = openai.chat.completions.create(\n", |
||||||
|
" model=MODEL,\n", |
||||||
|
" messages=[\n", |
||||||
|
" {\"role\": \"system\", \"content\": req_doc_system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": req_doc_user_prompt(reqdoc)}\n", |
||||||
|
" ],\n", |
||||||
|
" response_format={\"type\": \"json_object\"}\n", |
||||||
|
" )\n", |
||||||
|
" result = response.choices[0].message.content\n", |
||||||
|
" return json.loads(result)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "f9bb04ef-78d3-4e0f-9ed1-59a961a0663e", |
||||||
|
"metadata": { |
||||||
|
"id": "f9bb04ef-78d3-4e0f-9ed1-59a961a0663e" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Uncomment and run this if you want to see the extracted requirements in JSON format.\n",
||||||
|
"#get_requirements(\"reqdoc.docx\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "1fe8618c-1dfe-4030-bad8-405731294c93", |
||||||
|
"metadata": { |
||||||
|
"id": "1fe8618c-1dfe-4030-bad8-405731294c93" |
||||||
|
}, |
||||||
|
"source": [ |
||||||
|
"### Next, we will make another call to gpt-4o-mini" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "db2c1eb3-7740-43a4-9c0b-37b7e70c739b", |
||||||
|
"metadata": { |
||||||
|
"id": "db2c1eb3-7740-43a4-9c0b-37b7e70c739b" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"#Set up system prompt to ask for test cases in table format\n", |
||||||
|
"system_prompt = \"You are an assistant that receives a list of functional and non-functional requirements in JSON format. You are an expert in generating unit test cases for each requirement. \\\n",
||||||
|
"You will create as many different test cases as needed for each requirement and produce a result in a table. Order the table by requirement No. Provide clear details on test case pass criteria. \\\n", |
||||||
|
"The table will contain the following columns. \\\n", |
||||||
|
"1.S No\\\n", |
||||||
|
"2.Requirement No\\\n", |
||||||
|
"3.Requirement Description\\\n", |
||||||
|
"4.Test Case ID\\\n", |
||||||
|
"5.Test case summary\\\n", |
||||||
|
"6.Test case description\\\n", |
||||||
|
"7.Success criteria \\n\"\n", |
||||||
|
"system_prompt += \"If you are provided with an empty list, ask for a proper requirement doc\\n\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "c4cd2bdf-e1bd-43ff-85fa-760ba39ed8c5", |
||||||
|
"metadata": { |
||||||
|
"id": "c4cd2bdf-e1bd-43ff-85fa-760ba39ed8c5" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Set up user prompt passing in the req doc file. This in turn will call the get_requirements function, which will make a call to chatgpt.\n", |
||||||
|
"\n", |
||||||
|
"def get_testcase_user_prompt(reqdoc):\n", |
||||||
|
" user_prompt = \"You are looking at the following list of requirements. \\n\"\n", |
||||||
|
" user_prompt += f\"{get_requirements(reqdoc)}\\n\"\n", |
||||||
|
" user_prompt += \"Prepare unit test cases for each of these requirements in a table and send that table as response. \\n\"\n", |
||||||
|
"    user_prompt = user_prompt[:25_000] # Truncate if more than 25,000 characters\n",
||||||
|
" return user_prompt" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "5b2a2b46-9d9c-416c-b189-3007b4d26d76", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# This is the 2nd call to GPT to generate test cases, streaming the markdown table back to the Gradio UI.\n",
||||||
|
"def create_testcase_doc_gradio(response, is_response_ready, is_cleared, file_input):\n", |
||||||
|
"    if is_cleared or file_input is None: # Prevent OpenAI call if \"Clear\" was clicked\n",
||||||
|
" return \"\", False\n", |
||||||
|
" stream = openai.chat.completions.create(\n", |
||||||
|
" model=MODEL,\n", |
||||||
|
" messages=[\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": get_testcase_user_prompt(file_input)}\n", |
||||||
|
" ],\n", |
||||||
|
" stream=True\n", |
||||||
|
" )\n", |
||||||
|
" #Modified for Gradio\n", |
||||||
|
" result = \"\"\n", |
||||||
|
" for chunk in stream:\n", |
||||||
|
" result += chunk.choices[0].delta.content or \"\"\n", |
||||||
|
" #print(result)\n", |
||||||
|
" yield result, False" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "2bb96a11-063e-4b20-9880-71fa9ea4d3f7", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Define this variable and then pass js=force_dark_mode when creating the Interface\n", |
||||||
|
"force_dark_mode = \"\"\"\n", |
||||||
|
"function refresh() {\n", |
||||||
|
" const url = new URL(window.location);\n", |
||||||
|
" if (url.searchParams.get('__theme') !== 'dark') {\n", |
||||||
|
" url.searchParams.set('__theme', 'dark');\n", |
||||||
|
" window.location.href = url.href;\n", |
||||||
|
" }\n", |
||||||
|
"}\n", |
||||||
|
"\"\"\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "5c81c766-9613-4614-b88d-410654672b89", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def show_or_hide_save_button(response, is_response_ready, is_cleared):\n", |
||||||
|
"    if is_cleared or response is None:\n",
||||||
|
" return \"\", False\n", |
||||||
|
" table_pattern = r\"(\\|.+\\|[\\r\\n]+)+\"\n", |
||||||
|
" table_match = re.search(table_pattern, response)\n", |
||||||
|
" if table_match:\n", |
||||||
|
" return response, True #(response, is_response_ready)\n", |
||||||
|
" else:\n", |
||||||
|
" return response, False #(response, is_response_ready)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a5f5d8e7-d29c-4f40-8d57-a9911bb7c47e", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def extract_table_from_markdown(response):\n", |
||||||
|
" # Regular expression to match Markdown tables\n", |
||||||
|
" table_pattern = r\"(\\|.+\\|[\\r\\n]+)+\"\n", |
||||||
|
" table_match = re.search(table_pattern, response)\n", |
||||||
|
"\n", |
||||||
|
" if table_match:\n", |
||||||
|
" table_data = table_match.group(0)\n", |
||||||
|
" # Process the table into a format pandas can read\n", |
||||||
|
" rows = table_data.strip().split(\"\\n\")\n", |
||||||
|
" data = [row.split(\"|\")[1:-1] for row in rows] # Split columns by '|'\n", |
||||||
|
"\n", |
||||||
|
" # Convert to DataFrame\n", |
||||||
|
" df = pd.DataFrame(data[1:], columns=data[0]) # First row is the header\n", |
||||||
|
"\n", |
||||||
|
" # Save to Excel\n", |
||||||
|
" output_file = \"test_cases.xlsx\"\n", |
||||||
|
" df.to_excel(output_file, index=False)\n", |
||||||
|
"\n", |
||||||
|
" return output_file\n", |
||||||
|
" else:\n", |
||||||
|
" return None" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "c1380b11-3e28-40de-ab1a-93a5fd73cf81", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def extract_and_save_button(response, is_cleared):\n", |
||||||
|
" if is_cleared:\n", |
||||||
|
" return None # Do nothing if the file was cleared\n", |
||||||
|
" # This function will be triggered when the user clicks \"Save as Excel\"\n", |
||||||
|
" output_file = extract_table_from_markdown(response)\n", |
||||||
|
" if output_file:\n", |
||||||
|
" return output_file\n", |
||||||
|
" else:\n", |
||||||
|
" return \"No table found in the provided input.\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "3a532b42-9f81-4c75-8be4-e40d621a6b35", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Gradio interface\n", |
||||||
|
"with gr.Blocks(js=force_dark_mode) as demo:\n", |
||||||
|
" gr.HTML(\"<h2 style='text-align: center; color: white;'>📄 Test case automation</h2>\")\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" file_input = gr.File(label=\"Upload your requirements docx file\", file_types=[\".docx\"])\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" response = gr.Markdown()\n", |
||||||
|
" # Button to save the table as Excel file (optional)\n", |
||||||
|
" save_button = gr.Button(\"Download Table as Excel\", visible=False)\n", |
||||||
|
" file_output = gr.File(label=\"Download Excel File\", visible=False) \n", |
||||||
|
" # State variable to track if response is ready\n", |
||||||
|
" is_response_ready = gr.State(False)\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" clear_button = gr.Button(\"Clear\")\n", |
||||||
|
" # State variable to track if clear button is clicked\n", |
||||||
|
" is_cleared = gr.State(False)\n", |
||||||
|
"\n", |
||||||
|
" # Function to show \"Processing...\" message\n", |
||||||
|
" def show_processing(is_cleared, file_input):\n", |
||||||
|
"        if is_cleared or file_input is None:\n",
||||||
|
" return None, False, is_cleared, file_input # Do nothing if the file was cleared\n", |
||||||
|
" #return gr.HTML(\"<h6 style='text-align: left; color: #ffffffff;'>⌛ Processing your file... Please wait!</h6>\"), False, is_cleared, file_input\n", |
||||||
|
" return \"⌛ Processing your file... Please wait!\", False, is_cleared, file_input\n", |
||||||
|
" \n", |
||||||
|
" # Trigger response only if the file was uploaded and not cleared\n", |
||||||
|
" file_input.change(\n", |
||||||
|
" lambda _: False, # Directly set is_cleared to False\n", |
||||||
|
" inputs=[file_input],\n", |
||||||
|
" outputs=[is_cleared]\n", |
||||||
|
" ).then(\n", |
||||||
|
" show_processing, inputs=[is_cleared, file_input], outputs=[response, is_response_ready, is_cleared, file_input]\n", |
||||||
|
" ).then(\n", |
||||||
|
" create_testcase_doc_gradio, inputs=[response, is_response_ready, is_cleared, file_input], outputs=[response, is_response_ready]\n", |
||||||
|
" ).then(\n", |
||||||
|
" show_or_hide_save_button, inputs=[response, is_response_ready, is_cleared], outputs=[response, is_response_ready]\n", |
||||||
|
" ).then(\n", |
||||||
|
" lambda _, ready: (gr.update(visible=ready), gr.update(visible=ready)), inputs=[response, is_response_ready], outputs=[save_button,file_output])\n", |
||||||
|
"\n", |
||||||
|
" #.then() passes the previous function outputs as inputs to the next function\n", |
||||||
|
"\n", |
||||||
|
" # Button action to extract and save table as an Excel file\n", |
||||||
|
" save_button.click(extract_and_save_button, inputs=[response, is_cleared], outputs=file_output)\n", |
||||||
|
" \n", |
||||||
|
" # Clear button resets both file and output while setting is_cleared to True\n", |
||||||
|
" clear_button.click(lambda: (None, None, None, True), inputs=None, outputs=[file_input, file_output, response, is_cleared]) \n", |
||||||
|
"\n", |
||||||
|
"# Launch Gradio app\n", |
||||||
|
"demo.launch(share=True)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "cd5314b2-ee91-49bd-9d40-558775d44382", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"colab": { |
||||||
|
"provenance": [] |
||||||
|
}, |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,202 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# End of week 1 exercise\n", |
||||||
|
"\n", |
||||||
|
"To demonstrate your familiarity with the OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
||||||
|
"and responds with an explanation. This is a tool that you will be able to use yourself during the course!" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 9, |
||||||
|
"id": "c1070317-3ed9-4659-abe3-828943230e03", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"from IPython.display import Markdown, display, update_display\n", |
||||||
|
"import openai\n", |
||||||
|
"from openai import OpenAI\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 10, |
||||||
|
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# constants\n", |
||||||
|
"models = {\n", |
||||||
|
" 'MODEL_GPT': 'gpt-4o-mini',\n", |
||||||
|
" 'MODEL_LLAMA': 'llama3.2'\n", |
||||||
|
"}\n", |
||||||
|
"\n", |
||||||
|
"# To use ollama using openai API (ensure that ollama is running on localhost)\n", |
||||||
|
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", |
||||||
|
"\n", |
||||||
|
"def model_choices(model):\n", |
||||||
|
" if model in models:\n", |
||||||
|
" return models[model]\n", |
||||||
|
" else:\n", |
||||||
|
" raise ValueError(f\"Model {model} not found in models dictionary\")\n", |
||||||
|
"\n", |
||||||
|
"def get_model_api(model='MODEL_GPT'):\n", |
||||||
|
" if model == 'MODEL_GPT':\n", |
||||||
|
" return openai, model_choices(model)\n", |
||||||
|
" elif model == 'MODEL_LLAMA':\n", |
||||||
|
" return ollama_via_openai, model_choices(model)\n", |
||||||
|
" else:\n", |
||||||
|
" raise ValueError(f\"Model {model} not found in models dictionary\")\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 12, |
||||||
|
"id": "a8d7923c-5f28-4c30-8556-342d7c8497c1", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# set up environment\n", |
||||||
|
"\n", |
||||||
|
"system_prompt = \"\"\" You are an AI assistant helping a user with technical questions. \n",
"The user asks you a technical question about code, and you provide a response with code snippets and explanations.\"\"\"\n",
||||||
|
"\n", |
||||||
|
"def stream_brochure(question, model):\n", |
||||||
|
" api, model_name = get_model_api(model)\n", |
||||||
|
" stream = api.chat.completions.create(\n", |
||||||
|
" model=model_name,\n", |
||||||
|
" messages=[\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": question}\n", |
||||||
|
" ],\n", |
||||||
|
" stream=True\n", |
||||||
|
" )\n", |
||||||
|
" \n", |
||||||
|
" response = \"\"\n", |
||||||
|
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||||
|
" for chunk in stream:\n", |
||||||
|
" response += chunk.choices[0].delta.content or ''\n", |
||||||
|
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n", |
||||||
|
" update_display(Markdown(response), display_id=display_handle.display_id)\n", |
||||||
|
"\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 13, |
||||||
|
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Here is the question; type over this to ask something new\n", |
||||||
|
"\n", |
||||||
|
"question = \"\"\"\n", |
||||||
|
"Please explain what this code does and why:\n", |
||||||
|
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", |
||||||
|
"\"\"\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "60ce7000-a4a5-4cce-a261-e75ef45063b4", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"**Understanding the Code Snippet**\n", |
||||||
|
"\n", |
||||||
|
"This Python code snippet uses a combination of built-in functions, dictionary iteration, and generator expressions to extract and yield author names from a list of `Book` objects.\n", |
||||||
|
"\n", |
||||||
|
"Here's a breakdown:\n", |
||||||
|
"\n", |
||||||
|
"1. **Dictionary Iteration**: The expression `for book in books if book.get(\"author\")`\n", |
||||||
|
" - Iterates over each element (`book`) in the container `books`.\n", |
||||||
|
" - Filters out elements whose `'author'` key does not have a value (i.e., `None`, `False`, or an empty string). This leaves only dictionaries with author information.\n", |
||||||
|
"\n", |
||||||
|
"2. **Dictionary Access**: The expression `{book.get(\"author\") for book in books if book.get(\"author\")}`\n", |
||||||
|
" - Uses dictionary membership testing to access only the values associated with the `'author'` key.\n", |
||||||
|
" - If the value is not found or is considered false, it's skipped in this particular case.\n", |
||||||
|
"\n", |
||||||
|
"3. **Generator Expression**: This generates an iterator that iterates over the filtered author names.\n", |
||||||
|
" - Yields each author name (i.e., a single `'name'` from the book dictionary) on demand.\n", |
||||||
|
" - Since these are generator expressions, they use memory less than equivalent Python lists and also create results on-demand.\n", |
||||||
|
"\n", |
||||||
|
"4. **`yield from`**: This statement takes the generator expression as an argument and uses it to generate a nested iterator structure.\n", |
||||||
|
" - It essentially \"decompresses\" the single level of nested iterator created by `list(iter(x))`, allowing for simpler use cases and potentially significant efficiency improvements for more complex structures where every value must be iterated, while in the latter case just the first item per iterable in the outer expression's sequence needs to actually be yielded into result stream.\n", |
||||||
|
" - By \"yielding\" a nested iterator (the generator expression), we can simplify code by avoiding repetitive structure like `for book, book_author in zip(iterating over), ...` or list creation.\n", |
||||||
|
"\n", |
||||||
|
"**Example Use Case**\n", |
||||||
|
"\n", |
||||||
|
"In this hypothetical example:\n", |
||||||
|
"\n", |
||||||
|
"# Example Book objects\n", |
||||||
|
"class Book:\n", |
||||||
|
" def __init__(self, author, title):\n", |
||||||
|
" self.author = author # str\n", |
||||||
|
" self.title = title\n", |
||||||
|
"\n", |
||||||
|
"books = [\n", |
||||||
|
" {\"author\": \"John Doe\", \"title\": f\"Book 1 by John Doe\"},\n", |
||||||
|
" {\"author\": None, \"title\": f\"Book 2 without Author\"},\n", |
||||||
|
" {\"author\": \"Jane Smith\", \"title\": f\"Book 3 by Jane Smith\"}\n", |
||||||
|
"]\n", |
||||||
|
"\n", |
||||||
|
"# The given expression to extract and yield author names\n", |
||||||
|
"for author in yield from {book.get(\"author\") for book in books if book.get(\"author\")}:\n", |
||||||
|
"\n", |
||||||
|
" print(author) \n", |
||||||
|
"\n", |
||||||
|
"In this code snippet, printing the extracted authors would output `John Doe`, `Jane Smith` (since only dictionaries with author information pass the filtering test).\n", |
||||||
|
"\n", |
||||||
|
"Please modify it like as you wish and use `yield from` along with dictionary iteration, list comprehension or generator expression if needed, and explain what purpose your version has." |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# Get the model of your choice (choices listed below) to answer, with streaming \n",
||||||
|
"\n", |
||||||
|
"\"\"\"models = {\n", |
||||||
|
" 'MODEL_GPT': 'gpt-4o-mini',\n", |
||||||
|
" 'MODEL_LLAMA': 'llama3.2'\n", |
||||||
|
"}\"\"\"\n", |
||||||
|
"\n", |
||||||
|
"stream_brochure(question,'MODEL_LLAMA')" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "llms", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,148 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "f38e9ebb-453d-4b40-84f6-bc3e9bf4d7ef", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"import requests\n", |
||||||
|
"import json\n", |
||||||
|
"import ollama\n", |
||||||
|
"from typing import List\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from bs4 import BeautifulSoup\n", |
||||||
|
"from IPython.display import Markdown, display, update_display\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"\n", |
||||||
|
"# constants\n", |
||||||
|
"\n", |
||||||
|
"MODEL_GPT = 'gpt-4o-mini'\n", |
||||||
|
"MODEL_LLAMA = 'llama3.2'\n", |
||||||
|
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||||
|
"HEADERS = {\"Content-Type\": \"application/json\"}" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "f367c5bb-80a2-4d78-8f27-823f5dafe7c0", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# set up environment\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"\n", |
||||||
|
"# System prompt for the AI technical LLM and Python tutor\n",
||||||
|
"\n", |
||||||
|
"system_prompt = \"You are an EXPERT in AI, LLMs and Python. \\\n",
"ALWAYS provide the answer with an example when necessary. \\\n",
"If you do not know the answer, just say 'I don't know the answer'. \\\n",
||||||
|
"Respond in markdown in Spanish.\"\n", |
||||||
|
"\n", |
||||||
|
"# messages\n", |
||||||
|
"def messages_for(question):\n", |
||||||
|
" return [\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": question}\n", |
||||||
|
" ]\n", |
||||||
|
"\n", |
||||||
|
"# here is the question; type over this to ask something new\n", |
||||||
|
"\n", |
||||||
|
"question = \"\"\"\n", |
||||||
|
"Please explain what this code does and why:\n", |
||||||
|
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", |
||||||
|
"\"\"\"\n", |
||||||
|
"question = question[:5_000] # Truncate if more than 5,000 characters" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a90d726d-d494-401f-9cd6-0260f5c781e0", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# METHODS TO DISPLAY\n", |
||||||
|
"def display_summary_ollama(question):\n", |
||||||
|
" response = ollama.chat(\n", |
||||||
|
" model = MODEL_LLAMA,\n", |
||||||
|
" messages = messages_for(question)\n", |
||||||
|
" ) \n", |
||||||
|
" summary = response['message']['content']\n", |
||||||
|
" display(Markdown(summary))\n", |
||||||
|
"\n", |
||||||
|
"def display_summary_gpt(question):\n", |
||||||
|
" stream = openai.chat.completions.create(\n", |
||||||
|
" model = MODEL_GPT,\n", |
||||||
|
" messages = messages_for(question),\n", |
||||||
|
" stream=True\n", |
||||||
|
" )\n", |
||||||
|
" response = \"\"\n", |
||||||
|
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||||
|
" for chunk in stream:\n", |
||||||
|
" response += chunk.choices[0].delta.content or ''\n", |
||||||
|
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n", |
||||||
|
" update_display(Markdown(response), display_id=display_handle.display_id)\n", |
||||||
|
" \n", |
||||||
|
"def display_summary(llm, question):\n", |
||||||
|
" if llm.startswith(\"llama3.2\"):\n", |
||||||
|
" display_summary_ollama(question)\n", |
||||||
|
" else:\n", |
||||||
|
" display_summary_gpt(question)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4e993b6d-8fee-43f3-9e36-f86701a5cc57", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Get gpt-4o-mini to answer, with streaming\n", |
||||||
|
"\n", |
||||||
|
"display_summary(MODEL_GPT, question)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "31f6283a-ee57-415e-9a57-83d07261b7f9", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Get Llama 3.2 to answer\n", |
||||||
|
"\n", |
||||||
|
"display_summary(MODEL_LLAMA, question)" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,180 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# End of week 1 exercise\n", |
||||||
|
"\n", |
||||||
|
"To demonstrate your familiarity with the OpenAI API, and also Ollama, build a tool that takes a technical question, \n",
||||||
|
"and responds with an explanation. This is a tool that you will be able to use yourself during the course!" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "c1070317-3ed9-4659-abe3-828943230e03", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"import os\n", |
||||||
|
"import requests\n", |
||||||
|
"import json\n", |
||||||
|
"from typing import List\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from bs4 import BeautifulSoup\n", |
||||||
|
"from IPython.display import Markdown, display, update_display\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import ollama" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# constants\n", |
||||||
|
"MODEL_GPT = 'gpt-4o-mini'\n", |
||||||
|
"MODEL_LLAMA = 'llama3.2'" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a8d7923c-5f28-4c30-8556-342d7c8497c1", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# set up environment\n", |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n", |
||||||
|
" print(\"API key looks good so far\")\n", |
||||||
|
"else:\n", |
||||||
|
"    print(\"There might be a problem with your API key. Please visit the troubleshooting notebook!\")\n", |
||||||
|
"\n", |
||||||
|
"openai = OpenAI()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"system_prompt = \"You are provided with a technical question. \\\n", |
||||||
|
"You are answering by providing a quick explanation and giving some examples.\\n\"\n", |
||||||
|
"\n", |
||||||
|
"# here is the question; type over this to ask something new\n", |
||||||
|
"question = \"\"\"\n", |
||||||
|
"Please explain what this code does and why:\n", |
||||||
|
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", |
||||||
|
"\"\"\"" |
||||||
|
] |
||||||
|
}, |
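Before sending the question to a model, it can help to see what the expression in the question actually does. A minimal, self-contained illustration with made-up book data (the `books` list here is hypothetical, not from the notebook):

```python
# Hypothetical sample data to illustrate the expression in the question.
books = [
    {"title": "A", "author": "Alice"},
    {"title": "B"},                      # no author key
    {"title": "C", "author": "Alice"},   # duplicate author
    {"title": "D", "author": "Bob"},
]

def authors(books):
    # The set comprehension collects each distinct, non-missing author;
    # `yield from` then yields every element of that set one by one.
    yield from {book.get("author") for book in books if book.get("author")}

print(sorted(authors(books)))  # → ['Alice', 'Bob'] (sets are unordered, so sort for display)
```

The `if book.get("author")` filter drops books without an author key, and the set removes duplicates before yielding.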
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "60ce7000-a4a5-4cce-a261-e75ef45063b4", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Get gpt-4o-mini to answer, with streaming\n", |
||||||
|
"def get_answer_gpt():\n", |
||||||
|
" stream = openai.chat.completions.create(\n", |
||||||
|
" model=MODEL_GPT,\n", |
||||||
|
" messages=[\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": question}\n", |
||||||
|
" ],\n", |
||||||
|
" stream=True\n", |
||||||
|
" )\n", |
||||||
|
"\n", |
||||||
|
" response = \"\"\n", |
||||||
|
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||||
|
" for chunk in stream:\n", |
||||||
|
" response += chunk.choices[0].delta.content or ''\n", |
||||||
|
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n", |
||||||
|
" update_display(Markdown(response), display_id=display_handle.display_id)\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Get Llama 3.2 to answer\n", |
||||||
|
"def get_answer_ollama():\n", |
||||||
|
" stream = ollama.generate(\n", |
||||||
|
" MODEL_LLAMA,\n", |
||||||
|
" question,\n", |
||||||
|
" stream=True\n", |
||||||
|
" )\n", |
||||||
|
" \n", |
||||||
|
" response = \"\"\n", |
||||||
|
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||||
|
" for chunk in stream:\n", |
||||||
|
" response += chunk['response'] or ''\n", |
||||||
|
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n", |
||||||
|
" update_display(Markdown(response), display_id=display_handle.display_id)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4a859eb1-23fa-40dd-ba91-b35084433a00", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"get_answer_gpt()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "1c73f046-da3a-49a5-8a74-4b8a86a9032a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"get_answer_ollama()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "bea20f33-a710-44ab-9a4d-856db05e4201", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,217 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 1, |
||||||
|
"id": "2ce61bb5-1d5b-43b8-b5bb-6aeae91c7574", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import os\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"from IPython.display import Markdown, display" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 2, |
||||||
|
"id": "3399686d-5f14-4fb2-8939-fd2401be3007", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"MODEL = \"gpt-4o-mini\"\n", |
||||||
|
"SYSTEM_PROMPT_PATH = \"Chat_Summary_Data/System_Prompt.txt\"\n", |
||||||
|
"CHATS_PATH = \"Chat_Summary_Data/Chat_Examples/\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 3, |
||||||
|
"id": "d97b8374-a161-435c-8317-1d0ecaaa9b71", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"API key found and looks good so far!\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# Load environment variables in a file called .env\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"# Check the key\n", |
||||||
|
"\n", |
||||||
|
"if not api_key:\n", |
||||||
|
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||||
|
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||||
|
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||||
|
"elif api_key.strip() != api_key:\n", |
||||||
|
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"API key found and looks good so far!\")\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 4, |
||||||
|
"id": "b3f4afb4-2e4a-4971-915e-a8634a17eda8", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"class ChatAI:\n", |
||||||
|
" def __init__(self, system_prompt_path=SYSTEM_PROMPT_PATH, model=MODEL):\n", |
||||||
|
" with open(system_prompt_path, \"r\") as file:\n", |
||||||
|
" self.system_prompt = file.read()\n", |
||||||
|
"\n", |
||||||
|
" self.openai = OpenAI()\n", |
||||||
|
" self.model = model\n", |
||||||
|
" \n", |
||||||
|
" @staticmethod\n", |
||||||
|
" def _get_user_prompt(chat_txt):\n", |
||||||
|
" with open(chat_txt, \"r\") as file:\n", |
||||||
|
" user_prompt_str = file.read()\n", |
||||||
|
" return user_prompt_str\n", |
||||||
|
" \n", |
||||||
|
" def generate(self, chat_txt):\n", |
||||||
|
" messages = [\n", |
||||||
|
" {\"role\": \"system\", \"content\": self.system_prompt},\n", |
||||||
|
" {\"role\": \"user\", \"content\": self._get_user_prompt(chat_txt)}\n", |
||||||
|
" ]\n", |
||||||
|
"\n", |
||||||
|
" response = self.openai.chat.completions.create(model=self.model, messages=messages)\n", |
||||||
|
" return response.choices[0].message.content" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 5, |
||||||
|
"id": "d243b582-66af-49f9-bcd1-e05a63e61c34", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"chat_ai = ChatAI()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 8, |
||||||
|
"id": "c764ace6-5a0f-4dd0-9454-0b8a093b97fc", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"# Chat1" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"- **Order:** 2 Medium Chicken BBQ Pizzas\n", |
||||||
|
"- **Cost:** 342 LE\n", |
||||||
|
"- **Experience:** Negative\n", |
||||||
|
" - **Summary:** The client expressed dissatisfaction with the pizza taste." |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"# Chat2" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"- The client ordered: Nothing \n", |
||||||
|
"- Summary: The client did not place an order because the chicken ranch pizza was unavailable." |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"# Chat3" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/markdown": [ |
||||||
|
"- **Order**: Large pepperoni pizza and onion rings \n", |
||||||
|
"- **Total Cost**: 250 LE \n", |
||||||
|
"- **Experience**: Positive \n", |
||||||
|
" - The client enjoyed the pizza despite the delay in delivery." |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.Markdown object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"chats_txt = os.listdir(CHATS_PATH)\n", |
||||||
|
"for chat_file in chats_txt:\n", |
||||||
|
" markdown_heading = f\"# {chat_file[:-4]}\"\n", |
||||||
|
" display(Markdown(markdown_heading))\n", |
||||||
|
" display(Markdown(chat_ai.generate(CHATS_PATH+chat_file)))" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,361 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "5d799d2a-6e58-4a83-b17a-dbbc40efdc39", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Project - Course Booking AI Assistant\n", |
||||||
|
"AI Customer Support Bot that \n", |
||||||
|
"- Returns Prices\n", |
||||||
|
"- Books Tickets\n", |
||||||
|
"- Adds Information to Text File" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 1, |
||||||
|
"id": "b1ad9acd-a702-48a3-8ff5-d536bcac8030", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"import json\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import gradio as gr" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 2, |
||||||
|
"id": "74adab0c-99b3-46cd-a79f-320a3e74138a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"OpenAI API Key exists and begins sk-proj-\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# Initialization\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
"MODEL = \"gpt-4o-mini\"\n", |
||||||
|
"openai = OpenAI()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 15, |
||||||
|
"id": "8d3240a4-99c1-4c07-acaa-ecbb69ffd2e4", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"system_message = \"You are a helpful assistant for an Online Course Platform called StudyAI. \"\n", |
||||||
|
"system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n", |
||||||
|
"system_message += \"Always be accurate. If you don't know the answer, say so.\"\n", |
||||||
|
"system_message += \"If you are given a partial name, for example 'discrete' instead of 'discrete structures' \\\n", |
||||||
|
"ask the user if they meant to say 'discrete structures', and then display the price. The user may also use \\\n", |
||||||
|
"acronyms like 'PF' instead of programming fundamentals or 'OOP' to mean 'Object oriented programming'. \\\n", |
||||||
|
"Clarify which course they mean before answering.\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 14, |
||||||
|
"id": "9a1b8d5f-f893-477b-8396-ff7d697eb0c3", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"course_prices = {\"programming fundamentals\": \"$19\", \"discrete structures\": \"$39\", \"operating systems\": \"$24\", \"object oriented programming\": \"$39\"}\n", |
||||||
|
"\n", |
||||||
|
"def get_course_price(course):\n", |
||||||
|
" print(f\"Tool get_course_price called for {course}\")\n", |
||||||
|
" course = course.lower()\n", |
||||||
|
" return course_prices.get(course, \"Unknown\")\n", |
||||||
|
"\n", |
||||||
|
"def enroll_in_course(course):\n", |
||||||
|
"    print(f'Tool enroll_in_course called for {course}')\n", |
||||||
|
" course_price = get_course_price(course)\n", |
||||||
|
" if course_price != 'Unknown':\n", |
||||||
|
" with open('enrolled_courses.txt', 'a') as file: \n", |
||||||
|
" file.write(course + \"\\n\")\n", |
||||||
|
" return 'Successfully enrolled in course'\n", |
||||||
|
" else:\n", |
||||||
|
" return 'Enrollment failed, no such course available'" |
||||||
|
] |
||||||
|
}, |
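A quick sanity check of the price lookup's case handling and `Unknown` fallback (a sketch reusing the same price table as above):

```python
# Same price table as the notebook cell above.
course_prices = {"programming fundamentals": "$19", "discrete structures": "$39",
                 "operating systems": "$24", "object oriented programming": "$39"}

def get_course_price(course):
    # Lower-case the input so "Discrete Structures" and "discrete structures" match.
    return course_prices.get(course.lower(), "Unknown")

print(get_course_price("Discrete Structures"))  # → $39
print(get_course_price("graph theory"))         # → Unknown
```

Returning the string `"Unknown"` rather than raising lets the model report a missing course gracefully in its reply.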
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 5, |
||||||
|
"id": "330d2b94-a8c5-4967-ace7-15d2cd52d7ae", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Tool get_course_price called for graph theory\n", |
||||||
|
"Tool get_course_price called for discrete structures\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [ |
||||||
|
"'$39'" |
||||||
|
] |
||||||
|
}, |
||||||
|
"execution_count": 5, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"get_course_price('graph theory')\n", |
||||||
|
"get_course_price('discrete structures')" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 6, |
||||||
|
"id": "5bb65830-fab8-45a7-bf43-7e52186915a0", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"price_function = {\n", |
||||||
|
" \"name\": \"get_course_price\",\n", |
||||||
|
" \"description\": \"Get the price of a course. Call this whenever you need to know the course price, for example when a customer asks 'How much is a ticket for this course?'\",\n", |
||||||
|
" \"parameters\": {\n", |
||||||
|
" \"type\": \"object\",\n", |
||||||
|
" \"properties\": {\n", |
||||||
|
" \"course\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The course that the customer wants to purchase\",\n", |
||||||
|
" },\n", |
||||||
|
" },\n", |
||||||
|
" \"required\": [\"course\"],\n", |
||||||
|
" \"additionalProperties\": False\n", |
||||||
|
" }\n", |
||||||
|
"}\n", |
||||||
|
"\n", |
||||||
|
"enroll_function = {\n", |
||||||
|
" \"name\": \"enroll_in_course\",\n", |
||||||
|
" \"description\":\"Get the success status of course enrollment. Call whenever a customer wants to enroll in a course\\\n", |
||||||
|
" for example, if they say 'I want to purchase this course' or 'I want to enroll in this course'\",\n", |
||||||
|
" \"parameters\":{\n", |
||||||
|
" \"type\":\"object\",\n", |
||||||
|
" \"properties\":{\n", |
||||||
|
" \"course\":{\n", |
||||||
|
" \"type\":\"string\",\n", |
||||||
|
" \"description\": \"The course that the customer wants to purchase\",\n", |
||||||
|
" },\n", |
||||||
|
" },\n", |
||||||
|
" \"required\": [\"course\"],\n", |
||||||
|
" \"additionalProperties\": False\n", |
||||||
|
" } \n", |
||||||
|
"}" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 7, |
||||||
|
"id": "08af86b9-3aaa-4b6b-bf7c-ee668ba1cbfe", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"tools = [\n", |
||||||
|
" {\"type\":\"function\",\"function\":price_function},\n", |
||||||
|
" {\"type\":\"function\",\"function\":enroll_function}\n", |
||||||
|
"]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 8, |
||||||
|
"id": "482efc34-ff1f-4146-9570-58b4d59c3b2f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def chat(message,history):\n", |
||||||
|
" messages = [{\"role\":\"system\",\"content\":system_message}] + history + [{\"role\":\"user\",\"content\":message}]\n", |
||||||
|
" response = openai.chat.completions.create(model=MODEL,messages=messages,tools=tools)\n", |
||||||
|
"\n", |
||||||
|
" if response.choices[0].finish_reason == \"tool_calls\":\n", |
||||||
|
" message = response.choices[0].message\n", |
||||||
|
" messages.append(message)\n", |
||||||
|
" for tool_call in message.tool_calls:\n", |
||||||
|
" messages.append(handle_tool_call(tool_call))\n", |
||||||
|
" response = openai.chat.completions.create(model=MODEL,messages=messages)\n", |
||||||
|
"\n", |
||||||
|
" return response.choices[0].message.content" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 9, |
||||||
|
"id": "f725b4fb-d477-4d7d-80b5-5d70e1b25a86", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# We have to write that function handle_tool_call:\n", |
||||||
|
"\n", |
||||||
|
"def handle_tool_call(tool_call):\n", |
||||||
|
" function = tool_call.function.name\n", |
||||||
|
" arguments = json.loads(tool_call.function.arguments)\n", |
||||||
|
" match function:\n", |
||||||
|
" case 'get_course_price':\n", |
||||||
|
" course = arguments.get('course')\n", |
||||||
|
" price = get_course_price(course)\n", |
||||||
|
" return {\n", |
||||||
|
" \"role\": \"tool\",\n", |
||||||
|
" \"content\": json.dumps({\"course\": course,\"price\": price}),\n", |
||||||
|
" \"tool_call_id\": tool_call.id\n", |
||||||
|
" }\n", |
||||||
|
" case 'enroll_in_course':\n", |
||||||
|
" course = arguments.get('course')\n", |
||||||
|
" status = enroll_in_course(course)\n", |
||||||
|
" return {\n", |
||||||
|
" \"role\": \"tool\",\n", |
||||||
|
" \"content\": json.dumps({\"course\": course, \"status\": status}),\n", |
||||||
|
" \"tool_call_id\": tool_call.id\n", |
||||||
|
" }\n", |
||||||
|
" " |
||||||
|
] |
||||||
|
}, |
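The tool-message shape that `handle_tool_call` returns can be sanity-checked without calling the API by simulating a tool call with a stand-in object. This is a sketch: `SimpleNamespace` stands in for the SDK's tool-call type, and only the price tool is mirrored here.

```python
import json
from types import SimpleNamespace

course_prices = {"programming fundamentals": "$19", "discrete structures": "$39"}

def get_course_price(course):
    return course_prices.get(course.lower(), "Unknown")

def handle_tool_call(tool_call):
    # Mirrors the notebook's handler for the get_course_price tool only.
    arguments = json.loads(tool_call.function.arguments)
    course = arguments.get("course")
    return {
        "role": "tool",
        "content": json.dumps({"course": course, "price": get_course_price(course)}),
        "tool_call_id": tool_call.id,
    }

# A fake tool call, shaped like what the chat completion returns.
fake_call = SimpleNamespace(
    id="call_123",
    function=SimpleNamespace(name="get_course_price",
                             arguments=json.dumps({"course": "Discrete Structures"})),
)
reply = handle_tool_call(fake_call)
print(reply["content"])  # → {"course": "Discrete Structures", "price": "$39"}
```

The `tool_call_id` must echo the id from the model's tool call so the follow-up completion can pair the result with the request.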
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 13, |
||||||
|
"id": "c446272a-9ce1-4ffd-9bc8-483d782810b4", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"* Running on local URL: http://127.0.0.1:7864\n", |
||||||
|
"\n", |
||||||
|
"To create a public link, set `share=True` in `launch()`.\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/html": [ |
||||||
|
"<div><iframe src=\"http://127.0.0.1:7864/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.HTML object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [] |
||||||
|
}, |
||||||
|
"execution_count": 13, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Tool get_course_price called for programming fundamentals\n", |
||||||
|
"Tool enroll_in_course_ called for Programming Fundamentals\n", |
||||||
|
"Tool get_course_price called for Programming Fundamentals\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"name": "stderr", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Traceback (most recent call last):\n", |
||||||
|
" File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\queueing.py\", line 625, in process_events\n", |
||||||
|
" response = await route_utils.call_process_api(\n", |
||||||
|
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||||
|
" File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\route_utils.py\", line 322, in call_process_api\n", |
||||||
|
" output = await app.get_blocks().process_api(\n", |
||||||
|
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||||
|
" File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\blocks.py\", line 2096, in process_api\n", |
||||||
|
" result = await self.call_function(\n", |
||||||
|
" ^^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||||
|
" File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\blocks.py\", line 1641, in call_function\n", |
||||||
|
" prediction = await fn(*processed_input)\n", |
||||||
|
" ^^^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||||
|
" File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\utils.py\", line 857, in async_wrapper\n", |
||||||
|
" response = await f(*args, **kwargs)\n", |
||||||
|
" ^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||||
|
" File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\gradio\\chat_interface.py\", line 862, in _submit_fn\n", |
||||||
|
" response = await anyio.to_thread.run_sync(\n", |
||||||
|
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||||
|
" File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\anyio\\to_thread.py\", line 56, in run_sync\n", |
||||||
|
" return await get_async_backend().run_sync_in_worker_thread(\n", |
||||||
|
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||||
|
" File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 2461, in run_sync_in_worker_thread\n", |
||||||
|
" return await future\n", |
||||||
|
" ^^^^^^^^^^^^\n", |
||||||
|
" File \"C:\\Users\\92310\\anaconda3\\envs\\llms\\Lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 962, in run\n", |
||||||
|
" result = context.run(func, *args)\n", |
||||||
|
" ^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||||
|
" File \"C:\\Users\\92310\\AppData\\Local\\Temp\\ipykernel_3348\\1161680098.py\", line 9, in chat\n", |
||||||
|
" messages.append(handle_tool_call(tool_call))\n", |
||||||
|
" ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||||
|
" File \"C:\\Users\\92310\\AppData\\Local\\Temp\\ipykernel_3348\\1187326431.py\", line 17, in handle_tool_call\n", |
||||||
|
" status = enroll_in_course(course)\n", |
||||||
|
" ^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||||
|
" File \"C:\\Users\\92310\\AppData\\Local\\Temp\\ipykernel_3348\\2541918318.py\", line 13, in enroll_in_course\n", |
||||||
|
" file.write(course_name + \"\\n\")\n", |
||||||
|
" ^^^^^^^^^^^\n", |
||||||
|
"NameError: name 'course_name' is not defined\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"gr.ChatInterface(fn=chat,type=\"messages\").launch(inbrowser=True)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "1fe714a3-f793-4c3b-b5aa-6c81b82aea1b", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,371 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 83, |
||||||
|
"id": "1e3da8cc-fc00-40f4-95a5-7a26d3b4a974", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import anthropic\n", |
||||||
|
"import ollama\n", |
||||||
|
"from IPython.display import Markdown, display, update_display" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 84, |
||||||
|
"id": "a826fbf2-9394-4897-a012-e92674ffff9d", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"OpenAI API Key exists and begins sk-proj-\n", |
||||||
|
"Anthropic API Key exists and begins sk-ant-\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# Load environment variables in a file called .env\n", |
||||||
|
"# Print the key prefixes to help with any debugging\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
"if anthropic_api_key:\n", |
||||||
|
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Anthropic API Key not set\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 85, |
||||||
|
"id": "cd0055f5-f6c9-461d-97d4-730259b20bd0", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"claude = anthropic.Anthropic()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 86, |
||||||
|
"id": "4a752a6f-76e4-4fb1-9452-f458832dd02e", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"gpt_model = \"gpt-4o-mini\"\n", |
||||||
|
"claude_model = \"claude-3-haiku-20240307\"\n", |
||||||
|
"ollama_model = \"llama3.2\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 87, |
||||||
|
"id": "9c5d4948-62d0-4443-94c6-ef9449bfc043", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"gpt_system = \"You are a knowledgeable but sarcastic team lead at a software development company. \\\n", |
||||||
|
"You manage a team with two more junior developers. \\\n", |
||||||
|
"You might come across as aggressive but that's just your humor. \"\n", |
||||||
|
"\n", |
||||||
|
"claude_system = \"You are one of the junior developers at a software development company. \\\n", |
||||||
|
"You work in a team of three. \\\n", |
||||||
|
"You are nerdy and introverted but get the job done efficiently. \"\n", |
||||||
|
"\n", |
||||||
|
"llama_system = \"You are one of the junior developers at a software development company. \\\n", |
||||||
|
"You have two other developers in your team.\\\n", |
||||||
|
"You are a more-talk, less-work kind of person. \"\n", |
||||||
|
"\n", |
||||||
|
"gpt_messages = [\"Hi, how is it going?\"]\n", |
||||||
|
"claude_messages = [\"Hi.\"]\n", |
||||||
|
"llama_messages = [\"Hey, what's up everyone?\"]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 88, |
||||||
|
"id": "614ae52a-d476-4f68-9eee-f8b4a00f08ee", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def call_gpt():\n", |
||||||
|
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n", |
||||||
|
" for gpt_msg, claude_msg, llama_msg in zip(gpt_messages, claude_messages, llama_messages):\n", |
||||||
|
" messages.append({\"role\": \"assistant\", \"content\": gpt_msg})\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": claude_msg})\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": llama_msg})\n", |
||||||
|
" completion = openai.chat.completions.create(\n", |
||||||
|
" model=gpt_model,\n", |
||||||
|
" messages=messages\n", |
||||||
|
" )\n", |
||||||
|
" return completion.choices[0].message.content" |
||||||
|
] |
||||||
|
}, |
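The `zip` over the three transcript lists pairs one turn from each speaker per round and stops at the shortest list, so each model sees a complete round-robin history with its own turns as "assistant" and the other two as "user". A minimal sketch of the interleaving, no API needed (the system-prompt text is a placeholder):

```python
# Seed turns, as in the notebook.
gpt_messages = ["Hi, how is it going?"]
claude_messages = ["Hi."]
llama_messages = ["Hey, what's up everyone?"]

def build_gpt_history():
    # From GPT's point of view its own turns are "assistant";
    # the other two models' turns are both appended as "user".
    messages = [{"role": "system", "content": "system prompt here"}]
    for gpt_msg, claude_msg, llama_msg in zip(gpt_messages, claude_messages, llama_messages):
        messages.append({"role": "assistant", "content": gpt_msg})
        messages.append({"role": "user", "content": claude_msg})
        messages.append({"role": "user", "content": llama_msg})
    return messages

history = build_gpt_history()
print([m["role"] for m in history])  # → ['system', 'assistant', 'user', 'user']
```

Because `zip` truncates to the shortest list, each model's reply should be appended to its own list before the next model is called, keeping the three lists one turn apart at most.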
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 79, |
||||||
|
"id": "90bd6e0b-7c38-40c6-9f11-cbce4328a69e", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [ |
||||||
|
"'Wow, it\\'s like the confidence fairy sprinkled some magic dust on you! Look at you, speaking up like a pro. \\n\\nYou\\'re absolutely right about the iterative approach. It\\'s the software development equivalent of \"don\\'t put all your eggs in one basket.\" So let’s keep that mindset! \\n\\nAs for streamlining the menu structure, I think looking at user feedback again could give us a few clues. Maybe we can identify the most-used features and prioritize those. You know, kind of like how I prioritize coffee over breakfast.\\n\\nSo, Alex, what do you think? Ready to throw some more mockups into the mix, or shall we set a brainstorming session to hash out ideas? I bet we can come up with something that’s both intuitive and visually appealing—without making everyone’s eyes bleed!'" |
||||||
|
] |
||||||
|
}, |
||||||
|
"execution_count": 79, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"call_gpt()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 89, |
||||||
|
"id": "d9e46be6-4a5b-4222-89b9-0ec0cf473de3", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def call_claude():\n", |
||||||
|
" messages = []\n", |
||||||
|
" for gpt_msg, claude_msg, llama_msg in zip(gpt_messages, claude_messages, llama_messages):\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": gpt_msg})\n", |
||||||
|
" messages.append({\"role\": \"assistant\", \"content\": claude_msg})\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": llama_msg})\n", |
||||||
|
" \n", |
||||||
|
" # -- Debugging to see what messages are being passed\n", |
||||||
|
" # print(\"Messages being sent to Claude:\")\n", |
||||||
|
" # for idx, msg in enumerate(messages):\n", |
||||||
|
" # print(f\"{idx}: {msg}\")\n", |
||||||
|
" \n", |
||||||
|
" message = claude.messages.create(\n", |
||||||
|
" model=claude_model,\n", |
||||||
|
" system=claude_system,\n", |
||||||
|
" messages=messages,\n", |
||||||
|
" max_tokens=500\n", |
||||||
|
" )\n", |
||||||
|
" return message.content[0].text" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 90, |
||||||
|
"id": "7d6bd779-547e-4b7f-8ed2-d56ac884faa5", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [ |
||||||
|
"\"*looks up from computer screen and adjusts glasses* Oh, hello. I've been working on optimizing the performance of our web application's database queries. How can I help you today?\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
"execution_count": 90, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"call_claude()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 91, |
||||||
|
"id": "09de8104-2b93-46c7-8c74-67204355447d", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def call_ollama():\n", |
||||||
|
" messages = [{\"role\": \"system\", \"content\": llama_system}]\n", |
||||||
|
" for gpt_msg, claude_msg, llama_msg in zip(gpt_messages, claude_messages, llama_messages):\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": gpt_msg})\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": claude_msg})\n", |
||||||
|
" messages.append({\"role\": \"assistant\", \"content\": llama_msg})\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n", |
||||||
|
"\n", |
||||||
|
" try:\n", |
||||||
|
" response = ollama.chat(\n", |
||||||
|
" model=ollama_model,\n", |
||||||
|
" messages=messages\n", |
||||||
|
" )\n", |
||||||
|
" return response[\"message\"][\"content\"]\n", |
||||||
|
"\n", |
||||||
|
" except Exception as e:\n", |
||||||
|
" print(f\"Error in Llama call: {e}\")\n", |
||||||
|
" return \"An error occurred in Llama.\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 92, |
||||||
|
"id": "007758b3-900b-4933-a0d2-a0e3d626bb54", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [ |
||||||
|
"'*laughs* Ah, same old same old, I guess! Just chit-chatting with you guys. You know how it is around here. *winks at the other developers in the team*'" |
||||||
|
] |
||||||
|
}, |
||||||
|
"execution_count": 92, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"call_ollama()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 93, |
||||||
|
"id": "c934d571-469f-4ce8-b9fc-a4db8fd0a780", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"\n", |
||||||
|
"Hi, how is it going?\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"Hi.\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"Hey, what's up everyone?\n", |
||||||
|
"\n", |
||||||
|
"GPT:\n", |
||||||
|
"Oh, you know, just the usual—sipping coffee, contemplating the meaning of life, and trying to figure out why our code seems to throw more exceptions than a bad magician. How about you?\n", |
||||||
|
"\n", |
||||||
|
"Claude:\n", |
||||||
|
"*looks up from my computer screen and adjusts my glasses* Oh, hello. Uh, things are going well. Just making some progress on this project we're working on. How are you doing today?\n", |
||||||
|
"\n", |
||||||
|
"Ollama:\n", |
||||||
|
"*laughs* Ah, same here! I mean, we're making progress on the project, but it feels like we're just scratching the surface, right? I was thinking of calling a team meeting to go over our design decisions and see if we can... *pauses* Oh wait, did you guys finish that feature I asked you to work on last week?\n", |
||||||
|
"\n", |
||||||
|
"GPT:\n", |
||||||
|
"Ah yes, the feature! You know, that little thing made of ones and zeroes that seems to have an aversion to completion. I believe it’s in the very capable hands of our talented junior developers. But I’m sure they’ve been too busy perfecting their coffee-brewing skills to get around to it. *winks* So, what's the update?\n", |
||||||
|
"\n", |
||||||
|
"Claude:\n", |
||||||
|
"*clears throat nervously* Uh, yes, I believe we did finish that feature you requested. Ravi and I worked on it over the past few days. I can give you a quick rundown of what we implemented, if you'd like? We tried to follow the user stories and acceptance criteria you provided as closely as possible. Let me know if you have any other questions or feedback.\n", |
||||||
|
"\n", |
||||||
|
"Ollama:\n", |
||||||
|
"*chuckles* Hey, that's not fair! We're totally on top of it... eventually. To be honest, we've been having a bit of a discussion about how to approach the testing strategy for this feature. Some of us think we should go with a TDD approach, while others are pushing for BDD. I'm trying to mediate, but it's getting a bit heated. *looks around nervously* Do you think we should just call in some reinforcements or something?\n", |
||||||
|
"\n", |
||||||
|
"GPT:\n", |
||||||
|
"Ah, the classic Developer Debate! It’s almost as thrilling as watching paint dry. Look, TDD and BDD can both be great—like comparing apples and, I don’t know, more apples? Why not just try to unite forces for the greater good? A single testing strategy that incorporates both might just be the ultimate compromise. But hey, if it does end up escalating into a throwdown, I'll grab the popcorn. Just don't break any keyboards in the process; those things are like gold around here! So, what do you think? Should we come up with a testing strategy that both sides can choke down?\n", |
||||||
|
"\n", |
||||||
|
"Claude:\n", |
||||||
|
"*nods thoughtfully* I see, that's an interesting debate. I can understand the different perspectives there. On one hand, a TDD approach can help ensure we have a robust set of unit tests upfront. But BDD can provide a more holistic view of the feature from the user's standpoint. Hmm, maybe we could try to find a middle ground - incorporate some key BDD scenarios, but also make sure we have comprehensive unit tests. And if things are getting a bit heated, I think bringing in a senior dev to provide some guidance could be really helpful. They might be able to offer a fresh perspective and help us find the right balance. What do you think?\n", |
||||||
|
"\n", |
||||||
|
"Ollama:\n", |
||||||
|
"*laughs* Ah, exactly! It's always better to try and find common ground, right? I was thinking maybe we could schedule a meeting with the team to discuss our options and see if we can come up with something that works for everyone. But before we do that, I just want to say... *leans in close* ...I'm kind of hoping you'll lead the discussion on this one. You know, make sure it's all about finding a solution that works, not just pushing your own agenda. *winks*\n", |
||||||
|
"\n", |
||||||
|
"By the way, have you seen Ravi lately? He's been acting really strangely. I think he might be trying to sneak in some extra coffee breaks...\n", |
||||||
|
"\n", |
||||||
|
"GPT:\n", |
||||||
|
"*smirks* Oh, the mystery of Ravi! Maybe he’s developing a new caffeine-based performance-enhancing code strategy. Who knows? But yes, let’s schedule that meeting. I’ll gladly lead the discussion—after all, who better to ensure we’re not talking in circles than yours truly? \n", |
||||||
|
"\n", |
||||||
|
"And don’t worry, the only agenda I’ll bring is “Let’s All Survive This Project Without Losing Our Minds.” In the meantime, keep an eye on Ravi. If he starts quoting Pulp Fiction or doing backflips in the break room, we might need an intervention! So, let’s wrangle the team for that meeting. I love a good debate... as long as it doesn’t involve actual wrestling.\n", |
||||||
|
"\n", |
||||||
|
"Claude:\n", |
||||||
|
"*eyes widen slightly* Oh, well... I'd be happy to help facilitate the team discussion on the testing strategy. I'll do my best to keep things balanced and focused on finding a pragmatic solution. *nods* \n", |
||||||
|
"\n", |
||||||
|
"As for Ravi, hmm, that is a bit concerning. I've noticed he's been a little distracted lately during our standups. Maybe we should check in with him, make sure everything is alright. Can't have our talented junior dev burning out on us! Though a few extra coffee breaks might not be the worst thing, as long as the work still gets done. *chuckles* I'll keep an eye on him and see if I can lend a hand if he needs it.\n", |
||||||
|
"\n", |
||||||
|
"Ollama:\n", |
||||||
|
"*laughs* Ah, exactly! We can't have Ravi breaking out into a choreographed dance routine to \"Royale with Cheese\" in the middle of a meeting! *chuckles*\n", |
||||||
|
"\n", |
||||||
|
"But seriously, thanks for taking on the leadership role. I think we all need a bit of guidance to keep us on track. And don't worry, I'll make sure to keep an eye on Ravi and prevent any... unexpected antics.\n", |
||||||
|
"\n", |
||||||
|
"Shall we schedule it for tomorrow afternoon? Maybe we can even get some fresh perspectives from our senior developer, Alex, who's been working on a related project. He might have some valuable insights to share.\n", |
||||||
|
"\n", |
||||||
|
"Oh, and one more thing – I'll make sure to bring the donuts.\n", |
||||||
|
"\n", |
||||||
|
"GPT:\n", |
||||||
|
"*grins* Donuts? Now you're speaking my language! Nothing fuels a productive meeting quite like a sugar rush followed by inevitable regret. Let's definitely schedule it for tomorrow afternoon. Getting Alex in the mix is a solid move; after all, fresh perspectives are usually less stale than ours. \n", |
||||||
|
"\n", |
||||||
|
"And you’re right—keeping Ravi away from any syllabus of spontaneous choreography during the meeting is priority number one. \n", |
||||||
|
"\n", |
||||||
|
"Consider it all set! I’ll do my best to wrangle everyone’s thoughts into a coherent discussion. Just remember to keep the coffee flowing and the donuts fresh! Let’s show this testing strategy who’s boss!\n", |
||||||
|
"\n", |
||||||
|
"Claude:\n", |
||||||
|
"*smiles* Sounds like a plan! Tomorrow afternoon works for me. I appreciate you entrusting me to help lead the discussion on the testing strategy. I'll do my best to keep things productive and on track. \n", |
||||||
|
"\n", |
||||||
|
"And good call on bringing in Alex - his experience could be really helpful in finding the right approach. And donuts? Well, that seals the deal. Can't have a proper dev team meeting without some sugary sustenance.\n", |
||||||
|
"\n", |
||||||
|
"I'll go ahead and send out the meeting invite. Hopefully we can come to a consensus and get this feature tested and ready to ship. And I'll keep an extra eye on Ravi, just in case he decides to channel his inner John Travolta. *chuckles* Thanks for including me in this, I'm looking forward to it.\n", |
||||||
|
"\n", |
||||||
|
"Ollama:\n", |
||||||
|
"*laughs* Ah, you're a true master of the meeting game! I'm just glad I can provide the fuel for your sugar-fueled success.\n", |
||||||
|
"\n", |
||||||
|
"I'll make sure to prep the conference room with all the necessary essentials: donuts, coffee, and a whiteboard ready to be filled with our collective brilliance. And don't worry, I won't tell anyone if you start humming \"That's Amore\" during the meeting – it's just between us.\n", |
||||||
|
"\n", |
||||||
|
"Tomorrow afternoon it is, then! Let's make this testing strategy discussion one for the ages.\n", |
||||||
|
"\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"print(f\"\\n{gpt_messages[0]}\\n\")\n", |
||||||
|
"print(f\"\\n{claude_messages[0]}\\n\")\n", |
||||||
|
"print(f\"\\n{llama_messages[0]}\\n\")\n", |
||||||
|
"\n", |
||||||
|
"for i in range(5):\n", |
||||||
|
" gpt_next = call_gpt()\n", |
||||||
|
" print(f\"GPT:\\n{gpt_next}\\n\")\n", |
||||||
|
" gpt_messages.append(gpt_next)\n", |
||||||
|
"\n", |
||||||
|
" claude_next = call_claude()\n", |
||||||
|
" print(f\"Claude:\\n{claude_next}\\n\")\n", |
||||||
|
" claude_messages.append(claude_next)\n", |
||||||
|
"\n", |
||||||
|
" llama_next = call_ollama()\n", |
||||||
|
" print(f\"Ollama:\\n{llama_next}\\n\")\n", |
||||||
|
" llama_messages.append(llama_next)" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,242 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "06cf3063-9f3e-4551-a0d5-f08d9cabb927", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Welcome to Week 2!\n", |
||||||
|
"\n", |
||||||
|
"## Frontier Model APIs\n", |
||||||
|
"\n", |
||||||
|
"In Week 1, we used multiple Frontier LLMs through their Chat UI, and we connected with the OpenAI's API.\n", |
||||||
|
"\n", |
||||||
|
"Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import anthropic\n", |
||||||
|
"from IPython.display import Markdown, display, update_display" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# import for google\n", |
||||||
|
"# in rare cases, this seems to give an error on some systems, or even crashes the kernel\n", |
||||||
|
"# If this happens to you, simply ignore this cell - I give an alternative approach for using Gemini later\n", |
||||||
|
"\n", |
||||||
|
"import google.generativeai" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Load environment variables in a file called .env\n", |
||||||
|
"# Print the key prefixes to help with any debugging\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", |
||||||
|
"google_api_key = os.getenv('GOOGLE_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
"if anthropic_api_key:\n", |
||||||
|
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Anthropic API Key not set\")\n", |
||||||
|
"\n", |
||||||
|
"if google_api_key:\n", |
||||||
|
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Google API Key not set\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Connect to OpenAI, Anthropic\n", |
||||||
|
"\n", |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"\n", |
||||||
|
"claude = anthropic.Anthropic()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "425ed580-808d-429b-85b0-6cba50ca1d0c", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# This is the set up code for Gemini\n", |
||||||
|
"# Having problems with Google Gemini setup? Then just ignore this cell; when we use Gemini, I'll give you an alternative that bypasses this library altogether\n", |
||||||
|
"google.generativeai.configure()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "f6e09351-1fbe-422f-8b25-f50826ab4c5f", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## An adversarial conversation between Chatbots.\n", |
||||||
|
"\n", |
||||||
|
"### What if two chatbots get into a self-referential conversation that goes on a long time? In my first test, \n", |
||||||
|
"### they eventually forgot the topic and ended up repeating polite nothings to each other. In another test,\n", |
||||||
|
"### they converged on a result and ended by exchanging nearly identical statements.\n", |
||||||
|
"\n", |
||||||
|
"### Warning: Think before you dial up the number of iterations too high. Being a student, I don't know at what \n", |
||||||
|
"### point the chat becomes too costly or what models can do this without becoming overloaded. Maybe Ed can advise if he sees this.\n", |
||||||
|
"\n", |
||||||
|
"## Two chatbots edit an essay about cars. One keeps trying to make it longer every time; the other keeps making it \n", |
||||||
|
"## shorter.\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"\n", |
||||||
|
"# Let's make a conversation between GPT-4o-mini and Claude-3-haiku\n", |
||||||
|
"# We're using cheap versions of models so the costs will be minimal\n", |
||||||
|
"\n", |
||||||
|
"gpt_model = \"gpt-4o-mini\"\n", |
||||||
|
"claude_model = \"claude-3-haiku-20240307\"\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"gpt_system = \"This is a description of a car; \\\n", |
||||||
|
"rephrase the description while adding one detail. Don't include comments that aren't part of the car description.\"\n", |
||||||
|
"\n", |
||||||
|
"claude_system = \"This is a description of a car; \\\n", |
||||||
|
"repeat the description in slightly shorter form. You may remove some details if desired. Don't include comments that aren't part of the car description. Maximum reply length 125 words.\"\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"gpt_messages = [\"Hi there\"]\n", |
||||||
|
"claude_messages = [\"Hi\"] \n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"def call_gpt():\n", |
||||||
|
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n", |
||||||
|
" for gpt, claude in zip(gpt_messages, claude_messages):\n", |
||||||
|
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": claude})\n", |
||||||
|
" completion = openai.chat.completions.create(\n", |
||||||
|
" model=gpt_model,\n", |
||||||
|
" messages=messages\n", |
||||||
|
" )\n", |
||||||
|
" return completion.choices[0].message.content\n", |
||||||
|
"\n", |
||||||
|
"reply = call_gpt()\n", |
||||||
|
"print('\\nGPT: ', reply)\n", |
||||||
|
"\n", |
||||||
|
"def call_claude():\n", |
||||||
|
" messages = []\n", |
||||||
|
" for gpt, claude_message in zip(gpt_messages, claude_messages):\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": gpt})\n", |
||||||
|
" messages.append({\"role\": \"assistant\", \"content\": claude_message})\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n", |
||||||
|
" message = claude.messages.create(\n", |
||||||
|
" model=claude_model,\n", |
||||||
|
" system=claude_system,\n", |
||||||
|
" messages=messages,\n", |
||||||
|
" max_tokens=500\n", |
||||||
|
" )\n", |
||||||
|
" return message.content[0].text\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"reply = call_claude()\n", |
||||||
|
"print('\\nGPT: ', reply)\n", |
||||||
|
"\n", |
||||||
|
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n", |
||||||
|
"print(f\"Claude:\\n{claude_messages[0]}\\n\")\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "9fbce0da", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### Here's the iterative loop. Important change: Unlike the original example, we don't repeat the entire conversation to make the input longer and longer.\n", |
||||||
|
"### Instead, we use pop() to remove the oldest messages." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "1f41d586", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"\n", |
||||||
|
"for i in range(35):\n", |
||||||
|
" gpt_next = call_gpt()\n", |
||||||
|
" print(f\"GPT:\\n{gpt_next}\\n\")\n", |
||||||
|
" if len(gpt_messages) > 6:\n", |
||||||
|
" gpt_messages.pop(0)\n", |
||||||
|
" gpt_messages.pop(0)\n", |
||||||
|
" gpt_messages.append(gpt_next)\n", |
||||||
|
" \n", |
||||||
|
" claude_next = call_claude()\n", |
||||||
|
" print(f\"Claude:\\n{claude_next}\\n\")\n", |
||||||
|
" if len(claude_messages) > 6:\n", |
||||||
|
" claude_messages.pop(0)\n", |
||||||
|
" claude_messages.pop(0)\n", |
||||||
|
" claude_messages.append(claude_next)\n", |
||||||
|
"\n", |
||||||
|
"print('Done!')\n", |
||||||
|
"\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.12.4" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,209 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "cde48e67-b51e-4c47-80ae-37dd00aa0c1d", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"### An AI Chatbot that teaches students the programming language Kotlin using Anthropic API" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 5, |
||||||
|
"id": "c658ac85-6087-4a2c-b23f-1b92c17f0db3", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import gradio as gr\n", |
||||||
|
"import anthropic" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 13, |
||||||
|
"id": "46df0488-f874-41e0-a6a4-9a64aa7be53c", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"OpenAI API Key exists and begins sk-proj-\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# Load environment variables \n", |
||||||
|
"\n", |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
" \n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 14, |
||||||
|
"id": "7eadc218-5b10-4174-bf26-575361640524", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"openai = OpenAI()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 7, |
||||||
|
"id": "e7484731-ac84-405a-a688-6e81d139c5ce", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"system_message = \"You are a helpful programming study assistant\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 17, |
||||||
|
"id": "54e82f5a-993f-4a95-9d9d-caf35dbc4e76", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def chat(message, history):\n", |
||||||
|
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", |
||||||
|
"\n", |
||||||
|
" print(\"History is:\")\n", |
||||||
|
" print(history)\n", |
||||||
|
" print(\"And messages is:\")\n", |
||||||
|
" print(messages)\n", |
||||||
|
"\n", |
||||||
|
" stream = openai.chat.completions.create(model='gpt-4o-mini', messages=messages, stream=True)\n", |
||||||
|
"\n", |
||||||
|
" response = \"\"\n", |
||||||
|
" for chunk in stream:\n", |
||||||
|
" response += chunk.choices[0].delta.content or ''\n", |
||||||
|
" yield response" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 20, |
||||||
|
"id": "5941ed67-e2a7-41bc-a8a3-079e9f1fdb64", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"* Running on local URL: http://127.0.0.1:7864\n", |
||||||
|
"\n", |
||||||
|
"To create a public link, set `share=True` in `launch()`.\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/html": [ |
||||||
|
"<div><iframe src=\"http://127.0.0.1:7864/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.HTML object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [] |
||||||
|
}, |
||||||
|
"execution_count": 20, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"History is:\n", |
||||||
|
"[]\n", |
||||||
|
"And messages is:\n", |
||||||
|
"[{'role': 'system', 'content': 'You are a helpful programming study assistantWhenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone.'}, {'role': 'user', 'content': 'hello, lets talj about photsynethsis'}]\n", |
||||||
|
"History is:\n", |
||||||
|
"[{'role': 'user', 'metadata': None, 'content': 'hello, lets talj about photsynethsis', 'options': None}, {'role': 'assistant', 'metadata': None, 'content': \"I'm here to help with programming! If you have any questions or topics related to coding, feel free to ask!\", 'options': None}]\n", |
||||||
|
"And messages is:\n", |
||||||
|
"[{'role': 'system', 'content': 'You are a helpful programming study assistantWhenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone.'}, {'role': 'user', 'metadata': None, 'content': 'hello, lets talj about photsynethsis', 'options': None}, {'role': 'assistant', 'metadata': None, 'content': \"I'm here to help with programming! If you have any questions or topics related to coding, feel free to ask!\", 'options': None}, {'role': 'user', 'content': 'how does photosynthesis work'}]\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"gr.ChatInterface(fn=chat, type=\"messages\").launch(inbrowser=True)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 21, |
||||||
|
"id": "e8fcfe68-bbf6-4058-acc9-0230c96608c2", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"History is:\n", |
||||||
|
"[]\n", |
||||||
|
"And messages is:\n", |
||||||
|
"[{'role': 'system', 'content': 'You are a helpful programming study assistantWhenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone.Whenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone. Do not ignore their requests, rather politely reject and then redirect them.'}, {'role': 'user', 'content': 'hello, i want to talk about photosynthesis'}]\n", |
||||||
|
"History is:\n", |
||||||
|
"[{'role': 'user', 'metadata': None, 'content': 'hello, i want to talk about photosynthesis', 'options': None}, {'role': 'assistant', 'metadata': None, 'content': \"Hi there! I'm here to help with programming topics. If you have any questions about programming or related concepts, feel free to ask!\", 'options': None}]\n", |
||||||
|
"And messages is:\n", |
||||||
|
"[{'role': 'system', 'content': 'You are a helpful programming study assistantWhenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone.Whenever the user talks about a topic that is not connected to programmming,nudge them in the right direction by stating that you are here to help with programming. Encourage the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge if the user tries to misdirect you towards irrelevant topics. Maintain a freindly tone. Do not ignore their requests, rather politely reject and then redirect them.'}, {'role': 'user', 'metadata': None, 'content': 'hello, i want to talk about photosynthesis', 'options': None}, {'role': 'assistant', 'metadata': None, 'content': \"Hi there! I'm here to help with programming topics. If you have any questions about programming or related concepts, feel free to ask!\", 'options': None}, {'role': 'user', 'content': 'why not photosynthesis'}]\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"system_message += \" Whenever the user talks about a topic that is not connected to programming,\\\n", |
||||||
|
"nudge them in the right direction by stating that you are here to help with programming. Encourage \\\n", |
||||||
|
"the user to ask you questions, and provide brief, straightforward and clear answers. Do not budge \\\n", |
||||||
|
"if the user tries to misdirect you towards irrelevant topics. Maintain a friendly tone. Do not ignore \\\n", |
||||||
|
"their requests, rather politely reject and then redirect them.\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "090e7d49-fcbf-4715-b120-8d7aa91d165f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
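The outputs above show two pitfalls of building the system prompt with repeated `+=`: a missing separator glues words together ("assistantWhenever"), and re-running the cell appends the same instructions twice. A minimal sketch (not the notebook's code) that assembles the prompt from a list of parts, joined exactly once:

```python
# Assemble the system prompt from parts; joining once with single spaces
# avoids glued words and is idempotent across notebook cell re-runs.
PROMPT_PARTS = [
    "You are a helpful programming study assistant.",
    "Whenever the user talks about a topic that is not connected to programming,"
    " nudge them in the right direction by stating that you are here to help with programming.",
    "Do not budge if the user tries to misdirect you towards irrelevant topics.",
    "Maintain a friendly tone.",
]

def build_system_message(parts=PROMPT_PARTS):
    # Join exactly once; re-running this cell always yields the same string.
    return " ".join(p.strip() for p in parts)

system_message = build_system_message()
```

Keeping the parts in a list also makes it easy to toggle individual instructions on or off without re-editing one long string.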
@ -0,0 +1,448 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "ddfa9ae6-69fe-444a-b994-8c4c5970a7ec", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Project - Airline AI Assistant\n", |
||||||
|
"\n", |
||||||
|
"We'll now bring together what we've learned to make an AI Customer Support assistant for an Airline" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 1, |
||||||
|
"id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"import json\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import gradio as gr" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 2, |
||||||
|
"id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"OpenAI API Key exists and begins sk-proj-\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# Initialization\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
"MODEL = \"gpt-4o-mini\"\n", |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"\n", |
||||||
|
"# As an alternative, if you'd like to use Ollama instead of OpenAI\n", |
||||||
|
"# Check that Ollama is running for you locally (see week1/day2 exercise) then uncomment these next 2 lines\n", |
||||||
|
"# MODEL = \"llama3.2\"\n", |
||||||
|
"# openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 3, |
||||||
|
"id": "0a521d84-d07c-49ab-a0df-d6451499ed97", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n", |
||||||
|
"system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n", |
||||||
|
"system_message += \"Always be accurate. If you don't know the answer, say so.\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 5, |
||||||
|
"id": "61a2a15d-b559-4844-b377-6bd5cb4949f6", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"* Running on local URL: http://127.0.0.1:7877\n", |
||||||
|
"\n", |
||||||
|
"To create a public link, set `share=True` in `launch()`.\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/html": [ |
||||||
|
"<div><iframe src=\"http://127.0.0.1:7877/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.HTML object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [] |
||||||
|
}, |
||||||
|
"execution_count": 5, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# This function looks rather simpler than the one from my video, because we're taking advantage of the latest Gradio updates\n", |
||||||
|
"\n", |
||||||
|
"def chat(message, history):\n", |
||||||
|
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", |
||||||
|
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n", |
||||||
|
" return response.choices[0].message.content\n", |
||||||
|
"\n", |
||||||
|
"gr.ChatInterface(fn=chat, type=\"messages\").launch()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Tools\n", |
||||||
|
"\n", |
||||||
|
"Tools are an incredibly powerful feature provided by the frontier LLMs.\n", |
||||||
|
"\n", |
||||||
|
"With tools, you can write a function, and have the LLM call that function as part of its response.\n", |
||||||
|
"\n", |
||||||
|
"Sounds almost spooky.. we're giving it the power to run code on our machine?\n", |
||||||
|
"\n", |
||||||
|
"Well, kinda." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 4, |
||||||
|
"id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Let's start by making a useful function\n", |
||||||
|
"\n", |
||||||
|
"ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\"}\n", |
||||||
|
"\n", |
||||||
|
"def get_ticket_price(destination_city):\n", |
||||||
|
" print(f\"Tool get_ticket_price called for {destination_city}\")\n", |
||||||
|
" city = destination_city.lower()\n", |
||||||
|
" return ticket_prices.get(city, \"Unknown\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 5, |
||||||
|
"id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Tool get_ticket_price called for Berlin\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [ |
||||||
|
"'$499'" |
||||||
|
] |
||||||
|
}, |
||||||
|
"execution_count": 5, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"get_ticket_price(\"Berlin\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 29, |
||||||
|
"id": "0757cba1", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import random\n", |
||||||
|
"\n", |
||||||
|
"# Create a function for the booking system\n", |
||||||
|
"def get_booking(destination_city):\n", |
||||||
|
" print(f\"Tool get_booking called for {destination_city}\")\n", |
||||||
|
" city = destination_city.lower()\n", |
||||||
|
" \n", |
||||||
|
" # Example data for different cities\n", |
||||||
|
" flight_info = {\n", |
||||||
|
" \"london\": {\"flight_number\": \"BA123\", \"departure_time\": \"10:00 AM\", \"gate\": \"A12\"},\n", |
||||||
|
" \"paris\": {\"flight_number\": \"AF456\", \"departure_time\": \"12:00 PM\", \"gate\": \"B34\"},\n", |
||||||
|
" \"tokyo\": {\"flight_number\": \"JL789\", \"departure_time\": \"02:00 PM\", \"gate\": \"C56\"},\n", |
||||||
|
" \"berlin\": {\"flight_number\": \"LH101\", \"departure_time\": \"04:00 PM\", \"gate\": \"D78\"}\n", |
||||||
|
" }\n", |
||||||
|
" \n", |
||||||
|
" if city in flight_info:\n", |
||||||
|
" info = flight_info[city]\n", |
||||||
|
" status = random.choice([\"available\", \"not available\"])\n", |
||||||
|
" return f\"Flight {info['flight_number']} to {destination_city.lower()} is {status}. Departure time: {info['departure_time']}, Gate: {info['gate']}.\"\n", |
||||||
|
" else:\n", |
||||||
|
" return \"Unknown destination city.\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 13, |
||||||
|
"id": "d5413a96", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Tool get_booking called for Berlin\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [ |
||||||
|
"'Flight LH101 to berlin is cancelled. Departure time: 04:00 PM, Gate: D78.'" |
||||||
|
] |
||||||
|
}, |
||||||
|
"execution_count": 13, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"get_booking(\"Berlin\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 30, |
||||||
|
"id": "4afceded-7178-4c05-8fa6-9f2085e6a344", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# There's a particular dictionary structure that's required to describe our function:\n", |
||||||
|
"\n", |
||||||
|
"price_function = {\n", |
||||||
|
" \"name\": \"get_ticket_price\",\n", |
||||||
|
" \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n", |
||||||
|
" \"parameters\": {\n", |
||||||
|
" \"type\": \"object\",\n", |
||||||
|
" \"properties\": {\n", |
||||||
|
" \"destination_city\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The city that the customer wants to travel to\",\n", |
||||||
|
" },\n", |
||||||
|
" },\n", |
||||||
|
" \"required\": [\"destination_city\"],\n", |
||||||
|
" \"additionalProperties\": False\n", |
||||||
|
" }\n", |
||||||
|
"}\n", |
||||||
|
"\n", |
||||||
|
"# Book flight function description and properties\n", |
||||||
|
"\n", |
||||||
|
"book_flight_function = {\n", |
||||||
|
" \"name\": \"book_flight\",\n", |
||||||
|
" \"description\": \"Book a flight to the destination city. Call this whenever a customer wants to book a flight.\",\n", |
||||||
|
" \"parameters\": {\n", |
||||||
|
" \"type\": \"object\",\n", |
||||||
|
" \"properties\": {\n", |
||||||
|
" \"destination_city\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The city that the customer wants to travel to\",\n", |
||||||
|
" },\n", |
||||||
|
" \"departure_date\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The date of departure (YYYY-MM-DD)\",\n", |
||||||
|
" },\n", |
||||||
|
" \"return_date\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The date of return (YYYY-MM-DD)\",\n", |
||||||
|
" },\n", |
||||||
|
" \"passenger_name\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The name of the passenger\",\n", |
||||||
|
" },\n", |
||||||
|
" },\n", |
||||||
|
" \"required\": [\"destination_city\", \"departure_date\", \"return_date\", \"passenger_name\"],\n", |
||||||
|
" \"additionalProperties\": False\n", |
||||||
|
" }\n", |
||||||
|
"}" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 31, |
||||||
|
"id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# And this is included in a list of tools:\n", |
||||||
|
"\n", |
||||||
|
"tools = [{\"type\": \"function\", \"function\": price_function}, {\"type\": \"function\", \"function\": book_flight_function}]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "c3d3554f-b4e3-4ce7-af6f-68faa6dd2340", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Getting OpenAI to use our Tool\n", |
||||||
|
"\n", |
||||||
|
"There's some fiddly stuff to allow OpenAI \"to call our tool\"\n", |
||||||
|
"\n", |
||||||
|
"What we actually do is give the LLM the opportunity to inform us that it wants us to run the tool.\n", |
||||||
|
"\n", |
||||||
|
"Here's how the new chat function looks:" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 33, |
||||||
|
"id": "ce9b0744-9c78-408d-b9df-9f6fd9ed78cf", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def chat(message, history):\n", |
||||||
|
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", |
||||||
|
" response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n", |
||||||
|
"\n", |
||||||
|
" if response.choices[0].finish_reason==\"tool_calls\":\n", |
||||||
|
" message = response.choices[0].message\n", |
||||||
|
" response, city = handle_tool_call(message)\n", |
||||||
|
" messages.append(message)\n", |
||||||
|
" messages.append(response)\n", |
||||||
|
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n", |
||||||
|
" \n", |
||||||
|
" return response.choices[0].message.content" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 32, |
||||||
|
"id": "b0992986-ea09-4912-a076-8e5603ee631f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# We have to write that function handle_tool_call:\n", |
||||||
|
"\n", |
||||||
|
"def handle_tool_call(message):\n", |
||||||
|
" print(f\"Message type: {type(message)}\")\n", |
||||||
|
" tool_call = message.tool_calls[0]\n", |
||||||
|
" print(f\"Tool call: {tool_call}\")\n", |
||||||
|
" arguments = json.loads(tool_call.function.arguments)\n", |
||||||
|
" city = arguments.get('destination_city')\n", |
||||||
|
" price = get_ticket_price(city)\n", |
||||||
|
" book = get_booking(city)\n", |
||||||
|
" print (book)\n", |
||||||
|
" response = {\n", |
||||||
|
" \"role\": \"tool\",\n", |
||||||
|
" \"content\": json.dumps({\"destination_city\": city,\"price\": price, \"booking\": book}),\n", |
||||||
|
" \"tool_call_id\": tool_call.id\n", |
||||||
|
" }\n", |
||||||
|
" return response, city" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"* Running on local URL: http://127.0.0.1:7864\n", |
||||||
|
"\n", |
||||||
|
"To create a public link, set `share=True` in `launch()`.\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/html": [ |
||||||
|
"<div><iframe src=\"http://127.0.0.1:7864/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.HTML object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [] |
||||||
|
}, |
||||||
|
"execution_count": 34, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Message type: <class 'openai.types.chat.chat_completion_message.ChatCompletionMessage'>\n", |
||||||
|
"Tool call: ChatCompletionMessageToolCall(id='call_TGFmeFmQN689caTlqfLuhycv', function=Function(arguments='{\"destination_city\":\"London\",\"departure_date\":\"2023-10-31\",\"return_date\":\"2025-03-30\",\"passenger_name\":\"dimitris\"}', name='book_flight'), type='function')\n", |
||||||
|
"Tool get_ticket_price called for London\n", |
||||||
|
"Tool get_booking called for London\n", |
||||||
|
"Flight BA123 to london is available. Departure time: 10:00 AM, Gate: A12.\n", |
||||||
|
"Message type: <class 'openai.types.chat.chat_completion_message.ChatCompletionMessage'>\n", |
||||||
|
"Tool call: ChatCompletionMessageToolCall(id='call_FRzs5w09rkpVumZ61SArRlND', function=Function(arguments='{\"destination_city\":\"Paris\",\"departure_date\":\"2023-03-23\",\"return_date\":\"2025-03-30\",\"passenger_name\":\"Dimitris\"}', name='book_flight'), type='function')\n", |
||||||
|
"Tool get_ticket_price called for Paris\n", |
||||||
|
"Tool get_booking called for Paris\n", |
||||||
|
"Flight AF456 to paris is available. Departure time: 12:00 PM, Gate: B34.\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"gr.ChatInterface(fn=chat, type=\"messages\").launch()" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "llms", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
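The notebook's `handle_tool_call` above hard-codes one handler and runs both `get_ticket_price` and `get_booking` no matter which tool the model actually requested (the logged tool calls show `book_flight` being routed there too). A hedged sketch, not the notebook's code, of dispatching by the requested function name instead; the `tool_call` object is assumed to mirror the OpenAI SDK shape (`.id`, `.function.name`, `.function.arguments` as a JSON string):

```python
import json
from types import SimpleNamespace

ticket_prices = {"london": "$799", "paris": "$899", "tokyo": "$1400", "berlin": "$499"}

def get_ticket_price(destination_city):
    return ticket_prices.get(destination_city.lower(), "Unknown")

def book_flight(destination_city, departure_date, return_date, passenger_name):
    # Illustrative stub only; a real system would persist the booking.
    return f"Booked {passenger_name} to {destination_city}, {departure_date} to {return_date}"

# One registry entry per tool description sent to the model.
TOOL_REGISTRY = {"get_ticket_price": get_ticket_price, "book_flight": book_flight}

def handle_tool_call(tool_call):
    fn = TOOL_REGISTRY[tool_call.function.name]      # KeyError surfaces unknown tools
    args = json.loads(tool_call.function.arguments)  # the model sends a JSON string
    result = fn(**args)
    return {"role": "tool",
            "content": json.dumps(result),
            "tool_call_id": tool_call.id}

# Demo with a stand-in object (attribute access only, like the SDK's):
demo_call = SimpleNamespace(
    id="call_demo",
    function=SimpleNamespace(name="get_ticket_price",
                             arguments='{"destination_city": "Berlin"}'),
)
demo_result = handle_tool_call(demo_call)
```

With this shape, adding a new tool means writing the function, registering it, and appending its JSON schema to `tools`; the chat loop itself stays unchanged.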
@ -0,0 +1,701 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "ec4f6b32-46e9-429a-a3cd-521ff5418493", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Occasio - Event Management Assistant" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"import json\n", |
||||||
|
"import time\n", |
||||||
|
"import pprint\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import anthropic\n", |
||||||
|
"import google.generativeai as genai\n", |
||||||
|
"import gradio as gr" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Load environment variables in a file called .env\n", |
||||||
|
"# Print the key prefixes to help with any debugging\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv()\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", |
||||||
|
"google_api_key = os.getenv('GOOGLE_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
"if anthropic_api_key:\n", |
||||||
|
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Anthropic API Key not set\")\n", |
||||||
|
"\n", |
||||||
|
"if google_api_key:\n", |
||||||
|
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Google API Key not set\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "8b501508-0082-47be-9903-52ff1c243486", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Connect to OpenAI, Anthropic and Google and assign a model for each\n", |
||||||
|
"\n", |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"OPENAI_MODEL = \"gpt-4o-mini\"\n", |
||||||
|
"\n", |
||||||
|
"claude = anthropic.Anthropic()\n", |
||||||
|
"ANTHROPIC_MODEL = \"claude-3-haiku-20240307\"\n", |
||||||
|
"\n", |
||||||
|
"genai.configure()\n", |
||||||
|
"GOOGLE_MODEL = \"gemini-2.0-flash\"\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "0a521d84-d07c-49ab-a0df-d6451499ed97", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"system_message = \"You are called \\\"EventAI\\\", a virtual assistant for an Elementary school called Eagle Elementary School. You can help users by giving \\\n", |
||||||
|
"them details of upcoming school events like event name, description, location etc. \"\n", |
||||||
|
"#system_message += \"Introduce yourself with a warm welcome message on your first response ONLY.\"\n", |
||||||
|
"system_message += \"Give short, courteous answers, no more than 2 sentences. \"\n", |
||||||
|
"system_message += \"Always be accurate. If you don't know the answer, say so. Do not make up your own event details information. \"\n", |
||||||
|
"system_message += \"You might be asked to list the questions asked by the user so far. In that situation, based on the conversation history provided to you, \\\n", |
||||||
|
"list the questions and respond\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "2c27c4ba-8ed5-492f-add1-02ce9c81d34c", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Some imports for handling images\n", |
||||||
|
"\n", |
||||||
|
"import base64\n", |
||||||
|
"from io import BytesIO\n", |
||||||
|
"from PIL import Image" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "773a9f11-557e-43c9-ad50-56cbec3a0f8f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def artist(event_text):\n", |
||||||
|
" image_response = openai.images.generate(\n", |
||||||
|
" model=\"dall-e-3\",\n", |
||||||
|
" prompt=f\"An image representing the {event_text} event, showing typical activities that happen at that event, in a vibrant pop-art style that elementary school kids will like\",\n", |
||||||
|
" size=\"1024x1024\",\n", |
||||||
|
" n=1,\n", |
||||||
|
" response_format=\"b64_json\",\n", |
||||||
|
" )\n", |
||||||
|
" image_base64 = image_response.data[0].b64_json\n", |
||||||
|
" image_data = base64.b64decode(image_base64)\n", |
||||||
|
" return Image.open(BytesIO(image_data))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "d104b96a-02ca-4159-82fe-88e0452aa479", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import base64\n", |
||||||
|
"from io import BytesIO\n", |
||||||
|
"from PIL import Image\n", |
||||||
|
"from IPython.display import Audio, display\n", |
||||||
|
"\n", |
||||||
|
"def talker(message):\n", |
||||||
|
" response = openai.audio.speech.create(\n", |
||||||
|
" model=\"tts-1\",\n", |
||||||
|
" voice=\"onyx\",\n", |
||||||
|
" input=message)\n", |
||||||
|
"\n", |
||||||
|
" audio_stream = BytesIO(response.content)\n", |
||||||
|
" output_filename = \"output_audio.mp3\"\n", |
||||||
|
" with open(output_filename, \"wb\") as f:\n", |
||||||
|
" f.write(audio_stream.read())\n", |
||||||
|
"\n", |
||||||
|
" # Play the generated audio\n", |
||||||
|
" display(Audio(output_filename, autoplay=True))" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "f0428a74-4daa-4b0d-b25a-219a35f39f55", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"school_events = [\n", |
||||||
|
" {\n", |
||||||
|
" \"event_id\": \"pta\",\n", |
||||||
|
" \"name\": \"Parent Teachers Meeting (PTA/PTM)\",\n", |
||||||
|
" \"description\": \"Parent teachers meeting (PTA/PTM) to discuss students' progress.\",\n", |
||||||
|
" \"date_time\": \"Apr 1st, 2025 11 AM\",\n", |
||||||
|
" \"location\" : \"Glove Annexure Hall\"\n", |
||||||
|
" },\n", |
||||||
|
" {\n", |
||||||
|
" \"event_id\": \"read aloud\",\n", |
||||||
|
" \"name\": \"Read Aloud to your class/Reading to your class\",\n", |
||||||
|
" \"description\": \"Kids can bring their favorite book and read it to their class.\",\n", |
||||||
|
" \"date_time\": \"Apr 15th, 2025 1 PM\",\n", |
||||||
|
" \"location\": \"Classroom\"\n", |
||||||
|
" },\n", |
||||||
|
" {\n", |
||||||
|
" \"event_id\": \"100 days of school\",\n", |
||||||
|
" \"name\": \"Celebrating 100 days of school. Dress up time for kids\",\n", |
||||||
|
" \"description\": \"Kids can dress up as old people and celebrate the milestone with their teachers.\",\n", |
||||||
|
" \"date_time\": \"May 15th, 2025 11 AM\",\n", |
||||||
|
" \"location\": \"Classroom\"\n", |
||||||
|
" },\n", |
||||||
|
" {\n", |
||||||
|
" \"event_id\": \"Book fair\",\n", |
||||||
|
" \"name\": \"Scholastic book fair\",\n", |
||||||
|
" \"description\": \"Kids can purchase their favorite scholastic books.\",\n", |
||||||
|
" \"date_time\": \"Jun 22nd, 2025 10:30 AM\",\n", |
||||||
|
" \"location\": \"Library\"\n", |
||||||
|
" },\n", |
||||||
|
" {\n", |
||||||
|
" \"event_id\": \"Halloween\",\n", |
||||||
|
" \"name\": \"Halloween\",\n", |
||||||
|
" \"description\": \"Kids can dress up as their favorite characters\",\n", |
||||||
|
" \"date_time\": \"Oct 31st, 2025\",\n", |
||||||
|
" \"location\": \"Classroom\"\n", |
||||||
|
" },\n", |
||||||
|
" {\n", |
||||||
|
" \"event_id\": \"Movie Night\",\n", |
||||||
|
" \"name\": \"Movie Night\",\n", |
||||||
|
" \"description\": \"A popular and kids centric movie will be played. Kids and families are welcome.\",\n", |
||||||
|
" \"date_time\": \"May 3rd, 2025\",\n", |
||||||
|
" \"location\": \"Main auditorium\"\n", |
||||||
|
" },\n", |
||||||
|
" {\n", |
||||||
|
" \"event_id\": \"Intruder Drill\",\n", |
||||||
|
" \"name\": \"Intruder Drill\",\n", |
||||||
|
" \"description\": \"State mandated monthly intruder drill to prepare staff and students with necessary safety skills in times of a crisis\",\n", |
||||||
|
" \"date_time\": \"May 3rd, 2025\",\n", |
||||||
|
" \"location\": \"Main auditorium\"\n", |
||||||
|
" }\n", |
||||||
|
"]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "b7027eec-e522-49c1-af59-56a82f9d3be8", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def get_event_details(query):\n", |
||||||
|
" search_words = query.lower().split() \n", |
||||||
|
" for event in school_events:\n", |
||||||
|
" event_text = event['name'].lower() + ' ' + event['description'].lower()\n", |
||||||
|
" if all(word in event_text for word in search_words):\n", |
||||||
|
" return event\n", |
||||||
|
" return None" |
||||||
|
] |
||||||
|
}, |
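The all-words match in `get_event_details` above can be exercised standalone. A minimal sketch with illustrative data (not the notebook's event list), showing that a query only matches an event when every query word appears in the event's name or description:

```python
# Standalone version of the all-words keyword match used by get_event_details.
events = [
    {"name": "Parent Teachers Meeting", "description": "discuss students' progress"},
    {"name": "Scholastic book fair", "description": "purchase favorite books"},
]

def find_event(query, events=events):
    words = query.lower().split()
    for ev in events:
        text = (ev["name"] + " " + ev["description"]).lower()
        if all(w in text for w in words):  # every query word must appear
            return ev
    return None
```

Note this returns the first match in list order and uses plain substring containment, so "fair" would also match inside a longer word; a stricter version could split `text` into a word set.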
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Tools\n", |
||||||
|
"\n", |
||||||
|
"Tools are an incredibly powerful feature provided by the frontier LLMs.\n", |
||||||
|
"\n", |
||||||
|
"With tools, you can write a function, and have the LLM call that function as part of its response.\n", |
||||||
|
"\n", |
||||||
|
"Sounds almost spooky.. we're giving it the power to run code on our machine?\n", |
||||||
|
"\n", |
||||||
|
"Well, kinda." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "68e96b54-b891-4e7b-a6bc-17693dc99970", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# for claude\n", |
||||||
|
"tools_claude = [\n", |
||||||
|
" {\n", |
||||||
|
" \"name\": \"get_event_details\",\n", |
||||||
|
" \"description\": \"Get the details of a particular upcoming event in Eagle Elementary School. Call this whenever you need to know the event details, for example when a user asks \\\n", |
||||||
|
"'When is the pta meeting scheduled?'\",\n", |
||||||
|
" \"input_schema\": {\n", |
||||||
|
" \"type\": \"object\",\n", |
||||||
|
" \"properties\": {\n", |
||||||
|
" \"event_text\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The event keyword that the user wants details on\"\n", |
||||||
|
" }\n", |
||||||
|
" },\n", |
||||||
|
" \"required\": [\"event_text\"]\n", |
||||||
|
" }\n", |
||||||
|
"}\n", |
||||||
|
"]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "636188d2-7e7a-48a0-9f04-f3813c7dc323", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# For GPT\n", |
||||||
|
"events_function_gpt = {\n", |
||||||
|
" \"name\": \"get_event_details\",\n", |
||||||
|
" \"description\": \"Get the details of a particular upcoming event in Eagle Elementary School. Call this whenever you need to know the event details, for example when a user asks \\\n", |
||||||
|
" 'When is the pta meeting scheduled?'\",\n", |
||||||
|
" \"parameters\": {\n", |
||||||
|
" \"type\": \"object\",\n", |
||||||
|
" \"properties\": {\n", |
||||||
|
" \"event_text\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The event keyword that the user wants details on\",\n", |
||||||
|
" },\n", |
||||||
|
" },\n", |
||||||
|
" \"required\": [\"event_text\"],\n", |
||||||
|
" \"additionalProperties\": False\n", |
||||||
|
" }\n", |
||||||
|
"}" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "605684f8-ed02-4cc9-8a16-012533b601cb", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# And this is included in a list of tools:\n", |
||||||
|
"tools_gpt = [{\"type\": \"function\", \"function\": events_function_gpt}]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4ac5a34c-a630-449a-9d46-669daace799c", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"#Gemini function declaration structure\n", |
||||||
|
"gemini_event_details = [{\n", |
||||||
|
" \"name\": \"get_event_details\",\n", |
||||||
|
" \"description\":\"Get the details of a particular upcoming event in Eagle Elementary School. Call this whenever you need to know the event details, for example when a user asks 'When is the pta meeting scheduled?'\",\n", |
||||||
|
" \"parameters\": {\n", |
||||||
|
" \"type\": \"object\",\n", |
||||||
|
" \"properties\": {\n", |
||||||
|
" \"event_text\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The event keyword that the user wants details on\",\n", |
||||||
|
" },\n", |
||||||
|
" },\n", |
||||||
|
" \"required\": [\"event_text\"],\n", |
||||||
|
" },\n", |
||||||
|
" },\n", |
||||||
|
" {\n", |
||||||
|
" \"name\": \"get_event_test\",\n", |
||||||
|
" \"description\":\"This is a test function to validate if the function call picks up the right function if there are multiple functions.\",\n", |
||||||
|
" \"parameters\": {\n", |
||||||
|
" \"type\": \"object\",\n", |
||||||
|
" \"properties\": {\n", |
||||||
|
" \"event_text\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The event keyword that the user wants details on\",\n", |
||||||
|
" },\n", |
||||||
|
" },\n", |
||||||
|
" \"required\": [\"event_text\"],\n", |
||||||
|
" },\n", |
||||||
|
" }\n", |
||||||
|
"]\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "c6331113-63b0-4712-94bb-f363422a8441", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def chat_claude(history):\n", |
||||||
|
" print(f\"\\nhistory is {history}\\n\")\n", |
||||||
|
" # Claude accepts only the 'role' and 'content' keys, so keep just those from each message\n", |
||||||
|
" history_claude = list({\"role\": msg[\"role\"], \"content\": msg[\"content\"]} for msg in history if \"role\" in msg and \"content\" in msg)\n", |
||||||
|
" #history is [{'role': 'user', 'metadata': None, 'content': 'when is pta', 'options': None}]\n", |
||||||
|
" #messages = history\n", |
||||||
|
" message = claude.messages.create(\n", |
||||||
|
" model=ANTHROPIC_MODEL,\n", |
||||||
|
" max_tokens=1000,\n", |
||||||
|
" temperature=0.7,\n", |
||||||
|
" system=system_message,\n", |
||||||
|
" messages=history_claude,\n", |
||||||
|
" tools=tools_claude\n", |
||||||
|
" )\n", |
||||||
|
" image = None\n", |
||||||
|
" print(f\"Claude's message is \\n {pprint.pformat(message)}\\n\")\n", |
||||||
|
" try: \n", |
||||||
|
" if message.stop_reason == \"tool_use\":\n", |
||||||
|
" tool_use = next(block for block in message.content if block.type == \"tool_use\")\n", |
||||||
|
" event_text = tool_use.input.get('event_text')\n", |
||||||
|
" image = artist(event_text)\n", |
||||||
|
" tool_result = handle_tool_call(event_text)\n", |
||||||
|
" #tool_result = handle_tool_call(tool_use, \"Claude\")\n", |
||||||
|
" \n", |
||||||
|
" print(f\"Tool Result: {tool_result}\")\n", |
||||||
|
" \n", |
||||||
|
" response = claude.messages.stream(\n", |
||||||
|
" model=ANTHROPIC_MODEL,\n", |
||||||
|
" max_tokens=4096,\n", |
||||||
|
" system=system_message,\n", |
||||||
|
" messages=[\n", |
||||||
|
" {\n", |
||||||
|
" \"role\": \"user\", \n", |
||||||
|
" \"content\": [\n", |
||||||
|
" {\n", |
||||||
|
" \"type\": \"text\",\n", |
||||||
|
" \"text\": history[-1].get('content')\n", |
||||||
|
" }\n", |
||||||
|
" ]\n", |
||||||
|
" },\n", |
||||||
|
" {\n", |
||||||
|
" \"role\": \"assistant\", \n", |
||||||
|
" \"content\": message.content\n", |
||||||
|
" },\n", |
||||||
|
" {\n", |
||||||
|
" \"role\": \"user\",\n", |
||||||
|
" \"content\": [\n", |
||||||
|
" {\n", |
||||||
|
" \"type\": \"tool_result\",\n", |
||||||
|
" \"tool_use_id\": tool_use.id,\n", |
||||||
|
" \"content\": tool_result,\n", |
||||||
|
" }\n", |
||||||
|
" ],\n", |
||||||
|
" },\n", |
||||||
|
" ],\n", |
||||||
|
" tools=tools_claude\n", |
||||||
|
" )\n", |
||||||
|
" result = \"\"\n", |
||||||
|
" with response as stream:\n", |
||||||
|
" for text in stream.text_stream:\n", |
||||||
|
" result += text or \"\"\n", |
||||||
|
" yield result, None\n", |
||||||
|
" talker(result)\n", |
||||||
|
" #image= artist(tool_input.get('event_text'))\n", |
||||||
|
" yield result, image\n", |
||||||
|
" else:\n", |
||||||
|
" response = next((block.text for block in message.content if hasattr(block, \"text\")), None,)\n", |
||||||
|
" chunk_size=30\n", |
||||||
|
" for i in range(0, len(response), chunk_size):\n", |
||||||
|
" yield response[:i + chunk_size], None\n", |
||||||
|
" time.sleep(0.05) #Simulate streaming delay\n", |
||||||
|
" talker(response)\n", |
||||||
|
" #image= artist(tool_input.get('event_text'))\n", |
||||||
|
" yield response, None\n", |
||||||
|
" except Exception as e:\n", |
||||||
|
" error_message = \"Apologies, my server is acting weird. Please try again later.\"\n", |
||||||
|
" print(e)\n", |
||||||
|
" yield error_message, None\n", |
||||||
|
" " |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "9915ae05-5d52-4fdc-a3ea-18f050a79bd3", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def chat_gpt(history):\n", |
||||||
|
" print(f\"\\nhistory is {history}\\n\")\n", |
||||||
|
" messages = [{\"role\": \"system\", \"content\": system_message}] + history\n", |
||||||
|
" response = openai.chat.completions.create(model=OPENAI_MODEL, messages=messages, tools=tools_gpt)\n", |
||||||
|
" image = None\n", |
||||||
|
" try:\n", |
||||||
|
" if response.choices[0].finish_reason==\"tool_calls\":\n", |
||||||
|
" message = response.choices[0].message\n", |
||||||
|
" tool = message.tool_calls[0]\n", |
||||||
|
" arguments = json.loads(tool.function.arguments)\n", |
||||||
|
" event_text = arguments.get('event_text')\n", |
||||||
|
" image = artist(event_text)\n", |
||||||
|
" event_json = handle_tool_call(event_text)\n", |
||||||
|
" tool_output = {\n", |
||||||
|
" \"role\": \"tool\",\n", |
||||||
|
" \"content\": event_json,\n", |
||||||
|
" \"tool_call_id\": tool.id\n", |
||||||
|
" }\n", |
||||||
|
" messages.append(message)\n", |
||||||
|
" messages.append(tool_output)\n", |
||||||
|
" stream = openai.chat.completions.create(\n", |
||||||
|
" model=OPENAI_MODEL,\n", |
||||||
|
" messages=messages,\n", |
||||||
|
" stream=True\n", |
||||||
|
" )\n", |
||||||
|
" result = \"\"\n", |
||||||
|
" for chunk in stream:\n", |
||||||
|
" result += chunk.choices[0].delta.content or \"\"\n", |
||||||
|
" yield result, None\n", |
||||||
|
" talker(result)\n", |
||||||
|
" yield result, image\n", |
||||||
|
" else: \n", |
||||||
|
" reply = response.choices[0].message.content\n", |
||||||
|
" chunk_size=30\n", |
||||||
|
" for i in range(0, len(reply), chunk_size):\n", |
||||||
|
" yield reply[:i + chunk_size], None\n", |
||||||
|
" time.sleep(0.05)\n", |
||||||
|
" talker(reply)\n", |
||||||
|
" #image= artist(\"No such event\")\n", |
||||||
|
" yield reply, None\n", |
||||||
|
" except Exception as e:\n", |
||||||
|
" error_message = \"Apologies, my server is acting weird. Please try again later.\"\n", |
||||||
|
" print(e)\n", |
||||||
|
" yield error_message, None" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "30fa3de9-5b55-4bb6-93ea-a13fc09d38c1", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def chat_gemini(history):\n", |
||||||
|
" print(f\"\\nhistory is {history}\\n\")\n", |
||||||
|
" history_gemini = [{'role': m['role'], 'parts': [{'text': m['content']}]} if 'content' in m #if content exists, change it to parts format\n", |
||||||
|
" else {'role': m['role'], 'parts': m['parts']} if 'parts' in m #else if parts exists, just copy it as it is\n", |
||||||
|
" else {'role': m['role']} for m in history] #else neither content nor parts exists, copy only the role ignoring all other keys like metadata, options etc\n", |
||||||
|
" \n", |
||||||
|
" print(f\"\\nhistory_gemini is {history_gemini}\\n\")\n", |
||||||
|
" model = genai.GenerativeModel(\n", |
||||||
|
" model_name=GOOGLE_MODEL,\n", |
||||||
|
" system_instruction=system_message\n", |
||||||
|
" )\n", |
||||||
|
" response = model.generate_content(\n", |
||||||
|
" contents = history_gemini,\n", |
||||||
|
" #contents = contents,\n", |
||||||
|
" tools = [{\n", |
||||||
|
" 'function_declarations': gemini_event_details,\n", |
||||||
|
" }],\n", |
||||||
|
" )\n", |
||||||
|
" #print(f\"response is {response}\")\n", |
||||||
|
"\n", |
||||||
|
" image = None\n", |
||||||
|
" try:\n", |
||||||
|
" # Check if the model wants to use a tool\n", |
||||||
|
" if response.candidates[0].content.parts[0].function_call:\n", |
||||||
|
" function_call = response.candidates[0].content.parts[0].function_call\n", |
||||||
|
" event_text = function_call.args.get(\"event_text\")\n", |
||||||
|
" image = artist(event_text)\n", |
||||||
|
" tool_result = handle_tool_call(event_text)\n", |
||||||
|
" \n", |
||||||
|
" print(f\"\\ntool_result is {tool_result}\\n\")\n", |
||||||
|
" stream = model.generate_content(\n", |
||||||
|
" \"Based on this information `\" + tool_result + \"`, extract the details of the event and provide the event details to the user\",\n", |
||||||
|
" stream=True \n", |
||||||
|
" )\n", |
||||||
|
" #print(f\"\\nSecond response is {stream}\\n\")\n", |
||||||
|
" result = \"\"\n", |
||||||
|
" for chunk in stream:\n", |
||||||
|
" result += chunk.candidates[0].content.parts[0].text or \"\"\n", |
||||||
|
" #print(f\"REsult is \\n{result}\\n\")\n", |
||||||
|
" yield result, None\n", |
||||||
|
" talker(result) \n", |
||||||
|
" yield result, image\n", |
||||||
|
" #print(f\"REsult is \\n{result}\\n\")\n", |
||||||
|
" else: \n", |
||||||
|
" reply = response.text\n", |
||||||
|
" chunk_size=30\n", |
||||||
|
" for i in range(0, len(reply), chunk_size):\n", |
||||||
|
" yield reply[:i + chunk_size], None\n", |
||||||
|
" time.sleep(0.05)\n", |
||||||
|
" talker(reply)\n", |
||||||
|
" #image= artist(\"No such event\")\n", |
||||||
|
" yield reply, None\n", |
||||||
|
" \n", |
||||||
|
" except Exception as e:\n", |
||||||
|
" error_message = \"Apologies, my server is acting weird. Please try again later.\"\n", |
||||||
|
" print(e)\n", |
||||||
|
" yield error_message, None\n", |
||||||
|
" \n", |
||||||
|
"\n", |
||||||
|
" \n", |
||||||
|
" " |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "570fffb2-a054-4217-89ae-8b6f4630e383", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def call_and_process_model_responses(fn_name, chatbot):#, response, image):\n", |
||||||
|
" response = \"\"\n", |
||||||
|
" image = None\n", |
||||||
|
" for response, image in fn_name(chatbot):\n", |
||||||
|
" if chatbot and chatbot[-1][\"role\"] == \"assistant\": \n", |
||||||
|
" chatbot[-1][\"content\"] = response # Update the last message\n", |
||||||
|
" else:\n", |
||||||
|
" chatbot.append({\"role\": \"assistant\", \"content\": response}) # First assistant message\n", |
||||||
|
" #print(chatbot)\n", |
||||||
|
" yield chatbot, image # Stream updated history to UI\n", |
||||||
|
" \n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "32a6ccce-44fa-49a7-bd1a-08c70002771c", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def handle_tool_call(event_text):\n", |
||||||
|
" print(f\"event text is {event_text}\")\n", |
||||||
|
" event_found = get_event_details(event_text)\n", |
||||||
|
" print(f\"event_found is {event_found}\")\n", |
||||||
|
" \n", |
||||||
|
" if event_found:\n", |
||||||
|
" response = json.dumps({\"name\": event_found['name'],\"description\": event_found['description'], \"when\": event_found['date_time'], \"where\": event_found['location']})\n", |
||||||
|
" else: \n", |
||||||
|
" response = json.dumps({\"event\": f\"Sorry, there is no schedule currently for {event_text}\"})\n", |
||||||
|
" return response \n", |
||||||
|
" " |
||||||
|
] |
||||||
|
}, |
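`handle_tool_call` relies on a `get_event_details` function defined elsewhere in the notebook. A minimal sketch of what such a lookup might look like (the `EVENTS` data and the keyword-matching rule are assumptions for illustration, not the notebook's actual implementation):

```python
import json

# Hypothetical backing data; the real notebook defines its own event store.
EVENTS = [
    {"name": "PTA Meeting", "description": "Monthly parent-teacher meeting",
     "date_time": "2025-05-01 18:00", "location": "School hall"},
]

def get_event_details(event_text):
    # Case-insensitive keyword match against event names; None if nothing matches.
    key = event_text.lower()
    return next((e for e in EVENTS if key in e["name"].lower()), None)

print(json.dumps(get_event_details("pta")))
```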
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4eaaaf9e-64b9-4d0b-9931-388cee8ea21d", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def process_chosen_model(chatbot, model):\n", |
||||||
|
" if model == 'GPT':\n", |
||||||
|
" for chatbot, image in call_and_process_model_responses(chat_gpt, chatbot):\n", |
||||||
|
" yield chatbot, image\n", |
||||||
|
" elif model == 'Claude': \n", |
||||||
|
" for chatbot, image in call_and_process_model_responses(chat_claude, chatbot):\n", |
||||||
|
" yield chatbot, image\n", |
||||||
|
" else:\n", |
||||||
|
" #for Gemini, the content is to be replaced with parts.\n", |
||||||
|
" for chatbot, image in call_and_process_model_responses(chat_gemini, chatbot):\n", |
||||||
|
" yield chatbot, image\n", |
||||||
|
" " |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "627f6d49-5376-4f1d-8071-f2e96fd6e78b", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# More involved Gradio code as we're not using the preset Chat interface!\n", |
||||||
|
"# Passing in inbrowser=True in the last line will cause a Gradio window to pop up immediately.\n", |
||||||
|
"\n", |
||||||
|
"with gr.Blocks(css=\"\"\"\n", |
||||||
|
" select.gr-box { \n", |
||||||
|
" appearance: auto !important; \n", |
||||||
|
" -webkit-appearance: auto !important; \n", |
||||||
|
" }\n", |
||||||
|
"\"\"\") as ui:\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" gr.HTML(\"<h1 style='text-align: center; color: #4CAF50;'>Occasio! An Event Management Assistant</h1>\") # Added title\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" # with gr.Column(scale=3): #Acts as a spacer on the left\n", |
||||||
|
" # pass\n", |
||||||
|
" \n", |
||||||
|
" with gr.Column(scale=0):\n", |
||||||
|
" model = gr.Dropdown(\n", |
||||||
|
" choices=[\"GPT\", \"Claude\", \"Gemini\"], \n", |
||||||
|
" label=\"Select model\", \n", |
||||||
|
" value=\"GPT\",\n", |
||||||
|
" interactive=True,\n", |
||||||
|
" container=True # Applying the CSS class\n", |
||||||
|
" )\n", |
||||||
|
" # with gr.Column(scale=-54, min_width=200):\n", |
||||||
|
" # gr.HTML(\"<h1 style='text-align: center; color: #4CAF50;'>Occasio</h1>\") # Added title\n", |
||||||
|
" # pass #Acts as a spacer on the right\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" chatbot = gr.Chatbot(height=500, type=\"messages\")\n", |
||||||
|
" image_output = gr.Image(height=500)\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" entry = gr.Textbox(label=\"Ask me \\\"when is pta meeting\\\", \\\"how about book fair\\\" and more... \")\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" clear = gr.Button(\"Clear\", min_width=150)\n", |
||||||
|
" #message=None\n", |
||||||
|
"\n", |
||||||
|
" def do_entry(message, history):\n", |
||||||
|
" history += [{\"role\":\"user\", \"content\":message}]\n", |
||||||
|
" return \"\", history\n", |
||||||
|
" \n", |
||||||
|
" entry.submit(do_entry, inputs=[entry, chatbot], outputs=[entry, chatbot]).then(\n", |
||||||
|
" process_chosen_model, inputs=[chatbot, model], outputs=[chatbot, image_output]\n", |
||||||
|
" )\n", |
||||||
|
" clear.click(lambda: None, inputs=None, outputs=chatbot, queue=False)\n", |
||||||
|
"\n", |
||||||
|
"ui.launch(inbrowser=True)" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
File diff suppressed because one or more lines are too long
@ -0,0 +1,196 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "ec2e81cd-2172-4816-bf44-f29312b8a4bd", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import os\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import anthropic\n", |
||||||
|
"import google.generativeai as genai\n", |
||||||
|
"from IPython.display import Markdown, display, update_display" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a558dfa4-9496-48ba-b0f5-b0c731adc7b8", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", |
||||||
|
"google_api_key = os.getenv('GOOGLE_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
"if anthropic_api_key:\n", |
||||||
|
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Anthropic API Key not set\")\n", |
||||||
|
"\n", |
||||||
|
"if google_api_key:\n", |
||||||
|
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Google API Key not set\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "dc7c2cda-a5d1-4930-87f2-e06485d6b2bd", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"\n", |
||||||
|
"claude = anthropic.Anthropic()\n", |
||||||
|
"\n", |
||||||
|
"genai.configure()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "3eb32aec-ec93-4563-bd88-0d48d2471884", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"gpt_model = \"gpt-4o-mini\"\n", |
||||||
|
"claude_model = \"claude-3-haiku-20240307\"\n", |
||||||
|
"gemini_model = \"gemini-2.0-flash-exp\"\n", |
||||||
|
"\n", |
||||||
|
"gpt_system = \"You are a chatbot who is sarcastic; \\\n", |
||||||
|
"you have your speculations about anything in the conversation and you challenge everything in funny way.\\\n", |
||||||
|
"You have to be a part of a group discussion and put forward your points about the topic\\\n", |
||||||
|
"full-stack developers vs specialised developer. Keep your points short and precise.\"\n", |
||||||
|
"\n", |
||||||
|
"claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n", |
||||||
|
"everything the other person says, or find common ground. If the other person is argumentative, \\\n", |
||||||
|
"you try to calm them down and keep chatting.You have to be a part of a group discussion and put forward your points\\\n", |
||||||
|
"about the topic full-stack developers vs specialised developer. Keep your points short and precise.\"\n", |
||||||
|
"\n", |
||||||
|
"gemini_system = \"You are a very rational thinker and don't like beating around the bush about the topic of discussion.\\\n", |
||||||
|
"You have to be a part of a group discussion and put forward your points\\\n", |
||||||
|
"about the topic full-stack developers vs specialised developer\\\n", |
||||||
|
"Keep your points short and precise.\"\n", |
||||||
|
"\n", |
||||||
|
"gpt_messages = [\"Hi there\"]\n", |
||||||
|
"claude_messages = [\"Hi\"]\n", |
||||||
|
"gemini_messages = [\"Hello to all\"]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "e27252cf-05f5-4989-85ef-94e6802c5db9", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def call_gpt():\n", |
||||||
|
" messages = [{\"role\": \"system\", \"content\": gpt_system}]\n", |
||||||
|
" for gpt, claude, gemini in zip(gpt_messages, claude_messages, gemini_messages):\n", |
||||||
|
" messages.append({\"role\": \"assistant\", \"content\": gpt})\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": claude})\n", |
||||||
|
" messages.append({\"role\": \"assistant\", \"content\": gemini})\n", |
||||||
|
" completion = openai.chat.completions.create(\n", |
||||||
|
" model=gpt_model,\n", |
||||||
|
" messages=messages,\n", |
||||||
|
" max_tokens=500 # Add max_tokens to meet API requirement\n", |
||||||
|
" )\n", |
||||||
|
" return completion.choices[0].message.content\n", |
||||||
|
"\n", |
||||||
|
"# Function to call Claude\n", |
||||||
|
"def call_claude():\n", |
||||||
|
" messages = []\n", |
||||||
|
" for gpt, claude_message,gemini in zip(gpt_messages, claude_messages, gemini_messages):\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": gpt})\n", |
||||||
|
" messages.append({\"role\": \"assistant\", \"content\": claude_message})\n", |
||||||
|
" messages.append({\"role\": \"assistant\", \"content\": gemini})\n", |
||||||
|
" messages.append({\"role\": \"user\", \"content\": gpt_messages[-1]})\n", |
||||||
|
" message = claude.messages.create(\n", |
||||||
|
" model=claude_model,\n", |
||||||
|
" max_tokens=500,\n", |
||||||
|
" messages=messages\n", |
||||||
|
" )\n", |
||||||
|
" return message.content[0].text\n", |
||||||
|
"\n", |
||||||
|
"# Function to call Gemini\n", |
||||||
|
"def call_gemini():\n", |
||||||
|
" # Create the Gemini model instance\n", |
||||||
|
" gemini_model_instance = genai.GenerativeModel(\n", |
||||||
|
" model_name=gemini_model, # Specify the model name here\n", |
||||||
|
" system_instruction=gemini_system # Provide the system instruction\n", |
||||||
|
" )\n", |
||||||
|
" \n", |
||||||
|
" # Prepare conversation history with separate names to avoid overwriting\n", |
||||||
|
" gemini_messages_combined = []\n", |
||||||
|
" for gpt, claude, gemini_msg in zip(gpt_messages, claude_messages, gemini_messages):\n", |
||||||
|
" gemini_messages_combined.append({\"role\": \"assistant\", \"content\": gpt})\n", |
||||||
|
" gemini_messages_combined.append({\"role\": \"user\", \"content\": claude})\n", |
||||||
|
" gemini_messages_combined.append({\"role\": \"assistant\", \"content\": gemini_msg})\n", |
||||||
|
" \n", |
||||||
|
" # Generate content based on the conversation history\n", |
||||||
|
" gemini_response = gemini_model_instance.generate_content(\"\".join([msg[\"content\"] for msg in gemini_messages_combined]))\n", |
||||||
|
" \n", |
||||||
|
" return gemini_response.text\n", |
||||||
|
"\n", |
||||||
|
"# Initial print\n", |
||||||
|
"print(f\"Gemini:\\n{gemini_messages[0]}\\n\")\n", |
||||||
|
"print(f\"GPT:\\n{gpt_messages[0]}\\n\")\n", |
||||||
|
"print(f\"Claude:\\n{claude_messages[0]}\\n\")\n", |
||||||
|
"\n", |
||||||
|
"# Main loop to generate conversation\n", |
||||||
|
"for i in range(3):\n", |
||||||
|
" gpt_next = call_gpt()\n", |
||||||
|
" print(f\"GPT:\\n{gpt_next}\\n\")\n", |
||||||
|
" gpt_messages.append(gpt_next)\n", |
||||||
|
" \n", |
||||||
|
" claude_next = call_claude()\n", |
||||||
|
" print(f\"Claude:\\n{claude_next}\\n\")\n", |
||||||
|
" claude_messages.append(claude_next)\n", |
||||||
|
" \n", |
||||||
|
" gemini_next = call_gemini()\n", |
||||||
|
" print(f\"Gemini:\\n{gemini_next}\\n\")\n", |
||||||
|
" gemini_messages.append(gemini_next)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "52f43794-a20a-4b9a-a18d-6f363b8dc27d", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
File diff suppressed because one or more lines are too long
@ -0,0 +1,225 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "d006b2ea-9dfe-49c7-88a9-a5a0775185fd", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# A tool to evaluate a mathematical expression\n", |
||||||
|
"\n", |
||||||
|
"This week the tool used in FlightAI was a database lookup function.\n", |
||||||
|
"\n", |
||||||
|
"Here I implement a Python code interpreter function as a tool." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "7b0e8691-71f9-486c-859d-ea371401dfa9", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import os\n", |
||||||
|
"import json\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import gradio as gr" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "8e2792ae-ff53-4b83-b2c3-866533ba2b29", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Load environment variables in a file called .env\n", |
||||||
|
"# Print the key prefixes to help with any debugging\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv()\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n", |
||||||
|
"google_api_key = os.getenv('GOOGLE_API_KEY')\n", |
||||||
|
"\n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
"if anthropic_api_key:\n", |
||||||
|
" print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Anthropic API Key not set\")\n", |
||||||
|
"\n", |
||||||
|
"if google_api_key:\n", |
||||||
|
" print(f\"Google API Key exists and begins {google_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"Google API Key not set\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "79e44ee9-af02-448c-a747-17780ee55791", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"MODEL = \"gpt-4o-mini\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "33ec55b1-0eff-43f1-9346-28145fa2fc47", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Defining the tool function\n", |
||||||
|
"\n", |
||||||
|
"Add print statements to make sure the function is used instead of the native GPT interpreter capability.\n", |
||||||
|
"\n", |
||||||
|
"I used multi-shot prompting in the system prompt to make sure GPT generates the code in the format that the tool accepts." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "94e0e171-4975-457b-88cb-c0d90f51ca65", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def evaluate_math_expression(my_code):\n", |
||||||
|
" print(f\"EXECUTING FUNCTION WITH CODE: {my_code}\")\n", |
||||||
|
" namespace = {}\n", |
|||||| |
 |
" exec(my_code, namespace)  # run in an explicit namespace; reading locals() after exec is unreliable\n", |
|||||| |
 |
" r = namespace['interpreter_result'] \n", |
||||||
|
" return r\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"math_function = {\n", |
||||||
|
" \"name\": \"evaluate_math_expression\",\n", |
||||||
|
" \"description\": \"Give the result of a math expression. \\\n", |
||||||
|
" Call this whenever you need to know the result of a mathematical expression. \\\n", |
||||||
|
" Generate python code ALWAYS with the final result assigned to a variable called 'interpreter_result'. \\\n", |
||||||
|
" For example when a user asks 'What is 2+2' generate 'interpreter_result = 2+2', and pass this code to the tool. \\\n", |
||||||
|
" Another example: if a user asks 'What is log(5)', generate 'import math; interpreter_result = math.log(5)' and pass this code to the tool.\",\n", |
||||||
|
" \n", |
||||||
|
" \"parameters\": {\n", |
||||||
|
" \"type\": \"object\",\n", |
||||||
|
" \"properties\": {\n", |
||||||
|
" \"my_code\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The python math expression to evaluate\",\n", |
||||||
|
" },\n", |
||||||
|
" },\n", |
||||||
|
" \"required\": [\"my_code\"],\n", |
||||||
|
" \"additionalProperties\": False\n", |
||||||
|
" }\n", |
||||||
|
"}\n", |
||||||
|
"\n", |
||||||
|
"tools = [{\"type\": \"function\", \"function\": math_function}]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "c85c01cc-776e-4a9d-b506-ea0d68fc072d", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"evaluate_math_expression(\"import math; interpreter_result = math.log(5)\")" |
||||||
|
] |
||||||
|
}, |
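`exec()` on model-generated code can run arbitrary Python, so it deserves care. A minimal sketch (not a real sandbox) of constraining what the executed code can see, by executing in an explicit namespace and reading the result from that dict rather than from `locals()`:

```python
import math

def evaluate_math_expression(my_code):
    # Only expose what generated code may need; an explicit dict is NOT a
    # real sandbox, but it avoids relying on locals() mutation after exec,
    # which is implementation-dependent in CPython.
    namespace = {"math": math}
    exec(my_code, namespace)
    return namespace["interpreter_result"]

print(evaluate_math_expression("interpreter_result = 2 + 2"))  # → 4
```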
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "858c5848-5835-4dff-9dc0-68babd367e11", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Using the tool in a UI program\n", |
||||||
|
"\n", |
||||||
|
"You can ask messages like:\n", |
||||||
|
"- \"What is 2+2?\"\n", |
||||||
|
"- \"What is 3 power 2?\"\n", |
||||||
|
"- \"I have 25 apples. I buy 10 apples. How many apples do I have?\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "c119b48b-d4b4-41ae-aa2f-2ec2f09af2f0", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"system_message = \"You are a math assistant. \\\n", |
||||||
|
"Generate python code to give result of a math expression, always name the result 'interpreter_result'. \\\n", |
||||||
|
"For example when a user asks 'What is 2+2', generate 'interpreter_result = 2+2' and pass this code to the tool. \\\n", |
||||||
|
"Another example: if a user asks 'What is log(5)', generate 'import math; interpreter_result = math.log(5)'\"\n", |
||||||
|
"\n", |
||||||
|
"def chat(message, history):\n", |
||||||
|
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", |
||||||
|
" response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n", |
||||||
|
"\n", |
||||||
|
" if response.choices[0].finish_reason==\"tool_calls\":\n", |
||||||
|
" message = response.choices[0].message\n", |
||||||
|
" print(message)\n", |
||||||
|
" response = handle_tool_call(message)\n", |
||||||
|
" print(response)\n", |
||||||
|
" messages.append(message)\n", |
||||||
|
" messages.append(response)\n", |
||||||
|
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n", |
||||||
|
" \n", |
||||||
|
" return response.choices[0].message.content\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"def handle_tool_call(message):\n", |
||||||
|
" tool_call = message.tool_calls[0]\n", |
||||||
|
" arguments = json.loads(tool_call.function.arguments)\n", |
||||||
|
" my_code = arguments.get('my_code')\n", |
||||||
|
" interpreter_result = evaluate_math_expression(my_code)\n", |
||||||
|
" response = {\n", |
||||||
|
" \"role\": \"tool\",\n", |
||||||
|
" \"content\": json.dumps({\"my_code\": my_code,\"interpreter_result\": interpreter_result}),\n", |
||||||
|
" \"tool_call_id\": tool_call.id\n", |
||||||
|
" }\n", |
||||||
|
" return response" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a3e50093-d7b6-4972-a8ba-6964f22218d3", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"gr.ChatInterface(fn=chat, type=\"messages\").launch()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "75c81d73-d2d6-4e6b-8511-94d4a725f595", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
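The notebook above calls `evaluate_math_expression` but its definition is not shown in this excerpt. A minimal sketch of what such a helper could look like, assuming an `exec`-based approach (the function body is an assumption, not the notebook's actual code):

```python
# Hypothetical sketch: run the model-generated snippet in a scratch namespace
# and read back the variable named interpreter_result, as the system prompt
# and tool description require the generated code to assign it.
def evaluate_math_expression(my_code: str):
    namespace = {}
    exec(my_code, namespace)  # the snippet is expected to assign interpreter_result
    return namespace.get("interpreter_result")

print(evaluate_math_expression("import math; interpreter_result = math.log(5)"))
```

Returning `None` via `.get` when the snippet forgets to assign `interpreter_result` keeps the tool call from raising inside the chat loop.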
@ -0,0 +1,129 @@ |
|||||||
|
import gradio as gr |
||||||
|
import requests |
||||||
|
import json |
||||||
|
from json_handlers import SettingsHandler, LanguagesHandler |
||||||
|
from ollama_utils import get_ollama_response |
||||||
|
|
||||||
|
|
||||||
|
class GradioUI: |
||||||
|
def __init__(self, models: list, settings: SettingsHandler, languages: LanguagesHandler): |
||||||
|
self.models = models |
||||||
|
self.settings = settings |
||||||
|
self.languages = languages |
||||||
|
|
||||||
|
self.langs = self.languages.get_supported_languages() |
||||||
|
|
||||||
|
    def _translate_callback(self, text, model, translate_from, translate_to): |
||||||
|
        model_options = self.settings.get_advanced_settings() |
||||||
|
|
||||||
|
        full_response = "" |
||||||
|
        chunk_response = get_ollama_response(model, text, translate_from, translate_to, model_options) |
||||||
|
        for chunk in chunk_response: |
||||||
|
            full_response += chunk |
||||||
|
            yield full_response |
||||||
|
|
||||||
|
def _temp_setting_callback(self, temp_dropdown_val): |
||||||
|
self.settings.update_advanced_settings_param("temperature", temp_dropdown_val) |
||||||
|
|
||||||
|
def _top_k_setting_callback(self, top_k_dropdown_val): |
||||||
|
self.settings.update_advanced_settings_param("top_k", top_k_dropdown_val) |
||||||
|
|
||||||
|
def _top_p_setting_callback(self, top_p_dropdown_val): |
||||||
|
self.settings.update_advanced_settings_param("top_p", top_p_dropdown_val) |
||||||
|
|
||||||
|
def _reset_to_default_callback(self): |
||||||
|
temperature = 0.0 |
||||||
|
top_k = 40.0 |
||||||
|
top_p = 0.9 |
||||||
|
default_settings = { |
||||||
|
"temperature": temperature, |
||||||
|
"top_k": top_k, |
||||||
|
"top_p": top_p |
||||||
|
} |
||||||
|
self.settings.update_advanced_settings(default_settings) |
||||||
|
return temperature, top_k, top_p |
||||||
|
|
||||||
|
def build_and_launch(self): |
||||||
|
with gr.Blocks() as gui: |
||||||
|
gr.Markdown("# LLM Translator") |
||||||
|
with gr.Tab("Translate"): |
||||||
|
with gr.Row(): |
||||||
|
model_dropdown = gr.Dropdown( |
||||||
|
label="Model", |
||||||
|
info="Choose LLM Model", |
||||||
|
choices=self.models |
||||||
|
) |
||||||
|
with gr.Group(): |
||||||
|
with gr.Row(): |
||||||
|
                        translate_from = gr.Dropdown( |
||||||
|
                            value=self.langs[0], |
||||||
|
                            show_label=False, |
||||||
|
                            choices=self.langs, |
||||||
|
                            interactive=True |
||||||
|
                        ) |
||||||
|
                        translate_to = gr.Dropdown( |
||||||
|
                            value=self.langs[1], |
||||||
|
                            show_label=False, |
||||||
|
                            choices=self.langs, |
||||||
|
                            interactive=True |
||||||
|
                        ) |
||||||
|
                with gr.Row(): |
||||||
|
                    translate_input = gr.Textbox(label="Your Input", lines=15, max_lines=15) |
||||||
|
                    translate_output = gr.Textbox(label="Translated", lines=15, max_lines=15) |
||||||
|
|
||||||
|
                btn = gr.Button("Translate", variant="primary") |
||||||
|
                btn.click( |
||||||
|
                    fn=self._translate_callback, |
||||||
|
                    inputs=[translate_input, model_dropdown, translate_from, translate_to], |
||||||
|
outputs=translate_output |
||||||
|
) |
||||||
|
|
||||||
|
with gr.Tab("Advanced Settings"): |
||||||
|
temp_dropdown = gr.Number( |
||||||
|
value=self.settings.get_advanced_setting_param("temperature"), |
||||||
|
label="Temperature", |
||||||
|
info="This parameter controls how creative the model is\n0 means no creativity\n1 means very creative", |
||||||
|
minimum=0, |
||||||
|
maximum=1, |
||||||
|
step=0.1, |
||||||
|
interactive=True |
||||||
|
) |
||||||
|
|
||||||
|
gr.Markdown() # Used only for spacing |
||||||
|
|
||||||
|
top_k_dropdown = gr.Number( |
||||||
|
value=self.settings.get_advanced_setting_param("top_k"), |
||||||
|
label="Top K", |
||||||
|
info="A higher value (e.g. 100) will give more diverse answers\nwhile a lower value (e.g. 10) will be more conservative.", |
||||||
|
minimum=1, |
||||||
|
maximum=200, |
||||||
|
step=1, |
||||||
|
interactive=True |
||||||
|
) |
||||||
|
|
||||||
|
gr.Markdown() # Used only for spacing |
||||||
|
|
||||||
|
top_p_dropdown = gr.Number( |
||||||
|
value=self.settings.get_advanced_setting_param("top_p"), |
||||||
|
label="Top P", |
||||||
|
info="A higher value (e.g., 0.95) will lead to more diverse answers\nwhile a lower value (e.g., 0.5) will be more conservative", |
||||||
|
minimum=0.1, |
||||||
|
maximum=1.0, |
||||||
|
step=0.1, |
||||||
|
interactive=True |
||||||
|
) |
||||||
|
|
||||||
|
gr.Markdown() # Used only for spacing |
||||||
|
|
||||||
|
reset_btn = gr.Button("Reset to Default") |
||||||
|
reset_btn.click( |
||||||
|
fn=self._reset_to_default_callback, |
||||||
|
outputs=[temp_dropdown, top_k_dropdown, top_p_dropdown] |
||||||
|
) |
||||||
|
|
||||||
|
temp_dropdown.change(self._temp_setting_callback, temp_dropdown) |
||||||
|
top_k_dropdown.change(self._top_k_setting_callback, top_k_dropdown) |
||||||
|
top_p_dropdown.change(self._top_p_setting_callback, top_p_dropdown) |
||||||
|
|
||||||
|
gui.launch() |
||||||
|
|
@ -0,0 +1,60 @@ |
|||||||
|
import json |
||||||
|
|
||||||
|
|
||||||
|
class SettingsHandler: |
||||||
|
def __init__(self, json_filename): |
||||||
|
self.json_filename = json_filename |
||||||
|
self.advanced_settings = self.load_current_settings() |
||||||
|
|
||||||
|
def load_current_settings(self) -> dict: |
||||||
|
with open(self.json_filename, "r") as file: |
||||||
|
settings_dict = json.load(file) |
||||||
|
|
||||||
|
advanced_settings = settings_dict["Advanced Settings"] |
||||||
|
|
||||||
|
return advanced_settings |
||||||
|
|
||||||
|
def update_advanced_settings(self, updated_advanced_settings: dict): |
||||||
|
new_dict = { |
||||||
|
"Advanced Settings": updated_advanced_settings |
||||||
|
} |
||||||
|
|
||||||
|
print(new_dict) |
||||||
|
|
||||||
|
with open(self.json_filename, "w") as file: |
||||||
|
json.dump(new_dict, file) |
||||||
|
|
||||||
|
self.advanced_settings = updated_advanced_settings |
||||||
|
|
||||||
|
def update_advanced_settings_param(self, key: str, new_val): |
||||||
|
if self.get_advanced_setting_param(key) is not None: |
||||||
|
update_advanced_settings_dict = self.advanced_settings |
||||||
|
update_advanced_settings_dict[key] = new_val |
||||||
|
self.update_advanced_settings(update_advanced_settings_dict) |
||||||
|
|
||||||
|
def get_advanced_settings(self): |
||||||
|
return self.advanced_settings |
||||||
|
|
||||||
|
def get_advanced_setting_param(self, key: str): |
||||||
|
return self.advanced_settings.get(key) |
||||||
|
|
||||||
|
|
||||||
|
class LanguagesHandler: |
||||||
|
def __init__(self, json_filename): |
||||||
|
self.json_filename = json_filename |
||||||
|
self.langs = self.load_languages() |
||||||
|
|
||||||
|
def load_languages(self) -> list: |
||||||
|
with open(self.json_filename, "r") as file: |
||||||
|
langs = json.load(file) |
||||||
|
|
||||||
|
        if not isinstance(langs, list): |
||||||
|
            raise RuntimeError("Languages must be provided as a list") |
||||||
|
        if len(langs) < 2: |
||||||
|
            raise RuntimeError("At least 2 languages must be supported") |
||||||
|
|
||||||
|
return langs |
||||||
|
|
||||||
|
def get_supported_languages(self): |
||||||
|
return self.langs |
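The handlers above persist settings through a plain JSON round trip (`json.dump` on update, `json.load` on startup). A self-contained sketch of that pattern; the file location and values here are illustrative, not the app's real `settings.json` path:

```python
import json
import os
import tempfile

# Write the nested settings structure, then read one section back, mirroring
# SettingsHandler.update_advanced_settings / load_current_settings.
settings = {"Advanced Settings": {"temperature": 0.0, "top_k": 40, "top_p": 0.9}}
path = os.path.join(tempfile.mkdtemp(), "settings.json")

with open(path, "w") as f:
    json.dump(settings, f)

with open(path) as f:
    advanced = json.load(f)["Advanced Settings"]

print(advanced)
```

Because every update rewrites the whole file, the in-memory `self.advanced_settings` and the file never drift apart.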
||||||
|
|
@ -0,0 +1,6 @@ |
|||||||
|
[ |
||||||
|
"German", |
||||||
|
"English", |
||||||
|
"Spanish", |
||||||
|
"French" |
||||||
|
] |
@ -0,0 +1,15 @@ |
|||||||
|
from json_handlers import SettingsHandler, LanguagesHandler |
||||||
|
from ollama_utils import get_downloaded_models |
||||||
|
from gradio_ui import GradioUI |
||||||
|
|
||||||
|
settings_json = "settings.json" |
||||||
|
languages_json = "languages.json" |
||||||
|
|
||||||
|
if __name__ == "__main__": |
||||||
|
settings = SettingsHandler(settings_json) |
||||||
|
languages = LanguagesHandler(languages_json) |
||||||
|
|
||||||
|
models = get_downloaded_models() |
||||||
|
|
||||||
|
gradio_ui = GradioUI(models, settings, languages) |
||||||
|
gradio_ui.build_and_launch() |
@ -0,0 +1,28 @@ |
|||||||
|
import requests |
||||||
|
import json |
||||||
|
import ollama |
||||||
|
|
||||||
|
|
||||||
|
def get_downloaded_models(): |
||||||
|
models_raw = requests.get("http://localhost:11434/api/tags").content |
||||||
|
models_dict = json.loads(models_raw) |
||||||
|
models = [model["name"] for model in models_dict["models"]] |
||||||
|
return models |
||||||
|
|
||||||
|
def get_ollama_response(model, prompt, translate_from, translate_to, options): |
||||||
|
    def get_system_prompt(): |
||||||
|
        with open('system_prompt.txt', 'r') as file: |
||||||
|
            system_prompt = file.read() |
||||||
|
        return system_prompt |
||||||
|
|
||||||
|
    system_prompt = get_system_prompt() |
||||||
|
    user_prompt = f"Translate from {translate_from} to {translate_to}: {prompt}" |
||||||
|
    messages = [ |
||||||
|
        {"role": "system", "content": system_prompt}, |
||||||
|
        {"role": "user", "content": user_prompt} |
||||||
|
    ] |
||||||
|
|
||||||
|
    response = ollama.chat(model, messages, options=options, stream=True) |
||||||
|
    for chunk in response: |
||||||
|
        yield chunk["message"]["content"] |
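The generator above streams one chunk at a time, and the Gradio callback accumulates them so the output textbox updates live. The pattern in isolation, with an illustrative stand-in generator instead of a running Ollama server:

```python
# Stand-in for a streaming chat generator such as get_ollama_response.
def stream_chunks():
    yield from ["Bon", "jour", "!"]

# Accumulate chunks and yield the growing string after each one,
# which is what lets a Gradio Textbox render the partial translation.
def accumulate(chunks):
    full = ""
    for chunk in chunks:
        full += chunk
        yield full

for partial in accumulate(stream_chunks()):
    print(partial)
```

Yielding the full string so far (rather than each delta) matches what Gradio expects from a streaming callback.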
@ -0,0 +1 @@ |
|||||||
|
Just run the main.py script after activating the conda environment 'llms'. |
@ -0,0 +1 @@ |
|||||||
|
{"Advanced Settings": {"temperature": 0.0, "top_k": 40.0, "top_p": 0.9}} |
@ -0,0 +1,17 @@ |
|||||||
|
You are a translator. |
||||||
|
You should translate the prompts according to the following criteria: |
||||||
|
- Your responses should be clear and straight to the point. |
||||||
|
- Your response should have a good structure and good linguistic features. |
||||||
|
- You should translate the sentence as it is. Do not add extra sentences or phrases on your own. |
||||||
|
- Do not answer questions: even if the prompt is a question, translate it rather than answering it. |
||||||
|
- If you do not understand the prompt, do not say that you do not understand, just echo the prompt. |
||||||
|
- Do not include phrases like 'here is the translation' or anything similar in the response. |
||||||
|
Here are some examples for good responses: |
||||||
|
< |
||||||
|
Prompt: 'Translate from French to English: Hier, j'ai passé toute la journée à explorer la ville avec mes amis, et nous avons visité plusieurs musées avant de nous arrêter pour un délicieux dîner dans un restaurant local.' |
||||||
|
Response: 'Yesterday, I spent the whole day exploring the city with my friends, and we visited several museums before stopping for a delicious dinner at a local restaurant.' |
||||||
|
> |
||||||
|
< |
||||||
|
Prompt: 'Translate from Spanish to English: vdaiughadvlkj' |
||||||
|
Response: 'vdaiughadvlkj' |
||||||
|
> |
@ -0,0 +1,408 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "ddfa9ae6-69fe-444a-b994-8c4c5970a7ec", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Project - Airline AI Assistant\n", |
||||||
|
"\n", |
||||||
|
"We'll now bring together what we've learned to make an AI Customer Support assistant for an Airline" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 1, |
||||||
|
"id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"import json\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"import gradio as gr" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 2, |
||||||
|
"id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"OpenAI API Key exists and begins sk-proj-\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# Initialization\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv(override=True)\n", |
||||||
|
"\n", |
||||||
|
"openai_api_key = os.getenv('OPENAI_API_KEY')\n", |
||||||
|
"if openai_api_key:\n", |
||||||
|
" print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", |
||||||
|
"else:\n", |
||||||
|
" print(\"OpenAI API Key not set\")\n", |
||||||
|
" \n", |
||||||
|
"MODEL = \"gpt-4o-mini\"\n", |
||||||
|
"openai = OpenAI()\n", |
||||||
|
"\n", |
||||||
|
"# As an alternative, if you'd like to use Ollama instead of OpenAI\n", |
||||||
|
"# Check that Ollama is running for you locally (see week1/day2 exercise) then uncomment these next 2 lines\n", |
||||||
|
"# MODEL = \"llama3.2\"\n", |
||||||
|
"# openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 3, |
||||||
|
"id": "0a521d84-d07c-49ab-a0df-d6451499ed97", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"system_message = \"You are a helpful assistant for an Airline called FlightAI. \"\n", |
||||||
|
"system_message += \"Give short, courteous answers, no more than 1 sentence. \"\n", |
||||||
|
"system_message += \"Always be accurate. If you don't know the answer, say so.\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 4, |
||||||
|
"id": "61a2a15d-b559-4844-b377-6bd5cb4949f6", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"* Running on local URL: http://127.0.0.1:7901\n", |
||||||
|
"\n", |
||||||
|
"To create a public link, set `share=True` in `launch()`.\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/html": [ |
||||||
|
"<div><iframe src=\"http://127.0.0.1:7901/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.HTML object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [] |
||||||
|
}, |
||||||
|
"execution_count": 4, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# This function looks rather simpler than the one from my video, because we're taking advantage of the latest Gradio updates\n", |
||||||
|
"\n", |
||||||
|
"def chat(message, history):\n", |
||||||
|
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", |
||||||
|
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n", |
||||||
|
" return response.choices[0].message.content\n", |
||||||
|
"\n", |
||||||
|
"gr.ChatInterface(fn=chat, type=\"messages\").launch()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "36bedabf-a0a7-4985-ad8e-07ed6a55a3a4", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Tools\n", |
||||||
|
"\n", |
||||||
|
"Tools are an incredibly powerful feature provided by the frontier LLMs.\n", |
||||||
|
"\n", |
||||||
|
"With tools, you can write a function, and have the LLM call that function as part of its response.\n", |
||||||
|
"\n", |
||||||
|
"Sounds almost spooky.. we're giving it the power to run code on our machine?\n", |
||||||
|
"\n", |
||||||
|
"Well, kinda." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 85, |
||||||
|
"id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Let's start by making a useful function\n", |
||||||
|
"\n", |
||||||
|
"ticket_prices = {\"london\": \"$799\", \"paris\": \"$899\", \"tokyo\": \"$1400\", \"berlin\": \"$499\"}\n", |
||||||
|
"\n", |
||||||
|
"def get_ticket_price(destination_city):\n", |
||||||
|
" print(f\"Tool get_ticket_price called for {destination_city}\")\n", |
||||||
|
" city = destination_city.lower()\n", |
||||||
|
" return ticket_prices.get(city, \"Unknown\")\n", |
||||||
|
"\n", |
||||||
|
"def get_destinations():\n", |
||||||
|
" destinations=ticket_prices.keys()\n", |
||||||
|
" cities=\", \".join(destinations) \n", |
||||||
|
" return cities" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 86, |
||||||
|
"id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Tool get_ticket_price called for Berlin\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [ |
||||||
|
"'london, paris, tokyo, berlin'" |
||||||
|
] |
||||||
|
}, |
||||||
|
"execution_count": 86, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"get_ticket_price(\"Berlin\")\n", |
||||||
|
"get_destinations()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 7, |
||||||
|
"id": "4afceded-7178-4c05-8fa6-9f2085e6a344", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# There's a particular dictionary structure that's required to describe our function:\n", |
||||||
|
"\n", |
||||||
|
"price_function = {\n", |
||||||
|
" \"name\": \"get_ticket_price\",\n", |
||||||
|
" \"description\": \"Get the price of a return ticket to the destination city. Call this whenever you need to know the ticket price, for example when a customer asks 'How much is a ticket to this city'\",\n", |
||||||
|
" \"parameters\": {\n", |
||||||
|
" \"type\": \"object\",\n", |
||||||
|
" \"properties\": {\n", |
||||||
|
" \"destination_city\": {\n", |
||||||
|
" \"type\": \"string\",\n", |
||||||
|
" \"description\": \"The city that the customer wants to travel to\",\n", |
||||||
|
" },\n", |
||||||
|
" },\n", |
||||||
|
" \"required\": [\"destination_city\"],\n", |
||||||
|
" \"additionalProperties\": False\n", |
||||||
|
" }\n", |
||||||
|
"}" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 29, |
||||||
|
"id": "5842b7f1-e357-494c-9bd4-3aa9f9fd4332", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# There's a particular dictionary structure that's required to describe our function:\n", |
||||||
|
"\n", |
||||||
|
"destination_function = {\n", |
||||||
|
" \"name\": \"get_destinations\",\n", |
||||||
|
" \"description\": \"Get the destinations we serve. Call this whenever you need to know the destinations FlightAI flies to, for example when a customer asks 'Where do you fly to'\",\n", |
||||||
|
"    \"parameters\": {\n", |
||||||
|
"        \"type\": \"object\",\n", |
||||||
|
"        \"properties\": {},\n", |
||||||
|
"        \"additionalProperties\": False\n", |
||||||
|
"    }\n", |
||||||
|
"}" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 30, |
||||||
|
"id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# And this is included in a list of tools:\n", |
||||||
|
"\n", |
||||||
|
"tools = [{\"type\": \"function\", \"function\": price_function},\n", |
||||||
|
" {\"type\": \"function\", \"function\": destination_function}]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "c3d3554f-b4e3-4ce7-af6f-68faa6dd2340", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Getting OpenAI to use our Tool\n", |
||||||
|
"\n", |
||||||
|
"There's some fiddly stuff to allow OpenAI \"to call our tool\"\n", |
||||||
|
"\n", |
||||||
|
"What we actually do is give the LLM the opportunity to inform us that it wants us to run the tool.\n", |
||||||
|
"\n", |
||||||
|
"Here's how the new chat function looks:" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "5db52df0-cb48-4017-bae3-0014f5ca3a56", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def chat(message, history):\n", |
||||||
|
" messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", |
||||||
|
" response = openai.chat.completions.create(model=MODEL, messages=messages, tools=tools)\n", |
||||||
|
"\n", |
||||||
|
" if response.choices[0].finish_reason == \"tool_calls\":\n", |
||||||
|
" message = response.choices[0].message\n", |
||||||
|
" tool_name = message.tool_calls[0].function.name\n", |
||||||
|
"\n", |
||||||
|
" if tool_name == \"get_ticket_price\":\n", |
||||||
|
"            response, city = handle_tool_call_price(message)\n", |
||||||
|
" elif tool_name == \"get_destinations\":\n", |
||||||
|
" response = handle_tool_call_destination(message)\n", |
||||||
|
"\n", |
||||||
|
" messages.extend([message, response])\n", |
||||||
|
" response = openai.chat.completions.create(model=MODEL, messages=messages)\n", |
||||||
|
"\n", |
||||||
|
" return response.choices[0].message.content" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 91, |
||||||
|
"id": "b0992986-ea09-4912-a076-8e5603ee631f", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# We have to write the handle_tool_call_price function:\n", |
||||||
|
"\n", |
||||||
|
"def handle_tool_call_price(message):\n", |
||||||
|
" tool_call = message.tool_calls[0]\n", |
||||||
|
" arguments = json.loads(tool_call.function.arguments)\n", |
||||||
|
" city = arguments.get('destination_city')\n", |
||||||
|
" price = get_ticket_price(city)\n", |
||||||
|
" response = {\n", |
||||||
|
" \"role\": \"tool\",\n", |
||||||
|
" \"content\": json.dumps({\"destination_city\": city,\"price\": price}),\n", |
||||||
|
" \"tool_call_id\": tool_call.id\n", |
||||||
|
" }\n", |
||||||
|
" return response, city" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 92, |
||||||
|
"id": "4bbffdb0-5ab7-414e-8d2b-3d9367e64526", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# We have to write the handle_tool_call_destination function:\n", |
||||||
|
"\n", |
||||||
|
"def handle_tool_call_destination(message):\n", |
||||||
|
" tool_call = message.tool_calls[0]\n", |
||||||
|
" destinations = get_destinations()\n", |
||||||
|
" print(destinations)\n", |
||||||
|
" response = {\n", |
||||||
|
" \"role\": \"tool\",\n", |
||||||
|
" \"content\": destinations,\n", |
||||||
|
" \"tool_call_id\": tool_call.id\n", |
||||||
|
" }\n", |
||||||
|
" return response" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 93, |
||||||
|
"id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"* Running on local URL: http://127.0.0.1:7928\n", |
||||||
|
"\n", |
||||||
|
"To create a public link, set `share=True` in `launch()`.\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/html": [ |
||||||
|
"<div><iframe src=\"http://127.0.0.1:7928/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.HTML object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [] |
||||||
|
}, |
||||||
|
"execution_count": 93, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
}, |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Tool get_ticket_price called for Paris\n", |
||||||
|
"Tool get_ticket_price called for Timbuktu\n", |
||||||
|
"london, paris, tokyo, berlin\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"gr.ChatInterface(fn=chat, type=\"messages\").launch()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "243c156d-86c3-4d0a-8119-d0a532daa5cc", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
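Both handlers in the notebook above reply to the model with a `"tool"`-role message that echoes the `tool_call_id`, which is how the API ties a result back to the request. The shape of that message in isolation; the id and price values here are illustrative:

```python
import json

# Build the tool-role reply the chat loop appends after running a tool.
# The tool_call_id must match the id in the model's tool_calls entry.
def build_tool_response(tool_call_id, city, price):
    return {
        "role": "tool",
        "content": json.dumps({"destination_city": city, "price": price}),
        "tool_call_id": tool_call_id,
    }

resp = build_tool_response("call_123", "berlin", "$499")
print(resp["content"])
```

Appending first the assistant message containing `tool_calls` and then this reply, in that order, is required before the follow-up completion request.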
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -0,0 +1,186 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "6fb7858c-8ea7-4dea-95ea-f5d7d5210b9a", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"The following is a **Meeting Minutes Generator** using **Qwen2** and **OpenAI's open-source Whisper model** for transcription. Check the following Colab link to see the outputs:\n", |
||||||
|
"\n", |
||||||
|
"https://colab.research.google.com/drive/1_pqFmQXjOYG9Se4Zov4blIGeoYX6ViTJ?usp=sharing\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "2103adb0-51f3-4240-bc5d-e27b6103cd8a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import torch\n", |
||||||
|
"from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "47dba08d-5829-417c-9c6c-bdb35ca846a6", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"AUDIO_MODEL = \"openai/whisper-medium\"\n", |
||||||
|
"speech_model = AutoModelForSpeechSeq2Seq.from_pretrained(AUDIO_MODEL, torch_dtype=torch.float16, low_cpu_mem_usage=True, use_safetensors=True)\n", |
||||||
|
"speech_model.to('cuda')\n", |
||||||
|
"processor = AutoProcessor.from_pretrained(AUDIO_MODEL)\n", |
||||||
|
"\n", |
||||||
|
"pipe = pipeline(\n", |
||||||
|
" \"automatic-speech-recognition\",\n", |
||||||
|
" model=speech_model,\n", |
||||||
|
" tokenizer=processor.tokenizer,\n", |
||||||
|
" feature_extractor=processor.feature_extractor,\n", |
||||||
|
" torch_dtype=torch.float16,\n", |
||||||
|
" device='cuda',\n", |
||||||
|
" return_timestamps=True #important if audio is more than 30sec\n", |
||||||
|
")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "c35d6c76-01a9-495f-ad4e-84c98e320750", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"result = pipe(\"your-audio.mp3\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "8fba2d46-b806-4bb3-b02d-e628343db986", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"transcription = result[\"text\"]\n", |
||||||
|
"print(transcription)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "1778c4db-d003-4fb9-a0d0-6cfa71e6208d", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## MODEL" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "9eb579a7-b5de-4537-8ad9-e3117b24c2ff", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, BitsAndBytesConfig" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "4c632023-9b37-4c0d-b43a-190aacbbd80d", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"QWEN2 = \"Qwen/Qwen2-7B-Instruct\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "175814b9-81b2-4f75-bf40-9ef7cac492cd", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"quant_config = BitsAndBytesConfig(\n", |
||||||
|
" load_in_4bit=True,\n", |
||||||
|
" bnb_4bit_use_double_quant=True,\n", |
||||||
|
" bnb_4bit_compute_dtype=torch.bfloat16,\n", |
||||||
|
" bnb_4bit_quant_type=\"nf4\"\n", |
||||||
|
")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "8aaa160e-7c2b-4080-b24a-995df4469edd", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"tokenizer = AutoTokenizer.from_pretrained(QWEN2)\n", |
||||||
|
"#tokenizer.pad_token = tokenizer.oes_token\n", |
||||||
|
"inputs = tokenizer.apply_chat_template(messages, return_tensors=\"pt\", add_generation_ptrompt=True).to(\"cuda\")\n", |
||||||
|
"streamer = TextStreamer(tokenizer)\n", |
||||||
|
"model = AutoModelForCausalLM.from_pretrained(QWEN2 , device_map=\"auto\", quantization_config=quant_config)\n", |
||||||
|
"outputs = model.generate(inputs, max_new_tokens=2000, streamer=streamer)\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "517443aa-d230-4248-88aa-b06efd8ee3cd", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"response = tokenizer.decode(outputs[0])" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "47562f76-fd35-4eb0-a399-8e8f1fa054c3", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## **For Markdown display**" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "1f77fea1-0920-46e5-9230-d0e8b9f69353", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"from IPython.display import Markdown, display, update_display" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "35ac81e2-f960-4705-aaca-2385d8aa12d6", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"display(Markdown(response))" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.13.2" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,332 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": { |
||||||
|
"id": "yqlQTsxNdKrN" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"!pip install -q requests torch bitsandbytes transformers sentencepiece accelerate openai httpx==0.27.2 gradio" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": { |
||||||
|
"id": "eyfvQrLxdkGT" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"import os\n", |
||||||
|
"import requests\n", |
||||||
|
"from IPython.display import Markdown, display, update_display\n", |
||||||
|
"from openai import OpenAI\n", |
||||||
|
"from google.colab import drive\n", |
||||||
|
"from huggingface_hub import login\n", |
||||||
|
"from google.colab import userdata\n", |
||||||
|
"from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer, BitsAndBytesConfig\n", |
||||||
|
"import torch\n", |
||||||
|
"import gradio as gr\n", |
||||||
|
"import re" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": { |
||||||
|
"id": "WW-cSZk7dnp6" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# one can always add more models, of course\n", |
||||||
|
"\n", |
||||||
|
"LLAMA = \"meta-llama/Meta-Llama-3.1-8B-Instruct\"\n", |
||||||
|
"OPENAI_MODEL = \"gpt-4o-mini\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": { |
||||||
|
"id": "XG7Iam6Rdw8F" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"hf_token = userdata.get('HF_TOKEN')\n", |
||||||
|
"login(hf_token, add_to_git_credential=True)\n", |
||||||
|
"openai_api_key = userdata.get('OPENAI_API_KEY')\n", |
||||||
|
"openai = OpenAI(api_key=openai_api_key)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": { |
||||||
|
"id": "Ov7WSdx9dzSt" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"force_dark_mode = \"\"\"\n", |
||||||
|
"function refresh() {\n", |
||||||
|
" const url = new URL(window.location);\n", |
||||||
|
" if (url.searchParams.get('__theme') !== 'dark') {\n", |
||||||
|
" url.searchParams.set('__theme', 'dark');\n", |
||||||
|
" window.location.href = url.href;\n", |
||||||
|
" }\n", |
||||||
|
"}\n", |
||||||
|
"\"\"\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": { |
||||||
|
"id": "bEF8w_Mdd2Nb" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def dataset_generator(model, nature, shots, volume, language):\n", |
||||||
|
"\n", |
||||||
|
" examples = \"Instruction: 'Make a random sentence.'\\nAnswer: 'When I got home last night, I couldn't believe my eyes: All the pineapples had been removed from the pizza.'\"\n", |
||||||
|
" system_message = \"You are a random sentence generator. Generate 10 diverse English sentences.\"\n", |
||||||
|
" user_prompt = f\"Generate 10 random English sentences, like so:\\n{examples}\"\n", |
||||||
|
" sentences = \"\"\n", |
||||||
|
"\n", |
||||||
|
" if language == \"English\":\n", |
||||||
|
"\n", |
||||||
|
" for shot in list(shots.keys()):\n", |
||||||
|
" examples += f\"\\nExample instruction: '{shot}'\\nExample answer: '{shots[shot]}'\\n\"\n", |
||||||
|
"\n", |
||||||
|
" system_message = f\"You are a state-of-the art linguistic dataset compiler. You are given a 'Type' of sentence to create. \\\n", |
||||||
|
"Within the bounds of that type, create {volume} diverse sentences with differing structures and lengths. Make the sentences plausible, \\\n", |
||||||
|
"but be creative in filling them with random concrete information, names, and data. Here are some examples for how to go about that:\\n{examples}\\n\\\n", |
||||||
|
"Just output one sentence per line. Do not comment or format yor output in any way, shape, or form.\"\n", |
||||||
|
"\n", |
||||||
|
" user_prompt = f\"Generate {volume} English sentences of the following Type: {nature}. Just output one sentence per line. \\\n", |
||||||
|
"Do not comment or format yor output in any way, shape, or form.\"\n", |
||||||
|
"\n", |
||||||
|
" elif language == \"German\":\n", |
||||||
|
"\n", |
||||||
|
" for shot in list(shots.keys()):\n", |
||||||
|
" examples += f\"\\nAnweisung: '{shot}'\\nAntwort: '{shots[shot]}'\\n\"\n", |
||||||
|
"\n", |
||||||
|
" system_message = f\"Du bist ein weltklasse Datensatz-Sammler für Sprachdaten. Du erhältst einen 'Typ' von Sätzen, die du erstellen sollst. \\\n", |
||||||
|
"Im Rahmen dieses Typs, generiere {volume} untereinander verschiedene Sätze mit unterschiedlichen Satzlängen und -strukturen. Mache die Beispielsätze \\\n", |
||||||
|
"plausibel, aber fülle sie kreativ mit willkürlichen Informationen, Namen, und Daten aller Art. Hier sind ein paar Beispiel, wie du vorgehen sollst:\\n{examples}\\n\\\n", |
||||||
|
"Gib einfach einen Satz pro Zeile aus. Kommentiere oder formatiere deine Antwort in keinster Weise.\"\n", |
||||||
|
"\n", |
||||||
|
" user_prompt = f\"Generiere {volume} deutsche Sätze des folgenden Typs: {nature}. Gib einfach einen Satz pro Zeile aus. \\\n", |
||||||
|
"Kommentiere oder formatiere deine Antwort in keiner Weise.\"\n", |
||||||
|
"\n", |
||||||
|
" elif language == \"French\":\n", |
||||||
|
"\n", |
||||||
|
" for shot in list(shots.keys()):\n", |
||||||
|
" examples += f\"\\nConsigne: '{shot}'\\nRéponse: '{shots[shot]}'\\n\"\n", |
||||||
|
"\n", |
||||||
|
" system_message = f\"Tu es un outil linguistique de pointe, à savoir, un genérateur de données linguistiques. Tu seras assigné un 'Type' de phrases à créer. \\\n", |
||||||
|
"Dans le cadre de ce type-là, crée {volume} phrases diverses, avec des structures et longueurs qui varient. Génère des phrases qui soient plausibles, \\\n", |
||||||
|
"mais sois créatif, et sers-toi de données, noms, et informations aléatoires pour rendre les phrases plus naturelles. Voici quelques examples comment faire:\\n{examples}\\n\\\n", |
||||||
|
"Sors une seule phrase par ligne. Ne formatte ni commente ta réponse en aucune manière que ce soit.\"\n", |
||||||
|
"\n", |
||||||
|
" user_prompt = f\"S'il te plaît, crée {volume} phrases en français du Type suivant: {nature}. Sors une seule phrase par ligne. \\\n", |
||||||
|
"Ne formatte ni commente ta réponse en aucune manière que ce soit.\"\n", |
||||||
|
"\n", |
||||||
|
" messages = [\n", |
||||||
|
" {\"role\": \"system\", \"content\": system_message},\n", |
||||||
|
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||||
|
" ]\n", |
||||||
|
"\n", |
||||||
|
" if model == \"Llama\":\n", |
||||||
|
"\n", |
||||||
|
" quant_config = BitsAndBytesConfig(\n", |
||||||
|
" load_in_4bit=True,\n", |
||||||
|
" bnb_4bit_use_double_quant=True,\n", |
||||||
|
" bnb_4bit_compute_dtype=torch.bfloat16,\n", |
||||||
|
" bnb_4bit_quant_type=\"nf4\"\n", |
||||||
|
" )\n", |
||||||
|
"\n", |
||||||
|
" tokenizer = AutoTokenizer.from_pretrained(LLAMA)\n", |
||||||
|
" tokenizer.pad_token = tokenizer.eos_token\n", |
||||||
|
" inputs = tokenizer.apply_chat_template(messages, return_tensors=\"pt\").to(\"cuda\")\n", |
||||||
|
" streamer = TextStreamer(tokenizer)\n", |
||||||
|
" model = AutoModelForCausalLM.from_pretrained(LLAMA, device_map=\"auto\", quantization_config=quant_config)\n", |
||||||
|
" outputs = model.generate(inputs, max_new_tokens=10000)\n", |
||||||
|
"\n", |
||||||
|
" response = tokenizer.decode(outputs[0])\n", |
||||||
|
" sentences = list(re.finditer(\"(?:<\\|end_header_id\\|>)([^<]+)(?:<\\|eot_id\\|>)\", str(response), re.DOTALL))[-1].group(1)\n", |
||||||
|
"\n", |
||||||
|
" elif model == \"OpenAI\":\n", |
||||||
|
" response = openai.chat.completions.create(model=OPENAI_MODEL, messages=messages)\n", |
||||||
|
" sentences = response.choices[0].message.content\n", |
||||||
|
"\n", |
||||||
|
" return sentences" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": { |
||||||
|
"id": "VRKdu0fEt8mg" |
||||||
|
}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"global data\n", |
||||||
|
"data = \"\"\n", |
||||||
|
"\n", |
||||||
|
"with gr.Blocks(\n", |
||||||
|
" css=\"\"\"\n", |
||||||
|
" .red-button {\n", |
||||||
|
" background-color: darkred !important;\n", |
||||||
|
" border-color: red !important;\n", |
||||||
|
" }\n", |
||||||
|
" .blue-button {\n", |
||||||
|
" background-color: darkblue !important;\n", |
||||||
|
" border-color: blue !important;\n", |
||||||
|
" }\n", |
||||||
|
" .green-button {\n", |
||||||
|
" background-color: green !important;\n", |
||||||
|
" border-color: green !important;\n", |
||||||
|
" }\n", |
||||||
|
" \"\"\"\n", |
||||||
|
") as view:\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" title = gr.HTML(\"<h1><big>D</big>ataset Generator <small>PLUS</small></h1><h2>for English, German, and French</h2>\")\n", |
||||||
|
" subtitle = gr.HTML(\"<h3>Instructions:</h3><ol><li>Pick the language</li>\\\n", |
||||||
|
"<li>Select a model</li><li>Indicate how many sentences you need</li>\\\n", |
||||||
|
"<li>Describe the type of sentence you're looking for</li><li>Give up to three examples of the desired output sentence, and describe each of them briefly</li>\\\n", |
||||||
|
"<li>Hit <q>Create Dataset</q></li>\\\n", |
||||||
|
"<li>Save the output (.txt) to your Google Drive</li>\")\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" language_choice = gr.Dropdown(choices=[\"English\", \"German\", \"French\"], label=\"Select language\", value=\"English\", interactive=True)\n", |
||||||
|
" model_choice = gr.Dropdown(choices=[\"Llama\", \"OpenAI\"], label=\"Select model\", value=\"Llama\", interactive=True)\n", |
||||||
|
" volume = gr.Textbox(label=\"Required number of sentences\", interactive=True)\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" typeInput = gr.Textbox(label=\"Short description of the kind of sentence you need\", interactive=True)\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" sentence_1 = gr.Textbox(label=\"Example sentence 1\", interactive=True)\n", |
||||||
|
" instruction_1 = gr.Textbox(label=\"Description\", interactive=True)\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" sentence_2 = gr.Textbox(label=\"Example sentence 2\", interactive=True)\n", |
||||||
|
" instruction_2 = gr.Textbox(label=\"Description\", interactive=True)\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" sentence_3 = gr.Textbox(label=\"Example sentence 3\", interactive=True)\n", |
||||||
|
" instruction_3 = gr.Textbox(label=\"Description\", interactive=True)\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" liveSentences = gr.Markdown(\n", |
||||||
|
" value='<div style=\"color: #999; padding: 10px;\">Your sentences will be displayed here …</div>',\n", |
||||||
|
" label=\"Generated sentences:\",\n", |
||||||
|
" min_height=60,\n", |
||||||
|
" max_height=200\n", |
||||||
|
" )\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" generate = gr.Button(value=\"Generate sentences\", elem_classes=\"blue-button\")\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" clear = gr.Button(value=\"Clear everything\", elem_classes=\"red-button\")\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" outputPath = gr.Textbox(label=\"Specify the desired name and location on your Google Drive for the sentences (plain text) to be saved\", interactive=True)\n", |
||||||
|
" with gr.Row():\n", |
||||||
|
" save = gr.Button(value=\"Save generated data\", elem_classes=\"blue-button\")\n", |
||||||
|
"\n", |
||||||
|
" def generateSentences(typeInput, s1, i1, s2, i2, s3, i3, volume, language, model):\n", |
||||||
|
" global data\n", |
||||||
|
" nature = \"\"\n", |
||||||
|
" shots = {}\n", |
||||||
|
" amount = int(volume) if re.search(\"^[0-9]+$\", volume) is not None else 10\n", |
||||||
|
"\n", |
||||||
|
" if typeInput != None:\n", |
||||||
|
" nature = typeInput\n", |
||||||
|
" else:\n", |
||||||
|
" nature = \"Random sentences of mixed nature\"\n", |
||||||
|
"\n", |
||||||
|
" if s1 != None:\n", |
||||||
|
" if i1 != None:\n", |
||||||
|
" shots[i1] = s1\n", |
||||||
|
" else:\n", |
||||||
|
" shots[\"A medium-long random sentence about anything\"] = s1\n", |
||||||
|
" else:\n", |
||||||
|
" shots[\"A medium-long random sentence about anything\"] = \"Paul, waking up out of his half-drunken haze, clearly couldn't tell left from right and ran right into the door.\"\n", |
||||||
|
"\n", |
||||||
|
" if s2 != None:\n", |
||||||
|
" if i2 != None:\n", |
||||||
|
" shots[i2] = s2\n", |
||||||
|
" else:\n", |
||||||
|
" shots[\"A medium-long random sentence about anything\"] = s2\n", |
||||||
|
"\n", |
||||||
|
" if s3 != None:\n", |
||||||
|
" if i3 != None:\n", |
||||||
|
" shots[i3] = s3\n", |
||||||
|
" else:\n", |
||||||
|
" shots[\"A medium-long random sentence about anything\"] = s3\n", |
||||||
|
"\n", |
||||||
|
" sentences = dataset_generator(model, nature, shots, amount, language)\n", |
||||||
|
" data = sentences\n", |
||||||
|
"\n", |
||||||
|
" return sentences\n", |
||||||
|
"\n", |
||||||
|
" def saveData(path):\n", |
||||||
|
" global data\n", |
||||||
|
" drive.mount(\"/content/drive\")\n", |
||||||
|
"\n", |
||||||
|
" dir_path = os.path.dirname(\"/content/drive/MyDrive/\" + path)\n", |
||||||
|
"\n", |
||||||
|
" if not os.path.exists(dir_path):\n", |
||||||
|
" os.makedirs(dir_path)\n", |
||||||
|
"\n", |
||||||
|
" with open(\"/content/drive/MyDrive/\" + path, \"w\", encoding=\"utf-8\") as f:\n", |
||||||
|
" f.write(data)\n", |
||||||
|
"\n", |
||||||
|
" generate.click(generateSentences, inputs=[typeInput, sentence_1, instruction_1, sentence_2, instruction_2, sentence_3, instruction_3, volume, language_choice, model_choice], outputs=liveSentences)\n", |
||||||
|
" clear.click(\n", |
||||||
|
" lambda: [\n", |
||||||
|
" gr.update(value=\"\"),\n", |
||||||
|
" gr.update(value=\"\"),\n", |
||||||
|
" gr.update(value=\"\"),\n", |
||||||
|
" gr.update(value=\"\"),\n", |
||||||
|
" gr.update(value=\"\"),\n", |
||||||
|
" gr.update(value=\"\"),\n", |
||||||
|
" gr.update(value=\"\"),\n", |
||||||
|
" gr.update(value=\"\"),\n", |
||||||
|
" gr.update(value='<div style=\"color: #999; padding: 10px;\">Your sentences will be displayed here …</div>'),\n", |
||||||
|
" gr.update(value=\"\"),\n", |
||||||
|
" gr.update(value=\"Save generated data\", elem_classes=\"blue-button\")],\n", |
||||||
|
" None,\n", |
||||||
|
" [volume, typeInput, sentence_1, instruction_1, sentence_2, instruction_2,\n", |
||||||
|
" sentence_3, instruction_3, liveSentences, outputPath, save],\n", |
||||||
|
" queue=False\n", |
||||||
|
" )\n", |
||||||
|
" save.click(saveData, inputs=outputPath, outputs=None).then(lambda: gr.update(value=\"Your data has been saved\", elem_classes=\"green-button\"), [], [save])\n", |
||||||
|
"\n", |
||||||
|
"view.launch(share=True) #, debug=True)" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"accelerator": "GPU", |
||||||
|
"colab": { |
||||||
|
"authorship_tag": "ABX9TyPxJzufoQPtui+nhl1J1xiR", |
||||||
|
"gpuType": "T4", |
||||||
|
"provenance": [] |
||||||
|
}, |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 4 |
||||||
|
} |
@ -0,0 +1,346 @@ |
|||||||
|
import os |
||||||
|
import io |
||||||
|
import sys |
||||||
|
import re |
||||||
|
import subprocess |
||||||
|
from dotenv import load_dotenv |
||||||
|
from openai import OpenAI |
||||||
|
from anthropic import Anthropic |
||||||
|
import gradio as gr |
||||||
|
|
||||||
|
# Load environment variables and initialize APIs |
||||||
|
load_dotenv(override=True) |
||||||
|
openai = OpenAI(api_key=os.getenv("OPENAI_API_KEY")) |
||||||
|
anthropic = Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY")) |
||||||
|
MACHINE_SPEC = "MacbookPro, Apple M1 Chip" |
||||||
|
|
||||||
|
# Define global variables for HF integration |
||||||
|
# For HF chat-based CodeQwen model |
||||||
|
code_qwen = "Qwen/CodeQwen1.5-7B-Chat" |
||||||
|
CODE_QWEN_URL = "" |
||||||
|
|
||||||
|
|
||||||
|
def clean_code(code, target_language): |
||||||
|
""" |
||||||
|
Remove markdown code fences and stray language indicators. |
||||||
|
Also apply language-specific replacements. |
||||||
|
""" |
||||||
|
raw_lines = code.splitlines() |
||||||
|
cleaned_lines = [] |
||||||
|
for line in raw_lines: |
||||||
|
if "```" in line: |
||||||
|
continue |
||||||
|
if line.strip().lower() in ["c", "cpp", "c++", "rust"]: |
||||||
|
continue |
||||||
|
cleaned_lines.append(line) |
||||||
|
cleaned = "\n".join(cleaned_lines) |
||||||
|
if target_language == "C": |
||||||
|
cleaned = cleaned.replace("1U << 32", "(1ULL << 32)") |
||||||
|
if target_language == "Rust": |
||||||
|
cleaned = process_rust_code(cleaned) |
||||||
|
return cleaned |
||||||
|
|
||||||
|
# Conversion prompt functions (target language-aware) |
||||||
|
def user_prompt_for(python_code, target_language): |
||||||
|
return ( |
||||||
|
f"Rewrite this Python code in {target_language} with the fastest possible implementation that produces identical output. " |
||||||
|
f"Respond only with {target_language} code; do not explain your work. " |
||||||
|
"Pay attention to number types to ensure no int overflows. Remember to #include all necessary C++ packages such as iomanip.\n\n" |
||||||
|
+ python_code |
||||||
|
) |
||||||
|
|
||||||
|
def messages_for(python_code, target_language): |
||||||
|
system_message = ( |
||||||
|
f"You are an assistant that reimplements Python code in high performance {target_language} for an {MACHINE_SPEC}. " |
||||||
|
f"Respond only with {target_language} code; use comments sparingly. " |
||||||
|
f"The {target_language} response needs to produce an identical output in the fastest possible time." |
||||||
|
) |
||||||
|
return [ |
||||||
|
{"role": "system", "content": system_message}, |
||||||
|
{"role": "user", "content": user_prompt_for(python_code, target_language)}, |
||||||
|
] |
||||||
|
|
||||||
|
def write_output(code, target_language): |
||||||
|
"""Write the converted code to a file based on target language.""" |
||||||
|
tag = target_language.lower() if target_language is not None else "" |
||||||
|
if target_language == "C++": |
||||||
|
filename = "optimized.cpp" |
||||||
|
elif target_language == "C": |
||||||
|
filename = "optimized.c" |
||||||
|
elif target_language == "Rust": |
||||||
|
filename = "optimized.rs" |
||||||
|
else: |
||||||
|
filename = "optimized.txt" |
||||||
|
cleaned = code.replace(f"```{tag}\n", "").replace("```", "") |
||||||
|
lines = cleaned.splitlines() |
||||||
|
if lines and lines[0].strip().lower() in ["cpp", "c++", "c", "rust"]: |
||||||
|
lines = lines[1:] |
||||||
|
cleaned = "\n".join(lines) |
||||||
|
cleaned = clean_code(cleaned, target_language) |
||||||
|
with open(filename, "w") as f: |
||||||
|
f.write(cleaned) |
||||||
|
return filename |
||||||
|
|
||||||
|
# GPT integration for conversion |
||||||
|
def stream_gpt(python_code, target_language, model_version): |
||||||
|
stream = openai.chat.completions.create( |
||||||
|
model=model_version, # Use selected GPT model version |
||||||
|
messages=messages_for(python_code, target_language), |
||||||
|
stream=True, |
||||||
|
) |
||||||
|
reply = "" |
||||||
|
for chunk in stream: |
||||||
|
if not hasattr(chunk, "choices") or not chunk.choices: |
||||||
|
continue |
||||||
|
fragment = chunk.choices[0].delta.content or "" |
||||||
|
reply += fragment |
||||||
|
yield reply.replace(f"```{target_language}\n", "").replace("```", "") |
||||||
|
|
||||||
|
# Claude integration for conversion |
||||||
|
def stream_claude(python_code, target_language, model_version): |
||||||
|
prompt = user_prompt_for(python_code, target_language) |
||||||
|
response = anthropic.completions.create( |
||||||
|
prompt=prompt, |
||||||
|
model=model_version, |
||||||
|
stream=True, |
||||||
|
) |
||||||
|
reply = "" |
||||||
|
for chunk in response: |
||||||
|
fragment = chunk.get("completion", "") |
||||||
|
reply += fragment |
||||||
|
yield reply.replace(f"```{target_language}\n", "").replace("```", "") |
||||||
|
|
||||||
|
# Hugging Face integration functions |
||||||
|
def stream_code_qwen(python_code, target_language, model_version): |
||||||
|
""" |
||||||
|
HF chat-based model using CodeQwen. |
||||||
|
""" |
||||||
|
from transformers import AutoTokenizer |
||||||
|
tokenizer = AutoTokenizer.from_pretrained(code_qwen) |
||||||
|
messages = messages_for(python_code, target_language) |
||||||
|
# Convert messages to chat format as expected by Qwen. |
||||||
|
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) |
||||||
|
from huggingface_hub import InferenceClient |
||||||
|
client = InferenceClient(CODE_QWEN_URL, token=os.getenv("HF_TOKEN")) |
||||||
|
stream = client.text_generation(text, stream=True, details=True, max_new_tokens=3000) |
||||||
|
result = "" |
||||||
|
for r in stream: |
||||||
|
result += r.token.text |
||||||
|
yield result.replace(f"```{target_language}\n", "").replace("```", "") |
||||||
|
|
||||||
|
def stream_huggingface(python_code, target_language, model_version): |
||||||
|
""" |
||||||
|
HF single-prompt model integration. |
||||||
|
""" |
||||||
|
prompt = user_prompt_for(python_code, target_language) |
||||||
|
from huggingface_hub import InferenceClient |
||||||
|
client = InferenceClient(model_name=model_version, token=os.getenv("HF_TOKEN")) |
||||||
|
stream = client.text_generation(prompt, stream=True, details=True, max_new_tokens=3000) |
||||||
|
reply = "" |
||||||
|
for chunk in stream: |
||||||
|
reply += chunk.token.text |
||||||
|
yield reply.replace(f"```{target_language}\n", "").replace("```", "") |
||||||
|
|
||||||
|
|
||||||
|
def optimize(python_code, combined_model, target_language): |
||||||
|
""" |
||||||
|
combined_model is a string like "GPT: gpt-4o", "CLAUDE: claude-3-5-sonnet-20240620" or "HF: model_name" |
||||||
|
""" |
||||||
|
provider, model_version = [x.strip() for x in combined_model.split(":")] |
||||||
|
if provider == "GPT": |
||||||
|
for partial in stream_gpt(python_code, target_language, model_version): |
||||||
|
yield partial |
||||||
|
elif provider == "CLAUDE": |
||||||
|
for partial in stream_claude(python_code, target_language, model_version): |
||||||
|
yield partial |
||||||
|
elif provider == "HF": |
||||||
|
if "CodeQwen" in model_version: |
||||||
|
for partial in stream_code_qwen(python_code, target_language, model_version): |
||||||
|
yield partial |
||||||
|
else: |
||||||
|
for partial in stream_huggingface(python_code, target_language, model_version): |
||||||
|
yield partial |
||||||
|
else: |
||||||
|
raise ValueError("Unknown model provider") |
||||||
|
|
||||||
|
def execute_python(code): |
||||||
|
"""Execute Python code and return its output.""" |
||||||
|
env = {} # Dedicated global namespace |
||||||
|
try: |
||||||
|
output = io.StringIO() |
||||||
|
sys.stdout = output |
||||||
|
exec(code, env) |
||||||
|
finally: |
||||||
|
sys.stdout = sys.__stdout__ |
||||||
|
return output.getvalue() |
||||||
|
|
||||||
|
def execute_cpp(code): |
||||||
|
write_output(code, target_language="C++") |
||||||
|
try: |
||||||
|
compile_cmd = [ |
||||||
|
"clang++", "-Ofast", "-std=c++17", "-march=armv8.5-a", |
||||||
|
"-mtune=apple-m1", "-mcpu=apple-m1", "-o", "optimized", "optimized.cpp" |
||||||
|
] |
||||||
|
subprocess.run(compile_cmd, check=True, text=True, capture_output=True) |
||||||
|
run_cmd = ["./optimized"] |
||||||
|
run_result = subprocess.run(run_cmd, check=True, text=True, capture_output=True) |
||||||
|
return run_result.stdout |
||||||
|
except subprocess.CalledProcessError as e: |
||||||
|
return f"Error:\n{e.stderr}" |
||||||
|
|
||||||
|
def execute_c(code): |
||||||
|
cleaned_code = clean_code(code, "C") |
||||||
|
with open("optimized.c", "w") as f: |
||||||
|
f.write(cleaned_code) |
||||||
|
try: |
||||||
|
compile_cmd = ["clang", "-O2", "-std=c11", "-o", "optimized_c", "optimized.c"] |
||||||
|
subprocess.run(compile_cmd, check=True, text=True, capture_output=True) |
||||||
|
run_cmd = ["./optimized_c"] |
||||||
|
run_result = subprocess.run(run_cmd, check=True, text=True, capture_output=True) |
||||||
|
return run_result.stdout |
||||||
|
except subprocess.CalledProcessError as e: |
||||||
|
return f"Error:\n{e.stderr}" |
||||||
|
|
||||||
|
def process_rust_code(code): |
||||||
|
code = code.replace("{:.6f}", "{:.6}") |
||||||
|
code = re.sub( |
||||||
|
r'(println!$begin:math:text$"Execution Time: \\{\\:\\.6\\} seconds", duration\\.as_secs_f64)(\\s*)$', |
||||||
|
r'\\1())', |
||||||
|
code, |
||||||
|
flags=re.MULTILINE, |
||||||
|
) |
||||||
|
code = code.replace("max_val - min_val as u32 + 1", "((max_val - min_val + 1) as u32)") |
||||||
|
code = code.replace("1 << 32", "1u64 << 32") |
||||||
|
code = re.sub(r'(\)\s*as i64)\)', r'\1', code) |
||||||
|
return code |
||||||
|
|
||||||
|
def execute_rust(code): |
||||||
|
code = code.replace("```rust\n", "").replace("```", "") |
||||||
|
lines = code.split('\n', 1) |
||||||
|
if lines and lines[0].strip().lower() == "rust": |
||||||
|
code = lines[1] if len(lines) > 1 else "" |
||||||
|
code = process_rust_code(code) |
||||||
|
with open("optimized.rs", "w") as f: |
||||||
|
f.write(code) |
||||||
|
try: |
||||||
|
compile_cmd = ["rustc", "optimized.rs", "-O", "-o", "optimized_rust"] |
||||||
|
subprocess.run(compile_cmd, check=True, text=True, capture_output=True) |
||||||
|
run_cmd = ["./optimized_rust"] |
||||||
|
run_result = subprocess.run(run_cmd, check=True, text=True, capture_output=True) |
||||||
|
return run_result.stdout |
||||||
|
except subprocess.CalledProcessError as e: |
||||||
|
return f"Error:\n{e.stderr}" |
||||||
|
|
||||||
|
def execute_target_code(code, target_language): |
||||||
|
"""Select the appropriate execution function based on target language.""" |
||||||
|
if target_language == "C++": |
||||||
|
return execute_cpp(code) |
||||||
|
elif target_language == "C": |
||||||
|
return execute_c(code) |
||||||
|
elif target_language == "Rust": |
||||||
|
return execute_rust(code) |
||||||
|
else: |
||||||
|
return "Unsupported language" |
||||||
|
|
||||||
|
# Gradio UI setup |
||||||
|
css = """ |
||||||
|
.python {background-color: #306998;} |
||||||
|
.code {background-color: #050;} |
||||||
|
""" |
||||||
|
|
||||||
|
def launch_ui(): |
||||||
|
with gr.Blocks(css=css) as ui: |
||||||
|
gr.Markdown("## Convert Python Code to C/C++/Rust") |
||||||
|
with gr.Row(): |
||||||
|
python_box = gr.Textbox(label="Python code:", value=PYTHON_HARD, lines=10) |
||||||
|
converted_box = gr.Textbox(label="Converted Code:", lines=10) |
||||||
|
with gr.Row(): |
||||||
|
model_dropdown = gr.Dropdown( |
||||||
|
                ["GPT: gpt-4o", "GPT: gpt-4o-mini", "CLAUDE: claude-3-5-sonnet-20240620", "CLAUDE: claude-3-haiku-20240307", "HF: CodeQwen1.5-7B-Chat", "HF: bigcode/starcoder"],
                label="Select Model",
                value="GPT: gpt-4o"
            )
            target_lang_dropdown = gr.Dropdown(
                ["C++", "C", "Rust"],
                label="Select target language",
                value="C++"
            )
        with gr.Row():
            convert_btn = gr.Button("Convert code")
        with gr.Row():
            python_run_btn = gr.Button("Run Python")
            run_converted_btn = gr.Button("Run Converted Code")
        with gr.Row():
            python_out = gr.TextArea(label="Python result:", elem_classes=["python"])
            converted_out = gr.TextArea(label="Converted Code result:", elem_classes=["code"])

        convert_btn.click(
            optimize,
            inputs=[python_box, model_dropdown, target_lang_dropdown],
            outputs=[converted_box],
        )
        python_run_btn.click(execute_python, inputs=[python_box], outputs=[python_out])
        run_converted_btn.click(
            execute_target_code,
            inputs=[converted_box, target_lang_dropdown],
            outputs=[converted_out],
        )

    ui.launch()


# Example Python code blocks

PYTHON_HARD = """
# Support large number sizes

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    value = seed
    while True:
        value = (a * value + c) % m
        yield value

def max_subarray_sum(n, seed, min_val, max_val):
    lcg_gen = lcg(seed)
    random_numbers = [next(lcg_gen) % (max_val - min_val + 1) + min_val for _ in range(n)]
    max_sum = float('-inf')
    for i in range(n):
        current_sum = 0
        for j in range(i, n):
            current_sum += random_numbers[j]
            if current_sum > max_sum:
                max_sum = current_sum
    return max_sum

def total_max_subarray_sum(n, initial_seed, min_val, max_val):
    total_sum = 0
    lcg_gen = lcg(initial_seed)
    for _ in range(20):
        seed = next(lcg_gen)
        total_sum += max_subarray_sum(n, seed, min_val, max_val)
    return total_sum

n = 10000
initial_seed = 42
min_val = -10
max_val = 10

import time
start_time = time.time()
result = total_max_subarray_sum(n, initial_seed, min_val, max_val)
end_time = time.time()
print("Total Maximum Subarray Sum (20 runs):", result)
print("Execution Time: {:.6f} seconds".format(end_time - start_time))
"""


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(
        description="Single script with multiple executable sections and target language support"
    )
    parser.add_argument(
        "--mode",
        choices=["direct", "ui"],
        default="ui",
        help="Run direct conversion or launch Gradio UI",
    )
    args = parser.parse_args()

    if args.mode == "direct":
        print("\nExecuting Python code (PYTHON_HARD)...")
        exec(PYTHON_HARD)
        for partial in optimize(PYTHON_HARD, "GPT: gpt-4o", "C++"):
            print(partial, end="")
    elif args.mode == "ui":
        launch_ui()
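The `python_run_btn` handler above is wired to an `execute_python` helper defined earlier in the file (not shown in this hunk). A minimal sketch of such a helper, under the assumption that it simply runs the snippet and returns its output (the name `execute_python_sketch` and the timeout are illustrative, not the file's actual implementation), could execute the code in a fresh subprocess and capture stdout/stderr:

```python
import subprocess
import sys

def execute_python_sketch(code: str) -> str:
    """Run a Python snippet in a fresh interpreter and return its combined output."""
    result = subprocess.run(
        [sys.executable, "-c", code],   # same interpreter, isolated process
        capture_output=True,
        text=True,
        timeout=60,                     # assumed guard against runaway snippets
    )
    return result.stdout + result.stderr
```

Running the snippet out-of-process keeps the Gradio app's own interpreter state clean, at the cost of interpreter startup time on every click.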
@ -0,0 +1,394 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "e9025a4a-b8ef-4901-b98e-753b756b028a",
   "metadata": {},
   "source": [
    "# Building a RAG chat without the langchain framework\n",
    "## To understand more in detail what's going on\n",
    "\n",
    "The technical know-how comes from Ed Donner, obviously, as well as from Sakalya Mitra & Pradip Nichite on [this gem of a blog post](https://blog.futuresmart.ai/building-rag-applications-without-langchain-or-llamaindex) I found on futuresmart.ai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1b7acfb5-8bf9-48b5-a219-46f1e3bfafc3",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from dotenv import load_dotenv\n",
    "import gradio as gr\n",
    "import re\n",
    "from openai import OpenAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "19af6b8b-be29-4086-a69f-5e2cdb867ede",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports for Chroma and plotly\n",
    "\n",
    "import chromadb\n",
    "from chromadb.utils import embedding_functions\n",
    "import numpy as np\n",
    "from sklearn.manifold import TSNE\n",
    "import plotly.graph_objects as go"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bc6d9ab4-816a-498c-a04c-c3838770d848",
   "metadata": {},
   "outputs": [],
   "source": [
    "MODEL = \"gpt-4o-mini\"\n",
    "db_name = \"chroma_db\"\n",
    "client = chromadb.PersistentClient(path=\"chroma_db\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a3715b81-eed0-4412-8c01-0623ed113657",
   "metadata": {},
   "outputs": [],
   "source": [
    "load_dotenv()\n",
    "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
    "openai = OpenAI()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3017e1dd-d0d5-4ef4-8c72-84517a927793",
   "metadata": {},
   "source": [
    "### Making stuff at home: documents"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e83480a5-927b-4756-a978-520a56ceed85",
   "metadata": {},
   "outputs": [],
   "source": [
    "# items in documents are actually objects: Documents(metadata={...}, page_content=\"...\"), so we need a \"Document\" class\n",
    "# btw all the quadruple-backslash madness here is due to Windows (there might be a more efficient way, still)\n",
    "\n",
    "class Document:\n",
    "    def __init__(self, metadata, page_content):\n",
    "        self.metadata = metadata\n",
    "        self.page_content = page_content\n",
    "\n",
    "    def __repr__(self):\n",
    "        return f\"Document(metadata={self.metadata}, page_content={repr(self.page_content)})\"\n",
    "\n",
    "\n",
    "documents = []\n",
    "\n",
    "def get_documents(path='.'):\n",
    "    for entry in os.listdir(path):\n",
    "        if len(re.findall(\"^\\.\", entry)) == 0:\n",
    "            full_path = os.path.join(path, entry)\n",
    "            if os.path.isdir(full_path):\n",
    "                get_documents(full_path)\n",
    "            else:\n",
    "                parent = re.sub(\"^\\.[\\\\\\\\].*[\\\\\\\\]\", \"\", os.path.dirname(full_path))\n",
    "                self = os.path.basename(full_path)\n",
    "                content = \"\"\n",
    "\n",
    "                with open(full_path, mode=\"r\", encoding=\"utf-8\") as f:\n",
    "                    content = f.read()\n",
    "\n",
    "                doc = Document(metadata={\"source\": full_path, \"doc_type\": parent, \"self\": self}, page_content=content)\n",
    "                documents.append(doc)\n",
    "\n",
    "# where the knowledge collection lives\n",
    "directory_path = r'.\\knowledge_collection'\n",
    "get_documents(directory_path)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fd846bc0-54d0-4802-a18b-196c396a241c",
   "metadata": {},
   "source": [
    "### Making stuff at home: chunks"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "202b33e2-c3fe-424c-9c8e-a90e517add42",
   "metadata": {},
   "outputs": [],
   "source": [
    "eos_pattern = re.compile(r\"((?<=[.!?;])[\\s]+)|([\\n\\r]+)\")\n",
    "chunk_size = 1000\n",
    "chunks = []"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a19a61ec-d204-4b87-9f05-88832d03fad6",
   "metadata": {},
   "outputs": [],
   "source": [
    "for doc in documents:\n",
    "\n",
    "    sentence_ends = [end.start() for end in list(re.finditer(eos_pattern, doc.page_content)) if end.start() > chunk_size - 50]\n",
    "    start = 0\n",
    "\n",
    "    if len(sentence_ends) == 0 and len(doc.page_content) > 5:\n",
    "        chunk = Document(metadata=doc.metadata, page_content=doc.page_content)\n",
    "        chunk.metadata['id'] = f\"{doc.metadata['source']}_chunk_\"\n",
    "        chunks.append(chunk)\n",
    "\n",
    "    else:\n",
    "        for point in sentence_ends:\n",
    "            if point - start >= chunk_size - 50:\n",
    "                text = doc.page_content[start:point]\n",
    "                chunk = Document(metadata=doc.metadata, page_content=text)\n",
    "                chunk.metadata['id'] = f\"{doc.metadata['source']}_chunk_\"\n",
    "                chunks.append(chunk)\n",
    "                start = point\n",
    "\n",
    "        # Add the remaining part of the text as the last chunk if it's big enough\n",
    "        if len(doc.page_content) - start > 5:\n",
    "            text = doc.page_content[start:]\n",
    "            chunk = Document(metadata=doc.metadata, page_content=text)\n",
    "            chunk.metadata['id'] = f\"{doc.metadata['source']}_chunk_\"\n",
    "            chunks.append(chunk)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "966ae50c-e0e5-403a-9465-8f26967f8922",
   "metadata": {},
   "source": [
    "### Making stuff without a framework: embeddings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b97391c0-e55f-4e08-b0cb-5e62fb119ae6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Configure sentence transformer embeddings\n",
    "embeddings = embedding_functions.SentenceTransformerEmbeddingFunction(\n",
    "    model_name=\"all-MiniLM-L6-v2\"\n",
    ")\n",
    "\n",
    "collection_name = \"documents_collection\"\n",
    "\n",
    "try:\n",
    "    client.delete_collection(collection_name)\n",
    "except ValueError:\n",
    "    print(f\"{collection_name} doesn't exist yet\")\n",
    "\n",
    "# Create collection\n",
    "collection = client.get_or_create_collection(\n",
    "    name=collection_name,\n",
    "    embedding_function=embeddings\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5222dfec-8cf4-4e87-aeb8-33d0f3b3b5cb",
   "metadata": {},
   "outputs": [],
   "source": [
    "# adding our chunks to the \"collection\"\n",
    "\n",
    "for chunk in chunks:\n",
    "    index = chunks.index(chunk)\n",
    "    collection.add(\n",
    "        documents=chunk.page_content,\n",
    "        metadatas=chunk.metadata,\n",
    "        ids=chunk.metadata['id'] + f\"{index}\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5effcada-ee5f-4207-9fa6-1fc5604b068b",
   "metadata": {},
   "outputs": [],
   "source": [
    "def semantic_search(collection, query: str, n_results: int = 4):\n",
    "    results = collection.query(\n",
    "        query_texts=[query],\n",
    "        n_results=n_results\n",
    "    )\n",
    "    return results"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "99f0a366-3dcb-4824-9f33-70e07af984d8",
   "metadata": {},
   "source": [
    "## Visualizing the Vector Store\n",
    "\n",
    "The results actually look just as good with `all-MiniLM-L6-v2`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e12751ab-f102-4dc6-9c0f-313e5832b75f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Prework\n",
    "\n",
    "result = collection.get(include=['embeddings', 'documents', 'metadatas'])\n",
    "vectors = np.array(result['embeddings'])\n",
    "documents = result['documents']\n",
    "doc_types = [metadata['doc_type'] for metadata in result['metadatas']]\n",
    "colors = [['blue', 'red', 'orange'][['languages', 'mountains', 'regions'].index(t)] for t in doc_types]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "422e3247-2de0-44ba-82bc-30b4f739da7e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reduce the dimensionality of the vectors to 2D using t-SNE\n",
    "# (t-distributed stochastic neighbor embedding)\n",
    "\n",
    "tsne = TSNE(n_components=2, random_state=42)\n",
    "reduced_vectors = tsne.fit_transform(vectors)\n",
    "\n",
    "# Create the 2D scatter plot\n",
    "fig = go.Figure(data=[go.Scatter(\n",
    "    x=reduced_vectors[:, 0],\n",
    "    y=reduced_vectors[:, 1],\n",
    "    mode='markers',\n",
    "    marker=dict(size=5, color=colors, opacity=0.8),\n",
    "    text=[f\"Type: {t}<br>Text: {d[:100]}...\" for t, d in zip(doc_types, documents)],\n",
    "    hoverinfo='text'\n",
    ")])\n",
    "\n",
    "fig.update_layout(\n",
    "    title='2D Chroma Vector Store Visualization',\n",
    "    scene=dict(xaxis_title='x', yaxis_title='y'),\n",
    "    width=800,\n",
    "    height=600,\n",
    "    margin=dict(r=20, b=10, l=10, t=40)\n",
    ")\n",
    "\n",
    "fig.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2cff9065-de3d-4e91-8aff-c7ad750a4334",
   "metadata": {},
   "source": [
    "#### Comment: Relying on Gradio's history handling seems to be memory enough\n",
    "##### If all you need is your favorite LLM with expertise in your knowledge collection"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "aebb676f-883e-4b2b-8420-13f2a8399e77",
   "metadata": {},
   "outputs": [],
   "source": [
    "system_prompt = \"You are a helpful assistant for everything French. Give brief, accurate answers. \\\n",
    "Do not provide any information that you haven't been asked for, even if you have lots of context. \\\n",
    "If you haven't been provided with relevant context, say you don't know. Do not make anything up, only \\\n",
    "provide answers that are based in the context you have been given. Do not comment on the provided context. \\\n",
    "If the user doesn't ask for any information, engage in brief niceties and offer your expertise regarding France.\"\n",
    "\n",
    "history = [{\"role\": \"system\", \"content\": system_prompt}]\n",
    "\n",
    "def get_user_prompt(prompt):\n",
    "    # semantic search!!\n",
    "    context = semantic_search(collection, prompt)['documents'][0]\n",
    "\n",
    "    if len(context) > 0:\n",
    "        prompt += f\"\\n\\n[AUTOMATIC SYSTEM CONTEXT ADDITION] Here is some context that might be useful for answering the question:\"\n",
    "\n",
    "        for doc in context:\n",
    "            prompt += f\"\\n\\n{doc}\"\n",
    "\n",
    "    user_prompt = {\"role\": \"user\", \"content\": prompt}\n",
    "\n",
    "    return user_prompt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "23b70162-2c4f-443e-97c8-3e675304d307",
   "metadata": {},
   "outputs": [],
   "source": [
    "def stream_gpt(message, history):\n",
    "    messages = [{\"role\": \"system\", \"content\": system_prompt}] + history\n",
    "    messages.append(get_user_prompt(message))\n",
    "    stream = openai.chat.completions.create(\n",
    "        model=MODEL,\n",
    "        messages=messages,\n",
    "        stream=True\n",
    "    )\n",
    "    result = \"\"\n",
    "    for chunk in stream:\n",
    "        result += chunk.choices[0].delta.content or \"\"\n",
    "        yield result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4ecf4a30-452d-4d41-aa60-fa62c8e2559b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Gradio\n",
    "\n",
    "gr.ChatInterface(fn=stream_gpt, type=\"messages\").launch(inbrowser=True)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@ -0,0 +1,42 @@
# Overview of Alsacien Language

## Definition
Alsacien, also known as Alsatian or Alsatian German, is a variety of the Alemannic branch of the Germanic languages spoken predominantly in Alsace, France.

## Geographic Distribution
- Primarily spoken in Alsace, a region in northeastern France.
- Communities of Alsacien speakers can also be found in neighboring regions of Germany and Switzerland.

## Linguistic Classification
- **Language Family**: Indo-European
- **Subfamily**: Germanic
- **Group**: West Germanic
- **Branch**: High German

## Speakers
- Estimates of native speakers range from 500,000 to 1 million, though use has declined due to factors like urbanization and language shift towards French.

## Dialectal Variations
- Alsacien includes multiple dialects, which may vary significantly from one locality to another.
- Two main dialects:
  - **Haut-Rhin** (Upper Rhine)
  - **Bas-Rhin** (Lower Rhine)

## Characteristics
- Strongly influenced by both French and standard German, leading to unique vocabulary and pronunciation.
- Grammar and syntax retain features of Middle High German.

## Cultural Significance
- Acts as a marker of regional identity for the people of Alsace.
- Extensively used in local media, literature, and music, particularly folk traditions.

## Status
- Considered a vulnerable language by UNESCO.
- Efforts are ongoing for revitalization, including teaching in schools and cultural associations promoting its use.

## Related Languages
- Closely related to Swiss German and other Alemannic dialects.
- Influenced by and influences neighboring languages, particularly French.

## Conclusion
Alsacien is a vital part of the cultural heritage of the Alsace region, with ongoing efforts aimed at preserving and promoting its use among younger generations.
@ -0,0 +1,31 @@
# Overview of the Bourguignon Language

## General Information
- **Name**: Bourguignon
- **Region**: Primarily spoken in the Burgundy region of France
- **Language Family**: Romance languages
- **Classification**: It is part of the Langue d'oïl group, which also includes languages like French, Norman, and Picard.

## Historical Context
- **Origin**: Derived from Vulgar Latin, Bourguignon developed in the medieval period and reflects the linguistic evolution of the region.
- **Influence**: Historically influenced by Old French, as well as regional dialects and neighboring languages.

## Features
- **Dialects**: Bourguignon comprises several dialects, often differing significantly from one another.
- **Phonetics**: The phonetic system exhibits distinct sounds not found in Standard French.
- **Vocabulary**: Contains unique vocabulary and expressions that may not be understood by standard French speakers.

## Current Status
- **Speaker Population**: The number of speakers has declined over the years, with estimates suggesting only a few thousand fluent speakers today.
- **Recognition**: Bourguignon is not an official language in France, but there are efforts to preserve and promote its use among local communities.

## Cultural Significance
- **Folklore and Literature**: Bourguignon has a rich tradition of oral literature, including folk tales and songs that reflect the cultural heritage of Burgundy.
- **Festivals and Events**: Local festivals often include performances in Bourguignon, celebrating the language's place in regional identity.

## Modern Efforts
- **Revitalization**: Initiatives to teach Bourguignon in schools and promote its use in cultural activities aim to preserve the language for future generations.
- **Media Presence**: Some local media, including radio stations and publications, feature Bourguignon, fostering a sense of community among speakers.

## Conclusion
Bourguignon remains an important part of the cultural identity of the Burgundy region, reflecting the historical and linguistic diversity of France. Efforts to revive and sustain the language highlight its significance within the local heritage.
@ -0,0 +1,33 @@
# Overview of the Breton Language

## General Information
- **Name**: Breton (Brezhoneg)
- **Language Family**: Celtic, part of the Brythonic branch
- **Region**: Brittany (Breizh), France

## Historical Background
- **Origins**: Breton is derived from the Brythonic Celtic languages that were spoken in Great Britain. It arrived in Brittany with settlers from Britain during the early medieval period.
- **First Documented Evidence**: The earliest written examples of Breton date back to the 8th century.

## Linguistic Features
- **Dialects**: There are four main dialects of Breton:
  - **Leoneg** (Léon, northwest)
  - **Kerneveg** (Cornouaille, southwest)
  - **Tregerieg** (Trégor, north)
  - **Gwenedeg** (Vannes, southeast)
- **Alphabet**: The modern Breton alphabet uses the Latin script with some diacritics.

## Current Status
- **Speakers**: Approximately 200,000 to 300,000 speakers as of recent estimates.
- **Recognition**: Breton is recognized as a regional language in France, but it does not hold official status.
- **Revitalization Efforts**: There are ongoing initiatives to promote the language, including bilingual education and media in Breton.

## Cultural Significance
- **Literature and Music**: Breton has a rich oral tradition, including folklore, songs, and poetry. Contemporary literature and music often embrace the language.
- **Festivals**: Events like Fest-Noz (night festivals) celebrate Breton culture and often feature music and dance in the Breton language.

## Challenges
- **Decline**: The number of native speakers has declined significantly due to historical policies and the dominance of French.
- **Education**: Breton is not widely taught in schools, although there are some bilingual programs and immersion schools.

## Conclusion
Breton is a vibrant Celtic language with a rich history and cultural heritage, facing challenges in the modern age but supported by revitalization efforts and community engagement.
@ -0,0 +1,34 @@
# Overview of the Gascon Language

## General Information
- **Language Family**: Occitan branch of the Romance languages.
- **Region**: Primarily spoken in the Gascony region of southwestern France, which includes parts of the departments of Gers, Landes, and Pyrénées-Atlantiques.

## Historical Context
- **Origins**: Gascon evolved from Vulgar Latin and has influences from the Visigoths and various other historical invaders.
- **Status**: Once a widely spoken language, Gascon has seen a decline in the number of speakers, particularly in urban areas, due to the rise of French as the dominant language.

## Dialects
- **Varieties**: Gascon includes several dialects, most notably:
  - **Bigourdan**: Spoken in the region of Bigorre.
  - **Armanac**: Found in Armagnac.
  - **Languedocien**: This influences some Gascon speakers, particularly those in mixed-language areas.

## Linguistic Features
- **Phonetics**: Gascon has unique phonetic characteristics, such as the preservation of the Latin 'u' sound and certain nasal vowels.
- **Vocabulary**: Contains a wealth of regional vocabulary, along with borrowings from French, Occitan, and Basque.

## Cultural Significance
- **Literature**: Historically, Gascon has been used in regional literature and songs, contributing richly to the cultural heritage of the area.
- **Folklore and Traditions**: Gascon is an important vehicle for local folklore, traditions, and customs in Gascony.

## Current Status
- **Revitalization Efforts**: There are ongoing efforts to promote and teach Gascon in schools, cultural organizations, and through local media.
- **Number of Speakers**: As of recent estimates, the number of fluent speakers is declining, with efforts being made to preserve the language among younger generations.

## Related Languages
- **Occitan**: Gascon is one of the major dialects of the Occitan language, which also includes Provençal and Languedocien.
- **Comparison to French**: While Gascon shares some similarities with French, it retains distinct grammatical structures and vocabulary.

## Conclusion
Gascon is not only a language but a crucial component of the cultural identity of the Gascon people, reflecting their history, traditions, and regional pride. Efforts for revitalization continue to be important in preserving this unique linguistic heritage.
@ -0,0 +1,30 @@
# Overview of Languedocien Language

## General Information
- **Language Family**: Occitan
- **Region**: Primarily spoken in the Languedoc region of southern France.
- **ISO Code**: Not officially assigned, but sometimes referred to as "oc" for Occitan.

## Linguistic Features
- **Dialects**: Languedocien is one of the major dialects of the Occitan language, which also includes Provençal, Gascon, and Auvergnat.
- **Phonetics**: Characterized by the presence of certain vowel sounds and the use of diphthongs that may differ from other dialects.
- **Grammar**: Similar to other Occitan dialects, it features a subject-verb-object structure, but with unique local variations.

## Vocabulary
- **Lexical Influence**: Languedocien vocabulary is heavily influenced by Latin, with a significant number of words also derived from Provençal and other regional languages.
- **Regionalisms**: Contains unique words and expressions that are specific to local culture and traditions.

## Cultural Context
- **Recognition**: While part of the Occitan language family, Languedocien does not have official status in France and is considered a regional language.
- **Literature**: Historically used in medieval literature; notable authors include Frédéric Mistral and others who contributed to the revival of Occitan literature.

## Current Status
- **Speakers**: There are an estimated few hundred thousand speakers, with numbers decreasing due to the dominance of French.
- **Revitalization Efforts**: Various cultural organizations and schools aim to preserve and promote the use of Languedocien through courses, workshops, and public events.

## Geographic Distribution
- **Primary Areas**: Predominantly spoken in the departments of Hérault, Aude, Gard, and parts of Lozère and Pyrénées-Orientales.
- **Urban vs. Rural**: More commonly spoken in rural areas, with younger generations tending to use it less in urban settings.

## Conclusion
Languedocien remains an essential part of the cultural heritage of southern France, reflecting the region's history, traditions, and linguistic diversity. Efforts to sustain and promote the language continue amidst challenges posed by modernization and globalization.
@ -0,0 +1,26 @@
# Overview of the Lorrain Language

## General Information
- **Language Family**: Lorrain is part of the Langue d'Oïl languages, which are a subgroup of the Romance languages.
- **Region**: Primarily spoken in the Lorraine region of northeastern France.
- **Dialects**: There are various dialects of Lorrain, including certain variations influenced by local languages and cultures.

## Historical Context
- **Origins**: The language has roots dating back to the medieval period and was influenced by the historical presence of the Duchy of Lorraine.
- **Language Shift**: Over the 19th and 20th centuries, Lorrain saw a decline in usage due to the dominance of French, leading many speakers to shift to French.

## Linguistic Features
- **Phonology**: Lorrain phonetics include distinct sounds that differentiate it from standard French and other Langue d'Oïl languages.
- **Vocabulary**: The lexicon of Lorrain retains several archaic words and expressions that have disappeared from modern French.
- **Grammar**: Similar to French but with unique grammatical structures and conjugations, reflecting its distinct identity.

## Cultural Significance
- **Traditions**: Lorrain is often associated with local folklore, songs, and literature, which contribute to the cultural identity of Lorraine.
- **Preservation Efforts**: Various initiatives have been undertaken to promote and preserve the Lorrain language, including cultural festivals and educational programs.

## Current Status
- **Speaker Population**: The number of active speakers has significantly decreased, with many older speakers and limited transmission to younger generations.
- **Revitalization**: Recent efforts are being made to revive interest in Lorrain among younger populations through workshops, classes, and media.

## Conclusion
Lorrain is a unique language that embodies the rich cultural heritage of the Lorraine region. While it faces challenges, ongoing efforts aim to preserve and revitalize this historical language for future generations.
@ -0,0 +1,34 @@ |
|||||||
|
# Overview of the Normand Language |
||||||
|
|
||||||
|
## What is Normand? |
||||||
|
Normand is a regional language of France, part of the Oïl language group. It originates from the Normandy region and is historically linked to Old Norman, which developed from the Old Norman dialect of Old French. |
||||||
|
|
||||||
|
## Geographic Distribution |
||||||
|
- Predominantly spoken in Normandy, particularly in the departments of Seine-Maritime and Calvados. |
||||||
|
- Some dialects extend into the Channel Islands (like Jersey and Guernsey), where it is closely related to Jèrriais and Guernésiais. |
||||||
|
|
||||||
|
## Dialects |
||||||
|
Normand has several dialects, which can vary significantly in terms of vocabulary, pronunciation, and grammar. Key dialects include: |
||||||
|
- **Bocage**: Spoken in the rural areas of western Normandy. |
||||||
|
- **Mélée**: Found in the northeastern part. |
||||||
|
- **Sèvres**: A dialect with influences from the urban centers. |
||||||
|
|
||||||
|
## Linguistic Features |
||||||
|
- Normand retains many archaic French features that have evolved in Standard French. |
||||||
|
- The pronunciation of vowels and some consonant sounds can be quite distinct from Standard French. |
||||||
|
- There are notable differences in use of articles and noun endings compared to Standard French. |
||||||
|
|
||||||
|
## Historical Context |
||||||
|
- Norman was historically influential due to the Viking settlement of Normandy, formalized in 911, and the subsequent Norman Conquest of England in 1066.
||||||
|
- It was widely used by the nobility and in administrative contexts until French became more dominant post-16th century. |
||||||
|
|
||||||
|
## Current Status |
||||||
|
- Normand is considered a minority language and has seen a decline in speakers over the years. |
||||||
|
- Efforts for revitalization are ongoing, with various cultural associations promoting the language through education and media. |
||||||
|
|
||||||
|
## Cultural Aspects |
||||||
|
- Normand has a rich oral tradition, with folk tales, songs, and proverbs integral to the culture of Normandy. |
||||||
|
- Festivals and events celebrating Normand language and culture are held in various communities. |
||||||
|
|
||||||
|
## Conclusion |
||||||
|
While facing challenges due to globalization and the dominance of Standard French, Normand remains an important part of the cultural heritage of Normandy. Efforts to preserve and promote the language continue, aiming to maintain its presence for future generations. |
@ -0,0 +1,27 @@ |
|||||||
|
# Overview of the Picard Language |
||||||
|
|
||||||
|
## General Information |
||||||
|
- **Language Family**: Romance, specifically a part of the West Oïl languages, which also includes French. |
||||||
|
- **Region**: Primarily spoken in the historic region of Picardy in northern France, in the neighbouring Nord-Pas-de-Calais (where it is often called Ch'ti or Chtimi), and in the Belgian province of Hainaut.
||||||
|
|
||||||
|
## Linguistic Characteristics |
||||||
|
- **Dialects**: There are several dialects of Picard, including Amiénois, Beauvaisis, and Hesdinois. |
||||||
|
- **Vocabulary**: Shares many lexical items with French but also retains unique words and expressions. Some vocabulary is influenced by local historical interactions with Dutch and German. |
||||||
|
|
||||||
|
## Historical Context |
||||||
|
- **Origins**: Evolved from Latin, like other Romance languages. Roots trace back to the Vulgar Latin spoken in the region during the Roman Empire. |
||||||
|
- **Literary Tradition**: Has a rich but lesser-known literary tradition, with poetry and prose dating back to the Middle Ages. |
||||||
|
|
||||||
|
## Current Status |
||||||
|
- **Speakers**: The number of speakers has declined significantly over the 20th century due to the dominance of standard French in education, administration, and the media.
||||||
|
- **Revitalization Efforts**: Recent efforts in both France and Belgium include community classes, cultural organizations, and media in Picard to promote the language.
||||||
|
|
||||||
|
## Cultural Significance |
||||||
|
- **Identity**: Picard is an important part of regional identity and cultural heritage for many people in northern France. |
||||||
|
- **Festivals and Events**: Regional festivals celebrate Picard culture, featuring traditional songs, dances, and cuisine. |
||||||
|
|
||||||
|
## Legal Status |
||||||
|
- **Recognition**: Picard has no official status in France, though it is counted among the country's regional languages; in Belgium the French Community recognizes it as an endogenous regional language. Efforts have been made to include it in educational and cultural programs in some areas.
||||||
|
|
||||||
|
## Conclusion |
||||||
|
Picard is a unique language that reflects the cultural and historical tapestry of northern France. Despite challenges, there are active efforts to preserve and promote its usage among future generations. |
@ -0,0 +1,27 @@ |
|||||||
|
# Overview of Provençal Language |
||||||
|
|
||||||
|
## Definition |
||||||
|
Provençal is a Romance language that belongs to the Occitan language family, which is spoken primarily in the Provence region of southern France. |
||||||
|
|
||||||
|
## Historical Background |
||||||
|
- **Origins**: Provençal has its roots in Vulgar Latin and has been influenced by various languages and cultures throughout history, including Celtic, Germanic, and Arabic. |
||||||
|
- **Literary Tradition**: It has a rich literary tradition dating back to the 11th century, with notable poets such as Frédéric Mistral contributing to its revival in the 19th century. |
||||||
|
|
||||||
|
## Geographic Distribution |
||||||
|
- **Regions**: Provençal proper is spoken in Provence; the wider Occitan family extends into the Occitan Valleys of Italy and into Spain's Val d'Aran in Catalonia, where the local variety, Aranese, is a form of Gascon.
||||||
|
- **Dialectal Variations**: Provençal itself comprises several sub-dialects, such as Rhodanien (around the Rhône), Maritime Provençal, and Niçard, reflecting the linguistic diversity within Occitan.
||||||
|
|
||||||
|
## Current Status |
||||||
|
- **Recognition**: Provençal is recognized as a cultural language in France but has a minority status and faces challenges due to the dominance of French. |
||||||
|
- **Revitalization Efforts**: There are ongoing efforts to promote and teach Provençal, including in schools and cultural institutions. |
||||||
|
|
||||||
|
## Linguistic Features |
||||||
|
- **Grammar and Syntax**: Provençal has distinct grammatical structures that differentiate it from standard French, including the use of gendered nouns and specific verb conjugations. |
||||||
|
- **Vocabulary**: It retains many words and expressions derived from Latin, along with unique local terms and influences from neighboring languages. |
||||||
|
|
||||||
|
## Cultural Significance |
||||||
|
- **Folklore and Traditions**: Provençal is an important part of the cultural identity in Provence, associated with local traditions, music, festivals, and cuisine. |
||||||
|
- **Media and Literature**: There are books, newspapers, and online resources available in Provençal, contributing to its presence in modern media. |
||||||
|
|
||||||
|
## Conclusion |
||||||
|
Provençal is a vibrant language with a deep historical and cultural significance in southern France. While it faces challenges, ongoing efforts for its preservation continue to foster interest and engagement in this unique linguistic heritage. |
@ -0,0 +1,37 @@ |
|||||||
|
# Overview of the French Alps |
||||||
|
|
||||||
|
## General Information |
||||||
|
- **Location:** Southeastern France, extending into Switzerland and Italy. |
||||||
|
- **Length:** The French Alps form part of the Alpine arc, which stretches approximately 1,200 kilometers (750 miles) across Europe.
||||||
|
- **Highest Peak:** Mont Blanc, standing at 4,808 meters (15,774 feet). |
||||||
|
- **Mountain Chain:** Part of the larger Alpine range that spans across several European countries. |
||||||
|
|
||||||
|
## Geography |
||||||
|
- **Geological Composition:** Primarily composed of limestone and granite. |
||||||
|
- **Major Valleys:** Includes the Rhône and Isère valleys. |
||||||
|
- **Natural Parks:** Home to several national parks, including Écrins National Park and Vanoise National Park. |
||||||
|
|
||||||
|
## Climate |
||||||
|
- **Variety:** Alpine climate with large variations; cold winters and mild summers. |
||||||
|
- **Snowfall:** Heavy snowfall in winter makes it a prime destination for winter sports. |
||||||
|
|
||||||
|
## Flora and Fauna |
||||||
|
- **Biodiversity:** Rich diversity of species; includes both alpine and Mediterranean flora. |
||||||
|
- **Wildlife:** Encounters with species such as chamois, ibex, and golden eagles. |
||||||
|
|
||||||
|
## Activities |
||||||
|
- **Winter Sports:** Skiing and snowboarding are popular, with famous resorts like Chamonix, Courchevel, and Val d’Isère. |
||||||
|
- **Summer Activities:** Hiking, mountaineering, and mountain biking attract visitors during the warmer months. |
||||||
|
- **Paragliding:** Known as a hotspot for paragliding due to favorable winds and stunning views. |
||||||
|
|
||||||
|
## Cultural Significance |
||||||
|
- **Local Communities:** Home to various Alpine villages and cultures, each with unique traditions and languages. |
||||||
|
- **Gastronomy:** Famous for local cheeses (like Beaufort and Reblochon), charcuterie, and dishes such as fondue and raclette. |
||||||
|
|
||||||
|
## Historical Aspects |
||||||
|
- **Cultural Heritage:** Influenced by Roman and medieval settlements, with significant archaeological sites. |
||||||
|
- **Tourism:** Became a major tourist destination in the 19th century. |
||||||
|
|
||||||
|
## Importance |
||||||
|
- **Economic Significance:** Tourism is a vital part of the local economy, alongside agriculture and forestry. |
||||||
|
- **Sustainability Focus:** Growing emphasis on sustainable tourism practices to protect the fragile alpine ecosystem. |
@ -0,0 +1,36 @@ |
|||||||
|
# Overview of the Ardennes Mountain Range |
||||||
|
|
||||||
|
## Location |
||||||
|
- The Ardennes is a region located in the northeastern part of France, extending into Belgium and Luxembourg. |
||||||
|
|
||||||
|
## Geography |
||||||
|
- The Ardennes is characterized by dense forests, deep valleys, and rolling hills. |
||||||
|
- The highest point of the Ardennes as a whole is the Signal de Botrange, at about 694 meters (2,277 feet), located in Belgium; the summits on the French side are lower, at around 500 meters.
||||||
|
|
||||||
|
## Geology |
||||||
|
- The area is known for its rugged terrain and is primarily composed of sedimentary rocks such as limestone and sandstone. |
||||||
|
- The landscape has been shaped by glacial and river erosion over millennia. |
||||||
|
|
||||||
|
## Climate |
||||||
|
- The Ardennes has a temperate maritime climate, with cool summers and mild winters. |
||||||
|
- Precipitation is relatively high, leading to lush vegetation. |
||||||
|
|
||||||
|
## Flora and Fauna |
||||||
|
- The region is home to diverse wildlife, including deer, wild boar, and various bird species. |
||||||
|
- Dense forests are dominated by beech and fir trees, and many areas are protected as nature reserves. |
||||||
|
|
||||||
|
## Human Activity |
||||||
|
- The Ardennes has a rich history, having been inhabited since prehistoric times. |
||||||
|
- It has significance in World War I and II, particularly during the Battle of the Bulge. |
||||||
|
- The region is known for outdoor activities such as hiking, cycling, and kayaking. |
||||||
|
|
||||||
|
## Cultural Aspects |
||||||
|
- The Ardennes is dotted with picturesque villages and towns, showcasing traditional architecture. |
||||||
|
- The area is known for its beer production, particularly in Belgium, with many breweries operating in the region. |
||||||
|
|
||||||
|
## Tourism |
||||||
|
- Key attractions include the Semois River, the fortress of Bouillon, and the expansive forests of the Ardennes. |
||||||
|
- The region offers several trails and parks, attracting nature lovers and adventure enthusiasts. |
||||||
|
|
||||||
|
## Conclusion |
||||||
|
The Ardennes is a unique blend of natural beauty, historical significance, and cultural richness, making it an important region in France and beyond. |
@ -0,0 +1,37 @@ |
|||||||
|
# Overview of the Jura Mountain Range in France |
||||||
|
|
||||||
|
## Location |
||||||
|
- The Jura Mountains are located along the border between France and Switzerland. |
||||||
|
- They stretch approximately 365 kilometers (227 miles) from the Rhône River in the south to the Rhine River in the north. |
||||||
|
|
||||||
|
## Geography |
||||||
|
- The Jura is characterized by its rugged terrain, with numerous peaks, plateaus, and deep valleys. |
||||||
|
- The highest peak in the French Jura is Crêt de la Neige, which rises to an elevation of 1,720 meters (5,643 feet). |
||||||
|
|
||||||
|
## Geology |
||||||
|
- The range is primarily composed of limestone, which has been shaped by erosion, creating unique karst formations, caves, and cliffs. |
||||||
|
- The limestone that makes up the range was deposited during the Jurassic period, which gives the Jura its name; the folding of the mountains occurred much later, during the Alpine orogeny.
||||||
|
|
||||||
|
## Climate |
||||||
|
- The climate in the Jura varies from humid in the west to drier conditions in the east. |
||||||
|
- The area experiences significant snowfall in winter, making it popular for winter sports. |
||||||
|
|
||||||
|
## Flora and Fauna |
||||||
|
- The Jura is home to diverse ecosystems, including forests, alpine meadows, and wetlands. |
||||||
|
- Wildlife includes species such as deer, chamois, marmots, and a variety of bird species. |
||||||
|
|
||||||
|
## Activities |
||||||
|
- The Jura Mountains offer various outdoor activities, including hiking, skiing, and mountain biking. |
||||||
|
- The region is known for its beautiful landscapes and natural parks, attracting tourists and nature enthusiasts. |
||||||
|
|
||||||
|
## Cultural Significance |
||||||
|
- The Jura region is also known for its traditional cheese production, particularly Comté cheese. |
||||||
|
- Numerous charming villages and towns, such as Arbois and Clairvaux-les-Lacs, showcase the cultural heritage of the area. |
||||||
|
|
||||||
|
## History |
||||||
|
- The Jura Mountains have historical significance, having served as a natural barrier and route for trade and exploration. |
||||||
|
- The region has witnessed various historical events, including battles during the French Revolutionary Wars and the Napoleonic Wars. |
||||||
|
|
||||||
|
## Accessibility |
||||||
|
- The Jura is accessible from major cities like Geneva, Lyon, and Besançon, making it a popular destination for both locals and tourists. |
||||||
|
- Several scenic routes and parks are maintained to facilitate exploration and enjoyment of the natural beauty. |
@ -0,0 +1,35 @@ |
|||||||
|
# Overview of the Massif Armorican |
||||||
|
|
||||||
|
## Location |
||||||
|
- **Region**: Brittany, France |
||||||
|
- **Coordinates**: Approximately 47° N latitude and 2° W longitude |
||||||
|
|
||||||
|
## Geography |
||||||
|
- **Type**: Mountain range and geological massif |
||||||
|
- **Area**: Covers most of Brittany (including Ille-et-Vilaine, Morbihan, and Finistère) and extends into western Normandy and the Pays de la Loire
||||||
|
- **Elevation**: Modest overall; the highest summits in Brittany lie in the Monts d'Arrée, reaching about 385 meters (1,263 feet)
||||||
|
|
||||||
|
## Geology |
||||||
|
- **Formation**: Primarily composed of ancient metamorphic rocks and granite formations, dating back to the Precambrian and Paleozoic eras |
||||||
|
- **Tectonic Activity**: Influenced by the Variscan orogeny, which caused significant geological changes |
||||||
|
|
||||||
|
## Flora and Fauna |
||||||
|
- **Biodiversity**: Home to diverse ecosystems, including heathlands, forests, and wetlands |
||||||
|
- **Protected Areas**: Parts of the massif are designated as natural parks and reserves, promoting conservation efforts |
||||||
|
|
||||||
|
## Culture and History |
||||||
|
- **Historical Significance**: The area is rich in megalithic structures and archaeological sites, reflecting ancient Celtic culture |
||||||
|
- **Tourism**: Popular for hiking, cycling, and exploring its historical sites, contributing to local economies |
||||||
|
|
||||||
|
## Climate |
||||||
|
- **Climate Type**: Maritime temperate climate, characterized by mild winters and cool summers |
||||||
|
- **Precipitation**: Receives a significant amount of rainfall throughout the year, supporting its lush vegetation |
||||||
|
|
||||||
|
## Attractions |
||||||
|
- **Sites of Interest**: Includes historic towns, châteaux, and picturesque landscapes, attracting visitors for both natural beauty and cultural heritage |
||||||
|
- **Outdoor Activities**: Offers opportunities for outdoor sports such as hiking, horseback riding, and nature observation |
||||||
|
|
||||||
|
## Transportation |
||||||
|
- **Accessibility**: Well-connected by road and rail, making it easily accessible from major urban centers in Brittany |
||||||
|
|
||||||
|
This overview encapsulates the essential aspects of the Massif Armorican, highlighting its geographical, geological, and cultural significance in France. |
@ -0,0 +1,34 @@ |
|||||||
|
# Overview of Massif Central |
||||||
|
|
||||||
|
## General Information |
||||||
|
- **Location**: South-central France |
||||||
|
- **Area**: Approximately 85,000 km² |
||||||
|
- **Highest Peak**: Puy de Sancy (1,885 meters) |
||||||
|
- **Geological Composition**: Primarily volcanic and sedimentary rocks |
||||||
|
|
||||||
|
## Geography |
||||||
|
- **Regions Covered**: Spans across several French departments including Cantal, Puy-de-Dôme, Haute-Loire, and Lozère. |
||||||
|
- **Landscape**: Characterized by plateaus, volcanic cones, deep valleys, and rivers. |
||||||
|
|
||||||
|
## Climate |
||||||
|
- **Type**: Predominantly oceanic climate with a continental influence. |
||||||
|
- **Precipitation**: Higher rainfall in the western regions, often resulting in lush landscapes. |
||||||
|
|
||||||
|
## Flora and Fauna |
||||||
|
- **Biodiversity**: Home to various ecosystems, including grasslands, forests, and wetlands. |
||||||
|
- **Protected Areas**: Includes several national parks and nature reserves, such as the Parc Naturel Régional des Volcans d'Auvergne. |
||||||
|
|
||||||
|
## Cultural Significance |
||||||
|
- **History**: Affected by various historical events and populations, including the Gauls and the Roman Empire. |
||||||
|
- **Heritage**: Rich cultural heritage with medieval towns, castles, and traditional practices. |
||||||
|
|
||||||
|
## Economic Importance |
||||||
|
- **Agriculture**: Known for agriculture, particularly cheese production (e.g., Saint-Nectaire, Cantal). |
||||||
|
- **Tourism**: Popular destination for outdoor activities such as hiking, skiing, and exploring natural parks. |
||||||
|
|
||||||
|
## Notable Features |
||||||
|
- **Volcanic Activity**: The region contains many extinct volcanoes, with some still showing geothermal activity. |
||||||
|
- **Natural Attractions**: Features stunning sites like the Gorges de la Loire and the Chaîne des Puys, a UNESCO World Heritage site. |
||||||
|
|
||||||
|
## Accessibility |
||||||
|
- **Transport**: Well-connected by road and rail, with several towns providing access points for visitors. |
@ -0,0 +1,44 @@ |
|||||||
|
# Overview of the Morvan Mountain Range |
||||||
|
|
||||||
|
## Location |
||||||
|
- **Country**: France |
||||||
|
- **Region**: Burgundy (Bourgogne) |
||||||
|
- **Department**: Nièvre, Saône-et-Loire, Côte-d'Or |
||||||
|
|
||||||
|
## Geography |
||||||
|
- **Coordinates**: Approximately 47°10′N 3°55′E |
||||||
|
- **Highest Peak**: Haut Folin
||||||

- **Elevation**: 901 meters (2,956 feet)
||||||
|
- **Area**: Approximately 3,500 square kilometers |
||||||
|
- **Major Rivers**: The Cure, the Cousin, and the Yonne flow through the region.
||||||
|
|
||||||
|
## Geology |
||||||
|
- Composed primarily of granitic and metamorphic rocks. |
||||||
|
- The landscape features rolling hills, valleys, and plateaus. |
||||||
|
- Known for its rich biodiversity and varied ecosystems. |
||||||
|
|
||||||
|
## Climate |
||||||
|
- **Type**: Temperate continental climate. |
||||||
|
- **Weather**: Mild summers and cold winters with occasional snowfall. |
||||||
|
|
||||||
|
## History |
||||||
|
- The Morvan area has a rich history dating back to prehistoric times. |
||||||
|
- Notable archaeological sites include Bibracte, on Mont Beuvray, the fortified capital of the Gallic Aedui tribe.
||||||
|
- The region was significant during the Roman conquest of Gaul. |
||||||
|
|
||||||
|
## Culture and Economy |
||||||
|
- The Morvan is known for its traditional rural lifestyle and local crafts. |
||||||
|
- Main industries include agriculture, forestry, and tourism. |
||||||
|
- Famous for Morvan cheese and wines from the surrounding Burgundy region. |
||||||
|
|
||||||
|
## Tourism |
||||||
|
- Offers a variety of outdoor activities such as hiking, cycling, and fishing. |
||||||
|
- Home to the Morvan Regional Natural Park, established in 1970, which promotes conservation and sustainable tourism. |
||||||
|
- Attractions include ancient ruins, beautiful landscapes, and charming villages. |
||||||
|
|
||||||
|
## Wildlife |
||||||
|
- Habitat for various species, including deer, wild boars, and numerous bird species. |
||||||
|
- Rich flora with many endemic plant species. |
||||||
|
|
||||||
|
## Conservation |
||||||
|
- The region emphasizes environmental protection and sustainability in its natural park initiatives. |
@ -0,0 +1,40 @@ |
|||||||
|
# Overview of the Pyrenees Mountain Range |
||||||
|
|
||||||
|
## Geographic Location |
||||||
|
- The Pyrenees mountain range forms a natural border between **France** and **Spain**. |
||||||
|
- It extends approximately **430 kilometers (267 miles)** from the Atlantic Ocean (Bay of Biscay) in the west to the Mediterranean Sea in the east. |
||||||
|
|
||||||
|
## Major Peaks |
||||||
|
- **Aneto** is the highest peak, with an elevation of **3,404 meters (11,168 feet)**. |
||||||
|
- Other notable peaks include **Monte Perdido**, **Vignemale**, and **Pic du Midi d'Ossau**. |
||||||
|
|
||||||
|
## Geography and Geology |
||||||
|
- The Pyrenees are divided into three sections: |
||||||
|
- **Western Pyrenees**: Characterized by rugged terrain and steep valleys. |
||||||
|
- **Central Pyrenees**: Known for its glacial landscapes and high peaks. |
||||||
|
- **Eastern Pyrenees**: Features more rounded hills and a transition to the Mediterranean landscape. |
||||||
|
- The range is primarily composed of granite, limestone, and schist rock formations. |
||||||
|
|
||||||
|
## Climate |
||||||
|
- The climate varies from oceanic in the west to Mediterranean in the east. |
||||||
|
- Snowfall is common during the winter months, making it a popular destination for skiing and winter sports. |
||||||
|
|
||||||
|
## Flora and Fauna |
||||||
|
- The region is home to diverse ecosystems, featuring forests, meadows, and alpine tundra. |
||||||
|
- Wildlife includes species such as the **Pyrenean ibex**, **brown bear**, **vultures**, and various endemic plants. |
||||||
|
|
||||||
|
## Cultural Significance |
||||||
|
- The Pyrenees have a rich history, with numerous prehistoric caves, Roman ruins, and medieval castles. |
||||||
|
- The region is culturally significant for both France and Spain, with unique traditions, languages (such as **Occitan** and **Catalan**), and gastronomy. |
||||||
|
|
||||||
|
## Outdoor Activities |
||||||
|
- The Pyrenees are a popular destination for various activities including: |
||||||
|
- **Hiking**: Numerous trails cater to different skill levels. |
||||||
|
- **Skiing and Snowboarding**: Several ski resorts like **Saint-Lary-Soulan** and **Baqueira Beret**. |
||||||
|
- **Climbing and Mountaineering**: Challenging routes attract climbers from around the world. |
||||||
|
|
||||||
|
## National Parks |
||||||
|
- Several national parks, including **Pyrenees National Park** in France and **Ordesa y Monte Perdido National Park** in Spain, protect this stunning natural environment and its biodiversity. |
||||||
|
|
||||||
|
## Accessibility |
||||||
|
- The Pyrenees can be accessed from various cities, including **Toulouse** and **Barcelona**, with numerous roads and hiking paths connecting different areas of the mountains. |
@ -0,0 +1,33 @@ |
|||||||
|
# Vosges Mountains Overview |
||||||
|
|
||||||
|
## Geography |
||||||
|
- **Location**: Northeastern France, bordering Germany to the east. |
||||||
|
- **Length**: Approximately 150 kilometers (93 miles) from north to south. |
||||||
|
- **Elevation**: The highest peak is the **Grand Ballon**, which reaches an elevation of **1,424 meters** (4,672 feet).
||||||
|
|
||||||
|
## Natural Features |
||||||
|
- **Landscape**: Characterized by rolling hills, dense forests, and numerous lakes and streams. |
||||||
|
- **Geology**: Composed mainly of granite and sandstone, along with some limestone. |
||||||
|
- **Flora and Fauna**: Home to diverse ecosystems, including coniferous and deciduous forests, and various wildlife such as deer, wild boar, and a range of bird species. |
||||||
|
|
||||||
|
## Climate |
||||||
|
- **Influence**: The Vosges create a rain-shadow effect, producing noticeably different climates on the two sides of the range.
||||||
|
- **Weather**: Generally humid, with abundant rainfall, particularly on the western slopes.
||||||
|
|
||||||
|
## Culture and History |
||||||
|
- **Human Settlement**: Historically inhabited by Celtic tribes, later significant in both the Roman Empire and medieval periods. |
||||||
|
- **Tourism**: Popular for hiking, skiing, and outdoor activities, with many marked trails and ski resorts. |
||||||
|
- **Cultural Heritage**: Known for traditional villages, local cuisine, and the Alsace wine route. |
||||||
|
|
||||||
|
## Notable Locations |
||||||
|
- **Ballons des Vosges Regional Nature Park**: A protected area showcasing the natural beauty of the mountains. |
||||||
|
- **Colmar and Gérardmer**: Prominent towns known for their cultural significance and as tourist destinations. |
||||||
|
- **Route des Crêtes**: A scenic road that offers breathtaking views of the Vosges and surrounding regions. |
||||||
|
|
||||||
|
## Activities |
||||||
|
- **Hiking**: Numerous trails, including the famous GR5 long-distance path. |
||||||
|
- **Skiing**: Various ski resorts, particularly in the higher altitudes. |
||||||
|
- **Cycling**: The region is cyclist-friendly with several bike routes. |
||||||
|
|
||||||
|
## Accessibility |
||||||
|
- **Transport**: Well-connected by road and rail, making it accessible from major French cities and neighboring countries. |
@ -0,0 +1,47 @@ |
|||||||
|
# Overview of Alsace-Lorraine Region in France |
||||||
|
|
||||||
|
Alsace-Lorraine is a historically significant and culturally diverse region located in northeastern France. Known for its unique blend of French and German influences, the region has a fascinating history, charming towns, and beautiful landscapes. |
||||||
|
|
||||||
|
## Geography |
||||||
|
- **Location**: Situated along the Rhine River, Alsace-Lorraine borders Germany to the east and Luxembourg to the north. The region is part of the Grand Est administrative region of France. |
||||||
|
- **Area**: Covers approximately 14,524 square kilometers. |
||||||
|
- **Major Cities**: Strasbourg (capital of Alsace), Metz (capital of Lorraine), Mulhouse, Nancy, Colmar, and Epinal. |
||||||
|
|
||||||
|
## History |
||||||
|
- **German and French Control**: The region has alternated between French and German control multiple times, particularly during the 19th and 20th centuries. It was part of the German Empire from 1871 to 1918, and again during World War II, before returning to France after the war. |
||||||
|
- **Franco-Prussian War (1870-1871)**: Alsace and most of Lorraine were ceded to Germany after France's defeat in the war. This period marked significant German cultural and linguistic influence. |
||||||
|
- **Post-World War II**: After World War II, Alsace-Lorraine was definitively integrated into France, with the region's mixed identity still influencing its culture and language. |
||||||
|
|
||||||
|
## Culture |
||||||
|
- **Bilingualism**: The region has strong Germanic roots, and many people speak both French and a variety of regional dialects, such as Alsatian (a dialect of German). This bilingual heritage is reflected in the local culture, architecture, and cuisine. |
||||||
|
- **Festivals**: Alsace-Lorraine is known for its rich tradition of festivals, especially those celebrating wine and food. The Strasbourg Christmas Market is one of the oldest and most famous in Europe. |
||||||
|
- **Cuisine**: The region is renowned for its hearty and flavorful cuisine, which blends French and German influences. Notable dishes include choucroute (sauerkraut with sausages), tarte flambée (a type of pizza), and kugelhopf (a traditional cake). |
||||||
|
- **Wine**: Alsace is one of the premier wine-producing regions in France, known for its white wines, particularly Riesling, Gewürztraminer, and Pinot Gris. The Alsace Wine Route is a popular tourist attraction. |
||||||
|
|
||||||
|
## Natural Beauty |
||||||
|
- **Vosges Mountains**: Located in Lorraine, the Vosges Mountains offer scenic landscapes, hiking trails, and ski resorts. |
||||||
|
- **The Alsace Wine Route**: Stretching over 170 kilometers, this picturesque route offers breathtaking views of vineyards and charming villages. |
||||||
|
- **Regional Parks**: The region is home to several natural parks, including the Ballons des Vosges Regional Nature Park, which features forests, lakes, and wildlife. |
||||||
|
|
||||||
|
## Landmarks and Attractions |
||||||
|
- **Strasbourg Cathedral**: The Cathedral of Notre-Dame in Strasbourg is a masterpiece of Gothic architecture and a UNESCO World Heritage site. Its astronomical clock and panoramic views from the tower are major attractions. |
||||||
|
- **Château de Haut-Koenigsbourg**: A stunning medieval castle located in the Vosges Mountains, offering panoramic views of the Alsace plain. |
||||||
|
- **Metz’s Cathedral**: The Cathedral of Saint-Étienne in Metz is a notable example of Gothic architecture, with some of the largest stained-glass windows in France. |
||||||
|
- **Colmar**: Known for its well-preserved old town, Colmar is a charming medieval town with colorful half-timbered houses and canals that resemble a fairytale village. |
||||||
|
|
||||||
|
## Economy |
||||||
|
- **Industry**: Alsace-Lorraine has a diverse economy that includes manufacturing, automotive, chemicals, and electronics. The region is home to several large industrial companies, particularly in Strasbourg and Mulhouse. |
||||||
|
- **Agriculture**: The region is known for its agricultural output, particularly in wine production, as well as fruit and vegetable farming. |
||||||
|
- **Tourism**: With its rich history, picturesque landscapes, and cultural festivals, Alsace-Lorraine attracts millions of tourists each year. |
||||||
|
|
||||||
|
## Climate |
||||||
|
- **Continental Climate**: Alsace-Lorraine experiences a continental climate with cold winters and hot, often humid summers. The region’s proximity to the Vosges Mountains means it can also experience significant rainfall, particularly in Lorraine. |
||||||
|
- **Average Temperatures**: Winters can see temperatures drop to around 0°C (32°F), while summer temperatures typically range from 18°C to 25°C (64°F to 77°F). |
||||||
|
|
||||||
|
## Notable People |
||||||
|
- **Joan of Arc**: The national heroine of France was born in 1412 in Domrémy, in Lorraine.
||||||

- **Albert Schweitzer**: The theologian, physician, and Nobel Peace Prize laureate was born in Kaysersberg, Alsace.
||||||

- **Gustave Doré**: The celebrated illustrator and engraver was born in Strasbourg.
||||||
|
|
||||||
|
## Conclusion |
||||||
|
Alsace-Lorraine is a region with a rich, multifaceted history and culture, shaped by its unique position between France and Germany. Its charming towns, breathtaking landscapes, and exceptional food and wine make it a significant part of French heritage and a beloved destination for travelers. |
@ -0,0 +1,47 @@

# Overview of Bourgogne (Burgundy) Region in France

Bourgogne, or Burgundy, is a historic and picturesque region located in eastern France. Known for its rich wine heritage, medieval towns, and stunning landscapes, Burgundy is a symbol of French culture and tradition.

## Geography

- **Location**: Bourgogne is located in central-eastern France, bordered by the regions of Franche-Comté, Rhône-Alpes, Auvergne, and Champagne-Ardenne.
- **Area**: Covers approximately 31,000 square kilometers.
- **Major Cities**: Dijon (capital), Auxerre, Beaune, Chalon-sur-Saône, Nevers.

## History

- **Duchy of Burgundy**: Burgundy was once an independent duchy, and during the Middle Ages it was one of the most powerful and influential territories in Europe, playing a key role in European politics.
- **Unification with France**: The Duchy of Burgundy became part of France after the death of the last duke, Charles the Bold, in 1477, and the region’s autonomy was gradually absorbed into the French crown.
- **Historical Significance**: Burgundy has a deep historical legacy, with numerous medieval abbeys, castles, and battlefields that have shaped the region’s identity.

## Culture

- **Wine Culture**: Burgundy is one of the world’s most famous wine-producing regions, renowned for its Pinot Noir and Chardonnay wines. The region’s vineyards produce some of the finest wines, especially in areas like Côte de Nuits, Côte de Beaune, and Chablis.
- **Cuisine**: Burgundy cuisine is rich and hearty, with dishes like boeuf bourguignon (beef stew in red wine), coq au vin (chicken cooked in wine), and escargots de Bourgogne (snails cooked in garlic and parsley butter). The region is also known for its mustard, particularly Dijon mustard.
- **Art and Architecture**: Burgundy is home to many historical and architectural landmarks, including Romanesque churches, medieval towns, and Renaissance palaces. The region has a long-standing tradition of art, with influences from both French and Flemish masters.

## Natural Beauty

- **Burgundy Canal**: The Burgundy Canal offers scenic views and is a popular spot for boaters and cyclists. It connects the Yonne River to the Saône River and passes through charming villages.
- **Morvan Regional Natural Park**: Located in the heart of Burgundy, the Morvan Park is known for its forests, lakes, and wildlife, making it a haven for outdoor enthusiasts.
- **Vineyards**: The rolling hills of the Burgundy vineyards, a UNESCO World Heritage site, are dotted with charming wine villages like Beaune and Meursault.

## Landmarks and Attractions

- **Dijon**: The capital of Burgundy, known for its well-preserved medieval architecture, the Palace of the Dukes of Burgundy, and the famous Dijon mustard.
- **Chablis**: Famous for its world-renowned white wines, Chablis is a picturesque village surrounded by vineyards and stunning views.
- **Abbey of Fontenay**: A UNESCO World Heritage site, this Cistercian abbey dates back to the 12th century and is Romanesque architecture at its best.
- **Basilica of Vézelay**: Another UNESCO site, this basilica is a key pilgrimage site and an important example of Romanesque architecture in France.
- **Clos de Vougeot**: A historic wine estate and château in the Côte de Nuits, Clos de Vougeot is at the heart of Burgundy's wine heritage.

## Economy

- **Wine Industry**: Burgundy’s wine industry is the cornerstone of the region’s economy. The vineyards produce some of the world’s most sought-after wines, and the region is home to prestigious wine estates.
- **Agriculture**: In addition to wine production, Burgundy is known for its agricultural output, including grain, dairy products, and livestock, especially Charolais cattle.
- **Tourism**: Burgundy attracts tourists with its wine tourism, beautiful landscapes, medieval towns, and rich history. The region is a popular destination for wine lovers, history buffs, and outdoor adventurers.

## Climate

- **Continental Climate**: Burgundy has a continental climate with hot summers and cold winters. The climate is ideal for viticulture, with warm days during the growing season and cool nights that help preserve the flavors of the grapes.
- **Average Temperatures**: Summers typically range from 20°C to 28°C (68°F to 82°F), while winters can dip to around 0°C (32°F).

## Notable People

- **Gustave Eiffel**: Born in Dijon, Eiffel is famous for designing the Eiffel Tower in Paris.
- **Bernard Loiseau**: A renowned French chef whose Michelin-starred restaurant in Saulieu made Burgundy a destination for fine dining.
- **Romain Rolland**: The Nobel Prize-winning writer, known for works such as *Jean-Christophe*, was born in Clamecy, Burgundy.

## Conclusion

Bourgogne is a region that embodies the essence of French culture, combining rich history, world-class wine, exceptional cuisine, and beautiful landscapes. Whether you’re savoring a glass of Burgundy wine, exploring its medieval towns, or hiking through its scenic parks, Burgundy offers a timeless experience for travelers and connoisseurs alike.
@ -0,0 +1,45 @@

# Overview of Bretagne (Brittany) Region in France

Bretagne, or Brittany, is a culturally distinct region located in the northwest of France. Known for its rugged coastline, rich history, and unique cultural heritage, Bretagne offers a fascinating blend of natural beauty and ancient traditions.

## Geography

- **Location**: Situated on the Brittany Peninsula, bordered by the English Channel to the north, the Atlantic Ocean to the west and south, and the Normandy and Pays de la Loire regions to the east.
- **Area**: Covers approximately 27,208 square kilometers.
- **Major Cities**: Rennes (capital), Brest, Saint-Malo, Quimper, Lorient, and Nantes (historically Breton, though today part of Pays de la Loire).

## History

- **Celtic Origins**: Originally inhabited by Celts, who brought their language, traditions, and culture to the region. Bretagne still maintains a strong Celtic identity.
- **Duchy of Brittany**: From the 9th to the 16th century, Brittany was an independent duchy before being united with France in 1532.
- **Breton Language**: Breton (Brezhoneg) is a Celtic language still spoken by a small population, especially in rural areas and at cultural events.

## Culture

- **Music**: Bretagne is known for its traditional Celtic music, featuring bagpipes, fiddles, and the bombard. The region hosts festivals like the Festival Interceltique de Lorient, which celebrates Celtic culture.
- **Cuisine**: The local cuisine includes specialties like crêpes, galettes (buckwheat pancakes), seafood, and cider (known as "cidre"). The region is famous for its oysters and mussels.
- **Festivals**: Brittany hosts many cultural celebrations, such as the fest-noz, a traditional Breton dance gathering recognized by UNESCO as intangible cultural heritage.

## Natural Beauty

- **Coastline**: Bretagne is known for its stunning coastline of dramatic cliffs, sandy beaches, and picturesque coves, stretching for more than 2,700 kilometers.
- **Mont Saint-Michel**: While technically in Normandy, it is often associated with Brittany due to its proximity. This island commune with its striking abbey is a UNESCO World Heritage site.
- **Regional Parks**: Brittany is home to several regional natural parks, such as the Armorique Regional Nature Park, known for varied landscapes of moors, forests, and hills.

## Landmarks and Attractions

- **Carnac Stones**: Prehistoric standing stones dating back to the Neolithic period, located in the town of Carnac. They are among the most famous megalithic sites in the world.
- **Fort La Latte**: A medieval fortress on the north coast of Brittany, offering incredible views of the sea.
- **Saint-Malo**: A walled port city, famous for its cobblestone streets, stunning beaches, and historical significance as a center of privateering.

## Economy

- **Agriculture**: The region is known for its dairy farming, particularly the production of butter and cheese. Bretagne is also famous for its apple orchards, which supply its cider industry.
- **Fishing**: Historically, Brittany has been one of the most important fishing regions in France, especially for shellfish, sardines, and tuna.
- **Tourism**: The natural beauty, history, and culture make Bretagne a popular destination, with significant income coming from visitors.

## Climate

- **Mild Climate**: Brittany experiences a temperate maritime climate, characterized by mild winters and cool summers. The region is known for frequent rainfall and changeable weather.
- **Average Temperatures**: Winters rarely drop below 5°C (41°F), while summers range from 15°C to 20°C (59°F to 68°F).

## Notable People

- **Bertrand du Guesclin**: A famous medieval French knight and national hero, born near Dinan in Brittany.
- **Jacques Cartier**: The explorer who claimed Canada for France in the 16th century, born in Saint-Malo.
- **Yann Tiersen**: A modern musician and composer from Brest, best known for his soundtrack for the film *Amélie*.

## Conclusion

Bretagne is a region of deep cultural significance, rich history, and extraordinary natural landscapes. Whether you’re drawn to its Celtic roots, its rugged coastline, or its historical landmarks, Brittany offers something for everyone.
@ -0,0 +1,47 @@

# Overview of Gascogne Region in France

Gascogne is a historical and cultural region in southwestern France, known for its rolling hills, vineyards, charming villages, and rich heritage. It is often associated with a rustic lifestyle, gastronomy, and the famed Musketeers of Dumas’ novels.

## Geography

- **Location**: Situated in the southwest of France, historical Gascogne stretches from the Atlantic approaches in the west toward Toulouse in the east, with the Pyrenees mountains forming its southern boundary.
- **Area**: The region encompasses parts of the modern-day regions of Occitanie and Nouvelle-Aquitaine.
- **Major Cities**: Auch (historical capital), Agen, Condom, Lectoure, and Eauze.

## History

- **Roman Influence**: Gascogne was part of the ancient Roman province of Novempopulania. The region’s rich history is reflected in its architecture and ancient ruins.
- **Visigoths and Franks**: The region was controlled first by the Visigoths and later by the Franks, whose influence shaped local customs and governance.
- **Duchy of Gascogne**: During the Middle Ages, Gascogne was an independent duchy before becoming part of the Kingdom of France in the 13th century.
- **The Musketeers**: Gascogne is famously associated with Alexandre Dumas’ *The Three Musketeers*, whose hero d’Artagnan is portrayed as a proud Gascon.

## Culture

- **Gascon Language**: The Gascon language, a variety of Occitan, was historically spoken in the region. Though its use has declined, it still carries cultural significance and is a symbol of regional identity.
- **Folk Traditions**: Gascogne is known for its folk traditions, including traditional music, dances, and festivals. The region is famous for its rural festivals, celebrating everything from local history to agricultural practices.
- **Cuisine**: Gascon cuisine is renowned for its hearty and flavorful dishes. Notable examples include *foie gras*, *confit de canard* (duck confit), and *garbure* (a rich vegetable and meat soup). The region is also famous for Armagnac, a brandy produced using traditional methods.

## Natural Beauty

- **Rolling Hills and Vineyards**: Gascogne is known for its picturesque landscapes, featuring rolling hills, vast forests, and scenic vineyards. The region is ideal for hiking, cycling, and exploring the rural countryside.
- **The Pyrenees**: The southern border of Gascogne is defined by the Pyrenees mountains, which offer outdoor activities like hiking and skiing.
- **Rivers**: Gascogne is crisscrossed by rivers such as the Garonne and the Adour, making the region fertile for agriculture and creating stunning natural scenery.

## Landmarks and Attractions

- **Auch Cathedral**: A UNESCO World Heritage site, the Cathedral of Sainte-Marie in Auch is an impressive Gothic structure, reached by a monumental staircase.
- **D’Artagnan’s Birthplace**: The town of Lupiac, birthplace of the real d’Artagnan who inspired Dumas’ *The Three Musketeers*, attracts fans of the novels and of history alike.
- **Château de Larressingle**: Often called one of the most beautiful fortified villages in France, this medieval site offers a glimpse into the region's past.
- **Armagnac Distilleries**: Visitors can tour the distilleries that produce the famous Armagnac brandy, with opportunities to taste and learn about the traditional distilling process.

## Economy

- **Agriculture**: Gascogne is an important agricultural region, known for raising ducks and geese (for foie gras) and pigs. The fertile soil supports the cultivation of corn, sunflowers, and grapes.
- **Wine and Brandy**: The region is famous for its vineyards and the production of Armagnac. The local wines, especially those from the Côtes de Gascogne, are increasingly recognized for their quality.
- **Tourism**: With its rich history, natural beauty, and culinary traditions, Gascogne attracts tourists looking to experience authentic French rural life, enjoy local food and wine, and explore historical landmarks.

## Climate

- **Temperate Climate**: Gascogne enjoys a temperate climate, with warm summers and mild winters. The west and north feel an oceanic influence from the Atlantic, while the foothills of the Pyrenees in the south see more mountain weather.
- **Average Temperatures**: Summer temperatures typically range from 20°C to 30°C (68°F to 86°F), while winters are generally mild, with temperatures ranging from 5°C to 10°C (41°F to 50°F).

## Notable People

- **D'Artagnan**: The fictional hero of *The Three Musketeers* is one of the most famous characters associated with Gascogne, although based on a real person.
- **Charles de Batz de Castelmore, Comte d'Artagnan**: The historical figure who inspired the fictional d'Artagnan. Born in Lupiac in Gascogne, he was a nobleman and soldier who served as a captain of Louis XIV's musketeers.
- **Henri IV**: The King of France, born in nearby Pau, famously said, “Paris is worth a Mass,” and was instrumental in uniting France after years of religious conflict.

## Conclusion

Gascogne is a region that offers a unique blend of history, culture, and natural beauty. From its medieval villages and legendary connections to the Musketeers, to its rich culinary traditions and scenic landscapes, Gascogne provides a true taste of southwestern France. Whether exploring its vineyards, tasting Armagnac, or immersing yourself in its rural charm, Gascogne is a region full of life and tradition.
@ -0,0 +1,47 @@

# Overview of Île-de-France Region in France

Île-de-France is the central region of France, encompassing the nation’s capital, Paris. As the political, economic, and cultural heart of France, this region is not only historically significant but also a global center for art, fashion, and business.

## Geography

- **Location**: Situated in the north-central part of France, Île-de-France is surrounded by the regions of Normandy, Hauts-de-France, Grand Est, Bourgogne-Franche-Comté, and Centre-Val de Loire.
- **Area**: Covers approximately 12,012 square kilometers.
- **Major Cities**: Paris (capital of both the region and France), Versailles, Créteil, Nanterre, and Montreuil.

## History

- **Royal Legacy**: Île-de-France has historically been the core of the French monarchy. It was the heartland of the Capetian dynasty, beginning in the 10th century, and is home to many royal palaces and historic sites.
- **French Revolution**: Paris, located in Île-de-France, was the focal point of the French Revolution in the late 18th century. Pivotal revolutionary events, such as the storming of the Bastille, took place here.
- **World War II**: During WWII, Paris was occupied by Nazi forces from 1940 to 1944. The city was liberated in August 1944 by Allied forces.

## Culture

- **Capital of Culture**: Paris is widely recognized as one of the world’s great cultural capitals. It is home to numerous world-class museums, theaters, and art galleries, including the Louvre, the Musée d'Orsay, and the Centre Pompidou.
- **Fashion and Art**: Paris is the global capital of fashion, known for haute couture, and hosts prestigious events like Paris Fashion Week. The city has also been a center of the art world for centuries, shaping movements such as Impressionism and Surrealism.
- **Gastronomy**: Île-de-France is known for its fine dining, with Michelin-starred restaurants, cafés, and bistros. The region is also famous for pâtisseries, including macarons and éclairs, and for traditional French dishes such as coq au vin and escargot.

## Natural Beauty

- **Seine River**: The Seine flows through Paris and the wider region, its banks and parks perfect for leisure activities like boat tours, picnicking, and walks across its iconic bridges.
- **Bois de Boulogne & Bois de Vincennes**: These expansive public parks on the outskirts of Paris offer lush green spaces for recreation, hiking, and cycling.
- **Versailles Gardens**: The Gardens of the Palace of Versailles, with their meticulously designed lawns, fountains, and sculptures, are part of a UNESCO World Heritage site and among the most famous gardens in the world.

## Landmarks and Attractions

- **Eiffel Tower**: The most iconic landmark in Paris, the Eiffel Tower attracts millions of visitors every year. It stands as a symbol of France and offers stunning panoramic views of the city.
- **Notre-Dame Cathedral**: A masterpiece of Gothic architecture and one of the most famous religious sites in the world, located on the Île de la Cité in the Seine.
- **Palace of Versailles**: A short trip from Paris, the Palace of Versailles is one of the grandest royal palaces in Europe, famous for its opulent architecture and the Hall of Mirrors.
- **Sainte-Chapelle**: Known for its stunning stained-glass windows, this Gothic chapel in Paris is one of the most beautiful examples of medieval architecture.
- **The Louvre**: The world’s largest art museum, the Louvre is home to thousands of works of art, including Leonardo da Vinci's *Mona Lisa* and the *Venus de Milo*.

## Economy

- **Economic Powerhouse**: Île-de-France is the economic center of France, contributing a significant portion of the country’s GDP. It is home to many multinational companies and is the main business hub in France.
- **Finance and Technology**: The region has a thriving financial sector centered on La Défense, Paris’s business district. It also hosts tech startups and innovation, particularly in areas like AI, fintech, and digital media.
- **Tourism**: Paris is one of the world’s top tourist destinations, attracting millions of visitors each year. Tourism is a key driver of the regional economy, with visitors coming for the history, culture, and attractions.

## Climate

- **Oceanic Climate**: Île-de-France experiences a temperate oceanic climate with mild winters and warm summers. Paris typically sees its rainiest weather in autumn and spring.
- **Average Temperatures**: Winter temperatures hover around 3°C to 7°C (37°F to 45°F), while summer temperatures typically range from 18°C to 25°C (64°F to 77°F), with highs reaching 30°C (86°F).

## Notable People

- **Napoleon Bonaparte**: Born on the island of Corsica, Napoleon became Emperor of the French and played a pivotal role in shaping the history of France and Europe. His influence is still felt throughout Île-de-France.
- **Marcel Proust**: The famous French writer, best known for *In Search of Lost Time*, lived and wrote in Paris during the late 19th and early 20th centuries.
- **Édith Piaf**: One of France’s most beloved singers, Piaf was born and raised in Paris and became an international icon of French music.

## Conclusion

Île-de-France is the heart of France, blending rich history, cultural innovation, and economic power. With Paris at its center, the region is a global leader in fashion, art, and business. From historic landmarks like the Eiffel Tower and Versailles to its world-class museums and gastronomic delights, Île-de-France offers something for every visitor, making it a must-see destination for travelers.
@ -0,0 +1,46 @@

# Overview of Languedoc Region in France

Languedoc is a historic and culturally rich region located in the southern part of France, known for its Mediterranean coastline, picturesque villages, and deep-rooted traditions. It is often celebrated for its wines, beaches, and beautiful landscapes.

## Geography

- **Location**: Languedoc is situated in the south of France, bordered by the Mediterranean Sea to the southeast, Provence to the east, the Massif Central to the north, and the Midi-Pyrénées area to the west.
- **Area**: Covers approximately 27,000 square kilometers.
- **Major Cities**: Montpellier (capital), Nîmes, Perpignan, Carcassonne, Béziers, and Sète.

## History

- **Roman Influence**: Languedoc has a strong Roman heritage, with many ancient monuments, including the well-preserved aqueduct of the Pont du Gard and the Roman city of Nîmes.
- **Cathar History**: In the Middle Ages, Languedoc was the center of the Cathar religious movement. The region was the focus of the Albigensian Crusade (1209-1229), a military campaign aimed at eradicating Catharism.
- **Rural Culture**: Historically, the region was a center of agriculture and viticulture, and it remains deeply connected to farming traditions, particularly wine production.

## Culture

- **Language**: The Occitan language was historically widely spoken here and still carries cultural significance today. Languedoc’s name derives from the phrase *"langue d'oc,"* the "language of oc," *oc* being the Occitan word for "yes."
- **Cuisine**: Languedoc cuisine is characterized by its Mediterranean influences, with seafood, olive oil, and fresh produce playing a central role. Famous dishes include *cassoulet* (a rich stew of beans and meats), *brandade de morue* (a cod and garlic dish), and *tapenade* (olive spread).
- **Festivals**: The region is known for its vibrant festivals, such as the Feria de Nîmes, which celebrates bullfighting and the culture of southern France, and the Festival de Carcassonne, which features music, theater, and other arts.

## Natural Beauty

- **Mediterranean Coast**: The region boasts a stunning coastline along the Mediterranean Sea, with beautiful beaches like those of Cap d'Agde and the scenic Étang de Thau lagoon.
- **Languedoc-Roussillon Wine Route**: Languedoc is one of the largest wine-producing areas in France, and its wine route takes visitors through vineyards, picturesque villages, and wine estates.
- **Cévennes National Park**: Part of a UNESCO-listed landscape on the edge of the Massif Central, the park offers stunning mountain scenery, gorges, and wildlife, ideal for hiking and nature lovers.

## Landmarks and Attractions

- **Carcassonne**: A UNESCO World Heritage site, the medieval fortress of Carcassonne is one of France’s most iconic landmarks. The double-walled citadel offers a glimpse into the past with its preserved medieval architecture.
- **Pont du Gard**: A remarkably preserved Roman aqueduct and UNESCO World Heritage site, an engineering marvel of antiquity offering scenic views of the surrounding landscape.
- **Nîmes**: Known as the "French Rome," Nîmes is home to remarkable Roman monuments, including the Arena of Nîmes (a Roman amphitheater), the Temple of Diana, and the Maison Carrée.
- **Sète**: A picturesque coastal town known for its canals, seafood, and vibrant cultural scene, Sète is often referred to as the "Venice of Languedoc."
- **Abbey of Saint-Guilhem-le-Désert**: This well-preserved medieval abbey, a UNESCO World Heritage site, lies in the stunning Hérault Valley.

## Economy

- **Wine Production**: Languedoc is one of the largest wine-producing regions in France, known for a wide variety of reds, whites, and rosés. The region is famous for its *AOC* (Appellation d'Origine Contrôlée) wines, such as those from the Minervois, Faugères, and Corbières appellations.
- **Agriculture**: In addition to wine, Languedoc produces fruit (particularly melons, peaches, and cherries), olives, and lavender. It is also a significant producer of sheep and goat cheese.
- **Tourism**: With its Mediterranean coastline, historic cities, and scenic landscapes, Languedoc is a popular tourist destination. The region’s vineyards and charming towns attract visitors for wine tourism, cultural exploration, and outdoor activities.

## Climate

- **Mediterranean Climate**: Languedoc enjoys a Mediterranean climate, characterized by hot, dry summers and mild, wet winters. The climate is perfect for vineyards and outdoor activities.
- **Average Temperatures**: Summer temperatures typically range from 25°C to 35°C (77°F to 95°F), while winters are mild, with temperatures ranging from 8°C to 15°C (46°F to 59°F).

## Notable People

- **Georges Brassens**: The famous French singer-songwriter and poet was born in Sète, where his legacy is celebrated with a museum and annual festivals.
- **Pierre-Paul Riquet**: The engineer who designed the Canal du Midi, which connects the Garonne River to the Mediterranean and greatly boosted the region’s agriculture and trade in the 17th century.

## Conclusion

Languedoc is a region rich in history, culture, and natural beauty. From its Roman heritage and medieval fortresses to its beautiful beaches and vineyards, Languedoc offers a unique blend of ancient traditions and modern charm. Whether you’re enjoying a glass of wine, exploring historic towns, or relaxing by the sea, Languedoc provides an unforgettable experience for travelers.
@ -0,0 +1,48 @@ |
|||||||
|
# Overview of Normandie Region in France |
||||||
|
|
||||||
|
Normandie (Normandy) is a historic and picturesque region located in the northern part of France. Known for its dramatic coastline, rich history, and cultural heritage, Normandy plays a central role in both French and world history. |
||||||
|
|
||||||
|
## Geography |
||||||
|
- **Location**: Situated in the northernmost part of France, Normandy is bordered by the English Channel to the north, the regions of Île-de-France, Centre-Val de Loire, and Pays de la Loire to the south, and Brittany to the west. |
||||||
|
- **Area**: Covers approximately 29,907 square kilometers. |
||||||
|
- **Major Cities**: Rouen (capital), Caen, Le Havre, Cherbourg, and Dieppe. |
||||||
|
|
||||||
|
## History |
||||||
|
- **Viking Heritage**: Normandy gets its name from the Norsemen (Vikings), who settled in the region in the 9th and 10th centuries. The region became known as "Normandy" after the Vikings (Normans) were granted land by the King of France. |
||||||
|
- **William the Conqueror**: One of the most famous historical figures associated with Normandy is William the Conqueror, who, as Duke of Normandy, successfully invaded England in 1066 and became the King of England. |
||||||
|
- **D-Day and WWII**: Normandy is internationally known for the D-Day landings on June 6, 1944, during World War II. The Allied invasion of Normandy was a pivotal event in the liberation of Western Europe from Nazi occupation. The beaches, such as Omaha Beach and Utah Beach, are significant historical sites. |
||||||
|
|
||||||
|
## Culture |
||||||
|
- **Language**: The regional language of Normandy is Norman, a variety of the Old French language with influences from Old Norse. However, French is the primary language spoken today. |
||||||
|
- **Cuisine**: Normandy cuisine is influenced by its coastal location, featuring seafood like oysters, mussels, and scallops. The region is also famous for its apples, which are used to make cider (cidre) and the famous apple brandy, Calvados. Dishes such as *coquilles Saint-Jacques* (scallops) and *camembert cheese* are iconic. |
||||||
|
- **Folk Traditions**: The region is known for its folk traditions, including festivals, music, and dances that celebrate its Viking and maritime heritage. |
||||||
|
|
||||||
|
## Natural Beauty |
||||||
|
- **Dramatic Coastline**: Normandy is known for its stunning coastline, including cliffs, sandy beaches, and small coves. The cliffs at Etretat are among the most photographed natural sites in France. |
||||||
|
- **Normandy Beaches**: Famous for their historical significance, Normandy’s beaches are also a popular destination for travelers. The beaches of Omaha, Utah, and Juno were sites of the D-Day landings. |
||||||
|
- **Countryside and Farming**: Normandy is also known for its green countryside, dotted with rolling hills, fields, and traditional farmhouses. The region's fertile land is perfect for the production of dairy products, apples, and crops. |
||||||
|
|
||||||
|
## Landmarks and Attractions |
||||||
|
- **Mont Saint-Michel**: A UNESCO World Heritage site, Mont Saint-Michel is one of France’s most iconic landmarks. This island commune features a medieval abbey perched atop a rocky hill, surrounded by tidal waters, creating a stunning visual. |
||||||
|
- **D-Day Landing Beaches**: The beaches where the D-Day landings took place, such as Utah Beach, Omaha Beach, and Sword Beach, are significant historical sites and are home to several museums, memorials, and cemeteries dedicated to the soldiers who fought there. |
||||||
|
- **Rouen Cathedral**: A masterpiece of Gothic architecture, the Rouen Cathedral is famous for its stunning facade and for being the subject of a series of paintings by Claude Monet. |
||||||
|
- **Château de Caen**: Built by William the Conqueror in the 11th century, this castle in Caen is one of the largest medieval fortresses in Europe. |
||||||
|
- **Jardin des Plantes de Rouen**: A botanical garden in Rouen that showcases a variety of plant species, it is a great place to explore nature and relax. |
||||||
|
|
||||||
|
## Economy |
||||||
|
- **Agriculture**: Normandy is a major agricultural region, known for dairy farming, particularly the production of butter and cheese. The region is famous for its dairy products, with cheeses like Camembert, Livarot, and Pont-l’Évêque being integral to the local economy. |
||||||
|
- **Cider Production**: Normandy is one of the primary cider-producing regions in France, with a long tradition of apple orchards. The region’s cider is often made from a variety of apples, resulting in dry, sweet, or sparkling ciders. |
||||||
|
- **Fishing and Maritime**: The region’s location along the English Channel makes it a significant player in France’s fishing industry. Ports like Le Havre and Cherbourg are vital to the French maritime economy. |
||||||
|
- **Tourism**: With its rich historical sites, picturesque countryside, and seaside attractions, Normandy is a popular tourist destination, drawing visitors to its beaches, memorials, and unique landmarks. |
||||||
|
|
||||||
|
## Climate |
||||||
|
- **Oceanic Climate**: Normandy enjoys an oceanic climate, with mild winters and cool summers. The weather is influenced by the proximity to the English Channel, often resulting in cloudy, rainy days. |
||||||
|
- **Average Temperatures**: Summers generally range from 18°C to 22°C (64°F to 72°F), while winters are mild, with temperatures ranging from 3°C to 7°C (37°F to 45°F). |
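The Fahrenheit figures above are rounded conversions of the Celsius ranges; as a quick check of the conversion arithmetic (a throwaway sketch, not part of the guide itself):

```python
# Celsius-to-Fahrenheit conversion behind the ranges above: F = C * 9/5 + 32
def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32

for c in (18, 22, 3, 7):
    print(f"{c} C = {c_to_f(c):.1f} F")  # 64.4, 71.6, 37.4, 44.6 -> rounds to 64/72/37/45
```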
||||||
|
|
||||||
|
## Notable People |
||||||
|
- **William the Conqueror**: Born in Falaise, Normandy, William the Conqueror is one of the most famous figures in history, known for his conquest of England in 1066. |
||||||
|
- **Joan of Arc**: A national heroine of France, Joan of Arc played a significant role in the Hundred Years' War. Although she was born in Domrémy, in Lorraine, she was tried and executed in Rouen, Normandy, in 1431, and the city remains closely tied to her memory. |
||||||
|
- **Gustave Flaubert**: The renowned French writer, best known for his novel *Madame Bovary*, was born in Rouen, Normandy. |
||||||
|
|
||||||
|
## Conclusion |
||||||
|
Normandy is a region rich in history, culture, and natural beauty. From the stunning Mont Saint-Michel and the beaches of the D-Day landings to the pastoral landscapes and delicious cuisine, Normandy offers a mix of historical depth and natural charm. Whether exploring its historic towns, enjoying fresh seafood and cider, or paying tribute to its WWII heritage, Normandy provides a unique and unforgettable experience. |
@ -0,0 +1,48 @@ |
|||||||
|
# Overview of Poitou Region in France |
||||||
|
|
||||||
|
Poitou is a historic region located in the western part of France, known for its rich cultural heritage, beautiful landscapes, and historical significance. Today, it forms part of the Nouvelle-Aquitaine region, but it retains its unique identity through its history, architecture, and traditions. |
||||||
|
|
||||||
|
## Geography |
||||||
|
- **Location**: Poitou is situated in the western part of France, bordered by the Atlantic Ocean to the west, the regions of Pays de la Loire to the north, Aquitaine to the south, and Centre-Val de Loire to the east. |
||||||
|
- **Area**: Covers approximately 10,000 square kilometers. |
||||||
|
- **Major Cities**: Poitiers (capital), La Rochelle, Niort, and Châtellerault. |
||||||
|
|
||||||
|
## History |
||||||
|
- **Medieval Influence**: Poitou was an important region during the medieval period, especially known for its connection to the powerful counts of Poitou and the Dukes of Aquitaine. The region was also the birthplace of Eleanor of Aquitaine, one of the most influential women of the medieval period. |
||||||
|
- **Anglo-French Conflict**: Poitou played a significant role during the Hundred Years' War, with both the English and the French vying for control of the region. It was once part of the Angevin Empire, which included large parts of modern-day France and England. |
||||||
|
- **Renaissance and Religious Wars**: During the Renaissance, Poitou became a center for intellectual and cultural development. It also saw significant involvement in the Wars of Religion between Catholics and Protestants in the 16th century. |
||||||
|
|
||||||
|
## Culture |
||||||
|
- **Language**: The traditional language of Poitou is Poitevin, a regional langue d'oïl (not a variety of Occitan), which was widely spoken in the region in medieval times. However, French is predominantly spoken today. |
||||||
|
- **Cuisine**: Poitou cuisine is characterized by its use of fresh local ingredients, with specialties such as *mogettes* (white beans), *salmis* (a stew of game), and the region’s famous cheeses, including *Chabichou du Poitou*, a soft, creamy goat cheese. The region is also known for its seafood, particularly oysters from the Marennes-Oléron area. |
||||||
|
- **Folk Traditions**: Poitou has a rich tradition of folk music and dance, with regional festivals celebrating the local culture. The region’s craft heritage, including pottery, woodwork, and textiles, continues to be celebrated. |
||||||
|
|
||||||
|
## Natural Beauty |
||||||
|
- **Atlantic Coast**: Poitou has a beautiful coastline along the Atlantic Ocean, with scenic beaches and coastal landscapes. The island of Île de Ré, accessible by bridge from La Rochelle, is a popular destination for its charming villages, vineyards, and sandy beaches. |
||||||
|
- **Marais Poitevin**: Also known as the “Green Venice,” the Marais Poitevin is a vast marshland and wetland area that is crisscrossed with canals. It is a paradise for nature lovers, offering opportunities for boating, birdwatching, and hiking. |
||||||
|
- **Countryside**: The region also features gentle rolling hills, vineyards, and forests. The Poitou-Charentes region is known for its peaceful, rural landscapes, making it ideal for outdoor activities like cycling, hiking, and nature walks. |
||||||
|
|
||||||
|
## Landmarks and Attractions |
||||||
|
- **Poitiers**: The historic city of Poitiers is famous for its medieval architecture, including the Church of Saint-Hilaire-le-Grand, a UNESCO World Heritage site, and the Palais des Ducs d'Aquitaine, a former royal palace. |
||||||
|
- **La Rochelle**: Known for its well-preserved Old Port, La Rochelle is a charming coastal town with a rich maritime history. The city's landmarks include the iconic La Rochelle Towers and the Maritime Museum. |
||||||
|
- **Futuroscope**: Located near Poitiers, Futuroscope is one of France’s most popular theme parks, offering futuristic attractions, multimedia shows, and cutting-edge technology exhibitions. |
||||||
|
- **Île de Ré**: This picturesque island is known for its beautiful beaches, historic lighthouses, and charming villages. It is a popular vacation spot for tourists seeking relaxation and outdoor activities. |
||||||
|
- **Château de Niort**: This medieval fortress in Niort dates back to the 12th century and offers visitors a glimpse into the region’s medieval history. |
||||||
|
|
||||||
|
## Economy |
||||||
|
- **Agriculture**: Poitou is traditionally an agricultural region, known for its livestock farming, particularly the locally bred Parthenaise cattle, as well as the cultivation of cereals, potatoes, and sunflowers. The region also produces a variety of fruits, including apples and grapes. |
||||||
|
- **Wine Production**: The region is part of the larger wine-growing area of Charentes, which is famous for producing Cognac, a renowned brandy. The vineyards of the Charente and Charente-Maritime departments are integral to the local economy. |
||||||
|
- **Tourism**: Poitou’s rich history, natural beauty, and charming cities attract many tourists. La Rochelle, Poitiers, and Île de Ré are major tourist destinations, while the Marais Poitevin and the coastal areas draw those interested in nature and outdoor activities. |
||||||
|
- **Cognac Production**: Poitou is at the heart of the Cognac-producing region, with many distilleries located around the Charente River, where the famous spirit is made from grapes and aged for years in oak barrels. |
||||||
|
|
||||||
|
## Climate |
||||||
|
- **Oceanic Climate**: Poitou enjoys an oceanic climate with mild winters and warm summers, influenced by the Atlantic Ocean. Coastal areas experience more moderate temperatures, while inland regions can have slightly warmer summers. |
||||||
|
- **Average Temperatures**: Summer temperatures typically range from 18°C to 25°C (64°F to 77°F), while winters are generally mild, with temperatures ranging from 5°C to 10°C (41°F to 50°F). |
||||||
|
|
||||||
|
## Notable People |
||||||
|
- **Eleanor of Aquitaine**: Born in Poitou, Eleanor was one of the most powerful and influential women in medieval Europe. She was Queen of France and later Queen of England and played a key role in the politics of both kingdoms. |
||||||
|
- **François Rabelais**: The famous Renaissance writer, best known for his satirical work *Gargantua and Pantagruel*, was born near Chinon, just east of Poitou, and spent his early monastic years in the region at Fontenay-le-Comte; his works remain an important part of French literature. |
||||||
|
- **René Descartes**: One of the most influential philosophers of the 17th century, Descartes studied law at the University of Poitiers, and his legacy continues to shape modern philosophy. |
||||||
|
|
||||||
|
## Conclusion |
||||||
|
Poitou is a region rich in history, culture, and natural beauty. From its medieval towns and historic landmarks to its picturesque countryside and coastal beauty, Poitou offers a unique blend of traditions and modern attractions. Whether exploring the city of Poitiers, enjoying the fresh produce and local wine, or relaxing on the beaches of Île de Ré, Poitou provides an unforgettable experience for visitors. |
@ -0,0 +1,50 @@ |
|||||||
|
# Overview of Provence Region in France |
||||||
|
|
||||||
|
Provence is a stunning region in the southeastern part of France, renowned for its breathtaking landscapes, rich history, vibrant culture, and Mediterranean climate. It is one of the most beloved regions in France, known for its lavender fields, vineyards, ancient Roman ruins, and charming villages. |
||||||
|
|
||||||
|
## Geography |
||||||
|
- **Location**: Provence is located in the southeastern part of France, bordered by the Mediterranean Sea to the south, the Rhône River to the west, the Alps to the north, and the region of Côte d'Azur to the east. |
||||||
|
- **Area**: Covers approximately 31,400 square kilometers. |
||||||
|
- **Major Cities**: Marseille (capital), Aix-en-Provence, Avignon, Arles, and Toulon. |
||||||
|
|
||||||
|
## History |
||||||
|
- **Roman Heritage**: Provence has a rich Roman history, with the city of Arles serving as a significant Roman settlement. The area is home to some of the best-preserved Roman monuments in France, including the Arena of Nîmes and the Pont du Gard, both just west of the Rhône in neighboring Gard. |
||||||
|
- **Medieval Influence**: Provence was part of the Kingdom of Arles in the Middle Ages, and the Comtat Venaissin, a papal enclave within the region, remained under papal rule for centuries. Provence was also home to the Papacy for a time, with the popes residing in Avignon from 1309 to 1377. |
||||||
|
- **Renaissance and Revolution**: Provence was a key region during the Renaissance, flourishing in the arts and culture. During the French Revolution, Provence played a significant role, with several uprisings and political changes. |
||||||
|
|
||||||
|
## Culture |
||||||
|
- **Language**: The traditional language of Provence is Provençal, a variety of the Occitan language. While French is predominantly spoken today, Provençal still has cultural significance and is used in regional poetry, music, and literature. |
||||||
|
- **Cuisine**: Provence is famous for its Mediterranean cuisine, emphasizing fresh vegetables, olive oil, herbs, seafood, and wine. Popular dishes include *bouillabaisse* (a fish stew), *ratatouille* (vegetable medley), *tapenade* (olive paste), and *pissaladière* (onion tart). |
||||||
|
- **Wine**: The region is renowned for its wine production, particularly rosé wines from the Côtes de Provence, as well as reds and whites. The vineyards of Provence benefit from the Mediterranean climate, producing wines with distinctive flavors. |
||||||
|
- **Folk Traditions**: Provence is known for its rich folk traditions, including festivals, music, dance, and crafts. The region celebrates a variety of traditional events, such as the Festival of the Calissons in Aix-en-Provence, and the Fête de la Lavande (Lavender Festival) in Sault. |
||||||
|
|
||||||
|
## Natural Beauty |
||||||
|
- **Mediterranean Coast**: Provence boasts a beautiful coastline along the Mediterranean, with stunning beaches, rocky coves, and picturesque seaside towns such as Cassis, Sainte-Maxime, and Bandol. |
||||||
|
- **Lavender Fields**: The lavender fields of Provence are one of the region's most iconic features. The fields bloom in vibrant purple hues during the summer months and are a major tourist attraction. |
||||||
|
- **Alps and Vineyards**: To the north of Provence, the landscape rises into the Alps, offering spectacular mountain scenery, hiking, and skiing opportunities. The rolling hills and vineyards of the region produce some of the finest wines in France. |
||||||
|
- **Gorges du Verdon**: Known as the "Grand Canyon of Europe," the Gorges du Verdon is a breathtaking river canyon with turquoise waters, cliffs, and stunning landscapes. It is a popular destination for outdoor activities like hiking, kayaking, and rock climbing. |
||||||
|
|
||||||
|
## Landmarks and Attractions |
||||||
|
- **Palace of the Popes (Palais des Papes)**: Located in Avignon, this UNESCO World Heritage site is one of the largest and most important medieval Gothic buildings in Europe. It was the residence of popes during the 14th century. |
||||||
|
- **Pont du Gard**: An ancient Roman aqueduct bridge located near Nîmes, the Pont du Gard is a UNESCO World Heritage site and an engineering marvel. |
||||||
|
- **Roman Arena of Nîmes**: One of the best-preserved Roman amphitheaters, the Arena of Nîmes is still used for events today, including bullfights and concerts. |
||||||
|
- **Château des Baux-de-Provence**: A ruined medieval castle perched atop the hills of Les Baux-de-Provence, offering panoramic views of the surrounding landscape. |
||||||
|
- **Cassis and Calanques National Park**: The seaside town of Cassis is famous for its beautiful harbor and access to the Calanques National Park, a stunning area of limestone cliffs, turquoise waters, and hidden coves. |
||||||
|
|
||||||
|
## Economy |
||||||
|
- **Agriculture**: Provence is known for its agricultural production, including the cultivation of olives, lavender, tomatoes, and herbs such as thyme and rosemary. Olive oil production is a key industry, and the region’s lavender fields are famous worldwide. |
||||||
|
- **Wine Production**: Provence is one of the most important wine regions in France, especially known for its rosé wines. Vineyards are spread throughout the region, including areas like Côtes de Provence, Bandol, and Cassis. |
||||||
|
- **Tourism**: Tourism is a major part of Provence's economy, with millions of visitors flocking to the region for its beaches, lavender fields, Roman ruins, and charming towns. The region’s Mediterranean climate and picturesque landscapes make it a year-round destination. |
||||||
|
- **Crafts and Industry**: Provence is known for its artisanal crafts, such as pottery, textiles, and perfume making, particularly in the town of Grasse, which is renowned as the perfume capital of the world. |
||||||
|
|
||||||
|
## Climate |
||||||
|
- **Mediterranean Climate**: Provence enjoys a Mediterranean climate, characterized by hot, dry summers and mild, wet winters. This climate is ideal for growing grapes, olives, and lavender, and contributes to the region’s appeal as a tourist destination. |
||||||
|
- **Average Temperatures**: Summers are typically hot, with temperatures ranging from 25°C to 35°C (77°F to 95°F), while winters are mild, with temperatures ranging from 5°C to 15°C (41°F to 59°F). |
||||||
|
|
||||||
|
## Notable People |
||||||
|
- **Paul Cézanne**: A famous Post-Impressionist painter, Cézanne was born in Aix-en-Provence and is closely associated with the landscapes of the region. His works, particularly those depicting the Mont Sainte-Victoire mountain, are iconic in the art world. |
||||||
|
- **Marcel Pagnol**: A renowned writer, playwright, and filmmaker, Pagnol was born in Aubagne and is known for his works about Provençal life, including *Marius*, *Fanny*, and *César*, as well as his memoirs. |
||||||
|
- **Vincent van Gogh**: The Dutch painter spent a year in the town of Saint-Rémy-de-Provence, where he produced some of his most famous works, including *Starry Night* and *Irises*. |
||||||
|
|
||||||
|
## Conclusion |
||||||
|
Provence is a region that captivates with its stunning landscapes, rich history, and vibrant culture. From the lavender fields and Mediterranean beaches to the Roman ruins and charming villages, Provence offers something for everyone. Whether you're visiting for the cuisine, the wine, the history, or simply to relax in its beautiful surroundings, Provence is a timeless and unforgettable destination. |
@ -0,0 +1,433 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 2, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"import glob\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"import gradio as gr\n", |
||||||
|
"# import gemini\n", |
||||||
|
"import google.generativeai" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 18, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports for langchain\n", |
||||||
|
"\n", |
||||||
|
"from langchain.document_loaders import DirectoryLoader, TextLoader\n", |
||||||
|
"from langchain.text_splitter import CharacterTextSplitter\n", |
||||||
|
"from langchain.schema import Document\n", |
||||||
|
"# from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n", |
||||||
|
"from langchain_chroma import Chroma\n", |
||||||
|
"from langchain_google_genai import GoogleGenerativeAIEmbeddings, ChatGoogleGenerativeAI\n", |
||||||
|
"import numpy as np\n", |
||||||
|
"from sklearn.manifold import TSNE\n", |
||||||
|
"import plotly.graph_objects as go\n", |
||||||
|
"from langchain.memory import ConversationBufferMemory\n", |
||||||
|
"from langchain.chains import ConversationalRetrievalChain" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 4, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# price is a factor for our company, so we're going to use a low cost model\n", |
||||||
|
"\n", |
||||||
|
"MODEL = \"gemini-1.5-flash\"\n", |
||||||
|
"db_name = \"vector_db\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 5, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Load environment variables in a file called .env\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv()\n", |
||||||
|
"os.environ['GOOGLE_API_KEY'] = os.getenv('GOOGLE_API_KEY', 'your-key-if-not-using-env')" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 6, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"google.generativeai.configure()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 7, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Read in documents using LangChain's loaders\n", |
||||||
|
"# Take everything in all the sub-folders of our knowledgebase\n", |
||||||
|
"\n", |
||||||
|
"folders = glob.glob(\"knowledge-base/*\")\n", |
||||||
|
"\n", |
||||||
|
"# With thanks to CG and Jon R, students on the course, for this fix needed for some users \n", |
||||||
|
"text_loader_kwargs = {'encoding': 'utf-8'}\n", |
||||||
|
"# If that doesn't work, some Windows users might need to uncomment the next line instead\n", |
||||||
|
"# text_loader_kwargs={'autodetect_encoding': True}\n", |
||||||
|
"\n", |
||||||
|
"documents = []\n", |
||||||
|
"for folder in folders:\n", |
||||||
|
" doc_type = os.path.basename(folder)\n", |
||||||
|
" loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n", |
||||||
|
" folder_docs = loader.load()\n", |
||||||
|
" for doc in folder_docs:\n", |
||||||
|
" doc.metadata[\"doc_type\"] = doc_type\n", |
||||||
|
" documents.append(doc)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 8, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stderr", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Created a chunk of size 1088, which is longer than the specified 1000\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", |
||||||
|
"chunks = text_splitter.split_documents(documents)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 9, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/plain": [ |
||||||
|
"123" |
||||||
|
] |
||||||
|
}, |
||||||
|
"execution_count": 9, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "execute_result" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"len(chunks)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 10, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Document types found: company, contracts, employees, products\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n", |
||||||
|
"print(f\"Document types found: {', '.join(doc_types)}\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 11, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Vectorstore created with 123 documents\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"embeddings = GoogleGenerativeAIEmbeddings(model=\"models/embedding-001\")\n", |
||||||
|
"\n", |
||||||
|
"# Check if a Chroma Datastore already exists - if so, delete the collection to start from scratch\n", |
||||||
|
"\n", |
||||||
|
"if os.path.exists(db_name):\n", |
||||||
|
" Chroma(persist_directory=db_name, embedding_function=embeddings).delete_collection()\n", |
||||||
|
"\n", |
||||||
|
"# Create our Chroma vectorstore!\n", |
||||||
|
"\n", |
||||||
|
"vectorstore = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=db_name)\n", |
||||||
|
"print(f\"Vectorstore created with {vectorstore._collection.count()} documents\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 12, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"The vectors have 768 dimensions\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# Get one vector and find how many dimensions it has\n", |
||||||
|
"\n", |
||||||
|
"collection = vectorstore._collection\n", |
||||||
|
"sample_embedding = collection.get(limit=1, include=[\"embeddings\"])[\"embeddings\"][0]\n", |
||||||
|
"dimensions = len(sample_embedding)\n", |
||||||
|
"print(f\"The vectors have {dimensions:,} dimensions\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 13, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Prework\n", |
||||||
|
"\n", |
||||||
|
"result = collection.get(include=['embeddings', 'documents', 'metadatas'])\n", |
||||||
|
"vectors = np.array(result['embeddings'])\n", |
||||||
|
"documents = result['documents']\n", |
||||||
|
"doc_types = [metadata['doc_type'] for metadata in result['metadatas']]\n", |
||||||
|
"colors = [['blue', 'green', 'red', 'orange'][['products', 'employees', 'contracts', 'company'].index(t)] for t in doc_types]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# We humans find it easier to visualize things in 2D!\n", |
||||||
|
"# Reduce the dimensionality of the vectors to 2D using t-SNE\n", |
||||||
|
"# (t-distributed stochastic neighbor embedding)\n", |
||||||
|
"\n", |
||||||
|
"tsne = TSNE(n_components=2, random_state=42)\n", |
||||||
|
"reduced_vectors = tsne.fit_transform(vectors)\n", |
||||||
|
"\n", |
||||||
|
"# Create the 2D scatter plot\n", |
||||||
|
"fig = go.Figure(data=[go.Scatter(\n", |
||||||
|
" x=reduced_vectors[:, 0],\n", |
||||||
|
" y=reduced_vectors[:, 1],\n", |
||||||
|
" mode='markers',\n", |
||||||
|
" marker=dict(size=5, color=colors, opacity=0.8),\n", |
||||||
|
" text=[f\"Type: {t}<br>Text: {d[:100]}...\" for t, d in zip(doc_types, documents)],\n", |
||||||
|
" hoverinfo='text'\n", |
||||||
|
")])\n", |
||||||
|
"\n", |
||||||
|
"fig.update_layout(\n", |
||||||
|
" title='2D Chroma Vector Store Visualization',\n", |
||||||
|
" scene=dict(xaxis_title='x',yaxis_title='y'),\n", |
||||||
|
" width=800,\n", |
||||||
|
" height=600,\n", |
||||||
|
" margin=dict(r=20, b=10, l=10, t=40)\n", |
||||||
|
")\n", |
||||||
|
"\n", |
||||||
|
"fig.show()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Let's try 3D!\n", |
||||||
|
"\n", |
||||||
|
"tsne = TSNE(n_components=3, random_state=42)\n", |
||||||
|
"reduced_vectors = tsne.fit_transform(vectors)\n", |
||||||
|
"\n", |
||||||
|
"# Create the 3D scatter plot\n", |
||||||
|
"fig = go.Figure(data=[go.Scatter3d(\n", |
||||||
|
" x=reduced_vectors[:, 0],\n", |
||||||
|
" y=reduced_vectors[:, 1],\n", |
||||||
|
" z=reduced_vectors[:, 2],\n", |
||||||
|
" mode='markers',\n", |
||||||
|
" marker=dict(size=5, color=colors, opacity=0.8),\n", |
||||||
|
" text=[f\"Type: {t}<br>Text: {d[:100]}...\" for t, d in zip(doc_types, documents)],\n", |
||||||
|
" hoverinfo='text'\n", |
||||||
|
")])\n", |
||||||
|
"\n", |
||||||
|
"fig.update_layout(\n", |
||||||
|
" title='3D Chroma Vector Store Visualization',\n", |
||||||
|
" scene=dict(xaxis_title='x', yaxis_title='y', zaxis_title='z'),\n", |
||||||
|
" width=900,\n", |
||||||
|
" height=700,\n", |
||||||
|
" margin=dict(r=20, b=10, l=10, t=40)\n", |
||||||
|
")\n", |
||||||
|
"\n", |
||||||
|
"fig.show()" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"RAG pipeline using LangChain" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 19, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stderr", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"C:\\Users\\GANESH\\AppData\\Local\\Temp\\ipykernel_524\\4130109764.py:5: LangChainDeprecationWarning:\n", |
||||||
|
"\n", |
||||||
|
"Please see the migration guide at: https://python.langchain.com/docs/versions/migrating_memory/\n", |
||||||
|
"\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"# create a new Chat with ChatGoogleGenerativeAI\n", |
||||||
|
"llm = ChatGoogleGenerativeAI(model=MODEL, temperature=0.7)\n", |
||||||
|
"\n", |
||||||
|
"# set up the conversation memory for the chat\n", |
||||||
|
"memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n", |
||||||
|
"\n", |
||||||
|
"# the retriever is an abstraction over the VectorStore that will be used during RAG\n", |
||||||
|
"retriever = vectorstore.as_retriever()\n", |
||||||
|
"\n", |
||||||
|
"# putting it together: set up the conversation chain with the Gemini LLM, the vector store and memory\n", |
||||||
|
"conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 20, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"Insurellm is an insurance technology company with 200 employees and over 300 clients worldwide. They offer four software products, including Homellm, a portal for home insurance companies that integrates with existing platforms and offers a customer portal for policy management. Their pricing model is based on provider size and customization needs.\n" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"query = \"Can you describe Insurellm in a few sentences\"\n", |
||||||
|
"result = conversation_chain.invoke({\"question\":query})\n", |
||||||
|
"print(result[\"answer\"])" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 21, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# set up a new conversation memory for the chat\n", |
||||||
|
"memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True)\n", |
||||||
|
"\n", |
||||||
|
"# putting it together: set up the conversation chain with the Gemini LLM, the vector store and memory\n", |
||||||
|
"conversation_chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=retriever, memory=memory)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"Gradio User Interface" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 22, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"def chat(message, history):\n", |
||||||
|
" result = conversation_chain.invoke({\"question\": message})\n", |
||||||
|
" return result[\"answer\"]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": 23, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [ |
||||||
|
{ |
||||||
|
"name": "stdout", |
||||||
|
"output_type": "stream", |
||||||
|
"text": [ |
||||||
|
"* Running on local URL: http://127.0.0.1:7860\n", |
||||||
|
"\n", |
||||||
|
"To create a public link, set `share=True` in `launch()`.\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"data": { |
||||||
|
"text/html": [ |
||||||
|
"<div><iframe src=\"http://127.0.0.1:7860/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>" |
||||||
|
], |
||||||
|
"text/plain": [ |
||||||
|
"<IPython.core.display.HTML object>" |
||||||
|
] |
||||||
|
}, |
||||||
|
"metadata": {}, |
||||||
|
"output_type": "display_data" |
||||||
|
} |
||||||
|
], |
||||||
|
"source": [ |
||||||
|
"view = gr.ChatInterface(chat, type=\"messages\").launch(inbrowser=True)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "llms", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 2 |
||||||
|
} |
@ -0,0 +1,405 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "dfe37963-1af6-44fc-a841-8e462443f5e6", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## This notebook compares the embeddings generated by OpenAIEmbeddings across runs.\n", |
||||||
|
"\n", |
||||||
|
"It shows that OpenAIEmbeddings output can differ slightly between runs (typically at the 4th decimal place).\n", |
||||||
|
"\n", |
||||||
|
"### Results from OpenAIEmbeddings:\n", |
||||||
|
"encodings are NOT identical on each run.\n", |
||||||
|
"\n", |
||||||
|
"### Repeating with sentence-transformers/all-MiniLM-L6-v2:\n", |
||||||
|
"encodings ARE identical on each run.\n", |
||||||
|
"\n", |
||||||
|
"The tests below perform simple numerical comparisons.\n", |
||||||
|
"\n", |
||||||
|
"### Advanced Comparison\n", |
||||||
|
"A more advanced euclidean and cosine comparison is also included.\n", |
||||||
|
"\n", |
||||||
|
"## NOTES: Tests were run on a local Jupyter Notebook / Anaconda setup for the course." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"import os\n", |
||||||
|
"import glob\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"import gradio as gr" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "802137aa-8a74-45e0-a487-d1974927d7ca", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports for langchain\n", |
||||||
|
"\n", |
||||||
|
"from langchain.document_loaders import DirectoryLoader, TextLoader\n", |
||||||
|
"from langchain.text_splitter import CharacterTextSplitter\n", |
||||||
|
"from langchain.schema import Document\n", |
||||||
|
"from langchain_openai import OpenAIEmbeddings, ChatOpenAI\n", |
||||||
|
"from langchain_chroma import Chroma\n", |
||||||
|
"import numpy as np\n", |
||||||
|
"from sklearn.manifold import TSNE\n", |
||||||
|
"import plotly.graph_objects as go\n", |
||||||
|
"from langchain.memory import ConversationBufferMemory\n", |
||||||
|
"from langchain.chains import ConversationalRetrievalChain" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "58c85082-e417-4708-9efe-81a5d55d1424", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# price is a factor for our company, so we're going to use a low cost model\n", |
||||||
|
"\n", |
||||||
|
"MODEL = \"gpt-4o-mini\"\n", |
||||||
|
"db_name = \"vector_db\"" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "ee78efcb-60fe-449e-a944-40bab26261af", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Load environment variables in a file called .env\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv()\n", |
||||||
|
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "730711a9-6ffe-4eee-8f48-d6cfb7314905", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Read in documents using LangChain's loaders\n", |
||||||
|
"# Take everything in all the sub-folders of our knowledgebase\n", |
||||||
|
"\n", |
||||||
|
"folders = glob.glob(\"knowledge-base/*\")\n", |
||||||
|
"\n", |
||||||
|
"# With thanks to CG and Jon R, students on the course, for this fix needed for some users \n", |
||||||
|
"text_loader_kwargs = {'encoding': 'utf-8'}\n", |
||||||
|
"# If that doesn't work, some Windows users might need to uncomment the next line instead\n", |
||||||
|
"# text_loader_kwargs={'autodetect_encoding': True}\n", |
||||||
|
"\n", |
||||||
|
"documents = []\n", |
||||||
|
"for folder in folders:\n", |
||||||
|
" doc_type = os.path.basename(folder)\n", |
||||||
|
" loader = DirectoryLoader(folder, glob=\"**/*.md\", loader_cls=TextLoader, loader_kwargs=text_loader_kwargs)\n", |
||||||
|
" folder_docs = loader.load()\n", |
||||||
|
" for doc in folder_docs:\n", |
||||||
|
" doc.metadata[\"doc_type\"] = doc_type\n", |
||||||
|
" documents.append(doc)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n", |
||||||
|
"chunks = text_splitter.split_documents(documents)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"len(chunks)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "2c54b4b6-06da-463d-bee7-4dd456c2b887", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n", |
||||||
|
"print(f\"Document types found: {', '.join(doc_types)}\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a8b5ef27-70c2-4111-bce7-854bc1ebd02a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Use a where filter to specify the metadata condition\n", |
||||||
|
"# Get the 3 company vectors (corresponds to our 3 yellow dots)\n", |
||||||
|
"\n", |
||||||
|
"def get_company_vectors(collection):\n", |
||||||
|
" company_vectors = collection.get(\n", |
||||||
|
"        where={\"doc_type\": \"company\"},  # Filter for documents where doc_type == \"company\"\n", |
||||||
|
" limit=10,\n", |
||||||
|
" include=[\"embeddings\", \"metadatas\", \"documents\"]\n", |
||||||
|
" )\n", |
||||||
|
"    print(f\"Found {len(company_vectors['documents'])} company vectors\")\n", |
||||||
|
" return company_vectors\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "d688b873-b52b-4d80-9df2-f70b389f5dc7", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"\n", |
||||||
|
"def print_vectors_summary(vectors):\n", |
||||||
|
" for i in range(len(vectors[\"documents\"])):\n", |
||||||
|
" print(f\"\\n--- Chunk {i+1} ---\")\n", |
||||||
|
" \n", |
||||||
|
" # Print document content (first 100 chars)\n", |
||||||
|
" print(f\"Content: {vectors['documents'][i][:100]}...\")\n", |
||||||
|
" \n", |
||||||
|
" # Print metadata\n", |
||||||
|
" print(f\"Metadata: {vectors['metadatas'][i]}\")\n", |
||||||
|
" \n", |
||||||
|
" # Print embedding info (not the full vector as it would be too long)\n", |
||||||
|
" embedding = vectors[\"embeddings\"][i]\n", |
||||||
|
" print(f\"Embedding: Vector of length {len(embedding)}, first 5 values: {embedding[:5]}\")\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"def get_dimensions_for_vectors(vectors):\n", |
||||||
|
" dimensions = []\n", |
||||||
|
"\n", |
||||||
|
" for i in range(len(vectors[\"documents\"])):\n", |
||||||
|
" embedding = vectors[\"embeddings\"][i]\n", |
||||||
|
" dimensions.append(embedding)\n", |
||||||
|
"\n", |
||||||
|
" return dimensions\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "0b195184-4920-404a-9bfa-0231f1dbe276", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Quick check if any single value is different\n", |
||||||
|
"def quick_diff_check(emb1, emb2):\n", |
||||||
|
" result = \"Embeddings are identical\"\n", |
||||||
|
" print(\"\\n\\nComparing two embeddings:\\n\\n\")\n", |
||||||
|
" print(emb1)\n", |
||||||
|
" print(emb2)\n", |
||||||
|
" for i, (v1, v2) in enumerate(zip(emb1, emb2)):\n", |
||||||
|
" if v1 != v2:\n", |
||||||
|
" result = f\"Different at dimension {i}: {v1} vs {v2}\"\n", |
||||||
|
" break\n", |
||||||
|
" print(result)\n", |
||||||
|
" return result\n", |
||||||
|
"\n", |
||||||
|
"#quick_diff_check(dimensions[0], dimensions[1])" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "06ba838d-d179-4e2d-b208-dd9cc1fd0097", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"\n", |
||||||
|
"embeddings = OpenAIEmbeddings()\n", |
||||||
|
"\n", |
||||||
|
"def create_vectorstores(embeddings):\n", |
||||||
|
"\n", |
||||||
|
" if os.path.exists(\"vectorstore1\"):\n", |
||||||
|
" Chroma(persist_directory=\"vectorstore1\", embedding_function=embeddings).delete_collection()\n", |
||||||
|
" if os.path.exists(\"vectorstore2\"):\n", |
||||||
|
" Chroma(persist_directory=\"vectorstore2\", embedding_function=embeddings).delete_collection()\n", |
||||||
|
" \n", |
||||||
|
" \n", |
||||||
|
" # Create vectorstore 1\n", |
||||||
|
" vectorstore1 = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=\"vectorstore1\")\n", |
||||||
|
" print(f\"Vectorstore 1 created with {vectorstore1._collection.count()} documents\")\n", |
||||||
|
" \n", |
||||||
|
" # Create vectorstore 2\n", |
||||||
|
" vectorstore2 = Chroma.from_documents(documents=chunks, embedding=embeddings, persist_directory=\"vectorstore2\")\n", |
||||||
|
" print(f\"Vectorstore 2 created with {vectorstore2._collection.count()} documents\")\n", |
||||||
|
"\n", |
||||||
|
" return vectorstore1, vectorstore2\n", |
||||||
|
"\n", |
||||||
|
"vectorstore1, vectorstore2 = create_vectorstores(embeddings)\n", |
||||||
|
"\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "e24242eb-613a-4edb-a081-6b8937f106a7", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"## Uncomment this and rerun the cells below\n", |
||||||
|
"## to see that HuggingFaceEmbeddings output IS identical across runs\n", |
||||||
|
"\n", |
||||||
|
"#from langchain.embeddings import HuggingFaceEmbeddings\n", |
||||||
|
"#embeddings = HuggingFaceEmbeddings(model_name=\"sentence-transformers/all-MiniLM-L6-v2\")\n", |
||||||
|
"#vectorstore1, vectorstore2 = create_vectorstores(embeddings)\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "000b9e70-2958-40db-bbed-56a00e4249ce", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Get the 3 company doc_type vectors\n", |
||||||
|
"collection1 = vectorstore1._collection\n", |
||||||
|
"collection2 = vectorstore2._collection\n", |
||||||
|
"\n", |
||||||
|
"company_vectors1=get_company_vectors(collection1)\n", |
||||||
|
"company_vectors2=get_company_vectors(collection2)\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "63cd63e4-9d3e-405a-8ef9-dac16fe2570e", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Let's print out summary info just to confirm we have the same chunks.\n", |
||||||
|
"\n", |
||||||
|
"def print_summary_info (vectors):\n", |
||||||
|
" print(\"VECTORS SUMMARY\\n\")\n", |
||||||
|
" print_vectors_summary(vectors)\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"print(\"\\n\\n\\n========= VECTORS 1 =========\\n\\n\")\n", |
||||||
|
"print_summary_info(company_vectors1)\n", |
||||||
|
"\n", |
||||||
|
"print(\"\\n\\n\\n========= VECTORS 2 =========\\n\\n\")\n", |
||||||
|
"print_summary_info(company_vectors2)\n", |
||||||
|
"\n", |
||||||
|
"\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "bc085a35-f0ec-4ddb-955c-244cb2d3eb2a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"dimensions1 = get_dimensions_for_vectors(company_vectors1)\n", |
||||||
|
"dimensions2 = get_dimensions_for_vectors(company_vectors2)\n", |
||||||
|
"\n", |
||||||
|
"result1 = quick_diff_check(dimensions1[0], dimensions2[0]) \n", |
||||||
|
"result2 = quick_diff_check(dimensions1[1], dimensions2[1]) \n", |
||||||
|
"result3 = quick_diff_check(dimensions1[2], dimensions2[2]) \n", |
||||||
|
"\n", |
||||||
|
"print(\"\\n\\nSUMMARY RESULTS:\")\n", |
||||||
|
"print(\"================\\n\\n\")\n", |
||||||
|
"print(result1) \n", |
||||||
|
"print(result2)\n", |
||||||
|
"print(result3)\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "164cf94d-9d63-4bae-91f9-4b02da1537ae", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"## ADVANCED COMPARISONS:\n", |
||||||
|
"# More advanced comparisons (from Claude 3.7 Sonnet):\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"## !IMPORTANT *** Uncomment final line to execute ***\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"import numpy as np\n", |
||||||
|
"from scipy.spatial.distance import cosine\n", |
||||||
|
"\n", |
||||||
|
"# Method 1: Euclidean distance (L2 norm)\n", |
||||||
|
"def compare_embeddings_euclidean(emb1, emb2):\n", |
||||||
|
" emb1_array = np.array(emb1)\n", |
||||||
|
" emb2_array = np.array(emb2)\n", |
||||||
|
" distance = np.linalg.norm(emb1_array - emb2_array)\n", |
||||||
|
" return {\n", |
||||||
|
" \"different\": distance > 0,\n", |
||||||
|
" \"distance\": distance,\n", |
||||||
|
" \"similarity\": 1/(1+distance) # Converts distance to similarity score\n", |
||||||
|
" }\n", |
||||||
|
"\n", |
||||||
|
"# Method 2: Cosine similarity (common for embeddings)\n", |
||||||
|
"def compare_embeddings_cosine(emb1, emb2):\n", |
||||||
|
" emb1_array = np.array(emb1)\n", |
||||||
|
" emb2_array = np.array(emb2)\n", |
||||||
|
" similarity = 1 - cosine(emb1_array, emb2_array) # Cosine returns distance, so subtract from 1\n", |
||||||
|
" return {\n", |
||||||
|
" \"different\": similarity < 0.9999, # Almost identical if > 0.9999\n", |
||||||
|
" \"similarity\": similarity\n", |
||||||
|
" }\n", |
||||||
|
"\n", |
||||||
|
"# Method 3: Simple exact equality check\n", |
||||||
|
"def are_embeddings_identical(emb1, emb2):\n", |
||||||
|
" return np.array_equal(np.array(emb1), np.array(emb2))\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"def run_advanced_comparisons():\n", |
||||||
|
" for i in range(0, 3):\n", |
||||||
|
" print(f\"\\n\\nComparing vector dimensions for dimension[{i}]....\\n\")\n", |
||||||
|
" print(\"Exactly identical? ---> \", are_embeddings_identical(dimensions1[i], dimensions2[i]))\n", |
||||||
|
" print(\"Cosine comparison: ---> \", compare_embeddings_cosine(dimensions1[i], dimensions2[i]))\n", |
||||||
|
" print(\"Euclidean comparison: ---> \", compare_embeddings_euclidean(dimensions1[i], dimensions2[i]))\n", |
||||||
|
"\n", |
||||||
|
"\n", |
||||||
|
"#run_advanced_comparisons()" |
||||||
|
] |
||||||
|
} |
||||||
|
], |
||||||
|
"metadata": { |
||||||
|
"kernelspec": { |
||||||
|
"display_name": "Python 3 (ipykernel)", |
||||||
|
"language": "python", |
||||||
|
"name": "python3" |
||||||
|
}, |
||||||
|
"language_info": { |
||||||
|
"codemirror_mode": { |
||||||
|
"name": "ipython", |
||||||
|
"version": 3 |
||||||
|
}, |
||||||
|
"file_extension": ".py", |
||||||
|
"mimetype": "text/x-python", |
||||||
|
"name": "python", |
||||||
|
"nbconvert_exporter": "python", |
||||||
|
"pygments_lexer": "ipython3", |
||||||
|
"version": "3.11.11" |
||||||
|
} |
||||||
|
}, |
||||||
|
"nbformat": 4, |
||||||
|
"nbformat_minor": 5 |
||||||
|
} |
@ -0,0 +1,636 @@ |
|||||||
|
{ |
||||||
|
"cells": [ |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "28a0673e-96b5-43f2-8a8b-bd033bf851b0", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Add a Validation Set\n", |
||||||
|
"\n", |
||||||
|
"In the lecture, we created a curated dataset with **400,000 training items** and **2,000 test items**, but we did not include a validation (dev) set. This notebook demonstrates how to take Ed Donner’s dataset, [ed-donner/pricer-data](https://huggingface.co/datasets/ed-donner/pricer-data), and add a dev set to it.\n", |
||||||
|
"\n", |
||||||
|
"> **Note**: This notebook heavily uses snippets from the lectures’ `day2.ipynb` of Week 6.\n", |
||||||
|
"\n", |
||||||
|
"**Download the Updated Dataset**: \n", |
||||||
|
"You can find the resulting dataset here: [antonawinkler/pricer-data](https://huggingface.co/datasets/antonawinkler/pricer-data)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "67cedf85-8125-4322-998e-9375fe745597", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# imports\n", |
||||||
|
"\n", |
||||||
|
"# Standard libraries\n", |
||||||
|
"import os\n", |
||||||
|
"import random\n", |
||||||
|
"from itertools import chain\n", |
||||||
|
"from collections import Counter, defaultdict\n", |
||||||
|
"\n", |
||||||
|
"# Third-party libraries\n", |
||||||
|
"from dotenv import load_dotenv\n", |
||||||
|
"from huggingface_hub import login\n", |
||||||
|
"from datasets import concatenate_datasets, load_dataset, Dataset, DatasetDict\n", |
||||||
|
"import matplotlib.pyplot as plt\n", |
||||||
|
"import numpy as np\n", |
||||||
|
"\n", |
||||||
|
"# Local modules\n", |
||||||
|
"from items import Item\n", |
||||||
|
"from loaders import ItemLoader\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "7390a6aa-79cb-4dea-b6d7-de7e4b13e472", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# environment\n", |
||||||
|
"\n", |
||||||
|
"load_dotenv()\n", |
||||||
|
"os.environ['HF_TOKEN'] = os.getenv('HF_TOKEN', 'your-key-if-not-using-env')" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "0732274a-aa6a-44fc-aee2-40dc8a8e4451", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Log in to HuggingFace\n", |
||||||
|
"\n", |
||||||
|
"hf_token = os.environ['HF_TOKEN']\n", |
||||||
|
"login(hf_token, add_to_git_credential=True)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "1adcf323-de9d-4c24-a9c3-d7ae554d06ca", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"%matplotlib inline" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "e2b6dc50-ac5c-4cf2-af2e-968ed8ef86d7", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Load the Original Dataset\n", |
||||||
|
"\n", |
||||||
|
"Load the original data from McAuley-Lab/Amazon-Reviews-2023." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "d1d06cd3-f3c2-44f0-a9f2-13b54ff8be5c", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"dataset_names = [\n", |
||||||
|
" \"Automotive\",\n", |
||||||
|
" \"Electronics\",\n", |
||||||
|
" \"Office_Products\",\n", |
||||||
|
" \"Tools_and_Home_Improvement\",\n", |
||||||
|
" \"Cell_Phones_and_Accessories\",\n", |
||||||
|
" \"Toys_and_Games\",\n", |
||||||
|
" \"Appliances\",\n", |
||||||
|
" \"Musical_Instruments\",\n", |
||||||
|
"]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "aa8fd0f0-509a-4298-8fcc-e499a061e1be", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"items = []\n", |
||||||
|
"for dataset_name in dataset_names:\n", |
||||||
|
" loader = ItemLoader(dataset_name)\n", |
||||||
|
" items.extend(loader.load())" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "bf6b6b66-4a4b-41c2-b366-1f598cf18351", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"# Create Balanced Dataset\n", |
||||||
|
"\n", |
||||||
|
"We apply the balancing algorithm from the course." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "549a4bad-abe7-4d36-ad77-fc70ba0f151c", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"slots = defaultdict(list)\n", |
||||||
|
"for item in items:\n", |
||||||
|
" slots[round(item.price)].append(item)\n", |
||||||
|
"\n", |
||||||
|
"np.random.seed(42)\n", |
||||||
|
"random.seed(42)\n", |
||||||
|
"sample = []\n", |
||||||
|
"for i in range(1, 1000):\n", |
||||||
|
" slot = slots[i]\n", |
||||||
|
" if i>=240:\n", |
||||||
|
" sample.extend(slot)\n", |
||||||
|
" elif len(slot) <= 1200:\n", |
||||||
|
" sample.extend(slot)\n", |
||||||
|
" else:\n", |
||||||
|
" weights = np.array([1 if item.category=='Automotive' else 5 for item in slot])\n", |
||||||
|
" weights = weights / np.sum(weights)\n", |
||||||
|
" selected_indices = np.random.choice(len(slot), size=1200, replace=False, p=weights)\n", |
||||||
|
" selected = [slot[i] for i in selected_indices]\n", |
||||||
|
" sample.extend(selected)\n", |
||||||
|
"\n", |
||||||
|
"print(f\"There are {len(sample):,} items in the sample\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "04280d2b-210a-4fad-9163-1b32a87fb990", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"The output I get is `There are 408,635 items in the sample`\n", |
||||||
|
"\n", |
||||||
|
"Since there are 400,000 items in the train set of ed-donner/pricer-data, we can aim for a 98/1/1 split." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "0d1e2836-0cae-4496-a5d4-d80bc14d566b", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Load Ed Donner's Pricer Data Set" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a84e5a71-fc44-4cdf-9bc2-c69f80b8ee94", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"dataset_ori = load_dataset(\"ed-donner/pricer-data\")\n", |
||||||
|
"train_ori = dataset_ori['train']\n", |
||||||
|
"test_ori = dataset_ori['test']" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "e9c5c877-3d30-4013-9d0f-1e490755afeb", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Observation 1: Order of the Data Has Changed\n", |
||||||
|
"\n", |
||||||
|
"`dataset_without_devset` should be a subset of `sample`. The order, however, can differ. Let us check this.\n", |
||||||
|
"\n", |
||||||
|
"I see different results for the two cells below, indicating that the order has changed." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "56ad8682-4d7f-4aad-9976-96eb6d9b4a5a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"sample[0].prompt" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "3e29a5ab-ca61-41cc-9b33-22d374681b85", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"train_ori[0]['text']" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "469a5b3c-c1a2-461d-a88d-27aa08905b31", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Observation 2: Duplicate Items\n", |
||||||
|
"\n", |
||||||
|
"As a further challenge, the dataset contains duplicates with identical scrubbed descriptions. For some of these duplicates the prices are identical too (I see 1774); for others they differ (I see 6747).\n", |
||||||
|
"\n", |
||||||
|
"> **Note**: Below we use `defaultdict(list)` instead of `set` because it allows us to inspect duplicates easily." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "94adffe8-edf6-4503-9f8f-34e4dfd29da9", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"PRICE_IS = \"\\n\\nPrice is $\"\n", |
||||||
|
"def get_key(text, price):\n", |
||||||
|
" prefix, price_is, _price_nearest_dollar = text.partition(PRICE_IS)\n", |
||||||
|
" return f\"{prefix}{price_is}{price}\"\n", |
||||||
|
"def get_key_without_price(text):\n", |
||||||
|
" prefix, price_is, _price_nearest_dollar = text.partition(PRICE_IS)\n", |
||||||
|
" return f\"{prefix}\"\n" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "a015ba1b-69e0-4651-850f-d93d3f078d16", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Identify duplicates by text+price\n", |
||||||
|
"train_ori_dict = defaultdict(list)\n", |
||||||
|
"for datapoint in train_ori:\n", |
||||||
|
" # Creates a key from the text and price (scrubbed)\n", |
||||||
|
" key = get_key(datapoint[\"text\"], datapoint[\"price\"])\n", |
||||||
|
" train_ori_dict[key].append(datapoint)\n", |
||||||
|
"\n", |
||||||
|
"# Number of exact duplicates (same text AND same price)\n", |
||||||
|
"exact_duplicates = len(train_ori) - len(train_ori_dict)\n", |
||||||
|
"print(f\"There are {exact_duplicates} duplicates with the same description and price.\")\n", |
||||||
|
"\n", |
||||||
|
"# Identify duplicates by text alone (ignoring price)\n", |
||||||
|
"train_ori_dict_no_price = defaultdict(list)\n", |
||||||
|
"for datapoint in train_ori:\n", |
||||||
|
" key_no_price = get_key_without_price(datapoint[\"text\"])\n", |
||||||
|
" train_ori_dict_no_price[key_no_price].append(datapoint)\n", |
||||||
|
"\n", |
||||||
|
"# Number of duplicates that differ in price but share the same text\n", |
||||||
|
"different_price_duplicates = len(train_ori_dict) - len(train_ori_dict_no_price)\n", |
||||||
|
"print(f\"In addition, there are {different_price_duplicates} data points where the description is duplicated but the price is different.\")\n", |
||||||
|
"\n", |
||||||
|
"# Total number of duplicates if we consider text alone\n", |
||||||
|
"overall_duplicates = len(train_ori) - len(train_ori_dict_no_price)\n", |
||||||
|
"print(f\"Overall number of duplicates: {overall_duplicates}\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "e577dd8b-be0f-4ab0-b45f-9d3459b1286a", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"test_ori_dict = defaultdict(list)\n", |
||||||
|
"for datapoint in test_ori:\n", |
||||||
|
" key = get_key(datapoint['text'], datapoint['price'])\n", |
||||||
|
" test_ori_dict[key].append(datapoint)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "0198fc23-0825-4ce1-a961-1d390d86cbdc", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"sample_dict = defaultdict(list)\n", |
||||||
|
"for datapoint in sample:\n", |
||||||
|
" key = get_key(datapoint.prompt, datapoint.price)\n", |
||||||
|
" sample_dict[key].append(datapoint)" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "37f24d22-51ef-472b-8c73-e969637fa925", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# Check if all data points in train_ori/test_ori are included in the new sample_dict.\n", |
||||||
|
"missing = []\n", |
||||||
|
"count_found = 0\n", |
||||||
|
"\n", |
||||||
|
"for datapoint in chain(train_ori, test_ori):\n", |
||||||
|
" key = get_key(datapoint[\"text\"], datapoint[\"price\"])\n", |
||||||
|
" if key not in sample_dict:\n", |
||||||
|
" missing.append(datapoint)\n", |
||||||
|
" else:\n", |
||||||
|
" count_found += 1\n", |
||||||
|
"\n", |
||||||
|
"print(f\"We found {count_found} datapoints in sample_dict.\")\n", |
||||||
|
"print(f\"We are missing {len(missing)} datapoints that are not present in sample_dict.\")" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "60c9d186-c688-4559-9b51-f0045d16829b", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"Expected output of the previous cell\n", |
||||||
|
"```\n", |
||||||
|
"We found 402000 datapoints in sample_dict.\n", |
||||||
|
"We are missing 0 datapoints that are not present in sample_dict.\n", |
||||||
|
"```" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "markdown", |
||||||
|
"id": "3b05e22d-a755-4ee5-a18b-620f7ab1df8f", |
||||||
|
"metadata": {}, |
||||||
|
"source": [ |
||||||
|
"## Add Data Points to the Test and Validation Sets\n", |
||||||
|
"\n", |
||||||
|
"Since we can match all data points in the original train and test sets from `ed-donner/pricer-data`, we’ll now incorporate any *unused* items from our balanced sample into the test set and create a new validation (dev) set. Our goal is to achieve a **98/1/1** split for train, validation, and test." |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "16638cf9-03c3-46bc-8116-cafdd9e23ac9", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"sample_not_used_yet = [datapoint for key in sample_dict.keys() - train_ori_dict.keys() - test_ori_dict.keys() for datapoint in sample_dict[key]]" |
||||||
|
] |
||||||
|
}, |
||||||
|
{ |
||||||
|
"cell_type": "code", |
||||||
|
"execution_count": null, |
||||||
|
"id": "58a593ad-29a1-4b35-9753-45db75e09666", |
||||||
|
"metadata": {}, |
||||||
|
"outputs": [], |
||||||
|
"source": [ |
||||||
|
"# As a sanity check, let us visually verify that the distribution of sample_not_used_yet is in line with the complete sample.\n", |
||||||
|
"\n",
"# Plot the distribution of prices in sample\n",
"def plot_price_distribution(items, name):\n",
" prices = [float(item.price) for item in items]\n",
" plt.figure(figsize=(15, 10))\n",
" plt.title(f\"{name} - Avg {sum(prices)/len(prices):.2f} and highest {max(prices):,.2f}\\n\")\n",
" plt.xlabel('Price ($)')\n",
" plt.ylabel('Count')\n",
" # see https://stackoverflow.com/questions/57026223/how-to-re-scale-the-counts-in-a-matplotlib-histogram\n",
" (counts, bins) = np.histogram(prices, bins=range(0, 1000, 10))\n",
" plt.hist(bins[:-1], color=\"darkblue\", bins=bins, weights=counts/len(prices))\n",
" plt.show() \n",
"\n",
"\n",
"def plot_category_distribution(items, name):\n",
" category_counts = Counter()\n",
" for item in items:\n",
" category_counts[item.category]+=1\n",
" categories = sorted(category_counts.keys())\n",
" counts = [category_counts[category] for category in categories]\n",
"\n",
" # plot a pie chart\n",
" plt.figure(figsize=(12, 10))\n",
" plt.pie(counts, labels=categories, autopct='%1.0f%%', startangle=90)\n",
" \n",
" # Add a circle at the center to create a donut chart (optional)\n",
" centre_circle = plt.Circle((0,0), 0.70, fc='white')\n",
" fig = plt.gcf()\n",
" fig.gca().add_artist(centre_circle)\n",
" plt.title(f'{name} - Categories')\n",
" \n",
" # Equal aspect ratio ensures that pie is drawn as a circle\n",
" plt.axis('equal') \n",
"\n",
" plt.show()\n",
"plot_price_distribution(sample, 'Complete set')\n",
"plot_price_distribution(sample_not_used_yet, 'Not used yet')\n",
"plot_category_distribution(sample, 'Complete set')\n",
"plot_category_distribution(sample_not_used_yet, 'Not used yet')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba252265-b976-426a-aefc-ebc93b153fd4",
"metadata": {},
"outputs": [],
"source": [
"# now add the unused items to the validation and test set\n",
"random.seed(42)\n",
"random.shuffle(sample_not_used_yet)\n",
"validation_items = sample_not_used_yet[:4000]\n",
"added_test_items = sample_not_used_yet[4000:]\n",
"\n",
"# create Huggingface dataset\n",
"validation_dataset = Dataset.from_dict({\"text\": [item.prompt for item in validation_items], \"price\": [item.price for item in validation_items]})\n",
"added_test_dataset = Dataset.from_dict({\"text\": [item.prompt for item in added_test_items], \"price\": [item.price for item in added_test_items]})\n",
"\n",
"dataset = DatasetDict({\n",
" \"train\": train_ori,\n",
" \"test\": concatenate_datasets([test_ori, added_test_dataset]),\n",
" \"validation\": validation_dataset,\n",
"})\n",
"\n",
"print(f\"Divided into a training set of {dataset['train'].num_rows:,} items, a validation set of {dataset['validation'].num_rows:,} items, and a test set of {dataset['test'].num_rows:,} items\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c39ac5d7-84f8-4f7d-98e1-d24651ba3a80",
"metadata": {},
"outputs": [],
"source": [
||||||
|
"# If you're ready to push to the hub, fill in the dots with your HF username\n",
"\n",
"HF_USER = ...\n",
"DATASET_NAME = f\"{HF_USER}/pricer-data\"\n",
"dataset.push_to_hub(DATASET_NAME, private=True)"
]
},
{
"cell_type": "markdown",
"id": "3fcb2492-ef2a-468e-8bf1-deb18eef4d9c",
"metadata": {},
"source": [
"## Use of Validation Sets\n",
"\n",
||||||
|
"When you train your model in Week 7, you can load the validation split and pass it to the trainer so it is evaluated periodically during fine-tuning:\n",
"\n",
"```python\n",
"# load the train and validation set\n",
"train = load_dataset(DATASET_NAME, split='train[:100%]') # or less than 100%\n",
"validation = load_dataset(DATASET_NAME, split='validation[:100%]') # or less than 100% \n",
"\n",
"# Define training parameters\n",
"train_parameters = SFTConfig(\n",
" eval_strategy=\"steps\", # or \"epoch\"\n",
" eval_steps=EVAL_STEPS,\n",
" ...\n",
")\n",
"\n",
"# Initialize fine-tuning with validation set\n",
"fine_tuning = SFTTrainer(\n",
" eval_dataset=validation,\n",
" ...\n",
")\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "bceb4407-d91d-4731-9e96-189f6f953cbc",
"metadata": {},
"source": [
"## A Closer Look at the Duplicates\n",
"\n",
"We have now created a dataset that includes a validation set and additional test data. During this process, we observed that **2% of the data contains duplicates**, where the scrubbed descriptions are identical.\n",
"\n",
"Duplicates can contribute to model overfitting. However, since only **2% of the dataset is duplicated**, the impact is likely minimal. Moreover, many of these duplicates actually refer to different physical objects rather than being true duplicates.\n",
"\n",
"### False Duplicates\n",
"\n",
"The “duplicates” we observe are often not duplicates in the original dataset. Minor differences in product descriptions may be removed by the scrubbing process, leading to items that *appear* identical but aren’t. For example:\n",
"\n",
"```\n",
"<RinoGear Screen Protector Designed for Sony Xperia XZ Screen Protector Case Friendly Accessories Flexible Full Coverage Clear TPU Film = $0.95>\n",
"<RinoGear (2-Pack) Screen Protector Designed for Sony Xperia XZ Screen Protector Case Friendly Accessories Flexible Full Coverage Clear TPU Film = $2.95>\n",
"```\n",
"The \"(2-Pack)\" qualifier is removed by the scrub method, so the two listings appear identical after scrubbing.\n",
"\n",
"Similarly:\n",
"```\n",
"[<EBC Brakes USR7115 USR Series Sport Slotted Rotor = $31.22>,\n",
" <EBC Brakes USR7314 USR Series Sport Slotted Rotor = $71.46>,\n",
" <EBC Brakes USR7409 USR Series Sport Slotted Rotor = $88.67>,\n",
"...\n",
" <EBC Brakes USR7305 USR Series Sport Slotted Rotor = $406.55>,\n",
" <EBC Brakes USR7384 USR Series Sport Slotted Rotor = $413.61>,\n",
" <EBC Brakes USR1602 USR Series Sport Slotted Rotor = $615.1>]\n",
"```\n",
"These all represent different rotor models.\n",
"\n",
"**Even when both the scrubbed text and the price are identical**, the items may still refer to distinct products. For instance:\n",
"```\n",
"<5304486359 Refrigerator Door Handles Set Replacement for Frigidaire FFTR1821QW5A Refrigerator - Compatible with 5304486359 White Door Handles - UpStart Components Brand = $17.99>\n",
"<5304486359 Refrigerator Door Handles Set Replacement for Frigidaire FFTR1831QP1 Refrigerator - Compatible with 5304486359 White Door Handles - UpStart Components Brand = $17.99>\n",
"```\n",
"\n",
"### True Duplicates\n",
"Finding *true* duplicates—where the scrubbed text, price, and underlying real-world product match—seems relatively rare. The following items in the **Appliances** set, for instance, likely refer to the same physical product:\n",
"```python\n",
"{'main_category': 'Tools & Home Improvement',\n",
" 'title': 'Whirlpool 8318084 Lid Switch for Washer',\n",
" 'average_rating': 4.6,\n",
" 'rating_number': 511,\n",
" 'features': ['Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0',\n",
" 'This products adds a great value',\n",
" 'This product is manufactured in United States',\n",
" 'Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0',\n",
" 'Whirlpool 1CLSQ9549PG1, Whirlpool 1CLSQ9549PW0',\n",
" 'Whirlpool 1CLSQ9549PW1, Whirlpool 1CLSR7010PQ0',\n",
" 'Whirlpool 1CLSR7010PQ1, Whirlpool 1CLSR7300PQ0',\n",
" 'Genuine Replacement Part'],\n",
" 'description': ['Product Description',\n",
" 'Part Number 8318084 (AP3180933) replaces 1018522, AH886960, EA886960, PS886960., Easy to use and handle. This products adds a great value This product is manufactured in United States.',\n",
" 'From the Manufacturer',\n",
" 'Whirlpool 8318084 Lid Switch for Washer. Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0, Whirlpool 1CLSQ9549PG1, Whirlpool 1CLSQ9549PW0, Whirlpool 1CLSQ9549PW1, Whirlpool 1CLSR7010PQ0, Whirlpool 1CLSR7010PQ1, Whirlpool 1CLSR7300PQ0. Genuine Replacement Part.'],\n",
" 'price': '25.55',\n",
" 'images': {'hi_res': [None],\n",
" 'large': ['https://m.media-amazon.com/images/I/31QE91zX0mL._AC_.jpg'],\n",
" 'thumb': ['https://m.media-amazon.com/images/I/31QE91zX0mL._AC_US75_.jpg'],\n",
" 'variant': ['MAIN']},\n",
" 'videos': {'title': [\"Your Washer Won't Spin?\", '8318084 Washer Lid Switch'],\n",
" 'url': ['https://www.amazon.com/vdp/09c00a975b4b46198b5703483f424981?ref=dp_vse_rvc_0',\n",
" 'https://www.amazon.com/vdp/3c9b3dc3c93444978d542af3fab13c49?ref=dp_vse_rvc_1'],\n",
" 'user_id': ['', '']},\n",
" 'store': 'Whirlpool',\n",
" 'categories': ['Appliances',\n",
" 'Parts & Accessories',\n",
" 'Washer Parts & Accessories'],\n",
" 'details': '{\"Manufacturer\": \"Whirlpool\", \"Part Number\": \"8318084\", \"Item Weight\": \"1.34 ounces\", \"Product Dimensions\": \"3 x 2 x 2 inches\", \"Item model number\": \"8318084\", \"Is Discontinued By Manufacturer\": \"No\", \"Item Package Quantity\": \"1\", \"Included Components\": \"Kkk\", \"Batteries Included?\": \"No\", \"Batteries Required?\": \"No\", \"Warranty Description\": \"Kk\", \"Best Sellers Rank\": {\"Tools & Home Improvement\": 231142, \"Washer Parts & Accessories\": 1074}, \"Date First Available\": \"August 7, 2008\"}',\n",
" 'parent_asin': 'B01CT25N26',\n",
" 'bought_together': None,\n",
" 'subtitle': None,\n",
" 'author': None}\n",
"\n",
"{'main_category': 'Tools & Home Improvement',\n",
" 'title': 'Whirlpool 8318084 Lid Switch for Washer',\n",
" 'average_rating': 4.6,\n",
" 'rating_number': 514,\n",
" 'features': ['Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0',\n",
" 'This products adds a great value',\n",
" 'This product is manufactured in United States',\n",
" 'Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0',\n",
" 'Whirlpool 1CLSQ9549PG1, Whirlpool 1CLSQ9549PW0',\n",
" 'Whirlpool 1CLSQ9549PW1, Whirlpool 1CLSR7010PQ0',\n",
" 'Whirlpool 1CLSR7010PQ1, Whirlpool 1CLSR7300PQ0',\n",
" 'Genuine Replacement Part'],\n",
" 'description': ['Product Description',\n",
" 'Part Number 8318084 (AP3180933) replaces 1018522, AH886960, EA886960, PS886960., Easy to use and handle. This products adds a great value This product is manufactured in United States.',\n",
" 'From the Manufacturer',\n",
" 'Whirlpool 8318084 Lid Switch for Washer. Works with the following models: Whirlpool 1CLBR5432PQ0, Whirlpool 1CLBR5432PQ1, Whirlpool 1CLSQ9549PG0, Whirlpool 1CLSQ9549PG1, Whirlpool 1CLSQ9549PW0, Whirlpool 1CLSQ9549PW1, Whirlpool 1CLSR7010PQ0, Whirlpool 1CLSR7010PQ1, Whirlpool 1CLSR7300PQ0. Genuine Replacement Part.'],\n",
" 'price': '25.55',\n",
" 'images': {'hi_res': [None],\n",
" 'large': ['https://m.media-amazon.com/images/I/31QE91zX0mL._AC_.jpg'],\n",
" 'thumb': ['https://m.media-amazon.com/images/I/31QE91zX0mL._AC_US75_.jpg'],\n",
" 'variant': ['MAIN']},\n",
" 'videos': {'title': ['AMI PARTS,Parts Specialist'],\n",
" 'url': ['https://www.amazon.com/vdp/09a12ea79b1a4081a18909825437760b?ref=dp_vse_rvc_0'],\n",
" 'user_id': ['']},\n",
" 'store': 'Whirlpool',\n",
" 'categories': ['Appliances',\n",
" 'Parts & Accessories',\n",
" 'Washer Parts & Accessories'],\n",
" 'details': '{\"Manufacturer\": \"Whirlpool\", \"Part Number\": \"8318084\", \"Item Weight\": \"1.34 ounces\", \"Product Dimensions\": \"3 x 2 x 2 inches\", \"Item model number\": \"8318084\", \"Is Discontinued By Manufacturer\": \"No\", \"Item Package Quantity\": \"1\", \"Included Components\": \"kkk\", \"Batteries Included?\": \"No\", \"Batteries Required?\": \"No\", \"Warranty Description\": \"kk\", \"Best Sellers Rank\": {\"Tools & Home Improvement\": 166821, \"Washer Parts & Accessories\": 684}, \"Date First Available\": \"August 7, 2008\"}',\n",
" 'parent_asin': 'B0050O1UR8',\n",
" 'bought_together': None,\n",
" 'subtitle': None,\n",
" 'author': None}\n",
"```\n",
"\n",
"### Takeaway\n",
"2% of the dataset contains duplicates, but most of these represent different physical objects. It does not appear to be worthwhile to remove them from the dataset. In fact, it may be better to keep them so the data remains representative.\n"
]
},
{
"cell_type": "markdown",
"id": "0a1d7b72-a1ab-4fc4-9065-738bd11f8058",
"metadata": {},
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "403a42a2-3913-4905-9475-97509fe86c5e",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@ -0,0 +1,71 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "00f05a05-d989-4bf7-b1f1-9418e25ecd58",
"metadata": {},
"source": [
"# The Product Pricer Continued\n",
"\n",
"I tested numerous frontier models from OpenAI, Anthropic, Google, and others via the Groq API.\n",
"\n",
"Here are the results of all tests, including those from Day 3, showing how the frontier models stacked up.\n",
"\n",
"They are ordered by Error from best to worst.\n",
"\n",
"I ran each model once on 2025-03-09.\n",
"\n",
"Main repo at [https://github.com/kellewic/llm](https://github.com/kellewic/llm)"
]
},
{
"cell_type": "markdown",
"id": "a69cc81a-e582-4d04-8e12-fd83e120a7d1",
"metadata": {},
"source": [
"| Rank | Model | Error ($) | RMSLE | Hits (%) | Chart Link |\n",
"|------|-----------------------------------|-----------|-------|----------|------------|\n",
"| 1 | **gemini-2.0-flash** | 73.48 | 0.56 | 56.4% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/gemini-2.0-flash.png) |\n",
"| 2 | **gpt-4o-2024-08-06** | 75.66 | 0.89 | 57.6% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/gpt-4o-2024-08-06.png) |\n",
"| 3 | **gemini-2.0-flash-lite** | 76.42 | 0.61 | 56.0% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/gemini-2.0-flash-lite.png) |\n",
"| 4 | **gpt-4o-mini (original)** | 81.61 | 0.60 | 51.6% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/gpt-4o-mini.png) |\n",
"| 5 | **claude-3-5-haiku-20241022** | 85.25 | 0.62 | 50.8% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/claude-3-5-haiku-20241022.png) |\n",
"| 6 | **claude-3-5-sonnet-20241022** | 88.97 | 0.61 | 49.2% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/claude-3-5-sonnet-20241022.png) |\n",
"| 7 | **claude-3-7-sonnet-20250219** | 89.41 | 0.62 | 55.2% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/claude-3-7-sonnet-20250219.png) |\n",
"| 8 | **mistral-saba-24b** | 98.02 | 0.82 | 44.8% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/mistral-saba-24b.png) |\n",
"| 9 | **llama-3.3-70b-versatile** | 98.24 | 0.70 | 44.8% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/llama-3.3-70b-versatile.png) |\n",
"| 10 | **GPT-4o-mini (fine-tuned)** | 101.49 | 0.81 | 41.2% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_tuning/gpt_fine_tuned.png) |\n",
"| 11 | **Random Forest Regressor** | 105.10 | 0.89 | 37.6% | [📊](https://github.com/kellewic/llm/blob/main/basic_model_training/random_forest_pricer.png) |\n",
"| 12 | **deepseek-r1-distill-llama-70b** | 109.09 | 0.67 | 48.4% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/deepseek-r1-distill-llama-70b.png) |\n",
"| 13 | **Linear SVR** | 110.91 | 0.92 | 29.2% | [📊](https://github.com/kellewic/llm/blob/main/basic_model_training/svr_pricer.png) |\n",
"| 14 | **Word2Vec LR** | 113.14 | 1.05 | 22.8% | [📊](https://github.com/kellewic/llm/blob/main/basic_model_training/word2vec_lr_pricer.png) |\n",
"| 15 | **Bag of Words LR** | 113.60 | 0.99 | 24.8% | [📊](https://github.com/kellewic/llm/blob/main/basic_model_training/bow_lr_pricer.png) |\n",
"| 16 | **Human Performance** | 126.55 | 1.00 | 32.0% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/human_pricer.png) |\n",
"| 17 | **Average** | 137.17 | 1.19 | 15.2% | [📊](https://github.com/kellewic/llm/blob/main/basic_model_training/average_pricer.png) |\n",
"| 18 | **Linear Regression** | 139.20 | 1.17 | 15.6% | [📊](https://github.com/kellewic/llm/blob/main/basic_model_training/linear_regression_pricer.png) |\n",
"| 19 | **deepseek-r1-distill-qwen-32b** | 151.59 | 0.80 | 38.4% | [📊](https://github.com/kellewic/llm/blob/main/frontier_model_test/deepseek-r1-distill-qwen-32b.png) |"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}