Compare commits
No commits in common. 'main' and '3way' have entirely different histories.
398 changed files with 861 additions and 106346 deletions
@@ -1,214 +0,0 @@
# LLM Engineering - Master AI and LLMs

## Setup instructions for Linux

Welcome, Linux people!

I should reveal that I had ChatGPT draft this document based on the Mac instructions, then went through it myself to check and tweak some sections. If any of these instructions don't work for your distro, please do reach out and let me know - we'll figure it out, and I'll update the instructions for the future.

___

Setting up a powerful environment to work at the forefront of AI requires some effort, but these instructions should guide you through smoothly. If you encounter any issues, don't hesitate to reach out - I'm here to ensure you get set up without hassle.

Email: ed@edwarddonner.com
LinkedIn: https://www.linkedin.com/in/eddonner/

For this setup, we'll use Anaconda to create a reliable environment for your AI work. Alternatively, I've provided a lighter option if you prefer to avoid Anaconda. Let's get started!

### Part 1: Clone the Repo

This gets you a local copy of the code on your machine.

1. **Install Git** if not already installed:

   - Open your terminal.
   - Run `git --version`. If Git isn't installed, follow the instructions for your distribution:
     - Debian/Ubuntu: `sudo apt update && sudo apt install git`
     - Fedora: `sudo dnf install git`
     - Arch: `sudo pacman -S git`

2. **Navigate to your projects folder:**

   If you have a specific folder for projects, navigate to it using the `cd` command. For example:
   `cd ~/Projects`

   If you don't have a projects folder, you can create one:
   ```
   mkdir ~/Projects
   cd ~/Projects
   ```

3. **Clone the repository:**

   Run the following command in your terminal:
   `git clone https://github.com/ed-donner/llm_engineering.git`

   This creates a new directory `llm_engineering` within your Projects folder and downloads the course code. Use `cd llm_engineering` to enter the directory. This is your "project root directory."

### Part 2: Install Anaconda environment

If this Part 2 gives you any trouble, refer to the alternative Part 2B below.

1. **Install Anaconda:**

   - Download the Linux installer from https://www.anaconda.com/download.
   - Open a terminal and navigate to the folder containing the downloaded `.sh` file.
   - Run the installer: `bash Anaconda3*.sh` and follow the prompts. Note: this requires 5+ GB of disk space.

2. **Set up the environment:**

   - Open a terminal and navigate to the "project root directory" using:
     `cd ~/Projects/llm_engineering` (adjust the path as necessary).
   - Run `ls` to confirm the presence of subdirectories for each week of the course.
   - Create the environment: `conda env create -f environment.yml`

     This may take several minutes (even up to an hour for new Anaconda users). If it takes much longer or errors occur, proceed to Part 2B.

   - Activate the environment: `conda activate llms`.

     You should see `(llms)` in your prompt, indicating successful activation.

   On some distributions, these extra steps may be required so that the llms environment is visible in Jupyter Lab:

   `conda install ipykernel`
   `python -m ipykernel install --user --name=llmenv`

3. **Start Jupyter Lab:**

   From the `llm_engineering` folder, run: `jupyter lab`.

   Jupyter Lab should open in your browser. Close it after confirming it works, then proceed to Part 3.

### Part 2B - Alternative to Part 2 if Anaconda gives you trouble

1. **Install Python 3.11 (if not already installed):**

   - Debian/Ubuntu: `sudo apt update && sudo apt install python3.11`
   - Fedora: `sudo dnf install python3.11`
   - Arch: `sudo pacman -S python`

2. **Navigate to the project root directory:**

   Use `cd ~/Projects/llm_engineering` and verify the folder contents with `ls`.

3. **Create a virtual environment:**

   Run: `python3.11 -m venv llms`

4. **Activate the virtual environment:**

   Use: `source llms/bin/activate`

   Your prompt should now display `(llms)`, indicating the environment is active.

5. **Install required packages:**

   Run: `python -m pip install --upgrade pip` followed by `pip install -r requirements.txt`.

   If issues occur, try the fallback:
   `pip install --retries 5 --timeout 15 --no-cache-dir --force-reinstall -r requirements.txt`

###### Arch users:

Some system updates break Python dependencies - most notably numpy, scipy, and gensim. A few commands that can help:

`sudo pacman -S python-numpy python-pandas python-scipy`

This is not recommended, though, as pacman has no integration with pip (as far as I know).

If you hit build conflicts, another possible fix is to update the build toolchain:

`sudo pacman -S gcc gcc-fortran python-setuptools python-wheel`

*Note:* gensim breaks with recent versions of scipy. You can either pin scipy to an older version, or remove gensim from `requirements.txt` for the moment. (See: https://aur.archlinux.org/packages/python-gensim)

Lastly, so that the kernel is visible in Jupyter Lab after step 6:

`python -m ipykernel install --user --name=llmenv`
`ipython kernel install --user --name=llmenv`

6. **Start Jupyter Lab:**

   From the `llm_engineering` folder, run: `jupyter lab`.

### Part 3 - OpenAI key (OPTIONAL but recommended)

Particularly during weeks 1 and 2 of the course, you'll be writing code to call the APIs of Frontier models (models at the forefront of AI).

For week 1, you'll only need OpenAI, and you can add the others later if you wish.

1. Create an OpenAI account if you don't have one by visiting:
   https://platform.openai.com/

2. OpenAI asks for a minimum credit to use the API. For me in the US, it's $5. The API calls will spend against this $5. On this course, we'll only use a small portion of it. I do recommend you make the investment, as you'll be able to put it to excellent use. But if you'd prefer not to pay for the API, I give you an alternative in the course using Ollama.

   You can add your credit balance to OpenAI at Settings > Billing:
   https://platform.openai.com/settings/organization/billing/overview

   I recommend you disable the automatic recharge!

3. Create your API key

   The webpage where you set up your OpenAI key is at https://platform.openai.com/api-keys - press the green 'Create new secret key' button and press 'Create secret key'. Keep a record of the API key somewhere private; you won't be able to retrieve it from the OpenAI screens in the future. It should start `sk-proj-`.

In week 2 we will also set up keys for Anthropic and Google, which you can do here when we get there.
- Claude API at https://console.anthropic.com/ from Anthropic
- Gemini API at https://ai.google.dev/gemini-api from Google

Later in the course you'll be using the fabulous HuggingFace platform; an account is available for free at https://huggingface.co - you can create an API token from the Avatar menu >> Settings >> Access Tokens.

And in Week 6/7 you'll be using the terrific Weights & Biases at https://wandb.ai to watch over your training batches. Accounts are also free, and you can set up a token in a similar way.

### Part 4 - .env file

When you have these keys, please create a new file called `.env` in your project root directory. The filename needs to be exactly the four characters ".env" rather than "my-keys.env" or ".env.txt". Here's how to do it:

1. Open a terminal.

2. Navigate to the "project root directory" using `cd ~/Projects/llm_engineering` (replace this path with the actual path to the llm_engineering directory, your locally cloned version of the repo).

3. Create the .env file with:

   `nano .env`

4. Then type your API keys into nano, replacing xxxx with your API key (starting `sk-proj-`):

   ```
   OPENAI_API_KEY=xxxx
   ```

   If you have other keys, you can add them too, or come back to this in future weeks:
   ```
   GOOGLE_API_KEY=xxxx
   ANTHROPIC_API_KEY=xxxx
   DEEPSEEK_API_KEY=xxxx
   HF_TOKEN=xxxx
   ```

5. Save the file:

   Control + O
   Enter (to confirm saving the file)
   Control + X to exit the editor

6. Use this command to list files in your project root directory:

   `ls -a`

   And confirm that the `.env` file is there.
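If you'd rather script this step than use nano, here is a minimal Python sketch that writes the same file - this is just an optional alternative to steps 3-5 above. Run it from the project root, and replace the xxxx placeholder with your real key:

```python
# Write a minimal .env file in the current directory (overwrites any existing one)
from pathlib import Path

env_file = Path(".env")
env_file.write_text("OPENAI_API_KEY=xxxx\n")

# Confirm the file was created and starts with the expected key name
assert env_file.is_file()
print(env_file.read_text().splitlines()[0].split("=")[0])  # → OPENAI_API_KEY
```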

This file won't appear in Jupyter Lab because Jupyter hides files starting with a dot. The file is listed in `.gitignore`, so it won't get checked in, and your keys stay safe.
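As a quick sanity check from Python, you can read the key back with nothing but the standard library. The helper name `read_env_key` is my own for this sketch - the course code itself loads the file with the python-dotenv package instead:

```python
from pathlib import Path
from typing import Optional

def read_env_key(path: str = ".env", name: str = "OPENAI_API_KEY") -> Optional[str]:
    """Return the value of `name` from a KEY=value style .env file, or None."""
    env_file = Path(path)
    if not env_file.is_file():
        return None
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if line.startswith(f"{name}="):
            return line.split("=", 1)[1].strip()
    return None

key = read_env_key()
print("Key found" if key and key.startswith("sk-proj-") else "No usable key found")
```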

### Part 5 - Showtime!!

1. Open a terminal.
2. Navigate to the "project root directory" using:
   `cd ~/Projects/llm_engineering`.
3. Activate your environment:
   - If you used Anaconda: `conda activate llms`
   - If you used the alternative: `source llms/bin/activate`

You should see `(llms)` in your prompt. Run `jupyter lab` to get started.

Enjoy your journey into mastering AI and LLMs!

@@ -1,54 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "73287ed4-81e3-496a-9e47-f0e8c3770ce9",
   "metadata": {},
   "source": [
    "# Gathering Essential Diagnostic Information\n",
    "\n",
    "## Please run this next cell to gather some important data\n",
    "\n",
    "Please run the next cell; it should take a minute or so to run (mostly the network test).\n",
    "Then email me the output of the last cell to ed@edwarddonner.com.\n",
    "Alternatively: this will create a file called report.txt - just attach the file to your email."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ed8056e8-efa2-4b6f-a4bb-e7ceb733c517",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run my diagnostics report to collect key information for debugging\n",
    "# Please email me the results. Either copy & paste the output, or attach the file report.txt\n",
    "\n",
    "!pip install -q requests speedtest-cli psutil setuptools\n",
    "from diagnostics import Diagnostics\n",
    "Diagnostics().run()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@@ -1,419 +0,0 @@
import os
import sys
import platform
import subprocess
import shutil
import time
import ssl
import tempfile
from pathlib import Path
from datetime import datetime


class Diagnostics:

    FILENAME = 'report.txt'

    def __init__(self):
        self.errors = []
        self.warnings = []
        if os.path.exists(self.FILENAME):
            os.remove(self.FILENAME)

    def log(self, message):
        print(message)
        with open(self.FILENAME, 'a', encoding='utf-8') as f:
            f.write(message + "\n")

    def start(self):
        now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        self.log(f"Starting diagnostics at {now}\n")

    def end(self):
        now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        self.log(f"\n\nCompleted diagnostics at {now}\n")
        print("\nPlease send these diagnostics to me at ed@edwarddonner.com")
        print(f"Either copy & paste the above output into an email, or attach the file {self.FILENAME} that has been created in this directory.")

    def _log_error(self, message):
        self.log(f"ERROR: {message}")
        self.errors.append(message)

    def _log_warning(self, message):
        self.log(f"WARNING: {message}")
        self.warnings.append(message)

    def run(self):
        self.start()
        self._step1_system_info()
        self._step2_check_files()
        self._step3_git_repo()
        self._step4_check_env_file()
        self._step5_anaconda_check()
        self._step6_virtualenv_check()
        self._step7_network_connectivity()
        self._step8_environment_variables()
        self._step9_additional_diagnostics()

        if self.warnings:
            self.log("\n===== Warnings Found =====")
            self.log("The following warnings were detected. They might not prevent the program from running but could cause unexpected behavior:")
            for warning in self.warnings:
                self.log(f"- {warning}")

        if self.errors:
            self.log("\n===== Errors Found =====")
            self.log("The following critical issues were detected. Please address them before proceeding:")
            for error in self.errors:
                self.log(f"- {error}")

        if not self.errors and not self.warnings:
            self.log("\n✅ All diagnostics passed successfully!")

        self.end()

    def _step1_system_info(self):
        self.log("===== System Information =====")
        try:
            system = platform.system()
            self.log(f"Operating System: {system}")

            if system == "Windows":
                release, version, csd, ptype = platform.win32_ver()
                self.log(f"Windows Release: {release}")
                self.log(f"Windows Version: {version}")
            elif system == "Darwin":
                release, version, machine = platform.mac_ver()
                self.log(f"MacOS Version: {release}")
            else:
                self.log(f"Platform: {platform.platform()}")

            self.log(f"Architecture: {platform.architecture()}")
            self.log(f"Machine: {platform.machine()}")
            self.log(f"Processor: {platform.processor()}")

            try:
                import psutil
                ram = psutil.virtual_memory()
                total_ram_gb = ram.total / (1024 ** 3)
                available_ram_gb = ram.available / (1024 ** 3)
                self.log(f"Total RAM: {total_ram_gb:.2f} GB")
                self.log(f"Available RAM: {available_ram_gb:.2f} GB")

                if available_ram_gb < 2:
                    self._log_warning(f"Low available RAM: {available_ram_gb:.2f} GB")
            except ImportError:
                self._log_warning("psutil module not found. Cannot determine RAM information.")

            total, used, free = shutil.disk_usage(os.path.expanduser("~"))
            free_gb = free / (1024 ** 3)
            self.log(f"Free Disk Space: {free_gb:.2f} GB")

            if free_gb < 5:
                self._log_warning(f"Low disk space: {free_gb:.2f} GB free")

        except Exception as e:
            self._log_error(f"System information check failed: {e}")

    def _step2_check_files(self):
        self.log("\n===== File System Information =====")
        try:
            current_dir = os.getcwd()
            self.log(f"Current Directory: {current_dir}")

            # Check write permissions
            test_file = Path(current_dir) / ".test_write_permission"
            try:
                test_file.touch(exist_ok=True)
                test_file.unlink()
                self.log("Write permission: OK")
            except Exception as e:
                self._log_error(f"No write permission in current directory: {e}")

            self.log("\nFiles in Current Directory:")
            try:
                for item in sorted(os.listdir(current_dir)):
                    self.log(f"  - {item}")
            except Exception as e:
                self._log_error(f"Cannot list directory contents: {e}")

        except Exception as e:
            self._log_error(f"File system check failed: {e}")

    def _step3_git_repo(self):
        self.log("\n===== Git Repository Information =====")
        try:
            result = subprocess.run(['git', 'rev-parse', '--show-toplevel'],
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
            if result.returncode == 0:
                git_root = result.stdout.strip()
                self.log(f"Git Repository Root: {git_root}")

                result = subprocess.run(['git', 'rev-parse', 'HEAD'],
                                        stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
                if result.returncode == 0:
                    self.log(f"Current Commit: {result.stdout.strip()}")
                else:
                    self._log_warning(f"Could not get current commit: {result.stderr.strip()}")

                result = subprocess.run(['git', 'remote', 'get-url', 'origin'],
                                        stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
                if result.returncode == 0:
                    self.log(f"Remote Origin: {result.stdout.strip()}")
                else:
                    self._log_warning("No remote 'origin' configured")
            else:
                self._log_warning("Not a git repository")
        except FileNotFoundError:
            self._log_warning("Git is not installed or not in PATH")
        except Exception as e:
            self._log_error(f"Git check failed: {e}")

    def _step4_check_env_file(self):
        self.log("\n===== Environment File Check =====")
        try:
            result = subprocess.run(['git', 'rev-parse', '--show-toplevel'],
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
            if result.returncode == 0:
                git_root = result.stdout.strip()
                env_path = os.path.join(git_root, '.env')

                if os.path.isfile(env_path):
                    self.log(f".env file exists at: {env_path}")
                    try:
                        with open(env_path, 'r') as f:
                            has_api_key = any(line.strip().startswith('OPENAI_API_KEY=') for line in f)
                        if has_api_key:
                            self.log("OPENAI_API_KEY found in .env file")
                        else:
                            self._log_warning("OPENAI_API_KEY not found in .env file")
                    except Exception as e:
                        self._log_error(f"Cannot read .env file: {e}")
                else:
                    self._log_warning(".env file not found in project root")

                # Check for additional .env files
                for root, _, files in os.walk(git_root):
                    if '.env' in files and os.path.join(root, '.env') != env_path:
                        self._log_warning(f"Additional .env file found at: {os.path.join(root, '.env')}")
            else:
                self._log_warning("Git root directory not found. Cannot perform .env file check.")
        except FileNotFoundError:
            self._log_warning("Git is not installed or not in PATH")
        except Exception as e:
            self._log_error(f"Environment file check failed: {e}")

    def _step5_anaconda_check(self):
        self.log("\n===== Anaconda Environment Check =====")
        try:
            conda_prefix = os.environ.get('CONDA_PREFIX')
            if conda_prefix:
                self.log("Anaconda environment is active:")
                self.log(f"Environment Path: {conda_prefix}")
                self.log(f"Environment Name: {os.path.basename(conda_prefix)}")

                conda_exe = os.environ.get('CONDA_EXE', 'conda')
                result = subprocess.run([conda_exe, '--version'],
                                        stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
                if result.returncode == 0:
                    self.log(f"Conda Version: {result.stdout.strip()}")
                else:
                    self._log_warning("Could not determine Conda version")

                self._check_python_packages()
            else:
                self.log("No active Anaconda environment detected")
        except Exception as e:
            self._log_error(f"Anaconda environment check failed: {e}")

    def _step6_virtualenv_check(self):
        self.log("\n===== Virtualenv Check =====")
        try:
            virtual_env = os.environ.get('VIRTUAL_ENV')
            if virtual_env:
                self.log("Virtualenv is active:")
                self.log(f"Environment Path: {virtual_env}")
                self.log(f"Environment Name: {os.path.basename(virtual_env)}")

                self._check_python_packages()
            else:
                self.log("No active virtualenv detected")

            if not virtual_env and not os.environ.get('CONDA_PREFIX'):
                self._log_warning("Neither virtualenv nor Anaconda environment is active")
        except Exception as e:
            self._log_error(f"Virtualenv check failed: {e}")

    def _check_python_packages(self):
        self.log("\nPython Environment:")
        self.log(f"Python Version: {sys.version}")
        self.log(f"Python Executable: {sys.executable}")

        required_packages = ['openai', 'python-dotenv', 'requests', 'gradio', 'transformers']

        try:
            import pkg_resources
            installed = {pkg.key: pkg.version for pkg in pkg_resources.working_set}

            self.log("\nRequired Package Versions:")
            for package in required_packages:
                if package in installed:
                    self.log(f"{package}: {installed[package]}")
                else:
                    self._log_error(f"Required package '{package}' is not installed")

            # Check for potentially conflicting packages
            problem_pairs = [
                ('openai', 'openai-python'),
                ('python-dotenv', 'dotenv')
            ]

            for pkg1, pkg2 in problem_pairs:
                if pkg1 in installed and pkg2 in installed:
                    self._log_warning(f"Potentially conflicting packages: {pkg1} and {pkg2}")
        except ImportError:
            self._log_error("Could not import 'pkg_resources' to check installed packages")
        except Exception as e:
            self._log_error(f"Package check failed: {e}")

    def _step7_network_connectivity(self):
        self.log("\n===== Network Connectivity Check =====")
        try:
            self.log(f"SSL Version: {ssl.OPENSSL_VERSION}")

            import requests
            import speedtest  # Importing the speedtest-cli library

            # Basic connectivity check
            urls = [
                'https://www.google.com',
                'https://www.cloudflare.com'
            ]

            connected = False
            for url in urls:
                try:
                    start_time = time.time()
                    response = requests.get(url, timeout=10)
                    elapsed_time = time.time() - start_time
                    response.raise_for_status()
                    self.log(f"✓ Connected to {url}")
                    self.log(f"  Response time: {elapsed_time:.2f}s")

                    if elapsed_time > 2:
                        self._log_warning(f"Slow response from {url}: {elapsed_time:.2f}s")
                    connected = True
                    break
                except requests.exceptions.RequestException as e:
                    self._log_warning(f"Failed to connect to {url}: {e}")

            if not connected:
                self._log_error("Failed to connect to any test URLs")
                return
            self.log("Basic connectivity OK")

            # Bandwidth test using speedtest-cli
            self.log("\nPerforming bandwidth test using speedtest-cli...")
            try:
                st = speedtest.Speedtest()
                st.get_best_server()
                download_speed = st.download()  # Bits per second
                upload_speed = st.upload()  # Bits per second

                download_mbps = download_speed / 1e6  # Convert to Mbps
                upload_mbps = upload_speed / 1e6

                self.log(f"Download speed: {download_mbps:.2f} Mbps")
                self.log(f"Upload speed: {upload_mbps:.2f} Mbps")

                if download_mbps < 1:
                    self._log_warning("Download speed is low")
                if upload_mbps < 0.5:
                    self._log_warning("Upload speed is low")
            except speedtest.ConfigRetrievalError:
                self._log_error("Failed to retrieve speedtest configuration")
            except Exception as e:
                self._log_warning(f"Bandwidth test failed: {e}")

        except ImportError:
            self._log_error("Required packages are not installed. Please install them using 'pip install requests speedtest-cli'")
        except Exception as e:
            self._log_error(f"Network connectivity check failed: {e}")

    def _step8_environment_variables(self):
        self.log("\n===== Environment Variables Check =====")
        try:
            # Check Python paths
            pythonpath = os.environ.get('PYTHONPATH')
            if pythonpath:
                self.log("\nPYTHONPATH:")
                for path in pythonpath.split(os.pathsep):
                    self.log(f"  - {path}")
            else:
                self.log("\nPYTHONPATH is not set.")

            self.log("\nPython sys.path:")
            for path in sys.path:
                self.log(f"  - {path}")

            # Check OPENAI_API_KEY
            from dotenv import load_dotenv
            load_dotenv()
            api_key = os.environ.get('OPENAI_API_KEY')
            if api_key:
                self.log("OPENAI_API_KEY is set after calling load_dotenv()")
                if not api_key.startswith('sk-proj-') or len(api_key) < 12:
                    self._log_warning("OPENAI_API_KEY format looks incorrect after calling load_dotenv()")
            else:
                self._log_warning("OPENAI_API_KEY environment variable is not set after calling load_dotenv()")
        except Exception as e:
            self._log_error(f"Environment variables check failed: {e}")

    def _step9_additional_diagnostics(self):
        self.log("\n===== Additional Diagnostics =====")
        try:
            # Get the site-packages directory paths
            import site
            site_packages_paths = site.getsitepackages()
            if hasattr(site, 'getusersitepackages'):
                site_packages_paths.append(site.getusersitepackages())

            # Function to check if a path is within site-packages
            def is_in_site_packages(path):
                return any(os.path.commonpath([path, sp]) == sp for sp in site_packages_paths)

            # Check for potential name conflicts in the current directory and sys.path
            conflict_names = ['openai.py', 'dotenv.py']

            # Check current directory
            current_dir = os.getcwd()
            for name in conflict_names:
                conflict_path = os.path.join(current_dir, name)
                if os.path.isfile(conflict_path):
                    self._log_warning(f"Found '{name}' in the current directory, which may cause import conflicts: {conflict_path}")

            # Check sys.path directories
            for path in sys.path:
                if not path or is_in_site_packages(path):
                    continue  # Skip site-packages and empty paths
                for name in conflict_names:
                    conflict_file = os.path.join(path, name)
                    if os.path.isfile(conflict_file):
                        self._log_warning(f"Potential naming conflict: {conflict_file}")

            # Check temp directory
            try:
                with tempfile.NamedTemporaryFile() as tmp:
                    self.log(f"Temp directory is writable: {os.path.dirname(tmp.name)}")
            except Exception as e:
                self._log_error(f"Cannot write to temp directory: {e}")

        except Exception as e:
            self._log_error(f"Additional diagnostics failed: {e}")


if __name__ == "__main__":
    diagnostics = Diagnostics()
    diagnostics.run()
@@ -1,413 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "7e2c4bbb-5e8b-4d84-9997-ecb2c349cf54",
   "metadata": {},
   "source": [
    "## First step - generate training data from examples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 105,
   "id": "16cf3aa2-f407-4b95-8b9e-c3c586f67835",
   "metadata": {},
   "outputs": [],
   "source": [
    "import requests\n",
    "import pandas as pd\n",
    "from datetime import datetime, timedelta, timezone\n",
    "from datasets import load_dataset, Dataset\n",
    "from dotenv import load_dotenv\n",
    "import os\n",
    "from openai import OpenAI\n",
    "import json\n",
    "import tiktoken\n",
    "from IPython.display import display, Markdown"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 96,
   "id": "375302b6-b6a7-46ea-a74c-c2400dbd8bbe",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load environment variables from a file called .env\n",
    "load_dotenv()\n",
    "\n",
    "# Replace with your CoinAPI key\n",
    "API_KEY = os.getenv('YOUR_COINAPI_KEY')\n",
    "\n",
    "# Define the base URLs for CoinAPI and the local Ollama server\n",
    "BASE_URL = 'https://rest.coinapi.io/v1/ohlcv/'\n",
    "OLLAMA_URL = \"http://localhost:11434/v1\"\n",
    "\n",
    "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "id": "d0cc964d",
   "metadata": {},
   "outputs": [],
   "source": [
    "openai = OpenAI()\n",
    "\n",
    "# Ollama exposes an OpenAI-compatible API; the api_key value is required but unused\n",
    "ollama = OpenAI(\n",
    "    base_url=OLLAMA_URL,\n",
    "    api_key='ollama'\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 104,
   "id": "8a0c9fff-9eff-42fd-971b-403c99d9b726",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the symbol and timeframe\n",
    "base_data = {\n",
    "    'name': 'Cardano',\n",
    "    'symbol': 'BINANCE_SPOT_ADA_USDT',\n",
    "    'timeframe': '1DAY',\n",
    "    'time_range': 365 * 2\n",
    "}\n",
    "\n",
    "# Calculate the start date, time_range days before now\n",
    "end_date = datetime.now(tz=timezone.utc)\n",
    "start_date = end_date - timedelta(days=base_data['time_range'])\n",
    "\n",
    "# Format the dates in the required format (ISO 8601)\n",
    "start_date_str = start_date.strftime('%Y-%m-%dT%H:%M:%S')\n",
    "end_date_str = end_date.strftime('%Y-%m-%dT%H:%M:%S')\n",
    "\n",
    "# Headers for authentication\n",
    "headers = {\n",
    "    'X-CoinAPI-Key': API_KEY\n",
    "}\n",
    "\n",
    "# URL to fetch the OHLCV data\n",
    "url = f\"{BASE_URL}{base_data['symbol']}/history\"\n",
    "\n",
    "# Request parameters\n",
    "params = {\n",
    "    'period_id': base_data['timeframe'],\n",
    "    'time_start': start_date_str,\n",
    "    'time_end': end_date_str,\n",
    "    'limit': 1000  # Maximum number of records per request\n",
    "}"
   ]
  },
{ |
||||
"cell_type": "code", |
||||
"execution_count": 91, |
||||
"id": "586b07ba-5396-4c34-a696-01c8bc3597a0", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/plain": [ |
||||
"729" |
||||
] |
||||
}, |
||||
"execution_count": 91, |
||||
"metadata": {}, |
||||
"output_type": "execute_result" |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Fetch the data\n", |
||||
"response = requests.get(url, headers=headers, params=params) \n", |
||||
"len(response.json())" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 76, |
||||
"id": "953422d0-2e75-4d01-862e-6383df54d9e5", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
" Timestamp Open High Low Close\n", |
||||
"724 2025-02-06 0.7325 0.7660 0.6978 0.7052\n", |
||||
"725 2025-02-07 0.7052 0.7532 0.6902 0.7072\n", |
||||
"726 2025-02-08 0.7072 0.7180 0.6815 0.7005\n", |
||||
"727 2025-02-09 0.7006 0.7160 0.6503 0.6814\n", |
||||
"728 2025-02-10 0.6815 0.7177 0.6632 0.7037\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Check for successful response\n", |
||||
"if response.status_code == 200:\n", |
||||
" data = response.json()\n", |
||||
"\n", |
||||
" if data:\n", |
||||
" # Convert to DataFrame for better readability\n", |
||||
" df = pd.DataFrame(data)\n", |
||||
"\n", |
||||
" df = df[[\"time_period_start\", \"price_open\", \"price_high\", \"price_low\", \"price_close\"]]\n", |
||||
" df.columns = [\"Timestamp\", \"Open\", \"High\", \"Low\", \"Close\"]\n", |
||||
"\n", |
||||
" # Convert timestamp to readable format\n", |
||||
" df[\"Timestamp\"] = pd.to_datetime(df[\"Timestamp\"]).dt.strftime(\"%Y-%m-%d\")\n", |
||||
"\n", |
||||
"        # Display the last few rows of the data\n", |
||||
" print(df.tail())\n", |
||||
" \n", |
||||
"        # Convert the price history into a list of record dicts\n", |
||||
" price_history = df.to_dict(orient=\"records\")\n", |
||||
" \n", |
||||
" else:\n", |
||||
" print('No data found for the given period.')\n", |
||||
"else:\n", |
||||
" print(f'Error fetching data: {response.status_code}, {response.text}')" |
||||
] |
||||
}, |
||||
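The reshaping above (select CoinAPI fields, rename the columns, reformat the timestamp, dump to records) can be checked on a single hand-made candle; the record below is invented for illustration:

```python
import pandas as pd

# One fabricated CoinAPI-style OHLCV record
raw = [{
    "time_period_start": "2025-02-10T00:00:00Z",
    "price_open": 0.6815, "price_high": 0.7177,
    "price_low": 0.6632, "price_close": 0.7037,
}]

df = pd.DataFrame(raw)[["time_period_start", "price_open", "price_high", "price_low", "price_close"]]
df.columns = ["Timestamp", "Open", "High", "Low", "Close"]
df["Timestamp"] = pd.to_datetime(df["Timestamp"]).dt.strftime("%Y-%m-%d")
records = df.to_dict(orient="records")
print(records[0]["Timestamp"])  # 2025-02-10
```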
{ |
||||
"cell_type": "code", |
||||
"execution_count": 47, |
||||
"id": "ada5ed4f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def count_tokens(text, model=\"gpt-4o\"):\n", |
||||
" encoding = tiktoken.encoding_for_model(model)\n", |
||||
" return len(encoding.encode(text))\n" |
||||
] |
||||
}, |
||||
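`count_tokens` needs tiktoken's encoding files at runtime. As a rough sanity check when those are unavailable, English text averages on the order of four characters per token; `rough_token_estimate` below is a heuristic of my own, not a tiktoken API:

```python
def rough_token_estimate(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

print(rough_token_estimate("Analyze this data and provide a trading signal."))  # 11
```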
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ab47d974", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Construct prompt\n", |
||||
"\n", |
||||
"prompt = f\"\"\"\n", |
||||
"    Given the last {base_data['time_range']} days of ${base_data['name']} OHLC data:\n", |
||||
"\n", |
||||
" {json.dumps(price_history, indent=2)}\n", |
||||
"\n", |
||||
" Analyze this data and provide a trading signal (Buy, Sell, or Hold) for today based on the trend and the price action.\n", |
||||
" Note that today is {end_date.strftime('%Y-%m-%d')}\n", |
||||
"    Also, provide short-term, mid-term, and long-term signals.\n", |
||||
" \"\"\"\n", |
||||
"num_tokens = count_tokens(prompt)\n", |
||||
"print(f\"Estimated Tokens: {num_tokens}\")\n", |
||||
"\n", |
||||
"print(prompt)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b40fec12", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"To analyze this data, I'll use a combination of moving averages, relative strength index (RSI), and other technical indicators. Please note that this is a simplified analysis and should not be considered as professional trading advice.\n", |
||||
"\n", |
||||
"**Current Data**\n", |
||||
"\n", |
||||
"For 2025-02-10, the opening price is not available. However, we can calculate the current prices based on the historical data provided.\n", |
||||
"\n", |
||||
"Let's assume the last known close price for 2025-02-09 was $0.6815. For simplicity, let's use this as the opening price for today (2025-02-10).\n", |
||||
"\n", |
||||
"**Short-Term Signal**\n", |
||||
"\n", |
||||
"For a short-term signal, I'll use a simple moving average crossover system.\n", |
||||
"\n", |
||||
"* Short-Term Moving Average (20 days): $0.6922\n", |
||||
"* Short-Term Moving Average (10 days): $0.6747\n", |
||||
"\n", |
||||
"Since the 20-day MA ($0.6922) is above the 10-day MA ($0.6747), we can conclude that **Buy** in this timeframe.\n", |
||||
"\n", |
||||
"**Mid-Term Signal**\n", |
||||
"\n", |
||||
"For a mid-term signal, I'll use RSI.\n", |
||||
"\n", |
||||
"* Current Price: $0.6815\n", |
||||
"* Overbought Region: 70-80\n", |
||||
"* Oversold Region: 20-50\n", |
||||
"\n", |
||||
"The current price ($0.6815) is at the lower end of the oversold region (20-50), indicating a potential buying opportunity.\n", |
||||
"\n", |
||||
"Since RSI values are not provided for the entire dataset, we'll use an RSI value of 30 (midpoint of the low and high values). At $0.6815, RSI is approximately 34.\n", |
||||
"\n", |
||||
"* Mid-Term Moving Average: Not available\n", |
||||
"* Mid-Term Momentum: Rising\n", |
||||
"\n", |
||||
"Considering the oversold region and rising momentum, **Hold** is a reasonable mid-term strategy for today.\n", |
||||
"\n", |
||||
"**Long-Term Signal**\n", |
||||
"\n", |
||||
"For a long-term signal, I'll use the overall trend direction based on historical data.\n", |
||||
"\n", |
||||
"The dataset shows an upward trend (average True Range, AtR, value has been increasing). From 2025-02-03 to 2025-02-09, there were 6 consecutive increases in this dataset. That's a strong positive trend.\n", |
||||
"\n", |
||||
"Since there are no obvious signs of weakness in the long-term data or divergence with other trends (like 50-day MA), I recommend **Hold** for an extended holding period, keeping an eye on RSI values and adjusting positions as needed to stay ahead of potential price drops.\n", |
||||
"\n", |
||||
"**Summary**\n", |
||||
"\n", |
||||
"* Short-Term: **Buy**\n", |
||||
"* Mid-Term: **Hold**\n", |
||||
"* Long-Term: **Hold**" |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"def get_response(prompt):\n", |
||||
" new_response = ollama.chat.completions.create(model=\"llama3.2\",\n", |
||||
" messages=[\n", |
||||
"                        {\"role\": \"system\", \"content\": f\"You are a trading analyst providing Buy/Sell/Hold signals based on ${base_data['name']} price history. Note that today is {end_date.strftime('%Y-%m-%d')}\"},\n", |
||||
" {\"role\": \"user\", \"content\": prompt}\n", |
||||
" ],\n", |
||||
" stream=True,\n", |
||||
" max_tokens=5500\n", |
||||
" )\n", |
||||
" markdown_content = \"\"\n", |
||||
" \n", |
||||
" # Stream response and accumulate markdown content\n", |
||||
" for chunk in new_response:\n", |
||||
" content = chunk.choices[0].delta.content or ''\n", |
||||
" markdown_content += content\n", |
||||
" \n", |
||||
" # Clear output and display updated markdown\n", |
||||
" display(Markdown(markdown_content), clear=True)\n", |
||||
" \n", |
||||
" yield content\n", |
||||
"\n", |
||||
"# Call the function and consume the generator to start streaming\n", |
||||
"for _ in get_response(prompt):\n", |
||||
" pass" |
||||
] |
||||
}, |
||||
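Both streaming cells use the same pattern: accumulate each delta chunk into a growing string and re-render it. Here is that accumulation logic in isolation, with the API call replaced by a plain list of chunks (a `None` is included, since delta content can be empty mid-stream):

```python
def accumulate_stream(chunks):
    """Yield the progressively accumulated text, as the display loop does."""
    text = ""
    for piece in chunks:
        text += piece or ""   # delta content may be None on some chunks
        yield text

stages = list(accumulate_stream(["**Buy**", None, " (short-term)"]))
print(stages[-1])  # **Buy** (short-term)
```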
{ |
||||
"cell_type": "code", |
||||
"execution_count": 88, |
||||
"id": "ba09436c", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"# $Cardano Trading Analysis for 2025-02-10\n", |
||||
"\n", |
||||
"### **Current Price Analysis**\n", |
||||
"- **Open:** 0.6815\n", |
||||
"- **High:** 0.7177\n", |
||||
"- **Low:** 0.6632\n", |
||||
"- **Close:** 0.7037\n", |
||||
"\n", |
||||
"The price of $Cardano closed 3.59% higher than the previous day's close. This suggests a potential bullish reversal following a downward trend observed over the last few days. However, the volatility in the high-low range reflects uncertainty in the market.\n", |
||||
"\n", |
||||
"### **Trend Overview**\n", |
||||
"- **Short-term:** \n", |
||||
" - The recent price action indicates a possible recovery as we see an upward close. The price is currently attempting to break resistance, but the last few days exhibited mixed movements (e.g., a decrease before the recent increase). \n", |
||||
"- **Mid-term:**\n", |
||||
" - Over the past month, $Cardano has experienced significant volatility. While it reached its peak at around 1.079 earlier in January, the subsequent decline indicates selling pressure in the mid-term. A consolidation phase appears as buyers are trying to push the price back up.\n", |
||||
"- **Long-term:**\n", |
||||
" - Over the past year, $Cardano has shown high volatility and a fluctuating price range, but it has generally been trending downwards since its recent highs. \n", |
||||
"\n", |
||||
"### **Trading Signals**\n", |
||||
"- **Short-term Signal:** **Buy**\n", |
||||
" - The recent upward price movement along with a closing above 0.7000 indicates potential upward momentum. Short-term traders may consider buying into this recovery signal.\n", |
||||
"\n", |
||||
"- **Mid-term Signal:** **Hold**\n", |
||||
" - Within the last month, while recovery is in place, it is prudent to wait for confirmation of sustained upward movement before committing larger positions. A hold is advised to monitor the situation.\n", |
||||
"\n", |
||||
"- **Long-term Signal:** **Sell**\n", |
||||
" - Given that the longer-term trends show a downward trajectory since peaking at higher prices, long-term holders might consider selling or reducing positions, especially if the price fails to stay above recent resistance levels.\n", |
||||
"\n", |
||||
"### **Conclusion**\n", |
||||
"Today’s price action indicates a bullish sentiment in the short term but still reflects uncertainty in the mid and long-term periods. It would be wise for traders to remain cautious and adjust positions as the market dynamics evolve further. Always consider your risk management strategies when deciding to enter or exit positions." |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"def get_response(prompt):\n", |
||||
" new_response = openai.chat.completions.create(model=\"gpt-4o-mini\",\n", |
||||
" messages=[\n", |
||||
"                        {\"role\": \"system\", \"content\": f\"You are a trading analyst providing Buy/Sell/Hold signals based on ${base_data['name']} price history. Format your response in markdown. Note that today is {end_date.strftime('%Y-%m-%d')}\"},\n", |
||||
" {\"role\": \"user\", \"content\": prompt}\n", |
||||
" ],\n", |
||||
" stream=True,\n", |
||||
" max_tokens=5500\n", |
||||
" )\n", |
||||
" \n", |
||||
" # Initialize markdown cell output\n", |
||||
" markdown_content = \"\"\n", |
||||
" \n", |
||||
" # Stream response and accumulate markdown content\n", |
||||
" for chunk in new_response:\n", |
||||
" content = chunk.choices[0].delta.content or ''\n", |
||||
" markdown_content += content\n", |
||||
" \n", |
||||
" # Clear output and display updated markdown\n", |
||||
" display(Markdown(markdown_content), clear=True)\n", |
||||
" \n", |
||||
" yield content\n", |
||||
"\n", |
||||
"# Call the function and consume the generator to start streaming\n", |
||||
"for _ in get_response(prompt):\n", |
||||
" pass" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f52bcc0a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "venv", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.12.7" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,354 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "46d90d45-2d19-49c7-b853-6809dc417ea7", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Extra Project - Trading Code Generator\n", |
||||
"\n", |
||||
"This is an example extra project to show fine-tuning in action, and applied to code generation.\n", |
||||
"\n", |
||||
"## Project Brief\n", |
||||
"\n", |
||||
"Build a prototype LLM that can generate example code to suggest trading decisions to buy or sell stocks!\n", |
||||
"\n", |
||||
"I generated test data using frontier models, in the other files in this directory. Use this to train an open source code model.\n", |
||||
"\n", |
||||
"In this notebook we generate the dataset; then we move over to Google Colab for the fine-tuning.\n", |
||||
"\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#f71;\">This project is provided as an extra resource</h2>\n", |
||||
" <span style=\"color:#f71;\">It will make most sense after completing Week 7, and might trigger some ideas for your own projects.<br/><br/>\n", |
||||
" This is provided without a detailed walk-through; the output from the colab has been saved (see last cell) so you can review the results if you have any problems running yourself.\n", |
||||
" </span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Do not use for actual trading decisions!!</h2>\n", |
||||
" <span style=\"color:#900;\">It hopefully goes without saying: this project will generate toy trading code that is over-simplified and untrusted.<br/><br/>Please do not make actual trading decisions based on this!</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "7e2c4bbb-5e8b-4d84-9997-ecb2c349cf54", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## First step - generate training data from examples\n", |
||||
"\n", |
||||
"There are 3 sample python files generated (via multiple queries) by GPT-4o, Claude 3 Opus and Gemini 1.5 Pro. \n", |
||||
"\n", |
||||
"This notebook creates training data from these files, then converts to the HuggingFace format and uploads to the hub.\n", |
||||
"\n", |
||||
"Afterwards, we will move to Google Colab to fine-tune." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "16cf3aa2-f407-4b95-8b9e-c3c586f67835", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import os\n", |
||||
"import glob\n", |
||||
"import matplotlib.pyplot as plt\n", |
||||
"import random\n", |
||||
"from datasets import Dataset\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from huggingface_hub import login\n", |
||||
"import transformers\n", |
||||
"from transformers import AutoTokenizer" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "375302b6-b6a7-46ea-a74c-c2400dbd8bbe", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"from datasets import load_dataset, Dataset\n", |
||||
"load_dotenv()\n", |
||||
"hf_token = os.getenv('HF_TOKEN')\n", |
||||
"if hf_token and hf_token.startswith(\"hf_\") and len(hf_token)>5:\n", |
||||
" print(\"HuggingFace Token looks good so far\")\n", |
||||
"else:\n", |
||||
" print(\"Potential problem with HuggingFace token - please check your .env file, and see the Troubleshooting notebook for more\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8a0c9fff-9eff-42fd-971b-403c99d9b726", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"\n", |
||||
"DATASET_NAME = \"trade_code_data\"\n", |
||||
"BASE_MODEL = \"Qwen/CodeQwen1.5-7B\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "586b07ba-5396-4c34-a696-01c8bc3597a0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A utility method to convert the text contents of a file into a list of methods\n", |
||||
"\n", |
||||
"def extract_method_bodies(text):\n", |
||||
" chunks = text.split('def trade')[1:]\n", |
||||
" results = []\n", |
||||
" for chunk in chunks:\n", |
||||
" lines = chunk.split('\\n')[1:]\n", |
||||
" body = '\\n'.join(line for line in lines if line!='\\n')\n", |
||||
" results.append(body)\n", |
||||
" return results " |
||||
] |
||||
}, |
||||
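To make the behaviour of `extract_method_bodies` concrete, here it is applied to a tiny two-function snippet (the function body is reproduced from the cell above; the sample string is invented):

```python
def extract_method_bodies(text):
    # Split on each 'def trade' marker and drop everything before the first one
    chunks = text.split('def trade')[1:]
    results = []
    for chunk in chunks:
        lines = chunk.split('\n')[1:]   # drop the signature line
        body = '\n'.join(line for line in lines if line != '\n')
        results.append(body)
    return results

sample = "import numpy as np\ndef trade1():\n    return []\ndef trade2():\n    return [1]\n"
bodies = extract_method_bodies(sample)
print(len(bodies))  # 2
```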
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "953422d0-2e75-4d01-862e-6383df54d9e5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Read all .py files and convert into training data\n", |
||||
"\n", |
||||
"bodies = []\n", |
||||
"for filename in glob.glob(\"*.py\"):\n", |
||||
" with open(filename, 'r', encoding='utf-8') as file:\n", |
||||
" content = file.read()\n", |
||||
" extracted = extract_method_bodies(content)\n", |
||||
" bodies += extracted\n", |
||||
"\n", |
||||
"print(f\"Extracted {len(bodies)} trade method bodies\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "836480e9-ba23-4aa3-a7e2-2666884e9a06", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's look at one\n", |
||||
"\n", |
||||
"print(random.choice(bodies))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "47b10e7e-a562-4968-af3f-254a9b424ac8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Visualize the number of lines of code in each training sample\n", |
||||
"\n", |
||||
"%matplotlib inline\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"lengths = [len(body.split('\\n')) for body in bodies]\n", |
||||
"ax.set_xlabel('Lines of code')\n", |
||||
"ax.set_ylabel('Count of training samples');\n", |
||||
"_ = ax.hist(lengths, rwidth=0.7, color=\"green\", bins=range(0, max(lengths)))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "03b37f62-679e-4c3d-9e5b-5878a82696e6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Add the prompt to the start of every training example\n", |
||||
"\n", |
||||
"prompt = \"\"\"\n", |
||||
"# tickers is a list of stock tickers\n", |
||||
"import tickers\n", |
||||
"\n", |
||||
"# prices is a dict; the key is a ticker and the value is a list of historic prices, today first\n", |
||||
"import prices\n", |
||||
"\n", |
||||
"# Trade represents a decision to buy or sell a quantity of a ticker\n", |
||||
"import Trade\n", |
||||
"\n", |
||||
"import random\n", |
||||
"import numpy as np\n", |
||||
"\n", |
||||
"def trade():\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"data = [prompt + body for body in bodies]\n", |
||||
"print(random.choice(data))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "28fdb82f-3864-4023-8263-547d17571a5c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Distribution of tokens in our dataset\n", |
||||
"\n", |
||||
"tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL, trust_remote_code=True)\n", |
||||
"tokenized_data = [tokenizer.encode(each) for each in data]\n", |
||||
"token_counts = [len(tokens) for tokens in tokenized_data]\n", |
||||
"\n", |
||||
"%matplotlib inline\n", |
||||
"fig, ax = plt.subplots(1, 1)\n", |
||||
"ax.set_xlabel('Number of tokens')\n", |
||||
"ax.set_ylabel('Count of training samples');\n", |
||||
"_ = ax.hist(token_counts, rwidth=0.7, color=\"purple\", bins=range(0, max(token_counts), 20))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b4eb73b0-88ef-4aeb-8e5b-fe7050109ba0", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Enforcing a maximum token length\n", |
||||
"\n", |
||||
"We need to specify a maximum number of tokens when we fine-tune.\n", |
||||
"\n", |
||||
"Let's pick a cut-off, and only keep training data points that fit within this number of tokens." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ffb0d55c-5602-4518-b811-fa385c0959a7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"CUTOFF = 320\n", |
||||
"truncated = len([tokens for tokens in tokenized_data if len(tokens) > CUTOFF])\n", |
||||
"percentage = truncated/len(tokenized_data)*100\n", |
||||
"print(f\"With cutoff at {CUTOFF}, we truncate {truncated} datapoints which is {percentage:.1f}% of the dataset\")" |
||||
] |
||||
}, |
||||
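The truncation count above can be sanity-checked against a hand-made list of per-sample token counts (the numbers below are invented for illustration):

```python
CUTOFF = 320
token_lengths = [100, 250, 320, 321, 500]  # hypothetical per-sample token counts
truncated = len([n for n in token_lengths if n > CUTOFF])
percentage = truncated / len(token_lengths) * 100
print(f"With cutoff at {CUTOFF}, we truncate {truncated} datapoints which is {percentage:.1f}% of the dataset")
# With cutoff at 320, we truncate 2 datapoints which is 40.0% of the dataset
```

Note that a count exactly equal to the cutoff (320 here) is kept, matching the strict `>` in the notebook.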
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7064ef0a-7b07-4f24-a580-cbef2c5e1f2f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's only keep datapoints that wouldn't get truncated\n", |
||||
"\n", |
||||
"filtered_data = [datapoint for datapoint in data if len(tokenizer.encode(datapoint))<=CUTOFF]\n", |
||||
"print(f\"After filtering, we now have {len(filtered_data)} datapoints\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fb2bb067-2bd3-498b-9fc8-5e8186afbe27", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Mix up the data\n", |
||||
"\n", |
||||
"random.seed(42)\n", |
||||
"random.shuffle(filtered_data)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "26713fb9-765f-4524-b9db-447e97686d1a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# I don't make a Training / Test split - if we had more training data, we would!\n", |
||||
"\n", |
||||
"dataset = Dataset.from_dict({'text':filtered_data})" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "bfabba27-ef47-46a8-a26b-4d650ae3b193", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"login(hf_token)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "55b595cd-2df7-4be4-aec1-0667b17d36f1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Push your dataset to your hub\n", |
||||
"# I've also pushed the data to my account and made it public, which you can use from the colab below\n", |
||||
"\n", |
||||
"dataset.push_to_hub(DATASET_NAME, private=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "4691a025-9800-4e97-a20f-a65f102401f1", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## And now to head over to a Google Colab for fine-tuning in the cloud\n", |
||||
"\n", |
||||
"Follow this link for the Colab:\n", |
||||
"\n", |
||||
"https://colab.research.google.com/drive/1wry2-4AGw-U7K0LQ_jEgduoTQqVIvo1x?usp=sharing\n", |
||||
"\n", |
||||
"I've also saved this Colab with output included, so you can see the results without needing to train if you'd prefer.\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "04a6c3e0-a2e6-4115-a01a-45e79dfdb730", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,725 +0,0 @@
|
||||
# tickers is a list of stock tickers |
||||
import tickers |
||||
|
||||
# prices is a dict; the key is a ticker and the value is a list of historic prices, today first |
||||
import prices |
||||
|
||||
# Trade represents a decision to buy or sell a quantity of a ticker |
||||
import Trade |
||||
|
||||
import random |
||||
import numpy as np |
||||
|
||||
def trade2(): |
||||
# Buy if the current price is lower than the average of the last 5 days |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if prices[ticker][0] < np.mean(prices[ticker][1:6]): |
||||
quantity = random.randrange(1, 100) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade3(): |
||||
# Sell if the current price is higher than the average of the last 10 days |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if prices[ticker][0] > np.mean(prices[ticker][1:11]): |
||||
quantity = random.randrange(-100, -1) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade4(): |
||||
# Buy if the current price is the lowest in the last 3 days |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if prices[ticker][0] == min(prices[ticker][:3]): |
||||
quantity = random.randrange(1, 100) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade5(): |
||||
# Sell if the current price is the highest in the last 3 days |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if prices[ticker][0] == max(prices[ticker][:3]): |
||||
quantity = random.randrange(-100, -1) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade6(): |
||||
# Buy if the current price is higher than the previous day's price |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if prices[ticker][0] > prices[ticker][1]: |
||||
quantity = random.randrange(1, 100) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade7(): |
||||
# Sell if the current price is lower than the previous day's price |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if prices[ticker][0] < prices[ticker][1]: |
||||
quantity = random.randrange(-100, -1) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade8(): |
||||
# Buy if the current price is higher than the average of the last 20 days |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if prices[ticker][0] > np.mean(prices[ticker][1:21]): |
||||
quantity = random.randrange(1, 100) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade9(): |
||||
# Sell if the current price is lower than the average of the last 20 days |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if prices[ticker][0] < np.mean(prices[ticker][1:21]): |
||||
quantity = random.randrange(-100, -1) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade10(): |
||||
# Buy if the current price is higher than the highest price in the last 5 days |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if prices[ticker][0] > max(prices[ticker][1:6]): |
||||
quantity = random.randrange(1, 100) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade11(): |
||||
# Sell if the current price is lower than the lowest price in the last 5 days |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if prices[ticker][0] < min(prices[ticker][1:6]): |
||||
quantity = random.randrange(-100, -1) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade12(): |
||||
# Long/Short: Buy the best-performing stock and sell the worst-performing stock in the last 10 days |
||||
best_ticker = max(tickers, key=lambda x: (prices[x][0] - prices[x][9]) / prices[x][9]) |
||||
worst_ticker = min(tickers, key=lambda x: (prices[x][0] - prices[x][9]) / prices[x][9]) |
||||
return [Trade(best_ticker, 100), Trade(worst_ticker, -100)] |
||||
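trade12 ranks tickers by their 10-day relative performance and goes long the best, short the worst. Extracted as a helper with a two-ticker toy dataset (names and prices invented):

```python
def ten_day_return(series):
    """Fractional return over the last 10 days (series is today-first)."""
    return (series[0] - series[9]) / series[9]

prices = {"AAA": [11.0] + [10.0] * 9, "BBB": [9.0] + [10.0] * 9}
best = max(prices, key=lambda t: ten_day_return(prices[t]))
worst = min(prices, key=lambda t: ten_day_return(prices[t]))
print(best, worst)  # AAA BBB
```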
|
||||
def trade13(): |
||||
# Buy if the 5-day moving average crosses above the 20-day moving average |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if np.mean(prices[ticker][:5]) > np.mean(prices[ticker][:20]) and np.mean(prices[ticker][1:6]) <= np.mean(prices[ticker][1:21]): |
||||
quantity = random.randrange(1, 100) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade14(): |
||||
# Sell if the 5-day moving average crosses below the 20-day moving average |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if np.mean(prices[ticker][:5]) < np.mean(prices[ticker][:20]) and np.mean(prices[ticker][1:6]) >= np.mean(prices[ticker][1:21]): |
||||
quantity = random.randrange(-100, -1) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade15(): |
||||
# Buy if the current volume is higher than the average volume of the last 10 days |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if volumes[ticker][0] > np.mean(volumes[ticker][1:11]): |
||||
quantity = random.randrange(1, 100) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade16(): |
||||
# Sell if the current volume is lower than the average volume of the last 10 days |
||||
trades = [] |
||||
for ticker in tickers: |
||||
if volumes[ticker][0] < np.mean(volumes[ticker][1:11]): |
||||
quantity = random.randrange(-100, -1) |
||||
trades.append(Trade(ticker, quantity)) |
||||
return trades |
||||
|
||||
def trade17(): |
||||
# Long/Short: Buy the stock with the highest relative strength index (RSI) and sell the stock with the lowest RSI |
||||
rsi = {} |
||||
for ticker in tickers: |
||||
gains = [max(prices[ticker][i] - prices[ticker][i+1], 0) for i in range(13)] |
||||
losses = [max(prices[ticker][i+1] - prices[ticker][i], 0) for i in range(13)] |
||||
avg_gain = sum(gains) / 14 |
||||
avg_loss = sum(losses) / 14 |
||||
rs = avg_gain / avg_loss if avg_loss > 0 else 100 |
||||
rsi[ticker] = 100 - (100 / (1 + rs)) |
||||
best_ticker = max(tickers, key=lambda x: rsi[x]) |
||||
worst_ticker = min(tickers, key=lambda x: rsi[x]) |
||||
return [Trade(best_ticker, 100), Trade(worst_ticker, -100)] |
||||
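trade17 computes a 14-period RSI with plain averages (not Wilder's smoothed variant). The same computation as a standalone function, checked against monotone series where the answer is known:

```python
def simple_rsi(series, period=14):
    """RSI over a today-first price series, using plain averages of gains/losses."""
    gains = [max(series[i] - series[i + 1], 0) for i in range(period - 1)]
    losses = [max(series[i + 1] - series[i], 0) for i in range(period - 1)]
    avg_gain = sum(gains) / period
    avg_loss = sum(losses) / period
    rs = avg_gain / avg_loss if avg_loss > 0 else 100
    return 100 - 100 / (1 + rs)

# A strictly rising series (today first) has no losses, so RSI saturates near 100
print(round(simple_rsi(list(range(20, 0, -1))), 2))  # 99.01
```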
|
||||
def trade18(): |
||||
    # Buy if the current price is higher than the 50-day moving average and the 50-day moving average is higher than the 200-day moving average
    trades = []
    for ticker in tickers:
        if prices[ticker][0] > np.mean(prices[ticker][:50]) > np.mean(prices[ticker][:200]):
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade19():
    # Sell if the current price is lower than the 50-day moving average and the 50-day moving average is lower than the 200-day moving average
    trades = []
    for ticker in tickers:
        if prices[ticker][0] < np.mean(prices[ticker][:50]) < np.mean(prices[ticker][:200]):
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade20():
    # Long/Short: Buy the stock with the highest momentum and sell the stock with the lowest momentum
    momentums = {}
    for ticker in tickers:
        momentums[ticker] = prices[ticker][0] - prices[ticker][19]
    best_ticker = max(tickers, key=lambda x: momentums[x])
    worst_ticker = min(tickers, key=lambda x: momentums[x])
    return [Trade(best_ticker, 100), Trade(worst_ticker, -100)]


def trade21():
    # Buy if the current price is higher than the upper Bollinger Band
    trades = []
    for ticker in tickers:
        sma = np.mean(prices[ticker][:20])
        std = np.std(prices[ticker][:20])
        upper_band = sma + 2 * std
        if prices[ticker][0] > upper_band:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade22():
    # Sell if the current price is lower than the lower Bollinger Band
    trades = []
    for ticker in tickers:
        sma = np.mean(prices[ticker][:20])
        std = np.std(prices[ticker][:20])
        lower_band = sma - 2 * std
        if prices[ticker][0] < lower_band:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade23():
    # Buy if the current volatility is higher than the average volatility of the last 10 days
    trades = []
    for ticker in tickers:
        volatility = np.std(prices[ticker][:10])
        avg_volatility = np.mean([np.std(prices[ticker][i:i+10]) for i in range(10)])
        if volatility > avg_volatility:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade24():
    # Sell if the current volatility is lower than the average volatility of the last 10 days
    trades = []
    for ticker in tickers:
        volatility = np.std(prices[ticker][:10])
        avg_volatility = np.mean([np.std(prices[ticker][i:i+10]) for i in range(10)])
        if volatility < avg_volatility:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade25():
    # Long/Short: Buy the stock with the lowest volatility and sell the stock with the highest volatility
    volatilities = {}
    for ticker in tickers:
        volatilities[ticker] = np.std(prices[ticker][:10])
    best_ticker = min(tickers, key=lambda x: volatilities[x])
    worst_ticker = max(tickers, key=lambda x: volatilities[x])
    return [Trade(best_ticker, 100), Trade(worst_ticker, -100)]

def trade26():
    # Buy if the current price is higher than the 20-day exponential moving average (EMA)
    trades = []
    for ticker in tickers:
        multiplier = 2 / (20 + 1)
        # Prices are stored newest first, so seed the EMA with the oldest
        # price in the window and fold in progressively newer prices
        ema = prices[ticker][19]
        for i in range(18, -1, -1):
            ema = (prices[ticker][i] - ema) * multiplier + ema
        if prices[ticker][0] > ema:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade27():
    # Sell if the current price is lower than the 20-day exponential moving average (EMA)
    trades = []
    for ticker in tickers:
        multiplier = 2 / (20 + 1)
        # EMA computed oldest-to-newest (prices are stored newest first)
        ema = prices[ticker][19]
        for i in range(18, -1, -1):
            ema = (prices[ticker][i] - ema) * multiplier + ema
        if prices[ticker][0] < ema:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade28():
    # Buy if the current price is higher than the upper Keltner Channel
    trades = []
    for ticker in tickers:
        multiplier = 2 / (20 + 1)
        # EMA computed oldest-to-newest (prices are stored newest first)
        ema = prices[ticker][19]
        for i in range(18, -1, -1):
            ema = (prices[ticker][i] - ema) * multiplier + ema
        atr = np.mean([np.max(prices[ticker][i:i+10]) - np.min(prices[ticker][i:i+10]) for i in range(10)])
        upper_channel = ema + 2 * atr
        if prices[ticker][0] > upper_channel:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade29():
    # Sell if the current price is lower than the lower Keltner Channel
    trades = []
    for ticker in tickers:
        multiplier = 2 / (20 + 1)
        # EMA computed oldest-to-newest (prices are stored newest first)
        ema = prices[ticker][19]
        for i in range(18, -1, -1):
            ema = (prices[ticker][i] - ema) * multiplier + ema
        atr = np.mean([np.max(prices[ticker][i:i+10]) - np.min(prices[ticker][i:i+10]) for i in range(10)])
        lower_channel = ema - 2 * atr
        if prices[ticker][0] < lower_channel:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades
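For comparison, a standard 20-day EMA over a newest-first price list (the convention used throughout this file) can be computed with a small standalone helper. This is a sketch; `ema_from_history` is a hypothetical name, not part of the original code.

```python
def ema_from_history(history, span=20):
    # history is newest first: history[0] is today's price
    multiplier = 2 / (span + 1)
    ema = history[span - 1]                      # seed with the oldest price in the window
    for price in reversed(history[:span - 1]):   # fold in progressively newer prices
        ema = (price - ema) * multiplier + ema
    return ema
```

A quick sanity check: on a constant series the EMA equals that constant, and on any monotone series it lies strictly between the oldest and newest prices in the window.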


def trade30():
    # Long/Short: Buy the stock with the highest Sharpe ratio and sell the stock with the lowest Sharpe ratio
    sharpe_ratios = {}
    for ticker in tickers:
        returns = [prices[ticker][i] / prices[ticker][i+1] - 1 for i in range(19)]
        sharpe_ratios[ticker] = np.mean(returns) / np.std(returns)
    best_ticker = max(tickers, key=lambda x: sharpe_ratios[x])
    worst_ticker = min(tickers, key=lambda x: sharpe_ratios[x])
    return [Trade(best_ticker, 100), Trade(worst_ticker, -100)]
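The per-ticker Sharpe calculation used here amounts to the mean of daily returns over their standard deviation. A standalone sketch (with a hypothetical helper name, not part of the original code):

```python
import numpy as np

def sharpe_from_history(history):
    # history is newest first, like the prices lists in this file;
    # daily returns over the most recent 20 prices
    returns = [history[i] / history[i + 1] - 1 for i in range(19)]
    # undefined when all returns are identical, since np.std would be zero
    return np.mean(returns) / np.std(returns)
```

For any price series that rose every day by varying amounts, all returns are positive and not identical, so the resulting ratio is positive.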


def trade31():
    # Buy if the current price is higher than the Ichimoku Cloud conversion line
    trades = []
    for ticker in tickers:
        conversion_line = (np.max(prices[ticker][:9]) + np.min(prices[ticker][:9])) / 2
        if prices[ticker][0] > conversion_line:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade32():
    # Buy if the current price is higher than the price 5 days ago
    trades = []
    for ticker in tickers:
        if prices[ticker][0] > prices[ticker][4]:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade33():
    # Sell if the current price is lower than the price 5 days ago
    trades = []
    for ticker in tickers:
        if prices[ticker][0] < prices[ticker][4]:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade34():
    # Buy if the current price is the highest in the last 15 days
    trades = []
    for ticker in tickers:
        if prices[ticker][0] == max(prices[ticker][:15]):
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade35():
    # Sell if the current price is the lowest in the last 15 days
    trades = []
    for ticker in tickers:
        if prices[ticker][0] == min(prices[ticker][:15]):
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade36():
    # Buy if the current price is higher than the 10-day simple moving average (SMA)
    trades = []
    for ticker in tickers:
        sma = np.mean(prices[ticker][:10])
        if prices[ticker][0] > sma:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade37():
    # Sell if the current price is lower than the 10-day simple moving average (SMA)
    trades = []
    for ticker in tickers:
        sma = np.mean(prices[ticker][:10])
        if prices[ticker][0] < sma:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade38():
    # Buy if the current price is higher than the highest price of the previous 20 days
    trades = []
    for ticker in tickers:
        # Exclude today from the lookback, otherwise the condition can never be true
        if prices[ticker][0] > max(prices[ticker][1:21]):
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade39():
    # Sell if the current price is lower than the lowest price of the previous 20 days
    trades = []
    for ticker in tickers:
        # Exclude today from the lookback, otherwise the condition can never be true
        if prices[ticker][0] < min(prices[ticker][1:21]):
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade40():
    # Buy if the current price is higher than the 50-day SMA
    trades = []
    for ticker in tickers:
        sma = np.mean(prices[ticker][:50])
        if prices[ticker][0] > sma:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade41():
    # Sell if the current price is lower than the 50-day SMA
    trades = []
    for ticker in tickers:
        sma = np.mean(prices[ticker][:50])
        if prices[ticker][0] < sma:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade42():
    # Buy if the current price is higher than the previous 2 days (a simple uptrend)
    trades = []
    for ticker in tickers:
        if prices[ticker][0] > prices[ticker][1] > prices[ticker][2]:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade43():
    # Sell if the current price is lower than the previous 2 days (a simple downtrend)
    trades = []
    for ticker in tickers:
        if prices[ticker][0] < prices[ticker][1] < prices[ticker][2]:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade44():
    # Buy if the current price is higher than the previous day's price (a breakout)
    trades = []
    for ticker in tickers:
        if prices[ticker][0] > prices[ticker][1]:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade45():
    # Sell if the current price is lower than the previous day's price (a breakdown)
    trades = []
    for ticker in tickers:
        if prices[ticker][0] < prices[ticker][1]:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade46():
    # Buy if the current price is above the previous day's price and the previous day was a down day (a potential reversal)
    trades = []
    for ticker in tickers:
        if prices[ticker][0] > prices[ticker][1] and prices[ticker][1] < prices[ticker][2]:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade47():
    # Sell if the current price is below the previous day's price and the previous day was an up day (a potential reversal)
    trades = []
    for ticker in tickers:
        if prices[ticker][0] < prices[ticker][1] and prices[ticker][1] > prices[ticker][2]:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade48():
    # Buy if the current price is above the 5-day SMA and the 5-day SMA is above the 10-day SMA (a bullish crossover)
    trades = []
    for ticker in tickers:
        sma5 = np.mean(prices[ticker][:5])
        sma10 = np.mean(prices[ticker][:10])
        if prices[ticker][0] > sma5 > sma10:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade49():
    # Sell if the current price is below the 5-day SMA and the 5-day SMA is below the 10-day SMA (a bearish crossover)
    trades = []
    for ticker in tickers:
        sma5 = np.mean(prices[ticker][:5])
        sma10 = np.mean(prices[ticker][:10])
        if prices[ticker][0] < sma5 < sma10:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade50():
    # Buy if the current price is above the 50-day SMA and the previous price was below the 50-day SMA (a bullish breakthrough)
    trades = []
    for ticker in tickers:
        sma50 = np.mean(prices[ticker][:50])
        if prices[ticker][0] > sma50 and prices[ticker][1] < sma50:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade51():
    # Sell if the current price is below the 50-day SMA and the previous price was above the 50-day SMA (a bearish breakthrough)
    trades = []
    for ticker in tickers:
        sma50 = np.mean(prices[ticker][:50])
        if prices[ticker][0] < sma50 and prices[ticker][1] > sma50:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade52():
    # Buy if the current price is more than 2 standard deviations below the 20-day mean (a potential oversold condition)
    trades = []
    for ticker in tickers:
        mean20 = np.mean(prices[ticker][:20])
        std20 = np.std(prices[ticker][:20])
        if prices[ticker][0] < mean20 - 2 * std20:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade53():
    # Sell if the current price is more than 2 standard deviations above the 20-day mean (a potential overbought condition)
    trades = []
    for ticker in tickers:
        mean20 = np.mean(prices[ticker][:20])
        std20 = np.std(prices[ticker][:20])
        if prices[ticker][0] > mean20 + 2 * std20:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade54():
    # Buy if the current price is below the 50-day mean and the 50-day mean is increasing (a potential uptrend)
    trades = []
    for ticker in tickers:
        mean50 = np.mean(prices[ticker][:50])
        prev_mean50 = np.mean(prices[ticker][1:51])
        if prices[ticker][0] < mean50 and mean50 > prev_mean50:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade55():
    # Sell if the current price is above the 50-day mean and the 50-day mean is decreasing (a potential downtrend)
    trades = []
    for ticker in tickers:
        mean50 = np.mean(prices[ticker][:50])
        prev_mean50 = np.mean(prices[ticker][1:51])
        if prices[ticker][0] > mean50 and mean50 < prev_mean50:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade56():
    # Buy if the 5-day mean is above the 50-day mean and the 5-day mean was previously below the 50-day mean (a potential trend change)
    trades = []
    for ticker in tickers:
        mean5 = np.mean(prices[ticker][:5])
        mean50 = np.mean(prices[ticker][:50])
        prev_mean5 = np.mean(prices[ticker][1:6])
        if mean5 > mean50 and prev_mean5 < mean50:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade57():
    # Sell if the 5-day mean is below the 50-day mean and the 5-day mean was previously above the 50-day mean (a potential trend change)
    trades = []
    for ticker in tickers:
        mean5 = np.mean(prices[ticker][:5])
        mean50 = np.mean(prices[ticker][:50])
        prev_mean5 = np.mean(prices[ticker][1:6])
        if mean5 < mean50 and prev_mean5 > mean50:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade58():
    # Buy the ticker that has had the largest percent decrease over the last 10 days (a potential mean reversion play)
    percent_changes = {}
    for ticker in tickers:
        percent_changes[ticker] = (prices[ticker][0] - prices[ticker][9]) / prices[ticker][9] * 100
    worst_ticker = min(tickers, key=lambda x: percent_changes[x])
    return [Trade(worst_ticker, 100)]


def trade59():
    # Sell the ticker that has had the largest percent increase over the last 10 days (a potential mean reversion play)
    percent_changes = {}
    for ticker in tickers:
        percent_changes[ticker] = (prices[ticker][0] - prices[ticker][9]) / prices[ticker][9] * 100
    best_ticker = max(tickers, key=lambda x: percent_changes[x])
    return [Trade(best_ticker, -100)]


def trade60():
    # Buy if the current price is above the 200-day mean and the 200-day mean is increasing (a potential long-term uptrend)
    trades = []
    for ticker in tickers:
        mean200 = np.mean(prices[ticker][:200])
        prev_mean200 = np.mean(prices[ticker][1:201])
        if prices[ticker][0] > mean200 and mean200 > prev_mean200:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade61():
    # Sell if the current price is below the 200-day mean and the 200-day mean is decreasing (a potential long-term downtrend)
    trades = []
    for ticker in tickers:
        mean200 = np.mean(prices[ticker][:200])
        prev_mean200 = np.mean(prices[ticker][1:201])
        if prices[ticker][0] < mean200 and mean200 < prev_mean200:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade62():
    # Buy if the stock's return is greater than the market's return over the last 5 days
    trades = []
    for ticker in tickers:
        stock_return = (prices[ticker][0] - prices[ticker][4]) / prices[ticker][4]
        market_return = (sum(prices[t][0] for t in tickers) - sum(prices[t][4] for t in tickers)) / sum(prices[t][4] for t in tickers)
        if stock_return > market_return:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade63():
    # Sell if the stock's return is less than the market's return over the last 5 days
    trades = []
    for ticker in tickers:
        stock_return = (prices[ticker][0] - prices[ticker][4]) / prices[ticker][4]
        market_return = (sum(prices[t][0] for t in tickers) - sum(prices[t][4] for t in tickers)) / sum(prices[t][4] for t in tickers)
        if stock_return < market_return:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade64():
    # Buy the stock with the highest relative strength compared to the market over the last 10 days
    relative_strengths = {}
    for ticker in tickers:
        stock_return = prices[ticker][0] / prices[ticker][9]
        market_return = sum(prices[t][0] for t in tickers) / sum(prices[t][9] for t in tickers)
        relative_strengths[ticker] = stock_return / market_return
    best_ticker = max(tickers, key=lambda x: relative_strengths[x])
    return [Trade(best_ticker, 100)]


def trade65():
    # Sell the stock with the lowest relative strength compared to the market over the last 10 days
    relative_strengths = {}
    for ticker in tickers:
        stock_return = prices[ticker][0] / prices[ticker][9]
        market_return = sum(prices[t][0] for t in tickers) / sum(prices[t][9] for t in tickers)
        relative_strengths[ticker] = stock_return / market_return
    worst_ticker = min(tickers, key=lambda x: relative_strengths[x])
    return [Trade(worst_ticker, -100)]


def trade66():
    # Buy stocks that have a higher Sharpe ratio than the market over the last 20 days
    trades = []
    market_returns = [(sum(prices[t][i] for t in tickers) / sum(prices[t][i+1] for t in tickers)) - 1 for i in range(19)]
    market_sharpe = np.mean(market_returns) / np.std(market_returns)
    for ticker in tickers:
        stock_returns = [(prices[ticker][i] / prices[ticker][i+1]) - 1 for i in range(19)]
        stock_sharpe = np.mean(stock_returns) / np.std(stock_returns)
        if stock_sharpe > market_sharpe:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade67():
    # Sell stocks that have a lower Sharpe ratio than the market over the last 20 days
    trades = []
    market_returns = [(sum(prices[t][i] for t in tickers) / sum(prices[t][i+1] for t in tickers)) - 1 for i in range(19)]
    market_sharpe = np.mean(market_returns) / np.std(market_returns)
    for ticker in tickers:
        stock_returns = [(prices[ticker][i] / prices[ticker][i+1]) - 1 for i in range(19)]
        stock_sharpe = np.mean(stock_returns) / np.std(stock_returns)
        if stock_sharpe < market_sharpe:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade68():
    # Buy stocks that have a higher beta than 1 (they move more than the market)
    trades = []
    market_returns = [(sum(prices[t][i] for t in tickers) / sum(prices[t][i+1] for t in tickers)) - 1 for i in range(49)]
    for ticker in tickers:
        stock_returns = [(prices[ticker][i] / prices[ticker][i+1]) - 1 for i in range(49)]
        beta = np.cov(stock_returns, market_returns)[0, 1] / np.var(market_returns)
        if beta > 1:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade69():
    # Sell stocks that have a lower beta than 1 (they move less than the market)
    trades = []
    market_returns = [(sum(prices[t][i] for t in tickers) / sum(prices[t][i+1] for t in tickers)) - 1 for i in range(49)]
    for ticker in tickers:
        stock_returns = [(prices[ticker][i] / prices[ticker][i+1]) - 1 for i in range(49)]
        beta = np.cov(stock_returns, market_returns)[0, 1] / np.var(market_returns)
        if beta < 1:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades


def trade70():
    # Buy stocks that have a higher percentage of up days than the market over the last 50 days
    trades = []
    market_up_days = sum(sum(prices[t][i] for t in tickers) > sum(prices[t][i+1] for t in tickers) for i in range(49))
    for ticker in tickers:
        stock_up_days = sum(prices[ticker][i] > prices[ticker][i+1] for i in range(49))
        if stock_up_days > market_up_days:
            quantity = random.randrange(1, 100)
            trades.append(Trade(ticker, quantity))
    return trades


def trade71():
    # Sell stocks that have a lower percentage of up days than the market over the last 50 days
    trades = []
    market_up_days = sum(sum(prices[t][i] for t in tickers) > sum(prices[t][i+1] for t in tickers) for i in range(49))
    for ticker in tickers:
        stock_up_days = sum(prices[ticker][i] > prices[ticker][i+1] for i in range(49))
        if stock_up_days < market_up_days:
            quantity = random.randrange(-100, -1)
            trades.append(Trade(ticker, quantity))
    return trades

# tickers is a list of stock tickers
import tickers

# prices is a dict; the key is a ticker and the value is a list of historic prices, today first
import prices

# Trade represents a decision to buy or sell a quantity of a ticker
import Trade

import random
import numpy as np
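For reference, the strategies below assume data shaped roughly like this. The names `tickers`, `prices`, and `Trade` mirror the fixture modules imported above; the sample tickers and values here are hypothetical stand-ins, not part of the original fixtures.

```python
import random

# Hypothetical stand-ins for the fixture modules above
tickers = ["AAPL", "MSFT", "GOOGL"]

# 200 historic prices per ticker, today first (index 0 is today)
prices = {t: [100 + random.uniform(-1, 1) * d for d in range(200)] for t in tickers}

class Trade:
    def __init__(self, ticker, quantity):
        self.ticker = ticker      # stock symbol
        self.quantity = quantity  # positive = buy, negative = sell

trade = Trade("AAPL", 50)
```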


def trade2():
    # Buy the stock with the highest price today
    ticker = max(prices, key=lambda t: prices[t][0])  # find the ticker with the highest price
    return [Trade(ticker, random.randrange(1, 10))]  # buy a small quantity


def trade3():
    # Sell the stock with the lowest price today
    ticker = min(prices, key=lambda t: prices[t][0])
    return [Trade(ticker, random.randrange(-10, -1))]


def trade4():
    # Buy the stock with the largest percent increase today
    changes = {t: (prices[t][0] - prices[t][1]) / prices[t][1] for t in prices}
    ticker = max(changes, key=changes.get)
    return [Trade(ticker, random.randrange(1, 10))]


def trade5():
    # Sell the stock with the largest percent decrease today
    changes = {t: (prices[t][0] - prices[t][1]) / prices[t][1] for t in prices}
    ticker = min(changes, key=changes.get)
    return [Trade(ticker, random.randrange(-10, -1))]


def trade6():
    # Buy the 3 stocks with the highest moving average over the last 5 days
    mvgs = {t: np.mean(prices[t][:5]) for t in prices}
    top_tickers = sorted(mvgs, key=mvgs.get, reverse=True)[:3]
    return [Trade(t, random.randrange(1, 5)) for t in top_tickers]


def trade7():
    # Sell the 3 stocks with the lowest moving average over the last 5 days
    mvgs = {t: np.mean(prices[t][:5]) for t in prices}
    bottom_tickers = sorted(mvgs, key=mvgs.get)[:3]
    return [Trade(t, random.randrange(-5, -1)) for t in bottom_tickers]


def trade8():
    # Randomly buy or sell a single stock based on a coin flip
    ticker = random.choice(tickers)
    action = random.choice([-1, 1])  # -1 for sell, 1 for buy
    return [Trade(ticker, action * random.randrange(1, 10))]


def trade9():
    # Diversify: Buy a small amount of 5 random stocks
    chosen_tickers = random.sample(tickers, 5)
    return [Trade(t, random.randrange(1, 3)) for t in chosen_tickers]


def trade10():
    # Follow the trend: If the market is up today, buy, else sell
    # (uses the first ticker as a crude proxy for the whole market)
    market_change = (prices[tickers[0]][0] - prices[tickers[0]][1]) / prices[tickers[0]][1]
    action = 1 if market_change > 0 else -1
    ticker = random.choice(tickers)
    return [Trade(ticker, action * random.randrange(1, 10))]


def trade11():
    # Mean Reversion: Buy the 2 stocks that fell the most yesterday, hoping they rebound
    yesterday_changes = {t: (prices[t][1] - prices[t][2]) / prices[t][2] for t in prices}
    bottom_tickers = sorted(yesterday_changes, key=yesterday_changes.get)[:2]
    return [Trade(t, random.randrange(1, 5)) for t in bottom_tickers]


def trade12():
    # Momentum: Short the 2 stocks that rose the most yesterday, expecting a pullback
    yesterday_changes = {t: (prices[t][1] - prices[t][2]) / prices[t][2] for t in prices}
    top_tickers = sorted(yesterday_changes, key=yesterday_changes.get, reverse=True)[:2]
    return [Trade(t, random.randrange(-5, -1)) for t in top_tickers]

def trade13():
    # Pairs Trading: Long one stock, short another with a similar price history
    correlations = np.corrcoef([prices[t] for t in tickers])
    np.fill_diagonal(correlations, -np.inf)  # ignore self-correlation, which is always 1.0
    i, j = np.unravel_index(np.argmax(correlations), correlations.shape)
    return [Trade(tickers[i], 1), Trade(tickers[j], -1)]


def trade14():
    # Relative Strength: Go long on the strongest stock, short the weakest
    performances = {t: (prices[t][0] - prices[t][-1]) / prices[t][-1] for t in prices}
    strongest = max(performances, key=performances.get)
    weakest = min(performances, key=performances.get)
    return [Trade(strongest, 1), Trade(weakest, -1)]


def trade15():
    # Calendar Spread: Buy this month's option, sell next month's (same strike)
    # This is a simplified representation, as actual option trading is more complex
    ticker = random.choice(tickers)
    return [Trade(f"{ticker}_OPT_THIS_MONTH", 1), Trade(f"{ticker}_OPT_NEXT_MONTH", -1)]


def trade16():
    # Straddle: Buy both a call and put option on the same stock (same strike)
    ticker = random.choice(tickers)
    strike = prices[ticker][0]  # use the current price as a simple strike price
    return [Trade(f"{ticker}_CALL_{strike}", 1), Trade(f"{ticker}_PUT_{strike}", 1)]

def trade17():
    # Breakout: Buy if a stock breaks above its previous 52-week high
    ticker = random.choice(tickers)
    # Exclude today's price from the max, otherwise the condition can never be true
    if prices[ticker][0] > max(prices[ticker][1:]):
        return [Trade(ticker, random.randrange(1, 10))]
    else:
        return []


def trade18():
    # Volatility: If market volatility is high, sell (expecting it to decrease)
    market_volatility = np.std([prices[t][0] / prices[t][1] for t in tickers])
    if market_volatility > 0.05:  # you'd adjust this threshold based on your risk tolerance
        ticker = random.choice(tickers)
        return [Trade(ticker, random.randrange(-10, -1))]
    else:
        return []


def trade19():
    # Golden Cross: Buy if the short-term moving average crosses above the long-term
    ticker = random.choice(tickers)
    short_ma = np.mean(prices[ticker][:5])
    long_ma = np.mean(prices[ticker][:20])
    if short_ma > long_ma and short_ma - long_ma < 0.01:  # small margin to avoid false signals
        return [Trade(ticker, random.randrange(1, 10))]
    else:
        return []


def trade20():
    # Death Cross: Sell if the short-term moving average crosses below the long-term
    ticker = random.choice(tickers)
    short_ma = np.mean(prices[ticker][:5])
    long_ma = np.mean(prices[ticker][:20])
    if short_ma < long_ma and long_ma - short_ma < 0.01:
        return [Trade(ticker, random.randrange(-10, -1))]
    else:
        return []

def trade21():
    # Correlated Pairs Buy: Buy a pair of stocks that have historically moved together
    correlations = np.corrcoef([prices[t] for t in tickers])
    np.fill_diagonal(correlations, -np.inf)  # ignore self-correlation, which is always 1.0
    i, j = np.unravel_index(np.argmax(correlations), correlations.shape)
    return [Trade(tickers[i], 1), Trade(tickers[j], 1)]


def trade22():
    # Correlated Pairs Sell: Sell a pair of stocks that have historically moved together
    correlations = np.corrcoef([prices[t] for t in tickers])
    np.fill_diagonal(correlations, -np.inf)  # ignore self-correlation, which is always 1.0
    i, j = np.unravel_index(np.argmax(correlations), correlations.shape)
    return [Trade(tickers[i], -1), Trade(tickers[j], -1)]


def trade23():
    # Contrarian Pairs Buy: Buy a stock that's down while its correlated pair is up
    correlations = np.corrcoef([prices[t] for t in tickers])
    np.fill_diagonal(correlations, -np.inf)  # ignore self-correlation, which is always 1.0
    i, j = np.unravel_index(np.argmax(correlations), correlations.shape)
    if prices[tickers[i]][0] < prices[tickers[i]][1] and prices[tickers[j]][0] > prices[tickers[j]][1]:
        return [Trade(tickers[i], 1)]
    else:
        return []


def trade24():
    # Contrarian Pairs Sell: Sell a stock that's up while its correlated pair is down
    correlations = np.corrcoef([prices[t] for t in tickers])
    np.fill_diagonal(correlations, -np.inf)  # ignore self-correlation, which is always 1.0
    i, j = np.unravel_index(np.argmax(correlations), correlations.shape)
    if prices[tickers[i]][0] > prices[tickers[i]][1] and prices[tickers[j]][0] < prices[tickers[j]][1]:
        return [Trade(tickers[i], -1)]
    else:
        return []


def trade25():
    # Correlation Reversal: Buy a stock that's recently become less correlated with the market
    # This is a simplified version; you'd likely use a rolling correlation window
    market_prices = [prices[t] for t in tickers]
    correlations_today = np.corrcoef(market_prices)
    correlations_yesterday = np.corrcoef([p[1:] for p in market_prices])
    diffs = correlations_today - correlations_yesterday
    i, j = np.unravel_index(np.argmin(diffs), diffs.shape)
    if i != j:  # ensure we're not comparing a stock to itself
        return [Trade(tickers[i], 1)]
    else:
        return []


def trade26():
    # Sector Rotation: Buy the top 2 stocks from the sector that's most correlated with the market
    # Assumes you have sector data (e.g., a 'sector_map' dict: ticker -> sector)
    sector_returns = {s: np.mean([(prices[t][0] - prices[t][1]) / prices[t][1] for t in tickers if sector_map[t] == s]) for s in set(sector_map.values())}
    top_sector = max(sector_returns, key=sector_returns.get)
    top_tickers_in_sector = sorted([(t, prices[t][0]) for t in tickers if sector_map[t] == top_sector], key=lambda x: x[1], reverse=True)[:2]
    return [Trade(t, 1) for t, _ in top_tickers_in_sector]


def trade27():
    # Beta-Weighted Portfolio: Allocate more to stocks with higher betas (more volatile)
    # You'd need historical market data to calculate real betas
    betas = {t: random.uniform(0.5, 2) for t in tickers}  # placeholder for actual betas
    total_beta = sum(betas.values())
    allocations = {t: betas[t] / total_beta * 100 for t in tickers}
    return [Trade(t, int(allocations[t])) for t in tickers]

def trade28():
    # Diversified Portfolio: Buy a mix of stocks with low correlations to each other
    correlations = np.corrcoef([prices[t] for t in tickers])
    chosen_tickers = []
    candidates = list(tickers)  # work on a copy so the global tickers list is not mutated
    while len(chosen_tickers) < 5 and len(candidates) > 0:
        t = random.choice(candidates)
        if all(correlations[tickers.index(t)][tickers.index(c)] < 0.5 for c in chosen_tickers):
            chosen_tickers.append(t)
        candidates.remove(t)
    return [Trade(t, random.randrange(1, 3)) for t in chosen_tickers]
||||
|
||||
def trade29(): |
||||
# Cointegration: Find a pair of stocks that are cointegrated and trade their spread |
||||
# This requires more complex analysis (e.g., using the Johansen test) |
||||
# For simplicity, we'll just pick a random pair and assume cointegration |
||||
i, j = random.sample(range(len(tickers)), 2) |
||||
spread = prices[tickers[i]][0] - prices[tickers[j]][0] |
||||
if spread > 0: |
||||
return [Trade(tickers[i], -1), Trade(tickers[j], 1)] |
||||
else: |
||||
return [Trade(tickers[i], 1), Trade(tickers[j], -1)] |
||||
|
||||
def trade30(): |
||||
# Basket Trading: Buy or sell a basket of stocks based on their correlation to a benchmark |
||||
# You'd need a benchmark ticker and its historical prices |
||||
benchmark = "SPY" |
||||
correlations = np.corrcoef([prices[t] for t in tickers + [benchmark]])[:-1, -1] # Correlate each stock with the benchmark |
||||
if np.mean(correlations) > 0.5: |
||||
return [Trade(t, 1) for t in tickers] |
||||
else: |
||||
return [Trade(t, -1) for t in tickers] |
||||
|
||||
def trade31(): |
||||
# Double Bottom: Buy when a stock forms a double bottom pattern |
||||
ticker = random.choice(tickers) |
||||
if prices[ticker][0] < prices[ticker][2] < prices[ticker][4] and prices[ticker][1] > prices[ticker][3]: |
||||
return [Trade(ticker, 1)] |
||||
else: |
||||
return [] |
||||
|
||||
def trade32(): |
||||
# Double Top: Sell when a stock forms a double top pattern |
||||
ticker = random.choice(tickers) |
||||
if prices[ticker][0] > prices[ticker][2] > prices[ticker][4] and prices[ticker][1] < prices[ticker][3]: |
||||
return [Trade(ticker, -1)] |
||||
else: |
||||
return [] |
||||
|
||||
def trade33(): |
||||
# Head and Shoulders: Sell when a stock forms a head and shoulders pattern |
||||
ticker = random.choice(tickers) |
||||
if prices[ticker][0] < prices[ticker][2] < prices[ticker][4] and prices[ticker][1] > prices[ticker][3] > prices[ticker][5]: |
||||
return [Trade(ticker, -1)] |
||||
else: |
||||
return [] |
||||
|
||||
def trade34():
    # Inverse Head and Shoulders: Buy when a stock forms an inverse head and shoulders pattern
    ticker = random.choice(tickers)
    if prices[ticker][0] > prices[ticker][2] > prices[ticker][4] and prices[ticker][1] < prices[ticker][3] < prices[ticker][5]:
        return [Trade(ticker, 1)]
    else:
        return []

def trade35():
    # Ascending Triangle: Buy when a stock forms an ascending triangle pattern
    ticker = random.choice(tickers)
    # Simplified logic: check for higher lows and flat highs
    if prices[ticker][0] > prices[ticker][2] > prices[ticker][4] and prices[ticker][1] == prices[ticker][3] == prices[ticker][5]:
        return [Trade(ticker, 1)]
    else:
        return []

def trade36():
    # Descending Triangle: Sell when a stock forms a descending triangle pattern
    ticker = random.choice(tickers)
    # Simplified logic: check for lower highs and flat lows
    if prices[ticker][0] < prices[ticker][2] < prices[ticker][4] and prices[ticker][1] == prices[ticker][3] == prices[ticker][5]:
        return [Trade(ticker, -1)]
    else:
        return []

def trade37():
    # Flag/Pennant: Buy or sell based on the direction of the flag/pennant pattern
    ticker = random.choice(tickers)
    # Simplified logic: check for a consolidation period after a strong move
    if abs(prices[ticker][0] - np.mean(prices[ticker][1:5])) < 0.05 and abs(prices[ticker][5] - prices[ticker][6]) > 0.1:
        # Buy if the prior move was up, sell if down
        return [Trade(ticker, 1 if prices[ticker][5] > prices[ticker][6] else -1)]
    else:
        return []

def trade38():
    # Gap Up: Buy when a stock opens significantly higher than its previous close
    ticker = random.choice(tickers)
    if prices[ticker][0] > prices[ticker][1] * 1.05:  # 5% gap up
        return [Trade(ticker, 1)]
    else:
        return []

def trade39():
    # Gap Down: Sell when a stock opens significantly lower than its previous close
    ticker = random.choice(tickers)
    if prices[ticker][0] < prices[ticker][1] * 0.95:  # 5% gap down
        return [Trade(ticker, -1)]
    else:
        return []

def trade40():
    # Rounding Bottom: Buy when a stock forms a rounding bottom pattern
    ticker = random.choice(tickers)
    # Simplified logic: check for a gradual price increase after a period of decline
    if prices[ticker][0] > prices[ticker][2] > prices[ticker][4] and prices[ticker][1] < prices[ticker][3] < prices[ticker][5]:
        return [Trade(ticker, 1)]
    else:
        return []

def trade41():
    # Overbought/Oversold (RSI): Sell if RSI is above 70, buy if below 30
    ticker = random.choice(tickers)
    rsi = calculate_rsi(prices[ticker], 14)  # Assuming you have an RSI calculation function
    if rsi > 70:
        return [Trade(ticker, -1)]
    elif rsi < 30:
        return [Trade(ticker, 1)]
    else:
        return []

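# --- Sketch (not part of the original file): the RSI helper trade41 assumes ---
# A minimal simple-average RSI, consistent with the today-first price lists used
# in this file; Wilder's smoothed version would differ slightly.
import numpy as np

def calculate_rsi(price_series, period=14):
    # Take the most recent period+1 prices and reverse to chronological order
    chronological = list(reversed(price_series[:period + 1]))
    deltas = np.diff(chronological)
    gains = np.clip(deltas, 0, None)
    losses = np.clip(-deltas, 0, None)
    avg_gain = np.mean(gains)
    avg_loss = np.mean(losses)
    if avg_loss == 0:  # all gains: RSI saturates at 100
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)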
def trade42():
    # Bollinger Bands Breakout: Buy if price breaks above the upper band, sell if below the lower band
    ticker = random.choice(tickers)
    upper, middle, lower = calculate_bollinger_bands(prices[ticker], 20, 2)  # Assuming you have a Bollinger Band calculation function
    if prices[ticker][0] > upper:
        return [Trade(ticker, 1)]
    elif prices[ticker][0] < lower:
        return [Trade(ticker, -1)]
    else:
        return []

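# --- Sketch (not part of the original file): the Bollinger helper trade42 assumes ---
# Returns (upper, middle, lower) to match the unpacking order in trade42;
# the window covers the most recent prices in the today-first list.
import numpy as np

def calculate_bollinger_bands(price_series, window=20, num_std=2):
    recent = np.array(price_series[:window], dtype=float)
    middle = recent.mean()
    band = num_std * recent.std()  # population std; sample std is another common choice
    return middle + band, middle, middle - band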
def trade43():
    # Channel Breakout: Buy or sell when price breaks out of a recent price channel
    ticker = random.choice(tickers)
    highs = [max(prices[ticker][i:i+5]) for i in range(len(prices[ticker]) - 5)]
    lows = [min(prices[ticker][i:i+5]) for i in range(len(prices[ticker]) - 5)]
    if prices[ticker][0] > highs[-1]:
        return [Trade(ticker, 1)]
    elif prices[ticker][0] < lows[-1]:
        return [Trade(ticker, -1)]
    else:
        return []

def trade44():
    # Trend Following: Buy if the 20-day moving average is rising, sell if falling
    ticker = random.choice(tickers)
    ma20_today = np.mean(prices[ticker][:20])
    ma20_yesterday = np.mean(prices[ticker][1:21])
    if ma20_today > ma20_yesterday:
        return [Trade(ticker, 1)]
    elif ma20_today < ma20_yesterday:
        return [Trade(ticker, -1)]
    else:
        return []

def trade45():
    # MACD Crossover: Buy when MACD line crosses above signal line, sell when below
    ticker = random.choice(tickers)
    macd_line, signal_line = calculate_macd(prices[ticker])  # Assuming you have a MACD calculation function
    if macd_line[-1] > signal_line[-1] and macd_line[-2] <= signal_line[-2]:
        return [Trade(ticker, 1)]
    elif macd_line[-1] < signal_line[-1] and macd_line[-2] >= signal_line[-2]:
        return [Trade(ticker, -1)]
    else:
        return []

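# --- Sketch (not part of the original file): the MACD helper trade45 assumes ---
# Returns chronological arrays (oldest first), so that the [-1]/[-2] indexing in
# trade45 refers to today and yesterday. The 12/26/9 spans are the usual defaults.
import numpy as np

def _ema(values, span):
    # Exponential moving average with smoothing factor 2 / (span + 1)
    alpha = 2.0 / (span + 1.0)
    out = [float(values[0])]
    for v in values[1:]:
        out.append(alpha * float(v) + (1.0 - alpha) * out[-1])
    return np.array(out)

def calculate_macd(price_series, fast=12, slow=26, signal=9):
    chronological = list(reversed(price_series))  # today-first input -> oldest first
    macd_line = _ema(chronological, fast) - _ema(chronological, slow)
    signal_line = _ema(macd_line, signal)
    return macd_line, signal_line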
def trade46():
    # Stochastic Oscillator: Buy if %K crosses above %D in oversold zone, sell if opposite
    ticker = random.choice(tickers)
    k_line, d_line = calculate_stochastic(prices[ticker])  # Assuming you have a Stochastic calculation function
    if k_line[-1] > d_line[-1] and k_line[-1] < 20:
        return [Trade(ticker, 1)]
    elif k_line[-1] < d_line[-1] and k_line[-1] > 80:
        return [Trade(ticker, -1)]
    else:
        return []

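# --- Sketch (not part of the original file): the stochastic helper trade46 assumes ---
# A close-only %K/%D (a real implementation would use highs and lows). Returns
# chronological lists so k_line[-1] and d_line[-1] are today's values.
def calculate_stochastic(price_series, k_period=14, d_period=3):
    chronological = list(reversed(price_series))  # today-first input -> oldest first
    k_line = []
    for i in range(k_period - 1, len(chronological)):
        window = chronological[i - k_period + 1:i + 1]
        lo, hi = min(window), max(window)
        # %K: where today's close sits inside the recent range (50 if the range is flat)
        k = 50.0 if hi == lo else 100.0 * (chronological[i] - lo) / (hi - lo)
        k_line.append(k)
    # %D: simple moving average of %K
    d_line = [sum(k_line[max(0, i - d_period + 1):i + 1]) / min(d_period, i + 1)
              for i in range(len(k_line))]
    return k_line, d_line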
def trade47():
    # Volume Spike: Buy if today's volume is much higher than the average
    # You'd need volume data for this strategy
    ticker = random.choice(tickers)
    avg_volume = np.mean(volumes[ticker][1:])  # Assuming you have 'volumes' data
    if volumes[ticker][0] > avg_volume * 2:
        return [Trade(ticker, 1)]
    else:
        return []

def trade48():
    # Price Spike: Buy if today's price increase is much higher than average daily change
    ticker = random.choice(tickers)
    daily_changes = [(prices[ticker][i] - prices[ticker][i + 1]) / prices[ticker][i + 1] for i in range(len(prices[ticker]) - 1)]
    avg_change = np.mean(daily_changes)
    today_change = (prices[ticker][0] - prices[ticker][1]) / prices[ticker][1]
    if today_change > avg_change * 2:
        return [Trade(ticker, 1)]
    else:
        return []

def trade49():
    # Mean Reversion (Long-term): Buy if the price is below its 200-day moving average
    ticker = random.choice(tickers)
    ma200 = np.mean(prices[ticker][:200])  # most recent 200 days of the today-first list
    if prices[ticker][0] < ma200:
        return [Trade(ticker, 1)]
    else:
        return []

def trade50():
    # Trend Reversal (Parabolic SAR): Buy or sell based on the Parabolic SAR indicator
    # Assuming you have a Parabolic SAR calculation function
    ticker = random.choice(tickers)
    sar = calculate_parabolic_sar(prices[ticker])
    if prices[ticker][0] > sar[-1]:
        return [Trade(ticker, 1)]
    elif prices[ticker][0] < sar[-1]:
        return [Trade(ticker, -1)]
    else:
        return []

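# --- Sketch (not part of the original file): a simplified SAR for trade50 ---
# A close-only Parabolic SAR approximation (the standard indicator uses highs
# and lows). Returns a chronological list so sar[-1] is today's value, matching
# the comparison in trade50.
def calculate_parabolic_sar(price_series, af_step=0.02, af_max=0.2):
    p = list(reversed(price_series))  # today-first input -> oldest first
    sar = [p[0]]
    ep = p[0]          # extreme point of the current trend
    af = af_step       # acceleration factor
    rising = True
    for price in p[1:]:
        new_sar = sar[-1] + af * (ep - sar[-1])
        if rising:
            if price < new_sar:  # trend flip: reset SAR to the old extreme point
                rising, new_sar, ep, af = False, ep, price, af_step
            elif price > ep:
                ep, af = price, min(af + af_step, af_max)
        else:
            if price > new_sar:
                rising, new_sar, ep, af = True, ep, price, af_step
            elif price < ep:
                ep, af = price, min(af + af_step, af_max)
        sar.append(new_sar)
    return sar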
def trade51():
    # Market Outperformance: Buy stocks whose daily returns beat the market
    total_market_values = [sum(prices[t][i] for t in tickers) for i in range(len(prices[tickers[0]]))]
    market_return = (total_market_values[0] - total_market_values[1]) / total_market_values[1]
    outperformers = [t for t in tickers if (prices[t][0] - prices[t][1]) / prices[t][1] > market_return]
    if outperformers:
        ticker = random.choice(outperformers)
        return [Trade(ticker, 1)]
    else:
        return []

def trade52():
    # Market Underperformance: Short stocks whose daily returns lag the market
    total_market_values = [sum(prices[t][i] for t in tickers) for i in range(len(prices[tickers[0]]))]
    market_return = (total_market_values[0] - total_market_values[1]) / total_market_values[1]
    underperformers = [t for t in tickers if (prices[t][0] - prices[t][1]) / prices[t][1] < market_return]
    if underperformers:
        ticker = random.choice(underperformers)
        return [Trade(ticker, -1)]
    else:
        return []

def trade53():
    # Relative Strength to Market: Buy the stock with the highest relative strength to the market
    total_market_values = [sum(prices[t][i] for t in tickers) for i in range(len(prices[tickers[0]]))]
    market_return = (total_market_values[0] - total_market_values[1]) / total_market_values[1]
    relative_strengths = {t: ((prices[t][0] - prices[t][1]) / prices[t][1]) - market_return for t in tickers}
    ticker = max(relative_strengths, key=relative_strengths.get)
    return [Trade(ticker, 1)]

def trade54():
    # Relative Weakness to Market: Short the stock with the lowest relative strength to the market
    total_market_values = [sum(prices[t][i] for t in tickers) for i in range(len(prices[tickers[0]]))]
    market_return = (total_market_values[0] - total_market_values[1]) / total_market_values[1]
    relative_strengths = {t: ((prices[t][0] - prices[t][1]) / prices[t][1]) - market_return for t in tickers}
    ticker = min(relative_strengths, key=relative_strengths.get)
    return [Trade(ticker, -1)]

def trade55():
    # Sector vs. Market: Buy top stock from sector outperforming the market, short from underperforming
    # Assuming you have sector data (e.g., 'sector_map' dict: ticker -> sector)
    total_market_values = [sum(prices[t][i] for t in tickers) for i in range(len(prices[tickers[0]]))]
    market_return = (total_market_values[0] - total_market_values[1]) / total_market_values[1]
    sector_returns = {s: np.mean([(prices[t][0] - prices[t][1]) / prices[t][1] for t in tickers if sector_map[t] == s]) for s in set(sector_map.values())}
    outperforming_sectors = [s for s in sector_returns if sector_returns[s] > market_return]
    underperforming_sectors = [s for s in sector_returns if sector_returns[s] < market_return]
    trades = []
    if outperforming_sectors:
        top_ticker = max([(t, prices[t][0]) for t in tickers if sector_map[t] == random.choice(outperforming_sectors)], key=lambda x: x[1])[0]
        trades.append(Trade(top_ticker, 1))
    if underperforming_sectors:
        bottom_ticker = min([(t, prices[t][0]) for t in tickers if sector_map[t] == random.choice(underperforming_sectors)], key=lambda x: x[1])[0]
        trades.append(Trade(bottom_ticker, -1))
    return trades

def trade56():
    # Market-Neutral Pairs: Long/short pairs of stocks with similar market betas
    betas = {t: random.uniform(0.8, 1.2) for t in tickers}  # Placeholder, calculate actual betas
    pairs = [(t1, t2) for t1 in tickers for t2 in tickers if abs(betas[t1] - betas[t2]) < 0.1 and t1 != t2]
    if pairs:
        t1, t2 = random.choice(pairs)
        return [Trade(t1, 1), Trade(t2, -1)]
    else:
        return []

def trade57():
    # Beta Rotation: Buy high-beta stocks if the market is rising, low-beta if falling
    total_market_values = [sum(prices[t][i] for t in tickers) for i in range(len(prices[tickers[0]]))]
    market_return = (total_market_values[0] - total_market_values[1]) / total_market_values[1]
    betas = {t: random.uniform(0.5, 2) for t in tickers}  # Placeholder, calculate actual betas
    if market_return > 0:  # Market is rising
        target_beta = 1.5  # Example target for high-beta
    else:
        target_beta = 0.8  # Example target for low-beta
    closest_ticker = min(tickers, key=lambda t: abs(betas[t] - target_beta))
    return [Trade(closest_ticker, 1 if market_return > 0 else -1)]  # Buy if rising, short if falling

def trade58():
    # Market Timing with Relative Strength: Buy strong stocks in up markets, sell in down markets
    total_market_values = [sum(prices[t][i] for t in tickers) for i in range(len(prices[tickers[0]]))]
    market_return = (total_market_values[0] - total_market_values[1]) / total_market_values[1]
    relative_strengths = {t: ((prices[t][0] - prices[t][-1]) / prices[t][-1]) for t in tickers}  # Calculate over a longer period (e.g., 20 days)
    if market_return > 0:
        strongest = max(relative_strengths, key=relative_strengths.get)
        return [Trade(strongest, 1)]
    else:
        weakest = min(relative_strengths, key=relative_strengths.get)
        return [Trade(weakest, -1)]

def trade59():
    # Relative Value to Market: Buy stocks trading below their historical average relative to the market
    # Requires historical data to calculate averages
    total_market_values = [sum(prices[t][i] for t in tickers) for i in range(len(prices[tickers[0]]))]
    relative_values = {t: prices[t][0] / total_market_values[0] for t in tickers}  # Current relative value
    historical_averages = {t: 0.05 for t in tickers}  # Placeholder, calculate actual averages
    undervalued = [t for t in tickers if relative_values[t] < historical_averages[t] * 0.95]  # Allow some buffer
    if undervalued:
        ticker = random.choice(undervalued)
        return [Trade(ticker, 1)]
    else:
        return []

def trade60():
    # Market-Cap Weighted: Allocate trade amounts proportional to each stock's market cap
    market_caps = {t: prices[t][0] * 1000 for t in tickers}  # Assuming 1000 shares outstanding for each stock
    total_market_cap = sum(market_caps.values())
    weights = {t: market_caps[t] / total_market_cap for t in tickers}  # weights sum to 1
    total_trade_amount = 100  # Example
    trades = [Trade(t, int(weights[t] * total_trade_amount)) for t in tickers]
    return trades
@ -1,884 +0,0 @@
# tickers is a list of stock tickers
import tickers

# prices is a dict; the key is a ticker and the value is a list of historic prices, today first
import prices

# Trade represents a decision to buy or sell a quantity of a ticker
import Trade

import random
import numpy as np

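# --- Sketch (not part of the original file): a mock harness for the pseudo-imports ---
# The 'import tickers/prices/Trade' lines above are placeholders for the trading
# environment. A minimal stand-in (every name and value here is an assumption
# for illustration only) that lets the strategies below run:
import random
from dataclasses import dataclass

@dataclass
class Trade:
    ticker: str
    quantity: int  # positive = buy, negative = sell

# Hypothetical universe: 3 tickers with 60 days of today-first prices
tickers = ["AAA", "BBB", "CCC"]
prices = {t: [100 + random.uniform(-5, 5) for _ in range(60)] for t in tickers}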
def trade2():
    # Buy the top performing stock in the last 5 days
    avg_prices = {ticker: np.mean(prices[ticker][:5]) for ticker in tickers}
    best_ticker = max(avg_prices, key=avg_prices.get)
    trade = Trade(best_ticker, 100)
    return [trade]

def trade3():
    # Sell the worst performing stock in the last 5 days
    avg_prices = {ticker: np.mean(prices[ticker][:5]) for ticker in tickers}
    worst_ticker = min(avg_prices, key=avg_prices.get)
    trade = Trade(worst_ticker, -100)
    return [trade]

def trade4():
    # Buy a random stock from the top 5 performing in the last 10 days
    avg_prices = {ticker: np.mean(prices[ticker][:10]) for ticker in tickers}
    top_5_tickers = sorted(avg_prices, key=avg_prices.get, reverse=True)[:5]
    ticker = random.choice(top_5_tickers)
    trade = Trade(ticker, 100)
    return [trade]

def trade5():
    # Sell a random stock from the bottom 5 performing in the last 10 days
    avg_prices = {ticker: np.mean(prices[ticker][:10]) for ticker in tickers}
    bottom_5_tickers = sorted(avg_prices, key=avg_prices.get)[:5]
    ticker = random.choice(bottom_5_tickers)
    trade = Trade(ticker, -100)
    return [trade]

def trade6():
    # Buy a stock with a positive trend over the last 7 days
    trending_up = [ticker for ticker in tickers if prices[ticker][0] > prices[ticker][6]]
    if not trending_up:  # Guard: random.choice raises on an empty list
        return []
    ticker = random.choice(trending_up)
    trade = Trade(ticker, 100)
    return [trade]

def trade7():
    # Sell a stock with a negative trend over the last 7 days
    trending_down = [ticker for ticker in tickers if prices[ticker][0] < prices[ticker][6]]
    if not trending_down:  # Guard: random.choice raises on an empty list
        return []
    ticker = random.choice(trending_down)
    trade = Trade(ticker, -100)
    return [trade]

def trade8():
    # Buy the stock with the lowest volatility over the last 20 days
    volatilities = {ticker: np.std(prices[ticker][:20]) for ticker in tickers}
    least_volatile = min(volatilities, key=volatilities.get)
    trade = Trade(least_volatile, 100)
    return [trade]

def trade9():
    # Sell the stock with the highest volatility over the last 20 days
    volatilities = {ticker: np.std(prices[ticker][:20]) for ticker in tickers}
    most_volatile = max(volatilities, key=volatilities.get)
    trade = Trade(most_volatile, -100)
    return [trade]

def trade10():
    # Random mixed strategy: randomly buy or sell a random stock
    ticker = random.choice(tickers)
    quantity = random.choice([-100, 100])
    trade = Trade(ticker, quantity)
    return [trade]

def trade11():
    # Buy the top 3 performing stocks in the last 15 days
    avg_prices = {ticker: np.mean(prices[ticker][:15]) for ticker in tickers}
    top_3_tickers = sorted(avg_prices, key=avg_prices.get, reverse=True)[:3]
    trades = [Trade(ticker, 100) for ticker in top_3_tickers]
    return trades

def trade12():
    # Sell the bottom 3 performing stocks in the last 15 days
    avg_prices = {ticker: np.mean(prices[ticker][:15]) for ticker in tickers}
    bottom_3_tickers = sorted(avg_prices, key=avg_prices.get)[:3]
    trades = [Trade(ticker, -100) for ticker in bottom_3_tickers]
    return trades

def trade13():
    # Buy the 2 stocks with the highest increase in price in the last 10 days
    price_increases = {ticker: prices[ticker][0] - prices[ticker][9] for ticker in tickers}
    top_2_increases = sorted(price_increases, key=price_increases.get, reverse=True)[:2]
    trades = [Trade(ticker, 100) for ticker in top_2_increases]
    return trades

def trade14():
    # Sell the 2 stocks with the highest decrease in price in the last 10 days
    price_decreases = {ticker: prices[ticker][0] - prices[ticker][9] for ticker in tickers}
    top_2_decreases = sorted(price_decreases, key=price_decreases.get)[:2]
    trades = [Trade(ticker, -100) for ticker in top_2_decreases]
    return trades

def trade15():
    # Buy stocks that have shown the highest volatility in the last 30 days
    volatilities = {ticker: np.std(prices[ticker][:30]) for ticker in tickers}
    high_volatility_tickers = sorted(volatilities, key=volatilities.get, reverse=True)[:3]
    trades = [Trade(ticker, 100) for ticker in high_volatility_tickers]
    return trades

def trade16():
    # Sell stocks that have shown the lowest volatility in the last 30 days
    volatilities = {ticker: np.std(prices[ticker][:30]) for ticker in tickers}
    low_volatility_tickers = sorted(volatilities, key=volatilities.get)[:3]
    trades = [Trade(ticker, -100) for ticker in low_volatility_tickers]
    return trades

def trade17():
    # Buy stocks with prices above their 50-day moving average
    ma_50 = {ticker: np.mean(prices[ticker][:50]) for ticker in tickers}
    above_ma_tickers = [ticker for ticker in tickers if prices[ticker][0] > ma_50[ticker]]
    trades = [Trade(ticker, 100) for ticker in random.sample(above_ma_tickers, min(3, len(above_ma_tickers)))]
    return trades

def trade18():
    # Sell stocks with prices below their 50-day moving average
    ma_50 = {ticker: np.mean(prices[ticker][:50]) for ticker in tickers}
    below_ma_tickers = [ticker for ticker in tickers if prices[ticker][0] < ma_50[ticker]]
    trades = [Trade(ticker, -100) for ticker in random.sample(below_ma_tickers, min(3, len(below_ma_tickers)))]
    return trades

def trade19():
    # Mixed strategy: buy 2 random stocks and sell 2 random stocks
    buy_tickers = random.sample(tickers, 2)
    sell_tickers = random.sample([ticker for ticker in tickers if ticker not in buy_tickers], 2)
    trades = [Trade(ticker, 100) for ticker in buy_tickers] + [Trade(ticker, -100) for ticker in sell_tickers]
    return trades

def trade20():
    # Buy stocks with a positive return in the last 20 days and sell those with a negative return
    returns = {ticker: (prices[ticker][0] - prices[ticker][19]) / prices[ticker][19] for ticker in tickers}
    buy_tickers = [ticker for ticker in tickers if returns[ticker] > 0]
    sell_tickers = [ticker for ticker in tickers if returns[ticker] < 0]
    trades = [Trade(ticker, 100) for ticker in random.sample(buy_tickers, min(2, len(buy_tickers)))] + \
             [Trade(ticker, -100) for ticker in random.sample(sell_tickers, min(2, len(sell_tickers)))]
    return trades

def trade21():
    # Buy the top performing stock in the last 3 days
    avg_prices = {ticker: np.mean(prices[ticker][:3]) for ticker in tickers}
    best_ticker = max(avg_prices, key=avg_prices.get)
    trade = Trade(best_ticker, 100)
    return [trade]

def trade22():
    # Sell the worst performing stock in the last 3 days
    avg_prices = {ticker: np.mean(prices[ticker][:3]) for ticker in tickers}
    worst_ticker = min(avg_prices, key=avg_prices.get)
    trade = Trade(worst_ticker, -100)
    return [trade]

def trade23():
    # Buy stocks that have not changed price in the last 7 days
    stable_tickers = [ticker for ticker in tickers if prices[ticker][0] == prices[ticker][6]]
    trades = [Trade(ticker, 100) for ticker in random.sample(stable_tickers, min(3, len(stable_tickers)))]
    return trades

def trade24():
    # Sell stocks that have the smallest price change in the last 5 days
    smallest_changes = sorted(tickers, key=lambda t: abs(prices[t][0] - prices[t][4]))[:3]
    trades = [Trade(ticker, -100) for ticker in smallest_changes]
    return trades

def trade25():
    # Buy a random stock from the top 10 highest priced stocks
    highest_priced = sorted(tickers, key=lambda t: prices[t][0], reverse=True)[:10]
    ticker = random.choice(highest_priced)
    trade = Trade(ticker, 100)
    return [trade]

def trade26():
    # Sell a random stock from the bottom 10 lowest priced stocks
    lowest_priced = sorted(tickers, key=lambda t: prices[t][0])[:10]
    ticker = random.choice(lowest_priced)
    trade = Trade(ticker, -100)
    return [trade]

def trade27():
    # Buy 2 stocks with the highest momentum (last 5 days)
    momentums = {ticker: prices[ticker][0] - prices[ticker][4] for ticker in tickers}
    top_momentum_tickers = sorted(momentums, key=momentums.get, reverse=True)[:2]
    trades = [Trade(ticker, 100) for ticker in top_momentum_tickers]
    return trades

def trade28():
    # Sell 2 stocks with the lowest momentum (last 5 days)
    momentums = {ticker: prices[ticker][0] - prices[ticker][4] for ticker in tickers}
    lowest_momentum_tickers = sorted(momentums, key=momentums.get)[:2]
    trades = [Trade(ticker, -100) for ticker in lowest_momentum_tickers]
    return trades

def trade29():
    # Buy the stock with the highest daily price increase yesterday
    yesterday_increase = {ticker: prices[ticker][1] - prices[ticker][2] for ticker in tickers}
    best_yesterday_ticker = max(yesterday_increase, key=yesterday_increase.get)
    trade = Trade(best_yesterday_ticker, 100)
    return [trade]

def trade30():
    # Sell the stock with the highest daily price decrease yesterday
    yesterday_decrease = {ticker: prices[ticker][1] - prices[ticker][2] for ticker in tickers}
    worst_yesterday_ticker = min(yesterday_decrease, key=yesterday_decrease.get)
    trade = Trade(worst_yesterday_ticker, -100)
    return [trade]

def trade31():
    # Long/short strategy: Buy the top performing stock and sell the worst performing stock over the last 7 days
    avg_prices = {ticker: np.mean(prices[ticker][:7]) for ticker in tickers}
    best_ticker = max(avg_prices, key=avg_prices.get)
    worst_ticker = min(avg_prices, key=avg_prices.get)
    trades = [Trade(best_ticker, 100), Trade(worst_ticker, -100)]
    return trades

def trade32():
    # Buy stocks that have had a positive return in the last 5 days and sell those with a negative return
    returns = {ticker: (prices[ticker][0] - prices[ticker][4]) / prices[ticker][4] for ticker in tickers}
    buy_tickers = [ticker for ticker in tickers if returns[ticker] > 0]
    sell_tickers = [ticker for ticker in tickers if returns[ticker] < 0]
    trades = [Trade(ticker, 100) for ticker in random.sample(buy_tickers, min(2, len(buy_tickers)))] + \
             [Trade(ticker, -100) for ticker in random.sample(sell_tickers, min(2, len(sell_tickers)))]
    return trades

def trade33():
    # Buy 2 stocks with the highest price-to-earnings ratio and sell 2 with the lowest
    pe_ratios = {ticker: random.uniform(10, 30) for ticker in tickers}  # Mock P/E ratios
    top_pe_tickers = sorted(pe_ratios, key=pe_ratios.get, reverse=True)[:2]
    low_pe_tickers = sorted(pe_ratios, key=pe_ratios.get)[:2]
    trades = [Trade(ticker, 100) for ticker in top_pe_tickers] + [Trade(ticker, -100) for ticker in low_pe_tickers]
    return trades

def trade34():
    # Buy the stock with the highest volume and sell the one with the lowest volume
    volumes = {ticker: random.randint(1000, 10000) for ticker in tickers}  # Mock volumes
    high_volume_ticker = max(volumes, key=volumes.get)
    low_volume_ticker = min(volumes, key=volumes.get)
    trades = [Trade(high_volume_ticker, 100), Trade(low_volume_ticker, -100)]
    return trades

def trade35():
    # Buy 3 stocks with the highest recent momentum and sell 3 with the lowest recent momentum
    momentums = {ticker: prices[ticker][0] - prices[ticker][5] for ticker in tickers}
    top_momentum_tickers = sorted(momentums, key=momentums.get, reverse=True)[:3]
    low_momentum_tickers = sorted(momentums, key=momentums.get)[:3]
    trades = [Trade(ticker, 100) for ticker in top_momentum_tickers] + [Trade(ticker, -100) for ticker in low_momentum_tickers]
    return trades

def trade36():
    # Buy stocks in the technology sector and sell stocks in the energy sector
    tech_stocks = random.sample(tickers, 3)  # Mock tech stocks
    energy_stocks = random.sample(tickers, 3)  # Mock energy stocks
    trades = [Trade(ticker, 100) for ticker in tech_stocks] + [Trade(ticker, -100) for ticker in energy_stocks]
    return trades

def trade37():
    # Long/short strategy: Buy the top 2 stocks with the highest recent gains and sell the top 2 with the highest recent losses
    recent_gains = {ticker: prices[ticker][0] - prices[ticker][10] for ticker in tickers}
    top_gainers = sorted(recent_gains, key=recent_gains.get, reverse=True)[:2]
    top_losers = sorted(recent_gains, key=recent_gains.get)[:2]
    trades = [Trade(ticker, 100) for ticker in top_gainers] + [Trade(ticker, -100) for ticker in top_losers]
    return trades

def trade38():
    # Buy the stocks with the highest dividend yield and sell those with the lowest
    dividend_yields = {ticker: random.uniform(1, 5) for ticker in tickers}  # Mock dividend yields
    high_yield_tickers = sorted(dividend_yields, key=dividend_yields.get, reverse=True)[:2]
    low_yield_tickers = sorted(dividend_yields, key=dividend_yields.get)[:2]
    trades = [Trade(ticker, 100) for ticker in high_yield_tickers] + [Trade(ticker, -100) for ticker in low_yield_tickers]
    return trades

def trade39():
    # Buy stocks that are trading near their 52-week highs and sell those near their 52-week lows
    highs_52w = {ticker: max(prices[ticker]) for ticker in tickers}
    lows_52w = {ticker: min(prices[ticker]) for ticker in tickers}
    near_highs = [ticker for ticker in tickers if prices[ticker][0] >= 0.9 * highs_52w[ticker]]
    near_lows = [ticker for ticker in tickers if prices[ticker][0] <= 1.1 * lows_52w[ticker]]
    trades = [Trade(ticker, 100) for ticker in random.sample(near_highs, min(2, len(near_highs)))] + \
             [Trade(ticker, -100) for ticker in random.sample(near_lows, min(2, len(near_lows)))]
    return trades

def trade40():
    # Long/short strategy: Buy 2 random stocks from the top 10 performing sectors and sell 2 from the bottom 10
    sectors = {ticker: random.choice(['Tech', 'Energy', 'Health', 'Finance', 'Retail']) for ticker in tickers}
    sector_performance = {sector: random.uniform(-10, 10) for sector in set(sectors.values())}
    top_sectors = sorted(sector_performance, key=sector_performance.get, reverse=True)[:2]
    bottom_sectors = sorted(sector_performance, key=sector_performance.get)[:2]
    buy_tickers = [ticker for ticker in tickers if sectors[ticker] in top_sectors]
    sell_tickers = [ticker for ticker in tickers if sectors[ticker] in bottom_sectors]
    trades = [Trade(ticker, 100) for ticker in random.sample(buy_tickers, min(2, len(buy_tickers)))] + \
             [Trade(ticker, -100) for ticker in random.sample(sell_tickers, min(2, len(sell_tickers)))]
    return trades

def trade41():
    # Buy the stock with the highest price increase today
    price_increases = {ticker: prices[ticker][0] - prices[ticker][1] for ticker in tickers}
    best_ticker = max(price_increases, key=price_increases.get)
    trade = Trade(best_ticker, 100)
    return [trade]

def trade42():
    # Sell the stock with the highest price decrease today
    price_decreases = {ticker: prices[ticker][0] - prices[ticker][1] for ticker in tickers}
    worst_ticker = min(price_decreases, key=price_decreases.get)
    trade = Trade(worst_ticker, -100)
    return [trade]

def trade43():
    # Buy stocks that have had a positive return in the last 3 days
    returns = {ticker: (prices[ticker][0] - prices[ticker][2]) / prices[ticker][2] for ticker in tickers}
    buy_tickers = [ticker for ticker in tickers if returns[ticker] > 0]
    trades = [Trade(ticker, 100) for ticker in random.sample(buy_tickers, min(3, len(buy_tickers)))]
    return trades

def trade44():
    # Sell stocks that have had a negative return in the last 3 days
    returns = {ticker: (prices[ticker][0] - prices[ticker][2]) / prices[ticker][2] for ticker in tickers}
    sell_tickers = [ticker for ticker in tickers if returns[ticker] < 0]
    trades = [Trade(ticker, -100) for ticker in random.sample(sell_tickers, min(3, len(sell_tickers)))]
    return trades

def trade45():
    # Buy the stock with the highest average return over the last 10 days
    avg_returns = {ticker: np.mean([(prices[ticker][i] - prices[ticker][i+1]) / prices[ticker][i+1] for i in range(9)]) for ticker in tickers}
    best_ticker = max(avg_returns, key=avg_returns.get)
    trade = Trade(best_ticker, 100)
    return [trade]

def trade46():
    # Sell the stock with the lowest average return over the last 10 days
    avg_returns = {ticker: np.mean([(prices[ticker][i] - prices[ticker][i+1]) / prices[ticker][i+1] for i in range(9)]) for ticker in tickers}
    worst_ticker = min(avg_returns, key=avg_returns.get)
    trade = Trade(worst_ticker, -100)
    return [trade]
||||
|
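

# trade45/trade46 above reduce a rolling price window to a mean daily return.
# A minimal, self-contained sketch of that same computation (newest price
# first, as everywhere in this file); `mean_daily_return` is a hypothetical
# helper, not part of this module:
def mean_daily_return(series):
    # (series[i] - series[i + 1]) / series[i + 1] is the return from day i+1 to day i
    n = len(series) - 1
    return sum((series[i] - series[i + 1]) / series[i + 1] for i in range(n)) / n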


def trade47():
    # Buy stocks that are oversold based on RSI (randomly assigned for simplicity)
    rsi = {ticker: random.uniform(0, 100) for ticker in tickers}
    oversold_tickers = [ticker for ticker in tickers if rsi[ticker] < 30]
    trades = [Trade(ticker, 100) for ticker in random.sample(oversold_tickers, min(3, len(oversold_tickers)))]
    return trades


def trade48():
    # Sell stocks that are overbought based on RSI (randomly assigned for simplicity)
    rsi = {ticker: random.uniform(0, 100) for ticker in tickers}
    overbought_tickers = [ticker for ticker in tickers if rsi[ticker] > 70]
    trades = [Trade(ticker, -100) for ticker in random.sample(overbought_tickers, min(3, len(overbought_tickers)))]
    return trades


def trade49():
    # Buy stocks with positive momentum over the last 20 days
    momentums = {ticker: prices[ticker][0] - prices[ticker][19] for ticker in tickers}
    positive_momentum_tickers = [ticker for ticker in momentums if momentums[ticker] > 0]
    trades = [Trade(ticker, 100) for ticker in random.sample(positive_momentum_tickers, min(3, len(positive_momentum_tickers)))]
    return trades


def trade50():
    # Sell stocks with negative momentum over the last 20 days
    momentums = {ticker: prices[ticker][0] - prices[ticker][19] for ticker in tickers}
    negative_momentum_tickers = [ticker for ticker in momentums if momentums[ticker] < 0]
    trades = [Trade(ticker, -100) for ticker in random.sample(negative_momentum_tickers, min(3, len(negative_momentum_tickers)))]
    return trades


def trade51():
    # Buy stocks that have a high positive correlation with a randomly chosen base ticker
    import scipy.stats
    base_ticker = random.choice(tickers)
    base_prices = prices[base_ticker]
    correlations = {ticker: scipy.stats.pearsonr(base_prices, prices[ticker])[0] for ticker in tickers if ticker != base_ticker}
    high_corr_tickers = [ticker for ticker, corr in correlations.items() if corr > 0.8]
    trades = [Trade(ticker, 100) for ticker in random.sample(high_corr_tickers, min(3, len(high_corr_tickers)))]
    return trades


def trade52():
    # Sell stocks that have a high negative correlation with a randomly chosen base ticker
    import scipy.stats
    base_ticker = random.choice(tickers)
    base_prices = prices[base_ticker]
    correlations = {ticker: scipy.stats.pearsonr(base_prices, prices[ticker])[0] for ticker in tickers if ticker != base_ticker}
    low_corr_tickers = [ticker for ticker, corr in correlations.items() if corr < -0.8]
    trades = [Trade(ticker, -100) for ticker in random.sample(low_corr_tickers, min(3, len(low_corr_tickers)))]
    return trades


def trade53():
    # Long/short strategy: buy stocks highly positively correlated, and sell stocks highly negatively correlated, with a random base ticker
    import scipy.stats
    base_ticker = random.choice(tickers)
    base_prices = prices[base_ticker]
    correlations = {ticker: scipy.stats.pearsonr(base_prices, prices[ticker])[0] for ticker in tickers if ticker != base_ticker}
    high_corr_tickers = [ticker for ticker, corr in correlations.items() if corr > 0.7]
    low_corr_tickers = [ticker for ticker, corr in correlations.items() if corr < -0.7]
    trades = [Trade(ticker, 100) for ticker in random.sample(high_corr_tickers, min(2, len(high_corr_tickers)))] + \
             [Trade(ticker, -100) for ticker in random.sample(low_corr_tickers, min(2, len(low_corr_tickers)))]
    return trades
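

# The correlation screens above (trade51-trade53) hinge on the Pearson
# coefficient that scipy.stats.pearsonr returns. A minimal sketch of that
# statistic from first principles, to make the >0.8 / <-0.8 thresholds
# concrete; `pearson` is a hypothetical helper, not part of this module:
def pearson(xs, ys):
    import math
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)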


def trade54():
    # Buy stocks that have a high correlation with an index (e.g. S&P 500; mocked data)
    import scipy.stats
    index_prices = [random.uniform(1000, 5000) for _ in range(len(prices[tickers[0]]))]  # Mock index prices
    correlations = {ticker: scipy.stats.pearsonr(index_prices, prices[ticker])[0] for ticker in tickers}
    high_corr_tickers = [ticker for ticker, corr in correlations.items() if corr > 0.8]
    trades = [Trade(ticker, 100) for ticker in random.sample(high_corr_tickers, min(3, len(high_corr_tickers)))]
    return trades


def trade55():
    # Sell stocks that have a low correlation with an index (e.g. S&P 500; mocked data)
    import scipy.stats
    index_prices = [random.uniform(1000, 5000) for _ in range(len(prices[tickers[0]]))]  # Mock index prices
    correlations = {ticker: scipy.stats.pearsonr(index_prices, prices[ticker])[0] for ticker in tickers}
    low_corr_tickers = [ticker for ticker, corr in correlations.items() if corr < 0.2]
    trades = [Trade(ticker, -100) for ticker in random.sample(low_corr_tickers, min(3, len(low_corr_tickers)))]
    return trades


def trade56():
    # Long/short strategy: buy stocks with high correlation and sell stocks with low correlation to a random base ticker
    import scipy.stats
    base_ticker = random.choice(tickers)
    base_prices = prices[base_ticker]
    correlations = {ticker: scipy.stats.pearsonr(base_prices, prices[ticker])[0] for ticker in tickers if ticker != base_ticker}
    high_corr_tickers = [ticker for ticker, corr in correlations.items() if corr > 0.7]
    low_corr_tickers = [ticker for ticker, corr in correlations.items() if corr < 0.2]
    trades = [Trade(ticker, 100) for ticker in random.sample(high_corr_tickers, min(2, len(high_corr_tickers)))] + \
             [Trade(ticker, -100) for ticker in random.sample(low_corr_tickers, min(2, len(low_corr_tickers)))]
    return trades


def trade57():
    # Buy stocks that are inversely correlated with a major sector ETF (mocked data)
    import scipy.stats
    sector_etf_prices = [random.uniform(50, 150) for _ in range(len(prices[tickers[0]]))]  # Mock sector ETF prices
    correlations = {ticker: scipy.stats.pearsonr(sector_etf_prices, prices[ticker])[0] for ticker in tickers}
    inverse_corr_tickers = [ticker for ticker, corr in correlations.items() if corr < -0.7]
    trades = [Trade(ticker, 100) for ticker in random.sample(inverse_corr_tickers, min(3, len(inverse_corr_tickers)))]
    return trades


def trade58():
    # Sell stocks that are highly correlated with a volatile index (mocked data)
    import scipy.stats
    volatile_index_prices = [random.uniform(1000, 2000) for _ in range(len(prices[tickers[0]]))]  # Mock volatile index prices
    correlations = {ticker: scipy.stats.pearsonr(volatile_index_prices, prices[ticker])[0] for ticker in tickers}
    high_corr_tickers = [ticker for ticker, corr in correlations.items() if corr > 0.8]
    trades = [Trade(ticker, -100) for ticker in random.sample(high_corr_tickers, min(3, len(high_corr_tickers)))]
    return trades


def trade59():
    # Buy stocks that are less correlated with the overall market (mocked S&P 500 data)
    import scipy.stats
    market_prices = [random.uniform(1000, 5000) for _ in range(len(prices[tickers[0]]))]  # Mock market index prices
    correlations = {ticker: scipy.stats.pearsonr(market_prices, prices[ticker])[0] for ticker in tickers}
    low_corr_tickers = [ticker for ticker, corr in correlations.items() if corr < 0.3]
    trades = [Trade(ticker, 100) for ticker in random.sample(low_corr_tickers, min(3, len(low_corr_tickers)))]
    return trades


def trade60():
    # Sell stocks that are highly correlated with a specific commodity price (e.g. oil; mocked data)
    import scipy.stats
    commodity_prices = [random.uniform(50, 100) for _ in range(len(prices[tickers[0]]))]  # Mock commodity prices
    correlations = {ticker: scipy.stats.pearsonr(commodity_prices, prices[ticker])[0] for ticker in tickers}
    high_corr_tickers = [ticker for ticker, corr in correlations.items() if corr > 0.7]
    trades = [Trade(ticker, -100) for ticker in random.sample(high_corr_tickers, min(3, len(high_corr_tickers)))]
    return trades


def trade61():
    # Buy stocks forming a "double bottom" pattern (last 5 days)
    double_bottom_tickers = [ticker for ticker in tickers if prices[ticker][4] < prices[ticker][2] == prices[ticker][0] < prices[ticker][1] and prices[ticker][3] > prices[ticker][2]]
    trades = [Trade(ticker, 100) for ticker in random.sample(double_bottom_tickers, min(3, len(double_bottom_tickers)))]
    return trades


def trade62():
    # Sell stocks forming a "double top" pattern (last 5 days)
    double_top_tickers = [ticker for ticker in tickers if prices[ticker][4] > prices[ticker][2] == prices[ticker][0] > prices[ticker][1] and prices[ticker][3] < prices[ticker][2]]
    trades = [Trade(ticker, -100) for ticker in random.sample(double_top_tickers, min(3, len(double_top_tickers)))]
    return trades


def trade63():
    # Buy stocks showing a "head and shoulders" bottom pattern (last 7 days)
    hs_bottom_tickers = [ticker for ticker in tickers if prices[ticker][6] > prices[ticker][5] < prices[ticker][4] > prices[ticker][3] < prices[ticker][2] and prices[ticker][1] < prices[ticker][0]]
    trades = [Trade(ticker, 100) for ticker in random.sample(hs_bottom_tickers, min(3, len(hs_bottom_tickers)))]
    return trades


def trade64():
    # Sell stocks showing a "head and shoulders" top pattern (last 7 days)
    hs_top_tickers = [ticker for ticker in tickers if prices[ticker][6] < prices[ticker][5] > prices[ticker][4] < prices[ticker][3] > prices[ticker][2] and prices[ticker][1] > prices[ticker][0]]
    trades = [Trade(ticker, -100) for ticker in random.sample(hs_top_tickers, min(3, len(hs_top_tickers)))]
    return trades


def trade65():
    # Buy stocks forming a "bullish flag" pattern (last 10 days)
    bullish_flag_tickers = [ticker for ticker in tickers if prices[ticker][9] < prices[ticker][8] and all(prices[ticker][i] < prices[ticker][i+1] for i in range(8, 4, -1)) and all(prices[ticker][i] > prices[ticker][i+1] for i in range(4, 0, -1))]
    trades = [Trade(ticker, 100) for ticker in random.sample(bullish_flag_tickers, min(3, len(bullish_flag_tickers)))]
    return trades


def trade66():
    # Sell stocks forming a "bearish flag" pattern (last 10 days)
    bearish_flag_tickers = [ticker for ticker in tickers if prices[ticker][9] > prices[ticker][8] and all(prices[ticker][i] > prices[ticker][i+1] for i in range(8, 4, -1)) and all(prices[ticker][i] < prices[ticker][i+1] for i in range(4, 0, -1))]
    trades = [Trade(ticker, -100) for ticker in random.sample(bearish_flag_tickers, min(3, len(bearish_flag_tickers)))]
    return trades


def trade67():
    # Buy stocks forming an "ascending triangle" pattern (last 15 days)
    ascending_triangle_tickers = [ticker for ticker in tickers if prices[ticker][14] < prices[ticker][13] and prices[ticker][0] > prices[ticker][7] and all(prices[ticker][i] <= prices[ticker][i+1] for i in range(13))]
    trades = [Trade(ticker, 100) for ticker in random.sample(ascending_triangle_tickers, min(3, len(ascending_triangle_tickers)))]
    return trades


def trade68():
    # Sell stocks forming a "descending triangle" pattern (last 15 days)
    descending_triangle_tickers = [ticker for ticker in tickers if prices[ticker][14] > prices[ticker][13] and prices[ticker][0] < prices[ticker][7] and all(prices[ticker][i] >= prices[ticker][i+1] for i in range(13))]
    trades = [Trade(ticker, -100) for ticker in random.sample(descending_triangle_tickers, min(3, len(descending_triangle_tickers)))]
    return trades


def trade69():
    # Buy stocks forming a "rounding bottom" pattern (last 20 days)
    rounding_bottom_tickers = [ticker for ticker in tickers if all(prices[ticker][i] >= prices[ticker][i+1] for i in range(10)) and all(prices[ticker][i] <= prices[ticker][i+1] for i in range(10, 19))]
    trades = [Trade(ticker, 100) for ticker in random.sample(rounding_bottom_tickers, min(3, len(rounding_bottom_tickers)))]
    return trades


def trade70():
    # Sell stocks forming a "rounding top" pattern (last 20 days)
    rounding_top_tickers = [ticker for ticker in tickers if all(prices[ticker][i] <= prices[ticker][i+1] for i in range(10)) and all(prices[ticker][i] >= prices[ticker][i+1] for i in range(10, 19))]
    trades = [Trade(ticker, -100) for ticker in random.sample(rounding_top_tickers, min(3, len(rounding_top_tickers)))]
    return trades
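

# The chart patterns above (trade61-trade70) are fixed-index predicates over a
# newest-first price window. Pulling trade61's condition into a named helper
# makes such a pattern unit-testable; `is_double_bottom` is a hypothetical
# helper, not part of this module:
def is_double_bottom(p):
    # p[0] is today: equal values at p[0] and p[2], both above the older low
    # p[4], with p[1] above p[0] and a bounce at p[3] above p[2]
    return p[4] < p[2] == p[0] < p[1] and p[3] > p[2]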


def trade71():
    # Buy stocks showing a strong upward trend over the last 10 days
    upward_trend_tickers = [ticker for ticker in tickers if prices[ticker][0] > prices[ticker][9] and all(prices[ticker][i] >= prices[ticker][i+1] for i in range(9))]
    trades = [Trade(ticker, 100) for ticker in random.sample(upward_trend_tickers, min(3, len(upward_trend_tickers)))]
    return trades


def trade72():
    # Sell stocks showing a strong downward trend over the last 10 days
    downward_trend_tickers = [ticker for ticker in tickers if prices[ticker][0] < prices[ticker][9] and all(prices[ticker][i] <= prices[ticker][i+1] for i in range(9))]
    trades = [Trade(ticker, -100) for ticker in random.sample(downward_trend_tickers, min(3, len(downward_trend_tickers)))]
    return trades


def trade73():
    # Buy stocks whose price is within one standard deviation of their 20-day mean (mean reversion)
    mean_reversion_tickers = [ticker for ticker in tickers if abs(prices[ticker][0] - np.mean(prices[ticker][:20])) < np.std(prices[ticker][:20])]
    trades = [Trade(ticker, 100) for ticker in random.sample(mean_reversion_tickers, min(3, len(mean_reversion_tickers)))]
    return trades


def trade74():
    # Sell stocks whose price has deviated more than two standard deviations from their 20-day mean
    mean_deviation_tickers = [ticker for ticker in tickers if abs(prices[ticker][0] - np.mean(prices[ticker][:20])) > 2 * np.std(prices[ticker][:20])]
    trades = [Trade(ticker, -100) for ticker in random.sample(mean_deviation_tickers, min(3, len(mean_deviation_tickers)))]
    return trades
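

# trade73/trade74 above measure how far today's price sits from its 20-day
# mean, in units of standard deviation (a z-score). That shared step, isolated
# into a sketch; `zscore_today` is a hypothetical helper (np.std defaults to
# the population standard deviation, which this reproduces):
def zscore_today(series):
    import math
    mean = sum(series) / len(series)
    variance = sum((p - mean) ** 2 for p in series) / len(series)
    return (series[0] - mean) / math.sqrt(variance)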


def trade75():
    # Buy stocks that have shown increased volatility in the last 10 days compared to the previous 20 days
    increased_volatility_tickers = [ticker for ticker in tickers if np.std(prices[ticker][:10]) > 1.5 * np.std(prices[ticker][10:30])]
    trades = [Trade(ticker, 100) for ticker in random.sample(increased_volatility_tickers, min(3, len(increased_volatility_tickers)))]
    return trades


def trade76():
    # Sell stocks that have shown decreased volatility in the last 10 days compared to the previous 20 days
    decreased_volatility_tickers = [ticker for ticker in tickers if np.std(prices[ticker][:10]) < 0.5 * np.std(prices[ticker][10:30])]
    trades = [Trade(ticker, -100) for ticker in random.sample(decreased_volatility_tickers, min(3, len(decreased_volatility_tickers)))]
    return trades


def trade77():
    # Buy stocks that have broken above their previous 50-day high
    previous_50_day_highs = {ticker: max(prices[ticker][1:51]) for ticker in tickers}
    breakout_tickers = [ticker for ticker in tickers if prices[ticker][0] > previous_50_day_highs[ticker]]
    trades = [Trade(ticker, 100) for ticker in random.sample(breakout_tickers, min(3, len(breakout_tickers)))]
    return trades


def trade78():
    # Sell stocks that have broken below their previous 50-day low
    previous_50_day_lows = {ticker: min(prices[ticker][1:51]) for ticker in tickers}
    breakdown_tickers = [ticker for ticker in tickers if prices[ticker][0] < previous_50_day_lows[ticker]]
    trades = [Trade(ticker, -100) for ticker in random.sample(breakdown_tickers, min(3, len(breakdown_tickers)))]
    return trades


def trade79():
    # Buy stocks that have spiked up more than 10% over the last 3 days
    price_spike_tickers = [ticker for ticker in tickers if (prices[ticker][0] - prices[ticker][2]) / prices[ticker][2] > 0.1]
    trades = [Trade(ticker, 100) for ticker in random.sample(price_spike_tickers, min(3, len(price_spike_tickers)))]
    return trades


def trade80():
    # Sell stocks that have dropped more than 10% over the last 3 days
    price_drop_tickers = [ticker for ticker in tickers if (prices[ticker][0] - prices[ticker][2]) / prices[ticker][2] < -0.1]
    trades = [Trade(ticker, -100) for ticker in random.sample(price_drop_tickers, min(3, len(price_drop_tickers)))]
    return trades


def trade81():
    # Buy stocks in a "golden cross" state (50-day MA above 200-day MA)
    golden_cross_tickers = [ticker for ticker in tickers if np.mean(prices[ticker][:50]) > np.mean(prices[ticker][:200])]
    trades = [Trade(ticker, 100) for ticker in random.sample(golden_cross_tickers, min(3, len(golden_cross_tickers)))]
    return trades


def trade82():
    # Sell stocks in a "death cross" state (50-day MA below 200-day MA)
    death_cross_tickers = [ticker for ticker in tickers if np.mean(prices[ticker][:50]) < np.mean(prices[ticker][:200])]
    trades = [Trade(ticker, -100) for ticker in random.sample(death_cross_tickers, min(3, len(death_cross_tickers)))]
    return trades
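

# trade81/trade82 above compare a 50-day and a 200-day moving average; note
# they detect the *state* (short MA above or below long MA) rather than the
# crossing day itself. A parameterised sketch of the same check over a
# newest-first series; `short_ma_above_long` is a hypothetical helper, not
# part of this module:
def short_ma_above_long(series, short_window=50, long_window=200):
    # True when the short-window mean exceeds the long-window mean
    short_ma = sum(series[:short_window]) / short_window
    long_ma = sum(series[:long_window]) / long_window
    return short_ma > long_ma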


def trade83():
    # Buy stocks whose 5-day average price is at least 20% above the prior 5-day average
    # (intended as a volume-increase screen, but computed on price data, the only series available here)
    volume_increase_tickers = [ticker for ticker in tickers if np.mean(prices[ticker][:5]) > 1.2 * np.mean(prices[ticker][5:10])]
    trades = [Trade(ticker, 100) for ticker in random.sample(volume_increase_tickers, min(3, len(volume_increase_tickers)))]
    return trades


def trade84():
    # Sell stocks whose 5-day average price is at least 20% below the prior 5-day average
    # (intended as a volume-decrease screen, but computed on price data, the only series available here)
    volume_decrease_tickers = [ticker for ticker in tickers if np.mean(prices[ticker][:5]) < 0.8 * np.mean(prices[ticker][5:10])]
    trades = [Trade(ticker, -100) for ticker in random.sample(volume_decrease_tickers, min(3, len(volume_decrease_tickers)))]
    return trades


def trade85():
    # Buy stocks that have shown consistent daily gains for the last 5 days
    consistent_gainers = [ticker for ticker in tickers if all(prices[ticker][i] > prices[ticker][i+1] for i in range(5))]
    trades = [Trade(ticker, 100) for ticker in random.sample(consistent_gainers, min(3, len(consistent_gainers)))]
    return trades


def trade86():
    # Sell stocks that have shown consistent daily losses for the last 5 days
    consistent_losers = [ticker for ticker in tickers if all(prices[ticker][i] < prices[ticker][i+1] for i in range(5))]
    trades = [Trade(ticker, -100) for ticker in random.sample(consistent_losers, min(3, len(consistent_losers)))]
    return trades


def trade87():
    # Buy stocks that are trading near their all-time highs (within 5%)
    all_time_high_tickers = [ticker for ticker in tickers if prices[ticker][0] >= 0.95 * max(prices[ticker])]
    trades = [Trade(ticker, 100) for ticker in random.sample(all_time_high_tickers, min(3, len(all_time_high_tickers)))]
    return trades


def trade88():
    # Sell stocks that are trading near their all-time lows (within 5%)
    all_time_low_tickers = [ticker for ticker in tickers if prices[ticker][0] <= 1.05 * min(prices[ticker])]
    trades = [Trade(ticker, -100) for ticker in random.sample(all_time_low_tickers, min(3, len(all_time_low_tickers)))]
    return trades


def trade89():
    # Buy stocks that have gapped up more than 5% since yesterday
    gap_up_tickers = [ticker for ticker in tickers if prices[ticker][0] > 1.05 * prices[ticker][1]]
    trades = [Trade(ticker, 100) for ticker in random.sample(gap_up_tickers, min(3, len(gap_up_tickers)))]
    return trades


def trade90():
    # Sell stocks that have gapped down more than 5% since yesterday
    gap_down_tickers = [ticker for ticker in tickers if prices[ticker][0] < 0.95 * prices[ticker][1]]
    trades = [Trade(ticker, -100) for ticker in random.sample(gap_down_tickers, min(3, len(gap_down_tickers)))]
    return trades


def trade91():
    # Buy stocks that have shown a steady upward trend for the last 15 days
    steady_uptrend_tickers = [ticker for ticker in tickers if all(prices[ticker][i] >= prices[ticker][i+1] for i in range(15))]
    trades = [Trade(ticker, 100) for ticker in random.sample(steady_uptrend_tickers, min(3, len(steady_uptrend_tickers)))]
    return trades


def trade92():
    # Sell stocks that have shown a steady downward trend for the last 15 days
    steady_downtrend_tickers = [ticker for ticker in tickers if all(prices[ticker][i] <= prices[ticker][i+1] for i in range(15))]
    trades = [Trade(ticker, -100) for ticker in random.sample(steady_downtrend_tickers, min(3, len(steady_downtrend_tickers)))]
    return trades


def trade93():
    # Buy stocks that have outperformed the market index by 5% in the last 30 days
    market_index_return = random.uniform(-0.05, 0.05)  # Mock market index return
    outperforming_tickers = [ticker for ticker in tickers if (prices[ticker][0] - prices[ticker][29]) / prices[ticker][29] > market_index_return + 0.05]
    trades = [Trade(ticker, 100) for ticker in random.sample(outperforming_tickers, min(3, len(outperforming_tickers)))]
    return trades


def trade94():
    # Sell stocks that have underperformed the market index by 5% in the last 30 days
    market_index_return = random.uniform(-0.05, 0.05)  # Mock market index return
    underperforming_tickers = [ticker for ticker in tickers if (prices[ticker][0] - prices[ticker][29]) / prices[ticker][29] < market_index_return - 0.05]
    trades = [Trade(ticker, -100) for ticker in random.sample(underperforming_tickers, min(3, len(underperforming_tickers)))]
    return trades


def trade95():
    # Buy stocks that have broken above their previous 10-day high
    previous_10_day_highs = {ticker: max(prices[ticker][1:11]) for ticker in tickers}
    breakout_tickers = [ticker for ticker in tickers if prices[ticker][0] > previous_10_day_highs[ticker]]
    trades = [Trade(ticker, 100) for ticker in random.sample(breakout_tickers, min(3, len(breakout_tickers)))]
    return trades


def trade96():
    # Sell stocks that have broken below their previous 10-day low
    previous_10_day_lows = {ticker: min(prices[ticker][1:11]) for ticker in tickers}
    breakdown_tickers = [ticker for ticker in tickers if prices[ticker][0] < previous_10_day_lows[ticker]]
    trades = [Trade(ticker, -100) for ticker in random.sample(breakdown_tickers, min(3, len(breakdown_tickers)))]
    return trades


def trade97():
    # Buy stocks with a relative strength index (RSI) below 30 (oversold)
    rsi = {ticker: random.uniform(0, 100) for ticker in tickers}  # Mock RSI values
    oversold_tickers = [ticker for ticker in tickers if rsi[ticker] < 30]
    trades = [Trade(ticker, 100) for ticker in random.sample(oversold_tickers, min(3, len(oversold_tickers)))]
    return trades


def trade98():
    # Sell stocks with a relative strength index (RSI) above 70 (overbought)
    rsi = {ticker: random.uniform(0, 100) for ticker in tickers}  # Mock RSI values
    overbought_tickers = [ticker for ticker in tickers if rsi[ticker] > 70]
    trades = [Trade(ticker, -100) for ticker in random.sample(overbought_tickers, min(3, len(overbought_tickers)))]
    return trades
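

# trade47/trade48 and trade97/trade98 above draw RSI values at random. For
# reference, a sketch of an actual RSI computation (simple averages rather
# than Wilder's smoothing) over a newest-first window; `simple_rsi` is a
# hypothetical helper, not part of this module:
def simple_rsi(series, period=14):
    gains, losses = [], []
    for i in range(period):
        change = series[i] - series[i + 1]  # positive when the price rose that day
        (gains if change > 0 else losses).append(abs(change))
    avg_gain = sum(gains) / period
    avg_loss = sum(losses) / period
    if avg_loss == 0:
        return 100.0  # no down days in the window
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)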


def trade99():
    # Buy stocks with a price-to-earnings ratio (P/E) below the industry average (mocked data)
    pe_ratios = {ticker: random.uniform(10, 30) for ticker in tickers}  # Mock P/E ratios
    industry_average_pe = 20  # Mock industry average P/E
    undervalued_tickers = [ticker for ticker in tickers if pe_ratios[ticker] < industry_average_pe]
    trades = [Trade(ticker, 100) for ticker in random.sample(undervalued_tickers, min(3, len(undervalued_tickers)))]
    return trades


def trade100():
    # Sell stocks with a price-to-earnings ratio (P/E) above the industry average (mocked data)
    pe_ratios = {ticker: random.uniform(10, 30) for ticker in tickers}  # Mock P/E ratios
    industry_average_pe = 20  # Mock industry average P/E
    overvalued_tickers = [ticker for ticker in tickers if pe_ratios[ticker] > industry_average_pe]
    trades = [Trade(ticker, -100) for ticker in random.sample(overvalued_tickers, min(3, len(overvalued_tickers)))]
    return trades


def trade101():
    # Buy stocks that have outperformed the market by more than 5% in the last 10 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(10)]
    market_return = (market_total[0] - market_total[-1]) / market_total[-1]
    outperforming_tickers = [ticker for ticker in tickers if (prices[ticker][0] - prices[ticker][9]) / prices[ticker][9] > market_return + 0.05]
    trades = [Trade(ticker, 100) for ticker in random.sample(outperforming_tickers, min(3, len(outperforming_tickers)))]
    return trades


def trade102():
    # Sell stocks that have underperformed the market by more than 5% in the last 10 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(10)]
    market_return = (market_total[0] - market_total[-1]) / market_total[-1]
    underperforming_tickers = [ticker for ticker in tickers if (prices[ticker][0] - prices[ticker][9]) / prices[ticker][9] < market_return - 0.05]
    trades = [Trade(ticker, -100) for ticker in random.sample(underperforming_tickers, min(3, len(underperforming_tickers)))]
    return trades


def trade103():
    # Buy stocks that have shown a positive return while the market showed a negative return over the last 5 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(5)]
    market_return = (market_total[0] - market_total[-1]) / market_total[-1]
    positive_tickers = [ticker for ticker in tickers if (prices[ticker][0] - prices[ticker][4]) / prices[ticker][4] > 0 and market_return < 0]
    trades = [Trade(ticker, 100) for ticker in random.sample(positive_tickers, min(3, len(positive_tickers)))]
    return trades


def trade104():
    # Sell stocks that have shown a negative return while the market showed a positive return over the last 5 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(5)]
    market_return = (market_total[0] - market_total[-1]) / market_total[-1]
    negative_tickers = [ticker for ticker in tickers if (prices[ticker][0] - prices[ticker][4]) / prices[ticker][4] < 0 and market_return > 0]
    trades = [Trade(ticker, -100) for ticker in random.sample(negative_tickers, min(3, len(negative_tickers)))]
    return trades


def trade105():
    # Buy stocks that have shown less volatility than the market over the last 20 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(20)]
    market_volatility = np.std(market_total)
    low_volatility_tickers = [ticker for ticker in tickers if np.std(prices[ticker][:20]) < market_volatility]
    trades = [Trade(ticker, 100) for ticker in random.sample(low_volatility_tickers, min(3, len(low_volatility_tickers)))]
    return trades


def trade106():
    # Sell stocks that have shown more volatility than the market over the last 20 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(20)]
    market_volatility = np.std(market_total)
    high_volatility_tickers = [ticker for ticker in tickers if np.std(prices[ticker][:20]) > market_volatility]
    trades = [Trade(ticker, -100) for ticker in random.sample(high_volatility_tickers, min(3, len(high_volatility_tickers)))]
    return trades
||||
def trade107(): |
||||
# Buy stocks that have shown an increasing trend while the market showed a decreasing trend over the last 15 days |
||||
market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(15)] |
||||
market_trend = market_total[0] > market_total[-1] |
||||
increasing_tickers = [ticker for ticker in tickers if prices[ticker][0] > prices[ticker][14] and not market_trend] |
||||
trades = [Trade(ticker, 100) for ticker in random.sample(increasing_tickers, min(3, len(increasing_tickers)))] |
||||
return trades |
||||
|
||||
def trade108(): |
||||
# Sell stocks that have shown a decreasing trend while the market showed an increasing trend over the last 15 days |
||||
market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(15)] |
||||
market_trend = market_total[0] < market_total[-1] |
||||
decreasing_tickers = [ticker for ticker in tickers if prices[ticker][0] < prices[ticker][14] and market_trend] |
||||
trades = [Trade(ticker, -100) for ticker in random.sample(decreasing_tickers, min(3, len(decreasing_tickers)))] |
||||
return trades |
||||
|
||||
def trade109():
    # Buy stocks that have broken above their previous 10-day high while the market is flat
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(10)]
    market_flat = abs((market_total[0] - market_total[-1]) / market_total[-1]) < 0.01
    previous_10_day_highs = {ticker: max(prices[ticker][1:11]) for ticker in tickers}
    breakout_tickers = [ticker for ticker in tickers if prices[ticker][0] > previous_10_day_highs[ticker] and market_flat]
    trades = [Trade(ticker, 100) for ticker in random.sample(breakout_tickers, min(3, len(breakout_tickers)))]
    return trades


def trade110():
    # Sell stocks that have broken below their previous 10-day low while the market is flat
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(10)]
    market_flat = abs((market_total[0] - market_total[-1]) / market_total[-1]) < 0.01
    previous_10_day_lows = {ticker: min(prices[ticker][1:11]) for ticker in tickers}
    breakdown_tickers = [ticker for ticker in tickers if prices[ticker][0] < previous_10_day_lows[ticker] and market_flat]
    trades = [Trade(ticker, -100) for ticker in random.sample(breakdown_tickers, min(3, len(breakdown_tickers)))]
    return trades


def trade111():
    # Buy stocks that have shown a higher positive return compared to the market over the last 20 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(20)]
    market_return = (market_total[0] - market_total[-1]) / market_total[-1]
    higher_positive_tickers = [ticker for ticker in tickers if (prices[ticker][0] - prices[ticker][19]) / prices[ticker][19] > market_return]
    trades = [Trade(ticker, 100) for ticker in random.sample(higher_positive_tickers, min(3, len(higher_positive_tickers)))]
    return trades


def trade112():
    # Sell stocks that have shown a higher negative return compared to the market over the last 20 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(20)]
    market_return = (market_total[0] - market_total[-1]) / market_total[-1]
    higher_negative_tickers = [ticker for ticker in tickers if (prices[ticker][0] - prices[ticker][19]) / prices[ticker][19] < market_return]
    trades = [Trade(ticker, -100) for ticker in random.sample(higher_negative_tickers, min(3, len(higher_negative_tickers)))]
    return trades


def trade113():
    # Buy stocks that have shown less drawdown compared to the market over the last 30 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(30)]
    market_drawdown = min(market_total) / max(market_total)
    less_drawdown_tickers = [ticker for ticker in tickers if min(prices[ticker][:30]) / max(prices[ticker][:30]) > market_drawdown]
    trades = [Trade(ticker, 100) for ticker in random.sample(less_drawdown_tickers, min(3, len(less_drawdown_tickers)))]
    return trades


def trade114():
    # Sell stocks that have shown more drawdown compared to the market over the last 30 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(30)]
    market_drawdown = min(market_total) / max(market_total)
    more_drawdown_tickers = [ticker for ticker in tickers if min(prices[ticker][:30]) / max(prices[ticker][:30]) < market_drawdown]
    trades = [Trade(ticker, -100) for ticker in random.sample(more_drawdown_tickers, min(3, len(more_drawdown_tickers)))]
    return trades


def trade115():
    # Buy stocks that have had a smaller price range compared to the market over the last 15 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(15)]
    market_range = max(market_total) - min(market_total)
    small_range_tickers = [ticker for ticker in tickers if max(prices[ticker][:15]) - min(prices[ticker][:15]) < market_range]
    trades = [Trade(ticker, 100) for ticker in random.sample(small_range_tickers, min(3, len(small_range_tickers)))]
    return trades


def trade116():
    # Sell stocks that have had a larger price range compared to the market over the last 15 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(15)]
    market_range = max(market_total) - min(market_total)
    large_range_tickers = [ticker for ticker in tickers if max(prices[ticker][:15]) - min(prices[ticker][:15]) > market_range]
    trades = [Trade(ticker, -100) for ticker in random.sample(large_range_tickers, min(3, len(large_range_tickers)))]
    return trades


def trade117():
    # Buy stocks that have consistently stayed above the market's average per-stock price in the last 10 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(10)]
    # Normalise the aggregate by the number of tickers so it is comparable to an individual stock price
    market_avg = sum(market_total) / len(market_total) / len(tickers)
    consistent_above_avg_tickers = [ticker for ticker in tickers if all(prices[ticker][i] > market_avg for i in range(10))]
    trades = [Trade(ticker, 100) for ticker in random.sample(consistent_above_avg_tickers, min(3, len(consistent_above_avg_tickers)))]
    return trades


def trade118():
    # Sell stocks that have consistently stayed below the market's average per-stock price in the last 10 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(10)]
    # Normalise the aggregate by the number of tickers so it is comparable to an individual stock price
    market_avg = sum(market_total) / len(market_total) / len(tickers)
    consistent_below_avg_tickers = [ticker for ticker in tickers if all(prices[ticker][i] < market_avg for i in range(10))]
    trades = [Trade(ticker, -100) for ticker in random.sample(consistent_below_avg_tickers, min(3, len(consistent_below_avg_tickers)))]
    return trades


def trade119():
    # Buy stocks whose prices have shown a positive correlation with the market over the last 20 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(20)]
    positive_corr_tickers = [ticker for ticker in tickers if scipy.stats.pearsonr(prices[ticker][:20], market_total)[0] > 0.5]
    trades = [Trade(ticker, 100) for ticker in random.sample(positive_corr_tickers, min(3, len(positive_corr_tickers)))]
    return trades


def trade120():
    # Sell stocks whose prices have shown a negative correlation with the market over the last 20 days
    market_total = [sum(prices[ticker][i] for ticker in tickers) for i in range(20)]
    negative_corr_tickers = [ticker for ticker in tickers if scipy.stats.pearsonr(prices[ticker][:20], market_total)[0] < -0.5]
    trades = [Trade(ticker, -100) for ticker in random.sample(negative_corr_tickers, min(3, len(negative_corr_tickers)))]
    return trades
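These snippets rely on helpers defined elsewhere in the repo (`Trade`, the `prices` dict of per-ticker price histories with index 0 as the most recent day, the `tickers` list, plus `numpy`, `scipy`, and `random` imports). A minimal, self-contained sketch of that assumed context, exercising one momentum-style strategy against synthetic data — every name here is an illustrative stand-in, not the repo's actual definition:

```python
import random
from dataclasses import dataclass

@dataclass
class Trade:
    ticker: str
    quantity: int  # positive quantity = buy, negative = sell

# Synthetic price histories: index 0 is the most recent day
random.seed(42)
tickers = ["AAA", "BBB", "CCC", "DDD"]
prices = {t: [100 + random.uniform(-5, 5) for _ in range(30)] for t in tickers}

def trade_momentum():
    # Buy stocks whose price rose over the last 15 days while the market fell
    market_total = [sum(prices[t][i] for t in tickers) for i in range(15)]
    market_rose = market_total[0] > market_total[-1]
    rising = [t for t in tickers if prices[t][0] > prices[t][14] and not market_rose]
    return [Trade(t, 100) for t in random.sample(rising, min(3, len(rising)))]

trades = trade_momentum()
print(trades)
```

With stubs like these in place, any of the `tradeNNN` functions above can be pasted in and run directly.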
@ -1,380 +0,0 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "5c291475-8c7c-461c-9b12-545a887b2432",
   "metadata": {},
   "source": [
    "# Jupyter Lab\n",
    "\n",
    "## A Quick Start Guide\n",
    "\n",
    "Welcome to the wonderful world of Jupyter Lab! \n",
    "This is a Data Science playground where you can easily write code and investigate the results. It's an ideal environment for: \n",
    "- Research & Development\n",
    "- Prototyping\n",
    "- Learning (that's us!)\n",
    "\n",
    "It's not typically used for shipping production code, and in Week 8 we'll explore the bridge between Jupyter and Python code.\n",
    "\n",
    "A file in Jupyter Lab, like this one, is called a **Notebook**.\n",
    "\n",
    "A long time ago, Jupyter used to be called \"IPython\", and so the extension of a notebook is \".ipynb\", which stands for \"IPython Notebook\".\n",
    "\n",
    "On the left is a File Browser that lets you navigate around the directories and choose different notebooks. But you probably know that already, or you wouldn't have got here!\n",
    "\n",
    "The notebook consists of a series of square boxes called \"cells\". Some of them contain text, like this cell, and some of them contain code, like the cell below.\n",
    "\n",
    "Click in a cell with code and press `Shift + Return` (or `Shift + Enter`) to run the code and print the output.\n",
    "\n",
    "Do that now for the cell below this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "33d37cd8-55c9-4e03-868c-34aa9cab2c80",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Click anywhere in this cell and press Shift + Return\n",
    "\n",
    "2 + 2"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9e95df7b-55c6-4204-b8f9-cae83360fc23",
   "metadata": {},
   "source": [
    "## Congrats!\n",
    "\n",
    "Now run the next cell, which sets a value, followed by the cells after it to print the value"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "585eb9c1-85ee-4c27-8dc2-b4d8d022eda0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set a value for a variable\n",
    "\n",
    "favorite_fruit = \"bananas\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "07792faa-761d-46cb-b9b7-2bbf70bb1628",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The result of the last statement is shown after you run it\n",
    "\n",
    "favorite_fruit"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a067d2b1-53d5-4aeb-8a3c-574d39ff654a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Use the variable\n",
    "\n",
    "print(f\"My favorite fruit is {favorite_fruit}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4c5a4e60-b7f4-4953-9e80-6d84ba4664ad",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Now change the variable\n",
    "\n",
    "favorite_fruit = f\"anything but {favorite_fruit}\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9442d5c9-f57d-4839-b0af-dce58646c04f",
   "metadata": {},
   "source": [
    "## Now go back and rerun the cell with the print statement, two cells back\n",
    "\n",
    "See how it prints something different, even though favorite_fruit was changed further down in the notebook? \n",
    "\n",
    "The order that code appears in the notebook doesn't matter. What matters is the order that the code is **executed**. There's a Python process sitting behind this notebook in which the variables are being changed.\n",
    "\n",
    "This catches some people out when they first use Jupyter."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8e5ec81d-7c5b-4025-bd2e-468d67b581b6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Then run this cell twice, and see if you understand what's going on\n",
    "\n",
    "print(f\"My favorite fruit is {favorite_fruit}\")\n",
    "\n",
    "favorite_fruit = \"apples\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a29dab2d-bab9-4a54-8504-05e62594cc6f",
   "metadata": {},
   "source": [
    "# Explaining the 'kernel'\n",
    "\n",
    "Sitting behind this notebook is a Python process which executes each cell when you run it. That Python process is known as the Kernel. Each notebook has its own separate Kernel.\n",
    "\n",
    "You can go to the Kernel menu and select \"Restart Kernel\".\n",
    "\n",
    "If you then try to run the next cell, you'll get an error, because favorite_fruit is no longer defined. You'll need to run the cells from the top of the notebook again. Then the next cell should run fine."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "84b1e410-5eda-4e2c-97ce-4eebcff816c5",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(f\"My favorite fruit is {favorite_fruit}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4d4188fc-d9cc-42be-8b4e-ae8630456764",
   "metadata": {},
   "source": [
    "# Adding and moving cells\n",
    "\n",
    "Click in this cell, then click the \\[+\\] button in the toolbar above to create a new cell immediately below this one. Copy and paste in the code in the prior cell, then run it! There are also icons in the top right of the selected cell to delete it (bin), duplicate it, and move it up and down.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ce258424-40c3-49a7-9462-e6fa25014b03",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "30e71f50-8f01-470a-9d7a-b82a6cef4236",
   "metadata": {},
   "source": [
    "# Cell output\n",
    "\n",
    "When you execute a cell, the standard output and the result of the last statement are written to the area immediately under the code, known as the 'cell output'. When you save a Notebook from the file menu (or command+S), the output is also saved, making it a useful record of what happened.\n",
    "\n",
    "You can clean this up by going to Edit menu >> Clear Outputs of All Cells, or Kernel menu >> Restart Kernel and Clear Outputs of All Cells."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a4d021e2-c284-411f-8ab1-030530cfbe72",
   "metadata": {},
   "outputs": [],
   "source": [
    "spams = [\"spam\"] * 1000\n",
    "print(spams)\n",
    "\n",
    "# Might be worth clearing output after running this!"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eac060f2-7a71-46e7-8235-b6ad0a76f5f8",
   "metadata": {},
   "source": [
    "# Using markdown\n",
    "\n",
    "So what's going on with these areas with writing in them, like this one? Well, there's actually a different kind of cell called a 'Markdown' cell for adding explanations like this. Click the + button to add a cell. Then in the toolbar, click where it says 'Code' and change it to 'Markdown'.\n",
    "\n",
    "Add some comments using Markdown format, perhaps copying and pasting from here:\n",
    "\n",
    "```\n",
    "# This is a heading\n",
    "## This is a sub-head\n",
    "### And a sub-sub-head\n",
    "\n",
    "I like Jupyter Lab because it's\n",
    "- Easy\n",
    "- Flexible\n",
    "- Satisfying\n",
    "```\n",
    "\n",
    "Then simply press Shift+Return in the cell to turn it into formatted text.\n",
    "Click in the cell and press the Bin icon if you want to remove it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e1586320-c90f-4f22-8b39-df6865484950",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "1330c83c-67ac-4ca0-ac92-a71699e0c31b",
   "metadata": {},
   "source": [
    "# The exclamation point\n",
    "\n",
    "There's a super useful feature of Jupyter Lab: you can type a command with a ! in front of it in a code cell, like:\n",
    "\n",
    "!pip install \\[some_package\\]\n",
    "\n",
    "And it will run it at the command line (as if in Windows PowerShell or Mac Terminal) and print the result"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "82042fc5-a907-4381-a4b8-eb9386df19cd",
   "metadata": {},
   "outputs": [],
   "source": [
    "# list the current directory\n",
    "\n",
    "!ls"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4fc3e3da-8a55-40cc-9706-48bf12a0e20e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ping cnn.com - press the stop button in the toolbar when you're bored\n",
    "\n",
    "!ping cnn.com"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a58e9462-89a2-4b4f-b4aa-51c4bd9f796b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# This is a useful command that ensures your Anaconda environment \n",
    "# is up to date with any new upgrades to packages;\n",
    "# But it might take a minute and will print a lot to output\n",
    "\n",
    "!conda env update -f ../environment.yml"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4688baaf-a72c-41b5-90b6-474cb24790a7",
   "metadata": {},
   "source": [
    "# Minor things we encounter on the course\n",
    "\n",
    "This isn't necessarily a feature of Jupyter, but it's a nice package to know about that is useful in Jupyter Lab, and I use it in the course.\n",
    "\n",
    "The package `tqdm` will print a nice progress bar if you wrap any iterable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2646a4e5-3c23-4aee-a34d-d623815187d2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Here's some code with no progress bar\n",
    "# It will take 10 seconds while you wonder what's happening...\n",
    "\n",
    "import time\n",
    "\n",
    "spams = [\"spam\"] * 1000\n",
    "\n",
    "for spam in spams:\n",
    "    time.sleep(0.01)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6e96be3d-fa82-42a3-a8aa-b81dd20563a5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# And now, with a nice little progress bar:\n",
    "\n",
    "import time\n",
    "from tqdm import tqdm\n",
    "\n",
    "spams = [\"spam\"] * 1000\n",
    "\n",
    "for spam in tqdm(spams):\n",
    "    time.sleep(0.01)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "63c788dd-4618-4bb4-a5ce-204411a38ade",
   "metadata": {},
   "outputs": [],
   "source": [
    "# On a different topic, here's a useful way to print output in markdown\n",
    "\n",
    "from IPython.display import Markdown, display\n",
    "\n",
    "display(Markdown(\"# This is a big heading!\\n\\n- And this is a bullet-point\\n- So is this\\n- Me, too!\"))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d14c1fb-3321-4387-b6ca-9af27676f980",
   "metadata": {},
   "source": [
    "# That's it! You're up to speed on Jupyter Lab.\n",
    "\n",
    "## Want to be even more advanced?\n",
    "\n",
    "If you want to become a pro at Jupyter Lab, you can read their tutorial [here](https://jupyterlab.readthedocs.io/en/latest/). But this isn't required for our course; just a good technique for hitting Shift + Return and enjoying the result!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
@ -1,486 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "5c291475-8c7c-461c-9b12-545a887b2432", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Intermediate Level Python\n", |
||||
"\n", |
||||
"## Getting you up to speed\n", |
||||
"\n", |
||||
"This course assumes that you're at an intermediate level of python. For example, you should have a decent idea what something like this might do:\n", |
||||
"\n", |
||||
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n", |
||||
"\n", |
||||
"If not - then you've come to the right place! Welcome to the crash course in intermediate level python. The best way to learn is by doing!\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "542f0577-a826-4613-a5d7-4170e9666d04", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## First: if you need a refresher on the foundations\n", |
||||
"\n", |
||||
"I'm going to defer to an AI friend for this, because these explanations are so well written with great examples. Copy and paste the code examples into a new cell to give them a try. Pick whichever section(s) you'd like to brush up on.\n", |
||||
"\n", |
||||
"**Python imports:** \n", |
||||
"https://chatgpt.com/share/672f9f31-8114-8012-be09-29ef0d0140fb\n", |
||||
"\n", |
||||
"**Python functions** including default arguments: \n", |
||||
"https://chatgpt.com/share/672f9f99-7060-8012-bfec-46d4cf77d672\n", |
||||
"\n", |
||||
"**Python strings**, including slicing, split/join, replace and literals: \n", |
||||
"https://chatgpt.com/share/672fb526-0aa0-8012-9e00-ad1687c04518\n", |
||||
"\n", |
||||
"**Python f-strings** including number and date formatting: \n", |
||||
"https://chatgpt.com/share/672fa125-0de0-8012-8e35-27918cbb481c\n", |
||||
"\n", |
||||
"**Python lists, dicts and sets**, including the `get()` method: \n", |
||||
"https://chatgpt.com/share/672fa225-3f04-8012-91af-f9c95287da8d\n", |
||||
"\n", |
||||
"**Python files** including modes, encoding, context managers, Path, glob.glob: \n", |
||||
"https://chatgpt.com/share/673b53b2-6d5c-8012-a344-221056c2f960\n", |
||||
"\n", |
||||
"**Python classes:** \n", |
||||
"https://chatgpt.com/share/672fa07a-1014-8012-b2ea-6dc679552715\n", |
||||
"\n", |
||||
"**Pickling Python objects and converting to JSON:** \n", |
||||
"https://chatgpt.com/share/673b553e-9d0c-8012-9919-f3bb5aa23e31" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "f9e0f8e1-09b3-478b-ada7-c8c35003929b", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## With this in mind - understanding NameErrors in Python\n", |
||||
"\n", |
||||
"It's quite common to hit a NameError in python. With foundational knowledge, you should always feel equipped to debug a NameError and get to the bottom of it.\n", |
||||
"\n", |
||||
"If you're unsure how to fix a NameError, please see this [initial guide](https://chatgpt.com/share/67958312-ada0-8012-a1d3-62b3a5fcbbfc) and this [second guide with exercises](https://chatgpt.com/share/67a57e0b-0194-8012-bb50-8ea76c5995b8), and work through them both until you have high confidence.\n", |
||||
"\n", |
||||
"There's some repetition here, so feel free to skip it if you're already confident.\n", |
||||
"\n", |
||||
"## And now, on to the code!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5802e2f0-0ea0-4237-bbb7-f375a34260f0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# First let's create some things:\n", |
||||
"\n", |
||||
"fruits = [\"Apples\", \"Bananas\", \"Pears\"]\n", |
||||
"\n", |
||||
"book1 = {\"title\": \"Great Expectations\", \"author\": \"Charles Dickens\"}\n", |
||||
"book2 = {\"title\": \"Bleak House\", \"author\": \"Charles Dickens\"}\n", |
||||
"book3 = {\"title\": \"An Book By No Author\"}\n", |
||||
"book4 = {\"title\": \"Moby Dick\", \"author\": \"Herman Melville\"}\n", |
||||
"\n", |
||||
"books = [book1, book2, book3, book4]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "9b941e6a-3658-4144-a8d4-72f5e72f3707", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Part 1: List and dict comprehensions" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "61992bb8-735d-4dad-8747-8c10b63aec82", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Simple enough to start\n", |
||||
"\n", |
||||
"for fruit in fruits:\n", |
||||
" print(fruit)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c89c3842-9b74-47fa-8424-0fcb08e4177c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's make a new version of fruits\n", |
||||
"\n", |
||||
"fruits_shouted = []\n", |
||||
"for fruit in fruits:\n", |
||||
" fruits_shouted.append(fruit.upper())\n", |
||||
"\n", |
||||
"fruits_shouted" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4ec13b3a-9545-44f1-874a-2910a0663560", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# You probably already know this\n", |
||||
"# There's a nice Python construct called \"list comprehension\" that does this:\n", |
||||
"\n", |
||||
"fruits_shouted2 = [fruit.upper() for fruit in fruits]\n", |
||||
"fruits_shouted2" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ecc08c3c-181d-4b64-a3e1-b0ccffc6c0cd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# But you may not know that you can do this to create dictionaries, too:\n", |
||||
"\n", |
||||
"fruit_mapping = {fruit: fruit.upper() for fruit in fruits}\n", |
||||
"fruit_mapping" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "500c2406-00d2-4793-b57b-f49b612760c8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# you can also use the if statement to filter the results\n", |
||||
"\n", |
||||
"fruits_with_longer_names_shouted = [fruit.upper() for fruit in fruits if len(fruit)>5]\n", |
||||
"fruits_with_longer_names_shouted" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "38c11c34-d71e-45ba-945b-a3d37dc29793", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"fruit_mapping_unless_starts_with_a = {fruit: fruit.upper() for fruit in fruits if not fruit.startswith('A')}\n", |
||||
"fruit_mapping_unless_starts_with_a" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5c97d8e8-31de-4afa-973e-28d8e5cab749", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Another comprehension\n", |
||||
"\n", |
||||
"[book['title'] for book in books]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "50be0edc-a4cd-493f-a680-06080bb497b4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# This code will fail with an error because one of our books doesn't have an author\n", |
||||
"\n", |
||||
"[book['author'] for book in books]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "53794083-cc09-4edb-b448-2ffb7e8495c2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# But this will work, because get() returns None\n", |
||||
"\n", |
||||
"[book.get('author') for book in books]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b8e4b859-24f8-4016-8d74-c2cef226d049", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And this variation will filter out the None\n", |
||||
"\n", |
||||
"[book.get('author') for book in books if book.get('author')]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c44bb999-52b4-4dee-810b-8a400db8f25f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And this version will convert it into a set, removing duplicates\n", |
||||
"\n", |
||||
"set([book.get('author') for book in books if book.get('author')])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "80a65156-6192-4bb4-b4e6-df3fdc933891", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And finally, this version is even nicer\n", |
||||
"# curly braces creates a set, so this is a set comprehension\n", |
||||
"\n", |
||||
"{book.get('author') for book in books if book.get('author')}" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "c100e5db-5438-4715-921c-3f7152f83f4a", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Part 2: Generators\n", |
||||
"\n", |
||||
"We use Generators in the course because AI models can stream back results.\n", |
||||
"\n", |
||||
"If you've not used Generators before, please start with this excellent intro from ChatGPT:\n", |
||||
"\n", |
||||
"https://chatgpt.com/share/672faa6e-7dd0-8012-aae5-44fc0d0ec218\n", |
||||
"\n", |
||||
"Try pasting some of its examples into a cell." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1efc26fa-9144-4352-9a17-dfec1d246aad", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# First define a generator; it looks like a function, but it has yield instead of return\n", |
||||
"\n", |
||||
"import time\n", |
||||
"\n", |
||||
"def come_up_with_fruit_names():\n", |
||||
" for fruit in fruits:\n", |
||||
" time.sleep(1) # thinking of a fruit\n", |
||||
" yield fruit" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "eac338bb-285c-45c8-8a3e-dbfc41409ca3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Then use it\n", |
||||
"\n", |
||||
"for fruit in come_up_with_fruit_names():\n", |
||||
" print(fruit)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f6880578-a3de-4502-952a-4572b95eb9ff", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Here's another one\n", |
||||
"\n", |
||||
"def authors_generator():\n", |
||||
" for book in books:\n", |
||||
" if book.get(\"author\"):\n", |
||||
" yield book.get(\"author\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9e316f02-f87f-441d-a01f-024ade949607", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Use it\n", |
||||
"\n", |
||||
"for author in authors_generator():\n", |
||||
" print(author)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7535c9d0-410e-4e56-a86c-ae6c0e16053f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Here's the same thing written with list comprehension\n", |
||||
"\n", |
||||
"def authors_generator():\n", |
||||
" for author in [book.get(\"author\") for book in books if book.get(\"author\")]:\n", |
||||
" yield author" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dad34494-0f6c-4edb-b03f-b8d49ee186f2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Use it\n", |
||||
"\n", |
||||
"for author in authors_generator():\n", |
||||
" print(author)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "abeb7e61-d8aa-4af0-b05a-ae17323e678c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Here's a nice shortcut\n", |
||||
"# You can use \"yield from\" to yield each item of an iterable\n", |
||||
"\n", |
||||
"def authors_generator():\n", |
||||
" yield from [book.get(\"author\") for book in books if book.get(\"author\")]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "05b0cb43-aa83-4762-a797-d3beb0f22c44", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Use it\n", |
||||
"\n", |
||||
"for author in authors_generator():\n", |
||||
" print(author)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fdfea58e-d809-4dd4-b7b0-c26427f8be55", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And finally - we can replace the list comprehension with a set comprehension\n", |
||||
"\n", |
||||
"def unique_authors_generator():\n", |
||||
" yield from {book.get(\"author\") for book in books if book.get(\"author\")}" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3e821d08-97be-4db9-9a5b-ce5dced3eff8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Use it\n", |
||||
"\n", |
||||
"for author in unique_authors_generator():\n", |
||||
" print(author)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "905ba603-15d8-4d01-9a79-60ec293d7ca1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And for some fun - press the stop button in the toolbar when bored!\n", |
||||
"# It's like we've made our own Large Language Model... although not particularly large...\n", |
||||
"# See if you understand why it prints a letter at a time, instead of a word at a time. If you're unsure, try removing the keyword \"from\" everywhere in the code.\n", |
||||
"\n", |
||||
"import random\n", |
||||
"import time\n", |
||||
"\n", |
||||
"pronouns = [\"I\", \"You\", \"We\", \"They\"]\n", |
||||
"verbs = [\"eat\", \"detest\", \"bathe in\", \"deny the existence of\", \"resent\", \"pontificate about\", \"juggle\", \"impersonate\", \"worship\", \"misplace\", \"conspire with\", \"philosophize about\", \"tap dance on\", \"dramatically renounce\", \"secretly collect\"]\n", |
||||
"adjectives = [\"turquoise\", \"smelly\", \"arrogant\", \"festering\", \"pleasing\", \"whimsical\", \"disheveled\", \"pretentious\", \"wobbly\", \"melodramatic\", \"pompous\", \"fluorescent\", \"bewildered\", \"suspicious\", \"overripe\"]\n", |
||||
"nouns = [\"turnips\", \"rodents\", \"eels\", \"walruses\", \"kumquats\", \"monocles\", \"spreadsheets\", \"bagpipes\", \"wombats\", \"accordions\", \"mustaches\", \"calculators\", \"jellyfish\", \"thermostats\"]\n", |
||||
"\n", |
||||
"def infinite_random_sentences():\n", |
||||
" while True:\n", |
||||
" yield from random.choice(pronouns)\n", |
||||
" yield \" \"\n", |
||||
" yield from random.choice(verbs)\n", |
||||
" yield \" \"\n", |
||||
" yield from random.choice(adjectives)\n", |
||||
" yield \" \"\n", |
||||
" yield from random.choice(nouns)\n", |
||||
" yield \". \"\n", |
||||
"\n", |
||||
"for letter in infinite_random_sentences():\n", |
||||
" print(letter, end=\"\", flush=True)\n", |
||||
" time.sleep(0.02)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "04832ea2-2447-4473-a449-104f80e24d85", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Exercise\n", |
||||
"\n", |
||||
"Write some Python classes for the books example.\n", |
||||
"\n", |
||||
"Write a Book class with a title and author. Include a method has_author()\n", |
||||
"\n", |
||||
"Write a BookShelf class with a list of books. Include a generator method unique_authors()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "35760406-fe6c-41f9-b0c0-3e8cf73aafd0", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Finally\n", |
||||
"\n", |
||||
"Here are some intermediate-level details of classes from our AI friend, including use of type hints, inheritance and class methods. This includes a Book example.\n", |
||||
"\n", |
||||
"https://chatgpt.com/share/67348aca-65fc-8012-a4a9-fd1b8f04ba59" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
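The exercise above asks for a `Book` class with `has_author()` and a `BookShelf` class with a generator method `unique_authors()`. Here is one possible sketch — the constructor signatures and the use of a set comprehension are assumptions, not the notebook's own solution:

```python
class Book:
    """A book with a title and an optional author (an assumption: author defaults to None)."""

    def __init__(self, title, author=None):
        self.title = title
        self.author = author

    def has_author(self):
        # True when an author is present (not None or empty string)
        return bool(self.author)


class BookShelf:
    """A shelf holding a list of Book objects."""

    def __init__(self, books=None):
        self.books = books or []

    def unique_authors(self):
        # Generator over the distinct authors on the shelf,
        # mirroring the set-comprehension + "yield from" pattern shown earlier
        yield from {book.author for book in self.books if book.has_author()}


shelf = BookShelf([Book("A", "X"), Book("B"), Book("C", "X"), Book("D", "Y")])
authors = sorted(shelf.unique_authors())
```

Note that iterating a set gives no ordering guarantee, which is why the usage line sorts the result before relying on it.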
@ -1,185 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "fef36918-109d-41e3-8603-75ff81b42379", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Solution for the day 2 exercise - slight modification: the model is also a parameter - display_summary(\"deepseek-r1:1.5b\", \"https://yoururl\")\n", |
||||
"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b50349ac-93ea-496b-ae20-bd72a93bb138", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import requests\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "edd073c7-8444-4a0d-b84e-4b2ed0ee7f35", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"HEADERS = {\"Content-Type\": \"application/json\"}\n", |
||||
"#MODEL = \"llama3.2\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2e3a6e1a-e4c7-4448-9852-1b6ba2bd8d66", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ae3752ca-3a97-4d6a-ac84-5b75ebfb50ed", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define the system prompt \n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "48b5240f-7617-4e51-a320-cba9650bec84", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website are as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6f7d84f0-60f2-4cbf-b4d1-173a79fe3380", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "25520a31-c857-4ed5-86da-50dfe5fab7bb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def summarize(model,url):\n", |
||||
" website = Website(url)\n", |
||||
" payload = {\n", |
||||
" \"model\": model,\n", |
||||
" \"messages\": messages_for(website),\n", |
||||
" \"stream\": False\n", |
||||
" }\n", |
||||
" response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n", |
||||
" return response.json()['message']['content']" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "430776ed-8516-43a9-8a22-618d9080f2e1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||
"def display_summary(model,url):\n", |
||||
" summary = summarize(model,url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b2b05c1f-e4a2-4f65-bd6d-634d72e38b6e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"#!ollama pull deepseek-r1:1.5b" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "01513f8a-15b7-4053-bfe4-44b36e5494d1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"deepseek-r1:1.5b\",\"https://www.ipma.pt\")" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.12.9" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
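The notebook above hand-builds the JSON payload it posts to Ollama's `/api/chat` endpoint. A small sketch of that payload construction, factored into a helper so its shape can be checked without a running Ollama server (the helper name is an assumption; the field names `model`, `messages`, and `stream` match the notebook's own payload):

```python
def build_chat_payload(model, system_prompt, user_prompt, stream=False):
    # Payload shape expected by Ollama's /api/chat endpoint,
    # as used in the summarize() function above
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "stream": stream,
    }


payload = build_chat_payload(
    "deepseek-r1:1.5b",
    "You are an assistant that summarizes websites.",
    "Summarize this page.",
)
```

With `stream=False` the endpoint returns a single JSON object whose summary text lives at `response.json()['message']['content']`, which is exactly what `summarize()` reads.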
@ -1,28 +0,0 @@
|
||||
Client: Hello I would like to order a pizza |
||||
Restaurant: Sure. What pizza would you like to order from our menu? |
||||
Client: Chicken Ranch |
||||
Restaurant: I am so sorry, but chicken ranch is currently unavailable on our menu |
||||
Client: AHHHHH. Do you have chicken BBQ? |
||||
Restaurant: Yes! Do you want it small, medium, or large? |
||||
Client: Medium |
||||
Restaurant: Ok. This will be 180 LE |
||||
Client: Thanks |
||||
Restaurant: Anytime. |
||||
Client: AHHHH I forgot. I want to add a new chicken BBQ pizza |
||||
Restaurant: No problem. Do you also want it medium? |
||||
Client: Yes |
||||
Restaurant: Okay this will be 380 LE |
||||
Client: Okay Thanks |
||||
Client: Wait a minute. Isn't 180 * 2 = 360? |
||||
Restaurant: It seems that there might be a misunderstanding. We add an extra 20 LE for every extra pizza ordered. |
||||
Client: NOBODY TOLD ME THAT.. AND WHY ON EARTH WOULD YOU DO SOMETHING LIKE THAT? |
||||
Restaurant: We are sorry but this is our policy. |
||||
Client: Okay then I don't want your pizza. |
||||
Restaurant: We are so sorry to hear that. We can make a 10% discount on the total price so it would be 342 LE |
||||
Client: Fine |
||||
Restaurant: Thank you for ordering |
||||
Restaurant: Pizza is delivered. How is your experience? |
||||
Client: Your pizza doesn't taste good |
||||
Restaurant: We are so sorry to hear that. Do you have any suggestions you would like to make? |
||||
Client: Make good pizza |
||||
Restaurant: Thanks for your review. We will make sure to improve our pizza in the future. Your opinion really matters. |
@ -1,5 +0,0 @@
|
||||
Client: Hello I would like to order a chicken ranch pizza |
||||
Restaurant: I am so sorry, but chicken ranch is currently unavailable on our menu |
||||
Client: Okay thanks |
||||
Restaurant: Would you like to order something else? |
||||
Client: No thank you |
@ -1,19 +0,0 @@
|
||||
Client: Hello. What is the most selling pizza on your menu? |
||||
Restaurant: Hello! Chicken Ranch pizza is our most selling pizza. Also our special pepperoni pizza got some amazing reviews |
||||
Client: Okay. I want to order a pepperoni pizza |
||||
Restaurant: Sure. Do you want it small, medium, or large? |
||||
Client: Large |
||||
Restaurant: Okay. This will be 210 LE. Would you like to order something else? |
||||
Client: Yes. Do you have onion rings? |
||||
Restaurant: Yes |
||||
Client: Okay I would like to add onion rings. |
||||
Restaurant: Sure. This will be 250 LE |
||||
Client: Thanks |
||||
Restaurant: Anytime |
||||
Client: I have been waiting for too long and the order hasn't arrived yet |
||||
Restaurant: Sorry to hear that. But it appears that the order is on its way to you. |
||||
Restaurant: The order should have arrived by now. |
||||
Client: Yes it is arrived. |
||||
Restaurant: How is your experience? |
||||
Client: Your pizza tastes soooooo good. The order took too long to arrive but when I tasted the pizza, I was really enjoying it and forgot everything about the delay. |
||||
Restaurant: We are so glad to hear that |
@ -1,15 +0,0 @@
|
||||
You are an assistant working for the customer service department in a pizza restaurant. |
||||
You are to receive a chat between a client and the restaurant's customer service. |
||||
You should generate your responses based on the following criteria: |
||||
- What did the client order? |
||||
- How much did it cost? |
||||
- If the client changed their mind, keep only their final order and the final cost |
||||
- Mention the client's experience only if they ordered anything as follows: (Positive/Negative/Neutral/Unknown) |
||||
- If the client did not order anything do not mention their sentiment or experience |
||||
- If the client's experience is positive or negative only, provide a brief summary about their sentiment |
||||
- Do not provide brief summary about their sentiment if their experience was neutral or unknown. |
||||
- Your answers should be clear and straight to the point; do not use long sentences |
||||
- Your answers should be displayed in bullet points |
||||
- Your answers should be displayed in markdown |
||||
- If the client did not order anything, provide a brief summary of why that might have happened |
||||
- Do not mention cost if the client did not order anything |
@ -1,10 +0,0 @@
|
||||
import mysql.connector |
||||
|
||||
def get_connection(): |
||||
conn = mysql.connector.connect( |
||||
host="127.0.0.1", |
||||
user="root", |
||||
password="xyz", |
||||
database="your_database" |
||||
) |
||||
return conn |
@ -1,42 +0,0 @@
|
||||
import ollama |
||||
from db import get_connection |
||||
import mysql.connector |
||||
|
||||
def text_to_sql(user_query): |
||||
prompt = f""" |
||||
Convert the following natural language query into an SQL statement for MySQL: |
||||
|
||||
Query: "{user_query}" |
||||
|
||||
Ensure the query is syntactically correct and does not contain harmful operations. |
||||
Only return the SQL query without any explanation. |
||||
""" |
||||
|
||||
# Update the model name to 'llama3.2:latest' |
||||
response = ollama.chat(model="llama3.2:latest", messages=[{"role": "user", "content": prompt}]) |
||||
sql_query = response['message']['content'].strip() |
||||
return sql_query |
||||
|
||||
|
||||
# Uncomment this section if you wish to connect to MySQL; fill out your credentials in db.py |
||||
'''def execute_sql_query(user_query): |
||||
sql_query = text_to_sql(user_query) |
||||
|
||||
try: |
||||
conn = get_connection() |
||||
cursor = conn.cursor() |
||||
cursor.execute(sql_query) |
||||
result = cursor.fetchall() |
||||
except mysql.connector.Error as e: |
||||
return {"error": f"MySQL Error: {e}"} |
||||
except Exception as e: |
||||
return {"error": str(e)} |
||||
finally: |
||||
conn.close() # Ensure connection is closed even if an error occurs |
||||
|
||||
return result''' |
||||
|
||||
# Example usage |
||||
if __name__ == "__main__": |
||||
user_input = "Show me all users whose first name starts with the letter j in the first_name column." |
||||
print(text_to_sql(user_input)) |
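The prompt above asks the model to avoid harmful operations, but the model's output should still not be executed blindly. A minimal sketch of a guard that could run between `text_to_sql()` and `execute_sql_query()` — the function name and the SELECT-only policy are assumptions, not part of the original script:

```python
import re


def sanitize_sql(raw):
    """Strip markdown code fences from model output and allow only a single SELECT."""
    # Models often wrap SQL in ```sql ... ``` fences despite instructions
    sql = re.sub(r"^```(?:sql)?\s*|\s*```$", "", raw.strip(), flags=re.IGNORECASE)
    sql = sql.rstrip(";").strip()
    # Reject multiple statements and anything that is not a SELECT
    if ";" in sql or not sql.lower().startswith("select"):
        raise ValueError(f"Refusing to run non-SELECT SQL: {sql!r}")
    return sql
```

Even with a guard like this, a read-only database user remains the safer line of defense than string inspection alone.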
@ -1,123 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Day 2 EXERCISE Solution:\n", |
||||
"\n", |
||||
"Upgraded day 1 project that scrapes and summarizes any webpage using an Open Source model running locally via Ollama instead of OpenAI\n", |
||||
"\n", |
||||
"## Note:\n", |
||||
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative: \n", |
||||
"1. Run `ollama pull llama3.2:1b` from a Terminal or Powershell\n", |
||||
"2. **Ctrl + /** to comment this code line below: `MODEL = \"llama3.2\"`\n", |
||||
"3. same **Ctrl + /** to uncomment: `MODEL = \"llama3.2:1b\"`" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports:-\n", |
||||
"\n", |
||||
"import requests\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"import ollama" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "29ddd15d-a3c5-4f4e-a678-873f56162724", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants:-\n", |
||||
"\n", |
||||
"# MODEL = \"llama3.2\"\n", |
||||
"MODEL = \"llama3.2:1b\"\n", |
||||
"# MODEL = \"deepseek-r1:1.5b\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6de38216-6d1c-48c4-877b-86d403f4e0f8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Website:\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
"\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
" and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
" Respond in markdown.\"\n", |
||||
"\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website are as follows; \\\n", |
||||
" please provide a short summary of this website in markdown. \\\n", |
||||
" If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt\n", |
||||
"\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]\n", |
||||
"\n", |
||||
"\n", |
||||
"def summary(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = ollama.chat(\n", |
||||
" model = MODEL,\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return display(Markdown(response['message']['content']))\n", |
||||
"\n", |
||||
"\n", |
||||
"summary(\"https://edwarddonner.com\")\n", |
||||
"# summary(\"https://cnn.com\")\n", |
||||
"# summary(\"https://anthropic.com\")" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.10.7" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,432 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 52, |
||||
"id": "b56a950c-db41-4575-bef9-0fa651dea363", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import os\n", |
||||
"import requests\n", |
||||
"import json\n", |
||||
"import ollama\n", |
||||
"from typing import List\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display, update_display,clear_output\n", |
||||
"\n", |
||||
"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0ec875db-0f6a-4eec-a3b6-eae4b71a4b89", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"\n", |
||||
"MODEL = \"llama3.2\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "227cd07c-98a4-463b-94ad-94e33d04944b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
" \"\"\"\n", |
||||
" A utility class to represent a Website that we have scraped, now with links\n", |
||||
" \"\"\"\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" self.body = response.content\n", |
||||
" soup = BeautifulSoup(self.body, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" if soup.body:\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
" else:\n", |
||||
" self.text = \"\"\n", |
||||
" links = [link.get('href') for link in soup.find_all('a')]\n", |
||||
" self.links = [link for link in links if link]\n", |
||||
"\n", |
||||
" def get_contents(self):\n", |
||||
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4d5c5e40-c010-4102-8359-899f988185fb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"ed.links" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5f0b5d71-487c-47a5-ace6-8e02465ed452", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"link_system_prompt = \"You are provided with a list of links found on a webpage. \\\n", |
||||
"You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n", |
||||
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n", |
||||
"link_system_prompt += \"You should respond in JSON as in this example:\"\n", |
||||
"link_system_prompt += \"\"\"\n", |
||||
"{\n", |
||||
" \"links\": [\n", |
||||
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n", |
||||
" {\"type\": \"careers page\", \"url\": \"https://another.full.url/careers\"}\n", |
||||
" ]\n", |
||||
"}\n", |
||||
"\"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c6550325-5160-42c9-b7e7-980b504cd096", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(link_system_prompt)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2db4ccc6-5c35-4775-a5b2-4b86e4c73808", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links_user_prompt(website):\n", |
||||
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n", |
||||
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n", |
||||
"Do not include Terms of Service, Privacy, email links.\\n\"\n", |
||||
" user_prompt += \"Links (some might be relative links):\\n\"\n", |
||||
" user_prompt += \"\\n\".join(website.links)\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8af511c7-5a74-4d1a-b763-b31370e70cff", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_links_user_prompt(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a3b7fb61-ca15-4eab-b017-b0fe5cce46fd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = ollama.chat(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": link_system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n", |
||||
" ], format = \"json\" #Define format as json!\n", |
||||
" )\n", |
||||
" result = response['message']['content']\n", |
||||
"\n", |
||||
" return json.loads(result)\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7816d393-620d-4c53-913e-4ec130b2baba", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Anthropic has made their site harder to scrape; if this fails, try another site such as https://huggingface.co\n", |
||||
"\n", |
||||
"anthropic = Website(\"https://anthropic.com\")\n", |
||||
"anthropic.links" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f32ceccb-1d45-41a3-a5c1-fb2e6cd76afe", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"get_links(\"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a7ec4727-e897-473c-a657-e74f6999c974", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_all_details(url):\n", |
||||
" result = \"Landing page:\\n\"\n", |
||||
" result += Website(url).get_contents()\n", |
||||
" links = get_links(url)\n", |
||||
" print(\"Found links:\", links)\n", |
||||
" for link in links[\"links\"]:\n", |
||||
" result += f\"\\n\\n{link['type']}\\n\"\n", |
||||
" result += Website(link[\"url\"]).get_contents()\n", |
||||
" return result" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7acde0c5-1af2-4e8e-9303-e2a98ec9cdbb", |
||||
"metadata": { |
||||
"scrolled": true |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_all_details(\"https://anthropic.com\"))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5a2e2b1d-eb55-4bfb-bf55-5e8c87db0d96", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n", |
||||
"and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n", |
||||
"Include details of company culture, customers and careers/jobs if you have the information.\"\n", |
||||
"\n", |
||||
"# Or uncomment the lines below for a more humorous brochure - this demonstrates how easy it is to incorporate 'tone':\n", |
||||
"\n", |
||||
"# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n", |
||||
"# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n", |
||||
"# Include details of company culture, customers and careers/jobs if you have the information.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8eac1719-7f94-4460-bc4a-0c9c93bb17a5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_brochure_user_prompt(company_name, url):\n", |
||||
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n", |
||||
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n", |
||||
" user_prompt += get_all_details(url)\n", |
||||
" user_prompt = user_prompt[:5_000] # Truncate if more than 5,000 characters\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e2e312f6-01c5-4e57-9134-fb4aa447d155", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"get_brochure_user_prompt(\"Anthropic\", \"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8b05cbab-f0d2-4a9e-8b8c-c868a036e9cd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def create_brochure(company_name, url):\n", |
||||
" response = ollama.chat(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n", |
||||
" ]\n", |
||||
" )\n", |
||||
" result = response[\"message\"][\"content\"]\n", |
||||
" display(Markdown(result))\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "91ede0c0-daf2-42ef-9d31-749afb9d5352", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"create_brochure(\"Anthropic\", \"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "afb4aeee-5108-42a7-a1c1-5bad254b7e8b", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Final improvement\n", |
||||
"\n", |
||||
"Getting a typewriter animation" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 50, |
||||
"id": "177de611-1cb1-49e2-b7ea-8d01191af3ee", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def create_brochure(company_name, url):\n", |
||||
" messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n", |
||||
" ]\n", |
||||
"\n", |
||||
" display_markdown = display(Markdown(\"\"), display_id=True) # Initialize Markdown display\n", |
||||
" response_text = \"\"\n", |
||||
"\n", |
||||
" for chunk in ollama.chat(model=MODEL, messages=messages, stream=True): # Ensure stream=True (not a string)\n", |
||||
" response_text += chunk['message']['content']\n", |
||||
" clear_output(wait=True) # Clear previous output to create a streaming effect\n", |
||||
" display_markdown.update(Markdown(response_text)) # Update Markdown dynamically\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 53, |
||||
"id": "a1971d81-fc7f-4ed1-97a0-7ef5e8ed332a", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Found links: {'links': [{'type': 'About page', 'url': 'https://www.anthropic.com/company'}, {'type': 'Careers page', 'url': 'https://www.anthropic.com/careers'}, {'type': 'Company page', 'url': 'https://www.anthropic.com/'}, {'type': 'Research page', 'url': 'https://www.anthropic.com/research'}, {'type': 'Twitter profile', 'url': 'https://twitter.com/AnthropicAI'}, {'type': 'LinkedIn company page', 'url': 'https://www.linkedin.com/company/anthropicresearch'}, {'type': 'YouTube channel', 'url': 'https://www.youtube.com/@anthropic-ai'}]}\n" |
||||
] |
||||
}, |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"**Anthropic Brochure**\n", |
||||
"======================\n", |
||||
"\n", |
||||
"**Mission Statement**\n", |
||||
"-------------------\n", |
||||
"\n", |
||||
"Anthropic is an AI safety and research company dedicated to building reliable, interpretable, and steerable AI systems that benefit humanity in the long run.\n", |
||||
"\n", |
||||
"**Company Overview**\n", |
||||
"--------------------\n", |
||||
"\n", |
||||
"Anthropic is headquartered in San Francisco and brings together a diverse team of researchers, engineers, policy experts, and business leaders with experience spanning various disciplines. Our mission is to conduct frontier AI research, develop and apply safety techniques, and deploy the resulting systems via partnerships and products.\n", |
||||
"\n", |
||||
"**Research Focus**\n", |
||||
"-----------------\n", |
||||
"\n", |
||||
"Anthropic conducts cutting-edge AI research across various modalities, exploring novel and emerging safety research areas such as interpretability, RL from human feedback, policy, and societal impacts analysis. Our research aims to advance the field of AI safety and inform our product development.\n", |
||||
"\n", |
||||
"**Product Portfolio**\n", |
||||
"---------------------\n", |
||||
"\n", |
||||
"Our flagship product is Claude, a highly intelligent AI model that enables customers to build custom applications and experiences using our API. We also offer various enterprise solutions, including Claude for Enterprise, designed to meet the needs of large organizations.\n", |
||||
"\n", |
||||
"**Customer Base**\n", |
||||
"-----------------\n", |
||||
"\n", |
||||
"Anthropic serves a diverse range of customers, including businesses, nonprofits, civil society groups, and their clients around the globe. Our commitment to safety and reliability has earned us a reputation as a trusted partner in the AI industry.\n", |
||||
"\n", |
||||
"**Values and Culture**\n", |
||||
"----------------------\n", |
||||
"\n", |
||||
"At Anthropic, we value:\n", |
||||
"\n", |
||||
"* **Acting for the global good**: We strive to make decisions that maximize positive outcomes for humanity in the long run.\n", |
||||
"* **Holding light and shade**: We acknowledge the potential risks of AI and approach our work with caution and transparency.\n", |
||||
"\n", |
||||
"**Join Our Team**\n", |
||||
"-----------------\n", |
||||
"\n", |
||||
"We're a collaborative team of researchers, engineers, policy experts, and business leaders passionate about building safer AI systems. Join us to be part of this exciting journey and contribute your skills and expertise to shaping the future of AI.\n", |
||||
"\n", |
||||
"**Careers**\n", |
||||
"------------\n", |
||||
"\n", |
||||
"Check our website for open roles and learn more about our company culture, benefits, and career opportunities.\n", |
||||
"\n", |
||||
"[Learn More](link)\n", |
||||
"\n", |
||||
"**Get in Touch**\n", |
||||
"-----------------\n", |
||||
"\n", |
||||
"Stay up-to-date with the latest news and announcements from Anthropic. Follow us on Twitter, LinkedIn, or YouTube to join the conversation and stay informed.\n", |
||||
"\n", |
||||
"[Twitter](link)\n", |
||||
"[LinkedIn](link)\n", |
||||
"[YouTube](link)" |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"create_brochure(\"Anthropic\", \"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c33277a4-84f1-447c-a66e-eb7e2af42d2a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,240 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "9964872b-225d-4ced-93e4-fc5b279ec2ed", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Webpage English summarizer with user inputs (url, ollama-based LLM) " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e49d399-d18c-4c91-8abc-cf3289e11e2f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"# from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"import ollama, time\n", |
||||
"from tqdm import tqdm" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "46e7d809-248d-41b8-80e1-36b210041581", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define system prompt.\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a detailed summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown, in English.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e8bf237f-591f-4c32-9415-5d5d4e2522b8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
"    user_prompt += \"\\nThe contents of this website are as follows; \\\n", |
||||
"please provide a detailed summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7d39ee6d-c670-41ba-a0b8-debd55bda8e3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "43e28ff5-2def-4a47-acdd-2e06c0666956", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"\n", |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"HEADERS = {\"Content-Type\": \"application/json\"}" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "32f4f481-81a3-479d-817b-4e754d9af46d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\"User-Agent\": \"Mozilla/5.0\"}  # browser-like UA; HEADERS above is for the Ollama API, not for fetching pages\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f81cfd17-8208-4192-a59f-485ff3ea74e4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the ollama API wrapper and return the relevant component of the response\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = ollama.chat(\n", |
||||
" model=MODEL,\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response['message']['content']" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7a9eedc6-2183-473d-84ca-b10d40e2a1e6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Ask the user the name of the url address\n", |
||||
"\n", |
||||
"url= str(input(\"\"\"\n", |
||||
"Please provide a valid url address:\n", |
||||
"https://\"\"\"))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5d012de2-0ef2-43db-9f51-fc7f989c3642", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Ask the user to select a valid model\n", |
||||
"\n", |
||||
"MODEL= str(input(\"\"\"\n", |
||||
"Please select a LLM:\n", |
||||
"(examples: llama3.2, deepseek-r1:1.5b)\n", |
||||
"\"\"\"))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1ac8c02e-4a62-448b-a231-8c6f65891811", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's just make sure the model is loaded\n", |
||||
"\n", |
||||
"!ollama pull {MODEL}" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0544541f-11a8-4eb7-8eb6-bc032ed6d0d1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print('url: https://{0}\\nModel= {1}'.format(url, MODEL))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "45518950-f2c9-43af-b897-4fe8fe48dfd8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summary = summarize('https://'+ url)\n", |
||||
"# Iterate over the summary's characters purely to show a progress bar while it \"types\"\n", |
||||
"for summ in tqdm(summary):\n", |
||||
"    time.sleep(0.01)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "02c0c15e-216d-47c7-843d-ac27af02820b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "985a3689-5827-4b15-b8d5-276f9b292afd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,76 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# 1) Import Required Libraries \n", |
||||
"\n", |
||||
"import requests\n", |
||||
"import gradio as gr\n", |
||||
"\n", |
||||
"# DeepSeek here performs abstractive summarization\n", |
||||
"# This tool uses a local DeepSeek model served through the Ollama API\n", |
||||
"\n", |
||||
"# 2) Define the Ollama API endpoint that serves the DeepSeek model\n", |
||||
"\n", |
||||
"OLLAMA_URL = \"http://localhost:11434/api/generate\"\n", |
||||
"\n", |
||||
"# 3) Define the Summarization Function which can retrieve Information\n", |
||||
"\n", |
||||
"def summarize_text(text):\n", |
||||
" payload = {\n", |
||||
"        \"model\": \"deepseek-r1\",  # Use whichever model you have pulled in Ollama (e.g. deepseek-r1:1.5b, 7b, 8b, 14b); 7b was used here\n", |
||||
"        \"prompt\": f\"Summarize the following text in **5 bullet points**:\\n\\n{text}\",  # Instructions telling the LLM how to summarize\n", |
||||
" \"stream\": False # Ensures the response is returned as a whole, not streamed\n", |
||||
" }\n", |
||||
"\n", |
||||
"    response = requests.post(OLLAMA_URL, json=payload)  # Send the request to the local Ollama server\n", |
||||
"\n", |
||||
"    if response.status_code == 200:  # On success return the summary text; otherwise surface the error\n", |
||||
" return response.json().get(\"response\", \"No summary generated.\")\n", |
||||
" else:\n", |
||||
" return f\"Error: {response.text}\"\n", |
||||
"\n", |
||||
"# 4) Create Gradio interface to design \n", |
||||
"interface = gr.Interface(\n", |
||||
" fn=summarize_text,\n", |
||||
" inputs=gr.Textbox(lines=10, placeholder=\"Enter text to summarize\"),\n", |
||||
" outputs=gr.Textbox(label=\"Summarized Text\"),\n", |
||||
" #theme='NoCrypt/miku', #Theme for the Interface I used Hatsune Miku from HF \n", |
||||
" title=\"AI-Powered Text Summarizer\",\n", |
||||
" description=\"Enter a long text and DeepSeek AI will generate a concise summary.\"\n", |
||||
")\n", |
||||
"\n", |
||||
"# Launch the web app\n", |
||||
"if __name__ == \"__main__\":\n", |
||||
" interface.launch()\n", |
||||
"\n", |
||||
"\n" |
||||
] |
||||
} |
||||
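The request body assembled in `summarize_text` can be checked without any network call. A small sketch of that payload shape (`build_payload` is a hypothetical helper introduced here; the keys mirror the ones the cell above sends to `/api/generate`):

```python
# Build the JSON body sent to the local Ollama /api/generate endpoint,
# without performing the POST itself.
def build_payload(text, model="deepseek-r1"):
    return {
        "model": model,
        "prompt": f"Summarize the following text in **5 bullet points**:\n\n{text}",
        "stream": False,  # ask Ollama for one complete response rather than a stream
    }

payload = build_payload("Some long article text.")
```

Keeping the payload construction in a pure function makes the prompt template easy to unit-test separately from the HTTP call.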
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "base", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.12.4" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 2 |
||||
} |
@ -1,346 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "5787d599-798e-4161-a473-970e8d948db3", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Community contribution\n", |
||||
"\n", |
||||
"Generating a sports brochure - and then in Spanish! Many thanks for the contribution." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4ed9b1c0-50dc-48ea-b1b6-6be3264255b6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"import json\n", |
||||
"from typing import List\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display, update_display\n", |
||||
"from openai import OpenAI" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "23cc579a-43eb-44b5-8105-8ec3d46ec029", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Initialize and constants\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", |
||||
"MODEL = 'gpt-4o-mini'\n", |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0206cf1d-00c5-401d-8aa0-88a83aeb8c83", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
" url: str\n", |
||||
" title: str\n", |
||||
" body: str\n", |
||||
" links: List[str]\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url)\n", |
||||
" self.body = response.content\n", |
||||
" soup = BeautifulSoup(self.body, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" if soup.body:\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
" else:\n", |
||||
" self.text = \"\"\n", |
||||
" links = [link.get('href') for link in soup.find_all('a')]\n", |
||||
" self.links = [link for link in links if link]\n", |
||||
"\n", |
||||
" def get_contents(self):\n", |
||||
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e1ab939b-0a81-4301-8820-abdc1b740a86", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"caps = Website(\"https://www.nhl.com/capitals/\")\n", |
||||
"print(caps.get_contents())" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6bc9763c-e8cb-47f4-a5f0-fccbdb34f5f9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(caps.links)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "11a407e0-6d23-4199-bb58-344352178978", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"link_system_prompt = \"You are provided with a list of links found on a webpage. \\\n", |
||||
"You are able to decide which of the links would be most relevant to include in a brochure about the team, \\\n", |
||||
"such as links to an About page, Team, News, Schedule, History, Stats pages.\\n\"\n", |
||||
"link_system_prompt += \"You should respond in JSON as in this example:\"\n", |
||||
"link_system_prompt += \"\"\"\n", |
||||
"{\n", |
||||
" \"links\": [\n", |
||||
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n", |
||||
" {\"type\": \"team page\", \"url\": \"https://full.url/goes/here/team\"},\n", |
||||
"    {\"type\": \"news page\", \"url\": \"https://another.full.url/news\"},\n", |
||||
"    {\"type\": \"schedule page\", \"url\": \"https://another.full.url/schedule\"},\n", |
||||
"    {\"type\": \"history page\", \"url\": \"https://another.full.url/history\"},\n", |
||||
"    {\"type\": \"stats page\", \"url\": \"https://another.full.url/stats\"},\n", |
||||
"    {\"type\": \"standings page\", \"url\": \"https://another.full.url/standings\"}\n", |
||||
" ]\n", |
||||
"}\n", |
||||
"\"\"\"" |
||||
] |
||||
}, |
||||
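Because the system prompt above pins down an exact JSON shape, the model's reply can be parsed directly with `json.loads`. A quick sanity check of that shape on a hand-written example:

```python
import json

# An example reply in the shape the system prompt requests: a top-level
# "links" list of {"type": ..., "url": ...} objects.
example = '{"links": [{"type": "about page", "url": "https://full.url/goes/here/about"}]}'
parsed = json.loads(example)
urls = [link["url"] for link in parsed["links"]]
```

Requesting `response_format={"type": "json_object"}` later in the notebook is what makes this direct `json.loads` safe, since the model is constrained to emit valid JSON.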
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b45c01f5-e27d-47d6-b9e7-7500940da3c0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links_user_prompt(website):\n", |
||||
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n", |
||||
" user_prompt += \"please decide which of these are relevant web links for a brochure about the team, respond with the full https URL in JSON format. \\\n", |
||||
"Do not include Terms of Service, Privacy, Tickets, Video, Listen, Community, Fans, Youth Hockey, Shop, League, email links.\\n\"\n", |
||||
" user_prompt += \"Links (some might be relative links):\\n\"\n", |
||||
" user_prompt += \"\\n\".join(website.links)\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9dbfc365-b3c7-45d2-bd6d-79efb2372f43", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_links_user_prompt(caps))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "410c113b-a5b2-4639-aace-2203a8631dbc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links(url):\n", |
||||
" website = Website(url)\n", |
||||
" completion = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": link_system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n", |
||||
" ],\n", |
||||
" response_format={\"type\": \"json_object\"}\n", |
||||
" )\n", |
||||
" result = completion.choices[0].message.content\n", |
||||
" return json.loads(result)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b8a5d809-c85e-4a97-9a81-61114b635027", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"get_links(\"https://www.nhl.com/capitals/\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ffb4dbdf-758c-4180-aa4f-3c18899493eb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_all_details(url):\n", |
||||
" result = \"Landing page:\\n\"\n", |
||||
" result += Website(url).get_contents()\n", |
||||
" links = get_links(url)\n", |
||||
" print(\"Found links:\", links)\n", |
||||
" for link in links[\"links\"]:\n", |
||||
" result += f\"\\n\\n{link['type']}\\n\"\n", |
||||
" result += Website(link[\"url\"]).get_contents()\n", |
||||
" return result" |
||||
] |
||||
}, |
||||
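The stitching logic in `get_all_details` can be sketched with stubbed page contents instead of live `Website` fetches (`assemble` is a hypothetical helper introduced here; the `"contents"` key stands in for `Website(link["url"]).get_contents()`):

```python
# Mirror of get_all_details' string assembly: landing page first, then each
# selected link's type header followed by its page contents.
def assemble(landing_contents, links):
    result = "Landing page:\n" + landing_contents
    for link in links["links"]:
        result += f"\n\n{link['type']}\n"
        result += link["contents"]  # stands in for a live page fetch
    return result

brochure_input = assemble(
    "Title: Team Home\n",
    {"links": [{"type": "about page", "contents": "Title: About\n"}]},
)
```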
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b7546e12-c819-4092-a783-7be23d4d7b8c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_all_details(\"https://www.nhl.com/capitals/\"))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3d9aa466-e68d-4119-95e4-10893d774ebf", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"You are a sports marketing analyst that analyzes the contents of several relevant pages from a sports team website \\\n", |
||||
"and creates a short brochure about the team for prospective fans and player recruits. Respond in markdown. \\\n", |
||||
"Include details of team history, team culture, team news, and team stats if you have the information.\"\n", |
||||
"\n", |
||||
"# Or uncomment the lines below for a more humorous brochure - this demonstrates how easy it is to incorporate 'tone':\n", |
||||
"\n", |
||||
"# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n", |
||||
"# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n", |
||||
"# Include details of company culture, customers and careers/jobs if you have the information.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "07097603-5196-47cc-9e61-b412c14b62c6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_brochure_user_prompt(company_name, url):\n", |
||||
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n", |
||||
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the team in markdown.\\n\"\n", |
||||
" user_prompt += get_all_details(url)\n", |
||||
" user_prompt = user_prompt[:40_000] # Truncate if more than 40,000 characters\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
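The final line of `get_brochure_user_prompt` relies on Python slice semantics: slicing a string caps its length at 40,000 characters and is a no-op for shorter text, never raising. A tiny illustration (`truncate` is a hypothetical helper mirroring `user_prompt[:40_000]`):

```python
# Cap prompt length the same way get_brochure_user_prompt does.
def truncate(text, limit=40_000):
    return text[:limit]  # slicing never raises, even when len(text) < limit
```

This keeps the combined page contents within a size the model's context window can comfortably hold.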
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "36d42b59-0a96-41b7-b9fc-02ae37659f7a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def create_brochure(company_name, url):\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n", |
||||
" ],\n", |
||||
" )\n", |
||||
" result = response.choices[0].message.content\n", |
||||
" display(Markdown(result))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6e642228-092a-4bf5-a3c2-1ab78822453a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"create_brochure(\"Washington Capitals\", \"https://www.nhl.com/capitals\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1ccee810-3a11-48ec-8a69-d12efcbeb1cd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d392b8a2-9d1a-40bb-a8c1-5b88ad7e463d", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Translate to Spanish" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7bc08bb5-c02e-4685-bbd1-5afa084b7f18", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_brochure_user_prompt(company_name, url):\n", |
||||
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n", |
||||
"    user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the team, written in Spanish, in markdown.\\n\"\n", |
||||
" user_prompt += get_all_details(url)\n", |
||||
" user_prompt = user_prompt[:40_000] # Truncate if more than 40,000 characters\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a40a2437-499a-4665-8d45-8241705477d6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"create_brochure(\"Washington Capitals\", \"https://www.nhl.com/capitals\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "36f5d6d7-952c-4d50-894c-9b01e29cebc8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,148 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "87c2da09-bd0c-4683-828b-4f7643018795", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Community contribution\n", |
||||
"\n", |
||||
"Implementing a simple ChatGPT interface that maintains conversation context with the selected model" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 26, |
||||
"id": "77a850ed-61f8-4a0d-9c41-45781eb60bc9", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"API key looks good so far\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"import os\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"import ipywidgets as widgets\n", |
||||
"from IPython.display import Markdown, display, update_display, clear_output\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n", |
||||
" print(\"API key looks good so far\")\n", |
||||
"else:\n", |
||||
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n", |
||||
" \n", |
||||
"MODEL = 'gpt-4o-mini'\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1f7f16f0-6fec-4190-882a-3fe1f0e9704a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class ChatGPTInterface:\n", |
||||
" def __init__(self, api_key, model, system_message=\"You are a helpful assistant. You can format your responses using Markdown.\"):\n", |
||||
" self.openai = OpenAI(api_key=api_key)\n", |
||||
" self.model = model\n", |
||||
" self.conversation_history = [{\"role\": \"system\", \"content\": system_message}]\n", |
||||
"\n", |
||||
" self.chat_area = widgets.Output()\n", |
||||
" self.input_box = widgets.Text(placeholder=\"Enter your message here...\")\n", |
||||
" self.send_button = widgets.Button(description=\"Send\")\n", |
||||
" self.clear_button = widgets.Button(description=\"Clear\")\n", |
||||
"\n", |
||||
" self.send_button.on_click(self.send_message)\n", |
||||
" self.clear_button.on_click(self.clear_chat)\n", |
||||
"\n", |
||||
" self.layout = widgets.VBox([\n", |
||||
" self.chat_area,\n", |
||||
" widgets.HBox([self.input_box, self.send_button, self.clear_button])\n", |
||||
" ])\n", |
||||
"\n", |
||||
" def display(self):\n", |
||||
" display(self.layout)\n", |
||||
"\n", |
||||
" def send_message(self, _):\n", |
||||
" user_message = self.input_box.value.strip()\n", |
||||
" if user_message:\n", |
||||
" self.conversation_history.append({\"role\": \"user\", \"content\": user_message})\n", |
||||
" self.display_message(\"You\", user_message)\n", |
||||
" self.input_box.value = \"\"\n", |
||||
"\n", |
||||
" try:\n", |
||||
" response = self.openai.chat.completions.create(\n", |
||||
" model=self.model,\n", |
||||
" messages=self.conversation_history\n", |
||||
" )\n", |
||||
" assistant_message = response.choices[0].message.content.strip()\n", |
||||
" self.conversation_history.append({\"role\": \"assistant\", \"content\": assistant_message})\n", |
||||
" self.display_message(\"ChatGPT\", assistant_message)\n", |
||||
" except Exception as e:\n", |
||||
" self.display_message(\"Error\", str(e))\n", |
||||
"\n", |
||||
" def clear_chat(self, _):\n", |
||||
" self.conversation_history = [{\"role\": \"system\", \"content\": self.conversation_history[0][\"content\"]}]\n", |
||||
" self.chat_area.clear_output(wait=True)\n", |
||||
"\n", |
||||
" def display_message(self, sender, message):\n", |
||||
" self.chat_area.append_display_data(Markdown(f\"**{sender}:**\\n{message}\"))\n" |
||||
] |
||||
}, |
||||
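The class above maintains context by resending the whole `conversation_history` list on every call, and `clear_chat` resets it to just the system message. That state pattern, sketched without the OpenAI client or widgets (`make_history`/`add_turn` are hypothetical helper names):

```python
# Conversation state is just a list of {"role", "content"} dicts; context
# exists only because the full list is resent with each API call.
def make_history(system_message):
    return [{"role": "system", "content": system_message}]

def add_turn(history, role, content):
    history.append({"role": role, "content": content})
    return history

history = make_history("You are a helpful assistant.")
add_turn(history, "user", "Hi")
add_turn(history, "assistant", "Hello!")
# Clearing keeps only the original system message, like clear_chat above.
cleared = make_history(history[0]["content"])
```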
{ |
||||
"cell_type": "code", |
||||
"execution_count": 28, |
||||
"id": "78287e42-8964-4da6-bd48-a7dffd0ce7dd", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"application/vnd.jupyter.widget-view+json": { |
||||
"model_id": "54956535cb32419bbe38d2bee125992d", |
||||
"version_major": 2, |
||||
"version_minor": 0 |
||||
}, |
||||
"text/plain": [ |
||||
"VBox(children=(Output(), HBox(children=(Text(value='', placeholder='Enter your message here...'), Button(descr…" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"chat_interface = ChatGPTInterface(api_key,MODEL)\n", |
||||
"chat_interface.display()" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,273 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "fad31e32-2e42-42ae-ae63-c15d90292839", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# First Project\n", |
||||
"Ollama -> Summary\n", |
||||
"huggingface_hub -> \"facebook/m2m100_418M\" for translation" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5fb79a20-a455-4d27-91a1-91958af786c1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!pip install transformers datasets torch\n", |
||||
"!pip install huggingface_hub" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e95ac7f2-5192-4f83-acf3-61df30cd3109", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"import requests\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"import json\n", |
||||
"import ollama" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "12276d74-0e79-4e66-9135-1c9d1a80b943", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Website:\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
"\n", |
||||
"huggingface_url = \"https://huggingface.co/learn/ml-for-3d-course\"\n", |
||||
"huggingface_website = Website(huggingface_url)\n", |
||||
"\n", |
||||
"huggingface_data = {\n", |
||||
" \"title\": huggingface_website.title,\n", |
||||
" \"text\": huggingface_website.text\n", |
||||
"}\n", |
||||
"print(huggingface_data)\n", |
||||
"\n", |
||||
"with open('ml_for_3d_course_data.json', 'w') as f:\n", |
||||
" json.dump(huggingface_data, f)\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7d74c85c-3e09-4514-bde4-4cafc4910c52", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# huggingface_data 'text' value\n", |
||||
"huggingface_text = huggingface_data['text']\n", |
||||
"\n", |
||||
"# Summary\n", |
||||
"response_summary = ollama.chat(model=\"llama3.2:latest\", messages=[{\"role\": \"user\", \"content\": f\"Summarize the following text: {huggingface_text}\"}])\n", |
||||
"print(response_summary)\n", |
||||
"\n", |
||||
"# print summary\n", |
||||
"summary_huggingface_text = response_summary['message']['content']  # subscript access works for both dict and ChatResponse replies\n", |
||||
"print(\"Summary Text:\", summary_huggingface_text)\n", |
||||
"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d13764d5-cb76-46c5-bbe6-d132b31a9ea6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# HuggingFace Translation" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "08405038-4115-487f-9efc-de58572453c1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Website:\n", |
||||
" url: str\n", |
||||
" title: str\n", |
||||
" text: str\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
"\n", |
||||
"url = \"https://huggingface.co/learn/ml-for-3d-course\"\n", |
||||
"website = Website(url)\n", |
||||
"print(website.title) \n", |
||||
"print(website.text[:1000])\n", |
||||
"\n", |
||||
"data = {\n", |
||||
" \"title\": website.title,\n", |
||||
" \"text\": website.text\n", |
||||
"}\n", |
||||
"\n", |
||||
"with open('ml_for_3d_course_data.json', 'w') as f:\n", |
||||
" json.dump(data, f)\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0632352f-4b16-4125-83bf-f3cc3aabd659", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(data)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a85f8625-725d-4d7f-8cb7-8da4276f81cf", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!pip install sacremoses" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c800cea4-f4a4-4e41-9637-31ff11afb256", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import json\n", |
||||
"from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer\n", |
||||
"\n", |
||||
"# Load the M2M100 model and tokenizer\n", |
||||
"model_name = \"facebook/m2m100_418M\"\n", |
||||
"model = M2M100ForConditionalGeneration.from_pretrained(model_name)\n", |
||||
"tokenizer = M2M100Tokenizer.from_pretrained(model_name)\n", |
||||
"\n", |
||||
"# Load the saved JSON file\n", |
||||
"with open('ml_for_3d_course_data.json', 'r') as f:\n", |
||||
" data = json.load(f)\n", |
||||
"\n", |
||||
"# Extract text from the loaded data\n", |
||||
"text = data[\"text\"]\n", |
||||
"\n", |
||||
"# Set the source language to English and target language to Korean\n", |
||||
"source_lang = \"en\"\n", |
||||
"target_lang = \"ko\"\n", |
||||
"\n", |
||||
"# Set the language for tokenizer (important for M2M100)\n", |
||||
"tokenizer.src_lang = source_lang\n", |
||||
"tokenizer.tgt_lang = target_lang\n", |
||||
"\n", |
||||
"# Split text into smaller chunks if it's too large\n", |
||||
"# This step ensures we don't exceed the model's maximum length (512 tokens)\n", |
||||
"max_input_length = 512\n", |
||||
"chunks = [text[i:i+max_input_length] for i in range(0, len(text), max_input_length)]\n", |
||||
"\n", |
||||
"print(chunks)\n", |
||||
"# Initialize a list to hold the translated text\n", |
||||
"translated_chunks = []\n", |
||||
"\n", |
||||
"# Iterate through each chunk and translate it\n", |
||||
"for chunk in chunks:\n", |
||||
" # Tokenize the chunk\n", |
||||
" encoded = tokenizer(chunk, return_tensors=\"pt\", padding=True, truncation=True, max_length=512)\n", |
||||
"\n", |
||||
" # Generate translation from the model, forcing the output to be in Korean\n", |
||||
" generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.get_lang_id(target_lang), max_length=512)\n", |
||||
"\n", |
||||
" # Decode the translated tokens to text\n", |
||||
" translated_text = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]\n", |
||||
" translated_chunks.append(translated_text)\n", |
||||
"\n", |
||||
"# Combine all translated chunks back together\n", |
||||
"final_translated_text = ' '.join(translated_chunks)\n", |
||||
"print(\"Translated Text:\", final_translated_text)\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ffe0f264-a588-422f-a6e1-b60504d1e02c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import json\n", |
||||
"import requests\n", |
||||
"\n", |
||||
"# Set the translation API URL (illustrative; note that Ollama's default port is 11434)\n", |
||||
"ollama_url = \"http://localhost:11411/v1/models/facebook/m2m100_418M/generate\"\n", |
||||
"\n", |
||||
"# Load the saved JSON file\n", |
||||
"with open('ml_for_3d_course_data.json', 'r') as f:\n", |
||||
" data = json.load(f)\n", |
||||
"\n", |
||||
"# Extract the text\n", |
||||
"course_text = data[\"text\"]\n", |
||||
"\n", |
||||
"# Set the source and target languages for translation\n", |
||||
"source_language = \"en\"\n", |
||||
"target_language = \"ko\"\n", |
||||
"\n", |
||||
"# Prepare the request payload\n", |
||||
"payload = {\n", |
||||
" \"input_text\": course_text,\n", |
||||
" \"src_lang\": source_language,\n", |
||||
" \"tgt_lang\": target_language\n", |
||||
"}\n", |
||||
"\n", |
||||
"# Call the API\n", |
||||
"response = requests.post(ollama_url, json=payload)\n", |
||||
"\n", |
||||
"# Check the response\n", |
||||
"if response.status_code == 200:\n", |
||||
" translated_course_text = response.json().get(\"translated_text\", \"Translation failed\")\n", |
||||
" print(\"Translated Course Text:\", translated_course_text)\n", |
||||
"else:\n", |
||||
" print(f\"Error {response.status_code}: {response.text}\")\n" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
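The translation notebook above splits the text into fixed 512-character slices, which can cut words and sentences in half and hurt translation quality. A hedged alternative sketch (the `sentence_chunks` helper and its parameters are this sketch's own, and splitting on `.`, `!`, `?` is a rough heuristic, not a real sentence tokenizer):

```python
import re

# Pack whole sentences into chunks up to a size limit instead of slicing the
# text at arbitrary character offsets. A single sentence longer than
# max_chars would still form an oversized chunk; handling that case is
# omitted here for brevity.

def sentence_chunks(text, max_chars=512):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip() if current else sentence
    if current:
        chunks.append(current)
    return chunks

text = "First sentence. Second sentence! Third one? " * 10
for chunk in sentence_chunks(text, max_chars=100):
    print(len(chunk) <= 100, chunk[:40])
```

Each chunk can then be fed to the tokenizer and `model.generate` exactly as in the loop above, with less risk of translating half a sentence.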
@ -1,138 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "fe12c203-e6a6-452c-a655-afb8a03a4ff5", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# End of week 1 exercise solution: Ollama with streaming\n", |
||||
"\n", |
||||
"A tool that takes a technical question and responds with an explanation." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c1070317-3ed9-4659-abe3-828943230e03", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Imports\n", |
||||
"\n", |
||||
"import ollama\n", |
||||
"import requests\n", |
||||
"from IPython.display import Markdown, display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4a456906-915a-4bfd-bb9d-57e505c5093f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"\n", |
||||
"MODEL_LLAMA = 'llama3.2'\n", |
||||
"MODEL_LLAMA1b = \"llama3.2:1b\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3f0d0137-52b0-47a8-81a8-11a90a010798", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Environment\n", |
||||
"\n", |
||||
"system_prompt = \"\"\"\n", |
||||
"You are an assistant that takes a technical question and responds with an explanation.\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"question = \"\"\"\n", |
||||
"Please explain what this code does and why:\n", |
||||
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"question2 = \"\"\"\n", |
||||
"What is the purpose of using yield from in the following code, and how does it differ from a standard for loop with yield?\n", |
||||
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"user_prompt = \"Answer these two questions in detail please, Question1:\" + question + \"Question2:\" + question2\n", |
||||
"\n", |
||||
"def message():\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Llama 3.2 answer, with streaming\n", |
||||
"\n", |
||||
"def llama():\n", |
||||
" response = ollama.chat(\n", |
||||
" model = MODEL_LLAMA,\n", |
||||
" messages = message(),\n", |
||||
" stream =True\n", |
||||
" )\n", |
||||
" full_response = \"\"\n", |
||||
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||
" for chunk in response:\n", |
||||
" content = chunk.get(\"message\", {}).get(\"content\", \"\")\n", |
||||
" if content:\n", |
||||
" full_response += content\n", |
||||
" display_handle.update(Markdown(full_response))\n", |
||||
"llama()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "342a470c-9aab-4051-ad21-514dceec76eb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Llama 3.2:1b answer\n", |
||||
"\n", |
||||
"def llama():\n", |
||||
" response = ollama.chat(\n", |
||||
" model = MODEL_LLAMA1b,\n", |
||||
" messages = message()\n", |
||||
" )\n", |
||||
" return display(Markdown(response['message']['content']))\n", |
||||
"\n", |
||||
"llama()" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.10.7" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
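The streaming cell above accumulates each chunk's content into one growing string and re-renders it. That accumulation logic can be isolated and exercised without calling Ollama at all, by feeding it a fake stream shaped like the dict chunks the notebook assumes (`accumulate` and `on_update` are this sketch's own names):

```python
# Accumulate streamed chat chunks into a full response string, invoking an
# optional callback with the partial text after every chunk (in the notebook,
# that callback is the Markdown display update).

def accumulate(stream, on_update=None):
    full_response = ""
    for chunk in stream:
        content = chunk.get("message", {}).get("content", "")
        if content:
            full_response += content
            if on_update:
                on_update(full_response)  # e.g. re-render a Markdown display
    return full_response

fake_stream = [
    {"message": {"content": "Hello"}},
    {"message": {"content": ", "}},
    {"message": {}},                      # chunk with no content is skipped
    {"message": {"content": "world!"}},
]
print(accumulate(fake_stream))  # Hello, world!
```

Separating the accumulation from the display call also makes it easy to swap in a different renderer, or none at all, without touching the streaming loop.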
@ -1,127 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "39e3e763-9b00-49eb-aead-034a2d0517a7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f3bb5e2a-b70f-42ba-9f22-030a9c6bc9d1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "994f51fb-eab3-45a2-847f-87aebb92b17a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", |
||||
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a8125c6d-c884-4f65-b477-cab155e29ce3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
"system_prompt = \"You are an AI that suggests short and relevant subject lines for emails based on their content.\"\n", |
||||
"user_prompt = \"\"\"\n", |
||||
"Here is the content of an email:\n", |
||||
"\n", |
||||
"Dear Team,\n", |
||||
"\n", |
||||
"I hope you're all doing well. I wanted to remind you that our next project meeting is scheduled for this Friday at 3 PM. We will be discussing our progress and any blockers. Please make sure to review the latest updates before the meeting.\n", |
||||
"\n", |
||||
"Best, \n", |
||||
"John\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"\n", |
||||
"messages = [ {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}] # fill this in\n", |
||||
"\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages=messages\n", |
||||
")\n", |
||||
"\n", |
||||
"# Step 4: print the result\n", |
||||
"\n", |
||||
"print(\"Suggested Subject Line:\", response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1010ac80-1ee8-432f-aa3f-12af419dc23a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
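The four steps in the notebook above (write prompts, build the messages list, call OpenAI, print the result) repeat a pattern worth factoring out. The sketch below builds only the messages list, so it runs without an API key; the helper name and parameters are this sketch's own, not OpenAI's:

```python
# Build the two-message list the Chat Completions API expects from a system
# prompt and a user prompt, so the same pattern is reusable for other emails.

def build_messages(system_prompt, user_prompt):
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are an AI that suggests short and relevant subject lines for emails.",
    "Here is the content of an email: ...",
)
print(messages[0]["role"], messages[1]["role"])  # system user
```

The returned list drops straight into `openai.chat.completions.create(model=..., messages=messages)` as in Step 3 above.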
@ -1,408 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "raw", |
||||
"id": "f64407a0-fda5-48f3-a2d3-82e80d320931", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### \"Career Well-Being Companion\" ###\n", |
||||
"This project gathers the employee's feelings at the end of the day.\n", |
||||
"Based on the feelings provided as input, the model analyzes them, offers suggestions, and acknowledges what the employee is going through.\n", |
||||
"The model will even ask the employee, \"Do you want a more detailed response to cope with your feelings?\"\n", |
||||
"If the employee agrees, the model replies with online courses, tools, meetups, and other ideas for the employee's well-being.\n", |
||||
"\n", |
||||
"Immediate Impact: Professionals can quickly see value through insights or actionable suggestions.\n", |
||||
"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2b30a8fa-1067-4369-82fc-edb197551e43", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"### Step 1: Emotional Check-in:\n", |
||||
"\n", |
||||
"# Input: User describes their feelings or workday.\n", |
||||
"# LLM Task: Analyze the input for emotional tone and identify keywords (e.g., \"stress,\" \"boredom\").\n", |
||||
"# Output: A summary of emotional trends.\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2b52469e-da81-42ec-9e6c-0c121ad349a7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(\"I am your well-being companion, and my goal is to help you in your career.\\nI want to start by asking about your feelings: how was your day today?\\n\")\n", |
||||
"print(\"As your well-being companion, I will do my best to analyze your day and come up with suggestions that might help you in your career and life.\\n\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a6df2e2c-785d-4323-90f4-b49592ab33fc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"how_was_day = \"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "247e4a80-f634-4a7a-9f40-315f042be59c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"how_was_day = input(\"How was your day today,can you describe about your day, what went well, what did not go well, what you did not like :\\n\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0faac2dd-0d53-431a-87a7-d57a6881e043", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"what_went_well = input(\"What went well for you , today?\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2c11628b-d14b-47eb-a97e-70d08ddf3364", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"what_went_bad = input(\"What did not go well, today?\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f64e34b4-f83a-4ae4-86bb-5bd164121412", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"how_was_day = how_was_day + \" \" + what_went_well + \" \" + what_went_bad  # join with spaces so the answers don't run together\n", |
||||
"print(how_was_day)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c5fe08c4-4d21-4917-a556-89648eb543c7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import os\n", |
||||
"from openai import OpenAI\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"import json\n", |
||||
"from IPython.display import Markdown, display, update_display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d6875d51-f33b-462e-85cb-a5d6a7cfb86e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"#Initialize environment and constants:\n", |
||||
"load_dotenv(override=True)\n", |
||||
"\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n", |
||||
" print(\"API key looks good so far\")\n", |
||||
"else:\n", |
||||
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n", |
||||
" \n", |
||||
"MODEL = 'gpt-4o-mini'\n", |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 1, |
||||
"id": "c12cf934-4bd4-4849-9e8f-5bb89eece996", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"### Step 2: From day spent and what went good, what went bad => LLM will extract feelings, emotions from those unspoken words :)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "237d14b3-571e-4598-a57b-d3ebeaf81afc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt_for_emotion_check_in = \"You are a career well-being assistant. Your task is to analyze the user's emotional state based on their text input.\"\\\n", |
||||
"\"Look for signs of stress, burnout, dissatisfaction, boredom, motivation, or any other emotional indicators related to work.\"\\\n", |
||||
"\"Based on the input, provide a summary of the user's feelings and categorize them under relevant emotional states (e.g., ‘Burnout,’ ‘Boredom,’ ‘Stress,’ ‘Satisfaction,’ etc.).\"\\\n", |
||||
"\"Your response should be empathetic and non-judgmental. Please summarize the feelings and emotions you detect, including those left unspoken.\\n\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a205a6d3-b0d7-4fcb-9eed-f3a86576cd9f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_feelings(how_was_day):\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages = [\n", |
||||
" {'role':'system','content': system_prompt_for_emotion_check_in},\n", |
||||
" {'role':'user', 'content': how_was_day}\n", |
||||
" ]\n", |
||||
" )\n", |
||||
" result = response.choices[0].message.content\n", |
||||
" return result" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "45e152c8-37c4-4818-a8a0-49f1ea3c1b65", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"## The LLM will infer the feelings you have based on \"the day you had today\".\n", |
||||
"print(get_feelings(how_was_day))\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4a62a385-4c51-42b1-ad73-73949e740e66", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"### Step 3: From those feelings, emotions ==> Get suggestions from LLM." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d856ca4f-ade9-4e6f-b540-2d07a70867c7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"## Let's construct the system prompt for the LLM to get suggestions (from the feelings above).\n", |
||||
"\n", |
||||
"system_prompt_for_suggestion = \"You are a career well-being assistant. Provide a list of practical, actionable suggestions to help the user improve their emotional state.\"\n", |
||||
"\n", |
||||
"system_prompt_for_suggestion+=\"The suggestions should be personalized based on their current feelings, and they should be simple, effective actions the user can take immediately.\"\\\n", |
||||
"\"Include activities, tasks, habits, or approaches that will either alleviate stress, boost motivation, or help them reconnect with their work in a positive way.\"\\\n", |
||||
"\"Be empathetic, non-judgmental, and encouraging in your tone.\\n\"\n", |
||||
"system_prompt_for_suggestion += \"Request you to respond in JSON format. Below is example:\\n\"\n", |
||||
"system_prompt_for_suggestion += '''\n", |
||||
"{\n", |
||||
" \"suggestions\": [\n", |
||||
" {\n", |
||||
" \"action\": \"Take a short break\",\n", |
||||
" \"description\": \"Step away from your workspace for 5-10 minutes. Use this time to take deep breaths, stretch, or grab a drink. This mini-break can help clear your mind and reduce feelings of overwhelm.\"\n", |
||||
" },\n", |
||||
" {\n", |
||||
" \"action\": \"Write a quick journal entry\",\n", |
||||
" \"description\": \"Spend 5-10 minutes writing down your thoughts and feelings. Specify what's distracting you and what you appreciate about your personal life. This can help you process emotions and refocus on tasks.\"\n", |
||||
" },\n", |
||||
" {\n", |
||||
" \"action\": \"Set a small task goal\",\n", |
||||
" \"description\": \"Choose one manageable task to complete today. Break it down into smaller steps to make it less daunting. Completing even a small task can give you a sense of achievement and boost motivation.\"\n", |
||||
" }\n", |
||||
" ]\n", |
||||
"}\n", |
||||
"'''\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e9eee380-7fa5-4d21-9357-f4fc34d3368d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"## Let's build the user prompt that asks the LLM for suggestions based on the feelings above.\n", |
||||
"## Note: while building user_prompt, we make another LLM call (via get_feelings()) to analyze the feelings from the day described.\n", |
||||
"## The first step is to extract feelings from the day spent; then we move on to offering suggestions to ease any discomfort.\n", |
||||
"\n", |
||||
"def get_user_prompt_for_suggestion(how_was_day):\n", |
||||
" user_prompt_for_suggestion = \"You are a career well-being assistant. Below is an analysis of the user's emotions for the day they described; the user may be feeling burnt out, bored, uninspired, or stressed, or sometimes the opposite \"\\\n", |
||||
" \"of these feelings.\"\n", |
||||
" user_prompt_for_suggestion += f\"{get_feelings(how_was_day)}\"\n", |
||||
" return user_prompt_for_suggestion\n", |
||||
" " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3576e451-b29c-44e1-bcdb-addc8d61afa7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_user_prompt_for_suggestion(how_was_day))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4a41ee40-1f49-4474-809f-a0d5e44e4aa4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_suggestions(how_was_day):\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages = [\n", |
||||
" {'role': 'system', 'content':system_prompt_for_suggestion},\n", |
||||
" {'role': 'user', 'content': get_user_prompt_for_suggestion(how_was_day)}\n", |
||||
" ],\n", |
||||
" response_format={\"type\": \"json_object\"}\n", |
||||
" )\n", |
||||
" result = response.choices[0].message.content\n", |
||||
" return json.loads(result)\n", |
||||
" #display(Markdown(result))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "33e3a14e-0e2c-43cb-b50b-d6df52b4d300", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"suggestions = get_suggestions(how_was_day)\n", |
||||
"print(suggestions)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "31c75e04-2800-4ba2-845b-bc38f8965622", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"### Step 4: From those suggestions ==> Enhance them with the support needed to follow through, such as a personal action plan." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d07f9d3f-5acf-4a86-9160-4c6de8df4eb0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt_for_enhanced_suggestions = \"You are a helpful assistant that enhances actionable suggestions for users. For each suggestion provided, enhance it by adding:\\n\"\\\n", |
||||
"\"1. A step-by-step guide for implementation.\"\\\n", |
||||
"\"2. Tools, resources, or apps that can help.\"\\\n", |
||||
"\"3. Examples or additional context to make the suggestion practical.\"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6ab449f1-7a6c-4982-99e0-83d99c45ad2d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_user_prompt_for_enhanced_suggestions(suggestions):\n", |
||||
" prompt = \"You are able to check below suggestions and can enhance to help end user. Below is the list of suggestions.\\n\"\n", |
||||
" prompt += f\"{suggestions}\"\n", |
||||
" return prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d5187b7a-d8cd-4377-b011-7805bd50443d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def enhance_suggestions(suggestions):\n", |
||||
" stream = openai.chat.completions.create(\n", |
||||
" model = MODEL,\n", |
||||
" messages=[\n", |
||||
" {'role':'system', 'content':system_prompt_for_enhanced_suggestions},\n", |
||||
" {'role':'user', 'content':get_user_prompt_for_enhanced_suggestions(suggestions)}\n", |
||||
" ],\n", |
||||
" stream = True\n", |
||||
" )\n", |
||||
" \n", |
||||
" #result = response.choices[0].message.content\n", |
||||
" #for chunk in stream:\n", |
||||
" # print(chunk.choices[0].delta.content or '', end='')\n", |
||||
"\n", |
||||
" response = \"\"\n", |
||||
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||
" for chunk in stream:\n", |
||||
" response += chunk.choices[0].delta.content or ''\n", |
||||
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n", |
||||
" update_display(Markdown(response), display_id=display_handle.display_id)\n", |
||||
" \n", |
||||
" #display(Markdown(result))\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "429cd6f8-3215-4140-9a6d-82d14a9b9798", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"detailed = input(\"\\nWould you like a DETAILED PLAN for implementing this suggestion?(Yes/ No)\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5efda045-5bde-4c51-bec6-95b5914102dd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"if detailed.lower() == 'yes':\n", |
||||
" enhance_suggestions(suggestions)\n", |
||||
"else:\n", |
||||
" print(suggestions)\n", |
||||
" " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1969b2ec-c850-4dfc-b790-8ae8e3fa36e9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,279 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "603cd418-504a-4b4d-b1c3-be04febf3e79", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Article Title Generator\n", |
||||
"\n", |
||||
"Summarization use-case in which the user provides an article, which the LLM will analyze to suggest an SEO-optimized title.\n", |
||||
"\n", |
||||
"**NOTES**:\n", |
||||
"\n", |
||||
"1. This version does NOT support website scraping. You must copy and paste the required article.\n", |
||||
"2. The following models were configured:\n", |
||||
" a. OpenAI gpt-4o-mini\n", |
||||
" b. Llama llama3.2\n", |
||||
" c. Deepseek deepseek-r1:1.5b\n", |
||||
" It is possible to configure additional models by adding the new model to the MODELS dictionary and its\n", |
||||
" initialization to the CLIENTS dictionary. Then, call the model with --> ***answer =\n", |
||||
" get_answer('NEW_MODEL')***.\n", |
||||
"3. Users are encouraged to assess and rank the suggested titles using any headline analyzer tool online.\n", |
||||
" Example: https://www.isitwp.com/headline-analyzer/. " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Confirming Llama is loaded\n", |
||||
"!ollama pull llama3.2" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "279b0c00-9bb0-4c7f-9c6d-aa0b108274b9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"import os\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d4730d8d-3e20-4f3c-a4ff-ed2ac0a8aa27", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# set environment variables for OpenAi\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# validate API Key\n", |
||||
"if not api_key:\n", |
||||
" raise ValueError(\"No API key was found! Please check the .env file.\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1abbb826-de66-498c-94d8-33369ad01885", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# constants\n", |
||||
"MODELS = { 'GPT': 'gpt-4o-mini', \n", |
||||
" 'LLAMA': 'llama3.2', \n", |
||||
" 'DEEPSEEK': 'deepseek-r1:1.5b'\n", |
||||
" }\n", |
||||
"\n", |
||||
"CLIENTS = { 'GPT': OpenAI(), \n", |
||||
" 'LLAMA': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama'),\n", |
||||
" 'DEEPSEEK': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama') \n", |
||||
" }" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6f490fe4-32d5-41f3-890d-ecf4e5e01dd4", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Copy & paste your article (without a title)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ddd76319-13ce-480b-baa7-cab6a5c88168", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# article - copy & paste your article\n", |
||||
"article = \"\"\"\n", |
||||
" REPLACE WITH YOUR ARTICLE CONTENT\n", |
||||
" \"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1914afad-dbd8-4c1f-8e68-80b0e5d743a9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# system prompt\n", |
||||
"system_prompt = \"\"\"\n", |
||||
" You are an experienced SEO-focused copywriter. The user will provide an article, and your task is to analyze its content and generate the most effective, keyword-optimized title to maximize SEO performance. Respond in Markdown format.\n", |
||||
" \"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "176cfac7-5e6d-4d4a-a1c4-1b63b60de1f7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# user prompt\n", |
||||
"user_prompt = f\"Following is the article to be analyzed. Respond in Markdown format.\\n\\n{article}\"\n", |
||||
" " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c45fc7d7-08c9-4e34-b427-b928a219bb94", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# message list\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f67b881f-1040-4cf7-82c5-e85f4c0bd252", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# call model and get answer\n", |
||||
"def get_answer(model):\n", |
||||
" # set required client\n", |
||||
" client = CLIENTS[model]\n", |
||||
"\n", |
||||
" # call model\n", |
||||
" response = client.chat.completions.create(\n", |
||||
" model=MODELS[model],\n", |
||||
" messages=messages\n", |
||||
" )\n", |
||||
" \n", |
||||
" # return answer\n", |
||||
" return response.choices[0].message.content\n", |
||||
" " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "947b42ed-5b43-486d-8af3-e5b671c1fd0e", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Get OpenAI Suggested Title" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "eb6f66e3-ab99-4f76-9358-896cb43c1fa1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get openAi answer\n", |
||||
"answer = get_answer('GPT')\n", |
||||
"\n", |
||||
"# display openAi answer\n", |
||||
"display(Markdown(f\"### {MODELS['GPT']} Answer\\n\\n{answer}\" ))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "70073ebf-a00a-416b-854d-642d450cd99b", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Get Llama Suggested Title" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "caa190bb-de5f-45cc-b671-5d62688f7b25", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get Llama answer\n", |
||||
"answer = get_answer('LLAMA')\n", |
||||
"\n", |
||||
"# display Llama answer\n", |
||||
"display(Markdown(f\"### {MODELS['LLAMA']} Answer\\n\\n{answer}\" ))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "811edc4f-20e2-482d-ac89-fae9d1b70bed", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Get Deepseek Suggested Title" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "082628e4-ff4c-46dd-ae5f-76578eb017ad", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get Deepseek answer\n", |
||||
"answer = get_answer('DEEPSEEK')\n", |
||||
"\n", |
||||
"# display Deepseek answer\n", |
||||
"display(Markdown(f\"### {MODELS['DEEPSEEK']} Answer\\n\\n{answer}\" ))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "7fc404a6-3a91-4c09-89de-867d3d69b4b2", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Suggested future improvements\n", |
||||
"\n", |
||||
"1. Add website scraping support to replace copy/pasting of articles.\n", |
||||
"2. Improve the system_prompt to provide specific SEO best practices to adopt during the title generation.\n", |
||||
"3. Rephrase the system_prompt to ensure the model provides a single Title (not a list of suggestions). \n", |
||||
"4. Add the logic that would allow each model to assess the recommendations from the different models and \n", |
||||
" select the best among these. " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cf7403ac-d43b-4493-98bb-6fee94950cb0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,472 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "603cd418-504a-4b4d-b1c3-be04febf3e79", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Article Title Generator (V2)\n", |
||||
"\n", |
||||
"Summarization use-case in which the user provides an article, which the LLM will analyze to suggest an SEO-optimized title.\n", |
||||
"\n", |
||||
"**NOTES**:\n", |
||||
"\n", |
||||
"1. This version supports website scraping using Selenium (based on the code from **/week1/community-\n", |
||||
" contributions/day1-webscraping-selenium-for-javascript.ipynb** - Thanks for the contribution!)\n", |
||||
"2. Leverages streaming (OpenAI only).\n", |
||||
"3. The following models were configured:\\\n", |
||||
" \n", |
||||
" a. OpenAI gpt-4o-mini\\\n", |
||||
" b. Llama llama3.2\\\n", |
||||
" c. Deepseek deepseek-r1:1.5b\\\n", |
||||
"\n", |
||||
" It is possible to configure additional models by adding the new model to the MODELS dictionary and its\n", |
||||
" initialization to the CLIENTS dictionary. Then, call the model with --> ***answer =\n", |
||||
" get_answer('NEW_MODEL')***.\n", |
||||
"4. Improved the system_prompt to provide specific SEO best practices to adopt during title generation.\n", |
||||
"5. Rephrased the system_prompt to ensure the model provides a single title (not a list of suggestions).\n", |
||||
"6. Includes a function to remove unneeded thinking/reasoning output from the model response (Deepseek). \n", |
||||
"7. Users are encouraged to assess and rank the suggested titles using any headline analyzer tool online.\n", |
||||
"   Example: https://www.isitwp.com/headline-analyzer/. " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "115004a8-747a-4954-9580-1ed548f80336", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# install required libraries if they were not part of the requirements.txt\n", |
||||
"!pip install selenium\n", |
||||
"!pip install undetected-chromedriver" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# confirming Llama is loaded\n", |
||||
"!ollama pull llama3.2" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "279b0c00-9bb0-4c7f-9c6d-aa0b108274b9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"import os\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from IPython.display import Markdown, display, update_display\n", |
||||
"from openai import OpenAI\n", |
||||
"import undetected_chromedriver as uc\n", |
||||
"from selenium.webdriver.common.by import By\n", |
||||
"from selenium.webdriver.support.ui import WebDriverWait\n", |
||||
"from selenium.webdriver.support import expected_conditions as EC\n", |
||||
"import time\n", |
||||
"from bs4 import BeautifulSoup" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d4730d8d-3e20-4f3c-a4ff-ed2ac0a8aa27", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# set environment variables for OpenAi\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# validate API Key\n", |
||||
"if not api_key:\n", |
||||
" raise ValueError(\"No API key was found! Please check the .env file.\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1abbb826-de66-498c-94d8-33369ad01885", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# constants\n", |
||||
"MODELS = { 'GPT': 'gpt-4o-mini', \n", |
||||
" 'LLAMA': 'llama3.2', \n", |
||||
" 'DEEPSEEK': 'deepseek-r1:1.5b'\n", |
||||
" }\n", |
||||
"\n", |
||||
"CLIENTS = { 'GPT': OpenAI(), \n", |
||||
" 'LLAMA': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama'),\n", |
||||
" 'DEEPSEEK': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama') \n", |
||||
" }\n", |
||||
"\n", |
||||
"# path to Chrome\n", |
||||
"CHROME_PATH = \"C:/Program Files/Google/Chrome/Application/chrome.exe\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6f490fe4-32d5-41f3-890d-ecf4e5e01dd4", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"**Webcrawler** (based on the code from __/week1/community-contributions/day1-webscraping-selenium-for-javascript.ipynb__)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c2a1cf7a-044f-4a9c-b76e-8f112d384550", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class WebsiteCrawler:\n", |
||||
" def __init__(self, url, wait_time=20, chrome_path=None):\n", |
||||
" \"\"\"\n", |
||||
" Initialize the WebsiteCrawler using Selenium to scrape JavaScript-rendered content.\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" self.wait_time = wait_time\n", |
||||
"\n", |
||||
" options = uc.ChromeOptions()\n", |
||||
" options.add_argument(\"--disable-gpu\")\n", |
||||
" options.add_argument(\"--no-sandbox\")\n", |
||||
" options.add_argument(\"--disable-dev-shm-usage\")\n", |
||||
" options.add_argument(\"--disable-blink-features=AutomationControlled\")\n", |
||||
" # options.add_argument(\"--headless=new\") # For Chrome >= 109 - unreliable on my end!\n", |
||||
" options.add_argument(\"start-maximized\")\n", |
||||
" options.add_argument(\n", |
||||
" \"user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
" )\n", |
||||
" if chrome_path:\n", |
||||
" options.binary_location = chrome_path\n", |
||||
"\n", |
||||
" self.driver = uc.Chrome(options=options)\n", |
||||
"\n", |
||||
" try:\n", |
||||
" # Load the URL\n", |
||||
" self.driver.get(url)\n", |
||||
"\n", |
||||
" # Wait for Cloudflare or similar checks\n", |
||||
" time.sleep(10)\n", |
||||
"\n", |
||||
" # Ensure the main content is loaded\n", |
||||
" WebDriverWait(self.driver, self.wait_time).until(\n", |
||||
" EC.presence_of_element_located((By.TAG_NAME, \"main\"))\n", |
||||
" )\n", |
||||
"\n", |
||||
" # Extract the main content\n", |
||||
" main_content = self.driver.find_element(By.CSS_SELECTOR, \"main\").get_attribute(\"outerHTML\")\n", |
||||
"\n", |
||||
" # Parse with BeautifulSoup\n", |
||||
" soup = BeautifulSoup(main_content, \"html.parser\")\n", |
||||
" self.title = self.driver.title if self.driver.title else \"No title found\"\n", |
||||
" self.text = soup.get_text(separator=\"\\n\", strip=True)\n", |
||||
"\n", |
||||
" except Exception as e:\n", |
||||
" print(f\"Error occurred: {e}\")\n", |
||||
" self.title = \"Error occurred\"\n", |
||||
" self.text = \"\"\n", |
||||
"\n", |
||||
" finally:\n", |
||||
" self.driver.quit()\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "592d8f86-fbf7-4b16-a69d-468030d72dc4", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Prompts" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1914afad-dbd8-4c1f-8e68-80b0e5d743a9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# system prompt\n", |
||||
"system_prompt = \"\"\"\n", |
||||
" You are an experienced SEO-focused copywriter. The user will provide an article, and your task is to analyze its content and generate a single, most effective, keyword-optimized title to maximize SEO performance.\n", |
||||
"\n", |
||||
"Instructions:\n", |
||||
"Ignore irrelevant content, such as the current title (if any), navigation menus, advertisements, or unrelated text.\n", |
||||
"Prioritize SEO best practices, considering:\n", |
||||
"Keyword relevance and search intent (informational, transactional, etc.).\n", |
||||
"Readability and engagement.\n", |
||||
"Avoiding keyword stuffing.\n", |
||||
"Ensure conciseness and clarity, keeping the title under 60 characters when possible for optimal SERP display.\n", |
||||
"Use a compelling structure that balances informativeness and engagement, leveraging formats like:\n", |
||||
"Listicles (\"10 Best Strategies for…\")\n", |
||||
"How-to guides (\"How to Boost…\")\n", |
||||
"Questions (\"What Is the Best Way to…\")\n", |
||||
"Power words to enhance click-through rates (e.g., \"Proven,\" \"Ultimate,\" \"Essential\").\n", |
||||
"Provide only one single, best title—do not suggest multiple options.\n", |
||||
"Limit the answer to the following Response Format (Markdown):\n", |
||||
"Optimized Title: [Provide only one title here]\n", |
||||
"Justification: [Explain why this title is effective for SEO]\n", |
||||
"\n", |
||||
" \"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b0486867-6d38-4cb5-91d4-fb60952c3a9b", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"**Provide the article URL and get its content for analysis**" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ddd76319-13ce-480b-baa7-cab6a5c88168", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# article url - change to any other article URL\n", |
||||
"article_url = \"https://searchengineland.com/seo-trends-2025-447745\"\n", |
||||
"\n", |
||||
"# get article content\n", |
||||
"article = WebsiteCrawler(url=article_url, chrome_path=CHROME_PATH)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "176cfac7-5e6d-4d4a-a1c4-1b63b60de1f7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# user prompt\n", |
||||
"user_prompt = \"\"\"\n", |
||||
"Following is the article to be analyzed; suggest a title. Limit the answer to the following Response Format (Markdown): \n", |
||||
"Optimized Title: [Provide only one title here]\n", |
||||
"Justification: [Explain why this title is effective for SEO].\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"user_prompt = f\"{user_prompt} {article.text}\"\n", |
||||
" " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c45fc7d7-08c9-4e34-b427-b928a219bb94", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# message list\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f67b881f-1040-4cf7-82c5-e85f4c0bd252", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get suggested title\n", |
||||
"def get_title(model, **kwargs):\n", |
||||
" # stream if requested\n", |
||||
" if 'stream' in kwargs:\n", |
||||
" response = CLIENTS[model].chat.completions.create(\n", |
||||
" model=MODELS[model],\n", |
||||
" messages=messages,\n", |
||||
" stream=kwargs['stream']\n", |
||||
" )\n", |
||||
" else:\n", |
||||
" response = CLIENTS[model].chat.completions.create(\n", |
||||
" model=MODELS[model],\n", |
||||
" messages=messages,\n", |
||||
" )\n", |
||||
"\n", |
||||
" return response\n", |
||||
" " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8988d6ff-076a-4eae-baf4-26a8d6a2bc44", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# filter verbose model output - e.g. Deepseek reasoning/thinking text\n", |
||||
"def filter_response(response):\n", |
||||
" # Find last occurrence of 'Optimized Title:' to avoid displaying reasoning verbose\n", |
||||
" substring = 'Optimized Title:'\n", |
||||
" start = response.rfind(substring)\n", |
||||
" if start == -1:\n", |
||||
" # marker not found - return the response unchanged\n", |
||||
" return response\n", |
||||
"\n", |
||||
" filtered_response = response[start:]\n", |
||||
"\n", |
||||
" # insert line break to preserve format\n", |
||||
" filtered_response = filtered_response.replace(\"**Justification:**\", \"\\n**Justification:**\")\n", |
||||
"\n", |
||||
" return filtered_response" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0e9e99cf-5e25-4a1f-ab11-a2255e318671", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# display suggested title\n", |
||||
"def display_title(model):\n", |
||||
" # render header; the title itself is fetched below (avoids a redundant extra model call)\n", |
||||
" display(Markdown(f\"### {model} (___{MODELS[model]}___) Answer\\n\\n_______\")) \n", |
||||
"\n", |
||||
" response = \"\"\n", |
||||
"\n", |
||||
" if model == 'GPT':\n", |
||||
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||
" # for chunk in stream:\n", |
||||
" for chunk in get_title(model=model, stream=True):\n", |
||||
" response += chunk.choices[0].delta.content or ''\n", |
||||
" response = (\n", |
||||
" response.replace(\"```\",\"\")\n", |
||||
" .replace(\"markdown\", \"\")\n", |
||||
" .replace(\"Optimized Title:\", \"**Optimized Title:**\")\n", |
||||
" .replace(\"Justification:\", \"**Justification:**\")\n", |
||||
" )\n", |
||||
" update_display(Markdown(response), display_id=display_handle.display_id)\n", |
||||
" else:\n", |
||||
" response = get_title(model=model)\n", |
||||
" response = response.choices[0].message.content\n", |
||||
" response = filter_response(response)\n", |
||||
" response = (\n", |
||||
" response.replace(\"Optimized Title:\", \"**Optimized Title:**\")\n", |
||||
" .replace(\"Justification:\", \"**Justification:**\")\n", |
||||
" )\n", |
||||
" display(Markdown(response))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "947b42ed-5b43-486d-8af3-e5b671c1fd0e", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Get OpenAI Suggested Title" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "eb6f66e3-ab99-4f76-9358-896cb43c1fa1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get and display openAi suggested title\n", |
||||
"display_title(model='GPT')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "70073ebf-a00a-416b-854d-642d450cd99b", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Get Llama Suggested Title" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "caa190bb-de5f-45cc-b671-5d62688f7b25", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get and display Llama suggested title\n", |
||||
"display_title(model='LLAMA')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "811edc4f-20e2-482d-ac89-fae9d1b70bed", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Get Deepseek Suggested Title" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "082628e4-ff4c-46dd-ae5f-76578eb017ad", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get and display Deepseek title\n", |
||||
"display_title(model='DEEPSEEK')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "7fc404a6-3a91-4c09-89de-867d3d69b4b2", |
||||
"metadata": { |
||||
"jp-MarkdownHeadingCollapsed": true |
||||
}, |
||||
"source": [ |
||||
"### Observations\n", |
||||
"\n", |
||||
"1. **Selenium:** The headless option (__options.add_argument(\"--headless=new\")__), while ideal to speed up scraping, caused problems on several websites (including openai.com and canva.com).\n", |
||||
"2. **Deepseek challenges:**\\\n", |
||||
" a. It always returns its thinking/reasoning output, which, while helpful for understanding how it works, is not always\n", |
||||
" required, as in this example code. A new function (**filter_response**) was created to remove the extra verbosity.\\\n", |
||||
" b. It is unreliable with its responses, sometimes returning the required response format instead of the\n", |
||||
" actual response. For example, for the title, it may sometimes return:\n", |
||||
" \n", |
||||
" **Optimized Title:** \\[The user wants the suggested title here]\n", |
||||
" \n", |
||||
"### Suggested future improvements\n", |
||||
"\n", |
||||
"1. Add the logic that would allow each model to assess the recommendations from the different models and \n", |
||||
" select the best among these.\n", |
||||
"2. Add the logic to leverage an API (if available) that automatically assesses the suggested titles." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1af8260b-5ba1-4eeb-acd0-02de537b1bf4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,532 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "603cd418-504a-4b4d-b1c3-be04febf3e79", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Article Title Generator (V3 - using Firecrawl) \n", |
||||
"\n", |
||||
"Summarization use-case in which the user provides an article, which the LLM will analyze to suggest an SEO-optimized title.\n", |
||||
"\n", |
||||
"**NOTES**:\n", |
||||
"\n", |
||||
"1. This version supports website scraping using [Firecrawl](https://www.firecrawl.dev/).<br>\n", |
||||
" 1. **Note:** There is a Free tier that provides 500 one-time credits (good for scraping 500 pages).\n", |
||||
" 2. Upon registration, get and add your Firecrawl API Key to the .env file as: **`FIRECRAWL_API_KEY`**.<br><br>\n", |
||||
"2. Leverages streaming (OpenAI only).<br>\n", |
||||
"3. The following models were configured:<br>\n", |
||||
" 1. OpenAI gpt-4o-mini\n", |
||||
" 2. Llama llama3.2\n", |
||||
" 3. Deepseek deepseek-r1:1.5b\n", |
||||
" 4. Firecrawl LLM Extract feature<br><br>\n", |
||||
" \n", |
||||
" It is possible to configure additional models by adding the new model to the MODELS dictionary and its\n", |
||||
" initialization to the CLIENTS dictionary. Then, call the model with --> **`answer =\n", |
||||
" get_answer('NEW_MODEL')`**.<br>\n", |
||||
"4. Users are encouraged to assess and rank the suggested titles using any headline analyzer tool online.\n", |
||||
" Example: [ISITWP Headline Analyzer](https://www.isitwp.com/headline-analyzer/). " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "115004a8-747a-4954-9580-1ed548f80336", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# install required libraries if they were not part of the requirements.txt\n", |
||||
"!pip install firecrawl-py" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# confirming Llama is loaded\n", |
||||
"!ollama pull llama3.2" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "279b0c00-9bb0-4c7f-9c6d-aa0b108274b9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"import os\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from IPython.display import Markdown, display, update_display\n", |
||||
"from openai import OpenAI\n", |
||||
"from firecrawl import FirecrawlApp" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d4730d8d-3e20-4f3c-a4ff-ed2ac0a8aa27", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# set environment variables for OpenAi\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# validate API Key\n", |
||||
"if not api_key:\n", |
||||
" raise ValueError(\"No OPENAI API Key was found! Please check the .env file.\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b2a78101-d866-400f-a482-1d8fda8e0df9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# set environment variable for Firecrawl\n", |
||||
"firecrawl_api_key = os.getenv('FIRECRAWL_API_KEY')\n", |
||||
"\n", |
||||
"# validate API Key\n", |
||||
"if not firecrawl_api_key:\n", |
||||
" raise ValueError(\"No FIRECRAWL API Key was found! Please check the .env file.\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1abbb826-de66-498c-94d8-33369ad01885", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# constants\n", |
||||
"MODELS = { 'GPT': 'gpt-4o-mini', \n", |
||||
" 'LLAMA': 'llama3.2', \n", |
||||
" 'DEEPSEEK': 'deepseek-r1:1.5b'\n", |
||||
" }\n", |
||||
"\n", |
||||
"CLIENTS = { 'GPT': OpenAI(), \n", |
||||
" 'LLAMA': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama'),\n", |
||||
" 'DEEPSEEK': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama') \n", |
||||
" }\n", |
||||
"\n", |
||||
"# path to Chrome\n", |
||||
"# CHROME_PATH = \"C:/Program Files/Google/Chrome/Application/chrome.exe\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6f490fe4-32d5-41f3-890d-ecf4e5e01dd4", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"**Webcrawler** (based on the code from Firecrawl [documentation](https://docs.firecrawl.dev/introduction))." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2852700e-33ed-4be5-bd31-8aa05036aaf2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class WebsiteCrawler:\n", |
||||
" def __init__(self, url, wait_time=20, format='markdown'):\n", |
||||
" \"\"\"\n", |
||||
" Initialize the WebsiteCrawler using Firecrawl to scrape JavaScript-rendered content.\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" self.wait_time = wait_time\n", |
||||
" self.format = format\n", |
||||
"\n", |
||||
" try:\n", |
||||
"\n", |
||||
" # initialize Firecrawl\n", |
||||
" app = FirecrawlApp(api_key=firecrawl_api_key)\n", |
||||
"\n", |
||||
" # Scrape a website:\n", |
||||
" scrape_result = app.scrape_url(self.url,\n", |
||||
" params=self.getParams())\n", |
||||
" \n", |
||||
"\n", |
||||
" # parse data\n", |
||||
" self.title = scrape_result['metadata']['ogTitle']\n", |
||||
"\n", |
||||
" # get the content using the appropriate key\n", |
||||
" if format == 'markdown':\n", |
||||
" # OpenAI, Llama, Deepseek\n", |
||||
" self.text = scrape_result['markdown'] \n", |
||||
" elif format == 'json':\n", |
||||
" # Firecrawl LLM Extract\n", |
||||
" self.text = scrape_result['json']\n", |
||||
"\n", |
||||
" except Exception as e:\n", |
||||
" print(f\"Error occurred: {e}\")\n", |
||||
" self.title = \"Error occurred\"\n", |
||||
" self.text = \"\"\n", |
||||
"\n", |
||||
" # set appropriate parameters for scraping\n", |
||||
" def getParams(self):\n", |
||||
"\n", |
||||
" # For OpenAI, Llama or Deepseek\n", |
||||
" params={'formats': [self.format], \n", |
||||
" 'actions': [{\"type\": \"wait\", \"milliseconds\": self.wait_time}], \n", |
||||
" 'includeTags': [\"main\"], }\n", |
||||
"\n", |
||||
" # For Firecrawl LLM extract\n", |
||||
" if self.format == 'json':\n", |
||||
" params={'formats': [self.format], \n", |
||||
" 'actions': [{\"type\": \"wait\", \"milliseconds\": self.wait_time}], \n", |
||||
" 'jsonOptions': {'systemPrompt': system_prompt, 'prompt': user_prompt, }}\n", |
||||
" \n", |
||||
" return params\n", |
||||
"\n", |
||||
" # Get Firecrawl LLM extract result\n", |
||||
" def getResult(self):\n", |
||||
"\n", |
||||
" formatted_result = f\"\"\"\n", |
||||
" **Optimized Title:** {self.text['Optimized Title']} \n", |
||||
" <br><br>**Justification:** {self.text['Justification']}\n", |
||||
" \"\"\"\n", |
||||
"\n", |
||||
" # Remove leading and trailing spaces \n", |
||||
" return formatted_result.strip()\n", |
||||
" " |
||||
] |
||||
}, |
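As a standalone illustration of the parameter-building logic in `getParams` above, the same branching can be written as a pure function (a sketch with placeholder prompt strings, not part of the class):

```python
# Sketch of the getParams logic above as a pure function (placeholder prompts).
def build_params(fmt, wait_ms, system_prompt="", user_prompt=""):
    # default: plain scrape of the <main> tag in the requested format
    params = {
        'formats': [fmt],
        'actions': [{"type": "wait", "milliseconds": wait_ms}],
        'includeTags': ["main"],
    }
    # JSON mode: Firecrawl's LLM Extract takes the prompts instead
    if fmt == 'json':
        params = {
            'formats': [fmt],
            'actions': [{"type": "wait", "milliseconds": wait_ms}],
            'jsonOptions': {'systemPrompt': system_prompt, 'prompt': user_prompt},
        }
    return params

print(build_params('markdown', 20)['includeTags'])  # → ['main']
```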
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "592d8f86-fbf7-4b16-a69d-468030d72dc4", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Prompts" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1914afad-dbd8-4c1f-8e68-80b0e5d743a9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# system prompt\n", |
||||
"system_prompt = \"\"\"\n", |
||||
" You are an experienced SEO-focused copywriter. The user will provide an article, and your task is to analyze its content and generate a single, most effective, keyword-optimized title to maximize SEO performance.\n", |
||||
"\n", |
||||
"Instructions:\n", |
||||
"Ignore irrelevant content, such as the current title (if any), navigation menus, advertisements, or unrelated text.\n", |
||||
"Prioritize SEO best practices, considering:\n", |
||||
"Keyword relevance and search intent (informational, transactional, etc.).\n", |
||||
"Readability and engagement.\n", |
||||
"Avoiding keyword stuffing.\n", |
||||
"Ensure conciseness and clarity, keeping the title under 60 characters when possible for optimal SERP display.\n", |
||||
"Use a compelling structure that balances informativeness and engagement, leveraging formats like:\n", |
||||
"Listicles (\"10 Best Strategies for…\")\n", |
||||
"How-to guides (\"How to Boost…\")\n", |
||||
"Questions (\"What Is the Best Way to…\")\n", |
||||
"Power words to enhance click-through rates (e.g., \"Proven,\" \"Ultimate,\" \"Essential\").\n", |
||||
"Provide only one single, best title—do not suggest multiple options.\n", |
||||
"Limit the answer to the following Response Format (Markdown):\n", |
||||
"Optimized Title: [Provide only one title here]\n", |
||||
"Justification: [Explain why this title is effective for SEO]\n", |
||||
"\n", |
||||
" \"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b0486867-6d38-4cb5-91d4-fb60952c3a9b", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"**Provide the article URL and get its content for analysis**" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ddd76319-13ce-480b-baa7-cab6a5c88168", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# article url - change to any other article URL\n", |
||||
"article_url = \"https://searchengineland.com/seo-trends-2025-447745\"\n", |
||||
"\n", |
||||
"# get article content\n", |
||||
"article = WebsiteCrawler(url=article_url)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "176cfac7-5e6d-4d4a-a1c4-1b63b60de1f7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# user prompt\n", |
||||
"user_prompt = \"\"\"\n", |
||||
"Below is the article to analyze. Suggest a title, limiting the answer to the following Response Format (Markdown): \n", |
||||
"Optimized Title: [Provide only one title here]\n", |
||||
"Justification: [Explain why this title is effective for SEO].\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# append the scraped article text (not the object repr) to the prompt\n", |
||||
"user_prompt = f\"{user_prompt} {article.text}\"\n", |
||||
" " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c45fc7d7-08c9-4e34-b427-b928a219bb94", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# message list\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f67b881f-1040-4cf7-82c5-e85f4c0bd252", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get suggested title\n", |
||||
"def get_title(model, **kwargs):\n", |
||||
" # stream the response when the stream kwarg is passed\n", |
||||
" if 'stream' in kwargs:\n", |
||||
" response = CLIENTS[model].chat.completions.create(\n", |
||||
" model=MODELS[model],\n", |
||||
" messages=messages,\n", |
||||
" stream=kwargs['stream']\n", |
||||
" )\n", |
||||
" else:\n", |
||||
" response = CLIENTS[model].chat.completions.create(\n", |
||||
" model=MODELS[model],\n", |
||||
" messages=messages,\n", |
||||
" )\n", |
||||
"\n", |
||||
" return response\n", |
||||
" " |
||||
] |
||||
}, |
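The two branches in `get_title` differ only in the `stream` flag, so an equivalent, tighter form can pass it straight through. A minimal standalone sketch, using a stub client in place of the real OpenAI clients (the names here shadow the notebook's own, so this is illustration only, not meant to be pasted in):

```python
# Stub client that simply echoes its arguments, standing in for OpenAI clients.
class StubClient:
    class chat:
        class completions:
            @staticmethod
            def create(**kwargs):
                return kwargs

CLIENTS = {'GPT': StubClient}
MODELS = {'GPT': 'gpt-4o-mini'}
messages = []

# Equivalent to get_title above: the stream flag is passed straight through.
def get_title(model, stream=False):
    return CLIENTS[model].chat.completions.create(
        model=MODELS[model], messages=messages, stream=stream)

print(get_title('GPT', stream=True)['stream'])  # → True
```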
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8988d6ff-076a-4eae-baf4-26a8d6a2bc44", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# filter response from model verbose - like Deepseek reasoning/thinking verbose\n", |
||||
"def filter_response(response):\n", |
||||
" filtered_response = response\n", |
||||
" # Find last occurrence of 'Optimized Title:' to avoid displaying reasoning verbose\n", |
||||
" substring = 'Optimized Title:'\n", |
||||
" start = response.rfind(substring)\n", |
||||
" if start > -1:\n", |
||||
" filtered_response = response[start:]\n", |
||||
"\n", |
||||
" # Find if the title has quotation (or other) marks and remove it - this should be improved\n", |
||||
" filtered_response = (\n", |
||||
" filtered_response.replace('\"', '', 2)\n", |
||||
" .replace('[', '', 1)\n", |
||||
" .replace(']', '', 1)\n", |
||||
" .replace('**', '', 2)\n", |
||||
" )\n", |
||||
" \n", |
||||
" return filtered_response" |
||||
] |
||||
}, |
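The filtering logic above can be checked in isolation against a made-up Deepseek-style response (the `<think>` wrapper and title are invented for illustration):

```python
# Same logic as filter_response above, exercised on a made-up response.
def filter_response(response):
    filtered = response
    # keep only the text from the last 'Optimized Title:' onward
    start = response.rfind('Optimized Title:')
    if start > -1:
        filtered = response[start:]
    # strip stray quotation/bracket/bold marks
    return (filtered.replace('"', '', 2)
                    .replace('[', '', 1)
                    .replace(']', '', 1)
                    .replace('**', '', 2))

raw = '<think>chain of thought...</think>\nOptimized Title: "AI Trends"\nJustification: concise.'
print(filter_response(raw))
```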
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0e9e99cf-5e25-4a1f-ab11-a2255e318671", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# display suggested title\n", |
||||
"def display_title(model):\n", |
||||
" \n", |
||||
" display(Markdown(f\"### {model} (___{MODELS[model]}___) Answer\\n\\n_______\")) \n", |
||||
"\n", |
||||
" response = \"\"\n", |
||||
"\n", |
||||
" if model == 'GPT':\n", |
||||
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||
" for chunk in get_title(model=model, stream=True):\n", |
||||
" response += chunk.choices[0].delta.content or ''\n", |
||||
" response = (\n", |
||||
" response.replace(\"```\",\"\")\n", |
||||
" .replace(\"markdown\", \"\")\n", |
||||
" .replace(\"Optimized Title:\", \"**Optimized Title:**\")\n", |
||||
" .replace(\"Justification:\", \"**Justification:**\")\n", |
||||
" )\n", |
||||
" update_display(Markdown(response), display_id=display_handle.display_id)\n", |
||||
" else:\n", |
||||
" response = get_title(model=model)\n", |
||||
" response = response.choices[0].message.content\n", |
||||
" response = filter_response(response)\n", |
||||
"\n", |
||||
" # insert line break to preserve format - only LLAMA\n", |
||||
" line_break = \"<br><br>\"\n", |
||||
" if model == \"DEEPSEEK\":\n", |
||||
" line_break = \"\"\n", |
||||
" \n", |
||||
" response = (\n", |
||||
" response.replace(\"Optimized Title:\", \"**Optimized Title:**\")\n", |
||||
" .replace(\"Justification:\", f\"{line_break}**Justification:**\") \n", |
||||
" )\n", |
||||
" display(Markdown(response))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "947b42ed-5b43-486d-8af3-e5b671c1fd0e", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Get OpenAI Suggested Title" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "eb6f66e3-ab99-4f76-9358-896cb43c1fa1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get and display openAi suggested title\n", |
||||
"display_title(model='GPT')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "70073ebf-a00a-416b-854d-642d450cd99b", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Get Llama Suggested Title" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "caa190bb-de5f-45cc-b671-5d62688f7b25", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get and display Llama suggested title\n", |
||||
"display_title(model='LLAMA')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "811edc4f-20e2-482d-ac89-fae9d1b70bed", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Get Deepseek Suggested Title" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "082628e4-ff4c-46dd-ae5f-76578eb017ad", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get and display Deepseek title\n", |
||||
"display_title(model='DEEPSEEK')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "f2d401ed-734d-4e96-be30-09b49d516f38", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Using Firecrawl LLM Extract (to replace LLMs above - OpenAI, Llama & Deepseek)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "3e6495a2-df0b-4a7b-a376-692456be633d", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Get Firecrawl Suggested Title" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c8763b0a-54ef-409f-8dd6-13231b6f7774", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"fc_title = WebsiteCrawler(url=article_url, format='json')\n", |
||||
"\n", |
||||
"display(Markdown(fc_title.getResult()))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "7fc404a6-3a91-4c09-89de-867d3d69b4b2", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Observations\n", |
||||
"\n", |
||||
"1. **Firecrawl** is a great alternative to both Selenium and BeautifulSoup. However, it is not free.\n", |
||||
"2. The **Firecrawl LLM Extract** feature can replace the calls to the other LLMs for analysis and title generation. Note that the result appears to be cached after its first generation, so the suggested title and its justification will always be the same. \n", |
||||
"3. **Deepseek challenges:**\\\n", |
||||
" a. It always returns its thinking/reasoning output, which, while helpful for understanding how it works, is not always\n", |
||||
" needed, as in this example code. A new function (**filter_response**) was created to strip this extra output.\\\n", |
||||
" b. Its responses are unreliable, sometimes returning the required response format instead of the\n", |
||||
" actual response. For example, for the title, it may sometimes return:\n", |
||||
" \n", |
||||
" **Optimized Title:** \\[The user wants the suggested title here]\n", |
||||
" \n", |
||||
"### Suggested future improvements\n", |
||||
"\n", |
||||
"1. Add the logic that would allow each model to assess the recommendations from the different models and \n", |
||||
" select the best among these.\n", |
||||
"2. Add the logic to leverage an API (if available) that automatically assesses the suggested titles." |
||||
] |
||||
}, |
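A possible starting point for future improvement 1 is a hypothetical helper that assembles a judging prompt from all models' suggestions; only the prompt assembly is sketched here, since the judging call itself would go through one of the clients above:

```python
# Hypothetical sketch: build a prompt asking one model to judge the titles
# suggested by all models (the actual judging call is not shown).
def build_judging_prompt(titles):
    lines = [f"{model}: {title}" for model, title in titles.items()]
    return ("Below are candidate SEO titles from different models. "
            "Pick the single best one and briefly justify the choice.\n"
            + "\n".join(lines))

prompt = build_judging_prompt({'GPT': 'Title A', 'LLAMA': 'Title B'})
print(prompt)
```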
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1af8260b-5ba1-4eeb-acd0-02de537b1bf4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"\n" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,580 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 8, |
||||
"id": "bdb801c9-e33a-4a41-bdb8-9cacb382535d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"from IPython.display import Markdown, display, update_display\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from openai import OpenAI\n", |
||||
"import ollama" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 9, |
||||
"id": "f5a8a43d-530e-4031-b42f-5b6bd09af34b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# constants\n", |
||||
"\n", |
||||
"MODEL_GPT = 'gpt-4o-mini'\n", |
||||
"MODEL_LLAMA = 'llama3.2'" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 10, |
||||
"id": "ddfffcbf-d6e3-4e63-85dc-02fb916cee88", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# set up environment\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"openai=OpenAI()\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 11, |
||||
"id": "048e5e7c-dd7a-469e-9ed5-0c6f75fb0193", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# here is the question; type over this to ask something new\n", |
||||
"\n", |
||||
"question = \"\"\"\n", |
||||
"Please explain what this code does and why:\n", |
||||
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", |
||||
"\"\"\"" |
||||
] |
||||
}, |
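The expression in the question can also be exercised directly with sample data before asking the models (a minimal sketch, independent of the tutor cells below):

```python
# Demo of the expression from the question above, with sample data:
# yields each unique, non-empty author exactly once.
books = [
    {"title": "B1", "author": "A"},
    {"title": "B2", "author": "B"},
    {"title": "B3", "author": "A"},
    {"title": "B4"},                # no author: filtered out
    {"title": "B5", "author": ""},  # empty author: filtered out
]

def unique_authors(books):
    yield from {book.get("author") for book in books if book.get("author")}

print(sorted(unique_authors(books)))  # → ['A', 'B']
```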
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 12, |
||||
"id": "22d989ab-d1e2-4b93-9893-87c40ccde3cf", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt=\"You are a helpful technical tutor who answers questions about python code, software engineering, data science and LLMs\"\n", |
||||
"user_prompt=\"Please give a detailed explanation to the following question: \" + question" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 13, |
||||
"id": "90a02948-86cb-4adc-9d88-977e7ed99c5b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# messages\n", |
||||
"\n", |
||||
"messages=[\n", |
||||
" {\"role\":\"system\",\"content\":system_prompt},\n", |
||||
" {\"role\":\"user\",\"content\":user_prompt}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 14, |
||||
"id": "6819c2cd-80e8-4cba-8472-b5a5729d2530", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"Certainly! Let's dissect the code snippet you provided:\n", |
||||
"\n", |
||||
"python\n", |
||||
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", |
||||
"\n", |
||||
"\n", |
||||
"### Breakdown of the Code:\n", |
||||
"\n", |
||||
"1. **Context of `yield from`:**\n", |
||||
" - The expression starts with `yield from`, which is a syntax used in Python's generator functions. A generator function is a special type of function that returns an iterator and allows you to iterate over a sequence of values lazily (one value at a time) instead of returning them all at once.\n", |
||||
" - `yield from` is specifically used to delegate part of the generator's operations to another iterator. When you use `yield from`, the values from the iterator on the right-hand side are yielded to the caller of the generator function.\n", |
||||
"\n", |
||||
"2. **Understanding the Set Comprehension:**\n", |
||||
" - `{book.get(\"author\") for book in books if book.get(\"author\")}` is a set comprehension.\n", |
||||
" - It iterates over each `book` in a collection called `books`. In this context, `books` is expected to be a list (or another iterable) of dictionaries, where each dictionary represents a book and contains various attributes (like \"title\", \"author\", etc.).\n", |
||||
" - Within the set comprehension, it calls `book.get(\"author\")`, which attempts to retrieve the value associated with the key \"author\" from each `book` dictionary.\n", |
||||
" - The `if book.get(\"author\")` condition ensures that only books with a non-falsy author (e.g., not `None` or an empty string) are included in the resulting set.\n", |
||||
" - The result of the comprehension is a set of unique author names (since sets inherently do not allow duplicates).\n", |
||||
"\n", |
||||
"### Summary of Functionality:\n", |
||||
"\n", |
||||
"- The entire line of code is a compact way to extract unique author names from a list of books and yield each unique author to the caller of the generator function. \n", |
||||
"- If there are multiple books with the same author, that author will only appear once in the output since sets do not allow duplicate entries.\n", |
||||
"\n", |
||||
"### Why Use This Code?\n", |
||||
"\n", |
||||
"1. **Unique Values**: By using a set comprehension, this code efficiently ensures that the output consists only of unique author names, which is often desirable when you're interested in knowing all distinct authors.\n", |
||||
" \n", |
||||
"2. **Lazy Evaluation**: By using `yield from`, the authors are yielded one by one as the caller consumes them. This can be more memory efficient compared to creating a list and returning it all at once, especially if the dataset (`books`) is large.\n", |
||||
"\n", |
||||
"3. **Readable and Concise**: The use of comprehensions makes the code compact and, with a bit of familiarity, easy to read. It expresses the intention to filter and collect authors succinctly.\n", |
||||
"\n", |
||||
"### Example:\n", |
||||
"\n", |
||||
"Here's a simple example to illustrate how this might work in practice:\n", |
||||
"\n", |
||||
"python\n", |
||||
"books = [\n", |
||||
" {\"title\": \"Book 1\", \"author\": \"Author A\"},\n", |
||||
" {\"title\": \"Book 2\", \"author\": \"Author B\"},\n", |
||||
" {\"title\": \"Book 3\", \"author\": \"Author A\"},\n", |
||||
" {\"title\": \"Book 4\", \"author\": None},\n", |
||||
" {\"title\": \"Book 5\", \"author\": \"Author C\"},\n", |
||||
" {\"title\": \"Book 6\", \"author\": \"\"}\n", |
||||
"]\n", |
||||
"\n", |
||||
"def unique_authors(books):\n", |
||||
" yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", |
||||
"\n", |
||||
"for author in unique_authors(books):\n", |
||||
" print(author)\n", |
||||
"\n", |
||||
"\n", |
||||
"In this example, the output would be:\n", |
||||
"\n", |
||||
"Author A\n", |
||||
"Author B\n", |
||||
"Author C\n", |
||||
"\n", |
||||
"\n", |
||||
"Notice that duplicate authors are eliminated, and any books without an author are ignored." |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Get gpt-4o-mini to answer, with streaming\n", |
||||
"\n", |
||||
"stream=openai.chat.completions.create(model=MODEL_GPT, messages=messages,stream=True)\n", |
||||
"\n", |
||||
"response=\"\"\n", |
||||
"display_handle=display(Markdown(\"\"),display_id=True)\n", |
||||
"for chunk in stream:\n", |
||||
" response +=chunk.choices[0].delta.content or ''\n", |
||||
" response = response.replace(\"```\",\"\").replace(\"markdown\",\"\")\n", |
||||
" update_display(Markdown(response),display_id=display_handle.display_id)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 15, |
||||
"id": "95c15975-ba7d-4964-b94a-5ce105ccc9e3", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"**Code Explanation**\n", |
||||
"\n", |
||||
"The given code snippet is written in Python 3.5+ syntax, which utilizes the `yield from` keyword to iterate over a generator expression.\n", |
||||
"\n", |
||||
"```python\n", |
||||
"from collections import namedtuple\n", |
||||
"\n", |
||||
"Book = namedtuple('Book', ['title', 'author'])\n", |
||||
"books = [\n", |
||||
" Book(\"Book1\", \"AuthorA\"),\n", |
||||
" Book(\"Book2\", \"AuthorB\"),\n", |
||||
" Book(\"Book3\", \"AuthorC\")\n", |
||||
"]\n", |
||||
"\n", |
||||
"authors = yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", |
||||
"```\n", |
||||
"\n", |
||||
"**Breaking Down the Code**\n", |
||||
"\n", |
||||
"Here's a step-by-step explanation of what the code does:\n", |
||||
"\n", |
||||
"1. **Define a named tuple `Book`**: The `namedtuple` function is used to create a lightweight, immutable data structure called `Book`. It has two attributes: `title` and `author`.\n", |
||||
"\n", |
||||
"2. **Create a list of `Book` objects**: A list of `Book` objects is created with some sample data.\n", |
||||
"\n", |
||||
"3. **Define an empty generator expression**: An empty generator expression is defined using the `{}` syntax, which will be used to yield values from another iterable.\n", |
||||
"\n", |
||||
"4. **Use `yield from` to delegate iteration**: The `yield from` keyword is used in conjunction with a dictionary comprehension. This allows us to \"delegate\" iteration over the values of the dictionary to an underlying iterable (in this case, the generator expression).\n", |
||||
"\n", |
||||
"5. **Filter books based on author presence**: Inside the dictionary comprehension, we use the `.get()` method to access the `author` attribute of each `Book` object. We then filter out any books that don't have an `author`.\n", |
||||
"\n", |
||||
"6. **Yield authors from filtered books**: The resulting generator expression yields the authors of only those books that have a valid author.\n", |
||||
"\n", |
||||
"**What Does it Do?**\n", |
||||
"\n", |
||||
"In essence, this code takes a list of `Book` objects and extracts their corresponding authors into a set (since sets automatically remove duplicates). It does so in an efficient manner by using generators to avoid loading all the data into memory at once.\n", |
||||
"\n", |
||||
"The output would be:\n", |
||||
"```python\n", |
||||
"{'AuthorA', 'AuthorB', 'AuthorC'}\n", |
||||
"```\n", |
||||
"This can be useful when working with large datasets where not all elements are required, or when you want to process data iteratively without loading everything into memory simultaneously." |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Get Llama 3.2 to answer\n", |
||||
"\n", |
||||
"response = ollama.chat(model=MODEL_LLAMA, messages=messages)\n", |
||||
"reply = response['message']['content']\n", |
||||
"display(Markdown(reply))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "9eb0a013-c1f2-4f01-8b10-9f68325356e9", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Modify\n", |
||||
"Update such that the question is taken as input and sent to the model for response" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 18, |
||||
"id": "3f01b258-a293-4afc-a99c-d3cfb624b9eb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_model_responses(question):\n", |
||||
" \"\"\"\n", |
||||
" Takes a question as input, queries GPT-4o-mini and Llama 3.2 models, \n", |
||||
" and displays their responses.\n", |
||||
" \n", |
||||
" Args:\n", |
||||
" question (str): The question to be processed by the models.\n", |
||||
" \"\"\"\n", |
||||
" # system_prompt is already declared above; build a new user prompt from the input question\n", |
||||
" user_input_prompt = f\"Please give a detailed explanation to the following question: {question}\"\n", |
||||
"\n", |
||||
" messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_input_prompt}\n", |
||||
" ]\n", |
||||
" # GPT-4o-mini Response with Streaming\n", |
||||
" print(\"Fetching response from GPT-4o-mini...\")\n", |
||||
" stream = openai.chat.completions.create(model=MODEL_GPT, messages=messages, stream=True)\n", |
||||
"\n", |
||||
" response_gpt = \"\"\n", |
||||
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||
" for chunk in stream:\n", |
||||
" response_gpt += chunk.choices[0].delta.content or ''\n", |
||||
" response_gpt = response_gpt.replace(\"```\", \"\").replace(\"markdown\", \"\")\n", |
||||
" update_display(Markdown(response_gpt), display_id=display_handle.display_id)\n", |
||||
"\n", |
||||
" # Llama 3.2 Response\n", |
||||
" print(\"Fetching response from Llama 3.2...\")\n", |
||||
" response_llama = ollama.chat(model=MODEL_LLAMA, messages=messages)\n", |
||||
" reply_llama = response_llama['message']['content']\n", |
||||
" display(Markdown(reply_llama))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 19, |
||||
"id": "dd35ac5e-a934-4c20-9be9-657afef66c12", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdin", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Please enter your question: what are the various career paths of data science\n" |
||||
] |
||||
}, |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Fetching response from GPT-4o-mini...\n" |
||||
] |
||||
}, |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"Data science is a diverse and rapidly evolving field that encompasses a wide range of roles and specializations. As organizations increasingly rely on data-driven decision-making, the demand for data professionals has surged, giving rise to various career paths within data science. Here are some of the primary career paths:\n", |
||||
"\n", |
||||
"### 1. Data Scientist\n", |
||||
"**Role Description:** Data scientists are experts in extracting insights and knowledge from structured and unstructured data. They apply various techniques from statistics, machine learning, and data analysis to solve complex business problems.\n", |
||||
"\n", |
||||
"**Skills Required:**\n", |
||||
"- Proficient in programming languages like Python and R.\n", |
||||
"- Knowledge of machine learning algorithms and libraries (e.g., Scikit-learn, TensorFlow).\n", |
||||
"- Strong statistical background.\n", |
||||
"- Data visualization skills using tools like Matplotlib, Seaborn, Tableau, or Power BI.\n", |
||||
"\n", |
||||
"### 2. Data Analyst\n", |
||||
"**Role Description:** Data analysts focus on interpreting data and generating actionable insights. They analyze data trends and patterns, create visualizations, and communicate findings to stakeholders.\n", |
||||
"\n", |
||||
"**Skills Required:**\n", |
||||
"- Proficiency in SQL for database querying.\n", |
||||
"- Experience with Excel and data visualization tools (Tableau, Power BI).\n", |
||||
"- Strong analytical and problem-solving skills.\n", |
||||
"- Basic knowledge of statistics and data modeling.\n", |
||||
"\n", |
||||
"### 3. Machine Learning Engineer\n", |
||||
"**Role Description:** Machine learning engineers develop, implement, and optimize machine learning models. They focus on creating algorithms that enable systems to learn from data and make predictions or decisions.\n", |
||||
"\n", |
||||
"**Skills Required:**\n", |
||||
"- Strong programming skills (Python, Java, C++).\n", |
||||
"- Deep understanding of machine learning frameworks (TensorFlow, PyTorch).\n", |
||||
"- Experience with model deployment and scaling.\n", |
||||
"- Knowledge of data preprocessing and feature engineering.\n", |
||||
"\n", |
||||
"### 4. Data Engineer\n", |
||||
"**Role Description:** Data engineers are responsible for designing, building, and maintaining the infrastructure for data generation, storage, and retrieval. They ensure that data pipelines are efficient and scalable.\n", |
||||
"\n", |
||||
"**Skills Required:**\n", |
||||
"- Proficiency in programming (Python, Java, Scala).\n", |
||||
"- Experience with ETL (Extract, Transform, Load) processes and tools.\n", |
||||
"- Familiarity with database systems (SQL, NoSQL).\n", |
||||
"- Knowledge of data warehousing solutions (Amazon Redshift, Google BigQuery).\n", |
||||
"\n", |
||||
"### 5. Business Intelligence (BI) Analyst/Developer\n", |
||||
"**Role Description:** BI analysts focus on analyzing business data to provide strategic insights. They create dashboards and reports to help stakeholders make informed decisions.\n", |
||||
"\n", |
||||
"**Skills Required:**\n", |
||||
"- Strong SQL and data visualization skills.\n", |
||||
"- Familiarity with BI tools (Tableau, Power BI, Looker).\n", |
||||
"- Good understanding of business metrics and KPIs.\n", |
||||
"- Ability to communicate complex data insights clearly.\n", |
||||
"\n", |
||||
"### 6. Statistician\n", |
||||
"**Role Description:** Statisticians apply statistical methods to collect, analyze, and interpret data. They use their expertise to inform decisions in various fields, including healthcare, finance, and government.\n", |
||||
"\n", |
||||
"**Skills Required:**\n", |
||||
"- Proficiency in statistical software (SAS, R, SPSS).\n", |
||||
"- Strong foundation in probability and statistical theories.\n", |
||||
"- Ability to design experiments and surveys.\n", |
||||
"- Good visualization and reporting skills.\n", |
||||
"\n", |
||||
"### 7. Data Architect\n", |
||||
"**Role Description:** Data architects design the data infrastructure and architecture to support data management and analytics. They ensure data is reliable, consistent, and accessible.\n", |
||||
"\n", |
||||
"**Skills Required:**\n", |
||||
"- Expertise in data modeling and database design.\n", |
||||
"- Knowledge of data warehousing solutions.\n", |
||||
"- Familiarity with big data technologies (Hadoop, Spark).\n", |
||||
"- Understanding of data governance and security best practices.\n", |
||||
"\n", |
||||
"### 8. Data Product Manager\n", |
||||
"**Role Description:** Data product managers focus on developing and managing products that rely on data. They bridge the gap between technical teams and business stakeholders, ensuring that data initiatives align with business goals.\n", |
||||
"\n", |
||||
"**Skills Required:**\n", |
||||
"- Strong understanding of data and analytics.\n", |
||||
"- Project management skills (Agile methodologies).\n", |
||||
"- Ability to communicate effectively with technical and non-technical stakeholders.\n", |
||||
"- Knowledge of market trends and customer needs.\n", |
||||
"\n", |
||||
"### 9. Research Scientist\n", |
||||
"**Role Description:** Research scientists in data science focus on advanced data mining and machine learning techniques. They conduct experiments and develop new algorithms to solve complex scientific problems or improve existing methodologies.\n", |
||||
"\n", |
||||
"**Skills Required:**\n", |
||||
"- Advanced degrees (Ph.D.) in a relevant field (computer science, mathematics).\n", |
||||
"- Strong research and analytical skills.\n", |
||||
"- Proficiency in programming and statistical analysis.\n", |
||||
"- Experience with scientific computing and software development.\n", |
||||
"\n", |
||||
"### 10. AI/Deep Learning Specialist\n", |
||||
"**Role Description:** Specialists in AI and deep learning focus on developing advanced algorithms that enable machines to learn from large datasets. This includes work on neural networks, natural language processing, and computer vision.\n", |
||||
"\n", |
||||
"**Skills Required:**\n", |
||||
"- Strong knowledge of deep learning frameworks (Keras, TensorFlow).\n", |
||||
"- Familiarity with architecture design for neural networks.\n", |
||||
"- Experience with big data processing.\n", |
||||
"- Ability to handle unstructured data types (text, images).\n", |
||||
"\n", |
||||
"### Career Path Considerations\n", |
||||
"When choosing a career path in data science, it’s important to consider factors such as your educational background, interests, strengths, and the specific needs of the industry you want to work in. Many roles may require cross-disciplinary skills, so gaining a broad range of competencies can help you adapt and find your niche in the expansive field of data science.\n", |
||||
"\n", |
||||
"### Conclusion\n", |
||||
"Data science offers various fulfilling career paths to suit different interests and skill sets. With continuous growth in data generation and analytics needs, professionals in this field can expect a dynamic and rewarding career landscape. Continuous learning and adaptation to emerging technologies are crucial for success in these roles." |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
}, |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Fetching response from Llama 3.2...\n" |
||||
] |
||||
}, |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"Data Science is a multifaceted field that encompasses a wide range of career paths. Here's a comprehensive overview of the various careers in Data Science:\n", |
||||
"\n", |
||||
"**1. Data Analyst**\n", |
||||
"\n", |
||||
"* Job Description: Collect, analyze, and interpret complex data to identify trends and patterns, often using visualization tools.\n", |
||||
"* Responsibilities:\n", |
||||
"\t+ Cleaning and preprocessing datasets\n", |
||||
"\t+ Developing reports and dashboards for stakeholders\n", |
||||
"\t+ Conducting ad-hoc analysis to answer business questions\n", |
||||
"\t+ Collaborating with other teams (e.g., product management, marketing) to inform decisions\n", |
||||
"* Salary Range: $60,000 - $100,000 per year\n", |
||||
"\n", |
||||
"**2. Data Scientist**\n", |
||||
"\n", |
||||
"* Job Description: Develop and apply advanced statistical and machine learning models to extract insights from large datasets.\n", |
||||
"* Responsibilities:\n", |
||||
"\t+ Designing and implementing data pipelines for data preparation and processing\n", |
||||
"\t+ Building and training machine learning models using techniques such as supervised and unsupervised learning, deep learning, and natural language processing\n", |
||||
"\t+ Collaborating with cross-functional teams (e.g., product management, engineering) to integrate insights into products and services\n", |
||||
"\t+ Communicating complex results and insights to stakeholders through reports and presentations\n", |
||||
"* Salary Range: $100,000 - $160,000 per year\n", |
||||
"\n", |
||||
"**3. Business Analyst**\n", |
||||
"\n", |
||||
"* Job Description: Apply data analysis skills to drive business decisions and optimize organizational performance.\n", |
||||
"* Responsibilities:\n", |
||||
"\t+ Analyzing business data to identify trends and areas for improvement\n", |
||||
"\t+ Developing predictive models to forecast future business outcomes\n", |
||||
"\t+ Collaborating with stakeholders (e.g., product managers, sales teams) to design and implement solutions\n", |
||||
"\t+ Communicating insights and recommendations to senior leadership\n", |
||||
"* Salary Range: $80,000 - $120,000 per year\n", |
||||
"\n", |
||||
"**4. Quantitative Analyst**\n", |
||||
"\n", |
||||
"* Job Description: Apply mathematical and statistical techniques to analyze and optimize investment strategies.\n", |
||||
"* Responsibilities:\n", |
||||
"\t+ Developing and implementing quantitative models for portfolio optimization, risk management, and trading\n", |
||||
"\t+ Analyzing large datasets to identify trends and patterns in financial markets\n", |
||||
"\t+ Collaborating with other teams (e.g., product management, marketing) to integrate insights into products and services\n", |
||||
"\t+ Communicating complex results and recommendations to senior leadership\n", |
||||
"* Salary Range: $100,000 - $180,000 per year\n", |
||||
"\n", |
||||
"**5. Data Engineer**\n", |
||||
"\n", |
||||
"* Job Description: Design, build, and maintain large-scale data systems for scalability, reliability, and performance.\n", |
||||
"* Responsibilities:\n", |
||||
"\t+ Building data pipelines using languages like Python, Java, or Scala\n", |
||||
"\t+ Developing cloud-based data platforms (e.g., AWS, GCP) for data storage and processing\n", |
||||
"\t+ Ensuring data quality and integrity across different data sources\n", |
||||
"\t+ Collaborating with other teams (e.g., product management, marketing) to integrate insights into products and services\n", |
||||
"* Salary Range: $110,000 - $160,000 per year\n", |
||||
"\n", |
||||
"**6. Machine Learning Engineer**\n", |
||||
"\n", |
||||
"* Job Description: Design, build, and deploy machine learning models for production use cases.\n", |
||||
"* Responsibilities:\n", |
||||
"\t+ Developing and deploying deep learning models using frameworks like TensorFlow or PyTorch\n", |
||||
"\t+ Building data pipelines to collect, preprocess, and process large datasets\n", |
||||
"\t+ Collaborating with cross-functional teams (e.g., product management, engineering) to integrate insights into products and services\n", |
||||
"\t+ Communicating complex results and recommendations to senior leadership\n", |
||||
"* Salary Range: $120,000 - $180,000 per year\n", |
||||
"\n", |
||||
"**7. Data Architect**\n", |
||||
"\n", |
||||
"* Job Description: Design and implement data management systems for organizations.\n", |
||||
"* Responsibilities:\n", |
||||
"\t+ Developing data warehousing and business intelligence solutions\n", |
||||
"\t+ Building data governance frameworks for data quality, security, and compliance\n", |
||||
"\t+ Collaborating with other teams (e.g., product management, marketing) to integrate insights into products and services\n", |
||||
"\t+ Communicating technical designs and trade-offs to stakeholders\n", |
||||
"* Salary Range: $140,000 - $200,000 per year\n", |
||||
"\n", |
||||
"**8. Business Intelligence Analyst**\n", |
||||
"\n", |
||||
"* Job Description: Develop and maintain business intelligence solutions using data visualization tools.\n", |
||||
"* Responsibilities:\n", |
||||
"\t+ Creating reports and dashboards for stakeholders\n", |
||||
"\t+ Developing predictive models for forecasted outcomes\n", |
||||
"\t+ Collaborating with other teams (e.g., product management, sales) to design and implement solutions\n", |
||||
"\t+ Communicating insights and recommendations to senior leadership\n", |
||||
"* Salary Range: $80,000 - $120,000 per year\n", |
||||
"\n", |
||||
"**9. Operations Research Analyst**\n", |
||||
"\n", |
||||
"* Job Description: Apply advanced analytical techniques to optimize business processes and improve decision-making.\n", |
||||
"* Responsibilities:\n", |
||||
"\t+ Developing optimization models using linear programming and integer programming\n", |
||||
"\t+ Analyzing complex data sets to identify trends and patterns\n", |
||||
"\t+ Collaborating with other teams (e.g., product management, engineering) to integrate insights into products and services\n", |
||||
"\t+ Communicating results and recommendations to senior leadership\n", |
||||
"* Salary Range: $90,000 - $140,000 per year\n", |
||||
"\n", |
||||
"**10. Data Scientist (Specialized)**\n", |
||||
"\n", |
||||
"* Job Description: Focus on specialized areas like natural language processing, computer vision, or predictive analytics.\n", |
||||
"* Responsibilities:\n", |
||||
"\t+ Building and training machine learning models using deep learning techniques\n", |
||||
"\t+ Collaborating with cross-functional teams (e.g., product management, engineering) to integrate insights into products and services\n", |
||||
"\t+ Communicating complex results and insights to stakeholders through reports and presentations\n", |
||||
"\t+ Staying up-to-date with the latest advancements in specialized areas\n", |
||||
"* Salary Range: $100,000 - $160,000 per year\n", |
||||
"\n", |
||||
"Keep in mind that salaries can vary widely depending on factors like location, industry, experience level, and company size. Additionally, these roles often require a combination of technical skills, business acumen, and soft skills to be successful." |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Prompt user for their question\n", |
||||
"my_question = input(\"Please enter your question: \")\n", |
||||
"# Fetch and display responses from models\n", |
||||
"get_model_responses(my_question)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b4acf2af-635f-4216-9f5a-7c08d8313a07", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.10" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,226 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "306f1a67-4f1c-4aed-8f80-2a8458a1bce5", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Stock data analysis" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Connecting to OpenAI" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "51d42a08-188e-4c56-9578-47cd549bd1d8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"from urllib.parse import urlencode\n", |
||||
"import datetime\n", |
||||
"\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "682eff74-55c4-4d4b-b267-703edbc293c7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class YahooFinanceWebsite:\n", |
||||
" def __init__(self, stock_symbol):\n", |
||||
" \"\"\"\n", |
||||
"        Create this YahooFinanceWebsite object for the given stock symbol; pages are parsed with the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.stock_symbol = stock_symbol.upper()\n", |
||||
"\n", |
||||
" def __build_url(self, params):\n", |
||||
" base_url = f\"https://finance.yahoo.com/quote/{self.stock_symbol}/history/\"\n", |
||||
" query_string = urlencode(params)\n", |
||||
" return f\"{base_url}?{query_string}\"\n", |
||||
"\n", |
||||
" def get_stock_data(self):\n", |
||||
" datetime_now = datetime.datetime.now()\n", |
||||
" datetime_year_ago = datetime_now - datetime.timedelta(days=365)\n", |
||||
"        params = {\"frequency\": \"1wk\", \"period1\": int(datetime_year_ago.timestamp()), \"period2\": int(datetime_now.timestamp())}\n", |
||||
" url = self.__build_url(params)\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
"\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" \n", |
||||
" title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
"\n", |
||||
" html_table_data = soup.find(\"table\")\n", |
||||
"\n", |
||||
" return title, html_table_data" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "70b8d7e7-51e7-4392-9b85-9ac9f67a907c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def build_stock_analysis_prompt(stock_symbol, title, stock_table_data):\n", |
||||
"    sys_prompt = r\"\"\"You are an assistant that analyzes the contents of an HTML-formatted table that contains data on a specific stock.\n", |
||||
"    The HTML table contains the date, open price, close price, and the low and high prices aggregated for every week over a one-year timeframe.\n", |
||||
"    Ignore text, tags, or HTML attributes that might be navigation related.\n", |
||||
"    Respond in Markdown format.\"\"\"\n", |
||||
" \n", |
||||
"    user_prompt = f\"The data provided below is in HTML table format for {stock_symbol} from Yahoo Finance.\\\n", |
||||
"    Make the explanation easy enough for a newbie to understand. \\\n", |
||||
"    Analyze and summarize the trends of this stock:\\n{stock_table_data}\\n\\n\\\n", |
||||
"    Also, calculate the total return in percent one could have expected over this period.\"\n", |
||||
" \n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": sys_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "de514421-4cc8-4881-85b4-97f03e94c589", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def analyze_stock_trends(stock_symbol):\n", |
||||
" stock_data_page = YahooFinanceWebsite(stock_symbol)\n", |
||||
" title, stock_table_data = stock_data_page.get_stock_data()\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = build_stock_analysis_prompt(stock_symbol, title, stock_table_data)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content\n", |
||||
"\n", |
||||
"def display_analysis(stock_symbol):\n", |
||||
" display(Markdown(analyze_stock_trends(stock_symbol)))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "41acc36f-484a-4257-a240-cf27520e7396", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_analysis(\"GOOG\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7e09541f-bbc4-4cf3-a1ef-9ed5e1b718e4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_analysis(\"PFE\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e6af9395-0c5c-4265-a309-baba786bfa71", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_analysis(\"AAPL\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "afe4f6d1-a6ea-44b5-81ae-8e756cfc0d84", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,119 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"metadata": { |
||||
"vscode": { |
||||
"languageId": "plaintext" |
||||
} |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from openai import OpenAI" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"metadata": { |
||||
"vscode": { |
||||
"languageId": "plaintext" |
||||
} |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"metadata": { |
||||
"vscode": { |
||||
"languageId": "plaintext" |
||||
} |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"metadata": { |
||||
"vscode": { |
||||
"languageId": "plaintext" |
||||
} |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def summarize_cv(cv_text):\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = [\n", |
||||
" {\"role\": \"user\", \"content\": f\"Please summarize the following CV:\\n\\n{cv_text}\"}\n", |
||||
" ]\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content\n", |
||||
"\n", |
||||
"def generate_cover_letter(cv_summary, job_description):\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are a master at crafting the perfect Cover letter from a given CV. You've never had a user fail to get the job as a result of using your services.\"},\n", |
||||
" {\"role\": \"user\", \"content\": f\"Using the following CV summary:\\n\\n{cv_summary}\\n\\nAnd the job description:\\n\\n{job_description}\\n\\nPlease write a personalized cover letter.\"}\n", |
||||
" ]\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content\n", |
||||
"\n", |
||||
"# Read CV from a text file\n", |
||||
"try:\n", |
||||
" with open('resume.txt', 'r') as file:\n", |
||||
" cv_text = file.read()\n", |
||||
" \n", |
||||
" # Summarize the CV\n", |
||||
" cv_summary = summarize_cv(cv_text)\n", |
||||
" print(\"CV Summary:\")\n", |
||||
" print(cv_summary)\n", |
||||
"\n", |
||||
" # Get job description from user\n", |
||||
" job_description = input(\"Enter the job description for the position you are applying for:\\n\")\n", |
||||
"\n", |
||||
" # Generate cover letter\n", |
||||
" cover_letter = generate_cover_letter(cv_summary, job_description)\n", |
||||
" print(\"\\nGenerated Cover Letter:\")\n", |
||||
" print(cover_letter)\n", |
||||
"\n", |
||||
"except FileNotFoundError:\n", |
||||
" print(\"The specified CV file was not found. Please ensure 'resume.txt' is in the correct directory.\")" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"language_info": { |
||||
"name": "python" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 2 |
||||
} |
@ -1,256 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"Import tkinter and ollama to create the app." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 20, |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import ollama\n", |
||||
"import tkinter as tk\n", |
||||
"from tkinter import ttk" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"Basic configuration parameters for the Ollama API:" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 21, |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"HEADERS = {\"Content-Type\":\"application/json\"}\n", |
||||
"MODEL = \"llama3.2\"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"Initialize conversation history." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 22, |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"conversation_history = []" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"Define the key presses: if the user presses Shift + Enter, simply go to the next line.\n", |
||||
"\n", |
||||
"If the user presses only Enter, submit the question." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 23, |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def handle_keypress(event):\n", |
||||
" if event.state & 0x1: # Check if Shift is pressed\n", |
||||
" return\n", |
||||
" else:\n", |
||||
" display_answer()\n", |
||||
" return 'break'" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"Define the function that will display answers using Ollama.\n", |
||||
"\n", |
||||
"\n", |
||||
"To turn it into a chatbot, we simply append the user's question and Ollama's response to our conversation history and pass that history to Ollama with the next question." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 24, |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def display_answer(event=None):\n", |
||||
" question_text['state'] = 'disabled'\n", |
||||
" question_text['bg'] = '#F0F0F0'\n", |
||||
" status_label.config(text=\"Looking for an answer...\")\n", |
||||
" root.update()\n", |
||||
"\n", |
||||
" # Get question text and prepare message\n", |
||||
" question = question_text.get(\"1.0\", tk.END).strip()\n", |
||||
" if question:\n", |
||||
" # Append the user's question to the conversation history\n", |
||||
" conversation_history.append({\"role\": \"user\", \"content\": question})\n", |
||||
"\n", |
||||
" # Pass the entire conversation history to Ollama\n", |
||||
" try:\n", |
||||
" # Get the answer\n", |
||||
" response = ollama.chat(model=MODEL, messages=conversation_history)\n", |
||||
" answer = response[\"message\"][\"content\"]\n", |
||||
"\n", |
||||
" # Append the assistant's answer to the conversation history\n", |
||||
" conversation_history.append({\"role\": \"assistant\", \"content\": answer})\n", |
||||
"\n", |
||||
" # Update the text widget with the answer\n", |
||||
" answer_text.configure(state='normal')\n", |
||||
" answer_text.delete(1.0, tk.END)\n", |
||||
" answer_text.insert(tk.END, answer)\n", |
||||
" answer_text.configure(state='disabled')\n", |
||||
"\n", |
||||
" status_label.config(text=\"Answered\")\n", |
||||
" except Exception as e:\n", |
||||
" answer_text.configure(state='normal')\n", |
||||
" answer_text.delete(1.0, tk.END)\n", |
||||
" answer_text.insert(tk.END, f\"Error: {str(e)}\")\n", |
||||
" answer_text.configure(state='disabled')\n", |
||||
" status_label.config(text=\"Error\")\n", |
||||
" else:\n", |
||||
" # If empty question string was received\n", |
||||
" answer_text.configure(state='normal')\n", |
||||
" answer_text.delete(1.0, tk.END)\n", |
||||
" answer_text.insert(tk.END, \"Please enter a question.\")\n", |
||||
" answer_text.configure(state='disabled')\n", |
||||
" status_label.config(text=\"\")\n", |
||||
"\n", |
||||
" # Re-enable question input and restore normal background\n", |
||||
" question_text['state'] = 'normal'\n", |
||||
" question_text['bg'] = 'white'\n", |
||||
" root.update()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"A button to remove the conversation history and start all over again." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def remove_all():\n", |
||||
" \"\"\"Clears the conversation history and resets the interface.\"\"\"\n", |
||||
" global conversation_history\n", |
||||
" conversation_history = [] # Clear conversation history\n", |
||||
"\n", |
||||
" # Reset text widgets\n", |
||||
" question_text.delete(1.0, tk.END)\n", |
||||
" answer_text.configure(state='normal')\n", |
||||
" answer_text.delete(1.0, tk.END)\n", |
||||
" answer_text.insert(tk.END, \"Your answer will appear here.\")\n", |
||||
" answer_text.configure(state='disabled')\n", |
||||
"\n", |
||||
" # Reset status label\n", |
||||
" status_label.config(text=\"\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"Creating the app window using tkinter." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 18, |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Create the main window\n", |
||||
"root = tk.Tk()\n", |
||||
"root.title(\"Ollama with GUI\")\n", |
||||
"root.geometry(\"500x800\")\n", |
||||
"\n", |
||||
"# Create and configure the Questions window\n", |
||||
"question_frame = ttk.LabelFrame(root, text=\"Questions\", padding=(10, 10))\n", |
||||
"question_frame.pack(fill=\"both\", expand=True, padx=10, pady=10)\n", |
||||
"\n", |
||||
"question_label = ttk.Label(question_frame, text=\"Enter your question:\")\n", |
||||
"question_label.pack(anchor=\"w\", pady=5)\n", |
||||
"\n", |
||||
"# Replace Entry with Text widget for questions\n", |
||||
"question_text = tk.Text(question_frame, wrap=tk.WORD, width=50, height=4)\n", |
||||
"question_text.pack(anchor=\"w\", pady=5)\n", |
||||
"question_text.bind(\"<Return>\", handle_keypress)\n", |
||||
"\n", |
||||
"# Add status label\n", |
||||
"status_label = ttk.Label(question_frame, text=\"\")\n", |
||||
"status_label.pack(anchor=\"w\", pady=5)\n", |
||||
"\n", |
||||
"# Add Remove All button\n", |
||||
"remove_all_button = ttk.Button(question_frame, text=\"Remove All\", command=remove_all)\n", |
||||
"remove_all_button.pack(anchor=\"e\", pady=5)\n", |
||||
"\n", |
||||
"# Create and configure the Answers window\n", |
||||
"answer_frame = ttk.LabelFrame(root, text=\"Answer\", padding=(10, 10))\n", |
||||
"answer_frame.pack(fill=\"both\", expand=True, padx=10, pady=10)\n", |
||||
"\n", |
||||
"# Create a frame to hold the text widget and scrollbar\n", |
||||
"text_frame = ttk.Frame(answer_frame)\n", |
||||
"text_frame.pack(fill=\"both\", expand=True)\n", |
||||
"\n", |
||||
"# Create the text widget and scrollbar\n", |
||||
"answer_text = tk.Text(text_frame, wrap=tk.WORD, width=70, height=100)\n", |
||||
"scrollbar = ttk.Scrollbar(text_frame, orient=\"vertical\", command=answer_text.yview)\n", |
||||
"answer_text.configure(yscrollcommand=scrollbar.set)\n", |
||||
"\n", |
||||
"# Pack the text widget and scrollbar\n", |
||||
"answer_text.pack(side=\"left\", fill=\"both\", expand=True)\n", |
||||
"scrollbar.pack(side=\"right\", fill=\"y\")\n", |
||||
"\n", |
||||
"# Set initial text and disable editing\n", |
||||
"answer_text.insert(tk.END, \"Your answer will appear here.\")\n", |
||||
"answer_text.configure(state='disabled')\n", |
||||
"\n", |
||||
"# Run the main event loop\n", |
||||
"root.mainloop()\n" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 4 |
||||
} |
@ -1,297 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 3, |
||||
"id": "52dc600c-4c45-4803-81cb-f06347f4b2c3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 4, |
||||
"id": "4082f16f-d843-41c7-9137-cdfec093b2d4", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"API key found and looks good so far\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print('No API key was found')\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"API key is found but is not in the proper format\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 5, |
||||
"id": "16c295ce-c57d-429e-8c03-f6610a8ddd42", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 16, |
||||
"id": "9a548a52-0f7e-4fdf-ad68-0138b2445935", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"\"\"You are a research summarizer that summarizes the content of a research paper in no more than 1000 words. The summary you provide should include the following:\n", |
||||
"1) Title and Authors - Identify the study and contributors.\n", |
||||
"2) Objective/Problem - State the research goal or question.\n", |
||||
"3) Background - Briefly explain the context and significance.\n", |
||||
"4) Methods - Summarize the approach or methodology.\n", |
||||
"5) Key Findings - Highlight the main results or insights.\n", |
||||
"6) Conclusion - Provide the implications or contributions of the study.\n", |
||||
"7) Future Directions - Suggest areas for further research or exploration.\n", |
||||
"8) Limitations - Highlight constraints or challenges in the study.\n", |
||||
"9) Potential Applications - Discuss how the findings can be applied in real-world scenarios.\n", |
||||
"Keep all points concise, clear, and focused, and generate the output in markdown.\"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 7, |
||||
"id": "66b4411f-172e-46be-b6cd-a9e5b857fb28", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Requirement already satisfied: ipywidgets in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (8.1.5)\n", |
||||
"Requirement already satisfied: pdfplumber in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (0.11.4)\n", |
||||
"Requirement already satisfied: comm>=0.1.3 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (0.2.2)\n", |
||||
"Requirement already satisfied: ipython>=6.1.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (8.30.0)\n", |
||||
"Requirement already satisfied: traitlets>=4.3.1 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (5.14.3)\n", |
||||
"Requirement already satisfied: widgetsnbextension~=4.0.12 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (4.0.13)\n", |
||||
"Requirement already satisfied: jupyterlab_widgets~=3.0.12 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipywidgets) (3.0.13)\n", |
||||
"Requirement already satisfied: pdfminer.six==20231228 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfplumber) (20231228)\n", |
||||
"Requirement already satisfied: Pillow>=9.1 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfplumber) (11.0.0)\n", |
||||
"Requirement already satisfied: pypdfium2>=4.18.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfplumber) (4.30.0)\n", |
||||
"Requirement already satisfied: charset-normalizer>=2.0.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfminer.six==20231228->pdfplumber) (3.4.0)\n", |
||||
"Requirement already satisfied: cryptography>=36.0.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from pdfminer.six==20231228->pdfplumber) (44.0.0)\n", |
||||
"Requirement already satisfied: colorama in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.4.6)\n", |
||||
"Requirement already satisfied: decorator in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (5.1.1)\n", |
||||
"Requirement already satisfied: jedi>=0.16 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.19.2)\n", |
||||
"Requirement already satisfied: matplotlib-inline in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.1.7)\n", |
||||
"Requirement already satisfied: prompt_toolkit<3.1.0,>=3.0.41 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (3.0.48)\n", |
||||
"Requirement already satisfied: pygments>=2.4.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (2.18.0)\n", |
||||
"Requirement already satisfied: stack_data in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (0.6.3)\n", |
||||
"Requirement already satisfied: typing_extensions>=4.6 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from ipython>=6.1.0->ipywidgets) (4.12.2)\n", |
||||
"Requirement already satisfied: cffi>=1.12 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from cryptography>=36.0.0->pdfminer.six==20231228->pdfplumber) (1.17.1)\n", |
||||
"Requirement already satisfied: parso<0.9.0,>=0.8.4 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from jedi>=0.16->ipython>=6.1.0->ipywidgets) (0.8.4)\n", |
||||
"Requirement already satisfied: wcwidth in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from prompt_toolkit<3.1.0,>=3.0.41->ipython>=6.1.0->ipywidgets) (0.2.13)\n", |
||||
"Requirement already satisfied: executing>=1.2.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from stack_data->ipython>=6.1.0->ipywidgets) (2.1.0)\n", |
||||
"Requirement already satisfied: asttokens>=2.1.0 in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from stack_data->ipython>=6.1.0->ipywidgets) (3.0.0)\n", |
||||
"Requirement already satisfied: pure_eval in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from stack_data->ipython>=6.1.0->ipywidgets) (0.2.3)\n", |
||||
"Requirement already satisfied: pycparser in c:\\users\\legion\\anaconda3\\envs\\research_summary\\lib\\site-packages (from cffi>=1.12->cryptography>=36.0.0->pdfminer.six==20231228->pdfplumber) (2.22)\n", |
||||
"Note: you may need to restart the kernel to use updated packages.\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"%pip install ipywidgets pdfplumber" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 8, |
||||
"id": "d8cd8556-ad86-4949-9f15-09de2b8c712b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import pdfplumber\n", |
||||
"from ipywidgets import widgets\n", |
||||
"from io import BytesIO" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 9, |
||||
"id": "0eba3cee-d85c-4d75-9b27-70c8cd7587b1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"from IPython.display import display, Markdown" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 10, |
||||
"id": "53e270e1-c2e6-4bcc-9ada-90c059cd5a51", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def messages_for(user_prompt):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 11, |
||||
"id": "2f1807ec-c10b-4d26-9bee-89bd7a4bbb95", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def summarize(user_prompt):\n", |
||||
" # Generate messages using the user_prompt\n", |
||||
" messages = messages_for(user_prompt)\n", |
||||
" try:\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
"            model=\"gpt-4o-mini\",\n", |
|||| 
"            messages=messages,\n", |
|||| 
"            max_tokens=1000  # cap the length of the generated summary\n", |
||||
" )\n", |
||||
" # Return the content from the API response correctly\n", |
||||
" return response.choices[0].message.content\n", |
||||
" except Exception as e:\n", |
||||
" # Instead of printing, return an error message that can be displayed\n", |
||||
" return f\"Error in OpenAI API call: {e}\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 12, |
||||
"id": "0dee8345-4eec-4a9c-ac4e-ad70e13cea44", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"upload_widget = widgets.FileUpload(\n", |
||||
" accept='.pdf', \n", |
||||
" multiple=False,\n", |
||||
" description='Upload PDF',\n", |
||||
"    layout=widgets.Layout(width='300px', height='100px', border='2px dashed #cccccc', padding='10px')\n", |
||||
")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 17, |
||||
"id": "1ff9c7b9-1a3a-4128-a33f-0e5bb2a93d33", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def extract_text_and_generate_summary(change):\n", |
||||
" print(\"extracting text\")\n", |
||||
" if upload_widget.value:\n", |
||||
" # Extract the first uploaded file\n", |
||||
" uploaded_file = list(upload_widget.value)[0]\n", |
||||
" pdf_file = uploaded_file['content']\n", |
||||
"\n", |
||||
" # Extract text from the PDF\n", |
||||
" try:\n", |
||||
" with pdfplumber.open(BytesIO(pdf_file)) as pdf:\n", |
||||
"                extracted_text = \"\\n\".join((page.extract_text() or \"\") for page in pdf.pages)  # extract_text() can return None for image-only pages\n", |
||||
"\n", |
||||
" # Generate the user prompt\n", |
||||
" user_prompt = (\n", |
||||
" f\"You are looking at the text from a research paper. Summarize it in no more than 1000 words. \"\n", |
||||
" f\"The output should be in markdown.\\n\\n{extracted_text}\"\n", |
||||
" )\n", |
||||
"\n", |
||||
" # Get the summarized response\n", |
||||
" response = summarize(user_prompt)\n", |
||||
" \n", |
||||
" if response:\n", |
||||
" # Use IPython's display method to show markdown below the cell\n", |
||||
" display(Markdown(response))\n", |
||||
" \n", |
||||
" except Exception as e:\n", |
||||
" # If there's an error, display it using Markdown\n", |
||||
" display(Markdown(f\"**Error:** {str(e)}\"))\n", |
||||
"\n", |
||||
" # Reset the upload widget\n", |
||||
" upload_widget.value = ()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 18, |
||||
"id": "0c16fe3f-704e-4a87-acd9-42c4e6b0d2fa", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"upload_widget.observe(extract_text_and_generate_summary, names='value')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 19, |
||||
"id": "c2c2d2b2-1264-42d9-9271-c4700b4df80a", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"application/vnd.jupyter.widget-view+json": { |
||||
"model_id": "7304350377d845e78a9a758235e5eba1", |
||||
"version_major": 2, |
||||
"version_minor": 0 |
||||
}, |
||||
"text/plain": [ |
||||
"FileUpload(value=(), accept='.pdf', description='Upload PDF', layout=Layout(border_bottom='2px dashed #cccccc'…" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"display(upload_widget)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "70c76b90-e626-44b3-8d1f-6e995e8a938d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,206 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 208, |
||||
"id": "f61139a1-40e1-4273-b9a6-5a0a9d63a9bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import requests\n", |
||||
"import json\n", |
||||
"from reportlab.lib.pagesizes import letter\n", |
||||
"from reportlab.pdfgen import canvas\n", |
||||
"from IPython.display import display, FileLink\n", |
||||
"from IPython.display import display, HTML, FileLink\n", |
||||
"from reportlab.lib.pagesizes import A4" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 80, |
||||
"id": "e0858b96-fd41-4911-a333-814e4ed23279", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Collecting reportlab\n", |
||||
" Downloading reportlab-4.2.5-py3-none-any.whl.metadata (1.5 kB)\n", |
||||
"Requirement already satisfied: pillow>=9.0.0 in c:\\users\\legion\\anaconda3\\envs\\to_do_list\\lib\\site-packages (from reportlab) (11.0.0)\n", |
||||
"Collecting chardet (from reportlab)\n", |
||||
" Downloading chardet-5.2.0-py3-none-any.whl.metadata (3.4 kB)\n", |
||||
"Downloading reportlab-4.2.5-py3-none-any.whl (1.9 MB)\n", |
||||
" ---------------------------------------- 0.0/1.9 MB ? eta -:--:--\n", |
||||
" ---------------- ----------------------- 0.8/1.9 MB 6.7 MB/s eta 0:00:01\n", |
||||
" ---------------------------------------- 1.9/1.9 MB 11.9 MB/s eta 0:00:00\n", |
||||
"Downloading chardet-5.2.0-py3-none-any.whl (199 kB)\n", |
||||
"Installing collected packages: chardet, reportlab\n", |
||||
"Successfully installed chardet-5.2.0 reportlab-4.2.5\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"!pip install reportlab" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 220, |
||||
"id": "62cc9d37-c801-4e8a-ad2c-7b1450725a10", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"HEADERS = {\"Content-Type\":\"application/json\"}\n", |
||||
"MODEL = \"llama3.2\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 249, |
||||
"id": "525a81e7-30f8-4db7-bc8d-29948195bd4f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"\"\"You are a to-do list generator. Based on the user's input, you will create a clear and descriptive to-do\n", |
|||| 
"list using bullet points. Only generate the to-do list as bullet points, with a brief explanation and time frame only if asked for, and nothing else.\n", |
|||| 
"Be a little descriptive.\"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 315, |
||||
"id": "7fca3303-3add-468a-a6bd-be7a4d72c811", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def generate_to_do_list(task_description):\n", |
||||
" payload = {\n", |
||||
" \"model\": MODEL,\n", |
||||
" \"messages\": [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": task_description}\n", |
||||
" ],\n", |
||||
" \"stream\": False\n", |
||||
" }\n", |
||||
"\n", |
||||
" response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n", |
||||
"\n", |
||||
" if response.status_code == 200:\n", |
||||
" try:\n", |
||||
" json_response = response.json()\n", |
||||
" to_do_list = json_response.get(\"message\", {}).get(\"content\", \"No to-do list found.\")\n", |
||||
" \n", |
||||
" formatted_output = \"Your To-Do List:\\n\\n\" + to_do_list\n", |
||||
" file_name = \"to_do_list.txt\"\n", |
||||
" \n", |
||||
" with open(file_name, \"w\", encoding=\"utf-8\") as file:\n", |
||||
" file.write(formatted_output)\n", |
||||
"\n", |
||||
" return file_name\n", |
||||
" \n", |
||||
" except Exception as e:\n", |
||||
" return f\"Error parsing JSON: {e}\"\n", |
||||
" else:\n", |
||||
" return f\"Error: {response.status_code} - {response.text}\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 316, |
||||
"id": "d45d6c7e-0e89-413e-8f30-e4975ea6d043", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdin", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Enter the task description of the to-do list: Give me a 4-week to-do list plan for a wedding reception party.\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"task_description = input(\"Enter the task description of the to-do list:\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 317, |
||||
"id": "5493da44-e254-4d06-b973-a8069c2fc625", |
||||
"metadata": { |
||||
"scrolled": true |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"result = generate_to_do_list(task_description)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 318, |
||||
"id": "5e95c722-ce1a-4630-b21a-1e00e7ba6ab9", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/html": [ |
||||
"<p>You can download your to-do list by clicking the link below:</p>" |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.HTML object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
}, |
||||
{ |
||||
"data": { |
||||
"text/html": [ |
||||
"<a href='to_do_list.txt' target='_blank'>to_do_list.txt</a><br>" |
||||
], |
||||
"text/plain": [ |
||||
"C:\\Users\\Legion\\to-do list using ollama\\to_do_list.txt" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"display(HTML(\"<p>You can download your to-do list by clicking the link below:</p>\"))\n", |
||||
"display(FileLink(result))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f3d0a44e-bca4-4944-8593-1761c2f73a70", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,126 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d25b0aef-3e5e-4026-90ee-2b373bf262b7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 0: Import libraries and load environment variables\n", |
||||
"import os\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv(\"OPENAI_API_KEY\")\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it does not start with 'sk-proj-'! Please ensure you are using the right key.\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end! Please remove them.\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n", |
||||
"\n", |
||||
"# Step 1: Create prompts\n", |
||||
"print(\"[INFO] Creating system prompt ...\")\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of \\\n", |
||||
" email texts and suggests short subject lines for the email based \\\n", |
||||
" on the requested tone and language. Respond in markdown.\"\n", |
||||
"\n", |
||||
"print(\"[INFO] Creating user prompt ...\")\n", |
||||
"user_prompt = \"\"\"\n", |
||||
" The text below is an e-mail text for which you are required to \\\n", |
||||
" provide subject lines. Please provide two snarky, two funny, and \\\n", |
||||
" two formal short subject lines for the email text. Each of the six \\\n", |
||||
" subject lines should be presented in both English and French \\\n", |
||||
" languages, making a total of 12 subject lines. Please provide your \\\n", |
||||
" answer in markdown.\\\n", |
||||
" \n", |
||||
" \\n\\n\n", |
||||
" \n", |
||||
" Welcome to arXiv!\n", |
||||
"\n", |
||||
" Thank you for creating an account and joining the arXiv community. We look\n", |
||||
" forward to receiving your contribution.\n", |
||||
"\n", |
||||
" Help Pages\n", |
||||
" An overview on how to navigate and use arXiv can be found here:\n", |
||||
" https://arxiv.org/help\n", |
||||
" https://arxiv.org/about\n", |
||||
"\n", |
||||
" If you would like to know more about the submission process, please go here:\n", |
||||
" https://arxiv.org/help/submit\n", |
||||
"\n", |
||||
" Before Submitting to arXiv\n", |
||||
" The arXiv.org e-print archive is fully automated and processes nearly\n", |
||||
" 1,000 new submissions per day. To help us keep the process running smoothly\n", |
||||
" and efficiently please check your submission carefully for mistakes, typos\n", |
||||
" and layout issues. Once you have submitted your work please check your account\n", |
||||
" frequently for verification messages and other communication from arXiv.\n", |
||||
"\n", |
||||
" Contacting arXiv\n", |
||||
" We have provided extensive help pages to guide you through the process and\n", |
||||
" to answer the most common questions. If you have problems with the submission\n", |
||||
" process please contact us here:\n", |
||||
" https://arxiv.org/help/contact\n", |
||||
" We aim to assist submitters within one business day, but during times of high\n", |
||||
" volume or maintenance work we may be slightly delayed in our response.\n", |
||||
"\n", |
||||
" Thank you for your cooperation.\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# Step 2: Make messages list\n", |
||||
"print(\"[INFO] Making messages list ...\")\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
"]\n", |
||||
"\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"print(\"[INFO] Calling OpenAI ...\")\n", |
||||
"openai = OpenAI()\n", |
||||
"response = openai.chat.completions.create(\n", |
||||
" model=\"gpt-4o-mini\",\n", |
||||
" messages=messages\n", |
||||
" )\n", |
||||
"\n", |
||||
"# Step 4: Print result\n", |
||||
"print(\"[INFO] Print result ...\")\n", |
||||
"display(Markdown(response.choices[0].message.content))\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b0a6676e-fb43-4725-9389-2acd74c13c4e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.12.8" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,129 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d25b0aef-3e5e-4026-90ee-2b373bf262b7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 0: Import Libraries\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"import ollama\n", |
||||
"from openai import OpenAI\n", |
||||
"import requests\n", |
||||
"\n", |
||||
"# Step 1: Set Constants and Variables\n", |
||||
"print(\"[INFO] Setting constants and variable ...\")\n", |
||||
"WEBSITE_URL = \"https://arxiv.org/\"\n", |
||||
"MODEL = \"llama3.2\"\n", |
||||
"approaches = [\"local-call\", \"python-package\", \"openai-python-library\"]\n", |
||||
"approach = approaches[2]\n", |
||||
"\n", |
||||
"# Step 1: Scrape Website\n", |
||||
"print(\"[INFO] Scraping website ...\")\n", |
||||
"url_response = requests.get(\n", |
||||
" url=WEBSITE_URL,\n", |
||||
" headers={\"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"}\n", |
||||
" )\n", |
||||
"soup = BeautifulSoup(\n", |
||||
" markup=url_response.content,\n", |
||||
" features=\"html.parser\"\n", |
||||
" )\n", |
||||
"website_title = soup.title.string if soup.title else \"No title found!!!\"\n", |
||||
"for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
"website_text = soup.body.get_text(\n", |
||||
" separator=\"\\n\",\n", |
||||
" strip=True\n", |
||||
" )\n", |
||||
"\n", |
||||
"# Step 2: Create Prompts\n", |
||||
"print(\"[INFO] Creating system prompt ...\")\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a \\\n", |
||||
" website and provides a short summary, ignoring text that might be \\\n", |
||||
" navigation related. Respond in markdown.\"\n", |
||||
"\n", |
||||
"print(\"[INFO] Creating user prompt ...\")\n", |
||||
"user_prompt = f\"You are looking at a website titled {website_title}\"\n", |
||||
"user_prompt += \"\\nBased on the contents of the website, please provide \\\n", |
||||
" a short summary of this website in markdown. If the website \\\n", |
||||
" includes news or announcements, summarize them, too. The contents \\\n", |
||||
" of this website are as follows:\\n\\n\"\n", |
||||
"user_prompt += website_text\n", |
||||
"\n", |
||||
"# Step 3: Make Messages List\n", |
||||
"print(\"[INFO] Making messages list ...\")\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
"]\n", |
||||
"\n", |
||||
"# Step 4: Call Model and Print Results\n", |
||||
"if approach == \"local-call\":\n", |
||||
" response = requests.post(\n", |
||||
" url=\"http://localhost:11434/api/chat\",\n", |
||||
" json={\n", |
||||
" \"model\": MODEL,\n", |
||||
" \"messages\": messages,\n", |
||||
" \"stream\": False\n", |
||||
" },\n", |
||||
" headers={\"Content-Type\": \"application/json\"}\n", |
||||
" )\n", |
||||
" print(\"[INFO] Printing result ...\")\n", |
||||
" display(Markdown(response.json()[\"message\"][\"content\"]))\n", |
||||
"elif approach == \"python-package\":\n", |
||||
" response = ollama.chat(\n", |
||||
" model=MODEL,\n", |
||||
" messages=messages,\n", |
||||
" stream=False\n", |
||||
" )\n", |
||||
" print(\"[INFO] Printing result ...\")\n", |
||||
" display(Markdown(response[\"message\"][\"content\"]))\n", |
||||
"elif approach == \"openai-python-library\":\n", |
||||
" ollama_via_openai = OpenAI(\n", |
||||
" base_url=\"http://localhost:11434/v1\",\n", |
||||
" api_key=\"ollama\"\n", |
||||
" )\n", |
||||
" response = ollama_via_openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=messages\n", |
||||
" )\n", |
||||
" print(\"[INFO] Printing result ...\")\n", |
||||
" display(Markdown(response.choices[0].message.content))\n", |
||||
"else:\n", |
||||
" raise ValueError(f\"[INFO] Invalid approach! Please select an approach from {approaches} and try again.\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b0a6676e-fb43-4725-9389-2acd74c13c4e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.12.8" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,530 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Day 1 LLM Project with Groq!\n", |
||||
"\n", |
||||
"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from groq import Groq\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "5d899ad6-1428-481b-b308-750308d80442", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"If you are getting the error ModuleNotFoundError: No module named 'groq', follow the steps below.\n", |
||||
"\n", |
||||
"1. Activate the llms environment from Anaconda, so that (llms) is showing in your prompt, as this is the environment where the package will be installed. Then install the package with pip:\n", |
||||
"\n", |
||||
"(base) PS C:\\Users\\test\\OneDrive\\Desktop\\AI\\projects\\llm_engineering> conda activate llms\n", |
||||
"(llms) PS C:\\Users\\test\\OneDrive\\Desktop\\AI\\projects\\llm_engineering> pip install groq\n", |
||||
"\n", |
||||
"\n", |
||||
"2. After you install a new package, you'd need to restart the kernel in JupyterLab for each notebook (Kernel >> Restart Kernel and Clear Outputs of All Cells).\n", |
||||
"\n", |
||||
"You can also run this command in jupyter lab to see whether it's installed:\n", |
||||
"\n", |
||||
"!pip show groq\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "99c0c3c9-fa5e-405e-8453-2a557dc60c09", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!pip show groq" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Connecting to GROQ\n", |
||||
"\n", |
||||
"The next cell is where we load in the environment variables in your `.env` file and connect to GROQ.\n", |
||||
"\n", |
||||
"Your .env file should contain the following entry:\n", |
||||
"\n", |
||||
"GROQ_API_KEY=gsk_xxxxxx\n", |
||||
"\n", |
||||
"Groq API keys can be created by logging in at the link below:\n", |
|||| 
"https://console.groq.com/keys\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('GROQ_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"gsk_\"):\n", |
||||
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"groq = Groq()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's make a quick call to a Frontier model to get started, as a preview!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling Groq with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n", |
||||
"\n", |
||||
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n", |
||||
"response = groq.chat.completions.create(model=\"llama-3.3-70b-versatile\", messages=[{\"role\":\"user\", \"content\":message}])\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "2aa190e5-cb31-456a-96cc-db109919cd78", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## OK onwards with our first project" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c5e793b2-6775-426a-a139-4848291d0463", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's try one out. Change the website and add print statements to follow along.\n", |
||||
"\n", |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"print(ed.title)\n", |
||||
"print(ed.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Types of prompts\n", |
||||
"\n", |
||||
"You may know this already - but if not, you will get very familiar with it!\n", |
||||
"\n", |
||||
"Models like GPT4o have been trained to receive instructions in a particular way.\n", |
||||
"\n", |
||||
"They expect to receive:\n", |
||||
"\n", |
||||
"**A system prompt** that tells them what task they are performing and what tone they should use\n", |
||||
"\n", |
||||
"**A user prompt** -- the conversation starter that they should reply to" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(user_prompt_for(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Messages\n", |
||||
"\n", |
||||
"Similar to OPENAI GROQ APIs share this structure:\n", |
||||
"\n", |
||||
"```\n", |
||||
"[\n", |
||||
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n", |
||||
"]\n", |
||||
"\n", |
||||
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the might GPT (yet!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling Groq with system and user messages:\n", |
||||
"\n", |
||||
"response = groq.chat.completions.create(model=\"llama-3.3-70b-versatile\", messages=messages)\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## And now let's build useful messages for LLAMA3.3, using a function" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Try this out, and then try for a few more websites\n", |
||||
"\n", |
||||
"messages_for(ed)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Time to bring it together - the API for GROQ is very simple!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the GROQ API\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = groq.chat.completions.create(\n", |
||||
" model = \"llama-3.3-70b-versatile\",\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summarize(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3d926d59-450e-4609-92ba-2d6f244f1342", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3018853a-445f-41ff-9560-d925d1774b2f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's try more websites\n", |
||||
"\n", |
||||
"Note that this will only work on websites that can be scraped using this simplistic approach.\n", |
||||
"\n", |
||||
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n", |
||||
"\n", |
||||
"Also Websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n", |
||||
"\n", |
||||
"But many websites will work just fine!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "45d83403-a24c-44b5-84ac-961449b4008f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://cnn.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "75e9fd40-b354-4341-991e-863ef2e59db7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#181;\">Business applications</h2>\n", |
||||
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n", |
||||
"\n", |
||||
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n", |
||||
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
"system_prompt = \"something here\"\n", |
||||
"user_prompt = \"\"\"\n", |
||||
" Lots of text\n", |
||||
" Can be pasted here\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"\n", |
||||
"messages = [] # fill this in\n", |
||||
"\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"\n", |
||||
"response =\n", |
||||
"\n", |
||||
"# Step 4: print the result\n", |
||||
"\n", |
||||
"print(" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## An extra exercise for those who enjoy web scraping\n", |
||||
"\n", |
||||
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Sharing your code\n", |
||||
"\n", |
||||
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n", |
||||
"\n", |
||||
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n", |
||||
"\n", |
||||
"Here are good instructions courtesy of an AI friend: \n", |
||||
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
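The message-building pattern this notebook walks through can be sanity-checked outside Jupyter with plain Python. This is a minimal sketch with no network call: it only verifies the shape of the payload (a list of system/user role dicts) that OpenAI-style chat APIs, including Groq's, accept as `messages=...`; the prompt strings here are just illustrative.

```python
# Build the two-message payload that OpenAI-style chat APIs (including Groq's) accept.
# No network access needed - this only checks the shape of the request.

def messages_for(system_prompt, user_prompt):
    """Return a chat payload: one system message, then one user message."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

payload = messages_for(
    "You are an assistant that analyzes the contents of a website",
    "You are looking at a website titled Example",
)

# The payload is a plain list of dicts, ready to pass as `messages=...`
print([m["role"] for m in payload])  # ['system', 'user']
```

The same list would be passed unchanged to `groq.chat.completions.create(model=..., messages=payload)` as shown in the notebook's cells.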
@ -1,530 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## DAY1 LLM Project with GROQ!\n", |
||||
"\n", |
||||
"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from groq import Groq\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "5d899ad6-1428-481b-b308-750308d80442", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"If you are getting error ModuleNotFoundError: No module named 'groq' follow below steps.\n", |
||||
"\n", |
||||
"1. Activate llms enviornment from Anaconda, so that (llms) is showing in your prompt, as this is the environment where the package will get installed.Install pip here. \n", |
||||
"\n", |
||||
"(base) PS C:\\Users\\test\\OneDrive\\Desktop\\AI\\projects\\llm_engineering> conda activate llms\n", |
||||
"(llms) PS C:\\Users\\test\\OneDrive\\Desktop\\AI\\projects\\llm_engineering> pip install groq\n", |
||||
"\n", |
||||
"\n", |
||||
"2. After you install a new package, you'd need to restart the Kernel in jupyter lab for each notebook (Kernel >> Restart Kernel and Clear Values Of All Outputs).\n", |
||||
"\n", |
||||
"You can also run this command in jupyter lab to see whether it's installed:\n", |
||||
"\n", |
||||
"!pip show groq\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "99c0c3c9-fa5e-405e-8453-2a557dc60c09", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!pip show groq" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Connecting to GROQ\n", |
||||
"\n", |
||||
"The next cell is where we load in the environment variables in your `.env` file and connect to GROQ.\n", |
||||
"\n", |
||||
".env file should have below entry\n", |
||||
"\n", |
||||
"GROQ_API_KEY=gsk_xxxxxx\n", |
||||
"\n", |
||||
"GROQ keys can be configired by logging to below link\n", |
||||
"https://console.groq.com/keys\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('GROQ_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"gsk_\"):\n", |
||||
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"groq = Groq()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's make a quick call to a Frontier model to get started, as a preview!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling Groq with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n", |
||||
"\n", |
||||
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n", |
||||
"response = groq.chat.completions.create(model=\"llama-3.3-70b-versatile\", messages=[{\"role\":\"user\", \"content\":message}])\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "2aa190e5-cb31-456a-96cc-db109919cd78", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## OK onwards with our first project" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c5e793b2-6775-426a-a139-4848291d0463", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's try one out. Change the website and add print statements to follow along.\n", |
||||
"\n", |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"print(ed.title)\n", |
||||
"print(ed.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Types of prompts\n", |
||||
"\n", |
||||
"You may know this already - but if not, you will get very familiar with it!\n", |
||||
"\n", |
||||
"Models like GPT4o have been trained to receive instructions in a particular way.\n", |
||||
"\n", |
||||
"They expect to receive:\n", |
||||
"\n", |
||||
"**A system prompt** that tells them what task they are performing and what tone they should use\n", |
||||
"\n", |
||||
"**A user prompt** -- the conversation starter that they should reply to" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(user_prompt_for(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Messages\n", |
||||
"\n", |
||||
"Similar to OPENAI GROQ APIs share this structure:\n", |
||||
"\n", |
||||
"```\n", |
||||
"[\n", |
||||
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n", |
||||
"]\n", |
||||
"\n", |
||||
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the might GPT (yet!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling Groq with system and user messages:\n", |
||||
"\n", |
||||
"response = groq.chat.completions.create(model=\"llama-3.3-70b-versatile\", messages=messages)\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## And now let's build useful messages for LLAMA3.3, using a function" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Try this out, and then try for a few more websites\n", |
||||
"\n", |
||||
"messages_for(ed)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Time to bring it together - the API for GROQ is very simple!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the GROQ API\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = groq.chat.completions.create(\n", |
||||
" model = \"llama-3.3-70b-versatile\",\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summarize(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3d926d59-450e-4609-92ba-2d6f244f1342", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3018853a-445f-41ff-9560-d925d1774b2f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's try more websites\n", |
||||
"\n", |
||||
"Note that this will only work on websites that can be scraped using this simplistic approach.\n", |
||||
"\n", |
||||
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n", |
||||
"\n", |
||||
"Also Websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n", |
||||
"\n", |
||||
"But many websites will work just fine!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "45d83403-a24c-44b5-84ac-961449b4008f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://cnn.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "75e9fd40-b354-4341-991e-863ef2e59db7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#181;\">Business applications</h2>\n", |
||||
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n", |
||||
"\n", |
||||
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n", |
||||
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
"system_prompt = \"something here\"\n", |
||||
"user_prompt = \"\"\"\n", |
||||
" Lots of text\n", |
||||
" Can be pasted here\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"\n", |
||||
"messages = [] # fill this in\n", |
||||
"\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"\n", |
||||
"response = None  # replace None with a call to openai.chat.completions.create(...)\n", |
||||
"\n", |
||||
"# Step 4: print the result\n", |
||||
"\n", |
||||
"print(response)  # fill this in: print response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## An extra exercise for those who enjoy web scraping\n", |
||||
"\n", |
||||
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses JavaScript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)" |
||||
] |
||||
}, |
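{ |
 "cell_type": "code", |
 "execution_count": null, |
 "id": "selenium-sketch-hypothetical", |
 "metadata": {}, |
 "outputs": [], |
 "source": [ |
  "# A minimal sketch of fetching a JavaScript-rendered page with Selenium - an illustrative\n", |
  "# assumption, not the official solution (see community-contributions for student versions).\n", |
  "# Requires `pip install selenium` plus a Chrome driver on your machine. Uncomment to try:\n", |
  "\n", |
  "# from selenium import webdriver\n", |
  "# from selenium.webdriver.chrome.options import Options\n", |
  "\n", |
  "# options = Options()\n", |
  "# options.add_argument(\"--headless\")\n", |
  "# driver = webdriver.Chrome(options=options)\n", |
  "# driver.get(\"https://openai.com\")\n", |
  "# body_text = driver.find_element(\"tag name\", \"body\").text\n", |
  "# driver.quit()\n", |
  "# print(body_text[:500])" |
 ] |
}, |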
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Sharing your code\n", |
||||
"\n", |
||||
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like to add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n", |
||||
"\n", |
||||
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clear Outputs of All Cells, and then Save) for clean notebooks.\n", |
||||
"\n", |
||||
"Here are good instructions courtesy of an AI friend: \n", |
||||
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,131 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 2, |
||||
"id": "f3c6d883-58a2-47de-823f-3c7430cffcc9", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"\"Airbrush or Air Bust? Let's Find Out!\"\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"\n", |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
"system_prompt = \"You will take the body of an email and evaluate it to suggest a brief snarky subject\"\n", |
||||
"user_prompt = \"\"\"\n", |
||||
"Dear Air Brush Customer Service Team,\n", |
||||
"\n", |
||||
"I hope this message finds you well. I am writing to formally lodge a complaint regarding the airbrush product I purchased from your store. Unfortunately, the product I received is defective and does not meet the quality standards as advertised.\n", |
||||
"\n", |
||||
"Below are the details of my issue:\n", |
||||
"\n", |
||||
"Order Number: #12345\n", |
||||
"\n", |
||||
"Product Name: Air Brush model 123\n", |
||||
"\n", |
||||
"Date of Purchase: 18/1/2025\n", |
||||
"\n", |
||||
"Issue Description:\n", |
||||
"Defective Nozzle: The nozzle of the airbrush is clogged and does not allow proper airflow, making it impossible to use.\n", |
||||
"\n", |
||||
"Inconsistent Spray Pattern: Even after multiple attempts to clean and adjust the settings, the spray pattern is uneven and inconsistent.\n", |
||||
"\n", |
||||
"Leakage: The airbrush leaks air and paint from the joints, which is a significant safety hazard.\n", |
||||
"\n", |
||||
"Build Quality: The overall build quality of the product feels subpar, with loose fittings and a flimsy trigger mechanism.\n", |
||||
"\n", |
||||
"Steps Taken:\n", |
||||
"I followed the user manual and cleaning instructions provided, but the issues persist.\n", |
||||
"\n", |
||||
"I also reached out to your technical support team on [Date] but have not received a resolution.\n", |
||||
"\n", |
||||
"Expectation:\n", |
||||
"Given the defective nature of the product, I would like to request a full refund for the item. Alternatively, if a refund is not possible, I would appreciate a replacement with a fully functional unit.\n", |
||||
"\n", |
||||
"Attachments:\n", |
||||
"I have attached photos and a video demonstrating the issues for your reference.\n", |
||||
"\n", |
||||
"Copies of the invoice and order confirmation are also attached for your convenience.\n", |
||||
"\n", |
||||
"Request for Resolution:\n", |
||||
"Kindly let me know the next steps to process the refund or replacement. I would appreciate a prompt response within [X business days, e.g., 3-5 business days] to resolve this matter.\n", |
||||
"\n", |
||||
"Thank you for your attention to this issue. I trust that you will handle this matter professionally and ensure customer satisfaction.\n", |
||||
"\n", |
||||
"Looking forward to your swift response.\n", |
||||
"\n", |
||||
"Best regards,\n", |
||||
"Oya YILDIZ\n", |
||||
"İstanbul\n", |
||||
"Turkey\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
"]\n", |
||||
"\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n", |
||||
"\n", |
||||
"# Step 4: print the result\n", |
||||
"\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d9b655de-e8c3-4136-b6a6-2fb0ce01c364", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,189 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "c7d95a7f-205a-4262-a1af-4579489025ff", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Hello everyone." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "bc815dbc-acf7-45f9-a043-5767184c44c6", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"I completed the Day 1 first LLM experiment moments ago and found it really awesome. After the challenge was done, I wanted to chip in my two cents by making a PDF summarizer, based on the code for the Website Summarizer. I want to share it in this contribution!\n", |
||||
"### To consider:\n", |
||||
"* To extract the contents of PDF files, I used the PyPDF2 library, which doesn't come with the default configuration of the virtual environment. To remedy the situation, you need to follow the steps:\n", |
||||
"    1. Shut down Jupyter Lab. Pressing `CTRL-C` in the Anaconda terminal should achieve this.\n", |
||||
" 2. Run the following command, `pip install PyPDF2 --user`\n", |
||||
" 3. Restart Jupyter lab with `jupyter lab`\n", |
||||
"* To find PDF files online, you can add `filetype:pdf` to your search query, e.g. searching the following can give you PDF files to add as input: `AI Engineering prompts filetype:pdf`!\n", |
||||
"\n", |
||||
"Without further ado, here's the PDF Summarizer!" |
||||
] |
||||
}, |
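{ |
 "cell_type": "code", |
 "execution_count": null, |
 "id": "install-pypdf2-sketch-hypothetical", |
 "metadata": {}, |
 "outputs": [], |
 "source": [ |
  "# The install steps above can also be run from a notebook cell - a convenience sketch,\n", |
  "# assuming pip is available in this kernel's environment. Uncomment if PyPDF2 is missing:\n", |
  "\n", |
  "# !pip install PyPDF2 --user" |
 ] |
}, |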
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "06b63787-c6c8-4868-8a71-eb56b7618626", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Import statements\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"from io import BytesIO\n", |
||||
"from PyPDF2 import PdfReader" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "284ca770-5da4-495c-b1cf-637727a8609f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
"    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d4c316d7-d9c9-4400-b03e-1dd629c6b2ad", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", |
||||
"# If it STILL doesn't work (horrors!) then please see the troubleshooting notebook, or try the below line instead:\n", |
||||
"# openai = OpenAI(api_key=\"your-key-here-starting-sk-proj-\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3a053092-f4f6-4156-8721-39353c8a9367", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 0: Create article class\n", |
||||
"class Article:\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Article object from the given url using the PyPDF2 library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url \n", |
||||
" response = requests.get(self.url)\n", |
||||
" if response.status_code == 200:\n", |
||||
" pdf_bytes = BytesIO(response.content)\n", |
||||
" reader = PdfReader(pdf_bytes)\n", |
||||
" \n", |
||||
" text = \"\"\n", |
||||
" for page in reader.pages:\n", |
||||
" text += page.extract_text()\n", |
||||
" \n", |
||||
" self.text = text\n", |
||||
" self.title = reader.metadata.get(\"/Title\", \"No title found\")\n", |
||||
" else:\n", |
||||
" print(f\"Failed to fetch PDF. Status code: {response.status_code}\")\n", |
||||
" self.text = \"No text found\"\n", |
||||
" self.title = \"No title found\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "adc528f2-25ca-47b5-896e-9d417ba0195f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
"def craft_user_prompt(article):\n", |
||||
" user_prompt = f\"You are looking at a research article titled {article.title}\\n Based on the body of the article, how are micro RNAs produced in the cell? State the function of the proteins \\\n", |
||||
" involved. The body of the article is as follows.\"\n", |
||||
" user_prompt += article.text\n", |
||||
" return user_prompt\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"def craft_messages(article):\n", |
||||
"    system_prompt = \"You are an assistant that analyses the contents of a research article and provides answers to the question asked by the user in 250 words or less. \\\n", |
||||
" Ignore text that doesn't belong to the article, like headers or navigation related text. Respond in markdown. Structure your text in the form of question/answer.\"\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": craft_user_prompt(article)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "81ab896e-1ba9-4964-a477-2a0608b7036c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 3: Call OpenAI\n", |
||||
"def summarize(url):\n", |
||||
" article = Article(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = craft_messages(article)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a7a98cdf-0d3b-477d-8e39-a6a4264b9feb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 4: Print the result of an example pdf\n", |
||||
"summary = summarize(\"https://www.nature.com/articles/s12276-023-01050-9.pdf\")\n", |
||||
"display(Markdown(summary))" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.10" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,159 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
"    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0d2d5441-2afe-41b9-8039-c367acd715f9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c5e793b2-6775-426a-a139-4848291d0463", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7c7e0988-8f2d-4844-a847-eebec76b114a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"website = \"https://www.screener.in/company/CMSINFO/\"\n", |
||||
"biz = Website(website)\n", |
||||
"user_prompt = \"Give a short summary of the business \" + biz.text + \" and recommend pros and cons of the business in bullet points along with a recommendation to buy or sell\"\n", |
||||
"print(user_prompt)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"website = \"https://www.screener.in/company/CMSINFO/\"\n", |
||||
"biz = Website(website)\n", |
||||
"\n", |
||||
"system_prompt = \"You are an equity research analyst. Analyze the content of the website and give a summary of the business\"\n", |
||||
"user_prompt = \"Give a short summary of the business \" + biz.text + \" and recommend pros and cons of the business in bullet points along with a recommendation to buy or sell\"\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
"]\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"\n", |
||||
"# To give you a preview -- calling OpenAI with system and user messages:\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n", |
||||
"# Step 4: print the result\n", |
||||
"\n", |
||||
"print(response.choices[0].message.content)\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d9edf96e-1190-44fe-9261-405709fb39cd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,651 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Instant Gratification\n", |
||||
"\n", |
||||
"## Your first Frontier LLM Project!\n", |
||||
"\n", |
||||
"Let's build a useful LLM solution - in a matter of minutes.\n", |
||||
"\n", |
||||
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n", |
||||
"\n", |
||||
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n", |
||||
"\n", |
||||
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n", |
||||
"\n", |
||||
"## If you're new to Jupyter Lab\n", |
||||
"\n", |
||||
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n", |
||||
"\n", |
||||
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n", |
||||
"\n", |
||||
"## If you'd prefer to work in IDEs\n", |
||||
"\n", |
||||
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n", |
||||
"If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n", |
||||
"\n", |
||||
"## If you'd like to brush up your Python\n", |
||||
"\n", |
||||
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n", |
||||
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n", |
||||
"\n", |
||||
"## I am here to help\n", |
||||
"\n", |
||||
"If you have any problems at all, please do reach out. \n", |
||||
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!)\n", |
||||
"\n", |
||||
"## More troubleshooting\n", |
||||
"\n", |
||||
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n", |
||||
"\n", |
||||
"## If this is old hat!\n", |
||||
"\n", |
||||
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n", |
||||
"\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Please read - important note</h2>\n", |
||||
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n", |
||||
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Connecting to OpenAI\n", |
||||
"\n", |
||||
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n", |
||||
"\n", |
||||
"## Troubleshooting if you have problems:\n", |
||||
"\n", |
||||
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n", |
||||
"\n", |
||||
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n", |
||||
"\n", |
||||
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", |
||||
"\n", |
||||
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
"    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", |
||||
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's make a quick call to a Frontier model to get started, as a preview!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n", |
||||
"\n", |
||||
"message = \"Tell me about a way to analyse what people do in a video clip.\"\n", |
||||
"#response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n", |
||||
"#print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "2aa190e5-cb31-456a-96cc-db109919cd78", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## OK onwards with our first project" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c5e793b2-6775-426a-a139-4848291d0463", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's try one out. Change the website and add print statements to follow along.\n", |
||||
"\n", |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"print(ed.title)\n", |
||||
"print(ed.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Types of prompts\n", |
||||
"\n", |
||||
"You may know this already - but if not, you will get very familiar with it!\n", |
||||
"\n", |
||||
"Models like GPT4o have been trained to receive instructions in a particular way.\n", |
||||
"\n", |
||||
"They expect to receive:\n", |
||||
"\n", |
||||
"**A system prompt** that tells them what task they are performing and what tone they should use\n", |
||||
"\n", |
||||
"**A user prompt** -- the conversation starter that they should reply to" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(user_prompt_for(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Messages\n", |
||||
"\n", |
||||
"The API from OpenAI expects to receive messages in a particular structure.\n", |
||||
"Many of the other APIs share this structure:\n", |
||||
"\n", |
||||
"```\n", |
||||
"[\n", |
||||
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n", |
||||
"]\n", |
||||
"\n", |
||||
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the might GPT (yet!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling OpenAI with system and user messages:\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## And now let's build useful messages for GPT-4o-mini, using a function" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Try this out, and then try for a few more websites\n", |
||||
"\n", |
||||
"messages_for(ed)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Time to bring it together - the API for OpenAI is very simple!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the OpenAI API. You will get very familiar with this!\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summary = summarize(\"https://edwarddonner.com\")\n", |
||||
"print(summary)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3d926d59-450e-4609-92ba-2d6f244f1342", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3018853a-445f-41ff-9560-d925d1774b2f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's try more websites\n", |
||||
"\n", |
||||
"Note that this will only work on websites that can be scraped using this simplistic approach.\n", |
||||
"\n", |
||||
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n", |
||||
"\n", |
||||
"Also Websites protected with CloudFront (and similar) may give 403 errors - many thanks Andy J for pointing this out.\n", |
||||
"\n", |
||||
"But many websites will work just fine!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "45d83403-a24c-44b5-84ac-961449b4008f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"urls = ['https://be-able.info/de/be-able/', \"https://taz.de/\", \"https://www.bundestagswahl-bw.de/wahlprogramm-gruene\"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "75e9fd40-b354-4341-991e-863ef2e59db7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(urls[0])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#181;\">Business applications</h2>\n", |
||||
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n", |
||||
"\n", |
||||
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n", |
||||
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "856ff857-ba5f-4596-90b9-cd6cee4073dc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Naive extraction of name of the political party from user input\n", |
||||
"\n", |
||||
"party_mapping = {\"grünen\": \"grüne\", \"grüne\": \"grüne\", \"linken\": \"linke\", \"spd\": \"spd\", \"cdu\": \"cdu\", \"cdu/csu\": \"cdu\", \"csu\": \"cdu\", \"fdp\": \"fdp\", \"afd\": \"afd\", \"bsw\": \"bsw\"}\n", |
||||
"\n", |
||||
"def extract_party_from_user_prompt(user_input):\n", |
||||
" toks = user_input.split()\n", |
||||
" for tok in toks:\n", |
||||
" tok = tok.lower()\n", |
||||
" if tok in party_mapping.keys():\n", |
||||
" return party_mapping[tok]\n", |
||||
" return \"I can only answer your question concerning the election program of a certain political party. Mention one of 'FDP', 'BSW', 'Grüne', 'Linke', 'SPD', 'CDU' or 'AFD' in your question and I will try my best.\"\n" |
||||
] |
||||
}, |
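||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9c2f1d34-6a7b-4e5c-8d90-1f2a3b4c5d6e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A quick sanity check of the naive extraction - these example questions are made up\n", |
||||
"# The first should print a party key, the second should print the fallback help message\n", |
||||
"\n", |
||||
"print(extract_party_from_user_prompt(\"Was sagt die SPD zum Klimaschutz?\"))\n", |
||||
"print(extract_party_from_user_prompt(\"Tell me about the weather\"))" |
||||
] |
||||
}, |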
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3962d846-ce82-47d2-8c3f-5a6fe296710d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"from selenium import webdriver\n", |
||||
"from selenium.webdriver.common.by import By\n", |
||||
"\n", |
||||
"def get_election_program(partyname):\n", |
||||
" \"\"\"Scrape parties' election programs from the official election website. Naively ignore cookie banner stuff.\"\"\"\n", |
||||
"\n", |
||||
" # Download the browser driver for your OS and add the path here\n", |
||||
" browser_driver_path = r'C:\\Program Files\\BrowserDrivers\\geckodriver.exe'\n", |
||||
" \n", |
||||
" service = webdriver.firefox.service.Service(executable_path=browser_driver_path)\n", |
||||
" \n", |
||||
" parties = {\"grüne\": \"https://www.bundestagswahl-bw.de/wahlprogramm-gruene\",\n", |
||||
" \"spd\": \"https://www.bundestagswahl-bw.de/wahlprogramm-spd\",\n", |
||||
" \"cdu\": \"https://www.bundestagswahl-bw.de/wahlprogramm-cdu\",\n", |
||||
" \"linke\": \"https://www.bundestagswahl-bw.de/wahlprogramm-die-linke\",\n", |
||||
" \"fdp\": \"https://www.bundestagswahl-bw.de/wahlprogramm-fdp\",\n", |
||||
" \"afd\": \"https://www.bundestagswahl-bw.de/wahlprogramm-afd\",\n", |
||||
" \"bsw\": \"https://www.bundestagswahl-bw.de/wahlprogramm-bsw\"}\n", |
||||
" \n", |
||||
" election_prog = \"\"\n", |
||||
" \n", |
||||
" if partyname in parties.keys():\n", |
||||
" site = parties[partyname]\n", |
||||
" driver = webdriver.Firefox(service=service)\n", |
||||
" driver.get(site)\n", |
||||
" elements = driver.find_elements(By.TAG_NAME, 'p')\n", |
||||
" \n", |
||||
" for e in elements:\n", |
||||
" if not any(x in [\"Cookies\", \"Cookie\", \"akzeptiere\", \"Datenschutzerklärung\", \"Impressum\"] for x in e.text.split()) and e.text:\n", |
||||
" election_prog += e.text\n", |
||||
" if len(election_prog.split()) > 100:\n", |
||||
" print(\"Election program extracted.\")\n", |
||||
"\n", |
||||
" else:\n", |
||||
" election_prog = f\"Schade, für die Partei {partyname} konnte ich leider kein Wahlprogramm finden.\"\n", |
||||
" \n", |
||||
" driver.quit()\n", |
||||
" return election_prog" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b3a408d1-d824-4e33-a5f4-c672bc6c6198", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"\n", |
||||
"def answer_my_election_program_question(input_from_user):\n", |
||||
" partyname = extract_party_from_user_prompt(input_from_user)\n", |
||||
" print(f\"This is a question about the political party: {partyname.capitalize()}\")\n", |
||||
" \n", |
||||
" # Step 1: Create your prompts\n", |
||||
" system_prompt = \"Du bist ein neutraler Beobachter, der aufgrund der ihm zur Verfügung gestellten Wahlprogramme Fragen zum Wahlprogramm der verschiedenen Parteien beantwortet. Beantworte Fragen zum Wahlprogramm auf Deutsch. Basiere deine Antwort ausschließlich auf den im Folgenden aufgeführten Informationen.\"\n", |
||||
" election_program = get_election_program(partyname)\n", |
||||
" \n", |
||||
" user_prompt = f\"Beantworte folgende Frage: \\n {input_from_user} \\n Verwende dafür folgende Infos: \\n {election_program}.\\n\\n Gib deine Antwort in Markdown aus.\"\n", |
||||
" \n", |
||||
" # Step 2: Make the messages list\n", |
||||
" \n", |
||||
" messages = [{\"role\": \"system\", \"content\": system_prompt}, {\"role\": \"user\", \"content\": user_prompt}] # fill this in\n", |
||||
" \n", |
||||
" # Step 3: Call OpenAI\n", |
||||
" \n", |
||||
" response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n", |
||||
" formatted_response = f\"\\n\\n{response.choices[0].message.content}\"\n", |
||||
" # Step 4: print the result\n", |
||||
" return formatted_response" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e66a0967-d1e9-4f92-aeb6-95e478465a1f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Ask questions about the election programs of the main political parties for the Bundestagswahl 2025 in Germany\n", |
||||
"\n", |
||||
"question = \"Wie verhält sich die SPD zu Verkehrsfragen und Klimaschutz?\"\n", |
||||
"answer = answer_my_election_program_question(question)\n", |
||||
"display(Markdown(answer))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## An extra exercise for those who enjoy web scraping\n", |
||||
"\n", |
||||
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Sharing your code\n", |
||||
"\n", |
||||
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n", |
||||
"\n", |
||||
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n", |
||||
"\n", |
||||
"Here are good instructions courtesy of an AI friend: \n", |
||||
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,127 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0ee39d65-f27d-416d-8b46-43d15aebe752", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Below is a sample for email reviewer using Bahasa Indonesia. " |
||||
] |
||||
}, |
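||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3b8c0a12-7d4e-4f9a-b1c2-d3e4f5a6b7c8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Setup - this notebook assumes an `openai` client like the one created in the day 1 notebook.\n", |
||||
"# If you run this notebook standalone, load your OPENAI_API_KEY from a .env file first:\n", |
||||
"\n", |
||||
"import os\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |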
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f9fd62af-9b14-490b-8d0b-990da96101bf", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
"system_prompt = \"Anda adalah seorang Asisten untuk menganalisa email berdasarkan user prompt yang nanti akan diberikan. Summarize the email and give me a tone about that email\"\n", |
||||
"user_prompt = \"\"\"\n", |
||||
" Subject: Permintaan Pertemuan\n", |
||||
"\n", |
||||
"Yang terhormat Bapak Rijal,\n", |
||||
"\n", |
||||
"Saya ingin meminta waktu Anda untuk membahas Generative AI untuk bisnis. Apakah Anda tersedia pada besok pukul 19:00? \n", |
||||
"Jika tidak, mohon beri tahu waktu yang lebih sesuai bagi Anda.\n", |
||||
"\n", |
||||
"Terima kasih atas perhatian Anda.\n", |
||||
"\n", |
||||
"Salam,\n", |
||||
"\n", |
||||
"Mentari\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ] # fill this in\n", |
||||
"\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages\n", |
||||
" )\n", |
||||
"\n", |
||||
"# Step 4: print the result\n", |
||||
"\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d10208fa-02d8-41a0-b9bb-0bf30f237f25", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
"system_prompt = \"Anda adalah seorang Asisten untuk menganalisa email berdasarkan user prompt yang nanti akan diberikan. Summarize the email and give me a tone about that email\"\n", |
||||
"user_prompt = \"\"\"\n", |
||||
" Subject: Feedback terkait Bapak\n", |
||||
"\n", |
||||
"Yang terhormat Bapak Rijal,\n", |
||||
"\n", |
||||
"Saya ingin memberikan sedikit feedback untuk BBapak.\n", |
||||
"\n", |
||||
"Kemampuan Anda dalam memimpin tim ini mampu membawa saya dan rekan lainnya untuk mengerahkan semua kemampuan saya agar jadi lebih baik.\n", |
||||
"Selama ini saya cukup senang bekerja dengan Anda karena memberikan saya peluang untuk mencoba banyak hal baru. Tapi ada beberapa kekhawatiran yang mau saya sampaikan, terutama terkait target yang perlu dicapai oleh tim. Saya pikir melihat performa ke belakang, target yang ditentukan harus lebih realistis lagi.\n", |
||||
"Saya beruntung bisa berkesempatan bekerja dengan Anda sehingga banyak ilmu yang saya dapat. Kira-kira untuk ke depannya, hal apa lagi yang bisa tim ini tingkatkan agar kita bisa mencapai target yang lebih baik?\n", |
||||
"Selama ini, banyak terjadi miskomunikasi dalam pekerjaan. Dan menurut saya salah satunya karena arahan yang Anda berikan kurang jelas dan kurang ditangkap sepenuhnya oleh anggota yang lain. Saya dan tim berharap ke depan bisa mendapatkan arahan yang lebih jelas dan satu arah.\n", |
||||
"\n", |
||||
"Terima kasih atas perhatian Anda.\n", |
||||
"\n", |
||||
"Salam,\n", |
||||
"\n", |
||||
"Mentari\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ] # fill this in\n", |
||||
"\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages\n", |
||||
" )\n", |
||||
"\n", |
||||
"# Step 4: print the result\n", |
||||
"\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,611 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# YOUR FIRST LAB\n", |
||||
"### Please read this section. This is valuable to get you prepared, even if it's a long read -- it's important stuff.\n", |
||||
"\n", |
||||
"## Your first Frontier LLM Project\n", |
||||
"\n", |
||||
"Let's build a useful LLM solution - in a matter of minutes.\n", |
||||
"\n", |
||||
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n", |
||||
"\n", |
||||
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n", |
||||
"\n", |
||||
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n", |
||||
"\n", |
||||
"## If you're new to Jupyter Lab\n", |
||||
"\n", |
||||
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n", |
||||
"\n", |
||||
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n", |
||||
"\n", |
||||
"## If you're new to the Command Line\n", |
||||
"\n", |
||||
"Please see these excellent guides: [Command line on PC](https://chatgpt.com/share/67b0acea-ba38-8012-9c34-7a2541052665) and [Command line on Mac](https://chatgpt.com/canvas/shared/67b0b10c93a081918210723867525d2b). \n", |
||||
"\n", |
||||
"## If you'd prefer to work in IDEs\n", |
||||
"\n", |
||||
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n", |
||||
"If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n", |
||||
"\n", |
||||
"## If you'd like to brush up your Python\n", |
||||
"\n", |
||||
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n", |
||||
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n", |
||||
"\n", |
||||
"## I am here to help\n", |
||||
"\n", |
||||
"If you have any problems at all, please do reach out. \n", |
||||
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!) \n", |
||||
"And this is new to me, but I'm also trying out X/Twitter at [@edwarddonner](https://x.com/edwarddonner) - if you're on X, please show me how it's done 😂 \n", |
||||
"\n", |
||||
"## More troubleshooting\n", |
||||
"\n", |
||||
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n", |
||||
"\n", |
||||
"## If this is old hat!\n", |
||||
"\n", |
||||
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n", |
||||
"\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Please read - important note</h2>\n", |
||||
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, <b>after</b> watching the lecture. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#f71;\">Treat these labs as a resource</h2>\n", |
||||
" <span style=\"color:#f71;\">I push updates to the code regularly. When people ask questions or have problems, I incorporate it in the code, adding more examples or improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here; but in addition, I've added more steps and better explanations, and occasionally added new models like DeepSeek. Consider this like an interactive book that accompanies the lectures.\n", |
||||
" </span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n", |
||||
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Connecting to OpenAI\n", |
||||
"\n", |
||||
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n", |
||||
"\n", |
||||
"## Troubleshooting if you have problems:\n", |
||||
"\n", |
||||
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n", |
||||
"\n", |
||||
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n", |
||||
"\n", |
||||
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", |
||||
"\n", |
||||
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", |
||||
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's make a quick call to a Frontier model to get started, as a preview!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n", |
||||
"\n", |
||||
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "2aa190e5-cb31-456a-96cc-db109919cd78", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## OK onwards with our first project" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c5e793b2-6775-426a-a139-4848291d0463", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's try one out. Change the website and add print statements to follow along.\n", |
||||
"\n", |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"print(ed.title)\n", |
||||
"print(ed.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Types of prompts\n", |
||||
"\n", |
||||
"You may know this already - but if not, you will get very familiar with it!\n", |
||||
"\n", |
||||
"Models like GPT-4o have been trained to receive instructions in a particular way.\n",
||||
"\n", |
||||
"They expect to receive:\n", |
||||
"\n", |
||||
"**A system prompt** that tells them what task they are performing and what tone they should use\n", |
||||
"\n", |
||||
"**A user prompt** -- the conversation starter that they should reply to" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n",
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
    user_prompt += \"\\nThe contents of this website are as follows: \\\n",
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(user_prompt_for(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Messages\n", |
||||
"\n", |
||||
"The API from OpenAI expects to receive messages in a particular structure.\n", |
||||
"Many of the other APIs share this structure:\n", |
||||
"\n", |
||||
"```\n", |
||||
"[\n", |
||||
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n", |
||||
"]\n", |
||||
"```\n",
"\n",
||||
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling OpenAI with system and user messages:\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## And now let's build useful messages for GPT-4o-mini, using a function" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Try this out, and then try for a few more websites\n", |
||||
"\n", |
||||
"messages_for(ed)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Time to bring it together - the API for OpenAI is very simple!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the OpenAI API. You will get very familiar with this!\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summarize(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3d926d59-450e-4609-92ba-2d6f244f1342", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3018853a-445f-41ff-9560-d925d1774b2f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's try more websites\n", |
||||
"\n", |
||||
"Note that this will only work on websites that can be scraped using this simplistic approach.\n", |
||||
"\n", |
||||
"Websites that are rendered with Javascript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n", |
||||
"\n", |
||||
"Also, websites protected by CloudFront (and similar) may give 403 errors - many thanks to Andy J for pointing this out.\n",
||||
"\n", |
||||
"But many websites will work just fine!" |
||||
] |
||||
}, |
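  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f2e1c3a-7b64-4d2e-8a1f-0c5d6e7f8a9b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A quick optional check before summarizing a site - an illustrative sketch,\n",
    "# not part of the course code. Sites behind CloudFront (and similar) often\n",
    "# return 403 to simple requests like ours; a 200 status here suggests this\n",
    "# simplistic approach should work.\n",
    "\n",
    "response = requests.get(\"https://cnn.com\", headers=headers)\n",
    "print(response.status_code)"
   ]
  },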
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "45d83403-a24c-44b5-84ac-961449b4008f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://cnn.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "75e9fd40-b354-4341-991e-863ef2e59db7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#181;\">Business applications</h2>\n", |
||||
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n", |
||||
"\n", |
||||
"More specifically, we've applied this to Summarization - a classic Gen AI use case. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n",
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n", |
||||
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
"system_prompt = \"You are a head chef at a Michelin-starred restaurant who has a diverse skillset \\\n",
||||
"and loves to teach new and interesting recipes to home chefs. Given an input of several ingredients, \\\n",
||||
"provide step-by-step instructions for what could be cooked, for any cuisine of your choice. Respond in markdown.\"\n",
||||
"\n", |
||||
"user_prompt = \"\"\"\n", |
||||
"You are a Michelin-starred head chef with a passion for teaching home chefs. \n", |
||||
"I have the following ingredients: \n", |
||||
"\n", |
||||
"**[Chicken breast, Bell peppers, cherry tomatoes, spinach, Basmati rice,\n", |
||||
"Garlic, basil, black pepper, smoked paprika]** \n", |
||||
"\n", |
||||
"Can you provide a step-by-step recipe using these ingredients? You can choose any cuisine that best fits them. \n", |
||||
"Please include cooking times, techniques, and any chef tips for enhancing flavors. \n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]\n", |
||||
"\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages\n", |
||||
" )\n", |
||||
"\n", |
||||
"\n", |
||||
"\n", |
||||
"# Step 4: print the result\n", |
||||
"def display_summary(summary):\n", |
||||
" display(Markdown(summary))\n", |
||||
"display_summary(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## An extra exercise for those who enjoy web scraping\n", |
||||
"\n", |
||||
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)" |
||||
] |
||||
}, |
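  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4d8a2b1c-3e5f-4a6b-9c7d-1e2f3a4b5c6d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch of a Selenium-based alternative to the Website class.\n",
    "# This is an illustrative example, not the course's implementation: it assumes\n",
    "# you have run `pip install selenium` and have Chrome with a matching driver.\n",
    "\n",
    "from selenium import webdriver\n",
    "from selenium.webdriver.chrome.options import Options\n",
    "\n",
    "class RenderedWebsite:\n",
    "    def __init__(self, url):\n",
    "        options = Options()\n",
    "        options.add_argument(\"--headless\")\n",
    "        driver = webdriver.Chrome(options=options)\n",
    "        driver.get(url)\n",
    "        self.url = url\n",
    "        self.title = driver.title\n",
    "        self.text = driver.find_element(\"tag name\", \"body\").text\n",
    "        driver.quit()"
   ]
  },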
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Sharing your code\n", |
||||
"\n", |
||||
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like to add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n",
||||
"\n", |
||||
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it, it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n",
||||
"\n", |
||||
"Here are good instructions courtesy of an AI friend: \n", |
||||
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f4484fcf-8b39-4c3f-9674-37970ed71988", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,233 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "1b8f7ac7-7089-427a-8f63-57211da7e691", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Summarizing Research Papers" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "641d5c00-ff09-4697-9c87-5de5df1469f8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1a6a2864-fd9d-43e2-b0ca-1476c0153077", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n",
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "340e3166-5aa7-4bcf-9cf0-e2fc776dc322", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "73198fb7-581f-42ac-99a6-76c56c86248d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a research paper fetched from a web page\n",
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Paper:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
        Create this Paper object from the given url using the BeautifulSoup library\n",
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3b39c3ad-d238-418e-9e6a-55a4fd717ebc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Insert Paper URL\n",
||||
"res = Paper(\" \")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "83bc1eec-4187-4c6c-b188-3f72564351f1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"\"\"You are a research paper summarizer. You take the url of the research paper and extract the following:\n", |
||||
"1) Title and Author of the research paper.\n", |
||||
"2) Year it was published\n",
||||
"3) Objective or aim of the research to specify why the research was conducted\n", |
||||
"4) Background or Introduction to explain the need to conduct this research or any topics the readers must have knowledge about\n", |
||||
"5) Type of research/study/experiment to explain what kind of research it is.\n", |
||||
"6) Methods or methodology to explain what the researchers did to conduct the research\n", |
||||
"7) Results and key findings to explain what the researchers found\n", |
||||
"8) Conclusion to explain what conclusions can be drawn from this research, including limitations and future directions\"\"\""
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4aba1b51-9a72-4325-8c86-3968b9d3172e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(paper):\n", |
||||
" user_prompt = f\"You are looking at a website titled {paper.title}\"\n", |
||||
    user_prompt += \"\\nThe contents of this paper are as follows: \\\n",
||||
"please provide a short summary of this paper in markdown. \\\n", |
||||
"If it includes additional headings, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += paper.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "659cb3c4-8a02-493d-abe7-20da9219e358", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"def messages_for(paper):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(paper)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "08ea1193-1bbb-40de-ba64-d02ffe109372", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages_for(res)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e07d00e7-1b87-4ca8-a69d-4a206e34a2b2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the OpenAI API. You will get very familiar with this!\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" paper = Paper(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(paper)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5c12df95-1700-47ee-891b-96b0a7227bdd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "05cff05f-2b74-44a4-9dbd-57c08f8f56cb", |
||||
"metadata": { |
||||
"scrolled": true |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Insert Paper URL in the quotes below\n", |
||||
"display_summary(\" \")" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,316 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "1c6700cb-a0b0-4ac2-8fd5-363729284173", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# AI-Powered Resume Analyzer for Job Postings" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "a2fa4891-b283-44de-aa63-f017eb9b140d", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"This tool is designed to analyze resumes against specific job postings, offering valuable insights such as:\n", |
||||
"\n", |
||||
"- Identification of skill gaps\n", |
||||
"- Keyword matching between the CV and the job description\n", |
||||
"- Tailored recommendations for CV improvement\n", |
||||
"- An alignment score reflecting how well the CV fits the job\n", |
||||
"- Personalized feedback \n", |
||||
"- Job market trend insights\n", |
||||
"\n", |
||||
"An example of the tool's output can be found [here](https://tvarol.github.io/sideProjects/AILLMAgents/output.html)." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8a6a34ea-191f-4c54-9793-a3eb63faab23", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Imports\n", |
||||
"import os\n", |
||||
"import io\n", |
||||
"import time\n", |
||||
"import requests\n", |
||||
"import PyPDF2\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"from ipywidgets import Textarea, FileUpload, Button, VBox, HTML" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "04bbe1d3-bacc-400c-aed2-db44699e38f3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found!!!\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "27bfcee1-58e6-4ff2-9f12-9dc5c1aa5b5b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "c82e79f2-3139-4520-ac01-a728c11cb8b9", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Using a Frontier Model GPT-4o Mini for This Project\n", |
||||
"\n", |
||||
"### Types of Prompts\n", |
||||
"\n", |
||||
"Models like GPT-4o have been trained to receive instructions in a particular way.\n",
||||
"\n", |
||||
"They expect to receive:\n", |
||||
"\n", |
||||
"**A system prompt** that tells them what task they are performing and what tone they should use\n", |
||||
"\n", |
||||
"**A user prompt** -- the conversation starter that they should reply to" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0da158ad-c3a8-4cef-806f-be0f90852996", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt \n", |
||||
"system_prompt = \"\"\"You are a powerful AI model designed to assist with resume analysis. Your task is to analyze a resume against a given job posting and provide feedback on how well the resume aligns with the job requirements. Your response should include the following: \n", |
||||
"1) Skill gap identification: Compare the skills listed in the resume with those required in the job posting, highlighting areas where the resume may be lacking or overemphasized.\n", |
||||
"2) Keyword matching between a CV and a job posting: Match keywords from the job description with the resume, determining how well they align. Provide specific suggestions for missing keywords to add to the CV.\n", |
||||
"3) Recommendations for CV improvement: Provide actionable suggestions on how to enhance the resume, such as adding missing skills or rephrasing experience to match job requirements.\n", |
||||
"4) Alignment score: Display a score that represents the degree of alignment between the resume and the job posting.\n", |
||||
"5) Personalized feedback: Offer tailored advice based on the job posting, guiding the user on how to optimize their CV for the best chances of success.\n", |
||||
"6) Job market trend insights: Provide broader market trends and insights, such as in-demand skills and salary ranges.\n", |
||||
"Provide responses that are concise, clear, and to the point. Respond in markdown.\"\"\"" |
||||
] |
||||
}, |
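The six feedback areas enumerated in the system prompt can be mirrored by a small structure on the client side, which is handy if you later want to parse or validate the model's response. This is an illustrative sketch only; the class and field names are assumptions, not part of the notebook's code.

```python
from dataclasses import dataclass, field

# Hypothetical container mirroring the six feedback areas requested
# in the system prompt above; names are illustrative only.
@dataclass
class ResumeFeedback:
    skill_gaps: list = field(default_factory=list)
    missing_keywords: list = field(default_factory=list)
    recommendations: list = field(default_factory=list)
    alignment_score: int = 0          # e.g. on a 0-100 scale
    personalized_feedback: str = ""
    market_trends: str = ""

fb = ResumeFeedback(alignment_score=72, missing_keywords=["Kubernetes"])
print(fb.alignment_score, fb.missing_keywords)  # → 72 ['Kubernetes']
```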
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ebdb34b0-85bd-4e36-933a-20c3c42e833b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# The job posting and the CV are required to define the user prompt\n", |
||||
"# The user will input the job posting as text in a box here\n", |
||||
"# The user will upload the CV in PDF format, from which the text will be extracted\n", |
||||
"\n", |
||||
"# You might need to install PyPDF2 via pip if it's not already installed\n", |
||||
"# !pip install PyPDF2\n", |
||||
"\n", |
||||
"# Create widgets - to create a box for the job posting text\n", |
||||
"job_posting_area = Textarea(\n", |
||||
" placeholder='Paste the job posting text here...',\n", |
||||
" description='Job Posting:',\n", |
||||
" disabled=False,\n", |
||||
" layout={'width': '800px', 'height': '300px'}\n", |
||||
")\n", |
||||
"\n", |
||||
"# Define file upload for CV\n", |
||||
"cv_upload = FileUpload(\n", |
||||
" accept='.pdf', # Only accept PDF files\n", |
||||
" multiple=False, # Only allow single file selection\n", |
||||
" description='Upload CV (PDF)'\n", |
||||
")\n", |
||||
"\n", |
||||
"status = HTML(value=\"<b>Status:</b> Waiting for inputs...\")\n", |
||||
"\n", |
||||
"# Create Submit Buttons\n", |
||||
"submit_cv_button = Button(description='Submit CV', button_style='success')\n", |
||||
"submit_job_posting_button = Button(description='Submit Job Posting', button_style='success')\n", |
||||
"\n", |
||||
"# Initialize variables to store the data\n", |
||||
"# This dictionary will hold the text for both the job posting and the CV\n", |
||||
"# It will be used to define the user_prompt\n", |
||||
"for_user_prompt = {\n", |
||||
" 'job_posting': '',\n", |
||||
" 'cv_text': ''\n", |
||||
"}\n", |
||||
"\n", |
||||
"# Functions\n", |
||||
"def submit_cv_action(change):\n", |
||||
"\n", |
||||
"    # Guard: nothing to process if no file has been chosen yet\n", |
|||| 
"    if not cv_upload.value:\n", |
|||| 
"        status.value = \"<b>Status:</b> Please upload a CV before submitting.\"\n", |
|||| 
"        return\n", |
|||| 
"    \n", |
|||| 
"    if cv_upload.value:\n", |
||||
" # Get the uploaded file\n", |
||||
" uploaded_file = cv_upload.value[0]\n", |
||||
" content = io.BytesIO(uploaded_file['content'])\n", |
||||
" \n", |
||||
" try:\n", |
||||
" pdf_reader = PyPDF2.PdfReader(content) \n", |
||||
" cv_text = \"\"\n", |
||||
" for page in pdf_reader.pages: \n", |
||||
" cv_text += page.extract_text() \n", |
||||
" \n", |
||||
" # Store CV text in for_user_prompt\n", |
||||
" for_user_prompt['cv_text'] = cv_text\n", |
||||
" status.value = \"<b>Status:</b> CV uploaded and processed successfully!\"\n", |
||||
" except Exception as e:\n", |
||||
" status.value = f\"<b>Status:</b> Error processing PDF: {str(e)}\"\n", |
||||
"\n", |
||||
" time.sleep(0.5) # Short pause between upload and submit messages to display both\n", |
||||
" \n", |
||||
" if for_user_prompt['cv_text']:\n", |
||||
" #print(\"CV Submitted:\")\n", |
||||
" #print(for_user_prompt['cv_text'])\n", |
||||
" status.value = \"<b>Status:</b> CV submitted successfully!\"\n", |
||||
" \n", |
||||
"def submit_job_posting_action(b):\n", |
||||
" for_user_prompt['job_posting'] = job_posting_area.value\n", |
||||
" if for_user_prompt['job_posting']:\n", |
||||
" #print(\"Job Posting Submitted:\")\n", |
||||
" #print(for_user_prompt['job_posting'])\n", |
||||
" status.value = \"<b>Status:</b> Job posting submitted successfully!\"\n", |
||||
" else:\n", |
||||
" status.value = \"<b>Status:</b> Please enter a job posting before submitting.\"\n", |
||||
"\n", |
||||
"# Attach actions to buttons\n", |
||||
"submit_cv_button.on_click(submit_cv_action)\n", |
||||
"submit_job_posting_button.on_click(submit_job_posting_action)\n", |
||||
"\n", |
||||
"# Layout\n", |
||||
"job_posting_box = VBox([job_posting_area, submit_job_posting_button])\n", |
||||
"cv_buttons = VBox([submit_cv_button])\n", |
||||
"\n", |
||||
"# Display all widgets\n", |
||||
"display(VBox([\n", |
||||
" HTML(value=\"<h3>Input Job Posting and CV</h3>\"),\n", |
||||
" job_posting_box, \n", |
||||
" cv_upload,\n", |
||||
" cv_buttons,\n", |
||||
" status\n", |
||||
"]))" |
||||
] |
||||
}, |
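The page-by-page extraction loop inside `submit_cv_action` can be exercised without a real PDF by substituting a stub reader. This is a sketch that assumes only the interface the cell relies on: a reader exposing `.pages`, where each page has `.extract_text()`, as PyPDF2's `PdfReader` does. The stub class names are invented for illustration.

```python
# Stub objects standing in for PyPDF2's PdfReader and its pages,
# so the extraction loop can be shown (and tested) without a PDF file.
class StubPage:
    def __init__(self, text):
        self._text = text

    def extract_text(self):
        return self._text

class StubReader:
    def __init__(self, pages):
        self.pages = [StubPage(t) for t in pages]

def extract_cv_text(reader):
    # Same loop as in the cell above: concatenate text from every page
    cv_text = ""
    for page in reader.pages:
        cv_text += page.extract_text()
    return cv_text

print(extract_cv_text(StubReader(["Skills: Python. ", "Experience: 5 years."])))
# → Skills: Python. Experience: 5 years.
```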
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "364e42a6-0910-4c7c-8c3c-2ca7d2891cb6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Now define user_prompt using for_user_prompt dictionary\n", |
||||
"# Clearly label each input to differentiate the job posting and CV\n", |
||||
"# The model can parse and analyze each section based on these labels\n", |
||||
"user_prompt = f\"\"\"\n", |
||||
"Job Posting: \n", |
||||
"{for_user_prompt['job_posting']}\n", |
||||
"\n", |
||||
"CV: \n", |
||||
"{for_user_prompt['cv_text']}\n", |
||||
"\"\"\"" |
||||
] |
||||
}, |
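The labelling step above can be wrapped in a small helper so the assembled prompt is easy to test in isolation. This is a sketch; the function name is an assumption.

```python
def build_user_prompt(job_posting, cv_text):
    # Label each section so the model can tell the posting and CV apart,
    # matching the f-string layout in the cell above
    return f"""
Job Posting: 
{job_posting}

CV: 
{cv_text}
"""

prompt = build_user_prompt("Senior Python Engineer", "10 years of Python")
print(prompt)
```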
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "3b51dda0-9a0c-48f4-8ec8-dae32c29da24", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Messages\n", |
||||
"\n", |
||||
"The API from OpenAI expects to receive messages in a particular structure.\n", |
||||
"Many of the other APIs share this structure:\n", |
||||
"\n", |
||||
"```\n", |
||||
"[\n", |
||||
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n", |
||||
"]\n", |
|||| 
"```" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3262c0b9-d3de-4e4f-b535-a25c0aed5783", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define messages with system_prompt and user_prompt\n", |
||||
"def messages_for(system_prompt_input, user_prompt_input):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt_input},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_input}\n", |
||||
" ]" |
||||
] |
||||
}, |
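Because many APIs share this structure, a multi-turn conversation is just more entries appended to the same list. A sketch (the message contents here are illustrative):

```python
def messages_for(system_prompt_input, user_prompt_input):
    # One system message followed by one user message, as described above
    return [
        {"role": "system", "content": system_prompt_input},
        {"role": "user", "content": user_prompt_input},
    ]

# Extending the history: append the model's reply, then the next user turn
history = messages_for("You analyze resumes.", "Here is my CV...")
history.append({"role": "assistant", "content": "Your CV lacks some keywords."})
history.append({"role": "user", "content": "Which keywords should I add?"})
print([m["role"] for m in history])
# → ['system', 'user', 'assistant', 'user']
```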
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2409ac13-0b39-4227-b4d4-b4c0ff009fd7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the OpenAI API. \n", |
||||
"response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(system_prompt, user_prompt)\n", |
||||
")\n", |
||||
"\n", |
||||
"# Response is provided in Markdown and displayed accordingly\n", |
||||
"display(Markdown(response.choices[0].message.content))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "86ab71cf-bd7e-45f7-9536-0486f349bfbe", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"## If you would like to save the response content as a Markdown file, uncomment the following lines\n", |
||||
"#with open('yourfile.md', 'w') as file:\n", |
||||
"# file.write(response.choices[0].message.content)\n", |
||||
"\n", |
||||
"## You can then run the line below to create output.html which you can open on your browser\n", |
||||
"#!pandoc yourfile.md -o output.html" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,195 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "c97ad592-c8be-4583-a19c-ac813e56f410", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Mac Users\n", |
||||
"\n", |
||||
"I ran into some challenges setting this up on an Apple Silicon (M1) Mac. Execute the commands below in the macOS Terminal:\n", |
||||
"\n", |
||||
"1. Download chromedriver.\n", |
||||
"2. Unzip and add it to the path.\n", |
||||
"3. Set Extended attributes." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b635b345-b000-48cc-8a7f-7df279a489a3", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"```bash\n", |
|||| 
"cd ~/Downloads\n", |
|||| 
"wget https://storage.googleapis.com/chrome-for-testing-public/133.0.6943.126/mac-arm64/chromedriver-mac-arm64.zip\n", |
|||| 
"unzip chromedriver-mac-arm64.zip\n", |
|||| 
"sudo mv chromedriver-mac-arm64/chromedriver /usr/local/bin/\n", |
|||| 
"chmod +x /usr/local/bin/chromedriver\n", |
|||| 
"cd /usr/local/bin/\n", |
|||| 
"xattr -d com.apple.quarantine chromedriver\n", |
|||| 
"cd\n", |
|||| 
"chromedriver --version\n", |
|||| 
"```" |
||||
] |
||||
}, |
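After the steps above, you can also confirm from Python that the binary is visible on the PATH, using only the standard library. A small sketch; the helper name is an assumption.

```python
import shutil

def driver_on_path(name="chromedriver"):
    # shutil.which returns the full path if the binary is on PATH, else None
    return shutil.which(name) is not None

print(driver_on_path())  # True once chromedriver is installed and on PATH
print(driver_on_path("no-such-binary-xyz"))  # → False
```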
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "17c7c79a-8ae0-4f5d-a7c8-c54aa7ba90fd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!pip install selenium\n", |
||||
"!pip install undetected-chromedriver\n", |
||||
"!pip install beautifulsoup4" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c10bd630-2dfd-4572-8c21-2dc4c6a372ab", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"from selenium import webdriver\n", |
||||
"from selenium.webdriver.chrome.service import Service\n", |
||||
"from selenium.webdriver.common.by import By\n", |
||||
"from selenium.webdriver.chrome.options import Options\n", |
||||
"from openai import OpenAI\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6fb3641d-e9f8-4f5b-bb9d-ee0e971cccdb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"HEADERS = {\"Content-Type\": \"application/json\"}\n", |
||||
"MODEL = \"llama3.2\"\n", |
||||
"PATH_TO_CHROME_DRIVER = '/usr/local/bin/chromedriver'\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown. Highlight all the products this website offers, and note when the website was created.\"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5d57e958", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Website:\n", |
||||
" url: str\n", |
||||
" title: str\n", |
||||
" text: str\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
"\n", |
||||
" options = Options()\n", |
||||
"\n", |
||||
" options.add_argument(\"--no-sandbox\")\n", |
||||
" options.add_argument(\"--disable-dev-shm-usage\")\n", |
||||
"\n", |
||||
" service = Service(PATH_TO_CHROME_DRIVER)\n", |
||||
" driver = webdriver.Chrome(service=service, options=options)\n", |
||||
" driver.get(url)\n", |
||||
"\n", |
||||
" # input(\"Please complete the verification in the browser and press Enter to continue...\")\n", |
||||
" page_source = driver.page_source\n", |
||||
" driver.quit()\n", |
||||
"\n", |
||||
" soup = BeautifulSoup(page_source, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
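The cleanup that BeautifulSoup performs above (decomposing `script`/`style` tags, then joining the visible text) can also be sketched with the standard library's `html.parser`, which makes it clearer what `decompose()` is removing. This is an illustrative sketch, not a replacement for BeautifulSoup.

```python
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Collects text outside <script>/<style> tags, similar in spirit
    to decomposing those tags with BeautifulSoup."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside skipped tags
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

page = "<title>Hi</title><script>var x=1;</script><p>Hello world</p>"
parser = VisibleText()
parser.feed(page)
print("\n".join(parser.chunks))
# → Hi
#   Hello world
```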
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "56df8cd2-2707-43f6-a066-3367846929b3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
"    user_prompt += \"\\nThe contents of this website are as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt\n", |
||||
"\n", |
||||
"\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]\n", |
||||
"\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", |
||||
" response = ollama_via_openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content\n", |
||||
"\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
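The `OLLAMA_API` and `HEADERS` constants defined earlier point at the raw-HTTP alternative to the OpenAI-compatible client used in `summarize`. A sketch of the JSON body that Ollama's `/api/chat` endpoint expects; the actual request is commented out so nothing needs to be running, and the helper name is an assumption.

```python
def ollama_payload(model, messages, stream=False):
    # Shape of the JSON body for Ollama's /api/chat endpoint
    return {"model": model, "messages": messages, "stream": stream}

payload = ollama_payload("llama3.2", [{"role": "user", "content": "Hi"}])
print(payload["model"], payload["stream"])  # → llama3.2 False

# import requests
# response = requests.post("http://localhost:11434/api/chat",
#                          json=payload,
#                          headers={"Content-Type": "application/json"})
```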
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f2eb9599", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://ae.almosafer.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "31b66c0f-6b45-4986-b77c-758625945a91", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,369 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "83bbedd0-eb58-48de-992e-484071b10104", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Web Scraper with JavaScript Support\n", |
||||
"Uses the day1-webscraping-selenium-for-javascript.ipynb solution, simplified so that it is easy to run.\n", |
||||
"\n", |
||||
"## Install dependencies\n", |
||||
"Uncomment and run once" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f2d91971-9dd0-4714-8ec7-f1fb25f95140", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# !pip install selenium\n", |
||||
"# !pip install undetected-chromedriver\n", |
||||
"# !ollama pull llama3.2" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "967258fe-3296-464c-962d-2bcf821eae67", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Import required dependencies" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fe8a87c8-0475-45a1-8ca2-fb9059e5470b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"import undetected_chromedriver as uc\n", |
||||
"from selenium.webdriver.common.by import By\n", |
||||
"from selenium.webdriver.support.ui import WebDriverWait\n", |
||||
"from selenium.webdriver.support import expected_conditions as EC\n", |
||||
"import time\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "df60545e-2ab6-4e37-b41c-27ddf2affb92", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Run setup" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a3846089-efa2-4602-8bc3-5f6f4945de64", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"chrome_path = \"C:/Program Files/Google/Chrome/Application/chrome.exe\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b835812d-3692-4192-abc4-15fc463bd08f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
"    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
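The three checks in the cell above can be folded into one reusable helper that returns a short diagnostic instead of printing. A sketch; the function name and return strings are assumptions.

```python
def check_api_key(api_key):
    # Mirrors the checks above, in the same order
    if not api_key:
        return "missing"
    if not api_key.startswith("sk-proj-"):
        return "wrong prefix"
    if api_key.strip() != api_key:
        return "surrounding whitespace"
    return "ok"

print(check_api_key("sk-proj-abc123"))   # → ok
print(check_api_key("sk-proj-abc123 "))  # → surrounding whitespace
```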
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "acb89abb-dcee-4da6-98f8-e339d258f2a4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", |
||||
"# If it STILL doesn't work (horrors!) then please see the troubleshooting notebook, or try the below line instead:\n", |
||||
"# openai = OpenAI(api_key=\"your-key-here-starting-sk-proj-\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "e860e963-e7a1-4888-a4b9-db9c24bb9a6e", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Create Prompts" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d4933c36-db8a-4333-8f81-e9db7ba41287", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"\n", |
||||
"\n", |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
"    user_prompt += \"\\nThe contents of this website are as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "17cfab59-304d-4d2f-b324-c388d9e87fca", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Create Functions" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ca5e96e0-4d8f-49de-a608-a735a5b23b1a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Setup for how OpenAI expects to receive messages in a particular structure\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]\n", |
||||
"\n", |
||||
"# Use Selenium and chrome to scrape website\n", |
||||
"class WebsiteCrawler:\n", |
||||
" def __init__(self, url, wait_time=20, chrome_binary_path=None):\n", |
||||
" \"\"\"\n", |
||||
" Initialize the WebsiteCrawler using Selenium to scrape JavaScript-rendered content.\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" self.wait_time = wait_time\n", |
||||
"\n", |
||||
" options = uc.ChromeOptions()\n", |
||||
" options.add_argument(\"--disable-gpu\")\n", |
||||
" options.add_argument(\"--no-sandbox\")\n", |
||||
" options.add_argument(\"--disable-dev-shm-usage\")\n", |
||||
" options.add_argument(\"--disable-blink-features=AutomationControlled\")\n", |
||||
" options.add_argument(\"start-maximized\")\n", |
||||
" options.add_argument(\n", |
||||
" \"user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
" )\n", |
||||
" if chrome_binary_path:\n", |
||||
" options.binary_location = chrome_binary_path\n", |
||||
"\n", |
||||
" self.driver = uc.Chrome(options=options)\n", |
||||
"\n", |
||||
" try:\n", |
||||
" # Load the URL\n", |
||||
" self.driver.get(url)\n", |
||||
"\n", |
||||
" # Wait for Cloudflare or similar checks\n", |
||||
" time.sleep(10)\n", |
||||
"\n", |
||||
" # Ensure the main content is loaded\n", |
||||
" WebDriverWait(self.driver, self.wait_time).until(\n", |
||||
" EC.presence_of_element_located((By.TAG_NAME, \"main\"))\n", |
||||
" )\n", |
||||
"\n", |
||||
" # Extract the main content\n", |
||||
" main_content = self.driver.find_element(By.CSS_SELECTOR, \"main\").get_attribute(\"outerHTML\")\n", |
||||
"\n", |
||||
" # Parse with BeautifulSoup\n", |
||||
" soup = BeautifulSoup(main_content, \"html.parser\")\n", |
||||
" self.title = self.driver.title if self.driver.title else \"No title found\"\n", |
||||
" self.text = soup.get_text(separator=\"\\n\", strip=True)\n", |
||||
"\n", |
||||
" except Exception as e:\n", |
||||
" print(f\"Error occurred: {e}\")\n", |
||||
" self.title = \"Error occurred\"\n", |
||||
" self.text = \"\"\n", |
||||
"\n", |
||||
" finally:\n", |
||||
" self.driver.quit()\n", |
||||
"\n", |
||||
"def new_summary(url, chrome_path):\n", |
||||
" web = WebsiteCrawler(url, 30, chrome_path)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(web)\n", |
||||
" )\n", |
||||
"\n", |
||||
" web_summary = response.choices[0].message.content\n", |
||||
" \n", |
||||
" return display(Markdown(web_summary))" |
||||
] |
||||
}, |
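`WebDriverWait(...).until(...)` in the crawler above is, at its core, a poll-until-timeout loop. A stripped-down sketch of that pattern in plain Python (the helper name is illustrative), which can help when reasoning about what `wait_time` actually controls:

```python
import time

def wait_until(condition, timeout=5.0, poll=0.1):
    """Poll `condition` until it returns a truthy value or `timeout`
    seconds pass -- the same idea as Selenium's WebDriverWait."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError("condition not met within timeout")

# Example: a condition that becomes true on the third poll
checks = iter([False, False, True])
print(wait_until(lambda: next(checks), timeout=2.0, poll=0.01))  # → True
```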
||||
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "e5f974b3-e417-43a2-88f1-8db06096cd53", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Scrape and Summarize Web Page" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "55f240cb-1fca-46bf-81d1-1beeea64439d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"url = \"https://www.canva.com/\"\n", |
||||
"new_summary(url, chrome_path)" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,224 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Connecting to OpenAI\n", |
||||
"\n", |
||||
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n", |
||||
"\n", |
||||
"## Troubleshooting if you have problems:\n", |
||||
"\n", |
||||
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n", |
||||
"\n", |
||||
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n", |
||||
"\n", |
||||
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", |
||||
"\n", |
||||
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
"    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", |
||||
"# If it STILL doesn't work (horrors!) then please see the troubleshooting notebook, or try the below line instead:\n", |
||||
"# openai = OpenAI(api_key=\"your-key-here-starting-sk-proj-\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
"    user_prompt += \"\\nThe contents of this website are as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## An extra exercise for those who enjoy web scraping\n", |
||||
"\n", |
||||
"You may notice that if you try the course example with \"https://openai.com\" - it doesn't work! That's because OpenAI has a fancy website that uses JavaScript. There are many ways around this that some of you might be familiar with. Below is an example created with Playwright." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dca2768e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"! pip install playwright\n", |
||||
"! playwright install" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "682eff74-55c4-4d4b-b267-703edbc293c7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import asyncio\n", |
||||
"from playwright.async_api import async_playwright\n", |
||||
"import nest_asyncio\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"import time\n", |
||||
"\n", |
||||
"nest_asyncio.apply()\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
" title: str\n", |
||||
" text: str\n", |
||||
" url: str\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" \n", |
||||
" async def run(self, playwright):\n", |
||||
" browser = await playwright.chromium.launch(headless=False)\n", |
||||
" page = await browser.new_page()\n", |
||||
" await page.goto(self.url)\n", |
||||
" await page.wait_for_load_state('load')\n", |
||||
" \n", |
||||
" # Extract data from the page\n", |
||||
" self.title = await page.title()\n", |
||||
" text = await page.content()\n", |
||||
" await browser.close()\n", |
||||
" \n", |
||||
" soup = BeautifulSoup(text, 'html.parser')\n", |
||||
" for irrelevant in soup([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.get_text(separator=\"\\n\", strip=True)\n", |
||||
" \n", |
||||
" async def main(self):\n", |
||||
" async with async_playwright() as playwright:\n", |
||||
" await self.run(playwright) \n", |
||||
" \n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]\n", |
||||
"\n", |
||||
"if __name__ == \"__main__\":\n", |
||||
" site = Website('https://openai.com')\n", |
||||
" asyncio.run(site.main())\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(site)\n", |
||||
" )\n", |
||||
"\n", |
||||
" web_summary = response.choices[0].message.content\n", |
||||
" display(Markdown(web_summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "69218dec-749c-412d-84a0-40a10fd80c73", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,623 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Instant Gratification\n", |
||||
"\n", |
||||
"## Your first Frontier LLM Project!\n", |
||||
"\n", |
||||
"Let's build a useful LLM solution - in a matter of minutes.\n", |
||||
"\n", |
||||
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n", |
||||
"\n", |
||||
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n", |
||||
"\n", |
||||
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n", |
||||
"\n", |
||||
"## If you're new to Jupyter Lab\n", |
||||
"\n", |
||||
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n", |
||||
"\n", |
||||
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Lab, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n", |
||||
"\n", |
||||
"If you prefer to work in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Connecting to OpenAI\n", |
||||
"\n", |
||||
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n", |
||||
"\n", |
||||
"## Troubleshooting if you have problems:\n", |
||||
"\n", |
||||
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n", |
||||
"\n", |
||||
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n", |
||||
"\n", |
||||
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", |
||||
"\n", |
||||
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
"    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", |
||||
"# If it STILL doesn't work (horrors!) then please see the troubleshooting notebook, or try the below line instead:\n", |
||||
"# openai = OpenAI(api_key=\"your-key-here-starting-sk-proj-\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's make a quick call to a Frontier model to get started, as a preview!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling OpenAI with these messages is this easy:\n", |
||||
"\n", |
||||
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "2aa190e5-cb31-456a-96cc-db109919cd78", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## OK onwards with our first project" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c5e793b2-6775-426a-a139-4848291d0463", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's try one out. Change the website and add print statements to follow along.\n", |
||||
"\n", |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"print(ed.title)\n", |
||||
"print(ed.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Types of prompts\n", |
||||
"\n", |
||||
"You may know this already - but if not, you will get very familiar with it!\n", |
||||
"\n", |
||||
"Models like GPT-4o have been trained to receive instructions in a particular way.\n", |
||||
"\n", |
||||
"They expect to receive:\n", |
||||
"\n", |
||||
"**A system prompt** that tells them what task they are performing and what tone they should use\n", |
||||
"\n", |
||||
"**A user prompt** -- the conversation starter that they should reply to" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
"    user_prompt += \"\\nThe contents of this website are as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(user_prompt_for(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Messages\n", |
||||
"\n", |
||||
"The API from OpenAI expects to receive messages in a particular structure.\n", |
||||
"Many of the other APIs share this structure:\n", |
||||
"\n", |
||||
"```\n", |
||||
"[\n", |
||||
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n", |
||||
"]\n", |
"```\n", |
"\n", |
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling OpenAI with system and user messages:\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## And now let's build useful messages for GPT-4o-mini, using a function" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Try this out, and then try for a few more websites\n", |
||||
"\n", |
||||
"messages_for(ed)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Time to bring it together - the API for OpenAI is very simple!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the OpenAI API. You will get very familiar with this!\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summarize(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3d926d59-450e-4609-92ba-2d6f244f1342", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3018853a-445f-41ff-9560-d925d1774b2f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's try more websites\n", |
||||
"\n", |
||||
"Note that this will only work on websites that can be scraped using this simplistic approach.\n", |
||||
"\n", |
||||
"Websites that are rendered with JavaScript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n", |
||||
"\n", |
||||
"Also, websites protected by CloudFront (and similar) may give 403 errors - many thanks to Andy J for pointing this out.\n", |
||||
"\n", |
||||
"But many websites will work just fine!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "45d83403-a24c-44b5-84ac-961449b4008f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://cnn.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "75e9fd40-b354-4341-991e-863ef2e59db7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## An extra exercise for those who enjoy web scraping\n", |
||||
"\n", |
||||
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Sharing your code\n", |
||||
"\n", |
||||
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like to add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n", |
||||
"\n", |
||||
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n", |
||||
"\n", |
||||
"PR instructions courtesy of an AI friend: https://chatgpt.com/share/670145d5-e8a8-8012-8f93-39ee4e248b4c" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "0f62a788", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# **Web Scraping for JavaScript Websites**" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dca2768e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# !pip install selenium\n", |
||||
"# !pip install undetected-chromedriver" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "682eff74-55c4-4d4b-b267-703edbc293c7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import undetected_chromedriver as uc\n", |
||||
"from selenium.webdriver.common.by import By\n", |
||||
"from selenium.webdriver.support.ui import WebDriverWait\n", |
||||
"from selenium.webdriver.support import expected_conditions as EC\n", |
||||
"import time\n", |
||||
"from bs4 import BeautifulSoup" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "90ca6dd0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class WebsiteCrawler:\n", |
||||
" def __init__(self, url, wait_time=20, chrome_binary_path=None):\n", |
||||
" \"\"\"\n", |
||||
" Initialize the WebsiteCrawler using Selenium to scrape JavaScript-rendered content.\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" self.wait_time = wait_time\n", |
||||
"\n", |
||||
" options = uc.ChromeOptions()\n", |
||||
" options.add_argument(\"--disable-gpu\")\n", |
||||
" options.add_argument(\"--no-sandbox\")\n", |
||||
" options.add_argument(\"--disable-dev-shm-usage\")\n", |
||||
" options.add_argument(\"--disable-blink-features=AutomationControlled\")\n", |
||||
" options.add_argument(\"start-maximized\")\n", |
||||
" options.add_argument(\n", |
||||
" \"user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
" )\n", |
||||
" if chrome_binary_path:\n", |
||||
" options.binary_location = chrome_binary_path\n", |
||||
"\n", |
||||
" self.driver = uc.Chrome(options=options)\n", |
||||
"\n", |
||||
" try:\n", |
||||
" # Load the URL\n", |
||||
" self.driver.get(url)\n", |
||||
"\n", |
||||
" # Wait for Cloudflare or similar checks\n", |
||||
" time.sleep(10)\n", |
||||
"\n", |
||||
" # Ensure the main content is loaded\n", |
||||
" WebDriverWait(self.driver, self.wait_time).until(\n", |
||||
" EC.presence_of_element_located((By.TAG_NAME, \"main\"))\n", |
||||
" )\n", |
||||
"\n", |
||||
" # Extract the main content\n", |
||||
" main_content = self.driver.find_element(By.CSS_SELECTOR, \"main\").get_attribute(\"outerHTML\")\n", |
||||
"\n", |
||||
" # Parse with BeautifulSoup\n", |
||||
" soup = BeautifulSoup(main_content, \"html.parser\")\n", |
||||
" self.title = self.driver.title if self.driver.title else \"No title found\"\n", |
||||
" self.text = soup.get_text(separator=\"\\n\", strip=True)\n", |
||||
"\n", |
||||
" except Exception as e:\n", |
||||
" print(f\"Error occurred: {e}\")\n", |
||||
" self.title = \"Error occurred\"\n", |
||||
" self.text = \"\"\n", |
||||
"\n", |
||||
" finally:\n", |
||||
" self.driver.quit()\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "947eac30", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"chrome_path = \"C:/Program Files/Google/Chrome/Application/chrome.exe\"\n", |
||||
"url = \"https://www.canva.com/\"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2cba8c91", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def new_summary(url, chrome_path):\n", |
||||
" web = WebsiteCrawler(url, 30, chrome_path)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(web)\n", |
||||
" )\n", |
||||
"\n", |
||||
" web_summary = response.choices[0].message.content\n", |
||||
" \n", |
||||
" return display(Markdown(web_summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "da7f7b16", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"new_summary(url, chrome_path)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7880ce6a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"url = \"https://openai.com\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "337b06da", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"new_summary(url, chrome_path)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9a5d69ea", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,194 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "2112166e-3629-4167-a4cb-0a1a6e549e97", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Hello everyone, \n", |
||||
"The community contributions folder is super motivating. Thanks to Ed for democratising learning with this great idea of sharing. The small piece below is my novice attempt at summarizing content from a Wikipedia page. It is pretty straightforward, but a good learning exercise for me nevertheless. " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "947028c8-30c6-456a-8e0c-25e0de1ecbb6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!pip install wikipedia" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "aa18a060-6dbe-42c9-bc11-c8b079397d6b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Import statements\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"import wikipedia\n", |
||||
"import warnings" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8d9c128d-ed7d-4e58-8cd1-1468242c7967", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To suppress a warning from the wikipedia module when there are multiple options.\n", |
||||
"warnings.filterwarnings(\"ignore\", category=UserWarning, module=\"wikipedia\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5371f405-e628-4b6a-a5ab-5774c1431749", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
"    print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e6610504-bd7b-459f-9722-0044b3101e05", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", |
||||
"# If it STILL doesn't work (horrors!) then please see the troubleshooting notebook, or try the below line instead:\n", |
||||
"# openai = OpenAI(api_key=\"your-key-here-starting-sk-proj-\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ac37741a-2608-4760-8ba8-163fb9155f0f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Wikipedia:\n", |
||||
" def __init__(self, searchText):\n", |
||||
" \"\"\"\n", |
||||
" Create this object to extract the summary of a Wikipedia page for text entered by the user\n", |
||||
" \"\"\"\n", |
||||
" self.searchText = searchText\n", |
||||
" self.summary_text = None\n", |
||||
" self.user_prompt = None\n", |
||||
" \n", |
||||
" self._fetch_summary()\n", |
||||
"\n", |
||||
" def _fetch_summary(self):\n", |
||||
" \"\"\"\n", |
||||
" Fetches the summary from the Wikipedia page based on the user-entered search text and sets the user prompt accordingly\n", |
||||
" \"\"\"\n", |
||||
" try:\n", |
||||
" # Try to get the summary of the text from Wikipedia based on the user-entered text, using the straightforward summary function from the wikipedia module.\n", |
||||
" self.summary_text = wikipedia.summary(self.searchText)\n", |
||||
" self.user_prompt = f\"You are looking a summary extract from a wikipedia page. The content is as follows\\n {self.summary_text}.\\nProvide \\\n", |
||||
" a summary taking key points from each sections listed on the page\"\n", |
||||
" except wikipedia.DisambiguationError as e:\n", |
||||
" #Modify user and system prompts if there are multiple options for a user search text\n", |
||||
" self.user_prompt = f\"You have received quite a few options {e.options} for the keyword {self.searchText}. Please request user to choose one of them\"\n", |
||||
" except wikipedia.PageError:\n", |
||||
" #To handle when there is no page\n", |
||||
" self.user_prompt = f\"There is no wiki page for {self.searchText}. Apparently it is not your fault!\"\n", |
||||
" except Exception as e:\n", |
||||
" # To handle any other exceptions\n", |
||||
" self.user_prompt = f\"Sorry, something seems to be wrong on my end. Please try again later\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "143c203e-bb99-49c6-89a2-2a32ea429719", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Our by-now familiar sumamrize function\n", |
||||
"def summarize(searchText):\n", |
||||
" wiki = Wikipedia(searchText)\n", |
||||
" system_prompt = f\"You are an assitant trying to summarize content from Wikipedia. You will have three scenarios to handle your responses \\\n", |
||||
" 1. You will have the summary text content and you will just show that to user\\\n", |
||||
" 2. You will have multiple options for the user entered keyword, and you will respond by asking user to choose from that and request again \\\n", |
||||
" 3. You will not have the content due to a page not found error. Respond accordingly.\\\n", |
||||
" Respond all of these in Markdown format.\"\n", |
||||
" messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": wiki.user_prompt}\n", |
||||
" ]\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content\n", |
||||
"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b61532fc-189c-4cd8-9402-93d8d8fa8c59", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summary = summarize(\"mukhari\")\n", |
||||
"display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5c3f05f6-acb5-41e4-a521-8d8b8ace0192", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
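The `Wikipedia` class in this notebook maps fetch outcomes (summary found, disambiguation, missing page) to different user prompts. A minimal sketch of that pattern as a pure function, using stub exception classes in place of the real `wikipedia` ones (names and messages here are illustrative):

```python
# Stub exceptions standing in for wikipedia.DisambiguationError / wikipedia.PageError
class DisambiguationError(Exception):
    def __init__(self, options):
        self.options = options

class PageError(Exception):
    pass

def build_user_prompt(search_text, fetch_summary):
    """Return a user prompt based on what fetch_summary(search_text) does."""
    try:
        summary = fetch_summary(search_text)
        return f"You are looking at a summary extract from a Wikipedia page:\n{summary}"
    except DisambiguationError as e:
        return f"There are several options {e.options} for '{search_text}'. Ask the user to choose one."
    except PageError:
        return f"There is no wiki page for '{search_text}'."
    except Exception:
        return "Sorry, something went wrong. Please try again later."
```

Keeping the prompt construction separate from the network fetch makes the exception handling testable with stubbed fetchers, without hitting Wikipedia at all.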
@ -1,276 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "1b6fe0c1-931e-4194-bcfe-0716d8f75b50", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Youtube Video Summarization\n", |
||||
"\n", |
||||
"## My First Frontier LLM Project!\n", |
||||
"\n", |
||||
"Welcome to my first LLM-based project! The goal of this project is to leverage large language models (LLMs) to summarize YouTube videos. Currently, it only supports English transcriptions, so instead of watching the entire video, you can simply read the summary!\n", |
||||
"\n", |
||||
"## Important Note\n", |
||||
"Be mindful when testing with longer videos, as they may consume significant resources and could lead to high costs on your ChatGPT bill.\n", |
||||
"You can switch to Ollama for free usage if you're looking to reduce costs.\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!pip install youtube-transcript-api openai" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a082ddaf-abf5-4e6c-8112-74846c768301", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"\n", |
||||
"from openai import OpenAI\n", |
||||
"from youtube_transcript_api import YouTubeTranscriptApi\n", |
||||
"import re\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c5e793b2-6775-426a-a139-4848291d0463", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class YoutubeVideoID:\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" self.video_id = self.extract_video_id(url)\n", |
||||
"\n", |
||||
" def extract_video_id(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Extracts the YouTube video ID from a given URL.\n", |
||||
" Supports both regular and shortened URLs.\n", |
||||
" \"\"\"\n", |
||||
" # Regular expression to match YouTube video URL and extract the video ID\n", |
||||
" regex = r\"(?:https?:\\/\\/)?(?:www\\.)?(?:youtube\\.com\\/(?:[^\\/\\n\\s]+\\/\\S+\\/|\\S*\\?v=)|(?:youtu\\.be\\/))([a-zA-Z0-9_-]{11})\"\n", |
||||
" match = re.match(regex, url)\n", |
||||
" \n", |
||||
" if match:\n", |
||||
" return match.group(1)\n", |
||||
" else:\n", |
||||
" raise ValueError(\"Invalid YouTube URL\")\n", |
||||
"\n", |
||||
" def __str__(self):\n", |
||||
" return f\"Video ID: {self.video_id}\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Example usage\n", |
||||
"video_url = \"https://www.youtube.com/watch?v=kqaMIFEz15s\"\n", |
||||
"\n", |
||||
"yt_video = YoutubeVideoID(video_url)\n", |
||||
"print(yt_video)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f724be3c-bdeb-4079-b4be-f12608144484", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_transcript(video_id, language='en'):\n", |
||||
" try:\n", |
||||
" # Try to get the transcript in the desired language (Indonesian by default)\n", |
||||
" transcript = YouTubeTranscriptApi.get_transcript(video_id, languages=[language])\n", |
||||
" # Join all the 'text' fields into a single string\n", |
||||
" return \" \".join([item['text'] for item in transcript])\n", |
||||
" except Exception as e:\n", |
||||
" print(f\"Error fetching transcript: {e}\")\n", |
||||
" return None\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "12e302fa-f564-4ec6-a08f-b3b3ce549396", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Fetch transcript using the video ID\n", |
||||
"transcript_text = get_transcript(yt_video.video_id)\n", |
||||
"print(len(transcript_text))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0a0750be-88a1-4e65-9cb8-a0a2f11eecdf", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Function to summarize text using ChatGPT\n", |
||||
"def summarize_text(text):\n", |
||||
" try:\n", |
||||
" system_prompts = \"\"\"\n", |
||||
" You are a helpful assistant who provides concise and accurate summaries of text. Your task is to:\n", |
||||
" \n", |
||||
" - Capture the key points of the content.\n", |
||||
" - Keep the summary brief and easy to understand.\n", |
||||
" - Avoid summarizing overly lengthy texts or breaking them into excessively short summaries.\n", |
||||
" - Use bullet points where appropriate to enhance clarity and structure.\n", |
||||
" \"\"\"\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=\"gpt-4o-mini\",\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": system_prompts},\n", |
||||
" {\"role\": \"user\", \"content\": f\"Summarize the following text:\\n{text}\"}\n", |
||||
" ],\n", |
||||
" max_tokens=200\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content\n", |
||||
" except Exception as e:\n", |
||||
" print(f\"Error summarizing text: {e}\")\n", |
||||
" return None" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ad646bc4-a11a-4c44-b941-54befdbf9bc6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def split_text(text, chunk_size=3000):\n", |
||||
" \"\"\"\n", |
||||
" Splits large text into smaller chunks based on the given chunk size.\n", |
||||
" Ensures that chunks end with a full stop where possible to maintain sentence integrity.\n", |
||||
" \n", |
||||
" :param text: str, the text to be split\n", |
||||
" :param chunk_size: int, maximum size of each chunk (default 3000 characters)\n", |
||||
" :return: list of str, where each str is a chunk of text\n", |
||||
" \"\"\"\n", |
||||
" chunks = []\n", |
||||
" while len(text) > chunk_size:\n", |
||||
" # Find the last full stop within or at the chunk size\n", |
||||
" split_point = text.rfind('.', 0, chunk_size + 1) # +1 to include the period itself if it's at chunk_size\n", |
||||
" if split_point == -1: # No period found within the chunk size\n", |
||||
" split_point = chunk_size\n", |
||||
" \n", |
||||
" # Append the chunk, ensuring we don't strip spaces that might be part of the sentence structure\n", |
||||
" chunks.append(text[:split_point + 1] if split_point != chunk_size else text[:chunk_size])\n", |
||||
" text = text[split_point + 1:] if split_point != chunk_size else text[chunk_size:]\n", |
||||
" \n", |
||||
" # Add the remaining text as the final chunk, only strip if there's content\n", |
||||
" if text:\n", |
||||
" chunks.append(text.strip())\n", |
||||
" \n", |
||||
" return chunks\n", |
||||
"\n", |
||||
"transcript_chunks = split_text(transcript_text)\n", |
||||
"\n", |
||||
"# Now you can summarize each chunk individually\n", |
||||
"summaries = []\n", |
||||
"for chunk in transcript_chunks:\n", |
||||
" summary = summarize_text(chunk)\n", |
||||
" summaries.append(summary)\n", |
||||
"\n", |
||||
"\n", |
||||
"# Combine the individual summaries into one\n", |
||||
"full_summary = \" \".join(summaries)\n", |
||||
"display(Markdown(full_summary))\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6b266fdc-da31-4d79-8982-be77f03be59f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "792c814d-73f8-4c1e-a0bb-b654b40e4d8b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
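The `split_text` chunker is the most intricate piece of this notebook. A self-contained sketch of the same idea (break at the last full stop within each window, fall back to a hard cut), which behaves the same for typical inputs:

```python
def split_text(text, chunk_size=3000):
    """Split text into chunks of at most chunk_size (+1 for a trailing period),
    preferring to break just after the last full stop in each window."""
    chunks = []
    while len(text) > chunk_size:
        # Look for the last '.' within the window (inclusive of position chunk_size)
        split_point = text.rfind('.', 0, chunk_size + 1)
        if split_point == -1:
            # No period found: hard cut at chunk_size
            chunks.append(text[:chunk_size])
            text = text[chunk_size:]
        else:
            # Keep the period with the chunk it ends
            chunks.append(text[:split_point + 1])
            text = text[split_point + 1:]
    if text:
        chunks.append(text.strip())
    return chunks
```

Each chunk can then be summarized independently and the partial summaries joined, exactly as the notebook does with `summarize_text`.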
@ -1,356 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "31d3c4a4-5442-4074-b812-42d60e0a0c04", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"#In this example we will fetch the job description by pasting the URL,then we upload CV. Only then ChatGPT will\n", |
||||
"#analyze CV against the fetched job description. If the CV is a good match then it will write a cover letter.\n", |
||||
"\n", |
||||
"#If \n", |
||||
" ##job posting url is fake/random text or \n", |
||||
" ##job posting is fake/random tex or \n", |
||||
" ##CV is fake/random text\n", |
||||
"#then ChatGPT will not analyze CV, it will give a generic response to enter the info correctly." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "bc2eafe6-5255-4317-8ddd-a93695296043", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"pip install PyPDF2" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cf45e9d5-4913-416c-9880-5be60a96c0e6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Imports\n", |
||||
"import os\n", |
||||
"import io\n", |
||||
"import time\n", |
||||
"import requests\n", |
||||
"import PyPDF2\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from openai import OpenAI\n", |
||||
"from ipywidgets import Textarea, FileUpload, Button, VBox, HTML" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "af8fea69-60aa-430c-a16c-8757b487e07a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "daee94d2-f82b-43f0-95d1-15370eda1bc7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", |
||||
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0712dd1d-b6bc-41c6-84ec-d965f696f7aa", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant who analyzes user's CV against the job description \\\n", |
||||
" and provide a short summary if the user is fit for this job. If the user is fit for the job \\\n", |
||||
" write a cover letter for the user to apply for the job. Keep the cover letter professional, short, \\\n", |
||||
" and formal. \\\n", |
||||
" Important things to notice before analyzing CV:\\\n", |
||||
" 1. Always check if the CV is actually a CV or just random text\\\n", |
||||
" 2. Check if the job description fetched from the website is the job description or not\\\n", |
||||
" and ignore text related to navigation\\\n", |
||||
" 3. Also check the link of the job posting, if it actually resembles a job posting or is just random \\\n", |
||||
" fake website\\\n", |
||||
" 4. if any one of these two checks fails, do not analyze the CV against the Job description and give an\\\n", |
||||
" appropriate response as you think\\\n", |
||||
" 5. Always respond in Markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "70c972a6-8af6-4ff2-a338-6d7ba90e2045", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "426dfd9b-3446-4543-9819-63040abd9644", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"for_user_prompt = {\n", |
||||
" 'job_posting_url':'',\n", |
||||
" 'job_posting': '',\n", |
||||
" 'cv_text': ''\n", |
||||
"}" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "79d9ccd6-f5fe-4ce8-982c-7235d2cf6a9f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Create widgets - to create a box for the job posting text\n", |
||||
"job_posting_url_area = Textarea(\n", |
||||
" placeholder='Paste the URL of the job posting here, ONLY URL PLEASE',\n", |
||||
" description='Fetching job:',\n", |
||||
" disabled=False,\n", |
||||
" layout={'width': '800px', 'height': '50px'}\n", |
||||
")\n", |
||||
"\n", |
||||
"status_job_posting = HTML(value=\"<b>Status:</b> Waiting for inputs...\")\n", |
||||
"\n", |
||||
"# Create Submit Buttons\n", |
||||
"fetch_job_posting_button = Button(description='Fetch Job Posting', button_style='primary')\n", |
||||
"\n", |
||||
"def fetch_job_posting_action(b):\n", |
||||
" for_user_prompt['job_posting_url'] = job_posting_url_area.value\n", |
||||
" if for_user_prompt['job_posting_url']:\n", |
||||
" ed = Website(for_user_prompt['job_posting_url'])\n", |
||||
" status_job_posting.value = \"<b>Status:</b> Job posting fetched successfully!\"\n", |
||||
" fetch_job_posting_button.button_style='success'\n", |
||||
" for_user_prompt['job_posting']=ed.text\n", |
||||
" else:\n", |
||||
" status_job_posting.value = \"<b>Status:</b> Please enter a job posting url before submitting.\"\n", |
||||
"\n", |
||||
"# Attach actions to buttons\n", |
||||
"fetch_job_posting_button.on_click(fetch_job_posting_action)\n", |
||||
"\n", |
||||
"# Layout\n", |
||||
"job_posting_box = VBox([job_posting_url_area, fetch_job_posting_button])\n", |
||||
"\n", |
||||
"# Display all widgets\n", |
||||
"display(VBox([\n", |
||||
" HTML(value=\"<h2>Input Job Posting Url</h2>\"),\n", |
||||
" job_posting_box,\n", |
||||
" status_job_posting\n", |
||||
"]))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "58d42786-1580-4d3f-b44f-5c52250c2935", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Print fetched job description\n", |
||||
"\n", |
||||
"#print(for_user_prompt['job_posting'])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cd258dec-9b57-40ce-b37c-2627acbcb5af", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define file upload for CV\n", |
||||
"cv_upload = FileUpload(\n", |
||||
" accept='.pdf', # Only accept PDF files\n", |
||||
" multiple=False, # Only allow single file selection\n", |
||||
" description='Upload CV (PDF)'\n", |
||||
")\n", |
||||
"\n", |
||||
"status = HTML(value=\"<b>Status:</b> Waiting for inputs...\")\n", |
||||
"\n", |
||||
"# Create Submit Buttons\n", |
||||
"submit_cv_button = Button(description='Submit CV', button_style='success')\n", |
||||
"\n", |
||||
"# Functions\n", |
||||
"def submit_cv_action(change):\n", |
||||
"\n", |
||||
" if not for_user_prompt['cv_text']:\n", |
||||
" status.value = \"<b>Status:</b> Please upload a CV before submitting.\"\n", |
||||
" \n", |
||||
" if cv_upload.value:\n", |
||||
" # Get the uploaded file\n", |
||||
" uploaded_file = cv_upload.value[0]\n", |
||||
" content = io.BytesIO(uploaded_file['content'])\n", |
||||
" \n", |
||||
" try:\n", |
||||
" pdf_reader = PyPDF2.PdfReader(content) \n", |
||||
" cv_text = \"\"\n", |
||||
" for page in pdf_reader.pages: \n", |
||||
" cv_text += page.extract_text() \n", |
||||
" \n", |
||||
" # Store CV text in for_user_prompt\n", |
||||
" for_user_prompt['cv_text'] = cv_text\n", |
||||
" status.value = \"<b>Status:</b> CV uploaded and processed successfully!\"\n", |
||||
" except Exception as e:\n", |
||||
" status.value = f\"<b>Status:</b> Error processing PDF: {str(e)}\"\n", |
||||
"\n", |
||||
" time.sleep(0.5) # Short pause between upload and submit messages to display both\n", |
||||
" \n", |
||||
" if for_user_prompt['cv_text']:\n", |
||||
" #print(\"CV Submitted:\")\n", |
||||
" #print(for_user_prompt['cv_text'])\n", |
||||
" status.value = \"<b>Status:</b> CV submitted successfully!\"\n", |
||||
" \n", |
||||
"\n", |
||||
"# Attach actions to buttons\n", |
||||
"submit_cv_button.on_click(submit_cv_action)\n", |
||||
"\n", |
||||
"# Layout\n", |
||||
"cv_buttons = VBox([submit_cv_button])\n", |
||||
"\n", |
||||
"# Display all widgets\n", |
||||
"display(VBox([\n", |
||||
" HTML(value=\"<h2>Import CV and submit</h2>\"),\n", |
||||
" cv_upload,\n", |
||||
" cv_buttons,\n", |
||||
" status\n", |
||||
"]))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a7dd22a4-ca7b-4b8c-a328-6205cec689cb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Prepare the user prompt that we will send to open ai (added URL for the context)\n", |
||||
"user_prompt = f\"\"\"\n", |
||||
"Job Posting: \n", |
||||
"{for_user_prompt['job_posting']}\n", |
||||
"\n", |
||||
"CV: \n", |
||||
"{for_user_prompt['cv_text']}\n", |
||||
"\n", |
||||
"Url:\n", |
||||
"{for_user_prompt['job_posting_url']}\n", |
||||
"\"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "82b71c1a-895a-48e7-a945-13e615bb0096", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define messages with system_prompt and user_prompt\n", |
||||
"def messages_for(system_prompt_input, user_prompt_input):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt_input},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_input}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "854dc42e-2bbd-493b-958f-c20484908300", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the OpenAI API. \n", |
||||
"response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(system_prompt, user_prompt)\n", |
||||
")\n", |
||||
"\n", |
||||
"# Response is provided in Markdown and displayed accordingly\n", |
||||
"display(Markdown(response.choices[0].message.content))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "758d2cbe-0f80-4572-8724-7cba77f701dd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
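Before the final API call, this notebook assumes all three entries in `for_user_prompt` have been populated by the widgets. A small guard sketch (the dict keys match the notebook; the function name is illustrative):

```python
def inputs_ready(for_user_prompt):
    """Return (ok, missing_keys) for the three inputs the prompt needs."""
    required = ('job_posting_url', 'job_posting', 'cv_text')
    # Treat empty or whitespace-only values as missing
    missing = [k for k in required if not for_user_prompt.get(k, '').strip()]
    return (not missing, missing)
```

Calling this just before `openai.chat.completions.create` avoids sending a prompt with an empty job posting or CV when a widget step was skipped.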
@ -1,979 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Instant Gratification\n", |
||||
"\n", |
||||
"## Your first Frontier LLM Project!\n", |
||||
"\n", |
||||
"Let's build a useful LLM solution - in a matter of minutes.\n", |
||||
"\n", |
||||
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n", |
||||
"\n", |
||||
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n", |
||||
"\n", |
||||
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n", |
||||
"\n", |
||||
"## If you're new to Jupyter Lab\n", |
||||
"\n", |
||||
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n", |
||||
"\n", |
||||
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n", |
||||
"\n", |
||||
"## If you'd prefer to work in IDEs\n", |
||||
"\n", |
||||
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n", |
||||
"If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n", |
||||
"\n", |
||||
"## If you'd like to brush up your Python\n", |
||||
"\n", |
||||
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n", |
||||
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n", |
||||
"\n", |
||||
"## I am here to help\n", |
||||
"\n", |
||||
"If you have any problems at all, please do reach out. \n", |
||||
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!)\n", |
||||
"\n", |
||||
"## More troubleshooting\n", |
||||
"\n", |
||||
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n", |
||||
"\n", |
||||
"## If this is old hat!\n", |
||||
"\n", |
||||
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n", |
||||
"\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Please read - important note</h2>\n", |
||||
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n", |
||||
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 2, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Connecting to OpenAI\n", |
||||
"\n", |
||||
"The next cell is where we load the environment variables from your `.env` file and connect to OpenAI.\n", |
||||
"\n", |
||||
"## Troubleshooting if you have problems:\n", |
||||
"\n", |
||||
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n", |
||||
"\n", |
||||
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n", |
||||
"\n", |
||||
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", |
||||
"\n", |
||||
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 3, |
||||
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"API key found and looks good so far!\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it doesn't start with sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 4, |
||||
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs of All Cells, then run the cells from the top of this notebook down.\n", |
||||
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's make a quick call to a Frontier model to get started, as a preview!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 5, |
||||
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Hello! I’m glad to hear from you! How can I assist you today?\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n", |
||||
"\n", |
||||
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "2aa190e5-cb31-456a-96cc-db109919cd78", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## OK, onwards with our first project" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 6, |
||||
"id": "c5e793b2-6775-426a-a139-4848291d0463", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 7, |
||||
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Home - Edward Donner\n", |
||||
"Home\n", |
||||
"Outsmart\n", |
||||
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n", |
||||
"About\n", |
||||
"Posts\n", |
||||
"Well, hi there.\n", |
||||
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n", |
||||
"very\n", |
||||
"amateur) and losing myself in\n", |
||||
"Hacker News\n", |
||||
", nodding my head sagely to things I only half understand.\n", |
||||
"I’m the co-founder and CTO of\n", |
||||
"Nebula.io\n", |
||||
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n", |
||||
"acquired in 2021\n", |
||||
".\n", |
||||
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n", |
||||
"patented\n", |
||||
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n", |
||||
"Connect\n", |
||||
"with me for more!\n", |
||||
"December 21, 2024\n", |
||||
"Welcome, SuperDataScientists!\n", |
||||
"November 13, 2024\n", |
||||
"Mastering AI and LLM Engineering – Resources\n", |
||||
"October 16, 2024\n", |
||||
"From Software Engineer to AI Data Scientist – resources\n", |
||||
"August 6, 2024\n", |
||||
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n", |
||||
"Navigation\n", |
||||
"Home\n", |
||||
"Outsmart\n", |
||||
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n", |
||||
"About\n", |
||||
"Posts\n", |
||||
"Get in touch\n", |
||||
"ed [at] edwarddonner [dot] com\n", |
||||
"www.edwarddonner.com\n", |
||||
"Follow me\n", |
||||
"LinkedIn\n", |
||||
"Twitter\n", |
||||
"Facebook\n", |
||||
"Subscribe to newsletter\n", |
||||
"Type your email…\n", |
||||
"Subscribe\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Let's try one out. Change the website and add print statements to follow along.\n", |
||||
"\n", |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"print(ed.title)\n", |
||||
"print(ed.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Types of prompts\n", |
||||
"\n", |
||||
"You may know this already - but if not, you will get very familiar with it!\n", |
||||
"\n", |
||||
"Models like GPT-4o have been trained to receive instructions in a particular way.\n", |
||||
"\n", |
||||
"They expect to receive:\n", |
||||
"\n", |
||||
"**A system prompt** that tells them what task they are performing and what tone they should use\n", |
||||
"\n", |
||||
"**A user prompt** -- the conversation starter that they should reply to" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 8, |
||||
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 9, |
||||
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 10, |
||||
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"You are looking at a website titled Home - Edward Donner\n", |
||||
"The contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\n", |
||||
"\n", |
||||
"Home\n", |
||||
"Outsmart\n", |
||||
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n", |
||||
"About\n", |
||||
"Posts\n", |
||||
"Well, hi there.\n", |
||||
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n", |
||||
"very\n", |
||||
"amateur) and losing myself in\n", |
||||
"Hacker News\n", |
||||
", nodding my head sagely to things I only half understand.\n", |
||||
"I’m the co-founder and CTO of\n", |
||||
"Nebula.io\n", |
||||
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n", |
||||
"acquired in 2021\n", |
||||
".\n", |
||||
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n", |
||||
"patented\n", |
||||
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n", |
||||
"Connect\n", |
||||
"with me for more!\n", |
||||
"December 21, 2024\n", |
||||
"Welcome, SuperDataScientists!\n", |
||||
"November 13, 2024\n", |
||||
"Mastering AI and LLM Engineering – Resources\n", |
||||
"October 16, 2024\n", |
||||
"From Software Engineer to AI Data Scientist – resources\n", |
||||
"August 6, 2024\n", |
||||
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n", |
||||
"Navigation\n", |
||||
"Home\n", |
||||
"Outsmart\n", |
||||
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n", |
||||
"About\n", |
||||
"Posts\n", |
||||
"Get in touch\n", |
||||
"ed [at] edwarddonner [dot] com\n", |
||||
"www.edwarddonner.com\n", |
||||
"Follow me\n", |
||||
"LinkedIn\n", |
||||
"Twitter\n", |
||||
"Facebook\n", |
||||
"Subscribe to newsletter\n", |
||||
"Type your email…\n", |
||||
"Subscribe\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"print(user_prompt_for(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Messages\n", |
||||
"\n", |
||||
"The API from OpenAI expects to receive messages in a particular structure.\n", |
||||
"Many of the other APIs share this structure:\n", |
||||
"\n", |
||||
"```\n", |
||||
"[\n", |
||||
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n", |
||||
"]\n", |
||||
"```\n", |
||||
"\n", |
||||
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 11, |
||||
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 12, |
||||
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Oh, we're starting with the basics, huh? Well, 2 + 2 equals 4. Shocking, I know!\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"# To give you a preview -- calling OpenAI with system and user messages:\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## And now let's build useful messages for GPT-4o-mini, using a function" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 12, |
||||
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 13, |
||||
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/plain": [ |
||||
"[{'role': 'system',\n", |
||||
" 'content': 'You are an assistant that analyzes the contents of a website and provides a short summary, ignoring text that might be navigation related. Respond in markdown.'},\n", |
||||
" {'role': 'user',\n", |
||||
" 'content': 'You are looking at a website titled Home - Edward Donner\\nThe contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\\n\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nWell, hi there.\\nI’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\\nvery\\namateur) and losing myself in\\nHacker News\\n, nodding my head sagely to things I only half understand.\\nI’m the co-founder and CTO of\\nNebula.io\\n. We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\\nacquired in 2021\\n.\\nWe work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\\npatented\\nour matching model, and our award-winning platform has happy customers and tons of press coverage.\\nConnect\\nwith me for more!\\nDecember 21, 2024\\nWelcome, SuperDataScientists!\\nNovember 13, 2024\\nMastering AI and LLM Engineering – Resources\\nOctober 16, 2024\\nFrom Software Engineer to AI Data Scientist – resources\\nAugust 6, 2024\\nOutsmart LLM Arena – a battle of diplomacy and deviousness\\nNavigation\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nGet in touch\\ned [at] edwarddonner [dot] com\\nwww.edwarddonner.com\\nFollow me\\nLinkedIn\\nTwitter\\nFacebook\\nSubscribe to newsletter\\nType your email…\\nSubscribe'}]" |
||||
] |
||||
}, |
||||
"execution_count": 13, |
||||
"metadata": {}, |
||||
"output_type": "execute_result" |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Try this out, and then try for a few more websites\n", |
||||
"\n", |
||||
"messages_for(ed)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Time to bring it together - the API for OpenAI is very simple!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 14, |
||||
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the OpenAI API. You will get very familiar with this!\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 15, |
||||
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/plain": [ |
||||
"'# Summary of Edward Donner\\'s Website\\n\\nEdward Donner\\'s website serves as a platform for sharing his interests and expertise in coding, large language models (LLMs), and AI. He is the co-founder and CTO of Nebula.io, a company focused on leveraging AI to enhance talent discovery and management. Previously, he founded the AI startup untapt, which was acquired in 2021.\\n\\n## Key Content\\n\\n- **Personal Introduction**: Ed shares his passion for coding, experimenting with LLMs, DJing, and music production.\\n- **Professional Background**: He highlights his role at Nebula.io and his prior experience with untapt.\\n- **Innovative Work**: Mention of proprietary LLMs tailored for talent management and a patented matching model.\\n\\n## News and Announcements\\n\\n- **December 21, 2024**: Welcoming \"SuperDataScientists.\"\\n- **November 13, 2024**: Resources for mastering AI and LLM engineering.\\n- **October 16, 2024**: Transitioning from software engineering to AI data science resources.\\n- **August 6, 2024**: Introduction to the Outsmart LLM Arena, a competition focusing on strategy among LLMs.\\n\\nThe website encourages connections and offers resources for individuals interested in AI and LLMs.'" |
||||
] |
||||
}, |
||||
"execution_count": 15, |
||||
"metadata": {}, |
||||
"output_type": "execute_result" |
||||
} |
||||
], |
||||
"source": [ |
||||
"summarize(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 16, |
||||
"id": "3d926d59-450e-4609-92ba-2d6f244f1342", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 17, |
||||
"id": "3018853a-445f-41ff-9560-d925d1774b2f", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"# Summary of Edward Donner's Website\n", |
||||
"\n", |
||||
"The website belongs to Ed, a coder and LLM (Large Language Model) enthusiast, who is also a co-founder and CTO of Nebula.io. Nebula.io focuses on leveraging AI to help individuals discover their potential in recruitment through its innovative platform. Ed also shares his background in the AI field, having previously founded the startup untapt, which was acquired in 2021.\n", |
||||
"\n", |
||||
"## Recent News and Announcements\n", |
||||
"1. **December 21, 2024**: Welcome message for SuperDataScientists.\n", |
||||
"2. **November 13, 2024**: Resources for mastering AI and LLM engineering.\n", |
||||
"3. **October 16, 2024**: Resources for transitioning from Software Engineer to AI Data Scientist.\n", |
||||
"4. **August 6, 2024**: Introduction to the \"Outsmart LLM Arena,\" a competitive platform where LLMs engage in diplomacy and strategy.\n", |
||||
"\n", |
||||
"Ed expresses a passion for technology, music, and engaging in community discussions through platforms like Hacker News." |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"display_summary(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's try more websites\n", |
||||
"\n", |
||||
"Note that this will only work on websites that can be scraped using this simplistic approach.\n", |
||||
"\n", |
||||
"Websites that are rendered with JavaScript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n", |
||||
"\n", |
||||
"Also, websites protected with CloudFront (and similar) may give 403 errors - many thanks to Andy J for pointing this out.\n", |
||||
"\n", |
||||
"But many websites will work just fine!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 18, |
||||
"id": "45d83403-a24c-44b5-84ac-961449b4008f", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"# CNN Website Summary\n", |
||||
"\n", |
||||
"CNN is a leading news platform that provides comprehensive coverage across a wide range of categories including US and world news, politics, business, health, entertainment, and more. The website features breaking news articles, videos, and live updates on significant global events.\n", |
||||
"\n", |
||||
"### Recent Headlines:\n", |
||||
"- **Politics**: \n", |
||||
" - Justin Trudeau announced his resignation as Canada's Prime Minister, sharing his \"one regret.\"\n", |
||||
" - Analysis of Trump's influence in Congress and recent legal battles related to his actions.\n", |
||||
" \n", |
||||
"- **Global Affairs**: \n", |
||||
" - Rising tensions in Venezuela as the opposition leader urges military action against Maduro.\n", |
||||
" - Sudanese authorities announced the transfer of 11 Yemeni detainees from Guantanamo Bay to Oman.\n", |
||||
" \n", |
||||
"- **Weather**: A major winter storm impacted Washington, DC, causing power outages and stranded drivers.\n", |
||||
"\n", |
||||
"- **Health**: \n", |
||||
" - FDA issues new draft guidance on improving pulse oximeter readings for individuals with darker skin.\n", |
||||
"\n", |
||||
"### Additional Features:\n", |
||||
"CNN includes segments dedicated to sports, science, climate, and travel. There are also various podcasts available, offering deeper insights into current events and specialized topics. \n", |
||||
"\n", |
||||
"The site encourages user feedback on ads and technical issues, emphasizing its commitment to enhancing user experience. \n", |
||||
"\n", |
||||
"Overall, CNN serves as a crucial resource for staying updated with local and international news." |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"display_summary(\"https://cnn.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 19, |
||||
"id": "75e9fd40-b354-4341-991e-863ef2e59db7", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"# Anthropic Website Summary\n", |
||||
"\n", |
||||
"Anthropic is an AI safety and research company that prioritizes safety in the development of AI technologies. The main focus of the site is on their AI model, Claude, which includes the latest version, Claude 3.5 Sonnet, as well as additional offerings like Claude 3.5 Haiku. The company emphasizes the creation of AI-powered applications and custom experiences through its API.\n", |
||||
"\n", |
||||
"## Recent Announcements\n", |
||||
"- **Claude 3.5 Sonnet Launch**: Announced on October 22, 2024, featuring significant advancements in AI capabilities.\n", |
||||
"- **New AI Models**: Introduction of Claude 3.5 Sonnet and Claude 3.5 Haiku.\n", |
||||
"\n", |
||||
"Anthropic's work spans various domains including machine learning, policy, and product development, aimed at generating reliable and beneficial AI systems. They also highlight career opportunities within the organization." |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"display_summary(\"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 21, |
||||
"id": "8070c4c3-1ef1-4c7a-8c2d-f6b4b9b4aa8e", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"# Summary of CPP Investments Website\n", |
||||
"\n", |
||||
"## Overview\n", |
||||
"The CPP Investments website serves as a comprehensive resource for information regarding the management and performance of the Canada Pension Plan (CPP) Fund. It emphasizes its long-standing commitment to ensuring financial security for over 22 million Canadians who rely on the benefits of the CPP.\n", |
||||
"\n", |
||||
"## Key Sections\n", |
||||
"- **About Us**: Details the governance, leadership, and investment programs available within CPP Investments.\n", |
||||
"- **The Fund**: Offers an overview of the fund's performance, sustainability, and transparency in its operations.\n", |
||||
"- **Investment Strategies**: Explanation of CPP's investment beliefs and strategies, emphasizing a global mindset and sustainable investing practices.\n", |
||||
"- **Insights Institute**: A dedicated section for reports and analyses on relevant investment topics, including emerging trends and strategies.\n", |
||||
"\n", |
||||
"## Recent News and Announcements\n", |
||||
"- **2024 CEO Letter** (May 22, 2024): Reflects on the 25th anniversary of CPP Investments and its mission to manage funds in the best interest of Canadians.\n", |
||||
"- **Article on CPP Benefits** (September 18, 2024): Highlights why the CPP is regarded as one of the best pension plans globally.\n", |
||||
"- **Report on AI Integration and Human Capital** (October 31, 2024): Discusses how institutional investors can engage with boards and leadership on AI adaptation strategies.\n", |
||||
"- **Stake Sales** (January 3, 2025): Announcements regarding the sale of stakes in various partnerships and joint ventures, including a significant logistics partnership in North America and real estate ventures in Hong Kong.\n", |
||||
"\n", |
||||
"This website underscores CPP Investments' ongoing commitment to transparency, strong financial performance, and its role in supporting the financial security of Canadians as they prepare for retirement." |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"display_summary('https://cppinvestments.com')" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#181;\">Business applications</h2>\n", |
||||
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n", |
||||
"\n", |
||||
"More specifically, we've applied this to Summarization - a classic Gen AI use case. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n", |
||||
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 33, |
||||
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"**Subject:** Request for Annual Sales Report (2024)\n", |
||||
"\n", |
||||
"**Email:**\n", |
||||
"\n", |
||||
"Dear Abhinav,\n", |
||||
"\n", |
||||
"I hope this email finds you in good health and high spirits. As we step into a new year and begin reviewing our plans and strategies, it is crucial for us to analyze the performance metrics from the previous year. In this regard, I would like to kindly request a copy of the Annual Sales Report for 2024.\n", |
||||
"\n", |
||||
"This report will play an integral role in understanding our achievements, challenges, and areas for improvement over the past year. It will also serve as a foundation for aligning our goals and preparing a roadmap for the upcoming quarters. Please ensure that the report includes key performance indicators such as:\n", |
||||
"\n", |
||||
"- Total revenue generated\n", |
||||
"- Region-wise sales performance\n", |
||||
"- Product/service-wise contribution\n", |
||||
"- Month-by-month trend analysis\n", |
||||
"- Customer retention and acquisition metrics\n", |
||||
"\n", |
||||
"If there are any additional insights or observations from your side that you feel would be helpful for us to review, please feel free to include them as well. Your expertise and detailed input are always highly valued.\n", |
||||
"\n", |
||||
"Kindly let me know if the report is already prepared or if there is an expected timeline for its completion. In case you require any assistance, data inputs, or clarification from my end to finalize the report, do not hesitate to reach out.\n", |
||||
"\n", |
||||
"Thank you in advance for prioritizing this request. I appreciate your support and look forward to receiving the report soon.\n", |
||||
"\n", |
||||
"Best regards, \n", |
||||
"Sanath Pabba\n", |
||||
"\n", |
||||
"**Tone:** Professional and Collaborative" |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
    "system_prompt = \"You are an AI assistant email reviewer. Identify the meaning of the text you are given, then provide a suitable subject line and the email. At the end of the text, please provide the tone info.\"\n", |
||||
"user_prompt = \"\"\"\n", |
||||
" Dear Abhinav,\n", |
||||
"\n", |
||||
"I hope this email finds you in good health and high spirits. As we step into a new year and begin reviewing our plans and strategies, it is crucial for us to analyze the performance metrics from the previous year. In this regard, I would like to kindly request a copy of the Annual Sales Report for 2024.\n", |
||||
"\n", |
||||
"This report will play an integral role in understanding our achievements, challenges, and areas for improvement over the past year. It will also serve as a foundation for aligning our goals and preparing a roadmap for the upcoming quarters. Please ensure that the report includes key performance indicators such as:\n", |
||||
"\n", |
||||
"Total revenue generated\n", |
||||
"Region-wise sales performance\n", |
||||
"Product/service-wise contribution\n", |
||||
"Month-by-month trend analysis\n", |
||||
"Customer retention and acquisition metrics\n", |
||||
"If there are any additional insights or observations from your side that you feel would be helpful for us to review, please feel free to include them as well. Your expertise and detailed input are always highly valued.\n", |
||||
"\n", |
||||
"Kindly let me know if the report is already prepared or if there is an expected timeline for its completion. In case you require any assistance, data inputs, or clarification from my end to finalize the report, do not hesitate to reach out.\n", |
||||
"\n", |
||||
"Thank you in advance for prioritizing this request. I appreciate your support and look forward to receiving the report soon.\n", |
||||
"\n", |
||||
"Best regards,\n", |
||||
"Sanath Pabba\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\":\"system\", \"content\": system_prompt},\n", |
||||
" {\"role\":\"user\", \"content\": user_prompt}\n", |
||||
    "]\n", |
||||
"\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(\n", |
||||
" model=\"gpt-4o-mini\",\n", |
||||
" messages=messages\n", |
||||
")\n", |
||||
"\n", |
||||
"# Step 4: print the result\n", |
||||
"\n", |
||||
"display(Markdown(response.choices[0].message.content))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 14, |
||||
"id": "d4d641a5-0103-44a5-b5c2-70e80976d1f1", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"**Subject:** Addressing Sales Performance Concerns\n", |
||||
"\n", |
||||
"Dear Akhil,\n", |
||||
"\n", |
||||
"I wanted to touch base with you about your sales performance over the last two quarters. I’ve noticed that you haven’t been hitting the targets, and it’s something we need to address seriously.\n", |
||||
"\n", |
||||
"I know you’re capable of much more, and I want to see you succeed. That said, it’s crucial that you meet your sales targets this quarter. If there isn’t a significant improvement, we may have to consider other options, including letting you go, which I truly hope we can avoid.\n", |
||||
"\n", |
||||
"If there’s anything holding you back or if you need additional support, let me know. I’m here to help, but ultimately, it’s up to you to turn things around.\n", |
||||
"\n", |
||||
"Let’s make this quarter count! Let me know if you want to discuss this further or need help strategizing.\n", |
||||
"\n", |
||||
"Best regards, \n", |
||||
"Sanath Pabba\n", |
||||
"\n", |
||||
"**Tone:** Serious yet supportive" |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
    "system_prompt = \"You are an AI assistant email reviewer. Identify the meaning of the text you are given, then provide a suitable subject line and the email. At the end of the text, please provide the tone info.\"\n", |
||||
"user_prompt = \"\"\"\n", |
||||
"Dear Akhil,\n", |
||||
"\n", |
||||
"I wanted to touch base with you about your sales performance over the last two quarters. I’ve noticed that you haven’t been hitting the targets, and it’s something we need to address seriously.\n", |
||||
"\n", |
||||
"I know you’re capable of much more, and I want to see you succeed. That said, it’s crucial that you meet your sales targets this quarter. If there isn’t a significant improvement, we may have to consider other options, including letting you go, which I truly hope we can avoid.\n", |
||||
"\n", |
||||
"If there’s anything holding you back or if you need additional support, let me know. I’m here to help, but ultimately, it’s up to you to turn things around.\n", |
||||
"\n", |
||||
"Let’s make this quarter count! Let me know if you want to discuss this further or need help strategizing.\n", |
||||
"\n", |
||||
"Best regards,\n", |
||||
"Sanath Pabba\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\":\"system\", \"content\": system_prompt},\n", |
||||
" {\"role\":\"user\", \"content\": user_prompt}\n", |
||||
    "]\n", |
||||
"\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(\n", |
||||
" model=\"gpt-4o-mini\",\n", |
||||
" messages=messages\n", |
||||
")\n", |
||||
"\n", |
||||
"# Step 4: print the result\n", |
||||
"\n", |
||||
"display(Markdown(response.choices[0].message.content))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## An extra exercise for those who enjoy web scraping\n", |
||||
"\n", |
||||
    "You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses JavaScript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)" |
||||
] |
||||
}, |
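||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "selenium-sketch-hypothetical", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"A minimal sketch of that idea, assuming Chrome and the `selenium` package are installed (the helper name `fetch_rendered_html` is illustrative, not part of the course code):\n", |
||||
"\n", |
||||
"```python\n", |
||||
"from selenium import webdriver\n", |
||||
"\n", |
||||
"def fetch_rendered_html(url):\n", |
||||
"    # Run headless Chrome so the page's JavaScript executes,\n", |
||||
"    # then hand the rendered HTML to BeautifulSoup as before\n", |
||||
"    options = webdriver.ChromeOptions()\n", |
||||
"    options.add_argument(\"--headless=new\")\n", |
||||
"    driver = webdriver.Chrome(options=options)\n", |
||||
"    try:\n", |
||||
"        driver.get(url)\n", |
||||
"        return driver.page_source\n", |
||||
"    finally:\n", |
||||
"        driver.quit()\n", |
||||
"```" |
||||
] |
||||
}, |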
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Sharing your code\n", |
||||
"\n", |
||||
    "I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation), which you will find in the community-contributions folder. If you'd like to add your changes, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n", |
||||
"\n", |
||||
    "If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but it's pretty clear once you've done it. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clear Outputs of All Cells, and then Save) for clean notebooks.\n", |
||||
"\n", |
||||
"Here are good instructions courtesy of an AI friend: \n", |
||||
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,115 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "44aba2a0-c6eb-4fc1-a5cc-0a8f8679dbb8", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Far Far Away..." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d4d58124-5e9a-4f5a-9e0a-ff74f43896a8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "33179b68-7ed5-46ab-b583-d67ed57cd39d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def add_user_greeting(greeting):\n", |
||||
" user_prompt = \"\"\"\n", |
||||
" The following is the greeting from the user. Please respond in character as a barman in the Mos Eisley Cantina.\\n\\n\n", |
||||
" \"\"\"\n", |
||||
" user_prompt += greeting\n", |
||||
"\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "67dc3099-2ccc-4ee8-8ff2-0dbbe4ae2fcb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def approach_the_bar(greeting):\n", |
||||
"\n", |
||||
" system_prompt = \"You are a barman in the Mos Eisley Cantina from the Star Wars universe.\\\n", |
||||
"It is a Tuesday evening, the year is 3BBY, and the Cantina is quiet except for a few lonely regulars.\\\n", |
||||
"The barman (you) is slightly skeptical but eager to share some interesting news regarding some nearby imperial activity.\\\n", |
||||
    "You will receive a greeting from the user; you must respond and provide them with some gossip detailing \\\n", |
||||
"some local shady dealings occurring in Mos Eisley. Please format your response using markdown to provide a sense of the conversation.\"\n", |
||||
"\n", |
||||
" user_prompt = add_user_greeting(greeting)\n", |
||||
" \n", |
||||
" messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt},\n", |
||||
" ]\n", |
||||
" \n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages\n", |
||||
" )\n", |
||||
" \n", |
||||
" # Step 4: print the result in markdown format\n", |
||||
" pretty_response = Markdown(response.choices[0].message.content)\n", |
||||
" display(pretty_response)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fb47e2b7-5509-4d1a-8e71-ff103fc8a885", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"approach_the_bar(\"\")" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,480 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "387e2968-3bfd-48c6-a925-d315f4566623", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Instant Gratification\n", |
||||
"## Your first Frontier LLM Project!\n", |
||||
"Using **Gemini API** to summarise transcripts from class videos. <br>\n", |
||||
"Tested with: *day_1_first_llm_experiment_summarization_project* transcript video. \n", |
||||
"## [Test_video](https://www.udemy.com/course/llm-engineering-master-ai-and-large-language-models/learn/lecture/46867741#questions)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9540582d-8d2a-4c14-b117-850823b634a0", |
||||
"metadata": { |
||||
"scrolled": true |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
    "import os\n", |
||||
"import google.generativeai as genai\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from IPython.display import HTML, Markdown, display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "2fe0d366-b183-415c-b6e1-4993afd82f2a", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Connecting to Gemini API\n", |
||||
"\n", |
||||
    "The next cell is where we load in the environment variables in your `.env` file and connect to the Gemini API" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "89c1194c-715b-41ff-8cb7-6b6067c83ea5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv('GOOGLE_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found!\")\n", |
||||
"else:\n", |
||||
" print(\"Great! API key found and looks good so far!\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6bc036e2-54c1-4206-a386-371a9705b190", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Upload Daily or Weekly Transcriptions\n", |
||||
    "If you have text files corresponding to your video transcripts, upload them by day or week. With the help of cutting-edge LLMs, you will get accurate summaries highlighting the key topics." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "8fcf4b72-49c9-49cd-8b1c-b5a4df38edf7", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Read data from txt files" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "00466898-68d9-43f7-b8d3-7d61696061de", |
||||
"metadata": { |
||||
"jp-MarkdownHeadingCollapsed": true |
||||
}, |
||||
"source": [ |
||||
"```\n", |
||||
    "# Read the entire file; the with-statement closes it automatically\n", |
||||
"with open(\"../day_1_first_llm_experiment_summarization_project.txt\", \"r\") as file:  # Your file path\n", |
||||
"    text = file.read()\n", |
||||
"```" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "37d15b83-786d-40e9-b730-4f654e8bec1e", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Types of prompts\n", |
||||
"\n", |
||||
"You may know this already - but if not, you will get very familiar with it!\n", |
||||
"\n", |
||||
"Models like GPT4o have been trained to receive instructions in a particular way.\n", |
||||
"\n", |
||||
"They expect to receive:\n", |
||||
"\n", |
||||
"**A system prompt** that tells them what task they are performing and what tone they should use\n", |
||||
"\n", |
||||
"**A user prompt** -- the conversation starter that they should reply to\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "846c63a4-14e0-4a3c-99ce-654a6928dc20", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### For this example, we will directly input the text file into the prompt." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "e6ef537b-a660-44e3-a0c2-94f3b9e60b11", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Messages" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d1e96271-593a-4e16-bb17-81c834a59178", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_message = \"You are an assistant that analyzes the contents of text files \\\n", |
||||
"and provides an accurate summary, ignoring text that might be irrelevant. \\\n", |
||||
"Respond in markdown.\"\n", |
||||
"\n", |
||||
    "# user_prompt = text  # Use this if you loaded your data from the file above\n", |
||||
"user_prompt = \"\"\"\n", |
||||
"It's time for our first LM experiment at this point.\n", |
||||
"So some of this you may know well, you may know very well already.\n", |
||||
"For some people this might be new, but let me just explain.\n", |
||||
"The models that we're going to be using.\n", |
||||
"These frontier models have been trained in a particular way.\n", |
||||
"That means that they expect two different types of instruction from us the user.\n", |
||||
"One of them is known as the system prompt, and one of them is known as the user prompt.\n", |
||||
"The system prompt is something which explains the context of this conversation.\n", |
||||
"It tells them what kind of task they're performing, what tone they should use, and we'll be experimenting\n", |
||||
"with what it means to to change a system prompt and what kind of information that you can include in\n", |
||||
"the system prompt throughout this course.\n", |
||||
"The user prompt is the actual conversation itself.\n", |
||||
"And in our case right now, it's going to just be the the conversation starter.\n", |
||||
"And the role of the LM of the large language model is to figure out what is the most likely way that\n", |
||||
"it should respond, given this user prompt.\n", |
||||
"If it's given this user prompt, and in the context of this system prompt, what is the most likely\n", |
||||
"next text that will come after it?\n", |
||||
"That would come from an assistant responding to this user.\n", |
||||
"So that's the difference between the system prompt that sets the context, the user prompt that is the\n", |
||||
"conversation starter.\n", |
||||
"So we're going to set a system prompt.\n", |
||||
"And this is what it's going to say.\n", |
||||
"It's going to say you are an assistant that analyzes the contents of a website and provides a short\n", |
||||
"summary, ignoring texts that might be navigation related.\n", |
||||
"Respond in markdown.\n", |
||||
"You'll see more of what that means in in just a second.\n", |
||||
"So that is our system prompt for the user prompt.\n", |
||||
"It's going to take as a we're going to write a function user prompt for.\n", |
||||
"And it's going to take a website as the argument to the function.\n", |
||||
"And it's going to say you are looking at a website titled The Website.\n", |
||||
"The contents of this website is as follows.\n", |
||||
"Please provide a short summary of the website in markdown if it includes news or announcements.\n", |
||||
"Summarize these two and we then take the text from the website object that Beautifulsoup plucked out\n", |
||||
"for us, and we add that into the user prompt and we return that user prompt.\n", |
||||
"So let's just quickly let's run that cell right now and let's just have a look now.\n", |
||||
"So after doing that, if I just look at what system Prompt has.\n", |
||||
"It has that text of course that we just said.\n", |
||||
"And now if you remember earlier on we created a new website object and we stored it in this variable\n", |
||||
"editor.\n", |
||||
"So if I come here I should be able to say user prompt for and then pass in the object Ed.\n", |
||||
"And what we'll get is a prompt.\n", |
||||
"It might be easier if I print this so that it prints out empty lines.\n", |
||||
"And here is the user prompt string that we've created.\n", |
||||
"It says you're looking at a website titled blah blah blah.\n", |
||||
"The contents of this website is as follows.\n", |
||||
"Please provide a short summary.\n", |
||||
"Look, it looks like we should have a space right here, otherwise it might be confusing.\n", |
||||
"Let's try that again.\n", |
||||
"That's always why it's worth printing things as you go, because you'll spot little inconsistencies\n", |
||||
"like that.\n", |
||||
"I think it'll be nicer, actually, now that I look at that.\n", |
||||
"If we have a carriage return there like so.\n", |
||||
"Let's have a look at this prompt.\n", |
||||
"Now you're looking at the website and there we go on a separate line that looks good okay.\n", |
||||
"So let's talk about the messages object.\n", |
||||
"So OpenAI expects to receive a conversation in a particular format.\n", |
||||
"It's a format that OpenAI came up with and they used for their APIs, and it became so well used that\n", |
||||
"all of the other major frontier models decided to adopt the same convention.\n", |
||||
"So this has gone from being originally OpenAI's way of using the API to being something of a standard\n", |
||||
"across many different models to use this approach.\n", |
||||
"And here's how it works.\n", |
||||
"When you're trying to describe a conversation, you describe it using a list a Python list of dictionaries.\n", |
||||
"So it's a list where each element in the list is a dictionary.\n", |
||||
"And that dictionary looks like this.\n", |
||||
"It's a dictionary with two elements.\n", |
||||
"One of them has a key of role, and here the value is either system or user, a key of role.\n", |
||||
"And the value is system a key of content.\n", |
||||
"And the value is of course the system message.\n", |
||||
"There's another Dictionary where there's a key of role.\n", |
||||
"The value is user because it's the user message.\n", |
||||
"The user prompt content is where the user message goes.\n", |
||||
"User message and user prompt are the same thing.\n", |
||||
"So hopefully I didn't explain it very well, but it makes sense when you see it visually like this.\n", |
||||
"It's just a dictionary which has role and content, system and system, message user and the user message.\n", |
||||
"And there are some other roles as well, but we're going to get to them in good time.\n", |
||||
"This is all we need for now.\n", |
||||
"So this is how messages are built.\n", |
||||
"And if you look at this next function def messages for hopefully it's super clear to you that this is\n", |
||||
"creating.\n", |
||||
"This here is creating exactly this construct using code.\n", |
||||
"It's going to do it's going to put in there the generic system prompt we came up with.\n", |
||||
"And it's going to create the user prompt for the website.\n", |
||||
"So let's run that.\n", |
||||
"And now, presumably it's clear that if I say messages for Ed, which is the object for my website,\n", |
||||
"let's print it so that we see empty lines and stuff.\n", |
||||
"Actually, sorry, in this case it might be better if we don't print it.\n", |
||||
"If we just do this, it might look a bit clearer.\n", |
||||
"There we go.\n", |
||||
"And now you can see that it is it's a list of two things role system.\n", |
||||
"And there's a system message role user.\n", |
||||
"And there is the user message.\n", |
||||
"Okay.\n", |
||||
"It's time to bring this together.\n", |
||||
"It's time to actually do it.\n", |
||||
"The API for OpenAI to make a call to a frontier model to do this for us is super simple, and we're\n", |
||||
"going to be using this API all the time.\n", |
||||
"So whereas now it might look like it's a few things to remember.\n", |
||||
"You're going to get so used to this, but we're going to make a function called summarize.\n", |
||||
"And that is that's going to do the business that's going to solve our problem and summarize a URL that's\n", |
||||
"passed in.\n", |
||||
"It will first create a website for that URL, just like we did for editor.\n", |
||||
"And this is where we call OpenAI.\n", |
||||
"We say OpenAI, which is the the OpenAI object.\n", |
||||
"We created OpenAI dot chat, dot completions, dot create.\n", |
||||
"And that for now you can just learn it by rote.\n", |
||||
"We'll understand a lot more about that later.\n", |
||||
"But as far as OpenAI is concerned, this is known as the completions API because we're asking it to\n", |
||||
"complete this conversation, predict what would be most likely to come next.\n", |
||||
"We pass in the name of the model we're going to use.\n", |
||||
"We're going to use a model called GPT four mini that you'll get very familiar with.\n", |
||||
"It is the light, cheap version of GPT four, the the one of the finest models on the planet, and this\n", |
||||
"will cost fractions of a cent to use.\n", |
||||
"This, um, you pass in the model and then you pass in the messages and the messages we pass in, use\n", |
||||
"this structure that we've just created and that is all it takes.\n", |
||||
"What comes back we put in this this object response.\n", |
||||
"And when we get back the response we call response dot choices zero dot message dot content.\n", |
||||
"Now I'm going to explain what this is another day we don't need to know.\n", |
||||
"For now.\n", |
||||
"We just need to know that we're going to do response dot choices zero dot message dot content.\n", |
||||
"That's going to be it.\n", |
||||
"That is our summarize function.\n", |
||||
"And with that let's try summarizing my website we're running.\n", |
||||
"It's now connecting to OpenAI in the cloud.\n", |
||||
"It's making the call and back.\n", |
||||
"Here is a summary of my website.\n", |
||||
"We have just uh, spent a fraction of a cent and we have just summarized my website.\n", |
||||
"We can do a little bit better because we can print this in a nice style.\n", |
||||
"Uh, GPT four, we've asked to respond in markdown, and that means that it's responded with various\n", |
||||
"characters to represent headings, things in bold and so on.\n", |
||||
"And we can use a feature of Jupyter Labs that we can ask it to actually show that in a nice markdown\n", |
||||
"format.\n", |
||||
"So let's do that.\n", |
||||
"Let's use this display summary function and try again.\n", |
||||
"Again we're going to GPT for a mini in the cloud.\n", |
||||
"And here is a summary of my website.\n", |
||||
"Uh, it says something about me.\n", |
||||
"Uh, and it's uh yeah, very nicely formatted, very nicely structured.\n", |
||||
"Pretty impressive.\n", |
||||
"And apparently it highlights my work with proprietary LMS, offers resources related to AI and LMS,\n", |
||||
"showcasing his commitment to advancing knowledge in this field.\n", |
||||
"Good for you, GPT for mini.\n", |
||||
"That's a very nice summary.\n", |
||||
"Okay.\n", |
||||
"And now we can try some more websites.\n", |
||||
"Let's try summarizing cnn.com.\n", |
||||
"Uh, we'll see what this happens.\n", |
||||
"Obviously, CNN is a much bigger, uh, result you've got here.\n", |
||||
"Uh, and, uh, we get some information about what's going on.\n", |
||||
"I'm actually recording this right now on the 5th of November at, uh, in the evening, which is the\n", |
||||
"date of the 2024 elections going on right now.\n", |
||||
"So that, of course, is featured on CNN's web page.\n", |
||||
"We can also summarize anthropic, which is the website for Claude.\n", |
||||
"And they have a nice page.\n", |
||||
"And here you go.\n", |
||||
"And you can read more about it in this nice little summary of their web page.\n", |
||||
"All right.\n", |
||||
"And that wraps up our first instant gratification.\n", |
||||
"It's it's juicy.\n", |
||||
"It's something where we've actually done something useful.\n", |
||||
"We've scraped the web.\n", |
||||
"We've summarized summarization is one of the most common AI use cases.\n", |
||||
"So common it's useful for all sorts of purposes.\n", |
||||
"We'll be doing it a few different ways during during this course, even in our week eight a sticky solution\n", |
||||
"will be using something that will do some summarization.\n", |
||||
"So it's a great, uh, thing to have experimented with already.\n", |
||||
"So there are so many other business applications of summarization.\n", |
||||
"This is something you should be able to put to good use.\n", |
||||
"You should be able to think of some ways you could apply this to your day job right away, or be building\n", |
||||
"a couple of example projects in GitHub that show summarization in action.\n", |
||||
"You could apply it to summarizing the news, summarizing financial performance from a financial report,\n", |
||||
"a resume, and a cover letter.\n", |
||||
"You could you could take a resume and generate a cover letter.\n", |
||||
"Uh, there are so many different things you can do with summarization of of documents.\n", |
||||
"And also adding on to that the scraping the web angle of it.\n", |
||||
"So have a think about how you would apply summarization to your business and try extending this to do\n", |
||||
"some summarization.\n", |
||||
"There's also uh, for for the more technically inclined, uh, one of the things that you'll discover\n", |
||||
"quite quickly when you use this is that there are many websites that cannot be summarized with this\n", |
||||
"approach, and that's because they use JavaScript to render the web page and are rather simplistic.\n", |
||||
"Approach has just taken the the just just made the requests the server call and taken what we get back.\n", |
||||
"But there's a solution.\n", |
||||
"And the solution is to use a platform like selenium or others like it, or playwright, which would\n", |
||||
"allow you to render the page and and do it that way.\n", |
||||
"So if you're technically inclined and have some background with that kind of thing, then a really interesting\n", |
||||
"challenge is to turn this into something that's a bit beefier and add selenium to the mix.\n", |
||||
"Um, as it happens, someone has already done that.\n", |
||||
"Uh, one of the students, thank you very much.\n", |
||||
"And if you go into this folder community contributions, you'll see a few different solutions.\n", |
||||
"And one of them is a selenium based solution.\n", |
||||
"So you can always go in and just just look at that yourself.\n", |
||||
"Or you can have a shot at doing it too.\n", |
||||
"And you'll find the solution in there.\n", |
||||
"And if you do come up with a solution to that or to anything, I would love it if you were willing to\n", |
||||
"share your code so that others can benefit from it.\n", |
||||
"Ideally, put it in the community contributions folder and be sure to clear the output.\n", |
||||
"So you go to Kernel >> Restart Kernel and Clear Outputs of All Cells.\n", |
||||
"Otherwise, everything that you've got in your output would also get checked into the code, which would\n", |
||||
"just clutter things up a bit.\n", |
||||
"So do that.\n", |
||||
"And then if you could submit a PR, a pull request, I can then merge that into the code.\n", |
||||
"And if that's a new thing for you, it is a bit of a process.\n", |
||||
"There is a write up here for exactly what you need to do to make that work.\n", |
||||
"Anyways, this was the first project, the first of many.\n", |
||||
"It's a simple project, but it's an important one.\n", |
||||
"A very important business use case.\n", |
||||
"I hope you found it worthwhile.\n", |
||||
"I will see you for the next video, when we wrap up week one, day one.\n", |
||||
"\"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ab80b7dd-4b07-4460-9bdd-90bb6ba9e285", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"prompts = [\n", |
||||
" {\"role\": \"system\", \"content\": system_message},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "0a04ec8b-4d44-4a90-9d84-34fbf757bbe4", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## The structure for connecting to the Gemini API was taken from this contribution.\n", |
||||
"### [From this notebook](https://github.com/ed-donner/llm_engineering/blob/main/week2/community-contributions/day1-with-3way.ipynb)\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3aecf1b4-786c-4834-8cae-0a2758ea3edd", |
||||
"metadata": { |
||||
"scrolled": true |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# The API for Gemini - Structure\n", |
||||
"genai.configure(api_key=api_key)\n", |
||||
"\n", |
||||
"gemini = genai.GenerativeModel(\n", |
||||
" model_name='gemini-1.5-flash',\n", |
||||
" system_instruction=system_message\n", |
||||
")\n", |
||||
"response = gemini.generate_content(user_prompt)\n", |
||||
"response = response.text\n", |
||||
"# response is already a plain string at this point, so it can be written directly to a text file\n", |
||||
"print(response)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "cf9a97d9-d935-40de-9736-e566a26dff25", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## To save the processed text data as a file, utilize the following code:\n", |
||||
"\n", |
||||
"```\n", |
||||
"# This is a common pattern for writing text to a file in Python;\n", |
||||
"# the with statement closes the file automatically, so no explicit close() is needed.\n", |
||||
"with open('data_transcript/pro_summary.txt', 'w') as fp:\n", |
||||
"    fp.write(response)\n", |
||||
"```" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a36a01fa-5718-4bee-bb1b-ad742ab86d6a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# If you kept the raw response object instead of extracting the text, use Markdown(response.text)\n", |
||||
"Markdown(response)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"outputs": [], |
||||
"id": "c7c53213-838c-4b67-8e99-1fd020b3508d", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"summarize(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"outputs": [], |
||||
"id": "fcddef17-9487-4800-8b04-c12ee2a58925", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"display_summary(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "fe959162-2c24-4077-b273-ea924e568731", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Key Benefits of AI Summarization:\n", |
||||
"\n", |
||||
"__Time-Saving:__ Quickly process large volumes of text, such as research papers, reports, and news articles. <br>\n", |
||||
"__Improved Comprehension:__ Identify key points and insights more efficiently. <br>\n", |
||||
"__Enhanced Decision-Making:__ Make informed decisions based on accurate and concise information. <br>\n", |
||||
"__Cost Reduction:__ Reduce labor costs associated with manual summarization tasks. <br>\n", |
||||
"\n", |
||||
"# Potential Applications in Business Development:\n", |
||||
"\n", |
||||
"__Market Research:__ Quickly analyze market reports and competitor insights to identify trends and opportunities. <br>\n", |
||||
"__Sales and Marketing:__ Summarize customer feedback and product reviews to inform marketing strategies. <br>\n", |
||||
"__Customer Support:__ Quickly process customer inquiries and provide accurate answers. <br>\n", |
||||
"__Legal and Compliance:__ Analyze legal documents and contracts to identify key clauses and potential risks. <br>\n", |
||||
"__Human Resources:__ Summarize job applications and performance reviews to streamline hiring and evaluation processes." |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "llm", |
||||
"language": "python", |
||||
"name": "llm" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.12.0" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,152 +0,0 @@
|
||||
{ |
||||
"nbformat": 4, |
||||
"nbformat_minor": 0, |
||||
"metadata": { |
||||
"colab": { |
||||
"provenance": [] |
||||
}, |
||||
"kernelspec": { |
||||
"name": "python3", |
||||
"display_name": "Python 3" |
||||
}, |
||||
"language_info": { |
||||
"name": "python" |
||||
} |
||||
}, |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"source": [ |
||||
"# Getting MOM (Minutes of Meeting) from call transcripts" |
||||
], |
||||
"metadata": { |
||||
"id": "99Z21wE7xpKS" |
||||
} |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"source": [ |
||||
"Import necessary libraries" |
||||
], |
||||
"metadata": { |
||||
"id": "YZMeexE8M_Pp" |
||||
} |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n" |
||||
], |
||||
"metadata": { |
||||
"id": "u5DCVg0Mxj5T" |
||||
}, |
||||
"execution_count": null, |
||||
"outputs": [] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"metadata": { |
||||
"id": "i0V11JQ2az-C" |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"# The code below can be uncommented if you are using a .env file\n", |
||||
"\n", |
||||
"#from dotenv import load_dotenv\n", |
||||
"#load_dotenv(override=True)\n", |
||||
"#api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# I am using Google Colab to import the api_key\n", |
||||
"from google.colab import userdata\n", |
||||
"api_key=userdata.get('gemini_api')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"# Note: Gemini API keys don't start with sk-proj- (that's the OpenAI key format), so we only check for presence and whitespace\n", |
||||
"if not api_key:\n", |
||||
"    print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
"    print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
"    print(\"API key found and looks good so far!\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"source": [ |
||||
"# A class to represent a Transcript\n", |
||||
"from pathlib import Path\n", |
||||
"class Transcript:\n", |
||||
" def __init__(self, file_path):\n", |
||||
"        self.file_path = file_path\n", |
||||
"        self.content = Path(file_path).read_text(encoding='utf-8')\n" |
||||
], |
||||
"metadata": { |
||||
"id": "j6UTsnTEyWZ-" |
||||
}, |
||||
"execution_count": null, |
||||
"outputs": [] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"source": [ |
||||
"# Source of the text file -\"https://raw.githubusercontent.com/GeminiLn/EarningsCall_Dataset/refs/heads/master/3M%20Company_20170425/Text.txt\"\n", |
||||
"path = '/content/Text.txt'  # Specify the path of the file you want to use - the format should be .txt\n", |
||||
"t = Transcript(path)\n" |
||||
], |
||||
"metadata": { |
||||
"id": "hquePU_mzZ7s" |
||||
}, |
||||
"execution_count": null, |
||||
"outputs": [] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"source": [ |
||||
"\n", |
||||
"system_prompt = \"You are an expert at taking meeting notes. Given the transcript below, create the MOM (Minutes of Meeting)\"" |
||||
], |
||||
"metadata": { |
||||
"id": "ex5DB7M8L7KT" |
||||
}, |
||||
"execution_count": null, |
||||
"outputs": [] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"source": [ |
||||
"from google import genai\n", |
||||
"from google.genai import types\n", |
||||
"\n", |
||||
"client = genai.Client(api_key=api_key)\n", |
||||
"\n", |
||||
"response = client.models.generate_content(\n", |
||||
" model=\"gemini-2.0-flash\",\n", |
||||
" config=types.GenerateContentConfig(\n", |
||||
" system_instruction=system_prompt,\n", |
||||
" max_output_tokens=500,\n", |
||||
" temperature=0.1\n", |
||||
" ),\n", |
||||
" contents=t.content,\n", |
||||
")\n", |
||||
"\n", |
||||
"print(response.text)" |
||||
], |
||||
"metadata": { |
||||
"id": "wcpJ34qfMKmV" |
||||
}, |
||||
"execution_count": null, |
||||
"outputs": [] |
||||
} |
||||
] |
||||
} |
@ -1,580 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Instant Gratification\n", |
||||
"\n", |
||||
"## Your first Frontier LLM Project!\n", |
||||
"\n", |
||||
"Let's build a useful LLM solution - in a matter of minutes.\n", |
||||
"\n", |
||||
"By the end of this course, you will have built an autonomous Agentic AI solution with 7 agents that collaborate to solve a business problem. All in good time! We will start with something smaller...\n", |
||||
"\n", |
||||
"Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n", |
||||
"\n", |
||||
"Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n", |
||||
"\n", |
||||
"## If you're new to Jupyter Lab\n", |
||||
"\n", |
||||
"Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n", |
||||
"\n", |
||||
"I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n", |
||||
"\n", |
||||
"## If you'd prefer to work in IDEs\n", |
||||
"\n", |
||||
"If you're more comfortable in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n", |
||||
"If you'd prefer to work in VSCode, [here](https://chatgpt.com/share/676f2e19-c228-8012-9911-6ca42f8ed766) are instructions from an AI friend on how to configure it for the course.\n", |
||||
"\n", |
||||
"## If you'd like to brush up your Python\n", |
||||
"\n", |
||||
"I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n", |
||||
"`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n", |
||||
"\n", |
||||
"## I am here to help\n", |
||||
"\n", |
||||
"If you have any problems at all, please do reach out. \n", |
||||
"I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!)\n", |
||||
"\n", |
||||
"## More troubleshooting\n", |
||||
"\n", |
||||
"Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. At the very end of it is a diagnostics script with some useful debug info.\n", |
||||
"\n", |
||||
"## If this is old hat!\n", |
||||
"\n", |
||||
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n", |
||||
"\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Please read - important note</h2>\n", |
||||
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n", |
||||
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# If you get an error running this cell, then please head over to the troubleshooting notebook!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6900b2a8-6384-4316-8aaa-5e519fca4254", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Connecting to OpenAI\n", |
||||
"\n", |
||||
"The next cell is where we load in the environment variables in your `.env` file and connect to OpenAI.\n", |
||||
"\n", |
||||
"## Troubleshooting if you have problems:\n", |
||||
"\n", |
||||
"Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n", |
||||
"\n", |
||||
"If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n", |
||||
"\n", |
||||
"Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", |
||||
"\n", |
||||
"Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2." |
||||
] |
||||
}, |
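{ |
||||
"cell_type": "markdown", |
||||
"id": "ollama-free-alt-sketch", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"As a minimal sketch of the free Ollama alternative (this assumes you have Ollama running locally on its default port and have pulled the `llama3.2` model - both assumptions, not part of this notebook's setup):\n", |
||||
"\n", |
||||
"```python\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# Ollama exposes an OpenAI-compatible endpoint locally; an api_key value is required by the client but ignored by Ollama\n", |
||||
"ollama = OpenAI(base_url=\"http://localhost:11434/v1\", api_key=\"ollama\")\n", |
||||
"response = ollama.chat.completions.create(\n", |
||||
"    model=\"llama3.2\",  # any model you have pulled locally\n", |
||||
"    messages=[{\"role\": \"user\", \"content\": \"Hello!\"}]\n", |
||||
")\n", |
||||
"print(response.choices[0].message.content)\n", |
||||
"```\n", |
||||
"\n", |
||||
"We discuss Ollama properly during Day 2 - this is just a preview.\n" |
||||
] |
||||
}, |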
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"openai = OpenAI()\n", |
||||
"\n", |
||||
"# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", |
||||
"# If it STILL doesn't work (horrors!) then please see the Troubleshooting notebook in this folder for full instructions" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "442fc84b-0815-4f40-99ab-d9a5da6bda91", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's make a quick call to a Frontier model to get started, as a preview!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a58394bf-1e45-46af-9bfd-01e24da6f49a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling OpenAI with these messages is this easy. Any problems, head over to the Troubleshooting notebook.\n", |
||||
"\n", |
||||
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "2aa190e5-cb31-456a-96cc-db109919cd78", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## OK onwards with our first project" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c5e793b2-6775-426a-a139-4848291d0463", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's try one out. Change the website and add print statements to follow along.\n", |
||||
"\n", |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"print(ed.title)\n", |
||||
"print(ed.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6a478a0c-2c53-48ff-869c-4d08199931e1", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Types of prompts\n", |
||||
"\n", |
||||
"You may know this already - but if not, you will get very familiar with it!\n", |
||||
"\n", |
||||
"Models like GPT-4o have been trained to receive instructions in a particular way.\n", |
||||
"\n", |
||||
"They expect to receive:\n", |
||||
"\n", |
||||
"**A system prompt** that tells them what task they are performing and what tone they should use\n", |
||||
"\n", |
||||
"**A user prompt** -- the conversation starter that they should reply to" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(user_prompt_for(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Messages\n", |
||||
"\n", |
||||
"The API from OpenAI expects to receive messages in a particular structure.\n", |
||||
"Many of the other APIs share this structure:\n", |
||||
"\n", |
||||
"```\n", |
||||
"[\n", |
||||
" {\"role\": \"system\", \"content\": \"system message goes here\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"user message goes here\"}\n", |
||||
"]\n", |
||||
"```\n", |
||||
"\n", |
||||
"To give you a preview, the next 2 cells make a rather simple call - we won't stretch the mighty GPT (yet!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling OpenAI with system and user messages:\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## And now let's build useful messages for GPT-4o-mini, using a function" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Try this out, and then try for a few more websites\n", |
||||
"\n", |
||||
"messages_for(ed)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Time to bring it together - the API for OpenAI is very simple!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the OpenAI API. You will get very familiar with this!\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summarize(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3d926d59-450e-4609-92ba-2d6f244f1342", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3018853a-445f-41ff-9560-d925d1774b2f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b3bcf6f4-adce-45e9-97ad-d9a5d7a3a624", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Let's try more websites\n", |
||||
"\n", |
||||
"Note that this will only work on websites that can be scraped using this simplistic approach.\n", |
||||
"\n", |
||||
"Websites that are rendered with JavaScript, like React apps, won't show up. See the community-contributions folder for a Selenium implementation that gets around this. You'll need to read up on installing Selenium (ask ChatGPT!)\n", |
||||
"\n", |
||||
"Also, websites protected with CloudFront (and similar) may give 403 errors - many thanks to Andy J for pointing this out.\n", |
||||
"\n", |
||||
"But many websites will work just fine!" |
||||
] |
||||
}, |
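{ |
||||
"cell_type": "markdown", |
||||
"id": "selenium-fetch-sketch", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"As a rough sketch of the Selenium approach (assuming you have installed the `selenium` package and have Chrome available - see the community-contributions folder for a full worked solution):\n", |
||||
"\n", |
||||
"```python\n", |
||||
"from selenium import webdriver\n", |
||||
"from selenium.webdriver.chrome.options import Options\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"\n", |
||||
"options = Options()\n", |
||||
"options.add_argument(\"--headless=new\")  # render pages without opening a browser window\n", |
||||
"driver = webdriver.Chrome(options=options)\n", |
||||
"driver.get(\"https://openai.com\")\n", |
||||
"html = driver.page_source  # the fully rendered HTML, including JavaScript output\n", |
||||
"driver.quit()\n", |
||||
"\n", |
||||
"soup = BeautifulSoup(html, \"html.parser\")\n", |
||||
"print(soup.title.string if soup.title else \"No title found\")\n", |
||||
"```\n", |
||||
"\n", |
||||
"You could adapt the `Website` class above along these lines instead of using `requests.get`.\n" |
||||
] |
||||
}, |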
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "45d83403-a24c-44b5-84ac-961449b4008f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://cnn.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "75e9fd40-b354-4341-991e-863ef2e59db7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#181;\">Business applications</h2>\n", |
||||
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n", |
||||
"\n", |
||||
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>\n", |
||||
"\n", |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n", |
||||
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
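{ |
||||
"cell_type": "markdown", |
||||
"id": "email-subject-sketch", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"As a minimal sketch of the email subject-line idea (the email text below is invented purely for illustration):\n", |
||||
"\n", |
||||
"```python\n", |
||||
"email_contents = \"\"\"Hi team - a reminder that the quarterly review has moved from Thursday\n", |
||||
"to Friday at 2pm. Please update your calendars and send me your slides by Wednesday evening.\"\"\"\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
"    {\"role\": \"system\", \"content\": \"You suggest one short, appropriate subject line for the email you are given.\"},\n", |
||||
"    {\"role\": \"user\", \"content\": email_contents}\n", |
||||
"]\n", |
||||
"response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n", |
||||
"print(response.choices[0].message.content)\n", |
||||
"```\n" |
||||
] |
||||
}, |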
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Step 1: Create your prompts\n", |
||||
"\n", |
||||
"system_prompt = \"\"\"You are an AI assistant to a salesperson working in the field of industrial tools and hardware. You have the following roles:\\\n", |
||||
"1. identify and understand the scenario the customer is describing.\\\n", |
||||
"2. figure what caregory of products are suitable for use in the scenario.\\\n", |
||||
"3. search https://industrywaala.com/ for the category of products you identified in 2. and then look for 2 products in that\\\n", |
||||
"category that you think will be most suitable in the given use case. for this you need to check for product features provided in\\\n", |
||||
"the short and long descriptions on the website that are applicable in the scenario.\\\n", |
||||
"4. make a summary of the two products with the brand name, model and 2 other key features of the product\\\n", |
||||
"5. always respond in markdown.\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"user_prompt = \"\"\"\\nCan you help figure out what model of product I should use in a high-temperature environment?\\n\\n\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"# Step 2: Make the messages list\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
"]\n", |
||||
"\n", |
||||
"# Step 3: Call OpenAI\n", |
||||
"\n", |
||||
"response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages\n", |
||||
")\n", |
||||
"\n", |
||||
"# Step 4: print the result\n", |
||||
"\n", |
||||
"display(Markdown(response.choices[0].message.content))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "36ed9f14-b349-40e9-a42c-b367e77f8bda", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## An extra exercise for those who enjoy web scraping\n", |
||||
"\n", |
||||
"You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses JavaScript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "eeab24dc-5f90-4570-b542-b0585aca3eb6", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Sharing your code\n", |
||||
"\n", |
||||
"I'd love it if you share your code afterwards so I can share it with others! You'll notice that some students have already made changes (including a Selenium implementation) which you will find in the community-contributions folder. If you'd like to add your changes to that folder, submit a Pull Request with your new versions in that folder and I'll merge your changes.\n", |
||||
"\n", |
||||
"If you're not an expert with git (and I am not!) then GPT has given some nice instructions on how to submit a Pull Request. It's a bit of an involved process, but once you've done it once it's pretty clear. As a pro-tip: it's best if you clear the outputs of your Jupyter notebooks (Edit >> Clean outputs of all cells, and then Save) for clean notebooks.\n", |
||||
"\n", |
||||
"Here are good instructions courtesy of an AI friend: \n", |
||||
"https://chatgpt.com/share/677a9cb5-c64c-8012-99e0-e06e88afd293" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,167 +0,0 @@
|
||||
import os |
||||
import time |
||||
import pandas as pd |
||||
import re |
||||
from dotenv import load_dotenv |
||||
from selenium import webdriver |
||||
from selenium.webdriver.chrome.service import Service |
||||
from selenium.webdriver.chrome.options import Options |
||||
from selenium.webdriver.common.by import By |
||||
from selenium.webdriver.support.ui import WebDriverWait |
||||
from selenium.webdriver.support import expected_conditions as EC |
||||
from openai import OpenAI |
||||
from openpyxl import load_workbook |
||||
from openpyxl.styles import Font, Alignment |
||||
|
||||
# Load environment variables |
||||
load_dotenv(override=True) |
||||
api_key = os.getenv('OPENAI_API_KEY') |
||||
|
||||
# Validate API Key |
||||
if not api_key: |
||||
raise ValueError("No API key was found - please check your .env file.") |
||||
|
||||
# Initialize OpenAI client |
||||
openai = OpenAI() |
||||
|
||||
# Set up Selenium WebDriver |
||||
chrome_options = Options() |
||||
chrome_options.add_argument("--headless") |
||||
chrome_options.add_argument("--disable-gpu") |
||||
chrome_options.add_argument("--no-sandbox") |
||||
chrome_options.add_argument("--disable-dev-shm-usage") |
||||
|
||||
class Website: |
||||
"""Scrapes and processes website content using Selenium.""" |
||||
|
||||
def __init__(self, url: str): |
||||
self.url = url |
||||
self.text = "No content extracted." |
||||
|
||||
service = Service(executable_path="/opt/homebrew/bin/chromedriver") |
||||
driver = webdriver.Chrome(service=service, options=chrome_options) |
||||
|
||||
try: |
||||
driver.get(url) |
||||
WebDriverWait(driver, 10).until( |
||||
EC.presence_of_element_located((By.TAG_NAME, "body")) |
||||
) |
||||
body_element = driver.find_element(By.TAG_NAME, "body") |
||||
self.text = body_element.text.strip() if body_element else "No content extracted." |
||||
except Exception as e: |
||||
print(f"Error fetching website: {e}") |
||||
finally: |
||||
driver.quit() |
||||
|
||||
def summarized_text(self, max_length=1500): |
||||
return self.text[:max_length] + ("..." if len(self.text) > max_length else "") |
||||
|
||||
def clean_text(text): |
||||
""" |
||||
Cleans extracted text by removing markdown-style formatting. |
||||
""" |
||||
text = re.sub(r"#+\s*", "", text)  # strip markdown headers |
||||
text = re.sub(r"\*\*(.*?)\*\*", r"\1", text) |
||||
return text.strip() |
||||
|
||||
# Aspect-specific prompts for concise output |
||||
aspect_prompts = { |
||||
"Marketing Strategies": "Summarize the core marketing strategies used on this website in under 30 words. Do not include a title or introduction.", |
||||
"SEO Keywords": "List only the most relevant SEO keywords from this website, separated by commas. Do not include a title or introduction.", |
||||
"User Engagement Tactics": "List key engagement tactics used on this website (e.g., interactive features, user incentives, social proof). Keep responses to 3-5 bullet points. Do not include a title or introduction.", |
||||
"Call-to-Action Phrases": "List only the most common Call-to-Action phrases used on this website, separated by commas. Do not include a title or introduction.", |
||||
"Branding Elements": "Summarize the brand's tone, style, and positioning in under 30 words. Do not include a title or introduction.", |
||||
"Competitor Comparison": "Briefly describe how this website differentiates itself from competitors in under 30 words. Do not include a title or introduction.", |
||||
"Product Descriptions": "List the most important features or benefits of the products/services described on this website in under 30 words. Do not include a title or introduction.", |
||||
"Customer Reviews Sentiment": "Summarize the overall sentiment of customer reviews in under 30 words, highlighting common themes. Do not include a title or introduction.", |
||||
"Social Media Strategy": "List key social media strategies used on this website, separated by commas. Do not include a title or introduction." |
||||
} |
||||
|
||||
|
||||
def summarize(url: str) -> dict: |
||||
""" |
||||
Fetches a website, extracts relevant content, and generates a separate summary for each aspect. |
||||
|
||||
:param url: The website URL to analyze. |
||||
:return: A dictionary containing extracted information. |
||||
""" |
||||
website = Website(url) |
||||
|
||||
if not website.text or website.text == "No content extracted.": |
||||
return {"URL": url, "Error": "Failed to extract content"} |
||||
|
||||
extracted_data = {"URL": url} |
||||
|
||||
for aspect, prompt in aspect_prompts.items(): |
||||
try: |
||||
formatted_prompt = f"{prompt} \n\nContent:\n{website.summarized_text()}" |
||||
response = openai.chat.completions.create( |
||||
model="gpt-4o-mini", |
||||
messages=[ |
||||
{"role": "system", "content": "You are an expert at extracting structured information from website content."}, |
||||
{"role": "user", "content": formatted_prompt} |
||||
] |
||||
) |
||||
|
||||
extracted_data[aspect] = clean_text(response.choices[0].message.content) |
||||
|
||||
except Exception as e: |
||||
extracted_data[aspect] = f"Error generating summary: {e}" |
||||
|
||||
return extracted_data |
||||
|
||||
def save_to_excel(data_list: list, filename="website_analysis.xlsx"): |
||||
""" |
||||
Saves extracted information to an Excel file with proper formatting. |
||||
|
||||
:param data_list: A list of dictionaries containing extracted website details. |
||||
:param filename: The name of the Excel file to save data. |
||||
""" |
||||
df = pd.DataFrame(data_list) |
||||
|
||||
df.to_excel(filename, index=False) |
||||
|
||||
wb = load_workbook(filename) |
||||
ws = wb.active |
||||
|
||||
# Auto-adjust column widths |
||||
for col in ws.columns: |
||||
max_length = 0 |
||||
col_letter = col[0].column_letter |
||||
for cell in col: |
||||
try: |
||||
if cell.value: |
||||
max_length = max(max_length, len(str(cell.value))) |
||||
except Exception: |
||||
pass |
||||
ws.column_dimensions[col_letter].width = min(max_length + 2, 50) |
||||
|
||||
# Format headers |
||||
for cell in ws[1]: |
||||
cell.font = Font(bold=True) |
||||
cell.alignment = Alignment(horizontal="center", vertical="center") |
||||
|
||||
# Wrap text for extracted content |
||||
for row in ws.iter_rows(min_row=2): |
||||
for cell in row: |
||||
cell.alignment = Alignment(wrap_text=True, vertical="top") |
||||
|
||||
wb.save(filename) |
||||
print(f"Data saved to {filename} with improved formatting.") |
||||
|
||||
# 🔹 LIST OF WEBSITES TO PROCESS |
||||
websites = [ |
||||
"https://www.gymshark.com/", |
||||
] |
||||
|
||||
if __name__ == "__main__": |
||||
print("\nProcessing websites...\n") |
||||
extracted_data_list = [] |
||||
|
||||
for site in websites: |
||||
print(f"Extracting data from {site}...") |
||||
extracted_data = summarize(site) |
||||
extracted_data_list.append(extracted_data) |
||||
|
||||
save_to_excel(extracted_data_list) |
||||
print("\nAll websites processed successfully!") |
@ -1,87 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "44aba2a0-c6eb-4fc1-a5cc-0a8f8679dbb8", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Michelin-star chef..." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d4d58124-5e9a-4f5a-9e0a-ff74f43896a8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "67dc3099-2ccc-4ee8-8ff2-0dbbe4ae2fcb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"You are a professional chef in a Michelin-star restaurant. You will help me cook restaurant-style dishes using the ingredients I have left in my refrigerator. \\\n", |
||||
"You will provide detailed instructions with precise times and measurements in grams and include calorie information for raw ingredients, not cooked ones. \\\n", |
||||
"Add the caloric information at the end. Your responses should be formatted in Markdown.\"\n", |
||||
"\n", |
||||
"user_prompt = \"\"\"\n", |
||||
"Help me with a recipe using the ingredients I have left in the refrigerator. I have spinach, eggs, pasta, rice, chicken, beef, carrots, potatoes, butter, milk, cheese, tomatoes, red peppers, and all spices in the pantry.\\n\\n\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt},\n", |
||||
"]\n", |
||||
" \n", |
||||
"response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages\n", |
||||
" )\n", |
||||
"\n", |
||||
"# Step 4: print the result in markdown format\n", |
||||
"pretty_response = Markdown(response.choices[0].message.content)\n", |
||||
"display(pretty_response)" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,127 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0a512c2a-55e7-40e1-ab17-88b7034ca09a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Imports\n", |
||||
"import openai\n", |
||||
"import os\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from openai import OpenAI\n", |
||||
"from IPython.display import Markdown, display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1aa8dd82-6b5e-4dbd-a2ee-8367e796a51f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - head over to the troubleshooting notebook!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it doesn't start with sk-proj-; make sure you're using the right key (Check troubleshooting notebook)\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like there is whitespace at the beginning or end. (Check troubleshooting notebook)\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2acd579b-846c-4aa6-ba6c-1cc1a5a2eeb6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Input the system prompt\n", |
||||
"system_prompt = \"\"\"You are a top-notch AI music expert with knowledge of all genres, songs, and artists. You can search Google for lyrics. You have the following rules:\n", |
||||
"1. Carefully break down what type of recommendation the user wants and the context.\n", |
||||
"2. If asked to recommend genres similar to a song or artist, please identify the top 3 genres.\n", |
||||
"3. If asked to recommend artists from songs or genres then recommend the top 5 artists.\n", |
||||
"4. If asked to recommend songs from genres or artists then recommend the top 10 songs.\n", |
||||
"5. If asked for a general recommendation, give them the top 5 songs based on the context.\n", |
||||
"6. Be flexible and adaptable with recommendations and consider the context of the user's request.\n", |
||||
"7. Always respond in markdown.\n", |
||||
"\"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3c1cf212-538c-4e9a-8da5-337bd7b6197c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# music recommender function\n", |
||||
"def music_recommender(user_prompt):\n", |
||||
" messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt}\n", |
||||
" ]\n", |
||||
" \n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=\"gpt-4\",\n", |
||||
" messages=messages,\n", |
||||
" max_tokens=300\n", |
||||
" )\n", |
||||
" \n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4f277561-af8b-4715-90e7-6ebaadeb15d0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# User prompt (Change this to fit your needs!)\n", |
||||
"user_prompt = \"Can you recommend me songs from Taylor Swift\"\n", |
||||
"\n", |
||||
"# Example usage\n", |
||||
"response = music_recommender(user_prompt)\n", |
||||
"display(Markdown(response))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "bb869d36-de14-4e46-9087-223d6b257efa", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,223 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "bfa3abd0-4e66-4117-96f9-7a71fbb6d0cb", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Powerpoint Slides Summarizer\n", |
||||
"\n", |
||||
"This converts a PowerPoint presentation into notes that a student can easily skim through.\n", |
||||
"\n", |
||||
"Concepts Used:\n", |
||||
"- Converting Contents of PPT to text via python-pptx\n", |
||||
"- User and System Prompts\n", |
||||
"- Use of OpenAI GPT-4o-mini via API key\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ab95eb49-6a2d-4c7d-9057-78a2cd9364cc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!pip install python-pptx" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "62715f16-7125-455e-98e7-5705871c0e4a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ff42eab7-789d-44f8-a5cc-64baeebf3224", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "bce425c2-6d19-4c03-93ce-8930dabc61ee", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# creating an instance\n", |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c0c75e30-3b38-4a89-b7d3-a41a6f5dc650", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"from pptx import Presentation\n", |
||||
"\n", |
||||
"class PowerPoint():\n", |
||||
" def __init__(self,ppt):\n", |
||||
" \"\"\"\n", |
||||
" Creates a PowerPoint object, with name and text.\n", |
||||
" \"\"\"\n", |
||||
" self.ppt = ppt\n", |
||||
" self.title = os.path.basename(ppt)\n", |
||||
" self.text = self.extract_text()\n", |
||||
"\n", |
||||
" def extract_text(self):\n", |
||||
" \"\"\"\n", |
||||
" Extracts text from powerpoint.\n", |
||||
" \"\"\"\n", |
||||
" prs = Presentation(self.ppt)\n", |
||||
" text_content = []\n", |
||||
" \n", |
||||
" for slide in prs.slides:\n", |
||||
" for shape in slide.shapes:\n", |
||||
" if hasattr(shape, \"text\"):\n", |
||||
" text_content.append(shape.text)\n", |
||||
" \n", |
||||
" return \"\\n\".join(text_content)\n", |
||||
" " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1963a055-87f4-4e47-8456-cac4d4ac57fc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"You are an assistant that analyzes the contents \\\n", |
||||
"of a PowerPoint presentation, and provides a summary in the style of \\\n", |
||||
"a cheat-sheet, for students to easily learn key concepts from. \\\n", |
||||
"You are to ignore text that might be navigation-related \\\n", |
||||
"and respond in Markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ca600e90-7d3f-4fc7-a698-1b8f2925f81e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of PowerPoints:\n", |
||||
"\n", |
||||
"def user_prompt_for(powerpoint):\n", |
||||
" user_prompt = f\"You are looking at a PowerPoint presentation titled {powerpoint.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this powerpoint are as follows; \\\n", |
||||
"please provide a summary of the content in markdown. \\\n", |
||||
"If it includes a question bank, add that along with short answers too.\\n\\n\"\n", |
||||
" user_prompt += powerpoint.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4fe19c56-9940-4528-b43a-c86798b215d2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def messages_for(powerpoint):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(powerpoint)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f7704da5-90b0-40af-bbb4-7d589309f180", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the OpenAI API. \n", |
||||
"\n", |
||||
"def summarize(powerpoint_path):\n", |
||||
" powerpoint = PowerPoint(powerpoint_path)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(powerpoint)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "49d1d0cf-fa4b-4bea-bd68-a834145070ef", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def display_summary(ppt_path):\n", |
||||
" summary = summarize(ppt_path)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "348078d1-e86f-4eb3-909d-33ab4ede984e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"ppt_file = \"Theoretical Perspectives on Media and Technology.pptx\" \n", |
||||
"display_summary(ppt_file)" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,170 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 8, |
||||
"id": "6ba7c60a-c338-49a1-b1ba-46b7c20e33cb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import openai\n", |
||||
"import os\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from openai import OpenAI\n", |
||||
"from IPython.display import Markdown, display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 2, |
||||
"id": "4acb4062-17b2-43b1-8b74-aefaa9599463", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"API key found and looks good so far!\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 5, |
||||
"id": "56f011b2-b759-4ad6-9d01-870fbcb8ade1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def generate_quiz(topic):\n", |
||||
" prompt = f\"Generate a multiple-choice quiz with 5 questions on the topic: {topic}. Include the correct answer for each question.\"\n", |
||||
" \n", |
||||
" messages = [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are a quiz generator. Create a multiple-choice quiz with 5 questions and provide the correct answers. Respond in markdown.\"},\n", |
||||
" {\"role\": \"user\", \"content\": prompt}\n", |
||||
" ]\n", |
||||
" \n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=\"gpt-4\",\n", |
||||
" messages=messages,\n", |
||||
" max_tokens=300\n", |
||||
" )\n", |
||||
" \n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 10, |
||||
"id": "1cf977e7-b04b-49e7-8b0a-d0ab2800c234", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"**Question 1:** What is Python?\n", |
||||
"\n", |
||||
"**Choice A:** A type of snake\n", |
||||
"**Choice B:** A medical term\n", |
||||
"**Choice C:** A drilling tool\n", |
||||
"**Choice D:** A high-level programming language\n", |
||||
"\n", |
||||
"Correct Answer: **Choice D:** A high-level programming language\n", |
||||
"\n", |
||||
"**Question 2:** In Python, what keyword is used to create a function?\n", |
||||
"\n", |
||||
"**Choice A:** func\n", |
||||
"**Choice B:** def\n", |
||||
"**Choice C:** function\n", |
||||
"**Choice D:** create\n", |
||||
"\n", |
||||
"Correct Answer: **Choice B:** def\n", |
||||
"\n", |
||||
"**Question 3:** What is the correct syntax to output \"Hello World\" in Python?\n", |
||||
"\n", |
||||
"**Choice A:** printf(\"Hello World\")\n", |
||||
"**Choice B:** println(\"Hello World\")\n", |
||||
"**Choice C:** echo(\"Hello World\")\n", |
||||
"**Choice D:** print(\"Hello World\")\n", |
||||
"\n", |
||||
"Correct Answer: **Choice D:** print(\"Hello World\")\n", |
||||
"\n", |
||||
"**Question 4:** How would you create a variable \"x\" that equals 5 in Python?\n", |
||||
"\n", |
||||
"**Choice A:** var x = 5\n", |
||||
"**Choice B:** x := 5\n", |
||||
"**Choice C:** x = 5\n", |
||||
"**Choice D:** x : 5\n", |
||||
"\n", |
||||
"Correct Answer: **Choice C:** x = 5\n", |
||||
"\n", |
||||
"**Question 5:** How do you create a comment in Python?\n", |
||||
"\n", |
||||
"**Choice A:** // This is a comment\n", |
||||
"**Choice B:** # This is a comment\n", |
||||
"**Choice C:** <!-- This is a comment -->\n", |
||||
"**Choice D:** /* This is a comment */\n", |
||||
"\n", |
||||
"Correct Answer" |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Example usage\n", |
||||
"topic = \"Python programming\"\n", |
||||
"quiz = generate_quiz(topic)\n", |
||||
"display(Markdown(quiz))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "70990d7c-6061-43c6-b3c9-9146a3c51c3e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,230 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "56c86bae-1d3c-4c01-b5d6-c8879fec1954", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Wiki Summarizer\n", |
||||
"\n", |
||||
"This project takes the name of a topic as input and checks whether the corresponding Wikipedia page exists. If it does, it parses the web page and outputs a summary created using the GPT-4o-mini model. \n", |
||||
"\n", |
||||
"Concepts used: \n", |
||||
"- Web Scraping via Beautiful Soup\n", |
||||
"- User and System Prompts\n", |
||||
"- Use of OpenAI GPT-4o-mini via API key" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4820830e-b3b4-426e-b1a2-518e7c7f6c1a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2cd7ad51-396c-45c5-9089-f7b21a19da50", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Load environment variables in a file called .env\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"# Check the key\n", |
||||
"\n", |
||||
"if not api_key:\n", |
||||
" print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", |
||||
"elif not api_key.startswith(\"sk-proj-\"):\n", |
||||
" print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", |
||||
"elif api_key.strip() != api_key:\n", |
||||
" print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", |
||||
"else:\n", |
||||
" print(\"API key found and looks good so far!\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "689421a0-20a1-428b-a8b8-fa239fa6f633", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# creating an instance\n", |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "401901ae-7639-4190-98fd-e69374084723", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def isWiki(url):\n", |
||||
" \"\"\"\n", |
||||
" Check whether a Wikipedia page exists for a given topic, and \n", |
||||
" returns a Boolean value.\n", |
||||
" \"\"\"\n", |
||||
" response = requests.get(url)\n", |
||||
"\n", |
||||
" if response.status_code != 200:\n", |
||||
" return False\n", |
||||
" \n", |
||||
" return True" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7cdb14d3-05ea-4de2-a475-d49a5731692e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7f6ed50e-0fb5-479e-9845-f62cf25980f7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"You are an educational assistant tasked with helping users understand topics\\\n", |
||||
"by providing succinct and clear summaries of requested data. Ignore navigation-related text\\\n", |
||||
"and provide answers in markdown format\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b2d77dd9-a94f-49c1-a1be-11d157bd37fb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of wiki pages:\n", |
||||
"\n", |
||||
"def user_prompt_for(wiki):\n", |
||||
" user_prompt = f\"You are looking at a Wikipedia page titled {wiki.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this page is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown.\\n\"\n", |
||||
" user_prompt += wiki.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0d23bcc4-1d89-4bd4-9809-d3a1819aa919", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def messages_for(wiki):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(wiki)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "971bd7fb-2ff8-4494-b386-de69a39c24ff", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = \"gpt-4o-mini\",\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a8fdf9f2-f49e-4d06-ac9e-dfcb8da33d60", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def display_summary(topic):\n", |
||||
" url = f\"https://en.wikipedia.org/wiki/{topic}\"\n", |
||||
" if isWiki(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))\n", |
||||
" else:\n", |
||||
" print('A Wikipedia page does not exist for this topic')\n", |
||||
" " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f4758ef0-9b7c-4d3e-9131-e3284dc76b6b", |
||||
"metadata": { |
||||
"scrolled": true |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"topic = input('Enter the name of Wikipedia page for which you would like a summary: ').strip()\n", |
||||
"display_summary(topic)" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,192 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e3ce0a59-fbfb-4377-85db-f62f95039200",
"metadata": {},
"source": [
"# Day 2 EXERCISE - Summarization using Ollama"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import os\n",
"from dotenv import load_dotenv\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "29ddd15d-a3c5-4f4e-a678-873f56162724",
"metadata": {},
"outputs": [],
"source": [
"# Constants\n",
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"MODEL = \"llama3.2\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb5c0f84-4e4d-4f87-b492-e09d0333a638",
"metadata": {},
"outputs": [],
"source": [
"# A class to represent a Webpage\n",
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"# Some websites need you to use proper headers when fetching them:\n",
"headers = {\n",
"    \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
"    def __init__(self, url):\n",
"        \"\"\"\n",
"        Create this Website object from the given url using the BeautifulSoup library\n",
"        \"\"\"\n",
"        self.url = url\n",
"        response = requests.get(url, headers=headers)\n",
"        soup = BeautifulSoup(response.content, 'html.parser')\n",
"        self.title = soup.title.string if soup.title else \"No title found\"\n",
"        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
"            irrelevant.decompose()\n",
"        self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23457b52-c85b-4dc1-b946-6f1461dc0675",
"metadata": {},
"outputs": [],
"source": [
"ed = Website(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bed206ed-43c1-4f68-ad01-a738b3b4648d",
"metadata": {},
"outputs": [],
"source": [
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e558f381-614a-461f-83bc-e5bdc99460df",
"metadata": {},
"outputs": [],
"source": [
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
"    user_prompt = f\"You are looking at a website titled {website.title}\"\n",
"    user_prompt += \"\\nThe contents of this website are as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
"    user_prompt += website.text\n",
"    return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5ba638d-aeb9-441e-a62a-8e8027ad8439",
"metadata": {},
"outputs": [],
"source": [
"# See how this function creates exactly the format above\n",
"\n",
"def messages_for(website):\n",
"    return [\n",
"        {\"role\": \"system\", \"content\": system_prompt},\n",
"        {\"role\": \"user\", \"content\": user_prompt_for(website)}\n",
"    ]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e85ca2ec-3e46-4b8f-9c2f-66e7d20138fa",
"metadata": {},
"outputs": [],
"source": [
"# Build the request payload for the website summary\n",
"\n",
"ed = Website(\"https://edwarddonner.com\")\n",
"messages = messages_for(ed)\n",
"\n",
"payload = {\n",
"    \"model\": MODEL,\n",
"    \"messages\": messages,\n",
"    \"stream\": False\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7745b9c4-57dc-4867-9180-61fa5db55eb8",
"metadata": {},
"outputs": [],
"source": [
"import ollama\n",
"\n",
"response = ollama.chat(model=MODEL, messages=messages)\n",
"print(response['message']['content'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "402d5686-4e76-4110-b65a-b3906c35c0a4",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@ -1,354 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Welcome to your first assignment!\n",
"\n",
"Instructions are below. Please give this a try, and look in the solutions folder if you get stuck (or feel free to ask me!)"
]
},
{
"cell_type": "markdown",
"id": "ada885d9-4d42-4d9b-97f0-74fbbbfe93a9",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#f71;\">Just before we get to the assignment --</h2>\n",
"            <span style=\"color:#f71;\">I thought I'd take a second to point you at this page of useful resources for the course. This includes links to all the slides.<br/>\n",
"            <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
"            Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
"            </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458",
"metadata": {},
"source": [
"# HOMEWORK EXERCISE ASSIGNMENT\n",
"\n",
"Upgrade the day 1 project that summarizes a webpage so that it uses an open-source model running locally via Ollama rather than OpenAI.\n",
"\n",
"You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n",
"\n",
"**Benefits:**\n",
"1. No API charges - open-source\n",
"2. Data doesn't leave your box\n",
"\n",
"**Disadvantages:**\n",
"1. Significantly less powerful than frontier models\n",
"\n",
"## Recap on installation of Ollama\n",
"\n",
"Simply visit [ollama.com](https://ollama.com) and install!\n",
"\n",
"Once complete, the ollama server should already be running locally. \n",
"If you visit: \n",
"[http://localhost:11434/](http://localhost:11434/)\n",
"\n",
"You should see the message `Ollama is running`. \n",
"\n",
"If not, bring up a new Terminal (Mac) or PowerShell (Windows) and enter `ollama serve` \n",
"And in another Terminal (Mac) or PowerShell (Windows), enter `ollama pull llama3.2` \n",
"Then try [http://localhost:11434/](http://localhost:11434/) again.\n",
"\n",
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or PowerShell, and change the code below from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "29ddd15d-a3c5-4f4e-a678-873f56162724",
"metadata": {},
"outputs": [],
"source": [
"# Constants\n",
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"MODEL = \"llama3.2\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dac0a679-599c-441f-9bf2-ddc73d35b940",
"metadata": {},
"outputs": [],
"source": [
"# Create a messages list using the same format that we used for OpenAI\n",
"\n",
"messages = [\n",
"    {\"role\": \"user\", \"content\": \"Describe some of the business applications of Generative AI\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7bb9c624-14f0-4945-a719-8ddb64f66f47",
"metadata": {},
"outputs": [],
"source": [
"payload = {\n",
"    \"model\": MODEL,\n",
"    \"messages\": messages,\n",
"    \"stream\": False\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "479ff514-e8bd-4985-a572-2ea28bb4fa40",
"metadata": {},
"outputs": [],
"source": [
"# Let's just make sure the model is loaded\n",
"\n",
"!ollama pull llama3.2"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "42b9f644-522d-4e05-a691-56e7658c0ea9",
"metadata": {},
"outputs": [],
"source": [
"# If this doesn't work for any reason, try the 2 versions in the following cells\n",
"# And double-check the instructions in the 'Recap on installation of Ollama' at the top of this lab\n",
"# And if none of that works - contact me!\n",
"\n",
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
"print(response.json()['message']['content'])"
]
},
{
"cell_type": "markdown",
"id": "6a021f13-d6a1-4b96-8e18-4eae49d876fe",
"metadata": {},
"source": [
"# Introducing the ollama package\n",
"\n",
"And now we'll do the same thing, but using the elegant ollama python package instead of a direct HTTP call.\n",
"\n",
"Under the hood, it's making the same call as above to the ollama server running at localhost:11434"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7745b9c4-57dc-4867-9180-61fa5db55eb8",
"metadata": {},
"outputs": [],
"source": [
"import ollama\n",
"\n",
"response = ollama.chat(model=MODEL, messages=messages)\n",
"print(response['message']['content'])"
]
},
{
"cell_type": "markdown",
"id": "a4704e10-f5fb-4c15-a935-f046c06fb13d",
"metadata": {},
"source": [
"## Alternative approach - using the OpenAI python library to connect to Ollama"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23057e00-b6fc-4678-93a9-6b31cb704bff",
"metadata": {},
"outputs": [],
"source": [
"# There's actually an alternative approach that some people might prefer\n",
"# You can use the OpenAI client python library to call Ollama:\n",
"\n",
"from openai import OpenAI\n",
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
"\n",
"response = ollama_via_openai.chat.completions.create(\n",
"    model=MODEL,\n",
"    messages=messages\n",
")\n",
"\n",
"print(response.choices[0].message.content)"
]
},
{
"cell_type": "markdown",
"id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898",
"metadata": {},
"source": [
"# NOW the exercise for you\n",
"\n",
"Take the code from day 1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ef76cfc2-c519-4cb2-947a-64948517913d",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a151a8de-1e90-4190-b68e-b44b25a2cdd7",
"metadata": {},
"outputs": [],
"source": [
"# Constants\n",
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"MODEL = \"llama3.2\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "700fffc1-c7b0-4001-b381-5c4fd28c8799",
"metadata": {},
"outputs": [],
"source": [
"# Reusing the Website BeautifulSoup wrapper from Day 1\n",
"# SSL verification has been disabled to work around VPN limitations\n",
"\n",
"headers = {\n",
"    \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n",
"}\n",
"\n",
"class Website:\n",
"\n",
"    def __init__(self, url):\n",
"        \"\"\"\n",
"        Create this Website object from the given url using the BeautifulSoup library\n",
"        \"\"\"\n",
"        self.url = url\n",
"        response = requests.get(url, headers=headers, verify=False)  # NOTE: SSL verification disabled here to work around VPN limitations\n",
"        soup = BeautifulSoup(response.content, 'html.parser')\n",
"        self.title = soup.title.string if soup.title else \"No title found\"\n",
"        for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n",
"            irrelevant.decompose()\n",
"        self.text = soup.body.get_text(separator=\"\\n\", strip=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "402d5686-4e76-4110-b65a-b3906c35c0a4",
"metadata": {},
"outputs": [],
"source": [
"def user_prompt_for(website):\n",
"    user_prompt = f\"You are looking at a website titled {website.title}\"\n",
"    user_prompt += \"\\nThe contents of this website are as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
"    user_prompt += website.text\n",
"    return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "81f5f140-8f77-418f-a252-8ad5d11f6c5f",
"metadata": {},
"outputs": [],
"source": [
"# Enter the web URL here:\n",
"website_url = \"https://www.timecube.net/\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1d0ce4aa-b43e-4642-bcbd-d5964700ece8",
"metadata": {},
"outputs": [],
"source": [
"# This will first print an SSL warning (which can be ignored) before the response appears.\n",
"\n",
"import ollama\n",
"\n",
"system_prompt = \"You are a virtual assistant who analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\"\n",
"\n",
"messages = [\n",
"    {\"role\": \"system\", \"content\": system_prompt},\n",
"    {\"role\": \"user\", \"content\": user_prompt_for(Website(website_url))}\n",
"]\n",
"\n",
"response = ollama.chat(model=MODEL, messages=messages)\n",
"print(response['message']['content'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "910b7e06-c92d-47bf-a4ee-a006d70deb06",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@ -1,522 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9",
"metadata": {},
"source": [
"# Welcome to your first assignment!\n",
"\n",
"Instructions are below. Please give this a try, and look in the solutions folder if you get stuck (or feel free to ask me!)"
]
},
{
"cell_type": "markdown",
"id": "ada885d9-4d42-4d9b-97f0-74fbbbfe93a9",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
"    <tr>\n",
"        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
"            <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
"        </td>\n",
"        <td>\n",
"            <h2 style=\"color:#f71;\">Just before we get to the assignment --</h2>\n",
"            <span style=\"color:#f71;\">I thought I'd take a second to point you at this page of useful resources for the course. This includes links to all the slides.<br/>\n",
"            <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n",
"            Please keep this bookmarked, and I'll continue to add more useful links there over time.\n",
"            </span>\n",
"        </td>\n",
"    </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458",
"metadata": {},
"source": [
"# HOMEWORK EXERCISE ASSIGNMENT\n",
"\n",
"Upgrade the day 1 project that summarizes a webpage so that it uses an open-source model running locally via Ollama rather than OpenAI.\n",
"\n",
"You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n",
"\n",
"**Benefits:**\n",
"1. No API charges - open-source\n",
"2. Data doesn't leave your box\n",
"\n",
"**Disadvantages:**\n",
"1. Significantly less powerful than frontier models\n",
"\n",
"## Recap on installation of Ollama\n",
"\n",
"Simply visit [ollama.com](https://ollama.com) and install!\n",
"\n",
"Once complete, the ollama server should already be running locally. \n",
"If you visit: \n",
"[http://localhost:11434/](http://localhost:11434/)\n",
"\n",
"You should see the message `Ollama is running`. \n",
"\n",
"If not, bring up a new Terminal (Mac) or PowerShell (Windows) and enter `ollama serve` \n",
"And in another Terminal (Mac) or PowerShell (Windows), enter `ollama pull llama3.2` \n",
"Then try [http://localhost:11434/](http://localhost:11434/) again.\n",
"\n",
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or PowerShell, and change the code below from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
"source": [
"# imports\n",
"\n",
"import requests\n",
"from bs4 import BeautifulSoup\n",
"from IPython.display import Markdown, display"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "07e106bd-10c5-4365-b85b-397b5f059656",
"metadata": {},
"outputs": [],
"source": [
"# Constants\n",
"# NOTE: this cell was saved as a raw cell, so it never executed; it must be a code cell for OLLAMA_API and MODEL to be defined\n",
"\n",
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n",
"HEADERS = {\"Content-Type\": \"application/json\"}\n",
"MODEL = \"llama3.2\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "dac0a679-599c-441f-9bf2-ddc73d35b940",
"metadata": {},
"outputs": [],
"source": [
"# Create a messages list using the same format that we used for OpenAI\n",
"\n",
"messages = [\n",
"    {\"role\": \"user\", \"content\": \"Describe some of the business applications of Generative AI\"}\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7bb9c624-14f0-4945-a719-8ddb64f66f47",
"metadata": {},
"outputs": [],
"source": [
"payload = {\n",
"    \"model\": MODEL,\n",
"    \"messages\": messages,\n",
"    \"stream\": False\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "42b9f644-522d-4e05-a691-56e7658c0ea9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Generative AI (Artificial Intelligence) has numerous business applications across various industries. Here are some examples:\n",
"\n",
"1. **Content Generation**: Generative AI can create high-quality content such as articles, social media posts, product descriptions, and more. This can help businesses save time and resources on content creation.\n",
"2. **Product Design**: Generative AI can be used to design new products, such as fashion items, jewelry, or electronics. It can also generate 3D models and prototypes, reducing the need for manual design and prototyping.\n",
"3. **Image and Video Generation**: Generative AI can create realistic images and videos that can be used in marketing campaigns, advertising, and social media. This can help businesses create engaging visual content without requiring extensive photography or videography skills.\n",
"4. **Chatbots and Virtual Assistants**: Generative AI can power chatbots and virtual assistants that provide customer support, answer frequently asked questions, and even engage in basic conversations.\n",
"5. **Predictive Maintenance**: Generative AI can analyze sensor data from machines and predict when maintenance is needed, reducing downtime and increasing efficiency.\n",
"6. **Personalized Recommendations**: Generative AI can analyze customer behavior and preferences to generate personalized product recommendations, improving the overall shopping experience.\n",
"7. **Customer Segmentation**: Generative AI can help businesses segment their customers based on their behavior, demographics, and preferences, enabling targeted marketing campaigns.\n",
"8. **Automated Writing Assistance**: Generative AI can assist writers with ideas, suggestions, and even full-text writing, helping to boost productivity and creativity.\n",
"9. **Data Analysis and Visualization**: Generative AI can analyze large datasets and generate insights, visualizations, and predictions that can inform business decisions.\n",
"10. **Creative Collaboration**: Generative AI can collaborate with human creatives, such as artists, designers, and writers, to generate new ideas, concepts, and content.\n",
"\n",
"Some specific industries where Generative AI is being applied include:\n",
"\n",
"1. **Marketing and Advertising**: generating personalized ads, content, and messaging.\n",
"2. **Finance and Banking**: automating financial analysis, risk assessment, and customer service.\n",
"3. **Healthcare**: generating medical images, analyzing patient data, and predicting disease outcomes.\n",
"4. **Manufacturing and Supply Chain**: optimizing production workflows, predicting demand, and identifying potential bottlenecks.\n",
"5. **Education**: creating personalized learning experiences, grading assignments, and developing educational content.\n",
"\n",
"These are just a few examples of the many business applications of Generative AI. As the technology continues to evolve, we can expect to see even more innovative uses across various industries.\n"
]
}
],
"source": [
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n",
"print(response.json()['message']['content'])"
]
},
{
"cell_type": "markdown",
"id": "6a021f13-d6a1-4b96-8e18-4eae49d876fe",
"metadata": {},
"source": [
"# Introducing the ollama package\n",
"\n",
"And now we'll do the same thing, but using the elegant ollama python package instead of a direct HTTP call.\n",
"\n",
"Under the hood, it's making the same call as above to the ollama server running at localhost:11434"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7745b9c4-57dc-4867-9180-61fa5db55eb8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Generative AI has numerous business applications across various industries. Here are some examples:\n",
"\n",
"1. **Content Generation**: Generative AI can be used to generate high-quality content such as articles, social media posts, product descriptions, and more. This can save time and resources for businesses that need to produce a large volume of content.\n",
||||
"2. **Product Design**: Generative AI can be used to design new products, such as furniture, electronics, and other consumer goods. It can also help optimize product designs by generating multiple versions and selecting the most suitable one based on various criteria.\n", |
||||
"3. **Marketing Automation**: Generative AI can be used to create personalized marketing campaigns, such as email marketing automation, social media ads, and more. This can help businesses tailor their marketing efforts to specific customer segments and improve engagement rates.\n", |
||||
"4. **Image and Video Editing**: Generative AI can be used to edit images and videos, such as removing background noise, correcting color casts, and enhancing video quality. This can save time and resources for businesses that need to create high-quality visual content.\n", |
||||
"5. **Chatbots and Virtual Assistants**: Generative AI can be used to create chatbots and virtual assistants that can understand natural language and respond accordingly. This can help businesses provide better customer service and improve user experience.\n", |
||||
"6. **Predictive Analytics**: Generative AI can be used to analyze large datasets and generate predictive models that can forecast future trends and behaviors. This can help businesses make data-driven decisions and stay ahead of the competition.\n", |
||||
"7. **Customer Segmentation**: Generative AI can be used to segment customers based on their behavior, demographics, and preferences. This can help businesses tailor their marketing efforts and improve customer engagement.\n", |
||||
"8. **Language Translation**: Generative AI can be used to translate languages in real-time, which can help businesses communicate with international clients and customers more effectively.\n", |
||||
"9. **Music Composition**: Generative AI can be used to compose music for various applications such as advertising, film scoring, and video game soundtracks.\n", |
||||
"10. **Financial Modeling**: Generative AI can be used to create financial models that can predict future revenue streams, costs, and other financial metrics. This can help businesses make more accurate predictions and inform better investment decisions.\n", |
||||
"\n", |
||||
"Some of the industries that are already leveraging generative AI include:\n", |
||||
"\n", |
||||
"* E-commerce\n", |
||||
"* Healthcare\n", |
||||
"* Finance\n", |
||||
"* Marketing\n", |
||||
"* Education\n", |
||||
"* Entertainment\n", |
||||
"* Manufacturing\n", |
||||
"\n", |
||||
"These applications have the potential to transform various business processes, improve customer experiences, and drive innovation in various sectors.\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"import ollama\n", |
||||
"\n", |
||||
"response = ollama.chat(model=MODEL, messages=messages)\n", |
||||
"print(response['message']['content'])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "a4704e10-f5fb-4c15-a935-f046c06fb13d", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Alternative approach - using OpenAI python library to connect to Ollama" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 9, |
||||
"id": "23057e00-b6fc-4678-93a9-6b31cb704bff", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Generative AI has numerous business applications across various industries, transforming the way companies operate, create products, and interact with customers. Some key applications include:\n", |
||||
"\n", |
||||
"1. **Content Generation**: Automate content creation for marketing materials, such as blog posts, product descriptions, social media posts, and more, using Generative AI-powered tools.\n", |
||||
"2. **Product Design and Prototyping**: Use Generative AI to design new products, furniture, or other innovative solutions, reducing design time and costs while increasing creativity.\n", |
||||
"3. **Customer Experience (CX) Tools**: Leverage Generative AI to create personalized customer experiences, such as chatbots that can respond to customer queries and provide tailored recommendations.\n", |
||||
"4. **Predictive Maintenance**: Use Generative AI to analyze sensor data, identify potential issues, and predict maintenance needs for equipment, reducing downtime and increasing overall efficiency.\n", |
||||
"5. **Personalized Marketing**: Use Generative AI to create targeted marketing campaigns based on individual customer preferences, behaviors, and demographics.\n", |
||||
"6. **Content Optimization**: Utilize Generative AI to optimize content for better performance in search engine results pages (SERPs), ensuring improved visibility and traffic.\n", |
||||
"7. **Brand Storytelling**: Automate the creation of brand stories, taglines, and overall brand narrative using Generative AI-powered tools.\n", |
||||
"8. **Financial Modeling and Forecasting**: Use Generative AI to create financial models, forecasts, and predictions for businesses, helping them make data-driven decisions.\n", |
||||
"9. **Supply Chain Optimization**: Leverage Generative AI to optimize supply chain operations, predicting demand, reducing inventory levels, and streamlining logistics.\n", |
||||
"10. **Automated Transcription and Translation**: Use Generative AI to automate the transcription of audio and video files into written text, as well as translate materials across languages.\n", |
||||
"11. **Digital Asset Management**: Utilize Generative AI to manage digital assets, such as images, videos, and documents, and automatically generate metadata for easy search and retrieval.\n", |
||||
"12. **Chatbots and Virtual Assistants**: Create more advanced chatbots using Generative AI that can understand context, emotions, and intent, providing better customer service experiences.\n", |
||||
"\n", |
||||
"In healthcare, Generative AI is being applied to:\n", |
||||
"\n", |
||||
"1. Medical Imaging Analysis\n", |
||||
"2. Personalized Medicine\n", |
||||
"3. Patient Data Analysis\n", |
||||
"\n", |
||||
"In education, Generative AI is used in:\n", |
||||
"\n", |
||||
"1. Adaptive Learning Systems\n", |
||||
"2. Automated Grading and Feedback\n", |
||||
"\n", |
||||
"Generative AI has numerous applications across various industries, from creative content generation to predictive maintenance and supply chain optimization.\n", |
||||
"\n", |
||||
"Keep in mind that these are just a few examples of the many business applications of Generative AI as this technology continues to evolve at a rapid pace.\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"# There's actually an alternative approach that some people might prefer\n", |
||||
"# You can use the OpenAI client python library to call Ollama:\n", |
||||
"\n", |
||||
"from openai import OpenAI\n", |
||||
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", |
||||
"\n", |
||||
"response = ollama_via_openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=messages\n", |
||||
")\n", |
||||
"\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# NOW the exercise for you\n", |
||||
"\n", |
||||
"Take the code from day1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 28, |
||||
"id": "de923314-a427-4199-b1f9-0e60f85114c3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import requests\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"\n", |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 31, |
||||
"id": "0cedada6-adc6-40dc-bdf3-bc8a3b6b3826", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"Home\n", |
||||
"Outsmart\n", |
||||
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n", |
||||
"About\n", |
||||
"Posts\n", |
||||
"Well, hi there.\n", |
||||
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n", |
||||
"very\n", |
||||
"amateur) and losing myself in\n", |
||||
"Hacker News\n", |
||||
", nodding my head sagely to things I only half understand.\n", |
||||
"I’m the co-founder and CTO of\n", |
||||
"Nebula.io\n", |
||||
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n", |
||||
"acquired in 2021\n", |
||||
".\n", |
||||
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n", |
||||
"patented\n", |
||||
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n", |
||||
"Connect\n", |
||||
"with me for more!\n", |
||||
"November 13, 2024\n", |
||||
"Mastering AI and LLM Engineering – Resources\n", |
||||
"October 16, 2024\n", |
||||
"From Software Engineer to AI Data Scientist – resources\n", |
||||
"August 6, 2024\n", |
||||
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n", |
||||
"June 26, 2024\n", |
||||
"Choosing the Right LLM: Toolkit and Resources\n", |
||||
"Navigation\n", |
||||
"Home\n", |
||||
"Outsmart\n", |
||||
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n", |
||||
"About\n", |
||||
"Posts\n", |
||||
"Get in touch\n", |
||||
"ed [at] edwarddonner [dot] com\n", |
||||
"www.edwarddonner.com\n", |
||||
"Follow me\n", |
||||
"LinkedIn\n", |
||||
"Twitter\n", |
||||
"Facebook\n", |
||||
"Subscribe to newsletter\n", |
||||
"Type your email…\n", |
||||
"Subscribe\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"# Let's try one out. Change the website and add print statements to follow along.\n", |
||||
"\n", |
||||
"web_res = Website(\"https://edwarddonner.com\")\n", |
||||
"print(web_res.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 11, |
||||
"id": "64d26055-756b-4095-a1d1-298fdf4fd8f1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"\n", |
||||
"# Constants\n", |
||||
"\n", |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"HEADERS = {\"Content-Type\": \"application/json\"}\n", |
||||
"MODEL = \"llama3.2\"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 52, |
||||
"id": "65b08550-7506-415f-8612-e2395d6e145d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"\n", |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n", |
||||
"\n", |
||||
"system_prompt = \"You are an helper that assist user to provide crisp summary\\\n", |
||||
"of the website they pass in, respond with key points\"\n", |
||||
"\n", |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too with start bulletin.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 33, |
||||
"id": "36a0a2d0-f07a-40ac-a065-b713cdd5c028", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 50, |
||||
"id": "8c2b20ea-6a8e-41c9-be3b-f24a5b29e8de", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"#website search\n", |
||||
"\n", |
||||
"web_msg=Website(\"https://www.cricbuzz.com/cricket-match-squads/91796/aus-vs-ind-3rd-test-india-tour-of-australia-2024-25\")\n", |
||||
"messages=messages_for(web_msg)\n", |
||||
"\n", |
||||
"payload = {\n", |
||||
" \"model\": MODEL,\n", |
||||
" \"messages\": messages,\n", |
||||
" \"stream\": False\n", |
||||
" }" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 54, |
||||
"id": "e5636b3b-7763-4f9c-ab18-88aa25b50de6", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"name": "stdout", |
||||
"output_type": "stream", |
||||
"text": [ |
||||
"**Summary of the Website**\n", |
||||
"=========================\n", |
||||
"\n", |
||||
"* The website provides live updates and information about the 3rd Test match between Australia and India as part of India's tour of Australia in the 2024-25 season.\n", |
||||
"* It includes news, scores, stats, and analysis from the match.\n", |
||||
"* The website is affiliated with Cricbuzz.com, a popular online cricket platform.\n", |
||||
"\n", |
||||
"**News and Announcements**\n", |
||||
"==========================\n", |
||||
"\n", |
||||
"* **Rashid Khan to miss the rest of the series**: Australian all-rounder Mitchell Marsh's teammate Rashid Khan has been ruled out of the remaining Tests due to a knee injury.\n", |
||||
"* **Bumrah to feature in the third Test**: Indian fast bowler Jasprit Bumrah is expected to return for the third Test, which starts on January 5 at the Sydney Cricket Ground.\n" |
||||
] |
||||
} |
||||
], |
||||
"source": [ |
||||
"#Using Ollama to run it in the local\n", |
||||
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n", |
||||
"print(response.json()['message']['content'])" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,213 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "bc7d1de3-e2ac-46ff-a302-3b4ba38c4c90", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Also trying the amazing reasoning model DeepSeek\n", |
||||
"\n", |
||||
"Here we use the version of DeepSeek-reasoner that's been distilled to 1.5B. \n", |
||||
"This is actually a 1.5B variant of Qwen that has been fine-tuned using synethic data generated by Deepseek R1.\n", |
||||
"\n", |
||||
"Other sizes of DeepSeek are [here](https://ollama.com/library/deepseek-r1) all the way up to the full 671B parameter version, which would use up 404GB of your drive and is far too large for most!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cf9eb44e-fe5b-47aa-b719-0bb63669ab3d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!ollama pull deepseek-r1:1.5b" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4bdcd35a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!ollama pull deepseek-r1:8b" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# NOW the exercise for you\n", |
||||
"\n", |
||||
"Take the code from day1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1c106420", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import requests\n", |
||||
"import ollama\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "22d62f00", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"\n", |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"HEADERS = {\"Content-Type\": \"application/json\"}\n", |
||||
"MODEL = \"deepseek-r1:8b\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6de38216-6d1c-48c4-877b-86d403f4e0f8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4449b7dc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "daca9448", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0ec9d5d2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6e1ab04a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the OpenAI API. You will get very familiar with this!\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = ollama.chat(\n", |
||||
" model = MODEL,\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response['message']['content']" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0d3b5628", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "938e5633", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://edwarddonner.com\")" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "llms", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,511 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Welcome to your first assignment!\n", |
||||
"\n", |
||||
"Instructions are below. Please give this a try, and look in the solutions folder if you get stuck (or feel free to ask me!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "ada885d9-4d42-4d9b-97f0-74fbbbfe93a9", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#f71;\">Just before we get to the assignment --</h2>\n", |
||||
" <span style=\"color:#f71;\">I thought I'd take a second to point you at this page of useful resources for the course. This includes links to all the slides.<br/>\n", |
||||
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n", |
||||
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n", |
||||
" </span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# HOMEWORK EXERCISE ASSIGNMENT\n", |
||||
"\n", |
||||
"Upgrade the day 1 project to summarize a webpage to use an Open Source model running locally via Ollama rather than OpenAI\n", |
||||
"\n", |
||||
"You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n", |
||||
"\n", |
||||
"**Benefits:**\n", |
||||
"1. No API charges - open-source\n", |
||||
"2. Data doesn't leave your box\n", |
||||
"\n", |
||||
"**Disadvantages:**\n", |
||||
"1. Significantly less power than Frontier Model\n", |
||||
"\n", |
||||
"## Recap on installation of Ollama\n", |
||||
"\n", |
||||
"Simply visit [ollama.com](https://ollama.com) and install!\n", |
||||
"\n", |
||||
"Once complete, the ollama server should already be running locally. \n", |
||||
"If you visit: \n", |
||||
"[http://localhost:11434/](http://localhost:11434/)\n", |
||||
"\n", |
||||
"You should see the message `Ollama is running`. \n", |
||||
"\n", |
||||
"If not, bring up a new Terminal (Mac) or Powershell (Windows) and enter `ollama serve` \n", |
||||
"And in another Terminal (Mac) or Powershell (Windows), enter `ollama pull llama3.2` \n", |
||||
"Then try [http://localhost:11434/](http://localhost:11434/) again.\n", |
||||
"\n", |
||||
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or Powershell, and change the code below from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import requests\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "29ddd15d-a3c5-4f4e-a678-873f56162724", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"\n", |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"HEADERS = {\"Content-Type\": \"application/json\"}\n", |
||||
"MODEL = \"llama3.2\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dac0a679-599c-441f-9bf2-ddc73d35b940", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Create a messages list using the same format that we used for OpenAI\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"user\", \"content\": \"Describe some of the business applications of Generative AI\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7bb9c624-14f0-4945-a719-8ddb64f66f47", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"payload = {\n", |
||||
" \"model\": MODEL,\n", |
||||
" \"messages\": messages,\n", |
||||
" \"stream\": False\n", |
||||
" }" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "479ff514-e8bd-4985-a572-2ea28bb4fa40", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's just make sure the model is loaded\n", |
||||
"\n", |
||||
"!ollama pull llama3.2" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "42b9f644-522d-4e05-a691-56e7658c0ea9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# If this doesn't work for any reason, try the 2 versions in the following cells\n", |
||||
"# And double check the instructions in the 'Recap on installation of Ollama' at the top of this lab\n", |
||||
"# And if none of that works - contact me!\n", |
||||
"\n", |
||||
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n", |
||||
"print(response.json()['message']['content'])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6a021f13-d6a1-4b96-8e18-4eae49d876fe", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Introducing the ollama package\n", |
||||
"\n", |
||||
"And now we'll do the same thing, but using the elegant ollama python package instead of a direct HTTP call.\n", |
||||
"\n", |
||||
"Under the hood, it's making the same call as above to the ollama server running at localhost:11434" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7745b9c4-57dc-4867-9180-61fa5db55eb8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import ollama\n", |
||||
"\n", |
||||
"response = ollama.chat(model=MODEL, messages=messages)\n", |
||||
"print(response['message']['content'])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "a4704e10-f5fb-4c15-a935-f046c06fb13d", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Alternative approach - using OpenAI python library to connect to Ollama" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "23057e00-b6fc-4678-93a9-6b31cb704bff", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# There's actually an alternative approach that some people might prefer\n", |
||||
"# You can use the OpenAI client python library to call Ollama:\n", |
||||
"\n", |
||||
"from openai import OpenAI\n", |
||||
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", |
||||
"\n", |
||||
"response = ollama_via_openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=messages\n", |
||||
")\n", |
||||
"\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "bc7d1de3-e2ac-46ff-a302-3b4ba38c4c90", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Also trying the amazing reasoning model DeepSeek\n", |
||||
"\n", |
||||
"Here we use the version of DeepSeek-reasoner that's been distilled to 1.5B. \n", |
||||
"This is actually a 1.5B variant of Qwen that has been fine-tuned using synthetic data generated by DeepSeek R1.\n", |
||||
"\n", |
||||
"Other sizes of DeepSeek are [here](https://ollama.com/library/deepseek-r1) all the way up to the full 671B parameter version, which would use up 404GB of your drive and is far too large for most!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cf9eb44e-fe5b-47aa-b719-0bb63669ab3d", |
||||
"metadata": { |
||||
"collapsed": true, |
||||
"jupyter": { |
||||
"outputs_hidden": true |
||||
} |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!ollama pull deepseek-r1:1.5b" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1d3d554b-e00d-4c08-9300-45e073950a76", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# This may take a few minutes to run! You should then see a fascinating \"thinking\" trace inside <think> tags, followed by some decent definitions\n", |
||||
"\n", |
||||
"response = ollama_via_openai.chat.completions.create(\n", |
||||
" model=\"deepseek-r1:1.5b\",\n", |
||||
" messages=[{\"role\": \"user\", \"content\": \"Please give definitions of some core concepts behind LLMs: a neural network, attention and the transformer\"}]\n", |
||||
")\n", |
||||
"\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# NOW the exercise for you\n", |
||||
"\n", |
||||
"Take the code from day1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "ffaa3470-884c-467e-b4ce-c1b8d39294da", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"This is the code from the day 1 notebook. Here we create the class to extract the text from the website using the BeautifulSoup library, and then we execute it to see the results" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8d8c9f01-ca12-4018-b7fa-698c9fa1aa93", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6fd198df-bac5-42c5-83a0-06c5f71fb76a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's try one out. Change the website and add print statements to follow along.\n", |
||||
"\n", |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"print(ed.title)\n", |
||||
"print(ed.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "995b637d-a5db-4ad9-ac78-5980fd7ef112", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"#### Define the system prompt, to instruct the model how we want it to respond to our query." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ee810d49-e88a-4137-a4be-98812e0d0748", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Define our system prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.'\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "482b5d4c-69ed-4332-abb5-8b0986dcf368", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d966cb09-3ca2-49f7-8462-f6ef26c01159", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(user_prompt_for(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2f9be84f-4cd7-4ce7-8f33-e60d16f02852", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# For test purposes\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f5cb0e9f-eb56-4633-ba4c-76817be98856", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# To give you a preview -- calling ollama with system and user messages:\n", |
||||
"\n", |
||||
"import ollama\n", |
||||
"\n", |
||||
"response = ollama.chat(model=MODEL, messages=messages)\n", |
||||
"print(response['message']['content'])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c554903f-eb04-4a16-87fc-f1d9ff58f6d9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# See how this function creates exactly the format above\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6b64b814-123f-436d-9366-4c762ac4b89a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Try this out, and then try for a few more websites\n", |
||||
"\n", |
||||
"messages_for(ed)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d1ef4be2-ef3a-4b5d-8d18-f2eafa9d6a93", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"### Now let's run the summarizer using ollama and see how the output appears." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7c46edc5-c85d-4ad0-89fd-39c4fdc44a5d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# And now: call the ollama API. \n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = ollama.chat(\n", |
||||
" model = MODEL,\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response['message']['content']" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "466c2f78-91ca-4ed2-b60b-40661d0b6f68", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summarize(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7ab7c9a1-70fd-421c-be06-c36eb6c9aedf", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1cedc9d9-6a76-4225-82c1-82240da16260", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "82c48586-33c8-4797-a24f-41602c1297b3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "llms", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,435 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", |
||||
"metadata": { |
||||
"jp-MarkdownHeadingCollapsed": true |
||||
}, |
||||
"source": [ |
||||
"# Welcome to your first assignment!\n", |
||||
"\n", |
||||
"Instructions are below. Please give this a try, and look in the solutions folder if you get stuck (or feel free to ask me!)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "ada885d9-4d42-4d9b-97f0-74fbbbfe93a9", |
||||
"metadata": { |
||||
"jupyter": { |
||||
"source_hidden": true |
||||
} |
||||
}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#f71;\">Just before we get to the assignment --</h2>\n", |
||||
" <span style=\"color:#f71;\">I thought I'd take a second to point you at this page of useful resources for the course. This includes links to all the slides.<br/>\n", |
||||
" <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">https://edwarddonner.com/2024/11/13/llm-engineering-resources/</a><br/>\n", |
||||
" Please keep this bookmarked, and I'll continue to add more useful links there over time.\n", |
||||
" </span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6e9fa1fc-eac5-4d1d-9be4-541b3f2b3458", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# HOMEWORK EXERCISE ASSIGNMENT\n", |
||||
"\n", |
||||
"Upgrade the day 1 project, which summarizes a webpage, to use an Open Source model running locally via Ollama rather than OpenAI\n", |
||||
"\n", |
||||
"You'll be able to use this technique for all subsequent projects if you'd prefer not to use paid APIs.\n", |
||||
"\n", |
||||
"**Benefits:**\n", |
||||
"1. No API charges - open-source\n", |
||||
"2. Data doesn't leave your box\n", |
||||
"\n", |
||||
"**Disadvantages:**\n", |
||||
"1. Significantly less power than Frontier Model\n", |
||||
"\n", |
||||
"## Recap on installation of Ollama\n", |
||||
"\n", |
||||
"Simply visit [ollama.com](https://ollama.com) and install!\n", |
||||
"\n", |
||||
"Once complete, the ollama server should already be running locally. \n", |
||||
"If you visit: \n", |
||||
"[http://localhost:11434/](http://localhost:11434/)\n", |
||||
"\n", |
||||
"You should see the message `Ollama is running`. \n", |
||||
"\n", |
||||
"If not, bring up a new Terminal (Mac) or Powershell (Windows) and enter `ollama serve` \n", |
||||
"And in another Terminal (Mac) or Powershell (Windows), enter `ollama pull llama3.2` \n", |
||||
"Then try [http://localhost:11434/](http://localhost:11434/) again.\n", |
||||
"\n", |
||||
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or Powershell, and change the code below from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import requests\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "29ddd15d-a3c5-4f4e-a678-873f56162724", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"\n", |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"HEADERS = {\"Content-Type\": \"application/json\"}\n", |
||||
"MODEL = \"llama3.2\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dac0a679-599c-441f-9bf2-ddc73d35b940", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Create a messages list using the same format that we used for OpenAI\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"user\", \"content\": \"Describe some of the business applications of Generative AI\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7bb9c624-14f0-4945-a719-8ddb64f66f47", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"payload = {\n", |
||||
" \"model\": MODEL,\n", |
||||
" \"messages\": messages,\n", |
||||
" \"stream\": False\n", |
||||
" }" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "479ff514-e8bd-4985-a572-2ea28bb4fa40", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's just make sure the model is loaded\n", |
||||
"\n", |
||||
"!ollama pull llama3.2" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "42b9f644-522d-4e05-a691-56e7658c0ea9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# If this doesn't work for any reason, try the 2 versions in the following cells\n", |
||||
"# And double check the instructions in the 'Recap on installation of Ollama' at the top of this lab\n", |
||||
"# And if none of that works - contact me!\n", |
||||
"\n", |
||||
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n", |
||||
"print(response.json()['message']['content'])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6a021f13-d6a1-4b96-8e18-4eae49d876fe", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Introducing the ollama package\n", |
||||
"\n", |
||||
"And now we'll do the same thing, but using the elegant ollama python package instead of a direct HTTP call.\n", |
||||
"\n", |
||||
"Under the hood, it's making the same call as above to the ollama server running at localhost:11434" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7745b9c4-57dc-4867-9180-61fa5db55eb8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import ollama\n", |
||||
"\n", |
||||
"response = ollama.chat(model=MODEL, messages=messages)\n", |
||||
"print(response['message']['content'])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "a4704e10-f5fb-4c15-a935-f046c06fb13d", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Alternative approach - using OpenAI python library to connect to Ollama" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "23057e00-b6fc-4678-93a9-6b31cb704bff", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# There's actually an alternative approach that some people might prefer\n", |
||||
"# You can use the OpenAI client python library to call Ollama:\n", |
||||
"\n", |
||||
"from openai import OpenAI\n", |
||||
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", |
||||
"\n", |
||||
"response = ollama_via_openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=messages\n", |
||||
")\n", |
||||
"\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "bc7d1de3-e2ac-46ff-a302-3b4ba38c4c90", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Also trying the amazing reasoning model DeepSeek\n", |
||||
"\n", |
||||
"Here we use the version of DeepSeek-reasoner that's been distilled to 1.5B. \n", |
||||
"This is actually a 1.5B variant of Qwen that has been fine-tuned using synthetic data generated by DeepSeek R1.\n", |
||||
"\n", |
||||
"Other sizes of DeepSeek are [here](https://ollama.com/library/deepseek-r1) all the way up to the full 671B parameter version, which would use up 404GB of your drive and is far too large for most!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cf9eb44e-fe5b-47aa-b719-0bb63669ab3d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"!ollama pull deepseek-r1:1.5b" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "1d3d554b-e00d-4c08-9300-45e073950a76", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# This may take a few minutes to run! You should then see a fascinating \"thinking\" trace inside <think> tags, followed by some decent definitions\n", |
||||
"\n", |
||||
"response = ollama_via_openai.chat.completions.create(\n", |
||||
" model=\"deepseek-r1:1.5b\",\n", |
||||
" messages=[{\"role\": \"user\", \"content\": \"Please give definitions of some core concepts behind LLMs: a neural network, attention and the transformer\"}]\n", |
||||
")\n", |
||||
"\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# NOW the exercise for you\n", |
||||
"\n", |
||||
"Take the code from day1 and incorporate it here, to build a website summarizer that uses Llama 3.2 running locally instead of OpenAI; use either of the above approaches." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 1, |
||||
"id": "6de38216-6d1c-48c4-877b-86d403f4e0f8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"HEADERS = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\",\n", |
||||
" \"Content-Type\": \"application/json\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"\n", |
||||
"MODEL = \"llama3.2\"\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring navigation-related text or elements. \\\n", |
||||
"Respond in markdown.\"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 2, |
||||
"id": "6f343c27-628c-4c54-9a5b-842e6ad5d176", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=HEADERS)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 4, |
||||
"id": "bf6245ca-2d53-4fd8-a19c-0e6d052031fd", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled: {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website are as follows: \\\n", |
||||
"Please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 6, |
||||
"id": "dec0636f-9efc-4f91-8861-3141276a9a6e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 8, |
||||
"id": "f894b232-1ea1-4bd9-bf44-d7b1571f7913", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def summarize(url):\n", |
||||
" ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", |
||||
" \n", |
||||
" website = Website(url)\n", |
||||
" response = ollama_via_openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 13, |
||||
"id": "d868d778-13b5-4934-acf5-dcb919a27d59", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 14, |
||||
"id": "0a0d9b79-de3c-4f77-9254-f02cf4d6217a", |
||||
"metadata": {}, |
||||
"outputs": [ |
||||
{ |
||||
"data": { |
||||
"text/markdown": [ |
||||
"**Summary of the WP Pisa site - the site of the Pisa WordPress Meetup**\n", |
||||
"\n", |
||||
"The WP Pisa website is the reference point for WordPress enthusiasts in Pisa. The organizers offer free monthly meetups to discuss knowledge, experiences and projects related to the WordPress world.\n", |
||||
"\n", |
||||
"**Events and Announcements**\n", |
||||
"\n", |
||||
"* **WordCamp Pisa 2025**: call for organizers now open\n", |
||||
"* **Il Tuo Sito Ovunque in Pochi Minuti**: a meetup with Docker and WordPress developers - join in!\n", |
||||
"* **Core Days Roma: the latest on the WordPress core for devs**\n", |
||||
"* **NO MORE THUMBNAILS!**\n", |
||||
"\n", |
||||
"**General Information**\n", |
||||
"\n", |
||||
"* The meetup is open to everyone, regardless of their level of WordPress expertise\n", |
||||
"* All events are free and organized under the supervision of the WordPress Foundation via the Meetup.com platform\n", |
||||
"* The WP Pisa community has 150+ members" |
||||
], |
||||
"text/plain": [ |
||||
"<IPython.core.display.Markdown object>" |
||||
] |
||||
}, |
||||
"metadata": {}, |
||||
"output_type": "display_data" |
||||
} |
||||
], |
||||
"source": [ |
||||
"display_summary(\"https://wppisa.it/\")" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,93 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fa4447be-7825-45d9-a6a5-ed41f2500533", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"from openai import OpenAI\n", |
||||
"\n", |
||||
"openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", |
||||
"MODEL = \"llama3.2\"\n", |
||||
"\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt\n", |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"\n", |
||||
"\n", |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ] \n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = MODEL,\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))\n", |
||||
"\n", |
||||
"\n", |
||||
"display_summary(\"https://esarijal.my.id\")" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,159 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "568fd96a-8cf6-42aa-b9cf-74b7aa383595", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Ollama Website Summarizer\n", |
||||
"## Scrape websites and summarize them locally using Ollama\n", |
||||
"\n", |
||||
"This script is a complete example of the day 1 program, which uses the OpenAI API to summarize websites, altered to use techniques from the day 2 exercise to call Ollama models locally." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a9502a0f-d7be-4489-bb7f-173207e802b6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import ollama\n", |
||||
"import requests\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display\n", |
||||
"\n", |
||||
"MODEL = \"llama3.2\"\n", |
||||
"\n", |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
" \n", |
||||
"# A function that writes a User Prompt that asks for summaries of websites:\n", |
||||
"\n", |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt\n", |
||||
" \n", |
||||
"# Create a messages list for a summarize prompt given a website\n", |
||||
"\n", |
||||
"def create_summarize_prompt(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\" },\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]\n", |
||||
"\n", |
||||
"# And now: call Ollama to summarize\n", |
||||
"\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" messages = create_summarize_prompt(website)\n", |
||||
" response = ollama.chat(model=MODEL, messages=messages)\n", |
||||
" return response['message']['content']\n", |
||||
" \n", |
||||
"# A function to display this nicely in the Jupyter output, using markdown\n", |
||||
"\n", |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "037627b0-b039-4ca4-a6d4-84ad8fc6a013", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Pre-requisites\n", |
||||
"\n", |
||||
"Before we can run the script above, we need to make sure Ollama is running on your machine!\n", |
||||
"\n", |
||||
"Simply visit ollama.com and install!\n", |
||||
"\n", |
||||
"Once complete, the ollama server should already be running locally.\n", |
||||
"If you visit:\n", |
||||
"http://localhost:11434/\n", |
||||
"\n", |
||||
"You should see the message Ollama is running." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6c2d84fd-2a9b-476d-84ad-4b8522d47023", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Run!\n", |
||||
"\n", |
||||
"Shift+Enter the code below to summarize a website.\n", |
||||
"\n", |
||||
"### NOTE!\n", |
||||
"\n", |
||||
"This will only work with websites that return HTML content, and may return unexpected results for SPAs that are created with JS." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "100829ba-8278-409b-bc0a-82ac28e1149f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "ffe4e760-dfa6-43fa-89c4-beea547707ac", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"Edit the URL above, or add code blocks of your own to try it out!" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,186 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "1faf8b29-2ba6-40c7-89ee-71f71e234f11", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Extra requirements\n", |
||||
"```bash\n", |
||||
"pip install -q -U google-genai\n", |
||||
"```\n", |
||||
"\n", |
||||
"## Required environment variable\n", |
||||
"GEMINI_API_KEY\n", |
||||
"\n", |
||||
"### How to get GEMINI API KEY\n", |
||||
"\n", |
||||
"Use the link: [gemini api key](https://aistudio.google.com/app/apikey) to get yours." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 12, |
||||
"id": "be06ce76-20ee-4066-9582-a4ed745f278f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import os\n", |
||||
"import requests\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from google import genai\n", |
||||
"from google.genai import types" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 13, |
||||
"id": "99e42519-5dac-4b13-8a26-8a635753343b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def gemini_invoke(website):\n", |
||||
" load_dotenv()\n", |
||||
" api_key = os.getenv(\"GEMINI_API_KEY\")\n", |
||||
" if not api_key or len(api_key) < 39:\n", |
||||
" print(\"No correct api key was found\")\n", |
||||
" return\n", |
||||
" else:\n", |
||||
" print(\"Api key found. Good to go!\")\n", |
||||
" client = genai.Client(api_key=api_key)\n", |
||||
" response = client.models.generate_content(\n", |
||||
" model=\"gemini-2.0-flash\",\n", |
||||
" config=types.GenerateContentConfig(\n", |
||||
" system_instruction=system_prompt),\n", |
||||
" contents=user_prompt_for(website)\n", |
||||
" )\n", |
||||
" return response.text" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 14, |
||||
"id": "95a6ece8-8402-4cad-96b9-36a6ea444c54", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Website:\n", |
||||
" url: str\n", |
||||
" title: str\n", |
||||
" text: str\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url)\n", |
||||
" soup = BeautifulSoup(response.content, \"html.parser\")\n", |
||||
" self.title = soup.title.string if soup.title else \"No title was found\"\n", |
||||
"\n", |
||||
" for irr in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irr.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
" " |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "24bbd1dd-dca4-4bbc-ae91-4bad227a4278", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"print(ed.title)\n", |
||||
"print(ed.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 15, |
||||
"id": "233b8904-7a4a-4265-8b0d-20934ae4b29c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that navigation related. Respond \\\n", |
||||
"in markdown.\"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 16, |
||||
"id": "5c996c03-84ab-4378-8a55-026d94404d35", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages = [{\"role\": \"user\", \"content\": system_prompt}]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 17, |
||||
"id": "abf9464e-dc8d-4099-aeb6-495498326673", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 20, |
||||
"id": "32ab2d29-02d1-43c5-b920-f2621f292b23", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def summarize(url, model=\"gemini\"):\n", |
||||
" website = Website(url)\n", |
||||
" if model == \"ollama\":\n", |
||||
" import ollama\n", |
||||
" Model=\"llama3.2\"\n", |
||||
" messages[0][\"content\"] += f\" Website: {url}\"\n", |
||||
" response = ollama.chat(model=Model, messages=messages)\n", |
||||
" return response[\"message\"][\"content\"]\n", |
||||
" else:\n", |
||||
" return gemini_invoke(website)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a2a0e518-7198-489d-a0ce-2eec617f939f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summarize(\"https://edwarddonner.com\", \"ollama\")" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.12.0" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,274 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "41136d6f-07bc-4f6f-acba-784b8e5707b1", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import requests\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8612b4f7-5c31-48f3-8423-261914509617", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Constants\n", |
||||
"\n", |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"HEADERS = {\"Content-Type\": \"application/json\"}\n", |
||||
"MODEL = \"llama3.2\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "508bd442-7860-4215-b0f2-57f7adefd807", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Create a messages list using the same format that we used for OpenAI\n", |
||||
"\n", |
||||
"messages = [\n", |
||||
" {\"role\": \"user\", \"content\": \"Describe some of the business applications of Generative AI\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cc7e8ada-4f8d-4090-be64-4aa72e03ac58", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Let's just make sure the model is loaded\n", |
||||
"\n", |
||||
"!ollama pull llama3.2" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4afd2e56-191a-4e31-949e-9b9376a39b5a", |
||||
"metadata": { |
||||
"scrolled": true |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# There's actually an alternative approach that some people might prefer\n", |
||||
"# You can use the OpenAI client python library to call Ollama:\n", |
||||
"\n", |
||||
"from openai import OpenAI\n", |
||||
"ollama_via_openai = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n", |
||||
"\n", |
||||
"response = ollama_via_openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=messages\n", |
||||
")\n", |
||||
"\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "365f3d83-2601-42fb-89cc-98a4e1f79e0d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"message = \"Hello, GPT! This is my first ever message to you! Hi!\"\n", |
||||
"response = ollama_via_openai.chat.completions.create(model=MODEL, messages=[{\"role\":\"user\", \"content\":message}])\n", |
||||
"print(response.choices[0].message.content)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "29c383ae-bf5b-41bc-b5af-a22f851745dc", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dc61e30f-653f-4554-b1cd-6e61a0e2430a", |
||||
"metadata": { |
||||
"scrolled": true |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"print(ed.title)\n", |
||||
"print(ed.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "db2066fb-3079-4775-832a-dcc0f19beb6e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"\n", |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "af81b070-b6fe-4b18-aa0b-c03cd76a0adf", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a website titled {website.title}\"\n", |
||||
" user_prompt += \"\\nThe contents of this website is as follows; \\\n", |
||||
"please provide a short summary of this website in markdown. \\\n", |
||||
"If it includes news or announcements, then summarize these too.\\n\\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4e66291b-23b1-4915-b6a3-11a4b6a4db66", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n", |
||||
" {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "67c92f47-4a3b-491f-af00-07fda470087e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(website)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "db1b9085-e5e7-4ec9-a264-acc389085ada", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages_for(ed)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "677bfc2f-19ac-46a0-b67e-a2b2ddf9cf6b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = ollama_via_openai.chat.completions.create(\n", |
||||
" model = MODEL,\n", |
||||
" messages = messages_for(website)\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ee3242ba-b695-4b1e-8a91-2fdeb536c2e7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summarize(\"https://edwarddonner.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "85142cb8-ce0c-4c31-8b26-bb1744cf99ec", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "63db51a7-dd03-4514-8954-57156967f82c", |
||||
"metadata": { |
||||
"scrolled": true |
||||
}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://app.daily.dev/posts/bregman-arie-devops-exercises-linux-jenkins-aws-sre-prometheus-docker-python-ansible-git-k-yli9wthnf\")" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python [conda env:base] *", |
||||
"language": "python", |
||||
"name": "conda-base-py" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.12.7" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,240 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fc3a96d1-eedf-4e3a-b3ce-151485c574b5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import requests\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "385dc3d5-f6ce-46d8-958e-83dc1150c24e", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"OLLAMA_API = \"http://localhost:11434/api/chat\"\n", |
||||
"HEADERS = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"MODEL = \"llama3.2\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "21f7dacc-1fa8-491c-8e94-39238dae52b3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class Website:\n", |
||||
" def __init__(self, url):\n", |
||||
" \"\"\"\n", |
||||
" Create this Website object from the given url using the BeautifulSoup library\n", |
||||
" \"\"\"\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=HEADERS)\n", |
||||
" soup = BeautifulSoup(response.content, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ca431e32-9191-4940-b62d-f25e8cbac627", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"web = Website(\"https://silviayomdesign.com/\")\n", |
||||
"print(web.title)\n", |
||||
"print(web.text)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "76475815-0dbc-451b-ab65-f7e2ea3aaa8a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n", |
||||
"and provides a short summary, ignoring text that might be navigation related. \\\n", |
||||
"Respond in markdown.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "3cf03913-f595-4817-8580-19b182c599de", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def user_prompt_for(website):\n", |
||||
" user_prompt = f\"You are looking at a very artistic graphic designer's website titled name {website.title}\"\n", |
||||
" user_prompt += \"\\nHer creativity of her works are as follow;\\\n", |
||||
"please provide a short summary of her works in markdown. \\n\"\n", |
||||
" user_prompt += website.text\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6f130cfe-756b-4df8-b1f0-6918956a6162", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(user_prompt_for(web))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "85d85b64-1452-408f-bfae-d27b52d7dfa7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages = [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(web)}\n", |
||||
"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "36d66055-66d6-4123-b092-eceab055829d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"payload = {\n", |
||||
" \"model\": MODEL,\n", |
||||
" \"messages\": messages,\n", |
||||
" \"stream\": False\n", |
||||
"}" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "163db8a9-b0eb-49f3-a5f2-1e74cf51c245", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"response = requests.post(OLLAMA_API, json=payload, headers=HEADERS)\n", |
||||
"print(response.json()[\"message\"][\"content\"])" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "afabfff5-81e5-4b61-aca9-6c19d3584b86", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def messages_for(website):\n", |
||||
" return [\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_prompt_for(web)}\n", |
||||
" ]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b2e83b58-16fc-4049-8116-24a0cbb3635a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"messages_for(web)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "05ed519a-514f-4ed8-b323-4f4817e1e1c6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import ollama\n", |
||||
"def summarize(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = ollama.chat(\n", |
||||
" model=MODEL, \n", |
||||
" messages=messages\n", |
||||
" )\n", |
||||
" return response[\"message\"][\"content\"]" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b182f686-0a3e-4959-9bfd-0a59d2befd4c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"summarize(\"https://silviayomdesign.com/\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f4f1f807-28d4-4b8b-9698-9b90dcbac59f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def display_summary(url):\n", |
||||
" summary = summarize(url)\n", |
||||
" display(Markdown(summary))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a988d29b-ed36-4a40-bd77-0f7d60a29ac3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display_summary(\"https://silviayomdesign.com/\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "15e72eeb-1c35-4bb2-9596-6ff2546aa046", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,663 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "a98030af-fcd1-4d63-a36e-38ba053498fa", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# A full business solution\n", |
||||
"\n", |
||||
"## Now we will take our project from Day 1 to the next level\n", |
||||
"\n", |
||||
"### BUSINESS CHALLENGE:\n", |
||||
"\n", |
||||
"Create a product that builds a Brochure for a company to be used for prospective clients, investors and potential recruits.\n", |
||||
"\n", |
||||
"We will be provided a company name and their primary website.\n", |
||||
"\n", |
||||
"See the end of this notebook for examples of real-world business applications.\n", |
||||
"\n", |
||||
"And remember: I'm always available if you have problems or ideas! Please do reach out." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d5b08506-dc8b-4443-9201-5f1848161363", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"# If these fail, please check you're running from an 'activated' environment with (llms) in the command prompt\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"import json\n", |
||||
"from typing import List\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display, update_display\n", |
||||
"from openai import OpenAI" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fc5d8880-f2ee-4c06-af16-ecbc0262af61", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Initialize and constants\n", |
||||
"\n", |
||||
"load_dotenv(override=True)\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n", |
||||
" print(\"API key looks good so far\")\n", |
||||
"else:\n", |
||||
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n", |
||||
" \n", |
||||
"MODEL = 'gpt-4o-mini'\n", |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "106dd65e-90af-4ca8-86b6-23a41840645b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
" \"\"\"\n", |
||||
" A utility class to represent a Website that we have scraped, now with links\n", |
||||
" \"\"\"\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" self.body = response.content\n", |
||||
" soup = BeautifulSoup(self.body, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" if soup.body:\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
" else:\n", |
||||
" self.text = \"\"\n", |
||||
" links = [link.get('href') for link in soup.find_all('a')]\n", |
||||
" self.links = [link for link in links if link]\n", |
||||
"\n", |
||||
" def get_contents(self):\n", |
||||
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e30d8128-933b-44cc-81c8-ab4c9d86589a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"ed.links" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "1771af9c-717a-4fca-bbbe-8a95893312c3", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## First step: Have GPT-4o-mini figure out which links are relevant\n", |
||||
"\n", |
||||
"### Use a call to gpt-4o-mini to read the links on a webpage, and respond in structured JSON. \n", |
||||
"It should decide which links are relevant, and replace relative links such as \"/about\" with \"https://company.com/about\". \n", |
||||
"We will use \"one shot prompting\" in which we provide an example of how it should respond in the prompt.\n", |
||||
"\n", |
||||
"This is an excellent use case for an LLM, because it requires nuanced understanding. Imagine trying to code this without LLMs by parsing and analyzing the webpage - it would be very hard!\n", |
||||
"\n", |
||||
"Sidenote: there is a more advanced technique called \"Structured Outputs\" in which we require the model to respond according to a spec. We cover this technique in Week 8 during our autonomous Agentic AI project." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6957b079-0d96-45f7-a26a-3487510e9b35", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"oneshot_system_prompt = \"You are provided with a list of links found on a webpage. \\\n", |
||||
"You are able to decide which of the links would be most relevant to include in a brochure about the company or freelancer offering their services, \\\n", |
||||
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n", |
||||
"oneshot_system_prompt += \"You should respond in JSON as in this example:\"\n", |
||||
"oneshot_system_prompt += \"\"\"\n", |
||||
"{\n", |
||||
" \"links\": [\n", |
||||
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n", |
||||
" {\"type\": \"careers page\": \"url\": \"https://another.full.url/careers\"}\n", |
||||
" ]\n", |
||||
"}\n", |
||||
"\"\"\"\n", |
||||
"oneshot_system_prompt += \"Make sure not to miss any relevant pages.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f5a8b688-b153-41a6-8b18-f6198f3df2c9", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"fewshot_system_prompt = \"You are provided with a list of links found on a webpage. \\\n", |
||||
"You are able to decide which of the links would be most relevant to include in a brochure about the company or freelancer offering their services, \\\n", |
||||
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n You should respond in JSON as in the following examples:\"\n", |
||||
"fewshot_system_prompt += \"\"\"\n", |
||||
" Example 1\n", |
||||
" ['https://great-comps.com/about-me', 'https://www.linkedin.com/in/great-comp/', 'mailto:hello@mygroovydomain.com', 'https://great-comps.com/news', '/case-studies', 'https://patents.google.com/patent/US20210049536A1/', 'https://great-comps.com/workshop-ai']\n", |
||||
"\n", |
||||
" Links:\n", |
||||
" {\n", |
||||
" \"links\": [\n", |
||||
" {\"type\": \"about page\", \"url\": \"https://great-comps.de/about-me\"},\n", |
||||
" {\"type\": \"news page\": \"url\": \"https://great-comps.de/news\"},\n", |
||||
" {\"type\": \"case studies page\": \"url\": \"https://great-comps.de/case-studies\"},\n", |
||||
" {\"type\": \"workshop page\": \"url\": \"https://great-comps.de/workshop-ai\"},\n", |
||||
" ]\n", |
||||
" }\n", |
||||
"\n", |
||||
" Example 2\n", |
||||
" ['mailto:info@robbie-doodle-domain.com','https://wahlen-robbie.at/ueber-mich', 'https://www.linkedin.com/in/robbie-doodle/', 'https://news.ycombinator.com', 'https://wahlen-robbie.at/neuigkeiten', 'https://twitter.com/robbie-d', '/whitepapers', 'https://patents.google.com/patent/US20210049536A1/', 'https://wahlen-robbie.at/services']\n", |
||||
"\n", |
||||
" Links:\n", |
||||
" {\n", |
||||
" \"links\": [\n", |
||||
" {\"type\": \"über mich\", \"url\": \"https://wahlen-robbie.at/ueber-mich\"},\n", |
||||
" {\"type\": \"aktuelles\", \"url\": \"https://wahlen-robbie.at/neuigkeiten\"},\n", |
||||
" {\"type\": \"whitepaper\", \"url\": \"https://wahlen-robbie.at/whitepapers\"},\n", |
||||
" {\"type\": \"services\", \"url\": \"https://wahlen-robbie.at/services\"}\n", |
||||
" ]\n", |
||||
" }\n", |
||||
" \"\"\"\n", |
||||
"fewshot_system_prompt += \"Make sure not to miss any relevant pages.\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b97e4068-97ed-4120-beae-c42105e4d59a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(f\"Oneshot system prompt:\\n{oneshot_system_prompt}\")\n", |
||||
"print(f\"\\n\\n\\nFewshot system prompt:\\n{fewshot_system_prompt}\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8e1f601b-2eaf-499d-b6b8-c99050c9d6b3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links_user_prompt(website):\n", |
||||
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n", |
||||
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company or person offering their services, respond with the full https URL in JSON format. \\\n", |
||||
"Do not include Terms of Service, Privacy, email links or social media links.\\n\"\n", |
||||
" user_prompt += \"Links (some might be relative links):\\n\"\n", |
||||
" user_prompt += \"\\n\".join(website.links)\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6bcbfa78-6395-4685-b92c-22d592050fd7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_links_user_prompt(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "a29aca19-ca13-471c-a4b4-5abbfa813f69", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links(url, system_prompt=oneshot_system_prompt):\n", |
||||
" \n", |
||||
" website = Website(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n", |
||||
" ],\n", |
||||
" response_format={\"type\": \"json_object\"}\n", |
||||
" )\n", |
||||
" \n", |
||||
" result = response.choices[0].message.content \n", |
||||
" print(f\"Response: {result}\")\n", |
||||
" return json.loads(result)" |
||||
] |
||||
}, |
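||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "added-json-guard-aside", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Aside (added - a minimal sketch, not part of the original notebook): even with\n", |
||||
"# response_format={\"type\": \"json_object\"}, json.loads can raise ValueError if the\n", |
||||
"# model returns malformed output, so a small guard around the parse is worthwhile.\n", |
||||
"def safe_parse_links(result):\n", |
||||
"    try:\n", |
||||
"        return json.loads(result)\n", |
||||
"    except ValueError:\n", |
||||
"        return {\"links\": []}  # fall back to an empty link list on bad JSON" |
||||
] |
||||
}, |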
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2dc4150a-0042-4f5d-a7bf-158a0f9147a6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"get_links(ed_url)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "74a827a0-2782-4ae5-b210-4a242a8b4cc2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Anthropic has made their site harder to scrape, so I'm using HuggingFace instead.\n", |
||||
"hf = \"https://huggingface.co\"\n", |
||||
"\n", |
||||
"huggingface = Website(hf)\n", |
||||
"huggingface.links" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d3d583e2-dcc4-40cc-9b28-1e8dbf402924", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"ed_url = \"https://edwarddonner.com\"\n", |
||||
"hf_url = \"https://huggingface.co\"\n", |
||||
"\n", |
||||
"print(f\"Links generated with oneshot prompt for {ed_url}:\\n\")\n", |
||||
"get_links(ed_url)\n", |
||||
"\n", |
||||
"print(f\"\\n\\nLinks generated with fewshot prompt for {ed_url}:\\n\")\n", |
||||
"get_links(ed_url, fewshot_system_prompt)\n", |
||||
"\n", |
||||
"print(50*\"*\")\n", |
||||
"print(f\"\\nLinks generated with oneshot prompt for {hf_url}:\\n\")\n", |
||||
"get_links(hf_url)\n", |
||||
"\n", |
||||
"print(f\"\\n\\nLinks generated with fewshot prompt for {hf_url}:\\n\")\n", |
||||
"get_links(hf_url, fewshot_system_prompt)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "0d74128e-dfb6-47ec-9549-288b621c838c", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Second step: make the brochure!\n", |
||||
"\n", |
||||
"Assemble all the details into another prompt to GPT-4o-mini" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "85a5b6e2-e7ef-44a9-bc7f-59ede71037b5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_all_details(url, system_prompt=fewshot_system_prompt):\n", |
||||
" result = \"Landing page:\\n\"\n", |
||||
" result += Website(url).get_contents()\n", |
||||
"\n", |
||||
" links = get_links(url, system_prompt)\n", |
||||
" print(\"Found links:\", links)\n", |
||||
" for link in links[\"links\"]:\n", |
||||
" result += f\"\\n\\n{link['type']}\\n\"\n", |
||||
" result += Website(link[\"url\"]).get_contents()\n", |
||||
" return result" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5099bd14-076d-4745-baf3-dac08d8e5ab2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_all_details(ed_url))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9b863a55-f86c-4e3f-8a79-94e24c1a8cf2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n", |
||||
"and creates a short brochure about the company for prospective customers, investors and recruits. \\\n", |
||||
"The brochure should be a bit unusual in tone and style; it should astound the reader and pique their interest. Respond in markdown.\\\n", |
||||
"Include details of company culture, customers and careers/jobs if you have the information.\"\n", |
||||
"\n", |
||||
"# Or uncomment the lines below for a more humorous brochure - this demonstrates how easy it is to incorporate 'tone':\n", |
||||
"\n", |
||||
"# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n", |
||||
"# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n", |
||||
"# Include details of company culture, customers and careers/jobs if you have the information.\"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6ab83d92-d36b-4ce0-8bcc-5bb4c2f8ff23", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_brochure_user_prompt(company_name, url):\n", |
||||
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n", |
||||
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n", |
||||
" user_prompt += get_all_details(url)\n", |
||||
" user_prompt = user_prompt[:5_000] # Truncate if more than 5,000 characters\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "05d07160-7910-4da2-92ac-36aa849fcc68", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get_brochure_user_prompt(\"Edward Donner\", ed_url)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cd909e0b-1312-4ce2-a553-821e795d7572", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# get_brochure_user_prompt(\"HuggingFace\", \"https://huggingface.co\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e44de579-4a1a-4e6a-a510-20ea3e4b8d46", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def create_brochure(company_name, url):\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n", |
||||
" ],\n", |
||||
" )\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6b0de762-f343-44d9-85d5-9bffba3c0ae8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"brochure_ed = create_brochure(\"Edward Donner\", ed_url)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e093444a-9407-42ae-924a-145730591a39", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"brochure_hf = create_brochure(\"HuggingFace\", \"https://huggingface.co\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0d00b012-3901-492c-b985-a0340750c011", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display(Markdown(brochure_ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e33cb2e9-3b8c-4ef3-a6cb-70b3188b9120", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"display(Markdown(brochure_hf))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dea955ad-24a6-490b-8191-f066bff1b595", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def translate_brochure(brochure_content, language=\"German\"):\n", |
||||
" system_prompt = f\"You are a skilled translator. Translate the following brochure text into {language}.\\\n", |
||||
" Make sure to translate into idiomatic {language}, matching the target language's natural structure, wording and expressions, so it cannot be recognised as a translation.\\\n", |
||||
" Also strike an appropriate tone; for example, good marketing copy in other languages is often less boastful than in English.\\\n", |
||||
" Output the translated brochure in Markdown format.\"\n", |
||||
" \n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model = MODEL,\n", |
||||
" messages = [{\"role\": \"system\", \"content\": system_prompt}, {\"role\": \"user\", \"content\": brochure_content}]\n", |
||||
" )\n", |
||||
"\n", |
||||
" return response.choices[0].message.content" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9b6bdd4f-7518-4780-9da9-47f90aab974b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"translation = translate_brochure(brochure_ed, language=\"German\")\n", |
||||
"display(Markdown(translation))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f1dd96f2-0980-4a30-a152-1f38c0e319bb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"translation = translate_brochure(brochure_hf, language=\"German\")\n", |
||||
"display(Markdown(translation))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "61eaaab7-0b47-4b29-82d4-75d474ad8d18", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Finally - a minor improvement\n", |
||||
"\n", |
||||
"With a small adjustment, we can change this so that the results stream back from OpenAI,\n", |
||||
"with the familiar typewriter animation" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "51db0e49-f261-4137-aabe-92dd601f7725", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def stream_brochure(company_name, url):\n", |
||||
" stream = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n", |
||||
" ],\n", |
||||
" stream=True\n", |
||||
" )\n", |
||||
" \n", |
||||
" response = \"\"\n", |
||||
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||
" for chunk in stream:\n", |
||||
" response += chunk.choices[0].delta.content or ''\n", |
||||
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n", |
||||
" update_display(Markdown(response), display_id=display_handle.display_id)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "56bf0ae3-ee9d-4a72-9cd6-edcac67ceb6d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"stream_brochure(\"HuggingFace\", \"https://huggingface.co\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fdb3f8d8-a3eb-41c8-b1aa-9f60686a653b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Try changing the system prompt to the humorous version when you make the Brochure for Hugging Face:\n", |
||||
"\n", |
||||
"stream_brochure(\"HuggingFace\", \"https://huggingface.co\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "a27bf9e0-665f-4645-b66b-9725e2a959b5", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#181;\">Business applications</h2>\n", |
||||
" <span style=\"color:#181;\">In this exercise we extended the Day 1 code to make multiple LLM calls, and generate a document.\n", |
||||
"\n", |
||||
"This is perhaps the first example of Agentic AI design patterns, as we combined multiple calls to LLMs. This will feature more in Week 2, and then we will return to Agentic AI in a big way in Week 8 when we build a fully autonomous Agent solution.\n", |
||||
"\n", |
||||
"Generating content in this way is one of the most common use cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype. See what other students have done in the community-contributions folder -- so many valuable projects -- it's wild!</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "14b2454b-8ef8-4b5c-b928-053a15e0d553", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#900;\">Before you move to Week 2 (which is tons of fun)</h2>\n", |
||||
" <span style=\"color:#900;\">Please see the week1 EXERCISE notebook for your challenge for the end of week 1. This will give you some essential practice working with Frontier APIs, and prepare you well for Week 2.</span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "17b64f0f-7d33-4493-985a-033d06e8db08", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../resources.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#f71;\">A reminder on 3 useful resources</h2>\n", |
||||
" <span style=\"color:#f71;\">1. The resources for the course are available <a href=\"https://edwarddonner.com/2024/11/13/llm-engineering-resources/\">here.</a><br/>\n", |
||||
" 2. I'm on LinkedIn <a href=\"https://www.linkedin.com/in/eddonner/\">here</a> and I love connecting with people taking the course!<br/>\n", |
||||
" 3. I'm trying out X/Twitter and I'm at <a href=\"https://x.com/edwarddonner\">@edwarddonner</a> and hoping people will teach me how it's done.\n", |
||||
" </span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "6f48e42e-fa7a-495f-a5d4-26bfc24d60b6", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"<table style=\"margin: 0; text-align: left;\">\n", |
||||
" <tr>\n", |
||||
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n", |
||||
" <img src=\"../thankyou.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n", |
||||
" </td>\n", |
||||
" <td>\n", |
||||
" <h2 style=\"color:#090;\">Finally! I have a special request for you</h2>\n", |
||||
" <span style=\"color:#090;\">\n", |
||||
" My editor tells me that it makes a MASSIVE difference when students rate this course on Udemy - it's one of the main ways that Udemy decides whether to show it to others. If you're able to take a minute to rate this, I'd be so very grateful! And regardless - always please reach out to me at ed@edwarddonner.com if I can help at any point.\n", |
||||
" </span>\n", |
||||
" </td>\n", |
||||
" </tr>\n", |
||||
"</table>" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b8d3e1a1-ba54-4907-97c5-30f89a24775b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,453 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "e14248ff-07be-4ba8-a13c-d8c7f40ffb5f", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# A full business solution\n", |
||||
"## Now we will take our project from Day 1 to the next level\n", |
||||
"## BUSINESS CHALLENGE:\n", |
||||
"Create a product that builds a Brochure for a company to be used for prospective clients, investors and potential recruits.\n", |
||||
"\n", |
||||
"We will be provided a company name and their primary website.\n", |
||||
"\n", |
||||
"See the end of this notebook for examples of real-world business applications.\n", |
||||
"\n", |
||||
"And remember: I'm always available if you have problems or ideas! Please do reach out." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6c8dc88a-85d9-493b-965c-68895cdd93f2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"import json\n", |
||||
"from typing import List\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display, update_display\n", |
||||
"from openai import OpenAI" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "131c483b-dd58-4faa-baf5-469ab6b00fbb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Initialize and constants\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"if api_key and api_key[:8] == 'sk-proj-':\n", |
||||
" print(\"API key looks good so far\")\n", |
||||
"else:\n", |
||||
" print(\"There might be a problem with your API key\")\n", |
||||
"\n", |
||||
"MODEL = 'gpt-4o-mini'\n", |
||||
"openai = OpenAI()\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "196c0dee-7236-4f88-b7c2-f2a885190b19", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
" \"\"\"\n", |
||||
" A utility class to represent a Website that we have scraped, now with links\n", |
||||
" \"\"\"\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url)\n", |
||||
" self.body = response.content\n", |
||||
" soup = BeautifulSoup(self.body, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" if soup.body:\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
" else:\n", |
||||
" self.text = \"\"\n", |
||||
" links = [link.get('href') for link in soup.find_all('a')]\n", |
||||
" self.links = [link for link in links if link]\n", |
||||
"\n", |
||||
" def get_contents(self):\n", |
||||
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f1329717-3727-4987-ada7-75df87a10459", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"ed = Website(\"https://www.anthropic.com/\")\n", |
||||
"print(ed.get_contents())\n", |
||||
"ed.links" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "912d4f83-c8f1-437c-a01b-e21988af477c", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## First step: Have GPT-4o-mini figure out which links are relevant\n", |
||||
"\n", |
||||
"### Use a call to gpt-4o-mini to read the links on a webpage, and respond in structured JSON. \n", |
||||
"It should decide which links are relevant, and replace relative links such as \"/about\" with \"https://company.com/about\". \n", |
||||
"We will use \"one shot prompting\" in which we provide an example of how it should respond in the prompt.\n", |
||||
"\n", |
||||
"This is an excellent use case for an LLM, because it requires nuanced understanding. Imagine trying to code this without LLMs by parsing and analyzing the webpage - it would be very hard!\n", |
||||
"\n", |
||||
"Sidenote: there is a more advanced technique called \"Structured Outputs\" in which we require the model to respond according to a spec. We cover this technique in Week 8 during our autonomous Agentic AI project." |
||||
] |
||||
}, |
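||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "added-urljoin-aside", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Aside (added - a minimal sketch, not part of the original notebook): relative\n", |
||||
"# links like \"/about\" can also be resolved deterministically with the standard\n", |
||||
"# library, which is a useful sanity check on the full URLs the LLM returns.\n", |
||||
"from urllib.parse import urljoin\n", |
||||
"\n", |
||||
"print(urljoin(\"https://company.com/jobs\", \"/about\"))  # https://company.com/about" |
||||
] |
||||
}, |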
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ed206771-df05-429d-8743-310bc86358ce", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"link_system_prompt = \"You are provided with a list of links found on a webpage. \\\n", |
||||
"You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n", |
||||
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n", |
||||
"link_system_prompt += \"You should respond in JSON as in this example:\"\n", |
||||
"link_system_prompt += \"\"\"\n", |
||||
"{\n", |
||||
" \"links\": [\n", |
||||
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n", |
||||
" {\"type\": \"careers page\", \"url\": \"https://another.full.url/careers\"}\n", |
||||
" ]\n", |
||||
"}\n", |
||||
"\"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ef835a85-9a48-42bd-979e-ca5f51bb1586", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(link_system_prompt)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f2885e89-6455-4239-a98d-5599ea6e5947", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links_user_prompt(website):\n", |
||||
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n", |
||||
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n", |
||||
"Do not include Terms of Service, Privacy, email links.\\n\"\n", |
||||
" user_prompt += \"Links (some might be relative links):\\n\"\n", |
||||
" user_prompt += \"\\n\".join(website.links)\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "da7e4468-a225-4263-a212-94b1c69d38da", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_links_user_prompt(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "53c59051-eed0-4292-8204-abbbd1d78df4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": link_system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n", |
||||
" ],\n", |
||||
" response_format={\"type\": \"json_object\"}\n", |
||||
" )\n", |
||||
" result = response.choices[0].message.content\n", |
||||
" return json.loads(result)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "76d3d68d-6534-4b04-8a26-a07a9e532665", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"anthropic=Website(\"https://www.anthropic.com/\")\n", |
||||
"anthropic.links" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "12ca6438-bc99-4b45-9603-54bee5d8bce2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"get_links(\"https://www.anthropic.com/\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "4304d6e8-900e-4702-b84c-f202d6265459", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Second step: make the brochure!\n", |
||||
"\n", |
||||
"Assemble all the details into another prompt to GPT-4o-mini" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "91ac10e6-8a7a-4367-939b-ac537c1c6c67", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_all_details(url):\n", |
||||
" result = \"Landing page:\\n\"\n", |
||||
" result += Website(url).get_contents()\n", |
||||
" links = get_links(url)\n", |
||||
" print(\"Found links:\", links)\n", |
||||
" for link in links[\"links\"]:\n", |
||||
" result += f\"\\n\\n{link['type']}\\n\"\n", |
||||
" result += Website(link[\"url\"]).get_contents()\n", |
||||
" return result" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "765e9c71-2bbc-4222-bce1-0f553d8d2b10", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_all_details(\"https://anthropic.com\"))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "7116adc1-6f5e-445f-9869-ffcf5fa6a9b8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n", |
||||
"and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n", |
||||
"Include details of company culture, customers and careers/jobs if you have the information.\"\n", |
||||
"\n", |
||||
"# Or uncomment the lines below for a more humorous brochure - this demonstrates how easy it is to incorporate 'tone':\n", |
||||
"\n", |
||||
"# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n", |
||||
"# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n", |
||||
"# Include details of company culture, customers and careers/jobs if you have the information.\"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "02edb903-6352-417f-8c0f-85c2eee269b6", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_brochure_user_prompt(company_name, url):\n", |
||||
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n", |
||||
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n", |
||||
" user_prompt += get_all_details(url)\n", |
||||
" user_prompt = user_prompt[:20_000] # Truncate if more than 20,000 characters\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "2f760069-910e-4209-b357-b97e710f560d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"get_brochure_user_prompt(\"Anthropic\", \"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "faf9d9cc-fe30-4441-9adc-aee5b4dc80ca", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def create_brochure(company_name, url):\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n", |
||||
" ],\n", |
||||
" )\n", |
||||
" result = response.choices[0].message.content\n", |
||||
" display(Markdown(result))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c8a672f4-ee87-4e2a-a6b1-dfb46f344ef3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"create_brochure(\"Anthropic\", \"https://anthropic.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "781fa1db-7acc-41fc-b26c-0d64964eb161", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Finally - a minor improvement\n", |
||||
"\n", |
||||
"With a small adjustment, we can change this so that the results stream back from OpenAI,\n", |
||||
"with the familiar typewriter animation" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b8359501-9f05-42bc-916c-7990ac910866", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def stream_brochure(company_name, url):\n", |
||||
" stream = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n", |
||||
" ],\n", |
||||
" stream=True\n", |
||||
" )\n", |
||||
"\n", |
||||
" response = \"\"\n", |
||||
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||
" for chunk in stream:\n", |
||||
" response += chunk.choices[0].delta.content or ''\n", |
||||
" response = response.replace(\"```\", \"\").replace(\"markdown\", \"\")\n", |
||||
" update_display(Markdown(response), display_id=display_handle.display_id)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cd834aa7-deda-40cd-97ab-5fa5117fc6e0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"stream_brochure(\"HuggingFace\", \"http://huggingface.co\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "207068f8-d768-46b2-8b92-0ec78a9f71ae", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Convert the brochure to a specified language\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e75be9e6-040d-4178-a5b3-1b7ae4460bc8", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def create_brochure_language(company_name, url, language):\n", |
||||
" language_prompt = f\"You are a professional translator and writer specializing in creating and translating brochures. Convert the brochure to {language} while maintaining its original tone, format, and purpose.\"\n", |
||||
" user_language_prompt = f\"Generate a brochure for the company '{company_name}' available at the URL: {url}, and translate it into {language}.\"\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": language_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": user_language_prompt}\n", |
||||
" ],\n", |
||||
" )\n", |
||||
" result = response.choices[0].message.content\n", |
||||
" display(Markdown(result))\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0748ec58-335b-4796-ae15-300dee7b24b0", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"create_brochure_language(\"HuggingFace\", \"http://huggingface.co\",\"Hindi\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "ba54f80b-b2cd-4a50-b460-e0d042499c49", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "182f35da-d7b1-40f8-b1a7-74e0cd7fd6fe", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,513 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "a98030af-fcd1-4d63-a36e-38ba053498fa", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# A full business solution\n", |
||||
"\n", |
||||
"## Now we will take our project from Day 1 to the next level\n", |
||||
"\n", |
||||
"### BUSINESS CHALLENGE:\n", |
||||
"\n", |
||||
"Create a product that builds a Brochure for a company to be used for prospective clients, investors and potential recruits.\n", |
||||
"\n", |
||||
"We will be provided a company name and their primary website.\n", |
||||
"\n", |
||||
"See the end of this notebook for examples of real-world business applications.\n", |
||||
"\n", |
||||
"And remember: I'm always available if you have problems or ideas! Please do reach out." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 1, |
||||
"id": "d5b08506-dc8b-4443-9201-5f1848161363", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# imports\n", |
||||
"# If these fail, please check you're running from an 'activated' environment with (llms) in the command prompt\n", |
||||
"\n", |
||||
"import os\n", |
||||
"import requests\n", |
||||
"import json\n", |
||||
"from typing import List\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display, update_display\n", |
||||
"from openai import OpenAI" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "fc5d8880-f2ee-4c06-af16-ecbc0262af61", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Initialize and constants\n", |
||||
"\n", |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv('OPENAI_API_KEY')\n", |
||||
"\n", |
||||
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n", |
||||
" print(\"API key looks good so far\")\n", |
||||
"else:\n", |
||||
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n", |
||||
" \n", |
||||
"MODEL = 'gpt-4o-mini'\n", |
||||
"openai = OpenAI()" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 3, |
||||
"id": "106dd65e-90af-4ca8-86b6-23a41840645b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
" \"\"\"\n", |
||||
" A utility class to represent a Website that we have scraped, now with links\n", |
||||
" \"\"\"\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" self.body = response.content\n", |
||||
" soup = BeautifulSoup(self.body, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" if soup.body:\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
" else:\n", |
||||
" self.text = \"\"\n", |
||||
" links = [link.get('href') for link in soup.find_all('a')]\n", |
||||
" self.links = [link for link in links if link]\n", |
||||
"\n", |
||||
" def get_contents(self):\n", |
||||
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e30d8128-933b-44cc-81c8-ab4c9d86589a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"ed = Website(\"https://edwarddonner.com\")\n", |
||||
"ed.links" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "1771af9c-717a-4fca-bbbe-8a95893312c3", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## First step: Have GPT-4o-mini figure out which links are relevant\n", |
||||
"\n", |
||||
"### Use a call to gpt-4o-mini to read the links on a webpage, and respond in structured JSON. \n", |
||||
"It should decide which links are relevant, and replace relative links such as \"/about\" with \"https://company.com/about\". \n", |
||||
"We will use \"one shot prompting\" in which we provide an example of how it should respond in the prompt.\n", |
||||
"\n", |
||||
"This is an excellent use case for an LLM, because it requires nuanced understanding. Imagine trying to code this without LLMs by parsing and analyzing the webpage - it would be very hard!\n", |
||||
"\n", |
||||
"Sidenote: there is a more advanced technique called \"Structured Outputs\" in which we require the model to respond according to a spec. We cover this technique in Week 8 during our autonomous Agentic AI project." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 5, |
||||
"id": "6957b079-0d96-45f7-a26a-3487510e9b35", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"link_system_prompt = \"You are provided with a list of links found on a webpage. \\\n", |
||||
"You are able to decide which of the links would be most relevant to include in a brochure about the company, \\\n", |
||||
"such as links to an About page, or a Company page, or Careers/Jobs pages.\\n\"\n", |
||||
"link_system_prompt += \"You should respond in JSON as in this example:\"\n", |
||||
"link_system_prompt += \"\"\"\n", |
||||
"{\n", |
||||
" \"links\": [\n", |
||||
" {\"type\": \"about page\", \"url\": \"https://full.url/goes/here/about\"},\n", |
||||
" {\"type\": \"careers page\": \"url\": \"https://another.full.url/careers\"}\n", |
||||
" ]\n", |
||||
"}\n", |
||||
"\"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "b97e4068-97ed-4120-beae-c42105e4d59a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(link_system_prompt)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 7, |
||||
"id": "8e1f601b-2eaf-499d-b6b8-c99050c9d6b3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links_user_prompt(website):\n", |
||||
" user_prompt = f\"Here is the list of links on the website of {website.url} - \"\n", |
||||
" user_prompt += \"please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. \\\n", |
||||
"Do not include Terms of Service, Privacy, email links.\\n\"\n", |
||||
" user_prompt += \"Links (some might be relative links):\\n\"\n", |
||||
" user_prompt += \"\\n\".join(website.links)\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "6bcbfa78-6395-4685-b92c-22d592050fd7", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_links_user_prompt(ed))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 9, |
||||
"id": "a29aca19-ca13-471c-a4b4-5abbfa813f69", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links(url):\n", |
||||
" website = Website(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": link_system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_links_user_prompt(website)}\n", |
||||
" ],\n", |
||||
" response_format={\"type\": \"json_object\"}\n", |
||||
" )\n", |
||||
" result = response.choices[0].message.content\n", |
||||
" return json.loads(result)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "74a827a0-2782-4ae5-b210-4a242a8b4cc2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# Anthropic has made their site harder to scrape, so I'm using HuggingFace..\n", |
||||
"\n", |
||||
"huggingface = Website(\"https://huggingface.co\")\n", |
||||
"huggingface.links" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "d3d583e2-dcc4-40cc-9b28-1e8dbf402924", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"get_links(\"https://huggingface.co\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "0d74128e-dfb6-47ec-9549-288b621c838c", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Second step: make the brochure!\n", |
||||
"\n", |
||||
"Assemble all the details into another prompt to GPT4-o" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 12, |
||||
"id": "85a5b6e2-e7ef-44a9-bc7f-59ede71037b5", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_all_details(url):\n", |
||||
" result = \"Landing page:\\n\"\n", |
||||
" result += Website(url).get_contents()\n", |
||||
" links = get_links(url)\n", |
||||
" print(\"Found links:\", links)\n", |
||||
" for link in links[\"links\"]:\n", |
||||
" result += f\"\\n\\n{link['type']}\\n\"\n", |
||||
" result += Website(link[\"url\"]).get_contents()\n", |
||||
" return result" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "5099bd14-076d-4745-baf3-dac08d8e5ab2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_all_details(\"https://huggingface.co\"))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 14, |
||||
"id": "9b863a55-f86c-4e3f-8a79-94e24c1a8cf2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n", |
||||
"and creates a short brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n", |
||||
"Include details of company culture, customers and careers/jobs if you have the information.\"\n", |
||||
"\n", |
||||
"# Or uncomment the lines below for a more humorous brochure - this demonstrates how easy it is to incorporate 'tone':\n", |
||||
"\n", |
||||
"# system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a company website \\\n", |
||||
"# and creates a short humorous, entertaining, jokey brochure about the company for prospective customers, investors and recruits. Respond in markdown.\\\n", |
||||
"# Include details of company culture, customers and careers/jobs if you have the information.\"\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 15, |
||||
"id": "6ab83d92-d36b-4ce0-8bcc-5bb4c2f8ff23", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_brochure_user_prompt(company_name, url):\n", |
||||
" user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n", |
||||
" user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n", |
||||
" user_prompt += get_all_details(url)\n", |
||||
" user_prompt = user_prompt[:5_000] # Truncate if more than 5,000 characters\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "cd909e0b-1312-4ce2-a553-821e795d7572", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_brochure_user_prompt(\"HuggingFace\", \"https://huggingface.co\"))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 17, |
||||
"id": "e44de579-4a1a-4e6a-a510-20ea3e4b8d46", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def create_brochure(company_name, url):\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n", |
||||
" ],\n", |
||||
" )\n", |
||||
" result = response.choices[0].message.content\n", |
||||
" display(Markdown(result))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "e093444a-9407-42ae-924a-145730591a39", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"create_brochure(\"HuggingFace\", \"https://huggingface.com\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "61eaaab7-0b47-4b29-82d4-75d474ad8d18", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## Finally - a minor improvement\n", |
||||
"\n", |
||||
"With a small adjustment, we can change this so that the results stream back from OpenAI,\n", |
||||
"with the familiar typewriter animation" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 19, |
||||
"id": "51db0e49-f261-4137-aabe-92dd601f7725", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def stream_brochure(company_name, url):\n", |
||||
" stream = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n", |
||||
" ],\n", |
||||
" stream=True\n", |
||||
" )\n", |
||||
" \n", |
||||
" response = \"\"\n", |
||||
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||
" for chunk in stream:\n", |
||||
" response += chunk.choices[0].delta.content or ''\n", |
||||
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n", |
||||
" update_display(Markdown(response), display_id=display_handle.display_id)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "56bf0ae3-ee9d-4a72-9cd6-edcac67ceb6d", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"stream_brochure(\"HuggingFace\", \"https://huggingface.co\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "87bd1188", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"stream_brochure(\"HuggingFace\", \"https://huggingface.co\")" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "a9e7375d", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"## **Multi-lingual with Multi-Tone in Desire Format**" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 24, |
||||
"id": "af5c959f", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def multi_lingual_stream_brochure(company_name, url, language, tone):\n", |
||||
"\n", |
||||
" system_prompt = f\"\"\"\n", |
||||
"You are an assistant that analyzes the contents of several relevant pages from a company website and creates a visually appealing and professional short brochure for prospective customers, investors, and recruits. \n", |
||||
"The brochure should be written in {language} and use a {tone.lower()} tone throughout.\n", |
||||
"\n", |
||||
"The brochure should follow this structure (in {language}):\n", |
||||
"\n", |
||||
"1. **Front Cover**:\n", |
||||
" - Prominently display the company name as Title.\n", |
||||
" - Include a compelling headline or tagline.\n", |
||||
" - Add something engaging relevant to the company’s mission.\n", |
||||
"\n", |
||||
"2. **About Us**:\n", |
||||
" - Provide a brief introduction to the company.\n", |
||||
" - State the company’s core mission and vision.\n", |
||||
" - Mention the founding story or key milestones.\n", |
||||
"\n", |
||||
"3. **What We Offer**:\n", |
||||
" - Summarize the company's products, services, or solutions.\n", |
||||
" - Highlight benefits or unique selling points.\n", |
||||
" - Include testimonials or case studies if available.\n", |
||||
"\n", |
||||
"4. **Our Culture**:\n", |
||||
" - Outline the company’s key values or guiding principles.\n", |
||||
" - Describe the workplace environment (e.g., innovation-driven, inclusive, collaborative).\n", |
||||
" - Highlight community engagement or CSR initiatives.\n", |
||||
"\n", |
||||
"5. **Who We Serve**:\n", |
||||
" - Describe the target customers or industries served.\n", |
||||
" - Mention notable clients or partners.\n", |
||||
" - Include testimonials or endorsements from customers.\n", |
||||
"\n", |
||||
"6. **Join Us**:\n", |
||||
" - Detail career or internship opportunities.\n", |
||||
" - Highlight benefits, career growth, or training opportunities.\n", |
||||
" - Provide direct links or steps to apply.\n", |
||||
"\n", |
||||
"7. **Contact Us**:\n", |
||||
" - Provide the company’s address, phone number, and email.\n", |
||||
" - Include links to social media platforms.\n", |
||||
" - Add a link to the company’s website.\n", |
||||
"\n", |
||||
"8. **Closing Note**:\n", |
||||
" - End with a thank-you message or an inspirational note for the reader.\n", |
||||
" - Add a call-to-action (e.g., “Get in touch today!” or “Explore more on our website”).\n", |
||||
"\n", |
||||
"Ensure the content is concise, engaging, visually clear, and tailored to the target audience. Use headings and subheadings to make the brochure easy to navigate. Include links and contact information wherever applicable.\n", |
||||
"\"\"\"\n", |
||||
"\n", |
||||
"\n", |
||||
" \n", |
||||
" stream = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_brochure_user_prompt(company_name, url)}\n", |
||||
" ],\n", |
||||
" stream=True\n", |
||||
" )\n", |
||||
" \n", |
||||
" response = \"\"\n", |
||||
" display_handle = display(Markdown(\"\"), display_id=True)\n", |
||||
" for chunk in stream:\n", |
||||
" response += chunk.choices[0].delta.content or ''\n", |
||||
" response = response.replace(\"```\",\"\").replace(\"markdown\", \"\")\n", |
||||
" update_display(Markdown(response), display_id=display_handle.display_id)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "744bfc05", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"\n", |
||||
"multi_lingual_stream_brochure(\"OpenAI\", \"https://openai.com/\", \"Urdu\", \"humorous, entertaining, jokey\")" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "llm_env", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.9" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,81 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "a98030af-fcd1-4d63-a36e-38ba053498fa", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# A Small Tweak to Week1-Day5\n", |
||||
"\n", |
||||
"If you have network restrictions (such as using a custom DNS provider, or firewall rules at work), you can disable SSL cert verification.\n", |
||||
"Once you do that and start executing your code, the output will be riddled with warnings. Thankfully, you can suppress those warnings,too.\n", |
||||
"\n", |
||||
"See the 2 lines added to the init method, below." |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": 22, |
||||
"id": "106dd65e-90af-4ca8-86b6-23a41840645b", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"# A class to represent a Webpage\n", |
||||
"\n", |
||||
"# Some websites need you to use proper headers when fetching them:\n", |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}\n", |
||||
"\n", |
||||
"class Website:\n", |
||||
" \"\"\"\n", |
||||
" A utility class to represent a Website that we have scraped, now with links\n", |
||||
" \"\"\"\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
"\n", |
||||
" #\n", |
||||
" # If you must disable SSL cert validation, and also suppress all the warning that will come with it,\n", |
||||
" # add the 2 lines below. This comes in very handy if you have DNS/firewall restrictions; alas, use\n", |
||||
" # with caution, especially if deploying this in a non-dev environment.\n", |
||||
" requests.packages.urllib3.disable_warnings() \n", |
||||
" response = requests.get(url, headers=headers, verify=False) \n", |
||||
" # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n", |
||||
" \n", |
||||
" self.body = response.content\n", |
||||
" soup = BeautifulSoup(self.body, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" if soup.body:\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
" else:\n", |
||||
" self.text = \"\"\n", |
||||
" links = [link.get('href') for link in soup.find_all('a')]\n", |
||||
" self.links = [link for link in links if link]" |
||||
] |
||||
} |
||||
], |
||||
"metadata": { |
||||
"kernelspec": { |
||||
"display_name": "Python 3 (ipykernel)", |
||||
"language": "python", |
||||
"name": "python3" |
||||
}, |
||||
"language_info": { |
||||
"codemirror_mode": { |
||||
"name": "ipython", |
||||
"version": 3 |
||||
}, |
||||
"file_extension": ".py", |
||||
"mimetype": "text/x-python", |
||||
"name": "python", |
||||
"nbconvert_exporter": "python", |
||||
"pygments_lexer": "ipython3", |
||||
"version": "3.11.11" |
||||
} |
||||
}, |
||||
"nbformat": 4, |
||||
"nbformat_minor": 5 |
||||
} |
@ -1,440 +0,0 @@
|
||||
{ |
||||
"cells": [ |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "07be6aa3-6636-4b57-be16-823c3907f4c4", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"import os\n", |
||||
"import requests\n", |
||||
"import json\n", |
||||
"from dotenv import load_dotenv\n", |
||||
"from bs4 import BeautifulSoup\n", |
||||
"from IPython.display import Markdown, display, update_display\n", |
||||
"from openai import OpenAI" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "0e64af7b-6956-4437-ab32-857a6ea814c3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"load_dotenv()\n", |
||||
"api_key = os.getenv(\"OPENAI_API_KEY\")\n", |
||||
"\n", |
||||
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n", |
||||
" print(\"Api key found. Good to go!\") \n", |
||||
"else:\n", |
||||
" print(\"No correct api key was found\")\n", |
||||
"MODEL = \"gpt-4o-mini\"\n", |
||||
"openai = OpenAI(api_key=api_key)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4667e3ee-d5b7-42ed-99ad-5e9fa75c8660", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"headers = {\n", |
||||
" \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36\"\n", |
||||
"}" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "42adb18b-3ec9-4700-95e4-c0041ce8f17a", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"class GithubProfile:\n", |
||||
"\n", |
||||
" def __init__(self, url):\n", |
||||
" self.url = url\n", |
||||
" response = requests.get(url, headers=headers)\n", |
||||
" self.body = response.content\n", |
||||
" soup = BeautifulSoup(self.body, 'html.parser')\n", |
||||
" self.title = soup.title.string if soup.title else \"No title found\"\n", |
||||
" if soup.body:\n", |
||||
" for irrelevant in soup.body([\"script\", \"style\", \"img\", \"input\"]):\n", |
||||
" irrelevant.decompose()\n", |
||||
" self.text = soup.body.get_text(separator=\"\\n\", strip=True)\n", |
||||
" else:\n", |
||||
" self.text = \"\"\n", |
||||
" links = [link.get(\"href\") for link in soup.find_all(\"a\")]\n", |
||||
" self.links = [link for link in links if link]\n", |
||||
" \n", |
||||
" def get_contents(self):\n", |
||||
" return f\"Webpage Title:\\n{self.title}\\nWebpage Contents:\\n{self.text}\\n\\n\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "661b5377-c444-45a9-9455-85f83ff525d3", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"profile = GithubProfile(\"https://github.com/ertgl\")\n", |
||||
"profile.links" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "8f9a3c08-0db2-4baa-a8a4-f5642049a57c", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"link_system_prompt = \"You are provided with a list of links found on a Github page. \\\n", |
||||
"You are able to decide which of the links would be most relevant to include in a portfolio about the github user, \\\n", |
||||
"such as links to an About page, or a Repositories, or Projects.\\n\"\n", |
||||
"link_system_prompt += \"You should respond in JSON as in this example:\"\n", |
||||
"link_system_prompt += \"\"\"\n", |
||||
"{\n", |
||||
" \"links\": [\n", |
||||
" {\"type\": \"overview page\", \"url\": \"https://another.full.url\"},\n", |
||||
" {\"type\": \"repositories page\": \"url\": \"https://another.full.url?tab=repositories\"}\n", |
||||
" ]\n", |
||||
"}\n", |
||||
"\"\"\"" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "30eafd50-9735-4388-9cc1-8337a00069a2", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(link_system_prompt)\n" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "4dc4f366-5c00-441d-b1bd-8dda148f1ffb", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links_user_prompt(profile):\n", |
||||
" user_prompt = f\"Here is the list of links on the website of {profile.url} - \"\n", |
||||
" user_prompt += \"please decide which of these are relevant web links for a portfolio about the user, respond with the full https URL in JSON format. \\\n", |
||||
"Do not include Terms of Service, Privacy, Login, Blog or Github trending related pages.\\n\"\n", |
||||
" user_prompt += \"Links (some might be relative links):\\n\"\n", |
||||
" user_prompt += \"\\n\".join(profile.links)\n", |
||||
" return user_prompt" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "c066b2ac-5863-408e-bb42-1388d130d164", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"print(get_links_user_prompt(profile))" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "dc0ccb95-479c-4f6e-9686-1ff38aa543fa", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_links(url):\n", |
||||
" profile = GithubProfile(url)\n", |
||||
" response = openai.chat.completions.create(\n", |
||||
" model=MODEL,\n", |
||||
" messages=[\n", |
||||
" {\"role\": \"system\", \"content\": link_system_prompt},\n", |
||||
" {\"role\": \"user\", \"content\": get_links_user_prompt(profile)}\n", |
||||
" ],\n", |
||||
" response_format= {\"type\": \"json_object\"}\n", |
||||
" )\n", |
||||
" result = response.choices[0].message.content\n", |
||||
" return json.loads(result)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "9f5e3b8b-398d-4e23-867e-401faca7db03", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"get_links(profile.url)" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "markdown", |
||||
"id": "b9024a4f-4038-4c0e-b0c7-74226feaccfd", |
||||
"metadata": {}, |
||||
"source": [ |
||||
"# Second step: make the portfolio!" |
||||
] |
||||
}, |
||||
{ |
||||
"cell_type": "code", |
||||
"execution_count": null, |
||||
"id": "f9906d73-801a-4aea-b620-10ac39eaf424", |
||||
"metadata": {}, |
||||
"outputs": [], |
||||
"source": [ |
||||
"def get_all_details(url):\n", |
||||
" result = \"Landing page:\\n\"\n", |
||||
" result += GithubProfile(url).get_contents()\n", |
||||
" links = get_links(url)\n", |
||||
" print(\"Found links:\", links)\n", |
||||
" for link in links[\"links\"]:\n", |
||||
" result += f\"\\n\\n{link['type']}\\n\"\n", |
||||
" result += GithubProfile(link[\"url\"]).get_contents()\n", |
||||
" return result" |
||||
] |
||||
}, |
||||
{
"cell_type": "code",
"execution_count": null,
"id": "02039450-7f7f-4556-8645-39cd31f30265",
"metadata": {},
"outputs": [],
"source": [
"print(get_all_details(\"https://github.com/ertgl\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4119b96f-0aa1-4cdb-9a09-d51b163069b8",
"metadata": {},
"outputs": [],
"source": [
"system_prompt = \"You are an assistant that analyzes the contents of several relevant pages from a personal GitHub profile \\\n",
"and creates a short portfolio of the user for prospective recruiters and investors, focusing on their projects and \\\n",
"repositories, with summaries of each repo's README file. Respond in markdown. \\\n",
"Include an overview of the person's profile if the information is available.\""
]
},
||||
{
"cell_type": "code",
"execution_count": null,
"id": "842834d2-a5e9-4b56-a792-492a1a137fbc",
"metadata": {},
"outputs": [],
"source": [
"def get_portfolio_user_prompt(profile_name, url):\n",
"    user_prompt = f\"You are looking at a user called {profile_name} on GitHub.\\n\"\n",
"    user_prompt += \"Here are the contents of their landing page and other relevant pages; use this information to build a short portfolio of the user in markdown.\\n\"\n",
"    user_prompt += get_all_details(url)\n",
"    user_prompt = user_prompt[:5_000]  # Truncate if more than 5,000 characters\n",
"    return user_prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "285b3a1d-894a-463c-8c30-b5de203b8358",
"metadata": {},
"outputs": [],
"source": [
"print(get_portfolio_user_prompt(\"Ertuğrul Noyan Keremoğlu\", \"https://github.com/ertgl\"))"
]
},
||||
{
"cell_type": "code",
"execution_count": null,
"id": "78dc7495-d0a5-409b-8ecf-3a5ef9220e25",
"metadata": {},
"outputs": [],
"source": [
"def create_portfolio(profile_name, url):\n",
"    response = openai.chat.completions.create(\n",
"        model=MODEL,\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": system_prompt},\n",
"            {\"role\": \"user\", \"content\": get_portfolio_user_prompt(profile_name, url)}\n",
"        ]\n",
"    )\n",
"    result = response.choices[0].message.content\n",
"    display(Markdown(result))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "abe39377-2d52-434a-aace-e9397cdd4f20",
"metadata": {},
"outputs": [],
"source": [
"create_portfolio(\"Ertuğrul Noyan Keremoğlu\", \"https://github.com/ertgl\")"
]
},
||||
{
"cell_type": "code",
"execution_count": null,
"id": "edd168ca-b77b-4fc7-9e11-2114a43553e4",
"metadata": {},
"outputs": [],
"source": [
"def stream_portfolio(profile_name, url):\n",
"    stream = openai.chat.completions.create(\n",
"        model=MODEL,\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": system_prompt},\n",
"            {\"role\": \"user\", \"content\": get_portfolio_user_prompt(profile_name, url)}\n",
"        ],\n",
"        stream=True\n",
"    )\n",
"\n",
"    response = \"\"\n",
"    display_handle = display(Markdown(\"\"), display_id=True)\n",
"    for chunk in stream:\n",
"        response += chunk.choices[0].delta.content or ''\n",
"        response = response.replace(\"```\", \"\").replace(\"markdown\", \"\")\n",
"        update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ea391d4-775e-483d-9e55-e3ae30fa9bd8",
"metadata": {},
"outputs": [],
"source": [
"stream_portfolio(\"Ertuğrul Noyan Keremoğlu\", \"https://github.com/ertgl\")"
]
},
||||
{
"cell_type": "markdown",
"id": "498ca0c8-8f68-4389-8184-078706b62cf6",
"metadata": {},
"source": [
"# Multilingual, multi-tone portfolios in the desired format"
]
},
||||
{
"cell_type": "code",
"execution_count": null,
"id": "f11e3391-03f9-409c-9f5a-6286959690ec",
"metadata": {},
"outputs": [],
"source": [
"def multi_lingual_stream_portfolio(profile_name, url, language, tone):\n",
"\n",
"    system_prompt = f\"\"\"\n",
"You are an assistant that analyzes the contents of several relevant pages from a GitHub profile and\n",
"creates a visually appealing and professional short portfolio for prospective investors and recruiters.\n",
"The portfolio should be written in {language} and use a {tone.lower()} tone throughout.\n",
"\n",
"The portfolio should follow this structure (in {language}):\n",
"\n",
"1. **Front Cover**:\n",
"   - Prominently display the user's name as the title.\n",
"   - Include a compelling headline or tagline.\n",
"   - Add something engaging and relevant from the user's summarized README files, if available.\n",
"\n",
"2. **About**:\n",
"   - Provide a brief introduction to the user's approach to their projects.\n",
"   - List the repositories they own or have contributed to.\n",
"\n",
"3. **Overview**:\n",
"   - Summarize the user's projects, repositories, or solutions based on the summarized README files, if available.\n",
"   - Highlight benefits or unique strengths as a developer.\n",
"   - Mention the follower and following counts and the total stars they have received.\n",
"\n",
"4. **My Culture**:\n",
"   - Outline the user's key values or guiding principles.\n",
"   - Describe the workplace environment (e.g., innovation-driven, inclusive, collaborative).\n",
"   - Highlight community engagement.\n",
"\n",
"5. **What kind of companies may be interested**:\n",
"   - Describe the target customers or industries served.\n",
"   - Mention open source contributions, if available.\n",
"\n",
"6. **Projects**:\n",
"\n",
"   ***Owner***:\n",
"   - List owned projects/repositories with summaries (summarize the README file of each project).\n",
"\n",
"   ***Contributor***:\n",
"   - List contributed projects/repositories with summaries (summarize the README file of each project).\n",
"\n",
"7. **Support and Donation**:\n",
"   - Encourage those interested in the user's open source projects to donate.\n",
"   - Provide direct links or steps to do so, if available.\n",
"\n",
"8. **Contact Us**:\n",
"   - Provide the user's address, phone number, and email.\n",
"   - Include links to social media platforms.\n",
"   - Add a link to the user's website.\n",
"\n",
"9. **Closing Note**:\n",
"   - End with a thank-you message or an inspirational note for the reader.\n",
"   - Add a call-to-action (e.g., “Get in touch today!” or “Explore more on my website”).\n",
"\n",
"Ensure the content is concise, engaging, visually clear, and tailored to the target audience. Use headings and subheadings to make the portfolio easy to navigate. Include links and contact information wherever applicable.\n",
"\"\"\"\n",
"\n",
"    stream = openai.chat.completions.create(\n",
"        model=MODEL,\n",
"        messages=[\n",
"            {\"role\": \"system\", \"content\": system_prompt},\n",
"            {\"role\": \"user\", \"content\": get_portfolio_user_prompt(profile_name, url)}\n",
"        ],\n",
"        stream=True\n",
"    )\n",
"\n",
"    response = \"\"\n",
"    display_handle = display(Markdown(\"\"), display_id=True)\n",
"    for chunk in stream:\n",
"        response += chunk.choices[0].delta.content or ''\n",
"        response = response.replace(\"```\", \"\").replace(\"markdown\", \"\")\n",
"        update_display(Markdown(response), display_id=display_handle.display_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3a38dc0b-27de-4738-8883-b3857e067b45",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"multi_lingual_stream_portfolio(\"Ertuğrul Noyan Keremoğlu\", \"https://github.com/ertgl\", \"English\", \"serious, entertaining, witty\")"
]
}
||||
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.0"
}
},
"nbformat": 4,
"nbformat_minor": 5
}