From 7ea4ba144dd8c02940214705c0fa205eb2327885 Mon Sep 17 00:00:00 2001 From: MuhammedDele <100720854+MuhammedDele@users.noreply.github.com> Date: Sat, 14 Dec 2024 20:54:00 +0300 Subject: [PATCH] last version --- README.md | 260 ++- environment.yml | 41 +- requirements.txt | 13 +- .../day1-selenium-for-javascript-sites.ipynb | 2 +- week1/day1.ipynb | 389 ++--- week1/day5.ipynb | 1515 +++++++++++++++-- week1/solutions/week1 SOLUTION.ipynb | 76 +- 7 files changed, 1848 insertions(+), 448 deletions(-) diff --git a/README.md b/README.md index 94285b3..9323ce5 100644 --- a/README.md +++ b/README.md @@ -11,48 +11,154 @@ I'm so happy you're joining me on this path. We'll be building immensely satisfy I'm here to help you be most successful with your learning! If you hit any snafus, or if you have any ideas on how I can improve the course, please do reach out in the platform or by emailing me direct (ed@edwarddonner.com). It's always great to connect with people on LinkedIn to build up the community - you'll find me here: https://www.linkedin.com/in/eddonner/ -Resources to accompany the course, including the slides and useful links, are here: -https://edwarddonner.com/2024/11/13/llm-engineering-resources/ +### An important point on API costs -## Instant Gratification instructions for Week 1, Day 1 +During the course, I'll suggest you try out the leading models at the forefront of progress, known as the Frontier models. I'll also suggest you run open-source models using Google Colab. These services have some charges, but I'll keep cost minimal - like, a few cents at a time. -We will start the course by installing Ollama so you can see results immediately! -1. Download and install Ollama from https://ollama.com noting that on a PC you might need to have administrator permissions for the install to work properly -2. On a PC, start a Command prompt / Powershell (Press Win + R, type `cmd`, and press Enter). On a Mac, start a Terminal (Applications > Utilities > Terminal). -3. Run `ollama run llama3.2` or for smaller machines try `ollama run llama3.2:1b` -4. If this doesn't work, you may need to run `ollama serve` in another Powershell (Windows) or Terminal (Mac), and try step 3 again -5. And if that doesn't work on your box, I've set up this on the cloud. This is on Google Colab, which will need you to have a Google account to sign in, but is free: https://colab.research.google.com/drive/1-_f5XZPsChvfU1sJ0QqCePtIuc55LSdu?usp=sharing +Please do monitor your API usage to ensure you're comfortable with spend; I've included links below. There's no need to spend anything more than a couple of dollars for the entire course. During Week 7 you have an option to spend a bit more if you're enjoying the process - I spend about $10 myself and the results make me very happy indeed! But it's not necessary in the least; the important part is that you focus on learning. -Any problems, please contact me! +### How this Repo is organized -## Then, Setup instructions +There are folders for each of the "weeks", representing modules of the class, culminating in a powerful autonomous Agentic AI solution in Week 8 that draws on many of the prior weeks. +Follow the setup instructions below, then open the Week 1 folder and prepare for joy. -After we do the Ollama quick project, and after I introduce myself and the course, we get to work with the full environment setup. 
+### The most important part -Hopefully I've done a decent job of making these guides bulletproof - but please contact me right away if you hit roadblocks: +The mantra of the course is: the best way to learn is by **DOING**. You should work along with me, running each cell, inspecting the objects to get a detailed understanding of what's happening. Then tweak the code and make it your own. There are juicy challenges for you throughout the course. I'd love it if you wanted to push your code so I can follow along with your progress, and I can make your solutions available to others so we share in your progress. While the projects are enjoyable, they are first and foremost designed to be _educational_, teaching you business skills that can be put into practice in your work. -- PC people please follow the instructions in [SETUP-PC.md](SETUP-PC.md) -- Mac people please follow the instructions in [SETUP-mac.md](SETUP-mac.md) -- Linux people, the Mac instructions should be close enough! +## Setup instructions -### An important point on API costs (which are optional! No need to spend if you don't wish) +The recommended approach is to use Anaconda for your environment. Even if you've never used it before, it makes such a difference. Anaconda ensures that you're working with the right version of Python and all your packages are compatible with mine, even if we're on different platforms. -During the course, I'll suggest you try out the leading models at the forefront of progress, known as the Frontier models. I'll also suggest you run open-source models using Google Colab. These services have some charges, but I'll keep cost minimal - like, a few cents at a time. And I'll provide alternatives if you'd prefer not to use them. +**Update** Some people have had problems with Anaconda - horrors! The idea of Anaconda is to make it really smooth and simple to be working with the same environment. If you hit any problems with the instructions below, please skip to near the end of this README for the alternative approach using `pip`, and hopefully you'll be up and running fast. And please do message me if I can help with anything. -Please do monitor your API usage to ensure you're comfortable with spend; I've included links below. There's no need to spend anything more than a couple of dollars for the entire course. Some AI providers such as OpenAI require a minimum credit like \$5 or local equivalent; we should only spend a fraction of it, and you'll have plenty of opportunity to put it to good use in your own projects. During Week 7 you have an option to spend a bit more if you're enjoying the process - I spend about $10 myself and the results make me very happy indeed! But it's not necessary in the least; the important part is that you focus on learning. +We'll be mostly using Jupyter Lab in this course. For those new to Jupyter Lab / Jupyter Notebook, it's a delightful Data Science environment where you can simply hit shift+return in any cell to run it; start at the top and work your way down! When we move to Google Colab in Week 3, you'll experience the same interface for Python runtimes in the cloud. -I'll also show you an alternative if you'd rather not spend anything on APIs. +### For Windows Users -### How this Repo is organized +1. **Install Git** (if not already installed): -There are folders for each of the "weeks", representing modules of the class, culminating in a powerful autonomous Agentic AI solution in Week 8 that draws on many of the prior weeks. 
-Follow the setup instructions above, then open the Week 1 folder and prepare for joy. +- Download Git from https://git-scm.com/download/win +- Run the installer and follow the prompts, using default options -### The most important part +2. **Open Command Prompt:** + +- Press Win + R, type `cmd`, and press Enter + +3. **Navigate to your projects folder:** + +If you have a specific folder for projects, navigate to it using the cd command. For example: +`cd C:\Users\YourUsername\Documents\Projects` + +If you don't have a projects folder, you can create one: +``` +mkdir C:\Users\YourUsername\Documents\Projects +cd C:\Users\YourUsername\Documents\Projects +``` +(Replace YourUsername with your actual Windows username) + +3. **Clone the repository:** + +- Go to the course's GitHub page +- Click the green 'Code' button and copy the URL +- In the Command Prompt, type: `git clone ` + +4. **Install Anaconda:** + +- Download Anaconda from https://docs.anaconda.com/anaconda/install/windows/ +- Run the installer and follow the prompts +- A student mentioned that if you are prompted to upgrade Anaconda to a newer version during the install, you shouldn't do it, as there might be problems with the very latest update for PC. (Thanks for the pro-tip!) + +5. **Set up the environment:** + +- Open Anaconda Prompt (search for it in the Start menu) +- Navigate to the cloned repository folder using `cd path\to\repo` (replace `path\to\repo` with the actual path to the llm_engineering directory, your locally cloned version of the repo) +- Create the environment: `conda env create -f environment.yml` +- Wait for a few minutes for all packages to be installed +- Activate the environment: `conda activate llms` + +You should see `(llms)` in your prompt, which indicates you've activated your new environment. + +6. **Start Jupyter Lab:** + +- In the Anaconda Prompt, from within the `llm_engineering` folder, type: `jupyter lab` + +...and Jupyter Lab should open up, ready for you to get started. Open the `week1` folder and double click on `day1.ipnbk`. + +### For Mac Users + +1. **Install Git** if not already installed (it will be in most cases) + +- Open Terminal (Applications > Utilities > Terminal) +- Type `git --version` If not installed, you'll be prompted to install it + +2. **Navigate to your projects folder:** + +If you have a specific folder for projects, navigate to it using the cd command. For example: +`cd ~/Documents/Projects` + +If you don't have a projects folder, you can create one: +``` +mkdir ~/Documents/Projects +cd ~/Documents/Projects +``` + +3. **Clone the repository** + +- Go to the course's GitHub page +- Click the green 'Code' button and copy the URL +- In Terminal, type: `git clone ` + +4. **Install Anaconda:** + +- Download Anaconda from https://docs.anaconda.com/anaconda/install/mac-os/ +- Double-click the downloaded file and follow the installation prompts + +5. **Set up the environment:** + +- Open Terminal +- Navigate to the cloned repository folder using `cd path/to/repo` (replace `path/to/repo` with the actual path to the llm_engineering directory, your locally cloned version of the repo) +- Create the environment: `conda env create -f environment.yml` +- Wait for a few minutes for all packages to be installed +- Activate the environment: `conda activate llms` + +You should see `(llms)` in your prompt, which indicates you've activated your new environment. + +6. 
**Start Jupyter Lab:** + +- In Terminal, from within the `llm_engineering` folder, type: `jupyter lab` + +...and Jupyter Lab should open up, ready for you to get started. Open the `week1` folder and double click on `day1.ipnbk`. + +### When we get to it, creating your API keys + +Particularly during weeks 1 and 2 of the course, you'll be writing code to call the APIs of Frontier models (models at the forefront of progress). You'll need to join me in setting up accounts and API keys. -The mantra of the course is: the best way to learn is by **DOING**. I don't type all the code during the course; I execute it for you to see the results. You should work along with me or after each lecture, running each cell, inspecting the objects to get a detailed understanding of what's happening. Then tweak the code and make it your own. There are juicy challenges for you throughout the course. I'd love it if you wanted to push your code so I can follow along with your progress, and I can make your solutions available to others so we share in your progress. While the projects are enjoyable, they are first and foremost designed to be _educational_, teaching you business skills that can be put into practice in your work. +- [GPT API](https://platform.openai.com/) from OpenAI +- [Claude API](https://console.anthropic.com/) from Anthropic +- [Gemini API](https://ai.google.dev/gemini-api) from Google -## Starting in Week 3, we'll also be using Google Colab for running with GPUs +Initially we'll only use OpenAI, so you can start with that, and we'll cover the others soon afterwards. The webpage where you set up your OpenAI key is [here](https://platform.openai.com/api-keys). See the extra note on API costs below if that's a concern. One student mentioned to me that OpenAI can take a few minutes to register; if you initially get an error about being out of quota, wait a few minutes and try again. Another reason you might encounter the out of quota error is if you haven't yet added a valid payment method to your OpenAI account. You can do this by clicking your profile picture on the OpenAI website then clicking "Your profile." Once you are redirected to your profile page, choose "Billing" on the left-pane menu. You will need to enter a valid payment method and charge your account with a small advance payment. It is recommended that you **disable** the automatic recharge as an extra failsafe. If it's still a problem, see more troubleshooting tips in the Week 1 Day 1 notebook, and/or message me! + +Later in the course you'll be using the fabulous HuggingFace platform; an account is available for free at [HuggingFace](https://huggingface.co) - you can create an API token from the Avatar menu >> Settings >> Access Tokens. + +And in Week 6/7 you'll be using the terrific [Weights & Biases](https://wandb.ai) platform to watch over your training batches. Accounts are also free, and you can set up a token in a similar way. + +When you have these keys, please create a new file called `.env` in your project root directory. This file won't appear in Jupyter Lab because it's a hidden file; you should create it using something like Notepad (PC) or nano (Mac / Linux). I've put detailed instructions at the end of this README. + +It should have contents like this, and to start with you only need the first line: + +``` +OPENAI_API_KEY=xxxx +GOOGLE_API_KEY=xxxx +ANTHROPIC_API_KEY=xxxx +HF_TOKEN=xxxx +``` + +This file is listed in the `.gitignore` file, so it won't get checked in and your keys stay safe. 
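As a quick sanity check that the keys load, a minimal sketch like this works (assuming the `llms` environment above, which includes `python-dotenv`, and that you run it from the project root where `.env` lives; the file name `check_env.py` is just illustrative):

```python
# check_env.py - a minimal sanity check for the .env file (not part of the repo)
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory into environment variables

key = os.getenv("OPENAI_API_KEY")
if not key:
    print("No OPENAI_API_KEY found - check the file is named exactly .env and sits in the project root")
elif key != key.strip():
    print("Key found, but it has leading or trailing whitespace - please remove it")
else:
    print(f"Key found, starting {key[:8]}...")
```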
+If you have any problems with this process, there's a simple workaround which I explain in the video. + +### Starting in Week 3, we'll also be using Google Colab for running with GPUs You should be able to use the free tier or minimal spend to complete all the projects in the class. I personally signed up for Colab Pro+ and I'm loving it - but it's not required. @@ -75,19 +181,91 @@ The charges for the exercsies in this course should always be quite low, but if 2. For Anthropic: Always use model `claude-3-haiku-20240307` in the code instead of the other Claude models 3. During week 7, look out for my instructions for using the cheaper dataset -Please do message me or email me at ed@edwarddonner.com if this doesn't work or if I can help with anything. I can't wait to hear how you get on. +## And that's it! Happy coding! + +### Alternative Setup Instructions if Anaconda is giving you problems + +First please run: +`python --version` +To find out which python you're on. Ideally you'd be using Python 3.11.x, so we're completely in sync. You can download python at +https://www.python.org/downloads/ + +Here are the steps: + +After cloning the repo, cd into the project root directory `llm_engineering`. +Then: + +1. Create a new virtual environment: `python -m venv venv` +2. Activate the virtual environment with +On a Mac: `source venv/bin/activate` +On a PC: `venv\Scripts\activate` +3. Run `pip install -r requirements.txt` +4. Create a file called `.env` in the project root directory and add any private API keys, such as below. (The next section has more detailed instructions for this, if you prefer.) + +``` +OPENAI_API_KEY=xxxx +GOOGLE_API_KEY=xxxx +ANTHROPIC_API_KEY=xxxx +HF_TOKEN=xxxx +``` + +5. Run `jupyter lab` to launch Jupyter and head over to the intro folder to get started. + +Let me know if you hit problems. + +### Guide to creating the `.env` file + +**For PC users:** + +1. Open the Notepad (Windows + R to open the Run box, enter notepad) + +2. In the Notepad, type the contents of the file, such as: - - - - - -
-## Other resources
-
-I've put together this webpage with useful resources for the course. This includes links to all the slides.
-https://edwarddonner.com/2024/11/13/llm-engineering-resources/
-Please keep this bookmarked, and I'll continue to add more useful links there over time.
-
+``` +OPENAI_API_KEY=xxxx +GOOGLE_API_KEY=xxxx +ANTHROPIC_API_KEY=xxxx +HF_TOKEN=xxxx +``` +Double check there are no spaces before or after the `=` sign, and no spaces at the end of the key. + +3. Go to File > Save As. In the "Save as type" dropdown, select All Files. In the "File name" field, type ".env". Choose the root of the project folder (the folder called `llm_engineering`) and click Save. + +4. Navigate to the foler where you saved the file in Explorer and ensure it was saved as ".env" not ".env.txt" - if necessary rename it to ".env" - you might need to ensure that "Show file extensions" is set to "On" so that you see the file extensions. Message or email me if that doesn't make sense! + +**For Mac users:** + +1. Open Terminal (Command + Space to open Spotlight, type Terminal and press Enter) + +2. cd to your project root directory + +cd /path/to/your/project + +(in other words, change to the directory like `/Users/your_name/Projects/llm_engineering`, or wherever you have cloned llm_engineering). + +3. Create the .env file with + +nano .env + +4. Then type your API keys into nano: + +``` +OPENAI_API_KEY=xxxx +GOOGLE_API_KEY=xxxx +ANTHROPIC_API_KEY=xxxx +HF_TOKEN=xxxx +``` + +5. Save the file: + +Control + O +Enter (to confirm save the file) +Control + X to exit the editor + +6. Use this command to list files in your file + +`ls -a` + +And confirm that the `.env` file is there. + +Please do message me or email me at ed@edwarddonner.com if this doesn't work or if I can help with anything. I can't wait to hear how you get on. diff --git a/environment.yml b/environment.yml index 1247085..fd643b2 100644 --- a/environment.yml +++ b/environment.yml @@ -7,44 +7,41 @@ dependencies: - pip - python-dotenv - requests + - beautifulsoup4 + - pydub - numpy - pandas - scipy - pytorch - jupyterlab - ipywidgets + - pyarrow + - anthropic + - google-generativeai - matplotlib - scikit-learn - chromadb - - jupyter-dash - - sentencepiece - - pyarrow + - langchain + - langchain-text-splitters + - langchain-openai + - langchain-experimental + - langchain-chroma - faiss-cpu + - tiktoken + - jupyter-dash + - plotly + - twilio + - duckdb + - feedparser - pip: - - beautifulsoup4 - - plotly - - bitsandbytes - transformers - sentence-transformers - datasets - accelerate + - sentencepiece + - bitsandbytes - openai - - anthropic - - google-generativeai - gradio - gensim - modal - - ollama - - psutil - - setuptools - - speedtest-cli - - langchain - - langchain-core - - langchain-text-splitters - - langchain-openai - - langchain-chroma - - langchain-community - - faiss-cpu - - feedparser - - twilio - - pydub + diff --git a/requirements.txt b/requirements.txt index 5dd33ad..8471a21 100644 --- a/requirements.txt +++ b/requirements.txt @@ -10,6 +10,9 @@ matplotlib gensim torch transformers +accelerate +sentencepiece +bitsandbytes tqdm openai gradio @@ -31,12 +34,4 @@ chromadb plotly jupyter-dash beautifulsoup4 -pydub -modal -ollama -accelerate -sentencepiece -bitsandbytes -psutil -setuptools -speedtest-cli +pydub \ No newline at end of file diff --git a/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb b/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb index fd3a3ba..febcc6b 100644 --- a/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb +++ b/week1/community-contributions/day1-selenium-for-javascript-sites.ipynb @@ -376,7 +376,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.11" + "version": "3.11.4" 
} }, "nbformat": 4, diff --git a/week1/day1.ipynb b/week1/day1.ipynb index 2c2e1c2..349ad9e 100644 --- a/week1/day1.ipynb +++ b/week1/day1.ipynb @@ -5,9 +5,7 @@ "id": "d15d8294-3328-4e07-ad16-8a03e9bbfdb9", "metadata": {}, "source": [ - "# Instant Gratification\n", - "\n", - "## Your first Frontier LLM Project!\n", + "# Instant Gratification!\n", "\n", "Let's build a useful LLM solution - in a matter of minutes.\n", "\n", @@ -15,61 +13,39 @@ "\n", "Our goal is to code a new kind of Web Browser. Give it a URL, and it will respond with a summary. The Reader's Digest of the internet!!\n", "\n", - "Before starting, you should have completed the setup for [PC](../SETUP-PC.md) or [Mac](../SETUP-mac.md) and you hopefully launched this jupyter lab from within the project root directory, with your environment activated.\n", + "Before starting, be sure to have followed the instructions in the \"README\" file, including creating your API key with OpenAI and adding it to the `.env` file.\n", "\n", "## If you're new to Jupyter Lab\n", "\n", "Welcome to the wonderful world of Data Science experimentation! Once you've used Jupyter Lab, you'll wonder how you ever lived without it. Simply click in each \"cell\" with code in it, such as the cell immediately below this text, and hit Shift+Return to execute that cell. As you wish, you can add a cell with the + button in the toolbar, and print values of variables, or try out variations. \n", "\n", - "I've written a notebook called [Guide to Jupyter](Guide%20to%20Jupyter.ipynb) to help you get more familiar with Jupyter Labs, including adding Markdown comments, using `!` to run shell commands, and `tqdm` to show progress.\n", - "\n", - "If you prefer to work in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n", + "If you need to start a 'notebook' again, go to Kernel menu >> Restart kernel. \n", "\n", - "## If you'd like to brush up your Python\n", + "If you want to become a pro at Jupyter Lab, you can read their tutorial [here](https://jupyterlab.readthedocs.io/en/latest/). But this isn't required for our course; just a good technique for hitting Shift + Return and enjoying the result!\n", "\n", - "I've added a notebook called [Intermediate Python](Intermediate%20Python.ipynb) to get you up to speed. But you should give it a miss if you already have a good idea what this code does: \n", - "`yield from {book.get(\"author\") for book in books if book.get(\"author\")}`\n", + "If you prefer to work in IDEs like VSCode or Pycharm, they both work great with these lab notebooks too. \n", "\n", "## I am here to help\n", "\n", "If you have any problems at all, please do reach out. \n", - "I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect (and I love connecting!)\n", + "I'm available through the platform, or at ed@edwarddonner.com, or at https://www.linkedin.com/in/eddonner/ if you'd like to connect.\n", "\n", "## More troubleshooting\n", "\n", - "Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder to diagnose and fix common problems. 
At the very end of it is a diagnostics script with some useful debug info.\n", + "Please see the [troubleshooting](troubleshooting.ipynb) notebook in this folder for more ideas!\n", "\n", "## If this is old hat!\n", "\n", "If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n", "\n", - "\n", - " \n", - " \n", - " \n", - " \n", - "
\n",
-    "## Please read - important note\n",
-    "\n",
-    "The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n",
-    "\n",
-    "## Business value of these exercises\n",
-    "\n",
-    "A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.\n",
" + "## Business value of these exercises\n", + "\n", + "A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me." ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "id": "4e2a9393-7767-488e-a8bf-27c12dca35bd", "metadata": {}, "outputs": [], @@ -81,9 +57,7 @@ "from dotenv import load_dotenv\n", "from bs4 import BeautifulSoup\n", "from IPython.display import Markdown, display\n", - "from openai import OpenAI\n", - "\n", - "# If you get an error running this cell, then please head over to the troubleshooting notebook!" + "from openai import OpenAI" ] }, { @@ -97,18 +71,23 @@ "\n", "## Troubleshooting if you have problems:\n", "\n", - "Head over to the [troubleshooting](troubleshooting.ipynb) notebook in this folder for step by step code to identify the root cause and fix it!\n", + "1. OpenAI takes a few minutes to register after you set up an account. If you receive an error about being over quota, try waiting a few minutes and try again.\n", + "2. You'll need to set up billing and add the minimum amount of credit at this page [here](https://platform.openai.com/settings/organization/billing/overview). OpenAI requires a minimum of $5 to get started in the U.S. right now - this might be different for your region. You'll only need to use a fraction for this course. In my view, this is well worth the investment for your education and future projects - but if you have any concerns, you can skip this and watch me using OpenAI instead. In week 3 we will start to use free open-source models!\n", + "3. Also, double check you have the right kind of API token with the right permissions. You should find it on [this webpage](https://platform.openai.com/api-keys) and it should show with Permissions of \"All\". If not, try creating another key by:\n", + "- Pressing \"Create new secret key\" on the top right\n", + "- Select **Owned by:** you, **Project:** Default project, **Permissions:** All\n", + "- Click Create secret key, and use that new key in the code and the `.env` file (it might take a few minutes to activate)\n", + "- Do a Kernel >> Restart kernel, and execute the cells in this Jupyter lab starting at the top\n", + "4. As a fallback, replace the line `openai = OpenAI()` with `openai = OpenAI(api_key=\"your-key-here\")` - while it's not recommended to hard code tokens in Jupyter lab, because then you can't share your lab with others, it's a workaround for now\n", + "5. See the [troubleshooting](troubleshooting.ipynb) notebook in this folder for more instructions\n", + "6. Contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", "\n", - "If you make a change, try restarting the \"Kernel\" (the python process sitting behind this notebook) by Kernel menu >> Restart Kernel and Clear Outputs of All Cells. Then try this notebook again, starting at the top.\n", - "\n", - "Or, contact me! Message me or email ed@edwarddonner.com and we will get this to work.\n", - "\n", - "Any concerns about API costs? 
See my notes in the README - costs should be minimal, and you can control it at every point. You can also use Ollama as a free alternative, which we discuss during Day 2." + "Any concerns about API costs? See my notes in the README - costs should be minimal, and you can control it at every point." ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 2, "id": "7b87cadb-d513-4303-baee-a37b6f938e4d", "metadata": {}, "outputs": [], @@ -116,75 +95,29 @@ "# Load environment variables in a file called .env\n", "\n", "load_dotenv()\n", - "api_key = os.getenv('OPENAI_API_KEY')\n", - "\n", - "# Check the key\n", - "\n", - "if not api_key:\n", - " print(\"No API key was found - please head over to the troubleshooting notebook in this folder to identify & fix!\")\n", - "elif not api_key.startswith(\"sk-proj-\"):\n", - " print(\"An API key was found, but it doesn't start sk-proj-; please check you're using the right key - see troubleshooting notebook\")\n", - "elif api_key.strip() != api_key:\n", - " print(\"An API key was found, but it looks like it might have space or tab characters at the start or end - please remove them - see troubleshooting notebook\")\n", - "else:\n", - " print(\"API key found and looks good so far!\")\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "019974d9-f3ad-4a8a-b5f9-0a3719aea2d3", - "metadata": {}, - "outputs": [], - "source": [ + "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "openai = OpenAI()\n", "\n", - "# If this doesn't work, try Kernel menu >> Restart Kernel and Clear Outputs Of All Cells, then run the cells from the top of this notebook down.\n", - "# If it STILL doesn't work (horrors!) then please see the troubleshooting notebook, or try the below line instead:\n", - "# openai = OpenAI(api_key=\"your-key-here-starting-sk-proj-\")" - ] - }, - { - "cell_type": "markdown", - "id": "442fc84b-0815-4f40-99ab-d9a5da6bda91", - "metadata": {}, - "source": [ - "# Let's make a quick call to a Frontier model to get started, as a preview!" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "a58394bf-1e45-46af-9bfd-01e24da6f49a", - "metadata": {}, - "outputs": [], - "source": [ - "# To give you a preview -- calling OpenAI with these messages is this easy:\n", - "\n", - "message = \"Hello, GPT! This is my first ever message to you! 
Hi!\"\n", - "response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=[{\"role\":\"user\", \"content\":message}])\n", - "print(response.choices[0].message.content)" - ] - }, - { - "cell_type": "markdown", - "id": "2aa190e5-cb31-456a-96cc-db109919cd78", - "metadata": {}, - "source": [ - "## OK onwards with our first project" + "# See the troubleshooting notebook, ot try the below line instead if this gives you any problems:\n", + "# openai = OpenAI(api_key=\"your-key-here\")" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "c5e793b2-6775-426a-a139-4848291d0463", "metadata": {}, "outputs": [], "source": [ "# A class to represent a Webpage\n", - "# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n", "\n", "class Website:\n", + " \"\"\"\n", + " A utility class to represent a Website that we have scraped\n", + " \"\"\"\n", + " url: str\n", + " title: str\n", + " text: str\n", "\n", " def __init__(self, url):\n", " \"\"\"\n", @@ -201,12 +134,65 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97", "metadata": {}, - "outputs": [], - "source": [ - "# Let's try one out. Change the website and add print statements to follow along.\n", + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Home - Edward Donner\n", + "Home\n", + "Outsmart\n", + "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n", + "About\n", + "Posts\n", + "Well, hi there.\n", + "I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n", + "very\n", + "amateur) and losing myself in\n", + "Hacker News\n", + ", nodding my head sagely to things I only half understand.\n", + "I’m the co-founder and CTO of\n", + "Nebula.io\n", + ". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. 
I’m previously the founder and CEO of AI startup untapt,\n", + "acquired in 2021\n", + ".\n", + "We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n", + "patented\n", + "our matching model, and our award-winning platform has happy customers and tons of press coverage.\n", + "Connect\n", + "with me for more!\n", + "November 13, 2024\n", + "Mastering AI and LLM Engineering – Resources\n", + "October 16, 2024\n", + "From Software Engineer to AI Data Scientist – resources\n", + "August 6, 2024\n", + "Outsmart LLM Arena – a battle of diplomacy and deviousness\n", + "June 26, 2024\n", + "Choosing the Right LLM: Toolkit and Resources\n", + "Navigation\n", + "Home\n", + "Outsmart\n", + "An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n", + "About\n", + "Posts\n", + "Get in touch\n", + "ed [at] edwarddonner [dot] com\n", + "www.edwarddonner.com\n", + "Follow me\n", + "LinkedIn\n", + "Twitter\n", + "Facebook\n", + "Subscribe to newsletter\n", + "Type your email…\n", + "Subscribe\n" + ] + } + ], + "source": [ + "# Let's try one out\n", "\n", "ed = Website(\"https://edwarddonner.com\")\n", "print(ed.title)\n", @@ -233,7 +219,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "id": "abdb8417-c5dc-44bc-9bee-2e059d162699", "metadata": {}, "outputs": [], @@ -247,7 +233,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c", "metadata": {}, "outputs": [], @@ -256,23 +242,13 @@ "\n", "def user_prompt_for(website):\n", " user_prompt = f\"You are looking at a website titled {website.title}\"\n", - " user_prompt += \"\\nThe contents of this website is as follows; \\\n", + " user_prompt += \"The contents of this website is as follows; \\\n", "please provide a short summary of this website in markdown. 
\\\n", "If it includes news or announcements, then summarize these too.\\n\\n\"\n", " user_prompt += website.text\n", " return user_prompt" ] }, - { - "cell_type": "code", - "execution_count": null, - "id": "26448ec4-5c00-4204-baec-7df91d11ff2e", - "metadata": {}, - "outputs": [], - "source": [ - "print(user_prompt_for(ed))" - ] - }, { "cell_type": "markdown", "id": "ea211b5f-28e1-4a86-8e52-c0b7677cadcc", @@ -287,48 +263,12 @@ "[\n", " {\"role\": \"system\", \"content\": \"system message goes here\"},\n", " {\"role\": \"user\", \"content\": \"user message goes here\"}\n", - "]\n", - "\n", - "To give you a preview, the next 2 cells make a rather simple call - we won't stretch the might GPT (yet!)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5", - "metadata": {}, - "outputs": [], - "source": [ - "messages = [\n", - " {\"role\": \"system\", \"content\": \"You are a snarky assistant\"},\n", - " {\"role\": \"user\", \"content\": \"What is 2 + 2?\"}\n", "]" ] }, { "cell_type": "code", - "execution_count": null, - "id": "21ed95c5-7001-47de-a36d-1d6673b403ce", - "metadata": {}, - "outputs": [], - "source": [ - "# To give you a preview -- calling OpenAI with system and user messages:\n", - "\n", - "response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n", - "print(response.choices[0].message.content)" - ] - }, - { - "cell_type": "markdown", - "id": "d06e8d78-ce4c-4b05-aa8e-17050c82bb47", - "metadata": {}, - "source": [ - "## And now let's build useful messages for GPT-4o-mini, using a function" - ] - }, - { - "cell_type": "code", - "execution_count": null, + "execution_count": 7, "id": "0134dfa4-8299-48b5-b444-f2a8c3403c88", "metadata": {}, "outputs": [], @@ -342,18 +282,6 @@ " ]" ] }, - { - "cell_type": "code", - "execution_count": null, - "id": "36478464-39ee-485c-9f3f-6a4e458dbc9c", - "metadata": {}, - "outputs": [], - "source": [ - "# Try this out, and then try for a few more websites\n", - "\n", - "messages_for(ed)" - ] - }, { "cell_type": "markdown", "id": "16f49d46-bf55-4c3e-928f-68fc0bf715b0", @@ -364,7 +292,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "id": "905b9919-aba7-45b5-ae65-81b3d1d78e34", "metadata": {}, "outputs": [], @@ -382,17 +310,28 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 11, "id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5", "metadata": {}, - "outputs": [], + "outputs": [ + { + "data": { + "text/plain": [ + "\"# Summary of Edward Donner's Website\\n\\nEdward Donner's website highlights his interests and expertise in programming, experimenting with Large Language Models (LLMs), and electronic music production. He serves as the co-founder and CTO of **Nebula.io**, which focuses on using AI to help individuals discover their potential in the talent acquisition sector. 
The site also notes his previous role as the founder and CEO of **untapt**, an AI startup acquired in 2021.\\n\\n## Recent Posts\\n- **October 16, 2024:** Resources for transitioning from Software Engineer to AI Data Scientist.\\n- **August 6, 2024:** Announcement of the *Outsmart LLM Arena*, a competitive platform for LLMs.\\n- **June 26, 2024:** Guidance on choosing the right LLM with suggested tools and resources.\\n- **February 7, 2024:** Insights on fine-tuning an LLM to simulate personal writing styles.\\n\\nThe website encourages visitors to connect with Ed for further collaboration or discussions.\"" + ] + }, + "execution_count": 11, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ "summarize(\"https://edwarddonner.com\")" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 12, "id": "3d926d59-450e-4609-92ba-2d6f244f1342", "metadata": {}, "outputs": [], @@ -406,12 +345,54 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 15, "id": "3018853a-445f-41ff-9560-d925d1774b2f", "metadata": {}, - "outputs": [], - "source": [ - "display_summary(\"https://edwarddonner.com\")" + "outputs": [ + { + "data": { + "text/markdown": [ + "# İstanbul Nişantaşı Üniversitesi\n", + "\n", + "İstanbul Nişantaşı Üniversitesi, çeşitli akademik alanlarda eğitim veren bir yükseköğretim kurumudur. Temel misyonu, öğrencilere nitelikli eğitim ve araştırma fırsatları sunmaktır. Üniversitede tıp, mühendislik, sanat ve sosyal bilimler gibi birçok fakülte bulunmaktadır.\n", + "\n", + "## Kurumsal Bilgiler\n", + "- **Misyon/Vizyon**: Nişantaşı Eğitim Vakfı tarafından kurulan üniversite, kaliteli eğitim sunmayı hedeflemektedir.\n", + "- **Yönetim**: Rektör, senato üyeleri ve yönetim kurulu hakkında bilgiler mevcuttur.\n", + "- **Kalite Yönetimi**: Kalite ve yönetişim, sürekli eğitim ve bilimsel faaliyetler ile ilgili koordinatörlük birimleri bulunmaktadır.\n", + "\n", + "## Akademik Yapılar\n", + "- **Fakülteler**: Tıp, Diş Hekimliği, Mühendislik ve Mimarlık, İktisadi İdari ve Sosyal Bilimler, Sanat ve Tasarım, Sağlık Bilimleri.\n", + "- **Yüksekokullar ve Meslek Yüksekokulları**: Spor, Sivil Havacılık, Uygulamalı Bilimler ve Konservatuvar gibi farklı alanlarda yüksekokul programları sunulmaktadır.\n", + "- **Araştırma Merkezleri**: Ağız ve Diş Sağlığı, Finans Ekonomi gibi çeşitli araştırma merkezleri bulunmaktadır.\n", + "\n", + "## Öğrenci Kaynakları\n", + "- Öğrenci kulüpleri, spor faaliyetleri, psikolojik danışmanlık ve sağlık birimi gibi destek hizmetleri mevcut.\n", + "- Akademik takvim, ders programları ve sıkça sorulan sorular gibi öğrenci kaynakları sağlanmaktadır.\n", + "\n", + "## Güncel Haberler\n", + "- **Cumhuriyetin 101. Yılı**: İstanbul Nişantaşı Üniversitesi, ilkokul öğrencilerini ağırlamıştır.\n", + "- **Seminerler**: Ekonomik verilerin analizi ve yapay zeka ile öğrencilik eğitimi üzerine seminerler gerçekleştirilmiştir.\n", + "\n", + "## Etkinlikler ve Duyurular\n", + "- **29 Ekim Kutlamaları** ve **İlk Yardım Semineri** gibi çeşitli etkinlikler düzenlenmektedir.\n", + "- 29 Ekim resmi tatili ve öğretim görevlisi değerlendirme gibi duyurular yapılmıştır.\n", + "\n", + "## Başarılar\n", + "Üniversitenin spor takımları ulusal ve uluslararası düzeyde çeşitli başarılar elde etmiştir, örneğin masa tenisi takımı Avrupa Şampiyonu olmuştur.\n", + "\n", + "İstanbul Nişantaşı Üniversitesi, öğrencilere kapsamlı bir akademik ve sosyal deneyim sunmayı amaçlamaktadır." 
+ ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], + "source": [ + "display_summary(\"https://www.nisantasi.edu.tr/\")" ] }, { @@ -455,59 +436,11 @@ "id": "c951be1a-7f1b-448f-af1f-845978e47e2c", "metadata": {}, "source": [ - "\n", - " \n", - " \n", - " \n", - " \n", - "
\n",
-    "## Business applications\n",
-    "\n",
-    "In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
-    "\n",
-    "More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.\n",
-    "\n",
-    "## Before you continue - now try yourself\n",
-    "\n",
-    "Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.\n",
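One possible sketch of that subject-line idea, purely for illustration - it reuses the `openai` client and the `gpt-4o-mini` model already set up in this notebook, and the prompts and sample email are made up:

```python
# A sketch of the email subject-line exercise, reusing the openai client created earlier in this notebook

def suggest_subject(email_body):
    """Ask gpt-4o-mini to propose a short subject line for the given email text."""
    messages = [
        {"role": "system", "content": "You suggest one concise, informative subject line for the email you are given. Reply with the subject line only."},
        {"role": "user", "content": email_body},
    ]
    response = openai.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

print(suggest_subject("Hi team, the quarterly numbers are in and we beat the forecast by 8%. Slides attached; please review before Friday's meeting."))
```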
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "id": "00743dac-0e70-45b7-879a-d7293a6f68a6", - "metadata": {}, - "outputs": [], - "source": [ - "# Step 1: Create your prompts\n", - "\n", - "system_prompt = \"something here\"\n", - "user_prompt = \"\"\"\n", - " Lots of text\n", - " Can be pasted here\n", - "\"\"\"\n", - "\n", - "# Step 2: Make the messages list\n", - "\n", - "messages = [] # fill this in\n", - "\n", - "# Step 3: Call OpenAI\n", - "\n", - "response =\n", + "## Business Applications\n", "\n", - "# Step 4: print the result\n", + "In this exercise, you experienced calling the API of a Frontier Model (a leading model at the frontier of AI) for the first time. This is broadly applicable across Gen AI use cases and we will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n", "\n", - "print(" + "More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution." ] }, { @@ -517,7 +450,7 @@ "source": [ "## An extra exercise for those who enjoy web scraping\n", "\n", - "You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them. In the community-contributions folder, you'll find an example Selenium solution from a student (thank you!)" + "You may notice that if you try `display_summary(\"https://openai.com\")` - it doesn't work! That's because OpenAI has a fancy website that uses Javascript. There are many ways around this that some of you might be familiar with. For example, Selenium is a hugely popular framework that runs a browser behind the scenes, renders the page, and allows you to query it. If you have experience with Selenium, Playwright or similar, then feel free to improve the Website class to use them." 
] }, { @@ -559,7 +492,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.11" + "version": "3.11.4" } }, "nbformat": 4, diff --git a/week1/day5.ipynb b/week1/day5.ipynb index 3cdc54a..c51a99b 100644 --- a/week1/day5.ipynb +++ b/week1/day5.ipynb @@ -7,10 +7,6 @@ "source": [ "# A full business solution\n", "\n", - "## Now we will take our project from Day 1 to the next level\n", - "\n", - "### BUSINESS CHALLENGE:\n", - "\n", "Create a product that builds a Brochure for a company to be used for prospective clients, investors and potential recruits.\n", "\n", "We will be provided a company name and their primary website.\n", @@ -22,13 +18,12 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 1, "id": "d5b08506-dc8b-4443-9201-5f1848161363", "metadata": {}, "outputs": [], "source": [ "# imports\n", - "# If these fail, please check you're running from an 'activated' environment with (llms) in the command prompt\n", "\n", "import os\n", "import requests\n", @@ -42,7 +37,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 2, "id": "fc5d8880-f2ee-4c06-af16-ecbc0262af61", "metadata": {}, "outputs": [], @@ -50,20 +45,14 @@ "# Initialize and constants\n", "\n", "load_dotenv()\n", - "api_key = os.getenv('OPENAI_API_KEY')\n", - "\n", - "if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n", - " print(\"API key looks good so far\")\n", - "else:\n", - " print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n", - " \n", + "os.environ['OPENAI_API_KEY'] = os.getenv('OPENAI_API_KEY', 'your-key-if-not-using-env')\n", "MODEL = 'gpt-4o-mini'\n", "openai = OpenAI()" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "106dd65e-90af-4ca8-86b6-23a41840645b", "metadata": {}, "outputs": [], @@ -71,9 +60,11 @@ "# A class to represent a Webpage\n", "\n", "class Website:\n", - " \"\"\"\n", - " A utility class to represent a Website that we have scraped, now with links\n", - " \"\"\"\n", + " url: str\n", + " title: str\n", + " body: str\n", + " links: List[str]\n", + " text: str\n", "\n", " def __init__(self, url):\n", " self.url = url\n", @@ -96,13 +87,26 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "id": "e30d8128-933b-44cc-81c8-ab4c9d86589a", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Webpage Title:\n", + "الرئيسية\n", + "Webpage Contents:\n", + "\n", + "\n", + "\n" + ] + } + ], "source": [ - "ed = Website(\"https://edwarddonner.com\")\n", - "ed.links" + "ed = Website(\"https://www.scripters.academy/\")\n", + "print(ed.get_contents())" ] }, { @@ -116,14 +120,12 @@ "It should decide which links are relevant, and replace relative links such as \"/about\" with \"https://company.com/about\". \n", "We will use \"one shot prompting\" in which we provide an example of how it should respond in the prompt.\n", "\n", - "This is an excellent use case for an LLM, because it requires nuanced understanding. Imagine trying to code this without LLMs by parsing and analyzing the webpage - it would be very hard!\n", - "\n", "Sidenote: there is a more advanced technique called \"Structured Outputs\" in which we require the model to respond according to a spec. We cover this technique in Week 8 during our autonomous Agentic AI project." 
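For a flavour of what that Structured Outputs idea can look like with the OpenAI Python SDK's Pydantic support - a sketch only, not necessarily how Week 8 implements it; it assumes a recent `openai` package plus `pydantic`, and the beta `parse` helper and model support can vary between SDK versions:

```python
# A sketch of Structured Outputs: the model is constrained to a schema instead of free-form JSON
from pydantic import BaseModel
from openai import OpenAI

class Link(BaseModel):
    type: str
    url: str

class LinkList(BaseModel):
    links: list[Link]

client = OpenAI()
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Select the links that belong in a company brochure."},
        {"role": "user", "content": "Links on https://example.com: /about, /careers, /privacy"},
    ],
    response_format=LinkList,   # responses are parsed and validated against this schema
)
print(completion.choices[0].message.parsed)
```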
] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "id": "6957b079-0d96-45f7-a26a-3487510e9b35", "metadata": {}, "outputs": [], @@ -144,17 +146,7 @@ }, { "cell_type": "code", - "execution_count": null, - "id": "b97e4068-97ed-4120-beae-c42105e4d59a", - "metadata": {}, - "outputs": [], - "source": [ - "print(link_system_prompt)" - ] - }, - { - "cell_type": "code", - "execution_count": null, + "execution_count": 7, "id": "8e1f601b-2eaf-499d-b6b8-c99050c9d6b3", "metadata": {}, "outputs": [], @@ -170,24 +162,34 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "id": "6bcbfa78-6395-4685-b92c-22d592050fd7", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Here is the list of links on the website of https://www.scripters.academy/ - please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. Do not include Terms of Service, Privacy, email links.\n", + "Links (some might be relative links):\n", + "\n" + ] + } + ], "source": [ "print(get_links_user_prompt(ed))" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 9, "id": "a29aca19-ca13-471c-a4b4-5abbfa813f69", "metadata": {}, "outputs": [], "source": [ "def get_links(url):\n", " website = Website(url)\n", - " response = openai.chat.completions.create(\n", + " completion = openai.chat.completions.create(\n", " model=MODEL,\n", " messages=[\n", " {\"role\": \"system\", \"content\": link_system_prompt},\n", @@ -195,31 +197,36 @@ " ],\n", " response_format={\"type\": \"json_object\"}\n", " )\n", - " result = response.choices[0].message.content\n", + " result = completion.choices[0].message.content\n", " return json.loads(result)" ] }, { "cell_type": "code", - "execution_count": null, - "id": "74a827a0-2782-4ae5-b210-4a242a8b4cc2", - "metadata": {}, - "outputs": [], - "source": [ - "# Anthropic has made their site harder to scrape, so I'm using HuggingFace..\n", - "\n", - "huggingface = Website(\"https://huggingface.co\")\n", - "huggingface.links" - ] - }, - { - "cell_type": "code", - "execution_count": null, + "execution_count": 11, "id": "d3d583e2-dcc4-40cc-9b28-1e8dbf402924", "metadata": {}, - "outputs": [], + "outputs": [ + { + "data": { + "text/plain": [ + "{'links': [{'type': 'about page', 'url': 'https://anthropic.com/company'},\n", + " {'type': 'careers page', 'url': 'https://anthropic.com/careers'},\n", + " {'type': 'team page', 'url': 'https://anthropic.com/team'},\n", + " {'type': 'research page', 'url': 'https://anthropic.com/research'},\n", + " {'type': 'enterprise page', 'url': 'https://anthropic.com/enterprise'},\n", + " {'type': 'api page', 'url': 'https://anthropic.com/api'},\n", + " {'type': 'pricing page', 'url': 'https://anthropic.com/pricing'},\n", + " {'type': 'news page', 'url': 'https://anthropic.com/news'}]}" + ] + }, + "execution_count": 11, + "metadata": {}, + "output_type": "execute_result" + } + ], "source": [ - "get_links(\"https://huggingface.co\")" + "get_links(\"https://anthropic.com\")" ] }, { @@ -234,7 +241,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 12, "id": "85a5b6e2-e7ef-44a9-bc7f-59ede71037b5", "metadata": {}, "outputs": [], @@ -252,17 +259,1106 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 13, "id": "5099bd14-076d-4745-baf3-dac08d8e5ab2", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": 
"stream", + "text": [ + "Found links: {'links': [{'type': 'about page', 'url': 'https://anthropic.com/company'}, {'type': 'careers page', 'url': 'https://anthropic.com/careers'}, {'type': 'team page', 'url': 'https://anthropic.com/team'}, {'type': 'research page', 'url': 'https://anthropic.com/research'}, {'type': 'enterprise page', 'url': 'https://anthropic.com/enterprise'}, {'type': 'api page', 'url': 'https://anthropic.com/api'}, {'type': 'pricing page', 'url': 'https://anthropic.com/pricing'}, {'type': 'news page', 'url': 'https://anthropic.com/news'}]}\n", + "Landing page:\n", + "Webpage Title:\n", + "Home \\ Anthropic\n", + "Webpage Contents:\n", + "Claude\n", + "Overview\n", + "Team\n", + "Enterprise\n", + "API\n", + "Pricing\n", + "Research\n", + "Company\n", + "Careers\n", + "News\n", + "AI\n", + "research\n", + "and\n", + "products\n", + "that put safety at the frontier\n", + "Claude.ai\n", + "Meet Claude 3.5 Sonnet\n", + "Claude 3.5 Sonnet, our most intelligent AI model, is now available.\n", + "Talk to Claude\n", + "API\n", + "Build with Claude\n", + "Start using Claude to drive efficiency and create new revenue streams.\n", + "Learn more\n", + "Announcements\n", + "Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku\n", + "Oct 22, 2024\n", + "Model updates\n", + "3.5 Sonnet\n", + "3.5 Haiku\n", + "Our Work\n", + "Product\n", + "Claude for Enterprise\n", + "Sep 4, 2024\n", + "Alignment\n", + "·\n", + "Research\n", + "Constitutional AI: Harmlessness from AI Feedback\n", + "Dec 15, 2022\n", + "Announcements\n", + "Core Views on AI Safety: When, Why, What, and How\n", + "Mar 8, 2023\n", + "Work with Anthropic\n", + "Anthropic is an AI safety and research company based in San Francisco. Our interdisciplinary team has experience across ML, physics, policy, and product. Together, we generate research and create reliable, beneficial AI systems.\n", + "See open roles\n", + "Claude\n", + "API\n", + "Team\n", + "Pricing\n", + "Research\n", + "Company\n", + "Customers\n", + "News\n", + "Careers\n", + "Press Inquiries\n", + "Support\n", + "Status\n", + "Availability\n", + "Twitter\n", + "LinkedIn\n", + "YouTube\n", + "Terms of Service – Consumer\n", + "Terms of Service – Commercial\n", + "Privacy Policy\n", + "Usage Policy\n", + "Responsible Disclosure Policy\n", + "Compliance\n", + "Privacy Choices\n", + "© 2024 Anthropic PBC\n", + "\n", + "\n", + "\n", + "about page\n", + "Webpage Title:\n", + "Company \\ Anthropic\n", + "Webpage Contents:\n", + "Claude\n", + "Overview\n", + "Team\n", + "Enterprise\n", + "API\n", + "Pricing\n", + "Research\n", + "Company\n", + "Careers\n", + "News\n", + "Making AI systems\n", + "you can rely on\n", + "Anthropic is an AI safety and research company. We build reliable, interpretable, and steerable AI systems.\n", + "Join us\n", + "Our Purpose\n", + "We believe AI will have a vast impact on the world. Anthropic is dedicated to building systems that people can rely on and generating research about the opportunities and risks of AI.\n", + "We Build Safer Systems\n", + "We aim to build frontier AI systems that are reliable, interpretable, and steerable. 
We conduct frontier research, develop and apply a variety of safety techniques, and deploy the resulting systems via a set of partnerships and products.\n", + "Safety Is a Science\n", + "We treat AI safety as a systematic science, conducting research, applying it to our products, feeding those insights back into our research, and regularly sharing what we learn with the world along the way.\n", + "Interdisciplinary\n", + "Anthropic is a collaborative team of researchers, engineers, policy experts, business leaders and operators, who bring our experience from many different domains to our work.\n", + "AI Companies are One Piece of a Big Puzzle\n", + "AI has the potential to fundamentally change how the world works. We view ourselves as just one piece of this evolving puzzle. We collaborate with civil society, government, academia, nonprofits and industry to promote safety industry-wide.\n", + "The Team\n", + "We’re a team of researchers, engineers, policy experts and operational leaders, with experience spanning a variety of disciplines, all working together to build reliable and understandable AI systems.\n", + "Research\n", + "We conduct frontier AI research across a variety of modalities, and explore novel and emerging safety research areas from interpretability to RL from human feedback to policy and societal impacts analysis.\n", + "Policy\n", + "We think about the impacts of our work and strive to communicate what we’re seeing at the frontier to policymakers and civil society in the US and abroad to help promote safe and reliable AI.\n", + "Product\n", + "We translate our research into tangible, practical tools like Claude that benefit businesses, nonprofits and civil society groups and their clients and people around the globe.\n", + "Operations\n", + "Our people, finance, legal, and recruiting teams are the human engines that make Anthropic go. We’ve had previous careers at NASA, startups, and the armed forces and our diverse experiences help make Anthropic a great place to work (and we love plants!).\n", + "Our Values\n", + "01\n", + "Here for the mission\n", + "Anthropic exists for our mission: to ensure transformative AI helps people and society flourish. Progress this decade may be rapid, and we expect increasingly capable systems to pose novel challenges. We pursue our mission by building frontier systems, studying their behaviors, working to responsibly deploy them, and regularly sharing our safety insights. We collaborate with other projects and stakeholders seeking a similar outcome.\n", + "02\n", + "Unusually high trust\n", + "Our company is an unusually high trust environment: we assume good faith, disagree kindly, and prioritize honesty. We expect emotional maturity and intellectual openness. At its best, our trust enables us to make better decisions as an organization than any one of us could as individuals.\n", + "03\n", + "One big team\n", + "Collaboration is central to our work, culture, and value proposition. While we have many teams at Anthropic, we feel the broader sense in which we are all on the same team working together towards the mission. Leadership sets the strategy, with broad input from everyone, and trusts each piece of the organization to pursue these goals in their unique style. Individuals commonly contribute to work across many different areas.\n", + "04\n", + "Do the simple thing that works\n", + "We celebrate trying the simple thing before the clever, novel thing. We embrace pragmatism - sensible, practical approaches that acknowledge tradeoffs. 
We love empiricism - finding out what actually works by trying it - and apply this to our research, our engineering and our collaboration. We aim to be open about what we understand and what we don’t.\n", + "Governance\n", + "Anthropic is a Public Benefit Corporation, whose purpose is the responsible development and maintenance of advanced AI for the long-term benefit of humanity. Our Board of Directors is elected by stockholders and our Long-Term Benefit Trust, as explained\n", + "here.\n", + "Current members of the Board and the Long-Term Benefit Trust (LTBT) are listed below.\n", + "Anthropic Board of Directors\n", + "Dario Amodei, Daniela Amodei, Yasmin Razavi, and Jay Kreps.\n", + "LTBT Trustees\n", + "Neil Buddy Shah, Kanika Bahl, and Zach Robinson.\n", + "Company News\n", + "See All\n", + "Announcements\n", + "Introducing the Model Context Protocol\n", + "Nov 25, 2024\n", + "Announcements\n", + "Powering the next generation of AI development with AWS\n", + "Nov 22, 2024\n", + "Announcements\n", + "Claude 3.5 Sonnet on GitHub Copilot\n", + "Oct 29, 2024\n", + "Want to help us build the future of safe AI?\n", + "Join us\n", + "Claude\n", + "API\n", + "Team\n", + "Pricing\n", + "Research\n", + "Company\n", + "Customers\n", + "News\n", + "Careers\n", + "Press Inquiries\n", + "Support\n", + "Status\n", + "Availability\n", + "Twitter\n", + "LinkedIn\n", + "YouTube\n", + "Terms of Service – Consumer\n", + "Terms of Service – Commercial\n", + "Privacy Policy\n", + "Usage Policy\n", + "Responsible Disclosure Policy\n", + "Compliance\n", + "Privacy Choices\n", + "© 2024 Anthropic PBC\n", + "\n", + "\n", + "\n", + "careers page\n", + "Webpage Title:\n", + "Careers \\ Anthropic\n", + "Webpage Contents:\n", + "Claude\n", + "Overview\n", + "Team\n", + "Enterprise\n", + "API\n", + "Pricing\n", + "Research\n", + "Company\n", + "Careers\n", + "News\n", + "Join the team\n", + "making AI safe\n", + "We’re a public benefit corporation headquartered in San Francisco. Our team’s experience spans a variety of backgrounds and disciplines, from physics and machine learning to public policy and business. We work as a cohesive team that collectively forecasts the impact and tractability of research ideas in advancing our mission.\n", + "See open roles\n", + "What We Offer\n", + "Health & Wellness\n", + "At Anthropic, we believe that supporting our employees is crucial to our collective success and wellbeing. 
That's why we offer a range of benefits to best support you and your family, now and in the future.\n", + "Comprehensive health, dental, and vision insurance for you and your dependents\n", + "Inclusive fertility benefits via Carrot Fertility\n", + "22 weeks of paid parental leave\n", + "Flexible paid time off and absence policies\n", + "Generous mental health support for you and your dependents\n", + "Compensation & Support\n", + "Our goal is to foster an environment where you can thrive professionally while feeling confident that you and your loved ones are taken care of.\n", + "Competitive salary and equity packages\n", + "Optional equity donation matching at a 1:1 ratio, up to 25% of your equity grant\n", + "Robust retirement plans and salary sacrifice programs with market competitive matching\n", + "Life and income protection plans\n", + "Additional Benefits\n", + "$500/month flexible wellness and time saver stipend\n", + "Commuter benefits\n", + "Annual education stipend\n", + "Home office stipends\n", + "Relocation support for those moving for Anthropic\n", + "Daily meals and snacks in the office\n", + "How We Hire\n", + "The interview process at Anthropic varies based on role and candidate, but our standard process looks like this:\n", + "Step 1\n", + "Resume\n", + "Submit your resume via our website.\n", + "Step 2\n", + "Exploratory chat\n", + "You’ll have a chat with one of our staff to discuss your career interests and relevant experience, and learn more about Anthropic.\n", + "Step 3\n", + "Skills Assessment\n", + "For technical roles, you’ll have a one-hour technical screening interview.\n", + "For operations or policy roles, you’ll get a take-home assignment. These typically involve writing responses to several role-relevant questions; they may occasionally require some outside research. Assignments usually take between 2-5 hours, depending on the role.\n", + "We include this to minimize bias and make well-informed hiring decisions. We think seeing a candidate’s work helps us assess how they might actually perform on the job; similarly, the assignment gives candidates a better idea of what their work at Anthropic might entail. If a candidate likes working through their take-home, that is one indicator that they would enjoy taking on the role, and vice versa.\n", + "We recognize that completing work assignments requires time and effort, and that they are not perfectly reflective of the role’s work. Nonetheless, we think that work tests are a useful complement to interviews and reference checks.\n", + "Step 4\n", + "Team Screen\n", + "You'll have a conversation with either the Hiring Manager or a member of your potential team.\n", + "Step 5\n", + "Interview Panel\n", + "For technical roles, you’ll have 3-4 more one-hour technical interviews, plus a culture interview.\n", + "For operations or policy roles, you’ll have 3-5 hours of interviews, including a culture interview.\n", + "Step 6\n", + "Final Checks\n", + "We’ll ask for some references, and have you chat with our leadership.\n", + "Step 7\n", + "Offer\n", + "We’ll make you an offer!\n", + "Technical Interviews\n", + "The novel challenges we think about at Anthropic demand diverse expertise and perspectives. Our interview process is designed to identify thoughtful candidates who bring unique strengths to our multidisciplinary team. 
If you think this may describe you, we’d love to hear from you regardless of your background or experience.\n", + "One of the most common questions we get is about whether it is worth applying to work at Anthropic if you have not worked on modern machine learning systems in the past. Yes! For some roles, ML experience is expected, but many technical staff have arrived at Anthropic with no machine learning experience. If you aren’t sure about the ML experience needed for your role, ask your recruiter.\n", + "We use shared environments like Colab and Replit for our programming-focused interviews. We’ll be very interested in how you think through each problem and analyze the tradeoffs between possible approaches, and we’ll also expect you to write, run, and debug your solutions. You’ll be allowed to look things up in documentation or on the web, just like you usually can (which is why we’ll ask you to share your screen throughout each interview); but it’s still important to be familiar with basic syntax, standard libraries, and common idioms in the language you’re interviewing in, so that looking things up doesn’t consume too much time. Your interview process will also include non-technical questions about your experience and what motivates you, and, of course, you’ll have time to ask us about Anthropic! We can’t wait to meet you.\n", + "Other Things\n", + "Engineers here do lots of research, and researchers do lots of engineering\n", + "While there’s historically been a division between engineering and research in machine learning, we think that boundary has dissolved with the advent of large models. The distribution of candidates we interview is strongly bimodal in both engineering and research experience however, and we have necessarily tailored our interview structure to that.\n", + "If you’ve an engineering background, please apply as an engineer. You’ll perform much better in the interviews, and if you join you’ll have as much input to Anthropic’s direction and interests as anyone else.\n", + "As evidence towards this: all of our papers have engineers as authors, and often as first author. Research and engineering hires all share a single title - ‘Member of Technical Staff’.\n", + "We value direct evidence of ability\n", + "If you’ve done interesting independent research, written an insightful blog post, or made substantial contributions to open-source software, put that at the top of your resume!\n", + "Feedback\n", + "We do not provide feedback on resumes or interviews.\n", + "Visas\n", + "Anthropic sponsors visas! We aren't able to sponsor them for every role and every candidate; operations roles are especially difficult to support. But if we make you an offer, we will make every effort to get you into the United States, and we retain an immigration lawyer to help with this.\n", + "Green cards\n", + "Once you’re eligible, we’re also keen to sponsor green cards!\n", + "Educational backgrounds and experience vary across our team and across our roles.\n", + "We do not require PhDs, degrees, or previous ML experience — About half of Anthropic technical staff have a PhD of some sort; about half had prior experience in ML. We have several brilliant colleagues who never went to college.\n", + "Remote interviewing\n", + "All our interviews are conducted over Google Meet. 
We prefer PST office hours, but we can be flexible if that’s difficult for you.\n", + "Re-applying\n", + "Similarly, if interviews don’t work out this time, you’re welcome to re-apply after 12 months, and earlier if something materially changes about your experience or skills.\n", + "Remote work\n", + "Anthropic staff all come to the office regularly. Most staff live in the Bay Area, though a few live further away and come in for one week a month. We also understand that moving can take time, so as a transitional phase some folks start while fully remote.\n", + "Offer timing\n", + "If we make an offer, we’re happy to give you time to think about it and finish up any other interview processes you’re going through.\n", + "Internships\n", + "We do not offer internships.\n", + "Candidate Privacy Policy\n", + "US Candidate Privacy Policy\n", + "UK Employee and Candidate Privacy Policy\n", + "Claude\n", + "API\n", + "Team\n", + "Pricing\n", + "Research\n", + "Company\n", + "Customers\n", + "News\n", + "Careers\n", + "Press Inquiries\n", + "Support\n", + "Status\n", + "Availability\n", + "Twitter\n", + "LinkedIn\n", + "YouTube\n", + "Terms of Service – Consumer\n", + "Terms of Service – Commercial\n", + "Privacy Policy\n", + "Usage Policy\n", + "Responsible Disclosure Policy\n", + "Compliance\n", + "Privacy Choices\n", + "© 2024 Anthropic PBC\n", + "\n", + "\n", + "\n", + "team page\n", + "Webpage Title:\n", + "Team up with Claude \\ Anthropic\n", + "Webpage Contents:\n", + "Claude\n", + "Overview\n", + "Team\n", + "Enterprise\n", + "API\n", + "Pricing\n", + "Research\n", + "Company\n", + "Careers\n", + "News\n", + "Try Claude\n", + "Team up with Claude\n", + "Shorten the path from idea to impact with an AI assistant that taps into your team’s shared expertise.\n", + "Get started\n", + "Request demo\n", + "Easy collaboration for better outcomes\n", + "Claude doesn’t just speed up daily tasks like writing emails or docs. 
It’s a virtual teammate that moves work forward using your team’s knowledge.\n", + "Create with Claude\n", + "Claude can be a sounding board for your ideas, help you generate new ones, and pull insights from data in a snap.\n", + "Prime the canvas\n", + "Use Projects to ground Claude in specific knowledge that helps you produce higher-quality work with less effort.\n", + "Spark inspiration\n", + "Share your best chats with Claude across the team to spark creativity and improve your project deliverables.\n", + "Transform how you work\n", + "Claude makes work more productive—whether you need a partner for deep work, a creative collaborator, or an assistant for daily tasks.\n", + "Create with Claude\n", + "Draft and iterate on documents, code and websites, and images alongside your chat with Artifacts.\n", + "Write and debug code\n", + "Create marketing campaigns\n", + "Draft job descriptions\n", + "Build interactive visualizations\n", + "Transform how your team works\n", + "Claude can serve as your go-to expert, empowering each team member with shared knowledge from all across the organization.\n", + "Prime the canvas\n", + "Create Projects and add knowledge so each person on the team can deliver expert-level results.\n", + "Find and summarize information faster\n", + "Use Claude as your subject-matter expert\n", + "Expand how each teammate can contribute\n", + "Spark inspiration\n", + "Share your best chats with everyone on the Project to spark better ideas, iterate on Artifacts, and move work forward.\n", + "Brainstorm on new product ideas\n", + "Discuss insights from user interviews\n", + "Collaborate on hard research questions\n", + "Every team can work with Claude\n", + "Engineering\n", + "Generate code snippets in seconds\n", + "Create clear, comprehensive docs with no effort\n", + "Get help debugging even the most complex issues\n", + "Turn product feedback into roadmap items faster\n", + "Support\n", + "Resolve customer issues in record time\n", + "Craft personalized responses effortlessly\n", + "Build a dynamic, user-friendly knowledge base\n", + "Generate insightful metrics reports instantly\n", + "Marketing\n", + "Create engaging content tailored to your audience\n", + "Segment customers with pinpoint accuracy\n", + "Analyze competitors with unparalleled depth\n", + "Optimize campaigns for maximum ROI\n", + "Sales\n", + "Customize pitches for any customer segment\n", + "Uncover hidden sales trends effortlessly\n", + "Draft compelling follow-up emails in seconds\n", + "Get comprehensive competitor insights on demand\n", + "By leveraging content from our help center in Projects, we were able to generate comprehensive standard operating procedures for our core workflows in just a few hours—a task that previously took our team weeks to complete.\n", + "Bradley Silicani\n", + "COO, Anrok\n", + "Claude Team is transforming our way of working at North Highland. Claude is a truly exceptional writer that has helped our team complete content creation and analysis tasks up to 5x faster than before—turning what was once two weeks of writing and research into minutes of work.\n", + "Luka Anic\n", + "Senior Director, Technical AI Program and Product Manager, North Highland\n", + "Generating content, completing creative tasks, and creating summarized reports is much easier than before. 
There are many other areas of our business—like engineering, legal, risk and compliance—where we're excited to see what Claude can do.\n", + "Olga Pirog\n", + "Head of AI Transformation, IG Group\n", + "Join the teams transforming with Claude\n", + "See Pricing\n", + "Claude\n", + "API\n", + "Team\n", + "Pricing\n", + "Research\n", + "Company\n", + "Customers\n", + "News\n", + "Careers\n", + "Press Inquiries\n", + "Support\n", + "Status\n", + "Availability\n", + "Twitter\n", + "LinkedIn\n", + "YouTube\n", + "Terms of Service – Consumer\n", + "Terms of Service – Commercial\n", + "Privacy Policy\n", + "Usage Policy\n", + "Responsible Disclosure Policy\n", + "Compliance\n", + "Privacy Choices\n", + "© 2024 Anthropic PBC\n", + "\n", + "\n", + "\n", + "research page\n", + "Webpage Title:\n", + "Research \\ Anthropic\n", + "Webpage Contents:\n", + "Claude\n", + "Overview\n", + "Team\n", + "Enterprise\n", + "API\n", + "Pricing\n", + "Research\n", + "Company\n", + "Careers\n", + "News\n", + "Researching\n", + "at the frontier\n", + "At Anthropic, we develop large-scale AI systems, and our research teams help us to create safer, steerable, and more reliable models.\n", + "See open roles\n", + "Claude\n", + "API\n", + "Team\n", + "Pricing\n", + "Research\n", + "Company\n", + "Customers\n", + "News\n", + "Careers\n", + "Press Inquiries\n", + "Support\n", + "Status\n", + "Availability\n", + "Twitter\n", + "LinkedIn\n", + "YouTube\n", + "Terms of Service – Consumer\n", + "Terms of Service – Commercial\n", + "Privacy Policy\n", + "Usage Policy\n", + "Responsible Disclosure Policy\n", + "Compliance\n", + "Privacy Choices\n", + "© 2024 Anthropic PBC\n", + "\n", + "\n", + "\n", + "enterprise page\n", + "Webpage Title:\n", + "Enterprise \\ Anthropic\n", + "Webpage Contents:\n", + "Claude\n", + "Overview\n", + "Team\n", + "Enterprise\n", + "API\n", + "Pricing\n", + "Research\n", + "Company\n", + "Careers\n", + "News\n", + "Claude for\n", + " Enterprise\n", + "Securely connect Claude to your company knowledge and empower every team with trusted AI.\n", + "Contact sales\n", + "Empower your entire organization with AI\n", + "Enable every team to spark new ideas, achieve more, and collaborate better.\n", + "Use company knowledge\n", + "Scale internal expertise and knowledge across projects and teams.\n", + "Create and share work\n", + "Produce high-impact output more efficiently with Claude.\n", + "Secure your data\n", + "Protect your sensitive data. Anthropic does not train our models on your Claude for Work data.\n", + "Use company knowledge\n", + "Bring internal knowledge to scale institutional expertise, collaboration and decision-making across your enterprise with Claude as your subject matter expert.\n", + "Intelligence at scale\n", + "Take action with Projects.\n", + "Upload relevant documents, text, code, and files to dedicated knowledge bases for Claude to use as context and background in your chats–enabling everyone to operate like an expert. Claude can reference large amounts of information for every task, including the equivalent of:\n", + "Up to 100 30-minute sales transcripts\n", + "Up to 15 full financial reports\n", + "Up to 100K lines of code\n", + "Integrate with key data sources\n", + "Sync key data sources as context for Claude. 
Our GitHub integration, now in beta, enables Claude to learn about your codebase to help brainstorm new features, start refactoring projects and onboard new engineers.\n", + "Create and share work\n", + "Claude helps employees learn new skills, speed up tasks and tackle hard projects to boost productivity and extend your organization’s expertise.\n", + "Create with Claude\n", + "Bring your ideas and projects to life with Artifacts\n", + "— dynamic, creative and collaborative work spaces to see and build upon Claude’s creations in real-time. Draft and iterate on documents, code, websites, and images alongside your chat.\n", + "Intricate code structures\n", + "Comprehensive product roadmaps\n", + "In-depth research reports\n", + "Interactive campaign content calendars\n", + "Share and collaborate\n", + "Share your best chats and Projects with teammates to spark ideas, make joint decisions and create purposeful outputs.\n", + "Analyze user and market insights\n", + "Brainstorm and execute on product ideas\n", + "Create shared documentation and processes\n", + "Facilitate meeting preparation and project tracking\n", + "Secure your data\n", + "Your data is protected with Claude. Manage access with enterprise-grade control—and rest assured that we do not train our models on your Claude for Work data.\n", + "Protected company data\n", + "By default, we will not use your Claude for Work data to train our models.\n", + "Single sign-on (SSO) and domain capture\n", + "Secure user access and centralized provisioning control.\n", + "Role-based access with fine-grained permissioning\n", + "Single primary owner of a workspace for security and information management.\n", + "System for Cross-domain Identity Management (SCIM)\n", + "Automate user provisioning and access controls.\n", + "Audit logs\n", + "Trace system activities for security and compliance monitoring.\n", + "Critical cross-functional work starts with Claude\n", + "Engineering\n", + "Marketing\n", + "Sales\n", + "Product management\n", + "Human resources\n", + "Legal\n", + "Engineering\n", + "Marketing\n", + "Sales\n", + "Product management\n", + "Human resources\n", + "Legal\n", + "Engineering\n", + "Convert project requirements into technical specifications\n", + "Design system architecture and component interactions\n", + "Troubleshoot errors and runtime issues\n", + "Identify code optimizations and performance improvements\n", + "Marketing\n", + "Interpret market trends and consumer behavior patterns\n", + "Brainstorm multi-platform content items\n", + "Develop marketing campaign strategies\n", + "Create post campaign performance reports\n", + "Sales\n", + "Analyze sales calls to craft tailored account plans\n", + "Develop objection handling strategies\n", + "Build compelling and tailored pitches\n", + "Interpret sales metrics and KPIS\n", + "Product management\n", + "Define product vision and objectives\n", + "Analyze user feedback and usage data\n", + "Create product specifications and requirements documents\n", + "Interpret product usage metrics and KPIs\n", + "Human resources\n", + "Craft job descriptions and postings\n", + "Create training modules and documentation\n", + "Create employee development plans\n", + "Interpret employee engagement results\n", + "Legal\n", + "Summarize complex contracts and agreements\n", + "Assist in drafting legal documents and templates\n", + "Monitor regulatory changes across different jurisdictions\n", + "Automate routine legal tasks and processes\n", + "We're a global FinTech business with omnichannel 
touchpoints in marketing and communications. Our global growth requires our marketing resources to expand in capacity and language capability. Claude's excellent writing and transcreation capabilities have been a big enabler for us to scale globally and achieve higher ROI.\n", + "Olga Pirog\n", + "Global Head of Data and AI transformation at IG Group\n", + "Claude offers our team members a tool that feels like an extension of their work and expertise, allowing us to take on more complex tasks and deliver greater impact while ensuring GitLab’s IP remains private and protected.\n", + "Taylor McCaslin\n", + "Product lead for AI and ML tech at GitLab\n", + "Read the full story\n", + "Deloitte is leading the way in the trustworthy use of Generative AI within enterprises. Our exploration of Claude for Work will help us reveal how this transformative technology can empower our workforce\n", + "Gina Schaefer\n", + "AI Managing Director and Alliance Leader at Deloitte Consulting LLP\n", + "Piloting Claude has revolutionized our workflows, becoming our most requested tool. It's dramatically accelerated content creation and data analysis. In months, we've unlocked thousands of hours for high-impact initiatives previously out of reach—propelling us into a new era of innovation and continuous learning.\n", + "Luka Anic\n", + "Senior Director, Technical AI Program and Product Manager at North Highland\n", + "Claude has been an incredible virtual collaborator for Midjourney. We use Claude for everything from summarizing research papers, to doing Q&A with user feedback notes, to iterating on our moderation policies. We're excited to keep working alongside Claude as we grow and explore new domains.\n", + "Caleb Kruse\n", + "Chief of Staff at Midjourney\n", + "With Claude, we can condense data down to make sure we’re not missing anything. It gives our teams a high-level view while still allowing us to link directly to specific feedback sources. This makes our work more strategic and enables our teams to create higher impact work.\n", + "Justin Dorfman\n", + "Open Source Community Manager at Sourcegraph\n", + "Read the full story\n", + "Launching our $100M Anthology Fund with Anthropic, we received thousands of AI startup applications. Claude enabled a streamlined evaluation process, reducing time spent matching applications to partners and allowing more effective engagement with founders.\n", + "Tim Tully\n", + "Partner at Menlo Ventures\n", + "Transform how your organization operates with Claude\n", + "Contact sales\n", + "Frequently asked questions\n", + "What is the Claude Enterprise plan?\n", + "Claude is a trusted, secure, and collaborative AI expert that integrates with organizational knowledge and workflows to support high-quality work. Claude enhances productivity and creativity across various business functions within an organization. 
The Claude Enterprise plan is designed for organizations that require large knowledge uploads, enhanced security and user management, and an AI solution that scales across cross-functional teams in support of deep work.\n", + "What is included in the Claude Enterprise plan?\n", + "The Claude Enterprise plan supports deep, cross-functional workflows and includes everything in the Claude Team plan in addition to the following new features:\n", + "Enterprise-grade security features to ensure the safety and compliance of your organization’s data including single-sign on (SSO) & domain capture, audit logs, System for Cross-domain Identity Management (SCIM), and role-based permissioning for fine-grained user management.\n", + "Expanded context window that enables users to upload hundreds of sales transcripts, dozens of 100+ page documents and 100K lines of code.\n", + "Increased usage, which means more messages with Claude.\n", + "Native integrations with data sources like GitHub provide the ability for engineering teams to brainstorm alongside your codebase, iterate on new features, onboard engineers and debug issues.\n", + "What security is in place for the Claude Enterprise plan?\n", + "By default, we will not use your Inputs or Outputs to train our models. To find out more, or if you would like to know how to contact us regarding a privacy related topic, see our\n", + "Trust Center\n", + ".\n", + "The Claude Enterprise plan offers critical security and data management components including single sign-on (SSO) and domain capture for secure user access and centralized provisioning control; Audit logs that trace system activities for security and compliance monitoring; System for Cross-domain Identity Management (SCIM) to automate user provisioning and access controls; Role-based permissioning that assigns a single primary owner of a workspace for security and information management.\n", + "What is Claude for Work?\n", + "Claude for Work is a comprehensive solution for organizations to securely use Claude for business purposes. Within Claude for Work, organizations can choose between our Team plan and Enterprise plan, which offer a spectrum of features and capacity based on your usage and security needs.\n", + "How can I integrate Claude into my own products or services?\n", + "If you’re a developer looking to create user-facing experiences and new products with Claude, the Anthropic API is right for you. To learn more about different API plans, contact our sales team\n", + "here\n", + ". 
To get started, explore our developer docs\n", + "here\n", + ".\n", + "Claude\n", + "API\n", + "Team\n", + "Pricing\n", + "Research\n", + "Company\n", + "Customers\n", + "News\n", + "Careers\n", + "Press Inquiries\n", + "Support\n", + "Status\n", + "Availability\n", + "Twitter\n", + "LinkedIn\n", + "YouTube\n", + "Terms of Service – Consumer\n", + "Terms of Service – Commercial\n", + "Privacy Policy\n", + "Usage Policy\n", + "Responsible Disclosure Policy\n", + "Compliance\n", + "Privacy Choices\n", + "© 2024 Anthropic PBC\n", + "\n", + "\n", + "\n", + "api page\n", + "Webpage Title:\n", + "Build with Claude \\ Anthropic\n", + "Webpage Contents:\n", + "Claude\n", + "Overview\n", + "Team\n", + "Enterprise\n", + "API\n", + "Pricing\n", + "Research\n", + "Company\n", + "Careers\n", + "News\n", + "Build with Claude\n", + "Create user-facing experiences, new products, and new ways to work with the most advanced AI models on the market.\n", + "Start building\n", + "Developer docs\n", + "Get started\n", + "Self-serve\n", + "Launch your own generative AI solution with:\n", + "Access to all Claude models\n", + "Usage-based tiers\n", + "Automatically increasing rate limits\n", + "Simple pay-as-you-go pricing\n", + "Self-serve deployment on workbench\n", + "Prompting guides & developer documentation\n", + "Start building\n", + "Need additional support?\n", + "Need custom rate limits or hands-on support? Reach out to the Anthropic sales team for:\n", + "Anthropic-supported onboarding\n", + "Custom rate limits\n", + "Billing via monthly invoices\n", + "Prompting support\n", + "Deployment support\n", + "Contact sales\n", + "Announcements\n", + "Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku\n", + "Oct 22, 2024\n", + "Model updates\n", + "3.5 Sonnet\n", + "3.5 Haiku\n", + "The Claude model family\n", + "Right-sized for any task, the Claude family of models offers the best combination of speed and performance.\n", + "Light & fast\n", + "Haiku\n", + "Our fastest model that can execute lightweight actions, with industry-leading speed.\n", + "Hard-working\n", + "Sonnet\n", + "Our best combination of performance and speed for efficient, high-throughput tasks.\n", + "Powerful\n", + "Opus\n", + "Our highest-performing model, which can handle complex analysis, longer tasks with many steps, and higher-order math and coding tasks.\n", + "Cost\n", + "Intelligence\n", + "Use cases for Claude\n", + "Coding\n", + "Claude models are constantly improving on coding, math, and reasoning. 
Our latest model, Claude 3.5 Sonnet, can be instructed to write, edit, and run code with strong troubleshooting capabilities.\n", + "Productivity\n", + "Claude can extract relevant information from business emails and documents, categorize and summarize survey responses, and wrangle reams of text with high speed and accuracy.\n", + "Customer support\n", + "Claude can handle ticket triage, on-demand complex inquiries using rich context awareness, and multi-step support workflows—all with a casual tone and conversational responses.\n", + "Leading companies build with Claude\n", + "Read customer stories\n", + "Start building with the Anthropic API\n", + "See pricing\n", + "Claude\n", + "API\n", + "Team\n", + "Pricing\n", + "Research\n", + "Company\n", + "Customers\n", + "News\n", + "Careers\n", + "Press Inquiries\n", + "Support\n", + "Status\n", + "Availability\n", + "Twitter\n", + "LinkedIn\n", + "YouTube\n", + "Terms of Service – Consumer\n", + "Terms of Service – Commercial\n", + "Privacy Policy\n", + "Usage Policy\n", + "Responsible Disclosure Policy\n", + "Compliance\n", + "Privacy Choices\n", + "© 2024 Anthropic PBC\n", + "\n", + "\n", + "\n", + "pricing page\n", + "Webpage Title:\n", + "Pricing \\ Anthropic\n", + "Webpage Contents:\n", + "Claude\n", + "Overview\n", + "Team\n", + "Enterprise\n", + "API\n", + "Pricing\n", + "Research\n", + "Company\n", + "Careers\n", + "News\n", + "Pricing\n", + "Claude\n", + "API\n", + "Team\n", + "Pricing\n", + "Research\n", + "Company\n", + "Customers\n", + "News\n", + "Careers\n", + "Press Inquiries\n", + "Support\n", + "Status\n", + "Availability\n", + "Twitter\n", + "LinkedIn\n", + "YouTube\n", + "Terms of Service – Consumer\n", + "Terms of Service – Commercial\n", + "Privacy Policy\n", + "Usage Policy\n", + "Responsible Disclosure Policy\n", + "Compliance\n", + "Privacy Choices\n", + "© 2024 Anthropic PBC\n", + "\n", + "\n", + "\n", + "news page\n", + "Webpage Title:\n", + "Newsroom \\ Anthropic\n", + "Webpage Contents:\n", + "Claude\n", + "Overview\n", + "Team\n", + "Enterprise\n", + "API\n", + "Pricing\n", + "Research\n", + "Company\n", + "Careers\n", + "News\n", + "Newsroom\n", + "Featured\n", + "Powering the next generation of AI development with AWS\n", + "Press inquiries\n", + "press@anthropic.com\n", + "Non-media inquiries\n", + "support.anthropic.com\n", + "Media assets\n", + "Download press kit\n", + "Follow Anthropic\n", + "Featured\n", + "Announcing our updated Responsible Scaling Policy\n", + "Featured\n", + "Developing a computer use model\n", + "News\n", + "No results found.\n", + "Product\n", + "Claude 3.5 Haiku on AWS Trainium2 and model distillation in Amazon Bedrock\n", + "Dec 3, 2024\n", + "Product\n", + "Tailor Claude’s responses to your personal style\n", + "Nov 26, 2024\n", + "Announcements\n", + "Introducing the Model Context Protocol\n", + "Nov 25, 2024\n", + "Announcements\n", + "Powering the next generation of AI development with AWS\n", + "Nov 22, 2024\n", + "Product\n", + "Improve your prompts in the developer console\n", + "Nov 14, 2024\n", + "Policy\n", + "The case for targeted regulation\n", + "Oct 31, 2024\n", + "Product\n", + "Raising the bar on SWE-bench Verified with Claude 3.5 Sonnet\n", + "Oct 30, 2024\n", + "Announcements\n", + "Claude 3.5 Sonnet on GitHub Copilot\n", + "Oct 29, 2024\n", + "Product\n", + "Introducing the analysis tool in Claude.ai\n", + "Oct 24, 2024\n", + "Announcements\n", + "·\n", + "Product\n", + "Developing a computer use model\n", + "Oct 22, 2024\n", + "Announcements\n", + "Introducing 
computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku\n", + "Oct 22, 2024\n", + "Announcements\n", + "Announcing our updated Responsible Scaling Policy\n", + "Oct 15, 2024\n", + "Societal Impacts\n", + "U.S. Elections Readiness\n", + "Oct 8, 2024\n", + "Product\n", + "Introducing the Message Batches API\n", + "Oct 8, 2024\n", + "Product\n", + "·\n", + "Announcements\n", + "Introducing Contextual Retrieval\n", + "Sep 19, 2024\n", + "Product\n", + "Workspaces in the Anthropic API Console\n", + "Sep 10, 2024\n", + "Product\n", + "Claude for Enterprise\n", + "Sep 4, 2024\n", + "Announcements\n", + "Salesforce teams up with Anthropic to enhance Einstein capabilities with Claude\n", + "Sep 3, 2024\n", + "Announcements\n", + "Artifacts are now generally available\n", + "Aug 27, 2024\n", + "Product\n", + "Prompt caching with Claude\n", + "Aug 14, 2024\n", + "Announcements\n", + "Expanding our model safety bug bounty program\n", + "Aug 8, 2024\n", + "Announcements\n", + "Claude is now available in Brazil\n", + "Aug 1, 2024\n", + "Announcements\n", + "Anthropic partners with Menlo Ventures to launch Anthology Fund\n", + "Jul 17, 2024\n", + "Product\n", + "Claude Android app\n", + "Jul 16, 2024\n", + "Product\n", + "Fine-tune Claude 3 Haiku in Amazon Bedrock\n", + "Jul 11, 2024\n", + "Product\n", + "Evaluate prompts in the developer console\n", + "Jul 9, 2024\n", + "Announcements\n", + "A new initiative for developing third-party model evaluations\n", + "Jul 1, 2024\n", + "Announcements\n", + "Expanding access to Claude for government\n", + "Jun 26, 2024\n", + "Product\n", + "Collaborate with Claude on Projects\n", + "Jun 25, 2024\n", + "Announcements\n", + "Claude 3.5 Sonnet\n", + "Jun 21, 2024\n", + "Policy\n", + "Challenges in red teaming AI systems\n", + "Jun 12, 2024\n", + "Policy\n", + "·\n", + "Societal Impacts\n", + "Testing and mitigating elections-related risks\n", + "Jun 6, 2024\n", + "Announcements\n", + "Introducing Claude to Canada\n", + "Jun 5, 2024\n", + "Product\n", + "Claude can now use tools\n", + "May 30, 2024\n", + "Announcements\n", + "Jay Kreps appointed to Anthropic's Board of Directors\n", + "May 29, 2024\n", + "Product\n", + "Golden Gate Claude\n", + "May 23, 2024\n", + "Announcements\n", + "Krishna Rao joins Anthropic as Chief Financial Officer\n", + "May 21, 2024\n", + "Interpretability\n", + "Mapping the Mind of a Large Language Model\n", + "May 21, 2024\n", + "Product\n", + "Generate better prompts in the developer console\n", + "May 20, 2024\n", + "Policy\n", + "Reflections on our Responsible Scaling Policy\n", + "May 20, 2024\n", + "Announcements\n", + "Mike Krieger joins Anthropic as Chief Product Officer\n", + "May 15, 2024\n", + "Announcements\n", + "Claude is now available in Europe\n", + "May 14, 2024\n", + "Announcements\n", + "Updating our Usage Policy\n", + "May 10, 2024\n", + "Product\n", + "·\n", + "Announcements\n", + "Introducing the Claude Team plan and iOS app\n", + "May 1, 2024\n", + "Announcements\n", + "Aligning on child safety principles\n", + "Apr 23, 2024\n", + "Alignment\n", + "Many-shot jailbreaking\n", + "Apr 2, 2024\n", + "Policy\n", + "Third-party testing as a key ingredient of AI policy\n", + "Mar 25, 2024\n", + "Announcements\n", + "Anthropic, AWS, and Accenture team up to build trusted solutions for enterprises\n", + "Mar 20, 2024\n", + "Announcements\n", + "Claude 3 models on Vertex AI\n", + "Mar 19, 2024\n", + "Announcements\n", + "Claude 3 Haiku: our fastest model yet\n", + "Mar 13, 2024\n", + "Announcements\n", + "Introducing the 
next generation of Claude\n", + "Mar 4, 2024\n", + "Product\n", + "Prompt engineering for business performance\n", + "Feb 29, 2024\n", + "Policy\n", + "Preparing for global elections in 2024\n", + "Feb 16, 2024\n", + "Announcements\n", + "Expanded legal protections and improvements to our API\n", + "Dec 19, 2023\n", + "Product\n", + "Long context prompting for Claude 2.1\n", + "Dec 6, 2023\n", + "Product\n", + "Introducing Claude 2.1\n", + "Nov 21, 2023\n", + "Policy\n", + "Thoughts on the US Executive Order, G7 Code of Conduct, and Bletchley Park Summit\n", + "Nov 5, 2023\n", + "Policy\n", + "Dario Amodei’s prepared remarks from the AI Safety Summit on Anthropic’s Responsible Scaling Policy\n", + "Nov 1, 2023\n", + "Announcements\n", + "Claude on Amazon Bedrock now available to every AWS customer\n", + "Sep 28, 2023\n", + "Announcements\n", + "Expanding access to safer AI with Amazon\n", + "Sep 25, 2023\n", + "Product\n", + "Prompt engineering for Claude's long context window\n", + "Sep 23, 2023\n", + "Announcements\n", + "Anthropic's Responsible Scaling Policy\n", + "Sep 19, 2023\n", + "Announcements\n", + "The Long-Term Benefit Trust\n", + "Sep 19, 2023\n", + "Announcements\n", + "Anthropic partners with BCG\n", + "Sep 14, 2023\n", + "Announcements\n", + "Introducing Claude Pro\n", + "Sep 7, 2023\n", + "Product\n", + "Claude 2 on Amazon Bedrock\n", + "Aug 23, 2023\n", + "Announcements\n", + "SKT Partnership Announcement\n", + "Aug 15, 2023\n", + "Announcements\n", + "Releasing Claude Instant 1.2\n", + "Aug 9, 2023\n", + "Announcements\n", + "Frontier Threats Red Teaming for AI Safety\n", + "Jul 26, 2023\n", + "Announcements\n", + "Frontier Model Security\n", + "Jul 25, 2023\n", + "Announcements\n", + "Claude 2\n", + "Jul 11, 2023\n", + "Announcements\n", + "Charting a Path to AI Accountability\n", + "Jun 13, 2023\n", + "Announcements\n", + "Anthropic Raises $450 Million in Series C Funding to Scale Reliable AI Products\n", + "May 23, 2023\n", + "Announcements\n", + "Zoom Partnership and Investment in Anthropic\n", + "May 16, 2023\n", + "Announcements\n", + "Introducing 100K Context Windows\n", + "May 11, 2023\n", + "Announcements\n", + "Claude’s Constitution\n", + "May 9, 2023\n", + "Announcements\n", + "Partnering with Scale to Bring Generative AI to Enterprises\n", + "Apr 26, 2023\n", + "Announcements\n", + "An AI Policy Tool for Today: Ambitiously Invest in NIST\n", + "Apr 20, 2023\n", + "Announcements\n", + "Claude, now in Slack\n", + "Mar 30, 2023\n", + "Announcements\n", + "Introducing Claude\n", + "Mar 14, 2023\n", + "Announcements\n", + "Core Views on AI Safety: When, Why, What, and How\n", + "Mar 8, 2023\n", + "Announcements\n", + "Anthropic Partners with Google Cloud\n", + "Feb 3, 2023\n", + "Announcements\n", + "Anthropic Raises Series B to Build Steerable, Interpretable, Robust AI Systems\n", + "Apr 29, 2022\n", + "Announcements\n", + "Anthropic raises $124 million to build more reliable, general AI systems\n", + "May 28, 2021\n", + "Claude\n", + "API\n", + "Team\n", + "Pricing\n", + "Research\n", + "Company\n", + "Customers\n", + "News\n", + "Careers\n", + "Press Inquiries\n", + "Support\n", + "Status\n", + "Availability\n", + "Twitter\n", + "LinkedIn\n", + "YouTube\n", + "Terms of Service – Consumer\n", + "Terms of Service – Commercial\n", + "Privacy Policy\n", + "Usage Policy\n", + "Responsible Disclosure Policy\n", + "Compliance\n", + "Privacy Choices\n", + "© 2024 Anthropic PBC\n", + "\n", + "\n" + ] + } + ], "source": [ - 
"print(get_all_details(\"https://huggingface.co\"))" + "print(get_all_details(\"https://anthropic.com\"))" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 14, "id": "9b863a55-f86c-4e3f-8a79-94e24c1a8cf2", "metadata": {}, "outputs": [], @@ -280,7 +1376,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 15, "id": "6ab83d92-d36b-4ce0-8bcc-5bb4c2f8ff23", "metadata": {}, "outputs": [], @@ -289,23 +1385,13 @@ " user_prompt = f\"You are looking at a company called: {company_name}\\n\"\n", " user_prompt += f\"Here are the contents of its landing page and other relevant pages; use this information to build a short brochure of the company in markdown.\\n\"\n", " user_prompt += get_all_details(url)\n", - " user_prompt = user_prompt[:5_000] # Truncate if more than 5,000 characters\n", + " user_prompt = user_prompt[:20_000] # Truncate if more than 20,000 characters\n", " return user_prompt" ] }, { "cell_type": "code", - "execution_count": null, - "id": "cd909e0b-1312-4ce2-a553-821e795d7572", - "metadata": {}, - "outputs": [], - "source": [ - "get_brochure_user_prompt(\"HuggingFace\", \"https://huggingface.co\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, + "execution_count": 16, "id": "e44de579-4a1a-4e6a-a510-20ea3e4b8d46", "metadata": {}, "outputs": [], @@ -324,12 +1410,118 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 17, "id": "e093444a-9407-42ae-924a-145730591a39", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Found links: {'links': [{'type': 'about page', 'url': 'https://anthropic.com/company'}, {'type': 'careers page', 'url': 'https://anthropic.com/careers'}, {'type': 'team page', 'url': 'https://anthropic.com/team'}, {'type': 'enterprise page', 'url': 'https://anthropic.com/enterprise'}, {'type': 'research page', 'url': 'https://anthropic.com/research'}]}\n" + ] + }, + { + "data": { + "text/markdown": [ + "# Anthropic Brochure\n", + "\n", + "---\n", + "\n", + "## **Company Overview**\n", + "\n", + "**Anthropic** is a pioneering AI safety and research company headquartered in San Francisco. Our mission is centered around building AI systems that are reliable, interpretable, and steerable, while ensuring their safe integration into everyday life. With our state-of-the-art AI model, **Claude**, we provide tools designed to empower organizations while maintaining a strong emphasis on safety.\n", + "\n", + "---\n", + "\n", + "## **Our Purpose**\n", + "\n", + "We recognize the transformative potential of AI on a global scale and strive to develop systems that individuals and organizations can trust. Through rigorous research and collaborative efforts, we aim to navigate the complex landscape of AI opportunities and risks.\n", + "\n", + "---\n", + "\n", + "## **Safety is a Science**\n", + "\n", + "At Anthropic, AI safety is treated as a systematic science. This involves conducting comprehensive research, actively applying insights into our products, and sharing our findings within the industry and with the public.\n", + "\n", + "### **Interdisciplinary Team**\n", + "\n", + "Our team comprises experts from various fields including machine learning, physics, policy, and business operations, all integrated to develop beneficial AI systems. 
\n", + "\n", + "---\n", + "\n", + "## **Our Products**\n", + "\n", + "### **Claude**\n", + "Meet **Claude 3.5 Sonnet**, our most intelligent AI model, enabling teams to enhance efficiency and drive innovation.\n", + "\n", + "- **Enterprise Solutions**: Securely connect Claude to your company knowledge, empowering every team with trusted AI.\n", + "- **API Access**: Integrate Claude into your applications to leverage advanced AI capabilities.\n", + "\n", + "---\n", + "\n", + "## **Customer Focus**\n", + "\n", + "Our customers range from businesses to nonprofits, all benefiting from Claude's advanced capabilities. We prioritize collaboration with civil society, government entities, and industry stakeholders to ensure safe and effective AI utilization.\n", + "\n", + "---\n", + "\n", + "## **Company Culture**\n", + "\n", + "At Anthropic, collaboration and trust form the foundation of our culture. Here are some of our core values:\n", + "\n", + "1. **Mission-Driven**: We exist to ensure AI benefits people and society.\n", + "2. **High Trust**: We promote honesty, emotional maturity, and open disagreement.\n", + "3. **One Big Team**: Collaboration is central, allowing us to pursue our goals collectively.\n", + "4. **Simplicity & Pragmatism**: We value straightforward solutions and empiricism in all aspects of our work.\n", + "\n", + "---\n", + "\n", + "## **Careers at Anthropic**\n", + "\n", + "Join us in building the future of safe AI! We celebrate diverse backgrounds and rich experiences, understanding that innovative solutions arise from a blend of perspectives.\n", + "\n", + "### **What We Offer:**\n", + "\n", + "- **Health & Wellness**: Comprehensive insurance, mental health support, and flexible time-off policies.\n", + "- **Competitive Compensation**: Salary and equity packages along with generous retirement plans.\n", + "- **Additional Benefits**: Culture of learning with stipends for wellness, education, and home office setups.\n", + "\n", + "### **The Hiring Process**\n", + "\n", + "We aim to minimize bias through a structured interview process that includes exploratory chats, skills assessments, team screens, and final checks to find the best fit for our multidisciplinary team.\n", + "\n", + "### **Work Environment**\n", + "\n", + "While we are headquartered in the Bay Area, we promote flexible work arrangements, often allowing remote work options and supporting those who relocate.\n", + "\n", + "---\n", + "\n", + "## **Join Us**\n", + "\n", + "Are you ready to be part of a forward-thinking company shaping the future of AI safely? Visit our [Careers page](#) to see open positions!\n", + "\n", + "---\n", + "\n", + "## **Get in Touch**\n", + "\n", + "For more information, check our website or follow us on social media:\n", + "- [Twitter](#)\n", + "- [LinkedIn](#)\n", + "- [YouTube](#)\n", + "\n", + "Together, let's build reliable and beneficial AI systems. 
" + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], "source": [ - "create_brochure(\"HuggingFace\", \"https://huggingface.com\")" + "create_brochure(\"Anthropic\", \"https://anthropic.com\")" ] }, { @@ -346,6 +1538,14 @@ { "cell_type": "code", "execution_count": null, + "id": "bcb358a4-aa7f-47ec-b2bc-67768783dfe1", + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": 18, "id": "51db0e49-f261-4137-aabe-92dd601f7725", "metadata": {}, "outputs": [], @@ -370,12 +1570,101 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 19, "id": "56bf0ae3-ee9d-4a72-9cd6-edcac67ceb6d", "metadata": {}, - "outputs": [], + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Found links: {'links': [{'type': 'about page', 'url': 'https://anthropic.com/company'}, {'type': 'careers page', 'url': 'https://anthropic.com/careers'}, {'type': 'team page', 'url': 'https://anthropic.com/team'}, {'type': 'enterprise page', 'url': 'https://anthropic.com/enterprise'}, {'type': 'api page', 'url': 'https://anthropic.com/api'}, {'type': 'pricing page', 'url': 'https://anthropic.com/pricing'}, {'type': 'research page', 'url': 'https://anthropic.com/research'}, {'type': 'news page', 'url': 'https://anthropic.com/news'}]}\n" + ] + }, + { + "data": { + "text/markdown": [ + "# Anthropic: Pioneering Safe and Reliable AI\n", + "\n", + "## Welcome to Anthropic\n", + "\n", + "Anthropic is a San Francisco-based AI safety and research company committed to building reliable and interpretable AI systems that prioritize safety through innovative research and thoughtful design. With a mission centered around ensuring that transformative AI technologies benefit society, we are focused on empowering our customers through our state-of-the-art AI model, Claude.\n", + "\n", + "---\n", + "\n", + "## Our Core Beliefs\n", + "\n", + "- **Safety is a Science**: We approach AI safety as a systematic discipline, conducting rigorous research and applying findings to our products.\n", + "- **Interdisciplinary Collaboration**: Our team of experts spans disciplines including machine learning, policy, physics, and business, working together toward shared goals.\n", + "- **Community Engagement**: We actively collaborate with civil society, government, and academia to promote AI safety industry-wide.\n", + "\n", + "---\n", + "\n", + "## Meet Claude\n", + "\n", + "Claude is our flagship AI assistant, designed to enhance productivity and creativity across teams. Claude not only streamlines daily tasks but also serves as a collaborative partner, boosting innovation and decision-making through shared knowledge and insights.\n", + "\n", + "### Key Features:\n", + "- **Easy Collaboration**: Claude helps teams work smarter, generating ideas and performing complex tasks more efficiently.\n", + "- **Data Protection**: Customer confidentiality is paramount, and all data remains secure—we do not train our models on your sensitive information.\n", + "- **Enterprise Integration**: Empower organizations to scale their internal expertise and knowledge with Claude effortlessly.\n", + "\n", + "---\n", + "\n", + "## Company Culture\n", + "\n", + "At Anthropic, we cultivate an environment of **high trust**, collaboration, and **pragmatism**. 
Our core values emphasize working together towards our mission, assuming good intentions, and pursuing excellence through simple, effective solutions.\n", + "\n", + "### Our Values Include:\n", + "- **Mission-Driven**: We're here to ensure AI supports human flourishing.\n", + "- **Collaborative Spirit**: We function as one big team where every voice matters in shaping our path forward.\n", + "- **Openness and Honesty**: We encourage emotional maturity and intellectual openness at all levels.\n", + "\n", + "---\n", + "\n", + "## Join Us\n", + "\n", + "Anthropic is a **Public Benefit Corporation** dedicated to the long-term well-being of humanity through responsible AI. We are seeking talented individuals from diverse backgrounds to join our mission of building safer AI systems.\n", + "\n", + "### What We Offer:\n", + "- **Health & Wellness**: Comprehensive benefits, including health, dental, and vision insurance, parental leave, and mental health support.\n", + "- **Compensation**: Competitive salaries, equity packages, and retirement plans.\n", + "- **Professional Development**: Flexible stipends for wellness, education, and home office setups.\n", + "\n", + "### Career Opportunities\n", + "We are always on the lookout for individuals talented in various fields such as engineering, policy, and operations. Our recruitment process values unique strengths and diverse experiences.\n", + "\n", + "---\n", + "\n", + "## Customer Commitment\n", + "\n", + "Our customers range from enterprises and nonprofits to educational institutions seeking to leverage AI for innovative applications. With Claude, businesses enhance operations, foster collaboration, and drive growth, all while ensuring safety and reliability.\n", + "\n", + "### Join the teams transforming their operations with Claude's AI capabilities!\n", + "\n", + "---\n", + "\n", + "For more information on our products, career opportunities, or to explore our cutting-edge research, visit our [website](https://www.anthropic.com). Together, let’s shape the future of AI!\n", + "\n", + "---\n", + "\n", + "**Connect with Us:**\n", + "- [Twitter](https://twitter.com/anthropicai)\n", + "- [LinkedIn](https://www.linkedin.com/company/anthropic/)\n", + "- [YouTube](https://www.youtube.com/channel/UCxZgY7lzOs3AEs9H5HKA4bg)\n", + "\n", + "© 2024 Anthropic PBC. All rights reserved." + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], "source": [ - "stream_brochure(\"HuggingFace\", \"https://huggingface.co\")" + "stream_brochure(\"Anthropic\", \"https://anthropic.com\")" ] }, { @@ -385,8 +1674,6 @@ "metadata": {}, "outputs": [], "source": [ - "# Try changing the system prompt to the humorous version when you make the Brochure for Hugging Face:\n", - "\n", "stream_brochure(\"HuggingFace\", \"https://huggingface.co\")" ] }, @@ -395,65 +1682,19 @@ "id": "a27bf9e0-665f-4645-b66b-9725e2a959b5", "metadata": {}, "source": [ - "\n", - " \n", - " \n", - " \n", - " \n", - "
\n", - " \n", - " \n", - "

Business applications

\n", - " In this exercise we extended the Day 1 code to make multiple LLM calls, and generate a document.\n", + "## Business Applications\n", "\n", - "This is perhaps the first example of Agentic AI design patterns, as we combined multiple calls to LLMs. This will feature more in Week 2, and then we will return to Agentic AI in a big way in Week 8 when we build a fully autonomous Agent solution.\n", + "In this exercise we extended the Day 1 code to make multiple LLM calls, and generate a document.\n", "\n", - "Generating content in this way is one of the very most common Use Cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype.\n", - "
" - ] - }, - { - "cell_type": "markdown", - "id": "14b2454b-8ef8-4b5c-b928-053a15e0d553", - "metadata": {}, - "source": [ - "\n", - " \n", - " \n", - " \n", - " \n", - "
\n", - " \n", - " \n", - "

Before you move to Week 2 (which is tons of fun)

\n", - " Please see the week1 EXERCISE notebook for your challenge for the end of week 1. This will give you some essential practice working with Frontier APIs, and prepare you well for Week 2.\n", - "
" - ] - }, - { - "cell_type": "markdown", - "id": "17b64f0f-7d33-4493-985a-033d06e8db08", - "metadata": {}, - "source": [ - "\n", - " \n", - " \n", - " \n", - " \n", - "
\n", - " \n", - " \n", - "

A reminder on 2 useful resources

\n", - " 1. The resources for the course are available here.
\n", - " 2. I'm on LinkedIn here and I love connecting with people taking the course!\n", - "
\n", - "
" + "In terms of techniques, this is perhaps the first example of Agentic AI design patterns, as we combined multiple calls to LLMs. This will feature more in Week 2, and then we will return to Agentic AI in a big way in Week 8 when we build a fully autonomous Agent solution.\n", + "\n", + "In terms of business applications - generating content in this way is one of the very most common Use Cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype." ] }, { "cell_type": "code", "execution_count": null, - "id": "3de35771-455f-40b5-ba44-7c0a6b7c427a", + "id": "22e878f1-08fe-4465-b50c-869352174eae", "metadata": {}, "outputs": [], "source": [] @@ -461,7 +1702,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": "llms", "language": "python", "name": "python3" }, @@ -475,7 +1716,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.11" + "version": "3.11.10" } }, "nbformat": 4, diff --git a/week1/solutions/week1 SOLUTION.ipynb b/week1/solutions/week1 SOLUTION.ipynb index 5a7f2a7..b660bf2 100644 --- a/week1/solutions/week1 SOLUTION.ipynb +++ b/week1/solutions/week1 SOLUTION.ipynb @@ -15,7 +15,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 2, "id": "c1070317-3ed9-4659-abe3-828943230e03", "metadata": {}, "outputs": [], @@ -25,12 +25,12 @@ "from dotenv import load_dotenv\n", "from IPython.display import Markdown, display, update_display\n", "from openai import OpenAI\n", - "import ollama" + "# import ollama" ] }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "4a456906-915a-4bfd-bb9d-57e505c5093f", "metadata": {}, "outputs": [], @@ -43,7 +43,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "id": "a8d7923c-5f28-4c30-8556-342d7c8497c1", "metadata": {}, "outputs": [], @@ -56,7 +56,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "id": "3f0d0137-52b0-47a8-81a8-11a90a010798", "metadata": {}, "outputs": [], @@ -71,7 +71,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "id": "8595807b-8ae2-4e1b-95d9-e8532142e8bb", "metadata": {}, "outputs": [], @@ -84,7 +84,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 7, "id": "9605cbb6-3d3f-4969-b420-7f4cae0b9328", "metadata": {}, "outputs": [], @@ -99,10 +99,66 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "id": "60ce7000-a4a5-4cce-a261-e75ef45063b4", "metadata": {}, - "outputs": [], + "outputs": [ + { + "data": { + "text/markdown": [ + "Certainly! Let's break down the code snippet you've provided to understand what it does and why it operates that way.\n", + "\n", + "The code snippet is:\n", + "\n", + "python\n", + "yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n", + "\n", + "\n", + "### Explanation of Components\n", + "\n", + "1. 
**Set Comprehension**:\n", + " - `{book.get(\"author\") for book in books if book.get(\"author\")}`\n", + " - This is a set comprehension, which is a concise way to create a set in Python.\n", + " - **Iterates**: It iterates over a collection named `books`.\n", + " - **Extraction**: `book.get(\"author\")` attempts to retrieve the value associated with the key `\"author\"` from each `book` dictionary.\n", + " - **Filtering**: The `if book.get(\"author\")` condition filters out any books that do not have an `\"author\"` key or where the value is `None` or an empty string. This means only those books that have a valid author will be included in the set.\n", + "\n", + "2. **Set**:\n", + " - The output of the set comprehension is a set of unique author names from the `books` collection. \n", + " - Sets automatically handle duplicates, so if multiple books have the same author, their name will only appear once in the resulting set.\n", + "\n", + "3. **Yielding with `yield from`**:\n", + " - `yield from` is a syntax used in Python to delegate part of a generator’s operations to another generator.\n", + " - In this context, it means that each element obtained from the set (the unique authors) will be yielded one by one.\n", + "\n", + "### Overall Functionality\n", + "\n", + "- This code essentially extracts all unique authors from the list of `books` (where each `book` is presumably a dictionary containing various attributes) and yields each author one at a time from a generator function.\n", + "\n", + "### Practical Implications\n", + "\n", + "- When this line of code is executed within a generator function, it allows the generator to yield each unique author efficiently, enabling the caller to iterate over them.\n", + "- Utilizing a set ensures that authors are only returned once, even if they appear multiple times across different books.\n", + "\n", + "### Summary\n", + "\n", + "To summarize, the code snippet:\n", + "\n", + "1. Extracts authors from a list of book dictionaries.\n", + "2. Filters out any entries without an author.\n", + "3. Collects unique authors in a set.\n", + "4. Uses `yield from` to yield each author one at a time.\n", + "\n", + "This approach is efficient and concise, leveraging the power of Python's set comprehension and generator functions to handle potentially large data sets with an emphasis on uniqueness and iteration simplicity." + ], + "text/plain": [ + "" + ] + }, + "metadata": {}, + "output_type": "display_data" + } + ], "source": [ "# Get gpt-4o-mini to answer, with streaming\n", "\n", @@ -154,7 +210,7 @@ ], "metadata": { "kernelspec": { - "display_name": "Python 3 (ipykernel)", + "display_name": "llms", "language": "python", "name": "python3" },
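For reference, here is a minimal, self-contained sketch of the pattern that the explanation above walks through. The `unique_authors` helper and the sample `books` list are invented for illustration only; they are not part of the course notebook, which defines its own data.

```python
# Sketch of the "set comprehension + yield from" pattern explained above.

def unique_authors(books):
    # Build a set of author names (duplicates collapse, books with a missing
    # or empty "author" key are filtered out), then yield each one so the
    # caller can iterate over the generator lazily.
    yield from {book.get("author") for book in books if book.get("author")}

# Hypothetical sample data, purely for demonstration.
books = [
    {"title": "Book A", "author": "Alice"},
    {"title": "Book B", "author": "Bob"},
    {"title": "Book C", "author": "Alice"},  # duplicate author -> appears once
    {"title": "Book D"},                     # no author -> filtered out
]

for author in unique_authors(books):
    print(author)
# Prints "Alice" and "Bob" (in arbitrary order, since sets are unordered).
```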