Updated explanations and tips

pull/14/head
Edward Donner 6 months ago
parent
commit
f4206653c3
  1. BIN
      business.jpg
  2. BIN
      important.jpg
  3. 14
      week1/Intermediate Python.ipynb
  4. 86
      week1/day1.ipynb
  5. 13
      week1/day2 EXERCISE.ipynb
  6. 40
      week1/day5.ipynb
  7. 49
      week1/troubleshooting.ipynb
  8. 77
      week2/day1.ipynb
  9. 83
      week2/day2.ipynb
  10. 18
      week2/day3.ipynb
  11. 51
      week4/day3.ipynb
  12. 34
      week4/day4.ipynb
  13. 38
      week8/day1.ipynb

BIN
business.jpg

Binary file not shown (new image, 367 KiB).

BIN
important.jpg

Binary file not shown (new image, 356 KiB).

14
week1/Intermediate Python.ipynb

@ -23,7 +23,7 @@
"source": [
"## First: if you need a refresher on the foundations\n",
"\n",
"I'm going to defer to an AI friend for this, because these explanations are so well written with great examples. Copy and paste the code examples into a new cell to give them a try.\n",
"I'm going to defer to an AI friend for this, because these explanations are so well written with great examples. Copy and paste the code examples into a new cell to give them a try. Pick whichever section(s) you'd like to brush up on.\n",
"\n",
"**Python imports:** \n",
"https://chatgpt.com/share/672f9f31-8114-8012-be09-29ef0d0140fb\n",
@ -40,8 +40,14 @@
"**Python lists, dicts and sets**, including the `get()` method: \n",
"https://chatgpt.com/share/672fa225-3f04-8012-91af-f9c95287da8d\n",
"\n",
"**Python files** including modes, encoding, context managers, Path, glob.glob: \n",
"https://chatgpt.com/share/673b53b2-6d5c-8012-a344-221056c2f960\n",
"\n",
"**Python classes:** \n",
"https://chatgpt.com/share/672fa07a-1014-8012-b2ea-6dc679552715"
"https://chatgpt.com/share/672fa07a-1014-8012-b2ea-6dc679552715\n",
"\n",
"**Pickling Python objects and converting to JSON:** \n",
"https://chatgpt.com/share/673b553e-9d0c-8012-9919-f3bb5aa23e31"
]
},
{
@ -123,7 +129,7 @@
"source": [
"# But you may not know that you can do this to create dictionaries, too:\n",
"\n",
"fruit_mapping = {fruit:fruit.upper() for fruit in fruits}\n",
"fruit_mapping = {fruit: fruit.upper() for fruit in fruits}\n",
"fruit_mapping"
]
},
@ -147,7 +153,7 @@
"metadata": {},
"outputs": [],
"source": [
"fruit_mapping_unless_starts_with_a = {fruit:fruit.upper() for fruit in fruits if not fruit.startswith('A')}\n",
"fruit_mapping_unless_starts_with_a = {fruit: fruit.upper() for fruit in fruits if not fruit.startswith('A')}\n",
"fruit_mapping_unless_starts_with_a"
]
},
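
The refresher links above include pickling Python objects and converting to JSON. As a quick reference, here is a minimal sketch of the difference using only the standard library; the `person` dictionary and the file name are invented for illustration:

```
import json
import pickle

person = {"name": "Alex", "languages": ["Python", "C++"], "years_experience": 7}

# Pickle: a binary, Python-specific serialization of (almost) any object
with open("person.pkl", "wb") as f:
    pickle.dump(person, f)

with open("person.pkl", "rb") as f:
    restored = pickle.load(f)

# JSON: a text format readable from other languages, limited to simple types
as_json = json.dumps(person, indent=2)
back_again = json.loads(as_json)

print(restored == person, back_again == person)  # True True
```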

86
week1/day1.ipynb

@ -43,9 +43,28 @@
"\n",
"If you're already comfortable with today's material, please hang in there; you can move swiftly through the first few labs - we will get much more in depth as the weeks progress.\n",
"\n",
"## Business value of these exercises\n",
"\n",
"A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me."
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Please read - important note</h2>\n",
" <span style=\"color:#900;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you do this with me, either at the same time, or (perhaps better) right afterwards. Add print statements to understand what's going on, and then come up with your own variations. If you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business value of these exercises</h2>\n",
" <span style=\"color:#181;\">A final thought. While I've designed these notebooks to be educational, I've also tried to make them enjoyable. We'll do fun things like have LLMs tell jokes and argue with each other. But fundamentally, my goal is to teach skills you can apply in business. I'll explain business implications as we go, and it's worth keeping this in mind: as you build experience with models and techniques, think of ways you could put this into action at work today. Please do contact me if you'd like to discuss more or if you have ideas to bounce off me.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
@ -136,9 +155,6 @@
"# If you're not familiar with Classes, check out the \"Intermediate Python\" notebook\n",
"\n",
"class Website:\n",
" \"\"\"\n",
" A utility class to represent a Website that we have scraped\n",
" \"\"\"\n",
"\n",
" def __init__(self, url):\n",
" \"\"\"\n",
@ -160,7 +176,7 @@
"metadata": {},
"outputs": [],
"source": [
"# Let's try one out\n",
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.title)\n",
@ -267,6 +283,8 @@
"metadata": {},
"outputs": [],
"source": [
"# Try this out, and then try for a few more websites\n",
"\n",
"messages_for(ed)"
]
},
@ -371,11 +389,59 @@
"id": "c951be1a-7f1b-448f-af1f-845978e47e2c",
"metadata": {},
"source": [
"## Business Applications\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business applications</h2>\n",
" <span style=\"color:#181;\">In this exercise, you experienced calling the Cloud API of a Frontier Model (a leading model at the frontier of AI) for the first time. We will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
"\n",
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution.</span>\n",
" </td>\n",
" </tr>\n",
"</table>\n",
"\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue - now try yourself</h2>\n",
" <span style=\"color:#900;\">Use the cell below to make your own simple commercial example. Stick with the summarization use case for now. Here's an idea: write something that will take the contents of an email, and will suggest an appropriate short subject line for the email. That's the kind of feature that might be built into a commercial email tool.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6",
"metadata": {},
"outputs": [],
"source": [
"# Step 1: Create your prompts\n",
"\n",
"system_prompt = \"something here\"\n",
"user_prompt = \"\"\"\n",
" Lots of text\n",
" Can be pasted here\n",
"\"\"\"\n",
"\n",
"# Step 2: Make the messages list\n",
"\n",
"messages = [] # fill this in\n",
"\n",
"# Step 3: Call OpenAI\n",
"\n",
"response =\n",
"\n",
"In this exercise, you experienced calling the API of a Frontier Model (a leading model at the frontier of AI) for the first time. This is broadly applicable across Gen AI use cases and we will be using APIs like OpenAI at many stages in the course, in addition to building our own LLMs.\n",
"# Step 4: print the result\n",
"\n",
"More specifically, we've applied this to Summarization - a classic Gen AI use case to make a summary. This can be applied to any business vertical - summarizing the news, summarizing financial performance, summarizing a resume in a cover letter - the applications are limitless. Consider how you could apply Summarization in your business, and try prototyping a solution."
"print("
]
},
{
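
Relating to the Step 1-4 scaffold added above ("now try yourself"): one possible completion, sketched for the suggested email subject-line idea. The system prompt wording, the sample email text and the `gpt-4o-mini` model choice are assumptions rather than the notebook's own answer, so try it yourself first.

```
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
openai = OpenAI()

# Step 1: Create your prompts
system_prompt = "You read the body of an email and suggest a short, clear subject line for it."  # assumed wording
user_prompt = """
Hi team - a reminder that the quarterly results meeting has moved to Thursday
at 3pm in the main conference room. Please bring your updated forecasts and
any open risks you'd like to discuss.
"""  # invented sample email

# Step 2: Make the messages list
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt},
]

# Step 3: Call OpenAI
response = openai.chat.completions.create(model="gpt-4o-mini", messages=messages)

# Step 4: print the result
print(response.choices[0].message.content)
```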

13
week1/day2 EXERCISE.ipynb

@ -29,7 +29,10 @@
"You should see the message `Ollama is running`. \n",
"\n",
"If not, bring up a new Terminal (Mac) or Powershell (Windows) and enter `ollama serve` \n",
"Then try [http://localhost:11434/](http://localhost:11434/) again."
"And in another Terminal (Mac) or Powershell (Windows), enter `ollama pull llama3.2` \n",
"Then try [http://localhost:11434/](http://localhost:11434/) again.\n",
"\n",
"If Ollama is slow on your machine, try using `llama3.2:1b` as an alternative. Run `ollama pull llama3.2:1b` from a Terminal or Powershell, and change the code below from `MODEL = \"llama3.2\"` to `MODEL = \"llama3.2:1b\"`"
]
},
{
@ -124,14 +127,6 @@
"print(response['message']['content'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a611b05-b5b0-4c83-b82d-b3a39ffb917d",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "1622d9bb-5c68-4d4e-9ca4-b492c751f898",

40
week1/day5.ipynb

@ -52,7 +52,7 @@
"load_dotenv()\n",
"api_key = os.getenv('OPENAI_API_KEY')\n",
"\n",
"if api_key and api_key[:8]=='sk-proj-':\n",
"if api_key and api_key.startswith('sk-proj-') and len(api_key)>10:\n",
" print(\"API key looks good so far\")\n",
"else:\n",
" print(\"There might be a problem with your API key? Please visit the troubleshooting notebook!\")\n",
@ -391,19 +391,45 @@
"id": "a27bf9e0-665f-4645-b66b-9725e2a959b5",
"metadata": {},
"source": [
"## Business Applications\n",
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business applications</h2>\n",
" <span style=\"color:#181;\">In this exercise we extended the Day 1 code to make multiple LLM calls, and generate a document.\n",
"\n",
"In this exercise we extended the Day 1 code to make multiple LLM calls, and generate a document.\n",
"This is perhaps the first example of Agentic AI design patterns, as we combined multiple calls to LLMs. This will feature more in Week 2, and then we will return to Agentic AI in a big way in Week 8 when we build a fully autonomous Agent solution.\n",
"\n",
"In terms of techniques, this is perhaps the first example of Agentic AI design patterns, as we combined multiple calls to LLMs. This will feature more in Week 2, and then we will return to Agentic AI in a big way in Week 8 when we build a fully autonomous Agent solution.\n",
"\n",
"In terms of business applications - generating content in this way is one of the very most common Use Cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype."
"Generating content in this way is one of the very most common Use Cases. As with summarization, this can be applied to any business vertical. Write marketing content, generate a product tutorial from a spec, create personalized email content, and so much more. Explore how you can apply content generation to your business, and try making yourself a proof-of-concept prototype.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "14b2454b-8ef8-4b5c-b928-053a15e0d553",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you move to Week 2 (which is tons of fun)</h2>\n",
" <span style=\"color:#900;\">Please see the week1 EXERCISE notebook for your challenge for the end of week 1. This will give you some essential practice working with Frontier APIs, and prepare you well for Week 2.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "22e878f1-08fe-4465-b50c-869352174eae",
"id": "55b2620c-35ee-4d42-a4d9-90fe98dbef02",
"metadata": {},
"outputs": [],
"source": []

49
week1/troubleshooting.ipynb

@ -34,7 +34,7 @@
"4. Kernel menu >> Restart Kernel and Clear Outputs of All Cells\n",
"5. Come back to this notebook and try the cell below again.\n",
"\n",
"If **that** doesn't work, then please contact me! I'll respond quickly, and we'll figure it out."
"If **that** doesn't work, then please contact me! I'll respond quickly, and we'll figure it out. If you used Anaconda, it might be that for some reason your environment is corrupted, in which case the simplest fix is to use the virtualenv approach instead (Part 2B in the setup guides)."
]
},
{
@ -46,6 +46,8 @@
"source": [
"# This should run with no output - no import errors.\n",
"# Import errors might indicate that you started jupyter lab without your environment activated? See SETUP part 5.\n",
"# Or you might need to restart your Kernel and Jupyter Lab.\n",
"# Or it's possible that something is wrong with Anaconda, in which case we may have to use virtualenv instead.\n",
"\n",
"from openai import OpenAI"
]
@ -60,7 +62,9 @@
"Let's check your .env file exists and has the OpenAI key set properly inside it. \n",
"Please run this code and check that it prints a successful message, otherwise follow its instructions.\n",
"\n",
"Note that the `.env` file won't show up in your Jupyter Lab file browser, because Jupyter hides files that start with a dot for your security; they're considered hidden files. If you need to change the name, you'll need to use a command terminal or File Explorer (PC) / Finder Window (Mac). Ask ChatGPT if that's giving you problems, or email me!"
"Note that the `.env` file won't show up in your Jupyter Lab file browser, because Jupyter hides files that start with a dot for your security; they're considered hidden files. If you need to change the name, you'll need to use a command terminal or File Explorer (PC) / Finder Window (Mac). Ask ChatGPT if that's giving you problems, or email me!\n",
"\n",
"If you're having challenges creating the `.env` file, we can also do it with code! See the cell after the next one."
]
},
{
@ -102,6 +106,45 @@
" print(file.name)"
]
},
{
"cell_type": "markdown",
"id": "105f9e0a-9ff4-4344-87c8-e3e41bc50869",
"metadata": {},
"source": [
"## Fallback plan - python code to create the .env file for you\n",
"\n",
"Only run the next cell if you're having problems making the .env file. \n",
"Replace the text in the first line of code with your key from OpenAI."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ab9ea6ef-49ee-4899-a1c7-75a8bd9ac36b",
"metadata": {},
"outputs": [],
"source": [
"# Only run this code in this cell if you want to have a .env file created for you!\n",
"\n",
"make_me_a_file_with_this_key = \"put your key here inside these quotes.. it should start sk-proj-\"\n",
"\n",
"from pathlib import Path\n",
"\n",
"parent_dir = Path(\"..\")\n",
"env_path = parent_dir / \".env\"\n",
"\n",
"if env_path.exists():\n",
" print(\"There is already a .env file - if you want me to create a new one, please delete the existing one first\")\n",
"else:\n",
" try:\n",
" with env_path.open(mode='w', encoding='utf-8') as env_file:\n",
" env_file.write(f\"OPENAI_API_KEY={make_me_a_file_with_this_key}\")\n",
" print(f\"Successfully created the .env file at {env_path}\")\n",
" print(\"Now rerun the previous cell to confirm that the file is created and the key is correct.\")\n",
" except Exception as e:\n",
" print(f\"An error occurred while creating the .env file: {e}\")"
]
},
{
"cell_type": "markdown",
"id": "0ba9420d-3bf0-4e08-abac-f2fbf0e9c7f1",
@ -109,7 +152,7 @@
"source": [
"# Step 3\n",
"\n",
"Now let's check that your API key is correct set up in your `.env` file.\n",
"Now let's check that your API key is correct set up in your `.env` file, and available using the dotenv package.\n",
"Try running the next cell."
]
},
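
To accompany Step 3 above (confirming the key is set up in `.env` and available via the dotenv package), a small sketch of the kind of check involved; the exact messages are illustrative rather than the notebook's own.

```
import os
from dotenv import load_dotenv

load_dotenv(override=True)
api_key = os.getenv("OPENAI_API_KEY")

if not api_key:
    print("No OPENAI_API_KEY found - check the .env file's name, location and contents")
elif not api_key.startswith("sk-proj-"):
    print("A key was found, but it doesn't start with sk-proj- - check you copied the right key")
elif api_key != api_key.strip():
    print("The key has leading or trailing whitespace - please remove it")
else:
    print("API key found and looks good so far!")
```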

77
week2/day1.ipynb

@ -14,6 +14,31 @@
"Today we'll connect with the APIs for Anthropic and Google, as well as OpenAI."
]
},
{
"cell_type": "markdown",
"id": "2b268b6e-0ba4-461e-af86-74a41f4d681f",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Important Note - Please read me</h2>\n",
" <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
" At the start of each week, it's worth checking you have the latest code.<br/>\n",
" First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
" After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
" <code>conda env update --f environment.yml --prune</code><br/>\n",
" Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
" <code>pip install -r requirements.txt</code>\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "85cfe275-4705-4d30-abea-643fbddf1db0",
@ -465,30 +490,64 @@
" claude_messages.append(claude_next)"
]
},
{
"cell_type": "markdown",
"id": "1d10e705-db48-4290-9dc8-9efdb4e31323",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you continue</h2>\n",
" <span style=\"color:#900;\">\n",
" Be sure you understand how the conversation above is working, and in particular how the <code>messages</code> list is being populated. Add print statements as needed. Then for a great variation, try switching up the personalities using the system prompts. Perhaps one can be pessimistic, and one optimistic?<br/>\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "3637910d-2c6f-4f19-b1fb-2f916d23f9ac",
"metadata": {},
"source": [
"# See the community-contributions folder\n",
"# More advanced exercises\n",
"\n",
"For a great variation with a 3-way bringing Gemini into the conversation!\n",
"Try creating a 3-way, perhaps bringing Gemini into the conversation! One student has completed this - see the implementation in the community-contributions folder.\n",
"\n",
"Try doing this yourself before you look in the folder.\n",
"Try doing this yourself before you look at the solutions.\n",
"\n",
"## Additional exercise\n",
"\n",
"Try adding in an Ollama model in to the conversation.\n",
"\n",
"## Business relevance\n",
"\n",
"This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business."
"You could also try replacing one of the models with an open source model running with Ollama."
]
},
{
"cell_type": "markdown",
"id": "446c81e3-b67e-4cd9-8113-bc3092b93063",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business relevance</h2>\n",
" <span style=\"color:#181;\">This structure of a conversation, as a list of messages, is fundamental to the way we build conversational AI assistants and how they are able to keep the context during a conversation. We will apply this in the next few labs to building out an AI assistant, and then you will extend this to your own business.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0d86790a-3a6f-4b18-ab0a-bc6107945a27",
"id": "c23224f6-7008-44ed-a57f-718975f4e291",
"metadata": {},
"outputs": [],
"source": []

83
week2/day2.ipynb

@ -172,6 +172,8 @@
"metadata": {},
"outputs": [],
"source": [
"# The simplicty of gradio. This might appear in \"light mode\" - I'll show you how to make this in dark mode later.\n",
"\n",
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\").launch()"
]
},
@ -182,9 +184,55 @@
"metadata": {},
"outputs": [],
"source": [
"# Adding share=True means that it can be accessed publically\n",
"# A more permanent hosting is available using a platform called Spaces from HuggingFace, which we will touch on next week\n",
"\n",
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\").launch(share=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd87533a-ff3a-4188-8998-5bedd5ba2da3",
"metadata": {},
"outputs": [],
"source": [
"# Adding inbrowser=True opens up a new browser window automatically\n",
"\n",
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\").launch(inbrowser=True)"
]
},
{
"cell_type": "markdown",
"id": "b42ec007-0314-48bf-84a4-a65943649215",
"metadata": {},
"source": [
"## Forcing dark mode\n",
"\n",
"Gradio appears in light mode or dark mode depending on the settings of the browser and computer. There is a way to force gradio to appear in dark mode, but Gradio recommends against this as it should be a user preference (particularly for accessibility reasons). But if you wish to force dark mode for your screens, below is how to do it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e8129afa-532b-4b15-b93c-aa9cca23a546",
"metadata": {},
"outputs": [],
"source": [
"# Define this variable and then pass js=force_dark_mode when creating the Interface\n",
"\n",
"force_dark_mode = \"\"\"\n",
"function refresh() {\n",
" const url = new URL(window.location);\n",
" if (url.searchParams.get('__theme') !== 'dark') {\n",
" url.searchParams.set('__theme', 'dark');\n",
" window.location.href = url.href;\n",
" }\n",
"}\n",
"\"\"\"\n",
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", flagging_mode=\"never\", js=force_dark_mode).launch()"
]
},
{
"cell_type": "code",
"execution_count": null,
@ -192,6 +240,8 @@
"metadata": {},
"outputs": [],
"source": [
"# Inputs and Outputs\n",
"\n",
"view = gr.Interface(\n",
" fn=shout,\n",
" inputs=[gr.Textbox(label=\"Your message:\", lines=6)],\n",
@ -208,6 +258,8 @@
"metadata": {},
"outputs": [],
"source": [
"# And now - changing the function from \"shout\" to \"message_gpt\"\n",
"\n",
"view = gr.Interface(\n",
" fn=message_gpt,\n",
" inputs=[gr.Textbox(label=\"Your message:\", lines=6)],\n",
@ -224,6 +276,11 @@
"metadata": {},
"outputs": [],
"source": [
"# Let's use Markdown\n",
"# Are you wondering why it makes any difference to set system_message when it's not referred to in the code below it?\n",
"# I'm taking advantage of system_message being a global variable, used back in the message_gpt function (go take a look)\n",
"# Not a great software engineering practice, but quite sommon during Jupyter Lab R&D!\n",
"\n",
"system_message = \"You are a helpful assistant that responds in markdown\"\n",
"\n",
"view = gr.Interface(\n",
@ -243,6 +300,8 @@
"outputs": [],
"source": [
"# Let's create a call that streams back results\n",
"# If you'd like a refresher on Generators (the \"yield\" keyword),\n",
"# Please take a look at the Intermediate Python notebook in week1 folder.\n",
"\n",
"def stream_gpt(prompt):\n",
" messages = [\n",
@ -334,7 +393,9 @@
"\n",
"There's actually a more elegant way to achieve this (which Python people might call more 'Pythonic'):\n",
"\n",
"`yield from result`"
"`yield from result`\n",
"\n",
"I cover this in more detail in the Intermediate Python notebook in the week1 folder - take a look if you'd like more."
]
},
{
@ -380,6 +441,26 @@
"Now you know how - it's simple!"
]
},
{
"cell_type": "markdown",
"id": "92d7c49b-2e0e-45b3-92ce-93ca9f962ef4",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Before you read the next few cells</h2>\n",
" <span style=\"color:#900;\">\n",
" Try to do this yourself - go back to the company brochure in week1, day5 and add a Gradio UI to the end. Then come and look at the solution.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "code",
"execution_count": null,

18
week2/day3.ipynb

@ -256,11 +256,19 @@
"id": "82a57ee0-b945-48a7-a024-01b56a5d4b3e",
"metadata": {},
"source": [
"# Business Applications\n",
"\n",
"Conversational Assistants are of course a hugely common use case for Gen AI, and the latest frontier models are remarkably good at nuanced conversation. And Gradio makes it easy to have a user interface. Another crucial skill we covered is how to use prompting to provide context, information and examples.\n",
"\n",
"Consider how you could apply an AI Assistant to your business, and make yourself a prototype. Use the system prompt to give context on your business, and set the tone for the LLM."
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../business.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#181;\">Business Applications</h2>\n",
" <span style=\"color:#181;\">Conversational Assistants are of course a hugely common use case for Gen AI, and the latest frontier models are remarkably good at nuanced conversation. And Gradio makes it easy to have a user interface. Another crucial skill we covered is how to use prompting to provide context, information and examples.\n",
"<br/><br/>\n",
"Consider how you could apply an AI Assistant to your business, and make yourself a prototype. Use the system prompt to give context on your business, and set the tone for the LLM.</span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
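
For the Business Applications note above (using the system prompt to give business context and set the tone): a sketch of a chat function with history, of the kind used with `gr.ChatInterface`. The shop scenario, model name and the (user, assistant) pair format for `history` are assumptions about the setup in use.

```
from openai import OpenAI
import gradio as gr

openai = OpenAI()
MODEL = "gpt-4o-mini"  # assumed

system_message = (
    "You are a helpful assistant for a clothes shop. "        # assumed business context
    "Gently encourage customers to look at items on sale."    # assumed tone-setting
)

def chat(message, history):
    # history assumed to arrive as a list of (user, assistant) pairs
    messages = [{"role": "system", "content": system_message}]
    for user_msg, assistant_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": message})
    response = openai.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content

gr.ChatInterface(fn=chat).launch()
```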

51
week4/day3.ipynb

@ -7,11 +7,52 @@
"source": [
"# Code Generator\n",
"\n",
"The requirement: use a Frontier model to generate high performance C++ code from Python code\n",
"\n",
"# Important Note\n",
"\n",
"In the exercise I use GPT-4o and Claude-3.5-Sonnet, which are the slightly higher priced versions. The costs are still low, but if you'd prefer to keep costs ultra low, please make the suggested switches to the models (3 cells down from here)."
"The requirement: use a Frontier model to generate high performance C++ code from Python code\n"
]
},
{
"cell_type": "markdown",
"id": "d5ccb926-7b49-44a4-99ab-8ef20b5778c0",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Reminder: fetch latest code</h2>\n",
" <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
" At the start of each week, it's worth checking you have the latest code.<br/>\n",
" First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
" After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
" <code>conda env update --f environment.yml --prune</code><br/>\n",
" Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
" <code>pip install -r requirements.txt</code>\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"id": "d90e04a2-5b8a-4fd5-9db8-27c02f033313",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h1 style=\"color:#900;\">Important Note</h1>\n",
" <span style=\"color:#900;\">\n",
" In this lab, I use GPT-4o and Claude-3.5-Sonnet, which are the slightly higher priced models. The costs are still low, but if you'd prefer to keep costs ultra low, please make the suggested switches to the models (3 cells down from here).\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
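
For the requirement above (using a Frontier model to generate high performance C++ from Python): a stripped-down sketch of the prompt structure and call. The prompts, the sample Python snippet, the model choice and the output filename are all illustrative assumptions.

```
from openai import OpenAI

openai = OpenAI()
OPENAI_MODEL = "gpt-4o"  # assumed; gpt-4o-mini keeps costs lower

system_message = (
    "You reimplement Python code in high performance C++. "
    "Respond only with C++ code that produces identical output in the fastest possible time."
)

python_code = """
import time
t0 = time.time()
total = sum(i * i for i in range(10_000_000))
print(total, time.time() - t0)
"""  # invented example

user_prompt = "Rewrite this Python code in C++:\n\n" + python_code

response = openai.chat.completions.create(
    model=OPENAI_MODEL,
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_prompt},
    ],
)

cpp_code = response.choices[0].message.content  # you may need to strip markdown fences from the reply
with open("optimized.cpp", "w") as f:  # filename is an assumption
    f.write(cpp_code)
```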

34
week4/day4.ipynb

@ -11,18 +11,30 @@
"\n",
"To replicate this, you'll need to set up a HuggingFace endpoint as I do in the video. It's simple to do, and it's quite satisfying to see the results!\n",
"\n",
"It's also an important part of your learning; this is the first example of deploying an open source model to be behind an API. We'll return to this in Week 8, but this should plant a seed in your mind for what's involved in moving open source models into production.\n",
"\n",
"## Important Note\n",
"\n",
"If you do decide to use HuggingFace endpoints for this project, you should stop or pause the endpoints when you are done to avoid accruing unnecessary running cost. The costs are very low as long as you only run the endpoint when you're using it. Naviagte to the HuggingFace endpoint UI here:\n",
"\n",
"https://ui.endpoints.huggingface.co/\n",
"\n",
"And open your endpoint, and click Pause to put it on pause so you no longer pay for it. \n",
"It's also an important part of your learning; this is the first example of deploying an open source model to be behind an API. We'll return to this in Week 8, but this should plant a seed in your mind for what's involved in moving open source models into production."
]
},
{
"cell_type": "markdown",
"id": "22e1567b-33fd-49e7-866e-4b635d15715a",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h1 style=\"color:#900;\">Important - Pause Endpoints when not in use</h1>\n",
" <span style=\"color:#900;\">\n",
" If you do decide to use HuggingFace endpoints for this project, you should stop or pause the endpoints when you are done to avoid accruing unnecessary running cost. The costs are very low as long as you only run the endpoint when you're using it. Navigate to the HuggingFace endpoint UI <a href=\"https://ui.endpoints.huggingface.co/\">here,</a> open your endpoint, and click Pause to put it on pause so you no longer pay for it. \n",
"Many thanks to student John L. for raising this.\n",
"\n",
"In week 8 we will use Modal instead of HuggingFace endpoints; with Modal you only pay for the time that you use it."
"<br/><br/>\n",
"In week 8 we will use Modal instead of HuggingFace endpoints; with Modal you only pay for the time that you use it and you should get free credits.\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
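
On the point above about putting an open source model behind an API: a sketch of how a dedicated HuggingFace endpoint might be called once it is running. The endpoint URL is a placeholder, the prompt is illustrative, and the `HF_TOKEN` environment variable name is an assumption.

```
import os
from huggingface_hub import InferenceClient

ENDPOINT_URL = "https://YOUR-ENDPOINT.endpoints.huggingface.co"  # placeholder - copy yours from the endpoint UI
client = InferenceClient(ENDPOINT_URL, token=os.getenv("HF_TOKEN"))  # assumed env var name

reply = client.text_generation("Write a short limerick about C++", max_new_tokens=120)
print(reply)

# Remember to Pause the endpoint in the HuggingFace UI when you are done, so you stop paying for it.
```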

38
week8/day1.ipynb

@ -9,14 +9,32 @@
"\n",
"## We have lots to do this week!\n",
"\n",
"We'll move at a faster pace than usual, particularly as you're becoming proficient LLM engineers.\n",
"\n",
"One quick admin thing: I've added a number of packages to the environment.yml file during Sep and Oct. To make sure you have the latest repo with the latest code, it's worth doing this from the `llm_engineering` project folder:\n",
"\n",
"```\n",
"git pull\n",
"conda env update --f environment.yml --prune\n",
"```"
"We'll move at a faster pace than usual, particularly as you're becoming proficient LLM engineers.\n"
]
},
{
"cell_type": "markdown",
"id": "b3cf5389-93c5-4523-bc48-78fabb91d8f6",
"metadata": {},
"source": [
"<table style=\"margin: 0; text-align: left;\">\n",
" <tr>\n",
" <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
" <img src=\"../important.jpg\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
" </td>\n",
" <td>\n",
" <h2 style=\"color:#900;\">Especially important this week: pull the latest</h2>\n",
" <span style=\"color:#900;\">I'm continually improving these labs, adding more examples and exercises.\n",
" At the start of each week, it's worth checking you have the latest code.<br/>\n",
" First do a <a href=\"https://chatgpt.com/share/6734e705-3270-8012-a074-421661af6ba9\">git pull and merge your changes as needed</a>. Any problems? Try asking ChatGPT to clarify how to merge - or contact me!<br/><br/>\n",
" After you've pulled the code, from the llm_engineering directory, in an Anaconda prompt (PC) or Terminal (Mac), run:<br/>\n",
" <code>conda env update --f environment.yml --prune</code><br/>\n",
" Or if you used virtualenv rather than Anaconda, then run this from your activated environment in a Powershell (PC) or Terminal (Mac):<br/>\n",
" <code>pip install -r requirements.txt</code>\n",
" </span>\n",
" </td>\n",
" </tr>\n",
"</table>"
]
},
{
@ -43,7 +61,9 @@
"\n",
"A student on Windows mentioned that on Windows, you might also need to run this command from a command prompt afterwards: \n",
"`modal token new` \n",
"(Thank you Ed B. for that!)\n"
"(Thank you Ed B. for that!)\n",
"\n",
"And I've also heard that in some situations, you might need to restart the Kernel of this jupyter notebook after running this. (Kernel menu >> Restart Kernel and Clear Outputs of All Cells)."
]
},
{
