{ "cells": [ { "cell_type": "markdown", "id": "75e2ef28-594f-4c18-9d22-c6b8cd40ead2", "metadata": {}, "source": [ "# Day 3 - Conversational AI - aka Chatbot!" ] }, { "cell_type": "code", "execution_count": 1, "id": "70e39cd8-ec79-4e3e-9c26-5659d42d0861", "metadata": {}, "outputs": [], "source": [ "# imports\n", "\n", "import os\n", "import ollama\n", "import gradio as gr" ] }, { "cell_type": "code", "execution_count": 2, "id": "6541d58e-2297-4de1-b1f7-77da1b98b8bb", "metadata": {}, "outputs": [], "source": [ "# Initialize\n", "MODEL_LLAMA = 'llama3.2'" ] }, { "cell_type": "code", "execution_count": 3, "id": "e16839b5-c03b-4d9d-add6-87a0f6f37575", "metadata": {}, "outputs": [], "source": [ "system_message = \"You are a helpful assistant\"" ] }, { "cell_type": "code", "execution_count": 5, "id": "1eacc8a4-4b48-4358-9e06-ce0020041bc1", "metadata": {}, "outputs": [], "source": [ "\n", "\n", "def chat(message, history):\n", " messages = [{\"role\": \"system\", \"content\": system_message}] + history + [{\"role\": \"user\", \"content\": message}]\n", "\n", " print(\"History is:\")\n", " print(history)\n", " print(\"And messages is:\")\n", " print(messages)\n", "\n", " stream = ollama.chat(model=MODEL_LLAMA, messages=messages, stream=True)\n", "\n", " response_text = \"\"\n", " for chunk in stream:\n", " response_text += chunk['message']['content']\n", " yield response_text" ] }, { "cell_type": "markdown", "id": "1334422a-808f-4147-9c4c-57d63d9780d0", "metadata": {}, "source": [ "## And then enter Gradio's magic!" ] }, { "cell_type": "code", "execution_count": 7, "id": "0866ca56-100a-44ab-8bd0-1568feaf6bf2", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "* Running on local URL: http://127.0.0.1:7861\n", "* Running on public URL: https://6539f61952f430fa2d.gradio.live\n", "\n", "This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ "\n",
" ![]() | \n",
" \n",
" Business Applications\n", " Conversational Assistants are of course a hugely common use case for Gen AI, and the latest frontier models are remarkably good at nuanced conversation. And Gradio makes it easy to have a user interface. Another crucial skill we covered is how to use prompting to provide context, information and examples.\n", "\n", "Consider how you could apply an AI Assistant to your business, and make yourself a prototype. Use the system prompt to give context on your business, and set the tone for the LLM.\n", " | \n",
"