{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "603cd418-504a-4b4d-b1c3-be04febf3e79",
   "metadata": {},
   "source": [
    "# Article Title Generator\n",
    "\n",
    "Summarization use-case in which the user provides an article, which the LLM will analyze to suggest an SEO-optimized title.\n",
    "\n",
    "**NOTES**:\n",
    "\n",
    "1. This version does NOT support website scrapping. You must copy and paste the required article.\n",
    "2. The following models were configured:\n",
    "    a. OpenAI gpt-4o-mini\n",
    "    b. Llama llama3.2\n",
    "    c. Deepseek deepseek-r1:1.5b\n",
    "   It is possible to configure additional models by adding the new model to the MODELS dictionary and its\n",
    "   initialization to the CLIENTS dictionary. Then, call the model with --> ***answer =\n",
    "   get_answer('NEW_MODEL')***.\n",
    "3. Users are encouraged to assess and rank the suggested titles using any headline analyzer tool online.\n",
    "   Example: https://www.isitwp.com/headline-analyzer/. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Confirming Llama is loaded\n",
    "!ollama pull llama3.2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "279b0c00-9bb0-4c7f-9c6d-aa0b108274b9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# imports\n",
    "import os\n",
    "from dotenv import load_dotenv\n",
    "from IPython.display import Markdown, display\n",
    "from openai import OpenAI"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4730d8d-3e20-4f3c-a4ff-ed2ac0a8aa27",
   "metadata": {},
   "outputs": [],
   "source": [
    "# set environment variables for OpenAi\n",
    "load_dotenv(override=True)\n",
    "api_key = os.getenv('OPENAI_API_KEY')\n",
    "\n",
    "# validate API Key\n",
    "if not api_key:\n",
    "    raise ValueError(\"No API key was found! Please check the .env file.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1abbb826-de66-498c-94d8-33369ad01885",
   "metadata": {},
   "outputs": [],
   "source": [
    "# constants\n",
    "MODELS = { 'GPT': 'gpt-4o-mini', \n",
    "          'LLAMA': 'llama3.2', \n",
    "          'DEEPSEEK': 'deepseek-r1:1.5b'\n",
    "         }\n",
    "\n",
    "CLIENTS = { 'GPT': OpenAI(), \n",
    "            'LLAMA': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama'),\n",
    "            'DEEPSEEK': OpenAI(base_url='http://localhost:11434/v1', api_key='ollama') \n",
    "          }"
   ]
  },
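  {
   "cell_type": "markdown",
   "id": "b3f1c2a0-1111-4aaa-9bbb-1a2b3c4d5e01",
   "metadata": {},
   "source": [
    "Below is a minimal sketch of how an additional model could be registered, as described in the notes above.\n",
    "The `mistral` model name is an assumption used purely for illustration; any OpenAI-compatible endpoint and\n",
    "model name could be used instead. The lines are commented out so the notebook runs as-is."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b3f1c2a0-1111-4aaa-9bbb-1a2b3c4d5e02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# OPTIONAL: register an additional model (hypothetical example - adjust to your own setup)\n",
    "# MODELS['MISTRAL'] = 'mistral'  # model name as known by the serving endpoint (assumption)\n",
    "# CLIENTS['MISTRAL'] = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
    "\n",
    "# once registered, it can be called like the other models:\n",
    "# answer = get_answer('MISTRAL')"
   ]
  },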
  {
   "cell_type": "markdown",
   "id": "6f490fe4-32d5-41f3-890d-ecf4e5e01dd4",
   "metadata": {},
   "source": [
    "### Copy & paste your article (without a title)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ddd76319-13ce-480b-baa7-cab6a5c88168",
   "metadata": {},
   "outputs": [],
   "source": [
    "# article - copy & paste your article\n",
    "article = \"\"\"\n",
    "            REPLACE WITH YOUR ARTICLE CONTENT\n",
    "          \"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1914afad-dbd8-4c1f-8e68-80b0e5d743a9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# system prompt\n",
    "system_prompt = \"\"\"\n",
    "    You are an experienced SEO-focused copywriter. The user will provide an article, and your task is to analyze its content and generate the most effective, keyword-optimized title to maximize SEO performance.Respond in Markdown format.\n",
    "    \"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "176cfac7-5e6d-4d4a-a1c4-1b63b60de1f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# user prompt\n",
    "user_prompt = f\"Following the article to be analyzed. Respond in Markdown format./n/n{article}\"\n",
    "                "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c45fc7d7-08c9-4e34-b427-b928a219bb94",
   "metadata": {},
   "outputs": [],
   "source": [
    "# message list\n",
    "messages = [\n",
    "            {\"role\": \"system\", \"content\": system_prompt},\n",
    "            {\"role\": \"user\", \"content\": user_prompt}\n",
    "           ]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f67b881f-1040-4cf7-82c5-e85f4c0bd252",
   "metadata": {},
   "outputs": [],
   "source": [
    "# call model and get answer\n",
    "def get_answer(model):\n",
    "    # set required client\n",
    "    client = CLIENTS[model]\n",
    "\n",
    "    # call model\n",
    "    response = client.chat.completions.create(\n",
    "        model=MODELS[model],\n",
    "        messages=messages\n",
    "    )\n",
    "    \n",
    "    # return answer\n",
    "    return response.choices[0].message.content\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "947b42ed-5b43-486d-8af3-e5b671c1fd0e",
   "metadata": {},
   "source": [
    "### Get OpenAI Suggested Title"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "eb6f66e3-ab99-4f76-9358-896cb43c1fa1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# get openAi answer\n",
    "answer = get_answer('GPT')\n",
    "\n",
    "# display openAi answer\n",
    "display(Markdown(f\"### {MODELS['GPT']} Answer\\n\\n{answer}\" ))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "70073ebf-a00a-416b-854d-642d450cd99b",
   "metadata": {},
   "source": [
    "### Get Llama Suggested Title"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "caa190bb-de5f-45cc-b671-5d62688f7b25",
   "metadata": {},
   "outputs": [],
   "source": [
    "# get Llama answer\n",
    "answer = get_answer('LLAMA')\n",
    "\n",
    "# display Llama answer\n",
    "display(Markdown(f\"### {MODELS['LLAMA']} Answer\\n\\n{answer}\" ))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "811edc4f-20e2-482d-ac89-fae9d1b70bed",
   "metadata": {},
   "source": [
    "### Get Deepseek Suggested Title"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "082628e4-ff4c-46dd-ae5f-76578eb017ad",
   "metadata": {},
   "outputs": [],
   "source": [
    "# get Deepseek answer\n",
    "answer = get_answer('DEEPSEEK')\n",
    "\n",
    "# display Deepseek answer\n",
    "display(Markdown(f\"### {MODELS['DEEPSEEK']} Answer\\n\\n{answer}\" ))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7fc404a6-3a91-4c09-89de-867d3d69b4b2",
   "metadata": {},
   "source": [
    "### Suggested future improvements\n",
    "\n",
    "1. Add website scrapping support to replace copy/pasting of articles.\n",
    "2. Improve the system_prompt to provide specific SEO best practices to adopt during the title generation.\n",
    "3. Rephrase the system_prompt to ensure the model provides a single Title (not a list of suggestions). \n",
    "4. Add the logic that would allow each model to assess the recommendations from the different models and \n",
    "   select the best among these. "
   ]
  },
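  {
   "cell_type": "markdown",
   "id": "b3f1c2a0-1111-4aaa-9bbb-1a2b3c4d5e03",
   "metadata": {},
   "source": [
    "A minimal sketch of improvement 1 (website scraping), assuming the `requests` and `beautifulsoup4`\n",
    "packages are installed and that the article body can be roughly approximated by the page's paragraph tags.\n",
    "Real sites may need tailored extraction; the URL in the usage example is hypothetical."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b3f1c2a0-1111-4aaa-9bbb-1a2b3c4d5e04",
   "metadata": {},
   "outputs": [],
   "source": [
    "# OPTIONAL sketch: fetch an article from a URL instead of copy/pasting it\n",
    "# (assumes: pip install requests beautifulsoup4)\n",
    "import requests\n",
    "from bs4 import BeautifulSoup\n",
    "\n",
    "def fetch_article(url):\n",
    "    \"\"\"Download a page and return its paragraph text as a rough article body.\"\"\"\n",
    "    response = requests.get(url, timeout=30)\n",
    "    response.raise_for_status()\n",
    "    soup = BeautifulSoup(response.text, 'html.parser')\n",
    "    # naive extraction: join all <p> tags; real articles may need site-specific selectors\n",
    "    return '\\n\\n'.join(p.get_text(strip=True) for p in soup.find_all('p'))\n",
    "\n",
    "# example usage (hypothetical URL):\n",
    "# article = fetch_article('https://example.com/some-article')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b3f1c2a0-1111-4aaa-9bbb-1a2b3c4d5e05",
   "metadata": {},
   "source": [
    "A minimal sketch of improvement 4: collect the titles suggested by each model and ask one model (the\n",
    "judge) to pick the best. It reuses the MODELS and CLIENTS dictionaries defined above; the judge prompt\n",
    "and the suggested_titles name are assumptions for illustration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b3f1c2a0-1111-4aaa-9bbb-1a2b3c4d5e06",
   "metadata": {},
   "outputs": [],
   "source": [
    "# OPTIONAL sketch: have one model judge the titles suggested by all models\n",
    "def pick_best_title(suggested_titles, judge='GPT'):\n",
    "    \"\"\"Ask the judge model to select the best candidate title and explain why.\"\"\"\n",
    "    candidates = '\\n'.join(f\"- {model}: {title}\" for model, title in suggested_titles.items())\n",
    "    judge_messages = [\n",
    "        {\"role\": \"system\", \"content\": \"You are an SEO expert. Pick the single best title below and briefly explain why.\"},\n",
    "        {\"role\": \"user\", \"content\": f\"Candidate titles:\\n{candidates}\"}\n",
    "    ]\n",
    "    response = CLIENTS[judge].chat.completions.create(\n",
    "        model=MODELS[judge],\n",
    "        messages=judge_messages\n",
    "    )\n",
    "    return response.choices[0].message.content\n",
    "\n",
    "# example usage (run after the cells above):\n",
    "# suggested_titles = {name: get_answer(name) for name in MODELS}\n",
    "# display(Markdown(pick_best_title(suggested_titles)))"
   ]
  },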
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cf7403ac-d43b-4493-98bb-6fee94950cb0",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}