
remove client.close() to allow multiple llm runs

pull/231/head
Octavio Ortiz-Bosch 2 months ago
parent commit be59cb5378
54	week1/community-contributions/Week_1-Day 2-Article_Title_Generator.ipynb

@@ -9,7 +9,7 @@
     "\n",
     "Summarization use-case in which the user provides an article, which the LLM will analyze to suggest an SEO-optimized title.\n",
     "\n",
-    "NOTES:\n",
+    "**NOTES**:\n",
     "\n",
     "1. This version does NOT support website scrapping. You must copy and paste the required article.\n",
     "2. The following models were configured:\n",
@@ -17,7 +17,21 @@
     " b. Llama llama3.2\n",
     " c. Deepseek deepseek-r1:1.5b\n",
     " It is possible to configure additional models by adding the new model to the MODELS dictionary and its\n",
-    " initialization to the CLIENTS dictionary."
+    " initialization to the CLIENTS dictionary. Then, call the model with --> ***answer =\n",
+    " get_answer('NEW_MODEL')***.\n",
+    "3. Users are encouraged to assess and rank the suggested titles using any headline analyzer tool online.\n",
+    " Example: https://www.isitwp.com/headline-analyzer/. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Confirming Llama is loaded\n",
+    "!ollama pull llama3.2"
     ]
    },
    {
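The notes in this hunk describe how a new model is wired in: one entry in the MODELS dictionary (the model tag) and one initialized client in the CLIENTS dictionary, then `answer = get_answer('NEW_MODEL')`. A minimal sketch of that registry pattern follows; the GPT tag, the explicit `messages` parameter, and the Ollama endpoint in the comment are assumptions, since only the Llama and Deepseek tags appear in this hunk:

```python
# Sketch of the MODELS/CLIENTS registry the notebook's notes describe.
# The "GPT" tag and the messages parameter are assumptions for illustration.

MODELS = {
    "GPT": "gpt-4o-mini",          # assumed OpenAI tag; the hunk names only the two below
    "LLAMA": "llama3.2",
    "DEEPSEEK": "deepseek-r1:1.5b",
}

# CLIENTS maps the same keys to initialized OpenAI-style clients, e.g.:
# CLIENTS["LLAMA"] = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
CLIENTS = {}

def get_answer(model, messages):
    """Resolve the client and model tag for `model` and request one completion."""
    client = CLIENTS[model]        # shared client, reused across runs (not closed here)
    response = client.chat.completions.create(
        model=MODELS[model],
        messages=messages,
    )
    return response.choices[0].message.content
```

Under this pattern, adding a model is two dictionary entries plus one `get_answer(...)` call, exactly as the notes say.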
@@ -43,18 +57,11 @@
    "source": [
     "# set environment variables for OpenAi\n",
     "load_dotenv(override=True)\n",
-    "api_key = os.getenv('OPENAI_API_KEY')\n"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "e773daa6-d05e-49bf-ad8e-a8ed4882b77e",
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "# Confirming Llama is loaded\n",
-    "!ollama pull llama3.2"
+    "api_key = os.getenv('OPENAI_API_KEY')\n",
+    "\n",
+    "# validate API Key\n",
+    "if not api_key:\n",
+    "    raise ValueError(\"No API key was found! Please check the .env file.\")"
     ]
    },
    {
@@ -153,9 +160,6 @@
     " model=MODELS[model],\n",
     " messages=messages\n",
     " )\n",
-    "\n",
-    " # closing LLM client connection\n",
-    " client.close()\n",
     " \n",
     " # return answer\n",
     " return response.choices[0].message.content\n",
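This hunk is the change that names the commit: `get_answer()` used to call `client.close()` after returning a completion, but the clients are created once and shared by every call, so only the first run of each model could succeed. A self-contained sketch of that failure mode, using a stub in place of the real OpenAI-style client (no network needed; the stub and both function names are illustrative, not the notebook's code):

```python
# Why removing client.close() allows multiple LLM runs: the client is a shared,
# module-level object, so closing it inside get_answer() breaks every later call.

class StubClient:
    """Stand-in for an OpenAI-style client: usable until closed."""
    def __init__(self):
        self.closed = False

    def complete(self, prompt):
        if self.closed:
            raise RuntimeError("client is closed")  # what a second run would hit
        return f"answer to: {prompt}"

    def close(self):
        self.closed = True

client = StubClient()  # shared by all calls, like the notebook's CLIENTS entries

def get_answer_buggy(prompt):
    answer = client.complete(prompt)
    client.close()                 # the line this commit removes
    return answer

def get_answer_fixed(prompt):
    return client.complete(prompt)  # client stays open for the next run
```

Calling `get_answer_fixed` repeatedly keeps working; one call to `get_answer_buggy` succeeds but leaves the shared client closed, so any call after it fails.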
@@ -199,10 +203,10 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# get openAi answer\n",
+    "# get Llama answer\n",
     "answer = get_answer('LLAMA')\n",
     "\n",
-    "# display openAi answer\n",
+    "# display Llama answer\n",
     "display(Markdown(f\"### {MODELS['LLAMA']} Answer\\n\\n{answer}\" ))"
    ]
   },
@@ -221,10 +225,10 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# get openAi answer\n",
+    "# get Deepseek answer\n",
     "answer = get_answer('DEEPSEEK')\n",
     "\n",
-    "# display openAi answer\n",
+    "# display Deepseek answer\n",
     "display(Markdown(f\"### {MODELS['DEEPSEEK']} Answer\\n\\n{answer}\" ))"
    ]
   },
@@ -235,7 +239,7 @@
    "source": [
     "### Suggested future improvements\n",
     "\n",
-    "1. Add support for website scrapping to replace copy/pasting of articles.\n",
+    "1. Add website scrapping support to replace copy/pasting of articles.\n",
     "2. Improve the system_prompt to provide specific SEO best practices to adopt during the title generation.\n",
     "3. Rephrase the system_prompt to ensure the model provides a single Title (not a list of suggestions). \n",
     "4. Add the logic that would allow each model to assess the recommendations from the different models and \n",
@@ -245,12 +249,10 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "1af8260b-5ba1-4eeb-acd0-02de537b1bf4",
+   "id": "cf7403ac-d43b-4493-98bb-6fee94950cb0",
    "metadata": {},
    "outputs": [],
-   "source": [
-    "S"
-   ]
+   "source": []
   }
  ],
  "metadata": {
