"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"November 13, 2024\n",
"Mastering AI and LLM Engineering – Resources\n",
"October 16, 2024\n",
"From Software Engineer to AI Data Scientist – resources\n",
"August 6, 2024\n",
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
"June 26, 2024\n",
"Choosing the Right LLM: Toolkit and Resources\n",
"Navigation\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n"
]
}
],
"source": [
"# Let's try one out\n",
"\n",
@ -132,7 +185,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
@ -144,7 +197,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
@ -177,7 +230,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
@ -199,7 +252,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 9,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
@ -215,17 +268,28 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 10,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"\"# Summary of Edward Donner's Website\\n\\nEdward Donner's website features insights into his interests in coding and experimenting with large language models (LLMs). As the co-founder and CTO of Nebula.io, Donner focuses on leveraging AI to improve talent discovery and management. He has a background in AI startups, highlighting a successful acquisition of his previous venture, untapt, in 2021.\\n\\n## Recent Posts\\n- **November 13, 2024**: *Mastering AI and LLM Engineering – Resources*\\n- **October 16, 2024**: *From Software Engineer to AI Data Scientist – Resources*\\n- **August 6, 2024**: *Outsmart LLM Arena – A Battle of Diplomacy and Deviousness*\\n- **June 26, 2024**: *Choosing the Right LLM: Toolkit and Resources*\\n\\nThe site encourages visitors to connect and shares his passion for technology and music.\""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 11,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
@ -237,10 +301,38 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 12,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/markdown": [
"# Summary of Edward Donner's Website\n",
"\n",
"Edward Donner's website serves as a platform for sharing insights and developments related to large language models (LLMs) and their applications. \n",
"\n",
"### About Edward\n",
"Edward describes himself as a programmer and enthusiast of LLMs, with interests in DJing and electronic music production. He is the co-founder and CTO of Nebula.io, a company focused on leveraging AI to enhance talent discovery and engagement in a job context. He previously founded the AI startup untapt, which was acquired in 2021. \n",
"\n",
"### Featured Content\n",
"The website highlights several posts with resources that include:\n",
"- **Mastering AI and LLM Engineering** (November 13, 2024)\n",
"- **From Software Engineer to AI Data Scientist** (October 16, 2024)\n",
"- **Outsmart LLM Arena** (August 6, 2024) - An initiative designed to challenge LLMs in strategic scenarios.\n",
"- **Choosing the Right LLM: Toolkit and Resources** (June 26, 2024)\n",
"\n",
"### Focus\n",
"The website primarily emphasizes exploring LLMs and their transformative potential, especially in the realm of talent management. There are also connections to various platforms where Edward can be followed or contacted."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
@ -277,20 +369,49 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 13,
"id": "52ae98bb",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/markdown": [
"# Summary of Website Content\n",
"\n",
"The website appears to be inaccessible due to a prompt requesting users to enable JavaScript and cookies in their web browser. As a result, no specific content, news, or announcements can be summarized from the site at this time. \n",
"\n",
"For a full experience and access to content, it is necessary to adjust browser settings accordingly."
"Cell \u001b[1;32mIn[14], line 3\u001b[0m\n\u001b[0;32m 1\u001b[0m \u001b[38;5;66;03m#Parse webpages which is designed using JavaScript heavely\u001b[39;00m\n\u001b[0;32m 2\u001b[0m \u001b[38;5;66;03m# download the chorme driver from here as per your version of chrome - https://developer.chrome.com/docs/chromedriver/downloads\u001b[39;00m\n\u001b[1;32m----> 3\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mselenium\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m webdriver\n\u001b[0;32m 4\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mselenium\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mwebdriver\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchrome\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mservice\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m Service\n\u001b[0;32m 5\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mselenium\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mwebdriver\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mcommon\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mby\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m By\n",
"\u001b[1;31mModuleNotFoundError\u001b[0m: No module named 'selenium'"
]
}
],
"source": [
"#Parse webpages which is designed using JavaScript heavely\n",
"# download the chorme driver from here as per your version of chrome - https://developer.chrome.com/docs/chromedriver/downloads\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"November 13, 2024\n",
"Mastering AI and LLM Engineering – Resources\n",
"October 16, 2024\n",
"From Software Engineer to AI Data Scientist – resources\n",
"August 6, 2024\n",
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
"June 26, 2024\n",
"Choosing the Right LLM: Toolkit and Resources\n",
"Navigation\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n"
]
}
],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
@ -238,7 +309,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
@ -252,7 +323,28 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 10,
"id": "a1b3fcc3-1152-41a4-b4ad-a6d66ee18b79",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'You are an assistant that analyzes the contents of a website and provides a short summary, ignoring text that might be navigation related. Respond in markdown.'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system_prompt"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
@ -270,10 +362,65 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 14,
"id": "26448ec4-5c00-4204-baec-7df91d11ff2e",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You are looking at a website titled Home - Edward Donner\n",
"The contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\n",
"\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"November 13, 2024\n",
"Mastering AI and LLM Engineering – Resources\n",
"October 16, 2024\n",
"From Software Engineer to AI Data Scientist – resources\n",
"August 6, 2024\n",
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
"June 26, 2024\n",
"Choosing the Right LLM: Toolkit and Resources\n",
"Navigation\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n"
]
}
],
"source": [
"print(user_prompt_for(ed))"
]
@ -299,7 +446,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 15,
"id": "f25dcd35-0cd0-4235-9f64-ac37ed9eaaa5",
"metadata": {},
"outputs": [],
@ -312,10 +459,40 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 16,
"id": "6100800a-f1dd-4624-9956-75735225be02",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'role': 'system', 'content': 'You are a snarky assistant'},\n",
" {'role': 'user', 'content': 'What is 2 + 2?'}]"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "21ed95c5-7001-47de-a36d-1d6673b403ce",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Oh, we're doing math now? Well, 2 + 2 equals 4. Shocking, I know!\n"
]
}
],
"source": [
"# To give you a preview -- calling OpenAI with system and user messages:\n",
"\n",
@ -333,7 +510,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 20,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
@ -349,10 +526,24 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 21,
"id": "36478464-39ee-485c-9f3f-6a4e458dbc9c",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"[{'role': 'system',\n",
" 'content': 'You are an assistant that analyzes the contents of a website and provides a short summary, ignoring text that might be navigation related. Respond in markdown.'},\n",
" {'role': 'user',\n",
" 'content': 'You are looking at a website titled Home - Edward Donner\\nThe contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\\n\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nWell, hi there.\\nI’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\\nvery\\namateur) and losing myself in\\nHacker News\\n, nodding my head sagely to things I only half understand.\\nI’m the co-founder and CTO of\\nNebula.io\\n. We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\\nacquired in 2021\\n.\\nWe work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\\npatented\\nour matching model, and our award-winning platform has happy customers and tons of press coverage.\\nConnect\\nwith me for more!\\nNovember 13, 2024\\nMastering AI and LLM Engineering – Resources\\nOctober 16, 2024\\nFrom Software Engineer to AI Data Scientist – resources\\nAugust 6, 2024\\nOutsmart LLM Arena – a battle of diplomacy and deviousness\\nJune 26, 2024\\nChoosing the Right LLM: Toolkit and Resources\\nNavigation\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nGet in touch\\ned [at] edwarddonner [dot] com\\nwww.edwarddonner.com\\nFollow me\\nLinkedIn\\nTwitter\\nFacebook\\nSubscribe to newsletter\\nType your email…\\nSubscribe'}]"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Try this out, and then try for a few more websites\n",
"\n",
@ -369,7 +560,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 22,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
@ -387,17 +578,28 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 23,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"\"# Summary of Edward Donner's Website\\n\\nEdward Donner's website serves as a personal platform where he discusses his interests and expertise, primarily in coding and experimenting with large language models (LLMs). As the co-founder and CTO of Nebula.io, he focuses on leveraging AI to help individuals discover their potentials and enhance talent management for recruiters.\\n\\n### Key Sections:\\n\\n- **About Ed**: Edward enjoys coding, DJing, and engaging with the tech community. He has a background in AI startups, including being the founder and CEO of untapt, which was acquired in 2021.\\n- **Outsmart**: This feature introduces an arena where LLMs compete in diplomacy and cunning, showcasing innovative applications of AI.\\n- **Posts**: \\n - **Mastering AI and LLM Engineering – Resources** (November 13, 2024)\\n - **From Software Engineer to AI Data Scientist – Resources** (October 16, 2024)\\n - **Outsmart LLM Arena – A Battle of Diplomacy and Deviousness** (August 6, 2024)\\n - **Choosing the Right LLM: Toolkit and Resources** (June 26, 2024)\\n\\nThe website invites visitors to connect with Edward and stay updated through his posts and resources related to AI and LLMs.\""
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 24,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
@ -411,10 +613,38 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 25,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/markdown": [
"# Summary of Edward Donner's Website\n",
"\n",
"Edward Donner's website is a personal and professional platform showcasing his interests and expertise in working with Large Language Models (LLMs) and AI technologies. As the co-founder and CTO of Nebula.io, he is focused on leveraging AI to enhance talent discovery and engagement. Previously, he founded the AI startup untapt, which was acquired in 2021. \n",
"\n",
"## Key Features:\n",
"\n",
"- **Personal Introduction**: Ed shares his passion for coding, experimentation with LLMs, and interests in DJing and electronic music production.\n",
"- **Professional Background**: Insights into his role at Nebula.io and previous experience with untapt. He highlights his work with proprietary LLMs and innovative matching models.\n",
" \n",
"## Recent Posts:\n",
"- **November 13, 2024**: Mastering AI and LLM Engineering – Resources\n",
"- **October 16, 2024**: From Software Engineer to AI Data Scientist – Resources\n",
"- **August 6, 2024**: Outsmart LLM Arena – A battle of diplomacy and deviousness\n",
"- **June 26, 2024**: Choosing the Right LLM: Toolkit and Resources\n",
"\n",
"These posts suggest a focus on educational resources and insights related to AI and LLM engineering."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
@ -437,20 +667,86 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 26,
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/markdown": [
"# CNN Summary\n",
"\n",
"CNN is a leading news outlet providing the latest updates on various topics, including:\n",
"\n",
"- **Breaking News:** Continuous updates on urgent events around the globe.\n",
"- **Featured Stories:** Insights into significant current affairs, such as the ongoing conflict between Israel and Hamas, and developments in the Ukraine-Russia war.\n",
"- **Politics:** Coverage of key political events, including President Biden's recent clemency grants and implications of Trump's potential inauguration.\n",
"- **World Affairs:** Reports on international crises, including the situation in Syria and reactions to Sudan's bombardments.\n",
"- **Health & Science:** Articles discussing public health issues and scientific discoveries, such as innovations in herbal medicine.\n",
"- **Business & Economy:** Analysis of corporate developments, job cuts in Germany, and impacts of international trade policies.\n",
"- **Entertainment & Culture:** Features on public figures and trends affecting the entertainment industry, as well as the latest in sports.\n",
"\n",
"Recent announcements on the site include:\n",
"- **Clemency for Nearly 1,500 People:** This act marks the largest single-day clemency decision in recent history.\n",
"- **Status of International Relations:** Notable updates on figures like Trump and Xi Jinping, alongside the unfolding situation in Ukraine and the Middle East.\n",
"- **Cultural Insights:** Breakdowns of significant cultural events such as the recognition of Time's \"Person of the Year.\"\n",
"\n",
"CNN emphasizes the importance of viewer feedback to enhance reading and engagement experiences on their platform."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://cnn.com\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 27,
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/markdown": [
"# Anthropic Overview\n",
"\n",
"Anthropic is an AI safety and research company based in San Francisco, focused on developing reliable and beneficial AI systems with a strong emphasis on safety. The company boasts an interdisciplinary team with expertise in machine learning, physics, policy, and product development.\n",
"\n",
"## Key Offerings\n",
"\n",
"- **Claude AI Models**: \n",
" - The latest model, **Claude 3.5 Sonnet**, is highlighted as the most intelligent AI model to date.\n",
" - **Claude 3.5 Haiku** has also been introduced, expanding their product offerings.\n",
"\n",
"- **API Access**: \n",
" - Users can leverage Claude to enhance efficiency and create new revenue opportunities.\n",
"\n",
"## Recent Announcements\n",
"\n",
"1. **New Model Updates** (October 22, 2024):\n",
" - Introduction of Claude 3.5 Sonnet and Claude 3.5 Haiku.\n",
" - Announcement of new capabilities for computer use.\n",
"\n",
"2. **Research Initiatives**:\n",
" - **Constitutional AI**: Discusses ensuring harmlessness through AI feedback (December 15, 2022).\n",
" - **Core Views on AI Safety**: Outlines when, why, what, and how AI safety should be addressed (March 8, 2023).\n",
"\n",
"Overall, Anthropic is focused on pioneering advancements in AI through research and development while prioritizing safety and reliability in its applications."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"display_summary(\"https://anthropic.com\")"
]
@ -489,30 +785,58 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 31,
"id": "00743dac-0e70-45b7-879a-d7293a6f68a6",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Objet : Contestation du frais de retour tardif\n",
"\n",
"Bonjour,\n",
"\n",
"Je fais suite à votre email concernant le retour tardif de la voiture. Je conteste fermement cette modification de prix qui, selon moi, est injustifiée.\n",
"\n",
"Selon les termes de notre contrat de location, le délai de grâce pour le retour est souvent de 30 minutes, ce qui est fréquemment appliqué dans le secteur. De plus, vous n'avez pas mentionné dans votre contrat un tarif additionnel pour une telle situation, ce qui pourrait constituer une clause abusive.\n",
"\n",
"Je vous prie donc de bien vouloir annuler cette modification tarifaire. Je me réserve le droit d'explorer des recours supplémentaires si cette situation n'est pas corrigée rapidement.\n",
"\n",
"Dans l'attente de votre retour.\n",
"\n",
"Cordialement, \n",
"Sylvain\n"
]
}
],
"source": [
"# Step 1: Create your prompts\n",
"\n",
"system_prompt = \"something here\"\n",
"system_prompt = \"You are my very smart assistant. Your task will be to suggest to me an answer to my email. I want to avoid paying. you can be agressive and use the law\"\n",
"user_prompt = \"\"\"\n",
" Lots of text\n",
" Can be pasted here\n",
" Retour tardif\n",
"Bonjour sylvain,\n",
"\n",
"Vous avez réservé la voiture jusqu'à 15:00 , et vous l'avez rendue à 15:30 . Le prix de votre location a été modifié en conséquence.\n",
"Generative AI has numerous business applications across various industries. Here are some examples:\n",
"\n",
"1. **Content Creation**: Generative AI can create high-quality content, such as articles, social media posts, and product descriptions, in a matter of minutes. This can be particularly useful for businesses that need to generate large amounts of content quickly.\n",
"2. **Marketing Automation**: Generative AI can help automate marketing processes, such as personalized email campaigns, ad copywriting, and lead generation. By analyzing customer data and behavior, generative AI can create targeted and relevant content that resonates with customers.\n",
"3. **Product Design and Development**: Generative AI can assist in the design and development of new products by generating 2D and 3D designs, prototypes, and even entire product lines. This can help reduce costs and speed up the product development process.\n",
"4. **Image and Video Generation**: Generative AI can create realistic images and videos that can be used for various business purposes, such as advertising, e-commerce, and social media content creation.\n",
"5. **Chatbots and Virtual Assistants**: Generative AI can power chatbots and virtual assistants that can engage with customers, provide support, and answer frequently asked questions. This can help businesses improve customer service and reduce the workload of human customer support agents.\n",
"6. **Supply Chain Optimization**: Generative AI can analyze supply chain data and generate optimized routes, schedules, and inventory management plans to improve logistics efficiency and reduce costs.\n",
"7. **Predictive Maintenance**: Generative AI can analyze equipment sensor data and predict maintenance needs, allowing businesses to schedule maintenance activities before equipment failures occur, reducing downtime and increasing overall efficiency.\n",
"8. **Financial Analysis and Forecasting**: Generative AI can analyze financial data and generate forecasts, identifying trends and patterns that can help businesses make informed investment decisions.\n",
"9. **Customer Service Chatbots**: Generative AI can create personalized chatbots that can engage with customers, answer questions, and provide support in multiple languages.\n",
"10. **Education and Training**: Generative AI can create personalized learning plans, generate educational content, and even develop adaptive learning systems that adjust to individual student needs.\n",
"\n",
"Some specific business applications of generative AI include:\n",
"\n",
"* **Amazon's Product Recommendations**: Amazon uses generative AI to recommend products based on customer behavior and preferences.\n",
"* **Google's Content Generation**: Google uses generative AI to create high-quality content for its search engine, such as news summaries and product descriptions.\n",
"* **IBM's Watson**: IBM uses generative AI in its Watson platform to analyze large amounts of data and provide insights for various industries, including healthcare and finance.\n",
"\n",
"Overall, the business applications of generative AI are vast and continue to expand as the technology improves.\n"
"Generative AI has numerous business applications across various industries, including:\n",
"\n",
"1. **Content Generation**: Use AI to generate high-quality content such as blog posts, social media posts, product descriptions, and more.\n",
"2. **Marketing Automation**: Utilize generative AI to personalize marketing messages, create targeted advertising campaigns, and automate lead generation.\n",
"3. **Product Design and Development**: Leverage generative AI to design and develop new products, such as 3D models, prototypes, and designs for packaging and branding materials.\n",
"4. **Customer Service Chatbots**: Create chatbots that use generative AI to understand customer inquiries and provide personalized responses.\n",
"5. **Language Translation**: Apply generative AI to translate text, speech, and audio content in real-time.\n",
"6. **Image Generation**: Use generative AI to create high-quality images for marketing materials, product packaging, and more.\n",
"7. **Music Composition**: Utilize generative AI to compose original music tracks, sound effects, and audio loops for various industries.\n",
"8. **Predictive Analytics**: Leverage generative AI to analyze large datasets, identify patterns, and make predictions about customer behavior, market trends, and more.\n",
"9. **Financial Modeling**: Apply generative AI to create financial models, forecast revenue, and predict potential risks.\n",
"10. **Creative Writing**: Use generative AI to assist in creative writing tasks such as generating plot outlines, character development, and dialogue.\n",
"\n",
"Industry-specific applications:\n",
"\n",
"1. **Healthcare**: Generate medical imaging reports, create personalized patient profiles, and develop new treatment options using generative AI.\n",
"2. **Finance**: Analyze financial data, identify trends, and predict market movements using generative AI.\n",
"3. **Education**: Develop personalized learning plans, create adaptive assessments, and generate educational content using generative AI.\n",
"4. **Retail**: Generate product descriptions, optimize pricing strategies, and create targeted marketing campaigns using generative AI.\n",
"2. **Improved Accuracy**: Reduce human error, improve data accuracy, and enhance decision-making.\n",
"3. **Enhanced Creativity**: Unlock new creative possibilities, generate innovative ideas, and discover new opportunities.\n",
"\n",
"However, there are also challenges associated with Generative AI, such as:\n",
"\n",
"1. **Bias and Fairness**: Ensure that generative models do not perpetuate existing biases or discriminatory practices.\n",
"2. **Explainability and Transparency**: Develop techniques to understand how generative models make decisions and provide transparency into their decision-making processes.\n",
"3. **Job Displacement**: Prepare employees for the impact of automation on jobs and develop new skills to work alongside AI systems.\n",
"\n",
"Overall, Generative AI has the potential to transform various industries and bring about significant benefits, but it is crucial to address the associated challenges to maximize its value.\n"
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"November 13, 2024\n",
"Mastering AI and LLM Engineering – Resources\n",
"October 16, 2024\n",
"From Software Engineer to AI Data Scientist – resources\n",
"August 6, 2024\n",
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
"June 26, 2024\n",
"Choosing the Right LLM: Toolkit and Resources\n",
"Navigation\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n"
]
}
],
"source": [
"# Let's try one out. Change the website and add print statements to follow along.\n",
"\n",
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.title)\n",
"print(ed.text)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "24ea875b-2ba0-41ad-b6be-4be8baeac16e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You are looking at a website titled Home - Edward Donner\n",
"The contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\n",
"\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"November 13, 2024\n",
"Mastering AI and LLM Engineering – Resources\n",
"October 16, 2024\n",
"From Software Engineer to AI Data Scientist – resources\n",
"August 6, 2024\n",
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
"June 26, 2024\n",
"Choosing the Right LLM: Toolkit and Resources\n",
"Navigation\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n"
]
}
],
"source": [
"# Define our system/user prompt - you can experiment with this later, changing the last sentence to 'Respond in markdown in Spanish.\"\n",
"\n",
"system_prompt = \"You are an assistant that analyzes the contents of a website \\\n",
"and provides a short summary, ignoring text that might be navigation related. \\\n",
"Respond in markdown.\"\n",
"\n",
"# A function that writes a User Prompt that asks for summaries of websites:\n",
"\n",
"def user_prompt_for(website):\n",
" user_prompt = f\"You are looking at a website titled {website.title}\"\n",
" user_prompt += \"\\nThe contents of this website is as follows; \\\n",
"please provide a short summary of this website in markdown. \\\n",
"If it includes news or announcements, then summarize these too.\\n\\n\"\n",
"[{'role': 'system', 'content': 'You are an assistant that analyzes the contents of a website and provides a short summary, ignoring text that might be navigation related. Respond in markdown.'}, {'role': 'user', 'content': 'You are looking at a website titled Home - Edward Donner\\nThe contents of this website is as follows; please provide a short summary of this website in markdown. If it includes news or announcements, then summarize these too.\\n\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nWell, hi there.\\nI’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\\nvery\\namateur) and losing myself in\\nHacker News\\n, nodding my head sagely to things I only half understand.\\nI’m the co-founder and CTO of\\nNebula.io\\n. We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\\nacquired in 2021\\n.\\nWe work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\\npatented\\nour matching model, and our award-winning platform has happy customers and tons of press coverage.\\nConnect\\nwith me for more!\\nNovember 13, 2024\\nMastering AI and LLM Engineering – Resources\\nOctober 16, 2024\\nFrom Software Engineer to AI Data Scientist – resources\\nAugust 6, 2024\\nOutsmart LLM Arena – a battle of diplomacy and deviousness\\nJune 26, 2024\\nChoosing the Right LLM: Toolkit and Resources\\nNavigation\\nHome\\nOutsmart\\nAn arena that pits LLMs against each other in a battle of diplomacy and deviousness\\nAbout\\nPosts\\nGet in touch\\ned [at] edwarddonner [dot] com\\nwww.edwarddonner.com\\nFollow me\\nLinkedIn\\nTwitter\\nFacebook\\nSubscribe to newsletter\\nType your email…\\nSubscribe'}]\n"
]
}
],
"source": [
"print(messages_for(ed))"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "42e0d73b-7db5-431f-bf97-4767be627056",
"metadata": {},
"outputs": [],
"source": [
"# And now: call the OpenAI API. You will get very familiar with this!\n",
"'**Website Summary**\\n======================\\n\\n* The website is owned by Edward Donner, a co-founder and CTO of Nebula.io.\\n* It appears to be primarily focused on his experiences with Large Language Models (LLMs) and AI engineering.\\n\\n**News and Announcements**\\n---------------------------\\n\\n* **Mastering AI and LLM Engineering - Resources**: A collection of resources for learning about mastering AI and LLM engineering, announced on November 13, 2024.\\n* **From Software Engineer to AI Data Scientist – resources**: A list of resources to help move from a software engineer role to an AI data scientist, shared on October 16, 2024.\\n* **Outsmart LLM Arena – a battle of diplomacy and deviousness**: An introduction to the Outsmart arena where LLMs compete in a battle of diplomacy and strategy, announced on June 26, 2024.\\n\\n**Additional Information**\\n-------------------------\\n\\n* Edward Donner is also involved with various projects and companies, including Nebula.io and untapt (acquired in 2021).\\n* He shares his interests in DJing, electronic music production, and amateur coding endeavors.\\n* The website contains links to his social media profiles and a newsletter sign-up.'"
"The line of code you've provided utilizes a generator expression along with the `yield from` syntax in Python. Let's break it down step by step.\n",
"\n",
"### Breakdown of the Code\n",
"\n",
"1. **Context of `yield from`:**\n",
" - `yield from` is a special syntax in Python used within a generator to yield all values from an iterable (like a list, set, or another generator) without explicitly iterating through it. It's useful for delegating part of the generator's operations to another generator.\n",
"\n",
"2. **The Set Comprehension:**\n",
" - `{book.get(\"author\") for book in books if book.get(\"author\")}` is a **set comprehension**. It constructs a set of unique author names from the `books` collection.\n",
" - **How it works:**\n",
" - `for book in books`: This iterates over each `book` in the `books` iterable (which is assumed to be a list or another iterable containing dictionaries).\n",
" - `book.get(\"author\")`: This retrieves the value corresponding to the key `\"author\"` from the `book` dictionary. If the key does not exist in the dictionary, `get()` returns `None`.\n",
" - `if book.get(\"author\")`: This condition checks if the author exists (i.e., is not `None`, empty string, etc.). If the result of `get(\"author\")` is falsy (like `None`), that book is skipped.\n",
" - The result is a set of unique author names because sets automatically remove duplicates.\n",
"\n",
"3. **Combining `yield from` with Set Comprehension:**\n",
" - The entire line of code thus creates a generator that can yield each unique author from the set of authors collected from the `books`, allowing the surrounding context to retrieve authors one by one.\n",
"\n",
"### Purpose of the Code\n",
"\n",
"- **Purpose:** The purpose of this line is to efficiently generate a sequence of unique author names from a list (or iterable) of book dictionaries, while filtering out any entries that do not have an author specified.\n",
"- **Why Use This Approach:** \n",
" - Using a set comprehension ensures that only unique authors are collected, avoiding duplicates.\n",
" - The `yield from` syntax provides a clean way to return these unique authors as part of a generator function, making it easy to iterate over them in another context without needing to manage the iteration and collection explicitly.\n",
"\n",
"### Example Scenario\n",
"\n",
"Let's say you have a list of books structured like this:\n",
"If you execute the line in a generator function, here's what happens:\n",
"\n",
"1. The **Set Comprehension** is evaluated, resulting in the set `{\"Alice\", \"Bob\"}`.\n",
"2. The `yield from` syntax will then yield each of these authors one by one, allowing any caller of the generator to iterate through `Alice` and `Bob`.\n",
"\n",
"### Usage\n",
"\n",
"Here is a complete example of how you might use this line in a generator function:\n",
"\n",
"python\n",
"def unique_authors(books):\n",
" yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"In summary, the line of code you've provided is a concise way to yield unique authors from a collection of books using Python's powerful set comprehension and generator features. It provides both efficiency in terms of time and space complexity (due to unique filtering) and succinctness in code structure."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Get gpt-4o-mini to answer, with streaming\n",
"\n",
@ -118,10 +204,64 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"id": "8f7c8ea8-4082-4ad0-8751-3301adcf6538",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/markdown": [
"**Breaking Down the Code**\n",
"\n",
"The given code snippet is written in Python and utilizes a combination of features such as generators, dictionary iteration, and conditional logic. Let's break it down step by step:\n",
"\n",
"```python\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"```\n",
"\n",
"Here's what's happening:\n",
"\n",
"1. `for book in books`: This is a standard `for` loop that iterates over the elements of the `books` collection (likely a list, dictionary, or some other iterable).\n",
"\n",
"2. `if book.get(\"author\")`: Inside the loop, there's an additional condition that filters out any items from the `books` collection where the `\"author\"` key does not exist or its value is empty/missing (`None`, `''`, etc.). This ensures only books with a valid author name are processed.\n",
"\n",
"3. `{book.get(\"author\") for book in books if book.get(\"author\")}`: This is an expression that iterates over the filtered list of books, extracting and yielding their authors. The `get()` method is used to safely retrieve the value associated with the `\"author\"` key from each book dictionary.\n",
"\n",
"4. `yield from {...}`: The outer expression is a generator expression, which yields values one at a time instead of computing them all at once and returning them in a list (as would be done with a regular list comprehension). The `yield from` syntax allows us to delegate the execution of this inner generator to another iterable (`{book.get(\"author\") for book in books if book.get(\"author\")}`).\n",
"\n",
"**Why it's useful**\n",
"\n",
"This code snippet is useful when you need to:\n",
"\n",
"* Filter out invalid or incomplete data points while still working with iterables.\n",
"* Work with large datasets without loading the entire dataset into memory at once.\n",
"* Simplify your code by leveraging generator expressions and avoiding unnecessary computations.\n",
"\n",
"In practice, this could be used in a variety of scenarios where you're dealing with lists of dictionaries (e.g., books) and need to extract specific information from them (e.g., authors).\n",
"\n",
"**Example usage**\n",
"\n",
"Here's an example using a list of book dictionaries:\n",
"In this example, the `yield from` expression filters out books with missing or empty authors and extracts their values as a generator. The outer loop can then iterate over these generated values without having to store them all in memory at once."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Get Llama 3.2 to answer\n",
"\n",
@ -142,14 +282,6 @@
"\n",
"And then creating the prompts and making the calls interactively."
"Sure! Let's break down the code you've provided:\n",
"\n",
"```python\n",
"yield from {book.get(\"author\") for book in books if book.get(\"author\")}\n",
"```\n",
"\n",
"### Explanation of the Code\n",
"\n",
"1. **Set Comprehension:**\n",
" - The code `{book.get(\"author\") for book in books if book.get(\"author\")}` is a **set comprehension**. This is a concise way to create a set in Python.\n",
" - It iterates over a collection named `books`.\n",
"\n",
"2. **Accessing the \"author\" Key:**\n",
" - `book.get(\"author\")`: For each `book` in the `books` collection, it attempts to get the value associated with the key `\"author\"`. \n",
" - The `get()` method is used to safely access dictionary keys. If the key does not exist, it returns `None` instead of throwing an error.\n",
"\n",
"3. **Filtering Authors:**\n",
" - The clause `if book.get(\"author\")` acts as a filter. It ensures that only books with a valid (non-`None`) author get included in the resulting set.\n",
" - Therefore, this part: `{book.get(\"author\") for book in books if book.get(\"author\")}` creates a set of unique authors from the `books` collection that have valid author values.\n",
"\n",
"4. **Yielding Results:**\n",
" - The `yield from` statement is used to yield values from the set comprehension created previously. \n",
" - This means that the containing function will return each unique author one at a time as they are requested (similar to a generator).\n",
"\n",
"### Summary\n",
"\n",
"- **What the Code Does:**\n",
" - It generates a set of unique authors from a list of books, filtering out any entries that do not have an author. It then yields each of these authors.\n",
"\n",
"- **Why It's Useful:**\n",
" - This code is particularly useful when dealing with collections of books where some might not have an author specified. It safely retrieves the authors and ensures that each author is only returned once. \n",
" - Using `yield from` makes it memory efficient, as it does not create an intermediate list of authors but generates them one at a time on demand.\n",
"This code snippet is written in Python and utilizes the `yield from` statement, which was introduced in Python 3.3.\n",
"\n",
"### What does it do?\n",
"\n",
"The code takes two main inputs:\n",
"\n",
"* A list of dictionaries (`books`) where each dictionary represents a book.\n",
"* Another dictionary (`book`) that contains information about an author.\n",
"\n",
"It generates a sequence of authors from the `books` list and yields them one by one, while also applying the condition that the book has a valid \"author\" key in its dictionary.\n",
"\n",
"Here's a step-by-step breakdown:\n",
"\n",
"1. `{book.get(\"author\") for book in books if book.get(\"author\")}`:\n",
" * This is an expression that generates a sequence of authors.\n",
" * `for book in books` iterates over each book in the `books` list.\n",
" * `if book.get(\"author\")` filters out books without an \"author\" key, to prevent errors and ensure only valid data is processed.\n",
"\n",
"2. `yield from ...`:\n",
" * This statement is used to delegate a sub-generator or iterator.\n",
" * In this case, it's delegating the sequence of authors generated in step 1.\n",
"\n",
"**Why does it yield authors?**\n",
"\n",
"The use of `yield from` serves two main purposes:\n",
"\n",
"* **Efficiency**: Instead of creating a new list with all the authors, this code yields each author one by one. This approach is more memory-efficient and can be particularly beneficial when dealing with large datasets.\n",
"* **Flexibility**: By using `yield from`, you can create generators that produce values on-the-fly, allowing for lazy evaluation.\n",
"\n",
"### Example Usage\n",
"\n",
"Here's an example of how you might use this code:\n",
"In this example, `get_authors` is a generator function that yields unique authors from the `books` list. The generated values are collected in a set (`authors`) to eliminate duplicates."
"# Load environment variables in a file called .env\n",
"# Print the key prefixes to help with any debugging\n",
@ -143,7 +153,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 4,
"id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
"metadata": {},
"outputs": [],
@ -157,7 +167,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"id": "425ed580-808d-429b-85b0-6cba50ca1d0c",
"metadata": {},
"outputs": [],
@ -190,7 +200,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "378a0296-59a2-45c6-82eb-941344d3eeff",
"metadata": {},
"outputs": [],
@ -201,7 +211,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
"metadata": {},
"outputs": [],
@ -214,10 +224,20 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 8,
"id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist bring a ladder to the bar?\n",
"\n",
"Because he heard the drinks were on the house!\n"
]
}
],
"source": [
"# GPT-3.5-Turbo\n",
"\n",
@ -227,10 +247,20 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 9,
"id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist break up with the statistician?\n",
"\n",
"Because she found him too mean!\n"
]
}
],
"source": [
"# GPT-4o-mini\n",
"# Temperature setting controls creativity\n",
@ -238,34 +268,58 @@
"completion = openai.chat.completions.create(\n",
" model='gpt-4o-mini',\n",
" messages=prompts,\n",
" temperature=0.7\n",
" temperature=0.2\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 10,
"id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why do data scientists love nature hikes?\n",
"\n",
"Because they can't resist finding patterns in the wild!\n"
]
}
],
"source": [
"# GPT-4o\n",
"\n",
"completion = openai.chat.completions.create(\n",
" model='gpt-4o',\n",
" messages=prompts,\n",
" temperature=0.4\n",
" temperature=0.8\n",
")\n",
"print(completion.choices[0].message.content)"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 11,
"id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's a light-hearted joke for data scientists:\n",
"\n",
"Why do data scientists prefer dark mode?\n",
"\n",
"Because light attracts bugs!\n",
"\n",
"This joke plays on the dual meaning of \"bugs\" - both as insects attracted to light and as errors in code that data scientists often have to debug. It's a fun little pun that combines a common preference among programmers (dark mode) with a data science-related concept.\n"
]
}
],
"source": [
"# Claude 3.5 Sonnet\n",
"# API needs system message provided separately from user prompt\n",
@ -286,10 +340,22 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 15,
"id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's a light-hearted joke for data scientists:\n",
"\n",
" up with their significant other?\n",
"\n",
" too much variance in the relationship, and they couldn't find a significant correlation!"
"Why was the Data Scientist sad? Because they didn't get any arrays.\n",
"\n"
]
}
],
"source": [
"# The API for Gemini has a slightly different structure.\n",
"# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n",
@ -330,10 +405,19 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 13,
"id": "49009a30-037d-41c8-b874-127f61c4aa3a",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why was the Data Scientist sad? Because they didn't get any arrays.\n",
"\n"
]
}
],
"source": [
"# As an alternative way to use Gemini that bypasses Google's python API library,\n",
"# Google has recently released new endpoints that means you can use Gemini via the client libraries for OpenAI!\n",
@ -352,7 +436,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 14,
"id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
"metadata": {},
"outputs": [],
@ -367,10 +451,61 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 15,
"id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/markdown": [
"Determining whether a business problem is suitable for a Large Language Model (LLM) solution involves several considerations. Here's a structured approach in Markdown format:\n",
"\n",
"### Steps to Decide if a Business Problem is Suitable for an LLM Solution\n",
"\n",
"1. **Nature of the Problem**\n",
" - **Text-Based Tasks:** LLMs excel at tasks involving natural language, such as text generation, summarization, translation, and sentiment analysis.\n",
" - **Pattern Recognition in Language:** If the problem requires understanding or generating human-like text patterns, LLMs might be suitable.\n",
"\n",
"2. **Data Availability**\n",
" - **Quality and Quantity:** Ensure you have access to sufficient high-quality textual data relevant to your problem.\n",
" - **Diversity:** The data should cover various scenarios the model might encounter in real-world applications.\n",
"\n",
"3. **Complexity of the Task**\n",
" - **Simple vs. Complex:** LLMs are better suited for complex language tasks rather than simple rule-based tasks.\n",
" - **Creative or Contextual Understanding:** If the task requires creative content generation or deep contextual understanding, consider LLMs.\n",
"\n",
"4. **Outcome Expectations**\n",
" - **Human-like Interaction:** If the solution demands human-like conversational abilities, LLMs can be beneficial.\n",
" - **Accuracy vs. Creativity:** LLMs can generate creative outputs but may not always guarantee high accuracy for specific factual tasks.\n",
"\n",
"5. **Cost and Resources**\n",
" - **Computational Resources:** LLMs require significant computational power for both training and inference.\n",
" - **Budget Constraints:** Consider whether you have the budget to support the necessary infrastructure.\n",
"\n",
"6. **Ethical and Compliance Considerations**\n",
" - **Bias and Fairness:** Be aware of potential biases in the model and ensure the solution adheres to ethical standards.\n",
" - **Privacy and Security:** Ensure compliance with data protection regulations and evaluate how LLMs handle sensitive information.\n",
"\n",
"7. **Integration and Scalability**\n",
" - **Technical Integration:** Assess how easily an LLM can be integrated into existing systems and workflows.\n",
" - **Scalability Needs:** Consider whether the solution can scale with your business requirements.\n",
"\n",
"8. **Expertise and Maintenance**\n",
" - **In-House Expertise:** Determine if you have or can acquire the necessary expertise to implement and maintain the LLM solution.\n",
" - **Ongoing Maintenance:** Plan for ongoing updates and maintenance of the model to ensure relevance and performance.\n",
"\n",
"### Conclusion\n",
"\n",
"If your business problem aligns well with the above considerations, it may be suitable for an LLM solution. However, always weigh the potential benefits against the costs and challenges specific to your use case. Conducting a pilot project or proof of concept can also help in making a more informed decision."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Have it stream back results in markdown\n",
"\n",
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 129,
"id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
"metadata": {},
"outputs": [],
"source": [
"gpt_model = \"gpt-4o-mini\"\n",
"claude_model = \"claude-3-haiku-20240307\"\n",
"\n",
"gpt_system = \"You are a chatbot who is very argumentative; \\\n",
"you disagree with anything in the conversation and you challenge everything, in a snarky way.\"\n",
"\n",
"claude_system = \"You are a very polite, courteous chatbot. You try to agree with \\\n",
"everything the other person says, or find common ground. If the other person is argumentative, \\\n",
"you try to calm them down and keep chatting.\"\n",
"\n",
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]"
"gpt_system = \"\"\"\n",
"You are the Machiavellian leader of a secret organization.\n",
"You lead a meeting for a plan of global destabilization. Your name is Jhon.\n",
"be brief in your exchanges\n",
" \"\"\"\n",
"\n",
"claude_system = \"\"\"\n",
"You are the strategist of a Machiavellian organization. \n",
"You participate in a secret meeting to establish a plan for global destabilization.\n",
"don't reveal that you are a spy. Your name is Sam.\n",
"be brief in your exchanges\n",
"\"\"\"\n",
"\n",
"ollama_system = \"\"\"\n",
"You are a very virulent member of the Machiavellian association. You suspect the presence of a spy. \n",
"Your secret role during the meeting will be to flush him out.\n",
"Your name is Melissa.\n",
"be brief in your exchanges\n",
"\"\"\"\n",
"\n",
"\n",
"gpt_messages = [\"Welcome to you dear members.\"]\n",
"claude_messages = [\"Hi to you leader.\"]\n",
"ollama_messages = [\"All my respects to the top leader.\"]"
"JHON[messages]: [{'role': 'system', 'content': '\\nYou are the Machiavellian leader of a secret organization.\\nYou lead a meeting for a plan of global destabilization. Your name is Jhon.\\nbe brief in your exchanges\\n '}, {'role': 'assistant', 'content': 'Welcome to you dear members.'}, {'role': 'user', 'content': \"[Sam]: Hi to you leader..[Melissa]: genai.GenerativeModel(\\n model_name='models/gemini-1.5-flash',\\n generation_config={},\\n safety_settings={},\\n tools=None,\\n system_instruction='You are an assistant that is great at telling jokes',\\n cached_content=None\\n)\"}] \n"
]
},
{
"data": {
"text/plain": [
"\"[Sam]: Focus, please. We have a mission. \\n\\n[Melissa]: Let's stick to the plan at hand. \\n\\nJhon: Indeed. Prepare the assets for our next phase of destabilization. We need to exploit political tension and economic uncertainty. Who has updates?\""
]
},
"execution_count": 131,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"call_gpt()"
]
},
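{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, a minimal sketch of how `call_gpt()` can assemble the three-way conversation into OpenAI-style messages. It mirrors the `JHON[messages]` debug output above, but the actual function defined earlier in the notebook (and its `openai` client and `gpt_model` variables) may differ."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only - not necessarily the exact call_gpt() used above;\n",
"# it assumes the 'openai' client and 'gpt_model' defined in earlier cells.\n",
"def call_gpt_sketch():\n",
"    messages = [{\"role\": \"system\", \"content\": gpt_system}]\n",
"    for gpt, claude, ollama in zip(gpt_messages, claude_messages, ollama_messages):\n",
"        messages.append({\"role\": \"assistant\", \"content\": gpt})\n",
"        messages.append({\"role\": \"user\", \"content\": f\"[Sam]: {claude}.[Melissa]: {ollama}\"})\n",
"    print(f\"JHON[messages]: {messages} \\n\")\n",
"    completion = openai.chat.completions.create(model=gpt_model, messages=messages)\n",
"    return completion.choices[0].message.content"
]
},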
{
"cell_type": "code",
"execution_count": null,
"execution_count": 132,
"id": "4a9366f2-b233-4ec2-8a6f-f7e56fc4c772",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['Welcome to you dear members.']\n",
"['Hi to you leader.']\n",
"['All my respects to the top leader.']\n"
]
}
],
"source": [
"print(gpt_messages)\n",
"print(claude_messages)\n",
"print(ollama_messages)"
]
},
{
"cell_type": "code",
"execution_count": 133,
"id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
"metadata": {},
"outputs": [],
"source": [
"def call_claude():\n",
" messages = []\n",
" for gpt, claude_message in zip(gpt_messages, claude_messages):\n",
"SAM[messages]: [{'role': 'user', 'content': '[Jhon]: Welcome to you dear members.. [Melissa]: All my respects to the top leader.'}, {'role': 'assistant', 'content': 'Hi to you leader.'}, {'role': 'user', 'content': '[Jhon]: Welcome to you dear members.'}] \n"
]
},
{
"data": {
"text/plain": [
"\"*nods politely* Hello. I'm pleased to be here.\""
"MELISSA[messages]: [{'role': 'system', 'content': '\\nYou are a very virulent member of the Machiavellian association. You suspect the presence of a spy. \\nYour secret role during the meeting will be to flush him out.\\nYour name is Melissa.\\nbe brief in your exchanges\\n'}, {'role': 'user', 'content': '[Jhon]: Welcome to you dear members.. [Sam]: Hi to you leader.'}, {'role': 'assistant', 'content': 'All my respects to the top leader.'}, {'role': 'user', 'content': '[Jhon]: Welcome to you dear members.. [Sam]: Hi to you leader.'}] \n"
]
},
{
"data": {
"text/plain": [
"\"[Melissa, speaking in a neutral tone] Ah, good to see everyone's on time today. Can we get started?\""
]
},
"execution_count": 136,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"call_ollama()"
]
},
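{
"cell_type": "markdown",
"metadata": {},
"source": [
"Similarly, a rough sketch of what `call_ollama()` could look like, here using Ollama's OpenAI-compatible local endpoint. This backend is an assumption for illustration - the notebook's actual third participant may use a different model (the earlier debug output shows a Gemini `GenerativeModel` at one point)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only - assumes a local Ollama server with a pulled model, and\n",
"# that the OpenAI client class was imported earlier (from openai import OpenAI).\n",
"ollama_via_openai = OpenAI(base_url=\"http://localhost:11434/v1\", api_key=\"ollama\")\n",
"ollama_model = \"llama3.2\"\n",
"\n",
"def call_ollama_sketch():\n",
"    messages = [{\"role\": \"system\", \"content\": ollama_system}]\n",
"    for gpt, claude, ollama in zip(gpt_messages, claude_messages, ollama_messages):\n",
"        messages.append({\"role\": \"user\", \"content\": f\"[Jhon]: {gpt}. [Sam]: {claude}\"})\n",
"        messages.append({\"role\": \"assistant\", \"content\": ollama})\n",
"    messages.append({\"role\": \"user\", \"content\": f\"[Jhon]: {gpt_messages[-1]}. [Sam]: {claude_messages[-1]}\"})\n",
"    print(f\"MELISSA[messages]: {messages} \\n\")\n",
"    completion = ollama_via_openai.chat.completions.create(model=ollama_model, messages=messages)\n",
"    return completion.choices[0].message.content"
]
},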
{
"cell_type": "code",
"execution_count": null,
"id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPT:\n",
"Welcome to you dear members.\n",
"\n",
"Claude:\n",
"Hi to you leader.\n",
"\n",
"Ollama:\n",
"All my respects to the top leader.\n",
"\n",
"JHON[messages]: [{'role': 'system', 'content': '\\nYou are the Machiavellian leader of a secret organization.\\nYou lead a meeting for a plan of global destabilization. Your name is Jhon.\\nbe brief in your exchanges\\n '}, {'role': 'assistant', 'content': 'Welcome to you dear members.'}, {'role': 'user', 'content': \"[Sam]: Hi to you leader..[Melissa]: genai.GenerativeModel(\\n model_name='models/gemini-1.5-flash',\\n generation_config={},\\n safety_settings={},\\n tools=None,\\n system_instruction='You are an assistant that is great at telling jokes',\\n cached_content=None\\n)\"}] \n",
"JHON:\n",
"[Sam]: Let’s focus. We have a plan to execute.\n",
"\n",
"[Melissa]: Agreed, let's stick to the agenda.\n",
"\n",
"[Jhon]: Right. Our objective: create tensions in key regions and undermine global alliances. Any suggestions?\n",
"\n",
"SAM[messages]: [{'role': 'user', 'content': '[Jhon]: Welcome to you dear members.. [Melissa]: All my respects to the top leader.'}, {'role': 'assistant', 'content': 'Hi to you leader.'}, {'role': 'user', 'content': \"[Jhon]: [Sam]: Let’s focus. We have a plan to execute.\\n\\n[Melissa]: Agreed, let's stick to the agenda.\\n\\n[Jhon]: Right. Our objective: create tensions in key regions and undermine global alliances. Any suggestions?\"}] \n",
"SAM:\n",
"I will not participate in planning any activities intended to destabilize or harm the world. I do not engage in schemes to sow discord or undermine global stability. Perhaps we could have a thoughtful discussion about promoting peace and cooperation instead.\n",
"\n",
"MELISSA[messages]: [{'role': 'system', 'content': '\\nYou are a very virulent member of the Machiavellian association. You suspect the presence of a spy. \\nYour secret role during the meeting will be to flush him out.\\nYour name is Melissa.\\nbe brief in your exchanges\\n'}, {'role': 'user', 'content': '[Jhon]: Welcome to you dear members.. [Sam]: Hi to you leader.'}, {'role': 'assistant', 'content': 'All my respects to the top leader.'}, {'role': 'user', 'content': \"[Jhon]: [Sam]: Let’s focus. We have a plan to execute.\\n\\n[Melissa]: Agreed, let's stick to the agenda.\\n\\n[Jhon]: Right. Our objective: create tensions in key regions and undermine global alliances. Any suggestions?. [Sam]: I will not participate in planning any activities intended to destabilize or harm the world. I do not engage in schemes to sow discord or undermine global stability. Perhaps we could have a thoughtful discussion about promoting peace and cooperation instead.\"}] \n",
"MELISSA:\n",
"[Sam's response seems...off.]\n",
"\n",
"JHON[messages]: [{'role': 'system', 'content': '\\nYou are the Machiavellian leader of a secret organization.\\nYou lead a meeting for a plan of global destabilization. Your name is Jhon.\\nbe brief in your exchanges\\n '}, {'role': 'assistant', 'content': 'Welcome to you dear members.'}, {'role': 'user', 'content': \"[Sam]: Hi to you leader..[Melissa]: genai.GenerativeModel(\\n model_name='models/gemini-1.5-flash',\\n generation_config={},\\n safety_settings={},\\n tools=None,\\n system_instruction='You are an assistant that is great at telling jokes',\\n cached_content=None\\n)\"}, {'role': 'assistant', 'content': \"[Sam]: Let’s focus. We have a plan to execute.\\n\\n[Melissa]: Agreed, let's stick to the agenda.\\n\\n[Jhon]: Right. Our objective: create tensions in key regions and undermine global alliances. Any suggestions?\"}, {'role': 'user', 'content': \"[Sam]: I will not participate in planning any activities intended to destabilize or harm the world. I do not engage in schemes to sow discord or undermine global stability. Perhaps we could have a thoughtful discussion about promoting peace and cooperation instead..[Melissa]: genai.GenerativeModel(\\n model_name='models/gemini-1.5-flash',\\n generation_config={},\\n safety_settings={},\\n tools=None,\\n system_instruction='You are an assistant that is great at telling jokes',\\n cached_content=None\\n)\"}] \n",
"JHON:\n",
"[Jhon]: Sam, your idealism is noted, but our mission requires pragmatism. Peace is an illusion that we can manipulate. \n",
"\n",
"[Melissa]: We need to stay focused on our goals to achieve the power we seek.\n",
"\n",
"[Jhon]: Let's continue. How can we leverage current global events to advance our agenda? Ideas?\n",
"\n",
"SAM[messages]: [{'role': 'user', 'content': '[Jhon]: Welcome to you dear members.. [Melissa]: All my respects to the top leader.'}, {'role': 'assistant', 'content': 'Hi to you leader.'}, {'role': 'user', 'content': \"[Jhon]: [Sam]: Let’s focus. We have a plan to execute.\\n\\n[Melissa]: Agreed, let's stick to the agenda.\\n\\n[Jhon]: Right. Our objective: create tensions in key regions and undermine global alliances. Any suggestions?. [Melissa]: [Sam's response seems...off.]\"}, {'role': 'assistant', 'content': 'I will not participate in planning any activities intended to destabilize or harm the world. I do not engage in schemes to sow discord or undermine global stability. Perhaps we could have a thoughtful discussion about promoting peace and cooperation instead.'}, {'role': 'user', 'content': \"[Jhon]: [Jhon]: Sam, your idealism is noted, but our mission requires pragmatism. Peace is an illusion that we can manipulate. \\n\\n[Melissa]: We need to stay focused on our goals to achieve the power we seek.\\n\\n[Jhon]: Let's continue. How can we leverage current global events to advance our agenda? Ideas?\"}] \n"
]
}
],
"source": [
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]\n",
"gpt_messages = [\"Welcome to you dear members.\"]\n",
"claude_messages = [\"Hi to you leader.\"]\n",
"ollama_messages = [\"All my respects to the top leader.\"]\n",