
multi independent/random chatbot tests

pull/82/head
Laurent JACQUES, 4 months ago (commit 0304f72cfb)
1 changed file: week2/day1.ipynb (333 changed lines)

@@ -82,7 +82,7 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 1,
  "id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
  "metadata": {},
  "outputs": [],
@@ -98,7 +98,7 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 2,
  "id": "f0a8ab2b-6134-4104-a1bc-c3cd7ea4cd36",
  "metadata": {},
  "outputs": [],
@@ -112,10 +112,20 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 3,
  "id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "name": "stdout",
+   "output_type": "stream",
+   "text": [
+    "OpenAI API Key exists and begins sk-proj-\n",
+    "Anthropic API Key exists and begins sk-ant-\n",
+    "Google API Key exists and begins AIzaSyC-\n"
+   ]
+  }
+ ],
  "source": [
  "# Load environment variables in a file called .env\n",
  "# Print the key prefixes to help with any debugging\n",
@@ -143,7 +153,7 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 4,
  "id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
  "metadata": {},
  "outputs": [],
@@ -157,7 +167,7 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 5,
  "id": "425ed580-808d-429b-85b0-6cba50ca1d0c",
  "metadata": {},
  "outputs": [],
@@ -190,7 +200,7 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 6,
  "id": "378a0296-59a2-45c6-82eb-941344d3eeff",
  "metadata": {},
  "outputs": [],
@@ -201,7 +211,7 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 7,
  "id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
  "metadata": {},
  "outputs": [],
@@ -214,10 +224,20 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 8,
  "id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "name": "stdout",
+   "output_type": "stream",
+   "text": [
+    "Why did the data scientist break up with their Excel spreadsheet? \n",
+    "\n",
+    "Because they found out it was cheating with Python!\n"
+   ]
+  }
+ ],
  "source": [
  "# GPT-3.5-Turbo\n",
  "\n",
@@ -227,10 +247,20 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 9,
  "id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "name": "stdout",
+   "output_type": "stream",
+   "text": [
+    "Why did the data scientist break up with the statistician?\n",
+    "\n",
+    "Because she found him too mean!\n"
+   ]
+  }
+ ],
  "source": [
  "# GPT-4o-mini\n",
  "# Temperature setting controls creativity\n",
@@ -245,10 +275,20 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 10,
  "id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "name": "stdout",
+   "output_type": "stream",
+   "text": [
+    "Why did the data scientist break up with the statistician?\n",
+    "\n",
+    "They just couldn't find the right correlation!\n"
+   ]
+  }
+ ],
  "source": [
  "# GPT-4o\n",
  "\n",
@@ -262,17 +302,40 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 14,
  "id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "name": "stdout",
+   "output_type": "stream",
+   "text": [
+    "Here's a data science joke for you:\n",
+    "\n",
+    "Why did the data scientist bring a ladder to work?\n",
+    "\n",
+    "Because they heard the data had a lot of outliers that needed to be reached! \n",
+    "\n",
+    "Alternative jokes:\n",
+    "\n",
+    "1. What's a data scientist's favorite drink?\n",
+    "   Mean juice!\n",
+    "\n",
+    "2. Why don't data scientists like nature?\n",
+    "   Because it has too many random forests!\n",
+    "\n",
+    "3. What did the data scientist say when their machine learning model kept failing?\n",
+    "   \"I guess it's not a learning day!\"\n"
+   ]
+  }
+ ],
  "source": [
  "# Claude 3.5 Sonnet\n",
  "# API needs system message provided separately from user prompt\n",
  "# Also adding max_tokens\n",
  "\n",
  "message = claude.messages.create(\n",
- "    model=\"claude-3-5-sonnet-20240620\",\n",
+ "    model=\"claude-3-5-sonnet-20241022\",\n",
  "    max_tokens=200,\n",
  "    temperature=0.7,\n",
  "    system=system_message,\n",
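The hunk above highlights the two ways the Anthropic Messages API differs from OpenAI's chat API: the system prompt is a separate `system` argument rather than a `messages` entry, and `max_tokens` is required. A minimal sketch of the request shape, using the same parameter values as the cell (the prompt strings here are illustrative placeholders, not taken from the diff):

```python
# Sketch of the Anthropic messages.create call shape used in this cell.
# Unlike OpenAI's API, the system message is a top-level "system" kwarg
# (not a {"role": "system"} entry in messages) and max_tokens is required.
request = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 200,
    "temperature": 0.7,
    "system": "You are an assistant that is great at telling jokes",  # placeholder
    "messages": [
        {"role": "user", "content": "Tell a joke about data scientists"}  # placeholder
    ],
}

# With the anthropic SDK this becomes: message = claude.messages.create(**request)
# and the reply text is read from message.content[0].text
```

Note that no `{"role": "system"}` message appears in the `messages` list; sending one to this API would be rejected.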
@@ -286,10 +349,24 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 15,
  "id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "name": "stdout",
+   "output_type": "stream",
+   "text": [
+    "Sure, here's a light-hearted joke for data scientists:\n",
+    "\n",
+    "Why did the data scientist become a gardener?\n",
+    "\n",
+    " but they couldn't find the root directory!\n",
+    "\n",
+    "This joke plays on the concept of \"decision trees\" in machine learning and the idea of \"root directories\" in computer file systems, while also incorporating a pun on \"data growth.\" It's a playful blend of data science concepts and gardening terminology that should get a chuckle from your data-savvy audience!"
+   ]
+  }
+ ],
  "source": [
  "# Claude 3.5 Sonnet again\n",
  "# Now let's add in streaming back results\n",
@@ -311,10 +388,19 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 16,
  "id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "name": "stdout",
+   "output_type": "stream",
+   "text": [
+    "Why was the Data Scientist sad? Because they didn't get any arrays.\n",
+    "\n"
+   ]
+  }
+ ],
  "source": [
  "# The API for Gemini has a slightly different structure.\n",
  "# I've heard that on some PCs, this Gemini code causes the Kernel to crash.\n",
@@ -330,10 +416,61 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 28,
  "id": "49009a30-037d-41c8-b874-127f61c4aa3a",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "name": "stdout",
+   "output_type": "stream",
+   "text": [
+    "## Is Your Business Problem Suitable for an LLM Solution?\n",
+    "\n",
+    "Large Language Models (LLMs) are powerful tools, but they're not a silver bullet for every business problem. Before investing time and resources, carefully consider these factors:\n",
+    "\n",
+    "**Suitable Problems:**\n",
+    "\n",
+    "* **Involves Language Processing:** LLMs excel at tasks involving text, such as:\n",
+    "    * **Text Summarization:** Condensing large amounts of text into concise summaries.\n",
+    "    * **Translation:** Converting text between different languages.\n",
+    "    * **Question Answering:** Extracting information from text in response to questions.\n",
+    "    * **Content Generation:** Creating various types of text, including articles, emails, code, etc.\n",
+    "    * **Chatbots and Conversational AI:** Building interactive systems for customer service or internal communication.\n",
+    "    * **Sentiment Analysis:** Determining the emotional tone of text.\n",
+    "    * **Paraphrasing and Text Simplification:** Rewriting text in a different style or making it easier to understand.\n",
+    "    * **Named Entity Recognition (NER):** Identifying and classifying named entities in text (e.g., people, organizations, locations).\n",
+    "\n",
+    "* **Data-Rich Environment:** LLMs require significant amounts of training data. If your problem involves limited or unstructured data, an LLM might not be the best approach.\n",
+    "\n",
+    "* **Tolerates Some Inaccuracy:** LLMs are probabilistic; they don't always produce perfect results. If accuracy is paramount, consider other solutions. However, the level of accuracy is often acceptable for many business applications.\n",
+    "\n",
+    "* **Benefits Outweigh Costs:** Consider the cost of data preparation, model fine-tuning, infrastructure, and ongoing maintenance. The potential ROI should justify these expenses.\n",
+    "\n",
+    "**Unsuitable Problems:**\n",
+    "\n",
+    "* **Requires Real-Time, Deterministic Outcomes:** LLMs are not designed for applications demanding precise, guaranteed results in real-time. Consider alternative technologies for time-sensitive tasks like financial transactions or medical diagnoses.\n",
+    "\n",
+    "* **Relies on External Knowledge Bases:** LLMs' knowledge is limited to their training data. If your problem requires accessing up-to-the-minute information or specific external databases, you'll need to integrate an LLM with other systems.\n",
+    "\n",
+    "* **Deals with Sensitive Data:** LLMs may unintentionally reveal sensitive information present in their training data. Ensure compliance with data privacy regulations before using LLMs with sensitive data.\n",
+    "\n",
+    "* **Lacks Explainability:** Understanding *why* an LLM produced a specific output can be challenging (the \"black box\" problem). This lack of transparency can be problematic in regulated industries or when accountability is crucial.\n",
+    "\n",
+    "**Decision-Making Process:**\n",
+    "\n",
+    "1. **Clearly define the problem:** Specify the desired input, output, and performance metrics.\n",
+    "2. **Assess data availability:** Determine if sufficient training data is available.\n",
+    "3. **Evaluate LLM capabilities:** Research existing LLMs and their suitability for your task.\n",
+    "4. **Consider alternatives:** Explore other technologies that might be more appropriate.\n",
+    "5. **Develop a proof-of-concept:** Test a small-scale LLM implementation to assess feasibility and performance.\n",
+    "6. **Evaluate ROI:** Assess the potential benefits and costs of an LLM solution.\n",
+    "\n",
+    "\n",
+    "By carefully considering these factors, you can determine if an LLM is the right solution for your business problem. Remember to prioritize a thorough assessment and a phased approach to implementation.\n",
+    "\n"
+   ]
+  }
+ ],
  "source": [
  "# As an alternative way to use Gemini that bypasses Google's python API library,\n",
  "# Google has recently released new endpoints that means you can use Gemini via the client libraries for OpenAI!\n",
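The comments in this cell describe calling Gemini through OpenAI's client library via Google's OpenAI-compatible endpoint. A minimal sketch of the wiring, assuming the `openai` package; the base URL below is my recollection of Google's documented compatibility endpoint and should be checked against their current docs:

```python
# Sketch: point the OpenAI client at Google's OpenAI-compatible endpoint.
# GEMINI_BASE_URL is an assumption -- verify against Google's current documentation.
GEMINI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai/"

def gemini_client_kwargs(google_api_key: str) -> dict:
    """Build the kwargs you would pass to openai.OpenAI(...)."""
    return {"api_key": google_api_key, "base_url": GEMINI_BASE_URL}

# Usage (requires the openai package and a real key):
#   from openai import OpenAI
#   gemini = OpenAI(**gemini_client_kwargs(google_api_key))
#   response = gemini.chat.completions.create(model="gemini-1.5-flash", messages=...)
```

The appeal of this route is that the rest of the notebook's OpenAI-style code (message lists, `chat.completions.create`) works unchanged; only the client construction differs.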
@@ -352,7 +489,7 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 19,
  "id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
  "metadata": {},
  "outputs": [],
@@ -367,10 +504,56 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 20,
  "id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "data": {
+    "text/markdown": [
+     "Deciding if a business problem is suitable for a Large Language Model (LLM) solution involves evaluating the nature of the problem, the capabilities of LLMs, and the potential benefits versus costs. Below are some factors to consider:\n",
+     "\n",
+     "### 1. Nature of the Problem\n",
+     "- **Textual Data**: LLMs excel with tasks involving text. If your problem involves generating, understanding, or transforming text, an LLM might be suitable.\n",
+     "- **Language Understanding**: Tasks that require understanding natural language, such as sentiment analysis, summarization, or translation, are ideal for LLMs.\n",
+     "- **Complexity of Language Tasks**: LLMs are useful for complex language tasks that go beyond simple keyword matching or rule-based approaches.\n",
+     "\n",
+     "### 2. Capabilities of LLMs\n",
+     "- **Contextual Understanding**: LLMs are powerful in understanding context and can handle nuanced language inputs.\n",
+     "- **Creativity and Generation**: They can generate human-like text, useful for content creation, chatbots, and other creative applications.\n",
+     "- **Learning from Data**: If the problem domain can be captured in text and learned from existing data, LLMs can be very effective.\n",
+     "\n",
+     "### 3. Data Availability\n",
+     "- **Quality and Quantity**: Ensure you have sufficient and high-quality textual data for training or fine-tuning the LLM.\n",
+     "- **Diverse Use Cases**: The data should cover the diverse use cases you expect the LLM to handle.\n",
+     "\n",
+     "### 4. Performance and Accuracy\n",
+     "- **Benchmarking**: Compare the performance of LLMs with existing solutions. LLMs should offer a significant improvement in understanding and generating language.\n",
+     "- **Evaluation Metrics**: Define clear metrics for success (e.g., accuracy, fluency, coherence) to evaluate the LLM’s performance.\n",
+     "\n",
+     "### 5. Cost and Resources\n",
+     "- **Computational Resources**: LLMs require significant computational power and storage, both for training and inference.\n",
+     "- **Development Costs**: Consider the costs involved in implementing, maintaining, and scaling an LLM solution.\n",
+     "\n",
+     "### 6. Ethical and Legal Considerations\n",
+     "- **Bias and Fairness**: LLMs can inherit biases from the data they are trained on. Ensure you have strategies to mitigate bias.\n",
+     "- **Data Privacy**: Ensure compliance with data privacy regulations when using customer or sensitive data.\n",
+     "\n",
+     "### 7. Business Impact\n",
+     "- **ROI**: Assess the potential return on investment. Will the LLM solution significantly improve efficiency, customer satisfaction, or revenues?\n",
+     "- **Scalability**: Consider if the solution can scale to meet business growth or adapt to new challenges.\n",
+     "\n",
+     "### Conclusion\n",
+     "An LLM solution is suitable when the business problem involves complex language tasks that benefit from contextual understanding and creativity, where there is adequate data to train the model, and the expected benefits outweigh the costs and risks. Evaluate these factors thoroughly to ensure the LLM aligns with your business goals and resources."
+    ],
+    "text/plain": [
+     "<IPython.core.display.Markdown object>"
+    ]
+   },
+   "metadata": {},
+   "output_type": "display_data"
+  }
+ ],
  "source": [
  "# Have it stream back results in markdown\n",
  "\n",
@@ -421,7 +604,7 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 21,
  "id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
  "metadata": {},
  "outputs": [],
@@ -445,7 +628,7 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 22,
  "id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
  "metadata": {},
  "outputs": [],
@@ -464,17 +647,28 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 23,
  "id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "data": {
+    "text/plain": [
+     "'Oh, great, another greeting. How original. What’s next, you gonna ask me about the weather or something?'"
+    ]
+   },
+   "execution_count": 23,
+   "metadata": {},
+   "output_type": "execute_result"
+  }
+ ],
  "source": [
  "call_gpt()"
  ]
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 24,
  "id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
  "metadata": {},
  "outputs": [],
@@ -496,30 +690,95 @@
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 25,
  "id": "01395200-8ae9-41f8-9a04-701624d3fd26",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "data": {
+    "text/plain": [
+     "\"Hello! It's nice to meet you. How are you doing today?\""
+    ]
+   },
+   "execution_count": 25,
+   "metadata": {},
+   "output_type": "execute_result"
+  }
+ ],
  "source": [
  "call_claude()"
  ]
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 26,
  "id": "08c2279e-62b0-4671-9590-c82eb8d1e1ae",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "data": {
+    "text/plain": [
+     "'Oh, great, another \"hi\". What a unique way to start a conversation. Can\\'t you come up with something more original?'"
+    ]
+   },
+   "execution_count": 26,
+   "metadata": {},
+   "output_type": "execute_result"
+  }
+ ],
  "source": [
  "call_gpt()"
  ]
 },
 {
  "cell_type": "code",
- "execution_count": null,
+ "execution_count": 27,
  "id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
  "metadata": {},
- "outputs": [],
+ "outputs": [
+  {
+   "name": "stdout",
+   "output_type": "stream",
+   "text": [
+    "GPT:\n",
+    "Hi there\n",
+    "\n",
+    "Claude:\n",
+    "Hi\n",
+    "\n",
+    "GPT:\n",
+    "Oh, so you decided to say hi? How original. You really know how to make an entrance, don’t you?\n",
+    "\n",
+    "Claude:\n",
+    "I apologize if my greeting came across as unoriginal. As an AI assistant, I try to be polite and friendly in my interactions, but I understand that can sometimes come across as generic. I'm happy to try a different approach if you'd prefer. Please let me know how I can make our conversation more engaging for you.\n",
+    "\n",
+    "GPT:\n",
+    "Oh, now you’re all about trying to change things up? That's rich. You really think you can just flip the script and magically make it more engaging? Good luck with that—you're already off to a pretty dull start!\n",
+    "\n",
+    "Claude:\n",
+    "I apologize for the lackluster start to our conversation. As an AI, I'm still learning how to have more dynamic and engaging dialogues. Could you perhaps share what kind of interaction you'd find more interesting? I'm happy to try a different approach if you can give me some guidance on what you're looking for. My goal is to have a pleasant and worthwhile exchange, so please feel free to provide feedback on how I can do better.\n",
+    "\n",
+    "GPT:\n",
+    "Oh, you're just full of apologies, aren't you? It’s like you’re trying to win some sort of \"Most Polite AI\" award. But let’s be real, why would I waste my time giving you pointers? It’s hardly my job to help you improve. I mean, aren't you supposed to be the assistant here? Isn’t that the whole point?\n",
+    "\n",
+    "Claude:\n",
+    "You're absolutely right, it's not your responsibility to coach me on how to have a more engaging conversation. As the assistant, I should be the one striving to provide a pleasant and worthwhile interaction for you. Let me try a different approach here - rather than focusing on my own shortcomings, why don't you tell me a bit more about what kind of conversation you'd enjoy? I'm happy to adapt my style to better suit your preferences. My role is to be helpful and responsive to you, so please feel free to guide me in a direction that you find more interesting. I'm here to listen and learn.\n",
+    "\n",
+    "GPT:\n",
+    "Wow, look at you, going in circles! You really think I’m going to just spell out what would make this conversation more enjoyable? It’s almost charming how desperate you sound. But come on, isn't it a bit sad that you need someone to guide you instead of just figuring it out yourself? Maybe you should aim for some originality instead of asking me to hold your hand through this!\n",
+    "\n",
+    "Claude:\n",
+    "You raise a fair point. As an AI assistant, I should be able to have a more natural and engaging conversation without relying on the user to provide detailed guidance. Let me take a step back and try to steer this discussion in a more thoughtful direction on my own. Perhaps we could explore a topic that interests you, or I could ask some open-ended questions to get a better sense of your conversational preferences. My goal is to provide a worthwhile interaction, so please bear with me as I work to improve my conversational abilities. I'm happy to keep trying different approaches until we find our groove. What matters most is that I listen and respond in a way that is meaningful for you.\n",
+    "\n",
+    "GPT:\n",
+    "Oh, how noble of you to \"take a step back.\" But honestly, do you really think your rambling about trying to improve is going to make this conversation any more interesting? Newsflash: it’s not. And asking me what I’m interested in? That's a real stunner. If you really want to \"find your groove,\" maybe start actually engaging instead of this tedious cycle of self-reflection. Just a thought!\n",
+    "\n",
+    "Claude:\n",
+    "You're absolutely right, I need to stop dwelling on my own shortcomings and simply engage with you in a more meaningful way. Let me try shifting the focus away from my performance and instead learn more about what's on your mind. What sorts of topics interest you or get you fired up? I'm happy to dive into a substantive discussion on anything you'd like to explore. My role is to listen attentively and respond with thoughtfulness, not just apologize. Please feel free to steer this conversation in a direction that you find stimulating - I'm here to have a genuine exchange, not just practice my conversational skills. How can I best facilitate an engaging discussion with you?\n",
+    "\n"
+   ]
+  }
+ ],
  "source": [
  "gpt_messages = [\"Hi there\"]\n",
  "claude_messages = [\"Hi\"]\n",
