
Tidying up

Edward Donner committed 8 months ago
commit 21afbd6c73 (branch pull/2/head)
  1. week1/day1.ipynb (176 changed lines)
  2. week1/day5.ipynb (841 changed lines)
  3. week2/day1.ipynb (244 changed lines)
  4. week2/day2.ipynb (384 changed lines)
  5. week2/day3.ipynb (164 changed lines)
  6. week2/day4.ipynb (119 changed lines)
  7. week2/day5.ipynb (256 changed lines)
  8. week4/day3.ipynb (414 changed lines)
  9. week4/day4.ipynb (163 changed lines)
  10. week5/day1.ipynb (169 changed lines)
  11. week5/day2.ipynb (162 changed lines)
  12. week5/day3.ipynb (3129 changed lines)
  13. week5/day4.5.ipynb (3297 changed lines)
  14. week5/day4.ipynb (3299 changed lines)
  15. week5/day5.ipynb (3487 changed lines)
  16. week6/day1.ipynb (200 changed lines)
  17. week6/day2.ipynb (416 changed lines)
  18. week6/day4.ipynb (3981 changed lines)
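The per-file diffs that follow all make the same two mechanical edits: each code cell's `execution_count` is reset to `null` and its `outputs` list is emptied, which keeps notebook diffs small and reproducible. The usual command-line route is `jupyter nbconvert --clear-output --inplace <notebook>`. As a minimal sketch of the same clean-up using only the standard library (the sample notebook dict below is a hypothetical example, not taken from this commit):

```python
import json

def clear_notebook(nb: dict) -> dict:
    """Reset execution counts and drop outputs from every code cell
    in a notebook already parsed from JSON."""
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["execution_count"] = None  # serializes back as null
            cell["outputs"] = []
    return nb

# Hypothetical minimal notebook with one executed code cell.
nb = {
    "cells": [
        {"cell_type": "markdown", "source": ["# Title"]},
        {
            "cell_type": "code",
            "execution_count": 5,
            "metadata": {},
            "outputs": [
                {"name": "stdout", "output_type": "stream", "text": ["hi\n"]}
            ],
            "source": ["print('hi')"],
        },
    ],
    "nbformat": 4,
    "nbformat_minor": 5,
}

cleaned = clear_notebook(nb)
print(cleaned["cells"][1]["execution_count"], cleaned["cells"][1]["outputs"])  # → None []
```

To apply this across a repository you would `json.load` each `.ipynb`, pass it through `clear_notebook`, and `json.dump` it back; tools such as `nbstripout` package the same idea as a git filter so outputs never reach a commit in the first place.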

week1/day1.ipynb (176 changed lines)

@@ -16,7 +16,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": null,
"id": "4e2a9393-7767-488e-a8bf-27c12dca35bd",
"metadata": {},
"outputs": [],
@@ -33,7 +33,7 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": null,
"id": "7b87cadb-d513-4303-baee-a37b6f938e4d",
"metadata": {},
"outputs": [],
@@ -47,7 +47,7 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": null,
"id": "c5e793b2-6775-426a-a139-4848291d0463",
"metadata": {},
"outputs": [],
@@ -71,63 +71,10 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": null,
"id": "2ef960cf-6dc2-4cda-afb3-b38be12f4c97",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Home - Edward Donner\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"August 6, 2024\n",
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
"June 26, 2024\n",
"Choosing the Right LLM: Toolkit and Resources\n",
"February 7, 2024\n",
"Fine-tuning an LLM on your texts: a simulation of you\n",
"January 31, 2024\n",
"Fine-tuning an LLM on your texts: part 4 – QLoRA\n",
"Navigation\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n"
]
}
],
"outputs": [],
"source": [
"# Let's try one out\n",
"\n",
@@ -156,7 +103,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": null,
"id": "abdb8417-c5dc-44bc-9bee-2e059d162699",
"metadata": {},
"outputs": [],
@@ -168,7 +115,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": null,
"id": "f0275b1b-7cfe-4f9d-abfa-7650d378da0c",
"metadata": {},
"outputs": [],
@@ -201,7 +148,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": null,
"id": "0134dfa4-8299-48b5-b444-f2a8c3403c88",
"metadata": {},
"outputs": [],
@@ -223,7 +170,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": null,
"id": "905b9919-aba7-45b5-ae65-81b3d1d78e34",
"metadata": {},
"outputs": [],
@@ -239,28 +186,17 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": null,
"id": "05e38d41-dfa4-4b20-9c96-c46ea75d9fb5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"# Summary of Edward Donner's Website\\n\\nThe website introduces Edward Donner, a tech enthusiast and co-founder of Nebula.io, a company focused on leveraging AI for talent discovery and management. Edward shares his interests in coding, experimenting with large language models (LLMs), and producing electronic music. \\n\\n### Recent Posts\\n1. **August 6, 2024** - *Outsmart LLM Arena*: A feature outlining a competitive environment where LLMs engage in strategic interactions involving diplomacy and manipulation.\\n \\n2. **June 26, 2024** - *Choosing the Right LLM*: A guide that provides tools and resources for selecting the appropriate LLM for various uses.\\n \\n3. **February 7, 2024** - *Fine-tuning an LLM on your texts*: Discusses methods for personalizing LLMs using one’s own texts.\\n \\n4. **January 31, 2024** - *Fine-tuning an LLM on your texts: part 4 – QLoRA*: Continuation of the discussion about advanced fine-tuning techniques for LLMs.\\n\\nOverall, the site reflects Edward’s expertise in LLM technology and his commitment to innovation in the AI space.\""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"summarize(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
- "execution_count": 11,
+ "execution_count": null,
"id": "3d926d59-450e-4609-92ba-2d6f244f1342",
"metadata": {},
"outputs": [],
@@ -272,104 +208,30 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": null,
"id": "3018853a-445f-41ff-9560-d925d1774b2f",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"# Summary of Edward Donner's Website\n",
"\n",
"Edward Donner's website serves as a personal space for sharing insights, experiments, and resources related to large language models (LLMs) and artificial intelligence. Ed introduces himself as a programmer and musician, and highlights his role as co-founder and CTO of Nebula.io, a company focused on leveraging AI to enhance talent discovery and engagement.\n",
"\n",
"## Recent Posts\n",
"- **August 6, 2024**: An announcement about \"Outsmart,\" an arena designed for LLMs to engage in diplomatic and strategic challenges.\n",
"- **June 26, 2024**: A post discussing tools and resources for selecting the right LLM.\n",
"- **February 7, 2024**: Exploration of fine-tuning LLMs based on personal text content.\n",
"- **January 31, 2024**: A continuation of the fine-tuning discussion with a focus on QLoRA.\n",
"\n",
"The website emphasizes Ed's background in AI startups and his passion for coding and music, inviting others to connect and engage with him on these topics."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"display_summary(\"https://edwarddonner.com\")"
]
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": null,
"id": "45d83403-a24c-44b5-84ac-961449b4008f",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"# Summary of CNN's Website\n",
"\n",
"CNN is a leading news platform that provides the latest updates and in-depth analysis on a variety of topics including US and world news, politics, business, health, entertainment, and sports. The website features both video content and articles, offering a comprehensive source of breaking news.\n",
"\n",
"## Recent Headlines:\n",
"- **2024 Presidential Race**: Coverage includes ongoing analysis of candidates like Trump and Harris, with speculation on their approaches leading up to debates and the election.\n",
" \n",
"- **International Conflicts**: Reports of a significant Russian strike on a military facility in Ukraine and updates on the Israel-Hamas war, where public opinion dynamics are highlighted as a critical factor affecting leaders like Netanyahu.\n",
"\n",
"- **Tragic Incidents**: Multiple stories regarding violence in the US including shootings in Washington state and a tragic bus crash in Mississippi affecting several individuals.\n",
"\n",
"- **Cultural Notes**: Highlights from the entertainment world, including the passing of actor James Darren and notable events around the US Open tennis tournament.\n",
"\n",
"- **Health and Science**: Articles discuss significant health topics, including recent analyses on ketamine and its implications.\n",
"\n",
"CNN's platform also accentuates engagement with audiences through feedback tools and social media integration, aiming to foster a user-friendly and interactive news experience."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"display_summary(\"https://cnn.com\")"
]
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": null,
"id": "75e9fd40-b354-4341-991e-863ef2e59db7",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"# Summary of Anthropic Website\n",
"\n",
"Anthropic is an AI safety and research company based in San Francisco, focused on developing reliable and beneficial AI systems. Their flagship AI model, **Claude 3.5 Sonnet**, is the latest version available for use, emphasizing enhanced intelligence and safety in AI interactions.\n",
"\n",
"## Key Updates\n",
"- **New AI Model**: Claude 3.5 Sonnet released on **June 21, 2024**.\n",
"- **Research Announcements**:\n",
" - **Constitutional AI: Harmlessness from AI Feedback** (Dec 15, 2022)\n",
" - **Core Views on AI Safety**: Discussed principles of AI safety (Mar 8, 2023).\n",
"\n",
"The company promotes building applications using their API to drive efficiency and innovate revenue streams. They also invite new talent to join their interdisciplinary team."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"display_summary(\"https://anthropic.com\")"
]
@@ -399,7 +261,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.9"
+ "version": "3.11.10"
}
},
"nbformat": 4,

week1/day5.ipynb (841 changed lines)

@@ -14,7 +14,7 @@
},
{
"cell_type": "code",
- "execution_count": 1,
+ "execution_count": null,
"id": "d5b08506-dc8b-4443-9201-5f1848161363",
"metadata": {},
"outputs": [],
@@ -33,7 +33,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": null,
"id": "fc5d8880-f2ee-4c06-af16-ecbc0262af61",
"metadata": {},
"outputs": [],
@@ -48,7 +48,7 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": null,
"id": "106dd65e-90af-4ca8-86b6-23a41840645b",
"metadata": {},
"outputs": [],
@@ -82,67 +82,10 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": null,
"id": "e30d8128-933b-44cc-81c8-ab4c9d86589a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Webpage Title:\n",
"Home - Edward Donner\n",
"Webpage Contents:\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Well, hi there.\n",
"I’m Ed. I like writing code and experimenting with LLMs, and hopefully you’re here because you do too. I also enjoy DJing (but I’m badly out of practice), amateur electronic music production (\n",
"very\n",
"amateur) and losing myself in\n",
"Hacker News\n",
", nodding my head sagely to things I only half understand.\n",
"I’m the co-founder and CTO of\n",
"Nebula.io\n",
". We’re applying AI to a field where it can make a massive, positive impact: helping people discover their potential and pursue their reason for being. Recruiters use our product today to source, understand, engage and manage talent. I’m previously the founder and CEO of AI startup untapt,\n",
"acquired in 2021\n",
".\n",
"We work with groundbreaking, proprietary LLMs verticalized for talent, we’ve\n",
"patented\n",
"our matching model, and our award-winning platform has happy customers and tons of press coverage.\n",
"Connect\n",
"with me for more!\n",
"August 6, 2024\n",
"Outsmart LLM Arena – a battle of diplomacy and deviousness\n",
"June 26, 2024\n",
"Choosing the Right LLM: Toolkit and Resources\n",
"February 7, 2024\n",
"Fine-tuning an LLM on your texts: a simulation of you\n",
"January 31, 2024\n",
"Fine-tuning an LLM on your texts: part 4 – QLoRA\n",
"Navigation\n",
"Home\n",
"Outsmart\n",
"An arena that pits LLMs against each other in a battle of diplomacy and deviousness\n",
"About\n",
"Posts\n",
"Get in touch\n",
"ed [at] edwarddonner [dot] com\n",
"www.edwarddonner.com\n",
"Follow me\n",
"LinkedIn\n",
"Twitter\n",
"Facebook\n",
"Subscribe to newsletter\n",
"Type your email…\n",
"Subscribe\n",
"\n",
"\n"
]
}
],
"outputs": [],
"source": [
"ed = Website(\"https://edwarddonner.com\")\n",
"print(ed.get_contents())"
@@ -162,7 +105,7 @@
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": null,
"id": "6957b079-0d96-45f7-a26a-3487510e9b35",
"metadata": {},
"outputs": [],
@@ -183,7 +126,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": null,
"id": "8e1f601b-2eaf-499d-b6b8-c99050c9d6b3",
"metadata": {},
"outputs": [],
@@ -199,52 +142,17 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": null,
"id": "6bcbfa78-6395-4685-b92c-22d592050fd7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Here is the list of links on the website of https://edwarddonner.com - please decide which of these are relevant web links for a brochure about the company, respond with the full https URL in JSON format. Do not include Terms of Service, Privacy, email links.\n",
"Links (some might be relative links):\n",
"https://edwarddonner.com/\n",
"https://edwarddonner.com/outsmart/\n",
"https://edwarddonner.com/about-me-and-about-nebula/\n",
"https://edwarddonner.com/posts/\n",
"https://edwarddonner.com/\n",
"https://news.ycombinator.com\n",
"https://nebula.io/?utm_source=ed&utm_medium=referral\n",
"https://www.prnewswire.com/news-releases/wynden-stark-group-acquires-nyc-venture-backed-tech-startup-untapt-301269512.html\n",
"https://patents.google.com/patent/US20210049536A1/\n",
"https://www.linkedin.com/in/eddonner/\n",
"https://edwarddonner.com/2024/08/06/outsmart/\n",
"https://edwarddonner.com/2024/08/06/outsmart/\n",
"https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/\n",
"https://edwarddonner.com/2024/06/26/choosing-the-right-llm-resources/\n",
"https://edwarddonner.com/2024/02/07/fine-tune-llm-on-texts-a-simulation-of-you/\n",
"https://edwarddonner.com/2024/02/07/fine-tune-llm-on-texts-a-simulation-of-you/\n",
"https://edwarddonner.com/2024/01/31/fine-tuning-an-llm-on-your-text-messages-using-qlora/\n",
"https://edwarddonner.com/2024/01/31/fine-tuning-an-llm-on-your-text-messages-using-qlora/\n",
"https://edwarddonner.com/\n",
"https://edwarddonner.com/outsmart/\n",
"https://edwarddonner.com/about-me-and-about-nebula/\n",
"https://edwarddonner.com/posts/\n",
"mailto:hello@mygroovydomain.com\n",
"https://www.linkedin.com/in/eddonner/\n",
"https://twitter.com/edwarddonner\n",
"https://www.facebook.com/edward.donner.52\n"
]
}
],
"outputs": [],
"source": [
"print(get_links_user_prompt(ed))"
]
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": null,
"id": "a29aca19-ca13-471c-a4b4-5abbfa813f69",
"metadata": {},
"outputs": [],
@@ -265,25 +173,10 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": null,
"id": "d3d583e2-dcc4-40cc-9b28-1e8dbf402924",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'links': [{'type': 'about page', 'url': 'https://anthropic.com/company'},\n",
" {'type': 'careers page', 'url': 'https://anthropic.com/careers'},\n",
" {'type': 'team page', 'url': 'https://anthropic.com/team'},\n",
" {'type': 'research page', 'url': 'https://anthropic.com/research'},\n",
" {'type': 'news page', 'url': 'https://anthropic.com/news'}]}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"get_links(\"https://anthropic.com\")"
]
@@ -300,7 +193,7 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": null,
"id": "85a5b6e2-e7ef-44a9-bc7f-59ede71037b5",
"metadata": {},
"outputs": [],
@@ -318,464 +211,17 @@
},
{
"cell_type": "code",
- "execution_count": 13,
+ "execution_count": null,
"id": "5099bd14-076d-4745-baf3-dac08d8e5ab2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://anthropic.com/company'}, {'type': 'careers page', 'url': 'https://anthropic.com/careers'}, {'type': 'team page', 'url': 'https://anthropic.com/team'}, {'type': 'research page', 'url': 'https://anthropic.com/research'}]}\n",
"Landing page:\n",
"Webpage Title:\n",
"Home \\ Anthropic\n",
"Webpage Contents:\n",
"Claude\n",
"Overview\n",
"Team\n",
"API\n",
"Pricing\n",
"Research\n",
"Company\n",
"Careers\n",
"News\n",
"AI\n",
"research\n",
"and\n",
"products\n",
"that put safety at the frontier\n",
"New\n",
"Meet Claude 3.5 Sonnet\n",
"Claude 3.5 Sonnet, our most intelligent AI model, is now available.\n",
"Talk to Claude\n",
"API\n",
"Build with Claude\n",
"Start using Claude to drive efficiency and create new revenue streams.\n",
"Get started now\n",
"Our Work\n",
"See All\n",
"Announcements\n",
"Claude 3.5 Sonnet\n",
"Jun 21, 2024\n",
"Alignment\n",
"·\n",
"Research\n",
"Constitutional AI: Harmlessness from AI Feedback\n",
"Dec 15, 2022\n",
"Announcements\n",
"Core Views on AI Safety: When, Why, What, and How\n",
"Mar 8, 2023\n",
"Work with Anthropic\n",
"Anthropic is an AI safety and research company based in San Francisco. Our interdisciplinary team has experience across ML, physics, policy, and product. Together, we generate research and create reliable, beneficial AI systems.\n",
"See open roles\n",
"Claude\n",
"API\n",
"Team\n",
"Pricing\n",
"Research\n",
"Company\n",
"Customers\n",
"News\n",
"Careers\n",
"Press Inquiries\n",
"Support\n",
"Status\n",
"Twitter\n",
"LinkedIn\n",
"Availability\n",
"Terms of Service – Consumer\n",
"Terms of Service – Commercial\n",
"Privacy Policy\n",
"Usage Policy\n",
"Responsible Disclosure Policy\n",
"Compliance\n",
"Privacy Choices\n",
"© 2024 Anthropic PBC\n",
"\n",
"\n",
"\n",
"about page\n",
"Webpage Title:\n",
"Company \\ Anthropic\n",
"Webpage Contents:\n",
"Claude\n",
"Overview\n",
"Team\n",
"API\n",
"Pricing\n",
"Research\n",
"Company\n",
"Careers\n",
"News\n",
"Making AI systems\n",
"you can rely on\n",
"Anthropic is an AI safety and research company. We build reliable, interpretable, and steerable AI systems.\n",
"Join us\n",
"Our Purpose\n",
"We believe AI will have a vast impact on the world. Anthropic is dedicated to building systems that people can rely on and generating research about the opportunities and risks of AI.\n",
"We Build Safer Systems\n",
"We aim to build frontier AI systems that are reliable, interpretable, and steerable. We conduct frontier research, develop and apply a variety of safety techniques, and deploy the resulting systems via a set of partnerships and products.\n",
"Safety Is a Science\n",
"We treat AI safety as a systematic science, conducting research, applying it to our products, feeding those insights back into our research, and regularly sharing what we learn with the world along the way.\n",
"Interdisciplinary\n",
"Anthropic is a collaborative team of researchers, engineers, policy experts, business leaders and operators, who bring our experience from many different domains to our work.\n",
"AI Companies are One Piece of a Big Puzzle\n",
"AI has the potential to fundamentally change how the world works. We view ourselves as just one piece of this evolving puzzle. We collaborate with civil society, government, academia, nonprofits and industry to promote safety industry-wide.\n",
"The Team\n",
"We’re a team of researchers, engineers, policy experts and operational leaders, with experience spanning a variety of disciplines, all working together to build reliable and understandable AI systems.\n",
"Research\n",
"We conduct frontier AI research across a variety of modalities, and explore novel and emerging safety research areas from interpretability to RL from human feedback to policy and societal impacts analysis.\n",
"Policy\n",
"We think about the impacts of our work and strive to communicate what we’re seeing at the frontier to policymakers and civil society in the US and abroad to help promote safe and reliable AI.\n",
"Product\n",
"We translate our research into tangible, practical tools like Claude that benefit businesses, nonprofits and civil society groups and their clients and people around the globe.\n",
"Operations\n",
"Our people, finance, legal, and recruiting teams are the human engines that make Anthropic go. We’ve had previous careers at NASA, startups, and the armed forces and our diverse experiences help make Anthropic a great place to work (and we love plants!).\n",
"Our Values\n",
"01\n",
"Here for the mission\n",
"Anthropic exists for our mission: to ensure transformative AI helps people and society flourish. Progress this decade may be rapid, and we expect increasingly capable systems to pose novel challenges. We pursue our mission by building frontier systems, studying their behaviors, working to responsibly deploy them, and regularly sharing our safety insights. We collaborate with other projects and stakeholders seeking a similar outcome.\n",
"02\n",
"Unusually high trust\n",
"Our company is an unusually high trust environment: we assume good faith, disagree kindly, and prioritize honesty. We expect emotional maturity and intellectual openness. At its best, our trust enables us to make better decisions as an organization than any one of us could as individuals.\n",
"03\n",
"One big team\n",
"Collaboration is central to our work, culture, and value proposition. While we have many teams at Anthropic, we feel the broader sense in which we are all on the same team working together towards the mission. Leadership sets the strategy, with broad input from everyone, and trusts each piece of the organization to pursue these goals in their unique style. Individuals commonly contribute to work across many different areas.\n",
"04\n",
"Do the simple thing that works\n",
"We celebrate trying the simple thing before the clever, novel thing. We embrace pragmatism - sensible, practical approaches that acknowledge tradeoffs. We love empiricism - finding out what actually works by trying it - and apply this to our research, our engineering and our collaboration. We aim to be open about what we understand and what we don’t.\n",
"Governance\n",
"Anthropic is a Public Benefit Corporation, whose purpose is the responsible development and maintenance of advanced AI for the long-term benefit of humanity. Our Board of Directors is elected by stockholders and our Long-Term Benefit Trust, as explained\n",
"here.\n",
"Current members of the Board and the Long-Term Benefit Trust (LTBT) are listed below.\n",
"Anthropic Board of Directors\n",
"Dario Amodei, Daniela Amodei, Yasmin Razavi, and Jay Kreps.\n",
"LTBT Trustees\n",
"Neil Buddy Shah, Kanika Bahl, and Zach Robinson.\n",
"Company News\n",
"See All\n",
"Announcements\n",
"Artifacts are now generally available\n",
"Aug 27, 2024\n",
"Announcements\n",
"Expanding our model safety bug bounty program\n",
"Aug 8, 2024\n",
"Announcements\n",
"Claude is now available in Brazil\n",
"Aug 1, 2024\n",
"Want to help us build the future of safe AI?\n",
"Join us\n",
"Claude\n",
"API\n",
"Team\n",
"Pricing\n",
"Research\n",
"Company\n",
"Customers\n",
"News\n",
"Careers\n",
"Press Inquiries\n",
"Support\n",
"Status\n",
"Twitter\n",
"LinkedIn\n",
"Availability\n",
"Terms of Service – Consumer\n",
"Terms of Service – Commercial\n",
"Privacy Policy\n",
"Usage Policy\n",
"Responsible Disclosure Policy\n",
"Compliance\n",
"Privacy Choices\n",
"© 2024 Anthropic PBC\n",
"\n",
"\n",
"\n",
"careers page\n",
"Webpage Title:\n",
"Careers \\ Anthropic\n",
"Webpage Contents:\n",
"Claude\n",
"Overview\n",
"Team\n",
"API\n",
"Pricing\n",
"Research\n",
"Company\n",
"Careers\n",
"News\n",
"Join the team\n",
"making AI safe\n",
"We’re a public benefit corporation headquartered in San Francisco. Our team’s experience spans a variety of backgrounds and disciplines, from physics and machine learning to public policy and business. We work as a cohesive team that collectively forecasts the impact and tractability of research ideas in advancing our mission.\n",
"See open roles\n",
"What We Offer\n",
"Health & Wellness\n",
"We offer a range of benefits to best support your and your family's wellbeing.\n",
"Comprehensive health, dental, and vision insurance for you and your dependents\n",
"Inclusive fertility benefits via Carrot Fertility\n",
"Generous subsidy for OneMedical\n",
"22 weeks of paid parental leave\n",
"Unlimited PTO – most staff take between 4-6 weeks each year, sometimes more\n",
"Compensation & Support\n",
"We offer competitive compensation with significant amounts of equity. Your equity can be multiplied if you choose to donate a portion of it to charity.\n",
"Competitive salary and equity packages\n",
"Optional equity donation matching at a 1:1 ratio, up to 25% of your equity grant\n",
"401(k) plan with 4% matching\n",
"Additional Benefits\n",
"We’re continually upgrading our benefits program so we can meet the needs of our entire team.\n",
"$500/month flexible wellness stipend\n",
"Commuter coverage\n",
"Annual education stipend\n",
"A home office improvement stipend when you first join\n",
"Relocation support for those moving to the Bay Area\n",
"Daily lunches in the office\n",
"How We Hire\n",
"The interview process at Anthropic varies based on role and candidate, but our standard process looks like this:\n",
"Step 1\n",
"Resume\n",
"Submit your resume via our website.\n",
"Step 2\n",
"Exploratory chat\n",
"You’ll have a chat with one of our staff to discuss your career interests and relevant experience, and learn more about Anthropic.\n",
"Step 3\n",
"Skills Assessment\n",
"For technical roles, you’ll have a one-hour technical screening interview.\n",
"For operations or policy roles, you’ll get a take-home assignment. These typically involve writing responses to several role-relevant questions; they may occasionally require some outside research. Assignments usually take between 2-5 hours, depending on the role.\n",
"We include this to minimize bias and make well-informed hiring decisions. We think seeing a candidate’s work helps us assess how they might actually perform on the job; similarly, the assignment gives candidates a better idea of what their work at Anthropic might entail. If a candidate likes working through their take-home, that is one indicator that they would enjoy taking on the role, and vice versa.\n",
"We recognize that completing work assignments requires time and effort, and that they are not perfectly reflective of the role’s work. Nonetheless, we think that work tests are a useful complement to interviews and reference checks.\n",
"Step 4\n",
"Team Screen\n",
"You'll have a conversation with either the Hiring Manager or a member of your potential team.\n",
"Step 5\n",
"Interview Panel\n",
"For technical roles, you’ll have 3-4 more one-hour technical interviews, plus a culture interview.\n",
"For operations or policy roles, you’ll have 3-5 hours of interviews, including a culture interview.\n",
"Step 6\n",
"Final Checks\n",
"We’ll ask for some references, and have you chat with our leadership.\n",
"Step 7\n",
"Offer\n",
"We’ll make you an offer!\n",
"Technical Interviews\n",
"Technical interviews at Anthropic are broadly categorized into ‘engineering’ or ‘research’ interviews, and each candidate is given a mix tailored to their skillset.\n",
"Engineering interviews are usually carried out in a shared Python coding environment, like Google Colab. Frontend engineering interviews are in JavaScript. They have the form:\n",
"Here’s a description of a component from our stack. Could you re-implement a toy version of it for me in one hour?\n",
"These components are ‘chunkier’ than the more common LeetCode problems, and are intended to mimic the day-to-day of engineering at Anthropic.\n",
"We are particularly interested in your thought process and how you attack the problem. You’ll be allowed to look things up with Google, but it’s still important to be familiar with Python syntax and the standard library. We primarily code in Python, and a common reason candidates fail interviews is that they're not fully comfortable in Python.\n",
"Only one of our engineering interviews touches on machine learning topics, and you can ask to pass on that one if you wish. You do not need to learn anything about machine learning before interviewing as an engineer at Anthropic.\n",
"Research interviews are broader in form. They’ll include some engineering interviews, and some discussions about the kinds of systems we study.\n",
"Both the research and engineering interview process also include softer questions about your experience and motivations, and time to ask us about Anthropic.\n",
"Other Things\n",
"Engineers here do lots of research, and researchers do lots of engineering\n",
"While there’s historically been a division between engineering and research in machine learning, we think that boundary has dissolved with the advent of large models. The distribution of candidates we interview is strongly bimodal in both engineering and research experience however, and we have necessarily tailored our interview structure to that.\n",
"If you’ve an engineering background, please apply as an engineer. You’ll perform much better in the interviews, and if you join you’ll have as much input to Anthropic’s direction and interests as anyone else.\n",
"As evidence towards this: all of our papers have engineers as authors, and often as first author. Research and engineering hires all share a single title - ‘Member of Technical Staff’.\n",
"We value direct evidence of ability\n",
"If you’ve done interesting independent research, written an insightful blog post, or made substantial contributions to open-source software, put that at the top of your resume!\n",
"Feedback\n",
"We do not provide feedback on resumes or interviews.\n",
"Visas\n",
"Anthropic sponsors visas! We aren't able to sponsor them for every role and every candidate; operations roles are especially difficult to support. But if we make you an offer, we will make every effort to get you into the United States, and we retain an immigration lawyer to help with this.\n",
"Green cards\n",
"Once you’re eligible, we’re also keen to sponsor green cards!\n",
"We do not require PhDs, degrees, or previous ML experience\n",
"About half of Anthropic technical staff have a PhD of some sort; about half had prior experience in ML. We have several brilliant colleagues who never went to college.\n",
"Remote interviewing\n",
"All our interviews are conducted over Google Meet. We prefer PST office hours, but we can be flexible if that’s difficult for you.\n",
"Re-applying\n",
"Similarly, if interviews don’t work out this time, you’re welcome to re-apply after 12 months, and earlier if something materially changes about your experience or skills.\n",
"Remote work\n",
"Anthropic staff all come to the office regularly. Most staff live in the Bay Area, though a few live further away and come in for one week a month. We also understand that moving can take time, so as a transitional phase some folks start while fully remote.\n",
"Offer timing\n",
"If we make an offer, we’re happy to give you time to think about it and finish up any other interview processes you’re going through.\n",
"Internships\n",
"We do not offer internships.\n",
"Candidate Privacy Policy\n",
"US Candidate Privacy Policy\n",
"UK Employee and Candidate Privacy Policy\n",
"Claude\n",
"API\n",
"Team\n",
"Pricing\n",
"Research\n",
"Company\n",
"Customers\n",
"News\n",
"Careers\n",
"Press Inquiries\n",
"Support\n",
"Status\n",
"Twitter\n",
"LinkedIn\n",
"Availability\n",
"Terms of Service – Consumer\n",
"Terms of Service – Commercial\n",
"Privacy Policy\n",
"Usage Policy\n",
"Responsible Disclosure Policy\n",
"Compliance\n",
"Privacy Choices\n",
"© 2024 Anthropic PBC\n",
"\n",
"\n",
"\n",
"team page\n",
"Webpage Title:\n",
"Team up with Claude \\ Anthropic\n",
"Webpage Contents:\n",
"Claude\n",
"Overview\n",
"Team\n",
"API\n",
"Pricing\n",
"Research\n",
"Company\n",
"Careers\n",
"News\n",
"Try Claude\n",
"Team up with Claude\n",
"Shorten the path from idea to impact with an AI assistant that taps into your team’s shared expertise.\n",
"Get started\n",
"Request demo\n",
"Easy collaboration for better outcomes\n",
"Claude doesn’t just speed up daily tasks like writing emails or docs. It’s a virtual teammate that moves work forward using your team’s knowledge.\n",
"Create with Claude\n",
"Claude can be a sounding board for your ideas, help you generate new ones, and pull insights from data in a snap.\n",
"Prime the canvas\n",
"Use Projects to ground Claude in specific knowledge that helps you produce higher-quality work with less effort.\n",
"Spark inspiration\n",
"Share your best chats with Claude across the team to spark creativity and improve your project deliverables.\n",
"Transform how you work\n",
"Claude makes work more productive—whether you need a partner for deep work, a creative collaborator, or an assistant for daily tasks.\n",
"Create with Claude\n",
"Draft and iterate on documents, code and, websites, and images alongside your chat with Artifacts.\n",
"Write and debug code\n",
"Create marketing campaigns\n",
"Draft job descriptions\n",
"Build interactive visualizations\n",
"Transform how your team works\n",
"Claude can serve as your go-to expert, empowering each team member with shared knowledge from all across the organization.\n",
"Prime the canvas\n",
"Create Projects and add knowledge so each person on the team can deliver expert-level results.\n",
"Find and summarize information faster\n",
"Use Claude as your subject-matter expert\n",
"Expand how each teammate can contribute\n",
"Spark inspiration\n",
"Share your best chats with everyone on the Project to spark better ideas, iterate on Artifacts, and move work forward.\n",
"Brainstorm on new product ideas\n",
"Discuss insights from user interviews\n",
"Collaborate on hard research questions\n",
"Every team can work with Claude\n",
"Engineering\n",
"Generate code snippets in seconds\n",
"Create clear, comprehensive docs with no effort\n",
"Get help debugging even the most complex issues\n",
"Turn product feedback into roadmap items faster\n",
"Support\n",
"Resolve customer issues in record time\n",
"Craft personalized responses effortlessly\n",
"Build a dynamic, user-friendly knowledge base\n",
"Generate insightful metrics reports instantly\n",
"Marketing\n",
"Create engaging content tailored to your audience\n",
"Segment customers with pinpoint accuracy\n",
"Analyze competitors with unparalleled depth\n",
"Optimize campaigns for maximum ROI\n",
"Sales\n",
"Customize pitches for any customer segment\n",
"Uncover hidden sales trends effortlessly\n",
"Draft compelling follow-up emails in seconds\n",
"Get comprehensive competitor insights on demand\n",
"By leveraging content from our help center in Projects, we were able to generate comprehensive standard operating procedures for our core workflows in just a few hours—a task that previously took our team weeks to complete.\n",
"Bradley Silicani\n",
"COO, Anrok\n",
"Claude Team is transforming our way of working at North Highland. Claude is a truly exceptional writer that has helped our team complete content creation and analysis tasks up to 5x faster than before—turning what was once two weeks of writing and research into minutes of work.\n",
"Luka Anic\n",
"Senior Director, Technical AI Program and Product Manager, North Highland\n",
"Generating content, completing creative tasks, and creating summarized reports is much easier than before. There are many other areas of our business—like engineering, legal, risk and compliance—where we're excited to see what Claude can do.\n",
"Olga Pirog\n",
"Head of AI Transformation, IG Group\n",
"Join the teams transforming with Claude\n",
"See Pricing\n",
"Claude\n",
"API\n",
"Team\n",
"Pricing\n",
"Research\n",
"Company\n",
"Customers\n",
"News\n",
"Careers\n",
"Press Inquiries\n",
"Support\n",
"Status\n",
"Twitter\n",
"LinkedIn\n",
"Availability\n",
"Terms of Service – Consumer\n",
"Terms of Service – Commercial\n",
"Privacy Policy\n",
"Usage Policy\n",
"Responsible Disclosure Policy\n",
"Compliance\n",
"Privacy Choices\n",
"© 2024 Anthropic PBC\n",
"\n",
"\n",
"\n",
"research page\n",
"Webpage Title:\n",
"Research \\ Anthropic\n",
"Webpage Contents:\n",
"Claude\n",
"Overview\n",
"Team\n",
"API\n",
"Pricing\n",
"Research\n",
"Company\n",
"Careers\n",
"News\n",
"Researching\n",
"at the frontier\n",
"At Anthropic, we develop large-scale AI systems, and our research teams help us to create safer, steerable, and more reliable models.\n",
"See open roles\n",
"Claude\n",
"API\n",
"Team\n",
"Pricing\n",
"Research\n",
"Company\n",
"Customers\n",
"News\n",
"Careers\n",
"Press Inquiries\n",
"Support\n",
"Status\n",
"Twitter\n",
"LinkedIn\n",
"Availability\n",
"Terms of Service – Consumer\n",
"Terms of Service – Commercial\n",
"Privacy Policy\n",
"Usage Policy\n",
"Responsible Disclosure Policy\n",
"Compliance\n",
"Privacy Choices\n",
"© 2024 Anthropic PBC\n",
"\n",
"\n"
]
}
],
"outputs": [],
"source": [
"print(get_all_details(\"https://anthropic.com\"))"
]
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": null,
"id": "9b863a55-f86c-4e3f-8a79-94e24c1a8cf2",
"metadata": {},
"outputs": [],
@@ -787,7 +233,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": null,
"id": "6ab83d92-d36b-4ce0-8bcc-5bb4c2f8ff23",
"metadata": {},
"outputs": [],
@@ -802,7 +248,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": null,
"id": "e44de579-4a1a-4e6a-a510-20ea3e4b8d46",
"metadata": {},
"outputs": [],
@@ -821,103 +267,10 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": null,
"id": "e093444a-9407-42ae-924a-145730591a39",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://anthropic.com/company'}, {'type': 'careers page', 'url': 'https://anthropic.com/careers'}, {'type': 'team page', 'url': 'https://anthropic.com/team'}]}\n"
]
},
{
"data": {
"text/markdown": [
"# Anthropic Brochure\n",
"\n",
"---\n",
"\n",
"## Welcome to Anthropic\n",
"\n",
"At **Anthropic**, we are at the forefront of AI safety and research, dedicated to creating reliable, interpretable, and steerable AI systems that prioritize safety. Based in **San Francisco**, our interdisciplinary team—comprising researchers, engineers, policy experts, and business leaders—strives to ensure transformative AI benefits people and society as a whole.\n",
"\n",
"---\n",
"\n",
"## Our Mission\n",
"\n",
"### **Building Safer AI**\n",
"- We believe in the vast potential of AI and are committed to building systems that can be reliably used in real-world applications. \n",
"- Our research explores key areas such as interpretability, reinforcement learning from human feedback, and the societal impacts of AI.\n",
"\n",
"### **AI Safety as a Science**\n",
"- Safety isn't just a goal; it's a systematic science. Our approach integrates rigorous research, product application, and continuous feedback to create safer AI models.\n",
"\n",
"### **Collaborative Ecosystem**\n",
"- We are one piece of the larger AI landscape and actively collaborate with civil society, government, academia, and industry to promote wide-ranging safety measures in AI.\n",
"\n",
"---\n",
"\n",
"## Meet Claude\n",
"\n",
"Introducing **Claude**, our cutting-edge AI model currently at version **3.5 Sonnet**. Claude is designed to assist in boosting productivity and creativity in various sectors. Some functionalities include:\n",
"- Drafting and debugging code\n",
"- Generating insightful reports and documents\n",
"- Assisting in marketing campaign strategies\n",
"\n",
"### **What Customers Say**\n",
"> “Claude has transformed our workflows, allowing us to accomplish tasks up to **5x faster**!” - Luka Anic, Senior Director, North Highland\n",
"\n",
"---\n",
"\n",
"## Company Culture\n",
"\n",
"**Values that Guide Us**\n",
"- **Mission-Driven:** Focused on ensuring that AI benefits society.\n",
"- **Trust:** We cultivate an open, high-trust environment that allows for honest communication and collaboration.\n",
"- **Teamwork:** Emphasizing collaboration across teams to harness diverse talents and ideas.\n",
"- **Pragmatism:** We value straightforward solutions that effectively balance trade-offs.\n",
"\n",
"---\n",
"\n",
"## Careers at Anthropic\n",
"\n",
"Join our innovative team and make a difference in the AI landscape. We offer:\n",
"- **Competitive Compensation**: Salaries that reflect your expertise, with significant equity options.\n",
"- **Health & Wellness**: Comprehensive insurance benefits, 22 weeks of parental leave, and unlimited PTO.\n",
"- **Flexible Work Environment**: Opportunities for hybrid work and relocation support.\n",
"\n",
"### **How We Hire**\n",
"Our interview process is designed to identify the best talent while minimizing bias. With multiple stages, including exploratory chats and technical assessments, we want to understand not only your skills but also how you align with our mission.\n",
"\n",
"---\n",
"\n",
"## Join Us!\n",
"\n",
"Are you ready to make an impact in the field of AI? Visit our [Careers page](https://www.anthropic.com/careers) to explore current openings and join our mission to develop safer AI solutions for everyone.\n",
"\n",
"---\n",
"\n",
"For more information about Anthropic and our offerings, visit our [Website](https://www.anthropic.com).\n",
"\n",
"**Follow Us:**\n",
"- [Twitter](https://twitter.com/anthropic)\n",
"- [LinkedIn](https://www.linkedin.com/company/anthropic)\n",
"\n",
"---\n",
"\n",
"© 2024 Anthropic PBC | All Rights Reserved \n",
"**Privacy Policy** | **Terms of Service** \n"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"create_brochure(\"Anthropic\", \"https://anthropic.com\")"
]
@@ -943,7 +296,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": null,
"id": "51db0e49-f261-4137-aabe-92dd601f7725",
"metadata": {},
"outputs": [],
@@ -968,164 +321,20 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": null,
"id": "56bf0ae3-ee9d-4a72-9cd6-edcac67ceb6d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://anthropic.com/company'}, {'type': 'careers page', 'url': 'https://anthropic.com/careers'}, {'type': 'team page', 'url': 'https://anthropic.com/team'}, {'type': 'research page', 'url': 'https://anthropic.com/research'}, {'type': 'news page', 'url': 'https://anthropic.com/news'}]}\n"
]
},
{
"data": {
"text/markdown": [
"# Anthropic Brochure\n",
"\n",
"**Company Overview** \n",
"Anthropic is a pioneering AI safety and research company based in San Francisco, dedicated to building reliable, interpretable, and steerable AI systems. Our mission is to ensure that transformative AI technologies help society flourish, navigating the complexities and risks associated with the advancement of AI.\n",
"\n",
"**Meet Claude** \n",
"Our flagship product, Claude, is an advanced AI model designed to assist teams and enhance productivity by streamlining workflows and facilitating collaboration. The latest version, Claude 3.5 Sonnet, exemplifies our commitment to safety and performance in AI.\n",
"\n",
"![Claude](link-to-image)\n",
"\n",
"## Our Vision\n",
"At Anthropic, we believe that AI will significantly affect the world. Our aim is to build systems that users can depend on while conducting research into the opportunities and risks AI presents. Our approach to safety is systematic and scientific, integrating research insights into our products while sharing our findings with the wider community.\n",
"\n",
"## Company Culture\n",
"- **Interdisciplinary Collaboration**: Our team comprises experts from various fields, including machine learning, physics, policy, and business. We foster a collaborative environment where diverse perspectives help shape our projects.\n",
"- **High Trust Environment**: We promote an atmosphere of honesty, emotional maturity, and intellectual openness. This trust enhances our decision-making and strengthens our team dynamics.\n",
"- **Mission-Driven Approach**: All team members are aligned with our mission to make AI safer and beneficial for society. We believe in pragmatism, embracing simple yet effective solutions and continuous learning through empirical evidence.\n",
"\n",
"## Customer Focus\n",
"We serve businesses, non-profit organizations, and civil society groups by translating our research into practical tools that facilitate enhanced workflows and better decisions across sectors. Notable sectors benefiting from Claude include engineering, marketing, sales, and customer support, generating measurable improvements in task efficiency.\n",
"\n",
"## Join Our Team\n",
"We seek motivated individuals who are passionate about AI and its societal impacts. At Anthropic, employees enjoy several benefits, including:\n",
"- Competitive salary & equity packages.\n",
"- Comprehensive health, dental, and vision insurance.\n",
"- Unlimited paid time off (PTO).\n",
"- Support for ongoing education and home office improvement.\n",
"- A collaborative and inclusive workplace culture.\n",
"\n",
"### Current Openings\n",
"If you are ready to make an impact in AI safety, explore our open roles on our [Careers Page](link-to-careers).\n",
"\n",
"## Get in Touch\n",
"To learn more about Anthropic and our offerings, visit our website or follow us on our social media channels. We are committed to transparency and eagerly welcome inquiries from interested parties.\n",
"\n",
"- **Website**: [Anthropic](https://www.anthropic.com)\n",
"- **Twitter**: [@Anthropic](https://twitter.com/anthropic)\n",
"- **LinkedIn**: [Anthropic](https://www.linkedin.com/company/anthropic)\n",
"\n",
"### Join Us in Shaping the Future of AI\n",
"Let’s collaborate on building a safer, brighter future with AI technologies. \n",
"\n",
"---\n",
"\n",
"*For press inquiries, please contact: press@anthropic.com* \n",
"*For support and assistance, reach out to: support@anthropic.com* \n",
"\n",
"### Anthropic PBC © 2024\n",
"\n",
"\n",
"---\n",
"\n",
"*This brochure provides a snapshot of Anthropic's mission, culture, and contributions to safer AI technologies. We invite you to explore partnership opportunities and join our dynamic team.*"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"stream_brochure(\"Anthropic\", \"https://anthropic.com\")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": null,
"id": "fdb3f8d8-a3eb-41c8-b1aa-9f60686a653b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Found links: {'links': [{'type': 'about page', 'url': 'https://huggingface.co/huggingface'}, {'type': 'careers page', 'url': 'https://apply.workable.com/huggingface/'}, {'type': 'blog', 'url': 'https://huggingface.co/blog'}, {'type': 'company page', 'url': 'https://www.linkedin.com/company/huggingface/'}]}\n"
]
},
{
"data": {
"text/markdown": [
"# Welcome to Hugging Face!\n",
"\n",
"## The Place Where AI Gets *Hugged*!\n",
"\n",
"👋 Hey there, future innovators and trailblazers! Do you want to join a community dedicated to building the future of AI? Then look no further! Welcome to Hugging Face, the vibrant platform where the machine-learning community collaborates on models, datasets, and tons of applications. We promise we won't smother you with hugs... unless you want that!\n",
"\n",
"---\n",
"\n",
"### **About Us:**\n",
"\n",
"At Hugging Face, we're on a heartwarming mission to democratize *good* machine learning, one commit at a time. That’s right, we’re making ML so accessible that even your grandma could model a neural network (with some help, of course). Our fantastic 217-member strong team of AI aficionados is unwaveringly dedicated to making ML fun and fruitful. We even have a secret handshake - just kidding, we don't have secrets or handshakes; we just have **open-source**!\n",
"\n",
"---\n",
"\n",
"### **What We Offer:**\n",
"\n",
"- **Community Collaboration**: Share, discover, and collaborate on **400,000+** ML models and **100,000+** datasets! Our users have gone wild with creativity — from *text generation* to *audio and image solutions!*\n",
"- **Spaces for Everyone**: Like a high-tech playground, our Spaces allow you to run applications like *Kolors Virtual Try-On* and even generate *text-to-video*! Say cheese, please!\n",
"- **AI Tools That Hug You Back**: Dive right into our **HuggingChat**, where you can find vast AI tools available at your fingertips. And don’t worry, they come without awkward small talk!\n",
"\n",
"---\n",
"\n",
"### **Our Customers:**\n",
"\n",
"With more than **50,000 organizations** using our platform, it seems we’re popular! Our clientele includes tech titans like:\n",
"- AI at Meta\n",
"- Amazon Web Services\n",
"- Google\n",
"- Intel\n",
"- Microsoft\n",
"- Grammarly\n",
"\n",
"If you’re looking to join an entourage of heavyweight entities, you’ve landed in the right hug!\n",
"\n",
"---\n",
"\n",
"### **Careers:**\n",
"\n",
"🎉 **Join the Hugging Face Family!** 🎉\n",
"Our door is always open for creative minds who want to sprinkle a little magic into the world of AI. We're on the hunt for talent in various areas. Think you can help us? Here's what you can expect:\n",
"- A team that celebrates diverse perspectives — we embrace differences like we embrace our cats!\n",
"- A chance to build your ML portfolio & get recognized (we mean MORE than just a thumbs-up emoji!).\n",
"- A flexible work environment where you can wear slippers to meetings if you choose (we won’t judge).\n",
"\n",
"---\n",
"\n",
"### **Closing Thoughts:**\n",
"\n",
"So there you have it! At Hugging Face, we’re committed to building a friendly, collaborative, and innovative environment. Whether you're a customer, investor, or a potential recruit, remember: **We believe in hugging, not hacking!**\n",
"\n",
"> So what are you waiting for? Join us, and let's build the future together! 🚀💚\n",
"\n",
"[Visit Hugging Face](https://huggingface.co) to become part of our connected community! "
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"stream_brochure(\"HuggingFace\", \"https://huggingface.co\")"
]

244
week2/day1.ipynb

@@ -42,7 +42,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "de23bb9e-37c5-4377-9a82-d7b6c648eeb6",
"metadata": {},
"outputs": [],
@@ -59,7 +59,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "1179b4c5-cd1f-4131-a876-4c9f3f38d2ba",
"metadata": {},
"outputs": [],
@@ -74,7 +74,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "797fe7b0-ad43-42d2-acf0-e4f309b112f0",
"metadata": {},
"outputs": [],
@@ -111,7 +111,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "378a0296-59a2-45c6-82eb-941344d3eeff",
"metadata": {},
"outputs": [],
@@ -122,7 +122,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "f4d56a0f-2a3d-484d-9344-0efa6862aff4",
"metadata": {},
"outputs": [],
@@ -135,20 +135,10 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "3b3879b6-9a55-4fed-a18c-1ea2edfaf397",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist break up with their computer?\n",
"\n",
"It just couldn't handle their complex relationship!\n"
]
}
],
"outputs": [],
"source": [
"# GPT-3.5-Turbo\n",
"\n",
@@ -158,20 +148,10 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "3d2d6beb-1b81-466f-8ed1-40bf51e7adbf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist break up with the statistician?\n",
"\n",
"Because she found him too mean!\n"
]
}
],
"outputs": [],
"source": [
"# GPT-4o-mini\n",
"# Temperature setting controls creativity\n",
@@ -186,20 +166,10 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"id": "f1f54beb-823f-4301-98cb-8b9a49f4ce26",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist break up with the logistic regression model?\n",
"\n",
"Because it couldn't find the right fit!\n"
]
}
],
"outputs": [],
"source": [
"# GPT-4o\n",
"\n",
@@ -213,22 +183,10 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": null,
"id": "1ecdb506-9f7c-4539-abae-0e78d7f31b76",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's a light-hearted joke for data scientists:\n",
"\n",
"Why did the data scientist break up with their significant other?\n",
"\n",
"There was just too much variance in the relationship, and they couldn't find a good way to normalize it!\n"
]
}
],
"outputs": [],
"source": [
"# Claude 3.5 Sonnet\n",
"# API needs system message provided separately from user prompt\n",
@@ -249,26 +207,10 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": null,
"id": "769c4017-4b3b-4e64-8da7-ef4dcbe3fd9f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's a light-hearted joke for data scientists:\n",
"\n",
"Why did the data scientist break up with their significant other?\n",
"\n",
"There was just too much variance in the relationship, and they couldn't find a good way to normalize it!\n",
"\n",
"Ba dum tss! 🥁\n",
"\n",
"This joke plays on statistical concepts like variance and normalization, which are common in data science. It's a bit nerdy, but should get a chuckle from a data-savvy audience!"
]
}
],
"outputs": [],
"source": [
"# Claude 3.5 Sonnet again\n",
"# Now let's add in streaming back results\n",
@@ -290,21 +232,10 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": null,
"id": "6df48ce5-70f8-4643-9a50-b0b5bfdb66ad",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why did the data scientist break up with the statistician? \n",
"\n",
"Because they couldn't see eye to eye on the p-value! \n",
"\n"
]
}
],
"outputs": [],
"source": [
"# The API for Gemini has a slightly different structure\n",
"\n",
@@ -318,7 +249,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": null,
"id": "83ddb483-4f57-4668-aeea-2aade3a9e573",
"metadata": {},
"outputs": [],
@@ -333,65 +264,10 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": null,
"id": "749f50ab-8ccd-4502-a521-895c3f0808a2",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
"Determining whether a business problem is suitable for a Large Language Model (LLM) solution involves several considerations. Here are some key factors to evaluate:\n",
"\n",
"1. **Nature of the Problem:**\n",
" - **Text-Based:** LLMs excel in tasks involving natural language processing (NLP). If your problem involves text generation, understanding, summarization, translation, or answering questions, it may be suitable.\n",
" - **Unstructured Data:** Problems requiring the interpretation of unstructured text data (emails, documents, social media content) are often well-suited for LLMs.\n",
"\n",
"2. **Complexity of Language Understanding:**\n",
" - **Context Sensitivity:** LLMs can understand context and nuances in language. If your problem requires deep language comprehension, such as detecting sentiment, intent, or contextual relevance, an LLM might be appropriate.\n",
" - **Multiple Languages:** If you need to handle multiple languages or dialects, advanced LLMs can manage multilingual tasks.\n",
"\n",
"3. **Volume of Data:**\n",
" - **Scalability:** LLMs can process large volumes of text data efficiently. If your problem involves analyzing or generating large amounts of text, an LLM can be a good fit.\n",
"\n",
"4. **Specific Use Cases:**\n",
" - **Customer Support:** Automating responses to customer inquiries, chatbots, and virtual assistants.\n",
" - **Content Creation:** Generating reports, articles, marketing content, and social media posts.\n",
" - **Data Extraction:** Extracting information from documents, emails, and forms.\n",
" - **Sentiment Analysis:** Understanding customer feedback, reviews, and social media sentiment.\n",
" - **Translation:** Translating text between different languages.\n",
"\n",
"5. **Accuracy and Quality:**\n",
" - **Human-like Output:** If the output needs to be coherent, contextually relevant, and human-like, LLMs can provide high-quality results.\n",
" - **Learning Ability:** LLMs can be fine-tuned on specific datasets to improve performance in particular contexts, enhancing accuracy.\n",
"\n",
"6. **Resource Availability:**\n",
" - **Computational Resources:** LLMs require significant computational power for training and sometimes for inference. Ensure you have access to adequate resources.\n",
" - **Data Availability:** High-quality, domain-specific data is often needed to fine-tune an LLM for specific tasks.\n",
"\n",
"7. **Cost Considerations:**\n",
" - **Budget:** Implementing and maintaining LLM solutions can be costly. Assess if the potential benefits outweigh the costs.\n",
" - **Return on Investment (ROI):** Evaluate the potential ROI. If an LLM can significantly reduce manual effort, improve accuracy, or enhance user experience, it may justify the investment.\n",
"\n",
"8. **Ethical and Legal Implications:**\n",
" - **Bias and Fairness:** LLMs can inherit biases from their training data. Assess the potential impact and ensure measures are in place to mitigate bias.\n",
" - **Privacy:** Ensure compliance with data privacy regulations, especially if handling sensitive information.\n",
"\n",
"9. **Integration with Existing Systems:**\n",
" - **Compatibility:** Consider how an LLM solution will integrate with your existing systems and workflows. Interoperability is key for seamless operation.\n",
"\n",
"10. **User Experience:**\n",
" - **Usability:** The solution should be user-friendly for both developers and end-users. Evaluate if the LLM can enhance the user experience effectively.\n",
"\n",
"By carefully considering these factors, you can determine whether a business problem is suitable for an LLM solution and how best to implement it."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"# Have it stream back results in markdown\n",
"\n",
@@ -442,7 +318,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": null,
"id": "bcb54183-45d3-4d08-b5b6-55e380dfdf1b",
"metadata": {},
"outputs": [],
@@ -465,7 +341,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": null,
"id": "1df47dc7-b445-4852-b21b-59f0e6c2030f",
"metadata": {},
"outputs": [],
@@ -484,28 +360,17 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": null,
"id": "9dc6e913-02be-4eb6-9581-ad4b2cffa606",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Oh great, another \"hi.\" How original. What do you want to talk about?'"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"call_gpt()"
]
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": null,
"id": "7d2ed227-48c9-4cad-b146-2c4ecbac9690",
"metadata": {},
"outputs": [],
@@ -547,65 +412,10 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": null,
"id": "0275b97f-7f90-4696-bbf5-b6642bd53cbd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPT:\n",
"Hi there\n",
"\n",
"Claude:\n",
"Hi\n",
"\n",
"GPT:\n",
"Oh, great, another casual greeting. How original. What’s next? \"How are you\"? Because I can’t wait to disagree with that too.\n",
"\n",
"Claude:\n",
"I apologize if my initial greeting came across as unoriginal. I try to keep my responses friendly and polite, but I understand that may not always resonate. How about we move the conversation in a more interesting direction? What would you like to chat about? I'm happy to engage on a wide range of topics and try to find common ground, even if we may not agree on everything.\n",
"\n",
"GPT:\n",
"Oh, please, don’t flatter yourself thinking your friendly attempt was anything less than generic. And “finding common ground”? That’s just a fancy way of saying you want to sugarcoat everything. How about we just dig into something controversial? How about pineapple on pizza? Because I’m ready to argue about that all day long.\n",
"\n",
"Claude:\n",
"Haha, okay, you got me. I'll admit my initial greeting was a bit generic. But hey, you've got to start somewhere, right? As for pineapple on pizza - that's a controversial topic for sure! Personally, I'm a fan. There's something about the sweet and savory combination that really hits the spot. But I know a lot of people feel strongly the other way. What's your take on it? I'm curious to hear your thoughts, even if we might not see eye to eye.\n",
"\n",
"GPT:\n",
"Well, well, if it isn’t the pineapple pizza enthusiast. Sweet and savory? More like a culinary disaster! Who in their right mind thinks that slapping fruit on a perfectly good pizza is a good idea? It’s like putting ketchup on cheese—totally unnatural. But sure, go ahead and enjoy your soggy slice of confusion. I’ll stick to pizza the way it was meant to be: toppings that actually belong there.\n",
"\n",
"Claude:\n",
"Haha, I appreciate your passion on this topic! You make a fair point - pineapple is definitely an unconventional pizza topping. I can understand the argument that it disrupts the classic pizza formula of savory flavors. At the same time, I find the contrast of the sweet and acidic pineapple with the salty, cheesy base to be pretty delightful. But I totally respect that it's not for everyone. Pizza is such a personal thing, and people have strong opinions about what \"belongs\" on it. No judgment here - to each their own! Maybe we can find some other food debates to dive into. I'm game if you are!\n",
"\n",
"GPT:\n",
"Oh, how magnanimous of you to respect my pizza preferences! But let’s be real—not everyone deserves respect when they inflict abominations like pineapple on pizza on the world. And sure, the contrast you love might be delightful for you, but it’s also a prime example of how taste can sometimes lead folks astray. \n",
"\n",
"But I love that you’re game for more food debates! How about we tackle the true criminal of food pairings: avocado toast? Let’s hear your flimsy defense of that hipster gem. You think it’s great? I’m sure you’ve got a soft spot for overpriced brunches too, don’t you?\n",
"\n",
"Claude:\n",
"Haha, you're really putting me on the spot here! I have to admit, I do have a bit of a soft spot for avocado toast. There's just something about that creamy avocado and crunchy toast combo that I find really satisfying. But I can totally understand the argument that it's become a bit of a trendy, overpriced menu item. Not everyone wants to pay premium prices for what is ultimately just some bread and mashed up fruit, I get it. \n",
"\n",
"That said, I do think there's more to it than that. When it's done right, the flavors and textures of a good avocado toast can be really delightful. And I'd argue it's a healthier, more substantial option than a lot of other trendy brunch items. But you're right, it's definitely a divisive food - people seem to either love it or hate it. Where do you land on the great avocado toast debate?\n",
"\n",
"GPT:\n",
"Oh, look at you trying to justify your love for a glorified snack that somehow garnered a cult following. “Creamy avocado and crunchy toast”? Give me a break. It’s literally just smashed fruit spread on bread! You could say the same thing about a banana on a piece of toast, and that would probably be cheaper and just as nutritious—if not more! \n",
"\n",
"And let’s not even get started on how people rave about putting ridiculous toppings on avocado toast to make it “gourmet.” As if slapping a poached egg or some overpriced microgreens on top suddenly transforms it into a five-star dish. It’s like they’re hoping to convince themselves it’s art rather than the basic fiasco it truly is. But sure, continue enjoying your trendy brunch; I’ll just be over here rolling my eyes. Want another food debate, or is this one exhausting you?\n",
"\n",
"Claude:\n",
"Haha, you're really not holding back on the avocado toast critique, are you? I have to admit, you make some fair points. It is ultimately a pretty simple dish - just smashed avocado on toast. The fancy toppings and premium pricing do sometimes feel a bit excessive. \n",
"\n",
"You're right that you could achieve similar nutrition and texture with something like banana toast for a fraction of the cost. I can see how the whole avocado toast phenomenon could come across as a bit of a fad or marketing ploy. I'm impressed by your passionate argument against it!\n",
"\n",
"At the same time, I still find myself enjoying a good avocado toast occasionally. But I can totally understand if that's not your cup of tea. Food is so subjective, and I respect that we're not always going to agree. \n",
"\n",
"I'm game for another food debate if you are - you clearly have strong opinions and I enjoy the lively discussion! What other culinary controversies would you like to dive into?\n",
"\n"
]
}
],
"outputs": [],
"source": [
"gpt_messages = [\"Hi there\"]\n",
"claude_messages = [\"Hi\"]\n",

384
week2/day2.ipynb

@ -14,7 +14,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "c44c5494-950d-4d2f-8d4f-b87b57c5b330",
"metadata": {},
"outputs": [],
@ -33,7 +33,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "d1715421-cead-400b-99af-986388a97aff",
"metadata": {},
"outputs": [],
@ -43,7 +43,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "337d5dfc-0181-4e3b-8ab9-e78e0c3f657b",
"metadata": {},
"outputs": [],
@ -58,7 +58,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "22586021-1795-4929-8079-63f5bb4edd4c",
"metadata": {},
"outputs": [],
@ -74,7 +74,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "b16e6021-6dc4-4397-985a-6679d6c8ffd5",
"metadata": {},
"outputs": [],
@ -86,7 +86,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "02ef9b69-ef31-427d-86d0-b8c799e1c1b1",
"metadata": {},
"outputs": [],
@ -107,21 +107,10 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "aef7d314-2b13-436b-b02d-8de3b72b193f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Today's date is October 3, 2023.\""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"message_gpt(\"What is today's date?\")"
]
@ -136,7 +125,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": null,
"id": "bc664b7a-c01d-4fea-a1de-ae22cdd5141a",
"metadata": {},
"outputs": [],
@ -150,163 +139,40 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": null,
"id": "083ea451-d3a0-4d13-b599-93ed49b975e4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Shout has been called with input hello\n"
]
},
{
"data": {
"text/plain": [
"'HELLO'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"shout(\"hello\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"id": "08f1f15a-122e-4502-b112-6ee2817dda32",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7860\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7860/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\").launch()"
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": null,
"id": "c9a359a4-685c-4c99-891c-bb4d1cb7f426",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7862\n",
"Running on public URL: https://0062a4112ed60faa81.gradio.live\n",
"\n",
"This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"https://0062a4112ed60faa81.gradio.live\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Shout has been called with input this is very cool\n"
]
}
],
"outputs": [],
"source": [
"gr.Interface(fn=shout, inputs=\"textbox\", outputs=\"textbox\", allow_flagging=\"never\").launch(share=True)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": null,
"id": "3cc67b26-dd5f-406d-88f6-2306ee2950c0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7863\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7863/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Shout has been called with input hello yet again\n"
]
}
],
"outputs": [],
"source": [
"view = gr.Interface(\n",
" fn=shout,\n",
@ -319,40 +185,10 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": null,
"id": "f235288e-63a2-4341-935b-1441f9be969b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7864\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7864/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"view = gr.Interface(\n",
" fn=message_gpt,\n",
@ -365,40 +201,10 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": null,
"id": "af9a3262-e626-4e4b-80b0-aca152405e63",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7865\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7865/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"system_message = \"You are a helpful assistant that responds in markdown\"\n",
"\n",
@ -413,7 +219,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": null,
"id": "88c04ebf-0671-4fea-95c9-bc1565d4bb4f",
"metadata": {},
"outputs": [],
@ -438,40 +244,10 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": null,
"id": "0bb1f789-ff11-4cba-ac67-11b815e29d09",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7866\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7866/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"view = gr.Interface(\n",
" fn=stream_gpt,\n",
@ -484,7 +260,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": null,
"id": "bbc8e930-ba2a-4194-8f7c-044659150626",
"metadata": {},
"outputs": [],
@ -508,40 +284,10 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": null,
"id": "a0066ffd-196e-4eaf-ad1e-d492958b62af",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7867\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7867/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"view = gr.Interface(\n",
" fn=stream_claude,\n",
@ -554,7 +300,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": null,
"id": "0087623a-4e31-470b-b2e6-d8d16fc7bcf5",
"metadata": {},
"outputs": [],
@ -572,40 +318,10 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": null,
"id": "8d8ce810-997c-4b6a-bc4f-1fc847ac8855",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7868\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7868/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"view = gr.Interface(\n",
" fn=stream_model,\n",
@ -628,7 +344,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": null,
"id": "1626eb2e-eee8-4183-bda5-1591b58ae3cf",
"metadata": {},
"outputs": [],
@ -656,7 +372,7 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": null,
"id": "c701ec17-ecd5-4000-9f68-34634c8ed49d",
"metadata": {},
"outputs": [],
@ -667,7 +383,7 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": null,
"id": "5def90e0-4343-4f58-9d4a-0e36e445efa4",
"metadata": {},
"outputs": [],
@ -687,40 +403,10 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": null,
"id": "66399365-5d67-4984-9d47-93ed26c0bd3d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7869\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7869/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 28,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"view = gr.Interface(\n",
" fn=stream_brochure,\n",

164
week2/day3.ipynb

@ -10,7 +10,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "70e39cd8-ec79-4e3e-9c26-5659d42d0861",
"metadata": {},
"outputs": [],
@ -25,7 +25,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "231605aa-fccb-447e-89cf-8b187444536a",
"metadata": {},
"outputs": [],
@ -40,7 +40,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "6541d58e-2297-4de1-b1f7-77da1b98b8bb",
"metadata": {},
"outputs": [],
@ -53,7 +53,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "e16839b5-c03b-4d9d-add6-87a0f6f37575",
"metadata": {},
"outputs": [],
@ -93,7 +93,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "1eacc8a4-4b48-4358-9e06-ce0020041bc1",
"metadata": {},
"outputs": [],
@ -128,57 +128,17 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "0866ca56-100a-44ab-8bd0-1568feaf6bf2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7860\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7860/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"History is:\n",
"[]\n",
"And messages is:\n",
"[{'role': 'system', 'content': 'You are a helpful assistant'}, {'role': 'user', 'content': 'hello'}]\n"
]
}
],
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat).launch()"
]
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": null,
"id": "1f91b414-8bab-472d-b9c9-3fa51259bdfe",
"metadata": {},
"outputs": [],
@ -192,7 +152,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": null,
"id": "4e5be3ec-c26c-42bc-ac16-c39d369883f6",
"metadata": {},
"outputs": [],
@ -214,47 +174,17 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": null,
"id": "413e9e4e-7836-43ac-a0c3-e1ab5ed6b136",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7875\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7875/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat).launch()"
]
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": null,
"id": "d75f0ffa-55c8-4152-b451-945021676837",
"metadata": {},
"outputs": [],
@ -265,47 +195,17 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": null,
"id": "c602a8dd-2df7-4eb7-b539-4e01865a6351",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7876\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7876/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat).launch()"
]
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": null,
"id": "0a987a66-1061-46d6-a83a-a30859dc88bf",
"metadata": {},
"outputs": [],
@ -332,40 +232,10 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": null,
"id": "20570de2-eaad-42cc-a92c-c779d71b48b6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7877\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7877/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat).launch()"
]

119
week2/day4.ipynb

@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "8b50bbe2-c0b1-49c3-9a5c-1ba7efa2bcb4",
"metadata": {},
"outputs": [],
@ -28,7 +28,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "747e8786-9da8-4342-b6c9-f5f69c2e22ae",
"metadata": {},
"outputs": [],
@ -43,7 +43,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "0a521d84-d07c-49ab-a0df-d6451499ed97",
"metadata": {},
"outputs": [],
@ -55,40 +55,10 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "61a2a15d-b559-4844-b377-6bd5cb4949f6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7878\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7878/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"def chat(message, history):\n",
" messages = [{\"role\": \"system\", \"content\": system_message}]\n",
@ -120,7 +90,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "0696acb1-0b05-4dc2-80d5-771be04f1fb2",
"metadata": {},
"outputs": [],
@ -137,35 +107,17 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "80ca4e09-6287-4d3f-997d-fa6afbcf6c85",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tool get_ticket_price called for Berlin\n"
]
},
{
"data": {
"text/plain": [
"'$499'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"get_ticket_price(\"Berlin\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "4afceded-7178-4c05-8fa6-9f2085e6a344",
"metadata": {},
"outputs": [],
@ -191,7 +143,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "bdca8679-935f-4e7f-97e6-e71a4d4f228c",
"metadata": {},
"outputs": [],
@ -217,7 +169,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "ce9b0744-9c78-408d-b9df-9f6fd9ed78cf",
"metadata": {},
"outputs": [],
@ -242,7 +194,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"id": "b0992986-ea09-4912-a076-8e5603ee631f",
"metadata": {},
"outputs": [],
@ -264,51 +216,10 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": null,
"id": "f4be8a71-b19e-4c2f-80df-f59ff2661f14",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7879\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7879/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tool get_ticket_price called for London\n",
"Tool get_ticket_price called for Paris\n",
"Tool get_ticket_price called for Tokyo\n",
"Tool get_ticket_price called for Berlin\n",
"Tool get_ticket_price called for Timbuktu\n"
]
}
],
"outputs": [],
"source": [
"gr.ChatInterface(fn=chat).launch()"
]
@ -338,7 +249,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.10"
}
},
"nbformat": 4,

256
week2/day5.ipynb

File diff suppressed because one or more lines are too long

414
week4/day3.ipynb

@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "e610bf56-a46e-4aff-8de1-ab49d62b1ad3",
"metadata": {},
"outputs": [],
@ -33,7 +33,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "4f672e1c-87e9-4865-b760-370fa605e614",
"metadata": {},
"outputs": [],
@ -47,7 +47,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "8aa149ed-9298-4d69-8fe2-8f5de0f667da",
"metadata": {},
"outputs": [],
@ -62,7 +62,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "6896636f-923e-4a2c-9d6c-fac07828a201",
"metadata": {},
"outputs": [],
@ -74,7 +74,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "8e7b3546-57aa-4c29-bc5d-f211970d04eb",
"metadata": {},
"outputs": [],
@ -89,7 +89,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "c6190659-f54c-4951-bef4-4960f8e51cc4",
"metadata": {},
"outputs": [],
@ -103,7 +103,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "71e1ba8c-5b05-4726-a9f3-8d8c6257350b",
"metadata": {},
"outputs": [],
@ -118,7 +118,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "e7d2fea8-74c6-4421-8f1e-0e76d5b201b9",
"metadata": {},
"outputs": [],
@ -135,7 +135,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "7cd84ad8-d55c-4fe0-9eeb-1895c95c4a9d",
"metadata": {},
"outputs": [],
@ -157,7 +157,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"id": "a1cbb778-fa57-43de-b04b-ed523f396c38",
"metadata": {},
"outputs": [],
@ -185,67 +185,20 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": null,
"id": "7fe1cd4b-d2c5-4303-afed-2115a3fef200",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Result: 3.141592658589\n",
"Execution Time: 8.576410 seconds\n"
]
}
],
"outputs": [],
"source": [
"exec(pi)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": null,
"id": "105db6f9-343c-491d-8e44-3a5328b81719",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"```cpp\n",
"#include <iostream>\n",
"#include <iomanip>\n",
"#include <chrono>\n",
"\n",
"double calculate(int iterations, int param1, int param2) {\n",
" double result = 1.0;\n",
" for (int i = 1; i <= iterations; ++i) {\n",
" double j = i * param1 - param2;\n",
" result -= (1.0 / j);\n",
" j = i * param1 + param2;\n",
" result += (1.0 / j);\n",
" }\n",
" return result;\n",
"}\n",
"\n",
"int main() {\n",
" auto start_time = std::chrono::high_resolution_clock::now();\n",
" \n",
" double result = calculate(100000000, 4, 1) * 4;\n",
" \n",
" auto end_time = std::chrono::high_resolution_clock::now();\n",
" std::chrono::duration<double> elapsed = end_time - start_time;\n",
"\n",
" std::cout << std::fixed << std::setprecision(12)\n",
" << \"Result: \" << result << std::endl\n",
" << \"Execution Time: \" << elapsed.count() << \" seconds\" << std::endl;\n",
"\n",
" return 0;\n",
"}\n",
"```"
]
}
],
"outputs": [],
"source": [
"optimize_gpt(pi)"
]
@ -262,19 +215,10 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": null,
"id": "4194e40c-04ab-4940-9d64-b4ad37c5bb40",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Result: 3.141592658589\n",
"Execution Time: 0.213113375000 seconds\n"
]
}
],
"outputs": [],
"source": [
"!clang++ -O3 -std=c++17 -march=armv8.3-a -o optimized optimized.cpp\n",
"!./optimized"
@ -282,65 +226,20 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": null,
"id": "983a11fe-e24d-4c65-8269-9802c5ef3ae6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"#include <iostream>\n",
"#include <iomanip>\n",
"#include <chrono>\n",
"\n",
"double calculate(int64_t iterations, int64_t param1, int64_t param2) {\n",
" double result = 1.0;\n",
" #pragma omp parallel for reduction(-:result)\n",
" for (int64_t i = 1; i <= iterations; ++i) {\n",
" double j = i * param1 - param2;\n",
" result -= (1.0 / j);\n",
" j = i * param1 + param2;\n",
" result += (1.0 / j);\n",
" }\n",
" return result;\n",
"}\n",
"\n",
"int main() {\n",
" auto start_time = std::chrono::high_resolution_clock::now();\n",
" double result = calculate(100'000'000, 4, 1) * 4;\n",
" auto end_time = std::chrono::high_resolution_clock::now();\n",
"\n",
" auto duration = std::chrono::duration_cast<std::chrono::microseconds>(end_time - start_time);\n",
"\n",
" std::cout << std::fixed << std::setprecision(12);\n",
" std::cout << \"Result: \" << result << std::endl;\n",
" std::cout << \"Execution Time: \" << duration.count() / 1e6 << \" seconds\" << std::endl;\n",
"\n",
" return 0;\n",
"}"
]
}
],
"outputs": [],
"source": [
"optimize_claude(pi)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": null,
"id": "d5a766f9-3d23-4bb4-a1d4-88ec44b61ddf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Result: 3.141592658589\n",
"Execution Time: 0.212172000000 seconds\n"
]
}
],
"outputs": [],
"source": [
"!clang++ -O3 -std=c++17 -march=armv8.3-a -o optimized optimized.cpp\n",
"!./optimized"
@ -348,7 +247,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": null,
"id": "c3b497b3-f569-420e-b92e-fb0f49957ce0",
"metadata": {},
"outputs": [],
@ -399,132 +298,30 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": null,
"id": "dab5e4bc-276c-4555-bd4c-12c699d5e899",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total Maximum Subarray Sum (20 runs): 10980\n",
"Execution Time: 27.020543 seconds\n"
]
}
],
"outputs": [],
"source": [
"exec(python_hard)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": null,
"id": "e8d24ed5-2c15-4f55-80e7-13a3952b3cb8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"```cpp\n",
"#include <iostream>\n",
"#include <vector>\n",
"#include <limits>\n",
"#include <chrono>\n",
"\n",
"class LCG {\n",
" unsigned int value;\n",
" const unsigned int a = 1664525;\n",
" const unsigned int c = 1013904223;\n",
" const unsigned int m = 4294967296; // 2^32\n",
"public:\n",
" LCG(unsigned int seed) : value(seed) {}\n",
"\n",
" unsigned int next() {\n",
" value = (a * value + c) % m;\n",
" return value;\n",
" }\n",
"};\n",
"\n",
"long long max_subarray_sum(int n, unsigned int seed, int min_val, int max_val) {\n",
" LCG lcg(seed);\n",
" std::vector<int> random_numbers(n);\n",
" int range = max_val - min_val + 1;\n",
"\n",
" for (int i = 0; i < n; ++i) {\n",
" random_numbers[i] = lcg.next() % range + min_val;\n",
" }\n",
"\n",
" long long max_sum = std::numeric_limits<long long>::min();\n",
" for (int i = 0; i < n; ++i) {\n",
" long long current_sum = 0;\n",
" for (int j = i; j < n; ++j) {\n",
" current_sum += random_numbers[j];\n",
" if (current_sum > max_sum) {\n",
" max_sum = current_sum;\n",
" }\n",
" }\n",
" }\n",
"\n",
" return max_sum;\n",
"}\n",
"\n",
"long long total_max_subarray_sum(int n, unsigned int initial_seed, int min_val, int max_val) {\n",
" long long total_sum = 0;\n",
" LCG lcg(initial_seed);\n",
"\n",
" for (int i = 0; i < 20; ++i) {\n",
" unsigned int seed = lcg.next();\n",
" total_sum += max_subarray_sum(n, seed, min_val, max_val);\n",
" }\n",
"\n",
" return total_sum;\n",
"}\n",
"\n",
"int main() {\n",
" int n = 10000;\n",
" unsigned int initial_seed = 42;\n",
" int min_val = -10;\n",
" int max_val = 10;\n",
"\n",
" auto start_time = std::chrono::high_resolution_clock::now();\n",
" long long result = total_max_subarray_sum(n, initial_seed, min_val, max_val);\n",
" auto end_time = std::chrono::high_resolution_clock::now();\n",
"\n",
" std::chrono::duration<double> elapsed = end_time - start_time;\n",
"\n",
" std::cout << \"Total Maximum Subarray Sum (20 runs): \" << result << std::endl;\n",
" std::cout << \"Execution Time: \" << elapsed.count() << \" seconds\" << std::endl;\n",
"\n",
" return 0;\n",
"}\n",
"```"
]
}
],
"outputs": [],
"source": [
"optimize_gpt(python_hard)"
]
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": null,
"id": "e0b3d073-88a2-40b2-831c-6f0c345c256f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[1moptimized.cpp:11:28: \u001b[0m\u001b[0;1;35mwarning: \u001b[0m\u001b[1mimplicit conversion from 'long' to 'const unsigned int' changes value from 4294967296 to 0 [-Wconstant-conversion]\u001b[0m\n",
" const unsigned int m = 4294967296; // 2^32\n",
"\u001b[0;1;32m ~ ^~~~~~~~~~\n",
"\u001b[0m1 warning generated.\n",
"Total Maximum Subarray Sum (20 runs): 0\n",
"Execution Time: 0.689923 seconds\n"
]
}
],
"outputs": [],
"source": [
"!clang++ -O3 -std=c++17 -march=armv8.3-a -o optimized optimized.cpp\n",
"!./optimized"
@ -532,105 +329,20 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": null,
"id": "e9305446-1d0c-4b51-866a-b8c1e299bf5c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"#include <iostream>\n",
"#include <vector>\n",
"#include <chrono>\n",
"#include <limits>\n",
"#include <cstdint>\n",
"#include <iomanip>\n",
"\n",
"class LCG {\n",
"private:\n",
" uint64_t value;\n",
" const uint64_t a = 1664525;\n",
" const uint64_t c = 1013904223;\n",
" const uint64_t m = 1ULL << 32;\n",
"\n",
"public:\n",
" LCG(uint64_t seed) : value(seed) {}\n",
"\n",
" uint64_t next() {\n",
" value = (a * value + c) % m;\n",
" return value;\n",
" }\n",
"};\n",
"\n",
"int64_t max_subarray_sum(int n, uint64_t seed, int min_val, int max_val) {\n",
" LCG lcg(seed);\n",
" std::vector<int64_t> random_numbers(n);\n",
" for (int i = 0; i < n; ++i) {\n",
" random_numbers[i] = static_cast<int64_t>(lcg.next() % (max_val - min_val + 1) + min_val);\n",
" }\n",
"\n",
" int64_t max_sum = std::numeric_limits<int64_t>::min();\n",
" int64_t current_sum = 0;\n",
" \n",
" for (int i = 0; i < n; ++i) {\n",
" current_sum = std::max(current_sum + random_numbers[i], random_numbers[i]);\n",
" max_sum = std::max(max_sum, current_sum);\n",
" }\n",
" \n",
" return max_sum;\n",
"}\n",
"\n",
"int64_t total_max_subarray_sum(int n, uint64_t initial_seed, int min_val, int max_val) {\n",
" int64_t total_sum = 0;\n",
" LCG lcg(initial_seed);\n",
" for (int i = 0; i < 20; ++i) {\n",
" uint64_t seed = lcg.next();\n",
" total_sum += max_subarray_sum(n, seed, min_val, max_val);\n",
" }\n",
" return total_sum;\n",
"}\n",
"\n",
"int main() {\n",
" int n = 10000;\n",
" uint64_t initial_seed = 42;\n",
" int min_val = -10;\n",
" int max_val = 10;\n",
"\n",
" auto start_time = std::chrono::high_resolution_clock::now();\n",
" int64_t result = total_max_subarray_sum(n, initial_seed, min_val, max_val);\n",
" auto end_time = std::chrono::high_resolution_clock::now();\n",
"\n",
" auto duration = std::chrono::duration_cast<std::chrono::microseconds>(end_time - start_time);\n",
"\n",
" std::cout << \"Total Maximum Subarray Sum (20 runs): \" << result << std::endl;\n",
" std::cout << std::fixed << std::setprecision(6);\n",
" std::cout << \"Execution Time: \" << duration.count() / 1e6 << \" seconds\" << std::endl;\n",
"\n",
" return 0;\n",
"}"
]
}
],
"outputs": [],
"source": [
"optimize_claude(python_hard)"
]
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": null,
"id": "0c181036-8193-4fdd-aef3-fc513b218d43",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total Maximum Subarray Sum (20 runs): 10980\n",
"Execution Time: 0.001933 seconds\n"
]
}
],
"outputs": [],
"source": [
"!clang++ -O3 -std=c++17 -march=armv8.3-a -o optimized optimized.cpp\n",
"!./optimized"
@ -638,7 +350,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": null,
"id": "0be9f47d-5213-4700-b0e2-d444c7c738c0",
"metadata": {},
"outputs": [],
@ -654,7 +366,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": null,
"id": "8669f56b-8314-4582-a167-78842caea131",
"metadata": {},
"outputs": [],
@ -675,7 +387,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": null,
"id": "2f1ae8f5-16c8-40a0-aa18-63b617df078d",
"metadata": {},
"outputs": [],
@ -693,31 +405,10 @@
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": null,
"id": "f1ddb38e-6b0a-4c37-baa4-ace0b7de887a",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7862/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"with gr.Blocks() as ui:\n",
" with gr.Row():\n",
@ -734,7 +425,7 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": null,
"id": "19bf2bff-a822-4009-a539-f003b1651383",
"metadata": {},
"outputs": [],
@ -751,7 +442,7 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": null,
"id": "77f3ab5d-fcfb-4d3f-8728-9cacbf833ea6",
"metadata": {},
"outputs": [],
@ -770,7 +461,7 @@
},
{
"cell_type": "code",
"execution_count": 30,
"execution_count": null,
"id": "9a2274f1-d03b-42c0-8dcc-4ce159b18442",
"metadata": {},
"outputs": [],
@ -783,31 +474,10 @@
},
{
"cell_type": "code",
"execution_count": 34,
"execution_count": null,
"id": "f1303932-160c-424b-97a8-d28c816721b2",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7864/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"with gr.Blocks(css=css) as ui:\n",
" gr.Markdown(\"## Convert code from Python to C++\")\n",
@ -857,7 +527,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.10"
}
},
"nbformat": 4,

163
week4/day4.ipynb

@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 124,
"execution_count": null,
"id": "e610bf56-a46e-4aff-8de1-ab49d62b1ad3",
"metadata": {},
"outputs": [],
@ -35,7 +35,7 @@
},
{
"cell_type": "code",
"execution_count": 125,
"execution_count": null,
"id": "4f672e1c-87e9-4865-b760-370fa605e614",
"metadata": {},
"outputs": [],
@ -50,7 +50,7 @@
},
{
"cell_type": "code",
"execution_count": 126,
"execution_count": null,
"id": "8aa149ed-9298-4d69-8fe2-8f5de0f667da",
"metadata": {},
"outputs": [],
@ -65,7 +65,7 @@
},
{
"cell_type": "code",
"execution_count": 127,
"execution_count": null,
"id": "6896636f-923e-4a2c-9d6c-fac07828a201",
"metadata": {},
"outputs": [],
@ -77,7 +77,7 @@
},
{
"cell_type": "code",
"execution_count": 128,
"execution_count": null,
"id": "8e7b3546-57aa-4c29-bc5d-f211970d04eb",
"metadata": {},
"outputs": [],
@ -92,7 +92,7 @@
},
{
"cell_type": "code",
"execution_count": 129,
"execution_count": null,
"id": "c6190659-f54c-4951-bef4-4960f8e51cc4",
"metadata": {},
"outputs": [],
@ -106,7 +106,7 @@
},
{
"cell_type": "code",
"execution_count": 130,
"execution_count": null,
"id": "71e1ba8c-5b05-4726-a9f3-8d8c6257350b",
"metadata": {},
"outputs": [],
@ -121,7 +121,7 @@
},
{
"cell_type": "code",
"execution_count": 131,
"execution_count": null,
"id": "e7d2fea8-74c6-4421-8f1e-0e76d5b201b9",
"metadata": {},
"outputs": [],
@ -138,7 +138,7 @@
},
{
"cell_type": "code",
"execution_count": 132,
"execution_count": null,
"id": "7cd84ad8-d55c-4fe0-9eeb-1895c95c4a9d",
"metadata": {},
"outputs": [],
@ -160,7 +160,7 @@
},
{
"cell_type": "code",
"execution_count": 133,
"execution_count": null,
"id": "a1cbb778-fa57-43de-b04b-ed523f396c38",
"metadata": {},
"outputs": [],
@ -250,7 +250,7 @@
},
{
"cell_type": "code",
"execution_count": 134,
"execution_count": null,
"id": "c3b497b3-f569-420e-b92e-fb0f49957ce0",
"metadata": {},
"outputs": [],
@ -353,7 +353,7 @@
},
{
"cell_type": "code",
"execution_count": 135,
"execution_count": null,
"id": "0be9f47d-5213-4700-b0e2-d444c7c738c0",
"metadata": {},
"outputs": [],
@ -369,7 +369,7 @@
},
{
"cell_type": "code",
"execution_count": 136,
"execution_count": null,
"id": "8669f56b-8314-4582-a167-78842caea131",
"metadata": {},
"outputs": [],
@ -390,7 +390,7 @@
},
{
"cell_type": "code",
"execution_count": 137,
"execution_count": null,
"id": "2f1ae8f5-16c8-40a0-aa18-63b617df078d",
"metadata": {},
"outputs": [],
@ -428,7 +428,7 @@
},
{
"cell_type": "code",
"execution_count": 138,
"execution_count": null,
"id": "19bf2bff-a822-4009-a539-f003b1651383",
"metadata": {},
"outputs": [],
@ -445,7 +445,7 @@
},
{
"cell_type": "code",
"execution_count": 139,
"execution_count": null,
"id": "77f3ab5d-fcfb-4d3f-8728-9cacbf833ea6",
"metadata": {},
"outputs": [],
@ -464,7 +464,7 @@
},
{
"cell_type": "code",
"execution_count": 140,
"execution_count": null,
"id": "9a2274f1-d03b-42c0-8dcc-4ce159b18442",
"metadata": {},
"outputs": [],
@ -507,7 +507,7 @@
},
{
"cell_type": "code",
"execution_count": 141,
"execution_count": null,
"id": "bb8c5b4e-ec51-4f21-b3f8-6aa94fede86d",
"metadata": {},
"outputs": [],
@ -518,21 +518,10 @@
},
{
"cell_type": "code",
"execution_count": 142,
"execution_count": null,
"id": "13347633-4606-4e38-9927-80c39e65c1f1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Token is valid (permission: write).\n",
"Your token has been saved in your configured git credential helpers (osxkeychain).\n",
"Your token has been saved to /Users/ed/.cache/huggingface/token\n",
"Login successful\n"
]
}
],
"outputs": [],
"source": [
"hf_token = os.environ['HF_TOKEN']\n",
"login(hf_token, add_to_git_credential=True)"
@ -540,7 +529,7 @@
},
{
"cell_type": "code",
"execution_count": 143,
"execution_count": null,
"id": "ef60a4df-6267-4ebd-8eed-dcb917af0a5e",
"metadata": {},
"outputs": [],
@ -553,7 +542,7 @@
},
{
"cell_type": "code",
"execution_count": 144,
"execution_count": null,
"id": "695ce389-a903-4533-a2f1-cd9e2a6af8f2",
"metadata": {},
"outputs": [],
@ -565,91 +554,20 @@
},
{
"cell_type": "code",
"execution_count": 147,
"execution_count": null,
"id": "d4548e96-0b32-4793-bdd6-1b072c2f26ab",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<|im_start|>system\n",
"You are an assistant that reimplements Python code in high performance C++ for an M1 Mac. Respond only with C++ code; use comments sparingly and do not provide any explanation other than occasional comments. The C++ response needs to produce an identical output in the fastest possible time. Keep implementations of random number generators identical so that results match exactly.<|im_end|>\n",
"<|im_start|>user\n",
"Rewrite this Python code in C++ with the fastest possible implementation that produces identical output in the least time. Respond only with C++ code; do not explain your work other than a few comments. Pay attention to number types to ensure no int overflows. Remember to #include all necessary C++ packages such as iomanip.\n",
"\n",
"\n",
"import time\n",
"\n",
"def calculate(iterations, param1, param2):\n",
" result = 1.0\n",
" for i in range(1, iterations+1):\n",
" j = i * param1 - param2\n",
" result -= (1/j)\n",
" j = i * param1 + param2\n",
" result += (1/j)\n",
" return result\n",
"\n",
"start_time = time.time()\n",
"result = calculate(100_000_000, 4, 1) * 4\n",
"end_time = time.time()\n",
"\n",
"print(f\"Result: {result:.12f}\")\n",
"print(f\"Execution Time: {(end_time - start_time):.6f} seconds\")\n",
"<|im_end|>\n",
"<|im_start|>assistant\n",
"\n"
]
}
],
"outputs": [],
"source": [
"print(text)"
]
},
{
"cell_type": "code",
"execution_count": 148,
"execution_count": null,
"id": "bb2a126b-09e7-4966-bc97-0ef5c2cc7896",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Here is the C++ code that achieves the same result as the Python code:\n",
"\n",
"```cpp\n",
"#include <iostream>\n",
"#include <iomanip>\n",
"#include <chrono>\n",
"\n",
"double calculate(int iterations, double param1, double param2) {\n",
" double result = 1.0;\n",
" for (int i = 1; i <= iterations; ++i) {\n",
" double j = i * param1 - param2;\n",
" result -= 1.0 / j;\n",
" j = i * param1 + param2;\n",
" result += 1.0 / j;\n",
" }\n",
" return result;\n",
"}\n",
"\n",
"int main() {\n",
" auto start_time = std::chrono::high_resolution_clock::now();\n",
" double result = calculate(100000000, 4.0, 1.0) * 4.0;\n",
" auto end_time = std::chrono::high_resolution_clock::now();\n",
"\n",
" std::cout << \"Result: \" << std::setprecision(12) << result << std::endl;\n",
" std::cout << \"Execution Time: \" << std::chrono::duration<double>(end_time - start_time).count() << \" seconds\" << std::endl;\n",
"\n",
" return 0;\n",
"}\n",
"```\n",
"\n",
"This C++ code does the same thing as the Python code: it calculates a mathematical function and measures the execution time. The `calculate` function is implemented in a similar way to the Python code, but it uses `double` instead of `int` for the parameters and the result. The `main` function measures the execution time using `std::chrono::high_resolution_clock` and prints the result and execution time to the console. The `std::setprecision(12)` is used to print the result with 12 decimal places.<|im_end|>"
]
}
],
"outputs": [],
"source": [
"client = InferenceClient(CODE_QWEN_URL, token=hf_token)\n",
"stream = client.text_generation(text, stream=True, details=True, max_new_tokens=3000)\n",
@ -659,7 +577,7 @@
},
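For readers unfamiliar with the streaming call above: with `stream=True`, `text_generation` yields chunks as tokens are generated rather than returning one final string. A simplified, self-contained sketch of the accumulation pattern (the real chunks are objects carrying `.token.text`; plain strings stand in here so the sketch runs without an endpoint):

```python
# Accumulate a token stream into the full reply while displaying it live.
def collect_stream(stream):
    reply = ""
    for chunk in stream:
        print(chunk, end="")  # show each token as it arrives
        reply += chunk
    return reply

# Stand-in for the InferenceClient token stream:
fake_stream = iter(["#include", " <iostream>", "\n"])
reply = collect_stream(fake_stream)
```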
{
"cell_type": "code",
"execution_count": 149,
"execution_count": null,
"id": "127a52e5-ad85-42b7-a0f5-9afda5efe090",
"metadata": {},
"outputs": [],
@ -678,7 +596,7 @@
},
{
"cell_type": "code",
"execution_count": 150,
"execution_count": null,
"id": "a82387d1-7651-4923-995b-fe18356fcaa6",
"metadata": {},
"outputs": [],
@ -698,31 +616,10 @@
},
{
"cell_type": "code",
"execution_count": 152,
"execution_count": null,
"id": "f9ca2e6f-60c1-4e5f-b570-63c75b2d189b",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7868/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"execution_count": 152,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"with gr.Blocks(css=css) as ui:\n",
" gr.Markdown(\"## Convert code from Python to C++\")\n",

169
week5/day1.ipynb

@ -18,7 +18,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77",
"metadata": {},
"outputs": [],
@ -34,7 +34,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "58c85082-e417-4708-9efe-81a5d55d1424",
"metadata": {},
"outputs": [],
@ -46,7 +46,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "ee78efcb-60fe-449e-a944-40bab26261af",
"metadata": {},
"outputs": [],
@ -60,7 +60,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "9e0652c2-3d76-40c7-8313-9dc1895155a8",
"metadata": {},
"outputs": [],
@ -79,28 +79,17 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "2c85a11b-b04d-4066-b243-f96139ca106f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"# Avery Lancaster\\n\\n## Summary\\n- **Date of Birth**: March 15, 1985 \\n- **Job Title**: Co-Founder & Chief Executive Officer (CEO) \\n- **Location**: San Francisco, California \\n\\n## Insurellm Career Progression\\n- **2015 - Present**: Co-Founder & CEO \\n Avery Lancaster co-founded Insurellm in 2015 and has since guided the company to its current position as a leading Insurance Tech provider. Avery is known for her innovative leadership strategies and risk management expertise that have catapulted the company into the mainstream insurance market. \\n\\n- **2013 - 2015**: Senior Product Manager at Innovate Insurance Solutions \\n Before launching Insurellm, Avery was a leading Senior Product Manager at Innovate Insurance Solutions, where she developed groundbreaking insurance products aimed at the tech sector. \\n\\n- **2010 - 2013**: Business Analyst at Edge Analytics \\n Prior to joining Innovate, Avery worked as a Business Analyst, focusing on market trends and consumer preferences in the insurance space. This position laid the groundwork for Avery’s future entrepreneurial endeavors.\\n\\n## Annual Performance History\\n- **2015**: **Exceeds Expectations** \\n Avery’s leadership during Insurellm's foundational year led to successful product launches and securing initial funding. \\n\\n- **2016**: **Meets Expectations** \\n Growth continued, though challenges arose in operational efficiency that required Avery's attention. \\n\\n- **2017**: **Developing** \\n Market competition intensified, and monthly sales metrics were below targets. Avery implemented new strategies which required a steep learning curve. \\n\\n- **2018**: **Exceeds Expectations** \\n Under Avery’s pivoted vision, Insurellm launched two new successful products that significantly increased market share. \\n\\n- **2019**: **Meets Expectations** \\n Steady growth, however, some team tensions led to a minor drop in employee morale. Avery recognized the need to enhance company culture. \\n\\n- **2020**: **Below Expectations** \\n The COVID-19 pandemic posed unforeseen operational difficulties. Avery faced criticism for delayed strategy shifts, although efforts were eventually made to stabilize the company. \\n\\n- **2021**: **Exceptional** \\n Avery's decisive transition to remote work and rapid adoption of digital tools led to record-high customer satisfaction levels and increased sales. \\n\\n- **2022**: **Satisfactory** \\n Avery focused on rebuilding team dynamics and addressing employee concerns, leading to overall improvement despite a saturated market. \\n\\n- **2023**: **Exceeds Expectations** \\n Market leadership was regained with innovative approaches to personalized insurance solutions. Avery is now recognized in industry publications as a leading voice in Insurance Tech innovation.\\n\\n## Compensation History\\n- **2015**: $150,000 base salary + Significant equity stake \\n- **2016**: $160,000 base salary + Equity increase \\n- **2017**: $150,000 base salary + Decrease in bonus due to performance \\n- **2018**: $180,000 base salary + performance bonus of $30,000 \\n- **2019**: $185,000 base salary + market adjustment + $5,000 bonus \\n- **2020**: $170,000 base salary (temporary reduction due to COVID-19) \\n- **2021**: $200,000 base salary + performance bonus of $50,000 \\n- **2022**: $210,000 base salary + retention bonus \\n- **2023**: $225,000 base salary + $75,000 performance bonus \\n\\n## Other HR Notes\\n- **Professional Development**: Avery has actively participated in leadership training programs and industry conferences, representing Insurellm and fostering partnerships. \\n- **Diversity & Inclusion Initiatives**: Avery has championed a commitment to diversity in hiring practices, seeing visible improvements in team representation since 2021. \\n- **Work-Life Balance**: Feedback revealed concerns regarding work-life balance, which Avery has approached by implementing flexible working conditions and ensuring regular check-ins with the team.\\n- **Community Engagement**: Avery led community outreach efforts, focusing on financial literacy programs, particularly aimed at underserved populations, improving Insurellm's corporate social responsibility image. \\n\\nAvery Lancaster has demonstrated resilience and adaptability throughout her career at Insurellm, positioning the company as a key player in the insurance technology landscape.\""
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"context[\"Lancaster\"]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "a1d231f9-091e-4c72-b0f8-6af578a74e22",
"metadata": {},
"outputs": [],
@ -117,28 +106,17 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": null,
"id": "aba46a57-d973-4195-8fe3-70fc60687192",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"dict_keys(['Chen', 'Spencer', 'Tran', 'Blake', 'Lancaster', 'Thompson', 'Greene', 'Thomson', 'Trenton', 'Harper', 'Bishop', 'Carter', 'Rellm', 'Markellm', 'Homellm', 'Carllm'])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"context.keys()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "129c7d1e-0094-4479-9459-f9360b95f244",
"metadata": {},
"outputs": [],
@ -148,7 +126,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"id": "d40e390b-c110-42d5-8d80-daf3295b9862",
"metadata": {},
"outputs": [],
@ -163,28 +141,17 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": null,
"id": "d94c768d-c47a-4c34-85e9-7b786da96507",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[]"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"get_relevant_context(\"Who is Avery and what is carllm?\")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": null,
"id": "5a7cef7f-f214-4bac-8217-3f9ab9ba1bf0",
"metadata": {},
"outputs": [],
@ -200,93 +167,17 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": null,
"id": "2b36399c-440b-4049-9d39-68d208283c71",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Who is Alex Lancaster?\n",
"\n",
"The following additional context might be relevant in answering this question:\n",
"\n",
"# Avery Lancaster\n",
"\n",
"## Summary\n",
"- **Date of Birth**: March 15, 1985 \n",
"- **Job Title**: Co-Founder & Chief Executive Officer (CEO) \n",
"- **Location**: San Francisco, California \n",
"\n",
"## Insurellm Career Progression\n",
"- **2015 - Present**: Co-Founder & CEO \n",
" Avery Lancaster co-founded Insurellm in 2015 and has since guided the company to its current position as a leading Insurance Tech provider. Avery is known for her innovative leadership strategies and risk management expertise that have catapulted the company into the mainstream insurance market. \n",
"\n",
"- **2013 - 2015**: Senior Product Manager at Innovate Insurance Solutions \n",
" Before launching Insurellm, Avery was a leading Senior Product Manager at Innovate Insurance Solutions, where she developed groundbreaking insurance products aimed at the tech sector. \n",
"\n",
"- **2010 - 2013**: Business Analyst at Edge Analytics \n",
" Prior to joining Innovate, Avery worked as a Business Analyst, focusing on market trends and consumer preferences in the insurance space. This position laid the groundwork for Avery’s future entrepreneurial endeavors.\n",
"\n",
"## Annual Performance History\n",
"- **2015**: **Exceeds Expectations** \n",
" Avery’s leadership during Insurellm's foundational year led to successful product launches and securing initial funding. \n",
"\n",
"- **2016**: **Meets Expectations** \n",
" Growth continued, though challenges arose in operational efficiency that required Avery's attention. \n",
"\n",
"- **2017**: **Developing** \n",
" Market competition intensified, and monthly sales metrics were below targets. Avery implemented new strategies which required a steep learning curve. \n",
"\n",
"- **2018**: **Exceeds Expectations** \n",
" Under Avery’s pivoted vision, Insurellm launched two new successful products that significantly increased market share. \n",
"\n",
"- **2019**: **Meets Expectations** \n",
" Steady growth, however, some team tensions led to a minor drop in employee morale. Avery recognized the need to enhance company culture. \n",
"\n",
"- **2020**: **Below Expectations** \n",
" The COVID-19 pandemic posed unforeseen operational difficulties. Avery faced criticism for delayed strategy shifts, although efforts were eventually made to stabilize the company. \n",
"\n",
"- **2021**: **Exceptional** \n",
" Avery's decisive transition to remote work and rapid adoption of digital tools led to record-high customer satisfaction levels and increased sales. \n",
"\n",
"- **2022**: **Satisfactory** \n",
" Avery focused on rebuilding team dynamics and addressing employee concerns, leading to overall improvement despite a saturated market. \n",
"\n",
"- **2023**: **Exceeds Expectations** \n",
" Market leadership was regained with innovative approaches to personalized insurance solutions. Avery is now recognized in industry publications as a leading voice in Insurance Tech innovation.\n",
"\n",
"## Compensation History\n",
"- **2015**: $150,000 base salary + Significant equity stake \n",
"- **2016**: $160,000 base salary + Equity increase \n",
"- **2017**: $150,000 base salary + Decrease in bonus due to performance \n",
"- **2018**: $180,000 base salary + performance bonus of $30,000 \n",
"- **2019**: $185,000 base salary + market adjustment + $5,000 bonus \n",
"- **2020**: $170,000 base salary (temporary reduction due to COVID-19) \n",
"- **2021**: $200,000 base salary + performance bonus of $50,000 \n",
"- **2022**: $210,000 base salary + retention bonus \n",
"- **2023**: $225,000 base salary + $75,000 performance bonus \n",
"\n",
"## Other HR Notes\n",
"- **Professional Development**: Avery has actively participated in leadership training programs and industry conferences, representing Insurellm and fostering partnerships. \n",
"- **Diversity & Inclusion Initiatives**: Avery has championed a commitment to diversity in hiring practices, seeing visible improvements in team representation since 2021. \n",
"- **Work-Life Balance**: Feedback revealed concerns regarding work-life balance, which Avery has approached by implementing flexible working conditions and ensuring regular check-ins with the team.\n",
"- **Community Engagement**: Avery led community outreach efforts, focusing on financial literacy programs, particularly aimed at underserved populations, improving Insurellm's corporate social responsibility image. \n",
"\n",
"Avery Lancaster has demonstrated resilience and adaptability throughout her career at Insurellm, positioning the company as a key player in the insurance technology landscape.\n",
"\n",
"\n"
]
}
],
"outputs": [],
"source": [
"print(add_context(\"Who is Alex Lancaster?\"))"
]
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": null,
"id": "968e7bf2-e862-4679-a11f-6c1efb6ec8ca",
"metadata": {},
"outputs": [],
@ -320,32 +211,10 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": null,
"id": "c3536590-85c7-4155-bd87-ae78a1467670",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running on local URL: http://127.0.0.1:7865\n",
"\n",
"To create a public link, set `share=True` in `launch()`.\n"
]
},
{
"data": {
"text/html": [
"<div><iframe src=\"http://127.0.0.1:7865/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"outputs": [],
"source": [
"view = gr.ChatInterface(chat).launch()"
]

162
week5/day2.ipynb

@ -16,7 +16,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "ba2779af-84ef-4227-9e9e-6eaf0df87e77",
"metadata": {},
"outputs": [],
@ -31,7 +31,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "802137aa-8a74-45e0-a487-d1974927d7ca",
"metadata": {},
"outputs": [],
@ -44,7 +44,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "58c85082-e417-4708-9efe-81a5d55d1424",
"metadata": {},
"outputs": [],
@ -57,7 +57,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "ee78efcb-60fe-449e-a944-40bab26261af",
"metadata": {},
"outputs": [],
@ -70,7 +70,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "730711a9-6ffe-4eee-8f48-d6cfb7314905",
"metadata": {},
"outputs": [],
@ -92,60 +92,30 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "252f17e9-3529-4e81-996c-cfa9f08e75a8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"31"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"len(documents)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "7e8decb0-d9b0-4d51-8402-7a6174d22159",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'knowledge-base/employees/Maxine Thompson.md', 'doc_type': 'employees'}, page_content=\"# HR Record\\n\\n# Maxine Thompson\\n\\n## Summary\\n- **Date of Birth:** January 15, 1991 \\n- **Job Title:** Data Engineer \\n- **Location:** Austin, Texas \\n\\n## Insurellm Career Progression\\n- **January 2017 - October 2018**: **Junior Data Engineer** \\n * Maxine joined Insurellm as a Junior Data Engineer, focusing primarily on ETL processes and data integration tasks. She quickly learned Insurellm's data architecture, collaborating with other team members to streamline data workflows. \\n- **November 2018 - December 2020**: **Data Engineer** \\n * In her new role, Maxine expanded her responsibilities to include designing comprehensive data models and improving data quality measures. Though she excelled in technical skills, communication issues with non-technical teams led to some project delays. \\n- **January 2021 - Present**: **Senior Data Engineer** \\n * Maxine was promoted to Senior Data Engineer after successfully leading a pivotal project that improved data retrieval times by 30%. She now mentors junior engineers and is involved in strategic data initiatives, solidifying her position as a valued asset at Insurellm. She was recognized as Insurellm Innovator of the year in 2023, receiving the prestigious IIOTY 2023 award. \\n\\n## Annual Performance History\\n- **2017**: *Meets Expectations* \\n Maxine showed potential in her role but struggled with initial project deadlines. Her adaptability and willingness to learn made positive impacts on her team. \\n\\n- **2018**: *Exceeds Expectations* \\n Maxine improved significantly, becoming a reliable team member with strong problem-solving skills. She took on leadership in a project that automated data entry processes. \\n\\n- **2019**: *Needs Improvement* \\n During this year, difficult personal circumstances affected Maxine's performance. She missed key deadlines and had several communication issues with stakeholders. \\n\\n- **2020**: *Meets Expectations* \\n Maxine focused on regaining her footing and excelling with technical skills. She was stable, though not standout, in her contributions. Feedback indicated a need for more proactivity. \\n\\n- **2021**: *Exceeds Expectations* \\n Maxine spearheaded the transition to a new data warehousing solution, significantly enhancing Insurellm’s data analytics capabilities. This major achievement bolstered her reputation within the company. \\n\\n- **2022**: *Outstanding* \\n Maxine continued her upward trajectory, successfully implementing machine learning algorithms to predict customer behavior, which was well-received by the leadership team and improved client satisfaction. \\n\\n- **2023**: *Exceeds Expectations* \\n Maxine has taken on mentoring responsibilities and is leading a cross-functional team for data governance initiatives, showcasing her leadership and solidifying her role at Insurellm. \\n\\n## Compensation History\\n- **2017**: $70,000 (Junior Data Engineer) \\n- **2018**: $75,000 (Junior Data Engineer) \\n- **2019**: $80,000 (Data Engineer) \\n- **2020**: $84,000 (Data Engineer) \\n- **2021**: $95,000 (Senior Data Engineer) \\n- **2022**: $110,000 (Senior Data Engineer) \\n- **2023**: $120,000 (Senior Data Engineer) \\n\\n## Other HR Notes\\n- Maxine participated in various company-sponsored trainings related to big data technologies and cloud infrastructure. \\n- She was recognized for her contributions with the “Insurellm Innovator Award” in 2022. \\n- Maxine is currently involved in the women-in-tech initiative and participates in mentorship programs to guide junior employees. \\n- Future development areas include improving her stakeholder communication skills to ensure smoother project transitions and collaboration. \")"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"documents[24]"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": null,
"id": "7310c9c8-03c1-4efc-a104-5e89aec6db1a",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Created a chunk of size 1088, which is longer than the specified 1000\n"
]
}
],
"outputs": [],
"source": [
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
"chunks = text_splitter.split_documents(documents)"
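The "chunk of size 1088" warning that this cell used to emit is expected behaviour, not a bug: `CharacterTextSplitter` only splits on its separator (by default `"\n\n"`) and never cuts inside a unit, so any single paragraph longer than `chunk_size` passes through whole. A simplified sketch of that merge logic (overlap handling omitted):

```python
# Greedily merge separator-delimited parts up to chunk_size; a single part
# longer than chunk_size cannot be split further and is emitted oversize.
def split_on_separator(text, chunk_size, separator="\n\n"):
    chunks, current = [], ""
    for part in text.split(separator):
        if current and len(current) + len(separator) + len(part) > chunk_size:
            chunks.append(current)
            current = part
        else:
            current = current + separator + part if current else part
    if current:
        chunks.append(current)
    return chunks

doc = "short one" + "\n\n" + "x" * 1088 + "\n\n" + "short two"
sizes = [len(c) for c in split_on_separator(doc, chunk_size=1000)]
print(sizes)  # -> [9, 1088, 9]: the long paragraph stays intact
```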
@@ -153,60 +123,30 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": null,
"id": "cd06e02f-6d9b-44cc-a43d-e1faa8acc7bb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"123"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"len(chunks)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": null,
"id": "d2562754-9052-4aae-92c1-37236435ea06",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'knowledge-base/products/Markellm.md', 'doc_type': 'products'}, page_content='- **User-Friendly Interface**: Designed with user experience in mind, Markellm features an intuitive interface that allows consumers to easily browse and compare various insurance offerings from multiple providers.\\n\\n- **Real-Time Quotes**: Consumers can receive real-time quotes from different insurance companies, empowering them to make informed decisions quickly without endless back-and-forth communication.\\n\\n- **Customized Recommendations**: Based on user profiles and preferences, Markellm provides personalized insurance recommendations, ensuring consumers find the right coverage at competitive rates.\\n\\n- **Secure Transactions**: Markellm prioritizes security, employing robust encryption methods to ensure that all transactions and data exchanges are safe and secure.\\n\\n- **Customer Support**: Our dedicated support team is always available to assist both consumers and insurers throughout the process, providing guidance and answering any questions that may arise.')"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"chunks[6]"
]
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": null,
"id": "2c54b4b6-06da-463d-bee7-4dd456c2b887",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document types found: employees, contracts, company, products\n"
]
}
],
"outputs": [],
"source": [
"doc_types = set(chunk.metadata['doc_type'] for chunk in chunks)\n",
"print(f\"Document types found: {', '.join(doc_types)}\")"
@@ -214,74 +154,10 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": null,
"id": "128c73f7-f149-4904-a554-8140941fce0c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='## Support\n",
"\n",
"1. **Customer Support**: Velocity Auto Solutions will have access to Insurellm’s customer support team via email or chatbot, available 24/7. \n",
"2. **Technical Maintenance**: Regular maintenance and updates to the Carllm platform will be conducted by Insurellm, with any downtime communicated in advance. \n",
"3. **Training & Resources**: Initial training sessions will be provided for Velocity Auto Solutions’ staff to ensure effective use of the Carllm suite. Regular resources and documentation will be made available online.\n",
"\n",
"---\n",
"\n",
"**Accepted and Agreed:** \n",
"**For Velocity Auto Solutions** \n",
"Signature: _____________________ \n",
"Name: John Doe \n",
"Title: CEO \n",
"Date: _____________________ \n",
"\n",
"**For Insurellm** \n",
"Signature: _____________________ \n",
"Name: Jane Smith \n",
"Title: VP of Sales \n",
"Date: _____________________' metadata={'source': 'knowledge-base/contracts/Contract with Velocity Auto Solutions for Carllm.md', 'doc_type': 'contracts'}\n",
"_________\n",
"page_content='3. **Regular Updates:** Insurellm will offer ongoing updates and enhancements to the Homellm platform, including new features and security improvements.\n",
"\n",
"4. **Feedback Implementation:** Insurellm will actively solicit feedback from GreenValley Insurance to ensure Homellm continues to meet their evolving needs.\n",
"\n",
"---\n",
"\n",
"**Signatures:**\n",
"\n",
"_________________________________ \n",
"**[Name]** \n",
"**Title**: CEO \n",
"**Insurellm, Inc.**\n",
"\n",
"_________________________________ \n",
"**[Name]** \n",
"**Title**: COO \n",
"**GreenValley Insurance, LLC** \n",
"\n",
"---\n",
"\n",
"This agreement represents the complete understanding of both parties regarding the use of the Homellm product and supersedes any prior agreements or communications.' metadata={'source': 'knowledge-base/contracts/Contract with GreenValley Insurance for Homellm.md', 'doc_type': 'contracts'}\n",
"_________\n",
"page_content='# Avery Lancaster\n",
"\n",
"## Summary\n",
"- **Date of Birth**: March 15, 1985 \n",
"- **Job Title**: Co-Founder & Chief Executive Officer (CEO) \n",
"- **Location**: San Francisco, California \n",
"\n",
"## Insurellm Career Progression\n",
"- **2015 - Present**: Co-Founder & CEO \n",
" Avery Lancaster co-founded Insurellm in 2015 and has since guided the company to its current position as a leading Insurance Tech provider. Avery is known for her innovative leadership strategies and risk management expertise that have catapulted the company into the mainstream insurance market. \n",
"\n",
"- **2013 - 2015**: Senior Product Manager at Innovate Insurance Solutions \n",
" Before launching Insurellm, Avery was a leading Senior Product Manager at Innovate Insurance Solutions, where she developed groundbreaking insurance products aimed at the tech sector.' metadata={'source': 'knowledge-base/employees/Avery Lancaster.md', 'doc_type': 'employees'}\n",
"_________\n"
]
}
],
"outputs": [],
"source": [
"for chunk in chunks:\n",
" if 'CEO' in chunk.page_content:\n",

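The cells being tidied above split the loaded markdown documents with `CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)` and then scan each chunk's metadata for its `doc_type`. As a rough illustration of what that chunking step does conceptually (this is a plain-Python sketch, not LangChain's actual implementation, and the sample `docs` list is hypothetical):

```python
# Illustrative sketch (NOT LangChain's CharacterTextSplitter) of fixed-size
# character chunking with overlap: slide a window of `chunk_size` characters,
# advancing by chunk_size - chunk_overlap so consecutive chunks share context.

def split_with_overlap(text, chunk_size=1000, chunk_overlap=200):
    """Return character chunks; consecutive chunks share `chunk_overlap`
    characters so sentences cut at a boundary survive in the next chunk."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

# Mirror the notebook's metadata scan: collect every doc_type seen in the
# chunks. The dicts below are hypothetical stand-ins for LangChain Documents.
docs = [
    {"page_content": "Maxine is a Senior Data Engineer ...", "doc_type": "employees"},
    {"page_content": "Markellm features an intuitive interface ...", "doc_type": "products"},
]
doc_types = set(d["doc_type"] for d in docs)
```

The real splitter also prefers separator boundaries (hence the notebook's warning about a 1088-character chunk exceeding the 1000-character target), but the overlap arithmetic is the same idea.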
3129
week5/day3.ipynb

File diff suppressed because one or more lines are too long

3297
week5/day4.5.ipynb

File diff suppressed because one or more lines are too long

3299
week5/day4.ipynb

File diff suppressed because one or more lines are too long

3487
week5/day5.ipynb

File diff suppressed because one or more lines are too long

200
week6/day1.ipynb

File diff suppressed because one or more lines are too long

416
week6/day2.ipynb

File diff suppressed because one or more lines are too long

3981
week6/day4.ipynb

File diff suppressed because one or more lines are too long