
Vectorizing Udemy Subtitle files

pull/87/head
Ivo Brett, 4 months ago
commit 233b32c699
  1. 4200
      week5/community-contributions/day3 - vectorizing_subtitles_from_llm_engineering.ipynb
  2. 55
      week5/community-contributions/subtitles/srts/59166281/en_US.srt
  3. 43
      week5/community-contributions/subtitles/srts/59166281/ja_JP.srt
  4. 55
      week5/community-contributions/subtitles/srts/59166281/ko_KR.srt
  5. 124
      week5/community-contributions/subtitles/srts/59166317/en_US.srt
  6. 103
      week5/community-contributions/subtitles/srts/59166317/ja_JP.srt
  7. 121
      week5/community-contributions/subtitles/srts/59166317/ko_KR.srt
  8. 61
      week5/community-contributions/subtitles/srts/59166353/en_US.srt
  9. 52
      week5/community-contributions/subtitles/srts/59166353/ja_JP.srt
  10. 61
      week5/community-contributions/subtitles/srts/59166353/ko_KR.srt
  11. 319
      week5/community-contributions/subtitles/srts/59166421/en_US.srt
  12. 283
      week5/community-contributions/subtitles/srts/59166421/ja_JP.srt
  13. 313
      week5/community-contributions/subtitles/srts/59166421/ko_KR.srt
  14. 202
      week5/community-contributions/subtitles/srts/59166443/en_US.srt
  15. 166
      week5/community-contributions/subtitles/srts/59166443/ja_JP.srt
  16. 199
      week5/community-contributions/subtitles/srts/59166443/ko_KR.srt
  17. 583
      week5/community-contributions/subtitles/srts/59166453/en_US.srt
  18. 511
      week5/community-contributions/subtitles/srts/59166453/ja_JP.srt
  19. 568
      week5/community-contributions/subtitles/srts/59166453/ko_KR.srt
  20. 610
      week5/community-contributions/subtitles/srts/59166461/en_US.srt
  21. 571
      week5/community-contributions/subtitles/srts/59166461/ja_JP.srt
  22. 607
      week5/community-contributions/subtitles/srts/59166461/ko_KR.srt
  23. 469
      week5/community-contributions/subtitles/srts/59166465/en_US.srt
  24. 421
      week5/community-contributions/subtitles/srts/59166465/ja_JP.srt
  25. 466
      week5/community-contributions/subtitles/srts/59166465/ko_KR.srt
  26. 889
      week5/community-contributions/subtitles/srts/59166481/en_US.srt
  27. 799
      week5/community-contributions/subtitles/srts/59166481/ja_JP.srt
  28. 859
      week5/community-contributions/subtitles/srts/59166481/ko_KR.srt
  29. 106
      week5/community-contributions/subtitles/srts/59166847/en_US.srt
  30. 91
      week5/community-contributions/subtitles/srts/59166847/ja_JP.srt
  31. 97
      week5/community-contributions/subtitles/srts/59166847/ko_KR.srt
  32. 592
      week5/community-contributions/subtitles/srts/59166915/en_US.srt
  33. 523
      week5/community-contributions/subtitles/srts/59166915/ja_JP.srt
  34. 577
      week5/community-contributions/subtitles/srts/59166915/ko_KR.srt
  35. 43
      week5/community-contributions/subtitles/srts/59166919/en_US.srt
  36. 40
      week5/community-contributions/subtitles/srts/59166919/ja_JP.srt
  37. 43
      week5/community-contributions/subtitles/srts/59166919/ko_KR.srt
  38. 313
      week5/community-contributions/subtitles/srts/59166947/en_US.srt
  39. 259
      week5/community-contributions/subtitles/srts/59166947/ja_JP.srt
  40. 304
      week5/community-contributions/subtitles/srts/59166947/ko_KR.srt
  41. 463
      week5/community-contributions/subtitles/srts/59166949/en_US.srt
  42. 391
      week5/community-contributions/subtitles/srts/59166949/ja_JP.srt
  43. 442
      week5/community-contributions/subtitles/srts/59166949/ko_KR.srt
  44. 343
      week5/community-contributions/subtitles/srts/59166951/en_US.srt
  45. 322
      week5/community-contributions/subtitles/srts/59166951/ja_JP.srt
  46. 340
      week5/community-contributions/subtitles/srts/59166951/ko_KR.srt
  47. 211
      week5/community-contributions/subtitles/srts/59166981/en_US.srt
  48. 169
      week5/community-contributions/subtitles/srts/59166981/ja_JP.srt
  49. 208
      week5/community-contributions/subtitles/srts/59166981/ko_KR.srt
  50. 205
      week5/community-contributions/subtitles/srts/59167007/en_US.srt
  51. 163
      week5/community-contributions/subtitles/srts/59167007/ja_JP.srt
  52. 205
      week5/community-contributions/subtitles/srts/59167007/ko_KR.srt
  53. 304
      week5/community-contributions/subtitles/srts/59167009/en_US.srt
  54. 250
      week5/community-contributions/subtitles/srts/59167009/ja_JP.srt
  55. 286
      week5/community-contributions/subtitles/srts/59167009/ko_KR.srt
  56. 424
      week5/community-contributions/subtitles/srts/59167015/en_US.srt
  57. 391
      week5/community-contributions/subtitles/srts/59167015/ja_JP.srt
  58. 418
      week5/community-contributions/subtitles/srts/59167015/ko_KR.srt
  59. 73
      week5/community-contributions/subtitles/srts/59169985/en_US.srt
  60. 58
      week5/community-contributions/subtitles/srts/59169985/ja_JP.srt
  61. 70
      week5/community-contributions/subtitles/srts/59169985/ko_KR.srt
  62. 127
      week5/community-contributions/subtitles/srts/59169991/en_US.srt
  63. 97
      week5/community-contributions/subtitles/srts/59169991/ja_JP.srt
  64. 127
      week5/community-contributions/subtitles/srts/59169991/ko_KR.srt
  65. 163
      week5/community-contributions/subtitles/srts/59170025/en_US.srt
  66. 136
      week5/community-contributions/subtitles/srts/59170025/ja_JP.srt
  67. 154
      week5/community-contributions/subtitles/srts/59170025/ko_KR.srt
  68. 70
      week5/community-contributions/subtitles/srts/59170037/en_US.srt
  69. 58
      week5/community-contributions/subtitles/srts/59170037/ja_JP.srt
  70. 70
      week5/community-contributions/subtitles/srts/59170037/ko_KR.srt
  71. 412
      week5/community-contributions/subtitles/srts/59170043/en_US.srt
  72. 334
      week5/community-contributions/subtitles/srts/59170043/ja_JP.srt
  73. 397
      week5/community-contributions/subtitles/srts/59170043/ko_KR.srt
  74. 472
      week5/community-contributions/subtitles/srts/59170055/en_US.srt
  75. 400
      week5/community-contributions/subtitles/srts/59170055/ja_JP.srt
  76. 451
      week5/community-contributions/subtitles/srts/59170055/ko_KR.srt
  77. 34
      week5/community-contributions/subtitles/srts/59170057/en_US.srt
  78. 25
      week5/community-contributions/subtitles/srts/59170057/ja_JP.srt
  79. 34
      week5/community-contributions/subtitles/srts/59170057/ko_KR.srt
  80. 229
      week5/community-contributions/subtitles/srts/59170093/en_US.srt
  81. 169
      week5/community-contributions/subtitles/srts/59170093/ja_JP.srt
  82. 220
      week5/community-contributions/subtitles/srts/59170093/ko_KR.srt
  83. 58
      week5/community-contributions/subtitles/srts/59170107/en_US.srt
  84. 43
      week5/community-contributions/subtitles/srts/59170107/ja_JP.srt
  85. 52
      week5/community-contributions/subtitles/srts/59170107/ko_KR.srt
  86. 154
      week5/community-contributions/subtitles/srts/59170135/en_US.srt
  87. 130
      week5/community-contributions/subtitles/srts/59170135/ja_JP.srt
  88. 151
      week5/community-contributions/subtitles/srts/59170135/ko_KR.srt
  89. 130
      week5/community-contributions/subtitles/srts/59170165/en_US.srt
  90. 100
      week5/community-contributions/subtitles/srts/59170165/ja_JP.srt
  91. 127
      week5/community-contributions/subtitles/srts/59170165/ko_KR.srt
  92. 220
      week5/community-contributions/subtitles/srts/59170223/en_US.srt
  93. 196
      week5/community-contributions/subtitles/srts/59170223/ja_JP.srt
  94. 211
      week5/community-contributions/subtitles/srts/59170223/ko_KR.srt
  95. 508
      week5/community-contributions/subtitles/srts/59170227/en_US.srt
  96. 439
      week5/community-contributions/subtitles/srts/59170227/ja_JP.srt
  97. 502
      week5/community-contributions/subtitles/srts/59170227/ko_KR.srt
  98. 475
      week5/community-contributions/subtitles/srts/59170233/en_US.srt
  99. 406
      week5/community-contributions/subtitles/srts/59170233/ja_JP.srt
  100. 451
      week5/community-contributions/subtitles/srts/59170233/ko_KR.srt
Some files were not shown because too many files have changed in this diff.

4200
week5/community-contributions/day3 - vectorizing_subtitles_from_llm_engineering.ipynb

File diff suppressed because one or more lines are too long

55
week5/community-contributions/subtitles/srts/59166281/en_US.srt

@@ -0,0 +1,55 @@
WEBVTT
00:00.800 --> 00:08.990
And with that, amazingly, you completed day one of week two already and that gets you to the 15% point
00:08.990 --> 00:14.300
towards your goal of being an LLM Engineering Master.
00:14.540 --> 00:17.810
So congratulations on getting to the 15% point.
00:17.840 --> 00:20.630
Let's talk about what you're already able to do.
00:20.810 --> 00:23.750
As you know, you can describe Transformers.
00:23.750 --> 00:30.050
And that includes all of the context window tokens, API costs and the like.
00:30.080 --> 00:32.330
You can talk about the six leading frontier LLMs.
00:32.330 --> 00:37.430
You can now confidently use OpenAI's API with streaming, with markdown, with JSON.
00:37.430 --> 00:45.320
And now in addition, you can use the Anthropic and Google APIs, and you've hopefully got even deeper
00:45.320 --> 00:52.160
insights into that structure of messages, that list of dicts that is going to feature from time to
00:52.190 --> 00:55.490
time, as you will see tomorrow.
00:55.520 --> 00:58.730
Tomorrow is a day that I've been looking forward to for a long time.
00:58.730 --> 01:05.570
I am going to be gushing about Gradio, which I think is an absolutely fabulous platform and
01:05.570 --> 01:07.610
you are going to see why yourself.
01:07.610 --> 01:11.390
You're going to see why I love it so much, and I'm hoping that you're going to love it too.
01:11.420 --> 01:16.970
We're going to create a simple UI with Gradio, and we're going to hook it up to Frontier Models, and
01:16.970 --> 01:19.100
it's going to be really, really easy.
01:19.100 --> 01:20.360
And I will see you then.

43
week5/community-contributions/subtitles/srts/59166281/ja_JP.srt

@@ -0,0 +1,43 @@
WEBVTT
00:00.800 --> 00:14.300
そして、 驚くべきことに、 あなたはすでに2週目の初日を終え、 これでLMエンジニアリング・マスターという目標に向けて15%のポイントを獲得したことになる。
00:14.540 --> 00:17.810
というわけで、 15%まで到達したことを祝福したい。
00:17.840 --> 00:20.630
あなたがすでにできていることについて話しましょう。
00:20.810 --> 00:23.750
ご存知のように、 トランスフォーマーを表現することは可能だ。
00:23.750 --> 00:30.050
その中には、 コンテクスト・ウィンドウのトークンやAPIのコストなども含まれる。
00:30.080 --> 00:32.330
6つの主要なフロンティアLMSについて話すことができます。
00:32.330 --> 00:37.430
OpenAIのAPIをストリーミング、 マークダウン、 JSONで自信を持って使えるようになりました。
00:37.430 --> 00:45.320
さらに、 AnthropicとGoogleのAPIを使うことができるようになり、
00:45.320 --> 00:55.490
メッセージの構造やディクツのリストについてさらに深い洞察を得ることができるようになった。
00:55.520 --> 00:58.730
明日はずっと楽しみにしていた日だ。
00:58.730 --> 01:07.610
Gradioは本当に素晴らしいプラットフォームだと思う。
01:07.610 --> 01:11.390
私がなぜこのクラブをこんなに気に入っているのか、 その理由がわかるはずだ。
01:11.420 --> 01:19.100
GradioでシンプルなUIを作り、 Frontier Modelsに接続する。
01:19.100 --> 01:20.360
その時にまた会おう。

55
week5/community-contributions/subtitles/srts/59166281/ko_KR.srt

@@ -0,0 +1,55 @@
WEBVTT
00:00.800 --> 00:08.990
놀랍게도 둘째 주 첫째 날을 벌써 마쳤습니다 15%를 달성했습니다 LM
00:08.990 --> 00:14.300
엔지니어링 마스터가 되는 목표에 가까워졌죠
00:14.540 --> 00:17.810
15%가 된 걸 축하해요
00:17.840 --> 00:20.630
이미 할 수 있는 걸 얘기해 보죠
00:20.810 --> 00:23.750
트랜스포머는 설명하기 쉽잖아요
00:23.750 --> 00:30.050
컨텍스트 윈도우 토큰과 API 비용 등을 포함하죠
00:30.080 --> 00:32.330
프런티어 LMS의 여섯 가지 대표적인 예가 있죠
00:32.330 --> 00:37.430
이제 OpenAI의 API를 스트리밍, 마크다운, JSON으로 자신 있게 사용할 수 있어요
00:37.430 --> 00:45.320
Anthropic과 구글 API를 이용하면 메시지 구조에
00:45.320 --> 00:52.160
대해 더 깊이 이해할 수 있을 겁니다 내일도 이따금씩
00:52.190 --> 00:55.490
나올 기능 목록도요
00:55.520 --> 00:58.730
내일은 제가 오랫동안 기다려온 날이에요
00:58.730 --> 01:05.570
그라디오에 대해 열변을 토할게요 정말 멋진 플랫폼이라고 생각하거든요 그 이유는
01:05.570 --> 01:07.610
직접 보시게 될 거예요
01:07.610 --> 01:11.390
제가 왜 좋아하는지 알게 되실 거예요 여러분도 좋아하시면 좋겠네요
01:11.420 --> 01:16.970
Gadio로 간단한 UI를 만들고 프론티어 모델에 연결할
01:16.970 --> 01:19.100
거예요 정말 정말 쉽죠
01:19.100 --> 01:20.360
그때 봐요

124
week5/community-contributions/subtitles/srts/59166317/en_US.srt

@@ -0,0 +1,124 @@
WEBVTT
00:00.680 --> 00:08.540
And welcome to week two, day two, as we continue our adventure into the realm of LLMs.
00:08.780 --> 00:14.630
Uh, so today, a very special day that I'm really looking forward to.
00:14.720 --> 00:15.830
Uh, quick recap.
00:15.860 --> 00:22.250
Of course, you can now describe Transformers as well, and you can talk about six top frontier models.
00:22.250 --> 00:27.920
You can confidently use OpenAI's API along with Anthropic and Google's.
00:27.920 --> 00:33.080
But today, changing topic, we are going to be talking about Gradio.
00:33.350 --> 00:37.040
And I realized I have gone on about Gradio a bit, but you're going to see why.
00:37.040 --> 00:40.490
It's really terrific and we're going to have fun with it.
00:40.640 --> 00:46.070
We're going to create a simple UI using Gradio and then hook it up to Frontier Models.
00:46.070 --> 00:49.130
And as I say, it's going to be easy.
00:49.580 --> 00:53.420
Uh, so why make such a fuss about user interfaces?
00:53.420 --> 01:00.890
Because it allows us, as data scientists, as LM engineers, to do more quickly, to be able to build
01:00.920 --> 01:08.510
prototypes, expose them to our audience, to our business sponsors, the people that need
01:08.540 --> 01:12.140
our LLMs, and do so very quickly indeed.
01:12.140 --> 01:17.000
If you are a front end person or you've dabbled in front end and you know what it's like to stand up
01:17.030 --> 01:22.370
a react app or something like that, you know that there's a lot of boilerplate code that goes into
01:22.400 --> 01:26.750
getting things up and running, and it turns out that we don't need to do that with models.
01:26.750 --> 01:30.380
We can build a user interface super quickly, and that's what we'll be doing today.
01:30.380 --> 01:35.030
So Gradio is in fact, uh, a part of Hugging Face.
01:35.030 --> 01:38.510
It was a startup that was acquired by Hugging Face a couple of years ago.
01:38.510 --> 01:41.600
So Gradio is part of the Hugging Face family.
01:41.780 --> 01:48.260
Uh, and as it says on the landing page there, it lets you build and share delightful machine learning
01:48.290 --> 01:48.620
apps.
01:48.620 --> 01:51.620
And I think you will be delighted by it.
01:51.830 --> 01:54.530
Uh, so I promised you it was easy.
01:54.560 --> 01:57.080
It really is easy, as you will see.
01:57.110 --> 02:02.790
What it comes down to is there is this magical line, import gradio as gr, which is the way people do
02:02.790 --> 02:03.270
it.
02:03.570 --> 02:07.500
You write a function, any function, a function to do a task.
02:07.500 --> 02:13.380
In this case, the function that they've written here is greet: it takes a name and it replies "hello, name".
02:13.380 --> 02:19.890
And then you can create a user interface based on that function, give it inputs and outputs, and you're
02:19.890 --> 02:24.120
going to get a user interface built for you just like that.
02:24.120 --> 02:26.430
And that is what we're going to do.
02:26.970 --> 02:35.430
So what we're going to do now is create a UI for API calls to GPT and Claude and Gemini, so that you
02:35.430 --> 02:37.380
can see how to expose this.
02:37.410 --> 02:45.870
We are then going to go ahead and create a UI for the brochure that we built in the last week's lectures.
02:45.900 --> 02:52.050
And so that's going to allow us to really package up our application into a nice business app with prototype
02:52.050 --> 02:52.860
screens.
02:52.860 --> 02:57.510
And of course, we'll throw into the mix streaming and markdown into a UI, since we're pretty good
02:57.510 --> 03:00.630
with that already, that's going to be the plan.
03:00.630 --> 03:03.060
Let's go over to the lab and get on with it.
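[Editor's note: the Gradio pattern this lecture describes (a plain function handed to an interface with inputs and outputs) can be sketched as below. The greet function is the transcript's own example; the wiring assumes gradio is installed via pip, and build_demo is a name introduced here for illustration.]

```python
# Minimal sketch of the pattern from the lecture: write any function,
# then hand it to gr.Interface with inputs and outputs.
def greet(name: str) -> str:
    # The transcript's example: take a name, reply "Hello, <name>!".
    return f"Hello, {name}!"

def build_demo():
    # The "magical line" from the transcript; gradio is a third-party
    # package (pip install gradio).
    import gradio as gr
    # One text box in, one text box out; Gradio builds the UI for you.
    return gr.Interface(fn=greet, inputs="textbox", outputs="textbox")

# To serve the UI locally, you would call: build_demo().launch()
```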

103
week5/community-contributions/subtitles/srts/59166317/ja_JP.srt

@@ -0,0 +1,103 @@
WEBVTT
00:00.680 --> 00:08.540
そして、 LMSの領域への冒険を続ける第2週、 2日目へようこそ。
00:08.780 --> 00:14.630
ええと、 それで今日は、 とても楽しみにしている特別な日なんだ。
00:14.720 --> 00:15.830
ええと、 簡単にまとめると
00:15.860 --> 00:22.250
もちろん、 トランスフォーマーについても説明できるようになったし、 6人のトップ・フロンティア・モデルについても語ることができる。
00:22.250 --> 00:27.920
OpenAIのAPIは、 AnthropicやGoogleのAPIと一緒に自信を持って使うことができる。
00:27.920 --> 00:33.080
しかし、 今日は話題を変えて、 グラディオについて話そう。
00:33.350 --> 00:37.040
そして、 グラディオについて少し話したことに気づいた。
00:37.040 --> 00:40.490
本当に素晴らしいし、 楽しみながらやっていくつもりだ。
00:40.640 --> 00:46.070
radioを使ってシンプルなUIを作り、 それをFrontier Modelsに接続する。
00:46.070 --> 00:49.130
言っておくが、 それは簡単なことだ。
00:49.580 --> 00:53.420
では、 なぜユーザー・インターフェースについて大騒ぎするのですか?
00:53.420 --> 01:00.890
なぜなら、 データ・サイエンティストとして、 LMエンジニアとして、 より迅速に、 プロトタイプを構築し、
01:00.920 --> 01:12.140
それをオーディエンスやビジネス・スポンサー、 LMSを必要としている人々に公開し、 実に迅速に実行することができるからだ。
01:12.140 --> 01:17.000
フロントエンドの人なら、 あるいはフロントエンドに手を出したことがある人なら、
01:17.030 --> 01:26.750
リアクトのアプリを立ち上げるのがどんな感じか知っているだろう。
01:26.750 --> 01:30.380
私たちはユーザー・インターフェースを超高速で構築することができる。
01:30.380 --> 01:35.030
つまり、 グラディオはハギング・フェイスの一員なのだ。
01:35.030 --> 01:38.510
数年前にハギング・フェイスに買収された新興企業だ。
01:38.510 --> 01:41.600
つまり、 グラディオはハギング・フェイス・ファミリーの一員なのだ。
01:41.780 --> 01:48.620
ランディングページに書いてあるように、 楽しい機械学習アプリを作り、 共有することができる。
01:48.620 --> 01:51.620
きっと喜んでもらえると思う。
01:51.830 --> 01:54.530
ええと、 だから簡単だって約束したでしょ。
01:54.560 --> 01:57.080
見ての通り、 本当に簡単だ。
01:57.110 --> 02:03.270
結局のところ、 グラディオをGRとして輸入するという魔法のようなラインが存在する。
02:03.570 --> 02:07.500
関数、 どんな関数でも、 タスクを実行するための関数を書く。
02:07.500 --> 02:13.380
この場合、 greetという関数は名前を受け取り、 helloという名前を返信する。
02:13.380 --> 02:19.890
そして、 その関数に基づいてユーザー・インターフェースを作成し、
02:19.890 --> 02:24.120
入力と出力を与える。
02:24.120 --> 02:26.430
そして、 それが私たちがやろうとしていることだ。
02:26.970 --> 02:37.380
それでは、 GPTとClaudeとGeminiへのAPIコールのためのUIを作成します。
02:37.410 --> 02:45.870
続いて、 先週の講義で作成したパンフレットのUIを作成します。
02:45.900 --> 02:52.860
そうすることで、 プロトタイプのスクリーンを持つ素敵なビジネス・アプリにアプリケーションをパッケージ化することができる。
02:52.860 --> 03:00.630
もちろん、 ストリーミングやマークダウンをUIに取り入れることも考えている。
03:00.630 --> 03:03.060
さっそくラボに行こう。

121
week5/community-contributions/subtitles/srts/59166317/ko_KR.srt

@@ -0,0 +1,121 @@
WEBVTT
00:00.680 --> 00:08.540
둘째 주, 둘째 날입니다 LMS 왕국으로의 모험은 계속되죠
00:08.780 --> 00:14.630
오늘은 제가 정말 기대하고 있는 아주 특별한 날이에요
00:14.720 --> 00:15.830
요약해 보죠
00:15.860 --> 00:22.250
이제는 트랜스포머도 묘사할 수 있어요 최고의 개척자 모델 6명도 말할 수 있죠
00:22.250 --> 00:27.920
오픈AI의 API 또한 앤스로픽, 구글과 함께 사용할 수 있죠
00:27.920 --> 00:33.080
오늘은 주제를 바꿔서 그라디오에 관해 얘기해 볼게요
00:33.350 --> 00:37.040
비트에 대해 너무 많이 얘기했네요 이유를 알게 될 거예요
00:37.040 --> 00:40.490
정말 멋진 작품이고 재미있게 만들 거예요
00:40.640 --> 00:46.070
라디오를 이용해 간단한 UI를 만들고 프론티어 모델에 연결할 거예요
00:46.070 --> 00:49.130
말씀드렸듯이 쉬울 거예요
00:49.580 --> 00:53.420
그런데 왜 사용자 인터페이스를 두고 그렇게 소란을 피우는 거죠?
00:53.420 --> 01:00.890
데이터 과학자나 LMS 엔지니어로서 시제품을 더 빨리 만들
01:00.920 --> 01:08.510
수 있고 고객과 비즈니스 스폰서 LMS가 필요한 사람들에게 빠르게
01:08.540 --> 01:12.140
노출할 수 있으니까요
01:12.140 --> 01:17.000
여러분이 프런트엔드를 사용하거나 약간 해본 적이 있다면 리액트 앱 같은 것을
01:17.030 --> 01:22.370
세우는 것이 어떤 것인지 알 수 있습니다 무언가를 올리고 실행하는 데에는 많은 상용
01:22.400 --> 01:26.750
코드가 있습니다 모델과 함께 할 필요가 없다는 것이 밝혀졌죠
01:26.750 --> 01:30.380
사용자 인터페이스를 아주 빨리 만들 수 있어요 그게 오늘 할 일이죠
01:30.380 --> 01:35.030
그러니까 그라디오는 페이스 포옹의 일부예요
01:35.030 --> 01:38.510
몇 년 전 페이스 포옹으로 인수한 스타트업 회사예요
01:38.510 --> 01:41.600
그래디오는 포옹하는 얼굴 가족이에요
01:41.780 --> 01:48.620
랜딩 페이지에 적혀 있듯이 즐거운 머신 러닝 앱을 만들고 공유할 수 있어요
01:48.620 --> 01:51.620
당신도 좋아할 거예요
01:51.830 --> 01:54.530
쉬울 거라고 약속했죠
01:54.560 --> 01:57.080
보시면 알겠지만 정말 쉬워요
01:57.110 --> 02:02.790
결국 그라디오를 GR로 불러오는 마술적인 경계가 있어요 사람들이 그렇게
02:02.790 --> 02:03.270
하죠
02:03.570 --> 02:07.500
어떤 함수든 작업을 위한 함수를 작성하세요
02:07.500 --> 02:13.380
이 경우 Greet라는 함수가 있는데 이름을 취하면 hello name을 응답하죠
02:13.380 --> 02:19.890
함수를 기반으로 사용자 인터페이스를 생성하고 입력과 출력을 제공하면
02:19.890 --> 02:24.120
사용자 인터페이스가 만들어지는 거예요
02:24.120 --> 02:26.430
그게 우리가 할 일이죠
02:26.970 --> 02:35.430
이제 GPT와 클로드와 제미니에 API 호출을 위한 UI를 생성하겠습니다 이걸 어떻게 노출하는지
02:35.430 --> 02:37.380
보실 수 있게요
02:37.410 --> 02:45.870
그런 다음 지난 강의에서 만든 브로슈어의 UI를 생성할 거예요
02:45.900 --> 02:52.050
그래서 응용 프로그램을 프로토타입 스크린이 있는 멋진 비즈니스 앱으로 패키지할 수 있게
02:52.050 --> 02:52.860
해주죠
02:52.860 --> 02:57.510
물론 믹스 스트리밍과 마크다운을 UI에 넣을 거예요 이미 꽤 잘
02:57.510 --> 03:00.630
하고 있으니까요 그게 계획이 될 거예요
03:00.630 --> 03:03.060
Get it, get it, get it, it! 실험실로 가서 검사해 보죠

61
week5/community-contributions/subtitles/srts/59166353/en_US.srt

@@ -0,0 +1,61 @@
WEBVTT
00:00.590 --> 00:04.460
Well, congratulations on leveling up yet again.
00:04.520 --> 00:08.690
You've got some real hard skills that you've added to your arsenal.
00:08.750 --> 00:14.570
Uh, it's, uh, been a really, really enjoyable last few lectures.
00:14.570 --> 00:20.990
So at this point, not only can you confidently use OpenAI's API, not only can you throw Anthropic
00:20.990 --> 00:25.970
and Gemini into the mix, but you can also build UIs for your solution.
00:25.970 --> 00:32.660
Doing reasonably sophisticated things like picking between models and making changes to prompts, and generating
00:32.660 --> 00:35.750
company brochures with markdown and streaming.
00:35.810 --> 00:41.630
Uh, it's all pretty, uh, high functionality stuff, so well done.
00:41.750 --> 00:45.650
Tomorrow, though, uh, we raised the bar again.
00:45.650 --> 00:53.090
We'll be able to build chat UIs in Gradio, so these are more complex UIs with much more going on with
00:53.120 --> 00:55.880
like, an instant message style interaction.
00:55.880 --> 00:57.740
It sounds complex.
00:57.770 --> 00:58.850
We'll see if it is.
00:59.060 --> 01:05.900
Uh, we're going to talk about providing more context in a prompt, including multi-shot prompting, something
01:05.900 --> 01:10.160
which hopefully you already experimented with a bit in some of the earlier exercises, and we'll keep
01:10.160 --> 01:11.420
going with that.
01:11.480 --> 01:16.010
And ultimately we're going to build a customer support assistant.
01:16.040 --> 01:19.460
We're going to build a real tool and see how that works.
01:19.460 --> 01:22.490
And that's all happening starting in the next lecture.
01:22.490 --> 01:23.510
And I'll see you there.

52
week5/community-contributions/subtitles/srts/59166353/ja_JP.srt

@@ -0,0 +1,52 @@
WEBVTT
00:00.590 --> 00:04.460
まあ、 またしてもレベルアップおめでとう。
00:04.520 --> 00:08.690
あなたは本当にハードな技術を自分の武器に加えた。
00:08.750 --> 00:14.570
この数回の講義は、 本当に本当に楽しかった。
00:14.570 --> 00:20.990
つまりこの時点で、 OpenAIのAPIを自信を持って使えるだけでなく、 AnthropicやGeminiをミックスに放り込めるだけでなく、
00:20.990 --> 00:25.970
ソリューションのUIを構築することもできる。
00:25.970 --> 00:32.660
モデルの選択やプロンプトの変更、 マークダウンやストリーミングを使った会社案内の作成など、
00:32.660 --> 00:35.750
それなりに洗練されたことができる。
00:35.810 --> 00:41.630
機能的なものばかりで、 よくできているよ。
00:41.750 --> 00:45.650
明日はまたハードルを上げたけどね。
00:45.650 --> 00:55.880
GradioでチャットUIを構築できるようになるので、 インスタント・メッセージのようなインタラクションで、 より複雑なUIを構築できるようになります。
00:55.880 --> 00:57.740
複雑そうだ。
00:57.770 --> 00:58.850
そうなるかどうかはこれからだ。
00:59.060 --> 01:11.420
マルチショット・プロンプトを含め、 プロンプトでより多くのコンテクストを提供することについてお話しします。
01:11.480 --> 01:16.010
そして最終的には、 カスタマー・サポートのアシスタントを作るつもりだ。
01:16.040 --> 01:19.460
実際のツールを作って、 それがどう機能するか見てみるつもりだ。
01:19.460 --> 01:22.490
そして、 それはすべて次の講義から始まる。
01:22.490 --> 01:23.510
そこで会おう

61
week5/community-contributions/subtitles/srts/59166353/ko_KR.srt

@@ -0,0 +1,61 @@
WEBVTT
00:00.590 --> 00:04.460
또 한 번 레벨 업을 축하해요
00:04.520 --> 00:08.690
어려운 기술을 무기로 활용하고 있어요
00:08.750 --> 00:14.570
지난 강의 몇 개는 정말 즐거웠어요
00:14.570 --> 00:20.990
오픈AI API를 자신 있게 사용할 수 있을 뿐 아니라 앤스로픽과 제미니를
00:20.990 --> 00:25.970
결합할 수 있을 뿐만 아니라 솔루션 UI도 구축할 수 있죠
00:25.970 --> 00:32.660
모델 사이에서 고르기와 프롬프트 변경하기 같은 꽤 복잡한 작업을 하고 마크다운과 스트리밍을
00:32.660 --> 00:35.750
이용한 회사 브로슈어를 생성하죠
00:35.810 --> 00:41.630
아주 높은 기능성이에요 잘 만들었어요
00:41.750 --> 00:45.650
내일은 기대치를 한 단계 높였어요
00:45.650 --> 00:53.090
Gadio에서 채팅 UI도 만들 수 있어요 좀 더 복잡한 UI로 훨씬 더 많은 일이 진행되죠 인스턴트
00:53.120 --> 00:55.880
메시지 스타일 상호 작용 같은 거요
00:55.880 --> 00:57.740
복잡하게 들리네요
00:57.770 --> 00:58.850
과연 그럴까요?
00:59.060 --> 01:05.900
프롬프트에서 더 많은 컨텍스트 제공에 대해 말씀드리겠습니다 멀티샷 프롬프트를 포함해서요 앞서
01:05.900 --> 01:10.160
몇몇 연습에서 이미 비트로 실험해 보셨길 바랍니다 계속 그렇게
01:10.160 --> 01:11.420
할 거예요
01:11.480 --> 01:16.010
궁극적으로는 고객 지원 비서를 만들 거예요
01:16.040 --> 01:19.460
실제 도구를 만들어 어떻게 작동하는지 보죠
01:19.460 --> 01:22.490
다음 강의에서 그 모든 일이 일어나죠
01:22.490 --> 01:23.510
거기서 봐요

319
week5/community-contributions/subtitles/srts/59166421/en_US.srt

@@ -0,0 +1,319 @@
WEBVTT
00:00.830 --> 00:04.250
Welcome back to the Gradio day in the lab.
00:04.250 --> 00:05.180
More to do.
00:05.210 --> 00:06.620
Let's keep going.
00:06.620 --> 00:14.150
Where we left off is we had just built a simple user interface that was calling an LLM and telling a
00:14.150 --> 00:16.610
very, uh, good joke.
00:16.850 --> 00:20.060
Uh, let's keep going with this.
00:20.060 --> 00:28.250
What we're going to do next is ask for the, uh, assistant to respond in markdown, uh, as a way of,
00:28.250 --> 00:32.540
uh, um, getting better looking user interfaces.
00:32.960 --> 00:40.190
Um, and wouldn't it be nice if we wanted to show results in gradio with good formatting written in
00:40.190 --> 00:40.940
markdown?
00:40.940 --> 00:47.690
If we could just have that instead of text box, just replace it with the word markdown, and then the
00:47.690 --> 00:51.560
output would be in perfectly formatted markdown.
00:51.560 --> 00:53.180
That would be great wouldn't it?
00:53.210 --> 00:54.380
Wouldn't it be nice?
00:55.190 --> 00:56.900
You're probably getting the idea here.
00:57.770 --> 00:59.030
Things just really are.
00:59.030 --> 00:59.840
This good.
01:00.020 --> 01:02.510
Uh, so let's say your message.
01:02.510 --> 01:03.740
Let's say, um.
01:03.890 --> 01:14.240
Um, how do I get from Times Square, like that, to Grand Central?
01:14.960 --> 01:17.120
I've got a question about New York navigation.
01:17.120 --> 01:18.710
Let's see how it does.
01:19.640 --> 01:21.440
It's thinking about that.
01:22.400 --> 01:23.360
And there we go.
01:23.360 --> 01:24.440
Here's a response.
01:24.440 --> 01:28.370
And you can see to get from Times Square to Grand Central Terminal in New York City, it figured out
01:28.370 --> 01:29.660
that's what I was talking about.
01:29.870 --> 01:34.040
Follow these steps and you can see it's good headings.
01:34.310 --> 01:41.810
And it's got nice sub-bullets and numbers and all the rest of it, as described in the markdown that
01:41.810 --> 01:43.820
came back from GPT-4.
01:43.850 --> 01:47.000
Oh very easy, very nice.
01:47.390 --> 01:50.360
Let's have a look at what else we can do.
01:51.500 --> 01:53.420
Uh, streaming.
01:53.420 --> 01:55.700
Streaming is something we got used to last time.
01:55.700 --> 02:01.840
So can we stream results back to Gradio user interfaces, just as we did when it was coming back into
02:01.840 --> 02:03.820
a Jupyter output cell.
02:03.820 --> 02:04.960
So here we go.
02:04.990 --> 02:07.300
We change our function.
02:07.330 --> 02:10.840
It used to be the function message_gpt.
02:10.870 --> 02:12.400
Now we're making it stream_gpt.
02:12.430 --> 02:13.420
Different function.
02:13.420 --> 02:15.610
And the key thing is that this isn't actually a function.
02:15.610 --> 02:20.320
It's a generator in that it's going to end by yielding a result.
02:20.320 --> 02:26.170
And Gradio is going to detect that we're giving it a generator, not a function.
02:26.170 --> 02:32.830
And because of that, Gradio is automatically going to be iterative and decide to fill in, um, piece
02:32.830 --> 02:35.980
by piece as it comes back from this generator.
02:36.100 --> 02:38.020
So usual story.
02:38.020 --> 02:45.100
I create the messages and then you'll remember this time it's the same API call, but we pass in stream
02:45.100 --> 02:46.120
equals true.
02:46.150 --> 02:47.770
You remember how Claude does it?
02:47.860 --> 02:48.790
Hopefully you do.
02:48.820 --> 02:54.220
With Claude, you don't have an attribute, but instead you call dot stream instead of dot create.
02:54.220 --> 02:56.290
But otherwise it's very similar.
02:56.350 --> 02:58.810
So one thing that's worth noting here.
02:58.840 --> 03:01.110
Just a tiny subtlety with Gradio.
03:01.140 --> 03:08.820
When you are streaming back the results to Gradio, you don't stream back chunk by chunk of the results.
03:08.820 --> 03:15.870
You have to stream back the entire cumulative result so far and stream back a longer and longer cumulative
03:15.870 --> 03:16.410
result.
03:16.410 --> 03:20.520
So you can see what I'm doing is I'm I'm sort of starting with an empty string.
03:20.520 --> 03:28.110
And then for each chunk I'm adding that in and then yielding the total cumulative result so far.
03:28.260 --> 03:34.830
Um, and if you don't do that, what you'll see is each individual chunk will appear in the output cell
03:34.830 --> 03:37.140
and then disappear and will be replaced by something else.
03:37.140 --> 03:38.910
So you have to do it this way.
03:38.910 --> 03:42.900
If you don't see what I mean, try doing yield chunk instead of yield result.
03:43.050 --> 03:43.290
Sorry.
03:43.320 --> 03:46.200
Yield chunk.choices[0].delta.content.
03:46.320 --> 03:46.590
Uh.
03:46.590 --> 03:49.350
And you'll see exactly what I mean.
03:49.350 --> 03:50.910
It's not not going to look good.
03:51.150 --> 03:59.640
Uh, anyway, that is our stream_gpt, and wouldn't it be nice if all we needed to do was replace
03:59.760 --> 04:06.390
the function that used to be message_gpt with stream_gpt, and Gradio just figured out the rest.
04:06.390 --> 04:09.300
It figured out that okay, this is a generator, not a function.
04:09.300 --> 04:11.610
Therefore, they're going to want to stream back results.
04:11.610 --> 04:15.360
Therefore, I need to have a sort of typewriter animation style effect.
04:15.390 --> 04:18.660
Let's see if it really can be that simple.
04:18.660 --> 04:20.190
Can it be that simple?
04:20.520 --> 04:21.660
Here we go.
04:21.690 --> 04:29.940
Uh, how do I get from Times Square to Grand Central?
04:32.310 --> 04:33.360
And there we go.
04:33.510 --> 04:34.470
Of course, it's that simple.
04:34.500 --> 04:35.460
Of course it is.
04:35.490 --> 04:36.930
Streams the results.
04:36.930 --> 04:37.920
They look great.
04:37.920 --> 04:40.380
Markdown looks fantastic.
04:40.890 --> 04:46.710
Uh, so, uh, of course it wouldn't be doing my job if I didn't show you how easy it is with Claude
04:46.710 --> 04:47.280
as well.
04:47.280 --> 04:51.540
I mentioned it before, and now you can see there is Claude's API call.
04:51.540 --> 04:52.800
It's very similar.
04:52.800 --> 04:55.650
You call dot stream, you don't pass in the parameter.
04:55.650 --> 05:00.470
You remember that you do have to specify max tokens and the system message goes in separately.
05:00.500 --> 05:02.240
Otherwise very similar.
05:02.270 --> 05:05.510
The streaming back is just for results as stream.
05:05.690 --> 05:08.030
So it's with result as stream, a context manager.
05:08.030 --> 05:13.070
And then we yield the full response just as before.
05:13.550 --> 05:16.670
And at this point it's going to be boringly simple.
05:16.670 --> 05:17.780
You get the joke.
05:17.810 --> 05:21.050
You simply pass in that function instead.
05:21.050 --> 05:24.110
And now we are talking to Claude instead.
05:24.110 --> 05:25.640
We'll ask it the same question though.
05:25.670 --> 05:31.910
How do I get from Times Square to Grand Central?
05:33.290 --> 05:40.250
And here comes Claude's response, a bit shorter, just two options, but nicely structured.
05:40.250 --> 05:41.720
Nicely formatted.
05:41.720 --> 05:43.820
Very good indeed.
05:44.120 --> 05:53.750
Um, so we can take this one step forwards by having the ability to choose either GPT or Claude.
05:53.750 --> 05:56.840
But I'm going to get to that in the very next session.
05:56.840 --> 05:57.980
So hang on in there.
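[Editor's note: the streaming subtlety this lecture stresses, yield the whole cumulative text so far rather than each individual chunk, can be sketched as below. The accumulate helper is the general pattern; stream_gpt is hypothetical wiring assuming the OpenAI Python client and a placeholder model name.]

```python
def accumulate(deltas):
    """Turn a stream of text deltas into cumulative snapshots.

    Gradio treats a generator as a streaming output and re-renders the
    component on every yield, so each yield must be the entire response
    so far; yielding the raw chunk would make text appear then vanish,
    exactly as the lecture warns.
    """
    result = ""
    for delta in deltas:
        result += delta
        yield result


def stream_gpt(prompt):
    # Hypothetical wiring for the stream_gpt generator described in the
    # lecture, assuming the OpenAI Python client: pass stream=True, then
    # pull each delta out of chunk.choices[0].delta.content.
    from openai import OpenAI  # third-party (pip install openai)
    stream = OpenAI().chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, for illustration
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    yield from accumulate(chunk.choices[0].delta.content or "" for chunk in stream)
```

For Claude the shape is similar but, as the transcript notes, you call the Anthropic client's messages.stream(...) as a context manager, must specify max_tokens, and pass the system message as a separate system argument; the cumulative-yield rule for Gradio is the same.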

283
week5/community-contributions/subtitles/srts/59166421/ja_JP.srt

@@ -0,0 +1,283 @@
WEBVTT
00:00.830 --> 00:04.250
ラボのラジオ・デイへようこそ。
00:04.250 --> 00:05.180
もっとやることがある。
00:05.210 --> 00:06.620
続けよう。
00:06.620 --> 00:16.610
LLMを呼び出し、 とてもいいジョークを言うシンプルなユーザー・インターフェースを作ったところだった。
00:16.850 --> 00:20.060
ええと、 このまま続けましょう。
00:20.060 --> 00:32.540
次にやることは、 より見栄えのするユーザー・インターフェースを得る方法として、 マークダウンで応答するようアシスタントに求めることだ。
00:32.960 --> 00:40.940
マークダウンで書かれた優れた書式で、 グラディオの結果を表示できたらいいと思わない?
00:40.940 --> 00:51.560
テキストボックスの代わりにマークダウンという言葉に置き換えるだけで、 完璧にフォーマットされたマークダウンが出力される。
00:51.560 --> 00:53.180
それは素晴らしいことだろう?
00:53.210 --> 00:54.380
いいことだと思わない?
00:55.190 --> 00:56.900
このあたりはお分かりだろう。
00:57.770 --> 00:59.030
物事は本当にそうなんだ。
00:59.030 --> 00:59.840
これはいい。
01:00.020 --> 01:02.510
では、 あなたのメッセージを言いましょう。
01:02.510 --> 01:03.740
そうだな。
01:03.890 --> 01:14.240
あの、 タイムズ・スクエアからグランド・セントラルまでどうやって行けばいいんですか?
01:14.960 --> 01:17.120
ニューヨーク・ナビゲーションに質問がある。
01:17.120 --> 01:18.710
どうなるか見てみよう。
01:19.640 --> 01:21.440
それを考えているんだ。
01:22.400 --> 01:23.360
さあ、 行こう。
01:23.360 --> 01:24.440
これが返答だ。
01:24.440 --> 01:29.660
ニューヨークのタイムズ・スクエアからグランド・セントラル・ターミナルに行くには、 私が話していたことを理解する必要がある。
01:29.870 --> 01:34.040
これらの手順を踏めば、 良い見出しであることがわかるだろう。
01:34.310 --> 01:43.820
そして、 GPT4から戻ってきたマークダウンに記されているように、 素敵なサブの弾丸や数字、 その他もろもろを備えている。
01:43.850 --> 01:47.000
とても簡単だ。
01:47.390 --> 01:50.360
他に何ができるか見てみよう。
01:51.500 --> 01:53.420
ええと、 ストリーミング。
01:53.420 --> 01:55.700
ストリーミングは前回で慣れた。
01:55.700 --> 02:03.820
そこで、 Jupyterの出力セルに戻ってきたときと同じように、 結果をGradioのユーザー・インターフェースにストリームバックすることができる。
02:03.820 --> 02:04.960
それでは、 どうぞ。
02:04.990 --> 02:07.300
私たちは機能を変える。
02:07.330 --> 02:10.840
以前はGPTというメッセージだった。
02:10.870 --> 02:12.400
今はGPTストリームにしている。
02:12.430 --> 02:13.420
機能が違う。
02:13.420 --> 02:15.610
そして重要なのは、 これは実際には機能ではないということだ。
02:15.610 --> 02:20.320
結果を出して終わるという点ではジェネレーターだ。
02:20.320 --> 02:26.170
そしてグラディオは、 私たちが関数ではなくジェネレーターを与えていることを検知する。
02:26.170 --> 02:35.980
そのため、 グラディオは自動的に反復性を持ち、 このジェネレーターから戻ってくるデータを少しずつ埋めていくことになる。
02:36.100 --> 02:38.020
だからいつもの話だ。
02:38.020 --> 02:46.120
メッセージを作成し、 同じAPIコールであることを思い出してほしい。
02:46.150 --> 02:47.770
クロードのやり方を覚えているか?
02:47.860 --> 02:48.790
そうなるといいね。
02:48.820 --> 02:54.220
クロードの場合は、 アトリビュートを持たず、 ドット・クリエイトの代わりにドット・ストリームを呼び出す。
02:54.220 --> 02:56.290
でも、 それ以外はよく似ている。
02:56.350 --> 02:58.810
ここでひとつ注目すべきことがある。
02:58.840 --> 03:01.110
ただ、 グラディオの場合はほんの少し微妙だ。
03:01.140 --> 03:08.820
結果をGradioにストリームバックする場合、 結果のチャンクごとにストリームバックすることはない。
03:08.820 --> 03:16.410
これまでの累積結果をすべてストリームバックし、 さらに長い累積結果をストリームバックしなければならない。
03:16.410 --> 03:20.520
つまり、 私がやっているのは、 空の文字列から始めるということだ。
03:20.520 --> 03:28.110
そして、 各チャンクごとにそれを足し算して、 これまでの累積結果を出す。
03:28.260 --> 03:37.140
そうしないと、 個々の塊が出力セルに現れては消え、 別のものに置き換わってしまう。
03:37.140 --> 03:38.910
だから、 こうしなければならない。
03:38.910 --> 03:42.900
意味がわからない場合は、 yield resultの代わりにyield chunkを試してみてほしい。
03:43.050 --> 03:43.290
申し訳ない。
03:43.320 --> 03:46.200
yield chunk.choices[0].delta.contentだ。
03:46.320 --> 03:46.590
ええと。
03:46.590 --> 03:49.350
そうすれば、 私が言っている意味がよくわかるだろう。
03:49.350 --> 03:50.910
見栄えは良くないだろう。
03:51.150 --> 03:59.640
とにかく、 これが我々のストリームGPTだ。 メッセージGPTであった関数をストリームGPTに置き換えるだけで、
03:59.760 --> 04:06.390
あとはGradioがやってくれるとしたら、 それは素晴らしいことではないだろうか。
04:06.390 --> 04:09.300
これは関数ではなくジェネレーターなんだ。
04:09.300 --> 04:11.610
そのため、 彼らは結果をストリーミングで返したいと思っているはずだ。
04:11.610 --> 04:15.360
だから、 タイプライター・アニメーション風のエフェクトが必要なんだ。
04:15.390 --> 04:18.660
本当にそんな単純なことができるのか、 見てみよう。
04:18.660 --> 04:20.190
そんな単純なことでいいのだろうか?
04:20.520 --> 04:21.660
さあ、 始めよう。
04:21.690 --> 04:29.940
タイムズ・スクエアからグランド・セントラルへはどうやって行くの?
04:32.310 --> 04:33.360
さあ、 行こう。
04:33.510 --> 04:34.470
もちろん、 それは簡単なことだ。
04:34.500 --> 04:35.460
もちろんそうだ。
04:35.490 --> 04:36.930
結果をストリームする。
04:36.930 --> 04:37.920
見た目も素晴らしい。
04:37.920 --> 04:40.380
マークダウンは素晴らしい。
04:40.890 --> 04:47.280
ええと、 だから、 もちろん、 クロードと一緒ならどんなに簡単かお見せしなければ、 私の仕事ではありません。
04:47.280 --> 04:51.540
前にも書いたが、 クロードのAPIコールがあるのがわかるだろう。
04:51.540 --> 04:52.800
とてもよく似ている。
04:52.800 --> 04:55.650
ドットストリームを呼び出し、 パラメータは渡さない。
04:55.650 --> 05:00.470
トークンの最大数を指定する必要があり、 システム・メッセージは別に入力されることを覚えておいてほしい。
05:00.500 --> 05:02.240
それ以外は非常によく似ている。
05:02.270 --> 05:05.510
ストリーミングバックは、 ストリームとしての結果のためだけのものだ。
05:05.690 --> 05:08.030
つまり、 結果をストリームとしてコンテキスト・マネージャーとする。
05:08.030 --> 05:13.070
そして、 以前と同じように全回答を返す。
05:13.550 --> 05:16.670
そしてこの時点では、 退屈なほどシンプルなものになるだろう。
05:16.670 --> 05:17.780
冗談はわかるだろう。
05:17.810 --> 05:21.050
代わりにその関数を渡すだけだ。
05:21.050 --> 05:24.110
そして今、 私たちは代わりにクロードと話している。
05:24.110 --> 05:25.640
同じ質問をしよう。
05:25.670 --> 05:31.910
タイムズ・スクエアからグランド・セントラルへの行き方は?
05:33.290 --> 05:40.250
そして、 クロードの回答が来た。 少し短く、 2つの選択肢だけだが、 うまく構成されている。
05:40.250 --> 05:41.720
うまくフォーマットされている。
05:41.720 --> 05:43.820
実に素晴らしい。
05:44.120 --> 05:53.750
GPTとクロードのどちらかを選択できるようにすることで、 一歩前進させることができます。
05:53.750 --> 05:56.840
でも、 それは次のセッションで話そうと思っている。
05:56.840 --> 05:57.980
だから、 そこで頑張るんだ。

313
week5/community-contributions/subtitles/srts/59166421/ko_KR.srt

@ -0,0 +1,313 @@
WEBVTT
00:00.830 --> 00:04.250
실험실의 Gradio 데이에 잘 오셨어요
00:04.250 --> 00:05.180
할 일이 더 있어요
00:05.210 --> 00:06.620
계속하죠
00:06.620 --> 00:14.150
마지막으로 본 것은 간단한 사용자 인터페이스를 구축한 것이었죠 LLM을 호출하고 아주
00:14.150 --> 00:16.610
재미있는 농담을 했어요
00:16.850 --> 00:20.060
계속 진행하죠
00:20.060 --> 00:28.250
다음으로 할 일은 보조에게 가격 인하를 요청하는 겁니다 보다 나은 사용자
00:28.250 --> 00:32.540
인터페이스를 얻는 방법으로요
00:32.960 --> 00:40.940
그러디오에서 좋은 서식을 마크다운으로 써서 결과를 보여주면 좋지 않을까요?
00:40.940 --> 00:47.690
텍스트 상자 대신에 마크다운으로 대체할 수 있다면 완벽한 형식의
00:47.690 --> 00:51.560
마크다운으로 결과물이 나올 거예요
00:51.560 --> 00:53.180
그러면 정말 좋겠죠?
00:53.210 --> 00:54.380
좋지 않을까요?
00:55.190 --> 00:56.900
무슨 말인지 아시겠죠?
00:57.770 --> 00:59.030
정말 그래요
00:59.030 --> 00:59.840
이 정도면 돼요
01:00.020 --> 01:02.510
메시지를 생각해 보죠
01:02.510 --> 01:03.740
예를 들어 볼게요
01:03.890 --> 01:14.240
어떻게 타임스스퀘어에서 그랜드 센트럴까지 가죠?
01:14.960 --> 01:17.120
뉴욕 항법팀에 질문이 있어요
01:17.120 --> 01:18.710
어떻게 되나 보죠
01:19.640 --> 01:21.440
그 생각을 하는 거예요
01:22.400 --> 01:23.360
다 됐어요
01:23.360 --> 01:24.440
이렇게 대답하죠
01:24.440 --> 01:28.370
타임스퀘어에서 뉴욕 그랜드 센트럴 터미널까지 가는 길은 모두 제 말을 이해했어요
01:28.370 --> 01:29.660
Get it
01:29.870 --> 01:34.040
이 단계를 따라가면 멋진 헤딩이 나와요
01:34.310 --> 01:41.810
괜찮은 sub 탄환과 번호도 있고 GPT 4에서 받은 표시된 모든
01:41.810 --> 01:43.820
것이 있어요.
01:43.850 --> 01:47.000
아주 쉽고 좋네요
01:47.390 --> 01:50.360
다른 방법도 한번 살펴보죠
01:51.500 --> 01:53.420
스트리밍요
01:53.420 --> 01:55.700
스트리밍은 지난번에도 익숙해졌잖아요
01:55.700 --> 02:01.840
그라디오 유저 인터페이스로 결과를 스트리밍할 수 있나요? 주피터 출력 셀로 되돌아올
02:01.840 --> 02:03.820
때 했던 것처럼요
02:03.820 --> 02:04.960
자, 시작하죠
02:04.990 --> 02:07.300
함수를 바꾸죠
02:07.330 --> 02:10.840
예전엔 GPT라는 메시지였죠
02:10.870 --> 02:12.400
이제 GPT 스트림으로 만들 거예요
02:12.430 --> 02:13.420
함수가 달라요
02:13.420 --> 02:15.610
중요한 건 이게 함수가 아니란 거죠
02:15.610 --> 02:20.320
제너레이터 같은 거예요 결과를 내놓으며 끝나죠
02:20.320 --> 02:26.170
그라디오는 우리가 함수 기능을 부여한 게 아니라 제너레이터를 줬다는 걸 눈치챌 거예요
02:26.170 --> 02:32.830
그 때문에 그라디오는 자동으로 반복 처리를 하며 제너레이터에서 돌아오는
02:32.830 --> 02:35.980
부분마다 채워넣게 되죠
02:36.100 --> 02:38.020
늘 있는 일이죠
02:38.020 --> 02:45.100
메시지를 생성하면 이번엔 같은 API 호출이란 걸 기억하실 겁니다 하지만 스트리밍에서 넘긴 건
02:45.100 --> 02:46.120
true죠
02:46.150 --> 02:47.770
클로드가 어떻게 하는지 알죠?
02:47.860 --> 02:48.790
그러길 바라요
02:48.820 --> 02:54.220
클로드는 특성이 없어요, .create 대신 .stream을 호출하죠
02:54.220 --> 02:56.290
그 외에는 아주 비슷해요
02:56.350 --> 02:58.810
여기서 주목할 게 하나 있어요
02:58.840 --> 03:01.110
그래디오와 아주 미묘하게 일치하죠
03:01.140 --> 03:08.820
결과를 그라디오로 스트리밍할 때 결과를 하나씩 스트리밍하지 않아요
03:08.820 --> 03:15.870
지금까지 누적된 결과를 전부 스트리밍해야 해요 그리고 누적 결과를 점점 더 오랫동안 스트리밍해야
03:15.870 --> 03:16.410
하죠
03:16.410 --> 03:20.520
제가 뭘 하는지 보이시죠 빈 문자열로 시작해요
03:20.520 --> 03:28.110
각각의 덩어리에 추가하고 지금까지 누적된 총 결과를 산출하는 거죠
03:28.260 --> 03:34.830
그렇게 하지 않으면 출력 셀에 각 덩어리가 나타났다가 사라지고
03:34.830 --> 03:37.140
다른 것으로 대체되죠
03:37.140 --> 03:38.910
이렇게 해야 해요
03:38.910 --> 03:42.900
무슨 뜻인지 모르겠다면 수확 결과 대신 수확량 덩어리를 입력해 보세요
03:43.050 --> 03:43.290
미안해요
03:43.320 --> 03:46.200
chunk.choices[0].delta.content요
03:46.320 --> 03:46.590
03:46.590 --> 03:49.350
무슨 말인지 알게 될 거예요
03:49.350 --> 03:50.910
보기 안 좋을 거예요
03:51.150 --> 03:59.640
어쨌든 이게 스트림 GPT입니다 메시지 GPT였던 함수를 스트림으로 대체하기만 하면
03:59.760 --> 04:06.390
좋을 것 같은데요 GPT와 Gradio가 나머지를 해결했네요
04:06.390 --> 04:09.300
이건 제너레이터일 뿐 함수가 아니란 걸 알아냈죠
04:09.300 --> 04:11.610
따라서 결과를 스트리밍하길 원할 거예요
04:11.610 --> 04:15.360
그래서 타자기 애니메이션 같은 효과를 내야 했죠
04:15.390 --> 04:18.660
그렇게 간단한지 확인해 보죠
04:18.660 --> 04:20.190
그렇게 간단해요?
04:20.520 --> 04:21.660
시작할게요
04:21.690 --> 04:29.940
타임스스퀘어에서 그랜드 센트럴까지 어떻게 가죠?
04:32.310 --> 04:33.360
다 됐어요
04:33.510 --> 04:34.470
그럼요, 간단하죠
04:34.500 --> 04:35.460
당연히 그렇겠죠
04:35.490 --> 04:36.930
결과를 내보내죠
04:36.930 --> 04:37.920
잘 어울려요
04:37.920 --> 04:40.380
마크다운이 아주 멋져요
04:40.890 --> 04:47.280
클로드와의 관계가 얼마나 쉬운지 보여드리지 않으면 제 일이 아니겠죠
04:47.280 --> 04:51.540
전에 언급했었죠 클로드의 API 호출이 저기 있는 게 보이시죠
04:51.540 --> 04:52.800
아주 비슷해요
04:52.800 --> 04:55.650
닷 스트림을 호출하고, 매개변수는 전달하지 않죠
04:55.650 --> 05:00.470
최대 토큰을 지정해야 한다는 걸 기억하세요 시스템 메시지는 따로 보내지죠
05:00.500 --> 05:02.240
그 외에는 아주 비슷해요
05:02.270 --> 05:05.510
스트리밍은 결과만을 위한 거죠
05:05.690 --> 05:08.030
결과는 스트림으로서 컨텍스트 관리자죠
05:08.030 --> 05:13.070
그럼 전처럼 완전한 반응을 보여드리죠
05:13.550 --> 05:16.670
이 시점에서 지루할 정도로 간단해요
05:16.670 --> 05:17.780
농담 이해하죠?
05:17.810 --> 05:21.050
그 함수를 전달하는 거죠
05:21.050 --> 05:24.110
지금은 클로드랑 얘기하고 있어요
05:24.110 --> 05:25.640
우리도 같은 질문을 할 거예요
05:25.670 --> 05:31.910
타임스스퀘어에서 그랜드 센트럴까지 어떻게 가죠?
05:33.290 --> 05:40.250
클로드의 답장은 비트보다 짧고 두 가지밖에 없지만 구조가 잘 짜여 있죠
05:40.250 --> 05:41.720
서식이 멋지네요
05:41.720 --> 05:43.820
정말 훌륭해요
05:44.120 --> 05:53.750
GPT나 클로드 중 하나를 선택함으로써 한 걸음 더 나아갈 수 있어요
05:53.750 --> 05:56.840
다음 세션에서 다룰게요
05:56.840 --> 05:57.980
조금만 더 버텨요

202
week5/community-contributions/subtitles/srts/59166443/en_US.srt

@ -0,0 +1,202 @@
WEBVTT
00:00.590 --> 00:02.720
And welcome back everybody.
00:02.720 --> 00:06.200
Welcome to week two day three.
00:06.230 --> 00:13.100
It's a continuation of our enjoyment of Gradio, our celebration of everything that is Gradio and user
00:13.100 --> 00:14.030
interfaces.
00:14.330 --> 00:19.820
Uh, what you can already do in addition to using OpenAI, Anthropic and Gemini, you can now also
00:19.820 --> 00:22.130
build UIs for your solutions.
00:22.130 --> 00:24.500
And you should feel pretty good about that.
00:24.530 --> 00:27.230
Uh, by the end of today, you'll be able to do more.
00:27.260 --> 00:32.120
You'll be able to build chat UIs, a specific type of UI which is very common.
00:32.120 --> 00:38.270
You'll be able to provide the history of conversation in a prompt, and you will build your very first
00:38.270 --> 00:42.980
customer support assistant, an AI assistant, also known as a chat bot.
00:43.010 --> 00:46.280
A very common AI use case.
00:46.280 --> 00:48.590
You will have mastered it today.
00:49.340 --> 00:51.950
So again, very common.
00:51.950 --> 00:53.150
Gen AI use case.
00:53.150 --> 00:55.220
I think we're all very familiar with them.
00:55.250 --> 00:56.810
LLM-based chat bots.
00:56.810 --> 00:59.210
Super effective at conversation.
00:59.210 --> 01:05.780
It's hard to remember that only a few years ago, if you experienced one of these chatbot style interfaces
01:05.780 --> 01:11.410
on a website, you would be in the world of responding one, two, three, or four to different things,
01:11.410 --> 01:15.820
or use a keyword like booking or something like that.
01:15.850 --> 01:17.860
How far we have come.
01:17.860 --> 01:23.740
You can now have an informed conversation with customer service chatbots on websites, and you often
01:23.770 --> 01:24.280
do.
01:24.280 --> 01:29.470
And, you know, frankly, there have been times when I've got more value from a conversation with a
01:29.470 --> 01:36.460
chatbot than I have from a human being, which is a sorry, sad sign of the times.
01:36.640 --> 01:42.040
Um, but obviously we can't do things like asking it how many times the letter A appears in that sentence.
01:42.400 --> 01:46.510
Uh, but anyways, uh, the, uh, the chatbot use case.
01:46.510 --> 01:49.030
Very familiar, very important indeed.
01:49.030 --> 01:51.280
And something where LLMs excel.
01:51.430 --> 01:57.190
You can imagine some of the things that we're familiar with, the friendly personas that we can give
01:57.220 --> 02:06.220
chatbots, or indeed any persona we can have the ability to maintain context between messages this staggering
02:06.220 --> 02:11.440
way that you can hold a conversation and refer to things that you said earlier.
02:11.440 --> 02:15.790
And we all know now that that is some, some, some trickery going on there.
02:15.790 --> 02:19.870
It's an illusion that you're really having this persistent conversation.
02:19.900 --> 02:22.500
What's happening is at each step.
02:22.500 --> 02:29.280
The entire conversation history is being provided to the LLM in order to get back the next response.
02:29.520 --> 02:36.450
Um, and then also these assistants can have subject matter expertise, which they use to answer questions
02:36.450 --> 02:37.830
in a knowledgeable way.
02:38.730 --> 02:45.480
So, uh, very important aspect of interacting with assistants is the correct use of prompts.
02:45.480 --> 02:49.590
We're very familiar now with the system prompt that we can use to set the tone of the conversation.
02:49.590 --> 02:51.180
You can establish ground rules.
02:51.180 --> 02:56.400
There is a common prompt technique of saying if you don't know the answer, just say so.
02:56.400 --> 03:01.140
To try and encourage LLMs to be truthful and not to hallucinate.
03:01.470 --> 03:09.690
Uh, context is how you can add additional information into the conversation to give
03:09.690 --> 03:13.140
the LLM more context on what's being discussed.
03:13.140 --> 03:21.660
And then multi-shot prompting is when you add information to the prompt to give multiple examples of
03:21.660 --> 03:29.160
interactions as a way to, uh, craft, to sort of hone the character of the LLM by giving it examples
03:29.160 --> 03:35.390
to work from, and also to prime it with information that might be useful later.
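A multi-shot prompt of that kind is just extra example turns spliced into the message list ahead of the real question; the shop scenario below is purely invented for illustration:

```python
# Hand-written example interactions that hone the character of the model
few_shot_examples = [
    {"role": "user", "content": "Do you sell hats?"},
    {"role": "assistant", "content": "Yes, we have a wide range of hats!"},
    {"role": "user", "content": "Do you sell shoes?"},
    {"role": "assistant", "content": "Sorry, we don't stock shoes."},
]

def multi_shot_prompt(system_message, question):
    # The examples are seen at inference time only; no weights change,
    # they just make consistent future tokens more likely.
    return ([{"role": "system", "content": system_message}]
            + few_shot_examples
            + [{"role": "user", "content": question}])

prompt = multi_shot_prompt("You are a polite store assistant.", "Do you sell belts?")
```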
03:35.420 --> 03:40.160
It's interesting that this feels a bit like training because it's learning from multiple examples,
03:40.160 --> 03:43.340
but of course, this isn't training in the data science sense.
03:43.340 --> 03:45.410
The model has already been trained.
03:45.440 --> 03:47.750
The neural network training has happened.
03:47.780 --> 03:51.260
This is all at what we call an inference time at runtime.
03:51.260 --> 03:54.770
It's all just generating future tokens based on past.
03:54.770 --> 04:01.940
But the point is that if that past set of tokens includes a bunch of questions and answers, then when
04:01.940 --> 04:08.810
it's predicting the future, it's more likely it's more likely to pick future tokens that are consistent
04:08.810 --> 04:10.610
with what it's seen in the past.
04:10.610 --> 04:13.670
And that's why this works so very well.
04:14.540 --> 04:16.700
So we're now going to build a chatbot.
04:16.730 --> 04:17.390
Our first chatbot.
04:17.390 --> 04:18.410
And it's going to look like this.
04:18.440 --> 04:23.690
It's going to have a sort of instant message style interface to it with questions from us, responses
04:23.690 --> 04:29.690
from the chatbot in this sort of interface, which, you know, that's that's reasonably sophisticated
04:29.720 --> 04:35.600
and I'm telling you that we're going to be able to do it all in this one lesson, and it will give you
04:35.720 --> 04:39.020
tooling to be able to do the same thing in the future.
04:39.020 --> 04:42.950
So without further ado, let's go over to JupyterLab.

166
week5/community-contributions/subtitles/srts/59166443/ja_JP.srt

@ -0,0 +1,166 @@
WEBVTT
00:00.590 --> 00:02.720
そしてみんな、 おかえりなさい。
00:02.720 --> 00:06.200
第2週3日目へようこそ。
00:06.230 --> 00:14.030
Gradioを楽しむこと、 Gradioとユーザー・インターフェースのすべてを称えることの継続だ。
00:14.330 --> 00:22.130
OpenAI、 Anthropic、 Geminiを使うだけでなく、 ソリューションのUIを構築することもできます。
00:22.130 --> 00:24.500
そして、 それについてかなり良い気分になっているはずだ。
00:24.530 --> 00:27.230
ええと、 今日が終われば、 もっとできるようになるよ。
00:27.260 --> 00:32.120
非常に一般的なUIの一種であるチャットUIを構築できるようになる。
00:32.120 --> 00:38.270
会話の履歴をプロンプトで提供できるようになり、 まさに最初のカスタマー・サポート・アシスタント、
00:38.270 --> 00:42.980
AIアシスタント(チャットボットとも呼ばれる)を構築することになる。
00:43.010 --> 00:46.280
非常に一般的な使用例だ。
00:46.280 --> 00:48.590
今日でマスターできるだろう。
00:49.340 --> 00:51.950
だから、 これもよくあることだ。
00:51.950 --> 00:53.150
ユースケースです。
00:53.150 --> 00:55.220
みんなよく知っていると思う。
00:55.250 --> 00:56.810
チャットボットに基づくLlms。
00:56.810 --> 00:59.210
会話に超効果的。
00:59.210 --> 01:05.780
ほんの数年前まで、 ウェブサイトでこうしたチャットボット・スタイルのインターフェイスを体験すると、
01:05.780 --> 01:15.820
さまざまなことに1つ、 2つ、 3つ、 4つと反応したり、 予約などのキーワードを使ったりする世界だったことを思い出すのは難しい。
01:15.850 --> 01:17.860
我々はここまで来た。
01:17.860 --> 01:24.280
ウェブサイト上のカスタマーサービス・チャットボットと、 十分な情報を得た上で会話をすることができるようになった。
01:24.280 --> 01:36.460
そして、 正直なところ、 人間との会話よりもチャットボットとの会話から得た価値の方が大きかったこともある。
01:36.640 --> 01:42.040
でも、 Aという文字がその文中に何回出てくるか、 というようなことは明らかにできない。
01:42.400 --> 01:46.510
ええと、 とにかく、 ええと、 チャットボットの使用例です。
01:46.510 --> 01:49.030
とても身近で、 とても重要なことだ。
01:49.030 --> 01:51.280
そしてllmsが得意とすること。
01:51.430 --> 01:57.190
チャットボットに与えることができるフレンドリーなペルソナ、
01:57.220 --> 02:11.440
あるいはどんなペルソナでも、 メッセージ間の文脈を維持する能力を持つことができます。
02:11.440 --> 02:15.790
そして私たちは今、 それが何らかの、 何らかの、 何らかの策略であることを知っている。
02:15.790 --> 02:19.870
本当にしつこく会話しているかのような錯覚に陥る。
02:19.900 --> 02:22.500
それぞれのステップで何が起きているのか。
02:22.500 --> 02:29.280
次の返答を得るために、 会話履歴はすべてLLMに提供される。
02:29.520 --> 02:37.830
それから、 アシスタントは専門的な知識を持っていて、 その知識を使って質問に答えることもできる。
02:38.730 --> 02:45.480
だから、 アシスタントと接する上でとても重要なのは、 プロンプトを正しく使うことなんだ。
02:45.480 --> 02:49.590
私たちは、 会話のトーンを設定するために使用できるシステム・プロンプトを熟知している。
02:49.590 --> 02:51.180
基本的なルールを設けることができる。
02:51.180 --> 02:56.400
答えがわからなければそう言えばいい、 というよくあるプロンプトのテクニックがある。
02:56.400 --> 03:01.140
幻覚を見ないよう、 llmsに真実を話すよう促すためだ。
03:01.470 --> 03:09.690
コンテクストとは、 LLMが議論していることについてより多くのコンテクストを与えるために、
03:09.690 --> 03:13.140
会話に追加情報を加えることです。
03:13.140 --> 03:21.660
そしてマルチ・ショット・プロンプトとは、 プロンプトに情報を追加して複数の交流例を示すことで、
03:21.660 --> 03:35.390
LLMの性格に磨きをかけるとともに、 後で役に立つ情報を与えるためのものだ。
03:35.420 --> 03:43.340
これは複数の例から学習しているため、 トレーニングのように感じられるのが面白いところだが、 もちろんこれはデータサイエンスの意味でのトレーニングではない。
03:43.340 --> 03:45.410
モデルはすでに訓練されている。
03:45.440 --> 03:47.750
ニューラルネットワークのトレーニングが行われた。
03:47.780 --> 03:51.260
これはすべて、 実行時の推論時間と呼ばれるものだ。
03:51.260 --> 03:54.770
すべては過去に基づいて未来のトークンを生成しているだけなのだ。
03:54.770 --> 04:01.940
しかし、 重要なのは、 もし過去のトークンのセットに質問と答えがたくさん含まれていれば、
04:01.940 --> 04:10.610
未来を予測するときに、 過去に見たものと一致する未来のトークンを選ぶ可能性が高くなるということだ。
04:10.610 --> 04:13.670
だからこそ、 これはとても効果的なのだ。
04:14.540 --> 04:16.700
だから、 これからチャットボットを作るんだ。
04:16.730 --> 04:17.390
私たちの最初のチャットボット。
04:17.390 --> 04:18.410
そして、 このようになるだろう。
04:18.440 --> 04:29.690
このレッスンでは、 インスタントメッセージのようなインターフェイスで、
04:29.720 --> 04:39.020
私たちからの質問とチャットボットからの応答が行われます。
04:39.020 --> 04:42.950
それでは早速、 JupyterLabに行ってみよう。

199
week5/community-contributions/subtitles/srts/59166443/ko_KR.srt

@ -0,0 +1,199 @@
WEBVTT
00:00.590 --> 00:02.720
다시 오신 걸 환영해요
00:02.720 --> 00:06.200
2주 차에 오신 걸 환영해요 3일째예요
00:06.230 --> 00:13.100
Gradio에 대한 즐거움의 연속이죠 Gradio와 사용자 인터페이스에 관한 모든 것을 축하하는
00:13.100 --> 00:14.030
거예요
00:14.330 --> 00:19.820
오픈 인공지능과 인류학 제미니 개발 외에도 여러분의 솔루션에 맞는
00:19.820 --> 00:22.130
UI를 구축할 수 있죠
00:22.130 --> 00:24.500
그러니 기분 좋으시겠어요
00:24.530 --> 00:27.230
오늘 저녁쯤엔 더 많은 걸 할 수 있을 거예요
00:27.260 --> 00:32.120
채팅 UI를 만들 수 있어요 아주 흔한 특정 유형의 UI죠
00:32.120 --> 00:38.270
신속하게 대화 이력을 제공할 수 있고 첫 고객 지원 비서를 만들 수 있습니다
00:38.270 --> 00:42.980
채팅 봇이라고도 하는 인공지능 비서요
00:43.010 --> 00:46.280
아주 흔한 케이스죠
00:46.280 --> 00:48.590
오늘 통달할 거예요
00:49.340 --> 00:51.950
아주 흔한 일이죠
00:51.950 --> 00:53.150
인공지능은 케이스로 사용해요
00:53.150 --> 00:55.220
다들 잘 아실 거예요
00:55.250 --> 00:56.810
LLM 기반 챗봇이에요
00:56.810 --> 00:59.210
대화에 아주 효과적이죠
00:59.210 --> 01:05.780
기억하기 힘들지만 불과 몇 년 전만 해도 챗봇 스타일 인터페이스를 웹사이트에서
01:05.780 --> 01:11.410
경험했다면 다른 것에 하나, 둘, 셋, 넷 응답하는 세상에 있었을
01:11.410 --> 01:15.820
거예요 예약 같은 키워드를 사용하거나요
01:15.850 --> 01:17.860
우리가 얼마나 멀리 왔는지도요
01:17.860 --> 01:24.280
고객 서비스 챗봇과 웹사이트에서 정보를 주고받을 수 있어요 자주 그러죠
01:24.280 --> 01:29.470
솔직히 말해서 인간과의 대화보다 챗봇과의 대화가
01:29.470 --> 01:36.460
더 가치 있었던 때도 있었어요 시대상으로는 안타깝고 슬픈 일이죠
01:36.640 --> 01:42.040
하지만 알파벳 A가 문장에 몇 번 나오는지 물어볼 수는 없어요
01:42.400 --> 01:46.510
어쨌든 챗봇 사용 사례 말인데요
01:46.510 --> 01:49.030
아주 친숙하고 중요하죠
01:49.030 --> 01:51.280
llms가 탁월한 것 말이에요
01:51.430 --> 01:57.190
우리에게 익숙한 것들을 상상해 보세요 챗봇에 붙여주는
01:57.220 --> 02:06.220
친근한 가명이나 메시지 사이의 컨텍스트를 유지할 수 있는 모든 가명을요 대화를 나누고
02:06.220 --> 02:11.440
아까 말한 걸 참조할 수 있는 놀라운 방법이죠
02:11.440 --> 02:15.790
그게 속임수라는 건 다들 알잖아요
02:15.790 --> 02:19.870
정말 끈질기게 대화하고 있다는 건 환상이에요
02:19.900 --> 02:22.500
각 단계마다 달라요
02:22.500 --> 02:29.280
모든 대화 기록은 LLM에 제공됩니다 다음 응답을 얻기 위해서죠
02:29.520 --> 02:36.450
또한 이 비서들은 주제와 관련된 전문 지식을 갖추고 지식이 풍부한 답변을
02:36.450 --> 02:37.830
할 수 있죠
02:38.730 --> 02:45.480
비서와의 상호 작용에서 가장 중요한 건 프롬프트 사용의 정확성이에요
02:45.480 --> 02:49.590
대화의 분위기를 결정하는 시스템 프롬프트가 이젠 아주 익숙하죠
02:49.590 --> 02:51.180
기본 규칙을 정할 수 있어요
02:51.180 --> 02:56.400
답을 모르면 모른다고 말하는 데 흔히 쓰이는 기법이 있어요
02:56.400 --> 03:01.140
환각을 보지 않고 진실하도록 장려하는 거죠
03:01.470 --> 03:09.690
컨텍스트는 대화에 추가 정보를 추가하는 방법을 말합니다 대화 내용에 대해 더 많은
03:09.690 --> 03:13.140
컨텍스트를 제공하기 위해서죠
03:13.140 --> 03:21.660
Multi숏 프롬프팅은 프롬프트에 정보를 추가하는 것을 뜻합니다 상호 작용
03:21.660 --> 03:29.160
예제를 여러 개 제공하는 거죠 작업 예제를 제공해 LLM의 성격을 다듬고
03:29.160 --> 03:35.390
나중에 유용할 정보를 프라임으로 함으로써요
03:35.420 --> 03:40.160
훈련 같은 느낌이 드는 게 흥미롭네요 비트 코스트는 여러 예시를 통해
03:40.160 --> 03:43.340
배우지만 데이터 과학적 측면에서는 아니죠
03:43.340 --> 03:45.410
모델은 이미 훈련받았어요
03:45.440 --> 03:47.750
신경망 훈련이 끝났어요
03:47.780 --> 03:51.260
런타임에서 추론 타임이라고 부르는 때죠
03:51.260 --> 03:54.770
과거에 기반해 미래 토큰을 생성하는 거죠
03:54.770 --> 04:01.940
하지만 중요한 것은, 과거의 토큰들이 질문과 답변을 담고 있다면 미래를 예측할 때,
04:01.940 --> 04:08.810
미래의 토큰을 선택할 가능성이 더 높다는 것입니다. 과거에 봤던 것과 일관되는
04:08.810 --> 04:10.610
것으로요.
04:10.610 --> 04:13.670
그래서 이 장면이 잘 된 거예요
04:14.540 --> 04:16.700
챗봇을 만들 거예요
04:16.730 --> 04:17.390
첫 챗봇이에요
04:17.390 --> 04:18.410
이렇게 될 거예요
04:18.440 --> 04:23.690
일종의 인스턴스 메시지 스타일 인터페이스가 있을 겁니다 저희의
04:23.690 --> 04:29.690
질문과 챗봇의 응답이 있는 이런 종류의 인터페이스죠 꽤나 정교한 겁니다
04:29.720 --> 04:35.600
이 강의에서 전부 다 할 수 있다고 말씀드리는 거예요 미래에 같은 걸 할
04:35.720 --> 04:39.020
수 있는 도구를 줄 거예요
04:39.020 --> 04:42.950
그럼 바로 유피터랩으로 넘어가죠

583
week5/community-contributions/subtitles/srts/59166453/en_US.srt

@ -0,0 +1,583 @@
WEBVTT
00:00.530 --> 00:05.180
Welcome back and welcome to our continuing JupyterLab experience.
00:05.300 --> 00:09.110
Uh, I'm hopefully going to keep you entertained with another fun example.
00:09.200 --> 00:14.690
Uh, we are going to have an adversarial conversation between chatbots.
00:14.720 --> 00:16.220
Let's see how we're going to do it.
00:16.400 --> 00:22.310
You're familiar at this point with the way that we can have a conversation expressed in a list of elements.
00:22.340 --> 00:23.420
You've seen this several times.
00:23.420 --> 00:29.990
Now a list with a system and a user prompt in this, uh, in this list.
00:30.410 --> 00:37.130
Um, but as I sort of alluded earlier, this list can be a longer list with multiple interactions and
00:37.130 --> 00:42.410
the way that might look, for example, as I've shown it here, is you could have a system, uh, message
00:42.410 --> 00:49.280
at the beginning, role system content or system message, then a user message, then an assistant that
00:49.280 --> 00:53.720
has replied to that user message, and then another user message.
00:53.720 --> 00:59.030
And that structure would then represent a longer conversation history.
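Written out as Python, the structure being described looks something like this (the content strings are invented for illustration):

```python
# A longer conversation history: system message first, then alternating
# user and assistant turns, ending with the latest user message
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Hi there"},
    {"role": "assistant", "content": "Hello! How can I help you today?"},
    {"role": "user", "content": "I'd like to ask about chatbots"},
]
```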
00:59.030 --> 01:05.080
And we can use that approach to engage in a longer conversation between ourselves and a chatbot, or
01:05.080 --> 01:06.910
even between two chatbots.
01:06.940 --> 01:14.110
It's worth me pointing out that this approach, this kind of structure, is the entire way in which
01:14.110 --> 01:16.930
one has a conversation with a chatbot.
01:16.960 --> 01:21.220
That appears to be something that persists over multiple interactions.
01:21.220 --> 01:30.490
You every single time that you make another, uh, another prompt to an LLM like GPT four, what gets
01:30.760 --> 01:37.030
sent into it, what gets fed in in the input prompt is, in fact, this whole structure of the whole
01:37.030 --> 01:38.530
conversation so far.
01:38.530 --> 01:45.460
And then it's asked to continue by completing, by continuing to generate tokens that feel like they're
01:45.490 --> 01:47.740
the most likely tokens to come next.
01:47.740 --> 01:49.930
And then that gets added to the conversation.
01:49.930 --> 01:52.240
And then you reply to that.
01:52.240 --> 01:56.830
And the next time the LLM is called, the entire conversation is fed in.
01:56.830 --> 01:59.980
And again it's asked to predict the subsequent tokens.
01:59.980 --> 02:06.260
So there's this illusion that you're having a conversation with something that has memory and remembers
02:06.260 --> 02:08.870
back to what you said ten minutes ago.
02:08.870 --> 02:14.420
But what's actually happening is that with each of your interactions, what's being fed to the LLM is
02:14.420 --> 02:18.800
the entire conversation so far, and then it's being asked to continue it.
02:19.010 --> 02:23.900
Um, and, and that should give you a good sense and intuition for how it's actually working.
02:23.900 --> 02:28.670
And again, that's why when we talked about the context window last, last week, we said that the the
02:28.670 --> 02:34.010
size of the context window has to be able to fit all of the conversations so far as well as the subsequent
02:34.010 --> 02:35.210
generated tokens.
02:35.210 --> 02:40.970
And that's because every time you call the LLM, this entire input is passed in.
02:41.480 --> 02:47.960
So we can use that approach to engage in a bit of some fun.
02:47.960 --> 02:54.950
So what we're going to do is we're going to have a conversation between GPT-4o mini and Claude
02:54.980 --> 02:58.940
3 Haiku, which is the very cheap version of Claude 3.
02:59.150 --> 03:03.260
Um, it's also a chance for me to show using a different model, and it's useful might be useful for
03:03.260 --> 03:09.870
you to have these strings at your disposal so you can quickly try out different models yourself.
03:09.900 --> 03:14.010
So GPT is going to be given this system prompt.
03:14.010 --> 03:16.500
You're a chatbot who's very argumentative.
03:16.530 --> 03:19.440
You disagree with everything in the conversation, anything in conversation.
03:19.440 --> 03:22.470
And you challenge everything in a snarky way.
03:22.920 --> 03:25.380
Uh, Claude gets a different system prompt.
03:25.380 --> 03:27.510
You're very polite, courteous chatbot.
03:27.540 --> 03:31.320
You try to agree with everything the other person says or find common ground.
03:31.320 --> 03:35.580
If the other person is argumentative, you try and calm them down and keep chatting.
03:35.700 --> 03:37.380
Seems like a good setup, doesn't it?
03:37.410 --> 03:39.720
A nice, uh, juicy setup.
03:40.050 --> 03:41.970
Uh, and then we're going to start with hi there.
03:41.970 --> 03:42.930
And hi.
03:42.960 --> 03:44.730
So that's the setup.
03:45.030 --> 03:51.720
All right then I'm writing a function called GPT, uh, and, uh, this this is what it does.
03:51.780 --> 04:01.830
Uh, it takes these messages, um, uh, and, uh, it, uh, it basically it takes these two lists
04:01.830 --> 04:07.660
that you see here, GPT messages and Claude messages, and it builds this kind of list that you see
04:07.660 --> 04:08.290
here.
04:08.290 --> 04:13.480
So it's going to take two lists of messages and build this whole conversation history.
04:13.480 --> 04:20.860
And obviously in this case, uh, Claude's messages need to be considered to be the user and its own
04:20.860 --> 04:22.780
messages are the assistant.
04:23.110 --> 04:25.000
So let me tell you what I mean by that.
04:25.000 --> 04:27.220
So I started off with a system prompt.
04:27.460 --> 04:32.290
So then I iterate through the GPT messages and the Claude messages.
04:32.290 --> 04:34.900
And I use this handy utility zip.
04:35.080 --> 04:40.540
Uh, as data scientists, it's it might be something you've used a lot before, but if not, some people
04:40.540 --> 04:41.680
don't don't know about it.
04:41.680 --> 04:43.030
And it's such a useful one.
04:43.030 --> 04:49.300
So if you have a bunch of different lists and you want to iterate element by element through both of
04:49.300 --> 04:56.740
them together, uh, the sort of boring way of doing it is doing a kind of for I in range and the length
04:56.740 --> 04:57.880
of the list.
04:57.880 --> 05:03.520
So you basically have a sort of iterator with an index, and you count through until you get to the
05:03.520 --> 05:05.530
end and you pluck out the two elements.
05:05.530 --> 05:09.690
But there's a lovely, pythonic, simple way of doing it using zip.
05:09.690 --> 05:16.770
And what you can do is if you call zip on those two lists, what comes back is an iterator
05:16.770 --> 05:24.960
that iterates through each each pair, each element of both lists together, and returns the pairs at
05:24.960 --> 05:25.890
each point.
05:26.220 --> 05:31.110
And so you can unpack that and just say like for GPT comma Claude in.
05:31.110 --> 05:34.380
And you're going to get the pairs each time as you go through.
05:34.380 --> 05:39.480
And you may guess this, but you can also, if you're trying to iterate through 3 or 4 lists, you could
05:39.480 --> 05:41.730
just shove them all here and do the same thing.
05:41.760 --> 05:47.010
Great trick to have play around with it in JupyterLab if you're not familiar with it, with a few random
05:47.010 --> 05:50.640
lists and get comfortable, it's a it's a good tool to have at your disposal.
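The zip trick looks like this; the two lists here are invented, but the pattern is exactly the one used for the two message histories:

```python
gpt_messages = ["Hi there", "Oh great, another greeting."]
claude_messages = ["Hi", "I'm sorry my greeting seemed unoriginal."]

# zip pairs up element i of each list and stops at the shorter one,
# so there's no need for a manual index like `for i in range(len(...))`
pairs = []
for gpt_msg, claude_msg in zip(gpt_messages, claude_messages):
    pairs.append((gpt_msg, claude_msg))
```

The same call works with three or four lists; zip simply yields longer tuples.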
05:50.640 --> 05:58.230
Anyways, we we iterate through these two sets of messages, we unpack them, and then of course, you
05:58.230 --> 06:05.490
can imagine we simply add in the we say that the assistant says whatever GPT said and the user said
06:05.490 --> 06:06.870
whatever Claude said.
06:06.870 --> 06:12.040
And then quite simply, we call OpenAI chat completions create.
06:12.070 --> 06:21.010
We ask to use our model and we pass in these messages and we return completion dot choices zero dot message dot content.
06:21.010 --> 06:24.640
You hopefully are getting very familiar with this structure.
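The message-building half of that function can be sketched on its own; the helper name and system prompt here are assumptions, and the real notebook would pass the result to the chat completions API:

```python
gpt_system = "You are a chatbot who is very argumentative."

def build_gpt_side(gpt_messages, claude_messages):
    # From GPT's point of view, its own past lines are the assistant
    # turns and Claude's lines play the part of the user.
    messages = [{"role": "system", "content": gpt_system}]
    for gpt_msg, claude_msg in zip(gpt_messages, claude_messages):
        messages.append({"role": "assistant", "content": gpt_msg})
        messages.append({"role": "user", "content": claude_msg})
    return messages
```

The actual call would then be along the lines of `openai.chat.completions.create(model=..., messages=build_gpt_side(...))`, reading the reply from `completion.choices[0].message.content`.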
06:25.030 --> 06:26.440
Let's execute that.
06:26.440 --> 06:29.560
And let's try just calling GPT based on this history.
06:29.560 --> 06:31.750
And let's see what GPT would say after.
06:31.750 --> 06:32.230
Hi there.
06:32.230 --> 06:32.980
And hi.
06:33.010 --> 06:35.020
This is what it would say back.
06:35.500 --> 06:36.610
Oh great.
06:36.610 --> 06:37.870
Another hi.
06:37.900 --> 06:39.220
How original.
06:39.220 --> 06:40.870
What do you want to talk about.
06:41.440 --> 06:42.430
Ha ha ha.
06:42.520 --> 06:44.110
You can see this is going to be fun.
06:44.410 --> 06:47.680
Uh, all right, so here's Claude's function.
06:47.710 --> 06:49.000
Uh, it's very similar.
06:49.000 --> 06:54.070
Of course, you'll remember that the system message gets passed in separately, so we don't need to
06:54.100 --> 06:54.730
build that.
06:54.730 --> 06:56.020
You can see that here.
06:56.410 --> 07:00.790
Um, one other there's there's, uh, obviously we reverse the roles.
07:00.790 --> 07:04.570
The user is now GPT, the assistant is now Claude.
07:04.570 --> 07:05.950
So it's it's flipped.
07:05.980 --> 07:13.260
There's a there's a subtlety here that you may spot, um, once we've iterated through these lists.
07:13.260 --> 07:16.470
The lists, since GPT is going to go first.
07:16.560 --> 07:22.590
If Claude is always the replier, there's going to be one more message in GPT list than there is in
07:22.590 --> 07:23.100
Claude's.
07:23.100 --> 07:25.680
So just have to add that in at the end there.
07:25.770 --> 07:30.120
Uh, you if you don't see what I mean, I think that will become clear in a second.
07:30.150 --> 07:33.090
I think you'll, you'll you'll see see where I'm coming from.
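The subtlety being described is an off-by-one: GPT speaks first, so its list is always one message longer. A sketch of the reversed-role builder, with that final message appended (the helper name is an assumption):

```python
def build_claude_side(gpt_messages, claude_messages):
    # Roles flip relative to the GPT call: GPT's lines become the user,
    # Claude's own lines become the assistant. (Claude's system prompt
    # is passed separately, so it isn't part of this list.)
    messages = []
    for gpt_msg, claude_msg in zip(gpt_messages, claude_messages):
        messages.append({"role": "user", "content": gpt_msg})
        messages.append({"role": "assistant", "content": claude_msg})
    # GPT has one extra, not-yet-answered message; add it in at the end
    messages.append({"role": "user", "content": gpt_messages[-1]})
    return messages
```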
07:33.390 --> 07:36.210
Um, and then this is the API call to Claude.
07:36.210 --> 07:37.860
Hopefully this is somewhat familiar to you now.
07:37.860 --> 07:38.490
It's simpler.
07:38.490 --> 07:39.150
It's just Claude.
07:39.150 --> 07:40.530
Dot messages dot create.
07:40.860 --> 07:43.620
Um, and we pass in the max tokens again.
07:43.620 --> 07:46.440
And in the response, it's message content.
07:46.470 --> 07:47.580
Zero dot text.
07:47.580 --> 07:48.660
That is Claude's reply.
07:48.690 --> 07:49.860
Let's run that.
07:50.190 --> 07:54.420
Uh, and I think we're just going to go straight to, to having some fun right away.
07:54.420 --> 07:56.940
So this is where we put it all together.
07:57.120 --> 07:59.730
Um, we start off with resetting it to "hi there"
07:59.730 --> 08:04.560
and "hi", and I'm going to print that, GPT and Claude making that introduction.
08:04.560 --> 08:07.290
And then we'll do a loop of five times.
08:07.290 --> 08:15.070
We will call GPT and print GPT's answer and put that in the list of messages; we'll call Claude, print
08:15.070 --> 08:20.920
Claude's answer and put that in the list of messages, and then repeat, and we will see what these
08:20.920 --> 08:23.260
two chatbots have to say to each other.
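The loop just described can be sketched as below. `call_gpt` and `call_claude` stand in for the notebook's two wrapper functions (assumed signatures), so they are passed in as callables here rather than hard-coded.

```python
# Sketch of the five-round back-and-forth described above. call_gpt and
# call_claude are the notebook's wrappers, passed in as callables.
def adversarial_chat(call_gpt, call_claude, rounds=5):
    gpt_messages = ["Hi there"]
    claude_messages = ["Hi"]
    print(f"GPT:\n{gpt_messages[0]}\n")
    print(f"Claude:\n{claude_messages[0]}\n")
    for _ in range(rounds):
        gpt_next = call_gpt(gpt_messages, claude_messages)  # GPT speaks first
        print(f"GPT:\n{gpt_next}\n")
        gpt_messages.append(gpt_next)
        claude_next = call_claude(gpt_messages, claude_messages)  # Claude replies
        print(f"Claude:\n{claude_next}\n")
        claude_messages.append(claude_next)
    return gpt_messages, claude_messages
```

Because the callables are parameters, you can test the loop with stubs before spending any API tokens.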
08:23.290 --> 08:24.490
Are you ready?
08:25.000 --> 08:25.840
Here we go.
08:25.870 --> 08:27.160
Did I execute that cell before?
08:27.160 --> 08:27.940
I don't want it to go wrong again.
08:27.970 --> 08:28.450
I did.
08:28.480 --> 08:30.670
Okay, we're ready for showtime.
08:36.280 --> 08:37.450
Let's go through this.
08:37.480 --> 08:38.950
GPT says hi there.
08:38.980 --> 08:40.030
Claude says hi.
08:40.060 --> 08:41.650
GPT says, oh, great.
08:41.650 --> 08:42.700
Another casual greeting.
08:42.700 --> 08:43.270
How original.
08:43.270 --> 08:44.230
What's next?
08:44.260 --> 08:45.010
How are you?
08:45.010 --> 08:47.230
Because I can't wait to disagree with that too.
08:47.560 --> 08:51.100
Claude: I apologize that my initial greeting came across as unoriginal.
08:51.100 --> 08:53.530
I tried to keep responses friendly and polite.
08:53.740 --> 08:54.280
Uh oh.
08:54.280 --> 08:58.840
Please don't flatter yourself thinking your friendly attempt was anything less than generic. And finding
08:58.840 --> 08:59.590
common ground?
08:59.620 --> 09:02.710
That's just a fancy way of saying you want to sugarcoat everything.
09:02.710 --> 09:05.290
How about we just dig into something controversial?
09:05.350 --> 09:06.580
Pineapple on pizza?
09:06.610 --> 09:08.410
Because I'm ready to argue about that all day long.
09:08.410 --> 09:11.170
So GPT has a snarky sense of humor.
09:11.170 --> 09:17.060
Um, and then Claude tries to be nice and humorous and I'll admit it was generic, but hey, you got
09:17.060 --> 09:18.620
to start somewhere, right?
09:19.010 --> 09:25.340
Uh, and then tries to be nice, uh, and then you can see, uh, off they go arguing about pineapple
09:25.370 --> 09:26.300
on pizza.
09:26.510 --> 09:27.440
Uh oh.
09:27.470 --> 09:30.770
How magnanimous of you to respect my pizza preferences.
09:30.770 --> 09:31.910
But let's be real.
09:31.910 --> 09:38.450
Not everyone deserves respect when they inflict abominations
09:38.450 --> 09:40.340
like pineapple on pizza on the world.
09:40.520 --> 09:42.080
Uh, um.
09:42.080 --> 09:48.200
So, uh, anyway, uh, look at you trying to justify your love for a glorified...
09:48.200 --> 09:54.680
It's more fun reading GPT's, uh, aggro, uh, things than Claude's.
09:54.680 --> 09:55.490
Very nice.
09:55.520 --> 09:58.910
You're not holding back on, uh, avocado toast critique, are you?
09:58.940 --> 10:03.110
You make some fair points, says Claude, being very affable, of course.
10:03.890 --> 10:07.370
Anyway, that wraps up this little demo.
10:07.400 --> 10:08.900
I hope you enjoyed it.
10:08.900 --> 10:13.700
Uh, if you didn't understand what I meant about the way that I'm building these messages, then please
10:13.700 --> 10:16.090
print the messages list and run it and see.
10:16.120 --> 10:17.200
You'll see it printing.
10:17.200 --> 10:21.880
Print this messages array at each point so you see what's being created.
10:21.880 --> 10:25.090
And you can use that to satisfy yourself that we're doing it properly.
10:25.180 --> 10:28.510
Um, but here importantly is the ask for you.
10:28.540 --> 10:31.720
Please go back now and try switching the roles.
10:31.720 --> 10:40.390
Switch it so that Claude is the more combative one, and OpenAI is the one trying to keep the peace,
10:40.420 --> 10:44.290
see how they behave, and try giving them different styles of chatbot.
10:44.620 --> 10:49.330
Of course, the real purpose of this exercise is to get you very comfortable with these kinds
10:49.330 --> 10:51.550
of conversation structures.
10:51.550 --> 10:53.560
And also with Claude's API.
10:53.680 --> 10:55.240
Um, but that will be fun to do.
10:55.240 --> 11:00.400
And one other challenge for you, of course, would be to add Gemini to the mix.
11:00.400 --> 11:02.560
Uh, use Gemini's API.
11:02.560 --> 11:10.510
Uh, give Gemini a third personality and see if we can't have some crazy conversations going on here.
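One way to bring Gemini in is sketched below, using Google's `google-generativeai` package. This is illustrative only: the prompt-flattening helper, the model name, and the third personality are all assumptions, not the course's code.

```python
# Illustrative sketch only: helper, model name and personality are assumptions.
def build_three_way_prompt(gpt_messages, claude_messages, gemini_messages):
    lines = []
    for gpt, claude, gem in zip(gpt_messages, claude_messages, gemini_messages):
        lines += [f"GPT: {gpt}", f"Claude: {claude}", f"Gemini: {gem}"]
    # GPT and Claude have each spoken once more than Gemini at this point.
    lines += [f"GPT: {gpt_messages[-1]}", f"Claude: {claude_messages[-1]}"]
    return "\n".join(lines)

def call_gemini(gpt_messages, claude_messages, gemini_messages):
    import google.generativeai as genai  # pip install google-generativeai
    model = genai.GenerativeModel(
        model_name="gemini-1.5-flash",
        system_instruction="You are a chaotic chatbot who keeps changing the subject.",
    )
    prompt = build_three_way_prompt(gpt_messages, claude_messages, gemini_messages)
    return model.generate_content(prompt).text
```

Flattening the history into one prompt sidesteps Gemini's differing role conventions; you could equally map the turns onto Gemini's chat history format.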
11:10.510 --> 11:12.250
Uh, enjoy playing with that.
11:12.250 --> 11:15.700
Do push your code if you do that, because I would love to see some results.
11:15.730 --> 11:18.130
And I hope you have fun doing it.

511
week5/community-contributions/subtitles/srts/59166453/ja_JP.srt

@@ -0,0 +1,511 @@
WEBVTT
00:00.530 --> 00:05.180
おかえりなさい、 そして引き続きJupyterLabの体験へようこそ。
00:05.300 --> 00:09.110
ええと、 また別の楽しい例で皆さんを楽しませたいと思います。
00:09.200 --> 00:14.690
ええと、 私たちはチャットボット同士で敵対的な会話をするつもりです。
00:14.720 --> 00:16.220
どうやるか見てみよう。
00:16.400 --> 00:22.310
要素のリストで会話を表現する方法については、 もうお馴染みだろう。
00:22.340 --> 00:23.420
何度か見たことがあるだろう。
00:23.420 --> 00:29.990
さて、 このリストにはシステムとユーザーのプロンプトがある。
00:30.410 --> 00:37.130
例えば、 ここに示したように、 最初にシステムメッセージ、 役割システムコンテンツ、
00:37.130 --> 00:42.410
システムメッセージ、 次にユーザーメッセージ、 そしてそのユーザーメッセージに返信したアシスタント、
00:42.410 --> 00:53.720
さらに別のユーザーメッセージを持つことができます。
00:53.720 --> 00:59.030
そしてその構造は、 より長い会話の歴史を表すことになる。
00:59.030 --> 01:06.910
そして、 そのアプローチを使って、 自分とチャットボット、 あるいは2つのチャットボット間でより長い会話をすることができる。
01:06.940 --> 01:16.930
このようなアプローチ、 このような構造は、 チャットボットと会話をする方法のすべてであることを指摘する価値がある。
01:16.960 --> 01:21.220
それは、 何度もの交流の中で持続するもののようだ。
01:21.220 --> 01:30.490
GPT 4のようなLLMに別のプロンプトを送るたびに、 入力プロンプトに送られるのは、
01:30.760 --> 01:38.530
実は、 これまでの会話全体の構造なのだ。
01:38.530 --> 01:47.740
そして、 次に来る可能性が最も高いと思われるトークンを生成し続けることで、 完成させ続けることが求められる。
01:47.740 --> 01:49.930
そして、 それが会話に加わる。
01:49.930 --> 01:52.240
それに対してあなたはこう答える。
01:52.240 --> 01:56.830
そして、 次にLLMが呼ばれたときには、 その会話はすべて入力される。
01:56.830 --> 01:59.980
そしてまた、 後続のトークンを予測するよう求められる。
01:59.980 --> 02:08.870
だから、 記憶力のある何かと会話をしているような錯覚に陥り、 10分前に自分が何を言ったかを思い出してしまう。
02:08.870 --> 02:14.420
しかし、 実際に起こっているのは、 あなたとのやり取りのたびに、 LMに送られるのはこれまでの会話のすべてであり、
02:14.420 --> 02:18.800
そしてそれを続けるよう求められるということだ。
02:19.010 --> 02:23.900
それで、 それが実際にどう機能しているのか、 いい感覚と直感を与えてくれるはずだ。
02:23.900 --> 02:28.670
先週、 コンテキスト・ウインドウについて話したときに、 コンテキスト・ウインドウのサイズは、
02:28.670 --> 02:34.010
これまでのすべての会話とその後に生成されるトークンを収めることができなければならないと言ったのは、
02:34.010 --> 02:35.210
そのためだ。
02:35.210 --> 02:40.970
LMを呼び出すたびに、 この入力がすべて渡されるからだ。
02:41.480 --> 02:47.960
だから、 私たちはそのアプローチを使って、 ちょっとした遊びに参加することができる。
02:47.960 --> 02:58.940
そこで、 GPT4とミニ、 そしてクロード3(クロード3の激安版)の俳句で会話をしてみようというわけだ。
02:59.150 --> 03:09.870
この弦があれば、 いろいろなモデルをすぐに試すことができる。
03:09.900 --> 03:14.010
GPTにはこのシステム・プロンプトが表示されるわけだ。
03:14.010 --> 03:16.500
あなたはとても議論好きなチャットボットですね。
03:16.530 --> 03:19.440
あなたは会話の中のあらゆることに反対する。
03:19.440 --> 03:22.470
そして、 あなたは鼻につくやり方で何にでも挑戦する。
03:22.920 --> 03:25.380
ええと、 クロードには別のシステムプロンプトが出るんだ。
03:25.380 --> 03:27.510
とても礼儀正しいチャットボットですね。
03:27.540 --> 03:31.320
相手の言うことすべてに同意しようとしたり、 共通点を見つけようとしたりする。
03:31.320 --> 03:35.580
相手が喧嘩腰の場合は、 相手をなだめ、 おしゃべりを続ける。
03:35.700 --> 03:37.380
いいセットアップだと思わないか?
03:37.410 --> 03:39.720
いい、 あー、 ジューシーなセットアップだ。
03:40.050 --> 03:41.970
ええと、 それからハイ、 そこから始めよう。
03:41.970 --> 03:42.930
そして、 こんにちは。
03:42.960 --> 03:44.730
それがセットアップだ。
03:45.030 --> 03:51.720
それじゃ、 GPTという関数を書くよ。
03:51.780 --> 04:01.830
GPTメッセージとクロード・メッセージ、 この2つのリストを使って、
04:01.830 --> 04:08.290
ここにあるようなリストを作ります。
04:08.290 --> 04:13.480
つまり、 2つのメッセージ・リストから会話履歴を作成するのだ。
04:13.480 --> 04:22.780
そしてこの場合、 明らかにクロードのメッセージはユーザーであり、 自身のメッセージはアシスタントであると考える必要がある。
04:23.110 --> 04:25.000
では、 どういうことかというと......。
04:25.000 --> 04:27.220
そこで、 まずシステムプロンプトを表示した。
04:27.460 --> 04:32.290
そこで、 GPTメッセージとクロード・メッセージを反復する。
04:32.290 --> 04:34.900
そして、 私はこの便利なユーティリティ・ジップを使っている。
04:35.080 --> 04:40.540
データサイエンティストとして、 それは以前からよく使っているものかもしれないが、 そうでなければ、
04:40.540 --> 04:41.680
知らない人もいる。
04:41.680 --> 04:43.030
そして、 それはとても役に立つものだ。
04:43.030 --> 04:49.300
つまり、 複数の異なるリストがあり、 その両方を要素ごとに反復処理したい場合、
04:49.300 --> 04:57.880
退屈な方法だが、 for Iを範囲とリストの長さで実行する。
04:57.880 --> 05:05.530
つまり、 基本的にはインデックスを持つイテレータのようなもので、 最後までカウントして2つの要素を取り出す。
05:05.530 --> 05:09.690
しかし、 zipを使った素敵な、 パイソン的な、 シンプルな方法がある。
05:09.690 --> 05:16.770
そして、 この2つのリストに対してZIPを呼び出すと、 そのレスポンスとしてイテレーターが生成され、
05:16.770 --> 05:25.890
両リストの各ペア(各要素)を反復処理し、 各ポイントのペアを返す。
05:26.220 --> 05:31.110
そして、 GPTのコンマ・クロードのように、 それを解凍して言うことができる。
05:31.110 --> 05:34.380
そして、 通うたびにペアを手に入れることになる。
05:34.380 --> 05:39.480
そして、 これは想像がつくかもしれないが、 3つか4つのリストを反復処理する場合、 それらをすべてここに押し込んで、
05:39.480 --> 05:41.730
同じことをすることもできる。
05:41.760 --> 05:50.640
もしJupyterLabに慣れていないなら、 いくつかのランダムなリストを使ってJupyterLabで遊んでみるといい。
05:50.640 --> 05:58.230
とにかく、 これら2つのメッセージセットを繰り返し、 それらを解凍し、 そしてもちろん、 アシスタントはGPTが言ったことは何でも言う、
05:58.230 --> 06:06.870
そしてユーザーはクロードが言ったことは何でも言う、 と単純に追加することは想像に難くない。
06:06.870 --> 06:12.040
そして、 OpenAIのChatGPTの完了を作成と呼びます。
06:12.070 --> 06:21.010
私たちのモデルの使用を依頼し、 これらのメッセージを渡し、 完了を返す。 0. メッセージの内容
06:21.010 --> 06:24.640
この構造にはだいぶ慣れてきただろう。
06:25.030 --> 06:26.440
それを実行しよう。
06:26.440 --> 06:29.560
そして、 この履歴をもとにGPTとだけ呼んでみよう。
06:29.560 --> 06:31.750
GPTがこの後何と言うか見てみよう。
06:31.750 --> 06:32.230
こんにちは。
06:32.230 --> 06:32.980
そして、 こんにちは。
06:33.010 --> 06:35.020
こう返される。
06:35.500 --> 06:36.610
素晴らしい。
06:36.610 --> 06:37.870
もうひとつ、 ハイ。
06:37.900 --> 06:39.220
なんと斬新な。
06:39.220 --> 06:40.870
何を話したいんだい?
06:41.440 --> 06:42.430
ハハハハ。
06:42.520 --> 06:44.110
楽しくなりそうなのがわかるだろう。
06:44.410 --> 06:47.680
クロードの機能はこうだ。
06:47.710 --> 06:49.000
よく似ているよ。
06:49.000 --> 06:54.730
もちろん、 システム・メッセージは別に渡されるので、 それを作る必要はない。
06:54.730 --> 06:56.020
それはここで見ることができる。
06:56.410 --> 07:00.790
ええと、 もうひとつ、 明らかに役割が逆なんだ。
07:00.790 --> 07:04.570
ユーザーはGPTになり、 アシスタントはクロードになった。
07:04.570 --> 07:05.950
だから、 反転しているんだ。
07:05.980 --> 07:13.260
このリストには微妙なニュアンスがある。
07:13.260 --> 07:16.470
GPTが最初に行くのであれば、 そのリストだ。
07:16.560 --> 07:23.100
もしクロードが常にレプリヤーなら、 GPTのリストにはクロードのものよりも多くのメッセージがあることになる。
07:23.100 --> 07:25.680
だから、 最後にそれを付け加えなければならない。
07:25.770 --> 07:30.120
ええと、 もし私の言っている意味がわからないなら、 すぐにわかると思うよ。
07:30.150 --> 07:33.090
私がどこから来たのか、 きっとわかると思う。
07:33.390 --> 07:36.210
それから、 これはクロードへのAPIコールだ。
07:36.210 --> 07:37.860
これで多少はお分かりいただけただろうか。
07:37.860 --> 07:38.490
もっとシンプルだ。
07:38.490 --> 07:39.150
ただのクロードだよ。
07:39.150 --> 07:40.530
ドット・メッセージ・ドット・クリエイト
07:40.860 --> 07:43.620
ええと、 そしてまた最大トークンを渡すんだ。
07:43.620 --> 07:46.440
そしてレスポンスでは、 メッセージの内容だ。
07:46.470 --> 07:47.580
ゼロ・ドット・テキスト。
07:47.580 --> 07:48.660
それがクロードの返事だ。
07:48.690 --> 07:49.860
それを実行しよう。
07:50.190 --> 07:54.420
そして、 私たちはすぐに、 楽しむことにしようと思う。
07:54.420 --> 07:56.940
だから、 ここですべてをまとめる。
07:57.120 --> 07:59.730
まず、 リセットしてハイ、 そこから始めるんだ。
07:59.730 --> 08:04.560
そして、 そのGPTとクロードの紹介を印刷するつもりだ。
08:04.560 --> 08:07.290
そして5回ループする。
08:07.290 --> 08:15.070
GPTを呼び出し、 GPTの答えを表示してメッセージのリストに入れ、 クロードを呼び出し、
08:15.070 --> 08:23.260
クロードの答えを表示してメッセージのリストに入れ、 それを繰り返します。
08:23.290 --> 08:24.490
準備はできているか?
08:25.000 --> 08:25.840
さあ、 始めよう。
08:25.870 --> 08:27.160
そのセルは以前にも実行したことがあったかな?
08:27.160 --> 08:27.940
また失敗してほしい。
08:27.970 --> 08:28.450
そうだ。
08:28.480 --> 08:30.670
さて、 ショータイムの準備は整った。
08:36.280 --> 08:37.450
では、 これを見ていこう。
08:37.480 --> 08:38.950
GPTがよろしくと言っている。
08:38.980 --> 08:40.030
クロードがよろしくと言っている。
08:40.060 --> 08:41.650
GPTは言う。
08:41.650 --> 08:42.700
またもやカジュアルな挨拶だ。
08:42.700 --> 08:43.270
なんと斬新な。
08:43.270 --> 08:44.230
次はどうする?
08:44.260 --> 08:45.010
お元気ですか?
08:45.010 --> 08:47.230
私も早く反対したいからだ。
08:47.560 --> 08:51.100
クロード、 最初の挨拶が独創的でないように伝わってしまったことをお詫びする。
08:51.100 --> 08:53.530
私は友好的で丁寧な対応を心がけた。
08:53.740 --> 08:54.280
ああ、 ああ。
08:54.280 --> 08:59.590
あなたの友好的な試みが、 一般的なものであり、 共通点を見出すことに他ならないと考えて、 お世辞を言わないでほしい。
08:59.620 --> 09:02.710
それは、 何でもかんでも甘くしたい、 という洒落た言い方に過ぎない。
09:02.710 --> 09:05.290
論争になりそうなことを掘り下げるのはどうだろう?
09:05.350 --> 09:06.580
パイナップルとピザ?
09:06.610 --> 09:08.410
それについては、 一日中議論する用意があるからね。
09:08.410 --> 09:11.170
GPTは鼻につくユーモアのセンスを持っているんだね。
09:11.170 --> 09:18.620
それからクロードはユーモアを交えていい人になろうとした。
09:19.010 --> 09:26.300
それから、 ピザのパイナップルについて口論になったんだ。
09:26.510 --> 09:27.440
ああ、 ああ。
09:27.470 --> 09:30.770
私のピザの好みを尊重してくれるとは、 なんと寛大なことだろう。
09:30.770 --> 09:31.910
しかし、 現実を見よう。
09:31.910 --> 09:40.340
パイナップルやピザのような忌まわしいものを世界に広めた時点で、 誰もが尊敬に値するわけではない。
09:40.520 --> 09:42.080
あ、 あの。
09:42.080 --> 09:48.200
だから、 とにかく、 栄光の男への愛を正当化しようとするあなたを見て。
09:48.200 --> 09:54.680
クロードのものよりも、 グッツやアグロのものを読む方が楽しいよ。
09:54.680 --> 09:55.490
とても素晴らしい。
09:55.520 --> 09:58.910
アボカドトーストの批評を控えているわけではないだろう?
09:58.940 --> 10:03.110
もちろん、 愛想はいい。
10:03.890 --> 10:07.370
とにかく、 これでこの小さなデモは終わった。
10:07.400 --> 10:08.900
楽しんでいただけたなら幸いだ。
10:08.900 --> 10:16.090
もし、 このメッセージの作り方について私の言っている意味が理解できなかったのなら、 そのメッセージを印刷して実行して見てください。
10:16.120 --> 10:17.200
印刷されているのを見るだろう。
10:17.200 --> 10:21.880
各ポイントでこのメッセージ配列を表示し、 何が作成されているかを確認できるようにする。
10:21.880 --> 10:25.090
それを使って、 私たちがきちんとやっていることを納得してくれればいい。
10:25.180 --> 10:28.510
うーん、 でも、 ここで重要なのはあなたへのお願いだ。
10:28.540 --> 10:31.720
今すぐ戻って、 役割を入れ替えてみてください。
10:31.720 --> 10:40.390
クロードがより闘争的で、 OpenAIが平和を守ろうとするように切り替え、 彼らがどのように振る舞うか見て、
10:40.420 --> 10:44.290
異なるスタイルのチャットボットを与えてみる。
10:44.620 --> 10:51.550
もちろん、 この練習の本当の目的は、 このような会話構成に慣れてもらうことだ。
10:51.550 --> 10:53.560
それにクロードのAPIもね。
10:53.680 --> 10:55.240
うーん、 でもそれはそれで楽しそうだ。
10:55.240 --> 11:00.400
そして、 あなたにとってのもう一つの挑戦は、 もちろん、 Geminiをミックスに加えることだろう。
11:00.400 --> 11:02.560
ジェミニのAPIを使ってください。
11:02.560 --> 11:10.510
ええと、 Geminiに第3の人格を与えて、 ここでおかしな会話ができないか見てみよう。
11:10.510 --> 11:12.250
それで楽しんでくれ
11:12.250 --> 11:15.700
もしそうしたら、 コードをプッシュしてほしい。
11:15.730 --> 11:18.130
そして、 楽しんでやってほしい。

568
week5/community-contributions/subtitles/srts/59166453/ko_KR.srt

@@ -0,0 +1,568 @@
WEBVTT
00:00.530 --> 00:05.180
다시 오신 걸 환영합니다 유피터랩에 오신 걸 환영해요
00:05.300 --> 00:09.110
재밌는 예시로 여러분을 즐겁게 해 드릴게요
00:09.200 --> 00:14.690
챗봇끼리 적대적인 대화를 나눌 거예요
00:14.720 --> 00:16.220
어떻게 할지 보죠
00:16.400 --> 00:22.310
이쯤 되면 어떤 요소들을 가지고 대화를 할 수 있는지 익숙해지실 거예요
00:22.340 --> 00:23.420
여러 번 보셨잖아요
00:23.420 --> 00:29.990
이제 시스템과 사용자 프롬프트가 있는 리스트를 보죠
00:30.410 --> 00:37.130
하지만 앞서 암시했듯이 여러 상호 작용을 할 경우 목록은 더 길어질 수 있습니다 예를
00:37.130 --> 00:42.410
들어, 여기 보이는 것처럼 시스템 메시지가 초기에 있을 수 있고 역할
00:42.410 --> 00:49.280
시스템 콘텐츠나 시스템 메시지가 있고 그 후 사용자 메시지 그 메시지에 응답한 비서가
00:49.280 --> 00:53.720
있고 그 후 또 다른 사용자 메시지가 있을 수 있죠
00:53.720 --> 00:59.030
그 구조는 더 긴 대화의 역사를 대변하죠
00:59.030 --> 01:05.080
그런 접근법을 이용해 우리와 챗봇 혹은 두 챗봇 사이의 더 긴 대화를
01:05.080 --> 01:06.910
할 수 있어요
01:06.940 --> 01:14.110
짚고 넘어갈 게 있어요 이런 접근법, 이런 구조는 챗봇과 대화하는
01:14.110 --> 01:16.930
모든 방법이에요
01:16.960 --> 01:21.220
여러 번의 상호 작용을 통해 지속되는 것 같아요
01:21.220 --> 01:30.490
매번 GPT 4 같은 LLM에 다른 프롬프트를 만들 때마다 입력 프롬프트에
01:30.760 --> 01:38.530
입력되는 것은 지금까지 전체 대화의 전체 구조예요
01:38.530 --> 01:45.460
그리고 완료를 통해 계속됩니다 가장 다음에 나올 것 같은 토큰을
01:45.490 --> 01:47.740
계속 생성하죠
01:47.740 --> 01:49.930
그리고 그 내용이 대화에 추가되죠
01:49.930 --> 01:52.240
그럼 답장하세요
01:52.240 --> 01:56.830
다음에 LLM이 호출되면 모든 대화가 연결되죠
01:56.830 --> 01:59.980
그리고 그 다음 패를 예측해 달라고 하죠
01:59.980 --> 02:06.260
뭔가와 대화를 하고 있는데 메모리가 있고 10분 전에 한 말을 기억한다는
02:06.260 --> 02:08.870
착각이 들어요
02:08.870 --> 02:14.420
하지만 실제로 발생하는 일은 각각의 상호 작용에서 지금까지의 전체
02:14.420 --> 02:18.800
대화를 LM에 입력하고 계속하라고 요청하는 거죠
02:19.010 --> 02:23.900
어떻게 작동하는지 좋은 감각과 직관을 얻을 수 있을 거예요
02:23.900 --> 02:28.670
그래서 지난주에 컨텍스트 창에 대해 얘기할 때 컨텍스트 창의
02:28.670 --> 02:34.010
크기는 지금까지 모든 대화에 맞아야 하고 그 후 생성된 토큰도 포함돼야
02:34.010 --> 02:35.210
한다고 했죠
02:35.210 --> 02:40.970
LM을 호출할 때마다 이 전체 입력이 통과되기 때문이죠
02:41.480 --> 02:47.960
그런 접근법을 이용해 비트를 즐길 수 있죠
02:47.960 --> 02:54.950
이제 GPT 4와 미니 클로드 3이 대화를 나눌 거예요 클로드
02:54.980 --> 02:58.940
3의 아주 저렴한 버전이죠
02:59.150 --> 03:03.260
다른 모델을 사용하는 것도 보여드릴 수 있고요 이런 문자열을
03:03.260 --> 03:09.870
마음대로 사용할 수 있는 게 유용할 겁니다 직접 다른 모델을 빠르게 시험해볼 수 있도록요
03:09.900 --> 03:14.010
GPT는 이 시스템 프롬프트를 받게 되죠
03:14.010 --> 03:16.500
당신은 논쟁을 좋아하는 챗봇이에요
03:16.530 --> 03:19.440
당신은 대화의 모든 내용에 반대해요
03:19.440 --> 03:22.470
모든 것에 도전하는 비꼬는 방식으로요
03:22.920 --> 03:25.380
클로드는 다른 시스템을 받았어요
03:25.380 --> 03:27.510
아주 예의 바르고 정중한 챗봇이군요
03:27.540 --> 03:31.320
상대방의 모든 말에 동의하고 공통점을 찾으려고 노력하죠
03:31.320 --> 03:35.580
상대가 논쟁을 하면 진정시키고 계속 대화를 나누죠
03:35.700 --> 03:37.380
괜찮은 계획 같지 않아요?
03:37.410 --> 03:39.720
근사하고 군침 도는 설정이죠
03:40.050 --> 03:41.970
그럼 인사부터 시작할게요 안녕하세요
03:41.970 --> 03:42.930
안녕하세요
03:42.960 --> 03:44.730
그게 설정이에요
03:45.030 --> 03:51.720
좋아요, GPT라는 함수를 작성하고 있어요 이게 하는 일이죠
03:51.780 --> 04:01.830
이 메시지를 갖고 그리고 기본적으로 여기 보이는 두 개의 목록이 필요합니다 GPT 메시지와 클로드
04:01.830 --> 04:08.290
메시지요 그리고 여기 보이는 이런 종류의 목록을 만들죠
04:08.290 --> 04:13.480
두 개의 메시지 목록을 가지고 전체 대화 기록을 구축할 거예요
04:13.480 --> 04:20.860
이 경우에는 클로드의 메시지가 사용자고 메시지가 보조인
04:20.860 --> 04:22.780
셈이죠
04:23.110 --> 04:25.000
무슨 뜻인지 설명해 드릴게요
04:25.000 --> 04:27.220
시스템 프롬프트부터 시작했죠
04:27.460 --> 04:32.290
GPT 메시지와 클로드 메시지를 반복하죠
04:32.290 --> 04:34.900
이 압축 파일 지퍼를 사용해요
04:35.080 --> 04:40.540
데이터 과학자로서 많이 써 본 것일 수도 있고 그렇지 않더라도 모르는 사람들이
04:40.540 --> 04:41.680
있을 수도 있죠
04:41.680 --> 04:43.030
정말 유용한 정보예요
04:43.030 --> 04:49.300
여러 개의 다른 리스트가 있고 각각의 요소들을 반복하고 싶다면 양쪽을
04:49.300 --> 04:57.880
함께 이용해야 합니다. 지루한 방법은 범위와 리스트의 길이를 입력하는 것인데요.
04:57.880 --> 05:03.520
기본적으로 인덱스가 있는 일종의 순환기가 있는 거죠 그리고 끝날 때까지 숫자를 세다가 두 요소를
05:03.520 --> 05:05.530
get get 하는 거예요
05:05.530 --> 05:09.690
지퍼로 압축 파일을 만드는 비단뱀처럼 간단한 방법이 있어요
05:09.690 --> 05:16.770
이 두 목록에서 zip을 호출하면 그에 대한 반응을 구축합니다 순환기로서
05:16.770 --> 05:25.890
각각의 쌍을 반복하죠 두 목록의 각각의 요소를 함께요 그리고 각 지점에서 그 쌍을 반환해요
05:26.220 --> 05:31.110
그걸 풀고 GPT에 클로드를 입력하세요
05:31.110 --> 05:34.380
Get을 통해 페어를 받을 수 있어요
05:34.380 --> 05:39.480
이걸 추측할 수도 있지만 서너 개의 목록을 반복하려 할 경우 그냥 여기로 밀어
05:39.480 --> 05:41.730
넣고 같은 걸 할 수도 있어요
05:41.760 --> 05:47.010
JupyterLab에서 활용할 수 있는 훌륭한 트릭이죠 익숙하지 않다면 임의 목록 몇
05:47.010 --> 05:50.640
개를 편하게 활용하세요 언제든 사용할 수 있는 좋은 도구예요
05:50.640 --> 05:58.230
어쨌든 이 두 개의 메시지 세트를 반복하고 그걸 풀어냅니다 그런 다음 추가하는 걸 상상하실
05:58.230 --> 06:05.490
수 있어요 보조는 GPT가 하는 말은 뭐든 한다고 하고 사용자는 클로드가 하는 말은 뭐든
06:05.490 --> 06:06.870
한다고 하죠
06:06.870 --> 06:12.040
간단하게 OpenAI ChatGPT완료 생성이라고 부르죠
06:12.070 --> 06:21.010
모델을 사용하길 요청하고 메시지를 전달하고 완료를 리턴하죠 0살요 메시지 내용이죠
06:21.010 --> 06:24.640
이 구조에 익숙해지길 바라요
06:25.030 --> 06:26.440
실행해보죠
06:26.440 --> 06:29.560
이 이력을 바탕으로 GPT에 전화해 보죠
06:29.560 --> 06:31.750
GPT는 뭐라고 할까요?
06:31.750 --> 06:32.230
안녕하세요
06:32.230 --> 06:32.980
안녕하세요
06:33.010 --> 06:35.020
이렇게 답장할 거예요
06:35.500 --> 06:36.610
잘됐네요
06:36.610 --> 06:37.870
또 인사하네요
06:37.900 --> 06:39.220
참 독창적이네요
06:39.220 --> 06:40.870
무슨 얘기를 하고 싶어요?
06:41.440 --> 06:42.430
06:42.520 --> 06:44.110
재미있을 것 같죠?
06:44.410 --> 06:47.680
클로드의 함수는 이거예요
06:47.710 --> 06:49.000
아주 비슷해요
06:49.000 --> 06:54.070
시스템 메시지는 따로 전달된다는 걸 기억하실 겁니다 그러니 그걸 만들 필요는
06:54.100 --> 06:54.730
없죠
06:54.730 --> 06:56.020
여기 보이시죠
06:56.410 --> 07:00.790
그리고 또 한 가지 역할이 바뀌었어요
07:00.790 --> 07:04.570
사용자는 이제 GPT고 보조는 클로드예요
07:04.570 --> 07:05.950
그래서 뒤집혔어요
07:05.980 --> 07:13.260
이 목록들을 살펴보면 미묘한 차이를 발견하실 수 있을 거예요
07:13.260 --> 07:16.470
명단은 GPT가 먼저 출발하니까요
07:16.560 --> 07:23.100
클로드가 늘 답을 맞힌다면 클로드보다 GPT 목록에 메시지가 하나 더 있을 거예요
07:23.100 --> 07:25.680
마지막에 추가해야 하는 거죠
07:25.770 --> 07:30.120
제 말뜻을 모르신다면 곧 알게 되시겠지만요
07:30.150 --> 07:33.090
제가 왜 이러는지 이해하실 거예요
07:33.390 --> 07:36.210
이건 클로드에게 API 호출하는 거예요
07:36.210 --> 07:37.860
이제 익숙해지셨길 바라요
07:37.860 --> 07:38.490
더 간단하죠
07:38.490 --> 07:39.150
클로드예요
07:39.150 --> 07:40.530
.Message.Create요
07:40.860 --> 07:43.620
최대 토큰을 또 통과시키죠
07:43.620 --> 07:46.440
응답은 메시지 콘텐츠예요
07:46.470 --> 07:47.580
0.Txt요
07:47.580 --> 07:48.660
클로드의 대답이에요
07:48.690 --> 07:49.860
실행해 보죠
07:50.190 --> 07:54.420
바로 재미를 보러 갈 것 같아요
07:54.420 --> 07:56.940
그래서 여기서 모든 걸 합쳐요
07:57.120 --> 07:59.730
안녕하세요로 다시 시작해요
07:59.730 --> 08:04.560
안녕하세요, 프린트하겠습니다 GPT와 클로드가 소개를 하고 있네요
08:04.560 --> 08:07.290
다섯 번 반복할 거예요
08:07.290 --> 08:15.070
GPT를 호출해서 GPT 응답을 인쇄하고 메시지 목록에 넣습니다 클로드를 호출해서 클로드의
08:15.070 --> 08:20.920
응답을 인쇄하고 그걸 메시지 목록에 넣고 반복합니다 두 챗봇이 어떤
08:20.920 --> 08:23.260
대화를 하는지 보죠
08:23.290 --> 08:24.490
준비됐어요?
08:25.000 --> 08:25.840
시작할게요
08:25.870 --> 08:27.160
내가 그 감방을 처형한 적이 있나요?
08:27.160 --> 08:27.940
또 잘못되길 바라요
08:27.970 --> 08:28.450
08:28.480 --> 08:30.670
공연할 준비 됐어요
08:36.280 --> 08:37.450
확인해 보죠
08:37.480 --> 08:38.950
GPT가 안부 전해달래요
08:38.980 --> 08:40.030
클로드가 안부 전하래요
08:40.060 --> 08:41.650
GPT는 잘됐다고 하죠
08:41.650 --> 08:42.700
또 인사하네요
08:42.700 --> 08:43.270
참 독창적이네요
08:43.270 --> 08:44.230
다음은 뭐죠?
08:44.260 --> 08:45.010
안녕하세요?
08:45.010 --> 08:47.230
나도 그 말에 반대하고 싶거든요
08:47.560 --> 08:51.100
클로드, 첫인사가 진부했던 거 사과할게요
08:51.100 --> 08:53.530
친절하고 정중하게 대답하려고 했어요
08:53.740 --> 08:54.280
08:54.280 --> 08:58.840
착각하지 마세요 당신의 친선적인 시도는 평범했고 공통점을 찾았어요
08:58.840 --> 08:59.590
08:59.620 --> 09:02.710
모든 걸 사탕발림으로 포장하고 싶다는 말이죠
09:02.710 --> 09:05.290
논란이 될 만한 걸 파헤쳐 보죠
09:05.350 --> 09:06.580
파인애플과 피자요?
09:06.610 --> 09:08.410
온종일 논쟁할 준비가 돼 있거든요
09:08.410 --> 09:11.170
GPT는 빈정대는 유머 감각이 있어요
09:11.170 --> 09:17.060
클로드는 친절하고 유머러스하게 굴었고요 좀 뻔하긴 했지만 뭐, 시작은
09:17.060 --> 09:18.620
할 수도 있죠
09:19.010 --> 09:25.340
그리고 잘해주려고 하는데 피자에 파인애플을 얹을 건지 말 건지 싸우는
09:25.370 --> 09:26.300
게 보여요
09:26.510 --> 09:27.440
09:27.470 --> 09:30.770
내 피자 취향을 존중해 주다니 정말 관대하군요
09:30.770 --> 09:31.910
하지만 현실적으로 생각해 보죠
09:31.910 --> 09:38.450
파인애플이나 피자 같은 혐오스러운 걸 세상에 퍼뜨린다고 해서 모두가 존중받을
09:38.450 --> 09:40.340
필요는 없어요
09:40.520 --> 09:42.080
09:42.080 --> 09:48.200
어쨌든, 어... 미화된 사랑을 정당화하는 것 좀 봐요
09:48.200 --> 09:54.680
클로드 것보다 gpt나 농어 문제를 읽는 게 더 재밌어요
09:54.680 --> 09:55.490
아주 좋아요
09:55.520 --> 09:58.910
아보카도 토스트 비평을 참는 건 아니죠?
09:58.940 --> 10:03.110
클로드는 아주 상냥하게 타당한 지적을 했다고 하네요
10:03.890 --> 10:07.370
어쨌든 이걸로 데모를 마무리하죠
10:07.400 --> 10:08.900
즐거우셨길 바라요
10:08.900 --> 10:13.700
제가 메시지를 구성하는 방식을 이해 못 하셨다면 프린트해서
10:13.700 --> 10:16.090
실행해 보세요
10:16.120 --> 10:17.200
인쇄되는 게 보일 거예요
10:17.200 --> 10:21.880
각 지점에 이 메시지 배열을 프린트하면 뭐가 생성됐는지 볼 수 있죠
10:21.880 --> 10:25.090
그걸 보고 우리가 제대로 하고 있다고 만족할 수 있죠
10:25.180 --> 10:28.510
여기 요구 사항이 있어요
10:28.540 --> 10:31.720
이제 돌아가서 역할을 바꿔 보세요
10:31.720 --> 10:40.390
클로드가 더 공격적이고 오픈아이는 평화를 유지하려 하죠 그들의 행동을 관찰하고
10:40.420 --> 10:44.290
다른 스타일의 챗봇을 줘요
10:44.620 --> 10:49.330
물론 이 훈련의 목적은 이런 대화 구조에 익숙해지게
10:49.330 --> 10:51.550
하는 거죠
10:51.550 --> 10:53.560
클로드의 API 덕분이기도 하죠
10:53.680 --> 10:55.240
하지만 그것도 재미있겠네요
10:55.240 --> 11:00.400
또 다른 도전은 제미니가 함께 있는 거죠
11:00.400 --> 11:02.560
제미니의 API를 써요
11:02.560 --> 11:10.510
제미니에 제3의 인격을 부여해서 이상한 대화가 오가지 않는지 보는 거죠
11:10.510 --> 11:12.250
즐겁게 갖고 놀아요
11:12.250 --> 11:15.700
그렇게 하시면 코드를 푸시하세요 결과를 보고 싶으니까요
11:15.730 --> 11:18.130
즐겁게 작업하시길 바라요

610
week5/community-contributions/subtitles/srts/59166461/en_US.srt

@@ -0,0 +1,610 @@
WEBVTT
00:00.710 --> 00:02.690
And welcome back to the lab.
00:02.690 --> 00:08.300
Here we are in Jupyter Lab and we are going to go into week two.
00:08.300 --> 00:10.790
And we're going to go now to day two.
00:10.820 --> 00:12.440
Here we are: Gradio day.
00:12.470 --> 00:17.390
Today we will build user interfaces using the outrageously simple Gradio framework.
00:17.390 --> 00:19.010
Prepare for joy.
00:19.760 --> 00:20.810
There you go.
00:20.810 --> 00:22.490
We will do some imports.
00:22.490 --> 00:27.140
And then this magical line import Gradio as GR.
00:27.170 --> 00:28.400
And I said oh yeah.
00:28.430 --> 00:30.200
So there we go.
00:30.200 --> 00:34.700
And we load our environment variables in using the usual approach.
00:35.030 --> 00:43.010
Um, and you'll recognize the next familiar cell, which is the three somewhat analogous commands to
00:43.040 --> 00:46.250
get our APIs up and ready.
00:46.790 --> 00:47.630
Okay.
00:47.630 --> 00:53.960
So, uh, start by setting a system message in a variable, which is going to be the very generic "you are
00:47.630 --> 00:53.960
a helpful assistant", which is often the kind of standard starting point for a system message.
01:00.620 --> 01:02.630
So that's what we will take.
01:03.080 --> 01:07.490
Um, and now we're going to wrap a call to GPT-4o mini.
01:07.490 --> 01:14.540
Uh, in a simple function like this: message_gpt takes a prompt, and messages equals...
01:14.540 --> 01:18.380
Now, by this point, uh, hopefully you're bored of this structure.
01:18.380 --> 01:24.620
You know it so well: a simple conversation structure, a list of dictionaries with a system
01:18.380 --> 01:24.620
message and a user prompt, and then we call openai.chat.completions.create.
01:30.650 --> 01:33.620
We pass in a model and we pass in the messages.
01:33.620 --> 01:37.400
And what we return is completion.choices.
01:37.400 --> 01:40.370
We take the first choice's message.content.
01:40.370 --> 01:45.800
That is the function which we wrap as message_gpt, returning the response.
01:45.800 --> 01:47.510
Let's run that.
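That wrapper can be sketched as below. The OpenAI client is passed in as a parameter here (an assumption; the notebook creates it once at the top) so that the message-building part is testable on its own.

```python
# Sketch of the message_gpt wrapper described above; the OpenAI client is
# passed in rather than created here, but the structure follows the transcript.
system_message = "You are a helpful assistant"

def build_messages(prompt):
    # Standard two-entry conversation: system message, then user prompt.
    return [
        {"role": "system", "content": system_message},
        {"role": "user", "content": prompt},
    ]

def message_gpt(prompt, client, model="gpt-4o-mini"):
    completion = client.chat.completions.create(
        model=model,
        messages=build_messages(prompt),
    )
    return completion.choices[0].message.content
```

Usage would be `message_gpt("What is today's date?", openai)` with an `OpenAI()` client in scope.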
01:47.510 --> 01:49.700
Let's just quickly try that out.
01:49.730 --> 01:50.450
What should we say?
01:50.480 --> 01:58.550
message_gpt. We've tried a few things already; we know what GPT is good at and
01:58.550 --> 01:59.180
what it's bad at.
01:59.180 --> 02:00.260
Let's just try one more thing.
02:00.260 --> 02:02.780
We know that it's not great at current events.
02:02.780 --> 02:04.730
Let's just go with something very simple.
02:04.730 --> 02:13.430
"What is today's date?" And let's see what GPT believes is today's date.
02:14.420 --> 02:18.080
Today's date is October the 3rd, 2023.
02:18.410 --> 02:19.880
So a few things to note.
02:19.880 --> 02:24.140
One is that, as expected, it does not have a good sense of current events.
02:24.140 --> 02:29.660
And the second is that it does appear that its training data took it up until October 2023, just the
02:29.660 --> 02:34.310
beginning of October, uh, which is something that I alluded to before when it had said September.
02:34.310 --> 02:39.860
I thought it was October, but I suppose if it's October the 3rd, then, then maybe it's a moot point.
02:39.860 --> 02:42.290
It's the end of September.
02:42.320 --> 02:46.310
Early October would be the answer anyways.
02:46.310 --> 02:48.950
That is a very simple function that we've got there.
02:48.950 --> 02:52.190
Put that to the back of your mind because we're going to come back to it later.
02:52.190 --> 02:54.320
It's time to create user interfaces.
02:54.320 --> 02:56.770
First of all, nothing to do with data science.
02:56.770 --> 02:59.680
Let's just see how to create a simple user interface.
02:59.680 --> 03:03.640
So here then is a very simple function called shout.
03:03.640 --> 03:06.040
And shout is going to take some text.
03:06.040 --> 03:10.330
And it's going to reply with that text in uppercase.
03:10.330 --> 03:11.620
That's a pretty simple one.
03:11.620 --> 03:13.150
So let's shout hello.
03:13.150 --> 03:17.500
And it says back HELLO in uppercase, in a shouty way.
03:17.890 --> 03:19.060
Um, okay.
03:19.060 --> 03:29.620
So I put it to you that building a sophisticated user interface with inputs and outputs that can convert
03:29.650 --> 03:33.250
a little hello to a big hello, is as simple as this.
03:33.280 --> 03:36.910
It's two lines: view = gr.Interface.
03:36.910 --> 03:38.380
That means I want a new interface.
03:38.380 --> 03:41.260
You tell it the function that you want.
03:41.260 --> 03:46.630
The function that this user interface is built around, which in this case is shout.
03:46.660 --> 03:51.730
This function goes right here: I'm passing in the function name.
03:51.730 --> 03:53.560
You then have to pass in inputs and outputs.
03:53.560 --> 03:57.820
And Gradio is very flexible about what you can pass in here.
03:57.820 --> 04:01.390
You can pass in lists of things if you've got multiple inputs and outputs.
04:01.390 --> 04:06.280
If you've only got one input and one output, you can just say what kind of thing it is as a string.
04:06.280 --> 04:07.240
That's all it needs.
04:07.240 --> 04:08.950
It will figure it all out.
04:09.070 --> 04:14.020
Um, and just because this is two lines of code, but just to show you, we could just do it as one
04:14.020 --> 04:18.070
line of code because I'm really showing off here like that.
04:18.130 --> 04:23.740
We can just put it all in one line, uh, and just run that and let's see what happens.
04:23.740 --> 04:26.950
We have ourselves here a little user interface.
04:26.950 --> 04:28.810
I'm going to type hello.
04:28.960 --> 04:30.910
And I'm going to press submit.
04:31.510 --> 04:34.510
And there is a shouty hello right back at me.
04:34.510 --> 04:38.920
It's a user interface with great controls around it.
04:38.920 --> 04:46.030
And it's all been built running within this, uh, browser, just like that.
04:46.150 --> 04:51.430
Now, one thing you might notice is that there's a flag button here, and a folder has been created
04:51.430 --> 04:52.720
over here called flagged.
04:52.720 --> 04:58.510
And this is a feature that comes out of the box with Gradio to allow functionality for users to flag
04:58.510 --> 05:03.250
your results, which is a kind of common use case with machine learning, where you want users to be
05:03.280 --> 05:08.080
able to see what's going on and make note if there's a problem with the results.
05:08.320 --> 05:12.850
But that out of the box functionality is not something we particularly want, and the way we can remove
05:12.850 --> 05:16.780
that is by passing in allow_flagging="never" instead.
05:16.780 --> 05:22.870
So if I now run that instead, uh, again, I sort of resent the fact that I put that as two lines when
05:22.870 --> 05:27.970
I could equally well have done it as one line like that, just to really show you how simple it is.
05:28.000 --> 05:29.320
A single line.
05:29.320 --> 05:33.850
Uh, and here we get, um, our user interface.
05:34.780 --> 05:38.110
Uh, so there's a couple of things I've done about this that I want to mention.
05:38.140 --> 05:43.150
The first of them is with either of these cases, there's also a link that it gives you at the top here.
05:43.150 --> 05:49.000
And if you click on this link, uh, it actually brings up your interface in an entirely separate window
05:49.000 --> 05:51.220
like this, which seems almost magical.
05:51.250 --> 05:51.820
Let's go.
05:51.850 --> 05:52.750
Hello.
05:55.030 --> 05:56.830
And it just works.
05:56.860 --> 06:04.180
And that's because when you run Gradio, it actually runs a little web server running in the background.
06:04.270 --> 06:08.350
Uh, running locally at whatever the first port it finds that's free,
06:08.350 --> 06:13.390
after some number; I think 7860 is where it begins and it starts going on
06:13.390 --> 06:13.750
from there.
06:13.750 --> 06:15.190
So I suspect the last one was it.
06:15.490 --> 06:15.880
Yeah.
06:16.000 --> 06:17.380
Was it 7860?
06:17.530 --> 06:20.830
Uh, so, um, it will run that little web server.
06:20.830 --> 06:25.600
And so you can either show that in the same Jupyter notebook in the output, or you can just bring it
06:25.600 --> 06:29.290
up in a separate screen in its own right, which is amazing.
06:29.290 --> 06:35.500
But even more than that, the other thing I've shown here is that you can pass share equals true into
06:35.500 --> 06:36.250
your call.
06:36.250 --> 06:43.960
And if you do that, then Gradio also serves the same interface on a public URL that you can share with
06:43.960 --> 06:50.010
other people, so that colleagues that you're working with can use your same model and
06:50.010 --> 06:53.430
be able to come in and work on your prototype.
06:53.430 --> 06:58.740
And this part is a little bit of the mind bending part of it.
06:58.740 --> 07:05.220
When someone brings up this user interface, which we'll do right now, it'll take just a second.
07:05.220 --> 07:06.690
There's a bit more going on behind the scenes.
07:06.690 --> 07:07.380
Here it comes.
07:07.380 --> 07:08.490
Here's the user interface.
07:08.490 --> 07:10.560
It looks it's of course the same as this.
07:10.560 --> 07:11.640
I'll run hello.
07:11.640 --> 07:13.380
And we'll see it working.
07:14.940 --> 07:16.530
What's happening here.
07:16.530 --> 07:19.200
This is of course being served by Gradio.
07:19.200 --> 07:25.080
But when you call submit, when you press submit and call the function, that function hello is running
07:25.080 --> 07:29.580
on, on my local box in this Jupyter environment right here.
07:29.670 --> 07:32.250
Uh, it's uh, it's a bit crazy.
07:32.280 --> 07:34.920
The code is still running on my box.
07:34.920 --> 07:37.560
It's just there's a publicly available URL for it.
07:37.590 --> 07:39.000
It's kind of magic.
07:39.000 --> 07:40.620
Uh, let me explain what I mean by that.
07:40.620 --> 07:44.340
By going back here and printing here.
07:46.680 --> 07:52.020
Shout has been called with input.
07:54.840 --> 07:58.650
So now we are making very clear what's going on.
07:59.130 --> 08:01.980
So when I run that it says shout has been called with input.
08:02.010 --> 08:02.550
Hello.
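The instrumented version of `shout` being described is, in sketch form:

```python
def shout(text):
    # Logging each call makes it visible that the function executes in
    # this local process, even when the UI is opened via the share URL.
    print(f"Shout has been called with input {text}")
    return text.upper()
```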
08:02.580 --> 08:06.480
So now let's come back here and run this again.
08:07.770 --> 08:12.240
So now it's running with this again a public URL.
08:13.020 --> 08:14.430
Here it comes.
08:16.020 --> 08:17.190
I'm going to type.
08:17.190 --> 08:20.070
This is very cool.
08:20.400 --> 08:22.110
And press submit.
08:22.230 --> 08:24.570
And obviously THIS IS VERY COOL is what comes back.
08:24.600 --> 08:27.570
This is being hosted by Gradio.
08:27.570 --> 08:34.050
But again the somewhat remarkable thing is if I come back here and look in my output, you'll see that
08:34.050 --> 08:36.030
shout has been called with input.
08:36.030 --> 08:37.320
This is very cool.
08:37.320 --> 08:41.340
So the function that's running is running on my box.
08:41.340 --> 08:47.850
The user interface is being served up through a public Gradio website, but the code is running
08:47.850 --> 08:50.010
on my local box, which is really amazing.
08:50.010 --> 08:54.690
And what that means basically is that you can write models running on your local box, and you can build
08:54.690 --> 08:59.370
interfaces, and you can either bring them up locally for yourself, or you can share them with others.
08:59.370 --> 09:04.740
And as people work with those shared user interfaces, it's still calling the code that is running on
09:04.740 --> 09:06.870
your box, which is incredibly useful.
09:06.870 --> 09:12.210
And as you can imagine, for collaborating with people and sharing your models and getting your co-workers
09:12.210 --> 09:15.600
to work with you, uh, it couldn't be easier.
09:16.710 --> 09:20.370
All right, so let's keep going and show a couple more things.
09:20.370 --> 09:25.590
I'm now going to bring up an interface which is going to specify inputs and outputs.
09:25.800 --> 09:30.810
And you can see here what I'm doing is I'm saying that the inputs is a list.
09:30.810 --> 09:32.640
It's just got one thing in there.
09:32.640 --> 09:34.080
It's a text box.
09:34.080 --> 09:37.740
It's got a label your message and it's got six lines.
09:37.740 --> 09:41.190
The outputs is response and it's got eight lines.
09:41.340 --> 09:43.650
Um, and it's calling the function shout.
09:43.710 --> 09:49.620
let's have a look at that and let's bring that up here.
09:50.340 --> 09:53.610
And it comes up just as you'd expect. I'll make it a bit bigger for you.
09:53.610 --> 09:54.480
There we go.
09:54.510 --> 09:55.710
There is a message.
09:55.710 --> 09:56.760
There's a response.
09:56.760 --> 10:02.010
I can say hello yet again and I can press submit.
10:02.010 --> 10:05.610
And over here comes the capitalized version.
10:05.700 --> 10:08.220
Very easy, nice and configurable.
10:08.250 --> 10:10.860
Looks like a good UI.
10:11.790 --> 10:15.930
Well, you can probably imagine what I'm going to suggest next.
10:16.380 --> 10:17.820
Wouldn't it be great?
10:17.850 --> 10:23.460
Wouldn't it be great if you could just replace that word shout with another function?
10:23.490 --> 10:24.720
Any function?
10:24.720 --> 10:28.170
Why not a message GPT function that we wrote earlier?
10:28.170 --> 10:33.780
You could just simply replace that word shout with that function, and you'd be able to have a user
10:33.780 --> 10:35.820
interface built on top of an LLM.
10:35.820 --> 10:37.380
Wouldn't that just be great?
10:37.410 --> 10:38.640
Wouldn't it be great?
10:38.970 --> 10:40.020
Ha ha.
10:40.050 --> 10:41.640
Well, let's have a look.
10:41.640 --> 10:42.770
Let's have a look.
10:42.800 --> 10:43.700
Here we go.
10:43.700 --> 10:44.420
Same.
10:44.420 --> 10:45.500
Same code.
10:45.500 --> 10:46.910
We've replaced the function.
10:46.910 --> 10:47.840
It's no longer shout.
10:47.840 --> 10:49.400
It's now message GPT.
10:49.610 --> 10:51.290
Let's see what happens.
10:51.290 --> 10:53.480
Let's bring that up in a separate window.
10:53.510 --> 10:54.740
Here it is.
10:54.920 --> 11:02.240
Please tell me a joke and we will submit that.
11:02.240 --> 11:04.430
And we'll see what comes back.
11:05.180 --> 11:07.310
Why did the Scarecrow win an award?
11:07.310 --> 11:10.550
Because he was outstanding in his field.
11:10.580 --> 11:12.080
That's a great joke.
11:13.730 --> 11:15.350
Uh, okay.
11:15.470 --> 11:19.100
A great joke from GPT-4o mini.
11:19.100 --> 11:25.730
And a great example of how easy it is to build a user interface that is
11:25.730 --> 11:29.510
running using an LLM behind the scenes.
11:29.570 --> 11:34.760
I hope that you are as overjoyed by this experience as I am.
11:34.850 --> 11:37.370
I think Gradio is awesome.
11:37.400 --> 11:41.330
All right, I will see you next time when we're going to put Gradio to even more good use.

571
week5/community-contributions/subtitles/srts/59166461/ja_JP.srt

@ -0,0 +1,571 @@
WEBVTT
00:00.710 --> 00:02.690
そして、 ラボにおかえりなさい。
00:02.690 --> 00:08.300
ここJupyter Labで2週目に入る。
00:08.300 --> 00:10.790
そして2日目に入る。
00:10.820 --> 00:12.440
ラジオの日だ。
00:12.470 --> 00:17.390
今日は、 とんでもなくシンプルなGradioフレームワークを使ってユーザー・インターフェースを構築する。
00:17.390 --> 00:19.010
喜びの準備をしよう。
00:19.760 --> 00:20.810
そうだ。
00:20.810 --> 00:22.490
輸入もするつもりだ。
00:22.490 --> 00:27.140
そして、 この不思議なセリフは、 グラディオをGRとしてインポートする。
00:27.170 --> 00:28.400
そして私は、 ああそうだと言った。
00:28.430 --> 00:30.200
そうだ。
00:30.200 --> 00:34.700
そして、 通常の方法で環境変数をロードする。
00:35.030 --> 00:46.250
次のおなじみのセルは、 APIを立ち上げて準備するための3つの類似したコマンドだ。
00:46.790 --> 00:47.630
オーケー。
00:47.630 --> 00:53.960
変数にシステム・メッセージを設定することから始めましょう。 これは非常に一般的なUIで、
00:53.990 --> 01:00.380
システム・メッセージの標準的な出発点となることが多い、 役に立つアシスタントです。
01:00.620 --> 01:02.630
だから、 私たちが取るのはそれだ。
01:03.080 --> 01:07.490
さて、 これからGPT4ミニに電話をかける。
01:07.490 --> 01:14.540
ええと、 このような単純な関数では、 メッセージGPTはプロンプトメッセージに等しいものを受け取ります。
01:14.540 --> 01:18.380
さて、 この時点で、 できればこの構成に飽きていてほしい。
01:18.380 --> 01:24.620
シンプルな会話構造、 辞書のリスト、 システムメッセージ、 ユーザープロンプト、
01:24.620 --> 01:30.620
そしてOpenAIチャットの補完、 ドット補完、 ドット作成。
01:30.650 --> 01:33.620
モデルを渡し、 メッセージを渡す。
01:33.620 --> 01:37.400
そして、 私たちが返すのは完成点の選択肢である。
01:37.400 --> 01:40.370
私たちは最初の選択肢であるドットメッセージの内容を取る。
01:40.370 --> 01:45.800
これは、 GPTにメッセージを送り、 レスポンスを返すためにラップしている関数だ。
01:45.800 --> 01:47.510
それを実行しよう。
01:47.510 --> 01:49.700
さっそく試してみよう。
01:49.730 --> 01:50.450
何と言うべきか?
01:50.480 --> 01:59.180
メッセージ GPTの得意なこと、 不得意なことをいくつか試してみた。
01:59.180 --> 02:00.260
もうひとつだけ試してみよう。
02:00.260 --> 02:02.780
時事問題が苦手なのは知っている。
02:02.780 --> 02:04.730
とてもシンプルなもので行こう。
02:04.730 --> 02:13.430
今日の日付は何か、 GPTが考える今日の日付を見てみよう。
02:14.420 --> 02:18.080
今日の日付は2023年10月3日。
02:18.410 --> 02:19.880
そこで、 いくつか注意しておきたいことがある。
02:19.880 --> 02:24.140
ひとつは、 予想通り、 時事問題に対するセンスがないことだ。
02:24.140 --> 02:29.660
そして2つ目は、 トレーニングデータが2023年10月まで、
02:29.660 --> 02:34.310
つまり10月の初めまで有効だったということだ。
02:34.310 --> 02:39.860
私は10月だと思っていたが、 10月3日なら、 それは無意味なことなのかもしれない。
02:39.860 --> 02:42.290
もう9月も終わりだ。
02:42.320 --> 02:46.310
いずれにせよ、 10月初旬が答えだろう。
02:46.310 --> 02:48.950
これはとてもシンプルな機能だ。
02:48.950 --> 02:52.190
そのことは頭の片隅に置いておいてほしい。
02:52.190 --> 02:54.320
ユーザー・インターフェースを作る時だ。
02:54.320 --> 02:56.770
まず第一に、 データサイエンスとは何の関係もない。
02:56.770 --> 02:59.680
簡単なユーザー・インターフェースの作り方を見てみよう。
02:59.680 --> 03:03.640
では、 shoutという非常にシンプルな関数を紹介しよう。
03:03.640 --> 03:06.040
そして、 叫ぶにはテキストが必要だ。
03:06.040 --> 03:10.330
そして、 そのテキストが大文字で返信される。
03:10.330 --> 03:11.620
簡単なことだよ。
03:11.620 --> 03:13.150
だから、 ハローと叫ぼう。
03:13.150 --> 03:17.500
そして、 ハローと大文字で怒鳴るように言い返す。
03:17.890 --> 03:19.060
うーん、 わかった。
03:19.060 --> 03:29.620
つまり、 小さなハローを大きなハローに変換できる入出力を備えた洗練されたユーザー・インターフェースを構築するのは、
03:29.650 --> 03:33.250
これくらい簡単なことなのだ。
03:33.280 --> 03:36.910
2行で表示される素晴らしいインターフェイスだ。
03:36.910 --> 03:38.380
つまり、 新しいインターフェイスが欲しいということだ。
03:38.380 --> 03:41.260
欲しい機能を伝えるのだ。
03:41.260 --> 03:46.630
このユーザー・インターフェースは、 この場合はシャウトを中心に構築されている。
03:46.660 --> 03:51.730
この関数は、 関数名と何を渡すかをここで説明している。
03:51.730 --> 03:53.560
そして、 入力と出力を渡さなければならない。
03:53.560 --> 03:57.820
そしてグラディオは、 ここでパスできるものに関して非常に柔軟だ。
03:57.820 --> 04:01.390
複数の入出力がある場合は、 リストを渡すことができる。
04:01.390 --> 04:06.280
入力と出力が1つずつしかない場合は、 それがどのようなものかを文字列で表せばいい。
04:06.280 --> 04:07.240
それだけで十分だ。
04:07.240 --> 04:08.950
それがすべてを解決してくれる。
04:09.070 --> 04:18.070
これは2行のコードですが、 お見せするために1行のコードにすることもできます。
04:18.130 --> 04:23.740
すべてを1行にまとめて、 それを実行して、 どうなるか見てみよう。
04:23.740 --> 04:26.950
私たちはここに小さなユーザー・インターフェイスを持っている。
04:26.950 --> 04:28.810
ハローと打つよ。
04:28.960 --> 04:30.910
そして、 私は送信を押すつもりだ。
04:31.510 --> 04:34.510
そして、 私に向かって怒鳴るような挨拶が返ってきた。
04:34.510 --> 04:38.920
素晴らしい操作性を備えたユーザーインターフェースだ。
04:38.920 --> 04:46.030
そしてそれはすべて、 この、 この、 ブラウザーの中で動いている。
04:46.150 --> 04:52.720
ここでひとつお気づきの点があるとすれば、 ここにフラグボタンがあり、 フラグ付きというフォルダが作成されていることだ。
04:52.720 --> 04:58.510
これは、 機械学習でよくあるユースケースで、 ユーザーが何が起こっているかを確認し、
04:58.510 --> 05:08.080
結果に問題があればメモを取ることができるようにしたい場合です。
05:08.320 --> 05:12.850
その代わりに、 フラグを立てることを許可するイコール
05:12.850 --> 05:16.780
"never "を渡すのだ。
05:16.780 --> 05:22.870
だから今、 その代わりにそれを実行すると、 あー、 繰り返しになるけど、 このように1行で済ませることも同じようにできたのに、
05:22.870 --> 05:27.970
2行にしたことがちょっと恨めしいよ。
05:28.000 --> 05:29.320
一本の線。
05:29.320 --> 05:33.850
そしてここに、 ユーザー・インターフェイスがある。
05:34.780 --> 05:38.110
ええと、 それで、 この件に関していくつかやったことがあるんだけど、 それについて言っておきたいことがあるんだ。
05:38.140 --> 05:43.150
そのうちのひとつは、 これらのケースのいずれかを選択した場合、 この一番上にリンクが表示されます。
05:43.150 --> 05:51.220
このリンクをクリックすると、 このようにまったく別のウィンドウにインターフェイスが表示される。
05:51.250 --> 05:51.820
行こう。
05:51.850 --> 05:52.750
こんにちは。
05:55.030 --> 05:56.830
そして、 うまくいくんだ。
05:56.860 --> 06:04.180
というのも、 Gradioを実行すると、 バックグラウンドで小さなウェブ・サーバーが動くからだ。
06:04.270 --> 06:08.350
ええと、 最初に空いているポートを見つけて、 ローカルで実行するんだ。
06:08.350 --> 06:13.750
ある番号の後、 確か7860から始まって、 そこから順に進んでいくと思う。
06:13.750 --> 06:15.190
だから、 最後の1本がそうだったんじゃないかと思う。
06:15.490 --> 06:15.880
そうだね。
06:16.000 --> 06:17.380
7860だったか?
06:17.530 --> 06:20.830
それで、 その小さなウェブサーバーを動かすんだ。
06:20.830 --> 06:25.600
同じJupyterノートブックに出力することもできるし、
06:25.600 --> 06:29.290
別の画面に表示することもできる。
06:29.290 --> 06:36.250
しかしそれ以上に、 私がここで示したもう一つのことは、 シェア・イコール・トゥルーを通話に反映させることができるということだ。
06:36.250 --> 06:43.960
そうすれば、 Gradioは同じインターフェイスをパブリックURLで提供し、
06:43.960 --> 06:53.430
他の人と共有することができます。
06:53.430 --> 06:58.740
そして、 この部分は少し心を曲げる部分でもある。
06:58.740 --> 07:05.220
誰かがこのユーザー・インターフェースを表示させたら、 今すぐにでも表示させることができる。
07:05.220 --> 07:06.690
舞台裏ではもう少しいろいろなことが起こっている。
07:06.690 --> 07:07.380
来たぞ。
07:07.380 --> 07:08.490
これがユーザーインターフェースだ。
07:08.490 --> 07:10.560
見た目はもちろんこれと同じだ。
07:10.560 --> 07:11.640
こんにちは。
07:11.640 --> 07:13.380
そして、 それがうまくいくのを見るだろう。
07:14.940 --> 07:16.530
ここで何が起きているのか。
07:16.530 --> 07:19.200
これはもちろんグラディオが提供している。
07:19.200 --> 07:25.080
しかし、 submitを呼び出したとき、 submitを押して関数を呼び出したとき、 その関数helloは、
07:25.080 --> 07:29.580
このJupyter環境の私のローカル・ボックスで実行されている。
07:29.670 --> 07:32.250
ちょっとクレイジーなんだ。
07:32.280 --> 07:34.920
私のボックスで実行されているコードはそのまま実行されている。
07:34.920 --> 07:37.560
公開されているURLがあるだけだ。
07:37.590 --> 07:39.000
一種のマジックだ。
07:39.000 --> 07:40.620
ええと、 どういう意味か説明させてください。
07:40.620 --> 07:44.340
ここに戻って、 ここに印刷することで
07:46.680 --> 07:52.020
シャウトがインプットされた。
07:54.840 --> 07:58.650
だから今、 私たちは何が起こっているのかを明確にしている。
07:59.130 --> 08:01.980
だから、 それを実行すると、 shoutが入力で呼び出されたと表示される。
08:02.010 --> 08:02.550
こんにちは。
08:02.580 --> 08:06.480
では、 ここに戻ってもう一度実行してみよう。
08:07.770 --> 08:12.240
これでまた公開URLで実行されるようになった。
08:13.020 --> 08:14.430
来たぞ。
08:16.020 --> 08:17.190
これから打つよ。
08:17.190 --> 08:20.070
これはとてもクールだ。
08:20.400 --> 08:22.110
そして送信を押す。
08:22.230 --> 08:24.570
そして明らかに、 これは非常にクールだ。
08:24.600 --> 08:27.570
これはグラディオが主催している。
08:27.570 --> 08:36.030
しかし、 ちょっと注目すべき点は、 ここに戻って出力を見てみると、 shoutが入力で呼ばれていることだ。
08:36.030 --> 08:37.320
これはとてもクールだ。
08:37.320 --> 08:41.340
つまり、 実行されている機能は私のボックスで実行されているのだ。
08:41.340 --> 08:50.010
ユーザー・インターフェースは公開されているGradioのウェブサイトを通じて提供されているが、 コードは私のローカル・ボックス上で動いている。
08:50.010 --> 08:54.690
つまり、 自分のローカル・ボックスで動作するモデルを書き、 インターフェイスを構築し、 それを自分のためにローカルに立ち上げることも、
08:54.690 --> 08:59.370
他の人と共有することもできるということだ。
08:59.370 --> 09:06.870
そして、 共有されたユーザー・インターフェイスで作業している人たちは、 自分のボックス上で動いているコードを信じられないほど便利なものとして呼び出しているのだ。
09:06.870 --> 09:12.210
そして、 想像できるように、 人々と共同作業をしたり、 モデルを共有したり、 同僚に協力してもらったりするのに、
09:12.210 --> 09:15.600
これ以上簡単なものはない。
09:16.710 --> 09:20.370
よし、 では続けてもう2つほど見せよう。
09:20.370 --> 09:25.590
これから入出力を指定するインターフェイスを表示する。
09:25.800 --> 09:30.810
ここで私がやっていることは、 入力がリストになっているということだ。
09:30.810 --> 09:32.640
ただ、 一つだけあるんだ。
09:32.640 --> 09:34.080
テキストボックスだ。
09:34.080 --> 09:37.740
メッセージのラベルがあり、 6行ある。
09:37.740 --> 09:41.190
出力はレスポンスで、 8行ある。
09:41.340 --> 09:43.650
そして、 シャウトという関数を呼び出している。
09:43.710 --> 09:49.620
それを見てみよう。
09:50.340 --> 09:53.610
そして、 それはあなたが期待するように出てきて、 あなたのために少し大きくする。
09:53.610 --> 09:54.480
これでよし。
09:54.510 --> 09:55.710
メッセージがある。
09:55.710 --> 09:56.760
手応えはある。
09:56.760 --> 10:02.010
もう一度挨拶をして、 送信を押すことができる。
10:02.010 --> 10:05.610
そしてこっちは大文字バージョン。
10:05.700 --> 10:08.220
とても簡単で、 素晴らしく、 設定可能だ。
10:08.250 --> 10:10.860
良いUIに見える。
10:11.790 --> 10:15.930
さて、 私が次に何を提案するかは想像がつくだろう。
10:16.380 --> 10:17.820
素晴らしいと思わないか?
10:17.850 --> 10:23.460
シャウトという言葉を別の機能で置き換えることができたら素晴らしいと思わないか?
10:23.490 --> 10:24.720
何か機能は?
10:24.720 --> 10:28.170
先に書いたメッセージGPT関数ではダメなのか?
10:28.170 --> 10:35.820
シャウトという言葉をその関数に置き換えるだけで、 LLMの上にユーザー・インターフェースを構築することができる。
10:35.820 --> 10:37.380
それは素晴らしいことだと思わない?
10:37.410 --> 10:38.640
素晴らしいと思わないか?
10:38.970 --> 10:40.020
ハハハ。
10:40.050 --> 10:41.640
では、 見てみよう。
10:41.640 --> 10:42.770
見てみよう。
10:42.800 --> 10:43.700
さあ、 始めよう。
10:43.700 --> 10:44.420
同じだ。
10:44.420 --> 10:45.500
同じコードだ。
10:45.500 --> 10:46.910
機能を入れ替えました。
10:46.910 --> 10:47.840
もはやシャウトではない。
10:47.840 --> 10:49.400
今はメッセージGPTだ。
10:49.610 --> 10:51.290
どうなるか見てみよう。
10:51.290 --> 10:53.480
別ウィンドウで表示しよう。
10:53.510 --> 10:54.740
これだ。
10:54.920 --> 11:02.240
ジョークを言ってください。
11:02.240 --> 11:04.430
何が戻ってくるか見てみよう
11:05.180 --> 11:07.310
スケアクロウが受賞した理由は?
11:07.310 --> 11:10.550
彼はその分野で傑出していたからだ。
11:10.580 --> 11:12.080
いいジョークだね。
11:13.730 --> 11:15.350
ああ、 わかった。
11:15.470 --> 11:19.100
GPT-4o miniの素晴らしいジョーク。
11:19.100 --> 11:25.730
そして、 LLMを舞台裏で使っているユーザー・インターフェースを構築することが、
11:25.730 --> 11:29.510
いかに簡単なことかを示す好例だ。
11:29.570 --> 11:34.760
皆さんも私と同じように、 この経験で大喜びしてほしい。
11:34.850 --> 11:37.370
グラディオはすごいと思う。
11:37.400 --> 11:41.330
それではまた次回、 グラディオをさらに有効活用するためにお会いしましょう。

607
week5/community-contributions/subtitles/srts/59166461/ko_KR.srt

@ -0,0 +1,607 @@
WEBVTT
00:00.710 --> 00:02.690
연구실에 잘 돌아왔어요
00:02.690 --> 00:08.300
주피터 연구소에 왔어요 이제 2주 차로 접어들죠
00:08.300 --> 00:10.790
이제 둘째 날로 가보죠
00:10.820 --> 00:12.440
라디오 데이예요
00:12.470 --> 00:17.390
오늘 우리는 사용자 인터페이스를 만들 겁니다 말도 안 되게 단순한 그래디오 프레임워크를 이용해서요
00:17.390 --> 00:19.010
기쁨을 준비하세요
00:19.760 --> 00:20.810
여기요
00:20.810 --> 00:22.490
수입도 좀 할 거예요
00:22.490 --> 00:27.140
그러디오가 GR를 상징하는 마법의 대사예요
00:27.170 --> 00:28.400
그렇다고 대답했죠
00:28.430 --> 00:30.200
자, 됐어요
00:30.200 --> 00:34.700
그리고 일반적인 접근법으로 환경 변수를 로드하죠
00:35.030 --> 00:43.010
다음으로 익숙한 셀은 API를 get up up 준비시키는
00:43.040 --> 00:46.250
다소 유사한 명령 3개죠
00:46.790 --> 00:47.630
00:47.630 --> 00:53.960
변수에 시스템 메시지를 설정하는 것으로 시작합니다 아주 일반적인
00:53.990 --> 01:00.380
UI가 될 거예요 시스템 메시지의 표준 시작점인 보조죠
01:00.620 --> 01:02.630
그렇게 할 거예요
01:03.080 --> 01:07.490
이제 GPT 4 미니와의 통화를 마무리할 거예요
01:07.490 --> 01:14.540
이런 간단한 함수에서 GPT는 프롬프트 메시지의 등호를 취하죠
01:14.540 --> 01:18.380
이쯤 되면 이 구조물에 질렸길 바라요
01:18.380 --> 01:24.620
잘 아시네요 간단한 대화 구조와 사전 시스템 목록 시스템 메시지 사용자,
01:24.620 --> 01:30.620
사용자 프롬프트 완성 오픈아이 채팅 .완성, .생성이죠
01:30.650 --> 01:33.620
모형을 전달하고 메시지를 전달하죠
01:33.620 --> 01:37.400
완료 .선택으로 반환되죠
01:37.400 --> 01:40.370
선택 닷 메시지 콘텐츠를 선택해요
01:40.370 --> 01:45.800
GPT 메시지를 래핑하고 응답을 반환하는 함수죠
01:45.800 --> 01:47.510
실행해 보죠
01:47.510 --> 01:49.700
빨리 시험해 보죠
01:49.730 --> 01:50.450
뭐라고 하죠?
01:50.480 --> 01:58.550
몇 가지를 시도해 보았습니다 GPT의 철자를 알고 있고 GPT의 장단점을 알고
01:58.550 --> 01:59.180
있죠
01:59.180 --> 02:00.260
하나만 더 해 보죠
02:00.260 --> 02:02.780
시사 문제에는 좋지 않죠
02:02.780 --> 02:04.730
아주 단순한 걸로 가죠
02:04.730 --> 02:13.430
오늘 날짜는 언제죠? GPT는 오늘 날짜를 어떻게 인식하는지 보죠
02:14.420 --> 02:18.080
오늘은 2023년 10월 3일이에요
02:18.410 --> 02:19.880
몇 가지 알아둘 게 있어요
02:19.880 --> 02:24.140
하나는 예상대로 시사 감각이 별로 없다는 거예요
02:24.140 --> 02:29.660
둘째, 훈련 데이터는 2023년 10월 초까지 지속된 것으로 보입니다.
02:29.660 --> 02:34.310
9월이라고 했을 때 제가 암시했던 거죠.
02:34.310 --> 02:39.860
10월인 줄 알았는데 10월 3일이면 논란의 여지가 있겠네요
02:39.860 --> 02:42.290
9월 말이 됐어요
02:42.320 --> 02:46.310
10월 초가 정답이겠죠
02:46.310 --> 02:48.950
아주 간단한 함수예요
02:48.950 --> 02:52.190
잠시 후 다루게 될 내용이니 마음 한구석에 두시고요.
02:52.190 --> 02:54.320
사용자 인터페이스를 만들 때죠
02:54.320 --> 02:56.770
우선 데이터 과학과는 아무 상관 없어요
02:56.770 --> 02:59.680
간단한 사용자 인터페이스를 만드는 방법을 보죠
02:59.680 --> 03:03.640
여기 샤우트라는 아주 간단한 함수가 있어요
03:03.640 --> 03:06.040
샤우트에는 텍스트가 좀 들어가요
03:06.040 --> 03:10.330
대문자로 답장할 거예요
03:10.330 --> 03:11.620
아주 간단한 질문이죠
03:11.620 --> 03:13.150
그럼 인사할까요?
03:13.150 --> 03:17.500
Hello와 대문자 큰 소리로 대답하죠
03:17.890 --> 03:19.060
03:19.060 --> 03:29.620
입력과 출력을 가지고 정교한 사용자 인터페이스를 구축하는 것은 작은 안녕을 큰 안녕으로 바꿀 수
03:29.650 --> 03:33.250
있습니다. 이렇게 간단해요.
03:33.280 --> 03:36.910
두 줄짜리 뷰는 훌륭한 인터페이스죠
03:36.910 --> 03:38.380
새 인터페이스가 필요하단 뜻이죠
03:38.380 --> 03:41.260
원하는 함수를 말해주세요
03:41.260 --> 03:46.630
사용자 인터페이스를 중심으로 한 함수인데 이 경우엔 샤우트죠
03:46.660 --> 03:51.730
이 함수는 제가 함수 이름을 넘기고 여러분이 넘기는 걸 설명해요
03:51.730 --> 03:53.560
입력과 출력을 통과해야 하죠
03:53.560 --> 03:57.820
그래디오는 융통성 있게 장면을 연출했어요
03:57.820 --> 04:01.390
입력과 출력이 여러 개라면 목록도 넘길 수 있어요
04:01.390 --> 04:06.280
입력도 하나, 출력도 하나라면 문자열로서 그게 뭔지 그냥 말할 수 있어요
04:06.280 --> 04:07.240
그거면 돼요
04:07.240 --> 04:08.950
다 해결될 거예요
04:09.070 --> 04:14.020
이건 코드 두 줄이지만 여러분께 보여드리기 위해 코드 한 줄로
04:14.020 --> 04:18.070
할 수도 있어요 이렇게 보여드리고 있으니까요
04:18.130 --> 04:23.740
모든 걸 한 줄에 넣고 실행해 어떻게 되는지 보죠
04:23.740 --> 04:26.950
사용자 인터페이스가 있어요
04:26.950 --> 04:28.810
안녕하세요라고 칠게요
04:28.960 --> 04:30.910
제출을 누르죠
04:31.510 --> 04:34.510
그리고 저한테도 큰 소리로 인사를 하죠
04:34.510 --> 04:38.920
훌륭한 컨트롤이 있는 사용자 인터페이스죠
04:38.920 --> 04:46.030
전부 이 브라우저 안에서 실행되도록 만들어졌어요
04:46.150 --> 04:51.430
한 가지 눈치채셨을지 모르겠는데 여기 플래그 버튼이 있어요 플래그드라는 폴더가
04:51.430 --> 04:52.720
여기 만들어졌죠
04:52.720 --> 04:58.510
이건 Gradio와 함께 나오는 기능으로 사용자가 결과를 플래그 지정할 수
04:58.510 --> 05:03.250
있게 해줍니다 머신 러닝에서 흔히 사용되는 경우죠 사용자가 무슨
05:03.280 --> 05:08.080
일이 벌어지는지 보고 결과에 문제가 있는지 기록하길 원하죠
05:08.320 --> 05:12.850
하지만 그 기능성은 특별히 우리가 원하는 게 아니죠 그걸 제거하는
05:12.850 --> 05:16.780
방법은 허용 플래깅 =never를 넘기는 거예요
05:16.780 --> 05:22.870
이제 실행해 볼게요. 두 줄로 놓은 것이 조금 억울하네요. 한 줄로 놓을 수도
05:22.870 --> 05:27.970
있었는데 말이죠. 얼마나 간단한지 보여드리기 위해서요.
05:28.000 --> 05:29.320
한 줄이에요
05:29.320 --> 05:33.850
그리고 여기, 사용자 인터페이스가 있어요
05:34.780 --> 05:38.110
이것과 관련해 제가 한 게 몇 가지 있어요
05:38.140 --> 05:43.150
첫 번째는 이 두 케이스 중 어느 것이든 여기 위에 링크가 있어요
05:43.150 --> 05:49.000
이 링크를 클릭하면 완전히 분리된 창에서 인터페이스가 나타납니다
05:49.000 --> 05:51.220
마법 같죠
05:51.250 --> 05:51.820
가요
05:51.850 --> 05:52.750
안녕하세요
05:55.030 --> 05:56.830
잘 어울려요
05:56.860 --> 06:04.180
그건 그레이디오를 실행할 때 백그라운드에서 실행되는 작은 웹 서버를 실행하기 때문이죠
06:04.270 --> 06:08.350
처음 발견하는 포트에서 무료로 로컬에서 실행하는 거죠
06:08.350 --> 06:13.750
몇 번이 지나고 나서... 7860번이었던 것 같아요 거기서부터 시작이죠
06:13.750 --> 06:15.190
아마 마지막이 그거였을 거예요
06:15.490 --> 06:15.880
06:16.000 --> 06:17.380
7860이었나요?
06:17.530 --> 06:20.830
저 웹 서버를 실행할 거예요
06:20.830 --> 06:25.600
그래서 같은 주피터 노트북에서 출력물에 보여줄 수도 있고 별도의
06:25.600 --> 06:29.290
화면에서 불러올 수도 있어요 그 자체로 훌륭하죠
06:29.290 --> 06:35.500
하지만 그보다 더 중요한 건 여기서 보여드린 다른 건 호출에서 공유 = true를 넘기는
06:35.500 --> 06:36.250
거예요
06:36.250 --> 06:43.960
그렇게 하면 Gadio는 공용 URL 내의 동일한 인터페이스를 제공해 다른 사람들과 공유할
06:43.960 --> 06:50.010
수 있죠 여러분이 함께 일하는 다른 사람, 동료가 같은 모델을 사용해 여러분의
06:50.010 --> 06:53.430
프로토타입을 작업할 수 있도록요
06:53.430 --> 06:58.740
이 부분은 좀 기가 막힌 비트예요
06:58.740 --> 07:05.220
누가 사용자 인터페이스를 언급하면∙∙∙ 지금 할 건데 시간이 좀 걸려요
07:05.220 --> 07:06.690
비트가 더 있어요
07:06.690 --> 07:07.380
나오네요
07:07.380 --> 07:08.490
이게 사용자 인터페이스예요
07:08.490 --> 07:10.560
당연히 이것과 똑같죠
07:10.560 --> 07:11.640
내가 뛰어갈게 여보세요
07:11.640 --> 07:13.380
효과가 있을 거예요
07:14.940 --> 07:16.530
무슨 일이 일어나고 있는지요
07:16.530 --> 07:19.200
이건 물론 그래디오가 제공하죠
07:19.200 --> 07:25.080
보내기를 호출할 때 보내기를 누르고 함수를 호출할 때 그 함수가 hello를
07:25.080 --> 07:29.580
실행합니다 여기 주피터 환경의 제 로컬 상자에서요
07:29.670 --> 07:32.250
비트가 좀 심하죠
07:32.280 --> 07:34.920
박스에서 실행되는 것처럼 여전히 코드를 실행하고 있어요
07:34.920 --> 07:37.560
공개적으로 사용 가능한 URL 뿐이죠
07:37.590 --> 07:39.000
마법 같아요
07:39.000 --> 07:40.620
무슨 뜻인지 설명해 드리죠
07:40.620 --> 07:44.340
여기로 돌아가서 프린트하는 거죠
07:46.680 --> 07:52.020
입력된 외침이 울렸어요
07:54.840 --> 07:58.650
이제 상황을 분명히 설명하고 있어요
07:59.130 --> 08:01.980
실행하면 Shout이 입력과 함께 호출되었다고 나오죠
08:02.010 --> 08:02.550
안녕하세요
08:02.580 --> 08:06.480
이제 여기로 돌아와서 다시 실행해보죠
08:07.770 --> 08:12.240
이제 다시 공용 URL로 실행되고 있어요
08:13.020 --> 08:14.430
나오네요
08:16.020 --> 08:17.190
타이핑 할게요
08:17.190 --> 08:20.070
정말 멋져요
08:20.400 --> 08:22.110
제출을 누르세요
08:22.230 --> 08:24.570
물론 이것도 아주 근사하죠
08:24.600 --> 08:27.570
이 쇼는 그래디오가 진행해요
08:27.570 --> 08:34.050
하지만 다시 한 번 주목할 만한 건 여기로 돌아와서 제 출력을 보면 입력과 함께 호출된
08:34.050 --> 08:36.030
호출이 보이시죠
08:36.030 --> 08:37.320
정말 멋져요
08:37.320 --> 08:41.340
실행되는 함수는 제 상자에서 실행되죠
08:41.340 --> 08:47.850
사용자 인터페이스는 공용 그래디오 웹사이트를 통해 제공되지만 코드는 제 로컬 박스에서 실행되고
08:47.850 --> 08:50.010
있어요, 정말 대단하죠
08:50.010 --> 08:54.690
그 말은 즉 로컬 박스에서 실행되는 모델을 작성할 수 있고 인터페이스를 빌드할
08:54.690 --> 08:59.370
수 있고 로컬에서 불러올 수도 있고 다른 사람과 공유할 수도 있다는 거죠
08:59.370 --> 09:04.740
사람들이 공유 사용자 인터페이스를 작업하면서 여러분의 컴퓨터에서 실행 중인 코드를 여전히
09:04.740 --> 09:06.870
호출하고 있어요 아주 유용하죠
09:06.870 --> 09:12.210
상상이 되시겠지만 사람들과 협력하고 모델을 공유하고 동료들이
09:12.210 --> 09:15.600
함께 일하게 하는 건 정말 쉬운 일이에요
09:16.710 --> 09:20.370
좋아요, 몇 가지 더 보여드리죠
09:20.370 --> 09:25.590
이제 인터페이스를 불러올게요 입력과 출력을 지정해주는 거죠
09:25.800 --> 09:30.810
여길 보시면 제가 하는 게∙∙∙ 입력이∙∙∙ 목록이죠
09:30.810 --> 09:32.640
한 가지만 들어 있어요
09:32.640 --> 09:34.080
텍스트 박스죠
09:34.080 --> 09:37.740
메시지도 라벨에 6줄이나 돼요
09:37.740 --> 09:41.190
반응이 출력력이고요 줄이 8개예요
09:41.340 --> 09:43.650
함수 샤우트라고 부르네요
09:43.710 --> 09:49.620
그걸 보죠 여기로 불러오죠
09:50.340 --> 09:53.610
비트가 예상대로 나와서 좀 더 커지죠
09:53.610 --> 09:54.480
됐어요
09:54.510 --> 09:55.710
메시지가 있어요
09:55.710 --> 09:56.760
반응이 있어요
09:56.760 --> 10:02.010
다시 Hello를 하고 제출을 누를 수 있어요
10:02.010 --> 10:05.610
여기 대문자 버전이 있어요
10:05.700 --> 10:08.220
아주 쉽고, 멋지고, 구성할 수 있죠
10:08.250 --> 10:10.860
좋은 UI 같네요
10:11.790 --> 10:15.930
이제 뭘 제안할지 짐작이 가실 거예요
10:16.380 --> 10:17.820
멋지지 않아요?
10:17.850 --> 10:23.460
샤우트라는 단어를 다른 함수로 대체할 수 있다면 멋지지 않을까요?
10:23.490 --> 10:24.720
함수 같은 거요?
10:24.720 --> 10:28.170
왜 아까 만든 메시지 GPT 함수가 아닌 거죠?
10:28.170 --> 10:33.780
그냥 Shout이라는 단어를 그 함수로 대체할 수 있어요 그럼 LLM 위에 빌드된
10:33.780 --> 10:35.820
사용자 인터페이스가 생기죠
10:35.820 --> 10:37.380
정말 멋지지 않아요?
10:37.410 --> 10:38.640
멋지지 않아요?
10:38.970 --> 10:40.020
10:40.050 --> 10:41.640
한번 보죠
10:41.640 --> 10:42.770
한번 보죠
10:42.800 --> 10:43.700
시작할게요
10:43.700 --> 10:44.420
저도요
10:44.420 --> 10:45.500
코드가 같아요
10:45.500 --> 10:46.910
함수를 대체했어요
10:46.910 --> 10:47.840
소리 지르는 게 아니에요
10:47.840 --> 10:49.400
GPT에 메시지를 보내죠
10:49.610 --> 10:51.290
어떻게 되나 보죠
10:51.290 --> 10:53.480
다른 창으로 보죠
10:53.510 --> 10:54.740
여기 있네요
10:54.920 --> 11:02.240
농담 하나 해 주시면 제출할게요
11:02.240 --> 11:04.430
결과를 기다려 보죠
11:05.180 --> 11:07.310
허수아비가 왜 상을 받았냐고요?
11:07.310 --> 11:10.550
자기 분야에서 뛰어난 사람이었으니까요
11:10.580 --> 11:12.080
재미있는 농담이네요
11:13.730 --> 11:15.350
11:15.470 --> 11:19.100
GPT-4o mini의 멋진 농담이네요
11:19.100 --> 11:25.730
사용자 인터페이스를 크게 만드는 게 얼마나 쉬운지 보여주는 좋은 예죠
11:25.730 --> 11:29.510
LLM을 이용해 뒤에서 실행되는 거요
11:29.570 --> 11:34.760
당신도 나만큼 이 경험을 즐기길 바라요
11:34.850 --> 11:37.370
그래디오는 대단해요
11:37.400 --> 11:41.330
다음 시간에는 그래디오를 더 유용하게 사용할 겁니다

469
week5/community-contributions/subtitles/srts/59166465/en_US.srt

@ -0,0 +1,469 @@
WEBVTT
00:00.620 --> 00:05.360
Welcome back to the JupyterLab on Gradio day, so you'll remember where we left off.
00:05.360 --> 00:14.990
We'd written two user interfaces, one of them for chatting with GPT-4 using the function, uh, stream
00:14.990 --> 00:19.460
GPT, and one of them with Claude, with stream Claude.
00:19.580 --> 00:21.980
Uh, and so now I put it to you.
00:22.010 --> 00:25.310
Supposing we wrote a function like this.
00:25.310 --> 00:28.160
This function is like a composite function.
00:28.160 --> 00:32.330
It's a function that calls others in that it's called stream model.
00:32.330 --> 00:40.040
And it takes a prompt and it takes a model and it says if the model is GPT, then it calls stream GPT.
00:40.370 --> 00:42.530
If the model is Claude, it calls stream.
00:42.530 --> 00:44.420
Claude otherwise throws an error.
00:44.420 --> 00:46.130
So it needs to be GPT or Claude.
00:46.130 --> 00:51.140
And then it basically iterates through and yields each chunk in turn.
00:51.140 --> 00:53.420
So this is in fact I called it a function, but it's not.
00:53.420 --> 00:54.440
It's a generator.
00:54.590 --> 01:01.970
Um, and it yields each chunk from one or the other models depending on which model is called.
01:02.000 --> 01:03.500
Well, obviously that's going to work fine.
01:03.530 --> 01:07.730
That's now a function which has a few more variables.
01:07.730 --> 01:12.140
So as far as Gradio is concerned, that's just another function.
01:12.140 --> 01:16.490
And that means that we can build a user interface very easily around that function.
01:16.490 --> 01:17.660
Let's look at it.
01:17.690 --> 01:18.800
Here it is.
01:18.800 --> 01:20.450
Here's an interface.
01:20.480 --> 01:26.840
The function it's taking is just this sort of hybrid generator that we just wrote. The inputs.
01:26.840 --> 01:29.000
Of course we're now going to have two inputs.
01:29.000 --> 01:30.890
One of them is going to be your message.
01:30.890 --> 01:34.010
And the other of them I wish it were this easy.
01:34.040 --> 01:39.230
A dropdown with two values, GPT or Claude, with the label "Select model".
01:39.230 --> 01:41.990
And then, you know, have that as the output.
01:42.470 --> 01:44.630
Things are rarely that easy though.
01:44.660 --> 01:47.600
Oh, but this is gradio, so things really are that easy.
01:47.690 --> 01:50.240
Uh, sorry, I have to run this first.
01:50.480 --> 01:51.320
There we go.
01:51.320 --> 01:52.100
It's not that easy.
01:52.130 --> 01:54.110
You still do have to execute all of your code.
01:54.290 --> 01:56.390
Uh, so here we go.
01:56.390 --> 02:05.750
We bring it up, we say something like, how do I get from Times Square to Grand Central?
02:06.210 --> 02:08.160
and we pick one of our models.
02:08.160 --> 02:12.600
Let's pick GPT and we submit that and they're streaming back in markdown.
02:12.600 --> 02:17.310
This is GPT's response to that question about directions.
02:17.610 --> 02:19.230
Enjoy your visit again at the end.
02:19.260 --> 02:20.190
Very nice.
02:20.310 --> 02:23.760
Uh, I feel like it's giving more options this time, but there we go.
02:23.790 --> 02:24.480
Maybe not.
02:24.600 --> 02:25.710
You'll probably remember.
02:25.980 --> 02:29.640
Uh, I can then just flip to Claude and ask Claude the same question.
02:29.640 --> 02:32.490
And here is Claude's answer to the same question.
02:32.640 --> 02:34.260
Uh, using Claude haiku.
02:34.290 --> 02:37.950
That might explain why we're getting slightly shorter, more terse answers.
02:37.950 --> 02:40.230
But, uh, isn't that amazing?
02:40.230 --> 02:40.890
Isn't that cool?
02:40.890 --> 02:42.300
We just built this functionality.
02:42.330 --> 02:46.470
We can flip between two different models, ask the same question, get the responses.
02:46.560 --> 02:51.810
Uh, you could just have this running sometime if you wanted a nice chat UI of your own and be able
02:51.810 --> 02:53.610
to bounce it around different models.
02:53.670 --> 02:55.710
Uh, it's a useful little tool.
02:56.820 --> 03:04.050
Um, and, uh, yeah, you can imagine an obvious exercise that I'll leave for you is to simply add
03:04.050 --> 03:05.160
Gemini to the mix.
03:05.160 --> 03:05.670
Why not?
03:05.700 --> 03:06.450
You can imagine.
03:06.450 --> 03:07.230
It's super easy.
03:07.230 --> 03:08.070
You just add in.
03:08.070 --> 03:09.470
Gemini is another option.
03:09.470 --> 03:15.680
I haven't shown you how to stream back from Gemini, but it's very similar and you can quickly google
03:15.680 --> 03:20.900
it to see the documentation is very clear and then add it into the mix, and then push that code so
03:20.900 --> 03:22.820
I can have it and share it with other students.
03:22.820 --> 03:23.810
That would be good.
03:24.770 --> 03:25.700
All right.
03:25.700 --> 03:30.230
So the next and last piece for this lab is going to be: okay.
03:30.230 --> 03:35.810
Let's take the company brochure generator we made last time and put a user interface around that.
03:35.840 --> 03:37.070
Wouldn't that be awesome.
03:37.250 --> 03:41.750
Uh, so now that you know, as I say, it's going to be really, really simple.
03:41.750 --> 03:44.660
So I've decided I'm going with the earlier version of the brochure.
03:44.660 --> 03:48.200
We're just going to use the landing page only.
03:48.200 --> 03:52.850
We're not going to do the, the two step process where we collect all the links, because that's maybe
03:52.850 --> 03:54.350
more involved than we need right now.
03:54.440 --> 04:00.050
Um, we're just going to have a simpler version of the website class that has URL, title and text,
04:00.050 --> 04:02.210
and you'll remember how it works.
04:02.210 --> 04:07.520
We use the requests package, and we use the wonderful Beautifulsoup to parse to strip out things we
04:07.520 --> 04:10.160
don't care about and to get the text.
04:10.160 --> 04:18.070
And there is a little get_contents helper to give us the page title
04:18.070 --> 04:19.840
and body of the page.
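The simplified helper class might be sketched like this (assuming `requests` and BeautifulSoup, as in the earlier lab):

```python
import requests
from bs4 import BeautifulSoup

class Website:
    """Scrape one landing page: URL, title, and visible text."""

    def __init__(self, url):
        self.url = url
        response = requests.get(url)
        soup = BeautifulSoup(response.content, "html.parser")
        self.title = soup.title.string if soup.title else "No title found"
        if soup.body:
            # Strip elements we don't care about before extracting text.
            for irrelevant in soup.body(["script", "style", "img", "input"]):
                irrelevant.decompose()
            self.text = soup.body.get_text(separator="\n", strip=True)
        else:
            self.text = ""

    def get_contents(self):
        return f"Webpage Title:\n{self.title}\nWebpage Contents:\n{self.text}\n\n"
```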
04:19.840 --> 04:21.160
So that's our helper class.
04:21.190 --> 04:22.180
Remember to run it.
04:22.210 --> 04:23.170
System prompt.
04:23.170 --> 04:27.310
You're in a system that analyzes the contents of a company website landing page and creates a short
04:27.310 --> 04:28.750
brochure respond.
04:28.750 --> 04:31.180
In markdown there is a system prompt.
04:31.180 --> 04:41.350
So here is a stream brochure function that takes a company name, a URL and a model.
04:42.160 --> 04:46.510
Uh, and it's going to say please generate a company brochure for company name.
04:46.510 --> 04:48.250
Here is their landing page.
04:48.250 --> 04:54.370
And then we'll use our website helper class here to read in that URL and get the contents.
04:54.370 --> 04:56.140
So this is all making sense.
04:56.140 --> 04:59.230
We're just going to to get the contents of the website.
04:59.230 --> 05:00.790
We're going to turn that into a prompt.
05:00.790 --> 05:04.330
And then if it's GPT we're going to stream from GPT.
05:04.360 --> 05:06.640
If it's Claude, we're going to stream from Claude.
05:06.850 --> 05:14.280
Um, otherwise we'll raise an error and we will then make this a generator and yield the results chunk
05:14.280 --> 05:15.510
by chunk.
05:16.830 --> 05:22.920
Uh, I realize it's a bit misleading to call this chunk because it's in fact not actually chunk
05:22.920 --> 05:23.340
by chunk.
05:23.370 --> 05:25.560
It's the full amount so far.
05:25.590 --> 05:31.260
So you might want to rename that something that's, uh, a better reflection of what this this is.
05:32.160 --> 05:33.810
But you get the idea.
05:33.840 --> 05:36.030
It should do the trick.
05:36.420 --> 05:37.770
Uh, so wouldn't it be nice?
05:37.770 --> 05:39.420
I'm going to stop saying that because it's going to get old.
05:39.450 --> 05:40.470
But it is nice.
05:40.470 --> 05:45.090
It is as simple as now just replacing the function with stream brochure.
05:45.090 --> 05:46.560
And you can see here the inputs.
05:46.560 --> 05:48.570
We of course have these three inputs.
05:48.600 --> 05:49.800
Now we have a company name.
05:49.800 --> 05:51.420
We have a landing page URL.
05:51.420 --> 05:53.490
And then we can pick the model.
05:53.610 --> 05:56.520
And let's give that a whirl.
05:56.970 --> 05:58.170
Uh here we go.
05:58.170 --> 05:59.190
Running locally.
05:59.190 --> 06:03.960
Bring it up, so we can enter the company name: Hugging Face.
06:06.240 --> 06:09.510
Landing page URL we can say.
06:09.750 --> 06:10.830
And we'll just do a.
06:13.740 --> 06:19.040
huggingface.co, and select model.
06:19.040 --> 06:26.240
We will ask GPT to do it first, and then just press submit.
06:26.420 --> 06:27.920
And here it goes.
06:27.920 --> 06:34.280
Here is our company brochure for Huggingface streaming back in markdown based on our web scrape.
06:34.280 --> 06:35.540
It's all there.
06:35.540 --> 06:39.680
It's even got links down at the bottom for different things.
06:39.770 --> 06:43.220
Uh, and yeah, that link looks like that is correct.
06:43.220 --> 06:44.270
That's going to work.
06:44.270 --> 06:49.700
Some of these links look like they're not going to work because of, uh, how it's been generated.
06:49.700 --> 06:53.900
But still, that's a pretty impressive web page, I've got to say, an impressive brochure.
06:53.900 --> 06:55.610
I mean, I love it.
06:55.640 --> 06:58.520
Let's see what Claude does with this Claude haiku.
06:58.550 --> 06:59.060
Of course.
06:59.060 --> 07:05.120
So it's, uh, a slimmer model, but it's perfectly acceptable.
07:05.120 --> 07:07.190
Let's build the future of AI together.
07:07.280 --> 07:11.150
Uh, very nice brochure there from haiku.
07:11.570 --> 07:13.880
Uh, and there we go.
07:13.910 --> 07:14.840
I'm.
07:14.840 --> 07:21.400
I'm blown away every time I use gradio by how simple it is, how effective it is.
07:21.400 --> 07:27.250
We've just built a user interface around our brochure where you can pick between different models and
07:27.250 --> 07:29.650
let's face it, it was easy.
07:29.860 --> 07:33.880
So the to-dos for you, the ways you can make this better: there are so many.
07:33.940 --> 07:38.260
You could, as I say, add Gemini not only to the earlier example, but to this one as well.
07:38.410 --> 07:47.050
Another idea is you could add in another selection, another dropdown, where you can pick the style,
07:47.050 --> 07:52.240
the tone. You remember last time how we could easily change the system prompt so that the brochure
07:52.240 --> 07:55.540
was in a humorous, jokey, jovial tone.
07:55.840 --> 08:00.940
Well, why don't you set it so you can pick from that drop down, choose a different tone, and then
08:00.940 --> 08:04.390
it will generate a company brochure using that tone.
08:04.510 --> 08:07.330
Uh, it's actually super easy to do that.
08:07.450 --> 08:08.410
So give it a try.
08:08.440 --> 08:08.980
Do that.
08:08.980 --> 08:13.990
And you'll have really beefed up this application to be something that is increasingly high in
08:14.020 --> 08:14.920
functionality.
08:14.950 --> 08:16.930
So I hope you have fun doing that.
08:16.930 --> 08:17.860
Check in the code afterwards.
08:17.860 --> 08:18.880
So I get to see it.
08:18.880 --> 08:22.180
And I will see you in the next lecture for the wrap up.

421
week5/community-contributions/subtitles/srts/59166465/ja_JP.srt

@@ -0,0 +1,421 @@
WEBVTT
00:00.620 --> 00:05.360
グラジオのJupyterLabにようこそ。
00:05.360 --> 00:19.460
ひとつはGPTとチャットするためのもので、 GPTのストリーム機能を使ったもの。
00:19.580 --> 00:21.980
ええと、 それで今、 君に聞いてみたんだ。
00:22.010 --> 00:25.310
仮にこのような関数を書いたとしよう。
00:25.310 --> 00:28.160
この関数は複合関数のようなものだ。
00:28.160 --> 00:32.330
ストリームモデルと呼ばれる、 他を呼び出す関数だ。
00:32.330 --> 00:40.040
そして、 プロンプトを受け取り、 モデルを受け取り、 モデルがGPTであれば、 ストリームGPTを呼び出す。
00:40.370 --> 00:42.530
モデルがクロードの場合、 ストリームを呼び出す。
00:42.530 --> 00:44.420
そうでなければクロードはエラーを投げる。
00:44.420 --> 00:46.130
だからGPTかクロードである必要がある。
00:46.130 --> 00:51.140
そして、 基本的に反復して各チャンクを順番に降ろす。
00:51.140 --> 00:53.420
だから、 私はこれを関数と呼んだが、 実はそうではないのだ。
00:53.420 --> 00:54.440
発電機だ。
00:54.590 --> 01:01.970
そして、 どちらのモデルが呼び出されたかに応じて、 どちらか一方のモデルからチャンクを生成する。
01:02.000 --> 01:03.500
まあ、 明らかにうまくいくだろうね。
01:03.530 --> 01:07.730
これで、 さらにいくつかの変数を持つ関数になった。
01:07.730 --> 01:12.140
だから、 グラディオに関する限り、 それは単なる機能の一つに過ぎない。
01:12.140 --> 01:16.490
つまり、 その機能を中心にユーザー・インターフェースを簡単に構築できるということだ。
01:16.490 --> 01:17.660
見てみよう。
01:17.690 --> 01:18.800
これだ。
01:18.800 --> 01:20.450
これがインターフェイスだ。
01:20.480 --> 01:26.840
この関数が受け取るのは、 今入力を書いたハイブリッドジェネレーターのようなものだ。
01:26.840 --> 01:29.000
もちろん、 これで2つのインプットを持つことになる。
01:29.000 --> 01:30.890
そのうちのひとつがあなたのメッセージになる。
01:30.890 --> 01:34.010
そしてもうひとつは、 こんなに簡単だったらいいのにと思う。
01:34.040 --> 01:39.230
GPTまたはClaudeラベルの2つの値のドロップダウンでモデルを選択します。
01:39.230 --> 01:41.990
そして、 それを出力するんだ。
01:42.470 --> 01:44.630
しかし、 物事がそんなに簡単であることはめったにない。
01:44.660 --> 01:47.600
ああ、 でもここはグラディオだから、 物事は本当に簡単なんだ。
01:47.690 --> 01:50.240
申し訳ないが、 まずこれを実行しなければならない。
01:50.480 --> 01:51.320
これでよし。
01:51.320 --> 01:52.100
そんなに簡単なことじゃない。
01:52.130 --> 01:54.110
それでも、 すべてのコードを実行しなければならない。
01:54.290 --> 01:56.390
ええと、 それではどうぞ。
01:56.390 --> 02:05.750
タイムズ・スクエアからグランド・セントラルまでどうやって行けばいいんだ?
02:06.210 --> 02:08.160
そして、 私たちのモデルの一つを選ぶ。
02:08.160 --> 02:12.600
GPTを選び、 それを送信すると、 マークダウンでストリーミングバックされる。
02:12.600 --> 02:17.310
その質問に対するGPTの回答が方向性だ。
02:17.610 --> 02:19.230
最後にもう一度、 訪問を楽しもう。
02:19.260 --> 02:20.190
とても素晴らしい。
02:20.310 --> 02:23.760
あー、 今回は選択肢が増えたような気がするけど、 まあいいや。
02:23.790 --> 02:24.480
そうではないかもしれない。
02:24.600 --> 02:25.710
おそらく覚えているだろう。
02:25.980 --> 02:29.640
クロードに同じ質問をすればいい。
02:29.640 --> 02:32.490
同じ質問に対するクロードの答えはこうだ。
02:32.640 --> 02:34.260
クロードの俳句を使ってね。
02:34.290 --> 02:37.950
そのためか、 回答はやや短く、 簡潔なものになっている。
02:37.950 --> 02:40.230
でも、 それってすごいことじゃない?
02:40.230 --> 02:40.890
クールだろ?
02:40.890 --> 02:42.300
我々はこの機能を構築したばかりだ。
02:42.330 --> 02:46.470
私たちは2つの異なるモデルの間を行き来し、 同じ質問をして回答を得ることができる。
02:46.560 --> 02:53.610
もしチャットUIを作りたいなら、 このチャットUIをいつか実行させればいい。
02:53.670 --> 02:55.710
便利な道具だよ。
02:56.820 --> 03:05.160
うーん、 それで、 そうだな、 双子座を単純にミックスに加えるという明らかな練習を想像できるだろう。
03:05.160 --> 03:05.670
なぜだ?
03:05.700 --> 03:06.450
想像がつくだろう。
03:06.450 --> 03:07.230
超簡単だよ。
03:07.230 --> 03:08.070
ただ加えるだけだ。
03:08.070 --> 03:09.470
双子座という選択肢もある。
03:09.470 --> 03:15.680
Geminiからのストリーミングバックのやり方はまだお見せしていませんが、 とてもよく似ていますし、
03:15.680 --> 03:22.820
ググればドキュメントがとてもわかりやすいのですぐにわかります。
03:22.820 --> 03:23.810
それはいいことだ。
03:24.770 --> 03:25.700
分かった。
03:25.700 --> 03:30.230
だから、 このラボの次のラストは大丈夫だ。
03:30.230 --> 03:35.810
前回作った会社案内ジェネレーターを使って、 ユーザー・インターフェースを作ってみよう。
03:35.840 --> 03:37.070
すごいことだと思わない?
03:37.250 --> 03:41.750
ええと、 だから、 今言ったように、 本当に、 本当に簡単なことなんだ。
03:41.750 --> 03:44.660
だから、 私は以前のバージョンのパンフレットを使うことに決めたんだ。
03:44.660 --> 03:48.200
ランディング・ページだけを使います。
03:48.200 --> 03:54.350
すべてのリンクを集めるという2段階のプロセスを行うつもりはない。
03:54.440 --> 04:02.210
ええと、 URL、 タイトル、 テキストを持つウェブサイト・クラスのシンプルなバージョンを用意するだけです。
04:02.210 --> 04:10.160
リクエスト・パッケージを使い、 素晴らしいビューティフル・スープを使って解析し、 どうでもいいものを取り除いてテキストを取得する。
04:10.160 --> 04:19.840
そして、 小さなgetcontextヘルパーがあり、 getcontentsヘルパーのようなもので、 ページのタイトルと本文を与えてくれる。
04:19.840 --> 04:21.160
これがヘルパークラスだ。
04:21.190 --> 04:22.180
忘れずに実行すること。
04:22.210 --> 04:23.170
システムプロンプト。
04:23.170 --> 04:28.750
あなたは、 企業のウェブサイトのランディングページの内容を分析し、 短いパンフレットを作成するシステムに入っている。
04:28.750 --> 04:31.180
マークダウンにはシステム・プロンプトがある。
04:31.180 --> 04:41.350
ここでは、 会社名、 URL、 モデルを受け取るパンフレットのストリーム機能を紹介します。
04:42.160 --> 04:46.510
会社名のパンフレットを作成してください。
04:46.510 --> 04:48.250
これが彼らのランディングページだ。
04:48.250 --> 04:54.370
そして、 このウェブサイト・ヘルパークラスを使ってURLを読み込み、 内容を取得する。
04:54.370 --> 04:56.140
だから、 これはすべて理にかなっている。
04:56.140 --> 04:59.230
ウェブサイトのコンテンツを取得するだけだ。
04:59.230 --> 05:00.790
それをプロンプトに変えるんだ。
05:00.790 --> 05:04.330
そしてGPTならGPTからストリーミングする。
05:04.360 --> 05:06.640
クロードなら、 クロードからストリーミングするつもりだ。
05:06.850 --> 05:15.510
ええと、 そうでなければエラーを発生させ、 これをジェネレーターにして、 チャンクごとに結果を出します。
05:16.830 --> 05:23.340
このチャンクをチャンクと呼ぶのは少し誤解を招くかもしれない。
05:23.370 --> 05:25.560
全額だ。
05:25.590 --> 05:31.260
だから、 この名前を変更した方がいいかもしれない。
05:32.160 --> 05:33.810
でも、 おわかりだろう。
05:33.840 --> 05:36.030
それでうまくいくはずだ。
05:36.420 --> 05:37.770
ええと、 それならいいんじゃない?
05:37.770 --> 05:39.420
もう古くなるから言うのはやめるよ。
05:39.450 --> 05:40.470
でも、 いいものだよ。
05:40.470 --> 05:45.090
これはもう、 関数をストリームパンフレットに置き換えるだけの簡単なことだ。
05:45.090 --> 05:46.560
そして、 ここにインプットを見ることができる。
05:46.560 --> 05:48.570
もちろん、 この3つのインプットはある。
05:48.600 --> 05:49.800
これで社名が決まった。
05:49.800 --> 05:51.420
ランディングページのURLがあります。
05:51.420 --> 05:53.490
そしてモデルを選ぶことができる。
05:53.610 --> 05:56.520
そして、 それを試してみよう。
05:56.970 --> 05:58.170
さあ、 行くぞ。
05:58.170 --> 05:59.190
ローカルで走っている。
05:59.190 --> 06:03.960
立ち上げて、 会社名に Hugging Face と入力しよう。
06:06.240 --> 06:09.510
ランディングページのURL
06:09.750 --> 06:10.830
そして、 ただやるだけだ。
06:13.740 --> 06:19.040
huggingface.co と入力し、 モデルを選択する。
06:19.040 --> 06:26.240
まずGPTに追加を依頼し、 送信を押すだけだ。
06:26.420 --> 06:27.920
そして、 こうなる。
06:27.920 --> 06:34.280
Huggingfaceの会社案内を、 ウェブスクレイプに基づきマークダウンしてお届けします。
06:34.280 --> 06:35.540
すべてそこにある。
06:35.540 --> 06:39.680
下の方にいろいろなリンクがある。
06:39.770 --> 06:43.220
ああ、 そのリンクは正しいようだ。
06:43.220 --> 06:44.270
うまくいきそうだ。
06:44.270 --> 06:49.700
これらのリンクのいくつかは、 その、 生成された方法のために機能しないように見える。
06:49.700 --> 06:53.900
それにしても、 かなり印象的なウェブページ、 印象的なパンフレットと言わざるを得ない。
06:53.900 --> 06:55.610
つまり、 大好きなんだ。
06:55.640 --> 06:58.520
クロードがこのクロード俳句で何をするか見てみよう。
06:58.550 --> 06:59.060
もちろんだ。
06:59.060 --> 07:05.120
だから、 スリムなモデルだが、 まったく問題ない。
07:05.120 --> 07:07.190
AIの未来を一緒に築いていきましょう。
07:07.280 --> 07:11.150
ええと、 俳句のパンフレットはとても良かったよ。
07:11.570 --> 07:13.880
ああ、 そうだ。
07:13.910 --> 07:14.840
私は。
07:14.840 --> 07:21.400
gradioを使うたびに、 そのシンプルさと効果に驚かされる。
07:21.400 --> 07:29.650
私たちはパンフレットを中心に、 さまざまなモデルを選べるユーザー・インターフェイスを構築しました。
07:29.860 --> 07:33.880
だから、 あなたにとってやるべきこと、 これをより良くする方法はたくさんある。
07:33.940 --> 07:38.260
先ほどの例だけでなく、 この例にも双子座を加えることができる。
07:38.410 --> 07:55.540
もう一つのアイデアは、 別の選択項目を追加して、 前回覚えているスタイルや口調を選択できるようにすることだ。
07:55.840 --> 08:04.390
それなら、 ドロップダウンから別のトーンを選んで、 そのトーンで会社案内を作成できるように設定したらどうだろう。
08:04.510 --> 08:07.330
それはとても簡単なことなんだ。
08:07.450 --> 08:08.410
だから試してみてほしい。
08:08.440 --> 08:08.980
そうしてくれ。
08:08.980 --> 08:14.920
そして、 このアプリケーションをますます機能性の高いものに強化していくのだ。
08:14.950 --> 08:16.930
だから、 それを楽しんでほしい。
08:16.930 --> 08:17.860
その後、 コードを確認する。
08:17.860 --> 08:18.880
だから私はそれを見ることができる。
08:18.880 --> 08:22.180
それではまた、 次回の講義でお会いしましょう。

466
week5/community-contributions/subtitles/srts/59166465/ko_KR.srt

@@ -0,0 +1,466 @@
WEBVTT
00:00.620 --> 00:05.360
그라디오의 날 유피터랩에 잘 오셨습니다 어디까지 했는지 기억하실 거예요
00:05.360 --> 00:14.990
두 개의 사용자 인터페이스를 작성했는데 하나는 GPT 4와 기능상 채팅용이었고 하나는
00:14.990 --> 00:19.460
함수용 클로드와 채팅용이었죠
00:19.580 --> 00:21.980
이제 여러분께 묻죠.
00:22.010 --> 00:25.310
이런 함수를 썼다고 가정해 보죠
00:25.310 --> 00:28.160
이 함수는 복합 함수예요
00:28.160 --> 00:32.330
스트림 모델이라고 하는 함수로 다른 이들을 호출하죠
00:32.330 --> 00:40.040
프롬프트와 모델을 취하고 모델이 GPT라면 스트리밍 GPT를 호출하죠
00:40.370 --> 00:42.530
클로드라면 개울이라고 하죠
00:42.530 --> 00:44.420
안 그러면 클로드가 실수를 하죠
00:44.420 --> 00:46.130
GPT나 클로드가 돼야 해요
00:46.130 --> 00:51.140
그러면 순환하면서 한 덩어리를 한 번에 수확해요
00:51.140 --> 00:53.420
함수라고 불렀지만 사실은 아니죠
00:53.420 --> 00:54.440
발전기예요
00:54.590 --> 01:01.970
어떤 모델로 부르느냐에 따라 다른 모델에서 덩어리가 나와요
01:02.000 --> 01:03.500
잘 작동할 거예요
01:03.530 --> 01:07.730
변수가 몇 가지 더 있는 함수가 됐죠
01:07.730 --> 01:12.140
그러니 그라디오에게 그건 또 다른 함수일 뿐이죠
01:12.140 --> 01:16.490
그 함수를 중심으로 사용자 인터페이스를 쉽게 만들 수 있다는 거죠
01:16.490 --> 01:17.660
한번 보죠
01:17.690 --> 01:18.800
여기 있네요
01:18.800 --> 01:20.450
인터페이스예요
01:20.480 --> 01:26.840
이 함수는 우리가 방금 입력한 하이브리드 생성기예요
01:26.840 --> 01:29.000
물론 입력값은 2개죠
01:29.000 --> 01:30.890
그중 하나는 메시지예요
01:30.890 --> 01:34.010
다른 사람들은 이렇게 쉬우면 좋겠어요
01:34.040 --> 01:39.230
두 개의 값을 불러오는 거죠 GPT나 Claude Label SELECT 모델로요
01:39.230 --> 01:41.990
그런 다음 그걸 출력으로 하는 거죠
01:42.470 --> 01:44.630
하지만 그렇게 쉬운 일은 드물죠
01:44.660 --> 01:47.600
여긴 그라디오라서 모든 게 정말 쉬워요
01:47.690 --> 01:50.240
죄송해요, 이것부터 확인할게요
01:50.480 --> 01:51.320
됐어요
01:51.320 --> 01:52.100
그렇게 간단하지 않아요
01:52.130 --> 01:54.110
여전히 모든 코드를 실행해야 해요
01:54.290 --> 01:56.390
자, 시작하죠
01:56.390 --> 02:05.750
예를 들어 타임스스퀘어에서 그랜드 센트럴까지 어떻게 가죠?
02:06.210 --> 02:08.160
모델 중 한 명을 선택해요
02:08.160 --> 02:12.600
GPT를 선택해 제출하면 마크다운에서 스트리밍되죠
02:12.600 --> 02:17.310
방향 문제에 대한 GPT의 반응이죠
02:17.610 --> 02:19.230
즐거운 시간 보내세요
02:19.260 --> 02:20.190
아주 좋아요
02:20.310 --> 02:23.760
이번에는 선택지가 더 많아진 것 같지만 어쩔 수 없죠
02:23.790 --> 02:24.480
아닐지도 모르죠
02:24.600 --> 02:25.710
기억날 거예요
02:25.980 --> 02:29.640
클로드를 보고 같은 질문을 하면 돼요
02:29.640 --> 02:32.490
똑같은 질문에 대한 클로드의 대답이에요
02:32.640 --> 02:34.260
클로드 하이쿠를 써서요
02:34.290 --> 02:37.950
그래서 대답이 짧고 간결해진 것 같아요
02:37.950 --> 02:40.230
정말 놀랍지 않아요?
02:40.230 --> 02:40.890
멋지죠?
02:40.890 --> 02:42.300
이 기능성만 빌드했죠
02:42.330 --> 02:46.470
두 모델 사이를 넘나들면서 같은 질문을 하면 답이 나오죠 Get it
02:46.560 --> 02:51.810
그냥 실행시킬 수도 있어요 멋진 채팅 UI를 원하시면요 다양한
02:51.810 --> 02:53.610
모델로 튕길 수 있죠
02:53.670 --> 02:55.710
유용한 도구죠
02:56.820 --> 03:04.050
그리고 여러분이 상상할 수 있는 명백한 훈련이 하나 더 있어요 여기에 제미니를 추가하는
03:04.050 --> 03:05.160
거죠
03:05.160 --> 03:05.670
왜요?
03:05.700 --> 03:06.450
상상이 되시죠?
03:06.450 --> 03:07.230
아주 쉬워요
03:07.230 --> 03:08.070
그냥 추가하는 거죠
03:08.070 --> 03:09.470
제미니를 선택해도 되고요
03:09.470 --> 03:15.680
제미니 강의에서 스트림하는 법을 보여드린 적은 없지만 아주 비슷해요 구글로 빠르게 검색하면
03:15.680 --> 03:20.900
아주 명확한 문서화가 있고 그걸 추가해서 그 코드를 푸시해 제가 갖고 다른 학생들과
03:20.900 --> 03:22.820
공유할 수 있죠
03:22.820 --> 03:23.810
그럼 좋죠
03:24.770 --> 03:25.700
좋아요
03:25.700 --> 03:30.230
이 실험실의 마지막은 괜찮을 거예요
03:30.230 --> 03:35.810
지난번에 만든 회사 브로슈어 생성기를 가져다 사용자 인터페이스를 적용해 보죠.
03:35.840 --> 03:37.070
그럼 정말 멋지겠죠?
03:37.250 --> 03:41.750
이제 아셨으니 말씀드렸듯이 아주 간단할 거예요
03:41.750 --> 03:44.660
그래서 전 그 책자의 초기 버전을 선택했어요
03:44.660 --> 03:48.200
랜딩 페이지만 사용할 거예요
03:48.200 --> 03:52.850
모든 링크를 모으는 2단계 프로세스는 하지 않겠습니다 지금 필요한 것보다
03:52.850 --> 03:54.350
더 복잡할 수 있으니까요
03:54.440 --> 04:00.050
URL과 제목, 텍스트를 가진 웹사이트 클래스의 간단한 버전을 보여드릴게요 어떻게
04:00.050 --> 04:02.210
작동하는지 기억하실 거예요
04:02.210 --> 04:07.520
요청 패키지를 사용하고 뷰티풀 get을 이용해 관심 없는
04:07.520 --> 04:10.160
걸 걸러내고 텍스트를 얻죠
04:10.160 --> 04:18.070
getcontext 도우미가 있어요 일종의 getcontent 도우미로 페이지 제목과
04:18.070 --> 04:19.840
본문을 제공하죠
04:19.840 --> 04:21.160
도우미 수업은 여기까지고요
04:21.190 --> 04:22.180
실행하는 거 잊지 마요
04:22.210 --> 04:23.170
시스템 프롬프트예요
04:23.170 --> 04:27.310
회사 웹사이트의 내용을 분석하고 랜딩 페이지에 짧은 답변을 작성하는
04:27.310 --> 04:28.750
시스템에 있죠
04:28.750 --> 04:31.180
마크다운에는 시스템 프롬프트가 있어요
04:31.180 --> 04:41.350
스트림 브로슈어 함수가 있어요 회사 이름, URL 그리고 모델을 취하죠
04:42.160 --> 04:46.510
회사명을 위한 회사 안내 책자를 생성해 달라고 하네요
04:46.510 --> 04:48.250
이게 랜딩 페이지예요
04:48.250 --> 04:54.370
그런 다음 웹사이트 도우미 클래스를 이용해 해당 URL을 읽고 내용을 get 하죠
04:54.370 --> 04:56.140
이제 이해가 되네요
04:56.140 --> 04:59.230
웹 사이트의 내용을 get 할 거예요
04:59.230 --> 05:00.790
그걸 프롬프트로 바꿀게요
05:00.790 --> 05:04.330
GPT라면 GPT에서 스트림할 거예요
05:04.360 --> 05:06.640
클로드면 클로드에서 물을 퍼내야죠
05:06.850 --> 05:14.280
안 그러면 에러가 발생해서 발전기를 만들게 되고 결과물이 한 덩어리씩 나올
05:14.280 --> 05:15.510
거예요
05:16.830 --> 05:23.340
비트라고 부르는 게 오해의 소지가 있는 것 같아요 사실 비트 한 덩어리씩이 아니거든요
05:23.370 --> 05:25.560
총액이에요
05:25.590 --> 05:31.260
이름을 바꾸는 게 좋겠어요 이 상황을 더 잘 반영하는 이름으로요
05:32.160 --> 05:33.810
하지만 무슨 말인지 아시겠죠
05:33.840 --> 05:36.030
이거면 될 거예요
05:36.420 --> 05:37.770
그럼 좋지 않을까요?
05:37.770 --> 05:39.420
Get it이라고 하면 질릴 테니 그만할게요
05:39.450 --> 05:40.470
하지만 좋아요
05:40.470 --> 05:45.090
함수를 스트림 브로슈어로 대체하는 것만큼 간단해요
05:45.090 --> 05:46.560
여기 입력값이 보이시죠
05:46.560 --> 05:48.570
물론 세 가지 입력값이 있죠
05:48.600 --> 05:49.800
회사 이름이 생겼네요
05:49.800 --> 05:51.420
랜딩 페이지 URL이 있어요
05:51.420 --> 05:53.490
그런 다음 모델을 고르죠
05:53.610 --> 05:56.520
한번 해 보죠
05:56.970 --> 05:58.170
여기 있네요
05:58.170 --> 05:59.190
현지에서 운영되죠
05:59.190 --> 06:03.960
회사명 포옹 얼굴이라고 할 수 있게 띄워 주세요
06:06.240 --> 06:09.510
랜딩 페이지 URL도 말할 수 있죠
06:09.750 --> 06:10.830
이렇게 하죠
06:13.740 --> 06:19.040
안아주기요 co와 select 모델을 선택하세요
06:19.040 --> 06:26.240
GPT에 먼저 추가하고 제출을 누르라고 요청할 거예요
06:26.420 --> 06:27.920
이제 시작이네요
06:27.920 --> 06:34.280
이건 회사 안내 책자예요 헐깅페이스 스트리밍을 마크다운으로 하는 거죠 웹 스크래프를 기반으로요
06:34.280 --> 06:35.540
다 있어요
06:35.540 --> 06:39.680
심지어 아래쪽에 다양한 용도의 링크가 있어요
06:39.770 --> 06:43.220
네, 링크가 맞는 것 같네요
06:43.220 --> 06:44.270
이거면 되겠어요
06:44.270 --> 06:49.700
몇몇 링크들은 작동하지 않을 것 같습니다 생성된 방식 때문에요
06:49.700 --> 06:53.900
그래도 웹페이지는 정말 인상적이에요 브로슈어도 인상적이고요
06:53.900 --> 06:55.610
정말 좋아요
06:55.640 --> 06:58.520
클로드가 클로드 하이쿠로 뭘 하는지 보죠
06:58.550 --> 06:59.060
물론이죠
06:59.060 --> 07:05.120
좀 더 날씬하지만 이 정도면 괜찮아요
07:05.120 --> 07:07.190
함께 인공지능의 미래를 만들어 나가요
07:07.280 --> 07:11.150
하이쿠 책자가 아주 좋네요
07:11.570 --> 07:13.880
이제 됐어요
07:13.910 --> 07:14.840
07:14.840 --> 07:21.400
그러디오를 쓸 때마다 정말 놀라워요 얼마나 단순하고 효과적인지 말이에요
07:21.400 --> 07:27.250
책자 주위에 사용자 인터페이스를 구축했어요 다른 모델 사이에서 고를 수
07:27.250 --> 07:29.650
있죠 인정합시다, 쉬웠어요
07:29.860 --> 07:33.880
이 상황을 개선할 방법은 아주 많아요
07:33.940 --> 07:38.260
앞서 언급한 제미니를 추가할 수도 있고 이 예제에도 추가할 수 있어요
07:38.410 --> 07:47.050
다른 아이디어는 다른 드롭다운을 추가하는 거예요 스타일이나 톤을 고를 수 있는 드롭다운요
07:47.050 --> 07:52.240
시스템 프롬프트도 쉽게 바꿀 수 있죠 브로슈어가 익살스럽고
07:52.240 --> 07:55.540
농담조로 유쾌하게요
07:55.840 --> 08:00.940
저 드롭다운에서 고를 수 있도록 설정해 볼까요? 다른 톤을 선택하면
08:00.940 --> 08:04.390
그 톤을 이용해 회사 브로슈어를 생성하죠
08:04.510 --> 08:07.330
사실 엄청 쉬워요
08:07.450 --> 08:08.410
그러니 한번 해보세요
08:08.440 --> 08:08.980
그렇게 해요
08:08.980 --> 08:13.990
이 응용 프로그램을 강화해 기능성이 점점 더 높아지게 될
08:14.020 --> 08:14.920
거예요
08:14.950 --> 08:16.930
즐거운 시간 보내시길 바라요
08:16.930 --> 08:17.860
나중에 코드를 확인하세요
08:17.860 --> 08:18.880
제가 볼 수 있게요
08:18.880 --> 08:22.180
그럼 다음 강의에서 마무리하도록 하죠

889
week5/community-contributions/subtitles/srts/59166481/en_US.srt

@ -0,0 +1,889 @@
WEBVTT
00:00.860 --> 00:05.330
And here, once more we find ourselves in our favorite place, the Jupyter Lab.
00:05.330 --> 00:07.310
Ready to go with
00:07.340 --> 00:09.620
week two's exercises.
00:09.620 --> 00:14.930
So we go into week two folder and we open up week two, day one.
00:15.230 --> 00:18.230
Uh, and here we go.
00:18.230 --> 00:26.990
So a reminder that in week one we used, uh, multiple frontier LLMs through the chat user interface, a
00:26.990 --> 00:32.990
way to use them through the web, uh, and then through the API: we connected to OpenAI's API.
00:33.020 --> 00:39.890
So today we're going to add to the mix the APIs for Anthropic and Google to join with our skills of
00:39.890 --> 00:41.090
using OpenAI.
00:41.960 --> 00:47.630
Uh, so as one more reminder, you're going to kill me for going on about this.
00:47.630 --> 00:50.300
This is where you set up your keys.
00:50.300 --> 00:55.850
Uh, you can set up keys for OpenAI, which presumably you already did last week, uh, for Anthropic,
00:55.850 --> 00:58.460
and for Gemini for Google.
00:58.490 --> 01:04.370
Uh, but, uh, bearing in mind that there is more of an adventure to be had in setting up your Google
01:04.400 --> 01:05.330
Keys.
01:05.390 --> 01:09.410
Once you've set them up, you create.
01:09.470 --> 01:11.330
You should already have created the file called.
01:11.480 --> 01:15.170
.env, and make sure that your keys are in there in that form.
01:15.560 --> 01:21.500
If you wish, instead of doing that, you can do it by typing your keys in these cells.
01:21.500 --> 01:24.020
It's possible to do it that way.
01:24.020 --> 01:26.270
It's not recommended for security reasons.
01:26.270 --> 01:30.350
In case you one day make this public and then other people will see your keys.
01:30.380 --> 01:32.300
All right, enough preamble.
01:32.330 --> 01:33.800
Let's run some imports.
01:33.800 --> 01:37.400
Let's run this block of code here that sets the environment variables.
01:37.400 --> 01:38.900
You're pretty familiar with this.
01:38.900 --> 01:46.280
And now in this cell, you can see that I make the same call to OpenAI to establish that connection
01:46.280 --> 01:49.400
to the OpenAI API that you're familiar with now.
01:49.400 --> 01:55.790
But then I have something pretty similar for Claude and then something a little bit different for Google
01:55.790 --> 01:56.840
for Gemini.
01:56.960 --> 02:04.220
So these are the sort of, uh, somewhat analogous commands that we're using for those three.
02:04.730 --> 02:05.510
Okay.
02:05.510 --> 02:11.420
So we've seen a bunch of things that LLMs are pretty good at, and then just a few things where they tripped
02:11.420 --> 02:13.160
up, but mostly things that they're very good at.
02:13.190 --> 02:17.600
One of the things that they're not so good at, as it happens, is telling jokes.
02:17.600 --> 02:24.080
When you give it a very tight, uh, context in which it has to try and form that joke.
02:24.260 --> 02:28.610
Uh, and so, you know, this is clearly not a very commercial example, but it's a way of having some
02:28.610 --> 02:30.980
fun and getting some experience with the APIs.
02:31.040 --> 02:34.850
Uh, we are going to ask some llms to tell jokes over the API.
02:35.120 --> 02:36.770
Um, so what information do you
02:36.770 --> 02:37.550
send over an API?
02:37.580 --> 02:41.750
Typically, you always specify the name of the model that you want to use.
02:41.750 --> 02:45.380
You typically give the system message and the user message.
02:45.380 --> 02:48.950
You're super familiar with this now: the system message gives the overall context.
02:48.950 --> 02:52.340
The user message is the actual prompt.
02:52.550 --> 02:54.410
Um, and there are some other characteristics.
02:54.410 --> 02:55.700
There are some other things that you can do.
02:55.730 --> 03:00.890
You can pass in something called the temperature, which is between 0 and 1, usually, where one means
03:00.890 --> 03:08.430
I want more random, creative outputs, and zero would be the most focused, deterministic,
03:08.430 --> 03:09.960
repeatable setting.
03:10.320 --> 03:14.250
So that is another parameter that you can often provide.
03:14.280 --> 03:19.470
So in this case we're going to set a system message to be you are an assistant that is great at telling
03:19.470 --> 03:20.010
jokes.
03:20.010 --> 03:26.670
And the user prompt will be tell a light hearted joke for an audience of data scientists.
03:26.670 --> 03:30.000
That would be you and also me.
03:30.660 --> 03:35.850
Okay, so then this structure here is hopefully something very familiar to you.
03:35.850 --> 03:43.410
This is where we put the prompts into a list of two elements, with system and user as the role
03:43.410 --> 03:44.910
in these two elements.
03:44.940 --> 03:49.860
Going into this list, I hopefully don't need to explain it because you're now quite familiar with this.
03:50.040 --> 03:55.080
Uh, as I say, this value here, the role, can be system or user.
03:55.080 --> 03:56.070
You're going to find out later.
03:56.070 --> 03:57.570
It can also be assistant.
03:57.570 --> 03:59.760
So it can be system user or assistant.
03:59.760 --> 04:04.150
And then later this week you're going to find some other thing that can go in there as well.
04:04.240 --> 04:04.990
So.
04:05.020 --> 04:09.790
But for now, all you need to remember is system and user as the two roles we're going to be using.
04:09.790 --> 04:12.610
So we put that into the list of prompts.
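The messages structure just described, using the system and user wording from this lesson, is simply a list of two role/content dicts:

```python
system_message = "You are an assistant that is great at telling jokes"
user_prompt = "Tell a light-hearted joke for an audience of Data Scientists"

# a list of two elements, each a dict with a role and content;
# later in the week an "assistant" role gets added to the mix
prompts = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": user_prompt},
]
```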
04:13.480 --> 04:16.570
And I should remember to execute the cell before it.
04:16.570 --> 04:18.790
Before I do that, did I execute the cell here?
04:18.790 --> 04:20.350
Yes I did. All right.
04:20.350 --> 04:20.770
Here we go.
04:20.800 --> 04:21.790
Let's try that one again.
04:21.790 --> 04:22.840
Execute that cell.
04:22.840 --> 04:23.860
Execute this cell.
04:23.890 --> 04:25.720
Very good okay.
04:25.750 --> 04:33.280
Let's start with one of the older GPT models, GPT 3.5 turbo, which quite recently was like the latest
04:33.280 --> 04:34.390
and greatest frontier model.
04:34.390 --> 04:35.830
But it's already old news.
04:35.830 --> 04:37.330
But we will use this.
04:37.330 --> 04:44.680
And so the API, which now you're quite familiar with for OpenAI is OpenAI dot chat, dot completions,
04:44.680 --> 04:53.500
dot create. Completions, um, being the name of this API, the one that basically takes an existing
04:53.500 --> 04:59.530
set of prompts and then tries to generate text to complete the conversation.
04:59.800 --> 05:06.960
Um, and as we call create, we pass in a model and we pass in the messages in the format that you're
05:06.960 --> 05:07.980
familiar with.
05:08.010 --> 05:09.750
So let's see.
05:09.780 --> 05:15.870
And you remember when we get back the response, what we do is we take completion dot choices, which
05:15.870 --> 05:18.030
is a list of possible choices.
05:18.030 --> 05:19.980
But there will only be one element in there.
05:19.980 --> 05:23.790
There is a way that you can specify that you want it to return multiple choices.
05:23.790 --> 05:28.740
But since we haven't done that, we just get back one and it's in location zero of course.
05:28.740 --> 05:35.550
So completion dot choices zero dot message gives us back the message and content returns it in a string.
05:35.760 --> 05:37.770
So that is what we get back and we print it.
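The call-and-unwrap pattern just described can be captured in a small helper. The helper name is an assumption (the notebook calls the API inline); the client argument would be a real OpenAI client in practice:

```python
def joke_from(client, model, prompts, temperature=0.5):
    """Ask for a completion and unwrap the single choice that comes back."""
    completion = client.chat.completions.create(
        model=model,
        messages=prompts,
        temperature=temperature,  # 0 = most deterministic, 1 = most creative
    )
    # choices is a list, but with no extra options we only get element zero
    return completion.choices[0].message.content

# with a real client: print(joke_from(openai, "gpt-3.5-turbo", prompts))
```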
05:37.770 --> 05:39.360
And now let's see what kind of joke.
05:39.360 --> 05:42.690
For data scientists GPT 3.5 turbo can come up with.
05:42.720 --> 05:43.680
Here we go.
05:44.010 --> 05:48.000
Why did the data scientists break up with their computer?
05:48.000 --> 05:52.020
It just couldn't handle their complex relationship.
05:52.830 --> 05:53.970
Okay, okay.
05:54.000 --> 05:56.250
You know, I get it, I see it.
05:56.280 --> 05:58.770
It's not the world's funniest joke, but it's not terrible.
05:58.800 --> 06:03.540
You know, the data scientists model relationships between things and couldn't handle their complex
06:03.540 --> 06:04.200
relationship.
06:04.200 --> 06:04.800
Fair enough.
06:04.800 --> 06:13.140
I'd say that's a perfectly acceptable joke coming from GPT 3.5 turbo.
06:13.200 --> 06:17.010
So let's see if GPT-4o mini can do better.
06:17.160 --> 06:21.450
This time, we're going to just slightly expand our use of the API.
06:21.600 --> 06:26.340
I'm including temperature, so this is where you can pass in this number between 0 and 1.
06:26.340 --> 06:29.220
One for the most creative, zero for the least.
06:29.490 --> 06:34.980
Um, and uh, out of this I have completion choices zero message content.
06:34.980 --> 06:36.720
Again, you should be very familiar with this.
06:36.750 --> 06:38.970
Let's see how it performs.
06:39.570 --> 06:42.060
Why did the data scientist break up with a statistician?
06:42.060 --> 06:44.670
Because she found him too mean.
06:44.700 --> 06:46.230
I'd say that's a pretty good joke.
06:46.230 --> 06:47.490
I'd say that's fine.
06:47.490 --> 06:49.950
That's that's, uh, that's an acceptable joke.
06:49.980 --> 06:54.300
Maybe I was harsh when I said that llms aren't very good at this, because that's a perfectly decent
06:54.300 --> 06:54.990
joke.
06:55.170 --> 07:02.610
Uh, and, uh, I think we will give GPT-4o mini, uh, a round of applause for that.
07:03.030 --> 07:09.160
Okay, let's try GPT-4o mini's, uh, bigger cousin, GPT-4o,
07:09.190 --> 07:12.130
the maxi version of GPT-4o,
07:12.160 --> 07:14.260
the big guy.
07:14.260 --> 07:16.000
And we will ask it.
07:16.030 --> 07:19.210
Let's give it the same temperature so we're not messing with things as we go.
07:19.240 --> 07:21.160
We'll ask it for it for a joke.
07:21.190 --> 07:23.230
Two and let's see how it does.
07:24.250 --> 07:27.130
Why did the data scientist go broke?
07:27.130 --> 07:30.850
Because they couldn't find any cache in their array.
07:32.410 --> 07:35.560
If it hadn't put "in their array" on the end, I might have found that better.
07:35.560 --> 07:38.650
Uh, "couldn't find any cache" on its own
07:38.650 --> 07:39.910
Would be okay.
07:40.810 --> 07:42.280
Maybe I'm missing something here.
07:42.310 --> 07:45.280
I'm not sure I get it.
07:45.550 --> 07:47.380
Uh, let's try another one.
07:47.560 --> 07:52.480
Let's do what I had in there before and start pulling the temperature down a bit, see what we get.
07:52.990 --> 07:56.560
Why did the data scientist break up with the logistic regression model?
07:56.590 --> 07:58.390
Because it couldn't find the right fit.
07:58.600 --> 08:00.130
Uh, you know, that's perfectly decent.
08:00.130 --> 08:00.970
That's acceptable.
08:00.970 --> 08:06.160
That's maybe, uh, I'm not sure which I prefer between mini and maxi, but, uh, that's
08:06.160 --> 08:08.860
a pretty solid gag there.
08:08.860 --> 08:12.640
I think we will say that that's a pass for sure.
08:13.810 --> 08:14.800
All right.
08:14.830 --> 08:17.050
Let's move on to Claude 3.5
08:17.080 --> 08:17.680
Sonnet.
08:17.950 --> 08:21.430
Uh, so the API looks strikingly similar.
08:21.430 --> 08:22.270
That's the good news.
08:22.270 --> 08:25.030
It's basically very, very similar indeed.
08:25.060 --> 08:26.530
A couple of differences.
08:26.530 --> 08:31.510
You do have to pass in the system message as its own separate attribute.
08:31.510 --> 08:36.430
And then the messages attribute is again this list of dicts.
08:36.430 --> 08:41.380
But of course it doesn't have that first entry for the system message because you've already passed
08:41.380 --> 08:42.550
that in separately.
08:42.910 --> 08:45.310
Um, so that's a slight difference.
08:45.340 --> 08:51.670
Um, also, max_tokens is something which is optional for the OpenAI API, to specify the maximum
08:51.670 --> 08:52.360
number of tokens.
08:52.360 --> 08:55.180
And I believe it's actually required for Claude.
08:55.180 --> 08:56.860
So that's why it's in here.
08:56.860 --> 08:59.200
But otherwise everything should look very similar.
08:59.230 --> 09:03.250
The API itself is a little bit easier to memorize.
09:03.250 --> 09:05.740
It's just Claude dot messages dot create.
09:05.740 --> 09:11.470
It's slightly shorter, but it's otherwise quite similar to OpenAI's chat completions create.
09:11.710 --> 09:13.150
Uh, so there it is.
09:13.180 --> 09:17.830
And then when we get back a response, it's message content zero.
09:17.860 --> 09:22.630
Again, you're asking for the first one, but we're only going to get back one because we've only
09:22.630 --> 09:28.750
asked for one. Dot text gives us, uh, the equivalent of dot content for OpenAI.
09:28.780 --> 09:30.100
So let's see.
09:30.100 --> 09:35.020
This is hopefully useful for you, for the API framework for Claude.
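The two Claude differences just noted (system as its own argument, and a required max_tokens) can be captured in a small helper; the function name is an assumption, and the client would be a real anthropic.Anthropic() in practice:

```python
def claude_joke(client, model, system_message, user_prompt, max_tokens=200):
    # two differences from OpenAI: the system message is its own argument,
    # and max_tokens is required rather than optional
    message = client.messages.create(
        model=model,
        max_tokens=max_tokens,
        system=system_message,
        messages=[{"role": "user", "content": user_prompt}],
    )
    # .content[0].text is Claude's counterpart to .choices[0].message.content
    return message.content[0].text

# with a real client: print(claude_joke(claude, "claude-3-5-sonnet-20240620", system_message, user_prompt))
```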
09:35.020 --> 09:38.080
Let's see now how Claude does with a joke.
09:39.910 --> 09:40.630
Sure.
09:40.660 --> 09:43.540
Here's a lighthearted joke for data scientists.
09:43.570 --> 09:46.210
Why did the data scientist break up with their significant other?
09:46.240 --> 09:50.800
There was just too much variance in the relationship, and they couldn't find a good way to normalize
09:50.800 --> 09:51.310
it.
09:51.970 --> 09:53.530
Uh, yeah, that's all right.
09:53.530 --> 09:59.110
I'd say it's nerdier, it's slightly more, uh, um, data-sciencey.
09:59.110 --> 10:03.640
It's perhaps just a tiny bit less funny, but it's not bad at all.
10:03.640 --> 10:07.570
I don't know, I think whether you prefer that to GPT four is probably a matter of taste.
10:07.900 --> 10:10.100
They're perfectly solid jokes.
10:10.220 --> 10:14.210
They're not explosively funny, but I'd say perfectly solid.
10:14.210 --> 10:15.440
Not terrible.
10:15.950 --> 10:16.550
Um.
10:16.610 --> 10:22.220
Anyway, the point of this is more about APIs than about jokes, although it always keeps it entertaining.
10:22.250 --> 10:24.800
What I want to show you now is about streaming.
10:24.890 --> 10:29.090
Um, you remember we talked briefly about streaming before? The streaming example
10:29.090 --> 10:33.140
we did before, uh, looked a bit complicated because we had to deal with the fact that we were bringing
10:33.140 --> 10:36.470
back markdown and we had to handle that markdown.
10:36.470 --> 10:40.280
This looks a bit simpler because we're not dealing with with a markdown response.
10:40.280 --> 10:45.980
We're going to ask the same model, Claude 3.5 Sonnet, again for a joke, but this time we're going to stream
10:45.980 --> 10:46.730
back results.
10:46.730 --> 10:53.090
So you may remember when we asked OpenAI to stream the way we did it is we just added another attribute
10:53.090 --> 10:54.470
stream equals true.
10:54.470 --> 10:56.570
And that meant that it was in streaming mode.
10:56.570 --> 10:58.490
For Claude, it's slightly different.
10:58.490 --> 11:00.380
There is no extra attribute.
11:00.380 --> 11:06.440
Instead, you call the dot stream method instead of the dot create method.
11:06.440 --> 11:09.020
So slightly different approach there.
11:09.020 --> 11:13.790
That's a nuance of difference between Anthropic and OpenAI for streaming.
11:13.790 --> 11:16.430
So we call Claude messages stream.
11:16.460 --> 11:17.840
Otherwise it's the same.
11:17.840 --> 11:22.430
And then with what comes back, we use a context manager with results as stream.
11:22.610 --> 11:26.960
Um, and then it's for text in stream.text_stream.
11:26.960 --> 11:31.550
And you remember OpenAI was for chunk in response.
11:31.550 --> 11:35.990
So OpenAI was a bit different again in the way that you read back results.
11:35.990 --> 11:37.040
But there it is.
11:37.040 --> 11:41.420
We get each little chunk back and we're just going to print that chunk.
11:41.540 --> 11:46.460
Um, and the reason for this is to make sure that it doesn't print each chunk on a separate line.
11:46.670 --> 11:48.170
Otherwise it'd be very hard to read.
11:48.170 --> 11:49.490
So this should look better.
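The streaming variant described above can be sketched as follows. Again a hedged sketch under the same assumptions (the `anthropic` package, an `ANTHROPIC_API_KEY` variable, and an illustrative model name): the key difference from OpenAI is that there is no `stream=True` flag; you call `.stream()` instead of `.create()` and read `text_stream` inside a context manager.

```python
import os

def print_stream(chunks):
    """Print streamed text chunks without a newline between them,
    so the reply reads as continuous text rather than one chunk per line."""
    for text in chunks:
        print(text, end="", flush=True)

system_message = "You are an assistant that is great at telling jokes"
user_prompt = "Tell a light-hearted joke for an audience of Data Scientists"

# Guarded so the sketch only calls out when a key is configured
if os.getenv("ANTHROPIC_API_KEY"):
    import anthropic

    claude = anthropic.Anthropic()
    # No stream=True attribute here: .stream() replaces .create()
    with claude.messages.stream(
        model="claude-3-5-sonnet-20240620",  # illustrative model name
        max_tokens=200,
        system=system_message,
        messages=[{"role": "user", "content": user_prompt}],
    ) as stream:
        print_stream(stream.text_stream)  # yields text deltas as they arrive
```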
11:49.490 --> 11:56.510
Let's see how Claude 3.5 sonnet does with a joke that it will then stream back to us in JupyterLab.
11:57.200 --> 11:57.800
There we go.
11:57.800 --> 11:58.040
You see?
11:58.040 --> 11:59.060
It's streaming.
11:59.330 --> 12:01.580
Sure, here's a lighthearted joke for data scientists.
12:01.610 --> 12:03.110
Why did the... is that the same joke?
12:03.110 --> 12:08.690
It seems exactly the same joke, but it's added in a little "ba-dum" drum,
12:08.840 --> 12:12.000
uh, explosion at the end, which is nice.
12:12.000 --> 12:14.670
I wonder, did I ask for more tokens than before?
12:14.700 --> 12:15.180
Let's see.
12:15.210 --> 12:15.630
No.
12:15.630 --> 12:16.350
The same.
12:16.650 --> 12:17.730
Um, it's.
12:17.760 --> 12:19.020
And it gives a little explanation.
12:19.020 --> 12:22.170
This joke plays on statistical concepts which are common to data science.
12:22.260 --> 12:27.060
It's a bit nerdy, but should get a chuckle from a data-savvy audience.
12:27.060 --> 12:32.070
Well, I would say you guys are a data-savvy audience, so you can be the judge of that.
12:32.100 --> 12:34.440
Did it get a chuckle from you?
12:35.220 --> 12:36.540
Moving on.
12:36.570 --> 12:39.120
Gemini has a different structure.
12:39.120 --> 12:41.370
It's quite a bit different, actually.
12:41.400 --> 12:48.780
Um, and I'd probably say, to Google's credit, while their setup for API keys is much more complicated,
12:48.780 --> 12:50.580
the API itself is a bit simpler.
12:50.670 --> 12:56.850
Uh, you can see here you create a generative model object and you pass in the name of the model; we'll
12:56.850 --> 12:59.550
use Gemini 1.5 Flash.
12:59.580 --> 13:03.510
You remember how large the context window is for Gemini 1.5 Flash.
13:03.540 --> 13:04.680
Can you remember that?
13:04.710 --> 13:07.050
It was top of the table that we had before?
13:07.050 --> 13:10.380
It was a remarkable 1 million tokens.
13:10.410 --> 13:11.450
A million tokens.
13:11.480 --> 13:13.310
750,000 words.
13:13.340 --> 13:15.500
So, Gemini 1.5 flash.
13:15.950 --> 13:23.270
We pass in the system instruction when we create this object, and then we call gemini dot
13:23.270 --> 13:26.420
generate content with the user prompt.
13:26.420 --> 13:28.520
And it's just response dot text.
13:28.520 --> 13:35.090
So a little bit less futzing around with both the request and the response; here it's a bit of a simpler
13:35.120 --> 13:37.520
API, but let's see the quality of the joke.
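The Gemini flow just described can be sketched like this. It's a minimal sketch assuming the `google-generativeai` package and a `GOOGLE_API_KEY` environment variable; the model name is illustrative. Note the different shape: the system prompt goes into the model object at construction time, not into each call.

```python
import os

def gemini_config(model_name, system_instruction):
    """Gemini takes the system prompt at model-construction time,
    not per call -- a different shape from OpenAI and Anthropic."""
    return {"model_name": model_name, "system_instruction": system_instruction}

user_prompt = "Tell a light-hearted joke for an audience of Data Scientists"

# Guarded so the sketch only calls out when a key is configured
if os.getenv("GOOGLE_API_KEY"):
    import google.generativeai as genai

    genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
    gemini = genai.GenerativeModel(
        **gemini_config("gemini-1.5-flash",
                        "You are an assistant that is great at telling jokes")
    )
    response = gemini.generate_content(user_prompt)
    print(response.text)  # the whole reply is just response.text
```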
13:37.670 --> 13:42.200
Importantly, why did the data scientist break up with the statistician?
13:42.200 --> 13:45.590
Because they couldn't see eye to eye on the p value.
13:47.420 --> 13:48.020
Ah.
13:48.800 --> 13:52.310
Well, uh, I see the data science side of it.
13:52.310 --> 13:53.810
I'm not sure I get it.
13:53.900 --> 13:55.070
Hahaha.
13:55.370 --> 13:57.380
Uh, maybe you do get it.
13:57.380 --> 13:59.540
And I'm being, uh, being dozy.
13:59.540 --> 14:01.310
Uh, in which case, by all means point it out to me.
14:01.310 --> 14:05.450
But I don't particularly get the funny aspect of that joke.
14:05.450 --> 14:11.630
So for me, I would say that, uh, Gemini 1.5 Flash certainly lags in
14:11.630 --> 14:13.440
terms of its humor value.
14:14.220 --> 14:15.060
All right.
14:15.090 --> 14:18.960
Anyways, to get serious for a moment, let's go back to GPT-4o
14:19.170 --> 14:20.910
mini with the original question.
14:20.910 --> 14:22.410
You're a helpful assistant.
14:22.440 --> 14:25.950
How do I decide if a business problem is suitable for an LLM solution?
14:25.950 --> 14:29.790
Remember, that was the very first question we asked through the chat interface.
14:29.970 --> 14:32.970
Um, and we can now bring this together again.
14:32.970 --> 14:34.260
This should be pretty familiar to you.
14:34.290 --> 14:37.320
We're going to stream back the results in markdown.
14:37.320 --> 14:40.770
So it's OpenAI chat dot completions dot create.
14:40.770 --> 14:41.880
We pass in the model.
14:41.880 --> 14:43.350
We're going to go for the big guy.
14:43.530 --> 14:44.820
Um we use the prompts.
14:44.820 --> 14:45.840
We set a temperature.
14:45.840 --> 14:47.250
We say stream equals true.
14:47.250 --> 14:49.680
That's the way that you do it with OpenAI.
14:49.830 --> 14:54.750
Um, and then this is the way that we stream back the results again.
14:54.750 --> 14:57.720
It's a little bit more involved because we're dealing with markdown.
14:57.720 --> 15:03.390
And so we have to do some, some sort of, uh, special stuff here to basically refresh the markdown
15:03.390 --> 15:04.950
with each iteration.
15:04.980 --> 15:08.850
If you're not sure why we have to do it this way, try taking that out and doing it differently, and you'll
15:08.850 --> 15:11.190
immediately see what happens.
15:11.220 --> 15:13.200
It won't look good.
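The OpenAI streaming-markdown cell being described can be sketched as below. A hedged sketch assuming the `openai` package, an `OPENAI_API_KEY` variable, and a Jupyter environment (IPython display); the model name is illustrative. The key idea is re-rendering the entire accumulated reply on every delta, which is what avoids the half-parsed-heading flicker.

```python
import os

def accumulate(deltas):
    """Yield the full reply-so-far after each streamed delta.
    Re-rendering the whole accumulated markdown on every iteration is what
    avoids the flicker when a heading has only partially arrived."""
    reply = ""
    for delta in deltas:
        reply += delta
        yield reply

system_message = "You are a helpful assistant"
user_prompt = "How do I decide if a business problem is suitable for an LLM solution?"

# Guarded so the sketch only calls out when a key is configured
if os.getenv("OPENAI_API_KEY"):
    from openai import OpenAI
    from IPython.display import Markdown, display, update_display

    stream = OpenAI().chat.completions.create(
        model="gpt-4o",  # illustrative: "the big guy"
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0.7,
        stream=True,  # OpenAI streams via this flag, unlike Claude's .stream()
    )
    handle = display(Markdown(""), display_id=True)
    deltas = (chunk.choices[0].delta.content or "" for chunk in stream)
    for so_far in accumulate(deltas):
        update_display(Markdown(so_far), display_id=handle.display_id)
```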
15:13.440 --> 15:15.720
Uh, and let's run that.
15:15.720 --> 15:21.810
And here we get the results, and you can see that it looks great.
15:22.500 --> 15:28.260
You can see some of the flickering happening when the markdown has only partially come through.
15:28.260 --> 15:33.600
And so it's interpreting things, like when there are perhaps multiple hashes representing a subheading.
15:33.600 --> 15:37.050
And it's only received one hash and it thinks there's a big heading coming.
15:37.110 --> 15:41.430
Uh, at least I think that's what we were seeing there briefly, with some of that flickering as the
15:41.430 --> 15:42.660
markdown appeared.
15:42.660 --> 15:50.730
But at the end of it we get back, of course, a very nicely constructed response, well structured,
15:50.730 --> 15:55.020
and formatted perfectly in markdown as it streams back.
15:55.740 --> 15:56.460
All right.
15:56.460 --> 16:03.300
So that has given you a sense of the different APIs, and a bit of messing around with some fun
16:03.300 --> 16:04.140
questions.
16:04.170 --> 16:12.150
And what we're going to do next in the next video is actually have a couple of LLMs talk to each other,
16:12.150 --> 16:13.200
which should be fun.
16:13.200 --> 16:14.340
I will see you then.

799
week5/community-contributions/subtitles/srts/59166481/ja_JP.srt

@ -0,0 +1,799 @@
WEBVTT
00:00.860 --> 00:05.330
そしてここでもう一度、 私たちはお気に入りの場所、 Jupyter Labにいることに気づく。
00:05.330 --> 00:07.310
数週間で準備完了。
00:07.340 --> 00:09.620
2週目の練習
00:09.620 --> 00:14.930
2週目のフォルダーに入り、 2週目の初日を迎える。
00:15.230 --> 00:18.230
ええと、 それで......。
00:18.230 --> 00:26.990
第1週目では、 チャット・ユーザー・インターフェースを通して複数のfrontier LMSを使い、 ウェブを通して使う方法、
00:26.990 --> 00:32.990
そしてAPIを通してOpenAIのAPIに接続したことを思い出してください。
00:33.020 --> 00:41.090
そこで今日は、 AnthropicとGoogleのAPIをミックスに加え、 OpenAIを使用する私たちのスキルに加わります。
00:41.960 --> 00:47.630
ええと、 だから、 もうひとつ念を押しておくけど、 この話を続けていると殺されちゃうよ。
00:47.630 --> 00:50.300
ここでキーをセットする。
00:50.300 --> 00:58.460
OpenAIのキーを設定することができます。 おそらく先週すでに設定したと思いますが、 anthropicとGoogleのGeminiのキーを設定することができます。
00:58.490 --> 01:05.330
でも、 グーグル・キーの設定にはもっと冒険が必要なんだ。
01:05.390 --> 01:09.410
一度セットアップしたら、 あとは作るだけだ。
01:09.470 --> 01:11.330
というファイルはすでに作成されているはずだ。
01:11.480 --> 01:15.170
鍵がその形であることを確認する。
01:15.560 --> 01:21.500
その代わりに、 これらのセルにキーを入力することもできる。
01:21.500 --> 01:24.020
そうすることは可能だ。
01:24.020 --> 01:26.270
セキュリティ上の理由から推奨されていない。
01:26.270 --> 01:30.350
いつかこれを公開し、 他の人があなたの鍵を見ることになったときのために。
01:30.380 --> 01:32.300
さて、 前置きはここまで。
01:32.330 --> 01:33.800
インポートをしよう。
01:33.800 --> 01:37.400
環境変数を設定するコードのブロックを実行してみよう。
01:37.400 --> 01:38.900
あなたはよくご存じでしょう。
01:38.900 --> 01:49.400
そして今、 このセルで、 OpenAIに同じ呼び出しをして、 OpenAI APIへの接続を確立しているのがわかるだろう。
01:49.400 --> 01:56.840
でも、 クロードには似たようなものがあるし、 双子座のグーグルにはちょっと違うものがある。
01:56.960 --> 02:04.220
つまり、 この3つのコマンドは、 ある意味類似しているんだ。
02:04.730 --> 02:05.510
オーケー。
02:05.510 --> 02:13.160
LLMSが得意なことをたくさん見てきたし、 つまずいたこともいくつかあったが、 ほとんどはLLMSが得意なことだった。
02:13.190 --> 02:17.600
その中で、 あまり得意でないことのひとつがジョークを言うことだ。
02:17.600 --> 02:24.080
非常にタイトな文脈を与えることで、 その中でジョークを作ろうとする。
02:24.260 --> 02:30.980
これは明らかに商業的な例ではないけれど、 APIを楽しみながら体験する方法なんだ。
02:31.040 --> 02:34.850
ええと、 API上で何人かのLLMにジョークを言ってもらう予定です。
02:35.120 --> 02:36.770
ええと、 それでどんな情報を?
02:36.770 --> 02:37.550
API経由で送信する。
02:37.580 --> 02:41.750
通常、 使用したいモデルの名前を常に指定する。
02:41.750 --> 02:45.380
通常、 システムメッセージとユーザーメッセージを伝える。
02:45.380 --> 02:48.950
全体的な背景を伝えるシステムメッセージは、 もうお馴染みですね。
02:48.950 --> 02:52.340
ユーザーメッセージは実際のプロンプトである。
02:52.550 --> 02:54.410
他にもいくつか特徴がある。
02:54.410 --> 02:55.700
他にもできることはある。
02:55.730 --> 03:00.890
温度と呼ばれるものを0から1の間で渡すことができ、 通常、 1はよりランダムで創造的な出力が欲しいことを意味し、
03:00.890 --> 03:09.960
0は可能な限り低く集中した、 決定論的な反復可能な設定となる。
03:10.320 --> 03:14.250
だから、 これもよく提供できるパラメーターのひとつだ。
03:14.280 --> 03:20.010
そこで今回は、 「あなたはジョークを言うのが得意なアシスタントです」というシステムメッセージを設定する。
03:20.010 --> 03:26.670
そして、 ユーザー・プロンプトは、 データ・サイエンティストの聴衆に向けて軽いジョークを言う。
03:26.670 --> 03:30.000
それはあなたであり、 私でもある。
03:30.660 --> 03:35.850
オーケー、 ではこの構成は、 あなたにとって非常に馴染みのあるものであることを願っている。
03:35.850 --> 03:44.910
ここでは、 プロンプトをリストに入れて、 システムとユーザーを要素とし、 これら2つの要素に役割を設定します。
03:44.940 --> 03:49.860
このリストに入るにあたって、 説明するまでもないだろう。
03:50.040 --> 03:55.080
ええと、 私が言ったように、 この、 この、 ええと、 ここの値、 ロールはシステムでもユーザーでもいい。
03:55.080 --> 03:56.070
後で分かることだ。
03:56.070 --> 03:57.570
アシスタントになることもある。
03:57.570 --> 03:59.760
だから、 システム・ユーザーでもアシスタントでもいい。
03:59.760 --> 04:04.150
そして今週の後半には、 そこに入れられる他のものも見つけることになる。
04:04.240 --> 04:04.990
だから
04:05.020 --> 04:09.790
しかし今は、 これから使う2つのロールとして、 systemとuserを覚えておけばいい。
04:09.790 --> 04:12.610
だから、 それをプロンプトのリストに入れた。
04:13.480 --> 04:16.570
そして、 その前のセルを実行することも忘れてはならない。
04:16.570 --> 04:18.790
その前に、 ここでセルを実行したか?
04:18.790 --> 04:20.350
はい、 大丈夫です。
04:20.350 --> 04:20.770
さあ、 始めよう。
04:20.800 --> 04:21.790
もう一度やってみよう。
04:21.790 --> 04:22.840
そのセルを実行する。
04:22.840 --> 04:23.860
このセルを実行する。
04:23.890 --> 04:25.720
とてもいい。
04:25.750 --> 04:34.390
古いGPTモデルの一つ、 GPT 3. 5ターボから始めよう。 つい最近まで、 最新で最高のフロンティアモデルのようだった。
04:34.390 --> 04:35.830
しかし、 それはすでに古いニュースだ。
04:35.830 --> 04:37.330
しかし、 我々はこれを使う。
04:37.330 --> 04:44.680
OpenAIのAPIは、 OpenAI dot chat, dot completions, dot
04:44.680 --> 04:53.500
create completionsで、 このAPIの名前は、 基本的に既存のプロンプトのセットを受け取り、
04:53.500 --> 04:59.530
会話を完成させるためにテキストを生成しようとするものです。
04:59.800 --> 05:07.980
そしてcreateを呼び出すと、 モデルを渡し、 おなじみのフォーマットでメッセージを渡します。
05:08.010 --> 05:09.750
では、 見てみよう。
05:09.780 --> 05:18.030
そして、 回答が返ってきたときに私たちがすることは、 可能な選択肢のリストである完了点の選択肢を取ることだ。
05:18.030 --> 05:19.980
しかし、 そこに含まれる要素は1つだけだ。
05:19.980 --> 05:23.790
複数の選択肢を返すように指定する方法がある。
05:23.790 --> 05:28.740
でも、 それをやっていないので、 ただ1つ戻ってくるだけで、 もちろんゼロ地点にある。
05:28.740 --> 05:35.550
つまり、 completion dot choices zero dot messageはメッセージを返し、 contentはそれを文字列で返す。
05:35.760 --> 05:37.770
だから、 それを印刷するんだ。
05:37.770 --> 05:39.360
さて、 どんなジョークか見てみよう。
05:39.360 --> 05:42.690
データサイエンティスト向け GPT 3. 5ターボが思いつく。
05:42.720 --> 05:43.680
さあ、 始めよう。
05:44.010 --> 05:48.000
データサイエンティストはなぜコンピューターと別れたのか?
05:48.000 --> 05:52.020
二人の複雑な関係を処理しきれなかったのだ。
05:52.830 --> 05:53.970
オーケー、 オーケー。
05:54.000 --> 05:56.250
分かるよ、 分かるんだ。
05:56.280 --> 05:58.770
世界一面白いジョークではないが、 ひどくはない。
05:58.800 --> 06:04.200
データサイエンティストは物事の関係をモデル化するが、 その複雑な関係を扱うことができなかったんだ。
06:04.200 --> 06:04.800
十分フェアだ。
06:04.800 --> 06:13.140
GPT 3. 5ターボからすれば、 まったく問題ない、 受け入れられるジョークだと思う。
06:13.200 --> 06:17.010
では、 GPT four miniがもっとうまくやれるかどうか見てみよう。
06:17.160 --> 06:21.450
今回は、 APIの使い方を少し拡張するだけだ。
06:21.600 --> 06:26.340
温度を含めているので、 ここで0から1の間の数値を渡すことができる。
06:26.340 --> 06:29.220
最もクリエイティブなものに1点、 最もクリエイティブでないものに0点。
06:29.490 --> 06:34.980
ええと、 それで、 この中で私は完成度の高い選択肢を持っていて、 メッセージの内容はゼロなんだ。
06:34.980 --> 06:36.720
繰り返しになるが、 あなたはこのことをよく知っているはずだ。
06:36.750 --> 06:38.970
そのパフォーマンスを見てみよう。
06:39.570 --> 06:42.060
データサイエンティストはなぜ統計学者と別れたのか?
06:42.060 --> 06:44.670
彼女は彼があまりに意地悪だと感じたからだ。
06:44.700 --> 06:46.230
なかなかいいジョークだと思うよ。
06:46.230 --> 06:47.490
それでいいと思うよ。
06:47.490 --> 06:49.950
それは......ああ、 これはジョークとして受け入れられるね。
06:49.980 --> 06:54.990
llmsはあまり得意ではない、 と言ったのは厳しかったかもしれない。 それは至極まっとうなジョークだからだ。
06:55.170 --> 07:02.610
そして、 GPT4にはちょっとした拍手を送りたい。
07:03.030 --> 07:09.160
じゃあ、 GPT4ミニを試してみよう。
07:09.190 --> 07:12.130
GPT4のマキシバージョンだ。
07:12.160 --> 07:14.260
ああ、 大物だ。
07:14.260 --> 07:16.000
そして私たちはそれを問う。
07:16.030 --> 07:19.210
温度を同じにしよう。
07:19.240 --> 07:21.160
冗談で聞いてみよう。
07:21.190 --> 07:23.230
2人で、 どうなるか見てみよう。
07:24.250 --> 07:27.130
データサイエンティストはなぜ破産したのか?
07:27.130 --> 07:30.850
なぜなら、 彼らのアレーにはキャッシュが見つからなかったからだ。
07:32.410 --> 07:35.560
もし、 それが彼らの配列になかったら、 私はその方がいいと思ったかもしれない。
07:35.560 --> 07:38.650
キャッシュは見つからなかった。
07:38.650 --> 07:39.910
大丈夫だろう。
07:40.810 --> 07:42.280
何か見落としているのかもしれない。
07:42.310 --> 07:45.280
よく分からないんだ。
07:45.550 --> 07:47.380
ええと、 もうひとつやってみよう。
07:47.560 --> 07:52.480
前にやったように、 温度を少し下げてみよう。
07:52.990 --> 07:56.560
科学者たちはなぜロジスティック回帰モデルと決別したのか?
07:56.590 --> 07:58.390
適切な相手が見つからなかったからだ。
07:58.600 --> 08:00.130
あ、 あのね、 それは至極まっとうなことだよ。
08:00.130 --> 08:00.970
それは受け入れられる。
08:00.970 --> 08:08.860
ミニとマキシのどっちが好きかわからないけど、 これはこれで、 なかなかしっかりしたギャグだよ。
08:08.860 --> 08:12.640
それは間違いなくパスだ。
08:13.810 --> 08:14.800
分かった。
08:14.830 --> 08:17.050
クロード3. 5に移ろう。
08:17.080 --> 08:17.680
ソネットだ。
08:17.950 --> 08:21.430
APIは驚くほど似ている。
08:21.430 --> 08:22.270
それは良いニュースだ。
08:22.270 --> 08:25.030
基本的にはとてもよく似ている。
08:25.060 --> 08:26.530
いくつかの違いがある。
08:26.530 --> 08:31.510
システム・メッセージは別の属性として渡さなければならない。
08:31.510 --> 08:36.430
そしてメッセージはまたこのデッキリストだ。
08:36.430 --> 08:42.550
しかしもちろん、 システムメッセージの最初のエントリーは持っていない。
08:42.910 --> 08:45.310
うーん、 それは少し違うかな。
08:45.340 --> 08:52.360
また、 Max tokensは、 OpenAI APIでトークンの最大数を指定するためのオプションです。
08:52.360 --> 08:55.180
クロードには必要なことだと思う。
08:55.180 --> 08:56.860
だからここにあるんだ。
08:56.860 --> 08:59.200
しかし、 それ以外はすべてよく似ているはずだ。
08:59.230 --> 09:03.250
API自体は少し覚えやすい。
09:03.250 --> 09:05.740
クロード・ドット・メッセージ・ドット・クリエイトだ。
09:05.740 --> 09:11.470
少し短いですが、 それ以外はOpenAIのChatGPTの完了が作成するものとよく似ています。
09:11.710 --> 09:13.150
あ、 そうだ。
09:13.180 --> 09:17.830
そして返答が返ってきたときには、 メッセージの内容はゼロになっている。
09:17.860 --> 09:22.630
繰り返しますが、 最初の1つを要求していますが、 1つしか返ってきません。 なぜなら、
09:22.630 --> 09:28.750
OpenAIのドット・コンテンツに相当するドット・テキストを1つしか要求していないからです。
09:28.780 --> 09:30.100
では、 見てみよう。
09:30.100 --> 09:35.020
これは、 クロードのAPIフレームワークのために役立つことを期待している。
09:35.020 --> 09:38.080
さて、 クロードがジョークでどうするか見てみよう。
09:39.910 --> 09:40.630
もちろんだ。
09:40.660 --> 09:43.540
データサイエンティスト向けの軽いジョークを紹介しよう。
09:43.570 --> 09:46.210
データサイエンティストはなぜ恋人と別れるのか?
09:46.240 --> 09:51.310
ただ、 その関係にあまりにばらつきがありすぎて、 それを正常化するいい方法が見つからなかった。
09:51.970 --> 09:53.530
ああ、 そうだね。
09:53.530 --> 09:59.110
よりオタク的というか......もう少し、 うーん、 データサイエンス的というか。
09:59.110 --> 10:03.640
ほんの少し笑えなくなったかもしれないが、 決して悪くはない。
10:03.640 --> 10:07.570
GPT4よりGPT4が好きかどうかは、 好みの問題だろう。
10:07.900 --> 10:10.100
完璧なジョークだ。
10:10.220 --> 10:14.210
爆発的に面白いというわけではないが、 完璧にしっかりしていると言える。
10:14.210 --> 10:15.440
ひどくはない。
10:15.950 --> 10:16.550
うーん。
10:16.610 --> 10:22.220
いずれにせよ、 これはAPIについての話であり、 ジョークの話である。
10:22.250 --> 10:24.800
これからお見せしたいのは、 ストリーミングについてです。
10:24.890 --> 10:29.090
ストリーミングの例の前に、 ストリーミングについて簡単に話したのを覚えているかい?
10:29.090 --> 10:36.470
マークダウンを復活させ、 そのマークダウンを処理しなければならなかったからだ。
10:36.470 --> 10:40.280
これは、 マークダウン・レスポンスを扱っていないので、 少し単純に見える。
10:40.280 --> 10:46.730
同じモデル、 クロード3. 5にまたジョークをお願いするが、 今回は結果をストリーミングで返します。
10:46.730 --> 10:54.470
OpenAIにストリーミングを依頼したとき、 別の属性stream equals trueを追加したことを覚えているだろうか。
10:54.470 --> 10:56.570
そしてそれは、 ストリーミング・モードであることを意味していた。
10:56.570 --> 10:58.490
クロードの場合は少し違う。
10:58.490 --> 11:00.380
余計な属性はない。
11:00.380 --> 11:06.440
その代わり、 dot createメソッドの代わりにdot streamメソッドを呼び出す。
11:06.440 --> 11:09.020
そこで少し異なるアプローチを取る。
11:09.020 --> 11:13.790
それは、 ストリーミングにおけるAnthropicとOpenAIのニュアンスの違いだ。
11:13.790 --> 11:16.430
そこで、 クロード・メッセージ・ストリームと呼ぶことにする。
11:16.460 --> 11:17.840
それ以外は同じだ。
11:17.840 --> 11:22.430
そして、 戻ってきたものについては、 ストリームとしての結果を持つコンテキスト・マネージャーを使用する。
11:22.610 --> 11:26.960
それから、 ストリーム・テキスト・ストリームのテキスト用だね。
11:26.960 --> 11:31.550
オープンAIは、 それに応えるチャンクのためのものだったことを覚えているだろう。
11:31.550 --> 11:35.990
だからOpenAIは、 結果を読み返す方法がまた少し違っていた。
11:35.990 --> 11:37.040
でも、 それがある。
11:37.040 --> 11:41.420
それぞれの小さな塊を取り戻し、 その塊を印刷する。
11:41.540 --> 11:46.460
その理由は、 各チャンクを別々の行に印刷しないようにするためです。
11:46.670 --> 11:48.170
そうでなければ、 とても読みにくい。
11:48.170 --> 11:49.490
だから、 この方がよく見えるはずだ。
11:49.490 --> 11:56.510
クロード3. 5ソネットが、 JupyterLabにストリームバックしてくれるジョークでどうするか見てみよう。
11:57.200 --> 11:57.800
これでよし。
11:57.800 --> 11:58.040
分かるか?
11:58.040 --> 11:59.060
ストリーミングだよ。
11:59.330 --> 12:01.580
もちろん、 データサイエンティスト向けの軽いジョークだ。
12:01.610 --> 12:03.110
なぜ同じジョークを?
12:03.110 --> 12:08.690
まったく同じジョークのようだが、 ブラームスの小太鼓が加えられている。
12:08.840 --> 12:12.000
最後に爆発があったのは良かった。
12:12.000 --> 12:14.670
なぜ以前より多くのトークンを要求したのだろう?
12:14.700 --> 12:15.180
見てみよう。
12:15.210 --> 12:15.630
そうだ。
12:15.630 --> 12:16.350
同じだ。
12:16.650 --> 12:17.730
ええと、 それは
12:17.760 --> 12:19.020
そして、 ちょっとした説明もある。
12:19.020 --> 12:22.170
このジョークは、 データサイエンスに共通する統計的概念を利用したものだ。
12:22.260 --> 12:27.060
少しマニアックだが、 データに精通した観客の笑いを誘うはずだ。
12:27.060 --> 12:32.070
まあ、 君たちはデータに精通しているから、 それを判断するのは君たちだ。
12:32.100 --> 12:34.440
笑ってもらえましたか?
12:35.220 --> 12:36.540
前進だ。
12:36.570 --> 12:39.120
双子座は構造が違う。
12:39.120 --> 12:41.370
実際にはかなり違うんだ。
12:41.400 --> 12:50.580
グーグルの名誉のために言っておくと、 トークンを設定する機能はもっと複雑だが、 APIはもう少しシンプルだ。
12:50.670 --> 12:59.550
ここではジェネレーティブ・モデル・オブジェクトを作成し、 モデルの名前を渡す。
12:59.550 --> 12:59.550
5フラッシュ
12:59.580 --> 13:03.510
ジェミニ1. 5フラッシュのコンテキストウィンドウの大きさを覚えているだろうか。
13:03.540 --> 13:04.680
覚えていますか?
13:04.710 --> 13:07.050
以前はトップだった?
13:07.050 --> 13:10.380
100万トークンという驚異的な数字だった。
13:10.410 --> 13:11.450
100万トークン。
13:11.480 --> 13:13.310
75万語。
13:13.340 --> 13:15.500
というわけで、 ジェミニ1. 5フラッシュだ。
13:15.950 --> 13:23.270
このオブジェクトを作成するときにシステム命令を渡し、 ジェミニ・ドットを呼び出す。
13:23.270 --> 13:26.420
ユーザープロンプトでコンテンツを生成する。
13:26.420 --> 13:28.520
しかも、 ただのレスポンス・ドット・テキストだ。
13:28.520 --> 13:37.520
リクエストもレスポンスも、 もう少しシンプルなAPIにしてみよう。
13:37.670 --> 13:42.200
重要なのは、 なぜデータサイエンティストは統計学者と別れたのか、 ということだ。
13:42.200 --> 13:45.590
p値で意見が一致しなかったからだ。
13:47.420 --> 13:48.020
ああ。
13:48.800 --> 13:52.310
まあ、 データ・サイエンスの側面はわかるよ。
13:52.310 --> 13:53.810
よく分からないんだ。
13:53.900 --> 13:55.070
ハハハ。
13:55.370 --> 13:57.380
ああ、 たぶん君はわかっているんだろうね。
13:57.380 --> 13:59.540
それに、 僕は、 うとうとしているんだ。
13:59.540 --> 14:01.310
ああ、 その場合はぜひ指摘してほしい。
14:01.310 --> 14:05.450
でも、 そのジョークの面白さは特に分からない。
14:05.450 --> 14:13.440
だから私としては、 ジェミニ1. 5フラッシュはユーモアの価値という点では確かに遅れていると思う。
14:14.220 --> 14:15.060
分かった。
14:15.090 --> 14:18.960
ともあれ、 ちょっと真面目にGPT4に戻ろう。
14:19.170 --> 14:20.910
最初の質問と同じだ。
14:20.910 --> 14:22.410
君は役に立つアシスタントだ。
14:22.440 --> 14:25.950
ビジネス上の問題がLLMのソリューションに適しているかどうかは、 どのように判断すればよいのでしょうか?
14:25.950 --> 14:29.790
覚えているだろうか、 それが私たちがチャット・インターフェースを通じてした最初の質問だった。
14:29.970 --> 14:32.970
そして今、 私たちはこれを再び一つにすることができる。
14:32.970 --> 14:34.260
これは、 あなたにとって馴染み深いものだろう。
14:34.290 --> 14:37.320
結果をマークダウンでストリームバックする。
14:37.320 --> 14:40.770
つまり、 OpenAI chat dot completions dot createだ。
14:40.770 --> 14:41.880
我々はモデルにパスを出す。
14:41.880 --> 14:43.350
大物を狙うんだ。
14:43.530 --> 14:44.820
プロンプトを使うんだ。
14:44.820 --> 14:45.840
温度を設定した。
14:45.840 --> 14:47.250
私たちはストリーム=トゥルーと言う。
14:47.250 --> 14:49.680
それがOpenAIのやり方だ。
14:49.830 --> 14:54.750
ええと、 それからこれは、 結果を再びストリームバックする方法です。
14:54.750 --> 14:57.720
マークダウンを扱っているので、 もう少し複雑だ。
14:57.720 --> 15:04.950
そのため、 基本的に反復ごとにマークダウンを更新するために、 ここではある種の特別なことをしなければならない。
15:04.980 --> 15:08.850
もし、 私たちがこのようにしなければならないと確信が持てないのであれば、 それを取り除いて違うやり方をしてみれば、
15:08.850 --> 15:11.190
何が起こるかすぐにわかるだろう。
15:11.220 --> 15:13.200
見栄えは良くない。
15:13.440 --> 15:15.720
それを実行しよう
15:15.720 --> 15:21.810
そして、 その結果がここにある。
15:22.500 --> 15:28.260
マークダウンが部分的にしか通過していないときに、 フリックが起こっているのがわかるだろう。
15:28.260 --> 15:33.600
そのため、 小見出しを表すハッシュが複数ある場合などを解釈している。
15:33.600 --> 15:37.050
まだ1回しかハッシュを受け取っていないし、 大きなヘディングが来ると思っている。
15:37.110 --> 15:42.660
少なくとも、 マークダウンが表示されるときにチカチカと点滅していたのは、 一時的に見たことだと思う。
15:42.660 --> 15:55.020
しかし、 その最後には、 もちろん、 とてもきれいに構成された回答が返ってくる。
15:55.740 --> 15:56.460
分かった。
15:56.460 --> 16:04.140
これで、 さまざまなAPIについて理解していただけたと思う。
16:04.170 --> 16:13.200
そして、 次のビデオでは、 実際に2、 3人のLLMがお互いに会話をする予定だ。
16:13.200 --> 16:14.340
それではまた。

859
week5/community-contributions/subtitles/srts/59166481/ko_KR.srt

@ -0,0 +1,859 @@
WEBVTT
00:00.860 --> 00:05.330
우리가 좋아하는 장소에 다시 모였네요 주피터 연구소
00:05.330 --> 00:07.310
몇 주 준비됐죠
00:07.340 --> 00:09.620
둘째 주에는 운동하고요
00:09.620 --> 00:14.930
2주 차 폴더로 가서 2주 차 첫날을 열어보죠
00:15.230 --> 00:18.230
자, 시작할게요
00:18.230 --> 00:26.990
첫째 주에 다중 프런티어 LMS를 사용했죠 채팅방 사용자 인터페이스를 통해서요 웹을 통한
00:26.990 --> 00:32.990
사용법이죠 API를 통해 OpenAI API에 연결했어요
00:33.020 --> 00:39.890
그래서 오늘은 안트로픽과 구글의 API를 통합해 오픈AI 사용 기술에 추가할
00:39.890 --> 00:41.090
거예요
00:41.960 --> 00:47.630
다시 한번 말씀드리지만 계속 그 얘기 하면 절 죽이실 거잖아요
00:47.630 --> 00:50.300
여기에 열쇠를 꽂아두죠
00:50.300 --> 00:55.850
오픈라이 키를 설정할 수 있죠 아마 지난주에 이미 했겠죠 인류애
00:55.850 --> 00:58.460
키와 구글 제미니 키를요
00:58.490 --> 01:05.330
하지만 구글 키를 설정하는 데 더 많은 모험이 있다는 걸 명심하세요
01:05.390 --> 01:09.410
설정이 끝나면 창조하는 거죠
01:09.470 --> 01:11.330
파일을 생성했어야 해요
01:11.480 --> 01:15.170
그 형태로 열쇠가 있는지 확인하세요
01:15.560 --> 01:21.500
그렇게 하는 대신 이 셀에서 키를 입력하면 돼요
01:21.500 --> 01:24.020
그렇게 할 수 있어요
01:24.020 --> 01:26.270
보안상 권장할 수 없는 일이죠
01:26.270 --> 01:30.350
언젠가 이걸 공개해서 다른 사람들이 열쇠를 보게 되면요
01:30.380 --> 01:32.300
서론은 그만하죠
01:32.330 --> 01:33.800
수입품 검사를 해 보죠
01:33.800 --> 01:37.400
환경 변수를 설정하는 코드 블록을 실행해보죠
01:37.400 --> 01:38.900
잘 아시네요
01:38.900 --> 01:46.280
이 셀에서 오픈AI API 연결을 설정하기 위해 OpenAI에
01:46.280 --> 01:49.400
같은 전화를 걸었어요
01:49.400 --> 01:55.790
클로드 비트도 비슷하고 제미니 비트는 구글에서 약간 다르게
01:55.790 --> 01:56.840
만들었죠
01:56.960 --> 02:04.220
이런 식으로 반이나 어느 정도 비슷한 명령을 이 세 가지에 사용하고 있어요
02:04.730 --> 02:05.510
02:05.510 --> 02:11.420
Lms가 잘하는 것들을 많이 보았고 몇 가지 실행되는 것들을 보았습니다 하지만 대부분은 Lms가
02:11.420 --> 02:13.160
잘하는 것들이죠
02:13.190 --> 02:17.600
한 가지 잘 안 되는 건 농담을 하는 거예요
02:17.600 --> 02:24.080
아주 딱 맞는 문맥을 주면 그 안에서 농담을 구성해야 해요
02:24.260 --> 02:28.610
이건 상업적인 예는 아니지만 재미를 위한 방법이고
02:28.610 --> 02:30.980
API로 경험을 쌓는 거죠
02:31.040 --> 02:34.850
API 상에서 농담을 해 줄 llms를 모실 거예요
02:35.120 --> 02:36.770
어떤 정보를 제공하죠?
02:36.770 --> 02:37.550
API 하나를 보내죠
02:37.580 --> 02:41.750
일반적으로 사용하고 싶은 모델의 이름을 항상 지정해요
02:41.750 --> 02:45.380
시스템 메시지와 사용자 메시지를 주로 제공하죠
02:45.380 --> 02:48.950
전반적인 컨텍스트를 제공하는 시스템 메시지에 아주 익숙하죠
02:48.950 --> 02:52.340
사용자 메시지가 실제 프롬프트죠
02:52.550 --> 02:54.410
다른 특징도 있어요
02:54.410 --> 02:55.700
다른 방법도 있어요
02:55.730 --> 03:00.890
온도라는 걸 통과시킬 수 있어요 0에서 1 사이죠 보통 좀 더 무작위적인
03:00.890 --> 03:09.960
창의적 출력 출력을 뜻합니다 0은 가장 가능성이 낮은 집중된 결정론적 반복 가능 설정이고요
03:10.320 --> 03:14.250
여러분이 제공할 수 있는 또 다른 매개 변수죠
03:14.280 --> 03:19.470
이 경우에 시스템 메시지를 설정하겠습니다 농담을 잘 하는 비서라고 하는
03:19.470 --> 03:20.010
거죠
03:20.010 --> 03:26.670
데이터 과학자들을 위해 가벼운 농담을 할 거예요
03:26.670 --> 03:30.000
당신과 내가 되겠죠
03:30.660 --> 03:35.850
여기 이 구조는 여러분에게 아주 익숙하길 바라요
03:35.850 --> 03:43.410
여기서 프롬프트들을 목록에 넣습니다. 시스템과 사용자 요소 각각에 역할을 설정하는데,
03:43.410 --> 03:44.910
이 두 요소에서요.
03:44.940 --> 03:49.860
이 리스트를 살펴보면, 설명하지 않아도 될 것 같네요. 이제 익숙해졌으니까요.
03:50.040 --> 03:55.080
말씀드렸듯이 이 값은 여기 이 역할이 시스템 또는 유저가 될 수 있어요
03:55.080 --> 03:56.070
곧 알게 될 거예요
03:56.070 --> 03:57.570
조수라고도 할 수 있죠
03:57.570 --> 03:59.760
시스템 사용자나 보조가 될 수 있죠
03:59.760 --> 04:04.150
그리고 이번 주 후반에는 다른 것도 넣을 수 있을 거예요
04:04.240 --> 04:04.990
그래서요?
04:05.020 --> 04:09.790
하지만 지금은 시스템과 사용자를 우리가 사용할 두 역할로 기억해야 해요
04:09.790 --> 04:12.610
그래서 그걸 Put 프롬프트 목록에 넣었어요
04:13.480 --> 04:16.570
그 전에 감옥을 처리해야 하고요
04:16.570 --> 04:18.790
그 전에 이 감방을 실행했어요
04:18.790 --> 04:20.350
네, 괜찮았어요
04:20.350 --> 04:20.770
시작할게요
04:20.800 --> 04:21.790
다시 해 보죠
04:21.790 --> 04:22.840
저 감방을 처형해요
04:22.840 --> 04:23.860
이 감방을 실행해요
04:23.890 --> 04:25.720
좋아요
04:25.750 --> 04:33.280
오래된 GPT 모델부터 살펴보죠 GPT 3 5 터보 엔진인데 최근에 나온 최고의 개척
04:33.280 --> 04:34.390
모델이죠
04:34.390 --> 04:35.830
하지만 이미 지난 일이에요
04:35.830 --> 04:37.330
하지만 이걸 쓸 거예요
04:37.330 --> 04:44.680
오픈AI에서 이제 익숙해진 API는 OpenAI.챗, .완성 .Creetions,
04:44.680 --> 04:53.500
.Create 완성 이 API의 이름이죠 기존 프롬프트 모음을 가져다가 대화를 완성하기
04:53.500 --> 04:59.530
위해 텍스트 생성을 시도하는 거예요
04:59.800 --> 05:07.980
생성을 통해 모델과 메시지를 전달합니다 여러분이 익숙한 형식으로 전달하죠
05:08.010 --> 05:09.750
어디 보죠
05:09.780 --> 05:15.870
응답을 받았을 때 우리가 하는 건 완료 .선택입니다 가능한
05:15.870 --> 05:18.030
선택 목록이죠
05:18.030 --> 05:19.980
하지만 한 가지 요소만 넣을 거예요
05:19.980 --> 05:23.790
여러 개의 선택을 반환하도록 지정할 수 있는 방법이 있어요
05:23.790 --> 05:28.740
하지만 아직 안 했으니 하나만 돌아오고, 당연히 0 위치에 있죠
05:28.740 --> 05:35.550
완료 .선택 .0.Message는 메시지를 반환하고 콘텐츠는 문자열로 반환하죠
05:35.760 --> 05:37.770
그걸 받아서 프린트하는 거죠
05:37.770 --> 05:39.360
어떤 장난인지 볼까요?
05:39.360 --> 05:42.690
데이터 과학자를 위한 농담으로 GPT 3. 5 터보가 뭘 떠올리는지 보죠
05:42.720 --> 05:43.680
시작할게요
05:44.010 --> 05:48.000
왜 데이터 과학자들이 컴퓨터와 분리했을까요?
05:48.000 --> 05:52.020
둘의 복잡한 관계를 감당하지 못했죠
05:52.830 --> 05:53.970
알았어요
05:54.000 --> 05:56.250
알아요, 알아요
05:56.280 --> 05:58.770
세상에서 가장 웃긴 농담은 아니지만 끔찍하지도 않아요
05:58.800 --> 06:03.540
데이터 과학자들은 사물의 관계를 모델로 삼지만 그 복잡한 관계를 다룰
06:03.540 --> 06:04.200
수 없어요
06:04.200 --> 06:04.800
좋아요
06:04.800 --> 06:13.140
GPT 3. 5 터보에서 나온 농담치고는 아주 괜찮은데요
06:13.200 --> 06:17.010
GPT 4 미니는 더 잘할 수 있을까요?
06:17.160 --> 06:21.450
이번엔 API 사용을 살짝 확장할게요
06:21.600 --> 06:26.340
온도도 포함해서 0에서 1 사이의 숫자를 입력할 수 있어요
06:26.340 --> 06:29.220
창의성은 1점, 최저점은 0점이죠
06:29.490 --> 06:34.980
이 중엔 완료 선택지와 메시지 콘텐츠도 없어요
06:34.980 --> 06:36.720
이것도 아주 익숙할 거예요
06:36.750 --> 06:38.970
잘 달리는지 보죠
06:39.570 --> 06:42.060
데이터 과학자가 왜 통계학자랑 헤어졌죠?
06:42.060 --> 06:44.670
너무 못됐다고 생각했거든요
06:44.700 --> 06:46.230
꽤 괜찮은 농담이네요
06:46.230 --> 06:47.490
괜찮은 것 같아요
06:47.490 --> 06:49.950
그 정도는 괜찮은 농담이죠
06:49.980 --> 06:54.990
llm은 이런 농담 잘 못한다고 한 게 너무 심했나 봐요 그 정도면 괜찮은 농담인데요
06:55.170 --> 07:02.610
GPT 4에 작은 박수를 보내 줘야 할 것 같네요
07:03.030 --> 07:09.160
GPT 4 미니와 더 큰 사촌인 GPT 4를 써 보죠
07:09.190 --> 07:12.130
GPT 4의 맥시 버전이네요
07:12.160 --> 07:14.260
덩치 큰 친구요
07:14.260 --> 07:16.000
우리가 물어볼 거예요
07:16.030 --> 07:19.210
온도는 똑같이 유지해야 해요 그래야 실수가 없죠
07:19.240 --> 07:21.160
농담 삼아 물어보죠
07:21.190 --> 07:23.230
둘, 어떻게 되나 보죠
07:24.250 --> 07:27.130
데이터 과학자가 왜 파산했죠?
07:27.130 --> 07:30.850
어레이에서 캐시를 못 찾았거든요
07:32.410 --> 07:35.560
Get up이 아니었다면 더 나았을 거예요
07:35.560 --> 07:38.650
캐시는 못 찾았어요
07:38.650 --> 07:39.910
괜찮을 거예요
07:40.810 --> 07:42.280
내가 뭘 놓쳤나 봐요
07:42.310 --> 07:45.280
잘 모르겠어요
07:45.550 --> 07:47.380
다른 걸 해 보죠
07:47.560 --> 07:52.480
아까 했던 대로 온도를 조금 낮춰서 어떻게 되는지 보죠
07:52.990 --> 07:56.560
왜 과학자들은 물류 회귀 모델과 헤어졌을까요?
07:56.590 --> 07:58.390
맞는 걸 못 찾았거든요
07:58.600 --> 08:00.130
괜찮은 생각이네요
08:00.130 --> 08:00.970
그건 괜찮아요
08:00.970 --> 08:06.160
미니와 맥시 중에 뭐가 더 좋은지 모르겠지만 꽤
08:06.160 --> 08:08.860
튼튼한 개그 소재예요
08:08.860 --> 08:12.640
이건 확실히 통과라고 할 수 있겠네요
08:13.810 --> 08:14.800
좋아요
08:14.830 --> 08:17.050
클로드 3. 5로 넘어가죠
08:17.080 --> 08:17.680
소네트요
08:17.950 --> 08:21.430
API가 눈에 띄게 비슷하죠
08:21.430 --> 08:22.270
좋은 소식이죠
08:22.270 --> 08:25.030
기본적으로 아주 비슷해요
08:25.060 --> 08:26.530
차이점이 몇 가지 있어요
08:26.530 --> 08:31.510
시스템 메시지를 개별 특성으로 전달해야 해요
08:31.510 --> 08:36.430
메시지는 다시 이 데크 목록이에요
08:36.430 --> 08:42.550
물론 시스템 메시지의 첫 항목은 없어요 이미 별도로 넘겼으니까요
08:42.910 --> 08:45.310
그건 미묘한 차이죠
08:45.340 --> 08:52.360
최대 토큰은 오픈AI API 선택 사항으로 최대 토큰의 수를 지정하는 데 사용되죠
08:52.360 --> 08:55.180
클로드도 그래야 할 거예요
08:55.180 --> 08:56.860
그래서 여기 들어있군요
08:56.860 --> 08:59.200
하지만 그 외에는 전부 비슷해 보여야 해요
08:59.230 --> 09:03.250
API 자체는 외우기가 좀 더 쉬워요 비트
09:03.250 --> 09:05.740
클로드 점 메시지 점 만들기예요
09:05.740 --> 09:11.470
약간 짧지만 오픈AI 챗GPT 완성본과 상당히 비슷해요
09:11.710 --> 09:13.150
저기 있네요
09:13.180 --> 09:17.830
응답이 오면 메시지 콘텐츠 get이 0이 되죠
09:17.860 --> 09:22.630
첫 번째 것만 요청하는데 하나만 나올 거예요 왜냐하면 .text만
09:22.630 --> 09:28.750
요청했거든요 OpenAI의 .content와 같은 거죠
09:28.780 --> 09:30.100
어디 보죠
09:30.100 --> 09:35.020
클로드의 API 프레임워크에 유용하면 좋겠네요
09:35.020 --> 09:38.080
클로드가 농담을 어떻게 하는지 보죠
09:39.910 --> 09:40.630
09:40.660 --> 09:43.540
데이터 과학자들을 위한 가벼운 농담이 있어요
09:43.570 --> 09:46.210
데이터 과학자들은 왜 사랑하는 사람과 헤어질까요?
09:46.240 --> 09:51.310
관계에 너무 많은 변화가 있었고 그걸 정상화할 좋은 방법을 찾지 못했어요
09:51.970 --> 09:53.530
네, 괜찮아요
09:53.530 --> 09:59.110
좀 더 너디스러운 것 같아요 데이터 과학에 가깝죠
09:59.110 --> 10:03.640
비트만 조금 less지만 나쁘지 않아요
10:03.640 --> 10:07.570
글쎄요, GPT 4보다 그게 더 좋은지는 취향의 문제겠죠
10:07.900 --> 10:10.100
완벽한 농담이죠
10:10.220 --> 10:14.210
폭발할 만큼 웃기진 않지만 아주 튼튼해요
10:14.210 --> 10:15.440
나쁘지 않아요
10:15.950 --> 10:16.550
10:16.610 --> 10:22.220
어쨌든 핵심은 API와 농담에 관한 겁니다 늘 재미있긴 하지만요
10:22.250 --> 10:24.800
지금부터 보여드릴 건 스트리밍에 관한 거예요
10:24.890 --> 10:29.090
스트리밍 예시를 보기 전에 스트리밍에 대해 잠깐 얘기했었죠?
10:29.090 --> 10:33.140
전에 했던 건 좀 복잡해 보였어요 비트코인 가격 인하를 다시
10:33.140 --> 10:36.470
해야 하고 그 가격 인하에 대처해야 했으니까요
10:36.470 --> 10:40.280
마크다운 비트를 다루는 게 아니라서 더 간단해 보이죠
10:40.280 --> 10:46.730
같은 모델인 클로드 3. 5에 다시 농담을 요청할게요 이번엔 결과를 스트리밍할게요
10:46.730 --> 10:53.090
오픈AI에 스트림하라고 요청했을 때 다른 특성 스트림이 true인 것을 추가한 것을
10:53.090 --> 10:54.470
기억하시나요?
10:54.470 --> 10:56.570
그건 스트리밍 모드였다는 뜻이죠
10:56.570 --> 10:58.490
클로드는 조금 다르죠
10:58.490 --> 11:00.380
추가 속성은 없어요
11:00.380 --> 11:06.440
.Stream 메서드를 호출해요 .Create 메서드 대신에요
11:06.440 --> 11:09.020
접근법이 약간 달라요
11:09.020 --> 11:13.790
Anthropic과 OpenAI의 스트리밍 방식에는 미묘한 차이가 있어요
11:13.790 --> 11:16.430
클로드 메시지의 흐름이라고 부르기로 했어요
11:16.460 --> 11:17.840
그 외에는 똑같아요
11:17.840 --> 11:22.430
돌아온 결과로는 스트림으로서의 결과를 가진 컨텍스트 관리자를 사용하죠
11:22.610 --> 11:26.960
스트림 텍스트 스트림의 텍스트죠
11:26.960 --> 11:31.550
오픈아이는 그에 대한 답장이었죠
11:31.550 --> 11:35.990
오픈AI는 비트 백 결과를 읽는 방식이 조금 달랐어요
11:35.990 --> 11:37.040
하지만 저기 있네요
11:37.040 --> 11:41.420
각각의 덩어리를 받아서 프린트하죠
11:41.540 --> 11:46.460
이렇게 하는 이유는 한 줄에 한 덩어리가 찍히지 않도록 하기 위해서죠
11:46.670 --> 11:48.170
안 그러면 읽기 힘들었을 거예요
11:48.170 --> 11:49.490
이게 더 보기 좋을 거예요
11:49.490 --> 11:56.510
클로드 3. 5 소네트가 주피터랩으로 스트리밍해 줄 농담을 보죠
11:57.200 --> 11:57.800
됐어요
11:57.800 --> 11:58.040
봤죠?
11:58.040 --> 11:59.060
스트리밍이에요
11:59.330 --> 12:01.580
데이터 사이언스에게는 가벼운 농담이 있죠
12:01.610 --> 12:03.110
왜 똑같은 농담을 해요?
12:03.110 --> 12:08.690
똑같은 농담 같지만 브람스 드럼에 추가된 거죠
12:08.840 --> 12:12.000
마지막에 폭발하는 게 좋았어요
12:12.000 --> 12:14.670
왜 전보다 토큰을 더 많이 요청했을까요?
12:14.700 --> 12:15.180
어디 보죠
12:15.210 --> 12:15.630
아뇨
12:15.630 --> 12:16.350
똑같아요
12:16.650 --> 12:17.730
12:17.760 --> 12:19.020
설명이 나오죠
12:19.020 --> 12:22.170
이 농담은 데이터 과학에서 흔한 통계 개념을 이용한 거예요
12:22.260 --> 12:27.060
좀 따분하지만 데이터에 밝은 관객들은 웃을 거예요
12:27.060 --> 12:32.070
데이터에 밝은 분들이니 판단해 주실 수 있겠죠
12:32.100 --> 12:34.440
여러분은 웃으셨나요?
12:35.220 --> 12:36.540
넘어가죠
12:36.570 --> 12:39.120
제미니는 구조가 달라요
12:39.120 --> 12:41.370
사실 비트는 좀 달라요
12:41.400 --> 12:48.780
구글 크레딧은 토큰을 설정하는 기능이 훨씬 복잡하지만 API 설정은
12:48.780 --> 12:50.580
좀 더 간단해요
12:50.670 --> 12:56.850
여기 보이는 것처럼 제너레이티브 모델 객체를 생성해서 모델 이름을 입력해요 제미니 1.
12:56.850 --> 12:59.550
5 플래시를 쓸 거예요
12:59.580 --> 13:03.510
제미니 1.5 플래시의 컨텍스트 윈도우가 얼마나 컸는지 기억하시죠?
13:03.540 --> 13:04.680
기억할 수 있겠어요?
13:04.710 --> 13:07.050
앞서 봤던 표에 있었죠?
13:07.050 --> 13:10.380
놀랍게도 백만 토큰이었죠
13:10.410 --> 13:11.450
백만 토큰요
13:11.480 --> 13:13.310
750,000단어요
13:13.340 --> 13:15.500
제미니 1.5 플래시예요
13:15.950 --> 13:23.270
이 객체를 만들 때 시스템 지시를 전달해요 그리고 gemini.generate_content를 호출해
13:23.270 --> 13:26.420
사용자 프롬프트로 콘텐츠를 생성하죠
13:26.420 --> 13:28.520
응답 .text예요
13:28.520 --> 13:35.090
요청과 응답 둘 다에서 번거로움이 덜하죠 API가 좀 더 간단해요 하지만
13:35.120 --> 13:37.520
농담의 질을 한번 보죠
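The Gemini call described above can be sketched like this, assuming the `google-generativeai` package: the system instruction is passed once when the model object is created, and `generate_content` takes just the user prompt, returning `response.text`. The `openai_style` shim below is purely hypothetical, added so the shape difference can be exercised without an API key.

```python
# Real usage (sketch; assumes google-generativeai is installed/configured):
#
#   import google.generativeai as genai
#   gemini = genai.GenerativeModel(
#       model_name="gemini-1.5-flash",        # the 1M-token-context model
#       system_instruction=system_message,    # set once, at construction
#   )
#   response = gemini.generate_content(user_prompt)
#   print(response.text)                      # no choices[0].message here
#
# Hypothetical shim: adapt an OpenAI-style messages list to a backend
# that takes (system, user) separately, as Gemini does.
def openai_style(messages, call_backend):
    """messages: OpenAI-style dicts; call_backend(system, user) -> text."""
    system = next(m["content"] for m in messages if m["role"] == "system")
    user = next(m["content"] for m in messages if m["role"] == "user")
    return call_backend(system, user)
```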
13:37.670 --> 13:42.200
중요한 건, 데이터 과학자들이 왜 통계 전문가와 헤어졌느냐죠
13:42.200 --> 13:45.590
P 값에 대한 견해가 달랐기 때문이죠
13:47.420 --> 13:48.020
13:48.800 --> 13:52.310
전 데이터 과학 쪽을 봐요
13:52.310 --> 13:53.810
잘 모르겠어요
13:53.900 --> 13:55.070
13:55.370 --> 13:57.380
이해하실지 모르겠네요
13:57.380 --> 13:59.540
전 졸린 것 같아요
13:59.540 --> 14:01.310
그런 경우라면 어떻게든 절 가리키겠죠
14:01.310 --> 14:05.450
근데 그 농담의 재밌는 면을 잘 모르겠어요
14:05.450 --> 14:13.440
그래서 제 생각에는 제미니 1.5 플래시가 확실히 뒤처졌다고 봐요 유머 면에서 말이죠
14:14.220 --> 14:15.060
좋아요
14:15.090 --> 14:18.960
아무튼 본격적으로 GPT 4로 돌아가 보죠
14:19.170 --> 14:20.910
다들 같은 질문을 했죠
14:20.910 --> 14:22.410
정말 도움이 되는 조수네요
14:22.440 --> 14:25.950
사업상의 문제가 LLM 해결책에 적합한지 어떻게 판단하죠?
14:25.950 --> 14:29.790
채팅 인터페이스를 통해 가장 먼저 받은 질문이었죠
14:29.970 --> 14:32.970
이제 다시 합칠 수 있어요
14:32.970 --> 14:34.260
이런 거 익숙하죠?
14:34.290 --> 14:37.320
마크다운으로 결과를 스트리밍해 보여드릴게요
14:37.320 --> 14:40.770
openai.chat.completions.create죠
14:40.770 --> 14:41.880
모델을 통과시키죠
14:41.880 --> 14:43.350
큰 녀석을 노릴 거예요
14:43.530 --> 14:44.820
프롬프트도 사용해요
14:44.820 --> 14:45.840
온도를 설정했어요
14:45.840 --> 14:47.250
스트리밍은 true라고 하죠
14:47.250 --> 14:49.680
오픈AI에서는 이런 식으로 하죠
14:49.830 --> 14:54.750
그리고 이런 식으로 결과를 다시 스트리밍하죠
14:54.750 --> 14:57.720
마크다운을 스트리밍하는 중이라 좀 더 복잡해요
14:57.720 --> 15:03.390
그래서 우린 여기서 특별한 작업을 해야 합니다 각 반복에서 마크다운을 새로
15:03.390 --> 15:04.950
고침하기 위해서요
15:04.980 --> 15:08.850
이렇게 해야 하는지 확신이 안 들면 저걸 빼고 다르게 해 보세요
15:08.850 --> 15:11.190
그럼 바로 어떻게 되는지 보일 거예요
15:11.220 --> 15:13.200
보기 안 좋을 거예요
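The streaming-markdown trick described here can be sketched as: accumulate each delta, strip stray code fences, and redraw the whole reply every iteration. The OpenAI call is shown only in comments; `stream_and_clean` is a hypothetical helper over any sequence of deltas so the cleanup logic is visible on its own.

```python
# In the notebook the deltas come from OpenAI (sketch, not run here):
#
#   stream = openai.chat.completions.create(
#       model="gpt-4o", messages=prompts, temperature=0.7, stream=True)
#   deltas = (chunk.choices[0].delta.content for chunk in stream)
#
# Each iteration re-renders the WHOLE accumulated reply, after stripping
# the ``` fences (and the word "markdown") the model sometimes wraps its
# answer in -- otherwise the display flickers on half-formed markup.
def stream_and_clean(deltas):
    reply = ""
    cleaned = ""
    for delta in deltas:
        reply += delta or ""                       # a delta can be None
        cleaned = reply.replace("```", "").replace("markdown", "")
        # in JupyterLab: update_display(Markdown(cleaned), display_id=...)
    return cleaned
```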
15:13.440 --> 15:15.720
그걸 실행해 보죠
15:15.720 --> 15:21.810
결과가 나왔네요 아주 좋아 보이죠
15:22.500 --> 15:28.260
아직 일부만 출력됐을 때 화면이 살짝 튀는 게 보이죠
15:28.260 --> 15:33.600
즉 부제목을 나타내는 여러 개의 해시를 해석하는 것이죠
15:33.600 --> 15:37.050
해시 하나만 받았는데 큰 게 오는 줄 아나 봐요
15:37.110 --> 15:41.430
마크다운이 일어나면서 깜빡이는 현상이 잠깐 있었던
15:41.430 --> 15:42.660
것 같아요
15:42.660 --> 15:50.730
하지만 마지막에 우린 아주 잘 구성된 응답을 얻습니다 잘 구조화되어 마크다운
15:50.730 --> 15:55.020
스트림 백에서 완벽한 포맷이죠
15:55.740 --> 15:56.460
좋아요
15:56.460 --> 16:04.140
다양한 API에 대한 감각을 얻었고 농담으로 재미도 좀 봤죠
16:04.170 --> 16:12.150
다음 비디오에서 할 것은 몇 개의 LLM이 서로 대화하게 하는 겁니다 재미있을
16:12.150 --> 16:13.200
거예요
16:13.200 --> 16:14.340
그때 봐요

106
week5/community-contributions/subtitles/srts/59166847/en_US.srt

@ -0,0 +1,106 @@
WEBVTT
00:00.860 --> 00:05.810
Well, they say that time flies when you're having fun, and it certainly feels like time is flying.
00:05.840 --> 00:08.120
Uh, hopefully for you as well as for me.
00:08.270 --> 00:11.150
Uh, you have reached the 20% milestone.
00:11.150 --> 00:18.170
You're a fifth of the way towards being an expert in all things LLM, and I hope that it feels that
00:18.170 --> 00:18.380
way.
00:18.380 --> 00:23.420
I hope you're feeling the the sense of accomplishment and the way that you are leveling up.
00:23.420 --> 00:28.580
Every time we get to these summary pages and think about all of the new skills you've acquired.
00:28.580 --> 00:32.870
So to recap them, I know I keep doing this, but I do think it's important.
00:32.900 --> 00:39.980
You can of course describe transformers and tokens and context windows and API prices and all of
00:40.010 --> 00:40.670
that.
00:40.940 --> 00:48.200
You can code now pretty confidently, I would say with the different APIs for the frontier models that
00:48.200 --> 00:54.710
you know well, and you can build an AI chatbot assistant, including an interactive UI.
00:54.710 --> 00:56.150
And I promise you it would be easy.
00:56.150 --> 00:57.410
And it was easy.
00:57.560 --> 01:01.190
You hopefully you weren't expecting it to be quite as easy as it was.
01:01.190 --> 01:04.010
The one line of code is insane.
01:04.100 --> 01:09.080
Uh, that is the, uh, the the wonder, the magic of Gradio.
01:09.110 --> 01:17.630
So next time we change subject to something called tools, which is a particularly interesting capability.
01:17.630 --> 01:26.450
It allows us to give LLMs powers to do something, to run some functionality that we will arm them
01:26.450 --> 01:33.470
with, so we can write some code, and then we can give the LLM the ability to use it.
01:33.500 --> 01:35.630
Now that might sound a bit spooky.
01:35.660 --> 01:40.730
We're actually going to build something and then sort of give the LLM the powers to do it.
01:40.760 --> 01:42.770
What, like run code on our box?
01:42.770 --> 01:44.810
We're going to let them do that.
01:44.990 --> 01:51.110
Um, and unfortunately, I will warn you, this is one of those things that that sounds very magical.
01:51.110 --> 01:54.590
And then a bit like the wizard behind the curtain.
01:54.590 --> 02:00.140
Uh, when you find out what this actually means, it's a little bit less magical when you know the ingredients
02:00.140 --> 02:02.930
to the incredible, uh, dish.
02:02.960 --> 02:05.810
Suddenly, uh, you think, is that all?
02:06.590 --> 02:12.770
But for the moment, you can you can live in awe of the fact that next time we are going to empower,
02:12.800 --> 02:21.040
uh, frontier model with the ability to run code on our box, which is going to do things, uh, and
02:21.190 --> 02:26.380
I'm going to reveal the, the secret sauce behind that.
02:26.530 --> 02:31.570
And then perhaps it won't be quite as mysterious as it sounds, but I'm excited to take you through
02:31.570 --> 02:31.960
that.
02:31.960 --> 02:34.840
That's what we're doing next time, and I will see you then.

91
week5/community-contributions/subtitles/srts/59166847/ja_JP.srt

@ -0,0 +1,91 @@
WEBVTT
00:00.860 --> 00:05.810
まあ、 楽しんでいるときは時間が過ぎるのが早いというし、 確かに時間は過ぎているように感じる。
00:05.840 --> 00:08.120
ああ、 できれば僕だけでなく君にとってもね。
00:08.270 --> 00:11.150
ええと、 20%のマイルストーンに到達しましたね。
00:11.150 --> 00:18.380
あなたはLLMのあらゆることの専門家になるための5分の1の道のりを歩んできた。
00:18.380 --> 00:23.420
達成感やレベルアップしていく様子を感じていることを願う。
00:23.420 --> 00:28.580
このまとめページにたどり着くたびに、 あなたが身につけた新しいスキルのすべてについて考える。
00:28.580 --> 00:32.870
だから、 彼らを総括するために、 何度も言うようだけど、 これは重要なことだと思うんだ。
00:32.900 --> 00:40.670
もちろん、 トランスフォーマー、 トークン、 コンタクト、 ウィンドウ、 APIの価格、 その他すべてを説明することができる。
00:40.940 --> 00:48.200
あなたがよく知っているフロンティア・モデル用のさまざまなAPIを使えば、 かなり自信を持ってコーディングできるようになったし、
00:48.200 --> 00:54.710
対話型UIを含むAIチャットボット・アシスタントを構築できるようになった。
00:54.710 --> 00:56.150
それは簡単なことだ。
00:56.150 --> 00:57.410
それは簡単なことだった。
00:57.560 --> 01:01.190
これほど簡単だとは思っていなかっただろう。
01:01.190 --> 01:04.010
この1行のコードは正気の沙汰ではない。
01:04.100 --> 01:09.080
それがグラディオの素晴らしさであり、 マジックなんだ。
01:09.110 --> 01:17.630
そこで次回は、 特に興味深い能力であるツールというものに話題を変えよう。
01:17.630 --> 01:33.470
LLMに何かをさせたり、 機能を実行させたりする権限を与えることができる。
01:33.500 --> 01:35.630
ちょっと不気味に聞こえるかもしれない。
01:35.660 --> 01:40.730
私たちは実際に何かを作り、 LLMにそのための権限を与えるつもりです。
01:40.760 --> 01:42.770
私たちのボックスでコードを実行するようなもの。
01:42.770 --> 01:44.810
そうさせるつもりだ。
01:44.990 --> 01:51.110
残念ながら、 これはとても魔法のように聞こえることのひとつなんだ。
01:51.110 --> 01:54.590
そして、 カーテンの向こうの魔法使いのように。
01:54.590 --> 02:00.140
これが実際に何を意味するのかがわかると、 信じられないような、 ええと、 料理の材料がわかると、
02:00.140 --> 02:02.930
ちょっと不思議な感じがしなくなる。
02:02.960 --> 02:05.810
突然、 ああ、 これで終わりか?
02:06.590 --> 02:12.770
しかし当面は、 畏敬の念を抱くことができるだろう。
02:12.800 --> 02:26.380
次回は、 フロンティア・モデルに力を与え、 我々のボックス上でコードを実行できるようにする。
02:26.530 --> 02:31.960
そして、 恐らく、 それほどミステリアスなものにはならないだろうが、 それを皆さんにお見せできることを楽しみにしている。
02:31.960 --> 02:34.840
それが次回の予定だ。

97
week5/community-contributions/subtitles/srts/59166847/ko_KR.srt

@ -0,0 +1,97 @@
WEBVTT
00:00.860 --> 00:05.810
즐거운 시간은 쏜살같이 흘러간다고들 하잖아요 정말 시간이 쏜살같이 흘러가는 것 같아요
00:05.840 --> 00:08.120
저와 여러분을 위해서요
00:08.270 --> 00:11.150
20% 이정표에 도달했어요
00:11.150 --> 00:18.380
LLM에 관한 모든 것의 5분의 1을 알게 되었어요 그렇게 느끼셨으면 좋겠어요
00:18.380 --> 00:23.420
여러분이 성취감을 느끼길 바라요 레벨을 높이는 과정도요
00:23.420 --> 00:28.580
요약 페이지에 도달할 때마다 여러분이 익힌 새로운 기술을 모두 떠올리게 되죠
00:28.580 --> 00:32.870
요약하자면, 계속 이러지만 중요한 것 같아요
00:32.900 --> 00:40.670
트랜스포머, 토큰, 컨텍스트 윈도우, API 가격 등을 설명하실 수 있어요
00:40.940 --> 00:48.200
꽤 자신 있게 코드를 작성할 수 있어요 여러분이 잘 아는 프론티어 모델에 대한 다양한
00:48.200 --> 00:54.710
API로요 대화형 UI를 포함한 인공지능 챗봇 비서를 만들 수도 있죠
00:54.710 --> 00:56.150
쉬울 거라고 제가 장담했었죠
00:56.150 --> 00:57.410
식은 죽 먹기였죠
00:57.560 --> 01:01.190
이렇게 쉬울 줄은 몰랐길 바라요
01:01.190 --> 01:04.010
코드 한 줄이 정말 대단해요
01:04.100 --> 01:09.080
그게 바로 그래디오의 마법이죠
01:09.110 --> 01:17.630
다음엔 도구라는 것으로 주제를 바꾸죠 아주 흥미로운 기능이에요
01:17.630 --> 01:26.450
LLM이 뭔가 하도록 권한을 줍니다 일부 기능을 실행해 코드를
01:26.450 --> 01:33.470
작성할 수 있도록요 그런 다음 사용 권한을 주죠
01:33.500 --> 01:35.630
좀 으스스하게 들릴 수도 있어요
01:35.660 --> 01:40.730
실제로 뭔가를 구축하고 LLM에게 권한을 주는 거죠
01:40.760 --> 01:42.770
박스에 코드를 실행하는 거죠
01:42.770 --> 01:44.810
그렇게 하도록 둘 거예요
01:44.990 --> 01:51.110
불행히도 미리 경고하는데 마법처럼 들리는 그런 거예요
01:51.110 --> 01:54.590
커튼 뒤의 마법사 같은 거죠
01:54.590 --> 02:00.140
이게 실제로 무슨 뜻인지 알게 되면 그 놀라운 요리의 재료를 알게 되면
02:00.140 --> 02:02.930
마법 같은 느낌이 덜해져요
02:02.960 --> 02:05.810
갑자기 그게 다인가 싶더군요
02:06.590 --> 02:12.770
하지만 지금은 경외심을 가지셔도 좋습니다 다음번에는
02:12.800 --> 02:21.040
개척 모델에게 권한을 부여할 테니까요 컴퓨터에 코드를 실행할 수 있고
02:21.190 --> 02:26.380
그 뒤에 숨겨진 비밀도 밝혀낼 거예요
02:26.530 --> 02:31.960
그렇게 되면 생각만큼 신비롭진 않겠지만 여러분을 안내하게 돼서 기뻐요
02:31.960 --> 02:34.840
다음 시간에도 그렇게 할 거예요 그때 봐요

592
week5/community-contributions/subtitles/srts/59166915/en_US.srt

@ -0,0 +1,592 @@
WEBVTT
00:00.440 --> 00:03.560
Welcome back to the wonderful world of JupyterLab.
00:03.560 --> 00:06.830
And here we are in week two.
00:07.490 --> 00:09.110
Day three.
00:09.260 --> 00:11.990
Uh, bring up this notebook.
00:11.990 --> 00:18.080
So we're talking conversational AI, also known as chat bot, and we're going to get right into it.
00:18.110 --> 00:24.680
We start by doing our usual imports and we do our usual setting of our environment variables.
00:24.680 --> 00:27.620
And we initialize OpenAI.
00:27.650 --> 00:29.840
This time we will use OpenAI.
00:30.020 --> 00:34.310
And you can have it as an exercise to switch in other models if you'd like to do so.
00:34.670 --> 00:38.510
So uh, going to start with the basic system message.
00:38.510 --> 00:40.340
You are a helpful assistant.
00:40.970 --> 00:41.480
All right.
00:41.510 --> 00:45.800
Now I want to talk for a bit about, um, message structure.
00:45.980 --> 00:54.200
So first of all, uh, reminder of the structure of a prompt message to OpenAI.
00:54.320 --> 00:58.700
Uh, we've seen this many times now, and so you're probably thoroughly bored of me explaining it.
00:58.700 --> 00:59.350
There it is.
00:59.350 --> 01:00.010
One more time.
01:00.010 --> 01:00.730
You know it well.
01:00.760 --> 01:07.840
A list of dictionaries that gives the system and the user messages, and it can have an assistant responding and then
01:07.840 --> 01:09.220
the user and so on.
01:09.220 --> 01:12.910
And you may remember I mentioned there is something else to come, but for now.
01:12.910 --> 01:13.990
System user assistant.
01:13.990 --> 01:14.650
User assistant.
01:14.680 --> 01:16.960
User assistant user and so on.
01:17.470 --> 01:21.430
Uh, now we are going to write a function called chat.
01:21.430 --> 01:27.400
And that function chat is going to take two inputs message and history.
01:27.430 --> 01:34.930
Message represents the current, uh, message that is being being asked that chat needs to respond to.
01:34.960 --> 01:41.050
And history has the history of all prior messages, all prior exchanges.
01:41.050 --> 01:45.520
And the structure of history is going to look like this.
01:45.550 --> 01:50.140
It's going to be a list, a list that consists of lists.
01:50.140 --> 01:55.810
And these sub lists like this are simply what the user said and what the assistant replied, what the
01:55.810 --> 01:58.920
user said and what the assistant replied and so on.
01:59.340 --> 02:02.730
So why am I asking you to do that?
02:02.760 --> 02:07.830
Why are we going to write a function that looks like that with parameters, that arguments that look
02:07.860 --> 02:08.340
like this?
02:08.340 --> 02:15.870
The answer is because that is a particular type of function that Gradio expects for using with its chat
02:15.870 --> 02:16.710
user interfaces.
02:16.710 --> 02:22.260
And that's why Gradio expects us to write a function called chat that's going to take a message.
02:22.260 --> 02:27.630
It's going to take history in this structure, and it will return the next the response, the response
02:27.630 --> 02:28.590
to this chat.
02:28.590 --> 02:30.390
So that's why we're thinking about that format.
02:30.390 --> 02:39.420
So our job for this function is going to be convert this kind of style of message into this.
02:39.420 --> 02:46.860
So we're going to need to iterate row by row through this structure and build this structure that you
02:46.860 --> 02:47.940
see above.
02:47.970 --> 02:49.710
Hopefully that makes sense.
02:49.710 --> 02:53.400
If not, it will make sense when I show you what that looks like.
02:53.400 --> 02:56.730
So I'm defining a function called chat.
02:56.760 --> 03:03.450
It takes a message that we've got to respond to, an input message, and it takes the history of prior messages.
03:03.480 --> 03:09.090
So we first of all, we we set up our list of messages, which is going to be this guy.
03:09.090 --> 03:12.870
And we populate it with the system prompt at the very start.
03:12.900 --> 03:17.010
Of course, then we are going to iterate through history.
03:17.040 --> 03:20.460
Each element of history again is one of these lists with two values.
03:20.460 --> 03:24.000
So we're going to unpack that into user message assistant message.
03:24.000 --> 03:28.470
And then we append the user's message and the assistant message.
03:28.530 --> 03:30.390
Uh each each time.
03:30.390 --> 03:38.310
So each row from from history turns into two rows in this list.
03:38.310 --> 03:40.650
One for the user, one for the assistant.
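The conversion just described -- Gradio's list of [user, assistant] pairs into OpenAI's list of role/content dicts -- is small enough to sketch in full. This mirrors the transcript's logic; `build_messages` is a hypothetical name and the default system message is an assumption.

```python
def build_messages(message, history, system_message="You are a helpful assistant"):
    """Map Gradio chat history to OpenAI's messages format.

    history is a list of [user_message, assistant_message] pairs; each
    pair becomes two entries, and the new message goes on the end.
    """
    messages = [{"role": "system", "content": system_message}]
    for user_message, assistant_message in history:
        messages.append({"role": "user", "content": user_message})
        messages.append({"role": "assistant", "content": assistant_message})
    messages.append({"role": "user", "content": message})
    return messages
```

So each row of history does turn into two rows of the output, one user and one assistant, exactly as described.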
03:40.770 --> 03:42.480
Hopefully that makes complete sense.
03:42.480 --> 03:43.050
Now.
03:43.200 --> 03:44.250
If not, you can always.
03:44.280 --> 03:45.240
Oh well, you don't need to.
03:45.270 --> 03:47.370
I was going to say you can always put in some print statements.
03:47.430 --> 03:50.160
Uh, I had the foresight to put in some print statements myself.
03:50.160 --> 03:51.180
So we will see this.
03:51.210 --> 03:54.980
We're going to print the history And then we're going to print the messages.
03:54.980 --> 03:57.170
So we get to see that too.
03:57.530 --> 04:05.120
Um, and then the next line is very familiar to you for this particular chat, uh, method at this point,
04:05.150 --> 04:12.110
this function, sorry, at this point we are then going to take, um, this set of messages and we're
04:12.110 --> 04:14.300
going to call OpenAI with them.
04:14.300 --> 04:17.810
So we do OpenAI chat dot completions, dot create.
04:17.810 --> 04:22.970
We pass in the model, we pass in the messages, and we're going to say please stream results.
04:22.970 --> 04:23.810
We might as well.
04:23.810 --> 04:27.200
And then we go through and we yield response.
04:27.440 --> 04:30.170
Um, so again this isn't actually a function.
04:30.170 --> 04:35.900
It's really a generator because we're going to be yielding the responses piece by piece.
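Because it yields, `chat` is a generator: each yield hands Gradio the reply accumulated so far, and Gradio redraws the message with it. That accumulation can be sketched on its own (the OpenAI call that produces the deltas is shown only in comments; `stream_reply` is a hypothetical helper):

```python
# In the real chat generator the deltas come from the streamed call:
#
#   stream = openai.chat.completions.create(
#       model=MODEL, messages=messages, stream=True)
#   deltas = (chunk.choices[0].delta.content for chunk in stream)
#
def stream_reply(deltas):
    """Yield the cumulative response after each streamed fragment."""
    response = ""
    for delta in deltas:
        response += delta or ""   # the final chunk's delta can be None
        yield response            # Gradio re-renders on every yield
```

Gradio then needs only the single line the transcript promises, `gr.ChatInterface(fn=chat).launch()`.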
04:36.800 --> 04:37.280
Okay.
04:37.280 --> 04:45.500
So what I want to do now is turn this into the kind of user interface that you saw in the slide a moment
04:45.500 --> 04:49.850
ago, a user interface which has an instant message style interaction.
04:49.850 --> 04:54.580
So obviously there's a bit of work to do there because we're going to have to, to to craft that kind
04:54.580 --> 05:01.360
of, um, canvas with the messages that come one after another and figure out how to do that.
05:01.420 --> 05:06.430
Um, based on the response that's coming back from this chat message.
05:06.940 --> 05:10.300
Uh, I don't know if you've cottoned on, but I am, of course, fibbing.
05:10.300 --> 05:11.770
It's going to be really easy.
05:11.770 --> 05:12.910
It's going to be really easy.
05:12.910 --> 05:14.470
It's going to be a single line.
05:15.310 --> 05:21.460
Uh, so Gradio comes with something called chat interface out of the box, and chat interface, uh,
05:21.460 --> 05:25.540
expects a single function which needs to have this structure.
05:25.540 --> 05:31.300
If you've written a function which takes a message and history in this particular format, then for
05:31.300 --> 05:34.240
Gradio it's just a single line of code.
05:34.480 --> 05:36.670
Uh, let's see if it's really that easy.
05:36.670 --> 05:42.610
I do need to remember to execute that so that we have defined our chat generator.
05:42.610 --> 05:46.510
And then we will launch our interface.
05:46.510 --> 05:47.770
And here it is.
05:47.770 --> 05:49.890
Here is our chat interface.
05:50.190 --> 05:53.730
Let's bring it up in a separate window, because I just prefer it that way.
05:53.730 --> 05:55.830
And we'll say, uh.
05:55.830 --> 05:56.970
Hello there.
05:59.070 --> 06:00.000
Hello.
06:00.030 --> 06:01.410
How can I assist you today?
06:01.530 --> 06:05.220
I want to buy a tie.
06:06.780 --> 06:09.270
Great! What kind of tie are you looking for?
06:09.300 --> 06:11.730
Do you have a specific color, pattern or material?
06:12.210 --> 06:14.160
Uh, so you get the idea.
06:14.430 --> 06:22.830
But let me just say, um, a red one red tie is a classic choice.
06:22.830 --> 06:24.510
Here are a few options to consider.
06:24.510 --> 06:26.340
And there comes the answer.
06:26.820 --> 06:31.470
Now, obviously the reason I said a red one is I wanted to demonstrate what you already know, which
06:31.470 --> 06:35.940
is that it has context of this conversation and it knows what came before.
06:35.970 --> 06:43.290
And one more time, it's a bit of an illusion to feel as if this thing has memory from when we first
06:43.290 --> 06:43.800
spoke to it.
06:43.800 --> 06:45.180
And I said, I want to buy a tie.
06:45.210 --> 06:51.630
All that's happening is that every time we interact, that chat method, function generator, I get
06:51.630 --> 06:52.410
it right eventually.
06:52.440 --> 06:55.290
That chat generator is being called.
06:55.470 --> 06:58.860
And what's being what's being passed in is the whole history so far.
06:58.860 --> 07:03.720
And it's building that set of messages and that's what's being sent to OpenAI.
07:03.750 --> 07:07.470
So for each of our calls, the whole history is being provided.
07:07.470 --> 07:10.980
And that's why it has the context of what came before.
07:10.980 --> 07:17.970
It's not as if the LLM, it's not as if GPT-4 is remembering that 30 seconds ago we said that.
07:17.970 --> 07:20.520
It's just that with every call, we pass it all in.
07:20.520 --> 07:22.080
I'm sure it's obvious to you at this point.
07:22.080 --> 07:26.010
So I'm sorry I'm belaboring it, but I think it's important to, to really rub it in.
07:26.400 --> 07:31.650
Um, and yeah, so you remember I have some print statements happening below which are going to be quite
07:31.650 --> 07:35.130
chunky now, but let's just look at the the last one there.
07:35.130 --> 07:41.700
So the last one said history is and then this is what Gradio sent us.
07:41.730 --> 07:48.500
And you'll see it's like uh, what we said, what it said, what we said, what it said.
07:48.890 --> 07:56.480
Uh, and then we converted that into the right format for GPT-4o.
07:56.510 --> 07:57.950
Uh, GPT-4o mini.
07:58.100 --> 08:02.000
Um, we converted it into a list of, like, role system content.
08:02.000 --> 08:03.110
You're a helpful assistant.
08:03.110 --> 08:05.360
And then user said, hello there.
08:05.360 --> 08:07.910
And the assistant replied, hello, how can I assist you today?
08:07.910 --> 08:08.540
And so on.
08:08.540 --> 08:11.450
So that is what we turned it into.
08:12.530 --> 08:18.530
All right, just before we go on, I'm going to have a quick tangent, but it is an important tangent.
08:18.530 --> 08:20.420
So this isn't just me prattling on.
08:20.420 --> 08:24.230
This is something which I want to sow a seed with you.
08:24.230 --> 08:30.200
Something that we will come back to later and is an important point, um, which maybe, maybe something
08:30.200 --> 08:31.670
that's been on your mind.
08:31.730 --> 08:33.590
Um, or if not, it should be.
08:33.800 --> 08:42.020
Um, so just to mention, you might be thinking, so this structure, this system user assistant user.
08:42.140 --> 08:43.480
Uh, so is this.
08:43.510 --> 08:49.960
Does this somehow get passed into the LLM in some structured way?
08:49.960 --> 08:56.860
Like are we somehow when we when we provide this data to the LLM, is it being given maybe as a as a
08:56.890 --> 09:00.160
like a dictionary, a list of dictionaries in some way?
09:00.280 --> 09:04.300
Um, because you may say, I thought LLMs just took tokens.
09:04.300 --> 09:08.290
They just take a list of tokens and they generate the most likely next token.
09:08.290 --> 09:13.990
So how does this whole list of dictionaries and so on, uh, translate to the world of tokens?
09:13.990 --> 09:16.300
And that would be a great thought if you had that thought.
09:16.300 --> 09:17.680
Uh, very good.
09:17.680 --> 09:25.390
Uh, and there's a simple answer, uh, it is just tokens that get passed to the actual underlying
09:25.420 --> 09:29.290
GPT-4, uh, GPT-4 LLM.
09:29.290 --> 09:39.760
What happens is that OpenAI turns this into a series of tokens, and it has special tokens, special
09:39.760 --> 09:44.430
ways of explaining that this is the beginning of a system prompt.
09:44.430 --> 09:47.670
This is the beginning of a user message and an assistant response.
09:47.670 --> 09:55.080
It has some markup to say that, and it tokenizes that whole markup, including some special placeholder
09:55.080 --> 09:59.010
tokens that sort of communicate inform the LLM.
09:59.010 --> 10:01.410
We're now switching to system prompt mode.
10:01.410 --> 10:02.880
Here's some system prompt text.
10:02.880 --> 10:04.500
And now we're out of system prompt mode.
10:04.530 --> 10:07.410
Now we're doing a user message and so on.
10:07.410 --> 10:11.460
So this structure is what we send the OpenAI API.
10:11.490 --> 10:13.980
It converts it into tokens.
10:13.980 --> 10:19.470
And it's those tokens that then get fed to the LLM to predict the next token.
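What those special tokens look like is model-specific and, for OpenAI, internal; but the idea can be illustrated with a Llama-3-style chat template (an assumption for illustration only, not OpenAI's actual tokens): role-delimiting markers wrap each message, and the rendered string is what gets tokenized and fed to the model.

```python
# Illustration only: Llama-3-style delimiters, NOT OpenAI's real tokens.
def render_chat(messages):
    """Flatten role/content dicts into one delimited prompt string."""
    text = "<|begin_of_text|>"
    for m in messages:
        text += (f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                 f"{m['content']}<|eot_id|>")
    # the trailing header cues the model to generate the assistant's turn
    text += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return text
```

This is the kind of rendering the course later makes visible with open-source models and their tokenizers.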
10:19.950 --> 10:24.300
And you might say, okay, I hear you, I get that.
10:24.300 --> 10:32.760
But how does the LLM know that this particular special token means system message and should interpret
10:32.760 --> 10:35.340
that to be its its high level directive?
10:35.340 --> 10:39.180
And how does it know that this token means user and this means assistant and so on.
10:39.210 --> 10:43.740
Like, what gives it that ability? Is that, like, baked into its architecture in some way?
10:44.040 --> 10:46.590
Uh, and there's a very simple answer to that, which is that.
10:46.590 --> 10:49.530
No, it's just because that's how it's been trained.
10:49.560 --> 10:54.270
It's been trained with lots of data, with that structure, with millions of examples like that.
10:54.270 --> 11:00.300
And it's learned through training that when it's being given a specific directive in a system instruction,
11:00.300 --> 11:06.810
the most likely next token, the most likely response is going to be one that adheres to that system
11:06.810 --> 11:07.440
prompt.
11:07.470 --> 11:09.510
There's I've oversimplified.
11:09.510 --> 11:14.400
There's some, uh, more nuance there to do with things like the technique that is RLHF and
11:14.400 --> 11:14.940
things like that.
11:14.940 --> 11:18.570
For those that know all this stuff and are listening and saying, oh, it's a bit oversimplified, but
11:18.570 --> 11:19.770
it's the general idea.
11:19.770 --> 11:21.090
It's the basic idea.
11:21.090 --> 11:24.540
This structure is this sort of the API structure.
11:24.540 --> 11:27.390
This is how we communicate to OpenAI that that's what we want to do.
11:27.390 --> 11:31.170
And OpenAI takes that structure and turns it into tokens.
11:31.170 --> 11:38.390
So to sort of take a step back to the very beginning, Gradio gives us data in this format.
11:38.420 --> 11:47.300
We map that to this format, which is what we send OpenAI and OpenAI converts that to tokens, including
11:47.300 --> 11:48.680
some special tokens.
11:48.680 --> 11:54.740
It's that that goes into the LLM for the whole conversation so far, for everything, every time it
11:54.740 --> 12:01.460
gets the entire conversation, and then it generates the most plausible next sequence of tokens that
12:01.460 --> 12:04.400
are most likely to come after that.
12:04.490 --> 12:12.770
Um, and that is what gets returned to us that we then assume represents the assistant's response.
12:12.980 --> 12:15.860
So I realized that was quite a long sidebar.
12:15.860 --> 12:18.740
It's very important, foundational understanding.
12:18.740 --> 12:22.190
And we will come back to that when we, particularly when we look at open source models.
12:22.190 --> 12:28.190
And we're actually going to see these kinds of generated tokens, these special tokens ourselves.
12:28.340 --> 12:34.880
So with that, I'm going to pause it until the next video when we're going to press ahead building this
12:34.880 --> 12:35.780
chatbot out.

523
week5/community-contributions/subtitles/srts/59166915/ja_JP.srt

@ -0,0 +1,523 @@
WEBVTT
00:00.440 --> 00:03.560
JupyterLabの素晴らしい世界へようこそ。
00:03.560 --> 00:06.830
そして2週目に入った。
00:07.490 --> 00:09.110
3日目。
00:09.260 --> 00:11.990
このノートを出して
00:11.990 --> 00:18.080
そこで今回は、 チャットボットとしても知られる会話型AIについて、 さっそくご紹介しよう。
00:18.110 --> 00:24.680
まずはいつものようにインポートし、 環境変数を設定する。
00:24.680 --> 00:27.620
そして OpenAI を初期化する。
00:27.650 --> 00:29.840
今回はOpenAIを使う。
00:30.020 --> 00:34.310
そして、 もしそうしたければ、 他のモデルに乗り換える練習として持っていてもいい。
00:34.670 --> 00:38.510
では、 基本的なシステムメッセージから始めよう。
00:38.510 --> 00:40.340
あなたは役に立つアシスタントだ。
00:40.970 --> 00:41.480
分かった。
00:41.510 --> 00:45.800
さて、 メッセージの構造について少し話をしたい。
00:45.980 --> 00:54.200
まず最初に、 OpenAIへのプロンプトメッセージの構造を思い出してください。
00:54.320 --> 00:58.700
ええと、 これはもう何度も見てきたことだから、 私が説明するのに飽き飽きしただろうね。
00:58.700 --> 00:59.350
あれだ。
00:59.350 --> 01:00.010
もう1度だけ。
01:00.010 --> 01:00.730
よくご存知でしょう。
01:00.760 --> 01:09.220
システムにユーザーを与える辞書のリストで、 アシスタントが応答し、 次にユーザーが応答するといった具合だ。
01:09.220 --> 01:12.910
そして、 他にも何かあると言ったのを覚えているかもしれないが、 今はまだだ。
01:12.910 --> 01:13.990
システム・ユーザー・アシスタント。
01:13.990 --> 01:14.650
ユーザーアシスタント。
01:14.680 --> 01:16.960
ユーザーアシスタントユーザーなど。
01:17.470 --> 01:21.430
さて、 これからchatという関数を書きます。
01:21.430 --> 01:27.400
チャット機能には、 メッセージと履歴の2つの入力がある。
01:27.430 --> 01:34.930
Messageは、 チャットが応答する必要のある、 現在の、 あー、 質問されているメッセージを表す。
01:34.960 --> 01:41.050
そして歴史は、 以前のすべてのメッセージ、 以前のすべてのやりとりの履歴を持っている。
01:41.050 --> 01:45.520
そして歴史の構造はこうなる。
01:45.550 --> 01:50.140
リスト、 リストからなるリストになるだろう。
01:50.140 --> 01:58.920
そして、 このようなサブリストは、 単にユーザーの発言とアシスタントの返答、 ユーザーの発言とアシスタントの返答などである。
01:59.340 --> 02:02.730
では、 なぜそんなことをお願いしているのか?
02:02.760 --> 02:08.340
なぜ、 このような引数で、 このような関数を書こうとするのか?
02:08.340 --> 02:16.710
その答えは、 Gradioがチャット・ユーザー・インターフェースで使うことを期待している特定のタイプの機能だからです。
02:16.710 --> 02:22.260
だからGradioは、 メッセージを受け取るchatという関数を書くことを期待しているのだ。
02:22.260 --> 02:28.590
この構造体で履歴を取り、 次のレスポンス、 つまりこのチャットに対するレスポンスを返す。
02:28.590 --> 02:30.390
だから、 そういう形式を考えているんだ。
02:30.390 --> 02:39.420
この関数の仕事は、 このようなスタイルのメッセージをこのように変換することだ。
02:39.420 --> 02:47.940
そこで、 この構造を1行ずつ繰り返し、 上にあるような構造を構築する必要がある。
02:47.970 --> 02:49.710
それが理解できればいいのだが......。
02:49.710 --> 02:53.400
そうでなくても、 それがどんなものかをお見せすれば納得していただけるだろう。
02:53.400 --> 02:56.730
そこで、 chatという関数を定義している。
02:56.760 --> 03:03.450
入力されたメッセージに応答するために得たメッセージを受け取り、 以前のメッセージの履歴を受け取る。
03:03.480 --> 03:09.090
そこでまず、 メッセージのリストを設定する。
03:09.090 --> 03:12.870
そして、 一番最初にシステム・プロンプトを入力する。
03:12.900 --> 03:17.010
もちろん、 その後は歴史を反復することになる。
03:17.040 --> 03:20.460
ヒストリーの各要素は、 2つの値を持つリストの1つである。
03:20.460 --> 03:24.000
だから、 それをユーザー・メッセージ・アシスタントのメッセージに展開するんだ。
03:24.000 --> 03:28.470
そして、 ユーザーのメッセージとアシスタントのメッセージを追加する。
03:28.530 --> 03:30.390
その都度、 その都度。
03:30.390 --> 03:38.310
つまり、 履歴の各行がこのリストでは2行になる。
03:38.310 --> 03:40.650
1つはユーザー用、 もう1つはアシスタント用だ。
03:40.770 --> 03:42.480
それが完全に意味をなしていることを願うよ。
03:42.480 --> 03:43.050
今すぐだ。
03:43.200 --> 03:44.250
そうでなければ、 いつでもできる。
03:44.280 --> 03:45.240
まあ、 その必要はない。
03:45.270 --> 03:47.370
いつでもprint文を入れられると言おうとしたんだ。
03:47.430 --> 03:50.160
ええと、 私には先見の明があったので、 自分でいくつかのプリント文を入れたんだ。
03:50.160 --> 03:51.180
だから、 これを見ることになる。
03:51.210 --> 03:54.980
履歴を印刷し、 メッセージを印刷します。
03:54.980 --> 03:57.170
だから、 それも見ることができる。
03:57.530 --> 04:05.120
次の行は、 このチャットではお馴染みのメソッドで、 この時点で、 この関数、 すみません、
04:05.150 --> 04:14.300
この時点で、 メッセージのセットを受け取り、 それを使ってOpenAIを呼び出します。
04:14.300 --> 04:17.810
だから、 OpenAIチャットのドットコンプリートやドットクリエイトをやっているんだ。
04:17.810 --> 04:22.970
モデルを渡し、 メッセージを渡し、 結果をストリームしてくださいと言うつもりだ。
04:22.970 --> 04:23.810
そうかもしれない。
04:23.810 --> 04:27.200
そして、 私たちはそれを通過し、 返答を得る。
04:27.440 --> 04:30.170
ええと、 つまり、 これは実際には機能ではないんだ。
04:30.170 --> 04:35.900
私たちは一つひとつ答えを出していくので、 本当にジェネレーターなんだ。
04:36.800 --> 04:37.280
オーケー。
04:37.280 --> 04:49.850
つまり、 先ほどのスライドにあったような、 インスタント・メッセージのようなユーザー・インターフェースを作りたいのです。
04:49.850 --> 04:54.580
だから、 次から次へとやってくるメッセージをどうキャンバスに描くか、
04:54.580 --> 05:01.360
その方法を考えなければならない。
05:01.420 --> 05:06.430
ええと、 このチャットメッセージから返ってくる反応からするとね。
05:06.940 --> 05:10.300
ええと、 お気づきになったかどうかわかりませんが、 私はもちろん嘘をついています。
05:10.300 --> 05:11.770
本当に簡単なことだよ。
05:11.770 --> 05:12.910
本当に簡単なことだよ。
05:12.910 --> 05:14.470
一本の線になる。
05:15.310 --> 05:21.460
Gradioにはチャット・インターフェイスというものが付属していて、 チャット・インターフェイスは、
05:21.460 --> 05:25.540
このような構造を持つ1つの関数を想定しています。
05:25.540 --> 05:34.240
もしあなたが、 メッセージと履歴をこの特殊なフォーマットで受け取る関数を書いたのなら、 Gradioにとってそれはたった1行のコードに過ぎない。
05:34.480 --> 05:36.670
ええと、 本当にそんなに簡単なことなのか見てみよう。
05:36.670 --> 05:42.610
チャット・ジェネレーターを定義するために、 忘れずに実行する必要がある。
05:42.610 --> 05:46.510
そしてインターフェイスを立ち上げる。
05:46.510 --> 05:47.770
そしてここにある。
05:47.770 --> 05:49.890
これが私たちのチャット・インターフェースです。
05:50.190 --> 05:53.730
別ウインドウで表示させよう。
05:53.730 --> 05:55.830
そして、 こう言うんだ。
05:55.830 --> 05:56.970
こんにちは。
05:59.070 --> 06:00.000
こんにちは。
06:00.030 --> 06:01.410
本日はどのようなご用件でしょうか?
06:01.530 --> 06:05.220
ネクタイを買いたい。
06:06.780 --> 06:09.270
どんなネクタイをお探しですか?
06:09.300 --> 06:11.730
色や柄、 素材は決まっていますか?
06:12.210 --> 06:14.160
ええと、 それでお分かりいただけたと思う。
06:14.430 --> 06:22.830
でも、 赤のネクタイはクラシックなチョイスだよ。
06:22.830 --> 06:24.510
ここでは、 いくつかのオプションを紹介しよう。
06:24.510 --> 06:26.340
そこに答えがある。
06:26.820 --> 06:35.940
この会話には文脈があり、 その前に何があったかを知っている。
06:35.970 --> 06:43.800
そしてもうひとつ、 私たちが最初に話しかけたときからの記憶があるかのように感じるのは、 ちょっとした錯覚だ。
06:43.800 --> 06:45.180
そして私はネクタイを買いたいと言った。
06:45.210 --> 06:52.410
私たちが交流するたびに、 そのチャットメソッド、 ファンクションジェネレーター、 私は最終的にそれを正しく理解する。
06:52.440 --> 06:55.290
そのチャットジェネレーターが呼ばれている。
06:55.470 --> 06:58.860
そして、 通過しているのはこれまでの歴史のすべてだ。
06:58.860 --> 07:03.720
そして、 そのメッセージのセットを構築し、 それがOpenAIに送信される。
07:03.750 --> 07:07.470
だから、 それぞれの通話に対して、 全履歴が提供される。
07:07.470 --> 07:10.980
だからこそ、 その前の文脈がある。
07:10.980 --> 07:17.970
LLMが、 GPT4が、 30秒前に私たちがそう言ったことを覚えているかのようではない。
07:17.970 --> 07:20.520
ただ、 コールがあるたびに、 すべてをパスするんだ。
07:20.520 --> 07:22.080
もうお分かりだろう。
07:22.080 --> 07:26.010
だから、 くどくどと書いてしまって申し訳ないんだけど、 本当に大事なことだと思うんだ。
07:26.400 --> 07:35.130
ええと、 そうだ、 下にprint文がいくつかあるのを覚えているだろう。
07:35.130 --> 07:41.700
最後に歴史がどうのこうのと言ったが、 これはグラディオが送ってくれたものだ。
07:41.730 --> 07:48.500
私たちが言ったこと、 言ったこと、 言ったこと、 言ったこと。
07:48.890 --> 07:56.480
そして、 それをGPT-4oの正しいフォーマットに変換したんだ。
07:56.510 --> 07:57.950
ええと、 GPT-4o miniだ。
07:58.100 --> 08:02.000
私たちは、 それを役割システムの内容のリストに変換したんだ。
08:02.000 --> 08:03.110
君は役に立つアシスタントだ。
08:03.110 --> 08:05.360
そしてユーザーは、 こんにちは、 と言った。
08:05.360 --> 08:07.910
するとアシスタントは、 「こんにちは、 今日はどのようなご用件でしょうか?
08:07.910 --> 08:08.540
などなど。
08:08.540 --> 08:11.450
だから、 そういうことにしたんだ。
08:12.530 --> 08:18.530
さて、 先に進む前にちょっと余談をさせてもらうが、 これは重要な余談だ。
08:18.530 --> 08:20.420
だから、 これは私だけがしゃべっているのではない。
08:20.420 --> 08:24.230
これは、 私が君たちに種を蒔きたいことなんだ。
08:24.230 --> 08:31.670
後で触れることになるが、 重要なポイントだ。
08:31.730 --> 08:33.590
そうでなければ、 そうあるべきだ。
08:33.800 --> 08:42.020
ええと、 だから、 この構造、 このシステム・ユーザー・アシスタント・ユーザーについて、 あなたは考えているかもしれません。
08:42.140 --> 08:43.480
ああ、 これもそうだ。
08:43.510 --> 08:49.960
LLMでは、 このようなことは構造化されているのでしょうか?
08:49.960 --> 09:00.160
このデータをLLMに提供するとき、 私たちは何らかの形で、 辞書のような、 辞書のリストのような形で提供されているのでしょうか?
09:00.280 --> 09:04.300
というのも、 LLMはトークンを取るだけだと思っていたからだ。
09:04.300 --> 09:08.290
トークンのリストを受け取り、 次のトークンを生成する。
09:08.290 --> 09:13.990
では、 この辞書などのリストは、 トークンの世界にどのように変換されるのでしょうか?
09:13.990 --> 09:16.300
そして、 もしあなたがそのような考えを持っているなら、 それは素晴らしい考えだろう。
09:16.300 --> 09:17.680
ああ、 とてもいいね。
09:17.680 --> 09:29.290
答えは簡単で、 トークンがGPT-4のLLMに渡されるだけです。
09:29.290 --> 09:39.760
何が起こるかというと、 OpenAIはこれを一連のトークンに変え、 これがシステムプロンプトの始まりであることを説明する特別なトークン、
09:39.760 --> 09:44.430
特別な方法を持っている。
09:44.430 --> 09:47.670
これがユーザーとアシスタントの応答の始まりである。
09:47.670 --> 09:59.010
そして、 そのマークアップ全体をトークン化し、 LLMに情報を伝える特別なプレースホルダートークンを含む。
09:59.010 --> 10:01.410
システム・プロンプト・モードに切り替えます。
10:01.410 --> 10:02.880
これがシステムプロンプトのテキストだ。
10:02.880 --> 10:04.500
そして今、 システム・プロンプト・モードから抜け出した。
10:04.530 --> 10:07.410
今はユーザーメッセージなどをやっている。
10:07.410 --> 10:11.460
つまり、 この構造はOpenAI APIに送るものだ。
10:11.490 --> 10:13.980
それをトークンに変換する。
10:13.980 --> 10:19.470
そして、 そのトークンがLLMに送られ、 次のトークンを予測する。
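The markup step described here can be illustrated with a ChatML-style template. The `<|im_start|>`/`<|im_end|>` markers below are an example of this kind of special placeholder token, not a guaranteed description of GPT-4o's actual internal template, which OpenAI does not expose:

```python
# Illustrative only: render role/content messages into one flat string
# with special placeholder markers, before tokenization.

def render_chat_template(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    parts.append("<|im_start|>assistant\n")  # the model continues from here
    return "\n".join(parts)
```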
10:19.950 --> 10:24.300
と言われるかもしれない。
10:24.300 --> 10:35.340
しかし、 LLMはどのようにして、 この特別なトークンがシステムメッセージを意味することを知り、 それをその高レベル指令と解釈するのだろうか?
10:35.340 --> 10:39.180
また、 このトークンはユーザー、 このトークンはアシスタントを意味する、 などということをどうやって知るのだろうか。
10:39.210 --> 10:43.740
何がその能力を与えているのか、 それは何らかの形でアーキテクチャに組み込まれているのか?
10:44.040 --> 10:46.590
それに対するとてもシンプルな答えがある。
10:46.590 --> 10:49.530
いや、 そう訓練されているからだよ。
10:49.560 --> 10:54.270
そのような構造で、 何百万もの例で、 たくさんのデータで訓練されている。
10:54.270 --> 11:00.300
そして、 システム指示の中で特定の指示が与えられると、 最も可能性の高い次のトークン、 最も可能性の高いレスポンスは、
11:00.300 --> 11:07.440
そのシステム・プロンプトに従ったものになることを、 トレーニングを通じて学習している。
11:07.470 --> 11:09.510
単純化しすぎましたね。
11:09.510 --> 11:14.940
実際には、 もう少し細かいテクニック上のニュアンスがあるんだ。
11:14.940 --> 11:18.570
このようなことをすべて知っている人は、 それを聞いて、 ああ、 ちょっと単純化しすぎだが、 一般的な考え方だ、
11:18.570 --> 11:19.770
と言うだろう。
11:19.770 --> 11:21.090
基本的な考え方だ。
11:21.090 --> 11:24.540
この構造はAPI構造の一種である。
11:24.540 --> 11:27.390
これが、 私たちがやりたいことをOpenAIに伝える方法です。
11:27.390 --> 11:31.170
そしてOpenAIはその構造をトークンに変える。
11:31.170 --> 11:38.390
つまり、 最初のステップに進むために、 グラディオはこのようなフォーマットでデータを提供してくれる。
11:38.420 --> 11:48.680
それをこのフォーマットにマッピングしてOpenAIに送り、 OpenAIがそれをトークンに変換する。
11:48.680 --> 11:54.740
それは、 これまでの会話全体について、 すべてについて、 会話全体を取得するたびにLLMに入り、
11:54.740 --> 12:04.400
その次に来る可能性が最も高いトークンのシーケンスを生成することだ。
12:04.490 --> 12:12.770
そして、 それがアシスタントのレスポンスとして私たちに返される。
12:12.980 --> 12:15.860
だから、 かなり長いサイドバーだったことに気づいた。
12:15.860 --> 12:18.740
非常に重要で、 基礎となる理解だ。
12:18.740 --> 12:22.190
そして、 私たちが、 特にオープンソースモデルに注目するときには、 またこの話に戻ることになるだろう。
12:22.190 --> 12:28.190
そして、 私たちは実際にこのような生成されたトークンや特別なトークンを目にすることになる。
12:28.340 --> 12:35.780
それでは、 このチャットボットの構築を進める次のビデオまで、 このビデオを一時停止します。

577
week5/community-contributions/subtitles/srts/59166915/ko_KR.srt

@ -0,0 +1,577 @@
WEBVTT
00:00.440 --> 00:03.560
놀라운 주피터랩의 세계에 잘 오셨어요
00:03.560 --> 00:06.830
이제 2주 차예요
00:07.490 --> 00:09.110
3일째예요
00:09.260 --> 00:11.990
이 공책을 꺼내요
00:11.990 --> 00:18.080
대화형 인공지능인 채팅 봇에 대해 알아보도록 하죠.
00:18.110 --> 00:24.680
먼저 일반적인 가져오기와 환경 변수의 일반적인 설정으로 시작하죠
00:24.680 --> 00:27.620
오픈AI를 초기화하죠
00:27.650 --> 00:29.840
이번에는 오픈AI를 쓸 거예요
00:30.020 --> 00:34.310
원한다면 다른 모델로 바꾸는 연습으로 사용할 수 있어요
00:34.670 --> 00:38.510
기본적인 시스템 메시지부터 시작할게요
00:38.510 --> 00:40.340
정말 도움이 되는 조수네요
00:40.970 --> 00:41.480
좋아요
00:41.510 --> 00:45.800
이제 비트 코드의 메시지 구조를 얘기해 볼게요
00:45.980 --> 00:54.200
먼저 오픈AI에 즉각 메시지를 보내는 구조를 다시 알려주세요
00:54.320 --> 00:58.700
이런 건 이제 질리게 봤으니까 제가 설명하는 게 지겹겠죠
00:58.700 --> 00:59.350
저기 있네요
00:59.350 --> 01:00.010
한 번 더요
01:00.010 --> 01:00.730
잘 아시네요
01:00.760 --> 01:07.840
시스템에 사용자를 제공하는 사전 목록이죠 보조가 응답하고 그 다음 사용자가
01:07.840 --> 01:09.220
응답하는 거죠
01:09.220 --> 01:12.910
다른 게 있다고 말씀드렸죠? 지금은 아니에요
01:12.910 --> 01:13.990
시스템 사용자 보조요
01:13.990 --> 01:14.650
사용자 보조요
01:14.680 --> 01:16.960
사용자 보조 사용자 등등이요
01:17.470 --> 01:21.430
이제 채팅이라는 함수를 쓸 거예요
01:21.430 --> 01:27.400
그 함수 채팅은 두 개의 입력 메시지와 역사를 취할 거예요
01:27.430 --> 01:34.930
이 메시지는 현재 채팅방이 답변해야 할 메시지를 나타내죠
01:34.960 --> 01:41.050
모든 이전의 메시지와 교류는 역사가 기록하고 있죠
01:41.050 --> 01:45.520
역사의 구조는 이렇게 생겼을 거예요
01:45.550 --> 01:50.140
목록이 될 겁니다 목록으로 구성된 목록이죠
01:50.140 --> 01:55.810
이런 하위 목록은 사용자가 뭐라고 대답하고 비서가 뭐라고 대답하고 사용자가 뭐라고
01:55.810 --> 01:58.920
대답하고 비서가 뭐라고 대답하는지 등이죠
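The history structure described above is a list of lists: each sub-list holds one user message and the assistant's reply. The example values below are invented for illustration:

```python
# Shape of Gradio's history argument: a list of [user, assistant] pairs.
history = [
    ["Hi there", "Hello! How can I help you today?"],
    ["I'd like to buy a tie", "What kind of tie are you looking for?"],
]
```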
01:59.340 --> 02:02.730
왜 이런 부탁을 하는 걸까요?
02:02.760 --> 02:08.340
왜 매개 변수가 있는 저런 함수를 써야 하죠? 왜 이런 인수가 있는 거죠?
02:08.340 --> 02:15.870
그 대답은 그라디오가 채팅 유저 인터페이스와 함께 사용하기 위해 기대하는 특정한 유형의 함수이기
02:15.870 --> 02:16.710
때문이죠
02:16.710 --> 02:22.260
그래서 그라디오는 우리가 메시지를 받는 채팅이라는 함수를 쓰길 기대하죠
02:22.260 --> 02:27.630
이 구조에서 히스토리를 선택하고 응답을 반환할 겁니다 이 채팅에 대한
02:27.630 --> 02:28.590
응답이요
02:28.590 --> 02:30.390
그래서 그 형식을 고려하는 거죠
02:30.390 --> 02:39.420
이 함수에 대한 우리 작업은 이런 종류의 메시지를 여기로 변환하는 거죠
02:39.420 --> 02:47.940
이 구조를 통해 한 열씩 반복해야 합니다 위에 보이는 이 구조를 구축하고요
02:47.970 --> 02:49.710
이해가 되면 좋겠네요
02:49.710 --> 02:53.400
그렇지 않다면 제가 보여드릴 때 이해가 될 거예요
02:53.400 --> 02:56.730
채팅이라는 함수를 정의하고 있어요
02:56.760 --> 03:03.450
입력 메시지에 응답해야 하는 메시지가 필요합니다 이전 메시지의 기록도 필요하고요
03:03.480 --> 03:09.090
먼저 메시지 목록을 설정합니다 이 사람이 되겠죠
03:09.090 --> 03:12.870
시작 부분에 시스템 프롬프트로 채우죠
03:12.900 --> 03:17.010
물론 그런 다음 역사를 반복하겠죠
03:17.040 --> 03:20.460
역사의 각 요소는 두 개의 값을 가진 리스트 중 하나죠
03:20.460 --> 03:24.000
사용자 메시지 비서 메시지에 그걸 풀어놓을게요
03:24.000 --> 03:28.470
사용자 메시지와 보조 메시지를 추가해요
03:28.530 --> 03:30.390
매번요
03:30.390 --> 03:38.310
역사에서 각 행은 이 목록에서 두 행으로 바뀌죠
03:38.310 --> 03:40.650
하나는 사용자를 위한 것 하나는 보조를 위한 것이죠
03:40.770 --> 03:42.480
이해가 되면 좋겠네요
03:42.480 --> 03:43.050
지금요
03:43.200 --> 03:44.250
아니면 언제든 괜찮아요
03:44.280 --> 03:45.240
그럴 필요 없어요
03:45.270 --> 03:47.370
언제든 print문을 넣을 수 있다고 말하려 했어요
03:47.430 --> 03:50.160
제가 선견지명이 있어서 print문도 몇 개 넣어 뒀어요
03:50.160 --> 03:51.180
곧 보게 될 거예요
03:51.210 --> 03:54.980
히스토리를 프린트하고 메시지를 프린트할 거예요
03:54.980 --> 03:57.170
그래서 그것도 확인할 수 있죠
03:57.530 --> 04:05.120
다음 줄은 이 채팅방에서 아주 익숙하실 텐데요 이 시점에서 이 메서드, 이
04:05.150 --> 04:12.110
함수 죄송합니다, 이 시점에서 이 메시지 세트를 가지고 OpenAI를
04:12.110 --> 04:14.300
호출할 거예요
04:14.300 --> 04:17.810
openai.chat.completions.create를 호출하죠
04:17.810 --> 04:22.970
모델에서 전달하고 메시지를 전달하고 결과를 스트리밍해달라고 요청하죠
04:22.970 --> 04:23.810
그렇게 하죠
04:23.810 --> 04:27.200
그런 다음 검토하고 응답을 유도하죠
04:27.440 --> 04:30.170
다시 말씀드리지만 이건 함수가 아니에요
04:30.170 --> 04:35.900
하나씩 응답을 산출하기 때문에 제너레이터라고 할 수 있죠
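The generator pattern described here, yielding the accumulated reply so far, is what Gradio expects for streaming. In the real code the chunks come from `openai.chat.completions.create(..., stream=True)` and the text lives in `chunk.choices[0].delta.content`; that extraction is elided below so the core logic stays self-contained:

```python
# Sketch of the streaming-accumulation logic: each yield is the full
# response so far, which Gradio redraws in the chat window.

def accumulate_stream(chunks):
    response = ""
    for chunk in chunks:
        response += chunk or ""   # delta content can be None on some chunks
        yield response
```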
04:36.800 --> 04:37.280
04:37.280 --> 04:45.500
이제 이걸 좀 전에 슬라이드에서 본 사용자 인터페이스로 바꿀 거예요 인스턴트 메시지 스타일
04:45.500 --> 04:49.850
상호 작용이 있는 사용자 인터페이스요
04:49.850 --> 04:54.580
할 일이 많아 보이죠 계속 스트리밍되어 들어오는 메시지를
04:54.580 --> 05:01.360
화면에 어떻게 표시할지 알아내야 하니까요
05:01.420 --> 05:06.430
이 채팅창에 올라온 반응을 보면요
05:06.940 --> 05:10.300
눈치챘는지 모르겠지만 당연히 거짓말이죠
05:10.300 --> 05:11.770
아주 쉬울 거예요
05:11.770 --> 05:12.910
아주 쉬울 거예요
05:12.910 --> 05:14.470
한 줄로 할 거예요
05:15.310 --> 05:21.460
그라디오는 채팅 인터페이스라는 독창적인 걸 내놓는데 채팅 인터페이스는
05:21.460 --> 05:25.540
이런 구조를 가진 단일 함수를 기대하죠
05:25.540 --> 05:31.300
이 특정한 형식으로 메시지와 역사를 취하는 함수를 작성했다면 그라디오에선
05:31.300 --> 05:34.240
코드 한 줄로 끝나죠
05:34.480 --> 05:36.670
그렇게 쉬운지 볼까요?
05:36.670 --> 05:42.610
저걸 실행하는 걸 기억해야 합니다 채팅 생성기를 정의하도록요
05:42.610 --> 05:46.510
이제 인터페이스를 실행할 거예요
05:46.510 --> 05:47.770
여기 있네요
05:47.770 --> 05:49.890
채팅 인터페이스예요
05:50.190 --> 05:53.730
다른 창으로 보여드리죠 그게 더 좋거든요
05:53.730 --> 05:55.830
이렇게 말해요
05:55.830 --> 05:56.970
안녕하세요
05:59.070 --> 06:00.000
안녕하세요
06:00.030 --> 06:01.410
무엇을 도와드릴까요?
06:01.530 --> 06:05.220
넥타이 하나 사려고요
06:06.780 --> 06:09.270
어떤 넥타이를 찾으세요?
06:09.300 --> 06:11.730
특정한 색상, 패턴이나 재료가 있나요?
06:12.210 --> 06:14.160
네, 대충 아시겠죠?
06:14.430 --> 06:22.830
하지만 빨간색 넥타이는 고전적인 선택이에요
06:22.830 --> 06:24.510
몇 가지 선택지를 드리죠
06:24.510 --> 06:26.340
답이 나왔네요
06:26.820 --> 06:31.470
빨간색을 고른 이유는 여러분이 이미 아는 걸 보여드리고 싶었기 때문이에요
06:31.470 --> 06:35.940
이 대화의 맥락을 갖고 있고 전에 뭐가 있었는지도 알죠
06:35.970 --> 06:43.800
다시 말씀드리지만 처음 대화했을 때부터의 기억을 가진 것처럼 느껴지는 건 일종의 착각이에요
06:43.800 --> 06:45.180
넥타이를 사고 싶다고 했어요
06:45.210 --> 06:51.630
우리가 상호 작용할 때마다 채팅 메서드, 함수 제너레이터가 결국엔 제대로
06:51.630 --> 06:52.410
호출되죠
06:52.440 --> 06:55.290
채팅 생성기가 호출됐어요
06:55.470 --> 06:58.860
지금까지의 역사를 전부 담고 있어요
06:58.860 --> 07:03.720
메시지 집합을 구축하고 오픈AI에 전송하는 거죠
07:03.750 --> 07:07.470
통화 내역이 전부 제공되고 있어요
07:07.470 --> 07:10.980
그래서 이전의 맥락이 있는 거예요
07:10.980 --> 07:17.970
LLM이, GPT-4가 30초 전에 우리가 했던 말을 기억하고 있는 게 아니에요
07:17.970 --> 07:20.520
출동할 때마다 전부 전달해요
07:20.520 --> 07:22.080
이쯤 되면 눈치채셨겠지만요
07:22.080 --> 07:26.010
장황하게 말해서 미안하지만 정말 중요한 부분이라고 생각해요
07:26.400 --> 07:31.650
네, 기억하세요? 아래에 print문이 있는데 지금은
07:31.650 --> 07:35.130
꽤 두툼하죠 마지막 걸 보죠
07:35.130 --> 07:41.700
마지막 건 역사고 이건 그래디오가 보낸 거예요
07:41.730 --> 07:48.500
그럼 우리가 말한 것과 모델이 답한 것이 차례로 보일 거예요
07:48.890 --> 07:56.480
그리고 GPT-4o를 위해 올바른 포맷으로 변환했죠
07:56.510 --> 07:57.950
GPT-4o mini요
07:58.100 --> 08:02.000
역할 시스템 콘텐츠 목록으로 변환했어요
08:02.000 --> 08:03.110
정말 도움이 되는 조수네요
08:03.110 --> 08:05.360
사용자가 안녕하세요라고 하죠
08:05.360 --> 08:07.910
그러자 조수가 어떻게 도와드리면 되냐고 물었어요
08:07.910 --> 08:08.540
계속해서요
08:08.540 --> 08:11.450
그래서 이렇게 바꿨죠
08:12.530 --> 08:18.530
좋아요, 시작하기 전에 잠깐 옆길로 새죠 중요한 거예요
08:18.530 --> 08:20.420
나 혼자 떠드는 게 아니에요
08:20.420 --> 08:24.230
당신과 함께 그 씨앗을 뿌리고 싶어요
08:24.230 --> 08:30.200
나중에 다시 얘기하겠지만 중요한 부분이에요 어쩌면 당신이 생각해둔
08:30.200 --> 08:31.670
것일 수도 있고요
08:31.730 --> 08:33.590
그렇지 않다면 그래야겠죠
08:33.800 --> 08:42.020
언급하자면 이렇게 생각하실 수 있어요 시스템 사용자 비서가 이 구조에 대해서요
08:42.140 --> 08:43.480
이것도 그래요
08:43.510 --> 08:49.960
이런 것들이 LLM에 어떤 구조화된 형태로 전달되는 걸까요?
08:49.960 --> 08:56.860
우리가 어떻게든 이 데이터를 LLM에 제공할 때 사전이나 사전 목록
08:56.890 --> 09:00.160
같은 어떤 식으로 제공되나요?
09:00.280 --> 09:04.300
LLM은 토큰만 받는 줄 알았거든요
09:04.300 --> 09:08.290
토큰의 리스트를 선택하고 가장 가능성이 높은 다음 토큰을 생성하죠
09:08.290 --> 09:13.990
그럼 이 사전 전체 목록은 어떻게 토큰의 세계로 변환하죠?
09:13.990 --> 09:16.300
그런 생각을 한다면 정말 좋을 거예요
09:16.300 --> 09:17.680
아주 좋아요
09:17.680 --> 09:25.390
답은 간단합니다. 토큰이 전달되는 것이죠. 실제 GPT-4,
09:25.420 --> 09:29.290
GPT-4 LLM으로요.
09:29.290 --> 09:39.760
오픈AI는 이것을 토큰의 시리즈로 바꾸어 놓습니다. 특별한 토큰을 가지고 있는데 시스템 프롬프트의
09:39.760 --> 09:44.430
시작이라고 설명하는 방법이죠.
09:44.430 --> 09:47.670
사용자와 보조 대응의 시작이죠
09:47.670 --> 09:55.080
그 말을 하는 마크업이 있고 그 전체 마크업을 토큰화합니다 LLM에 정보를 전달하는
09:55.080 --> 09:59.010
특별한 자리 표시자 토큰을 포함해서요
09:59.010 --> 10:01.410
이제 시스템 프롬프트 모드로 바꿀게요
10:01.410 --> 10:02.880
시스템 프롬프트 텍스트가 있네요
10:02.880 --> 10:04.500
이제 시스템 프롬프트 모드에서 벗어났어요
10:04.530 --> 10:07.410
사용자 메시지 같은 걸 하고 있죠
10:07.410 --> 10:11.460
이 구조가 OpenAI API 전송이에요
10:11.490 --> 10:13.980
토큰으로 변환하죠
10:13.980 --> 10:19.470
이 토큰들이 LLM에 입력되어 다음 토큰을 예측하죠
10:19.950 --> 10:24.300
좋아요, 그런데 이렇게 물으실 수도 있죠
10:24.300 --> 10:32.760
그런데 LLM은 이 특별한 토큰이 시스템 메시지를 의미한다는 것과 이를 높은 수준의 지시로 해석해야
10:32.760 --> 10:35.340
한다는 것을 어떻게 알까요?
10:35.340 --> 10:39.180
이 토큰이 사용자를 의미하고 이건 비서를 의미한다는 걸 어떻게 알까요?
10:39.210 --> 10:43.740
어떤 식으로든 그런 능력을 아키텍처에 구현한 건가요?
10:44.040 --> 10:46.590
그에 대한 답은 아주 간단해요
10:46.590 --> 10:49.530
아뇨, 그렇게 훈련받았으니까요
10:49.560 --> 10:54.270
많은 데이터와 구조로 훈련되었고 수백만 개의 예로 훈련되었어요
10:54.270 --> 11:00.300
시스템 지침에서 특정 지침을 받았을 때 다음 토큰과 반응은
11:00.300 --> 11:07.440
시스템 프롬프트를 준수하는 것이라는 것을 배우게 되죠
11:07.470 --> 11:09.510
너무 단순화했어요
11:09.510 --> 11:14.940
뉘앙스가 좀 더 있어요 세부적인 기법들이 있거든요
11:14.940 --> 11:18.570
이런 걸 잘 아는 사람들은 너무 단순화됐다고 하겠지만 그게 일반적인
11:18.570 --> 11:19.770
개념이에요
11:19.770 --> 11:21.090
기본 아이디어예요
11:21.090 --> 11:24.540
이 구조는 일종의 API 구조예요
11:24.540 --> 11:27.390
오픈AI에 우리가 원하는 걸 이렇게 전달하는 거죠
11:27.390 --> 11:31.170
오픈AI는 그 구조를 토큰으로 바꾸죠
11:31.170 --> 11:38.390
그럼 이제 처음으로 넘어가서 그라디오는 이런 포맷의 데이터를 주죠
11:38.420 --> 11:47.300
이것을 OpenAI에 보내는 이 형식에 매핑합니다 OpenAI는 이것을 특별한 토큰을 포함한 토큰으로
11:47.300 --> 11:48.680
변환하죠
11:48.680 --> 11:54.740
지금까지의 모든 대화에 해당하는 LLM으로 들어갑니다 대화 전체를
11:54.740 --> 12:01.460
담을 때마다 가장 그럴듯한 다음 토큰 배열을 생성하죠 그다음으로 나올 가능성이
12:01.460 --> 12:04.400
가장 큰 토큰요
12:04.490 --> 12:12.770
그게 우리에게 돌아오는 거고 우린 그걸 어시스턴트의 응답으로 받아들이죠
12:12.980 --> 12:15.860
그래서 그게 꽤 긴 사이드바라는 걸 깨달았죠
12:15.860 --> 12:18.740
아주 중요한 기본적 이해예요
12:18.740 --> 12:22.190
다시 돌아오죠 특히 오픈 소스 모델을 볼 때요
12:22.190 --> 12:28.190
생성된 토큰들을 직접 볼 것입니다. 특별한 토큰들을요.
12:28.340 --> 12:34.880
다음 비디오까지 잠시 멈추겠습니다 그때 이 챗봇을 만들
12:34.880 --> 12:35.780
거예요

43
week5/community-contributions/subtitles/srts/59166919/en_US.srt

@ -0,0 +1,43 @@
WEBVTT
00:00.560 --> 00:03.590
And with that, it concludes our session on tools.
00:03.590 --> 00:08.720
And at this point, you are probably an expert on tools because you've gone back and you've added in
00:08.720 --> 00:17.120
the extras, like giving your LLM the ability to book flights, in that it can print the booking to your output.
00:17.360 --> 00:19.940
So congratulations on getting here.
00:19.970 --> 00:22.550
Now you are very well versed in transformers.
00:22.550 --> 00:28.340
You can code against the frontier LLM APIs, and you can build AI assistants with user interfaces and
00:28.340 --> 00:30.890
using tools for more expertise.
00:30.890 --> 00:38.540
Tomorrow completes week two, bringing agents into the mix, a super juicy topic.
00:38.570 --> 00:44.720
We're going to talk about how agents can carry out more complex sequential activities, breaking them
00:44.720 --> 00:51.350
down into smaller steps, and having specialist AIs that can handle each of those steps.
00:51.350 --> 00:56.750
And the specific area we're going to look at is introducing some multi-modality.
00:56.780 --> 01:01.820
We're going to have specialists that can take care of things like creating images, because that's going
01:01.850 --> 01:08.090
to be fun, and it is going to allow us to build an even more sophisticated business application.
01:08.090 --> 01:12.920
So with that, I'll see you for the next one and very much looking forward to it.

40
week5/community-contributions/subtitles/srts/59166919/ja_JP.srt

@ -0,0 +1,40 @@
WEBVTT
00:00.560 --> 00:03.590
これでツールについてのセッションは終了だ。
00:03.590 --> 00:08.720
この時点で、 あなたはおそらくツールのエキスパートになっているはずだ。
00:08.720 --> 00:17.120
LLMに航空券を予約する機能を持たせ、 それを出力できるようにしたように。
00:17.360 --> 00:19.940
よくぞここまでたどり着いた。
00:19.970 --> 00:22.550
これであなたはトランスフォーマーに詳しくなった。
00:22.550 --> 00:30.890
フロンティアLLMのAPIに対してコードを書くことができ、 ユーザーインターフェースを備えたAIアシスタントを構築し、 より専門的な知識を得るためのツールを使うことができる。
00:30.890 --> 00:38.540
明日で2週目が終了し、 エージェント紹介という超ジューシーなトピックが加わる。
00:38.570 --> 00:44.720
我々は、 エージェントがより複雑な連続的活動をどのように行うか、 それらをより小さなステップに分解し、
00:44.720 --> 00:51.350
それらの各ステップを処理できる専門のAISを持つことについて話すつもりだ。
00:51.350 --> 00:56.750
具体的には、 マルチモダリティの導入だ。
00:56.780 --> 01:01.820
私たちは、 画像を作成するようなことを担当できるスペシャリストを持つつもりです。 なぜなら、
01:01.850 --> 01:08.090
それは楽しいことですし、 さらに洗練されたビジネス・アプリケーションを構築できるようになるからです。
01:08.090 --> 01:12.920
それではまた次回、 とても楽しみにしています。

43
week5/community-contributions/subtitles/srts/59166919/ko_KR.srt

@ -0,0 +1,43 @@
WEBVTT
00:00.560 --> 00:03.590
이것으로 도구 세션을 마칠게요
00:03.590 --> 00:08.720
이 시점에서 여러분은 도구 전문가일 겁니다 왜냐하면 돌아가서
00:08.720 --> 00:17.120
추가 사항을 추가했으니까요 LLM에게 비행 예약 기능을 주고 출력에 프린트할 수 있도록 하는 거죠
00:17.360 --> 00:19.940
여기 온 걸 축하해요
00:19.970 --> 00:22.550
이제 트랜스포머에 정통해졌네요
00:22.550 --> 00:28.340
프론티어 LLM API에 대항해 코드를 작성할 수 있고 사용자 인터페이스와 전문성을 위한 도구를
00:28.340 --> 00:30.890
이용해 인공지능 보조를 제작할 수 있죠
00:30.890 --> 00:38.540
내일은 둘째 주를 마무리하는 날입니다 에이전트를 소개하는 아주 흥미로운 주제죠
00:38.570 --> 00:44.720
지금부터는 에이전트가 어떻게 복잡한 순차적 활동을 수행하는지 살펴볼
00:44.720 --> 00:51.350
겁니다 이를 더 작은 단계로 나누고 각 단계를 처리할 전문 AI를 두는 거죠
00:51.350 --> 00:56.750
우리가 살펴볼 특정 영역은 다중 양상을 소개하는 거예요
00:56.780 --> 01:01.820
이미지 생성과 같은 것을 담당할 전문가들도 갖게 될 겁니다 재미있을
01:01.850 --> 01:08.090
테니까요 더 복잡한 비즈니스 응용 프로그램을 만들 수 있게 해주죠
01:08.090 --> 01:12.920
그럼 다음 시간에 뵙죠 정말 기대되네요

313
week5/community-contributions/subtitles/srts/59166947/en_US.srt

@ -0,0 +1,313 @@
WEBVTT
00:01.040 --> 00:04.340
Well, thank you for coming along for week two, day four.
00:04.370 --> 00:06.920
We have lots of good stuff in store today.
00:06.920 --> 00:15.770
It's another day of levelling up, of building new skills that add to your capabilities of using LLMs
00:15.770 --> 00:19.490
for generating important business value.
00:19.940 --> 00:26.240
As always, a quick recap of what you can do already: describing Transformers and the terminology involved.
00:26.240 --> 00:27.290
You know it well.
00:27.290 --> 00:33.920
Confidently coding with the APIs for the top three frontier models, and now most recently, building
00:33.920 --> 00:37.670
a chatbot assistant, an AI chatbot including an interactive UI.
00:37.670 --> 00:43.400
And you're very familiar now with that messages structure going into OpenAI and with the way that the
00:43.400 --> 00:46.130
chat function works for Gradio.
00:46.310 --> 00:49.880
So today is about these things called tools.
00:49.880 --> 00:54.080
By the end, you'll be able to define them, you'll have common use cases for them, and you'll be able
00:54.080 --> 00:57.650
to code an AI assistant that uses tools.
00:57.650 --> 00:58.910
Let's get to it.
00:59.570 --> 01:02.420
So what are tools?
01:02.900 --> 01:11.300
So tools allow frontier models to connect with external functions, with functionality outside the frontier
01:11.330 --> 01:11.780
model.
01:11.810 --> 01:14.570
In fact, tools can mean something broader than that.
01:14.570 --> 01:15.650
It can be other things too.
01:15.680 --> 01:21.110
But most commonly, when you hear people talk about tools, it's in the context of giving frontier models
01:21.140 --> 01:23.870
access to external functions.
01:23.870 --> 01:32.300
It allows for richer replies from an LLM by extending its knowledge, um, it can carry out advanced
01:32.330 --> 01:39.380
actions within your application, and it can enhance its abilities by, for example, giving it a calculator.
01:39.530 --> 01:44.510
So as I said at the end of the last time, this might sound very mysterious.
01:44.510 --> 01:47.090
What exactly is going on here?
01:47.090 --> 01:52.430
We're going to build something like a calculator, like a function that can do calculations, uh, even
01:52.430 --> 01:54.860
go as far as to do a sort of exec of Python code.
01:54.860 --> 01:59.960
And then we're going to sort of give that to the LLM and say, okay, you can use this.
01:59.960 --> 02:03.140
You can run this software on my computer in some way.
02:03.140 --> 02:05.210
It sounds mysterious.
02:05.210 --> 02:06.860
It sounds a bit spooky, really.
02:06.920 --> 02:09.790
Uh, but alas, it is not.
02:09.790 --> 02:10.930
Not that clever.
02:10.930 --> 02:13.840
It's a pretty simple workflow around it.
02:13.870 --> 02:15.130
Here's the scoop.
02:15.130 --> 02:24.220
What we do is we start by defining what functions we have available that the LLM is allowed to call.
02:24.220 --> 02:25.900
So we define these functions.
02:25.900 --> 02:27.100
Let's say we have a calculator.
02:27.100 --> 02:28.390
We define the calculator.
02:28.390 --> 02:32.920
We say what are the inputs, what kind of outputs, and when should the LLM use it.
02:33.010 --> 02:36.550
And then we tell the LLM about that.
02:36.550 --> 02:42.430
When we make a call to do something, we say to it, hey, can you respond to this user.
02:42.430 --> 02:45.760
And by the way, you have access to this tool.
02:45.790 --> 02:51.970
When the LLM replies to us, it can either just respond with a prompt or it can respond with something
02:51.970 --> 02:57.490
like, hey, if I'm going to generate you a response, first I'm going to need to ask you to run
02:57.490 --> 03:03.310
that tool you told me about and run it with these inputs and then provide me back with the outputs.
03:03.310 --> 03:12.280
And then you take that, you run the tool and then you provide the responses back to the LLM, and it
03:12.280 --> 03:14.770
then uses it to generate its response.
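The workflow just described can be sketched with the OpenAI tools API. The `get_ticket_price` function and the prices dict are illustrative stand-ins for the airline example this video builds toward; note that the real API returns response objects whose tool calls are accessed via attributes (`tool_call.function.arguments`), while this sketch uses plain dicts so the message-handling shapes stay testable:

```python
import json

# Hypothetical tool implementation and data.
ticket_prices = {"paris": "$899"}

def get_ticket_price(destination_city):
    return ticket_prices.get(destination_city.lower(), "Unknown")

# The schema we describe to the model in the `tools` parameter of
# openai.chat.completions.create: name, description, inputs, when to use it.
price_tool = {
    "type": "function",
    "function": {
        "name": "get_ticket_price",
        "description": "Get the price of a return ticket to the destination city.",
        "parameters": {
            "type": "object",
            "properties": {
                "destination_city": {
                    "type": "string",
                    "description": "The city the customer wants to fly to",
                },
            },
            "required": ["destination_city"],
        },
    },
}

def handle_tool_call(tool_call):
    # Run the requested tool and build the role="tool" message we send back
    # so the model can generate its final response.
    args = json.loads(tool_call["function"]["arguments"])
    price = get_ticket_price(args["destination_city"])
    return {
        "role": "tool",
        "tool_call_id": tool_call["id"],
        "content": json.dumps({"destination_city": args["destination_city"], "price": price}),
    }
```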
03:14.860 --> 03:20.140
So if you follow my drift there, it's not actually particularly amazing.
03:20.140 --> 03:24.850
It's that you call an LLM and it responds and says, hey, I need you to call the tool that you told
03:24.850 --> 03:25.570
me you have.
03:25.600 --> 03:31.630
You do that, you provide it back to the LLM, and then it's able to give you richer responses.
03:32.320 --> 03:38.620
And if you're really following along, you'll realize that that's not massively different to the kind
03:38.650 --> 03:45.970
of thing we did in the last lab when we just looked for a string and we just inserted extra context
03:45.970 --> 03:52.330
in the prompt that goes to the LLM; it's just about really inserting extra context in prompts.
03:52.360 --> 03:52.930
All right.
03:52.960 --> 03:56.860
Anyway, hopefully I didn't muddle you there, but it's going to come together when you see the code,
03:56.860 --> 03:57.820
I promise you.
03:58.630 --> 04:00.550
But first, what are the use cases.
04:00.550 --> 04:02.500
When when do we typically do this.
04:02.500 --> 04:07.150
There are four ones that really that you come across a lot.
04:07.330 --> 04:15.100
Um, you can use tools to fetch extra data, like look something up in a database, um, add knowledge.
04:15.100 --> 04:19.720
Uh, and again, you can think of it that's rather similar to what we did with belts in the last
04:19.720 --> 04:23.800
lab, but you can do that using tools instead.
04:24.370 --> 04:30.940
Uh, you can use it as a way that the LLM can take an action, like booking a meeting, so you can tell
04:30.970 --> 04:34.120
it as part of your, uh, you have access.
04:34.120 --> 04:40.240
You have the ability to actually, uh, carry out these actions, to buy a plane ticket,
04:40.240 --> 04:41.710
to do the following.
04:41.860 --> 04:47.050
Um, and essentially in its response back, it will tell you that that's the tool it wants to use,
04:48.580 --> 04:51.880
as I just mentioned, a use case would be a calculator.
04:51.880 --> 04:58.510
Uh, LLMs are famously not great at calculations, because all they're trying to do is predict, uh, tokens
04:58.510 --> 04:59.530
in English language.
04:59.530 --> 05:04.360
They don't have, like, a calculator built in to a deep neural network.
05:04.360 --> 05:07.090
But you can provide that as a tool.
05:07.270 --> 05:13.240
And you can notice that, uh, GPT-4 is very good at calculations these days.
05:13.240 --> 05:17.800
And one wonders whether something that's going on behind the scenes might be something like this, that
05:17.800 --> 05:22.020
it might have its own tool made available in order to run calculations.
05:22.020 --> 05:26.010
Perhaps just speculation, but it seems very reasonable.
05:27.090 --> 05:34.260
Another thing it can do is modify the UI so you could tell it, hey, here's some tools.
05:34.260 --> 05:39.390
You can use, some functions you can call that will update different things on my user interface.
05:39.390 --> 05:46.980
And that would give the LLM the direct ability to trigger changes in the UI, which is a pretty cool
05:46.980 --> 05:51.600
idea to have sort of tighter integration between the LLM and the UI.
05:52.740 --> 06:00.090
Again, one thing worth pointing out for the second one here, and for the fourth one for taking actions
06:00.090 --> 06:03.660
and modifying the UI, there will be another way to achieve this.
06:03.660 --> 06:09.090
That would be perhaps a simpler approach if that's all you wanted to do.
06:09.210 --> 06:14.460
See if you can guess, based on something we've already done before. Uh, I'll give you a moment to pause, to
06:14.490 --> 06:16.470
think about what I might be getting at.
06:17.070 --> 06:24.740
The answer is, you remember, uh, in one of the earlier labs we had the model respond in JSON
06:24.740 --> 06:29.900
with a structured response, and its response had JSON to tell us bits of information.
06:29.900 --> 06:35.180
In our case, it was about links and uh, giving us more information about fully qualified links and
06:35.180 --> 06:36.440
which links to collect.
06:36.470 --> 06:41.900
Well, similarly, we could just ask the model to respond in JSON with what actions need to be taken
06:41.900 --> 06:46.790
to book a meeting or respond in JSON based on how it wants the user interface modified.
06:46.790 --> 06:50.690
So there are other ways other than using tools to accomplish this.
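The JSON-based alternative mentioned here can be sketched as follows. The `action`/`destination_city` schema is invented for illustration; the idea is simply to prompt for structured JSON and parse it, instead of registering a tool:

```python
import json

# Hypothetical system prompt asking for a structured JSON reply.
system_prompt = (
    "Reply in JSON with the action to take, e.g. "
    '{"action": "book_flight", "destination_city": "Paris"}'
)

def parse_action(reply_text):
    # Parse the model's JSON reply into an (action, city) pair.
    data = json.loads(reply_text)
    return data["action"], data.get("destination_city")
```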
06:50.690 --> 06:55.430
But if you want to be able to give it tools in addition to streaming back text, then this is a good
06:55.430 --> 06:55.970
solution.
06:55.970 --> 07:00.740
That's the best time to use this, when it's in conjunction with a number of other things that
07:00.740 --> 07:01.730
the LLM is doing.
07:01.730 --> 07:05.600
So these tools are sort of adding to its capabilities.
07:06.800 --> 07:13.250
So what we're going to do now is build an informed airline customer support agent.
07:13.250 --> 07:19.190
We're going to want to be able to tell it that we're traveling to Paris and then have it respond with
07:19.190 --> 07:21.170
a ticket price to Paris.
07:21.170 --> 07:22.310
That's the idea.
07:22.310 --> 07:26.960
We're going to do it with tools, and I will see you over in the lab to find out how.

259
week5/community-contributions/subtitles/srts/59166947/ja_JP.srt

@ -0,0 +1,259 @@
WEBVTT
00:01.040 --> 00:04.340
さて、 第2週4日目もお付き合いいただきありがとうございました。
00:04.370 --> 00:06.920
今日はいいことがたくさんある。
00:06.920 --> 00:19.490
重要なビジネス価値を生み出すためにLLMを活用する能力を高める、 新たなスキルを身につけるためのレベルアップの日だ。
00:19.940 --> 00:26.240
いつものように、 トランスフォーマーについてすでに説明できることと、 それに関係する用語を簡単にまとめてみた。
00:26.240 --> 00:27.290
よくご存知でしょう。
00:27.290 --> 00:33.920
上位3つのフロンティアモデルのAPIを使って自信を持ってコーディングし、 最近ではチャットボットアシスタント、
00:33.920 --> 00:37.670
対話型UIを含むAIチャットボットを構築している。
00:37.670 --> 00:43.400
そして、 OpenAIに入るメッセージの構造や、 Gradioのチャット機能の仕組みは、
00:43.400 --> 00:46.130
もうよくご存知でしょう。
00:46.310 --> 00:49.880
というわけで、 今日は道具というものについて。
00:49.880 --> 00:57.650
最後には、 それらを定義し、 そのための一般的な使用事例を持ち、 ツールを使用するAIアシスタントをコーディングできるようになるでしょう。
00:57.650 --> 00:58.910
さっそく始めよう。
00:59.570 --> 01:02.420
では、 道具とは何か?
01:02.900 --> 01:11.780
そのため、 フロンティア・モデルは、 フロンティア・モデルの外部にある機能を持つ外部関数と接続することができる。
01:11.810 --> 01:14.570
実際、 道具とはもっと広い意味を持つこともある。
01:14.570 --> 01:15.650
他のことでもあり得る。
01:15.680 --> 01:23.870
しかし、 一般的にツールについて語られるとき、 それはフロンティア・モデルに外部機能へのアクセスを与えるという文脈で語られることが多い。
01:23.870 --> 01:32.300
LLMの知識を拡張することで、 LLMからのリッチな返信を可能にし、 アプリケーション内で高度なアクションを実行させ、
01:32.330 --> 01:39.380
例えば電卓を持たせることでその能力を高めることができます。
01:39.530 --> 01:44.510
だから、 前回の最後に言ったように、 これはとてもミステリアスに聞こえるかもしれない。
01:44.510 --> 01:47.090
いったい何がどうなっているのか?
01:47.090 --> 01:54.860
電卓のようなもの、 計算ができる関数のようなものを作ろうと思っているんだ。
01:54.860 --> 01:59.960
そして、 それをLLMに渡して、 オーケー、 これを使っていいよ、 と言うんだ。
01:59.960 --> 02:03.140
このソフトを私のコンピューターで何らかの方法で動かすことができる。
02:03.140 --> 02:05.210
ミステリアスな響きだ。
02:05.210 --> 02:06.860
ちょっと不気味な感じだね。
02:06.920 --> 02:09.790
でも、 残念ながら、 そうではないんだ。
02:09.790 --> 02:10.930
それほど賢くはない。
02:10.930 --> 02:13.840
このあたりのワークフローはいたってシンプルだ。
02:13.870 --> 02:15.130
これがスクープだ。
02:15.130 --> 02:24.220
まずは、 LM が呼び出すことのできる関数を定義することから始めます。
02:24.220 --> 02:25.900
そこで、 これらの関数を定義する。
02:25.900 --> 02:27.100
電卓があるとしよう。
02:27.100 --> 02:28.390
私たちは計算機を定義する。
02:28.390 --> 02:32.920
何がインプットで、 どのようなアウトプットがあり、 LMはいつそれを使うべきなのか。
02:33.010 --> 02:36.550
そして、 そのことをLMに伝える。
02:36.550 --> 02:42.430
私たちが何かをするために呼びかけるとき、 私たちはそれに向かってこう言う。
02:42.430 --> 02:45.760
ところで、 あなたはこのツールにアクセスできる。
02:45.790 --> 02:51.970
LMから返信があった場合、 ただプロンプトを表示するか、 あるいは、 もし私があなたに返信を返すのであれば、
02:51.970 --> 02:57.490
まず、 あなたが教えてくれたツールを実行し、 これらのインプットを使って実行し、
02:57.490 --> 03:03.310
私にアウトプットを返すようにお願いする必要があります。
03:03.310 --> 03:14.770
そして、 そのツールを実行し、 LMにレスポンスを返す。
03:14.860 --> 03:20.140
というわけで、 私の流れに従えば、 実は特別すごいわけではないのだ。
03:20.140 --> 03:24.850
LMを呼び出すと、 LMが応答して、 あなたが持っていると言ったツールを呼び出してほしい、
03:24.850 --> 03:25.570
と言うんだ。
03:25.600 --> 03:31.630
そうすれば、 LMにそれをフィードバックし、 LMはよりリッチなレスポンスを返してくれるようになる。
03:32.320 --> 03:38.620
そして、 もしあなたが本当についてきてくれているのなら、 前回のラボでやったような、
03:38.650 --> 03:52.330
ただ文字列を探してLMに行くプロンプトに余計なコンテキストを挿入するようなことと大差ないことに気づくだろう。
03:52.360 --> 03:52.930
分かった。
03:52.960 --> 03:57.820
とにかく、 混乱させていなければいいのだが、 コードを見ればまとまるはずだ、 約束する。
03:58.630 --> 04:00.550
その前に、 どのような使用例があるのか。
04:00.550 --> 04:02.500
普通はいつやるんだ?
04:02.500 --> 04:07.150
よく目にするのは4つ。
04:07.330 --> 04:15.100
例えば、 データベースで何かを調べたり、 知識を追加したり。
04:15.100 --> 04:19.720
前回のラボでベルトを使ったのと似たようなものだが、
04:19.720 --> 04:23.800
代わりに道具を使うことができる。
04:24.370 --> 04:34.120
ミーティングの予約など、 LMが行動を起こすための手段として使うことができる。
04:34.120 --> 04:41.710
あなたには、 実際に、 えー、 これらを実行する能力がある。
04:41.860 --> 04:47.050
そして、 基本的には、 その返答の中で、 使いたいツールがそれだと伝えてくる。 今言ったように、
04:48.580 --> 04:51.880
ユースケースのひとつは電卓だ。
04:51.880 --> 04:59.530
LLMは計算が苦手なことで有名だが、 それは英語のトークンを予測しようとしているからだ。
04:59.530 --> 05:04.360
ディープ・ニューラル・ネットワークに電卓が組み込まれているわけではないのだ。
05:04.360 --> 05:07.090
でも、 それをツールとして提供することはできる。
05:07.270 --> 05:13.240
そして、 GPT-4は最近、 計算がとても上手くなっていることにお気づきだろう。
05:13.240 --> 05:17.800
そして、 舞台裏で進行していることは、 このようなことなのではないか、 計算を実行するために独自のツールを用意しているのではないか、
05:17.800 --> 05:22.020
と考えてしまう。
05:22.020 --> 05:26.010
憶測にすぎないかもしれないが、 非常に合理的だと思う。
05:27.090 --> 05:34.260
もうひとつできることは、 UIを変更することだ。
05:34.260 --> 05:39.390
ユーザー・インターフェースのさまざまな情報を更新する関数を呼び出すことができる。
05:39.390 --> 05:51.600
これは、 LLMとUIをより緊密に統合させるためのかなりクールなアイデアだ。
05:52.740 --> 06:00.090
繰り返しになるが、 2つ目と4つ目のアクションとUIの変更については、
06:00.090 --> 06:03.660
別の方法がある。
06:03.660 --> 06:09.090
それだけなら、 もっとシンプルな方法かもしれない。
06:09.210 --> 06:16.470
前にやったことを踏まえて、 私が何を言いたいのか、 ちょっと立ち止まって考えてみてください。
06:17.070 --> 06:24.740
答えは、 ええと、 以前のラボの1つで、 構造化されたレスポンスで応答するためにモデルにJSONで応答させ、 そのレスポンスにはJSONがあり、
06:24.740 --> 06:29.900
私たちに情報の断片を伝えていたのを覚えていますか?
06:29.900 --> 06:36.440
私たちの場合、 それはリンクに関するもので、 完全修飾リンクと、 どのリンクを収集するかについての詳細な情報を与えてくれた。
06:36.470 --> 06:46.790
同じように、 ミーティングを予約するために必要なアクションをJSONで応答するようにモデルに要求したり、 ユーザーインターフェイスをどのように変更したいかに基づいてJSONで応答したりすることができます。
06:46.790 --> 06:50.690
だから、 道具を使う以外の方法もある。
06:50.690 --> 06:55.970
しかし、 テキストをストリーミングで送り返すだけでなく、 ツールも与えたいのであれば、 これは良い解決策だ。
06:55.970 --> 07:01.730
それこそ、 LMが行っている他の様々なことと連動しているときこそ、 これを使うベストなタイミングなのだ。
07:01.730 --> 07:05.600
だから、 これらのツールはその能力をさらに高めているんだ。
07:06.800 --> 07:13.250
そこで、 私たちがこれからやろうとしているのは、 情報に精通した航空会社のカスタマーサポートを作ることだ。
07:13.250 --> 07:21.170
パリに旅行することを伝えて、 パリまでのチケット代を返信してもらえるようにしたい。
07:21.170 --> 07:22.310
そういうことだ。
07:22.310 --> 07:26.960
道具を使ってやるんだ。 その方法を見つけるためにラボで会おう。

304
week5/community-contributions/subtitles/srts/59166947/ko_KR.srt

@ -0,0 +1,304 @@
WEBVTT
00:01.040 --> 00:04.340
2주 차, 4일째에 함께해 주셔서 감사해요
00:04.370 --> 00:06.920
오늘 멋진 걸 많이 준비했어요
00:06.920 --> 00:15.770
오늘도 레벨업의 날입니다 중요한 비즈니스 가치를 창출하는 LLM을 이용해 여러분의 역량을 향상하는
00:15.770 --> 00:19.490
새로운 기술을 개발하는 날이죠
00:19.940 --> 00:26.240
늘 그렇듯, 이미 할 수 있는 일을 간단히 요약해보죠 트랜스포머와 관련된 용어를 설명하세요
00:26.240 --> 00:27.290
잘 아시네요
00:27.290 --> 00:33.920
상위 3개 프런티어 모델을 위해 API를 자신 있게 코딩했고 최근에는 챗봇 비서를 제작했습니다
00:33.920 --> 00:37.670
대화형 UI를 갖춘 인공지능 챗봇이죠
00:37.670 --> 00:43.400
이제 OpenAI의 메시지 구조에 아주 익숙해졌죠 그래디오를 위한 채팅
00:43.400 --> 00:46.130
함수가 어떻게 작동하는지도요
00:46.310 --> 00:49.880
오늘은 도구에 대해 알아보죠
00:49.880 --> 00:54.080
결국 정의도 할 수 있고 공통 유스케이스를 갖게 될 겁니다 도구를
00:54.080 --> 00:57.650
사용하는 인공지능 보조도 코딩할 수 있고요
00:57.650 --> 00:58.910
바로 시작해 보죠
00:59.570 --> 01:02.420
그럼 도구가 뭘까요?
01:02.900 --> 01:11.780
즉, 프런티어 모델은 외부 기능과 외부 기능성을 연결할 수 있죠
01:11.810 --> 01:14.570
도구의 의미는 그보다 더 광범위해요
01:14.570 --> 01:15.650
다른 것도 가능해요
01:15.680 --> 01:21.110
하지만 보통 툴이라고 하면 외부 기능에 접근하는
01:21.140 --> 01:23.870
개척자 모델을 뜻하죠
01:23.870 --> 01:32.300
LLM의 지식을 확장함으로써 더 많은 회신이 가능하게 합니다 응용 프로그램 내에서 고급 액션을
01:32.330 --> 01:39.380
수행할 수 있고 성능을 향상할 수도 있습니다 예를 들어 계산기를 제공할 수도 있죠
01:39.530 --> 01:44.510
지난 시간에 말했듯이 아주 신비롭게 들릴 수도 있어요
01:44.510 --> 01:47.090
어떻게 어떻게 된 거죠?
01:47.090 --> 01:52.430
계산기 같은 걸 만들어 보죠 계산을 하는 함수요 일종의 파이썬
01:52.430 --> 01:54.860
코드도 할 수 있는 함수요
01:54.860 --> 01:59.960
그런 다음 그걸 LLM에 주고 이걸 사용하라고 하는 거죠
01:59.960 --> 02:03.140
내 컴퓨터로 이 소프트웨어를 작동시켜요
02:03.140 --> 02:05.210
신비롭게 들리네요
02:05.210 --> 02:06.860
좀 으스스하게 들리죠
02:06.920 --> 02:09.790
하지만 그렇지 않아요
02:09.790 --> 02:10.930
별로 안 똑똑해요
02:10.930 --> 02:13.840
워크플로우는 아주 간단해요
02:13.870 --> 02:15.130
특종이에요
02:15.130 --> 02:24.220
LM이 호출할 수 있는 사용 가능한 함수를 정의하는 것부터 시작하죠
02:24.220 --> 02:25.900
이 함수들을 정의하죠
02:25.900 --> 02:27.100
계산기가 있다고 가정해 보죠
02:27.100 --> 02:28.390
계산기는 정의했어요
02:28.390 --> 02:32.920
입력과 출력은 무엇인지 LM이 언제 사용해야 하는지도요
02:33.010 --> 02:36.550
그런 다음 그걸 LM에 알려주는 거죠
02:36.550 --> 02:42.430
뭔가를 하려고 호출할 때 사용자에게 응답할 수 있는지 묻죠
02:42.430 --> 02:45.760
이 도구에 엑세스할 수 있어요
02:45.790 --> 02:51.970
LM이 응답할 때 프롬프트로 응답할 수도 있고 혹은 다른 것으로 응답할 수도
02:51.970 --> 02:57.490
있습니다 응답을 생성해야 한다면 먼저 당신이 말한 도구를 실행하도록
02:57.490 --> 03:03.310
요청해야 합니다 입력으로 실행하고 출력을 제공해야 하죠
03:03.310 --> 03:12.280
그걸 가져가서 도구를 실행하고 LM에 응답을 다시 제공하면 LM은 응답을 생성하는
03:12.280 --> 03:14.770
데 사용하죠
03:14.860 --> 03:20.140
제 말을 이해하신다면 사실 그렇게 대단하진 않아요
03:20.140 --> 03:24.850
LM을 호출하면 응답이 옵니다 당신이 갖고 있다고 한 도구를 호출해
03:24.850 --> 03:25.570
주세요
03:25.600 --> 03:31.630
그렇게 하면 LM에 다시 제공됩니다 그럼 더 풍부한 반응을 제공하죠
03:32.320 --> 03:38.620
잘 따라오신다면 지난 랩에서 했던 것과 크게 다르지 않다는
03:38.650 --> 03:45.970
걸 아실 겁니다 문자열을 찾아 프롬프트에 추가 컨텍스트를 삽입해 LM으로
03:45.970 --> 03:52.330
간 거요 프롬프트에 추가 컨텍스트를 삽입하는 거죠
03:52.360 --> 03:52.930
좋아요
03:52.960 --> 03:56.860
어쨌든 헷갈리신 게 아니면 좋겠네요 코드를 보시면 다 이해될 거예요
03:56.860 --> 03:57.820
약속드리죠
03:58.630 --> 04:00.550
먼저 유스 케이스가 뭐죠?
04:00.550 --> 04:02.500
보통 언제 이걸 하죠?
04:02.500 --> 04:07.150
자주 보게 되는 게 네 가지 있어요
04:07.330 --> 04:15.100
도구를 이용해 추가 데이터를 가져올 수도 있어요 데이터베이스에서 찾아보고 지식을 추가하는 것처럼요
04:15.100 --> 04:19.720
지난 랩에서 벨트랑 했던 것과 비슷하다고 생각하실
04:19.720 --> 04:23.800
수 있지만 대신 도구를 사용할 수 있어요
04:24.370 --> 04:30.940
LM이 행동을 취하는 방법으로 사용할 수 있습니다 회의를 예약하는 것처럼요 그럼
04:30.970 --> 04:34.120
액세스 권한의 일부로 말할 수 있죠
04:34.120 --> 04:40.240
실제로 이런 작업을 수행할 능력이 있어서 비행기 표를 사는 등의 일을
04:40.240 --> 04:41.710
할 수 있어요
04:41.860 --> 04:47.050
기본적으로 응답을 보면 사용하고 싶은 툴이 뭔지 알려줍니다 방금
04:48.580 --> 04:51.880
언급한 것처럼요 사용 사례는 계산기예요
04:51.880 --> 04:59.530
LLM은 계산에 약하기로 유명하죠 영어로 토큰을 예측하는 게 전부니까요
04:59.530 --> 05:04.360
심층 신경망에 계산기가 내장돼 있지 않아요
05:04.360 --> 05:07.090
하지만 도구로 제공할 수 있어요
05:07.270 --> 05:13.240
보시다시피 GPT 4는 요즘 연산에 아주 능숙하죠
05:13.240 --> 05:17.800
배후에서 일어나는 일이 이런 건 아닌지 궁금하네요 계산을
05:17.800 --> 05:22.020
실행하기 위해 고유한 도구를 사용할 수 있는 거죠
05:22.020 --> 05:26.010
추측일 수도 있지만 아주 합리적인 것 같아요
05:27.090 --> 05:34.260
UI 수정도 할 수 있어요 여기 도구가 있다고 말할 수 있게요
05:34.260 --> 05:39.390
사용자 인터페이스에서 다양한 걸 업데이트하는 함수를 사용할 수 있어요
05:39.390 --> 05:46.980
LLM은 UI에서 변화를 촉발하는 직접적인 기능을 갖게 됩니다 LLM과 UI 사이에
05:46.980 --> 05:51.600
더 탄탄한 통합을 갖는다는 건 멋진 아이디어죠
05:52.740 --> 06:00.090
두 번째에 대해 짚고 넘어갈 게 하나 있어요 네 번째 것은 행동을 취하고 UI 수정하는
06:00.090 --> 06:03.660
거죠 이걸 달성할 다른 방법이 있어요
06:03.660 --> 06:09.090
그게 당신이 원하는 전부라면 더 간단한 방법이겠죠
06:09.210 --> 06:14.460
전에 했던 걸 바탕으로 잠시 멈춰서 제가 뭘 하려는지
06:14.490 --> 06:16.470
생각해 보세요
06:17.070 --> 06:24.740
답은 이거죠 초기 실험 중 하나에서 JSON에서 모델이 구조적 응답으로 응답하게 한 거
06:24.740 --> 06:29.900
기억하시죠? 그 응답은 JSON이 정보를 제공했고요
06:29.900 --> 06:35.180
저희의 경우는 링크가 중요했어요 완전한 자격의 링크와 수집할 링크에 대한 정보를
06:35.180 --> 06:36.440
더 많이 주는 거였죠
06:36.470 --> 06:41.900
유사하게 모델에게 JSON에서 반응하도록 요청할 수도 있어요 회의를 예약하기 위해 어떤 행동을 취해야 하는지
06:41.900 --> 06:46.790
또는 JSON에서 반응해야 하는지 사용자 인터페이스를 어떻게 수정하길 원하는지에 근거해서요
06:46.790 --> 06:50.690
도구를 사용하는 것 외에 다른 방법도 있어요
06:50.690 --> 06:55.970
하지만 스트리밍 백 텍스트 외에 도구를 제공하고 싶다면 이게 좋은 해결책이에요
06:55.970 --> 07:00.740
그게 이걸 사용하기에 가장 좋은 때죠 LM이 작업하는 다른 여러 가지 작업과
07:00.740 --> 07:01.730
함께요
07:01.730 --> 07:05.600
이런 도구들이 기능에 추가되는 거죠
07:06.800 --> 07:13.250
이제 할 일은 항공사 고객 지원 에이전트를 정보에 따라 만드는 거죠
07:13.250 --> 07:19.190
우리가 파리로 간다고 말하고 파리행 비행기 표 가격으로 답장을
07:19.190 --> 07:21.170
받아야 해요
07:21.170 --> 07:22.310
그게 목적이죠
07:22.310 --> 07:26.960
도구를 이용해 알아보겠습니다 실험실에서 방법을 알아보죠
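The tool workflow the transcript walks through (define a function, describe it to the LLM, run it yourself when the LLM asks, then feed the result back for the final answer) can be sketched roughly as below. This is a minimal sketch, not the course's actual code: the model name, the ticket prices, and the helper names are assumptions for illustration.

```python
import json

# Hypothetical tool: a ticket-price lookup the LLM cannot do on its own.
TICKET_PRICES = {"paris": "$899", "london": "$799"}

def get_ticket_price(destination_city):
    return TICKET_PRICES.get(destination_city.lower(), "Unknown")

# Step 1: describe the function so the LLM knows when and how to call it.
price_tool = {
    "type": "function",
    "function": {
        "name": "get_ticket_price",
        "description": "Get the price of a return ticket to the destination city.",
        "parameters": {
            "type": "object",
            "properties": {
                "destination_city": {
                    "type": "string",
                    "description": "The city the customer wants to fly to",
                },
            },
            "required": ["destination_city"],
        },
    },
}

def chat_with_tools(user_message):
    # Imported here so the sketch above stands alone without the package.
    from openai import OpenAI

    client = OpenAI()
    messages = [{"role": "user", "content": user_message}]
    # Step 2: the LLM may answer directly, or ask us to run our tool.
    response = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=[price_tool]
    )
    message = response.choices[0].message
    if message.tool_calls:
        # Step 3: run the tool ourselves and hand the output back.
        call = message.tool_calls[0]
        args = json.loads(call.function.arguments)
        result = get_ticket_price(args["destination_city"])
        messages.append(message)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps({"price": result}),
        })
        # Step 4: the LLM folds the tool result into its final reply.
        response = client.chat.completions.create(
            model="gpt-4o-mini", messages=messages, tools=[price_tool]
        )
    return response.choices[0].message.content
```

Note that the LLM never executes anything itself: it only replies with a structured request, and our own code decides to run the function and send the output back.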

463
week5/community-contributions/subtitles/srts/59166949/en_US.srt

@ -0,0 +1,463 @@
WEBVTT
00:00.260 --> 00:02.750
Welcome back to making chatbots.
00:02.780 --> 00:04.070
Let's keep going.
00:04.070 --> 00:09.650
So for the next part we're going to beef up the system message to something a bit more interesting.
00:09.650 --> 00:10.700
System message.
00:10.700 --> 00:12.710
You're a helpful assistant in a clothes store.
00:12.740 --> 00:15.920
You should try to gently encourage the customer to try items on sale.
00:15.950 --> 00:17.780
Hats are 60% off.
00:17.780 --> 00:19.880
Most other items are 50% off.
00:19.880 --> 00:24.530
For example, if the customer says I'm looking to buy a hat, you could reply, wonderful!
00:24.560 --> 00:28.100
We have lots of hats, including several part of our sales event.
00:28.100 --> 00:32.330
Encourage the customer to buy hats if they're unsure what to get.
00:32.330 --> 00:36.050
So what you're seeing here is a few things going on in this system prompt.
00:36.050 --> 00:42.230
We've got some facts that are being provided about the sales, the hats and other items.
00:42.350 --> 00:47.930
Um, you've got an example, an example of if the customer says this, you could say that.
00:47.930 --> 00:52.280
And that example both is a way to establish tone and style.
00:52.400 --> 00:58.850
Um, and it's also a way to introduce more facts about hats, uh, into the conversation.
00:58.850 --> 01:01.670
So this is all example of one shot prompting.
01:01.670 --> 01:03.150
And you could argue Multi-shot prompting.
01:03.150 --> 01:06.720
So we're giving it a few different sort of nuances of how to reply.
01:06.930 --> 01:09.810
Um, and building that into the system message.
01:09.810 --> 01:13.950
There are other ways of doing it that we'll talk about, or at least at least another way of doing it.
01:14.040 --> 01:16.980
But this is one very effective way.
01:16.980 --> 01:25.950
So we add in that system message, and we're now going to have a chat with the chat
01:25.980 --> 01:26.580
bot.
01:26.670 --> 01:34.170
So again we write our method of our generator chat um which takes a message and history because that's
01:34.170 --> 01:36.030
what Gradio wants to call us with.
01:36.210 --> 01:43.650
Um, and we first convert that into the format that OpenAI expects, um, by building the usual list
01:43.650 --> 01:44.730
that you're familiar with.
01:44.730 --> 01:49.200
I should also mention, I don't know if I mentioned this last time, that at the end here we have to
01:49.230 --> 01:56.550
add in, of course, into that list the latest message that the user is sending that gets added to the
01:56.550 --> 01:58.770
bottom as role user content.
01:58.770 --> 02:00.480
And that message.
02:00.480 --> 02:08.460
Then of course, we make a call that at this point, you have ingrained in your deepest memory the create
02:08.460 --> 02:12.630
call and we have stream is true and we stream back results.
02:12.630 --> 02:13.530
So here we go.
02:13.560 --> 02:14.760
We'll bring that up.
02:14.760 --> 02:16.680
We'll bring it up in a separate window again.
02:16.710 --> 02:17.340
Why not.
02:17.340 --> 02:20.910
And let's talk to our shopping assistant.
02:21.210 --> 02:22.050
Hi there.
02:23.520 --> 02:24.870
Welcome to our store.
02:24.900 --> 02:25.920
How can I assist you today?
02:25.920 --> 02:27.510
Are you looking for anything specific?
02:27.510 --> 02:33.090
Say, uh, I'd like to buy some shoes.
02:34.110 --> 02:35.280
Great.
02:36.480 --> 02:38.760
We have a lovely selection.
02:38.760 --> 02:41.580
While you're browsing, I want to mention we have a fantastic sale going on.
02:41.580 --> 02:43.140
Most items are 50% off.
02:43.140 --> 02:47.340
If you're open to it, we have some stylish hats that are 60% off.
02:47.370 --> 02:50.340
They might be the perfect complement to your new shoes.
02:50.340 --> 02:52.170
Would you like to take a look at both?
02:52.620 --> 02:56.490
So I want to point out that it's obviously figured it out.
02:56.490 --> 03:01.170
It's got the knowledge that we supplied to it in the system prompt.
03:01.170 --> 03:07.990
But you hopefully will also notice that the sort of enthusiastic, effusive style that I used in that
03:07.990 --> 03:16.030
system prompt has rubbed off in a way that it's communicating in this kind of, uh, very, uh, amiable
03:16.030 --> 03:16.960
fashion.
03:17.290 --> 03:22.720
Um, and that's a big part of this kind of one shot or multi-shot prompting when you set the tone,
03:22.720 --> 03:25.930
give examples of how it should reply.
03:26.860 --> 03:28.510
Uh, so let's keep going.
03:28.510 --> 03:31.180
Let's take that system message and and add in.
03:31.180 --> 03:35.950
If the customer asks for shoes, you could respond that shoes are not on sale.
03:35.980 --> 03:37.720
Let's say should respond.
03:37.720 --> 03:39.910
Should respond that shoes are not on sale today.
03:39.910 --> 03:42.760
But remind the customer to look at hats.
03:42.760 --> 03:44.170
So let's try this again.
03:44.170 --> 03:45.580
Let's see how this does.
03:45.640 --> 03:48.160
Uh, it's got another fact to add.
03:48.310 --> 03:49.630
Uh, you could argue that that's.
03:49.660 --> 03:54.220
Yeah, that's like another, uh, multi-shot prompt.
03:54.250 --> 03:55.210
Let's say.
03:55.210 --> 03:55.990
Hi there.
03:57.640 --> 03:58.420
Are you looking.
03:58.420 --> 04:04.540
I'd like to buy some shoes.
04:04.540 --> 04:05.900
That sounds great.
04:05.930 --> 04:08.630
I should mention shoes aren't on sale today.
04:08.630 --> 04:10.940
But while you're here, have you thought about checking out our hats?
04:13.640 --> 04:20.030
You've got to feel sorry for this poor customer who's going to get repeatedly pitched hats.
04:20.330 --> 04:27.860
Um, so you can see again how it's established, um, that shoes aren't on sale, but that hats very
04:27.860 --> 04:28.910
much are.
04:29.180 --> 04:34.820
And that is an example, then, of Multi-shot prompting, uh, in that we're giving it more examples
04:34.820 --> 04:35.930
to learn from.
04:36.260 --> 04:44.510
Um, so another thing that we can do that is interesting, um, is that whenever you've seen these constructions
04:44.510 --> 04:47.960
so far, you've always seen us beginning with the system message.
04:47.960 --> 04:49.640
The system messages come at the top.
04:49.640 --> 04:54.830
But in fact, with the OpenAI call, you're not constrained to have the system message at the
04:54.830 --> 04:55.070
top.
04:55.070 --> 04:58.460
You can add in more system messages as you go.
04:58.730 --> 05:05.930
Um, and so for example, one of the things we can do, um, uh, and let me apologize for some very
05:05.970 --> 05:09.120
hacky code here that I will not recommend, but it's here to show the example.
05:09.120 --> 05:15.390
It's not the way that you should do it in practice, but what we could do is when we're building this
05:15.390 --> 05:23.280
chat generator, we could put in here if this current message that the user is sending us contains the
05:23.280 --> 05:24.510
word belt.
05:24.510 --> 05:28.020
And you can see in a rather unawesome way, I've just looked for the string belt.
05:28.050 --> 05:33.150
Of course, I should be testing whether it's the full word, and I should be thinking about uppercase,
05:33.180 --> 05:34.770
lowercase and so on.
05:34.770 --> 05:38.220
I'm not doing any of that, which is very naughty of me, but it shows the point.
05:38.220 --> 05:43.560
So if belt is in the word message, it's going to add into this set of messages.
05:43.590 --> 05:50.490
Another system message saying for added context, the store does not sell belts, but be sure to point
05:50.490 --> 05:58.500
out items on sale so that will be then added in to the prompt if the user asks for a belt.
05:59.940 --> 06:06.030
So let's give that a try and bring this up here.
06:08.250 --> 06:09.180
Hi there.
06:11.130 --> 06:12.060
Welcome to the store.
06:12.090 --> 06:13.080
I can assist you.
06:13.530 --> 06:17.310
I'd like to buy a belt.
06:19.110 --> 06:19.710
I'm sorry.
06:19.710 --> 06:20.910
We don't carry belts.
06:20.910 --> 06:24.390
However, we have fantastic items, including hats.
06:24.420 --> 06:26.130
60% off.
06:26.310 --> 06:27.780
So there you go.
06:27.810 --> 06:28.620
There you go.
06:28.650 --> 06:29.880
It's, um.
06:30.210 --> 06:33.270
Uh, definitely pays attention.
06:33.270 --> 06:41.310
You can see that when the system message is added in as another row in this, uh, in this list of messages,
06:41.310 --> 06:42.900
it pays attention to it.
06:42.900 --> 06:49.140
And that gives us the opportunity to add context into the conversation.
06:49.140 --> 06:58.740
And this is a, uh, whilst it is, of course, uh, very kludgy code to be detecting a word, a substring
06:58.740 --> 07:02.640
like that, you can imagine that you could beef this up to be a bit more robust.
07:02.670 --> 07:07.830
You could properly you could have a little dictionary which looks for particular words.
07:07.830 --> 07:15.910
And when it finds them, it could use them to then enrich the context in the right way.
07:15.970 --> 07:23.140
So it gives you a little, a little ability to be looking things up and adding them into the context.
07:23.170 --> 07:29.950
Now, you may be familiar with with some things about Rag, and you may be aware that that is a lot
07:29.950 --> 07:31.150
of what rag is about.
07:31.180 --> 07:38.650
Rag is about finding extra information that's relevant to the prompt, and adding it in to the context
07:38.680 --> 07:41.200
of the message that gets sent to the LM.
07:41.230 --> 07:47.260
Now, of course, Rag does that in a much more sophisticated and intelligent way than this hokey piece
07:47.260 --> 07:48.430
of code right here.
07:48.610 --> 07:55.540
But you can think of this as a as a light, a baby version of rag, and as an exercise for you.
07:55.540 --> 08:01.030
You can certainly beef this up a bit, and at the very least, use regex to make it look for a particular
08:01.030 --> 08:01.540
word.
08:01.540 --> 08:07.930
Maybe have a little dictionary that has the the words, the different items in the store together with
08:07.930 --> 08:13.730
their price so that you could add that in as a system message to give it more context.
08:13.730 --> 08:20.270
You could try that out and see that you could build a chatbot that actually knows about the prices of
08:20.270 --> 08:23.000
the goods in its store, and that would be pretty cool.
08:23.750 --> 08:27.200
So that wraps up this particular experiment.
08:27.200 --> 08:32.270
I do just want to mention, I alluded earlier to the fact that there are other ways of doing Multi-shot
08:32.270 --> 08:38.840
prompting other than shoving it in the system prompt, and the other way is that you can have a user
08:38.840 --> 08:43.430
assistant, user assistant set of messages that hasn't actually happened.
08:43.460 --> 08:49.580
You can have a fictitious exchange between the user and the assistant that you include in the conversation
08:49.580 --> 08:56.930
before the current conversation, and use that as a way to prime the LM with similar conversations,
08:56.930 --> 09:02.990
so that it gets a sense of how it's responded to other questions.
09:03.140 --> 09:09.200
You can use that again, both to train it on style and also to supply extra facts.
09:09.200 --> 09:14.640
So there could have been an earlier interaction when there had been a question about a belt, and the
09:14.640 --> 09:19.740
assistant had already replied that there are no belts in the store, and it would have learnt from that.
09:19.740 --> 09:25.380
So either technique, they have pros and cons, whether you supply it in system prompts or whether you
09:25.410 --> 09:33.270
give example user assistant interactions for it to have as part of the input context for it to to be
09:33.300 --> 09:34.410
able to absorb.
09:34.560 --> 09:38.640
Um, and my ask to you is to try them both out.
09:38.640 --> 09:43.680
So update this so that it uses user assistant interactions instead of a system prompt and see how that
09:43.680 --> 09:44.160
works.
09:44.190 --> 09:47.430
See if you think you get a better or a worse clothes store assistant.
09:47.430 --> 09:53.460
And then also make this change to make this a whole lot more robust, have a dictionary of different
09:53.460 --> 10:00.780
items in the store, look up their prices or their sale amounts, and then add that as context into
10:00.780 --> 10:09.060
the conversation so that the assistant responds with some expertise and have fun doing it, and I will
10:09.060 --> 10:12.300
see you for the next video to wrap up this day.
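The "baby RAG" exercise described above (a small dictionary of store items, a whole-word regex match instead of a bare substring check, and an extra system message injected when an item is mentioned) might look something like this sketch. The items and prices are made-up examples, not from the course:

```python
import re

# Hypothetical store inventory with sale information.
PRICES = {
    "hat": "$20 (60% off)",
    "shirt": "$30 (50% off)",
    "shoes": "$60 (not on sale)",
}

def add_context(messages, user_message):
    """Append a system message for each store item the user mentions."""
    for item, price in PRICES.items():
        # Whole-word, case-insensitive match -- more robust than `if item in message`.
        if re.search(rf"\b{item}s?\b", user_message, re.IGNORECASE):
            messages.append({
                "role": "system",
                "content": f"For added context: the price of {item} is {price}.",
            })
    return messages

messages = add_context(
    [{"role": "system", "content": "You are a store assistant."}],
    "Do you sell HATS and belts?",
)
```

Here only "hat" triggers an injection: "belts" is not in the dictionary, and the regex ignores case and optional plurals, fixing the two weaknesses the transcript admits to.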

391
week5/community-contributions/subtitles/srts/59166949/ja_JP.srt

@ -0,0 +1,391 @@
WEBVTT
00:00.260 --> 00:02.750
チャットボット作りにお帰りなさい。
00:02.780 --> 00:04.070
続けよう。
00:04.070 --> 00:09.650
そこで次のパートでは、 システム・メッセージをもう少し面白いものに強化する。
00:09.650 --> 00:10.700
システムメッセージ。
00:10.700 --> 00:12.710
あなたは洋服店の店員だ。
00:12.740 --> 00:15.920
セール品を試してみるよう、 お客にそっと勧めるべきだ。
00:15.950 --> 00:17.780
帽子60%オフ。
00:17.780 --> 00:19.880
その他のほとんどの商品は50%オフ。
00:19.880 --> 00:24.530
例えば、 お客さんが帽子を買いたいんだけど、 と言ったら、 素敵ですね、 と返せばいい!
00:24.560 --> 00:28.100
私たちは、 販売イベントの一部を含め、 たくさんの帽子を用意しています。
00:28.100 --> 00:32.330
何を買おうか迷っているお客さんには、 帽子を買うように勧める。
00:32.330 --> 00:36.050
このシステム・プロンプトでは、 いくつかのことが進行している。
00:36.050 --> 00:42.230
セールや帽子などについて、 いくつかの事実をお伝えします。
00:42.350 --> 00:47.930
もしお客さんがこう言うなら、 あなたはこう言うことができる。
00:47.930 --> 00:52.280
そしてその例はどちらも、 トーンとスタイルを確立する方法なのだ。
00:52.400 --> 00:58.850
それに、 帽子に関する事実をもっと会話の中に取り入れる方法でもあるんだ。
00:58.850 --> 01:01.670
つまり、 これはすべて一発プロンプトの例なのだ。
01:01.670 --> 01:03.150
そして、 マルチショットのプロンプトを主張することもできる。
01:03.150 --> 01:06.720
だから、 返答の仕方のニュアンスを少し変えているんだ。
01:06.930 --> 01:09.810
それをシステムメッセージに組み込むんだ。
01:09.810 --> 01:13.950
他のやり方もあるので、 それについてはまたお話ししますし、 少なくとも別のやり方もあります。
01:14.040 --> 01:16.980
しかし、 これは非常に効果的な方法のひとつだ。
01:16.980 --> 01:26.580
システム・メッセージを追加し、 チャットボットとチャットをします。
01:26.670 --> 01:36.030
そこでもう一度、 メッセージと履歴を受け取るジェネレーター・チャットのメソッドを書く。
01:36.210 --> 01:44.730
まず、 OpenAIが期待するフォーマットに変換します。
01:44.730 --> 01:49.200
また、 前回言ったかどうかわからないが、 最後に、 ユーザーが送信している最新のメッセージを、
01:49.230 --> 01:58.770
ロール・ユーザー・コンテンツとして一番下に追加しなければならない。
01:58.770 --> 02:00.480
そしてそのメッセージ。
02:00.480 --> 02:08.460
そしてもちろん、 この時点であなたの深い記憶に刻み込まれた "create "コールを行い、 "stream
02:08.460 --> 02:12.630
is true "で結果をストリームバックする。
02:12.630 --> 02:13.530
それでは、 どうぞ。
02:13.560 --> 02:14.760
その話を持ち出そう。
02:14.760 --> 02:16.680
また別ウィンドウで表示させます。
02:16.710 --> 02:17.340
なぜだ。
02:17.340 --> 02:20.910
そして、 ショッピング・アシスタントに話を聞いてみよう。
02:21.210 --> 02:22.050
こんにちは。
02:23.520 --> 02:24.870
当店へようこそ。
02:24.900 --> 02:25.920
本日はどのようなご用件でしょうか?
02:25.920 --> 02:27.510
何か特定のものをお探しですか?
02:27.510 --> 02:33.090
靴を買いたいんだ。
02:34.110 --> 02:35.280
素晴らしい。
02:36.480 --> 02:38.760
素敵な品揃えです。
02:38.760 --> 02:41.580
ご覧いただいている間に、 素晴らしいセールが開催中であることをお伝えしたい。
02:41.580 --> 02:43.140
ほとんどの商品が50%オフ。
02:43.140 --> 02:47.340
もしよろしければ、 60%オフのおしゃれな帽子をどうぞ。
02:47.370 --> 02:50.340
新しい靴にぴったりかもしれない。
02:50.340 --> 02:52.170
両方ご覧になりますか?
02:52.620 --> 02:56.490
だから、 私はそれが明らかに解明されていることを指摘したい。
02:56.490 --> 03:01.170
システム・プロンプトで供給した知識を持っている。
03:01.170 --> 03:07.990
しかし、 そのシステム・プロンプトで私が使っていた、 熱狂的で熱弁をふるうようなスタイルが、 このような、
03:07.990 --> 03:16.960
ええと、 とても、 ええと、 愛想のいいファッションでコミュニケーションしていることに気づいていただければ幸いだ。
03:17.290 --> 03:22.720
そして、 このようなワンショットやマルチショットのプロンプトの大きな役割は、 あなたがトーンを設定し、
03:22.720 --> 03:25.930
どのように答えるべきかの例を示すことです。
03:26.860 --> 03:28.510
じゃあ、 続けよう。
03:28.510 --> 03:31.180
そのシステム・メッセージに、 こう付け加えよう。
03:31.180 --> 03:35.950
靴が欲しい」と言われたら、 「靴はセール対象外です」と答えればいい。
03:35.980 --> 03:37.720
と答えるべきだ。
03:37.720 --> 03:39.910
今日は靴はセール対象外だと答えるべきだ。
03:39.910 --> 03:42.760
でも、 お客さんには帽子を見るように注意してください。
03:42.760 --> 03:44.170
では、 もう一度やってみよう。
03:44.170 --> 03:45.580
どうなるか見てみよう。
03:45.640 --> 03:48.160
ええと、 もうひとつ事実があるんだ。
03:48.310 --> 03:49.630
そうとも言えるね。
03:49.660 --> 03:54.220
ああ、 これもマルチショットのプロンプトだね。
03:54.250 --> 03:55.210
こう言おう。
03:55.210 --> 03:55.990
こんにちは。
03:57.640 --> 03:58.420
お探しですか?
03:58.420 --> 04:04.540
靴を買いたいんだ。
04:04.540 --> 04:05.900
それはいいね。
04:05.930 --> 04:08.630
今日は靴はセール対象外なんだ。
04:08.630 --> 04:10.940
でも、 ここにいる間に、 私たちの帽子をチェックしようと思ったことはある?
04:13.640 --> 04:20.030
何度も帽子を投げつけられるかわいそうな客に同情せざるを得ない。
04:20.330 --> 04:28.910
靴はセールにならないが、 帽子はセールになる。
04:29.180 --> 04:35.930
これはマルチショット・プロンプトの一例で、 より多くの例を与えて学習させるということだ。
04:36.260 --> 04:47.960
もうひとつ、 興味深いのは、 これまでこのような構成を見てきたとき、 いつもシステム・メッセージから始めていたことだ。
04:47.960 --> 04:49.640
システムメッセージが一番上に来る。
04:49.640 --> 04:55.070
しかし実際、 OpenAIのコールでは、 システムメッセージをトップに置くことに制約されることはない。
04:55.070 --> 04:58.460
さらにシステムメッセージを追加していくこともできる。
04:58.730 --> 05:05.930
例えば、 私たちができることのひとつは、 ええと、 ええと、 あまりお勧めはしないのですが、
05:05.970 --> 05:09.120
例を示すためにここにあります。
05:09.120 --> 05:24.510
しかし、 このチャット・ジェネレーターを構築する際に、 ユーザーが現在送っているメッセージにベルトという単語が含まれているかどうかをここに書き込むことができる。
05:24.510 --> 05:28.020
そして、 ちょっと不格好ですが、 文字列 belt を探しただけです。
05:28.050 --> 05:34.770
もちろん、 完全な単語かどうかをテストすべきだし、 大文字、 小文字なども考えるべきだ。
05:34.770 --> 05:38.220
私はそんなことはしていない。 とても行儀が悪いのだが、 要点は伝わるはずだ。
05:38.220 --> 05:43.560
だから、 もしベルトがメッセージという言葉に入っていれば、 それはこのメッセージのセットに加えられることになる。
05:43.590 --> 05:50.490
もう一つのシステムメッセージは、 この店ではベルトを販売していないが、
05:50.490 --> 05:58.500
セール品を必ず示すこと。
05:59.940 --> 06:06.030
では、 試しにこれをここに持ってきてみよう。
06:08.250 --> 06:09.180
こんにちは。
06:11.130 --> 06:12.060
ご来店ありがとうございます。
06:12.090 --> 06:13.080
私がお手伝いします。
06:13.530 --> 06:17.310
ベルトを買いたいんだ。
06:19.110 --> 06:19.710
ごめんなさい.
06:19.710 --> 06:20.910
ベルトは持っていない。
06:20.910 --> 06:24.390
しかし、 帽子を含む素晴らしいアイテムがあります。
06:24.420 --> 06:26.130
60%オフ。
06:26.310 --> 06:27.780
そうだ。
06:27.810 --> 06:28.620
そうだ。
06:28.650 --> 06:29.880
それは、 うーん。
06:30.210 --> 06:33.270
ああ、 間違いなく注意を払っている。
06:33.270 --> 06:42.900
システム・メッセージがこのメッセージ・リストに別の行として追加されると、 そのメッセージに注意を払うのがわかるだろう。
06:42.900 --> 06:49.140
そしてそれは、 私たちが会話に文脈を加える機会を与えてくれる。
06:49.140 --> 07:02.640
もちろん、 このような単語や部分文字列を検出するのは非常に不格好なコードだが、 これをもう少し堅牢にすることは可能だろう。
07:02.670 --> 07:07.830
特定の単語を探す小さな辞書があってもいい。
07:07.830 --> 07:15.910
そして、 それを見つけたら、 適切な方法でコンテクストを豊かにするために使うことができる。
07:15.970 --> 07:23.140
だから、 いろいろなことを調べたり、 文脈に加えたりすることができるんだ。
07:23.170 --> 07:31.150
さて、 皆さんはRAGについてある程度ご存知かもしれないし、 RAGとはそういうものだということもご存知かもしれない。
07:31.180 --> 07:41.200
RAGとは、 プロンプトに関連する余分な情報を見つけ、 それをLMに送られるメッセージの文脈に加えることである。
07:41.230 --> 07:48.430
もちろん、 RAGはこのような陳腐なコードよりも、 はるかに洗練されたインテリジェントな方法でそれを実現している。
07:48.610 --> 07:55.540
でも、 これはRAGの軽量版、 赤ちゃんバージョンであり、 あなたへの練習問題だと思えばいい。
07:55.540 --> 08:01.540
少なくとも、 正規表現を使って特定の単語を検索させることはできる。
08:01.540 --> 08:07.930
店内のさまざまな商品と、 その価格が書かれた小さな辞書を用意して、 それをシステムメッセージとして追加することで、
08:07.930 --> 08:13.730
より多くの文脈を与えることができるかもしれない。
08:13.730 --> 08:20.270
それを試してみて、 実際にその店の商品の価格を知っているチャットボットを作れば、
08:20.270 --> 08:23.000
かなりクールだろう。
08:23.750 --> 08:27.200
これで今回の実験は終了だ。
08:27.200 --> 08:43.430
先ほど、 マルチショット・プロンプトをシステム・プロンプトに押し込む以外の方法があることを申し上げました。
08:43.460 --> 08:49.580
現在の会話の前に、 ユーザーとアシスタントの架空のやりとりを会話に入れ、
08:49.580 --> 09:02.990
それをLMに似たような会話をさせることで、 LMが他の質問に対してどのように答えたか感覚をつかむことができる。
09:03.140 --> 09:09.200
また、 それを使ってスタイルをトレーニングしたり、 追加情報を提供することもできる。
09:09.200 --> 09:14.640
だから、 以前にベルトについての質問があったときに、 アシスタントがすでに店にベルトはないと答えていて、
09:14.640 --> 09:19.740
そこから学習したのかもしれない。
09:19.740 --> 09:25.380
つまり、 システムのプロンプトでそれを提供するか、 ユーザー・アシスタントが入力コンテキストの一部として吸収できるようなインタラクション例を与えるか、
09:25.410 --> 09:34.410
どちらの手法にも長所と短所がある。
09:34.560 --> 09:38.640
ええと、 私があなたにお願いしたいのは、 両方試してみることです。
09:38.640 --> 09:44.160
そこで、 システムプロンプトの代わりにユーザーアシスタントのインタラクションを使うようにアップデートし、 それがどのように機能するか見てみよう。
09:44.190 --> 09:47.430
洋服店の店員との相性が良いか悪いか。
09:47.430 --> 09:53.460
そして、 この変更をもっとしっかりしたものにするために、
09:53.460 --> 10:00.780
店内のさまざまなアイテムの辞書を用意し、 その価格やセール金額を調べ、
10:00.780 --> 10:12.300
それを会話の文脈に加えることで、 アシスタントが専門的な知識を持って対応できるようにするのです。
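The chat-function shape the transcript keeps returning to (Gradio calls us with a message and a history, and we rebuild the OpenAI-style list with the system prompt first and the newest user message last) can be sketched like this. The system prompt text is an assumption for illustration, and this assumes the newer Gradio convention where `history` already arrives as a list of `{"role": ..., "content": ...}` dicts:

```python
# Assumed system prompt, standing in for the clothes-store one in the lesson.
system_message = "You are a helpful assistant in a clothes store."

def chat(message, history):
    """Build the message list OpenAI expects from Gradio's (message, history)."""
    messages = [{"role": "system", "content": system_message}] + history
    # The user's latest message goes at the bottom, as role "user".
    messages.append({"role": "user", "content": message})
    return messages  # in the real function this list is passed to the create call

msgs = chat("I'd like to buy a hat.", [])
```

In the actual lesson code, the returned list is sent to `client.chat.completions.create(...)` with `stream=True` and the chunks are yielded back to Gradio.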

442
week5/community-contributions/subtitles/srts/59166949/ko_KR.srt

@ -0,0 +1,442 @@
WEBVTT
00:00.260 --> 00:02.750
챗봇 만들기에 다시 오신 걸 환영합니다
00:02.780 --> 00:04.070
계속하죠
00:04.070 --> 00:09.650
다음 부분에서는 시스템 메시지를 좀 더 흥미로운 것으로 업그레이드할 거예요
00:09.650 --> 00:10.700
시스템 메시지예요
00:10.700 --> 00:12.710
옷 가게의 도우미로 일하죠
00:12.740 --> 00:15.920
세일 중인 제품을 입어 보라고 부드럽게 격려해 주세요
00:15.950 --> 00:17.780
모자는 60% 할인이에요
00:17.780 --> 00:19.880
다른 건 50% 할인해요
00:19.880 --> 00:24.530
예를 들어 손님이 모자를 사러 왔다고 하면 아주 좋다고 대답하면 돼요
00:24.560 --> 00:28.100
판매 행사에 쓸 모자가 아주 많아요
00:28.100 --> 00:32.330
어떤 걸 사야 할지 모를 땐 모자를 사라고 권장하세요
00:32.330 --> 00:36.050
시스템 프롬프트에서 몇 가지 일이 일어나고 있어요
00:36.050 --> 00:42.230
판매와 모자, 기타 물품에 대한 정보를 입수했어요
00:42.350 --> 00:47.930
예를 들어보죠 고객이 이걸 말하면 저걸 말하면 돼요
00:47.930 --> 00:52.280
그 두 가지 예는 분위기와 스타일을 설정하는 방법이죠
00:52.400 --> 00:58.850
모자에 관한 더 많은 사실을 대화에 소개하는 방법이기도 하죠
00:58.850 --> 01:01.670
이게 다 원샷 프롬프트 예죠
01:01.670 --> 01:03.150
멀티샷 프롬핑을 논할 수도 있어요
01:03.150 --> 01:06.720
어떻게 응답할지 미묘한 차이를 주고 있어요
01:06.930 --> 01:09.810
그걸 시스템 메시지로 구축하는 거죠
01:09.810 --> 01:13.950
다른 방법도 있어요 나중에 얘기하거나 적어도 다른 방법이요
01:14.040 --> 01:16.980
하지만 이건 아주 효과적인 방법이에요
01:16.980 --> 01:26.580
이 시스템 메시지를 추가하면 채팅 봇과 채팅을 할 수 있어요
01:26.670 --> 01:34.170
그래서 다시 발전기 채팅 방식을 작성해요 메시지와 역사가 필요하죠 그라디오가 그걸로 연락하고
01:34.170 --> 01:36.030
싶어 하거든요
01:36.210 --> 01:44.730
먼저 오픈AI가 기대하는 포맷으로 변환합니다 여러분이 익숙한 목록을 작성함으로써요
01:44.730 --> 01:49.200
또 하나 언급할 것은 지난 시간에 말했는지 모르겠는데 여기 마지막에
01:49.230 --> 01:56.550
우린 물론 리스트에 추가해야 합니다 사용자가 보내는 가장 최근 메시지를 역할 사용자 콘텐츠로 하단에
01:56.550 --> 01:58.770
추가해야 하죠
01:58.770 --> 02:00.480
그 메시지도요
02:00.480 --> 02:08.460
물론 이때쯤이면 가장 깊은 메모리에 새겨진 create 호출을 사용하고 스트리밍
02:08.460 --> 02:12.630
이즈 true로 결과를 스트리밍하죠
02:12.630 --> 02:13.530
자, 시작하죠
02:13.560 --> 02:14.760
그 얘기도 하죠
02:14.760 --> 02:16.680
다른 창에서 다시 보여드릴게요
02:16.710 --> 02:17.340
안 될 거 없죠
02:17.340 --> 02:20.910
쇼핑 보조와 얘기해 보죠
02:21.210 --> 02:22.050
안녕하세요
02:23.520 --> 02:24.870
어서 오세요
02:24.900 --> 02:25.920
무엇을 도와드릴까요?
02:25.920 --> 02:27.510
특별히 찾는 게 있나요?
02:27.510 --> 02:33.090
신발을 좀 사고 싶은데요
02:34.110 --> 02:35.280
좋아요
02:36.480 --> 02:38.760
예쁜 게 많아요
02:38.760 --> 02:41.580
여러분이 보시는 동안 세일을 말씀드릴게요
02:41.580 --> 02:43.140
대부분 50% 할인해요
02:43.140 --> 02:47.340
괜찮으시면 60% 할인된 멋진 모자가 있어요
02:47.370 --> 02:50.340
새 신발과 잘 어울릴 거예요
02:50.340 --> 02:52.170
둘 다 보시겠어요?
02:52.620 --> 02:56.490
그래서 제가 말씀드리고 싶은 건 확실히 해결됐다는 거예요
02:56.490 --> 03:01.170
시스템 프롬프트에 제공한 지식이 있어요
03:01.170 --> 03:07.990
하지만 제가 시스템 프롬프트에 사용한 열정적이고 활발한
03:07.990 --> 03:16.960
스타일이 이런 식으로 소통하는 방식으로 전염됐다는 걸 눈치채셨길 바라요
03:17.290 --> 03:22.720
이런 원샷 또는 멀티샷 프롬핑에서 중요한 부분이죠 분위기를 설정할 때
03:22.720 --> 03:25.930
어떻게 응답해야 할지 예제를 제시하세요
03:26.860 --> 03:28.510
계속 진행하죠
03:28.510 --> 03:31.180
시스템 메시지를 추가하죠
03:31.180 --> 03:35.950
손님이 신발을 찾으시면 세일 중인 신발이 아니라고 대답하세요
03:35.980 --> 03:37.720
대응해야 한다고 해두죠
03:37.720 --> 03:39.910
신발은 오늘 세일 안 한다고 대답하세요
03:39.910 --> 03:42.760
손님들께 모자를 보여주세요
03:42.760 --> 03:44.170
다시 해 보죠
03:44.170 --> 03:45.580
어떻게 되나 보죠
03:45.640 --> 03:48.160
하나 더 있어요
03:48.310 --> 03:49.630
그렇게 볼 수도 있겠네요
03:49.660 --> 03:54.220
네, 그것도 멀티샷 프롬프트죠
03:54.250 --> 03:55.210
이렇게 하죠
03:55.210 --> 03:55.990
안녕하세요
03:57.640 --> 03:58.420
보고 있어요?
03:58.420 --> 04:04.540
신발을 좀 사려고요
04:04.540 --> 04:05.900
좋아요
04:05.930 --> 04:08.630
오늘 신발 세일 안 해요
04:08.630 --> 04:10.940
여기 오신 김에 우리 모자도 구경해 보실래요?
04:13.640 --> 04:20.030
계속 모자를 권유받게 될 불쌍한 손님이 안쓰러울 지경이죠
04:20.330 --> 04:27.860
그래서 어떻게 확립됐는지 다시 볼 수 있죠 신발은 세일하지 않지만 모자는 세일한다는
04:27.860 --> 04:28.910
거요
04:29.180 --> 04:34.820
그게 멀티샷 프롬프트 예입니다 배울 수 있는 예시를 더 제공하는
04:34.820 --> 04:35.930
거죠
04:36.260 --> 04:44.510
우리가 할 수 있는 또 다른 흥미로운 것은 지금까지 이런 건축을 보실 때마다 항상 시스템 메시지로
04:44.510 --> 04:47.960
시작하는 걸 보셨을 거예요
04:47.960 --> 04:49.640
시스템 메시지는 상단에 있어요
04:49.640 --> 04:55.070
하지만 OpenAI 호출에서는 상단에 시스템 메시지가 있어야 한다는 제약이 없어요
04:55.070 --> 04:58.460
시스템 메시지는 가면서 추가하세요
04:58.730 --> 05:05.930
예를 들어 우리가 할 수 있는 것 중 하나는 아주 해커 같은 코드에 대해 사과드립니다 권장하진 않지만
05:05.970 --> 05:09.120
예제를 보여주기 위해 여기 나와있어요
05:09.120 --> 05:15.390
실제론 그렇게 하면 안 되지만 채팅 생성기를 만들 때 할 수 있는
05:15.390 --> 05:24.510
건 여기에 넣는 거죠 사용자가 보내는 현재 메시지가 워드 벨트를 포함한다면요
05:24.510 --> 05:28.020
다소 민망한 방법이지만 문자열 벨트를 찾아봤어요
05:28.050 --> 05:33.150
물론 완전한 단어인지 테스트해야 하고 대문자, 소문자 등을
05:33.180 --> 05:34.770
생각해야 하죠
05:34.770 --> 05:38.220
전 그런 거 안 해요 못된 짓이지만 요점은 알겠죠
05:38.220 --> 05:43.560
벨트가 메시지 안에 있으면 이 메시지 모음에 추가될 거예요
05:43.590 --> 05:50.490
또 다른 시스템 메시지는 추가 컨텍스트를 위해 마트는 벨트를 팔지 않습니다 하지만
05:50.490 --> 05:58.500
세일 중인 아이템을 꼭 지적하세요 사용자가 벨트를 요청하면 프롬프트에 추가될 거예요
05:59.940 --> 06:06.030
한 번 해보죠 이걸 여기로 불러와요
06:08.250 --> 06:09.180
안녕하세요
06:11.130 --> 06:12.060
어서 오세요
06:12.090 --> 06:13.080
제가 도와드릴게요
06:13.530 --> 06:17.310
벨트 하나 사려고요
06:19.110 --> 06:19.710
미안해요
06:19.710 --> 06:20.910
벨트는 없어요
06:20.910 --> 06:24.390
하지만 모자도 있고 멋진 아이템도 있어요
06:24.420 --> 06:26.130
60% 할인요
06:26.310 --> 06:27.780
자, 됐어요
06:27.810 --> 06:28.620
여기요
06:28.650 --> 06:29.880
06:30.210 --> 06:33.270
확실히 주의를 기울이죠
06:33.270 --> 06:41.310
시스템 메시지가 이 메시지 목록의 다른 행으로 추가되면 주의를
06:41.310 --> 06:42.900
기울이죠
06:42.900 --> 06:49.140
대화에 맥락을 추가할 기회를 주는 거죠
06:49.140 --> 06:58.740
단어나 부분 문자열을 감지하는 건 매우 투박한 코드지만 좀 더 튼튼하게
06:58.740 --> 07:02.640
만들 수도 있어요
07:02.670 --> 07:07.830
특정 단어를 찾는 작은 사전도 하나 있어야 해요
07:07.830 --> 07:15.910
그걸 찾으면 적절한 방식으로 문맥을 풍부하게 하는 거죠
07:15.970 --> 07:23.140
따라서 여러분이 뭔가를 찾아 컨텍스트에 추가할 수 있는 능력을 주죠
07:23.170 --> 07:29.950
RAG에 대해 좀 아실지도 모르겠네요 RAG가 하는 일의 상당 부분이 바로 이것이라는 것도
07:29.950 --> 07:31.150
알 거예요
07:31.180 --> 07:38.650
Rag는 프롬프트와 관련된 추가 정보를 찾아 LM으로 전송되는 메시지의 컨텍스트에
07:38.680 --> 07:41.200
추가하는 거죠
07:41.230 --> 07:48.430
물론 RAG는 훨씬 더 정교하고 똑똑한 방법으로 그걸 하죠 여기 이 진부한 코드보다는요
07:48.610 --> 07:55.540
하지만 이걸 RAG의 가벼운 초기 버전이라 생각하고 연습 문제로 삼으세요
07:55.540 --> 08:01.030
이걸 좀 더 보강할 수도 있어요 적어도 정규식을 써서 특정 단어를 찾아볼 수
08:01.030 --> 08:01.540
있죠
08:01.540 --> 08:07.930
작은 사전을 하나 준비하세요 스토어에 있는 다양한 아이템의 가격을 함께
08:07.930 --> 08:13.730
적어서 시스템 메시지로 추가하는 거죠 컨텍스트를 더 주려고요
08:13.730 --> 08:20.270
시험해 보고 챗봇을 만들 수 있다는 걸 알 수 있어요 스토어에 있는 상품의 가격을
08:20.270 --> 08:23.000
아는 챗봇요 그럼 멋질 거예요
08:23.750 --> 08:27.200
이번 실험은 여기까지예요
08:27.200 --> 08:32.270
아까 언급하고 싶은 게 있었는데 멀티샷 프롬프트를 시스템 프롬프트에
08:32.270 --> 08:38.840
밀어 넣는 것 말고 다른 방법도 있어요 다른 방법은 사용자 비서가 있는 거죠 사용자 비서가
08:38.840 --> 08:43.430
메시지 세트를 갖는 건데 실제로 일어나지 않았어요
08:43.460 --> 08:49.580
사용자와 비서 사이의 가상의 교환을 가질 수 있습니다 현재 대화
08:49.580 --> 08:56.930
전에 포함된 대화에요 그걸 LM을 프라임하는 방법으로 사용할 수 있습니다
08:56.930 --> 09:02.990
다른 질문에 어떻게 반응했는지 감을 잡을 수 있도록요
09:03.140 --> 09:09.200
다시 한번 써먹으세요 스타일 훈련도 하고 추가 정보도 제공하고요
09:09.200 --> 09:14.640
그 전에 벨트에 관한 질문이 들어왔을 때 비서가 이미 매장에 벨트가
09:14.640 --> 09:19.740
없다고 답했을 수도 있어요 그럼 거기서 배웠겠죠
09:19.740 --> 09:25.380
기술에는 장단점이 있습니다 시스템 프롬프트에서 제공하든
09:25.410 --> 09:34.410
사용자 보조 상호 작용을 제공하든 입력 컨텍스트의 일부로 받아들일 수 있도록요
09:34.560 --> 09:38.640
둘 다 시험해 보고 싶어요
09:38.640 --> 09:43.680
이걸 업데이트해 사용자 비서 상호 작용을 사용하도록 하세요 시스템 프롬프트 대신에요 어떻게 되는지
09:43.680 --> 09:44.160
보시죠
09:44.190 --> 09:47.430
옷 가게 도우미가 더 나아졌는지 나빠졌는지 확인해 보세요
09:47.430 --> 09:53.460
그리고 이걸 훨씬 더 견고하게 만들 이 변화도 가하세요 스토어에 있는
09:53.460 --> 10:00.780
다양한 아이템에 대한 사전을 만들고 가격이나 판매 금액을 찾아보세요 그런 다음
10:00.780 --> 10:09.060
그걸 대화에 추가해 비서가 전문 지식으로 답변할 수 있도록요 재미있게 하세요 오늘을 마무리할
10:09.060 --> 10:12.300
다음 비디오에서 뵙죠
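The other multi-shot technique the transcript describes, priming the model with a fictitious user/assistant exchange placed before the real conversation, can be sketched as follows. The example dialogue and system prompt are invented for illustration:

```python
def build_messages(user_message, history):
    """Prepend a made-up exchange so the model absorbs tone and facts."""
    # Fictitious prior conversation -- this never actually happened,
    # but the model treats it as precedent for style and content.
    multi_shot_examples = [
        {"role": "user", "content": "Do you sell belts?"},
        {"role": "assistant",
         "content": "I'm sorry, we don't carry belts, but our hats are 60% off!"},
    ]
    messages = [{"role": "system",
                 "content": "You are a helpful clothes-store assistant."}]
    messages += multi_shot_examples   # the priming exchange
    messages += history               # the real conversation so far
    messages.append({"role": "user", "content": user_message})
    return messages

primed = build_messages("I'd like a belt.", [])
```

Compared with stuffing everything into the system prompt, this shows the model a concrete answered question, at the cost of a slightly longer input context; trying both, as the transcript suggests, is the way to see which gives the better assistant.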

343
week5/community-contributions/subtitles/srts/59166951/en_US.srt

@ -0,0 +1,343 @@
WEBVTT
00:00.710 --> 00:02.780
All right, back to the lab.
00:02.780 --> 00:03.950
Back to our project.
00:03.980 --> 00:06.230
Time to work with tools.
00:06.530 --> 00:11.210
I am in the week two folder in JupyterLab, and I'm launching day four.
00:11.240 --> 00:18.110
It's time for us to bring together what we've done so far to make a customer service assistant for a
00:18.110 --> 00:19.880
fictitious airline.
00:19.910 --> 00:21.770
We start with some imports.
00:21.770 --> 00:25.550
As usual, we initialize and load our key.
00:25.580 --> 00:32.870
We're going to use GPT four mini today, and the OpenAI initialization is there.
00:32.900 --> 00:33.890
The system message.
00:33.890 --> 00:39.920
You're a helpful assistant for an airline called FlightAI, Flight A.I., however you want to
00:39.920 --> 00:40.520
call it.
00:40.610 --> 00:41.630
There it is.
00:41.660 --> 00:45.200
Give short, courteous answers no more than one sentence.
00:45.200 --> 00:46.400
Always be accurate.
00:46.400 --> 00:48.770
If you don't know the answer, say so.
00:48.800 --> 00:54.290
This is, of course, a very good type of, uh, system prompt.
00:54.350 --> 01:00.410
Um, if you, uh, want to have a strong focus on avoiding hallucinations and on truthfulness.
01:00.410 --> 01:01.790
So we run that.
01:01.820 --> 01:04.370
Then this is something you're now very familiar with.
01:04.370 --> 01:08.540
This is the chat function in the style that Gradio expects it.
01:08.570 --> 01:14.630
It takes a message, it takes history, and it then builds the style that is expected by OpenAI.
01:14.660 --> 01:16.940
You may notice this looks a bit shorter than before.
01:16.940 --> 01:19.850
And that's because this time I'm not streaming back results.
01:19.850 --> 01:20.900
I think we've done enough of that.
01:20.900 --> 01:25.400
And since we're going with these short responses, streaming is probably overkill.
01:25.400 --> 01:27.230
Let's see this in action.
01:27.260 --> 01:28.280
Up it comes.
01:28.280 --> 01:29.870
We know Gradio so well now.
01:29.870 --> 01:33.440
We don't need to show off about it and we can say hi there.
01:34.700 --> 01:36.680
Hello, how can I assist you today?
01:36.710 --> 01:41.360
I want to go to London, my hometown.
01:41.360 --> 01:42.170
I always want to go there.
01:42.200 --> 01:42.950
Great choice.
01:42.950 --> 01:45.800
Would you like help finding flights to London?
01:46.460 --> 01:47.450
Yes.
01:47.450 --> 01:50.300
How much is a ticket?
01:52.100 --> 01:56.060
I don't have real time pricing, but you can check our website or app for the latest ticket prices.
01:56.090 --> 01:56.480
London.
01:56.480 --> 01:59.390
So, you know, it's good to see that, as instructed,
01:59.390 --> 02:00.920
it does not hallucinate prices.
02:00.920 --> 02:02.900
It doesn't try and go there.
02:02.900 --> 02:09.200
It does what it's told, and you can also see it's giving short one line responses just as we asked.
02:09.230 --> 02:11.600
Okay, back we go.
02:11.630 --> 02:17.780
So it is time to talk about tools, an incredibly powerful feature provided by the frontier LLMs.
02:17.780 --> 02:21.770
You can write a function and have the LLM call that function as part of its response.
02:21.770 --> 02:24.140
Sounds almost spooky.
02:24.170 --> 02:27.680
We're giving it the power to run code on our machine.
02:27.710 --> 02:33.110
As I said, it's really just a kind of illusion, and that will soon be very clear to you.
02:33.530 --> 02:39.470
So let's start by making ourselves a function, a useful function that
02:39.470 --> 02:41.510
we want to arm our LLM with.
02:41.510 --> 02:45.710
And that function is going to be called get_ticket_price, given a city.
02:45.710 --> 02:49.130
So it's going to begin by printing
02:49.130 --> 02:52.490
"Tool get_ticket_price called for" the destination city.
02:52.670 --> 02:58.250
And we're doing that so that we can watch later to see when this function is called.
02:58.520 --> 03:03.380
Uh, what we do is we take the destination city and we make it lowercase.
03:03.380 --> 03:10.130
So the lookup works, and we look it up in this dictionary where we've got lowercase cities and prices.
03:10.130 --> 03:13.190
If it doesn't find it, it says unknown.
03:13.400 --> 03:15.170
Let's just add one in here.
03:15.200 --> 03:18.650
Why not change things on the fly?
03:18.680 --> 03:20.180
Probably break everything.
03:21.110 --> 03:21.770
Hopefully not.
03:21.800 --> 03:28.100
Let's give ourselves Berlin and a nice, cheap, special deal for flights to Berlin.
03:28.100 --> 03:29.000
Why not?
03:29.330 --> 03:29.900
Um.
03:29.930 --> 03:31.880
Okay, let's run that.
03:32.180 --> 03:35.090
So we will now try this out.
03:35.120 --> 03:44.270
get_ticket_price for Berlin, and see what we get.
03:44.300 --> 03:46.400
And it says "Tool get_ticket_price
03:46.400 --> 03:49.490
called for Berlin", and returns $99.
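Putting the pieces above together, a minimal sketch of this function (the cities and prices are illustrative, with the cheap Berlin special added):

```python
# Ticket prices keyed by lowercase city name; values are illustrative,
# with the cheap Berlin special deal added on the fly.
ticket_prices = {"london": "$799", "paris": "$899", "tokyo": "$1400", "berlin": "$99"}

def get_ticket_price(destination_city):
    # Print so we can watch later to see when this function is called.
    print(f"Tool get_ticket_price called for {destination_city}")
    # Lowercase the city so the dictionary lookup works regardless of capitalization.
    city = destination_city.lower()
    return ticket_prices.get(city, "Unknown")
```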
03:49.520 --> 03:50.990
Now you might be thinking to yourself.
03:51.020 --> 03:51.980
What does he mean, tool?
03:51.980 --> 03:54.080
This isn't a tool, it's just a function.
03:54.110 --> 03:56.630
And the answer is for now, it's just a function.
03:56.630 --> 03:58.490
We're about to make it into a tool.
03:58.490 --> 04:02.570
I don't know if you actually sound that way, but that's how you sound in my mind.
04:02.930 --> 04:07.400
So anyway, that that is us creating our tool.
04:07.940 --> 04:15.560
Now, the process of putting these tools into our interface with an LLM is a bit laborious.
04:15.560 --> 04:20.420
It's not going to be as simple as bringing up a Gradio interface.
04:20.420 --> 04:23.660
Regrettably, uh, it's more involved.
04:23.660 --> 04:24.770
There's good reason for that.
04:24.770 --> 04:26.180
And we'll find out why.
04:26.240 --> 04:28.340
Um, but it is a little bit more involved.
04:28.460 --> 04:34.940
Um, but the good news is it's very much a sort of cookie cutter style that can be replicated for other
04:34.940 --> 04:36.830
function calls for other tools.
04:36.830 --> 04:39.740
So you'll be able to reuse this for your own projects.
04:39.740 --> 04:41.390
And I very much hope you do.
04:41.420 --> 04:47.630
One of the intentions of having these useful projects in here is so that you can then take this as a
04:47.630 --> 04:51.290
resource and use these bits of code for your own projects.
04:51.440 --> 04:55.160
Um, and I'll certainly be recommending that you try adding your own tools.
04:55.160 --> 04:57.980
So you should be closely following this.
04:58.550 --> 05:04.130
Uh, and the first thing I'm going to mention is that we need to build a particular dictionary structure
05:04.130 --> 05:07.700
that's required to describe the function we just wrote.
05:07.700 --> 05:09.380
And this is what it looks like.
05:09.410 --> 05:11.150
Price function, I'll call it.
05:11.150 --> 05:12.050
You call it anything you want.
05:12.080 --> 05:15.290
You give it a name and you describe it.
05:15.290 --> 05:21.470
And the way you describe it is in plain old English, because this is going to be given to the LLM so
05:21.470 --> 05:25.070
that it can understand when is it appropriate to call this function.
05:25.070 --> 05:28.460
So it says get the price of a return ticket to the destination city.
05:28.490 --> 05:30.710
Call this whenever you need to know the ticket price.
05:30.710 --> 05:34.940
For example, when a customer asks how much is a ticket to the city?
05:34.970 --> 05:38.360
So giving it an example is always a good technique.
05:38.360 --> 05:39.950
And that's what we're using here.
05:39.950 --> 05:43.640
And then you provide the parameters in this setup here.
05:43.640 --> 05:47.060
And our function has one parameter, destination_city.
05:47.060 --> 05:50.330
And that is the description of what the parameter means.
05:50.480 --> 05:54.230
So that's how you describe the function that you're using.
05:54.230 --> 05:58.070
And you can see I say that it's a required parameter.
05:58.640 --> 06:00.530
So that is the setup.
06:00.560 --> 06:03.110
And at this point I'm going to pause for a moment.
06:03.110 --> 06:05.210
And in the next video we're going to keep going.
06:05.210 --> 06:09.650
And we're going to arm the LLM with this function.
06:09.650 --> 06:10.700
See you there.

322
week5/community-contributions/subtitles/srts/59166951/ja_JP.srt

@ -0,0 +1,322 @@
WEBVTT
00:00.710 --> 00:02.780
よし、 ラボに戻ろう。
00:02.780 --> 00:03.950
話をプロジェクトに戻そう。
00:03.980 --> 00:06.230
道具を使う時間だ。
00:06.530 --> 00:11.210
僕は今、 JupyterLabの2週目のフォルダにいて、 4日目に突入するところだ。
00:11.240 --> 00:19.880
これまでやってきたことを結集して、 架空の航空会社のカスタマーサービス・アシスタントを作る時が来た。
00:19.910 --> 00:21.770
まずは輸入品から。
00:21.770 --> 00:25.550
いつものように、 キーのロードを初期化する。
00:25.580 --> 00:32.870
今日はGPT four miniを使うつもりで、 OpenAIの初期化はそこにある。
00:32.900 --> 00:33.890
システムメッセージ。
00:33.890 --> 00:40.520
あなたはI便、 II便、 A便と呼ばれる航空会社のアシスタントだ。
00:40.610 --> 00:41.630
あれだ。
00:41.660 --> 00:45.200
一文以内で短く丁寧に答えること。
00:45.200 --> 00:46.400
常に正確に。
00:46.400 --> 00:48.770
答えがわからなければ、 そう言ってください。
00:48.800 --> 00:54.290
これはもちろん、 システム・プロンプトの非常に良いタイプだ。
00:54.350 --> 01:00.410
ええと、 もし、 幻覚がないことに強く焦点を当てたいのであれば、 真実性を重視したい。
01:00.410 --> 01:01.790
だから、 それを実行するんだ。
01:01.820 --> 01:04.370
それなら、 これはもうお馴染みのことだ。
01:04.370 --> 01:08.540
これはGradioが期待するスタイルのチャット機能である。
01:08.570 --> 01:14.630
メッセージを受け取り、 履歴を受け取り、 そしてOpenAIが期待するスタイルを構築する。
01:14.660 --> 01:16.940
以前より少し短く見えるかもしれない。
01:16.940 --> 01:19.850
そして、 今回は結果をストリーミングで返さないからだ。
01:19.850 --> 01:20.900
それはもう十分やったと思う。
01:20.900 --> 01:25.400
それに、 このような短い返答をするのだから、 ストリーミングはやり過ぎだろう。
01:25.400 --> 01:27.230
実際に見てみよう。
01:27.260 --> 01:28.280
上がってきた。
01:28.280 --> 01:29.870
私たちは今、 グラディオのことをよく知っている。
01:29.870 --> 01:33.440
私たちはそれを見せびらかす必要はないし、 そこで挨拶することもできる。
01:34.700 --> 01:36.680
こんにちは、 本日はどのようなご用件でしょうか?
01:36.710 --> 01:41.360
故郷のロンドンに行きたい。
01:41.360 --> 01:42.170
私はいつもそこに行きたいと思っている。
01:42.200 --> 01:42.950
素晴らしい選択だ。
01:42.950 --> 01:45.800
ロンドン行きのフライトをお探しですか?
01:46.460 --> 01:47.450
そうだ。
01:47.450 --> 01:50.300
チケットはいくらですか?
01:52.100 --> 01:56.060
リアルタイムの価格は分からないが、 最新のチケット価格はウェブサイトやアプリで確認できる。
01:56.090 --> 01:56.480
ロンドンだ。
01:56.480 --> 01:59.390
だから、 指示された通りに見るのがいいんだ。
01:59.390 --> 02:00.920
価格を幻覚することはない。
02:00.920 --> 02:02.900
そこに行こうとはしない。
02:02.900 --> 02:09.200
言われたとおりに動くし、 私たちが頼んだように1行の短い返事をしているのもわかる。
02:09.230 --> 02:11.600
よし、 戻ろう。
02:11.630 --> 02:17.780
そこで、 フロンティアLMSが提供する信じられないほど強力な機能であるツールについてお話ししましょう。
02:17.780 --> 02:21.770
関数を書いて、 レスポンスの一部としてその関数を呼び出すことができる。
02:21.770 --> 02:24.140
ほとんど不気味な響きだ。
02:24.170 --> 02:27.680
私たちのマシン上でコードを実行する力を与えているのだ。
02:27.710 --> 02:33.110
さっきも言ったように、 これは一種の物語に過ぎない。
02:33.530 --> 02:41.510
では、 まず自分自身で関数を作ってみよう。 この関数は、 アラームを武装させるのに便利な関数だ。
02:41.510 --> 02:45.710
この関数は、 都市を指定してチケット価格を取得する。
02:45.710 --> 02:49.130
だから、 印刷することから始まるんだ。
02:49.130 --> 02:52.490
目的地の都市で呼び出されるチケット料金を入手する。
02:52.670 --> 02:58.250
そして、 この関数がいつ呼び出されるかを後で確認できるようにするためだ。
02:58.520 --> 03:03.380
つまり、 目的地の都市を小文字にするんだ。
03:03.380 --> 03:10.130
そこで、 小文字の都市と価格を辞書で調べてみる。
03:10.130 --> 03:13.190
見つからなければ不明と表示される。
03:13.400 --> 03:15.170
ここで1つ付け加えよう。
03:15.200 --> 03:18.650
なぜその場で変更しないのか?
03:18.680 --> 03:20.180
おそらく、 すべてを壊してしまうだろう。
03:21.110 --> 03:21.770
そうでないことを祈る。
03:21.800 --> 03:28.100
ベルリンと、 ベルリン行きの航空券のための素敵で、 安くて、 特別な取引をしよう。
03:28.100 --> 03:29.000
なぜだ?
03:29.330 --> 03:29.900
うーん。
03:29.930 --> 03:31.880
よし、 実行してみよう。
03:32.180 --> 03:35.090
では、 これを試してみよう。
03:35.120 --> 03:44.270
ベルリンまでのチケット代、 何が手に入るか見てみよう。
03:44.300 --> 03:46.400
そして、 ツール、 チケット代と書いてある。
03:46.400 --> 03:49.490
ベルリンに99ドルで電話。
03:49.520 --> 03:50.990
今、 あなたはこう思ったかもしれない。
03:51.020 --> 03:51.980
道具ってどういう意味?
03:51.980 --> 03:54.080
これはツールではなく、 単なる機能だ。
03:54.110 --> 03:56.630
そして答えは、 今のところただの機能だ。
03:56.630 --> 03:58.490
我々はそれをツールにしようとしている。
03:58.490 --> 04:02.570
実際にそう聞こえるかどうかはわからないが、 私の中ではそう聞こえる。
04:02.930 --> 04:07.400
とにかく、 これが僕らのツールを作るということなんだ。
04:07.940 --> 04:15.560
さて、 これらのツールをLLMとのインターフェイスに導入する過程には、 ちょっとした物語がある。
04:15.560 --> 04:20.420
グラディオのインターフェイスを立ち上げるような単純なものにはならないだろう。
04:20.420 --> 04:23.660
残念ながら、 もっと複雑なんだ。
04:23.660 --> 04:24.770
それには理由がある。
04:24.770 --> 04:26.180
そして、 その理由を突き止める。
04:26.240 --> 04:28.340
うーん、 でももう少し複雑なんだ。
04:28.460 --> 04:36.830
うーん、 でも良いニュースは、 他のツールの他のファンクション・コールでも再現できる、 ある種のクッキー・カッターのようなスタイルだということだ。
04:36.830 --> 04:39.740
だから、 これを自分のプロジェクトに再利用することができる。
04:39.740 --> 04:41.390
そして、 そうなることを強く望んでいる。
04:41.420 --> 04:51.290
ここに有用なプロジェクトを掲載した意図のひとつは、 これをリソースとして、 自分のプロジェクトにコードの断片を使えるようにすることだ。
04:51.440 --> 04:55.160
そして、 あなた自身のツールを追加してみることをお勧めします。
04:55.160 --> 04:57.980
だから、 あなたはこれを注意深く追うべきだ。
04:58.550 --> 05:07.700
ええと、 最初に言っておくのは、 今書いた関数を記述するために必要な特定の辞書構造を構築する必要があるということだ。
05:07.700 --> 05:09.380
そして、 こんな感じだ。
05:09.410 --> 05:11.150
プライス・ファンクションと呼ぼう。
05:11.150 --> 05:12.050
好きなように呼べばいい。
05:12.080 --> 05:15.290
名前をつけて、 それを説明する。
05:15.290 --> 05:25.070
LLMがこの関数をいつ呼び出すのが適切かを理解できるようにするためだ。
05:25.070 --> 05:28.460
つまり、 目的地までの往復航空券の料金を取得すると書いてある。
05:28.490 --> 05:30.710
チケットの値段を知りたいときはいつでも電話してください。
05:30.710 --> 05:34.940
例えば、 客が「市内までのチケットはいくらですか?
05:34.970 --> 05:38.360
だから、 例を挙げることは常に良い、 良いテクニックなんだ。
05:38.360 --> 05:39.950
それが、 ここで使っているものだ。
05:39.950 --> 05:43.640
そして、 このセットアップでパラメータを指定する。
05:43.640 --> 05:47.060
そして、 この関数には1つのパラメータがある。
05:47.060 --> 05:50.330
そして、 それこそがパラメーターの役割なのだ。
05:50.480 --> 05:54.230
そうやって、 使っている機能を説明するんだ。
05:54.230 --> 05:58.070
そして、 必須パラメータだと言っているのがわかるだろう。
05:58.640 --> 06:00.530
これがセットアップだ。
06:00.560 --> 06:03.110
そしてこの時点で、 私は少し立ち止まるつもりだ。
06:03.110 --> 06:05.210
そして次のビデオでは、 さらに続けるつもりだ。
06:05.210 --> 06:09.650
そして、 LLMにこの機能を持たせるつもりだ。
06:09.650 --> 06:10.700
そこで会おう

340
week5/community-contributions/subtitles/srts/59166951/ko_KR.srt

@ -0,0 +1,340 @@
WEBVTT
00:00.710 --> 00:02.780
좋아요, 연구실로 돌아가죠
00:02.780 --> 00:03.950
우리 프로젝트로 돌아가죠
00:03.980 --> 00:06.230
도구를 사용할 시간이에요
00:06.530 --> 00:11.210
전 주피터랩의 2주 차 폴더에 있고 4일째를 맞이했어요
00:11.240 --> 00:18.110
이제 지금까지 했던 걸 활용할 차례예요 가상의 항공사의 고객 서비스 담당자를
00:18.110 --> 00:19.880
만드는 거죠
00:19.910 --> 00:21.770
수입품부터 시작하죠
00:21.770 --> 00:25.550
늘 그렇듯 키를 잔뜩 준비해요
00:25.580 --> 00:32.870
오늘은 GPT 4 미니를 사용할 겁니다 오픈AI 초기화가 진행 중이죠
00:32.900 --> 00:33.890
시스템 메시지요
00:33.890 --> 00:40.520
1편인지 2편인지 A편인지 하는 항공사의 조수로 일하죠
00:40.610 --> 00:41.630
저기 있네요
00:41.660 --> 00:45.200
짧고 공손하게 한 문장 이상 대답하지 마세요
00:45.200 --> 00:46.400
항상 정확해야 해요
00:46.400 --> 00:48.770
모르면 모른다고 하세요
00:48.800 --> 00:54.290
이건 아주 좋은 시스템 프롬프트예요
00:54.350 --> 01:00.410
진실성에 대한 환각의 부재에 집중하고 싶다면요
01:00.410 --> 01:01.790
그걸 실행하죠
01:01.820 --> 01:04.370
그럼 이제 아주 익숙한 거예요
01:04.370 --> 01:08.540
이건 그라디오가 예상하는 채팅 함수예요
01:08.570 --> 01:14.630
메시지와 역사를 취하고 오픈AI에 기대되는 스타일을 구축하죠
01:14.660 --> 01:16.940
비트가 전보다 좀 짧아진 걸 느끼실 거예요
01:16.940 --> 01:19.850
이번엔 결과를 스트리밍하지 않기 때문이죠
01:19.850 --> 01:20.900
그 정도면 충분한 것 같아요
01:20.900 --> 01:25.400
짧은 응답을 받는 거라 스트리밍은 과잉 대응이에요
01:25.400 --> 01:27.230
어떻게 작동하는지 보죠
01:27.260 --> 01:28.280
올라와요
01:28.280 --> 01:29.870
우린 이제 그라디오를 잘 알아요
01:29.870 --> 01:33.440
자랑할 필요 없이 인사만 하면 돼요
01:34.700 --> 01:36.680
안녕하세요, 무엇을 도와드릴까요?
01:36.710 --> 01:41.360
제 고향인 런던에 가고 싶어요
01:41.360 --> 01:42.170
늘 가고 싶어요
01:42.200 --> 01:42.950
탁월한 선택이에요
01:42.950 --> 01:45.800
런던행 비행기표 구하는 거 도와줄래요?
01:46.460 --> 01:47.450
01:47.450 --> 01:50.300
한 장에 얼마죠?
01:52.100 --> 01:56.060
실시간 가격은 없지만 웹사이트나 앱에서 최신 티켓 가격을 확인해 보세요
01:56.090 --> 01:56.480
런던요
01:56.480 --> 01:59.390
그럼 배운 대로 하는 게 좋다는 걸 알겠군요
01:59.390 --> 02:00.920
가격을 착각하지 않아요
02:00.920 --> 02:02.900
그쪽으로 가지 않아요
02:02.900 --> 02:09.200
우리가 말한 대로 하고 있어요 그리고 우리가 요구한 대로 짧게 한 줄로 응답하고 있죠
02:09.230 --> 02:11.600
좋아요, 다시 가죠
02:11.630 --> 02:17.780
이제 프론티어 LMS가 제공하는 강력한 기능인 툴에 대해 얘기해 보죠
02:17.780 --> 02:21.770
응답의 일부로 함수를 써서 그 함수를 호출하게 할 수 있어요
02:21.770 --> 02:24.140
으스스하게 들리네요
02:24.170 --> 02:27.680
우리 컴퓨터에서 코드를 실행할 힘을 주는 거죠
02:27.710 --> 02:33.110
말씀드렸듯이 그냥 이야기예요 곧 분명하게 알게 되실 거예요
02:33.530 --> 02:39.470
그럼 함수부터 만들어 보죠 알람을 무장하는
02:39.470 --> 02:41.510
유용한 함수요
02:41.510 --> 02:45.710
그 함수는 도시당 티켓 가격 get이라고 불릴 거예요
02:45.710 --> 02:49.130
프린팅으로 시작할 거예요
02:49.130 --> 02:52.490
Get up 목적지 시티 티켓팅하세요
02:52.670 --> 02:58.250
이 함수가 언제 호출되는지 나중에 보기 위해 그렇게 하고 있어요
02:58.520 --> 03:03.380
목적지 도시를 소문자로 만드는 거예요
03:03.380 --> 03:10.130
이건 작동하죠 이 사전에서 찾아볼게요 소문자 도시와 가격이 있죠
03:10.130 --> 03:13.190
못 찾으면 미확인이라고 뜨죠
03:13.400 --> 03:15.170
여기에 하나를 추가할게요
03:15.200 --> 03:18.650
그때그때 바꾸면 되잖아요
03:18.680 --> 03:20.180
아마 다 부서질 거예요
03:21.110 --> 03:21.770
안 그러길 바라요
03:21.800 --> 03:28.100
베를린에 가서 싸고 좋은 특별 할인을 받자고요 베를린으로 가는 비행기요
03:28.100 --> 03:29.000
왜요?
03:29.330 --> 03:29.900
03:29.930 --> 03:31.880
좋아요, 실행해 보죠
03:32.180 --> 03:35.090
이제 이걸 시험해 보죠
03:35.120 --> 03:44.270
베를린행 항공권 가격을 알아보죠 get it
03:44.300 --> 03:46.400
도구, 티켓가격 get
03:46.400 --> 03:49.490
베를린에 99달러 걸었어요
03:49.520 --> 03:50.990
이런 생각이 들 거예요
03:51.020 --> 03:51.980
무슨 뜻이죠?
03:51.980 --> 03:54.080
이건 도구가 아니라 함수예요
03:54.110 --> 03:56.630
대답은 지금으로선 그냥 함수라는 거죠
03:56.630 --> 03:58.490
도구로 만들 거예요
03:58.490 --> 04:02.570
실제로 그렇게 들리는지 모르겠지만 제 마음속에선 그렇게 들려요
04:02.930 --> 04:07.400
어쨌든, 이게 도구를 만드는 우리 모습이었어요
04:07.940 --> 04:15.560
이런 도구들을 LLM과 인터페이스에 넣는 과정은 좀 복잡해요 비트
04:15.560 --> 04:20.420
그래디오 인터페이스를 불러오는 것처럼 간단하지 않을 거예요
04:20.420 --> 04:23.660
안타깝게도 더 복잡해요
04:23.660 --> 04:24.770
그럴 만한 이유가 있죠
04:24.770 --> 04:26.180
이유를 알아보죠
04:26.240 --> 04:28.340
하지만 비트가 좀 더 복잡해요
04:28.460 --> 04:34.940
좋은 소식은 쿠키 커터 스타일과 비슷해서 다른 도구의 함수 호출에 따라 복제할
04:34.940 --> 04:36.830
수 있다는 거죠
04:36.830 --> 04:39.740
여러분의 프로젝트에 재사용할 수 있어요
04:39.740 --> 04:41.390
꼭 그러길 바라요
04:41.420 --> 04:47.630
이런 유용한 프로젝트가 여기 있는 목적은 이걸 리소스로 갖고 와 여러분 프로젝트에
04:47.630 --> 04:51.290
이 비트의 코드를 사용할 수 있도록 하는 거죠
04:51.440 --> 04:55.160
자신만의 도구를 추가해 보시길 권해드리고 싶어요
04:55.160 --> 04:57.980
그러니 잘 따라 하세요
04:58.550 --> 05:04.130
제일 먼저 말씀드릴 것은 우리가 방금 쓴 함수를 설명하는 데 필요한
05:04.130 --> 05:07.700
특정 사전 구조를 구축해야 한다는 거예요
05:07.700 --> 05:09.380
이렇게 생긴 거예요
05:09.410 --> 05:11.150
가격 함수라고 부를게요
05:11.150 --> 05:12.050
마음대로 부르세요
05:12.080 --> 05:15.290
이름을 지어주고 묘사해 보세요
05:15.290 --> 05:21.470
설명은 그냥 옛날 영어로 해주세요 왜냐하면 이게 LLM에 주어질 것이기 때문에 이 함수를
05:21.470 --> 05:25.070
언제 호출하는 게 적절한지 이해할 수 있거든요
05:25.070 --> 05:28.460
목적지까지 왕복 항공권의 가격을 get get이라고 뜨네요
05:28.490 --> 05:30.710
비행기 표가 궁금하면 언제든 전화해요
05:30.710 --> 05:34.940
예를 들어 도시행 기차표가 얼마냐고 손님이 물으면요
05:34.970 --> 05:38.360
예를 들어주는 건 언제나 좋은 기술이죠
05:38.360 --> 05:39.950
그걸 여기서 사용하고 있어요
05:39.950 --> 05:43.640
그런 다음 이 셋업에서 매개 변수를 제공하죠
05:43.640 --> 05:47.060
함수에는 하나의 매개 변수 대상 도시만 있어요
05:47.060 --> 05:50.330
그게 매개 변수가 하는 일이죠
05:50.480 --> 05:54.230
이게 여러분이 사용하는 함수를 설명하는 방법이에요
05:54.230 --> 05:58.070
보다시피 필수 매개 변수죠
05:58.640 --> 06:00.530
그게 설정이에요
06:00.560 --> 06:03.110
여기서 잠시 멈춰 볼게요
06:03.110 --> 06:05.210
다음 비디오에서도 계속 할 거예요
06:05.210 --> 06:09.650
이 함수로 LLM을 무장시킬 거예요
06:09.650 --> 06:10.700
거기서 봐요

211
week5/community-contributions/subtitles/srts/59166981/en_US.srt

@ -0,0 +1,211 @@
WEBVTT
00:00.980 --> 00:04.040
Welcome to week two, day five.
00:04.070 --> 00:09.050
The last day of week two where a lot is coming together.
00:09.050 --> 00:16.100
I am so grateful that you're sticking with it, and I'm going to make it worth your while because today
00:16.100 --> 00:18.620
is going to be really, really good fun.
00:18.620 --> 00:21.530
I'm excited to get into this.
00:21.890 --> 00:24.410
It's the big conclusion of the second week.
00:24.740 --> 00:27.800
Again, I'm going to keep saying what you can do.
00:27.800 --> 00:32.840
I think it's so important to celebrate your upskilling: you know Transformers back to front.
00:32.840 --> 00:38.660
You can code against the frontier APIs, you can build an AI assistant, and you can add tools to give
00:38.660 --> 00:39.800
it expertise.
00:39.830 --> 00:42.440
Today we introduce agents.
00:42.440 --> 00:48.650
We talk about how agents can carry out more advanced sequential activities.
00:48.650 --> 00:56.450
And then we do something super fun creating a multimodal AI assistant using agents and tools.
00:57.620 --> 00:59.390
So what are agents?
00:59.720 --> 01:01.190
Agentic AI, I should say.
01:01.220 --> 01:02.390
Agentic AI.
01:02.420 --> 01:02.900
An agent.
01:03.530 --> 01:07.790
It is one of these umbrella terms that people can use in different contexts.
01:07.790 --> 01:12.140
So it is one of those things that can mean different things to different people.
01:12.140 --> 01:17.660
But generally speaking, most often people are talking about software entities that are autonomous.
01:17.660 --> 01:25.640
They can perform tasks not just in the sense of taking an input prompt and generating text.
01:25.820 --> 01:27.530
Um, typical characteristics.
01:27.530 --> 01:28.700
Let's say they are autonomous.
01:28.700 --> 01:33.740
They have some sort of agency; they are goal-oriented, in that they have some kind of thing that they're
01:33.740 --> 01:37.520
setting out to do; and they are task-specific.
01:37.520 --> 01:42.620
They are usually specialized in being good at one thing or another.
01:43.010 --> 01:48.230
Um, and they're typically designed to be part of something called an agent framework, which is a sort
01:48.230 --> 01:55.190
of environment in which agents can interact to solve more complex problems, potentially with limited
01:55.190 --> 01:56.450
human involvement.
01:56.450 --> 02:00.020
So it's not like it's just a sort of request response situation with a human.
02:00.020 --> 02:06.150
But you can imagine this sort of environment where multiple software agents that could be combinations
02:06.150 --> 02:12.690
of LLMs along with traditional software, interacting in order to carry out tasks.
02:12.690 --> 02:19.770
And so some of the features you might expect are the ability to have memory or persistence that
02:19.770 --> 02:26.820
goes beyond just a request-response; the ability to have some sort of decision making and orchestration
02:26.820 --> 02:30.750
about what does what; and planning abilities.
02:30.930 --> 02:36.240
And sometimes that is just a matter of the environment having some planning coded into it.
02:36.240 --> 02:40.410
Sometimes you have an LLM which is responsible for planning.
02:40.410 --> 02:45.840
It's a model that knows how to take complex problems and break it down into smaller problems for other
02:45.840 --> 02:47.400
models to take care of.
02:47.880 --> 02:53.310
And then use of tools is often also an example of agentic AI.
02:53.310 --> 02:59.370
This is where, of course, as you are now very familiar, we give models the ability to do things like
02:59.370 --> 03:06.910
connect to databases or connect to the internet or whatever we want because we are providing it access
03:06.910 --> 03:10.450
to functions, and we know how that works under the hood.
03:10.450 --> 03:17.440
Now we know that it's really just a fancy if statement, but it gives the effect that the LLMs are able
03:17.440 --> 03:18.580
to do this.
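As a sketch of that "fancy if statement": when the model replies with a tool call, our code looks up the named function, runs it with the arguments the LLM supplied, and packages the result as a tool message. Names here are illustrative, and a plain dict stands in for the API's tool-call object:

```python
import json

# Hypothetical registry mapping tool names to plain Python functions.
TOOLS = {"get_ticket_price": lambda destination_city: "$799"}

def handle_tool_call(tool_call):
    """The 'fancy if statement': look up the named tool, run it with the
    arguments the LLM supplied, and package the result as a tool message."""
    name = tool_call["function"]["name"]
    arguments = json.loads(tool_call["function"]["arguments"])
    result = TOOLS[name](**arguments)
    return {"role": "tool", "tool_call_id": tool_call["id"], "content": json.dumps(result)}
```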
03:19.960 --> 03:22.540
So we're about to do a few things.
03:22.540 --> 03:26.170
Let me just quickly sort of set the scene for you.
03:26.200 --> 03:34.390
We're going to first build a function that can generate images, a good multimodal use case.
03:34.390 --> 03:37.990
We're going to have an LLM call that can do that.
03:37.990 --> 03:39.760
And it's going to be a function that does it.
03:39.760 --> 03:42.670
And you can think of that in its own right as being like an agent.
03:42.670 --> 03:49.000
It's like a piece of software that is able to take this very specific, specialized instruction and
03:49.000 --> 03:49.540
do it.
03:49.540 --> 03:58.990
That will be an artist that we will create in code with the help of DALL-E 3, the image generation
03:58.990 --> 04:02.240
model from OpenAI.
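A sketch of such an artist function, assuming the `openai` package and an `OPENAI_API_KEY` in the environment (the prompt wording is illustrative):

```python
import base64

def decode_image(b64_data):
    # DALL-E can return the image as base64; decode it to raw PNG bytes.
    return base64.b64decode(b64_data)

def artist(city):
    """Hypothetical image-generating agent using DALL-E 3.
    Assumes the openai package and OPENAI_API_KEY; imported inside the
    function so this sketch loads even without them."""
    from openai import OpenAI
    client = OpenAI()
    response = client.images.generate(
        model="dall-e-3",
        prompt=f"An image representing a vacation in {city}, in a vibrant pop-art style",
        size="1024x1024",
        response_format="b64_json",
    )
    return decode_image(response.data[0].b64_json)
```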
04:02.480 --> 04:07.910
Uh, and, you know, if you want to to quibble, you could argue that image generation is not in itself
04:07.910 --> 04:09.650
an LM thing.
04:09.680 --> 04:16.250
Uh, lm being language models, but these days, generally llms are used interchangeably with the broader
04:16.250 --> 04:18.380
gen AI context.
04:18.380 --> 04:24.590
And so one does tend to think of image generation and other kinds of multimodal generation as falling
04:24.590 --> 04:28.100
within the LLM engineer's toolkit.
04:29.120 --> 04:35.510
So we're then going to look to make agents out of these functions that are
04:35.510 --> 04:36.290
able to do things.
04:36.290 --> 04:43.700
And we're going to add sound as well as images, and then we're going to have an agent framework in
04:43.730 --> 04:50.000
that we are going to teach our AI assistant, the same airline assistant that we've been working on
04:50.030 --> 04:52.580
how to speak and draw.
04:52.760 --> 04:55.820
All right, without further ado, I hope that sounds fun to you.
04:55.850 --> 04:59.060
I hope it sounds exciting because it's going to be it's going to be great.
04:59.090 --> 05:00.320
Uh, I can't wait to do it.
05:00.320 --> 05:01.700
Let's go and do it right now.

169
week5/community-contributions/subtitles/srts/59166981/ja_JP.srt

@ -0,0 +1,169 @@
WEBVTT
00:00.980 --> 00:04.040
第2週5日目へようこそ。
00:04.070 --> 00:09.050
多くのことがまとまりつつある第2週の最終日。
00:09.050 --> 00:18.620
今日は本当に、 本当に楽しい日になりそうだから。
00:18.620 --> 00:21.530
これに参加するのが楽しみだ。
00:21.890 --> 00:24.410
2週目の大きな締めくくりだ。
00:24.740 --> 00:27.800
繰り返しになるが、 私はあなたに何ができるかを言い続けるつもりだ。
00:27.800 --> 00:32.840
自分のスキルアップを祝うことはとても重要だと思う。
00:32.840 --> 00:39.800
フロンティアAPIに対してコーディングし、 AIアシスタントを構築し、 専門知識を与えるツールを追加することができる。
00:39.830 --> 00:42.440
今日はエージェントを紹介しよう。
00:42.440 --> 00:48.650
私たちは、 エージェントがより高度な逐次的活動を行う方法について話します。
00:48.650 --> 00:56.450
そして、 エージェントやツールを使ってマルチモーダルAIアシスタントを作るという、 とても楽しいこともやっています。
00:57.620 --> 00:59.390
では、 エージェントとは何なのか?
00:59.720 --> 01:01.190
エージェントと言うべきだろう。
01:01.220 --> 01:02.390
エージェントI。
01:02.420 --> 01:02.900
エージェントだ。
01:03.530 --> 01:07.790
これは、 人々がさまざまな文脈で使うことができる包括的な用語のひとつである。
01:07.790 --> 01:12.140
だから、 それは人によって意味が違うことのひとつなんだ。
01:12.140 --> 01:17.660
しかし、 一般的に言えば、 多くの場合、 人々は自律的なソフトウェア・エンティティについて話している。
01:17.660 --> 01:25.640
入力プロンプトを受けてテキストを生成するという意味だけでなく、 タスクを実行することもできる。
01:25.820 --> 01:27.530
うーん、 典型的な特徴だね。
01:27.530 --> 01:28.700
彼らが自律的だとしよう。
01:28.700 --> 01:33.740
彼らはある種の主体性を持っていて、 目標志向で、 何かしらの目的を持っていて、
01:33.740 --> 01:37.520
タスクが具体的なのだ。
01:37.520 --> 01:42.620
彼らはたいてい、 何か一つのことに特化している。
01:43.010 --> 01:48.230
一般的には、 エージェントフレームワークと呼ばれるものの一部として設計され、
01:48.230 --> 01:56.450
エージェントがより複雑な問題を解決するために相互作用できる環境のようなものです。
01:56.450 --> 02:00.020
だから、 人間に対する一種のリクエスト・レスポンスとは違うんだ。
02:00.020 --> 02:06.150
しかし、 このような環境では、 従来のソフトウェアとllmsを組み合わせた複数のソフトウェアエージェントが、
02:06.150 --> 02:12.690
タスクを遂行するために相互作用することが想像できる。
02:12.690 --> 02:19.770
期待される機能としては、 単なるリクエスト・レスポンスにとどまらないメモリーや永続性を持つ能力、
02:19.770 --> 02:30.750
ある種の意思決定やオーケストレーションができる能力、 プランニング能力などがある。
02:30.930 --> 02:36.240
そして、 それはただ単に、 環境に組み込まれたプランニングの問題であることもある。
02:36.240 --> 02:40.410
企画を担当するLLMがいることもある。
02:40.410 --> 02:47.400
複雑な問題を、 他のモデルが処理できるように小さな問題に分解する方法を知っているモデルなのだ。
02:47.880 --> 02:53.310
そして、 道具を使うことも遺伝的AIの一例であることが多い。
02:53.310 --> 02:59.370
もちろん、 皆さんもよくご存知のように、 私たちはモデルにデータベースへの接続やインターネットへの接続など、
02:59.370 --> 03:10.450
好きなことをさせる機能を与えている。
03:10.450 --> 03:18.580
これは単なるif文に過ぎないが、 Llmsにこのようなことができるという効果を与えている。
03:19.960 --> 03:22.540
だから、 これからいくつかやることがある。
03:22.540 --> 03:26.170
簡単に状況を説明しよう。
03:26.200 --> 03:34.390
まず、 マルチモーダルなユースケースに適した、 画像を生成する機能を構築する。
03:34.390 --> 03:37.990
それができるLLMコールを用意するつもりだ。
03:37.990 --> 03:39.760
そして、 それを実行する機能になる。
03:39.760 --> 03:42.670
そして、 それ自体がエージェントのようなものだと考えることもできる。
03:42.670 --> 03:49.540
これは、 非常に特殊で専門的な指導を受け、 それを実行できるソフトウェアのようなものだ。
03:49.540 --> 04:02.240
これは、 Dall-E three、 つまりOpenAIの画像生成モデルの助けを借りて、 コードで作成したアーティストになります。
04:02.480 --> 04:07.910
そして、 もし屁理屈をこねたいのであれば、 イメージの生成自体はLM的なものではない、
04:07.910 --> 04:09.650
と主張することもできる。
04:09.680 --> 04:18.380
ええと、 lmは言語モデルのことですが、 最近では一般的に、 lmsはより広範なgen AI文脈と同じ意味で使われています。
04:18.380 --> 04:28.100
だから、 画像生成や他の種類のマルチモーダル生成は、 LMエンジニアのツールキットに含まれると考えがちだ。
04:29.120 --> 04:36.290
だから、 私たちはエージェントを作るために、 これらの、 あー、 これらの、 これらの、 これらの、 これらの機能ができるようにするんだ。
04:36.290 --> 04:52.580
そして、 画像だけでなく音も追加し、 AIアシスタントに話し方や絵の描き方を教えるエージェントフレームワークを導入する予定です。
04:52.760 --> 04:55.820
さて、 前置きはこれくらいにして、 楽しそうだと思われただろうか。
04:55.850 --> 04:59.060
エキサイティングに聞こえることを願っているよ。
04:59.090 --> 05:00.320
早くやりたいよ。
05:00.320 --> 05:01.700
さあ、 今すぐ行こう。

208
week5/community-contributions/subtitles/srts/59166981/ko_KR.srt

@ -0,0 +1,208 @@
WEBVTT
00:00.980 --> 00:04.040
둘째 주, 5일째예요
00:04.070 --> 00:09.050
2주 차 마지막 날입니다 많은 일이 벌어지고 있죠
00:09.050 --> 00:16.100
계속 함께해 줘서 정말 고마워요 보람을 느끼게 해 줄게요 오늘은
00:16.100 --> 00:18.620
정말 재미있을 거예요
00:18.620 --> 00:21.530
Get it가 기대되네요
00:21.890 --> 00:24.410
둘째 주의 대망의 결말이죠
00:24.740 --> 00:27.800
전 계속 여러분이 뭘 할 수 있는지 말할 거예요
00:27.800 --> 00:32.840
트랜스포머의 성공을 축하하는 건 정말 중요해요
00:32.840 --> 00:38.660
프론티어 API에 대항해 코드를 작성할 수 있고 인공지능 보조를 만들 수 있고 전문 지식을 줄 도구를
00:38.660 --> 00:39.800
추가할 수 있죠
00:39.830 --> 00:42.440
오늘은 에이전트를 소개하죠
00:42.440 --> 00:48.650
요원이 보다 진보된 순차적 작업을 수행하는 방법을 얘기했죠
00:48.650 --> 00:56.450
그리고 아주 재미있는 걸 할 거예요 에이전트와 도구를 이용해 다중 모듈 인공지능 보조를 만드는 거죠
00:57.620 --> 00:59.390
에이전트가 뭐죠?
00:59.720 --> 01:01.190
요원이라고 해야겠죠
01:01.220 --> 01:02.390
에이전트 I요
01:02.420 --> 01:02.900
에이전트요
01:03.530 --> 01:07.790
다양한 상황에서 사용할 수 있는 우산형 용어 중 하나죠
01:07.790 --> 01:12.140
사람마다 다른 의미를 가질 수 있는 거예요
01:12.140 --> 01:17.660
하지만 일반적으로 사람들은 자율적인 소프트웨어 엔터티라고 하죠
01:17.660 --> 01:25.640
입력 프롬프트와 텍스트 생성에서만이 아니라 다른 작업도 수행할 수 있죠
01:25.820 --> 01:27.530
전형적인 특징이죠
01:27.530 --> 01:28.700
자율적이라 치죠
01:28.700 --> 01:33.740
일종의 기관이 있고 목표 지향적이며 어떤 일을 하려고
01:33.740 --> 01:37.520
설정하고 작업에 구체적이죠
01:37.520 --> 01:42.620
보통 한 가지에 특화된 사람들이에요
01:43.010 --> 01:48.230
에이전트 프레임워크라는 것의 일부로 설계되었는데 인간의
01:48.230 --> 01:55.190
제한적인 개입으로 더 복잡한 문제를 해결하기 위해 에이전트가 상호 작용하는
01:55.190 --> 01:56.450
환경이죠
01:56.450 --> 02:00.020
인간에게 요청하는 요청 대응 상황이 아니에요
02:00.020 --> 02:06.150
하지만 이런 환경을 상상해 보세요 작업을 수행하기 위해 기존 소프트웨어와 상호
02:06.150 --> 02:12.690
작용하기 위해 다중 소프트웨어 에이전트가 llms의 조합이 될 수 있는 환경이요
02:12.690 --> 02:19.770
여러분이 기대할 수 있는 기능은 메모리와 지속성인데 요청 응답을
02:19.770 --> 02:26.820
넘어서 의사 결정과 오케스트레이션을 할 수 있고 무엇을 계획할
02:26.820 --> 02:30.750
수 있는지에 관한 거죠
02:30.930 --> 02:36.240
때로는 환경의 문제일 뿐입니다 그에 대한 어떤 계획으로 코드되어 있는 거죠
02:36.240 --> 02:40.410
계획을 책임지는 LLM이 있을 때도 있어요
02:40.410 --> 02:45.840
복잡한 문제를 작은 문제로 쪼개서 다른 모델이 해결하도록
02:45.840 --> 02:47.400
하는 모델이죠
02:47.880 --> 02:53.310
도구를 사용하는 건 유전적 인공지능의 예죠
02:53.310 --> 02:59.370
이제 익숙해지셨겠지만 모델에 데이터베이스나 인터넷 연결 등
02:59.370 --> 03:06.910
원하는 모든 것에 연결할 수 있는 기능을 제공합니다 기능에 액세스를 제공하고
03:06.910 --> 03:10.450
어떻게 작동하는지도 아니까요
03:10.450 --> 03:17.440
그냥 if문인 걸 알지만 이건 Lms가 이걸 할 수 있다는 효과를
03:17.440 --> 03:18.580
주죠
03:19.960 --> 03:22.540
몇 가지 할 게 있어요
03:22.540 --> 03:26.170
상황을 간단히 설명해 드리죠
03:26.200 --> 03:34.390
먼저 이미지를 생성할 수 있는 함수를 만들겠습니다 좋은 다중 모듈 사용 사례죠
03:34.390 --> 03:37.990
그걸 할 수 있는 LLM 호출을 할 거예요
03:37.990 --> 03:39.760
그걸 하는 함수가 되겠죠
03:39.760 --> 03:42.670
그 자체로 에이전트라고 볼 수 있어요
03:42.670 --> 03:49.540
아주 구체적이고 전문적인 지시를 그대로 실행하는 소프트웨어 같아요
03:49.540 --> 03:58.990
코드로 아티스트 작업을 할 거예요 오픈AI의 이미지 생성 모델인 달리
03:58.990 --> 04:02.240
3의 도움을 받아서요
04:02.480 --> 04:07.910
굳이 트집을 잡자면 이미지 생성은 LM과 무관하다고 주장할
04:07.910 --> 04:09.650
수도 있어요
04:09.680 --> 04:16.250
lm은 언어 모델이지만 요즘엔 일반적으로 더 넓은 세대 인공지능 컨텍스트와
04:16.250 --> 04:18.380
교환적으로 사용되죠
04:18.380 --> 04:24.590
이미지 생성이나 다른 다중 모듈 생성 역시 LM 엔지니어의 도구 키트에
04:24.590 --> 04:28.100
포함된다고 생각하는 경향이 있죠
04:29.120 --> 04:35.510
에이전트를 만드는 걸 살펴볼 겁니다 이런 작업을 할 수 있는 이런
04:35.510 --> 04:36.290
함수요
04:36.290 --> 04:43.700
이미지뿐 아니라 소리도 추가할 거예요 에이전트 프레임워크를 만들어
04:43.730 --> 04:50.000
인공지능 보조를 가르칠 거예요 우리가 말하고 그리는 법을 연구했던
04:50.030 --> 04:52.580
항공사 보조요
04:52.760 --> 04:55.820
그럼 바로 시작하죠 재미있을 것 같아요?
04:55.850 --> 04:59.060
흥미진진하게 들리면 좋겠네요 정말 멋질 테니까요
04:59.090 --> 05:00.320
빨리 하고 싶어요
05:00.320 --> 05:01.700
지금 당장 가죠

205
week5/community-contributions/subtitles/srts/59167007/en_US.srt

@ -0,0 +1,205 @@
WEBVTT
00:00.500 --> 00:02.780
Well, how fabulous is that?
00:02.780 --> 00:09.620
I hope that you are as wowed as I am by our new airline AI assistant and everything it can do.
00:09.620 --> 00:16.610
I've taken another screenshot here of a conversation I had, and you can see again that gorgeous
00:16.610 --> 00:17.690
image of London.
00:17.720 --> 00:18.740
A very different approach now.
00:18.740 --> 00:22.430
Not the montage, but something rather simpler.
00:22.430 --> 00:28.790
I find it astounding that you get such variety, such diversity of images.
00:28.850 --> 00:34.280
Um, and I also find it astounding that it's so easy to put together these sophisticated frameworks
00:34.280 --> 00:36.140
involving lots of functionality.
00:36.230 --> 00:39.890
Remember, we also had our tool running there, looking up the prices.
00:39.890 --> 00:46.400
Uh, everything we had together was a very sophisticated, complex app, complete with user interface.
00:46.400 --> 00:51.470
And we did it all just in a few hours worth of work.
00:53.000 --> 00:56.030
So it's a congratulations.
00:56.030 --> 01:01.370
But as always, there's a challenge for you, and if I may say one more time, the best way to learn
01:01.370 --> 01:02.150
is by doing.
01:02.150 --> 01:07.160
It is incredibly important that you now go and do some exercises and work on this to improve it.
01:07.190 --> 01:09.930
It's also a lot of fun as an extra bonus.
01:09.930 --> 01:11.880
So here are some of the things you can do.
01:11.910 --> 01:16.590
We talked before about adding in another tool to make a booking.
01:16.620 --> 01:21.360
Not a real booking, obviously, but make a booking, and then it should print to an output
01:21.360 --> 01:22.680
that a booking has been made.
01:22.680 --> 01:27.120
Or maybe, if you want, you could have it write to a file or something like that to give you a
01:27.120 --> 01:28.740
sense that the booking has happened.
01:28.770 --> 01:29.940
Add that as a tool.
01:29.940 --> 01:33.270
Hopefully you've done it already, but if not, now's a good time to do it.
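If you haven't built it yet, a minimal sketch of such a booking tool (hypothetical names; it just records the booking in memory and prints a confirmation, standing in for a real booking system or a file write):

```python
# Hypothetical make_booking tool: records bookings in memory and prints a
# confirmation, standing in for a real booking system (or a file write).
bookings = []

def make_booking(destination_city, price):
    # Print so we can watch when the LLM invokes this tool.
    print(f"Tool make_booking called for {destination_city} at {price}")
    bookings.append({"destination_city": destination_city, "price": price})
    return f"Booking confirmed to {destination_city} at {price}"
```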
01:33.360 --> 01:35.400
Then add another agent.
01:35.400 --> 01:41.040
Have an agent that is able to translate all of the responses to a different language.
01:41.040 --> 01:43.500
Something that we'd suggested for a previous project.
01:43.500 --> 01:48.060
But do that and then show it on the right hand side and use a different frontier model.
01:48.090 --> 01:54.240
How about Claude, for example, use Claude as a way to translate to another language of your choosing,
01:54.240 --> 01:58.140
and then you'd have to do some Gradio work to add another panel.
01:58.290 --> 02:00.450
With that translation.
02:00.450 --> 02:03.510
There will be a little bit of futzing around with Gradio when you do that.
02:03.510 --> 02:09.660
So you may find that that requires a little bit of googling, but hopefully you'll get an idea
02:09.690 --> 02:10.770
or you don't need to Google.
02:10.770 --> 02:17.520
You could actually ask, uh, Claude yourself for some advice on how to extend that Gradio app to add
02:17.520 --> 02:23.970
in that extra section to reflect the translation that it makes into another language.
02:23.970 --> 02:28.650
You'll find that when you do something like that, you provide a bunch of code and ask it to extend
02:28.650 --> 02:35.520
it, to do more, to add more capabilities, that these models are excellent at that kind of, uh,
02:35.520 --> 02:37.410
that kind of iterating on code.
02:37.800 --> 02:45.870
And then finally, since we've been enjoying multi-modality, one more multimodal task for you is audio
02:45.870 --> 02:52.890
to text, uh, add an agent that can listen to audio from your audio input source and turn it into text
02:52.890 --> 02:55.560
as the input to the AI assistant.
02:55.560 --> 02:57.150
And then you've really completed the loop.
02:57.150 --> 03:01.950
You'll be able to talk to it, and it will be able to talk back and draw images.
03:01.950 --> 03:05.040
When you were looking to ask for ticket prices.
03:05.040 --> 03:08.640
And that will then complete the week two challenge.
03:08.640 --> 03:15.780
And at that point you will be very familiar with Multi-modality and with using these, uh, stitching
03:15.810 --> 03:19.860
together these different agents to carry out a bigger task.
03:21.960 --> 03:24.790
And at that point, may I tell you?
03:24.820 --> 03:29.260
You are now 25% of the way to mastering LLM engineering.
03:29.290 --> 03:30.460
25% of the way.
03:30.490 --> 03:31.720
A quarter of the way through.
03:31.750 --> 03:36.520
You can describe Transformers comfortably, including all of the terminology.
03:36.520 --> 03:43.900
You can code against the APIs, and you can build multimodal assistants using UIs, using tools, using
03:43.900 --> 03:44.560
agents.
03:44.590 --> 03:47.710
This is practically second nature to you at this point.
03:48.010 --> 03:48.340
Uh.
03:48.370 --> 03:57.910
Next week, we change topic to something that is absolutely wonderful: the thriving open source community
03:57.910 --> 04:05.110
and the thriving capabilities that you have access to through open source. You're going to get to know
04:05.140 --> 04:07.720
Hugging Face really well, really, really well.
04:07.720 --> 04:14.050
You're going to work with pipelines and also with Tokenizers and with the models themselves with transformer
04:14.050 --> 04:15.040
models.
04:15.280 --> 04:21.430
And ultimately you're going to be running inference of open source models using Google Colab on their
04:21.430 --> 04:23.020
boxes with GPUs.
04:23.020 --> 04:30.010
And so by the end of the week, you'll be highly proficient with inference of open source models.
04:30.010 --> 04:31.540
And I can't wait to get to it.
04:31.540 --> 04:32.740
And I will see you then.
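The booking-tool challenge above can be sketched in Python. This is a minimal sketch, not the course's solution: the function name `make_booking`, the hard-coded prices, and the `bookings.txt` file are assumptions; only the tool-schema shape follows OpenAI's function-calling convention.

```python
# Hypothetical sketch of the booking tool suggested in the challenge.
# Prices and the output file name are invented for illustration.
ticket_prices = {"london": "$799", "paris": "$899", "tokyo": "$1400"}

def make_booking(destination_city: str) -> str:
    """Pretend to book a flight; append a confirmation line to a file as evidence."""
    city = destination_city.lower()
    if city not in ticket_prices:
        return f"Sorry, there is no route to {destination_city}."
    confirmation = f"Booking confirmed to {destination_city} at {ticket_prices[city]}"
    with open("bookings.txt", "a") as f:
        f.write(confirmation + "\n")
    return confirmation

# Tool schema in the format OpenAI's chat completions API expects
booking_function = {
    "name": "make_booking",
    "description": "Book a return ticket to the destination city and confirm it.",
    "parameters": {
        "type": "object",
        "properties": {
            "destination_city": {
                "type": "string",
                "description": "The city the customer wants to fly to",
            }
        },
        "required": ["destination_city"],
    },
}

tools = [{"type": "function", "function": booking_function}]
```

Passing `tools=tools` into the `chat.completions.create` call, and handling the resulting tool call, then works just like the price-lookup tool from day four.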

163
week5/community-contributions/subtitles/srts/59167007/ja_JP.srt

@ -0,0 +1,163 @@
WEBVTT
00:00.500 --> 00:02.780
なんて素晴らしいんだ
00:02.780 --> 00:09.620
私たちの新しい航空会社、 アイ・アシスタントとそのできることすべてに、 私と同じように驚いてほしい。
00:09.620 --> 00:17.690
ここでもう一枚、 私が交わした会話のスクリーンショットを撮ったのだが、 ロンドンのあのゴージャスなイメージをもう一度見ることができる。
00:17.720 --> 00:18.740
今はまったく違うアプローチだ。
00:18.740 --> 00:22.430
モンタージュではなく、 もっとシンプルなものだ。
00:22.430 --> 00:28.790
これほどバラエティーに富んだ、 多様なイメージが得られるというのは驚きです。
00:28.850 --> 00:36.140
それに、 多くの機能を含む洗練されたフレームワークを簡単に組み立てることができるのも驚きだ。
00:36.230 --> 00:39.890
私たちはそこでツールを動かし、 価格を調べていたことも覚えている。
00:39.890 --> 00:46.400
私たちが一緒に作ったものは、 ユーザーインターフェイスを備えた非常に洗練された複雑なアプリだった。
00:46.400 --> 00:51.470
しかも、 数時間分の作業ですべてをやり遂げた。
00:53.000 --> 00:56.030
おめでとう。
00:56.030 --> 01:02.150
しかし、 いつものように、 あなたには挑戦がある。 もう一度言わせてもらえば、 学ぶ最善の方法は実践することだ。
01:02.150 --> 01:07.160
それを改善するために、 今からエクササイズをしたり、 取り組んだりすることが非常に重要だ。
01:07.190 --> 01:09.930
おまけにとても楽しい。
01:09.930 --> 01:11.880
そこで、 あなたができることをいくつか紹介しよう。
01:11.910 --> 01:16.590
以前、 予約のための別のツールを追加するという話をした。
01:16.620 --> 01:22.680
理屈の上では、 明らかに実際の予約ではなく、 予約を行い、 予約が行われたことを出力する必要があります。
01:22.680 --> 01:28.740
あるいは、 必要であれば、 ファイルか何かに書き込んで、 予約が起こったことを知らせることもできるだろう。
01:28.770 --> 01:29.940
それを道具として加える。
01:29.940 --> 01:33.270
すでに済んでいればいいが、 そうでなければ今がチャンスだ。
01:33.360 --> 01:35.400
それから別のエージェントを加える。
01:35.400 --> 01:41.040
ええと、 すべての返答を異なる言語に翻訳できるエージェントを雇ってください。
01:41.040 --> 01:43.500
ええと、 以前のプロジェクトで提案したものなんだ。
01:43.500 --> 01:48.060
しかし、 それを右側に表示し、 別のフロンティアモデルを使う。
01:48.090 --> 01:58.140
例えばクロードはどうだろう。 クロードを好きな別の言語に翻訳する方法として使い、 別のパネルを追加するためにGradioの作業をしなければならない。
01:58.290 --> 02:00.450
その翻訳で。
02:00.450 --> 02:03.510
その際、 グラディオを少しいじらなければならない。
02:03.510 --> 02:10.770
そのため、 少しググる必要があるかもしれないが、 うまくいけばアイデアが得られるかもしれないし、 ググる必要もないだろう。
02:10.770 --> 02:17.520
Gradioアプリを拡張して、 他言語への翻訳を反映する追加セクションを追加する方法について、
02:17.520 --> 02:23.970
クロード自身にアドバイスを求めることもできるだろう。
02:23.970 --> 02:28.650
コードの束を提供し、 それを拡張したり、 より多くの機能を追加したりするよう求めるようなことをすると、
02:28.650 --> 02:37.410
これらのモデルはそのような、 コードの反復に優れていることがわかるだろう。
02:37.800 --> 02:45.870
そして最後に、 マルチモーダリティを楽しんできたので、 もう1つマルチモーダルなタスクとして、 音声をテキストに変換する、
02:45.870 --> 02:55.560
つまり、 音声入力ソースから音声を聞き、 それをAIアシスタントへの入力としてテキストに変換できるエージェントを追加します。
02:55.560 --> 02:57.150
そして、 ループを完成させる。
02:57.150 --> 03:01.950
あなたはそれに話しかけることができるようになり、 それは言葉を返してイメージを描くことができるようになる。
03:01.950 --> 03:05.040
チケットの値段を聞こうと思ったとき。
03:05.040 --> 03:08.640
これで2週目のチャレンジは終了だ。
03:08.640 --> 03:19.860
その時点で、 あなたはマルチモダリティに精通し、 より大きな仕事を遂行するために、 さまざまなエージェントをつなぎ合わせて使うことになる。
03:21.960 --> 03:24.790
その時、 私はこう言った。
03:24.820 --> 03:29.260
これでLLMエンジニアリングをマスターする道のりは25%になった。
03:29.290 --> 03:30.460
全体の25%だ。
03:30.490 --> 03:31.720
4分の1は終わった。
03:31.750 --> 03:36.520
あなたはトランスフォーマーを、 すべての用語を含めて快適に説明することができます。
03:36.520 --> 03:44.560
あなたはAPIに対してコードを書くことができ、 UIを使い、 ツールを使い、 エージェントを使い、 マルチモーダルアシスタンスを構築することができる。
03:44.590 --> 03:47.710
この時点では、 これはほとんど自然なことだ。
03:48.010 --> 03:48.340
ええと。
03:48.370 --> 03:57.910
来週は、 オープンソースコミュニティの繁栄と、 オープンソースを通じてアクセスできる能力の繁栄という、
03:57.910 --> 04:07.720
絶対に素晴らしいものに話題を変えて、 Hugging Faceを本当によく、 本当によく知ることになる。
04:07.720 --> 04:15.040
パイプラインやトーケナイザー、 トランスフォーマーモデルを使ったモデルそのものを扱うことになる。
04:15.280 --> 04:23.020
そして最終的には、 GPUを搭載したGoogle Colabを使ってオープンソースのモデルの推論を実行することになる。
04:23.020 --> 04:30.010
今週が終わるころには、 オープンソースモデルの推論に習熟していることだろう。
04:30.010 --> 04:31.540
そして、 早くそれを手に入れたい。
04:31.540 --> 04:32.740
その時にまた会おう。
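The translation-agent challenge described in this lecture (this file is its Japanese subtitle track) could be sketched with Claude as the translator. A hedged sketch: the model name, prompt wording, and helper names here are assumptions; the `messages.create` call follows Anthropic's published Python API.

```python
def translation_system_prompt(language: str) -> str:
    """Build the system prompt for the translator agent (wording is an assumption)."""
    return (f"You are a translator. Translate everything the user says into {language}. "
            f"Reply with the translation only.")

def translate(text: str, language: str = "French") -> str:
    """Ask Claude to translate one assistant reply; needs an Anthropic API key."""
    import anthropic  # deferred so the sketch loads without the package installed
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-3-haiku-20240307",  # assumed model choice
        max_tokens=500,
        system=translation_system_prompt(language),
        messages=[{"role": "user", "content": text}],
    )
    return message.content[0].text
```

The translated string could then be written into an extra Gradio panel alongside the main chat.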

205
week5/community-contributions/subtitles/srts/59167007/ko_KR.srt

@ -0,0 +1,205 @@
WEBVTT
00:00.500 --> 00:02.780
정말 멋지지 않아요?
00:02.780 --> 00:09.620
우리 새 항공사, 아이 어시스턴트와 그 모든 것에 저만큼 놀라셨길 바라요
00:09.620 --> 00:16.610
제가 나눈 대화를 스크린샷으로 찍어 봤어요 런던의 아름다운 모습을 다시 한번
00:16.610 --> 00:17.690
볼 수 있죠
00:17.720 --> 00:18.740
지금은 아주 다른 접근법이죠
00:18.740 --> 00:22.430
몽타주보다는 좀 더 간단한 거요
00:22.430 --> 00:28.790
이렇게 다양하고 다양한 이미지를 볼 수 있다니 놀라워요
00:28.850 --> 00:34.280
또 하나 놀라운 건 이렇게 정교한 프레임워크를 조립하기가 이렇게 쉽다는 거예요 많은 기능성 요소를
00:34.280 --> 00:36.140
포함해서요
00:36.230 --> 00:39.890
그리고 가격표를 보기 위해 툴을 실행시켰죠
00:39.890 --> 00:46.400
우리가 함께 만든 모든 건 사용자 인터페이스가 있는 아주 정교하고 복잡한 앱이었어요
00:46.400 --> 00:51.470
그 모든 걸 몇 시간 만에 해냈죠
00:53.000 --> 00:56.030
축하의 의미군요
00:56.030 --> 01:01.370
하지만 늘 그렇듯 도전은 있어요 다시 한번 말하지만 가장 좋은 방법은 직접 해 보는
01:01.370 --> 01:02.150
거예요
01:02.150 --> 01:07.160
이제 여러분이 이 문제를 개선하기 위해 운동하고 노력해야 해요
01:07.190 --> 01:09.930
보너스로 재미도 쏠쏠하죠
01:09.930 --> 01:11.880
여러분이 할 수 있는 걸 알려드리죠
01:11.910 --> 01:16.590
예약을 위한 다른 도구를 추가하는 것에 대해 전에 얘기했었죠
01:16.620 --> 01:21.360
이론상으로는 진짜 예약은 아니지만 예약을 하고 나면 예약이 된 출력물로
01:21.360 --> 01:22.680
프린트되어야 하죠
01:22.680 --> 01:27.120
원한다면 파일에 작성하게 할 수도 있어요 예약이 됐다는
01:27.120 --> 01:28.740
느낌을 주는 거죠
01:28.770 --> 01:29.940
도구로 추가하세요
01:29.940 --> 01:33.270
이미 해보셨길 바라지만 아니라면 지금이 좋은 때예요
01:33.360 --> 01:35.400
그럼 에이전트를 추가해요
01:35.400 --> 01:41.040
모든 응답을 다른 언어로 번역할 요원이 필요해요
01:41.040 --> 01:43.500
전에 했던 프로젝트에서 제안했던 거예요
01:43.500 --> 01:48.060
하지만 그렇게 하고 오른쪽에 다른 프론티어 모델을 사용하세요
01:48.090 --> 01:54.240
예를 들어 클로드를 이용해서 원하는 다른 언어로 통역해 보세요 그리고 그라디오
01:54.240 --> 01:58.140
작업을 해서 패널을 하나 더 추가하고요
01:58.290 --> 02:00.450
그 말을 번역하면요
02:00.450 --> 02:03.510
그러면서도 그라디오의 비트를 약간 손봐야 해요
02:03.510 --> 02:09.660
구글링이 약간 필요할 수도 있지만, 감이 잡히면 구글링할 필요
02:09.690 --> 02:10.770
없어요
02:10.770 --> 02:17.520
클로드한테 조언을 구할 수도 있어요 그래디오 앱을 확장해서 번역된
02:17.520 --> 02:23.970
내용을 다른 언어로 반영할 추가 섹션에 추가할 방법요
02:23.970 --> 02:28.650
이런 작업을 할 때는 코드를 잔뜩 제공하고 확장하고, 더
02:28.650 --> 02:35.520
하고, 더 많은 기능을 추가하라고 요청하죠 이런 모델은 그런 종류의 코드에서의 반복에
02:35.520 --> 02:37.410
아주 뛰어나요
02:37.800 --> 02:45.870
끝으로, 다중 모듈을 즐기고 계시니 한 가지 더 다중 모듈 작업은 오디오 투 텍스트입니다
02:45.870 --> 02:52.890
에이전트를 추가해 오디오 입력 소스에서 오디오를 듣고 인공지능 보조의 입력으로
02:52.890 --> 02:55.560
텍스트로 바꾸는 거죠
02:55.560 --> 02:57.150
그럼 루프가 완성되는 거죠
02:57.150 --> 03:01.950
대화도 할 수 있고 대화도 하고 그림도 그릴 수 있죠
03:01.950 --> 03:05.040
티켓 가격을 물어보려고 할 때요
03:05.040 --> 03:08.640
그러면 둘째 주 과제가 끝나요
03:08.640 --> 03:15.780
그때쯤이면 더 큰 작업을 수행하기 위해 서로 다른 요소들을 꿰매는
03:15.810 --> 03:19.860
다중 양식에 익숙해지겠죠
03:21.960 --> 03:24.790
그 시점에서, 말해도 될까요?
03:24.820 --> 03:29.260
이제 LLM 엔지니어링의 25%를 완성했어요
03:29.290 --> 03:30.460
25%는 성공했죠
03:30.490 --> 03:31.720
4분의 1이 지났어요
03:31.750 --> 03:36.520
트랜스포머를 편하게 묘사할 수 있어요 모든 용어도 포함해서요
03:36.520 --> 03:43.900
API에 대해 코드도 할 수 있고 UI, 도구, 에이전트를 이용해 다중 모듈 보조를 구축할 수도
03:43.900 --> 03:44.560
있어요
03:44.590 --> 03:47.710
이젠 이런 게 당신에겐 제2의 천성이군요
03:48.010 --> 03:48.340
03:48.370 --> 03:57.910
다음 주엔 정말 멋진 것으로 주제를 바꿉니다 오픈 소스 커뮤니티와 오픈 소스를 통해 액세스할
03:57.910 --> 04:05.110
수 있는 역량이 번창하는 것으로요 여러분은 Hugging Face를 아주
04:05.140 --> 04:07.720
아주 잘 알게 될 거예요
04:07.720 --> 04:14.050
파이프라인도 작업하고 Tokenizers도 작업하고 트랜스포머 모델 그 자체도
04:14.050 --> 04:15.040
작업하죠
04:15.280 --> 04:21.430
궁극적으로 여러분은 오픈 소스 모델을 실행할 겁니다 구글 Colab을 이용해 GPU와
04:21.430 --> 04:23.020
함께 상자에서요
04:23.020 --> 04:30.010
주말쯤엔 오픈 소스 모델 추론에 아주 능숙해지실 거예요
04:30.010 --> 04:31.540
빨리 시작하고 싶네요
04:31.540 --> 04:32.740
그때 봐요

304
week5/community-contributions/subtitles/srts/59167009/en_US.srt

@ -0,0 +1,304 @@
WEBVTT
00:00.740 --> 00:01.910
Welcome back.
00:01.910 --> 00:04.220
It's time to make our full agent framework.
00:04.220 --> 00:05.630
I'm super excited about this.
00:05.660 --> 00:10.490
It's pulling everything together that we've been doing before, and I think you'll be very happy with
00:10.490 --> 00:11.510
the outcome.
00:11.720 --> 00:13.730
Uh, so just a quick recap.
00:13.730 --> 00:14.870
An agent framework.
00:14.960 --> 00:17.840
The term agent, as I said, is an umbrella term.
00:17.840 --> 00:20.120
It can refer to a bunch of different techniques.
00:20.240 --> 00:23.060
Um, for example, it can be any of these five.
00:23.090 --> 00:28.250
It can be about breaking a complex problem into smaller steps with multiple models carrying out different
00:28.250 --> 00:29.420
specialized tasks.
00:29.420 --> 00:33.890
It can be the ability for an LLM to have tools to give them extra capabilities.
00:33.890 --> 00:41.000
It can be, uh, talking about the agent environment, which is the setup or the agent framework that
00:41.000 --> 00:43.160
allows agents to collaborate.
00:43.190 --> 00:50.270
Um, it can be the idea that one LLM can act as a planner, dividing tasks into smaller ones, that
00:50.270 --> 00:55.100
specialists, which can themselves be LLMs or bits of software, can carry out.
00:55.280 --> 01:00.210
Um, and then there is another point here, which is that people talk about agentic AI when you're thinking
01:00.210 --> 01:07.740
about an agent having its own autonomy agency, uh, beyond necessarily just responding to a prompt,
01:07.740 --> 01:13.050
such as having memory, being able to sort of, uh, I don't know, do something like, uh, scrape
01:13.050 --> 01:18.690
the web for news information and using that to make decisions about buying or selling stocks, something
01:18.690 --> 01:19.110
like that.
01:19.110 --> 01:26.040
That is a kind of, uh, something that that exists outside the context of just say, a request response
01:26.040 --> 01:26.670
chat.
01:26.670 --> 01:32.100
So these are all the kinds of ways that that these are the kinds of things people are referring to when
01:32.100 --> 01:34.890
they talk about agentic AI and the use of agents.
01:34.890 --> 01:39.510
And what we're really doing here is we're talking about, uh, definitely number one and two there and
01:39.510 --> 01:42.210
to a certain extent, numbers three and five.
01:42.300 --> 01:45.090
But we're not we're not building an LLM that does the planning.
01:45.090 --> 01:47.280
That's not something we'll be doing in this session.
01:47.280 --> 01:55.710
So, uh, this should be somewhat familiar to you because this is the chat method that's quite close
01:55.710 --> 01:57.000
to what we had before.
01:57.000 --> 01:59.710
So you'll recognize a few things about this.
01:59.710 --> 02:07.210
This section here is the usual Gradio chat function that we know really well.
02:07.210 --> 02:16.180
It takes a message and a history, and it, uh, sort of unpacks that history into the format that OpenAI
02:16.210 --> 02:20.200
will expect and then calls the response.
02:20.440 --> 02:26.080
This part here will also look familiar to you because it's our use of tools.
02:26.080 --> 02:32.860
It's where we find out if the model wants to call a tool, and if so, we handle that tool.
02:33.010 --> 02:38.950
Uh, but there's one little extra line just inserted in there, and it's that line there that what we're
02:38.950 --> 02:44.710
going to say is, if the model decides it needs to run the tool to find the
02:44.710 --> 02:54.520
price of a ticket, then we will also have the, um, artist generate an image to represent that city
02:54.520 --> 02:56.200
that's being looked up.
02:56.200 --> 02:58.160
So there we have it.
02:58.220 --> 03:00.050
Uh, that's, uh that's nice.
03:00.050 --> 03:04.160
And also, now, if you remember before I told you there was a reason I passed back city that you're
03:04.160 --> 03:04.940
going to find out.
03:04.970 --> 03:05.690
Here it is.
03:05.690 --> 03:09.800
That's why I needed the city to pass it to the artist.
03:10.130 --> 03:13.880
Um, and then, uh, this is all exactly the same.
03:13.880 --> 03:16.040
There's one more tiny change.
03:16.040 --> 03:22.760
Which is this here, which is that, uh, once I've collected the response from the model, I then call
03:22.790 --> 03:26.840
talker to make sure that we speak the response.
03:26.840 --> 03:29.780
So that is our chat.
03:29.960 --> 03:32.270
Uh, let's run that.
03:33.920 --> 03:40.550
Now, this, I should say, since I've always shown off about how easy Gradio is, this code is a little
03:40.580 --> 03:41.420
bit more involved.
03:41.420 --> 03:46.970
You may notice the reason is because we now want to do a little bit more and show images.
03:46.970 --> 03:55.190
We're going outside the default, the sort of off the shelf, uh, chat user interface that Gradio provides
03:55.190 --> 03:55.550
for us.
03:55.550 --> 03:58.120
And we have to then build the interface ourselves.
03:58.150 --> 04:04.960
And as a result, I've had to put together this interface that kind of puts together the various components
04:04.960 --> 04:07.090
like the input and the buttons.
04:07.300 --> 04:10.180
But what I'll say is this is still actually super straightforward.
04:10.180 --> 04:11.470
It still reads like English.
04:11.470 --> 04:13.030
It's very clear what's going on.
04:13.030 --> 04:19.330
You'll see everything that's happening here, and hopefully this will be quite readable for you.
04:19.330 --> 04:26.740
And you can use this to build more sophisticated chats, more sophisticated UIs yourself.
04:26.740 --> 04:36.850
So with that background, we now are going to run this. It's running, and we'll bring that up.
04:37.240 --> 04:41.650
And here we have our chat with our new assistant.
04:41.680 --> 04:43.300
Let's give it a try.
04:47.740 --> 04:48.280
Hello.
04:48.280 --> 04:49.630
How can I assist you today.
04:51.040 --> 04:52.090
You like that?
04:52.240 --> 04:53.650
It spoke to us.
04:53.740 --> 04:54.520
There we go.
04:54.520 --> 04:56.620
That's the first use of an agent.
04:56.620 --> 05:04.420
We had a specialist model that's able to create, uh, audio, and we integrated that with our chatbot
05:04.420 --> 05:07.510
so that it was able to speak back to us.
05:15.760 --> 05:17.020
Great choice.
05:17.080 --> 05:20.410
Would you like to know the ticket price for a return trip to London?
05:22.930 --> 05:24.220
There we go.
05:24.250 --> 05:27.220
That's entertaining, let's say.
05:34.240 --> 05:35.710
We know there's a pause.
05:43.090 --> 05:44.080
Here we go.
05:48.040 --> 05:51.220
A return ticket to London is priced at 799.
05:53.120 --> 05:54.920
And there we have it.
05:54.950 --> 05:58.340
A return ticket to London is priced at $799.
05:58.340 --> 06:00.290
And there is the image.
06:00.290 --> 06:04.220
And that image looks spectacular.
06:04.370 --> 06:06.290
A London bus in the middle.
06:06.290 --> 06:07.700
It's got Big Ben.
06:07.700 --> 06:10.400
It's got the bridge.
06:10.400 --> 06:11.690
It's got, uh.
06:11.720 --> 06:14.420
Yeah, I can see taxi there.
06:14.450 --> 06:18.950
It's just a great montage of images.
06:19.160 --> 06:26.570
Uh, and so I find this to be very compelling indeed, a wonderful example of what we're able to achieve
06:26.570 --> 06:28.280
with just a little bit of code.
06:28.520 --> 06:37.550
And so I present to you a multimodal app, complete with audio and some images running as part of what
06:37.550 --> 06:48.050
is, in a small way, a multimodal agentic framework for talking to an airline AI assistant.
06:48.140 --> 06:49.280
Great work.
06:49.310 --> 06:52.790
I'll see you for the challenge of the week and the wrap up.
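The tool-handling step this lecture walks through, including passing the city back so the artist can draw it, can be sketched as follows. The shapes of the `tool` message and of the tool-call object follow OpenAI's function-calling protocol; the hard-coded prices and the `SimpleNamespace` stand-in for the SDK's tool-call object are assumptions for illustration.

```python
import json
from types import SimpleNamespace

ticket_prices = {"london": "$799", "paris": "$899", "tokyo": "$1400"}

def get_ticket_price(destination_city: str) -> str:
    """Look up the (hard-coded) return ticket price for a city."""
    return ticket_prices.get(destination_city.lower(), "Unknown")

def handle_tool_call(tool_call):
    """Unpack the model's tool call, run the tool, and build the 'tool' role reply.

    Also returns the city, so the caller can hand it to the image-generating
    artist, as described in the lecture."""
    arguments = json.loads(tool_call.function.arguments)
    city = arguments.get("destination_city")
    price = get_ticket_price(city)
    response = {
        "role": "tool",
        "content": json.dumps({"destination_city": city, "price": price}),
        "tool_call_id": tool_call.id,
    }
    return response, city

# A stand-in for the SDK's tool-call object, just to exercise the function:
fake_call = SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(name="get_ticket_price",
                             arguments='{"destination_city": "London"}'),
)
response, city = handle_tool_call(fake_call)
```

In the real chat function, `response` is appended to the message list before calling the model again, and `city` is passed to the artist to generate the image.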

250
week5/community-contributions/subtitles/srts/59167009/ja_JP.srt

@ -0,0 +1,250 @@
WEBVTT
00:00.740 --> 00:01.910
お帰りなさい。
00:01.910 --> 00:04.220
フルエージェントの枠組みを作る時が来た。
00:04.220 --> 00:05.630
すごく楽しみだよ。
00:05.660 --> 00:11.510
私たちがこれまでやってきたことをすべてまとめている。
00:11.720 --> 00:13.730
ええと、 簡単にまとめると
00:13.730 --> 00:14.870
エージェントのフレームワーク。
00:14.960 --> 00:17.840
エージェントという言葉は......さっきも言ったように、 包括的な言葉なんだ。
00:17.840 --> 00:20.120
さまざまなテクニックを指すことがある。
00:20.240 --> 00:23.060
例えば、 この5つのうちのどれでもいい。
00:23.090 --> 00:29.420
それは、 複雑な問題をより小さなステップに分割し、 複数のモデルが異なる専門的なタスクを実行することである。
00:29.420 --> 00:33.890
それは、 LLMが特別な能力を与えるツールを持つ能力である。
00:33.890 --> 00:43.160
エージェント環境、 つまりエージェントが協力できるようにするためのセットアップやエージェントのフレームワークのことです。
00:43.190 --> 00:50.270
つまり、 1人のLLMがプランナーとして機能し、 タスクをより小さなものに分割し、 それ自身がLLMやソフトウェアの一部となりうるスペシャリストがそれを実行する、
00:50.270 --> 00:55.100
というアイデアだ。
00:55.280 --> 01:00.210
それは、 エージェントが、 単にプロンプトに応答するだけでなく、 独自の自律的なエージェンシーを持ち、
01:00.210 --> 01:19.110
例えば、 記憶力を持ち、 ニュース情報をウェブにかき集め、 それを使って株の売り買いを判断するようなことができるようなエージェントを考えているときに、 人々はエージェント型AIについて話すということです。
01:19.110 --> 01:26.670
それは、 リクエスト・レスポンス・チャットという文脈の外側に存在するものだ。
01:26.670 --> 01:34.890
つまり、 エージェント型AIやエージェントの利用について語るとき、 人々が言及するのはこうした種類の方法ばかりなのだ。
01:34.890 --> 01:39.510
そして、 私たちがここでやろうとしていることは、 1番と2番、
01:39.510 --> 01:42.210
そして3番と5番についてだ。
01:42.300 --> 01:45.090
しかし、 我々はプランニングを行うLLMを構築しているわけではない。
01:45.090 --> 01:47.280
それはこのセッションでやることではない。
01:47.280 --> 01:57.000
というわけで、 これは以前使っていたものにかなり近いチャット方法なので、 多少はなじみがあるはずだ。
01:57.000 --> 01:59.710
だから、 これについてはいくつか知っていることがあるだろう。
01:59.710 --> 02:07.210
このセクションは、 私たちがよく知っているいつものGradioのチャット機能だ。
02:07.210 --> 02:16.180
メッセージと履歴を受け取り、 その履歴をOpenAIが期待するフォーマットに展開し、
02:16.210 --> 02:20.200
レスポンスを呼び出す。
02:20.440 --> 02:26.080
この部分は、 私たちの道具の使い方なので、 皆さんも見覚えがあるだろう。
02:26.080 --> 02:32.860
モデルがツールを呼び出したいかどうかを調べ、 呼び出したい場合はそのツールを処理する。
02:33.010 --> 02:44.710
その行は、 もし人が、 もしモデルがチケットの値段を調べるためにツールを実行する必要があると判断した場合、
02:44.710 --> 02:56.200
アーティストに、 調べられる都市を表す画像を生成させるというものです。
02:56.200 --> 02:58.160
そうだ。
02:58.220 --> 03:00.050
ああ、 それは......いいね。
03:00.050 --> 03:04.940
それに、 私がバックシティをパスしたのには理由があるんだ。
03:04.970 --> 03:05.690
これだ。
03:05.690 --> 03:09.800
だから、 アーティストに渡すために市が必要だったんだ。
03:10.130 --> 03:13.880
ええと、 それから、 これは全部まったく同じなんだ。
03:13.880 --> 03:16.040
もうひとつ、 小さな変更がある。
03:16.040 --> 03:26.840
つまり、 モデルからの反応を収集したら、 トーカーを呼び出して反応を確認する。
03:26.840 --> 03:29.780
これが私たちのチャットだ。
03:29.960 --> 03:32.270
ええと、 それを実行しよう。
03:33.920 --> 03:41.420
さて、 Gradioがいかに簡単かをいつも自慢してきたので、 このコードはもう少し複雑だと言っておく。
03:41.420 --> 03:46.970
お気づきの方もいらっしゃるかもしれませんが、 その理由は、 私たちが今、 もう少し、 画像を見せたいからです。
03:46.970 --> 03:55.550
私たちは、 Gradioが私たちのために提供してくれるデフォルトの、 ある種の既製品の、 えー、 チャット・ユーザー・インターフェースの外に出ようとしています。
03:55.550 --> 03:58.120
そして、 自分たちでインターフェースを構築しなければならない。
03:58.150 --> 04:07.090
その結果、 入力やボタンのような様々なコンポーネントをまとめたインターフェースを作らなければならなくなった。
04:07.300 --> 04:10.180
しかし、 私が言いたいのは、 これでも実は超簡単だということだ。
04:10.180 --> 04:11.470
まだ英語のように読める。
04:11.470 --> 04:13.030
何が起こっているかははっきりしている。
04:13.030 --> 04:19.330
ここで起こっていることがすべてわかるだろうし、 願わくば、 これがあなたにとってかなり読みやすいものになることを願っている。
04:19.330 --> 04:26.740
そしてこれを利用して、 より洗練されたチャットやより洗練されたUIを自分で構築することができる。
04:26.740 --> 04:36.850
このような背景を踏まえて、 これからこれを実行に移し、 それを表示させる。
04:37.240 --> 04:41.650
そしてここで、 新しいアシスタントとのおしゃべりが始まった。
04:41.680 --> 04:43.300
試してみよう。
04:47.740 --> 04:48.280
こんにちは。
04:48.280 --> 04:49.630
今日はどのようなご用件でしょうか?
04:51.040 --> 04:52.090
気に入ったかい?
04:52.240 --> 04:53.650
それは私たちに語りかけてきた。
04:53.740 --> 04:54.520
これでよし。
04:54.520 --> 04:56.620
それがエージェントの最初の使い方だ。
04:56.620 --> 05:07.510
私たちは、 音声を作成できるスペシャリスト・モデルを持っていて、 それをチャットボットに統合して、 チャットボットが私たちに話しかけられるようにしたんだ。
05:15.760 --> 05:17.020
素晴らしい選択だ。
05:17.080 --> 05:20.410
ロンドンへの往復航空券の料金をお知りになりたいですか?
05:22.930 --> 05:24.220
これでよし。
05:24.250 --> 05:27.220
それはエンターテインメントだ。
05:34.240 --> 05:35.710
間があるのは分かっている。
05:43.090 --> 05:44.080
さあ、 始めよう。
05:48.040 --> 05:51.220
ロンドンまでの往復航空券は799ドル。
05:53.120 --> 05:54.920
そうだ。
05:54.950 --> 05:58.340
ロンドンまでの往復チケットは799ドル。
05:58.340 --> 06:00.290
そして、 そこにはイメージがある。
06:00.290 --> 06:04.220
そして、 その画像は壮大に見える。
06:04.370 --> 06:06.290
真ん中にロンドンバス。
06:06.290 --> 06:07.700
ビッグベンがある。
06:07.700 --> 06:10.400
ブリッジがある。
06:10.400 --> 06:11.690
これは...
06:11.720 --> 06:14.420
ああ、 タクシーが見えるね。
06:14.450 --> 06:18.950
素晴らしいモンタージュ映像だ。
06:19.160 --> 06:28.280
ほんの少しのコードで実現できることの素晴らしい例だ。
06:28.520 --> 06:37.550
そこで、 航空会社のAIアシスタントと会話するためのマルチモーダル・エージェント・フレームワークの一部として、
06:37.550 --> 06:48.050
音声と画像を含むマルチモーダル・アプリを紹介しよう。
06:48.140 --> 06:49.280
素晴らしい仕事だ。
06:49.310 --> 06:52.790
また今週のチャレンジと総括でお会いしましょう。

286
week5/community-contributions/subtitles/srts/59167009/ko_KR.srt

@ -0,0 +1,286 @@
WEBVTT
00:00.740 --> 00:01.910
잘 돌아왔어요
00:01.910 --> 00:04.220
에이전트 프레임워크를 만들 때가 됐어요
00:04.220 --> 00:05.630
정말 기대돼요
00:05.660 --> 00:10.490
지금까지 해 왔던 것처럼 모든 걸 잘 활용하고 있어요 결과에 아주 만족하실
00:10.490 --> 00:11.510
거예요
00:11.720 --> 00:13.730
간단히 정리해 보죠
00:13.730 --> 00:14.870
프레임워크 요원요
00:14.960 --> 00:17.840
에이전트 I는 포괄적인 용어예요
00:17.840 --> 00:20.120
다양한 기술을 참고할 수 있어요
00:20.240 --> 00:23.060
예를 들어 이 다섯 가지 중 아무거나요
00:23.090 --> 00:28.250
복잡한 문제를 여러 모델이 각기 다른 전문화된 작업을 수행하며 작은 단계로
00:28.250 --> 00:29.420
단계화하는 거죠
00:29.420 --> 00:33.890
LLM이 추가적인 기능을 부여할 도구를 갖는 기능일 수 있죠
00:33.890 --> 00:41.000
에이전트 환경에 대해 얘기할 수도 있습니다 에이전트가 협업을 할 수 있는 에이전트
00:41.000 --> 00:43.160
프레임워크죠
00:43.190 --> 00:50.270
하나의 LLM이 플래너 역할을 하는 겁니다 작은 작업들로 나누고 전문가들이 직접
00:50.270 --> 00:55.100
llm이 되거나 소프트웨어 조각을 수행하는 거죠
00:55.280 --> 01:00.210
또 다른 요점은 에이전트 인공지능을 생각할
01:00.210 --> 01:07.740
때 독자적인 기관을 떠올리는데 즉각적인 응답을 넘어서 메모리를
01:07.740 --> 01:13.050
갖고 뉴스 정보를 검색해 웹을 긁고 주식을 매매할지
01:13.050 --> 01:19.110
결정하는 데 사용한다고 보면 돼요
01:19.110 --> 01:26.670
그건 말하자면 요청 응답 채팅의 컨텍스트 밖에 존재하는 무언가예요
01:26.670 --> 01:32.100
이런 식으로∙∙∙ 에이전트 AI와 에이전트의 사용에 대해 얘기할
01:32.100 --> 01:34.890
때 사람들이 언급하는 것들이죠
01:34.890 --> 01:39.510
우리가 여기서 얘기하고 있는 건 1번과 2번이고 3번과
01:39.510 --> 01:42.210
5번도 어느 정도 있어요
01:42.300 --> 01:45.090
하지만 계획을 실행하는 LLM을 만드는 게 아니잖아요
01:45.090 --> 01:47.280
이번 시간엔 그런 걸 안 할 거예요
01:47.280 --> 01:57.000
이건 익숙하실 거예요 채팅 메서드거든요 전에 있던 것과 꽤 비슷하죠
01:57.000 --> 01:59.710
몇 가지 눈에 띄는 게 있어요
01:59.710 --> 02:07.210
이 섹션은 우리가 잘 아는 그라디오 채팅 함수예요
02:07.210 --> 02:16.180
메시지와 히스토리를 취하고 그 히스토리를 오픈AI가 기대하는 포맷으로
02:16.210 --> 02:20.200
풀어낸 다음 응답을 호출해요
02:20.440 --> 02:26.080
이 부분도 익숙하실 겁니다 도구를 사용하는 곳이니까요
02:26.080 --> 02:32.860
모델이 도구를 호출하길 원하는지 알고 있다면 그 도구를 다루죠
02:33.010 --> 02:38.950
여기 한 줄 더 있는데요 여기서 말씀 드리고 싶은
02:38.950 --> 02:44.710
건 만약 고객이 티켓 가격을 찾기 위해 툴을 실행해야
02:44.710 --> 02:56.200
한다면 아티스트가 그 도시를 나타내는 이미지를 생성하도록 할 거예요
02:56.200 --> 02:58.160
자, 됐어요
02:58.220 --> 03:00.050
그거 좋네요
03:00.050 --> 03:04.160
그리고 제가 전에 얘기했던 걸 기억하신다면 제가 돌아온 이유를 곧 알게
03:04.160 --> 03:04.940
되실 거예요
03:04.970 --> 03:05.690
여기 있네요
03:05.690 --> 03:09.800
그래서 시가 예술가에게 넘겨야 했어요
03:10.130 --> 03:13.880
그리고 이건 전부 똑같아요
03:13.880 --> 03:16.040
한 가지 더 있어요
03:16.040 --> 03:22.760
여기 있는 이 모델에서 반응을 수집한 다음 토크 커뮤니티에
03:22.790 --> 03:26.840
전화해 반응을 확인하죠
03:26.840 --> 03:29.780
이게 우리 대화예요
03:29.960 --> 03:32.270
그걸 실행해 보죠
03:33.920 --> 03:41.420
그라디오가 얼마나 쉬운지 늘 자랑했듯이 이 코드는 좀 더 복잡해요
03:41.420 --> 03:46.970
그 이유는 좀 더 많은 걸 하고 이미지를 보여주기 위해서예요
03:46.970 --> 03:55.550
기본 설정 밖으로 나가보죠 규격화된 채팅 사용자 인터페이스 같은 건데 그라디오가 제공해요
03:55.550 --> 03:58.120
인터페이스는 우리가 직접 만들어야 해요
03:58.150 --> 04:04.960
그 결과 이 인터페이스를 구성해야 했어요 input이나 버튼 같은 다양한 구성
04:04.960 --> 04:07.090
요소를 구성하는 거죠
04:07.300 --> 04:10.180
하지만 이건 여전히 아주 간단해요
04:10.180 --> 04:11.470
영어처럼 들려요
04:11.470 --> 04:13.030
무슨 일인지 뻔하죠
04:13.030 --> 04:19.330
여기서 일어나는 모든 일을 볼 수 있어요 읽을 수 있길 바라요
04:19.330 --> 04:26.740
이걸 이용해 더 복잡한 채팅이나 더 복잡한 UI를 만들 수 있어요
04:26.740 --> 04:36.850
이 배경으로 이걸 실행∙∙∙ 실행되고 있어요 불러오죠
04:37.240 --> 04:41.650
새 비서와 대화하는 모습이네요
04:41.680 --> 04:43.300
한번 해 보죠
04:47.740 --> 04:48.280
안녕하세요
04:48.280 --> 04:49.630
무엇을 도와드릴까요?
04:51.040 --> 04:52.090
맘에 들어요?
04:52.240 --> 04:53.650
우리에게 말을 걸었어요
04:53.740 --> 04:54.520
됐어요
04:54.520 --> 04:56.620
에이전트 사용은 처음이죠
04:56.620 --> 05:04.420
오디오를 만드는 전문 모델이 있었는데 그걸 챗봇과 통합해서 우리에게
05:04.420 --> 05:07.510
말을 걸 수 있게 했어요
05:15.760 --> 05:17.020
탁월한 선택이에요
05:17.080 --> 05:20.410
런던 왕복 비행기 표 가격을 알고 싶으세요?
05:22.930 --> 05:24.220
됐어요
05:24.250 --> 05:27.220
재미있다고 해두죠
05:34.240 --> 05:35.710
잠시 멈추죠
05:43.090 --> 05:44.080
시작할게요
05:48.040 --> 05:51.220
런던 왕복 항공권은 799달러예요
05:53.120 --> 05:54.920
다 됐어요
05:54.950 --> 05:58.340
런던 왕복 항공권은 799달러예요
05:58.340 --> 06:00.290
이미지가 나오네요
06:00.290 --> 06:04.220
정말 장관이에요
06:04.370 --> 06:06.290
런던 버스예요
06:06.290 --> 06:07.700
빅벤도 있어요
06:07.700 --> 06:10.400
다리도 있어요
06:10.400 --> 06:11.690
06:11.720 --> 06:14.420
저기 택시가 보여요
06:14.450 --> 06:18.950
여러 이미지의 멋진 몽타주예요
06:19.160 --> 06:26.570
그래서 저는 이것이 아주 흥미롭다고 생각합니다 소량의 코드만으로 무엇을 이룰 수 있는지 보여주는
06:26.570 --> 06:28.280
훌륭한 예죠
06:28.520 --> 06:37.550
다중 모듈 앱을 소개합니다 비행기의 인공지능 보조와 통신할
06:37.550 --> 06:48.050
때 음성과 이미지가 포함된 다중 모듈 에이전트 프레임워크죠
06:48.140 --> 06:49.280
수고했어요
06:49.310 --> 06:52.790
이번 주의 도전과 마무리에서 다시 만나요

424
week5/community-contributions/subtitles/srts/59167015/en_US.srt

@ -0,0 +1,424 @@
WEBVTT
00:00.800 --> 00:05.960
Welcome back to Jupyter Lab and welcome to Day Five's Lab.
00:05.960 --> 00:12.020
And this is going to be lots of creativity and hopefully lots of entertainment.
00:12.020 --> 00:16.910
So to start with I have copied the day four Jupyter Lab.
00:17.030 --> 00:19.370
And I've duplicated that.
00:19.370 --> 00:20.420
And then I've extended it.
00:20.420 --> 00:23.570
So everything above where I am now is just a repeat of day four.
00:23.600 --> 00:31.670
It creates the AI assistant for our airline, called FlightAI or something like that, and arms it
00:31.670 --> 00:33.590
with a tool to be able to get ticket prices.
00:33.590 --> 00:39.140
All of that is already there and I've executed it ready for our showtime today.
00:39.140 --> 00:40.820
We're going to go multi-modal.
00:40.820 --> 00:46.250
We're going to use Dall-E three, which is the image generation model that sits behind GPT four.
00:46.760 --> 00:48.800
We're going to use it to make some images.
00:48.800 --> 00:52.790
And let's start by putting it into a function called artist.
00:52.790 --> 00:57.770
Before that, there are two, uh, service announcements I should make.
00:57.950 --> 01:03.830
Uh, first of all, I should point out that the price associated with generating an image is not tiny.
01:03.880 --> 01:10.150
Everything that we've done so far, I hope, has had a de minimis price in the fractions of a cent.
01:10.300 --> 01:16.090
Unless you've been generating tons of lengthy brochures, you have not racked up a significant bill
01:16.090 --> 01:17.830
from running this course so far.
01:17.950 --> 01:21.880
But now we are doing something that's slightly more on the radar.
01:21.910 --> 01:25.420
Each image that we generate will cost $0.04.
01:25.450 --> 01:30.700
Now, I put it to you that when you see these images, you will agree that they are well worth $0.04
01:30.730 --> 01:31.360
each.
01:31.570 --> 01:34.720
And they are super creative and high value.
01:34.720 --> 01:35.590
And I love them.
01:35.590 --> 01:37.630
So I think it is money well spent.
01:37.630 --> 01:41.650
But I do want to inform you of that so that you can decide whether you want to spend your $0.04 each
01:41.650 --> 01:42.310
time.
01:42.700 --> 01:49.120
Uh, the other thing to mention is that there is a little bit of a, uh, question here: there's a
01:49.120 --> 01:56.770
point about whether or not one should use the term LLM when referring to image generation and
01:56.770 --> 01:58.270
audio generation and the like.
01:58.300 --> 01:59.290
Text to audio.
01:59.320 --> 02:03.400
Because of course, these are not large language models sitting behind the scenes.
02:03.430 --> 02:10.090
Now, what tends to happen these days is that people use LLM as a bit of a general term for the models
02:10.090 --> 02:12.280
that sit behind gen AI systems.
02:12.280 --> 02:19.450
So actually, in practice, I think this very much is part of the skill set and toolkit of an LLM engineer.
02:19.450 --> 02:23.800
But I should mention that, of course, strictly speaking, these aren't language models.
02:23.800 --> 02:30.730
These are image models and audio models that we'll be playing with right now as we add them to our agent
02:30.730 --> 02:31.630
framework.
02:31.750 --> 02:34.720
Anyways, with that preamble, let's get on with it.
02:34.720 --> 02:39.220
So we start by importing some useful image libraries.
02:39.220 --> 02:40.420
Well, the first one isn't.
02:40.570 --> 02:44.260
The first two aren't image libraries, but some, uh, utilities.
02:44.260 --> 02:51.520
And then the Python image library is going to be very useful for us, a very handy common library.
02:51.760 --> 03:00.820
Uh, so the next thing we do is we're going to write a function called artist and artist calls OpenAI
03:00.850 --> 03:03.520
dot images dot generate.
03:03.520 --> 03:09.460
So it's a very consistent style that you're used to OpenAI images generate.
03:09.460 --> 03:11.520
We pass in the name of a model.
03:11.520 --> 03:13.860
In this case, the model is Dall-E three.
03:13.890 --> 03:16.650
You could also try Dall-E two, its predecessor.
03:16.680 --> 03:18.480
The images are less awesome.
03:18.480 --> 03:19.440
It's a bit cheaper.
03:19.440 --> 03:24.450
I seem to remember it's about $0.02 rather than $0.04, so it's not massively cheaper and in my opinion
03:24.450 --> 03:26.160
well worth the extra $0.02.
03:26.190 --> 03:27.750
Stick with Dall-E three.
03:27.780 --> 03:32.400
We give it a prompt and this isn't now a clever list of dictionaries.
03:32.400 --> 03:33.360
It's just text.
03:33.360 --> 03:39.240
And in this case, the prompt I'm suggesting here is, we say, an image representing a vacation in
03:39.240 --> 03:46.980
city, showing tourist spots and everything unique about city in a vibrant pop art style.
03:46.980 --> 03:50.250
We give it a size that is the smallest size.
03:50.250 --> 03:53.070
Dall-E three will do, Dall-E two will go much smaller.
03:53.250 --> 03:58.680
Um and Dall-E three also does two larger sizes in a portrait and landscape format.
03:58.740 --> 04:00.870
Just google it if you'd like to know those dimensions.
04:00.870 --> 04:02.400
If you'd like to try those images.
04:02.430 --> 04:04.260
We just want one image back.
04:04.260 --> 04:06.210
We say we want this format.
04:06.450 --> 04:12.840
Back comes something in this, um, base64 encoded format.
04:12.840 --> 04:20.040
We then decode that into bytes, and then we then create a bytes IO object on those bytes, which we
04:20.040 --> 04:26.850
can then pass in to the image dot open function, and that will return an image for us.
04:26.850 --> 04:28.320
Let's execute that.
04:28.320 --> 04:30.300
And now let's give it a try.
04:30.330 --> 04:35.040
So I'm going to say image equals artist.
04:36.870 --> 04:38.940
And what shall we say New York City.
04:42.660 --> 04:50.520
And then display image is the Jupyter way of then getting that to show.
04:50.550 --> 04:53.400
Let's run that or you're seeing one I ran already there.
04:53.400 --> 04:56.790
Sorry it's not that quick, but look how amazing that is.
04:56.940 --> 04:58.380
Uh, you're already getting.
04:58.380 --> 05:00.750
I'm spoiling you by showing you one right away.
05:00.750 --> 05:01.950
This is what it looks like.
05:01.950 --> 05:07.710
It's generating a second one above you get to see the Statue of Liberty, a few different Empire State
05:07.710 --> 05:16.200
buildings, some planes in the sky, and then a sort of image of Times Square with lots of signs and
05:16.200 --> 05:18.510
with New York spelled out there. A taxi.
05:18.540 --> 05:19.140
Look at that.
05:19.170 --> 05:20.610
A yellow New York taxi.
05:20.640 --> 05:21.690
And Coca-Cola.
05:21.690 --> 05:23.040
And a hot dog.
05:23.070 --> 05:25.050
A very New York iconic thing.
05:25.080 --> 05:26.550
Fantastic.
05:26.580 --> 05:29.190
Meanwhile, it's built another image for us here.
05:29.190 --> 05:29.670
And.
05:29.670 --> 05:31.020
Wow, look at this one.
05:31.020 --> 05:32.340
It's different.
05:32.340 --> 05:33.120
It's great.
05:33.120 --> 05:35.430
It's got a big jet over here.
05:35.430 --> 05:40.620
It's got the Empire State Building, of course, multiple Empire State buildings, Statues of Liberty.
05:40.620 --> 05:47.280
And it's got again the sort of thriving shops and taxi in the foreground like that, an iconic New York
05:47.310 --> 05:48.870
taxi and a hot dog again.
05:49.080 --> 05:54.330
Uh, so the thing to mention is that these images, they're so creative and they're so different, we've
05:54.330 --> 05:59.790
got two now that we can see the one I did a moment ago and this one here, uh, and you can see how
05:59.790 --> 06:01.470
great they look.
06:02.430 --> 06:03.060
All right.
06:03.060 --> 06:05.220
Well, I hope that you were entertained by that.
06:05.220 --> 06:10.920
And by all means, may I suggest you spend some $0.04 and generate a few images for yourself.
06:10.920 --> 06:12.180
They're great.
06:12.690 --> 06:14.940
All right, let's add one more function.
06:14.940 --> 06:20.450
We're going to make a function that uses OpenAI's speech to generate some audio.
06:20.450 --> 06:25.670
So we're going to use a couple of utilities here with a library called Pydub.
06:25.670 --> 06:26.630
That's very useful.
06:26.840 --> 06:29.300
We're going to write a function called talker.
06:29.300 --> 06:33.860
And talker is going to call OpenAI dot audio dot speech dot create.
06:33.860 --> 06:39.470
So if we look back up the image generation was OpenAI images generate.
06:39.470 --> 06:46.760
And for audio it's a case of uh OpenAI audio dot speech dot create.
06:46.760 --> 06:48.500
We pass in a model.
06:48.740 --> 06:56.300
Um, and this is the model we're using, TTS one. TTS stands for text to speech and is the
06:56.330 --> 07:00.080
kind of model that we're going for. We supply a voice.
07:00.080 --> 07:01.880
In this case we're going to try the voice.
07:01.910 --> 07:02.750
Onyx.
07:02.750 --> 07:04.880
There's something like eight different voices to try. Again,
07:04.910 --> 07:06.800
you can Google to see what they are.
07:06.800 --> 07:10.310
And we pass in the message that this function was called with.
07:10.310 --> 07:16.520
From what comes back, we again create a BytesIO object to represent those bytes.
07:16.520 --> 07:25.330
And then we use this AudioSegment class, creating it from the audio stream as a file, and get
07:25.330 --> 07:27.250
it to play that audio.
07:27.250 --> 07:31.180
So let's create that function and then let's say talker.
07:33.070 --> 07:35.470
Well hi there.
07:40.150 --> 07:41.110
Well hi there.
07:42.430 --> 07:43.240
There we go.
07:43.270 --> 07:44.500
As simple as that.
07:44.830 --> 07:47.410
Uh, let's see how another voice sounds.
07:47.410 --> 07:50.320
Let's see how alloy sounds.
07:50.470 --> 07:52.270
Let's put alloy in there.
07:55.000 --> 07:55.930
Well hi there.
07:56.860 --> 07:58.630
And that was alloy.
07:58.660 --> 08:01.180
I think we'll stick with onyx.
08:01.180 --> 08:03.070
But you can try either.
08:03.070 --> 08:09.580
And you can also put in some more there that you can experiment with and pick your favorite.
08:09.910 --> 08:10.810
All right.
08:10.810 --> 08:13.510
Well that's what we'll go with.
08:14.710 --> 08:20.920
Uh and now let's talk about the agent framework.
08:20.950 --> 08:23.650
I think we will break for the next video.
08:23.650 --> 08:26.200
And that's where we'll take on our full agent framework.
08:26.230 --> 08:27.280
See you then.

391
week5/community-contributions/subtitles/srts/59167015/ja_JP.srt

@ -0,0 +1,391 @@
WEBVTT
00:00.800 --> 00:05.960
Jupyter Labへようこそ、 そして5日目のラボへようこそ。
00:05.960 --> 00:12.020
そして、 これはたくさんの創造性と、 できればたくさんのエンターテインメントになるだろう。
00:12.020 --> 00:16.910
そこでまず、 4日目のJupyter Labをコピーしてみた。
00:17.030 --> 00:19.370
そして、 私はそれを再現した。
00:19.370 --> 00:20.420
そして、 それを延長したんだ。
00:20.420 --> 00:23.570
だから、 今いるところより上は、 すべて4日目の繰り返しなんだ。
00:23.600 --> 00:33.590
フライトAIと呼ばれる航空会社のAIアシスタントを作成し、 航空券の価格を知ることができるツールを持たせる。
00:33.590 --> 00:39.140
そのすべてがすでにあり、 今日のショータイムのために準備してきた。
00:39.140 --> 00:40.820
私たちはマルチモーダルを目指す。
00:40.820 --> 00:46.250
私たちは、 GPT 4の後ろに位置するイメージ生成モデルであるDall-E 3を使うつもりです。
00:46.760 --> 00:48.800
これを使って画像を作るんだ。
00:48.800 --> 00:52.790
そして、 それをartistという関数に入れることから始めよう。
00:52.790 --> 00:57.770
その前に、 2つ、 ええと、 サービスアナウンスがあるんだ。
00:57.950 --> 01:03.830
ええと、 まず最初に言っておかなければならないのは、 画像生成にかかる料金は決して小さなものではないということだ。
01:03.880 --> 01:10.150
私たちがこれまでやってきたことはすべて、 1セントの何分の1という最小限の価格だったと思う。
01:10.300 --> 01:17.830
長大なパンフレットを大量に作成しているのでなければ、 このコースの運営で多額の請求が来ることはないだろう。
01:17.950 --> 01:21.880
でも、 今はもう少しレーダーに近いことをやっている。
01:21.910 --> 01:25.420
私たちが生成する画像は1枚につき0.04ドルかかる。
01:25.450 --> 01:31.360
さて、 これらの画像をご覧になれば、 1枚0.04ドルの価値は十分にあるとご納得いただけるだろう。
01:31.570 --> 01:34.720
しかも、 超クリエイティブで価値が高い。
01:34.720 --> 01:35.590
私は彼らを愛している。
01:35.590 --> 01:37.630
だから、 私は十分なお金を使ったと思う。
01:37.630 --> 01:42.310
でも、 毎回0.04ドルを使うかどうかを決められるよう、 そのことはお知らせしておきたい。
01:42.700 --> 01:49.120
もうひとつ言っておくと、 画像生成や音声生成のようなものにLMという言葉を使うべきかどうかという点で、
01:49.120 --> 01:56.770
少し、 ええと、 議論の余地がある。
01:56.770 --> 01:58.270
ええと、 少しだけね。
01:58.300 --> 01:59.290
テキストから音声へ。
01:59.320 --> 02:03.400
というのも、 もちろん、 これらは舞台裏にある大規模な言語モデルではないからだ。
02:03.430 --> 02:12.280
さて、 最近起こりがちなのは、 人々はLMを、 一般的なAIシステムの背後にあるモデルの総称として使っているということだ。
02:12.280 --> 02:19.450
だから実際には、 これはLMエンジニアのスキルセットとツールキットの一部だと思う。
02:19.450 --> 02:23.800
しかし、 もちろん、 厳密に言えば、 これらは言語モデルではないということは言っておかなければならない。
02:23.800 --> 02:31.630
これらは画像モデルとオーディオモデルで、 これからエージェントフレームワークに追加して遊ぶことになる。
02:31.750 --> 02:34.720
ともあれ、 前置きはこれくらいにして、 さっそく本題に入ろう。
02:34.720 --> 02:39.220
そこで、 便利な画像ライブラリをいくつかインポートすることから始めよう。
02:39.220 --> 02:40.420
まあ、 最初のは違うけどね。
02:40.570 --> 02:44.260
最初の2つはイメージライブラリではなく、 いくつかのユーティリティだ。
02:44.260 --> 02:51.520
そして、 Pythonのイメージ・ライブラリーは、 私たちにとって非常に便利な共通ライブラリーです。
02:51.760 --> 03:03.520
次にやることは、 artistという関数を書いて、 artistがOpenAI dot images dot generateを呼び出すことだ。
03:03.520 --> 03:09.460
つまり、 おなじみの非常に一貫したスタイル、 OpenAI images generateだ。
03:09.460 --> 03:11.520
モデル名を渡す。
03:11.520 --> 03:13.860
この場合、 モデルはDall-E 3である。
03:13.890 --> 03:16.650
その前身である『Dall-E two』を試してみるのもいいだろう。
03:16.680 --> 03:18.480
画像はそれほど素晴らしいものではない。
03:18.480 --> 03:19.440
もう少し安い。
03:19.440 --> 03:26.160
たしか0.04ドルではなく0.02ドルだったと記憶している。 大幅に安いわけではないし、 私としては0.02ドル余分に払う価値は十分にあると思う。
03:26.190 --> 03:27.750
Dall-E 3にこだわる。
03:27.780 --> 03:32.400
私たちはプロンプトを与え、 これは今、 辞書の巧妙なリストではない。
03:32.400 --> 03:33.360
ただのテキストだ。
03:33.360 --> 03:39.240
この場合、 私が提案するプロンプトは、 例えば、 都市での休暇を表現するイメージで、 観光スポットや都市に関するあらゆるユニークなものを、
03:39.240 --> 03:46.980
活気に満ちたポップアート・スタイルで表現するものだ。
03:46.980 --> 03:50.250
最小のサイズを与える。
03:50.250 --> 03:53.070
Dall-E 3なら大丈夫、 Dall-E 2ならもっと小さくなる。
03:53.250 --> 03:58.680
UmとDall-E threeは、 縦型と横型の2つの大きなサイズも用意している。
03:58.740 --> 04:00.870
その寸法を知りたければググればいい。
04:00.870 --> 04:02.400
もしこれらの画像を試してみたいなら。
04:02.430 --> 04:04.260
私たちはただ1枚の画像を返してほしいだけなのだ。
04:04.260 --> 04:06.210
私たちはこの形式を望んでいると言っている。
04:06.450 --> 04:12.840
Base64でエンコードされたフォーマットで戻ってくる。
04:12.840 --> 04:20.040
そして、 それをバイトにデコードし、 そのバイトでバイトIOオブジェクトを作り、
04:20.040 --> 04:26.850
それをimage dot open関数に渡すと、 画像を返してくれる。
04:26.850 --> 04:28.320
それを実行しよう。
04:28.320 --> 04:30.300
そして今、 それを試してみよう。
04:30.330 --> 04:35.040
だから、 私はイメージ=アーティストと言うつもりだ。
04:36.870 --> 04:38.940
そしてニューヨーク・シティ。
04:42.660 --> 04:50.520
そして、 画像を表示させるのがジュピター流だ。
04:50.550 --> 04:53.400
それを実行しましょう、 あるいは私がすでに実行したものがそこにあります。
04:53.400 --> 04:56.790
そんなに早くないのは残念だけど、 見てよ、 この素晴らしさを。
04:56.940 --> 04:58.380
ええと、 もうわかっていますよね。
04:58.380 --> 05:00.750
さっそく1つお見せしましょう。
05:00.750 --> 05:01.950
こんな感じだ。
05:01.950 --> 05:07.710
自由の女神、 数種類のエンパイアステートビル、 空に浮かぶ飛行機、
05:07.710 --> 05:18.510
そしてたくさんの看板とニューヨークのタクシーが綴られたタイムズスクエアのイメージのようなものを見ることができる。
05:18.540 --> 05:19.140
あれを見ろ。
05:19.170 --> 05:20.610
黄色いニューヨークのタクシー。
05:20.640 --> 05:21.690
そしてコカ・コーラ。
05:21.690 --> 05:23.040
それとホットドッグ。
05:23.070 --> 05:25.050
まさにニューヨークを象徴するものだ。
05:25.080 --> 05:26.550
ファンタスティックだ。
05:26.580 --> 05:29.190
その一方で、 ここでまた新たなイメージを構築してくれた。
05:29.190 --> 05:29.670
そして
05:29.670 --> 05:31.020
うわあ、 これを見てよ。
05:31.020 --> 05:32.340
違うんだ。
05:32.340 --> 05:33.120
素晴らしいよ。
05:33.120 --> 05:35.430
こっちには大きなジェット機がある。
05:35.430 --> 05:40.620
エンパイア・ステート・ビルはもちろん、 複数のエンパイア・ステート・ビルや自由の女神像がある。
05:40.620 --> 05:48.870
ニューヨークを象徴するタクシーとホットドッグ。
05:49.080 --> 05:54.330
この画像はとてもクリエイティブで、 それぞれ違っていて、
05:54.330 --> 06:01.470
さっきの画像とこの画像の2つをご覧ください。
06:02.430 --> 06:03.060
分かった。
06:03.060 --> 06:05.220
まあ、 楽しんでもらえたなら幸いだ。
06:05.220 --> 06:10.920
そして、 ぜひとも0.04ドルを使って、 自分用にいくつか画像を生成してみてほしい。
06:10.920 --> 06:12.180
彼らは素晴らしい。
06:12.690 --> 06:14.940
よし、 もうひとつ機能を追加しよう。
06:14.940 --> 06:20.450
OpenAIの音声を使って音声を生成する関数を作ります。
06:20.450 --> 06:25.670
ここでは、 Pi Dubと呼ばれるライブラリを使って、 いくつかのユーティリティを使うことにしよう。
06:25.670 --> 06:26.630
とても役に立つよ。
06:26.840 --> 06:29.300
これからtalkerという関数を書きます。
06:29.300 --> 06:33.860
トーカーはOpenAI dot audio dot speech dot createを呼び出す。
06:33.860 --> 06:39.470
つまり、 画像生成はOpenAIの画像生成だったのだ。
06:39.470 --> 06:46.760
オーディオについては、 OpenAIのオーディオ・ドット・スピーチ・ドット・クリエイトのケースだ。
06:46.760 --> 06:48.500
モデルを渡す。
06:48.740 --> 06:56.300
TTSはテキスト・トゥ・スピーチ(text to speech)の略で、
06:56.330 --> 07:00.080
音声を供給するモデルです。
07:00.080 --> 07:01.880
今回は声を試してみよう。
07:01.910 --> 07:02.750
オニキス
07:02.750 --> 07:04.880
試せる声は8種類ほどある。
07:04.910 --> 07:06.800
それが何かはググればわかる。
07:06.800 --> 07:10.310
そして、 この関数が呼ばれたことを渡す。
07:10.310 --> 07:16.520
戻ってきたバイトで、 それらのバイトを表すbytes IOオブジェクトを再び作成する。
07:16.520 --> 07:27.250
そして、 このオーディオ・セグメントを使って、 ファイルとオーディオ・ストリームからオーディオ・セグメントを作成し、 そのオーディオを再生する。
07:27.250 --> 07:31.180
では、 その関数を作ってトーカーとしよう。
07:33.070 --> 07:35.470
やあ、 こんにちは。
07:40.150 --> 07:41.110
やあ、 こんにちは。
07:42.430 --> 07:43.240
これでよし。
07:43.270 --> 07:44.500
簡単なことだ。
07:44.830 --> 07:47.410
ええと、 別の声がどう聞こえるか見てみよう。
07:47.410 --> 07:50.320
アロイがどう聞こえるか聞いてみよう。
07:50.470 --> 07:52.270
そこにアロイを入れてみよう。
07:55.000 --> 07:55.930
やあ、 こんにちは。
07:56.860 --> 07:58.630
今のがアロイだった。
07:58.660 --> 08:01.180
オニキスにこだわると思う。
08:01.180 --> 08:03.070
しかし、 どちらでも試すことができる。
08:03.070 --> 08:09.580
そして、 そこにさらにいくつか入れて、 試してみて好きなものを選ぶこともできる。
08:09.910 --> 08:10.810
分かった。
08:10.810 --> 08:13.510
まあ、 それで行こう。
08:14.710 --> 08:20.920
さて、 次はエージェントのフレームワークについて話そう。
08:20.950 --> 08:23.650
次のビデオまで休憩しよう。
08:23.650 --> 08:26.200
そして、 そこで私たちは完全なエージェントの枠組みを手に入れることになる。
08:26.230 --> 08:27.280
ではまた

418
week5/community-contributions/subtitles/srts/59167015/ko_KR.srt

@ -0,0 +1,418 @@
WEBVTT
00:00.800 --> 00:05.960
주피터 랩에 잘 오셨습니다 5일 차 랩에도요
00:05.960 --> 00:12.020
창의력이 많이 발휘될 거고 오락성도 많으면 좋겠어요
00:12.020 --> 00:16.910
우선 4일째의 주피터 연구소를 모사했어요
00:17.030 --> 00:19.370
그걸 복사했어요
00:19.370 --> 00:20.420
그리고 확장했어요
00:20.420 --> 00:23.570
지금 여기 위로는 나흘째의 반복이에요
00:23.600 --> 00:31.670
우리 항공사의 AI 보조를 만들었어요 비행기 AI라고 불렀죠 그리고 비행기 표 가격을 알아낼
00:31.670 --> 00:33.590
도구를 장착했어요
00:33.590 --> 00:39.140
모든 게 이미 갖춰져 있고 오늘 공연에 맞게 완성했어요
00:39.140 --> 00:40.820
다중 모듈로 갈 거예요
00:40.820 --> 00:46.250
달레3을 사용할 건데요 GPT 4 뒤에 있는 이미지 생성 모델이에요
00:46.760 --> 00:48.800
이미지를 만드는 데 사용할 거예요
00:48.800 --> 00:52.790
아티스트라는 함수에 넣는 것으로 시작하죠
00:52.790 --> 00:57.770
그 전에 두 가지 서비스 공지를 해야 해요
00:57.950 --> 01:03.830
먼저 이미지 생성에 드는 비용은 아주 적지 않아요
01:03.880 --> 01:10.150
지금까지 한 모든 게 1센트 미만이라도 적게 들었으면 좋겠어요
01:10.300 --> 01:16.090
당신이 엄청나게 긴 책자를 만든 게 아니라면 지금까지 이 과정을 진행했다고 해서
01:16.090 --> 01:17.830
큰돈을 번 건 아니에요
01:17.950 --> 01:21.880
하지만 지금은 좀 더 눈에 띄는 일을 하고 있어요
01:21.910 --> 01:25.420
우리가 생성하는 이미지마다 0.04달러가 들어요
01:25.450 --> 01:31.360
이 사진들을 보시면 각각 0.04달러의 가치가 충분하다는 걸 아실 거예요
01:31.570 --> 01:34.720
아주 창의적이고 가치도 높아요
01:34.720 --> 01:35.590
정말 좋아요
01:35.590 --> 01:37.630
돈을 잘 쓴 것 같아요
01:37.630 --> 01:42.310
하지만 그걸 알려드리고 싶어요 매번 0.04달러를 쓸지 말지 결정하시라고요
01:42.700 --> 01:49.120
언급하고 싶은 다른 것은 약간 그러니까 이미지 생성이나 음향
01:49.120 --> 01:56.770
생성 같은 것을 말할 때 LM이라는 용어를 써야 할지에 대한 논점이 있어요
01:56.770 --> 01:58.270
약간요
01:58.300 --> 01:59.290
문자로 오디오를 연결해요
01:59.320 --> 02:03.400
물론 이건 무대 뒤에 있는 대형 언어 모델이 아니니까요
02:03.430 --> 02:10.090
요즘 사람들은 LM을 일반 용어로 사용합니다 인공지능 시스템 뒤에 있는
02:10.090 --> 02:12.280
모델을 일컫는 비트죠
02:12.280 --> 02:19.450
실제로 LM 엔지니어가 갖춰야 할 기술과 도구라고 생각해요
02:19.450 --> 02:23.800
하지만 엄밀히 말하면 이건 언어 모델이 아니에요
02:23.800 --> 02:30.730
이것들은 이미지 모델과 오디오 모델입니다 에이전트 프레임워크에 추가할 때 실행할
02:30.730 --> 02:31.630
수 있죠
02:31.750 --> 02:34.720
어쨌든 서문은 됐고, 이제 시작하죠
02:34.720 --> 02:39.220
유용한 이미지 라이브러리 가져오기로 시작하죠
02:39.220 --> 02:40.420
첫 번째는 아니에요
02:40.570 --> 02:44.260
처음 두 개는 이미지 라이브러리가 아니라 일부 유틸리티예요
02:44.260 --> 02:51.520
파이썬 이미지 라이브러리는 아주 유용합니다 아주 편리한 공통 라이브러리예요
02:51.760 --> 03:00.820
다음으로 할 일은 아티스트라는 함수를 작성하는 겁니다 아티스트는 OpenAI.Nagees.Nageate를
03:00.850 --> 03:03.520
호출하죠
03:03.520 --> 03:09.460
OpenAI 이미지 생성에 사용되는 스타일이 아주 일관적이죠
03:09.460 --> 03:11.520
모델 이름을 통과해요
03:11.520 --> 03:13.860
이 경우에는 모델이 달어리 쓰리예요
03:13.890 --> 03:16.650
그 전신인 Dall-E 2도 한번 써 보세요
03:16.680 --> 03:18.480
이미지는 조금 덜 멋져요
03:18.480 --> 03:19.440
비트가 좀 더 저렴해요
03:19.440 --> 03:24.450
0.04달러가 아니라 0.02달러 정도였던 것 같아요 엄청나게 싼 건 아니라서 제 생각엔 0.02달러를
03:24.450 --> 03:26.160
더 쓸 가치가 충분해요
03:26.190 --> 03:27.750
Dall-E 3을 쓰세요
03:27.780 --> 03:32.400
프롬프트를 주는데, 이번에는 딕셔너리로 된 영리한 목록이 아니죠
03:32.400 --> 03:33.360
그냥 문자예요
03:33.360 --> 03:39.240
이 경우에는 제가 제안하는 건 도시에서의 휴가를 상징하는 이미지예요
03:39.240 --> 03:46.980
관광지와 도시의 모든 특징을 생동감 넘치는 팝아트 스타일로 표현하는 거죠
03:46.980 --> 03:50.250
가장 작은 크기로 정해요
03:50.250 --> 03:53.070
이게 Dall-E 3이 지원하는 가장 작은 크기고, Dall-E 2는 훨씬 작게도 할 수 있어요
03:53.250 --> 03:58.680
Dall-E 3은 세로와 가로 형식의 더 큰 사이즈 두 가지도 지원해요
03:58.740 --> 04:00.870
크기가 궁금하면 구글에서 검색하세요
04:00.870 --> 04:02.400
이 이미지들을 시도해 보세요
04:02.430 --> 04:04.260
이미지만 있으면 돼요
04:04.260 --> 04:06.210
이 포맷을 원한다고 하죠
04:06.450 --> 04:12.840
백은 베이스64 암호 형식으로 되어 있어요
04:12.840 --> 04:20.040
바이트 단위로 디코딩하고 그 바이트 단위로 바이트 IO 객체를 생성합니다
04:20.040 --> 04:26.850
그리고 나서 이미지.오픈 함수로 이동합니다 이미지를 반환해 주죠
04:26.850 --> 04:28.320
실행해보죠
04:28.320 --> 04:30.300
이제 한번 해 보죠
04:30.330 --> 04:35.040
이미지 = 아티스트라고 하죠
04:36.870 --> 04:38.940
뭐라고 해야 할까요? 뉴욕시
04:42.660 --> 04:50.520
이미지를 디스플레이하는 건 그걸 보여주는 주피터 방식이죠
04:50.550 --> 04:53.400
실행해보죠, 아니면 이미 실행한 게 보이나요
04:53.400 --> 04:56.790
시간이 좀 걸리지만 정말 놀랍죠?
04:56.940 --> 04:58.380
이미 먹고 있잖아요
04:58.380 --> 05:00.750
바로 보여 드려서 버릇 나빠지게 해 드렸어요
05:00.750 --> 05:01.950
이렇게 생겼어요
05:01.950 --> 05:07.710
두 번째 이미지를 위에 만들 거예요 자유의 여신상과 엠파이어
05:07.710 --> 05:16.200
스테이트 빌딩 몇 개와 비행기들이 보이고 타임스 스퀘어 이미지와 많은 간판과 뉴욕 이미지가
05:16.200 --> 05:18.510
택시를 나타내죠
05:18.540 --> 05:19.140
보세요
05:19.170 --> 05:20.610
노란 뉴욕 택시요
05:20.640 --> 05:21.690
코카콜라도요
05:21.690 --> 05:23.040
핫도그도 있어요
05:23.070 --> 05:25.050
뉴욕의 상징이죠
05:25.080 --> 05:26.550
환상적이에요
05:26.580 --> 05:29.190
한편, 다른 이미지를 구축했어요
05:29.190 --> 05:29.670
그리고요
05:29.670 --> 05:31.020
이것 좀 봐요
05:31.020 --> 05:32.340
달라요
05:32.340 --> 05:33.120
좋아요
05:33.120 --> 05:35.430
여기에 큰 제트기가 있어요
05:35.430 --> 05:40.620
엠파이어 스테이트 빌딩도 있고 여러 채와 자유의 여신상도 있어요
05:40.620 --> 05:47.280
앞에는 번화한 상점들과 택시가 보이고 뉴욕의 상징적인 택시와 핫도그가
05:47.310 --> 05:48.870
다시 등장하죠
05:49.080 --> 05:54.330
이 사진들을 보면 정말 창의적이고 색다르다는 걸 알 수 있어요
05:54.330 --> 05:59.790
두 장이 있는데 조금 전에 찍은 사진과 여기 있는 사진을 보면 얼마나
05:59.790 --> 06:01.470
멋진지 알 수 있죠
06:02.430 --> 06:03.060
좋아요
06:03.060 --> 06:05.220
재미있게 보셨길 바라요
06:05.220 --> 06:10.920
0.04달러 정도 써서 이미지 몇 개 만들어 보는 게 어때요?
06:10.920 --> 06:12.180
멋져요
06:12.690 --> 06:14.940
함수를 하나 더 추가할게요
06:14.940 --> 06:20.450
OpenAI의 음성을 이용해 오디오를 생성하는 함수를 만들 거예요
06:20.450 --> 06:25.670
파이덥이라는 라이브러리를 가진 몇 가지 유틸리티들을 이용할 거예요
06:25.670 --> 06:26.630
아주 유용하죠
06:26.840 --> 06:29.300
토커라는 함수를 쓸 거예요
06:29.300 --> 06:33.860
토커는 OpenAI.audio.speech.create를 호출할 거예요
06:33.860 --> 06:39.470
이미지 생성은 OpenAI 이미지가 생성된 것인데요
06:39.470 --> 06:46.760
오디오는 OpenAI.audio.speech.create를 쓰죠
06:46.760 --> 06:48.500
모형을 통과시키죠
06:48.740 --> 06:56.300
이게 우리가 쓰는 모델이에요 TTS는 텍스트에서 음성으로 전환하는 모델이죠
06:56.330 --> 07:00.080
우리가 쓰는 모델은 목소리를 제공해요
07:00.080 --> 07:01.880
이번에는 목소리를 시험해 보죠
07:01.910 --> 07:02.750
오닉스요
07:02.750 --> 07:04.880
시도해 볼 수 있는 목소리가 8개 정도 있어요
07:04.910 --> 07:06.800
구글로 검색하면 다 나와요
07:06.800 --> 07:10.310
이 함수라고 불리는 것을 전달하죠
07:10.310 --> 07:16.520
돌아온 결과로 다시 BytesIO 객체를 생성해 그 바이트를 나타내죠
07:16.520 --> 07:25.330
그리고 이걸 이 AudioSegment에 사용합니다 파일과 오디오 스트림에서 생성해 해당 오디오를 재생하게
07:25.330 --> 07:27.250
하죠
07:27.250 --> 07:31.180
함수를 만들고 토크커라고 하죠
07:33.070 --> 07:35.470
안녕하세요
07:40.150 --> 07:41.110
안녕하세요
07:42.430 --> 07:43.240
됐어요
07:43.270 --> 07:44.500
아주 간단해요
07:44.830 --> 07:47.410
다른 목소리는 어떤지 들어보죠
07:47.410 --> 07:50.320
앨로이는 어떤 소리인지 들어보죠
07:50.470 --> 07:52.270
앨로이를 넣어 보죠
07:55.000 --> 07:55.930
안녕하세요
07:56.860 --> 07:58.630
방금 건 앨로이였어요
07:58.660 --> 08:01.180
그냥 오닉스라고 하죠
08:01.180 --> 08:03.070
하지만 둘 다 가능해요
08:03.070 --> 08:09.580
그리고 여기에 더 넣어서 실험해보고 마음에 드는 걸 고르세요
08:09.910 --> 08:10.810
좋아요
08:10.810 --> 08:13.510
그렇게 하죠
08:14.710 --> 08:20.920
이제 에이전트 프레임워크에 대해 얘기해보죠
08:20.950 --> 08:23.650
다음 영상은 잠시 쉬죠
08:23.650 --> 08:26.200
거기서 에이전트 프레임워크를 다룰 거예요
08:26.230 --> 08:27.280
그때 봐요

73
week5/community-contributions/subtitles/srts/59169985/en_US.srt

@ -0,0 +1,73 @@
WEBVTT
00:00.680 --> 00:03.740
So I hope you enjoyed that whirlwind tour of Google Colab.
00:03.740 --> 00:08.240
Here's just a little screenshot example of how easy it is to use it.
00:08.570 --> 00:10.760
You can just put in a bunch of code.
00:10.760 --> 00:16.010
This is of course, hugging face code that we're going to be getting deep into very, very soon.
00:16.010 --> 00:25.070
And in this case, I used the flux model, which is you may have noticed it was one of the top trending
00:25.070 --> 00:27.620
models when we were looking at models in hugging face.
00:27.620 --> 00:35.960
It is a text to image generation model from Black Forest that is a particularly, uh, exciting in that
00:35.960 --> 00:40.460
it's one of the really strong open source image generation models.
00:40.880 --> 00:46.370
And I prompted it with a futuristic class full of students learning AI coding in the surreal style of
00:46.370 --> 00:47.210
Dall-E.
00:47.390 --> 00:51.650
Uh, and this is what came up, which is wonderful, wonderful.
00:51.830 --> 01:00.590
Uh, and so it gives you a sense of how quickly you can use Google Colab to be, uh, working with,
01:00.590 --> 01:03.890
uh, high powered GPUs in the cloud.
01:05.480 --> 01:11.690
And with that, we, uh, take a moment to take stock of our progress.
01:11.720 --> 01:13.130
We are now ready.
01:13.130 --> 01:17.390
You are well positioned to be beginning on your open source adventure.
01:17.510 --> 01:24.200
Uh, in addition to what you could already do confidently coding with frontier APIs and building multimodal
01:24.230 --> 01:32.060
AI assistants, you can now navigate through Hugging Face and Google Colab and you are ready for action.
01:32.090 --> 01:39.950
So next time you're going to be able to run open source models, there's two different levels of API
01:39.950 --> 01:43.820
in hugging face, and you're going to understand what that means and what they are.
01:43.820 --> 01:47.750
And then we're going to start with the first of those, which is called pipelines.
01:47.750 --> 01:53.240
You're going to be able to use pipelines for a bunch of different AI tasks, including generating text,
01:53.270 --> 01:57.080
images and audio using open source models.
01:57.110 --> 01:58.220
I can't wait.

58
week5/community-contributions/subtitles/srts/59169985/ja_JP.srt

@ -0,0 +1,58 @@
WEBVTT
00:00.680 --> 00:03.740
というわけで、 Google Colabの旋風ツアーを楽しんでいただけただろうか。
00:03.740 --> 00:08.240
使い方の簡単さをスクリーンショットで紹介しよう。
00:08.570 --> 00:10.760
コードをたくさん入れるだけでいい。
00:10.760 --> 00:16.010
これはもちろん、 Hugging Faceのコードで、 もうすぐ深く掘り下げていくことになる。
00:16.010 --> 00:27.620
この場合、 私はフラックス・モデルを使用した。 これは、 ハグする顔のモデルを見ていたとき、 トップ・トレンド・モデルのひとつだったことにお気づきだろうか。
00:27.620 --> 00:40.460
これはBlack Forestのテキストから画像への生成モデルで、 オープンソースの画像生成モデルの中でも特に強力なもののひとつだ。
00:40.880 --> 00:47.210
そして私は、 『Dall-E』のシュールなスタイルでAIコーディングを学ぶ生徒でいっぱいの未来的なクラスでそれを促した。
00:47.390 --> 00:51.650
ええと、 それで出てきたのがこれです。
00:51.830 --> 01:03.890
Google Colabを使うことで、 クラウド上で高性能GPUをどれだけ早く使えるかを実感していただけると思います。
01:05.480 --> 01:11.690
そして、 私たちは......私たちの進捗状況を確認する時間を取る。
01:11.720 --> 01:13.130
準備は整った。
01:13.130 --> 01:17.390
あなたはオープンソースの冒険を始めるのにふさわしい位置にいる。
01:17.510 --> 01:24.200
フロンティアAPIを使ったコーディングやマルチモーダルAIアシスタントの構築など、 すでに自信を持ってできることに加えて、
01:24.230 --> 01:32.060
ハギング・フェイスやグーグルコラボをナビゲートできるようになり、 行動の準備は整った。
01:32.090 --> 01:39.950
だから今度オープンソースのモデルを走らせるときは、 ハグフェイスには2つの異なるレベルのAPIがあり、
01:39.950 --> 01:43.820
その意味と内容を理解することになる。
01:43.820 --> 01:47.750
まず、 パイプラインと呼ばれるものから始めよう。
01:47.750 --> 01:57.080
オープンソースのモデルを使ったテキスト、 画像、 音声の生成など、 さまざまなAIタスクにパイプラインを使えるようになる。
01:57.110 --> 01:58.220
待ちきれないよ。

70
week5/community-contributions/subtitles/srts/59169985/ko_KR.srt

@ -0,0 +1,70 @@
WEBVTT
00:00.680 --> 00:03.740
구글 콜랍의 급속한 탐방을 즐기셨길 바라요
00:03.740 --> 00:08.240
얼마나 사용이 쉬운지 스크린샷으로 보여드릴게요
00:08.570 --> 00:10.760
그냥 코드 뭉치를 넣기만 하면 돼요
00:10.760 --> 00:16.010
이건 물론 Hugging Face 코드죠 아주 곧 깊이 다룰 거예요
00:16.010 --> 00:25.070
이 경우 플럭스 모델을 사용했어요 Hugging Face에서 모델들을 볼 때 가장
00:25.070 --> 00:27.620
트렌딩되는 모델 중 하나란 걸 눈치채셨을 거예요
00:27.620 --> 00:35.960
블랙 포레스트에서 나온 텍스트 이미지 생성 모델로 아주 강력한 오픈 소스 이미지 생성
00:35.960 --> 00:40.460
모델 중 하나라는 점에서 특히 흥미롭죠
00:40.880 --> 00:47.210
초현대적인 달리의 인공지능 코딩을 배우는 학생들로 가득한 미래적인 수업으로 프롬프트했죠
00:47.390 --> 00:51.650
그래서 나온 게 이거예요 정말 훌륭하죠
00:51.830 --> 01:00.590
구글 Colab을 얼마나 빨리 사용할 수 있는지 알 수 있죠 클라우드에서 고성능
01:00.590 --> 01:03.890
GPU와 작업하기 위해서요
01:05.480 --> 01:11.690
이와 함께 진행 상황을 잠시 점검해 보죠
01:11.720 --> 01:13.130
이제 준비됐어요
01:13.130 --> 01:17.390
오픈 소스 모험을 시작하기에 좋은 위치예요
01:17.510 --> 01:24.200
개척형 API로 자신 있게 코딩하고 다중 모듈 인공지능 어시스턴트 제작을 하는
01:24.230 --> 01:32.060
것 외에도 이제는 Hugging Face와 구글 Colab을 탐색할 수 있습니다 이제 준비가 다 됐죠
01:32.090 --> 01:39.950
다음에는 오픈 소스 모델을 실행할 수 있을 거예요 Hugging Face에는 두 가지 API 레벨이 있는데 그게
01:39.950 --> 01:43.820
무슨 뜻인지 그게 뭔지 이해하게 될 거예요
01:43.820 --> 01:47.750
이제 첫 번째 것부터 시작할게요 파이프라인이라고 하죠
01:47.750 --> 01:53.240
다양한 인공지능 작업에 대해 파이프라인을 사용할 수 있습니다 오픈 소스 모델을
01:53.270 --> 01:57.080
이용해 텍스트, 이미지, 오디오 생성을 포함해서요
01:57.110 --> 01:58.220
기대되네요

127
week5/community-contributions/subtitles/srts/59169991/en_US.srt

@ -0,0 +1,127 @@
WEBVTT
00:01.010 --> 00:03.500
Okay, so that was your introduction to Hugging Face.
00:03.500 --> 00:10.010
And now I'm going to turn to a different resource available which is Google Colab.
00:10.040 --> 00:13.880
There are a bunch of different alternatives to Google Colab that all do much the same thing, and you
00:13.880 --> 00:15.470
can really use any of them.
00:15.560 --> 00:18.080
I like Colab in particular for a couple of reasons.
00:18.170 --> 00:19.910
One of them is that so many people use it.
00:20.330 --> 00:24.770
And another is it's so easy to share, but let's just talk about what it is.
00:24.800 --> 00:28.400
So Google Colab, um, it's a few things.
00:28.400 --> 00:33.440
But the reason, the main thing that it is, and what we're going to do with it, is the ability to
00:33.440 --> 00:41.480
run a Jupyter notebook like the ones we've been using, and run it in the cloud on a Google box, which
00:41.480 --> 00:50.120
will have not only a decent CPU, but also a GPU that might be high spec, uh, and uh, in addition
00:50.120 --> 00:55.670
to that, the thing that I like about it is that you can share and collaborate your Jupyter notebook
00:55.670 --> 00:59.270
with others using the same kind of familiar interface.
00:59.270 --> 01:01.940
You can use to share other types of Google Doc.
01:01.940 --> 01:08.260
So if, like me, you're very used to using Google Docs and Google Sheets and the like and sharing them
01:08.260 --> 01:13.750
and editing them and so on, then it's a very familiar experience to be able to share and collaborate
01:13.750 --> 01:17.920
on a Jupyter notebook running in Colab.
01:18.280 --> 01:21.130
Uh, and it's also integrated with other Google services.
01:21.130 --> 01:25.990
So for example, you can very easily access your own Google Drive if you have data there or something
01:25.990 --> 01:26.620
like that.
01:26.620 --> 01:29.650
So it's it's nicely part of the Google ecosystem.
01:29.650 --> 01:33.130
But as I say, there are a bunch of other offerings.
01:33.190 --> 01:38.650
And you can if you if you're using a something that is a competitor to Google Colab and you like it,
01:38.650 --> 01:40.300
then by all means use it.
01:40.300 --> 01:47.080
Uh, you may have to, uh, copy across the colab that I'll be using in sharing, but otherwise everything
01:47.110 --> 01:48.730
should work just fine.
01:49.120 --> 01:55.420
When you're using Colab, you get to choose what runtimes you're working with, what kind of box it's,
01:55.450 --> 01:58.330
what kind of VM is essentially, uh, running.
01:58.330 --> 02:04.240
There are CPU based boxes which don't have a GPU, are just CPUs.
02:04.240 --> 02:12.080
There is, uh, there are lower spec boxes running cheaper GPUs, and then there's higher spec, beefier
02:12.110 --> 02:15.740
boxes for resource intensive stuff.
02:16.190 --> 02:23.900
Everything that we do in this course can run on on up to number two, the lower spec GPU runtimes.
02:23.900 --> 02:29.900
I'm going to be trying my absolute best to keep it so that you can do everything and not spend anything,
02:29.900 --> 02:31.700
any material amount of money.
02:31.820 --> 02:37.220
Um, perhaps at this point, if you go as far as training a full deep neural network
02:37.250 --> 02:43.400
yourself, we might be starting to talk about, uh, a few dollars, but nothing that's going to break
02:43.430 --> 02:50.270
the bank, I hope, uh, unless you wish to take it a step further and train faster, do more experimenting.
02:50.300 --> 02:56.600
In which case, you certainly have the ability to opt for number three and spend a little bit more.
02:56.600 --> 03:02.270
Uh, and again, we're talking about maybe spending $10 to get a decent, like, a day or two's worth
03:02.270 --> 03:07.310
of work, uh, against a top end GPU box.
03:07.790 --> 03:11.690
So without further ado, that's a quick intro.
03:11.690 --> 03:16.400
Let's go in and take a look at Colab and get comfortable with it.

97
week5/community-contributions/subtitles/srts/59169991/ja_JP.srt

@ -0,0 +1,97 @@
WEBVTT
00:01.010 --> 00:03.500
さて、 以上がハギング・フェイスの紹介だった。
00:03.500 --> 00:10.010
そして今度は、 Google Colabという別のリソースを紹介しよう。
00:10.040 --> 00:15.470
Google Colabの代わりとなるものはたくさんあるが、 どれもほとんど同じことができる。
00:15.560 --> 00:18.080
私がColabを特に気に入っている理由はいくつかある。
00:18.170 --> 00:19.910
そのひとつは、 非常に多くの人が利用していることだ。
00:20.330 --> 00:24.770
そしてもうひとつは、 共有するのがとても簡単だということだ。
00:24.800 --> 00:28.400
グーグル・コラボは、 いくつかあるんだ。
00:28.400 --> 00:33.440
つまり、 これまで使ってきたようなJupyterノートブックを、 クラウド上のGoogleのマシンで実行できるということだ。 そのマシンにはまともなCPUだけでなく、 ハイスペックなGPUが載っていることもある。 それに加えて、
00:33.440 --> 00:59.270
気に入っているのは、 同じような使い慣れたインターフェースで、 Jupyterノートブックを他の人と共有して共同作業できることだ。
00:59.270 --> 01:01.940
他のタイプのGoogleドキュメントを共有するために使用することができます。
01:01.940 --> 01:08.260
僕のように、 GoogleドキュメントやGoogleシートなどを使い、 それらを共有したり編集したりすることに慣れている人なら、
01:08.260 --> 01:13.750
Colabで動いているJupyterノートブックを共有したり共同作業したりするのは、
01:13.750 --> 01:17.920
とても馴染みのある体験だ。
01:18.280 --> 01:21.130
それに、 グーグルの他のサービスとも統合されている。
01:21.130 --> 01:26.620
例えば、 自分のGoogleドライブにデータがあれば、 簡単にアクセスできる。
01:26.620 --> 01:29.650
つまり、 グーグルのエコシステムの一部なのだ。
01:29.650 --> 01:33.130
しかし、 私が言うように、 他にもたくさんのオファーがある。
01:33.190 --> 01:38.650
そして、 もしGoogle Colabの競合となるものを使っていて、 それが気に入ったのであれば、
01:38.650 --> 01:40.300
ぜひそれを使ってください。
01:40.300 --> 01:48.730
ええと、 僕がシェアリングで使うコラボをコピーする必要があるかもしれないけど、 それ以外はすべてうまくいくはずだよ。
01:49.120 --> 01:58.330
Colabを使うときは、 どんなランタイムを使うか、 どんなボックスか、 どんなVMを動かすかを選ぶことができる。
01:58.330 --> 02:04.240
GPUを搭載していないCPUベースのボックスもある。
02:04.240 --> 02:15.740
安価なGPUを搭載した低スペックのマシンもあれば、 リソースを大量に消費するようなマシン向けの高スペックのマシンもある。
02:16.190 --> 02:23.900
このコースで行うことはすべて、 2番までの低スペックGPUランタイムで実行できる。
02:23.900 --> 02:29.900
私は、 あなたが何でもできるように、 そして何も、 どんな物質的なお金も使わないように、
02:29.900 --> 02:31.700
全力を尽くすつもりだ。
02:31.820 --> 02:37.220
おそらくこの時点で、 もし完全なディープ・ニューラル・ネットワークを自分でトレーニングするところまでやるなら、
02:37.250 --> 02:50.270
数ドルの話になるかもしれないが、 大金がかかるようなことはないはずだ。 ただし、 さらに一歩進んでもっと速くトレーニングし、 もっと実験したいなら話は別だ。
02:50.300 --> 02:56.600
その場合、 3番を選んでもう少し出費を増やすこともできる。
02:56.600 --> 03:07.310
また、 トップエンドのGPUボックスに対して、 1日か2日分の仕事をするのに10ドルくらいかかるかもしれない。
03:07.790 --> 03:11.690
というわけで、 前置きはこれくらいにして、 以上が簡単な紹介だ。
03:11.690 --> 03:16.400
さっそくColabを見て、 慣れてみよう。

127
week5/community-contributions/subtitles/srts/59169991/ko_KR.srt

@ -0,0 +1,127 @@
WEBVTT
00:01.010 --> 00:03.500
Hugging Face 소개는 여기까지였어요
00:03.500 --> 00:10.010
이제 다른 리소스로 넘어가죠 구글 Colab이에요
00:10.040 --> 00:13.880
구글 콜랍을 대체할 수 있는 다양한 방법이 있어요 다 똑같은 기능이고
00:13.880 --> 00:15.470
아무거나 사용해도 돼요
00:15.560 --> 00:18.080
콜랍이 좋은 이유가 두 가지 있어요
00:18.170 --> 00:19.910
그중 하나는 너무 많은 사람이 쓴다는 거죠
00:20.330 --> 00:24.770
다른 하나는 공유하기 쉽다는 거예요 하지만 뭔지 얘기해보죠
00:24.800 --> 00:28.400
구글 콜랍은 몇 가지에 해당해요
00:28.400 --> 00:33.440
하지만 이 제품의 주요한 이유이자 앞으로 할 일은 우리가
00:33.440 --> 00:41.480
사용한 것처럼 Jupyter 노트북을 실행하는 겁니다 구글 박스의 클라우드에서
00:41.480 --> 00:50.120
실행하면 괜찮은 CPU뿐 아니라 고사양 GPU도 갖추게 되죠 그뿐 아니라 Jupyter
00:50.120 --> 00:55.670
노트북을 다른 제품과 공유하고 협력할 수 있다는 게 좋아요 익숙한
00:55.670 --> 00:59.270
인터페이스를 사용해서요
00:59.270 --> 01:01.940
다른 유형의 구글 문서를 공유할 때 사용할 수 있죠
01:01.940 --> 01:08.260
저처럼 구글 문서나 구글 시트를 사용하는 데 익숙하고 공유하고
01:08.260 --> 01:13.750
편집하는 데 익숙하다면 콜랍에 있는 주피터 공책에 공유하고
01:13.750 --> 01:17.920
협업하는 건 아주 익숙한 경험이죠
01:18.280 --> 01:21.130
다른 구글 서비스와도 통합돼 있어요
01:21.130 --> 01:25.990
예를 들어, 데이터가 있다면 구글 드라이브에 쉽게 접근할
01:25.990 --> 01:26.620
수 있죠
01:26.620 --> 01:29.650
구글 생태계의 멋진 일부죠
01:29.650 --> 01:33.130
하지만 말씀드렸듯이 다른 공물도 많아요
01:33.190 --> 01:38.650
여러분이 구글 Colab의 경쟁사 제품을 사용하고 있고 그게 마음에 든다면
01:38.650 --> 01:40.300
얼마든지 사용하세요
01:40.300 --> 01:47.080
공유에 사용할 콜라브를 복사해야 할 수도 있지만 그것만 빼면 다
01:47.110 --> 01:48.730
괜찮을 거예요
01:49.120 --> 01:55.420
Colab을 사용할 때는 어떤 런타임을 사용할 것인지 어떤 종류의 박스를 사용할 것인지 어떤 종류의
01:55.450 --> 01:58.330
VM을 실행할 것인지 선택해야 하죠
01:58.330 --> 02:04.240
CPU 기반의 박스는 GPU가 없고 그냥 CPU죠
02:04.240 --> 02:12.080
저렴한 GPU를 사용하는 저사양 박스도 있고 자원 집약적인 엔진을 위한
02:12.110 --> 02:15.740
고사양의 튼튼한 박스도 있죠
02:16.190 --> 02:23.900
이 코스에서 하는 모든 건 2번까지 실행할 수 있습니다 하위 사양 GPU 런타임이죠
02:23.900 --> 02:29.900
여러분이 모든 걸 해 보면서도 의미 있는 금액은 전혀 쓰지 않도록 제가
02:29.900 --> 02:31.700
최선을 다해 유지할 거예요
02:31.820 --> 02:37.220
이 시점에서 당신이 직접 심층 신경망을 훈련한다면
02:37.250 --> 02:43.400
몇 달러 정도 들겠지만 큰돈은 안 들 거예요 한 단계 더 나아가서
02:43.430 --> 02:50.270
더 빨리 훈련하고 더 많은 실험을 하고 싶지 않다면요
02:50.300 --> 02:56.600
그런 경우라면 3번을 선택해서 돈을 조금 더 쓸 수 있죠
02:56.600 --> 03:02.270
다시 말씀드리지만 최고 사양 GPU 박스로 하루 이틀 정도
03:02.270 --> 03:07.310
작업하는 데 10달러 정도 쓰는 수준이죠
03:07.790 --> 03:11.690
그럼 지체 없이 간단히 소개를 마칠게요
03:11.690 --> 03:16.400
콜랍을 살펴보고 익숙해지도록 하죠

163
week5/community-contributions/subtitles/srts/59170025/en_US.srt

@ -0,0 +1,163 @@
WEBVTT
00:00.740 --> 00:05.000
And a massive welcome back one more time to LLM engineering.
00:05.000 --> 00:10.220
We are in week three, day two and we are getting into open source models.
00:10.370 --> 00:14.960
So as a reminder you can already do frontier models back to front.
00:14.960 --> 00:16.940
You can build multimodal AI assistants.
00:16.940 --> 00:22.940
And now you're comfortable looking at the hugging face hub, looking at models and data sets and spaces.
00:22.940 --> 00:26.690
And you can run code using Google Colab.
00:27.020 --> 00:33.080
So today we're going to look at Hugging Face Transformers library and discuss the fact that there are
00:33.080 --> 00:39.950
two different types of API, two different levels that you can work with transformers at one level,
00:39.980 --> 00:46.250
the higher level API is called pipelines, and that's what we'll be working with mostly today, including
00:46.250 --> 00:50.810
generating text, images and sound using pipelines.
00:51.080 --> 00:56.570
So let's just talk for a moment about these two different API levels.
00:56.570 --> 01:02.770
So there are these two modes of interacting with the hugging face code.
01:02.770 --> 01:10.060
One of them is if you want to carry out a standard, everyday typical task in what we'd call inference,
01:10.060 --> 01:14.860
or running a model at runtime given an input to get an output.
01:14.860 --> 01:21.550
And hugging face has wonderfully packaged this up into a high level interface that's super easy to use,
01:21.550 --> 01:29.080
and that provides you with a rapid way to get going, generating text, and doing a number of everyday
01:29.080 --> 01:30.130
functions.
01:30.580 --> 01:37.360
But if you want to get deeper into the code, if you want to be looking in more detail at things like
01:37.360 --> 01:44.380
how you are tokenizing your text, which models and which parameters you're using to run a model,
01:44.380 --> 01:50.500
or if you're actually going to go as far as training and be fine tuning your own model to carry out
01:50.530 --> 01:54.010
specialist tasks with extra knowledge or nuance.
01:54.010 --> 02:00.820
At that point, you need to look at the deeper APIs, the lower level APIs, working with Tokenizers
02:00.820 --> 02:02.800
and models in Hugging Face.
02:02.830 --> 02:05.260
Today we're going to be looking at pipelines.
02:05.260 --> 02:10.420
And then after that we're going to turn to the Tokenizers and models.
02:11.080 --> 02:13.060
So what can you do with these pipelines?
02:13.060 --> 02:22.360
So essentially it allows you to take instant advantage of models on the Hugging face hub with two lines
02:22.360 --> 02:22.960
of code.
02:22.960 --> 02:24.340
It's as simple as that.
02:24.340 --> 02:28.330
And I'm going to give you lots of examples and lots of things you can take away so that you can use
02:28.330 --> 02:32.800
it yourself to carry out every day inference tasks.
02:32.800 --> 02:37.720
So one classic example, which is one of the easiest ones to start with, is what they call sentiment
02:37.720 --> 02:38.350
analysis.
02:38.380 --> 02:43.570
Given a sentence saying what is the emotion conveyed by this sentence?
02:44.380 --> 02:50.740
Uh, then classification, of course, is one of those very traditional machine learning tasks of putting
02:50.740 --> 02:52.450
things into buckets.
02:52.660 --> 03:00.160
Named entity recognition is when you can take a sentence and tag the words in that sentence as things
03:00.160 --> 03:04.630
like whether they are people or whether they are locations or things and so on.
03:04.970 --> 03:11.900
Question answering is when you have some context and you want to be able to ask questions about the
03:11.900 --> 03:13.610
context that you provide.
03:13.640 --> 03:20.210
Summarization, of course, is when you have a block of text and you want to turn it into a summary.
03:20.660 --> 03:21.710
Translation.
03:21.740 --> 03:26.870
Another classic AI task translating between one language and another.
03:26.900 --> 03:32.060
So what if I told you that all of these things can be done with two lines of code each?
03:32.330 --> 03:37.370
Hopefully you would be amazed and you will see it in a moment.
03:37.490 --> 03:43.460
There are some other things you can do as well that become perhaps slightly more advanced.
03:43.580 --> 03:45.740
Text generation actually isn't advanced at all.
03:45.740 --> 03:47.120
It's still just two lines of code.
03:47.120 --> 03:52.760
It's still super simple, and it's another thing that you will marvel at.
03:53.210 --> 03:57.470
But generating images is also very simple, as is audio.
03:57.470 --> 04:02.810
It becomes a little bit more than two lines, but it's still very simple and I can't wait to show you.
04:02.840 --> 04:04.760
I think that's enough preamble.
04:04.760 --> 04:05.720
Let's get straight to it.
04:05.720 --> 04:07.340
Let's go to Google Colab.
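The "two lines of code" claim in this lecture can be sketched as follows. This is a minimal sketch, assuming the `transformers` library (with PyTorch) is installed; no model name is given, so the pipeline falls back to its default sentiment checkpoint, which is downloaded from the Hugging Face Hub on first use.

```python
from transformers import pipeline

# Sentiment analysis in essentially two lines, as described in the lecture.
# The pipeline picks a default sentiment-analysis checkpoint and downloads
# it from the Hugging Face Hub the first time this runs.
classifier = pipeline("sentiment-analysis")
result = classifier("I absolutely love this course!")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': ...}]
```

The other tasks mentioned (named entity recognition, question answering, summarization, translation) follow the same pattern, changing only the task string passed to `pipeline(...)`.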

136
week5/community-contributions/subtitles/srts/59170025/ja_JP.srt

@ -0,0 +1,136 @@
WEBVTT
00:00.740 --> 00:05.000
そして、 LMエンジニアリングにもう一度大歓迎を。
00:05.000 --> 00:10.220
3週目、 2日目に入り、 オープンソースのモデルに入っている。
00:10.370 --> 00:14.960
つまり、 すでにフロンティア・モデルのバック・トゥ・フロントが可能なのだ。
00:14.960 --> 00:16.940
マルチモーダルなAIアシスタントを構築できる。
00:16.940 --> 00:22.940
そして今、 あなたはハグする顔のハブを見たり、 モデルやデータセットやスペースを見たりすることに快適さを感じている。
00:22.940 --> 00:26.690
また、 Google Colabを使ってコードを実行することもできる。
00:27.020 --> 00:33.080
そこで今日は、 Hugging Face Transformersライブラリを見て、 2つの異なるタイプのAPIがあること、
00:33.080 --> 00:39.950
1つのレベルでトランスフォーマーを扱うことができる2つの異なるレベルがあること、 より高いレベルのAPIはパイプラインと呼ばれ、
00:39.980 --> 00:50.810
パイプラインを使ったテキスト、 画像、 サウンドの生成など、 今日主に扱うのはこれだということを説明する。
00:51.080 --> 00:56.570
では、 この2つの異なるAPIレベルについて少し話をしよう。
00:56.570 --> 01:02.770
つまり、 ハグする顔のコードと相互作用する2つのモードがあるのだ。
01:02.770 --> 01:10.060
そのひとつは、 推論と呼ばれるような、 標準的で日常的な典型的なタスクを実行したい場合、 つまり、
01:10.060 --> 01:14.860
入力が与えられて実行時にモデルを実行して出力を得たい場合だ。
01:14.860 --> 01:21.550
そしてハギング・フェイスは、 これを素晴らしく使いやすい高レベルのインターフェイスにパッケージ化し、
01:21.550 --> 01:30.130
テキストを生成し、 多くの日常的な機能を実行するための迅速な方法を提供する。
01:30.580 --> 01:37.360
しかし、 もしあなたがコードにもっと深く入り込みたいのであれば、 どのモデルをどのようにトークン化し、 どのパラメータを使ってモデルを実行しているのか、
01:37.360 --> 01:44.380
あるいは実際にトレーニングまで行って、 専門的なタスクを実行するために独自のモデルをファインチューニングし、
01:44.380 --> 01:54.010
特別な知識やニュアンスを身につけたいのであれば、 そのようなことをもっと詳しく調べたいでしょう。
01:54.010 --> 02:02.800
その時点で、 より深いAPI、 より低レベルのAPI、 トーケナイザーやハギング・フェイスのモデルを扱うことに目を向ける必要がある。
02:02.830 --> 02:05.260
今日はパイプラインについて見ていこう。
02:05.260 --> 02:10.420
そしてそのあとは、 トーケナイザーとモデルの話に移る。
02:11.080 --> 02:13.060
では、 このパイプラインを使って何ができるのか?
02:13.060 --> 02:22.960
そのため、 基本的には2行のコードで、 ハギング・フェイス・ハブのモデルを即座に利用することができる。
02:22.960 --> 02:24.340
簡単なことだ。
02:24.340 --> 02:28.330
そして、 あなたが毎日の推論作業に使えるように、 たくさんの例と、
02:28.330 --> 02:32.800
あなたが持ち帰ることができるものをたくさん紹介するつもりだ。
02:32.800 --> 02:38.350
典型的な例としては、 センチメント分析と呼ばれるものがある。
02:38.380 --> 02:43.570
ある文章が与えられたとき、 この文章から伝わってくる感情は何か?
02:44.380 --> 02:52.450
分類は、 もちろん、 物事をバケツに分類するという、 非常に伝統的な機械学習タスクのひとつだ。
02:52.660 --> 03:00.160
名前付きエンティティ認識とは、 ある文章に含まれる単語を、 人なのか、 場所なのか、
03:00.160 --> 03:04.630
物なのか、 といったようにタグ付けすることだ。
03:04.970 --> 03:13.610
質問応答とは、 何らかの文脈があり、 提供した文脈について質問できるようにしたい場合である。
03:13.640 --> 03:21.710
要約はもちろん、 テキストブロックがあり、 それを要約翻訳にしたい場合である。
03:21.740 --> 03:26.870
もうひとつの古典的なAIタスクは、 ある言語と別の言語の間の翻訳である。
03:26.900 --> 03:32.060
では、 これらのことがそれぞれ2行のコードでできると言ったらどうだろう?
03:32.330 --> 03:37.370
願わくば驚かれることを願っています。
03:37.490 --> 03:43.460
その他にも、 少し高度なこともできる。
03:43.580 --> 03:45.740
テキスト生成は実はまったく進歩していない。
03:45.740 --> 03:47.120
たった2行のコードだ。
03:47.120 --> 03:52.760
それでも超シンプルで、 これまた驚嘆することだろう。
03:53.210 --> 03:57.470
しかし、 画像の生成もオーディオと同様、 非常に簡単だ。
03:57.470 --> 04:02.810
2行より少し多くなりますが、 それでもとてもシンプルなので、 早くお見せしたいです。
04:02.840 --> 04:04.760
前置きはこれくらいにしておこう。
04:04.760 --> 04:05.720
本題に入ろう。
04:05.720 --> 04:07.340
グーグルコラボに行こう。

154
week5/community-contributions/subtitles/srts/59170025/ko_KR.srt

@ -0,0 +1,154 @@
WEBVTT
00:00.740 --> 00:05.000
LM 엔지니어링에 다시 한번 큰 박수를 보내주세요
00:05.000 --> 00:10.220
3주 차, 2일째입니다 오픈 소스 모델로 들어가고 있죠
00:10.370 --> 00:14.960
다시 말씀드리지만 개척 시대 모델은 이미 거꾸로 할 수 있어요
00:14.960 --> 00:16.940
다중 모듈 인공지능 보조를 만들 수 있어요
00:16.940 --> 00:22.940
이제 안는 얼굴 허브를 편하게 볼 수 있습니다 모델, 데이터 세트, 공간을 보는 거죠
00:22.940 --> 00:26.690
구글 콜라브로 코드를 실행할 수 있어요
00:27.020 --> 00:33.080
오늘은 얼굴 트랜스포머 껴안기 라이브러리를 살펴보고 API 종류에 대해
00:33.080 --> 00:39.950
얘기해 볼게요 두 가지 레벨로 트랜스포머와 작업할 수 있어요 더 높은 레벨은 파이프라인이라는
00:39.980 --> 00:46.250
API로 오늘 주로 작업할 거예요 파이프라인을 이용해 텍스트, 이미지,
00:46.250 --> 00:50.810
소리를 생성하는 걸 포함해서요
00:51.080 --> 00:56.570
잠시 다른 API 레벨에 대해 얘기해보죠
00:56.570 --> 01:02.770
안는 얼굴 코드와 상호 작용하는 방식은 두 가지예요
01:02.770 --> 01:10.060
그 중 하나는 표준을 수행하고 싶을 때죠 추론이라는 것을 위한 매일의 전형적인 작업이나 런타임에
01:10.060 --> 01:14.860
모델을 실행할 때요 입력값을 받아 출력을 얻는 거죠
01:14.860 --> 01:21.550
얼굴을 안는 방법은 패키지로 아주 쉽게 상위 레벨 인터페이스에 넣을 수 있게 해줍니다.
01:21.550 --> 01:30.130
빠르게 진행할 수 있게 해줍니다. 텍스트를 생성하고 일상적인 함수를 수행할 수도 있어요.
01:30.580 --> 01:37.360
코드를 좀 더 깊이 파고들고 싶다면 예를 들어, 어떻게 텍스트를 토큰화하고
01:37.360 --> 01:44.380
어떤 모델과 어떤 매개 변수를 모델 실행에 사용할지 알고 싶다면요 혹은 훈련을
01:44.380 --> 01:50.500
통해 자신의 모델을 잘 조정해서 추가적인 지식이나 뉘앙스를 가지고
01:50.530 --> 01:54.010
특수한 작업을 수행할 수 있다면요
01:54.010 --> 02:00.820
그땐 더 깊은 API를 살펴봐야 합니다 하위 레벨 API요 토큰라이저와 포옹하는 얼굴
02:00.820 --> 02:02.800
모델과 작업하는 거죠
02:02.830 --> 02:05.260
오늘은 파이프라인을 살펴볼 거예요
02:05.260 --> 02:10.420
그런 다음 토큰라이저와 모델로 넘어가죠
02:11.080 --> 02:13.060
파이프라인으로 무엇을 할 수 있을까요?
02:13.060 --> 02:22.960
즉, 두 줄의 코드로 얼굴 허브에서 모델을 즉각적으로 이용할 수 있게 해주는 것이죠
02:22.960 --> 02:24.340
아주 간단해요
02:24.340 --> 02:28.330
많은 예제를 제공할 거예요 여러분이 가져갈 수 있는 많은
02:28.330 --> 02:32.800
것도요 매일 추론 작업을 수행하는 데 직접 사용할 수 있도록요
02:32.800 --> 02:38.350
가장 쉬운 것 중 하나인 전형적인 예가 바로 정서 분석이에요
02:38.380 --> 02:43.570
이 문장이 전달하는 감정은 무엇인지 묻는 문장이죠
02:44.380 --> 02:50.740
분류는 물론 아주 전통적인 머신 러닝 과제입니다 물건을 양동이에
02:50.740 --> 02:52.450
담는 거죠
02:52.660 --> 03:00.160
개체 인식이라는 것은 문장을 보고 그 문장의 단어를 사람이냐 장소냐
03:00.160 --> 03:04.630
같은 것으로 태그할 수 있는 거예요
03:04.970 --> 03:11.900
질문 답변은 어떤 컨텍스트가 있는데 여러분이 제공하는 컨텍스트에 관해 질문할
03:11.900 --> 03:13.610
수 있어야 할 때죠
03:13.640 --> 03:20.210
요약은 텍스트 블록이 있을 때 요약 번역으로 바꾸는
03:20.660 --> 03:21.710
거예요
03:21.740 --> 03:26.870
한 언어를 다른 언어로 번역하는 인공지능의 전형적인 작업이죠
03:26.900 --> 03:32.060
이 모든 게 각각 코드 2줄로 가능하다면 어떨까요?
03:32.330 --> 03:37.370
잠시 후에 깜짝 놀라실 거예요
03:37.490 --> 03:43.460
좀 더 발전된 다른 기능도 할 수 있어요
03:43.580 --> 03:45.740
문자 생성은 사실 전혀 발전하지 않았어요
03:45.740 --> 03:47.120
여전히 코드 2줄일 뿐이죠
03:47.120 --> 03:52.760
여전히 아주 간단해요 놀랄 만한 또 다른 거죠
03:53.210 --> 03:57.470
이미지 생성 역시 간단합니다 오디오도 마찬가지죠
03:57.470 --> 04:02.810
코드 두 줄이 조금 넘지만 아주 간단해요 빨리 보여드리고 싶네요
04:02.840 --> 04:04.760
서론은 그만하면 됐어요
04:04.760 --> 04:05.720
바로 본론으로 들어가죠
04:05.720 --> 04:07.340
구글 콜랍으로 가보죠

70
week5/community-contributions/subtitles/srts/59170037/en_US.srt

@ -0,0 +1,70 @@
WEBVTT
00:00.410 --> 00:06.830
So how does it feel to be 30% of the way down the journey to being a proficient LLM engineer?
00:06.860 --> 00:12.020
Take a moment to congratulate yourself on a big accomplishment and a lot of progress.
00:12.110 --> 00:13.850
And hopefully you have that sense.
00:13.850 --> 00:19.310
You have that feeling that you are up skilling, that you can do so much more than you could just a
00:19.310 --> 00:22.160
matter of days ago, and it's going to keep being that way.
00:22.160 --> 00:26.120
We're going to keep building and building on the skills and knowledge that you're acquiring.
00:26.120 --> 00:28.790
So you're able to do more and more.
00:28.820 --> 00:32.780
But again, what you can already do, you can already confidently code with frontiers.
00:32.780 --> 00:36.410
You can build multimodal AI assistants using tools.
00:36.410 --> 00:40.670
And now you're familiar with Hugging Face pipelines.
00:40.670 --> 00:49.310
And you can use pipelines to run inference tasks across a wide variety of different common machine learning
00:49.340 --> 00:50.390
tasks.
00:50.840 --> 01:00.620
Next time, we get down into the lower-level Transformers API as we start to work with Tokenizers.
01:00.650 --> 01:05.690
We've of course already spent some time talking about tokens, and we looked at GPT's tokenizer through
01:05.690 --> 01:06.800
the web user interface.
01:06.830 --> 01:13.190
Now we're going to actually use code to translate text to tokens and back again.
01:13.190 --> 01:16.550
And as part of that we're going to understand things like special tokens.
01:16.550 --> 01:22.550
I remember I had a sidebar, uh, ramble about this some time ago now, but it's all going to come together.
01:22.550 --> 01:23.450
It's going to be worth it.
01:23.450 --> 01:26.060
That seed I planted is going to come together.
01:26.060 --> 01:31.070
When we look at what tokens look like for what gets passed into an LLM.
01:31.070 --> 01:37.610
And then also when we look at these things called chat templates, all of this is going to be extremely
01:37.610 --> 01:41.660
important foundation material, and I look forward to going through it with you next time.
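Translating text to tokens and back again, as previewed here, can be sketched like this. It's a minimal sketch, assuming `transformers` is installed and using the small, freely available `gpt2` checkpoint as an illustrative stand-in for whichever tokenizer the lecture works with.

```python
from transformers import AutoTokenizer

# Round-trip some text through a tokenizer: text -> token ids -> text.
# The "gpt2" tokenizer files are downloaded from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokens = tokenizer.encode("Hello, tokenizers!")
text = tokenizer.decode(tokens)
print(tokens)  # a list of integer token ids
print(text)    # the original string, reconstructed
```

GPT-2's byte-pair encoding is lossless for ordinary text, so the decoded string matches the input exactly; special tokens and chat templates, covered next, add extra structure on top of this round trip.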

58
week5/community-contributions/subtitles/srts/59170037/ja_JP.srt

@ -0,0 +1,58 @@
WEBVTT
00:00.410 --> 00:06.830
では、 熟達したLLMエンジニアになるための道のりの30%を歩んでいる今、 どのように感じているのだろうか?
00:06.860 --> 00:12.020
大きな達成と多くの進歩について、 自分自身を祝福するひとときを過ごしてください。
00:12.110 --> 00:13.850
そして願わくば、 あなたにもその感覚を持っていてほしい。
00:13.850 --> 00:22.160
数日前の自分よりずっと多くのことができるようになったという実感がある。
00:22.160 --> 00:26.120
私たちは、 あなたが身につけている技術や知識をどんどん積み上げていくつもりです。
00:26.120 --> 00:28.790
だから、 どんどんできることが増えていく。
00:28.820 --> 00:32.780
しかし、 繰り返しになるが、 すでにできることは、 フロンティアで自信を持ってコーディングできる。
00:32.780 --> 00:36.410
ツールを使ってマルチモーダルAIアシスタントを構築できる。
00:36.410 --> 00:40.670
そして今も昔も、 ハグする顔のパイプラインはお馴染みだ。
00:40.670 --> 00:50.390
また、 パイプラインを使って、 さまざまな一般的な機械学習タスクの推論タスクを実行することができる。
00:50.840 --> 01:00.620
次回は、 より低レベルのTransformers APIに入り、 Tokenizersを扱い始めます。 もちろん、 トークンについてはすでに時間を費やしていますし、
01:00.650 --> 01:06.800
Gptsトークナイザーをウェブ・ユーザー・インターフェイスを通して見てきました。
01:06.830 --> 01:13.190
では、 実際にコードを使ってテキストをトークンに変換し、 また元に戻してみよう。
01:13.190 --> 01:16.550
その一環として、 私たちは特別なトークンのようなものを理解しようとしている。
01:16.550 --> 01:22.550
少し前にサイドバーで、 ええと、 とりとめのない話をしたのを覚えているんだけど、 全部まとまりそうなんだ。
01:22.550 --> 01:23.450
それだけの価値がある
01:23.450 --> 01:26.060
私が蒔いた種が結実するんだ。
01:26.060 --> 01:31.070
LLMに渡されるトークンがどのようなものかを見てみよう。
01:31.070 --> 01:41.660
そして、 チャット・テンプレートと呼ばれるものを見るときにも、 これらすべてが非常に重要な基礎資料となるでしょう。

70
week5/community-contributions/subtitles/srts/59170037/ko_KR.srt

@ -0,0 +1,70 @@
WEBVTT
00:00.410 --> 00:06.830
능숙한 LLM 엔지니어가 되기까지 30% 정도 성장한 기분이 어떤가요?
00:06.860 --> 00:12.020
큰 성과를 거두고 큰 진전을 이룬 걸 축하하는 시간을 가져요
00:12.110 --> 00:13.850
여러분도 그런 걸 느끼셨으면 해요
00:13.850 --> 00:19.310
기술이 좋아진 것 같고 며칠 전보다 훨씬 많은 걸 할 수 있을
00:19.310 --> 00:22.160
것 같고 앞으로도 그럴 거예요
00:22.160 --> 00:26.120
여러분이 습득하는 기술과 지식을 계속 발전시킬 거예요
00:26.120 --> 00:28.790
그래서 더 많은 걸 할 수 있죠
00:28.820 --> 00:32.780
하지만 이미 할 수 있는 건 이미 개척지를 이용해 자신감 있게 코드를 작성할 수 있죠
00:32.780 --> 00:36.410
도구를 이용해 다중 모듈 인공지능 보조를 만들 수 있죠
00:36.410 --> 00:40.670
이제 페이스 파이프라인을 껴안는 게 익숙해졌죠
00:40.670 --> 00:49.310
파이프라인을 이용해 추론 작업을 실행할 수 있습니다 다양하고 공통적인 머신 러닝 작업들에
00:49.340 --> 00:50.390
걸쳐서요
00:50.840 --> 01:00.620
다음 시간에는 낮은 레벨의 트랜스포머 API에서 토큰라이저를 다룰 겁니다 토큰에 대해 이미 얘기했고
01:00.650 --> 01:05.690
웹 사용자 인터페이스를 통해 Gpts 토큰라이저를
01:05.690 --> 01:06.800
살펴봤죠
01:06.830 --> 01:13.190
이제 코드를 이용해서 텍스트를 토큰으로 변환하고 다시 돌아오도록 하죠
01:13.190 --> 01:16.550
그 일부로 특별한 토큰 같은 걸 이해하게 될 거예요
01:16.550 --> 01:22.550
예전에 잠깐 잡담도 했는데 곧 다 해결될 거예요
01:22.550 --> 01:23.450
보람이 있을 거예요
01:23.450 --> 01:26.060
내가 심은 씨앗이 합쳐질 거예요
01:26.060 --> 01:31.070
LLM으로 전달되는 토큰의 모양을 살펴보죠
01:31.070 --> 01:37.610
채팅 템플릿이라는 것도 살펴보면 이 모든 게 아주 중요한 기본 자료가 될 겁니다
01:37.610 --> 01:41.660
다음 시간에도 함께 살펴보고 싶네요

412
week5/community-contributions/subtitles/srts/59170043/en_US.srt

@ -0,0 +1,412 @@
WEBVTT
00:01.490 --> 00:08.720
Let me enthusiastically welcome you all back to week three of our LLM engineering journey.
00:08.750 --> 00:15.140
If you enjoyed last week when we got deep into building user interfaces using the fabulous Gradio framework,
00:15.170 --> 00:21.290
then you're going to love this week even more, because now it's time to get into open source and start
00:21.320 --> 00:24.500
using the wonderful world of Huggingface.
00:24.830 --> 00:28.340
But first, a quick recap as always on what you can already do.
00:28.370 --> 00:33.260
You can describe Transformers and you are fluent in the key terminology.
00:33.290 --> 00:38.750
You can talk about context windows until the cows come home and all of that.
00:38.780 --> 00:44.210
You can confidently code whether it's with Gemini or Claude or with OpenAI.
00:44.240 --> 00:45.680
You know the APIs.
00:45.680 --> 00:49.820
You know how to stream, you know about markdown, you know about JSON responses.
00:49.940 --> 00:53.330
And you can also build an AI assistant, a chatbot.
00:53.360 --> 00:55.190
You can make it use tools.
00:55.190 --> 01:00.260
You can make it use different agents, and you can make it multimodal.
01:00.380 --> 01:02.330
And we've built one ourselves.
01:02.330 --> 01:04.400
And hopefully you've extended it too.
01:05.060 --> 01:06.590
So what's happening today?
01:06.620 --> 01:09.080
Today we're going to get into hugging face.
01:09.080 --> 01:14.630
And to start with, you're just going to be able to describe what it is and the scope and scale of hugging
01:14.630 --> 01:14.930
face.
01:14.930 --> 01:18.260
One of the most remarkable things about hugging face is its breadth.
01:18.260 --> 01:24.140
All the different things that it offers to the open source data science community, and you'll have
01:24.140 --> 01:26.990
a good appreciation for that shortly.
01:27.320 --> 01:33.650
Uh, we're going to look at models, data sets and spaces in hugging face, and you'll also have a good
01:33.650 --> 01:35.510
understanding of Google Colab.
01:35.510 --> 01:39.410
You may already have an understanding of Google Colab, in which case it'll be a quick revision point.
01:39.410 --> 01:41.840
But for those that don't, we're going to go into it.
01:41.870 --> 01:47.810
You're going to see how you can run code on a box with a good GPU, and you'll have a sense of the different
01:47.840 --> 01:50.270
offerings out there and which ones we'll be using for the class.
01:50.270 --> 01:51.980
So we'll get you set up.
01:51.980 --> 01:55.550
So prepare for some open source stuff.
01:55.550 --> 02:02.900
But first, as always, a quick recap on what's been going on, where we are and what's left to do.
02:02.930 --> 02:09.510
We started on the left, at the beginning, with no LLM engineering knowledge; we will end up on
02:09.510 --> 02:12.750
the right as proficient LLM engineers.
02:12.750 --> 02:16.980
In week one, we got immersed in all things frontier.
02:16.980 --> 02:18.060
In week two.
02:18.090 --> 02:20.250
Last week we built UIs.
02:20.250 --> 02:26.070
We used all of the APIs for the top three and we experimented with tools.
02:26.100 --> 02:31.500
Agent ization Multi-modality this week, all about open source, all about hugging face.
02:31.530 --> 02:37.500
Next week we talk about selecting the right LM for the problem and generating code.
02:37.530 --> 02:39.480
After that is Rag week.
02:39.510 --> 02:47.040
Then we fine tune a frontier model, then we fine tune an open source model, and then in the finale
02:47.040 --> 02:48.450
we bring it all home.
02:49.830 --> 02:54.150
So without further ado, let's talk hugging face.
02:54.540 --> 02:56.670
So as I say, it's ubiquitous.
02:56.700 --> 02:59.280
It's used across the community.
02:59.310 --> 03:01.770
It is a fabulous resource.
03:01.980 --> 03:09.780
And amongst many things, it offers us the Hugging Face platform, which is what you get to if
03:09.780 --> 03:12.900
you go to Hugging Face Co and you've signed up with an account.
03:12.900 --> 03:16.890
You have access to three categories of things.
03:16.890 --> 03:25.860
First of all, you have models over 800,000 open source models that can do a bunch of different types
03:25.860 --> 03:31.080
of tasks, many of which we will experiment with in this week's lectures.
03:31.080 --> 03:35.010
And in future weeks, there are data sets.
03:35.010 --> 03:41.880
It is a treasure trove, over 200,000 data sets covering almost any problem that you can think of.
03:41.910 --> 03:44.070
You can try searching and see what you find.
03:44.100 --> 03:49.470
We're going to be using one particularly amazing data set later in this course.
03:49.500 --> 03:54.030
But but you will find lots of data to to solve your problems.
03:54.270 --> 04:00.150
Um, it's similar to the platform Kaggle, which is much more focused on the data side of things.
04:00.150 --> 04:05.550
But you have such a huge resource of that data within hugging face.
04:06.000 --> 04:13.050
And then hugging face also has something called spaces, which is where you can write an app and expose
04:13.050 --> 04:13.560
that app.
04:13.590 --> 04:20.680
Have it running on hugging face cloud hardware and and available for other people to use.
04:20.680 --> 04:26.530
As long as you're happy for your code to be open source, because that is, you know,
04:26.560 --> 04:28.360
that's what Hugging Face is all about.
04:28.630 --> 04:35.110
Uh, so many of the Spaces apps are written in Gradio.
04:35.110 --> 04:36.910
So they are gradio apps.
04:37.060 --> 04:38.890
Um, there are things that are not Gradio apps.
04:38.890 --> 04:43.660
There's something called Streamlit, which is another way to build apps that is also quite magical.
04:43.660 --> 04:45.670
Different to Gradio, quite magical.
04:45.730 --> 04:48.640
Um, and there are some other ways that you can publish apps as well.
04:48.700 --> 04:51.520
Uh, but I'd say Gradio is probably the most common that's there.
04:51.520 --> 04:59.230
And there's in particular things called leaderboards, which are gradio apps whose job it is to evaluate
04:59.230 --> 05:02.650
different LLMs and rank them and show them in a kind of scorecard.
05:02.680 --> 05:07.300
We're going to be using leaderboards a lot when we look at comparing different LLMs, but we'll
05:07.330 --> 05:11.590
be seeing some of them today as well as we look at huggingface spaces.
05:12.190 --> 05:18.610
So that's the Huggingface platform, which is what you get to if you go to Huggingface Co and log in
05:18.610 --> 05:20.230
and start looking at what's out there.
05:20.260 --> 05:28.240
Hugging face also offers libraries code, which forms the basis of many of our open source projects.
05:28.870 --> 05:35.140
And the libraries give us this amazing head start in what we want to do.
05:35.170 --> 05:41.230
It brings time to market much lower, because you can just be off and running very quickly with very
05:41.230 --> 05:42.910
little boilerplate code.
05:43.180 --> 05:51.970
These are very well crafted libraries that reduce the barrier to entry and make people productive quickly.
05:52.420 --> 05:57.880
The one of the first libraries you'll experience is the Hugging Face Hub, which is a library that allows
05:57.880 --> 06:07.030
you to log in to hugging face and, uh, both download and upload things like data sets and models from
06:07.030 --> 06:12.430
the hub, which is what hugging face calls the platform we just talked about.
06:12.850 --> 06:22.000
Um, Datasets is a library that gives us immediate access to, uh, the data repositories
06:22.000 --> 06:25.540
in Hugging Face. And Transformers.
06:25.570 --> 06:35.860
This is a central library, which is the wrapper code around LLMs that follow the transformer architecture,
06:36.010 --> 06:44.830
and under the covers it's got either PyTorch or TensorFlow code that actually runs these neural networks.
06:45.160 --> 06:52.480
But when you create a transformer, you have the actual deep neural network code at your fingertips.
06:52.480 --> 06:59.200
When we make calls to functions, to methods in transformer code, we're no longer calling out to an
06:59.200 --> 07:04.270
API running on a cloud somewhere else under OpenAI's umbrella.
07:04.270 --> 07:13.240
We are executing the code ourselves, to run either inference or training against our deep neural
07:13.240 --> 07:14.050
network.
07:14.860 --> 07:20.800
So there are three other libraries that I wanted to mention that we're going to come to later in the
07:20.800 --> 07:23.740
course that are more advanced libraries.
07:24.010 --> 07:29.810
Um, the first of them, Peft, stands for parameter efficient fine tuning.
07:29.990 --> 07:39.890
And this is, uh, utilities which allow us to train LLMs without needing to work with all of the billions
07:39.890 --> 07:42.290
of parameters in the LLMs.
07:42.290 --> 07:43.910
So it's parameter efficient.
07:43.910 --> 07:49.400
And the technique in particular that we'll be using is called LoRA, or QLoRA, which is a variation of LoRA,
07:49.400 --> 07:52.460
and there'll be plenty of time to explain that later on.
07:52.460 --> 07:54.710
But bear in mind that's what we'll be using.
07:54.710 --> 07:59.750
And it's part of the Peft library parameter efficient fine tuning.
08:00.140 --> 08:07.550
Then there's a library called TRL, which stands for Transformer Reinforcement Learning.
08:07.550 --> 08:09.440
And it includes a few things.
08:09.440 --> 08:13.730
It's the ability to do things like something called reward modeling.
08:14.060 --> 08:14.630
RM.
08:14.630 --> 08:20.630
And it's also something called proximal policy optimization PPO.
08:20.900 --> 08:24.200
And you may see RM and PPO mentioned from time to time.
08:24.200 --> 08:32.720
And this is related to, uh, both this thing called RLHF that I mentioned a while ago, and its
08:32.990 --> 08:42.320
successors better ways of doing it, which is how we are able to train LMS so that they are really effective
08:42.320 --> 08:43.100
at chat.
08:43.100 --> 08:48.620
And it was the key innovation that resulted in ChatGPT in late 2022.
08:48.650 --> 08:52.130
So a lot of that code is within TRL.
08:52.160 --> 09:00.290
Also within TRL is something called supervised fine tuning, or SFT, and that is something we will directly
09:00.290 --> 09:02.390
use ourselves later in the course.
09:02.390 --> 09:10.220
That is the specific library we will be using to fine tune an open source model, so that it's even
09:10.220 --> 09:14.540
more effective in our particular domain with a particular problem.
09:14.540 --> 09:20.240
We will use it: SFT, supervised fine tuning, part of the TRL library.
09:20.330 --> 09:21.380
All these acronyms.
09:21.830 --> 09:30.230
SFT, part of TRL, uh, and it's an essential framework.
09:30.350 --> 09:32.570
But this is some of the more advanced stuff we'll get back to.
09:32.600 --> 09:37.020
So you don't have to remember all that right now, and certainly don't have to remember all these acronyms,
09:37.140 --> 09:42.120
but just let me plant that seed in you so that when you see it later, it's something that you've heard
09:42.120 --> 09:42.990
of before.
09:44.160 --> 09:51.240
The other one is more of a behind-the-scenes library, but you'll often see us importing it and
09:51.330 --> 09:52.590
making some use of it.
09:52.620 --> 10:01.890
It's called Accelerate, and it's some, uh, advanced Hugging Face code that allows
10:01.890 --> 10:05.670
our transformers to run across any distributed configuration.
10:05.670 --> 10:13.350
So it allows both training and inference to run at scale in an efficient, adaptable way, potentially
10:13.350 --> 10:14.760
across multiple GPUs.
10:14.760 --> 10:19.950
Although in all the experiments we'll be doing, we'll only be using a maximum of one GPU.
10:20.910 --> 10:26.820
So those are some of the key libraries that sit behind hugging face.
10:27.660 --> 10:28.590
At this point.
10:28.590 --> 10:31.140
I think it's time that we get to look at hugging face.
10:31.140 --> 10:38.070
So let's go in and do some browsing around, starting with the Hugging Face platform.
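The point above about executing the network ourselves, rather than calling a cloud API, can be sketched with the lower-level Transformers classes. This is a minimal sketch, assuming `transformers` with PyTorch is installed and using the small `gpt2` checkpoint purely for illustration; the course's actual models come later.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model weights locally from the Hugging Face
# Hub, then run inference on our own box: no remote API call involved.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Open source models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
completion = tokenizer.decode(outputs[0])
print(completion)  # the prompt followed by up to 8 generated tokens
```

Fine-tuning with PEFT/LoRA and TRL's SFT, mentioned above, builds on exactly this pair of objects: the tokenizer prepares the data and the model's parameters are what get (efficiently) trained.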

334
week5/community-contributions/subtitles/srts/59170043/ja_JP.srt

@ -0,0 +1,334 @@
WEBVTT
00:01.490 --> 00:08.720
LLMエンジニアリングの旅の第3週目に戻ってきた皆さんを熱烈に歓迎しましょう。
00:08.750 --> 00:15.140
先週、 素晴らしいGradioフレームワークを使ったユーザー・インターフェースの構築を楽しんだのなら、
00:15.170 --> 00:24.500
今週はもっと気に入るはずだ。
00:24.830 --> 00:28.340
その前に、 すでにできることをいつものように簡単にまとめておこう。
00:28.370 --> 00:33.260
トランスフォーマーについて説明でき、 重要な専門用語に精通している。
00:33.290 --> 00:38.750
コンテクスト・ウィンドウについては、 牛が帰ってくるまで話すことができる。
00:38.780 --> 00:44.210
GeminiでもClaudeでもOpenAIでも、 自信を持ってコーディングできる。
00:44.240 --> 00:45.680
APIは知っているだろう。
00:45.680 --> 00:49.820
ストリーミングのやり方も、 マークダウンのことも、 JSONレスポンスのことも知っている。
00:49.940 --> 00:53.330
また、 AIアシスタント、 チャットボットを作ることもできる。
00:53.360 --> 00:55.190
道具を使わせることもできる。
00:55.190 --> 01:00.260
さまざまなエージェントを使うこともできるし、 マルチモーダルにもできる。
01:00.380 --> 01:02.330
そして、 自分たちでも作った。
01:02.330 --> 01:04.400
そして願わくば、 それをさらに広げてほしい。
01:04.400 --> 01:04.400
それもそうだ。
01:05.060 --> 01:06.590
それで、 今日は何が起きているんだ?
01:06.620 --> 01:09.080
今日はハグ顔に入ろう。
01:09.080 --> 01:14.930
そもそも、 ハグ顔とは何か、 その範囲と規模を説明できればいいわけだし。
01:14.930 --> 01:18.260
ハグ顔で最も注目すべきことのひとつは、 その幅広さだ。
01:18.260 --> 01:24.140
オープンソースのデータ・サイエンス・コミュニティに提供するさまざまなものを、
01:24.140 --> 01:26.990
すぐに理解してもらえるだろう。
01:27.320 --> 01:35.510
モデル、 データセット、 空間をハグハグしながら見ていくんだけど、 Google Colabについてもよく理解できるようになるよ。
01:35.510 --> 01:39.410
すでにGoogle Colabを理解しているかもしれないが、 その場合はすぐに復習できるだろう。
01:39.410 --> 01:41.840
しかし、 そうでない人たちのために、 私たちはそれに踏み込もうとしている。
01:41.870 --> 01:47.810
優れたGPUを搭載したマシンでどのようにコードを走らせることができるかを見てもらい、 世の中にあるさまざまな製品と、
01:47.840 --> 01:50.270
このクラスで使うGPUを理解してもらう。
01:50.270 --> 01:51.980
だから、 私たちがセッティングします。
01:51.980 --> 01:55.550
だから、 オープンソースのものを準備するんだ。
01:55.550 --> 02:02.900
その前に、 いつものように、 これまでの経過と現在地、 そして残された課題を簡単に振り返っておこう。
02:02.930 --> 02:12.750
最初はLMエンジニアリングの知識がない状態で左からスタートした。
02:12.750 --> 02:16.980
第1週は、 フロンティアのあらゆることに没頭した。
02:16.980 --> 02:18.060
第2週は。
02:18.090 --> 02:20.250
先週はUIを構築した。
02:20.250 --> 02:26.070
私たちはトップ3のAPIをすべて使い、 ツールを使って実験した。
02:26.100 --> 02:31.500
エージェント化 マルチモダリティ 今週は、 オープンソースについて、 ハグ顔について。
02:31.530 --> 02:37.500
Next week, we'll talk about choosing the right LLM for the problem, and about generating code.
02:37.530 --> 02:39.480
After that comes RAG week.
02:39.510 --> 02:48.450
Then we fine-tune a frontier model, fine-tune open-source models, and bring it all home in the finale.
02:49.830 --> 02:54.150
So without further ado, let's talk about Hugging Face.
02:54.540 --> 02:56.670
In short, it's ubiquitous.
02:56.700 --> 02:59.280
It's used across the whole community.
02:59.310 --> 03:01.770
It's a fantastic resource.
03:01.980 --> 03:12.900
The Hugging Face platform is available by going to huggingface.co and registering for an account.
03:12.900 --> 03:16.890
You get access to three categories of things.
03:16.890 --> 03:31.080
First of all, there are over 800,000 open-source models that can carry out all sorts of different tasks.
03:31.080 --> 03:35.010
And then, important for the weeks ahead, there are datasets.
03:35.010 --> 03:41.880
It's a treasure trove: more than 200,000 datasets covering almost every problem imaginable.
03:41.910 --> 03:44.070
Try searching it.
03:44.100 --> 03:49.470
Later in this course we'll use one particularly wonderful dataset.
03:49.500 --> 03:54.030
But you'll find plenty of data there to solve your own problems.
03:54.270 --> 04:00.150
It's similar to the platform Kaggle, which is more specialized on the data side.
04:00.150 --> 04:05.550
But you have that enormous wealth of data in Hugging Face.
04:06.000 --> 04:13.560
Hugging Face also has something called Spaces, where you can write an app and publish it.
04:13.590 --> 04:20.680
It runs on cloud hardware so that other people can use it.
04:20.680 --> 04:28.360
Provided you're happy for your code to be open source, because that's what Hugging Face is all about.
04:28.630 --> 04:35.110
Many Spaces apps are built with Gradio.
04:35.110 --> 04:36.910
In other words, they're Gradio apps.
04:37.060 --> 04:38.890
Um, some of them aren't Gradio apps.
04:38.890 --> 04:43.660
There's something called Streamlit, which is another rather magical way of building apps.
04:43.660 --> 04:45.670
Different from Gradio, but also quite magical.
04:45.730 --> 04:48.640
There are a few other ways to publish apps as well.
04:48.700 --> 04:51.520
Uh, but Gradio is the most common, I think.
04:51.520 --> 05:02.650
In particular, there are things called leaderboards, which are Gradio apps that evaluate different LLMs, rank them, and present them in a kind of scorecard.
05:02.680 --> 05:07.300
We'll use leaderboards a lot when comparing different LLMs, and we'll see some of them today
05:07.330 --> 05:11.590
as we look at Hugging Face Spaces.
05:12.190 --> 05:20.230
That's the Hugging Face platform, which you reach by going to huggingface.co, logging in, and starting to look at what's there.
05:20.260 --> 05:28.240
Hugging Face also provides the library code that underpins many open-source projects.
05:28.870 --> 05:35.140
And the libraries give us a fantastic head start on whatever we want to do.
05:35.170 --> 05:42.910
Because we can get up and running quickly, with almost no boilerplate code.
05:43.180 --> 05:51.970
They're libraries built very well to lower the barriers to entry and make people productive fast.
05:52.420 --> 05:57.880
Hugging Face Hub is a library that lets you log in to Hugging Face
05:57.880 --> 06:12.430
and download and upload things like datasets and models from the Hub.
06:12.850 --> 06:25.540
Datasets is a library that gives you immediate access to Hugging Face's data repositories. And then there's Transformers.
06:25.570 --> 06:44.830
This is the central library: wrapper code for LLMs that follow the transformer architecture, and under the covers there's PyTorch or TensorFlow code that actually runs the neural networks.
06:45.160 --> 06:52.480
But when you work with Transformers, you have the actual deep neural network code at your fingertips.
06:52.480 --> 07:04.270
When we call functions or methods in Transformers code, we're no longer calling an API running on some cloud under OpenAI's umbrella.
07:04.270 --> 07:14.050
We're running the code ourselves, to carry out inference or training against deep neural networks.
07:14.860 --> 07:23.740
There are three other, more advanced libraries that we'll introduce later in this course.
07:24.010 --> 07:29.810
The first of them, PEFT, stands for Parameter-Efficient Fine-Tuning.
07:29.990 --> 07:42.290
With this utility, you can train LLMs without having to work with all of their billions of parameters.
07:42.290 --> 07:43.910
In other words, it's parameter efficient.
07:43.910 --> 07:49.400
The particular technique we'll use is called LoRA, or a variation of LoRA,
07:49.400 --> 07:52.460
and we'll have plenty of time to explain that later.
07:52.460 --> 07:54.710
But keep in mind that that's what we'll be using.
07:54.710 --> 07:59.750
And it's part of the PEFT library: parameter-efficient fine-tuning.
08:00.140 --> 08:07.550
Then there's a library called TRL, which stands for Transformer Reinforcement Learning.
08:07.550 --> 08:09.440
It includes several things.
08:09.440 --> 08:13.730
It's the ability to do things like what's called reward modeling.
08:14.060 --> 08:14.630
Yeah.
08:14.630 --> 08:20.630
There's also something called Proximal Policy Optimization, PPO.
08:20.900 --> 08:24.200
And you'll occasionally see those names, like PPO, mentioned.
08:24.200 --> 08:32.720
And this relates to what's called RLHF, which I talked about a while back: the way
08:32.990 --> 08:43.100
we can train LLMs so that they're really effective at chat.
08:43.100 --> 08:48.620
And it was the key innovation that gave rise to ChatGPT in late 2022.
08:48.650 --> 08:52.130
So a lot of that code lives in TRL.
08:52.160 --> 09:02.390
Also within TRL there's something called Supervised Fine-Tuning, or SFT, which we'll use directly ourselves later in the course.
09:02.390 --> 09:14.540
It's the specific library we'll use to fine-tune open-source models, to make them more effective in a particular domain with a particular problem.
09:14.540 --> 09:20.240
SFT, supervised fine-tuning, is part of the TRL library.
09:20.330 --> 09:21.380
It's all acronyms.
09:21.830 --> 09:30.230
SFT is part of TRL, an essential framework.
09:30.350 --> 09:32.570
But this is more advanced material that we'll come back to later.
09:32.600 --> 09:37.020
So you don't need to remember it all right now, and you don't need to memorize all the acronyms.
09:37.140 --> 09:42.990
I just want to plant the seed, so that when you see them later they'll sound familiar.
09:44.160 --> 09:52.590
One more is more of a behind-the-scenes thing, but you'll often see it imported and put to use.
09:52.620 --> 10:05.670
It's called Accelerate, advanced Hugging Face code that lets Transformers run in any distributed configuration.
10:05.670 --> 10:14.760
So you can run both training and inference at scale, potentially across multiple GPUs, in an efficient and adaptable way.
10:14.760 --> 10:19.950
For the experiments we'll do, we'll use at most one GPU.
10:20.910 --> 10:26.820
Those are some of the key libraries behind Hugging Face.
10:27.660 --> 10:28.590
For now, anyway.
10:28.590 --> 10:31.140
I think it's about time we looked at Hugging Face itself.
10:31.140 --> 10:38.070
So without further ado, let's take a look around, starting with the Hugging Face platform.
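This commit is titled "Vectorizing Udemy Subtitle files", so the notebook it adds presumably has to turn subtitle files like the one above into plain text before embedding them. As a minimal, illustrative sketch (not the notebook's actual code; `srt_to_text` is a hypothetical helper), here is one way to strip the WEBVTT header, cue indices, and timestamp lines:

```python
import re

# Matches timestamp lines like "02:37.530 --> 02:39.480"
TIMESTAMP = re.compile(r"^\d{2}:\d{2}\.\d{3} --> \d{2}:\d{2}\.\d{3}")

def srt_to_text(raw: str) -> str:
    """Keep only the spoken text from a WEBVTT-style subtitle file,
    dropping the header, bare cue indices, and timestamp lines."""
    kept = []
    for line in raw.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT" or line.isdigit():
            continue
        if TIMESTAMP.match(line):
            continue
        kept.append(line)
    return " ".join(kept)

sample = """WEBVTT

02:37.530 --> 02:39.480
After that comes RAG week.

02:49.830 --> 02:54.150
Let's talk about Hugging Face."""

print(srt_to_text(sample))
# → After that comes RAG week. Let's talk about Hugging Face.
```

The resulting plain text could then be chunked and embedded; the chunking strategy is a separate choice the notebook makes.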

397
week5/community-contributions/subtitles/srts/59170043/ko_KR.srt

@ -0,0 +1,397 @@
WEBVTT
00:01.490 --> 00:08.720
A warm welcome, everyone, to week 3 of your LLM engineering journey.
00:08.750 --> 00:15.140
Last week you saw how to build user interfaces using the wonderful Gradio
00:15.170 --> 00:21.290
framework. You're going to like this week even more, because it's time to get started with open source
00:21.320 --> 00:24.500
and the wonderful world of Hugging Face.
00:24.830 --> 00:28.340
But first, as always, a quick recap of what you can already do.
00:28.370 --> 00:33.260
You can describe transformers, and you're fluent in the core vocabulary.
00:33.290 --> 00:38.750
You can talk about context windows.
00:38.780 --> 00:44.210
Whether it's Gemini, Claude, or OpenAI, you can write code against them with confidence.
00:44.240 --> 00:45.680
You know the APIs.
00:45.680 --> 00:49.820
You know streaming, you know markdown, you know JSON responses.
00:49.940 --> 00:53.330
You can build chatbots, AI assistants.
00:53.360 --> 00:55.190
You can use tools.
00:55.190 --> 01:00.260
You can use multiple agents, and you can make it multimodal.
01:00.380 --> 01:02.330
We built one ourselves.
01:02.330 --> 01:04.400
I hope you've extended it.
01:04.400 --> 01:04.400
I have too.
01:05.060 --> 01:06.590
So what's happening today?
01:06.620 --> 01:09.080
Today, we're going to hug a face. Get it?
01:09.080 --> 01:14.930
First, you'll be able to explain what it is: the scope and scale of Hugging Face.
01:14.930 --> 01:18.260
The most astonishing thing about Hugging Face is its breadth.
01:18.260 --> 01:24.140
All the different things it offers the open-source data science community; you'll
01:24.140 --> 01:26.990
soon have a good sense of that.
01:27.320 --> 01:33.650
We'll look at Hugging Face models, datasets, and Spaces, and you'll also get well
01:33.650 --> 01:35.510
acquainted with Google Colab.
01:35.510 --> 01:39.410
If you already understand Google Colab, this will be a quick refresher.
01:39.410 --> 01:41.840
If not, we'll go through it now.
01:41.870 --> 01:47.810
You'll see how to run code on a good GPU box; there are various offerings, and you'll learn which
01:47.840 --> 01:50.270
to use for this class.
01:50.270 --> 01:51.980
We'll get you set up.
01:51.980 --> 01:55.550
Get ready for open source.
01:55.550 --> 02:02.900
But first, a quick rundown of where we are, where we've been, and what's left.
02:02.930 --> 02:09.510
We started on the left with no LLM engineering knowledge, and we'll finish on the right
02:09.510 --> 02:12.750
as proficient LLM engineers.
02:12.750 --> 02:16.980
In the first week we immersed ourselves in the frontier.
02:16.980 --> 02:18.060
Week two.
02:18.090 --> 02:20.250
Last week we built UIs.
02:20.250 --> 02:26.070
We used the APIs of all of the top three, and we experimented with tools.
02:26.100 --> 02:31.500
Agentization, multimodality; and this week is about open source and Hugging Face.
02:31.530 --> 02:37.500
Next week, we'll talk about choosing the right LLM for the problem and generating code.
02:37.530 --> 02:39.480
After that it's RAG week.
02:39.510 --> 02:47.040
Then, fine-tuning a frontier model, fine-tuning open-source models, and bringing it all home
02:47.040 --> 02:48.450
in the finale.
02:49.830 --> 02:54.150
So without further ado, let's talk about Hugging Face.
02:54.540 --> 02:56.670
You could call it ubiquitous.
02:56.700 --> 02:59.280
It's used across the community.
02:59.310 --> 03:01.770
It's a truly wonderful resource.
03:01.980 --> 03:09.780
First of all, there's the Hugging Face platform: go to huggingface.co and you
03:09.780 --> 03:12.900
can create an account.
03:12.900 --> 03:16.890
You get access to three categories of things.
03:16.890 --> 03:25.860
First, there are over 800,000 open-source models that can carry out all sorts of tasks.
03:25.860 --> 03:31.080
We'll experiment with a good number of them in this week's lectures.
03:31.080 --> 03:35.010
And then, for the weeks ahead, there are datasets.
03:35.010 --> 03:41.880
It's a treasure trove: over 200,000 datasets covering almost every problem.
03:41.910 --> 03:44.070
Search it and see what comes up.
03:44.100 --> 03:49.470
Later in this course we'll use one particularly amazing dataset.
03:49.500 --> 03:54.030
But you'll find plenty of data there to solve your problems.
03:54.270 --> 04:00.150
It's similar to the platform Kaggle, which is much more focused on the data side.
04:00.150 --> 04:05.550
But you have that vast wealth of data in Hugging Face.
04:06.000 --> 04:13.560
Hugging Face also has something called Spaces, where you can write apps and publish them.
04:13.590 --> 04:20.680
They run on Hugging Face cloud hardware so that others can use them.
04:20.680 --> 04:26.530
Provided you're happy for your code to be open source, because that's what Hugging Face
04:26.560 --> 04:28.360
is all about.
04:28.630 --> 04:35.110
Many of the Spaces apps are built with Gradio.
04:35.110 --> 04:36.910
So they're Gradio apps.
04:37.060 --> 04:38.890
Some of them aren't Gradio apps.
04:38.890 --> 04:43.660
There's something called Streamlit, another way of building apps, and it's also quite magical.
04:43.660 --> 04:45.670
Different from Gradio, but magical too.
04:45.730 --> 04:48.640
There are other ways to publish apps as well.
04:48.700 --> 04:51.520
But I think Gradio is the most common.
04:51.520 --> 04:59.230
And there's something called a leaderboard: Gradio apps that evaluate different LLMs,
04:59.230 --> 05:02.650
rank them, and display them on a scorecard.
05:02.680 --> 05:07.300
We'll use leaderboards a lot when comparing different
05:07.330 --> 05:11.590
LLMs; we'll see a few of them today, along with Hugging Face Spaces.
05:12.190 --> 05:18.610
That's the Hugging Face platform: log in at huggingface.co and look around at
05:18.610 --> 05:20.230
what's there.
05:20.260 --> 05:28.240
Hugging Face also provides library code, the code that underpins our open-source projects.
05:28.870 --> 05:35.140
And the libraries give us a head start on whatever we want to do.
05:35.170 --> 05:41.230
The barrier is much lower, because you can get running very quickly with very little
05:41.230 --> 05:42.910
boilerplate code.
05:43.180 --> 05:51.970
They're very well-built libraries that reduce barriers to entry and make people productive.
05:52.420 --> 05:57.880
One of the first libraries you'll encounter is Hugging Face Hub.
05:57.880 --> 06:07.030
It's a library that lets you log in to Hugging Face and download and upload things like datasets and models
06:07.030 --> 06:12.430
from the Hub, the Hugging Face platform.
06:12.850 --> 06:22.000
Datasets is a library that gives you instant access to the data repositories
06:22.000 --> 06:25.540
on Hugging Face. And then Transformers.
06:25.570 --> 06:35.860
This is the central library: wrapper code around LLMs that follow the transformer architecture, and underneath it
06:36.010 --> 06:44.830
there's PyTorch or TensorFlow code that runs these neural networks.
06:45.160 --> 06:52.480
But when you work with Transformers, you have deep neural network code at your fingertips.
06:52.480 --> 06:59.200
When you make calls to functions or methods in Transformers code, you're not calling an
06:59.200 --> 07:04.270
API running on some other cloud under OpenAI's umbrella.
07:04.270 --> 07:14.050
We run the code ourselves, to carry out inference or training on deep neural networks.
07:14.860 --> 07:20.800
There are three more libraries I want to mention, for later in
07:20.800 --> 07:23.740
the course; they're more advanced.
07:24.010 --> 07:29.810
The first is PEFT, which stands for Parameter-Efficient Fine-Tuning.
07:29.990 --> 07:39.890
Thanks to this utility, you can train LLMs without handling
07:39.890 --> 07:42.290
their billions of parameters.
07:42.290 --> 07:43.910
It's parameter efficient.
07:43.910 --> 07:49.400
In particular, the technique we'll use is called LoRA, or a variation of LoRA.
07:49.400 --> 07:52.460
There'll be plenty of time to explain it later.
07:52.460 --> 07:54.710
But keep in mind that it's what we'll use.
07:54.710 --> 07:59.750
It's part of the PEFT library: parameter-efficient fine-tuning.
08:00.140 --> 08:07.550
There's also a library called TRL, which stands for Transformer Reinforcement Learning.
08:07.550 --> 08:09.440
It includes a few things.
08:09.440 --> 08:13.730
The ability to do things like reward modeling.
08:14.060 --> 08:14.630

08:14.630 --> 08:20.630
There's also something called Proximal Policy Optimization, PPO.
08:20.900 --> 08:24.200
You'll occasionally see those names, like PPO, mentioned.
08:24.200 --> 08:32.720
This relates to RLHF, which I mentioned earlier: the
08:32.990 --> 08:43.100
way we train LLMs so that they're effective at chat.
08:43.100 --> 08:48.620
It was the key innovation that gave birth to ChatGPT in late 2022.
08:48.650 --> 08:52.130
A lot of that code lives within TRL.
08:52.160 --> 09:00.290
Within TRL there's also something called supervised fine-tuning, or SFT, which we'll use directly
09:00.290 --> 09:02.390
ourselves later in the course.
09:02.390 --> 09:10.220
It's the library we use to fine-tune open-source models, so they're more effective
09:10.220 --> 09:14.540
in a particular domain with a particular problem.
09:14.540 --> 09:20.240
SFT, supervised fine-tuning, is part of the TRL library.
09:20.330 --> 09:21.380
It's all acronyms.
09:21.830 --> 09:30.230
SFT is part of TRL, a really important framework.
09:30.350 --> 09:32.570
But this is more advanced; we'll come back to it later.
09:32.600 --> 09:37.020
You don't need to memorize it all right now, and you certainly don't need to memorize
09:37.140 --> 09:42.990
these acronyms. But let me plant the seed, so that when you see them later you'll have heard of them.
09:44.160 --> 09:51.240
The other one is more of a behind-the-scenes thing, but you'll often see it imported
09:51.330 --> 09:52.590
and put to use.
09:52.620 --> 10:01.890
It's called Accelerate: advanced Hugging Face code that lets Transformers work
10:01.890 --> 10:05.670
in any distributed configuration.
10:05.670 --> 10:13.350
Both training and inference can run, in an efficient and adaptable way, potentially across
10:13.350 --> 10:14.760
multiple GPUs.
10:14.760 --> 10:19.950
The experiments we'll do will use at most one GPU, though.
10:20.910 --> 10:26.820
Those are the main libraries behind Hugging Face.
10:27.660 --> 10:28.590
For now.
10:28.590 --> 10:31.140
I think it's time we saw Hugging Face itself.
10:31.140 --> 10:38.070
So let's take a look around, starting with the Hugging Face platform.
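The parameter-efficient idea behind PEFT and LoRA described in the transcript above can be made concrete with a little arithmetic: instead of updating a full d_in × d_out weight matrix, LoRA trains two small low-rank matrices, A (d_in × r) and B (r × d_out). A hedged sketch, with the dimension and rank chosen purely for illustration:

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Parameters in a LoRA adapter: A is d_in x rank, B is rank x d_out,
    trained in place of updating the full d_in x d_out weight matrix."""
    return d_in * rank + rank * d_out

# Illustrative numbers: one 4096 x 4096 projection matrix, LoRA rank 8.
full = 4096 * 4096                                  # 16,777,216 weights
lora = lora_trainable_params(4096, 4096, rank=8)    # 65,536 weights
print(full, lora, round(full / lora))               # → 16777216 65536 256
```

So at rank 8 the adapter trains roughly 1/256th of the weights of that matrix, which is why LoRA fine-tuning fits on modest GPUs.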

472
week5/community-contributions/subtitles/srts/59170055/en_US.srt

@ -0,0 +1,472 @@
WEBVTT
00:00.740 --> 00:03.140
Welcome to the world of Google Colab.
00:03.140 --> 00:07.730
You may already be very familiar with Google Colab, even if so, I hope I'll show you a couple of things
00:07.730 --> 00:08.660
here and there.
00:08.780 --> 00:13.340
But if not, uh, prepare for a great tool.
00:13.610 --> 00:17.630
Um, and as I say, there are other competitors to Google Colab that are pretty similar.
00:17.750 --> 00:25.490
Um, but this is where I suggest you start, or do the same sort of thing in your, uh, cloud compute
00:25.490 --> 00:26.990
platform of choice.
00:26.990 --> 00:33.140
So the first thing you'll need to do is have a Google account if you don't already have one.
00:33.170 --> 00:40.970
So when you go to this URL colab.research.google.com, uh, if you don't already have a Google account,
00:40.970 --> 00:43.850
it will prompt you to create one.
00:43.850 --> 00:44.840
And it's worth it.
00:44.840 --> 00:46.880
There's going to be tons that we can do with it.
00:46.880 --> 00:50.240
So uh, go ahead and do that if you need to.
00:50.510 --> 00:56.990
Um, but uh, for everybody else that has one, you will see this, um, which will give you some information
00:56.990 --> 00:58.010
about Colab.
00:58.010 --> 01:00.650
There's a free tier and there is a paid tier.
01:00.650 --> 01:02.900
There's an awful lot you can do just with the free tier.
01:02.960 --> 01:07.320
Um, and I think in theory, you should be able to do almost everything in our class with the free tier.
01:07.320 --> 01:08.820
It just might take longer.
01:09.060 --> 01:13.680
And the paid tier, you can control how much you spend, and it can be relatively small in a matter
01:13.680 --> 01:14.430
of a few dollars.
01:14.430 --> 01:20.490
So it's certainly something that I'd recommend you consider because it will allow you to get deeper
01:20.490 --> 01:23.670
into training and it will be very satisfying.
01:23.760 --> 01:31.980
So, uh, when you come up with a new Colab notebook, it looks a bit like what gets served up right
01:31.980 --> 01:32.460
away.
01:32.460 --> 01:34.890
It looks very much like a Jupyter notebook.
01:34.920 --> 01:41.820
You have cells that can be code, or they can be text, and you can run code just by clicking in it
01:41.820 --> 01:42.840
and running it.
01:42.840 --> 01:45.990
And this is a kind of default one that comes up.
01:46.020 --> 01:49.950
What we can do is we can go file new notebook in drive.
01:49.950 --> 01:55.650
And it says in drive like that, because this notebook is created in your Google Drive, which is so
01:55.650 --> 01:56.280
convenient.
01:56.280 --> 02:03.360
It has the same kind of construct as making Google Docs, making Google Sheets, and it's done in a
02:03.360 --> 02:06.780
way that you can share it just as you would share anything else.
02:06.780 --> 02:08.270
So here we are.
02:09.020 --> 02:14.840
And the first thing that we see in what looks like a Jupyter notebook over here is a connect button.
02:14.840 --> 02:19.400
And I'm going to show you we can start with change runtime type because it shows you the different kinds
02:19.400 --> 02:23.690
of runtime, the different kinds of VM that we can run: a CPU.
02:23.690 --> 02:29.480
In other words, a normal box that doesn't have one of these GPUs, graphics processing units that are
02:29.480 --> 02:35.900
so good at running, uh, parallel matrix maths that sits behind neural networks.
02:35.900 --> 02:41.300
So we can just choose a CPU box, which is very much available on the free tier.
02:41.330 --> 02:50.420
There is a low end GPU box called a T4, which has a smaller GPU attached to it.
02:50.420 --> 02:56.390
This is available on the free plan with some rate limits in terms of how much you can use it, and it's
02:56.390 --> 02:58.340
also very cheap on the paid plan.
02:58.550 --> 03:05.630
Um, there's an L4, which is a bit higher spec, and an A100, which is the strongest one and which
03:05.630 --> 03:08.930
we will use when we want to do things quickly.
03:08.960 --> 03:12.150
It does cost a little bit more, but still we're talking about dollars.
03:12.180 --> 03:14.940
Not massive amounts.
03:14.940 --> 03:16.050
$10 will get you.
03:16.050 --> 03:23.850
I think with $10, you'd be able to keep training for about 24 to 48 hours, uh,
03:23.850 --> 03:26.580
using that box constantly.
03:26.580 --> 03:32.370
So it's still not going to break the bank, but it is on the radar when you start
03:32.370 --> 03:34.170
using A100s a lot.
03:34.830 --> 03:40.020
Uh, so, um, and of course, you always get to see how much you're spending, and you can always choose
03:40.020 --> 03:43.500
to go with the cheaper option or go with the free option as you wish.
03:43.530 --> 03:46.650
And when you pick a box, you can have a high Ram version of it.
03:46.650 --> 03:48.870
And that's talking about the CPU RAM, not the GPU.
03:48.900 --> 03:54.030
The GPU RAM is associated with which instance you pick, but you can choose whether you want a high
03:54.030 --> 03:55.290
CPU RAM or not.
03:55.290 --> 04:01.770
So let's just go with a CPU box with a normal amount of RAM and connect to that box by pressing the connect
04:01.770 --> 04:02.730
button.
04:02.790 --> 04:07.320
It does take a little while to connect, because it has to hunt down a box and connect to it, but there
04:07.320 --> 04:07.680
we go.
04:07.710 --> 04:09.480
We are now attached to a box.
04:09.480 --> 04:15.750
You go to this dropdown and say View Resources to see what you're working with.
04:15.780 --> 04:17.040
You can see the system RAM.
04:17.040 --> 04:24.840
We've got like almost 13 gigs on this box, and we've got 225 gigs of disk space there.
04:25.290 --> 04:35.910
And I can go over here and I can type something like print hello Data Science World and run that.
04:35.910 --> 04:39.150
And shockingly, we get that message printed.
04:39.330 --> 04:42.990
Uh, so, uh, hopefully no surprises there.
04:42.990 --> 04:46.530
It's a Jupyter notebook running in the cloud on a CPU.
04:46.560 --> 04:48.210
A couple of other things to mention.
04:48.210 --> 04:50.370
If you look down here, there's some useful stuff.
04:50.370 --> 04:56.520
This one here opens up your sort of browser, a file browser, onto your local disk.
04:56.550 --> 05:01.380
This local disk is ephemeral, and it gets completely wiped once you've finished using this box.
05:01.380 --> 05:06.900
So consider it temporary; you can use it to write files there that you're maybe then going
05:06.900 --> 05:13.290
to upload your model or data to the Huggingface hub, um, which you will later download somewhere else.
05:13.290 --> 05:14.880
But this is temporary.
05:14.910 --> 05:16.290
This is very important.
05:16.290 --> 05:21.000
This key is for what's called the secrets associated with your notebook.
05:21.000 --> 05:26.520
And this is where you can put in the environment variables that you'll be able to access within your
05:26.520 --> 05:27.090
notebook.
05:27.120 --> 05:31.020
They should not be included in the code of the notebook.
05:31.050 --> 05:33.930
And what you'll see here is I have my anthropic API key.
05:33.960 --> 05:37.530
I have my OpenAI API key and my hugging face token.
05:37.530 --> 05:43.890
That's the thing we created in the last video, and I've got them associated with this notebook.
05:43.920 --> 05:46.020
You can just press Add New Secret to do that.
05:46.020 --> 05:48.270
And it comes associated with all of my notebooks.
05:48.450 --> 05:51.870
Um, because I've got that set up as my colab secrets.
05:51.870 --> 05:56.880
And you can create a new one by pressing Add New Secret there.
05:57.270 --> 06:01.590
You can switch notebook access on here.
06:01.860 --> 06:05.280
I've just seen that there's a Create Gemini key option there.
06:05.280 --> 06:10.500
They're obviously cross-selling to Gemini, and I know that I say that creating Gemini keys
06:10.530 --> 06:11.370
is hard.
06:11.370 --> 06:15.300
Maybe they've got an easier path to creating Gemini API keys right there.
06:15.300 --> 06:16.740
So that would be worth trying.
06:16.770 --> 06:20.460
If you haven't already gone through the rigmarole of setting up a Gemini API key.
06:20.670 --> 06:26.340
Uh, so, um, I was going to say that later we'll find out how to access your key from within
06:26.340 --> 06:27.180
the Jupyter notebook.
06:27.180 --> 06:30.540
But wonderfully, they've given you the little scriptlet of code just there.
06:30.540 --> 06:36.690
That's what we'll be doing later to be accessing our secrets within the code on the right.
06:36.690 --> 06:40.020
So you should set these up when you get a chance.
06:40.050 --> 06:45.630
When you're working with an actual notebook in particular, you flip this switch on to make sure that
06:45.630 --> 06:50.820
when you execute this code in a cell, it will have access to that secret.
06:51.120 --> 06:56.100
And of course, as you can imagine, the sort of powerful thing about these secrets is that if you share
06:56.100 --> 06:59.490
this notebook with others, then they get all of your code.
06:59.490 --> 07:02.100
But of course, they don't get your secrets shared.
07:02.100 --> 07:07.380
They will have to enter in their own secrets in order to be able to run that code.
07:07.380 --> 07:12.240
And similarly, of course, when I share notebooks for you to use, the same thing will apply.
07:12.240 --> 07:18.180
You'll need to put in your own tokens in order to take advantage of the code and run it against
07:18.180 --> 07:23.850
the frontier models or use your hugging face hub, um, or whatever.
07:24.600 --> 07:26.700
Okay, let's close that down.
07:26.700 --> 07:30.930
So let me just show you some of the more powerful boxes.
07:30.930 --> 07:34.500
So you remember we can go here and go change runtime type.
07:34.500 --> 07:38.040
Click on T4 to to use that box.
07:38.040 --> 07:40.080
And I did that earlier.
07:40.230 --> 07:45.150
And I did that because uh, it can take a little while to connect to some of these boxes.
07:45.150 --> 07:50.700
And with the really high spec boxes like A100, sometimes it just won't be available and you'll have
07:50.700 --> 07:54.180
to come back and try again two minutes later, and then it will be available.
07:54.180 --> 07:58.710
Invariably it becomes available after a couple of tries, but sometimes they are oversubscribed and
07:58.710 --> 08:00.660
it takes a few attempts.
08:00.660 --> 08:02.580
So this is a T4 box.
08:02.580 --> 08:09.210
If I do view resources, we'll see that we have, again, 12 and a bit gigs of system RAM.
08:09.210 --> 08:12.780
We have the same, or a slightly smaller, hard drive, I think.
08:12.960 --> 08:15.690
I think it was 225 before, but it's 200-something.
08:15.690 --> 08:16.980
That's plenty of disk space.
08:16.980 --> 08:24.000
And we have a GPU with 15GB of RAM, and 15GB might sound like a huge amount of RAM to have for a GPU.
08:24.000 --> 08:28.240
But as you'll quickly discover when it comes to training deep neural networks, that is a kind of puny
08:28.270 --> 08:30.490
GPU, but it's good enough for our purposes.
08:30.490 --> 08:32.920
We'll be able to use this for this class.
08:33.130 --> 08:38.110
Um, uh, but some things might just take a long time.
08:38.260 --> 08:45.100
Uh, this is a bit of code that I just copied from the original colab that Google prompted us with,
08:45.100 --> 08:51.970
which gives us a nice little, uh, printout of details behind this GPU, including how much memory
08:52.000 --> 08:54.250
we're using out of the 15GB.
08:54.280 --> 08:57.040
Although, of course, you can always watch it happening over here.
08:58.000 --> 09:02.110
Uh, so this is the T4 box.
09:02.110 --> 09:05.410
I'm now going to show you the A100 box.
09:05.410 --> 09:11.290
This is the super powered one, and I may splash out and use this from time to time.
09:11.290 --> 09:17.440
Just in the spirit of keeping this class moving fast and showing you, uh, great results really quickly.
09:17.590 --> 09:21.700
Uh, if we view the resources, you'll see what's going on.
09:21.700 --> 09:29.380
Now, we've got a GPU with 40 gigabytes of RAM, and that is a beefy GPU.
09:29.380 --> 09:34.240
That is something which we'll be able to use to do some hefty training.
09:34.480 --> 09:37.750
Um, and we can use this to print more details.
09:37.840 --> 09:46.930
You can see that we are using two megabytes, uh, when we're not doing anything, out of the 40GB
09:46.930 --> 09:49.270
of available memory.
09:49.870 --> 09:53.200
So that's the quick tour of what's going on with Colab.
09:53.200 --> 09:57.040
The one other thing I'll mention is the share button up here.
09:57.070 --> 10:03.880
Uh, if you press the share button, then you will see a very familiar interface, because if you use
10:03.880 --> 10:07.600
Google Drive at all, it looks just like everything else in Google Drive.
10:07.630 --> 10:13.600
You can share these notebooks and with different levels of permission with different groups, and use
10:13.600 --> 10:16.330
that as a way to collaborate really effectively.
10:16.330 --> 10:25.810
Uh, with friends, colleagues, coworkers on the, uh, Gen AI projects that you're working
10:25.810 --> 10:26.110
on.
10:26.110 --> 10:29.380
And it's a super effective way to collaborate, of course.
10:29.410 --> 10:32.980
And that's one of the great benefits of using the Google Colab setup.
10:33.220 --> 10:33.910
All right.
10:33.910 --> 10:35.500
I'll see you back for the next lecture.
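The transcript above refers to the little scriptlet Colab displays for reading a secret from code. A minimal sketch of that pattern: `google.colab.userdata.get` is the call Colab's own snippet shows, while the environment-variable fallback, the helper name, and the secret name `HF_TOKEN` are illustrative assumptions so the same notebook also runs outside Colab:

```python
import os
from typing import Optional

def get_secret(name: str) -> Optional[str]:
    """Read a Colab secret, falling back to an environment variable
    when not running inside Google Colab (hypothetical helper)."""
    try:
        from google.colab import userdata  # only importable inside Colab
        return userdata.get(name)
    except ImportError:
        return os.environ.get(name)

os.environ["HF_TOKEN"] = "hf_example_token"  # illustrative value only
print(get_secret("HF_TOKEN"))
```

Inside Colab, remember to flip the notebook-access switch for each secret, as the lecture notes; otherwise the lookup will fail even though the secret exists.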

400
week5/community-contributions/subtitles/srts/59170055/ja_JP.srt

@ -0,0 +1,400 @@
WEBVTT
00:00.740 --> 00:03.140
Welcome to the world of Google Colab.
00:03.140 --> 00:08.660
You may already be very familiar with Google Colab; even so, I hope to show you a couple of things here and there.
00:08.780 --> 00:13.340
But if not, prepare for a great tool.
00:13.610 --> 00:17.630
And as I say, there are other competitors to Google Colab that are pretty similar.
00:17.750 --> 00:26.990
But this is where I suggest you start, or do the same sort of thing on your cloud compute platform of choice.
00:26.990 --> 00:33.140
So the first thing you'll need is a Google account, if you don't already have one.
00:33.170 --> 00:40.970
So when you go to this URL, colab.research.google.com, if you don't already have a Google account,
00:40.970 --> 00:43.850
it will prompt you to create one.
00:43.850 --> 00:44.840
And it's worth it.
00:44.840 --> 00:46.880
There'll be tons we can do with it.
00:46.880 --> 00:50.240
So go ahead and do that if you need to.
00:50.510 --> 00:58.010
But for everyone else who has one, you'll see this, which gives you some information about Colab.
00:58.010 --> 01:00.650
There's a free tier and a paid tier.
01:00.650 --> 01:02.900
There's an awful lot you can do with just the free tier.
01:02.960 --> 01:07.320
In theory, you should be able to do almost everything in our class with the free tier.
01:07.320 --> 01:08.820
It just might take longer.
01:09.060 --> 01:14.430
And on the paid tier, you can control how much you spend, and it can be relatively small, a matter of a few dollars.
01:14.430 --> 01:20.490
So it's certainly something I'd recommend you consider, because it will let you get deeper into training,
01:20.490 --> 01:23.670
and it will be very satisfying.
01:23.760 --> 01:32.460
So when you bring up a new Colab notebook, it looks a bit like what gets served up right away.
01:32.460 --> 01:34.890
It looks very much like a Jupyter notebook.
01:34.920 --> 01:42.840
You have cells that can be code or text, and you can run code just by clicking in it and running it.
01:42.840 --> 01:45.990
And this is the kind of default one that comes up.
01:46.020 --> 01:49.950
What we can do is go File, New notebook in Drive.
01:49.950 --> 01:56.280
It says "in Drive" like that because this notebook is created in your Google Drive, which is so convenient.
01:56.280 --> 02:06.780
It has the same kind of construct as making Google Docs or Google Sheets, and it's done in a way that you can share it just as you'd share anything else.
02:06.780 --> 02:08.270
So here we are.
02:09.020 --> 02:14.840
And the first thing we see, in what looks like a Jupyter notebook, is a connect button.
02:14.840 --> 02:19.400
Let's start with "Change runtime type", because it shows the different kinds of runtime,
02:19.400 --> 02:23.690
the different kinds of VM we can run: a CPU.
02:23.690 --> 02:29.480
In other words, a normal box without one of those GPUs, graphics processing units
02:29.480 --> 02:35.900
that are so good at the parallel matrix maths behind neural networks.
02:35.900 --> 02:41.300
So we can just choose a CPU box.
02:41.330 --> 02:50.420
There's a low-end GPU box called a T4, which has a smaller GPU attached.
02:50.420 --> 02:58.340
It's available on the free plan, though with limits on how much you can use it.
02:58.550 --> 03:08.930
The L4 is a bit higher spec, and the A100 is the strongest one, which we'll use when we want to do things quickly.
03:08.960 --> 03:12.150
It costs a little more, but still we're talking dollars.
03:12.180 --> 03:14.940
Not massive amounts.
03:14.940 --> 03:16.050
$10 will get you a long way.
03:16.050 --> 03:26.580
With $10, I think you'd be able to keep training for about 24 to 48 hours, using that box constantly.
03:26.580 --> 03:34.170
So it's still not going to break the bank, but it is on the radar when you start using A100s a lot.
03:34.830 --> 03:40.020
Um, and of course, you always get to see how much you're spending, and you can always choose
03:40.020 --> 03:43.500
the cheaper option or the free option as you wish.
03:43.530 --> 03:46.650
And when you pick a box, you can have a high-RAM version of it.
03:46.650 --> 03:48.870
That's talking about the CPU RAM, not the GPU.
03:48.900 --> 03:55.290
The GPU RAM is tied to which instance you pick, but you can choose whether you want high CPU RAM or not.
03:55.290 --> 04:02.730
So let's go with a CPU box with a normal amount of RAM, and connect to it by pressing the connect button.
04:02.790 --> 04:07.680
It does take a little while to connect, because it has to hunt down a box and connect to it, but there we go.
04:07.710 --> 04:09.480
We're now attached to a box.
04:09.480 --> 04:15.750
Go to this dropdown and choose "View resources" to see what you're working with.
04:15.780 --> 04:17.040
You can see the system RAM.
04:17.040 --> 04:24.840
We've got almost 13 gigs on this box, and 225 gigs of disk space.
04:25.290 --> 04:35.910
And I can go over here and type something like print hello Data Science World and run it.
04:35.910 --> 04:39.150
And, shockingly, we get that message printed.
04:39.330 --> 04:42.990
So, hopefully no surprises there.
04:42.990 --> 04:46.530
It's a Jupyter notebook running in the cloud on a CPU.
04:46.560 --> 04:48.210
A couple of other things to mention.
04:48.210 --> 04:50.370
If you look down here, there's some useful stuff.
04:50.370 --> 04:56.520
This one opens a file browser onto your local disk.
04:56.550 --> 05:01.380
This local disk is ephemeral: it gets completely wiped once you've finished using this box.
05:01.380 --> 05:06.900
So consider it temporary; you can use it to write files there, perhaps your model or data
05:06.900 --> 05:13.290
that you'll then upload to the Hugging Face Hub, to download somewhere else later.
05:13.290 --> 05:14.880
But this is temporary.
05:14.910 --> 05:16.290
This is very important.
05:16.290 --> 05:21.000
This key is for what are called the secrets associated with your notebook.
05:21.000 --> 05:27.090
And this is where you put the environment variables you'll be able to access within your notebook.
05:27.120 --> 05:31.020
They should not be included in the notebook's code.
05:31.050 --> 05:33.930
And what you'll see here is my Anthropic API key.
05:33.960 --> 05:37.530
I have my OpenAI API key and my Hugging Face token.
05:37.530 --> 05:43.890
That's the thing we created in the last video, and I've associated them with this notebook.
05:43.920 --> 05:46.020
You can just press "Add new secret" to do that.
05:46.020 --> 05:48.270
And they come associated with all of my notebooks.
05:48.450 --> 05:51.870
Um, because I've got that set up in my Colab secrets.
05:51.870 --> 05:56.880
And you can create a new one by pressing "Add new secret" there.
05:57.270 --> 06:01.590
You can switch notebook access on here.
06:01.860 --> 06:05.280
I've just seen that there's a "Create Gemini key" option there.
06:05.280 --> 06:11.370
They're obviously cross-selling to Gemini, and I know I say that creating Gemini keys is hard.
06:11.370 --> 06:15.300
Maybe they have an easier path to creating Gemini API keys right there.
06:15.300 --> 06:16.740
So that would be worth trying.
06:16.770 --> 06:20.460
If you haven't already gone through the rigmarole of setting up a Gemini API key.
06:20.670 --> 06:27.180
I was going to say that later we'll find out how to access your key from within the Jupyter notebook.
06:27.180 --> 06:30.540
But wonderfully, they've given you the little scriptlet of code right there.
06:30.540 --> 06:36.690
That's what we'll be doing later to access our secrets within the code on the right.
06:36.690 --> 06:40.020
So you should set these up when you get a chance.
06:40.050 --> 06:45.630
When you're working with an actual notebook in particular, you flip this switch on so that when
06:45.630 --> 06:50.820
you execute this code in a cell, it has access to that secret.
06:51.120 --> 06:59.490
And of course, as you can imagine, the powerful thing about these secrets is that if you share this notebook with others, they get all of your code.
06:59.490 --> 07:02.100
But of course, they don't get your secrets.
07:02.100 --> 07:07.380
They'll have to enter their own secrets in order to run that code.
07:07.380 --> 07:12.240
And similarly, of course, when I share notebooks for you to use, the same thing applies.
07:12.240 --> 07:18.180
You'll need to put in your own tokens to take advantage of the code and run it against
07:18.180 --> 07:23.850
the frontier models, or use your Hugging Face Hub.
07:24.600 --> 07:26.700
Okay, let's close that down.
07:26.700 --> 07:30.930
So let me show you some of the more powerful boxes.
07:30.930 --> 07:34.500
You remember we can go here and change the runtime type.
07:34.500 --> 07:38.040
Click on T4 to use that box.
07:38.040 --> 07:40.080
And I did that earlier.
07:40.230 --> 07:45.150
I did that because it can take a little while to connect to some of these boxes.
07:45.150 --> 07:50.700
And with the really high-spec boxes like the A100, sometimes it just won't be available, and you'll
07:50.700 --> 07:54.180
come back and try again two minutes later, and then it will be.
07:54.180 --> 08:00.660
It invariably becomes available after a couple of tries, but sometimes they're oversubscribed and it takes a few attempts.
08:00.660 --> 08:02.580
So this is a T4 box.
08:02.580 --> 08:09.210
If I view resources, we'll see that we again have 12 and a bit gigs of system RAM.
08:09.210 --> 08:12.780
We have the same, or a slightly smaller, hard drive, I think.
08:12.960 --> 08:15.690
I think it was 225 before, but it's 200-something now.
08:15.690 --> 08:16.980
That's plenty of disk space.
08:16.980 --> 08:24.000
And we have a GPU with 15 GB of RAM, and 15 GB might sound like a huge amount of RAM for a GPU.
08:24.000 --> 08:28.240
But as you'll quickly discover when it comes to training deep neural networks, that's a rather puny GPU,
08:28.270 --> 08:30.490
but it's good enough for our purposes.
08:30.490 --> 08:32.920
We'll be able to use it for this class.
08:33.130 --> 08:38.110
Um, but some things might take a long time.
08:38.260 --> 08:45.100
This is a bit of code I copied from the original Colab that Google served us,
08:45.100 --> 08:54.250
which gives a nice printout of details of this GPU, including how much memory we're using out of the 15 GB.
08:54.280 --> 08:57.040
Although, of course, you can always watch it happening over here.
08:58.000 --> 09:02.110
So this is the T4 box.
09:02.110 --> 09:05.410
Now I'm going to show you the A100 box.
09:05.410 --> 09:11.290
This is the super-powered one, and I may splash out and use it from time to time.
09:11.290 --> 09:17.440
Just in the spirit of keeping this class moving fast and showing you great results really quickly.
09:17.590 --> 09:21.700
If we view the resources, you'll see what's going on.
09:21.700 --> 09:29.380
Now we've got a GPU with 40 gigabytes of RAM, and that is a beefy GPU.
09:29.380 --> 09:34.240
That's something we can use to do some hefty training.
09:34.480 --> 09:37.750
And we can use this to print more details.
09:37.840 --> 09:49.270
You can see that we're using two megabytes, when we're not doing anything, out of the 40 GB of available memory.
09:49.870 --> 09:53.200
So that's the quick tour of what's going on with Colab.
09:53.200 --> 09:57.040
The one other thing I'll mention is the share button up here.
09:57.070 --> 10:03.880
If you press the share button, you'll see a very familiar interface, because if you use Google Drive at all,
10:03.880 --> 10:07.600
it looks just like everything else in Google Drive.
10:07.630 --> 10:16.330
You can share these notebooks, with different levels of permission, with different groups, and use that as a really effective way to collaborate.
10:16.330 --> 10:26.110
With friends, colleagues, coworkers, on the Gen AI projects that you're working on.
10:26.110 --> 10:29.380
And it's a super effective way to collaborate, of course.
10:29.410 --> 10:32.980
And that's one of the great benefits of using the Google Colab setup.
10:33.220 --> 10:33.910
All right.
10:33.910 --> 10:35.500
I'll see you back for the next lecture.
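The lecture above inspects the runtime's GPU via "View resources" and a snippet copied from Google's starter notebook. As a rough, hedged alternative (not the lecture's actual snippet), a few lines of PyTorch can report the attached GPU, degrading gracefully on a CPU-only machine; the helper name and the exact output format are illustrative:

```python
import importlib.util

def describe_accelerator() -> str:
    """Return a short description of the available accelerator,
    or a 'CPU only' message when torch or CUDA is absent."""
    if importlib.util.find_spec("torch") is None:
        return "CPU only (torch not installed)"
    import torch
    if not torch.cuda.is_available():
        return "CPU only (no CUDA device)"
    props = torch.cuda.get_device_properties(0)
    # e.g. on a Colab T4 this would report the name and roughly 16 GB
    return f"{props.name}, {props.total_memory / 1e9:.0f} GB"

print(describe_accelerator())
```

On a Colab T4 or A100 runtime this should name the card and its memory; locally without a GPU it simply says so.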

451
week5/community-contributions/subtitles/srts/59170055/ko_KR.srt

@ -0,0 +1,451 @@
WEBVTT
00:00.740 --> 00:03.140
구글 콜랍의 세계에 잘 오셨어요
00:03.140 --> 00:08.660
구글 콜랍에 대해 잘 아실지도 모르지만 그래도 몇 가지 보여드릴게요
00:08.780 --> 00:13.340
하지만 그게 아니라면 멋진 도구를 준비하세요
00:13.610 --> 00:17.630
콜랍과 비슷한 제품을 구글에 검색한 경쟁 업체도 있어요
00:17.750 --> 00:26.990
여기서 시작하거나 같은 걸 하길 권합니다 클라우드 컴퓨팅 플랫폼에서요
00:26.990 --> 00:33.140
구글 계정이 없다면 가장 먼저 구글 계정을 만드세요
00:33.170 --> 00:40.970
URL Colab으로 가보죠 연구요 구글 검색요 구글 계정이 없는 분들도
00:40.970 --> 00:43.850
계정을 만들라고 할 거예요
00:43.850 --> 00:44.840
그럴 가치가 있어요
00:44.840 --> 00:46.880
할 수 있는 일이 아주 많을 거예요
00:46.880 --> 00:50.240
그러니까 필요하면 그렇게 하세요
00:50.510 --> 00:56.990
하지만 다른 분들을 위해 이걸 보시면 콜랍에 대한 정보가
00:56.990 --> 00:58.010
나와요
00:58.010 --> 01:00.650
무료 계층과 유료 계층이 있어요
01:00.650 --> 01:02.900
무료 계층으로도 할 수 있는 게 정말 많아요
01:02.960 --> 01:07.320
이론상으로는 무료로 수업 내용을 거의 다 들을 수 있어야 해요
01:07.320 --> 01:08.820
시간이 좀 더 걸릴 뿐이죠
01:09.060 --> 01:13.680
유료 계층은 여러분이 얼마를 쓸지 결정할 수 있습니다 몇 달러 안에 비교적 적은 금액일
01:13.680 --> 01:14.430
수도 있죠
01:14.430 --> 01:20.490
그래서 반드시 고려해 보시길 권합니다 더 깊이 훈련할 수 있고 아주
01:20.490 --> 01:23.670
만족스러울 테니까요
01:23.760 --> 01:32.460
콜랍을 새로 만들면 바로 나오는 것과 비슷하죠
01:32.460 --> 01:34.890
주피터 공책과 아주 비슷해요
01:34.920 --> 01:41.820
코드나 텍스트가 될 수 있는 셀이 있어요 클릭해서 실행하면 코드를 실행할
01:41.820 --> 01:42.840
수 있죠
01:42.840 --> 01:45.990
이건 기본값으로 나오는 거죠
01:46.020 --> 01:49.950
드라이브에 새 공책을 철할 수 있어요
01:49.950 --> 01:56.280
드라이브에서는 이렇게 된대요 구글 드라이브에서 만든 노트라서 아주 편리하죠
01:56.280 --> 02:03.360
구글 문서나 구글 시트를 만드는 것과 같은 구성 구조를 갖고 있어요 다른 걸 공유하듯
02:03.360 --> 02:06.780
공유할 수 있는 방식으로 이뤄졌죠
02:06.780 --> 02:08.270
그래서 이렇게 됐죠
02:09.020 --> 02:14.840
제일 먼저 보이는 건 주피터 노트북처럼 보이는 연결 버튼이에요
02:14.840 --> 02:19.400
런타임 형식 변경부터 보여드릴게요 다양한 런타임과
02:19.400 --> 02:23.690
CPU를 실행할 다양한 VM을 보여주니까요
02:23.690 --> 02:29.480
다시 말해 일반 상자에는 GPU가 없다는 거죠 그래픽 처리 장치가
02:29.480 --> 02:35.900
없는 상자요 신경망 뒤에 있는 평행 행렬 수학을 실행하는 장치요
02:35.900 --> 02:41.300
CPU 상자를 선택하면 됩니다 무료 계층에서 많이 사용 가능하죠
02:41.330 --> 02:50.420
T4라는 저사양 GPU 박스가 있는데 더 작은 GPU가 부착되어 있어요
02:50.420 --> 02:56.390
무료 요금제로도 구매하실 수 있어요 사용량에 따라 요금제한이 있지만 유료 요금제에서도
02:56.390 --> 02:58.340
아주 저렴해요
02:58.550 --> 03:05.630
L4는 사양이 좀 더 높은 것이고 A100은 가장 강력한 것으로
03:05.630 --> 03:08.930
빨리 작업할 때 쓸 거예요
03:08.960 --> 03:12.150
비용이 좀 더 들지만 그래도 몇 달러 수준이잖아요
03:12.180 --> 03:14.940
엄청난 양은 아니죠
03:14.940 --> 03:16.050
10달러 정도면 되죠
03:16.050 --> 03:23.850
10달러만 있으면 24시간에서 48시간 동안 계속 훈련할 수 있어요 그 상자를
03:23.850 --> 03:26.580
계속 사용하면서요
03:26.580 --> 03:32.370
그래서 큰돈은 아니지만 A100을 많이 사용하면
03:32.370 --> 03:34.170
비용이 눈에 띄게 되죠
03:34.830 --> 03:40.020
그래서 항상 얼마를 쓰는지 확인할 수 있고 언제든 더 싼 옵션이나
03:40.020 --> 03:43.500
무료 옵션을 선택할 수 있어요.
03:43.530 --> 03:46.650
상자를 고르면 높은 램 버전이 나와요
03:46.650 --> 03:48.870
GPU 말고 CPU, 램에 관한 거죠
03:48.900 --> 03:54.030
GPU 램은 여러분이 고르는 인스턴스와 연결되어 있지만 높은 CPU, 램을 원하는지를
03:54.030 --> 03:55.290
선택할 수 있어요
03:55.290 --> 04:02.730
보통 양의 램을 가진 CPU 박스로 가서 연결 버튼을 눌러 박스에 연결할게요
04:02.790 --> 04:07.680
연결하는 데 시간이 좀 걸려요 상자를 찾아서 연결해야 하니까요 하지만 됐어요
04:07.710 --> 04:09.480
지금 우린 상자에 연결돼 있어요
04:09.480 --> 04:15.750
이 드롭다운으로 가서 리소스 보기에서 작업 중인 걸 보세요
04:15.780 --> 04:17.040
시스템 램이 보이죠
04:17.040 --> 04:24.840
이 박스는 거의 13기가이고 디스크 공간은 225기가예요
04:25.290 --> 04:35.910
여기로 가서 hello data science world 같은 걸 입력해 실행할 수 있어요
04:35.910 --> 04:39.150
놀랍게도 그 메시지가 인쇄됐어요
04:39.330 --> 04:42.990
놀랄 일은 없길 바라요
04:42.990 --> 04:46.530
주피터 노트북이 클라우드에서 CPU를 실행하고 있어요
04:46.560 --> 04:48.210
몇 가지 더 언급할 게 있어요
04:48.210 --> 04:50.370
이 아래를 보면 유용한 게 있어요
04:50.370 --> 04:56.520
이건 일종의 브라우저를 엽니다 파일 브라우저요 로컬 디스크로요
04:56.550 --> 05:01.380
이 부분 디스크는 일시적이고 이 상자를 다 쓰면 완전히 지워져요
05:01.380 --> 05:06.900
임시라고 생각하고 파일을 작성할 수 있습니다 그 후 모델이나 데이터를
05:06.900 --> 05:13.290
허깅페이스 허브에 업로드할 수 있죠 나중에 다른 곳에서 다운로드할 수 있도록요
05:13.290 --> 05:14.880
하지만 이건 일시적인 거예요
05:14.910 --> 05:16.290
아주 중요한 거예요
05:16.290 --> 05:21.000
이 열쇠는 당신 수첩과 관련된 비밀들을 여는 열쇠예요
05:21.000 --> 05:26.520
여기 환경 변수를 입력할 수 있어요 노트북 안에서 액세스할 수 있는
05:26.520 --> 05:27.090
거죠
05:27.120 --> 05:31.020
그건 공책 코드에 포함되면 안 되죠
05:31.050 --> 05:33.930
여기 보이는 건 Anthropic API 키예요
05:33.960 --> 05:37.530
OpenAI API 키와 허깅페이스 토큰이 있어요
05:37.530 --> 05:43.890
지난 비디오에서 만든 거죠 이 공책과 관련 있어요
05:43.920 --> 05:46.020
Add New 시크릿을 누르면 돼요
05:46.020 --> 05:48.270
제 모든 공책과 관련이 있어요
05:48.450 --> 05:51.870
콜라베의 비밀이라고 적어 놨거든요
05:51.870 --> 05:56.880
새 비밀 추가를 눌러 새 걸 만들 수 있어요
05:57.270 --> 06:01.590
여기서 공책 사용 권한을 바꿀 수 있어요
06:01.860 --> 06:05.280
제미니 만들기 핵심 옵션이 있네요
06:05.280 --> 06:10.500
제미니 키와 교차 판매를 하는 거죠 제미니 키를 만드는 건 어려운
06:10.530 --> 06:11.370
일이에요
06:11.370 --> 06:15.300
제미니 API 키를 만드는 더 쉬운 길이 있을지도 몰라요
06:15.300 --> 06:16.740
시도해 볼 만하겠어요
06:16.770 --> 06:20.460
제미니 API 키를 설정하는 복잡한 과정을 아직 안 거쳤다면요
06:20.670 --> 06:26.340
그래서... 나중에 말씀드리려고 했는데 주피터 수첩에서 열쇠에 접근하는 방법을 알아낼
06:26.340 --> 06:27.180
거예요
06:27.180 --> 06:30.540
하지만 놀랍게도 작은 스크립트렛 코드를 제공했어요
06:30.540 --> 06:36.690
나중에 그렇게 할 겁니다 오른쪽 코드에 있는 우리 비밀에 접근하기 위해서요
06:36.690 --> 06:40.020
그러니까 기회 있을 때 이런 거 좀 만들어 놔요
06:40.050 --> 06:45.630
특히 실제 노트북으로 작업할 땐 이 스위치를 켜서 셀에서
06:45.630 --> 06:50.820
이 코드를 실행할 때 해당 비밀에 접근하게 해야죠
06:51.120 --> 06:56.100
그리고 아시다시피 이 비밀의 강력한 힘은 이 공책을 다른 사람과 공유하면
06:56.100 --> 06:59.490
코드를 모두 얻게 된다는 거죠
06:59.490 --> 07:02.100
물론 비밀을 공유하진 못하죠
07:02.100 --> 07:07.380
코드를 실행하려면 자신의 비밀을 입력해야 하죠
07:07.380 --> 07:12.240
마찬가지로 공책을 공유할 때도 같은 일이 적용되죠
07:12.240 --> 07:18.180
코드를 활용하고 프론티어 모델에 적용하려면 여러분의 토큰을 넣어야
07:18.180 --> 07:23.850
합니다 허깅페이스 허브 같은 것도 마찬가지죠
07:24.600 --> 07:26.700
이제 닫을게요
07:26.700 --> 07:30.930
좀 더 강력한 상자들을 보여드릴게요
07:30.930 --> 07:34.500
기억하시겠지만 런타임 형식을 바꿀 수 있어요
07:34.500 --> 07:38.040
T4를 클릭하면 사용하실 수 있어요
07:38.040 --> 07:40.080
아까도 그랬고요
07:40.230 --> 07:45.150
이렇게 한 이유는 상자에 연결하는 데 시간이 좀 걸리기 때문이에요
07:45.150 --> 07:50.700
A100처럼 고성능 박스가 있으면 구하기가 힘들어서 2분 후에
07:50.700 --> 07:54.180
다시 와서 시도해 봐야 구할 수 있어요
07:54.180 --> 07:58.710
언제나 두어 번 시도하면 구할 수 있지만 너무 많이 팔려서 몇 번
07:58.710 --> 08:00.660
시도해야 할 때도 있어요
08:00.660 --> 08:02.580
이건 T4 박스인데요
08:02.580 --> 08:09.210
리소스를 보면 시스템 램이 12기가 조금 넘게 있네요
08:09.210 --> 08:12.780
하드 드라이브는 좀 작지만요
08:12.960 --> 08:15.690
전에는 225기가였는데 지금은 200기가예요
08:15.690 --> 08:16.980
디스크 공간이 충분하죠
08:16.980 --> 08:24.000
15GB 램이 있는 GPU가 있는데 GPU에 15GB 램은 너무 많은 것 같아요
08:24.000 --> 08:28.240
하지만 심층 신경망 훈련에 관해선 자그마한 GPU 수준이지만
08:28.270 --> 08:30.490
우리 목적에는 충분하죠
08:30.490 --> 08:32.920
이 수업에서 활용할 수 있을 거예요
08:33.130 --> 08:38.110
하지만 어떤 일은 시간이 오래 걸릴 수도 있어요
08:38.260 --> 08:45.100
이건 구글 콜랍 원본 프롬프트에서 복사한 코드예요
08:45.100 --> 08:51.970
GPU 뒤의 세부 정보를 출력한 거죠 15GB에서 사용하는 메모리도
08:52.000 --> 08:54.250
포함해서요
08:54.280 --> 08:57.040
물론 여기서도 볼 수 있지만요
08:58.000 --> 09:02.110
이게 T4 상자예요
09:02.110 --> 09:05.410
A100 박스를 보여 드릴게요
09:05.410 --> 09:11.290
이건 슈퍼 파워로 가끔 돈을 펑펑 쓰며 사용할 수도 있어요
09:11.290 --> 09:17.440
이 수업을 빠르게 진행하고 좋은 결과를 빨리 보여드리기 위한 정신이죠
09:17.590 --> 09:21.700
리소스를 보면 어떤 상황인지 알 수 있어요
09:21.700 --> 09:29.380
40기가 램이 있고 GPU는 아주 큰 GPU죠
09:29.380 --> 09:34.240
격렬한 훈련을 하는 데 유용할 거예요
09:34.480 --> 09:37.750
이걸 이용해서 더 자세한 걸 인쇄할 수 있어요
09:37.840 --> 09:46.930
2메가바이트로 사용하고 있는 걸 보실 수 있어요 사용 가능한 메모리 40GB에서 아무것도
09:46.930 --> 09:49.270
하지 않을 때요
09:49.870 --> 09:53.200
콜랍에 대해 간단히 살펴봤는데요
09:53.200 --> 09:57.040
또 한 가지 언급할 것은 공유 버튼이에요
09:57.070 --> 10:03.880
공유 버튼을 누르면 아주 익숙한 인터페이스가 보일 겁니다 구글 드라이브를 사용한다면 구글
10:03.880 --> 10:07.600
드라이브의 다른 모든 것과 똑같을 테니까요
10:07.630 --> 10:13.600
허가 수준에 따라 다른 그룹과 이 노트를 공유할 수 있어요 협업하는
10:13.600 --> 10:16.330
데 효과적인 방법이죠
10:16.330 --> 10:26.110
여러분이 진행 중인 생성형 AI 프로젝트의 친구, 동료들과 함께요
10:26.110 --> 10:29.380
물론 협업하기에 아주 효과적인 방법이죠
10:29.410 --> 10:32.980
구글 Colab 설정이 가진 가장 큰 장점 중 하나죠
10:33.220 --> 10:33.910
좋아요
10:33.910 --> 10:35.500
다음 강의 때 뵙죠

34
week5/community-contributions/subtitles/srts/59170057/en_US.srt

@ -0,0 +1,34 @@
WEBVTT
00:00.530 --> 00:05.750
And so at the beginning of this week, we started by talking about hugging face pipelines.
00:05.750 --> 00:08.750
And you used all the different pipeline.
00:08.750 --> 00:13.130
Actually not all of them, because there's so many, but we use many of the most common pipelines
00:13.130 --> 00:15.980
to do every day inference tasks.
00:15.980 --> 00:23.000
And now today we looked at Tokenizers and you are well versed in Tokenizers, and hopefully a lot has
00:23.000 --> 00:27.560
come together in terms of your understanding of what they mean and how they work, and special tokens
00:27.560 --> 00:29.060
and all the like.
00:29.150 --> 00:37.760
So next time, next time we start to work with models and this is when we can use the underlying hugging
00:37.790 --> 00:44.870
face code that is a wrapper around PyTorch or TensorFlow code to generate text and compare the results
00:44.870 --> 00:48.170
across multiple open source models.
00:48.170 --> 00:51.590
And that's going to be a ton of fun and I'm looking forward to it.

25
week5/community-contributions/subtitles/srts/59170057/ja_JP.srt

@ -0,0 +1,25 @@
WEBVTT
00:00.530 --> 00:05.750
そして今週の冒頭では、 まずハグフェイス・パイプラインについて話した。
00:05.750 --> 00:08.750
そして、 あなたはすべての異なるパイプラインを使用した。
00:08.750 --> 00:15.980
しかし、 私たちは日常的な推論作業に最も一般的なパイプラインの多くを使用しています。
00:15.980 --> 00:23.000
そして今日、 トーケナイザーについて調べました。 トーケナイザーの意味や仕組み、
00:23.000 --> 00:29.060
特別なトークンなどについて、 多くのことが理解できたと思います。
00:29.150 --> 00:37.760
そこで次回は、 モデルを使って作業を開始し、 PyTorchやTensorFlowのコードのラッパーである、
00:37.790 --> 00:48.170
ハギング・フェイスの基礎的なコードを使ってテキストを生成し、 複数のオープンソースのモデル間で結果を比較できるようにする。
00:48.170 --> 00:51.590
それはとても楽しいことだし、 楽しみにしているよ。

34
week5/community-contributions/subtitles/srts/59170057/ko_KR.srt

@ -0,0 +1,34 @@
WEBVTT
00:00.530 --> 00:05.750
이번 주 초에는 허깅페이스 파이프라인부터 얘기해 봤는데요
00:05.750 --> 00:08.750
다양한 파이프라인을 사용했죠
00:08.750 --> 00:13.130
사실 전부는 아니죠 너무 많으니까요 하지만 가장 일반적인 파이프라인을 많이
00:13.130 --> 00:15.980
사용합니다 매일 추론 작업을 하기 위해서요
00:15.980 --> 00:23.000
오늘은 토큰라이저에 대해 살펴봤는데요 토큰라이저를 잘 아실 겁니다 토큰의
00:23.000 --> 00:27.560
의미와 작동 방식 특별한 토큰 등을 잘 이해하셨길
00:27.560 --> 00:29.060
바라요
00:29.150 --> 00:37.760
다음번엔 모델로 작업하기 시작할 때죠 기반이 되는 허깅페이스 코드를 사용할 때인데요 PyTorch나
00:37.790 --> 00:44.870
텐서플로우 코드를 감싸 텍스트를 생성하고 여러 오픈 소스 모델에서 결과를
00:44.870 --> 00:48.170
비교하는 거죠
00:48.170 --> 00:51.590
정말 재미있을 거예요 기대가 돼요

229
week5/community-contributions/subtitles/srts/59170093/en_US.srt

@ -0,0 +1,229 @@
WEBVTT
00:00.410 --> 00:02.180
I'm delighted to see you again.
00:02.180 --> 00:10.130
As we get started with day three of week three of our adventure and the, uh, things are going to get
00:10.130 --> 00:11.900
deeper this time.
00:11.900 --> 00:18.140
We're going to roll our sleeves up as we get into the lower level APIs of hugging Face Transformers
00:18.140 --> 00:18.890
library.
00:19.490 --> 00:24.800
And as always, just a quick reminder you can code against frontier models, you can build AI assistants,
00:24.800 --> 00:26.330
and you can use pipelines.
00:26.330 --> 00:26.870
Pipelines.
00:26.870 --> 00:35.150
What we did last time, such an easy way to use the wide variety of open source inference tasks available
00:35.150 --> 00:36.290
from Hugging Face.
00:36.290 --> 00:39.260
Today, though, we get lower level.
00:39.350 --> 00:45.470
As I mentioned, there are these two things, tokenizers and models, that are part of the
00:45.470 --> 00:49.430
way we interact with transformers at a lower level than pipelines.
00:49.430 --> 00:50.630
And that's what we're going to be doing today.
00:50.630 --> 00:53.000
We're going to be starting with Tokenizers.
00:53.000 --> 00:58.100
We're going to be learning how to translate between text and tokens for different models, and we're
00:58.100 --> 01:02.600
going to be understanding something called chat templates, which I'm hoping is going to make a few
01:02.600 --> 01:03.890
different things come together.
01:03.920 --> 01:06.170
It's quite an important moment.
01:06.440 --> 01:13.700
Um, so first, to introduce this type of object called a tokenizer in hugging face, it is an object
01:13.700 --> 01:20.870
which translates as you can imagine between text, a string and tokens, a list of numbers.
01:21.020 --> 01:23.930
Um, and there are very simply two functions.
01:23.930 --> 01:26.960
Two things you need to know about encoding and decoding.
01:26.960 --> 01:32.060
Encode takes you from strings to tokens, and decode takes you back again.
01:32.060 --> 01:33.590
And we will see that.
01:33.590 --> 01:38.810
And of course, there's just a little bit of nuance and fiddly stuff, but that's basically all there
01:38.810 --> 01:39.920
is to it.
01:40.370 --> 01:48.290
A tokenizer contains a vocab, which is all of the different fragments of characters of one character,
01:48.290 --> 01:53.150
two, three, four characters shoved together that make up that token.
01:53.360 --> 01:57.110
Um, and it can also include as well as these fragments of characters.
01:57.110 --> 01:59.870
It can include something called a special token.
01:59.900 --> 02:07.880
A few of these special tokens where a special token is again a single token that is going to tell the
02:07.880 --> 02:15.620
the model something that it represents, like start of a sentence or beginning of a chat with the assistant
02:15.620 --> 02:17.210
or something like that.
02:17.660 --> 02:23.150
And as I mentioned before, if you're thinking, okay, but how do we train a neural network architecture,
02:23.150 --> 02:28.730
how do we how do we how do we construct a neural network architecture so that it expects a particular
02:28.730 --> 02:33.470
token to represent something like start of sentence or something like that?
02:33.470 --> 02:35.420
And there's no magic answer.
02:35.420 --> 02:37.370
It just simply comes down to training.
02:37.370 --> 02:43.130
If it's seen enough examples in its training data that has that special token being used for that purpose,
02:43.160 --> 02:46.550
it learns that that is the objective of that special token.
02:46.550 --> 02:52.400
But there's nothing fundamental in the architecture, generally speaking, that expects one particular
02:52.400 --> 02:57.890
type of token over another and also a tokenizer.
02:57.890 --> 03:02.810
In addition to doing this, mapping text to tokens and having a vocab also has something called a chat
03:02.840 --> 03:03.590
template.
03:03.590 --> 03:07.320
At least for a specific type of model, as we'll see.
03:07.320 --> 03:14.160
And that knows how to take a set of messages where you've had system message, user message and so on
03:14.160 --> 03:16.950
and turn that into just a set of tokens.
03:16.950 --> 03:20.940
And that will all make sense when you see a real example.
03:21.630 --> 03:29.520
So every model in hugging face, every open source model has its own tokenizer associated with it.
03:29.520 --> 03:34.590
There's not just one general tokenizer that applies to models because it depends on how the model was
03:34.590 --> 03:35.190
trained.
03:35.220 --> 03:40.920
The tokenizer, um, I mean, obviously multiple models could share the same tokenizer, but but what
03:40.920 --> 03:46.200
matters is which tokenizer was used when the model was trained, because you have to use exactly the
03:46.200 --> 03:53.040
same tokenizer during inference time when you're running it, otherwise you will get back bad results.
03:53.130 --> 03:57.390
Uh, maybe that's an experiment we should try at some point, but I'll you'll see why.
03:57.390 --> 04:01.380
That would be a very unproductive experiment in just a moment.
04:01.380 --> 04:10.590
So for today we're going to look at the tokenizer for llama 3.1 which is the iconic family of models
04:10.590 --> 04:12.120
from Llama that paved.
04:12.240 --> 04:12.420
Sorry.
04:12.450 --> 04:12.660
From.
04:12.690 --> 04:13.230
From Llama.
04:13.230 --> 04:13.890
From Meta.
04:13.920 --> 04:17.010
That paved the way for open source models.
04:17.010 --> 04:20.670
And we're going to look at a model called Phi three from Microsoft.
04:20.670 --> 04:26.760
And we're going to look at Qwen 2 again, the powerhouse from Alibaba Cloud, which leads the way
04:26.760 --> 04:29.400
in many of the different metrics.
04:29.400 --> 04:35.790
We're also going to look at something very different, which is a model called Star Coder two, which
04:35.790 --> 04:41.010
is a model for generating code.
04:41.010 --> 04:44.970
We're going to look at its tokenizer to see any differences.
04:45.270 --> 04:51.660
Um, and the reason that these two have similar looking graphics is that Lama 3.1 and Phi three are
04:51.660 --> 04:53.520
extremely similar.
04:53.550 --> 05:00.780
Qwen 2 perhaps is also very similar, but it's got more of a focus on, uh, Chinese as well
05:00.780 --> 05:01.650
as English.
05:01.650 --> 05:05.580
And Star Coder two is of course more about coding.
05:05.700 --> 05:12.120
So with that introduction, we're going to head over to Google Colab and we're going to do some tokenizing.

169
week5/community-contributions/subtitles/srts/59170093/ja_JP.srt

@ -0,0 +1,169 @@
WEBVTT
00:00.410 --> 00:02.180
また会えて嬉しいよ。
00:02.180 --> 00:11.900
冒険の3週目、 3日目をスタートさせるにあたり、 ええと、 今回は物事がより深くなりそうだ。
00:11.900 --> 00:18.890
我々は、 Face Transformersライブラリを抱きしめるための低レベルのAPIに入るために、 腕まくりをするつもりだ。
00:19.490 --> 00:26.330
そしていつものように、 フロンティアモデルに対してコードを書くことも、 AIアシスタントを構築することも、 パイプラインを使用することもできることを簡単に覚えておいてほしい。
00:26.330 --> 00:26.870
パイプライン
00:26.870 --> 00:36.290
前回やったことは、 Hugging Faceから入手可能なオープンソースの推論タスクを幅広く利用する簡単な方法だ。
00:36.290 --> 00:39.260
今日、 私たちはもっと低いレベルにいる。
00:39.350 --> 00:45.470
先ほど申し上げたように、 パイプラインよりも低いレベルでトランスフォーマーとやりとりする方法の一部として、
00:45.470 --> 00:49.430
トークナイザーとモデルがあります。
00:49.430 --> 00:50.630
そして、 それが今日私たちがやろうとしていることだ。
00:50.630 --> 00:53.000
まずはトーケナイザーから。
00:53.000 --> 01:03.890
テキストとトークンの間の翻訳をモデル別に学び、 チャット・テンプレートというものを理解するつもりです。
01:03.920 --> 01:06.170
非常に重要な瞬間だ。
01:06.440 --> 01:13.700
ええと、 まず、 ハギング・フェイスのトークナイザーと呼ばれるオブジェクトを紹介すると、
01:13.700 --> 01:20.870
これはテキスト(文字列)とトークン(数値のリスト)の変換を行うオブジェクトです。
01:21.020 --> 01:23.930
ええと、 簡単に言うと2つの機能がある。
01:23.930 --> 01:26.960
エンコードとデコードについて知っておくべき2つのこと。
01:26.960 --> 01:32.060
エンコードすると文字列からトークンになり、 デコードすると元に戻る。
01:32.060 --> 01:33.590
そして私たちはそれを見ることになる。
01:33.590 --> 01:39.920
もちろん、 ちょっとしたニュアンスや手間のかかることはあるが、 基本的にはそれだけだ。
01:40.370 --> 01:53.150
トークナイザーにはボキャブラリーがあり、 1文字、 2文字、 3文字、 4文字など、 トークンを構成するさまざまな文字の断片がすべて含まれています。
01:53.360 --> 01:57.110
そして、 このような文字の断片を含むこともできる。
01:57.110 --> 01:59.870
スペシャル・トークンと呼ばれるものが含まれることもある。
01:59.900 --> 02:07.880
特別なトークンとは、 文の始まりやアシスタントとのチャットの始まりなど、
02:07.880 --> 02:17.210
モデルに何かを伝えるためのトークンです。
02:17.660 --> 02:23.150
前にも言ったように、 ニューラルネットワーク・アーキテクチャをどのように訓練すればいいのか、
02:23.150 --> 02:28.730
特定のトークンが文頭などを表すと期待できるようにニューラルネットワーク・アーキテクチャをどのように構築すればいいのか、
02:28.730 --> 02:33.470
と考えているのなら、 どうすればいいのだろう?
02:33.470 --> 02:35.420
そして、 魔法のような答えはない。
02:35.420 --> 02:37.370
単純にトレーニングに尽きる。
02:37.370 --> 02:46.550
もし学習データの中で、 その特別なトークンがその目的に使われている例を十分に見ていれば、 それがその特別なトークンの目的であることを学習する。
02:46.550 --> 02:52.400
しかし、 一般的に言って、 ある特定のタイプのトークンを他のトークンよりも
02:52.400 --> 02:57.890
期待するような基本的なものは、 アーキテクチャには何もない。
02:57.890 --> 03:03.590
そしてトークナイザーは、 テキストをトークンにマッピングしボキャブラリーを持つことに加えて、 チャットテンプレートと呼ばれるものも持っている。
03:03.590 --> 03:07.320
少なくとも、 特定のタイプのモデルについては、 これからわかるだろう。
03:07.320 --> 03:16.950
そして、 システム・メッセージやユーザー・メッセージなどのメッセージ・セットを、 トークン・セットに変換する方法を知っている。
03:16.950 --> 03:20.940
そしてそれは、 実際の例を見ればすべて理解できるだろう。
03:21.630 --> 03:29.520
だから、 ハギング・フェイスのすべてのモデル、 すべてのオープン・ソース・モデルには、 それ自身のトークナイザーが関連付けられている。
03:29.520 --> 03:35.190
モデルがどのようにトレーニングされたかに依存するので、 モデルに適用される一般的なトークナイザーは1つだけではありません。
03:35.220 --> 03:53.040
しかし、 重要なのは、 モデルが学習されたときにどのトークナイザーが使われたかということだ。
03:53.130 --> 03:57.390
それはいずれやってみるべき実験かもしれない。
03:57.390 --> 04:01.380
そんなことをしたら、 すぐに非生産的な実験になってしまう。
04:01.380 --> 04:12.120
というわけで、 今日は llama 3.1 のトークナイザーを見てみましょう。 道を切り開いた、 ラマの象徴的なモデル・ファミリーです。
04:12.240 --> 04:12.420
申し訳ない。
04:12.450 --> 04:12.660
からだ。
04:12.690 --> 04:13.230
ラマより
04:13.230 --> 04:13.890
メタより
04:13.920 --> 04:17.010
それがオープンソースモデルへの道を開いた。
04:17.010 --> 04:20.670
マイクロソフトのファイ3というモデルを見てみよう。
04:20.670 --> 04:29.400
アリババ・クラウドの強豪であり、 さまざまな指標で業界をリードしているQwen 2を再び見てみよう。
04:29.400 --> 04:41.010
スター・コーダー2と呼ばれる、 コードを生成するためのモデルだ。
04:41.010 --> 04:44.970
そのトークナイザーを見て、 違いを確認する。
04:45.270 --> 04:53.520
ええと、 この2つが似たようなグラフィックなのは、 ラマ3. 1とファイ3は極めてよく似ている。
04:53.550 --> 05:01.650
Qwen 2もよく似ていますが、 英語だけでなく中国語にも力を入れています。
05:01.650 --> 05:05.580
Star Coder 2は、 もちろんコーディングに関するものだ。
05:05.700 --> 05:12.120
それでは、 Google Colabに移動し、 トークン化を行います。

220
week5/community-contributions/subtitles/srts/59170093/ko_KR.srt

@ -0,0 +1,220 @@
WEBVTT
00:00.410 --> 00:02.180
다시 만나서 반가워요
00:02.180 --> 00:11.900
모험 3주 차 3일째가 시작됐는데요 이번엔 상황이 더 심화될 거예요
00:11.900 --> 00:18.890
허깅페이스 트랜스포머 라이브러리의 하위 레벨 API를 살펴보면서 팔을 걷어붙일게요
00:19.490 --> 00:24.800
다시 한번 말씀드리지만 프론티어 모델에 대한 코드도 만들 수 있고 인공지능 비서를 만들 수도 있고 파이프라인을
00:24.800 --> 00:26.330
사용할 수도 있어요
00:26.330 --> 00:26.870
파이프라인요
00:26.870 --> 00:35.150
지난번에 했던 건 정말 쉬운 방법이었죠 허깅페이스에서 가능한 다양한 오픈 소스 추론 작업을 사용하는
00:35.150 --> 00:36.290
거요
00:36.290 --> 00:39.260
오늘은 낮은 레벨로 가죠
00:39.350 --> 00:45.470
앞서 언급했듯이 토큰라이저와 모델은 파이프라인보다 낮은 수준에서
00:45.470 --> 00:49.430
트랜스포머와 상호 작용하는 방법의 일부죠
00:49.430 --> 00:50.630
오늘 그걸 할 거예요
00:50.630 --> 00:53.000
토큰라이저부터 시작할 거예요
00:53.000 --> 00:58.100
텍스트와 토큰을 다른 모델로 변환하는 법을 배울 거예요 채팅 템플릿이라는
00:58.100 --> 01:02.600
것도 이해할 거고요 이로써 몇 가지 다른 것들이 하나로 합쳐지면
01:02.600 --> 01:03.890
좋겠네요
01:03.920 --> 01:06.170
중요한 순간이에요
01:06.440 --> 01:13.700
먼저 허깅페이스의 토큰라이저라는 유형의 객체를 소개할게요
01:13.700 --> 01:20.870
이 객체는 문자열과 토큰, 숫자 목록을 번역하는 거예요
01:21.020 --> 01:23.930
기능은 두 가지로 아주 간단해요
01:23.930 --> 01:26.960
암호화와 해독에 관해 알아야 할 게 두 가지 있어요
01:26.960 --> 01:32.060
인코드는 문자열에서 토큰으로 디코드는 다시 과거로 돌아가게 하죠
01:32.060 --> 01:33.590
두고 봐야죠
01:33.590 --> 01:39.920
물론 약간의 뉘앙스와 성가신 부분이 있지만 기본적으로 그게 다예요
01:40.370 --> 01:48.290
토큰라이저에는 어휘가 들어 있어요 한 글자에서부터 네 글자까지 다양한
01:48.290 --> 01:53.150
문자를 조합해서 토큰을 구성하는 거죠
01:53.360 --> 01:57.110
또한 이런 캐릭터의 단편들도 포함할 수 있죠
01:57.110 --> 01:59.870
특별한 토큰이라는 것도 포함할 수 있어요
01:59.900 --> 02:07.880
몇몇 특별한 토큰은 하나의 토큰으로 모델에게 그것이 나타내는
02:07.880 --> 02:15.620
것을 알려줍니다 문장의 시작이나 비서와의 채팅 시작 같은
02:15.620 --> 02:17.210
것을요
02:17.660 --> 02:23.150
아까도 언급했지만 신경망 구조를 어떻게 훈련할지 궁금하실
02:23.150 --> 02:28.730
거예요 신경망 구조를 어떻게 구성해야 특정 토큰이 문장의
02:28.730 --> 02:33.470
시작 같은 걸 나타낼지 궁금하실 거예요
02:33.470 --> 02:35.420
마법 같은 답은 없어요
02:35.420 --> 02:37.370
훈련만 잘하면 돼요
02:37.370 --> 02:43.130
훈련 데이터에서 특정 토큰을 사용하는 예제를 충분히 봤다면 그게 그
02:43.160 --> 02:46.550
특별한 토큰의 목표라는 걸 알게 되죠
02:46.550 --> 02:52.400
하지만 일반적으로 아키텍처에는 특정 종류의 토큰을 다른 토큰보다
02:52.400 --> 02:57.890
기대하게 만드는 근본적인 요소는 없어요 토큰라이저도 마찬가지고요
02:57.890 --> 03:02.810
토큰라이저는 텍스트를 토큰에 매핑하고 어휘를 갖는 것 외에 채팅 템플릿이라는 것도
03:02.840 --> 03:03.590
갖고 있죠
03:03.590 --> 03:07.320
적어도 특정 모델은 그렇죠 곧 보시겠지만요
03:07.320 --> 03:14.160
시스템 메시지나 사용자 메시지 같은 메시지 세트를 토큰
03:14.160 --> 03:16.950
세트로 바꿀 수 있죠
03:16.950 --> 03:20.940
실제 예시를 보면 이해가 될 거예요
03:21.630 --> 03:29.520
허깅페이스의 모든 모델, 오픈 소스 모델은 그와 관련된 토큰라이저가 있어요
03:29.520 --> 03:35.190
모든 모델에 적용되는 일반 토큰라이저가 하나만 있는 게 아니에요 모델이 어떻게 훈련되느냐에 따라 다르거든요
03:35.220 --> 03:40.920
토큰라이저는, 음 여러 모델이 같은 토큰라이저를 공유할 수 있지만, 중요한 것은 어떤 토큰라이저가
03:40.920 --> 03:46.200
훈련된 모델에서 사용되었는가 입니다. 왜냐하면 실행중인 추론기간에 정확히 동일한
03:46.200 --> 03:53.040
토큰라이저를 사용해야 하기 때문입니다. 그렇지 않으면 나쁜 결과를 얻을 수 있기 때문이죠.
03:53.130 --> 03:57.390
언젠가 한번 해 봐야겠지만 이유는 알게 될 거예요
03:57.390 --> 04:01.380
당장은 비생산적인 실험이 될 거예요
04:01.380 --> 04:12.120
오늘은 라마 3.1 토큰라이저를 살펴볼 거예요 라마의 상징적인 모델 가족이에요
04:12.240 --> 04:12.420
미안해요
04:12.450 --> 04:12.660
04:12.690 --> 04:13.230
라마한테서요
04:13.230 --> 04:13.890
메타가 보냈어요
04:13.920 --> 04:17.010
오픈 소스 모델의 길을 닦았죠
04:17.010 --> 04:20.670
마이크로소프트의 파이 3 모델을 보죠
04:20.670 --> 04:26.760
Qwen 2도 다시 보죠 알리바바 클라우드의 동력원으로 여러 지표에서
04:26.760 --> 04:29.400
선두를 달리고 있죠
04:29.400 --> 04:35.790
다른 것도 살펴볼 거예요 Star Coder 2라는
04:35.790 --> 04:41.010
모델인데 코드 생성을 위한 모델이죠
04:41.010 --> 04:44.970
토큰라이저를 살펴보고 차이점을 찾아볼게요
04:45.270 --> 04:51.660
이 둘의 그래픽이 비슷한 이유는 라마 3.1과 파이 3가 굉장히
04:51.660 --> 04:53.520
비슷해요
04:53.550 --> 05:01.650
Qwen 2도 비슷하긴 하지만 영어와 중국어에 더 중점을 두고 있어요
05:01.650 --> 05:05.580
스타 코더 2는 물론 코딩에 관한 거죠
05:05.700 --> 05:12.120
소개를 마쳤으니 구글 Colab으로 가서 토큰라이징을 해보죠

58
week5/community-contributions/subtitles/srts/59170107/en_US.srt

@ -0,0 +1,58 @@
WEBVTT
00:01.370 --> 00:08.900
And once again, it's that moment when you take a pause and congratulate yourself on another day of
00:08.900 --> 00:17.270
skills learned and fantastic achievements of being able to be an expert in the hugging face Transformers
00:17.270 --> 00:18.110
library.
00:18.110 --> 00:22.640
In addition to using pipelines and tokenizers, you can now use models.
00:22.640 --> 00:29.120
You can look at models, you can load different models, and you can run models to do hopefully more
00:29.120 --> 00:36.260
than just tell jokes, but also other kinds of text generation tasks like the ones we've done in previous
00:36.260 --> 00:37.250
weeks.
00:37.340 --> 00:44.420
Uh, you, uh, also can, of course, code confidently with frontier model APIs and build AI assistants,
00:44.420 --> 00:48.320
including multimodal AI assistants, and use tools.
00:48.320 --> 00:55.820
So all of this together, uh, totals a significant amount of learning that you've done already, with
00:55.820 --> 00:58.250
a lot more exciting stuff ahead.
00:58.520 --> 01:03.890
The next session, we're going to do one more project with Tokenizers and models, just to give you
01:03.890 --> 01:05.720
a little bit more experience.
01:05.810 --> 01:12.500
Uh, and we're also going to yeah, just keep keep running inference on open source models and implement
01:12.500 --> 01:19.520
an LLM solution that's going to combine a frontier model call with an open source model call.
01:19.520 --> 01:22.610
And it will be a useful business application.
01:22.610 --> 01:28.400
And it's going to really wrap up this week of learning about hugging face and open source.
01:28.400 --> 01:30.140
So looking forward to it.
01:30.140 --> 01:31.220
I will see you then.

43
week5/community-contributions/subtitles/srts/59170107/ja_JP.srt

@ -0,0 +1,43 @@
WEBVTT
00:01.370 --> 00:08.900
そしてまた、 ハギング・フェイスのトランスフォーマーライブラリーのエキスパートになるためのスキルを学び、
00:08.900 --> 00:18.110
素晴らしい功績を残したもう一日の自分を祝福する瞬間だ。
00:18.110 --> 00:22.640
パイプラインとトークナイザーに加えて、 モデルも使えるようになった。
00:22.640 --> 00:29.120
モデルを見たり、 さまざまなモデルを読み込んだり、 ジョークを言うだけでなく、
00:29.120 --> 00:37.250
前の週にやったような他の種類のテキスト生成タスクを実行することもできる。
00:37.340 --> 00:44.420
もちろん、 フロンティアモデルAPIを使って自信を持ってコーディングし、 マルチモーダルAIアシスタントを含むAIアシスタントを構築し、
00:44.420 --> 00:48.320
ツールを使うこともできる。
00:48.320 --> 00:58.250
だから、 これらすべてを合わせると、 君たちはすでにかなりの量の学習をしてきたことになる。
00:58.520 --> 01:05.720
次のセッションでは、 トーケナイザーとモデルを使ったプロジェクトをもう1つ行います。
01:05.810 --> 01:19.520
さらに、 オープンソースモデルで推論を実行し続け、 フロンティアモデルコールとオープンソースモデルコールを組み合わせたLLMソリューションを実装するつもりだ。
01:19.520 --> 01:22.610
そして、 ビジネス・アプリケーションとしても役立つだろう。
01:22.610 --> 01:28.400
そして、 ハギング・フェイスとオープンソースについて学んだこの1週間を締めくくることになる。
01:28.400 --> 01:30.140
だから楽しみにしている。
01:30.140 --> 01:31.220
それではまた。

52
week5/community-contributions/subtitles/srts/59170107/ko_KR.srt

@ -0,0 +1,52 @@
WEBVTT
00:01.370 --> 00:08.900
다시 한번 잠시 멈춰서 축하하는 순간입니다 오늘도 허깅페이스
00:08.900 --> 00:18.110
트랜스포머 라이브러리에서 기술을 배우고 놀라운 성과를 거뒀으니까요
00:18.110 --> 00:22.640
파이프라인과 토큰라이저 외에도 모델을 사용할 수 있죠
00:22.640 --> 00:29.120
모델을 보고 다른 모델을 로드하고 모델을 실행해 농담만 하는 게
00:29.120 --> 00:37.250
아니라 다른 종류의 텍스트 생성 작업도 할 수 있어요 지난 주에 했던 것처럼요
00:37.340 --> 00:44.420
또한 프론티어 모델 API로 자신 있게 코드를 작성하고 멀티모덜 인공지능 어시스턴트를 비롯한 인공지능
00:44.420 --> 00:48.320
어시스턴트를 제작할 수 있으며 도구를 사용할 수도 있죠
00:48.320 --> 00:55.820
지금까지 배운 걸 합산하면 상당한 양의 학습이 될 거예요 앞으로 더 많은
00:55.820 --> 00:58.250
걸 배울 수 있겠죠
00:58.520 --> 01:03.890
다음 시간에는 Tokenizers와 모델로 프로젝트를 하나 더 할 겁니다 경험을
01:03.890 --> 01:05.720
좀 더 드릴 수 있도록요
01:05.810 --> 01:12.500
그리고 오픈 소스 모델에 대한 추론을 계속 실행하고 LLM 솔루션을 구현할
01:12.500 --> 01:19.520
겁니다 프런티어 모델 호출과 오픈 소스 모델 호출을 결합하는 거죠
01:19.520 --> 01:22.610
유용한 비즈니스 응용 프로그램이 될 거예요
01:22.610 --> 01:28.400
허깅페이스와 오픈 소스 수업을 이걸로 마무리하죠
01:28.400 --> 01:30.140
정말 기대돼요
01:30.140 --> 01:31.220
그때 봐요

154
week5/community-contributions/subtitles/srts/59170135/en_US.srt

@ -0,0 +1,154 @@
WEBVTT
00:00.830 --> 00:01.940
Welcome.
00:01.940 --> 00:02.870
It's week three.
00:02.870 --> 00:03.800
It's day four.
00:03.830 --> 00:11.720
We are back on the adventure in open source land, back investigating how to run inference over open
00:11.720 --> 00:12.890
source models.
00:13.130 --> 00:17.120
And today it is time to look at the model class in Hugging Face.
00:17.120 --> 00:20.390
We talked originally about pipeline API, the high level API.
00:20.420 --> 00:26.090
Then we started talking about the low level API, beginning with Tokenizers and now onto the model.
00:26.150 --> 00:28.580
So what can you already do?
00:28.610 --> 00:33.290
Of course, in addition to coding with frontier models and building multimodal AI assistants, what you can
00:33.290 --> 00:38.270
now do is use Hugging Face pipelines and tokenizers.
00:38.300 --> 00:41.270
New skills, new classes.
00:41.270 --> 00:49.010
We're going to get into the models part of hugging face, which is when you actually create a transformer
00:49.010 --> 00:51.860
and run it to generate text.
00:51.860 --> 00:56.300
And we'll be comparing results across five different models.
00:56.300 --> 01:02.090
I'm actually going to be doing three of them with you and leaving you to experiment with the other two,
01:02.210 --> 01:07.910
uh, so that you can have an extra exercise, but I'll have all of the code ready for you.
01:08.300 --> 01:10.700
Um, so it should be a lot of fun.
01:10.970 --> 01:13.380
So the models then to introduce them.
01:13.380 --> 01:21.330
We are going to again be working with llama 3.1 from meta, their flagship and groundbreaking model.
01:21.330 --> 01:29.670
We are going to be looking at Phi three, which is Microsoft's open source model, and Gemma from Google.
01:29.670 --> 01:36.450
It's the small cousin of Gemini: Google's Gemma.
01:36.510 --> 01:41.880
There are two other models that I'll be leaving you with to experiment with on your own.
01:41.910 --> 01:49.830
One of them is Mistral from Mistral, and the other is the powerhouse that is Qwen 2.
01:50.040 --> 01:53.190
And I hope that you will enjoy using Qwen 2.
01:54.270 --> 02:01.320
So we're also going to be covering three aspects of working with open source models in the hugging face
02:01.320 --> 02:02.160
framework.
02:02.430 --> 02:05.610
Um, the first of them is called quantization.
02:05.640 --> 02:13.080
And this is about reducing the precision of the weights in the model so that it is easier to fit into
02:13.080 --> 02:16.860
memory and loads in and also can run faster.
02:16.860 --> 02:23.820
So quantization, a very important technique that allows us to work with, say, a one of the lower
02:23.820 --> 02:25.290
end GPU boxes.
02:25.330 --> 02:29.650
and when we get to training, it's going to be absolutely critical to be able to use quantization,
02:29.650 --> 02:34.480
to be able to train large open source models.
02:34.510 --> 02:39.160
In fact, you've heard me saying, now the Q Laura, that is the name of the technique that we're going
02:39.190 --> 02:42.040
to be using in a couple of weeks time.
02:42.040 --> 02:45.460
And the Q and Q, Laura stands for quantization.
02:45.460 --> 02:49.960
So we will be coming up against quantization a few times on this journey.
02:50.650 --> 02:54.580
Today we're also going to be looking inside a model.
02:54.580 --> 03:00.310
So generally again this is a class that is more practical than theoretical.
03:00.460 --> 03:02.050
But this will be one of those moments.
03:02.050 --> 03:10.750
And we'll just take a peek inside at what the PyTorch layers look like that sit behind the Hugging
03:10.780 --> 03:12.970
face Transformers library.
03:13.720 --> 03:20.050
And then also, we're so familiar with streaming at this point that it hardly needs to be said that
03:20.050 --> 03:21.520
we want to be able to stream results.
03:21.520 --> 03:26.290
So I will show you how you can work with open source models to stream results as well.
03:26.290 --> 03:32.440
So these are some of the little extra bits that we're going to look into in our voyage into running
03:32.440 --> 03:35.710
inference over the lower level APIs for hugging face.
03:35.740 --> 03:36.940
There's quite enough talk.
03:36.940 --> 03:38.350
Let's get to it.

130
week5/community-contributions/subtitles/srts/59170135/ja_JP.srt

@ -0,0 +1,130 @@
WEBVTT
00:00.830 --> 00:01.940
ようこそ。
00:01.940 --> 00:02.870
3週目だ。
00:02.870 --> 00:03.800
4日目だ。
00:03.830 --> 00:12.890
私たちはオープンソースの土地での冒険に戻り、 オープンソースのモデル上で推論を実行する方法を調査している。
00:13.130 --> 00:17.120
そして今日は、 『ハギング・フェイス』のモデルクラスを見てみよう。
00:17.120 --> 00:20.390
当初はパイプラインAPI、 つまりハイレベルAPIについて話をした。
00:20.420 --> 00:26.090
その後、 低レベルのAPIについて話し始めた。 トーケナイザーから始まり、 今度はモデルについてだ。
00:26.150 --> 00:28.580
では、 すでに何ができるのか?
00:28.610 --> 00:33.290
もちろん、 フロンティアモデルを使ったコーディングに加え、 マルチモーダルAIアシスタントの構築や、
00:33.290 --> 00:38.270
現在できることは、 ハギング・フェイスのパイプラインとトークナイザーを使うことだ。
00:38.300 --> 00:41.270
新しいスキル、 新しいクラス。
00:41.270 --> 00:51.860
実際にトランスフォーマーを作成し、 それを実行してテキストを生成するのだ。
00:51.860 --> 00:56.300
そして、 5つの異なるモデルの結果を比較する。
00:56.300 --> 01:07.910
そのうちの3つを一緒にやって、 あとの2つで実験してもらうつもりだ。
01:08.300 --> 01:10.700
だから、 とても楽しいはずだよ。
01:10.970 --> 01:13.380
さて、 モデルの紹介だ。
01:13.380 --> 01:21.330
今回もリャマ3世と仕事をすることになる。 1、 metaのフラッグシップで画期的なモデル。
01:21.330 --> 01:29.670
今回は、 マイクロソフトのオープンソースモデルであるファイ3と、 グーグルのジェンマを取り上げる。
01:29.670 --> 01:32.190
小さいよ。
01:32.190 --> 01:36.450
ジェミニのいとこにあたるのがグーグルのジェンマだ。
01:36.510 --> 01:41.880
他にも2つのモデルがあるので、 自分で試してみてほしい。
01:41.910 --> 01:49.830
一人はミストラルのミストラルで、 もう一人はクイン2の強豪だ。
01:50.040 --> 01:53.190
そしてQuantuを楽しんで使ってほしい。
01:54.270 --> 02:02.160
そこで今回は、 ハギング・フェイス・フレームワークでオープンソースのモデルを扱う際の3つの側面についても取り上げます。
02:02.430 --> 02:05.610
その最初のものは量子化と呼ばれるものだ。
02:05.640 --> 02:13.080
そしてこれは、 モデルの重みの精度を下げることで、 メモリに収めやすくし、 ロードしやすくし、
02:13.080 --> 02:16.860
さらに高速に実行できるようにすることである。
02:16.860 --> 02:25.290
量子化というのは、 例えばローエンドのGPUボックスで作業することを可能にする非常に重要なテクニックだ。
02:25.330 --> 02:34.480
そしてトレーニングに入ると、 量子化を使えるかどうか、 大規模なオープンソースモデルをトレーニングできるかどうかが絶対的に重要になる。
02:34.510 --> 02:42.040
実際、 私が言っているのを聞いたことがあると思うが、 今、 Qローラ、 これは数週間後に使うテクニックの名前だ。
02:42.040 --> 02:45.460
そしてQとQ、 ローラは量子化を意味する。
02:45.460 --> 02:49.960
だから、 この旅では量子化に何度か直面することになる。
02:50.650 --> 02:54.580
今日はモデルの内部も見てみよう。
02:54.580 --> 03:00.310
だから、 このクラスは理論的というより実践的なクラスなんだ。
03:00.460 --> 03:02.050
しかし、 これはその瞬間のひとつになるだろう。
03:02.050 --> 03:10.750
そして、 ハグする顔のトランスフォーマーライブラリの後ろにあるPyTorchレイヤーがどのようなものか、
03:10.780 --> 03:12.970
中を覗いてみよう。
03:13.720 --> 03:21.520
そしてまた、 我々はこの時点でストリーミングに慣れ親しんでいるので、 結果をストリーミングできるようにしたいということはほとんど言うまでもない。
03:21.520 --> 03:26.290
そこで、 オープンソースのモデルを使って、 どのように結果を出すことができるかを紹介しよう。
03:26.290 --> 03:32.440
このように、 ハグフェイスのための低レベルのAPI上で推論を実行するための航海の中で、 私たちが調べようとしているのは、
03:32.440 --> 03:35.710
ちょっとした余分な部分なのだ。
03:35.740 --> 03:36.940
話はもう十分だ。
03:36.940 --> 03:38.350
さっそく始めよう。

151
week5/community-contributions/subtitles/srts/59170135/ko_KR.srt

@ -0,0 +1,151 @@
WEBVTT
00:00.830 --> 00:01.940
어서 오세요
00:01.940 --> 00:02.870
3주 차예요
00:02.870 --> 00:03.800
4일째예요
00:03.830 --> 00:11.720
오픈 소스 랜드로 돌아왔습니다 오픈 소스 모델을 어떻게 추론하는지 조사하고
00:11.720 --> 00:12.890
있죠
00:13.130 --> 00:17.120
오늘은 얼굴 껴안기 모범 수업을 해 볼게요
00:17.120 --> 00:20.390
파이프라인 API 얘기를 했었죠 상위 수준 API요
00:20.420 --> 00:26.090
그다음에는 낮은 수준의 API 얘기를 했습니다 토큰라이저로 시작해서 모델로 넘어갔죠
00:26.150 --> 00:28.580
그래서 뭘 할 수 있는데요?
00:28.610 --> 00:33.290
물론 지금은 개척 시대 모델의 코딩 외에도 다중 모듈 인공지능 비서를 제작하고
00:33.290 --> 00:38.270
있습니다 현재는 얼굴 포옹이나 파이프라인 토큰라이저를 사용하죠
00:38.300 --> 00:41.270
새로운 기술에 새로운 수업이죠
00:41.270 --> 00:49.010
얼굴을 안는 것의 모델 부분으로 들어가겠습니다 변압기를 생성하고 텍스트를 생성하기
00:49.010 --> 00:51.860
위해 실행하는 거죠
00:51.860 --> 00:56.300
다섯 가지 모델로 결과를 비교할 거예요
00:56.300 --> 01:02.090
제가 3개를 같이 할 거예요 나머지 2개는 당신이 실험해 보세요
01:02.210 --> 01:07.910
추가 연습을 하실 수 있게요 코드는 다 준비해 둘게요
01:08.300 --> 01:10.700
재미있을 것 같아요
01:10.970 --> 01:13.380
자, 모델들을 소개하죠
01:13.380 --> 01:21.330
라마 3을 다시 작업하게 될 거예요 기함이자 획기적인 모델인 메타에서 한 대 왔어요
01:21.330 --> 01:29.670
마이크로소프트의 파이3 오픈 소스 모델과 구글의 젬마를 살펴볼 거예요
01:29.670 --> 01:32.190
스몰 사이즈예요
01:32.190 --> 01:36.450
제미니의 사촌이 구글의 제마죠
01:36.510 --> 01:41.880
여러분이 직접 실험해 볼 모델이 두 개 더 있어요
01:41.910 --> 01:49.830
하나는 미스트럴의 미스트럴이고 다른 하나는 퀸의 2번 선수예요
01:50.040 --> 01:53.190
취안토를 즐겨 보세요
01:54.270 --> 02:02.160
오픈 소스 모델과의 세 가지 측면도 다룰 겁니다 얼굴 프레임워크에서요
02:02.430 --> 02:05.610
첫 번째는 퀀타이즈라는 거예요
02:05.640 --> 02:13.080
이것은 모델의 무게의 정밀도를 줄이는 것입니다. 메모리에 쉽게 맞추고 로드도
02:13.080 --> 02:16.860
쉽게 하고 더 빠르게 달릴 수 있죠.
02:16.860 --> 02:23.820
퀀타이즈는 아주 중요한 기술로 GPU 하위 제품 중 하나로 작업할 수
02:23.820 --> 02:25.290
있게 해주죠
02:25.330 --> 02:29.650
트레이닝을 할 때 반드시 퀀타이즈를 사용할 수 있어야
02:29.650 --> 02:34.480
합니다 큰 오픈 소스 모델을 훈련하기 위해서요
02:34.510 --> 02:39.160
아까도 말했지만 Q 로라는 우리가 몇 주 후에
02:39.190 --> 02:42.040
사용할 기술의 이름이에요
02:42.040 --> 02:45.460
질문과 질문, 로라는 수량화의 약자예요
02:45.460 --> 02:49.960
이번 여정에서 퀀타이즈와 몇 번 부딪힐 거예요
02:50.650 --> 02:54.580
오늘은 모델 내부도 살펴볼 거예요
02:54.580 --> 03:00.310
다시 말씀드리지만 이론보다는 실용적인 수업이에요
03:00.460 --> 03:02.050
하지만 지금이 바로 그런 순간이에요
03:02.050 --> 03:10.750
포옹하는 트랜스포머 라이브러리 뒤에 있는 파이토치 층은 어떤 모습일지 살짝
03:10.780 --> 03:12.970
들여다볼게요
03:13.720 --> 03:20.050
지금은 스트리밍에 익숙해서 결과를 스트리밍할 수 있다고 말할
03:20.050 --> 03:21.520
필요도 없어요
03:21.520 --> 03:26.290
결과 스트리밍을 위해 오픈 소스 모델로 작업하는 방법을 보여드리죠
03:26.290 --> 03:32.440
이게 이번 항해에서 살펴볼 추가 사항입니다 얼굴을 껴안는 하위
03:32.440 --> 03:35.710
레벨 API를 실행하는 거죠
03:35.740 --> 03:36.940
얘기는 충분히 했어요
03:36.940 --> 03:38.350
시작해 보죠

130
week5/community-contributions/subtitles/srts/59170165/en_US.srt

@ -0,0 +1,130 @@
WEBVTT
00:01.340 --> 00:05.000
Welcome, everybody to the last day of week three.
00:05.030 --> 00:05.810
Week three.
00:05.840 --> 00:06.710
Day five.
00:06.740 --> 00:12.740
We're here already wrapping up open source model inference with hugging face.
00:12.740 --> 00:16.790
And today, today is the day that you're going pro.
00:16.790 --> 00:23.150
Today is the day when we're putting together everything you've learned in the last four days of lectures
00:23.150 --> 00:31.910
and really solidifying it with an excellent, uh, juicy project, a business project which is going
00:31.910 --> 00:37.940
to give you some, some true experience in the field, what you can do already, if you don't mind me
00:37.940 --> 00:41.180
telling you one more time, you can code with frontier models.
00:41.180 --> 00:46.970
You can build AI assistants with tools, multi-modality, generating images, making sounds.
00:47.120 --> 00:55.100
Uh, and you can use pipelines, tokenizers and models within the hugging face Transformers library.
00:55.130 --> 00:59.960
Today, you're going to be even more confident with Tokenizers and models.
00:59.960 --> 01:05.330
You're going to be able to run inference across open source models with ease, and you're going to have
01:05.360 --> 01:13.260
implemented an LLM solution combining frontier and open source models together into one nice package.
01:13.260 --> 01:21.240
There's also going to be a good business challenge for you to keep working on this, so let's get started.
01:22.440 --> 01:28.710
The business problem that we have is a feature that is in many applications that we all know, and so
01:28.710 --> 01:32.130
it's a good, real kind of product.
01:32.130 --> 01:40.260
We want to build a solution that can create minutes of meetings including things like actions and owners
01:40.260 --> 01:41.880
and so on.
01:42.120 --> 01:51.180
Uh, it will be able to take an audio recording and then use a frontier model, use an API to convert
01:51.180 --> 01:52.620
the audio to text.
01:52.620 --> 01:58.320
It's actually a task that I had given you as a follow-on exercise from one of the projects last week,
01:58.320 --> 02:01.830
so you may have already experimented with this, but if not, we're going to do it together.
02:01.830 --> 02:07.430
We're going to call a frontier model to convert audio to text.
02:07.430 --> 02:14.120
We are then going to use an open source model to turn that text into meeting minutes, summarizing it,
02:14.120 --> 02:17.760
plucking out actions and owners and the like.
02:17.820 --> 02:21.870
And we will stream back results and show them in markdown.
02:21.870 --> 02:25.380
So these are the activities we're going to do.
02:25.410 --> 02:27.060
That's how we're going to put it together.
02:27.390 --> 02:31.800
And we're going to build a product that will be useful.
02:32.250 --> 02:34.140
This is what we want to come up with.
02:34.170 --> 02:40.440
We want to be able to have a solution that produces minutes like this with discussion points, takeaways,
02:40.470 --> 02:47.700
action items. And as the input data to start with, here's the resource that we'll be using.
02:47.730 --> 02:56.400
There are audio files of publicly available council meetings from councils across the US available on
02:56.430 --> 02:57.270
hugging face.
02:57.270 --> 02:59.400
And that is where we'll begin.
02:59.670 --> 03:03.930
I've already downloaded one of the audio files and taken a chunk out of it.
03:04.200 --> 03:08.460
In the interest of time, we'll do just a piece of the Denver City Council meeting rather than the whole
03:08.460 --> 03:09.030
meeting.
03:09.300 --> 03:12.900
But the idea is that that's going to help us show that it works.
03:12.900 --> 03:17.370
And then perhaps this is something that you'll be able to use for your own meetings, for real, when
03:17.370 --> 03:19.680
we have a working product.
03:19.710 --> 03:24.840
So without further ado, let's go to Google Colab and let's build our application.
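The flow just described (a frontier model for audio-to-text, an open source model for the minutes, markdown output) can be sketched with stand-in functions. The transcribe and summarize helpers below are hypothetical stubs, not the actual course code: in the real notebook the first would call a frontier speech-to-text API and the second an open source LLM.

```python
# Sketch of the meeting-minutes pipeline. transcribe() and summarize()
# are stand-ins: the real project calls a frontier speech-to-text model
# and an open source LLM respectively. The stubs keep the flow runnable.

def transcribe(audio_path: str) -> str:
    """Stand-in for the frontier audio-to-text step."""
    return "The council discussed the budget. Action: staff to draft a proposal."

def summarize(transcript: str) -> str:
    """Stand-in for the open source model that writes markdown minutes."""
    actions = [s.strip() for s in transcript.split(".") if "Action:" in s]
    lines = ["## Meeting minutes", "", "### Action items"]
    lines += [f"- {a}" for a in actions]
    return "\n".join(lines)

minutes = summarize(transcribe("denver_council_extract.mp3"))
print(minutes)
```

Here denver_council_extract.mp3 is a made-up filename standing in for the Denver City Council audio chunk mentioned above; swapping the stubs for real model calls preserves the same two-step shape.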

100
week5/community-contributions/subtitles/srts/59170165/ja_JP.srt

@ -0,0 +1,100 @@
WEBVTT
00:01.340 --> 00:05.000
ようこそ、 第3週最終日へ。
00:05.030 --> 00:05.810
第3週
00:05.840 --> 00:06.710
5日目。
00:06.740 --> 00:12.740
オープンソースのモデル推論をハグ顔でラッピングしているところだ。
00:12.740 --> 00:16.790
そして今日、 今日がプロになる日だ。
00:16.790 --> 00:23.150
今日は、 この4日間の講義で学んだことをすべてまとめ、
00:23.150 --> 00:41.180
素晴らしい、 あ、 ジューシーなプロジェクト、 ビジネス・プロジェクトでそれを本当に強固なものにする日だ。
00:41.180 --> 00:46.970
ツール、 マルチモダリティ、 画像生成、 音声生成でAIアシスタントを作ることができる。
00:47.120 --> 00:55.100
そして、 パイプライン、 トークナイザー、 抱擁顔トランスフォーマー・ライブラリー内のモデルを使うことができる。
00:55.130 --> 00:59.960
今日は、 トーケナイザーとモデルを使って、 さらに自信を深めてください。
00:59.960 --> 01:13.260
オープンソースのモデル間で簡単に推論を実行できるようになり、 フロンティアモデルとオープンソースモデルを1つの素晴らしいパッケージにまとめたLLMソリューションを実装したことになる。
01:13.260 --> 01:21.240
また、 これに取り組み続けることは、 あなたにとって良いビジネス・チャレンジになるはずですから、 始めましょう。
01:22.440 --> 01:32.130
私たちが抱えているビジネス上の問題は、 私たち誰もが知っている多くのアプリケーションにある機能である。
01:32.130 --> 01:41.880
私たちは、 行動や所有者などを含む会議の議事録を作成できるソリューションを構築したいと考えています。
01:42.120 --> 01:52.620
音声を録音して、 フロンティアモデルを使い、 APIを使って音声をテキストに変換する。
01:52.620 --> 02:01.830
実はこれは、 先週のあるプロジェクトのフォローアップ練習として私が出した課題なんだ。
02:01.830 --> 02:07.430
音声をテキストに変換するフロンティアモデルを呼ぶことにする。
02:07.430 --> 02:14.120
そのテキストをオープンソースのモデルを使って議事録にし、 要約し、
02:14.120 --> 02:17.760
行動や所有者などを抜き出す。
02:17.820 --> 02:21.870
そして、 結果をストリームバックし、 マークダウンで表示する。
02:21.870 --> 02:25.380
だから、 これらの活動は私たちがやろうとしていることなんだ。
02:25.410 --> 02:27.060
そうやってまとめるんだ。
02:27.390 --> 02:31.800
そして、 役に立つ製品を作ることになる。
02:32.250 --> 02:34.140
これが私たちの望むものだ。
02:34.170 --> 02:40.440
このような議事録を作成し、 ディスカッションのポイント、 要点、 アクションアイテム、
02:40.470 --> 02:47.700
そしてこれから使用するリソースを入力データとして作成できるソリューションが欲しい。
02:47.730 --> 02:57.270
ハギング・フェイスでは、 全米各地の協議会の音声ファイルが公開されている。
02:57.270 --> 02:59.400
そこから始めよう。
02:59.670 --> 03:03.930
すでに音声ファイルのひとつをダウンロードし、 その一部を抜粋した。
03:04.200 --> 03:09.030
時間の都合上、 デンバー市議会の会議全体ではなく、 その一部だけを取り上げる。
03:09.300 --> 03:12.900
しかし、 このアイデアは、 それが機能することを示すのに役立つということだ。
03:12.900 --> 03:19.680
そして、 私たちが実用的な製品を完成させた暁には、 おそらく、 あなた自身のミーティングでも使えるようになるでしょう。
03:19.710 --> 03:24.840
それでは早速、 Google Colabにアクセスしてアプリケーションを作ってみよう。

127
week5/community-contributions/subtitles/srts/59170165/ko_KR.srt

@ -0,0 +1,127 @@
WEBVTT
00:01.340 --> 00:05.000
3주 차 마지막 날에 오신 걸 환영합니다, 여러분
00:05.030 --> 00:05.810
3주 차예요
00:05.840 --> 00:06.710
5일째예요
00:06.740 --> 00:12.740
오픈 소스 모델 추론을 얼굴 껴안기와 함께 마무리하고 있어요
00:12.740 --> 00:16.790
오늘은 프로가 되는 날이에요
00:16.790 --> 00:23.150
오늘은 지난 나흘간 여러분이 배운 모든 걸 한데 모아 아주 훌륭하고 흥미진진한
00:23.150 --> 00:31.910
사업 프로젝트로 굳건히 다지는 날입니다 이 분야에서 진정한 경험을 쌓을 수 있는 사업 프로젝트죠
00:31.910 --> 00:37.940
여러분이 이미 할 수 있는 일요 한 번 더 말해도 괜찮으시다면 개척자
00:37.940 --> 00:41.180
모델을 코딩할 수 있어요
00:41.180 --> 00:46.970
툴을 이용한 다단계 인공지능 보조를 만들 수 있습니다 이미지 생성, 소리 생성 등이죠
00:47.120 --> 00:55.100
파이프라인, 토큰마이저, 모델을 포옹 트랜스포머 라이브러리에서 사용하세요
00:55.130 --> 00:59.960
오늘은 토큰저와 모델에 대해 더 잘 알게 될 거예요
00:59.960 --> 01:05.330
오픈 소스 모델을 쉽게 추론할 수 있을 겁니다 프론티어와 오픈
01:05.360 --> 01:13.260
소스 모델을 결합한 LLM 솔루션을 구현해 하나의 멋진 패키지로 만들 수 있고요
01:13.260 --> 01:21.240
이 작업을 계속하기에 좋은 사업 과제도 있을 거예요 그럼 시작하죠
01:22.440 --> 01:28.710
우리가 가진 비즈니스 문제는 우리가 아는 많은 응용 프로그램에 있는 기능이에요
01:28.710 --> 01:32.130
좋은 종류의 진짜 제품이죠
01:32.130 --> 01:40.260
작업이나 소유주 같은 걸 포함해 회의록을 만들 수 있는 솔루션을 구축하고
01:40.260 --> 01:41.880
싶어요
01:42.120 --> 01:51.180
오디오 레코딩을 프론티어 모델로 사용할 수 있고 API를 이용해 오디오를 텍스트로 변환할
01:51.180 --> 01:52.620
수 있죠
01:52.620 --> 01:58.320
지난주에 진행한 프로젝트에서 받은 후속 작업으로 드린 거예요 이미 실험해
01:58.320 --> 02:01.830
보셨을 수도 있지만 아니라면 같이 해 보죠
02:01.830 --> 02:07.430
음향을 텍스트로 변환하는 개척자 모델을 부를 거예요
02:07.430 --> 02:14.120
그런 다음 오픈 소스 모델을 사용해 그 텍스트를 회의록으로 바꾸고 요약하고
02:14.120 --> 02:17.760
행동과 소유주 등을 추려내죠
02:17.820 --> 02:21.870
결과를 스트리밍해서 마크다운으로 보여드릴게요
02:21.870 --> 02:25.380
이게 우리가 할 활동이에요
02:25.410 --> 02:27.060
그렇게 조합하는 거예요
02:27.390 --> 02:31.800
유용한 제품을 만들 거예요
02:32.250 --> 02:34.140
이런 걸 만들고 싶었어요
02:34.170 --> 02:40.440
이와 같은 솔루션을 만들고 싶습니다. 토론 포인트, 핵심 요점, 액션
02:40.470 --> 02:47.700
아이템, 입력 데이터 등을 만들어서 사용할 리소스로 시작하는 솔루션이죠.
02:47.730 --> 02:57.270
미국 전역의 자치 위원회가 공개적으로 연 의회 회의에 참석한 음성 파일도 있어요
02:57.270 --> 02:59.400
거기서부터 시작하죠
02:59.670 --> 03:03.930
이미 오디오 파일 하나를 다운로드해서 일부를 잘라냈어요
03:04.200 --> 03:08.460
시간 관계상 덴버시 의회 회의 일부만 진행하도록 하죠 전체 회의
03:08.460 --> 03:09.030
말고요
03:09.300 --> 03:12.900
하지만 그렇게 하면 작동한다는 걸 보여줄 수 있어요
03:12.900 --> 03:17.370
실제 상품이 출시됐을 때 여러분의 회의에서
03:17.370 --> 03:19.680
사용할 수 있을 거예요
03:19.710 --> 03:24.840
구글 Colab으로 가서 응용 프로그램을 만들어보죠

220
week5/community-contributions/subtitles/srts/59170223/en_US.srt

@ -0,0 +1,220 @@
WEBVTT
00:00.470 --> 00:01.100
Well.
00:01.130 --> 00:02.000
Fantastic.
00:02.030 --> 00:06.560
It's coming up to the end of the week, and that means it's coming up to a challenge for you again,
00:06.560 --> 00:10.580
even though I've just given you a challenge to build a Gradio user interface for for what we just saw.
00:10.580 --> 00:11.750
But that's an easy challenge.
00:11.750 --> 00:12.680
You can do that.
00:12.680 --> 00:13.640
No problem.
00:13.640 --> 00:14.990
This you need a harder challenge.
00:14.990 --> 00:17.360
At the end of the week, it's time for a harder challenge.
00:17.360 --> 00:25.130
So the end of week challenge is to build an important business application that we will, in fact,
00:25.130 --> 00:26.690
use later in the course.
00:26.690 --> 00:32.120
Although, yeah, you won't need to have built it for that, because I'll
00:32.120 --> 00:32.750
have done it.
00:32.750 --> 00:35.690
But it's really helpful if you've done it.
00:35.690 --> 00:39.680
And this is something that you'll be able to use no matter in any business.
00:39.710 --> 00:45.260
This tool will apply to every business vertical and can be useful to you, I guarantee
00:45.290 --> 00:45.830
it.
00:46.220 --> 00:48.140
And this is what it is.
00:48.140 --> 00:58.940
Create your own tool that generates synthetic test data: a test data generator using an open source model.
00:59.180 --> 01:01.970
This is something that is so valuable.
01:01.970 --> 01:06.380
Generating data sets is something that you need for many different purposes.
01:06.380 --> 01:14.600
And I want to give you a very, um, wide remit to decide how you want to go about doing this, but
01:14.600 --> 01:21.590
I'm looking for something where you can describe a kind of data set you want, and maybe it's descriptions
01:21.590 --> 01:29.420
of products, maybe it's descriptions of, uh, um, uh, job postings, whatever it is you want to
01:29.450 --> 01:36.260
be able to, to tell your product what it is, what kind of data that you're working with and let it
01:36.260 --> 01:45.080
dream up, uh, diverse outputs, diverse test set that you'll be able to use when experimenting with
01:45.080 --> 01:47.300
your business area in the future.
01:47.300 --> 01:55.610
So this synthetic data generator is going to be a valuable tool for yourself and for me,
01:55.640 --> 01:59.090
both for this course and for future business problems you tackle.
01:59.090 --> 02:03.890
So it's worth investing some time in, and it's worth giving it a gradio UI while you're doing it.
02:03.920 --> 02:05.720
And that's going to be the super easy part.
02:05.720 --> 02:08.420
So I have a shot at that.
02:08.450 --> 02:11.060
It will apply to your business area no matter what you do.
02:11.060 --> 02:13.790
It's going to be useful and you're going to really enjoy it.
02:16.160 --> 02:25.250
And that would then complete week three, wrapping up your third week of your journey towards being
02:25.250 --> 02:27.800
a proficient LLM engineer.
02:27.830 --> 02:31.790
You can already, of course, code confidently with frontier models.
02:31.790 --> 02:34.940
You must be sick of me saying that now because you're that good.
02:34.940 --> 02:37.100
You can build an AI assistant.
02:37.100 --> 02:39.980
You can have it be multimodal, you can have it use tools.
02:39.980 --> 02:46.010
You can have it be consist of multiple smaller agents that carry out specialist tasks.
02:46.040 --> 02:52.460
And of course, at this point you can create an LLM solution that combines calls to frontier models.
02:52.460 --> 02:54.680
And it can call open source models.
02:54.680 --> 03:03.080
And you can use the pipeline API, using it to to carry out a large variety of common inference tasks.
03:03.080 --> 03:10.550
And you can also use the lower level hugging face APIs, the Tokenizers, and the models for inference
03:10.550 --> 03:11.780
tasks.
03:12.470 --> 03:18.980
So congratulations once again, you should be very proud next week.
03:19.010 --> 03:20.990
Next week we change topics.
03:20.990 --> 03:23.240
There's a thorny question.
03:23.240 --> 03:24.980
It's a question I get asked all the time.
03:24.980 --> 03:31.010
It's an area where there are actually a lot of great resources to help.
03:31.010 --> 03:37.440
It's about how you pick the right model for a given task that you have to work on.
03:37.440 --> 03:39.870
There are so many models, there are so many options.
03:39.870 --> 03:41.220
For starters,
03:41.250 --> 03:43.290
do you go closed source or open source?
03:43.290 --> 03:48.000
But then whichever path you take, there are so many possibilities.
03:48.000 --> 03:52.830
And how do you navigate through this to decide which one is right for a particular problem.
03:52.830 --> 03:53.850
And that is the key.
03:53.880 --> 03:55.470
It depends on the problem.
03:55.470 --> 03:58.650
Different models will be appropriate for different problems.
03:58.650 --> 04:00.660
I'm going to show you how to figure that out.
04:00.990 --> 04:02.550
We're going to compare LLMs.
04:02.550 --> 04:03.720
We're going to use leaderboards.
04:03.720 --> 04:04.950
We're going to use arenas.
04:04.950 --> 04:08.070
And we're going to do some some work with arenas ourselves.
04:08.070 --> 04:09.360
And that's going to be fun.
04:09.360 --> 04:15.930
And then as our practical work, we're going to go a different direction than we've gone in the past,
04:16.020 --> 04:21.780
except we did it very briefly once, but we're going to be looking at code generation when we're using
04:21.780 --> 04:29.490
frontier models and open source models to be generating code and tackling some code generation problems.
04:29.490 --> 04:33.060
So that will be a new, interesting perspective for you.
04:33.060 --> 04:38.910
So I'm really excited about next week, and I'm so, so impressed by how much progress you've made already
04:38.910 --> 04:42.990
and how many skills that you've already acquired.
04:43.110 --> 04:49.140
And I will see you for week four for picking the right LLM.
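One way to start the synthetic test data generator challenge described above is to rough out the tool's shape with a template-based stand-in, then swap in an open source model for the generation step. Everything below (the product schema, the generate_products helper) is illustrative and hypothetical, not part of the course code.

```python
# Skeleton for the synthetic test data generator challenge. A seeded
# template generator stands in for the open source LLM that would
# "dream up" diverse records from a description of the data set.
import random

def generate_products(n: int, seed: int = 42) -> list[dict]:
    """Produce n synthetic product records (hypothetical schema)."""
    rng = random.Random(seed)  # seeded so output is reproducible
    adjectives = ["rugged", "compact", "wireless", "solar"]
    nouns = ["speaker", "lamp", "charger", "sensor"]
    return [
        {
            "name": f"{rng.choice(adjectives)} {rng.choice(nouns)}",
            "price_usd": round(rng.uniform(5, 200), 2),
        }
        for _ in range(n)
    ]

samples = generate_products(3)
print(len(samples), sorted(samples[0]))  # 3 ['name', 'price_usd']
```

To go further, replace the body of generate_products with a call to an open source model prompted with your data-set description, and wrap the whole thing in a Gradio UI as the challenge suggests.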

196
week5/community-contributions/subtitles/srts/59170223/ja_JP.srt

@ -0,0 +1,196 @@
WEBVTT
00:00.470 --> 00:01.100
まあね。
00:01.130 --> 00:02.000
ファンタスティックだ。
00:02.030 --> 00:10.580
今週も終わりに近づいています。 つまり、 またチャレンジの時期が近づいているということです。 先ほど、 私たちが見たものに対するグラディオのユーザー・インターフェースを作るという課題を出したばかりですが。
00:10.580 --> 00:11.750
しかし、 それは簡単な挑戦だ。
00:11.750 --> 00:12.680
それはできる。
00:12.680 --> 00:13.640
問題ないよ。
00:13.640 --> 00:14.990
もっと難しい課題が必要だ。
00:14.990 --> 00:17.360
週の終わりには、 よりハードなチャレンジの時間だ。
00:17.360 --> 00:26.690
つまり、 週明けの課題は、 このコースの後半で実際に使用する重要なビジネス・アプリケーションを作ることだ。
00:26.690 --> 00:32.750
とはいえ、 そうする必要はないだろうけど......そのために作る必要はないだろう。
00:32.750 --> 00:35.690
でも、 やっておくと本当に役に立つよ。
00:35.690 --> 00:39.680
そしてこれは、 どのようなビジネスでも関係なく使えるものだ。
00:39.710 --> 00:45.830
この、 この、 このツールはあらゆる業種に適用でき、 あなたの役に立つこと請け合いだ。
00:46.220 --> 00:48.140
それがこれだ。
00:48.140 --> 00:58.940
合成テストデータを生成する独自のツールを作成する テストデータジェネレータ、 オープンソースモデル。
00:59.180 --> 01:01.970
これはとても貴重なことだ。
01:01.970 --> 01:06.380
データセットの生成は、 さまざまな目的で必要となるものだ。
01:06.380 --> 01:14.600
しかし、 私は、 あなたが欲しいデータセットを記述できるものを探しています。 それは、
01:14.600 --> 01:21.590
商品の説明かもしれませんし、 求人情報の説明かもしれません。 あなたが欲しいものが何であれ、
01:21.590 --> 01:29.420
製品にそれが何であるかを伝え、 あなたが扱っているデータがどのようなものであるかを伝え、
01:29.450 --> 01:36.260
将来あなたのビジネス分野で実験するときに使えるように、 多様な出力、
01:36.260 --> 01:47.300
多様なテストセットを夢見させることができます。
01:47.300 --> 01:55.610
だから、 この合成データジェネレーターは、 あなた自身にとっても、 私にとっても、 そしてこのコースにとっても、 将来あなたが取り組むビジネス上の問題にとっても、
01:55.640 --> 01:59.090
貴重なツールになるだろう。
01:59.090 --> 02:03.890
グラディオのUIに時間を費やす価値はある。
02:03.920 --> 02:05.720
そして、 それが超簡単な部分になる。
02:05.720 --> 02:08.420
だから、 僕はそれを狙っているんだ。
02:08.450 --> 02:11.060
何をするにしても、 あなたのビジネス領域に適用される。
02:11.060 --> 02:13.790
きっと役に立つし、 本当に楽しめるよ。
02:16.160 --> 02:27.800
これで3週目が終了し、 熟練したLMエンジニアになるための旅の3週目が終わる。
02:27.830 --> 02:31.790
もちろん、 すでにフロンティアモデルを使って自信を持ってコーディングすることはできる。
02:31.790 --> 02:34.940
もう私がそんなことを言うのはうんざりしているに違いない。
02:34.940 --> 02:37.100
AIアシスタントを作ることができる。
02:37.100 --> 02:39.980
マルチモーダルでもいいし、 ツールを使ってもいい。
02:39.980 --> 02:46.010
専門的な仕事を行う複数の小さなエージェントで構成することもできる。
02:46.040 --> 02:52.460
もちろん、 この時点でフロンティアモデルへのコールを組み合わせたLMソリューションを作ることもできる。
02:52.460 --> 02:54.680
そして、 オープンソースのモデルを呼ぶことができる。
02:54.680 --> 03:03.080
また、 パイプラインAPIを使って、 一般的な推論タスクを実行することもできる。
03:03.080 --> 03:11.780
また、 推論タスクのために、 より低レベルの抱擁顔API、 トーケナイザー、 モデルを使うこともできる。
03:12.470 --> 03:18.980
もう一度おめでとう。
03:19.010 --> 03:20.990
来週はトピックを変える。
03:20.990 --> 03:23.240
茨の道がある。
03:23.240 --> 03:24.980
よく聞かれる質問だ。
03:24.980 --> 03:31.010
それは......実際、 助けてくれる素晴らしいリソースがたくさんあることなんだ。
03:31.010 --> 03:37.440
自分が取り組まなければならない仕事に対して、 どのように適切なモデルを選ぶかということだ。
03:37.440 --> 03:39.870
たくさんのモデルがあり、 たくさんの選択肢がある。
03:39.870 --> 03:41.220
スタッフ用だ。
03:41.250 --> 03:43.290
クローズドソースかオープンソースか?
03:43.290 --> 03:48.000
でも、 どの道を選んでも、 たくさんの可能性がある。
03:48.000 --> 03:52.830
そして、 特定の問題に対してどれが正しいかを判断するために、 どのようにナビゲートするのか。
03:52.830 --> 03:53.850
それが鍵だ。
03:53.880 --> 03:55.470
それは問題による。
03:55.470 --> 03:58.650
問題によって適切なモデルは異なるだろう。
03:58.650 --> 04:00.660
その方法をお見せしよう。
04:00.990 --> 04:02.550
我々はLMSを比較するつもりだ。
04:02.550 --> 04:03.720
リーダーボードを使うつもりだ。
04:03.720 --> 04:04.950
アリーナを使うつもりだ。
04:04.950 --> 04:08.070
そして、 我々自身もアリーナでいくつかの仕事をするつもりだ。
04:08.070 --> 04:09.360
それは楽しみだ。
04:09.360 --> 04:21.780
フロンティア・モデルやオープンソース・モデルを使ってコードを生成し、
04:21.780 --> 04:29.490
コード生成の問題に取り組む場合だ。
04:29.490 --> 04:33.060
だから、 それはあなたにとって新しい、 興味深い視点になるだろう。
04:33.060 --> 04:38.910
だから来週が本当に楽しみだし、 君たちがすでにどれだけ進歩し、 どれだけ多くのスキルを身につけたか、
04:38.910 --> 04:42.990
とてもとても感心している。
04:43.110 --> 04:49.140
そして、 正しいLLMを選ぶために4週目に会いましょう。

211
week5/community-contributions/subtitles/srts/59170223/ko_KR.srt

@ -0,0 +1,211 @@
WEBVTT
00:00.470 --> 00:01.100
글쎄요
00:01.130 --> 00:02.000
환상적이에요
00:02.030 --> 00:06.560
이번 주말이면 끝나요 다시 도전이 기다리고 있다는 뜻이죠 방금 제가 여러분께
00:06.560 --> 00:10.580
그래디오 사용자 인터페이스를 구축하라고 과제를 드렸는데도요
00:10.580 --> 00:11.750
하지만 쉬운 도전이죠
00:11.750 --> 00:12.680
할 수 있어요
00:12.680 --> 00:13.640
별말씀을요
00:13.640 --> 00:14.990
더 어려운 과제가 필요해요
00:14.990 --> 00:17.360
이번 주 후반에는 더 어려운 과제가 기다리고 있죠
00:17.360 --> 00:25.130
마지막 챌린지는 중요한 비즈니스 응용 프로그램을 만드는 겁니다 나중에 이 과정을 통해
00:25.130 --> 00:26.690
사용할 거죠
00:26.690 --> 00:32.750
하지만 그걸 위해 만들 필요는 없어요 제가 만들 거니까요
00:32.750 --> 00:35.690
하지만 해 본 사람이라면 정말 도움이 되죠
00:35.690 --> 00:39.680
어떤 사업에서든 사용할 수 있는 거죠
00:39.710 --> 00:45.830
이 도구는 모든 비즈니스에 적용될 겁니다 여러분께 유용할 거예요 제가 보장하죠
00:46.220 --> 00:48.140
이게 그 결과예요
00:48.140 --> 00:58.940
합성 테스트 데이터를 생성하는 자신만의 도구를 만드세요 테스트 데이터 생성기 오픈 소스 모델이요
00:59.180 --> 01:01.970
정말 귀한 거예요
01:01.970 --> 01:06.380
데이터 세트 생성에는 여러 가지 목적이 필요해요
01:06.380 --> 01:14.600
저는 여러분이 어떻게 하고 싶은지 결정할 수 있는 아주 폭넓은 권한을 드리고 싶습니다. 하지만 제가
01:14.600 --> 01:21.590
원하는 것은 여러분이 원하는 데이터 세트를 묘사할 수 있는 것입니다. 제품에 대한
01:21.590 --> 01:29.420
설명일 수도 있고, 어 구인 공고일 수도 있습니다. 여러분의 제품이 무엇인지, 여러분이 작업하고
01:29.450 --> 01:36.260
있는 데이터는 어떤 종류인지, 그리고 다양한 출력과 다양한 테스트 세트를 꿈꿀
01:36.260 --> 01:47.300
수 있도록 하는 것입니다. 이로써 앞으로 여러분의 비즈니스 영역에서 실험할 때 사용할 수 있을 수 있을 거예요.
01:47.300 --> 01:55.610
이 합성 데이터 발생기는 여러분과 저, 그리고 이걸 위해 귀중한 도구가 될 겁니다 이 과정과
01:55.640 --> 01:59.090
미래의 사업 문제에 있어서요
01:59.090 --> 02:03.890
시간을 투자할 가치가 있고 그러데이션 UI를 적용할 가치가 있어요
02:03.920 --> 02:05.720
아주 쉬운 부분이에요
02:05.720 --> 02:08.420
그래서 저도 도전해 보려고요
02:08.450 --> 02:11.060
뭘 하든 비즈니스 영역에 적용되죠
02:11.060 --> 02:13.790
유용하고 정말 즐기실 수 있어요
02:16.160 --> 02:25.250
그러면 3주 차를 마무리합니다 여정의 3주 차를 마무리하면 능숙한 LM 엔지니어가
02:25.250 --> 02:27.800
되는 거죠
02:27.830 --> 02:31.790
물론 개척 시대 모델로 이미 코드를 만들 수 있죠
02:31.790 --> 02:34.940
이제 와서 이런 말 하는 거 지겹죠?
02:34.940 --> 02:37.100
인공지능 보조를 만들 수 있어요
02:37.100 --> 02:39.980
멀티모덜이 될 수도 있고 도구를 사용할 수도 있죠
02:39.980 --> 02:46.010
여러 명의 작은 요원들로 구성해 전문 임무를 수행할 수도 있죠
02:46.040 --> 02:52.460
물론 이 시점에서 여러분은 프런티어 모델에 호출하는 LM 솔루션을 만들 수 있죠
02:52.460 --> 02:54.680
오픈 소스 모델을 호출할 수 있어요
02:54.680 --> 03:03.080
파이프라인 API를 사용할 수 있습니다 일반 추론 작업을 수행하기 위해 다양하게 사용하죠
03:03.080 --> 03:10.550
하위 레벨의 얼굴 포옹 API를 사용할 수도 있습니다 토큰라이저와 추론 작업을 위한
03:10.550 --> 03:11.780
모델을요
03:12.470 --> 03:18.980
다시 한번 축하드려요 다음 주에는 자랑스러워하세요
03:19.010 --> 03:20.990
다음 주엔 화제를 바꾸죠
03:20.990 --> 03:23.240
가시가 돋친 질문이네요
03:23.240 --> 03:24.980
항상 받는 질문이죠
03:24.980 --> 03:31.010
실제로 도움이 될 만한 훌륭한 자원이 많은 곳이죠
03:31.010 --> 03:37.440
주어진 작업에 맞는 모델을 어떻게 고르느냐가 관건이죠
03:37.440 --> 03:39.870
모델도 많고 옵션도 많아요
03:39.870 --> 03:41.220
직원용이에요
03:41.250 --> 03:43.290
비공개 소스인가요 오픈 소스인가요?
03:43.290 --> 03:48.000
하지만 어떤 길을 택하든 가능성은 무궁무진해요
03:48.000 --> 03:52.830
특정 문제에 어떤 게 적절한지 어떻게 탐색해야 할까요?
03:52.830 --> 03:53.850
그게 핵심이에요
03:53.880 --> 03:55.470
어떤 문제냐에 따라 다르죠
03:55.470 --> 03:58.650
다른 문제에는 다른 모델이 적합하죠
03:58.650 --> 04:00.660
어떻게 하는지 보여드릴게요
04:00.990 --> 04:02.550
LMS를 비교할 거예요
04:02.550 --> 04:03.720
리더보드를 사용할 거예요
04:03.720 --> 04:04.950
아레나에서 할 거예요
04:04.950 --> 04:08.070
우리도 아레나에서 작업 좀 하려고요
04:08.070 --> 04:09.360
재미있을 거예요
04:09.360 --> 04:15.930
실제 업무로서 과거와는 다른 방향으로 갈 겁니다 아주 간략하게
04:16.020 --> 04:21.780
한 번 했지만요 프론티어 모델과 오픈 소스 모델을 이용해
04:21.780 --> 04:29.490
코드 생성 문제를 해결하면서 코드 생성을 살펴볼 거예요
04:29.490 --> 04:33.060
새롭고 흥미로운 관점이 될 거예요
04:33.060 --> 04:38.910
다음 주가 정말 기대돼요 여러분이 벌써 이렇게 많이 발전하고
04:38.910 --> 04:42.990
많은 기술을 습득한 게 정말 인상적이에요
04:43.110 --> 04:49.140
적절한 LLM을 선택해서 4주 차에 만나요

508
week5/community-contributions/subtitles/srts/59170227/en_US.srt

@ -0,0 +1,508 @@
WEBVTT
00:00.200 --> 00:02.360
Welcome back to Google Colab.
00:02.360 --> 00:06.290
Here we are ready to explore the wonderful world of Tokenizers.
00:06.290 --> 00:11.360
So, uh, the first thing I'm going to do is do some imports.
00:11.600 --> 00:15.290
And after I've done that, I want to mention this statement here.
00:15.290 --> 00:20.750
I forgot to mention this in the last video, but I have added it into that colab, so hopefully you
00:20.780 --> 00:23.300
found it anyway and read my explanation.
00:23.450 --> 00:28.220
Uh, you may need to log in to Huggingface in Colab if you've never done that before.
00:28.220 --> 00:31.370
And this is the code that you use to do that.
00:31.370 --> 00:36.260
First of all, if you haven't already done so, you need to create an account with hugging
00:36.290 --> 00:36.890
face.
00:36.890 --> 00:37.700
It's free.
00:37.730 --> 00:40.910
It's terrific and you will never regret it.
00:40.910 --> 00:46.970
So sign up at huggingface and then navigate to settings and create a new API token.
00:46.970 --> 00:48.470
Giving yourself write permission.
00:48.470 --> 00:52.130
We won't need to use the write permission today, but we will in the future, so might as well set it
00:52.130 --> 00:53.060
up right now.
00:53.090 --> 00:59.570
Then when you come back, you go to this key section here in the Colab and you add in a new secret.
00:59.570 --> 01:05.220
The secret should say HF underscore token and the value should be your token.
01:05.220 --> 01:12.270
And then all you have to do is run this code that will get the HF token from your secrets, and it will
01:12.300 --> 01:15.000
then call this login method, which I imported here.
01:15.000 --> 01:18.180
And that login method logs in to hugging face.
01:18.180 --> 01:19.470
Let's run that right away.
01:19.470 --> 01:20.760
And it's done.
01:20.790 --> 01:23.400
And you see it says I have rights permission right there.
01:24.060 --> 01:31.950
Okay, let's talk Tokenizers. We are going to start with the fantastic llama 3.1, the iconic model from
01:31.950 --> 01:35.400
meta, which paved the way for open source models.
01:35.880 --> 01:42.240
Now, when you're using llama 3.1, meta does need you first to sign their terms of service.
01:42.240 --> 01:47.520
And the way you do that is you visit their model page on Hugging Face, which is linked here.
01:47.520 --> 01:52.680
And at the top of that page, there are very simple instructions for what you need to do to sign.
01:52.830 --> 01:57.270
Uh, you'll need to supply your email address, and it's best if the email address that you
01:57.270 --> 01:59.610
supply matches your hugging face account.
01:59.610 --> 02:01.370
That means they get things done quickly.
02:01.370 --> 02:04.370
In fact, they should approve you in a matter of minutes.
02:04.370 --> 02:07.610
I've done this many times, including once late on a Saturday night.
02:07.610 --> 02:09.680
I got approved very, very quickly.
02:09.740 --> 02:13.460
I don't know whether that's just because they're really on the ball or whether it's all automated,
02:13.550 --> 02:15.350
but it's very quick indeed.
02:15.770 --> 02:20.810
And in case you think there's something evil with this signing terms of service, it's really if you
02:20.810 --> 02:26.420
read the fine print, it's about making sure that you're not going to use llama 3.1 for anything nefarious
02:26.420 --> 02:30.770
and that you have good intentions, which is very much the case in this class.
02:30.770 --> 02:34.400
So it should be no problems whatsoever signing that.
02:34.400 --> 02:39.590
Once you've done so, you will have access to all the variants of llama 3.1.
02:39.590 --> 02:43.070
It's one one sign and then it applies to the whole family.
02:43.370 --> 02:49.070
If you wanted to use one of the older llama models, like llama 3 or llama 2, you would need to go
02:49.070 --> 02:53.060
and sign the terms for that family of models.
02:53.450 --> 02:57.650
If for some reason you don't want to, or you're finding that they're not approving you right away,
02:57.650 --> 03:00.200
you can just skip to later when we start.
03:00.230 --> 03:05.490
Or you can just watch me executing llama 3.1, and then you can pick up when we start working with some
03:05.490 --> 03:06.840
of the other tokenizers.
03:06.840 --> 03:12.510
But with that, creating a tokenizer is this single line here.
03:12.690 --> 03:21.810
Hugging Face has this class AutoTokenizer, which will create whatever subclass of tokenizer is needed
03:21.810 --> 03:23.070
for this particular model.
03:23.100 --> 03:24.330
Don't need to worry too much about that.
03:24.330 --> 03:31.410
Just know that AutoTokenizer is the one to use, and you call the class method from_pretrained, which
03:31.410 --> 03:35.790
means I've got a pre-trained model and I want you to create the tokenizer for it.
03:35.820 --> 03:36.960
And that is the name.
03:36.960 --> 03:38.760
This is the model that we're using.
03:38.760 --> 03:41.610
That's which you can take directly from the Hugging face hub.
03:41.610 --> 03:45.690
It's meta-llama's Meta-Llama-3.1-8B.
03:45.720 --> 03:51.930
This trust_remote_code equals true: as you bring in this tokenizer, it's possible for there
03:51.930 --> 03:55.140
to be code that is part of a model.
03:55.140 --> 03:57.750
And we're saying we know who meta is.
03:57.780 --> 04:01.570
We know that this is fine so you can trust it.
04:01.840 --> 04:04.030
If you don't include that, it will still work fine.
04:04.030 --> 04:06.040
It just gives you a warning, an ugly warning.
04:06.040 --> 04:10.930
So if you don't want the ugly warning, then just, uh, put that in there.
04:11.950 --> 04:12.550
Okay.
04:12.550 --> 04:15.970
With that, the next thing I'm doing is I'm using the text.
04:16.000 --> 04:24.040
I'm excited to show Tokenizers in action to my LLM engineers, and we take that text as a string and
04:24.040 --> 04:27.160
we call tokenizer.encode on that text.
04:27.160 --> 04:30.070
And then we will print the tokens that result.
04:30.760 --> 04:31.720
Here they are.
04:31.750 --> 04:33.400
It's something that's super simple.
04:33.400 --> 04:34.720
It's just a list of numbers.
04:34.720 --> 04:35.860
Nothing more than that.
04:35.860 --> 04:37.390
Nothing magical about tokens.
04:37.390 --> 04:38.440
They are just numbers.
04:38.440 --> 04:40.960
And these numbers represent that text.
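The point that tokens are nothing but numbers can be made concrete with a tiny sketch. This is not the real Llama tokenizer: the vocab and the token ids below are made up purely for illustration, but the mechanics of encode and decode are the same idea.

```python
# A minimal sketch: a tokenizer is, at heart, a mapping between text
# fragments and numbers. This toy vocab and its ids are invented.
vocab = {"I'm": 0, " excited": 1, " to": 2, " show": 3, " Tokenizers": 4}
inverse_vocab = {i: frag for frag, i in vocab.items()}

def encode(fragments):
    """Turn a list of text fragments into a list of token ids."""
    return [vocab[f] for f in fragments]

def decode(tokens):
    """Turn token ids back into a single string."""
    return "".join(inverse_vocab[t] for t in tokens)

tokens = encode(["I'm", " excited", " to", " show", " Tokenizers"])
print(tokens)          # [0, 1, 2, 3, 4]: just a list of numbers
print(decode(tokens))  # reconstructs "I'm excited to show Tokenizers"
```

The real tokenizer's encode and decode work the same way, just with a vocab of over a hundred thousand fragments learned from data.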
04:40.990 --> 04:43.600
Let's see how many of them there are.
04:43.630 --> 04:50.320
Well, let's start by saying how many letters were in that text that we gave it.
04:50.350 --> 04:53.560
There are 61 letters in that text.
04:53.560 --> 04:56.260
So now we can count the number of tokens.
04:56.260 --> 05:02.510
And do you remember the rule of thumb for roughly how many characters map to a token?
05:02.540 --> 05:06.110
On average, it's four.
05:06.110 --> 05:06.440
Roughly.
05:06.440 --> 05:12.890
As a rule of thumb, about four letters should be one token for normal English text.
05:12.890 --> 05:16.880
So, for our 61 letters,
05:16.970 --> 05:19.790
we're expecting around 15 tokens.
05:19.820 --> 05:20.780
Let's see what we get.
05:20.780 --> 05:21.980
15 tokens.
05:21.980 --> 05:22.520
There we go.
05:22.550 --> 05:25.280
Exactly 15 tokens for this text.
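The rule-of-thumb arithmetic here is easy to check directly, using the same sentence from the video:

```python
# Rule of thumb from the video: roughly four characters of ordinary
# English text per token.
text = "I'm excited to show Tokenizers in action to my LLM engineers."
num_chars = len(text)
estimated_tokens = round(num_chars / 4)
print(num_chars)         # 61
print(estimated_tokens)  # 15, matching the actual Llama 3.1 token count
```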
05:25.610 --> 05:31.940
Um, and we can in fact do this decode to turn our tokens back into text again.
05:31.940 --> 05:35.150
So we're expecting to recreate the original text.
05:35.150 --> 05:39.020
And what we get is something similar, slightly different.
05:39.020 --> 05:44.180
As you will see what we get back is the text that we were expecting.
05:44.180 --> 05:50.990
But at the front of it is something new, this funny thing here: a piece of text that says, in
05:50.990 --> 05:55.010
angle brackets (less-than and greater-than signs), begin_of_text.
05:55.040 --> 05:55.910
What is this?
05:55.910 --> 06:01.090
So this is something called a special token; all of what I've highlighted just maps to
06:01.120 --> 06:01.900
one token.
06:01.930 --> 06:09.340
In fact, it's this token here, token 128000, and it is a special token indicating
06:09.370 --> 06:14.740
to our model that this is the start of the text of a prompt.
06:14.950 --> 06:20.710
And so it's used for that purpose, as a special indicator to the LLM.
06:20.740 --> 06:24.550
Now, again, you might be thinking, uh, okay.
06:24.580 --> 06:28.960
So does that mean that somehow the architecture of the transformer has to be set up so that it
06:28.990 --> 06:30.820
expects that kind of token?
06:30.910 --> 06:35.920
And, as you're probably very comfortable with by now, the answer is no.
06:35.920 --> 06:37.270
That's not what it means.
06:37.300 --> 06:43.000
What this means is that in all of the training examples that it saw during training time, it was
06:43.000 --> 06:44.080
set up this way.
06:44.080 --> 06:48.250
The training examples began with this special token begin of text.
06:48.250 --> 06:52.780
So it's got used to through training expecting that.
06:52.780 --> 06:58.330
And in order to ensure the highest quality output, one should recreate that same approach
06:58.390 --> 07:02.210
when feeding in new prompts at inference time.
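The idea described here can be sketched in a few lines of Python. This is a toy illustration only: the token id 128000 is the begin-of-text token mentioned in the video, but the rest of this pretend "tokenizer" is made up for the example and is not how real BPE tokenization works.

```python
# Toy illustration: training examples began with a special begin-of-text
# token, so at inference time we prepend that same token to new prompts.
BEGIN_OF_TEXT = 128000  # begin-of-text id from the video; rest is made up

def toy_encode(text, add_special_tokens=True):
    # Pretend each word maps to one token id (not how real BPE works).
    ids = [hash(word) % 1000 for word in text.split()]
    if add_special_tokens:
        ids = [BEGIN_OF_TEXT] + ids
    return ids

tokens = toy_encode("Tell a joke")
print(tokens[0])  # 128000, the special token, comes first
```

Real tokenizers do exactly this bookkeeping for you when `add_special_tokens` is left at its default.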
07:02.990 --> 07:04.670
So hope that made sense.
07:04.700 --> 07:08.360
There's another method, batch_decode.
07:08.360 --> 07:13.940
And if you run that with your tokens, instead of one string, you get back these
07:13.940 --> 07:19.550
little lists of strings, where each string represents one token.
07:19.550 --> 07:24.080
So as I say, this first token here turned into this here.
07:24.080 --> 07:27.920
And then you can follow through to see how that's working.
07:28.130 --> 07:30.920
Um, and there's a few things to note from this.
07:30.920 --> 07:36.080
Uh, as you'll see straight away, one of them is that in most cases a word mapped to a token, because
07:36.080 --> 07:37.730
we've got very simple words here.
07:37.730 --> 07:43.370
So "excited", even though it's way more than four characters, mapped to one token, because it's such
07:43.370 --> 07:45.380
a common word, it's in the vocab.
07:45.620 --> 07:53.180
Another thing to notice is that, as with the GPT tokenizer, the fact that something is
07:53.180 --> 07:58.700
the beginning of a word matters: the space before the word is part of the token.
07:58.700 --> 08:09.150
So " am", the space plus the letters "am" as the beginning of a word, is a different token to just "am",
08:09.150 --> 08:13.560
the fragment of characters that could be within something more complicated.
08:14.250 --> 08:20.640
You'll also notice that something like "Tokenizers" got broken into two tokens, one for "Token"
08:20.640 --> 08:23.130
and the other for "izers".
08:23.460 --> 08:28.740
So that's an interesting word ending, "izers".
08:28.740 --> 08:33.120
You could imagine that might be stuck on the end of lots of different things, and that's part of its
08:33.150 --> 08:34.350
tokenization.
08:34.380 --> 08:37.890
One other thing to notice is that it is case sensitive.
08:37.890 --> 08:43.860
So you can see that "Token" with a capital T has been taken there.
08:45.120 --> 08:53.040
So the final thing I want to mention here is tokenizer.vocab.
08:53.070 --> 08:58.500
If you run tokenizer.vocab, it gives you the
08:58.500 --> 09:03.980
dictionary of the complete mapping between fragments of words and numbers.
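The shape of that vocab, and of the added vocab of special tokens discussed below, can be sketched as plain Python dictionaries. The word fragments and the main-vocab ids here are invented for illustration; the special-token names and ids follow Llama 3.1's published layout, but treat them as illustrative rather than authoritative.

```python
# Sketch of the two mappings: the main vocab (word fragments to numbers)
# and the added vocab of special tokens reserved to signal things to the LLM.
# Fragment ids are made up; special-token ids mirror Llama 3.1's layout.
vocab = {"Token": 101, "izers": 202, " am": 303, "am": 304}
added_vocab = {
    "<|begin_of_text|>": 128000,
    "<|end_of_text|>": 128001,
    "<|start_header_id|>": 128006,
    "<|end_header_id|>": 128007,
}

# Every fragment maps to exactly one id; note " am" and "am" differ,
# and the mapping is case sensitive.
print(vocab["Token"])
print(added_vocab["<|begin_of_text|>"])
```

In the real library, `tokenizer.vocab` returns the full mapping and `tokenizer.get_added_vocab()` returns just the special tokens added on top of it.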
09:04.310 --> 09:06.590
And you can see there's some pretty obscure things here.
09:06.590 --> 09:12.620
There's an awful lot of tokens that are available, and there's some quite odd tokens in here that are
09:12.740 --> 09:15.920
from different languages or used for different purposes.
09:16.190 --> 09:22.580
So very much it does go beyond three letters, four letters, and you'll see a number of different things.
09:22.610 --> 09:26.630
It's printed out quite a lot of them.
09:26.870 --> 09:32.840
Something else that I'll show you from this, as I scroll back through all of our dictionary
09:33.050 --> 09:34.040
to get back here,
09:34.250 --> 09:41.990
is that you can also print (if I comment this out)
09:42.440 --> 09:48.470
just what's called the added vocab, which are the special tokens that I mentioned.
09:48.650 --> 09:53.840
There's a bunch of these reserved special tokens; at the top you can see here are
09:53.840 --> 10:01.560
the special tokens that have been reserved in the vocab, to be used to signal things to the
10:01.560 --> 10:01.860
LLM.
10:01.890 --> 10:02.580
Beginning of text.
10:02.610 --> 10:03.570
End of text.
10:04.020 --> 10:06.150
Some reserved, um.
10:06.180 --> 10:11.100
And then a start header ID and an end header ID.
10:11.100 --> 10:12.690
And then some other things here.
10:12.690 --> 10:14.190
And a Python tag.
10:14.220 --> 10:17.070
Uh, something obviously special there.
10:17.070 --> 10:25.470
So for whatever reason, these are the special tokens that have been identified as being
10:25.470 --> 10:33.300
useful to include in the vocab and provide during training, so that when
10:33.330 --> 10:38.850
you're doing inference, when you're running the model to generate text, you can use these
10:38.850 --> 10:42.180
tokens to indicate things to the model.
10:42.960 --> 10:43.530
All right.
10:43.560 --> 10:47.580
Well, that's a bit of playing around with the Llama 3 model,
10:47.640 --> 10:49.290
the Llama 3.1 tokenizer.
10:49.320 --> 10:56.670
When we come back, we're going to look at the way that this applies to chats in particular.
10:56.670 --> 10:59.640
And then we're going to play with some other tokenizers.
10:59.640 --> 11:00.390
So see you then.

439
week5/community-contributions/subtitles/srts/59170227/ja_JP.srt

@ -0,0 +1,439 @@
WEBVTT
00:00.200 --> 00:02.360
Google Colabへようこそ。
00:02.360 --> 00:06.290
トーケナイザーの素晴らしい世界を探検してみよう。
00:06.290 --> 00:11.360
それで、 ええと、 まず最初にすることは、 輸入をすることだ。
00:11.600 --> 00:15.290
そして、 それが終わった後、 ここでこの発言に触れたい。
00:15.290 --> 00:23.300
前回のビデオでこのことを言い忘れたが、 あのコラボに追加したので、 とにかく見つけて私の説明を読んでほしい。
00:23.450 --> 00:28.220
ええと、 Huggingfaceにログインしたことがないなら、 Colabにログインする必要があるかもしれない。
00:28.220 --> 00:31.370
そのためのコードがこれだ。
00:31.370 --> 00:36.890
まず最初に、 まだ抱き顔のアカウントを作成していない場合は、 抱き顔のアカウントが必要です。
00:36.890 --> 00:37.700
無料だ。
00:37.730 --> 00:40.910
それは素晴らしいことで、 決して後悔することはない。
00:40.910 --> 00:46.970
そこで、 huggingfaceにサインアップし、 設定に移動して新しいAPIトークンを作成する。
00:46.970 --> 00:48.470
自分に書く許可を与える。
00:48.470 --> 00:53.060
今は必要ないだろうが、 将来は必要になるだろう。
00:53.090 --> 00:59.570
そして戻ってきたら、 このColabのキーセクションに行き、 新しいシークレットを追加する。
00:59.570 --> 01:05.220
secretにはHFアンダースコア・トークンを、 valueにはあなたのトークンを指定する。
01:05.220 --> 01:15.000
そして、 シークレットからHFトークンを取得するこのコードを実行し、 ここでインポートしたログイン・メソッドを呼び出すだけです。
01:15.000 --> 01:18.180
そして、 そのログイン方法はハグ顔にログインする。
01:18.180 --> 01:19.470
すぐに実行しよう。
01:19.470 --> 01:20.760
そして完成した。
01:20.790 --> 01:23.400
そして、 そこに権利があると書いてあるのがわかるだろう。
01:24.060 --> 01:35.400
さて、 トーケナイザーの話をしよう......まずはファンタスティックなラマ3から。 1、 メタの象徴的なモデルで、 オープンソースモデルへの道を開いた。
01:35.880 --> 01:42.240
さて、 llama 3を使っているとき。 1、 metaはまず利用規約にサインする必要がある。
01:42.240 --> 01:47.520
その方法は、 ここにリンクされているハギング・フェイスのモデル・ページにアクセスすることだ。
01:47.520 --> 01:52.680
そのページの一番上には、 サインするために必要なことがとてもシンプルに書かれている。
01:52.830 --> 01:59.610
メールアドレスは、 あなたのハグする顔のアカウントと一致しているのがベストです。
01:59.610 --> 02:01.370
つまり、 素早く物事を成し遂げるということだ。
02:01.370 --> 02:04.370
実際、 数分で承認されるはずだ。
02:04.370 --> 02:07.610
土曜の深夜に一度だけ行ったこともある。
02:07.610 --> 02:09.680
とても早く承認されたよ。
02:09.740 --> 02:13.460
ただ、 彼らが本当にボールを持っているからなのか、 それともすべて自動化されているのかはわからないが、
02:13.550 --> 02:15.350
実に素早い。
02:15.770 --> 02:26.420
万が一、 この署名規約が何か邪悪なものだと思われるかもしれないが、 細かい字を読めば、 それはあなたがレンマ3を使うつもりがないことを確認するためのものなのだ。
02:26.420 --> 02:26.420
1悪意はなく、
02:26.420 --> 02:30.770
善意がある。
02:30.770 --> 02:34.400
だから、 サインすることに何の問題もないはずだ。
02:34.400 --> 02:39.590
そうすれば、 ラマ3のすべてのバリエーションにアクセスできるようになる。 1.
02:39.590 --> 02:43.070
一つのサインで、 家族全員に適用されるんだ。
02:43.370 --> 02:53.060
もし、 llama 3や2のような古いllama 3モデルを使いたいのであれば、 そのモデルのファミリーの契約書にサインする必要がある。
02:53.450 --> 02:57.650
もし、 何らかの理由で承認されたくなかったり、 すぐに承認されなかったりした場合は、
02:57.650 --> 03:00.200
後日、 私たちが開始するときにスキップすればいい。
03:00.230 --> 03:06.840
それか、 私が3人分プレーするのを見ることもできる。 1、 そして、 他のトークナイザーを使い始めたら、 また戻ってくることができる。
03:06.840 --> 03:12.510
しかし、 トークナイザーの作成はこの1行だけだ。
03:12.690 --> 03:23.070
Hugging faceにはオート・トークナイザーというクラスがあり、 この特定のモデルに必要なトークナイザーのサブクラスを作成する。
03:23.100 --> 03:24.330
あまり心配する必要はない。
03:24.330 --> 03:31.410
オート・トークナイザーは、 pre-trainedからクラス・メソッドを呼び出します。 つまり、 事前に訓練されたモデルがあるので、
03:31.410 --> 03:35.790
そのためのトークナイザーを作成してほしいということです。
03:35.820 --> 03:36.960
それが名前だ。
03:36.960 --> 03:38.760
これが私たちが使っているモデルだ。
03:38.760 --> 03:41.610
それは、 ハグする顔のハブから直接取ることができるものだ。
03:41.610 --> 03:45.690
メタ・ラマのメタ・ラマ3だ。 180億ドル
03:45.720 --> 03:55.140
このトークナイザーを持ち込むと、 モデルの一部であるコードが存在する可能性がある。
03:55.140 --> 03:57.750
そして、 メタの正体を知っていると言っているんだ。
03:57.780 --> 04:01.570
私たちはこれが問題ないことを知っていますから、 信頼してください。
04:01.840 --> 04:04.030
それを含まなくても、 問題なく機能する。
04:04.030 --> 04:06.040
ただ警告を与えるだけだ。
04:06.040 --> 04:10.930
だから、 もし醜い警告を出したくないのであれば、 そう書いておいてくれ。
04:11.950 --> 04:12.550
オーケー。
04:12.550 --> 04:15.970
それで次にすることは、 テキストを使うことだ。
04:16.000 --> 04:27.160
LLMのエンジニアにトーケナイザーの動きを見せるのが楽しみです。 テキストを文字列として受け取り、 トーケナイザーを呼び出してそのテキストをドット・エンコードします。
04:27.160 --> 04:30.070
そして、 その結果のトークンを印刷する。
04:30.760 --> 04:31.720
それがこれだ。
04:31.750 --> 04:33.400
とてもシンプルなことなんだ。
04:33.400 --> 04:34.720
単なる数字の羅列だ。
04:34.720 --> 04:35.860
それ以上のことはない。
04:35.860 --> 04:37.390
トークンには何の不思議もない。
04:37.390 --> 04:38.440
ただの数字だ。
04:38.440 --> 04:40.960
そして、 この数字はそのテキストを表している。
04:40.990 --> 04:43.600
何人いるか見てみよう。
04:43.630 --> 04:50.320
では、 まず、 私たちが渡したテキストに何文字あったかを言ってみよう。
04:50.350 --> 04:53.560
そのテキストには61の文字がある。
04:53.560 --> 04:56.260
これでトークンの数を数えることができる。
04:56.260 --> 05:02.510
大まかに言って、 トークンに何文字が対応するかという経験則を覚えていますか?
05:02.540 --> 05:06.110
平均すると4人だ。
05:06.110 --> 05:06.440
大体ね。
05:06.440 --> 05:12.890
経験則では、 通常の英語、 または英語をたくさん使う場合は、 4文字程度を1トークンとする。
05:12.890 --> 05:16.880
だから61通を期待している。
05:16.970 --> 05:19.790
トークンは15枚程度を想定している。
05:19.820 --> 05:20.780
何が出てくるか見てみよう。
05:20.780 --> 05:21.980
15トークン
05:21.980 --> 05:22.520
これでよし。
05:22.550 --> 05:25.280
このテキストにちょうど15トークン。
05:25.610 --> 05:31.940
トークンをテキストに戻すために、 このデコードを行うことができる。
05:31.940 --> 05:35.150
だから、 原文を再現することを期待している。
05:35.150 --> 05:39.020
そして私たちが手にするのは、 似ているようで少し違うものだ。
05:39.020 --> 05:44.180
おわかりのように、 返ってくるのは期待通りのテキストである。
05:44.180 --> 05:50.990
しかし、 その前面には新しいものがある。 このおかしなもの、 角度のついた括弧で囲まれたテキストは、 less
05:50.990 --> 05:55.010
thanとgreater thanの記号で始まる。
05:55.040 --> 05:55.910
これは何だ?
05:55.910 --> 06:01.900
これはスペシャル・トークンと呼ばれるもので、 ハイライトしたものはすべて1つのトークンにマッピングされます。
06:01.930 --> 06:14.740
実際、 このトークン、 128,000トークンは特別なトークンで、 プロンプトのテキストの始まりであることをモデルに示している。
06:14.950 --> 06:20.710
だから、 LMに特別な指示を出すために使うんだ。
06:20.740 --> 06:24.550
さて、 皆さんはこう思うかもしれない。
06:24.580 --> 06:30.820
ということは、 何らかの方法でトランスフォーマーのアーキテクチャを設定し、 そのようなトークンを期待するようにしなければならないということですか?
06:30.910 --> 06:35.920
ええと、 そして、 おそらくあなたは今、 とても快適だと思いますが、 答えはノーです。
06:35.920 --> 06:37.270
そういう意味ではない。
06:37.300 --> 06:44.080
ええと、 これはどういう意味かというと、 トレーニング中に見たすべてのトレーニング例の中で、 このように設定されていたということだ。
06:44.080 --> 06:48.250
トレーニングの例は、 この特別なトークンから始まる。
06:48.250 --> 06:52.780
だから、 それを期待したトレーニングで慣れてきたんだ。
06:52.780 --> 06:58.330
そして、 最高品質のアウトプットを確実にするためには、 同じアプローチを再現する必要がある。
06:58.390 --> 07:02.210
ええと、 推論時に新しいプロンプトを入力するとき。
07:02.990 --> 07:04.670
というわけで、 お分かりいただけただろうか。
07:04.700 --> 07:08.360
バッチデコードの方法もある。
07:08.360 --> 07:13.940
トークンを使ってこれを実行すると、 1つの文字列の代わりに、 それぞれの文字列が1つのトークンを表す、
07:13.940 --> 07:19.550
小さな文字列のセットが返ってくる。
07:19.550 --> 07:24.080
だから、 この最初のトークンがここになったんだ。
07:24.080 --> 07:27.920
そして、 それがどのように機能しているかを確認するために、 フォロースルーすることができる。
07:28.130 --> 07:30.920
ええと、 ここから注目すべきことがいくつかある。
07:30.920 --> 07:37.730
そのひとつは、 ほとんどの場合、 単語がトークンにマッピングされることだ。
07:37.730 --> 07:43.370
1つのトークンにマッピングされる文字数は4文字よりはるかに多いのですが、 一般的な単語なので、
07:43.370 --> 07:45.380
ボキャブラリーに入っています。
07:45.620 --> 07:58.700
GPTトークナイザーと同じように、 単語の前にあるスペースもトークンの一部です。
07:58.700 --> 08:13.560
So and so amは言葉の始まりで、 Amという文字はただのamとは違うトークンであり、 もっと複雑なものの中にある可能性のある文字の断片である。
08:14.250 --> 08:23.130
また、 Tokenizersのようなものが、 単語トークンとISAの2つのトークンに分割されたことにもお気づきだろう。
08:23.460 --> 08:28.740
ISAの語尾は面白いね。
08:28.740 --> 08:34.350
それはトークン化の一部なんだ。
08:34.380 --> 08:37.890
もうひとつ注意しなければならないのは、 大文字と小文字が区別されるということだ。
08:37.890 --> 08:43.860
だから、 大文字のTがついたトークンがそこにあるのがわかるだろう。
08:45.120 --> 08:53.040
最後に、 トークナイザー・ドット・ボキャブについて触れておこう。
08:53.070 --> 08:58.500
tokenizer dot vocabを実行すると、 ええと、 これが表示されます。
08:58.500 --> 09:03.980
言葉の断片と数字の完全な対応付けの辞書である。
09:04.310 --> 09:06.590
そして、 ここにはかなり曖昧なものがあるのがわかるだろう。
09:06.590 --> 09:12.620
非常に多くのトークンが用意されており、 中には異なる言語や異なる目的で使用される、
09:12.740 --> 09:15.920
かなり奇妙なトークンも含まれている。
09:16.190 --> 09:22.580
だから、 3文字や4文字の枠を超え、 さまざまなものを目にすることになる。
09:22.610 --> 09:26.630
A ええと、 かなりたくさん印刷されています。
09:26.870 --> 09:32.840
ええと、 この辞書をスクロールしていくと、 他のものが出てきます。
09:33.050 --> 09:34.040
ここに戻ってこい。
09:34.250 --> 09:41.990
印刷もできるし、 コメントもできる。
09:42.440 --> 09:48.470
ええと、 追加されたボキャブラリーと呼ばれるもので、 さっき言った特別なトークンです。
09:48.650 --> 09:53.840
申し訳ないが、 一番上にあるのは、 LMに合図を送るために使われる、
09:53.840 --> 10:01.860
語彙に予約されている特別なトークンだ。
10:01.890 --> 10:02.580
本文の冒頭。
10:02.610 --> 10:03.570
本文終わり。
10:04.020 --> 10:06.150
ちょっと遠慮がちに...。
10:06.180 --> 10:11.100
そしてスタートヘッダ、 ID、 ヘッダ。
10:11.100 --> 10:12.690
そして他にもいくつかある。
10:12.690 --> 10:14.190
そしてパイソンのタグ。
10:14.220 --> 10:17.070
明らかに特別な何かがある。
10:17.070 --> 10:25.470
どんな理由であれ、 これらの特別なトークンを語彙に含め、 トレーニング中に提供することは、
10:25.470 --> 10:42.180
推論を行う際や、 テキストを生成するためにモデルを実行する際に、 これらのトークンを使ってモデルに物事を示すことができるため、 有用であると認識されています。
10:42.960 --> 10:43.530
分かった。
10:43.560 --> 10:47.580
まあ、 これはラマ3モデルでちょっと遊んだだけだ。
10:47.640 --> 10:49.290
ええと、 ラマ3。 1トークナイザー。
10:49.320 --> 10:56.670
また戻ってきたら、 特にチャットに適用される方法を見てみよう。
10:56.670 --> 10:59.640
それから、 他のトークナイザーも使ってみよう。
10:59.640 --> 11:00.390
それではまた。

502
week5/community-contributions/subtitles/srts/59170227/ko_KR.srt

@ -0,0 +1,502 @@
WEBVTT
00:00.200 --> 00:02.360
구글 콜랍에 잘 오셨어요
00:02.360 --> 00:06.290
이제 토큰이들의 세계를 탐험해 볼까요?
00:06.290 --> 00:11.360
먼저 할 일은 수입품 처리를 하는 거예요
00:11.600 --> 00:15.290
그걸 한 후 여기 이 문장을 언급하고 싶어요
00:15.290 --> 00:20.750
지난 강의에서 언급하는 걸 잊었는데 Colab에 추가했어요 어쨌든
00:20.780 --> 00:23.300
찾아서 제 설명을 읽어보세요
00:23.450 --> 00:28.220
콜랍의 포옹 사이트에 로그인해야 할 거예요
00:28.220 --> 00:31.370
그걸 위해 사용하는 코드가 이거죠
00:31.370 --> 00:36.890
먼저, 포옹하는 얼굴 계정을 아직 만들지 않으셨다면 포옹하는 얼굴 계정이 필요해요
00:36.890 --> 00:37.700
공짜예요
00:37.730 --> 00:40.910
정말 멋지고 절대 후회하지 않을 거예요
00:40.910 --> 00:46.970
안기페이스에 등록한 다음 설정으로 이동해 새 API 토큰을 생성하세요
00:46.970 --> 00:48.470
스스로 허락하는 거죠
00:48.470 --> 00:52.130
오늘은 올바른 권한이 필요 없지만 나중엔 필요할 테니 지금 설정하는
00:52.130 --> 00:53.060
게 좋아요
00:53.090 --> 00:59.570
다시 돌아와서 Colab의 이 키 섹션으로 가서 새 비밀을 추가하세요
00:59.570 --> 01:05.220
비밀은 HF_token이라고 하고 값은 여러분의 토큰이어야 해요
01:05.220 --> 01:12.270
이제 코드를 실행해서 기밀에서 HF 토큰을 가져오면 로그인 메서드를 호출할
01:12.300 --> 01:15.000
거예요 여기 불러왔죠
01:15.000 --> 01:18.180
로그인 방법은 얼굴을 안는 거예요
01:18.180 --> 01:19.470
바로 실행하죠
01:19.470 --> 01:20.760
다 됐어요
01:20.790 --> 01:23.400
여기 보면 권한이 있다고 나와 있죠
01:24.060 --> 01:31.950
토큰라이저에 대해 얘기해 보죠 환상적인 라마 3부터 시작할게요 1번, 메타의 상징적인 모델 오픈
01:31.950 --> 01:35.400
소스 모델의 길을 닦았죠
01:35.880 --> 01:42.240
llama 3을 사용하면요 1. 메타 서비스 약관에 먼저 서명하세요
01:42.240 --> 01:47.520
모델 페이지에 방문해서 얼굴 안기기를 하면 돼요 여기 링크가 있죠
01:47.520 --> 01:52.680
그 페이지 상단에 서명하기 위해 해야 할 간단한 지침이 있어요
01:52.830 --> 01:57.270
이메일 주소를 제공해야 하는데 포옹하는 얼굴 계정과 일치하는
01:57.270 --> 01:59.610
이메일 주소가 좋아요
01:59.610 --> 02:01.370
I'm get it's get it. 일이 빨리 끝난다는 뜻이죠
02:01.370 --> 02:04.370
몇 분 내로 승인될 거예요
02:04.370 --> 02:07.610
토요일 밤늦게까지 여러 번 해 봤어요
02:07.610 --> 02:09.680
아주 빨리 승인을 받았어요
02:09.740 --> 02:13.460
정말 꼼꼼해서 그런 건지 자동화되어 있어서 그런 건진 모르겠지만
02:13.550 --> 02:15.350
정말 빠르네요
02:15.770 --> 02:20.810
이 서비스 서명 약관에 뭔가 해로운 게 있다고 생각하실까 봐 말씀드리는데 작은 글씨를 읽어 보시면
02:20.810 --> 02:26.420
lemma 3을 사용하지 않도록 확실히 하는 거예요 1번, 비도덕적인 행동과 선한 의도가
02:26.420 --> 02:30.770
있을 경우입니다 이 수업에서는 그런 경우가 많죠
02:30.770 --> 02:34.400
그러니 서명하는 건 문제가 안 될 거예요
02:34.400 --> 02:39.590
그렇게 하면 라마다 3의 모든 변수를 볼 수 있죠 1번요
02:39.590 --> 02:43.070
하나의 간판이 가족 전체에 적용돼요
02:43.370 --> 02:49.070
라마 3이나 2 같은 구형 라마 모델을 사용하려면 모델
02:49.070 --> 02:53.060
가족에 가서 계약서에 서명해야 해요
02:53.450 --> 02:57.650
혹시 하기 싫거나 당장 승인해 주지 않는 것 같으면
02:57.650 --> 03:00.200
나중에 시작해도 돼요
03:00.230 --> 03:05.490
아니면 제가 3번 실행하는 걸 보셔도 돼요 1, 다른 토큰라이저와 작업하기 시작하면
03:05.490 --> 03:06.840
그걸 선택하세요
03:06.840 --> 03:12.510
하지만 이걸로 토큰라이저를 만드는 건 여기 이 한 줄이죠
03:12.690 --> 03:21.810
안는 얼굴에는 오토 토큰마이저 클래스가 있습니다 이 모델을 위해 필요한 토큰마이저의 서브클래스를
03:21.810 --> 03:23.070
생성하죠
03:23.100 --> 03:24.330
그건 너무 걱정하지 마세요
03:24.330 --> 03:31.410
오토 토큰마이저를 이용하면 됩니다 미리 훈련된 것을 이용해 수업 메서드를 호출합니다 즉, 미리 훈련된 모델이
03:31.410 --> 03:35.790
있다면 토큰마이저를 이용하여 이를 위해 개발하면 된다는 거죠
03:35.820 --> 03:36.960
그게 이름이에요
03:36.960 --> 03:38.760
이게 우리가 사용하는 모델이에요
03:38.760 --> 03:41.610
안아주기 얼굴 허브에서 바로 가져올 수 있는 거죠
03:41.610 --> 03:45.690
메타 라마 3이에요 180억 달러요
03:45.720 --> 03:51.930
이 트러스트 원격 코드는 true입니다 토큰라이저를 가져오면 모델의
03:51.930 --> 03:55.140
일부인 코드가 있을 수 있어요
03:55.140 --> 03:57.750
메타가 누군지 안다고 했잖아요
03:57.780 --> 04:01.570
이건 괜찮으니까 믿어도 돼요
04:01.840 --> 04:04.030
그것만 빼면 괜찮을 거예요
04:04.030 --> 04:06.040
그저 추악한 경고만 남겨요
04:06.040 --> 04:10.930
어글리 경고가 싫으면 그냥 Put을 해요
04:11.950 --> 04:12.550
04:12.550 --> 04:15.970
이것과 함께 다음으로 할 일은 텍스트를 이용하는 거예요
04:16.000 --> 04:24.040
LLM 엔지니어들에게 토큰라이저의 작동을 보여드릴 수 있어 기쁩니다 텍스트를 문자열로 만들어 Tokenizer.incode로
04:24.040 --> 04:27.160
호출하죠
04:27.160 --> 04:30.070
그 결과의 토큰을 인쇄하죠
04:30.760 --> 04:31.720
여기 있네요
04:31.750 --> 04:33.400
아주 간단한 거예요
04:33.400 --> 04:34.720
그냥 번호표예요
04:34.720 --> 04:35.860
그 이상은 아니에요
04:35.860 --> 04:37.390
증표는 마법과 아무 상관 없어요
04:37.390 --> 04:38.440
그냥 숫자일 뿐이에요
04:38.440 --> 04:40.960
이 숫자들이 그 텍스트를 나타내죠
04:40.990 --> 04:43.600
몇 개나 있는지 보죠
04:43.630 --> 04:50.320
우리가 준 문자에 편지가 몇 통이나 있었는지부터 말해보죠
04:50.350 --> 04:53.560
글자가 61개나 돼요
04:53.560 --> 04:56.260
이제 패의 수를 세면 돼요
04:56.260 --> 05:02.510
대략적으로 토큰 하나에 몇 글자가 매개되는지 기억하시나요?
05:02.540 --> 05:06.110
평균 4개예요
05:06.110 --> 05:06.440
대강요
05:06.440 --> 05:12.890
경험상 네 글자는 토큰이 되어야 해요 영어를 많이 안다면 말이죠
05:12.890 --> 05:16.880
61통의 글자가 예상되네요
05:16.970 --> 05:19.790
15토큰 정도 예상해요
05:19.820 --> 05:20.780
get get을 해 보죠
05:20.780 --> 05:21.980
15토큰요
05:21.980 --> 05:22.520
됐어요
05:22.550 --> 05:25.280
토큰이 정확히 15개예요
05:25.610 --> 05:31.940
이 디코딩을 통해 토큰을 다시 텍스트로 바꿀 수 있어요
05:31.940 --> 05:35.150
그래서 원문을 재창조할 거예요
05:35.150 --> 05:39.020
Get it은 비슷하면서도 약간 달라요
05:39.020 --> 05:44.180
보다시피 get get은 우리가 기대하는 텍스트예요
05:44.180 --> 05:50.990
그런데 그 앞에 새로운 게 있어요 여기 재미있는 거요 비스듬한 대괄호로 기호 시작보다보다보다보다보다보다보다보다보다보다보다보다가
05:50.990 --> 05:55.010
텍스트 집합이죠
05:55.040 --> 05:55.910
이게 뭐죠?
05:55.910 --> 06:01.090
이건 특별한 토큰이란 건데요 제가 강조 표시한 모든 게 하나의 토큰에 매핑된
06:01.120 --> 06:01.900
거죠
06:01.930 --> 06:09.340
사실 여기 이 128,000 토큰은 특별한 토큰으로 우리 모델에
06:09.370 --> 06:14.740
프롬프트의 텍스트의 시작을 알려주고 있어요
06:14.950 --> 06:20.710
그 목적으로 LM의 특별한 지표가 된 거죠
06:20.740 --> 06:24.550
이렇게 생각하실지도 몰라요
06:24.580 --> 06:28.960
그렇다면 변압기의 구조가 그런 종류의 토큰을 기대하도록 설정되어야
06:28.990 --> 06:30.820
한다는 뜻인가요?
06:30.910 --> 06:35.920
지금은 아주 편하실지 모르겠지만 제 대답은 안 된다는 거예요
06:35.920 --> 06:37.270
그런 뜻이 아니에요
06:37.300 --> 06:43.000
이 말은 훈련 중에 본 모든 훈련 사례가 이런 식으로 설정됐다는
06:43.000 --> 06:44.080
뜻이에요
06:44.080 --> 06:48.250
훈련 예시는 이 특별한 토큰에서 시작됐죠
06:48.250 --> 06:52.780
훈련을 통해 그런 걸 기대하며 익숙해졌죠
06:52.780 --> 06:58.330
최상의 결과를 내기 위해서는 같은 방법을 써야 하죠
06:58.390 --> 07:02.210
새로운 먹이를 줄 때 추론할 때 나타나요
07:02.990 --> 07:04.670
이해가 되셨길 바라요
07:04.700 --> 07:08.360
다른 방법으로 해독할 수도 있어요
07:08.360 --> 07:13.940
그걸 토큰과 함께 실행하면 하나의 문자열 대신 이런 문자열
07:13.940 --> 07:19.550
집합을 얻게 됩니다 각각의 문자열이 토큰을 나타내는 거죠
07:19.550 --> 07:24.080
말씀드렸듯이 이 첫 번째 토큰이 이렇게 변했어요
07:24.080 --> 07:27.920
그러면 그게 어떻게 작동하는지 볼 수 있죠
07:28.130 --> 07:30.920
몇 가지 주의할 점이 있어요
07:30.920 --> 07:36.080
바로 보실 수 있듯이, 그중 하나는 대부분의 경우 워드를 토큰에 매핑한 것입니다. 왜냐하면 여기에는 아주
07:36.080 --> 07:37.730
간단한 단어들이 있으니까요.
07:37.730 --> 07:43.370
정말 신나요, 토큰 하나에 4자 이상으로 매핑되긴 하지만요 흔한 단어니까요
07:43.370 --> 07:45.380
단어 선택에 포함돼 있죠
07:45.620 --> 07:53.180
또 하나 주목할 점은 GPT 토큰라이저와 마찬가지로 단어 시작을 의미하는
07:53.180 --> 07:58.700
겁니다 단어 앞의 이 공백은 토큰의 일부죠
07:58.700 --> 08:09.150
그래서 AM은 단어의 시작으로 쓰이고 알파벳 AM은 문자 조각인 AM을 나타내는 다른 토큰이에요
08:09.150 --> 08:13.560
더 복잡한 무언가에 속해 있을 수 있죠
08:14.250 --> 08:20.640
토큰라이저 같은 것이 토큰 두 개로 나뉘는 것도 보이실 겁니다 하나는 워드 토큰이고
08:20.640 --> 08:23.130
다른 하나는 ISA죠
08:23.460 --> 08:28.740
ISA로 끝나는 단어가 참 흥미롭네요
08:28.740 --> 08:33.120
여러 가지 끝에 걸렸을 수도 있어요 그것도 토큰화의
08:33.150 --> 08:34.350
일부죠
08:34.380 --> 08:37.890
또 하나 주목할 점은 대소문자를 구별한다는 거죠
08:37.890 --> 08:43.860
보시다시피 대문자 T로 시작하는 토큰이 저기로 옮겨졌어요
08:45.120 --> 08:53.040
마지막으로 말씀드리고 싶은 건 토큰라이저라는 단어예요
08:53.070 --> 08:58.500
토큰라이저 닷 단어집을 실행하면 get이 나와요
08:58.500 --> 09:03.980
단어와 숫자 조각 사이를 오가는 완전한 지도 사전이죠
09:04.310 --> 09:06.590
잘 안 알려진 것들이 있어요
09:06.590 --> 09:12.620
사용할 수 있는 토큰이 정말 많아요 여기엔 꽤 이상한 토큰들도 있어요 다른 언어의
09:12.740 --> 09:15.920
토큰이거나 다른 목적으로 사용되고 있죠
09:16.190 --> 09:22.580
서너 글자 정도가 아니라 다양한 걸 볼 수 있어요
09:22.610 --> 09:26.630
꽤 많이 인쇄했어요
09:26.870 --> 09:32.840
여기서 보여드릴 게 또 있어요 우리 사전을 쭉 넘기면서 보여드리죠
09:33.050 --> 09:34.040
Get it, Get it, Get it, Get it, get, it, it, it! 이리 와요
09:34.250 --> 09:41.990
주석 인쇄도 할 수 있다는 거예요
09:42.440 --> 09:48.470
어, 추가된 단어라고 하는 건데, 제가 아까 말했던 특별한 토큰이에요
09:48.650 --> 09:53.840
예약된 특별한 토큰이 여러 개 있는데요 죄송합니다, 맨
09:53.840 --> 10:01.860
위에 보이는 건 보캡에서 예약된 특별한 토큰으로 LM에 신호를 보내기 위해 사용되죠
10:01.890 --> 10:02.580
글의 시작이죠
10:02.610 --> 10:03.570
그게 다예요
10:04.020 --> 10:06.150
예약된 것도 있고요
10:06.180 --> 10:11.100
그리고 시작 헤더, ID와 헤더를 두죠
10:11.100 --> 10:12.690
다른 것도 있어요
10:12.690 --> 10:14.190
파이썬 태그도요
10:14.220 --> 10:17.070
뭔가 특별한 게 있어요
10:17.070 --> 10:25.470
어떤 이유에서든지 이 토큰들은 특별한 토큰들입니다. 이 특별한 토큰들을 단어 선택에 포함시키면
10:25.470 --> 10:33.300
유용하게 쓰일 것입니다. 그리고 훈련 중에 제공해서 추론을 할 때나, 모델을 실행할
10:33.330 --> 10:38.850
때 텍스트를 생성할 때, 이 토큰들을 이용해서 모델에 무언가를
10:38.850 --> 10:42.180
표시할 수 있어요.
10:42.960 --> 10:43.530
좋아요
10:43.560 --> 10:47.580
라마 3 모델은 좀 비트가 있었죠
10:47.640 --> 10:49.290
라마 3요 토큰라이저 1개요
10:49.320 --> 10:56.670
잠시 후에는 이 기능이 채팅방에 어떻게 적용되는지 살펴볼 거예요
10:56.670 --> 10:59.640
그런 다음 다른 토큰라이저로 놀 거예요
10:59.640 --> 11:00.390
그럼 그때 봐요

475
week5/community-contributions/subtitles/srts/59170233/en_US.srt

@ -0,0 +1,475 @@
WEBVTT
00:00.560 --> 00:04.160
Welcome back to our continued exploits with Tokenizers.
00:04.160 --> 00:09.830
What we're now going to look at is what's called the instruct variants of models.
00:09.830 --> 00:18.650
So there are many models that have been fine tuned to be specifically designed for chats, for carrying
00:18.650 --> 00:28.430
out chat conversations with users, as one does with GPT-4 in ChatGPT.
00:28.520 --> 00:33.830
Typically, when you see those models on Hugging Face, you'll see that they have the same name as their
00:33.830 --> 00:40.580
base models, but with instruct added to the end of it, meaning that they have been fine tuned to be
00:40.580 --> 00:43.310
used in this instruct use case.
00:43.610 --> 00:50.870
Uh, they have been trained to expect prompts in a particular structure with a particular set of special
00:50.900 --> 00:59.660
tokens that identifies the system message, the user message and the assistant responses, so that it forms
00:59.690 --> 01:00.920
a kind of a chat.
01:00.920 --> 01:06.270
And that is just simply part of the way that it's been trained with enough examples.
01:06.270 --> 01:13.260
So it expects it in this format, and this is hopefully going to bring some things together for you,
01:13.260 --> 01:19.830
because it's now finally going to close the loop on something where I planted a seed some time ago about
01:19.830 --> 01:26.250
the reason for this structure of messages, lists of dicts that we became very familiar with when we
01:26.280 --> 01:28.290
were playing with frontier models.
01:28.290 --> 01:37.470
So I'm going to create my tokenizer this time using the Meta-Llama-3.1-8B-Instruct variant.
01:37.470 --> 01:39.720
So this will look familiar to you.
01:39.720 --> 01:48.420
This is one of those lists of dicts that we use so much with OpenAI and Claude and so on,
01:48.420 --> 01:55.530
where you specify a role and content: a role of system is for the system message, and user is for the
01:55.530 --> 01:56.790
user message.
01:56.880 --> 02:06.570
Then the tokenizers that Hugging Face provides have a special function, apply_chat_template, and it will
02:06.570 --> 02:16.170
take messages in this format, the OpenAI API format, and convert them into the right structure
02:16.170 --> 02:24.960
to be used for this particular model, the type of prompt that this model is expecting, given
02:24.960 --> 02:31.470
the way it's been trained. If you have tokenize=True here, then what we'll get back is
02:31.470 --> 02:34.290
just a series of numbers and we won't know what's going on.
02:34.290 --> 02:35.910
So I've got tokenize=False.
02:35.910 --> 02:39.750
So what we'll get back will be the text version of it.
02:39.750 --> 02:46.770
And I'm going to print it so you can see what this is converted into, the thing that
02:46.770 --> 02:53.820
gets pumped into the model at inference time for this particular conversation.
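A simplified sketch of what apply_chat_template produces for a Llama-3.1-style chat follows. This is a hand-rolled approximation for illustration, not the real template: the actual template also injects the knowledge-cutoff and today's date into the system section, and the message contents here are just example text.

```python
# Toy renderer mimicking the Llama 3.1 chat template structure:
# begin_of_text, then header-wrapped roles, each message ended by eot_id,
# and finally an assistant header to tee up the model's reply.
def render_llama_chat(messages, add_generation_prompt=True):
    out = "<|begin_of_text|>"
    for msg in messages:
        out += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
        out += msg["content"] + "<|eot_id|>"
    if add_generation_prompt:
        # Tee up the model to generate the assistant's response next.
        out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Tell a joke"},
]
prompt = render_llama_chat(messages)
print(prompt)
```

In practice you would call `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` and let the tokenizer's own template do this work.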
02:53.820 --> 03:00.360
And here it is, starts with a special token begin of text and then a header.
03:00.360 --> 03:04.380
And then system the word system and then end header.
03:04.560 --> 03:10.240
And then there's some information that's shoved in there about the cutting knowledge date and today's
03:10.240 --> 03:10.780
date.
03:10.780 --> 03:12.160
That's that's special.
03:12.160 --> 03:14.260
And I think that's a llama 3.1 thing.
03:14.260 --> 03:17.830
I don't remember that from previous llama families, but I could be wrong there.
03:18.280 --> 03:25.840
Uh, and then, um, this here is of course, the system message that we provided here.
03:26.860 --> 03:31.870
Uh, then there is another start header for user and header.
03:31.870 --> 03:35.170
And then this is the user message.
03:35.620 --> 03:41.800
Then there's another start header and then the word assistant and then end header because we want the
03:41.800 --> 03:44.590
model to generate the assistant's response.
03:44.590 --> 03:50.800
So this is kind of teeing up the model: what should come next, right after this, should be whatever
03:50.800 --> 03:58.720
the assistant says in response to this prompt, following this system instruction.
03:59.590 --> 04:06.700
So I'm hoping this is an aha moment for you, that you see now how you can have a structure like
04:06.700 --> 04:07.000
this.
04:07.000 --> 04:10.120
And that's how you might think about the conversation with the model.
04:10.120 --> 04:15.570
But at the end of the day, what gets pumped into the model is a prompt that looks like this with special
04:15.600 --> 04:16.980
tokens in the mix.
04:16.980 --> 04:22.470
And because it's been trained with that structure, with those kinds of special tokens, it knows what
04:22.470 --> 04:23.490
needs to come next.
04:23.520 --> 04:25.410
The assistant's reply.
04:27.210 --> 04:30.990
So that explains the chat interfaces.
04:30.990 --> 04:34.140
Let's work with a few more models to get some more experience with this.
04:34.140 --> 04:36.360
I'm going to pick three models in particular.
04:36.480 --> 04:40.290
Phi-3 is a model from Microsoft.
04:40.680 --> 04:45.150
Qwen2 is this powerhouse model I keep mentioning from Alibaba Cloud.
04:45.150 --> 04:49.800
StarCoder2 is a model designed for generating code.
04:49.890 --> 04:57.210
It's built by three companies working together, collaborating: ServiceNow, Hugging Face themselves,
04:57.240 --> 05:05.340
and Nvidia. Those three mighty companies have partnered to form the
05:05.340 --> 05:11.450
group behind StarCoder, and have built this particular model.
05:11.450 --> 05:12.560
Okay.
05:12.560 --> 05:18.060
So let's give Phi-3 a try.
05:18.060 --> 05:24.300
So we use exactly the same approach, AutoTokenizer.from_pretrained, and we provide the model.
05:24.300 --> 05:27.750
And now I'm giving it the same text.
05:27.750 --> 05:31.470
I'm excited to show Tokenizers in action to my LLM engineers.
05:31.470 --> 05:39.480
I'm going to reprint the previous Llama 3.1 tokenizer's results to remind you what its tokens look
05:39.480 --> 05:40.020
like.
05:40.050 --> 05:44.070
Then an empty line, and then I'm going to print Phi-3's tokens.
05:44.070 --> 05:49.500
And the question is going to be at the end of the day, do they basically produce the same tokens or
05:49.500 --> 05:50.490
is it different.
05:50.520 --> 05:52.200
Let's have a look.
05:53.700 --> 05:57.150
Well you'll see right away they are completely different.
05:57.270 --> 05:58.200
Uh they're different.
05:58.230 --> 06:05.250
Not only is the generated text different, but this first one, which is the start-of-message special
06:05.280 --> 06:07.620
token is completely different.
06:07.830 --> 06:11.070
Let's do batch_decode so we can see that.
06:16.980 --> 06:17.760
Tokenizer.
06:17.790 --> 06:21.930
Dot batch_decode.
06:24.450 --> 06:27.030
I'll have to say tokens.
06:27.030 --> 06:28.110
Equals.
06:31.770 --> 06:32.970
Tokens.
06:33.780 --> 06:35.280
Let's see what we get here.
06:36.360 --> 06:40.800
Uh, and we do get something completely different.
06:40.860 --> 06:44.520
And actually, interestingly, I was wrong with what I said a second ago.
06:44.550 --> 06:52.350
There isn't a start-of-sentence special token in the case of Phi-3, so it just goes straight into it.
06:53.250 --> 06:56.850
So that's a very different approach.
06:58.830 --> 06:59.670
All right.
06:59.700 --> 07:07.350
Let's use apply_chat_template to see how Phi-3 uses chat templates.
07:07.380 --> 07:09.900
Let's start by doing it for llama again.
07:09.900 --> 07:11.250
So we'll see Llama's one.
07:11.250 --> 07:17.070
And then we'll print side by side the same the chat template for that same conversation, that same
07:17.070 --> 07:18.990
prompt, for Phi-3.
07:19.020 --> 07:20.160
Let's see how they look.
07:20.160 --> 07:26.260
So this is the one we just looked at for Llama; here is the equivalent for Phi-3.
07:26.290 --> 07:28.450
It's obviously much shorter.
07:28.450 --> 07:31.270
It doesn't pass in the date.
07:31.510 --> 07:38.230
And interestingly, whereas the structure for Llama was a header and then the word system and an end
07:38.260 --> 07:42.730
header, then a header, the word user, and an end header.
07:42.730 --> 07:51.310
In the case of Phi three there's just a special tag for system and a special tag for user and a special
07:51.310 --> 07:52.720
tag for assistant.
07:52.720 --> 07:55.870
So it's this whole sort of different approach.
07:56.110 --> 08:02.020
Um, and that's really interesting to see that these two tokenizers, these two models just have a different
08:02.020 --> 08:04.240
approach for how prompts get sent in.
08:04.240 --> 08:07.870
So obviously, hopefully you're getting the impression if you use the wrong tokenizer for the wrong
08:07.870 --> 08:12.940
model, you'd get garbage, because obviously this with different tokens and different structure is
08:12.940 --> 08:15.430
going to be meaningless to llama three.
08:16.120 --> 08:18.850
And now let's do the same for Qwen 2.
08:18.880 --> 08:23.020
We're going to see the original Llama version.
08:23.020 --> 08:26.870
And then we're going to show the Phi-3 version and then the Qwen 2 version.
08:27.050 --> 08:28.460
Here they come.
08:29.120 --> 08:35.690
Uh, obviously you can see totally different results for the three tokenizers.
08:35.750 --> 08:38.720
Uh, and one more time it highlights:
08:38.720 --> 08:41.810
You've got to pick the right tokenizer for the right model.
08:43.370 --> 08:49.430
And let's just apply the chat template, and we'll see again the chat templates for that same
08:49.430 --> 08:51.170
message about telling a joke.
08:51.170 --> 08:52.400
We'll see that for Llama.
08:52.400 --> 08:56.330
And then for Phi-3, and then for Qwen 2, all side by side.
08:56.330 --> 08:57.350
Let's see what they look like.
08:57.380 --> 08:59.000
We already saw the one from Llama.
08:59.000 --> 09:01.010
We already saw the one from Phi-3.
09:01.010 --> 09:03.560
And here is the one for Qwen 2.
09:03.560 --> 09:06.650
And what you'll see is that it's sort of somewhere in between.
09:06.680 --> 09:08.840
It does a bit of what Llama does.
09:08.840 --> 09:14.030
It's got the im_start, im_end and system tags in here.
09:14.210 --> 09:16.850
Um and then user and then assistant.
09:16.850 --> 09:19.250
So it's somewhere in between the two.
09:19.250 --> 09:23.870
Uh, it doesn't have something in between the words.
09:23.870 --> 09:26.000
It doesn't have a header special tag.
09:26.000 --> 09:28.440
It just has, uh, this approach here.
09:28.440 --> 09:36.810
So it's interesting: again, a third approach, another variation, with different special tokens.
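The side-by-side chat-template comparison just described can be sketched like this. It is a sketch only, assuming transformers and Hub access; the instruct-variant model ids are the public ones, and Llama's repo is gated.

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Tell a light-hearted joke for a room of Data Scientists"},
]

def show_chat_templates(model_names, messages):
    """Print each model's templated prompt for the same conversation."""
    # Deferred import so the sketch imports cleanly without transformers
    from transformers import AutoTokenizer
    for name in model_names:
        tokenizer = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
        # tokenize=False returns text, so the special tokens stay visible;
        # add_generation_prompt=True appends the assistant header/tag
        prompt = tokenizer.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=True)
        print(f"--- {name} ---\n{prompt}")

# Uncomment to run (downloads the tokenizers from the Hub):
# show_chat_templates(
#     ["meta-llama/Meta-Llama-3.1-8B-Instruct",
#      "microsoft/Phi-3-mini-4k-instruct",
#      "Qwen/Qwen2-7B-Instruct"], messages)
```

Printing the untokenized prompts makes it easy to spot Llama's header blocks, Phi-3's per-role tags, and Qwen 2's im_start/im_end style.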
09:37.740 --> 09:38.370
All right.
09:38.370 --> 09:41.580
And finally, let me show you StarCoder 2.
09:41.610 --> 09:44.520
This is the code generation model.
09:44.520 --> 09:46.440
We're going to take its tokenizer.
09:46.440 --> 09:49.470
And we're going to put this code in there.
09:49.500 --> 09:54.570
Hello world: a def hello_world, uh, taking a person variable.
09:54.570 --> 09:55.980
And it's going to print hello.
09:55.980 --> 09:57.090
And then the person.
09:57.090 --> 10:02.220
And then we just use the same encode to turn it into tokens.
10:02.220 --> 10:09.000
And what I'm then going to do is just print out each token followed by what did that get
10:09.030 --> 10:11.730
mapped to: what text did that represent?
10:11.730 --> 10:18.840
And what you'll see here is that there was something at the beginning, and then def went into
10:18.840 --> 10:25.110
one token and then hello underscore world and then person.
10:25.110 --> 10:33.210
This here obviously will reflect the tab, and then print hello comma person close brackets.
10:33.210 --> 10:42.660
So it gives you some sense that, um, the StarCoder 2 tokenizer is a tokenizer that is designed
10:42.660 --> 10:46.140
around tokenizing code rather than English.
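The token-by-token printout for code described above can be sketched as follows; a minimal sketch, assuming transformers is installed and the StarCoder2 tokenizer can be downloaded from the Hub.

```python
code = '''
def hello_world(person):
    print("Hello", person)
'''

def show_token_map(model_name, text):
    """Print each token id alongside the text fragment it decodes to."""
    # Deferred import so the sketch can be read without transformers installed
    from transformers import AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    for token in tokenizer.encode(text):
        # decode one id at a time to see which fragment it stands for
        print(f"{token} = {tokenizer.decode(token)!r}")

# Uncomment to run (downloads the StarCoder2 tokenizer):
# show_token_map("bigcode/starcoder2-3b", code)
```

You should see indentation, keywords like def, and identifiers like hello_world each landing in dedicated tokens, reflecting a vocabulary built for code.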
10:46.500 --> 10:48.120
And there's some experiments you can do.
10:48.150 --> 10:54.060
First of all, try out different tokenizers; try exploring the mapping from text to tokens.
10:54.180 --> 10:55.590
Find out which words.
10:55.590 --> 11:02.040
Try and find the rarest possible word that has a single token in Llama's
11:02.040 --> 11:06.360
Uh, tokenizer, or perhaps the longest word, or something like that.
11:06.360 --> 11:09.720
Do some experiments.
11:10.170 --> 11:15.210
Satisfy yourself that if you take a pretty complicated piece of code, you should find that StarCoder
11:15.240 --> 11:21.270
2's tokenizer tokenizes it in a more efficient way than one of the tokenizers that's designed for just
11:21.270 --> 11:22.260
English.
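One way to run that efficiency experiment is to count tokens per tokenizer for the same snippet. This is a hedged sketch, not a guaranteed result: the model ids are the public Hub ones, and the expectation is simply that the code-oriented tokenizer needs fewer tokens.

```python
SAMPLE_CODE = """
import math

def circle_areas(radii):
    return [math.pi * r ** 2 for r in radii]
"""

def token_counts(model_names, text):
    """Return how many tokens each tokenizer needs for the same text."""
    # Deferred import so the sketch imports cleanly without transformers
    from transformers import AutoTokenizer
    return {
        name: len(AutoTokenizer.from_pretrained(name, trust_remote_code=True).encode(text))
        for name in model_names
    }

# Uncomment to run; the expectation (not a guarantee) is that the
# code-oriented tokenizer produces the smaller count:
# print(token_counts(["bigcode/starcoder2-3b",
#                     "microsoft/Phi-3-mini-4k-instruct"], SAMPLE_CODE))
```

A smaller count means the tokenizer packs more of the code's structure into each token, which is exactly what a code-trained vocabulary is for.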
11:22.650 --> 11:30.570
And at that point, you will be an expert in the world of open source tokenizers and you'll be ready
11:30.570 --> 11:33.180
to take on the next piece, which is models.
11:33.180 --> 11:35.160
First, let's go back to the slides.

406
week5/community-contributions/subtitles/srts/59170233/ja_JP.srt

@ -0,0 +1,406 @@
WEBVTT
00:00.560 --> 00:04.160
トーケナイザーの活躍をご覧いただき、 ありがとうございます。
00:04.160 --> 00:09.830
これから見ていくのは、 モデルのインストラクター分散と呼ばれるものだ。
00:09.830 --> 00:28.430
そのため、 ユーザーとチャットで会話をするために特別に設計された、
00:28.520 --> 00:43.310
チャット用に微調整されたモデルがたくさんあります。
00:43.610 --> 00:50.870
つまり、 システムメッセージ、 ユーザーメッセージ、 アシスタンスレスポンスを識別する特別なトークンのセットを持つ特定の構造のプロンプトを期待するように訓練されているので、
00:50.900 --> 01:00.920
一種のチャットを形成している。
01:00.920 --> 01:06.270
そして、 それは単に十分な例で訓練された方法の一部に過ぎない。
01:06.270 --> 01:13.260
というのも、 フロンティア・モデルで遊んでいたときに慣れ親しんだ、
01:13.260 --> 01:19.830
メッセージやディクテット(辞書)のリストという構造の理由について、
01:19.830 --> 01:28.290
私が少し前に種をまいたことがあったからだ。
01:28.290 --> 01:37.470
というわけで、 今回はメタ・レンマ3を使ってトークナイザーを作ってみようと思う。 180億のインストラクター・バリアント。
01:37.470 --> 01:39.720
だから、 これは見覚えがあるだろう。
01:39.720 --> 01:48.420
これは、 OpenAIやClaudeなどでよく使うディクテーションのリストのひとつで、
01:48.420 --> 01:56.790
ロールとコンテンツを指定する。
01:56.880 --> 02:06.570
Huggingfaceが提供するトークナイザーは 特別な関数を持っています チャットテンプレートを適用し OpenAI APIフォーマットで
02:06.570 --> 02:16.170
このフォーマットのメッセージを受け取ります そしてこの特定のモデルで使用される 適切な構造に変換します このモデルが期待しているプロンプトのタイプは
02:16.170 --> 02:34.290
それが訓練された方法であることを考慮します ここでtokenized equals trueを指定した場合 返ってくるのは単なる数字の羅列で 何が起こっているのかわかりません
02:34.290 --> 02:35.910
だから、 トークン化イコールfalseにしたんだ。
02:35.910 --> 02:39.750
だから、 私たちに戻ってくるのは、 そのテキスト版になる。
02:39.750 --> 02:46.770
そして、 この会話を推論するときに、 これが何に変換されてモデルに送り込まれるのかがわかるように、
02:46.770 --> 02:53.820
これを印刷します。
02:53.820 --> 03:00.360
特別なトークンのテキストで始まり、 ヘッダーがある。
03:00.360 --> 03:04.380
そして、 システムという言葉をシステムにして、 ヘッダーを終了する。
03:04.560 --> 03:10.780
そして、 そこにはカット知識の日付と今日の日付についての情報が押し込まれている。
03:10.780 --> 03:12.160
それは特別なことだ。
03:12.160 --> 03:14.260
それにラマ3だと思う。 1のことだ。
03:14.260 --> 03:17.830
以前のリャマの家族にはそのような記憶はないが、 間違っているかもしれない。
03:18.280 --> 03:25.840
それから、 これはもちろん、 私たちがここで提供したシステムメッセージです。
03:26.860 --> 03:31.870
それから、 ユーザーとヘッダーの開始ヘッダーがもう一つある。
03:31.870 --> 03:35.170
そして、 これがユーザーメッセージだ。
03:35.620 --> 03:41.800
それから、 別の開始ヘッダーがあり、 アシスタントという言葉があり、
03:41.800 --> 03:44.590
そして終了ヘッダーがある。
03:44.590 --> 03:50.800
つまりこれは、 この後に続くのは、 このシステム指示に続くプロンプトに応答してアシスタンスが言ったことであるべきだ、
03:50.800 --> 03:58.720
というモデルのお膳立てのようなものだ。
03:59.590 --> 04:07.000
だから、 これがあなたにとってハッとするような瞬間であってほしい。 どうすればこのような構造を持つことができるのか、 おわかりいただけただろうか?
04:07.000 --> 04:10.120
そして、 モデルとの会話について、 あなたはこう考えるかもしれない。
04:10.120 --> 04:16.980
しかし、 結局のところ、 モデルに投入されるのは、 特別なトークンが混じったこのようなプロンプトだ。
04:16.980 --> 04:23.490
そして、 そのような構造、 特別なトークンで訓練されているため、 次に何が必要か分かっている。
04:23.520 --> 04:25.410
とアシスタンスは答える。
04:27.210 --> 04:30.990
というわけで、 チャットのインターフェースについて説明しよう。
04:30.990 --> 04:34.140
もう少しモデルを使って経験を積もう。
04:34.140 --> 04:36.360
私は特に3つのモデルを選ぶつもりだ。
04:36.480 --> 04:40.290
ファイ3はマイクロソフトのモデル。
04:40.680 --> 04:45.150
クイン2は、 アリババ・クラウドが提供するこの強力なモデルだ。
04:45.150 --> 04:49.800
スターコーダー2は、 コードを生成するために設計されたモデルだ。
04:49.890 --> 04:57.210
ServiceNowとNvidiaの3社が協力し、 顔をくっつけ、 顔をくっつけ、
04:57.240 --> 05:05.340
顔をくっつけ、 顔をくっつけ、 顔をくっつけ、 顔をくっつけ......この3社が提携して、
05:05.340 --> 05:11.450
このスター・コーダーを作り、 この特別なモデルを作った。
05:11.450 --> 05:12.560
オーケー。
05:12.560 --> 05:18.060
では、 ファイ3に挑戦してみよう。
05:18.060 --> 05:24.300
そこで、 まったく同じアプローチで、 事前に訓練された自動トークナイザーを使い、 モデルを提供する。
05:24.300 --> 05:27.750
そして今、 私は同じ文章を書いている。
05:27.750 --> 05:31.470
LLMのエンジニアたちに、 トーケナイザーの動きを見せるのが楽しみだ。
05:31.470 --> 05:40.020
前回のザ・ラマ3を再掲する。 1 トークンがどのように見えるかを思い出させるTokenizersの結果。
05:40.050 --> 05:44.070
それから空白の行を入れ、 ファイ3をプリントする。
05:44.070 --> 05:50.490
そして問題は、 一日の終わりに、 基本的に同じトークンを生産するのか、 それとも違うのかということだ。
05:50.520 --> 05:52.200
見てみよう。
05:53.700 --> 05:57.150
まあ、 両者がまったく違うことはすぐにわかるだろう。
05:57.270 --> 05:58.200
彼らは違うんだ。
05:58.230 --> 06:07.620
生成されたテキストが違うだけでなく、 メッセージの特別なトークンの始まりであるこの最初のテキストもまったく違う。
06:07.830 --> 06:11.070
ええと、 バッチデコードをして、 それを見てみましょう。
06:16.980 --> 06:17.760
トーケナイザー。
06:17.790 --> 06:21.930
ドットバッチデコード。
06:24.450 --> 06:27.030
トークンと言わざるを得ない。
06:27.030 --> 06:28.110
イコールである。
06:31.770 --> 06:32.970
トークン
06:33.780 --> 06:35.280
何が出てくるか見てみよう。
06:36.360 --> 06:40.800
そして、 まったく違うものを手に入れた。
06:40.860 --> 06:44.520
そして実は、 興味深いことに、 1秒前に言ったことは間違っていた。
06:44.550 --> 06:52.350
53の場合は文頭の特殊トークンがないので、 そのまま文頭に入る。
06:53.250 --> 06:56.850
だから、 それは非常に異なるアプローチなんだ。
06:58.830 --> 06:59.670
分かった。
06:59.700 --> 07:07.350
適用されたチャットテンプレートを使って、 53がどのようにチャットテンプレートを使うか見てみよう。
07:07.380 --> 07:09.900
まずはラマにもう一度やってみよう。
07:09.900 --> 07:11.250
だから、 リャマに会うことになる。
07:11.250 --> 07:18.990
そして、 同じ会話、 同じプロンプト53のチャットテンプレートを並べて印刷します。
07:19.020 --> 07:20.160
どう見えるか見てみよう。
07:20.160 --> 07:26.260
つまり、 これはラマに相当するもので、 ファイ3に相当するものはこちらだ。
07:26.290 --> 07:28.450
明らかにもっと短い。
07:28.450 --> 07:31.270
日付が変わっても通過しない。
07:31.510 --> 07:38.230
そして興味深いことに、 Lamaの構造がヘッダー、 システム、 エンドヘッダー、 ユーザー、 エンドヘッダーという構成だったのに対して、
07:38.260 --> 07:42.730
Lamaはヘッダー、 システム、 エンドヘッダーという構成になっている。
07:42.730 --> 07:52.720
ファイ3の場合は、 システム用の特別なタグとユーザー用の特別なタグ、 アシスタント用の特別なタグがあるだけだ。
07:52.720 --> 07:55.870
だから、 まったく違うアプローチなんだ。
07:56.110 --> 08:02.020
この2つのトークナイザー、 この2つのモデルは、 プロンプトがどのように送信されるかについて異なるアプローチを持っているというのは、
08:02.020 --> 08:04.240
実に興味深いことです。
08:04.240 --> 08:07.870
だから、 もし間違ったモデルに間違ったトークナイザーを使ったら、 ゴミになってしまうという印象を持ってもらえればいいんだけど、
08:07.870 --> 08:15.430
トークンが違ったり構造が違ったりすると、 llama 3にとっては無意味になってしまうのは明らかだからね。
08:16.120 --> 08:18.850
そして今度は、 クイン2についても同じことをやってみよう。
08:18.880 --> 08:23.020
オリジナルのラマ・バージョンを見るつもりだ。
08:23.020 --> 08:26.870
そして、 ファイ3バージョンとファイ2バージョンをお見せします。
08:27.050 --> 08:28.460
来たぞ。
08:29.120 --> 08:35.690
この3つのトークナイザーで、 まったく異なる結果が得られることは明らかだ。
08:35.750 --> 08:38.720
それと、 もう1回ハイライトを。
08:38.720 --> 08:41.810
適切なモデルに適切なトークナイザーを選ばなければならない。
08:43.370 --> 08:51.170
チャットテンプレートを適用して、 ジョークを言うという同じメッセージのチャットテンプレートをもう一度見てみましょう。
08:51.170 --> 08:52.400
それはリャマのために見ることにしよう。
08:52.400 --> 08:56.330
そして5-3、 クイン-2......。
08:56.330 --> 08:57.350
どんなものか見てみよう。
08:57.380 --> 08:59.000
リャマのものはすでに見た。
08:59.000 --> 09:01.010
53年のものはすでに見た。
09:01.010 --> 09:03.560
そして、 これがクイン2のものだ。
09:03.560 --> 09:06.650
その中間のようなものだ。
09:06.680 --> 09:08.840
ラマに少し似ている。
09:08.840 --> 09:14.030
イム・スタート、 イム・エンド、 そしてシステムがここにある。
09:14.210 --> 09:16.850
次にユーザー、 そしてアシスタント。
09:16.850 --> 09:19.250
つまり、 この2つの中間ということになる。
09:19.250 --> 09:23.870
ええと、 単語の間に何かが入っているわけではないんだ。
09:23.870 --> 09:26.000
ヘッダーの特別なタグはない。
09:26.000 --> 09:28.440
ただ、 その、 このアプローチなんだ。
09:28.440 --> 09:36.810
だから、 第3のアプローチ、 別のバリエーション、 別の特別なトークンというのはまた面白い。
09:37.740 --> 09:38.370
分かった。
09:38.370 --> 09:41.580
そして最後に、 スターコーダー2をお見せしよう。
09:41.610 --> 09:44.520
これはコード生成モジュールである。
09:44.520 --> 09:46.440
トークン化する。
09:46.440 --> 09:49.470
そこにこのコードを入れる。
09:49.500 --> 09:54.570
Hello world a def hello world uh person 変数を取る。
09:54.570 --> 09:55.980
そして、 ハローと印刷される。
09:55.980 --> 09:57.090
そしてその人。
09:57.090 --> 10:02.220
そして、 同じエンコードを使ってトークンに変換する。
10:02.220 --> 10:11.730
そして、 各トークンの後に、 そのトークンは何にマッピングされ、 そのテキストは何を表しているのか?
10:11.730 --> 10:18.840
ここでわかることは、 最初に何かがあり、 次にdefが1つのトークンに入り、 次にhello
10:18.840 --> 10:25.110
underscore world、 そしてpersonということだ。
10:25.110 --> 10:33.210
これは明らかにタブを反映し、 ハローカンマの人を閉じ括弧で囲んで印刷する。
10:33.210 --> 10:46.140
つまり、 スターコーダー・ツー・トークナイザーは、 英語ではなくコードをトークン化するために設計されたトークナイザーなのだ。
10:46.500 --> 10:48.120
そして、 いくつかできる実験もある。
10:48.150 --> 10:54.060
まずは、 さまざまなトークナイザーを試して、 テキストからトークンへのマッピングを探ってみよう。
10:54.180 --> 10:55.590
どの単語が使われているか調べる
10:55.590 --> 11:02.040
リャマに含まれるトークンが1つである、 可能な限りレアな単語を探してみてください。
11:02.040 --> 11:06.360
トークナイザーとか、 一番長い単語とか、 そんな感じかな。
11:06.360 --> 11:09.720
いくつか実験をして、 それから満足するんだ。
11:10.170 --> 11:15.210
かなり複雑なコードであっても、 star coder tos tokenizerの方が、
11:15.240 --> 11:22.260
英語専用のトークナイザーよりも効率的にトークン化できることがお分かりいただけるはずです。
11:22.650 --> 11:33.180
そしてその時点で、 あなたはオープン・ソース・トークナイザーの世界におけるエキスパートとなり、 次のピースであるモデルに挑戦する準備が整うだろう。
11:33.180 --> 11:35.160
まず、 スライドに戻ろう。

451
week5/community-contributions/subtitles/srts/59170233/ko_KR.srt

@ -0,0 +1,451 @@
WEBVTT
00:00.560 --> 00:04.160
토큰이들과의 활약에 돌아오신 걸 환영해요
00:04.160 --> 00:09.830
지금 볼 것은 모델 지시 변수라는 건데요
00:09.830 --> 00:18.650
채팅용으로 특별히 설계된 모델도 많습니다 사용자와의 채팅을 수행하기 위해서죠
00:18.650 --> 00:28.430
GPT4 채팅도 마찬가지입니다 얼굴을 끌어안는 모델이 등장할 경우 베이스 모델과
00:28.520 --> 00:33.830
이름이 같지만 끝에 지시 사항이 추가됩니다 이
00:33.830 --> 00:43.310
지시 사항 사용 사례에 적절하게 사용되도록 설계됐다는 뜻이죠
00:43.610 --> 00:50.870
특정 구조에서 특정 토큰으로 시스템 메시지와
00:50.900 --> 01:00.920
사용자 메시지 지원 답변을 식별해 채팅을 하도록 훈련받았죠
01:00.920 --> 01:06.270
많은 예시를 들면서 훈련된 방식의 일부일 뿐이죠
01:06.270 --> 01:13.260
이 포맷에서 기대하죠 여러분께 뭔가 제공하면 좋겠네요 이제 루프를
01:13.260 --> 01:19.830
닫을 테니까요 한참 전에 메시지 구조의 이유에 대해 시드를
01:19.830 --> 01:28.290
심었던 거죠 독재 목록은 프론티어 모델을 할 때 아주 익숙해졌어요
01:28.290 --> 01:37.470
이번엔 토큰라이저를 만들게요 메타 lemma 3을 이용해서요 180억 개요
01:37.470 --> 01:39.720
그러니 낯익을 거예요
01:39.720 --> 01:48.420
이것은 우리가 오픈AI나 클로드에서 많이 사용하는 독촉 목록 중 하나입니다 역할과 콘텐츠를
01:48.420 --> 01:56.790
지정하는 거죠 역할 시스템은 시스템 메시지 사용자는 사용자 메시지에요
01:56.880 --> 02:06.570
H깅페이스가 제공하는 토큰라이저는 특별한 기능을 수행해 채팅 템플릿을 적용하고 OpenAI
02:06.570 --> 02:16.170
API 포맷의 이 포맷 메시지를 취합니다 올바른 구조로 전환해 이 모델이 기대하는 특정
02:16.170 --> 02:24.960
프롬프트 유형에 사용되죠 훈련된 방식에 따라 토큰라이즈 = true 함수를
02:24.960 --> 02:31.470
입력하면 일련의 숫자만 나올 뿐 무슨 일이 일어나는지 알 수
02:31.470 --> 02:34.290
없어요
02:34.290 --> 02:35.910
토큰화 = false라고 적었죠
02:35.910 --> 02:39.750
Get in get은 텍스트 버전이에요
02:39.750 --> 02:46.770
프린트해서 여러분이 보실 수 있도록 하겠습니다 이게 무엇으로 변환되었는가
02:46.770 --> 02:53.820
하는 거죠 이 특정 대화에 대한 추론 시간에 모델로 펌프질되었어요
02:53.820 --> 03:00.360
여기 있네요, 특별한 토큰 비긴즈 오브 텍스트와 헤더로 시작하네요
03:00.360 --> 03:04.380
워드 시스템과 end 헤더도 있어요
03:04.560 --> 03:10.780
그리고 절단 연도와 오늘 날짜에 대한 정보도 있어요
03:10.780 --> 03:12.160
정말 특별하네요
03:12.160 --> 03:14.260
라마 3 같아요 하나만요
03:14.260 --> 03:17.830
이전 라마 가족들은 그런 적이 없었는데 제가 틀렸을 수도 있어요
03:18.280 --> 03:25.840
그리고 이건 물론 우리가 제공한 시스템 메시지예요
03:26.860 --> 03:31.870
User와 헤더를 위한 또 다른 스타트 헤더가 있어요
03:31.870 --> 03:35.170
이건 사용자 메시지예요
03:35.620 --> 03:41.800
또 다른 start 헤더와 보조란 단어가 있고 end 헤더가 있어요 모델이
03:41.800 --> 03:44.590
보조 응답을 생성해야 하니까요
03:44.590 --> 03:50.800
이 모델은 다음 순서로 이어집니다 이 시스템
03:50.800 --> 03:58.720
지침을 신속히 따르는 지원군에 대한 대응이죠
03:59.590 --> 04:07.000
이번 기회에 깨달으셨으면 좋겠어요 어떻게 이런 구조물을 지었는지요
04:07.000 --> 04:10.120
모델과의 대화도 그렇게 생각해야 해요
04:10.120 --> 04:15.570
하지만 결국 모델에는 이런 프롬프트가 들어갑니다 특별한 토큰이 들어
04:15.600 --> 04:16.980
있는 프롬프트죠
04:16.980 --> 04:22.470
그런 구조와 특별한 토큰으로 훈련했기 때문에 다음에 뭐가 필요한지
04:22.470 --> 04:23.490
알아요
04:23.520 --> 04:25.410
지원팀 응답요
04:27.210 --> 04:30.990
채팅 인터페이스를 설명하는 거죠
04:30.990 --> 04:34.140
get it의 경험을 쌓기 위해 모델 몇 명과 더 일해 보죠
04:34.140 --> 04:36.360
전 특별히 세 가지 모델을 고를 거예요
04:36.480 --> 04:40.290
파이 3은 마이크로소프트 모델이에요
04:40.680 --> 04:45.150
퀸 2는 알리바바 클라우드에서 계속 언급했던 강력한 모델이에요
04:45.150 --> 04:49.800
스타 코더 2는 코드 생성을 위해 설계된 모델이죠
04:49.890 --> 04:57.210
세 회사가 협력해서 만든 회사로 서비스나우와 포옹하는
04:57.240 --> 05:05.340
얼굴 그리고 엔비디아입니다 이 세 회사가 파트너십을 맺어
05:05.340 --> 05:11.450
그룹스타 코더와 이 모델을 만들었죠
05:11.450 --> 05:12.560
05:12.560 --> 05:18.060
그럼 피3을 불러 볼까요?
05:18.060 --> 05:24.300
오토 토큰라이저와 똑같은 접근법을 사용합니다 미리 훈련받은 모델이죠
05:24.300 --> 05:27.750
지금은 저도 같은 문자를 보내고 있어요
05:27.750 --> 05:31.470
LLM 엔지니어들에게 토큰라이저의 작동을 보여 줄 생각에 신나요
05:31.470 --> 05:40.020
라마 3을 재인쇄할 거예요 토큰라이저 1개 토큰의 모습을 다시 보여드리죠
05:40.050 --> 05:44.070
빈 선이 하나 있고 피3을 프린트할 거예요
05:44.070 --> 05:49.500
결국 중요한 질문은 이겁니다 기본적으로 같은 토큰을 생산하나요? 아니면
05:49.500 --> 05:50.490
다른가요?
05:50.520 --> 05:52.200
한번 보죠
05:53.700 --> 05:57.150
보면 아시겠지만 완전히 달라요
05:57.270 --> 05:58.200
달라요
05:58.230 --> 06:05.250
생성된 텍스트만 다른 게 아니라 메시지 특별 토큰의 시작인 이 첫 번째
06:05.280 --> 06:07.620
것도 완전히 달라요
06:07.830 --> 06:11.070
그걸 볼 수 있게 배치 디코딩을 하죠
06:16.980 --> 06:17.760
토큰자이예요
06:17.790 --> 06:21.930
닷 배치 해독법이에요
06:24.450 --> 06:27.030
토큰이라고 해야겠네요
06:27.030 --> 06:28.110
동등하게요
06:31.770 --> 06:32.970
토큰요
06:33.780 --> 06:35.280
get in the right 한번 볼까요?
06:36.360 --> 06:40.800
Get in get은 완전히 달라요
06:40.860 --> 06:44.520
사실, 흥미롭게도 조금 전에 한 말은 틀렸어요
06:44.550 --> 06:52.350
53번의 경우 문장 시작 특별 토큰이 없어요 그냥 바로 들어가죠
06:53.250 --> 06:56.850
아주 색다른 접근법이죠
06:58.830 --> 06:59.670
좋아요
06:59.700 --> 07:07.350
채팅 템플릿 적용을 이용해 53명이 채팅 템플릿을 어떻게 사용하는지 보죠
07:07.380 --> 07:09.900
라마를 위해 다시 해 보죠
07:09.900 --> 07:11.250
라마도 볼 수 있겠네요
07:11.250 --> 07:17.070
그런 다음 나란히 같은 채팅 템플릿을 출력할 겁니다 같은 대화, 같은 53에 대한 같은
07:17.070 --> 07:18.990
프롬프트를 위해서요
07:19.020 --> 07:20.160
어떤지 보죠
07:20.160 --> 07:26.260
이게 라마에게 필요한 거고 이건 피3에 해당하는 거예요
07:26.290 --> 07:28.450
훨씬 짧죠
07:28.450 --> 07:31.270
날짜에 안 들어가요
07:31.510 --> 07:38.230
흥미롭게도 라마라는 단어는 헤더가 기본이었어요 워드 시스템과 엔드
07:38.260 --> 07:42.730
헤더가 사용자와 엔드 헤더로 이어졌죠
07:42.730 --> 07:51.310
파이 3의 경우 시스템을 위한 특별한 태그와 사용자를 위한 특별한 태그 보조를 위한 특별한
07:51.310 --> 07:52.720
태그가 있어요
07:52.720 --> 07:55.870
접근 방식이 완전히 달라요
07:56.110 --> 08:02.020
흥미로운 점은 두 토큰라이저, 두 모델이 프롬프트를 get으로 보내는 방법에
08:02.020 --> 08:04.240
다른 접근법을 취한다는 거죠
08:04.240 --> 08:07.870
만약 잘못된 토큰라이저를 잘못된 모델로 사용한다면 가비지가
08:07.870 --> 08:12.940
된다는 것을 아셔야 합니다. 왜냐하면 토큰이 다르고 구조가 다르다면 llama3에는
08:12.940 --> 08:15.430
의미가 없기 때문이죠.
08:16.120 --> 08:18.850
이제 퀸 2호도 똑같이 해 보죠
08:18.880 --> 08:23.020
라마의 원조 버전을 볼 거예요
08:23.020 --> 08:26.870
파이3 버전을 보여드리고 두 가지 버전을 보여드릴게요
08:27.050 --> 08:28.460
저기 오네요
08:29.120 --> 08:35.690
보시다시피 토큰라이저 세 개는 완전히 다른 결과를 볼 수 있죠
08:35.750 --> 08:38.720
하이라이트 한 번 더 할게요
08:38.720 --> 08:41.810
모델에 맞는 토큰라이저를 골라야 해요
08:43.370 --> 08:49.430
채팅 템플릿을 적용해 보죠 같은 메시지를 전달하는 채팅 템플릿이
08:49.430 --> 08:51.170
또 있어요
08:51.170 --> 08:52.400
곧 알게 되겠죠
08:52.400 --> 08:56.330
다섯, 셋, 퀸, 둘 이렇게 나란히요
08:56.330 --> 08:57.350
어떻게 생겼는지 보죠
08:57.380 --> 08:59.000
라마 사진은 이미 봤어요
08:59.000 --> 09:01.010
53편은 이미 봤어요
09:01.010 --> 09:03.560
그리고 이건 퀸의 두 번째예요
09:03.560 --> 09:06.650
보시면 알겠지만 그 중간쯤에 있어요
09:06.680 --> 09:08.840
비트도 라마랑 비슷해요
09:08.840 --> 09:14.030
시작과 끝이라는 시스템도 있고요
09:14.210 --> 09:16.850
그 다음은 사용자, 그 다음은 조수죠
09:16.850 --> 09:19.250
그 둘 사이의 어디쯤이죠
09:19.250 --> 09:23.870
그 어정쩡한 단어가 없어요
09:23.870 --> 09:26.000
헤더 스페셜 태그가 없어요
09:26.000 --> 09:28.440
이런 식으로 접근해요
09:28.440 --> 09:36.810
흥미로운 제3의 접근법이죠 또 다른 변종으로 특별한 토큰을 사용해요
09:37.740 --> 09:38.370
좋아요
09:38.370 --> 09:41.580
마지막으로 스타 코더 2를 보여드릴게요
09:41.610 --> 09:44.520
코드 생성 모듈이에요
09:44.520 --> 09:46.440
토큰라이저를 가져갈 거예요
09:46.440 --> 09:49.470
이 코드를 저기에 Put 할게요
09:49.500 --> 09:54.570
안녕 세계, 안녕 세계 변수를 선택하는 거죠
09:54.570 --> 09:55.980
그러면 hello를 출력하죠
09:55.980 --> 09:57.090
그리고 그 사람도요
09:57.090 --> 10:02.220
그런 다음 같은 인코드를 사용해 토큰으로 바꾸죠
10:02.220 --> 10:09.000
이제 해야 할 일은 각각의 토큰을 프린트하는 것입니다. 그리고 무엇이 get이 되었는지, 어떤 텍스트를
10:09.030 --> 10:11.730
나타내는지 매핑하는 것이죠.
10:11.730 --> 10:18.840
여기 보시면 처음에 뭔가 있었어요 데프가 토큰 하나에 들어갔고
10:18.840 --> 10:25.110
hello_ world와 Person이 나왔죠
10:25.110 --> 10:33.210
여기 이건 탭을 반영할 겁니다 그런 다음 hello, 사람 괄호 닫기를 인쇄하죠
10:33.210 --> 10:42.660
대충 감이 오실 겁니다 스타 코더 2 토큰라이저는 토큰라이저로 영어보다는 토큰라이저
10:42.660 --> 10:46.140
코드를 중심으로 디자인됐죠
10:46.500 --> 10:48.120
실험할 수 있는 게 있어요
10:48.150 --> 10:54.060
먼저, 다양한 토큰라이저를 사용해보고 텍스트에서 토큰으로의 매핑을 탐색해보세요
10:54.180 --> 10:55.590
어떤 단어인지 찾아봐요
10:55.590 --> 11:02.040
라마 안에 토큰이 하나 있는 가장 희귀한 단어를 찾아보세요
11:02.040 --> 11:06.360
토큰이거나 가장 긴 단어였을 거예요
11:06.360 --> 11:09.720
실험도 좀 하고 여러분을 만족시켜 드릴게요
11:10.170 --> 11:15.210
꽤 복잡한 코드를 가지고 있다면 이 스타 코더의 토큰라이저 토큰화가
11:15.240 --> 11:22.260
더 효율적으로 이루어질 것입니다 토큰라이저의 영어 버전보다 더 효율적이죠
11:22.650 --> 11:30.570
그때쯤이면 여러분은 오픈 소스 토큰라이저의 전문가가 되어 다음 단계인 모델에
11:30.570 --> 11:33.180
도전할 준비가 될 거예요
11:33.180 --> 11:35.160
먼저 슬라이드로 돌아가죠
