WEBVTT
00:00.740 --> 00:08.330
And so now the time has come to talk about the most crucial aspect of RAG, which is the idea of vectors.
00:08.360 --> 00:11.750
If you're already familiar with vectors and vector embeddings, hang on in there.
00:11.750 --> 00:13.610
I'm going to go fairly quickly through this.
00:13.610 --> 00:17.570
You may pick up a thing or two that you didn't know about as I explain this.
00:17.570 --> 00:21.290
So first of all, there's an important bit of background information, which is that we've been talking
00:21.290 --> 00:26.210
about a bunch of different LLMs through this course, but almost all the LLMs we've been talking about
00:26.210 --> 00:30.380
have been one kind of LLM, called an autoregressive LLM.
00:30.380 --> 00:35.300
And there is, in fact, a completely different category of LLM known as autoencoding.
00:35.300 --> 00:36.500
So what's the difference?
00:36.530 --> 00:44.990
Autoregressive LLMs are LLMs which are given a past set of tokens, and they are required to generate the
00:44.990 --> 00:49.070
next token in the sequence: a future token, given the past.
00:49.070 --> 00:52.400
And they keep doing that, repeatedly creating the next token
00:52.400 --> 00:56.300
given the history of tokens. That's an autoregressive LLM.
00:56.300 --> 00:58.100
And of course it's all the rage.
00:58.100 --> 01:00.710
It's most of the ones that we work with.
01:00.710 --> 01:04.270
And of course it's GPT-4 and Claude and Gemini and so on.
01:04.690 --> 01:11.380
There are also these types called autoencoding, and they take a full input that represents the past,
01:11.380 --> 01:12.700
the present, and the future.
01:12.700 --> 01:18.670
It's a full bit of input, and they create one output that reflects the whole input.
01:19.090 --> 01:22.120
And so to make that real, there's some obvious examples.
01:22.120 --> 01:26.620
Sentiment analysis where you take in a sentence and say if it's positive or negative.
01:26.770 --> 01:30.940
Classification where you take a sentence and put it into buckets.
01:31.420 --> 01:36.790
Both things that we did actually explore briefly with the Hugging Face pipelines a couple of weeks ago.
01:36.850 --> 01:40.360
And those are both examples of autoencoding LLMs.
01:41.260 --> 01:46.300
There is another way that they are used as well, and it is to create something called a vector embedding.
01:46.300 --> 01:48.700
And that's what we're going to be talking about today.
01:48.730 --> 01:54.520
So a vector embedding is a way of taking a sentence of text or a bunch of different things, but usually
01:54.520 --> 01:55.570
a sentence of text.
01:55.570 --> 02:02.620
And turning that into a series of numbers, a series of numbers that in some way reflect the meaning
02:02.620 --> 02:04.360
behind that text.
02:04.360 --> 02:07.000
And we'll go through exactly what that means in just a second.
02:07.000 --> 02:13.640
It sounds a bit abstract right now, but the idea is that take some text, convert it into numbers,
02:13.640 --> 02:17.960
and those numbers you could think of as representing a point in space.
02:17.960 --> 02:22.670
So if we took some text and we turned it into three numbers, you could think of that as being like
02:22.670 --> 02:28.190
an X, Y, and Z that would represent exactly whereabouts something is located in space.
02:28.370 --> 02:33.800
As it happens, usually when you do this, it gets converted into hundreds or thousands of numbers.
02:33.800 --> 02:37.760
So it represents a point in, like, 1000-dimensional space.
02:37.760 --> 02:41.690
And that's kind of hard for us to visualize because we can only think in three dimensions, but it's
02:41.690 --> 02:42.620
the same idea.
02:42.620 --> 02:49.580
It's reflecting a point in space, and that point is meant to represent in some way the meaning behind
02:49.580 --> 02:55.250
the text that went in to generate it. Examples of autoencoding
02:55.280 --> 02:58.640
LLMs are BERT from Google.
02:58.670 --> 03:02.540
You may remember we actually mentioned BERT right back, I think in the first week.
03:02.570 --> 03:05.240
So BERT's been around for a while.
03:05.390 --> 03:11.110
There's also OpenAI embeddings from OpenAI, and that's actually the autoencoding model that
03:11.110 --> 03:14.680
we'll be using this week for our RAG projects.
03:15.160 --> 03:19.960
So let me just talk a bit more about what we mean by meaning.
03:19.990 --> 03:28.690
So first of all, you can create one of these vectors for a single character, for a token
03:28.930 --> 03:34.660
or a bunch of characters, for a word, for a sentence, for a paragraph, for an entire document,
03:34.660 --> 03:36.490
or even for something abstract.
03:36.490 --> 03:44.020
Like in my company, Nebula, we create vectors for things like talent and jobs and things like that.
03:44.680 --> 03:49.930
Often when you're working with these vectors, they will have hundreds or even thousands of dimensions,
03:49.930 --> 03:54.700
there will be like a thousand numbers that represent this one block of text.
03:55.030 --> 04:01.420
And now I've said a few times that these numbers reflect the meaning behind the inputs.
04:01.420 --> 04:03.010
What exactly does that mean?
04:03.040 --> 04:10.060
So to put it simply, one of the things it means is that if you have a bunch of paragraphs of text that
04:10.060 --> 04:14.720
all end up mapping to similar points in space that are close to each other,
04:14.720 --> 04:18.800
that should mean that these blocks of text have similar meaning.
04:18.800 --> 04:21.500
They don't necessarily need to contain the same words.
04:21.500 --> 04:25.790
They could be completely different words, but their meaning is the same.
04:25.790 --> 04:32.090
They will be close to each other in vector space, so things that are close to each other when they're turned
04:32.090 --> 04:35.540
into numbers should mean similar things.
04:35.540 --> 04:38.630
And that's the basic idea behind this.
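To make that concrete, here's a minimal sketch of measuring "closeness" with cosine similarity. The vectors and sentences are invented for illustration; a real embedding model would produce hundreds or thousands of dimensions, not three.

```python
# Toy sketch: "close in vector space" should mean "similar meaning".
# The vectors below are made up for illustration only.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Pretend these came from an embedding model:
vec_dog = [0.9, 0.1, 0.0]    # "The dog chased the ball"
vec_puppy = [0.8, 0.2, 0.1]  # "A puppy ran after a toy"
vec_tax = [0.0, 0.1, 0.9]    # "File your tax return by April"

# Sentences with similar meaning score higher, despite sharing no words.
print(cosine_similarity(vec_dog, vec_puppy))  # high, close to 1
print(cosine_similarity(vec_dog, vec_tax))    # low, close to 0
```

Notice that the first two sentences share no words at all, yet their (made-up) vectors point in nearly the same direction; that's exactly the property a good embedding model gives you.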
04:38.630 --> 04:44.630
There's also some more sophisticated ideas behind this, including this point that you can do what's
04:44.660 --> 04:49.400
sometimes called vector math on the meanings of these things.
04:49.400 --> 04:54.050
And there's this example that's very often given; it's been around for a long time.
04:54.050 --> 04:59.750
And you may well have heard of it before, and it says, supposing that you have the word king, and
04:59.750 --> 05:05.720
you took the word king, and you used one of these vector encodings to find the point in space that
05:05.720 --> 05:07.880
represents the word king.
05:07.910 --> 05:11.990
And you also find the vector that reflects the word man.
05:11.990 --> 05:14.510
And the vector that reflects the word woman.
05:14.600 --> 05:21.790
And you take the word king and you subtract man from it, which means you kind of move backwards in
05:21.790 --> 05:28.750
the direction of man and you add woman, which means that you move forwards in the direction of woman.
05:29.110 --> 05:36.160
What you've effectively done is you've taken the concept, the meaning of king, and you've said, I
05:36.160 --> 05:41.470
want to replace the man with the woman in this meaning of king.
05:41.470 --> 05:49.210
And somewhat remarkably, if you do this, you do actually end up in the position in vector space,
05:49.210 --> 05:53.890
which is the same position as the position for the word queen.
05:53.920 --> 06:00.220
So it really does seem that if you take the word king, the meaning of the word king, and you replace
06:00.220 --> 06:06.850
the man aspect of it with woman, you're then at something which reflects the meaning of the word queen.
06:06.850 --> 06:13.780
And so it's in that sense that these vectors really seem to reflect the meaning behind the words they
06:13.780 --> 06:19.660
represent, both in terms of similar words being close to each other and the ability to carry out this
06:19.660 --> 06:26.420
kind of vector math that allows you to understand the relationship between concepts.
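Here's a toy sketch of that king − man + woman arithmetic. The 2-D vectors are pure invention, arranged so the math works out; real word vectors (word2vec, GloVe, and so on) have hundreds of dimensions and this relationship emerges from training rather than by design.

```python
# Toy sketch of king - man + woman ~= queen.
# These 2-D "embeddings" are invented so the arithmetic works out.
import math

def nearest(target, vocab):
    """Return the word whose vector is closest (Euclidean) to target."""
    return min(vocab, key=lambda word: math.dist(vocab[word], target))

# dimension 0 ~ "royalty", dimension 1 ~ "gender" (a pure invention)
vocab = {
    "king":  [0.9, 0.9],
    "queen": [0.9, 0.1],
    "man":   [0.1, 0.9],
    "woman": [0.1, 0.1],
}

# Subtract the "man" direction, then add the "woman" direction.
result = [k - m + w for k, m, w in zip(vocab["king"], vocab["man"], vocab["woman"])]

print(nearest(result, vocab))  # -> queen
```

The subtraction removes what king and man have in common along the "gender" axis, and the addition moves you toward woman, landing you on queen.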
06:27.860 --> 06:30.860
So what's this got to do with RAG?
06:30.950 --> 06:32.930
Here's where it all comes together.
06:32.930 --> 06:35.510
This is the big idea behind RAG.
06:35.540 --> 06:38.570
So this is the same diagram we had before.
06:38.570 --> 06:40.550
But there's going to be a little bit more going on.
06:40.850 --> 06:43.970
At the top we've got a new box called encoding LLM.
06:44.000 --> 06:48.650
This is something which is able to take some text and turn it into a vector.
06:48.650 --> 06:52.070
And at the bottom we have something called a vector data store.
06:52.100 --> 06:55.250
It's like the data store we had before the knowledge base.
06:55.250 --> 07:03.800
But now along with text, we can also store the vector that represents that text, the vector that represents
07:03.800 --> 07:05.750
the meaning of that text.
07:06.260 --> 07:07.130
All right.
07:07.130 --> 07:08.990
So here's what we do.
07:09.080 --> 07:12.260
In comes a question from the user.
07:12.290 --> 07:19.550
The first thing we do is we take that question and we turn it into a vector, sometimes called vectorizing.
07:19.550 --> 07:22.360
So supposing the question was: who is Amy
07:22.360 --> 07:23.260
Lancaster?
07:23.260 --> 07:30.160
We take who is Amy Lancaster, and we turn that into a vector that reflects the meaning of the question,
07:30.160 --> 07:31.870
Who is Amy Lancaster?
07:33.100 --> 07:34.990
You can probably imagine what I'm going to say next.
07:35.020 --> 07:43.120
What we then do is we go to the vector database and we say, tell me what information is in this vector
07:43.120 --> 07:49.180
database where the vectors are close to the vector for who is Amy Lancaster?
07:49.180 --> 07:51.760
So look at all the different documents we've got in there.
07:51.760 --> 07:53.530
We've turned them all into vectors.
07:53.530 --> 07:58.000
Some of those vectors will be close to the question who is Amy Lancaster?
07:58.030 --> 08:03.790
Give me those vectors and give me the original information, the text that was turned into those
08:03.790 --> 08:04.630
vectors.
08:04.630 --> 08:11.890
And presumably it's extremely likely that the actual HR document for Amy Lancaster is going to be located
08:11.890 --> 08:15.910
somewhere close to the vector for: who is Amy Lancaster?
08:16.810 --> 08:20.860
And so when we get that information, we quite simply take that text.
08:20.860 --> 08:26.570
And just like before with the toy example, we shove that in the prompt to the LLM, we get back the
08:26.570 --> 08:29.870
response, presumably taking advantage of the extra context.
08:29.870 --> 08:32.210
And that's what goes back to the user.
08:32.300 --> 08:38.600
So it's just like the toy example, except we're using a much more powerful technique for looking up
08:38.600 --> 08:45.620
the relevant data, using vectors as a way of understanding which of our bits of knowledge have the
08:45.620 --> 08:49.010
most similar meaning to the meaning of the question.
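The whole lookup flow described above can be sketched in a few lines. The `embed` function here is just a bag-of-words stand-in for a real encoding model (such as OpenAI embeddings), and the documents and vocabulary are invented; the point is only the shape of the steps: vectorize the question, find the closest stored vector, and shove the matching text into the prompt.

```python
# Minimal sketch of the RAG lookup: vectorize, retrieve nearest, build prompt.
import math

def embed(text, vocabulary):
    """Toy 'embedding': one dimension per vocabulary word (bag of words)."""
    words = text.lower().replace("?", "").split()
    return [words.count(v) for v in vocabulary]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vocabulary = ["amy", "lancaster", "insurance", "product", "engineer"]

# The "vector data store": each document stored alongside its vector.
documents = [
    "Amy Lancaster is an engineer on the data team.",
    "Our flagship insurance product launched in 2020.",
]
store = [(doc, embed(doc, vocabulary)) for doc in documents]

# 1. Vectorize the incoming question.
question = "Who is Amy Lancaster?"
q_vec = embed(question, vocabulary)

# 2. Find the stored document whose vector is closest to the question's.
best_doc, _ = max(store, key=lambda pair: cosine(q_vec, pair[1]))

# 3. Shove the retrieved context into the prompt for the LLM.
prompt = f"Use this context to answer:\n{best_doc}\n\nQuestion: {question}"
print(best_doc)
```

In a real system you would swap `embed` for calls to an encoding model and the list for a vector database, but the retrieve-then-prompt shape stays exactly the same.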
08:49.010 --> 08:51.410
Well, that's really all there is to it.
08:51.410 --> 08:56.120
And that's a wrap for this week, because it's next time that we're going to put this into action and
08:56.120 --> 08:58.910
actually see vectors in databases.
08:58.940 --> 09:04.160
We're also next time going to start looking at something called LangChain, a wonderful, wonderful
09:04.160 --> 09:08.780
framework which is designed to make it easy to build these kinds of applications.
09:08.780 --> 09:10.550
We could do it all the manual way.
09:10.550 --> 09:15.860
We could actually create vectors and store them in vector databases using various APIs.
09:16.160 --> 09:19.610
But LangChain makes it super simple, as you will see.
09:19.640 --> 09:24.230
And it's going to be a bit like the Gradio experience, where in just a couple of lines of code, we're
09:24.230 --> 09:26.450
going to be doing very powerful things.
09:26.450 --> 09:28.160
So I'm excited about it.
09:28.160 --> 09:29.090
I hope you are too.
09:29.090 --> 09:30.260
And I'll see you then.