WEBVTT
00:00.620 --> 00:03.290
And welcome back to the week six folder.
00:03.290 --> 00:08.450
We're now at day two, which is the second and final stage of data curation.
00:08.720 --> 00:11.840
We're going to be extending our data set to be much bigger.
00:11.840 --> 00:16.880
And we're going to craft it into something that is exceptional, just right.
00:16.880 --> 00:23.150
For our training purposes, we start with some imports and setting up our environment and logging
00:23.150 --> 00:27.110
into Hugging Face, and we get ourselves ready.
00:27.260 --> 00:35.240
So last time I talked about a Python module called items that I wrote that allows us to parse our data
00:35.240 --> 00:37.550
points into a nice item.
00:37.550 --> 00:43.610
You remember it well, and hopefully you have spent some time digging through this yourself and confirming
00:43.610 --> 00:44.750
the way that it works.
00:44.750 --> 00:51.470
There's another Python module that I've made that is shorter and simpler, called loaders, which
00:51.470 --> 01:00.470
has a single class, ItemLoader, and is just a nice way to load in one of the datasets from the Hugging
01:00.500 --> 01:02.060
Face repository.
01:02.540 --> 01:10.850
And what it's going to do is use some fancy machinery from the concurrent.futures package
01:10.850 --> 01:16.700
in Python to do this with multiple workers, because otherwise this takes a long time.
01:16.850 --> 01:20.780
Now, you don't need to look through this in too much detail.
01:20.780 --> 01:24.260
There are a few things that I want to tell you about, and then you should just convince yourself that
01:24.260 --> 01:26.870
it's doing what it says on the tin again.
01:26.990 --> 01:31.130
So the main method that gets called is load.
01:31.130 --> 01:35.990
And you pass in a number of workers; by default it assumes eight workers.
01:35.990 --> 01:41.420
I'm working on a MacBook Pro here that has eight cores and so it can handle it.
01:41.450 --> 01:43.430
It does really hammer my machine
01:43.430 --> 01:44.300
while this is going.
01:44.300 --> 01:50.090
You might want to pass in a smaller number of workers, depending on how much of your machine's CPU
01:50.090 --> 01:54.020
you're willing to give up, and whether you're doing other things at the same time.
01:54.350 --> 02:02.480
So then, this load_in_parallel method is the one that uses this
02:02.480 --> 02:03.470
ProcessPoolExecutor.
02:03.500 --> 02:09.560
If you have used this before, you'll know that this basically spawns a number of other processes.
02:09.680 --> 02:15.590
I've just seen a mistake there: that should obviously say workers, so that it does actually
02:15.590 --> 02:16.520
use the eight workers.
02:16.520 --> 02:18.320
Otherwise it will always be using six.
02:18.590 --> 02:26.480
So this will spawn up the number of workers that are specified, and it
02:26.480 --> 02:32.300
will then load in each of our data points, but it will do so in chunks.
02:32.300 --> 02:38.000
And you can see that really what I've done is create a generator. You'll be familiar with generators
02:38.000 --> 02:42.500
now, because we used them earlier when we were streaming back results.
02:42.650 --> 02:48.440
But we're using a generator to chop up our dataset into chunks.
02:48.470 --> 02:51.350
As it happens, I set a chunk size here of 1,000,
02:51.350 --> 02:53.360
so a thousand data points at a time.
02:53.360 --> 02:56.960
It's chunked up into sets of 1,000 data points at a time.
02:56.960 --> 03:06.930
And each chunk is then passed in and ultimately used to create new data points using the same Item
03:06.930 --> 03:08.970
class that we worked on last time.
03:08.970 --> 03:10.650
So no real magic here.
03:10.650 --> 03:16.290
This is just some fancy packaging to load in items in an efficient way.
03:16.320 --> 03:20.820
1,000 at a time, spread out across eight different workers.
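The shape of that chunked, multi-worker loading can be sketched roughly like this. This is a simplified sketch, not the course's actual loaders module: process_chunk here is a stand-in for the real work of building Item objects, and the function names are my own assumptions.

```python
from concurrent.futures import ProcessPoolExecutor

CHUNK_SIZE = 1_000

def chunk_generator(datapoints, chunk_size=CHUNK_SIZE):
    """Yield successive slices of the dataset, chunk_size points at a time."""
    for i in range(0, len(datapoints), chunk_size):
        yield datapoints[i : i + chunk_size]

def process_chunk(chunk):
    """Stand-in for turning a chunk of raw datapoints into Item objects."""
    return [dp.strip().lower() for dp in chunk]

def load_in_parallel(datapoints, workers=8):
    """Fan the chunks out across worker processes and collect the results."""
    results = []
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for processed in pool.map(process_chunk, chunk_generator(datapoints)):
            results.extend(processed)
    return results
```

The key design point is that each worker receives a whole chunk rather than a single datapoint, so the per-task overhead of pickling and inter-process communication is amortised over 1,000 items.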
03:20.820 --> 03:25.920
But I will say it's worth reviewing this code, because this is the kind of thing, again,
03:25.950 --> 03:28.350
that one tends to do a lot with these kinds of projects.
03:28.350 --> 03:35.760
You need to make good use of your box, because the work is going to be hefty, and the kinds of tasks involved
03:35.850 --> 03:40.110
often are very suitable for being divided into multiple workers.
03:40.110 --> 03:42.210
And this code should be quite reusable.
03:42.240 --> 03:49.950
I've hopefully written it in a way that's easy to read and reuse, so I very much encourage you to do
03:49.950 --> 03:50.520
so.
03:51.150 --> 03:56.490
And there is one other little trick that I want to point out. Not another trick.
03:56.490 --> 03:56.700
Sorry.
03:56.730 --> 04:02.010
One other decision that has been made that is going to affect our results.
04:02.160 --> 04:14.160
I've decided that we will select only products which cost anywhere between $0.50
04:14.160 --> 04:15.180
and $999.49.
04:15.180 --> 04:19.380
So we're going to limit it to things that are priced in that range.
04:19.380 --> 04:21.270
And there's various reasons I did that.
04:21.390 --> 04:28.140
One is that if you take things that go much above that number, then
04:28.140 --> 04:29.460
the results are distorted.
04:29.490 --> 04:36.210
There's a very small number of things with a huge price and that can completely mess up things like
04:36.240 --> 04:37.290
our test performance.
04:37.290 --> 04:43.050
If we happen to pick something that costs an enormous amount, then our errors can be wild if we're
04:43.050 --> 04:43.830
off by a bit.
04:43.830 --> 04:49.140
And because we do want to be using absolute error, just the difference between the recommended price
04:49.140 --> 04:50.310
and the actual price.
04:50.340 --> 04:57.120
It's nice if we can keep our prices within a reasonable range so that we have a pretty decent sense
04:57.120 --> 05:00.870
of how the model's performing, and we don't have any wild things out there.
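To make that concrete, here is a tiny illustration (my own sketch, not code from the course) of how a single huge-priced item can swamp the average absolute error:

```python
def mean_absolute_error(predicted, actual):
    """Average of |predicted price - actual price| over the test set."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# With prices capped under $1,000, errors stay on a human scale
print(mean_absolute_error([110.0, 45.0], [100.0, 50.0]))  # 7.5

# One uncapped big-ticket item dominates the whole metric
print(mean_absolute_error([110.0, 45.0, 40_000.0],
                          [100.0, 50.0, 50_000.0]))       # ~3338.3
```

Two items that are off by $10 and $5 give a sensible average error of $7.50; add one luxury item that is off by $10,000 and the metric tells you almost nothing about typical performance.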
05:00.870 --> 05:06.030
So essentially, I'm saying for the scope of this project, for what we're doing, we are going to be
05:06.030 --> 05:08.790
talking about things that cost under $1,000.
05:08.790 --> 05:13.800
That's going to be our scope, and it allows us to focus in on our data set.
05:13.890 --> 05:20.220
You could also experiment with doing this with different boundaries, and try bigger ranges and see
05:20.220 --> 05:21.150
how it goes.
05:21.150 --> 05:26.040
But I found this to be easiest to work with, and it gives good results across the board.
05:26.040 --> 05:28.920
So that's what is going on here.
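A minimal sketch of that price filter, assuming each datapoint carries a parsed price in dollars (the constant and function names here are my own, not necessarily those used in the loaders module):

```python
MIN_PRICE = 0.50
MAX_PRICE = 999.49

def in_scope(price):
    """Keep only products priced between $0.50 and $999.49 inclusive."""
    return MIN_PRICE <= price <= MAX_PRICE

# Example: items outside the range are dropped before Item creation
prices = [0.25, 0.50, 19.99, 999.49, 1_250.00]
print([p for p in prices if in_scope(p)])  # [0.5, 19.99, 999.49]
```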
05:29.370 --> 05:34.980
So let me save this and go back to our day 2 notebook.
05:34.980 --> 05:45.840
And I'm going to reload this because I made that little bug fix: restart the kernel and run this again.
05:45.840 --> 05:50.310
Run our imports, and log in to Hugging Face again.
05:50.400 --> 05:58.050
And now we're going to load in the appliances data set like we did before, and it's now going to be
05:58.080 --> 05:59.760
hammering my computer.
06:00.300 --> 06:07.500
I can see it doing its thing, and I'll tell you that before it took about a minute to load in all of
06:07.500 --> 06:14.070
the appliances data set last time, and this time it takes 0.2 minutes.
06:14.070 --> 06:17.190
So it's rather faster when you have it broken across
06:17.220 --> 06:20.280
eight workers hammering my computer.
06:20.400 --> 06:22.440
And so I recommend you do the same.
06:22.440 --> 06:29.850
But obviously, pass in workers=4 or fewer here if you have fewer cores.
06:30.330 --> 06:34.830
So that is the appliances data set.
06:34.830 --> 06:42.600
It has the 28,625 data points that are priced within that range that we have restricted it to.
06:42.870 --> 06:47.610
Um, and they have all been loaded in, uh, let's have a look at the first one.
06:47.640 --> 06:49.470
Do you remember what the first one was?
06:51.780 --> 06:52.620
I did that again.
06:52.650 --> 06:53.610
I did that last time.
06:53.610 --> 06:53.970
There we go.
06:54.000 --> 06:54.810
Try this.
06:55.020 --> 06:56.400
And here we go.
06:56.400 --> 06:59.490
It is the rack, roller and stud assembly kit.
06:59.520 --> 07:02.220
Let's, uh, print its prompt.
07:04.530 --> 07:12.180
And we'll hopefully also again convince ourselves that those, uh, clunky part numbers are filtered
07:12.180 --> 07:12.690
out.
07:12.720 --> 07:13.740
Yes they are.
07:13.770 --> 07:19.350
And this is, again, the dishwasher top rack roller and stud assembly kit.
07:22.680 --> 07:26.460
And this is the door pivot block.
07:26.850 --> 07:28.500
Always needed one of those.
07:28.650 --> 07:30.510
Uh, so there we go.
07:30.660 --> 07:35.100
Uh, here are our data points loaded in for us.
07:35.460 --> 07:43.560
Um, now it's going to be time for us to scale up and embark upon a much bigger problem.
07:43.560 --> 07:49.980
We are going to be bringing in all of these data sets from the the Amazon.
07:49.980 --> 07:55.120
Uh, Product Prices dataset repository.
07:55.150 --> 08:00.010
I've selected these to be fairly similar kinds of things that are sort of stuff you might find at a
08:00.010 --> 08:01.810
large home retail store.
08:01.810 --> 08:08.770
So it's really almost anything on Amazon, not including stuff like clothes, beauty products, books
08:08.920 --> 08:10.690
and software and things like that.
08:10.690 --> 08:14.650
So it's everything that felt like it was sort of similar kind of stuff.
08:14.740 --> 08:18.040
Um, that would make a really good comprehensive data set.
08:18.040 --> 08:19.570
So here are the data sets.
08:19.570 --> 08:25.000
And of course, as you follow along and
08:25.000 --> 08:30.550
do this yourself, you can play with this and you can choose a different set of data
08:30.550 --> 08:31.030
if you wish.
08:31.030 --> 08:34.090
You could do this for clothes and see how it performs.
08:34.180 --> 08:39.970
Um, if you are concerned about the size of data and you want to be doing things quicker, you can just
08:39.970 --> 08:40.480
limit it.
08:40.480 --> 08:43.450
The appliances data set is one of the smallest ones there.
08:43.480 --> 08:46.210
Electronics is a nice one in the middle.
08:46.210 --> 08:48.340
So you could just focus on electronics.
08:48.520 --> 08:54.760
Um, and you'll still have great fun, and you'll still get much the same kinds of performance results
08:54.760 --> 08:57.730
as we will get with the entire data set.
08:57.880 --> 09:00.160
So those are all of the data set names.
09:00.160 --> 09:06.550
And now I'm going to read all of them in using the ItemLoader.
09:06.550 --> 09:09.370
And it's going to read them in one at a time.
09:09.370 --> 09:12.790
The biggest one is automotive which is the first one on the list.
09:12.790 --> 09:16.180
I've generally ordered it, I think with the biggest ones at the beginning.
09:16.330 --> 09:20.920
It might take a bit longer for you, because the first time you run this it has to download the
09:20.920 --> 09:26.410
data from Hugging Face to your computer. I've already run it once, so that step has happened, and
09:26.410 --> 09:29.110
it just reuses a cache on my computer.
09:29.230 --> 09:33.130
So it may take a bit longer for you because of that.
09:33.130 --> 09:35.170
And then off it will go.
09:35.380 --> 09:40.180
Now, the total time on my computer is about 20 minutes.
09:40.240 --> 09:47.890
I can see my CPU is flat out, so I'm going to take a pause and see you again right after
09:47.890 --> 09:50.080
the break when the data has all loaded.