1
00:00:00,030 --> 00:00:06,660
In the lectures. So we'll see how that goes. This is Scientific Computing for DPhil Students.
2
00:00:06,900 --> 00:00:13,230
I'm the professor of numerical analysis. So there's a numerical analysis group here, and I'm the head of that.
3
00:00:13,890 --> 00:00:17,190
Of course, I'm curious to know who you are. So could we begin?
4
00:00:17,940 --> 00:00:20,970
Raise your hand if you're a DPhil student in mathematics.
5
00:00:22,410 --> 00:00:27,490
Physics. Chemistry. That's less than usual.
6
00:00:27,500 --> 00:00:31,880
That's interesting. Engineering science. Computer science.
7
00:00:32,930 --> 00:00:36,560
Plant science. What have I forgotten?
8
00:00:39,200 --> 00:00:42,710
Which science? Animal science. Okay, good.
9
00:00:43,370 --> 00:00:47,830
There are some MMSC students here, I think. Where are you?
10
00:00:47,900 --> 00:00:52,280
But who else have I overlooked? Okay.
11
00:00:52,820 --> 00:00:57,300
Well, anyway, I hope you have fun and I hope you meet some of the other people in the room, you know?
12
00:00:57,320 --> 00:01:02,840
One of the problems at Oxford is, outside our colleges, we're not so good at meeting people from other departments.
13
00:01:03,020 --> 00:01:06,350
This is a chance because all of you are interested in numerical computing.
14
00:01:07,520 --> 00:01:14,239
So let's see. Let's take a moment and look over just a few operational details.
15
00:01:14,240 --> 00:01:18,820
So there's a pack of seven handouts. I hope you all got them; there are more scattered about if a few of you didn't.
16
00:01:19,250 --> 00:01:23,150
I hope you've done the quiz at the end. We'll go over the answers to the quiz.
17
00:01:26,090 --> 00:01:32,690
If you turn to the blue sheet, this contains all sorts of information about how the course runs.
18
00:01:34,490 --> 00:01:37,570
There's also an outline of the lectures of the course on another sheet.
19
00:01:37,580 --> 00:01:41,900
So as I say, I'm Nick Trefethen, and there are two TAs for the course.
20
00:01:42,440 --> 00:01:49,130
One of them is Mikael Slevinsky in the back row there, and the other is Adrian Martinelli, also in the back row.
21
00:01:49,700 --> 00:01:53,600
So Mikael and Adrian know everything about scientific computing.
22
00:01:53,990 --> 00:02:00,410
And by all means, send them email if you'd like to set up an appointment, or if you have a question they can answer by email.
23
00:02:00,620 --> 00:02:06,080
And of course, you're also welcome to contact me by email. We're all in this building and not that hard to find.
24
00:02:06,830 --> 00:02:12,920
Mikael and Adrian will be marking the problem sheets, of which there are four.
25
00:02:14,060 --> 00:02:21,320
So each Tuesday of an even numbered week, we hand out an assignment and it's due the next Tuesday, the odd numbered week.
26
00:02:21,590 --> 00:02:26,270
The first assignment doesn't count for very much while you're getting up to speed and the others count for more.
27
00:02:26,720 --> 00:02:34,010
These are mostly computational work which you do in MATLAB, and I'm assuming you have access to MATLAB.
28
00:02:34,010 --> 00:02:39,410
If you don't, you'll see there's a note here about how to get it from IT Services.
29
00:02:41,000 --> 00:02:47,360
Anyway, we have these assignments and the course marks are based on those and they have to be turned in on schedule.
30
00:02:47,360 --> 00:02:52,730
They can't be late because we hand out solutions at the lecture where the assignments are due.
31
00:02:52,970 --> 00:03:03,260
So we do not accept late assignments. Now, I think handwritten on your blue sheet you'll see a mention of boxes 160 and 161.
32
00:03:03,920 --> 00:03:08,510
Those refer to some hand in boxes that are across this atrium out there.
33
00:03:08,780 --> 00:03:13,340
So that's where you turn in the assignments in those boxes across the atrium.
34
00:03:14,570 --> 00:03:18,110
Mikael, Adrian, is there anything you need to tell us about the mechanics?
35
00:03:20,830 --> 00:03:24,399
Okay. Right. Uh oh.
36
00:03:24,400 --> 00:03:33,250
Well, there's a web page, so you can see the URL here, and basically all the lecture notes will be put up on the web page.
37
00:03:33,460 --> 00:03:35,620
They're fairly terse lecture notes.
38
00:03:37,000 --> 00:03:44,170
In principle, you could learn this material by looking at the lecture notes, but in practice, no human being can really do that.
39
00:03:44,350 --> 00:03:47,470
You want to be here at the lectures; things move quickly.
40
00:03:47,650 --> 00:03:53,320
You want to be here at the lectures. There are, of course, 12 lectures this term and then another 12 in the next term.
41
00:03:54,760 --> 00:03:59,080
Any questions about mechanical things? Okay.
42
00:04:00,240 --> 00:04:05,930
I don't know details of whether your department counts this course for this or that.
43
00:04:05,940 --> 00:04:18,140
That's up to you and your department. Okay.
44
00:04:18,440 --> 00:04:27,830
So the way the course is structured: in this term, we're doing some linear algebra,
45
00:04:27,860 --> 00:04:33,260
hopefully from a point of view that isn't entirely standard to you, and then some optimisation.
46
00:04:33,500 --> 00:04:38,090
But of course, I'd like to begin with some big picture about scientific computing.
47
00:04:38,270 --> 00:04:44,780
So if you'll allow me, I'll write down the structure of the first part of the course.
48
00:04:45,200 --> 00:04:50,390
Part one I call sparse matrices and iterative methods.
49
00:04:58,480 --> 00:05:04,299
And let me begin immediately, not on that subject, but with a view of the field.
50
00:05:04,300 --> 00:05:09,880
To give you an idea of how I see this fitting in to scientific computing more generally.
51
00:05:10,060 --> 00:05:13,810
So we'll call that 1.1, View of the Field.
52
00:05:20,360 --> 00:05:25,910
So the first question is, what is this field: scientific computing, numerical analysis?
53
00:05:25,940 --> 00:05:29,870
I regard those as synonyms. Not everybody shares that view.
54
00:05:30,080 --> 00:05:36,889
But to me, numerical analysis and scientific computing are both the discipline concerned with
55
00:05:36,890 --> 00:05:43,100
algorithms of a numerical sort for mathematical problems of a continuous sort.
56
00:05:44,810 --> 00:05:48,200
And the difference in the phrases maybe is what they connote.
57
00:05:48,680 --> 00:05:54,950
Numerical analysis, of course, sounds like the more theoretical end, and scientific computing the more applied end.
58
00:05:55,160 --> 00:05:59,149
But I don't really make a distinction between those two. Now,
59
00:05:59,150 --> 00:06:04,459
what I'd like to mention is that I'm trying to give you a very quick review
60
00:06:04,460 --> 00:06:09,770
of a lot of material in these lectures, fairly classical in orientation.
61
00:06:10,310 --> 00:06:18,550
And in particular, my view of the classical structure of this field is that there are three big parts of scientific computing.
62
00:06:18,560 --> 00:06:26,730
One is linear algebra. One is differential equations.
63
00:06:26,730 --> 00:06:34,560
So let's call that ODE and PDE, by which we mean ordinary and partial differential equations.
64
00:06:35,790 --> 00:06:38,100
And then the other is optimisation.
65
00:06:39,420 --> 00:06:47,730
Now, of course, not everything fits this picture, but these are the three great areas I think, of classic scientific computing.
66
00:06:47,820 --> 00:06:55,380
And we're beginning on the left here with linear algebra. Let me mention three particularly big parts of linear algebra.
67
00:06:56,220 --> 00:07:00,840
Three big problems. One is solving systems of linear equations.
68
00:07:04,370 --> 00:07:09,980
Another is least-squares problems, where you're data fitting, if you like.
69
00:07:10,340 --> 00:07:16,270
Or another way to describe it is solving rectangular rather than square systems of equations.
70
00:07:17,450 --> 00:07:20,720
There's plenty of room scattered about, so please sit down.
71
00:07:21,890 --> 00:07:27,770
And the third would be eigenvalues and eigenvectors, and singular values, the SVD.
72
00:07:30,300 --> 00:07:36,480
Once again, of course, those are not the only three parts of this field, but they are the three biggest, very important.
73
00:07:38,430 --> 00:07:43,050
And there's an important distinction to make in all of this numerical linear algebra,
74
00:07:43,590 --> 00:07:48,900
which is that in all of these three bubbles, there are sort of two main styles of work.
75
00:07:49,140 --> 00:07:56,730
One is dense matrices, by which we mean matrices more or less all of whose entries are non-zero.
76
00:07:57,060 --> 00:08:06,690
And you could call this classical numerical linear algebra, which means 1950s and 1960s, 1970s, that kind of era.
77
00:08:07,830 --> 00:08:12,959
And then the other point of view would be the sparse numerical linear algebra.
78
00:08:12,960 --> 00:08:15,990
And another aspect of that would be iterative methods.
79
00:08:16,770 --> 00:08:25,140
And now we're getting into more modern aspects, if you like: parallel computing, large-scale problems, which tend to be sparse.
80
00:08:29,950 --> 00:08:35,650
These of course, are themes throughout scientific computing, and we'll certainly see them in linear algebra.
81
00:08:37,620 --> 00:08:44,579
You'll find I'm very interested in history; to me, history gives you a perspective on a field that's interesting.
82
00:08:44,580 --> 00:08:47,640
And there's a fascinating historical aspect of all this.
83
00:08:48,000 --> 00:08:55,080
If you look at mathematics over the last few centuries, linear algebra was not important until computers were invented.
84
00:08:55,290 --> 00:09:00,569
Of course, linear algebra was a part of mathematics, but it was never a hot field.
85
00:09:00,570 --> 00:09:04,170
It was never a very big field. And then computers came along.
86
00:09:04,170 --> 00:09:08,250
And since then, linear algebra has just grown and grown in its importance.
87
00:09:09,890 --> 00:09:17,180
So, for example, every undergraduate degree on Earth nowadays probably requires linear algebra. That might not have been true before the war,
88
00:09:17,180 --> 00:09:21,260
shall we say. I'd like to say a word about why that happens.
89
00:09:22,670 --> 00:09:25,430
And the word I'd like to say is schematic.
90
00:09:26,750 --> 00:09:36,079
So let's imagine that we have a big problem of computational science, which, let's say is a nonlinear partial differential equation,
91
00:09:36,080 --> 00:09:41,030
probably in three dimensions or something; might be the Navier-Stokes equations of fluid mechanics.
92
00:09:41,270 --> 00:09:45,200
So a big problem. How do we solve it computationally?
93
00:09:45,530 --> 00:09:58,280
Well, whenever anything is nonlinear, you're probably linearising, which means locally you pretend it's linear, and then somehow you have to iterate.
94
00:10:00,140 --> 00:10:03,500
Take a few steps in order to converge to the nonlinear solution.
95
00:10:03,920 --> 00:10:13,940
So given a nonlinear PDE, the standard algorithms we all know about turn it into a succession of linear problems, typically.
96
00:10:16,580 --> 00:10:26,900
Now what about the PDE? So the nonlinearity turns into linearity through linearisation, but the PDE is discretised one way or another.
97
00:10:30,230 --> 00:10:37,490
That's the standard method to solve a PDE. You have a grid or finite elements or some sort of expansion, some sort of discretisation.
98
00:10:37,730 --> 00:10:41,840
And that turns a problem of analysis into a problem of algebra.
99
00:10:42,260 --> 00:10:50,630
So it really is a standard picture that nonlinear PDEs turn into problems of linear algebra because of these two processes.
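As a minimal sketch of those two processes, here is an assumed toy example (not from the lecture): a Newton iteration for a discretised nonlinear system, where each step linearises the problem and solves one linear system with backslash.

```matlab
% Toy illustration: Newton's method for the nonlinear system
% F(x) = x.^3 + x - b = 0, a hypothetical stand-in for a discretised
% nonlinear PDE. Each step linearises F and does one linear solve.
n = 100;
b = ones(n,1);
x = zeros(n,1);                 % initial guess
for k = 1:20
    F = x.^3 + x - b;           % nonlinear residual
    J = diag(3*x.^2 + 1);       % Jacobian of F (the linearisation)
    dx = -J\F;                  % one linear system solved per step
    x = x + dx;
    if norm(dx) < 1e-12, break, end
end
norm(x.^3 + x - b)              % residual should now be tiny
```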
100
00:10:52,660 --> 00:10:56,530
Von Neumann in the old days, you know, the late forties, early fifties, that kind of thing.
101
00:10:56,530 --> 00:11:02,650
He began to see this, but I think those guys just had no idea how big linear algebra would grow.
102
00:11:02,830 --> 00:11:05,560
They had no idea how powerful computers would grow.
103
00:11:06,010 --> 00:11:16,110
Actually, von Neumann is another interesting point of history. I think he predicted fairly accurately how good weather prediction would be in 2015.
104
00:11:16,150 --> 00:11:22,900
He knew that we'd be good at predicting weather, but the reason he got it right is that he made two errors that cancel.
105
00:11:23,110 --> 00:11:26,470
It turns out it's about a million times harder than he would have thought,
106
00:11:27,220 --> 00:11:31,090
and our machines are surely a million times more powerful than he would have predicted.
107
00:11:31,270 --> 00:11:34,800
So in the end, he made the right prediction. Okay.
108
00:11:37,500 --> 00:11:45,930
Just about this philosophy. Let me emphasise that I'm well aware that you in the room have different particular interests.
109
00:11:46,200 --> 00:11:51,330
And chances are, most of you don't regard yourself as interested in linear algebra per se.
110
00:11:51,780 --> 00:11:57,599
But let me just assure you that this is foundational material that you can't know too well.
111
00:11:57,600 --> 00:12:01,830
Anyone doing computational science really needs to understand these things.
112
00:12:03,270 --> 00:12:12,690
Okay. So that's it for that beginning. Now I think I'll switch over here and start talking about speed of solving linear algebra problems.
113
00:12:12,960 --> 00:12:20,940
So let's ask the question: how fast can we solve Ax equals b?
114
00:12:27,430 --> 00:12:32,410
So Ax equals b is the standard notation for a square system of linear equations.
115
00:12:32,620 --> 00:12:37,300
A is a matrix, an N-by-N matrix; x and b are column vectors.
116
00:12:37,570 --> 00:12:42,010
We're given b, we want to find x, so I suppose I should write that down.
117
00:12:42,820 --> 00:12:47,110
A is a capital-N by capital-N matrix.
118
00:12:48,760 --> 00:12:55,149
I think I'll be inconsistent about capital versus lowercase. b is a vector,
119
00:12:55,150 --> 00:13:07,060
so we could call it an N-by-1 vector just to emphasise that it's a column vector, and x is an N-by-1 vector of unknowns.
120
00:13:10,500 --> 00:13:14,280
So we're given A and b and we want to find x.
121
00:13:17,310 --> 00:13:21,330
And the question is how big an N is feasible.
122
00:13:22,560 --> 00:13:30,780
It's just amazing to read some of the old stuff from the early days of computing, when even N of 10 or 20 seemed pretty impressive.
123
00:13:30,930 --> 00:13:37,700
I have a quote here from 1951 from the great numerical analyst Wilkinson.
124
00:13:37,710 --> 00:13:44,580
So he's describing the Pilot ACE computer, which is the one that Turing was in charge of setting up at the National Physical Laboratory.
125
00:13:45,240 --> 00:13:51,600
And Wilkinson says: by 1949, the major components of the Pilot ACE were complete and undergoing trials.
126
00:13:52,380 --> 00:14:01,260
During 1951, a program for solving simultaneous linear algebraic equations was used for the first time. The 26th of June
127
00:14:01,260 --> 00:14:04,410
1951 was a landmark in the history of the machine.
128
00:14:04,800 --> 00:14:11,490
For on that day, it first rivalled alternative computing methods by yielding by 3 p.m.
129
00:14:12,030 --> 00:14:17,280
The solution of a set of 17 equations submitted the very same morning.
130
00:14:18,720 --> 00:14:27,180
So in 1951, it takes 6 hours to do a 17 by 17 system of equations.
131
00:14:28,800 --> 00:14:43,080
It's quicker now. Let me give some idea, very schematic here, of big values of N. So following on from
132
00:14:43,080 --> 00:14:48,630
Wilkinson, let's say around 1950, N equals 20 would have been a big value.
133
00:14:50,790 --> 00:14:56,280
1965, N equals 200 would have been a big value.
134
00:14:56,580 --> 00:15:04,230
1980, N equals 2,000 would have been a big value.
135
00:15:05,970 --> 00:15:13,260
1995, I think the pattern is pretty clear, N equals 20,000 would have been a big value.
136
00:15:13,410 --> 00:15:18,060
2010, N equals 200,000.
137
00:15:18,510 --> 00:15:23,820
Those are big matrices, reasonably close to the limit of what people can do.
138
00:15:24,570 --> 00:15:32,010
Bigger than ordinary people ever do. No normal person in 1950 was solving 20-by-20 systems, and no normal person in
139
00:15:32,580 --> 00:15:40,230
2010 was solving 200,000 by 200,000 systems. But you can see it's a dramatic increase.
140
00:15:41,430 --> 00:15:47,729
In fact, how big an increase is it? If you take these numbers? It's about a factor of ten to the fourth, right?
141
00:15:47,730 --> 00:15:54,330
Every 15 years, roughly speaking, we're getting an increase in N of a factor of ten.
142
00:15:56,490 --> 00:16:01,740
Now, of course, computers have speeded up a lot more than that. So this is how much N has increased.
143
00:16:02,070 --> 00:16:05,860
How much have computers increased? Well, more like ten to the 12th.
144
00:16:07,080 --> 00:16:18,720
So, in this business, we talk about flops, floating point operations per second, and megaflops and gigaflops and so on.
145
00:16:18,900 --> 00:16:27,810
And roughly speaking, in this era, computers increased from flops to gigaflops, or to teraflops, I guess you'd say.
146
00:16:31,230 --> 00:16:35,309
So you, being young, probably know more of these prefixes than I do.
147
00:16:35,310 --> 00:16:38,340
I get stuck after around exa. But they keep going, don't they?
148
00:16:38,460 --> 00:16:41,490
As yet, they all sound like characters in Star Wars to me.
149
00:16:41,490 --> 00:16:44,490
But it keeps going. Flops:
150
00:16:44,490 --> 00:16:51,570
it means one floating point operation per second. Teraflops means ten to the 12 floating point operations per second.
151
00:16:51,810 --> 00:16:55,670
So let's say it would be flops and then kiloflops.
152
00:16:55,680 --> 00:17:02,309
I've never heard that expression. And then megaflops and then gigaflops and then teraflops and petaflops and then exaflops.
153
00:17:02,310 --> 00:17:07,890
And what, ten to the 21? Somebody remind me. We're at tera, heading for exa.
154
00:17:08,070 --> 00:17:11,070
Well, actually, yotta. What is it?
155
00:17:11,700 --> 00:17:16,860
Yotta, huh? That sounded right. It's something like that. Yeah, it's just around the corner.
156
00:17:19,260 --> 00:17:26,130
This is about a factor of ten to the 12th. I'm being very rough here with my numbers, but.
157
00:17:28,420 --> 00:17:32,620
Computers got ten to the 12th times faster and matrices got ten to the fourth times bigger.
158
00:17:32,620 --> 00:17:46,720
And there's a reason for that. The algorithms, the standard classical algorithms, take on the order of N cubed flops to solve an Ax equals b problem.
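That arithmetic can be checked in a line or two (a sketch; the round numbers are the lecture's own):

```matlab
% If the work for a dense solve grows like N^3, then a 10^4 increase
% in matrix size needs a (10^4)^3 = 10^12 increase in computer speed.
size_gain  = 1e4;            % N: 20 (around 1950) to 200,000 (around 2010)
work_gain  = size_gain^3;    % flops required, O(N^3)
speed_gain = 1e12;           % roughly flops to teraflops
work_gain == speed_gain      % the two factors match
```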
159
00:17:47,050 --> 00:17:50,620
So we see it right there. Error during recording, Steve. Should I worry about that?
160
00:17:52,110 --> 00:17:56,630
Oh. Okay. Good. That was easy.
161
00:17:58,500 --> 00:18:02,579
So I think it's marvellous here how you can see in history something about algorithms.
162
00:18:02,580 --> 00:18:06,090
If you didn't know the N cubed, you could maybe guess it from looking at the data.
163
00:18:06,570 --> 00:18:11,400
If you look at the Top500 list now, you can Google that: Top500.
164
00:18:11,670 --> 00:18:15,030
This is the list that's been maintained for 15 or 20 years.
165
00:18:15,030 --> 00:18:18,030
Of the 500 fastest computers on Earth.
166
00:18:18,720 --> 00:18:26,670
Most of them are so parallel that they bear little resemblance to the type of machines that we use.
167
00:18:26,760 --> 00:18:29,370
That wasn't true 15 years ago, but now it is true.
168
00:18:30,000 --> 00:18:36,900
The whole top 500 thing has become a little like Arnold Schwarzenegger in his muscle era, a little bit divorced from reality.
169
00:18:37,200 --> 00:18:40,980
But it is incredible how things have speeded up. Let's see, I made a note here.
170
00:18:42,990 --> 00:18:47,639
The fastest machine, the number one, which is a Chinese thing, is way into the petaflops.
171
00:18:47,640 --> 00:18:54,180
I think it's sort of ten to the 16th. Even the 500th fastest machine is in the hundreds of teraflops.
172
00:18:54,450 --> 00:19:02,970
So if a teraflop is ten to the 12th, then ten to the 14th would put you, you know, not even at the 500th fastest machine on Earth.
173
00:19:03,210 --> 00:19:07,440
It's just amazing. Let me make two comments about that.
174
00:19:09,440 --> 00:19:16,070
There exist O of N to the 2.376 algorithms.
175
00:19:20,300 --> 00:19:24,230
So although the standard algorithms are n cubed, in principle you can do better.
176
00:19:24,410 --> 00:19:33,140
However, these have no practical value. They exist only in theory and are never used, because the constants are far too awful.
177
00:19:33,530 --> 00:19:40,580
So as far as anybody knows, at the moment, no practical algorithms are better than N cubed, or maybe N to the 2.8.
178
00:19:41,840 --> 00:19:47,270
And the other thing I want to mention is that this classical point of view is all about operation counts.
179
00:19:48,950 --> 00:19:58,130
And that's considered these days a very classical point of view, because communication as opposed to floating point operation, also matters.
180
00:19:59,970 --> 00:20:06,629
And you see this on your own machine, but you certainly see it on these big,
181
00:20:06,630 --> 00:20:10,950
massively parallel machines where they might have hundreds of thousands of cores.
182
00:20:11,460 --> 00:20:15,990
And most of the concern is how to move the information from one core to another.
183
00:20:16,200 --> 00:20:25,530
So as time evolves, this kind of N cubed discussion is becoming a little further removed from high performance computing.
184
00:20:25,770 --> 00:20:30,240
But I think now it still does tell us a lot about what most of us do most of the time.
185
00:20:31,770 --> 00:20:38,890
Okay. So I like to play on the computer. And of course, in this course we play in MATLAB.
186
00:20:42,440 --> 00:20:49,930
So let me, to give you an idea, just explore how fast things can go.
187
00:20:51,130 --> 00:20:57,670
Now, the way I do this is I put together sheets telling you what commands I'm going to
188
00:20:57,670 --> 00:21:03,070
type so that you can make notes if you like or go back and do it in your own time.
189
00:21:03,190 --> 00:21:12,730
And these also end up online. And of course, part of the purpose here is also to teach you a bit of MATLAB.
190
00:21:12,740 --> 00:21:17,330
I hope you've all used MATLAB, but I'm sure there are some people here who haven't used it before.
191
00:21:18,260 --> 00:21:24,020
So for example, let me say A equals rand of five.
192
00:21:25,520 --> 00:21:30,830
That produces a five by five matrix of random numbers, each one between zero and one.
193
00:21:31,670 --> 00:21:40,370
And let me say b equals ones of five comma one; that produces a right-hand-side vector b. And let me shrink the font a little.
194
00:21:40,400 --> 00:21:45,680
This command, by the way, is not a standard MATLAB command, so you can't do that.
195
00:21:47,300 --> 00:21:52,220
You can do it, but much more slowly. And now let's solve the system of equations.
196
00:21:52,220 --> 00:22:00,860
So if I say x equals A backslash b, an algorithm is invoked, and you don't even know what it is at this point necessarily.
197
00:22:00,860 --> 00:22:04,610
But that's the MATLAB syntax for solving a system of equations.
198
00:22:05,570 --> 00:22:11,330
Mathematically, it's A inverse times b, although the algorithm doesn't actually construct A inverse.
199
00:22:12,380 --> 00:22:16,220
So if I do that, it solves the system. And there's the solution.
200
00:22:16,550 --> 00:22:24,050
By the way, I'm in format short mode, but if I type format long, then you can see all the digits, more or less, that are stored.
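The commands in this first demo, reconstructed from the spoken description as a sketch (the exact session isn't shown in the transcript):

```matlab
A = rand(5);       % 5-by-5 matrix of uniform random numbers in (0,1)
b = ones(5,1);     % right-hand-side column vector of ones
x = A\b;           % solve the linear system Ax = b
format long        % display roughly all the stored digits
x
A*x - b            % residual: zero mathematically, ~1e-15 in practice
whos               % list the variables and their sizes in bytes
```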
201
00:22:25,660 --> 00:22:29,830
Let's see how good it is. Suppose I say A times x minus b.
202
00:22:30,070 --> 00:22:38,530
So mathematically that should be zero, because Ax equals b. On the computer it's not zero; it's ten to the minus 15th times various things.
203
00:22:38,740 --> 00:22:41,890
Because standard precision on most computers is.
204
00:22:42,870 --> 00:22:52,250
Ten to the -16, more or less. If I type whos in MATLAB, it tells me what variables I've got.
205
00:22:52,260 --> 00:22:58,320
And if you look at those variables, you'll see that there's a five-by-five matrix A and a b and an x.
206
00:22:58,980 --> 00:23:02,150
The number of bytes is always eight times the number of numbers.
207
00:23:02,160 --> 00:23:13,800
So a five-entry vector is 40 bytes, because a standard number is two words and a word is four bytes.
208
00:23:14,880 --> 00:23:21,590
So 64 bits for each number. Let's make it a little bigger.
209
00:23:21,800 --> 00:23:29,630
So suppose I now say A equals randn, which means normal random numbers, of size 500 by 500.
210
00:23:30,740 --> 00:23:34,190
And now I could say b equals ones of 500 comma one.
211
00:23:34,610 --> 00:23:38,010
And I could say x equals A backslash b. No problem.
212
00:23:38,070 --> 00:23:42,370
Oh, there is a problem. No problem.
213
00:23:43,330 --> 00:23:54,280
If I now say whos, you can see the 500-by-500 matrix A: there's 250,000 entries, and each one takes eight bytes.
214
00:23:54,280 --> 00:24:03,580
So we get 200,000, sorry, 2 million bytes, which when I was your age of course would have been a big matrix, but is now completely trivial on any laptop.
215
00:24:06,410 --> 00:24:11,570
What if I say A times x minus b? I get a long vector.
216
00:24:12,500 --> 00:24:18,110
It looks like big numbers, but it's not really because at the top of this vector, it said ten to the -15.
217
00:24:18,170 --> 00:24:25,070
In fact, if I look at the end entry of that long vector, you can see it's, well, ten to the -14.
218
00:24:25,460 --> 00:24:30,290
So 500 by 500 matrix, we lost another digit to random effects, presumably.
219
00:24:31,250 --> 00:24:34,690
So let's look at the speed. Let's try.
220
00:24:34,700 --> 00:24:39,470
I'll say n equals 1000, and I'll say A equals randn of n.
221
00:24:39,770 --> 00:24:48,220
So a thousand-by-thousand random matrix, and I'll make a right-hand side, and then I'll say tic, A backslash b, toc.
222
00:24:49,040 --> 00:24:54,500
So that's how long it takes on my laptop to solve a thousand-by-thousand system.
223
00:24:55,670 --> 00:25:01,260
Let's try 2000. So on my laptop.
224
00:25:04,170 --> 00:25:07,260
According to the N cubed, it should be eight times longer.
225
00:25:07,260 --> 00:25:13,950
It's not quite there. We're not in the asymptotic regime. Timings are always complicated, but it's certainly more than four times longer.
226
00:25:14,070 --> 00:25:19,530
Let's try 4000. So that should be eight times longer than half a second.
227
00:25:20,520 --> 00:25:26,469
So the prediction is about 4 seconds. Works.
228
00:25:26,470 --> 00:25:29,530
Okay, so the N cubed is certainly visible in these experiments.
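A sketch of that timing experiment, reconstructed from the spoken commands (exact times will vary by machine, so no particular numbers should be expected):

```matlab
% Time the dense solve for doubling sizes. Each doubling should take
% roughly 8 times longer, reflecting the O(n^3) operation count.
for n = [1000 2000 4000]
    A = randn(n);
    b = ones(n,1);
    tic, x = A\b; t = toc;
    fprintf('n = %4d   time = %.3f s\n', n, t)
end
```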
229
00:25:29,800 --> 00:25:34,180
By the way, on my MATLAB, on my laptop, you can also do 8000 and that works.
230
00:25:34,180 --> 00:25:37,660
So this says a lot about storage.
231
00:25:37,840 --> 00:25:44,170
Just a few years ago, an 8000 by 8000 matrix would not have fit on your laptop so conveniently.
232
00:25:44,380 --> 00:25:50,030
But now that's fine. Let's look at the data more systematically.
233
00:25:50,150 --> 00:25:56,690
So following the handout here, I'm going to try format bank, which means dollars and cents, and I'll say for
234
00:25:56,690 --> 00:25:58,370
i equals 1 to 12.
235
00:25:58,820 --> 00:26:15,560
Let's let n be 2 to the i, and let's let A be randn of n, and let's say tic, A backslash ones of n comma one, t equals toc, and display n and t.
236
00:26:18,230 --> 00:26:25,270
So here you see, we're doing an experiment in which we keep doubling and see how long it takes in seconds,
237
00:26:25,280 --> 00:26:27,320
and most of them aren't visible in bank format.
238
00:26:27,530 --> 00:26:33,740
But if I displayed more digits, they'd not be very informative anyway, because it's hard to time quick operations.
239
00:26:34,730 --> 00:26:38,900
Throughout this course, I want to urge you always to interact with problems like this.
240
00:26:38,900 --> 00:26:42,170
You always explore, do experiments all the time.
241
00:26:42,350 --> 00:26:45,410
It's so easy in modern computing environments.
242
00:26:46,850 --> 00:26:54,230
Let's do the same thing again, except make a plot. So I'll say again, for i equals 1 to 12, n equals two to the i.
243
00:26:55,190 --> 00:27:09,410
And now I'll say tic, A backslash ones of n comma one, and I'll say t equals toc, and I'll say nn of i equals n and tt of i equals t.
244
00:27:10,850 --> 00:27:14,120
And I've made a mistake: 'bones'. That's a new one.
245
00:27:15,260 --> 00:27:30,290
Okay. When I was getting ready for this lecture, at one point I had to type the word zeros, and I left off the z, and it said 'eros': unknown command.
246
00:27:30,290 --> 00:27:37,639
And I thought, why did I type eros? That confused me. Okay.
247
00:27:37,640 --> 00:27:41,650
So now it's accumulating the data in a couple of vectors and we can now plot them.
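The loop he describes, reconstructed as a sketch (the vector names nn and tt are as spoken in the lecture):

```matlab
% Accumulate matrix sizes and solve times for n = 2, 4, ..., 4096.
nn = zeros(1,12); tt = zeros(1,12);
for i = 1:12
    n = 2^i;
    A = randn(n);
    tic, x = A\ones(n,1); t = toc;
    nn(i) = n; tt(i) = t;
end
plot(nn, tt, '.-')    % time against dimension of the matrix
```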
248
00:27:41,660 --> 00:27:50,479
So for example, suppose I said plot of nn against tt. Well, then we get a plot of an N cubed
249
00:27:50,480 --> 00:27:54,950
relationship of some sort: dimension of the matrix against the time.
250
00:27:55,160 --> 00:28:05,060
That's kind of hard to see too well. Let's improve it a bit. So I could say, for example, plot of nn, tt with dots, like this, and then
251
00:28:07,240 --> 00:28:11,530
shg, for show graph, and I've just docked the graph.
252
00:28:11,530 --> 00:28:16,120
So now we can see at least the dots are visible as well as the lines connecting them.
253
00:28:16,330 --> 00:28:19,690
But maybe a log log scale would be more enlightening.
254
00:28:19,900 --> 00:28:23,050
So let's say loglog of nn against tt.
255
00:28:29,460 --> 00:28:36,450
So there you can see kind of a typical experiment where the small sizes give you meaningless data.
256
00:28:36,480 --> 00:28:40,440
Of course, it's not really true that a two by two matrix poses a real difficulty,
257
00:28:40,440 --> 00:28:44,310
but when you try to time things, you always find funny stuff at the low end.
258
00:28:44,520 --> 00:28:47,580
But at the high end, things are looking nice and clean.
259
00:28:47,610 --> 00:28:53,910
There does seem to be some kind of an N cubed relationship. Let's see how close to N cubed it really does look.
260
00:28:55,380 --> 00:29:02,430
So let's say hold on. So then when I plot the next curve, it doesn't erase the old curve.
261
00:29:02,430 --> 00:29:07,050
And I'll say loglog of nn, and then one e minus ten
262
00:29:08,270 --> 00:29:21,680
times nn. Oops, what have I done? One e minus ten times nn cubed, and I'll do it as a dashed red line.
263
00:29:25,880 --> 00:29:30,800
So there you see, we've plotted a multiple of N cubed superimposed on the plot.
264
00:29:31,640 --> 00:29:38,000
There's no sense going down to ten to the minus ten. So let's say ylim of 1e-5, 1e2.
265
00:29:38,090 --> 00:29:41,060
We're changing the Y limits of the plot there.
266
00:29:41,420 --> 00:29:50,990
So now you can see more cleanly that it is reasonably close to an N cubed relationship, with the slope getting a little bigger as we go up.
267
00:29:51,140 --> 00:29:54,980
If we kept going, other things would happen due to caching and memory problems.
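The plotting commands in this passage, reconstructed as a sketch (the constant 1e-10 is from the lecture and just positions the reference line; nn and tt are the vectors from the timing loop):

```matlab
% Compare measured solve times against an n^3 reference on log-log axes.
loglog(nn, tt, '.-')            % measured times
hold on                          % keep the curve when plotting again
loglog(nn, 1e-10*nn.^3, '--r')  % a multiple of n^3, dashed red line
ylim([1e-5 1e2])                % trim the meaningless small-n region
```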
268
00:29:56,840 --> 00:30:02,290
And just to have fun a little bit with MATLAB graphics, of course you can do things like put a label on,
269
00:30:02,300 --> 00:30:11,100
I could say xlabel dimension, and I could say ylabel time in seconds, and I could make it bigger.
270
00:30:11,120 --> 00:30:16,970
I could say font size 20. And then I could say title.
271
00:30:19,650 --> 00:30:33,100
The speed of MATLAB backslash, font size 24. And I could say gtext, that's a MATLAB
272
00:30:33,100 --> 00:30:38,710
command for putting a bit of text on a plot interactively.
273
00:30:38,740 --> 00:30:46,690
So let's say gtext n cubed, font size 24, in the colour red.
274
00:30:48,130 --> 00:30:57,130
So when I issue that command, it then waits for me to click somewhere and I'll click right here and it puts that bit of text on the plot.
275
00:31:05,630 --> 00:31:11,480
You might have heard that I was actually the first customer who ever bought MATLAB back in 1985.
276
00:31:11,490 --> 00:31:13,250
So I've been doing this for a long time.
277
00:31:14,390 --> 00:31:20,990
It's strange when I talk to people at the company because my closeness to their product is much greater than theirs.
278
00:31:21,020 --> 00:31:25,639
Of course there are a few old timers, but as a rule, the people that work for MathWorks,
279
00:31:25,640 --> 00:31:29,390
you know, have known MATLAB for ten years and I've known it for 35 years.
280
00:31:29,390 --> 00:31:32,930
And it's very strange.
281
00:31:33,590 --> 00:31:49,630
Okay. So let's move on and say a word about sparse matrices.
282
00:32:09,340 --> 00:32:15,100
And maybe we should begin by thinking, why do we want n to be so large?
283
00:32:15,280 --> 00:32:18,760
Where in the world would you ever get a matrix of size 20,000?
284
00:32:19,870 --> 00:32:26,710
You could get it from data of some sort. But traditionally, more often, you get it by discretizing a PDE.
285
00:32:26,740 --> 00:32:33,850
That's where you get big matrices: you discretize something. And the more space dimensions you're discretizing, the bigger the dimensions grow.
286
00:32:34,000 --> 00:32:38,260
So let's do an example. Yeah, a two dimensional example.
287
00:32:38,590 --> 00:32:46,390
Why do we want n to be so large? Well, let's look at a small, small grid in 2D.
288
00:32:47,230 --> 00:32:50,290
So here's a three by three grid.
289
00:32:52,300 --> 00:32:57,070
I will imagine there are nodes like that.
290
00:32:59,410 --> 00:33:09,130
And let's suppose they're numbered. One, two, three, four, five, six, seven, eight, nine.
291
00:33:09,580 --> 00:33:16,810
And they're linked somehow because we're doing some kind of a finite difference approximation to a differential equation.
292
00:33:18,310 --> 00:33:22,450
So if we're approximating the heat equation, there might be one value at each of those nodes.
293
00:33:22,720 --> 00:33:29,980
If we're doing the Navier-Stokes equations, we might have a pressure and a couple of velocities. Or if you're
294
00:33:29,980 --> 00:33:33,280
modelling weather, you might have a dozen different variables at each node.
295
00:33:34,720 --> 00:33:40,750
So already with this trivial coarse grid in only two dimensions, we have a nine by nine matrix.
296
00:33:41,020 --> 00:33:45,850
And the reason for numbering them is when you try to make a matrix, it depends on the numbering.
297
00:33:46,090 --> 00:33:54,100
So the kind of matrix that would be associated with that grid, with those linkages would be a nine by nine matrix.
298
00:33:58,930 --> 00:34:05,610
And it would have what we call a block three by three structure.
299
00:34:07,890 --> 00:34:13,650
So it would be a nine by nine matrix, which naturally fell into
300
00:34:13,650 --> 00:34:19,950
a three by three array of blocks, each of which is a three by three matrix.
301
00:34:20,100 --> 00:34:25,860
So, for example, node one is connected to two and four, and to itself, let's say.
302
00:34:26,130 --> 00:34:33,180
So in the first row there would be a nonzero entry in the second and the fourth columns. Node two is connected to one, three and five.
303
00:34:33,390 --> 00:34:38,610
So the second row would have a nonzero entry in columns one, three and five.
304
00:34:38,610 --> 00:34:42,510
And also itself. Node three is connected to two and six.
305
00:34:42,870 --> 00:34:48,150
So the third row in addition to itself would be connected to two and six.
306
00:34:49,350 --> 00:34:58,200
So you keep going through this and you find that the block three by three matrix can be described as having diagonal blocks,
307
00:34:58,740 --> 00:35:04,500
which are tridiagonal matrices, all with the same structure.
308
00:35:08,860 --> 00:35:12,490
And then off diagonal blocks, which are diagonal matrices.
309
00:35:15,420 --> 00:35:20,160
And these entries are all just x's. All the rest are zero.
310
00:35:21,810 --> 00:35:27,780
So the matrix associated with that connectivity pattern is a nine by nine matrix.
311
00:35:28,560 --> 00:35:34,770
It's block tridiagonal. The blocks on the diagonal are themselves tridiagonal.
312
00:35:35,070 --> 00:35:38,550
And the sub- and superdiagonal blocks are diagonal.
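The block structure described here can be built directly with Kronecker products. Here's a small Python/SciPy sketch of the same 3-by-3 grid; the second-difference stencil is my assumption, since the lecture only specifies the connectivity pattern.

```python
import numpy as np
import scipy.sparse as sp

# A 1-D tridiagonal (second-difference) matrix on 3 points -- an assumed
# stencil; the lecture only discusses which nodes are linked.
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(3, 3))
I = sp.identity(3)

# 2-D operator on the 3x3 grid via Kronecker products: 9x9, block tridiagonal
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()

diag_block = A[0:3, 0:3].toarray()   # tridiagonal
off_block = A[0:3, 3:6].toarray()    # diagonal (here a multiple of -I)
```

The diagonal blocks come out tridiagonal and the off-diagonal blocks diagonal, exactly the pattern drawn on the board.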
313
00:35:39,120 --> 00:35:43,980
Well, you can imagine this kind of thing scales up in all sorts of ways. In two dimensions, as we make
314
00:35:44,700 --> 00:35:50,490
the dimension in each direction bigger, we have a quadratic effect here.
315
00:35:50,790 --> 00:35:54,450
In three dimensions it would be cubic. So you very easily get big grids.
316
00:35:54,690 --> 00:35:58,410
If you have 100 by 100 by 100, that's a million variables right there.
317
00:35:58,740 --> 00:36:04,620
So to get a million by million matrix is nothing in computational science, very easily done.
318
00:36:07,230 --> 00:36:13,530
So we're always trying to deal with that problem, and there are two ways to deal with it.
319
00:36:19,310 --> 00:36:23,000
So how do we beat the O of n cubed bottleneck?
320
00:36:27,440 --> 00:36:33,860
In practice there are two ways, and one of them is to use a better machine or let's say, parallel computing.
321
00:36:34,970 --> 00:36:39,110
And that idea has been around for a very long time.
322
00:36:39,110 --> 00:36:43,729
It's not new, but it's getting more and more powerful with every passing year.
323
00:36:43,730 --> 00:36:48,920
These days we have GPUs which make it easy for many people to use many, many cores.
324
00:36:49,220 --> 00:36:55,310
So this is a very powerful strategy, just getting more important with the years, the other approach.
325
00:36:56,330 --> 00:37:01,090
It is to take advantage of the sparsity. So special algorithms.
326
00:37:05,710 --> 00:37:18,940
Exploiting Sparsity. Because often, though not always, the matrices you want to solve have mostly zeros in them.
327
00:37:18,940 --> 00:37:23,470
Sparse means mostly zero. Enough zeros that you can take advantage of it.
328
00:37:31,040 --> 00:37:36,650
As you can imagine, that's a highly developed field. There are some beautiful algorithms and we'll talk about a few of them.
329
00:37:43,050 --> 00:37:48,330
And before I explore in MATLAB a little bit, I wanted to pass around a couple of books.
330
00:37:48,330 --> 00:37:53,490
So this is a fantastic book by Trefethen and Bau on Numerical Linear Algebra.
331
00:37:55,470 --> 00:38:00,540
And David Bau, my co-author, was a graduate student of mine when I was at Cornell,
332
00:38:00,540 --> 00:38:03,780
and he then went on to Microsoft and then Google and has done very well.
333
00:38:05,040 --> 00:38:08,909
This is a beautiful little book by Toby Driscoll about learning MATLAB.
334
00:38:08,910 --> 00:38:14,850
You don't need a book to learn MATLAB. Everything is online, of course, but it is very nice if you want to look at it.
335
00:38:14,850 --> 00:38:22,280
So this is very appealing. Okay.
336
00:38:22,290 --> 00:38:27,660
I wanted to go back and explore some sparse matrices. Before doing that,
337
00:38:27,990 --> 00:38:31,350
Let me draw your attention to a couple of the handouts in the packet there.
338
00:38:31,620 --> 00:38:40,950
So. The first one is the paper by Gilbert, Moler and Schreiber from 1992.
339
00:38:42,390 --> 00:38:47,120
This is a beautiful paper by three remarkable people.
340
00:38:47,130 --> 00:38:56,070
Moler is the inventor of MATLAB. Gilbert and Schreiber are leading figures in linear algebra and algorithms and graph algorithms and so on.
341
00:38:58,490 --> 00:39:07,340
To me, this was the moment at which MATLAB made the transition from being an educational tool to being a serious tool for scientific computing.
342
00:39:08,210 --> 00:39:13,490
And it's gradually grown in practical importance with the decades. They took
343
00:39:13,790 --> 00:39:18,079
algorithms for sparse matrices that had been invented somewhat academically,
344
00:39:18,080 --> 00:39:23,450
if you like, and then found a way elegantly to put them into MATLAB and wrote this beautiful paper about it.
345
00:39:24,140 --> 00:39:30,680
It has a personal resonance in that, as you can see, it's dedicated to Gene Golub on the occasion of his 60th birthday.
346
00:39:30,830 --> 00:39:37,069
Golub was a great, great figure in numerical linear algebra, and I remember going to that 60th birthday conference,
347
00:39:37,070 --> 00:39:44,360
and now 23 years later, I've just had my 60th birthday conference, so when I look at this, I feel the generations moving along.
348
00:39:45,500 --> 00:39:54,050
Anyway, one of the themes of this course is to remind you that the tools we use have an intellectual background.
349
00:39:54,440 --> 00:40:03,260
And this is an example of how a big part of MATLAB is based on some very major research over the years, carried forth by some very major people.
350
00:40:04,070 --> 00:40:16,160
On the back of that handout is a pointer to UMFPACK, which is the software system that MATLAB calls these days for sparse linear algebra.
351
00:40:16,400 --> 00:40:20,840
It wasn't there when Gilbert and Moler and Schreiber were writing their paper.
352
00:40:21,020 --> 00:40:25,010
But now this is one of the standard fast packages for sparse linear algebra.
353
00:40:25,220 --> 00:40:28,670
It's due to Timothy Davis, it says here, University of Florida.
354
00:40:28,670 --> 00:40:37,819
He's now at Texas A&M, I think. Anyway, UMFPACK is something to know about if you care about the details of algorithms. And in fact,
355
00:40:37,820 --> 00:40:44,450
in MATLAB, instead of backslash, you can type linsolve and you get more direct access to UMFPACK,
356
00:40:44,630 --> 00:40:51,480
if you want to tune the knobs a little bit. There's also a handout of the MATLAB Guide.
357
00:40:51,690 --> 00:40:58,320
This is a book by Nick Higham and Des Higham, a beautiful, thorough introduction to many aspects of MATLAB.
358
00:40:59,520 --> 00:41:07,470
And what I've done here is to copy onto two pages, illegally I guess, the entire chapter on sparse matrices.
359
00:41:07,620 --> 00:41:09,150
So it's actually worth reading this.
360
00:41:09,510 --> 00:41:18,600
I would recommend you find a few minutes and read these eight pages, copied onto two, to get an idea of how MATLAB stores and handles sparse matrices.
361
00:41:31,320 --> 00:41:42,149
Well, let's play around a little bit. Let's look at a sparse matrix in MATLAB.
362
00:41:42,150 --> 00:41:50,790
So I'll say format short and I'll say A equals 1, 2; 0, 4.
363
00:41:53,810 --> 00:41:58,820
Now, if we look at A, it's obvious from the display what it is.
364
00:42:02,810 --> 00:42:09,620
And if I say whos, no surprises there: it's just a two by two matrix, four entries.
365
00:42:09,620 --> 00:42:18,650
So 32 bytes. Now let's take another matrix called B, which is the sparse version of A.
366
00:42:18,860 --> 00:42:23,090
So it's the same matrix mathematically, but stored in a sparse format.
367
00:42:23,330 --> 00:42:28,790
And you can see that there are three lines of data because I only had three non-zero entries.
368
00:42:29,000 --> 00:42:32,180
It's only telling you the non-zero entries.
369
00:42:35,020 --> 00:42:42,390
If I say whos, you can see B actually is more wasteful of storage per entry.
370
00:42:42,400 --> 00:42:49,570
It takes more bytes. But of course, if you have a big matrix that's mostly zeros, it will be a big win to store it in the sparse fashion.
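The same dense-versus-sparse storage comparison can be reproduced outside MATLAB. Here's a minimal Python/SciPy sketch; the byte counts for the sparse side differ from MATLAB's, but the lesson is the same.

```python
import numpy as np
import scipy.sparse as sp

A = np.eye(100)                     # dense identity: 100*100 float64 entries
B = sp.identity(100, format='csr')  # sparse identity: stores only the 100 nonzeros

dense_bytes = A.nbytes              # 10,000 entries * 8 bytes = 80,000
# CSR keeps three arrays: values, column indices, and row pointers
sparse_bytes = B.data.nbytes + B.indices.nbytes + B.indptr.nbytes
```

For a tiny dense matrix the sparse format carries overhead per entry, but for a big, mostly-zero matrix it wins by orders of magnitude.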
371
00:42:51,090 --> 00:42:54,630
So let's do one other thing. Well, let's go up to dimension 100.
372
00:42:54,900 --> 00:43:03,330
Let's say A equals eye of 100, and let's say B equals the sparse identity, speye of 100.
373
00:43:04,860 --> 00:43:09,030
And let's do a whos. So now you see that A has 10,000 entries.
374
00:43:09,240 --> 00:43:16,260
So 80,000 bytes, whereas B has 10,000 entries, but only 100 of them are nonzero.
375
00:43:16,320 --> 00:43:20,220
So for whatever reason, it has only 2408 bytes.
376
00:43:22,330 --> 00:43:28,360
So there is sparse storage, which enables you to hold much bigger matrices in memory.
377
00:43:28,540 --> 00:43:33,940
But of course, also algorithms will take advantage of the zero structure to go faster.
378
00:43:35,400 --> 00:43:38,580
Let's play around a little bit to give you a sense of MATLAB syntax.
379
00:43:38,820 --> 00:43:44,370
So if I say Spy of A, I get a picture of the nonzero entries of a.
380
00:43:45,930 --> 00:43:49,860
Sorry. I'm having trouble with my mouse here. Okay.
381
00:43:50,860 --> 00:43:55,240
If I say spy of B, it's the same because mathematically they're the same matrix.
382
00:43:55,840 --> 00:44:00,010
Notice at the bottom NZ does not stand for a rugby team.
383
00:44:00,010 --> 00:44:11,320
It stands for the number of nonzeros. Now suppose I say B of 40 colon 60, comma 10, 20, 40, equals three.
384
00:44:12,220 --> 00:44:18,400
I've just taken a bunch of entries in the Matrix B and changed their value from 0 to 3.
385
00:44:18,760 --> 00:44:23,800
And of course, the data structure has done whatever is necessary to accommodate that.
386
00:44:24,070 --> 00:44:27,430
If I do a spy plot, you can see where the non zeros have appeared.
387
00:44:27,910 --> 00:44:33,370
Or suppose I say spy of B plus B transpose. B prime means B transpose.
388
00:44:33,640 --> 00:44:42,860
Then we get a symmetric structure. Or suppose I say spy of B plus B transpose squared.
389
00:44:43,520 --> 00:44:49,160
So before pressing return, I can ask you to think about what that might look like,
390
00:44:49,610 --> 00:44:53,510
what will happen to the non-zero structure when I square that matrix?
391
00:44:54,660 --> 00:45:01,070
Well, that's what happens. Now let's solve a system of equations or two.
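What happens to the nonzero structure under squaring can be checked numerically. Here's a Python/SciPy sketch, with a pattern loosely modelled on the B(40:60, [10 20 40]) assignment from a moment ago; the exact indices are illustrative.

```python
import numpy as np
import scipy.sparse as sp

# Pattern loosely modelled on the lecture's B(40:60,[10 20 40]) = 3;
# only the structure matters here, not the values.
B = sp.lil_matrix((100, 100))
B[40:60, [10, 20, 40]] = 3.0
B = B.tocsr()

S = B + B.T       # symmetrize the nonzero pattern
S2 = S @ S        # nonzeros of S^2 mark length-2 paths in S's graph
```

A nonzero in position (i, j) of the square means there is a path of length two from node i to node j in the matrix's graph, which is why the squared spy plot fills in so dramatically.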
392
00:45:01,340 --> 00:45:13,820
So just to give you a sense of how MATLAB sets up some random matrices, let me say A equals sprandsym of 7, 0.25.
393
00:45:14,960 --> 00:45:22,310
Now what does that mean? That was the command sprandsym of 7, 0.25.
394
00:45:22,580 --> 00:45:32,480
It means make a random matrix of seven by seven, sparsely stored and approximately 0.25 of the entries should be non-zero.
395
00:45:33,890 --> 00:45:39,290
So spy of A shows a seven by seven matrix, and about a quarter of the entries are nonzero.
396
00:45:39,560 --> 00:45:45,860
If I say full of A, then you can actually see this seven by seven matrix.
397
00:45:47,190 --> 00:45:53,580
Now let's make a big one. I'll say A equals sprandsym of 10,000.
398
00:45:54,390 --> 00:45:58,920
And only a fifth of a percent of them should be non-zero.
399
00:46:00,450 --> 00:46:04,950
So now if I say Spy of A, it looks dense, doesn't it?
400
00:46:04,950 --> 00:46:08,130
But that's because the dots are too big. It's not really dense.
401
00:46:08,730 --> 00:46:12,120
Most of those entries are zero. We could zoom in by a factor of two.
402
00:46:14,120 --> 00:46:18,380
We zoom in another factor of two. Every time we zoom in, we begin to see the structure.
403
00:46:18,710 --> 00:46:26,900
So here we have a sparse random matrix where 99.8% of the entries are zero.
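A rough Python/SciPy analogue of this sprandsym call: SciPy has no direct sprandsym, so symmetrizing a sparse random matrix is a stand-in; the size and seed here are my choices.

```python
import numpy as np
import scipy.sparse as sp

# Sketch: build a sparse random matrix at roughly the lecture's density
# and symmetrize it, as a stand-in for MATLAB's sprandsym.
n = 1000
rng = np.random.default_rng(0)
R = sp.random(n, n, density=0.002, random_state=rng, format='csr')
A = R + R.T

# The fraction of nonzeros stays near the requested density
frac = A.nnz / (n * n)
```

With a density of 0.002, roughly 99.8% of the entries are zero, which is why the spy plot only reveals the sparsity once you zoom in.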
404
00:46:32,560 --> 00:46:37,540
Let's add a multiple of the identity to that matrix to make it a better behaved matrix.
405
00:46:37,540 --> 00:46:42,250
So I'm going to add ten times the sparse identity of size 10,000,
406
00:46:43,330 --> 00:46:53,050
and I'm going to make a right hand side, b equals ones of 10,000 comma 1, and I'm going to solve x equals A backslash b.
407
00:46:53,260 --> 00:46:58,989
Now earlier we solved a 4000 by 4000 matrix.
408
00:46:58,990 --> 00:47:03,580
Right. And that took 4 seconds. This is a 10,000 by 10,000 matrix.
409
00:47:07,270 --> 00:47:13,350
Sure enough. It might finish eventually, but it's not going to finish quickly.
410
00:47:15,390 --> 00:47:23,550
However we can solve it using a sparse algorithm. When I say backslash, it isn't using an algorithm that exploits this type of sparsity.
411
00:47:23,820 --> 00:47:30,030
So it takes a long time, but I'm going to use an iterative algorithm called preconditioned conjugate gradients.
412
00:47:30,300 --> 00:47:34,680
I'll say x equals pcg of A, comma b, and we'll be playing with this
413
00:47:34,680 --> 00:47:41,129
some more. Preconditioned conjugate gradients, you can see, with its default parameters,
414
00:47:41,130 --> 00:47:44,610
It doesn't succeed, but it will succeed if we.
415
00:47:45,590 --> 00:47:47,419
increase the number of allowed iterations.
416
00:47:47,420 --> 00:47:56,270
So I'll say x equals pcg, preconditioned conjugate gradients, of A, b; I'll give it a tolerance of ten to the minus 12 and I'll allow it to take 200 steps.
417
00:47:57,470 --> 00:48:01,340
What did I do wrong? All right. Lowercase, right?
418
00:48:01,340 --> 00:48:05,800
Yeah. So you can see how quickly that happened.
419
00:48:05,830 --> 00:48:10,569
It didn't take 20 or 50 seconds. It converged in a fraction of a second.
420
00:48:10,570 --> 00:48:15,610
Let's do it again, say less than a second. It converged and presumably got the right answer.
421
00:48:15,640 --> 00:48:19,180
Let's see if I say norm of A times X minus B,
422
00:48:20,020 --> 00:48:28,240
norm means I'm taking the product of the 10,000 by 10,000 matrix times that 10,000-vector and subtracting off another 10,000-vector.
423
00:48:28,420 --> 00:48:35,170
Let's look at the infinity norm. The biggest entry. So the biggest entry in the error is about ten to the minus 12th.
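The whole experiment — a shifted sparse random system solved by conjugate gradients, then a residual check in the infinity norm — can be sketched in Python/SciPy. SciPy's cg plays the role of MATLAB's pcg; default tolerances differ, so this is an analogue, not a reproduction, and the seed is mine.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Assumptions: a symmetrized sparse random matrix stands in for sprandsym,
# and the +10*I shift makes the system comfortably positive definite.
n = 10000
rng = np.random.default_rng(0)
R = sp.random(n, n, density=0.002, random_state=rng, format='csr')
A = R + R.T + 10.0 * sp.identity(n, format='csr')
b = np.ones(n)

# Conjugate gradients on the sparse system; info == 0 signals convergence.
x, info = cg(A, b, maxiter=200)

# Infinity norm of the residual: the biggest entry of A*x - b
residual = np.linalg.norm(A @ x - b, np.inf)
```

Because the shifted matrix is well conditioned, the iteration converges in a handful of steps rather than the many seconds a dense factorization of this size would take.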
424
00:48:38,060 --> 00:48:42,410
If you're curious to learn more. Of course, there are many ways to learn things these days.
425
00:48:42,710 --> 00:48:48,110
One of them, if you're just doing it within MATLAB, is to turn on the more feature,
426
00:48:48,110 --> 00:48:50,240
So it only fills one screen at a time.
427
00:48:50,690 --> 00:49:00,679
And then if I say help sparfun, for sparse functions, you can get a sense of some of the commands that are available for dealing with sparse matrices:
428
00:49:00,680 --> 00:49:05,990
random, random symmetric, diagonal matrices, identity and so on and so on.
429
00:49:07,520 --> 00:49:12,980
Here's another example. If I say A equals gallery of wathen.
430
00:49:13,520 --> 00:49:17,989
Well, Andy Wathen is a faculty member here in the Numerical Analysis Group.
431
00:49:17,990 --> 00:49:21,440
He's Oxford's main expert on numerical linear algebra.
432
00:49:21,440 --> 00:49:26,810
And MATLAB has a matrix named after him. I won't go into the details, but here's what it looks like.
433
00:49:28,320 --> 00:49:33,060
You can see its nice structure there. Let's square it.
434
00:49:35,150 --> 00:49:40,250
Let's quit. So there's a lot of stuff you can find in gallery.
435
00:49:42,520 --> 00:49:49,430
Okay, so we're at the end of this lecture, except that I want to hand out the solutions to the quiz.
436
00:49:50,450 --> 00:49:53,930
I'm guessing most of you knew about five of these acronyms.
437
00:49:53,960 --> 00:49:55,820
Who knows? Here's what they stand for.
438
00:50:05,410 --> 00:50:16,720
And let me just emphasise that the first assignment is due on Tuesday at 9:00 and it's in the box at 160 or 161 across the atrium from here.
439
00:50:16,990 --> 00:50:18,160
Okay. Thank you very much.