1
00:00:00,620 --> 00:00:06,050
Okay. So today we're going to talk about a lot of things rather hastily.
2
00:00:06,500 --> 00:00:15,160
And the first is the question: what is the definition of numerical analysis?
3
00:00:25,840 --> 00:00:31,030
Now one definition is that it's Professor Trefethen's field, or more broadly,
4
00:00:31,030 --> 00:00:34,600
it's the field of the numerical analysis group upstairs, whatever that is.
5
00:00:35,440 --> 00:00:44,469
But of course, it's a serious question, and one reason this has interested me for years is that there's a rather invalid
6
00:00:44,470 --> 00:00:48,130
definition that people consciously or unconsciously adhere to sometimes,
7
00:00:48,310 --> 00:00:51,550
which is that it's the study of rounding errors on a computer.
8
00:00:51,760 --> 00:00:59,350
Now, that's an incredibly boring subject. Well, actually, it's quite interesting, but it's certainly not enough to justify my existence.
9
00:01:00,220 --> 00:01:04,240
So I think the right definition is the study
10
00:01:06,920 --> 00:01:13,340
of algorithms. Now if we stopped there,
11
00:01:14,640 --> 00:01:18,180
that would be more or less computer science, I suppose you might say.
12
00:01:18,240 --> 00:01:24,090
But in particular, it's the study of algorithms for the problems of continuous mathematics.
13
00:01:28,640 --> 00:01:35,480
And I won't spell out precisely what continuous means, but the point is it's real and complex numbers, not just integers.
14
00:01:38,240 --> 00:01:40,820
So for whatever reason, justified or not,
15
00:01:41,660 --> 00:01:49,430
it's a fact that discrete algorithms and continuous algorithms tend to be carried out rather differently, by different people.
16
00:01:49,580 --> 00:01:54,260
It's a shame that that division is so extreme, but it's a fact that it is pretty extreme.
17
00:01:54,260 --> 00:01:57,320
And in computer science departments, broadly speaking,
18
00:01:57,470 --> 00:02:04,640
you find people working on discrete algorithms and numerical analysts are interested in continuous problems.
19
00:02:04,640 --> 00:02:07,790
So solving a system of equations, solving a differential equation.
20
00:02:08,120 --> 00:02:15,380
I believe that's what the field is all about. Now, I have been interested in these, if you like, philosophical issues for a long time.
21
00:02:15,860 --> 00:02:23,180
In the appendix of my book on numerical linear algebra, you'll find an essay on the definition of numerical analysis.
22
00:02:23,510 --> 00:02:32,630
But more recently, one of the handouts today is my essay from the Princeton Companion to Mathematics, published seven or eight years ago.
23
00:02:32,810 --> 00:02:38,389
And this is a compact description of the whole field.
24
00:02:38,390 --> 00:02:42,590
And if you have the time, I think it would be very interesting for you to read this.
25
00:02:42,860 --> 00:02:51,080
This course is, in some sense, samples from this essay, if you like, expanded, of course, mathematically.
26
00:02:51,710 --> 00:02:57,140
Let me draw your attention to a couple of things in this essay. One is the table at the end.
27
00:02:57,170 --> 00:03:04,790
Table one on some algorithmic developments in the history of numerical analysis, where I've listed about 30 greatest hit algorithms.
28
00:03:05,030 --> 00:03:11,330
And I've even gone so far as to list some names of the people who were associated with originating those algorithms.
29
00:03:13,010 --> 00:03:19,610
We're all human. We like names. To have a name attached to something makes it more real for us.
30
00:03:20,000 --> 00:03:23,120
But you can imagine it was a bit controversial to make a list like this.
31
00:03:23,120 --> 00:03:27,950
And of course, I would never claim that the list is exactly right or the names are exactly right.
32
00:03:28,130 --> 00:03:33,800
But it is an interesting, pretty good approximation to some of the things that have happened in this field over the years.
33
00:03:35,300 --> 00:03:38,690
More philosophically, back to the definition of numerical analysis.
34
00:03:39,500 --> 00:03:51,260
The final section of this essay, Section 7, called "The Future", is very much about this definition and its implications.
35
00:03:51,260 --> 00:03:56,930
In particular, one of the themes that has fascinated me is that even problems that can be solved with
36
00:03:56,930 --> 00:04:01,790
a finite number of operations in principle are often not solved that way in practice.
37
00:04:02,090 --> 00:04:06,830
If the number of operations is n cubed and n is a million, nobody's going to do it that way.
38
00:04:07,070 --> 00:04:11,300
You use iterative methods like conjugate gradients, even for finite problems.
39
00:04:11,480 --> 00:04:14,660
That intrigues me. And you'll see that's a theme in this essay.
40
00:04:15,050 --> 00:04:20,290
The final paragraph of Section 7 is predicting the future.
41
00:04:20,290 --> 00:04:24,220
And I think in the seven or eight years since then, some of this has already come to pass.
42
00:04:24,460 --> 00:04:30,370
I think numerical algorithms more and more are embedded inside artificial intelligence, if you like.
43
00:04:30,670 --> 00:04:35,469
And it's fascinating for us to watch how things evolve. In the 20th century,
44
00:04:35,470 --> 00:04:40,780
when you computed something that was sort of deterministic, you could run it again and be sure of getting the same answer.
45
00:04:41,020 --> 00:04:47,470
That's less and less true in the 21st century as more and more things get handled in adaptive, intelligent ways.
46
00:04:50,240 --> 00:04:54,350
Okay. So that's the end of section 1.8 of the course.
47
00:05:00,270 --> 00:05:04,649
Now returning to our particular problem of matrix algorithms.
48
00:05:04,650 --> 00:05:09,090
I want to give you an overview, really in the form of a table,
49
00:05:10,990 --> 00:05:12,610
of matrix iterations.
50
00:05:21,500 --> 00:05:29,989
Now the historical overview, as I've mentioned from various angles, is that the mathematical underpinnings are really from the 1950s.
51
00:05:29,990 --> 00:05:35,510
But this stuff began to be used in the 1970s, and since then it has just grown more and more important.
52
00:05:35,780 --> 00:05:40,100
Iterative methods are used all throughout computational science.
53
00:05:40,310 --> 00:05:43,820
Preconditioners in particular are just amazingly important.
54
00:05:45,020 --> 00:05:56,780
Now for my table, I'd like to emphasise the two biggest problems of iterative linear algebra, and those are eigenvalues and systems of equations.
55
00:05:56,900 --> 00:06:11,240
So it's Ax = b, where b is given and we're trying to find x; or Ax = lambda x, which we haven't talked about much, but that's an eigenvalue problem.
56
00:06:11,450 --> 00:06:18,440
You're given a matrix A and you want to find vectors x and scalars lambda such that that equation holds.
57
00:06:19,670 --> 00:06:29,180
And then there are two big classes of matrices, and those are simply symmetric and non-symmetric.
58
00:06:31,340 --> 00:06:38,420
So either A equals A-transpose, or we don't assume that A equals A-transpose.
59
00:06:40,010 --> 00:06:44,150
Of course, if you pick a matrix at random, whatever that means, it will be non-symmetric.
60
00:06:44,360 --> 00:06:51,410
But if you look at applications, maybe half the time they are symmetric; they might also be positive definite, which is a further condition.
61
00:06:53,570 --> 00:07:01,190
So let me now draw a table, which is merely an attempt to tell you some of the names of the main algorithms.
62
00:07:01,610 --> 00:07:04,880
A number of which we've talked about, and we're about to talk about one more.
63
00:07:05,210 --> 00:07:08,690
So let's see. It's a two by two block table.
64
00:07:15,370 --> 00:07:22,470
And the first column is Ax = b; the second column is Ax = lambda x.
65
00:07:25,600 --> 00:07:30,040
The first row of the block matrix is symmetric.
66
00:07:32,810 --> 00:07:44,990
And the second row is non-symmetric. So in each of these areas there are several different ideas around, each of which may be best for different problems.
67
00:07:45,650 --> 00:07:47,840
Conjugate gradients is where we started.
68
00:07:48,440 --> 00:07:55,999
It's really the mother of all iterative algorithms, and let's remember that it requires the matrix to be symmetric
69
00:07:56,000 --> 00:07:58,969
positive definite. Actually, it's kind of interesting.
70
00:07:58,970 --> 00:08:04,549
If you run conjugate gradients on a symmetric indefinite problem, often it'll do pretty well, at least for a while.
71
00:08:04,550 --> 00:08:09,410
And then it may fail. If you run it on a non-symmetric matrix, it'll blow up pretty much immediately.
72
00:08:10,830 --> 00:08:15,540
Then there are other iterative methods that don't require positive definiteness.
73
00:08:15,540 --> 00:08:20,400
And the two famous names are MINRES and SYMMLQ.
74
00:08:23,760 --> 00:08:30,060
So if you have a symmetric indefinite problem, you should learn more about those and related methods.
75
00:08:31,680 --> 00:08:38,340
Now in the eigenvalue problem, absolutely analogous to CG is the method called the Lanczos method.
76
00:08:40,630 --> 00:08:44,580
Of course, "Lanczos", when I say it that way, is not properly pronounced.
77
00:08:44,620 --> 00:08:50,140
It's a Hungarian name, but by convention most people in English say it that way.
78
00:08:52,120 --> 00:08:56,259
But of course, there are other related things, and a crucial variation on that theme,
79
00:08:56,260 --> 00:09:02,380
which in practice makes all the difference, is what's called implicitly restarted Lanczos.
80
00:09:06,740 --> 00:09:13,960
And it's interesting that mathematically this came along in the fifties, computationally in the seventies, but this was more like the nineties.
81
00:09:13,970 --> 00:09:23,600
That's when a huge advance was made. And then the other big game in town is JD, if you like, which stands for Jacobi-Davidson.
82
00:09:26,900 --> 00:09:30,770
Now, we're not going to talk about most of these except for the simplest version of Lanczos.
83
00:09:31,100 --> 00:09:34,130
But these are big, well-developed technologies.
84
00:09:34,670 --> 00:09:40,730
And let me just emphasise that it's these two from which everything starts.
85
00:09:43,990 --> 00:09:46,390
Now, non-symmetric problems we haven't talked about.
86
00:09:47,920 --> 00:09:55,630
But just to give you some names, there are really three different mathematical ideas in the non-symmetric linear systems business.
87
00:09:55,870 --> 00:10:00,070
One of them goes by names like CGN and LSQR.
88
00:10:00,280 --> 00:10:03,430
But I can tell you immediately what these methods are.
89
00:10:03,760 --> 00:10:12,170
These methods start from the observation that if you're trying to solve Ax = b,
90
00:10:12,380 --> 00:10:21,260
well, then you might consider A-transpose A x = A-transpose b. If you can solve that, of course, it's the same solution.
91
00:10:21,470 --> 00:10:25,530
Now, this in some respects is a worse problem: the condition number of A-transpose A is worse, the square of that of A.
92
00:10:25,760 --> 00:10:28,610
On the other hand, it's a symmetric positive definite problem.
93
00:10:28,910 --> 00:10:38,120
So this first class of methods is really all about converting this to that and then using conjugate gradients.
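Just to make that concrete (a minimal sketch of my own, not code from the lecture; MATLAB's pcg accepts a function handle, so A'A never has to be formed):

    % Solve Ax = b via the normal equations A'Ax = A'b, using CG.
    n = 500;
    A = sprandn(n, n, 0.01) + 10*speye(n);   % an assumed nonsymmetric test matrix
    b = randn(n, 1);
    applyAtA = @(x) A' * (A * x);            % applies A'A without forming it
    x = pcg(applyAtA, A' * b, 1e-10, n);     % CG on the SPD normal equations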
94
00:10:40,400 --> 00:10:44,000
But then other methods don't do that conversion, and there are two big classes.
95
00:10:44,330 --> 00:10:52,280
Well, one of them is called GMRES, which stands for generalised minimal residual; MINRES stands for minimal residual.
96
00:10:52,310 --> 00:10:57,620
So I haven't told you what that is, but it's an iterative algorithm invented in the symmetric case,
97
00:10:57,890 --> 00:11:05,030
which was then generalised to the non-symmetric case. The other class of methods has a lot of names.
98
00:11:05,030 --> 00:11:13,219
It's really alphabet soup in this business: QMR, and BiCG, which is biconjugate gradients, and CGS,
99
00:11:13,220 --> 00:11:24,380
which stands for conjugate gradients squared, and then BiCGSTAB, which stands for biconjugate gradients stabilised, and so on.
100
00:11:26,840 --> 00:11:30,860
When you see all these names, it looks a little untidy intellectually,
101
00:11:30,860 --> 00:11:34,970
but the fact is these methods are spectacularly important in practical problems.
102
00:11:35,270 --> 00:11:42,170
BiCGSTAB is a remarkable one. Have I got this right? In the 1990s,
103
00:11:44,580 --> 00:11:49,620
the mathematical paper that got the most citations of all was the paper that introduced
104
00:11:49,620 --> 00:11:54,240
BiCGSTAB; much more important than Fermat's Last Theorem and stuff like that.
105
00:11:58,610 --> 00:12:05,960
Then of course there are analogues for non-symmetric eigenvalue problems, which is a subject I've been very interested in over the years.
106
00:12:06,140 --> 00:12:12,440
And the basic 1950s mathematics was introduced by somebody called Arnoldi.
107
00:12:13,970 --> 00:12:25,880
And then in the nineties there's the implicitly restarted version, implicitly restarted Arnoldi, which made it more and more practical.
108
00:12:27,560 --> 00:12:33,380
And then there's also a Jacobi-Davidson set of algorithms in the non-symmetric case.
109
00:12:34,970 --> 00:12:39,379
Now of course, I'm not really telling you very much about these things, but I want you to be aware of them.
110
00:12:39,380 --> 00:12:42,650
Let me give you a few pointers to where you could learn more.
111
00:12:43,100 --> 00:12:50,720
So if you wanted to learn more about the X equals B problem, one of the books on our reading list is by Yousef Saad.
112
00:12:50,840 --> 00:12:58,190
His first book, if you like, and another one of the books on our reading list is the book called Templates.
113
00:13:01,330 --> 00:13:06,100
And then similarly for the eigenvalue problem, Saad's other book.
114
00:13:08,040 --> 00:13:16,700
And the other Templates book. So those are some remarkable places to look.
115
00:13:17,030 --> 00:13:20,690
Also references by van der Vorst and many other places.
116
00:13:23,870 --> 00:13:32,240
Now in MATLAB, there are lots of codes that do lots of these things, and I'll list a few that I happen to have jotted down.
117
00:13:32,540 --> 00:13:40,520
You know about pcg, right? We've used that; preconditioned conjugate gradients is the name of the MATLAB code.
118
00:13:40,910 --> 00:13:47,450
There's also a code called minres. There's also a code called symmlq.
119
00:13:51,120 --> 00:13:55,470
What else have we got in this business? There's a code called lsqr.
120
00:13:58,160 --> 00:14:05,250
There's a code called gmres. There's a code called qmr.
121
00:14:05,970 --> 00:14:11,940
There's a code called bicg. There are codes cgs and bicgstab.
122
00:14:11,940 --> 00:14:15,900
They all have codes with exactly those names.
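To give the flavour (my sketch, with an assumed test matrix; the solver names are the genuine MATLAB functions, and they share one calling convention):

    A = gallery('poisson', 50);        % sparse 2D Laplacian, 2500 x 2500, SPD
    b = ones(size(A, 1), 1);
    x1 = pcg(A, b, 1e-8, 300);         % wants symmetric positive definite
    x2 = minres(A, b, 1e-8, 300);      % symmetric, possibly indefinite
    x3 = gmres(A, b, [], 1e-8, 300);   % no symmetry assumed ([] means no restart)
    x4 = bicgstab(A, b, 1e-8, 300);    % no symmetry assumed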
123
00:14:17,910 --> 00:14:21,780
If you want to do Arnoldi or Lanczos, both of them
124
00:14:24,510 --> 00:14:32,930
are done by the code called eigs. And just to make sure you're with me on eigs:
125
00:14:33,870 --> 00:14:38,459
Traditionally in MATLAB, there's a command eig which finds all the eigenvalues of a matrix.
126
00:14:38,460 --> 00:14:42,120
So, an n-by-n matrix, n eigenvalues; that's an n cubed calculation.
127
00:14:42,300 --> 00:14:46,560
It's great if n is less than 1000; it's impossible if n is 100,000.
128
00:14:46,950 --> 00:14:53,880
eigs is an iterative code which aims to get some of the eigenvalues of a matrix, and we're going to do that in a moment.
129
00:14:54,720 --> 00:15:00,360
By default, it will get you the six biggest eigenvalues, but of course you can specify that you want something else.
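For instance (my sketch; delsq and numgrid are built-in MATLAB utilities I'm using just to get a large sparse symmetric matrix):

    A = delsq(numgrid('S', 100));   % sparse symmetric Laplacian, 9604 x 9604
    d = eigs(A);                    % six eigenvalues of largest magnitude (the default)
    d12 = eigs(A, 12);              % or ask for twelve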
130
00:15:00,600 --> 00:15:08,669
And eigs, as I circled, is Lanczos or Arnoldi, but it's really implicitly restarted. So eigs is based on this
131
00:15:08,670 --> 00:15:17,340
implicitly restarted technology, and in particular it all came out of a Fortran package which is called ARPACK.
132
00:15:19,970 --> 00:15:27,590
So when the implicitly restarted stuff was being done at Rice University 20 years ago by Danny Sorensen and Richard Lehoucq and others,
133
00:15:28,040 --> 00:15:32,840
they made a Fortran package, ARPACK, and eventually MATLAB incorporated that.
134
00:15:32,960 --> 00:15:39,920
It's just amazing. And on your assignment problem you can use eigs, and it'll just work.
135
00:15:45,820 --> 00:15:49,510
I think that's all I'm going to say in this overview. Are there any questions?
136
00:15:52,110 --> 00:15:55,890
Okay. So what I'm going to do now is look at one of these.
137
00:15:56,090 --> 00:16:02,100
The big one that we haven't touched yet is Lanczos, which is mathematically very close to conjugate gradients,
138
00:16:02,100 --> 00:16:09,240
but for eigenvalues instead of systems of equations. So let's talk about Lanczos.
139
00:16:13,320 --> 00:16:19,590
Lanczos was a remarkable man. He was, I think, a Hungarian Jew who was born Löwy and changed his name.
140
00:16:19,890 --> 00:16:25,970
And if I remember right, he went to the U.S. but he had been a communist at some point.
141
00:16:25,980 --> 00:16:30,510
So this was in the early fifties. It wasn't a good time to be a former communist in the US.
142
00:16:30,870 --> 00:16:37,280
So I guess he ended up in Ireland after he left the US. So, the Lanczos iteration:
143
00:16:47,200 --> 00:16:53,409
1950. I'm a little unclear about how much Lanczos and Hestenes and Stiefel talked in those early years.
144
00:16:53,410 --> 00:16:57,550
They were doing such close things. Now, in particular,
145
00:16:57,820 --> 00:17:04,050
let me remind you that in the case of conjugate gradients we had the so-called rediscovery
146
00:17:05,570 --> 00:17:09,080
in 1970 by John Reid, or was it '71?
147
00:17:09,080 --> 00:17:19,790
I forget. He noticed that this 20-year-old algorithm was more useful in practice than people realised if you regarded it as iterative.
148
00:17:20,390 --> 00:17:28,640
Now Lanczos, which is the eigenvalue analogue, had just the same history, with a rediscovery by somebody else at the same time.
149
00:17:30,020 --> 00:17:34,660
And this was Chris Paige. Very strange history.
150
00:17:36,420 --> 00:17:41,910
I think maybe even the same year, plus or minus one, happening within a few miles of each other.
151
00:17:41,920 --> 00:17:49,229
This was in Abingdon; this was in London. So Hestenes and Stiefel and Lanczos in the early fifties probably didn't communicate enough.
152
00:17:49,230 --> 00:17:55,680
And then these guys 20 years later sort of rediscovered the algorithms in practice, again independently.
153
00:17:56,760 --> 00:17:59,520
Eventually, of course, people saw all sorts of connections.
154
00:18:01,260 --> 00:18:08,580
So I'm going to give you an idea of what Lanczos is all about and write down the formulas, and then we're going to play with it.
155
00:18:10,020 --> 00:18:15,040
So we're given a symmetric matrix. Real symmetric matrix.
156
00:18:17,970 --> 00:18:19,320
Now for the eigenvalue problem,
157
00:18:19,320 --> 00:18:25,320
it doesn't matter whether it's positive definite. When you're inverting a matrix, the location of zero matters enormously.
158
00:18:25,560 --> 00:18:30,540
But if you just want the eigenvalues, it's all translation-invariant. So we don't care whether it's positive
159
00:18:30,540 --> 00:18:34,320
definite. So, what Lanczos does:
160
00:18:36,890 --> 00:18:42,230
in principle, the Lanczos algorithm constructs a tridiagonalization
161
00:18:43,410 --> 00:18:50,070
of A. And let me spell out what I mean by that.
162
00:18:55,460 --> 00:18:59,450
It finds a tridiagonal matrix that A is similar to.
163
00:18:59,690 --> 00:19:07,700
So T equals an orthogonal matrix times A times the inverse of that orthogonal matrix,
164
00:19:07,700 --> 00:19:16,610
that is, T = Q A Q-inverse. Then these two matrices, A and T, are similar, so they have the same eigenvalues.
165
00:19:19,450 --> 00:19:22,660
So this is what we call a similarity transformation.
166
00:19:26,700 --> 00:19:33,030
So A and T have the same eigenvalues.
167
00:19:37,570 --> 00:19:48,580
And T is tridiagonal. So the idea behind Lanczos is to take your matrix A and turn it into a much simpler matrix, which we can write like this:
168
00:19:48,610 --> 00:19:55,870
alpha_1, beta_1; then beta_1, alpha_2, beta_2; then beta_2, alpha_3, and so on.
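Written out (this display is mine, matching the alphas and betas just named), T looks like:

        [ alpha_1  beta_1                        ]
        [ beta_1   alpha_2  beta_2               ]
    T = [          beta_2   alpha_3  beta_3      ]
        [                   beta_3   ...   ...   ]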
169
00:20:01,550 --> 00:20:11,660
Everything else is zero. And of course, as we very well know, when a matrix is mostly zero, that opens up all sorts of other possibilities.
170
00:20:11,870 --> 00:20:18,050
If you want to find the eigenvalues of a tri diagonal matrix, there are lots of ways to do that with relatively little work.
171
00:20:18,260 --> 00:20:23,420
In fact, you can do that pretty much in O(n) work, as opposed to n cubed.
172
00:20:23,810 --> 00:20:29,360
So if you can change a dense matrix into a tri diagonal one, that's a spectacular improvement.
173
00:20:33,980 --> 00:20:43,430
Now, when MATLAB computes eig, it tridiagonalises the matrix and then finds the eigenvalues of the tridiagonal.
174
00:20:44,420 --> 00:20:50,070
When we do eigs, this iterative stuff, it's a different method of tridiagonalising.
175
00:20:50,390 --> 00:20:53,480
And here's the amazing thing. You don't go all the way.
176
00:20:53,870 --> 00:21:00,380
Say it's a million-by-million matrix. In principle, we would turn it into a million-by-million tridiagonal matrix.
177
00:21:00,680 --> 00:21:07,249
But that would be n cubed work. You don't go that far. You just make it maybe 100 by 100 or 200 by 200.
178
00:21:07,250 --> 00:21:16,600
You go partway along. Then you look at the eigenvalues of this far-from-complete tridiagonal matrix,
179
00:21:17,350 --> 00:21:23,860
and very often they approximate the extreme eigenvalues of A very accurately.
180
00:21:24,190 --> 00:21:41,450
It's amazing. So that's the key idea. The big idea, which, if you like, became the standard idea after Paige in 1971: don't go all the way,
181
00:21:47,180 --> 00:21:50,840
which would be O(n cubed) operations.
182
00:21:53,800 --> 00:21:58,750
You go some much lesser distance, which in a typical application might be on the order of hundreds.
183
00:21:59,440 --> 00:22:02,829
And then you have a small tri diagonal problem.
184
00:22:02,830 --> 00:22:06,370
You compute its eigenvalues and you see what you've got.
185
00:22:09,200 --> 00:22:14,270
So let's say to be very loose, take maybe hundreds of steps.
186
00:22:16,500 --> 00:22:23,190
Of course, with a 100-by-100 tridiagonal matrix, you'll never get all the eigenvalues of A, but you can get some of them.
187
00:22:24,450 --> 00:22:29,160
As you might imagine, the theory of this is a little involved, but it's also very beautiful.
188
00:22:29,170 --> 00:22:32,610
It's again about approximation, as we saw with conjugate gradients.
189
00:22:32,850 --> 00:22:33,899
With conjugate gradients,
190
00:22:33,900 --> 00:22:40,530
we had a polynomial whose job in life was to be small on the spectrum while being normalised to take the value one at the origin.
191
00:22:40,770 --> 00:22:44,070
Well, essentially the same polynomial comes up here,
192
00:22:44,430 --> 00:22:52,530
and if I were to draw a picture to try to give an idea of how Lanczos finds eigenvalues, let me draw that picture.
193
00:22:56,990 --> 00:23:02,210
Here's the real axis. Now, the location of the origin doesn't really matter.
194
00:23:02,840 --> 00:23:07,280
Nevertheless, I'll put it in because it helps us see this as a plot of a function.
195
00:23:09,140 --> 00:23:16,160
Imagine that your matrix has a million eigenvalues, which are, let's say, located somewhere or other.
196
00:23:16,160 --> 00:23:20,900
And maybe I won't make it definite; they may be located all over the place.
197
00:23:20,910 --> 00:23:31,950
So that's a million eigenvalues. Okay. Now, what the Lanczos iteration does, it turns out, is much like conjugate gradients.
198
00:23:32,190 --> 00:23:36,820
It constructs a polynomial, which is small on that set.
199
00:23:36,840 --> 00:23:41,610
The normalisation is a little bit different, but it's the same business of a small polynomial.
200
00:23:42,240 --> 00:23:48,420
So it magically, in some sense, finds a polynomial that's very small on that set.
201
00:23:49,380 --> 00:23:54,930
And the reason it finds eigenvalues is that, well, let's see: if I tried a parabola,
202
00:23:55,620 --> 00:23:58,680
I couldn't do very well. It would be sort of, you know, some big parabola.
203
00:23:58,950 --> 00:24:01,140
It's normalised by being monic, as it happens.
204
00:24:01,860 --> 00:24:06,870
But as you go to higher and higher degree, you can get smaller, with better and better agreement at the eigenvalues.
205
00:24:07,050 --> 00:24:15,840
And the reason Lanczos finds eigenvalues is that, it turns out, the optimal polynomials tend to go through approximately the extreme ones,
206
00:24:16,620 --> 00:24:24,170
and then they do things like this. So I'm just giving you the intuition.
207
00:24:25,220 --> 00:24:30,709
The monic polynomial that is smallest at these red dots has the remarkable
208
00:24:30,710 --> 00:24:39,470
property that typically it goes right through almost exactly the extreme ones.
209
00:24:41,210 --> 00:24:48,620
So in this picture, Lanczos is doing a good job of finding those three eigenvalues.
210
00:24:49,220 --> 00:24:55,190
It's an amazing connection with polynomials and approximation and linear algebra.
211
00:24:56,120 --> 00:24:59,960
Lanczos probably didn't view it that way, but eventually that became clear.
212
00:25:00,260 --> 00:25:03,580
So conjugate gradients and Lanczos are both about approximation theory.
213
00:25:03,590 --> 00:25:07,340
They both converge at amazing rates for certain problems.
214
00:25:08,910 --> 00:25:16,690
And I'll show you that. I need to give you the formulas before we play with it.
215
00:25:22,200 --> 00:25:28,320
It's a loop, just like conjugate gradients.
216
00:25:38,800 --> 00:25:41,290
So here is the Lanczos iteration.
217
00:25:45,570 --> 00:25:55,680
You start with a scalar beta_0, which is zero, a vector q_0, which is zero, and a vector b, which is arbitrary.
218
00:25:58,360 --> 00:26:06,030
I'm trying to write it in a format that comes close to what we wrote for CG, and then we normalise b and call it q_1.
219
00:26:06,070 --> 00:26:15,220
So we take b and divide by the two-norm of b, the square root of the sum of the squares of the absolute values.
220
00:26:15,910 --> 00:26:19,330
That gives us a normalised vector, and we call that q_1.
221
00:26:21,930 --> 00:26:34,920
And then the loop looks like this: for n = 1, 2, 3, and so on, until we're happy. We set v equal to A times q_n, and we set alpha_n
222
00:26:36,590 --> 00:26:40,670
equal to the inner product of q_n and v.
223
00:26:42,920 --> 00:26:47,270
We update v by saying v equals v minus
224
00:26:48,430 --> 00:26:51,460
beta_(n-1) q_(n-1)
225
00:26:53,350 --> 00:27:03,429
minus alpha_n q_n. So you can see here the flavour of a three-term recurrence relation, which we also had with conjugate gradients.
226
00:27:03,430 --> 00:27:07,180
So these are very closely related to orthogonal polynomials.
227
00:27:07,180 --> 00:27:12,180
Three-term recurrence relations. We compute the norm of v,
228
00:27:14,460 --> 00:27:20,190
call it beta_n, and then we normalise to get the next q.
229
00:27:20,460 --> 00:27:25,770
So q_(n+1) equals v divided by its norm.
230
00:27:29,370 --> 00:27:33,779
So you can see it's an iteration very much like conjugate gradients.
231
00:27:33,780 --> 00:27:40,300
And as with conjugate gradients, the work at each step is basically one matrix vector multiplication.
232
00:27:40,860 --> 00:27:45,390
So if A is dense, that's n squared work at each step.
233
00:27:45,630 --> 00:27:51,150
If A is sparse, it's less than n squared work. And now comes the remarkable step.
234
00:27:53,510 --> 00:28:02,940
Compute the eigenvalues of the n-by-n tridiagonal matrix
235
00:28:08,920 --> 00:28:11,950
of exactly the form that I wrote down.
236
00:28:11,970 --> 00:28:16,049
So let's call it T_n, and I'll just write tridiag.
237
00:28:16,050 --> 00:28:24,870
But I've already written it down, so you'll know what it means: involving the parameters alpha_1 through alpha_n and beta_1 through beta_n.
238
00:28:26,220 --> 00:28:30,270
I guess maybe only beta_(n-1) comes up. Anyway, I forget.
239
00:28:33,650 --> 00:28:39,830
This is a small tridiagonal matrix. You can do that by calling MATLAB's eig if you like, which uses other techniques.
240
00:28:40,100 --> 00:28:44,780
But it's easy because this little n is a small number compared with the dimension of your matrix.
241
00:28:45,080 --> 00:28:53,480
And that's the end. You stop when you're happy. And now let's see what happens if we do that.
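Here is a minimal MATLAB sketch of that loop, my own transcription of the formulas above (the function name lanczos_sketch is mine, not the handout's; no restarting or reorthogonalisation):

    function ritz = lanczos_sketch(A, nmax)
    % Pure Lanczos iteration: A is real symmetric; returns the
    % eigenvalues (Ritz values) of the nmax-by-nmax tridiagonal T.
    N = size(A, 1);
    b = randn(N, 1);                % arbitrary starting vector
    q = b / norm(b);                % q_1 = b / ||b||_2
    qold = zeros(N, 1);             % q_0 = 0
    betaold = 0;                    % beta_0 = 0
    alpha = zeros(nmax, 1);
    beta = zeros(nmax, 1);
    for n = 1:nmax
        v = A * q;                                 % v = A q_n
        alpha(n) = q' * v;                         % alpha_n = (q_n, v)
        v = v - betaold*qold - alpha(n)*q;         % three-term recurrence
        beta(n) = norm(v);                         % beta_n = ||v||_2
        qold = q;  betaold = beta(n);
        q = v / beta(n);                           % q_(n+1) = v / beta_n
    end
    T = diag(alpha) + diag(beta(1:nmax-1), 1) + diag(beta(1:nmax-1), -1);
    ritz = sort(eig(T));            % eigenvalues of the small tridiagonal matrix
    end

Called as lanczos_sketch(A, 20), it does the kind of experiment that follows.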
242
00:28:57,480 --> 00:29:01,680
So I'm going to run the code m6_lanczos.
243
00:29:10,960 --> 00:29:14,350
If you look at the handout, you'll see, just as with conjugate gradients,
244
00:29:14,620 --> 00:29:19,359
what I've written is a very pure version of the Lanczos code, which does nothing but this.
245
00:29:19,360 --> 00:29:27,610
Essentially, of course, real software does much more, and in particular the whole business of implicit restarting I haven't told you about at all.
246
00:29:27,610 --> 00:29:32,360
But it's very important, very powerful. Okay.
247
00:29:32,690 --> 00:29:36,409
So let's play with the Lanczos code.
248
00:29:36,410 --> 00:29:41,270
I'll say A = diag(sparse(1:999)).
249
00:29:44,870 --> 00:29:51,499
So that constructs a 999 dimension matrix whose entries are just one up to 999.
250
00:29:51,500 --> 00:29:58,660
It's represented as a sparse matrix in MATLAB. And let's run Lantos on that.
251
00:30:00,920 --> 00:30:08,300
20 steps and look at the result in its last four entries.
252
00:30:12,600 --> 00:30:17,720
So it has gone through this loop. It's taken 20 steps and outcome these numbers.
253
00:30:17,730 --> 00:30:26,040
And of course, we don't really know anything about those numbers. Now, you and I know the exact eigenvalues of either 1 to 999.
254
00:30:26,490 --> 00:30:32,430
So you and I know that those are not very good approximations to the top four eigenvalues.
255
00:30:33,630 --> 00:30:40,850
So we haven't had anything like Convergence yet. So let's try 40 steps.
256
00:30:44,660 --> 00:30:49,399
Well, there's a bit of a hint of convergence at the top. Eigenvalue should be 999.
257
00:30:49,400 --> 00:30:53,720
So you see we're beginning to get there, but it certainly isn't machine precision yet.
258
00:30:54,560 --> 00:31:04,700
So let's take 60 steps. So now let's see.
259
00:31:04,700 --> 00:31:09,140
We've got 999 and 997 and so on.
260
00:31:10,330 --> 00:31:14,000
It's still not there. So let's take 80 steps.
261
00:31:18,150 --> 00:31:24,660
And now you see it's beginning to converge. So the top one 999, we've got to about six digits.
262
00:31:25,830 --> 00:31:30,430
We could keep going. But I'm going to make the problem more complicated.
263
00:31:30,790 --> 00:31:35,650
So there we've got more than six digits. You can see some kind of convergence is happening.
264
00:31:35,920 --> 00:31:39,670
And we've taken far fewer steps than the dimension of the matrix.
265
00:31:40,240 --> 00:31:44,350
Even though that dimension isn't that great. Now let's do a hard problem.
266
00:31:45,220 --> 00:31:51,760
Let's say A equals a random, sparse, symmetric matrix.
267
00:31:52,240 --> 00:31:56,680
Using MATLAB is built in codes. Spar around a sim.
268
00:31:58,920 --> 00:32:06,420
So it's a 10,000 by 10,000 matrix with 0.1% of its of its entries non-zero.
269
00:32:08,380 --> 00:32:10,700
And just for fun, let's mess it up a little bit.
270
00:32:10,720 --> 00:32:19,750
I'm going to make the (500,500) entry equal to ten, and I'm going to make the (700,700) entry equal to eight.
271
00:32:21,740 --> 00:32:23,899
Now we don't know the eigenvalues of this matrix,
272
00:32:23,900 --> 00:32:29,780
but we know it's a big symmetric matrix and probably those ten and eight are going to appear somewhere approximately.
273
00:32:30,110 --> 00:32:33,140
But there's no way of knowing exactly what will happen.
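As a sketch of the setup just described (sprandsym is the genuine built-in; the rest mirrors what was said):

    A = sprandsym(10000, 0.001);   % random sparse symmetric, about 0.1% nonzero
    A(500, 500) = 10;
    A(700, 700) = 8;
    spy(A)                         % look at the sparsity pattern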
274
00:32:34,370 --> 00:32:37,760
So let's try the same old thing.
275
00:32:37,770 --> 00:32:40,790
First of all, let's take a look at it. You can see it's a
276
00:32:42,400 --> 00:32:45,870
dense matrix by appearance, but not really so dense.
277
00:32:45,880 --> 00:32:50,740
If we zoom, we'll discover as usual that it's not so dense.
278
00:32:54,400 --> 00:32:59,230
Yeah. So it's a sparse matrix. So let's do our Lanczos thing.
279
00:32:59,440 --> 00:33:07,040
Let's say e equals Lanczos of A.
280
00:33:07,050 --> 00:33:11,880
I'll do the same thing. I'll take 20 steps. Sorry, that was 200 steps.
281
00:33:12,090 --> 00:33:15,810
Well, let's skip that and we'll come back to that. Let's take 20 steps.
282
00:33:16,890 --> 00:33:23,870
Let's take 40 steps. So let's ask, do we seem to be converging?
283
00:33:26,780 --> 00:33:32,180
Well, we do. I'm sorry. You can see the 11.717 is looking pretty good.
284
00:33:32,420 --> 00:33:37,070
9.826 is looking pretty good. So to the eye, it does seem that we're converging.
285
00:33:37,190 --> 00:33:42,240
Let's take more steps. Let's take 60 steps.
286
00:33:47,980 --> 00:33:51,400
And now you see the strange effect that Lanczos is subject to.
287
00:33:52,060 --> 00:33:58,360
Actually, there's only one eigenvalue of 11.717, but it tends to give you multiple copies.
288
00:33:58,360 --> 00:34:02,979
They're called ghosts, for reasons interestingly related to the picture I drew.
289
00:34:02,980 --> 00:34:09,610
So you have to be cautious, if you're running a pure Lanczos iteration, to exclude multiple eigenvalues that aren't genuine.
290
00:34:09,910 --> 00:34:14,590
Luckily, the implicit restarting does all that, so real software will not have this problem,
291
00:34:14,590 --> 00:34:20,860
but a pure Lanczos will have this problem if we keep going, if we make it 80 or 200, as we did a moment ago.
292
00:34:22,120 --> 00:34:25,540
So there you see we've got two copies of both of the top eigenvalues.
293
00:34:26,650 --> 00:34:31,030
Let's look at the last ten entries or so. What have we got there?
294
00:34:31,030 --> 00:34:35,259
We've got two copies of the top eigenvalue, two copies of the next one.
295
00:34:35,260 --> 00:34:38,830
And then who knows, let's take 200 steps.
296
00:34:45,430 --> 00:34:52,360
So there we have six copies of the top eigenvalue and four copies of the next eigenvalue.
297
00:34:52,570 --> 00:34:57,880
So you see, a pure Lanczos iteration is great for finding one eigenvalue, but not more than that.
298
00:34:58,090 --> 00:35:04,840
But once you do the restarting, it's just spectacular. And in particular, let me say eigs of A, and let's put a tic-toc around it.
299
00:35:05,170 --> 00:35:09,100
So just to remind you, this is a 10,000 by 10,000 matrix.
300
00:35:09,100 --> 00:35:12,489
That's not all that sparse, and it doesn't have a structured sparsity.
301
00:35:12,490 --> 00:35:25,799
It's got a mess of a sparsity pattern. So in 2.6 seconds on my laptop, it has now found the two eigenvalues we keep seeing,
302
00:35:25,800 --> 00:35:29,640
11 and nine, also some extreme negative ones, and then the next two.
303
00:35:29,640 --> 00:35:36,060
So these are the six extreme eigenvalues. And by the way, we could, if we just want the positive ones, of course there are ways to do that.
304
00:35:36,060 --> 00:35:40,140
I could say eigs(A, 6).
305
00:35:41,670 --> 00:35:46,020
And then I believe you say 'LA', which stands for largest algebraic.
306
00:35:49,720 --> 00:35:53,860
So this will now find the six largest algebraic ones. Or I suppose, if I want more of them,
307
00:35:53,860 --> 00:36:00,660
I could say 12. So this implicitly restarted Lanczos
308
00:36:00,670 --> 00:36:08,200
technology is quite amazing. That's all I'm going to say about it.
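Putting those calls together (a sketch of what was typed; 'LA', largest algebraic, is the genuine eigs option for symmetric problems):

    tic
    e = eigs(A);               % six eigenvalues of largest magnitude (the 2.6-second run)
    ela = eigs(A, 6, 'LA');    % six largest algebraic instead
    ela12 = eigs(A, 12, 'LA'); % twelve of them
    toc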
309
00:36:08,230 --> 00:36:18,060
Any questions? It's a little surprising how late some of this developed historically.
310
00:36:19,530 --> 00:36:25,740
In principle, the seventies could have done this, but it took people a long time as problems kept getting bigger and bigger.
311
00:36:25,980 --> 00:36:30,000
I guess the incentive sort of steadily became stronger over the years.
312
00:36:31,080 --> 00:36:37,050
So what I want to do in the last bit of this lecture is to show you the other handout,
313
00:36:37,080 --> 00:36:41,490
the numerical software tools handout, and we're going to play with that online.
314
00:36:45,570 --> 00:36:54,560
So this is officially section 1.14.
315
00:37:09,230 --> 00:37:23,100
So of course, you all know how to find things on the web,
316
00:37:24,570 --> 00:37:27,780
better than I do, I'm sure. But
317
00:37:28,920 --> 00:37:36,060
maybe you're not so aware of the richness of some of the stuff that's out there related to numerical computation.
318
00:37:36,270 --> 00:37:42,329
And the goal here is to make you aware of this list of things.
319
00:37:42,330 --> 00:37:46,650
That is an interesting starting point for seeing many things that are out there.
320
00:37:46,920 --> 00:37:55,410
So we have about 10 minutes and I'm going to click on 15 or 20 of the things on this sheet of paper to give you a sense of some of what there is.
321
00:37:58,970 --> 00:38:03,560
So we'll go there.
322
00:38:06,010 --> 00:38:08,350
As you can see, I've organised it in various categories.
323
00:38:08,380 --> 00:38:14,650
Now, when I was your age, software libraries were very important; things were collected together and you used the library.
324
00:38:14,860 --> 00:38:21,160
They're much less important than they used to be. I think things are done more à la carte, I guess you'd say.
325
00:38:21,190 --> 00:38:27,550
For example, Netlib was a wonderfully important thing that came along, even before the web, for collecting numerical software.
326
00:38:27,700 --> 00:38:34,509
It's still important, but not as important as it was. If you go there, you'll see an old-fashioned feel to it.
327
00:38:34,510 --> 00:38:41,710
Perhaps, but it's important. It's hosted at Oak Ridge National Laboratory and the University of Tennessee.
328
00:38:45,140 --> 00:38:53,330
I won't mention all of these things. Traditionally, in the early days, a lot of this software would have been in Fortran and then C became important.
329
00:38:53,540 --> 00:38:57,650
Those have certainly been the dominant languages of numerical computing over the years.
330
00:38:58,460 --> 00:39:02,360
Of course, a lot of things are done in MATLAB and Python and other languages too.
331
00:39:04,310 --> 00:39:09,170
Fortran 90 was a big change to a sort of more matrix oriented way of thinking.
332
00:39:09,890 --> 00:39:13,190
And then, of course, you all know probably better than I do about GitHub,
333
00:39:13,670 --> 00:39:19,850
which is the absolutely huge system for storing open source stuff on the web.
334
00:39:19,850 --> 00:39:24,020
And for example, our software project Chebfun is all hosted on GitHub.
335
00:39:24,800 --> 00:39:28,850
I'm curious how many people here have used GitHub? That's impressive.
336
00:39:28,850 --> 00:39:32,270
And I'll bet even last year it would have been half as many. It's just exploding.
337
00:39:32,540 --> 00:39:38,860
I haven't checked on the web lately, but the number of projects hosted there is spectacular and you know, it's only a few years old.
338
00:39:38,870 --> 00:39:41,980
It's just amazing. Okay.
339
00:39:44,040 --> 00:39:50,040
But of course, GitHub is not numerical in general, although there is some numerical stuff there.
340
00:39:50,220 --> 00:39:53,130
If you go to information portals, the next category here:
341
00:39:55,200 --> 00:40:00,300
For people like me in the profession of numerical analysis, there's a weekly digest which means a lot to us.
342
00:40:00,660 --> 00:40:06,810
These other things are also important in their way. Let's click on the one called swMATH, which is relatively new.
343
00:40:07,560 --> 00:40:17,810
This comes out of the people in Germany who run Zentralblatt, and it's a software search engine, which seems to be pretty interesting.
344
00:40:17,820 --> 00:40:25,720
I've only played with it a bit, but it's relatively recent, and I think if you're interested in finding things, you may find them there.
345
00:40:25,740 --> 00:40:31,260
Let's just try something at random that I haven't tried. Let's say, fast Fourier transform.
346
00:40:31,260 --> 00:40:41,209
Let's just see what happens. I don't know what will happen. So it gives you a lot of information about various things out there.
347
00:40:41,210 --> 00:40:45,110
I don't know why Chebfun is there; that was not intentional, but that's mine.
348
00:40:49,230 --> 00:40:52,320
Interestingly, the most famous ones come first. There are 12 pages, I see.
349
00:40:52,530 --> 00:40:55,920
But on each of these projects you could then find much more information.
350
00:40:56,340 --> 00:41:05,310
And it's pretty impressive. By the way, I have some sort of virus in my computer, so occasionally odd things happen, and that was one of them.
351
00:41:08,760 --> 00:41:13,210
I hope we're all right here. Yeah.
352
00:41:13,220 --> 00:41:18,590
See, I get ads for things that I'm not supposed to get, right.
353
00:41:19,280 --> 00:41:25,799
Let's hope we can recover here. So far, all of the ads have been perfectly clean.
354
00:41:25,800 --> 00:41:34,350
But, you know, I expect any morning that the virus will have mutated. If you go to numerical analysis journals,
355
00:41:34,680 --> 00:41:38,730
I've already had a handout on them. But of course, these days everything is available online.
356
00:41:39,000 --> 00:41:44,129
So that's, again, a list of some of the ones that mean a lot to us. General purpose libraries.
357
00:41:44,130 --> 00:41:49,860
I mentioned these were much more important years ago than they are now, but still there's some impressive stuff there.
358
00:41:50,670 --> 00:41:57,090
Wilkinson Prizes. This is a very interesting one. 20 years or so ago, a prize in numerical software was founded.
359
00:41:57,090 --> 00:42:00,870
And I think it's been awarded seven times, and they're all remarkable.
360
00:42:01,020 --> 00:42:06,060
So these seven projects are really very special things: differential-algebraic equations,
361
00:42:06,240 --> 00:42:13,709
automatic differentiation, the Fastest Fourier Transform in the West (FFTW), Triangle mesh generation, deal.II for finite elements,
362
00:42:13,710 --> 00:42:18,810
Ipopt for interior point methods, and most recently one by a guy here in our building,
363
00:42:19,110 --> 00:42:25,830
Patrick Farrell: dolfin-adjoint, which is an automatic system for generating adjoints of finite element models.
364
00:42:26,040 --> 00:42:29,970
They're all interesting and worth looking at if you're in those areas.
365
00:42:31,450 --> 00:42:36,100
Now, of course, sometimes we do extended precision or symbolic things.
366
00:42:36,460 --> 00:42:39,790
Maple and Mathematica are the most famous in that category.
367
00:42:40,570 --> 00:42:48,820
Some people love Pari. MATLAB has its symbolic math toolbox, which is okay, but I think you can do much more in Maple and Mathematica.
368
00:42:50,240 --> 00:42:59,850
Well, lots of things there. Linear algebra has always been the most developed part of scientific computing.
369
00:42:59,860 --> 00:43:01,980
It's extraordinary what is known there.
370
00:43:02,310 --> 00:43:11,969
In the seventies you had these packages EISPACK and LINPACK, which really created the modern numerical software business, and then they mutated into LAPACK,
371
00:43:11,970 --> 00:43:20,130
which has ever since been the standard; ScaLAPACK for parallel computing; ARPACK, which I mentioned, for eigenvalues.
372
00:43:21,450 --> 00:43:27,620
Here are the Templates books. In ordinary differential equations
373
00:43:27,620 --> 00:43:32,479
I don't have much to mention, although Chebfun is very nice. In PDEs,
374
00:43:32,480 --> 00:43:38,930
that's a huge field of course, and different packages tend to be good at different things.
375
00:43:39,110 --> 00:43:48,020
So one that's very popular for conservation laws is Clawpack, or for geophysical conservation laws GeoClaw, and so on.
376
00:43:48,170 --> 00:43:59,240
So that's a nice example of an open-source system on GitHub. PLTMG is an ancient but still evolving educational tool for finite elements.
377
00:44:00,320 --> 00:44:04,430
What else? COMSOL Multiphysics is commercial, and though in principle we would all be using it,
378
00:44:04,460 --> 00:44:08,930
I don't know many people who seem to use it. Who uses COMSOL? That's interesting.
379
00:44:08,940 --> 00:44:13,620
It's a big company that does a lot of good stuff, but not in our world, I guess.
380
00:44:13,830 --> 00:44:20,670
The MATLAB PDE Toolbox hasn't been touched in 15 years, so it's a piece of ancient code that some people still like because it's available,
381
00:44:20,670 --> 00:44:25,800
if they've got MATLAB. Going on to optimisation, we're going to be talking about that more.
382
00:44:26,160 --> 00:44:32,580
So I won't linger on this, but that's a very important, well-developed field with wonderful online interfaces.
383
00:44:32,580 --> 00:44:39,690
So, for example, the NEOS server in the Midwest in the US is a very important resource for optimisation.
384
00:44:41,590 --> 00:44:42,909
Other numerical topics?
385
00:44:42,910 --> 00:44:49,479
Well, I've certainly mentioned the Top500 supercomputer site, which is interesting, and the Digital Library of Mathematical Functions.
386
00:44:49,480 --> 00:44:53,230
So every few years the US government shuts down or nearly shuts down.
387
00:44:53,230 --> 00:44:56,470
It's about to happen again. And a couple of years ago it did shut down.
388
00:44:56,830 --> 00:45:02,620
And you might think if the US government shuts down, how does that affect us here in the Numerical Analysis Group?
389
00:45:02,650 --> 00:45:09,820
Well, the answer was two things. One is that one of us had planned a holiday in the Grand Canyon and had to go to Utah instead.
390
00:45:10,510 --> 00:45:12,480
The other is and this was the killer,
391
00:45:12,490 --> 00:45:19,150
the online digital library of mathematical functions was no longer accessible for eight or ten days, and that really was annoying.
392
00:45:19,510 --> 00:45:22,720
So the US government does do something useful and there it is.
393
00:45:28,660 --> 00:45:34,330
In CFD, traditionally, a lot of the best stuff is commercial and you have to pay big money for it.
394
00:45:34,690 --> 00:45:40,630
I guess Fluent, for example, is in that category, but there is some very interesting open-source stuff.
395
00:45:40,840 --> 00:45:45,190
FEniCS is a very well developed one, and then lately one hears more and more about OpenFOAM.
396
00:45:45,340 --> 00:45:48,490
Who's used FEniCS? Who's used OpenFOAM?
397
00:45:49,370 --> 00:45:55,130
Interesting. Okay. The dolfin-adjoint project is in the FEniCS setting.
398
00:45:56,270 --> 00:46:00,229
Statistics is, of course, a huge, huge, important subject, which tends,
399
00:46:00,230 --> 00:46:04,130
for whatever reason, to be a little separate from the rest of numerical computing.
400
00:46:04,310 --> 00:46:07,219
But Oxford is very much involved in this through R in particular.
401
00:46:07,220 --> 00:46:12,950
Oxford is one of the big R-developing places; R is an open-source statistics package.
402
00:46:13,130 --> 00:46:16,520
Most of the ones listed there are expensive, not open source.
403
00:46:16,670 --> 00:46:21,230
So, who's used R? Oh, that's impressive. And who's in statistics?
404
00:46:21,620 --> 00:46:24,949
Yeah, I thought so. So many of you have used R, even though you're not in statistics.
405
00:46:24,950 --> 00:46:30,080
That's good. Certain things can be done on the web, I think.
406
00:46:32,120 --> 00:46:38,299
Predicting the future is always difficult. We now live in an era where everything is stored on the web, the cloud.
407
00:46:38,300 --> 00:46:41,540
You know, nobody saw that coming. But suddenly everything we do is stored on the web.
408
00:46:42,140 --> 00:46:46,010
Whereas 15 years ago, people predicted a lot of computing over the web.
409
00:46:46,010 --> 00:46:52,340
And of course, that is also there, obviously. But somehow it's the storage that has more changed our lives than the computing.
410
00:46:52,580 --> 00:46:57,250
Especially at the high end numerical level. But there are some interesting things.
411
00:46:57,260 --> 00:47:04,280
Of course, Wolfram Alpha is one that we all know. Just to mention some others, more mathematical.
412
00:47:04,490 --> 00:47:10,850
If you have a sequence and you want to find what that sequence is, go to the Online Encyclopedia of Integer Sequences.
413
00:47:11,360 --> 00:47:18,439
If you have a number and you want to know what is that number, you know, 3.141592653.
414
00:47:18,440 --> 00:47:22,700
What is that number? Then try the Inverse Symbolic Calculator.
415
00:47:29,820 --> 00:47:35,400
I'm going to talk about Chebfun at some other point. If you're interested in Schwarz-Christoffel mapping, there's a beautiful package for that.
416
00:47:35,910 --> 00:47:38,879
High performance computing is, of course, rapidly evolving.
417
00:47:38,880 --> 00:47:45,570
And I see here I don't even have anything mentioning GPUs, but of course GPUs are a very important part of that these days.
418
00:47:45,990 --> 00:47:51,000
Traditionally, there are the two paradigms of high performance computing, which are shared memory and distributed memory,
419
00:47:51,000 --> 00:47:57,030
and there's OpenMP and MPI for those. Graphics and visualisation I don't know much about;
420
00:47:57,030 --> 00:48:00,150
there are a few commercial things, some of which have then gone open source.
421
00:48:00,840 --> 00:48:06,300
Many people end up using MATLAB in practice, but of course for high end stuff, there are other packages out there,
422
00:48:06,480 --> 00:48:10,670
so please take a few moments and play around and click on some of these links.
423
00:48:11,040 --> 00:48:12,210
I guess that's it for today.