1
00:00:00,540 --> 00:00:04,470
I have handouts. And as you can see, one of them is a questionnaire. I hope you'll fill that out.
2
00:00:04,830 --> 00:00:11,760
And in particular, we're curious to know which bits of software that have appeared in this course you might have used in your own work.
3
00:00:12,420 --> 00:00:18,510
There's also assignment number three, which is due 11 days from today, on Tuesday, in lecture.
4
00:00:18,870 --> 00:00:24,750
And it's basically MATLAB, though you'll see that on the third question, the biggest one,
5
00:00:25,080 --> 00:00:30,240
you're very welcome to use Chebfun if you prefer, but you certainly aren't required to do that.
6
00:00:32,010 --> 00:00:37,020
So today's a pretty amazing day in the news because of the gravitational waves.
7
00:00:37,470 --> 00:00:40,740
I couldn't resist looking at the paper, and having looked at the paper,
8
00:00:40,740 --> 00:00:45,690
I couldn't resist making copies of it for you. It's truly amazing.
9
00:00:45,690 --> 00:00:54,540
I'm sure you've all been hearing about this on the news. So one thing I note is that the actual paper is much more exciting than the news coverage.
10
00:00:54,840 --> 00:01:00,749
In the news coverage, you see things like, you know, stills from The Big Bang Theory and stuff, which is very cute.
11
00:01:00,750 --> 00:01:03,870
But to actually look at the paper is extraordinary.
12
00:01:04,410 --> 00:01:10,410
So let's just glance at that for a moment. This is a historic occasion, the discovery of gravitational waves.
13
00:01:11,490 --> 00:01:15,530
It's a beautifully written paper. You should really sit down and read this.
14
00:01:15,540 --> 00:01:23,850
It's fantastic. This will probably turn out to be the most sensational scientific discovery of your time in graduate school.
15
00:01:24,180 --> 00:01:27,930
Some of you are physicists. Some of you aren't. But this belongs to everybody.
16
00:01:27,930 --> 00:01:34,350
This is huge. It involves certainly all of physics and all of mathematics and all of computational mathematics.
17
00:01:34,650 --> 00:01:37,770
It's just a great achievement of international science.
18
00:01:38,430 --> 00:01:43,440
If you just leaf through, I haven't yet read it all carefully.
19
00:01:43,440 --> 00:01:49,410
But you can see, first of all, it says so much about how science, big science is done these days.
20
00:01:49,680 --> 00:01:53,010
There are hundreds of authors and even hundreds of institutions.
21
00:01:54,090 --> 00:02:01,080
It's all part of the LIGO collaboration, which is international, although I guess the two main sites are in the USA.
22
00:02:02,010 --> 00:02:05,639
The beginning just describes perfectly the history of this problem.
23
00:02:05,640 --> 00:02:10,330
And then if you turn to the second page, you get these sensational plots.
24
00:02:10,650 --> 00:02:14,850
So well, actually, before you turn to the second page, go back to the abstract.
25
00:02:14,850 --> 00:02:24,420
with its sensational beginning: on September 14th, 2015, at 09:50:45 UTC,
26
00:02:25,320 --> 00:02:31,020
The two detectors simultaneously detected pretty much the same signal.
27
00:02:31,260 --> 00:02:37,200
So a billion years ago, two black holes merged in the course of a few seconds.
28
00:02:37,440 --> 00:02:47,100
And that merger was energetic enough to shake the Einsteinian gravitational space,
29
00:02:47,490 --> 00:02:52,110
creating waves big enough to be detected a billion years later here on Earth.
30
00:02:52,320 --> 00:02:58,050
So they travel for a billion years at the speed of light, and then they go right through the United States and these detectors.
31
00:02:58,320 --> 00:03:07,560
So in these sensational plots, you see a curve from Hanford, Washington, and another curve from Livingston, Louisiana, and they match beautifully.
32
00:03:08,670 --> 00:03:11,639
Incredible. I don't know any of the story behind this,
33
00:03:11,640 --> 00:03:21,570
but I can sort of imagine the excitement in these two labs when they discovered they had each detected the same signal and then the two curves below.
34
00:03:22,560 --> 00:03:24,000
I don't know any of the details,
35
00:03:24,000 --> 00:03:30,510
but clearly the relativists have numerical models that predict what sort of a signal colliding black holes should send out.
36
00:03:30,840 --> 00:03:36,510
So these predictions match the experiments magnificently.
37
00:03:37,470 --> 00:03:43,920
It's quite incredible. I wouldn't say most of the credit here should go to numerical computation.
38
00:03:43,920 --> 00:03:46,920
There's much more going on with instrumentation and real physics.
39
00:03:47,130 --> 00:03:55,290
But obviously a part of the problem is numerics. And I don't know any of the details of what sort of equations they solve to get these amazing curves.
40
00:03:56,460 --> 00:04:00,690
But it is clear from reading the paper that once they get the signal, they know what they're looking for.
41
00:04:00,870 --> 00:04:09,299
And they were quickly able to look at the parameters that generated such a curve and back out from that estimates of the masses of the black holes,
42
00:04:09,300 --> 00:04:14,040
for example. So it's truly an extraordinary bit of science.
43
00:04:15,870 --> 00:04:20,580
And let me mention one other aspect of it that relates to this course and to what we're doing right now.
44
00:04:20,850 --> 00:04:23,460
Of course, we're talking about partial differential equations.
45
00:04:25,140 --> 00:04:34,860
I mentioned that the first great example of a scientific advance that really only made sense in the context of a PDE was Maxwell's equations.
46
00:04:35,100 --> 00:04:42,030
So this is going back 150 years. Of course, light waves and radio waves already were there.
47
00:04:42,540 --> 00:04:44,250
You didn't have to discover light waves.
48
00:04:44,700 --> 00:04:54,630
But Maxwell, as the crowning achievement of his equations, was able to show that the equations had solutions in the form of waves.
49
00:04:55,530 --> 00:04:59,850
And of course, we know, and he knew, that those are light and radio
50
00:04:59,950 --> 00:05:08,830
waves. And very quickly, all those pieces came together. Well, this is a much more complicated, much longer story involving Einstein.
51
00:05:09,010 --> 00:05:16,750
So, you know, Einstein did special relativity in 1905, and that doesn't particularly involve these deformations of the universe.
52
00:05:17,020 --> 00:05:21,550
But then general relativity involving gravity came along a decade later.
53
00:05:21,790 --> 00:05:30,970
And in 1916, he realised that the solutions of his PDEs should include waves as possibilities.
54
00:05:31,310 --> 00:05:35,770
Now his PDEs are infinitely more complicated than Maxwell's.
55
00:05:35,980 --> 00:05:39,070
They're nonlinear, and there are ten of them instead of two or four.
56
00:05:39,670 --> 00:05:47,650
But there they are, and they have these wave solutions. And he predicted the existence of these solutions 100 years ago.
57
00:05:48,490 --> 00:05:57,050
In the news, you'll see happy stories about what a great man Einstein is and what a great man Hawking is.
58
00:05:57,320 --> 00:06:07,190
But I think the even deeper story is how extraordinary it is that physical and mathematical reasoning can lead to this kind of a prediction.
59
00:06:07,550 --> 00:06:14,610
The equations suggested certain things, and a hundred years later, we discover those things are true.
60
00:06:14,630 --> 00:06:21,530
It's just extraordinary that mathematics can have such magical ability to predict how things work.
61
00:06:22,310 --> 00:06:29,570
So it's a very exciting day for all of us, and the rest of the lecture will be infinitely less exciting than what has just happened.
62
00:06:30,530 --> 00:06:33,770
I'm curious. Raise your hand. First of all, if you're in physics, raise your hand.
63
00:06:34,730 --> 00:06:40,580
Wow, that's impressive. And second of all, if you have any particular connection to this kind of physics, raise your hand.
64
00:06:41,480 --> 00:06:46,190
Okay. Well, of course, this event is going to affect people's career choices.
65
00:06:47,900 --> 00:06:52,370
That's complicated. I view with horror the thought of being on a team this big.
66
00:06:52,650 --> 00:06:56,820
What's it like for an individual to be, you know, author 146 on this paper?
67
00:06:56,840 --> 00:07:06,170
I don't know, but it's quite something. Okay.
68
00:07:06,230 --> 00:07:13,370
So you'll note I included the whole paper, including the authors and the institutions, because somehow that's part of the drama.
69
00:07:13,400 --> 00:07:20,240
Now, by contrast, we're talking about the solution of relatively easy equations.
70
00:07:21,080 --> 00:07:28,160
And today the first thing I want to talk about is to say a word about what you do when your grid is not a uniform grid.
71
00:07:29,030 --> 00:07:36,570
So let's talk about finite differencing on general grids.
72
00:07:49,370 --> 00:07:53,090
Now by general, I mean it's not just equally spaced points.
73
00:07:53,600 --> 00:07:58,100
There are all sorts of reasons one might wish to have more complicated grid distribution.
74
00:07:58,370 --> 00:08:02,509
One is that the geometry of the problem you're working with might be complicated.
75
00:08:02,510 --> 00:08:08,419
You might want to do special things near boundaries. Another is that the solutions you're working with might have singularities or
76
00:08:08,420 --> 00:08:12,680
something so that you want to adaptively refine the grids in certain areas.
77
00:08:12,980 --> 00:08:25,790
So there are all sorts of reasons to use more complicated grids. So let's say our question is how should we derive a finite difference formula?
78
00:08:28,720 --> 00:08:35,950
A finite difference approximation to, let's say, the kth derivative.
79
00:08:40,560 --> 00:08:48,720
On an arbitrary grid. And I'm thinking in one dimension, but of course, in multiple dimensions, the same question comes up.
80
00:08:53,270 --> 00:08:58,910
So we have some grid and maybe it's gradually getting coarser, for example.
81
00:08:59,360 --> 00:09:03,589
So there would be the kind of grid that could easily come up where maybe you have
82
00:09:03,590 --> 00:09:07,010
a boundary here and you're resolving things more closely near the boundary.
83
00:09:07,520 --> 00:09:11,220
And maybe you want the third derivative approximation at some point on this grid.
84
00:09:11,450 --> 00:09:12,890
What's the right way to get that?
85
00:09:13,730 --> 00:09:20,510
Well, there's a basic principle that nine times out of ten is the one people use, and that's polynomial interpolation.
86
00:09:24,290 --> 00:09:32,450
So let's suppose, for example, that we want at that point an approximation to the case derivative.
87
00:09:32,630 --> 00:09:38,420
Well, basically the standard thing to do amounts to picking a set of adjacent points.
88
00:09:38,420 --> 00:09:46,640
You know, maybe these points; interpolating those data by a polynomial and then differentiating that interpolant.
89
00:09:46,880 --> 00:09:51,350
So let's write that down. Here's the principle we're going to use.
90
00:09:54,890 --> 00:10:01,220
So at each point, each grid point, we decide which data we're going to use.
91
00:10:04,210 --> 00:10:07,510
So the data will go presumably to the left and to the right.
92
00:10:07,960 --> 00:10:18,700
So I could say which data, v_{j-r}, ..., v_{j+s}, to use.
93
00:10:21,490 --> 00:10:24,700
So in this case, this is the jth grid point.
94
00:10:24,700 --> 00:10:30,760
And this is the case where r equals 2 in this picture and s equals 2.
95
00:10:31,690 --> 00:10:34,810
Because I've gone two points to the left and two points to the right.
96
00:10:36,670 --> 00:10:41,830
Then the principle is we now imagine interpolating that data by a polynomial.
97
00:10:50,790 --> 00:10:57,210
So we interpolate the data by a polynomial, and it would be the unique interpolant of degree r + s.
98
00:11:01,710 --> 00:11:05,850
Because if you have r + s + 1 data points,
99
00:11:07,680 --> 00:11:10,710
There's a unique polynomial of that degree through that data.
100
00:11:10,980 --> 00:11:19,020
And then we differentiate the interpolant k times and evaluate the result at the point we care about.
101
00:11:19,140 --> 00:11:29,730
In other words, our finite difference approximation is going to be the kth
102
00:11:29,750 --> 00:11:34,550
derivative of p, evaluated at the point we're interested in.
103
00:11:38,300 --> 00:11:43,940
So it's a very simple principle: polynomial interpolation, then differentiation.
104
00:11:43,940 --> 00:11:51,079
But this really is the basis of most of what people do when they discretize PDEs,
105
00:11:51,080 --> 00:11:55,190
at least when they discretize them by finite differences as opposed to finite elements.
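To make the principle concrete, here is a minimal MATLAB sketch (my own illustration, not part of the course handouts; the name fdweights and the direct Vandermonde-style solve are choices made for exposition). It computes the weights of a finite difference approximation to the kth derivative at a point x0 from an arbitrary set of nodes x; Fornberg's algorithm, mentioned later in the lecture, does the same job more stably.

```matlab
% Hedged sketch (save as fdweights.m): weights w such that sum_j w(j)*f(x(j))
% approximates the k-th derivative of f at x0, exact for all polynomials of
% degree length(x)-1.  It imposes the moment conditions
%     sum_j w(j)*(x(j)-x0)^m = k!*(m==k),   m = 0, 1, ..., length(x)-1.
% (This direct Vandermonde-type solve can be ill conditioned for many points;
% Fornberg's recursive algorithm is the numerically stable way to do it.)
function w = fdweights(k, x0, x)
    n = length(x);
    A = zeros(n, n);
    for m = 0:n-1
        A(m+1, :) = (x(:)' - x0).^m;   % one row of moment conditions
    end
    b = zeros(n, 1);
    b(k+1) = factorial(k);             % only the k-th moment is nonzero
    w = A \ b;                         % the finite difference weights
end
```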
106
00:11:59,300 --> 00:12:00,500
So let's do an example.
107
00:12:02,330 --> 00:12:10,549
Well, I should say that every finite difference formula I've shown you does fit this pattern, except I've shown you just simple ones on regular grids.
108
00:12:10,550 --> 00:12:17,090
So, for example, if you have a uniform grid and you want to get a second derivative approximation based on three points,
109
00:12:17,390 --> 00:12:22,610
you do exactly that: you interpolate by a parabola and differentiate the interpolant twice.
110
00:12:22,790 --> 00:12:26,390
So let's work out a case like that. I'll do a different one.
111
00:12:29,930 --> 00:12:33,200
Suppose I want a one-sided first derivative.
112
00:12:36,870 --> 00:12:45,310
So I want to approximate the first derivative. And let's suppose that I have equally spaced points.
113
00:12:49,260 --> 00:12:52,100
But what's different is that it's a one sided derivative I want.
114
00:12:52,140 --> 00:12:58,920
So it's x_0 = 0, x_1 = h for some grid size h, and x_2 = 2h.
115
00:13:00,330 --> 00:13:06,570
And the motivation here would be that we have a boundary and a grid coming in from the boundary.
116
00:13:07,800 --> 00:13:11,910
And at the boundary point we only have data to one side.
117
00:13:11,910 --> 00:13:19,650
So we want a one-sided formula. Well, we're going to interpolate by a parabola and differentiate the interpolant once.
118
00:13:21,390 --> 00:13:25,020
So here's the interpolant. I'll write it in what's called the Newton form.
119
00:13:28,820 --> 00:13:39,830
The interpolant through three data points is a parabola, and it turns out to be p(x) = v_0, the value at the left grid point, plus
120
00:13:41,090 --> 00:13:57,950
(v_1 - v_0) times x/h, where v_1 is the value at the second grid point, plus (v_2 - 2 v_1 + v_0)
121
00:14:01,050 --> 00:14:07,480
times x (x - h) / (2 h^2).
122
00:14:10,820 --> 00:14:14,299
So you can easily check that that polynomial fits the data.
123
00:14:14,300 --> 00:14:21,590
It takes the value v_0 at 0, the value v_1 at h, and the value v_2 at 2h.
124
00:14:22,640 --> 00:14:25,910
It's a quadratic polynomial: degree one in the middle term and degree two in the last term.
125
00:14:26,090 --> 00:14:33,319
So we differentiate that, and we find that p'(x) is equal to (v_1 - v_0)
126
00:14:33,320 --> 00:14:44,270
divided by h, plus (v_2 - 2 v_1 + v_0) times
127
00:14:46,330 --> 00:14:52,150
(2x - h) / (2 h^2).
128
00:14:56,790 --> 00:15:00,060
The interpolant is a parabola, so its derivative is linear.
129
00:15:00,810 --> 00:15:05,550
And then we just evaluate that derivative at zero if we want the one sided formula.
130
00:15:05,790 --> 00:15:14,879
So, since this was p'(x), we compute that p'(0) is equal to (v_1 - v_0)
131
00:15:14,880 --> 00:15:20,250
divided by h, and then when x is equal to zero, we get that second term with a minus sign.
132
00:15:20,580 --> 00:15:26,110
So that's minus (v_2 - 2 v_1 + v_0)
133
00:15:26,110 --> 00:15:29,530
divided by 2h.
134
00:15:32,370 --> 00:15:36,120
And that in turn is equal to minus one half
135
00:15:37,390 --> 00:15:45,900
v_2, plus 2 v_1, minus three halves v_0,
136
00:15:46,690 --> 00:15:56,340
all divided by h. So that's it.
137
00:15:56,550 --> 00:16:04,020
We've just done the algebra and figured out the coefficients of the three-point one-sided first derivative.
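Using the fdweights sketch from earlier (again, just my illustration, not part of the assignment), this result can be checked in one line:

```matlab
% The three-point one-sided first derivative on the grid 0, h, 2h (take h = 1):
w = fdweights(1, 0, [0 1 2])   % returns [-1.5; 2; -0.5], i.e. -3/2, 2, -1/2 over h
```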
138
00:16:04,350 --> 00:16:08,309
And in fact, we've seen these coefficients before: minus one half, two,
139
00:16:08,310 --> 00:16:15,690
minus three halves. Those are the coefficients of the second-order backward differentiation formula, because the backward differentiation formulas have this
140
00:16:15,690 --> 00:16:24,450
one-sided nature, since in time-stepping you only have data on one side; and it's a second-order one, which relates to a parabola interpolating three points.
141
00:16:24,780 --> 00:16:32,309
So that's why we see this. Now, what happens in computation, of course, is that you don't grind all this stuff out.
142
00:16:32,310 --> 00:16:36,330
It's all done in advance a priori or on the fly using clever algorithms.
143
00:16:36,660 --> 00:16:41,430
You don't write down anything on paper anymore. That's much too 19th century.
144
00:16:42,210 --> 00:16:47,340
Let me point you to two remarkable papers in this business by Bengt Fornberg.
145
00:16:48,000 --> 00:16:55,050
So. Surprisingly recently, that is to say, only a couple of decades ago.
146
00:16:55,650 --> 00:17:02,160
He has two key papers, one of them in Mathematics of Computation in 1988.
147
00:17:04,850 --> 00:17:08,080
And another one in SIAM Review,
148
00:17:12,800 --> 00:17:19,370
in 1998. The details and the names are in our lecture notes.
149
00:17:19,610 --> 00:17:24,710
The first one is called "Generation of Finite Difference Formulas on Arbitrarily Spaced Grids."
150
00:17:25,250 --> 00:17:29,090
And the second is called "Calculation of Weights in Finite Difference Formulas."
151
00:17:29,510 --> 00:17:40,520
These two papers, surprisingly late, find elegant formulations and stable algorithms to do very general computations like this very rapidly and cleanly.
152
00:17:41,900 --> 00:17:47,600
Of course, people would have known what to do 50 years ago, but using somewhat less clean and robust algorithms.
153
00:17:48,590 --> 00:17:51,950
And I've got a handout from his first paper here.
154
00:17:54,380 --> 00:17:56,870
So if you take a look at the handout,
155
00:17:57,680 --> 00:18:06,110
you see what I've got from the 1988 paper, "Generation of Finite Difference Formulas on Arbitrarily Spaced Grids."
156
00:18:07,640 --> 00:18:10,910
And let's just glance at his four tables there.
157
00:18:16,480 --> 00:18:25,670
So in Table 1, you can see centred approximations at a grid point.
158
00:18:26,240 --> 00:18:34,820
So the notion there is that you have an equally spaced grid and you want to approximate a derivative at the middle.
159
00:18:35,760 --> 00:18:41,500
So this is the most basic thing one would do all the time and you should recognise at least the first coefficient.
160
00:18:41,520 --> 00:18:44,970
So suppose you want the zeroth derivative.
161
00:18:45,240 --> 00:18:49,740
That's the first line of the table. Well, the zeroth derivative just means the interpolant.
162
00:18:50,880 --> 00:18:57,720
So you interpolate these five data points by a polynomial of degree four and you evaluate that polynomial at this point.
163
00:18:58,140 --> 00:19:02,850
Of course, you get the value at this point. So that's why the weight is just the number one.
164
00:19:04,450 --> 00:19:07,720
The next four lines of the table are the first derivative.
165
00:19:07,990 --> 00:19:14,410
So suppose you wanted the first derivative here based on three points or five points or seven points.
166
00:19:14,680 --> 00:19:16,450
Well, here are the coefficients.
167
00:19:16,690 --> 00:19:22,840
So, for example, here we have five points, and the weights would be 1/12, minus 2/3, 0, plus 2/3, minus 1/12.
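Assuming the fdweights sketch from earlier in this transcript, that row of the table can be reproduced directly (an illustration only; Fornberg's own algorithm is different and more stable):

```matlab
% Fourth-order centred first-derivative weights at x = 0 from nodes -2,-1,0,1,2:
w = fdweights(1, 0, -2:2)   % returns [1/12; -2/3; 0; 2/3; -1/12]
```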
168
00:19:27,220 --> 00:19:28,900
So he's offering these.
169
00:19:31,770 --> 00:19:40,350
Partly because the numbers are interesting, but also partly to illustrate how systematically one can regard these problems of generating coefficients.
170
00:19:40,890 --> 00:19:48,720
The next table, Table 2, is centred approximations at a halfway point.
171
00:19:49,200 --> 00:19:54,870
So here the idea is that you have an even number of points and you want the value there.
172
00:19:56,280 --> 00:20:05,370
Now, why would you want that? Well, the answer is that very often, going back many years, people have described using so-called staggered grids.
173
00:20:05,850 --> 00:20:08,310
So that's the phrase staggered grids.
174
00:20:11,880 --> 00:20:18,750
What you find is that very often in fluid mechanics, for example, you have several variables you're trying to solve
175
00:20:18,750 --> 00:20:30,000
for, each of which depends on the others. And it turns out that, for example, suppose we had a grid like this; let's draw it in two dimensions.
176
00:20:30,870 --> 00:20:37,650
So here's a two dimensional grid, and maybe at each of these points we have an estimate of the pressure.
177
00:20:42,050 --> 00:20:50,690
Well, it turns out that the velocity depends on a derivative of the pressure, so it makes sense to compute the velocity at in-between points.
178
00:20:51,110 --> 00:20:55,580
So if we were in one dimension, I would put a velocity grid right there.
179
00:20:55,730 --> 00:20:59,190
In two dimensions, I might even put it here. Let's do one that's simpler.
180
00:20:59,210 --> 00:21:04,040
So a staggered grid might evaluate the velocity at these points.
181
00:21:05,600 --> 00:21:10,970
So the pressure is some kind of a derivative of the velocity, and the velocity a derivative of the pressure.
182
00:21:11,210 --> 00:21:15,860
That kind of staggering of a grid is an old trick that people use.
183
00:21:18,700 --> 00:21:23,950
So of course, table two gives you coefficients for that and the top entry in the table we instantly
184
00:21:23,950 --> 00:21:30,519
recognise: if you want to approximate the value here based on these two numbers,
185
00:21:30,520 --> 00:21:36,280
of course you would take the mean of those two numbers. So we see a half and a half for the zeroth derivative.
186
00:21:36,790 --> 00:21:41,320
On the other hand, suppose you wanted to approximate the first derivative based on these two numbers.
187
00:21:41,710 --> 00:21:45,190
Then in the next four lines you see minus one and one and so on.
188
00:21:47,810 --> 00:21:59,090
Now let's look at the other two tables. The other two tables are the analogous behaviour at a boundary.
189
00:22:02,810 --> 00:22:07,670
So in table three, you see one sided approximations at a grid point.
190
00:22:11,720 --> 00:22:18,140
And in Table 4, you see one-sided approximations evaluated there.
191
00:22:20,420 --> 00:22:27,110
So of course, the motivation is what to do at a boundary in a simulation, and the motivation here is what to do at a boundary.
192
00:22:27,260 --> 00:22:34,490
When you have some kind of a staggered grid. So, for example, these formulas here we can recognise; that's in Table 3.
193
00:22:36,780 --> 00:22:43,800
If you want, if you only have two points at which to estimate a first derivative, well, all you can do is take their difference.
194
00:22:44,370 --> 00:22:47,730
But if you have three points, you've got the minus three halves, two, minus a half.
195
00:22:47,940 --> 00:22:54,599
And that's the second-order backward differentiation formula. And then I've also written down the next few lines because those are the third and fourth order
196
00:22:54,600 --> 00:22:58,740
backward differentiation formulas. So minus eleven sixths, three, minus three halves, one third.
197
00:22:58,950 --> 00:23:02,910
Those are the backward differentiation coefficients for the third-order formula, and so on.
198
00:23:05,960 --> 00:23:07,550
Now these are all with regular grids.
199
00:23:07,550 --> 00:23:15,200
But Fornberg's algorithms, especially in the second paper, give you very slick, robust ways to do it for arbitrary grids.
200
00:23:19,050 --> 00:23:23,310
Now we're going to come back to these issues of how you construct derivatives.
201
00:23:23,730 --> 00:23:28,110
Shortly when we turn to spectral methods. But for the moment, that's all I'm going to say about them.
202
00:23:28,290 --> 00:23:31,620
And then the rest of today, I want to switch to multidimensional problems.
203
00:23:40,400 --> 00:23:46,810
So this is 5.8. Multiple space dimensions.
204
00:23:58,960 --> 00:24:04,900
Well, of course, computational science happens in all sorts of dimensions in terms of physical space.
205
00:24:05,470 --> 00:24:07,570
We tend to live in a three dimensional world.
206
00:24:08,080 --> 00:24:14,290
Sometimes one might regard that as a four-dimensional space-time continuum if gravitational waves are at issue.
207
00:24:15,700 --> 00:24:18,280
But even higher dimensions come up all the time.
208
00:24:18,550 --> 00:24:25,480
First of all, in physics, for example, you could have a state space in many dimensions if you have a bunch of particles. Or in finance:
209
00:24:25,480 --> 00:24:30,790
people are always working with multivariate databases involving different financial instruments.
210
00:24:31,030 --> 00:24:36,910
There are all sorts of abstract contexts in which one would even like to work in a hundred or a thousand dimensions.
211
00:24:37,300 --> 00:24:40,110
That stuff is actually pretty hot these days.
212
00:24:40,120 --> 00:24:49,320
There's a lot going on in high dimensions, but more classically, 2D and 3D are the kind of simulations we focus on.
213
00:24:51,360 --> 00:24:57,179
So a typical standard big problem of computational science might be a three dimensional,
214
00:24:57,180 --> 00:25:02,159
fluid mechanical simulation at high Reynolds number, where you might have several variables involved.
215
00:25:02,160 --> 00:25:05,489
There would be three components of velocity, maybe a pressure.
216
00:25:05,490 --> 00:25:07,560
All sorts of things might come in, like a density.
217
00:25:08,550 --> 00:25:13,980
So you would very typically have a three dimensional grid with a number of different variables at each grid point.
218
00:25:15,020 --> 00:25:25,410
And that's the kind of motivation I'm thinking of here. So there are sort of two facts, a good fact and a bad fact, if you like.
219
00:25:26,790 --> 00:25:33,660
So the good news is: little changes conceptually.
220
00:25:34,980 --> 00:25:43,950
So 2D and 3D are based on the same mathematics as one D.
221
00:25:47,670 --> 00:25:55,120
And what do I mean by that? Well, you can approximate derivatives in ways that are obvious extensions from one dimension to two or three.
222
00:25:55,920 --> 00:25:59,100
You set up a discrete system of equations. You need to solve them.
223
00:25:59,550 --> 00:26:01,680
All the issues we've talked about arise.
224
00:26:01,680 --> 00:26:08,790
For example, if you have a hyperbolic problem with a finite wave speed, then you can probably use an explicit method.
225
00:26:08,970 --> 00:26:15,030
No problem. If you have a diffusive problem with an infinite wave speed, you probably need an implicit method.
226
00:26:15,390 --> 00:26:23,100
If you have a stiff PDE, you might be able to mix together an implicit method for the linear part and an explicit method for the nonlinear parts.
227
00:26:23,430 --> 00:26:26,700
All of that is still true in two or three dimensions.
228
00:26:28,730 --> 00:26:33,580
The bad news is: linear algebra.
229
00:26:38,730 --> 00:26:46,560
And what I mean by that is, with all of these methods, as soon as you have a system of equations to solve, it's no longer going to be a simple banded system.
230
00:26:46,890 --> 00:26:52,800
So of course, in one dimension, we're used to matrices that look like this,
231
00:26:53,850 --> 00:26:57,420
and those are so easy to solve that I haven't even discussed how one solves them.
232
00:26:57,870 --> 00:27:03,330
But in two dimensions the matrices will get more complicated and in three dimensions much more complicated.
233
00:27:03,630 --> 00:27:10,640
And so nobody would ever have predicted it. But linear algebra really is the heart of a lot of computational science.
234
00:27:10,650 --> 00:27:16,230
Because of this, if all equations were hyperbolic, linear algebra would be less important.
235
00:27:16,680 --> 00:27:24,090
But because we care about diffusive processes, which require implicitness, linear algebra is at the heart of what's done.
236
00:27:25,440 --> 00:27:31,380
So this constrains the kind of simulations that are feasible in all sorts of ways, and it leads,
237
00:27:31,500 --> 00:27:33,149
We've talked about this in the first term.
238
00:27:33,150 --> 00:27:40,560
Of course, it leads to tremendously advanced methods, both in the direct category and in the iterative category.
239
00:27:41,910 --> 00:27:46,740
And when you're doing iterative methods, you're always talking about preconditioners.
240
00:27:51,390 --> 00:27:54,000
So these are just huge areas of study.
241
00:27:55,470 --> 00:28:06,120
Very roughly speaking, you might say that direct methods often win in 2D, but in 3D, the sparsity structure is too complicated to take advantage of.
242
00:28:06,690 --> 00:28:10,430
So maybe iterative methods win, but it's not really so simple.
243
00:28:10,440 --> 00:28:16,839
Both are used in all sorts of ways. So linear algebra changes the picture very much.
244
00:28:16,840 --> 00:28:23,130
We're not going to say too much about that for the moment. We're going to focus on the part that isn't so different in multiple dimensions.
245
00:28:23,410 --> 00:28:27,490
And let's take our example, the 2D heat equation.
246
00:28:31,020 --> 00:28:34,649
So suppose we have u_t equals the Laplacian of u.
247
00:28:34,650 --> 00:28:40,139
So that's u_xx + u_yy, and let's put it on a square.
248
00:28:40,140 --> 00:28:45,360
So I'll say 0 < x, y < 1.
249
00:28:46,200 --> 00:28:51,570
And since it's a very simple problem, I'll simply say that u equals zero on the boundary.
250
00:28:53,840 --> 00:28:58,850
So we have some initial heat distribution which eventually is going to flow out through the boundary,
251
00:28:59,030 --> 00:29:02,060
and we'd like to know how that evolves as a function of time.
252
00:29:04,650 --> 00:29:06,900
Well, let's do the obvious discretization.
253
00:29:07,890 --> 00:29:20,190
So we'll take our grid, we'll set the values to zero on the boundary, but then in the interior, we'll set up a grid of points.
254
00:29:22,920 --> 00:29:27,720
And we'll do finite differences to approximate the equation.
255
00:29:29,340 --> 00:29:33,960
So let me write down the obvious finite difference approximation of it there.
256
00:30:25,860 --> 00:30:31,590
Well, if I write down the obvious backward Euler approximation, it could look like this.
257
00:30:31,830 --> 00:30:42,210
We're going to set the ith x coordinate to be i times a space step, and we'll set the jth y coordinate
258
00:30:42,810 --> 00:30:50,580
to be j times the same space step, and we'll set the nth time position to be n times the time step.
259
00:30:53,470 --> 00:30:59,740
And the notation I'll use is that h is one over some integer J.
260
00:31:01,060 --> 00:31:09,090
So we have a J by J grid, basically. So, the backward Euler discretization.
261
00:31:13,460 --> 00:31:22,520
I hope you can all write that down without my telling you. But I'll tell you, the stencil could look like this at the new time step.
262
00:31:24,090 --> 00:31:27,690
We sort of have x and y there. So this is time step n+1.
263
00:31:28,260 --> 00:31:34,710
We'll compute our space derivatives there, and then, because it's backward Euler,
264
00:31:35,870 --> 00:31:39,080
we'll do the time derivative along there, back a step.
265
00:31:40,670 --> 00:31:48,500
So the formula will be that the time derivative is v_{i,j}^{n+1} minus v_{i,j}^{n},
266
00:31:48,650 --> 00:31:51,800
divided by k.
267
00:31:54,290 --> 00:32:01,250
So I hope the notation's obvious: i for the x coordinate, j for the y coordinate; so that's our approximation to the time derivative.
268
00:32:01,520 --> 00:32:07,400
And that will be the sum of two terms, the double derivative in x and the double derivative in y.
269
00:32:08,030 --> 00:32:19,700
So these will be (v_{i+1,j} - 2 v_{i,j} + v_{i-1,j}) at time step n+1,
270
00:32:22,600 --> 00:32:38,760
divided by h squared, and then similarly (v_{i,j+1} - 2 v_{i,j} + v_{i,j-1}) at time step n+1,
271
00:32:42,490 --> 00:32:53,469
divided by h squared. So there we have our approximation to u_t = u_xx + u_yy.
272
00:32:53,470 --> 00:33:01,470
It's a paradox of mathematics that the Laplacian is, of course, isotropic.
273
00:33:01,590 --> 00:33:09,660
It doesn't treat x and y differently from 45 degrees, and yet it can be written in this form, which is completely anisotropic.
274
00:33:10,650 --> 00:33:14,250
So our discretization of it is of course not isotropic.
275
00:33:14,250 --> 00:33:17,280
And yet we're discretizing an isotropic operator.
276
00:33:19,540 --> 00:33:25,430
So that's the backward Euler approximation and I won't work it out, but it's unconditionally stable.
277
00:33:25,450 --> 00:33:28,450
By which we mean there's no time step restriction.
278
00:33:32,900 --> 00:33:38,270
Of course, if the time step is too big, you'll get inaccuracy, but you won't get explosions.
279
00:33:40,430 --> 00:33:51,730
And by the way, if we do forward Euler, which would be everything the same except these n+1's would be n's,
280
00:33:52,820 --> 00:34:00,530
then you do get a time step restriction, which turns out to be that k has to be less than h squared over 4.
281
00:34:04,200 --> 00:34:08,790
So backward Euler, as usual, allows big time steps; forward
282
00:34:08,790 --> 00:34:14,010
Euler requires small time steps. The accuracy of both formulas
283
00:34:17,390 --> 00:34:20,940
is O(k + h^2).
284
00:34:26,310 --> 00:34:33,540
So if k is proportional to h squared, then time and space errors would be of comparable magnitude.
285
00:34:33,930 --> 00:34:37,050
If k is bigger, then the time errors would be the biggest.
286
00:34:42,240 --> 00:34:45,420
So if you think about what's going on, when something like this happens,
287
00:34:46,260 --> 00:34:49,920
we're using matrices, except they're sort of four dimensional matrices, right?
288
00:34:50,160 --> 00:35:01,230
If you have a 1D problem, then you say vector equals matrix times vector, or maybe you solve a system of equations: matrix times vector equals vector.
289
00:35:01,860 --> 00:35:07,710
In a 2D problem. I need to draw a four dimensional picture, don't I?
290
00:35:07,740 --> 00:35:21,510
It's a tensor. So what we'd like to say is that a matrix is equal to some kind of a 4D tensor times another matrix, or, to solve a problem,
291
00:35:21,510 --> 00:35:29,460
we'd like to say that a 4D tensor times a matrix equals a matrix.
292
00:35:31,620 --> 00:35:38,370
There's no doubt that that's what's logically going on when we're in a game like this, and yet nobody thinks that way.
293
00:35:38,820 --> 00:35:43,860
This has always bothered me because it must be fruitful to think more like that than we do.
294
00:35:44,310 --> 00:35:47,490
Yet when we design our computers, we don't think that way very much.
295
00:35:47,790 --> 00:35:53,030
When we write down our algorithms, we don't think that way. Of course, people do talk about TensorFlow.
296
00:35:53,220 --> 00:36:01,380
They're not unaware of this interpretation, but most of computational science turns everything into a matrix.
297
00:36:01,740 --> 00:36:06,540
You take this four dimensional object and you string it out so it becomes a two dimensional object.
298
00:36:08,130 --> 00:36:16,680
Of course, in 3D we would want a three-dimensional cube to be equal to a six-dimensional tensor times a three-dimensional cube.
299
00:36:20,680 --> 00:36:25,180
So, ignoring tensors and just thinking linear algebra,
300
00:36:28,430 --> 00:36:37,850
abstractly, we can write down that the backward Euler formula will have our usual format of a matrix B times
301
00:36:39,200 --> 00:36:45,050
a vector at the new time step equals the vector at the current time step.
302
00:36:45,800 --> 00:36:53,780
So this is so abstract that formula could apply to the tensor picture as well as to a matrix interpretation.
303
00:36:54,140 --> 00:37:05,720
But certainly in practice, people think in terms of matrices. And B is equal to the identity
304
00:37:07,890 --> 00:37:13,920
minus the time step times our discrete Laplacian.
305
00:37:20,010 --> 00:37:22,350
The discrete Laplacian is this part of the picture.
306
00:37:22,830 --> 00:37:30,710
So the whole linear algebra turns out to involve solving a system which basically is a perturbation of the identity by the discrete Laplacian.
307
00:37:33,750 --> 00:37:45,240
And as we know from the first term, in MATLAB you can write the discrete Laplacian in the form of a Kronecker product, kron(I, D),
308
00:37:46,480 --> 00:37:49,870
plus the Kronecker product in the other direction, kron(D, I),
309
00:37:49,900 --> 00:38:06,490
where D is equal to h^(-2) times the tridiagonal matrix whose nonzero entries are 1, -2, and 1.
310
00:38:08,790 --> 00:38:15,150
So I think actually these formulas are a complete description of the linear algebra of backward Euler.
311
00:38:15,960 --> 00:38:19,170
You set up this J by J tri diagonal matrix.
312
00:38:20,570 --> 00:38:28,340
Then you make this J squared by J squared matrix from the Kronecker products; the sum of the two Kronecker products is called a Kronecker sum.
313
00:38:28,490 --> 00:38:34,070
As we discussed last term, you then have this big matrix of dimension J squared.
314
00:38:34,820 --> 00:38:38,450
You subtract it off from the big identity.
315
00:38:38,450 --> 00:38:46,010
So let's say this is a J by J identity, but this is a J squared by J squared identity, the tensor identity, if you like.
316
00:38:48,580 --> 00:38:53,520
That's the whole linear algebra. So this is a Kronecker sum.
317
00:38:53,520 --> 00:39:06,350
And Kronecker sums are all about taking tensors and squashing them down into two dimensions.
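To make the pieces concrete, here is a stripped-down MATLAB sketch of this construction (my own illustration with made-up parameter values, not the handout code m46):

```matlab
% Hedged sketch: backward Euler for u_t = u_xx + u_yy on the unit square,
% u = 0 on the boundary, using the Kronecker-sum form of the discrete Laplacian.
N = 30;  h = 1/(N+1);  k = 0.01;              % N interior points per direction
e = ones(N,1);
D = spdiags([e -2*e e], -1:1, N, N) / h^2;    % 1D tridiagonal [1 -2 1] / h^2
L = kron(speye(N), D) + kron(D, speye(N));    % discrete Laplacian (Kronecker sum)
B = speye(N^2) - k*L;                         % backward Euler matrix  I - k*L
[x, y] = meshgrid(h*(1:N));
u = exp(-30*((x-.5).^2 + (y-.5).^2));         % some initial heat distribution
u = u(:);                                     % unknowns strung out as one vector
for n = 1:50
    u = B \ u;                                % one implicit step: solve B*u_new = u_old
end
mesh(x, y, reshape(u, N, N))                  % final temperature field
```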
318
00:39:09,610 --> 00:39:17,770
Now the linear algebra side is all about the ordering of the unknowns and how you can exploit that ordering, at least for direct methods.
319
00:39:18,010 --> 00:39:28,630
So let me look at the ordering by focusing on the case J equals 4, where we have three unknowns in each direction.
320
00:39:30,400 --> 00:39:42,490
So we now have our unknowns in a three by three grid, which is interior to the boundary, the boundaries out here with taking the value zero.
321
00:39:43,660 --> 00:39:51,550
But the unknowns are in this three by three grid. So if I think of this one as the first and this is the second and this is the third unknown.
322
00:39:56,830 --> 00:39:59,890
Five, six, seven, eight.
323
00:40:01,660 --> 00:40:07,510
Nine. You can number them however you want mathematically,
324
00:40:08,620 --> 00:40:13,390
but the way you number them will affect how you put them on a computer if you string them out in order.
325
00:40:13,750 --> 00:40:17,680
So if I number them like this, in typewriter ordering, one, two, three,
326
00:40:17,680 --> 00:40:21,370
four, five, six, seven, eight, nine and then put them on the computer that way.
327
00:40:21,550 --> 00:40:25,720
That implies something about the structure of the matrices that we'll be working with.
328
00:40:27,130 --> 00:40:30,490
So, in fact, if I use that ordering.
329
00:40:32,940 --> 00:40:41,610
Then the discrete Laplacian is a mapping from a vector of length nine to a vector of length nine.
330
00:40:44,450 --> 00:40:48,319
And that mapping. So let's see. I want a nine by nine matrix.
331
00:40:48,320 --> 00:41:00,540
Right. One, two, three, four, five, six, seven, eight, nine. That nine by nine matrix will have block three by three structure.
332
00:41:05,110 --> 00:41:13,030
And it will contain numbers corresponding to couplings between adjacent
333
00:41:14,440 --> 00:41:26,120
nodes. So for example, this point is coupled to those because of the second derivative in x, and to those because of the second derivative in y.
334
00:41:26,240 --> 00:41:31,070
But it's not coupled to these, because we're not doing any differencing along the diagonal.
335
00:41:32,270 --> 00:41:38,839
So you end up with a block matrix and each block has a special structure.
336
00:41:38,840 --> 00:41:45,260
And in fact, if you look at the numbers, it's minus four, minus four, minus four, one, one, one, one.
337
00:41:46,160 --> 00:41:48,140
That's on the diagonal blocks.
338
00:41:57,180 --> 00:42:07,530
So it's what we would call a block tridiagonal matrix whose diagonal blocks are tridiagonal. On the super-diagonal blocks,
339
00:42:07,530 --> 00:42:18,920
we have little identities. And then outside of these three block diagonals, you have zeros.
340
00:42:21,090 --> 00:42:26,190
So the big structure is block tridiagonal, and then within each block there's further structure.
341
00:42:27,060 --> 00:42:35,370
Of course, in 3D you get analogous things with more complicated structure, and in n dimensions you'd have a sort of recursive structure of some kind.
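For the record, this small case is easy to display in MATLAB (a hedged illustration; the 1/h^2 factor is left out so the entries are the bare -4's and 1's just described):

```matlab
% The 9-by-9 discrete Laplacian for a 3-by-3 interior grid, typewriter ordering.
N = 3;
D = spdiags(ones(N,1)*[1 -2 1], -1:1, N, N);   % 1D second difference, no 1/h^2
L = kron(speye(N), D) + kron(D, speye(N));     % Kronecker sum
full(L)   % block tridiagonal: tridiagonal diagonal blocks, identity off-diagonal blocks
```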
342
00:42:37,990 --> 00:42:45,100
So the wording is that we say L is block tridiagonal,
343
00:42:50,210 --> 00:43:10,510
with tridiagonal blocks. And for the last few minutes, I'd like to use that technology to solve a few problems.
344
00:43:10,960 --> 00:43:16,750
So now let's look at the codes I handed out. There are three of them: m46, m47, and m48.
345
00:43:17,230 --> 00:43:22,210
The first two are for the heat equation, and the third is for the wave equation in two dimensions.
346
00:43:27,320 --> 00:43:35,360
So looking first at m46, backward Euler: it's a two-dimensional backward Euler simulation for the heat equation, and it will do exactly this.
347
00:43:35,600 --> 00:43:41,180
And you see the key line is where I say u = B\u.
348
00:43:42,740 --> 00:43:50,090
So in that line, u = B\u, MATLAB is being told to solve a system with this structure.
349
00:43:50,600 --> 00:44:00,050
Now, this structure isn't that bad. It's not as narrow a band as you might like, but it's still a banded matrix with a bandwidth on the order of J.
350
00:44:01,670 --> 00:44:05,450
So this is the band.
351
00:44:06,680 --> 00:44:14,719
The band is where the matrix lives. Outside there, everything is zero, and MATLAB exploits that quite easily.
352
00:44:14,720 --> 00:44:20,900
So this two dimensional simulation goes quite smoothly, even though linear algebra is involved at each step.
353
00:44:21,650 --> 00:44:42,850
So let's run that. If I type m46, backward Euler, it asks me what size grid I want, and I'll begin by saying I'd like a ten by ten grid.
354
00:44:43,330 --> 00:44:48,610
So it shows you the matrix, and you can see there are ten blocks each way.
355
00:44:49,270 --> 00:44:54,850
There's a block. This is the lower-right tridiagonal block, which is a ten by ten matrix.
356
00:44:55,180 --> 00:45:00,310
This is the lower-right off-diagonal block, which is a ten by ten identity.
357
00:45:00,730 --> 00:45:06,310
So there's the structure of the whole matrix. It's banded enough to be easy to solve.
358
00:45:07,420 --> 00:45:13,020
And so every time I take a step, I solve the problem.
359
00:45:13,380 --> 00:45:17,460
That's not too exciting. Let's make the grid finer. Let's run m46 again;
360
00:45:18,030 --> 00:45:24,210
I'll do it on a 90 by 90 grid. So there's again the Matrix.
361
00:45:24,450 --> 00:45:27,460
It's of dimension 90 by 90.
362
00:45:27,480 --> 00:45:32,170
So 8100. But plenty of structure.
363
00:45:33,040 --> 00:45:41,650
So that's easy for your computer to solve. There's the initial condition and every time I take a time step.
364
00:45:43,340 --> 00:45:54,520
The heat flows. There are a couple of things I always like to do to fiddle around with codes.
365
00:45:54,740 --> 00:46:00,050
So in this one, you'll notice there's a line that says periodic boundary conditions with a percent sign.
366
00:46:00,260 --> 00:46:05,030
If I remove that percent sign, then it will change the boundary conditions to be periodic.
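In terms of the matrices sketched above, the effect of that change is roughly the following (a sketch of the idea, continuing the earlier variables D, N, h, k; it is not the actual line in the handout code):

```matlab
% Hedged sketch: periodic boundary conditions make the 1D difference matrix
% wrap around (circulant), and the Kronecker sum inherits that structure.
D(1, N) = 1/h^2;   D(N, 1) = 1/h^2;            % couple the first and last points
L = kron(speye(N), D) + kron(D, speye(N));     % rebuild the 2D Laplacian
B = speye(N^2) - k*L;                          % backward Euler matrix as before
```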
367
00:46:05,570 --> 00:46:11,350
So let's do that. It's the periodic version of m46.
368
00:46:12,470 --> 00:46:22,040
So if I now say, let's try a 30 by 30 grid, there's the grid, and notice the same tridiagonal structure.
369
00:46:22,040 --> 00:46:25,849
But now these things have appeared. Our matrix now wraps around.
370
00:46:25,850 --> 00:46:31,370
It's a circulant matrix, but MATLAB is still able to solve that without much difficulty.
371
00:46:32,810 --> 00:46:42,150
So the initial condition looks like that. And now, as time goes by, the physics is a bit different because it's periodic boundary conditions.
372
00:46:42,170 --> 00:46:50,150
It's not exciting to watch, but you can see that at the boundary the temperature is rising a little bit, so we're not losing heat anymore.
373
00:46:50,420 --> 00:47:01,820
It's merely equalising throughout the domain. But if you tried to take a big time step with this, you'd lose accuracy pretty quickly.
374
00:47:02,900 --> 00:47:10,310
So the Crank-Nicolson approach would be to increase the order of accuracy in time from k to k squared.
375
00:47:10,670 --> 00:47:14,060
I won't write it down because you all know what it means, but the picture tells you all.
376
00:47:14,390 --> 00:47:24,160
So the second code, m47, which incidentally matches the year the Crank-Nicolson paper was published, 1947 I think, shows you the picture.
377
00:47:24,170 --> 00:47:29,560
We're going to discretize both at the new time step and at the old time step, and treat those equally.
378
00:47:29,570 --> 00:47:33,740
So the symmetry now gives us k squared rather than k accuracy.
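In the notation of the backward Euler sketch above, one Crank-Nicolson step looks roughly like this (my own sketch, not the handout code m47):

```matlab
% Hedged sketch: average the discrete Laplacian over the old and new time levels.
B = speye(N^2) - (k/2)*L;   % acts on the unknown new time level
A = speye(N^2) + (k/2)*L;   % acts on the known old time level
u = B \ (A*u);              % one Crank-Nicolson step: solve B*u_new = A*u_old
```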
379
00:47:34,520 --> 00:47:40,040
It won't particularly appear in the pictures, but let's just run it for fun.
380
00:47:40,490 --> 00:47:47,220
If I say m47, there's very little difference in the code.
381
00:47:48,180 --> 00:47:53,880
I've chosen a different initial condition just for variety, so you can see it works fine.
382
00:47:58,130 --> 00:48:04,070
Let's do a periodic version of that. You can see a couple of lines changed there.
383
00:48:07,300 --> 00:48:10,390
Not much to say, but that works.
384
00:48:11,920 --> 00:48:16,660
Maybe something more interesting is to look at the penultimate line of code.
385
00:48:18,400 --> 00:48:21,760
where it says removing the parentheses is catastrophic.
386
00:48:22,540 --> 00:48:26,000
So we have u = B\(A*u).
387
00:48:26,020 --> 00:48:30,420
Why do we have that? Well, with Crank-Nicolson, we're discretizing at the old time step.
388
00:48:30,430 --> 00:48:34,329
That's the A. And then we're also discretizing at the new time step.
389
00:48:34,330 --> 00:48:41,470
That's the B. So we're solving a system of equations involving B, but the right hand side of those equations depends on A.
390
00:48:42,460 --> 00:48:45,940
Now, mathematically, you could do things either way.
391
00:48:45,950 --> 00:48:50,439
It's perfectly legitimate to talk about the matrix.
392
00:48:50,440 --> 00:48:58,220
B inverse times A. That's a matrix. And if you ask MATLAB to compute that matrix, it will do it.
393
00:49:00,470 --> 00:49:09,470
The trouble is that to actually compute that whole matrix can be very time consuming as opposed to solving a system of equations with B,
394
00:49:09,710 --> 00:49:18,010
which is much quicker. So let's try that. I think I've called it m 47 slow.
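The difference is easy to see with a quick timing experiment (an illustrative sketch using the A, B, and u from the Crank-Nicolson sketch above, not the handout code):

```matlab
% With parentheses: one sparse matrix-vector product, then one sparse solve.
tic, u1 = B \ (A*u); toc
% Without parentheses: MATLAB first forms the matrix B\A, essentially a large,
% nearly dense inverse-like object, and only then multiplies by u. Much slower.
tic, u2 = (B \ A) * u; toc
```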
395
00:49:20,160 --> 00:49:23,250
So I've now removed the parentheses.
396
00:49:23,790 --> 00:49:28,680
So MATLAB is just doing that line from left to right and trying to compute the matrix B inverse times A.
397
00:49:29,850 --> 00:49:33,450
And if I say ten. No problem.
398
00:49:34,590 --> 00:49:39,149
If I say 30, it's a bit slower.
399
00:49:39,150 --> 00:49:42,720
I don't know if you notice the difference now let's say 60.
400
00:49:46,340 --> 00:49:50,210
So now it's trying to compute. It's doing it. That's pretty impressive.
401
00:49:51,680 --> 00:49:58,310
It's computed this matrix; actually, at every time step it's computing this matrix of size 3600.
402
00:49:58,910 --> 00:50:07,820
I won't try 90. That would really grind to a halt. Let's in our last 30 seconds, let's quickly do the leapfrog for the wave equation.
403
00:50:10,460 --> 00:50:16,100
So there you have the two dimensional wave equation and we're solving it by a leapfrog formulation.
404
00:50:17,030 --> 00:50:30,200
Maybe a bigger grid is nicer. And here the picture says it all about how we're discretizing. Leapfrog is the right choice here:
405
00:50:30,200 --> 00:50:34,340
it's an explicit scheme suitable for a hyperbolic problem with finite speed of propagation.
406
00:50:34,640 --> 00:50:42,140
So we're evaluating the second derivative in the middle, and then the time derivative goes from time step n minus 1 to time
407
00:50:42,140 --> 00:50:51,380
step n plus 1, so that you get symmetry for the higher order, but also explicitness, to avoid having to do any linear algebra.
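In the same matrix notation, one leapfrog step for u_tt = u_xx + u_yy looks roughly like this (a sketch under the assumption that L is the discrete Laplacian from before and uold holds the previous time level; this is not the handout code m48):

```matlab
% Hedged sketch: centred second difference in time, fully explicit.
unew = 2*u - uold + k^2 * (L*u);   % (unew - 2*u + uold)/k^2 = L*u
uold = u;                          % shift the two stored time levels
u = unew;
```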
408
00:50:52,830 --> 00:50:57,510
So that's to finish the lecture. We can imagine that that's a gravitational wave.
409
00:50:57,510 --> 00:51:02,160
And the timescale, the space scale is 1 billion light years from the left to the right.
410
00:51:02,670 --> 00:51:04,020
Okay. See you on Friday.