1
00:00:01,950 --> 00:00:07,690
Okay. Good morning, everybody. We're reaching the end of the course.
2
00:00:07,710 --> 00:00:11,220
This is the final week, so assignment three is due now.
3
00:00:11,460 --> 00:00:17,700
The way it works is we hand out assignment four on Friday, and then it's due in the box at the usual time in eighth week.
4
00:00:17,970 --> 00:00:21,870
But Friday is the last lecture, and the last lecture is a special one.
5
00:00:23,040 --> 00:00:25,109
It's not as rigorous as most of the lectures.
6
00:00:25,110 --> 00:00:34,530
It's a talk that I first gave more than ten years ago about the history of great numerical algorithms, but I've been updating it ever since.
7
00:00:34,530 --> 00:00:41,700
So it's become richer and richer as the time has gone by. If you're interested in this subject, which you surely must be, if you're still here,
8
00:00:42,030 --> 00:00:47,610
then this should be enjoyable and you're welcome to bring friends who haven't seen it, if you like.
9
00:00:48,060 --> 00:00:52,860
Anyone interested in numerical computing should find Friday's lecture special.
10
00:00:54,570 --> 00:00:58,290
Something else I want to talk to you about is one of the handouts.
11
00:00:58,710 --> 00:01:05,730
So if you look at the essay called Ten Digit Algorithms, this is something I wrote in 2005.
12
00:01:05,730 --> 00:01:10,680
It was never published, actually, though it's on my web page. I want you to read this.
13
00:01:10,680 --> 00:01:17,820
I'm not going to talk about it. I want you to read it. And your reaction, because everybody has this reaction, will be:
14
00:01:19,080 --> 00:01:22,410
Well, that's very interesting, but it has nothing to do with me.
15
00:01:23,990 --> 00:01:27,360
And then I want you to read it again and you'll probably still have that reaction.
16
00:01:27,360 --> 00:01:31,169
But I want to tell you that you're wrong. It does have to do with you.
17
00:01:31,170 --> 00:01:39,180
This essay is addressed to everybody who uses computers for computational science, including people who do large scale computing.
18
00:01:39,570 --> 00:01:49,170
So when you read that essay, when you think this doesn't apply to me, please remember that Professor Trefethen says it does apply to you.
19
00:01:49,200 --> 00:01:57,899
So think again. Please read it. I feel passionately about this subject because I spent so many years trying to help
20
00:01:57,900 --> 00:02:02,370
people get things done on the computer and the mistakes they make are always,
21
00:02:02,370 --> 00:02:07,440
it seems, about forgetting the right way for a human to interact with a machine.
22
00:02:07,680 --> 00:02:11,640
And that's what this essay is about. So please read it and please take it to heart.
23
00:02:13,860 --> 00:02:18,510
Let's see, back to the course. We have the assignment that you've just, I hope, turned in.
24
00:02:19,020 --> 00:02:22,620
And I wanted to play around a little bit with that, for example.
25
00:02:26,120 --> 00:02:33,470
If we go to chebgui, which is the graphical user interface of Chebfun.
26
00:02:34,040 --> 00:02:40,490
One of the things you can play around with there is in the demos: you'll see there are some PDEs.
27
00:02:40,640 --> 00:02:50,210
So if I go to Scalar PDEs, there are some built-in examples, and one of them is the advection-diffusion equation.
28
00:02:50,810 --> 00:02:58,370
Well, two of them are the advection-diffusion equation. And this one here is pretty close to your assignment.
29
00:02:58,670 --> 00:03:05,330
So you can see there's an initial condition specified there, and then there's the equation specified in the box up there.
30
00:03:05,630 --> 00:03:09,710
And when we press solve, it uses clever methods to solve this numerically.
31
00:03:10,160 --> 00:03:15,290
I've also given you a page from the coffee table book about advection-diffusion equations.
32
00:03:17,000 --> 00:03:23,810
So if I press solve, we see that kind of behaviour.
33
00:03:25,400 --> 00:03:28,700
So, advection-diffusion. Well, of course advection means the solution
34
00:03:28,700 --> 00:03:35,390
drifts in one direction, and diffusion means it eventually decays away, essentially because of the boundary condition.
35
00:03:35,600 --> 00:03:39,110
If you think of this as heat, then the heat is flowing out of the boundary.
36
00:03:41,300 --> 00:03:43,490
There's a lot of physics in a problem like that.
37
00:03:43,490 --> 00:03:51,620
And this is physics and mathematics that has been of interest to me over the years because I'm interested in eigenvalue problems.
38
00:03:51,890 --> 00:03:57,020
And this is a very interesting kind of situation from the point of view of eigenvalues.
39
00:03:57,380 --> 00:04:02,750
Eigenvalues are often used to describe the behaviour of a dynamical process.
40
00:04:03,020 --> 00:04:04,610
Will it grow? Will it decay?
41
00:04:04,970 --> 00:04:13,640
And if you look at the eigenvalues of this problem, they're in the left half of the complex plane, indicating decay, and in fact the eigenvalues
42
00:04:13,640 --> 00:04:20,600
describe that final phase where the shape has more or less settled down to something near the boundary and it's losing amplitude.
43
00:04:20,960 --> 00:04:29,120
So the eigenfunction is sort of this shape, which then exponentially loses amplitude.
44
00:04:29,210 --> 00:04:37,550
We could slow it down, I suppose. Let's see what happens if we change 0.3 to 0.2.
45
00:04:43,110 --> 00:04:48,910
We've now got a narrower pulse there at the side.
46
00:04:48,930 --> 00:05:05,100
Let's make it 0.1. Now that final shape is an eigenfunction, and in the study of dynamical processes, people pay a lot of attention to those.
47
00:05:05,310 --> 00:05:12,060
But what's so interesting about a problem like this is that the initial physics has nothing whatsoever to do with the eigenfunction.
48
00:05:12,330 --> 00:05:21,659
So when people tell you that eigenfunctions determine the behaviour of processes in general, that's only true as time goes to infinity.
49
00:05:21,660 --> 00:05:23,489
For finite time it's not true. And in particular,
50
00:05:23,490 --> 00:05:31,170
if you're in fluid mechanics and people tell you you have to study the eigenvalues of some Navier-Stokes problem to find out if it's stable or not,
51
00:05:31,440 --> 00:05:37,140
you should be very suspicious of that, because the initial phase of the flow
52
00:05:40,460 --> 00:05:45,680
has nothing to do with the eigenfunction. It's only this phase that has to do with the eigenfunction.
53
00:05:47,950 --> 00:05:51,570
And of course, another one of your problems was on Richardson.
54
00:05:51,690 --> 00:05:57,569
Richardson is a great hero for any British person, a British scientist, a wonderful man.
55
00:05:57,570 --> 00:06:02,200
And I hope you enjoyed that problem involving his work on finite differences.
56
00:06:02,400 --> 00:06:07,800
Before computers were invented. Long before. And then the final one was on the Gray-Scott equations.
57
00:06:08,220 --> 00:06:11,910
There's a built-in...
58
00:06:11,940 --> 00:06:18,210
Oh, sorry. I don't know if any of you used Chebfun for that, but in principle, in Chebfun, we should be able to do that.
59
00:06:18,630 --> 00:06:22,170
If I say spin... spin2
60
00:06:25,450 --> 00:06:29,019
of GS2. That's a Gray-Scott demo.
61
00:06:29,020 --> 00:06:33,400
It's not a very exciting one, at least on my computer, because it runs too slowly.
62
00:06:34,030 --> 00:06:37,710
So far, it's not running at all. Or is it? Oops.
63
00:06:40,180 --> 00:06:41,950
Is anything happening? Oh, there we are.
64
00:06:42,340 --> 00:06:51,940
So this is not related to the particular initial conditions you computed, but there are two components, of course, and it's not moving.
65
00:06:56,060 --> 00:07:05,780
Now it's moving. So this is an example of a process where you can get very different configurations depending on the parameters.
66
00:07:06,080 --> 00:07:11,720
And that's a subject that's very much associated with Oxford, because a few decades ago,
67
00:07:11,960 --> 00:07:19,370
the famous Jim Murray was the one who wrote papers like 'How the Zebra Got Its Stripes' and 'How the Leopard Got Its Spots' and so on,
68
00:07:19,610 --> 00:07:25,460
using reaction-diffusion equations like these. Okay.
69
00:07:25,490 --> 00:07:28,880
I guess that's all I'll say about the assignment.
70
00:07:31,920 --> 00:07:48,360
Let's stop that. Now, in this lecture, we're going to continue talking about polynomials and methods for solving differential equations.
71
00:07:48,750 --> 00:07:55,710
And I wanted, as an aside, to make a remark related to the last lecture.
72
00:07:57,110 --> 00:08:05,420
You remember we talked about Fourier and Laurent and Chebyshev in the context of periodic functions and the unit circle, and non-periodic functions.
73
00:08:05,780 --> 00:08:09,320
And I wanted to make a further remark about non-periodic functions.
74
00:08:09,620 --> 00:08:13,610
So this goes back to my high school when I took calculus,
75
00:08:15,560 --> 00:08:22,310
the teacher taught us about series, and this is an example of a series: 1/(1+x^2) has this Taylor series.
76
00:08:22,580 --> 00:08:29,030
So of course we all know that. Now, where does that series converge?
77
00:08:29,060 --> 00:08:32,900
Well, it converges for x less than one, right? In absolute value.
78
00:08:33,140 --> 00:08:36,350
If x is two, it diverges. If x is a half, it's fine.
79
00:08:36,620 --> 00:08:45,840
So it converges for -1 < x < 1.
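To spell out the series in question (a standard argument, sketched here for completeness rather than quoted from the board): 1/(1+x^2) = 1 - x^2 + x^4 - x^6 + ..., which is a geometric series in -x^2, so it converges exactly when |x^2| < 1, that is, |x| < 1.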
80
00:08:46,830 --> 00:08:50,610
And then if you're in high school and you're inquisitive, you ask Why?
81
00:08:51,810 --> 00:08:59,129
What is it about that function that makes the Taylor series converge just for those values of x,
82
00:08:59,130 --> 00:09:02,250
why not for all x, or why not for some different range of x?
83
00:09:02,850 --> 00:09:07,260
And then if your teacher is any good, he or she tells you, Well, there is a reason.
84
00:09:08,190 --> 00:09:16,330
And the reason is the complex plane, which you've heard of, even if you don't know much about it at that stage in your education,
85
00:09:17,500 --> 00:09:21,880
the teacher will tell you that this function has poles at i and -i.
86
00:09:27,680 --> 00:09:33,380
And the teacher will tell you that Taylor series also converge in the complex plane.
87
00:09:33,590 --> 00:09:39,229
And they do that in circles, so that the Taylor series for this function is
88
00:09:39,230 --> 00:09:45,680
going to converge in a circle bounded by the first singularity that it hits.
89
00:09:45,920 --> 00:09:53,600
And that's why it converges between minus one and one, because the complex plane is telling us the deeper truth about
90
00:09:54,840 --> 00:10:00,700
Taylor series. I remember my teacher telling me this and it just seemed so natural and important.
91
00:10:00,970 --> 00:10:06,630
And I guess I think of this as my favourite argument why the complex plane matters.
92
00:10:06,640 --> 00:10:11,920
Even for people who don't really use the complex plane, this is just a real question.
93
00:10:11,920 --> 00:10:16,210
Why does that thing converge where it does? But you need the complex plane to understand it.
94
00:10:17,230 --> 00:10:20,350
Now, the reason I wanted to mention that right now is this.
95
00:10:22,340 --> 00:10:28,100
This is all about Taylor series, what I just told you. And a Taylor series
96
00:10:30,610 --> 00:10:36,520
is interpolation by polynomials where all the interpolation points are a single point.
97
00:10:36,760 --> 00:10:42,370
So it's interpolation at x = 0. So: polynomial
98
00:10:44,120 --> 00:10:50,070
interpolation at x = 0.
99
00:10:50,310 --> 00:10:56,790
For example, the first term in the Taylor series means interpolating by a constant at x equals zero.
100
00:10:57,390 --> 00:11:01,920
These two terms mean that you've interpolated in some sense by a quadratic.
101
00:11:02,190 --> 00:11:07,150
You've matched not only the function value but its first two derivatives and so on.
102
00:11:07,170 --> 00:11:13,740
The further you go in the Taylor series, the more points you have at the origin, as it were, the more derivatives you're matching.
103
00:11:14,100 --> 00:11:20,940
Now that generalises instantly to interpolation in distinct points.
104
00:11:23,100 --> 00:11:30,090
For example, if you're on the unit interval, you would tend to have Chebyshev points,
105
00:11:30,570 --> 00:11:36,930
which are distributed as the real parts of equally spaced points on a circle.
106
00:11:41,830 --> 00:11:50,770
So those are Chebyshev points on an interval. When you interpolate in those points, you get exactly this kind of effect.
107
00:11:50,770 --> 00:11:55,030
And the only change is that instead of a circle, you get a Bernstein ellipse.
108
00:11:55,240 --> 00:12:05,470
So now, instead of interpolating many, many times at the origin, we're now interpolating
109
00:12:06,630 --> 00:12:11,080
many, many times at distinct points, the Chebyshev points.
110
00:12:12,580 --> 00:12:15,310
So there's minus one somewhere there and there's one.
111
00:12:17,110 --> 00:12:22,660
You spread your points of interpolation out in a sensible way and you get exactly the same effect.
112
00:12:23,380 --> 00:12:27,340
Where does the series converge? Where do these things behave well?
113
00:12:27,350 --> 00:12:30,190
Where do they converge to whatever function you're approximating?
114
00:12:30,430 --> 00:12:37,720
Well, it depends on some curve in the complex plane and the closest singularity on such a curve.
115
00:12:37,990 --> 00:12:41,740
And the curve is an ellipse with a focus there and a focus there.
116
00:12:42,130 --> 00:12:51,730
So if we're doing the same function as before...
117
00:12:53,510 --> 00:13:00,440
So if we were interpolating the very same function in Chebyshev points rather than at the origin,
118
00:13:00,740 --> 00:13:04,700
then the convergence would occur not in a disk but in a Bernstein ellipse.
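Here is a minimal MATLAB sketch of that claim (my illustration, not part of the lecture): interpolating 1/(1+x^2) in Chebyshev points, the maximum error decays geometrically like rho^(-n), where rho = 1 + sqrt(2) is the parameter of the Bernstein ellipse that first touches the poles at i and -i.

    % Chebyshev interpolation of f(x) = 1/(1+x^2): geometric convergence
    f  = @(x) 1./(1 + x.^2);
    xx = linspace(-1, 1, 1000)';        % fine grid for measuring the error
    for n = 4:4:24
        x = cos((0:n)'*pi/n);           % n+1 Chebyshev points
        c = polyfit(x, f(x), n);        % interpolant (monomial basis is
                                        % fine at these small degrees)
        err = max(abs(f(xx) - polyval(c, xx)));
        fprintf('n = %2d   error = %.2e   rho^(-n) = %.2e\n', ...
                n, err, (1 + sqrt(2))^(-n))
    end

The two printed columns should track each other, which is the Bernstein ellipse at work.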
119
00:13:07,910 --> 00:13:14,060
In other words, this obscure fact about Bernstein ellipses, which none of you had ever seen before I told you about it,
120
00:13:14,390 --> 00:13:20,540
is simply a natural generalisation of something that you learned at high school if you went to a good high school.
121
00:13:20,990 --> 00:13:25,040
Very natural stuff. It's been known for about 115 years.
122
00:13:25,040 --> 00:13:28,070
But a lot of people who should know this stuff don't.
123
00:13:32,850 --> 00:13:44,520
Okay. So let's get back to spectral methods, and I want to talk about Chebyshev spectral methods based on interpolating in points like this.
124
00:14:02,230 --> 00:14:08,800
So the principle that we've encountered so many times is interpolate and then differentiate.
125
00:14:09,160 --> 00:14:12,400
So we imagine that we're given
126
00:14:13,850 --> 00:14:24,049
some data at Chebyshev points. And the Chebyshev points are defined like this.
127
00:14:24,050 --> 00:14:35,390
The formula is x_j = cos(j*pi/n), for 0 <= j <= n.
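In MATLAB that formula is a one-liner; a tiny sketch just to fix the convention:

    n = 8;
    x = cos((0:n)'*pi/n);   % n+1 Chebyshev points, from x_0 = 1 down to x_n = -1
    disp(x')                % note the clustering near the two endpoints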
128
00:14:37,830 --> 00:14:43,850
And we want the derivative of that data, whatever that means.
129
00:14:43,850 --> 00:14:51,620
It's discrete. Of course, it doesn't have a derivative. But we want somehow to differentiate the data.
130
00:14:51,980 --> 00:14:56,300
Well, the idea is to interpolate it and differentiate the interpolant.
131
00:14:56,600 --> 00:15:02,780
So that's what we'll do. We'll set p equal to the unique polynomial
132
00:15:05,790 --> 00:15:09,270
of degree less than or equal to n,
133
00:15:10,910 --> 00:15:18,020
which interpolates the data. So p(x_j) = v_j, let's say, if that's your data.
134
00:15:19,730 --> 00:15:27,730
And then w_j = p'(x_j).
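As a minimal sketch of that recipe (my illustration, using plain MATLAB polyfit and polyder, which is perfectly adequate at small n; the differentiation matrix we're about to build is the better formulation in general):

    n = 2;
    x = cos((0:n)'*pi/n);         % the grid: 1, 0, -1
    v = exp(x);                   % some data v_j
    p = polyfit(x, v, n);         % the unique interpolating polynomial p
    w = polyval(polyder(p), x)    % w_j = p'(x_j), the 'derivative' of the data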
135
00:15:29,970 --> 00:15:33,480
And similarly for the second derivative or the third derivative or whatever.
136
00:15:33,810 --> 00:15:43,020
We've seen this kind of idea several times. It's the method that we described for deriving finite difference formulas in irregular point sets.
137
00:15:43,320 --> 00:15:50,790
It's the same as in Fourier spectral methods, where you interpolate by a trigonometric polynomial in equally spaced points.
138
00:15:51,150 --> 00:15:57,000
And here with Chebyshev spectral methods, you're interpolating by a polynomial in unequally spaced points.
139
00:15:59,370 --> 00:16:07,739
So this is a linear process, which means if you have one vector v and another vector,
140
00:16:07,740 --> 00:16:14,310
call it q, then the derivative of v plus q is the derivative of v plus the derivative of q.
141
00:16:14,760 --> 00:16:18,560
So since it's linear, we can write it as a matrix.
142
00:16:20,890 --> 00:16:25,360
And how would that matrix look? Well, there's some output vector
143
00:16:27,380 --> 00:16:30,470
which is equal to a matrix times an input vector.
144
00:16:37,310 --> 00:16:42,170
So linear algebra gives a clean way to describe Chebyshev differentiation.
145
00:16:43,880 --> 00:16:46,880
And let's do an example just to make it concrete.
146
00:16:49,490 --> 00:17:02,210
So, for example, if n = 2, then my Chebyshev points are just x_0 = 1, x_1 = 0, and x_2 = -1.
147
00:17:03,650 --> 00:17:13,280
So the polynomial interpolant p(x) would be 0.5 x (1 + x) v_0
148
00:17:14,890 --> 00:17:19,120
plus (1 + x)(1 - x)
149
00:17:20,590 --> 00:17:25,480
v_1, plus 0.5
150
00:17:26,140 --> 00:17:29,350
x (x - 1) v_2.
151
00:17:32,100 --> 00:17:39,570
So that's the polynomial that interpolates arbitrary data v_0, v_1, v_2 at 1, 0, -1. The derivative
152
00:17:42,630 --> 00:17:48,720
is equal to (0.5 + x) v_0
153
00:17:50,830 --> 00:17:57,100
minus 2x v_1, plus (x - 0.5)
154
00:17:57,580 --> 00:18:06,670
v_2. And that tells us what the matrix is.
155
00:18:06,930 --> 00:18:14,970
This is a three-by-three matrix whose entries are obtained by evaluating these three functions at the three grid points.
156
00:18:15,210 --> 00:18:25,700
So, I guess, each column comes from evaluating one of these at the three grid points.
157
00:18:25,910 --> 00:18:34,970
So the zeroth column, if you like, comes from evaluating this at the three grid points: 1.5, 0.5, -0.5.
158
00:18:35,360 --> 00:18:41,990
And then the first column comes from evaluating this at the grid points: -2, 0, 2.
159
00:18:42,620 --> 00:18:46,940
And the second column or this column comes from evaluating that at the grid points.
160
00:18:47,420 --> 00:18:51,710
0.5, -0.5, -1.5.
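Collecting those columns, the matrix is the following, and a one-line check confirms it differentiates quadratics exactly (a small sketch, consistent with the formulas above):

    D = [ 1.5  -2.0   0.5
          0.5   0.0  -0.5
         -0.5   2.0  -1.5 ];
    x = [1; 0; -1];          % the three Chebyshev points
    D*(x.^2)                 % data from x^2; returns [2; 0; -2], i.e. 2x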
161
00:18:53,670 --> 00:18:58,950
So that three-by-three matrix is what you use
162
00:18:58,950 --> 00:19:04,740
if you have data on a three-point grid and you want to interpolate it, differentiate it, and evaluate the result on the same grid.
163
00:19:05,670 --> 00:19:14,740
It's such a natural idea, and amazingly, nobody really wrote these things down until the 1980s.
164
00:19:14,770 --> 00:19:23,349
It's strange how late it was that the Chebyshev methods came into play. In fact, the names may not mean much to you,
165
00:19:23,350 --> 00:19:27,520
but these matrices were first written down by Gottlieb,
166
00:19:27,530 --> 00:19:30,850
Hussaini, and Orszag
167
00:19:32,490 --> 00:19:45,790
in 1984. I've mentioned the odd bit of history that all these spectral methods didn't really get much attention until the FFT was discovered,
168
00:19:45,790 --> 00:19:49,780
the fast Fourier transform, because in principle that accelerates them.
169
00:19:50,080 --> 00:19:55,210
But then once they started getting attention, people did all sorts of things without using the FFT.
170
00:19:55,660 --> 00:19:59,080
Of course we haven't used the FFT here. We've just written down a matrix.
171
00:20:01,320 --> 00:20:05,520
So there are, of course, general formulas for these matrices.
172
00:20:07,320 --> 00:20:11,850
There's a code that I put on the website called cheb.m, which I'll show you in a moment,
173
00:20:11,850 --> 00:20:18,720
which is just five or six lines of MATLAB that computes these things, or in Chebfun you can type diffmat.
174
00:20:20,830 --> 00:20:29,950
And get them. There's a slight difference between the two.
175
00:20:30,580 --> 00:20:40,350
cheb.m uses exactly what I've told you, where you have the cosine and therefore the indices go from right to left.
176
00:20:40,360 --> 00:20:44,530
So there's the zeroth point and there's the nth point. In Chebfun,
177
00:20:44,530 --> 00:20:47,260
we decided that was too confusing. So we go left to right.
178
00:20:47,620 --> 00:20:53,500
So Chebfun has a minus sign there, which means that would be the zeroth point and that would be the nth point.
179
00:20:54,160 --> 00:20:58,390
All that does is change some signs. So, in fact, let's look at the
180
00:21:01,660 --> 00:21:06,390
examples. If you look at your handout, you'll see, turning one of them sideways,
181
00:21:06,400 --> 00:21:12,160
there's the little cheb routine which computes Chebyshev differentiation matrices.
182
00:21:19,370 --> 00:21:24,560
So if I say cheb(3)... oops, I meant to say cheb(2).
183
00:21:27,260 --> 00:21:31,850
So cheb(2) corresponds to n equals two, which means a three-by-three matrix.
184
00:21:32,210 --> 00:21:43,100
So there you see exactly this matrix. And if I say cheb(3), or cheb(6), you see...
185
00:21:45,420 --> 00:21:50,420
we get these dense matrices; they lack the sparsity that would be so appealing.
186
00:21:50,430 --> 00:21:57,510
On the other hand, you can often get away with a 20 by 20 or 30 by 30 matrix when a finite difference scheme might require hundreds.
187
00:21:57,960 --> 00:22:03,800
So they're pretty powerful when they work. Let's compare with diffmat.
188
00:22:03,830 --> 00:22:07,580
If I say cheb(6), I get a seven-by-seven matrix.
189
00:22:07,880 --> 00:22:11,060
I think I'll get the same if I say diffmat(7).
190
00:22:11,390 --> 00:22:15,430
Let me see if that's true. Yes, it's true.
191
00:22:15,440 --> 00:22:16,970
Except for the signs.
192
00:22:17,360 --> 00:22:24,170
So you'll notice it's the very same matrix, but virtually every sign has changed, because of numbering the nodes from left to right.
193
00:22:28,550 --> 00:22:38,270
Suppose I say cheb(5) squared; that corresponds to taking the second derivative and evaluating it.
194
00:22:38,570 --> 00:22:48,230
Let's see, if I say diffmat(6) squared, again we're taking the second derivative.
195
00:22:49,160 --> 00:22:57,170
Let's see, if I say diffmat(6,2), where rather than squaring the matrix, I'm asking the code to take the second derivative.
196
00:22:57,740 --> 00:23:06,770
There you have the same thing. As it turns out, in Chebfun we do a lot of fancier stuff, where we're often using rectangular versions of these matrices.
197
00:23:07,070 --> 00:23:10,850
So I could say something like diffmat([4 6], 2).
198
00:23:12,820 --> 00:23:18,250
And let me explain what that is by reminding you what the previous one was.
199
00:23:18,640 --> 00:23:27,040
That matrix interpolates data on a six-point grid, differentiates twice, and samples the result on the six-point grid.
200
00:23:27,760 --> 00:23:35,470
This matrix interpolates data on a six-point grid, differentiates twice, and samples the result on a four-point grid.
201
00:23:35,800 --> 00:23:40,300
And it turns out that's a slick thing to do if you have a couple of boundary conditions you want to impose.
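In Chebfun, that whole conversation looks roughly like this (a sketch assuming Chebfun is on your path; see help diffmat for the authoritative calling sequences):

    D   = diffmat(6);          % 6x6 first-derivative matrix, nodes left to right
    D2a = diffmat(6)^2;        % second derivative by squaring the matrix...
    D2b = diffmat(6, 2);       % ...or directly as a second-order diffmat
    R   = diffmat([4 6], 2);   % rectangular version: data on 6 points,
                               % second derivative sampled on 4 points
    size(R)                    % returns 4 6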
202
00:23:41,440 --> 00:24:00,540
So it's a very well developed technology. And what I'm going to do now is show you a couple of examples of how you can use it to solve a PDE.
203
00:24:01,290 --> 00:24:06,600
So let's begin with the one dimensional wave equation.
204
00:24:12,170 --> 00:24:16,180
So suppose I wanted to solve u_tt = u_xx.
205
00:24:16,190 --> 00:24:22,580
So it's the second-order one-dimensional wave equation, and to make it as simple as possible,
206
00:24:22,590 --> 00:24:30,350
I'll put it on the unit interval and I'll have the boundary condition u(±1) = 0.
207
00:24:32,550 --> 00:24:40,300
And I'll use a leapfrog formula, which means that my time discretisation is leapfrog.
208
00:24:40,320 --> 00:24:43,650
But my space discretisation will be with these spectral matrices.
209
00:24:43,890 --> 00:24:49,980
So if you wanted to draw a stencil, it would have to include the entire grid in space.
210
00:24:52,390 --> 00:24:58,120
And then something like this in time. Nobody draws stencils for spectral methods.
211
00:24:58,390 --> 00:25:01,270
They're always global, so somehow that's not relevant.
212
00:25:02,020 --> 00:25:07,780
But in principle, this is what you do at that point of the grid in order to get from one time step to the next.
213
00:25:08,470 --> 00:25:15,700
So the formula would be v^(n+1) minus 2 v^n,
214
00:25:16,000 --> 00:25:28,090
plus v^(n-1), all divided by the time step squared, equals the second-derivative matrix times v^n.
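Here is a stripped-down sketch of the whole scheme (modelled on the handout codes, assuming a function cheb(N) that returns the differentiation matrix D and the grid x):

    N = 64;  [D, x] = cheb(N);  D2 = D^2;
    dt = 8/N^2;                          % small time step, for stability
    v    = exp(-200*x.^2);               % initial condition
    vold = exp(-200*(x - dt).^2);        % previous time level
    for i = 1:round(4/dt)
        w = 2*v - vold + dt^2*(D2*v);    % the leapfrog formula above
        w([1 end]) = 0;                  % impose u(+-1) = 0
        vold = v;  v = w;
        plot(x, v), axis([-1 1 -1 1]), drawnow
    end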
215
00:25:31,710 --> 00:25:36,810
So I've just told you a whole scheme for solving this problem.
216
00:25:39,330 --> 00:25:45,030
That's all you need to do. It'll work. And that's what happens in the code called M55.
217
00:25:45,270 --> 00:25:57,000
So if you look at M55, you'll see at the top it constructs the matrix D, squares it, and then it goes into a leapfrog loop.
218
00:25:59,390 --> 00:26:02,120
It turns out you need a small time step for stability,
219
00:26:02,120 --> 00:26:08,899
which is a feature of these methods, and it goes through leapfrog and does its thing.
220
00:26:08,900 --> 00:26:23,860
And when you plot it, it's pretty interesting. So let's do that. So I will say M55, wave equation cheb.
221
00:26:25,030 --> 00:26:28,990
And it asks me how many grid points I want. So I'll say 64.
222
00:26:32,770 --> 00:26:40,060
Now, this is kind of an amazing plot, to me at least, because you can see the vertical lines showing you where the grid is.
223
00:26:40,750 --> 00:26:44,230
And they're, of course, totally non-uniform, because it's a Chebyshev grid.
224
00:26:44,590 --> 00:26:51,570
And yet the wave is being simulated beautifully. It seems to be wobbling a bit, but that's just because we're connecting the dots by straight lines.
225
00:26:51,580 --> 00:26:58,180
It's being simulated to very high accuracy, even when the grid spacing changes so dramatically.
226
00:26:59,080 --> 00:27:06,030
Nothing funny happens. It works just fine. Let's do it again on a finer grid.
227
00:27:19,650 --> 00:27:34,100
So try 128. It's a paradox of these polynomial methods that even when the physics is completely translation invariant,
228
00:27:34,580 --> 00:27:39,170
the sensible thing to do is use a grid that is completely not translation invariant.
229
00:27:45,210 --> 00:27:49,290
I totally lost track of how MATLAB does colour. You notice it's now changed the colour.
230
00:27:50,010 --> 00:27:54,060
I don't know why, but there you are. In the code,
231
00:27:56,430 --> 00:28:02,220
you'll see that there's a line where I've changed things for Neumann boundary conditions.
232
00:28:02,790 --> 00:28:06,840
So let me say a word about how one could do that. What's the mathematics?
233
00:28:14,680 --> 00:28:22,479
So suppose I said that I want the boundary condition on the right to be a Neumann condition.
234
00:28:22,480 --> 00:28:25,510
So I'll say u(-1) = 0,
235
00:28:25,840 --> 00:28:29,320
but u at one has a zero derivative: u'(1) = 0.
236
00:28:33,060 --> 00:28:37,230
So. Well, I didn't really tell you how I had imposed the first boundary condition,
237
00:28:37,230 --> 00:28:42,750
but it's the usual story that when everything is zero, the matrices sort of automatically do that for you.
238
00:28:43,080 --> 00:28:47,160
But now, of course, we have to do something substantive to impose that condition.
239
00:28:47,850 --> 00:28:52,290
Well, we use the spectral philosophy.
240
00:28:52,470 --> 00:29:06,930
So here, as always, we interpolate by a polynomial, and then we evaluate the derivative at one and we set that to zero.
241
00:29:07,110 --> 00:29:19,100
And what that turns out to lead to is this. We have our differentiation matrix applied to some data.
242
00:29:25,610 --> 00:29:31,640
But now we want to take one of the rows of that matrix and use it to impose that boundary condition.
243
00:29:31,910 --> 00:29:35,990
And for example, we can take the first row, because that corresponds to
244
00:29:37,530 --> 00:29:46,200
position x_0, the rightmost position, and we could replace that row of this matrix by the first row of the first-derivative matrix.
245
00:29:47,570 --> 00:29:51,350
I'm not giving you all the details, but by following through on that concept,
246
00:29:51,740 --> 00:29:56,720
you replace rows of your square matrix with special things for boundary conditions.
247
00:29:56,990 --> 00:30:00,470
So that's the principle of how you do boundary conditions in spectral methods.
248
00:30:00,860 --> 00:30:07,790
You start out with a square matrix and replace some of its rows by other things, or in fact more generally,
249
00:30:08,480 --> 00:30:15,080
you construct a rectangular matrix and then add some special rows to incorporate boundary conditions.
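A small worked example of the row-replacement idea (mine, not the lecture's code): solve u'' = 1 on [-1,1] with u'(1) = 0 and u(-1) = 0, again assuming cheb(N) from the handout, in which node 0 is the right endpoint x = 1.

    N = 16;  [D, x] = cheb(N);  D2 = D^2;
    A = D2;  b = ones(N+1, 1);                   % u'' = 1 at interior nodes
    A(1,:) = D(1,:);  b(1) = 0;                  % row swap: Neumann u'(1) = 0
    A(end,:) = 0;  A(end,end) = 1;  b(end) = 0;  % row swap: Dirichlet u(-1) = 0
    u = A\b;
    max(abs(u - (x.^2/2 - x - 3/2)))             % matches the exact solution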
250
00:30:15,290 --> 00:30:18,589
So in fact, that's the state of the art these days, though
251
00:30:18,590 --> 00:30:28,690
it's not what I'm showing you in these demos. What you typically do is you make a rectangular matrix for the differential part.
252
00:30:35,060 --> 00:30:39,380
For the differential equation per se. And then you add
253
00:30:40,760 --> 00:30:45,330
some special rows for the boundary conditions.
254
00:30:48,760 --> 00:30:56,379
So if you have a second-order problem, for example, mathematically people would say that's an operator of index two, you construct spectral
255
00:30:56,380 --> 00:31:00,820
discretisation matrices that are rectangular, with two fewer rows than columns,
256
00:31:01,090 --> 00:31:08,560
and then you add two more rows for the boundary conditions. So that's a very pretty technique and very accurate.
257
00:31:10,460 --> 00:31:14,750
So in this code, I'm not quite doing it that nicely. I'm doing it the old-fashioned way.
258
00:31:15,560 --> 00:31:18,590
You see, there's a line at the bottom with a percent sign.
259
00:31:18,590 --> 00:31:27,170
If we remove that percent sign, then in effect, we're doing what I described here and we'll get a Neumann boundary condition instead.
260
00:31:29,950 --> 00:31:33,630
So of course, I've got that preloaded. It's called M55
261
00:31:33,730 --> 00:31:39,570
Neumann. Same question, let's say 128.
262
00:31:41,910 --> 00:31:45,020
So it's a Neumann condition at the right. Can you all predict what will happen?
263
00:31:49,130 --> 00:31:53,330
It doubles in amplitude briefly. The wave sort of sloshes up against the end.
264
00:31:53,540 --> 00:31:58,100
Now, at the left, it's still a Dirichlet condition. So what will happen is...
265
00:31:59,360 --> 00:32:03,860
It changes sign, if you like.
266
00:32:03,860 --> 00:32:11,900
This is a clarinet, because the clarinet has one boundary condition at one end and another at the other: a node at one end and an antinode at the other.
267
00:32:12,770 --> 00:32:30,770
A flute, on the other hand, has the same boundary conditions at both. Let's do it in two dimensions.
268
00:32:40,050 --> 00:32:45,510
So suppose we have u_tt = u_xx
269
00:32:45,510 --> 00:32:53,720
plus u_yy. So that's the second-order wave equation in two dimensions.
270
00:32:53,990 --> 00:33:03,170
And of course, the simplest thing we could do is pose it on the unit square, going from -1 to 1 in both directions, with zero boundary conditions.
271
00:33:06,740 --> 00:33:15,290
So here we have the usual problem of deciding how to represent higher-dimensional objects in a computer language.
272
00:33:15,950 --> 00:33:23,180
In principle, the matrices involved here could be regarded as four-dimensional objects, which will act on a two-dimensional grid.
273
00:33:23,570 --> 00:33:30,350
So the grid will be Chebyshev in each direction.
274
00:33:33,550 --> 00:33:36,760
So that's sort of the x coordinates.
275
00:33:36,760 --> 00:33:47,720
And then similarly in y, we'd have this. So Chebyshev grids are always irregular, and in more than one dimension you get this kind of effect.
276
00:33:47,740 --> 00:33:54,580
The grid points are, of course, these. So they tend to cluster near the boundaries, which is sometimes good,
277
00:33:54,580 --> 00:34:00,670
if there's a lot happening at the boundaries. Sometimes it's wasteful, if the physics is translation invariant.
278
00:34:00,910 --> 00:34:09,100
So for this equation, it's wasteful to have all those points near the boundaries, but it's the right thing to do. In the code
279
00:34:09,280 --> 00:34:15,700
M56, I do a slick trick that works in two dimensions, though not in more dimensions,
280
00:34:16,240 --> 00:34:27,520
and that is, you notice near the top we define D2 to be the square of one of these spectral matrices.
281
00:34:29,270 --> 00:34:35,450
But then we also define the transpose of that.
282
00:34:38,990 --> 00:34:43,399
The trick that's being used is this: to differentiate with respect to y, we're going to
283
00:34:43,400 --> 00:34:48,500
multiply; we're going to have a grid of values multiplied on the left
284
00:34:48,770 --> 00:34:53,749
by a matrix, and that will act on each column to differentiate with respect to y. To
285
00:34:53,750 --> 00:34:58,159
differentiate with respect to x, we'll have the very same matrix of values,
286
00:34:58,160 --> 00:35:05,150
but we'll act on it from the right. And that will have the effect of acting on each row and doing the same operation to each row.
287
00:35:05,600 --> 00:35:10,130
So we're using a slick trick in this code. You see at the bottom it says uxx,
288
00:35:11,750 --> 00:35:19,430
which is the second derivative with respect to x: it's the data matrix V multiplied by something on the right.
289
00:35:20,180 --> 00:35:26,570
And uyy, the second derivative in y, is the data matrix V multiplied by something on the left.
290
00:35:27,230 --> 00:35:31,220
So that's a very nice way to code things, sometimes in two dimensions.
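In isolation, the trick looks like this (my sketch, with cheb(N) as before; rows of V run in y and columns in x):

    N = 24;  [D, x] = cheb(N);  D2 = D^2;
    [xx, yy] = meshgrid(x, x);        % V(i,j) corresponds to (x(j), y(i))
    V = exp(-5*(xx.^2 + yy.^2));      % sample data on the tensor grid
    Uyy = D2*V;                       % left-multiply: d2/dy2 down each column
    Uxx = V*D2.';                     % right-multiply: d2/dx2 along each row
    L = Uxx + Uyy;                    % the discrete Laplacian of the data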
291
00:35:34,660 --> 00:35:50,280
Let's see what happens with that. I do leapfrog again, our favourite scheme for wave equations, and it's called M56, leapfrog 2D cheb.
292
00:35:56,060 --> 00:36:02,090
So you can sort of see the irregular grid, spaced more closely near the boundary than in the middle.
293
00:36:19,320 --> 00:36:33,250
Okay. Let me say a word about spectral methods for somewhat more realistic problems, which typically means three dimensions.
294
00:36:35,570 --> 00:36:45,560
So remember, we have the Fourier idea, which is for periodic dimensions, and we have the Chebyshev idea, which is for non-periodic.
295
00:36:46,490 --> 00:36:54,190
Mathematically, they're very closely related. Both of them can be done using the FFT.
296
00:36:57,950 --> 00:37:04,310
Maybe as a rule of thumb, if your grid has at least 128 points in any direction, you'll definitely use the FFT.
297
00:37:04,730 --> 00:37:11,000
If it's 64, you might well use the FFT. And of course, if it's a thousand or 2000, you definitely will.
298
00:37:11,420 --> 00:37:14,479
Some of the high end simulations people do are quite incredible.
299
00:37:14,480 --> 00:37:23,600
So for example, there are certainly people who've done 4096 cubed in simulating homogeneous turbulence.
300
00:37:24,170 --> 00:37:28,610
They don't care about boundaries. They just want to know how the Navier-Stokes equations behave.
301
00:37:29,000 --> 00:37:34,280
So that's a big grid, 4096 cubed, and then there's time stepping on top of that.
302
00:37:34,880 --> 00:37:40,820
Those are serious computations. Probably these days they're up to 8192 cubed or something.
303
00:37:42,170 --> 00:37:48,380
Now, what often happens is that sometimes these types of grids are mixed together.
304
00:37:51,680 --> 00:37:55,909
And it's obvious why in a case like this.
305
00:37:55,910 --> 00:38:02,389
Well, you can't quite tell from the picture. Is it periodic or not? When people do turbulence calculations, they would make it periodic.
306
00:38:02,390 --> 00:38:08,180
So the turbulence people would discretise this by Fourier cubed, as it were.
307
00:38:10,010 --> 00:38:17,590
Other people who really did want to work in a cube might discretise it by Chebyshev cubed, but then there will be problems that mix things together.
308
00:38:17,600 --> 00:38:22,760
So, for example, if you're in a cylinder, which people very often would be in fluid mechanics,
309
00:38:23,540 --> 00:38:30,530
maybe you're studying flow in a pipe or something like that. So you have a stream wise direction, obviously.
310
00:38:32,560 --> 00:38:41,530
The different dimensions have different geometry. In the other directions, you might have a radius, for example, and then you might have an angle theta.
311
00:38:44,370 --> 00:38:49,049
And probably the most common way people would discretise something like this is they
312
00:38:49,050 --> 00:38:56,640
would take theta to be periodic, of course, and they would take r to be non-periodic, of course.
313
00:38:59,860 --> 00:39:07,269
And then maybe the subtlest dimension would be that one, where physically your physics isn't periodic,
314
00:39:07,270 --> 00:39:13,270
but very often what people do for simplicity is make the pipe pretty long and make it periodic anyway.
315
00:39:13,690 --> 00:39:22,600
So x would often be Fourier, and in a simulation, maybe the picture I should draw should be a bit longer.
316
00:39:22,870 --> 00:39:31,670
That might be the kind of grid people might use. Cylinders are hard and spheres are worse. In a disk,
317
00:39:32,120 --> 00:39:36,469
we can imagine a radial variable and a theta variable.
318
00:39:36,470 --> 00:39:40,100
So that would be Chebyshev, Fourier. What do you do with a sphere?
319
00:39:41,030 --> 00:39:42,730
First of all, drawing it is harder.
320
00:39:46,890 --> 00:39:53,910
And one of the funny things about computational science is that this problem just never goes away decade after decade.
321
00:39:53,940 --> 00:39:58,470
There's never a consensus on the right way to handle this. There are many ideas out there.
322
00:39:59,040 --> 00:40:02,430
It's hugely important, not least for climate modelling.
323
00:40:02,820 --> 00:40:06,120
Whenever you do things on the whole earth, of course you have to use a sphere.
324
00:40:06,480 --> 00:40:11,610
But there are all sorts of problems connected with singularities of your grids.
325
00:40:11,910 --> 00:40:16,460
What do you do? Even in a disk, there's a kind of grid singularity at the middle; on a sphere,
326
00:40:16,470 --> 00:40:22,290
it's much worse. You could put something at the North Pole and the South Pole, but that leads to trouble of all kinds.
327
00:40:22,770 --> 00:40:27,840
Many ideas are out there. Sometimes people expand things in spherical harmonics, but that's not perfect.
328
00:40:28,170 --> 00:40:36,960
Sometimes people use a couple of periodic variables and one non-periodic, if you have the inside of the sphere. Many different ideas out there.
329
00:40:37,380 --> 00:40:44,490
But I think the remarkable thing is that there's really no consensus yet about the right way to handle a sphere.
330
00:40:45,780 --> 00:40:49,620
We're having fun in Chebfun with something called Spherefun,
331
00:40:51,140 --> 00:40:54,800
which has not quite been released yet, but will be in a moment, in a few weeks.
332
00:40:56,570 --> 00:41:01,490
In Spherefun, we actually map a sphere onto two periodic variables.
333
00:41:01,490 --> 00:41:06,920
So of course longitude is periodic, but we also treat latitude as periodic.
334
00:41:07,250 --> 00:41:13,140
So that's one way to handle spheres.
335
00:41:13,170 --> 00:41:18,000
I wouldn't say it's heavily used yet in computational science, but it's there.
336
00:41:20,310 --> 00:41:24,690
So that has the effect that you have a 2 to 1 mapping.
337
00:41:24,690 --> 00:41:32,909
The entire earth is covered twice. So, I'm not very good at drawing continents, but let's see: if Africa is down here,
338
00:41:32,910 --> 00:41:37,530
Right. And Europe, then you have another copy of them sort of up here.
339
00:41:42,120 --> 00:41:52,540
It weakens the singularity problems. Now I'm going to give you one more example, going back to one-dimensional problems and nonlinearity.
340
00:41:57,740 --> 00:42:04,700
And this is the Allen-Cahn equation, for which there's another handout from the coffee table book.
341
00:42:16,120 --> 00:42:19,570
I think I mentioned the Allen-Cahn equation before. It's one of my favourites.
342
00:42:19,960 --> 00:42:24,970
It's a nonlinear reaction-diffusion equation that leads to interesting structure.
343
00:42:32,270 --> 00:42:37,670
So the equation is u_t = u_xx
344
00:42:39,700 --> 00:42:51,540
plus u, minus u cubed. And the version I'm going to show you in a moment has the boundary conditions u(-1) = -1,
345
00:42:52,230 --> 00:42:55,790
and u(1) = 1, on the unit interval.
346
00:42:57,830 --> 00:43:03,320
So the interesting thing about the physics here is that this equation has three steady states.
347
00:43:05,950 --> 00:43:14,140
So the steady solutions would be u = -1, or u = 0, or u = 1.
348
00:43:14,620 --> 00:43:18,020
If you plug that in, this term becomes zero.
349
00:43:18,430 --> 00:43:22,420
And if it's a steady state, that term would be zero. So this would be zero.
350
00:43:22,690 --> 00:43:28,360
Apart from boundary conditions, these constants would all be solutions of the Allen-Cahn equation.
351
00:43:28,540 --> 00:43:32,500
And what makes it physically interesting is that these two are stable.
352
00:43:35,520 --> 00:43:41,510
Whereas this one is unstable. So what that means is that if you take some initial condition.
353
00:43:43,940 --> 00:43:47,060
So I'll just I'll draw a function. Here's one. Here's minus one.
354
00:43:50,060 --> 00:44:01,810
Suppose your initial condition looks like that. Well, what happens is that these two stable states attract the solution,
355
00:44:02,020 --> 00:44:07,600
which you can interpret in metallurgical fashion, if you know enough metallurgy,
356
00:44:08,470 --> 00:44:13,270
so that what you'll find is that this thing approaches after a short time,
357
00:44:13,270 --> 00:44:16,599
begins to look like that, and this begins to look like that.
358
00:44:16,600 --> 00:44:23,990
And so solutions tend to -1 or +1, while ignoring zero in the middle.
359
00:44:24,530 --> 00:44:28,469
But often those things don't stay forever.
360
00:44:28,470 --> 00:44:32,970
And in particular, suppose you have a solution that does something like
361
00:44:34,270 --> 00:44:42,419
this, minus one and one. It may look like that for a very long time, an exponentially long time.
362
00:44:42,420 --> 00:44:49,060
But then eventually, in principle, it will turn into just a constant solution at minus one.
363
00:44:50,260 --> 00:44:55,360
So these equations can describe things that happen on very long timescales in material science.
364
00:44:57,320 --> 00:45:04,430
Now, to discretise time, we could use the fancy stuff that we talked about for stiff PDEs.
365
00:45:04,700 --> 00:45:09,470
But the code I'm about to show you is not fancy. It just does leapfrog.
366
00:45:12,650 --> 00:45:16,340
So this one is called M57.
367
00:45:17,980 --> 00:45:22,810
But I guess we should just look at the code. So if you look at M57,
368
00:45:32,850 --> 00:45:39,300
you'll see that it's a Chebyshev code up top, and it's just a 24-point grid in this particular version.
369
00:45:39,870 --> 00:45:46,350
And there's an epsilon which is in front of the second derivative,
370
00:45:47,780 --> 00:45:51,590
to decide how slowly we want things to move, basically.
371
00:45:55,400 --> 00:45:58,940
And what this code does is just do leapfrog over and over again.
372
00:45:58,940 --> 00:46:04,070
You can see the key step is v = v + k times, and then eps times D2 v.
373
00:46:04,370 --> 00:46:10,390
So that's the second derivative, D2 being the square of the spectral differentiation matrix, plus v minus v
374
00:46:10,400 --> 00:46:14,750
cubed. So we're just plugging that into leapfrog in the obvious way.
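For reference, a stripped-down sketch of that loop (my reconstruction of the step just described, not the handout code itself), with the boundary values reinstated after every update:

    N = 16;  [D, x] = cheb(N);  D2 = D^2;
    ep = 0.1;                           % the diffusion coefficient epsilon
    k  = 2e-4;                          % well inside the explicit stability limit
    v  = 0.53*x + 0.47*sin(-1.5*pi*x);  % an initial condition with v(+-1) = +-1
    for i = 1:round(10/k)
        v = v + k*(ep*(D2*v) + v - v.^3);   % the key step from the code
        v([1 end]) = [1; -1];               % boundary conditions u(+-1) = +-1
    end
    plot(x, v)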
375
00:46:16,160 --> 00:46:29,860
Let's see what happens for that. So it's called M57.
376
00:46:31,860 --> 00:46:41,370
And it asks me for an epsilon. So I'll try 0.2. This is the initial condition I've chosen; 0.2 is fairly boring.
377
00:46:42,390 --> 00:46:46,420
So let's do it again, with 0.15.
378
00:46:49,510 --> 00:46:51,700
A little less boring. Let's try 0.1.
379
00:46:55,160 --> 00:47:01,820
So now you can see what I described, that it goes up to plus or minus one, but then eventually flips around.
380
00:47:03,750 --> 00:47:07,860
Let's go further. So I'll say 0.09.
381
00:47:09,890 --> 00:47:21,620
So this is the metastable state. Metastability is a huge topic in science, and the classic example is radioactivity.
382
00:47:21,630 --> 00:47:31,260
If you have a nucleus which is radioactive, well, that's a metastable nucleus, which mathematically will not stay forever.
383
00:47:31,500 --> 00:47:36,270
But the timescale may be very long. Let's try 0.08.
384
00:47:38,160 --> 00:47:45,300
So you can see a very small change in the diffusion coefficient makes a significant change in how long it takes.
385
00:47:50,210 --> 00:47:53,600
And for this crude code, that's as far as we can go.
386
00:47:53,930 --> 00:48:05,659
If I try to go to 0.07... So far it looks good, but actually it's not numerically accurate enough.
387
00:48:05,660 --> 00:48:12,170
And something now is going to happen that shouldn't, I think. It takes a long time.
388
00:48:17,370 --> 00:48:21,300
So now we've switched from radium to U-235.
389
00:48:24,650 --> 00:48:28,100
Now notice it's broken the symmetry. Everything in this problem was symmetric.
390
00:48:28,100 --> 00:48:31,250
So that's got to be wrong. That's not the mathematically correct answer.
391
00:48:31,580 --> 00:48:36,890
Although if your philosophy is properly adjusted when things go wrong, you can see some deeper truth in it.
392
00:48:37,540 --> 00:48:44,090
But metastable solutions do have that property, that they can be tremendously susceptible to slight perturbations.
393
00:48:44,450 --> 00:48:51,650
So in some sense maybe that's true after all. Let's play with the Allen-Cahn equation also in Chebfun.
394
00:48:51,800 --> 00:49:00,530
So if I say chebgui, one of the things there is...
395
00:49:03,850 --> 00:49:16,520
Allen-Cahn. So there you have the equation with a 0.03 diffusion constant on the domain -4 to 4, and you have three initial conditions.
396
00:49:16,790 --> 00:49:20,000
And notice each one is a little broader than the previous one.
397
00:49:20,240 --> 00:49:24,770
So that means that they will act on different time scales.
398
00:49:25,160 --> 00:49:35,680
So when I solve that, the narrow one goes first, and then that one.
399
00:49:35,680 --> 00:49:38,860
And if I make them a little bit wider, let's see what happens.
400
00:49:39,190 --> 00:49:45,169
So I'll... okay.
401
00:49:45,170 --> 00:49:48,379
I'll change these numbers that determine how wide they are. Right.
402
00:49:48,380 --> 00:49:55,030
So I'll change 35 to 25 and I'll change 11 to 9.
403
00:49:56,250 --> 00:50:04,590
And I'll change 7 to 5. So now it's slightly wider initial conditions.
404
00:50:09,640 --> 00:50:14,730
I guess I didn't change them enough to have a pronounced effect. Well, but you see, the time hasn't gone far enough.
405
00:50:14,740 --> 00:50:20,620
Let's finish the job. I'll change the time interval to 100.
406
00:50:21,310 --> 00:50:38,640
And just to take it a little further, I'll change 5 to 4. Okay.
407
00:50:39,220 --> 00:50:44,560
So just to remind you, on Friday, it's a special lecture, the history of numerical algorithms.
408
00:50:45,010 --> 00:50:45,340
Thanks.