1
00:00:00,750 --> 00:00:08,250
I hope the assignment went okay. The solution sheets were written rather hastily, and in fact page two was omitted from them.
2
00:00:08,250 --> 00:00:14,460
So you can see that's a separate handout. Sorry about that. Anyway, I hope you've turned them in and I hope you had some fun with them.
3
00:00:16,470 --> 00:00:26,010
We've been discretising PDEs, although the assignment was ODEs, and I wanted to talk a bit today about order of accuracy for discretisations,
4
00:00:26,490 --> 00:00:30,150
which is thoroughly analogous to what we did for ODEs.
5
00:00:34,000 --> 00:00:37,120
The idea being to plug in Taylor series and see what cancels.
6
00:00:37,630 --> 00:00:43,540
And we're going to illustrate this on various examples. And the theme is always implicit and explicit
7
00:00:43,840 --> 00:00:47,560
discretisation, and the interplay between them in various ways.
8
00:00:47,770 --> 00:00:49,360
So let's start with an example.
9
00:00:49,930 --> 00:00:58,990
Look at the heat equation in one D, which is u_t = u_xx, and let's discretise it by a backward Euler style approach.
10
00:00:59,290 --> 00:01:04,270
And if I draw you the stencil, then that's all you need to know to know what I'm going to do.
11
00:01:04,540 --> 00:01:13,179
That's my stencil. So in other words, my discretisation will be v^(n+1) minus v^n at position
12
00:01:13,180 --> 00:01:19,610
j, divided by the time step, equals v_(j+1) minus two
13
00:01:19,630 --> 00:01:27,520
v_j plus v_(j-1), divided by the space step squared.
14
00:01:28,810 --> 00:01:36,130
And then that picture tells you that I'm going to do this implicitly, which means we have n+1 up here.
15
00:01:36,910 --> 00:01:43,840
So that means that in order to get from time step n to time step n+1, we have to solve a system of equations.
16
00:01:44,380 --> 00:01:47,680
But the system of equations is linear because it's a linear problem.
17
00:01:47,980 --> 00:01:52,540
And moreover, it's tridiagonal because it's a one dimensional linear problem.
18
00:01:52,750 --> 00:01:58,510
So this is easy linear algebra. You or MATLAB can solve that system without any difficulty.
19
00:02:00,590 --> 00:02:06,790
If we write it as a system, I could write it as a matrix B times the
20
00:02:06,800 --> 00:02:15,680
new vector equals old vector, and the matrix B is a tridiagonal matrix.
21
00:02:16,580 --> 00:02:25,670
So if you'll allow me to use notation that I hope is pretty clear, it's a tridiagonal matrix with a number minus sigma on the subdiagonal,
22
00:02:26,030 --> 00:02:36,530
and then one plus two sigma on the main diagonal and minus sigma on the superdiagonal, where sigma is k over h squared.
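For concreteness, here is a minimal sketch of one such backward Euler step. The lecture's actual code (m42) is MATLAB and uses a sparse matrix; this Python rendering is dense for brevity, and the function name and grid parameters are illustrative only:

```python
import numpy as np

def backward_euler_heat_step(v, sigma):
    """One backward Euler step for u_t = u_xx with zero Dirichlet boundary
    conditions; v holds the interior values and sigma = k / h**2.
    We solve B v_new = v_old with B = tridiag(-sigma, 1 + 2*sigma, -sigma)."""
    m = len(v)
    B = (np.diag(np.full(m, 1.0 + 2.0 * sigma))
         + np.diag(np.full(m - 1, -sigma), -1)
         + np.diag(np.full(m - 1, -sigma), 1))
    return np.linalg.solve(B, v)

# A sine mode on [0, 1] is an exact eigenvector of the discrete Laplacian,
# so each step just multiplies it by 1 / (1 + 4*sigma*sin^2(pi*h/2)).
h = 1.0 / 16
k = h                      # k proportional to h, as in the lecture's runs
x = np.arange(1, 16) * h   # interior grid points
v = np.sin(np.pi * x)
for _ in range(4):
    v = backward_euler_heat_step(v, k / h**2)
```

A real implementation would build B once in sparse form and reuse it every step, as the MATLAB code's B\u does.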
23
00:02:39,170 --> 00:02:45,320
So there we have a complete description of the simplest possible implicit discretisation of the heat equation.
24
00:02:46,580 --> 00:02:51,080
Now let's look at how that behaves. To get a sense of its order of accuracy.
25
00:02:51,650 --> 00:02:57,920
So I'm going to look at the code called m42. Like a lot of these recent codes, we set up a matrix.
26
00:02:58,640 --> 00:03:08,240
I do that in sparse mode, and you can see the main line of the code is the one in the for loop, where it says u = B\u.
27
00:03:08,510 --> 00:03:13,220
So that's where we solve this equation at every step to get the new vector.
28
00:03:21,580 --> 00:03:25,239
Okay. So I'll type m42 backward.
29
00:03:25,240 --> 00:03:32,290
Euler. And what it does is show you an initial guess that's very under-resolved.
30
00:03:32,290 --> 00:03:40,599
The grid size here is 0.5, so there are only three unknowns on such a coarse grid, and it goes to time
31
00:03:40,600 --> 00:03:47,320
one eighth, and there you are. Then it does it again with a grid size of one quarter.
32
00:03:49,210 --> 00:03:53,380
There you are. Then it does it again with a grid size of an eighth.
33
00:03:55,720 --> 00:03:59,860
And then it does it again with a grid size of a sixteenth. So now it's beginning to look smooth.
34
00:04:00,100 --> 00:04:05,560
You can see that's the initial condition. And after time one eighth, that's what it looks like.
35
00:04:05,830 --> 00:04:10,630
You notice the zero boundary conditions. So it's zero at the right and zero at the left.
36
00:04:12,390 --> 00:04:16,470
Keep going. What was that? A space step of an eighth, or a sixteenth?
37
00:04:16,710 --> 00:04:20,190
This would be 1/32. Same picture.
38
00:04:20,940 --> 00:04:24,630
1/64. Is that the last one?
39
00:04:25,110 --> 00:04:30,440
Now there's one more. Yep.
40
00:04:30,890 --> 00:04:37,440
And another. Okay.
41
00:04:37,710 --> 00:04:45,240
So at the end, it measures the value in the middle at the final time, at the end of the computation.
42
00:04:45,420 --> 00:04:50,460
And it looks at the error in that value as a function of the time step on a log log scale.
43
00:04:50,730 --> 00:04:54,660
So you see that as the time step gets smaller, the error gets smaller.
44
00:04:55,530 --> 00:05:00,599
The first two data points look maybe quadratic, but you quickly see that it's only linear.
45
00:05:00,600 --> 00:05:08,370
So we're having linear convergence in k; first order accuracy is what we're seeing for backward Euler.
46
00:05:10,840 --> 00:05:17,920
And that's what order of accuracy means. Let's spend a moment seeing how we could have predicted that with Taylor series.
47
00:05:27,730 --> 00:05:33,340
So, following what we did for ODEs, let's, without being too rigorous,
48
00:05:33,550 --> 00:05:44,740
write down a loose definition of order of accuracy. We suppose that we're given a PDE with smooth solutions.
49
00:05:52,180 --> 00:05:57,040
And we're given a finite difference approximation to the PDE.
50
00:06:00,130 --> 00:06:06,910
And moreover, we have some prescription about how the time step and the space step are going to converge to zero.
51
00:06:07,630 --> 00:06:15,430
So with K and H going to zero in some systematic way that we've prescribed,
52
00:06:15,670 --> 00:06:20,410
for example, K might be proportional to H or it might be proportional to H squared.
53
00:06:20,560 --> 00:06:21,790
We can do whatever we want.
54
00:06:23,710 --> 00:06:33,640
So what we do to determine the order of accuracy, as with ODEs, is to talk about something called the local truncation error.
55
00:06:34,120 --> 00:06:38,650
So the LTE, the local truncation error.
56
00:06:40,270 --> 00:06:45,340
is what you get at one time step, if everything else has been exact.
57
00:06:47,590 --> 00:06:53,499
So remember the idea for a local truncation error. You plug in a solution, you imagine everything has been perfect up to time
58
00:06:53,500 --> 00:06:59,470
step n, and then you take one step and you ask: how big an error have I introduced at this one step?
59
00:07:00,010 --> 00:07:06,920
So the local truncation error is the difference between the new value v
60
00:07:08,510 --> 00:07:14,630
and the correct value at that point, which would be u(x_j, t_(n+1)),
61
00:07:17,550 --> 00:07:24,150
assuming that u is the solution to your PDE, where u is a smooth solution.
62
00:07:31,740 --> 00:07:40,890
And we get to take this one time step. So v_j^(n+1) is computed
63
00:07:43,100 --> 00:07:57,890
from exact values at the previous time steps. So to do that, you plug in Taylor series and see what happens.
64
00:07:58,490 --> 00:08:06,050
And as usual, I'm going to do one example and I'm going to pick the simplest example, which in fact will be forward Euler rather than backward Euler.
65
00:08:06,740 --> 00:08:15,200
So the procedure we follow mechanically is we replace everything in sight by a Taylor series.
66
00:08:15,920 --> 00:08:21,020
So for example, suppose there's a term v_j^(n-1).
67
00:08:24,720 --> 00:08:29,940
We replace that by a Taylor series, expanded around the centre point.
68
00:08:31,020 --> 00:08:35,100
So that would be a Taylor series for u
69
00:08:36,250 --> 00:08:44,530
at (x_j, t_(n-1)), expanded about (x_j, t_n).
70
00:08:47,100 --> 00:09:03,890
Then we cancel as many terms as we can. And that gives us the local truncation error, and then the same story as with ODEs.
71
00:09:04,610 --> 00:09:12,200
If the local error has a certain order, then the global error will be worse by one factor of K because we're adding up the errors at each step.
72
00:09:12,470 --> 00:09:17,720
So we divide by one power of K.
73
00:09:22,210 --> 00:09:27,850
To find the global error, or, as I prefer to put it, the global accuracy.
74
00:09:33,470 --> 00:09:38,450
So that's the procedure. And let's do it for forward Euler.
75
00:09:46,690 --> 00:09:51,310
So if I do a forward Euler discretisation of the same heat equation.
76
00:09:53,710 --> 00:09:57,310
That's u_t = u_xx.
77
00:09:58,030 --> 00:10:11,280
Now we're working with this stencil. So my discretisation would be v_j^(n+1) = v_j^n plus the.
78
00:10:11,850 --> 00:10:26,220
Sorry, I forgot the k. Yes, v_j^(n+1) = v_j^n plus k over h squared
79
00:10:29,140 --> 00:10:38,590
times the second difference, v_(j+1) - 2v_j + v_(j-1), at time
80
00:10:38,590 --> 00:10:45,620
step n. So we plug in the Taylor series,
81
00:10:48,810 --> 00:10:50,490
just as with ODEs.
82
00:10:51,720 --> 00:11:06,240
And we find that the Taylor series tells us that u evaluated at (j plus or minus one) times h, which is all we care about, at time step n, will
83
00:11:06,240 --> 00:11:16,350
be equal to u at the centre value, plus or minus h times its derivative, plus h squared over two times the second derivative, and so on.
84
00:11:22,550 --> 00:11:29,540
So that's the Taylor series in space that tells us what happens when we move to the left or right from the centre point.
85
00:11:29,870 --> 00:11:39,940
j. So plugging in that Taylor series, we can see what that second order finite difference does.
86
00:11:40,270 --> 00:11:54,580
This implies that v_(j+1) - 2v_j + v_(j-1) is equal to h squared u_xx
87
00:11:55,750 --> 00:12:00,040
plus h to the fourth over 12 times u_xxxx,
88
00:12:00,100 --> 00:12:11,850
and so on. So plugging in the Taylor series tells us, as we'd expect, that the first term is just right.
89
00:12:12,210 --> 00:12:18,270
This divided by h squared is u_xx, and then the next order term is at the fourth order.
90
00:12:18,540 --> 00:12:20,790
And it's symmetry, of course, that gives us that.
91
00:12:20,940 --> 00:12:28,020
We've just Taylor expanded in a symmetrical fashion, so all the third order terms drop out; in fact, all the odd order terms drop out.
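That cancellation is easy to check numerically. The following sketch (not from the lecture; names and test point are illustrative) compares the centred second difference of sin with the Taylor prediction u_xx + (h^2/12) u_xxxx:

```python
import numpy as np

# Check: (u(x+h) - 2u(x) + u(x-h)) / h^2 = u_xx + (h^2/12) u_xxxx + O(h^4).
# For u = sin we have u_xx = -sin and u_xxxx = sin, so the leading error
# term predicted by the Taylor expansion is (h**2 / 12) * sin(x).
x0 = 0.7
results = []
for h in [0.1, 0.05, 0.025]:
    second_diff = (np.sin(x0 + h) - 2 * np.sin(x0) + np.sin(x0 - h)) / h**2
    observed = second_diff - (-np.sin(x0))   # error relative to exact u_xx
    predicted = (h**2 / 12) * np.sin(x0)     # leading Taylor term
    results.append((h, observed, predicted))
```

The observed error matches the h^2/12 term to several digits, and the mismatch that remains shrinks like h^4, as the expansion says it should.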
92
00:12:35,250 --> 00:12:40,920
So plugging that in, we can now compute that v_j^(n+1)
93
00:12:42,690 --> 00:12:56,920
is equal to u plus k u_xx plus k h squared over 12 u_xxxx, and so on.
94
00:13:00,240 --> 00:13:07,550
So that's what we get from our finite difference formula. And what we have to do now is compare that to what we should get.
95
00:13:07,580 --> 00:13:11,540
Compare that to the exact solution. So let's see.
96
00:13:15,660 --> 00:13:25,040
In order to do that, I'm going to use the PDE: whenever we see a u_xx, that's equal to u_t, because it's the heat equation.
97
00:13:25,370 --> 00:13:28,970
So another way to write this is u plus k,
98
00:13:29,390 --> 00:13:41,780
u_t, and then we have k h squared over 12 u_tt, because one differentiation with respect to t is two differentiations with respect to x.
99
00:13:42,050 --> 00:13:46,490
So I'm using the fact that u is the exact solution to the PDE.
100
00:13:54,340 --> 00:14:02,680
So the Taylor series tells us that the forward Euler formula gives us at the new time step this approximation.
101
00:14:04,480 --> 00:14:12,320
Whereas the true solution. The exact solution. We could get by a Taylor series in time.
102
00:14:12,320 --> 00:14:16,610
And that would be u at (x_j,
103
00:14:18,820 --> 00:14:23,750
t_(n+1)), which would be u at (x_j, t_n)
104
00:14:24,910 --> 00:14:34,450
plus k times u_t, plus k squared over two times u_tt, plus k cubed over six times u_ttt, and so on.
105
00:14:35,840 --> 00:14:39,650
So this difference between what we got and what we wanted.
106
00:14:43,360 --> 00:14:48,970
Is the local truncation error. That difference is the key thing.
107
00:14:54,990 --> 00:15:03,090
So we see that the local truncation error is the difference between that and this.
108
00:15:03,090 --> 00:15:07,320
Well, this term cancels that one, this term cancels, and we're left with just that and that.
109
00:15:07,560 --> 00:15:17,460
So the local truncation error is k h squared over 12 u_tt, minus
110
00:15:18,260 --> 00:15:23,960
k squared over two u_tt, plus higher order terms.
111
00:15:25,220 --> 00:15:28,370
Or, as we could say, it's O of
112
00:15:29,510 --> 00:15:34,910
k h squared, plus k squared.
113
00:15:35,840 --> 00:15:46,570
So there is the conclusion of our analysis. The local truncation error is of that size, and so the global error
114
00:15:51,390 --> 00:16:00,210
will be worse by one factor of k. So that gives us O of h squared plus k.
115
00:16:09,110 --> 00:16:14,870
So that's the whole analysis. More complicated formulas are the same in principle, but much more annoying in practice.
116
00:16:17,000 --> 00:16:22,760
Now, by the way, when I write something like this, O of a sum of two terms, the reason I do that is that
117
00:16:23,730 --> 00:16:28,920
one or the other term might be dominant, depending on how you've chosen to relate k to h.
118
00:16:29,160 --> 00:16:33,630
For example, if K is proportional to H, then this would be the big term.
119
00:16:34,470 --> 00:16:37,890
If k is proportional to h squared, then the two terms are the same,
120
00:16:37,890 --> 00:16:43,440
so I wouldn't have to write them both. If k were proportional to h cubed, then this would be the dominant term.
121
00:16:43,710 --> 00:16:47,070
So you need to write them both to cover all possibilities.
122
00:16:49,710 --> 00:16:57,750
In our experiment, I guess k was just proportional to h, and that's why, as we shrank them, we saw this dominant term.
123
00:17:01,920 --> 00:17:06,870
Okay. So you're now experts in computing local truncation errors and orders of accuracy.
124
00:17:07,800 --> 00:17:10,770
And now I want to spend a few minutes showing you how to improve it.
125
00:17:11,400 --> 00:17:15,900
The rule of thumb in numerical computations is that when you do something obvious,
126
00:17:15,900 --> 00:17:19,980
it's first order, and then a simple trick usually gets you second order.
127
00:17:20,340 --> 00:17:23,490
But going beyond second order needs more than simple tricks.
128
00:17:24,510 --> 00:17:28,320
So let's look at the simple trick to make this second order accurate.
129
00:17:28,800 --> 00:17:32,010
And I can say it instantly.
130
00:17:33,180 --> 00:17:40,770
We had an unsymmetrical stencil there with backward Euler and an unsymmetrical stencil here with forward Euler.
131
00:17:41,010 --> 00:17:49,170
Well, we want to impose symmetry and make it symmetrical, and then all the odd order terms will go away and we'll get one more order of accuracy.
132
00:17:53,940 --> 00:18:02,760
So to improve this. We want k squared instead of k.
133
00:18:09,340 --> 00:18:16,270
The trick is simply to use a stencil that does the same thing at the new time step and at the old time step,
134
00:18:17,950 --> 00:18:22,540
so that by symmetry it's almost obvious that it will have to be second order accurate.
135
00:18:22,570 --> 00:18:28,030
You can see that if you expanded around that point, all the odd derivatives would drop out.
136
00:18:28,450 --> 00:18:41,709
So this is called Crank-Nicolson. This formula was first proposed in 1947 by a man who just died a couple of years ago,
137
00:18:41,710 --> 00:18:45,640
John Crank, a very major figure in diffusion equations.
138
00:18:47,710 --> 00:18:51,280
Nicolson, I think, was either a student or a postdoc of his,
139
00:18:51,280 --> 00:18:58,510
I forget which; a woman who died shortly thereafter. So she's sort of ancient history at this point, but he was around for many years.
140
00:19:01,160 --> 00:19:07,190
So if you applied this idea to the heat equation, that's what people call Crank-Nicolson.
141
00:19:07,400 --> 00:19:11,870
But more generally, if you say we're going to do a Crank-Nicolson style discretisation,
142
00:19:12,200 --> 00:19:16,280
then that always means this kind of symmetry in two time levels.
143
00:19:18,500 --> 00:19:28,700
So let's write down what the formula is. We get v_j^(n+1) = v_j^n.
144
00:19:30,710 --> 00:19:35,300
And now, instead of a forward difference or a backward difference, we take half of each.
145
00:19:35,990 --> 00:19:39,500
So we now get k over two h squared
146
00:19:41,520 --> 00:19:49,610
plus k over two h squared. And I shouldn't have put the line there, because I've already divided by h squared.
147
00:19:55,960 --> 00:19:59,320
So we have the second difference at the current time.
148
00:19:59,320 --> 00:20:05,860
step, v_(j+1) - 2v_j + v_(j-1),
149
00:20:07,200 --> 00:20:14,400
and then the same thing at the new time step: v_(j+1) - 2v_j
150
00:20:15,990 --> 00:20:18,360
plus v_(j-1).
151
00:20:22,840 --> 00:20:33,850
So there's a Crank-Nicolson discretisation, and in matrix form we could write that as B v^(n+1) equals A times
152
00:20:34,390 --> 00:20:41,840
v^n. So there's the matrix statement of the formula; B and A are both tridiagonal.
153
00:20:43,340 --> 00:20:46,520
So they both will have this kind of structure.
154
00:20:48,710 --> 00:20:54,560
Thanks to the one dimensional problem. So that means it's easy to solve these equations.
155
00:20:54,560 --> 00:20:58,250
It's easy to implement this in one dimension. What are the numbers?
156
00:20:58,700 --> 00:21:10,400
B is equal to the tridiagonal matrix whose entries are minus sigma over two, and one plus sigma, and minus sigma over two.
157
00:21:11,840 --> 00:21:21,350
And A is equal to tridiag of sigma over two, one minus sigma,
158
00:21:21,980 --> 00:21:30,720
sigma over two. And once again, sigma is k over h squared.
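As a sketch of how one Crank-Nicolson step looks in code (Python rather than the lecture's MATLAB, dense matrices for brevity, the function name and grid choices are illustrative):

```python
import numpy as np

def crank_nicolson_step(v, sigma):
    """One Crank-Nicolson step for u_t = u_xx with zero Dirichlet boundary
    conditions: solve B v_new = A v_old, where sigma = k / h**2,
    B = tridiag(-sigma/2, 1 + sigma, -sigma/2) and
    A = tridiag( sigma/2, 1 - sigma,  sigma/2)."""
    m = len(v)
    B = (np.diag(np.full(m, 1.0 + sigma))
         + np.diag(np.full(m - 1, -sigma / 2), -1)
         + np.diag(np.full(m - 1, -sigma / 2), 1))
    A = (np.diag(np.full(m, 1.0 - sigma))
         + np.diag(np.full(m - 1, sigma / 2), -1)
         + np.diag(np.full(m - 1, sigma / 2), 1))
    return np.linalg.solve(B, A @ v)

# Same sine eigenmode as for backward Euler: each step multiplies it by
# the scalar (1 - 2*sigma*s) / (1 + 2*sigma*s), where s = sin^2(pi*h/2).
h = 1.0 / 16
k = h
x = np.arange(1, 16) * h
v = np.sin(np.pi * x)
for _ in range(3):
    v = crank_nicolson_step(v, k / h**2)
```

Note that the amplification factor has modulus below one for every sigma, which is why Crank-Nicolson is stable for any time step on this problem.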
159
00:21:36,340 --> 00:21:41,740
So we've used the symmetry. We expect second order accuracy instead of first order accuracy.
160
00:21:41,890 --> 00:21:50,470
Now let's run the code and make sure that happens. The code is called m43, and it's pretty much identical to m42.
161
00:21:52,120 --> 00:21:57,460
But you can see in the key line, it's now B backslash A times u.
162
00:22:09,940 --> 00:22:13,959
Okay. So let me run m
163
00:22:13,960 --> 00:22:25,000
43. So there you have the same initial condition, but now running with Crank-Nicolson.
164
00:22:25,210 --> 00:22:33,760
First we did a space step of a half, and then a space step of a quarter, and then a space step of an eighth, and a space step of a sixteenth.
165
00:22:33,790 --> 00:22:42,940
Of course, here you don't see much difference. A 32nd, a 64th, a 128th, a 256th.
166
00:22:47,980 --> 00:22:52,150
But now notice thoroughly different behaviour if you actually care about the numbers.
167
00:22:52,420 --> 00:22:57,010
This is a much better scheme. We've got five digits of accuracy rather than before.
168
00:22:57,010 --> 00:23:01,030
It was just three or four before. So: beautiful second order convergence.
169
00:23:09,100 --> 00:23:23,400
Okay. So let's do an analogous exploration for the simplest hyperbolic equation, which would be the wave equation in one dimension.
170
00:23:56,240 --> 00:24:10,570
So, the one-D wave equation.
171
00:24:14,770 --> 00:24:18,580
It's like the heat equation, except with a second derivative in time.
172
00:24:19,000 --> 00:24:22,720
So that would be u_tt = u_xx.
173
00:24:24,190 --> 00:24:27,670
And that has solutions that go left at speed one or right at speed one.
174
00:24:29,140 --> 00:24:36,040
So it's easy to write down discretisations, and we could use Crank-Nicolson and it would work fine.
175
00:24:36,700 --> 00:24:41,440
However, this is not stiff like the heat equation.
176
00:24:41,770 --> 00:24:45,729
Everything in this equation travels at speed one. With the heat equation,
177
00:24:45,730 --> 00:24:48,790
things travel at speed infinity. But this is speed one.
178
00:24:49,330 --> 00:24:55,989
So there's actually no need to use an implicit discretisation, because you'll be able to use a big time step
179
00:24:55,990 --> 00:24:58,090
even with an explicit discretisation.
180
00:24:58,660 --> 00:25:07,630
So with Crank-Nicolson, that's implicit, but you only need that for a diffusive problem, basically because of the stability limit.
181
00:25:08,080 --> 00:25:16,330
So here we don't need to use anything implicit,
182
00:25:17,620 --> 00:25:25,260
although we could, because stability is not a constraint.
183
00:25:28,050 --> 00:25:33,720
Any reasonable value of k is probably going to be fine.
184
00:25:34,200 --> 00:25:42,739
It's enough to use a simpler discretisation. But let's make it second order in time.
185
00:25:42,740 --> 00:25:49,340
And to do that, we want the symmetry and the obvious symmetry would be achieved by this picture.
186
00:25:50,990 --> 00:25:57,020
So let's discretise in space with three points and in time with three points, but centre them at the same place.
187
00:25:57,230 --> 00:26:08,350
And that's called the leapfrog scheme. And in some sense that sort of dates to 1928, long before people were doing actual computations.
188
00:26:09,660 --> 00:26:15,270
That's all you need to get second order accuracy in H and K and it will still allow you
189
00:26:15,270 --> 00:26:21,180
reasonably big time steps, because a hyperbolic equation has a slow speed of propagation.
190
00:26:21,600 --> 00:26:24,719
We can immediately write down the formula. I hope you follow me on these.
191
00:26:24,720 --> 00:26:42,510
So I'll say v_j^(n+1) minus 2 v_j^n plus v_j^(n-1), divided by k squared, equals v_(j+1) minus 2 v_j plus v_(j-1)
192
00:26:43,920 --> 00:26:53,420
over h squared. So a second difference in time is set equal to a second difference in space.
193
00:26:53,810 --> 00:26:58,850
And you can see that this whole formula just contains this value at the new time step.
194
00:26:59,240 --> 00:27:04,580
So it amounts to a simple linear prescription for getting the new values from the old values.
195
00:27:05,930 --> 00:27:09,350
Nobody in practice would really write this with matrices except me.
196
00:27:09,770 --> 00:27:13,099
So of course you could write it as: a vector equals,
197
00:27:13,100 --> 00:27:14,930
And then we have this kind of structure.
198
00:27:16,310 --> 00:27:25,340
The new vector is equal to minus the vector from two time steps ago, plus the matrix times the vector from one time step ago.
199
00:27:26,910 --> 00:27:33,690
And A is the tridiagonal matrix whose entries are
200
00:27:35,820 --> 00:27:40,200
sigma, one minus two sigma, sigma.
201
00:27:40,560 --> 00:27:46,680
And now sigma has a different definition. Sigma is now k squared over h squared.
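A minimal sketch of the leapfrog update in Python (illustrative, not the lecture's MATLAB code m44; note the main diagonal entry used here is 2 minus 2 sigma, the value the class settles on in the question at the end of the lecture):

```python
import numpy as np

def leapfrog_step(v_cur, v_prev, sigma):
    """One leapfrog step for u_tt = u_xx, with zero Dirichlet boundary
    values stored at both ends of the vectors; sigma = k**2 / h**2.
    Equivalent to v_new = A v_cur - v_prev, A = tridiag(sigma, 2 - 2*sigma, sigma)."""
    d2 = np.zeros_like(v_cur)
    d2[1:-1] = v_cur[:-2] - 2.0 * v_cur[1:-1] + v_cur[2:]
    return 2.0 * v_cur - v_prev + sigma * d2

# Standing wave u = cos(pi t) sin(pi x): start from the exact values at
# t = 0 and t = k, then march to t = 0.5, where the exact solution is zero.
h = 1.0 / 32
k = h / 2                   # safely inside the k < h stability limit
sigma = (k / h)**2
x = np.linspace(0.0, 1.0, 33)
v_prev = np.sin(np.pi * x)                        # values at t = 0
v_cur = np.cos(np.pi * k) * np.sin(np.pi * x)     # values at t = k
t = k
while t < 0.5 - 1e-12:
    v_cur, v_prev = leapfrog_step(v_cur, v_prev, sigma), v_cur
    t += k
```

Because the scheme is two-level in time, both the current and the previous vectors must be kept and rotated each step, just as the lecture describes for m44.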
202
00:27:48,610 --> 00:27:54,250
Everything scales differently. Now we have not k against h squared, but k squared against h squared.
203
00:27:57,370 --> 00:28:01,120
Of course it's second order accurate: the global accuracy
204
00:28:05,590 --> 00:28:10,490
is O of h squared plus k squared.
205
00:28:11,740 --> 00:28:15,550
So we could say it's second order accurate in space and time.
206
00:28:17,350 --> 00:28:25,120
And although it's not a proof to say this, intuitively it's obvious that it's going to be second order accurate, because of the symmetry.
207
00:28:27,530 --> 00:28:35,330
So let's play around with that one a bit. So the next code is m44, and it has a very similar structure.
208
00:28:35,750 --> 00:28:40,520
Again, I've used a matrix. So the key line is: u new equals A times
209
00:28:40,520 --> 00:28:48,410
u, minus u old. So when you have a multilevel scheme like this, you have to store more data.
210
00:28:48,440 --> 00:28:56,000
You don't just have one vector; you need to store the current u and the previous u, and then update them as you go along.
211
00:28:59,090 --> 00:29:02,420
So let's see how that behaves. Waves are more fun than diffusion.
212
00:29:09,320 --> 00:29:17,550
So if I run m44: leapfrog. There's my initial condition for the wave equation.
213
00:29:18,210 --> 00:29:23,250
It's pretty hard to see what's happening on a three point grid. There's a seven point grid.
214
00:29:25,840 --> 00:29:29,620
Here's a 15 point grid. Now you're beginning to see it.
215
00:29:31,520 --> 00:29:37,520
So on the 31 point grid, you'll notice two waves moving in two directions and then bouncing off the end.
216
00:29:38,000 --> 00:29:46,220
And let's look at the code to see why we have that. So if you look at the code, near the top is the initial condition.
217
00:29:46,730 --> 00:29:56,170
So you see, u old is equal to e to the minus 50 x squared, plus 0.3 times e to the minus 20 (x minus 2) squared.
218
00:29:57,700 --> 00:30:01,599
And then in addition, we have to specify one other initial time level.
219
00:30:01,600 --> 00:30:08,590
So we're effectively specifying the value at time zero and at time k.
220
00:30:09,940 --> 00:30:17,050
One of the pulses goes to the left and the other pulse goes to the right, because, you see, I have x plus k and x minus k,
221
00:30:17,500 --> 00:30:24,160
so I've set it up so that the initial condition consists of a left going pulse and a right going pulse.
222
00:30:24,490 --> 00:30:28,000
That's why we see that on the next grid.
223
00:30:29,320 --> 00:30:33,130
There are the two pulses here. There's a fairly broad one plus a less broad one.
224
00:30:34,870 --> 00:30:39,910
And you see the broad one goes right and the less broad one goes left and they both bounce off the boundary.
225
00:30:40,960 --> 00:30:50,510
Here we are again. It's a Dirichlet condition at the boundary, which is why the sign is negated when you bounce.
226
00:30:51,680 --> 00:31:06,100
I think there's one more. Okay.
227
00:31:06,100 --> 00:31:08,770
Is that the end or is there yet another? That's the end. Okay.
228
00:31:09,730 --> 00:31:16,299
So again, I've done a calculation of accuracy and although there's not too much data there, you can see that it's quadratic.
229
00:31:16,300 --> 00:31:24,410
We have k squared accuracy. There are a few variations proposed in this code.
230
00:31:24,740 --> 00:31:29,030
So one of them, you can see in the comments on the right, where it says periodic boundary conditions.
231
00:31:29,420 --> 00:31:35,480
If I remove the percent signs, then that has the effect of making things wrap around properly.
232
00:31:35,490 --> 00:31:42,920
So everything is periodic. So let's do that. I've got a code pre-loaded for this, so I'll say.
233
00:31:44,330 --> 00:31:55,760
m44pr, for periodic. So now notice even the initial condition looks different, because now the boundary value is a genuine data point.
234
00:31:56,060 --> 00:31:59,090
It's only on that side because it's equal to the value there.
235
00:31:59,990 --> 00:32:03,230
So there's our periodic array.
236
00:32:03,650 --> 00:32:09,200
Of course, the coarse grid doesn't show you much. That doesn't show much, but the next one will show you something.
237
00:32:09,530 --> 00:32:18,859
Now we see the two pulses. And now, of course, there's no sign change at the boundary because there's no physics at the boundary.
238
00:32:18,860 --> 00:32:31,400
It's only passing right through it. Is there one more?
239
00:32:46,650 --> 00:32:49,750
Okay. I think that's the end. Okay.
240
00:32:49,810 --> 00:32:55,820
Again, we see quadratic convergence. Another variation on the theme.
241
00:32:58,360 --> 00:33:02,200
Is. Did I write this?
242
00:33:02,230 --> 00:33:08,680
Yes. You noticed near the top it says: for fun, switch to 1.02 times h.
243
00:33:10,090 --> 00:33:14,370
The stability limit for this code is that K has to be less than H.
244
00:33:15,220 --> 00:33:20,470
I've been running with K equals 0.5 H. If I make k bigger than H, we'll see instability.
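Where that limit comes from can be sketched with amplification factors: plug a Fourier mode into the leapfrog formula and ask when the roots g of the resulting characteristic equation stay on the unit circle. A Python sketch (illustrative, not the lecture's code):

```python
import numpy as np

def leapfrog_growth(ratio):
    """Largest amplification |g| over Fourier modes for leapfrog applied
    to u_tt = u_xx.  A mode with wavenumber theta satisfies
    g**2 - (2 - 4*sigma*s)*g + 1 = 0, where sigma = (k/h)**2 and
    s = sin(theta/2)**2; stability requires |g| <= 1 for every mode."""
    sigma = ratio**2
    worst = 0.0
    for s in np.linspace(0.0, 1.0, 201):
        g = np.roots([1.0, -(2.0 - 4.0 * sigma * s), 1.0])
        worst = max(worst, np.max(np.abs(g)))
    return worst
```

With k = 0.5 h every mode has |g| = 1, neutral as a wave equation should be; with k = 1.02 h the worst mode grows by nearly fifty percent per step, which is why the blow-up, once visible, is so fast.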
245
00:33:21,010 --> 00:33:27,790
So let's try that. I'll switch to 1.02 times H and that's called M 44 U for unstable.
246
00:33:29,340 --> 00:33:33,750
So the same initial condition. And on that coarse grid, you don't see a problem.
247
00:33:36,240 --> 00:33:39,750
Grid size a quarter. You still don't see a problem.
248
00:33:40,560 --> 00:33:43,700
Grid size an eighth. You see a problem?
249
00:33:44,720 --> 00:33:48,120
A sixteenth. No, that looked better, didn't it?
250
00:33:49,110 --> 00:33:54,170
32nd. Oops. 64th.
251
00:33:58,090 --> 00:34:02,559
It's kind of intriguing that initially the thing looked smooth and that kind of
252
00:34:02,560 --> 00:34:07,180
paradox arises often in numerical calculation: that when things are analytic,
253
00:34:07,900 --> 00:34:11,080
the errors that feed instability can be very, very small.
254
00:34:11,530 --> 00:34:17,319
So it may take a while for them to appear. In fact, sometimes in theory it would never appear,
255
00:34:17,320 --> 00:34:20,950
and it's only the rounding errors that amplify enough to give instability.
256
00:34:21,580 --> 00:34:25,810
I think we have another grid. Okay. Ready? Here we go. Looks good.
257
00:34:25,820 --> 00:34:30,200
But then suddenly garbage. Oops.
258
00:34:30,210 --> 00:34:37,740
What's happening there? I don't know. And the plot doesn't even show any data points.
259
00:34:37,740 --> 00:34:48,260
So I'm not quite sure why. But there you are. Oops.
260
00:34:54,390 --> 00:34:58,560
Well, for the rest of today, I want to get back to the subject of nonlinear equations.
261
00:34:58,980 --> 00:35:04,500
Is there a question? Yes. Can I explain what I mean,
262
00:35:04,510 --> 00:35:08,140
why it's this number? Yeah. Okay, let's see.
263
00:35:09,370 --> 00:35:17,200
I may not be quick enough to do that. So the new value is obtained
264
00:35:19,500 --> 00:35:31,490
from... I see. A is telling us the part of the contribution from level n, so the terms that go into the A matrix should be
265
00:35:33,020 --> 00:35:37,390
That and the right. So.
266
00:35:40,760 --> 00:35:44,090
This term won't have any factor in front of it. So that should be a two.
267
00:35:45,440 --> 00:35:50,800
And I see a one. Why do I see a one? Is that what concerned you?
268
00:35:50,860 --> 00:35:54,160
Yeah. Yeah. And then the other one.
269
00:35:56,350 --> 00:36:00,490
Which should be what? Yes. Why is that a one rather than a two?
270
00:36:01,510 --> 00:36:05,200
I don't see. Let's look at the code and that will tell us the truth.
271
00:36:05,380 --> 00:36:13,170
What does the code say? So if we go back to the code we just ran.
272
00:36:15,750 --> 00:36:20,430
It says A's main diagonal is...
273
00:36:20,490 --> 00:36:23,790
Oh, it is a two. Thank you, Michael. So I think it's a two.
274
00:36:24,300 --> 00:36:35,700
Good. Does it look right to you now?
275
00:36:36,860 --> 00:36:46,750
That's... is that the first error I've made in, you know, three months of lectures?
276
00:36:49,520 --> 00:36:56,120
Well, for the last few minutes, I'd like to return to the subject of nonlinear problems, which are always the most fun.
277
00:36:56,450 --> 00:37:03,500
And in particular, just a few words back to the subject of what I call reaction diffusion equations.
278
00:37:08,020 --> 00:37:24,370
and stiff PDEs. I should remind you, or mention, that one of the handouts today is an outline of the lectures in the course.
279
00:37:24,380 --> 00:37:29,860
So I gave you one last term and then again this term. And so you can see where we're heading and where we are.
280
00:37:31,540 --> 00:37:34,660
The theme here, as I mentioned last lecture,
281
00:37:36,010 --> 00:37:42,550
is that very often you have a problem with a higher order linear term combined with a lower order nonlinear term.
282
00:37:42,910 --> 00:37:59,620
So many problems can be written in the form of a time evolution process where u t equals a linear operator L times u, plus a nonlinear operator N of u.
283
00:38:00,400 --> 00:38:08,350
This is standard notation. When it's linear, people tend to think of it as a multiplication, so they don't put parentheses. When it's nonlinear,
284
00:38:08,350 --> 00:38:11,620
It's not just a multiplication, so they do put parentheses.
285
00:38:12,310 --> 00:38:25,360
And many times what you find is that the linear operator has reasonably high differential order, and the nonlinear one has lower order.
286
00:38:30,970 --> 00:38:36,160
And what does high order mean? Well, most often that would be order two or four.
287
00:38:37,840 --> 00:38:41,500
Two would be ordinary diffusion and four would be a higher order diffusion.
288
00:38:41,830 --> 00:38:48,040
But sometimes it will be three. In fact, we're about to do the KdV equation and there it's three.
289
00:38:49,960 --> 00:38:55,900
What does lower order mean? Well, it might be order one or it might be order zero.
290
00:38:56,860 --> 00:39:03,909
For example, with the Fisher-KPP equation, it was just order zero, because this term was just a u squared minus u sort of term.
291
00:39:03,910 --> 00:39:10,330
No derivatives at all. On the other hand, with the Burgers equation or the Navier-Stokes equations,
292
00:39:10,600 --> 00:39:15,700
you would have linear second order diffusion coupled with nonlinear first order convection.
293
00:39:18,830 --> 00:39:24,500
So as I mentioned earlier, any old method will give you first order accuracy.
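To make the first-order claim concrete, here is a minimal Python sketch (my illustration, not one of the course codes) of the simplest splitting: treat the linear term implicitly and the nonlinear term explicitly, sometimes called semi-implicit or IMEX Euler. The scalar model problem u' = -u + u² is my own choice, picked because its exact solution is known.

```python
import numpy as np

# Semi-implicit (IMEX) Euler for u' = lam*u + N(u):
# (u_new - u)/h = lam*u_new + N(u)  =>  solve the linear part implicitly.
def imex_euler_step(u, h, lam, N):
    return (u + h * N(u)) / (1 - h * lam)

# Model problem u' = -u + u^2 (a Bernoulli equation); with w = 1/u one finds
# u(t) = 1 / ((1/u0 + 1/lam)*exp(-lam*t) - 1/lam).
lam, u0, T = -1.0, 0.5, 1.0
exact = 1.0 / ((1/u0 + 1/lam) * np.exp(-lam*T) - 1/lam)

def solve(h):
    u = u0
    for _ in range(int(round(T / h))):
        u = imex_euler_step(u, h, lam, lambda v: v**2)
    return u

err1 = abs(solve(0.01) - exact)
err2 = abs(solve(0.005) - exact)
print(err1 / err2)   # close to 2: halving h halves the error, i.e. first order
```

Halving the step halves the error, which is exactly the "first order accuracy from any old method" being described.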
294
00:39:25,950 --> 00:39:31,500
And then there are a reasonably large number of easy ideas to get second order accuracy.
295
00:39:32,190 --> 00:39:37,800
The trick is to get higher order than that, and that turns out to be surprisingly interesting and challenging.
296
00:39:38,070 --> 00:39:42,810
And I just want to make you aware of my favourite technique for that.
297
00:39:44,940 --> 00:39:52,410
First of all, before I do that, let me just remind you of some of the equations that fit this general template.
298
00:39:52,860 --> 00:39:57,480
So the Navier-Stokes equations from fluid mechanics are maybe the single most important.
299
00:39:58,050 --> 00:40:03,910
The KdV equation, which we'll look at in a moment. The Kuramoto-Sivashinsky
300
00:40:03,910 --> 00:40:09,230
equation, which we mentioned earlier. That's a fourth order one.
301
00:40:11,620 --> 00:40:15,340
The Allen-Cahn equation, which is second order.
302
00:40:16,860 --> 00:40:29,040
The Cahn-Hilliard equation, which is fourth order. The nonlinear Schrödinger equation, the Fisher-KPP equation we looked at.
303
00:40:32,220 --> 00:40:43,260
The Hodgkin-Huxley equations from neural signalling, the Gray-Scott equations, which are a system of several such equations, and so on.
304
00:40:43,260 --> 00:40:46,380
The Ginzburg-Landau equation, the Swift-Hohenberg equation.
305
00:40:46,650 --> 00:41:02,060
There are many of these. So it's really a very major part of science, you find, that leads to things like this.
306
00:41:02,330 --> 00:41:08,900
And my favourite set of techniques for dealing with them are called exponential integrators.
307
00:41:16,150 --> 00:41:21,130
And I'm not going to go into the mathematics of that. I'll say a word, just waving my hands.
308
00:41:22,390 --> 00:41:33,430
The idea with exponential integrators is to write the PDE in an integral form and then approximate what's inside by a low order polynomial.
309
00:41:33,490 --> 00:41:39,010
That's where it all starts from. So you take the PDE, you integrate in time to make it an integral.
310
00:41:39,310 --> 00:41:46,780
You approximate what's inside by a polynomial, using sort of a Runge-Kutta type of thinking or a multistep sort of thinking.
311
00:41:47,050 --> 00:41:55,510
And out comes what looks like an ODE formula of a slightly unusual kind, where the linear part of the problem can be integrated
312
00:41:55,510 --> 00:41:58,900
exactly, and the nonlinear part you do with a finite difference.
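The simplest instance of this idea, sometimes called ETD1 or exponential Euler, can be sketched in a few lines of Python (my illustration, not one of the course codes): integrate the linear part exactly over the step and freeze the nonlinear term at its current value.

```python
import numpy as np

# ETD1 (exponential Euler) for u' = lam*u + N(u), scalar lam != 0:
# the linear part is handled by the exact exponential, N is held fixed.
def etd1_step(u, h, lam, N):
    z = h * lam
    return np.exp(z) * u + (np.exp(z) - 1) / lam * N(u)

# The hallmark: if N happens to be constant, the scheme is exact, because
# u' = lam*u + c has solution u(h) = e^{lam h} u0 + (e^{lam h} - 1) c / lam.
lam, c, u0, h = -2.0, 3.0, 1.0, 0.25
approx = etd1_step(u0, h, lam, lambda v: c)
exact = np.exp(lam*h) * u0 + (np.exp(lam*h) - 1) * c / lam
print(abs(approx - exact))   # zero to machine precision
```

The step is exact whenever the nonlinear term is constant over the step, which is precisely the sense in which "the linear part is integrated exactly."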
313
00:41:59,260 --> 00:42:02,440
Now, I'm going to show you in one of the codes what this can end up looking like.
314
00:42:02,680 --> 00:42:12,780
I'm not going to write it down here. But let me say a key thing in the history, which is this paper by Cox and Matthews.
315
00:42:13,720 --> 00:42:22,360
So they weren't the first to do formulas like this, but it was their paper which somehow set things going.
316
00:42:22,360 --> 00:42:26,230
The first page is reproduced on there. It's 2002.
317
00:42:28,600 --> 00:42:34,510
So if you look at their very nice paper, called "Exponential time differencing for stiff systems".
318
00:42:37,720 --> 00:42:43,480
Some of the ideas go back 20 or 30 years before, but this got people thinking again about them.
319
00:42:43,900 --> 00:42:47,590
And then I got involved because I was interested in PDE applications.
320
00:42:47,590 --> 00:42:52,750
So on the back of that sheet of paper, you find this thing by me and a student, Aly-Khan Kassam.
321
00:42:57,990 --> 00:43:09,580
What was that? 2005? So what Cox and Matthews did was introduce what seems to be the nicest of the fourth order formulas, called ETDRK4.
322
00:43:09,940 --> 00:43:13,569
That stands for exponential time
323
00:43:13,570 --> 00:43:21,020
differencing, Runge-Kutta of fourth order. And then what we did was to apply it to PDEs.
324
00:43:25,750 --> 00:43:30,880
And this has now become a standard tool that people find just very convenient and very useful.
325
00:43:31,240 --> 00:43:34,600
You use this Cox-Matthews formula and you can do all sorts of things,
326
00:43:35,170 --> 00:43:41,559
especially when you have periodic geometries, which we haven't really talked about yet.
327
00:43:41,560 --> 00:43:47,560
But for periodic geometries, Fourier analysis makes the linear part of the problem especially tractable.
328
00:43:49,240 --> 00:43:56,380
So I want to just spend the last few minutes showing you that, especially in the context of the KdV equation.
329
00:44:03,420 --> 00:44:08,250
So the Korteweg-de Vries equation is a third order equation.
330
00:44:08,760 --> 00:44:12,329
It's u t plus u
331
00:44:12,330 --> 00:44:18,360
u x plus u triple x equals zero.
332
00:44:20,600 --> 00:44:25,460
So we've got the time evolution. And then this term is the nonlinear convection term.
333
00:44:31,540 --> 00:44:36,010
And then this is the linear dispersion term.
334
00:44:41,620 --> 00:44:44,590
Now the KdV equation goes back to the early 20th century,
335
00:44:44,770 --> 00:44:52,120
but it's really in the 1970s that people discovered its extraordinary properties, which have to do with solitons.
336
00:44:52,870 --> 00:44:55,960
And that term was invented in the 1970s.
337
00:44:58,860 --> 00:45:02,340
Throughout mathematics, you have this interplay between linear and non-linear.
338
00:45:02,520 --> 00:45:09,810
Linear is sort of easy and obvious. Nonlinear is harder, and there's much less structure somehow in nonlinear problems.
339
00:45:10,170 --> 00:45:18,240
What made solitons so exciting was that they're nonlinear, very nonlinear, and yet incredibly structured. And in particular...
340
00:45:20,960 --> 00:45:28,640
One feature of a soliton solution is that the speed of propagation of a wave
341
00:45:29,890 --> 00:45:36,430
depends on its amplitude. So a tall wave moves faster than a small wave.
342
00:45:37,360 --> 00:45:41,800
Now, if you take an arbitrary initial condition, it will break up into all sorts of stuff.
343
00:45:42,070 --> 00:45:46,420
But it turns out that there is a solution that looks like that that maintains its shape.
344
00:45:46,810 --> 00:45:52,750
So they have this form: u of x and t equals
345
00:45:53,690 --> 00:46:03,860
a number alpha, times the square of the hyperbolic secant (that's one over the hyperbolic cosine) of another number, beta,
346
00:46:05,120 --> 00:46:08,840
times x minus the speed c times t.
347
00:46:11,330 --> 00:46:14,750
And here beta can be arbitrary.
348
00:46:16,940 --> 00:46:20,600
Then alpha is 12 beta squared,
349
00:46:23,710 --> 00:46:28,440
and c is 4 beta squared.
350
00:46:34,040 --> 00:46:39,859
So I pick a beta, and the bigger it is, the bigger alpha is and the bigger the speed is.
351
00:46:39,860 --> 00:46:44,360
So bigger betas mean faster waves, but also taller waves.
352
00:46:44,660 --> 00:46:50,330
And loosely speaking, it looks like that it's got this hyperbolic secant squared shape.
353
00:46:50,720 --> 00:46:57,560
So if you plug in a wave like that for any beta at the initial time, it will just move along in a steady way.
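One can check the soliton formula just stated by direct symbolic differentiation. The following Python sketch (using sympy; my illustration, not part of the course materials) plugs u = 12 beta² sech²(beta (x - 4 beta² t)) into the KdV equation and evaluates the residual at an arbitrarily chosen point.

```python
import sympy as sp

x, t, beta = sp.symbols('x t beta', positive=True)

# Soliton from the lecture: u = 12 beta^2 sech^2(beta*(x - 4 beta^2 t))
u = 12*beta**2 * sp.sech(beta*(x - 4*beta**2*t))**2

# KdV residual u_t + u*u_x + u_xxx should vanish identically
residual = sp.diff(u, t) + u*sp.diff(u, x) + sp.diff(u, x, 3)

# Evaluate at an arbitrary point with 30-digit precision; result is ~0
val = sp.N(residual.subs({x: sp.Rational(3, 10),
                          t: sp.Rational(7, 10),
                          beta: sp.Rational(11, 10)}), 30)
print(val)
```

The residual vanishes for any beta, confirming that alpha = 12 beta² and c = 4 beta² make the sech-squared profile an exact travelling-wave solution.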
354
00:46:58,370 --> 00:47:01,820
That's not too exciting, except that the speed depends on the amplitude.
355
00:47:02,000 --> 00:47:11,600
But then what's really amazing is that if that catches up with that, they dance around a little bit and then that one simply passes it.
356
00:47:11,900 --> 00:47:21,260
So if, at time zero, you have a tall soliton and a short soliton, then at some later time
357
00:47:22,480 --> 00:47:25,780
They will have switched places. They're both travelling.
358
00:47:31,340 --> 00:47:36,890
The big one overtakes that. And asymptotically, as time goes to infinity, they have exactly the same shape.
359
00:47:37,160 --> 00:47:40,990
That is completely surprising. Nonlinear problems shouldn't do that.
360
00:47:41,330 --> 00:47:45,150
When things interact non-linear, you expect all sorts of garbage to result.
361
00:47:45,410 --> 00:47:49,250
You know, when one galaxy hits another, it turns into a spiral or something.
362
00:47:49,310 --> 00:47:56,540
But here we have a case where one galaxy hits another and just passes through and they both end up with exactly the form they had before.
363
00:47:58,090 --> 00:48:02,499
This was incredibly exciting to mathematicians in the 1970s,
364
00:48:02,500 --> 00:48:09,310
and since then it has become more and more important in science and in particular in propagation of waves in optical fibres.
365
00:48:09,970 --> 00:48:14,230
That stuff is completely reliant on the mathematics of these nonlinear waves.
366
00:48:14,500 --> 00:48:19,530
That's how optical fibres manage to maintain coherence over long distances.
367
00:48:19,570 --> 00:48:29,590
They exploit the mathematics of solitons. So I've given you a handout from the coffee table book about the KdV equation.
368
00:48:32,960 --> 00:48:33,830
So that's that.
369
00:48:35,430 --> 00:48:44,420
Maybe even more fun is to look on the back of that handout, which is a page from one of the nicest papers ever written in numerical computation.
370
00:48:44,720 --> 00:48:47,240
This is by Fornberg and Whitham, back in the seventies.
371
00:48:48,410 --> 00:48:57,170
Whitham was a nonlinear waves man at Caltech, and Fornberg is a man we've mentioned before who is an outstanding numerical algorithms person.
372
00:48:57,650 --> 00:49:01,630
Between them, they wrote this remarkable paper for the 1970s.
373
00:49:01,640 --> 00:49:03,640
This picture is just gorgeous.
374
00:49:03,650 --> 00:49:11,060
Of course, now it's easy to do things like that, but 40 years ago they realised what algorithms could be used to show these interactions.
375
00:49:11,330 --> 00:49:20,030
So you see, at time zero at the bottom, there are three solitons, and then as time elapses,
376
00:49:20,030 --> 00:49:27,170
moving up, the tall one moves fast, the middle one moves at a medium speed and the small one moves at a low speed.
377
00:49:27,530 --> 00:49:31,880
And you can see eventually they separate. They all go at their different speeds.
378
00:49:32,150 --> 00:49:33,350
But you notice the shift.
379
00:49:33,350 --> 00:49:41,720
For example, the slow one loses a bit of position after the interaction and you can even see three different types of interaction.
380
00:49:42,050 --> 00:49:48,620
It can be shown that solitons have three different ways of interacting, and this picture shows them all.
381
00:49:48,920 --> 00:49:52,790
First, you have an interaction here and then another one here and another one here.
382
00:49:53,060 --> 00:49:58,700
And the topologies of those interactions are different, and everything is known about that.
383
00:50:00,230 --> 00:50:06,410
So it's a beautiful subject and let's just do it in the last couple of minutes.
384
00:50:06,500 --> 00:50:07,850
Well, in the last one minute.
385
00:50:09,830 --> 00:50:21,560
I draw your attention to the code called M45 KdV, which spells out this ETDRK4 scheme of Cox and Matthews.
386
00:50:21,860 --> 00:50:25,100
So if you want to know the formulas, there they are. This is the whole code.
387
00:50:25,340 --> 00:50:29,390
It will solve the KdV equation, and the formulas are not at all obvious.
388
00:50:29,420 --> 00:50:36,800
It's a Runge-Kutta type of thing where you have some magic algebra that makes all the right things cancel to give you high order.
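The Cox-Matthews formulas themselves are spelled out in the course code; as a sketch of what they look like, here is a scalar Python version (my illustration, not the course's MATLAB code; the model problem u' = -u + u² is my own choice, used because its exact solution is known). For a stiff operator L the coefficients below suffer cancellation and need more careful evaluation, which is one point of the Kassam-Trefethen paper.

```python
import numpy as np

def etdrk4_step(u, h, lam, N):
    """One Cox-Matthews ETDRK4 step for u' = lam*u + N(u), scalar lam != 0.
    For z near 0 these coefficient formulas lose accuracy; Kassam and
    Trefethen evaluate them by a contour integral instead."""
    z = h * lam
    E, E2 = np.exp(z), np.exp(z / 2)
    Q  = h * (E2 - 1) / z                                 # half-step weight
    f1 = h * (-4 - z + E * (4 - 3*z + z**2)) / z**3
    f2 = h * (2 + z + E * (-2 + z)) / z**3
    f3 = h * (-4 - 3*z - z**2 + E * (4 - z)) / z**3
    Nu = N(u)
    a = E2 * u + Q * Nu            # first internal stage
    Na = N(a)
    b = E2 * u + Q * Na            # second internal stage
    Nb = N(b)
    c = E2 * a + Q * (2*Nb - Nu)   # third internal stage
    Nc = N(c)
    return E * u + f1 * Nu + 2 * f2 * (Na + Nb) + f3 * Nc

# Model problem u' = -u + u^2 (Bernoulli) with known exact solution.
lam, u0, T = -1.0, 0.5, 1.0
exact = 1.0 / ((1/u0 + 1/lam) * np.exp(-lam*T) - 1/lam)

def solve(h):
    u = u0
    for _ in range(int(round(T / h))):
        u = etdrk4_step(u, h, lam, lambda v: v**2)
    return u

err1 = abs(solve(0.1) - exact)
err2 = abs(solve(0.05) - exact)
print(err1, err2, err1 / err2)   # ratio approaches 16: fourth order
```

The "magic algebra" is in the f1, f2, f3 coefficients: for constant N they sum to exactly (e^z - 1)/z, so the step reproduces the exact linear-plus-constant solution, and the cancellations give fourth order overall.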
389
00:50:38,060 --> 00:50:41,660
So if I run that, I'll get KdV. Let's do that.
390
00:50:47,150 --> 00:50:52,980
So if I say M45 KdV, it gives me a pair of solitons.
391
00:50:53,000 --> 00:50:56,300
It's running exactly this code, giving fourth order.
392
00:50:56,420 --> 00:51:00,240
You're not seeing that, are you? Sorry.
393
00:51:06,040 --> 00:51:11,920
So it starts out with a pair of solitons, and there you see them doing the right thing.
394
00:51:12,760 --> 00:51:18,070
In the end, they have the same shape they had before. It's a periodic domain, so it just keeps going like that.
395
00:51:20,100 --> 00:51:24,300
So the purpose of this code is to show you precisely what the formulas are.
396
00:51:24,600 --> 00:51:30,000
But if you actually want to work with these things, then I recommend what I showed you last time briefly.
397
00:51:30,300 --> 00:51:35,880
The built-in Chebfun stuff: the code by Hadrien Montanelli called spin.
398
00:51:36,600 --> 00:51:42,290
So if I run that with the code active, it gives a canned demo.
399
00:51:42,300 --> 00:51:46,200
Why is that taking so long? Oh, there it is.
400
00:51:49,490 --> 00:51:51,770
So it's actually using the same formulas.
401
00:51:51,770 --> 00:52:00,080
But this is a more accurate version and a much more flexible version, because you can switch to other arbitrary equations.
402
00:52:01,140 --> 00:52:05,160
If there were more time, I would change the initial condition a little bit to show you what happens.
403
00:52:05,160 --> 00:52:07,440
But we're at the end of time, so see you on Friday.