1
00:00:00,390 --> 00:00:05,640
I didn't spell out the precise equations for this assignment, but I hope it's obvious to you.
2
00:00:05,670 --> 00:00:09,840
We're talking about Newton's gravitational law. You know, an inverse square attraction.
3
00:00:10,140 --> 00:00:15,660
And of course, the gravitational constant is one. I don't mean you to use physical quantities.
4
00:00:15,930 --> 00:00:18,590
So if there's any doubt about that, check with me.
5
00:00:18,600 --> 00:00:27,509
But the obvious interpretation is the interpretation I want. Now let me pass around a few books, tell you about a few books.
6
00:00:27,510 --> 00:00:33,930
So numerical ODEs is a very well-developed subject with a number of outstanding textbooks,
7
00:00:34,500 --> 00:00:43,440
and this is only a fraction of those I'm aware of. The authors are listed in the lecture notes online, so you can get more details that way.
8
00:00:43,650 --> 00:00:47,670
For example, there's a very nice one by Griffiths and Higham:
9
00:00:47,670 --> 00:00:51,210
Numerical Methods for Ordinary Differential Equations. I'll pass it along.
10
00:00:52,650 --> 00:00:59,880
Perhaps a bit more advanced, there's one by Deuflhard and Bornemann, Scientific Computing with Ordinary Differential Equations.
11
00:01:03,740 --> 00:01:10,040
One of the themes of this subject that we won't really talk about is so-called differential algebraic equations,
12
00:01:10,250 --> 00:01:13,550
which is when you mix an ODE with some algebraic conditions.
13
00:01:13,880 --> 00:01:19,020
So it's as if some of your time scales are infinitely fast in your ODE.
14
00:01:19,400 --> 00:01:23,360
DAEs, they're called. And this is a textbook by Ascher and Petzold,
15
00:01:23,600 --> 00:01:31,730
which talks about DAEs as well as ODEs: Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations.
16
00:01:34,220 --> 00:01:41,870
There's a wonderful pair of books, thick and forbidding-looking, by Hairer, Nørsett and Wanner, which are just wonderful.
17
00:01:42,050 --> 00:01:45,590
These are called Solving Ordinary Differential Equations I and II.
18
00:01:46,010 --> 00:01:52,260
They're, of course, more advanced; all the numerical analysis of this field is summarised in there.
19
00:01:52,400 --> 00:01:55,680
But what makes them so much fun is the historical flavour here.
20
00:01:55,700 --> 00:01:58,940
These guys love to quote from the original authors.
21
00:01:58,950 --> 00:02:06,170
You'll find Euler in Latin and Gauss in German, and pictures everywhere, many of them copied from old papers.
22
00:02:06,710 --> 00:02:10,640
There's a wonderful zest about these books, which to my mind,
23
00:02:10,640 --> 00:02:19,760
represent how much fun academics can be because these are academics studying an important subject, enjoying the scholarly aspect of that subject.
24
00:02:22,980 --> 00:02:29,730
The final two books I'll mention have the theme that they combine ODEs and PDEs and show you some of the analogies.
25
00:02:29,730 --> 00:02:36,440
So this one is by Arieh Iserles at Cambridge, called A First Course in the Numerical Analysis of Differential Equations.
26
00:02:36,450 --> 00:02:39,870
So he doesn't say ordinary or partial because he's doing both.
27
00:02:42,360 --> 00:02:47,669
And finally one by Randy LeVeque at the University of Washington in the USA.
28
00:02:47,670 --> 00:02:54,030
Again, a book that covers both: Finite Difference Methods for Ordinary and Partial Differential Equations.
29
00:02:57,570 --> 00:03:04,380
So some fields don't have a lot of good books; other fields are very lucky, and numerical ODEs is certainly one of the lucky ones.
30
00:03:07,490 --> 00:03:13,820
Okay. So today we're going to talk a bit about what it means for numerical schemes to converge.
31
00:03:14,240 --> 00:03:20,720
This is certainly not a theory course, but we will mention some of the conclusions of the theory.
32
00:03:23,090 --> 00:03:27,680
So we're going to begin by talking about the crucial notion of order of accuracy.
33
00:03:30,680 --> 00:03:43,110
It's all about how the accuracy of a numerical scheme improves when you shrink the time step.
34
00:03:45,730 --> 00:03:58,150
So let me remind you that our standard picture for solving an ODE numerically is that we have a time axis and we discretise it,
35
00:03:59,140 --> 00:04:08,950
in the first instance taking equally spaced points: t_0 = 0, then t_1 = k, and t_2 = 2k, and so on,
36
00:04:09,130 --> 00:04:13,150
where k is the time step, some number bigger than zero.
37
00:04:13,900 --> 00:04:20,920
The equation we're trying to solve is u' = f(t, u).
38
00:04:22,540 --> 00:04:27,430
So that's our ordinary differential equation. f could be a linear or a nonlinear function.
39
00:04:27,640 --> 00:04:34,300
It could depend on time explicitly or not. It certainly will depend on u if you want the problem to be interesting.
40
00:04:34,420 --> 00:04:38,260
If it didn't depend on u, this would just be an integral, not an ODE.
41
00:04:39,580 --> 00:04:44,770
And for an initial value problem, we will also have an initial condition, namely that u
42
00:04:44,770 --> 00:04:49,300
at time zero is equal to some initial value.
43
00:04:50,590 --> 00:04:57,549
So these two between them represent an initial value problem, and to discretise,
44
00:04:57,550 --> 00:05:02,650
we replace the derivative by some algebraic approximation to a derivative.
45
00:05:04,300 --> 00:05:08,710
So let me say loosely what the definition of order of accuracy is.
46
00:05:09,340 --> 00:05:18,550
So we say that a numerical scheme (and to be specific, we're always really talking about a Runge-Kutta or a multistep scheme)
47
00:05:23,900 --> 00:05:27,530
So we have a numerical formula for an initial value problem.
48
00:05:29,890 --> 00:05:33,340
And loosely speaking, we say it has order of accuracy
49
00:05:33,380 --> 00:05:38,710
p. Order of accuracy
50
00:05:38,950 --> 00:05:49,660
p if the errors you make when you use the scheme are of size O(k^p),
51
00:05:52,880 --> 00:06:02,480
but not O(k^(p+1)), as k goes to zero.
52
00:06:03,440 --> 00:06:07,130
And of course, this is not a precisely stated condition,
53
00:06:07,250 --> 00:06:13,570
but I should at least emphasise that you can only hope to see this if your problem is sufficiently smooth.
54
00:06:13,580 --> 00:06:17,780
So let's say for sufficiently smooth problems.
55
00:06:24,870 --> 00:06:31,890
So if you have a solution that looks like this, with discontinuities in the derivative, then of course you won't expect to get fixed-order convergence.
56
00:06:32,130 --> 00:06:36,450
But for a smooth problem, if your formula is good enough, you would expect that.
57
00:06:36,690 --> 00:06:40,200
So there's our loose definition of order of accuracy.
58
00:06:41,640 --> 00:06:50,370
Of course, all of this can be made very precise. Whenever you compute, you can do experiments to observe this kind of thing.
59
00:06:50,370 --> 00:06:55,810
And I want to begin with one of those. So the first computation we'll show uses this equation:
60
00:06:55,830 --> 00:07:08,730
u' = sin(40tu), with u(0) = 1.
61
00:07:09,090 --> 00:07:15,750
So that's the example I want to show you now. And this is the code M26.
62
00:07:35,210 --> 00:07:43,400
Okay. So let's see. Taking a look at M26 first on the handout, what you see is the first bit of code, in which we solve the problem, as it were,
63
00:07:43,520 --> 00:07:51,230
exactly. And the way we do that is by calling MATLAB's ode45, which is a pretty accurate solver.
64
00:07:51,560 --> 00:07:55,790
And we specify a tolerance. There's a relative tolerance and an absolute tolerance.
65
00:07:56,000 --> 00:07:59,900
We set them both to 10^-8. In the homework assignment,
66
00:08:00,530 --> 00:08:04,490
in order to be confident in your solution, you'd better fool around with the tolerances a bit.
67
00:08:04,820 --> 00:08:08,180
So this is an example of how you can set tolerances in MATLAB.
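Here, for concreteness, is a hedged sketch of the kind of call being described; the time interval and the ODE handle are my reading of the example, not a copy of the M26 handout.

```matlab
% Setting both tolerances to 1e-8 for MATLAB's ode45, as described above.
f = @(t,u) sin(40*t.*u);                      % the example ODE as I transcribed it
opts = odeset('RelTol',1e-8,'AbsTol',1e-8);   % relative and absolute tolerances
[t,u] = ode45(f, [0 1], 1, opts);             % assumed interval [0,1], u(0) = 1
```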
68
00:08:09,350 --> 00:08:18,530
So we compute the exact solution, and then we go into this bit of code where I forgo the canned software and just do it myself with Runge-Kutta.
69
00:08:18,890 --> 00:08:25,130
So these lines here in the middle of the code are the famous fourth-order Runge-Kutta formula.
70
00:08:25,520 --> 00:08:29,390
You see a =, then b =, then c =, then d =.
71
00:08:30,080 --> 00:08:36,860
Those are the four stages of the Runge-Kutta formula. Then the four stages are combined in the next line, u = u +
72
00:08:37,280 --> 00:08:45,230
and so on; we accumulate the results, and at the end we show how they compare with the, as it were, exact solution.
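As a sketch (not the M26 code itself), one classical fourth-order Runge-Kutta step with stages named a, b, c, d as above might look like this:

```matlab
% One RK4 step for u' = f(t,u), advancing (t,u) by step size k.
a = k*f(t, u);
b = k*f(t + k/2, u + a/2);
c = k*f(t + k/2, u + b/2);
d = k*f(t + k,   u + c);
u = u + (a + 2*b + 2*c + d)/6;   % combine the four stages
t = t + k;
```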
73
00:08:48,330 --> 00:08:59,680
So I'll run M26. I start it up, and there's our exact solution.
74
00:09:05,910 --> 00:09:10,530
And then every time I press return, I'll get the result for a finer step size.
75
00:09:10,800 --> 00:09:15,750
So the first time I type return, you can see what happens with a step size of 0.5.
76
00:09:16,140 --> 00:09:19,980
We have a couple of steps and they have nothing to do with the truth.
77
00:09:20,820 --> 00:09:26,190
Then we shrink the step size to 0.25, and still it has nothing to do with the correct solution.
78
00:09:28,410 --> 00:09:33,870
Then we shrink to one eighth and now it's beginning to be in the right ballpark, if you like.
79
00:09:34,920 --> 00:09:37,870
Then I think at the next step it's striking how much it improves.
80
00:09:37,870 --> 00:09:45,990
So we go from a time step of one eighth to 1/16, and then suddenly you can see that we've caught the true solution.
81
00:09:46,590 --> 00:09:48,719
Your first impression is that it's not so great,
82
00:09:48,720 --> 00:09:56,940
but actually it's better than it looks, because the dots are indeed very close to the red line and the plot just draws straight lines between the dots.
83
00:09:57,180 --> 00:10:02,580
So Runge-Kutta with three or four points per wavelength is doing okay.
84
00:10:04,110 --> 00:10:07,770
Then as we shrink the time step further, it becomes much better than okay.
85
00:10:08,190 --> 00:10:18,390
So there we are with 1/32, and now you can't see visually much difference between the red curve and the black dots. It keeps getting better:
86
00:10:18,420 --> 00:10:23,730
1/64 looks like that, and 1/128 looks like that.
87
00:10:23,910 --> 00:10:28,410
So this is how things go when everything's working well and your functions are smooth.
88
00:10:31,040 --> 00:10:39,079
When I press return now, we'll see an error plot. So there you see the error at time
89
00:10:39,080 --> 00:10:43,399
one for these different grids. Ten: there's probably a mistake there;
90
00:10:43,400 --> 00:10:50,300
that should be the minus of the log. So one half, one quarter, one eighth, and so on, as you shrink the time step.
91
00:10:50,960 --> 00:10:56,240
It takes a while to get in the zone, but once it's in the zone, you have a very clear k^4 convergence.
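A hypothetical way to check that observation: if err is a vector of errors at the final time for step sizes halved each time (err is a name I am assuming, not taken from the code), the observed order is the base-2 logarithm of successive ratios.

```matlab
% err(j) = error with step size k_j, where k_(j+1) = k_j/2 (hypothetical names).
p_obs = log2(err(1:end-1) ./ err(2:end))   % should approach 4 once "in the zone"
```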
92
00:10:57,830 --> 00:11:03,020
By the way, one of the things that bothers me about the rhetoric of numerical analysis is the word error.
93
00:11:03,800 --> 00:11:07,910
Numerical analysis has this kind of an ugly reputation, right?
94
00:11:07,940 --> 00:11:10,700
Nobody thinks of it as a beautiful field except me.
95
00:11:11,660 --> 00:11:16,430
And one reason for that is that when people are talking about this stuff, they always use the word error.
96
00:11:16,430 --> 00:11:21,350
You know, the error decreases like K to the fourth. It's such a negative word.
97
00:11:21,350 --> 00:11:25,579
I wish people could talk about convergence instead of error: the convergence is fast,
98
00:11:25,580 --> 00:11:29,240
and that's how fast it is. But we still talk about error.
99
00:11:31,340 --> 00:11:33,470
I guess that's it for that example.
100
00:11:44,130 --> 00:11:54,750
So let's spend five minutes anyway doing, for once, the kind of Taylor series analysis you do to find out why a formula has a certain order of accuracy.
101
00:11:55,560 --> 00:12:02,310
Roughly speaking, if I give you a formula and ask you to figure out whether it's quadratic, cubic, quartic,
102
00:12:02,880 --> 00:12:09,060
you have to grind out a bunch of Taylor series, which isn't much fun, and I certainly wouldn't ask you to do that for the fourth-order
103
00:12:09,060 --> 00:12:13,560
Runge-Kutta formula on paper. I'm impressed that Runge and Kutta did manage to do it.
104
00:12:14,220 --> 00:12:20,010
Let's do a simpler example. So let me first tell you the general way one does this.
105
00:12:20,220 --> 00:12:23,790
So how do we determine, analytically,
106
00:12:28,890 --> 00:12:37,049
the order of accuracy of a given formula? So the very loosest idea is that you plug in Taylor series,
107
00:12:37,050 --> 00:12:41,850
then cancel as many things as you can and see what you're left with. To be a little more precise,
108
00:12:43,940 --> 00:12:48,320
the notion that turns out to be fruitful is what's called the local truncation error.
109
00:12:48,680 --> 00:12:51,770
And this means the error you get at one step.
110
00:12:52,040 --> 00:13:03,140
if everything else had been exact. So that is to say, it's the difference between the computed value at a new time step
111
00:13:04,320 --> 00:13:10,140
and the exact solution at that time: the computed minus the exact,
112
00:13:12,240 --> 00:13:17,640
if u is a smooth solution to an ODE
113
00:13:20,550 --> 00:13:30,290
and v_{n+1} is computed from exact previous data.
114
00:13:30,650 --> 00:13:34,780
So what do I mean by that? Suppose by magic
115
00:13:35,680 --> 00:13:45,069
the value we had for v_n was exactly equal to the exact solution, and the value we had for v_{n-1} was exactly equal to the exact solution,
116
00:13:45,070 --> 00:13:54,910
and so on. So if you've got everything correct up to a certain time and then you take one step using your formula,
117
00:13:55,300 --> 00:14:02,970
the local truncation error is the error you make at that step. Now, in order to compute this,
118
00:14:05,120 --> 00:14:09,560
as I say, the idea is to plug in Taylor series. So, a little more fully,
119
00:14:09,710 --> 00:14:18,400
what you do is you replace things like v_{n-1} and v_{n-2}, etc.,
120
00:14:19,030 --> 00:14:24,430
by Taylor series in your formula.
121
00:14:31,050 --> 00:14:35,370
We're certainly not talking in this analysis about convergence of the series.
122
00:14:35,370 --> 00:14:40,560
It's all a formal analysis which would indeed converge if your functions were smooth enough.
123
00:14:42,090 --> 00:14:47,120
You plug in those Taylor series and you cancel as many terms as you can.
124
00:14:50,280 --> 00:14:53,400
And that gives you the local truncation error.
125
00:14:53,490 --> 00:14:59,430
The local truncation error. And then.
126
00:15:01,980 --> 00:15:08,220
if the local truncation error is O(k^(p+1)),
127
00:15:11,270 --> 00:15:15,920
then the order of accuracy is p.
128
00:15:19,080 --> 00:15:21,310
So in order to get fourth-order Runge-Kutta,
129
00:15:21,420 --> 00:15:27,510
in order to show it has order of accuracy four, you need to show that the local truncation errors look like k^5.
130
00:15:27,720 --> 00:15:33,560
And the reason for that is, when you do an integration, you take O(k^-1) steps.
131
00:15:33,900 --> 00:15:40,080
So let me explain that. The global error, which is what determines order of accuracy,
132
00:15:42,330 --> 00:15:50,490
is O(k^p) for some p, and that is, loosely speaking, equal to the local error
133
00:15:54,470 --> 00:15:57,680
times the number of steps, the number of time steps.
134
00:16:01,410 --> 00:16:11,400
So if the local error is O(k^(p+1)), and you're going to take O(k^-1) time steps,
135
00:16:12,150 --> 00:16:16,380
then the global error you would expect to be of size k^p.
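In symbols, the bookkeeping just described is roughly:

```latex
\text{global error} \;\approx\;
\underbrace{O(k^{\,p+1})}_{\text{local error per step}}
\times
\underbrace{O(k^{-1})}_{\text{number of steps}}
\;=\; O(k^{\,p}).
```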
136
00:16:19,330 --> 00:16:25,410
So let's do this analysis for one example and we'll take the simplest example that isn't completely trivial.
137
00:16:25,420 --> 00:16:27,460
I guess the Euler method is too trivial,
138
00:16:27,670 --> 00:16:34,660
so we just go up one level from the Euler method and do the second-order Adams-Bashforth method as our example.
139
00:16:36,680 --> 00:16:42,260
So remember, we had an infinite sequence of Adams-Bashforth methods.
140
00:16:43,400 --> 00:16:54,850
The second-order one looks like this.
141
00:16:56,710 --> 00:17:09,190
It's v_{n+1} = v_n + (k/2)(3
142
00:17:09,370 --> 00:17:14,580
f_n - f_{n-1}).
143
00:17:16,580 --> 00:17:24,830
And I remind you that when I say f_n, I mean f applied to v_n at time t_n.
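A minimal sketch of how that formula might be driven in a loop (my own illustration; the names f, k, T and u0 are assumptions, and the single starting step is done with Euler for simplicity):

```matlab
% Second-order Adams-Bashforth for u' = f(t,u) on [0,T], with u(0) = u0.
N = round(T/k);  t = (0:N)*k;  v = zeros(1,N+1);
v(1) = u0;
v(2) = v(1) + k*f(t(1),v(1));        % one starting step (Euler), just to get going
for n = 2:N
    v(n+1) = v(n) + (k/2)*( 3*f(t(n),v(n)) - f(t(n-1),v(n-1)) );
end
```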
144
00:17:27,620 --> 00:17:34,640
So that's the second-order Adams-Bashforth method. Let's do this business of putting in Taylor series to see how much cancels.
145
00:17:35,840 --> 00:17:40,940
So in order to keep things simple, let me abbreviate.
146
00:17:41,840 --> 00:17:51,620
I'll write t for t_n, and I'll write u for u(t_n), and do my Taylor series around t_n.
147
00:17:56,200 --> 00:17:59,830
So u at t_{n+1},
148
00:18:01,660 --> 00:18:10,899
by Taylor series, would be equal to u at time t, plus k times u prime, plus k squared
149
00:18:10,900 --> 00:18:18,610
over two times u double prime, plus k cubed over six times u triple prime,
150
00:18:19,210 --> 00:18:24,550
and so on: the Taylor series of u.
151
00:18:24,550 --> 00:18:32,320
And similarly, the only f value we need at a time step other than n is at n minus one.
152
00:18:32,320 --> 00:18:35,460
So f_{n-1}:
153
00:18:35,470 --> 00:18:43,750
its Taylor series will be, well, u prime equals f, so the zeroth-order term is u prime.
154
00:18:44,560 --> 00:18:51,940
Then the next-order term, because it's at time t minus k, would be a minus k u double prime,
155
00:18:52,420 --> 00:19:01,360
then a plus k squared over two u triple prime, and a minus k cubed over six u quadruple prime,
156
00:19:01,360 --> 00:19:10,160
and so on. So if I've got an exact solution, that's how the Taylor series will look.
157
00:19:10,460 --> 00:19:16,460
And now I plug those into that formula to find out how close this is
158
00:19:16,850 --> 00:19:21,900
to what the formula gives. So I compute:
159
00:19:24,560 --> 00:19:34,770
let's see, v_{n+1} according to the formula. That is equal to v_n, which by assumption is exactly right;
160
00:19:34,950 --> 00:19:39,390
it's exactly u. And then we have k over two times this stuff.
161
00:19:39,690 --> 00:19:43,090
So that's k over two times:
162
00:19:43,110 --> 00:19:46,590
now f_n is exactly u prime,
163
00:19:47,970 --> 00:19:50,010
so we have 3 u prime.
164
00:19:51,900 --> 00:20:05,460
And then f_{n-1} is this thing: minus f_{n-1} means minus u prime, plus k u double prime, minus k squared over two u triple prime.
165
00:20:05,850 --> 00:20:15,050
And that's far enough. So now the question is, how close is that to that?
166
00:20:16,250 --> 00:20:28,360
Well, let's see. If we compare u(t_{n+1}) and what the formula gives us, the u terms come out
167
00:20:28,360 --> 00:20:34,100
correct. Now what about the next term, the u prime term?
168
00:20:34,120 --> 00:20:41,320
Well, we should get k times u prime, and in fact we do get k times u prime: k over two times 2 u prime.
169
00:20:42,040 --> 00:20:45,310
So that one was cancelled properly.
170
00:20:48,420 --> 00:20:51,690
What about the next term, the k squared over two
171
00:20:51,690 --> 00:20:57,340
u double prime? That's what we want to get. What do we get? k squared over two
172
00:20:57,390 --> 00:21:05,280
u double prime. So that one is correct too. So in fact, the next one is the first one where they don't cancel.
173
00:21:07,230 --> 00:21:13,050
The difference between this and this is equal to that minus that.
174
00:21:13,560 --> 00:21:20,400
And because minus a quarter is not the same as plus a sixth, they don't cancel.
175
00:21:20,790 --> 00:21:33,240
So what we find is that v_{n+1} is equal to u at time t_{n+1}, minus five twelfths
176
00:21:35,000 --> 00:21:44,960
(5/12 is a quarter plus a sixth) times k cubed u triple prime, and so on.
177
00:21:48,470 --> 00:21:56,810
So there's our analysis of Adams-Bashforth 2: we find that the local truncation error is O(k^3).
178
00:21:58,100 --> 00:22:02,690
So the order of accuracy is two.
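To summarise the board computation in one place (everything evaluated at t_n):

```latex
\begin{aligned}
v_{n+1} &= u + \tfrac{k}{2}\bigl(3u' - u' + k u'' - \tfrac{k^2}{2}u''' + \cdots\bigr)
         = u + k u' + \tfrac{k^2}{2}u'' - \tfrac{k^3}{4}u''' + \cdots,\\
u(t_{n+1}) &= u + k u' + \tfrac{k^2}{2}u'' + \tfrac{k^3}{6}u''' + \cdots,\\
v_{n+1} - u(t_{n+1}) &= -\tfrac{5}{12}\,k^3 u''' + O(k^4).
\end{aligned}
```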
179
00:22:11,670 --> 00:22:16,140
When I was learning this stuff as a graduate student (this was a thousand years ago),
180
00:22:16,470 --> 00:22:22,500
one of the other students in the course was Bill Gropp, and the teacher was Gene Golub.
181
00:22:22,560 --> 00:22:32,430
And Gene said that anybody can use a symbolic tool like Maxima to do this analysis for linear multistep methods.
182
00:22:33,060 --> 00:22:37,860
But I've never seen anyone use Maxima to do this kind of analysis for a Runge-Kutta method.
183
00:22:38,670 --> 00:22:46,260
And then he said, if anyone can do this analysis for the fourth-order Runge-Kutta method in Maxima, with symbolic computing,
184
00:22:47,670 --> 00:22:56,579
they can have an A in the course. So Bill Gropp did that and got an A in the course, and he later went on to invent MPI and all sorts of things.
185
00:22:56,580 --> 00:23:05,140
He's a world-famous computer scientist. He kept attending the course but didn't turn in any more homework, I think.
186
00:23:07,450 --> 00:23:14,670
Okay. So there's our very superficial look at order of accuracy.
187
00:23:14,940 --> 00:23:23,370
Now, let's take an equally superficial look at the business of convergence and stability, which is, of course, what we would like to happen.
188
00:23:23,370 --> 00:23:29,670
We want the method to converge as the time steps shrink to zero.
189
00:23:50,070 --> 00:23:52,920
So the history here is interesting, I emphasise that.
190
00:23:54,660 --> 00:24:00,000
These methods go back many, many years sort of to Euler, but certainly to Adams in the middle of the 19th century.
191
00:24:00,270 --> 00:24:06,840
So the idea of high-order numerical approximations to ODEs is 150 years old.
192
00:24:07,350 --> 00:24:11,700
But really, there wasn't any theory about whether these methods should converge or not.
193
00:24:11,970 --> 00:24:16,260
Until the 1950s, after computers were invented.
194
00:24:16,560 --> 00:24:24,720
That's when people started exploring all sorts of things and they realised they really had to understand convergence and stability.
195
00:24:31,030 --> 00:24:35,049
On a computer. Suddenly you could talk about a 20th order method.
196
00:24:35,050 --> 00:24:40,810
And then the question would be, would it really behave as expected or might something go wrong?
197
00:24:40,960 --> 00:24:46,660
And the key person who initiated this theory was the Swedish mathematician Germund Dahlquist.
198
00:24:48,430 --> 00:24:52,180
So that's in the 1950s, and he was a leading figure then for 50 years.
199
00:24:52,180 --> 00:25:03,550
Until, I guess, ten or 15 years ago. So what people discovered, once computers were doing explorations of all sorts, is that some formulas don't work,
200
00:25:03,820 --> 00:25:09,490
even though the order of accuracy is good. So here's an example.
201
00:25:12,870 --> 00:25:21,750
of a perfectly sensible formula, on the face of it: v_{n+1} = -4
202
00:25:22,140 --> 00:25:26,040
v_n + 5
203
00:25:26,370 --> 00:25:31,300
v_{n-1}, plus the time step
204
00:25:32,950 --> 00:25:39,310
times (4 f_n + 2 f_{n-1}).
205
00:25:44,230 --> 00:25:52,690
So that's an example of a formula which fails completely if you try to use it, even though the order of accuracy is excellent.
206
00:25:52,720 --> 00:25:56,260
The order of accuracy is not just two, it's three.
207
00:25:57,560 --> 00:26:00,950
So this looks better than second-order
208
00:26:00,950 --> 00:26:04,429
Adams-Bashforth. It basically uses the same information;
209
00:26:04,430 --> 00:26:08,690
it goes back one time step, and yet it has a higher order of accuracy.
210
00:26:09,110 --> 00:26:14,340
So it would be perfectly natural to try to use this formula instead of Adams-Bashforth.
211
00:26:15,110 --> 00:26:18,110
If you do, it fails completely. And so it doesn't have a name.
212
00:26:18,110 --> 00:26:26,360
It's Brand X, right? There's no name for a bad formula. If you try it, the errors explode, not algebraically
213
00:26:26,370 --> 00:26:32,330
but exponentially. You get no convergence at all. And I think we won't try that today, but maybe later.
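For the curious, here is a sketch of the experiment we are skipping: apply this formula to a harmless test problem of my own choosing, u' = -u with u(0) = 1, start from exact data, and watch the error explode.

```matlab
% The unstable two-step formula applied to u' = -u; errors grow roughly like 5^n.
f = @(t,u) -u;  k = 0.05;  N = round(1/k);
t = (0:N)*k;  v = zeros(1,N+1);
v(1) = 1;  v(2) = exp(-k);                 % exact starting values
for n = 2:N
    v(n+1) = -4*v(n) + 5*v(n-1) + k*(4*f(t(n),v(n)) + 2*f(t(n-1),v(n-1)));
end
semilogy(t, abs(v - exp(-t)))              % exponential blow-up despite order 3
```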
214
00:26:34,700 --> 00:26:40,940
So in a loose sense, whatever that means, it's exponentially unstable.
215
00:26:41,750 --> 00:26:49,710
And you certainly don't get the right answers. And of course, you can have analogously unstable formulas of the Runge-Kutta kind.
216
00:26:50,150 --> 00:26:57,590
It's very easy to make unstable formulas. So here are the key definitions that Dahlquist's famous theory is based on.
217
00:26:58,550 --> 00:27:04,780
The three key words are consistent, stable,
218
00:27:06,730 --> 00:27:16,790
and convergent. Consistent is about what you get from the Taylor series, the order of accuracy.
219
00:27:16,800 --> 00:27:20,450
So loosely speaking, I'm not giving you a very precise definition.
220
00:27:21,110 --> 00:27:25,160
A formula is consistent if the order of accuracy is at least one.
221
00:27:28,990 --> 00:27:34,610
Euler's method, for example, is consistent. What about stable?
222
00:27:36,890 --> 00:27:40,250
Well, here's the key new thing that people didn't focus on before computers.
223
00:27:40,820 --> 00:27:45,980
Instability consists of errors accumulating not just additively, but exponentially.
224
00:27:47,120 --> 00:27:53,450
You run this thing, you make a small error at a certain time step, but the subsequent influence of that error grows exponentially.
225
00:27:53,810 --> 00:27:57,080
That's the issue with stability, exponential growth.
226
00:27:57,320 --> 00:28:01,670
But it turns out you can define it in a much more precise way.
227
00:28:01,900 --> 00:28:05,840
We say a formula is stable if,
228
00:28:07,560 --> 00:28:12,300
in the case of the trivial function f = 0
229
00:28:13,530 --> 00:28:21,390
(that's a trivial ODE indeed), all solutions are bounded.
230
00:28:27,280 --> 00:28:31,720
So with this formula here, if f were equal to zero, this whole thing would go away.
231
00:28:32,470 --> 00:28:38,740
All that would remain would be this part of the formula, which is a linear, constant coefficient recurrence relation.
232
00:28:38,980 --> 00:28:45,460
It's a three-term recurrence relation: new value equals minus four times the previous value plus five times the one before that.
233
00:28:46,330 --> 00:28:53,170
That recurrence relation has two linearly independent solutions,
234
00:28:53,170 --> 00:28:56,980
And it turns out that with these coefficients one of them blows up to infinity.
235
00:28:57,280 --> 00:29:06,790
So it's not the case that that recurrence relation has only bounded solutions, and that, it turns out, is enough of a definition of stability.
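That bounded-solutions check is easy to carry out by hand or in MATLAB: with f = 0 the formula reduces to the recurrence v_{n+1} + 4 v_n - 5 v_{n-1} = 0, whose characteristic polynomial is z^2 + 4z - 5.

```matlab
roots([1 4 -5])   % gives 1 and -5; the root -5 means unbounded solutions: unstable
```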
236
00:29:07,930 --> 00:29:18,120
Convergent means that the computed results converge to the exact results
237
00:29:20,450 --> 00:29:28,370
at each time: so v(t) converges to u(t) as k goes to zero.
238
00:29:31,750 --> 00:29:35,230
Now, of course, on a real computer you have rounding errors as well as everything else.
239
00:29:35,710 --> 00:29:38,890
And this theory isn't talking about rounding errors explicitly.
240
00:29:38,920 --> 00:29:45,730
So when I say that v approaches u, I'm speaking as a mathematician doing true mathematics.
241
00:29:53,560 --> 00:29:57,200
So these three key definitions, of which I have given you the essence,
242
00:29:57,200 --> 00:30:02,710
though they can be made very precise, turn out to be related in the most beautiful way imaginable.
243
00:30:03,040 --> 00:30:19,510
And this is called the Dahlquist Equivalence Theorem. The Equivalence Theorem says that an ODE formula is convergent
244
00:30:22,300 --> 00:30:28,150
if and only if (and that's really a rigorous if and only if) it is both consistent
245
00:30:29,970 --> 00:30:41,750
and stable. A beautiful theorem.
246
00:30:44,200 --> 00:30:46,450
To put it in words, what's that really telling us?
247
00:30:47,830 --> 00:30:53,469
Well, non-convergent means you're either inconsistent, that is, you're not tracking the right equation,
248
00:30:53,470 --> 00:30:57,040
which is not too surprising, or else unstable.
249
00:30:57,730 --> 00:30:58,690
So really,
250
00:30:58,690 --> 00:31:08,379
the new content in the theorem essentially is saying that the only thing that can go wrong with the formula is that the local errors blow up.
251
00:31:08,380 --> 00:31:12,490
And instability is sort of the only surprising thing that can go wrong.
252
00:31:12,910 --> 00:31:17,770
Consistency is easy; it's just Taylor series. Stability is the only other thing you have to look at.
253
00:31:19,240 --> 00:31:23,950
Beautiful theorem, which some would say is almost the fundamental theorem of numerical analysis.
254
00:31:25,600 --> 00:31:30,339
Dahlquist went on to do lots of other beautiful things: the so-called Dahlquist stability
255
00:31:30,340 --> 00:31:35,500
barriers, two more theorems about what order of accuracy it's possible to have.
256
00:31:35,800 --> 00:31:40,270
And his stability barrier would say that a formula of this kind,
257
00:31:41,500 --> 00:31:50,140
which is based only on two previous values and doesn't use f_{n+1}, could only possibly have order one or two.
258
00:31:50,560 --> 00:32:02,049
You couldn't possibly have order three or four. More generally, if you have a k-step method (call this one a two-step method) that is explicit,
259
00:32:02,050 --> 00:32:07,270
meaning it doesn't use the new time value in the f's, it can only have order of accuracy
260
00:32:07,590 --> 00:32:12,400
k. So it's a beautiful bit of mathematics involving polynomials and recurrence relations.
261
00:32:19,360 --> 00:32:23,350
So Dahlquist was also around when I was a graduate student.
262
00:32:23,740 --> 00:32:31,770
And he's a fascinating example in my experience of somebody who is intellectually so fruitful.
263
00:32:31,780 --> 00:32:36,129
He had these great ideas, beautiful results, and yet was a terrible speaker.
264
00:32:36,130 --> 00:32:38,800
And I listened to him give talks. I never understood a word.
265
00:32:40,120 --> 00:32:45,639
The other extreme would have been Jim Wilkinson, also a great mathematician, very fruitful, who was a great speaker.
266
00:32:45,640 --> 00:32:50,469
If you listen to him, you understood everything. Okay.
267
00:32:50,470 --> 00:32:55,360
So that is our 20-minute summary of the theory of numerical solution of ODEs.
268
00:32:55,720 --> 00:33:00,760
And now let's talk briefly about the practice of software.
269
00:33:01,090 --> 00:33:05,680
So I'm going to just say a word about adaptive codes, adaptive ODE codes.
270
00:33:21,700 --> 00:33:33,609
I mentioned already last time that the history here is that ODEs and integrals were really the first two early great successes of intelligent numerics,
271
00:33:33,610 --> 00:33:34,749
adaptive numerics.
272
00:33:34,750 --> 00:33:46,959
People realised in the sixties that you could very fruitfully use error estimates to guide the way your code solves the problem and thereby
273
00:33:46,960 --> 00:33:53,980
achieve a mode of operation where the user specifies a tolerance and you deliver a result which is probably within that tolerance.
274
00:33:54,340 --> 00:33:57,580
Let me say a word about how that's done.
275
00:33:58,930 --> 00:34:04,270
So the idea is that on input, the user specifies a tolerance.
276
00:34:09,450 --> 00:34:19,280
And then the code, the program, varies things to achieve this.
277
00:34:19,550 --> 00:34:23,240
So the program chooses the step size.
278
00:34:25,690 --> 00:34:28,870
And also perhaps the formula.
279
00:34:32,070 --> 00:34:32,969
To achieve this.
280
00:34:32,970 --> 00:34:42,629
And I don't mean once and for all, but as time goes by, it's constantly monitoring things in order to get, we hope, the right accuracy.
281
00:34:42,630 --> 00:34:53,280
And how does it do that? Well, the basic idea is to use some kind of a local error estimate. And the basic idea of a local error estimate
282
00:34:55,160 --> 00:34:59,780
is to compute the result by two different formulas and then take the difference.
283
00:34:59,990 --> 00:35:05,900
So the vanilla-flavoured approach is, you compute a new value v_{n+1}
284
00:35:08,770 --> 00:35:19,030
by two different methods. And in the simplest conception, you could say two different methods of different orders of accuracy,
285
00:35:23,600 --> 00:35:29,210
So that one of them can be expected, at least when the time step is small enough to be more accurate than the other.
286
00:35:30,890 --> 00:35:39,210
And then the difference of the two approximates the error.
287
00:35:47,690 --> 00:35:51,290
So for example, in MATLAB's ode45 code,
288
00:35:52,760 --> 00:35:56,410
a value is computed by fourth-order Runge-Kutta and by fifth-order Runge-Kutta.
289
00:35:56,420 --> 00:36:03,620
And then the two are compared. And if the difference is small enough to be matching the prescribed tolerance, you keep going,
290
00:36:03,620 --> 00:36:08,330
or you might even enlarge the time step if it's much smaller than the prescribed tolerance.
291
00:36:08,630 --> 00:36:13,570
On the other hand, if you're not achieving the tolerance, then you'd redo it with a smaller time step.
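Here is a toy sketch of that idea, not MATLAB's actual ode45 controller: estimate the local error by comparing a first-order Euler step with a second-order Heun step, then accept and grow, or reject and halve, the step size. The names f, t0, tend, u0 and tol are assumptions for the illustration.

```matlab
% Toy adaptive time stepping: compare two formulas of different orders of accuracy.
t = t0;  u = u0;  k = (tend - t0)/100;        % initial step size guess
while t < tend
    k  = min(k, tend - t);
    s1 = f(t, u);
    v1 = u + k*s1;                            % first-order (Euler) prediction
    v2 = u + (k/2)*(s1 + f(t + k, v1));       % second-order (Heun) prediction
    err = abs(v2 - v1);                       % local error estimate
    if err <= tol
        t = t + k;  u = v2;  k = 1.5*k;       % accept the step, try a bigger one
    else
        k = k/2;                              % reject the step, redo with smaller k
    end
end
```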
292
00:36:13,580 --> 00:36:19,190
That's the basic idea. This kind of thing to us nowadays seems kind of obvious.
293
00:36:19,190 --> 00:36:23,240
What's the big deal? But in the 1960s, this was really exciting.
294
00:36:25,850 --> 00:36:31,880
In this example, as in I think many examples in computer science, the numerical people tend to do things first.
295
00:36:32,570 --> 00:36:36,740
So artificial intelligence, of course, it's an old idea of intelligent computers.
296
00:36:37,040 --> 00:36:43,310
But in terms of actually using intelligence, these adaptive codes are very early examples of that.
297
00:36:43,550 --> 00:36:46,340
And why are numerical people usually in the vanguard?
298
00:36:46,340 --> 00:36:52,940
Well, because numerical problems tend to be much more cleanly prescribed than more general reasoning problems.
299
00:36:53,270 --> 00:37:00,770
So naturally enough, numerical people tend to do things which then later, as decades pass, expand in all sorts of directions.
300
00:37:01,190 --> 00:37:08,149
Now, of course, we take it for granted that every time you interact with your keyboard, your computer is figuring things about you.
301
00:37:08,150 --> 00:37:13,130
And computers are ubiquitously intelligent nowadays.
302
00:37:13,490 --> 00:37:19,460
But in the 1960s that was not true. They were just computing with numbers and this seemed quite amazing.
303
00:37:25,140 --> 00:37:32,400
Another example of how numerical people are in the vanguard, the whole business of sharing software.
304
00:37:32,620 --> 00:37:36,239
You know, we now take it for granted. You go to the app store, you download whatever.
305
00:37:36,240 --> 00:37:40,510
You know, this is part of your life from minute to minute nowadays.
306
00:37:40,520 --> 00:37:45,270
But that essentially began with numerical people again back in the sixties and seventies,
307
00:37:45,270 --> 00:37:50,040
I guess, where because numerical problems tend to be very well defined,
308
00:37:50,040 --> 00:37:54,569
these were the first people for whom it really made sense for somebody to write a code and then
309
00:37:54,570 --> 00:38:00,030
somebody else to take it so people would send decks of cards and then later magnetic tapes and so on.
310
00:38:00,750 --> 00:38:08,010
Long before there was a World Wide Web and even before there was an Internet, people were doing that sort of thing for numerical codes.
311
00:38:11,440 --> 00:38:16,960
Let's say a word about some of the particular offerings.
312
00:38:17,500 --> 00:38:37,160
So in MATLAB, the code ode23 compares two Runge-Kutta formulas, which of course are of second order and third order.
313
00:38:39,530 --> 00:38:43,010
Let me remind you that these are not well-defined concepts;
314
00:38:43,020 --> 00:38:46,850
there's more than one second-order Runge-Kutta formula and more than one third-order one.
315
00:38:47,180 --> 00:38:56,450
So I'm using these expressions loosely. In contrast, AB2 would be the unique Adams-Bashforth formula of order two. Then ode45:
316
00:38:58,500 --> 00:39:03,450
that's the same thing with a fourth-order formula and a fifth-order formula.
317
00:39:07,270 --> 00:39:16,170
And then ode113 is based on linear multistep formulas, and they vary in order.
318
00:39:16,740 --> 00:39:25,160
So, multistep formulas of orders
319
00:39:27,120 --> 00:39:35,730
1 to 13. So the right way to parse these names: that's 2 and 3, 4 and 5, 1 and 13.
320
00:39:38,060 --> 00:39:39,680
All of these codes, in their way,
321
00:39:39,770 --> 00:39:47,479
compute along in time and are constantly monitoring things and sometimes they'll take big time steps and then if it gets complicated,
322
00:39:47,480 --> 00:39:51,680
they'll take small time steps, and then enlarge them, and then shrink them again.
323
00:39:52,190 --> 00:39:56,569
In the case of ode45 and ode23, it's only the time steps
324
00:39:56,570 --> 00:40:02,270
they vary; in the case of ode113, they'll also adjust the order depending on local conditions.
325
00:40:04,920 --> 00:40:09,570
If you want to read about the details of this, I think it's described in all of the textbooks I mentioned.
326
00:40:09,870 --> 00:40:13,290
There's also Cleve Moler's online textbook.
327
00:40:15,180 --> 00:40:28,650
So in addition to the ones I mentioned, there's Cleve Moler, the author of MATLAB, and he has a freely available textbook, Numerical Computing with MATLAB,
328
00:40:37,270 --> 00:40:40,500
which is very much an undergraduate text.
329
00:40:40,510 --> 00:40:44,320
It's very nice, but not an advanced text.
330
00:40:45,670 --> 00:40:50,140
And one of the chapters in there talks about the mechanics of adaptive time stepping.
331
00:40:52,840 --> 00:40:55,600
So of course, I want to show you this myself on the computer.
332
00:40:58,560 --> 00:41:08,910
So what I did was write a code that calls a MATLAB solver and then plots the time steps that it uses.
333
00:41:10,680 --> 00:41:25,990
So this is M27. And the particular ODE that this happens to play with is u' = u times the hyperbolic tangent
334
00:41:27,430 --> 00:41:34,990
of e to the t times sin(5t).
335
00:41:39,320 --> 00:41:52,680
The reason for picking this is that the hyperbolic tangent, combined with the e to
336
00:41:52,680 --> 00:41:56,550
the t, makes this a problem that gets less and less smooth as time goes on.
337
00:41:56,790 --> 00:42:02,580
So it's a good equation for illustrating the interplay of smooth regions and non smooth regions.
338
00:42:03,480 --> 00:42:17,910
So let's run that. So I won't describe the code in detail.
339
00:42:17,910 --> 00:42:24,570
But if you look at M27, the adaptive ODE code, you see that there's a line, sol = ode45(...).
340
00:42:24,840 --> 00:42:30,090
So that's where it's computing the result using a prescribed tolerance.
341
00:42:30,570 --> 00:42:37,950
So you see, I set the RelTol and the AbsTol to a number that I give, and then it extracts the
342
00:42:37,950 --> 00:42:44,310
appropriate information from that solution to figure out where the sample points were.
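A hedged sketch of that pattern (the ODE is M27's as I transcribed it; calling ode45 with a single output returns a structure whose field sol.x holds the time steps the solver chose):

```matlab
% Solve the M27 example to a prescribed tolerance and mark the chosen time steps.
f   = @(t,u) u.*tanh(exp(t).*sin(5*t));
tol = 1e-6;                                   % whatever tolerance is typed in
sol = ode45(f, [0 3], 1, odeset('RelTol',tol,'AbsTol',tol));
plot(sol.x, sol.y, '.-')                      % dots show where the solver put points
```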
343
00:42:51,920 --> 00:42:58,090
So I run M27. And it asks me, what time step do I want?
344
00:42:58,450 --> 00:43:01,810
So I'll begin... sorry, not what time step, what tolerance do I want?
345
00:43:02,140 --> 00:43:05,470
I'll begin with point one. A very loose tolerance.
346
00:43:06,780 --> 00:43:13,280
And it computes that curve, and you see how few points there are per wavelength.
347
00:43:13,730 --> 00:43:16,880
This is essentially correct. It really is getting accuracy.
348
00:43:17,510 --> 00:43:23,420
And the reason it can have so few points is that the Runge-Kutta formula is pretty accurate; it's a fourth-order formula.
349
00:43:26,130 --> 00:43:32,790
In that case, you don't see anything very interesting about clustering of the points because there aren't enough of them for it to be interesting.
350
00:43:34,200 --> 00:43:37,730
Now I'm going to switch to 10^-2, and I think the points won't change at all.
351
00:43:37,740 --> 00:43:41,340
Are you ready? So when I press return, see whether the points change.
352
00:43:42,690 --> 00:43:50,190
No. So what that is telling us is that it over-fulfilled the requirement even for a tolerance of 10^-1.
353
00:43:51,180 --> 00:43:55,090
Now I'll say 10^-3, and this time the points do change.
354
00:43:55,770 --> 00:44:00,210
So you see, the first bit looks the same, and towards the end we're having a few more sample points.
355
00:44:00,450 --> 00:44:06,630
It's not yet interesting. 10^-4, 10^-5,
356
00:44:07,020 --> 00:44:16,079
10^-6. So now it is interesting. You can see in the smooth regions it's not putting very many points,
357
00:44:16,080 --> 00:44:20,690
but in the less smooth regions it's adaptively figuring out that it needs more points.
358
00:44:20,700 --> 00:44:27,149
So this is the sharpest corner in the solution and it's putting a lot of points and that will continue,
359
00:44:27,150 --> 00:44:31,050
of course, as we decrease further: 10^-7,
360
00:44:32,850 --> 00:44:40,870
10^-8, 10^-9. So you can see that even to get nine digits of accuracy,
361
00:44:41,560 --> 00:44:47,350
the Runge-Kutta method doesn't need many points in the smooth areas, but it certainly does need a lot of points near there.
362
00:44:48,490 --> 00:44:54,010
This is happening all the time. Let's see how far we can go.
363
00:44:54,030 --> 00:44:57,210
I'm not actually sure. If I go to 10^-10, it's happy.
364
00:44:57,420 --> 00:45:00,540
10^-11, is it happy? Yeah. Happy? Yeah.
365
00:45:02,220 --> 00:45:10,770
At some point, we'll get a message. Yeah. So the MATLAB ODE solvers basically don't let you go below 10^-14.
366
00:45:14,290 --> 00:45:20,050
Okay. So that's an illustration of the very standard technology in action.
367
00:45:20,320 --> 00:45:31,810
Let me, for fun, show you how I would do it in Chebfun, which is always a more convenient tool for a lot of things, though not always the quickest tool.
368
00:45:34,880 --> 00:45:41,540
So in Chebfun, let's see, the way you would solve this problem is you define a chebop, and the interval was from 0 to 3.
369
00:45:41,540 --> 00:45:47,210
So there's a chebop on the interval from 0 to 3, and then you have to define the operator.
370
00:45:47,360 --> 00:46:01,640
So for this particular ODE, it was the derivative of u, minus u times the hyperbolic tangent of e to the t times sin(5t).
371
00:46:03,890 --> 00:46:11,950
That should all equal zero. That's our operator. I hope I've got the right number of parentheses there.
372
00:46:13,330 --> 00:46:18,460
And then we need a boundary condition. And the left boundary condition was that the value is one.
373
00:46:19,270 --> 00:46:23,320
So then in Chebfun you say L backslash 0.
374
00:46:23,980 --> 00:46:28,610
And that gives you the solution. The first time always takes longer.
375
00:46:28,660 --> 00:46:31,690
This is quite a few seconds. Let's do it again.
376
00:46:32,470 --> 00:46:35,710
I'm curious, how long did that really take?
377
00:46:38,570 --> 00:46:41,230
Yeah. So, you know, one second is no big deal.
378
00:46:41,240 --> 00:46:46,730
Of course, for a 1D problem like this, this is nowhere near a fast solution, but it's a convenient solution.
379
00:46:46,880 --> 00:46:50,810
And then to plot it, we could just say plot(u).
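Putting the Chebfun steps just described together, a sketch (standard chebop usage as I understand it, assuming Chebfun is installed):

```matlab
% Chebfun solution of the same ODE on [0,3] with u(0) = 1.
L     = chebop(0, 3);
L.op  = @(t,u) diff(u) - u.*tanh(exp(t).*sin(5*t));   % the operator, set equal to zero
L.lbc = 1;                                            % left boundary condition u(0) = 1
u = L\0;                                              % solve
plot(u), length(u)                                    % length of the representing polynomial
```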
380
00:46:52,730 --> 00:46:56,270
So there it is. And that will be accurate to something like ten digits.
381
00:46:56,450 --> 00:47:00,319
In fact, let's see, I guess if I run the code, it'll tell us the accuracy.
382
00:47:00,320 --> 00:47:15,590
So let's do that. M27 tells us that this curve is in fact represented by a polynomial of degree 834, and the error is, oh, at the top there,
383
00:47:15,590 --> 00:47:19,460
in the title, you can see that it got about ten-digit accuracy.
384
00:47:19,700 --> 00:47:23,900
So the way Chebfun is set up for ODEs, the default accuracy is about ten digits.
385
00:47:28,630 --> 00:47:37,150
I have two minutes. So let me just, for fun, show you this kind of thing in chebgui, which is the graphical user interface.
386
00:47:37,240 --> 00:47:43,990
So if you want to play around quickly with ODEs and have fun, you can go to chebgui.
387
00:47:49,060 --> 00:47:55,690
So in particular, under the Demos tab, you'll see that one of the categories is initial value problems.
388
00:47:56,050 --> 00:47:59,810
So let's do a few at random.
389
00:47:59,810 --> 00:48:05,030
So there we have an initial value problem with a discontinuous coefficient.
390
00:48:07,140 --> 00:48:10,740
So of course we won't get fixed-order accuracy because of that coefficient.
391
00:48:11,580 --> 00:48:15,090
Let's run it for fun. Takes forever.
392
00:48:17,010 --> 00:48:23,670
So there you have this solution up top. So there is the solution to the problem with the discontinuous coefficient.
393
00:48:25,260 --> 00:48:32,310
Let's do a couple more IVPs, initial value problems. Let's look at a resonance problem.
394
00:48:32,940 --> 00:48:40,200
So there you have a problem with a forcing function that happens to be resonant with the differential equation.
395
00:48:40,860 --> 00:48:44,460
That's a secular term, so that will cause some growth in amplitude.
396
00:48:47,580 --> 00:48:53,130
So there you have this growth in amplitude when you pump a swing at the resonant frequency.
397
00:48:53,370 --> 00:48:58,170
Let's do one more just to emphasise that of course, systems of equations are fine.
398
00:48:58,590 --> 00:49:04,380
So for example, the Lorenz equations, which we will talk about a bit, are a coupled system of three equations.
399
00:49:05,040 --> 00:49:14,700
There are the three equations u prime, v prime, w prime, and then some initial data and it's telling it to solve it on the interval from 0 to 15.
400
00:49:14,730 --> 00:49:17,970
If I do that, it chugs away for a moment and then.
401
00:49:21,790 --> 00:49:23,500
We have this chaotic curve.
402
00:49:23,510 --> 00:49:32,410
There, showing the three components of the Lorenz equations, and three curves here, showing how these were represented by polynomials.
403
00:49:32,620 --> 00:49:35,920
They're polynomials of length about 500.
404
00:49:36,820 --> 00:49:40,180
Okay. So let me remind you, the assignment is due on Tuesday.