1
00:00:05,060 --> 00:00:06,410
Okay. Good morning, everybody.
2
00:00:07,970 --> 00:00:15,040
We're now starting partial differential equations, which is certainly one of the richest topics that humanity has ever created.
3
00:00:15,060 --> 00:00:18,500
I think you could say we've been talking about ODEs till now.
4
00:00:18,500 --> 00:00:24,530
PDEs. So: PDE, partial differential equation, which means at least two independent variables.
5
00:00:26,690 --> 00:00:32,690
So to begin with, a bit of the history. Of course, Newton and Leibniz and so on invented calculus.
6
00:00:32,690 --> 00:00:39,470
And that was in the second half of the 17th century. And I guess from there, in the ensuing 150 years or so,
7
00:00:39,860 --> 00:00:45,710
people did more and more with differential equations and then partial differential equations, especially linear ones.
8
00:00:46,070 --> 00:00:51,680
So in the 19th century, you would have had a huge understanding of linear partial differential equations.
9
00:00:52,520 --> 00:00:56,209
And then in the 20th century, nonlinearity became important.
10
00:00:56,210 --> 00:01:00,440
So now we have this very rich world of linear and nonlinear problems.
11
00:01:00,710 --> 00:01:05,510
Roughly speaking, in the linear case, you can often say more theoretically.
12
00:01:06,080 --> 00:01:09,950
In the nonlinear case you can do amazing things computationally,
13
00:01:10,310 --> 00:01:15,200
and many important things on earth are linear and many other important things are nonlinear.
14
00:01:15,200 --> 00:01:18,530
They're both very important. I brought along a couple of books.
15
00:01:18,980 --> 00:01:21,440
There are, of course, hundreds of books on PDEs.
16
00:01:21,800 --> 00:01:27,920
I brought along a couple of my favourites, which from your point of view would be sort of at the mathematics end of things.
17
00:01:27,920 --> 00:01:35,510
These are certainly not science or engineering books. One is by Folland, with a lurid cover: Introduction to Partial Differential Equations.
18
00:01:36,290 --> 00:01:43,220
And another classic, which keeps getting reissued every decade or so, is Fritz John's Partial Differential Equations.
19
00:01:49,620 --> 00:01:56,290
So let's begin with the standard notation that we'll use and that people use all the time, namely subscripts.
20
00:01:56,310 --> 00:02:04,080
When you write u sub t in this world, that generally means the partial derivative of u with respect to t.
21
00:02:04,650 --> 00:02:09,560
And if you write u sub x, of course it's the partial with respect to x.
22
00:02:10,140 --> 00:02:20,460
And if you write multiple subscripts, the obvious thing happens: u_xx would be the second partial of u with respect to x.
23
00:02:22,440 --> 00:02:28,020
So with notation like that, you can write down PDEs compactly.
24
00:02:28,470 --> 00:02:33,690
In particular, there's also some vector notation which we know and love: the gradient and the Laplacian.
25
00:02:34,260 --> 00:02:43,470
So when I write that symbol, I mean the gradient. So the gradient of a scalar function u is a vector in three dimensions.
26
00:02:44,010 --> 00:02:51,280
It would be the vector of the three partial derivatives: u_x, u_y, u_z.
27
00:02:53,290 --> 00:02:56,670
And numerical people generally think of vectors as columns.
28
00:02:56,910 --> 00:03:03,030
So I would tend to think of that as a column vector. And similarly so that's the gradient.
29
00:03:03,900 --> 00:03:12,870
The other big one is the Laplacian, which we write with an upward-pointing triangle, and that's a scalar,
30
00:03:13,170 --> 00:03:23,010
namely the sum of the second derivatives; in 3D that would be u_xx plus u_yy plus u_zz.
31
00:03:24,300 --> 00:03:35,130
So that's the Laplacian. These go back about 200 years to the glory days of French mathematics after Napoleon.
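Not something from the lecture, but as a quick sanity check of this notation: for u(x, y, z) = x² + y² + z², the gradient is (2x, 2y, 2z) and the Laplacian is the constant 6. A small Python sketch (the helper names are mine, and the centred-difference step sizes are arbitrary choices) confirms this numerically:

```python
# A quick numerical check of the notation (my own sketch, not from the
# lecture): for u(x, y, z) = x^2 + y^2 + z^2, the gradient is (2x, 2y, 2z)
# and the Laplacian u_xx + u_yy + u_zz is 6.
def u(x, y, z):
    return x**2 + y**2 + z**2

def gradient(f, x, y, z, d=1e-5):
    """Centred first differences, one per coordinate."""
    return ((f(x + d, y, z) - f(x - d, y, z)) / (2 * d),
            (f(x, y + d, z) - f(x, y - d, z)) / (2 * d),
            (f(x, y, z + d) - f(x, y, z - d)) / (2 * d))

def laplacian(f, x, y, z, d=1e-4):
    """Centred second differences, summed over the three coordinates."""
    c = f(x, y, z)
    return ((f(x + d, y, z) - 2 * c + f(x - d, y, z))
            + (f(x, y + d, z) - 2 * c + f(x, y - d, z))
            + (f(x, y, z + d) - 2 * c + f(x, y, z - d))) / d**2
```

The same centred differences reappear later in the lecture, applied on a grid rather than at a single point.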
32
00:03:38,910 --> 00:03:44,220
In the handouts — there are four handouts — you'll notice a couple of pages from the PDE Coffee Table Book.
33
00:03:44,820 --> 00:03:49,320
That was a project we did jointly about 15 years ago here at Oxford.
34
00:03:49,470 --> 00:03:55,380
It never got finished, so we finished about 40 pages intending to do about 100.
35
00:03:55,740 --> 00:03:58,980
So it has never been published, but it's freely available online.
36
00:03:59,640 --> 00:04:02,820
If you go to my web page and look under books, you'll find it.
37
00:04:04,110 --> 00:04:11,130
In each case, there's a two-page spread describing a PDE, and we tried to make it really beautiful and informative.
38
00:04:11,460 --> 00:04:19,200
So although some years have passed, these are still the easiest way to get to know some of the world's interesting PDEs.
39
00:04:19,590 --> 00:04:21,930
And, well, there are two of them today.
40
00:04:21,930 --> 00:04:28,230
And I will continue to hand out a couple more in each lecture from now on, so you can get to know some of our favourite PDEs.
41
00:04:29,790 --> 00:04:35,640
Let me write down a couple of standards. So.
42
00:04:40,740 --> 00:04:45,690
For example, the Laplace equation.
43
00:04:49,320 --> 00:04:52,410
It says that the Laplacian of a function equals zero.
44
00:04:55,030 --> 00:05:01,510
And that makes sense in 3D, but also in 2D. And in 1D it would be a trivial ODE, u_xx = 0.
45
00:05:01,810 --> 00:05:07,570
People classify PDEs into different categories, which I won't really go into very much,
46
00:05:07,570 --> 00:05:13,180
but this one is the classic, the canonical example of an elliptic PDE.
47
00:05:14,380 --> 00:05:20,020
The Poisson equation is the same thing, except with a right hand side that may be non-zero.
48
00:05:20,260 --> 00:05:28,360
So that would be the Laplacian of u equals f, where f would in general be a function of the space variables.
49
00:05:29,080 --> 00:05:40,540
So that's also elliptic. Intuitively, an elliptic equation is one where everything is coupled together instantly.
50
00:05:40,550 --> 00:05:44,030
There's no notion of time propagation.
51
00:05:46,580 --> 00:05:56,989
The heat equation, also known as the diffusion equation, now does have a time variable.
52
00:05:56,990 --> 00:06:01,280
So that would be u_t equals the Laplacian of u.
53
00:06:02,570 --> 00:06:05,210
So that now is a time dependent equation.
54
00:06:05,480 --> 00:06:16,820
This one is classified as parabolic, which loosely speaking means a time-dependent PDE where information travels infinitely fast.
55
00:06:19,350 --> 00:06:23,310
And then the canonical example of the third category would be the wave equation.
56
00:06:26,320 --> 00:06:31,440
Which is the second-order version of this: u_tt equals the Laplacian of u.
57
00:06:32,110 --> 00:06:36,430
So this is all second derivatives: u_tt equals u_xx plus u_yy plus u_zz.
58
00:06:37,330 --> 00:06:41,410
That's the canonical example of a hyperbolic PDE.
59
00:06:44,280 --> 00:06:49,890
And loosely speaking, a hyperbolic equation is one where information travels at a finite speed.
60
00:06:50,280 --> 00:06:54,179
So for this equation, the way I've written it, information travels at speed one.
61
00:06:54,180 --> 00:06:59,250
Just to give a couple of examples of nonlinear equations:
62
00:07:01,050 --> 00:07:04,680
there's the equation due to a guy called Burgers.
63
00:07:06,240 --> 00:07:18,030
So the Burgers equation, and that is u_t equals (u²)_x plus u_xx.
64
00:07:18,660 --> 00:07:22,310
So there you see nonlinearity: you have a derivative of u squared.
65
00:07:22,320 --> 00:07:26,220
That's the nonlinear term coupled with a linear and diffusive term.
66
00:07:27,810 --> 00:07:33,840
I'm not putting any constants in these equations. Of course, in an application you'd have physical constants all over the place.
67
00:07:35,130 --> 00:07:43,380
Then one other example of a famous nonlinear equation would be the KdV equation. That stands for Korteweg–de Vries.
68
00:07:47,510 --> 00:07:57,090
And that's u_t equals (u²)_x plus u_xxx.
69
00:07:59,180 --> 00:08:04,610
Little differences like adding or subtracting an x of course have a huge effect.
70
00:08:05,630 --> 00:08:09,890
When you have a skilled eye, you instantly see that a term like that is diffusion,
71
00:08:12,380 --> 00:08:16,310
Which means things get smoother and sort of move off to infinity.
72
00:08:16,670 --> 00:08:26,220
A term like this means dispersion. Which means that something like this turns into a wave train.
73
00:08:27,060 --> 00:08:31,740
Very different behaviour. So diffusion is smoothing; dispersion is waves.
74
00:08:36,490 --> 00:08:49,030
Let's look at the handout called Examples of PDEs, just to give you a sense of how vast the intellectual terrain here is — how much of science,
75
00:08:49,360 --> 00:08:56,590
especially science but also engineering, is related to these things — and not just related, but founded on PDEs.
76
00:08:56,890 --> 00:09:00,910
It's absolutely at the heart of the intellectual world.
77
00:09:01,420 --> 00:09:04,540
So I'll just read them through. So we have a sense of them.
78
00:09:04,540 --> 00:09:10,570
The wave equation I've mentioned, and that would, for example, describe sound waves in air or water.
79
00:09:11,050 --> 00:09:14,470
So the field of acoustics is based on the wave equation.
80
00:09:15,580 --> 00:09:19,150
The heat equation describes diffusion of all sorts of things.
81
00:09:19,360 --> 00:09:25,300
In the case of heat, it's energy that's diffusing. But of course other things can diffuse, like salt or whatever you like.
82
00:09:26,830 --> 00:09:30,490
The Laplace equation is the steady-state version of these things.
83
00:09:30,760 --> 00:09:36,400
So often you find that if there's no t in the equation, that's because t has gone to infinity.
84
00:09:36,730 --> 00:09:42,660
If you have a steady state solution of the heat equation, it's no longer changing in time.
85
00:09:42,670 --> 00:09:49,870
So you have the Laplace equation. So that's steady state. And electrostatics —
86
00:09:49,870 --> 00:09:55,089
electrostatics is very much built on that as the first pillar,
87
00:09:55,090 --> 00:10:01,180
the Laplace equation. The biharmonic equation is the fourth-order analogue of the Laplace equation.
88
00:10:01,420 --> 00:10:04,960
So the Laplace equation is second order.
89
00:10:06,070 --> 00:10:09,430
Harmonic is Laplace; biharmonic is sort of harmonic squared.
90
00:10:09,430 --> 00:10:14,530
That's a fourth-order linear PDE, and that's the basis of solid mechanics.
91
00:10:14,530 --> 00:10:20,560
Elastic waves are biharmonic, whereas acoustic waves are harmonic.
92
00:10:20,860 --> 00:10:24,040
So acoustic is second order. Elastic is fourth order.
93
00:10:24,460 --> 00:10:32,380
It turns out it's the same equation for Stokes flow — fluids in the limit of zero Reynolds number, where viscosity completely dominates momentum.
94
00:10:34,020 --> 00:10:38,760
The Poisson equation is steady-state potentials, as I say here, when you have sources.
95
00:10:39,000 --> 00:10:45,150
So if you have an electrostatic field, for example, with some charge distributed around, then you'd have the
96
00:10:47,790 --> 00:10:53,190
Poisson equation. Sorry — I guess I already mentioned the elastic wave equation.
97
00:10:53,700 --> 00:11:01,080
The biharmonic is a scalar version; elastic waves actually involve two coupled fields.
98
00:11:01,320 --> 00:11:07,660
Sound waves in solids actually involve pressure waves going this way and shear waves going that way.
99
00:11:07,680 --> 00:11:16,380
So it's a coupled system of equations. The Helmholtz equation is what you get if you do acoustics, but you fix the frequency.
100
00:11:16,590 --> 00:11:22,890
So if you look at the response of a system to an input at a given frequency, that becomes the Helmholtz equation.
101
00:11:23,070 --> 00:11:28,490
Another elliptic PDE. Maxwell's equations.
102
00:11:28,490 --> 00:11:40,210
Well, they're pretty famous. They were, I think, the first great example of a scientific advance that was really necessarily cast in the language of PDEs.
103
00:11:40,970 --> 00:11:46,180
This came long after PDEs existed — 50 years after Laplace, Ampère and all that stuff.
104
00:11:46,190 --> 00:11:51,140
But when Maxwell did his work, you might know a bit of this history.
105
00:11:51,560 --> 00:11:57,140
Initially, for a decade or so, he was working with fairly physical models of how electric,
106
00:11:57,500 --> 00:12:03,020
how Faraday's discoveries might work, how electric fields and magnetic fields might interconnect.
107
00:12:03,500 --> 00:12:09,050
But as time went by, he realised that the right way to do this was as a partial differential equation.
108
00:12:09,260 --> 00:12:17,420
And his great paper, which created electromagnetic wave theory, wrote down the PDEs that we call Maxwell's equations.
109
00:12:19,510 --> 00:12:22,749
So that's linear. It's epochal in impact, but linear.
110
00:12:22,750 --> 00:12:31,420
And then Einstein's equations are very, very nonlinear. This is a coupled system of about ten nonlinear equations at the heart of general relativity.
111
00:12:31,900 --> 00:12:36,459
If you look at a typical applied mathematician like me, I more or less know Maxwell's equations,
112
00:12:36,460 --> 00:12:41,590
and I certainly don't really know Einstein's equations. Is there anyone in the room who works with Einstein's equations?
113
00:12:42,880 --> 00:12:53,530
Okay. One of the great discoveries of all time. Schrodinger's equation is the basis of quantum mechanics and chemistry, if you like.
114
00:12:55,040 --> 00:13:00,410
I personally think that Schrodinger's equation was the most important scientific advance of the 20th century.
115
00:13:00,620 --> 00:13:09,120
DNA — Watson and Crick — would come in as number two, because once we had this equation, in principle the whole periodic table is explained.
116
00:13:09,140 --> 00:13:12,380
Chemistry suddenly has a mathematical basis.
117
00:13:12,590 --> 00:13:15,560
It's amazing the impact of Schrodinger's equation.
118
00:13:17,480 --> 00:13:23,660
The Korteweg–de Vries equation, I mentioned here: the characteristic solutions of that are solitons.
119
00:13:23,660 --> 00:13:30,080
These are nonlinear waves which travel at a speed that depends on their amplitude and interact in interesting ways.
120
00:13:30,890 --> 00:13:34,070
The equation dates to the early 20th century,
121
00:13:34,070 --> 00:13:42,020
but it was in the 1970s that people discovered solitons, and that really for a decade became the centrepiece of applied mathematics.
122
00:13:43,370 --> 00:13:49,460
The Klein–Gordon equation is something that comes up in attempts to relate quantum mechanics to relativity.
123
00:13:50,360 --> 00:13:53,900
Burgers' equation, I mentioned, is at the heart of the theory of shock waves.
124
00:13:54,830 --> 00:14:01,220
If you don't have this term there at all, then you get pure shock waves with absolute discontinuities.
125
00:14:01,520 --> 00:14:05,390
If you have a little viscosity in there, you get smoothed off shock waves.
126
00:14:05,600 --> 00:14:11,759
So typically there'd be a small epsilon multiplying this term in Burgers' equation.
127
00:14:11,760 --> 00:14:15,229
The Navier–Stokes equations are the basis of all of fluid mechanics.
128
00:14:15,230 --> 00:14:23,360
So you take any Newtonian fluid; the Navier–Stokes equations, which date to 1846 or so, describe how it evolves.
129
00:14:23,360 --> 00:14:28,040
And really, for the last 170 years we've been trying to solve them in various configurations.
130
00:14:29,840 --> 00:14:36,200
The Euler equations are the same thing but without viscosity, so they're a special case of the Navier–Stokes equations.
131
00:14:36,440 --> 00:14:45,770
When viscosity doesn't matter. The Fisher–KPP equation, which we'll get to at the end of today, is a simple travelling-wave equation.
132
00:14:46,010 --> 00:14:49,280
So it describes the sort of thing like pulses in neurones.
133
00:14:49,280 --> 00:14:53,060
It actually doesn't do precisely that, but it's that kind of flavour.
134
00:14:53,540 --> 00:15:01,130
The next one, the Hodgkin–Huxley equations, is precisely that: a more complicated system of equations designed to model neurones.
135
00:15:02,450 --> 00:15:08,750
The Black–Scholes equation is something that came up a few decades ago for valuation of options in finance.
136
00:15:09,200 --> 00:15:14,750
I remember when I first heard about it, it seemed like a silly gimmick, but now it's infinitely important.
137
00:15:17,650 --> 00:15:23,080
So virtually all of what I've mentioned is science. But then there are also equations that come up in engineering.
138
00:15:23,290 --> 00:15:27,130
In other words, equations that people design in order to achieve something.
139
00:15:27,370 --> 00:15:30,850
And certainly the Perona–Malik equation is in that category.
140
00:15:31,060 --> 00:15:39,520
It's a nonlinear diffusion equation designed so that instead of being smoothed away, edges actually sharpen up in an interesting fashion.
141
00:15:40,180 --> 00:15:45,490
And that is one of the early ones in a whole industry of using PDEs to do image processing.
142
00:15:46,900 --> 00:15:54,520
The Cauchy–Riemann equations are an elliptic pair of equations which are at the heart of complex analysis. The Allen–Cahn equation,
143
00:15:54,520 --> 00:15:59,650
which we'll play with a bit, is related to formation of structure in alloys and other materials.
144
00:16:00,070 --> 00:16:04,770
The Kuramoto–Sivashinsky equation is a beautiful example of a chaotic problem.
145
00:16:04,780 --> 00:16:07,930
So it's a PDE whose solutions are chaotic.
146
00:16:08,500 --> 00:16:11,680
They look chaotic on the screen, and they can be proved to be chaotic.
147
00:16:11,890 --> 00:16:15,820
And it has some connection with problems of flames and turbulence.
148
00:16:17,740 --> 00:16:24,760
So here's a trivia question for you: which three equations on this list won a Nobel Prize?
149
00:16:28,850 --> 00:16:33,489
Anyone have a candidate? Black–Scholes?
150
00:16:33,490 --> 00:16:37,060
Yes, the Black–Scholes equation — the 1997 economics
151
00:16:37,270 --> 00:16:44,690
Nobel Prize. Maxwell? No, because it was too early.
152
00:16:44,700 --> 00:16:48,300
It certainly deserved a Nobel Prize, but it was already there when Nobel got going.
153
00:16:51,410 --> 00:16:56,180
Schrodinger. Yes. So Schrodinger in 1933 won a Nobel Prize.
154
00:16:56,480 --> 00:17:08,059
And the last one? Another one that surely should have won was Einstein's equations.
155
00:17:08,060 --> 00:17:13,219
But Einstein, famously — the best estimates are that he deserved four Nobel Prizes.
156
00:17:13,220 --> 00:17:16,280
He got one, and it wasn't for the Einstein equations.
157
00:17:18,740 --> 00:17:22,580
So the other one is the Hodgkin Huxley equations for neural conduction.
158
00:17:22,820 --> 00:17:26,780
And so that was the 1963 Nobel Prize in Physiology or Medicine.
159
00:17:27,170 --> 00:17:33,649
So it's wonderfully diverse: here we have one in physics, Schrödinger; one in medicine, Hodgkin–Huxley; and one in economics.
160
00:17:33,650 --> 00:17:43,550
And Black–Scholes. Interestingly, mathematicians take the view that, you know, linear problems are easy and nonlinear problems are significant.
161
00:17:43,760 --> 00:17:46,850
There's even the expression "linearity breeds contempt".
162
00:17:48,140 --> 00:17:52,740
Nevertheless, if you look at the three that won Nobel Prizes, two of them are linear.
163
00:17:52,760 --> 00:17:56,690
So Schrodinger is linear and Black-Scholes is linear.
164
00:17:57,050 --> 00:18:00,080
It's only the Hodgkin Huxley equations which are nonlinear.
165
00:18:02,110 --> 00:18:02,440
Okay.
166
00:18:06,070 --> 00:18:14,170
I have another bit of trivia that we won't go into here, but I believe that at least a third of physics Nobel Prizes have been won by eigenvalue problems.
167
00:18:17,370 --> 00:18:26,980
So let's just glance for a moment at the PDE Coffee Table Book page on heat, just to give you a sense of that.
168
00:18:28,840 --> 00:18:36,880
So the heat equation is the diffusion equation, and as the picture illustrates, it describes how edges turn into smooth things.
169
00:18:37,570 --> 00:18:44,980
I hope you'll, in your own time read this. We try to write these things very carefully to say things that are interesting and important.
170
00:18:45,970 --> 00:18:49,900
For example, there's the interesting mathematical fact that the solution is not unique.
171
00:18:50,290 --> 00:18:53,680
It only becomes unique if you force it to be zero at infinity.
172
00:18:55,010 --> 00:19:02,000
But then, more important physically, is the wonderful, exciting connection between diffusion and probability.
173
00:19:02,540 --> 00:19:09,020
Almost everything in our world that diffuses has some randomness as the underlying mechanism.
174
00:19:09,350 --> 00:19:14,960
And because I like to demonstrate things on the computer, I'd like to play with that a bit.
175
00:19:17,790 --> 00:19:27,020
I guess I'll write down a few things first, but then we're going to play with random effects that underlie the heat equation.
176
00:19:28,400 --> 00:19:39,800
But let's first treat it as a PDE. So what I'll call Section 5.2 is our first numerical approach to solving PDEs: explicit
177
00:19:42,440 --> 00:19:56,430
finite differences in one dimension. So when you see that phrase, you know that I'm talking about PDEs that have two independent variables,
178
00:19:56,850 --> 00:20:00,390
namely one space variable and one time variable.
179
00:20:01,650 --> 00:20:05,879
So the heat equation is the simplest such equation. Let's consider that one.
180
00:20:05,880 --> 00:20:09,300
Suppose we have u_t equals
181
00:20:09,300 --> 00:20:20,030
u_xx, the 1D heat equation. And suppose we want to solve that numerically on the computer.
182
00:20:20,720 --> 00:20:25,750
Well, the obvious thing to do is make a grid with space steps and time steps.
183
00:20:25,750 --> 00:20:31,550
So let's do that. We'll have our grid of points in space.
184
00:20:32,300 --> 00:20:37,010
And here we are at time zero. And then at the next time looks like this.
185
00:20:37,940 --> 00:20:42,710
In the simplest case, we just have a regular grid in both space and time.
186
00:20:44,090 --> 00:20:50,990
And the notation we often use is that k is often the time step,
187
00:20:51,590 --> 00:20:56,900
that is, Δt; and h is often the space step, Δx.
188
00:21:00,080 --> 00:21:04,879
So you can imagine discretising the problem by approximating the space
189
00:21:04,880 --> 00:21:09,890
derivatives by some finite difference in this direction and time in that direction.
190
00:21:10,310 --> 00:21:15,730
There are many ways to do this. For example, the simplest formula:
191
00:21:18,660 --> 00:21:23,640
I'll begin without even writing down the formula. I'll write down what's called the stencil for the formula.
192
00:21:23,880 --> 00:21:32,130
So if you show the points that a formula connects in a diagram like this, that's called the stencil.
193
00:21:34,240 --> 00:21:39,280
And what that picture means is I'm about to tell you an algebraic equation.
194
00:21:39,580 --> 00:21:44,080
To get an estimate of this value, given estimates of these values,
195
00:21:45,340 --> 00:21:51,850
the notation we'll use is, I'll call this v_j^{n+1}, and that's this point here.
196
00:21:52,330 --> 00:22:01,090
This point is then v_j^n. This would be v_{j+1}^n, and v_{j-1}^n.
197
00:22:01,330 --> 00:22:08,150
So the subscript is telling you the x index and the superscript is telling you the time index.
198
00:22:10,010 --> 00:22:14,720
So the simple formula you could use to model the heat equation would couple these four points.
199
00:22:14,930 --> 00:22:21,050
And what we need to do is approximate the space derivative with these three and the time derivative with those two.
200
00:22:21,500 --> 00:22:30,950
So it's pretty obvious how we should do that. We should say v_j^{n+1} minus v_j^n, divided by the time step,
201
00:22:33,270 --> 00:22:40,380
is equal to v_{j+1}^n minus two v_j^n plus v_{j-1}^n,
202
00:22:43,750 --> 00:22:47,290
Divided by the square of the space step.
203
00:22:47,950 --> 00:22:53,480
So there you have the simplest possible finite difference formula for a PDE.
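The lecture's own codes are in MATLAB; as a sketch, here is that formula in Python (the grid sizes and the hat-shaped initial condition are my own illustrative choices), stepping u_t = u_xx explicitly with zero boundary conditions:

```python
import numpy as np

def heat_step(v, k, h):
    """One explicit step of v_t = v_xx; v holds the interior values only,
    with zero boundary conditions at both ends."""
    p = np.concatenate(([0.0], v, [0.0]))  # pad with the boundary zeros
    # v_j^{n+1} = v_j^n + (k/h^2) * (v_{j+1}^n - 2 v_j^n + v_{j-1}^n)
    return v + (k / h**2) * (p[2:] - 2.0 * p[1:-1] + p[:-2])

h = 1.0 / 40                # space step, as in the lecture
k = 0.4 * h**2              # time step, safely below the stability limit
x = np.arange(1, 40) * h    # the 39 interior grid points of [0, 1]
v = np.maximum(0.0, 1.0 - np.abs(4.0 * x - 2.0))  # a non-smooth hat function
for _ in range(100):
    v = heat_step(v, k, h)  # the hat smooths out and decays
```

Rearranging the finite difference quotient into an update for v_j^{n+1} is exactly what the loop body does; one step mixes each value with its two neighbours.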
204
00:22:55,930 --> 00:22:59,410
There are many things you can say about that formula. We're going to run it in a minute.
205
00:22:59,980 --> 00:23:06,010
One thing you can say about it is that we could interpret it as a matrix–vector multiplication.
206
00:23:06,280 --> 00:23:09,430
So suppose I think of V.
207
00:23:10,330 --> 00:23:16,050
As a vector. Of samples at different places in space.
208
00:23:16,560 --> 00:23:19,710
So I could think of it as v_1, v_2,
209
00:23:21,690 --> 00:23:24,960
up to v_N — capital N, I guess I'll call it.
210
00:23:27,500 --> 00:23:36,889
So suppose I have capital N grid points in space; then the whole state at a given time is given by this vector of length capital
211
00:23:36,890 --> 00:23:41,870
N, and I could write this as a matrix problem.
212
00:23:42,200 --> 00:23:52,850
So that stencil, that formula tells you this new value in terms of the other values at the previous time step.
213
00:23:53,960 --> 00:23:55,610
So I could write it in this form.
214
00:23:55,610 --> 00:24:05,390
I could say the vector of values at the new time step is equal to some matrix times the vector of values at the old time step.
215
00:24:07,530 --> 00:24:10,950
And that matrix has coefficients that come from this formula.
216
00:24:11,340 --> 00:24:22,370
And in fact, it's a tridiagonal matrix. And it contains a constant on the diagonal and another constant on the first superdiagonal.
217
00:24:23,090 --> 00:24:26,510
And that same other constant on the first subdiagonal.
218
00:24:29,980 --> 00:24:36,160
And a is equal to one minus two
219
00:24:37,610 --> 00:24:43,720
k over h squared, and b is equal to k over h squared.
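As a sketch (again in Python rather than the lecture's MATLAB, with my own function name), the time step as a matrix–vector product, with a = 1 − 2k/h² on the diagonal and b = k/h² on the sub- and superdiagonals:

```python
import numpy as np

def heat_matrix(N, k, h):
    """Tridiagonal matrix A with v_new = A @ v_old for the explicit scheme,
    zero boundary conditions. Dense here for clarity; for large N you would
    use a sparse representation and store only the numbers that matter."""
    a = 1.0 - 2.0 * k / h**2   # diagonal entry
    b = k / h**2               # off-diagonal entry
    return (np.diag(a * np.ones(N))
            + np.diag(b * np.ones(N - 1), 1)
            + np.diag(b * np.ones(N - 1), -1))

h = 1.0 / 40
k = 0.4 * h**2
A = heat_matrix(39, k, h)
v = np.sin(np.pi * np.arange(1, 40) * h)   # sample state vector
v = A @ v                                  # one time step
```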
220
00:24:52,280 --> 00:25:01,880
So. Very generally, whenever you deal with finite difference approximations, at least linear ones, they can be formulated as matrix problems.
221
00:25:02,390 --> 00:25:05,690
In this case, it's simply multiplying a vector by a matrix.
222
00:25:05,690 --> 00:25:08,000
So you don't have to think of that as a matrix problem.
223
00:25:08,240 --> 00:25:13,550
But in other cases, we'll be solving a system of equations if you want to do this on a computer.
224
00:25:13,730 --> 00:25:19,280
Of course, if N is big, you'd be crazy to store this giant matrix as a dense matrix,
225
00:25:19,730 --> 00:25:24,110
but you could use a sparse representation and just store the numbers that matter.
226
00:25:27,240 --> 00:25:38,310
The last thing to say about the way I've formulated it here: I didn't talk about boundary conditions, but of course that's a big issue with PDEs.
227
00:25:38,580 --> 00:25:46,860
The simplest version of all of this would go as follows Imagine that I have an interval on which I want to solve the problem,
228
00:25:46,860 --> 00:25:51,180
and suppose my boundary condition is that u is zero at both ends.
229
00:25:53,460 --> 00:25:59,340
Then the natural thing to do would be to regard my unknowns as in the interior of that interval.
230
00:26:00,090 --> 00:26:04,470
So this would be v_1 and this would be v_N.
231
00:26:08,400 --> 00:26:13,840
Of course, other boundary conditions would be treated in other ways, but for this case of zero boundary conditions.
232
00:26:14,370 --> 00:26:16,469
What I've written down is exactly everything.
233
00:26:16,470 --> 00:26:26,820
The boundary conditions are already there in the way I've written it. You'll notice the matrix, of course,
234
00:26:26,820 --> 00:26:33,300
doesn't contain a term there or there, but since that term is going to multiply zero anyway, it doesn't matter.
235
00:26:33,570 --> 00:26:39,180
So this simple matrix form has implicitly imposed a couple of boundary conditions.
236
00:26:41,750 --> 00:26:46,880
Let's run that thing. And this is the code called M35.
237
00:26:48,440 --> 00:26:55,160
So if you look at M 35 heat, you'll see what it does is exactly this.
238
00:26:55,940 --> 00:27:01,280
It uses a sparse matrix. So it actually does do it as matrix times vector just for fun.
239
00:27:01,790 --> 00:27:10,790
And the matrix is sparse, so that's not so inefficient. It actually constructs the main diagonal and then the off-diagonals of the sparse matrix.
240
00:27:11,030 --> 00:27:14,510
You can see the commands: a times a sparse matrix and b times a sparse matrix.
241
00:27:17,730 --> 00:27:27,360
There's the basic version of this code; actually it should have a percent sign — a comment marker — before the line whose comment says it's for periodic BCs.
242
00:27:27,360 --> 00:27:34,830
So the first version I'm going to show you is zero boundary conditions, as if that line had a percent sign in front of it.
243
00:27:41,430 --> 00:27:48,310
So if I type M35 heat, you can see it flow.
244
00:27:49,900 --> 00:27:54,910
A general principle of the heat equation is that it's exciting for a few milliseconds and then it grows boring.
245
00:27:57,620 --> 00:28:01,730
Notice the boundary conditions are visible. You can see that it's zero at the end.
246
00:28:01,970 --> 00:28:06,590
There's no unknown at the end. So the first point you see is the first in from the edge.
247
00:28:07,850 --> 00:28:14,510
So there's the heat equation. Let's just run it once more. There you are.
248
00:28:15,080 --> 00:28:21,830
It started with a non-smooth initial condition, but an infinitesimal amount of time is enough to make that smooth.
249
00:28:24,820 --> 00:28:33,070
Notice that the space step I've used is 1/40, so a modest number of grid points and the time step I've used.
250
00:28:33,520 --> 00:28:37,750
You can see the second line of code is 0.4 times h squared.
251
00:28:38,170 --> 00:28:41,650
So I've taken a pretty small time step of 0.4 times h squared.
252
00:28:43,360 --> 00:28:46,659
Let's run it again. But pause at each step.
253
00:28:46,660 --> 00:28:50,500
So I have a variation called M35 pause.
254
00:28:51,190 --> 00:28:55,960
And you can see I achieve that by commenting in the pause command.
255
00:28:56,770 --> 00:29:03,820
So this is the initial condition. And then after one time step it looks like that. Mathematically, this is now a smooth function.
256
00:29:04,030 --> 00:29:11,410
Of course, computationally it still lives on a grid. And then very quickly you can see the shape changing.
257
00:29:12,610 --> 00:29:15,790
It looks exactly flat in the top middle.
258
00:29:15,790 --> 00:29:20,890
Mathematically, it's not exactly flat, but it's, if you like, exponentially close to being flat.
259
00:29:21,850 --> 00:29:26,260
And then quickly you get that behaviour.
260
00:29:29,820 --> 00:29:35,670
Then there are two other things to modify with the comments. Suppose you wanted periodic boundary conditions.
261
00:29:35,940 --> 00:29:40,920
So the value at this end, rather than being forced to be zero, is set equal to the value at this end.
262
00:29:42,120 --> 00:29:45,359
Well, you can achieve that by making the matrix wrap around:
263
00:29:45,360 --> 00:29:52,080
you make it into a circulant matrix, in effect. There's a line of code that shows you that — the one that breaks into the margin a little bit.
264
00:29:52,680 --> 00:30:00,780
If I run that version, M35 per, it looks the same, and so far it is almost the same.
265
00:30:00,780 --> 00:30:08,070
But now you can see at the boundary it's going up because now there's no heat being lost at the end.
266
00:30:08,370 --> 00:30:13,680
The heat is just averaging out. So that's one of the physical interpretations here.
267
00:30:13,890 --> 00:30:19,860
We start with an uneven heat distribution. As time goes to infinity, it becomes a constant heat.
268
00:30:20,520 --> 00:30:24,420
But the total amount of heat doesn't change with periodic boundary conditions.
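The wrap-around version can be sketched the same way; here `np.roll` plays the role of the circulant matrix. Again this is a Python sketch with illustrative parameters, not the lecture's code:

```python
import numpy as np

# Same explicit heat step, but with periodic boundary conditions: the
# matrix wraps around (a circulant matrix), which np.roll expresses
# directly.  Grid and initial data are illustrative.
h = 0.05
k = 0.4 * h**2
x = np.arange(-1, 1, h)                    # periodic grid, no repeated endpoint
v = np.where(np.abs(x) < 0.5, 1.0, 0.0)
total_before = v.sum()

for n in range(500):
    v += (k / h**2) * (np.roll(v, -1) - 2 * v + np.roll(v, 1))

# No heat escapes: the total is conserved while the profile flattens
# toward a constant.
print(v.sum(), v.max() - v.min())
```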
269
00:30:25,710 --> 00:30:31,860
Whereas for the initial one. With zero boundary conditions.
270
00:30:31,860 --> 00:30:37,559
Again, it's the heat flow, but now the ends of the bar, if you like, are being held at temperature zero.
271
00:30:37,560 --> 00:30:41,610
So heat is flowing out, and the destiny is zero temperature.
272
00:30:46,940 --> 00:30:50,990
The last modification in this code to make is the one that.
273
00:30:52,900 --> 00:30:59,080
Determines the size of the time step. So if you see in that second line of code, it's k equals 0.4 times h squared.
274
00:30:59,320 --> 00:31:03,430
And then the comment says, try changing 0.4 to 0.5.
275
00:31:03,430 --> 00:31:08,230
0.5 is the critical value at which the thing goes unstable.
276
00:31:08,530 --> 00:31:14,410
So now let's change it to 0.51. So that's called M35u, u for unstable.
277
00:31:17,700 --> 00:31:20,820
And you see this beautiful growth of instability.
278
00:31:21,960 --> 00:31:26,400
Normally it doesn't look like that. That's what it looks like if you're right at the edge of instability.
279
00:31:26,790 --> 00:31:32,820
Of course, normally you would have picked 0.7 or 3.2, and then you'd have explosion much faster.
280
00:31:35,740 --> 00:31:39,970
So obviously there are issues to discuss concerning stability of these discretisations.
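The stability threshold is easy to check numerically. In this Python sketch (parameters are illustrative, not the lecture's), a coefficient of 0.4 decays while 0.51 blows up:

```python
import numpy as np

# The explicit scheme is stable only for k <= 0.5*h^2.  Compare a safe
# coefficient with one just over the threshold.
def run(coeff, steps=2000):
    h = 0.05
    k = coeff * h**2
    x = np.arange(-1, 1 + h, h)
    v = np.where(np.abs(x) < 0.5, 1.0, 0.0)
    for n in range(steps):
        v[1:-1] += (k / h**2) * (v[2:] - 2 * v[1:-1] + v[:-2])
    return np.abs(v).max()

print(run(0.40))   # decays: bounded and smooth
print(run(0.51))   # blows up: the sawtooth mode grows at every step
```

The amplification factor of the highest (sawtooth) grid mode is 1 minus 4 times the coefficient, so just past 0.5 it exceeds 1 in magnitude and the slow exponential growth seen in the lecture appears.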
281
00:31:42,070 --> 00:31:49,570
Now let's play with the other two codes on the page, which are the physical underpinnings of diffusion.
282
00:31:50,200 --> 00:31:55,590
This is really one of the exciting stories of the last 150 years, if you like.
283
00:31:57,190 --> 00:32:03,280
The one-minute version of the history is that you had these classical laws of continuum mechanics,
284
00:32:03,280 --> 00:32:10,570
some of them invented here in Oxford, you know, Boyle's Law and so on, which in the 19th century people realised had a statistical basis.
285
00:32:10,840 --> 00:32:13,899
So the two big names were Maxwell and Boltzmann,
286
00:32:13,900 --> 00:32:22,990
who realised that it was random bouncing around of particles that were the basis of Boyle's law and other laws of continuum mechanics.
287
00:32:23,920 --> 00:32:28,720
That was all a little bit abstract still in the 19th century, and then it became more concrete.
288
00:32:29,530 --> 00:32:37,150
And famously with Einstein, one of the Nobel Prizes he didn't win was for his paper on Brownian motion in 1905,
289
00:32:37,360 --> 00:32:45,040
when he really studied the details of how random effects could be visible on a macroscopic scale.
290
00:32:45,430 --> 00:32:49,989
So he explained the motion of little particles in a fluid as the result of a lot
291
00:32:49,990 --> 00:32:54,460
of impacts from even smaller particles in a fluid that were effectively random.
292
00:32:54,940 --> 00:33:00,760
And then mathematically that became the basis of a greatly growing area of mathematics.
293
00:33:00,970 --> 00:33:06,610
Norbert Wiener was a famous name who turned that idea of Brownian motion into a very precise
294
00:33:06,610 --> 00:33:12,790
mathematical concept of a continuous curve that's nowhere smooth with a dimension of a half,
295
00:33:12,790 --> 00:33:15,880
and all sorts of neat things going on, or one and a half, depending on what you measure.
296
00:33:17,200 --> 00:33:25,239
And in fact, in my career, among the two or three biggest developments in mathematics over the years that I've
297
00:33:25,240 --> 00:33:29,410
been doing this is the growth of everything to do with probability and stochastics.
298
00:33:29,440 --> 00:33:38,500
It's just amazing. So maybe 30 years ago you might have said the centrepiece of applied mathematics would be PDEs.
299
00:33:38,770 --> 00:33:46,060
Now many people would say it's probability. Personally, I still like PDEs, and I think probability is a fad, but a very important fad.
300
00:33:46,690 --> 00:33:51,190
So the word stochastic just gets more and more important in mathematics.
301
00:33:52,600 --> 00:33:59,550
So let's go back to 1905. If Einstein had had MATLAB, this is what he would have done.
302
00:33:59,560 --> 00:34:03,610
He would have run this code called M36brownian.
303
00:34:05,530 --> 00:34:13,570
And what it does is show you some Brownian motion. Now, Brownian motion, as the phrase is used by mathematicians,
304
00:34:13,810 --> 00:34:18,910
describes a mathematical limit where things are really happening on infinitely small timescales.
305
00:34:19,150 --> 00:34:22,000
So a Brownian path is a continuous path.
306
00:34:22,780 --> 00:34:29,769
Of course, one simulates that by discrete things, and if you take a random walk, then that looks like a Brownian path.
307
00:34:29,770 --> 00:34:35,380
And in a limit of more and more smaller and smaller steps, it can be proved it has the same behaviour.
308
00:34:35,680 --> 00:34:40,960
So this code is taking random walks of a thousand particles.
309
00:34:42,280 --> 00:34:45,640
So if I say M36brownian.
310
00:34:49,210 --> 00:34:55,310
You see a thousand particles start in the middle, and then they're just moving independently.
311
00:34:55,310 --> 00:35:00,310
Each one is moving around the block, but on a small enough scale that it's essentially Brownian motion.
312
00:35:03,660 --> 00:35:06,780
The usual principle. At first it's interesting, and then it becomes boring.
313
00:35:06,780 --> 00:35:15,740
So let's start it again. Pay attention. All sorts of things spring from this bit of physics.
314
00:35:15,770 --> 00:35:21,200
This randomness is the source of the square root of t effect that you get in diffusion problems.
315
00:35:21,200 --> 00:35:23,689
So what's the radius of this blob?
316
00:35:23,690 --> 00:35:30,860
Well, it scales like the square root of t. All the square roots that appear in statistics are related to this picture.
317
00:35:31,130 --> 00:35:37,010
You get this universal square-root effect that shows up in all kinds of ways.
318
00:35:38,180 --> 00:35:45,020
The more you study this stuff, the more addictive it becomes. You ask yourself, how far out should the biggest one be?
319
00:35:45,230 --> 00:35:50,000
That would be a question of extreme value statistics. And of course, all sorts of things are known.
320
00:35:50,210 --> 00:35:53,750
What's the density function as a function of time? Well, that's easy.
321
00:35:53,750 --> 00:35:57,110
That's a normal distribution and so on and so on.
322
00:35:57,500 --> 00:36:01,100
In the limit of infinitely many particles, infinitely small time steps,
323
00:36:01,640 --> 00:36:07,850
you get normal distributions and those arise in solving the heat equation and other diffusion equations.
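The square-root-of-t effect is easy to see in a simulation. This Python sketch (the particle count, step count, and seed are arbitrary choices, not those of the lecture's code) takes a cloud of random walkers and measures the root-mean-square distance from the origin:

```python
import numpy as np

# A cloud of random walkers spreading from the origin.  After n unit
# steps the RMS distance grows like sqrt(n): the square-root-of-t
# effect of diffusion.
rng = np.random.default_rng(0)
particles, steps = 1000, 400

moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0]])   # N, S, E, W
pos = np.zeros((particles, 2))
for n in range(steps):
    pos += moves[rng.integers(0, 4, particles)]        # one random step each

rms = np.sqrt((pos**2).sum(axis=1).mean())
print(rms, np.sqrt(steps))   # the two numbers are comparable
```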
324
00:36:09,840 --> 00:36:14,940
You may notice the square effect, but that, of course, is due to the finite size of my screen.
325
00:36:15,540 --> 00:36:23,900
The particles are not really confined to a square. There's another code that's even more addictive.
326
00:36:25,760 --> 00:36:28,980
So if we had a spare hour, I would spend it all running the next code.
327
00:36:29,000 --> 00:36:32,630
Let's run a little bit. This is called M37rectangle.
328
00:36:34,100 --> 00:36:37,160
Now, if I do M37rectangle.
329
00:36:41,210 --> 00:36:46,850
It asks me how many fleas I want to put in a box of length
330
00:36:46,850 --> 00:36:50,120
ten and width one. So let's begin with one flea.
331
00:36:51,860 --> 00:36:58,310
So there's the flea. And when I press return, it wanders around.
332
00:36:59,370 --> 00:37:02,700
And I don't know what's going on there, but it keeps wandering until it hits the boundary.
333
00:37:03,820 --> 00:37:07,000
So you can't resist wondering how long will that take?
334
00:37:07,030 --> 00:37:11,050
Sorry about the weird graphics effects. See if I can improve that.
335
00:37:13,850 --> 00:37:19,770
Let's give it more fleas. Oh, dear.
336
00:37:25,100 --> 00:37:33,850
Let's try again, please. So each one keeps going until it hits the boundary and then it stops.
337
00:37:34,360 --> 00:37:41,940
So of course, you can ask questions like. After a certain amount of time, how many do you expect to still be alive?
338
00:37:42,750 --> 00:37:59,760
Let's take more fleas. But actually the way this problem arose for me was an interest in the question
339
00:37:59,760 --> 00:38:06,030
of what's the probability of a flea reaching the end rather than a side?
340
00:38:06,970 --> 00:38:13,690
So of course it might happen by chance that one of the fleas gets all the way to the end before hitting the side.
341
00:38:14,020 --> 00:38:19,630
And you can see that that's going to be hard work. Let's take a thousand.
342
00:38:25,510 --> 00:38:33,990
And in fact, this was one of the problems in the SIAM 100-Digit Challenge that I showed you.
343
00:38:34,380 --> 00:38:41,820
I think the last problem was to determine the probability that this particle ends up at an end rather than at a side.
344
00:38:42,030 --> 00:38:49,920
And the answer is something like ten to the minus six. And in fact, that probability can be determined exactly analytically for what it's worth.
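The flea experiment can be sketched as a Monte Carlo simulation. For the lecture's 10-by-1 box the end-exit probability is on the order of 1e-6, far too small to observe directly, so this illustrative Python version (not the lecture's code) uses a shorter 3-by-1 box, where end exits are common enough to count:

```python
import numpy as np

# Monte Carlo sketch of the flea-in-a-rectangle question: start a
# walker at the centre of a box and record whether it first exits
# through an end or through a side.  Box dimensions, step size, and
# trial count are illustrative assumptions.
rng = np.random.default_rng(1)

def first_exit_is_end(length=3.0, width=1.0, h=0.05):
    x, y = 0.0, 0.0                       # start at the centre
    moves = [(h, 0.0), (-h, 0.0), (0.0, h), (0.0, -h)]
    while True:
        dx, dy = moves[rng.integers(0, 4)]
        x, y = x + dx, y + dy
        if abs(y) >= width / 2:
            return False                  # hit a side first
        if abs(x) >= length / 2:
            return True                   # hit an end first

trials = 2000
p = sum(first_exit_is_end() for _ in range(trials)) / trials
print(p)   # small but measurable for the 3-by-1 box
```

Stretching the box back out to length ten makes the probability collapse exponentially, which is why the exact answer has to come from analysis (the harmonic measure mentioned below) rather than simulation.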
345
00:38:50,820 --> 00:39:00,090
You can use a conformal map of the rectangle onto a circle in order to find what would be called the harmonic measure of the end of the region.
346
00:39:00,540 --> 00:39:03,960
Let's do it one more time and then we'll stop that.
347
00:39:05,760 --> 00:39:13,710
With 10,000. Now, of course, in the world of gases, you have ten to the 23 fleas.
348
00:39:14,340 --> 00:39:18,570
So, you know, a jar of gas would have roughly Avogadro's number of fleas in it.
349
00:39:21,140 --> 00:39:26,640
Sorry about that. I have no idea.
350
00:39:28,330 --> 00:39:33,970
Those of us on the team have found that this year, the last 12 months, MATLAB seems to have gotten worse.
351
00:39:35,890 --> 00:39:40,660
We hope this is a transient effect, but all sorts of weird things have been happening to us graphically,
352
00:39:40,960 --> 00:39:49,270
and we're referring specifically to the graphics. We don't know how to fix them, but when new versions come out, things tend to be fixed.
353
00:39:49,270 --> 00:39:54,940
So hopefully. Hopefully it will solve itself.
354
00:39:56,430 --> 00:39:59,620
Okay. Let's kill that. Okay.
355
00:39:59,770 --> 00:40:05,560
So you can see that probability, random walks, random effects are at the heart of important PDEs.
356
00:40:06,250 --> 00:40:14,200
And to put that backwards, PDEs exist in order to model, in a scientifically compact way, ultimately the effects of randomness.
357
00:40:14,380 --> 00:40:19,600
But there are so many random particles in the physical world that although that may be the mechanism,
358
00:40:20,140 --> 00:40:25,510
still the right way to do it is the continuous model for many purposes, though of course not for all purposes.
359
00:40:26,830 --> 00:40:35,350
Now along those lines, I wanted to just make a remark or two about discrete versus continuous models.
360
00:40:43,250 --> 00:40:47,180
So if you ask, you know, what's the truth? Is the truth discrete or continuous?
361
00:40:47,660 --> 00:40:54,320
The more you think about that, the more you find that there are just too many things going on to give a single answer to that question.
362
00:40:54,330 --> 00:41:01,460
And let me illustrate that by mentioning some things that are discrete and their analogues that are continuous.
363
00:41:06,430 --> 00:41:10,510
So for example, molecules bouncing around are discrete, right?
364
00:41:12,090 --> 00:41:18,180
And in some sense that's the truth of what's happening with continuum mechanics.
365
00:41:19,170 --> 00:41:23,220
And then the continuous models that we use are of course continuous.
366
00:41:23,550 --> 00:41:29,700
So continuum mechanics or the continuum models used in physics.
367
00:41:31,080 --> 00:41:37,320
are continuous. And Boyle and Hooke didn't know about all these molecules bouncing around.
368
00:41:37,560 --> 00:41:42,270
All they knew was that they could measure the pressure of air and it would satisfy an equation.
369
00:41:43,980 --> 00:41:47,790
Something else that's discrete would be a finite difference
370
00:41:47,790 --> 00:41:54,530
approximation. And that's a discrete approximation to a PDE.
371
00:41:57,640 --> 00:42:02,620
So in some sense that's the truth. But this is the mechanism we use for studying that truth.
372
00:42:04,590 --> 00:42:06,870
Something discrete would be a random walk.
373
00:42:08,480 --> 00:42:16,070
And when we say random walk, we typically mean things like: take a point and then go north, south, east or west with probability one quarter each.
374
00:42:17,520 --> 00:42:26,520
And so you might follow a trajectory like this. So a random walk has some discrete time and space, maybe.
375
00:42:27,420 --> 00:42:37,230
Or maybe just one or the other is discrete, but that's an approximation to a continuous object that mathematicians call Brownian motion.
376
00:42:39,080 --> 00:42:43,790
But of course, Brownian motion to a mathematician is something continuous. To a physicist,
377
00:42:43,800 --> 00:42:46,850
Well, maybe that's something in the realm of bouncing molecules again.
378
00:42:46,860 --> 00:42:52,730
So I really mean here. Brownian motion for mathematicians.
379
00:42:55,510 --> 00:42:58,680
Another example since we've been showing pictures on the screen.
380
00:42:58,690 --> 00:43:06,290
What about the dots on a computer screen? They are discrete at various levels.
381
00:43:06,300 --> 00:43:09,990
They look discrete to you, and they're realised by pixels, which are also discrete.
382
00:43:10,320 --> 00:43:15,600
But then somehow we end up with perceiving things that are continuous.
383
00:43:18,820 --> 00:43:21,040
So there are all sorts of complexities there.
384
00:43:21,520 --> 00:43:29,220
Of course, your brain has its discrete rods-and-cones business and discrete neurones, but somehow we end up perceiving it as a continuum.
385
00:43:29,230 --> 00:43:37,300
You can see things are very mixed together. Let me just mention one more, which is floating point arithmetic.
386
00:43:39,540 --> 00:43:42,660
Which the computer is using when I do all of these experiments.
387
00:43:42,990 --> 00:43:46,380
But that's an approximation to real arithmetic.
388
00:43:54,450 --> 00:44:02,940
And the list could go on. And what strikes me about a list like this is that the different rows really have nothing to do with one another.
389
00:44:03,210 --> 00:44:07,380
The fact that this is discrete and that continuous is completely unrelated to
390
00:44:07,430 --> 00:44:11,520
this discrete versus that continuous and this discrete versus that continuous,
391
00:44:11,700 --> 00:44:16,230
they're just all different. Things like this come up all the time in many different ways.
392
00:44:16,830 --> 00:44:19,470
Another handout is an essay I wrote on this.
393
00:44:19,770 --> 00:44:27,720
So the one with a pretty picture of Hurricane Katrina is an essay I wrote on discrete and continuous things.
394
00:44:33,700 --> 00:44:39,490
Okay. I want to spend the last few minutes talking about something non-linear.
395
00:44:39,520 --> 00:44:43,420
Our first example of a nonlinear PDE, the Fisher-KPP equation.
396
00:44:56,770 --> 00:45:03,820
Fourier was way back in 1807 or something, and he's the guy who did the heat equation, which is linear,
397
00:45:04,240 --> 00:45:09,219
or at least he is a key person in that. Mostly the 19th century was about linear problems,
398
00:45:09,220 --> 00:45:15,010
but a big counterexample is Navier-Stokes, which is a nonlinear equation, and that's deep into the 19th century.
399
00:45:15,940 --> 00:45:20,080
In the 20th century, all sorts of good things happen in nonlinear problems.
400
00:45:20,500 --> 00:45:28,930
So the Fisher-KPP equation is an equation that was independently studied by Fisher,
401
00:45:28,930 --> 00:45:33,790
the great statistician in England, and then KPP, three Russian mathematicians.
402
00:45:36,040 --> 00:45:39,159
So, who were they? I even forget their names. What is it?
403
00:45:39,160 --> 00:45:44,620
Kolmogorov, Petrovsky and Piskunov. That's right. And Kolmogorov is one of the greats of all time.
404
00:45:45,700 --> 00:45:49,060
There's another page from the coffee table book that I handed out there.
405
00:45:49,300 --> 00:45:58,390
So the point of this equation, physically, scientifically, is that the solutions of it usually are travelling waves.
406
00:45:59,500 --> 00:46:04,540
If you're a mathematician, you can prove that other solutions also exist, but they tend to be unstable.
407
00:46:04,540 --> 00:46:09,729
The stable solutions are travelling waves and they take a particular form.
408
00:46:09,730 --> 00:46:14,110
Sorry, that looks like an infinite slope. I should have drawn that with a finite slope.
409
00:46:15,090 --> 00:46:23,669
So at a particular time it will look like that and then it will move at a certain speed and indeed far away from boundaries.
410
00:46:23,670 --> 00:46:31,530
There exist solutions with precisely this form: a constant shape that translates at a constant speed.
411
00:46:33,840 --> 00:46:38,549
So the equation is u_t equals,
412
00:46:38,550 --> 00:46:41,810
I'll put an epsilon in front of the diffusion term.
413
00:46:42,740 --> 00:46:59,020
plus u minus u squared, so u_t = epsilon u_xx + u - u^2. And this is prototypical of all sorts of PDEs that blend together a linear bit of physics and a nonlinear bit of physics.
414
00:46:59,320 --> 00:47:02,470
So this is the linear diffusion term.
415
00:47:05,750 --> 00:47:09,260
And this is the nonlinear reaction term.
416
00:47:14,580 --> 00:47:20,100
Diffusion is a reasonably clear word. Reaction sort of means pretty much anything nonlinear.
417
00:47:21,650 --> 00:47:24,680
To model this, we can do just the sort of thing we did before.
418
00:47:25,580 --> 00:47:31,130
The simplest model would be v^(n+1) minus v^n, divided by k,
419
00:47:33,470 --> 00:47:40,610
is equal to, well, you just do one term for that and another for this, and that will work.
420
00:47:41,000 --> 00:47:47,840
So we would say, just as with the heat equation, we'd say v_(j+1) minus 2 v_j plus v_(j-1),
421
00:47:49,380 --> 00:47:56,220
divided by h squared. With the epsilon in front, that's our diffusion term.
422
00:47:56,730 --> 00:48:04,290
And then the reaction term we could put in simply like this: v_j minus v_j squared.
423
00:48:07,680 --> 00:48:12,120
And that works fine. Again, we're going to need a time step sufficiently small for it to work.
424
00:48:12,120 --> 00:48:17,910
But assuming the time step is small, that works fine. And if we want, we could write it in matrix form.
425
00:48:18,210 --> 00:48:21,870
We would have v^(n+1) equals matrix times vector.
426
00:48:24,520 --> 00:48:34,370
So that would be the linear part of the problem. But then, of course, we'd have to add in a nonlinear term, so you'd have to choose your own notation.
427
00:48:34,390 --> 00:48:41,080
But it turns out it's the time step k times this nonlinear vector, v_j minus v_j squared.
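Putting the diffusion and reaction pieces together, the scheme just described can be sketched in Python (the epsilon, grid, boundary treatment, and initial bump are illustrative assumptions, not the values in the lecture's code):

```python
import numpy as np

# Explicit scheme for the Fisher-KPP equation u_t = eps*u_xx + u - u^2:
# the heat-equation diffusion stencil plus the reaction term applied
# pointwise at each grid point.
eps = 0.01
h = 0.05
k = 0.4 * h**2 / eps                 # keeps eps*k/h^2 at 0.4, below 0.5
x = np.arange(-1, 1 + h, h)
v = np.exp(-20 * (x + 0.5)**2)       # an arbitrary initial bump

for n in range(4000):
    diffusion = eps * (v[2:] - 2 * v[1:-1] + v[:-2]) / h**2
    reaction = v[1:-1] - v[1:-1]**2
    v[1:-1] += k * (diffusion + reaction)
    v[0], v[-1] = v[1], v[-2]        # crude no-flux boundary conditions

# The reaction term drives the solution toward the stable state u = 1,
# and the plateau spreads as a travelling wave.
print(v.max(), v.min())
```

Run long enough, almost any positive initial bump settles into the travelling-wave shape, with the plateau behind the wave at the stable state u = 1.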
428
00:48:46,330 --> 00:48:51,850
So I have a code that we can run for that. It's called M38fisherkpp.
429
00:48:51,850 --> 00:48:53,830
And you'll see it looks just like the heat equation.
430
00:48:54,010 --> 00:49:01,959
Except that I've added this additional term, k times v minus v squared. But nonlinear problems are always more interesting.
431
00:49:01,960 --> 00:49:11,820
And let's see what it looks like. So this is M38.
432
00:49:21,560 --> 00:49:24,890
So that's the initial condition I've chosen.
433
00:49:25,070 --> 00:49:28,969
And the reason for choosing this is that I want to illustrate this wonderful
434
00:49:28,970 --> 00:49:33,500
feature of nonlinearity that you can have a preferred shape of a solution.
435
00:49:33,950 --> 00:49:39,770
So this is just an arbitrary initial condition. But if I let time evolve.
436
00:49:42,050 --> 00:49:47,210
Almost any initial condition will converge towards the shape of the travelling wave.
437
00:49:47,510 --> 00:49:58,620
So look at this. And now it's exponentially close, if you like, to this wave, which in principle would go forever. Except at the right boundary
438
00:49:58,620 --> 00:50:02,760
I have a boundary condition, but on an infinite domain that would simply go forever.
439
00:50:02,940 --> 00:50:12,180
Let's run that a couple more times. So there's the initial transient, which is complicated.
440
00:50:12,480 --> 00:50:20,760
Not much travelling is happening yet, but then eventually it settles down, the slope gets a little less, and it reaches this steady shape.
441
00:50:20,940 --> 00:50:27,630
And of course, you can interpret this physically in all sorts of ways, as heat or information flowing from one part to another.
442
00:50:28,260 --> 00:50:39,990
This very simple model is philosophically, if you like, very much like the Hodgkin-Huxley equations, which describe how pulses flow in neurones.
443
00:50:44,480 --> 00:50:50,480
And the last thing I'll do is show you that it too goes unstable if you take the time step too big.
444
00:50:50,990 --> 00:50:56,180
So let's run the unstable version, where I change 0.4 to 0.51.
445
00:50:57,980 --> 00:51:02,690
So I say M38u, u for unstable. Same initial condition.
446
00:51:06,070 --> 00:51:10,100
Who knows what's happening? Looks like a shark.
447
00:51:12,200 --> 00:51:15,650
Okay. See you on Friday.
448
00:51:16,130 --> 00:51:16,760
Oh, yes.