1
00:00:01,760 --> 00:00:06,750
Okay. Good morning, everybody. So assignment two is due now.
2
00:00:06,770 --> 00:00:10,100
I hope most of you managed to solve the big million-by-million problem.
3
00:00:10,550 --> 00:00:17,300
I found that with a diagonal preconditioner it took about 20 seconds on my desktop, which kind of amazed me.
4
00:00:17,510 --> 00:00:22,970
The eigenvalues took longer, a couple of minutes. But anyway, the million-by-million worked for me.
5
00:00:23,570 --> 00:00:28,010
Is there anyone who had to go to a smaller dimension to make it work on your computer?
6
00:00:28,820 --> 00:00:33,890
Yeah. Okay. Sorry about that. Next year you could do 2 million.
7
00:00:35,030 --> 00:00:42,680
Okay, so today we're going to talk about floating point arithmetic, which is an amazingly fundamental subject and actually pretty interesting.
8
00:00:42,920 --> 00:00:48,860
It's, in my view, not quite the heart of numerical analysis, but it's one of the foundations.
9
00:00:48,860 --> 00:00:54,860
Well, the foundation. And it's pretty interesting. Let me mention three books, which I'll pass around.
10
00:00:54,860 --> 00:00:59,269
And they're also listed on the website, very different books, each in their own way.
11
00:00:59,270 --> 00:01:07,460
Fantastic. So this very small book by Michael Overton is called Numerical Computing with IEEE Floating Point Arithmetic.
12
00:01:07,640 --> 00:01:12,320
And it's just an incredibly clear, straightforward way to find out all sorts of things.
13
00:01:12,320 --> 00:01:19,639
you had no idea about concerning how our computers work with numbers. So please pass that around.
14
00:01:19,640 --> 00:01:23,000
At the other end of the complexity.
15
00:01:23,000 --> 00:01:28,190
spectrum is this thick book by Nick Higham called Accuracy and Stability of Numerical Algorithms.
16
00:01:28,400 --> 00:01:34,670
And this is the book about careful analysis of all sorts of fundamental numerical algorithms.
17
00:01:35,120 --> 00:01:41,750
And in particular, what effect do the rounding errors have? How does floating point arithmetic affect the accuracy of the final result?
18
00:01:41,990 --> 00:01:45,490
Higham is at Manchester and he's the world leader on that subject.
19
00:01:47,930 --> 00:01:54,680
And then finally, this is another thick book called Handbook of Floating Point Arithmetic by Muller et al.
20
00:01:55,110 --> 00:01:59,150
This is a group in Lyon, in France, which is a remarkable, very interesting group.
21
00:01:59,690 --> 00:02:06,860
They work with Intel and other people designing the details of the elementary functions, special functions that make computers so accurate.
22
00:02:06,870 --> 00:02:13,760
So if you're interested in the details of how sine of X or something might be computed, this is a great starting point.
23
00:02:18,900 --> 00:02:30,360
Okay. So to talk about floating point arithmetic, I want to begin by thinking in terms of physics and one of the handouts,
24
00:02:30,780 --> 00:02:35,700
the one with a colour picture on it, discrete or continuous, conveys a bit of this flavour.
25
00:02:39,840 --> 00:02:47,880
So I want us to think about the physical world that we live in, like this room which contains a lot of solid matter and a lot of gas in the middle.
26
00:02:48,090 --> 00:02:53,550
And of course, you and I know that this is all about molecules that bounce around and do their thing.
27
00:02:55,020 --> 00:03:00,470
That's a very old idea in some sense. But it's really in the last couple of hundred years that it became definite.
28
00:03:00,750 --> 00:03:04,650
I guess the atom was pretty much discovered in Manchester by John Dalton.
29
00:03:04,980 --> 00:03:09,480
And statistical mechanics came around in the 19th century, Maxwell and Boltzmann and others.
30
00:03:09,780 --> 00:03:13,020
So these are absolutely fundamental ideas in physics.
31
00:03:13,350 --> 00:03:16,650
Everything is made of molecules or atoms.
32
00:03:19,560 --> 00:03:29,220
The point of view I want you to take is that molecules are, as it were, the implementation of the classic laws of continuous physics.
33
00:03:29,520 --> 00:03:36,330
Just as floating point arithmetic is the implementation of the laws of real mathematics on a computer.
34
00:03:36,630 --> 00:03:38,100
It's a very interesting analogy.
35
00:03:38,310 --> 00:03:46,410
In both cases, you have something discrete that is necessary at the hardware level in order to achieve an effect which feels effectively continuous.
36
00:03:47,130 --> 00:03:52,080
Now, let me give you a couple of examples of how molecules implement the laws of physics.
37
00:03:52,290 --> 00:03:56,130
So, for example, suppose you have a box filled with gas.
38
00:03:57,990 --> 00:04:01,350
So that's a box filled with some air in it, if you like.
39
00:04:01,530 --> 00:04:11,310
And what happens if you halve the volume of the box, and suppose you keep the temperature constant, so you make it the same temperature?
40
00:04:11,610 --> 00:04:14,820
Well, you know that the pressure doubles.
41
00:04:16,410 --> 00:04:20,790
In fact, that was discovered in Oxford, I suppose, in the 1660s.
42
00:04:21,780 --> 00:04:27,030
So the gas laws tell you that if you halve the volume at constant temperature, the pressure doubles.
43
00:04:27,060 --> 00:04:37,320
Now that's a continuous statement, but there's an implementation underlying that because there are a lot of molecules in that box.
44
00:04:37,320 --> 00:04:43,290
And the point is, if you halve the volume, then the molecules hit the boundary twice as often.
45
00:04:43,470 --> 00:04:48,240
So they're twice as dense. So you have molecules hitting the boundary twice as often.
46
00:04:48,250 --> 00:04:56,040
That's the implementation. That's how this law works, because twice as many molecules per unit time per unit area hit the wall.
47
00:04:56,550 --> 00:05:02,220
So the amount of momentum transferred is doubled, the pressure doubles basically.
48
00:05:02,640 --> 00:05:06,540
So that's implementation of one of the gas laws. Let's take another example.
49
00:05:06,750 --> 00:05:13,720
Suppose you have the same box. But now let's double the temperature instead of halving the volume.
50
00:05:19,880 --> 00:05:24,920
Now, you know, another gas law says that again, if you do that, the pressure doubles.
51
00:05:26,180 --> 00:05:33,830
Why does it do that? Well, again, there's an implementation at the hardware level that makes that happen.
52
00:05:34,010 --> 00:05:39,410
The implementation now is actually a little more interesting. This one was kind of trivial, but this is more interesting.
53
00:05:39,590 --> 00:05:43,290
Again, you have these molecules in there.
54
00:05:43,290 --> 00:05:50,280
But if you double the temperature, each one has twice as much energy, which means the square root of two times as much momentum.
55
00:05:50,670 --> 00:05:57,600
So the velocities increase by the square root of two.
56
00:06:01,880 --> 00:06:08,750
Because the energy of each particle is doubling. But at the same time, if the velocities are increasing by the square root of two,
57
00:06:09,380 --> 00:06:14,090
so that means that the square root of two times more momentum is imparted to the boundary with each impact.
58
00:06:14,270 --> 00:06:17,450
But there are also the square root of two times as many impacts.
59
00:06:17,720 --> 00:06:22,790
So square root of two times more impacts, and square root of two times more momentum per impact.
60
00:06:23,030 --> 00:06:24,800
That means, again, that the pressure doubles.
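In symbols, the argument just given amounts to this (a loose sketch, ignoring constants and the spread of molecular speeds):

\[ E \propto T, \qquad E = \tfrac{1}{2}mv^2 \;\Rightarrow\; v \to \sqrt{2}\,v \ \text{when}\ T \to 2T, \]
\[ p \;\propto\; \underbrace{n v}_{\text{impacts per unit area per unit time}} \times \underbrace{m v}_{\text{momentum per impact}} \;\Rightarrow\; p \to 2p. \]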
61
00:06:25,070 --> 00:06:34,280
So this kind of thing was among the great discoveries of the 19th century, figuring out how our continuous world is based on discrete hardware.
62
00:06:35,600 --> 00:06:39,230
Now we all know about Avogadro's number. Let me remind you of some numbers.
63
00:06:39,680 --> 00:06:46,579
So we all know ten to the 23, and those of us who are chemists know exactly what that means,
64
00:06:46,580 --> 00:06:51,560
and everybody else probably forgets exactly what it means. It's the number of things in a mole.
65
00:06:51,590 --> 00:06:53,250
And then the question is, what's a mole?
66
00:06:53,270 --> 00:06:58,790
And if you're a typical mathematician like me, you have to think a little to remind yourself of what a mole is.
67
00:06:59,300 --> 00:07:05,210
Anyway, ten to the 23, roughly speaking, is the number of molecules in a handful of matter.
68
00:07:05,960 --> 00:07:10,610
More precisely, it's a mole, but a mole is something on a human scale.
69
00:07:10,820 --> 00:07:18,950
And let me tell you what that translates into under normal conditions of pressure and density in the atmosphere.
70
00:07:19,760 --> 00:07:25,390
Avogadro's number under normal conditions turns into about
71
00:07:26,310 --> 00:07:29,670
three times ten to the 25th
72
00:07:30,540 --> 00:07:33,930
molecules in a cubic metre of gas.
73
00:07:37,320 --> 00:07:41,370
So in a cubic metre of gas in this room, that's how many molecules there are.
74
00:07:41,550 --> 00:07:45,420
And that's called Loschmidt's constant. I didn't know that number.
75
00:07:45,420 --> 00:07:52,530
But anyway, there it is. So there are some real scientists in the room.
76
00:07:52,550 --> 00:07:56,690
I'm curious, has anyone heard of Loschmidt's constant? Is that standard? Okay.
77
00:07:56,960 --> 00:08:02,030
Anyway, of course, it follows from Avogadro's number under standard conditions.
78
00:08:03,350 --> 00:08:04,610
Now that's a volume.
79
00:08:04,940 --> 00:08:12,770
So in a certain volume of air, you have that many molecules, which means along a length you have the cube root of that many molecules.
80
00:08:12,980 --> 00:08:20,150
So the cube root of three times ten to the 25th is approximately three times ten to the eighth.
81
00:08:23,050 --> 00:08:28,370
So that's how many molecules there are per metre in a linear sense in air.
82
00:08:28,390 --> 00:08:36,860
Typically. And solids are about a thousand times denser than gases, which means linearly
83
00:08:36,860 --> 00:08:42,830
they're about ten times denser. So three times ten to the eighth would be the number of molecules in a metre of air.
84
00:08:43,070 --> 00:08:47,390
Three times ten to the ninth, roughly, is the number of molecules in a metre of solid.
85
00:08:47,570 --> 00:08:51,229
So let's average three times ten to the eighth and three times ten to the ninth.
86
00:08:51,230 --> 00:08:56,180
And what we find is you get about ten to the ninth molecules or atoms.
87
00:08:56,180 --> 00:09:03,460
I'm being very vague here, of course, per metre in gas or solid.
88
00:09:03,470 --> 00:09:07,430
That's the order of magnitude we're talking about under normal conditions.
89
00:09:08,660 --> 00:09:15,710
So if this podium is a metre long, roughly speaking from this end to this end is ten to the ninth atoms,
90
00:09:15,890 --> 00:09:22,240
or if my glass of water is a 10th of a metre long, it would have ten to the eighth atoms in a row.
91
00:09:25,610 --> 00:09:30,320
So that's the resolution of continuum mechanics.
92
00:09:30,620 --> 00:09:35,600
To get from the microscopic to the human scale is about a scaling of ten to the ninth.
93
00:09:36,960 --> 00:09:45,870
Now computers approximate the real line by a discrete thing, and the resolution is much finer.
94
00:09:45,870 --> 00:09:50,040
It's ten to the 16th, so computers, depending exactly how you count it,
95
00:09:50,040 --> 00:09:56,579
are six or seven orders of magnitude finer in their resolution of the real line than the
96
00:09:56,580 --> 00:10:00,810
physical world's resolution, you know, by atoms and molecules.
97
00:10:00,960 --> 00:10:05,970
I find that interesting. So you shouldn't think of computers as very discrete.
98
00:10:05,970 --> 00:10:10,770
You should think of the real world as very discrete, and computers as practically a continuum.
99
00:10:11,400 --> 00:10:16,200
Or another way to put it is it's something like ten to the seventh metres from here to Australia.
100
00:10:16,470 --> 00:10:23,850
So if you went to Australia through the earth and counted the molecules as you went, then you'd get about ten to the 16th,
101
00:10:24,180 --> 00:10:29,220
which would be comparable to the number of floating point numbers between one and two on your computer.
102
00:10:29,760 --> 00:10:33,570
So here to Australia is like 1 to 2 on a computer.
103
00:10:37,060 --> 00:10:41,200
Let me then say a bit about how computers represent numbers.
104
00:10:42,730 --> 00:10:46,270
So this is a beautiful aspect of computing.
105
00:10:46,600 --> 00:10:51,280
since about the 1980s. In the olden days, computers were all different.
106
00:10:51,910 --> 00:10:56,290
Every manufacturer would make a machine and it would do its own computer arithmetic.
107
00:10:56,590 --> 00:11:01,900
And they were in various bases. So pre-, let's say pre-1980s,
108
00:11:04,980 --> 00:11:15,960
most computers would probably have been in base two or 16, but you would also find bases eight, ten and even three.
109
00:11:16,200 --> 00:11:20,970
There's a famous example of a Soviet computer that was made in base three as an experiment.
110
00:11:21,990 --> 00:11:27,450
People like to laugh at that as showing how silly the Soviets were, but actually it was probably a pretty interesting experiment.
111
00:11:28,380 --> 00:11:34,170
But anyway, certainly different powers of two and also base ten have appeared over the years in various computers.
112
00:11:35,040 --> 00:11:38,940
In the 1980s, two things were agreed upon.
113
00:11:40,320 --> 00:11:43,920
One was that some of the arithmetic involved here was a mess.
114
00:11:44,670 --> 00:11:50,040
Another was that the mere diversity between all these different platforms caused great confusion.
115
00:11:50,490 --> 00:12:00,720
So for both of those reasons, people decided to make a standard for floating point arithmetic, and in practice that came in around 1985:
116
00:12:01,000 --> 00:12:06,150
the IEEE Floating Point Arithmetic Standard.
117
00:12:10,200 --> 00:12:13,649
And there was an individual who was the key in making that happen.
118
00:12:13,650 --> 00:12:22,740
William Kahan, who is a professor at Berkeley. He twisted arms all around the world and persuaded the world to adopt this standard.
119
00:12:22,920 --> 00:12:29,760
And ever since then, most computers have more or less adhered to the floating point standard from the IEEE.
120
00:12:30,180 --> 00:12:36,630
It's not perfect. There are always some deviations, and who knows what's happening in the future, for all sorts of reasons.
121
00:12:37,380 --> 00:12:43,560
But so far, for the last 30 years, there's been a tremendous amount of consistency, completely different from the previous 30 years.
122
00:12:44,430 --> 00:12:51,120
So I want to tell you how IEEE arithmetic represents a standard double precision number on your computer.
123
00:12:51,690 --> 00:12:56,460
Some of you know this, but probably most of you have a sort of vague idea of how it works.
124
00:12:58,980 --> 00:13:03,970
So let's draw one
125
00:13:05,120 --> 00:13:11,260
and two. So that's the number one and that's the number two on your computer.
126
00:13:11,470 --> 00:13:19,870
And they're represented exactly on your computer. And between one and two, the spacing is exactly uniform.
127
00:13:20,800 --> 00:13:24,750
And it's two to the -52.
128
00:13:25,450 --> 00:13:30,550
Exactly. So those are the numbers on your computer between one and two.
129
00:13:30,820 --> 00:13:35,230
There are two to the 52 of them or two to the 52 plus one if you count both endpoints.
130
00:13:35,710 --> 00:13:40,960
And they're exactly equally spaced, separated by a distance of two to the -52.
131
00:13:42,750 --> 00:13:50,220
And then the beautiful thing is that that pattern is repeated many, many times on a larger and smaller scale.
132
00:13:50,550 --> 00:13:53,640
So if that's three and that's four.
133
00:13:56,180 --> 00:14:02,330
What you find is that between two and four you again have an exactly equal spacing,
134
00:14:03,140 --> 00:14:13,040
but now that spacing would be two to the -51 and between four and eight it would be equal spacing two to the -50.
135
00:14:13,790 --> 00:14:17,750
And then between a half and one, it's equal spacing
136
00:14:20,460 --> 00:14:29,200
two to the -53. So that is the system of floating point numbers.
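In MATLAB you can check these spacings directly with the built-in function eps, which returns the gap from a number to the next larger floating point number; a quick sketch of what the demo will show:

    eps(1)     % 2^-52, the spacing between 1 and 2
    eps(2)     % 2^-51, the spacing between 2 and 4
    eps(4)     % 2^-50, the spacing between 4 and 8
    eps(0.5)   % 2^-53, the spacing between a half and one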
137
00:14:29,860 --> 00:14:42,800
Of course, there are also the negatives of those. And most of the time we act as if this goes on forever.
138
00:14:43,040 --> 00:14:46,879
So we imagine that between a quarter and a half it's the same.
139
00:14:46,880 --> 00:14:51,590
And between an eighth and a quarter it's the same, and so on and so on, ad infinitum down to zero.
140
00:14:51,770 --> 00:14:55,940
Of course that's not really true. It has to stop somewhere at both ends.
141
00:14:56,330 --> 00:15:00,170
Similarly, at the big end: 2 to 4, 4 to 8, 8 to 16.
142
00:15:00,650 --> 00:15:05,720
They all have the same number of numbers between them and it goes a long way, but it doesn't go forever.
143
00:15:06,880 --> 00:15:14,340
So let me tell you a little more about what it does do. A floating point number,
144
00:15:14,850 --> 00:15:19,020
and I'm always referring here to the double precision standard,
145
00:15:22,640 --> 00:15:31,870
consists of two words, which is eight bytes, which is 64 bits.
146
00:15:34,700 --> 00:15:47,030
And the conventional way that we talk about how those bits are used is that we say one bit is used for the sign of the number, plus or minus; 11 bits
147
00:15:48,860 --> 00:15:56,580
are used for the exponent, which is interpreted as a binary exponent.
148
00:15:57,980 --> 00:16:04,060
And those 11 bits include the sign of the exponent. And then 52 bits
149
00:16:06,170 --> 00:16:13,840
for the fraction, which again is interpreted as a binary, base-two
150
00:16:15,100 --> 00:16:19,180
fraction. So this 52 matches this 52.
151
00:16:19,630 --> 00:16:24,490
These bits are what tell you which of those numbers you're at. The 11 bits are what
152
00:16:24,520 --> 00:16:27,640
tell you whether you're between one and two, or two and four, or four and eight, and so on.
153
00:16:28,090 --> 00:16:36,819
And then the one bit for the sign, of course, tells you if you're positive or negative. And if you want to learn this stuff, Overton's
154
00:16:36,820 --> 00:16:46,940
book is absolutely beautiful for that. In a moment, we're going to do experiments on the computer.
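In MATLAB you can peek at those 64 bits with num2hex, which prints the raw IEEE bit pattern as 16 hexadecimal digits; the sign bit and the 11 exponent bits occupy the first three hex digits (a small sketch):

    num2hex(1)          % 3ff0000000000000: sign 0, exponent field 1023, fraction all zeros
    num2hex(2)          % 4000000000000000: the exponent field goes up by one
    num2hex(-1)         % bff0000000000000: only the sign bit changes
    num2hex(1 + 2^-52)  % 3ff0000000000001: the very next floating point number after 1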
155
00:16:46,940 --> 00:16:52,850
And you'll see, of course, that if there are only 11 bits for the exponent and one of them is a sign bit,
156
00:16:53,060 --> 00:17:00,650
that means the maximum exponent is two to the 10th. So this keeps going up to a scale of two to the two-to-the-10th.
157
00:17:01,220 --> 00:17:04,400
So it's two to the 1024 basically.
158
00:17:04,640 --> 00:17:11,510
So this scale invariance persists up to two to the 1024, which is about ten to the 308.
159
00:17:12,200 --> 00:17:19,190
And then it similarly persists at the low end, down to about two to the -1024, which is ten to the -308.
160
00:17:19,460 --> 00:17:26,150
So these are two numbers to keep track of. It's very important to be aware of the distinction between 16 and 308.
161
00:17:26,930 --> 00:17:30,860
16 is the number of digits to which
162
00:17:30,860 --> 00:17:35,770
a number is represented. It has nothing to do with the scale; it's just the relative precision.
163
00:17:36,050 --> 00:17:40,640
Whereas 308, roughly speaking, is the maximum exponent.
164
00:17:40,910 --> 00:17:45,200
So you can have a number of size ten to the zero, ten to the 100, ten to the 200.
165
00:17:45,200 --> 00:17:53,000
No problem. Whatever size it is, it will always be represented to about 16 digits of relative precision.
166
00:17:53,540 --> 00:17:59,060
Why 16? Because ten to the 16th is approximately two to the 52.
167
00:18:00,810 --> 00:18:08,280
So always bear in mind that your computer represents numbers to about 16 digits or exactly 52 bits.
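A quick MATLAB check of that correspondence (assuming the default IEEE double precision):

    eps           % 2^-52, about 2.2 times ten to the -16
    52*log10(2)   % about 15.7: 52 bits is roughly 16 decimal digits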
168
00:18:13,360 --> 00:18:17,230
There are wonderful examples of computers that failed to do this.
169
00:18:17,830 --> 00:18:26,560
Kahan's bête noire was always the Cray machines, which would do certain subtractions and, in effect, lose all the information.
170
00:18:27,850 --> 00:18:33,360
And then another example in my career was the great Pentium bug scandal of 1994.
171
00:18:33,370 --> 00:18:40,870
I think it was. Hardly any of you were born. But in 1994, Intel produced, with great fanfare, its Pentium chip.
172
00:18:40,870 --> 00:18:47,500
And there was a bug which took a long time to discover because the bug did not make the arithmetic terrible.
173
00:18:47,830 --> 00:18:55,450
It merely meant that instead of 16 digits, it had five digits of accuracy in certain cases, usually 16, but occasionally five.
174
00:18:55,810 --> 00:19:02,890
And eventually a number theorist who was doing very delicate number theory computations discovered that it was inaccurate,
175
00:19:02,980 --> 00:19:10,300
thus proving that number theory is useful. Intel lost a huge amount of money on this.
176
00:19:10,320 --> 00:19:14,580
They had to, of course, do all sorts of things to recall and get their reputation back.
177
00:19:15,900 --> 00:19:17,910
But it changed the world in all sorts of ways.
178
00:19:17,910 --> 00:19:26,890
The whole field of hardware verification, which is an important area of computer science, was more or less created by this Pentium bug in 1994.
179
00:19:26,910 --> 00:19:32,190
So when things go wrong, from other points of view that can have positive effects.
180
00:19:32,790 --> 00:19:38,460
Now, the way that chips are designed is much more rigorous than before 1994.
181
00:19:40,080 --> 00:19:46,250
Okay. So I want to show you on the computer some stuff from the handout called M12.
182
00:19:56,310 --> 00:20:01,860
I'm sorry. I'm not quite ready for that. My apologies. I want to say a little more on the board.
183
00:20:09,910 --> 00:20:13,270
What I've been telling you about is what floating point numbers are.
184
00:20:13,480 --> 00:20:18,400
And that is an answer to the question of how do we represent real numbers on a computer.
185
00:20:18,820 --> 00:20:24,280
What I've forgotten to tell you before turning to the demo is how do we compute with real numbers?
186
00:20:24,520 --> 00:20:28,270
So what we've just talked about is representing
187
00:20:31,590 --> 00:20:37,080
numbers. Now, let's talk about computing with these representations.
188
00:20:43,330 --> 00:20:46,420
Now, the problem is that if you take one floating point number, let's say,
189
00:20:46,420 --> 00:20:51,250
and divide it by another floating point number, the answer will not be a floating point number, typically.
190
00:20:51,520 --> 00:20:57,909
So you have to do something. And of course, what we do is round: at every step, when you do something on your computer,
191
00:20:57,910 --> 00:21:02,350
if necessary, the result is rounded to a floating point number.
192
00:21:02,650 --> 00:21:07,930
And this is where the IEEE standard is very firm and has improved the world greatly.
193
00:21:09,040 --> 00:21:14,380
Here's the principle. This is the principle of essentially all computer arithmetic.
194
00:21:14,740 --> 00:21:23,540
since the 1980s. Let's suppose that x and y are floating point numbers.
195
00:21:38,830 --> 00:21:47,770
And let's suppose we compute one of the basic operations; people generally regard those as plus, minus, times, and divide.
196
00:21:47,980 --> 00:21:57,730
So suppose we compute x plus y, or x minus y, or x times y, or x divided by y.
197
00:22:00,060 --> 00:22:04,140
The principle is that on your computer, the computed result.
198
00:22:08,150 --> 00:22:17,500
is the mathematically exact result, correctly rounded to your floating point system.
199
00:22:21,240 --> 00:22:23,760
Now we have to say a word about what that means.
200
00:22:25,020 --> 00:22:32,810
Of course, rounding means you take a number like one third and you turn it into point three, three, three, three, actually binary, not decimal.
201
00:22:32,820 --> 00:22:36,990
But that's the idea. Now, more precisely.
202
00:22:39,790 --> 00:22:44,500
Well, the idea is that you take a number and you find the nearest floating point number.
203
00:22:44,740 --> 00:22:49,030
Now, what if there's a tie? How do you break the tie? There are very precise rules about this.
204
00:22:49,390 --> 00:22:54,010
And moreover, in IEEE arithmetic, there are different rounding modes you can specify.
205
00:22:54,010 --> 00:22:59,290
You can round up, you can round down, you can do various things. So all of that is spelled out very fully.
206
00:22:59,860 --> 00:23:05,860
So correctly rounded is a slightly adjustable parameter, but a fully prescribed adjustable parameter.
207
00:23:06,370 --> 00:23:11,830
And according to this principle, there is no ambiguity whatsoever about the result.
208
00:23:12,250 --> 00:23:21,190
When you do these fundamental operations on the computer, your computer is required to produce exactly the number prescribed by this principle.
209
00:23:21,670 --> 00:23:29,110
And this is pretty much true in most hardware implementations of floating point arithmetic these days.
210
00:23:31,390 --> 00:23:36,940
There might be a few arguments at the ends, you know, down at ten to the -308 and things like that.
211
00:23:36,940 --> 00:23:42,530
But more or less this is true. There's a corollary of this principle.
212
00:23:46,200 --> 00:23:58,900
So supposing this principle is upheld. It follows that each of these operations gives a result we can talk about mathematically.
213
00:23:58,910 --> 00:24:06,940
So I'll put it this way. I'll say each operation yields: computed
214
00:24:09,490 --> 00:24:13,300
x operation y
215
00:24:15,900 --> 00:24:20,130
equals the exact value of x operation
216
00:24:20,550 --> 00:24:27,560
y, times one plus some number epsilon whose absolute value
217
00:24:29,830 --> 00:24:41,890
is less than what we call machine epsilon, which is two to the -53 in the IEEE standard.
218
00:24:46,680 --> 00:24:52,080
And that's about 1.1 times ten to the -16.
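Written in symbols, with fl(.) denoting correct rounding and * standing for any of the four operations, the principle and its corollary read:

\[ \mathrm{fl}(x * y) \;=\; (x * y)\,(1+\varepsilon), \qquad |\varepsilon| \le \varepsilon_{\mathrm{machine}} = 2^{-53} \approx 1.1\times 10^{-16}. \]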
219
00:24:56,340 --> 00:24:58,590
Now, let me say a word about what I mean here.
220
00:25:01,770 --> 00:25:08,100
I'm assuming the default rounding mode which is round to the nearest with ties broken in a systematic fashion.
221
00:25:08,460 --> 00:25:18,270
And that's where two to the -53 comes from. If the gap between two numbers is two to the -52, then the nearest one is at most two to the -53 away.
222
00:25:18,720 --> 00:25:28,650
So whenever you round a real number to a floating point number, you make a relative error of at most two to the -53.
223
00:25:28,890 --> 00:25:37,890
So that's what's being said here. Every computation involves a relative error of scale at most two to the -53.
224
00:25:39,050 --> 00:25:43,880
Now I'm ignoring in all of this discussion what happens if you go over the top or down below the bottom?
225
00:25:44,090 --> 00:25:47,420
I'm assuming no overflow, no underflow.
226
00:25:51,750 --> 00:25:55,110
And most of the time things don't overflow or underflow.
227
00:25:57,970 --> 00:26:04,810
So none of our numbers get bigger than ten to the 308 or smaller than ten to the -308. If they do,
228
00:26:05,050 --> 00:26:13,300
of course, what's supposed to happen is prescribed also. So this is a very firm principle.
229
00:26:13,300 --> 00:26:16,630
It's really specifying exactly what every operation has to deliver.
230
00:26:16,990 --> 00:26:18,520
It is possible to implement it.
231
00:26:18,520 --> 00:26:26,110
And this is successfully done and it yields this very fruitful corollary on which you can do all kinds of mathematical reasoning.
232
00:26:28,900 --> 00:26:38,320
So just to emphasise the point: computers don't make sort of approximately a rounding error at each step of roughly this size.
233
00:26:38,680 --> 00:26:44,350
They rigorously precisely make a rounding error of strictly no greater than exactly this size.
234
00:26:45,430 --> 00:26:54,400
Now, let's go to the experiments. So.
235
00:26:55,360 --> 00:26:58,870
I'm going to take us through the first half of this M12 handout.
236
00:26:59,200 --> 00:27:05,860
The second half uses the symbolic toolbox, and my laptop doesn't have the symbolic toolbox, so I won't be able to do that.
237
00:27:06,070 --> 00:27:10,870
But if you have access to that, do give it a try, because symbolic stuff is always fun.
238
00:27:12,720 --> 00:27:15,780
But of course, it's not what computational science normally uses.
239
00:27:16,410 --> 00:27:21,850
I want to do some experiments which are probably things you've never tried.
240
00:27:21,850 --> 00:27:29,890
And what I hope to reassure you with is how deterministic and understandable floating point arithmetic really is.
241
00:27:31,240 --> 00:27:38,620
So let's begin by computing one over 4096, which of course is two to the -12.
242
00:27:39,400 --> 00:27:42,990
You get a number printed out that is just a digital display.
243
00:27:43,000 --> 00:27:49,030
It's not the exact number. The exact number is 64 bits, split up in the way I've described.
244
00:27:49,330 --> 00:27:52,510
So there's a digital approximation to an exact number.
245
00:27:53,020 --> 00:27:57,700
Now you know that one over 4096 is going to be represented exactly on the computer
246
00:27:57,910 --> 00:28:02,200
because it's precisely a power of two and it's nowhere near the underflow limit.
247
00:28:02,740 --> 00:28:06,250
So whatever that is, it will be exact.
248
00:28:07,430 --> 00:28:11,660
Suppose I multiply it by 4096?
249
00:28:12,500 --> 00:28:20,059
You know what the answer has to be? It has to be exactly one, because we know we've got exactly one over 4096.
250
00:28:20,060 --> 00:28:25,670
And our principle says we should take the mathematically correct answer, which is one, and round it.
251
00:28:26,810 --> 00:28:32,340
So that will be exactly one. If you're not sure it's exactly one, you could,
252
00:28:32,520 --> 00:28:36,900
well, in MATLAB, you can tell that's one. If it weren't exactly one, it would say 1.0000.
253
00:28:37,320 --> 00:28:40,800
But another way to look at it is to look at the answer minus one.
254
00:28:41,190 --> 00:28:44,820
And you see that's represented as zero. It's not ten to the -16.
255
00:28:46,350 --> 00:28:50,280
Now, by contrast, suppose I say one over 49.
256
00:28:51,830 --> 00:28:59,820
And then multiply that by 49. Now a priori, you don't know what this answer will be.
257
00:28:59,850 --> 00:29:03,630
Of course you can guess that it won't be exactly one because I'm showing it to you.
258
00:29:04,080 --> 00:29:10,770
But a priori it's a pretty safe bet that one over 49 will be an approximate number.
259
00:29:11,010 --> 00:29:17,850
It'll be a floating point number that is not mathematically equal to 1/49. When you then multiply that by 49,
260
00:29:17,910 --> 00:29:22,380
do you get something that rounds to one? Well, I'm not quick enough to do that computation.
261
00:29:22,860 --> 00:29:26,579
But as it happens, you don't. You can see there it's not exactly one.
262
00:29:26,580 --> 00:29:33,360
And if I compare that answer to one, I see there's been a slight error, which, by the way, is probably two to the -53.
263
00:29:33,840 --> 00:29:38,340
Notice how exact that is? It really is exactly two to the -53.
264
00:29:38,520 --> 00:29:43,450
That's the error we've just made. 49 is the first number for which this is true.
265
00:29:43,470 --> 00:29:49,530
So amazingly, all of this stuff works out: one seventh, one ninth, 1/11, 1/13;
266
00:29:49,530 --> 00:29:54,549
they're all fine until 1/49. Okay.
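Here is a minimal MATLAB sketch of the experiment just described (the M12 handout's exact code may differ, but on IEEE-conforming hardware the outputs should match):

    x = 1/4096;     % a power of two, so represented exactly
    x*4096 - 1      % exactly 0: no rounding error anywhere
    y = 1/49;       % rounded on entry: not mathematically equal to 1/49
    y*49 - 1        % about -1.1e-16, an error of magnitude exactly 2^-53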
267
00:29:54,550 --> 00:29:59,410
Let's do a few more examples of rounding. Suppose I say the square root of a half.
268
00:30:00,760 --> 00:30:06,430
Now that's of course not exactly representable in the floating point system.
269
00:30:06,430 --> 00:30:08,920
This was proved by the Greeks more than 2000 years ago.
270
00:30:11,160 --> 00:30:18,810
So if I take its square and subtract the half, you don't know whether that will be exactly zero or not.
271
00:30:19,050 --> 00:30:26,400
You know that the answer squared mathematically will not be exactly two, oh, sorry, a half.
272
00:30:26,730 --> 00:30:32,090
But whether it rounds to a half, you don't know. Now as it happens, it doesn't.
273
00:30:32,120 --> 00:30:35,840
So there again, we've got two to the -53. Exactly.
274
00:30:37,740 --> 00:30:46,710
On the other hand, if I say a square root of a quarter, well, you know, that will be exact, because the quarter is represented in the binary system.
275
00:30:47,190 --> 00:30:51,059
The square root of it will be calculated. Exactly. And then rounded to floating point.
276
00:30:51,060 --> 00:30:56,160
Well, the exact square root is, of course, one half, which is already a floating point number.
277
00:30:56,700 --> 00:31:01,800
So we know that if I square that and subtract a quarter, I'll get truly zero.
278
00:31:04,100 --> 00:31:09,700
Exactly zero. Let's do a few more examples.
279
00:31:09,700 --> 00:31:20,690
Suppose I say sine of zero. Now, this stuff I told you concerns the four fundamental operations: plus, minus, times, and divide.
280
00:31:21,110 --> 00:31:29,450
I haven't told you anything about what computers are obliged to do with the sine or the cosine or the Riemann zeta function or anything like that.
281
00:31:29,780 --> 00:31:33,410
And I don't know the precise status of that.
282
00:31:34,760 --> 00:31:37,760
I don't actually remember whether IEEE prescribes anything.
283
00:31:37,760 --> 00:31:46,970
I think not. But most computers try to do their best to give you the correctly rounded sine function, at least for a broad range of arguments.
284
00:31:47,210 --> 00:31:53,750
Not necessarily for all arguments, but certainly for numbers like zero that are of order one, loosely speaking.
285
00:31:55,160 --> 00:32:03,050
Computers will give you the correctly rounded sine. Obviously sine of zero is going to give you the correctly rounded number zero.
286
00:32:03,080 --> 00:32:07,440
That's no surprise. Suppose I say pi?
287
00:32:07,980 --> 00:32:12,720
Well, of course, that will be the correctly rounded number pi in your computer.
288
00:32:13,530 --> 00:32:20,160
If I say sine of pi, well, we have to think about what's happening here.
289
00:32:20,400 --> 00:32:23,970
Pi is not exactly pi. It's the floating point approximation.
290
00:32:24,600 --> 00:32:31,920
And then sine of that number introduces a new question. So you simply can't tell whether that's going to be exactly zero or not.
291
00:32:32,310 --> 00:32:37,200
We have no reason to be sure it will be exactly zero. And in fact, it's not exactly zero.
292
00:32:37,350 --> 00:32:47,220
Notice it's not even exactly two to the -53, because we've now composed two things: an approximation to pi, followed by an approximation to the sine.
293
00:32:48,970 --> 00:32:59,640
Suppose I say sine of a thousand times pi. Well, we should be able to predict what will happen with a thousand times pi.
294
00:33:00,980 --> 00:33:08,810
Well, let's see. Pi will have an error at the ten to the -16 level, so a thousand times pi will be off by about ten to the -13.
295
00:33:09,620 --> 00:33:20,930
When we take the sine of that, we will probably get the correctly rounded sine of some number that's about ten to the -13 from a thousand pi.
296
00:33:21,320 --> 00:33:24,830
So we'll probably get something about ten to the -13.
297
00:33:27,800 --> 00:33:35,870
And there it is, ten to the -13. So similarly, if I said the sine of ten to the 20th times pi,
298
00:33:39,970 --> 00:33:49,840
well, ten to the 20th times pi is a number of size ten to the 20th, represented to 16 digit precision.
299
00:33:49,840 --> 00:33:53,020
So of course there will be no relevant accuracy left.
300
00:33:53,260 --> 00:33:57,430
So this is a random number. Basically, there's your random number.
301
00:33:59,240 --> 00:34:05,990
Now there's a lot of philosophical conversation about whether it's important on a computer for that to be exactly the right random number.
302
00:34:06,230 --> 00:34:11,990
Maybe times pi is confusing matters. But suppose I said sine of ten to the 20th.
303
00:34:13,580 --> 00:34:16,700
Another random number. One point of view is that
304
00:34:18,050 --> 00:34:21,630
There's no point in getting that number to any accuracy at all.
305
00:34:21,650 --> 00:34:27,530
We're working in 16 digit arithmetic. Of course, we can't meaningfully represent the sine of ten to the 20th.
306
00:34:27,800 --> 00:34:31,010
We simply don't know enough. We lose all the digits when we take ten to the 20th.
307
00:34:31,850 --> 00:34:41,080
On the other hand, there are those who say that it's important, for other reasons, that that should be essentially the exact sine of ten to the 20th.
308
00:34:41,350 --> 00:34:52,780
So we can test this. What happens if you take the sine of ten to the 20th squared plus the cosine of ten to the 20th squared?
309
00:34:54,560 --> 00:34:59,210
So if these are essentially random numbers, then this result will be essentially a random number.
310
00:34:59,600 --> 00:35:05,210
On the other hand, if they have more structure than that, then the sum will be approximately one.
311
00:35:06,900 --> 00:35:15,510
It is approximately one. So that tells us that the people designing the computer arithmetic were really, you know, maniacs.
312
00:35:16,320 --> 00:35:23,400
It wasn't enough for them, in some sense. They really do try to give you the correctly rounded sine of ten to the 20th.
313
00:35:23,760 --> 00:35:26,070
Even though you know the next floating point,
314
00:35:26,100 --> 00:35:32,040
number, ten to the 20th plus machine epsilon basically, will give a totally different answer; there's no continuity there at all.
315
00:35:32,400 --> 00:35:38,160
They're random numbers, but they're the correct random numbers. Okay.
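A sketch of those experiments in MATLAB; the precise digits of the "random" results depend on your machine's elementary function library, so treat the magnitudes, not the digits, as the point:

    sin(0)                     % exactly 0
    sin(pi)                    % about 1.2e-16: pi itself was rounded
    sin(1000*pi)               % about ten to the -13: the input error is 1000 times bigger
    sin(1e20)^2 + cos(1e20)^2  % very nearly 1, if argument reduction is done carefully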
316
00:35:39,270 --> 00:35:42,180
Let's do just a couple more things at the high and low ends.
317
00:35:45,430 --> 00:35:52,750
These are aspects of IEEE arithmetic that got a lot of attention in the eighties and are still a little controversial today.
318
00:35:53,500 --> 00:35:58,300
One over zero is infinity, and that is a well-defined concept in floating point arithmetic.
319
00:35:58,570 --> 00:36:02,310
One over infinity is zero, and it's just the usual zero.
320
00:36:02,320 --> 00:36:05,890
So if I add two thirds to it, for example, I get two thirds.
321
00:36:06,700 --> 00:36:10,090
If I say zero divided by zero, I get NaN: not a number.
322
00:36:10,810 --> 00:36:14,470
This is an innovation of IEEE arithmetic. NaN is "not a number".
323
00:36:14,710 --> 00:36:17,920
It obeys a very precisely prescribed arithmetic.
324
00:36:18,340 --> 00:36:23,080
It's very convenient to have NaNs. When these guys introduced IEEE arithmetic,
325
00:36:23,470 --> 00:36:28,030
their hope was that NaNs would become an absolutely central part of how people compute.
326
00:36:28,120 --> 00:36:32,680
You can do some really neat things with them. It hasn't turned out completely that way.
327
00:36:32,710 --> 00:36:36,790
They're useful around the edges, but not as central, I think, as expected.
328
00:36:37,150 --> 00:36:45,010
And sometimes manufacturers try to cut corners to speed things up in how they implement things like NaNs and infinities.
329
00:36:46,530 --> 00:36:53,189
Let's do another example. Suppose I say three times NaN: you get NaN. NaN means we don't know.
330
00:36:53,190 --> 00:36:56,399
It's a meaningless expression. If you multiply it by three, we still don't know.
331
00:36:56,400 --> 00:37:01,200
You don't get three NaN, you just get NaN. So it's impossible to destroy a NaN.
332
00:37:01,200 --> 00:37:07,319
If I say zero times NaN, it's still NaN, because you don't know, you know, was the NaN infinity?
333
00:37:07,320 --> 00:37:19,530
Was it finite? You don't know. So you can't kill a NaN; it's just there forever. Two to the 1023 is a big number; two to the 1024 is infinity.
334
00:37:20,730 --> 00:37:34,260
So there you can see, in this very precise sense, that around ten to the 308 is the ceiling on floating point arithmetic. Two to the -1024
335
00:37:34,680 --> 00:37:37,680
you might expect to be zero, but it's not, actually.
336
00:37:37,680 --> 00:37:42,810
And that has to do with what are called denormalised floating point numbers down at the low end.
337
00:37:43,020 --> 00:37:46,140
They allow the 52 bits to have some zeros up front.
338
00:37:46,380 --> 00:37:49,440
And that's another controversial aspect of IEEE arithmetic.
339
00:37:49,800 --> 00:37:53,910
It has some slick properties, but it adds to the complexity of implementations.
340
00:37:55,660 --> 00:38:01,360
So in fact, if you go down to two to the -1074, it's still non-zero.
341
00:38:01,690 --> 00:38:05,050
But if you go to two to the -1075, it's zero.
342
00:38:05,440 --> 00:38:14,650
So floating point arithmetic represents numbers between ten to the -308 and ten to the plus 308 to full 16 digit precision.
343
00:38:14,950 --> 00:38:19,209
And then there's this weird zone between ten to the -324
344
00:38:19,210 --> 00:38:25,150
and ten to the -308, where the precision is fading out gradually rather than abruptly.
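For the record, the corresponding MATLAB one-liners (a sketch; all of this behaviour is prescribed by the IEEE standard):

    1/0         % Inf
    1/Inf       % 0
    0/0         % NaN
    3*NaN       % NaN: you can't destroy a NaN
    0*NaN       % still NaN
    2^1023      % about 9e307, a big number
    2^1024      % Inf: over the top
    2^-1074     % about 4.9e-324, the smallest subnormal number
    2^-1075     % 0: below the floor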
345
00:38:29,510 --> 00:38:31,910
Okay. I think that's all I'm going to show you on this demo.
346
00:38:39,790 --> 00:38:45,520
Most of the time, numerical people like me don't look at the details of floating point arithmetic,
347
00:38:45,730 --> 00:38:51,070
but it's tremendously reassuring to know that they're all precisely prescribed in a deterministic fashion.
348
00:38:54,560 --> 00:39:03,830
Now for the final few minutes, I want to give an absurdly speedy discussion of the consequences of rounding errors.
349
00:39:06,650 --> 00:39:12,590
And let me mention three key terms: numerical stability, conditioning,
350
00:39:18,560 --> 00:39:20,390
and backward error analysis.
351
00:39:21,200 --> 00:39:29,600
Now, these are the three concepts that numerical analysts have found are at the heart of analysing the effects of rounding errors.
352
00:39:37,770 --> 00:39:45,419
So if you think physically again, most of the time the fact that the continuum is made up of a lot of molecules doesn't matter.
353
00:39:45,420 --> 00:39:50,640
But of course there are special cases where the quantum world really does come up to the macroscopic world.
354
00:39:50,670 --> 00:39:58,830
So Superfluidity or Bose-Einstein condensation or something are cases where the microscopic and the macroscopic begin to blend together.
355
00:39:59,730 --> 00:40:06,390
Analogously, when you do numerical computing in floating point, there are cases where the floating point stuff kills you.
356
00:40:06,560 --> 00:40:11,730
In fact, it's more common than superfluidity. It's certainly something you have to be aware of.
357
00:40:12,390 --> 00:40:20,370
You have to make sure that your algorithms don't have the property of amplifying the rounding errors in an exponential fashion, roughly speaking.
358
00:40:25,360 --> 00:40:29,210
So if, and I'm being very loose here,
359
00:40:29,780 --> 00:40:34,960
the rounding errors are amplified enough to cause trouble,
360
00:40:37,650 --> 00:40:53,250
that's called numerical instability. And of course, that idea has been made much more precise, with all sorts of things said about it.
361
00:40:53,550 --> 00:40:57,730
And here's a slightly more precise way of thinking of that.
362
00:40:57,750 --> 00:41:11,250
We say that a stable algorithm is one that gives the exact answer,
363
00:41:13,980 --> 00:41:20,340
the exactly correct answer, to a slightly perturbed problem.
364
00:41:34,020 --> 00:41:38,370
So that's a very fundamental idea of numerical analysis.
365
00:41:40,700 --> 00:41:43,820
Most of the time, it's not so much appropriate to ask:
366
00:41:46,370 --> 00:41:52,310
can we give nearly the right answer? What's more appropriate is to ask: can we give the right answer to nearly the problem posed?
367
00:41:52,640 --> 00:41:59,160
Think, for example, of the sine of ten to the 20th on a 16 digit computer.
368
00:41:59,550 --> 00:42:08,670
A good algorithm is one that will give you the exact sine of some number that's within a relative error of ten to the -16 of ten to the 20th,
369
00:42:09,270 --> 00:42:15,060
which means a random number. A stable algorithm is going to give you a random number when you ask for the sine of ten to the 20th.
370
00:42:15,240 --> 00:42:20,370
That's okay. It wouldn't be reasonable to expect a computer to really do better than that.
371
00:42:20,670 --> 00:42:27,420
So this is the right compromise between what we would like to happen and what's reasonable to expect on a computer.
372
00:42:27,870 --> 00:42:35,520
So slightly perturbed always means down at the level of rounding errors, a ten to the -16 kind of perturbation.
373
00:42:36,790 --> 00:42:42,070
And all across numerical analysis, people make stable algorithms to do all sorts of things.
374
00:42:45,520 --> 00:42:49,840
We then combine that idea of stability with the idea of conditioning.
375
00:42:51,200 --> 00:42:56,030
So rounding errors amplified to cause trouble: that we call instability.
376
00:42:58,340 --> 00:43:02,510
If you have great sensitivity of answers to data,
377
00:43:10,820 --> 00:43:21,280
that's called ill conditioning. So for example,
378
00:43:21,460 --> 00:43:27,910
the sine of ten to the 20th is a very ill conditioned problem, because a relative change in the
379
00:43:27,940 --> 00:43:34,690
number ten to the 20th by one part in ten to the 16th is enough to change the answer completely.
380
00:43:34,930 --> 00:43:40,990
So sine of ten to the 20th is an extraordinarily ill conditioned problem, and its condition number is ten to the 20th or something.
381
00:43:41,320 --> 00:43:46,330
So any old sine algorithm is probably going to be stable.
382
00:43:46,720 --> 00:43:53,410
Nevertheless, you can't expect sine of ten to the 20th necessarily to be correct, because that problem is so ill conditioned.
383
00:43:54,070 --> 00:44:02,890
And combining together these two ideas, you can analyse the accuracy of all sorts of problems in numerical analysis.
384
00:44:04,630 --> 00:44:18,580
And, as it were, the fundamental axiom of numerical analysis is that stability plus conditioning implies accuracy:
385
00:44:24,420 --> 00:44:39,200
stability plus good conditioning, well conditioning, that is. So that's a five minute summary of 50 years of very detailed, rigorous academic work.
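One standard way to make that slogan precise (this is the shape of the bounds in Higham's book, with constants suppressed) is: if the algorithm is stable, then

\[ \frac{\|\tilde f(x) - f(x)\|}{\|f(x)\|} \;\lesssim\; \kappa(x)\,\varepsilon_{\mathrm{machine}}, \]

where \tilde f(x) is the computed result, f(x) the exact one, and \kappa(x) the condition number of the problem.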
386
00:44:42,880 --> 00:44:47,290
All the time we ask: do we have a stable algorithm, one that doesn't amplify errors too much?
387
00:44:47,530 --> 00:44:52,960
And is it a well conditioned problem where it's reasonable to expect that we should be able to compute a good answer?
388
00:44:53,350 --> 00:44:57,910
Now, I just want to show you these things a little bit with these demos of the last three codes.
389
00:45:00,020 --> 00:45:03,710
Let me just emphasise some names connected with this.
390
00:45:04,010 --> 00:45:09,800
Some of these ideas really go back to Turing, who was actually an even greater genius than they tell you.
391
00:45:11,600 --> 00:45:15,440
A key person was Wilkinson, who really put them on the map in our field.
392
00:45:15,710 --> 00:45:19,520
And he, incidentally, was Turing's assistant, working on the Pilot ACE computer.
393
00:45:20,150 --> 00:45:26,000
And then let me mention this person and the book that we passed around by Nick Higham,
394
00:45:26,180 --> 00:45:30,560
who is currently a leading figure in understanding this kind of analysis.
395
00:45:37,430 --> 00:45:43,940
Okay. So codes 13, 14 and 15, just aim to illustrate some of these effects.
396
00:45:44,360 --> 00:45:51,560
So for example, suppose I say x equals -40 and I'm interested in e to the x.
397
00:45:52,880 --> 00:45:58,520
Now, of course, we don't know what algorithm our computer uses, but you can be sure it's a good one.
398
00:45:58,550 --> 00:46:04,160
This is one of the most important of all functions. That's probably the correctly rounded result.
399
00:46:04,850 --> 00:46:08,230
It's probably exactly e to the -40, correctly rounded.
400
00:46:08,240 --> 00:46:15,910
I'm not certain of that, but I suspect it is. Let's look at an example of an unstable algorithm.
401
00:46:16,300 --> 00:46:25,150
And this is a classic example. If you add up the Taylor series for e to the x and set x equal to -40,
402
00:46:25,330 --> 00:46:30,399
that's a wonderfully unstable algorithm, because the Taylor series has huge terms before
403
00:46:30,400 --> 00:46:36,910
they get small: huge terms of alternating sign, powers of -40 divided by factorials.
404
00:46:37,210 --> 00:46:47,710
Let's do it. Suppose I say the index i equals 0 to 150, and I'll say taylor equals x to the i
405
00:46:48,770 --> 00:46:54,230
divided by factorial of i. So I've just made the Taylor coefficients,
406
00:46:56,620 --> 00:47:03,120
the first 150 Taylor coefficients, but that's plenty for e to the x at x equals -40.
407
00:47:03,150 --> 00:47:08,160
So in principle, if I sum them up, I should get about four times ten to the minus 18th.
408
00:47:08,790 --> 00:47:13,590
But in fact, I get 1.8. So that has no relation whatsoever to the correct answer.
409
00:47:13,800 --> 00:47:22,110
And the reason is, I've tried to sum up a bunch of huge numbers of oscillating sign, and the rounding errors have destroyed everything.
410
00:47:22,350 --> 00:47:26,640
This is rigorously derivable from our assumptions about floating point arithmetic.
411
00:47:27,150 --> 00:47:30,510
Nothing has gone wrong. It's just an unstable algorithm.
412
00:47:30,900 --> 00:47:34,070
Now, luckily, there do exist stable algorithms.
413
00:47:34,080 --> 00:47:38,400
For example, if I said the same thing except with minus x.
414
00:47:42,660 --> 00:47:46,770
Suppose I now add up the Taylor series?
415
00:47:48,570 --> 00:47:53,850
I'll get e to the 40, won't I? And that's a stable algorithm.
416
00:47:55,440 --> 00:47:58,440
You can see it's beautifully computed e to the 40.
417
00:47:58,680 --> 00:48:03,690
So here's a stable algorithm for computing e to the -40: one over e to the 40.
418
00:48:06,610 --> 00:48:15,610
So you can see there we've illustrated the built-in e to the -40 compared with the stable calculation of it, but also an unstable calculation of it.
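A reconstruction of that demo in MATLAB (the index range is taken from the transcript; the handout's exact code may differ slightly):

    x = -40;
    i = 0:150;
    taylor = x.^i ./ factorial(i);    % Taylor coefficients of e^x at x = -40
    sum(taylor)                       % garbage of order 1: unstable, massive cancellation
    exp(-40)                          % about 4.25e-18, the true value
    1/sum((-x).^i ./ factorial(i))    % stable: sum the series for e^40, then invert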
419
00:48:19,500 --> 00:48:23,940
The next example illustrates well and ill conditioned matrix problems.
420
00:48:25,180 --> 00:48:31,480
So this is 14. Suppose I say a equals a random matrix of size 12 by 12.
421
00:48:31,780 --> 00:48:38,089
There it is. That's an example of a well conditioned problem.
422
00:48:38,090 --> 00:48:41,570
Probably. Of course it's random. I could have been unlucky, but I'm confident.
423
00:48:42,020 --> 00:48:46,160
Let's say B equals the sum of the columns.
424
00:48:49,220 --> 00:48:57,170
So I've just added up the columns, which means that we know mathematically what the answer to A backslash b should be.
425
00:48:57,930 --> 00:48:59,299
Remember, if you add up the columns,
426
00:48:59,300 --> 00:49:05,390
that means you're taking a linear combination of the columns with coefficients one. Therefore the solution of A x equals b for
427
00:49:05,390 --> 00:49:12,880
that b will be the vector of all ones. And if I compute it, I find, in format long, that they're very accurate.
428
00:49:13,870 --> 00:49:18,880
Right. So we lost a digit maybe. But here's an example of a well-behaved problem.
429
00:49:19,330 --> 00:49:24,970
Of course, backslash is a stable algorithm because MATLAB is a good code, a good language.
430
00:49:26,410 --> 00:49:31,000
But moreover, this experiment shows that the problem was not too ill conditioned.
431
00:49:32,840 --> 00:49:37,850
Now, by contrast, let's look at the so called Hilbert matrix.
432
00:49:38,060 --> 00:49:44,630
So I'll say A equals hilb of 12. It's an example of a highly ill conditioned matrix.
433
00:49:44,900 --> 00:49:54,170
You can learn more about it by typing help hilb. And let's again take a right hand side corresponding to the solution vector of all ones.
434
00:49:55,490 --> 00:50:02,120
So if I now solve it, x equals A backslash b, again mathematically I should get all ones.
435
00:50:03,200 --> 00:50:07,310
But you see, I get numbers that are about one but not accurate.
436
00:50:07,550 --> 00:50:15,740
So there we have a terribly ill conditioned matrix. It's a stable algorithm applied to a highly ill conditioned problem.
437
00:50:16,610 --> 00:50:22,520
And in fact, the condition number of this matrix is bad.
438
00:50:23,060 --> 00:50:31,730
What have I done? So the condition number of A is something like ten to the 16th.
439
00:50:32,060 --> 00:50:35,660
So it's amplifying small perturbations by sixteen orders of magnitude.
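And a reconstruction of the conditioning demo; hilb and cond are standard MATLAB functions:

    A = rand(12);    % a random 12-by-12 matrix: almost surely well conditioned
    b = sum(A, 2);   % b is the sum of the columns, so the true solution of Ax = b is all ones
    A\b              % ones to about 15 digits
    A = hilb(12);    % the Hilbert matrix, famously ill conditioned
    b = sum(A, 2);
    A\b              % entries near 1, but most of the digits are wrong
    cond(A)          % around ten to the 16th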
440
00:50:37,290 --> 00:50:43,920
Finally, I'll spend one more minute on 15, which is illustrating one of Wilkinson's great discoveries.
441
00:50:44,220 --> 00:50:49,670
Suppose I say R equals 0.01 times,
442
00:50:49,680 --> 00:50:55,530
I am going quickly because we're at the end. Oops, not too quickly.
443
00:50:56,070 --> 00:51:00,600
plus nine times the upper triangular ones of seven.
444
00:51:00,600 --> 00:51:06,240
I'm just copying here. That turns out to be an ill conditioned upper triangular matrix.
445
00:51:06,540 --> 00:51:12,120
I don't know why I've done this; anyway, let's do exactly what's on the sheet.
446
00:51:13,890 --> 00:51:17,460
I'm now going to make a QR factorisation.
447
00:51:19,270 --> 00:51:29,090
of a random matrix, in order to get a random orthogonal matrix.
448
00:51:30,050 --> 00:51:33,200
Sorry this is going so fast. I'm now going to take A equals Q R.
449
00:51:33,200 --> 00:51:40,730
So let me tell you what I've just done. I've just made a matrix whose QR factorisation involves an ill conditioned R.
450
00:51:41,690 --> 00:51:46,880
The details don't matter. Now I want to show you something astonishing that Wilkinson discovered.
451
00:51:48,230 --> 00:51:53,120
You can forget the details of the matrix. It's just a seven by seven matrix.
452
00:51:53,360 --> 00:52:03,030
Let's compute its QR factorisation. We've used whatever algorithm is built into MATLAB.
453
00:52:03,060 --> 00:52:08,570
I can assure you it's a stable algorithm, based on Householder reflections.
454
00:52:10,970 --> 00:52:15,530
Now there's R and there's R2. If you look at them, you'll see they're totally different.
455
00:52:16,010 --> 00:52:19,069
So our stable algorithm has given us a garbage
456
00:52:19,070 --> 00:52:23,840
R. The difference between the computed R and the correct R is huge.
457
00:52:24,920 --> 00:52:32,340
Now, here's Q and here's Q2. If you look especially on the bottom right, you'll see those are totally different.
458
00:52:32,490 --> 00:52:38,580
So we have a garbage Q. So the norm of Q minus Q2 is big.
459
00:52:39,240 --> 00:52:42,660
So we just executed a QR factorisation on the computer.
460
00:52:42,870 --> 00:52:47,120
The Q we've gotten is no good and the R we've gotten is no good.
461
00:52:47,130 --> 00:52:53,000
They're both very far from correct. Let's compare now.
462
00:52:53,510 --> 00:52:57,440
The difference between A and Q2
463
00:52:57,650 --> 00:53:04,920
times R2: we find these garbage factors are actually fine.
464
00:53:04,970 --> 00:53:10,910
Their product matches A to about ten to the -14. So this is all going by so fast.
465
00:53:10,910 --> 00:53:14,450
At the end of a lecture, you probably don't appreciate how incredible it is.
466
00:53:14,660 --> 00:53:21,950
But this was sort of the great discovery of Wilkinson's career that when you do calculations in linear algebra,
467
00:53:22,430 --> 00:53:29,510
even though the individual answers may not be correct, they're correlated in such a way as to give you, finally, the correct thing.
468
00:53:29,660 --> 00:53:36,980
In other words, Q2 and R2 are the exactly correct factors of a slightly perturbed matrix.
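Here is a sketch of that last experiment. I've had to guess the exact matrix from the transcript, so treat R below as hypothetical; any violently ill conditioned triangular R shows the same effect (and note that qr's sign conventions can contribute to the differences):

    R = 0.01*eye(7) + 9*triu(ones(7),1);  % assumed reconstruction: an ill conditioned upper triangular R
    [Q,~] = qr(randn(7));                 % a random orthogonal matrix
    A = Q*R;                              % a matrix whose QR factors we know
    [Q2,R2] = qr(A);                      % recompute the factors in floating point
    norm(Q - Q2)                          % large: the computed Q looks like garbage
    norm(R - R2)                          % large: so does the computed R
    norm(A - Q2*R2)                       % about ten to the -14: yet together they reproduce A almost exactly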
469
00:53:37,430 --> 00:53:38,900
Okay, let's stop. Thank you.