Welcome back to the sixth event of the Oxford Mathematics Public Lectures, Home Edition. My name is Alain Goriely and I'm in charge of external relations for the Mathematical Institute. As usual, this event is made possible thanks to our sponsor XTX Markets, a market-leading, quantitatively driven electronic market maker with offices in London, Singapore and New York.

Tonight's speaker is Professor David Sumpter from the University of Uppsala in Sweden. David is actually closely connected to Oxford, as he held a Royal Society Fellowship with us before heading to Sweden. He is an applied mathematician with a wide range of interests and has made seminal contributions to the problem of collective behaviour, covering everything from the inner workings of fish schools and ant colonies, segregation in society, machine learning and artificial intelligence, to the analysis of passing networks for football teams.

A few years ago, he actually wrote an entire book on the mathematics of football called Soccermatics, on which he gave us a brilliant lecture last year. Since then, David has been very prolific and has written yet another book, which is a general reflection on the role of mathematics in the world, all summarised through ten equations that rule our lives and society.

Typically, for such an event, we would have a big box of books for the author to sign after the lecture. Obviously, we will not be able to do this tonight; however, do not despair. You can still obtain a signed copy of his book from Blackwell's Bookshop. All you need to do is email oxford@blackwell.co.uk by 15 November, and they will provide you with all the information that you need.

I'm very keen to hear which ten equations I should know, so that I will be able to forget all the other ones that currently clog my brain. So please, David, start now and tell us how learning ten equations can improve my life.

Thank you. So, it's not every day that you ask a mathematician for advice about how you should live your life. But that's what I'm going to try and do today. I'm going to talk about ten equations which I think can make you happier, make you a better person, and might even make you richer and more successful.

Now, when a mathematician starts trying to offer this advice, it's worth starting by looking at some of the alternatives that are already on the market, and there's lots of them. This is just a small sample. There's Marie Kondo; she tells you how you should tidy up and make things better. There's Lean In; there's Jordan Peterson and his 12 Rules for Life. You can learn like Einstein; you can declutter your mind.
A lot of them involve a few swear words, where you just have to stop doing these extreme things so that you can be happier. There are lots and lots of different ones: I think there are the seven, 12 and 15 laws for confidence, and this one's my favourite, down in the corner here. This is gangster confidence; the 16 laws of gangster confidence, apparently.

So what can a mathematician add to this? Well, maths is really an abstract world, but one kind of clue can be found in the book The Da Vinci Code, by Dan Brown. The Da Vinci Code is a very different type of book from these self-help books, and it's a work of fiction, of course. But in the first 60 to 100 pages, it brings out a lovely idea: that mathematics can underlie everything. And it describes this using the number φ, the golden ratio, 1.618. The idea of φ comes up a lot in mathematics: it comes up in the golden rectangle, it comes up in the Fibonacci sequence, and in sunflowers. And the idea proposed in the book, which comes up with a lot of fake examples as well as real examples, is that there is a secret held in mathematics, and it's held in a single number.

So the problem, or the question, I'm going to pose in this talk is: could mathematics hold the answer we are all looking for? Could we just throw away, or leave, all of these other self-help books and concentrate on finding a mathematical answer?

Now, I'm not going to tell you every one of the ten equations today; I'm going to give you a feeling for a few of them. The first one that I'm going to talk about is called the judgement equation. I'm going to introduce it by thinking about an example, and it's quite a scary example, but one we're all familiar with, or we were at least familiar with before the corona times.

Imagine you are an experienced traveller, having flown 100 times before. The flight you are now on is different. As you descend, the plane starts to rattle and shake in a way you've never experienced before. The woman next to you lets out a gasp. The man sitting across the aisle grips his knees. Everyone around you is visibly scared. Could this be it? Could the worst possible scenario be about to unfurl?

So here is the nightmare scenario: you feel that the plane you're on is possibly going to crash. This is maybe an overdramatic representation of what you're currently experiencing. We want to know the probability of a crash, given the plane is shaking. We can feel the plane shaking; we're worried about the crash. The shaking is the data; the crash is the model. OK, so let's deal with that problem.
You've been on a hundred flights before in this scenario, and what's happening on this particular flight is that it's shaking. So this is a 10 by 10 grid of flights, and there's only one of them that's ever shaken as badly as this particular one that you're on. And so what you know from this information is that the probability of this much shaking, given that you're not going to crash, is one in a hundred. This is the worst possible flight that you've ever been on. So, assuming you're going to survive, a good estimate for the probability of shaking, given you're not going to crash, is one in a hundred.

It's worth just thinking about what might happen if you went on two thousand flights like this. If you have a probability of shaking this badly of one in one hundred, then if you go on 2,000 flights, you'll have 20 of those very bad shaky rides.

And then let's go one step further: the background rate of plane crashes. Plane crashes are very unlikely, so you don't know exactly how many plane crashes occur, but the chance of being in a plane crash is something like one in 10 million. You actually have time to look this up on your phone in your current situation, and I can tell you it's around one in 10 million. It might be a bit less than that; it might be a bit more than that.

So now start thinking about all of those possible plane journeys you could be on. What I've done here is I've made a thousand by ten thousand array of journeys. So imagine a thousand times 10,000 journeys: that is 10 million journeys. And inside all of those journeys, there are always going to be these shaky journeys, and you happen to be on one of those shaky journeys right now. But the probability you're going to crash was only one in 10 million before you got onto the plane, before any shaking. So here's the crashing plane, hidden somewhere in between all of those 10 million journeys. That's horrible for the people on that flight, on that plane.

So now you actually have all the information you need in order to work out your probability of being in a plane crash. In fact, there are going to be 100,000 shaky journeys in these 10 million journeys, and only one of them is going to be the crash journey. That's essentially 10 million divided by 100, which gives you a hundred thousand. And that means, roughly, the probability your shaky journey is the crash journey is approximately one in 100,000. You really aren't going to die. It's the worst thing you've experienced, but you don't actually need to be worried, given the circumstances.

Now what? What's the judgement equation?
I've gone through this problem from a frequentist point of view: I've looked at the frequency of these different things occurring. But we can actually use an equation, which I call the judgement equation, although it was originally found by Thomas Bayes. It's called Bayes' rule, and it came from the mid-to-late 18th century. And what it does is gather up all the information we have.

So, you'll notice that while I was doing my calculation, I found the probability of crashing: the background crash rate for planes is one in 10 million. If your plane is going to crash, it will shake, so the probability of it shaking given a crash is one. And the probability of shake given not crash, that's based on your experience; we estimated that to be around one in 100: it's the worst flight you've been on. But what we really want to know is the probability of crash given shake, and that is given by Bayes' equation, which looks as follows:

P(crash | shake) = P(shake | crash) P(crash) / [ P(shake | crash) P(crash) + P(shake | not crash) P(not crash) ]

We just put in all of the numbers, and we come up with this thing being approximately equal to one in 100,000.

Now, Bayes' rule and the judgement equation have come up a lot recently, because they also come up with regard to the coronavirus. One of the problems that's come up during the COVID crisis is that there are false positives in tests for the disease. So if you go and get yourself tested for the virus, sometimes, and the rate is about one in a thousand, you'll come up with a false positive: even though you haven't got the disease, you'll get a test result back that says you have got the disease. And we can also think about this using the judgement equation.

So here we've got 100 people. I've just said that it's more like one in a thousand, but imagine it was a one in 100 error rate for the test. Well, one in 100 people are going to get back a positive test even though they're healthy and don't have the disease. So that's one in a hundred there. But imagine also that we have one in 100 people who have got the disease. So here's 100 people, and one of them has got the disease: that's this one here.

Now, what we can see here quite clearly is that, in this case, it's just 50-50 if you get a positive test result. You could be that person who has the disease, or you could just be this person who got a wrong test result back. In fact, there's a 50-50 chance that you're one of these two people.
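To make the arithmetic concrete, here is a minimal sketch in Python of both calculations. The helper name `bayes` is mine, not from the book; the numbers are the ones quoted above, including the simplifying assumption that the test always detects the disease when it is there:

```python
def bayes(p_model, p_data_given_model, p_data_given_not_model):
    """Bayes' rule for a binary model: returns P(model | data)."""
    numerator = p_data_given_model * p_model
    denominator = numerator + p_data_given_not_model * (1 - p_model)
    return numerator / denominator

# The shaking plane: P(crash) = 1/10,000,000, P(shake | crash) = 1,
# P(shake | not crash) = 1/100.
print(bayes(1e-7, 1.0, 0.01))   # ~1e-5, i.e. about one in 100,000

# The disease test: P(disease) = 1/100, P(positive | disease) = 1 (assumed),
# P(positive | healthy) = 1/100 (the false positive rate).
print(bayes(0.01, 1.0, 0.01))   # ~0.5, the 50-50 case from the grid of people
```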
Now, the one in 100 that I've put up there really depends on a lot of different factors. One of them, for example, is that you only go and get tested if you've got symptoms. If you look at these little guys here, all of the ones with face masks on, they don't go and get tested. But there are a few individuals here who have got symptoms. So if only seven out of 100 individuals have symptoms, then it's much more likely that a positive test result is correct.

There are all sorts of parameters that you have to think about when you're thinking about the coronavirus and the testing of it, and so I don't want to give a conclusive discussion of that just now; these types of discussions are raging over Twitter and in the news. But it's the same mathematical problem, based on the judgement equation, which underlies both of those things.

There's a general way in which I think mathematicians, or applied mathematicians, think about the world, which is very useful. And I like to think about how we break the world down into model and data. I personally see the future, or I have a sort of model in my head, which is a little bit like a film: I see different things happening. So when I'm in an aeroplane and it starts to shake and I start to get worried, I start to imagine the bad things that could happen to me. That's a model of the future of the world. The data of the world is the shaking, the experience that something bad is happening. And all of the time, how we should be thinking is: updating our model on the basis of the data we get in. And that applies to everything you can think about.

If you're swimming in the sea where you know there might be a shark attack and you see something moving, your model is the shark attack; your data is the movement in the water. It's the same when you get an interview for a new job: your model is that you'll be successful and get the new job; the data is the outcome of the interview and how the interview went. And the same when you get a medical test: you've got a model that you'll be infected with the disease, and you get the test, and that improves your understanding of whether you actually have the disease or not. And all of these things can be solved by collecting up the data that you have and putting it into Bayes' theorem, putting it into the judgement equation.

And the judgement equation isn't just about objective things that happen in our lives; it's also about our feelings. You can also use the judgement equation in order to update and have a better understanding of the personal things that happen in your life. So I'm going to give an example of this.
So, you probably can't help but notice the change of scene for this particular segment of the talk. And that's because when Dyrol, who organises the talks at the institute, and Alain, who introduced me, and myself reviewed the original lecture that I did in a kind of cold, empty lecture theatre, we felt that the setting wasn't quite right for the message that I'm going to try and bring out now. I think the lecture theatre works very well when I'm presenting things to do with aeroplanes and coronavirus, when we're thinking about factual situations, when we're handling hard data and trying to understand it. But what I'm going to talk about now isn't so much objective data; it's much more subjective data. It's things about feelings, and it's things about emotions.

Now, I think all of us have a lot of feelings and emotions generated by our interactions with the people around us. And I think that the judgement equation, Bayes' rule, actually suggests that we should be less judgemental and more forgiving of other people. And I'm going to illustrate that through an example.

So if I think about my own life, I mean, I have lots of frustrations. For example, I'm teaching a course just now, with students emailing me about different things to do with the lectures, and I just feel I've answered these questions 100 times. And of course, I know it's different students, but I get frustrated. I also work in a football club, giving data analytics and doing analysis, and I get frustrated by the demands on me. There's just a lot of passion inside football, for example, and it's all about winning, and things do get very heated and people do get angry with each other. And also in my personal life: I mean, you know, like everybody, I have a family, a wife and two teenage kids, and there's a lot of debate and discussion that goes on. So how can we actually use mathematics to be better at dealing with those types of situations?

Now, I think when I started studying mathematics, I had a very hard idea about it. I saw it as a very precise logic. And if you go back to the idea of the Da Vinci Code, which I talked about at the start, it's this idea that there's a code, a secret, which gives you the correct answer. But it's not really like this. Because this is a list of some of the bad things that can happen to you on an everyday basis: I made a list here of when friends, or apparent friends, let you down. And this is something which all of us have experienced. Sometimes we overhear a friend say something nasty about us.
Sometimes your friend steals an idea or a joke that you've made up and portrays it as their own. Sometimes your friends have parties and they don't invite you. And these things make us all sad; nobody can experience these types of things without feeling a little bit upset. And the question is, well, what should you think?

One thing that you can think in these types of situations is that the people who do these things to us, the apparent friends, well, actually, they're idiots. And the question you have to think about is: is this friend, who's let you down in this way, an idiot?

Well, I do want to say that I don't really go around thinking that people are idiots. You know, if you meet me for the first time, it's not that I have this percentage probability that you're an idiot. But it's a useful way of thinking about the problem, and the numbers here aren't that important. So I've said here that I have a sort of baseline rate: the people that I meet are likely to be idiots one time in 20. So five percent of people are just unhelpful idiots, and then 95 percent of people are nice and easy to get along with. Now again, these numbers are subjective. I don't really think people are... actually, maybe I do think some people are idiots. Anyway, the point is that we're going to work through this problem and we're going to find out the probability that somebody who said something not very nice to you is actually a total idiot.

And this is a representation of this problem. We've got P(M), which is the probability of our model, and we've got P(Mᶜ), which is the probability of the complement of our model. There's a five percent chance, this little grey box here, that the person is an idiot, and a 95 percent chance here that they're not an idiot. That's before they do anything bad to you at all.

So now comes the data. We've set up the model; now comes the data. The data is the bad thing that this person has done to you: they've let you down by not inviting you to a party, or by saying something bad behind your back. What the data tells us is the following. Idiots spend about 50 percent of their time doing nasty things: talking about people behind their backs, letting other people down. But what we have to remember is that even non-idiots can have bad days. In fact, 10 percent of the time, I've estimated, perfectly nice people do something rather regrettable and not very nice.
So these idiot-type people, well, they spend 50 percent of their time making nasty comments, and non-idiots spend 10 percent of their time doing so. And now we can break this down also in this square, which has area 100 percent. These are the idiots: we've divided those five percent of people who are idiots into 50 percent of the time saying nasty things and 50 percent of the time being generally OK. Now, for the non-idiots, we can also divide them: 90 percent of the time they're being nice and just doing nice things, being really great friends, but 10 percent of the time they're also making mistakes. They're not idiots, but they've made a mistake.

And when something nasty happens to us, when we've collected the data, we know we're in either this situation or this situation. So the other situations disappear, because something bad has happened to us, and we're just left with these two situations.

Now, what's interesting is if you look at the relative sizes of those boxes. So if I go all the way back here: this was the idiot being nasty; this was the nice person making a mistake. We've just got these two boxes left. Those are the two possibilities of where we lie after we've had this nasty comment, or we've not been invited to the party. We put them together, and I want you to look at the relative sizes of these. The idiot being nasty is just this size here. The non-idiot making a mistake is a much larger box. And this is the key to what Bayes' rule tells us. Bayes' rule asks: what are the comparative areas of these two particular boxes? And we can see that this one is actually about one quarter of the size of this one here.

When we put the numbers that I told you into Bayes' equation, the 0.5 times the 0.05 is calculating the area of this box, which is the idiot being nasty. That also appears down here, because we add the two areas together, and the 0.1 times 0.95 is the area of this box. And to find out the probability that your friend is an idiot, we need to find the area of this, divided by the area of this plus the area of this. That's what Bayes' rule tells us; that's what the judgement equation tells us.

And the conclusion is that, whatever this person did to you, the probability that they are an idiot and you should stop talking to them immediately is only about 20 percent. And this tells you that you should be a much more forgiving person. Most of the bad things that happen to you are just because nice people are making mistakes.
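Worked through in code, this is the same box-area arithmetic; a minimal sketch with the numbers from the slides (which, as said above, are only illustrative):

```python
p_idiot = 0.05               # baseline: one in twenty people
p_nasty_given_idiot = 0.5    # idiots are nasty half the time
p_nasty_given_nice = 0.1     # nice people slip up 10% of the time

# Bayes' rule: compare the "idiot being nasty" box with the total
# area of the two boxes that remain once something nasty has happened.
idiot_and_nasty = p_nasty_given_idiot * p_idiot        # 0.025
nice_and_nasty = p_nasty_given_nice * (1 - p_idiot)    # 0.095
print(idiot_and_nasty / (idiot_and_nasty + nice_and_nasty))  # ~0.208
```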
Now, the numbers I've given in this example are not really the important thing. The important thing is thinking about the alternative hypotheses in all of these different situations, and I think we have a tendency not to do that. Immediately, we take problems personally; we immediately start to get this idea that people are idiots. But when we just calm down for a minute and think through the alternative, that actually even nice people make mistakes now and again, we come to a very, very different conclusion.

A key point in this example is that it started with me allowing myself to have a wrong model of the world. I allowed myself to have an emotional reaction where, when my friend let me down, I concluded that he was an idiot. But then, after that initial response, this initial model that I had created, I worked through the judgement equation and was able to see that there was another explanation behind all of this. So what's important to emphasise is this: imagining wrong models, feeling frustrations and being disappointed with other people is actually a key part of making a good judgement.

OK, I think I've got the point of that one across now. I think we'll go back to the lecture theatre and listen to what more I've got to say.

One thing I worry about personally is screen time with my kids. My kids are teenagers, 15 and 17; they use their mobile phones a lot, and, to be honest, their parents also use their mobile phones and their screens a lot. But what we are interested in knowing is: what's the effect of that? Is there a big negative effect?

And this article I came across, which was published on a Berkeley website, gave this graph about how the mental wellbeing of teenagers decreases with the amount of time they spend looking at their screens. Now, this looks quite worrying, because you've got a negative trend here. But what I think you need to remember as well is: what's the effect size? What's quite unclear when you first look at the picture is that this is a scale of zero to 100. They've asked people, zero to 100, how do you feel? And the scale shown here is only 40 to 50. In fact, from the average answer to the most extreme answer, there's just a one percent effect size. So teenagers become one percent less happy because they're using their screens. And you can think of this in terms of the judgement equation.
In fact, in this same study, they found a three percent effect size for whether teenagers got a good night's sleep, and teenagers eating breakfast also had a three percent effect size. So not eating breakfast reduces your happiness by three percent; not having a good night's sleep reduces your happiness by three percent. And that, in the context of the phone usage, shows that it's quite a small effect. So when I talk to my teenagers about using their phones too much, I spend three times more time talking to them about having a good night's sleep and eating breakfast than I do nagging them about how much they use their mobile phones.

There's a very nice article by Candice Odgers in Nature, where she balances all sides of the debate, and that's how you know when you're reading something that's proper science: when you actually hear all sides of the debate. And she comes to the conclusion that smartphones are bad for some teens, but not for all of them.

So that's the judgement equation, and I've gone through some examples of how it can be used. It can be used for thinking about disasters; it can be used for thinking about diseases; it can be used for thinking about your personal life. And it can also be used for thinking about the things that you read online and evaluating the evidence for them.

That's one of the ten equations. Now, what about the other nine? Well, these are the names of them. I've got the betting equation, the advertising equation, the confidence equation, the learning equation, the influencer equation, the skill equation, the market equation and the reward equation. And since we're at a mathematics lecture, I'll tell you the actual equations themselves; these are what they look like.

And my argument is that if you have these equations in your life, you will be able to understand the situations around you better, on a more personal, everyday level. And also, if you learn the mathematics behind these equations: they lie behind a lot of the financial industry, the social media industry, the betting industry, and lots of other work inside public services as well.

So I'm going to give one more example of this. I'm going to take out the reward equation and tell you a little bit about that. The reward equation is widely used by social media companies, and it's used to decide what types of rewards and information they should give us. If a company finds that we like something a lot, it gives us more of that stuff, so that we keep clicking on it.
My idea is that you can actually reverse engineer the reward equation, both to monitor what's going on in your life and also to decide what you should do and what you shouldn't do.

So here's my scenario. Imagine you're lying on the couch, rewarding yourself by bingeing on TV, and you start watching a Netflix series. The first episode is brilliant, as they always are. The second is OK, and the third is slightly better again. The question is: how long should you keep watching before you give up? I think this is a problem that's familiar to all of us. With so many TV series out there, it always feels like you could be watching something else that might be better. But how long should you keep watching the series before you decide, 'Right, I'm just going to give up'?

So here's how it works. First, you rank the first episode you watch out of 10; give it a score, zero to 10. This is your tracking variable. Keep this in your head. And what I'm going to propose is that for all the activities you do, maybe it's going to the gym, meeting friends, anything you do, you can have this tracking variable: one number which describes how much you enjoy the activity. But just now, we're watching Netflix.

And the rule is the following. If your tracking variable is seven or below, then stop watching. This is Netflix; there are lots of series available, so I'm going to be quite strict here. If it's seven or below, then stop. If the first episode is seven or below, then you stop. Otherwise, you watch the next episode and give it a score out of 10. Add the score for this episode to the tracking variable and divide by two, and then round this number up to get your new tracking variable, Q_{t+1}. Then you go back to step two.

OK, here's how it works in practice. You start with the first episode. You watch it; you give it nine out of 10; Q_t is equal to nine. This is one of my favourite series, Big Little Lies, and I enjoyed the first episode: nine out of 10. The second episode isn't quite as good, so I give it a six out of 10. Now, what I do to update my tracking variable is I take the original nine, I add the six to it, so I've got 15, and I divide by two and get 7.5. And remember, I said I round up, so I round up to eight. I think it's good to round up, because then you just need to keep one whole number in your head. So it's still an eight: even though the show got a six, it survives and goes on to the next round, and you're going to watch the next episode.
And that's lucky, because the next episode you watch is a nine out of 10. So your tracking variable is updated: you had eight from the previous time, plus nine, divided by two, is 8.5. Round it up to nine. So the main advantage of this method is that a good show will survive a six. Whenever you have a six out of 10, it pops back up to the eight and you keep watching the show. Of course, you can change the parameters in this model: you can make things decay more slowly, you can change this divide by two, or you can change your threshold. But it's a good model for keeping track of things.

It won't survive a five, though. If you have a nine, then nine plus six divided by two is 7.5, and you survive. But if you have an eight plus a five, you're just not going to survive it. This was another series which I watched, which wasn't quite so good; even though it started very well, I stopped watching it quite quickly.

Now, the general form of the reward equation is as follows. You have Q_{t+1}, your tracking variable at the next time step, equal to (1 − α) times Q_t, plus α times the reward R_t. We set α equal to a half in the original setup. The reward R_t is the score we gave the show; Q_t is our tracking variable. And what's very nice about this general form of the reward equation, which has been around since the 1950s, is that we know the tracking variable is going to converge to the average reward. So with this repeated series-watching, we can actually see that whatever we start our tracking variable off at, it will eventually converge to the average reward that we're getting.

And this is something that we can use to monitor all parts of our life online. We can give each thing that we use on social media, not just Netflix series, a score. And we can keep track of those scores in order to see: is this something that I want to continue doing, or is this something that I should get rid of and get out of my life?
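Here is a minimal sketch of that rule in Python, with α = 1/2 and the round-up and stop-at-seven-or-below conventions described above. The function name and the example score sequences are mine, for illustration:

```python
import math

def watch(episode_scores, threshold=7, alpha=0.5):
    """Reward-equation tracker: Q_{t+1} = (1 - alpha) * Q_t + alpha * R_t,
    rounded up; give up once Q drops to the threshold or below."""
    q = episode_scores[0]                    # the first episode sets Q
    if q <= threshold:
        return f"gave up after episode 1 (Q = {q})"
    for t, r in enumerate(episode_scores[1:], start=2):
        q = math.ceil((1 - alpha) * q + alpha * r)  # with alpha = 1/2: (Q + R) / 2
        if q <= threshold:
            return f"gave up after episode {t} (Q = {q})"
    return f"still watching (Q = {q})"

print(watch([9, 6, 9]))   # 9 -> 8 -> 9: a good show survives a six
print(watch([9, 6, 5]))   # 9 -> 8 -> 7: an eight plus a five does not survive
```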
There is a problem here, though, and it's that these scores actually keep changing over time. How do we keep track of the changes in these scores when we don't know the underlying value of something? And here, actually, there's an answer: we can look to the ants. Ants use pheromone trails. What the pheromone trail does is, whenever the ants find something good, they find some food, they leave a pheromone trail going to that food, and it recruits other ants to that food. So the ants' pheromone trail is like a tracking variable, which follows where the food is.

And you can do experiments with ants where you offer them two alternatives. This is where they've been offered some red food and some blue food, and there's a Y-shaped junction here. This is a close-up of the junction, where the ants have to choose if they will go to the red food or to the blue food.

Now, how do the ants manage to get the balance right, given that they're trying to work in this dynamic environment where things are changing all of the time? A good way to think about this from a mathematical point of view is to think of the ants as balancing on a seesaw. So here is my first ant. She's on this seesaw, and the seesaw is like the left-right choice of where to look for the food. And she maybe makes the left choice; she goes in that direction. Now, what she does when she makes the left choice is leave pheromone going towards that choice, and so that biases the decision for the next ants which come along. The ants don't actually walk on a seesaw; they're on this Y-shaped maze. But the idea is that once they start going in one direction, they bias the choice towards that direction. So the next ant comes along and she follows the previous one down; the next one, she does the same thing; and they keep following, one after the other, down towards the food.

Now, this is good: they've managed to find this lollipop, which was the food that they were looking for. But the problem is, once they've tipped the seesaw in this direction, if the food moves up here, how do they get back to the new food source? And they can't. They're sort of stuck: the new ants which come along will follow the others, so they follow the trail down in the wrong direction.

So what you can do, if you're an ant, is tip a bit less. So here we'll have the first ant, and this is equivalent to having less reaction to the pheromone. She's making a decision; she goes that way. And you'll see that the seesaw is tipped slightly less, so she's left less pheromone, or she's reacted less strongly to the pheromone. Then the next ant maybe goes the other way and balances the seesaw up again. And then you start to get off in one direction, but it never really gets too unbalanced, because they're making decisions to go left or right basically in a random way. But the problem now is, if you don't tip enough, then you never really get focussed on the best food.
The food is moving around in different directions, and the ants follow the food, but they never really tip over strongly to one direction or the other.

And it turns out the answer, for the ants, is to get their seesaw on a balance like this. By getting that seesaw to its tipping point, tipping just enough but not too much, what they can do is track where the lollipop, the food, is moving to. And this means getting exactly the right responsiveness to the food.

The mathematical model behind this consists of two different equations, which are based on the reward equation, but what we've done is add this feedback term here as well: this is a feedback, and this is a feedback here. This models both the seesaw idea that I've talked about and the Y-maze decision, and it can also model how we make our choices about social media. I'm not going to go into the details of solving these equations, but when you solve them using a theory called bifurcation theory, you can actually show that if you have too weak a response, if you are not very sensitive to the pheromone trail, you get a situation like this, where the ants are just sort of randomly distributed. If you get too strong a response, they can tip and go in the wrong direction. But if you get this just-right response, then the ants can be very flexible when the food sources change position.
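To make the seesaw idea concrete, here is a minimal simulation sketch in Python. This is a generic reinforced-choice model of my own construction, not the actual pair of equations from the book, and it assumes a small fraction of "informed" ants who go straight to the food and a simple linear pheromone decay; it just illustrates the too-weak, just-right and too-strong behaviour that bifurcation theory predicts:

```python
import random

def colony(sensitivity, steps=4000, evaporation=0.01, seed=1):
    """Generic reinforced-choice ('seesaw') model. Followers weigh the two
    trails as pheromone ** sensitivity; a few informed ants go straight to
    the food, which moves from the left branch to the right halfway through."""
    random.seed(seed)
    left, right = 1.0, 1.0                 # pheromone on each branch
    tail = []                              # choices near the end of the run
    for t in range(steps):
        if random.random() < 0.1:          # informed ant: heads to the food
            go_left = t < steps // 2       # food is left, then moves right
        else:                              # follower: weighs the two trails
            w = left ** sensitivity
            go_left = random.random() < w / (w + right ** sensitivity)
        left = (1 - evaporation) * left + go_left
        right = (1 - evaporation) * right + (not go_left)
        if t >= steps - steps // 10:
            tail.append(go_left)
    return sum(tail) / len(tail)           # fraction still going left at the end

for s in [0.5, 1.0, 2.0]:                  # too weak, near tipping, too strong
    print(s, colony(s))
# Typically: the weak response stays diffuse, the strong response stays stuck
# on the old (left) branch, and the in-between response tracks the move.
```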
And this is something you find in all animals, including ourselves. Here are just a few examples. This is humans applauding after a talk; this is fish escaping when we frighten them; and over here we've got bird flocks flying around. What animals have is a collective FOMO, a collective fear of missing out, always balanced at this point where they can switch direction: to move towards food, to move towards shelter, to move towards a mate.

And that's kind of what we are on the internet as well. If you want to collect all the rewards that are available to you, then you have to be there all of the time, clicking and searching and trying to find out what's going on. And that's just incredibly stressful. It kind of lies in our collective nature to do this, but it's a very stressful thing to be doing on a daily basis.

So I have a recommendation that I've also learnt from the ants. Another thing the ants are very good at is just taking it easy. The ants don't do this running around like crazy all the time; they put themselves at the tipping point when they're working, but ants also like to rest. They like to sleep. My wife plays a lot of Pokémon Go, and so I have this picture of Snorlax resting, because, just like the ants, you can at times go crazy and try to follow all the information, but at other times you want to take it easy and have a sleep, like Snorlax does.

OK. I've told you about two of the equations today: the judgement equation and the reward equation. There are, of course, since there are ten equations, eight other equations in the book, and if you've been looking carefully, you'll see that I've listed seven of them here. These equations are used throughout our lives.

The learning equation was used by YouTube when they wanted to learn about what videos we like; it massively increased our use of YouTube by finding out what we liked and showing us more of it. The learning equation is about maximisation using derivatives. The advertising equation is used by Facebook to find out what we're interested in and target adverts at us; it's the correlation equation. The confidence equation is based on the confidence interval; it's used both in gambling, to find out if you really have got a winning strategy, and also widely in research, in order to find out if our results are really true. The influencer equation was first used by Google; it's actually based on mathematics which is 100 years old, and it's used to measure who are the most important people in a social network, or on the internet. The market equation is used in finance; it's a diffusion equation which, as I break it down, describes signal, noise and feedback, the ways markets work, and how you can use that to make a lot of money. The skill equation is used in sports, to evaluate the quality of different football players, for example. And the betting equation, well, it's used in betting, but it's also about how we can use probability in our own lives.

So these are equations which allow you to get a better understanding of the world. And they all combine models, where you describe things as a mathematical equation, and data that you take in; the data can be things from social media, from finance, from banks and so on. Combining models and data is the secret to successfully using these equations.

There's a tenth equation as well. So what's that? Well, all of these equations involve combining different types of models with data. Now, the tenth equation is different.
I call it the universal equation, and I like to write it as 'if ... then ...', because eventually this is going to end up as a logical statement. But it goes back all the way to what I mentioned at the start of the talk: The Da Vinci Code.

Now, the Da Vinci Code contains this fascinating idea that there could be a single equation that unifies everything and brings the world together, a single theory of everything, which all revolves around, in this case, φ, a single number. But I believe that this can't be the case. And I'm just going to give you a brief idea about why we can't have a single number, a single mathematical idea, that unifies the entire universe.

If we take, for example, φ: where does it come from? Well, φ, as I mentioned at the start, comes from the golden rectangle. One side of the rectangle has length a; the other side of the rectangle has length a + b. And this is a golden rectangle if, for the rectangle that's left over when you cut out the square, the ratio of its sides, a to b, is the same as the ratio of a + b to a. And that's given by this equation here: (a + b)/a = a/b = φ, which is the golden ratio.

So that's one place in which φ comes up. And what Dan Brown emphasised in his book, and he came up with quite a few examples which weren't true, but also quite a few that are, is that φ occurs in quite a lot of different settings. And here's one of them: the Fibonacci sequence. The rule is as follows: you have 1 + 1 = 2, 1 + 2 = 3, 2 + 3 = 5, 3 + 5 = 8, and so on, up and up and up. And where does the golden ratio occur? Well, it occurs here: 13 divided by 8 is equal to 1.625. We take consecutive numbers and divide one by the other, and as we go off into infinitely large numbers, that ratio gets closer and closer to 1.618.

And this is one of a number of different mathematical results where the same number appears in lots of different settings, and it's this that gives the kind of semi-mystical feeling about mathematics, that there's some sort of hidden truth behind those numbers. And it's always confused mathematicians right through history. It led Henri Poincaré to reflect on this, and he said, and this is really a rhetorical question by him:
He said that if all the assertions of mathematics can be derived from one another by formal logic, just like we've looked at, or like I'm going to look at now with φ, then mathematics can't be anything other than some sort of immense tautology; logical inference can teach us essentially nothing new. But can it really be the case that all of these books that we look at in mathematics have no other purpose than to say, in a roundabout way, that A is equal to A?

Now, this is a rhetorical question on the part of Poincaré. He can't really believe, I think, when he states this, that all of mathematics just says that A is equal to A.

So if we take the golden rectangle here, and the equality we've set up for the different sides of it, what we can do is rewrite it. We have (a + b)/a, which is 1 + b/a. But we've also said that a/b is equal to the golden ratio φ, so b/a is 1/φ. So we have 1 + 1/φ is equal to φ. And then if we multiply through by φ, we have φ + 1 = φ². We always like to have everything over on one side, so we have a nice quadratic equation: φ² − φ − 1 = 0. And if you remember the roots of the quadratic equation, you use the quadratic formula, and that will be, if we've got it right here, φ = (1 + √(1 + 4))/2, which is (1 ± √5)/2. Taking the positive root, that's how we come up with the value of φ, 1.618...

So that's the problem from the point of view of the golden rectangle.
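Before moving on, here is a throwaway numerical check in Python of the two facts we now have on the table: the positive root of φ² = φ + 1, and the consecutive Fibonacci ratios from earlier homing in on the same value (nothing here is from the book itself; it is just arithmetic):

```python
import math

# Positive root of phi^2 - phi - 1 = 0, via the quadratic formula.
phi = (1 + math.sqrt(5)) / 2
print(phi)                   # 1.6180339887...

# Ratios of consecutive Fibonacci numbers approach the same value.
a, b = 1, 1
for _ in range(20):
    a, b = b, a + b
    print(b / a, end=" ")    # 2.0, 1.5, 1.666..., 1.6, 1.625, ... -> phi
```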
Now, if we look at the other problem, the Fibonacci sequence, what we know about the Fibonacci sequence is that x_{t+1} = x_t + x_{t−1}. What do I mean by that? Well, if you remember the sequence, 1 + 1 was equal to 2, 1 + 2 is equal to 3; I'm just writing that same thing out in an algebraic form.

Now, what we can do is ask what happens when t gets very large. First of all, if I just write out x_{t+1}/x_t, then that is equal to (x_t + x_{t−1})/x_t, and this is also equal to 1 + x_{t−1}/x_t.

Now, if I take the limit as t goes to infinity, and that means I just keep going on in lots and lots of steps, exactly as I've done here, going forward and forward in time, then if I want to know what this limit is, I can replace the ratio with φ. And that's going to be equal to 1 + 1/φ, because, as we go off to the limit of time equals infinity, x_{t−1}/x_t is just 1/φ. And that equation you should recognise from before, because it's the same equation again, and it's going to have the same solution again.

And so the occurrence of these types of relationships isn't so much a deep secret; it's more exactly as Poincaré originally proposed. What I've really shown here is that A equals A: we get the same solution for these two different mathematical problems. Ultimately, problems like this, pure mathematics problems which aren't coupled to data, do ultimately just prove that A equals A. And to the rhetorical question that Poincaré posed, the answer is yes. When you look at the philosophy of mathematics, they come to the same answer: what you're doing when you decouple a problem from reality is just proving that A equals A. And in that way, you can never expect a secret number of any type to tell you something about the structure of your life.

But then, to the question, could mathematics hold the answers we're all looking for? I believe it can, but only when we let data into our lives as well. It's when we combine data, in the form of all the things that happen to us, with a model, a mathematical description of the world, that we can actually start to understand what's happening and better control our lives, and maybe become more successful and happier people.