I'm very pleased to see him here out front, and I did check with him beforehand. He has been a visiting research fellow with CCW for some time because of the work he has been doing. We consider him a research fellow, but he is now embedded, if you like, in the thinking of the day. He has been involved in a number of areas of conceptual development in defence, including areas like the Integrated Review, policy on multi-domain integration, the Integrated Operating Concept and indeed Multi-Domain Battle. So there are plenty of things for us to think about, and these are the areas he is going to help us think through today.

He has an interesting title, integration's Goldilocks factor, so we'll come to why he has called it that in just a moment. Like many disruptors of his generation, he has done the usual service, we might say, towards the future, for which we are grateful. He has been one of those rare individuals who thinks very deeply about what defence means today and about how new technologies are not only disruptive but also how they are being understood: how, before we get to the point of application, we make sure we understand what these phenomena are and what implications they carry. I'm not going to run through the whole list of things, because I suspect a couple of these various strands of work are going to come up in the course of your presentation, and I won't spoil it for everyone else. But I am curious to hear about integration and the Goldilocks concept.

Thank you very much for coming. I am amazed to see this many people at a talk on integration, and I have done my level best to set myself up for failure: first of all by talking about a thing that everybody loves to loathe, and secondly because I know people like me to talk about artificial intelligence, particularly, weirdly, for the last six months about artificial intelligence ethics, and I am not going to do that. What I am talking about is integration, and why there is a Goldilocks factor to it.

I am by profession a staff officer, and that means I am powered by two things, which are caffeine and hatred. There is an array of publications that have appeared in the US, the UK, France, any number of nations across the world, and integration is very much the zeitgeist. You might ask, well, what are they integrating? And the answer is everything: every form of military endeavour on the one side, and on the other, every agency and every lever the state could hope to pull. Now, could things be better integrated? Yes, absolutely.
Let me get out of the way now the idea that I am somehow arguing that integration is a bad thing and that we should go the other way, because that is obviously nonsense. But it is this word that bothered me slightly and therefore resulted in this talk. It is the idea that more is the answer, that more is always the answer, until everything is wired to everything else, until you have every ounce of data that exists. It leads to visions like this, and there is nothing critically wrong with them in a sense. On the left you have every platform talking to every other platform and sending every ounce of data to every other platform in the country, so that you have this universally informed military force. And that is also tied into a state, and here we see Russia for example, but it doesn't really matter which, and it is using trade and non-military activities, and space and nuclear weapons, and all of it is linked seamlessly together. There is a reason people think that way.

Then you get to this idea that if everything is interconnected, if you have everything monitored, measured and metricised and all linked together, you have two things. One is this tidal wave of information that risks overwhelming you, which is a realistic threat proposition. But it is also seen as an opportunity to create the sort of environment where you can know everything, and you can therefore orchestrate the perfect world that you want, ideally with some sort of god-like move that enables you to orchestrate everything even before you have started. The other side of that is the fragility of the glass battlefield.

Now, it is not unreasonable for lots of people to have formed this idea of perfect orchestration through massive surveillance and information, because this man had a really, really bright idea about it. If you know anything about computers, the words "von Neumann architecture" will tell you that this is why we have computers. And explicitly, one of the things he was seeking to achieve when he helped create the first computers, amazingly, was weather control. It is in his list of things he wanted to do. Why? Because von Neumann knew that tiny changes in initial conditions can massively shift how a system performs in the long run, and his idea was that humans are too simple to cope with all the fluid dynamics involved. Fine: we have a computer that does the crunching, we find the points of balance, and in the military we might think about it in similar terms.
And that lets us tip the system into new modes of our liking. And then Edward Lorenz comes along and ruins it. He was working on one of the first sets of computers, as imagined by von Neumann, and he ran a simulation, then went back and re-ran it with a tiny change in the initial conditions, and very soon the whole weather structure was completely different. On that afternoon he realised that long-term weather forecasting was dead.

There is another stage to that which is worth explaining slightly more, and it is on the screen here. What you see is a row of imaginary buoys on the surface of the water, and then we have the waves on the water. If you had a simulation that understood all the wave properties, it would take the readings from those buoys and be able to predict into the future what the waves might look like, because it is a physics system and it is therefore codifiable. Your problem is when something happens in the gap between the sensors. In my hypothetical example, someone threw a rock between two of them, and now you have a big new wave. You and I, looking at the screen, know that the path those waves forge forevermore will be different. But the point is that the system linked to those buoys can never know; to its perception there is no difference between that and this at that moment. So your ability to orchestrate that perfect battlefield is going to be foiled by the gap between the sensors. Lorenz spotted that any area can be the point from which chaos pours forth, and our other problem is that you cannot have infinitely arrayed sensors at infinitely small spacing covering every situation you face.

One of the other reasons we tend to think about orchestration in a particular way is that we have classic examples like AlphaGo, here playing Go. What is key about that board? There are no gaps between where the pieces can be. You can have systems playing something like StarCraft, where you have hidden elements of data, but what you do not have in something like Go is the god of chaos between the gaps, for want of a description. And this is another useful idea, because it takes you to the premise that some computer models are doing emulation rather than simulation. Go is not being simulated here; Go is being played here. There is fundamentally no difference between the game AlphaGo is playing and the game Lee Sedol is playing. The fact that a human is moving the counters around is irrelevant. The game is being played by the machine.
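To make the gap-between-the-sensors point concrete, here is a minimal sketch. The buoy positions, the splash width and the sea state are all invented numbers for illustration, not anything from the talk.

```python
# Toy sketch of the "gap between the sensors" problem: the world is only
# sampled at the buoy positions, so a splash dropped between two buoys is
# invisible at the moment it happens, even though every later wave differs.
buoy_positions = [0.0, 1.0, 2.0, 3.0]

def readings(disturbance_x=None):
    """Height 'measured' at each buoy; only buoys near a splash register it."""
    heights = []
    for x in buoy_positions:
        h = 0.5                                   # calm baseline sea state
        if disturbance_x is not None and abs(x - disturbance_x) < 0.2:
            h += 1.0                              # the splash is narrow
        heights.append(h)
    return heights

print(readings())                     # [0.5, 0.5, 0.5, 0.5]  no rock thrown
print(readings(disturbance_x=1.5))    # [0.5, 0.5, 0.5, 0.5]  rock between buoys: identical
print(readings(disturbance_x=1.0))    # [0.5, 1.5, 0.5, 0.5]  rock on a buoy: visible
```

To the system behind the buoys, the first two situations are indistinguishable, which is exactly the blindness the talk describes.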
Now, that emulation point will apply to some of the systems we want to use machine learning tools for, but it will not necessarily apply to some simulations.

We talked a bit before about something you will recognise as chaos theory. Chaos theory famously has the butterfly effect, which depends on sensitivity to initial conditions. What most people do not know is that the chaos butterfly has a less media-savvy cousin: the hawkmoth. The hawkmoth effect is similar to the butterfly, but the difference between the two is that with the hawkmoth the sensitivity is in the model you are creating: minor changes in the model produce wildly different outcomes, even where the initial conditions you are feeding in do not change. I would strongly recommend looking at Erica Thompson from the London Mathematical Laboratory on this. The example is that you can have exactly the same initial conditions, but tiny tweaks to the model you are creating will give you wildly different outcomes, and those tweaks can be infinitesimally small.
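A toy contrast of the two effects, using the logistic map as a stand-in rather than Lorenz's weather model or Thompson's own examples. Strictly the hawkmoth effect concerns structural model error, but nudging the model's single parameter makes the same point simply; all numbers here are mine.

```python
# The butterfly is sensitivity to the starting state; the hawkmoth is
# sensitivity to the model itself. Same chaotic map in both cases.

def run(r, x0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x) and return the final value."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

base      = run(r=3.9,       x0=0.400000000)
butterfly = run(r=3.9,       x0=0.400000001)   # same model, initial state nudged
hawkmoth  = run(r=3.9000001, x0=0.400000000)   # same initial state, model nudged
print(base, butterfly, hawkmoth)
# In the chaotic regime both tiny nudges end somewhere completely different
# from the baseline: the data can be perfect and the forecast still falls apart.
```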
We have another problem, which is that not only do we need to give a system the things that let it see, it can also only see what it has learnt to see. For those of you who clicked on the little clickbait thing that might have got you here, this is where we get to the cats. Rob has had me talk about this before. These are the classic kitten-rearing experiments: one set of cats was raised in an environment where they only ever saw horizontal lines, another set in an environment where they only ever saw vertical lines, and this is the result. That cat can see the rod held horizontally, obviously, but there was nothing in its reward system that allowed the neurones to form that would let it see in the vertical. And the point is that there is nothing special about that cat; it has just been raised in a really weird environment. But the idea that you, or any system, has to be trained to see a thing, whether it is a human, a machine learning system or an organisation, remains true of its ability to detect that thing later on. To show that these things apply: if you train a machine learning system only on dogs, it will detect no muffins in this picture.

Then there are the classic case studies we look at from an organisational perspective. The McNamara fallacy is a good one. McNamara comes out of Ford, arrives in his Defence Department job, and he is all about the metrics of the organisation and the measurement of the Vietnam War, and he is convinced for ages that victory is coming his way, because the metrics let him see it that way. It is beautifully described by a man whose name I always mumble, there on the left, Yankelovich. The key step is the third: to presume that that which cannot be readily measured is not really important, and that is blindness. The same problem holds in any organisational system as well.

Now, that might all seem very distant from you, but it happens to you every day. This is a drop-down menu; you will have run afoul of one at some point in your life. This computer, for example, is incapable of seeing that you want to fly with easyJet. That is simply not among the options it can elicit from you, given the selective way of seeing your preferences that it has developed. And I will keep coming back to why these things keep repeating: there are systemic issues here, issues of systems theory and chaos theory, that replicate again and again and again.

And we can see it in ourselves. Hopefully you will be able to see one set of plates the right way up and one set the wrong way up; I will give you a second to have a look at that. The problem, of course, is that there is fundamentally no difference whatsoever between the pictures. What is happening, for those of you who can see it, and it will not apply to everyone, is that you have what is called a prior. You have learnt to see a particular way, and your brain has told you that the shadow falls one way for a plate that is the right way up and another way for a plate that is upside down; it may even have told you that the shadow sits at the bottom of the picture. You have learnt to see it, so in your brain you actually see it as upside down or the right way up. And the way you can tell this is a neurological limitation as much as anything else is that it is not possible to see the plates in both orientations at the same time. You cannot look at one of the pictures and simultaneously see it upside down and the right way up; you will experience this flick in your brain as it goes from one to the other. But that is how you have learnt to see, and it codifies what you are capable of seeing.

The same can be said for this one. Most people will look at it and see three colours, unless perhaps you are colour blind. But of course you know it is a trick, because the balls are all the same brown colour. What has happened is that your brain has learnt to see what it expects.
It represents the change in the wavelength of light striking an object and coming back to you as a colour it expects, so it does not see what is there; it sees what it considers to be useful to you. And this is systemically true, but it comes with problems. What a system is optimised to recognise, which types of information it favours over others, and where its natural focus of attention sits are often not proactively selected choices but accidental defaults. Worse still, because there is often no associated prompt, what a system is blind to is typically reviewed infrequently; sometimes that review is only triggered by a post-mortem or an inquest after the event, which is obviously suboptimal. So every choice you make about how you, or any system you are creating, perceives the world is necessarily also a choice about what you are going to be blind to.

Again, that may sound quite arcane, but attention is an intentional, unapologetic discriminator: it asks what is relevant right now and causes you to notice only that. For example, you will, I hope, be listening to my voice rather than to the faint drone you can hear from the lights. You probably did not notice you could hear it until I mentioned the faint drone. Even if you cannot hear it because your hearing is shot away, you probably did not notice where the chair was pressing into your back until I mentioned it just now. And even as you were thinking about that, you probably did not think about how your tongue feels on the roof of your mouth. You were not conscious of any of those things, is my guess, until you had a prompt to be, because attention has a discriminatory effect on how you process information. This is always true of systems.

It sounds fairly absurd, but you can watch somebody giving directions, and fifty per cent of people do not notice when the experimenters pull the door trick: the person they started giving directions to is not the person they finish with, because who they are talking to is not the focus of their attention. I strongly recommend Googling Daniel Simons's website; you can see the Monkey Business Illusion and loads of other material there. Again, I have given you biological examples, but you could go to the world of machine learning and find a whole array of research looking at attention mechanisms and other forms of attention discriminator as a way of saving processing power and improving how those systems work, which necessarily comes at a cost in terms of what they sacrifice to do it.
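A minimal sketch of that machine-learning sense of attention, dot-product attention as a discriminator; the vectors and the "items in the room" are made up for illustration.

```python
# A query is compared against every item, and a softmax turns the scores into
# weights, so nearly everything except the most relevant item is ignored.
import math

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Weight each value by how well its key matches the query."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    summary = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return weights, summary

# Three items "in the room": the lecturer's voice, the hum of the lights,
# the pressure of the chair. The query encodes what we currently care about.
keys   = [[1.0, 0.0], [0.1, 0.9], [0.0, 1.0]]
values = [[5.0, 0.0], [0.5, 0.5], [0.0, 3.0]]
weights, summary = attend(query=[5.0, 0.0], keys=keys, values=values)
print(weights)   # nearly all the weight lands on the first item; the rest is "unheard"
print(summary)
```

The saving in processing comes precisely from what the weights throw away, which is the cost the talk is pointing at.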
I do not think I really need to give you an example of an organisation's attention and what it fails to see; you can just watch the news.

Now, there is another way of thinking about the why: why are our systems so often like this? There is a fundamental point here about truth and advantage, and they are not necessarily the same thing. Colonel Nathan Jessup would have screamed at you that you can't handle the truth. In the courtroom he was wrong, but if he had been an evolutionary biologist he would have been right. This is the mantis shrimp, which has the most complex vision system in the world. You and I have long, medium and short wavelength cones in the backs of our eyes; the mantis shrimp has 22 different types of receptor. It can see polarisation, rotation of polarisation, circular polarisation, infrared, ultraviolet, anything and everything. The dog has a much better sense of smell than you, but noticeably worse colour perception than us. And the dog is a predator: its ability to detect prey is fundamentally linked to its ability to be a successful species, to reproduce, adapt and survive. So why has the dog not got those cones? Because there is an energy cost to everything you do.

That is an example, but it has actually been backed up by maths. This is work by Donald Hoffman, who works in evolutionary game theory. They put two different types of agent through their game theory programmes: one type perceives the truth, and the other sacrifices truth, accepting less data in order to be more efficient at gathering reward. And you can see the drift in all of the models, and it is towards the agents tuned for advantage rather than accuracy.
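A toy comparison in the same spirit, with numbers entirely of my own invention rather than Hoffman's actual simulations: an agent that perceives the "truth" about every item pays a perception cost for that detail, while a cruder agent only buys a cheap good-or-bad signal, and the cruder perception can still come out ahead on net payoff.

```python
# Truth versus advantage when perception has an energy cost (toy values).
import random

random.seed(1)

def forage(perception, trials=100_000):
    total = 0.0
    for _ in range(trials):
        a, b = random.random(), random.random()    # true values of two items
        if perception == "truth":
            cost = 0.25                             # price of seeing exact values
            pick = max(a, b)
        else:                                       # "coarse": a thresholded cue only
            cost = 0.02
            pick = a if (a > 0.5) >= (b > 0.5) else b
        total += pick - cost
    return total / trials

print("truth-seeing agent :", round(forage("truth"), 3))   # ~0.42 with these costs
print("coarse-seeing agent:", round(forage("coarse"), 3))  # ~0.60: advantage beats truth
```

The particular costs are arbitrary; the point is only that once perception is priced, the most accurate lens is not automatically the winning one.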
And again, we will come back to that. It is also worth understanding blindness, and your inability to see your own blindness, or any system's inability to see its own. So let us have a look at that disaster, the human eye. The image you see comes in through the lens to the back of the eye, where, at the point of focus, the focal length is right and fine features can be resolved. If light comes in at an angle you do not have the same focal length, and your ability to produce a sharp image out there is poor. That is also why most of the cells are at the back of your eye and not around the edges, and you also have a blind spot.

So what does that mean? Your point of focus, as you look straight down the road, is what you think you see when you are driving, because your mind either fills in the rest or tells you not to pay attention to it; you do not notice that you are not looking at the pavement, because you are not looking at the thing. That is what a replication of the eye's focal acuity looks like as you look straight ahead, and it seems fine, if a bit dangerous off to the sides. It is why your eyes dart around all over the place: as you glance at that car off to the side, you suddenly realise that your vision off to the side, while you are looking straight ahead, is relatively terrible. But that is not your lived experience, because there is no value in presenting that truth to you as the organising construct for driving the car, so your brain does not do it.

Here is another bit. Going back to blind spots, you can see here that someone has mapped the left eye and plotted where the blind spot is, about ten degrees to the left of your vision, and you can see where they have mapped the fading out as well. I would invite you to go home and try to find your own blind spot later; you will have to cover one eye, and you can find images like this on the internet. It works because of the distance you are at. If you look at the blue bird on the left and cover your left eye, and I will be interested to see how many people have the nerve to try this in the room and risk looking stupid, then at some point, if you are the right distance away, the red bird will disappear. Just Google something like "bird blind spot" and you can find it. It is also interesting to see what happens from your perspective, because you do not suddenly get a black spot appearing. What happens is that the blue line seems to carry on, it is not very clear, and the sky sort of fills it in; you just become aware of the fact that it does not work very well. You can do other things as well. If you want to know how small your area of perceptive vision is, stick your hand out and stare at the point in the middle: you will find you can count the lines here without moving your eyes, but by the time you get to the ones at the edge, you are not quite sure if it is three or four.

So the area you actually perceive is really small, and that is largely true for most systems, because of the energy cost. What helps you gain competitive advantage is the right balance of abstraction and detail.
208 00:18:53,540 --> 00:18:56,510 You need to know enough detail so that you can tackle your problem properly, 209 00:18:56,510 --> 00:19:01,370 but you need enough abstraction so that you are not wasting energy needlessly. 210 00:19:01,370 --> 00:19:06,770 There's another way of thinking about this so you can think of any agent that makes the decision be that machine, organisation or human. 211 00:19:06,770 --> 00:19:11,240 As an agent that's making a decision, it has a lens through which it sees the world, 212 00:19:11,240 --> 00:19:16,010 which is built up by all the prior or the lessons it has learnt up to this point in how it's constructed. 213 00:19:16,010 --> 00:19:21,770 And through that, it must see the right amount of information in order to gain competitive advantage. 214 00:19:21,770 --> 00:19:25,700 And we'll come back to that model minute to really hammer home a point. 215 00:19:25,700 --> 00:19:31,370 The simplest thoughts and your body involves only three neurones and it doesn't involve your brain. 216 00:19:31,370 --> 00:19:33,170 If you've ever reached out and touch to hot iron, 217 00:19:33,170 --> 00:19:37,500 you'll find your hand snapped back and you probably said to yourself, I'll pull my handbag because I know pain. 218 00:19:37,500 --> 00:19:44,270 But that's not what happened. The simplest thoughts in your body involves three neurones and any goes into your spine. 219 00:19:44,270 --> 00:19:49,290 The first neurone basically works on sort of proteins. Detecting heat sends a message. 220 00:19:49,290 --> 00:19:53,900 Second, you realise it goes to the upper muscles in your arm and then you get a snatch away response. 221 00:19:53,900 --> 00:19:59,490 The reason you have both this and the pain response is to give you two different reward functions. 222 00:19:59,490 --> 00:20:03,910 The. First one is because the longer you leave your hand on the hot plate, the greater the amount of damage. 223 00:20:03,910 --> 00:20:06,610 So speed is the reward function you want, 224 00:20:06,610 --> 00:20:13,180 but any number of times doing that isn't really a very effective treatment to stop you doing it again in the first place because it's stupid. 225 00:20:13,180 --> 00:20:19,780 So you have pain. The pain function actually arrives a lot later than the hand reflex, but I put together in your head in a different way. 226 00:20:19,780 --> 00:20:22,750 So quite often we will end up integrating things that have different reward 227 00:20:22,750 --> 00:20:26,740 responses to different functions that will feel like part of the same system. 228 00:20:26,740 --> 00:20:29,290 So integration isn't mindlessly wiring everything together. 229 00:20:29,290 --> 00:20:34,270 Integration has to be about purpose, taking away from the biological back to the machine learning world. 230 00:20:34,270 --> 00:20:38,530 This is Max Tegmark, who was looking at the theoretical largest artificial intelligence you can have. 231 00:20:38,530 --> 00:20:41,980 And his point was intelligence. Intelligent information processing system. 232 00:20:41,980 --> 00:20:48,100 Going big is a mixed blessing. On the one hand, going bigger lets it contain more particles, which enables more complex thoughts. 233 00:20:48,100 --> 00:20:52,360 On the other hand, this slows down the rate at which you can have truly global thoughts, 234 00:20:52,360 --> 00:20:56,860 since it now takes longer for the relevant information to propagate to all of its parts. 235 00:20:56,860 --> 00:21:01,420 So we've covered the biological. 
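Before moving on, here is a sketch of that two-reward-function idea from the hot-plate example. Everything in it is invented for illustration: a fast reflex loop keeps damage small once contact has happened (its reward is speed), while a slower "pain" signal updates the policy so contact becomes less likely next time (its reward is avoidance).

```python
# Two loops, two reward functions, one apparently seamless behaviour.
import random

random.seed(0)
touch_probability = 0.9      # how likely we are to reach for the hot iron at all
REFLEX_LATENCY = 1           # time steps before the reflex pulls the hand back
PAIN_LEARNING_RATE = 0.3

for episode in range(10):
    if random.random() < touch_probability:
        damage = REFLEX_LATENCY                      # damage grows with time on the plate,
                                                     # so the fast loop keeps this small
        pain = damage                                # the slow signal arrives afterwards
        touch_probability *= (1 - PAIN_LEARNING_RATE * pain)   # learn to avoid touching
        print(f"episode {episode}: touched, damage={damage}, "
              f"p(touch next time)={touch_probability:.2f}")
    else:
        print(f"episode {episode}: did not touch")
```

Neither loop could do the other's job well on its own, which is the talk's point about integrating parts with different purposes rather than one undifferentiated reward.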
So we have covered the biological and the mechanical; now we can cover the people. There is the UN, and here are three people making a decision. I do not really need to point out which of those two is going to be quicker. But I also do not need to explain that if you want a global solution for the globe, you are going to need that level of complexity at the top.

What does that mean for integration? It means you have to disaggregate for victory. You have to parcel up the parts of your system to do specific jobs and then feed only the right information to the rest of the system, not just assume that everything needs to know everything and that everything has to have the same sort of cognitive processing. And this is everybody's favourite from Star Trek, the Borg. Arguably, if all of these principles from other bright people, which are not mine, are true, the Borg would have been useless, because their whole point is that they are homogeneous and all wired together. You would have one lens for seeing all of space, and they would have been incredibly slow; in fact they move like zombies, which seems about right, because a collective wired across all that space and all those bodies would be incapable of reacting quickly and efficiently to problems.

And that goes back to the UN. How does it actually work? It works when it has subcommittees, when it has experts who do the work and present back to the group, which then refers to it and decides whether there is something in that particular field that is going to cause a different problem. So we are back to those lenses, those decision-makers, and passing on the right information.

In terms of passing information correctly, it is not just what you think is important that matters; it is what the tacit knowledge of the two actors is. If you imagine those lenses on the world completely overlap, you can pass on a tiny amount of information and one system will know, from that trigger, exactly what the other means: a single word prompts a whole response. But against that, it also means the two of you have a very, very similar way of seeing the world. It is why you end up with organisations that are in favour of cognitive diversity, trading funds for example: they like to have lots of different viewpoints because it allows them to look at the world through lots of different lenses. Examples always help here. On the parade square you can have a single word of command that orchestrates vastly complex manoeuvres of human beings, but that is because they all have incredibly overlapping tacit knowledge, of a high degree, that has been invested in beforehand.
It is very similar with surgeons, and signalling with flags between naval vessels works the same way and goes back centuries.

So there is a Goldilocks premise to how integration works: you have to find the optimum, and you might start wondering what that actually means. Again, there are other ideas that are really useful here. This is Ross Ashby, who came up with the law of requisite variety. It sounds incredibly complicated, but in its simplest form you can think of it this way: if you want one system to control another system, the controlling system has to be at least as complex as the thing it is trying to control, at least where the two meet. Now, that sounds difficult and abstract, and I have probably lost half of you, but I can show you a picture and you will understand it instinctively. In early computer games somebody runs, jumps, goes left, goes right, maybe shoots something, and because they are very simple you end up with a very simple controller. You could not play most modern games on that kind of controller without some weird scheme of "up three times, down twice, left, then right four times", so you end up sacrificing speed and all sorts of other things. You end up with more complex control systems when you are trying to control something inherently more complex.

That might lead you to think that the right answer is simply to have as many control levers as you possibly can.
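A toy illustration of Ashby's point, my own construction rather than his notation: a disturbance that can take five values cannot always be cancelled by a controller that can only reply with three, while a controller with matching variety can.

```python
# Requisite variety in miniature: only variety can absorb variety.
import random

random.seed(2)
disturbances = [-2, -1, 0, 1, 2]

def regulate(controller_moves, trials=10_000):
    """Return how often the controller drives the outcome back to zero."""
    held = 0
    for _ in range(trials):
        d = random.choice(disturbances)
        best = min(controller_moves, key=lambda m: abs(d + m))
        if d + best == 0:
            held += 1
    return held / trials

print("3-way controller:", regulate([-1, 0, 1]))         # ~0.6: not enough variety
print("5-way controller:", regulate([-2, -1, 0, 1, 2]))  # 1.0: variety matches the target
```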
But that in itself has problems, and I just need to look at a bit of text here so that I do not misquote it before I make this leap. This is the P-38. It was extremely successful in the Pacific, so they sent it to the European theatre, where it was escorting bombers over long ranges, and it did really badly compared with what was expected. There was a series of letters of increasing rage going from the US to the pilots flying these things, which is a bit harsh given they were the ones being shot down, asking why is this all going so badly wrong, are you just inept, what the [INAUDIBLE]? And eventually the commander of the 20th Fighter Group wrote this back. He describes the range of conditions the plane was able to cope with because it had such an array of different control levers: it could now operate in high-altitude, long-range, cold conditions, which was not what it had been doing in the Pacific. But the consequence? He said that as soon as a P-38 pilot comes into a situation where he must engage in dogfighting, he must increase power, get rid of his external fuel tanks and get onto his main fuel. So he reaches down, turns two stiff, difficult gas switches to his main tanks, turns on his drop-tank switches, presses his release button, puts the mixture to auto-rich, which is two separate and clumsy actions, increases his RPM, increases his manifold pressure, turns on his gun heater switch, which he had turned off to save the battery because otherwise it will not last the flight, and which he must feel for and cannot possibly see, and turns on his combat switch. He is now ready to fight, at which point he has invariably been shot down.

So every time you add a control lever you add a training burden, and every time that control lever must be used you add a burden in the moment, if it is a significant deviation from normal action.

This looks like a slightly ridiculous picture, but I strongly recommend going and watching David Silver's talks on reinforcement learning. David Silver was behind AlphaGo's wins and works for DeepMind, and his point is really simply that you have an observation, it goes into an agent that makes a decision and takes an action, and the control system has to have the right level of complexity to cope with the environment correctly. From that action you gain a reward, and what the reward then does, if it is an adaptive system, is change both of those things. So not only do you need to think about how you are integrating the system as you do it, you need to examine what your real reward functions are, not just what you say your reward functions are. There is also a paper, which again I would recommend, whose premise, and it got a lot of attention in the artificial intelligence community, is that you can reach any form of intelligence simply through the correct application of reward. It is interesting to watch people argue about architectures and all that sort of thing, which Silver and colleagues argue can be reached by simple reward functions that generate those architectures. It is not really a new idea: if you think about all the intelligences we experience in the world anyway, they all arrived here through the reward function of natural selection in the first place. So definitely go and watch David Silver's lectures and think about the simplicity of the setup, and how all that chaos boils down to this simple idea.
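A minimal sketch of that observation, agent, action, reward loop. The environment and the learning rule are invented toys, a two-armed bandit and a running value estimate, chosen only to show the shape of the loop, not anything DeepMind actually built.

```python
# Observation -> agent -> action -> reward, with the reward reshaping the agent.
import random

random.seed(3)
true_payoffs = [0.3, 0.7]            # hidden environment: the second arm is better
estimates = [0.0, 0.0]
counts = [0, 0]

for step in range(1, 1001):
    # agent: mostly exploit its current lens on the world, occasionally explore
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = 0 if estimates[0] >= estimates[1] else 1
    # environment: produce a reward for that action
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    # reward changes the agent: update the estimate for the chosen action
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned estimates:", [round(e, 2) for e in estimates])  # approaches [0.3, 0.7]
```

The interesting question the talk raises sits outside this snippet: whether the reward you wrote down is actually the reward the system ends up optimising.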
Now, I have shown you this picture a few times, but if you are integrating anything, and it is not necessarily a person, it could be a technology, an organisation, any of those things, they all exist within systems of systems, so it is always going to be inherently complicated. Your reward system matters in the sense that whatever a subsystem is doing, if it is adapting, or you are adapting it, you are going to change it with respect to some reward function, and that reward function has to serve the rest of the organisation as a whole.

And in case you did not think that was complicated enough, you now have to throw in time as well. I will give you one last idea to play with, which is the idea of pace layering. This is really interesting. One of the places it was first dreamed up, independently, was Brian Eno's music studio, of all places, but you will find it in architecture, in neurology, in a bunch of different places. Eno's is a fun one, so we will take that. The idea is that things run at different paces and that they are interlinked. The notion that you can have a system where everything works on the same clock, where a second is the same for everything and every subsystem works at that pace, does not really hold. So fashion sits at the top, with the argument that it changes constantly; it is this chaotic, turbulent thing. But there is no way fashion can exist without the commerce to support it: you cannot buy clothes that nobody makes, and you can only buy them from shops that exist. In that sense fashion provides information which changes how commerce works, and if you are unaware of fashion and you are a clothing line, you will not have a commercial business for very long. So there is an interplay between the two; you can think of it as a turbulent interface. Commerce in turn drives infrastructure, and we are seeing that now in the closing of high streets, because commerce has changed and it is changing where people buy things. That infrastructure ties into governance, which then leads back to culture, and then, fundamentally, nature. You can imagine each of these layers moving at a different pace. These are indicative layers, and you will find all sorts of things when you peel back the onion of whatever organisation you are looking at.

Just to stay with the infrastructure and governance relationship: this is a picture of Boston in 1860 on the left, and here it is in 1981, and only one building is the same. But what is notable about the pictures is that every single road is the same. So the infrastructure changes, but the thing that determines where the roads go is not an infrastructure problem, it is a governance problem, and it runs at a different pace.
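A toy sketch of the pace-layering idea; the layer names come from the talk, but the update rates are mine and purely illustrative.

```python
# Each layer only changes on its own clock: same simulation, very different paces.
layers = {"fashion": 1, "commerce": 5, "infrastructure": 25, "governance": 125}
changes = {name: 0 for name in layers}

for tick in range(1, 251):
    for name, period in layers.items():
        if tick % period == 0:
            changes[name] += 1          # this layer takes one step of change

for name in layers:
    print(f"{name:15s} changed {changes[name]:3d} times in 250 ticks")
# fashion changes 250 times, governance only twice: one clock, four tempos.
```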
You can think about these things in other settings too. I picked that one, but you could also think about it in a biological way. Here we have priors: what you have learnt previously has shaped the priors that determine what your vision system sees, and now, when you see this, you will know more about the picture, so you will tend to interpret it differently in future. It also occurs in the brain at a different level. As I tell you new things, your neurones fire differently to recruit and record that information, but that is not the sum total of how the brain adapts. This is a terrible picture, but it is from Livewired by David Eagleman, which I recommend, and these are scans of the changes in brain structure of different musicians. As your brain learns things electrically, it is also releasing chemical signals that tell it where to grow new neurones. You can see it in violinists, who end up with this sort of omega-shaped twist here on one side, because that hand is significantly busier than the other, while pianists, because they have equally busy hands, end up with a wholly different brain structure. The idea is that these things occur in organisational systems and biological systems alike; they exist everywhere.

So I would leave you with four things to take away for the next time someone starts talking to you about integration, which will just annoy whoever you are talking to, because they will want a nice, simple answer. First, you have to decide what your problem and your environment are. Is it amenable to that glass-battlefield, god-like move? There will be lots of problems that are, and in those instances you should absolutely use deterministic planning structures, you should absolutely use machine learning with a high degree of confidence, and you should absolutely use a highly procedural organisational structure. But if your system is prone to complexity or chaos, or there are other reasons why that first model does not fit, you will need the Goldilocks form of integration. You need the right amount of truth going through your system; you need to balance the amount of tacit knowledge in the different systems that are communicating; you need to balance how much information is passed from one to another. You also need the right diversity: enough lenses looking at your problem, and lenses suitable to your problem. If your job, for example, is automatic number plate recognition for the police, you do not need neurological diversity in how the number plate is read, because one lens is great.
It works really well. But if you are starting to think about more complex things, like how we organise social policy for government, you absolutely need an array of different ways of looking at the same problem.

And the final one, which I think is probably the thing that we, across all organisations, tend to do worst, is how we look at the different tempos of feedback that occur within a situation. Quite often we do things which we tell ourselves are about time but are not really about the design of time. Presented with a complex problem, it is common for people to plan for the length of time they have been given to do the planning, realise the time is up, and announce, "I have a solution." What they do not have is a solution they are confident in; what they have is the solution they reached in the available timeframe. And then they might be working within a structure that says: now I do understanding, now I do direction, now I do orchestration, and later I will come back and do some sort of after-action review, and then I will start again. If that is really monolithic, then in the course of analysing the problem you are not necessarily thinking about which bits of the system, which bits of the problem, are going to change really fast. If a thing changes faster than my ability to turn the cycle around, that is not great. And if you decide you have to do everything at the speed of the fastest component in the target, you are going to be stuck in a cycle that chooses to be blind to long-term effects. You will have said, "I am really interested in how mayflies move", but you will never learn how mayflies evolve, because the timeframe over which they evolve is not the timeframe over which you are looking at them.

So, to finish, I am going to do two plugs. One: if you are interested in any of this batshit craziness and want to go further, Colin Williams is running a colloquium on the history of this field on the 23rd; his email is at the bottom there, so either take a photo now or come and find me. And the other thing: not out now, but coming out hopefully in the next six months, is what is definitely the most anticipated book in this area, which will probably be available through CCW and definitely available through me in about six or eight months' time, and there will be lots of other bright people saying more intelligent things in it than me. Thank you very much.

So, you know, my brain is teeny after that.
I also love that idea that once you are taught something new, physiologically you have changed somebody's perspective, because that brain itself has changed, not just what the person thought.

Let me pick up on one idea, because you told us a little bit about framing, about how important diversity of experience is, and how more diverse groups solving complex problems do better. One thing that really struck me is that, as a species, we are a bit rubbish at learning from mistakes, and that troubles me. Let me give you a solid example, because that is a bit abstract. When the Mongols were busy invading Central Asia, they came across Khiva, and the Khivans had a really decent-sized military force, a mobile one in fact, and they could move that mobile force out from the city along the river and basically hold the region quite successfully. The Mongols figured out that they just needed a particular device: slow, slow, quick, quick. Those of you who know the history of the region know what happened. The Mongols moved in from the north very, very slowly; they knew the Khivans would pick up on it, that they were doing this perception thing and could observe it. The Khivans came out, the Mongols drew away, and the Khivans congratulated themselves on how wonderfully they had seen off the Mongols. Then, a few months later, the Mongols came very, very slowly from the south, and the Khivans did the same thing again: they moved out, and the Mongol column moved away. And while the Khivans were now committed to these two different locations, and weeks had gone by, and they were busy feasting and writing poetry about how wonderfully they had done, the Mongols came in quick: in under 24 hours they crossed the river in an unexpected location, did a sort of two-stroke double tap, and Khiva was destroyed. Now, everyone else in the Middle East saw what had happened, and nobody learnt the lesson, so the Mongols did it again and again and again. That is the existential, catastrophic end of the scale, but you can see examples of this that are very personal, where people repeat the same mistake several times over. What has gone wrong with us as a species that we sometimes seem unable to learn from our own experiences, or from these sorts of things?

My instinctive response would be that there are two key factors that relate to that.
One is that when it all goes well, it tends to be unremarkable and therefore is not recorded. Every time the Mongols turned up somewhere, found a force ready for them and turned away, it probably went untold: this was one of the slow ones, don't tell anyone. And it would not appear as remarkable as the destruction of Khiva. So there is this tendency for catastrophic mistakes to be retold in more detail and more interestingly, which means we have a kind of sampling bias problem, and that explains it to an extent.

The other thing that quite often happens goes back to the idea of reward functions. Robert Miles is a guy who has a YouTube channel on AI safety, and he talks about meta-objectives and reward functions. One of the classic problems is the monkey's paw, or the sorcerer's apprentice: I asked it to do something, and that is exactly what it did. You get that in systems where you tell somebody you want something. If you jump across to an organisational perspective, you get to things like Goodhart's law. The Minister of Health said that they did not want to see anybody wait longer than 24 hours for a dental appointment, and the only way they could measure that was by counting the number of appointments that occurred more than 24 hours after booking. What that actually produced was a reward function inside dental practices whereby receptionists refused to take any booking more than 24 hours out, because now they would get a negative reward, they would get punished for it. But that is not the same as making more dental appointments available. So I think sometimes our inability to learn is tethered to an inability to correctly recognise what our reward functions are.

There are all sorts of ways you can look at institutions like this too: that person is engaging in truly appalling politics, but on the other hand they have been really successful within their own political party. So under what criteria are they judging whether this is a catastrophe or a mistake? It might even be the case that, from a personal-advantage point of view, it gets them all the toys they want and all the plaudits, and yet it is bad for the system of which they are part and in whose benefit they purport to act, and quite often they will be blind to that mistake. So we are not just talking about wrongness; the reward function and the selection bias are where I would focus.
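A toy model of that dental-booking story as a Goodhart problem, with all numbers invented: the target "no appointment more than 24 hours after booking" is met perfectly by refusing the bookings, while the thing actually cared about, appointments delivered, gets worse.

```python
# Gaming a target metric versus serving the underlying goal (toy data).
import random

random.seed(4)

def week_of_requests(n=200):
    """Each request wants an appointment some number of hours in the future."""
    return [random.randint(1, 120) for _ in range(n)]

def honest_policy(requests):
    booked = requests                           # take every booking
    breaches = sum(1 for h in booked if h > 24)
    return len(booked), breaches

def gamed_policy(requests):
    booked = [h for h in requests if h <= 24]   # refuse anything beyond 24 hours
    breaches = 0                                # the metric now looks perfect
    return len(booked), breaches

reqs = week_of_requests()
for name, policy in [("honest", honest_policy), ("gamed", gamed_policy)]:
    booked, breaches = policy(reqs)
    print(f"{name:6s}: appointments booked={booked:3d}, >24h breaches={breaches}")
```

The gamed policy wins on the measured reward and loses on the real one, which is exactly the blindness to our own reward functions described above.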