In this episode, the Changing Character of War programme welcomes Dr Kenneth Payne, a senior lecturer in the School of Security Studies at King's College London. His talk explores the evolutionary basis of strategic behaviour, from chimpanzees to computers, teasing out the connections between warfare and human evolution.

So the colour coding here: the book comes in three parts. The first part is all about evolution — it fixes on evolution — and that's where my thesis is. The second part is a bit about culture, which I'm going to gloss over in just three or four slides. And the last part, probably the bit of most interest to you, is about artificial intelligence and the ways in which artificial intelligence will shape strategy.

The overall thesis of the book, in a nutshell — and everybody likes to be kind of reductive about revolutions in warfare; think of the Tofflers, for example, who posited three revolutions in human history, the agricultural revolution, the industrial revolution and the information-age revolution — is that I would bid, against the Tofflers, for two revolutions: a cognitive revolution early in human history that, I argue, set up the basis for some underlying and enduring principles of strategy, in the sense that most of us would understand strategy; and then an artificial intelligence revolution, just getting under way, that challenges the evolved cognitive psychology that has been a long-standing part of our human history. So, two revolutions: the evolution of a cognitive package to do with language and theory of mind, and then, latterly, a revolution in artificial intelligence.

So, part one, the bits about evolved psychology. My thesis is that there is a dynamic relationship between strategy and evolution. That is to say, the need to think strategically was a strong factor in shaping our human evolution, and the way in which we evolved in turn placed constraints on how we did strategy. Hopefully I don't need to say too much about what strategy is, but in a nutshell, for those of you who might not be familiar with the term, I'm talking here primarily about military strategy: the use of organised social violence in pursuit of some political goal. The next part of the thesis is not mine alone — others have done some tremendous work on it — but the general claim is that group size has an important effect on winning or losing in combat, and that the need to cooperate together in progressively larger human groups has been one of the fundamental driving forces shaping human evolution.
I got started on this project reading a book that I commend to you all, from the late seventies and early eighties: de Waal's Chimpanzee Politics. If you haven't seen it, grab a copy — a highly entertaining account of life and death in a chimpanzee colony in Arnhem, and of the "games", in inverted commas, that the chimpanzees played in order to get ahead and assume a dominant position within the colony's hierarchy. De Waal, at the end of the book, asks an interesting question: do chimpanzees think strategically? And he answers that yes, they do think strategically — but, in what I think is a slight oxymoron, they do so instinctively. He doesn't necessarily suppose a particularly sophisticated degree of self-reflection, or reflection about the minds of others, on the part of chimpanzees; much of their strategic behaviour is innate to them, instinctively acquired.

Not that I want to disagree with one of the most eminent primatologists in the world, but I think that is something of a disservice to chimpanzees, and the view on chimpanzee cognition and theory of mind has moved on since the late seventies. On the one hand, he does chimpanzees a disservice because they are capable of thinking about the intentions of others — not to the same degree that we are, but to a certain degree. And on the other hand, he does humans a disservice, because the supposition that we think in rational and calculating terms about strategy is, I think, unfair. We do a lot of unconscious, instinctive, heuristic-type thinking about strategy, just as the chimpanzees do.

So de Waal set me going, and the question was: do animals fight wars? And if they do, do they fight wars in the strategic sense — do they make use of violence in an instrumental and purposive way? My answer was that ants fight pretty totalitarian wars of entire destruction against other ant colonies and other species, but they do so on a fairly automatic, situational basis: they're responding in an evolved fashion to instinctive cues from the environment about how they ought to behave. Chimpanzees, on the other hand, certainly fight wars — I'll come on to that in a second — and they do so with a little bit more of what we might call strategic thought, a little bit more conscious reflection on what they're doing.

So that was my starting point. I won't say too much about the middle part of the book. The middle part is a dilemma that comes to anybody looking for general patterns in strategic behaviour: you've got the whole gamut of human history.
What can you say about strategic behaviour that unites strategy across all these different varieties and cultural circumstances? That's a tension the classic works wrestle with, of course: the tension between historicism — something particular to the moment — and theory, the ability to say something general across all periods, or multiple periods. The second tension is between historicism — the local, the particular — and History with a capital H, the idea that history is driving forward towards some particular purpose. That's the tension I wrestle with in the middle part of the book, the bit about culture, which I'm not going to spend too much time on here today. And then lastly, the last part of the book: can artificial intelligence do strategy? I'll save that for the end.

So, to kick it off, we come to chimpanzees and their ability to think strategically. This is a paper from 2014 in which chimpanzees demonstrated an ability that might have appealed to social scientists from the middle of the twentieth century: to win a game of strategy. The game they're playing here is a little bit like paper-scissors-rock: you have to guess what your adversary will play, and you can't actually see them. The chimpanzees here sit back to back, so they can't see what the adversary is choosing. One of them is tasked with matching the shapes: if he matches the shape the other chimp throws, he wins. The other is tasked with mismatching the shape the other chimp raises, so he knows he's got to think about what the other chimp is going to throw and then produce a mismatch. So you've got a strategic game to play — a game in which they've got to try to guess what the other player is going to do and behave accordingly.

Chimpanzees outperform humans at this game, which, if you were a scholar of the standard social science model of the rational actor and expected utility payoffs, would be a puzzle. Why are the chimpanzees so good at it, and why are the humans less good? And if you change the payoffs, or you change which player is the matcher and which the mismatcher, the chimpanzees change tack very quickly and start behaving as the change suggests they should on a mathematical basis. They come very close to the optimal mathematical solution; humans are comparatively hopeless at it. The question is why, and what does that mean? The answer, for me, is not that chimpanzees are better strategists; it's that they're pursuing a different strategic logic. The chimpanzees are responding more instinctively to the payoffs. The humans are trying to second-guess what the adversary is doing: humans are making their strategy using theory of mind, trying to read the mind of the other player, and the chimpanzees aren't playing that game — they just want the sweets they get for winning. And it's the over-complication that we bring to it, trying to read what the intentions of others are, that produces, in this narrowly defined game, a suboptimal outcome. So chimpanzees are more rational actors than humans are in that game.
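To make the point concrete, here is a minimal sketch of a matching/mismatching game of that kind. The payoff numbers are purely illustrative — they are not the ones used in the 2014 study — but they show that the optimal mixed strategy is a property of the payoffs, and that it shifts when the payoffs shift, which is roughly what the chimpanzees tracked.

```python
# Illustrative sketch only: mixed-strategy equilibrium of a 2x2 zero-sum
# "matching" game (a cousin of matching pennies). Payoff numbers invented.

def equilibrium_mix(payoffs):
    """payoffs[i][j] is the matcher's reward when the matcher plays option i
    and the mismatcher plays option j (zero-sum: the mismatcher gets the
    negative). Returns each player's equilibrium probability of option 0.
    Assumes the game has no pure-strategy solution."""
    (a, b), (c, d) = payoffs
    denom = a - b - c + d
    if denom == 0:
        raise ValueError("degenerate payoffs: no unique mixed equilibrium")
    p_matcher = (d - c) / denom      # matcher's weight on option 0
    q_mismatcher = (d - b) / denom   # mismatcher's weight on option 0
    return p_matcher, q_mismatcher

# Symmetric rewards: the best anyone can do is randomise 50/50.
print(equilibrium_mix([[1, 0], [0, 1]]))   # -> (0.5, 0.5)

# Triple the reward for matching on option 0 and the optimum shifts:
# both players should now pick option 0 only a quarter of the time.
print(equilibrium_mix([[3, 0], [0, 1]]))   # -> (0.25, 0.25)
```

The study's finding, roughly, is that the chimpanzees' choice frequencies ended up close to equilibrium values like these, while the humans' drifted away from them.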
Chimpanzees, we know, are evolutionarily close to us — we parted company with them some six million years ago — so they're a very useful comparator for thinking about strategy. And in answer to de Waal's question, do chimpanzees fight wars: when he wrote that in '79, I think the Gombe Stream war had just ended. I'm not sure whether Goodall had actually published about it yet, but the dominant view at the time in anthropology, and actually in primatology as well, was still that chimpanzees were fairly peaceful, fun-loving kinds of characters. Since when, that view has changed almost 180 degrees — thanks in no small part to Richard Wrangham, the famous primatologist at Harvard. It turns out that chimpanzee life is extremely violent, both inside the group, when individuals are struggling for dominance, and also between groups, in these chimpanzee wars. There's a paper by Wrangham from around 2006 that compiles all the available evidence about how many wars were fought and what the levels of casualties were. It turns out that chimpanzee life inside the group is much, much more violent than Detroit in the 1970s, and that the level of warfare between chimpanzee groups — allowing for the small sample size — was quite likely on a par with hunter-gatherer communities. We'll see what that looks like in a moment.

Here are some chimpanzees behaving, you might think, strategically — perhaps in response to innate, subconscious inclinations. They're conducting a patrol here, into the territory of neighbouring chimpanzees. And the tactics, when it comes to waging a war like the Gombe Stream war, are the raid and the ambush: they typically attack at something like ten-to-one force ratios, and if they don't heavily outnumber their adversary, they'll stage a silent retreat and back away until such time as they do.
Wrangham's theory is that this behaviour — which contrasts markedly with that of bonobos, who don't tend to fight wars and spend a lot of time having a good time — comes down to the distribution of food, which forced chimpanzees to forage in small, isolated parties and made them particularly vulnerable to predation of this kind by their fellow chimpanzees. Because they had to fission into small groups to go foraging for food, they could very quickly be outnumbered and defeated by other chimpanzees.

The question is: is that strategic behaviour or not? On the one hand, the chimpanzees are capable of waging a sustained war against another group of chimpanzees. In the Gombe Stream war they annihilated the other group — virtually every last chimpanzee; I think a couple of females were left over and absorbed back into the original population — and they waged it over a period of years. You might say, well, that's not necessarily strategic behaviour: they're following an evolved inclination to conduct these patrols and stage these ambushes — and I think you'd probably be right about that. For me, the big difference that makes our behaviour more strategic, in the sense we're familiar with, is our capacity for theory of mind — for intuiting what other people want, and communicating on the basis of that in order to derive an effective strategy. So I start with chimpanzee wars and I say: yes, on the one hand they can do instinctive strategy, but there's something different about humans, and what's different is our ability to communicate and to plan for an imagined future.

And the basic theory I elaborate in part one is that there was enough violence in the hunter-gatherer world in which we evolved, for almost all of our human history — a sufficiently high level of violence, as in that chimpanzee world — to force us to become effective co-operators. We have to cooperate; otherwise we go out of business. That's essentially the theory.

So, you might know Robin Dunbar's work here — you may have heard of Dunbar's number? He's not at the back, I hope; I've never met Robin Dunbar. Dunbar's number: the size of your neocortex correlates very well with the size of the social group in which you live. The larger the neocortex, the theory goes, the greater your ability to track the key essentials of life in a social group — and those essentials are basically who's up, who's down, and who's shagging whom.
These are the things you want to be able to keep track of. And if you plot that neocortex-to-group-size relationship for primates like chimpanzees, you can extend the line outwards to humans and come up with what's called Dunbar's number. Dunbar's number for humans is about 150, and that's the size of hunter-gatherer community in which we spent a lot of our early human evolution. It's a very rough rule of thumb, because the people you'd actually hang out with on a day-to-day basis would be nearer fifty — your kinship band, people you'd primarily be related to — and you'd come together in the larger gathering of 150 for an overnight camp, perhaps for reasons of security. Dunbar's number is one aspect of the social theory of cognitive evolution, which says that a lot of our cognitive functions evolved to help us navigate this complex, increasingly co-operative, interdependent world of fifty to 150 people with whom we would spend most of our time. That's the group we hung out in.

The second big theory to think about is Frederick Lanchester's mathematical laws of armed conflict, dating to 1916. Anybody familiar with those? To be clear, without stretching the point: this is an abstraction, a mathematical abstraction; it's not meant to correlate precisely to real-world behaviour. But Lanchester's argument was that the benefits that accrue from being in a larger group are disproportionate to the size of that group. That's what he called the square law. If fighting is structured so that it's a melee — basically an all-against-all — then the fighting power of the group, all other things being equal, will increase geometrically relative to the size of the group: the advantages of being in the larger group increase geometrically. If fighting is structured so that it's one-on-one — imagine a series of duels at dawn, or imagine, as I do later in the book, hoplite warfare, where the formations force you to fight one-to-one rather than in a general melee — then it still matters to be big, but it matters in a linear fashion. So provided fighting is unstructured, or there is a capacity to concentrate fire — that renowned strategic principle of concentrating force where you can — then fighting power increases geometrically.
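Lanchester's point is easiest to see with numbers. A minimal sketch, with invented figures — two hundred against one hundred, every combatant equally effective on both sides:

```python
import math

# Toy illustration of Lanchester's 1916 abstraction; the numbers are
# invented and equal individual effectiveness is assumed on both sides.

def square_law_survivors(big, small):
    """Concentrated, all-against-all fire: each side's loss rate is
    proportional to the *number* of enemy shooters, so the squared
    strengths are what matter and the larger side keeps
    sqrt(big**2 - small**2) of its force."""
    return math.sqrt(big**2 - small**2)

def linear_law_survivors(big, small):
    """One-on-one fighting (duels at dawn, hoplite files): attrition is
    pairwise, so only the arithmetic surplus survives."""
    return big - small

print(square_law_survivors(200, 100))  # ~173 of the 200 left standing
print(linear_law_survivors(200, 100))  # 100 of the 200 left standing
```

That disproportionate payoff to mass under the square law is why concentrating force — and the chimpanzees' ten-to-one raids — makes mathematical as well as intuitive sense.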
So the question is whether there was enough fighting in hunter-gatherer communities, early in human evolution, to force the agglomeration of humans into larger groups — and in larger groups, perforce, you have to live alongside people to whom you're not directly related, and you have to find a way of cooperating with strangers. The claim I make in the book — and I'm not alone in making it; plenty of other people make it as well — is that yes, there was enough fighting in hunter-gatherer times to force people to cooperate together. So the result, I suggest, is that there was sufficient violence early in human history to put pressure on humans to start cooperating with people they might not know, and Lanchester's law provides a mathematical formulation of why. That is to say, the prevalence of war solves what's known as the free-rider problem. The free-rider problem is essentially a problem of loafing: if you get enough people together, there will always be some who slack off. And if it's a matter of fighting somebody to the death, I personally would be on the side of the slackers. So what is it that's going to make me cooperate, that's going to stop me from slacking off? One answer is that the prevalence of war forces cooperation, and forces scrutiny of how people are behaving within a particular group.

The problem to be solved, you'll be aware, is the classic problem of the prisoner's dilemma. The payoffs in the prisoner's dilemma are set up such that non-cooperation tends to be the favoured option, even though cooperating would be better for you both. The solution to that problem, as Samuel Bowles argues, as Peter Turchin argues, is that there was sufficient violence to force the weeding-out of non-co-operators and so to force cooperation. And you can see echoes of that in a couple of areas of research. The basic theory of cooperation in an evolutionary sense is from Robert Trivers. Trivers argues that we cooperate with non-kin on the basis of reciprocal altruism: if I scratch your back, I do so with the supposition that in years to come you might scratch mine. I will behave altruistically towards you on the assumption that I'll get some payback later on. That's the basic theory of reciprocal altruism, and it requires you to make a pretty good bet about the probability of being paid back at that future date. The incentive, of course, is then to cheat — to get away with having your back scratched without returning the favour, so to speak. Dan Ariely and Trivers himself have demonstrated that very ably: Trivers in the world of animals, and Ariely, in some fascinating experiments on his students, showing that our inclination is to cheat and then lie about it. What Ariely also showed, which is quite interesting, is that we lie about it not just to the people we're cheating — we lie about it to ourselves. We have the capacity to persuade ourselves that we are good, honest co-operators even while we're cheating.
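The payoff logic at issue can be made concrete with the standard illustrative numbers — nothing here is from a particular study; it is just the conventional toy version of the dilemma:

```python
# Toy prisoner's dilemma with conventional illustrative payoffs.
# PAYOFFS[(my_move, their_move)] = (my_payoff, their_payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_reply(their_move):
    """What a narrowly 'rational' one-shot player should choose."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Defection is the best reply to either choice the other player makes...
print(best_reply("cooperate"), best_reply("defect"))   # defect defect
# ...even though mutual cooperation (3, 3) beats mutual defection (1, 1).
```

The experimental point that follows is that real people — and even real prisoners — cooperate in one-shot games far more often than this table says a self-interested player should.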
So we can see that the natural inclination is not to be a freely cooperative member of society, even though the pressure of war suggests that we should be. But because of that pressure of war, I argue, there is still some inclination to cooperate — and you can see it when you run real-world experiments with the prisoner's dilemma. In game-theoretical terms, the incentive is to cheat and not to cooperate. In practice, if you give the prisoner's dilemma to normal people, they tend to cooperate, even where it's a one-shot game, with no possibility of playing the same person again — so you can't build a reputation for trust that might shape your behaviour. Even in a one-shot game with a stranger, we have a tendency to cooperate, and the argument is that that tendency evolved within us as a result of, as I say, this pressure of warfare. In fact, even if you give the prisoner's dilemma to genuine, real-life prisoners, who ought to know better, they cooperate more than the dilemma suggests they should.

So the evolutionary picture I take out of the first part combines this notion of reciprocal altruism with Lanchester's idea that victory goes to the big battalions in a disproportionate sense, and comes up with what I think is the dominant approach to strategy, which is to be strong everywhere — if you're strong everywhere, of course, strategy is very easy. The strategy of the strong involves scale, mass and concentration of force. As for the Voltaire quote I gave you a moment ago — victory doesn't belong to the big battalions, it belongs to the best shooters — Lanchester would disagree, and I would suggest the evidence from evolutionary history is against it as well: victory generally does go to the big battalions. As Stalin said of the Pope, how many divisions has he got? So there is still that in-built supposition that mass pays off in the end. But, you might say, even if you're strong you don't want to be a muppet: you want to combine the strategy of the strong — being strong everywhere — with some sensible tactics. And that's what the chimpanzees do in conducting a raid and an ambush rather than a pitched battle against the same number of adversary chimpanzees.
What they're doing, effectively, is engineering a local force ratio that Lanchester would say is dominant: even the strong use the raid and the ambush. And if you look at the book War Before Civilisation, that pattern is very strongly evident across ethnographic studies and archaeological studies: the dominant mode of warfare is the raid and the ambush rather than the pitched battle. If you're on the weak side, that's when strategy gets really interesting, because you have to find a way of offsetting the advantages that accrue to the other side. So you adopt all sorts of approaches: surprise, deception, fortification, manoeuvre. And if you are "the Borrowers", as the Americans call the British Army, you come up with the British way of warfare, which is to try to be more flexible [INAUDIBLE].

Max Roser — if you don't know Max Roser's work, you should. He's a statistician at Oxford, and he's produced extensive databases on all sorts of fascinating stuff: the world in data and figures. His general motif is that the world is getting to be a better place, which is not always how it feels a few days out from an American election. But he has compiled the most comprehensive collection I've seen of archaeological evidence for the prevalence of warfare in hunter-gatherer communities, and of ethnographic studies of warfare in hunter-gatherer communities. On the right here — you can't make out any of the detail — are violent deaths per 100,000 in state societies. Down at the bottom is the world in 2007: that's the [INAUDIBLE] pit we live in at the moment — battle deaths of 0.33 per 100,000 people for the world in 2007. Whereas if you look at this hunter-gatherer community, studied in the 1840s in California: 1,450 deaths per 100,000. Not every hunter-gatherer community is the most violent, but they tend to have many more violent deaths than do modern state societies. So: was there enough warfare over the course of human history to drive cooperative behaviour and weed out free-riders? The answer, for me, is yes, and that's the claim I take out of the first part of the book.

I then spend a bit of time thinking through the implications of that for human cognition and for the heuristics we've developed to help us think strategically, though I won't go into any of that here. That's part one of the book: in an evolutionary history of warfare, it pays to be big.
There was enough fighting around to force us to agglomerate in large groups, and to force us to learn patterns of cooperative behaviour — and shortcuts for thinking about whom we should and shouldn't trust — that allow us to think strategically. In part two of the book I take that forward and see if it holds in the cultural era. The answer is that it kind of does, but stuff complicates it; for reasons of time I'm not going to get into it all. I use three fairly well-trodden examples. The first is about the Greeks and hoplites, and there I talk about the invention of writing, and particularly the invention of strategic studies as a separate way of thinking reflectively about war. The takeaway from that study of the Greeks is that writing provided a first example of artificial intelligence, if you like — a means of offshoring part of our cognition outside our brains. I won't say any more about that example. The second is Clausewitz and Napoleonic warfare; again, I'm not going to say too much about that. I will say a little bit about nuclear weapons.

The takeaway from the middle part of the book is that Lanchester's law breaks down in the cultural era, because fighting power is not just about the number of men you have at your disposal and their ability to concentrate force. It's also about technology — in the cultural era, technologies of warfare, weapons technologies, but not just weapons technologies. Technology screws with Lanchester's abstract thinking about the importance of scale when it comes to strategy. And the apotheosis of that, of course, is nuclear weapons, where a single bomber can inflict as much damage as a thousand-plane bombing raid on Tokyo. So scale, once you have nuclear weapons, becomes less of an issue — though of course you need scale to develop nuclear weapons in the first place.

I said at the beginning that I posited two revolutions in military affairs: one, the cognitive revolution — the flexible package that let us construct groups flexibly in order to deal with violence in the world — and two, artificial intelligence. One of the other contenders for a revolution in military affairs that changed everything is nuclear weapons. Robert Jervis, as many of you will know, said that nuclear weapons were a revolutionary development in warfare — the revolutionary development in warfare. Bernard Brodie, the American strategic thinker, on hearing about the detonation of the first nuclear bomb, said that he thought it made everything else he had ever written redundant and obsolete.
He thought the entirety of strategic history had been made redundant and obsolete, because nuclear weapons transformed so much. I think he was wrong — and I also think Jervis is wrong, in the sense that I don't think nuclear weapons are revolutionary. They're an important development in the history of warfare, for sure, but I don't think they're the revolution, and the reason is that, at bottom, strategy even with nuclear weapons remains a matter of our evolved human psychology. We are still making decisions about the employment of nuclear weapons using precisely the same cognitive architecture that we evolved with over the tens and hundreds of thousands of years of our human evolution. So all the factors in nuclear strategic theory — escalation, the need to demonstrate credibility, miscalculation, the scope for misperception — are subject to the same evolved cognitive and psychological foibles that we brought with us before.

Jervis said that nuclear weapons were revolutionary on the basis that they transformed the balance between denial and punishment. You have a nuclear standoff through mutually assured destruction, and the threat of retaliation is how you use the weapon: it's the threat of using it, rather than its actual use, that delivers your strategic result. And what he said is that there's no point trying to deny nuclear weapons — you can't defend against them — so your strategic use of nuclear weapons relies entirely on the threat of punishment.

I'm not entirely sure that is revolutionary, firstly because denial and punishment both featured before the advent of nuclear weapons. You could certainly deny through the construction of fortifications or the use of terrain, and you could also threaten punishment through retaliatory attack — you didn't need nuclear weapons for that. Nuclear weapons gave you an assured ability to punish, but punishment was certainly a feature of strategy before them. So I'm not sure, on Jervis's account, that nuclear weapons are particularly revolutionary. That's all I'll say about them here, for reasons of time; I want to get ahead to the artificial intelligence, because I'm aware that's probably why most of you are here — not to hear me rambling on about chimpanzee warfare.
So, in very quick order: my headline for artificial intelligence is that it is a revolution in military affairs — a revolution in strategic affairs — precisely because it challenges the underlying evolved psychological abilities that we have brought to bear on strategic problems since we emerged on the planet, and even before, in antecedent species.

Here are some examples of artificial intelligence from recent years. Let's go with the easy one: do you know what's going on there? Google's AlphaGo beating Lee Sedol, then perhaps the world's leading Go player, four games to one. We very quickly adopt hindsight bias with achievements like that, and assume that of course it beat him. Before it did, pretty well every AI expert outside DeepMind was saying it couldn't be done — couldn't be done for twenty years — because they'd all been trying. Go, for those of you who don't know it, is an ancient board game. It's a very simple, constrained — what an AI developer would call "toy" — universe; it's not like the complex, messy reality of winning in Vietnam. There are only so many moves that can take place on the board, but they combine with such dazzling combinatorial complexity that there are more possible positions, mathematically, than there are atoms in the universe. So you can't solve Go through brute-force calculation, which is why AI developers had said it would be twenty-odd years before anything came close to performing at the level of the world champion, still less beating him. Within DeepMind, though, they were pretty confident they'd thrash him, and they duly did.

This one you'll recognise: a remote-control helicopter flying upside down — but, more significantly, a remote-control helicopter flying itself upside down, from 2006. This was one of the first deep learning nets. If you don't know what a deep learning net is: in rough terms it's analogous to the connections between neurons in your brain — a very loose analogy, but that was the inspiration that started AI researchers down this connectionist approach to artificial intelligence. This system was among the first to figure out how to fly a helicopter from first principles. It encountered the universe of the helicopter and it flew and crashed, and flew and crashed, and flew and crashed — on a simulator, where it could afford to crash the model over and over again. Eventually it reached the take-off point, and it mastered helicopter flight to the point at which it could roll a helicopter — no mean feat for any helicopter pilot, because normally they crash at that point. So not only could it fly, it could perform pretty complex manoeuvres as well.
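That fly, crash and try again loop is trial-and-error learning — reinforcement learning, in the jargon. A heavily simplified sketch of the general idea, tabular rather than deep, with an invented one-dimensional "hover" task standing in for real flight dynamics:

```python
import random

# Heavily simplified reinforcement-learning sketch (tabular Q-learning) on an
# invented one-dimensional "hover" task: keep altitude at 2 on a 0-4 scale;
# drifting off either end of the scale counts as a crash. No relation to a
# real helicopter model - the point is only the crash-and-retry learning loop.

ACTIONS = (-1, 0, +1)                      # descend, hold, climb
TARGET, CRASH_LOW, CRASH_HIGH = 2, -1, 5
Q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    """Return (next_state, reward, done) for the toy dynamics."""
    nxt = state + action
    if nxt in (CRASH_LOW, CRASH_HIGH):
        return nxt, -10.0, True            # crashed
    return nxt, (1.0 if nxt == TARGET else -1.0), False

for episode in range(2000):                # crash, learn, try again
    state = random.randint(0, 4)
    for _ in range(20):
        if random.random() < epsilon:      # explore occasionally
            action = random.choice(ACTIONS)
        else:                              # otherwise act greedily
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        if done:
            break
        state = nxt

# After training, the greedy policy steers back towards the target altitude.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(5)})
```

The real systems learn over continuous states and actions with far richer models, but the shape of the loop — act, crash or score, adjust, repeat in simulation — is the same.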
This one down here is from the last couple of weeks — anybody seen it? This guy is an American Air Force colonel, retired. He's an air-superiority combat instructor: his job is dogfighting, Top Gun stuff, and it's what he's spent his career doing; he now works as a consultant for aerospace companies. And this is him, as they say, having his ass handed to him by an artificial intelligence system running on something like a Raspberry Pi — mobile-phone-scale hardware. He was unable to beat it in repeated air-to-air combat engagements, and that's happened only in the last couple of weeks. Air combat against human pilots is something the Pentagon's engineers have been working on since the 1980s with no particular degree of success — it's a complicated control problem to process all the relevant information in the time required to defeat a sophisticated human adversary — but it has now been done. The Pentagon, by the way, is still planning to put a man in the cockpit of the next-generation long-range bomber, which is something they might want to think about again.

And do you know what this is? That's an intelligent design — an evolved design, where they let the computer work it out. How did you know that? Somebody has actually geeked out on this — yes, that's exactly what it is. It's quite old now: it's a NASA antenna, a satellite communication antenna, and it was designed using AI, which is why it's that wacky shape. It's not the sort of thing a human would produce: if you sat me down and said design me an antenna, I'd come up with a straightforward thing with one bend in it. The AI came up with that, and it's the optimal solution — a better design than a human could manage.

Together, I think, these things show the range of militarily relevant activities that AI is involved with already. This is weapons design: right now you can read solicitations on the US Army website to use AI to design bunker-penetrating munitions. So AI is already involved in designing weapon systems. As I was walking over here with Rob, we were talking about the carriers — the Navy's carriers, the favourite project of the Royal Navy.
We might also have been talking about the US Air Force's manned long-range bomber. These are weapon systems designed partly with the efficiency imperative in mind — war has an efficiency imperative: if you don't do it well, you go out of business — but partly, of course, to reflect the institutional proclivities of the organisations that design them. Bureaucratic politics in action. They have been designed by humans, and an AI probably wouldn't design you an aircraft carrier. You might not like what it did design you, but it would design you something rather different from what you would come up with. So: AI in weapons design — and not just in weapons design, but in strategic and operational situations, situations with less structure to them when it comes to decision making, and at the tactical level too, across a whole range of military activities.

And my argument — to my good friends who work at DeepMind, who assure me on a regular basis that they are not involved in defence-related research, and indeed it was one of their criteria for being bought by Google that they would not engage in defence-related activities — is that this is a little analogous to nuclear weapons development and the idea of dual-use technologies, where you have a civil nuclear capability capable of being switched over to military use. Except it's even more so in this case, because AI is entirely about autonomous decision making in imperfectly understood environments, and that is an inherently military, strategic capability, regardless of their desire to keep the two separate.

So let me close this section, for reasons of time, with some questions. The way I've tackled AI in the book is through some dualisms that I think are particularly relevant to how AI will affect strategic-level decision making. First, the distinction between narrow and general AI. When Stephen Hawking tells you that we've unleashed the demon, that the demon AI is coming to take us over, he's talking about general AI: AI with unbounded flexibility, capable of reaching decisions across a whole range of activities — not just a chess computer that does one thing — and, more than that, AI that is motivated, to some extent intrinsically, by itself, to do these things. That's what frightens him: a flexible AI that is motivated to act. In fact I'm not so much concerned about that, for all sorts of reasons.
But I think there's plenty to be concerned about while we're still talking about narrow, domain-specific AI — specifically, the tactical impact of narrow, domain-specific AI. So that's one big distinction. When we're talking about narrow AI, I mean the sort of system that beat Lee Sedol. As for general AI, have a look at the DARPA robotics challenge from this year: the robots are still incapable of doing fairly basic stuff like opening doors or climbing over fences without toppling over. The combinatorial explosion involved in doing some of the things we take for granted is still beyond what the current level of AI can do.

The second dualism is between tactical and strategic decision making. Flying that helicopter upside down, or defeating a US colonel in air-to-air combat, is a tactical-level military activity. Tactical-level activities still require some generalist knowledge — flexible skills across a range of activities — and they're still hard: it's not chess; it's altogether more complex to learn to fly a helicopter.

The last couple of dualisms before I stop talking at you. One is this question of autonomy and self-directed AI, and this opens a real can of worms. To what extent is artificial intelligence capable of making decisions for itself, and to what extent is it responding to inputs that you put in ahead of time, ex ante? In an ideal world, you would have an idea of your strategic goal, or your tactical goal, you would tell the machine to achieve it, and it would go out and achieve it. That's what the Army calls mission command, and it does it today with people: you tell them what you want to achieve, you give them your commander's intent, and you leave them to figure out how to achieve it. In an ideal world, you would quite like that to be your relationship with autonomous machines: they are autonomous to the extent that they are carrying out activities you have told them to do up front. That's all fine and good, except that you don't always know what it is you want them to do up front, because you can't anticipate the situations that may arise down the line. And it's only fine and good until you are satisficing — that's the term — between different goals you want to achieve that may be in tension. Can you specify which goal you want the machine to achieve, which goal is the most important, or otherwise?
Which brings us to Bostrom's paperclip counter. If you want an intro to the scary, Elon Musk and Stephen Hawking side of artificial intelligence, Nick Bostrom's book Superintelligence is a pretty good place to start. His nightmare scenario is the paperclip counter. The paperclip counter has the best intentions in the world — in fact it has no intentions at all; it's not out to get you. It has just been tasked with counting to a hundred: you want it to count a hundred paperclips and tell you, when it's done, that there are a hundred paperclips there. And in the end, in Bostrom's scenario, it conquers the known universe in order to count those paperclips, because it can never count them to the degree of certainty at which it is completely happy that there are a hundred paperclips. So: 98, 99, 100 — job done. Did I get it right? Just have one more check: 93, 94, 95, 96 ... it gets to a hundred again, and now it's reasonably sure — it has done it twice. But have one more check, because there's only a vanishingly small probability that it counted wrong twice. And it will carry on doing this. Being a genius artificial intelligence system that has been tasked with this thing and nothing else, it will, Bostrom argues, secure all the resources in the universe in order to keep counting the paperclips and make sure it really has got a hundred of them. That is the principal-agent problem gone disastrously wrong. All you wanted to know is that there are roughly a hundred paperclips; but unless you specify that up front, your very clever artificial intelligence will keep counting, and keep securing the resources that enable it to keep counting. So the principal-agent problem is a key thing: how do you put in enough safeguards to ensure that the machine does exactly what you want, and not what it thinks you want? And you can see the military dimension flashing up — autonomous weapon systems: I just asked you to kill this one guy, and now you're doing all sorts of other stuff that I didn't ask you to do.
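The runaway logic is easy to caricature in a few lines. The miscount probability below is invented; the point is only that "be certain" never terminates, because each recount shrinks the residual doubt without ever making it zero:

```python
# Toy caricature of the paperclip counter: an agent told to be *certain*
# there are 100 paperclips. Assume, arbitrarily, that any single count is
# wrong with probability 0.01, independently of the others.

P_MISCOUNT = 0.01

def residual_doubt(recounts):
    """Chance that every one of `recounts` agreeing counts was wrong."""
    return P_MISCOUNT ** recounts

for n in (1, 2, 3, 10):
    print(n, residual_doubt(n))
# 1 -> 0.01, 2 -> 0.0001, 3 -> 1e-06, 10 -> 1e-20: small, but never zero,
# so a literal-minded optimiser always has a reason to count once more -
# and to grab more resources with which to keep counting.
```

Ask instead for "roughly a hundred, to, say, 99.99 per cent confidence" and the loop stops after a couple of recounts; ask for certainty and it never does — the specification problem in miniature.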
I talk in the book about consciousness and the possibility of machines becoming conscious, but I won't go into that here. The last dualism before I finish: the distinction I draw between evolved human cognition and artificially intelligent machines doing strategy is probably a false dichotomy — not even probably; it is a false dichotomy. Artificial intelligence and human intelligence are probably not going to be seen as separate entities for all that long. HCI — the human-computer interface — is about the extent to which you are using artificial intelligence systems as part of your normal cognition. Pokémon Go's augmented reality is one example of that; the use of a chip in your brain reading your prosthetic arm, to tell you what the arm is feeling, is another — an example of where mind and computer merge, if you like. Another DARPA project: a quadriplegic woman has flown an F-35 through the power of her thoughts. She's flying a simulated one, not a real F-35 — there are only about ten of those in existence — but she has flown it in a simulator using the power of her mind. So I think the line between what is evolved human cognition and what is artificially intelligent cognition will probably blur.

That's another duality. Then there's defence versus civilian research. I'm pretty hard-over on that in the book: it's all defence research, because it's all about decision making in complicated environments. And then toy universes versus real universes: a lot of longstanding AI researchers are pessimistic about AI, partly, I think, by inclination, because they've been at it a long time and it's a frustrating game, and partly because they want to counter unjustified hyperbole — the sort of thing Musk is talking about. But I think that pessimism is to a certain extent unwarranted: toy universes can be pretty complicated, and AI is doing pretty well in these toy universes.

Let me finish, for reasons of time if nothing else, with what I offer by way of conclusion in the book: ways in which artificial intelligence is going to change strategy — and perhaps we can take more of this up in questions. The sorts of things I have in mind: AI changes the balance between offence and defence, because it allows a concentration of force. AI shifts your tolerance for risk, firstly because you're not losing any of your own people, and secondly because your actions are not being driven by your own emotional cognition. And AI makes the man in the loop unsustainable. The stated defence position on artificial intelligence weapons is that there will always be a man in the loop — a lawyer making a decision on what machines can and can't kill. That is inherently unsustainable. It's fine when you're attacking the Taliban in Helmand province and you can lawyer everything back up to London. It's not fine if you're up against an autonomous system operated by a high-tech adversary in, say, China, because they might not care to have a lawyer in the loop deliberating and slowing their machine down.
In which case, I'd say, the man in the loop will eventually be forced out of the loop by dint of the speed of artificial intelligence. AI will also radically change the structure of armed forces — another point I make in the book. It will strip a lot of people out of the armed forces to kick off with, and it will change the sort of people who are employed.

So, that's my last slide. I started from the point of considering the way in which our evolved human psychology has shaped our strategic behaviour for our entire human history — up until fairly recently, and really only fairly recently: the Pentagon is now using artificial intelligence systems to help identify and prosecute targets. So until fairly recently, our entire strategic decision making about the use of violence for the collective good was done with that evolved cognition — which, I've argued, starts with Lanchester-style mass and the need to cooperate — and it is now gradually being replaced. With what? I'll leave it there, and take any questions you may have.