In this episode, the Changing Character of War Programme welcomes Dr. Heather Roff, a senior research fellow in the Department of Politics and International Relations and the Oxford Institute for Ethics, Law and Armed Conflict. Dr. Roff's research investigates a future where machines, not humans, might be authorised to make and wage wars. She joins us to talk about the moral quandary of autonomous weapons systems.

Today, I want to do two things. One is to situate the question of what an autonomous weapons system is. There is a large debate going on internationally right now, at the United Nations, with the ICRC and member states, because I can walk into a room and say "autonomous weapons system" and somebody will think of a Roomba. Another person might hear "autonomous weapons system" and think of a remotely piloted aircraft that has a human making the decisions and flying it. And somebody else can walk into the room and picture the Terminator.

So we have this very large division over what we mean by autonomous weapons systems, and the more you push on it, the more you see that as you say "well, it does this, and it does this, and it does this," the distinctions start to get quite fuzzy between what is the difference between an automated system and an autonomous one. So this is my attempt to bring some conceptual coherence, to meet the multiple disciplinary discussions and definitions of autonomy and bring them together so that they can all talk to one another, and then to problematise that a little bit, particularly from a moral and philosophical perspective, and to adjust our thinking as we go.

So what is an autonomous weapons system? According to U.S. DoD Directive 3000.09 from 2012, this is a system that can, once activated, select and engage targets without intervention by a human operator. The emphasis here is on "select" and "engage," and that's it. How that comes about, under what circumstances, for how long — we don't know. This includes human-supervised systems, things we tend to think of as really automated, perhaps a counter-rocket and mortar system as well: if they're selecting and engaging, the DoD says that's an autonomous weapons system. But at present they have excluded all existing systems from the discussion. So the DoD says that everything we have currently in play is not an autonomous system; only future systems will be autonomous systems. That's also something to keep in mind. In this country, the UK MoD also has a policy position on autonomous weapons systems.
These are the only two states, out of 186, to have publicly come out with a definition and/or policy statement on weapon systems that operate without human intervention in the weapon command and control chain. If you want to talk about these systems, they think this is a good place to start. I would also submit that this is such a big, broad, encompassing definition because no military commander that I know of — or, I think, in the history of the world — would say: yes, please, can you give me a person or a system that operates outside of my command and control? "I'd love that." Militaries don't like that, right? They want to be in control of the effects; they want to be in control of their forces. So this is kind of an odd situation for defining the system, because what it does is set the threshold so high that nothing could ever really amount to an autonomous weapon system. Thereby, when the UK claims that it will never adopt an autonomous weapons system, it's because it's almost an impossibility that any military would ever want to adopt something with this characteristic.

Then we have the ICRC, the International Committee of the Red Cross, and the ICRC basically takes what I would call a functionalist account of the definition. They say something like: a weapons system possesses autonomy in its critical functions, and they give some examples of what critical functions are. These are not exclusive, and they are not exhaustive. So this would be something like searching for an object; identifying; tracking; intercepting; using force against; neutralising; damaging or destroying. And again, without human intervention.

But notice that all of these definitions focus on different things: either the functional capability of the system, or rather the action or decision that is being undertaken, or — let's just say in the UK case — fantastical things that will never exist. I mean, that definition might as well say "unicorn," and "I will never use unicorns in war." So I think we have some big gaps here, and these are the best publicly available definitions we have at present, which makes it very hard when you're talking in international organisations about what to do and how to regulate.

And I've participated. For the last three years the United Nations has had an informal meeting of experts, one week a year in Geneva, under the auspices of the Convention on Certain Conventional Weapons, to discuss whether to prohibit or regulate autonomous systems.
For the past two years, I've been an invited expert giving testimony to the member states about this. And I can tell you that these discussions are fraught and ultimately frustrating, especially when you have somebody like the ICRC in the back of the room saying something like this: we believe it is important to have a clear, common understanding of what the object of discussion is, and in particular what constitutes an autonomous weapon; without engaging in a definitional exercise, there is a need to set the boundaries of the discussion. Is it the Terminator? Is it a unicorn? Or is it your Roomba? We need to know what it is we're regulating.

So: boundaries. Taking up the ICRC's challenge, I launched a data project that has just come online. If you're interested, it is a data set of autonomy in weapon systems that categorises 284 systems on about 13 or 14 indicators. It's on the Global Security Initiative's website, and it attempts to take this critical-functions approach and ask, with empirical evidence: where are we now, and where are we going? So there's the conceptual side and the empirical side, and the conceptual side is what I'm going to give you today.

So what would the boundaries be? Initially, we have to think about the boundary between automated — which everybody thinks of as permissive and permissible — and autonomous. We can think of these as two distinct categories, right? Some things are automated, some things are autonomous. If you think of them as classes in this way, distinctly and uniquely, taking a dichotomy of two things, you could say that on the automated side they might be something like: predictable, reliable, the relation between sensory input and action is direct. They're simple. They're dumb. This could be something like a Ford auto manufacturing plant that is just, you know, popping out widgets, right? That would be something automated, and it's perhaps better thought of as self-prompted: it can do what it is prompted to do, but it's not taking into consideration its own goals or reasoning about its ends. Autonomous, on the other hand, might be thought of as unpredictable, unreliable, indirect, complex, smart, and perhaps self-determining. And these ideas consistently get trotted out again and again.

But would it help if I dimmed the lights a bit? Can people see, or should I dim the lights? I'm seeing some squinting happening, so, in an effort to... um... okay, that's technology for you. There you go. All right.
So often they're pitted against one another, and sometimes what is defined as autonomous is just a negative definition: that which is not automated. But sometimes we might want to think about this not as a distinct difference of classes but along a continuum, with automated on one end and autonomous on the other, and all sorts of degrees of difference between the two along that continuum. Some people say, you know, it's going to go all the way up to a point and then it magically becomes autonomous.

If there are any engineers in the room, this is the common understanding via levels of automation, and it goes back to Sheridan and Verplank in the late 1970s, which was a discussion of levels of automation. Sheridan and Verplank give ten levels: it starts with level one, where the human does everything, and at level ten the machine does everything, and in between are, I think, rather arbitrarily assigned levels that are supposed to be discrete, doing different jobs. I think that's actually the wrong way to look at it, because from two to nine it's human–robot interaction, and then level one is human alone and level ten is machine alone, which is something completely different. So it's quite odd to think of it this way, but this is another way it gets discussed as a boundary.

I think all of these are insufficient ways of thinking about it — either category mistakes, or levels of one kind of continuum going up. I am more inclined to think about the notion of complex adaptive systems and how complex adaptive systems have emergent properties. I'm thinking through this in my own work right now, so I'm totally open to your suggestions on how to rethink it, but I think there's something different about the notion of autonomy, particularly autonomy in weapons systems, that is emergent: it's an emergent property of the system.

So, the grain of sand problem. I think we can get at this if we take that example but reverse it. The grain of sand problem says: look, you might have a pile of sand; everybody can envision a pile of sand in their head. And you can take away each individual grain of sand, one by one, and all of a sudden it's no longer a pile of sand. The idea is that you can't get to the pile-of-sand-like qualities until you have a certain amount of sand there, and you can't conceptualise it by saying it's exactly this many grains of sand. Likewise, if that doesn't work for you, you can think of it with water: H2O is the water molecule, right? But H2O as a distinct kind of molecule doesn't get at the quality of wetness.
Wetness is an emergent property of water. So these are the kinds of things that I'm talking about. The critical functions in autonomous systems may all individually be automated — that is, the system can or cannot perform each of them, and it may do so well or poorly. But even if you think of each of those as automated, somehow, when we put things together in a particular way, there is something emergent, something that is more than the sum of its parts.

And then, drawing not just on complex adaptive systems but on ethics, you ask: what is it about autonomy that's particularly important? I think there are a couple of things here we can learn from philosophy. One is that, in philosophy, when we have a question about the right thing to do, we do one of two things. One, we can evaluate the action: did it maximise some sort of good, some non-moral good? If you're a consequentialist or a utilitarian, you're looking at the outcome, and you're evaluating the outcome or the act. Or sometimes you could say it's not the act, it's actually the quality of the agent who is acting. So in more deontological conceptions, you say: it's my intention — was I intending to do X, Y and Z, regardless of what happened with the outcome?

So these are ways in which we sometimes think about the right thing to do, the evaluation of an act, when it comes to autonomous moral agents acting in various ways. And I think if you put these two things together — thinking about autonomous weapons systems, as well as what we're trying to do when we ask what an autonomous weapons system is and what we want to do with it — we ultimately have to think about it as both. Why do I think it has to be both? Because an autonomous weapons system, as you'll see in a second, is something that is both object and agent. It is something that is not a moral agent, but it has agency: it has cognitive capacities for decision making; it can do planning. Artificial intelligence has this right now: we talk about artificial intelligence agents that can move around their environment, doing certain things, making decisions. These have agency, but they might not be moral agents in the way we presently think of them in full philosophical terms — a moral agent as the subject of rights and duties. But it's also an object, and for an object we would generally say we want to evaluate the task at hand and how well or poorly it does that task.
And this is something I think we can get our heads around a bit more, then. In a recent paper that I wrote with Professor David Danks of Carnegie Mellon, we say that autonomy is about a system's ability to carry out a particular task assigned to it by someone or something — maybe itself — without the intervention of a human operator, where the performance depends upon various other capabilities or capacities related to its autonomy. For the philosophers or political theorists in the room, you can think of this as a kind of throwback to Martha Nussbaum and her capabilities approach.

So what would be some of these general categories of capabilities? Well, when it comes to a weapon system we're going to have a bundle of things happening, and we can divide them along three lines. One bundle of capabilities has to do with movement, because we're talking about something that will be moving around in time and space; even if it's non-kinetic, even if it lives in a cyber system, it has to be able to move within the network. Then you can think about things like detection, right? Because I might direct a weapons system to go attack a target X, but it has to be able to detect that target once it's out there — not select it, but detect it. So this has to do with weapons, really, and detection. And finally, if we're talking about a targeting system, we also have to take into consideration planning, learning and adaptation.

So if you take these three general bundles of capabilities, you can actually look at an existing weapons system and say: I'm going to categorise this along three indices. In my first index, we look at things like self-mobility: how well does an autonomous weapons system position itself in the world? How does it carry out purposive movement through its environment to find a location, to guide or home in on something? Lots of precision-guided munitions would fall in this area of self-mobility. Some systems might be completely stationary; think of counter-rocket and mortar systems on forward operating bases. You might think, well, that's not quite right — well, maybe a version of that weapon mounted on a ship, where the ship is mobile, or something. But you can start thinking about it in terms of how well this thing moves around. If it's just flailing about, that's not a good thing.

The second index is self-direction. And self-direction, for me, is again about how well a system can aim and employ weapons.
How well can it aim at something, detect the thing it's aiming at, and then employ weapons against it? So: fire control issues, acquisition issues. Does it have target acquisition? Does it have automatic target recognition? Does it have a seeker? What happens with a seeker is that when you point it at something, it looks at a certain area called an acquisition basket, and there are all these different things going on inside that acquisition basket. Can it do these types of things? Out of all of the known capabilities systems can have, how many of them does this one have? Does it have none of them? That would be bad too; those would be dumb bombs.

And then I think the crux of it is when you add this last bit, which is self-determination. This is qualitatively different from the other two indices, the other two elements, and the idea here is about cognitive and volitional capacities. We want to understand how, or whether, a system can give itself goals, can formulate plans, and can act on those goals. This, to me, involves higher-order cognitive and conative aspects. The cognitive aspect is: what is the fact of the matter, what is the idea or the general principle or the law, what is the specific situation I find myself in, and does the specific fall under the general? Formulating reasons — the analytical part of thought. But the conative part is different: it is whether my subjective state of mind, my feelings, are actually working alongside my judgements and my cognitive reasoning capacities.

And you might say to yourself: wow, Dr. Roff, you've completely gone off the wagon — you're talking about emotions and machines. I would submit, again, that when we're talking about agents in multi-agent systems that would be able to exercise the various higher-order capabilities involved in undertaking a proportionality calculation in a particular instance of armed conflict, it's exactly these kinds of emotive, conative bits that you would want to have. We also have work being done in artificial intelligence right now on artificial emotion, and on various ways in which you can use emotions, or things like artificial hormones or pheromones, to get systems to do what you want them to do. So I don't think it's beyond the scope of possibility that you could have an artificial agent with cognitive capacities and something akin to artificial emotion to help guide it to the right action.
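The "out of all the known capabilities, how many does this system have?" idea from a moment ago is easy to make concrete. Here is a minimal sketch in Python; the indicator names, their grouping into the three indices, and the fraction-of-indicators scoring rule are illustrative assumptions of this write-up, not the coding scheme of the dataset mentioned earlier.

```python
from dataclasses import dataclass

# Hypothetical indicator checklists -- illustrative only, not the actual
# coding scheme of the dataset described in the talk.
SELF_MOBILITY = ["purposive_movement", "homing", "waypoint_navigation"]
SELF_DIRECTION = ["target_acquisition", "automatic_target_recognition",
                  "seeker", "fire_control"]
SELF_DETERMINATION = ["goal_setting", "planning", "learning", "adaptation"]


@dataclass
class WeaponSystem:
    name: str
    capabilities: set  # which of the known indicators this system possesses

    def index(self, indicators: list) -> float:
        """Score an index as the fraction of its indicators the system has."""
        return sum(c in self.capabilities for c in indicators) / len(indicators)

    def profile(self) -> tuple:
        """(self-mobility, self-direction, self-determination) scores."""
        return (self.index(SELF_MOBILITY),
                self.index(SELF_DIRECTION),
                self.index(SELF_DETERMINATION))


# Example: a stationary counter-rocket-and-mortar style system, coded by hand.
c_ram = WeaponSystem("counter-mortar battery",
                     {"target_acquisition", "automatic_target_recognition",
                      "fire_control"})
print(c_ram.profile())   # -> (0.0, 0.75, 0.0): no mobility, no self-determination
```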
Let me give you another example of what I mean by these conative, emotive capacities. How many of you in here have ever heard of a neural network? How many of you have ever heard of a deep neural network? How many of you have ever heard of a reinforcement-learning deep neural network? Okay. A neural network is a way in which you give a computer a learning system. It's the kind of thing used in image recognition: you show it a picture, and there are massive amounts of data; it has certain input values and an output value, and hidden layers of nodes — of neurons — where you don't know exactly how things are related. It's trained to look for various things, and then it's set off and goes. It's much more technical than that, of course, but I'm not going to get into it here.

The idea is that you're training it to recognise certain patterns, and when the pattern is right, you give it a reward, just like you would a dog. If you say "please sit" to your dog and it just looks at you, you say "sit" again, it looks at you, and all of a sudden maybe you raise your hand and it sits down, and you say, oh, it's so great! And the dog learns: that's what you want me to do. Then you do it again, and it gets a treat. And all of a sudden this reinforcement paradigm kicks in, so that it starts to recognise what it is you want it to do. That's reinforcement learning. We've added that into how we do computational learning: we give the system points or values, or relate things in certain ways.

This kind of weighting, this reinforcement, is something like what we get as humans, right? We get a good feeling when we do something nice. Or maybe, if you're more deterministic about it, our hormones release nice endorphins when we do something that we think is nice. However it works, it's a reinforcement of something good, telling us: this is the right action, do the right thing. And if you're going to have an autonomous system acting amongst humans — out there, away from its human commanders, but in a human population — you had better be sure it's going to do the right things and uphold the law, if it's going to undertake discrimination, proportionality assessments and precautions. But how do you teach it to do that when it's learning? This is why I think the conative capacities are equally important.
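Since the dog-and-treat analogy carries the explanation of reinforcement learning here, a stripped-down sketch may help. This is a generic toy of the general idea only — a bandit-style tabular update rather than a deep network — and the states, actions and reward rule are invented purely for illustration, not drawn from the talk or from any weapon system.

```python
import random

# Toy reinforcement learning: from reward alone, the agent learns which action
# the "trainer" wants in each state -- the dog-and-treat idea in miniature.
states = ["command_sit", "command_stay"]
actions = ["sit", "stay", "bark"]
desired = {"command_sit": "sit", "command_stay": "stay"}   # hidden from the agent

q = {(s, a): 0.0 for s in states for a in actions}         # learned value estimates
alpha, epsilon = 0.5, 0.2                                  # learning and exploration rates

for _ in range(500):
    s = random.choice(states)
    if random.random() < epsilon:                          # sometimes explore...
        a = random.choice(actions)
    else:                                                  # ...mostly exploit
        a = max(actions, key=lambda act: q[(s, act)])
    reward = 1.0 if a == desired[s] else 0.0               # the "treat"
    q[(s, a)] += alpha * (reward - q[(s, a)])              # reinforce the estimate

policy = {s: max(actions, key=lambda act: q[(s, act)]) for s in states}
print(policy)   # after training, matches what the trainer rewarded
```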
Okay, so I didn't like my little lines from before, but I do think what you get is something like this: it's not two categories, it's not even one dimension, but actually, I think, a three-dimensional space. Multidimensional. You have self-mobility, self-direction and self-determination, and you can imagine looking at a system and asking where it positions itself along these three axes, and then evaluating it somehow over those three axes. Now, I'm a visual learner, so this is how I try to think of it in my head: somewhere along these lines, the whole becomes more than the sum of its parts. It has some sort of emergence that we would call autonomy.

And it requires us to look at the functions and the capabilities, as well as the internal states of the agent acting — its cognitive and conative disposition, if you will — and then how these relate to the task at hand. Or, if you don't want to think of it in those terms, the task is perhaps the goal that we give it. I've been in conversations with various state officials, and they will say things like: an autonomous weapon system must understand commander's intent. Just keep that nugget with you. It's not merely about the task; it could also be about the goal, which is to comply with the rules of engagement, the laws of armed conflict and commander's intent — all of which are very nebulous concepts.

In certain circumstances we might think the principle of distinction — thou shalt not be indiscriminate in the use of force against civilians or civilian objects — is a really easy rule to follow. Except, what if the civilian object is being used in a certain way that makes it a military objective: that is, by its nature, purpose, location or use, attacking it would give a direct military advantage? That's what the law says, but that's very contextual, right? The commander has to know, at that time, that that civilian train is actually transporting matériel, things like that. So you need better, richer context. And thus you have to think about what we're asking the machine to do. Is it a simple task — "go to a location in space," as we call it, which is what our missiles do right now — or something very broad: "take the hill," or "understand commander's intent"?

If you still can't picture what I'm talking about, here's a nice little 3D graph. It doesn't actually mean anything, other than that you can envision where systems may fall on the three axes.
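The placement itself is easy to imagine in code. A minimal sketch, with entirely invented systems, scores and threshold: put a few profiles along the three axes and flag the ones that sit high on all three at once, which is one crude stand-in for the region where autonomy might be said to emerge.

```python
# Entirely invented systems and scores, placed along the three axes:
# (self-mobility, self-direction, self-determination), each in [0, 1].
systems = {
    "dumb bomb":                 (0.1, 0.0, 0.0),
    "precision-guided munition": (0.7, 0.4, 0.0),
    "counter-mortar battery":    (0.0, 0.8, 0.1),
    "notional future system":    (0.8, 0.9, 0.7),
}

def in_red_region(mobility: float, direction: float, determination: float,
                  threshold: float = 0.6) -> bool:
    """Crude stand-in for 'emergent autonomy': high on all three axes at once."""
    return min(mobility, direction, determination) >= threshold

for name, axes in systems.items():
    print(f"{name:28s} {axes}  emergent region: {in_red_region(*axes)}")
```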
So if you had a handful of systems that you were trying to measure, you could place them in this space and say: oh, up here in the red part, maybe we're starting to get into some emergent autonomous capacities.

Okay. So say you buy my argument, which is a big one to buy, but let's just say you do, because I really like this emergence idea. Here's the rub: what exactly is the quality of the emergence? What is the quality of the autonomy here? Is it technological capability, or is it something on the order of free will or moral agency? We might say that somehow, magically, autonomy emerges from all these capabilities, but then what does that autonomy mean for the system?

This, I think, is where our good pal Aristotle and the Greeks can help us. Just when you thought it was safe to put your Aristotle away, we must go back to him. "Autonomous" is typically thought of, as in the Greek autonomos, as self-governing: giving myself laws and rules. But in the Greek way of thinking, this is very much dependent on a kind of community self-determination: all of the men — don't worry about the women — all of the men who are also free, going into the public space, the agora, and making all sorts of remarks, right? That is the kind of self-determination they were talking about when they talked about the autonomous. So I don't think that's the kind of thing we would want to identify with an autonomous system. I don't see an autonomous system belonging to a community, holding something in common, going and casting lots, signing up to be emperor for a day. I don't see that, right?

What I do see is another concept in Greek that would be really helpful for us here, and that is autexousion. Autexousion is also a notion of autonomy, and it's about the power to act: it's about having the authorisation to act, and having the required moral faculties that would allow one to be delegated an authorisation to act.

Again, I think examples are really helpful here. You can think of something like a power of attorney. A power of attorney says: I am a self-determining agent, but you know what, I can't do some things right now. For example, my husband and I bought a house six months or so ago, and I happened to be in Oxford at the time. I couldn't sign any of the papers, so I had to sign a power of attorney saying he can sign all of the papers for me. I trust him within this scope, and only this scope: you can do this, and you can't do anything else. And so that would be me authorising.
I take my authorisation and I delegate it to him; he's capable of taking on that authorisation and delegation. So: delegation to do phi, whatever phi happens to be. What, then, is the task we're trying to think about in relation to an autonomous weapons system, the task we're delegating in this kind of autexousion sense? At its most general level, I think it is the task to use force against a military objective — whatever the quality of that force, whatever the objective happens to be — in order to achieve a military advantage. Now, we could insert precaution, proportionality, all of that; I'm going to leave it out and just say that at its most general, the task is to use force against a military objective to gain a military advantage. Except that when it's an autonomous weapons system, it is selecting which targets to use force against by itself.

So we come full circle back to U.S. Department of Defence Directive 3000.09. It selects, not merely detects. It selects which objects to use force against, and perhaps even what kind of force to use against them, which then smuggles in all sorts of questions about precautions and proportionality.

So if we say that they are being delegated this task, we now have another question in front of us: where does the authorisation to use force come from, and how is it transmitted down to this system in a coherent way? Well, we can look at where the power to declare war comes from. Coming back to our lovely just war theorists: the power to declare war lies with a legitimate authority, those who are authorising war, wherever that comes from — the people, the king, whatever. And then there are the authorised agents of war, the armed forces who are undertaking those actions. I have no idea yet where autonomous weapons fall, because they're an object and an agent at the same time.

So the power to declare war has traditionally been about legitimate authority, and about how and from where those agents receive their rights, authorisations and obligations. It all works bottom-up and top-down at the same time. If it's a democracy, you could say that all powers and rights lie with the people; then the people, through their representatives — right, very Lockean — and so Congress, the American Congress, has the power to declare war. Congress, the U.S. government, is a legitimate authority, and the U.S. armed forces are the legitimate authorised agents of war.
And when Congress authorises war — which it does very rarely any more, but when it does — it gives it to the executive: go fight, you are the head of the armed forces. And through a chain of command it goes all the way down to the PFC on the ground. That's how it works, right?

Once you get to the authorisation for agents to use force in war, you are saying those authorised agents of war can do certain things — things they cannot do in peacetime. You can't go around shooting civilians in peacetime. In war, depending on the situation, civilians may be harmed; they may be collateral damage, they may be X, Y or Z — I'm not making moral judgements here — but there are certain permissions you are allowed in armed conflict that you are not allowed in peacetime. And with those permissions to act come rights, responsibilities and obligations. We have something like the combatant's privilege: combatants will not be tried for murder if they are acting within the bounds of the law of armed conflict; they will only be tried for war crimes if they act outside of it. If they surrender, they have the combatant's privilege: a POW is not put on trial. So this is, again, that notion of a right and a responsibility at play. So to delegate the authorisation is to delegate the moral right to use force.

This is how it's been structured for as long as we can think — at least in more modern times; maybe not, you know, Assyrian times, but more modern times. Okay. But now we have to think about delegating moral rights to use force to non-moral agents, and this gets us into a bit of a quagmire.

An autonomous weapon is a weapon that decides. It doesn't matter what weapon is attached to it — that is, the munition. It can have a precision-guided munition, it can have a gun, a bomb, a torpedo, a laser; it can have anything. We don't care about the munition; we care about the autonomy in the decision-making capacity of the system — that is, in selecting and engaging, and everything that goes into selecting and engaging. So it's a weapon that decides. But I'm not clear — I don't think anybody is clear at this point; well, with the state of technology now it is clear, but for the future it is not — it is not clear that such systems possess the requisite moral faculties to make decisions that would allow this authorisation to be delegated to them permissibly.
So that concerns the technological capabilities of systems. If I throw a bone to the person who says these systems are going to be better at discriminating between combatants and non-combatants — facial recognition is better in computers right now than it is in people; these may be more humane to use — I'm going to offer that person the bone and then say: okay, let's say for the sake of argument that these systems can distinguish between combatants and non-combatants better. That's not sufficient. That's not sufficient to authorise the machine to take the decision to attack, because under moral and legal rules you also have to be able to take precautions and use proportionate force, and that is highly context-sensitive.

And you can't get there with something like good old-fashioned artificial intelligence; good old-fashioned AI is not going to get you there. It's too brittle. You can't foresee all the possible permutations of what's going to happen in a battle space and how your adversary is going to adapt in that battle space. So you have to have a learning system. If you want to invest in systems that are going to protect you, that are operationally effective and cost-effective, that means a real learning system. And if that learning system is out there, you had better hope it's smart enough to actually undertake proportionality calculations. But I don't think that's very likely, because these systems are not human in the way that you and I think about human proportionality calculations. Until they have greater cognitive and conative capacities, they are simply not going to be able to comply with moral and legal rules.

But what kind of political theorist would I be if I didn't hedge and oscillate a little bit, right? So here comes the "but": if systems do come to possess these greater cognitive and conative capacities, then they may be moral agents as well. And if that's true, then they have rights and obligations too. If we create the artificial general intelligence that AI specialists are trying to get to right now, and it can do everything that we can do — it looks like a duck, walks like a duck, quacks like a duck — then, let's put it this way: if I am a moral agent, I don't know why that thing over there would not be considered a moral agent. And if that's true, they may have rights and obligations, and then there are all sorts of questions about conscription.
But until such a time as autonomous weapons systems possess the moral faculties that would permit the delegation to happen — the delegation of that authorisation to use force — I think it is morally impermissible to task them with this choice. They lack the moral as well as the technical capabilities to comply with the moral and legal rules of armed conflict.

But in cases of defence — very narrow cases — I think you could probably see their utility, and I also think you may not actually have to task them with the selecting bit. What do I mean? Well, imagine you had a system that was purely defensive — and we can talk about purely defensive, purely offensive and everything in between — but purely defensive, that is, it does not activate unless it senses an incoming munition or mortar or missile or something like that. It's actively scanning all the time, but it's not deciding, "oh, I just feel like today is the day"; it is only activated when it senses something incoming. Then I think what's happening is that it is not selecting a target. The selection has come at some other point in time, by some other agent. It is detecting and then responding.

I think "selection" has a very grey quality to it. It could be read defensively, in these more preventive, responding-to-a-threat terms, or it could be read offensively. But I think we might square the circle by saying that an autonomous weapons system that is purely defensive might just have to respond and react to an incoming threat — incoming, not perceived, not "that shadowy guy out there looks a little shady." We do have anomalous-behaviour recognition for crowds in use right now, and I think that would be a bad thing to put on a weapon system. So that very restricted defensive reading, I think, would mean that you would not permit these systems to strike preventively.

And then I have a question: if that's true, might we just think of it as mere automation in some instances, and not autonomy? But for the most part, I think we have to say that these systems are not where we would want them to be for what autonomy truly requires, and we cannot permissibly allow them to undertake it.