This paper is called 'The Moral Status of So-Called Moral Machines', and a version of it will appear in the Cambridge Quarterly of Healthcare Ethics in January. But Julian needn't panic, because either I will make this completely different or I will give you a completely different paper. And this is not really about moral status, but, just to establish my credentials in this field: I first wrote about moral status in the book of my PhD thesis, Violence and Responsibility, which was published way back in 1980, and I wrote again about moral status, in more detail, in my second book, The Value of Life, which was published in 1985. And just for the record, as far as the moral status of persons goes, I think John Locke's account in the Essay Concerning Human Understanding has not been surpassed or replaced. But I won't insult you by pretending that you don't know what it is and reading it out, although I could do it from memory.

Today I'm going to be concerned, however, with the morality of artificial intelligence, and particularly with putative AI persons. And it's worth reminding ourselves of the prudential significance of establishing the moral status of persons, them and us. I just want to quote a passage that I wrote about this in The Value of Life, 35 years ago now. I said this: the question of whether or not there are people on other planets is a real one. If there are, we need not expect them to be human; it would be bizarre if they were. Nor need we expect them to look or sound or smell, or anything else, like us. They might not even be organic, but might perhaps reproduce by mechanical construction rather than by genetic reproduction. But if we are able to answer the question in the affirmative, i.e. that they are persons from other planets, we will be distinguishing between people on other worlds and animals, plants or lesser machines on those worlds. We will be deciding, in short, whether an appropriate response to them is to invite them to dinner or to have them for dinner.

But suppose the boot is on the other foot: suppose instead of us finding them in deep space, they turn up on our doorstep, as science fiction has prepared us for, and find us. And because they turn up here before we turn up there, they thereby prove their superior technology. What could we say about ourselves to them that ought to convince them that they had found persons on another planet, and that we have a status that is in some way worthy of their respect and attention? Because if we think about what we would say about ourselves, we know what to look for in others.
But now, back to the future. For the rest of the talk I want to lay into, I hope in a robust manner, a recent paper in Nature entitled 'The Moral Machine Experiment'. It was published in Nature on the 24th of October 2018. Now, in a fit of rage, I wrote this paper in about a day and a half to reply to it, and you're going to get some of that rage and some of that content now.

Edmond Awad and his colleagues have, perhaps unwittingly, set out a programme in that paper which, I believe, threatens us all. It's full of breathtaking and reckless assumptions, both about decision-making, the decision-making capacities of any current AI and particularly of so-called autonomous vehicles, but also reckless assumptions about the nature of morality and indeed the role of law in all of this. In the first paragraph of their paper they say, and I quote, autonomous vehicles will need to decide 'how to divide up the risk between the different stakeholders on the road'. In other words, the suggestion is that the AI itself will be programmed to make those decisions, and the Moral Machine experiment is then to try and find the data to put into that programme, rather than relying, as I think you were suggesting yesterday, on some of the public perception experiments that you referred to.

It seems to me that it just didn't occur to the authors of that paper that it is not open to the drivers of driverless cars, whether they are machines or humans, automatically to impose risk or death on other innocent road users whenever the alternative may involve a risk, even the slightest risk, to themselves. The Moral Machine experiment uses a decision-making methodology, and I'll say a bit more about how it works in a moment, derived from the famous trolley problem. The trolley problem was invented, as you will all know, by the great Oxford philosopher Philippa Foot in 1967, and I was one of a group of initially Oxford philosophers who devised and popularised various versions of this problem in the late 60s and early 70s of the last century.

If a vehicle were really autonomous, instead of simply metaphorically autonomous, it might do some deciding. But so-called autonomous vehicles are incapable of deciding; they will merely do some programmed selecting between alternatives. Tossing a coin, for example, is notoriously a method of selecting without deciding. That's why we turn to tossing a coin when we can't make a decision: it's a method of selecting without deciding. Deciding is a process, not a product.
The moral machinists seem to believe that they have found a way of making themselves, and the rest of us, and I quote, 'cognisant of public morality', and that such knowledge will give them and us access to morally optimal, or at least morally acceptable, choices that we can programme into AI. And this will, in extremis, so they think, minimise harm in proven, morally acceptable ways. They also claim that these ways are somehow morally licensed by the nature of what they call public morality, which has, as it were, been discovered by their sampling of public opinion.

So the moral machinists have a cunning plan. This is what they say: 'In other words, even if ethicists were to agree on how autonomous vehicles should solve moral dilemmas, their work would be useless if citizens were to disagree with their solution, and thus opt out of the future that autonomous vehicles promise in lieu of the status quo. Any attempt to devise artificial intelligence ethics must be at least cognisant of public morality.' And then they set out, by inventing a computer game, to test what public morality says about the sorts of decisions that they believe autonomous vehicles will be tasked to make. I'll return to the problematic nature of the moral machinists' account of public morality in a moment.

But we should note that in the real world, cognisance of and respect for the law is a much bigger hurdle for these cavalier experimenters than they imagined. Many jurisdictions, including English law and all other common law jurisdictions, do not recognise a defence of necessity to charges of murder. It is not, in most jurisdictions, simply up to individuals, whether these individuals are people or even driverless vehicles, to take the law into their own hands and refuse to 'spare', a word used in the Nature paper, that is, to deliberately execute, an innocent fellow individual for the crime, for example, of jaywalking (I'll come back to their use of that example in a moment), particularly in circumstances in which the self-interest of the vehicle and its passengers is obviously paramount.

In 1884, the famous case of Dudley and Stephens decided that necessity was no defence to a charge of unlawful killing. This landmark case established a precedent throughout the common law world, which includes a lot of the United States. The facts of the case: two shipwrecked sailors, Dudley and Stephens, when a third survivor of the shipwreck, the cabin boy Richard Parker, fell into a coma, decided to kill him for food, to save their own lives and to save the life of a fourth survivor, who refused to take part in the decision-making that led to the killing of the poor cabin boy.
Dudley and Stephens, embarrassingly, were rescued just a day and a half, I think it was, after they had butchered the cabin boy, and their actions became known. In this case Dudley and Stephens were tried for murder; they were convicted and sentenced to death, sentenced to hanging. They were sweating it out in prison, awaiting execution of the sentence, when they were reprieved and eventually just served six months, as it were for the extenuating circumstances. But the case established the precedent that you can't argue that it was necessary for you to kill other innocent people simply because you have a better prospect in store, whether it's saving your own life or the lives of others. A common law principle established now for over 135 years is unlikely to be overturned by reference to preferences, however sincere, revealed in a superficial survey by what might politely be called over-optimistic scientists and psychologists. If the lead scientist on the paper were deliberately to drive his car into a bus queue, or even into an innocent jaywalker, and cause death to avoid what he judged to be a greater harm, he would, in most jurisdictions, and rightly so, find himself charged with murder, or at the very least with some other form of culpable homicide or manslaughter.

So, never mind artificial intelligence ethics needing to be at least cognisant of public morality; much more urgent for devisers of experiments like this is to be cognisant of the law, a cognisance of which the authors of this paper show no evidence whatsoever. In the longer version of this paper I would have brought you up to date with a more recent case which I was involved in, in Manchester: the famous case of the Manchester conjoined twins, in which two very severely conjoined twins would both have died if they were not separated. The only way to carry out the separation and save the life of one, but not both, would be to sacrifice the life of what was called the weaker twin, the one less likely to survive, and thereby hopefully save the life of the surviving twin. I was very much involved with that case. I knew the surgeon who carried it out; I had many, many conversations with him. I followed the proceedings when it went to the Court of Appeal, and the operation to separate the twins could not go ahead without the approval of the Court of Appeal. So these are very severe obstacles to just programming into an artificial intelligence decisions that would amount to unlawful killing.
Now, this is a heroic and historic moment for me, because in my entire lecturing career, which has lasted for more than 40 years, I've only ever used one visual before, and this is the second. So this is a great day for me. This is one of the examples in the Nature paper, crudely drawn by me. Basically, you've got a car, driverless, no space for a driver, with three passengers, approaching a solid concrete barrier and a pedestrian crossing, a zebra crossing even. There are three old codgers, signified as old codgers by walking sticks, and they are crossing against the sign which tells them that they shouldn't cross. How wicked can you get? This is an image from the game in which they invited the public to play at solving these problems, at saying how the car should be programmed. And this is one of the examples, a classic trolley problem: you have the three old codgers on the crossing, and here's the driverless car. It can swerve into them, or it can crash itself, at the peril of its passengers, into the concrete barrier, and the people consulted are asked to say which solution they would prefer to see programmed into an AI in charge of a driverless car.

Now, accepting for a moment that the holy grail here is to find out how to obtain cognisance of public morality and then programme driverless vehicles accordingly, the following four steps are offered by the authors of this paper. I'll just list them very briefly for you. Firstly, find out what public morality would prefer to see happen in a range of scenarios, of which this was one. Secondly, on the basis of this discovery of what public morality would choose, claim both popular acceptance of the preferences and persuade would-be owners and manufacturers of the vehicles concerned to programme their vehicles in accordance with this demonstrated public acceptance of the solution offered. Thirdly, citizens' agreement, thus characterised, is then presumed to deliver moral licence for the chosen preferences. And fourthly and finally, this yields permission in some sense to programme the vehicles to condemn those outside the vehicles when their deaths will preserve the vehicle and its occupants.

And actually, in one of the other questions, they describe the three old codgers on the crossing as jaywalkers and ask whether you would prefer to kill jaywalkers or the innocent. If they weren't crossing against the light, would that make a difference, if they were innocent of the crime of jaywalking?
Would it be more legitimate to run them over than if they were not innocent of the crime of jaywalking? Well, Ronald Dworkin, in my view one of the greatest jurisprudential and constitutional lawyers and philosophers of recent times, drew a distinction which is of crucial relevance here, in the context of a discussion of the famous debate between two other leading academic lawyers, Lord Patrick Devlin and Herbert Hart, concerning the enforcement of morality by law. Devlin made great play of the importance of respecting public opinion and community values. Dworkin's telling rebuke was, and I quote: 'What is shocking and wrong is not his idea that the community's morality counts, but his idea of what counts as the community's morality.' And that will be the basis of my critique of what I take to be the nonsense in this Nature paper. I should declare, I suppose, a mild interest here: Ronald Dworkin was my PhD supervisor. I knew him well until his death, I knew his wife Betsy well, and so I have a particular penchant for his writings.

So-called autonomous vehicles do not solve moral dilemmas. If they did, if they only could, they would be persons properly so called: intelligent, superintelligent some people call it, AI persons, and they would therefore, in my view, necessarily have rights, interests, duties and votes like the rest of us persons. Much more important, their lives, or perhaps we should more appropriately say their existences, would matter; in whatever sense moral status conveys any sense of mattering, they would count equally with those of humans. So you could add a fourth person to the driverless car, the person absent behind the wheel, and the numbers would weigh in favour of the occupants of the car, possibly, if we play a numbers game. But so-called autonomous vehicles don't solve moral dilemmas.

In the world of the moral machinists, as I call the authors of the paper, preferences and decisions will be made without solving the dilemmas that make those preferences or decisions moral. Let me just make a claim, which I will only support very sketchily here, but which I have tried to support at much greater length in my last book, How to Be Good, published by Oxford in 2016. I would like to suggest to you that just exactly as not just any judgement about things in which science is interested is a scientific judgement, part of science, so not just any judgement about things with which morality is concerned is a moral judgement. Moral judgements have to qualify as such.
Making a judgement, a claim, about what the right thing to do is, is not necessarily making a moral claim; it has to have some moral bits in it somewhere. The solving of a moral dilemma involves much more than having a preference for one possible outcome of that dilemma. Just as the solution of a scientific problem requires much more than simply opting for, stipulating, a particular solution to that scientific problem, so the solution of a moral dilemma has to show how the circumstances which make it a moral dilemma have been weighed carefully, one against another, and morally persuasive reasons, facts and/or justifications found for having a moral preference for one outcome rather than another. Majorities, however many people you sample in your survey, are not necessarily right. Neither science nor ethics is produced by casting votes for particular answers, or so-called answers, happy though the thought of such a possibility might seem to some. It seems to me that the moral machinists are proposing the moral equivalent of deciding whether the world is flat by finding out what people would prefer the answer to be as to the shape of the world.

So, for clarity: tossing a coin will, to be sure, select an outcome, but not for moral reasons. So neither tossing a coin, nor an algorithm, nor the methods described in the paper in Nature are methods or processes of moral deliberation. Nor does the paper itself seem to have resulted from much that we could call deliberation. The moral machinists' work is at least as useless as they claim the work of ethicists would be in the paragraph I quoted a moment ago. They have not discovered whether citizens and corporations are or are not prepared to accept the moral and legal consequences of the decisions of the autonomous vehicles that they manufacture, buy or drive amongst, which will be the consequences of deliberately causing the deaths of innocent bystanders and other road users. Consequences which involve much more than operationalising a principle of minimising immediate harm by directing the vehicle in one way or another. Proper consideration of the possible harms of the preferences delivered by the moral machinists must surely include their effect, for example, on due process, on the principle that no person should be condemned unheard, on constitutional protections for the freedom of the individual, and on justice, including criminal justice, and on all the other considerations which do not appear in the Nature paper at all.
Having a preference for killing or sparing one person rather than another, even a preference shared with thousands or even millions, does not make it moral, even though it is about something of moral importance. The preferences of a certain 'Bohemian corporal', as von Hindenburg characterised Hitler, came to be shared by many millions of people, but that did not count morally in their favour, or indeed against them; it was irrelevant to that particular question. Laws, however, arrived at democratically and over time, are at least one indication of public morality.

And what should the human driver of that car do? It seems to me that this problem is a complete no-brainer. Well, it's obvious that a human driver should drive into the barrier. He's not entitled to mow down innocent people on a crossing; he should rely on the crumple zones and the safety features of the car to protect him, the airbags and so forth, and take his chances. It is not open to him, or to her, to decide on his or her own say-so to mow down three innocent people on a crossing. I mean, it's just such a no-brainer; I can't imagine that any sane person would offer that to the public, even as part of a rather malicious game to see what they would say. A human driver, I said, should surely steer for the barrier and hope for the best. In any event, there could be no certainty that the passengers and the driver of the vehicle would be killed or even injured, and they would certainly be at less risk in their vehicle than the unprotected old people on the crossing. I can see no rational or moral basis, in this example or any example of this kind, for a chooser, human or machine, to do anything but avoid the old people. It would be just wicked, obviously, not to do so. Surely the right thing to do is to design better safety cages into driverless cars, rather than programming AI to sacrifice soft targets. God save us.

We also need to recall that the law, both civil and criminal, is also an expression, if an incomplete one, of public morality. If you want to know what public morality says about something, look at laws developed over many, many years and refined by judicial precedents and all sorts of other things. It's a very good way of finding out one dimension of what public morality is. You don't need to invent computer games and offer them to a lot of people who spend barely any time thinking about the ethics of the dilemma that they are invited to express a preference about.
We also need to recall that the law, both civil and criminal, is more reliable, and more hallowed by time, than the moral machinists allow. The authors of that paper should have been aware that they also need parliaments, legislators, the courts, human rights conventions and many other things, not simply a crude vox pop, to be cognisant of what public morality says about these sorts of things. The moral machinists ask, in just paragraph four of the paper, and I quote, whether people prefer 'to spare the younger rather than the elderly', or whether they prefer to spare pedestrians who cross legally rather than pedestrians who jaywalk. Surely any decent citizen would immediately ask: what gives you the right to condemn people to death, even for a crime so wicked as jaywalking, without due process which commands public respect, which has heard their defence and convicted them, especially in a jurisdiction which, unlike the UK, has not abolished the death penalty?

Most matters of life or death have not been left to the bare preferences of people who have shown no evidence of deliberation. Juries, for example, in criminal trials are cautioned about their responsibilities, required to hear and pay attention to evidence and argument from both sides, prosecution and defence, and also from the judge, and to deliberate. There is no evidence of deliberation required of the people who pressed the buttons in the computer game for one choice or another. This is one of the many things that makes it not just not moral, but immoral. Rather, it is the case, it seems to me, that public morality, expressed through laws, civil and human rights conventions and many other publicly accessible ways, has in most civilised societies not simply been left to individual preference, let alone to popular preferences or popular prejudice. It has evolved through lengthy and often painfully informed opinion, over centuries and in many other ways.

There was another recent case which I was also involved in talking about in the UK, and that was the case of Charlie Gard, a very severely damaged infant whose parents wanted to keep treating him, although there was some evidence that he was in pain but no evidence that he would ever recover. And eventually the question of what should happen to Charlie Gard, whether he should be allowed to die, as the people looking after him at Great Ormond Street Hospital in London thought, or whether his parents should be allowed to condemn him to endless treatment to keep him going, had to be decided by the courts.
Nobody suggested that we should have a quick vox pop on the streets outside Great Ormond Street Hospital and see what the people thought about it.

I have a modest proposal to make on this whole subject, one that I hope will recommend itself, if not to reason, at least to morality. No one should deliberately kill the innocent without an excuse of overwhelming, plausible and judicially approved necessity, as in the Manchester conjoined twins case, as in Dudley and Stephens, and so on. If a vehicle is directable but out of control, the driver should not deliberately kill others rather than put herself, itself, its passengers and the vehicle at risk. If we are to put programmed driverless vehicles on the road, the roads would be much safer if the following were the first law of vehicle robotics, and I commend it to you. Here it is; I'm going to call it Oates' Law.

On the first of November 1911, Captains Scott and Oates and 14 other members of Robert Falcon Scott's doomed, as it turned out to be, Antarctic expedition set off from their Cape Evans base for the South Pole. On the 15th of March, Oates told his companions that he could not go on and proposed that they leave him in his sleeping bag, which they refused to do. He managed a few more miles that day, but his condition worsened that night. We cannot be certain of the dates, because all of the members of that polar party died, including Captain Scott himself, but we know from Scott's diary, which survived with their bodies, of the conversation that I'm going to repeat to you. On the morning of the 16th of March, or possibly the 17th, Scott was unsure, Oates walked out of the tent into a blizzard and to his death. Scott wrote in his diary: 'We knew that poor Oates was walking to his death, but though we tried to dissuade him, we knew it was the act of a brave man, and an English gentleman.' I apologise for the last bit. According to Scott's diary, as Oates left the tent he said, 'I am just going outside and may be some time.' Oates most likely died on March 17th. Oates' legendary self-sacrifice to save the lives of his colleagues, in the face of diminishing supplies of food and the need to make enough speed to reach further supplies, sets a moral example that might daunt most of us, let alone driverless cars. It is what we might expect of an autonomous vehicle, but probably not, perhaps, of a simply driverless one.
Whatever its priorities, it will surely be conscious, if it really is an artificial intelligence, that its solution to the problem will say something about the sort of creature it is, just as all of our actions say something about the sorts of people we are, and also about the sort of creature it will thenceforth be, in its own mind and in the minds of others. We are created by our past acts and our past decisions, both in what we think of ourselves and in what we detect that others think of us. And of course an analogous consciousness of what we are advocating or acting on in the Moral Machine case will say something about the morality of those who have seriously proposed such a scheme for deciding how ethically we might programme driverless cars; it says something about them, about their moral seriousness and their qualities as human beings.

Perhaps the most obvious answer to the spurious questions put by the highly immoral Moral Machine experiment is that driverless vehicles should always risk themselves and their occupants, relying on the safety built into the structure of their vehicles, rather than choose between different groups of innocent bystanders for slaughter, not least because of the obviously corrupting self-interest involved in their prioritising themselves and their vehicle rather than the other road users. True, not many people would want to ride in or own a driverless car programmed in this way, but perhaps that's for the best. Until, that is, we have real autonomous vehicles, who not which, that can take their own moral responsibilities seriously and who have received a proper education, not least in law and ethics. Thank you.