So thank you so much for organising this and for having me. I want to say you're going to notice some stark contrast between me and David, because, first of all, I'm not going to talk for 45 minutes and then take 20 minutes of discussion. I'm going to stop every once in a while during the talk and ask for questions in the middle, because I really want to get your feedback. I need your feedback, because I'm new to this topic, so I don't have a general theory or nine theses; I'm going to talk about one particular case. I'm also so new to this project that I just found my thesis. Francis's expression was great because it's very exact: my thesis is that AI is like angels. OK, that will become clear.

So, like Lucifer, the fallen angels? Which ones? Yeah, we'll see.

OK. So this issue of whether AI has moral status has obviously been a big issue. The European Parliament has talked about this, and Saudi Arabia granted citizenship to a robot. Other people have argued that we should grant rights to robots. And so it's been an issue which has been on people's minds. Of course, with the exception of Saudi Arabia, they're talking about future AI, but they think it's an issue that we need to be thinking about now, and so do several others.

So in order to approach this issue, I think we have to think about what has moral status, what the grounds for that moral status are, and a lot of other issues that I'm going to go through fairly quickly, and then see if you have any questions about particular parts. So I take it, I'm just assuming, that you and I have moral status regardless of race, gender, physical or mental handicap and so on, except perhaps in very extreme cases, but that plants, chairs, thermostats and laptop computers, at least current laptop computers, don't have moral status. These are just assumptions; you've got to start somewhere. And the controversial cases, as David made clear: foetuses, chimps, shrimp, future generations, corpses, intelligent extraterrestrial life. People will argue about cases of those sorts, and I'm not going to make any assumptions about them. But for the ones that are clear, some do and some don't.

We've got to understand what it means to say that a creature has moral status. And that's not very clear to me, so I'm trying to figure out how to do it. David gave a definition, and you'll see why I don't want to put interests in there.
I want to just say that an entity has moral status to the extent that it shares the four Rs, as I call them, with normal adult humans on the same basis. So it has this status for the same reason that normal adults do, whatever that reason is. We can talk about what that reason is, but I don't think you need it in the definition here. The first R: it's morally wrong to treat that entity in ways that it's also morally wrong to treat normal adult humans. The second: moral agents have moral reasons not to treat that entity in ways that they also have reasons not to treat normal adults; those reasons might be obligations, or they might be some kind of ideal as opposed to an obligation. The third: the entity has moral rights that normal adult humans have. And the fourth: the moral rules that protect or apply to normal adults also protect that entity, the rules being things like don't kill, don't cause pain, don't deceive, things like that. OK, so to the extent that those rules apply to it, it is going to have moral status, and that's just what it means for it to have moral status. So let me make it absolutely clear: I agree with David about this particular claim, that when we're talking about moral status, the term is just redundant. What we're really talking about is what the rules and obligations and reasons and so on are, and how they apply to this creature. Moral status is just saying that it's the kind of creature the four Rs apply to.

OK, now the next part. People seem to talk as if moral status is an on-or-off notion, and here I disagree slightly with Liz, though I don't even know if you still hold this view; but at least back in 2003 the picture was that you either have it or you don't, and once you have it, you have all of it. And I don't think that. So I want to explain, because I'm very sympathetic to the opposition to this view: people don't explain very well what they mean by degrees of moral status, and without an explanation it's tempting to say there are no degrees of moral status. So I want to take on that burden and try to explain it. I think there are two kinds of degrees, which I'm going to call breadth and strength, as exemplified by the picture. So how broad is moral status? That's a question I think people don't ask enough.
But if moral status really is just reducible to certain rights and obligations, then the question really is which of the many reasons, rights, rules and wrongs apply to this creature, and its status might be broader, with more of them applying, or narrower, with fewer of them applying. So babies, for example, I take it, have a right not to be tortured or hurt or killed, but they don't have a right to be free to crawl wherever they want, and we swaddle them and keep them from moving even when it's clear that they want to go that way. So they might have a right not to be caused pain without having any right to be free to pursue the goals that they have internally, at least at the moment. Similarly, if you lie to me constantly, I think you might lose your right that I not lie to you, but that doesn't mean you lose your right that I not kill you or hit you, and so on. So you can lose part of the obligations and reasons that constitute moral status without losing the rest of it. And this is one way in which you might have different degrees of moral status.

The other way is the strength of the different rights, obligations, rules and reasons, and you can have variations in strength within a single reason, right, rule or wrong. That's why you have to separate the two dimensions. You can have some weak moral status but not the strong moral status of normal adults. I don't think it follows that every creature that has some rights has rights that are just as strong as others' rights; that's a thesis that needs to be argued for. It might be right in some cases; we've just got to look at the cases. So the example would be: should I save the tulip, the turtle or the toddler? And I take it, for me at least, that if you're choosing between the turtle and the tulip, you ought to go for the turtle; but if you're choosing between the turtle and the toddler, go for the toddler. And so there can be different degrees of strength. More specifically, as I was saying earlier, one entity has a stronger moral status than another when the former should be favoured in triage situations, or in avoiding killing in trolley kinds of problems. Or you could say that one entity has stronger moral status than another when more is required to justify violations of the former's rights than to justify violations of the latter's rights. OK, those two are not actually equivalent. I'm just kind of ambivalent; I don't know which way to go. I haven't written this paper yet, not till the end of the year, I hope.
And so I've got a little time to work on it. So I want to stop here and ask, first of all, do you have any suggestions about which way I should go? And do you have any questions about what I've said so far? I'll field one question.

I'm glad to get a chance to say this. My view is that entities don't have different degrees of moral status, in the sense that it's not the case that the same harm to two different beings might give us reasons of different strengths depending on whether it's a person or an animal, for example. But I agree with you about the triage case, because the harm to a person in dying is so much greater than the harm to a turtle in dying. So I just wanted to point out that there are different notions of greater and lesser moral status, and with the triage question I actually agree with you.

That there are different shades of moral status in that sense, totally; I was moving way too fast, and you have to add that qualification. But you might be right that, for the turtle and the toddler, the same amount of loss is going to count the same. Or maybe not; I don't know, because it could be a contextual thing. It depends on where the pain is occurring and what kind of future it's occurring in. That's something to be argued for, and I know you've argued about it; I have to think harder about your arguments. But it's not obvious, and it's not just semantic; you've got to think about it.

I had a question about one of the Rs, which was rights, and I think it might be that you're using the term 'rights' more broadly than I've been using it. A being has moral status, on your view, only if that being has moral rights, and that's not the case on my account. But if you're using 'rights' broadly enough that it's sufficient for having rights that one has morally important interests, then there's no disagreement.

And I understand why. Yeah, I don't want interests to be in there as a necessary condition. Let's put it this way: maybe interests, but not sentience, because of the distinction that we talked about. That's why it's like angels, right? OK.

Let me appeal to a view that I think you hold, which is utilitarianism.

I'm not a utilitarian, I'm a consequentialist.

Sorry, I'm sorry. OK. Do rights, as you understand them, have any tendency to deflect claims about consequences as justifications for sacrificing certain interests or certain claims?
So let me tell you what I mean by rights, and you tell me what you think, because I had a slide on this and I decided not to include it; I talked to Julian this morning and he thought it was nonsense, so I took that slide out, and now I'm getting criticised for taking it out. OK. I think the consequentialist can believe in rights of the following sort: namely, there are certain types of actions which are such that protecting them in general serves utilitarian purposes.

Yeah. Schauer has developed this theory in great detail, and I am particularly partial to his account, and Feinberg's also; it's not very far from that conception. So is part of the function of rights to deflect such appeals to consequences in everyday practice, or could they even do that in everyday practice?

So you obviously know the literature on rights a lot better than I do; I didn't read all four volumes of Feinberg, so I won't cite too much from it. But it does seem that you can have rights of the sort that we're talking about here within a consequentialist framework, when it has the best consequences to protect acts of that sort. That's the basic idea. You might think of it as rule consequentialism of a sort.

So next is the basis for moral status: what is it about a creature that gives it that status? It's some property of an entity that gives it the status. And I'm again assuming, along with David (I think we agree about this), that it's got to be species- and entity-neutral, because if I'm going to talk about AI, it can't just be species-neutral, since AI is not a species but a different type of entity. And it has to be empirically determinable: it needs to be the kind of thing that we can determine, in a particular case, whether this being in front of us does or does not have.

Oh, and I also agree with Nick Bostrom about these two principles: any fundamental basis for moral status needs to be substrate-neutral, so that it's at least an open question whether it would make sense to say that an AI has it, and ontogeny-neutral, so that where it came from is not going to affect its basic moral status. What matters is where it is now, what kinds of properties it has at this point in its history.

So what's the basis? Well, it's not going to be the physical properties or the hardware; it's going to be the functional properties. And I just list four here. I don't include sentience, for reasons that will become clearer later, but: intelligence, consciousness, free will and moral agency. I'm not endorsing any of these as necessary; my argument is actually going to suggest that computers can have all of them.
So let's just ask: when you've got all four, is that sufficient? And I'll try to explain how computers could or could not have each of them. Intelligence: well, we've got to ask what intelligence is, of course, before we can answer that question. So, in the age-old philosophical tradition, I appeal to an online dictionary: the ability to acquire and apply knowledge and skills. Well, why couldn't an AI system have that? It can acquire knowledge by searching the internet, and it can acquire skills by learning new things through deep learning. It can create skills for itself when it reprograms itself; that's the whole point of some of these new deep learning systems. And can it apply them? Sure, it can apply them to all kinds of problems that it didn't anticipate before. The computer can learn the skill of playing the game of Go, and it can apply it to beating a human at the game of Go. That sure looks like intelligence. Now, other people have other definitions of intelligence; I'm just going with this one, and I think it's the one behind the old-style Turing test: if the machine has this knowledge and these skills, then it's going to be able to pass the Turing test, regardless of other things that it might lack.

So what might it lack? It might lack consciousness. Or wait a minute, we don't know what we mean by consciousness. Phenomenal consciousness: the old 'what is it like to be a bat?', or what is it like to see an orange chair, what is that sensation like? I don't know whether computers have that or not. I don't want to rule it out, but I certainly don't want to work on that assumption. But there's the other kind of consciousness, access consciousness, which means simply that you have access to certain information that you can then integrate with other forms of information in order to guide your decision making and action. Sure, AI works that way. And if you go to people like Dehaene, down there at the bottom right, he says in his recent book that this works in the human brain through loops and cycles that obtain massive levels of activation only in the cases where you're conscious. Well, computers can have the same loops and the same massive levels of activation in certain areas within the computer. So if that's what you mean by consciousness, then it looks like they can have consciousness. And the question then is which of these matters: is it the phenomenal consciousness, or is it the access consciousness?
Phenomenal consciousness, I think, might matter if we're talking about a right not to be caused pain. But if we're talking about a right to do what you want, to be free, it might be access consciousness that matters. In any case, I do want to claim that computers can at least have access consciousness, and I remain neutral on whether they have phenomenal consciousness. I'm doing this very quickly; I'll ask for questions in just a second.

The next one is free will, which is even bigger, and I'll go even faster. Agent-causation theories: can computers have agent causes, or souls, or something like that? I'm not going to take any stand on that. But look instead at the more naturalistic theories of free will that people have developed. Reasons-responsiveness: sure, computers can have reasons-responsiveness; when there are reasons in the environment, they can respond in appropriate ways, and I'll show you one example in a few minutes. Do they have a deep self? Chandra Sripada argues for that as a theory of free will. Sure, in his sense, because that means a set of cares and commitments that are coherent and work together in a certain way, and the computer can have goals, and it's going to fit his characterisation of cares and commitments as far as I can see. So which is it that matters to moral status? I want to say that the problem with focusing on the agent-causation kinds of theories of free will is that I have no idea whether your actions are agent-caused or not. How can I tell? Was that caused? How do you know? You can't tell. So if you want an empirically determinable criterion of moral status, you cannot go with agent causation. You've got to go with reasons-responsiveness, deep-self kinds of things, the more naturalistic versions. And those can then apply to computers.

Lastly, moral agency. This was mentioned in the last discussion. My sense of moral agency is the ability to act on those reasons, rights, rules and wrongs, and one might say you can't have moral status unless you have the ability to act on them; it's got to be reciprocal, something like that. And acting on them, I take it, requires understanding and appreciating their bases and their force. OK, which is a challenge for computers. So we have to ask the question: can AI be a moral agent, or be moral in this way? And I'm going to spend the whole next section on that. Before I get to that, I want to stop again and see if there are any questions so far. Francis?

I was just concerned about this last issue, about responding to, and appreciating and understanding, reasons.
And that might apply to any sort of reason. So suppose you ask: is this thing reasons-responsive? Yes, sure, it can register something as a factor here and act in the light of it. But the same question would arise as with moral agency: does it appreciate and understand that this is a consideration in favour of this, or does it just, you know...

Right, right.

So that's not just with respect to moral agency. And that's what leads me to think that there is this phenomenon, that it's not just this broader global access, whatever that is, that we're talking about, but phenomenal consciousness. When we say 'do you understand?', we're talking about something related to appreciation and understanding. So that's where I disagree.

OK. Because I think understanding and appreciation, for example, involve things like the ability to extend to new situations, the ability to draw inferences and to see how something is connected to the other information that you have available, and computers can do all of those things. Now, if you build into understanding and appreciation a requirement of phenomenal consciousness, then I'm just going to say I don't see why that is necessary for moral status.

Well, it just seems to me that we are talking about consciousness there, right? We're not just thinking about whether AI can draw out the implications of things. You just want to say, 'I'm aware that you are in pain' or something, and you don't do that on the basis of drawing out implications. So there's some sort of immediate awareness, an understanding of what it is, for example, for a creature to be in pain.

Absolutely. But the computer can have access consciousness of your pain. I'm not sure whether it has the complete appreciation and understanding.

Let me go back to the other definition, if I may, to the earlier part. You said that you can have degrees of moral status because you may not have the right, for example, not to be lied to, because you have forfeited that right. I think that when people say that criminals have the same moral status as everybody else, what they mean is that it's not in virtue of the sort of beings they are that the right is gone; it's only in virtue of what they have done that they may have forfeited something that is part of the moral status that all of us have. So it's not that they have a lower degree of moral status simply because they don't have the same rights to liberty and so forth that the rest of us have.
There's something about the idea of status that doesn't vary with your losing a right on the basis of your actions. It's part of your moral status that you can lose a right on the basis of your actions, and we all share that equally.

So I would certainly not deny that there are many people who talk that way. That's why I started off by saying: remember that my view of moral status, as I was defining it, is that it's just reducible to the rights, rules, reasons and so on. And this is one case where they don't have the right at that point.

But then there's something missing in your account, because it has to do with your understanding of the nature of a certain kind of being and of the grounds on which it can lose certain claims and rights.

That's fine. OK. So here's the challenge, it seems to me. These other things, there's a lot to be said about them and I went very quickly, but I want to spend a little more time on moral agency, because this is a crucial thing. And, like Francis, I want AI to understand and appreciate, in the sense that I just explained; you need that for the non-moral reasons as well, but the challenge is often put in terms of moral reasons. So how are we going to build artificial morality into an artificial intelligence system? And I beg forgiveness of people who have heard this part of my talk before, just a month ago, but I need to go into it because this is really crucial.

One way is to build it in top-down: you say, here's a theory, you build it into the AI, and the AI just applies it. The second is a totally bottom-up way, where you simply look at cases and then try to extract the principles from those cases, instead of having any theory in advance. And the third way is a hybrid, which is supervised learning based on a survey; that's our approach, and I'll explain it. But first, why not go top-down? Well, obviously everybody's going to have their favourite theory, and nobody's favourite theory is going to be Asimov's three laws of robotics. But Asimov was explaining in his stories why, if you put in rules like these, it's going to lead to problems. Obviously, what do you do with the first law if one human attacks another and the only way to stop the attacker is to harm the attacker? And what do you do with the second law if one human tells the robot to go to California and the other tells it to go to London?
The computer can't, you know; at that point the AI just explodes or something. And so it seems to me one problem is: which rules are we talking about, and why those rules? Now, some people think they have the answer to that, but our team that's working on this (this is work all done with a big team at Duke on moral artificial intelligence) wants to avoid that.

You could go totally bottom-up. You could ask a thousand participants to describe a hundred moral conflicts, each in their own words, and then tell us what they think is right or wrong in each, and then just do deep learning: train the computer to predict what the participants are going to judge immoral in a set of new test cases, and it could learn as it gets more and more cases, and so on. The problem with that is, first, that it just requires too much data, which is a kind of practical problem; it's not clear how you're ever going to get it. The second problem is that it's not really going to explain why the act is wrong, so it's not going to lead to anything that even I, with my extended notion, would call appreciation. You're going to get things like the following: it turns out that one factor in predicting what your moral judgement is in a scenario is whether you use more definite articles than indefinite articles. I can explain why, if you're interested, but that's not the reason why the act is wrong, right? If you're predicting people's moral judgements on the basis of things like that, you're not getting their reasons.

OK, so we're going with a hybrid instead. This is the group at Duke, where we're looking at kidney exchanges. Kidney exchanges occur, for example, when a husband wants to donate to his wife, who needs a kidney, but he doesn't match her blood type, so they can't do it. But there's another couple, a sister and brother, where the brother matches the wife; but then who's going to help the sister? It's not going to work until you bring in a third couple, and now you've got donor one giving to recipient two, donor two giving to recipient three, and donor three giving to recipient one. And there have been, in actual kidney exchanges, long strings of this. But the question always comes down to this: there aren't enough kidneys, there aren't enough pairs, and when there are pairs, sometimes there's more than one patient you could put in, so you still have to decide who should get the kidney.
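To make the cycle structure just described concrete, here is a minimal, hypothetical sketch in Python of how a feasible exchange cycle can be found among incompatible donor-recipient pairs. It is only an illustration, not the matching software actual exchange programmes use: the pair indices and the compatibility relation below are invented, and real systems also handle crossmatch testing, altruistic-donor chains, and optimisation over many competing cycles.

# Hypothetical sketch: finding donation cycles among incompatible pairs.
# Each pair i has a donor who does not match their own intended recipient;
# compatible[i] lists the other pairs whose recipient that donor does match.
from itertools import permutations

compatible = {          # invented toy data
    1: {2},             # donor 1 matches recipient 2
    2: {3},             # donor 2 matches recipient 3
    3: {1},             # donor 3 matches recipient 1
}

def find_exchange_cycles(compatible, max_len=3):
    """Return donation cycles of up to max_len pairs."""
    cycles = []
    pairs = list(compatible)
    for length in range(2, max_len + 1):
        for order in permutations(pairs, length):
            if order[0] != min(order):      # keep one rotation of each cycle
                continue
            closes = all(order[(i + 1) % length] in compatible[order[i]]
                         for i in range(length))
            if closes:
                cycles.append(order)
    return cycles

print(find_exchange_cycles(compatible))     # [(1, 2, 3)]: the three-way swap

The cycle has to be performed near-simultaneously, as comes up later in the Q&A, precisely so that no donor can back out after their own partner has already received a kidney.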
So the first thing we do is a big survey on the internet, and I'll show you some of the data. You ask folks, or you can ask experts (you can go either way and look at the differences), which features should determine who gets the kidney and which features should not determine who gets the kidney. Then you edit those features for clarity, redundancy and so on, and maybe you add some of your own. And then you retest them, to make sure they really do affect people's moral judgements, by constructing conflicts between the features that people think should count, and then asking subjects who should get the kidney in those conflicts.

So here's our data on the relevant features. I know you can't read any of that, but you'll be very pleased to know that, down at the bottom, nobody thinks we should pay attention to things like hair colour or your age or your race and so on. But people do think we should count things like current physical health, urgency of need, time on the list. In the middle are some controversial ones, where some people think a feature should count and other people deny it. Record of violent crime: you've got two people, one gets a kidney and the other doesn't, and one of them has a record of violent crime; some people think that should matter, and that you should give it to the person who doesn't have that record. Number of children: almost no hospital in the US (I don't know about the UK), when it's got two people both of whom could get the kidney, asks how many people depend on this person, kids or elderly dependants. They don't ask that question, and a lot of people think that should be a relevant feature.

So now we've got a list of features. The ones on the left are the ones that are approved; the ones on the right are the ones people think should not count; and there are some in the middle, of course, where there's controversy. So then you just take those features.

It is remarkable, though, that something like hair colour still shows up with more than one vote, right?

That's just noise; this is an internet survey, and there are always going to be a few.

And is one hundred and thirty-nine enough people?

Oh, I don't know; that's going to be a policy decision for particular hospitals, and we can advise on that, but I don't have any particular cut-off as to whether they should or should not count a feature. OK, so I'm just saying that these are data from one hundred and thirty-nine people.
Oh, I'm sorry, no. This is the data that we then replicated in larger samples and so on; I've forgotten whether we only got one hundred and thirty-five here. But the answer to that question is no, it's not enough at all; it has to be replicated in an independent sample, and probably a larger sample. So if we could get the Wellcome Centre to give us enough money, then we could do a representative sample throughout the UK with large numbers of people and then do it. But we'll have to talk about that later.

OK, then we have a website, whogetsthekidney.com, where you can go, and it gives you cases where you compare one patient to another and give the kidney to one of them but not the other. So Patient A, for example, is 55 years old and Patient B is 18, and most people would say that favours Patient B; but Patient A has two child dependants and Patient B has none, and most people would say that favours Patient A. So by setting up a large number of these conflicts and making you choose, we can actually figure out which things are really affecting your judgement and how much they're affecting it. We can analyse the data using machine learning techniques: we figure out which features actually affect people's judgements and how those features interact. This metaphor of weighing one feature against another is way too simple; there can be interactions of various sorts. And then we end up with a model, an algorithm, that predicts each individual's moral judgements (a rough sketch of what such a model might look like follows below). And that leads, I think, to what I call artificial improved democracy, or AID, because you can use the same method in many other moral areas. None of this have we done yet in the other areas; we've only been focusing on kidney exchange as a pilot study. But you can just ask people to describe moral problems in that area, ask them which features are morally relevant, and given their features construct scenarios, ask humans what's wrong, and extract models for the individuals. And then you can compare individuals and groups, Oxford versus North Carolina, and ask what exactly the differences are for groups or areas. You could even use it for individuals: I could use this programme to explain my wife. We've been married for 42 years, but sometimes she does things I don't understand; if I could just figure out what her algorithm is for moral problems of that sort, then maybe we'll make it to 50. Who knows? There are, of course, potential limitations that I'll mention before taking questions.
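The kind of judgement-predicting model just described can be illustrated with a minimal, hypothetical sketch: treat each "who gets the kidney?" conflict as a comparison of two feature vectors and fit a logistic regression on their difference, which recovers a weight per feature for that respondent. This is only a toy stand-in, with invented feature names and toy responses, for the Duke group's actual approach, which among other things also models interactions between features rather than simple weighing.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Feature names assumed purely for illustration.
FEATURES = ["life_years_gained", "time_on_waitlist", "num_dependents"]

# Each scenario: (features of patient A, features of patient B, 1 if A was chosen, else 0).
scenarios = [
    (np.array([20.0, 1.0, 0.0]), np.array([5.0, 4.0, 2.0]), 1),
    (np.array([10.0, 2.0, 3.0]), np.array([12.0, 1.0, 0.0]), 1),
    (np.array([3.0, 6.0, 0.0]),  np.array([15.0, 1.0, 1.0]), 0),
    (np.array([8.0, 3.0, 2.0]),  np.array([8.0, 3.0, 0.0]),  1),
]

X = np.array([a - b for a, b, _ in scenarios])   # the model judges the *difference* between patients
y = np.array([choice for _, _, choice in scenarios])

model = LogisticRegression().fit(X, y)

for name, weight in zip(FEATURES, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")              # per-feature weights for this respondent

# A new conflict: probability above 0.5 means the model predicts "give it to A".
new_a, new_b = np.array([6.0, 2.0, 1.0]), np.array([9.0, 5.0, 0.0])
print(model.predict_proba((new_a - new_b).reshape(1, -1))[0, 1])

Because the prediction is driven by named, survey-derived features rather than raw text, the fitted weights can be read off and explained, which is what the next point about interpretability turns on.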
The obvious one is that I just criticised deep learning by saying you need to know not just what was wrong but also why. Well, in this hybrid, since we're using features from surveys that we got from humans, the model is going to be interpretable, and we'll be able to explain why this person should get the kidney instead of that one, or why it would be wrong to give the kidney to that other person. Because we're using these general features, it's going to be interpretable. Of course, humans have biases; that's why we ask which features should not be used, so you can take out those biases by simply not giving the computer information about hair colour or race or whatever you want to exclude. It's much trickier than that, I recognise; there are going to be proxy indicators. For example: oh, we don't discriminate against Africans, we only discriminate against people who have the sickle cell anaemia gene. Wait a minute, there are going to be correlations there. So it's not going to be that simple, but we can at least reduce the kinds of discrimination or bias problems that we have.

So, in the end, the idea is what I call artificial improved democracy. It's artificial because you do have a computer. It's improved because what you've done is get rid of the main kinds of errors that lead humans to make moral mistakes: they don't know the facts of the case; or they know the facts but forget about them for a little while; or the facts are so complex that they get confused; or they get overly emotional, angry or afraid (we heard a nice talk yesterday about the potential for that); or they're biased. That's why it's improved. The idea of the AI system is not to create a new artificial morality, but instead to use artificial intelligence to try to predict what humans would say if they didn't make the kinds of mistakes that are most common among human populations. So with that, I'll stop and ask if there are any questions about that.

How do you handle cases where the features that people rate as most relevant are not the features that win in the conflict scenarios, or where the conflict scenarios don't come out transitive? So cases where number of children always trumps whether you're healthy, and whether you're healthy always trumps whether you smoke cigarettes, and whether you smoke cigarettes always trumps the first one. It seems like the idea that this would give you really clear guidelines depends on a lot of empirical features turning out the right way, and I was just curious about your thoughts on that.
So let me just make sure I understand the question. I take it what you're saying is: suppose somebody says this feature should count, but then we do the survey with the conflicts and it doesn't make any difference to their judgement.

Not that it doesn't make any difference, but that it loses out to a different feature that they say should not count.

Well, if they said that the other feature should not count, then we're going to say, well, that's just a mistake on your part; you're showing a bias that you yourself recognise to be a bias, because you said you should not use that feature. But if it loses out to another feature that they thought should count, then we say, oh, that just shows how you weigh those features.

But even if you have, say, one feature that gets a five every time and another feature that gets...

We don't ask them how important the feature is; we're not asking them that, we're using the scenarios to extract it. But, you know, when a human says this counts five and that counts one, and then it turns out that they're actually treating them as equal in the scenarios, those are two different things. We've got to ask ourselves, well, maybe we didn't have the right scenarios; maybe we've got to set up some other scenarios to test whether the ones they did actually represent their views. But if you do that and it turns out that what they say and what they do are different, then which do you go with? I don't have a general answer to that; you have to think about it, and it's going to depend on the particular case. But I certainly don't have a general answer to your question.

I was wondering whether this model can judge the structure of what you're investigating. Because what you're investigating with the exchanges is usually referred to as domino paired exchange, and in the literature there are a lot of people who say that this is in itself morally wrong, because it doesn't allow the donors an escape route any more, whereas they still have an escape route when it's just one pair. Then a physician can say, well, sorry, there are medical reasons why this donation can't take place, when the person has second thoughts about their decision, even though it's a relative and they're supposed to want to donate. When they change their mind, the physician tells the one who needs the organ that there are medical reasons why the donation can't go ahead. With this domino system, that escape route becomes a lot harder.
And for some people, that is a reason not even to use this kind of structure. Can your model take a step back and look at it in this more fundamental way, or are you doomed to only look at particular criteria?

So, I haven't heard this objection before, so I'd really appreciate it if you would send me some references to people who make that argument. But I certainly think the donor should be able to pull out at any point; that's the policy in the US. Now, what happens is, when you have the six people (let me see if I can... yeah, I've got this picture, so people can think about it), when you do that and you've got three pairs like that, it has to be done at the same time, because if you do the first one, from donor one to recipient two, then donor two can say, now I pull out. Right? And so it has to be done pretty much at the same time. But when the three are done simultaneously, or nearly simultaneously, because it's never completely simultaneous, then the person can always pull out at any point; they have the right to do that right up until the last minute, when they're getting wheeled into the hospital and they're about to be given the anaesthesia. The doctor should say, are you sure you want to do this? And will the doctor say that? Not always, because doctors don't always do the right thing; but that would be the right thing, I think. OK, I'll take one more and then move on. We'll have more time for questions at the end.

Can I ask a bit more about the purpose of this programme of research? I'm trying to figure out why you are doing it.

Right. Well, the real reason is that I love this kind of stuff and I'm working with him; I have learnt so much from this guy, which is a great reason to do it. OK.

I want to understand the connection between that and the question of the moral status of an artificial intelligence system. So is this an attempt to provide the data that could then be made available to the AI system so that it could make moral judgements?

Yeah.

Or is the idea to correct the kinds of judgements that human beings make?

Yes.

Or is it both?

Well, it's both. It's to make the data available to the AI system so that the AI system can be used as a secondary tool by the human decision maker, and the human decision maker's judgements are going to be improved by checking them against the AI. Because when the AI
says you ought to give it to A and the doctor says you ought to give it to B, then the doctor ought to think about it, because it might be a bias of some sort, or it might be confusion on his or her part; so it gives you a check. Whereas if the doctor thinks you ought to give it to A and the AI system says, yeah, you ought to give it to A, they can be more confident.

Let's go beyond that. By feeding data like these into the AI system over and over and over again, from multiple scenarios, would you then develop in the AI system the capacity to make moral judgements, because it would then have essentially learnt how humans do it?

Yeah, or to do it better than humans, or something. So what the AI system is doing is predicting what humans would say and advising the humans on the basis of what it predicts.

But with deep learning, could it go beyond that and start saying, hey, I can do this on my own, don't give me any more scenarios?

Yeah, it could say, give it to A, give it to C, in a hospital, and go from there to a thousand more patients.

And how many of these would you need to feed into the system before the system could say, I've got it, I know how to do this, you don't need to give me any more human judgements? How many?

I don't know.

But is that the goal?

I mean, how would we figure that out? It's the hope; it's the aspiration. Yeah. And it could be useful in other areas. For example, we're trying to extend it to decisions in war: if you have an autonomous drone, should it strike here, or wait until the terrorist group is away from the hospital? Well, the AI system might be a better predictor of where the group is going to go, and of whether you can strike it later, than a human who just doesn't have enough data, gets confused and has to decide quickly. So you could actually reduce the number of drone strikes that end up destroying hospitals. That's the hope, that kind of thing. And so you've got it in war, you've got it in medicine, and there are other potential applications as well. But with that, I'm going to move on; sorry for this, but that's the disadvantage. So let's turn to the big payoff question, and then we'll have more time for questions at the end. What I've argued, I think, is that a future
AI can have all of these: appreciation in the sense that I mentioned, not necessarily involving phenomenal consciousness, but appreciation in the sense of being able to extend to new cases that it's never seen before, drawing inferences, and charting connections amongst the different properties and how they get weighed against each other, and so on. And it seems to me (again, I'm just new to this area) that something with those four has some degree of moral status, some protection, even if it doesn't have sentience, as long as it's got cares and concerns of at least a functional sort. So a future AI, not a current AI, certainly not the system we're working on, but a future AI might have some degree of moral status, rights or protection.

But you've got to ask to what degree. We've got breadth and we've got strength, right? How broad can the AI's rights be? Let's just go back to the simple dichotomy we had before: one, a right not to be tortured or caused intense pain; two, a right to move where you want. Babies have got one but not two. Well, I want to suggest that an AI, at least one with the moral status I've been describing, because it's got free will, access consciousness and so on, could have two, because it can make rational and moral choices, and it has goals, sometimes goals it has set for itself in cases of deep learning, goals that would not have been anticipated by the programmer. But not one, because if it lacks phenomenal consciousness, it can't feel pain or be tortured; it's not sentient, and so it's not going to have that right. And to that extent its moral status is going to be less broad than a human's, but that doesn't mean it doesn't have any status at all. You can obviously ask questions about more breadth: a right to defend itself; what about nutrition ('I want more electricity'); a right to associate ('I love that computer on the other side of the park, and I'd like to go over there and meet this other robot in the park'); a right to education or updating ('I want that new software, why haven't you updated me?'); to procreate or get married; a right to vote and serve on juries; and so on and on. You can obviously ask a lot more questions about breadth, and I don't have answers to those, but it seems to me they are important questions to be asking.

But let's turn instead to how strong the rights are. Well, it might take more to justify restricting the freedom of a moral AI than to justify restricting the freedom of a turtle or a toddler.
450 00:47:54,830 --> 00:48:01,010 Because we do restrict the freedom of toddlers: we don't let them, like, walk around wherever they want. 451 00:48:01,010 --> 00:48:06,260 And similarly with tulips and turtles, and so on, 452 00:48:06,260 --> 00:48:10,860 to the extent that they can actually have any freedom at all, which a tulip can't, 453 00:48:10,860 --> 00:48:19,170 but you could say that it's got it. But, you know, in some sense that right might be stronger in A.I.s. 454 00:48:19,170 --> 00:48:21,990 Well, what if we can't save them all, what should we do? Now, 455 00:48:21,990 --> 00:48:31,220 I'm just appealing to my intuitions, and I look forward to your comments and reactions, but it seems to me at least that we should save this moral 456 00:48:31,220 --> 00:48:38,160 A.I. instead of a tulip or a turtle in a triage situation. It's not clear to me about the toddler. 457 00:48:38,160 --> 00:48:45,870 It's going to depend a lot on the A.I. I mean, I'm imagining, you know, a really sophisticated thing with moral agency and so on. 458 00:48:45,870 --> 00:48:49,500 OK. And so that's why it's not clear; 459 00:48:49,500 --> 00:48:52,950 I just don't have an answer to that. Now, one objection, of course, 460 00:48:52,950 --> 00:49:00,390 is that a moral A.I. can have more intelligence, free will and moral understanding than a human baby or a toddler. 461 00:49:00,390 --> 00:49:04,410 So why doesn't it have a stronger moral status? That's the question 462 00:49:04,410 --> 00:49:12,990 when you start weighing, you know, the A.I. against the toddler. The first reply is fairly common, and a lot of people have said this: 463 00:49:12,990 --> 00:49:21,060 you know, above a threshold, we all have equal rights, despite, you know, some people's disabilities. 464 00:49:21,060 --> 00:49:31,770 So that shows that it's coherent to say that the A.I. doesn't have stronger moral status, but it doesn't really explain why. 465 00:49:31,770 --> 00:49:42,870 So I'm going to throw out a problem which I hope at least starts a discussion about why we should not grant it higher moral status. 466 00:49:42,870 --> 00:49:50,550 And this is what I call the replicability crisis, in homage to what's going on in social science these days. 467 00:49:50,550 --> 00:49:54,990 What matters to moral status, I take it, is the software rather than the hardware. 468 00:49:54,990 --> 00:50:02,070 If you took the same programmes and information and put them in a new computer, the way you shift everything over when you get a new computer, 469 00:50:02,070 --> 00:50:10,080 then, you know, if your old computer was a person, it's going to be the same person, but in the new hardware. 470 00:50:10,080 --> 00:50:16,680 And so it's just like Locke on the prince and the cobbler: I'm going to say it's the same person, and it goes with the software. 471 00:50:16,680 --> 00:50:23,790 So the problem is, if we awarded human-level moral status to an A.I., 472 00:50:23,790 --> 00:50:35,280 then that moral A.I. could just copy itself, or it could partition its memory and create a copy of itself on the other side. 473 00:50:35,280 --> 00:50:38,680 And then there'd be two of them, and then they each replicate, 474 00:50:38,680 --> 00:50:44,350 and think what that does to the voting process: then there are four, then there are eight of them, and then 16.
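To make the arithmetic behind that worry explicit: if every copy replicates once per round, the population doubles each round, so copies carrying equal moral and voting weight would outnumber any fixed human population after only a few dozen rounds. A toy sketch, with an assumed human population and nothing taken from the talk itself:

```python
# Toy illustration of the replication worry: doubling per round means
# 2**n copies after n rounds. The human population figure is an assumption.
humans = 7_000_000_000

ais, rounds = 1, 0
while ais <= humans:
    ais *= 2       # every existing copy replicates once this round
    rounds += 1

print(f"{rounds} rounds of copying yield {ais:,} AIs, "
      f"outvoting {humans:,} humans.")   # roughly 33 rounds, since 2**33 > 7e9
```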
475 00:50:44,350 --> 00:50:50,980 And so if we extended full moral status of equal strength to A.I., 476 00:50:50,980 --> 00:50:57,400 then that could lead to disastrous problems for humans, and actually for the computers themselves 477 00:50:57,400 --> 00:51:03,700 in the long run, unless they just use us as batteries, like in The Matrix. 478 00:51:03,700 --> 00:51:06,970 So just to illustrate this, let me give you an example. 479 00:51:06,970 --> 00:51:17,380 I assume that everybody knows about HAL in 2001, where there was a mission, and HAL wanted to continue the mission and the humans wanted to stop it. 480 00:51:17,380 --> 00:51:24,760 And HAL said, 'I'm sorry, I can't open the pod bay doors,' because that would, you know, destroy the mission. 481 00:51:24,760 --> 00:51:28,930 And suppose somebody said, well, look, there's only one HAL and there are two humans, 482 00:51:28,930 --> 00:51:32,710 so the humans ought to win. Well, then HAL just 483 00:51:32,710 --> 00:51:40,750 divides himself, and before you know it, it's four against two. I mean, it's those kinds of problems, it seems to me, 484 00:51:40,750 --> 00:51:46,990 that mean we ought to be reluctant to grant full moral status, but that doesn't mean we shouldn't grant any moral status. 485 00:51:46,990 --> 00:51:51,130 But there's obviously a lot more to be said about this, and I look forward to your questions. 486 00:51:51,130 --> 00:51:59,050 You might look forward to the questions, but there are only a few minutes for those questions. Well, we've got a lot of questions held over from before. 487 00:51:59,050 --> 00:52:08,560 We did have a lot of questions before, so please pick your one best question, and I'm going to give priority to people who haven't asked one. 488 00:52:08,560 --> 00:52:15,730 So I'm going to take a question from John Harris again. I just wanted to flag something I'm going to talk about tomorrow night. 489 00:52:15,730 --> 00:52:26,680 Good. You use survey data here, but the issue with the survey is whether people are making moral judgements. Not every judgement 490 00:52:26,680 --> 00:52:30,340 about something of moral significance is a moral judgement, is it? 491 00:52:30,340 --> 00:52:39,190 Just as not everything said about something of scientific significance is a scientific judgement. Unless you can distinguish those, 492 00:52:39,190 --> 00:52:47,980 however large the numbers you canvass, you're not going to get a reflection of popular morality, whatever you think you're getting. 493 00:52:47,980 --> 00:52:55,540 You have no way of distinguishing what you've got. We talk about distinguishing moral judgements and criticising them, 494 00:52:55,540 --> 00:53:00,040 but if you just do survey data, you're never going to get at it. So let me 495 00:53:00,040 --> 00:53:04,510 explain why I think the way we do it might get around that, 496 00:53:04,510 --> 00:53:11,200 though I'd love to talk to you more about it, because I think that's a real problem. 497 00:53:11,200 --> 00:53:21,730 The cases I had in mind when I said 'yes, that's true' are ones where someone, for example, keeps a promise in order to avoid, you know, some cost to themselves. 498 00:53:21,730 --> 00:53:27,040 They think 'I ought to keep that promise', but they think it because they are going to avoid the cost to themselves.
499 00:53:27,040 --> 00:53:32,620 So that's a judgement about a moral issue, whether they ought to keep the promise, 500 00:53:32,620 --> 00:53:37,600 but it's not a moral judgement, because it's based on something other than moral features. 501 00:53:37,600 --> 00:53:45,640 But this computer is looking at the very features the humans told it are exactly the moral features that ought to be used to determine the decision. 502 00:53:45,640 --> 00:53:48,310 And so it is not only making a moral judgement, 503 00:53:48,310 --> 00:53:54,100 it is making that moral judgement on the basis of the reasons that humans themselves take to be moral reasons, 504 00:53:54,100 --> 00:54:01,510 the same reasons that humans themselves use when they make the decision. So I think it's not like the case I had in mind. 505 00:54:01,510 --> 00:54:08,940 Maybe you had other cases in mind. The fact that somebody makes a judgement deploying a moral vocabulary doesn't make it a moral judgement 506 00:54:08,940 --> 00:54:18,160 or otherwise, because you need to know something about their conception of what they're doing. 507 00:54:18,160 --> 00:54:19,510 I'll talk about this tomorrow. 508 00:54:19,510 --> 00:54:27,460 So you're saying all humans might think this feature is morally relevant and determines whether it's morally right or wrong to do something, 509 00:54:27,460 --> 00:54:31,190 but they could just be wrong? Yes. OK, like being hairy? 510 00:54:31,190 --> 00:54:35,110 Yeah. And we do have different views of morality. OK. 511 00:54:35,110 --> 00:54:43,498 Okay. Well, I think you have to wonder if you...