Well, as most of you know, of course, and as we have seen since yesterday, in fact, there are never-ending debates about the question of who or what has moral standing amongst moral philosophers, and it has occupied people of varied persuasions in recent times. I think much of this has been triggered by political arguments over the rights, or lack thereof, of non-human sentient animals as well, of course, as questions about human foetuses and embryos. Proponents of the view that sentient non-human animals have defensible moral rights claims typically defend their stance by arguing that these animals can be wronged in morally relevant ways by moral agents like ourselves, because they have interests that are the result of certain capacities like our own. So typically, then, what motivates rights claims and claims of moral standing are particular capacities of animals that are like ours. And given that it's only fair to treat like alike, or to give equal consideration to something that is relevantly equal, it's defensible to argue that, for instance, sentient animals should be treated in a manner that is comparable to sentient humans, at least as far as the infliction of pain and suffering is concerned.

So everything hinges here, then, on a non-human entity having capacities that are comparable to our own. The idea, essentially, is this: I can experience pain, which typically results in suffering; a given sentient non-human animal has the same or a similar capacity. It responds, in fact, to pain-inducing stimuli in a manner similar to how I would respond. I, like other humans, prefer to avoid pain and suffering. So the assumption is that a sentient non-human animal that responds to pain-inducing stimuli like us, and tries to avoid them, prefers, like us, not to experience them. The infliction of pain and suffering on these sentient non-human animals, then, appears not to be a morally neutral activity, seeing that it inflicts pain and suffering on them and so results in us harming them if we do so regardless. Based on varieties of this argument, many have accorded moral standing to sentient non-human animals.

But why does it matter, really, who we ascribe moral standing to? I think you might be able to get a good indication of the significance of the answers to these kinds of questions by undertaking a search on your favourite online search engine, such as, for instance, Google Scholar. A lot of people have a lot of different ideas about the question of who or what has moral standing.
So what I wanted to start off with was just to clear a little bit of ground, because in my own mind, when I started writing this, I wanted to explain how I understand moral standing. It seems to me at least, and this is not terribly controversial, what I'm saying here: whoever has moral standing is somebody who can be morally wronged by actions undertaken by moral agents. A moral agent is someone who has the capacity to distinguish between what is morally right and what is morally wrong. So if you have moral standing, and I am a moral agent, my actions affecting you can be morally right or wrong or neutral. If something has no moral standing, whatever it is that I do or don't do to it, that affects it, has no moral implications. So, if we agree, then, that a stone has no moral standing, me kicking the stone along the road is morally inconsequential, at least as far as the stone and as far as the universe are concerned. So if you think, then, that non-human animals, or people with certain disabilities, or androids have no moral standing, your treatment of them, whatever you might choose to do or not to do, would, all other things being equal, be a morally neutral activity.

Buchanan has suggested that we should distinguish between moral status and moral standing. Saying that something has moral standing simply means that it counts morally in its own right. However, the moral status of entities that have moral standing might not necessarily be the same. And then we can get into these hierarchies, if you want, that people have discussed here, and this is probably in line with the kind of hierarchy or ranking that Ruth and Tom try to get worked out by means of a large grant. Now, some beings with moral standing might have a higher or lower moral status than other beings who also have moral standing. And, in brackets, as far as I can see, again this depends entirely on how they relate to our own capacities. Nothing else counts; it just doesn't happen in our universe of analysis. So we can then have any number of arguments about the normative grounds on which we could base such rankings. But it's not unheard of in moral philosophy that sentient persons are seen as having a higher status than sentient non-persons. Why is that? Because they are paradigmatic human beings. They are like us. And of course, as Sarah explained in response to Michael's question yesterday, they had to be on top of our pecking order, on top of the greasy pole of moral status.
And that seems no surprise to me; it aligns, I think, with intuitions that many of us share, because this approach could, for instance, lead us to grant sentient non-human animals moral standing and moral status. Conceding this would still permit us, then, as we have seen here, to disagree vigorously about the question of whether a non-human animal's moral status should be the same as that of a human being with similar intellectual capacities, or that of a person. What seems uncontroversial to Buchanan is that moral status admits of degrees. Mary Anne Warren, in her book Moral Status: Obligations to Persons and Other Living Things, on the other hand, seems to propose that we should use moral status and moral standing interchangeably, because to have moral status is to be morally considerable, or to have moral standing. And yet she still finds herself in the business of wanting to create pecking orders. And so, in fact, for Warren, sentience is both a necessary and sufficient condition for establishing moral status. Status comes in degrees depending on the being's capacity for sentience, so flies would find themselves lower in the pecking order than pigs. And if you asked me to join a camp, I would probably get a membership card for that camp. But of course, why personhood should trump sentience is never quite clear, except when we realise that those who make those determinations are people who happen to be also persons. So it probably can't hurt, then, to place oneself on top of whatever pecking order one creates. We are very good at that.

Now, if we take the debate about sentient non-human animals as our starting point, it becomes clear that many of the arguments about moral standing, as well as about moral status, rest strongly, if not entirely, on drawing parallels from human capacities and needs that we have to those of sentient non-human animals who seem to have comparable, if not identical, needs. The concept of moral standing is inconceivable without us humans having experienced something that demands particular consideration of our needs, and we grant the existence of those needs to sentient non-human animals that seem to have similar needs. Moral standing, in other words, depends entirely on our experience of the world; otherwise it would be irrelevant. We would have nothing to go by in our evaluation, nothing at all. But if you think about it, most of us are typically not at all concerned about the moral standing of inanimate objects such as stones and rocks. And I wonder whether that is not indicative of our failure to imagine anything that is not somehow, somewhat, like us and our experience of the world.
David Hume was right, of course: moral feelings, as he put it, make up the essential and instinctive foundations of our morality. But that's an issue I want to return to in just a moment.

Let's talk about what I think are the easy bits. Think about chimaeras, for instance. There's been some exciting reporting recently about the injection of reprogrammed human-derived stem cells from patients with Parkinson's disease into the brains of monkeys. Concerns have been raised, on the one hand, about the well-dissected ethics of animal experimentation involving monkeys. Invariably, if one is opposed to the creation and use of monkeys for research purposes, one would be opposed to such projects. However, arguments also began evolving around the question of whether the insertion of human cells into these monkeys' brains should be seen as meaningfully impacting on the moral status of these non-human animals. So if we had pegged their moral standing at, say, sixty-seven out of a hundred, or out of five billion, or however many trillion, whatever you want to put it — but say we put them at 67 out of 100 for the sake of the argument, 100 being, of course, a paradigmatic human person, based on their being sentient beings with limited intellectual capacities — would the insertion of those human cells move them higher up on the greasy pole of moral status? Would they be more human as a result of this, and so move up the moral status pecking order?

Pig-human chimaeras have also been produced with a view to creating organisms that can be utilised for organ transplantation purposes. As before, some of you are opposed to the use of sentient non-humans as involuntary spare organ banks for human purposes. And the question arises, of course, at what point in time the pig-human chimaera might be transformed into a human-pig chimaera, and whether eventually it might turn into a human, depending on how far we might be able to advance such development projects. Would such changes impact meaningfully on the entity's moral standing or status? Bioethicists have worked their way, I think, successfully through standard ethical objections to creating such chimaeras: debates about the supposed unnaturalness of the procedures leading to chimaeras, endless varieties of slippery slope arguments along the lines of Frankenstein scenarios, concerns about violations of species integrity and, if all else fails, that bioethical all-time classic free-for-all, human dignity. So it seems to me as if arguments about the moral status and moral standing of such chimaeras are not terribly complicated to settle.
Well, positions developed in the context of the animal rights debate are applicable here, and David DeGrazia in particular has nicely described and analysed, in his book Taking Animals Seriously, the relevant markers that we could look at in this context: nociception, consciousness, pain, distress, fear, anxiety, suffering, pleasure, enjoyment, happiness, desires, concepts, beliefs, and so on and so forth. Now, while there's no consensus over the question of which of these markers is the relevant one — I'm talking with a weird accent at the moment, I apologise — so, while there is no consensus over the question of which of these markers is the relevant one, or whether a single one of them constitutes a trump-type inflection point from which everything changes, the arguments are, well, familiar. Only to speciesists, really, does the question of when chimaeras are sufficiently close to our species that they should have full moral status cause some headaches. To sentientists — like, for instance, Wayne Sumner at the University of Toronto — that is an irrelevant consideration. To defenders of a subject-of-a-life standard like Tom Regan, this would also be an irrelevance. In any case, I think that, with the exception of speciesism, whose stance I think is implausible, there's little that can be said beyond the well-established arguments from the animal rights debates in this context. People agree on criteria that can be defended, even if we hold divergent views on which of these criteria we consider to be the right ones. What these criteria have in common, however, is that their starting point is some arguable physiological equivalency to a paradigmatic human's capacities. It doesn't matter, then, whether you settle for Bentham's simple sentientism or Regan's more complex subject-of-a-life criterion; both standards are inconceivable without the human condition that gave rise to them. The starting point was us: our ability to experience pain and suffering, our disposition or capacity to be subjects of our lives, and so on and so forth.

So this approach works well with regard to whatever kind of entity arguably possesses the ability to have experiences like paradigmatic humans do. Now, the older folks amongst us might recall Data. Data is a fictional character in the science fiction series Star Trek. Data is a highly sophisticated android with human-like characteristics, including, strangely, having a sex. Data is a machine, but hopefully — at least for this Data; I'm not so certain anymore, but hopefully uncontroversially — Data is male.
In various episodes of the series, Data seems to regret his inability to experience emotions, including joy. But regret, in fact, might be too strong a label; it's more like a serious befuddlement. In the feature film Star Trek: First Contact — well, I hope you have seen it; if not, just assume there was this movie called Star Trek: First Contact — there's this evil Borg Queen. Some of you are smiling; the rest of you I've probably lost. In any case, in this feature film, Star Trek: First Contact, the evil Borg Queen offers Data what he desires. She reconfigures the android so that he also experiences emotions as well as sensations; skin grafts are thrown in for good measure.

So this takes us, of course, to the challenging ethical question about the moral standing and status of potential future artificial intelligence entities, created by us or potentially at some point created by themselves, that might display characteristics and behaviours similar to our own. Data, for instance, has a line in one of the series' episodes where he says, and I quote — I quote Data, no kidding — "I chose to believe that I was a person and that I had the potential to become more than a collection of circuits and subprocessors," unquote. The riposte in this episode is delivered by Beverly Crusher — in brackets, if you don't know her, she is the starship's doctor — and she says, "Commander Data, the android who sits at ops, operations, dreams of being human, but he never gets the punchline to a joke," unquote.

So, whatever he had failed to relay to humans — and so satisfy Rob Sparrow, about whom we will hear more in just a moment — everything changed in terms of his moral status and moral standing after the upgrade was done. Once he had the emotion chip installed and the skin graft that permitted him to feel sensation, he really was like us in morally relevant ways — whatever "relevant" is, anyway. His status arguably should have been pegged somewhere above us, due to his superior analytical skills and speed, as well as his bodily strength. And that, perhaps, was the Star Trek writers' answer to the question of what constitutes consciousness. Now, the troubling thing about post-Borg Queen Data is that we still had to take at face value his expressions of anxiety, stress, fear and so on and so forth. We didn't know — not in the series, and really, we could never know — how the skin grafts, the emotion chip and his android hardware actually interacted. Was that really the same as us experiencing anxiety, stress, fear and so on and so forth?
Thankfully, to date there are no androids developed anywhere near to a level where a Data-like scenario would even occur as a serious question. However, if we ever reached a stage where creatures like Cylons — cybernetic lifeform nodes, hence "Cylons" — could become a reality, that is, if we had to debate the moral standing or status of bioengineered hybrids of humans and machines, this empirical question would pose a very chilling, very serious challenge to address indeed. And the main reason for this is that we typically take expressions of pain, suffering and happiness at face value. We can never know, ultimately, whether someone who expresses pain, suffering or happiness actually experiences any of those. One of the first things that medical students are taught is to believe patients when they claim that they are in pain. This is so because, despite a few gotcha strategies, we have no means to investigate the veracity of such patients' claims. We act on our own experience of those feelings and assume that someone who is physiologically close to us can experience the same. So existing analyses, I think, hold up then — unless one thinks they are speciesist, and I would put it to you that they are still a bit speciesist, even if we pay no attention to that — because there's nothing else. If we do that, we can probably rely on the truth of those assumptions. So I think it's not too far-fetched, then, to grant the truth of such assumptions with regard to beings that are in vital respects like us, as biological chimaeras arguably could be.

But where would that leave Data, androids, human hybrids? My response to this would be that we would need to research the empirical question of whether what Data claims he feels is something that he actually feels in a manner that is similar to us; at least we would have something we could compare under those circumstances. If the answer to that question is yes, we can deploy our established modes of ethical analysis to address questions of moral status and standing, but perhaps more importantly, questions of our moral obligations. So I'm very grateful to Ruth and Tom for working this out for us in the grant project, because I'm genuinely curious — in brackets, I don't know whether it's actually possible to get these answers at all, but you know. Well, let's assume the answer is no, or "quite possibly, maybe". Where would that leave us?
And here's what I think are the more difficult bits: self-learning artificial intelligence machines like androids, for instance. There's quite a bit of literature on the ethical implications of creating and eventually interacting with self-learning artificial intelligences. And let me note, in passing at least, that I'm somewhat less excited about the challenges these machines cause in terms of their moral status and standing, even though these questions are fascinating. As a science fiction fan — and with reference to Terminator, Battlestar Galactica and Star Trek, certainly — I'm much more concerned about the question of whether they will eventually consider us humanoids as imperfect, to-be-eradicated, carbon-based lifeforms, as was pointed out in Voyager, and it's a trope in Star Trek, or as having moral status and as deserving to survive when machines eventually take over. Now, you might recall that to the Cylons, both in the 2003 miniseries as well as the 2004 Battlestar Galactica series, we were sort of sinful and flawed, and to be eradicated. Now, these are religiously fanatic machines, but they were still crazy.

In any case, the question of moral status and moral standing seems quite significantly more difficult vis-à-vis self-learning, artificially intelligent machines such as, for instance, androids like the pre-upgrade Data, who didn't have the emotion chip. So, for the purposes of this talk, I will henceforth refer to androids, but please interpret this as any kind of self-learning artificial intelligence machine that does not have an uncontroversial physiological capacity to feel pain. It doesn't have it because, unlike post-upgrade Data and unlike the humanoid Cylons, there is no biological component that could give rise to the kind of pain and suffering as we understand it when we consider higher mammals or equivalent lifeforms. So what would the situation be there — is intelligence a sufficient marker of personhood? Rob Sparrow — some of you might be familiar with his work; he's at Monash — argued years ago, when I didn't even think about any of these issues, for Turing-triage-type tests to assist us in determining whether an artificial intelligence possesses moral standing. Now, as we have heard — I think Walter mentioned it — the original Turing test, of course, had a different function.
The objective there was simply to test whether an android could manage to talk in such a way that a human, unaware that they were talking to the android, could be fooled into thinking that they were conversing with a fellow adult human. Turing was in no doubt, in his day already, that machines eventually would be able to succeed on that frontier, and indeed an artificial intelligence machine reportedly successfully passed such a test in 2014. Now, Sparrow's Turing triage type test is somewhat different. He proposes instead, and I quote, "a test for when computers have achieved moral standing by asking when a computer might take the place of a human being in a moral dilemma, such as a triage situation in which a choice must be made as to which of two human lives to save." We will know, he says, that machines have achieved moral standing comparable to a human "when the replacement of one of these people with an artificial intelligence leaves the character of the dilemma intact," unquote. And Sparrow has little doubt that, if we decide that moral personhood is only a matter of cognitive skills, it's just a matter of time until artificial intelligence attains moral standing. I think he's right there.

Sparrow instead is defending an account of moral standing where, for a machine to pass such a Turing test, it must be capable of being the object of our remorse, grief and sympathy; moral emotions such as these are partially constitutive of our concept of a person, he says. Then he goes on — this is Rob Sparrow, after all — quote: "But I claim machines are not appropriate objects of these responses because they are incapable of achieving the individual personality towards which they are oriented," unquote. So it seems to me that moral standing, for Sparrow, entails crucially being part of a network of others like us with whom we socially interact — like, for instance, in the African philosophy of Ubuntu — and machines, he is right up to a point, today in any case, would have some difficulty passing such a test. He eventually admits that his analysis boils down to something more trivial: it goes back to our intuitions about what such machines can and can't do, as opposed to what they might, at a certain point in time, de facto be able or unable to do. He writes: "We cannot seriously hold that machines have thoughts or feelings or a life, because a radical doubt inevitably arises as to whether they really feel what they appear to." And that's the crux of it: we will never know, probably, and could not.
Descartes would have offered similar thoughts vis-à-vis non-human higher mammals in his day. Descartes was clearly wrong; the jury's still out on Sparrow. Time will tell. If I had to place a bet, I would bet that Sparrow will eventually be proven wrong too. I'm sceptical that Sparrow's social relationship strategy is helpful. It encounters problems of the same kind that we are familiar with from the animal rights debates. There will be humans who are unable or unwilling to reciprocate in social relationships, or to participate in such relationships. Surely their moral standing and moral status cannot be contingent on their recognition by others. Why should someone's moral standing depend on whether they succeed in becoming the object of our remorse, grief or sympathy? If we have learnt anything from Descartes' failure to recognise animal pain and animal suffering, it's this: we are, as a species, not sophisticated enough to do justice even to others who are in fact uncontroversially like us in morally relevant ways, let alone to others who are very much unlike us.

So, leaving aside relational approaches to moral status or moral standing, it seems to me that two questions arise, one of which seems to be empirical in nature, the other normative in nature. I will give answers to both of these questions. I think both of my answers are pretty crappy, and I'm kind of curious whether you have better answers. So the empirical question is this: when androids express the experience of pain and suffering, happiness and misery, what do they actually experience, if anything? Is it comparable to what we would experience? Is it comparable to what, say, higher non-human mammals might experience? In case you are able to defend an affirmative answer to these questions, you would be able to deploy the same type of analysis that we have been going over, over and over again, since yesterday here. And so we would actually be able to make sense of this in terms of ethical analysis, and in some sense this would be good news, given that it's well-trodden territory. We would be able to bring sophisticated ethical frameworks to bear on this question. And, as I alluded to already in the section on chimaeras, the trouble is, of course, we can't. We do not have the means to make a definite determination, and Sparrow's answer is quite revealing here: again, in the end, it's not like us, therefore it ain't happening. He might be right today; he might be wrong tomorrow. And this takes me to the other question. I think it's crucial, and it's normative.
What if the empirical question is answered in the negative — it's not like us? We have no evidence that there is real pain, no evidence that there is real suffering, no evidence that there is real regret, no matter what they tell us. Would that give us moral licence to discard an android's expressions of suffering? And we should finally address the elephant in the room: is it normatively defensible to reduce answers to the question about moral status and moral standing to, essentially, criteria that look exclusively at the human experience, and to exclude everything that falls outside that realm from moral consideration, as we have been doing so far? That's pretty much Sparrow's response. But the same is true, of course, of sentientism-based approaches, and it falls short, I think, of what morality requires. It doesn't take us long to realise that our current strategy of making us the standard of value when evaluating anything else is difficult to defend.

And bear with me if I come up one more time with a science fiction example. So imagine there's an extraterrestrial species visiting us; it's not completely outside the realm of possibility, it might happen at some point. So they visit us, leave their spaceship, and make particular requests vis-à-vis their accommodation. They request a particularly low temperature, and they require a large energy source to regenerate during and after our meetings. As hosts, we are provided by them with the necessary specifications for the hardware. They also eventually provide us with some of the hardware, because our adaptors apparently are incompatible. Eventually they leave their spaceship, and the arrangements in our meeting room — they bring along their own energy source — ensure they have sufficient regeneration capability until their bodies and our energy sources can be made compatible. Our service androids look after the human species' representatives during the meeting. Quite quickly, an exchange of information occurs between the extraterrestrials and our service androids. The extraterrestrials, much like European politicians would do in business meetings with Chinese counterparts, bring up the issue of android rights, based on what they gathered from our androids: what they perceived to be abusive and discriminatory conduct by humans towards those machines, because we have already made our determination that none of that counts. Essentially, the extraterrestrials' charge is that we had unfairly ignored the androids' demands for a reliable energy supply, the appropriate temperature, and timely repairs, both in their work and their out-of-service environments.
So it is true that the androids did complain about these three issues, and about the temperature in particular, to us, but we were very quick to reject those complaints on the grounds that androids cannot suffer. As far as we were concerned, the androids were faking it — and there Sparrow would nod quite approvingly, apparently. It was also cheaper to do so, and in this future society capitalism still reigns. Now, the extraterrestrials wouldn't accept any of these explanations. They themselves can't experience pain or suffering or happiness. They bring up the example of cheap labour that they have imported to their homeworld. These biological organisms are in fact not dissimilar to us, in that they require sustenance and rest, and they are sentient by our definition of sentience. And while the extraterrestrials can't experience any of these things, they did listen to the complaints that they received when the first cohort of workers got weaker and weaker due to the lack of available food. They realised quickly that their standards of well-being are quite different to those of those labourers, and they realised that they could not apply their own standards of well-being to them. Their needs were different.

Now, I grant you, of course, that this is all completely made up, about as far-fetched as hypotheticals go. But I do think it goes to show, maybe, that our strategy of using our own experience, the experience we are familiar with, as the gold standard and the only standard might be impossible to defend when it's challenged. Different beings would have different needs. So as far as their moral standing or moral status is concerned, any attempt at making a determination would have to be different to that which we deployed in the case of non-human animals, insofar as there doesn't have to be any biological material directly involved in the production of artificial intelligence machines, and yet they show dispositions and capacities that seem comparable, if not identical, to some human dispositions and capacities that typically give rise to claims about moral status and moral standing. So part of the difficulty we face will undoubtedly be to answer the question of whether hypothetical human-like or sentient-being-like responses to particular stimuli in an artificial intelligence entity are morally equivalent to the responses given by beings that we know — that we think we know — are physiologically capable of having such experiences. And if they are not, well, would that give us moral licence to ignore them?

So let me wrap this up by talking briefly about the secret lives of stones. An obvious response to my hypothetical is that there are no such extraterrestrials.
There's no evidence that there are beings who can be wronged that are not, at a minimum, sentient. It's all nice and well to go on harping about the Borg Queen and Data, or Cylons, when they also do not exist. And I concede that. I'm not proposing that we grant rights to particular operating environments or particular treatment to current androids, but that is only because they have not yet reached the stage of sophistication where that seems warranted. In a way, they are like the stones we kick down the road, or the milkshakes that protesters throw at one another. For us, the reason why we do not grant rocks and stones a particular right to be left alone — that is, to lie forever where they are — is perhaps not only because they have no interests that we can wrap our heads around, for the reasons that I mentioned, but also because they have never made any effort to communicate their violated or threatened interests.

So, to be fair to those who think that I'm too conservative and that we should go further, there are in fact precedents supportive of their stance already in our legal systems. To give you an example, we grant legal rights today not only to persons, and quasi-persons like corporations, for instance, but also to things like rivers. Some of you will be familiar with the Wanganui River in New Zealand, which has been granted the same rights as persons. It must be treated by law as a living entity, even though clearly it is not — I was going to say clearly it is not, based on what we know, and we might be mistaken. The reason for this is that the Maori hold a worldview where the river is considered a living entity with valid moral claims and valid moral standing, and ultimately the political campaign succeeded. This is what happened. Now, I think of rivers as ecosystems with instrumental value that should give rise to certain protections. But unless one subscribes to a worldview that William Frankena has described as a holistic environmental ethic, where everything somehow counts, it would not be because the river deserves those protections in its own right, that is, in the absence of it being of instrumental value. Interestingly enough, this holistic ethic fails to address our standard lifeboat examples, and you can just think about it: in this ethic, where basically everything in the environment somehow matters, has some standing, say you have five million cancer cells and one person with cancer — who matters more? How do you quantify that? There's nothing at all in this ethic, I think, that would answer this question.

But what about the future? What about future post-upgrade-Data-equivalent androids?
My view would be this: unlike Sparrow, I would not want to eliminate the possibility of vested interests that are specific to future sophisticated androids, but I cannot envisage this beyond hypotheticals that have, currently at least, no relation whatsoever to reality. Far-fetched as the hypothetical with the visiting extraterrestrials was, it points us to something that might be relevant, though: if there is a sophisticated android — say, a machine with no biological components at all — that feels wronged by our treatment of it, we will need some evidence or argument from it, or from a proxy defending it, that supports the violated-interest or harm claims. It's easier, I think, for us to investigate claims of harm if there's a biological being that is somewhat familiar to us. It's harder, if not impossible, for us to investigate such claims if there's no biological component that offers us a chance to compare like with like and draw conclusions from that. So I think it's not unreasonable to place the onus on the something that claims rights for itself — on the claimant, or the human proxy, as has been the case with non-human animals or the river in New Zealand.

In our history, at least, rights activism ultimately led to real societal change, whether it was for people who were held as slaves, women, queer people or animals; they made their case, or we made their case, depending on how you want to read this. We made our case sometimes by deploying militant means, and we succeeded. At a minimum, I would expect androids to make their case, to stake their claim. Now, one could object that androids might not have the capacity to do so themselves. And again, I think the case of the Wanganui River is illuminating: the case was brought by the Maori people acting on the river's behalf. So one could easily imagine human proxies representing the androids, too — the Animal Liberation, sorry, the Android Liberation Front, or something like that. I don't think it's unreasonable to say that if something that might have justifiable moral claims against moral agents is unable to communicate those claims, and these claims are so significantly different to what we know today that we can't imagine them, we are not quite blameworthy for not recognising our failings. And this seems a reasonable response — well, maybe not, but to me it looks like a reasonable response — to the conundrum of wanting to do justice to something that might be, but that we haven't seen yet, and that would be very different to us.
It's also easier, then, than conjuring up justifications for interests that would be significantly different to our own — not that it's impossible, in fact, to conceptualise such claims today. In the meantime, what I think is worth reconsidering — you know what, I do think it's worth reconsidering — is whether sentientism and personhood accounts, or variations of them, really should be the only games in town. And I have no answer, by the way; it's not a rhetorical question. I would not be able to say yes or no, I'm just uncertain at this stage. Still, science fiction does give surprisingly rich fodder for analysis, and it might prepare us for things to come. So thanks very much for your time.