So the Uehiro Centre is based within the Faculty of Philosophy and is devoted to doing research in practical or applied ethics, as the name suggests. It's directed by Julian Savulescu, the Uehiro Chair in Practical Ethics, and currently includes 24 other academic staff members, as well as five doctoral students.

Now, we have a very broad remit, which is basically just to bring philosophical analysis to bear on practically important ethical questions: questions about how we want to live, how society ought to be arranged, and so on. And as you might suspect, given that broad remit, we work on a fairly diverse range of topics, some of which are listed here. Historically, we've had quite a strong focus on ethical issues raised by medicine and the life sciences, but we've also worked in other areas like criminal justice ethics, internet ethics and military ethics.

OK, so several of our projects bring philosophy to bear on questions in AI ethics, and what I want to do in the few minutes that I have left is just to briefly mention some of these. I'm not going to cover all of them; just, by way of illustration, some of the areas in which we think philosophical ethics might have something to say about AI ethics. And then Carissa is going to go into a bit more detail about exactly how philosophy can contribute to debates about AI ethics.

OK, so the first strand of research that I wanted to mention is the work that Julian Savulescu and Guy Kahane are doing, along with Chris Gyngell from Melbourne University, on the role of public preferences in informing the development of ethical algorithms. As you're probably all aware, one of the challenges posed by, for example, autonomous vehicles is specifying how these vehicles should respond when faced with ethical dilemmas, like a choice between sacrificing an occupant of the vehicle and sacrificing a pedestrian, or a choice between running over an adult and running over a child. One natural response to this problem has been to go out into the world and collect lots of data about what the public think about how autonomous vehicles should respond to these kinds of situations, and actually quite a lot of that work has already been done. But what's not at all clear is what we should be doing with the data that we're getting out of this empirical research, essentially descriptive data about what public preferences actually are. Because it certainly doesn't seem to be the case that we should just unthinkingly implement the public will in this kind of area.

So suppose that in a particular society most people thought that autonomous vehicles should prioritise saving the lives of white people over others. Clearly, it wouldn't follow that that's what we should programme autonomous vehicles to do. But on the other hand, it does seem plausible that at least some public attitudes should play some role in informing the design of these algorithms, and what Julian, Guy and Chris are thinking about is exactly what that role should be. And just to cut straight to one of their conclusions, one of their proposals, probably the most controversial, has been that public preferences should be put through a kind of philosophical filter before being built into algorithms. So we should first check whether these preferences are actually consistent with at least some plausible and widely held philosophical or ethical theories, and only preferences that make it through this kind of filter, what they call laundered preferences, should be fed into algorithm design.
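To make the idea of a laundering filter a little more concrete, here is a minimal sketch in Python of the general shape such a filter could take. The preference attributes and the "theory" consistency checks are hypothetical illustrations chosen for this example; they are not the researchers' actual proposal or criteria.

```python
# A minimal sketch of "preference laundering": keep only those elicited public
# preferences that at least one plausible ethical theory can accommodate.
# The preference attributes and theory checks below are hypothetical.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Preference:
    """A preference elicited from survey data about dilemma cases."""
    description: str
    discriminates_on_race: bool
    minimises_expected_deaths: bool


# Each "theory" is represented only as a crude consistency check:
# does the theory rule this preference out, or can it accommodate it?
def consistent_with_consequentialism(p: Preference) -> bool:
    return p.minimises_expected_deaths and not p.discriminates_on_race


def consistent_with_basic_equality(p: Preference) -> bool:
    return not p.discriminates_on_race


THEORY_CHECKS: List[Callable[[Preference], bool]] = [
    consistent_with_consequentialism,
    consistent_with_basic_equality,
]


def launder(preferences: List[Preference]) -> List[Preference]:
    """Return only the preferences that at least one theory can accommodate."""
    return [p for p in preferences if any(check(p) for check in THEORY_CHECKS)]


if __name__ == "__main__":
    raw = [
        Preference("Swerve to minimise total deaths", False, True),
        Preference("Prioritise saving white passengers", True, False),
    ]
    for p in launder(raw):
        print("Feed into algorithm design:", p.description)
```

On this toy setup, the racially discriminatory preference fails every check and is filtered out, while the deaths-minimising preference survives and could be passed on to algorithm designers.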
OK, so the second strand of our research that I wanted to mention is a programme that I'm leading on the ethics of predicting and influencing behaviour, supported by the Wellcome Trust. This work has intersected with AI ethics in a few places. One of these is the topic of crime prediction, where we've been doing some work with colleagues in the Department of Psychiatry to try to make some, I guess, practical suggestions about how we could improve the kinds of crime prediction algorithms that are increasingly being used in criminal justice and in forensic psychiatry. And by "improve" I mean making them more accurate, but also mitigating some of the ethical concerns about bias and unfairness.
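To give one concrete sense of what "bias and unfairness" can mean for a risk tool, the sketch below computes a standard fairness diagnostic, the gap in false positive rates between two groups at a fixed risk threshold. The data, the threshold and the choice of metric are illustrative assumptions on my part, not the measures the researchers themselves are using.

```python
# A minimal illustration of one common fairness diagnostic for a risk tool:
# the gap in false positive rates between two groups at a fixed threshold.
# All data here are made up for illustration.

from typing import List, Tuple

THRESHOLD = 0.5  # hypothetical cut-off above which someone is flagged "high risk"


def false_positive_rate(records: List[Tuple[float, bool]]) -> float:
    """records: (risk_score, actually_reoffended) pairs for one group."""
    negatives = [score for score, reoffended in records if not reoffended]
    if not negatives:
        return 0.0
    flagged = sum(1 for score in negatives if score >= THRESHOLD)
    return flagged / len(negatives)


group_a = [(0.8, False), (0.3, False), (0.9, True), (0.6, False)]
group_b = [(0.4, False), (0.2, False), (0.7, True), (0.3, False)]

gap = false_positive_rate(group_a) - false_positive_rate(group_b)
print(f"False positive rate gap between groups: {gap:.2f}")
```

A large gap of this kind means that people in one group who would not go on to reoffend are flagged as high risk far more often than comparable people in the other group, which is one way the worry about unfairness gets cashed out.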
But the strand of this work that I wanted to go into a little bit more detail about is some work that we're doing on the idea of a right to mental integrity, because I think this is an interesting case of an area where existing thinking in medical ethics might have something of relevance to contribute to AI ethics.

So it's very widely accepted, especially in medical ethics but also elsewhere, that we all possess something like a right to bodily integrity, a right against interference with our bodies. This right would be infringed if someone physically assaulted you, but it would also be infringed, for example, if a medical professional performed a medical procedure on you without your consent. The question that we're interested in is whether there might be an analogous right to mental integrity; that would be a right against interference with your mind, rather than your body.

Now, this is a question that hasn't been much discussed, either in philosophy or in law, but we think it's going to be very relevant to medical ethics, because quite a few medical interventions look like they might infringe something like a right to mental integrity. Perhaps the most obvious example here would be the use of compulsory psychiatric interventions on patients who have been sectioned under the Mental Health Act. But we think this question about mental integrity is also going to be relevant to non-medical interventions, and to interventions that are not at all physically invasive.

And I think one of the most interesting and important examples here would be what we might call AI-based manipulation. So suppose that an online platform, something like Facebook, develops and deploys an artificial intelligence that can identify the psychological weaknesses of all of its users and almost perfectly target them with content that will maximise the length of time that they spend on the platform and maximally strengthen their desire to keep habitually checking that platform. It seems reasonable to ask whether we could think of this technology as infringing a right to mental integrity, and we think that the answer to this question is going to depend on exactly how you understand that right and what kind of basis you think it has. At least on some plausible accounts of the right to mental integrity that we're considering, it seems that AI-based manipulation could actually infringe the right to mental integrity in just the same way as, for example, compulsory psychiatric interventions, which might suggest that we should be regulating AI-based manipulation and other similar forms of manipulation much more stringently, perhaps a bit more closely to how we currently regulate psychiatry.

The third and final example that I wanted to say something about is the work of Stephen Rainey on ethical issues raised by the use of neuroprosthetics for decoding speech. This is the philosophical part of a large multidisciplinary project called BrainCom, which is seeking to develop neuroprostheses, brain-computer interfaces that could allow individuals who have lost the ability to speak to communicate. These devices would work by detecting brain signals and converting them into synthesised speech with the mediation of an AI language model, and that language model would do a significant amount of predicting and rephrasing in order to allow the speaker to speak at a more or less ordinary conversational pace and with more or less ordinary fluency.
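As a rough picture of the division of labour being described, here is a minimal Python sketch of such a pipeline: a neural decoder produces uncertain word candidates, a language model mediates by selecting and completing an utterance, and a synthesiser voices the result. Every component below is a stub of my own devising; it illustrates the general shape of the pipeline, not the BrainCom system itself.

```python
# A minimal sketch of a speech-decoding pipeline of the kind described above:
# decoder -> language-model mediation -> speech synthesis. All stubs.

from typing import Dict, List


def decode_brain_signals(signal_window: List[float]) -> List[Dict[str, float]]:
    """Stub decoder: returns, per time step, candidate words with confidences."""
    return [
        {"I": 0.7, "eye": 0.3},
        {"want": 0.6, "won't": 0.4},
        {"water": 0.8, "walter": 0.2},
    ]


def language_model_mediate(candidates: List[Dict[str, float]]) -> str:
    """Stub language model: here it just takes the most confident candidate at
    each step; a real model would also predict and rephrase whole utterances,
    which is exactly where the questions about speaker responsibility arise."""
    return " ".join(max(step, key=step.get) for step in candidates)


def synthesise_speech(text: str) -> None:
    """Stub synthesiser: a real system would produce audio output."""
    print(f"[synthesised speech] {text}")


if __name__ == "__main__":
    raw_signal = [0.0] * 128          # placeholder for a window of neural data
    candidates = decode_brain_signals(raw_signal)
    utterance = language_model_mediate(candidates)
    synthesise_speech(utterance)      # -> "I want water"
```

The point of the sketch is simply that the words that come out are jointly produced by the user's neural activity and the model's predictions, which is what makes the responsibility questions below pressing.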
So one of the ethical issues here concerns the extent to which we could hold people responsible for the utterances that they might make via a device like this. Ordinarily, we do hold people responsible for their speech acts: if someone says something racist or offensive, we tend to think that they can be blamed for that, and that seems to presuppose that they're responsible for what they've said. If I promise to do something, you'll probably think that I'm bound by that promise, and again, that seems to presuppose that I'm responsible for what I said when I made the promise. But one question is: to what extent would those kinds of responsibility assignments carry over to cases involving neuroprostheses of this sort, given that in these cases there might be a much more significant gap between the mental act of intending or attempting to say something and the actual utterance that gets produced? So how does the mediation of an artificial intelligence in this kind of system affect the responsibility of the speaker for the utterance? And how might the nature of the AI make a difference to assignments of responsibility? These are the kinds of questions that Stephen and his collaborators are addressing. And again, just to briefly cut to one of the conclusions: they're arguing that in many of these cases, the mediation of the artificial intelligence could significantly diminish the responsibility of the speaker, in a way that might require a significant rethinking of some of our ethical norms regarding speech and conversation.

OK, so those are some of the AI-relevant areas on which the Uehiro Centre is working. There are others as well. Mike Robillard, one of the military ethicists in the centre, has been working on critiquing some of the existing debates about autonomous weapons systems, or killer robots. And one of our doctoral students, Abhishek Mishra, is working on the extent to which we might need to revise some of our concepts, like the standard of care in medical negligence, in a world in which health care professionals are relying heavily on machine learning classifiers. But I'm going to stop there and hand over to Carissa, who is going to go into a bit more detail about some of the actual ways in which philosophy can make progress on the types of topics that I've been talking about, and also to tell you about some of her research.