It's my great pleasure to welcome as our first speaker, appropriately so, Professor Sir Nigel Shadbolt, Principal of Jesus College, who is a professorial research fellow in the Department of Computer Science and chairman of the Open Data Institute, which he co-founded with Tim Berners-Lee, amongst many other distinctions. Nigel is a fellow of the Royal Society, and in 2013 was knighted for services to science and engineering. He's published over 500 articles on topics ranging from cognitive psychology to computational neuroscience, and from AI to the Semantic Web. And Nigel has been fronting the university's connections with the Schwarzman Foundation to bring about this new institute. So, Nigel, over to you.

Thank you. I just wanted to set the scene, really, for today's wonderful set of talks: vignettes around the range and scope of what's going on around the challenges of ethics in many of the advanced technology settings that we are now engaged in, and also to explain where this all came from. We know that the Schwarzman Centre itself is a much bigger endeavour involving the humanities in general, and one thing to be very clear about: when people heard the announcement of 150 million pounds, some people mistakenly imagined that was all for the Institute for Ethics in AI. It is a part of that gift, and it's the part that is particularly exciting, because it's there to support human talent, to support appointments in the area, in the Faculty of Philosophy in particular, and I'll talk about those. So it's a significant but very much a minority part of that gift. But it is significant, and the whole challenge of this is, of course, premised on the fact that we know that computer science, and particularly artificial intelligence, presents huge challenges. This is one of my favourite quotes, from the father of cybernetics, Norbert Wiener, who in 1948 already foresaw that, as he called it, the ultra-rapid computing machine was an ideal central nervous system for an apparatus for automatic control. In 1948 he foresaw the opportunities for unheard-of amounts of good and evil. And boy, was he right there. And of course there's the forcing function that we recognise, in that much technology accelerates under the impulse of conflict, sadly. So whether it was Bletchley Park, where of course so much of the early work on automated computing happened, much of it kept classified, Turing himself of course working there, or the Manhattan Project, where computation was required.
This forcing function accelerated both the demands and the technology, but it also very quickly led to issues around fundamental ethical questions. The other thing I like to present is the fact that modern AI is very often understood to be simply all about machine learning. As somebody who has been working in the field since the late 70s, I've seen a number of cycles of AI come and go, and in that time different methodologies have held sway. Back in the day it was rule-based systems, theorem proving, a whole range of methods to do search. And each time round the cycle of enthusiasm, particular human capabilities fell to the dominance of the machines. There was always a question around the ethical deployment of this, even back in the 80s when people were talking about building expert systems. What about the experts the systems would displace? Could we trust the actual diagnoses the systems would give us? Were the accounts explicable and understandable, and so on? So there has been no shift in some of the fundamental challenges facing us over many decades. But what has happened clearly in recent times is the emergence of a particular class of computing power, a method that has led to some breakthrough moments. These so-called deep neural networks, various forms of machine learning, have led to the triumph of machines over particular areas of human expertise. This, of course, is AlphaGo's triumph over Lee Sedol, and most recently extraordinary results have been achieved in what were thought, again, to be quite difficult-to-master areas of problem solving. This is the multiplayer strategy game StarCraft; it's the latest achievement, again, of the DeepMind research labs. Of course, within all of that there are plenty of good things going on, but again sometimes with issues around the ethics. This is Google DeepMind at work on diagnosing various diseases from the retinal scans that were available, in this case diabetic retinopathy. These systems are very good at detecting patterns and coming up with all sorts of really rather impressive classifications. But even there, there was an issue that ultimately fell into the whole area of whether the data used to train these systems was acquired with the appropriate level of informed consent. It is almost impossible to think of an AI deployment mode in which ethical questions don't abound. Oxford's own work on LipNet is wonderful work that actually gets to the level of human expertise in lip reading. You can imagine various forms of more intriguing deployment of these kinds of technologies; in fact, it was the central thesis of 2001.
HAL, if you remember, read the lips of the astronauts in the pod. Or indeed facial recognition, which is already causing concerns about its sensibilities and sensitivities around particular forms of bias, given certain sorts of training regime and given particular deployment modes. The more obvious ones, of course, range from what you do with autonomous vehicles to how you should restrict and control the weaponisation of platforms such as AI-enabled drones. More extensively, we worry about the use of extensive amounts of data in domains such as predictive policing, or indeed at a national scale when deployed by states such as China in their social credit system. We worry about the flows of data from our mobiles, from our devices. In fact, my group in Oxford has spent a lot of time trying to understand just what that ecosystem looks like, and it's clear that the flows of data are extensive and numerous. This is just one particular flow of data from one app, studied by a few of us, on a phone that we were able to track, to understand just how extraordinary the economy of data exchange is. And where is the control of that? Where is the oversight and insight into that? Often the ethical challenges are cheek by jowl with questions around governance and regulation. And within all of this we have concerns about the emergence of dominant platforms, both in East and West, who seem to have so much control in this emerging world of data-enabled AI algorithms. And as I say, these aren't just about machine learning; they can just as much be about expert-system reasoning, about various forms of model-based reasoning. There is a very large set of methods now available to the computer scientist and the engineer, to the various people looking to exploit the methods, even if in some cases it is statistics re-presented as various forms of linear regression or whatever. There will still often be ethical challenges at the end of that deployment issue. So the question for us isn't just to think about this in the narrow confines of what we might think of as a robotically controlled drone or a particular use of a neural network in a biometric system. It can be how data and algorithms are used very broadly on the Web, how they're used in targeting and surveilling. So, in the broadest sense, this was always the ambition. And so when I was asked to put together a proposal for the Schwarzman initiative, and this was very much driven out of his own concerns, Schwarzman was concerned that he was seeing a world emerge where his particular worry was how these systems might be deployed.
It was natural to turn to Oxford's extraordinary heritage in this area. It's not hard to make the case for ethics, of course, but in Oxford you have the most extraordinary cast of characters, looking through a history that includes everybody from Hart to Parfit to Warnock to Murdoch. These are extraordinary figures who have shaped our thinking in the ethical and moral space. And in some respects, of course, Mary Warnock's work also held an appeal to many, because this was a person who managed to convene an entire regulatory framework and lead on the thinking around the deployment of a technology which at the time concerned people: the whole way in which human fertilisation and embryology science, reproductive science, was moving. That multi-stakeholder conversation, which had serious ethical underpinnings, was one of the reasons that we found a way forward to nuance that interesting debate. And one of the things I think we will see as the landscape develops here in Oxford is this very interesting appeal to lessons learnt from fields such as medical ethics. Many of the problems addressed have near analogies; some will turn out to be different, whether it is the use of human subjects, or what is being done to them, or the question of informed consent, or how access is granted to parts of the population. Many of these will have their equivalents in what we do. So when we assembled the proposal, we were surrounded really by a surfeit of riches. This diagram just begins to reflect that. We call this a heliocentric model. This is not to imply, by the way, that you are all circling this not-quite-yet-existing Ethics in AI Institute. It is meant to represent the extraordinary level of important cognate activities that we can look at: from computer science and our involvement in the Alan Turing Institute, on the one hand, to the work that goes on in the Oxford Internet Institute, and you'll hear about some of that today; and indeed the work in the Uehiro Centre for Practical Ethics, the Future of Humanity Institute's work on AI governance, work in the Blavatnik School on policy, a whole set of interests emerging in law around how we might think about legal restraints and applications of technology, and in medicine. Again, the Wellcome Centre for Ethics and Humanities, the Oxford Martin School, the Oxford Foundry for innovation, the Big Data Institute up in the Medical Sciences Division, and Information Engineering, where much of the really interesting robotics development goes on, and other things too.
So it's a very rich surround, and this won't even be completely comprehensive. But what's compelling about that is that you set this effort within a nexus of really interesting research, and I think one of the things that Peter's looking to do is to help convene the plurality of those conversations around the interests of the Ethics in AI Institute. Now, that, of course, is as yet to be staffed, to be established. But there is an interesting, absolutely essential difference here. Some people have said there are any number of ethics in AI initiatives; somebody counted some extraordinary number of AI ethics codes the other day, and it was in the low hundreds, believe it or not. Lots of them bear close similarities to one another; a lot of it, one suspects, is various forms of virtue signalling, and lots of it, you might suspect, is some kind of copying to get into something that seems an idea of the moment. I think the difference in the effort here is to locate this whole enterprise in the deep research of the Faculty of Philosophy, in a tradition where the philosophical questions are primary, are paramount. Now, we can divide those up in any variety of ways, and whatever set of questions you raise here, there are more: whether it's about when you use or don't use the technology, whether it's fair, who's responsible for it, who has access to it. Does it sufficiently explain itself? Does it vary as you go from one geography to another, from one culture to another? What about its utilisation for public and private goods? All of this just provides a sense of the depth and range of ethical questions that will arise.

In the last few minutes, let me just say something about the shape and structure of where we are. This all began with a group of individuals who gave of their time and energy, and we are hugely grateful to them, in a steering committee that was put together almost a year ago now. It seems extraordinary: I think I was approached about a year ago, and then indeed in January of this year we began to think about shaping the proposal. So myself and Chris Timpson, head of philosophy; Dan Grimley; Phil Howard from the OII; Mike Wooldridge, computer science; Cécile Fabre, a philosopher; Mike Parker and Alison Noble, Mike Parker from the Wellcome Centre and Alison from Information Engineering. Actually, Alison replaced Angela McLean, who was originally on the committee, as Angela took up the chief scientific adviser role at the Ministry of Defence.
So that group got together and tried to frame and shape the proposal in a way that was going to work for the university. The plan ultimately is that there will be a management committee of the institute, to be confirmed, and it will be cross-divisional; it will probably mirror the kinds of things we see within bodies like the Blavatnik School. The advisory board will comprise internal and external members; they will be advisory, they will not set the agenda, it's important to say. And we are at this moment about to advertise for a director for the institute. My role is essentially to try and steward this into existence. I think the important thing to say is that, of all of the activities associated with the gift, this is one of the first ones to begin its work, because the building, which will be placed on the Radcliffe Observatory site, will be a number of years in development; it could be four to five years before that is actually opened. In the meantime, we would like to be undertaking this exciting research agenda. So a director and initially two associate professors will be advertised, one of these in philosophy and one in philosophy and computer science. There are ultimately five associate professorships to be appointed, so this will be a substantial centre of gravity in terms of the ability to research and teach. The other thing to say is that these, at least the first two advertised, will have college associations as well, so we're trying to wrap the Institute right into the collegiate university. There will be a number of post-doctoral research fellows or JRF appointments, and there will be a significant number of DPhil students. We are looking to use the model of academic secondment to move people into the institute for periods of time where their research would help and be valuable, and a significant visiting fellows programme. So in total, we might imagine that in four or so years' time there would be 20 or so individuals at the core of the institute, ultimately, of course, looking to develop and expand with joint programmes of work to an even more significant size.

The other thing that is probably worth mentioning is that there was always the ambition that we would develop various forms of curriculum content to inform our teaching. And the interesting model here is to think about the evolution of our various courses where philosophy has been at their heart.
Whether it is the original Literae Humaniores, Greats as it was called; or Modern Greats, which reflected the fact that economics had become a subject that required study, alongside political science and philosophy; or the scientific Greats that we saw evolve: philosophy, psychology and linguistics in the 20th century, physics and philosophy, maths and philosophy, computer science and philosophy. One of the interesting things about these is the extent to which computational thinking, the intrusion of new ways of understanding our world, has been very much at the centre. So the question will be how we can develop content that is capable of being inserted into Masters and undergraduate modules. That would certainly be an ambition, and I'm certainly aware that in computer science we have a significant need for appropriate ethics courses. It's not, in and of itself, the only thing going on, of course. There is an ongoing cultural programme that will be launched; in fact, there is an event, AI and Creativity, I think on November the 20th, a Wednesday, that's being held at the Maths Institute, and that's certainly something to look out for. One of the key features, again, is that we look to really motivate the presence of this institute within the context of being human in the 21st century: how, despite all the concerns we have, or perhaps exactly because of those concerns, we can understand an appropriate set of balanced interventions with this technology. As I mentioned, the site itself is not yet built, and indeed the whole process for selecting architects is in train as we speak, to get that process underway. And finally, just to say, apart from a talk I gave earlier, this is the single earliest manifestation of actual activity: all of you in this room listening to a variety of presentations as to why ethics has a place in the considerations and deliberations of Oxford's extraordinary intellectual history. And yeah, that really is my introduction. This is one of the first, in fact the first two of these are very much like community meetings, town hall meetings, where we're convening the interested parties together to understand what we're trying to achieve. So that's my introduction. Thank you.

Thanks very much indeed, Nigel. When these seminars were conceived, it was not clear what form they should take. But as Nigel's explained, one of the functions, a very major function, of the Institute for Ethics in AI
is to bring together Oxford academics and researchers and students from across a very wide range of centres and other institutions, who often may not know each other even though they're working in closely related areas just half a mile apart. So what we're planning to do in the first few of these seminars is not to go deeply into particular topics of interest, but rather to show a variety of work that's going on in different parts of the university, so those who attend can get to understand something of that ecosystem. We're combining this event, as you know, with nibbles and drinks afterwards. That's not just an added extra; that's an integral part of the conception. That way, if you hear speakers saying things that particularly interest you, people you might like to work with, collar them afterwards and have a chat. We've got plenty of time for that. So here's the way the rest of today's session until then is going. We've got three groups of speakers. The first two are from the Uehiro Centre, the next three from the Oxford Internet Institute, and then we round off with two speakers who focus particularly on AI in ethics. Sorry, AI in medicine, I beg your pardon. So we've got a mix here of some talks that are more theoretical: how is it that philosophy fits into this mix, how does philosophy influence practical ethics? From the Oxford Internet Institute, the focus is going to be partly on law and legal regulation. And then when we get onto medicine, of course, we really are at the sharp end in a literal sense, and we will hear there about what it's like actually working with AI in medicine and regulation within the health service. So without further ado, I'm going to ask Tom to kick off. Tom is from the Oxford Internet Institute. Sorry, I'm not doing very well, am I? I apologise. From the Uehiro Centre. Tom Douglas trained as a medical doctor and philosopher and is Senior Research Fellow and Director of Research and Development at the Oxford Uehiro Centre for Practical Ethics. He's also a Hugh Price Fellow at Jesus College. His research currently focuses on the ethics of predicting and modulating behaviour, a topic on which he will shortly take up an ERC Consolidator award. So, Tom, over to you.

So the Uehiro Centre is based within the Faculty of Philosophy and is devoted, as the name suggests, to doing research in practical or applied ethics. It's directed by Julian Savulescu.
He holds the Chair in Practical Ethics, and the Centre currently includes 24 other academic staff members as well as five doctoral students. Now, we have a very broad remit, which is basically just to bring philosophical analysis to bear on practically important ethical questions: questions about how we ought to live, how society ought to be arranged, and so on. And as you might suspect, given that broad remit, we work on a fairly diverse range of topics, some of which are listed on the slide. Historically, we've had quite a strong focus on ethical issues raised by medicine and the life sciences, but we've also worked in other areas like criminal justice ethics, Internet ethics and military ethics. As I said, several of our projects bring philosophy to bear on questions in AI ethics, and what I want to do in the few minutes that I have is just to briefly mention some of these. I won't be able to cover all of them; this is just by way of illustrating some of the areas in which we think philosophical ethics might have something to say about AI ethics. And then Carissa is going to go into a bit more detail about exactly how philosophy can contribute to debates about AI ethics.

OK, so the first strand of research that I wanted to mention is some work that Julian Savulescu and Guy Kahane are doing, along with Chris Gyngell from Melbourne University, on the role of public preferences in informing the development of ethical algorithms. As you're probably all aware, one of the challenges posed by, for example, autonomous vehicles is that of specifying how these vehicles should respond when faced with what look like ethical dilemmas: say, a choice between sacrificing an occupant of the vehicle and sacrificing a pedestrian, or the choice between running over an adult and running over a child. One natural response to this problem has been to go out into the world and collect lots of data about what the public think about how autonomous vehicles should respond to these kinds of situations, and actually quite a lot of that work has already been done. But what's not at all obvious is what we should be doing with the data that we're getting out of this empirical research, the social scientific data about what the public preferences actually are, because it certainly doesn't seem to be the case that we should just unthinkingly implement the public will in this kind of area. Suppose in a particular society most people thought that autonomous vehicles should prioritise saving the lives of white people over others. Clearly, it wouldn't follow that
that's what we should programme autonomous vehicles to do. But on the other hand, it does seem plausible that at least some public attitudes should play some role in informing the design of these algorithms. So what Julian, Guy and Chris are thinking about is exactly what that role should be. And just to cut straight to one of their conclusions, one of their proposals, probably the most controversial, has been that public preferences should be put through a kind of philosophical filter before being built into the algorithm. So we should first check whether these preferences are actually consistent with at least some plausible and widely held philosophical ethical theories, and only preferences that make it through this kind of filter, what they call laundered preferences, should be fed into algorithm design.

OK, so the second strand of our research that I wanted to mention is a programme that I'm leading on the ethics of predicting and influencing behaviour, supported by the Wellcome Trust. This work has intersected with AI ethics in a few places. One of these is on the topic of crime prediction, where we've been doing some work with colleagues in the Department of Psychiatry to try to make some, I guess, practical suggestions about how we could improve the kinds of crime prediction algorithms that are increasingly being used both in criminal justice and in forensic psychiatry. And by improve, I mean here making them more accurate, but also mitigating some of the ethical concerns about bias and unfairness. But actually, the strand of our research that I wanted to go into in a little bit more detail is some work that we're doing on the idea of a right to mental integrity, because I think this is an interesting case of an area where existing thinking in medical ethics might have something of relevance to contribute to AI ethics. It's very widely accepted, especially in medical ethics but also elsewhere, that we all possess something like a right to bodily integrity, a right against interference with our bodies. This right would be infringed if someone physically assaults you, but it would also be infringed, for example, if a medical professional performed a medical procedure on you without your consent. So the question that we're interested in is whether there might be an analogous right to mental integrity; that would be a right against interference with your mind rather than your body. And this is a question that hasn't been much discussed, either in philosophy or in law.
But we think it's going to be very relevant to medical ethics, because quite a few medical interventions look like they might infringe something like a right to mental integrity. Perhaps the most obvious example here would be the use of compulsory psychiatric interventions on patients who have been sectioned under the Mental Health Act. But we think this question about mental integrity is also going to be relevant to non-medical interventions, and interventions that are not at all physically invasive. And I think one of the most interesting and important examples here would be what we might call AI-assisted manipulation. So suppose an online platform, something like Facebook, develops and deploys an artificial intelligence that can identify the psychological weaknesses of all of its users and almost perfectly target them with content that will maximise the length of time that they spend on the platform, and sort of maximally strengthen the desire to keep habitually checking that platform. It seems reasonable to ask whether we could think of this technology as infringing a right to mental integrity, and we think that the answer to this question is going to depend on exactly how you understand that right, what kind of philosophical basis you think it has. But at least on some plausible accounts of the right to mental integrity that we're considering, it seems that AI-based manipulation could actually infringe the right to mental integrity in just the same way as, for example, compulsory psychiatric interventions, which might suggest that we should be regulating AI-based manipulation and other similar forms of manipulation much more stringently, perhaps a bit more closely to how we currently regulate psychiatry.

The third and final example that I wanted to say something about is the work of Hannah Maslen and Stephen Rainey on ethical issues raised by the use of neuroprosthetics for decoding speech. This is the philosophical part of a large multidisciplinary project called BrainCom, which is seeking to develop neuroprostheses or brain-computer interfaces that could allow individuals who have lost the ability to speak to communicate. These devices would work by detecting brain signals and converting them into synthesised speech with the mediation of an AI language model, and that language model would do a significant amount of predicting and rephrasing in order to allow the speaker to speak at a kind of ordinary conversational pace and with more or less ordinary fluency.
So one of the ethical issues here concerns the extent to which we could hold people responsible for the utterances that they might make via a device like this. Ordinarily, we do hold people responsible for their speech acts. If someone says something racist or offensive, we tend to think that they can be blamed for that, and that seems to presuppose that they're responsible for what they've said. If I promise to do something, you'll probably think that I'm bound by that promise, and again that seems to presuppose that I'm responsible for what I said when I made the promise. But one question is to what extent those kinds of responsibility assignments would carry over to cases involving neuroprostheses of this sort, given that in these cases there might be a much more significant gap between the kind of mental act of intending or attempting to say something and the actual utterance that gets produced. So how does the mediation of an artificial intelligence in this kind of system affect the responsibility of the speaker for the utterance? How might the nature of the AI make a difference to assignments of responsibility? These are the kinds of questions that Stephen and Hannah are addressing. And again, to briefly cut to one of the conclusions, they're arguing that in many of these cases the mediation of artificial intelligence could significantly diminish the responsibility of the speaker, in a way that might require us to significantly revise, I think, some of our ethical norms regarding speech and conversation.

So those are some of the relevant areas on which the Uehiro Centre is working. There are others as well. Mike Robillard, a military ethicist in the Centre, has been working on critiquing some of the existing debate about autonomous weapon systems or killer robots. And one of the DPhil students, Abhishek Mishra, is working on the extent to which we might need to revise some of our concepts, like the standard of care in medical negligence, in a world in which healthcare professionals are relying heavily on machine learning classifiers. But I'm going to stop there and hand over to Carissa, who's going to go into a bit more detail about some of the ways in which philosophy can make progress on the types of topics that I've been talking about, and also to tell you about some of her research.

Thanks very much indeed, Tom. As you see, lots going on, and that's just the Uehiro Centre. Carissa is a research fellow at the Uehiro Centre and also at the Wellcome Centre for Ethics and Humanities, so yet another centre we have in Oxford, and is a member of Christ Church.
She works on digital ethics and moral and political philosophy more generally, and she's the editor of the forthcoming Oxford Handbook of Digital Ethics. Over to you, Carissa.

Thank you, Peter. It's great to be here. Thank you all for coming. I'm going to talk a little bit about the place of philosophy in the ethics of AI, how philosophy can contribute to this field. Practical ethics was developed in the 1970s. In the 1940s and 50s it was understood that ethics didn't have much to do except analyse moral language, analyse the meaning of terms like good and right and obligatory, things like that, and maybe explore the truth conditions under which a particular utterance was true or false. And then in the 1970s, a number of social movements happened that put pressure on the discipline both externally and internally, and that made philosophy engage more directly with political issues. The Vietnam War was an important thing that happened, but philosophers also started to think about feminism, questions about abortion, sexual orientation, discrimination, and so on. This really changed the discipline. First, a lot of students became very much involved in philosophy and signed up to philosophy courses at a time when philosophy was kind of losing steam. And whenever I read about these moments in philosophy's history, I always feel a bit of jealousy, in part because I just came to philosophy 30 or 40 years later: how interesting and how important to change the discipline in a way that can contribute positively to the world. One moment that was particularly important was in 1972, when The New York Times came out with a story about the Tuskegee experiment, the Tuskegee scandal. This had been an experiment carried out over 40 years in which people who had syphilis were observed and weren't treated, even though treatment was available. There was a huge scandal, which pushed the discipline of medical ethics forward. And I think in many ways we are in a similar situation now to the 1970s: we have new technologies that are facing us with new problems that we haven't faced before, and computer scientists are not particularly well trained to think about these things. And we have new scandals, the Cambridge Analytica scandal amongst others, that make tangible the need for ethics. So practical ethics, more generally, tries to come up with ways to apply theory to practice. And so there's a question of, well, OK, that's what practical philosophy does, more or less.
But how can philosophy contribute to AI ethics in particular? What is special about the philosopher, and what is special about the philosopher in the context of the ethics of AI? This is just the latest version of a question that has been bothering philosophers for a long time, and many people have given different answers to it; this is just a small sample. And although there are many controversies within this debate, maybe the most important of which is whether there are moral authorities and what we mean by moral authorities, there's much more consensus than disagreement within the debate. Perhaps the most important point of consensus is that philosophy can offer conceptual analysis, in the hope of leading to better decisions, and also of better justifying decisions that have already been made, especially to those who lose out in that decision, and with the hope that conceptual analysis can make debates sharper, can make them shorter, and can make them less acrimonious. So what is conceptual analysis and what kinds of things does it include? It includes things like clarifying concepts. Sometimes people are fighting about something and they're not even talking about the same thing; on occasion, making sure that people are talking about the same thing even leads to solving problems. This is something some philosophers have defended; of course, that's not always the case. Sometimes philosophy can provide nuance. Like any other discipline, ethics has developed a very technical language that can be much more nuanced than ordinary moral language. So it's not only about right and wrong; it's about what's permissible, what's impermissible, what's obligatory, what might be supererogatory, above and beyond duty, and so on. It's also about working out the implications of views. Some views might feel very attractive, or be very attractive, at first glance, and then you start working out their implications, either practical or theoretical, and suddenly they don't seem that attractive anymore. A good example is how personal data has been treated: some people think that we should treat personal data as property, and that sounds quite intuitive, except when you start looking at the implications and how property differs from personal data, and suddenly it doesn't seem like such a good idea anymore. Then there's pointing out contradictions: discourse is full of contradictions and fallacies, both in the media but also in parliament and everywhere in between, and philosophers can point those out. And there's distinguishing questions of fact from questions of value.
It's not always obvious and it's not always easy. For instance, in the 1960s we thought that death was just a biological question, a medical question: whether somebody was dead or not was just for the doctor to decide. Suddenly, with mechanical ventilators, we realised, well, we have these bodies that are warm, their hearts are beating, but their brains seem to be destroyed. Can we harvest their organs? Are they alive or are they dead? And suddenly the philosopher comes and says, well, what do you mean by death? Is it the death of the body, the death of consciousness, the death of the person, the death of the interests and rights that typically attach to people? And finally, there is providing theory. In practical ethics, of course, there's a lot of theory that comes both from normative ethics and from metaphysics and so on, and in the course of application we many times realise the limits and possible mistakes of the theory itself, so that practical cases and also empirical facts inform the theory, change it and polish it. And this is an interesting process, because philosophy has a bad reputation for a lot of disagreement and not a lot of progress or consensus. But in fact, when you study the history of philosophy in detail, theories get very much polished. First, there is consensus on some things on which there was disagreement in the past. But even where there is disagreement, consequentialism today, for example, is a much more sophisticated theory than it was in its origins, and partly theories get polished through bumping up against reality and taking a look at practical cases.

Secondly, it's really important for philosophy to identify moral problems, and again some of these are not as obvious as you might think. Before bioethics came along, doctors engaged in all sorts of problematic practices that weren't seen as problematic: for example, not informing patients of their diagnoses, randomising patients to treatment or placebo without informing them that they were part of research, or even conducting very invasive examinations, like rectal examinations, on patients who were unconscious. And this wasn't a matter of doctors having bad intentions or anything like that; it was just how things were done. Thirdly, philosophy can inspire moral thought. Through arguments, thought experiments and analogies, we can raise moral thought beyond prejudice and invite people to consider certain situations, and in so doing stimulate their moral thinking and also challenge moral intuitions.
And here there's a very important role for public philosophy and for engaging with the public in general. And then finally, philosophy can provide experience. Good ethicists have extensive experience tackling difficult issues, and it would be a waste not to draw on so much knowledge at a time when there's so much at stake. Of course, a person doesn't have to have a PhD in ethics or have published in the best ethics journals to have good ethical insight. But spending hours of most of your days thinking about ethical questions, trying different methodologies, learning about past pitfalls, does provide some kind of experience that can be of use. And in this sense, practical ethics has the most experience with medical ethics, of course, and I think there's a lot to learn from this analogy that still hasn't been worked out, both from the similarities between these two fields and from the differences. I think digital ethics is much more political than medical ethics, to cite just one difference, and that is going to make it very, very different. And also we have a lot to learn from successes, but also from failures. One of the biggest failures, in my view, of medical ethics is how it hasn't been able to regulate Big Pharma properly. So Big Pharma today gets away with, for instance, carrying out 100 experiments to prove that a drug works; 99 of them show that the drug doesn't work, one shows that the drug might work, and that one gets published. The 99 don't get published, and sometimes researchers can't even talk about it. This is a huge failure, and it signals a challenge that we have with regulating industry, because most, or a good part, of the research carried out on AI right now is carried out by industry and not in universities. And so this is a huge challenge for digital ethics to tackle.

So, just to finish, I'm involved in the following research projects. I'm thinking about what digital ethics can learn from medical ethics. I'm finishing a book about privacy. I'm working on the ethics of prediction: human beings have been making predictions since the Oracle of Delphi, and strangely enough we haven't thought much about the ethics of prediction and what makes a prediction ethical. And most importantly, the thing that I'm most excited about is editing the Oxford Handbook of Digital Ethics. Many philosophers in the room are going to participate in that, which is great, and it's going to cover all sorts of things, from sex and friendship in the digital age, to democracy, to the use of killer robots, surveillance, privacy and so much more.
So it's very exciting; this is a very exciting time, and I'm no longer jealous of the practical ethicists of the 1970s. I think this is so much better. And one day in 10 or 15 years, when the institute has had enormous impact on the ethics of AI, each of us will be able to say that we were here on the first day, when it all began.

Hearing you talk about the old days and the interest in ethics, I noticed that two of those papers that you were highlighting were actually in a volume that I co-edited back in 1992. Thank you very much, both Tom and Carissa, from the Uehiro Centre, doing lots of interesting work. You'll get a chance to catch up later over nibbles and drinks. One innovation that we're bringing into these seminars, though we won't get to use it very much this time, but our intention is to make it a general feature, is to enable audience feedback through questions. We're planning at the very end this time to have a session on questions. If you want to suggest a question that you'd like to have discussed by the speakers, go to slido.com, that's s-l-i-d-o dot com, and type in AIethics, one word, as the session to log into. If you submit a question there, then that question will magically appear on my device here, and you in the audience can vote for the question or questions that you'd most like to have answered. So at the end, once the speakers have all spoken, the intention is to have a short question and answer session. I hope that will be interesting, and it is at least an interesting experiment.

Okay, now we're moving from the Uehiro Centre to the Oxford Internet Institute, and Victoria Nash is our first speaker. She's deputy director, associate professor and senior policy fellow there. Her research focuses on normative and practical challenges of regulating online behaviour, with a particular focus on child protection, content moderation and human rights. And Vicki is just going to give a quick introduction to the Internet Institute before handing over to Sandra and Brent.

Coming up to speak, I am reminded of my rule that I should never offer slides if I'm speaking for less than five minutes, because I would probably spend five minutes finding them. Thank you very much for the introduction, and thank you to the previous speakers. As a political theorist who did her DPhil here many years ago, I would have loved to have had your presentation right at the beginning of my DPhil; it took me the whole three years to figure out how to explain why my work mattered, why my work is so crucially important.
455 00:49:30,650 --> 00:49:36,860 So I'll try to fit in just a couple of minutes of explanation about what the OII is, 456 00:49:36,860 --> 00:49:43,870 and then I'll hand over to my colleagues, who will go into greater depth to give an indication of what type of research we do. 457 00:49:43,870 --> 00:49:47,600 For those of you that don't know us, we're relatively new. I love it 458 00:49:47,600 --> 00:49:51,980 every time a new centre is created, because it makes me feel like an old hand. 459 00:49:51,980 --> 00:49:58,820 We've been around since 2001. We were set up to be a multidisciplinary centre based within social sciences, 460 00:49:58,820 --> 00:50:05,330 but multidisciplinary, focussing on the societal implications of digital technologies. 461 00:50:05,330 --> 00:50:11,610 It's funny looking back, actually. I mean, the questions that we are asking now are in many ways the same questions that we were asking in 2001, 462 00:50:11,610 --> 00:50:15,710 but perhaps applied to new and innovative technologies. 463 00:50:15,710 --> 00:50:20,810 But I think that, you know, so much of what the new institute is going to do is hugely exciting to us. 464 00:50:20,810 --> 00:50:23,300 And I'm frankly very delighted to see, you know, 465 00:50:23,300 --> 00:50:29,270 significant investment in this field and new opportunities for us to collaborate both with the humanities, but actually across the whole 466 00:50:29,270 --> 00:50:36,380 university, as the chart that Nigel presented showed us. So, yes, congratulations to those that helped make this happen. 467 00:50:36,380 --> 00:50:41,570 Our work in this area is, I would say, sort of threefold. 468 00:50:41,570 --> 00:50:44,900 The first is, if you like, in the very broadest sense. 469 00:50:44,900 --> 00:50:48,710 So what Nigel covered at the very beginning, the sort of, you know, 470 00:50:48,710 --> 00:50:52,580 array of different topics and areas, is just fundamental to what we do. 471 00:50:52,580 --> 00:50:55,070 So, you know, big, broad questions 472 00:50:55,070 --> 00:51:05,480 about innovation, about the development of new tools, new ways of using artificial intelligence to develop new products, new forms and functions, 473 00:51:05,480 --> 00:51:11,440 and the role of those in everyday life; questions about what this means for how we regulate and govern these technologies. 474 00:51:11,440 --> 00:51:18,360 This is just our everyday business and it's something that all of our faculty are, frankly, engaged in. 475 00:51:18,360 --> 00:51:24,990 However, there's a much narrower sense in which the questions of ethics and A.I. arise in a 476 00:51:24,990 --> 00:51:30,330 number of very specific research portfolios, which you're going to hear about tonight. 477 00:51:30,330 --> 00:51:36,090 Since 2014, I'm pleased to note, we've had in-house philosophers. 478 00:51:36,090 --> 00:51:38,580 We had, in the very early days, 479 00:51:38,580 --> 00:51:47,700 Luciano Floridi with his work around information ethics, addressing questions like moral artificial 480 00:51:47,700 --> 00:51:54,470 agents, or the morality of artificial agents; that's been with us for about five or six years. 481 00:51:54,470 --> 00:51:57,940 But actually, we don't just have one philosopher working on these questions now,
482 00:51:57,940 --> 00:52:02,810 as we've brought in several philosophers on top of that, one of whom you're going to hear from shortly. 483 00:52:02,810 --> 00:52:06,240 But we also now have lawyers like Sandra, who's going to speak to you. 484 00:52:06,240 --> 00:52:07,740 We have political scientists. 485 00:52:07,740 --> 00:52:15,350 We have data scientists, again, whose work is fundamentally concerned with questions of ethics and A.I. writ large. 486 00:52:15,350 --> 00:52:23,960 And in those cases, it covers issues such as what constitutes fairness or unfairness, discrimination if you like, 487 00:52:23,960 --> 00:52:30,150 in the uses of A.I.; broad political questions about, once you have maybe identified principles of fairness, 488 00:52:30,150 --> 00:52:35,320 how you might go about regulating for those or holding companies to account. 489 00:52:35,320 --> 00:52:37,930 We cover applied questions such as, again, 490 00:52:37,930 --> 00:52:45,580 if you identify unfair practises or sources of lack of diversity in data collection, what does that mean for innovation? 491 00:52:45,580 --> 00:52:48,940 What does that mean for data collection? What does it mean for privacy? 492 00:52:48,940 --> 00:52:54,910 So, if you like, the fact that we're multidisciplinary means that we aren't just approaching this from a philosophical angle, 493 00:52:54,910 --> 00:53:02,050 but we are taking some of these sort of core philosophical questions and playing them out across different disciplines and different topics, 494 00:53:02,050 --> 00:53:07,750 we hope, all with the aim of improving not just research outputs and research 495 00:53:07,750 --> 00:53:13,820 understanding of these issues, but also, if you like, societal understanding and societal practises. 496 00:53:13,820 --> 00:53:19,610 So we have a sort of broad focus, and this very narrow focus in specific projects, again, which we'll hear about shortly. 497 00:53:19,610 --> 00:53:23,390 I just want to flag up that we also have a focus on this in our teaching. 498 00:53:23,390 --> 00:53:27,430 We have four graduate degrees, two master's. One is social data science, 499 00:53:27,430 --> 00:53:31,570 one social science of the Internet, and two corresponding DPhil programmes. 500 00:53:31,570 --> 00:53:36,560 And in each of those, again, these broad questions around ethics and A.I. arise in different places. 501 00:53:36,560 --> 00:53:41,930 We actually have got a pure philosophy paper, for example, in one of our master's degrees. 502 00:53:41,930 --> 00:53:49,070 In another, we embrace the core questions about what things like fairness and transparency would look like in the practise of social data science. 503 00:53:49,070 --> 00:53:56,570 So again, I really applaud, I think, the desire to ensure that the work of the new institute will also include content 504 00:53:56,570 --> 00:54:03,080 fit for new courses, new option papers. Certainly we find it an immensely satisfying experience to deliver those to our students. 505 00:54:03,080 --> 00:54:06,920 I'm happy to take questions later on about what the OII is doing in this area. 506 00:54:06,920 --> 00:54:18,480 But it's my great pleasure to hand over to my colleagues. 507 00:54:18,480 --> 00:54:25,940 So next up is Sandra Wachter, who's associate professor and senior research fellow in law and ethics of A.I., 508 00:54:25,940 --> 00:54:31,170 Big Data and robotics, as well as Internet regulation, at the OII.
509 00:54:31,170 --> 00:54:37,770 We've heard that her current research focuses on the legal and ethical implications of A.I., 510 00:54:37,770 --> 00:54:43,590 including profiling, explainable A.I. and algorithmic bias. 511 00:54:43,590 --> 00:54:45,940 Thank you so much for this introduction. 512 00:54:45,940 --> 00:54:54,870 I'm very excited to be able to talk to you a little bit about the research that I do with a couple of my colleagues here. 513 00:54:54,870 --> 00:55:02,210 And as Vicki mentioned, I am a lawyer, so I usually look at things from a legal perspective. 514 00:55:02,210 --> 00:55:07,820 But what I want to do is actually talk about three aspects where I think that A.I. is disrupting 515 00:55:07,820 --> 00:55:14,420 legal concepts, and I think the only way to fix that problem is actually via this multidisciplinary approach. 516 00:55:14,420 --> 00:55:18,050 So I'm actually arguing that the law might not be fit for purpose, 517 00:55:18,050 --> 00:55:25,940 which is why we need ethicists and technologists to work together to actually give new strength to the law that we currently have. 518 00:55:25,940 --> 00:55:30,620 And yeah, I want to give you three examples where I think we need to focus a bit more, 519 00:55:30,620 --> 00:55:36,470 and these are the areas that I'm focussing on as well. One is accountability, one is fairness and one is privacy. 520 00:55:36,470 --> 00:55:44,960 So I start with accountability. I've chosen the example of loan applications and how we have done this in the past. 521 00:55:44,960 --> 00:55:53,150 Usually, in a human-to-human setting, if you go to a bank and you apply for a loan, the loan officer will ask you a couple of questions. 522 00:55:53,150 --> 00:56:00,410 They might ask you what your income is, if you have any savings, your employment history. All these questions make sense. 523 00:56:00,410 --> 00:56:06,930 There is an intuitive link where a person asks you about your financial status in order to find out if you can afford the loan or not. 524 00:56:06,930 --> 00:56:14,180 There's an intuitive connexion there. What we see now is that we've been moving away from those traditional data sources to make decisions. 525 00:56:14,180 --> 00:56:16,250 We use untraditional data sources. 526 00:56:16,250 --> 00:56:24,140 So a couple of examples here, where credit institutions are now using Facebook profiles to decide if somebody should get a loan. 527 00:56:24,140 --> 00:56:29,780 So they look at your profile pictures, the friends that you have on Facebook, the groups that you join, 528 00:56:29,780 --> 00:56:36,840 the things that you post, the things that you like, and they infer whether you're a reliable borrower or not. 529 00:56:36,840 --> 00:56:43,770 Similar things happen in the insurance space, where, for example, social profile 530 00:56:43,770 --> 00:56:49,920 and network information is being used to decide if somebody should get insurance and what premium they should get. 531 00:56:49,920 --> 00:56:58,320 But that's not the only thing. In general, whenever we make decisions, we have started to use very untraditional data.
532 00:56:58,320 --> 00:57:05,940 That includes clicking behaviour, typing behaviour, geolocation, eye tracking, 533 00:57:05,940 --> 00:57:10,170 all of that to infer certain things about you, things that can be very privacy- 534 00:57:10,170 --> 00:57:17,400 invasive and unintentional and very counter-intuitive, where I don't really understand what my data says about me. 535 00:57:17,400 --> 00:57:22,460 I have no idea how my browsing behaviour will affect my credit rating in the future. 536 00:57:22,460 --> 00:57:28,490 And this information can be, and is, readily replicated and shared with a lot of people, and not just in 537 00:57:28,490 --> 00:57:34,070 financial services: also important decisions such as who has to go to jail, who can go to university, 538 00:57:34,070 --> 00:57:38,480 if you get hired, fired, promoted. All of this big data is being used for that. 539 00:57:38,480 --> 00:57:41,240 And that poses questions for accountability. 540 00:57:41,240 --> 00:57:47,000 Because if, in the future, I don't get a loan or don't get a job, the first thing that comes to mind is to ask why. 541 00:57:47,000 --> 00:57:50,930 Why did I not get a job? What happened? I want to have an explanation. 542 00:57:50,930 --> 00:57:55,370 And this is why I started thinking about this topic and looked at it at first from a legal 543 00:57:55,370 --> 00:58:00,440 perspective, to figure out if we do have a right that algorithmic decisions be explained to you, 544 00:58:00,440 --> 00:58:04,280 because that would make a lot of sense. And just looking at it from the legal perspective, 545 00:58:04,280 --> 00:58:09,950 what I found is that I don't think we actually have a legally binding right that such things be explained to you. 546 00:58:09,950 --> 00:58:15,240 I was not very happy with the outcome of my paper. It's very hard, from an academic perspective, to actually admit to that. 547 00:58:15,240 --> 00:58:17,810 But I wasn't very happy with the outcome of the paper. 548 00:58:17,810 --> 00:58:22,820 But this was a starting point where I started to think: I don't actually care so much about what the law says. 549 00:58:22,820 --> 00:58:27,170 Just because something is not legally required doesn't mean it's not ethically desired. 550 00:58:27,170 --> 00:58:32,990 Which was the reason why Brent and Chris Russell and I got together, and we now have 551 00:58:32,990 --> 00:58:36,950 a research programme which is called The Governance of Emerging Technologies. 552 00:58:36,950 --> 00:58:44,240 So Brent, who will be talking in a couple of minutes, is an ethicist and Chris Russell is a machine learning expert. 553 00:58:44,240 --> 00:58:52,790 So, yes, it basically starts like a joke, in the sense that a lawyer, a philosopher and a scientist walk into a bar, which we did, 554 00:58:52,790 --> 00:58:59,480 and we spent three hours screaming at each other because we wanted to figure out what a good explanation would actually look like. 555 00:58:59,480 --> 00:59:02,730 What do we think good explanations are? Because, setting aside the legal question, 556 00:59:02,730 --> 00:59:08,500 I wanted to know: what is it that you want to know? So Brent was very much interested in the trust side of things. 557 00:59:08,500 --> 00:59:15,530 For him, it was very important to figure out what it is justified to believe, what a good argument is, what makes arguments valid. 558 00:59:15,530 --> 00:59:21,470 And I didn't care about that at all. I cared about justice and accountability.
I wanted to be able to contest a decision 559 00:59:21,470 --> 00:59:25,760 if I'm not happy with it. And Chris, a computer scientist, didn't care about either of those things. 560 00:59:25,760 --> 00:59:32,780 He just wanted to dig into his code and ask: what's going on in the black box? So even though we were all very, very passionate about explanations, 561 00:59:32,780 --> 00:59:40,070 we started to understand that we think very different things about them and see explanations in very different ways. 562 00:59:40,070 --> 00:59:44,300 But we finally found a solution and actually wrote a paper together, which 563 00:59:44,300 --> 00:59:50,990 uses counterfactual explanations to try to reconcile those approaches. 564 00:59:50,990 --> 00:59:55,010 And our method actually allows you to understand a little bit of what's going on in 565 00:59:55,010 --> 01:00:01,670 the black box; it also gives you more trust in a system and gives you accountability, because you would be able to, 566 01:00:01,670 --> 01:00:09,190 for example, contest a decision you're not happy with. So the exciting part of that is that it actually got traction. 567 01:00:09,190 --> 01:00:11,900 And Google has implemented our method. 568 01:00:11,900 --> 01:00:18,290 Now, since last year, in TensorFlow, anyone can actually play with algorithms and understand what's going on in a black box. 569 01:00:18,290 --> 01:00:26,700 And Google is not the only one who is very fond of our idea, because IBM, Microsoft and FLOC also implemented our idea. 570 01:00:26,700 --> 01:00:32,310 So what that means is, if you scream a lot at each other, it actually pays off at some point. 571 01:00:32,310 --> 01:00:40,850 But we also learnt that explanations are only one facet of true accountability, because explanations are not justifications. 572 01:00:40,850 --> 01:00:49,080 I can tell you, for example, I'm not giving you the loan because I don't like your face. That's an explanation; 573 01:00:49,080 --> 01:00:58,040 it doesn't mean it's justified. So actually, what you also want to look at is the inferences, the predictions and the opinions that algorithms draw 574 01:00:58,040 --> 01:01:03,770 about you. Right, all that big data is being collected about you and very sensitive things can be inferred. 575 01:01:03,770 --> 01:01:08,870 For example, if you're pregnant, if you're a reliable worker, if you should get a promotion, 576 01:01:08,870 --> 01:01:14,450 if you have undiagnosed disorders; all of that information can be very privacy-invasive. 577 01:01:14,450 --> 01:01:20,930 And that is the actual thing that you're concerned with. So, again, I looked at it from a legal perspective, 578 01:01:20,930 --> 01:01:25,670 and tried to look at data protection law because, you know, this is very close to your private life. 579 01:01:25,670 --> 01:01:30,530 So if inferences are one example of personal data, you would get a lot of data protection rights. 580 01:01:30,530 --> 01:01:39,920 And again, what I found is that A.I. actually disrupts the law in a very untraditional way, where we have to think creatively about those problems. 581 01:01:39,920 --> 01:01:48,250 So Brent and I wrote a paper together which is called A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI. 582 01:01:48,250 --> 01:01:56,000 And we are actually calling for new standards, because we found that the law, as it currently stands, is not good enough to protect us.
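To give a flavour of what a counterfactual explanation of the kind described above can look like, here is a minimal sketch: given a rejected loan application, search for the smallest change to the applicant's features that would flip the decision. This is not the published algorithm or Google's implementation; the scoring rule, feature names and search grid are invented for illustration.

```python
from itertools import product

def approve(applicant):
    """Toy credit model (invented): approve if a weighted score clears a threshold."""
    score = (0.5 * applicant["income_k"]          # annual income in 1000s
             + 10 * applicant["years_employed"]
             - 0.3 * applicant["debt_k"])
    return score >= 40

def counterfactual(applicant, grid):
    """Search a grid of candidate feature values for the closest approved profile."""
    best, best_cost = None, float("inf")
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        candidate = dict(zip(keys, values))
        if not approve(candidate):
            continue
        # Cost = normalised distance from the original application.
        cost = sum(abs(candidate[k] - applicant[k]) / (1 + abs(applicant[k]))
                   for k in keys)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best

applicant = {"income_k": 30, "years_employed": 1, "debt_k": 20}
grid = {"income_k": range(20, 81, 5),
        "years_employed": range(0, 11),
        "debt_k": range(0, 41, 5)}

print("Approved as submitted?", approve(applicant))
print("Nearest approved profile:", counterfactual(applicant, grid))
# Read as an explanation: "you would have been approved if, say, your income
# had been X and your debt Y" -- something a person can contest or act on
# without opening up the model itself.
```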
583 01:01:56,000 --> 01:02:00,720 As you can see, it's 130 pages trying to make the point that the law is not good enough. 584 01:02:00,720 --> 01:02:06,890 It was actually planned to be a hundred and sixty, but Brent threatened never to work with me again if I didn't stop writing. 585 01:02:06,890 --> 01:02:13,880 So it's just 130 pages. But it's very important to point out that we found a lot of loopholes in the law, 586 01:02:13,880 --> 01:02:18,500 some of which are that we don't even know if inferences are personal data. 587 01:02:18,500 --> 01:02:22,050 So all the assumptions that algorithms are making, that are being made about you — 588 01:02:22,050 --> 01:02:27,060 it's not clear if they actually fall under the GDPR. And even if they do fall under data protection law, 589 01:02:27,060 --> 01:02:32,400 the purpose of data protection law is not to regulate how we make decisions about you. 590 01:02:32,400 --> 01:02:36,540 So if you have a problem with how you're being evaluated, how you're being seen, 591 01:02:36,540 --> 01:02:40,490 how you're assessed, you would need to find another law that helps you with that. 592 01:02:40,490 --> 01:02:46,500 Data protection law doesn't give you a remedy over how you are being seen by algorithms. 593 01:02:46,500 --> 01:02:52,860 And that is quite problematic because, as we've already mentioned, very often we don't have standards for accurate, 594 01:02:52,860 --> 01:03:00,630 reasonable decision making, because in essence, a lot of the time we don't have a right to get a job or insurance or to go to university. 595 01:03:00,630 --> 01:03:06,570 So if you don't have any laws that regulate how those decisions have to be made, and data protection law doesn't give you that, 596 01:03:06,570 --> 01:03:10,800 maybe it's for ethics to think about what would be reasonable. 597 01:03:10,800 --> 01:03:14,400 And that leads me to the current research project that we've just started, 598 01:03:14,400 --> 01:03:19,780 which is a right to reasonable inferences in online advertising and in financial services. 599 01:03:19,780 --> 01:03:23,040 And we just had two postdocs starting with us. 600 01:03:23,040 --> 01:03:26,570 One is a philosopher, the other one a legal scholar. 601 01:03:26,570 --> 01:03:33,630 And we are trying to figure out what would be a reasonable standard for inferential analytics in those two sectors. In the future, 602 01:03:33,630 --> 01:03:41,310 we're going to look at other sectors as well. That will include, for example, health, employment and criminal justice. 603 01:03:41,310 --> 01:03:45,570 But this is the current focus of our project. 604 01:03:45,570 --> 01:03:52,650 The last area where I see that A.I. is actually disrupting the law has to do with non-discrimination law. 605 01:03:52,650 --> 01:03:59,400 Again, that's something that we are all aware of: whenever we talk about A.I., you also have to think about bias and discrimination. 606 01:03:59,400 --> 01:04:05,040 So, again, I did look at non-discrimination law and tried to figure out if the law is actually good 607 01:04:05,040 --> 01:04:09,990 enough to protect us against those unintended and undesirable risks. 608 01:04:09,990 --> 01:04:13,800 And again, I have to say, it doesn't. 609 01:04:13,800 --> 01:04:18,630 It's actually not really the law's fault or technology's fault. It's just something that happened. 610 01:04:18,630 --> 01:04:24,420 Non-discrimination law is an answer to undesired behaviour of people, right?
611 01:04:24,420 --> 01:04:30,240 That means that very often we want to regulate or prohibit that people are 612 01:04:30,240 --> 01:04:34,740 unethical to each other, or being racist or sexist or homophobic to each other. 613 01:04:34,740 --> 01:04:39,420 But the way that algorithms do that is very, very different from humans. 614 01:04:39,420 --> 01:04:44,970 So, for example, if you think about price discrimination — 615 01:04:44,970 --> 01:04:54,030 price discrimination means that the same product is offered to different people at different prices. In the offline world, 616 01:04:54,030 --> 01:04:56,250 it would be very easy for you to figure that out. 617 01:04:56,250 --> 01:05:05,010 You can just go to different stores and compare products and prices and pick the price that you think is fair or appropriate. 618 01:05:05,010 --> 01:05:08,550 And if, for example, Tesco doesn't let you into the store anymore, 619 01:05:08,550 --> 01:05:13,740 you know that you have been excluded from the market. How do you know that in an online world? 620 01:05:13,740 --> 01:05:18,210 Right. How do you know if you're actually being offered the best price? Do you know what prices other 621 01:05:18,210 --> 01:05:22,480 people get offered? And do you know about the adverts that you don't see? 622 01:05:22,480 --> 01:05:26,580 Right. So that kind of discrimination, you don't feel it anymore. 623 01:05:26,580 --> 01:05:33,130 Whereas in the offline world, there will often be the ability of the complainant to see that something's off, 624 01:05:33,130 --> 01:05:40,560 that something feels unfair, unjust; but that sense, that feeling of unfairness, might not be as easy to grasp anymore. 625 01:05:40,560 --> 01:05:50,910 And similarly here with the law: if you look at the classes that we want to protect, they relate to sex and gender, 626 01:05:50,910 --> 01:05:58,850 ethnicity, religious beliefs, because we have had historical experiences where people have used those to discriminate against people. 627 01:05:58,850 --> 01:06:04,110 Right. So this is how we constructed the world, how we group people. 628 01:06:04,110 --> 01:06:07,410 But algorithms might group people completely differently, right? 629 01:06:07,410 --> 01:06:15,330 It could be that algorithms start to discriminate against people who are born on a Tuesday, have brown socks and like dogs. 630 01:06:15,330 --> 01:06:24,900 But this class of people doesn't find protection in non-discrimination law, because we never had that history. Or similarly with video gamers, 631 01:06:24,900 --> 01:06:30,000 which, for example, in China is something that could cause your credit score to drop. 632 01:06:30,000 --> 01:06:37,270 Obviously, that class of people doesn't find any protection in non-discrimination law, but maybe it shouldn't be that way. 633 01:06:37,270 --> 01:06:44,430 Maybe. So this is the project that we are also working on: trying to figure out, if the law doesn't give you enough protection, 634 01:06:44,430 --> 01:06:50,820 what would be the ethical thing to do? How can we increase and protect fairness in an algorithmic world? 635 01:06:50,820 --> 01:06:56,730 So, yeah, that's basically the programme that we're currently working on. 636 01:06:56,730 --> 01:07:02,220 And I actually think that one of the reasons why I'm so excited about all of this that's happening is 637 01:07:02,220 --> 01:07:12,300 because I think it is finally clear to everyone that if we want to use A.I. for good, we need to think from at least three perspectives.
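A small sketch of the grouping problem just described: measuring whether a group is disadvantaged is mechanically the same whether the group is a legally protected class or an arbitrary algorithmic construction ("born on a Tuesday and likes dogs"), but only the former has any standing in non-discrimination law. All records and rates below are invented for illustration.

```python
# Same arithmetic, very different legal status: an outcome-rate check run for a
# protected attribute (sex) and for an arbitrary algorithmic grouping.
applicants = [
    {"sex": "f", "born_tuesday": True,  "likes_dogs": True,  "approved": False},
    {"sex": "m", "born_tuesday": False, "likes_dogs": True,  "approved": True},
    {"sex": "f", "born_tuesday": False, "likes_dogs": False, "approved": True},
    {"sex": "m", "born_tuesday": True,  "likes_dogs": True,  "approved": False},
    {"sex": "m", "born_tuesday": False, "likes_dogs": False, "approved": True},
    {"sex": "f", "born_tuesday": True,  "likes_dogs": True,  "approved": False},
    {"sex": "m", "born_tuesday": False, "likes_dogs": True,  "approved": True},
    {"sex": "f", "born_tuesday": False, "likes_dogs": False, "approved": True},
]

def approval_rate(group):
    return sum(a["approved"] for a in group) / len(group)

def rate_gap(in_group, out_group):
    """How much worse the in-group fares than everyone else."""
    return approval_rate(out_group) - approval_rate(in_group)

# Legally recognised grouping:
women = [a for a in applicants if a["sex"] == "f"]
men = [a for a in applicants if a["sex"] == "m"]
print("gap for women vs men:", rate_gap(women, men))

# Arbitrary algorithmic grouping -- identical check, no legal protection:
quirky = [a for a in applicants if a["born_tuesday"] and a["likes_dogs"]]
rest = [a for a in applicants if not (a["born_tuesday"] and a["likes_dogs"])]
print("gap for Tuesday-born dog lovers vs rest:", rate_gap(quirky, rest))
```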
638 01:07:12,300 --> 01:07:16,020 We need to think about: is the law actually helping you, is the law good enough? 639 01:07:16,020 --> 01:07:20,040 And if it's not, what would be desirable? What would be the ethical thing to do? 640 01:07:20,040 --> 01:07:24,850 And then actually map it against technology and figure out if it's actually feasible. And if you do that, 641 01:07:24,850 --> 01:07:29,590 you can harness the full potential of A.I. but make sure that you protect human rights. 642 01:07:29,590 --> 01:07:41,560 Thank you. Thanks very much indeed, Sandra. 643 01:07:41,560 --> 01:07:50,740 And now Brent, who is Sandra's colleague. Brent Mittelstadt is a philosopher and postdoctoral research fellow in data ethics at the OII. 644 01:07:50,740 --> 01:08:00,500 He specialises in fairness, interpretability, group privacy and other aspects of the ethical governance of A.I. and complex algorithmic systems. 645 01:08:00,500 --> 01:08:07,580 Over to you, Brent. Yes, so thank you so much for inviting me to this. 646 01:08:07,580 --> 01:08:12,160 And thank you, Sandra, for the great introduction to our research programme. 647 01:08:12,160 --> 01:08:17,110 So what I'm going to talk about a bit is one of my most recent pieces of work, 648 01:08:17,110 --> 01:08:27,490 which I think speaks to the question of the relationship between law and ethics in the space of A.I., and particularly talks about the frameworks, 649 01:08:27,490 --> 01:08:35,350 the initiatives that we now have — over 100 of them — that are in some way trying to define the right sort of high-level 650 01:08:35,350 --> 01:08:41,200 concepts or principles or values or tenets to in some way guide the development of A.I., 651 01:08:41,200 --> 01:08:49,480 its governance and its use. So the talk is going to be based on this paper, which just came out about a week ago, 652 01:08:49,480 --> 01:08:58,540 where I look at the role that principles can and perhaps should play in the ethical governance of A.I. 653 01:08:58,540 --> 01:09:04,210 Now, my paper, despite having come out a week ago, is apparently already out of date. 654 01:09:04,210 --> 01:09:12,020 I thought we only had 84 initiatives from across the world, but we have over 100 now. 655 01:09:12,020 --> 01:09:19,530 And I hope this gets across the gravity of the situation. 656 01:09:19,530 --> 01:09:22,920 And I suppose my main concern is that if we have all these initiatives, 657 01:09:22,920 --> 01:09:29,140 you're essentially creating a market where developers can pick and choose the set of principles that works best for them. 658 01:09:29,140 --> 01:09:33,430 Chris Russell was joking the other day: here are my principles; 659 01:09:33,430 --> 01:09:42,430 if you don't like them, I have a bunch of others. But to me, the risk there is it gives the impression that there's sort of one way to do ethics: 660 01:09:42,430 --> 01:09:49,390 that you have a high-level set of principles that you are then going to specify into a set of practical requirements. 661 01:09:49,390 --> 01:09:57,670 And unsurprisingly, you should expect that if you both have lots of different sets of principles, but also leave it up to, 662 01:09:57,670 --> 01:10:03,280 say, individual development teams or individual companies to choose how to specify those principles on the ground, 663 01:10:03,280 --> 01:10:09,040 you're going to get requirements that don't match with each other, requirements that may be contradictory.
664 01:10:09,040 --> 01:10:13,600 And essentially, you can end up in a situation of extreme moral relativism. 665 01:10:13,600 --> 01:10:21,610 And to me, that thought — that ethics, or at least ethical frameworks, are there as something to, 666 01:10:21,610 --> 01:10:29,320 let's say, replace regulation, or at least that self-regulation can be seen as a way to replace hard regulation — 667 01:10:29,320 --> 01:10:33,010 to me that sort of does a disservice to the actual value that ethics can have. 668 01:10:33,010 --> 01:10:39,430 And in particular, the value that ethics and the law, working together in a good way, can have. 669 01:10:39,430 --> 01:10:45,580 So what I'm interested in is basically how can we make these ethics frameworks actually work within a, 670 01:10:45,580 --> 01:10:49,630 you know, an approach where we're open to hard regulation? 671 01:10:49,630 --> 01:10:57,110 Now, I'm happy that this has come up already, but it's just to say there's a clear connexion here with bioethics and medical ethics. 672 01:10:57,110 --> 01:11:02,150 So I don't have to make the case for that — thank you; a number of people mentioned that, actually. 673 01:11:02,150 --> 01:11:08,350 But just to say that there was some work done recently — a couple of papers — that was trying to look for some sort of consensus 674 01:11:08,350 --> 01:11:12,730 across all these different frameworks and initiatives that we had. 675 01:11:12,730 --> 01:11:17,290 And what, in particular, this piece of work from the AI4People project found — 676 01:11:17,290 --> 01:11:21,490 which was also adopted by the High-Level Expert Group on Artificial Intelligence — 677 01:11:21,490 --> 01:11:28,540 is that there's a set of principles that very closely mimic the classic principles of medical ethics. 678 01:11:28,540 --> 01:11:36,580 And that's really interesting, because I think actually looking at medical ethics and looking at how A.I. development compares to it can tell you a lot 679 01:11:36,580 --> 01:11:46,020 about whether we should expect this sort of principled form of self-governance to work in a similar way in A.I. development as it does in medicine. 680 01:11:46,020 --> 01:11:51,700 That's not to say that medical ethics is without its failures, or that a principled approach is going to solve everything. 681 01:11:51,700 --> 01:11:57,910 It's just to say it's definitely a case where the use of principles for ethical governance, for ethical decision making, is very clear. 682 01:11:57,910 --> 01:12:02,170 It's very prominent and it has had impact in practise. 683 01:12:02,170 --> 01:12:08,530 So what I'm going to do in the remainder of my few minutes is just to look at those two professions a little bit closer and see, well, 684 01:12:08,530 --> 01:12:17,360 how exactly does A.I. development compare to this profession where we've seen a principled approach to ethics emerge and have some success? 685 01:12:17,360 --> 01:12:23,620 And so there are four different characteristics I want to look at here. The first is the existence of common aims. 686 01:12:23,620 --> 01:12:30,460 And what I mean by that is that in medicine, doctors have fiduciary duties towards their patients. 687 01:12:30,460 --> 01:12:37,600 The practise is broadly guided by a common set of aims, which is to benefit the health and well-being of the patient. 688 01:12:37,600 --> 01:12:41,700 And of course, we will disagree about how to do that best in practise.
689 01:12:41,700 --> 01:12:48,230 At the very least when talking about, say, public interests versus the interests of the individual patient. 690 01:12:48,230 --> 01:12:54,430 But there is this sort of commonality. There is this common ground from which ethical decision making can actually proceed. 691 01:12:54,430 --> 01:13:01,990 And that is reflected in the fact that you have very strong fiduciary duties between professionals and patients. 692 01:13:01,990 --> 01:13:07,920 What that allows for is, basically, a co-operative approach to ethical decision making, to specifying these principles. 693 01:13:07,920 --> 01:13:14,910 And I'm not sure that that level of cooperation can be taken for granted in the space of A.I. development. At the very least, 694 01:13:14,910 --> 01:13:22,410 if you're talking about privately developed A.I., the initial fiduciary duties owed by the development team will be to the 695 01:13:22,410 --> 01:13:27,250 shareholders of the company rather than the users or people affected by the system. 696 01:13:27,250 --> 01:13:31,710 And so my concern is, if you don't have that sort of common grounding, that direction we're all travelling in, 697 01:13:31,710 --> 01:13:36,930 then I'm not sure that you can actually have that; you're going to end up with 698 01:13:36,930 --> 01:13:40,650 ethical decision making that's fundamentally competitive rather than cooperative. 699 01:13:40,650 --> 01:13:49,050 And clearly, that's not something that's conducive to finding an appropriate balance between different interests in practise. 700 01:13:49,050 --> 01:13:53,820 The second characteristic I want to look at is the role of professional history. 701 01:13:53,820 --> 01:13:58,770 And this is just to say that medicine has obviously been a profession for a very long time. 702 01:13:58,770 --> 01:14:03,810 We have codes of conduct. We have accounts of what it means to be a good doctor. 703 01:14:03,810 --> 01:14:08,820 These are captured in things like the Hippocratic oath through, more recently, the AMA Code of Medical Ethics. 704 01:14:08,820 --> 01:14:14,190 These are documents that have been revised over time; they're very longstanding, they're very detailed. 705 01:14:14,190 --> 01:14:18,930 They give opinions on particular types of interventions. 706 01:14:18,930 --> 01:14:20,880 And the fact that they have been sort of 707 01:14:20,880 --> 01:14:28,710 tested over time and revised is one of the reasons that they continue to actually be useful in day-to-day professional practise. 708 01:14:28,710 --> 01:14:37,080 Now, if you compare that to the state of A.I. development — I'm going to use software engineering here as an analogue to A.I. development — 709 01:14:37,080 --> 01:14:43,080 we do have professional bodies, the ACM and the IEEE, of course, being the two biggest ones. 710 01:14:43,080 --> 01:14:50,430 And we do have codes of ethics. But in comparison to medicine, they're relatively short and relatively abstract. 711 01:14:50,430 --> 01:14:57,120 The ACM one was revised recently, but still remains comparatively abstract when compared to, say, the AMA code of ethics.
712 01:14:57,120 --> 01:14:57,950 And there's research, 713 01:14:57,950 --> 01:15:05,940 recent research, that suggests that the codes of ethics actually have very little influence on the day-to-day decision making of engineers, 714 01:15:05,940 --> 01:15:09,360 which is clearly a problem, because if you have a code of ethics 715 01:15:09,360 --> 01:15:19,400 but it isn't shown to be particularly effective in the sense of influencing the behaviour of engineers, then you have to question what its actual value is. 716 01:15:19,400 --> 01:15:26,660 The third characteristic I'll briefly touch on is just the methods that we have for translating principles into practise. 717 01:15:26,660 --> 01:15:30,880 The fundamental problem that we have here with the ethics frameworks is that they're 718 01:15:30,880 --> 01:15:36,020 based on, or they rely on, what can be called essentially contested concepts — 719 01:15:36,020 --> 01:15:40,610 essentially very abstract concepts that can have lots of different meanings in practise. 720 01:15:40,610 --> 01:15:46,820 If I took a survey of this room — well, who thinks that a system should treat people fairly? — 721 01:15:46,820 --> 01:15:48,890 I imagine pretty much everybody would raise their hand. 722 01:15:48,890 --> 01:15:54,620 And yet everybody may have a completely different sense of what fairness would actually mean in practise. 723 01:15:54,620 --> 01:15:59,300 And the point is that those different meanings of fairness can be rationally held. 724 01:15:59,300 --> 01:16:06,710 They can be genuine. They can be defended. We shouldn't expect a single correct meaning of any of these terms. 725 01:16:06,710 --> 01:16:12,490 The problem that you have is that we've reached consensus on what the correct essentially contested concepts are. 726 01:16:12,490 --> 01:16:17,930 We've reached this high-level consensus. But to me, that doesn't actually reflect any sort of true consensus. 727 01:16:17,930 --> 01:16:25,010 It's more just a way to mask really important normative and political disagreements. We have a framework, 728 01:16:25,010 --> 01:16:30,040 we agreed on the framework, but we certainly don't agree on what it means in practise. 729 01:16:30,040 --> 01:16:36,780 And of course, the other problem is that those sorts of concepts don't translate automatically into practical requirements. 730 01:16:36,780 --> 01:16:43,950 That is a very difficult process. Medicine has things like professional societies and boards, ethics review committees, 731 01:16:43,950 --> 01:16:49,140 accreditation and licencing schemes, peer self-governance, standards. 732 01:16:49,140 --> 01:16:53,530 All these things help you actually do that translation in practise. 733 01:16:53,530 --> 01:17:02,440 Software engineering does have some of these things, but it's lacking mechanisms that are of, say, similar stature, similar importance. 734 01:17:02,440 --> 01:17:08,500 I think one of the reasons for that is that the profession is not legally 735 01:17:08,500 --> 01:17:14,050 recognised as a profession in the same way that medicine is. 736 01:17:14,050 --> 01:17:20,320 What I mean by that is doctors require a licence to practise. Software engineers, in some cases, in limited contexts, 737 01:17:20,320 --> 01:17:30,640 do need a licence to practise, but it's in no way of the same importance or the same coverage as medical licences.
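To make the "essentially contested concept" point about fairness a little more concrete, here is a compact illustration: two perfectly defensible formalisations of "treat people fairly" — equal positive-prediction rates (demographic parity) and equal true-positive rates (equal opportunity) — can give opposite verdicts on the same predictions. The toy records below are invented and are not tied to any example from the talks.

```python
# Two reasonable fairness definitions disagreeing about the same predictions.
records = [
    # (group, actually_qualified, predicted_positive)
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", False, True), ("B", False, False),
]

def positive_rate(group):
    rows = [r for r in records if r[0] == group]
    return sum(r[2] for r in rows) / len(rows)

def true_positive_rate(group):
    qualified = [r for r in records if r[0] == group and r[1]]
    return sum(r[2] for r in qualified) / len(qualified)

for g in ("A", "B"):
    print(g, "positive rate:", positive_rate(g),
          " true positive rate:", true_positive_rate(g))
# Demographic parity says the system treats A and B identically (0.5 vs 0.5);
# equal opportunity says it clearly favours A (TPR 1.0 vs 0.5). Both readings
# are rationally defensible -- which is exactly the difficulty described above.
```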
738 01:17:30,640 --> 01:17:35,860 And so this lack of legal recognition of the profession is a serious problem, because ethics — 739 01:17:35,860 --> 01:17:42,790 professional ethics at least — really has teeth when it actually has legal mechanisms to back it up. 740 01:17:42,790 --> 01:17:49,350 When doing something unethical in your day-to-day behaviour as a professional could actually lead to, 741 01:17:49,350 --> 01:17:55,100 say, losing the ability to practise your profession. 742 01:17:55,100 --> 01:18:02,020 And so just to conclude here, I think we have a number of legal gaps within the ethics of A.I. 743 01:18:02,020 --> 01:18:08,050 One really important one is that there is not this legal recognition of A.I. development as a profession. 744 01:18:08,050 --> 01:18:12,820 And a lot of the initiatives we have are based on human rights frameworks or other 745 01:18:12,820 --> 01:18:16,720 sorts of frameworks that are not directly legally binding in the same way as, 746 01:18:16,720 --> 01:18:22,300 say, the GDPR would be. And so to move forward, I think we can do a couple of things. 747 01:18:22,300 --> 01:18:29,290 One is to start thinking about ethics more in the sense of a business or organisational ethics and less as a professional ethics. 748 01:18:29,290 --> 01:18:35,710 Both sides are important. But I think there's too much emphasis placed on individuals doing wrong things — software engineers, 749 01:18:35,710 --> 01:18:40,990 individual developers doing wrong things — and less on the business model that they're 750 01:18:40,990 --> 01:18:46,270 working within or the organisational practises themselves being unethical in some way. 751 01:18:46,270 --> 01:18:54,450 We had a chance to do that with the High-Level Expert Group. There were red lines initially supposed to be drafted, to basically set out types of A.I. that 752 01:18:54,450 --> 01:18:58,120 should not be developed in Europe. In the end, those red lines were taken out. 753 01:18:58,120 --> 01:19:04,380 To me, these would have been a very strong signal that A.I. ethics is also a business ethics. 754 01:19:04,380 --> 01:19:12,460 And then just finally, I think we may need to revisit the idea of licencing developers of high-risk A.I. applications. 755 01:19:12,460 --> 01:19:17,550 And I think, more than anything, we just need to develop a much stronger empirical evidence base, 756 01:19:17,550 --> 01:19:22,320 based on case studies of specific ethical challenges, 757 01:19:22,320 --> 01:19:26,700 so we can really start to understand how we disagree and agree with each other 758 01:19:26,700 --> 01:19:30,900 in practise about what these different concepts and principles mean on the ground. 759 01:19:30,900 --> 01:19:40,480 Thank you very much. 760 01:19:40,480 --> 01:19:48,300 What we're getting is something of a taste of the range of very difficult issues that will need to be addressed by the institute. 761 01:19:48,300 --> 01:19:56,760 We're going to end with a session on A.I. in medicine. Comparisons have been drawn between medicine and A.I. quite a few times already. 762 01:19:56,760 --> 01:20:00,930 But now we're going to hear what A.I. in medicine is really like. 763 01:20:00,930 --> 01:20:04,260 And our first speaker is Professor Gil McVean. 764 01:20:04,260 --> 01:20:13,950 He's professor of statistical genetics, director of the Big Data Institute and, amongst various other honours, a fellow of the Royal Society.
765 01:20:13,950 --> 01:20:21,430 Five years ago, he co-founded the company Genomics plc to use genomics to improve healthcare. 766 01:20:21,430 --> 01:20:29,160 You know, I think the kindest way of describing me in the context of this room is very much as a non-expert, 767 01:20:29,160 --> 01:20:35,340 at least when it comes to ethics. I am a statistician and a geneticist, 768 01:20:35,340 --> 01:20:43,050 and I've worked in the field of using genetics to understand human disease and to identify 769 01:20:43,050 --> 01:20:50,400 opportunities for new therapies or to better predict where people are on that trajectory. 770 01:20:50,400 --> 01:20:55,620 I've done that for many years. And over the last few years, 771 01:20:55,620 --> 01:21:02,250 my role has shifted a bit from purely being someone who was doing research to someone 772 01:21:02,250 --> 01:21:07,170 who has been thinking about the kinds of infrastructure and kinds of communities that 773 01:21:07,170 --> 01:21:12,660 you need to build in order to make sure that this growing part of biomedical research 774 01:21:12,660 --> 01:21:17,460 is done well at the university level and translated well, and ethically, 775 01:21:17,460 --> 01:21:24,990 of course, into practise. And behind that is very much my role as the director of the Big Data Institute, 776 01:21:24,990 --> 01:21:35,070 which is one of these recent — now not quite the latest — new centres to have appeared within the University of Oxford. 777 01:21:35,070 --> 01:21:38,550 For those of you who don't know us, we are a physical thing, 778 01:21:38,550 --> 01:21:46,850 a new institute in a new building up the hill on the Old Road Campus, and there are somewhere between 350 and 500 779 01:21:46,850 --> 01:21:58,500 people within this institute who are united by this desire to use a data- 780 01:21:58,500 --> 01:22:05,160 driven approach to understanding the causes of human disease and identifying routes to intervention. 781 01:22:05,160 --> 01:22:10,280 So we're entirely dry lab. We're entirely computational, as it were. 782 01:22:10,280 --> 01:22:15,960 And really what we are about is creating this fuel for A.I. 783 01:22:15,960 --> 01:22:22,320 Now, we've heard an awful lot about the sort of ethical issues of how you actually 784 01:22:22,320 --> 01:22:28,860 use A.I. in context and about the regulatory or the legal aspects around that. 785 01:22:28,860 --> 01:22:33,390 Perhaps what we haven't heard quite so much about is the process by which you can 786 01:22:33,390 --> 01:22:39,540 acquire what is the fundamental part that has to go into any A.I. process, 787 01:22:39,540 --> 01:22:50,980 which is very much the data itself. Within the institute, just very briefly, there are four types of things that we do. 788 01:22:50,980 --> 01:22:56,530 The first is about how we measure things — so, measurement technologies. 789 01:22:56,530 --> 01:23:02,380 The second is about how we bring all those data together to create the research-ready, analysis- 790 01:23:02,380 --> 01:23:09,070 ready datasets that our researchers and others can come in and peer into, 791 01:23:09,070 --> 01:23:14,230 to try and identify the structure that ultimately leads to these insights. 792 01:23:14,230 --> 01:23:23,230 The third is that we have people from statistics, computer science, engineering, epidemiology, genomics, et cetera, developing methods.
793 01:23:23,230 --> 01:23:31,330 That is, if you like, the A.I. algorithms which are going to peer into these kinds of rich data sets. 794 01:23:31,330 --> 01:23:38,890 And then finally — and probably why I'm here — the fourth pillar of what we do within this institute is to 795 01:23:38,890 --> 01:23:45,670 think hard about the much wider societal aspects of this data-driven landscape. 796 01:23:45,670 --> 01:23:50,560 So issues around consent, issues around privacy, security issues, 797 01:23:50,560 --> 01:23:54,520 governance, issues around sharing, intellectual property. 798 01:23:54,520 --> 01:24:03,350 And we made a decision right at the start of putting this institute together that this was something that we wanted to have going on 799 01:24:03,350 --> 01:24:09,320 actually in the building, because it's such an integral part of doing biomedical research these days. 800 01:24:09,320 --> 01:24:18,740 And the issues that come out of this kind of research are so deep that if we don't train people in how you 801 01:24:18,740 --> 01:24:25,730 should think about conducting this kind of research, and you don't build the right practises into how people do it, 802 01:24:25,730 --> 01:24:34,400 prosecuting the research programmes actually at the point of implementation, then you're kind of starting on the wrong foot. 803 01:24:34,400 --> 01:24:42,920 So we very much put that at the heart of the institute. Ethox is based within the Big Data Institute. 804 01:24:42,920 --> 01:24:52,670 And very recently we got funding from the EPSRC to set up a new Centre for Doctoral Training in health data science, 805 01:24:52,670 --> 01:25:01,670 one of the key pillars of that programme being that these data scientists and machine learners and so on would be trained, 806 01:25:01,670 --> 01:25:10,580 very much alongside all the other skills that they need, in the skills to think about these problems from that standpoint. 807 01:25:10,580 --> 01:25:15,710 So it really, really is central to how we think. 808 01:25:15,710 --> 01:25:24,230 And if anyone's interested in use cases, in coming up to us and talking about the types of problems that we're working on, 809 01:25:24,230 --> 01:25:29,030 the types of dilemmas that we're faced with, then please do get in touch. 810 01:25:29,030 --> 01:25:32,160 We'd be more than happy to talk. 811 01:25:32,160 --> 01:25:42,690 So I just want to say a couple of things about my sort of personal perspective on why the types of research that we're doing now, 812 01:25:42,690 --> 01:25:47,970 which are very much within the tradition of biomedical or medical research, 813 01:25:47,970 --> 01:25:55,930 are a bit different and why they're raising new challenges from the ethical perspective. 814 01:25:55,930 --> 01:26:01,080 And I think a really important point to start, which is perhaps not so well understood, 815 01:26:01,080 --> 01:26:08,130 is actually that the growth of AI and machine learning technologies within biomedical 816 01:26:08,130 --> 01:26:15,420 research has really led to something of a shift in how medical research itself is conducted. 817 01:26:15,420 --> 01:26:20,360 And this comes back to this question of data: how we get the data. 818 01:26:20,360 --> 01:26:26,880 So it used to be that in medical science, you had a hypothesis. 819 01:26:26,880 --> 01:26:30,980 You decided you wanted to test some particular question.
820 01:26:30,980 --> 01:26:36,770 And off the back of that, you designed an experiment; that experiment gave you some data. 821 01:26:36,770 --> 01:26:43,820 You analysed the data off the back of that and drew some conclusions. And maybe you came up with a new hypothesis. 822 01:26:43,820 --> 01:26:48,540 Importantly, those data were collected specifically for that purpose. 823 01:26:48,540 --> 01:26:55,910 And you went out and you explained to people why you were going to collect that data and what you hoped would come out of it. 824 01:26:55,910 --> 01:27:00,680 That's a very clean way of doing science. But clearly, it's not massively scalable. 825 01:27:00,680 --> 01:27:06,710 There's essentially one question that you could ask of those data, and only one. 826 01:27:06,710 --> 01:27:17,310 Now, about ten, fifteen years ago, biomedical research sort of took a side step; it changed direction a bit in how it collected data, 827 01:27:17,310 --> 01:27:27,210 and a lot of that came out of the world of genomics, where people who had been studying how genes affect diseases realised — 828 01:27:27,210 --> 01:27:32,580 they'd been studying sort of their favourite gene and their favourite disease in a particular combination, 829 01:27:32,580 --> 01:27:40,810 and the literature was full of incredibly bad results that never replicated and were massively underpowered. 830 01:27:40,810 --> 01:27:45,920 But 15 years ago, what happened was a change in technology. 831 01:27:45,920 --> 01:27:55,970 Changes in technology that led to us being able to do experiments not just on one gene in a handful of individuals, 832 01:27:55,970 --> 01:28:01,030 but on the entire genome in tens of thousands of individuals. 833 01:28:01,030 --> 01:28:11,140 And that led to this idea that rather than going in with your specific hypothesis, actually the most powerful thing is to go in without a hypothesis. 834 01:28:11,140 --> 01:28:16,630 You go in and you just collect data and you let the data tell you what the answer is. 835 01:28:16,630 --> 01:28:22,990 And that idea has very much percolated from just thinking about, well, let's study the whole genome 836 01:28:22,990 --> 01:28:30,130 and one disease — genome-wide association studies were essentially that idea — to the idea that you go in and you 837 01:28:30,130 --> 01:28:36,100 collect the genome and you collect everything that you possibly can about an individual's health, 838 01:28:36,100 --> 01:28:41,980 environment, lifestyle, finances. You just collect everything you can. 839 01:28:41,980 --> 01:28:47,730 And later on, you decide what research to do with it. Now, 840 01:28:47,730 --> 01:28:55,740 the success of this programme is sort of made real by something like the UK Biobank, which many of you will probably know about, 841 01:28:55,740 --> 01:29:06,240 where somewhere between one and two percent of adults within the UK have consented to have their entire medical data, 842 01:29:06,240 --> 01:29:09,030 their entire genome sequence, 843 01:29:09,030 --> 01:29:18,620 and huge amounts of ancillary information about their lifestyle and cognition, their parents, sometimes their children — 844 01:29:18,620 --> 01:29:28,820 huge amounts of information — made available to people like me and people like you and people in companies and people in China and people in the U.S. 845 01:29:28,820 --> 01:29:36,560 All you have to do is sign up to a very few sort of restrictions about what you're going to do with the data.
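One way to see why the early single-gene literature failed to replicate, and why genome-wide studies had to change the statistical rules, is the multiple-testing arithmetic behind them: test a million variants at the conventional 5% threshold and, even if nothing is real, tens of thousands of "hits" appear by chance, which is why genome-wide analyses adopt far stricter per-test thresholds (commonly quoted as around 5 × 10⁻⁸). The figures below are standard back-of-the-envelope numbers, not figures from the talk.

```python
# Back-of-the-envelope arithmetic behind genome-wide significance thresholds.
n_variants = 1_000_000     # order of magnitude of independent tests in a GWAS (assumed)
alpha_naive = 0.05         # conventional single-hypothesis threshold

expected_false_positives = n_variants * alpha_naive
print(f"Expected chance 'hits' at p < {alpha_naive}: {expected_false_positives:,.0f}")

# Bonferroni-style correction: divide the error budget across all tests.
alpha_corrected = alpha_naive / n_variants
print(f"Corrected per-test threshold: {alpha_corrected:.1e}")
# Roughly 5e-8 -- the commonly quoted genome-wide significance level, and one
# reason the underpowered single-gene results described above did not replicate.
```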
846 01:29:36,560 --> 01:29:43,730 You have to say roughly what you're going to do with the data. You have to say that you're not going to try and identify these people. 847 01:29:43,730 --> 01:29:50,730 But beyond that, there's really not very much that you have to say you're going to do. 848 01:29:50,730 --> 01:29:56,880 And as a consequence of that, there are people all over the world probing the tiniest details, 849 01:29:56,880 --> 01:30:04,630 the most intimate information, about half a million people within the U.K., some probably indeed within this room. 850 01:30:04,630 --> 01:30:11,710 So it's an example of how our way of doing research is really shifting. 851 01:30:11,710 --> 01:30:18,460 This shift is exactly what enables the whole A.I. revolution in medicine and health care. 852 01:30:18,460 --> 01:30:26,440 But it, of course, brings up all sorts of questions about what it means to be informed about a research project which has no end. 853 01:30:26,440 --> 01:30:34,780 What it means in terms of: can you ever comprehend the sorts of things that I might 854 01:30:34,780 --> 01:30:40,270 learn about you if I bring together lots of sorts of information that you would never have had? 855 01:30:40,270 --> 01:30:49,720 And what would you like to know if I can, for example, predict whether you're going to get a particular disease in the next 10, 20, 50 years? 856 01:30:49,720 --> 01:31:00,040 Huge numbers of new challenges arise from this, which we are only, I think, just beginning to describe — and on that topic, I shall shut up. 857 01:31:00,040 --> 01:31:08,250 Thanks very much. Thank you, Gil. 858 01:31:08,250 --> 01:31:15,180 It's interesting how some of the points that you raised there could actually be potentially revolutionary 859 01:31:15,180 --> 01:31:19,800 for other areas of science too, and for views about philosophy and science — 860 01:31:19,800 --> 01:31:25,530 the idea that you collect data first and then you form a hypothesis, rather than the other way round. 861 01:31:25,530 --> 01:31:30,420 Our final speaker today is Jess Morley. She's policy lead at the Oxford 862 01:31:30,420 --> 01:31:39,790 Evidence-Based Medicine DataLab, a research assistant at the Oxford Internet Institute's Digital Ethics Lab, and a subject matter expert at 863 01:31:39,790 --> 01:31:47,850 NHSX, whose aim is to drive digital transformation within the NHS. 864 01:31:47,850 --> 01:31:52,620 Here she focuses on the ethics, policies and regulations needed to enable the use of 865 01:31:52,620 --> 01:31:59,550 healthcare data for the improvement of outcomes whilst minimising the associated risks. 866 01:31:59,550 --> 01:32:09,390 Thank you. I also don't have any slides, because that causes too much trouble, so I'm also very aware that I am kind of the stop gap between you and wine, 867 01:32:09,390 --> 01:32:15,270 and getting out of this very room, so I will try and not be very boring. 868 01:32:15,270 --> 01:32:21,420 I think what you have had so far is a really wonderful introduction to what I am trying to talk about, 869 01:32:21,420 --> 01:32:27,060 because I sit in a number of different intersections, in that I am both an employee of 870 01:32:27,060 --> 01:32:31,310 the university and a student at the university, in a different part of the university. 871 01:32:31,310 --> 01:32:39,690 And I am also working in academia and I am actually making policy. Today, because of purdah, 872 01:32:39,690 --> 01:32:49,170 I have to very much be talking from the Oxford perspective.
But if you have questions about specific things, do catch me afterwards. 873 01:32:49,170 --> 01:32:56,790 But really what I try and do — and I think Sandra phrased it really, really well — is look at what is legally required, 874 01:32:56,790 --> 01:33:06,570 which in the context of medicine and data use tends to be consent and anonymisation; then what is ethically desirable; and then what's the standard. 875 01:33:06,570 --> 01:33:12,120 And then we try and develop what we refer to as a principled, proportionate approach that 876 01:33:12,120 --> 01:33:19,620 tries to move the 'what' of A.I. ethics into the 'how' of A.I. ethics, and to really operationalise it. 877 01:33:19,620 --> 01:33:24,000 We look at a number of different things. First of all, how can A.I. actually help? 878 01:33:24,000 --> 01:33:27,170 I think this is something that often gets missed in ethical conversations. 879 01:33:27,170 --> 01:33:33,390 People always talk about the risks, but actually a very big risk, particularly from the perspective of the NHS, 880 01:33:33,390 --> 01:33:42,330 which has a duty to capitalise on all of the ways that you can enable healthcare, is the opportunity cost of not letting it happen. 881 01:33:42,330 --> 01:33:48,630 So we try and look at where it helps, which tends to be largely in categories such as diagnostics and screening — 882 01:33:48,630 --> 01:33:51,810 that's where most research currently focuses, 883 01:33:51,810 --> 01:33:57,450 primarily because that's where the data is more easily accessible and more standardised. 884 01:33:57,450 --> 01:34:06,090 Population health — this is basically better versions, and less bad versions, of Google Flu; discovery, 885 01:34:06,090 --> 01:34:12,300 such as drug discovery; and what's sometimes referred to as P4 medicine or precision medicine. 886 01:34:12,300 --> 01:34:17,120 And those are the kind of areas that we look at in terms of why we think it might help. 887 01:34:17,120 --> 01:34:23,040 And then the question is, how can it actually hurt? You've already heard many of the reasons why. 888 01:34:23,040 --> 01:34:29,580 And one of the things that I've co-authored is one of those many sets of ethical principles that Brent had up on that slide, 889 01:34:29,580 --> 01:34:35,600 which is the Department of Health and Social Care's code of conduct for the NHS. 890 01:34:35,600 --> 01:34:42,360 So it's called the NHS code of conduct for data-driven health care technologies, which doesn't exactly roll off the tongue. 891 01:34:42,360 --> 01:34:46,080 The reason it is called that is because most of the applications that we see 892 01:34:46,080 --> 01:34:50,040 currently within the NHS, or within the healthcare sector at large — 893 01:34:50,040 --> 01:35:00,230 at least at the frontline — do not fall into the category of being A.I. yet; they tend to be far simpler, basic statistics. 894 01:35:00,230 --> 01:35:08,340 So we've tried to encompass that with 'data-driven'. But the thing to look at is that that code of conduct, and almost everything that 895 01:35:08,340 --> 01:35:13,740 we've heard about so far, tends to focus on just the individual — protecting the individual — 896 01:35:13,740 --> 01:35:19,890 which stems from this connexion with medical ethics being around protecting the person. 897 01:35:19,890 --> 01:35:24,650 And as we have seen: autonomy, beneficence, non-maleficence, justice and explicability.
898 01:35:24,650 --> 01:35:32,040 And whilst that's really important, when you're operating and thinking from a systems perspective 899 01:35:32,040 --> 01:35:36,660 you also need to think about how ethical risks can arise at 900 01:35:36,660 --> 01:35:39,690 a number of other different levels. At the interpersonal level: 901 01:35:39,690 --> 01:35:46,470 how much does it change the relationship between clinicians and patients, for example, and also between the patient and themselves? 902 01:35:46,470 --> 01:35:51,030 There is a very interesting dichotomy that you have in the healthcare sector with AI, which is that 903 01:35:51,030 --> 01:35:56,910 you are dealing simultaneously with a very physical being and an entirely digital sphere. 904 01:35:56,910 --> 01:36:04,680 And sometimes those two things do not actually match, and the clinician needs to be aware of the other. 905 01:36:04,680 --> 01:36:11,950 The other impacts you can see are group-level impacts, sectoral-level impacts and societal-level impacts. 906 01:36:11,950 --> 01:36:15,660 So a group- and societal-level impact might be where, for example, 907 01:36:15,660 --> 01:36:23,430 we have problems with the fact that the NHS is grounded on this principle that it is care for all of us. 908 01:36:23,430 --> 01:36:29,940 AI risks making that less true by focussing on the introduction of it, 909 01:36:29,940 --> 01:36:33,570 when AI should be used in a way that makes it more true; 910 01:36:33,570 --> 01:36:40,920 we could, in fact, introduce new and evolving inequalities and exacerbate those already existing. 911 01:36:40,920 --> 01:36:46,860 And then the other, societal, level, and this is particularly important again when taking a systems perspective, 912 01:36:46,860 --> 01:36:57,210 is the issue of trust: how might you lose trust in the provision of health care from the state because you have made mistakes too early on? 913 01:36:57,210 --> 01:37:01,320 So those are all of the sorts of levels that you might think about. 914 01:37:01,320 --> 01:37:06,830 And then the other area that we need to try and look at is where you might have 915 01:37:06,830 --> 01:37:11,780 different ethical issues arising at different levels of the machine learning pipeline. 916 01:37:11,780 --> 01:37:19,430 One of the things I often say is that even if you have written ethics into the business case, it can have been coded out by the time it gets to the point of deployment. 917 01:37:19,430 --> 01:37:23,600 A really clear example of this, which we're currently tackling, comes from 918 01:37:23,600 --> 01:37:31,730 a couple of things. One is that, because the question of liability is a little bit unsure, 919 01:37:31,730 --> 01:37:37,100 it tends to be interpreted as landing on the final clinician who's made the decision. 920 01:37:37,100 --> 01:37:45,670 But a lot of the tools that are deployed at the front line or deployed in the app stores, things like mole-checking apps, for example, 921 01:37:45,670 --> 01:37:51,640 which can take a photo of a mole and tell you your level of risk, are skewed towards false 922 01:37:51,640 --> 01:37:57,750 positives, because it's less of a risk to the company to say that you have a condition when you don't than to miss one that you do. 923 01:37:57,750 --> 01:38:02,590 The issue that this is having is that whilst these technologies are 924 01:38:02,590 --> 01:38:08,020 supposed to be deployed in order to enable people to take care of themselves,
925 01:38:08,020 --> 01:38:15,430 in fact it is actually doing the opposite and driving people towards frontline care, because people get these very high ratings of risk. 926 01:38:15,430 --> 01:38:21,560 Similarly, we have an issue with things like breast cancer screening algorithms, for example, 927 01:38:21,560 --> 01:38:28,250 that have drastically increased the capacity to recognise potential pathology, because 928 01:38:28,250 --> 01:38:33,140 a machine can read many scans at once, but a doctor can still only treat one person at once. 929 01:38:33,140 --> 01:38:39,500 So you have effectively created a bottleneck, which is an ethical issue from the perspective of the system. 930 01:38:39,500 --> 01:38:45,590 Then you have the person who is living with the potentially anxiety-inducing diagnosis, 931 01:38:45,590 --> 01:38:50,210 and that has very big implications for their physical and mental integrity, 932 01:38:50,210 --> 01:38:56,960 but they may be waiting much longer for treatment. So those are the kinds of areas that we try and look at. 933 01:38:56,960 --> 01:39:03,350 We then try to develop standards and policies that build on the regulatory framework. 934 01:39:03,350 --> 01:39:11,510 There is work going on to assess the regulation and to do various things like create regulatory sandboxes in order to test ideas, 935 01:39:11,510 --> 01:39:17,620 but that takes much longer than it takes to develop standards and policies and to implement them. 936 01:39:17,620 --> 01:39:22,700 So one of the things, for example, that we have looked at is the code of conduct. 937 01:39:22,700 --> 01:39:31,960 First of all, it is an abstraction of an abstraction: it's based on the ethical principles, but it tries far more to operationalise them. 938 01:39:31,960 --> 01:39:38,960 So instead of saying things like "autonomy", it says things like: design for your specific user, state 939 01:39:38,960 --> 01:39:46,410 how the technology will impact them, how it will help, and how it is better than what is already in place. 940 01:39:46,410 --> 01:39:50,870 And there's a number of operational things that we have done to make that a little 941 01:39:50,870 --> 01:39:55,160 bit more realistic, because of the complexities that you're dealing with in medicine, 942 01:39:55,160 --> 01:39:57,230 such as safety and efficacy. 943 01:39:57,230 --> 01:40:05,750 For example, one of the things that we developed was the standards for evidence of efficacy in digital health technologies, 944 01:40:05,750 --> 01:40:11,960 and similarly, what socially acceptable commercial models for health data are. 945 01:40:11,960 --> 01:40:19,100 And then we are building a platform that shows how people can actually go in and 946 01:40:19,100 --> 01:40:24,230 provide that evidence to the relevant entities of how they are meeting those principles, 947 01:40:24,230 --> 01:40:29,960 what evidence they have provided, and allow people to actually see it. This is all available in a big report 948 01:40:29,960 --> 01:40:35,660 that was published a couple of weeks ago, if anyone wants to see it. Really, that's what we're trying to do.
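The screening bottleneck described a moment ago can be made concrete with a minimal back-of-the-envelope sketch. All of the numbers below (scan throughput, positive rate, treatment slots) are invented for illustration and are not taken from the talk; the point is only that multiplying how fast worrying findings are generated, while treatment capacity stays fixed, makes the waiting list grow rather than shrink.

```python
# Back-of-the-envelope sketch of the screening bottleneck.
# Every figure here is assumed purely for illustration.

positive_rate = 0.02             # assumed fraction of scans flagged for follow-up
treatment_slots_per_week = 12    # assumed, unchanged clinical capacity

for label, scans_read_per_week in [("human readers only", 500),
                                   ("algorithm-assisted", 5_000)]:
    flagged = scans_read_per_week * positive_rate
    backlog_growth = max(flagged - treatment_slots_per_week, 0)
    print(f"{label}: {flagged:.0f} flagged per week, "
          f"waiting list grows by {backlog_growth:.0f} patients per week")
```

Under these made-up figures the algorithm flags ten times as many people per week while the number who can be treated stays the same, which is exactly the systems-level effect, anxious patients waiting longer, being described here.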
949 01:40:35,660 --> 01:40:43,790 We look at the issues to do with the fact that data is now circulating outside of the system, so that in many places 950 01:40:43,790 --> 01:40:50,230 you now have private companies that may know more about your health than the healthcare system does, 951 01:40:50,230 --> 01:40:56,390 and at how that is breaking down traditional barriers that have been in place for good reasons. 952 01:40:56,390 --> 01:41:00,560 We heard earlier about things like the difference between medical research and medical practise; 953 01:41:00,560 --> 01:41:05,480 when you're working in a cybernetic loop with data, that starts to break down. 954 01:41:05,480 --> 01:41:10,910 We look at the risks that arise at different levels of society and at 955 01:41:10,910 --> 01:41:18,290 different stages in the machine learning pipeline, and at how we actually operationalise those, so that instead of just saying, 956 01:41:18,290 --> 01:41:32,560 isn't it nice that everyone is going to be fine, we ask: how will we actually know? 957 01:41:32,560 --> 01:41:37,420 Thank you very much, Jess. We're going a little bit over the originally advertised time, 958 01:41:37,420 --> 01:41:47,050 so anyone who needs to go, do feel free. But we've got three questions from the audience that have attracted nine votes, 959 01:41:47,050 --> 01:41:52,750 so I think it would be good to spend just a little bit of time addressing those. 960 01:41:52,750 --> 01:42:01,300 One of them said: what place does diversity and inclusion play in work done in the ethics of A.I.? 961 01:42:01,300 --> 01:42:06,610 What will the institute do to ensure a range of backgrounds and voices are involved? 962 01:42:06,610 --> 01:42:15,640 That's a biggie. And I don't think this is the time to be discussing the various policies that the institute will be implementing. 963 01:42:15,640 --> 01:42:24,760 But I'm going to take the opportunity to give a plug to the podcast Futuremakers, where we had two excellent episodes last year: 964 01:42:24,760 --> 01:42:36,430 one of them discussing the bias of algorithms, in fact, where Sandra and Brent were present, and another one asking, does A.I. have a gender? 965 01:42:36,430 --> 01:42:44,650 And Carissa was a contributor to that. So in each case, you've got a discussion of 45, 50 minutes about these very difficult issues. 966 01:42:44,650 --> 01:42:47,470 Search for Futuremakers Oxford and you'll find it. 967 01:42:47,470 --> 01:42:59,710 We had a whole series of 10 podcasts on A.I. and the ethical and other issues arising from A.I. But those of you 968 01:42:59,710 --> 01:43:05,860 who answer the other questions that I'm going to mention might want to say a little bit about that issue too. 969 01:43:05,860 --> 01:43:16,250 So, Sandra, I think this is one aimed at you. What if A.I. can make predictions that accurately predict risk, 970 01:43:16,250 --> 01:43:25,440 for example about credit, health or criminal behaviour, but are in effect discriminatory? 971 01:43:25,440 --> 01:43:33,790 Do you want to come to the front? Yeah, as many of our other speakers would like to do. 972 01:43:33,790 --> 01:43:44,180 And you seem to be the obvious person to ask. Yes. For the record, 973 01:43:44,180 --> 01:43:49,170 yeah, that's a very good question, 974 01:43:49,170 --> 01:43:53,730 and it is one that I don't yet have an answer to.
975 01:43:53,730 --> 01:44:00,240 And I think it is, like, the core question of what inspires my research in general, 976 01:44:00,240 --> 01:44:05,130 which is that it's not just about the question of what's technically possible; 977 01:44:05,130 --> 01:44:08,160 it's also about the question, is it ethical or desirable? 978 01:44:08,160 --> 01:44:15,150 So just because we get very good at certain things doesn't necessarily mean we have to do them. 979 01:44:15,150 --> 01:44:22,170 And I don't like the general idea that just because it's there, we should also use it. 980 01:44:22,170 --> 01:44:24,000 I think we should actually have a very, 981 01:44:24,000 --> 01:44:32,070 very thorough discussion of how we want to deploy those systems and whether we actually want to welcome them into our society. 982 01:44:32,070 --> 01:44:33,960 So when people talk about bias, 983 01:44:33,960 --> 01:44:43,860 they often imagine that the problem is a system that's actually not very good at predicting things because it is biased by preconceptions. 984 01:44:43,860 --> 01:44:50,990 You're saying that even if the system really does predict very well, but there is actually some 985 01:44:50,990 --> 01:44:55,910 particular aspect feeding into that decision that we'd rather didn't, right, 986 01:44:55,910 --> 01:45:00,260 we may want to hold back on implementation. I think it's two different things. 987 01:45:00,260 --> 01:45:05,450 Let's say there is a predictor that says the shape of your 988 01:45:05,450 --> 01:45:10,280 nose is a very good predictor of whether you're going to fail in law school or not, 989 01:45:10,280 --> 01:45:15,320 and I have statistical evidence that shows that, right, and we know it's super accurate. 990 01:45:15,320 --> 01:45:18,590 We could still think, I don't know, I think that's stupid, 991 01:45:18,590 --> 01:45:22,820 and I don't want my nose to have anything to do with whether or not I get admitted to law school. 992 01:45:22,820 --> 01:45:24,980 Right. And that's not a technical problem. 993 01:45:24,980 --> 01:45:34,520 That's, you know, a question of what we think are desirable or socially acceptable criteria to make decisions based on. 994 01:45:34,520 --> 01:45:40,160 Grades maybe, references maybe, but noses are not, even if they are good predictors. 995 01:45:40,160 --> 01:45:46,390 The other way bias might be introduced is that we don't know what the shape of the nose actually correlates with. 996 01:45:46,390 --> 01:45:50,570 Right. It might be that, you know, I don't know, 997 01:45:50,570 --> 01:45:55,370 women have particularly short noses, and we don't know that. 998 01:45:55,370 --> 01:45:58,220 And all of a sudden noses start discriminating against women. 999 01:45:58,220 --> 01:46:04,190 And we don't know it, because we don't have the link between gender and noses yet, because we don't have enough research. 1000 01:46:04,190 --> 01:46:09,740 The only thing that we see is an interesting correlation between the shape of noses and being successful in law school. 1001 01:46:09,740 --> 01:46:18,010 Right. So we have to be very, very careful when we choose these decision-making tools and we don't know the actual causal link between those two. 1002 01:46:18,010 --> 01:46:23,900 Which is related very much to your work on the inscrutability of machine learning algorithms and all that is happening there: 1003 01:46:23,900 --> 01:46:30,980 you can't just take the results and use them without really understanding.
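The nose-and-gender point just made can be illustrated with a small synthetic simulation. This is only a sketch under invented assumptions: the "nose length" feature, the group sizes and the outcome rates are all made up, and scikit-learn is used simply as a convenient off-the-shelf classifier. It shows the claim above: a model that never sees the protected attribute can still reproduce a historical skew through a correlated proxy.

```python
# Illustrative sketch (synthetic data, invented numbers): a model trained only on
# a proxy feature ("nose length") that happens to correlate with gender ends up
# recommending women far less often, even though gender is never given to it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

is_woman = rng.integers(0, 2, size=n)                              # protected attribute
nose_length = rng.normal(loc=50 - 3 * is_woman, scale=2, size=n)   # proxy correlated with it
succeeded = rng.random(n) < np.where(is_woman == 1, 0.4, 0.6)      # historically skewed labels

# Train only on the proxy: gender is deliberately excluded from the features.
model = LogisticRegression().fit(nose_length.reshape(-1, 1), succeeded)
admitted = model.predict(nose_length.reshape(-1, 1))

print("admission rate, men:  ", admitted[is_woman == 0].mean())
print("admission rate, women:", admitted[is_woman == 1].mean())
# The two rates differ sharply: the proxy quietly reintroduces the
# correlation with gender that was never supposed to play a role.
```

Because nothing in the model's inputs mentions gender, the disparity is easy to miss unless you go looking for it, which is the inscrutability point raised in the exchange here.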
Yes, I think that's the ethical change, the ethic of responsibility. 1004 01:46:30,980 --> 01:46:36,830 When we use big data for life-changing decisions, then we should have a right to at least understand what's going on, 1005 01:46:36,830 --> 01:46:42,260 or inspect what's going on, or be honest about the fact that we don't know what's going on and not pretend, 1006 01:46:42,260 --> 01:46:47,720 you know, to just trust them without actually questioning them. Thank you very much. 1007 01:46:47,720 --> 01:46:50,510 Another really big question here: 1008 01:46:50,510 --> 01:47:02,890 do we foresee a time when A.I. will perform the work of ethics better than human intelligence, based on our current definitions of A.I. and ethics? 1009 01:47:02,890 --> 01:47:12,810 Any volunteers? Maybe you, Brent. Thank you. 1010 01:47:12,810 --> 01:47:20,220 That's an excellent question. I have a paper on that which I am very happy to share, especially because I'm looking for feedback. 1011 01:47:20,220 --> 01:47:32,160 So if you show any interest, I will. But in a nutshell, I think that any entity that doesn't have the ability to feel pain and suffer and 1012 01:47:32,160 --> 01:47:36,780 feel pleasure and know what that is cannot possibly have any moral understanding. 1013 01:47:36,780 --> 01:47:41,580 Or so I argue. Right. So dot, dot, dot. 1014 01:47:41,580 --> 01:47:49,020 Read the paper. Do any of our other speakers want to say anything about either of those big questions? 1015 01:47:49,020 --> 01:47:53,670 Can I say something about risk? Yeah. I mean, I find that question super interesting, 1016 01:47:53,670 --> 01:47:57,390 and it's one part of my interest in the ethics of prediction. 1017 01:47:57,390 --> 01:48:07,320 It seems to me at the moment there is a presumption that institutions have unlimited rights to assess risk and make decisions accordingly. 1018 01:48:07,320 --> 01:48:13,620 And I find that quite questionable. So take insurance. It used to be the case that insurance was about pooling risk. 1019 01:48:13,620 --> 01:48:18,900 Right. So you've got a big pool of people, and you have some notion of 1020 01:48:18,900 --> 01:48:23,640 statistics such that you know that some people are going to have health problems and others won't, 1021 01:48:23,640 --> 01:48:30,180 and it balances itself out, so that you can calculate how much you have to charge people to survive. 1022 01:48:30,180 --> 01:48:37,740 And now it seems like we can actually assess very, very accurately whether somebody is going to have a health problem or not. 1023 01:48:37,740 --> 01:48:41,670 And then if somebody is high risk, you don't have to take them into insurance. 1024 01:48:41,670 --> 01:48:45,740 And it seems to me that this is a process happening more generally in society, 1025 01:48:45,740 --> 01:48:50,550 in the job market and so on as well, in which it's not that risk is being diminished 1026 01:48:50,550 --> 01:48:55,920 overall; it's rather that it's being displaced. So the burden of risk is being pushed onto individuals. 1027 01:48:55,920 --> 01:49:04,380 And as individuals, we have far fewer resources to face some of these risks that maybe should be faced by society as a community. 1028 01:49:04,380 --> 01:49:10,280 So this is something that worries me. Thank you very much. Well. 1029 01:49:10,280 --> 01:49:15,030 I see, I see a hand. We haven't done this yet, but very briefly, if you want to ask this question. 1030 01:49:15,030 --> 01:49:20,800 Yes, super short. Oh yeah.
This goes back to a slide at the beginning of your talk: 1031 01:49:20,800 --> 01:49:27,450 what A.I. can bring to this set of issues. There are surely 1032 01:49:27,450 --> 01:49:34,350 issues where machine learning and A.I. can actually help. 1033 01:49:34,350 --> 01:49:39,920 How does that relate to your answer? Definitely, that's a very good point, 1034 01:49:39,920 --> 01:49:41,810 and you're right to push me on that. 1035 01:49:41,810 --> 01:49:49,310 And I think AI can assist us in many things; it can assist us in figuring out the empirical details, which may be very important for ethics. 1036 01:49:49,310 --> 01:49:58,400 But at the moment, as long as AI is not sentient, it's very questionable whether it can assist us in weighing these things and making us decide. 1037 01:49:58,400 --> 01:50:06,530 In the end, how do we morally weigh these different things and the issues that they bring up? 1038 01:50:06,530 --> 01:50:17,400 As my favourite philosopher, David Hume, was fond of emphasising: facts are one thing, but moral judgement arguably requires human sentiment. 1039 01:50:17,400 --> 01:50:30,780 On that note, let's thank all our speakers again. 1040 01:50:30,780 --> 01:50:37,650 This has been quite an unusual seminar, but the main aim of it has been to let us all get to know each other, 1041 01:50:37,650 --> 01:50:41,910 to see some of the wonderful stuff that's going on around the university. 1042 01:50:41,910 --> 01:50:45,270 In subsequent seminars, we'll be digging more deeply. 1043 01:50:45,270 --> 01:50:53,040 It seems to me that this Slido technology is rather useful, and in future we'll leave more time for questions and interaction. 1044 01:50:53,040 --> 01:50:57,180 Thank you very much for coming. Now on to the refreshments. 1045 01:50:57,180 --> 01:51:00,623 Thank you.