So I just wanted to set the scene, really, for today's wonderful set of talks, vignettes around the range and scope of what's going on around the challenges of ethics in many of the advanced technology settings that we are all now engaged in, and also to explain where this all came from.

So we know that the Schwarzman Institute itself, the Schwarzman Centre itself, is a much bigger endeavour involving the humanities in general. And one thing to be very clear about: when people heard the announcement of 150 million pounds, some people mistakenly imagined that it was for the Institute for Ethics in AI. No, it's a part of that gift. And it's a part that is particularly exciting because it's there to support human talent, to support appointments in the area, in the Faculty of Philosophy in particular, and I'll talk about this. So it's a minority part of that gift, but a significant one.

The whole challenge of this is, of course, premised on the fact that we know that computer science, and particularly artificial intelligence, presents huge challenges. This is one of my favourite quotes. It is from the father of cybernetics, Norbert Wiener, who in 1948 already foresaw that, as he called it, the ultra-rapid computing machine was an ideal central nervous system for an apparatus for automatic control. In 1948 he foresaw the opportunities for unheard-of amounts of good and evil, and boy, was he right there.

And of course, the other forcing function that we recognise is that much technology accelerates under the impulse of conflict, sadly. Whether it was Bletchley Park, where of course so much of the early work on automated computing happened, much of it kept classified, Turing himself of course working there, or the Manhattan Project, where computation was required: this forcing function accelerated the development of the technology, but also very quickly led to issues around fundamental ethical questions.

The other thing I like to point out is the fact that modern AI is very often understood to be simply all about machine learning. As somebody who has been working in the field since the late 70s, I've seen a number of cycles of AI come and go, and in that time different methodologies have held sway: back in the day it was rule-based systems, theorem proving, a whole range of methods to do search. And each time round the cycle of enthusiasm, particular human capabilities fell to the dominance of the machines. There was always a question around the ethical deployment of this, even back in the 80s when people were talking about building expert systems. What about the experts the systems were displacing?
Could we trust the actual diagnoses the systems would give us? Were their accounts explicable and understandable, and so on? So there is no shift in some of the fundamental challenges that have faced us over many decades.

But what has happened clearly in recent times is the emergence of a particular class of computing power and methods that have led to some breakthrough moments. These so-called deep neural networks, various forms of machine learning, have led to the triumph of machines over particular areas of human expertise. This, of course, is AlphaGo's triumph over Lee Sedol, and most recently the achievement of extraordinary results in what was again thought to be a quite difficult-to-master area of problem solving: a multiplayer strategy game called StarCraft. It's the latest achievement, again, of the DeepMind research labs, of course.

Within all of that there are plenty of good things going on, but again sometimes with issues around the ethics. This is the Google DeepMind work on diagnosing various diseases from retinal scans, in this case diabetic retinopathy. These systems are very good at detecting patterns and coming up with all sorts of really rather impressive classifications. But even there, there was an issue, which ultimately fell into the whole area of whether the data used to train these systems was acquired with the appropriate level of informed consent. It is almost impossible to think of an AI deployment in which ethical questions don't abound.

Take Oxford's own work on lip reading, that wonderful work to actually get to the level of human expertise in reading lips. You can imagine various forms of more intriguing deployment of these kinds of technologies; in fact, it was the central thesis of 2001, where HAL, if you remember, read the lips of the astronauts in the pod. Or indeed facial recognition, which is already causing concerns about its sensibilities and sensitivities around particular forms of bias, given certain sorts of training regime and a particular deployment mode.

The more obvious ones, of course, range from what you do with autonomous vehicles to how you should restrict and control the weaponization of platforms, such as AI included in drones. More extensively, we worry about the use of extensive amounts of data in domains such as predictive policing, or indeed at national scale when deployed by states such as China in their social credit system. We worry about the flows of data from our mobiles, from our devices.
In fact, my group in Oxford has spent a lot of time trying to understand just what that ecosystem looks like, and it's clear that the flows of data are extensive and numerous. This is just one particular flow of data for one app, surrounded by a few others on a phone, that we were able to track, to understand just how extraordinary the economy of data exchange is. And where is the control of that? Where is the oversight and insight into that? Often the ethical challenges are cheek by jowl with questions around governance and regulation.

Within all of this, we have concerns about the emergence of dominant platforms, both in East and West, who seem to have so much control in this emerging world of data-enabled AI algorithms. And data-enabled AI algorithms, as I say, aren't just about machine learning; they can just as much be about expert-system reasoning, about various forms of model-based reasoning, a very large set of methods now available to the computer scientist, the engineer and the various people looking to exploit them. And even if it is, in some cases, statistics represented as various forms of linear regression or whatever, there will still often be ethical challenges at the point of deployment. So the question for us isn't just to think about this in the narrow confines of what we might think of as a robotically controlled drone or a particular use of a neural network in a biometric system; it can be how data and algorithms are used very broadly on the web, how they're used in targeting and surveilling us. So, in the broadest sense, this was always the ambition.

And so when I was asked to put together a proposal for the Schwarzman initiative, and this was very much driven out of Stephen Schwarzman's own concerns (he was seeing a world emerge where his particular worry was how these systems might be deployed), it was natural to turn to Oxford's extraordinary heritage in this area. It's not hard to make the case for ethics, of course, but at Oxford you have the most extraordinary cast of characters, looking through a history that includes everybody from Hare to Foot to Warnock and Murdoch. These are extraordinary figures who have shaped our thinking in the ethical and moral space. In some respects, of course, Mary Warnock's work also held an appeal to many, because this was a person who managed to convene an entire regulatory framework and lead on that thinking and the deployment of a technology which at the time concerned people:
the whole way in which human fertilisation and embryology science, reproductive science, was moving. That multi-stakeholder conversation, which had serious ethical underpinnings, was one of the reasons we found a way forward through such a nuanced and interesting debate. And one of the things I think we will see as the landscape develops here in Oxford is this very interesting appeal to lessons learnt from fields such as medical ethics. Many of the problems addressed there have many analogies, though some will turn out to be different: whether it is the use of human subjects and what is being done to them, or the question of informed consent, or how access is granted to parts of the population. Many of these will have their equivalents in what we do.

So as we assembled the proposal, we were surrounded really by a surfeit of riches. This diagram just begins to reflect that huge ecosystem. This is not to imply, by the way, that they are all circling the not-yet-quite-existing Ethics in AI Institute; it is meant to represent the extraordinary level of input and cognate activities that we can look to, from computer science and our involvement in the Alan Turing Institute on the one hand, to the work that goes on in the Oxford Internet Institute (we'll hear about some of that today), and indeed the work in the Uehiro Centre for Practical Ethics, work in the Future of Humanity Institute, work on AI governance, work in the Blavatnik School on policy, a whole set of interests emerging in law around how we might think about legal restraints and applications of AI technology, in medicine again the Wellcome Centre for Ethics, the Oxford Martin School, the Oxford Foundry for innovation, the Big Data Institute up in the Medical Sciences Division, and Information Engineering, where much of the really interesting robotics development goes on, and other things too.

So it's a very rich surround, and this isn't even completely comprehensive. What's compelling about it is that you set this effort within a nexus of really interesting research, and I think one of the things that Peter's looking to do is to help convene the plurality of those conversations around the interests of the Ethics in AI Institute. Now that, of course, is yet to be staffed, yet to be established. But there is an absolutely essential difference in the effort here. Some people have said there are any number of ethics in AI or AI ethics initiatives; somebody counted some extraordinary number of ethics codes the other day, and it was in the low hundreds.
Believe it or not, lots of them bear close similarities to one another, and a lot of it, one suspects, is various forms of virtue signalling. Lots of it, you might suspect, is a kind of copying, to get in on something that seems an idea of the moment. I think the difference in the effort here is to locate this whole enterprise in the deep research of the Faculty of Philosophy, in a tradition where the philosophical questions are primary, are paramount. Now we can divide those up in any variety of ways, and whatever set of questions you raise here, there are more: whether it's about when you use or don't use the technology, whether it's fair, who's responsible for it, who has access to it, does it sufficiently explain itself, does it vary as you go from one geography to another, from one culture to another, what about its utilisation of public and private goods? All of this just gives a sense of the depth and range of ethical questions that will arise.

In the last few minutes, let me just say something about the shape and structure of where we are. This all began with a group of individuals who gave of their time and energy (I am hugely grateful to them) in a steering committee that was put together almost a year ago now, which seems extraordinary. I think I was approached about a year ago, and then in the January of this year we began to think about shaping the proposal. So: myself and Chris Timpson, head of Philosophy; Dan Grimley; Phil Howard from the OII; Mike Wooldridge from Computer Science; Cécile Fabre, a philosopher; and Mike Parker and Alison Noble, Mike Parker from the Wellcome Centre and Alison from Information Engineering. Actually, Alison replaced Angela McLean, who was originally on the committee, as Angela took up the chief scientific adviser role in the Ministry of Defence. So that group got together and tried to frame and shape the proposal in a way that was going to work for the university.

The plan ultimately is that there will be a management committee of the institute, to be confirmed; it will be cross-divisional and will probably mirror the kinds of things we see within things like the Blavatnik School. The advisory board will comprise internal and external members; they will be advisory, they will not set the agenda, which is important to say. And we are at this moment about to advertise for a director for the institute. My role is essentially to try and steward this into existence.
I think the important thing to say is that, of all of the activities associated with the gift, this is one of the first to begin its work, because the building, which will be placed on the Radcliffe Observatory Quarter site, will be a number of years in development; it could be four to five years before it is actually opened. In the meantime, we would like to be undertaking this exciting research agenda. So a director and, initially, two associate professors will be advertised, one of these in philosophy and one in philosophy and computer science. There are five associate professorships ultimately to be appointed to, so this will be a substantial centre of gravity in terms of the ability to research and teach. The other thing to say is that these posts, at least the first to be advertised, will have college associations as well, so we're trying to bring the institute into the collegiate university. There will be a number of postdoctoral research fellows or JRF appointments, and there will be a significant number of DPhil students. We are looking to use the model of academic secondment to move people into the institute for periods of time where that research would help and be valuable, and there will be a significant visiting fellows programme. So in total we might imagine that in four years' time there would be 20 or so individuals at the core of the institute, ultimately, of course, looking to develop and expand with joint programmes of work to a more significant size.

The other thing it's probably worth mentioning is that there was always the ambition that we would develop various forms of curriculum content to inform our teaching, and the interesting model here is to think about the evolution of our various courses where philosophy has been at their heart: whether it is the original Literae Humaniores, the Greats as it was called; the Modern Greats, which reflected the fact that economics had become a subject that required study, alongside political science and philosophy; the scientific Greats that we saw evolve, philosophy, psychology and linguistics in the 20th century; physics and philosophy; maths and philosophy; computer science and philosophy. One of the interesting things about these is the extent to which computational thinking, the intrusion of new ways of understanding our world, has been very much at the centre. So the question will be how we can develop content that is capable of being inserted into either master's courses or offered as undergraduate modules.
That would certainly be an ambition, and I'm certainly aware that in computer science we have a significant need for appropriate ethics courses. That is not, in and of itself, the only thing going on: there is an ongoing cultural programme that will be launched. In fact, there is an event on AI and creativity, I think on Wednesday the 20th of November, being held in the Maths Institute, and that's certainly something to look out for. One of the features, again, is that we look to really motivate the presence of this institute within the context of being human in the 21st century, and how, despite all the concerns we have, or perhaps exactly because of those concerns, we can understand an appropriate set of balanced interventions with this technology.

As I mentioned, the site itself is not yet built, and indeed the whole process for selecting architects is in train as we speak. And finally, just to say that, apart from a talk I gave, I guess this is the single earliest manifestation of actual activity: all of you in this room listening to a variety of presentations as to why ethics has a place in the considerations and deliberations of Oxford's extraordinary intellectual history. And yeah, that really is my introduction. This is one of the first; in fact, the first two of these are very much, if you like, community meetings, town hall meetings, where we're convening the interested parties together to understand what we're trying to achieve. So that's my introduction. Thank you.