Welcome to a discussion devoted to ethics in AI education, the latest in our growing series of seminars associated with the exciting new Institute for Ethics in AI at the University of Oxford. I'm Peter Millican, Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford, and I'll be chairing tonight's event. As artificial intelligence increasingly impacts on so many aspects of our lives, there's recently been hugely increased awareness of the importance of putting ethical considerations at the centre of AI developments. But for this to happen in any sustainable and widespread way, it's crucial that ethics becomes established as a key part of AI education. So that is the focus of our seminar tonight.

I'm delighted to be joined by three young academics who've put a lot of thought into this area, and they will speak in turn, after which we'll have a discussion and questions. First up is Milo Phillips-Brown, currently at MIT, but soon to join us here at Oxford. Second is Helena Webb, who has been teaching ethics in the Department of Computer Science here. And finally, Max Van Kleek, who's been working alongside Helena, and he'll be focusing particularly on the challenges that such teaching involves. Each of our speakers will talk for about fifteen or twenty minutes, and the event as a whole will last for 90 minutes, so we'll have plenty of time for discussion. And you're very welcome to offer your own questions to the speakers, so please do, and please feel free to do this at any time. If questions occur to you as the speakers are speaking, put them into the comments box on YouTube, and in due course I'll be bringing those into the discussion. So we'd love to hear from you; we'd love to learn from your ideas.

OK, so first, as I said, is Milo Phillips-Brown. Milo is currently Distinguished Fellow in Ethics and Technology at MIT and Senior Research Fellow in Digital Ethics and Governance at the Jain Family Institute. I'm delighted that Milo is going to be joining us shortly after Christmas as Associate Professor of Philosophy and Tutorial Fellow at Jesus College. So here is one of the important new appointments that has come about because of the new Institute for Ethics in AI. As part of his job here, he will be teaching ethics in the Department of Computer Science, so he's absolutely on this key interface between philosophy and computer science, which we're so keen on here at Oxford. Milo has led various efforts across MIT in creating ethics curricula for STEM students.
For example, he's developed and taught ethics material for a wide range of STEM classes, focusing especially on the ethics of technology and on developing two completely new courses: a Workshop in Ethical Engineering, and Experiential Ethics. So we're looking forward very much to seeing what Milo does when he comes and teaches ethics here in Oxford. Over to you, Milo.

Thanks, Peter. Really glad to be here and looking forward to this conversation with Helena and Max. So I'm going to share my screen now and give a little bit of a talk. Hopefully that's showing up now. OK, great. So I'm going to be talking about goals for ethical engineering pedagogy, and in particular the slogan of getting students ready, willing and able to engineer ethically.

OK, so as Peter alluded to, the news is bleak and familiar: technologies are going awry all over the place, and this has prompted many calls for action. Prominent amongst them is the call that engineering students, computer science students in particular, need to be trained in ethics so that history does not repeat itself. So what I'm really talking about today is what it is that we should be doing, or can be doing, and might have difficulty doing, in reaching this goal.

So here's a brief overview of the plan of the talk. We'll start by talking about the learning objectives for ethics pedagogy for engineering students: how do we get students to that point where history doesn't repeat itself? And attending to this question carefully, with that particular goal in mind, reveals, I think, a different picture of what we ought to be doing in teaching ethics than the traditional approach in engineering ethics more generally. That leads into the second topic, which is how it is that we reach these new learning objectives, and so I'm going to give you a preview of the kinds of things that I've been doing with my collaborators across MIT, and then close by talking a bit more about what the obstacles are to reaching these objectives.

All right, let's jump in: learning objectives. We said that the long-term goal is to have students graduate from their graduate or undergraduate programmes and, once they enter the real world, not repeat the mistakes of the past. What that means is that we want them ultimately to be doing something: to be engineering ethically. And if they're going to do those things, then they need to be taught, we think, two different things.
One is to be able to do the things that we want them to do: to be able to identify, address and communicate about the ethical dimensions of their work. And also to be ready and willing, which is to say motivated, to do these things. So these we think of as the learning objectives: what we want for our students is the ability to do something and the willingness to do it. If they just happen to know a lot about ethics but have no idea how to translate that into their practice, if they're not able to practise ethical engineering, that won't really matter; nothing will change. Similarly, even if they know exactly what to do but aren't motivated to do it, nothing will change. So this is what we're shooting for. How do we get there?

How are we going to reach these objectives? Well, let's start with this idea of trying to get students able to engineer ethically. What we think is that the best way to get there is the idea of teaching ethics as a skill: quite literally, teaching them to be able to engineer ethically. So what does that mean exactly? Well, at the core of the approach is to teach concrete methods for ethical engineering, something that you can sit down with in practice, in designing a technology, and take steps to understand what the ethical implications are and how to go forward. Here we draw on large research fields and programmes: the field of responsible innovation, for example, or research programmes in value sensitive design or participatory design. There are people who've been working on these questions for decades: what can you do in practice to build technologies that are more responsible? The answers to those questions are the things that we want to teach students, and indeed that's what we have been teaching students across MIT.

So one of the things that we've been doing is integrating ethics into technical curricula. My team has done this in about 12 or 17 different classes across MIT, and I'm just going to talk you through a little bit about one of those. It's a class called Software Studio, in the computer science department, where students learn the fundamentals of software design. And for each assignment in this class, each technical assignment, we've worked with the technical instructors to integrate ethical material. This is an example of one such assignment. In the course, they learn software design by building a toy version of Twitter, which they call Fritter, because you're frittering your time away. So they'll have this set of questions, a problem set in a traditional, you know, computer science class.
And alongside them, there will be questions about the ethical implications of the decisions they might be making in their design choices. So, for example, we'll ask a question such as: if your sole goal in creating Fritter were to get children addicted to the technology, what design decisions would you make? Or similarly, if your sole goal were to try to make it as difficult as possible to use for someone who's not very tech literate, what decisions would you make? Or if your sole goal were to prevent the spread of disinformation on the platform, what would you make? The idea here is to use these kinds of creative and hopefully somewhat fun prompts to get the students to investigate how the decisions that they're already making (you're making these decisions in building a platform anyway) are value-laden: how do they have ethical implications? And this is just one of the concrete steps that is core to things such as value sensitive design: revealing where what might look like value-neutral or purely engineering decisions in fact have ethical consequences. And that's a skill, figuring out exactly what those things are when you're sitting down to build a technology.

OK. So, again, that's just one example of the kinds of things we're doing; I'm happy to, and keen to, talk more about that in the Q&A. One thing I alluded to at the beginning was that this sort of approach to learning objectives that we're advocating for is by no means new to us, though we have our own spin on it. But in any case, it's the approach we're advocating for, and it does break somewhat from a kind of traditional approach to engineering ethics pedagogy. That traditional approach, which you'll find in engineering ethics textbooks and also in the more and more current movement to get ethics into computer science classrooms, is based in moral theory: basically, we teach students bite-sized versions of what they might learn in a philosophy classroom. We teach them philosophical distinctions, hopefully philosophical distinctions that have some connection to some technological issue. But at root, the idea is to teach a bit of philosophy. We think that philosophy has a place, for sure, in ethical engineering education, but it shouldn't be the focus; that's the idea, anyway. The reason is that philosophy is not, nor is it usually meant to be, applicable in practice.
The kinds of questions that philosophers find interesting, even in ethics, are often not the kind of thing that a computer science student is going to be wrestling with, you know, on day one of their new job working for a tech company. So that's one issue: the topics that philosophers are concerned with are often not directly relatable to what the engineer is doing in practice. The second issue is that even when they are, you're still dealing in the realm of theory. The student might learn a little bit of theory that relates to what they're thinking about, but how do they apply that to practice? Let's do that work for them. Let's teach them directly what to do in practice and have them build that skill. And I say this as a philosopher; I'm a philosopher by training. This is no knock on philosophy. This is just trying to think about what is the best way to reach that outcome where students are able to engineer ethically.

OK. Now, all that said, there is certainly a place and a necessary role for complementary ways of thinking from the humanities and social sciences when we think about exercising ethics as a skill. One thing you learn in a philosophy classroom when you're studying ethics or political or social philosophy is a certain kind of moral reasoning, or a certain kind of awareness of moral facts that you might not appreciate otherwise. That's important; those things make you better when we're thinking about how good you are at practising ethics as a skill. Similarly, understanding the historical lineage of a certain kind of technology can help you better understand how that technology fits within a socio-technical system. We don't engineer in vacuums. Students need to understand how what they're doing operates within a broader system, and there's plenty of work in the humanities and social sciences that will help them understand that. So the approach here is one where, at the core, we teach methods of responsible engineering, and complementing that and interconnecting with that are these methods and insights from the humanities and social sciences. And of course there are technical elements to this too, but in some ways that goes without saying.

All right. So that is the means to reach what we're thinking of as this first objective: to get students to be able to engineer ethically. The second objective is, remember, for them to be ready and willing to do so, for them to be motivated to do so.
How do you go about doing that? This, I think, is a really difficult question, and in some ways a matter of, you know, behavioural psychology. But there are some things that we think we can do that push us at least in this direction. The keywords or catchphrases we're thinking of are to help students develop an interest in, and proximity to, ethics. So for students to think: ethics relates to what I do as a computer scientist, to what I do as somebody who's making technology, not just to what's happening in the news, like, that's not my problem. And for them not only to feel like it's proximal, but to care about it: yeah, this is something related to what I do and I want to be involved in it. It's not a "not my problem" type thing.

OK, so how do we do that? Well, one of the things we've already seen a bit, as I mentioned: a lot of the work is integrating ethics curriculum into computer science classes so that it relates directly to the technical material that the students are working on, and that develops a sense of proximity. If you're seeing in your own homework, as you're learning to be an engineer, how the engineering decisions you're making have ethical consequences, you're seeing the proximity between engineering and ethics. So this integrating or embedding approach, which goes back 30 years, at the University of Twente in particular (this is something that's been going on for a long time), is, I think, particularly well suited to developing this sense of proximity.

Here's another thing that we can maybe do to engender the sense of proximity: a class my colleagues and I developed (there's a link to the syllabus there), which we piloted this past summer with 70 students at MIT. The primary learning objective of the course was just to get students to be more interested in ethics, and the idea is that we would try to do so by allowing them to come to ethics on their own terms and by way of their own experiences. So here's what that means: the class had no lectures; rather, the students would meet in groups of about five with a graduate facilitator each week. They would talk about some topic that we assigned; we would have readings and short assignments, but the trajectory of the discussion would be very much driven by them. What did they want to talk about? What were they interested in? So there's no one standing at the front of the class saying: ethics matters, you're making a technology, you're doing a bad thing,
here are all these ways that technology is bad, to use a caricature. Rather, we're trying to present them with ethical issues and let them explore them, to come to them on their own terms. And then the experiential part of Experiential Ethics is the idea that the students will do this specifically based on their own experiences. So we connected this to the internships or undergraduate research opportunities that students were doing at MIT and encouraged them to reflect on those in, for example, their final projects. One student who was working at a tax law firm over the summer did a final project about racial inequality in the US tax system. So that's Experiential Ethics.

What's another way to try to get students interested in ethics? Well, we have to be open-minded: there are a lot of interesting questions in philosophy, in policy, in the humanities, technical questions. There are all sorts of really rich and interesting questions about the ethics of technology, about the social implications and history of technology. Let's just try to get students interested in those things. So here's one example: a course I taught that was just a pretty straight-up applied or practical ethics course in the ethics of technology. We didn't do so much of the responsible innovation; that wasn't the point. We were teaching philosophy. And this is the kind of thing that some students got really excited about, and that's great, because we want to get students engaged, we want to get students motivated to do the ethical thing, to engineer responsibly. So we need to be creative here and not be dogmatic: let's follow what the students are interested in and give that to them, so that they develop this sense of proximity and interest.

OK, so that's the big picture: ready, willing and able. We want students to have this skill of engineering ethically and we want them to be motivated to do it. This is a hard thing to do, though, and so I'm going to close by talking a little bit about some obstacles. It's no surprise that really genuine change in students' readiness, willingness and ability requires dedication to ethics curriculum. You don't change the way somebody is motivated to do something overnight, especially when that thing is often going to come at some potential personal or professional cost, once they're maybe advocating for an ethical outcome in the workplace. That takes time. So does developing a skill. We spend a whole undergraduate career teaching someone how to be an engineer; the skill of engineering takes time to learn.
So, too, does the skill of ethical engineering. It doesn't come overnight. And that need for significant dedication raises obstacles. For one, it means that there'll be less technical material in the curriculum. There are only so many hours in the day; there are only so many hours in an education. If you're teaching students more ethics, then something else has to go, and in many cases that will be the technical material. So there's a tension there. There are also institutional obstacles, which are that you need people, a workforce, to teach ethical engineering practices and insights from the humanities and social sciences. That's the kind of thing that needs to be funded, and these people need to be able to work with the technical instructors. It's a non-trivial endeavour, especially when you're trying to do it at a scale where we're really going to change how ready, willing and able the students are to engineer responsibly. All right, thank you.

Peter, I think you're muted.

Thank you very much, Milo. That was extremely interesting. Could I just try pushing back a little bit against some of what you said there? So you've got the idea of teaching ethics as a skill, and one of the things you're doing there is encouraging students to produce this Fritter toy version of Twitter, and you can give them different aims there. Now, I can imagine that some students might think: wow, getting the users to be addicted to it, isn't that cool? Or producing something that will make misinformation go viral: wow, how exciting. Because I think, you know, for a lot of the people who end up writing computer viruses, part of the appeal there is to write something cool that's going to just go around the place, and they don't think very much about the ethical implications. Now, I can imagine somebody saying that if you focus on ethics as a skill in the way you've described and neglect the classic moral philosophy, the risk is that you're not actually giving them a reason to care about being moral. So how would you respond to that?

Yeah, certainly, I think that's exactly right. I mean, for students these things are a double-edged sword: just showing them what the technology can do doesn't push them in one direction or another. But I do think that the core idea of getting them to see which design decisions will result in which outcomes heads off this problem that we often talk about, which is unforeseen or unintended consequences.
People are building something and they don't understand what ethical implications it might have, because they don't even understand what might happen; they're not thinking creatively about that. So, a couple of points to build on that. One is that we don't just try to get them to think about, like, how would you abuse the system in this way? We also give them some tools, what we call moral lenses, to think through different ethical aspects of the possible effects that they're going to be having. And we don't think, for the most part, that you have to do moral theory to get people onto the different salient ethical aspects. There is often, in my experience at MIT at least, a very strong consequentialist mindset in computer science students.

Mm hmm.

So there are ways to disabuse them of that (which is not to say that consequences don't matter) without, you know, teaching them all three formulations of the categorical imperative. There are different ways to do that that don't get into the theory. And then there's the other question you raise, which is just: how do you teach them to be moral? So, OK, now they're really attuned to these ways in which they can addict their users or do bad things, and they're going to make more money by doing that. Well, that's where the willingness and motivation comes in.

And is it not plausible that some of that willingness and motivation could come from discussing issues like universalisability, or the idea that your interests are as important as mine from an objective point of view, or virtue ethics or moral psychology? In other words, doing some more traditional moral philosophy might actually help to motivate them.

I think it absolutely can. And this is the idea that I was trying to articulate at the end, which is that when we think especially about the motivation, we can really separate it from the skill-building part, where moral philosophy, I think, has a much smaller role, because a lot of that is looking at a technology and understanding the space of alternatives: who are your stakeholders, what are the decisions that I'm making? So that's more on the skills side. But on the motivation side, we have to be flexible. We have to figure out what works, and it might be different in different places. Some students might get really excited about the moral theory and that's their way in, and that's fantastic. Other students might get excited by the history of it, or, you know, understanding what the corporate culture at Google is like.
We have to be flexible in that way. And I do think that moral theory is a great way in. Like this ethics of technology class: it was a practical, applied ethics class, and students got more excited; the engineering students at MIT got more excited, and expressed this, about ethical engineering in their own work, even though we weren't teaching responsible engineering practice. We take big lessons from that.

That's interesting. So bringing in the ethics as a skill could, quite independently of its other value, actually be useful in motivating an interest in it.

I think so, yeah.

Yeah, because I imagine there is a tendency for computer science in general to think of ethics classes as rather orthogonal to their main interests. Well, thank you very much indeed, Milo; that's fascinating. Let's now hear from Helena.

Helena Webb is Senior Researcher in the Human Centred Computing theme at the Department of Computer Science. Her research has been highly interdisciplinary, involving a range of projects to investigate the impacts of computing on social life and the lived experience of technology. Together with Max Van Kleek, who will be speaking next, Helena developed and delivered a new compulsory course for undergraduates in the Department of Computer Science on ethics and responsible innovation. The course ran for the first time last year and is currently being delivered for the second time. As part of her work in the Human Centred Computing theme, Helena also contributes to a number of wider education initiatives, which she'll talk about in her presentation. So, Helena, we'll be really interested to hear what you've learnt from this year or so of teaching ethics in Oxford to computing students, and from your other outreach initiatives over the year.

Thank you very much. It's really great to have the opportunity to be part of this discussion today. I'll just share my screen; hopefully that's come up. OK, fantastic. Yes, so, as Peter mentioned, I'm a senior researcher and I work in the Department of Computer Science as part of a research team called Human Centred Computing. We are an interdisciplinary group of researchers and we carry out projects that examine the impacts that contemporary computing systems have on individuals, communities and societies, and our work tries to identify ways that these innovations can be ethical and better support human flourishing. So ethics is central to a lot of what we do, often with a very applied focus, and this focus often involves an interest in education, in both formal and non-formal ways.
So in this presentation I'm going to talk about two things. First of all, I'm going to talk about some of the education initiatives that I've been involved in as part of our work in the Human Centred Computing theme, and in particular work that takes place as part of projects underpinned by the initiative known as responsible innovation, the initiative that Milo already mentioned in his talk. And then I'm going to talk about the undergraduate module on ethics and responsible innovation that Max and I developed and delivered for the first time last year here in the Department of Computer Science at Oxford.

So most of the projects that I'm involved in as part of the Human Centred Computing theme are underpinned by the initiative known as responsible innovation. This is an initiative that has developed a great deal of traction in academia, industry and policy in the UK and the EU and more broadly in the last 10 to 15 years or so. The idea of responsible innovation is to bring together actors across society to work together during the entire research and innovation process. So you bring together researchers, scientists, citizens, policymakers, businesses, third sector organisations and so on, so that they can engage with each other and talk about innovation: talk about the potential consequences, both positive and negative, that it might have, and identify ways to address those consequences early in the lifecycle of the innovation process. So it's about finding ways to try to mitigate the potential negative consequences of an innovation before it is implemented. The idea is that if you bring these stakeholders together to work with one another, you can produce better outcomes; you can ensure that innovations meet the values of society, meet societal needs and interests. So this responsible innovation perspective really emphasises stakeholder engagement and the sharing of understandings. It's a form of ethics being carried out in very practical ways, and it naturally brings in some educational elements as well: education focused on good practice in innovation, or ethics in innovation.

So in our group we carry out a number of activities that come under the umbrella of education, in formal and informal ways, and they might focus specifically on ethical AI or more broadly on ICT and digital technologies. I'm just going to mention a few of the activities that fall under this umbrella of education in its different forms. The first of these is ORBIT, which is an organisation that is an observatory for responsible innovation in ICT.
And this observatory provides training, consultancy and research. At the moment, ORBIT is involved in the training of doctoral students across the UK, and this is seen as something that's highly important. We have these doctoral students who are just starting out on postgraduate work on various issues around science and technology and innovation, and here we have an opportunity to embed in them, as they begin that training, understandings of ethics and practices of responsibility, and to get them to think about how they'll carry these out in their own work. Then they can take that forward as they go through their careers, whether that's into further research, into science and industry, and so on. So it's a valuable opportunity to embed ethical understandings right at the start of their journey.

Another initiative that I was involved in concerned one particular project, and this was the UnBias project. This was a project that looked at algorithms: controversies around potential bias in algorithmic systems, questions of fairness in algorithmic systems. One of the outcomes that we produced in this project was a fairness toolkit. This toolkit has various resources that are available for stakeholders of all kinds to help them reflect on issues about bias in algorithms, about fairness in algorithmic systems, and about what we can do to promote and foster fairness in these systems. One of the elements of the toolkit is shown on this slide here, and this is a set of awareness cards. The cards have different exercises and different pieces of information on them. They have exercises to take you through the processes of designing an algorithm and deciding what pieces of information go into an algorithm, or exercises to help people consider what kind of values we might want embedded into an algorithmic system, or what kind of values we would want that system to uphold. These toolkits are available as physical and digital copies for a wide number of stakeholders, and we've sent them out and they've been used with students and with professionals in all kinds of areas: design, science, law and so on. They're a really great opportunity for groups to engage together in discussion and foster this kind of critical thinking about algorithmic systems, and also to help people feel empowered to make better informed choices about how they interact with algorithmic systems. So this is an example of the kind of more informal, wider engagement and education opportunity.
Another activity that we've carried out across a number of our projects is the ethical hackathon. This is a slight twist on the traditional idea of the hackathon; computer scientists and engineering students are very familiar with the hackathon model, in which you compete in small teams to face some kind of design challenge. It's a fun competition, a chance to exercise various skills and so on. In an ethical hackathon, we make our teams interdisciplinary: we bring together teams of computer scientists, engineers, social scientists, philosophers, lawyers, and they all work together, and the idea is that they can share expertise and experiences amongst each other. We also make the design challenge something that's less focused on the technical and more about thinking through ethical issues. So, for instance, in some of the ethical hackathons that we've run, we've set students a challenge along the lines of: you're developing a new social network and you're committed to the responsible use of algorithms; how are you going to do that? So, first of all, the team would need to work together to set out the ethos that their network would have, and then they need to think about practical steps to put that ethos into action. It's a challenge for them, first of all, to identify these ethical and responsibility issues and then to try to translate them into practical steps. The students learn from each other and from the different skill sets that they have, and we also incorporate some formal teaching sessions into the hackathon as well, so they have that more formal teaching element too. And as Milo was saying about some of the work that he's been doing, the emphasis is on creativity and fun. These hackathons are designed to be fun encounters: students get to learn from each other, they enjoy spending time with each other, they present their ideas, and there are prizes available for the best ones at the end of the event as well.

So those are some examples of the wider educational initiatives that we've been involved in as part of the Human Centred Computing theme. Now I'll talk specifically about the module that Max and I developed here for the computer science students in our department. In our department, in computer science at Oxford, ethics hasn't traditionally been taught as a core subject, and sometimes it's difficult for our students to see it as an issue that's relevant to their own studies, because a lot of the teaching that goes on in computer science here is quite formal; it's quite mathematical.
So they don't necessarily see it as something that's relevant to what they're doing. However, after a period of time spent talking to the department, we had the opportunity to run a course, and this was put on for the first time last year and made compulsory for our first-year undergraduates. The aim of the course was to introduce the students to core ethical principles and normative theories and to deepen their understanding of ethical challenges in contemporary computer science. So we did spend time talking to them about moral philosophy and different normative theories, and we also talked about certain contemporary issues around AI, around data surveillance and so on. We introduced ideas of responsible innovation and other practice-based approaches, so we talked to them about value sensitive design, about corporate social responsibility and other kinds of practical approaches. What we wanted to do was introduce these conceptual ideas, but also very strongly encourage critical reflection from the students as well, to get them to think about how they understood these issues and how they could see them as relevant to their own work and their own activities.

We ran the course as a series of lectures followed up by seminar sessions. In the seminar sessions, the students worked together in small groups; these are obviously from last year, when it was possible for them to be close to each other. The students worked together in small groups and we gave them tasks that required them to draw on all the elements that we had covered in the course and try to apply them practically. So, for instance, we might give them a task where they had to imagine that they were the head of an organisation, and that organisation was bringing in an AI tool for recruitment that could, you know, work through CVs or decide who might be employed for a role and so on. The task might be to consider: how can that organisation make sure that they use that AI system ethically? What challenges might they face? What kind of steps would they put in place to make sure that there's no bias in the system, that humans in HR aren't worried about losing their jobs, and so on? So the seminars themselves had a very practical focus, thinking about translating these more conceptual ideas into practice, and really challenging the students to reflect on their own understanding of these issues and how they themselves see them being put into practice. And this includes encouraging them to see conflicts where they might arise.
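As an aside on what one practical step for the recruitment-tool task could look like in code, here is a minimal, hypothetical sketch of a disparate-impact check on a CV-screening tool's outputs, using the "four-fifths rule" as a rough heuristic. The group labels, sample data and threshold are illustrative assumptions, not part of the course material, and a real audit would involve far more than a single metric.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, hired) pairs; returns the hire rate per group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times the highest
    group's rate (the 'four-fifths rule' heuristic); one crude check among many."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return {g: r / highest < threshold for g, r in rates.items()}

# Hypothetical screening outcomes labelled by a demographic attribute.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
print(selection_rates(outcomes))         # roughly {'A': 0.67, 'B': 0.33}
print(disparate_impact_flags(outcomes))  # {'A': False, 'B': True}
```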
372 00:40:06,600 --> 00:40:11,250 So the ways in which sort of ethical concerns and ideas about upholding values can 373 00:40:11,250 --> 00:40:15,450 sometimes come into conflicts with the economic imperative and the profit motive. 374 00:40:15,450 --> 00:40:21,480 So we worked a lot on identifying those kind of tensions and those challenges and talking about different kinds of ways, 375 00:40:21,480 --> 00:40:25,490 different solutions you might find to those challenges. 376 00:40:25,490 --> 00:40:32,820 So I know that Max is going to talk a bit further about the course and some of the challenges that arise in teaching these issues to students, 377 00:40:32,820 --> 00:40:34,670 so I won't say much more about it. 378 00:40:34,670 --> 00:40:42,230 But just to finish, I'm going to quickly talk about the three key issues that Peter was saying we want to cover in this discussion. 379 00:40:42,230 --> 00:40:47,570 So the first of those issues was what are our learning objectives if we're teaching ethics? 380 00:40:47,570 --> 00:40:51,470 And I would agree with Miloje in the importance of a very grounded approach to it, 381 00:40:51,470 --> 00:40:55,610 one that taps into the experiences of those that you were talking to as being 382 00:40:55,610 --> 00:41:00,710 a really helpful way in getting people to understand these kind of issues. 383 00:41:00,710 --> 00:41:04,810 The other question was, what are the suitable means to meet those objectives? 384 00:41:04,810 --> 00:41:09,140 And I think there are many different means that we can use so they can be formal 385 00:41:09,140 --> 00:41:13,220 teaching methods in classroom situations where they can be very informal as well. 386 00:41:13,220 --> 00:41:18,050 They can be incorporated into other kinds of activities, and you can be teaching ethics. 387 00:41:18,050 --> 00:41:21,710 Even if the people that you're teaching aren't students, they might be stakeholders. 388 00:41:21,710 --> 00:41:27,230 You're engaging in other kinds of activities. And then the final question that we were given concern, 389 00:41:27,230 --> 00:41:30,920 the obstacles and I know the discussion will focus on this a great deal because there 390 00:41:30,920 --> 00:41:35,330 are a wide number of obstacles to truly being successful in this kind of teaching. 391 00:41:35,330 --> 00:41:43,280 So I'll just very briefly offer a few here. I think one obstacle, one challenge that we've already mentioned is showing the relevance to people. 392 00:41:43,280 --> 00:41:46,790 Why should I care about this thing? Why, you know, call the ethicist. 393 00:41:46,790 --> 00:41:53,690 Just do it. And I don't need to worry about it. I can focus on my computing or my engineering and other challenges, avoiding ethics, 394 00:41:53,690 --> 00:41:59,990 becoming just a tick box approach that's just done and kind of siloed off from the rest of the curriculum. 395 00:41:59,990 --> 00:42:05,180 And then a further challenge is bringing in an interdisciplinary perspective, 396 00:42:05,180 --> 00:42:12,920 because I think there is a great deal of ways in which ethics and social sciences and law etc can combine in this kind of teaching. 397 00:42:12,920 --> 00:42:18,350 But genuinely bringing in the interdisciplinarity can be quite difficult to achieve. 398 00:42:18,350 --> 00:42:25,850 So those are the challenges that I'm going to offer here and look forward to discussing later on as well. 399 00:42:25,850 --> 00:42:30,040 Thank you very much, Helena. That was that was very interesting indeed. 
I'm just pointing up a contrast between your approach and Milo's. You did bring in traditional ethical theory, and so you were telling them about deontological views and consequentialism and virtue ethics and so on. Did you find the students engaged with that, or did they turn off?

I think they did; I would say that they did engage with it. And I think what's very helpful is actually to use those theories and say, you know, you use these positions all of the time. If you think about when you're discussing with your friends whether we should have driverless cars, or, you know, what's the solution to the trolley problem and all of that kind of stuff, the arguments that they make naturally pull on these consequentialist and deontological positions without people necessarily realising it. And I think if you can make that point, it can be a very useful way of showing people that ethics isn't just this sort of philosophical thing that people do in a room in the abstract; actually, ethics is something that we engage with all of the time. So I think it can make a very useful teaching point to bring in those perspectives.

Yeah, OK. So, I mean, Descartes would like this: you start from the particular and move to the general, and you say, look, here you are thinking in this way, and if you think through the implications of it, it's going to lead you in, as it were, an ethical direction, or the direction of things that have been seen as underlying ethics. One other question. I mean, this is obviously very new; ethics is coming into AI teaching across the world in a way that it previously hasn't been. Are there any outcomes of this that you've not expected, where you've tried things and, you know, been rather surprised by the outcome?

Oh, I think that one of the things that I find very rewarding about this kind of teaching in general is that you can't always predict the things that people will take away from it. So for our course, we asked students to write up some reflections at the end of the course and to write about how they might use these ideas going forward. And some of the things that they wrote were really, really rewarding, because they talked about other activities, you know, extracurricular activities that they're involved in, and how they're going to use those ideas in that work going forward. And that wasn't necessarily one of our core teaching objectives, but it was really fantastic to see them seeing the relevance of it and taking it forwards.
And similarly, we find with the broader, even the non-formal teaching that we're engaged in, it's often used in ways that we don't anticipate. So with our UnBias cards, they've been used by a much broader range of people than we first anticipated, and that's led us actually to develop extensions to them. So my colleagues who worked on the project have developed further materials for facilitators using those cards, and also an AI decision-makers' toolkit, which has just come out as well, which I was supposed to mention when I had a slide up with the URLs for the projects. So I'll just mention it now, and if anybody would like more information about it, they can get in contact with me and I'll pass it on. But I think that's a fantastic thing about doing these engagement and education activities, the kind of adoption you see: what people take away from it isn't necessarily what you set out as your core objectives; it can be something else as well. And that's because, you know, people are seeing it through the lens of their own experiences and their own interests, so it can have many more positive benefits than you initially set out for it to have.

Well, thank you very much, that's really interesting. And now we're on to your collaborator. Max Van Kleek is Associate Professor of Human-Computer Interaction at the Department of Computer Science and a fellow of Kellogg College in Oxford. He has a background at MIT, and while studying at MIT he was a research assistant at the MIT AI Lab, the Media Lab and the Computer Science and AI Lab, so he ended up studying under some of AI's most influential founding fathers, including Rodney Brooks and Marvin Minsky. Most enviable, Max. Since moving to the UK, first to Southampton and then to Oxford, Max has worked closely with Sir Nigel Shadbolt, who chairs the steering committee for the new Ethics in AI Institute; Nigel has really led that charge and played a huge role in getting the new institute started. Since last year, as we've heard, Max has been teaching ethics and responsible innovation along with Helena, and they developed that course together from scratch. And you're going to focus... oh, actually, I've just noted a rather whimsical thing. At MIT, Max, I understand you used to be an electronic music DJ with a "dot pop" radio show that had robot music designed to soothe lonely robots. Is that true?

Yes, I spent so much time in robot labs that
454 00:47:52,400 --> 00:47:56,100 I believed — that they deserved, you know, acoustic accompaniment. 455 00:47:56,100 --> 00:48:02,650 So I ended up becoming an electronic music deejay with WMBR, M.I.T.'s radio station. 456 00:48:02,650 --> 00:48:07,600 Excellent. Right. Well, over to you. OK. Thank you so much, Peter. 457 00:48:07,600 --> 00:48:12,490 And yes, thank you as well to the Institute. 458 00:48:12,490 --> 00:48:16,250 Ever since the institute was founded, I've been excited to get involved. 459 00:48:16,250 --> 00:48:20,860 And there were some excellent seminars last term about AI ethics. 460 00:48:20,860 --> 00:48:28,680 And the reason that I am so keen to give this talk here is twofold. 461 00:48:28,680 --> 00:48:30,340 First of all, you know, 462 00:48:30,340 --> 00:48:38,830 this is an area which is fast moving and incredibly important. As the pace of A.I. and technology has continued 463 00:48:38,830 --> 00:48:49,840 to accelerate, the pace at which new ethical problems and concerns are arising has also correspondingly seemed to accelerate. 464 00:48:49,840 --> 00:48:53,150 And so that is a major reason for it. 465 00:48:53,150 --> 00:48:59,680 The reason why I think it's interesting and exciting to talk about education is that, in response to these emerging issues, 466 00:48:59,680 --> 00:49:06,760 there have been a huge number of different courses and curricula that have been proposed across the world. 467 00:49:06,760 --> 00:49:11,530 A lot of my collaborators in the US, for example, have put their syllabi online. 468 00:49:11,530 --> 00:49:18,400 And I think it's very important now to start conversations about how best to go about this endeavour — to try to, you know, 469 00:49:18,400 --> 00:49:26,050 understand how to teach ethics and to integrate it throughout various engineering and computer science curricula. 470 00:49:26,050 --> 00:49:29,080 And that's what I'd like to talk a little bit about today. 471 00:49:29,080 --> 00:49:39,010 So I was asked by Helena to take part in the design of a new module, as you've just heard. 472 00:49:39,010 --> 00:49:46,270 And I'm going to talk a little bit more about that here. The name of my talk is Victims of Algorithmic Violence. 473 00:49:46,270 --> 00:49:52,450 Now, this is meant to reflect the fact that sooner or later, because algorithms are now going everywhere, 474 00:49:52,450 --> 00:49:59,090 you or someone that you care about is going to be a victim of some sort of algorithmic violence. 475 00:49:59,090 --> 00:50:05,170 And we'll talk more about that later. Let's see if my slides will work. 476 00:50:05,170 --> 00:50:16,030 There we go. So, a brief map of this talk: I'll first talk about the background of the course and why I was so interested in teaching it. 477 00:50:16,030 --> 00:50:25,960 I'll briefly fill in the gaps, a little bit of the details of what Helena and I jointly decided would make an appropriate first course. 478 00:50:25,960 --> 00:50:34,060 Again, we were very much alpha testing this course and we're very keen to iterate it and make it better over time. 479 00:50:34,060 --> 00:50:39,430 And what I want to do is give a brief overview of our objectives there. 480 00:50:39,430 --> 00:50:45,610 And then I want to talk for a moment about reflection: how well it worked, 481 00:50:45,610 --> 00:50:51,730 challenges that we encountered and student reactions, and then end on talking about,
482 00:50:51,730 --> 00:50:56,210 so, what now, and where can we go next? 483 00:50:56,210 --> 00:51:04,970 So I usually begin my talks with a brief content warning: we will be mentioning a number of spicy issues here. 484 00:51:04,970 --> 00:51:15,170 If any of these are topics that are sensitive for you, then you may consider not tuning in for the next 10 minutes while I talk about them. 485 00:51:15,170 --> 00:51:25,100 But anyway. Yes. So let's first talk about why I got so interested in trying to teach this class and in joining Helena on this mission. 486 00:51:25,100 --> 00:51:32,540 The reason is not only because I help lead a group that is looking at the design of ethical technology, 487 00:51:32,540 --> 00:51:44,320 but because it seems that every week we have new, egregious examples of ways that technical systems are doing, you know, unbelievably terrible things. 488 00:51:44,320 --> 00:51:47,990 And, you know, just to mention an example from last week, 489 00:51:47,990 --> 00:51:56,660 here is the example of the Twitter image cropping algorithm. Let me just explain what the problem was. 490 00:51:56,660 --> 00:52:01,370 Twitter is, as you know, the world's most popular sort of microblogging platform. 491 00:52:01,370 --> 00:52:05,240 And it has this problem that tweets are this big. 492 00:52:05,240 --> 00:52:14,930 So they're quite small. And so if you upload images to share in a tweet, it has to figure out how to align the images so that you can see the bits — 493 00:52:14,930 --> 00:52:20,090 so that it can get sort of a bit of each one visible. And then you can click on it to see the rest. 494 00:52:20,090 --> 00:52:29,300 So somebody — a researcher — discovered last week that if you upload photos of people, one person, 495 00:52:29,300 --> 00:52:32,210 for example, being Mitch McConnell and the other person Barack Obama, 496 00:52:32,210 --> 00:52:42,410 and you create the images artificially to be long and narrow so that the image cropping algorithm has to then try to figure out a crop, 497 00:52:42,410 --> 00:52:46,940 and you permute them — what do you think the image cropping algorithm would do? 498 00:52:46,940 --> 00:52:51,650 Well, a very sensible thing would be to just take the middle, for example. 499 00:52:51,650 --> 00:52:56,300 What you would end up with is an image that's just white or blank for both of them. 500 00:52:56,300 --> 00:53:05,060 But what the algorithm was found to do, in fact, was always choose Mitch McConnell instead of Barack Obama. 501 00:53:05,060 --> 00:53:05,660 Now, 502 00:53:05,660 --> 00:53:18,530 the reason, as it was later sort of explained by Twitter, was that they were trying to figure out how to crop the image to maximise salience. 503 00:53:18,530 --> 00:53:30,080 And the data set that they were using to do that had more faces of white, Caucasian people than of black or darker skinned people. 504 00:53:30,080 --> 00:53:33,320 And this is, you know, just one of the many examples. 505 00:53:33,320 --> 00:53:40,610 We also have plenty of other examples of devices that work less well for those who are darker skinned or 506 00:53:40,610 --> 00:53:47,930 who are not white, because of these same kinds of imbalances in the data sets that people are using. 507 00:53:47,930 --> 00:53:52,730 Now, these are terrible examples, because what happens is that it exacerbates things, makes them worse.
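(A minimal sketch, purely for illustration, of the kind of saliency-driven cropping just described. The saliency_map here is a toy brightness proxy standing in for Twitter's actual learned model, which is not reproduced; the point is only that a crop chosen purely to maximise predicted saliency inherits whatever skew those saliency scores carry.)

```python
import numpy as np

def saliency_map(image: np.ndarray) -> np.ndarray:
    """Toy stand-in for a learned saliency model.

    Here we just use brightness as a proxy; in a real system this would be a
    model trained on human attention data, and any demographic imbalance in
    that data is inherited by the scores it produces.
    """
    return image.astype(float) / 255.0

def choose_crop(image: np.ndarray, crop_h: int) -> int:
    """Return the top row of the crop window that maximises total saliency.

    This is the step that encodes the problem: the crop simply follows
    whatever the model scores highest, so a skew in the saliency scores
    becomes a skew in who gets kept in the picture.
    """
    row_totals = saliency_map(image).sum(axis=1)              # saliency per row
    window_sums = np.convolve(row_totals, np.ones(crop_h), mode="valid")
    return int(np.argmax(window_sums))

# Tall composite image: person A's region near the top, person B's near the bottom.
img = np.zeros((300, 100))
img[20:60, 30:70] = 180     # region the toy model happens to score higher
img[240:280, 30:70] = 120   # region it scores lower
print(choose_crop(img, crop_h=100))   # prints 0: the crop keeps A's region, drops B's
```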
508 00:53:52,730 --> 00:54:00,590 And in some cases it quite literally promotes the erasure of members of marginalised groups. 509 00:54:00,590 --> 00:54:07,250 And, you know, these are not very complicated to demonstrate — these are seemingly obvious kinds of things. 510 00:54:07,250 --> 00:54:15,190 But this is just the tip of the iceberg. We have a huge number of these systems that we could talk about, which, for example, 511 00:54:15,190 --> 00:54:20,840 do everything from targeting adverts to teenagers who are particularly depressed or have low body 512 00:54:20,840 --> 00:54:29,210 confidence, to systems that are optimised to spread conspiracy theories to those most likely to believe them, 513 00:54:29,210 --> 00:54:35,090 to systems that are designed and implemented based on a history of racist pseudoscience, and so on. 514 00:54:35,090 --> 00:54:41,450 And so, you know, it didn't require much to persuade me that this is a very important topic. 515 00:54:41,450 --> 00:54:51,050 And what was really exciting was that we were given the opportunity by the department to create a mandatory module for computer scientists. 516 00:54:51,050 --> 00:54:53,900 Just a little bit of background about our department. 517 00:54:53,900 --> 00:55:00,620 So our undergraduate programme in computer science is currently ranked top in the world by subject in one of the rankings. 518 00:55:00,620 --> 00:55:05,660 You know these rankings — this one is the Times Higher Education one. 519 00:55:05,660 --> 00:55:12,260 But, you know, overall, it's just a highly competitive programme with a very strong theoretical emphasis. 520 00:55:12,260 --> 00:55:21,260 But what's interesting here is that it's not just computer science. We have three different courses and one of them is actually a joint programme, 521 00:55:21,260 --> 00:55:25,490 computer science and philosophy, and another one is maths and computer science. 522 00:55:25,490 --> 00:55:32,600 But regardless of which of these streams you're on, you are required to take our new module. 523 00:55:32,600 --> 00:55:40,240 So this set up a very exciting opportunity for Helena and myself to try to 524 00:55:40,240 --> 00:55:48,020 get students interested and excited about ethical challenges and to give them a means by which they can think about them. 525 00:55:48,020 --> 00:55:55,040 So our main goal was to prepare students to be able to identify, and to start to think about and address, 526 00:55:55,040 --> 00:56:01,250 real world ethical challenges that are posed by digital systems. And in order to do that — 527 00:56:01,250 --> 00:56:05,900 this is Oxford and we're not afraid of embracing our tradition — 528 00:56:05,900 --> 00:56:12,710 we wanted to equip students with the foundational conceptual frameworks for thinking about ethical positions. 529 00:56:12,710 --> 00:56:21,920 And for that, you know, we did not hold back on ancient philosophy in order to establish this foundation. 530 00:56:21,920 --> 00:56:27,140 And then we wanted to relate those, to connect them, to contemporary ethical challenges. 531 00:56:27,140 --> 00:56:34,310 And then finally to apply them to situations, as Helena just described.
532 00:56:34,310 --> 00:56:44,480 And in order to do that, we were able to draw from a huge variety of texts that have been recently written on ethical challenges. 533 00:56:44,480 --> 00:56:51,020 And we'll revisit those in a bit. But Weapons of Math Destruction, Invisible Women and Ruha Benjamin's Race After Technology 534 00:56:51,020 --> 00:56:59,290 were three of the books that we found extremely useful. Now, this slide might get me in quite a bit of trouble. 535 00:56:59,290 --> 00:57:05,850 However, there was one thing we really wanted to avoid doing. 536 00:57:05,850 --> 00:57:11,550 Yeah, you must understand — so, just going back for a second, we only had four lectures and two practicals. 537 00:57:11,550 --> 00:57:18,850 So it's very compressed. The problem is undergraduates here are very stressed out and they are required to take a lot of classes in their first year. 538 00:57:18,850 --> 00:57:23,620 And so adding a brand new mandatory module to their schedule is not an easy thing to do. 539 00:57:23,620 --> 00:57:31,270 And so we had to really, really figure out a way that we could compress this material in a way that was still meaningful and engaging. 540 00:57:31,270 --> 00:57:43,310 So something had to go. And the things that we decided had to go were the most common kinds of tropes associated with A.I. ethical challenges. 541 00:57:43,310 --> 00:57:50,320 Now, what's particularly funny is that these three challenges are very much championed by, or in fact 542 00:57:50,320 --> 00:57:59,080 come from, Oxford's various philosophers. Professor Nick Bostrom's Superintelligence book, 543 00:57:59,080 --> 00:58:02,890 for example, describes the superintelligence hypothesis, 544 00:58:02,890 --> 00:58:12,340 which states that at some point A.I. might become out of control and vastly more capable than humans and thus will not need to keep us around. 545 00:58:12,340 --> 00:58:23,290 And, of course, the trolley problem is perhaps the most well-known ethical conundrum in history, 546 00:58:23,290 --> 00:58:27,940 which poses the question of whether we need to be able to solve the question of what 547 00:58:27,940 --> 00:58:34,450 the value of a human life or several human lives is, in order to determine, 548 00:58:34,450 --> 00:58:37,510 you know, how a system should behave. 549 00:58:37,510 --> 00:58:47,560 So if a trolley were out of control and it needed to sacrifice either one person or several people, and they had different properties and histories, 550 00:58:47,560 --> 00:58:51,160 which would it choose and how can you make such a decision? 551 00:58:51,160 --> 00:58:59,020 And finally, the third thing that we didn't talk about, beyond just describing the problem, is, 552 00:58:59,020 --> 00:59:03,790 of course, the issue of robot rights and information ethics, 553 00:59:03,790 --> 00:59:09,860 where Professor Floridi and others have talked about whether information itself should exert a moral claim 554 00:59:09,860 --> 00:59:15,500 and thus whether technical systems comprised of information should themselves exert moral claims. 555 00:59:15,500 --> 00:59:24,310 The interesting thing — the reason why we didn't talk about these is that they don't currently present pressing issues to worry about. 556 00:59:24,310 --> 00:59:32,080 And in some cases, many of us in the computer science department don't believe they ever will present issues worth worrying about. 557 00:59:32,080 --> 00:59:39,520 Sorry —
'Worth' is a strong word. You know, in the sense that they are relevant to actual problems. 558 00:59:39,520 --> 00:59:41,830 They are very interesting questions to discuss. 559 00:59:41,830 --> 00:59:50,350 If you'd like to know more about why I don't believe the superintelligence hypothesis will ever occur in reality, 560 00:59:50,350 --> 00:59:56,550 please see these scholars, who can articulate 561 00:59:56,550 --> 01:00:03,550 the reasons vastly better than I can. As far as the trolley problem is concerned, we discussed it briefly. 562 01:00:03,550 --> 01:00:11,840 I think one of the most misapplied versions of the problem is in the context of autonomous vehicles. 563 01:00:11,840 --> 01:00:15,820 And so you've probably heard that in order to create autonomous vehicles, 564 01:00:15,820 --> 01:00:24,520 we're going to need to solve the trolley problem so that the car can decide whether to kill a person X or maim some other person Y. 565 01:00:24,520 --> 01:00:30,520 In fact, those who are designing safety critical systems and autonomous vehicles here and elsewhere will 566 01:00:30,520 --> 01:00:35,680 tell you that the trolley problem is definitely not anything close to anything they worry about, 567 01:00:35,680 --> 01:00:41,710 because as soon as you injure or hurt somebody, the system is outside its designed scope. 568 01:00:41,710 --> 01:00:46,390 It has failed, right? You design systems not to kill people, by any means possible. 569 01:00:46,390 --> 01:00:54,850 And as soon as you're in that space, then you're in a sort of error or failure condition and you need to cope with that in a particular way. 570 01:00:54,850 --> 01:01:03,250 On the other hand, there are some really compelling and also tragic applications of the trolley problem, which we did discuss. 571 01:01:03,250 --> 01:01:08,230 And one of them is very relevant right now with the pandemic: critical resource allocation and medical 572 01:01:08,230 --> 01:01:13,540 ethics. Say that you've got a finite number of ICU beds and you have more people who need them. 573 01:01:13,540 --> 01:01:19,840 How do you then provision and assign patients to beds? The same applies to disaster relief. 574 01:01:19,840 --> 01:01:24,760 Hopefully we won't have another scenario in 2020, at least, related to that. 575 01:01:24,760 --> 01:01:27,970 So what kinds of problems did we focus on? 576 01:01:27,970 --> 01:01:33,070 Well, I wanted to give you a very high level overview because obviously I am limited in time here. 577 01:01:33,070 --> 01:01:41,530 But one, you know, primary example is what Helena alluded to in terms of algorithmic bias. 578 01:01:41,530 --> 01:01:52,450 So it is, you know, a fact that voice recognition systems today work better for — have higher accuracy for — men than for women. 579 01:01:52,450 --> 01:01:57,540 And what's really terrible about this is that this can get cascaded on. 580 01:01:57,540 --> 01:02:02,880 If you have systems that rely on speech recognition as the first stage of their pipeline, for example, 581 01:02:02,880 --> 01:02:10,710 the kind of A.I. hiring systems that Helena alluded to, which require essentially an understanding of what a person is saying 582 01:02:10,710 --> 01:02:17,730 in order to make a decision about whether somebody is a good candidate, will then automatically have higher error rates for women.
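(A minimal sketch, with made-up numbers, of the per-group error-rate audit that makes this kind of disparity visible before a recogniser is wired into a downstream pipeline; the group labels and figures here are hypothetical.)

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, was_correct) pairs from a recogniser.

    Returns the error rate per group — the simplest disparity check you can
    run on a speech-recognition (or any classification) stage of a pipeline.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for group, was_correct in records:
        totals[group] += 1
        errors[group] += 0 if was_correct else 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation results, purely illustrative.
results = [("men", True)] * 92 + [("men", False)] * 8 \
        + [("women", True)] * 83 + [("women", False)] * 17
print(error_rate_by_group(results))
# {'men': 0.08, 'women': 0.17} — the gap that then cascades into any
# downstream system (e.g. automated hiring) built on top of the recogniser.
```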
583 01:02:17,730 --> 01:02:25,710 And this is particularly a problem because, you know, women are already facing greater challenges than men in getting 584 01:02:25,710 --> 01:02:32,670 through glass ceilings and in getting hired, especially in particular sectors. 585 01:02:32,670 --> 01:02:37,600 And, you know, this kind of thing only makes matters worse. 586 01:02:37,600 --> 01:02:49,820 And in particular, if you then look at groups that have accents or, you know, any other variation in speech, then the recognition rates become significantly worse. 587 01:02:49,820 --> 01:02:54,270 And if you think about how those variations then correlate with marginalised groups, 588 01:02:54,270 --> 01:03:04,460 these systems are only going to make things worse. So we talked specifically about things like that — about accuracy related to groups. 589 01:03:04,460 --> 01:03:09,180 A fundamentally different kind of harm that we also described was representational 590 01:03:09,180 --> 01:03:15,300 harms, which deal with essentially the ways that people, 591 01:03:15,300 --> 01:03:20,010 places and ideas are then associated through the data that we use. 592 01:03:20,010 --> 01:03:24,870 For example, Professor Latanya Sweeney's research revealed that African-American 593 01:03:24,870 --> 01:03:29,820 sounding names were more likely to be associated with ads suggesting arrest records on 594 01:03:29,820 --> 01:03:34,320 Google, very much demonstrating the sort of representational harm whereby these 595 01:03:34,320 --> 01:03:38,700 African-American names become associated in some sense with arrests. 596 01:03:38,700 --> 01:03:48,720 And this was sort of the kind of thing that we were talking about. 597 01:03:48,720 --> 01:03:58,710 I'm going to skip very quickly through some of the other lessons, because I would like to get immediately to our discussion of reflections. 598 01:03:58,710 --> 01:04:08,220 So let me see. Sorry about that. I think I was slightly ambitious in terms of what I could cover in the time. The penultimate thing 599 01:04:08,220 --> 01:04:16,770 I'd like to talk about before our reflections was the question of whether technological artefacts could be unethical or ethical. 600 01:04:16,770 --> 01:04:23,220 And for this question — I believe, you know, Milo talked about value-embedded design — 601 01:04:23,220 --> 01:04:30,780 it was very useful for us to talk about the many kinds of systems that computer science students and engineers use on a daily basis 602 01:04:30,780 --> 01:04:40,560 and about whether we could see them as, you know, ethically neutral, general purpose, or whether they are in some sense good or bad. 603 01:04:40,560 --> 01:04:49,860 And that was a very exciting exercise, because it related to a lot of the kinds of activities they perform, down to the databases they use. 604 01:04:49,860 --> 01:04:56,650 Okay, so now I'd like to just spend a couple of minutes at the end talking about how the students split. 605 01:04:56,650 --> 01:05:02,130 So now, this diagram is not based on actual figures. 606 01:05:02,130 --> 01:05:04,990 This is based upon my sort of recollection and heuristics. 607 01:05:04,990 --> 01:05:11,190 I'd like to say these are semi-random figures representing the subsets of students with particular reactions to the ethics material.
608 01:05:11,190 --> 01:05:19,140 So there was, first of all, a significant group which was really sort of ready for this. 609 01:05:19,140 --> 01:05:23,220 They were already aware of and motivated to look at ethical challenges. 610 01:05:23,220 --> 01:05:27,450 And really, you know, this was the group where it was like giving water to a sponge. 611 01:05:27,450 --> 01:05:31,590 They were ready to get this material and start working immediately. 612 01:05:31,590 --> 01:05:35,940 And, you know, they were the least difficult to convince — very excited. 613 01:05:35,940 --> 01:05:44,010 And essentially, all we were doing was providing examples which, you know, broadened their initial expectations. 614 01:05:44,010 --> 01:05:48,410 In general, a majority of the students were not in that group. 615 01:05:48,410 --> 01:05:50,310 While there were a significant number in that group, 616 01:05:50,310 --> 01:05:55,980 the majority of the students seemed to start out with a slightly more reluctant and sceptical view. 617 01:05:55,980 --> 01:06:00,090 And I'll talk in the next slide about why that might be, because in part 618 01:06:00,090 --> 01:06:05,910 they started with much narrower expectations of what ethics would talk about. 619 01:06:05,910 --> 01:06:13,860 And, you know, there was a bit of an element of surprise when we started talking about the breadth of the kinds of issues that we discussed. 620 01:06:13,860 --> 01:06:18,120 And it felt a little bit like they were navigating out of a sort of local maximum. 621 01:06:18,120 --> 01:06:24,390 And we were really trying to push them to a place where they were able to see the larger picture. 622 01:06:24,390 --> 01:06:32,760 But then we had the last group. Now, the last group is a minority, but nonetheless, I think, an important minority. 623 01:06:32,760 --> 01:06:37,560 These were the highly sceptical students — first of all, the students that, you know, 624 01:06:37,560 --> 01:06:40,110 when you say you need to take this class, they are the ones that say, 625 01:06:40,110 --> 01:06:49,770 do I really need to? You know, why is this relevant to anything? I'd like to call them the tricky 20 percent. 626 01:06:49,770 --> 01:06:54,290 So, the tricky 20 percent — why were they so tricky? Well, 627 01:06:54,290 --> 01:07:03,170 one reason, first of all, is that they saw ethics as impinging on their activities. 628 01:07:03,170 --> 01:07:10,010 And it's very useful for us to frame ourselves as in-group, as computer scientists — 629 01:07:10,010 --> 01:07:18,110 why you want a hacker on your team — because then you can actually connect to them as a 630 01:07:18,110 --> 01:07:25,160 mentor rather than an interloper, rather than somebody who is introducing ethics as something that you need to do, 631 01:07:25,160 --> 01:07:30,560 almost as parents would tell somebody that they need to clean their room. It's much more useful to have friends 632 01:07:30,560 --> 01:07:36,680 just shame you based upon how dirty your room is. But another reason is language. 633 01:07:36,680 --> 01:07:43,190 So it's very important, for this tough 20 percent, to be able to speak the language.
634 01:07:43,190 --> 01:07:53,150 And that can be very challenging because, you know, interdisciplinary work inherently deals with different definitions of things. 635 01:07:53,150 --> 01:07:57,500 And even using the term A.I., for example, imprecisely — 636 01:07:57,500 --> 01:08:02,770 which is notoriously done by the media — can be a dead giveaway that you really don't know 637 01:08:02,770 --> 01:08:08,150 how the systems work very well. And that can put people off. It can put students off, saying, OK, here are these people, 638 01:08:08,150 --> 01:08:13,160 they don't know what they're talking about, and they're telling me what I can do or what I can't do. 639 01:08:13,160 --> 01:08:18,110 But another, more positive side of being a hacker, of 640 01:08:18,110 --> 01:08:24,110 having a computer scientist on your team, is that there are some really deep and profound connections between theoretical A.I., 641 01:08:24,110 --> 01:08:26,870 theoretical computer science and philosophy. 642 01:08:26,870 --> 01:08:32,600 And what's great is that we have something like 15 to 20 percent of our students who are Computer Science and Philosophy students, 643 01:08:32,600 --> 01:08:40,880 and being able to make those beautiful connections is very valuable. Next, you want to have a hacker because you need to understand the hacker mindset. 644 01:08:40,880 --> 01:08:46,430 So, you know, the hacker mindset — originally, per Mark Zuckerberg — is to move fast and break things, right? 645 01:08:46,430 --> 01:08:55,100 No rules allowed. And so what you want to do is position the work so that you're not the fun police telling people what not to do. 646 01:08:55,100 --> 01:08:57,710 But instead, you're setting the bar higher. 647 01:08:57,710 --> 01:09:04,550 You're introducing a bunch of new challenges that will make your life perhaps more challenging in the short term, 648 01:09:04,550 --> 01:09:09,590 but will allow you to build better things in the future. 649 01:09:09,590 --> 01:09:18,420 And so that is why — those are the sort of feelings that we got and the reasons that we were able to pivot, with our 650 01:09:18,420 --> 01:09:25,700 multidisciplinary team, between the more computer science angle and more of a philosophy and social science angle. 651 01:09:25,700 --> 01:09:30,510 I'm not sure how much time I have. Peter, do I have one or two minutes left? 652 01:09:30,510 --> 01:09:41,180 You're all right at the moment, but if you could bring things to a reasonably quick conclusion, it would be nice to have a quarter of an hour 653 01:09:41,180 --> 01:09:46,640 now for general questions. Sorry — I'll finish within three or four more minutes. 654 01:09:46,640 --> 01:09:48,850 Yes, absolutely. That's more than enough time. 655 01:09:48,850 --> 01:09:57,440 So, an example of the kinds of interesting and rich and slightly tense discussions that we had amongst students, 656 01:09:57,440 --> 01:10:01,760 which made me realise that, you know, our job is definitely not done 657 01:10:01,760 --> 01:10:05,630 and there's a lot of opportunity here to improve things, is the following. 658 01:10:05,630 --> 01:10:12,230 I'll give you three examples.
The first one is that we were in the middle of discussing a paper called Discriminating Tastes, 659 01:10:12,230 --> 01:10:16,700 which is about Uber customer ratings and the ways that, for example, 660 01:10:16,700 --> 01:10:22,730 women and minority drivers get, on average, lower Uber and Lyft ratings than white men. 661 01:10:22,730 --> 01:10:30,860 This is in general perceived or thought to be the case for a lot of gig economy workers and is, of course, you know, 662 01:10:30,860 --> 01:10:38,180 a problem because then it feeds into the algorithm, which then changes the opportunities that are given to the gig workers 663 01:10:38,180 --> 01:10:47,210 as a consequence. So it has this vicious feedback cycle. So we were discussing this case and, you know, we had a student say, well, you know, 664 01:10:47,210 --> 01:10:56,030 how do you know that this is because people are being biased, and that women or minorities are not just worse drivers than men? 665 01:10:56,030 --> 01:11:03,200 And the nice thing about this class is that we were very open to running it as a seminar, and we had somebody immediately respond: 666 01:11:03,200 --> 01:11:08,450 well, do you know that women have better accident records, so they have fewer accidents than men? 667 01:11:08,450 --> 01:11:13,400 So the evidence would suggest the opposite — that, in fact, maybe women were better drivers in general. 668 01:11:13,400 --> 01:11:17,210 And then we had another student respond — and, you know, these are rough paraphrases, 669 01:11:17,210 --> 01:11:19,280 they're not direct quotes — 670 01:11:19,280 --> 01:11:27,110 that, well, maybe, you know, they just drive more slowly or are just too careful or aren't as nice to passengers or something. 671 01:11:27,110 --> 01:11:34,220 And what I wanted to highlight here is that this is a real challenge, because they're trying to statistically explain 672 01:11:34,220 --> 01:11:41,750 the possibility that there may have been another reason besides discrimination that led these groups to have lower ratings. 673 01:11:41,750 --> 01:11:45,500 You try to slice away the possibilities, but there are infinite possibilities — 674 01:11:45,500 --> 01:11:51,560 there may always be yet another reason: maybe it wasn't racism, maybe it wasn't discrimination. 675 01:11:51,560 --> 01:11:54,320 And this is a trap. Right? This is a challenge. 676 01:11:54,320 --> 01:12:01,520 And I think that one of the ways that we can get around this is really by saying, well, you know, 677 01:12:01,520 --> 01:12:09,740 the personal experience of people who are members of these groups will be able to tell you — and amplify the fact that, 678 01:12:09,740 --> 01:12:15,130 you know, when you look at the kinds of ways that they get treated and the ways that they get rated, 679 01:12:15,130 --> 01:12:21,140 it suggests discrimination rather than one of these, you know, increasingly unlikely other hypotheses. 680 01:12:21,140 --> 01:12:27,660 And that kind of discussion is really interesting and fruitful and will allow these things to pan out. 681 01:12:27,660 --> 01:12:32,720 You know, again, we want to not erase discrimination statistically. 682 01:12:32,720 --> 01:12:41,060 And it's often difficult to counter this argument. Two other very quick examples. 683 01:12:41,060 --> 01:12:49,490 This one kept coming up over and over again in a hiring scenario: the diversity–quality fallacy, which goes something like this.
684 01:12:49,490 --> 01:12:54,800 We have to consider diversity, but that'll force us to sacrifice the quality of our candidates. 685 01:12:54,800 --> 01:12:57,860 When you screen candidates, you generate an N-best list. 686 01:12:57,860 --> 01:13:03,620 And, you know, if those top candidates are all, for example, white men, 687 01:13:03,620 --> 01:13:10,640 then you're going to have to keep going down the list until you find a candidate that meets your diversity criteria. 688 01:13:10,640 --> 01:13:16,490 And we found several students kept giving this argument over and over again. 689 01:13:16,490 --> 01:13:24,440 And the point where that broke, which was very exciting, was when we had students respond to this very strongly 690 01:13:24,440 --> 01:13:28,250 and say, for example, what are you measuring anyway? 691 01:13:28,250 --> 01:13:35,150 What is the ranking supposed to represent? You can't measure the value of diversity to a team. And then another student made 692 01:13:35,150 --> 01:13:40,740 another really exciting point, which was about the law of diminishing returns: above a certain threshold, 693 01:13:40,740 --> 01:13:49,220 she said, rank differences are most likely to be caused by noise and other things like historical injustices and structural inequalities. 694 01:13:49,220 --> 01:13:56,600 And so, you know, for the students to be able to counter that argument themselves, I think, was extremely exciting. 695 01:13:56,600 --> 01:14:00,060 The third and final one that I wanted to mention 696 01:14:00,060 --> 01:14:03,630 was that engineers will be engineers no matter what you do to them, 697 01:14:03,630 --> 01:14:13,050 and they genuinely believe that you can fix it if it's broken. Whereas, you know, 698 01:14:13,050 --> 01:14:18,510 we strongly believe, and we tried to highlight, that in some cases some systems are so deeply 699 01:14:18,510 --> 01:14:23,550 and fundamentally problematic in their very premise that they should never have been built to begin with. 700 01:14:23,550 --> 01:14:34,590 And so we gave a number of examples of this: DeepNude, skin-whitening apps, a sexual assault simulator. 701 01:14:34,590 --> 01:14:42,420 And I think some of the most interesting ones are examples which draw on historical racist pseudoscience. 702 01:14:42,420 --> 01:14:52,530 For example, there are a huge number of papers being produced by machine learning researchers right now that try to infer traits such as IQ, 703 01:14:52,530 --> 01:14:57,540 criminality or sexual orientation from people's faces. 704 01:14:57,540 --> 01:15:01,770 Of course, this is rooted in physiognomy, 705 01:15:01,770 --> 01:15:08,130 which is a longstanding racist tradition of trying to identify whether people might be criminals by how they look. 706 01:15:08,130 --> 01:15:15,570 And without that context, though, students may be prone to try to throw deep neural networks at everything. 707 01:15:15,570 --> 01:15:20,460 And the real aha moment was where students realised that, in fact, 708 01:15:20,460 --> 01:15:27,240 maybe the features that these deep learning networks were triggering on were the features that, you know, 709 01:15:27,240 --> 01:15:34,560 society has forced people to fit — essentially, they were overfitting on features that 710 01:15:34,560 --> 01:15:40,500 we really shouldn't be using for determining characteristics like these.
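(Returning to the N-best-list argument above: a toy simulation, with invented scores, of the student's diminishing-returns point — when candidates near the top differ by less than the noise in the measurement, who ends up at rank one is largely arbitrary. The quality values and noise level below are hypothetical.)

```python
import random

def rank_once(true_quality, noise_sd=1.0):
    """Rank candidates by true quality plus measurement noise (highest first)."""
    noisy = [(q + random.gauss(0, noise_sd), i) for i, q in enumerate(true_quality)]
    return [i for _, i in sorted(noisy, reverse=True)]

# Five candidates whose true quality differs by far less than the noise.
true_quality = [10.0, 10.1, 9.9, 10.05, 9.95]
random.seed(0)
top_counts = [0] * len(true_quality)
for _ in range(10_000):
    top_counts[rank_once(true_quality)[0]] += 1
print(top_counts)
# Each candidate comes out 'best' a substantial fraction of the time: above a
# certain threshold, who sits at rank 1 of the N-best list is mostly noise.
```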
711 01:15:40,500 --> 01:15:50,340 I just want to end by saying there's a lovely GitHub repository by David Dao, which is a compilation of things that should never be built. 712 01:15:50,340 --> 01:15:59,340 And in conclusion, the first year of Ethics and Responsible Innovation showed us a bit about how C.S. students think about ethics. 713 01:15:59,340 --> 01:16:05,220 But there's a lot of work to be done. You know, there's no singular textbook that's being used. 714 01:16:05,220 --> 01:16:11,510 We were able to draw together a lot of materials from a lot of different contemporary sources that are very good, 715 01:16:11,510 --> 01:16:15,690 but we sort of had to bring together bits and pieces of each. 716 01:16:15,690 --> 01:16:22,560 And I think the next thing we need to do is really understand how to help 717 01:16:22,560 --> 01:16:29,340 students through some of these tough traps that they might encounter in the process. 718 01:16:29,340 --> 01:16:33,430 Thanks very much. Thanks very much, Max. 719 01:16:33,430 --> 01:16:40,470 Really interesting. Right. Well, are all our speakers now available? 720 01:16:40,470 --> 01:16:45,580 What I'd like to do is kick off with a question that's been asked. 721 01:16:45,580 --> 01:16:49,370 And it's prompted partly by what you've just said, Max: 722 01:16:49,370 --> 01:16:57,340 the constraints of the undergraduate curriculum and how difficult it is to fit ethics teaching into it. 723 01:16:57,340 --> 01:17:00,550 The question: ideally, what kinds of ethics 724 01:17:00,550 --> 01:17:10,810 pedagogy, and how much of it, should an engineering or computer science student encounter during their undergraduate education? 725 01:17:10,810 --> 01:17:17,560 And another question made the point that actually a lot of these issues will 726 01:17:17,560 --> 01:17:26,150 arise with engineering students as well as computer science or A.I. students. 727 01:17:26,150 --> 01:17:34,580 Are there specific issues? This is from Leslie: what do you think about teaching ethics to engineering students, as they also have that need? 728 01:17:34,580 --> 01:17:40,580 And maybe you'd like to comment on how much ethics we should be getting into those courses, 729 01:17:40,580 --> 01:17:50,500 and is there a distinction between engineering students and software engineering or A.I. students? 730 01:17:50,500 --> 01:17:56,320 Who'd like to volunteer first on that? Helena? 731 01:17:56,320 --> 01:18:02,880 Yes. I'm actually a social scientist, so I studied social and political sciences for my undergraduate degree. 732 01:18:02,880 --> 01:18:07,500 And what's interesting is that in that curriculum, we start learning ethics from day one. 733 01:18:07,500 --> 01:18:10,440 We learn it in relation to research methods and so on. 734 01:18:10,440 --> 01:18:16,590 So it always surprises me to be in a computer science department where you don't have that kind of attitude — 735 01:18:16,590 --> 01:18:19,530 you know, that ethics is kind of embedded into everything that you do. 736 01:18:19,530 --> 01:18:28,290 So the question about how much ethics and how soon — I would say teach ethics from day one, teach it as a fundamental part, because it is. 737 01:18:28,290 --> 01:18:32,520 And, you know, we see all of the examples that Max has just talked about.
738 01:18:32,520 --> 01:18:38,490 This is why it's so important that we're teaching ethics to our computer scientists and to our engineers 739 01:18:38,490 --> 01:18:43,290 right from day one and teaching it as something that is fundamental to everything that they're doing. 740 01:18:43,290 --> 01:18:48,090 It's not just sort of an add-on thing that you have to get through; actually, it's integrated. 741 01:18:48,090 --> 01:18:55,350 And I think, moving forward, what we would like to see is these ethical elements — we teach it as its own course, 742 01:18:55,350 --> 01:18:59,430 but also to see it integrated into other modules, or to see students being 743 01:18:59,430 --> 01:19:05,730 encouraged to think about it in other modules and other activities that they're doing. We already have it in our department in year two: 744 01:19:05,730 --> 01:19:10,530 they have to do a practical design challenge, and now the students have to think about ethical issues. 745 01:19:10,530 --> 01:19:14,010 So there is a sense of building up through the curriculum. 746 01:19:14,010 --> 01:19:21,030 And I think the key is to keep that going and to keep getting these issues sort of embedded into what the students are doing. 747 01:19:21,030 --> 01:19:28,350 And I think actually it's the stuff about A.I. that really helps sell it to our computer science students, because I think, 748 01:19:28,350 --> 01:19:34,410 again, because of the kind of computer science that's studied at Oxford, sometimes they can say these issues are always for the engineers — 749 01:19:34,410 --> 01:19:40,590 you know, it's the implementation that's the issue. But actually, it's all of the examples that Max talked about that really, I think, 750 01:19:40,590 --> 01:19:45,190 helped open our students' eyes to why we need to think about it in computer science — 751 01:19:45,190 --> 01:19:51,810 why it's actually, you know, in the algorithms themselves, in the way that they're developed and the data that they're working on and so on. 752 01:19:51,810 --> 01:19:56,490 It's going to give rise to a challenge, isn't it? Because computer science has grown so much, 753 01:19:56,490 --> 01:20:03,000 and A.I. in particular in recent years. I mean, I know I spent 20 years partly in a computer science department. 754 01:20:03,000 --> 01:20:09,810 And, you know, whenever the curriculum was being revised, there was lots of new stuff and there was this competition: 755 01:20:09,810 --> 01:20:14,220 what do you have to lose from the curriculum to make space? 756 01:20:14,220 --> 01:20:27,720 And if most of your faculty are people who weren't really used to thinking of ethics as part of the discipline, that's a bit of a challenge. 757 01:20:27,720 --> 01:20:33,970 Yes. Yes. Absolutely, total agreement. Any thoughts on a solution, I guess, Milo? 758 01:20:33,970 --> 01:20:36,820 Yeah. I just think that that way of putting it — 759 01:20:36,820 --> 01:20:45,970 and what Helena said also kind of pushes in this direction — that way of putting it is very fruitful: what is part of the discipline? 760 01:20:45,970 --> 01:20:49,840 What counts as a good computer scientist? 761 01:20:49,840 --> 01:20:55,100 And if what counts as a good computer scientist is a certain technical or mathematical aptitude, 762 01:20:55,100 --> 01:20:58,210 then the other things are going to come second and it's going to get crowded out.
763 01:20:58,210 --> 01:21:05,920 You're not optimising for the problem of how to teach them to be, you know, ethically responsible. 764 01:21:05,920 --> 01:21:14,490 But if we think of what counts as a good computer scientist, at least for the purposes of our education, as integrating these components, 765 01:21:14,490 --> 01:21:22,080 then that's just one of the things that we're trying to teach. And then it becomes like anything else — you know, we're balancing, 766 01:21:22,080 --> 01:21:25,900 you know, more theoretical aspects of computer science versus more applied ones. 767 01:21:25,900 --> 01:21:30,100 Do you do machine learning or not? There's always going to be a trade-off in what content you include. 768 01:21:30,100 --> 01:21:35,680 But I do think that that's the fundamental shift: to think about a good computer 769 01:21:35,680 --> 01:21:41,120 scientist, or a good engineer more generally, as not just having technical aptitude. 770 01:21:41,120 --> 01:21:44,980 And once we have that shift, other things can follow. Right. 771 01:21:44,980 --> 01:21:49,690 Thank you. Yeah. Go on, Max. 772 01:21:49,690 --> 01:21:53,020 I'd just say two very quick things. 773 01:21:53,020 --> 01:22:02,920 One, to also reinforce what Helena said — it was really exciting to be able to get the students in their first year to talk about this, 774 01:22:02,920 --> 01:22:13,420 to establish a framing. What's interesting is that the deontological idea really sort of supported a feeling that some people had, 775 01:22:13,420 --> 01:22:16,090 you know, that there are some things that are sort of bad to do. 776 01:22:16,090 --> 01:22:19,900 Like, you know, there's sort of this gut feeling that there were some things that were just bad, 777 01:22:19,900 --> 01:22:24,780 but they didn't know that there was a term for it and they didn't know that there was a principle that said, actually, 778 01:22:24,780 --> 01:22:27,540 you know, violating someone's privacy, and, you know, the right 779 01:22:27,540 --> 01:22:36,470 to be treated genuinely as a person rather than used to, you know, sell more things or drive more engagement — that this was, like, 780 01:22:36,470 --> 01:22:43,510 you know, a violation of something that is a fundamental right that people have talked about humans having. 781 01:22:43,510 --> 01:22:49,810 So we were hoping, by sort of capturing them at the beginning, that they would then have that structure, 782 01:22:49,810 --> 01:22:53,020 that they would be able to then apply that lens to whatever they were looking at. 783 01:22:53,020 --> 01:23:00,340 That being said, the second thing I wanted to say was that there's a lot of really domain-specific applied ethics that we could go into. 784 01:23:00,340 --> 01:23:04,450 So the first thing we talked about — we said, has anybody even written a line of code before? 785 01:23:04,450 --> 01:23:09,790 Of course, most people had, in the computer science programme. So, say you decide to put it up online — 786 01:23:09,790 --> 01:23:16,670 is that OK? Well, you know, these are very fundamental questions about the potential harms that your code might do. 787 01:23:16,670 --> 01:23:24,070 You know, sort of basic ethics — like the fact that something you're doing in the world can have unexpected consequences,
788 01:23:24,070 --> 01:23:28,700 and should you be responsible for those things — versus something that's highly domain specific, 789 01:23:28,700 --> 01:23:34,000 like: is scraping data off the web for your machine learning algorithm, you know, 790 01:23:34,000 --> 01:23:38,140 data that people don't know is being used for this purpose — 791 01:23:38,140 --> 01:23:46,150 is that OK? So there are lots of very specific kinds of things that I think it would be interesting to return to. 792 01:23:46,150 --> 01:23:51,250 And, for example, you know, A.I. or machine learning ethics, I think, should be its own course. 793 01:23:51,250 --> 01:23:59,350 And it should probably be, you know, towards the end, where the students are actually doing advanced machine learning, 794 01:23:59,350 --> 01:24:02,230 you know, to reinforce those connections. 795 01:24:02,230 --> 01:24:12,040 There's actually a plan underway, now that Milo and Chris, my colleague at the college, are on board. 796 01:24:12,040 --> 01:24:21,400 The plan is to teach a course on A.I. ethics, which will be taken by our students doing computer science and philosophy, 797 01:24:21,400 --> 01:24:28,240 probably compulsorily, but which will also be available to others. And I think it will be very exciting to see those developments. 798 01:24:28,240 --> 01:24:33,250 So, yeah, and I'm sure comparable things will be happening elsewhere. 799 01:24:33,250 --> 01:24:41,590 I want to focus on a different question — two questions that have come in that are on a similar sort of theme. 800 01:24:41,590 --> 01:24:47,980 So one from Allison: how do you integrate ethics into a graduate programme like Oxford's, 801 01:24:47,980 --> 01:24:56,350 where students are more likely to work on ethically questionable projects, without the time to devote to an ethics class? 802 01:24:56,350 --> 01:25:02,920 I mean, my quick response is, I think graduates have more time for an ethics class rather than less, 803 01:25:02,920 --> 01:25:06,820 because a graduate course gives lots of time for reflection. 804 01:25:06,820 --> 01:25:12,990 But the main issue here is that they could be working on ethically questionable projects. 805 01:25:12,990 --> 01:25:21,840 And a related question: your discussion has focussed on decisions that students might make as individuals when they create technology. 806 01:25:21,840 --> 01:25:31,610 But we know that it's often bigger forces, incentive structures, corporate cultures that are the real culprits when technologies have bad effects. 807 01:25:31,610 --> 01:25:39,890 What does your ethics pedagogy have to do with that? So if you focus on the incentives here: 808 01:25:39,890 --> 01:25:45,440 there are a lot of financial incentives to do unethical things. The student 809 01:25:45,440 --> 01:25:49,370 you're teaching may have the best motives in the world, but when they go out into the job market, 810 01:25:49,370 --> 01:25:58,370 they come under a lot of pressure — and maybe even, God forbid, they are pressured by academics when they're doing their research. 811 01:25:58,370 --> 01:26:03,570 I hope that would be less of a problem. Milo, do you want to have a go at this one? 812 01:26:03,570 --> 01:26:11,160 Yeah, sure. There are sort of two parts to it. One is how this fits within the education itself — 813 01:26:11,160 --> 01:26:16,340 sort of the more structural view, thinking about organisational incentives,
814 01:26:16,340 --> 01:26:24,320 you know, the move-fast-and-break-things culture of Silicon Valley that Max alluded to. 815 01:26:24,320 --> 01:26:31,020 One part of it, I think, is having these elements from the social sciences and the areas of the humanities 816 01:26:31,020 --> 01:26:35,900 that look at these kinds of things, to teach students how this goes on. 817 01:26:35,900 --> 01:26:39,710 Why does it happen? Why might Facebook do things that are unethical? 818 01:26:39,710 --> 01:26:45,200 Well, they have a fiduciary duty. It's part of their charter to make money. 819 01:26:45,200 --> 01:26:51,650 So, get students to understand that, to get students to see those kinds of things. 820 01:26:51,650 --> 01:26:59,960 Also, not to be super deterministic about it — to see that things like corporate culture and how you set up a team, if you're working with other people — 821 01:26:59,960 --> 01:27:05,600 all these things can have effects. But to get them to appreciate that, and not just to have an individualistic view. 822 01:27:05,600 --> 01:27:08,390 So that's sort of the structural question. 823 01:27:08,390 --> 01:27:16,520 And then there's this other question of how do you motivate the students to not just do the bad thing once they leave? 824 01:27:16,520 --> 01:27:25,360 That's a really hard question. And that's kind of the idea of the slogan I was using — ready and willing. 825 01:27:25,360 --> 01:27:29,120 And I think, you know, it's going to be context dependent. 826 01:27:29,120 --> 01:27:35,780 It's going to take work. And it's really hard to measure. But there are lots of different ways. 827 01:27:35,780 --> 01:27:42,230 It sounds like, you know, from some of the talk about it, that Helena and Max had success with teaching parts of moral 828 01:27:42,230 --> 01:27:48,590 theory to show students that their intuitions have theoretical backing, and then that 829 01:27:48,590 --> 01:27:53,540 makes them take things more seriously, or just gets them to see that they have a 830 01:27:53,540 --> 01:27:59,140 certain power over the ethical outcomes in these small decisions that they're making. 831 01:27:59,140 --> 01:28:03,560 That might be exciting to them. We just have to explore, I think, there. 832 01:28:03,560 --> 01:28:10,820 Thank you. Helena? Yeah. I think — so, the first question, about graduate students. 833 01:28:10,820 --> 01:28:15,740 If the system for graduate student education is working properly, 834 01:28:15,740 --> 01:28:21,620 then there are various places where it can be picked up if they're doing something that could be ethically harmful. 835 01:28:21,620 --> 01:28:24,980 Obviously, they have the relationship with their supervisors. 836 01:28:24,980 --> 01:28:31,130 They have internal examinations as they go through the stages of their career where these things can be picked up as well. 837 01:28:31,130 --> 01:28:35,510 They also have opportunities to take courses in their first year of graduate study, too. 838 01:28:35,510 --> 01:28:40,400 And a lot of them do take some of the ethics-oriented courses. 839 01:28:40,400 --> 01:28:45,200 And I think what has been a very helpful development in our department over the last couple of years is that we 840 01:28:45,200 --> 01:28:50,030 have our own research ethics committee now, so that if you're doing work that involves human participants, 841 01:28:50,030 --> 01:28:56,600 it goes for an ethics check within the department.
Whereas previously we were having to go to the Social Sciences Research Ethics Committee, 842 01:28:56,600 --> 01:29:05,830 and it makes a big difference, because actually we need to have these... Are you okay? 843 01:29:05,830 --> 01:29:10,750 Hello, I'm back. I'm sorry, I don't know what happened. 844 01:29:10,750 --> 01:29:16,620 I just disappeared. Maybe I said something and somebody cut me off — did they mean to? 845 01:29:16,620 --> 01:29:24,940 But I'd still say it's very important, I think, that people's work in computer science is checked by 846 01:29:24,940 --> 01:29:30,070 people who have the expertise to know about the ethical dimensions of the technical work that they're doing. 847 01:29:30,070 --> 01:29:35,080 So I think there are various developments, various safeguards we have in place, and if they all work properly, 848 01:29:35,080 --> 01:29:41,860 then there are protections around our graduate students, who are more independent in their work — which I think is where the question is coming from. 849 01:29:41,860 --> 01:29:44,320 We do have these sorts of protections in place. 850 01:29:44,320 --> 01:29:50,330 And I think the question about sort of individuals and incentives is a really important one, and it's one that some of our students raise as well. 851 01:29:50,330 --> 01:29:52,460 They say, what's the point of being ethical 852 01:29:52,460 --> 01:29:59,710 if I go and work for this big company and what they're doing is deeply unethical? And we tried to raise it in the discussions, 853 01:29:59,710 --> 01:30:04,390 in the sessions where we talked about sort of codes of practise and the responsible innovation perspective, 854 01:30:04,390 --> 01:30:09,130 which is much more sort of society-based, where you might think about, you know, 855 01:30:09,130 --> 01:30:13,230 safeguards or responsibility practises that could be embedded within organisations. 856 01:30:13,230 --> 01:30:20,500 It could be in the form of self-governance, where industries might put in place their own codes of practise, or it could be policy dimensions as well. 857 01:30:20,500 --> 01:30:24,340 So we try to bring out that wider perspective, because it's absolutely true — 858 01:30:24,340 --> 01:30:28,150 you know, it's not just down to individual people to make ethical decisions. 859 01:30:28,150 --> 01:30:33,850 It's actually about how these things fit in across society, in the various institutions in society as well. 860 01:30:33,850 --> 01:30:42,490 I'd like to bring in a couple more questions. So we've got Richard asking, regarding the skills required for practising engineers: 861 01:30:42,490 --> 01:30:46,990 what about the socio-political skills of asserting and applying ethical 862 01:30:46,990 --> 01:30:51,760 thinking when working with people who are less aware or motivated than you are? 863 01:30:51,760 --> 01:30:54,240 I mean, that's very pertinent here, isn't it? 864 01:30:54,240 --> 01:31:04,840 If students, from studying ethics, can not only get good at thinking about it themselves, but also good at persuading others to take it seriously. 865 01:31:04,840 --> 01:31:11,770 And if we could wrap that up with another couple of questions. 866 01:31:11,770 --> 01:31:19,420 From Anna: given how ubiquitous the problem of replicating structural inequalities seems to be in A.I., 867 01:31:19,420 --> 01:31:23,830 why is the sector so quiet about its role in these issues? 868 01:31:23,830 --> 01:31:29,320 How is the sector ensuring accountability? And related to that:
869 01:31:29,320 --> 01:31:35,830 do the speakers have any reflections on the decolonisation process of ethical algorithms? 870 01:31:35,830 --> 01:31:42,870 How can the industry's creators ensure marginalised communities are reflected in the creation of these algorithms? 871 01:31:42,870 --> 01:31:47,750 So, there are a few issues there, but they are obviously closely related. 872 01:31:47,750 --> 01:31:57,730 If each of you could just quickly give your two penn'orth on that. Max? So, really great questions. 873 01:31:57,730 --> 01:32:06,040 The first question, dealing with the socio-political issues — those are also exacerbated by the cultural dimension as well. 874 01:32:06,040 --> 01:32:12,610 We have had students who explicitly say, well, it's OK if women don't have rights, they don't have rights in my country. 875 01:32:12,610 --> 01:32:14,270 And, you know, things like that. 876 01:32:14,270 --> 01:32:21,520 And the very interesting thing about this is to be able to say, well, you know, we're talking about this context, and in this context 877 01:32:21,520 --> 01:32:26,110 these are the kinds of problems that arise, you know, in the West or wherever. 878 01:32:26,110 --> 01:32:28,930 And so I think it's very important to contextualise that. 879 01:32:28,930 --> 01:32:36,820 But also, I think that some of the more fundamental moral philosophy gives us a lot of foundation or ammunition by which one can say, 880 01:32:36,820 --> 01:32:44,860 you know, people should be treated equally, and things like that. And so I think that was a sort of useful thing to be able to say: 881 01:32:44,860 --> 01:32:50,710 you know, if you believe these things, then our systems are not in accord with what we're trying to achieve. 882 01:32:50,710 --> 01:32:54,520 And so that, I think, was very useful. What was the second point? 883 01:32:54,520 --> 01:32:58,990 I'm sorry. — It dealt with diversity and so forth, with diversity. 884 01:32:58,990 --> 01:33:04,540 Oh, yes. What I'd say is that that is a hugely important problem. 885 01:33:04,540 --> 01:33:10,090 And, you know, what's interesting is that there are really bad ways to go about it and really good ways to go about it. 886 01:33:10,090 --> 01:33:14,980 And so right now there's a real issue to do with information economics. 887 01:33:14,980 --> 01:33:20,770 So the most wealthy companies, the biggest, you know, the powerful platforms, have the resources — 888 01:33:20,770 --> 01:33:28,030 and what they're trying to do right now is explicitly send task forces to go and collect data on specific intersectional groups, 889 01:33:28,030 --> 01:33:31,240 you know, all around the world. 890 01:33:31,240 --> 01:33:38,770 They have the money and resources to do this, and they're targeting it to try to improve the overall performance of their A.I. systems. 891 01:33:38,770 --> 01:33:41,740 Smaller companies don't have that kind of resource, 892 01:33:41,740 --> 01:33:46,900 and they therefore rely on sort of public datasets which have this inherent, you know, large scale bias. 893 01:33:46,900 --> 01:33:53,830 And the question is, do commercial companies like Google and the other large platforms have a moral imperative to share this sort 894 01:33:53,830 --> 01:33:59,620 of data that will allow systems to work more universally with the companies that they're competing with, 895 01:33:59,620 --> 01:34:05,880 these upstarts and such?
So it's very much an economic problem as well. 896 01:34:05,880 --> 01:34:10,270 And you also mentioned the issue of companies' obligations. 897 01:34:10,270 --> 01:34:18,040 If companies see their obligation as being to maximise shareholder value, that pushes them in one direction. 898 01:34:18,040 --> 01:34:29,920 But if countries actually get wise to the fact that they need their companies to adopt broader aims, with more stakeholders in mind, 899 01:34:29,920 --> 01:34:33,980 that could help. Helena or Milo? 900 01:34:33,980 --> 01:34:40,000 Do you want to say something, Helena? Yes. So this question about how do you persuade others? 901 01:34:40,000 --> 01:34:44,770 I've been thinking about it, and it would be a fantastic task, actually, for our students to do, to kind of role-play it out. 902 01:34:44,770 --> 01:34:53,330 Imagine you've started working in Company X and you're aware of all of these potential injustices: how do you persuade your new colleagues of it? 903 01:34:53,330 --> 01:34:56,860 And I think it would be a wonderful practical challenge that we could set our 904 01:34:56,860 --> 01:35:01,960 students, and maybe one that they would use when they go into the workforce. 905 01:35:01,960 --> 01:35:06,340 And then the questions about structural inequality. You know, 906 01:35:06,340 --> 01:35:12,670 I think that is so crucial, and there's a very large awareness within ethics at the moment about how these 907 01:35:12,670 --> 01:35:18,730 inequalities get reproduced through these systems, and problems of lack of diversity within datasets and so on. 908 01:35:18,730 --> 01:35:26,200 And I think if the industry isn't doing enough, 909 01:35:26,200 --> 01:35:32,800 it's because it doesn't need to at the moment. So, you know, it can still make profits even while this is in place. 910 01:35:32,800 --> 01:35:40,210 So in our economic environment, there has to be pushback for them to make the change. 911 01:35:40,210 --> 01:35:44,320 And I think we have seen some successes in recent times, you know, 912 01:35:44,320 --> 01:35:51,480 social media platforms being held to account and making changes based on sort of public disapproval of advertising and so on. 913 01:35:51,480 --> 01:35:59,710 But I think, you know, ultimately we have to find ways to make diversity profitable and make lack of diversity lose these companies money, 914 01:35:59,710 --> 01:36:06,400 and that's what will change it. So it has to be sort of, you know, pressure coming, I think, from wider society, right? 915 01:36:06,400 --> 01:36:14,930 Yes. Milo? Yes, so on the first point, of sort of communication and negotiation: 916 01:36:14,930 --> 01:36:23,180 I love that. And I think that that is sort of like one of the skills that you need to have if we're going to effect change. 917 01:36:23,180 --> 01:36:26,660 So let's say you're really good at identifying ethical issues. 918 01:36:26,660 --> 01:36:31,400 You're really motivated to do it, but you can't convince any of your co-workers to do it. 919 01:36:31,400 --> 01:36:40,120 Then, you know, you might have a little impact, but unless you're running the company, it might not be much. 920 01:36:40,120 --> 01:36:47,600 Yeah, I think, again, this is a matter of practice, the kind of thing that Helena talked about, and the kind of thing that we've done some of.
921 01:36:47,600 --> 01:36:54,560 So we'll have a problem set where students will have, like, a decision that they've identified, 922 01:36:54,560 --> 01:36:59,700 and they'll have identified certain stakeholder groups that are adversely affected. 923 01:36:59,700 --> 01:37:06,500 And we'll have them, like, write a Facebook post addressed to that stakeholder group saying, here's the decision and why. 924 01:37:06,500 --> 01:37:13,430 And then we'll ask them to imagine being the stakeholder, reply to the Facebook post, and do, like, a little dialogue back and forth. 925 01:37:13,430 --> 01:37:22,400 Exactly the kind of thing that Helena was talking about. So I think, one, students need practice, and two, students need sort of tools or vocabulary. 926 01:37:22,400 --> 01:37:26,300 So in some cases, I agree, it can be the language of moral philosophy. 927 01:37:26,300 --> 01:37:31,660 It has a weight behind it. It has a credibility. This isn't just my opinion. 928 01:37:31,660 --> 01:37:40,310 There's a there there. And then I think the better you can sort of articulate what's going on, the better. 929 01:37:40,310 --> 01:37:44,810 The bigger repertoire you have of cases, the more you know about these things, 930 01:37:44,810 --> 01:37:49,310 the more articulate you are about saying, here's why this is the right choice. 931 01:37:49,310 --> 01:37:54,170 Because I've thought through the space of possible options, here are the stakeholders. 932 01:37:54,170 --> 01:38:01,730 Here's how doing this might affect them in this way, or, you know, here's how doing that might affect them in that way. 933 01:38:01,730 --> 01:38:09,110 Sort of just having the skill to do that puts you in a better position to advocate for the choice. 934 01:38:09,110 --> 01:38:09,680 So I'll leave it there, 935 01:38:09,680 --> 01:38:16,730 as I'm running a little long, and I just agree with everything that both Max and Helena said about the diversity and inclusion elements. 936 01:38:16,730 --> 01:38:21,930 Thank you. Well, we're going to have to wrap it up, but I've got one question, a nice one to finish on. 937 01:38:21,930 --> 01:38:28,520 I think, if you can, each have a stab at coming up with one example. 938 01:38:28,520 --> 01:38:37,340 We've had examples of problematic algorithms, but do the speakers have any examples or illustrations of design that is doing ethics well? 939 01:38:37,340 --> 01:38:45,080 Well, so let's end on a positive note. Who's going to volunteer to go first? 940 01:38:45,080 --> 01:38:48,740 I think the answer isn't no, certainly. 941 01:38:48,740 --> 01:38:49,060 Yeah. 942 01:38:49,060 --> 01:38:57,120 Well, I can come up with an example, something at M.I.T. It's not an algorithm, but it connects to this sort of ethical hackathon thing. 943 01:38:57,120 --> 01:39:05,220 So it's a project called Make the Breast Pump Not Suck. It used these methods that Helena talked about, and that Max touched on too: 944 01:39:05,220 --> 01:39:09,240 sort of responsible design, participatory design, 945 01:39:09,240 --> 01:39:13,470 where people were getting together and saying, here's my experience with the breast pump, 946 01:39:13,470 --> 01:39:19,350 here's what I would need out of it, both from, like, a financial perspective and from a user perspective. 947 01:39:19,350 --> 01:39:25,290 And that resulted in a new prototype of the breast pump that's since been developed. 948 01:39:25,290 --> 01:39:29,610 So, I mean, there's not something inherently wrong with technology.
949 01:39:29,610 --> 01:39:30,630 There's a lot we can do. 950 01:39:30,630 --> 01:39:40,020 And there are a lot of known methods for pushing us towards outcomes that promote social good and don't just sort of act as some sort of guardrail. 951 01:39:40,020 --> 01:39:48,220 Thank you. There have been a variety of really exciting experiments and experimental efforts in 952 01:39:48,220 --> 01:39:53,690 trying to reduce polarisation on social media: 953 01:39:53,690 --> 01:39:59,050 you know, things that allow people to discuss, in a sort of long form, 954 01:39:59,050 --> 01:40:03,490 why they believe what they believe, in order to counter the sort of 955 01:40:03,490 --> 01:40:10,630 reaction-driven incentive engineering that we've been exposed to all of these years. 956 01:40:10,630 --> 01:40:14,220 And, you know, I wouldn't point to just one particular project, 957 01:40:14,220 --> 01:40:20,080 because there are a large number of projects that try to apply information visualisation, 958 01:40:20,080 --> 01:40:28,240 as well as other sorts of techniques, to ask people to actually slow down and then spend time arguing 959 01:40:28,240 --> 01:40:36,160 their point, so that they can then counter and discuss things in a more civil way. 960 01:40:36,160 --> 01:40:41,370 And I think that's really, you know, something we could do with more of, elsewhere and in America. 961 01:40:41,370 --> 01:40:46,480 Yeah. Helena? Yeah, I do agree. 962 01:40:46,480 --> 01:40:54,070 There are various examples. And I think actually the point that Milo was making about the value of participatory 963 01:40:54,070 --> 01:40:59,500 design processes, where you bring in users and you try to understand their perspectives and, you know, 964 01:40:59,500 --> 01:41:07,960 their experiences right from the early stages of design, can have a really beneficial effect in producing products or systems that promote social good. 965 01:41:07,960 --> 01:41:17,620 The example I'm going to give is someone I was talking to just last week, who works in robotics and was producing and developing assistive care 966 01:41:17,620 --> 01:41:24,310 robots that could interact with older people in care homes and provide sort of social support and other forms of support. 967 01:41:24,310 --> 01:41:30,160 But in this pandemic scenario they have diverted attention towards a very simple cleaning robot that can go 968 01:41:30,160 --> 01:41:37,990 into hospitals and provide cleaning services to assist with infection control in hospitals, 969 01:41:37,990 --> 01:41:43,960 checking the pandemic. I think that's a very good example. And the robot functions without collecting any personal data or anything like that. 970 01:41:43,960 --> 01:41:47,500 So that's my example. Excellent. Oh, that's a nice note to finish on. 971 01:41:47,500 --> 01:41:50,590 Thank you. Thank you very much indeed to all of you. 972 01:41:50,590 --> 01:41:55,660 That's been a fascinating discussion and it's raised a lot of important issues. 973 01:41:55,660 --> 01:42:01,030 The session's being recorded; it's going to be added to what's becoming a very rich resource, 974 01:42:01,030 --> 01:42:06,220 covering lots of different areas of ethics that we're building up at Oxford.
975 01:42:06,220 --> 01:42:13,750 If you want to look at those resources, by the way, go to the Faculty of Philosophy website and the ethics section, 976 01:42:13,750 --> 01:42:23,020 that's philosophy dot ox dot ac dot uk slash A.I. ethics, and you'll find it there. 977 01:42:23,020 --> 01:42:31,930 There are also links to it from philocomp dot net, that's p h i l o c o m p dot net. 978 01:42:31,930 --> 01:42:40,000 And if you go to the ethics section there, you'll find links. We're trying to build these up so that over time the seminars that we've given 979 01:42:40,000 --> 01:42:47,290 will become a really valuable resource for people both in academia and outside. 980 01:42:47,290 --> 01:42:52,210 And I hope that what we've produced today will feed into a much broader discussion. 981 01:42:52,210 --> 01:42:56,560 And it will no doubt develop hugely over coming years as we all learn more 982 01:42:56,560 --> 01:43:02,050 about the possibilities and problems of applying ethics in A.I. developments 983 01:43:02,050 --> 01:43:08,470 and, in particular, as we've seen today, of teaching and promoting such applications. 984 01:43:08,470 --> 01:43:13,030 So thank you again to the speakers. Thank you, Milo. Thank you, Helena. 985 01:43:13,030 --> 01:43:20,530 Thank you, Max. That's been a tremendous discussion. Thank you for tuning in, and especially to those who asked questions. 986 01:43:20,530 --> 01:43:26,260 And it's been great in particular to see how some of those questions have, even here and now in this session, 987 01:43:26,260 --> 01:43:34,370 inspired some fruitful ideas for further development of courses in ethics. 988 01:43:34,370 --> 01:43:40,010 Look out for our next seminar, that's on Thursday, the twenty-sixth of November. 989 01:43:40,010 --> 01:43:46,880 Same time, same day of the week, five o'clock, and that will be on A.I. and autonomy. 990 01:43:46,880 --> 01:44:35,056 So until then, thank you very much. I hope you've enjoyed this as much as I have.