1 00:00:34,640 --> 00:00:41,450 Welcome to a discussion on responsible research and publication in artificial intelligence. 2 00:00:41,450 --> 00:00:50,060 This is part of an ongoing series of seminars associated with the new Institute for Ethics in AI at the University of Oxford. 3 00:00:50,060 --> 00:00:56,400 A series which began over a year ago in that wonderful world before Covid. 4 00:00:56,400 --> 00:01:02,430 If you're interested in finding links to other related events both past and forthcoming, 5 00:01:02,430 --> 00:01:14,450 together with recordings and podcasts categorised by topic, then go to the Oxford Philosophy Faculty homepage and click on the Ethics in AI link. 6 00:01:14,450 --> 00:01:20,330 I'm Peter Millican, Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, 7 00:01:20,330 --> 00:01:26,000 and Head of Education and Outreach of the new institute. 8 00:01:26,000 --> 00:01:34,990 AI and related technologies are having an increasing impact on the lives of individuals as well as society as a whole. 9 00:01:34,990 --> 00:01:38,470 Alongside many current and potential future benefits, 10 00:01:38,470 --> 00:01:48,160 there's been an expanding catalogue of actual and potential harms arising from deployed systems, raising questions about fairness and equality, 11 00:01:48,160 --> 00:01:54,860 privacy, exploitation of workers, environmental impact and much more. 12 00:01:54,860 --> 00:01:55,790 In addition, 13 00:01:55,790 --> 00:02:06,080 there have been increasing numbers of research publications that have caused an outcry over ethical concerns and possible negative impacts on society. 14 00:02:06,080 --> 00:02:13,460 In response, many are now asking whether the technical AI research community itself needs to do more 15 00:02:13,460 --> 00:02:20,920 to ensure ethical research conduct and to ensure beneficial outcomes from deployed systems. 16 00:02:20,920 --> 00:02:25,900 But how should individual researchers and the research community more broadly 17 00:02:25,900 --> 00:02:32,520 respond to the existing and potential impacts from research and technology? 18 00:02:32,520 --> 00:02:38,180 How should we balance academic freedom against the impact of research on society? 19 00:02:38,180 --> 00:02:44,540 And where should we draw the line between openness and caution in publication? 20 00:02:44,540 --> 00:02:52,300 One key question here is whether technical researchers are themselves well placed to grapple with such issues. 21 00:02:52,300 --> 00:02:55,220 And if not, who else needs to be involved? 22 00:02:55,220 --> 00:03:06,210 What can we learn from other fields to help us navigate forward in this new area that promises to have such high stakes for our collective future? 23 00:03:06,210 --> 00:03:14,370 I'm delighted to be joined by three researchers who will be discussing these issues, drawing on examples such as conference impact statements, 24 00:03:14,370 --> 00:03:21,350 release strategies for large language models and responsible research and innovation in practice. 25 00:03:21,350 --> 00:03:30,900 In order of speaking, they are Rosie Campbell, who leads the Safety-Critical AI programme at the Partnership on AI; 26 00:03:30,900 --> 00:03:32,580 Carolyn Ashurst, 27 00:03:32,580 --> 00:03:41,330 who is a senior research scholar at the Future of Humanity Institute in Oxford and research affiliate with the Centre for the Governance of AI; 28 00:03:41,330 --> 00:03:47,320 and Helena Webb, who is a senior researcher in the Department of Computer Science at Oxford.
29 00:03:47,320 --> 00:03:53,100 Welcome to all of you and thank you very much for joining me. 30 00:03:53,100 --> 00:03:59,990 Each of our speakers will give a short talk, and the event as a whole will last for around an hour and a quarter. 31 00:03:59,990 --> 00:04:06,370 So we'll have plenty of time for discussion, and you're very welcome to offer your own questions to the speakers. 32 00:04:06,370 --> 00:04:11,930 Feel free to do this at any time by typing them into the comments box on YouTube. 33 00:04:11,930 --> 00:04:16,640 I'll be noting these as we go and proposing them to the speakers at various points, 34 00:04:16,640 --> 00:04:24,230 so the sooner you get your questions in, the more opportunities there will be for having them addressed. 35 00:04:24,230 --> 00:04:35,020 First up of our speakers is Rosie Campbell. As I mentioned, Rosie leads the Safety-Critical AI programme at the Partnership on AI. 36 00:04:35,020 --> 00:04:41,890 She's currently focussed on responsible publication and deployment practices for increasingly advanced AI, 37 00:04:41,890 --> 00:04:49,560 and recently co-organised a major international workshop on navigating the broader impacts of AI research. 38 00:04:49,560 --> 00:04:54,900 Previously, Rosie was assistant director of the Center for Human-Compatible AI, 39 00:04:54,900 --> 00:05:02,100 a technical AI safety research group at Berkeley working towards provably beneficial AI. 40 00:05:02,100 --> 00:05:11,130 Before that, Rosie worked as a research engineer at BBC R&D, a multidisciplinary research lab based in the UK. 41 00:05:11,130 --> 00:05:15,210 There she worked on emerging technologies for media and broadcasting, 42 00:05:15,210 --> 00:05:21,550 including an award-winning project exploring the use of AI in media production. 43 00:05:21,550 --> 00:05:28,860 Rosie holds degrees in computer science and physics and also has academic experience in philosophy and machine learning. 44 00:05:28,860 --> 00:05:34,560 She co-founded a futurist community group in the UK to explore the social implications of 45 00:05:34,560 --> 00:05:42,980 emerging tech and was recently named one of 100 brilliant women to follow in AI ethics. 46 00:05:42,980 --> 00:05:48,950 Welcome, Rosie, and over to you. Thank you so much, Peter. 47 00:05:48,950 --> 00:05:58,580 I hope you can all see my screen OK. It's great to be here. I'm going to be talking today about considerations for responsible publication norms. 48 00:05:58,580 --> 00:06:04,100 As Peter mentioned, I lead the Partnership on AI's Safety-Critical AI programme, 49 00:06:04,100 --> 00:06:08,600 and this work is one stream that we're doing under that. 50 00:06:08,600 --> 00:06:15,320 In case you're not aware of the Partnership, it's a multistakeholder non-profit made up of a variety of institutions, 51 00:06:15,320 --> 00:06:20,990 including non-profit organisations, industry companies and academic institutions. 52 00:06:20,990 --> 00:06:26,900 And so a lot of the work we do involves convening and talking with partners from all of these different 53 00:06:26,900 --> 00:06:34,460 areas and trying to understand what we need to do as a community to move towards more responsible AI. 54 00:06:34,460 --> 00:06:40,740 The goal of this talk is to lay out some of the key questions and considerations we've been exploring as part of this work, 55 00:06:40,740 --> 00:06:46,760 and I'm also going to suggest some tentative lessons that we might be able to learn from other high stakes fields.
56 00:06:46,760 --> 00:06:52,010 And I'd also like to try to set the scene for further discussion on this, including possible interventions, 57 00:06:52,010 --> 00:06:58,350 which I'm sure Carolyn and Helena will also be able to cover in their talks. 58 00:06:58,350 --> 00:07:05,430 So the initial premise is that AI research can be both high stakes and dual use, and by high stakes 59 00:07:05,430 --> 00:07:10,290 I mean that the outcomes can either be really, really great, 60 00:07:10,290 --> 00:07:16,650 but they could also be potentially very bad if we get things wrong or act in an irresponsible way. 61 00:07:16,650 --> 00:07:21,660 And it's dual use in the sense that AI can be applied to many socially beneficial purposes, 62 00:07:21,660 --> 00:07:31,130 but it could also be maliciously used or otherwise applied in ways that are more harmful to society. 63 00:07:31,130 --> 00:07:38,120 And unfortunately, often a default mode of tech companies, at least in the past, has been to move fast and break things, 64 00:07:38,120 --> 00:07:44,460 and that's not always the best way to deal with technology that is high stakes and also dual use. 65 00:07:44,460 --> 00:07:52,900 So my focus has been thinking about the question of how we can conduct our research responsibly to maximise the benefits and minimise the risks, 66 00:07:52,900 --> 00:07:59,890 and in particular what research practices and publication norms we need to do that effectively. 67 00:07:59,890 --> 00:08:07,270 And I want to start by talking about a particular example of this that happened in early twenty nineteen. 68 00:08:07,270 --> 00:08:16,360 So OpenAI, which is a company that is developing advanced AI models, created a language model called GPT-2. 69 00:08:16,360 --> 00:08:22,500 And the way that this works is you give it a sentence and it will write an essay in the style of that sentence. 70 00:08:22,500 --> 00:08:27,640 If you give it something that sounds like the first line of the president's speech, it will write an essay that sounds like the president's speech. 71 00:08:27,640 --> 00:08:30,430 If you give it something that sounds like a five year old's homework assignment, 72 00:08:30,430 --> 00:08:33,550 it will write something that sounds like a five year old's homework assignment, 73 00:08:33,550 --> 00:08:40,390 and it's pretty convincing. At least with the initial model, you could generally tell that it wasn't written by a human, but it was not bad. 74 00:08:40,390 --> 00:08:42,460 It was pretty convincing. 75 00:08:42,460 --> 00:08:49,540 And basically when they developed this, they were worried that this technology could be used for all sorts of problematic purposes. 76 00:08:49,540 --> 00:08:57,130 So as some basic examples, potentially it could be used by spammers, it could be used for very targeted phishing, 77 00:08:57,130 --> 00:09:04,060 it could be used to generate fake news. It might obviously help people cheat in their coursework or homework assignments. 78 00:09:04,060 --> 00:09:14,550 So OpenAI were a bit worried about releasing this technology out into the world so that it could be used in all these different ways. 79 00:09:14,550 --> 00:09:24,040 So what they did was to experiment with a slightly unusual release model, which was to release a smaller version of the technology first. 80 00:09:24,040 --> 00:09:31,260 So rather than releasing the fully trained, very large language model, they released a smaller version of it that was less capable.
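[A minimal illustrative sketch, not part of the talk: the prompt-in, continuation-out behaviour Rosie describes, using the small publicly released GPT-2 checkpoint. The Hugging Face transformers library and the "gpt2" model name are assumptions of this example, not details given by the speaker.]

```python
# Illustrative sketch only: sampling a continuation from the small GPT-2 model.
# Assumes the Hugging Face "transformers" library and the public "gpt2" checkpoint
# (the ~124M-parameter "small" model, of the kind that was released first).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sample reproducible

prompt = "In a speech to the nation today, the President announced"
result = generator(prompt, max_length=80, num_return_sequences=1, do_sample=True)
print(result[0]["generated_text"])  # an essay-like continuation in the style of the prompt
```

With the small checkpoint the continuation is usually readable but noticeably machine-like, which is the capability gap the staged-release plan described next relied on.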
81 00:09:31,260 --> 00:09:36,090 And so what that meant is that you could feed it a sentence and it would give you something back, 82 00:09:36,090 --> 00:09:41,970 but it would not be overwhelmingly convincing that it was written by a human. 83 00:09:41,970 --> 00:09:49,230 And the plan was to release things in increasingly large steps, so that once they could see how people were 84 00:09:49,230 --> 00:09:55,860 actually using the model and make any mitigations that needed to be put in place to combat malicious use, 85 00:09:55,860 --> 00:10:00,250 they could increasingly release the more capable versions. 86 00:10:00,250 --> 00:10:06,110 And this caused quite a few different reactions in the community; some people were very impressed with this. 87 00:10:06,110 --> 00:10:12,940 So a tweet here says: going so far as to think ahead to malicious uses and check in with stakeholders sets a new bar for ethics, 88 00:10:12,940 --> 00:10:22,940 and I applaud OpenAI. So some people thought this was a necessary step towards thinking about releasing research in a responsible way. 89 00:10:22,940 --> 00:10:29,030 Whereas others felt very differently. An example here says: what you are doing is the opposite of open. 90 00:10:29,030 --> 00:10:34,790 It is unfortunate that you hype up and propagate fear and thwart reproducibility and scientific endeavour. 91 00:10:34,790 --> 00:10:42,830 And so this represents what became a pretty intense conversation in the community, which has been going on for the last couple of years, 92 00:10:42,830 --> 00:10:47,750 and I think raises some really interesting questions and important tensions that we are facing as a 93 00:10:47,750 --> 00:10:55,410 community as we realise the impact our technology is having and could continue to have on the world. 94 00:10:55,410 --> 00:11:00,840 And I want to cover a few particular questions that have arisen from this conversation. 95 00:11:00,840 --> 00:11:09,030 One is, how do we balance openness with caution? On one hand, it's very important for science to be as open as possible. 96 00:11:09,030 --> 00:11:18,990 A lot of the advances we all enjoy in our lives today are due to the scientific norms of open enquiry and all the benefits that that has brought. 97 00:11:18,990 --> 00:11:23,820 It's also very important for reproducibility to release research as openly as possible, 98 00:11:23,820 --> 00:11:30,790 so that other scientists can reproduce the work and confirm the findings and build on them. 99 00:11:30,790 --> 00:11:31,510 And then, of course, 100 00:11:31,510 --> 00:11:39,970 there's the opportunity cost of potentially not releasing research, in the sense of missing out on beneficial applications of that technology, 101 00:11:39,970 --> 00:11:43,540 which is also a very real cost that we need to take into consideration. 102 00:11:43,540 --> 00:11:50,620 However, on the other side of things, we may want to take a more cautious approach for a few reasons. 103 00:11:50,620 --> 00:11:58,360 One example I've heard to illustrate this is: if you knew how to make a nuclear bomb from kitchen supplies, 104 00:11:58,360 --> 00:12:02,980 most people would probably agree it's not a good idea to post that information on the Internet where anyone 105 00:12:02,980 --> 00:12:09,790 can see it, just because someone is going to do that and it's going to result in devastating consequences.
106 00:12:09,790 --> 00:12:15,160 So most people agree that there is a line to draw somewhere, but there's a lot of discussion about where we draw that line. 107 00:12:15,160 --> 00:12:22,480 And an example that occurred in the life sciences community a few years ago was when some researchers discovered 108 00:12:22,480 --> 00:12:29,290 how to synthesise the horsepox virus, and they were going to publish the steps that they took to do that. 109 00:12:29,290 --> 00:12:37,240 But this resulted in a big backlash, because people were worried that it could enable people to essentially bring back the smallpox disease, 110 00:12:37,240 --> 00:12:39,850 which we worked so hard as a species to eliminate. 111 00:12:39,850 --> 00:12:46,130 So there was a big conversation in the life sciences community about whether that kind of information should be distributed or not. 112 00:12:46,130 --> 00:12:50,970 And I think we're now starting to see similar conversations take place in AI. 113 00:12:50,970 --> 00:12:56,340 Another question is, whose responsibility is it to anticipate and mitigate these risks? 114 00:12:56,340 --> 00:12:59,520 On one hand, researchers are experts in their own work, 115 00:12:59,520 --> 00:13:06,450 but they may not be able to anticipate second and third order consequences that rely on knowledge about economics, 116 00:13:06,450 --> 00:13:13,830 politics, history, all sorts of things. Maybe this is the job of research teams. 117 00:13:13,830 --> 00:13:19,080 Maybe it should be part of the peer review process. Is there a role for government to play, or society at large? 118 00:13:19,080 --> 00:13:22,620 And there's this trade-off between individual versus collective responsibility. 119 00:13:22,620 --> 00:13:25,920 To what extent should these be things individuals are supposed to be worried about, 120 00:13:25,920 --> 00:13:30,840 versus infrastructure that we put in place to support the community at large? 121 00:13:30,840 --> 00:13:36,740 And I think Carolyn is going to talk a little bit more about this question in her talk. 122 00:13:36,740 --> 00:13:43,730 And then finally, another question I've been asking myself is how do we effectively equip ourselves to navigate responsible publication? 123 00:13:43,730 --> 00:13:48,290 What resources do we need as a community and what infrastructure needs to exist? 124 00:13:48,290 --> 00:13:58,310 Potentially, we need to create frameworks for thinking about the risks of our work; taxonomies to categorise different types of research, 125 00:13:58,310 --> 00:14:02,600 to help us understand how risky it might be; different services 126 00:14:02,600 --> 00:14:08,540 to help researchers anticipate the potential impacts of their work; guidelines; institutions. 127 00:14:08,540 --> 00:14:15,400 All of these are things that we're exploring with our partner community that we might need to develop. 128 00:14:15,400 --> 00:14:19,450 OK, now I'm just going to touch on a few things I've learnt throughout this work, 129 00:14:19,450 --> 00:14:27,550 and the first is to try and disambiguate the different terms that often get used in this conversation. 130 00:14:27,550 --> 00:14:32,410 So, firstly, people talk about research integrity, 131 00:14:32,410 --> 00:14:41,520 which really refers to the kinds of responsibility a scientist or researcher has to make sure that their findings are
132 00:14:41,520 --> 00:14:46,680 legitimate and as close to truth as possible. So things like avoiding p-hacking, 133 00:14:46,680 --> 00:14:51,030 not falsifying data, those sorts of things. Then we have research ethics, 134 00:14:51,030 --> 00:14:55,140 which, at least traditionally, 135 00:14:55,140 --> 00:15:01,260 has been primarily focussed on the welfare of human participants and things like how you have gathered your data: 136 00:15:01,260 --> 00:15:06,580 have you done that in a responsible way? And then we have the issue of downstream consequences. 137 00:15:06,580 --> 00:15:12,130 So this has been more of what I've been focussed on, which is once research has been conducted and it's been released into the world, 138 00:15:12,130 --> 00:15:20,950 what are the downstream effects of that research, whether that's something like technological unemployment or deepfakes and misinformation? 139 00:15:20,950 --> 00:15:29,560 What are the effects of that on society? And then there's another term that has been used more frequently, which is the idea of broader impact, 140 00:15:29,560 --> 00:15:35,460 so not just the potential negative consequences, but the impacts at large from the work. 141 00:15:35,460 --> 00:15:40,800 And I just wanted to disambiguate these because I think we sometimes use the term broader impacts, 142 00:15:40,800 --> 00:15:46,950 for example, to cover both the downstream consequences and the research ethics aspects of it. 143 00:15:46,950 --> 00:15:51,990 And I think it can be useful to try and untangle these a little bit, to know what we're focussing on. 144 00:15:51,990 --> 00:16:00,730 And as I said, my work is primarily focussed on downstream consequences, but there are certainly ways that the different areas here overlap. 145 00:16:00,730 --> 00:16:11,400 So, for example, the environmental effects of how much computation you need to train a model might be part of research ethics, 146 00:16:11,400 --> 00:16:18,750 but also have relevance to the downstream consequences if that model is then going to be used and deployed at large. 147 00:16:18,750 --> 00:16:25,490 If a lot of people are going to be using it, that environmental effect is magnified. 148 00:16:25,490 --> 00:16:34,070 Another consideration is the fact that a lot of research happens beyond academia and happens within industry, and not only that, 149 00:16:34,070 --> 00:16:41,420 but a lot of AI research goes directly to places like arXiv and blog posts and bypasses the peer review system completely. 150 00:16:41,420 --> 00:16:49,610 So we're dealing with a slightly unusual field here, in that we can't just think only about the academic research process. 151 00:16:49,610 --> 00:16:56,810 We have to think more broadly about the community at large. And in addition, the line between research and product is quite blurry. 152 00:16:56,810 --> 00:17:01,010 If you think of something like GPT-2, which I mentioned earlier in the talk, 153 00:17:01,010 --> 00:17:06,650 that was positioned as a research endeavour, but it is also now being turned into products. 154 00:17:06,650 --> 00:17:11,210 And so I think we're seeing in AI a much shorter turnaround between the 155 00:17:11,210 --> 00:17:18,890 research that's done and the products that are deployed based on that research. So it's quite difficult to untangle those two things.
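[A back-of-the-envelope sketch, not from the talk, of the environmental point made above about training compute and deployment at scale. Every number below (GPU count, power draw, grid carbon intensity, query volume) is an illustrative assumption, chosen only to show why heavy use of a deployed model can magnify the footprint of a one-off training run.]

```python
# Back-of-the-envelope sketch with made-up but plausible figures (all assumptions):
# comparing the energy of a single training run with the cumulative energy of serving it.
gpus = 64                    # assumed number of accelerators used for training
gpu_power_kw = 0.3           # assumed average draw per accelerator, in kW
train_hours = 24 * 14        # assumed two-week training run
pue = 1.5                    # assumed data-centre power usage effectiveness
grid_kg_co2_per_kwh = 0.4    # assumed grid carbon intensity

train_kwh = gpus * gpu_power_kw * train_hours * pue
print(f"Training: ~{train_kwh:,.0f} kWh, ~{train_kwh * grid_kg_co2_per_kwh:,.0f} kg CO2")

# If the model is then deployed and queried heavily, inference energy accumulates:
queries_per_day = 10_000_000     # assumed traffic
kwh_per_query = 0.0003           # assumed inference energy per query
serve_kwh_per_year = queries_per_day * 365 * kwh_per_query * pue
print(f"Serving for a year: ~{serve_kwh_per_year:,.0f} kWh, "
      f"~{serve_kwh_per_year * grid_kg_co2_per_kwh:,.0f} kg CO2")
```

Under these assumed numbers the one-off training run comes to roughly ten thousand kWh, while a year of heavy serving runs to well over a million kWh, which is the sense in which deployment at large magnifies the environmental effect.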
156 00:17:18,890 --> 00:17:25,430 There's a wide spectrum of views in the community. So as I mentioned, there's this tension between openness and caution, 157 00:17:25,430 --> 00:17:31,790 and some people very much see themselves as on the open side of things: very pro openness, no matter what the costs. 158 00:17:31,790 --> 00:17:36,860 And other people see themselves as very much on the caution side of things: trying to really be sure that 159 00:17:36,860 --> 00:17:42,420 you've mitigated any risks and that you can be pretty confident that your research is safe before it's deployed. 160 00:17:42,420 --> 00:17:48,660 And you can imagine most people fall somewhere along that spectrum. 161 00:17:48,660 --> 00:17:54,510 It's currently not that common for researchers to discuss the risks of their work, and there are a couple of reasons for this. 162 00:17:54,510 --> 00:17:59,790 Some want to do that, and they think it is within their remit, but they just don't feel equipped: 163 00:17:59,790 --> 00:18:05,430 they don't understand how to anticipate the second and third order effects. 164 00:18:05,430 --> 00:18:12,990 But some researchers see their role as pushing the boundaries of knowledge and science and feel like 165 00:18:12,990 --> 00:18:17,770 it's not for them to consider the impacts of that work, and it's outside of their responsibility. 166 00:18:17,770 --> 00:18:25,840 So we have different views in the community on that. I've also noticed a few different clusters of values in the community. 167 00:18:25,840 --> 00:18:30,790 We have people who are primarily focussed on the science itself: 168 00:18:30,790 --> 00:18:36,490 those people think in terms of pushing the boundaries of knowledge and not worrying too much about the impacts on society. 169 00:18:36,490 --> 00:18:42,940 We then have people who are primarily focussed on social justice issues and are very much interested in 170 00:18:42,940 --> 00:18:50,230 examining the impacts that we see today from AI systems that are having effects on marginalised communities. 171 00:18:50,230 --> 00:18:57,370 And then we have people who are more focussed on the long term future and the way that these technologies may continue to develop, 172 00:18:57,370 --> 00:19:02,470 so they might be worried more about things like automation or superintelligence. 173 00:19:02,470 --> 00:19:09,310 And so depending on which cluster you fall into, people tend to have very different views on this issue. 174 00:19:09,310 --> 00:19:14,680 And I have identified here a few ways that these groups tend to have things in common, 175 00:19:14,680 --> 00:19:22,760 but I do think it's helpful to understand, when you're talking to someone about this, where they're coming from. 176 00:19:22,760 --> 00:19:28,740 There are also two major target audiences for this. One is practitioners: 177 00:19:28,740 --> 00:19:33,950 these are the people who are the researchers and engineers actually building the systems and doing the research. 178 00:19:33,950 --> 00:19:38,660 And they hold a lot of market power, because researchers are in high demand, 179 00:19:38,660 --> 00:19:41,450 and so they can effect widespread culture changes. 180 00:19:41,450 --> 00:19:48,170 They can put pressure on their employers, they can change the culture within their research labs from a grassroots level.
181 00:19:48,170 --> 00:19:53,720 And then you have what I'm calling gatekeepers, who are the grant makers and publishers, 182 00:19:53,720 --> 00:20:01,280 such as journal editors and conference chairs, who hold a lot more direct power, so they can mandate a policy change, 183 00:20:01,280 --> 00:20:05,780 like requiring all authors to include a broader impact statement with their work, 184 00:20:05,780 --> 00:20:10,040 which has sweeping effects across the community in a very quick way. 185 00:20:10,040 --> 00:20:15,110 The problem is that if you focus too much on the top down policy changes, 186 00:20:15,110 --> 00:20:20,690 you can end up with a backlash from the practitioner group, who may not be on board with those changes. 187 00:20:20,690 --> 00:20:26,900 But if you focus too much on the grassroots changes, then things can happen very slowly. 188 00:20:26,900 --> 00:20:35,300 So I see it as needing to sort of work with both of these groups in tandem to try and make progress. 189 00:20:35,300 --> 00:20:40,760 And then finally, there are a few coordination challenges that we are seeing on this issue. 190 00:20:40,760 --> 00:20:42,620 So, for example, 191 00:20:42,620 --> 00:20:52,370 if a research lab wants to not fully release a certain piece of work because there are some ethical or broader impact considerations there, 192 00:20:52,370 --> 00:20:58,880 often that clashes with the policies of conferences, which often have open data, open access policies. 193 00:20:58,880 --> 00:21:02,780 And both parties here are trying to act in a responsible way, 194 00:21:02,780 --> 00:21:07,730 but because there aren't standardised norms in the field, they end up kind of clashing. 195 00:21:07,730 --> 00:21:14,870 And we also have a problem where authors who may want to delay the 196 00:21:14,870 --> 00:21:20,210 publication of their work to try and do some risk mitigation before it's released 197 00:21:20,210 --> 00:21:28,410 may miss out on getting the credit for that discovery. A lot of you, if you're in academia, are probably aware of the publish or perish mindset, 198 00:21:28,410 --> 00:21:34,070 and I think a lot of people feel pressure to publish things as soon as possible so that they can get that credit and it's good for their career. 199 00:21:34,070 --> 00:21:39,620 And so anything that talks about restricting publication or delaying publication can be a problem. 200 00:21:39,620 --> 00:21:43,340 So that's another coordination challenge we need to think about. 201 00:21:43,340 --> 00:21:50,060 And so then just to briefly touch on some things we can learn from other fields: in the life sciences, 202 00:21:50,060 --> 00:21:59,180 it's very much the norm to include in any research on, for example, new drugs, the side effects that those drugs might have. 203 00:21:59,180 --> 00:22:05,750 It would be very strange to publish results of clinical trials without talking about the potential downsides of the drug. 204 00:22:05,750 --> 00:22:11,940 And so that's an analogy to what we're trying to advocate for here in the world of AI. 205 00:22:11,940 --> 00:22:16,140 In the life sciences, there was also something called the National Science Advisory Board for Biosecurity, 206 00:22:16,140 --> 00:22:19,740 which was established in the wake of the anthrax attacks. 207 00:22:19,740 --> 00:22:30,670 And this board can advise on different publication decisions and provide some level of expertise and oversight to those sorts of decisions.
208 00:22:30,670 --> 00:22:35,100 And again, that's something that we could think about for AI. 209 00:22:35,100 --> 00:22:40,650 And then finally, in the life sciences, they have what's sometimes referred to as a culture of responsibility. 210 00:22:40,650 --> 00:22:45,930 And often this is mostly about how you handle yourself around dangerous and hazardous substances, 211 00:22:45,930 --> 00:22:51,040 but it also applies more broadly to thinking about the impacts of bio research. 212 00:22:51,040 --> 00:22:53,850 And so, again, that's something that we could think about adopting. 213 00:22:53,850 --> 00:23:04,450 And I put a link to a case study that we have published on the H5N1 virus and what we can learn from that incident. 214 00:23:04,450 --> 00:23:08,380 In cybersecurity, there is a norm of coordinated vulnerability disclosure, 215 00:23:08,380 --> 00:23:12,310 and what that means is that if someone discovers a bug in a software system, 216 00:23:12,310 --> 00:23:17,790 rather than making that bug public immediately, they will often notify the vendor of that bug. 217 00:23:17,790 --> 00:23:24,190 If you find a bug in Google Chrome, you would notify Google before making that public, so that Google then have a fixed 218 00:23:24,190 --> 00:23:30,340 amount of time to be able to fix that bug before it's made known to the public. 219 00:23:30,340 --> 00:23:38,980 And so what that does is allow a coordinated effort to fix vulnerabilities before they can be exploited by malicious actors. 220 00:23:38,980 --> 00:23:44,170 And again, that's something we might want to think about: how that might work in the world of AI. 221 00:23:44,170 --> 00:23:49,510 And cybersecurity also has some interesting approaches to accreditation, 222 00:23:49,510 --> 00:23:55,460 so trying to get around this issue of how you delay publication while also giving people credit, 223 00:23:55,460 --> 00:24:02,490 there are some interesting things we can learn there. And again, I put a link to a case study that we've written on this topic. 224 00:24:02,490 --> 00:24:06,510 And then finally, nuclear research also has some interesting analogies. 225 00:24:06,510 --> 00:24:09,520 Again, it is a dual use and a high stakes field, 226 00:24:09,520 --> 00:24:15,720 just like AI, but it also gives us an example of how having too much secrecy can actually lead to problems. 227 00:24:15,720 --> 00:24:24,750 So there are some arguments that some of the devastation in Chernobyl could have been 228 00:24:24,750 --> 00:24:31,440 avoided if there was much more of a culture of openness and learning from mistakes in the nuclear field. 229 00:24:31,440 --> 00:24:35,580 So that's something we also want to be very mindful of with AI. 230 00:24:35,580 --> 00:24:41,430 And we have a bunch of possible interventions we're exploring, which I'm not going to go into detail on now; 231 00:24:41,430 --> 00:24:46,440 I'm running out of time. And also I know Carolyn and potentially Helena are also going to cover some of these.
232 00:24:46,440 --> 00:24:53,460 But just to run through them quickly, we're thinking about things like including broader impact statements in papers, 233 00:24:53,460 --> 00:25:00,750 trying to establish institutional review processes to think ahead about the potential impacts of research before it's conducted, 234 00:25:00,750 --> 00:25:09,750 looking at ways we could consider partial or restricted publication, an expert advisory board along the lines of the NSABB that I mentioned, 235 00:25:09,750 --> 00:25:18,450 and also developing informational resources that can assist researchers who are having trouble navigating the potential impacts of their work. 236 00:25:18,450 --> 00:25:22,620 And then just to finish off, I wanted to share some upcoming work we have in the pipeline, 237 00:25:22,620 --> 00:25:28,830 which is going to be a white paper with recommendations and considerations for individual researchers, 238 00:25:28,830 --> 00:25:38,040 research institutions and teams, and also conferences and journals, for how we can navigate this issue responsibly as a community. 239 00:25:38,040 --> 00:25:44,010 And the sort of heuristic that we are using is that the more impressive the contribution of your paper, 240 00:25:44,010 --> 00:25:49,980 the greater the responsibility you have to consider the potential impacts of your work. 241 00:25:49,980 --> 00:25:55,710 And I'm happy to talk more about that in the discussion. Thank you very much. 242 00:25:55,710 --> 00:25:59,790 Thank you very much, Rosie. That was really interesting. 243 00:25:59,790 --> 00:26:04,720 I mean, a great overview of what's clearly quite a complicated landscape. 244 00:26:04,720 --> 00:26:08,700 Yeah. Can I ask you, just before we move on, 245 00:26:08,700 --> 00:26:16,540 I'd be interested to know your personal view of two different perspectives that someone might have on this. 246 00:26:16,540 --> 00:26:22,890 I mean, on the one hand, somebody might say, look, in general, sunlight is the best disinfectant: 247 00:26:22,890 --> 00:26:29,200 the way to be safe is to have things out in public where everybody can see them. 248 00:26:29,200 --> 00:26:31,060 On the other hand, somebody might say, I mean, 249 00:26:31,060 --> 00:26:39,670 you mentioned things like impact statements, where researchers are consciously thinking about the risks and seeing what they are. 250 00:26:39,670 --> 00:26:46,930 Some might say that's just a way of giving bad guys ideas. So there's a bit of a conflict there. 251 00:26:46,930 --> 00:26:55,180 Do you feel both of those temptations, or do you go one way or the other, or what do you think about it? 252 00:26:55,180 --> 00:27:03,580 Yeah, I think these are both really good points, and this illustrates why this is a complicated topic and we need to have these conversations now. 253 00:27:03,580 --> 00:27:07,810 So on the sunlight is the best disinfectant position, 254 00:27:07,810 --> 00:27:09,790 I think in general that's probably true, 255 00:27:09,790 --> 00:27:16,690 and for most research the responsible thing to do is going to be to publish it as openly as possible, disseminate it as widely as possible. 256 00:27:16,690 --> 00:27:21,190 However, I think there are going to be cases where that's not necessarily the case, 257 00:27:21,190 --> 00:27:26,530 and we need to think about what's sometimes called the offence-defence balance of scientific knowledge.
258 00:27:26,530 --> 00:27:34,570 And there's a paper on this that I would recommend people go and read, where sometimes releasing the knowledge is going to be more 259 00:27:34,570 --> 00:27:42,010 helpful to people who want to cause harm, so adversaries, than it is going to be to the people who want to help mitigate that harm. 260 00:27:42,010 --> 00:27:48,670 And that's going to be a judgement call, and we need to think about what heuristics we can use to try and anticipate in advance whether research 261 00:27:48,670 --> 00:27:54,940 is more likely to help those who would use it for good or those who are going to misuse it. 262 00:27:54,940 --> 00:27:59,320 So, yeah, overall, I think sunlight is the best disinfectant is generally true, 263 00:27:59,320 --> 00:28:03,250 but there are going to be exceptions to that that we want to be mindful of. 264 00:28:03,250 --> 00:28:09,400 And then the other question you had, around broader impact statements and whether they will just give bad guys ideas. 265 00:28:09,400 --> 00:28:13,480 Yeah. So I think this is a really interesting problem to have, 266 00:28:13,480 --> 00:28:20,800 because if we are trying to encourage researchers to write more about the potential ways that their research could be used to cause harm, 267 00:28:20,800 --> 00:28:24,760 you end up in a situation where someone who does want to misuse that research can just go and 268 00:28:24,760 --> 00:28:30,850 read all the latest papers and see a nice menu of ways that they can use that to cause trouble. 269 00:28:30,850 --> 00:28:33,620 So I think this is, again, another coordination challenge. 270 00:28:33,620 --> 00:28:39,700 Maybe there's a way where we ask researchers to write a broader impact statement, but we don't necessarily publish it. 271 00:28:39,700 --> 00:28:44,920 Or maybe there are certain things that it makes much more sense to have out in the open, 272 00:28:44,920 --> 00:28:48,730 again going back to the sunlight is a disinfectant thing, 273 00:28:48,730 --> 00:28:55,900 but some types of impacts could be better kept redacted or something like that. 274 00:28:55,900 --> 00:29:04,360 So I don't really have a strong view on that, other than to say that I think it is a valid concern and one that I want to hear 275 00:29:04,360 --> 00:29:12,070 from people in the community and think about potential creative solutions to solve. Sure, it shows how complex these problems are: 276 00:29:12,070 --> 00:29:18,100 you've got the coordination challenges, you've also got issues of whose responsibility it is to make the various judgements and so on. 277 00:29:18,100 --> 00:29:28,820 Yeah. Well, thank you very much, Rosie, it's been really interesting. Um, and now we move to Dr Carolyn Ashurst. 278 00:29:28,820 --> 00:29:35,710 Carolyn is a senior research scholar at the Future of Humanity Institute. 279 00:29:35,710 --> 00:29:44,620 Hello, Carolyn, welcome. Thank you. And research affiliate with the Centre for the Governance of AI, both of those in Oxford. 280 00:29:44,620 --> 00:29:51,040 Her research focuses on improving the societal impacts of machine learning and related technologies, 281 00:29:51,040 --> 00:29:57,220 including topics in governance, responsible machine learning and algorithmic fairness. 282 00:29:57,220 --> 00:30:07,020 Her technical fairness research focuses on using causal models to formalise incentives for fairness-related behaviours.
283 00:30:07,020 --> 00:30:10,710 On the question of responsible research and publication, 284 00:30:10,710 --> 00:30:16,920 Carolyn recently co-authored a major guide to writing impact statements and was co-organiser 285 00:30:16,920 --> 00:30:23,110 with Rosie of the recent workshop on navigating the broader impacts of AI research. 286 00:30:23,110 --> 00:30:29,020 Previously, she worked as a data and research scientist in various roles within government and finance, 287 00:30:29,020 --> 00:30:34,610 and she started off with both a master's and a doctoral degree in mathematics. 288 00:30:34,610 --> 00:30:40,230 So welcome, Carolyn, over to you. Brilliant. 289 00:30:40,230 --> 00:30:46,970 Thank you so much. And hopefully you can see my screen, do let me know if not. But brilliant. 290 00:30:46,970 --> 00:30:56,540 So thank you very much, Peter. So my talk is really a continuation of a lot of the themes that Rosie already introduced in her excellent talk. 291 00:30:56,540 --> 00:31:00,830 So I'll be talking about self governance in machine learning and using impact 292 00:31:00,830 --> 00:31:05,750 statements and ethical review as a case study for thinking about that. 293 00:31:05,750 --> 00:31:11,360 So here I'll be concentrating on the role of the technical machine learning research community, 294 00:31:11,360 --> 00:31:16,730 both individual researchers, research groups and research conferences. 295 00:31:16,730 --> 00:31:25,130 But of course, AI ethics and questions around societal impact have to be an interdisciplinary endeavour and have to involve many stakeholders: 296 00:31:25,130 --> 00:31:29,330 regulators, tech firms and impacted communities. 297 00:31:29,330 --> 00:31:35,750 So whilst I'm focussing on just one important actor, of course, this is just one piece of the puzzle. 298 00:31:35,750 --> 00:31:46,040 So there are lots of examples of the technical research community taking action in response to existing and potential harms around technology. 299 00:31:46,040 --> 00:31:47,420 So, for example, 300 00:31:47,420 --> 00:31:56,540 technical researchers such as Joy Buolamwini, shown here, have done extensive work to highlight the limitations and resulting harms from AI systems, 301 00:31:56,540 --> 00:32:02,220 for example exposing algorithmic bias in commercial systems, and work on these topics has 302 00:32:02,220 --> 00:32:08,240 spawned new research disciplines concerned with the societal impacts of AI, 303 00:32:08,240 --> 00:32:11,270 for example fairness, accountability and transparency. 304 00:32:11,270 --> 00:32:19,700 And this year's FAccT conference begins tomorrow, for example, and mainstream conferences such as NeurIPS now 305 00:32:19,700 --> 00:32:28,390 accept technical papers related to societal aspects such as AI safety, fairness and privacy. 306 00:32:28,390 --> 00:32:29,700 Some data scientists 307 00:32:29,700 --> 00:32:39,080 and machine learning researchers have also been very active in raising awareness amongst the public through books and other activities. 308 00:32:39,080 --> 00:32:46,190 And there are also many groups, workshops and teams who wish to promote the use of AI for beneficial purposes, 309 00:32:46,190 --> 00:32:54,590 often under the heading of AI for social good or particular areas such as climate change. 310 00:32:54,590 --> 00:33:01,790 But whilst there are a lot of proactive initiatives in this space like these, many are still asking whether the research community needs to do more.
311 00:33:01,790 --> 00:33:05,000 After all, the world is still full of problematic applications, 312 00:33:05,000 --> 00:33:11,150 business models and applications that bring benefits to some whilst bringing material harms to others. 313 00:33:11,150 --> 00:33:16,850 And in addition, every year we see research papers that provoke an outcry over ethical concerns, 314 00:33:16,850 --> 00:33:21,110 for example over problematic data sets or categorisations of people, 315 00:33:21,110 --> 00:33:25,910 applications that could be used against populations, such as surveillance technologies, 316 00:33:25,910 --> 00:33:33,170 weapons research, and applications that could be used for misinformation, such as fake image generation. 317 00:33:33,170 --> 00:33:39,950 So in response to this, then, the NeurIPS conference introduced new mechanisms this past year. 318 00:33:39,950 --> 00:33:47,720 Now, NeurIPS is the largest machine learning conference: in twenty nineteen, thirteen thousand people attended, and in 2020 319 00:33:47,720 --> 00:33:53,550 the conference was held online with around twenty two thousand attendees. And having a paper published at NeurIPS 320 00:33:53,550 --> 00:33:59,660 is very competitive. It's very much a top tier conference and has a lot of focus on theory, on methods, 321 00:33:59,660 --> 00:34:04,760 more so than on developing and deploying specific systems in specific contexts. 322 00:34:04,760 --> 00:34:13,850 So in 2020, NeurIPS introduced two new mechanisms, which I'll talk about in this talk. 323 00:34:13,850 --> 00:34:18,890 So as part of peer review, they introduced an ethical review process: 324 00:34:18,890 --> 00:34:29,060 the technical reviewers could flag papers for ethical concerns, and papers that received strong technical reviews but were flagged for ethical reasons 325 00:34:29,060 --> 00:34:35,480 were then assessed by a pool of ethics reviewers. So what happened? 326 00:34:35,480 --> 00:34:41,180 Around 10,000 papers were submitted to NeurIPS; about 2,000 were accepted. 327 00:34:41,180 --> 00:34:50,420 Of these, 13 were passed to the ethics reviewers, resulting in four rejections on ethical grounds, 328 00:34:50,420 --> 00:34:59,820 and seven authors were asked to make changes to their papers. So this affected just a very small number of papers in the end. 329 00:34:59,820 --> 00:35:05,970 On the other hand, the initiative that affected everybody was the broader impact statement. 330 00:35:05,970 --> 00:35:13,380 So it was required that all authors include a broader impact section at the end of their papers, 331 00:35:13,380 --> 00:35:21,270 including its ethical aspects and future societal consequences, including both positive and negative outcomes. 332 00:35:21,270 --> 00:35:28,710 And this was actually announced before they explained the changes to the review process, and the immediate reaction was very mixed. 333 00:35:28,710 --> 00:35:35,670 So how did people respond? Well, on social media, many came out in favour of this requirement, 334 00:35:35,670 --> 00:35:43,940 both from the ethics community and from the technical community, as a step in the right direction that will encourage broader dialogue. 335 00:35:43,940 --> 00:35:48,800 But many also criticised this initiative, with concerns about whether technical authors have 336 00:35:48,800 --> 00:35:54,770 the right expertise and whether this is even meaningful for a lot of the work at NeurIPS.
337 00:35:54,770 --> 00:35:59,600 But of course, social media isn't always the best place to find out what the majority really think. 338 00:35:59,600 --> 00:36:07,550 So some researchers at Element AI surveyed 50 researchers to find out about their attitudes and how they went about the requirement. 339 00:36:07,550 --> 00:36:11,090 And again, they found a really mixed response. 340 00:36:11,090 --> 00:36:17,360 So the survey authors stated that there seems to be a general feeling that assessing broader impact is important, 341 00:36:17,360 --> 00:36:22,370 but some uncertainty regarding who should do it and how. And some liked that 342 00:36:22,370 --> 00:36:31,280 it forced researchers to reflect on the impact of their work, whereas others felt that it was too broad and that they didn't feel qualified. 343 00:36:31,280 --> 00:36:35,540 One respondent said, if I liked writing fiction, I'd be writing novels, 344 00:36:35,540 --> 00:36:42,080 and another described it as one more burden that falls on the shoulders of already overworked researchers. 345 00:36:42,080 --> 00:36:50,210 So really a mixed response and lots of concerns. We also know that the vast majority of respondents spent less than two hours on their 346 00:36:50,210 --> 00:36:57,110 statement, and that many felt there was a lack of clarity, examples and guidance. 347 00:36:57,110 --> 00:37:03,380 So because the official guidance was very brief, little more than what I showed on that first slide, 348 00:37:03,380 --> 00:37:09,560 a group of us at FHI and elsewhere put together an unofficial guide for ML researchers to use. 349 00:37:09,560 --> 00:37:12,380 And as we learn more about what researchers find challenging, 350 00:37:12,380 --> 00:37:18,830 we hope others will build on this so that we can iterate towards a better state of affairs. 351 00:37:18,830 --> 00:37:23,960 So I've given a flavour of some of the divided attitudes towards this requirement, 352 00:37:23,960 --> 00:37:33,380 but what did researchers actually write? So researchers, including a group of us, are starting to analyse the statements. 353 00:37:33,380 --> 00:37:41,540 Back in the autumn, Boyarskaya et al. analysed the first preprints to be put on arXiv, 354 00:37:41,540 --> 00:37:48,290 and they did find some encouraging trends. They found that some considered a variety of stakeholders, 355 00:37:48,290 --> 00:37:52,940 that some were very clear about their uncertainty, and some even deliberated about 356 00:37:52,940 --> 00:37:59,090 the limitations of mitigation strategies, and some gave concrete examples of tasks, 357 00:37:59,090 --> 00:38:03,390 failure scenarios and situations of harm. 358 00:38:03,390 --> 00:38:10,890 But they also highlighted some trends that they found concerning, for example neglecting stakeholders, e.g. 359 00:38:10,890 --> 00:38:19,580 assuming that benefits means benefits to companies, and taking positive impacts to mean technical advances. 360 00:38:19,580 --> 00:38:27,740 They also felt there were examples of people outsourcing ethical responsibility and letting the research topic limit the scope of enquiry, 361 00:38:27,740 --> 00:38:34,160 for example fairness papers failing to acknowledge unintended negative effects of fairness work. 362 00:38:34,160 --> 00:38:42,890 And they also felt that some overemphasised the net impact and some were overconfident in their claims.
363 00:38:42,890 --> 00:38:51,440 So further analysis is still needed, but it's already clear that the quality of statements was highly variable. 364 00:38:51,440 --> 00:38:59,990 So given what we know so far, from the immediate reaction, from the surveys, from the statements themselves, 365 00:38:59,990 --> 00:39:08,630 what is the state of play regarding self governance in AI? As Rosie pointed to in her talk, and as has been shown to be the case 366 00:39:08,630 --> 00:39:10,700 for impact statements, 367 00:39:10,700 --> 00:39:19,190 there's certainly a lack of consensus on some of the underlying questions: around whose responsibility this should be, around the trade-offs, 368 00:39:19,190 --> 00:39:21,290 for example between openness and caution, 369 00:39:21,290 --> 00:39:28,190 between scientific freedom and the responsibility to do work that benefits society, and on beliefs about scientific endeavour, 370 00:39:28,190 --> 00:39:33,860 for example whether research should be considered beneficial until proven otherwise. 371 00:39:33,860 --> 00:39:36,830 And even amongst those who agree on those questions, 372 00:39:36,830 --> 00:39:47,260 there is still a lack of consensus about which concrete mechanisms to adopt and best practice in implementing them. 373 00:39:47,260 --> 00:39:49,180 So where do we go from here? 374 00:39:49,180 --> 00:40:00,100 So in our recent work, in this paper led by Carina Prunkl, we outlined some of the potential benefits of impact statements: 375 00:40:00,100 --> 00:40:05,140 increased anticipation of risks, reflection and awareness, and assisting coordination. 376 00:40:05,140 --> 00:40:12,310 But we also outlined some of their risks: that impact statements risk being low quality, 377 00:40:12,310 --> 00:40:16,690 that such initiatives can trivialise the task of ethical reflection, 378 00:40:16,690 --> 00:40:25,270 that they can provoke negative attitudes or give a false sense of security if particular risks and harms are understated. 379 00:40:25,270 --> 00:40:31,900 They can also unintentionally signal that researchers alone should be the ones to judge the ethics of their work. 380 00:40:31,900 --> 00:40:39,750 And we do risk a polarisation of the research community along political lines and along institutional lines. 381 00:40:39,750 --> 00:40:49,590 And writing impact statements like this is challenging: the potential impacts of a piece of work are very complex, 382 00:40:49,590 --> 00:40:56,310 and in this case there was a lack of explanation and transparency. 383 00:40:56,310 --> 00:41:03,360 Also, researchers are under time pressure, and perhaps most importantly, there are institutional, 384 00:41:03,360 --> 00:41:13,670 social and cognitive biases and pressures that can incentivise researchers to focus on the positive impacts and not the negatives. 385 00:41:13,670 --> 00:41:23,540 So how can we address these challenges and risks? Well, where initiatives like impact statements are used, we recommend focussing on the following. 386 00:41:23,540 --> 00:41:34,070 So firstly, some straightforward first steps: improving the transparency around the task and process and improving the guidance available, 387 00:41:34,070 --> 00:41:39,170 as well as providing links to ethical and societal expertise.
388 00:41:39,170 --> 00:41:47,180 But on the more challenging side, we also need to think very carefully about how to improve the incentives to address some of the challenges, 389 00:41:47,180 --> 00:41:53,630 for example using peer review and expert involvement to ensure that the standard is met, 390 00:41:53,630 --> 00:41:58,070 perhaps encouraging researchers to cite other impact statements to provide an incentive 391 00:41:58,070 --> 00:42:03,470 to write them well, or prestigious prizes for well written statements. And finally, 392 00:42:03,470 --> 00:42:08,210 deliberation. So in order to move towards a shared understanding and shared norms, 393 00:42:08,210 --> 00:42:14,420 we need to continue to create forums for deliberation, providing evidence where possible, 394 00:42:14,420 --> 00:42:23,240 and to continue to discuss how to address the pressures that push researchers towards understating harms, like reputational and legal costs. 395 00:42:23,240 --> 00:42:32,100 And wherever self-governance mechanisms more broadly are used, we would urge people to consider these suggestions. 396 00:42:32,100 --> 00:42:36,540 And while there are a lot of challenges and risks, I do think that if done well, 397 00:42:36,540 --> 00:42:45,460 things like impact statements could be really beneficial in raising awareness and encouraging reflection about societal impacts. 398 00:42:45,460 --> 00:42:53,010 But we do need to think carefully about how to get there. So with that, I'll end there, 399 00:42:53,010 --> 00:43:06,640 and thank you very much for listening. Thank you very much, Carolyn. Can I ask you a couple of questions about impact statements? 400 00:43:06,640 --> 00:43:12,700 I mean, it sounds like quite a promising idea, but it must be quite difficult. 401 00:43:12,700 --> 00:43:21,100 I mean, with a lot of AI research, you're dealing with stuff that's actually very theoretical, very, if you like, high level, 402 00:43:21,100 --> 00:43:25,880 I mean developing fundamental algorithmic techniques, for example, 403 00:43:25,880 --> 00:43:35,660 that could have zillions of possible applications all over the place. So is it realistic to expect impact statements there? 404 00:43:35,660 --> 00:43:41,390 Yeah, thank you. This is a really good question. Yeah, this was certainly my reaction when the announcement came out, 405 00:43:41,390 --> 00:43:45,590 particularly because NeurIPS is such a theoretical and methods based conference. 406 00:43:45,590 --> 00:43:49,070 I kind of thought, what are these researchers going to write and say? So 407 00:43:49,070 --> 00:43:53,150 the first thing to note is that shortly after the announcement, 408 00:43:53,150 --> 00:43:59,240 NeurIPS did clarify that if your research is very theoretical, you do have to include a statement, 409 00:43:59,240 --> 00:44:02,930 but you are allowed to say something along the lines of: my research is very 410 00:44:02,930 --> 00:44:09,140 theoretical and therefore a broader impact statement does not apply. 411 00:44:09,140 --> 00:44:16,700 I guess for me personally, I would still encourage theoretical researchers to use the opportunity just to think 412 00:44:16,700 --> 00:44:21,560 about their research field more broadly and the impacts that it does have, 413 00:44:21,560 --> 00:44:28,670 which their theoretical research is, after all, contributing towards.
414 00:44:28,670 --> 00:44:34,970 And in terms of your other point, you mentioned having a very large range of potential applications. 415 00:44:34,970 --> 00:44:42,920 This is really true for a lot of work, even work that's not really theoretical, because many of these models are so general purpose 416 00:44:42,920 --> 00:44:50,930 that if you train them on a new dataset they can be used for completely different tasks. So I think this is a really challenging one. 417 00:44:50,930 --> 00:44:59,750 And so I encourage researchers to think very broadly across the spectrum, because that is useful for the discussion; 418 00:44:59,750 --> 00:45:03,440 it is useful for policymakers. But ultimately, 419 00:45:03,440 --> 00:45:12,820 we do need experts from other fields to help think about how the impacts could pan out in very different situations, 420 00:45:12,820 --> 00:45:15,350 in very different applications. Yeah, I mean, 421 00:45:15,350 --> 00:45:24,650 I'm struck by the way the big developments in deep learning over the last decade or so have come in and suddenly had an impact all over the place. 422 00:45:24,650 --> 00:45:30,800 You know, suddenly it becomes possible to fake pictures and writing; 423 00:45:30,800 --> 00:45:35,450 suddenly it becomes possible to play chess and Go at a high level. 424 00:45:35,450 --> 00:45:44,360 And you wouldn't have thought of those as being similar. So the big theoretical innovations somewhere can have impacts all over the place. 425 00:45:44,360 --> 00:45:48,410 Yeah, definitely. I mean, we often refer to AI as being a general purpose technology, 426 00:45:48,410 --> 00:45:56,630 and that's why thinking about the ethics and the societal impact is so difficult, but also ultimately why it's so important. 427 00:45:56,630 --> 00:45:58,340 Yeah. Yeah. Well, thank you very much. 428 00:45:58,340 --> 00:46:05,540 You've emphasised some of the techniques that we're trying to use to keep this beast under control, but obviously controlling 429 00:46:05,540 --> 00:46:12,860 it is a very, very complicated matter. Thank you very much indeed, Carolyn. 430 00:46:12,860 --> 00:46:19,180 OK, next we're moving on to Helena, Helena Webb. 431 00:46:19,180 --> 00:46:24,500 Helena is a senior researcher in the Department of Computer Science at Oxford. 432 00:46:24,500 --> 00:46:32,970 She's an interdisciplinary researcher and specialises in projects that bridge social science and computational analysis. 433 00:46:32,970 --> 00:46:37,170 She's interested in the ways that users interact with technologies in different 434 00:46:37,170 --> 00:46:44,530 kinds of settings and how social action both shapes and is shaped by innovation. 435 00:46:44,530 --> 00:46:54,730 She works on projects that seek to identify mechanisms for the improved design, responsible development and effective regulation of technology. 436 00:46:54,730 --> 00:47:01,660 While at Oxford, she's worked on projects relating to, amongst others, harmful content on social media, 437 00:47:01,660 --> 00:47:08,330 algorithmic bias, resources and STEM education, and responsible robotics. 438 00:47:08,330 --> 00:47:16,630 Helena is the research lead at the newly formed Responsible Technology Institute within the Department of Computer Science here at Oxford. 439 00:47:16,630 --> 00:47:25,960 She also convenes student modules in the department on computers in society and on ethics and responsible innovation.
440 00:47:25,960 --> 00:47:29,560 Oh, welcome. Thank you. Thank you very much. 441 00:47:29,560 --> 00:47:39,030 I pressed a button that I shouldn't have earlier, so I just need to check that my slides are still going to show. 442 00:47:39,030 --> 00:47:43,110 So can you see my slides? Okay, excellent. 443 00:47:43,110 --> 00:47:50,130 Thank you. So thank you very much. I'm really pleased to have the opportunity to join this discussion today. 444 00:47:50,130 --> 00:47:56,010 I'm going to talk about the notion of responsibility and open up what it means in the context of AI research, 445 00:47:56,010 --> 00:47:59,550 specifically, really, AI research in academia. 446 00:47:59,550 --> 00:48:06,210 And I think what I'm going to say is really going to touch on a lot of the things that Rosie and Carolyn have already mentioned. 447 00:48:06,210 --> 00:48:12,900 In particular, what I want to do is draw on a well-known responsibility initiative that's gained quite a lot of traction in research, 448 00:48:12,900 --> 00:48:18,330 industry and policy. And I want to argue that we can use this notion of responsibility to foster ethical 449 00:48:18,330 --> 00:48:23,700 practice all the way through the processes of our research and innovation. 450 00:48:23,700 --> 00:48:27,690 Before I go on to that, I will just introduce myself a little bit further. 451 00:48:27,690 --> 00:48:30,390 So as Peter says, I'm an interdisciplinary researcher. 452 00:48:30,390 --> 00:48:37,380 I trained initially in the social sciences and I've been working now in the Department of Computer Science at Oxford for six years. 453 00:48:37,380 --> 00:48:42,120 I'm part of a team called Human Centred Computing, and we're an interdisciplinary group. 454 00:48:42,120 --> 00:48:46,080 And as the name suggests, we put humans at the heart of computing. 455 00:48:46,080 --> 00:48:54,150 So the kind of projects that we do examine the impacts that computer based innovations have on individuals, communities and societies. 456 00:48:54,150 --> 00:48:58,200 And we often seek to identify ways in which these kinds of innovations can be ethical 457 00:48:58,200 --> 00:49:02,910 and better support human flourishing, so that they can aid empowerment of users, 458 00:49:02,910 --> 00:49:09,090 that they can be trustworthy, that they can be safe, that they can address societal inequalities and so on. 459 00:49:09,090 --> 00:49:15,030 A number of the projects that I am involved in follow this initiative called Responsible 460 00:49:15,030 --> 00:49:20,100 Research and Innovation, also sometimes just referred to as responsible innovation. 461 00:49:20,100 --> 00:49:22,590 And this initiative spans across policy, 462 00:49:22,590 --> 00:49:30,060 academia and industry and emerged about 20 years ago and has gained a great deal of influence over that period, 463 00:49:30,060 --> 00:49:34,230 in particular in the U.K. and in the EU as well as elsewhere. 464 00:49:34,230 --> 00:49:40,470 And this initiative, RRI, began with an aim to identify and address uncertainties and risks associated 465 00:49:40,470 --> 00:49:44,340 with novel areas of research such as nanotechnology and geoengineering. 466 00:49:44,340 --> 00:49:47,820 And now it has moved into ICT as well.
467 00:49:47,820 --> 00:49:56,250 And really the core idea behind responsible research and innovation is to bring together actors across society, so researchers, citizens, 468 00:49:56,250 --> 00:50:04,320 policymakers, businesses, third sector organisations and so on, to work together across the whole research and innovation process. 469 00:50:04,320 --> 00:50:06,330 And the idea is that by doing this, 470 00:50:06,330 --> 00:50:14,850 we can have better outcomes of research and innovation because we are aligning these processes with societal values, needs and interests. 471 00:50:14,850 --> 00:50:19,230 So we will produce better results. For that reason, 472 00:50:19,230 --> 00:50:24,510 responsible research and innovation is often characterised as doing science with and for 473 00:50:24,510 --> 00:50:32,430 society, because it brings society together with processes of research and innovation. 474 00:50:32,430 --> 00:50:38,460 A longer definition on the slide here comes from René von Schomberg, who talks about RRI as a transparent, 475 00:50:38,460 --> 00:50:45,360 interactive process by which societal actors and innovators become mutually responsive to each other with a view to the ethical 476 00:50:45,360 --> 00:50:51,480 acceptability, sustainability and societal desirability of the innovation process and its marketable products, 477 00:50:51,480 --> 00:50:57,210 in order to allow a proper embedding of scientific and technological advances in our society. 478 00:50:57,210 --> 00:50:58,850 So what we're really talking about here 479 00:50:58,850 --> 00:51:07,110 is a broadening out of traditional research and innovation processes to include the involvement of society very upstream. 480 00:51:07,110 --> 00:51:11,730 And I'll talk a little bit in a second about some frameworks for achieving this kind of work. 481 00:51:11,730 --> 00:51:16,650 But first, I'll just mention the ways in which this idea of responsible innovation, or responsibility 482 00:51:16,650 --> 00:51:23,970 more generally, is becoming quite embedded in the ways that people are talking about AI. 483 00:51:23,970 --> 00:51:31,780 So just a couple of examples on the slide here. So responsibility is often talked about in industry. 484 00:51:31,780 --> 00:51:35,880 So one example is Google, which has its own responsible innovation team. 485 00:51:35,880 --> 00:51:44,460 Not, of course, without any problems or controversy. And the other example on the slide here relates to UK policy. 486 00:51:44,460 --> 00:51:48,720 So there is a great deal of drive to see responsible innovation as a factor in the 487 00:51:48,720 --> 00:51:53,980 pipeline of AI from research to development right the way through into industry. 488 00:51:53,980 --> 00:52:04,470 So the UK government wants to make the UK a leader in AI, and sees this being achieved by really creating a 489 00:52:04,470 --> 00:52:10,740 lot more opportunities for AI in research and then driving that through into the development process and into industry. 490 00:52:10,740 --> 00:52:15,990 And the way this is often talked about is combining this with approaches that apply 491 00:52:15,990 --> 00:52:21,960 responsible innovation as well, in order to make sure that, whilst being a leader in AI, 492 00:52:21,960 --> 00:52:29,640 the UK is also creating a situation in which AI achieves social good in various different ways.
493 00:52:29,640 --> 00:52:38,300 And this kind of interest in responsible innovation also carries over into academia and the kind of places where we might often seek 494 00:52:38,300 --> 00:52:45,920 funding for the work that we do. So, two examples on the slide here: the first one comes from the EPSRC, 495 00:52:45,920 --> 00:52:53,390 which provides funding for research in engineering and physical sciences and funds a lot of the work that's going on in AI. 496 00:52:53,390 --> 00:52:58,160 The EPSRC recently launched a very large programme on trusted autonomous systems, 497 00:52:58,160 --> 00:53:05,180 for instance. As it states on their website, the EPSRC is committed to promoting responsible innovation, 498 00:53:05,180 --> 00:53:12,320 so it expects those who apply for funding to carry out research using its money to engage 499 00:53:12,320 --> 00:53:18,020 with responsible innovation in some way and to embed these approaches into their projects. 500 00:53:18,020 --> 00:53:26,690 Also, Horizon 2020, which is the other part on the slide here, is the biggest EU research and innovation programme ever, 501 00:53:26,690 --> 00:53:33,020 with nearly 80 billion euros of funding available over a period of seven years, and responsible innovation, 502 00:53:33,020 --> 00:53:39,620 or responsible research and innovation, is very heavily embedded into the Horizon 2020 programme as well, 503 00:53:39,620 --> 00:53:44,120 in particular with the Science with and for Society objective. 504 00:53:44,120 --> 00:53:44,690 So again, 505 00:53:44,690 --> 00:53:53,930 we have this expectation that researchers who are getting money from these sources will be engaging with the ideas of responsible innovation as well. 506 00:53:53,930 --> 00:53:59,210 So I think it's worth spending a bit of time to look at the kind of frameworks that are available 507 00:53:59,210 --> 00:54:05,900 for knowing how you might carry out these kinds of responsible research and innovation processes. 508 00:54:05,900 --> 00:54:14,540 So we'll look again at these two funding bodies. So first of all, with the EPSRC, they have what's called the AREA framework. 509 00:54:14,540 --> 00:54:19,370 And this AREA framework sets out the kinds of actions that researchers can take 510 00:54:19,370 --> 00:54:23,270 to embed responsible research and innovation into the work that they're doing. 511 00:54:23,270 --> 00:54:28,610 And this framework is composed of four parts: anticipate, reflect, engage and act. 512 00:54:28,610 --> 00:54:32,870 So the first thing that you do within the AREA framework is to anticipate, and 513 00:54:32,870 --> 00:54:36,680 this is something that's already been mentioned a couple of times in the seminar: 514 00:54:36,680 --> 00:54:44,240 anticipation involves thinking about the possible outcomes and impacts of the research that's being done. 515 00:54:44,240 --> 00:54:52,100 So it's identifying the positive and negative impacts, and also the intended and unintended ones, that might arise from the work that's going on. 516 00:54:52,100 --> 00:54:52,910 And importantly, 517 00:54:52,910 --> 00:54:59,990 this isn't the same as predicting; there's no need to know for sure what the outcomes might be, because they can be very difficult to pin down. 518 00:54:59,990 --> 00:55:07,160 But it's an awareness of all the different potential outcomes so that you can then act on them later. 519 00:55:07,160 --> 00:55:12,560 The next phase is one of reflection. So it's reflecting about the process of research itself.
520 00:55:12,560 --> 00:55:18,140 What kind of assumptions are being carried into it? What kinds of uncertainties need to be addressed? 521 00:55:18,140 --> 00:55:27,600 And then you move on to engage. And this is a highly important stage and it involves opening up the research process to a very inclusive dialogue. 522 00:55:27,600 --> 00:55:34,040 So typically, engage involves bringing in the perspectives of different kinds of stakeholders in the research process. 523 00:55:34,040 --> 00:55:41,780 So this might be users, policy makers, people from different professional environments and so on, and genuinely listening, 524 00:55:41,780 --> 00:55:48,170 inviting their perspectives and listening to the kind of concerns they might have about the research process and its outcomes, 525 00:55:48,170 --> 00:55:53,450 what kind of interests they have, and the different kind of values that they have associated with it. 526 00:55:53,450 --> 00:55:57,710 And then the final stage, once you've done these three, is to act. 527 00:55:57,710 --> 00:56:03,140 So you take what you've learnt from the first three stages and then you act on it in certain ways. 528 00:56:03,140 --> 00:56:09,140 And what you're trying to do here is to positively influence the trajectory of this research process. 529 00:56:09,140 --> 00:56:11,630 So it might be making changes to the research planning, 530 00:56:11,630 --> 00:56:16,430 it might be making changes to the research team, making changes to suggested outputs and so on, 531 00:56:16,430 --> 00:56:21,170 based on what you've learnt across the three earlier phases, so that you are 532 00:56:21,170 --> 00:56:26,180 moving towards making something that is more likely to be acceptable in society, 533 00:56:26,180 --> 00:56:30,890 more likely to provide value, more likely to align with societal needs. 534 00:56:30,890 --> 00:56:33,320 So that's the EPSRC AREA framework. 535 00:56:33,320 --> 00:56:42,590 And then the EU has the 2014 Rome Declaration on Responsible Research and Innovation, which is rather broader and is based on six pillars. 536 00:56:42,590 --> 00:56:51,050 And these relate to open access, governance, ethics, science communication, public engagement and gender equality. 537 00:56:51,050 --> 00:56:53,150 So once again, we can see here that engagement, 538 00:56:53,150 --> 00:57:01,400 public engagement, is very important, and it's certainly something that we've found very useful in the projects that we do here in Oxford. 539 00:57:01,400 --> 00:57:08,360 So, as I mentioned, we often use this responsible research and innovation approach in the work that we do in our projects, 540 00:57:08,360 --> 00:57:11,780 and we find it extremely helpful to get us to think about all the different kinds of 541 00:57:11,780 --> 00:57:16,460 societal values related to different technologies and how we can understand those 542 00:57:16,460 --> 00:57:20,840 different kinds of values and genuinely listen to the concerns of our stakeholders and 543 00:57:20,840 --> 00:57:25,220 then use that in a very constructive way to influence our own research processes. 544 00:57:25,220 --> 00:57:30,170 So just to give a very brief example here, I was involved in a project called UnBias, 545 00:57:30,170 --> 00:57:37,010 which looked at the user experience of algorithm driven online platforms and also issues of algorithmic bias. 546 00:57:37,010 --> 00:57:42,050 And when we
carried out this project, we were very committed to various forms of stakeholder engagement, 547 00:57:42,050 --> 00:57:47,380 so we ran professional stakeholder workshops in which we brought together professionals with an interest in 548 00:57:47,380 --> 00:57:52,870 the kind of issues the project was exploring and encouraged them to have dialogue with us and with each other. 549 00:57:52,870 --> 00:57:59,740 So we brought together people from industry, from policy, special interest groups, education, the media and so on. 550 00:57:59,740 --> 00:58:03,820 And we talked through issues of algorithmic bias, filter bubbles, 551 00:58:03,820 --> 00:58:09,400 the ways in which fake news processes can be driven by the mechanisms of online platforms and so on, 552 00:58:09,400 --> 00:58:13,390 and discussed together possible ways to tackle these kinds of issues and to really 553 00:58:13,390 --> 00:58:18,670 understand the different kinds of issues that were in play in these kinds of phenomena. 554 00:58:18,670 --> 00:58:24,730 We also spent a lot of time on public engagement as well, so going to different kinds of events and talking to members of the public about the 555 00:58:24,730 --> 00:58:28,360 research that we were doing and engaging with them to get their responses to it, 556 00:58:28,360 --> 00:58:35,470 to see how they understood the work that we were doing and what kind of values they associated with it. 557 00:58:35,470 --> 00:58:40,510 And then finally, we engaged in co-creation processes as well for our material outputs. 558 00:58:40,510 --> 00:58:47,350 So we worked with different kinds of stakeholder to produce outputs that could be used beyond the lifespan of the study. 559 00:58:47,350 --> 00:58:52,060 So in the slide here, I've shown a picture of our UnBias awareness cards, 560 00:58:52,060 --> 00:58:58,360 and these are sets of cards that you can use to play games to help understand issues of 561 00:58:58,360 --> 00:59:03,880 algorithmic bias and their impacts and to think through what fairness in algorithms might mean. 562 00:59:03,880 --> 00:59:08,350 And these were created with groups of school children. 563 00:59:08,350 --> 00:59:18,460 So we worked with them to design activities that would be engaging and also carry over the kinds of ideas that we wanted to communicate. 564 00:59:18,460 --> 00:59:24,520 So we find responsible research and innovation highly useful as an approach to embed into our work. 565 00:59:24,520 --> 00:59:32,020 And I can understand that this might seem less immediately useful to people who engage in far more technical work, 566 00:59:32,020 --> 00:59:36,070 as the work that we do is already very societally facing and so on. 567 00:59:36,070 --> 00:59:44,950 But I do think there are lots of ways in which we can take this idea of responsibility and the overall aims of the responsible 568 00:59:44,950 --> 00:59:48,370 research and innovation approach and apply it to any form of research, 569 00:59:48,370 --> 00:59:57,590 so as to include this more technically focussed AI work. And I'll just mention a few important characteristics that we might think of 570 00:59:57,590 --> 01:00:03,820 relating to responsibility, to see how it can apply to this form of more technical research. 571 01:00:03,820 --> 01:00:10,090 So one of them is to understand that responsibility isn't just about thinking of the impacts of research. 572 01:00:10,090 --> 01:00:16,540 It's also thinking of the research process as well.
So it's understanding that when you're doing research, that's part of society, too. 573 01:00:16,540 --> 01:00:22,720 So you need to think about responsibilities within it. And this goes beyond the traditional kind of, you know, 574 01:00:22,720 --> 01:00:27,220 following the guidance set out by your ethics committee, to thinking about different issues as well. 575 01:00:27,220 --> 01:00:35,220 So thinking back to the Rome Declaration on Responsible Research and Innovation, they talk about open access to science, 576 01:00:35,220 --> 01:00:41,260 so reaching different kinds of audiences by making publications available to different audiences, 577 01:00:41,260 --> 01:00:48,040 making them understandable to different audiences. Rosie mentioned the carbon footprints of projects as well. 578 01:00:48,040 --> 01:00:55,780 And I think this is hugely important when we think about the research process, as we know that AI requires a lot of computational power. 579 01:00:55,780 --> 01:00:59,620 So it actually contributes to climate change in various ways. 580 01:00:59,620 --> 01:01:03,010 So as a research process, you can think about how you might manage that, 581 01:01:03,010 --> 01:01:07,510 how you'd manage the carbon footprint of the project that you're doing more generally. 582 01:01:07,510 --> 01:01:10,630 And all of us have learnt over the last year that we don't necessarily need to travel 583 01:01:10,630 --> 01:01:14,380 by plane thousands of miles to go to conferences, so we can all start to think about 584 01:01:14,380 --> 01:01:18,790 how we might reduce the carbon footprint of our projects in that sense as well. 585 01:01:18,790 --> 01:01:23,110 And then also the Rome declaration mentions gender and inclusivity. 586 01:01:23,110 --> 01:01:27,880 And I think this is a highly important point when we think about research processes. 587 01:01:27,880 --> 01:01:38,050 And I know that AI in industry has often come under attack for not necessarily being particularly inclusive or balanced in terms of 588 01:01:38,050 --> 01:01:47,500 gender, and for the kind of impacts that has in perhaps making some of the datasets that are used within industry practice less than diverse as well, 589 01:01:47,500 --> 01:01:51,730 and that kind of attitude, this neglect of diversity, carrying through. 590 01:01:51,730 --> 01:01:55,570 And I think the same can apply in some senses to university research as well. 591 01:01:55,570 --> 01:02:00,640 Computer science departments are not bastions of gender diversity or inclusivity. 592 01:02:00,640 --> 01:02:05,020 More generally, there's lots of work that can be done on this side of things. 593 01:02:05,020 --> 01:02:13,480 And it's something that we can think about as a responsibility of researchers and research institutions in carrying out this kind of work. 594 01:02:13,480 --> 01:02:18,100 The next two factors, I think, relate to each other very closely, so I put them together. 595 01:02:18,100 --> 01:02:22,360 So one is understanding that responsibility is not just about liability. 596 01:02:22,360 --> 01:02:26,710 So it's not just about finding who is to blame when something goes wrong. 597 01:02:26,710 --> 01:02:31,870 It's something quite different from that, because it also involves being forward looking. 598 01:02:31,870 --> 01:02:38,000 It's not just looking back after something has gone wrong and saying, oh, well, you know, let's work out who to blame.
599 01:02:38,000 --> 01:02:46,910 It's more about a kind of proactive caretaking, and this is where this idea of anticipation comes in: that you look forward and think about, 600 01:02:46,910 --> 01:02:53,180 you know, what are the potential implications of this process, the positive ones, the negative ones, intended and unintended. 601 01:02:53,180 --> 01:02:59,330 And you don't just sort of raise awareness of them in the sense of creating fear and so on. 602 01:02:59,330 --> 01:03:06,980 You anticipate so that you can then take action; you can take steps to try to mitigate those potentially negative consequences. 603 01:03:06,980 --> 01:03:09,290 So by being responsible, 604 01:03:09,290 --> 01:03:18,020 you're looking forward and you're taking care of the future in that sense, and thinking far more broadly than just simple liability. 605 01:03:18,020 --> 01:03:22,880 And then the final point to make is that responsibility is also shared. 606 01:03:22,880 --> 01:03:29,630 It's shared as a process that is distributed across different groups of people, across society, really. 607 01:03:29,630 --> 01:03:34,790 So, you know, we've spoken about the concerns that researchers have about taking all of this on. 608 01:03:34,790 --> 01:03:40,970 And I think it is very true to say that they shouldn't be expected to take on all of these kinds of considerations themselves. 609 01:03:40,970 --> 01:03:47,570 And in fact, it's much better if they don't. It's much better if we think of responsibility as a kind of collective process, 610 01:03:47,570 --> 01:03:53,600 one in which we bring in these different kinds of perspectives, understanding the perspectives of stakeholders and 611 01:03:53,600 --> 01:04:01,700 understanding the role that different groups and different people play across these research and innovation processes. 612 01:04:01,700 --> 01:04:09,950 So we can understand that the responsibility is really distributed across this landscape rather than just being placed into the laps of poor, 613 01:04:09,950 --> 01:04:13,850 overworked researchers in that sense. 614 01:04:13,850 --> 01:04:21,050 So I think if we put all these things together, the understanding that responsibility is about the research process, 615 01:04:21,050 --> 01:04:26,300 it goes beyond liability, it's very forward looking and it's shared with different stakeholders, 616 01:04:26,300 --> 01:04:32,030 we can really open up this notion of responsibility and we can use it to foster 617 01:04:32,030 --> 01:04:36,680 ethical practice all the way through the process of AI research and innovation, 618 01:04:36,680 --> 01:04:42,890 and perhaps overcome some of the concerns about what it means to ask researchers 619 01:04:42,890 --> 01:04:47,150 to take on all of these considerations and just do them by themselves. 620 01:04:47,150 --> 01:04:53,100 It's a far more collective approach, and a very forward-looking one as well, that I think is really crucial. 621 01:04:53,100 --> 01:04:59,060 You know, we've mentioned a number of times the kind of impacts that AI can have on society. 622 01:04:59,060 --> 01:05:05,840 And it's really crucial that we think forwards about those impacts and don't just react to them retrospectively. 623 01:05:05,840 --> 01:05:11,870 And there are many other things I could say at this point, but I'm going to stop there so that we can move on to the discussion. 624 01:05:11,870 --> 01:05:21,010 Thank you very much.
Thank you very much, Helena. 625 01:05:21,010 --> 01:05:30,130 One question I'd like to ask you about: I mean, you're very interested in the issues of training and education. 626 01:05:30,130 --> 01:05:39,000 And you were talking about researchers taking responsibility and caretaking, I mean, just caring about it. 627 01:05:39,000 --> 01:05:45,510 Some people might be rather sceptical about whether training can actually make people care. 628 01:05:45,510 --> 01:05:49,410 And I mean, if a researcher is in a context where, you know, 629 01:05:49,410 --> 01:05:55,950 progression in their job or in publication or whatever depends primarily on just getting 630 01:05:55,950 --> 01:06:02,730 on and producing research and getting there faster than the other guy and so forth, 631 01:06:02,730 --> 01:06:07,800 how far can training make people care? 632 01:06:07,800 --> 01:06:16,500 Well, I think training is really vital, and early training, so that people learn it right from the start, is really important. 633 01:06:16,500 --> 01:06:20,130 So we see, for instance, in the Department of Computer Science here, 634 01:06:20,130 --> 01:06:24,660 we now teach our first year undergraduates ethics and responsible innovation, with the idea that they 635 01:06:24,660 --> 01:06:29,940 get the basics of it in the first year and they carry it through all the way through their research careers. 636 01:06:29,940 --> 01:06:35,490 And the EPSRC, the funding body that I mentioned, where they fund centres for doctoral training, 637 01:06:35,490 --> 01:06:38,400 which are the four year programmes for doctoral students, 638 01:06:38,400 --> 01:06:44,640 they require the centres for doctoral training to include responsible innovation training as well. 639 01:06:44,640 --> 01:06:49,020 So again, you sort of have these perspectives brought in very early on so that, you know, 640 01:06:49,020 --> 01:06:55,140 people learn them quickly and just assume that they're part of the work that they're doing, 641 01:06:55,140 --> 01:07:02,520 not an extra thing that's, you know, terrible that they have to add to the workload, but just a central thing of the work that they're doing. 642 01:07:02,520 --> 01:07:08,280 And I can only speak for the students that I've been in contact with, but they certainly take it on very responsibly, actually. 643 01:07:08,280 --> 01:07:17,400 And I think, you know, they can really see the need for this kind of ethical thinking about AI. But I think more broadly, 644 01:07:17,400 --> 01:07:24,240 you know, I do take your point that it's really about sort of what Rosie was saying about a culture of responsibility, like everybody does it. 645 01:07:24,240 --> 01:07:30,840 It's not like, you know, I do it and therefore I'm at a disadvantaged position next to the next person. 646 01:07:30,840 --> 01:07:39,150 It's: we all do it, everybody does it, and we just take it for granted that we do it because we see it as central to the work that's going on. 647 01:07:39,150 --> 01:07:46,280 Thank you very much. And getting a culture going in a place like Oxford amongst our students and so on 648 01:07:46,280 --> 01:07:56,580 is probably a relatively easy task; getting it across an international community of researchers, that is surely extremely difficult. 649 01:07:56,580 --> 01:08:00,220 But I guess we should bring the others back. Thank you very much. 650 01:08:00,220 --> 01:08:07,320 That's been really interesting.
And let's bring the other two back and we'll talk about these broader issues, 651 01:08:07,320 --> 01:08:14,970 because I guess part of this broader culture is a matter of enshrining things like these 652 01:08:14,970 --> 01:08:21,990 research impact statements and so forth in things like these big international conferences. 653 01:08:21,990 --> 01:08:27,120 By the way, the workshop that Rosie and Carolyn were involved with, that was a NeurIPS workshop, 654 01:08:27,120 --> 01:08:30,600 wasn't it? So that's a pretty big deal. 655 01:08:30,600 --> 01:08:43,260 Yes. Well, thank you very much. A couple of questions I'd like to ask all of you, or invite you to discuss amongst yourselves. 656 01:08:43,260 --> 01:08:50,440 First of all, are there meaningful differences between responsible research publication 657 01:08:50,440 --> 01:08:59,240 and responsible deployment? So, I mean, some people might think, well, you know, 658 01:08:59,240 --> 01:09:07,560 don't worry about the researchers, let them get on with their stuff. What we should be doing is putting the checks on the deployment. 659 01:09:07,560 --> 01:09:16,710 I'll jump in quickly. So I think this relates to something I mentioned in my talk around the blurriness in AI 660 01:09:16,710 --> 01:09:20,730 research, in particular when it comes to the distinction between research and products, 661 01:09:20,730 --> 01:09:26,490 and how we're seeing a lot of research that sort of advances capabilities actually 662 01:09:26,490 --> 01:09:32,390 happening within industry in the process of building products out of those developments. 663 01:09:32,390 --> 01:09:41,880 So I've thought a lot about, you know, should we be thinking about changing the sort of research culture, or is this actually a wider cultural 664 01:09:41,880 --> 01:09:47,220 issue in the whole field of AI that includes research and development and deployment as well? 665 01:09:47,220 --> 01:09:53,340 And I think there might be slightly different considerations. 666 01:09:53,340 --> 01:09:59,520 But overall, they're so entangled that I do think we need some kind of common stances here. 667 01:09:59,520 --> 01:10:05,040 And I think this idea of the culture of responsibility is an example: like, 668 01:10:05,040 --> 01:10:11,010 if everyone starts to acknowledge that, yes, AI has become something that has these large effects on the world, 669 01:10:11,010 --> 01:10:14,610 that puts us in a position of responsibility towards it. 670 01:10:14,610 --> 01:10:19,320 As much as I would like to be able to do research in a vacuum and not have to worry about these things, 671 01:10:19,320 --> 01:10:23,670 we are all citizens of society and we have to play a role in that. 672 01:10:23,670 --> 01:10:30,360 So I think there are possibly some meaningful differences between research and deployment. 673 01:10:30,360 --> 01:10:39,000 And in particular, I'm thinking there's a lot of concrete work that has happened in safety engineering fields. 674 01:10:39,000 --> 01:10:46,830 So fields like aviation or vehicles have to have very concrete measures of safety. 675 01:10:46,830 --> 01:10:53,370 And I think that we could potentially learn from fields like that when we're talking about deploying AI systems.
676 01:10:53,370 --> 01:10:59,700 However, I do think that, yes, because of this tight loop between the research and the product deployment, 677 01:10:59,700 --> 01:11:05,850 some of that thinking also needs to apply to the research itself. But this gets close to another question that's been asked, 678 01:11:05,850 --> 01:11:10,830 actually: many other fields have established processes for research. 679 01:11:10,830 --> 01:11:18,840 Is AI trying to reinvent the wheel? Can we learn from other fields like psychology and medicine and so on? 680 01:11:18,840 --> 01:11:22,290 So maybe put that into your thinking as well. 681 01:11:22,290 --> 01:11:25,830 Carolyn, do you want to take that? Yeah. 682 01:11:25,830 --> 01:11:34,950 Yeah, sure. So I think, as has been mentioned a little bit in the seminar, to an extent, yes, 683 01:11:34,950 --> 01:11:36,210 and to an extent, no. 684 01:11:36,210 --> 01:11:45,630 So I think research ethics, as it concerns, you know, I'm doing a piece of research that's going to affect some human participants, 685 01:11:45,630 --> 01:11:50,190 how do I need to make sure that that's ethical and that's all OK? In situations like that, 686 01:11:50,190 --> 01:11:59,820 there are, from other fields, whole institutional review boards and things like that set up to make sure that goes well. 687 01:11:59,820 --> 01:12:06,990 And so, yes, 100 percent that stuff should be, and I believe is being, pulled into the AI space. 688 01:12:06,990 --> 01:12:11,310 Of course, AI often does not have human subjects. 689 01:12:11,310 --> 01:12:15,300 Often it's more likely to deal with kind of personal data. 690 01:12:15,300 --> 01:12:21,480 But then again, in that case, there's a lot of stuff that can be drawn in from the data governance space. 691 01:12:21,480 --> 01:12:26,850 So to an extent, yes. I think with things like impact statements, though, 692 01:12:26,850 --> 01:12:32,820 we are in a slightly different world where we're not just asking researchers to talk about their research conduct, 693 01:12:32,820 --> 01:12:36,570 you know, people being affected during the research itself. 694 01:12:36,570 --> 01:12:44,790 We're also asking bigger questions about, you know, downstream impacts and AI impacts, as we called them, 695 01:12:44,790 --> 01:12:52,740 and often research ethics mechanisms, things like institutional review boards, don't have quite that scope. 696 01:12:52,740 --> 01:12:54,840 And so in spaces like that, and again, 697 01:12:54,840 --> 01:13:00,750 particularly because machine learning is so sort of general purpose that it could be used in so many different contexts, 698 01:13:00,750 --> 01:13:03,030 then that does look quite different. 699 01:13:03,030 --> 01:13:11,160 And then, of course, there is stuff that you can pull over from responsible research and innovation and things like this. 700 01:13:11,160 --> 01:13:16,080 But we need to make sure that it fits the machine learning setting. 701 01:13:16,080 --> 01:13:23,820 Well, I'm sure Helena might also have thoughts. Oh, can I just bring something else in there, and then I'll come to you next? 702 01:13:23,820 --> 01:13:30,450 It's just a thought that occurs in reaction to what Carolyn said, and one of 703 01:13:30,450 --> 01:13:37,350 our viewers has raised the issue of responsibility as opposed to liability. 704 01:13:37,350 --> 01:13:44,250 And I think that could mix in here.
So one thing that comes to mind is that in areas like medicine, 705 01:13:44,250 --> 01:13:57,390 it's relatively easy to establish control over what is done, because the research is typically being done in government or large institutions. 706 01:13:57,390 --> 01:14:06,120 A medical research institution can't hide away, can't be somebody in their basement working away on a machine, but AI research can be. 707 01:14:06,120 --> 01:14:13,540 So it's extremely difficult to control. You can't control it in the same way by controlling large institutions. 708 01:14:13,540 --> 01:14:18,270 And there's this issue of liability. And that's a really tricky one. 709 01:14:18,270 --> 01:14:26,200 You know, with big institutions, you can say, well, if your researchers misbehave, we're going to charge huge fines or whatever. 710 01:14:26,200 --> 01:14:31,630 If it's someone hiding under a cloak of anonymity, working away in their basement, 711 01:14:31,630 --> 01:14:36,940 and if it's possible for them to make big leaps in research, it may be different. 712 01:14:36,940 --> 01:14:44,460 But one can see that there is much more of a problem than there is in the case of something like medicine. 713 01:14:44,460 --> 01:14:48,600 So, Helena, over to you on any of the issues we've just been raising here. 714 01:14:48,600 --> 01:14:56,700 Yes, I think the point about the liability question is absolutely right. You know, where AI research is going on, 715 01:14:56,700 --> 01:15:02,550 it's much harder to get a sense of what's being done if you compare it to where it's taking place in big, 716 01:15:02,550 --> 01:15:05,790 well-established institutions, which have their own regulations and so on. 717 01:15:05,790 --> 01:15:11,310 And I think that our community at the moment is facing these questions about self governance. 718 01:15:11,310 --> 01:15:16,980 So is it enough to have professional codes of ethics and values and so on? 719 01:15:16,980 --> 01:15:25,620 Is it enough to expect people to voluntarily follow them, and will that be enough to secure good practice? 720 01:15:25,620 --> 01:15:29,400 Or do you need some more external mechanisms in place? 721 01:15:29,400 --> 01:15:35,090 And I think, again, the comparison with medicine is a very good one, because, of course, AI 722 01:15:35,090 --> 01:15:42,390 is a much younger field, and it hasn't had a chance to develop those professional values and so on 723 01:15:42,390 --> 01:15:51,270 as a field, whereas with medicine, obviously you have these kinds of external mechanisms in place, but you also have these very, 724 01:15:51,270 --> 01:15:59,220 very well embedded values that individuals have about what is and isn't good practice in research. 725 01:15:59,220 --> 01:16:03,510 So I think it's kind of an ongoing question, I think, for the community, 726 01:16:03,510 --> 01:16:09,600 the extent to which it is going to rally itself to get those values in place, 727 01:16:09,600 --> 01:16:13,200 or whether some more external mechanisms are going to be needed. 728 01:16:13,200 --> 01:16:18,120 Yes. And the cultural issue that was raised earlier comes in here, doesn't it? 729 01:16:18,120 --> 01:16:22,590 Because people standardly go into medicine because they want to cure people. 730 01:16:22,590 --> 01:16:32,690 They're already pretty well motivated.
They're not usually going into medical research primarily from a financial motive; plastic surgery, perhaps. 731 01:16:32,690 --> 01:16:36,680 But in AI... 732 01:16:36,680 --> 01:16:39,560 Yeah, that's not the same. 733 01:16:39,560 --> 01:16:48,290 I mean, another question that's been raised is: who decides what is responsible research? Not easy or self-evident. 734 01:16:48,290 --> 01:16:58,590 And another comment: the stakeholder approach is interesting, but whole groups of stakeholders can be sadly misinformed. 735 01:16:58,590 --> 01:17:06,380 But this general question of who decides what is responsible, what is acceptable and so forth: 736 01:17:06,380 --> 01:17:14,820 do any of you have anything particular to say on that? There's the sort of ideal view that, you know, 737 01:17:14,820 --> 01:17:22,980 you have all of the stakeholders come together and you have this process of engagement, and through that consensus emerges. 738 01:17:22,980 --> 01:17:29,940 And one of the things that we talk about in the sense of engagement is that it is kind of like a creative process in that sense. 739 01:17:29,940 --> 01:17:35,130 And disagreements can be a place where, you know, creativity can emerge. 740 01:17:35,130 --> 01:17:39,660 It can be highly constructive and lead to new pathways and so on. 741 01:17:39,660 --> 01:17:47,580 So in the ideal form, who decides? It's everybody, because it's a kind of collective understanding that's reached through these processes. 742 01:17:47,580 --> 01:17:50,490 Of course, the practice might be quite different. 743 01:17:50,490 --> 01:17:57,450 And you might find that ideals, or general cultures, in different countries are quite different. 744 01:17:57,450 --> 01:18:03,160 Yes, absolutely. Absolutely. So then you have questions about where the boundaries lie, I mean, 745 01:18:03,160 --> 01:18:07,200 because we talk about societal values as if they're a single thing, but obviously they're not. 746 01:18:07,200 --> 01:18:09,580 They can be very different, they can be competing and so on. 747 01:18:09,580 --> 01:18:17,080 And then you face very difficult questions about, well, whose values are you going to, you know, take as important, and whose values win. 748 01:18:17,080 --> 01:18:26,850 And just to finish off with this, I mean, another viewer has remarked that a lot of 749 01:18:26,850 --> 01:18:30,240 our societal values have an anthropocentric bias. 750 01:18:30,240 --> 01:18:36,120 They're biased in favour of human goods rather than goods for the environment. 751 01:18:36,120 --> 01:18:42,030 And that throws in yet more issues about who decides and how do we decide. 752 01:18:42,030 --> 01:18:50,340 So Rosie and Carolyn, do you want to finish off with the gist of your view on how we decide and who decides? 753 01:18:50,340 --> 01:18:58,260 Yeah, so I think that this is a question that humanity has faced since the dawn of time. 754 01:18:58,260 --> 01:19:04,830 We all have to get along with each other. We might all have different intuitions about what is ethical behaviour. 755 01:19:04,830 --> 01:19:11,580 Different cultures have different perceptions of morality. And I think it is valid to ask this question with respect to AI. 756 01:19:11,580 --> 01:19:17,190 But I do think that this is definitely not something that is unique to AI as a field.
757 01:19:17,190 --> 01:19:23,970 And we somehow have managed to muddle along and learn how to cooperate, roughly. Obviously it's not been smooth sailing, 758 01:19:23,970 --> 01:19:28,590 but we've developed institutions and systems that have allowed us to navigate 759 01:19:28,590 --> 01:19:32,730 the fact that we still all have differences in values and intuitions and things. 760 01:19:32,730 --> 01:19:39,660 And I think the main thing I'm advocating for here is not necessarily that we need 761 01:19:39,660 --> 01:19:44,490 to right now pick exactly whose values we're going to respect and what those values should be. 762 01:19:44,490 --> 01:19:51,450 It's more that we need to start building those institutions and thinking through those processes within the specific context of AI. 763 01:19:51,450 --> 01:19:58,200 So I think we can still make progress on this without having a concrete answer about whose values, who is responsible. 764 01:19:58,200 --> 01:20:01,770 We can still sort of nudge things. We can iterate, 765 01:20:01,770 --> 01:20:06,180 we can try things and we can nudge things in a more positive direction, and hopefully slowly 766 01:20:06,180 --> 01:20:14,100 over time build our capacity for wrestling with these more difficult questions. 767 01:20:14,100 --> 01:20:16,500 Thank you. Carolyn? 768 01:20:16,500 --> 01:20:26,970 Yeah, I guess I can add a couple of things. Yeah, firstly, on the excellent point about only worrying about human values: 769 01:20:26,970 --> 01:20:30,000 yes, certainly things like environmental impact. 770 01:20:30,000 --> 01:20:36,330 I think there's been increasing recognition within the machine learning community 771 01:20:36,330 --> 01:20:41,850 that we do need to also think about the environmental impacts of systems, 772 01:20:41,850 --> 01:20:50,850 particularly systems that require huge amounts of compute for training, which are just very, very energy intensive. 773 01:20:50,850 --> 01:21:00,900 So, yeah, lots to do, but at least there's a recognition that this is something that should be thought about. 774 01:21:00,900 --> 01:21:10,020 Yeah. In terms of the question of who decides, this is a really big question that could get into some really meaty philosophical 775 01:21:10,020 --> 01:21:21,540 debates, I'm sure. I think, to an extent, you know, obviously a lot of people have very different moral views. 776 01:21:21,540 --> 01:21:22,540 But I think across the board, 777 01:21:22,540 --> 01:21:32,460 there's a lot of stuff that we can agree on in terms of certain human suffering and certain inequalities that we want to avoid. 778 01:21:32,460 --> 01:21:38,670 At least we have a very good starting point in terms of some of the stuff that we don't want to see happening, 779 01:21:38,670 --> 01:21:45,480 and yet sadly sometimes do. We can start from things that we do agree on, both positive and negative. 780 01:21:45,480 --> 01:21:52,890 Yeah, sure. And some people would also say things like, you know, we have, for example, human rights; 781 01:21:52,890 --> 01:21:56,990 you know, this has been a collaborative endeavour to get things written down.
782 01:21:56,990 --> 01:22:03,690 And we have starting points like that, and we can think about what that means for AI systems, 783 01:22:03,690 --> 01:22:14,820 and build on that. And then I guess the final comment I want to make is that a lot of people really are thinking 784 01:22:14,820 --> 01:22:18,660 about how to get impacted communities at the centre of these conversations. 785 01:22:18,660 --> 01:22:25,170 So particularly where you're having systems that are creating real material harms for people and their lives and their decisions. 786 01:22:25,170 --> 01:22:26,340 Now, 787 01:22:26,340 --> 01:22:35,230 how do we centre those voices, who have historically not necessarily been listened to that well in the development of these systems? 788 01:22:35,230 --> 01:22:40,590 So I think that's a really important place to start. 789 01:22:40,590 --> 01:22:51,780 And as we think about future impacts as well, thinking about how we can meaningfully translate the possibilities of the technologies 790 01:22:51,780 --> 01:22:56,460 and where we might be headed, and translate that in a meaningful way such that we can 791 01:22:56,460 --> 01:23:03,210 have useful conversations with people who might be impacted and let people 792 01:23:03,210 --> 01:23:08,280 have a say and think about what it is that they do and do not want from these systems, 793 01:23:08,280 --> 01:23:12,270 I think is really important. Thank you very much. And finally, Helena. 794 01:23:12,270 --> 01:23:17,490 Oh, yes. So just briefly building on what Carolyn was saying, I think, 795 01:23:17,490 --> 01:23:23,340 yeah, we absolutely have, you know, a task to work out what the fundamental values are that we have here. 796 01:23:23,340 --> 01:23:32,130 And I think Carolyn's point about, you know, addressing the inequalities in society that can be furthered by AI; you know, 797 01:23:32,130 --> 01:23:37,470 we might want to be able to reach out to disadvantaged communities and correct some of these inequalities. 798 01:23:37,470 --> 01:23:41,100 So that could be one of the fundamental values that we're looking at when we're talking 799 01:23:41,100 --> 01:23:45,240 about responsible research and AI. And then sometimes it's a matter of a trade-off 800 01:23:45,240 --> 01:23:49,080 as well, understanding that maybe the ideal isn't always possible. 801 01:23:49,080 --> 01:23:53,970 So it's about seeing where the balances lie. So, for instance, a balance between the profit motive, 802 01:23:53,970 --> 01:24:01,080 which drives huge amounts of AI, and protecting the environment, for instance; you probably can't have both of those things together. 803 01:24:01,080 --> 01:24:07,200 So where is the balance between them? So we kind of have a combination: what's fundamental, and where are the trade-offs? 804 01:24:07,200 --> 01:24:16,500 Thank you very much. Yeah, I'm struck, as you gave your answers, that one answer that doesn't come out is: oh, we listen to moral philosophers. 805 01:24:16,500 --> 01:24:20,790 They will decide where the truth lies in all these things. Well, we can dream. 806 01:24:20,790 --> 01:24:27,510 If they even agree amongst themselves. Do they? No, no. Well, absolutely not. 807 01:24:27,510 --> 01:24:32,970 All right. Thank you very much indeed, Rosie, Carolyn.
And that's been extremely interesting, 808 01:24:32,970 --> 01:24:41,940 giving us all an insight into a lot of difficult but really important issues raised by AI research, which are certain to become even more important in 809 01:24:41,940 --> 01:24:46,560 the future. Funding and research structures and things like that aren't the 810 01:24:46,560 --> 01:24:54,420 sorts of things that typically attract a huge amount of public attention, but it's clear that they could potentially have a massive effect over time. 811 01:24:54,420 --> 01:25:02,760 I'm feeling rather conflicted myself between the typical academic prejudice in favour of openness and worries about giving bad guys ideas. 812 01:25:02,760 --> 01:25:10,620 But I'm in the fortunate position that no ideas I produce are likely to be of any use to bad guys. 813 01:25:10,620 --> 01:25:19,000 The session has been recorded. It will be added to the rich collection of resources that we're building up at Oxford. 814 01:25:19,000 --> 01:25:27,130 As I said at the beginning, you can find links to past and forthcoming events and the growing set of recordings and podcasts categorised by topic 815 01:25:27,130 --> 01:25:34,870 if you go to the Oxford Philosophy Faculty homepage and click on the Ethics in AI link. 816 01:25:34,870 --> 01:25:38,110 Before saying goodbye, I'd like to thank Wes Williams, 817 01:25:38,110 --> 01:25:44,530 Vicki McGuinness and the whole team at TORCH for helping with the organisational and technical arrangements for the seminar. 818 01:25:44,530 --> 01:25:51,160 They've made everything much easier for the four of us, and we hugely appreciate that. 819 01:25:51,160 --> 01:25:58,810 Thank you again to our three speakers. Thank you for watching, especially those who've added comments or questions. 820 01:25:58,810 --> 01:26:01,960 Do look out for our future events via the link I mentioned. 821 01:26:01,960 --> 01:26:12,340 Our next seminar will be on artificial intelligence and mental integrity and takes place in three weeks' time, at five p.m. on March the 21st. 822 01:26:12,340 --> 01:26:34,553 Until then, thank you again and goodbye.