So Carina and I will tell you about some of the work going on at the Centre for the Governance of AI, which, as mentioned, is at the Future of Humanity Institute, in the Faculty of Philosophy at the University of Oxford. We can begin by just asking: what is this term, governance? Two key terms, ethics and governance, are often used, and it's valuable to try and reflect on each of them. One notion of governance is a very descriptive definition: governance is really the processes by which decisions are made. The things that shape governance in this sense are norms, policies, institutions, and laws, but also things like technology, code, and infrastructure. All of this shapes how decisions are made.

Now, to motivate the importance of governance, I think it's useful to quote what we might think of as a theory of how AI governance could work, from Vladimir Putin. The quote is that whoever leads in AI will rule the world. This is a theory of governance, if you will, because it tells you how AI could change the way in which decisions are made. In particular, it's a stark theory of governance: it says AI will concentrate power in whoever leads in it. This quote received a lot of media attention throughout the world, but also in national security conversations and hallways, because I think it reflects a fear many have that AI could be disruptive to power and the world order. And of course, it also resonates with concerns about the role that Russia plays in the world. Now, it's worth remarking that when Putin made the statement, he was not staring ominously into the camera. In fact, he was hosting a televised middle school science fair, where he was broadly just encouraging these young budding scientists and their various projects, and this remark was made in relation to a robotics project. He had a lot of nice things to say about AI, but this was the quote that was pulled out of context and echoed around the world. I think this illustrates the challenge we have in having a mature, calm global conversation about AI governance, because of the fears and because of the ways in which claims can be amplified and taken out of context. So our job will not be easy.

Now, this motivates a normative notion of governance, which is that we don't just care about how decisions are made; we also want those decisions to be made in a good way, where good means something like effective, legitimate, inclusive, adaptive.
OK, so at the Centre for the Governance of AI, we are interested both in how decisions are made, and in how technology, institutions, and so forth shape those decisions, but also in how we can get to a good set of governance structures. When we think about the governance of AI, there's a narrow definition which focuses on specific systems or algorithms deployed in a particular domain, be it criminal justice, making loans and so forth, or things like autonomous vehicles and robotics. There's also a narrow interpretation of ethics: the ethics of a particular system being deployed. And there's a broad interpretation of the governance and ethics of AI, which says AI could have a wide range of impacts on things that matter to us, and we need to think about how we can manage those impacts. Impacts such as labour displacement, challenges to democracy and to social epistemics, that is, how we as a community make decisions, and strategic instability, which means nuclear instability, the risks of nuclear war, in short.

Now, a question I had for myself is how our remit, what we're working on, relates to the ethics of AI. In conversations with Peter, I've come to understand that ethics of AI is very much understood in this broad sense, covering the full range of potential impacts. So I would say that the problems we're studying are the same, the scope of the problems is the same, but perhaps the emphasis and the toolset are somewhat different and complement each other. We tend to bring the tools of social science and policy science, and in particular an emphasis on the geopolitical character of the challenge.

OK, so here we are. In summary, our mission is to help humanity achieve the benefits and avoid the risks associated with advanced artificial intelligence. Now, it may seem like the kind of mission that this group could complete on our own, or maybe with a bit of help, but that's meant to be sarcastic: actually, it's not going to be easy. The governance of AI will be a very difficult challenge, and I'll briefly explain why. If you think about AI as a general purpose technology, a concept that's been used to think about technologies like electricity and the combustion engine, then you can see that AI will not just have impacts in various narrow applications, but really in a deep, transformative sense, transforming the economy, society, and politics. We could go through the properties of AI as we understand it and see why it poses such challenges for governance: the harms and the benefits tend to be diffuse.
The technology is fast moving and requires technical sophistication to understand. Many of the developments in AI are dual use, in the sense that they have scientific, commercial, and humanitarian benefits, but those same technologies, with slight modification, can be misused, can cause harm, can be used by the military. So if we try to enumerate the governance challenges, we'll get a long list. Here's one such list; I've grouped them into several categories. Maybe I'll point to one subcategory that's perhaps neglected in this conversation, which is AI safety. This is really a set of work that needs to be done in collaboration with computer scientists and researchers. The Future of Humanity Institute has an AI safety group, and we collaborate with researchers at other labs and elsewhere. The work that we do is mostly on the right-hand side: thinking about domestic political challenges, international political economy challenges, and then especially challenges for international security.

So I'm now going to start moving quickly, to just give you a sampling of some of our work. To begin, if you want a lengthier overview of how we think about some of these problems and how they relate, you can look at this research agenda document that I wrote, which breaks the problem up into four slightly different categories. The first is the technical landscape. This is work that needs to be done by machine learning researchers, economists, and some others, and it tries to understand the current state of machine learning and what it will look like in the future: how will it change economic structures, how will it change the demand for different kinds of labour, and so forth. The next category is politics. This is where most of our work to date has been. This is thinking about the various ways that political institutions can help manage and develop AI, but also the disruptions and opportunities that AI poses for political institutions. Ideal governance refers to the more normative challenge of envisioning what it is that we, as a society, want, and what institutions might do a good job of managing the risks and eliciting the benefits. And finally, policy refers to near-term policy recommendations that we can make to universities, governments, international bodies, labs, and firms.

OK, so now I'm going to go quickly. One of our first projects that got a lot of attention was what's called the malicious use report. This was led by Miles Brundage, who's now at OpenAI, and this was our attempt to think through and catalogue the various risks that AI could pose.
For example, one of the risks that we talked about in there was deepfakes, which at the time were relatively unknown; they were sort of a technical curiosity and speculation, even though that wasn't so long ago. Now, of course, we're seeing this in the news: all kinds of challenges to privacy, decency, and also to political conversations.

One of our collaborators is Jeffrey Ding, a DPhil at Oxford. He's done a lot of work on Chinese AI policy, innovation policy, and global technology policy and how it relates to China. If you're interested in this, he has a weekly newsletter where he sometimes translates works coming out of China and also reflects on the many interesting issues in this space.

Another collaborator, Baobao Zhang, is the lead on our survey work. We've done surveys of publics and experts. Depicted here is a survey we did of American citizens; we also have a survey in the works on Chinese citizens, and then we're going to survey European citizens. I don't know if you can read the left-hand side, but this is one of our interesting results. We asked these Americans which institutions and organisations they most trusted to develop and manage AI in the public interest. We found that Americans trust university researchers, so that's good for us, I guess. The US military is another institution that has a lot of trust. In maybe second place were tech companies: Google, for example, scored well, as did Microsoft and some others, with one big exception, which is Facebook. Interestingly, even though we ran this survey after the Cambridge Analytica scandal occurred, we did a pilot beforehand and the results were basically the same. So this concern about Facebook's governance has a long history. And Americans also don't put an especially large amount of trust in government: the federal government, state governments, and international governmental bodies like the UN do not score well there.

Another interesting result here: we asked these respondents which AI governance challenges they thought were most likely to have a large impact on the world in the next 10 years, and which were most important. One takeaway is that all of these issues scored very high on importance on the y-axis; you can see they're all somewhere around 2.5, which is somewhere between important and very important. But another interesting result is that there was meaningful variation across these issues. For example, the issues that were regarded as most likely to impact a large number of people were data privacy, cyber-attacks, surveillance, and digital manipulation.
Now there are, of course, always ways in which you might want to ask these questions differently to better elicit people's beliefs. In this case, I think because we asked about the challenges in the next 10 years, that gave a certain kind of response, whereas if you're thinking longer term, other issues like technological unemployment would, I expect, be judged as more impactful.

OK, so we've also done surveys of AI researchers and experts. These figures refer to an older paper of ours from 2017. We also have a more recent survey; the results will be coming out soon, so I won't say much about that here. But we had some interesting results, for example in the left figure, where we asked these experts to forecast when various capabilities would be achieved. At the bottom, for example, you can see StarCraft, which refers to superhuman StarCraft performance. What's nice about some of these questions is that they have to do with tasks that should be achievable in the near term, where near term means something like five to fifteen years. What that means is that after about 10 years, we can start evaluating how well these experts did at forecasting technological developments. We can also analyse by demographics: are certain kinds of experts better able, more calibrated, in their technological forecasting? And the reference to StarCraft, of course, is that DeepMind has since announced results on this task.

We also have some work, with a computer scientist, looking at the social implications of data efficiency. Maybe I'll just remark that I think work in the space of ethics and governance of AI really benefits from a deep conversation between technical experts and social, ethical, and philosophical experts. In this case, data efficiency is when you can do more with less data. There's a first conceptual perspective that's useful, which is to think of data efficiency in two ways. One is to think about it in terms of the access effect: for any given level of performance, say an autonomous vehicle that is sufficiently safe that the regulators will approve it, you will be able to achieve that with less data. That's the access effect. The second perspective, which we call the performance effect, is that for any given amount of data that you have, you'll be able to achieve a higher level of performance.
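To make the two perspectives concrete, here is a toy numerical sketch of my own, not taken from the paper: it assumes a hypothetical power-law learning curve and models a data-efficiency gain as an "effective data multiplier" k, so the same gain can be read either as needing less data to clear a fixed performance bar (the access effect) or as reaching higher performance with a fixed dataset (the performance effect). All numbers, names, and the functional form are illustrative assumptions.

```python
# Toy sketch (not from the paper): a hypothetical power-law learning curve,
# performance(n) = 1 - n**(-alpha), where n is the number of training examples.
# A data-efficiency improvement is modelled as an effective data multiplier k,
# i.e. the more efficient learner behaves as if it had k*n examples.

def performance(n: float, alpha: float = 0.3, k: float = 1.0) -> float:
    """Performance (0..1) reached with n examples and efficiency multiplier k."""
    return 1.0 - (k * n) ** (-alpha)

def data_needed(target: float, alpha: float = 0.3, k: float = 1.0) -> float:
    """Examples needed to reach a target performance level (inverse of the above)."""
    return (1.0 - target) ** (-1.0 / alpha) / k

if __name__ == "__main__":
    target, n_available, k = 0.95, 10_000, 4.0  # illustrative numbers only

    # Access effect: the same performance bar is reached with less data.
    print(data_needed(target))           # data needed without the efficiency gain
    print(data_needed(target, k=k))      # data needed after the efficiency gain

    # Performance effect: the same data now yields higher performance.
    print(performance(n_available))      # performance before the gain
    print(performance(n_available, k=k)) # performance after the gain
```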
OK, now, given those two perspectives, there are often intuitions that emerge from them. For example, some people think that data efficiency means the market for AI will become more competitive: instead of having a few big tech companies, you'll have more entrants, because you can do more with less data. What this paper does is reflect on some of these intuitions and show that they're not as straightforward as you might think. For want of time I won't explain, but I'm happy to do so, and there is a reference to the paper.

OK, I'm going to skip ahead. Another set of projects, this one with a historian of technology of this particular period, looks at moments in history when humanity has confronted powerful technologies, here depicted as the aeroplane, and on the right-hand side the United Nations Atomic Energy Commission, whose deliberations were about the control of nuclear weapons. These are circumstances where humanity confronted an emerging, powerful technology that many regarded as posing a joint danger, a danger to all the great powers, and many individual scientists, members of the public, but also political elites, tried to find ways to build global institutions to minimise those risks. Of course, I should qualify: I'm not saying that AI is the same as nuclear weapons or the bomber. These are very different kinds of technologies, in different circumstances. But there are lessons, I think, to be drawn from history when we have confronted technologies, in both these cases, that were perceived to be highly valuable commercially but also as posing a disruptive impact on military stability. And I think there are a lot of really interesting lessons to be learnt here. Many of us didn't even know this took place: there was a movement to control the bomber and even to have an international air force, so that bombers would only be possessed by the League of Nations; no one country would be allowed to build bombers. Fighters were allowed, because those were thought to be defensive technologies. And many of us have forgotten the historical moment when the US and the Soviet Union had a multi-month conversation in the UN about moving all nuclear materials and nuclear weapons over to the UN for control. In the current global climate, we think this is just impossible, but this conversation did happen, because it was provoked by the fear of a nuclear arms race, which of course ended up happening.

So, in conclusion, there are a lot of governance challenges and ethical challenges that we're going to face. And I do think many of these become especially difficult in the presence of geopolitical competition. And this is something that I think resonates with many of you.
So even something like privacy becomes much harder in a world where these economic blocs are competing economically. Europe, for example, might want to have more stringent privacy policies, but there's a concern that if Europe adopts those, then Europe has no chance of cultivating AI champions the way Silicon Valley and China have. Similarly, in the US there are debates about regulation of Facebook and others, and an often-used retort is: if you regulate us, if you break us up, then China will win, the Chinese AI champions will win. So you can see how even what seem like domestic political and ethical issues cannot be understood without understanding the character of global competition. With that, I'm going to turn it over to my colleague Carina.

I will be talking about a bunch of different projects that we've been doing. I will start with a project of mine on human autonomy, in particular the question of whether and how AI systems may affect human autonomy. Autonomy is one of these concepts that has been popping up quite a lot in guidelines and ethics principles recently, but surprisingly there is not a lot of academic work that has been done on the topic. I think it's a bit like with fairness, where philosophers have been thinking about the concept for a long, long time; it's the same with autonomy, there is a lot of philosophical thought on it. And the question is: what can philosophy contribute to a topic as important as this?

Now, what do we mean by autonomy? Of course, philosophers never agree on anything, but broadly speaking, we can say that autonomy refers to the capacity for self-governance. It's the ability of human beings to be their own persons, to have their actions guided by beliefs, preferences, and values that are, in some important sense, genuinely their own, as opposed to externally imposed through manipulation or coercion.

Now, when we look at the principles and guidelines in the ethics of AI, we see that autonomy is actually used in a variety of different ways. Of course, there is personal autonomy, which is the concept I just introduced, and it's incredibly rich, so I won't have time to go into a lot of detail about what philosophers have said about it. The High-Level Expert Group on Artificial Intelligence, for example, has a principle of autonomy and writes that AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition, or herd humans. But that's about it in terms of personal autonomy.
When these AI ethics guidelines speak about autonomy, what they usually mean is more like control, or autonomy as control. I would call this the principle of control, but they use the term autonomy. Usually it's about maintaining and exerting control over the AI system in question. Sometimes people also speak about maintaining the power to decide which tasks are being outsourced to AI systems. But it's very much about controlling the system. Now, these two are clearly not the same. For example, I may not have control over which advertisement pops up on my screen or which song a recommendation system recommends to me next, but this doesn't mean that my personal autonomy is in any sense affected by this. Both of these are very rich topics, and I think there are some serious concerns underlying the principle of control as well, but it's open to interpretation whether those are more about consent or more about questions of meaningful human control.

So instead, I'd like to talk a bit more about personal autonomy, the idea that our beliefs and values are genuinely our own, and give a subset of possible ways that AI may interfere with personal autonomy. There is the paradigm case of manipulation: Cambridge Analytica, for example, attempted large-scale voter manipulation. It's very unclear whether they succeeded at that, probably not, but what's important here is the scale and the potential for abuse and for manipulation by AI systems.

Then there are also adaptive preferences and preference deformation by AI systems. What I have in mind here is mostly recommendation algorithms: algorithms, as the name suggests, that first predict the user's preferences on the basis of data they have access to, and then present the user with options that best fit those preferences. They are being used across the board; we know them from Spotify, Netflix, YouTube, and so on. Now, it turns out that they may actually alter our preferences, as opposed to merely adapting to them. There are now first studies, one of which I've linked at the bottom, showing how preferences change on the basis of a fake recommender system giving ratings to certain songs and videos.
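To make the predict-then-recommend pattern concrete, here is a minimal, purely illustrative sketch of user-based collaborative filtering on a made-up ratings matrix; the data, function names, and parameters are my own assumptions and do not come from the talk or from any particular production system.

```python
# Minimal sketch of the predict-then-recommend pattern described above
# (illustrative only; real recommender systems are far more sophisticated).
import numpy as np

# Hypothetical ratings matrix: rows = users, columns = songs, 0 = not yet rated.
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 0, 4, 1, 1],
    [1, 1, 0, 5, 4],
    [0, 1, 5, 4, 0],
], dtype=float)

def predict_for_user(user: int, k: int = 2) -> np.ndarray:
    """Predict a user's missing ratings from the k most similar other users."""
    sims = []
    for other in range(ratings.shape[0]):
        if other == user:
            continue
        mask = (ratings[user] > 0) & (ratings[other] > 0)  # co-rated items
        if mask.sum() == 0:
            continue
        a, b = ratings[user, mask], ratings[other, mask]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))  # cosine similarity
        sims.append((sim, other))
    neighbours = sorted(sims, reverse=True)[:k]

    pred = ratings[user].copy()
    for item in np.where(ratings[user] == 0)[0]:
        num = sum(s * ratings[o, item] for s, o in neighbours if ratings[o, item] > 0)
        den = sum(s for s, o in neighbours if ratings[o, item] > 0)
        pred[item] = num / den if den > 0 else 0.0
    return pred

user = 0
predicted = predict_for_user(user)
unseen = np.where(ratings[user] == 0)[0]
best = unseen[np.argmax(predicted[unseen])]  # present the best-fitting option
print(f"Recommend item {best} to user {user} (predicted rating {predicted[best]:.2f})")
```

The concern raised above is precisely that a system like this, once deployed at scale, does not only adapt to the preferences it estimates but can also feed back into and reshape them.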
Now, finally, there is also the concern about the loss of competence: that we lose the competence to make authentic decisions in some sense. Again, this is a very complex issue that I can only touch on here. Now, maybe some more advertisement for how great a topic autonomy is: it overlaps with a lot of other areas that we're already looking at in AI ethics, like privacy and surveillance, the need for transparency of AI systems and of their use, but also questions of responsibility.

The main takeaway here is that even if personal autonomy might not be at stake right now, the more sophisticated AI systems get, the more of a problem this could become, and they are getting very sophisticated. Here is an example from GPT-2, which is a language model developed by OpenAI. What it does is: you enter a sentence, a title, or a paragraph by hand, and it completes the article. Here I entered, just for fun, the title "The Ethics of Autonomy and Artificial Intelligence" and looked at what it produced. Fortunately, it didn't write an abstract of my work; instead it came up with something about the moral responsibilities of robots. There was another case where it talked about autonomous vehicles. So this is a very surprising and very advanced AI system.
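For concreteness, here is a minimal sketch of this kind of prompt completion using the publicly released GPT-2 weights via the Hugging Face transformers library; this is my own illustration of the idea, not the interface used in the demo described above, and the prompt is simply the title mentioned there.

```python
# Minimal sketch: text completion with the publicly released GPT-2 weights,
# via the Hugging Face transformers library (an assumption on my part; the
# speaker used OpenAI's own demo, not this code).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # for reproducible samples

prompt = "The Ethics of Autonomy and Artificial Intelligence"
outputs = generator(prompt, max_length=80, num_return_sequences=1)
print(outputs[0]["generated_text"])  # the model's continuation of the title
```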
Now, the reason why GPT-2 became famous was not only because it was one of the breakthroughs in AI development and in language models; it was also because OpenAI made the choice of only partially releasing the model. OpenAI, as a company, decided that the model could perhaps be used for malicious purposes, so they would do a staged release and not release the fully trained model. This sparked a lot of controversy in the machine learning community, which is usually known for being quite open, publishing a lot of open source code and publishing their algorithms so that other people can access them. Now, the question is whether this norm of openness is problematic in some cases. Here I want to briefly outline a paper written by Allan and Toby Shevlane on this culture of openness in machine learning. They were looking in particular at arguments made by the machine learning community against OpenAI's decision to only partially release, and in favour of openness. A lot of those arguments were based on examples from cybersecurity.

In cybersecurity there is the model of responsible disclosure, which means that vulnerabilities in software are published publicly, but only after a few weeks, so people have time to patch the leaks. The reason to publish these vulnerabilities publicly is so that other people can learn from the mistakes and hopefully build more secure software. In their paper, Toby and Allan show that the comparison with these arguments from cybersecurity doesn't really apply in all cases. When you patch a security leak in software, you patch a piece of code, whereas when you publish your AI system, in many cases it's irreversible, and it's much harder to patch the new vulnerabilities that pop up. Think, for example, of voice deepfakes that fake the voice of a loved one; they are becoming quite widely available, and with a snippet of voice you can create these deepfakes, which are being used for fraud. Once these models are out there, it's really hard to take them back. So the comparison doesn't really work that well in all cases.

Now, how much more time do I have? Five minutes? Seven minutes? OK. We are also working on implementing these insights more practically, so I'm just going to briefly outline this: we are in the process of developing a guide for machine learning researchers on how to conduct responsible research. It's going to be based on work that has been done in the academic community, and it's going to be a hands-on guide that takes researchers step by step through the research process. Here we've been focusing on the machine learning research community, where it is heading, and how that might affect the safe development of artificial intelligence. Another interest of ours, though, is to figure out what our own community is doing. One thing that has emerged in the last few years is that there seem to be two parallel communities working in the AI ethics and society space.
These communities can be roughly described as those working on near-term issues like algorithmic bias, autonomous vehicles, and fairness, and those working on more long-term oriented projects, usually called the long-term community, on topics such as superintelligence or advanced AI. There seems to be almost a rivalry about which is more important. Here we see Kate Crawford, who is one of the founders of AI Now and has done excellent work on fairness, and Ryan Calo, who say, you know, this whole talk about superintelligence is overblown and we should focus on the real issues. And then we have others, from the long-term community, who say, well, I think it's really important that we talk about this now and not in the future, because the effects are drastic.

So with my colleague Jess Whittlestone in Cambridge, we were wondering: what's going on? Is this distinction that is being made here actually useful? We tried to figure out whether the division of the community into these two blocks is a bit too crude and, in fact, not very useful when we think about the ethics and society space. What we find is that when people talk about near-term and long-term, they in fact mean very different things. When defining near-term and long-term, sometimes people refer to the capabilities of an AI system, sometimes they think about the impacts, and sometimes their research priorities are set because they want to focus on more extreme risks, or on more certain risks. So we've identified these four dimensions, and none of them is binary, by the way: they are very much gradients, and a lot of research projects are going to fall somewhere in the middle. We then focused on capabilities and impacts and mapped some actual research priorities onto this graph, and you can see that it is far from clear that these two clusters, the long-term and the near-term cluster, actually emerge. In fact, there is a lot of overlap, and the communities could benefit much more from engaging with each other, exchanging methodologies and exchanging knowledge. So here is the paper, for future reference.

OK, some of our work has also been on privacy, with a focus on how we can use machine learning to actually enhance privacy. This is by Ben Garfinkel, but sadly I won't have time to cover it.
So I will just briefly skip to the teaching part, because Peter asked me to talk a bit about this course that I gave last year with the Oxford Artificial Intelligence Society. It was a six-week course, and each class was two hours. The benefit of doing it not as part of the university but with a student society is that you can also invite people to give expert talks. So what we did was have the first hour of each class as an introduction to the various topics listed here, and in the second hour we had invited speakers, from Oxford and elsewhere, including people working in AI, talking on that specific topic.

Now, for half a minute, I'm just going to talk about some of the policy engagement that we've been doing. We've published a lot of op-eds in newspapers and engaged with the public. One of the more remarkable engagements is that we are actually working together with the Partnership on AI on the so-called windfall clause. The windfall clause was originally suggested by Nick Bostrom in his book Superintelligence. It is a legally binding commitment by private firms that, in case a firm's profits skyrocket because it has developed some transformative artificial intelligence, the part of the profits above a certain threshold goes back to the community, or to humanity as a whole. There's a lot of substantial work that needs to go into determining how exactly this would apply, but the good news is that our analysis finds that the windfall clause is actually legally permissible. So this is a project that, if you're interested in, you can contact Jade or Allan about; please get in touch with us. OK, so thank you very much.