Hello, everyone, and welcome to this launch of the Institute for Ethics in AI in Oxford. Thank you for joining us. This institute is made possible by the historic philanthropic gift from Stephen Schwarzman to create a new centre for the humanities at Oxford. The centre will bring together seven humanities faculties. It will create a new state-of-the-art performing arts centre and a new centre for Ethics in AI. The goal is to harness insights from the humanities, gleaned from a millennium of scholarship, and to apply them to the technologies of the future, such as AI. To do this, we will bring together philosophers and other humanists with technical developers and users of AI in academia, business and government. The ethics and governance of AI is an exceptionally vibrant area of research at Oxford, and the institute will build on this work as it explores how AI is changing how we work, how we live, and how our democracy functions. The institute has gotten off to a very strong start by assembling a wonderful group of academics, with more appointments to come, and by organising an important series of stimulating events such as this one.

This evening, to help us explore the questions of AI and democratic culture, we've convened a fabulous panel, and I'm delighted to have the chance to introduce them to you. Professor Sir Nigel Shadbolt is Principal of Jesus College and Professorial Research Fellow in the Department of Computer Science. He's chairman and co-founder of the Open Data Institute and a fellow of the Royal Academy of Engineering and the British Computer Society. Nigel has been unfailingly generous with his time, wisdom and commitment as we've sought to define the institute in such a way as to mobilise work in related fields across the university while optimising its impact on the world beyond the university.

Professor Hélène Landemore is associate professor of political science at Yale. Her research and teaching interests include democratic theory, political epistemology, theories of justice, the philosophy of social sciences, constitutional processes and theories, and workplace democracy. Hélène is the author of the recent book Open Democracy, published by Princeton, a vision for a new, more open form of democracy based on non-electoral forms of representation.

Professor Joshua Cohen is a political philosopher and is on the faculty of Apple University. He's also a distinguished senior fellow in law, philosophy and political science at Berkeley.
He has written on issues of democratic theory, freedom of expression, religious freedom, political equality, democracy and digital technology, good jobs, and global justice. His books include On Democracy; Democracy and Association; Philosophy, Politics, Democracy; Rousseau: A Free Community of Equals; and The Arc of the Moral Universe and Other Essays.

And finally, I'd like to introduce the chair of today's panel, Professor John Tasioulas, who is the inaugural director of the Institute for Ethics in AI and Professor of Ethics and Legal Philosophy here in Oxford. Professor Tasioulas until recently held the Chair of Politics, Philosophy and Law at King's College London. He has degrees in law and philosophy from the University of Melbourne and a DPhil in philosophy from the University of Oxford, where he studied as a Rhodes Scholar. He previously taught in Glasgow and here in Oxford, and has also served as a consultant on human rights for the World Bank. Since coming back to Oxford last year, he has brought insight and energy to the role of director of the Institute for Ethics in AI, and we're delighted to welcome him back to Oxford. Over to you, John.

Many thanks for that very kind introduction, Louise. Before beginning our discussion, I want to say a few words about the mission of Oxford's Institute for Ethics in AI. A good place to begin is the word ethics itself. For many people, ethics is just one form of regulation amongst others. Many, perhaps big tech companies amongst them, regard it as a soft form of regulation, to be distinguished from law, which is enforceable. Indeed, on a cynical view, ethics is a matter of facile mottos like "Don't Be Evil", which can be hard to distinguish from corporate PR. But that is not at all how we understand ethics in the Institute. For us, ethics refers to the ultimate values that human beings should respect and pursue, values like justice, equality and the common good, and to the reasons for action generated by those values in particular times and places. On this expansive view of ethics, it is not one form of regulation amongst others. Instead, ethics is the foundation of all forms of regulation, whether political, economic, legal or any other kind. It is ethical judgement, for example, that determines whether a given kind of activity, such as the use of self-driving cars or online microtargeting, is an appropriate subject matter for legal regulation, and, if it is, what form those laws should take.
It is because we understand ethics in this way that the institute gives a central role to the discipline of philosophy, and why the institute enjoys the great privilege of being based in Oxford's Faculty of Philosophy. Philosophy is the academic discipline with a centuries-long, cross-cultural track record of seeking to address questions of ultimate value through the use of reason. One of the most distinctive features of the institute here at Oxford is that it aims to bring the clarity and rigour characteristic of the best philosophy to the nascent field of the ethics of AI.

But philosophy by itself is not enough. The ethics of AI is inescapably an interdisciplinary endeavour. Most obviously, it requires dialogue with computer scientists, who are at the forefront of the development of the technology. And it has been immensely gratifying that the institute has already established close ties with brilliant colleagues in Oxford's Department of Computer Science, many of whom have themselves reflected long and hard about the ethical challenges thrown up by their research. However, there's another reason why philosophy by itself is not enough. A truly rich and informative AI ethics debate has to recognise that its ethical questions assume a significantly different shape depending on the domain of human life that is in question. The use of robots and AI to carry out surgery or to make medical diagnoses, for example, raises very different ethical issues from their use as judges, law enforcers or lethal weapons in warfare. This is one reason why it is significant that the Institute for Ethics in AI will be housed in the Schwarzman Centre for the Humanities, drawing on expertise from a range of other humanities disciplines, from classics to music, and also why we're establishing close connexions with other parts of the university, such as the Oxford Internet Institute and the Blavatnik School of Government, with whom we will soon be advertising joint posts.

One final aspect of the institute's mission I want to stress is public engagement. This doesn't simply mean that my colleagues and I are interested in producing work that is accessible to a wider public beyond academia and which can inform policy decisions and enhance democratic debate, although it does mean that, too. It also means that we hope that the institute will operate as an inclusive forum in which a broader public, in this country and globally, can engage in discussion about the challenges and opportunities of AI.
In his work of political philosophy, the Politics, Aristotle wrote that the person who lives in a house often knows more about it than the architect who designed it. All of us are increasingly living in a society shaped by AI. So it is vital that the voices of those who are affected by applications such as contact tracing and facial recognition are heard, not just those who create, deploy or profit from those applications. And this is what brings me to the topic of our discussion tonight. Philosophers can play an important role in ethics, but they cannot resolve these conundrums by themselves, not because they are not wise enough, but because they have no political authority to do so. In a democracy, that authority ultimately rests in the citizenry. And that is why we thought that, as a topic for a launch event, we could not do better than to explore the interplay of AI and democracy. Democracy is not just one of the domains of activity in which AI operates; it is also the overarching system of governance that is ultimately tasked with addressing its challenges and its opportunities.

So it's my pleasure now to invite our three panellists to join me. And as they do so, let me remind you that there will be a Q&A session towards the very end, in the last half hour of this event. So please do use the chat function in YouTube to ask your questions.

So thank you so much and welcome, everybody. Perhaps we can begin in the sort of characteristically, perhaps pedantic, way of philosophers, and just clarify the key terms of both democracy, or democratic culture, and artificial intelligence. When a lot of people think about democracy, they think about majoritarian decision-making, especially in the context of elections, and then they think about formal political institutions like legislatures that enact laws, courts that adjudicate upon them, administrators that implement them, police that enforce them. But the notion of a democratic culture goes beyond that purely formal set of institutions. And I think one of the things that's most impressive about the work that both Hélène and Josh have done is that it has emphasised the need for democracy beyond this institutional context. But I also know that you somewhat disagree about the nature of that broader context. Perhaps, Josh, you might begin by explaining in what sense democracy needs to be broader than just the institutional focus.

Yes, thanks. Thanks very much, John. It's really a privilege to be here.
And I want to, if I may, begin by offering two congratulations: one to you for your position as director of the institute, and the other to Oxford for having had the good fortune to be able to bring you in as the head of this. I really can't think of anyone who's got the combination of intellectual breadth and ethical sensibility that better suits him or her or them to play this role. So congratulations to both of you. Thanks for the opportunity.

You know, I teach courses in democratic theory, as does Hélène. And often you begin a course in democratic theory by discussing what are commonly called minimalist theories of democracy. On minimalist theories, democracy is a matter of competition for power that's resolved through elections, and then a peaceful transfer of power subsequent to the elections. This is a view that's presented classically by Joseph Schumpeter in his book Capitalism, Socialism and Democracy; Norberto Bobbio and Adam Przeworski are contemporary proponents of it. And then, when you're teaching the course, you always say, well, that's fine, but it sets the democratic bar pretty low. After January 6th, that doesn't seem like such a bad idea; a peaceful transfer of power is looking really good.

Still, I don't think we should let those hideous events of January 6th change the basic conception of democracy as something more demanding: a form of self-government, a form of self-government in which the people, understood as free and equal, have good, expansive opportunities for open communication and organisation to form ideas and opinions that they aim to translate into programmes, to turn into laws and policies. Democracy isn't simply a peacefully regulated contest for power, but also an informal, open-ended, fluid, dispersed public discussion of matters of common concern. And that discussion, in turn, shapes public opinion and shapes and is shaped by civic activism and ultimately the exercise of formal political power. I think of it as a form of power based on an ideal of substantively equal communicative freedom. And that system requires good and equal chances for people to express views, to access instructive information from reliable sources, to hear a wide range of views and to coordinate with other people, to explore interests and ideas with an eye to developing common understandings and advancing common concerns. And, you know, that system of rights and opportunities, that substantively equal communicative freedom, depends in turn on a set of norms and dispositions of the participants.
So there needs to be a concern for truth, for getting things right, and a conception, not Thrasymachus's conception, of politics as an activity in which the concern for truth matters. Doesn't that go without saying? Well, apparently not. Moreover, a concern for the common good: politics is about competing and conflicting views about how we address common, shared concerns. And I think also a concern for civility, which I don't understand as politeness or etiquette, but as acknowledging that the way you should argue and justify is in ways that respect and acknowledge the equal standing of other people. That's what I think Rawls meant by the duty of civility.

Now, some of us are old enough to remember when digital technology was supposed to be great for all of this, when we finally had the technological foundations for a renewed and reinvigorated Habermasian public sphere, lowering the barriers to entry to speaking and listening and engaging. But, you know, the bloom fell pretty quickly from the digital rose. Still, I think the right way to think about the democratic idea and the technological challenges to the democratic idea is by rooting the understanding in this more ambitious conception of democracy as something more than a peaceful contest for power with a peaceful transition, as incredibly important as that peaceful contest and transition is. This is a much richer notion of democracy. It involves citizens getting together, as it were, as free and equal citizens to deliberate about the common good, to make decisions about the common good, and those deliberations are structured by norms like respect for the truth, civility and so forth.

Hélène, you would agree with most of that, I think, wouldn't you?

Yes, I think I share that vision of democracy as a deliberative democracy, where laws are legitimate to the extent that they've been the product of a reasoned exchange of arguments amongst free and equals. So that's the ideal. But the way I look at democracy in general is as a procedure to make collective decisions where power is distributed equally, discursive and voting power; but it's also, more generally, a regime form, and even more broadly, a type of society. And it's highly implausible that you can have a functional democracy in the narrow sense of a procedure or a regime unless you have a functional democratic society. And what do I mean by a democratic society? I mean what Tocqueville meant when he first visited America: basically, it's a society characterised by an equality of conditions.
And it's really something that I think you fully understand only when you come from a country where this equality of conditions is lesser or nonexistent. For me, it was a revelation when I moved to the US, even as, interestingly, the actual American regime is not that democratic at all, but in fact in recent years has become essentially a plutocracy and an oligarchy. But the equality of conditions is still there, and I think it's essential to the stability and viability of a democratic regime.

Can I just pick up one point? You published this important book recently, Open Democracy, which takes issue with the kind of view of democracy that Josh and Habermas and Rawls have. The idea there is that there is a twin track of democratic deliberation and decision-making: the formal track of elected representatives and others, and then this informal track that sort of sets the agenda but doesn't actually decide; the decision-making is made in the formal track. You've argued for, in some sense, less emphasis on elections and more on citizens making decisions themselves. Can you just bring out that disagreement a little bit with Josh?

Yeah. So I think it's just that, in the 18th century, we took a wrong turn by reinventing democracy in an electoral form, when I think there were other forms of democratic representation that could have been available given a different ideological and conceptual apparatus, including what I call lottocratic representation based on random selection, something closer to the ancient Greek practices and also to some Swiss practices at the time, which were not all that democratic, to be honest, but at least distributed power somewhat equally amongst the pool of candidates. And maybe it's not a real disagreement, I guess, but my divergence, my difference, my disagreement with Josh is that he is still wedded to this dichotomous Habermasian vision of a space where you have the politicians and the officials who make the decisions, and then the space of the citizens, which is wild and free-flowing, and where magically, somehow, an agenda for the official sphere is supposed to emerge. And I don't buy that, that there's this invisible market of ideas, the invisible hand of deliberation in the wild, that can set that agenda perfectly efficiently.
And I also think that distinguishing between a professional class of politicians and ordinary citizens is, you know, in the end, not a good thing. If you want an authentic democracy, we need to rotate ordinary citizens in and out of power, and specifically in and out of legislative functions, but also in and out of the technical administrative structure where most of the laws are being made, actually. And meanwhile, we should also curb the power of executives and return it to the function of executing the law and not making it. So that's probably the main difference.

So open democracy is not direct democracy, not all decisions being made by the whole populace; it is citizens representing citizens, rather than an elite class representing citizens as elected representatives. Now, I think it would be interesting to come back to this issue a bit later, to see whether digital technology affects how we should adjudicate between these two views. But, Nigel, I want to move on to you and just ask you the question: what do we mean by artificial intelligence? And also to ask you perhaps to expand on something that you said in your book, The Digital Ape. You said, quite strikingly, that you're more afraid of what natural stupidity, rather than artificial intelligence, might wreak in the next 50 years of gradually more pervasive machine smartness.

Yes, thank you, John. It's very instructive to remember that the 1950 paper by Turing, Computing Machinery and Intelligence, was actually published in Mind, a philosophical journal, and it really kicked off 70 years now of intense work around the methods and techniques of artificial intelligence. Those methods and techniques have expanded extraordinarily. When you try to define AI, it's a little bit like trying to define life: there are many variations and many variants. The thing to remember, I think, over the 70 years of the expansion of methods and techniques (and we now have a huge toolbox of methods, from structural encodings of causal principles to statistical methods and the latest forms of machine learning, many different ways of representing and reasoning about the world) is that the thing that's really powered much of this acceleration has been the extraordinary and exponential increase in raw compute power, the technology underpinning the algorithms. So we now live in a world that's completely diffused with ubiquitous or pervasive supercomputers that we hardly acknowledge, running loads of algorithms to recognise speech, faces, shopping patterns, suspicious regions of medical images, and the list goes on.
What we have here is an extraordinary set of task-achieving programmes, very narrow, specific AIs tuned to the specifics of particular tasks. But those are tasks that we've determined are important, worthwhile and profitable. And I think the application of extraordinary computing power driving AI, with access to huge swathes of our data, presents us with a host of ethical challenges, dilemmas of governance and choices about the ends to which it's all put. My view in that quote was that AI augments us: in the best case, it empowers us, and in the worst case, it will oppress and disenfranchise us. And those choices are ours alone, and not likely to be emerging from any self-aware machine any time soon, so let's leave that to one side. My point in that quote was that we might choose to put the machines in the ascendancy over us, but that would be a case of natural stupidity rather than artificial intelligence.

And I think we're seeing fantastic progress in the taming of particular problems that have large search spaces, where the nature of the space being searched and what counts as a successful outcome can be very well specified. We've just seen recently the fantastic results in protein folding, obstacle avoidance, face recognition, many, many examples. But I think what we're appreciating, and what this conversation will explore more, is that all of this can't just be instrumentalist. AI's use will reflect values that we collectively take to be important. And I think along with this idea of democracy there was, in its origination and through its philosophical history, a host of ideas around the values that we most prize: transparency, accountability, tolerance, autonomy, dignity, self-determination, reason, evidence, equity. And I think that's really important. You're seeing a real flourishing at the moment, within departments of computer science, within our conferences, within the community itself, of discussions around value-based computing and human-centric AI.

And I think we all now share this deeper concern. Actually, this is back to Bobbio's very powerful observation that the problem is that it's turning out that representation doesn't seem to be reflecting the interests and opinions of individuals, but rather organised group interests. And what we see is that the procedure now enables large organisations to effect control rather more than individuals and democratic self-government. And I think that's where a source of a lot of people's unease arises. So I've got a very straightforward and, I think, much more instrumentalist view of what AI is able to do at the moment.

Great.
So let me pick up on that point a little bit. So you're not seeing the action as being in the development of artificial general intelligence, replicating human capacities across the board; it's about more narrow applications, and the emphasis is on human choice about which values these more narrow applications will serve. So that's very much an image of AI as a science and technology for which we're responsible. I just want to ask a question to all of you, with just a brief answer, because the question is rather imponderable. Is democracy a technology? Is democracy simply a mechanism for bringing about good results, like good decisions? Or is there inherent value in the democratic process? Or, if both, where does its value mostly lie: in the inherent value of everyone having an equal say as free and equal citizens, or in the fact that through this process we get better decisions than we would through other processes? Just a short answer from all of you. Hélène, do you want to start?

Yes. So I think I'll just answer with an anecdote from Robert Dahl at the beginning of On Democracy, where he reports on the story of Danish Vikings rowing up a river in France, in Normandy, which sticks with me on many levels. They were asked by a messenger coming out from the riverbanks, "What's the name of your master?" "None," they replied, "we are all equals." There's something intrinsically valuable about that. So there's that. But I also think in terms of making better decisions that way as well, though I often get criticised for overemphasising the epistemic dimensions, so I thought I'd start with that.

So you've stressed diversity, epistemic diversity, as conducive to better outcomes. Josh?

Well, I think you're muted again. I am, but no longer. First, on the issue of intrinsic value: yes, and Hélène put it powerfully. It's a form of decision-making amongst people conceived as equals with, as Rawls described, certain moral powers. In his last speech, in Memphis, Martin Luther King said, you know, the great glory of American democracy is the right to protest for right; not just the right to protest, but the right to protest for right. So it's an exercise of that fundamental moral power. So I completely agree with you on the issue of intrinsic value. At the same time, democracy is a form of political decision-making, and political decision-making is a kind of collective problem-solving. I know of very little compelling, conclusive evidence (there's some) that democracy is consistently better at making collective decisions than the alternatives. But it doesn't have to be better.
It just has to be pretty good, because there's a limited willingness to pay the cost of a system that has intrinsic attractions but sucks in other respects. And there's no evidence that it isn't; it's well over the line of being good enough to make it worth having democracy for both of those reasons, but not as the system which somehow optimises on problem-solving capacity. It doesn't need to do that.

Nigel? I think we'll come back to that notion of sometimes setting up an expectation that AI somehow has a more perfect solution it can struggle towards or arrive at. The point about this is that there'll be fallibility all the way through these systems, and there'll be cases we can't take account of. I mean, the ways in which AI can be at the service of the democratic process are around equity, availability of resource and insight, explanation around facts of the matter where they can be adduced, and large-scale participation, bringing groups together. But again, we've seen that experiments with mobilisation at true scale fall into some rather well understood problems now about giving people genuinely heterogeneous experience.

OK, let's move on to some of the issues around the deployment of AI applications in decision-making. There's an obvious allure in having an algorithm resolve important decisions. It's a mechanical, judgement-free procedure, so you're not at the whim of someone's discretion or judgement, and it promises to be faster, cheaper, perhaps more accurate, less biased than humans. There were these harrowing studies of judges engaged in sentencing whose sentences seemed to be influenced by the time of day or whether they'd had lunch. And there's also just eliminating what Cass Sunstein calls noise: unwanted inconsistencies between various decision-makers. So there is this tremendous promise. On the other hand, there's the worry that decision-making through algorithms is somehow incompatible with democratic equality, for example. There have been a lot of studies that talk about the ways in which algorithmic decision-making, say in facial recognition or in criminal justice, amplifies and exacerbates existing inequalities, for example by not being able to recognise people with darker skin tones or by discriminating against people from poorer neighbourhoods and so forth. Nigel, is this endemic to algorithmic decision-making, or can these discriminatory, unfair effects be mitigated?
Well, when we design and anticipate these systems, when we think about the goals that we're going to put them to work on, then we absolutely must reflect on this broad range of potential embeddings. Otherwise, you will end up with the classic pitfalls that we've seen in some of these areas. Now, you know, if you were well accustomed to the notion that this would be a classification system within these boundaries for these outcome types, then that's one thing. But when these things are deployed in very broad, open-textured contexts, you're really heading for trouble. And this is where I think the kind of philosophical context matters. These are not entirely deterministic systems, not in the sense of how they run, but in the sense of how they're deployed and interpreted, the open texture of the environment and society in which they are deployed. And of course, we can point to all sorts of, and we have done in the past in our discussions, very exciting examples of human bias. The Kahneman and Tversky studies are this extraordinary evidence that we're very susceptible to very simple distortions and effects in our decision-making, and perhaps our algorithms can help us edge towards better decisions. That's one thing: rather than putting the system entirely in charge, an earlier generation of AI systems was seen as decision support. What they did, in some modalities quite explicitly, was critique the human decision-making, trying to get them out of a local minimum, some kind of local preference or bias zone they were in. So I think you can deploy this to support human decision-making. But when you allow the system to simply be deployed without a broad appreciation of the cases, of the task to which it is being put, without critically reflecting on that, you will end up with the rather poor examples we've seen recently, I think, in this area, with facial recognition or bots online or whatever that might be.

Anyone else want to come in on that?

I was just thinking that, you know, there are different situations. If you take a plane, you don't really care whether you're flown by a robot or a human being; if you're handed a sentence for a crime you've committed, you absolutely care about the reasons behind the sentence. So I think it comes down to reasons, to justifiability, the kind of reasons you're owed as a human being and a citizen in some contexts. So I suppose even the use of lots, which I'm very much in favour of, in fact, was turned down or avoided by 18th-century philosophers because it had no rhyme or reason.
It was not easy to justify. And so they preferred elections, because elections are based on choice and consent and full rationality. And I do understand the worry about non-self-explicating or non-self-explanatory AI, where you're just given results with no understanding of how you got to those results.

So this is an interesting point: it's not simply that we're interested in a certain kind of decision being within a frame of accuracy or fairness; it's also the reasons for the decision. I think this ties in with Josh's notion of the duty of civility, which is not simply about politeness, but about giving our fellow citizens the appropriate sorts of reasons for the decisions that affect them, especially for officials. Now, there is a kind of major issue here around the explainability of AI. So I just want to quote something from the computer scientist Judea Pearl on the deep learning style of machine learning. He says: I find many users who say that it works well and we don't know why. Once you unleash it on large data, deep learning has its own dynamics. It does its own repair and its own optimisation, and it gives you the right results most of the time. But when it doesn't, you don't have a clue about what went wrong and what should be fixed. In particular, you do not know if the fault is in the programme, in the method, or because things have changed in the environment. We should be aiming, he says, at different kinds of transparency. So you don't know why things went wrong, but presumably that must also apply to the case where things go right. So there's a kind of divorce between outcome and process. What do you say to someone who says, look, I just want to have the right decision; I don't really care about the nature of the process; this is a kind of abstract intellectual concern?

Just, you know, a few things on that. I want to go back to the previous point about algorithms, and I think in a similar spirit to what was said, just one thing to bear in mind, a tour down historical lane. I mean, the development of a theory of algorithms in the 1930s is one of the most extraordinary intellectual achievements of the 20th century. And there's an important point about it that I just want to emphasise. Basically, what happened was: Hilbert has a programme in the foundations of mathematics. He's concerned about completeness, consistency and decidability. Gödel takes care of the first two problems with his incompleteness theorems; the problem of decidability is left. And in '36, '37, Church and Turing solve the Entscheidungsproblem.
Another negative result. And this is one of these cases where you couldn't solve the decision problem until you had a precise characterisation of what an algorithm is. And that's what Turing proposes. I mean, Church had a different one, but Turing's becomes the dominant one: computable by a Turing machine. And now you could conclusively resolve the decision problem. A pure problem in the most abstract, rarefied areas of philosophy and the foundations of mathematics, and that's the basis of computer science. It's an extraordinary case of following out a kind of intellectually pure problem and doing something that has, you know, obviously incredibly profound consequences.

Now, one of the troubles is that I think people don't have a kind of grip on what an algorithm is. And so what you get are statements about how, you know, in allocation decisions of the kind that you describe, "the algorithm did it". OK, this is literally nonsense, literally nonsense, in the following way: there is a countable infinity of algorithms. So if you don't like the one that you're using, there are plenty more where that one came from. And it's a kind of fetishism, attributing a magical property to the algorithm, when you're blaming the algorithm. It's your fault. Look in the mirror. Don't look at the algorithm. I mean, here I'm very much agreeing with what Nigel has been saying about having a kind of clarity about what the values are that you're pursuing, what the sense of purpose is and how things can go. That's all on you. It's on your data. It's also on your model. It's not just a data-driven model. So this kind of blaming algorithms, I think, is letting people off the hook in a way that they shouldn't be let off the hook.

Now, the particular issue that you raised about explainability, that Judea Pearl raises, that's really specifically an issue about deep learning. I mean, it's just a characteristic of it. It's not an AI issue generically. It's a deep learning issue, because of the way that deep learning models work. And it's easy to exaggerate the uninterpretability of deep learning models; lots of people are spending lots of time trying to figure out how you can make sense of what's happening at a node in a neural net. But I think the reason that there's a demand for explainability, appropriately, is not because there's an intellectual curiosity about what's happening in the guts of the system. That intellectual curiosity can't be answered, any more than you can answer the question of why, you know, the photon went through one hole rather than the other hole. I don't think that can be answered. The reason that people are concerned about this is because, as Nigel said, these systems are not perfect and reliable. And you may want to know why it succeeds when it succeeds, but you want to know that partly because you know it's going to fail. It's going to say that your dog is a dishtowel, and you want to know how the [INAUDIBLE] it came up, you know, with a classification like that. And so I think, if I were 25 years old and starting out in philosophy and looking for a really fantastic problem to explore, I would start writing about explainability, because it's an incredibly rich, philosophically and technically complicated area with huge practical importance. And I think there are about 18 different concerns that are being wrapped together under the label of explainability, and there's enormous benefit that could come from somebody doing some untangling around those.

That is a fantastic recommendation that hopefully some of our listeners will take up. Let me move to another issue. So we've talked a little bit about issues of fairness and discrimination with respect to algorithms, and addressed, albeit briefly, this issue of explainability. But of course, another worry that people have about the operation of algorithms is that many of them are fed on big data. And there are issues about that data, and in particular issues around privacy. And of course, a lot of this is the focus of the criticism of what's known as surveillance capitalism and the use of big data in order to push consumer goods at people. But you might think that there's a more serious issue when there's a prospect of advancing public health through the use of big data. So the question I'd like perhaps to address briefly here is whether AI in areas like advancing public health, for example contact tracing and dealing with the pandemic, forces us to reconfigure how we think of the balance between individual rights, like privacy, and the furtherance of public goods, like controlling pandemics and eliminating them. And I just want to quote something that you mention, Nigel, in your book. You say: the single thing that every citizen and every corporate decision maker needs to understand is that the enormous data stores that governments, government agencies, corporations, trusts and individuals hold are as much a key part of national and international infrastructure as the road network.
So there's this idea that these massive stores of data are, in a sense, public goods; we can debate about excludability and so forth, but they're public goods in the sense that they stand to benefit everyone, and benefiting one person is not at the expense of benefiting anyone else. But you claim that we need to open up these sources of data in a way, however, that respects privacy. How do we strike that balance, so that the data can be both open and yet at the same time respect privacy?

Well, there isn't a single dimension of variation on this. We often talk about data as being on a spectrum from open to closed and private, but in fact there are many conditions under which you vary that. I mean, I think what we appeal for in the open data movement is that as broad a foundation of data as possible is made available under open and public licence conditions. And much of that is not about personal data; it's everything from when the trains run to where the roads actually join one another or the legal addresses in a country. You can argue about what you like as your non-personal public data infrastructure. Then there's the stuff that relates to individuals. Now, we're seeing an interesting change in perceptions in the moment of a public health crisis. There is a sense of a collective good. I mean, there are already laws and regulations which allow various intrusions into your liberties and your kind of rights of movement in the case of certain infectious diseases; we're living under them right now. The issue is about how your data is used. Now, I think we often get het up about this because we think it's bound to be about sharing identifiable data. But we've got lots of examples now where the data can remain in highly secure enclaves, behind all sorts of levels of security. AI models that seek to understand the transmission of the virus at this moment, or where the susceptible groups are in the demographic, those models can interrogate that data, and the data never has to leave a particular set of secure data centres, for example. And we know that because our patient records are held in that way, and they're currently linked behind various forms of secure firewall. So the danger comes when that data is just kind of shifted around the place with very little attention and thought to the security. There's a different aspect to this which says, well, maybe that data would be better held close to the individuals whose data it is, and so you use different sorts of rights and interests over that data.
And then the decision is where and how you can aggregate and collect that data together. So we're seeing a very interesting discussion at the moment, not just around the technology for securing data and making it available as and when different groups and interests need it, but around the need for new kinds of data institutions, new kinds of possibly co-operatives and mutuals, where the data is stewarded for public or other collective interests. So I think there's not a single answer to this. It might appear like that when you look at the kind of to and fro around contact tracing, but even then there are good designs and much worse designs for how you do that effectively.

Presumably you're not saying that all this goes through some detailed process of consent by particular individuals with respect to their data?

It can't in all cases. You've got to kind of understand the urgency of some of the uses to which you want to put that data. I mean, again, patient records are a good example. There's a general sense that, you know, within a trusted health care delivery system, there are limits and purposes to which that data can and should or might be put. And I think that we're becoming more sophisticated in our understandings around how you permit and vary that level of linkage. But there has to be very careful characterisation of the rules and governance that sit over the top of that, so we don't see mass exfiltration, for example, of important information and insights to organisations that may or may not have a general broader public interest.

Josh, I know you want to come in on this.

Yeah, you know, I agree very much with what Nigel is saying. I mean, there's interesting philosophical work on privacy, but the technologies of privacy protection are important and sophisticated: secure enclaves, differential privacy, a whole bunch of things. I want to come, though, to the issue about contact tracing, which was a big topic around the tensions that you were describing, potential tensions between protecting privacy and solving a big public health problem. And I think there are some very instructive issues here. I'll start with a terminological point, but I think it has very large substantive implications. There is no such thing as digital contact tracing. There's digital exposure notification. Contact tracing is different from exposure notification. Exposure notification is: you get a piece of information.
444 00:51:29,120 --> 00:51:33,650 The information is that you are within a certain relevant distance, 445 00:51:33,650 --> 00:51:43,790 10 or 15 feet for a certain relevant period of time, 15 minutes to somebody who received a positive test for covid. 446 00:51:43,790 --> 00:51:52,670 Now, I've taken several contact tracing courses. I have certification from Johns Hopkins and from the WHL as I never did any because 447 00:51:52,670 --> 00:51:57,110 the programmes were so badly organised that my volunteering didn't get me anywhere. 448 00:51:57,110 --> 00:52:00,620 Or maybe they were well enough organised that they didn't want me and they knew that. 449 00:52:00,620 --> 00:52:03,680 But anyway, I got I took the courses. 450 00:52:03,680 --> 00:52:10,700 And what you learn in a contact tracing course is the contact tracing is basically a human public health activity. 451 00:52:10,700 --> 00:52:15,860 Step one is letting somebody know about an exposure it. 452 00:52:15,860 --> 00:52:25,850 But contact tracing is what you do after that. It's telling somebody how they can be tested, telling somebody how they can inform their employer, 453 00:52:25,850 --> 00:52:31,010 helping them to get some support while they're in isolation. 454 00:52:31,010 --> 00:52:37,040 It's not it's not exposure notification. 455 00:52:37,040 --> 00:52:39,620 Not simply exposure notifications, 456 00:52:39,620 --> 00:52:55,610 and I think the fact that so much intellectual energy went into the arguments about privacy protection in contact tracing, 457 00:52:55,610 --> 00:53:01,010 is it revealing of a kind of if I can use this, 458 00:53:01,010 --> 00:53:04,580 you know, piece of [INAUDIBLE] terminology over indexing, I mean, 459 00:53:04,580 --> 00:53:14,570 over indexing on the significance of information as opposed to a human public health process. 460 00:53:14,570 --> 00:53:27,620 Now. That said, it's also true, so far as I know that, you know, you look in the United States now, case rates are down and mortality is down. 461 00:53:27,620 --> 00:53:37,520 It's a very good. Things have improved a bunch. And there's a discussion about why there is literally no one, as far as I've read, 462 00:53:37,520 --> 00:53:44,270 literally no one who thinks that the rates are down because of successful exposure notification. 463 00:53:44,270 --> 00:53:50,040 Perhaps a lot of people spent a lot of time developing very privacy, protecting exposure notification apps. 464 00:53:50,040 --> 00:53:52,940 I was involved in one of those projects. It's great. 465 00:53:52,940 --> 00:54:02,780 But there if you look at the literature on success stories and less successful stories about dealing with covid Cross nationally, 466 00:54:02,780 --> 00:54:10,130 it is really not a story about some places that had fantastic exposure notification apps. 467 00:54:10,130 --> 00:54:15,980 The most popular exposure notification app is this Indian app. 468 00:54:15,980 --> 00:54:20,360 I've mispronounce it Oroya Saturnus, millions and millions and millions. 469 00:54:20,360 --> 00:54:25,490 But I don't think anybody thinks it's really done very much good. 470 00:54:25,490 --> 00:54:31,950 So there's, I think the public discussion around. 
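As a toy illustration of the notification rule Josh describes above (flag a contact only if two devices were within a few metres of one another for roughly fifteen minutes and one of them later reports a positive test), here is a minimal Python sketch; the thresholds, record format and key names are hypothetical rather than those of any real app.

```python
from dataclasses import dataclass

# Hypothetical thresholds, loosely following the criterion described above
# (roughly 10 to 15 feet, i.e. a few metres, for at least 15 minutes).
MAX_DISTANCE_METRES = 3.0
MIN_DURATION_MINUTES = 15.0


@dataclass
class Encounter:
    """One period during which two phones observed each other."""
    other_device_id: str
    estimated_distance_m: float   # e.g. inferred from Bluetooth signal strength
    duration_minutes: float


def should_notify(encounters, positive_device_ids):
    """Return True if any qualifying encounter was with a device whose owner later tested positive."""
    for e in encounters:
        close_enough = e.estimated_distance_m <= MAX_DISTANCE_METRES
        long_enough = e.duration_minutes >= MIN_DURATION_MINUTES
        if close_enough and long_enough and e.other_device_id in positive_device_ids:
            return True
    return False


# Example: one long, close contact with a device that later reported a positive test.
history = [Encounter("key-a1", 2.0, 20.0), Encounter("key-b2", 8.0, 5.0)]
print(should_notify(history, positive_device_ids={"key-a1"}))  # True
```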
So I think the public discussion around contact-tracing apps (actually, exposure notification apps), protecting people's privacy, dealing with the public health issue and how to navigate these troubled waters is an expression of a kind of distortion in public discussion. Dealing with covid from the contact tracing point of view is a human public health effort: standing up large numbers of people, which didn't really happen anywhere, to deal with these human needs. It's not about providing somebody with a piece of information that they don't know what to do with and probably won't do anything with.

Hélène, do you want to come in on this?

Yes. I was thinking about this connection between, again, democracy and privacy rights and property rights over our own data. The problem is that, with the current structure of our so-called democratic governments, a lot of these decisions, these trade-offs that you think societies are entitled to make between more privacy and more collective safety, are actually made by groups of people that are not entirely representative of the larger interests of the population. Sometimes they are delegated to a technocratic structure or agency or administrative group that makes decisions on behalf of the rest of us. And so my own take is that I would like to see something like a citizens' convention on AI and digital technology, to talk about these issues from an authentically democratic, representative perspective, if you will. And it could happen at the global level. I know that, for example, Australia is organising something like a randomly selected citizens' assembly on genome editing, and I'm sure we could do something similar on questions of artificial intelligence.

That's a really interesting point. Is the kind of lazy assumption that rights are counter-majoritarian norms, and therefore not the business of majorities but rather of technocratic experts, judges and others, itself a mistake?

That's a good question, actually. You know, when I read your first question, I was thinking: well, actually, are privacy rights democratic to begin with? I mean, it's a liberal notion. Did the Greeks have privacy rights? I don't think so. The business of the private citizen was the business of the city. So it's actually interesting to see how stretched, to use your term, the concept of democracy has become. It encompasses all the things we deem desirable, including liberal rights, including human rights and all these things.
And as I said before, I do think of democracy as a way of life in society, but it doesn't have to be a liberal society necessarily. We could be a little bit more parsimonious in the way we think about a democratic society.

OK, if I could just say a word about that, in support of thinking of privacy as having a value in a democracy: if you're constantly under threat of having everything about your life exposed, the idea that that is going to be empowering to you as a vigilant member of the polity feels a little bit like a stretch. I understand there are a whole bunch of other foundations for the idea of a right of privacy, but I think there is a specifically democratic case there.

I, of course, agree. I do think that if you want citizens who are capable of thinking, they have to have a sort of freedom, an internal space to develop ideas that are not exposed to everyone, so I would second that. But still, I think this triggers the thought that we may want to be, again, a bit more parsimonious about what those concepts cover and defend. By the way, on exposure notification: there is recent research from a group in Oxford who claim to have seen something like a 2.3 per cent decrease in incidence. So, I mean, it is hard to assess.

Absolutely. And a theme that is emerging is that democracy requires genuine deliberation: deliberation that is informed by respect for truth, that offers reasons to our fellow citizens for decision-making, and that is oriented towards the common good. A lot of people now think digital technology is subverting these processes of public reason, and they think very often that there is a kind of unholy alliance between, on the one hand, big tech, which is in the business of sucking up as much of people's attention as possible in order then to manipulate them in various ways, and, on the other hand, exclusionary forms of populism that use social media to push out their message and get people more and more caught up in extremist views and filter bubbles and so forth. And so there's this very negative take. I just wanted to read to you a passage from an op-ed recently published by The New York Times.
The author is the very distinguished scholar Shoshana Zuboff, and she talks about the epistemic coup that is surveillance capitalism, an epistemic coup comparable, well, at a different level, but comparable in some ways with the coup that she thinks took place, or was attempted, on January 6th at the US Capitol. She says the following: "In an information civilisation, societies are defined by questions of knowledge: how it is distributed, the authority that governs its distribution and the power that protects that authority. Who knows? Who decides who knows? Who decides who decides who knows? Surveillance capitalists now hold the answers to each question, though we never elected them to govern. This is the essence of the epistemic coup. They claim the authority to decide who knows by asserting ownership rights over our personal information and defend that authority with the power to control critical information systems and infrastructures." So beyond any issue of algorithms, there is this deeper structural issue that she's talking about: this epistemic coup being, in a sense, perpetrated by big tech, in some sense in cahoots with government. Can I get any responses to that claim? Yes, Josh?

OK, so, you know, Shoshana Zuboff has been writing about these issues for a long time. She's a fantastic sociologist with great insight, and Surveillance Capitalism is a very interesting book. I read a statement like that and it reminds me of something that I often see amongst people who have a deep association with technology companies, which is a kind of weird "we are so great" chest-beating, and then a few years later, "we are so horrible, we're the most horrible". It's always the most, whether it's the best or the worst. I think what she says in that statement is incredibly exaggerated, and I don't mean the kind of antitrust concern about dominance over communication infrastructure, but the claim about controlling what people know. So let me just say a few things about American politics, which is what she's writing about, despite its parlous state. First of all, it's true that Donald Trump got 74 million votes. It's also true that Joe Biden got six million more than that, the most votes anybody has ever gotten running for president. It's also true that there are a bunch of states that were central to Biden's victory, including Georgia and Wisconsin, where there was a huge long play in state politics over a very long period of time, with Stacey Abrams in Georgia,
Ben Wikler, amongst others, in Wisconsin: years of work, years of hard political work, that produced that change. Thirdly, there has been an absolutely decisive break in the Democratic Party from the market fundamentalism that dominated the party really since the 1970s. This is not just about Clinton; it goes back to Carter and Gary Hart, a long time. If you look at the current debates about the stimulus, you see that there's a real willingness to go big, to do something about climate, and a real willingness to go big and do something about this deep, persistent, profound set of issues about racial justice. Now, the parlous state? Yes, absolutely. But I lay those facts alongside what Shoshana Zuboff says, and I just feel like it's a 30,000-foot look at something where things look a little different when you get down even to 29,000 feet.

Yeah. Hélène, can I get your reaction?

So, I don't know, I always feel like this is presented as really new and provocative, but with an "epistemic coup" all I'm thinking of is the actual political failure behind it. I remember Zuckerberg testifying in front of Congress and what a catastrophe that was; how incompetent and ignorant those senators in particular were, I mean, a disgrace. So, yeah, then surprise, surprise, these companies take over and do what they want, and we're left in our filter bubbles and echo chambers, and no one really has any way to figure out what is truth and what is a lie. I think the basic political problem is a lack of real regulation and the over-representation of corporations in Congress. In the American Congress, 84 per cent belong to the wealthiest ten per cent of the population. Hello. The working class is completely ignored, and then you wonder why, you know, the rise of Trumpism. So basically the over-representation of economic interests is the problem, not an epistemic coup by some companies. That's pointing the finger in the wrong way.

This is a really good point. And I think in your book you mention empirical research which says that if you control for the political views of the wealthiest 10 per cent in the US, then majorities have no real causal impact on decisions; majorities are not causally efficacious once you control for the economic preferences of the richest people. And I kind of agree with Josh here. I think this is, by the way, Achen and Bartels, in a very famous 2014 article testing theories of democracy.
I mean, one can of course argue about it, but it sounds awfully plausible to me. I just wonder whether this very vehement attack on big tech actually shrouds bigger problems about capitalism and its relationship with democracy. So let me just put that out there. John Rawls, your teacher and mentor, Josh, famously thought that his theory of justice was incompatible with capitalism: that you had to have a property-owning democracy, or some sort of socialist system that respected liberal rights. Is someone like Zuboff actually papering over the deeper issue? Because if you think about it, even on an epistemic basis, prior to social media having any great impact, how did the mainstream media deal with the Iraq war? Pretty much lined up in favour of it. How did it deal with the Vietnam War? So it's not as if there's this great history of successes.

Well, you know, I'm mindful of the fact that I'm 69 years old, and if I say I've heard this music before, that this is nothing new, it's like the geezers talking. That's what you do: you say, oh, nothing new here. But I don't know. There we are. Consider just the case of political fragmentation. It's more or less, and I understand this is not a nice example, but it's more or less the fragmentation of the media in Germany in 1928. Same deal. Now, as I say, it's not a very nice example, but it had zero, obviously zero, to do with digital technology. If you look at the outsized growth in inequality, whether wealth inequality or income inequality, it's not so clear: I mean, it may have something to do with the success of tech firms, but on nobody's story, I think, is that the biggest driver. The outsized growth of inequality has something to do with the nature of our politics. You look at the failures, to go back to the previous point about the break from market fundamentalism; you look at the politically consequential failures around trade policy. They had nothing to do with digital technology. And all of this is consistent with saying that there are real, serious antitrust issues about the dominance of some players, and there's a lot of action on that. What it took to get the action in the United States was for people to reject the Chicago school antitrust view that antitrust is all about consumer welfare as registered in price, and to see consumer welfare as registered in product quality, not just price.
Dina Srinivasan has written fantastically on how Facebook competed on product quality; in particular, Facebook competed against MySpace on privacy protections, and then MySpace was gone. So the idea of attributing all of this stuff to the latest piece of technology is, I think, a little much.

I want to let Nigel come in before we go to questions from the audience.

I think that's pretty much what Josh just said, but I would add that one thing that is available to us, I think, in an age of huge amounts of data traffic, is to apply the same technology we're using now, much of it used to shape the traffic and to allow these video conferences to happen, to help us see where the information is flowing. That may sound like a side issue, but the reason it matters is that we have very little visibility, either as individual citizens or as regulators, of what the market concentrations are actually like. And this isn't just about the commercial context; it could just as well concern where concentrations of data sit, in everything from safety data to scientific data. There is a real need for much more transparency about the actual flows of data to their destinations. Unfortunately, what we have instead is an arms race in which everybody is trying to obscure and variously protect that information. If open data, which I see as a dual to much of the prospect of algorithmic decision-making, is to offer a hope, it's to say that there are certain classes of information we could recognise as a public good, things like beneficial ownership, or where the data flows actually run in this so-called surveillance capitalism, and then we can address some of these alleged concerns: are they real or are they imagined, are there actually concentrations that are invidious to the democratic conditions we want to foster?

OK, I'm going to move now to some questions from the audience. The first one is from Nicholas Hincker, who says: there's so much progress on recommender systems learning so much about individuals from data; are there ways for this to help democracy rather than harm it? So what about the positive side? Do you think that perhaps these developments can actually help bring about a more radical form of democracy?

I've tried to imagine what it could do in a completely reinvented form of democracy, where a lot of it takes place online and you'd be registered on a platform called Citizen Book, you know, like a sort of democratised, non-profit version of Facebook.
And you'd have artificial intelligence at your fingertips to help you synthesise knowledge about, I don't know, various candidates at the global level or the European level or some similarly large-scale political level, to help you sift through various profiles and then choose the right candidate in an electoral mode, or the right answer on some issue or question when there's a sort of electronic referendum. So I could see these technologies as augmenting and speeding up the process of making collective decisions. That's one way in which I was trying to imagine that these tools could be useful. You could also envisage, and I know Jim Fishkin at Stanford has an artificial-intelligence facilitator for online deliberation, something that cheapens the cost of large-scale democratic innovations like deliberative polls and citizens' assemblies. So, in the end, and I already had a chance to argue this point before, I do think that the danger or the goodness of our AI depends in part on the quality of our democracy. If we are in control, if we know how to instrumentalise technology to good ends, then we'll be fine. But our democracies are in terrible shape; in fact, they are hardly democracies at all. And just to go back to the point about democracy as a way of life: I also think the question of the workplace is really important. I wish we could all say we have no masters, like those Vikings. But if you're in a corporation and your health care depends on your boss, and in this pandemic you could lose your job and get depressed, you know, you have a master. And can you be free? So all these things are connected, in my mind.

Another question, from Lloyd Rushton: regarding personal data, should we impose fiduciary duties on those who seek to utilise such data, in addition to empowering data subjects through collective initiatives? Nigel?

I think that's certainly something that already exists in certain cases: with sensitive data, you're not simply free to do what you want with this stuff. I think an evolving regulatory and legal landscape in which there are duties of care over how data is actually deployed is something we will see emerge as we start to understand its scope and range, and we will become very acutely aware of it. You see this particularly, for example, in areas like age-appropriate design. If you want to look at some of the most harrowing examples, then look at where persuasive design is used in interacting with children on children's apps.
So I think there is every prospect of needing to think about a balance of interests that places actual fiduciary constraints on the use of data.

If I may, I do think also, and I don't think we're in disagreement here, that fiduciary responsibilities are fine, but you've still got a bunch of data that is a threat surface that somebody is going to go after. So I think it's very important to think about the technology side of privacy protection, and there are a whole bunch of tools. There's differential privacy. You do on-device processing. You have a principle of data minimisation, so that you only collect the data that are needed. You have some sunset law that requires that the data go away. You have secure enclaves. Because what you want is both to have the data protected against threats and also, frankly, to be protected against your own temptations to do the wrong thing; you want to protect it against yourself. So, fiduciary obligations, yes, but I think discharging those in a way that wraps the protections in pretty secure technology is an important strategy.

We've talked about some of the power structures and power imbalances that enter into the operation of democracy. But Carrisi asks a question which I'm going to summarise: how can we reach a democratic consensus about fairness with respect to the operation of algorithms, when we know that people's conceptions of fairness are so heavily shaped by cultural norms and differ so greatly? There's so much cultural fragmentation within our democracies; whose conception of fairness gets to win out? Who overrides whom, she asks.

I think that's the wrong way to ask the question. It's not about who overrides whom; it's about who convinces whom. And again, I am just going to bring it back to my favourite thing, which is citizens' conventions. One took place in France recently on climate change, a very controversial issue. You had renters against landlords over mandatory housing renovations to limit greenhouse emissions. If you look at it as a negative-sum game, where some people are going to lose, some people are going to win, and some people's views of fairness are going to override other people's views of fairness, it's never going to work. But if you look at it as an orientation toward the common good, where we all reason as citizens engaged in a common enterprise, trying to achieve a common goal, in that case curbing greenhouse emissions by 40 per cent relative to 1990 levels in a spirit of social justice,
it actually works, if you institutionalise it properly, give it time, facilitate it well and structure it well. It's a success story from my perspective, in terms of what they managed to produce as consensual solutions to very thorny value disagreements and practical disagreements. They voted at the end, during the seventh session, on those proposals, some of them likely to be very difficult to pass through the regular legislative system, and they voted with a high percentage of approval, something like eighty-five per cent. They convinced each other. So I believe in that. That's the whole idea of deliberative democracy.

John, if I could add a point on that: I agree with what Hélène is saying, but I also want to pick up on another way the question is put, which makes me a little uncomfortable. I mean, it's a great question, and I can tell you it comes up all the time. You won't be surprised to hear that if you start by asking "what's your conception of fairness?" and you go around the room, honestly, whether it's collective deliberation or individual reflection, it's going to be all over the place. If instead you say, here's a more concrete problem: is this fair? Let me give an example. There was a story in the Times a year ago about speech recognition systems: existing speech recognition systems have a higher word error rate on African-American Vernacular English. I don't know anybody, maybe I don't know enough people, but I don't know anybody who thinks, oh, that's fine, there's no fairness issue there. And if you say, well, if you use African-American Vernacular English, all you have to do is code-switch and not speak in African-American Vernacular English and it'll be understood fine, no one thinks that's a fair solution to the problem. So if you take concrete cases rather than abstractions about your theoretical definition of fairness, I think you find considerably less of the kind of unending disputation. I don't mean it goes away, because there are hard cases, but considerably less of it than you find if you start with high-level abstractions about what fairness is.
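To make the kind of concrete fairness check Josh is pointing to tangible, here is a minimal Python sketch of a per-group word error rate audit: compute the word error rate separately for each group of speakers and compare the averages. The tiny dataset, the group labels and the simple word-level edit-distance metric are purely illustrative, not drawn from any real benchmark.

```python
def word_error_rate(reference, hypothesis):
    """Word-level edit distance (substitutions + insertions + deletions) divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


def wer_by_group(samples):
    """samples: list of (group, reference_transcript, asr_output); returns mean WER per group."""
    totals, counts = {}, {}
    for group, ref, hyp in samples:
        totals[group] = totals.get(group, 0.0) + word_error_rate(ref, hyp)
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}


# Illustrative toy data: an audit would flag a systematic gap between groups as a fairness concern.
toy_samples = [
    ("group_a", "he is going to the store", "he is going to the store"),
    ("group_b", "he finna go to the store", "he found a go to the store"),
]
print(wer_by_group(toy_samples))
```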
So resolving ethical disagreements doesn't mean having to penetrate down to the philosophical foundations of those disagreements?

No, that's right. And what I find, and I think this is very important, and it goes back to something you said at the beginning about the importance of philosophy in this enterprise, I agree with you: philosophy is important in the enterprise, but it would be hopeless for philosophers to participate in it were it not for the fact that most people are morally intact. Philosophers don't teach people morality. They help to clarify moral thinking; they help to make it more salient. But if you weren't in a universe of people who, on the whole and for the most part, had a kind of facility with moral distinctions and ideas, the enterprise would be hopeless.

Just to amplify that: this area has been made practical by operationalisation in context. We're brought into contact with really interesting classes of moral questions in particular cases. And I think that is why the discussion between computer scientists, application builders and ethical philosophers is such a rich conversation, because you can actually ground these difficult questions in particular implemented realities. And this has a powerful pedagogic aspect. One of my colleagues, Milo Phillips-Brown, teaches ethics to engineers, but not by saying, well, here is Aristotle and here is Mill, now go ahead and apply that to your problems; rather by working within the issues that arise in their research, a kind of bespoke ethics.

And therefore, even if you have deep theoretical disagreements, that doesn't prevent you being able to resolve those more concrete problems, as Josh said. I'm going to ask two questions, likely the last, and I'm going to run them together. One is a question from José Martí, who asks: is there any specific contribution that AI might make, in particular, to strengthen the quality and extension of democratic deliberation? So that's about AI improving democratic deliberation. But there's another question, from my colleague Ted Lechterman, who says: suppose artificial general intelligence can someday generate legislation that is maximally just; political equality is preserved, because no one has any influence at all. What would this reveal about democracy's value? I guess the point would be that, at that point, democratic decision-making would become redundant.
So: enhancement or redundancy. Pick up either of those.

On enhancement, I feel like I gave some examples of what I had in mind, but I just had another thought, actually, which is about mass deliberation, because the big puzzle for me is whether we can have quality deliberation, the kind that would legitimise laws and policies, at the large scale, or whether we always have to go through a representative moment, where we delegate the task of deliberation to a subset, whether randomly selected or not. And from what I understand, maybe artificial intelligence could help there, because it helps speed up and clarify issues. Maybe; I'm not entirely sure. But I could envisage a role for artificial intelligence in connecting a humongous number of people, to get to that ideal of all the brains connected together via technology. I'm not sure it's remotely possible, but that's at least an idea I have. And then what was the second question?

What would happen if we had a perfectly just AI taking over. It would be God, right? I mean, that's pretty much who that would be.

I guess it depends on how much we care about having reasons for what we do, and whether we want to have a master or not. Because if it's God, or something like it in some respects, and I go back to the Vikings, I think we don't want to live with a master. I think there's a value in that which trumps even getting the right outcomes.

Isn't it always the case that there are people who could make better decisions than us? For example, someone could make far better sartorial decisions on my behalf, but I don't surrender that.

I mean, I'm not saying that I would want to; I think there's a threshold of mistakes below which you can't let things go. But I think there's a value in learning from your own mistakes, in not just being handed the right answer, and in deploying yourself as a human being in life through your own agency, I guess.

Josh, maybe, what would happen?

No one could make better sartorial decisions for me than me. I'll just put a stake in the ground there. And if they tried to, I would make their life miserable.

Well, let's see, Nigel's taking his tie off. So he's coming back on that.

In one respect, you know, with respect to the redundancy line as well: wouldn't that be like God? And I guess what I would hope is that it would be a Deus absconditus. Yes, it would have the good grace to let us get on with it, the good grace to let us continue to make a mess, and sometimes a blessing, out of our crookedly timbered lives.
I do wonder whether, you know, there have been various attempts at liquid democracy, liquid democracy meets the blockchain, all sorts of ideas about how you might change the democratic process, or the like. One of the things that's been striking about recent developments in large-scale platforms has been the emergence of citizen science: a participatory engagement that hasn't been about doing all of the science, but has been about really important aspects of helping the science succeed, tapping into people's enthusiasm to either collect the primary data or to clean data up. And I think that sometimes we elevate the decision-making to a point where we forget that there are lots of ways of participating that these platforms can facilitate. We found some fascinating data: in fact, the Zooniverse group here in Oxford, Chris Lintott's group, did some fascinating work where they could look at large-scale participation, run machine learning across those participations, and find the sub-groups within that large participation who were particularly effective for certain sorts of task or contribution. And I think there's a lot to be had in reframing this in a slightly broader sense of what it is to participate.

Absolutely. People often forget the Universal Declaration includes the right to participate in science and culture, and arguably that is a dimension of the broader, informal public sphere that is highly relevant to democratic participation. On that optimistic note, Nigel, I think we're going to have to draw to a close. Thank you so much to our three wonderful speakers; I really enjoyed this invigorating discussion. We've had to touch only lightly on some really important themes, but I think it's been incredibly stimulating, and hopefully it's given our audience a taste of the sorts of discussions we're hoping to have in future at the institute. Thank you to the three speakers; I really appreciate you taking the time to be part of the launch. And thank you to everyone for tuning in and listening to us tonight. Thanks very much.

Thank you. All right. Thank you, everyone. Thank you. Thank you.