Welcome, everyone, to this online event, which is part of the Colloquium on Ethics presented by Oxford's Institute for Ethics in AI. My name is John Tasioulas and I'm the director of the Institute. Our topic today is the ethics of algorithmic decision making, and I'm delighted to welcome our main speaker, Professor Cass Sunstein from Harvard Law School.

Cass will be well known to many of you. He is the most cited American legal scholar and an exemplary practitioner of interdisciplinary research, drawing on fields such as behavioural economics and philosophy, amongst others. He has written on innumerable topics, including in defence of socio-economic rights, on legal reasoning, and on the rule of law, as shown by his recent book Law and Leviathan, written with Adrian Vermeule. And he is a leading proponent of nudge theory in regulatory policy, developing that idea within a framework that he describes as libertarian paternalism. His intellectual range is so great that it extends to a galaxy far, far away, as illustrated by his book The World According to Star Wars. In addition to his many academic accomplishments, Cass is also a prominent figure in American public life. He was, for example, head of the Office of Information and Regulatory Affairs in the Obama administration.

Our topic today, as I said, is the place of algorithms in public decision making. If we think about legal adjudication, for example, the allure of automated decision making is evident: we face, throughout the world, massive problems of access to justice. Rights people have as a matter of law remain merely paper rights if they cannot be effectively enforced, for example because of prohibitive costs or excessive delays. Algorithms present themselves as tools for achieving faster, cheaper justice. But of course there are also challenges. If algorithms operate on the basis of big data, a vast mass of prior human decisions, isn't there a risk that they will reproduce and amplify the biases inherent in those past decisions? And even if they are free from bias, can algorithms be devised that are sensitive enough to balance all the considerations a judge must take into account when, for example, sentencing a defendant? And even if they can reproduce just outcomes, can algorithms do this via a just process? Often the workings of algorithms are opaque even to their makers, which strikes against the demand for transparency. And even if we have transparency about how an algorithm reaches a decision, its process of reaching that decision is unlikely to resemble the reasoning process engaged in by human judges. So we might end up with a justification of the wrong kind.
Finally, even if all these problems can be sorted, many will find it creepy and dehumanising that tough decisions that bear on our most basic interests are being taken not by our fellow citizens but by machines. Doesn't this undermine the reciprocity and solidarity amongst citizens that we hope for in a healthy democratic society?

In the face of this kind of scepticism, Cass has been a leading advocate of the positive case for incorporating algorithms in human decision making. He sees them as potential cures for the bias and inconsistency exhibited by human decision makers. So we are very pleased to have Cass with us today, and we are also delighted to have two distinguished commentators. First, Professor Ruth Chang, who is the Chair of Jurisprudence in the Oxford Faculty of Law. Ruth's approach to law is through a deeply original theory of the nature of practical reason that she has developed, so she is an ideal commentator for this event. Our other commentator, no less ideal, is Professor Sir Nigel Shadbolt, who is Principal of Jesus College and a professor of computer science at the university. Amongst his many other roles, he is chairman of the Open Data Institute, and he is the co-author, along with Roger Hampson, of The Digital Ape: How to Live in Peace with Smart Machines.

Now, I should say there will be an opportunity for Q&A later on, so please put your questions in the YouTube comments section, and I hope to get round to as many of them as possible. So let me now invite Cass to explain to us why, as he puts it, algorithms eliminate noise, and why that is very good.

Yes, great. Thank you, John. It's a great honour to get to speak to you all at this amazing set of sessions and at John's new organisation, and to have these commentators is also an honour. I confess that I am particularly thrilled to be doing this, because this is the first time I'm going to be presenting on a multi-year research project with Daniel Kahneman and Olivier Sibony. We've been working on this subject — which, I'm sorry to say, has involved some noise in a very different sense — for many years, and algorithms are central to what we're exploring.

I think, to orient the discussion, I'd like to present a figure that's called the duck-rabbit image. Can we get that up? There you go: see the duck-rabbit. OK. I'd like each one of you to say, inside your own head, whether you see a duck or a rabbit. Can you do that? See the duck-rabbit? Or just see the duck. Or just see the rabbit. Think to yourself, if you would: what is it that you see? Or do you see both? OK.
We can take it down, thank you. And what I'm going to suggest is that the duck-rabbit exemplifies both bias and noise. Around Easter time, people tend to see a rabbit and not a duck. At other times of year, people generally see ducks more than they see rabbits. And the best theory for this is bias: that is, what you see is a product of your preconceptions. Around the time of Easter, people are thinking bunnies, and evidently around the time of, let's say, January or June, people are thinking ducks, because they think about ducks more than they do about rabbits. So let's just say that preconceptions are a determinant of perception, and this is associated with Wittgenstein's use of the duck-rabbit image.

I'm very confident that, in addition to bias of the sort just described, there was scatter amongst those of you who thought about what it is that you saw. There was scatter in the sense that some of you saw duck and some of you saw rabbit, and that suggests that you as a group were demonstrating system noise, where system noise means that a system is showing unwanted variability. I want to stipulate 'unwanted' and say a few things about what that means. If you get on a scale and weigh yourself in the morning, the scale could be biased but not noisy. My scale always shows me six pounds heavier than I actually am. I don't know why, but it just happens to do that. I know for a fact there are other scales that are noisy, in the sense that they will show you a little heavier than you are or a little lighter than you are; they scatter around the true value.

To get clear on the difference between noise and bias one more time: you could imagine a team of shooters — archers — who are constantly going northeast of the target. That would mean that they are biased in a predictable way. Or you could imagine a team of shooters that is scattered all around the target, and they are noisy. These are different forms of error, where one is systematic and the other is random. We know from medical practice that doctors show both intrapersonal and interpersonal noise, and this is associated with error, where some doctors will be likely to do certain things, let's say, in the morning but not in the afternoon. That's intrapersonal noise. And in domains where you wouldn't expect to see noise, doctors show interpersonal noise with respect to reading X-rays, for example, or diagnosing heart disease or lung cancer. The medical profession is actually quite noisy, and the noise can be ascertained without knowing what the true value is, meaning we detect scatter.
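[Editorial aside: the scale and archery distinction Cass has just drawn can be seen concretely in a minimal sketch. The numbers below are invented; the point is only that bias (the systematic offset) requires knowing the true value, while noise (the scatter of the readings) can be measured from the readings alone.]

```python
import statistics

# Hypothetical readings for a person whose true weight is 150 lb.
true_weight = 150.0

biased_scale = [156.1, 155.9, 156.0, 156.2, 155.8]   # consistently ~6 lb heavy
noisy_scale  = [146.5, 153.2, 149.1, 151.8, 148.9]   # scattered around the truth

def bias(readings, truth):
    """Systematic error: average reading minus the true value (needs the truth)."""
    return statistics.mean(readings) - truth

def noise(readings):
    """Unwanted variability: standard deviation of the readings (no truth needed)."""
    return statistics.stdev(readings)

for name, readings in [("biased scale", biased_scale), ("noisy scale", noisy_scale)]:
    print(f"{name}: bias = {bias(readings, true_weight):+.1f} lb, "
          f"noise = {noise(readings):.1f} lb")
```

[The biased scale comes out about +6 lb with almost no scatter; the noisy scale has little bias but substantial scatter, and its noise figure is computed without ever consulting the true value — which is the sense in which noise can be detected without knowing who is right.]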
It might be a product of a bias, in the sense that some doctors might be particularly inclined to read an X-ray in a certain way, and that reading may be erroneous. But to detect noise, we don't need to be clear on what bias is at work or on the magnitude of the bias.

OK. To get a little more specific about the operation of noise: this algorithm-enthusiastic project actually began when Kahneman was doing an assortment of work for an insurance company, asking the underwriters at the company to come up with an assortment of premiums for clients that wanted to insure with the company. And what was discovered at the firm — these are real people whose job it is to be underwriters — is that they were extremely noisy in the premiums on which they settled. That is, the magnitude of variance across similarly situated underwriters was really high. Now, given what we know about interpersonal noise, that itself is surprising but not stunning. What was stunning was that the executives at the company were startled at the magnitude of noise: they predicted that the level of noise they would see would be a small fraction of the level of noise that they actually observed. Which gives rise to — and I'm building up to algorithms — a mantra, which is: wherever there is judgement, there is noise, and more than you think.

To get clarity on the nature of noise, I think we have to make distinctions amongst different kinds of noise. And to do that, and to see some of the comparative advantages of algorithms, I'd like you to play a little game with me, a little exercise in which no one's going to be asked to write or say anything, but in which you'll get the intuition. Imagine that we all share certain friends. Maybe that's even true — everyone is John's friend; let's just assume that. But we're not going to do this with respect to John. Let's assume that we are ranking our friends along three dimensions, on a scale of one to five, with respect to kindness, diligence and intelligence. OK, so we're taking our friends Carol and Carl, and we're ranking them in terms of kindness, diligence and intelligence. And let's suppose all of us know Carol and Carl very well. I've actually engaged in this exercise with my spouse, and there is going to be a lot of noise. That is, Carol might get a four from you and a two from someone else who knows her, with respect to kindness. And this will be somewhat startling, because you both know Carol and Carl really well.

OK, what's going on in these distinctions — which map on, I suggest, to multiple domains of human judgement? A first possibility is occasion noise. Let's call occasion noise intrapersonal noise, in which a human being will make a different judgement depending on the occasion.
So it may be that in hot weather someone is going to get a higher rating in terms of diligence than in cold weather. Which is to say that it may be that a judge, the day after his favourite football team won, will be more lenient on a criminal defendant than the day after his football team lost — and there's actually data suggestive of exactly that. Or it might be that a judge, on a day in which something has happened in the world that's very cheering, will be more lenient than on a day in which something has happened in the world that's the opposite of cheering. This is just a way of saying that one of the reasons the exercise involving Carol and Carl produces noise is that people are in different moods because of different external circumstances, and that produces intrapersonal noise, which will vary from person to person.

The second source of noise, to which algorithms are not going to be subject — let's call it level noise. It might be, with respect to kindness, that person A is just a more lenient grader than person B, such that a person who's kind of kind will, on a scale of zero to five, let's say, typically get a four from one person where someone else will give that person a three. With respect to punishment judgements, and with respect to judgements about how to rate applicants, let's say for university or for a job, some people will be different from others in the sense that they will show level noise. They are systematically tougher or more lenient than others, where the words toughness and leniency are placeholders for a broader array of evaluative judgements.

The dominant factor in noise, we're finding empirically, is not occasion noise or level noise. We call it pattern noise, by which there's an interaction between people's assessment and the particular category that they are assessing. It's not a systematic level-noise difference. So a judge, let's say in the criminal context, might be really tough on auto theft but not really tough on financial crimes, or the opposite. Or a recruiter at a firm might be really upbeat, let's say, about people who came from socio-economically challenging backgrounds and not very excited about people who went to Cambridge or Oxford, and there might be someone who has an opposite set of patterns. The idiosyncratic nature of individual judgement, as a source of noise, is often a product of pattern noise. And this can be found in medicine and law, as well as in Olympic evaluation of, say, ice skaters, et cetera, and in the award of damages for wrongdoing.

There's a fourth source of noise, which I think you'll get immediately with respect to the Carol
and Carl exercise, which is scaling noise: how you use a scale. Putting occasion noise and level noise and pattern noise to one side, whether the scale is pounds or numbers, some people will think, well, Carol is a four, she's pretty kind, and someone else will think Carol is a three, she's pretty kind. They don't disagree about anything except how to use the scale. And I found, with my little exercise with my spouse, that scaling noise was a significant contributor to our different numbers.

OK. I hope at this point minds are focussing on the likelihood that, wherever there is human judgement, occasion noise, level noise, pattern noise or scaling noise might be the culprit. And I hope you're also thinking that group deliberations can be a magnifier of noise. So if you have a group of people who show, let's say, a high level of punitive intention with respect to wrongdoing, and another group that shows a somewhat lower level, the likelihood that you're going to see system noise because of the differences between the two groups is actually really high, as groups magnify the effects of their antecedent inclinations.

OK. For examples of both interpersonal and intrapersonal noise that will reflect all four of these things, let's just notice that for the criminal justice system, for hiring, for medical judgements, for judgements of underwriters and judgements of juries, for judgements of whether to admit people to certain programmes or judgements about whether to promote people to certain positions, we are going to see noise. It may be produced by an unshared bias, but it might be very hard, and deceiving, to identify the bias that's predictive of the noise; you might see the scatter without knowing exactly what is the source of the scatter.

I'm smiling a little bit as I talk because I'm thinking of something I hadn't thought about for about eight months, which is that at an early stage of this project I discussed this with someone on the admissions committee of a very distinguished university in the United States that shall go nameless. And what the person on the admissions committee said basically went like this: you have no idea — our admissions decisions are so noisy. It really depends on which person happens to be looking at the file. And this is a hidden secret. It has nothing to do with affirmative action or the standard things that are discussed. It's just: we're really noisy and we don't do anything about it. OK.
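[Editorial aside: the distinction between level noise and pattern noise can be made arithmetic with a toy decomposition. The ratings table below is invented (three hypothetical raters scoring Carol and Carl for kindness); the sketch simply separates each rater's overall strictness from the rater-by-case interaction that Cass calls pattern noise.]

```python
# Invented kindness ratings (scale 1-5): one row per rater, one entry per person rated.
ratings = {
    "rater_A": {"Carol": 4, "Carl": 2},
    "rater_B": {"Carol": 3, "Carl": 3},
    "rater_C": {"Carol": 5, "Carl": 2},
}

people = ["Carol", "Carl"]
grand_mean = sum(r[p] for r in ratings.values() for p in people) / (len(ratings) * len(people))
case_mean = {p: sum(r[p] for r in ratings.values()) / len(ratings) for p in people}

for name, r in ratings.items():
    rater_mean = sum(r[p] for p in people) / len(people)
    level = rater_mean - grand_mean          # level noise: overall strictness or leniency
    print(f"{name}: level offset = {level:+.2f}")
    for p in people:
        # pattern noise: what is left after the case effect and the rater's level effect
        pattern = r[p] - case_mean[p] - level
        print(f"   {p}: pattern component = {pattern:+.2f}")
```

[Occasion noise would only show up if the same rater scored the same person differently on different days, and scaling noise is the part of the level offsets that comes purely from using the numbers differently; neither is separated out in this sketch.]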
Given the omnipresence of noise — and part of our work over the last years has been investigating noise in different domains that have been taken separately, and just uncovering the scandal that is noise — it would be a very good idea to think, given the possible solution that algorithms might be: what's wrong with it? Let's notice what's obviously wrong with noise. It's unfairness, where similarly situated people are treated differently, and that might involve the criminal justice system, or who gets disability benefits, or who gets hired or who gets promoted. It might involve costs. So the intuition about noise is that the costs — that the errors — will cancel themselves out if there's scatter, and that's a very hard intuition to get past. They will cancel out if we take the average or the median, but in life, typically, the costs are additive. They don't cancel out; they compound each other. Which means, for example, in the insurance company, the fact that there's so much noise, just the computation suggests, produces massive economic harm to the firm. And it would be very nice to be able to isolate the domains in which unfairness is the problem and the domains in which spiralling costs are the problem.

OK: how algorithms potentially will eliminate bias, and by definition will eliminate noise. I'm just going to give two examples. One thing that is kind of a primitive algorithm is the Apgar score for newborns: appearance, pulse, grimace, activity and respiration. It doesn't involve computers, but it involves something like an algorithm. And the effect of the Apgar score for health assessments and helping little kids is huge. It eliminates bias on the part of doctors of all kinds, and given the Apgar score, the level of noise doesn't quite vanish, but it comes really close to vanishing.

The most dramatic study of a kind of algorithm, literally, with respect to bias and noise reduction involves bail studies, where replacement of human decisions by an algorithm shows that crime can be reduced by up to 25 percent with no change in jailing rates, if we substitute a simple algorithm for the human decision; or you can reduce jail populations by up to 42 percent with no increase in crime rates. So along either margin you can do a whole lot better. We know that the two sources of human inferiority to algorithms are, first, bias — not racial bias, but current-offence bias, let's call it — by which human beings overvalue the current offence in deciding whether to give people bail. Algorithms don't.
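[Editorial aside: looking back at the Apgar example for a moment, the score is a checklist that adds up five signs, each scored 0, 1 or 2, for a total of 0 to 10. A minimal sketch of that kind of scoring rule is below (the example inputs are invented); because the same inputs always produce the same output, a rule like this is noise-free by construction.]

```python
def apgar_score(appearance, pulse, grimace, activity, respiration):
    """Sum of five signs, each scored 0, 1 or 2, for a total of 0-10."""
    components = (appearance, pulse, grimace, activity, respiration)
    assert all(c in (0, 1, 2) for c in components), "each sign is scored 0, 1 or 2"
    return sum(components)

# The same inputs always give the same score: no occasion, level, pattern
# or scaling noise, whoever applies the rule.
print(apgar_score(appearance=2, pulse=2, grimace=1, activity=2, respiration=2))  # 9
```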
That current-offence bias is eliminated by algorithms. And judges are really noisy: judges show intrapersonal and interpersonal noise, and algorithms eliminate that.

OK, a puzzle: you could have a really dumb algorithm that is not noisy and that would be inferior to human judgement. So, as John suggested, if an algorithm is trained on some characteristic X, where X is a product of race or sex — for race, let's say, arrest records — it may be that the algorithm would be noise-free but racially inflected, potentially more so than human beings would be. If you said that everyone convicted of a certain crime gets a five-year sentence, or that no one gets a promotion in 2020, it would not be a noisy system, but it would be a biased system and therefore an erroneous system. The basic idea here is that the choice between algorithms and, let's say, human decision making, in situations in which the algorithm is kind of dumb, is a rules-versus-standards problem, which involves the costs of decisions and the costs of errors, where it's potentially the case that the algorithm will be inferior to human decision making. That suggests that every algorithm has to be interrogated both along the dimension of bias and along the dimension of noise. Always the algorithm will do really well along the dimension of noise, and that's very good. Thank you.

I really appreciate that. So now we're going to turn to our first commentator, who is Ruth Chang.

Thank you. Thank you. It's a great honour to be here, and I really appreciate the opportunity to be part of this conversation with Cass and Nigel about ethics and AI. Thanks to Cass for a typically brilliant, interesting and thought-provoking talk, and to John and TORCH for inviting me to take part in thinking about it.

So here's a picture of the lay of the land on work on algorithms. On one side, over here, you have computer scientists and engineers, led mostly by industry dollars, excitedly working away at developing algorithms and applications for them, such as radiological diagnoses, decisions on bank loans, facial recognition, autonomous vehicles and much more. Over here, you have the academy, especially humanists and social scientists, writing articles and books about how dangerous algorithms and AI in general are, trying to slow down AI until we know what we're doing. Now, here's an interesting fact: the two camps agree on one thing — that AI needs regulation, and it needs it yesterday.
Even the leading computer scientists, who can see the great dangers inherent in the current development and application of algorithms, regard what's happening over here in the development and application of algorithms as a bit of the Wild West, and they're begging the folks over in the academy to help bring a sheriff to town. And I'll just show my cards: I think we all have a moral imperative, those of us in the academy, to think — you know, this is what we should be doing. We should be figuring out ways to make sure that the AI we build serves rather than harms the human condition.

So it's against this background that Cass, who is about as expert on the subject of regulation as anyone on Earth could be, offers us a somewhat surprising message: he's a cheerleader for algorithms. So instead of giving us a recipe for how to regulate algorithms, he offers us hope for algorithms — that they can help solve bias and, most importantly, noise. And there's another point, in glossing what he said, that we could emphasise: that even if we can't solve the bias problem, we can use algorithms to solve the noise problem. Here are two things I wrote down from what he just said. He said: to detect noise, we have to solve — oh, sorry — to detect noise, he said, we don't have to solve the problem of bias. And later on he said: algorithms remove noise by definition. So if algorithms can help us be more consistent in, say, insurance underwriter evaluations and beyond, that's a very good thing. So maybe we have a use here for algorithms that everyone can agree on as being a very good thing, and we don't have to get all nervous about regulating it.

But I worry that the focus on noise obscures the fact that you can't identify noise independently of tackling the headline problem for AI, which is, at least for now, bias. Now, bias follows readily from the fact that our social structures advantage some over others. Cass is the first to recognise this. And how the questions we put to algorithms are framed is very much a reflection of, and reproduction of, those background conditions. This is what John was referring to when he talked about the training data. So, just to take a familiar example, new mothers must choose between going back to work and putting their infant in day-care, on the one hand, or staying at home to care for their infant child and losing their place on the career ladder. Now, you can imagine devising an algorithm that helped a mother determine the probability of her being able to re-enter the workforce after a prolonged spell of caretaking, evaluated against the probability that her child would
— I don't know — turn out to be a drug addict or a rebellious teenager because she spent her early years with a paid worker. By setting up an algorithm to answer this question — which alternative should the new mother choose? — we build in the bias against new mothers inherent in our current social structures. Right? In today's social structures, you can't both caretake your infant and remain where you would have been on the career ladder. So algorithms can quite easily obscure, rather than make more transparent, the underlying bias in our social structures. The question we should be asking about the algorithm is not how we should think about the trade-off between leaving work and not raising your own child, but how we can arrange our social structures so that mothers can be with their infants during the workday, if they so choose, without any sacrifice to their position on the work ladder. The danger of algorithms is that they just further entrench our existing biases.

Now, the same might be said also about the case of bail bond hearings. Cass suggests that what matters to a judge in a bail bond hearing are two things: the probability that the defendant will commit a crime, and their flight risk. But why should we accept or assume that only these probabilistic judgements are what bail bond hearings are about, or should be about? A good judge will care about so much more: the safety of the defendant out on the streets, for example, if he has flipped on his fellow gang members; the fact that a figure of authority has shown trust towards him and the effect that might have on his life going forward; and so on. John Cassilis has recent work cataloguing these and other ways in which bail bond questions involve multiple factors and much more complexity than the experiments that Cass refers to allow. In this case, the use of algorithms again simply entrenches and reproduces biases in framing the question that leave out the human element, if you like, in decision making.

OK, but what about noise? Surely we want our judgements and decisions to be consistent, and if algorithms can help us with that, we should be enthusiastic about it. Is noise different from bias? If it is, and algorithms can help reduce noise, then maybe we can sidestep all the worries about bias and find a safe place where algorithms can be used with gusto. Unfortunately, I don't think this is the case. This is because, in short, one person's noise is another person's bias. Or, put differently, as lawyers put it, treating like cases alike — at the heart of the rule of law — is itself a substantive normative matter.
You can't determine whether two cases are being treated alike without first ascertaining that the differences between them are irrelevant to the question at hand. And that is a normative judgement about whether there is bias in treating a difference as relevant or not.

Here's an example. Suppose last Friday your teenager asked you whether he could stay up with his friends past midnight — socially distanced and wearing a mask, of course. You've just read that Trump came down with the virus, so you say no. Three weeks later, your teenager asks you again. Well, now case counts are diminishing, he's been cooped up in his room, and you're worried he's losing his connections with his friends. So you say yes. You've made two facially inconsistent judgements. But, of course, there is no noise here. There are differences that, normatively speaking, make a difference and justify different judgements. We can't make a judgement that two judgements are inconsistent without first ascertaining that the differences in the circumstances of the two cases are differences that do not make a normative difference to the judgement. I think the same can be said about Cass's example of ranking friends according to criteria of kindness, diligence and intelligence.

So the general point I'd like to raise is that we need to restrict our use of algorithms, and AI generally, to help solve only one quite restricted type of problem: the computationally tractable. I suggested that you can't ascertain something as a case of noise unless you first solve the problem of bias. So noise isn't a separate, safe niche for algorithms; we're back to the headline problem for algorithms, which is bias. But you might say, well, let's just be extra specially careful in formulating the correct questions for algorithms, to minimise bias as much as possible, and then we're off — we can use algorithms everywhere. But I think the real problem is that algorithms are limited to computationally tractable problems only. And deciding whether to stay home and care for your infant — and, I would submit, granting or denying bail, or ranking your friends — these are not computationally tractable problems.

So what is it to be computationally tractable? Well, this is a large question, but at a minimum, to be computationally tractable, a problem has to have an adequate numerical representation of the factors involved in the decision, along with a numerical representation of how the various factors weigh against one another.
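[Editorial aside: as a minimal illustration of this minimum condition, here is a toy weighted-sum decision rule over invented factors, weights and a threshold. Whether any such numbers could adequately represent a real decision like the bail or childcare cases is exactly what Ruth goes on to question.]

```python
# Invented factors and weights for a toy "grant bail?" rule (illustrative only).
factors = {"flight_risk": 0.2, "offence_severity": 0.6, "community_ties": 0.8}   # each scored 0-1
weights = {"flight_risk": -0.5, "offence_severity": -0.3, "community_ties": +0.2}

def decide(factors, weights, threshold=-0.25):
    """Computationally tractable in the minimal sense: numbers plus numerical weights."""
    score = sum(weights[k] * factors[k] for k in factors)
    return score, ("grant" if score >= threshold else "deny")

print(decide(factors, weights))
# Change one weight and the outcome can flip: the trade-off is explicit, which is the
# transparency point made later, but it presupposes the factors can be scored at all.
```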
But there's a large literature in philosophy that shows, convincingly to my mind, that most of the problems we care about are not even in principle computationally tractable problems. Is there really a scale of kindness according to which we can rank our friends with respect to their kindness? And if there is such a scale, can you really put the kindness of your good friend Carol on it, as opposed to your good friend Carl? Many philosophers argue that it is constitutive of being friends that you can't do this.

So I'm going to end by considering the duck-rabbit case. It's an odd case, and I want to suggest that there's some reason to think, well, it's really not a case of bias or noise. The truth, we all know, is that the picture is a duck-rabbit: it's neither — sorry — it's neither just a duck, and it's not just a rabbit. So one moment I look at it and I judge it's a duck, and then the next moment I look at it and I judge it's a rabbit. That appears to be a noisy set of judgements. But now we look at the differences in the circumstances in which I make those judgements, and we see that those differences are relevant to justifying the judgements. Neuroscientists tell us that the explanation of why we sometimes see a duck and why we sometimes see a rabbit has to do with the pattern of our eye movements: if our eye alights at a certain spot and then travels to another spot, we'll conceptualise a duck — and, you know, similarly for a rabbit. So when I judge that I see a duck, and then a minute later I judge that I see a rabbit, those judgements are facially inconsistent, but they're not really inconsistent. They're not really noisy, because there are relevant factors that explain, and in some sense justify, the judgements I make.

So I'm going to end with a little point about transparency. One thing about algorithms is that it seems like they list all of the factors that are in play, and then we can see how they should be weighed up against one another. That allows us to say: if we decrease the importance of one factor and increase the importance of another, look at the outcome. And that's pretty cool — we can learn things about the trade-offs between different factors. But I don't think that anything an algorithm does by way of transparency is something that — I don't know how else to put it — a philosopher, a person who is actually thinking about the various factors, can't do, and can't do better. And I'll leave it at that. But thank you, Cass, for that really interesting talk.

Thank you so much, Ruth, for a great set of comments.
And now I'll pass over to Nigel Shadbolt. Nigel?

Thank you very much, and thank you, John, and thank you to TORCH for organising this. And of course it's a huge pleasure to be responding to Cass's excellent talk and compelling observations — a really powerful enumeration, I think, of our susceptibilities to different kinds of noise, bias, et cetera in human judgements, and how algorithms might help. And I really enjoyed Ruth's brilliant insights also.

So one of my roles here at Oxford has been to help establish the Institute and help develop the original case, and I'm now on the Institute's steering group. The founding principle was to locate the Institute in the Faculty of Philosophy. We wanted the Institute to confront substantive and real ethical challenges using the best philosophical arguments, insights and minds we could muster. Now, of course, the Institute has got to reach out and be informed by a wide range of other disciplines — computer science, a whole range of mathematical, physical, engineering, medical and social sciences, and law, politics and economics. It must reach out widely, because the questions that it must address are both technical and societal. And tonight's discussion is a really excellent example of such a set of questions — a set of ethical questions. And they revolve precisely around the issues of when to use AI and when not to use it, around whether its results are fair or equitable, or whether there is sufficiency of access to the AI systems themselves.

I'm also a professor of computer science here at Oxford, and I've been researching AI for almost 40 years, and in that time I've seen the various high points and low points of the subject, the various enthusiasms and disappointments that have followed in its wake. And I was reflecting back to the 1980s when, as a young researcher, we were busy building knowledge-based systems, sometimes called expert systems. These were rule-based engines using a range of technologies to explicitly represent and reason over a problem domain — from designing computer hardware to diagnosing medical conditions, from planning logistics to improving the design of aircraft. Some of my colleagues, I remember, even attempted to codify the British Nationality Act; that turned out not to be amenable to logic-based reasoning at all — perhaps no surprise there. But many of the systems were built by taking the expertise of humans and codifying it through a process of knowledge engineering, and the resulting systems often raised significant questions.
Were they going to replace the experts whose knowledge they codified? Could we trust their results? Were they able to explain or account for their results? Could you ever hope to accommodate the rich range of context that humans are able to marshal when solving problems? The challenges of AI are thus not new, but with the scale and power of new hardware and software and the availability of data, there's a new urgency to the questions.

So back in the 80s, one common approach to help people not to worry so much was to reframe the system as a decision support system. It was to be seen as augmenting, not replacing, human experts and decision makers. And under this rubric, algorithms would help where people succumb to bias or functional fixity, or else were locked into a particular way of looking at the data. Indeed, there was a style of knowledge-based system referred to as critiquing systems, and they were explicitly designed not to replace the expert but to ensure that the full range of factors had been taken into account when reaching a decision, that alternative search paths had been pursued and other possible outcomes evaluated.

Now, the tension between human problem solving and algorithmic reasoning is not a new one, and certainly we could do quite well, I think, to see what our modern generation of deployed AI can learn from the lessons of the past. And one of the most interesting lessons was this recognition that we need a marrying of human and machine capabilities.

So, as I've noted, modern AI in fact comprises a broad suite of methods, some using explicit, rule-based reasoning and others various forms of neural network systems, where the ability to look inside and interrogate the internal state of the algorithm is more problematic. Now, one of the compelling arguments that both Cass Sunstein and, in the background papers provided for this event, Daniel Kahneman make is that sometimes effective decision making doesn't have to be as complex as people think, and that relatively simple algorithms can do better than humans; that decision making in humans is variable in many contexts, that individuals are inconsistent both within and between themselves, and that they're susceptible to a range of biases and produce inherently noisy behaviour. I can't diminish that wonderful work — the seminal work of researchers such as Daniel Kahneman, Paul Slovic and Amos Tversky, who showed just how pervasive cognitive bias was, how the use of particular heuristics overwhelmed the natural statistics of situations.
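[Editorial aside: a small worked example of the kind of base-rate problem that the heuristics-and-biases literature uses — the numbers here are illustrative, not from the talk. With a rare condition, even a fairly accurate test produces mostly false positives, which representativeness-driven intuition tends to miss.]

```python
# Illustrative base-rate problem: a condition with 1% prevalence,
# a test with 90% sensitivity and a 9% false-positive rate.
prevalence, sensitivity, false_positive_rate = 0.01, 0.90, 0.09

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_condition_given_positive = sensitivity * prevalence / p_positive

print(f"P(condition | positive test) = {p_condition_given_positive:.2f}")
# ~0.09: intuition anchored on the test's accuracy often guesses something near 0.9,
# which is one way a heuristic can overwhelm the natural statistics of the situation.
```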
And I remember reading the heroic attempts by psychologists of human decision making to rescue statistically normative models: if we weren't real Bayesians — perhaps we weren't able to modify our estimates in a statistically defensible way — then perhaps we were degraded Bayesians, defeated by the inherent noise and uncertainty of the world. But it's pretty clear, as Cass and people like Daniel Kahneman have been saying for a long time, that in most cases humans, even statisticians, succumb to the biases of availability, anchoring, representativeness and so on. So the elimination of some of these tendencies might seem to be a good thing, particularly if much that's touted as advanced AI is relatively straightforward predictive statistics, and it can help.

But then the questions revert back to the ones that Ruth has already alluded to: how far is the reality and context of decision making captured in the impoverished context of the algorithm? And one of the challenges that arises in applications like sentencing and bail, particularly if automated, is also this sense of what it is to be explicable. Some of the work in my own group has involved studies on how system explicability should work, and there are clear choices here. Do I offer up as an explanation the inputs to the algorithm? Or a comparison to similar cases? Or a contextualisation within the distribution of cases learnt over — the demographics of the cases learnt over — or the extent to which an outcome is sensitive to particular features? This aspect of the application of algorithms in contexts such as bail and sentencing calls out another fundamental concern in AI ethics: the reasonable expectation of contextualised explanation.

Of course, there might be an expectation that the explanation relates to the actual machine processing — how does the machine do it? — and we know that, in the case of many modern machine learning systems, this can be very opaque. There is no account that can be offered up that we would recognise, and the systems do not represent the world in the way we do. In many pattern recognition systems we literally see this in so-called generative adversarial networks, which generate counterfeits that don't look anything like the confidently predicted outcomes that the original networks predicted.

Just an interesting remark on noise: there's actually a feature of some computational algorithms designed to literally eliminate it, but in some others noise is actually used constructively.
For example, the fluctuation of signals can be used as a searching mechanism that drives computation and the search for a solution. But that's not the noise we're talking about here — I just thought that was an interesting point.

Can we be confident that the noise and bias in the cases that we do talk about here — the cases that train these algorithms that we seek to deploy — are amply dealt with? And again, Ruth made the very good point: do we really know what we're looking at when it comes to the source and nature of this noise? Do we know that effective sampling and representation might not be an issue in itself? And is there even a gold standard in the decision making itself? When we apply algorithms in these contexts, are we content that the nature of the explanations the systems use in their decision making is reasonable? But again, perhaps the question that we come back to is: what are the goals, and whose goals are we affecting, in the algorithms we deploy, and what kind of bias is this? Whose objective function is being optimised? And there's the essential question of the social embedding of these systems. You can make a judgement that two decisions that appear to be inconsistent can be contextualised to be appropriate for the situation. But, all of this to one side, I'd just like to say how much I've enjoyed the observations and insights that both Cass and Ruth have shared with us tonight. Thank you.

Thank you so much for your insights as well, Nigel — that was fantastic. There's a very rich array of comments there, Cass, and I hope you won't feel obliged to respond to everything, but perhaps you might want to pick out some and sort of take five minutes, because we also want to get the audience involved as well.

These are fantastic comments, so I'm grateful, Ruth, to you, and Nigel, to you, for them. Let me try to be brief — I'm not going to be permitted a lot of words, I think — because there's so much there for philosophers, lawyers and others. Bias has charisma; noise, not so much. Bias is a star; noise is offstage. The basic goal of this is to restore the balance there. With respect to everything we care about, noise is really important and fully neglected. Take the following questions: will someone have a heart attack? Will this product sell? Will a candidate be elected? Will the defendant flee or commit a crime? For all of those, the relevant people are noisy — extraordinarily noisy. That can create extreme unfairness and produce systematic error that is very costly, whatever values we care about. We can speculate, and maybe even demonstrate, that a bias accounts for the noise.
That is, that some people are optimistically biased and others not, with respect to all of those questions. But we don't need to do that in order to identify noise. With respect to Ruth's point that one person's noise is another's bias: I want to be very careful about that, because these studies depend on noise audits, in which the same people are seeing exactly the same case. And some judges, because their sports team won or because it's cold outside, end up being more punitive. So it would be hard to say that they have some normatively relevant reason to be stricter or less strict than others. Or underwriters turn out to have had a really happy morning, so they quote a low amount. In what I'm describing — and this is dominant in every domain I've described — it's not as if there's a morally or otherwise relevant difference that accounts for the noise. There's not a normative foundation that shows that we have wanted rather than unwanted variability, in which one person is right.

There's no evidence — zero evidence, after careful investigation — that the differences amongst the judges in the bail study depend on judges caring about such things as Ruth referred to. It's an empirical hypothesis which is quite plausible; it just turned out not to be true. There's no evidence that the factors that humans might care about, and that intuitively they ought to care about — which, by the way, are legally prohibited, so a judge who's relying on them is engaging in civil disobedience — are doing the work. There's no evidence that judges are doing that. Every factor tested turns out not to account for what judges do. It's noise.

Ranking friends might not be the nicest thing to do. What's instructive about the example is not that people think "I think Carol's really kind" or "I think she's not". No — it's that people use the scales differently, or some people are in better moods than others. That's what's interesting about the exercise. And it's not about ranking friends; it's a heuristic for thinking about hiring, admitting, suspending, giving people Social Security benefits, fining people for environmental harms, imposing damages awards for pain and suffering. Those are cases in which unwanted variability — not normatively relevant fine-tuning by human beings — is the driver. And the question is: what are we going to do about that?

I think some of the concerns about algorithms are in the grip of a picture of human beings seizing on relevant factors to which algorithms are indifferent. That sometimes happens, sure.
But often the factors that are driving human differences aren't normatively very relevant things. 459 00:53:52,420 --> 00:53:58,690 They're just scaling noise or occasion noise or something else that we can't discern. 460 00:53:58,690 --> 00:54:04,780 Should we reformulate the question for bail such that we consider those other things? 461 00:54:04,780 --> 00:54:11,260 Maybe. And then an algorithm can do that. If we get an algorithm to do that, it won't be noisy. 462 00:54:11,260 --> 00:54:18,570 That's good. Nigel made a bunch of really interesting points, about seven different things. 463 00:54:18,570 --> 00:54:26,500 I'm going to just mention one, which is that it's tempting to think algorithms should just augment and not replace human 464 00:54:26,500 --> 00:54:32,200 beings, to ensure a full range of variables are considered that the algorithm is alert to, 465 00:54:32,200 --> 00:54:38,350 while allowing the human being to be alert to things that the algorithm might not be alert to in certain circumstances. 466 00:54:38,350 --> 00:54:43,500 That's exactly the right way to go. And a checklist is often a way to do that. 467 00:54:43,500 --> 00:54:48,040 There are domains in which that will replicate the problem, 468 00:54:48,040 --> 00:54:56,050 where the human being who is being augmented will be biased and noisy compared to the algorithm. 469 00:54:56,050 --> 00:55:02,830 So with respect to that: sometimes yes, sometimes no. I think with respect to race and sex discrimination, 470 00:55:02,830 --> 00:55:08,530 it's absolutely right and indeed fundamental to see that race or sex inequality might be 471 00:55:08,530 --> 00:55:14,570 in the background and the algorithm might be piggybacking on that or aggravating that. 472 00:55:14,570 --> 00:55:16,510 That's really, really important. 473 00:55:16,510 --> 00:55:28,810 But I think it wrong-foots the analysis, because to see race and sex as the first thing that comes to mind when we think of algorithms is to narrow, 474 00:55:28,810 --> 00:55:36,460 to truncate really radically, the domain of things to which algorithms are relevant, which was why I started with: will someone have a heart attack? 475 00:55:36,460 --> 00:55:42,570 Will this product sell? Will a candidate be elected? Will the defendant flee or commit a crime? 476 00:55:42,570 --> 00:55:54,090 Done. Thanks. Thank you, sir. 477 00:55:54,090 --> 00:55:57,150 Thank you so much, Cass, that was excellent. We've got a series of questions. 478 00:55:57,150 --> 00:56:01,440 Can I just ask you one question, just to clarify a little bit about noise? 479 00:56:01,440 --> 00:56:07,910 So noise is not mere inconsistency, but has to be, in some sense, unwanted inconsistency. 480 00:56:07,910 --> 00:56:15,180 It's unwanted variability, unwanted variability. So one of the difficult questions, and I think Ruth kind of alluded to this, 481 00:56:15,180 --> 00:56:19,950 is you're going to have to make a substantive judgement about what is unwanted variability. 482 00:56:19,950 --> 00:56:23,890 So say you think, for example, in the sentencing case, that there are pervasive 483 00:56:23,890 --> 00:56:29,670 incommensurabilities in the values in play, such that there isn't a particular sentence 484 00:56:29,670 --> 00:56:37,800 that is the correct sentence, typically, but there'll usually be a range of sentences that could be imposed upon a defendant.
485 00:56:37,800 --> 00:56:45,810 And there situations will arise where different judges will impose, in the same sort of case, 486 00:56:45,810 --> 00:56:50,610 different levels of punishment, but they're all falling within the range. 487 00:56:50,610 --> 00:56:56,250 And we think that maybe this could be a good thing, because sentencing is a kind of human interaction. 488 00:56:56,250 --> 00:57:01,680 So I'm interacting with this particular judge, who is someone of a more lenient disposition, 489 00:57:01,680 --> 00:57:09,570 let's say. Let's put aside the possibility that his team won the grand final yesterday. And the other judge is someone of a more strict disposition, 490 00:57:09,570 --> 00:57:16,380 but within the range, the eligible range. Are you suggesting at this point that that kind of inconsistency is unwanted? 491 00:57:16,380 --> 00:57:20,370 Or could that be an acceptable sort? My view is it's a scandal 492 00:57:20,370 --> 00:57:31,250 if Cam, let's say, gets a five-year sentence and Tom gets a three-year sentence and there's no relevant difference between Cam and Tom. 493 00:57:31,250 --> 00:57:34,640 That's not merely undesirable, it's a scandal. 494 00:57:34,640 --> 00:57:41,930 Now, if there's some characteristic of Tom such that the judge is stricter or more lenient than with Cam, 495 00:57:41,930 --> 00:57:46,730 it's fair. So Ruth is completely right on that point. 496 00:57:46,730 --> 00:57:50,010 So you're putting a lot of emphasis on this, what they call horizontal equity. 497 00:57:50,010 --> 00:57:56,510 That is such an important consideration that the consideration I'm putting forward, 498 00:57:56,510 --> 00:58:02,390 allowing judges to make a decision within the range, gets sort of comprehensively trumped? 499 00:58:02,390 --> 00:58:06,590 Well, a range is helpful because it reduces the magnitude of noise. 500 00:58:06,590 --> 00:58:14,540 If one person gets probation and the other one gets life imprisonment, that's a very, very scandalous situation. 501 00:58:14,540 --> 00:58:21,380 But if one person gets five years and another gets three, that's deeply unfair. 502 00:58:21,380 --> 00:58:30,860 All it depends on is a lottery. So a way to put it is that the existence of noise suggests that there are a lot of lotteries 503 00:58:30,860 --> 00:58:36,920 out there, and they are involuntary lotteries. Whether you get asylum in the United States, 504 00:58:36,920 --> 00:58:41,510 it turns out, depends on a lottery: the judge to whom you're assigned. 505 00:58:41,510 --> 00:58:47,060 And let's stipulate that there are some judges who will definitely grant asylum and some who definitely won't. 506 00:58:47,060 --> 00:58:56,270 So it's parallel to your hypothetical. Still, it's deeply unfair that similarly situated people have their lives either up 507 00:58:56,270 --> 00:59:03,330 ended in a bad way or sorted out in a good way just by virtue of the lottery. 508 00:59:03,330 --> 00:59:10,370 And I think the human mind doesn't naturally see things the way I am suggesting, which suggests one possibility: 509 00:59:10,370 --> 00:59:18,080 I'm just wrong. Another possibility is that bias is something which makes our eyes light up, while with noise 510 00:59:18,080 --> 00:59:23,390 they kind of glaze over. OK, I'm going to let Ruth come back. 511 00:59:23,390 --> 00:59:28,190 She wants to raise something in response to what you've said, but if you could do this very succinctly,
512 00:59:28,190 --> 00:59:32,330 so we can move on to the Q&A. So just quickly: 513 00:59:32,330 --> 00:59:36,910 even a case where, take the Israeli judge case, 514 00:59:36,910 --> 00:59:43,390 after having lunch the judge is in a good mood, and so he's more likely to grant parole. 515 00:59:43,390 --> 00:59:47,880 Is that noise when, counterfactually, 516 00:59:47,880 --> 00:59:53,470 he wouldn't have granted parole before lunch for the very same case? 517 00:59:53,470 --> 00:59:57,730 I'm not sure. Right? So I hear your picture: 518 00:59:57,730 --> 01:00:01,690 a world that's kind of strict, everything by the book. 519 01:00:01,690 --> 01:00:06,850 It's better. But life is not like that, right, Cass? We know that life is full of luck. 520 01:00:06,850 --> 01:00:11,620 There's moral luck: whether or not the child runs in front of your car and you kill it. 521 01:00:11,620 --> 01:00:22,750 And we're part of human institutions, not machine institutions, where human interaction, as John said, is part of the system. 522 01:00:22,750 --> 01:00:35,080 If there is a small range, the discretion of a judge seems to me to be important, because the bias he shows after lunch may actually be more accurate. 523 01:00:35,080 --> 01:00:39,460 Right? So that will tell us whether the bias, or the noise, is wanted or unwanted. 524 01:00:39,460 --> 01:00:50,710 That's all. I take your point. So this is a real standards issue, where it might be that the judges before lunch are systematically too harsh, 525 01:00:50,710 --> 01:00:58,360 so they are inaccurate by reference to a normative standard, let's say, but noiseless. 526 01:00:58,360 --> 01:01:03,970 And it might be better to have system noise, with leniency shown after lunch, 527 01:01:03,970 --> 01:01:15,020 so the unfairness we get is outweighed by the absence of noise-free, let's say, harshness. 528 01:01:15,020 --> 01:01:21,640 So I completely get that. Still, the idea that we should be satisfied with an institution in which whether you get 529 01:01:21,640 --> 01:01:26,050 a harsh sentence or a lenient sentence depends on whether you got scheduled after 530 01:01:26,050 --> 01:01:37,220 the judge's lunch: that's really unfair. OK, let me ask you two questions from the audience, which I think are kind of similar. 531 01:01:37,220 --> 01:01:43,820 I'm going to try to run them together. So the first question is, are algorithms worse than group polarisation? 532 01:01:43,820 --> 01:01:49,240 Is polarisation actually a manifestation of reducing interpersonal noise? 533 01:01:49,240 --> 01:01:52,630 And the second question is, when the training data comes 534 01:01:52,630 --> 01:02:04,720 from consistently biased human decision makers, will the trade-off be reduced noise but increased bias? Completely right, in response to the second. 535 01:02:04,720 --> 01:02:16,270 So if you have a noise-free algorithm that's either stupid or biased, we might well prefer the noisy human beings. 536 01:02:16,270 --> 01:02:25,030 And that's really important. I think often the critics of algorithms treat that as a trump card against algorithms rather 537 01:02:25,030 --> 01:02:30,790 than an invitation to discuss how to make either the human beings better, through Nigel's suggestion, 538 01:02:30,790 --> 01:02:36,430 maybe as advised by the algorithms, or make the algorithms better. 539 01:02:36,430 --> 01:02:38,510 That would be one possibility.
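To make the trade-off just described concrete, here is a minimal sketch, not part of the talk itself, using entirely hypothetical numbers and a stipulated "true" sentence: mean squared error splits into squared bias plus variance, with variance standing in here as a measure of noise, so a noise-free but biased algorithm can still score worse than noisy but roughly unbiased humans.

import statistics

# Hypothetical numbers for illustration only: several judges and one algorithm
# all assess the same case, as in a noise audit.
true_sentence = 4.0                        # stipulated "right" answer (an assumption)
human_judges = [3.0, 5.0, 4.5, 2.5, 5.0]   # noisy, but roughly centred on the truth
algorithm_outputs = [5.5]                  # always the same output: zero noise, some bias

def bias_squared_and_noise(judgments, truth):
    # Mean squared error decomposes as MSE = bias^2 + variance,
    # where variance (spread across judgments) is used here as the measure of noise.
    mean = statistics.mean(judgments)
    bias_sq = (mean - truth) ** 2
    noise = statistics.pvariance(judgments)
    return bias_sq, noise

for label, judgments in [("humans", human_judges), ("algorithm", algorithm_outputs)]:
    b, n = bias_squared_and_noise(judgments, true_sentence)
    print(f"{label:>9}: bias^2={b:.2f}  noise={n:.2f}  MSE={b + n:.2f}")

On these made-up numbers the noise-free algorithm comes out worse (MSE 2.25 against 1.10 for the humans); shrink its bias and the comparison reverses, which is the sense in which the answer above is an invitation to improve the algorithm rather than a trump card against it.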
540 01:02:38,510 --> 01:02:45,700 Now, group polarisation means that groups tend to end up at a more extreme point in line with their pre-deliberation tendencies, 541 01:02:45,700 --> 01:02:50,420 and that can create a lot of system noise if you have several groups. 542 01:02:50,420 --> 01:02:58,210 So think of three groups of people deciding what to do about something, whether to bring a lawsuit. 543 01:02:58,210 --> 01:03:02,710 Some who are inclined might get really excited: let's do it. 544 01:03:02,710 --> 01:03:07,840 Some who are disinclined might, in groups, get really disinclined: no, let's settle. 545 01:03:07,840 --> 01:03:14,410 And the algorithm, if it's a good algorithm, will be more accurate 546 01:03:14,410 --> 01:03:19,320 and won't run into the trap that group deliberation sometimes creates. 547 01:03:19,320 --> 01:03:29,340 If it's a bad algorithm, we'll have noise-free law firms, but they might be, to Ruth's point, more error prone. 548 01:03:29,340 --> 01:03:33,060 Let me ask you a question which also, I think, maybe applies to Ruth. 549 01:03:33,060 --> 01:03:40,560 The question is, shouldn't algorithms be judged by comparing them to the status quo? 550 01:03:40,560 --> 01:03:44,700 I think this is an interesting one: what's the baseline of comparison? Is it the status quo? 551 01:03:44,700 --> 01:03:50,400 This is alluding to Ruth's example: the status quo is to ask other people, which is equally biased, 552 01:03:50,400 --> 01:03:58,990 whereas an algorithm could be transparent about how it works, how that process works. 553 01:03:58,990 --> 01:04:10,750 I'm thinking that, you know, some of us are teachers grading, and when some of us grade a paper, we don't ask other people. 554 01:04:10,750 --> 01:04:16,660 We just do our best. And the grading is noisy. 555 01:04:16,660 --> 01:04:22,270 And some of us who are aware of our own noise with respect to grading 556 01:04:22,270 --> 01:04:31,060 start moving toward an algorithm, with more checklist-like approaches, which are more like a statistical thing. 557 01:04:31,060 --> 01:04:34,750 And that's less noisy and probably less biased. 558 01:04:34,750 --> 01:04:46,540 The point is right that if you have a decision maker whose resource is a single other person, that person might be more or less biased and more or less noisy. 559 01:04:46,540 --> 01:04:50,950 And do we need to know what the algorithm is aiming to maximise? 560 01:04:50,950 --> 01:04:52,930 I think that's part of Ruth's point, and yours 561 01:04:52,930 --> 01:05:00,460 also, John. So if you're trying to maximise crime reduction, it might be that that's not what you should be after. 562 01:05:00,460 --> 01:05:06,010 And I'd say then have an algorithm that tries to figure out what you actually are after, 563 01:05:06,010 --> 01:05:13,510 or have the algorithm be an adviser, which will tell you, to Nigel's point, about flight risk, and then you can consider other things. 564 01:05:13,510 --> 01:05:18,940 Ruth, did you want to respond to the status quo issue? And just quickly. 565 01:05:18,940 --> 01:05:24,790 Yeah. So you ask your friend what to do and your friend says X. 566 01:05:24,790 --> 01:05:27,250 Then you ask your friend some more questions, right? 567 01:05:27,250 --> 01:05:36,550 So that's what you have to do: you have to figure out why your friend said X. And algorithms are only as good as the inputs we give them. 568 01:05:36,550 --> 01:05:39,760 So can I just end with a quick thought experiment?
569 01:05:39,760 --> 01:05:49,390 Let's suppose we get, you know, all the philosophers, lawyers and the like together and we figure out all the relevant factors to a question. 570 01:05:49,390 --> 01:05:54,230 And, you know, we're pretty sure that the weighting is within this range and so on. 571 01:05:54,230 --> 01:06:02,140 OK. I think the deepest difference between people who think, yeah, we need to have a lot of algorithms take over human life, 572 01:06:02,140 --> 01:06:07,180 and the people who are worried about that is that it really goes back to what John said: 573 01:06:07,180 --> 01:06:18,090 do we want to live in a world where there are human interactions that are subject to a bit of luck, contingency and so on? 574 01:06:18,090 --> 01:06:25,510 It's part of the human condition. Or do we want to live in a world where, you know, by the way, it's constrained, 575 01:06:25,510 --> 01:06:30,670 right? The luck is constrained. So it's not like it's the death penalty or you're free to walk. 576 01:06:30,670 --> 01:06:34,360 It's five years or just four and a half years. 577 01:06:34,360 --> 01:06:40,900 Do we want to live in that kind of world, or do we want to live in a world where we know you're going to get five years, right? 578 01:06:40,900 --> 01:06:49,690 Five years, five years, five years. And I, for one, want to live in the other world, the messier world. 579 01:06:49,690 --> 01:06:57,040 Just a point on that as well. It's just the sense that sometimes, in the focus on our task-achieving algorithms, 580 01:06:57,040 --> 01:07:04,150 we shouldn't forget that actually all of our institutions exist in these complex milieux where there are many things going on. 581 01:07:04,150 --> 01:07:10,360 We have this reference to what we sometimes call social machines: components that are algorithms, components that are flawed human decision makers. 582 01:07:10,360 --> 01:07:16,390 And then understanding those system properties is also important, because actually you can be doing quite a good job 583 01:07:16,390 --> 01:07:20,470 (and I don't think this is necessarily a contentious point) on any one of your algorithms, 584 01:07:20,470 --> 01:07:29,230 making noise-free, bias-free elements, and still have the system overturned by some real absurdities in the structural dynamics between systems. 585 01:07:29,230 --> 01:07:39,790 So I think that's worth considering. On Ruth's point, I agree with her in many contexts, 586 01:07:39,790 --> 01:07:44,050 like friendship or something. 587 01:07:44,050 --> 01:07:55,810 But with respect to flying a plane or driving a car, if you have an algorithm that is noiseless and really good, 588 01:07:55,810 --> 01:08:07,270 it's certainly an empirical possibility that the fact that it's not a pilot or a driver making choices is going to save lives. 589 01:08:07,270 --> 01:08:13,050 I think also we might not forget that algorithms are not entirely determined simply by their inputs, 590 01:08:13,050 --> 01:08:19,320 that the extraordinary thing about modern algorithms is: yes, they are in some weak sense, but in a really interesting sense, 591 01:08:19,320 --> 01:08:28,600 they're exploring a landscape of solutions which is well beyond a simple set of input parameters. 592 01:08:28,600 --> 01:08:35,750 Can I ask another question from the audience? I think there are a number of these questions, but here's one version of it. 593 01:08:35,750 --> 01:08:39,770 Wanting to reduce noise assumes we know the correct answer.
594 01:08:39,770 --> 01:08:45,410 For example, with university applicants, aren't we fooling ourselves by imagining a true ranking? 595 01:08:45,410 --> 01:08:52,400 Isn't a noisy selection the most fair, contrary to the claim that fairness is one of the key driving factors for 596 01:08:52,400 --> 01:08:56,770 noise elimination? OK, it's a good and, I think, deep question. 597 01:08:56,770 --> 01:09:10,540 So let's take cases where people get admitted based on the random draw of who the person who sees their file is. 598 01:09:10,540 --> 01:09:20,770 If we know that some people are getting admitted who are, along relevant dimensions, not as good as people who are being rejected, 599 01:09:20,770 --> 01:09:25,370 then there's unfairness. And I guess that's implicit in the question. 600 01:09:25,370 --> 01:09:33,880 Let's suppose we just don't know. It may be that the point is right, that that kind of lottery is OK. 601 01:09:33,880 --> 01:09:42,760 But I want to think more about that. In the case of the underwriters at the insurance company, the lottery is costing a lot of money. 602 01:09:42,760 --> 01:09:48,250 Now, does that mean we know something at least about the true value? Maybe. 603 01:09:48,250 --> 01:09:55,880 I bet we can find cases where we don't know the true value, where we wouldn't celebrate the existence of variability. 604 01:09:55,880 --> 01:10:00,910 Like if some doctors are saying, you know, go home, you don't have a problem, 605 01:10:00,910 --> 01:10:09,060 and other doctors, or the same doctor on a different day, are saying surgery, now. 606 01:10:09,060 --> 01:10:15,600 I don't think we think that that's better. Can I ask you about your Harvard colleague, Michael Sandel, 607 01:10:15,600 --> 01:10:23,120 who has recently written a book proposing an element of randomness in admission to elite universities? 608 01:10:23,120 --> 01:10:26,600 So the idea is that there should be a threshold of achievement. 609 01:10:26,600 --> 01:10:33,830 But beyond that, of course, it's absurd to suppose we could really have an ordinal ranking of different candidates. 610 01:10:33,830 --> 01:10:40,360 So why not just throw them down the stairs and then pick up, you know, two hundred of them, etc., randomly? 611 01:10:40,360 --> 01:10:48,490 And part of the thought here is that there's something quite oppressive about entertaining the idea that we can make these strict rankings. 612 01:10:48,490 --> 01:10:55,400 OK, so that's a great question, and there's a lot of work on lotteries as just, and cases in 613 01:10:55,400 --> 01:11:03,060 which you think that the criteria that would be used are morally irrelevant or something. 614 01:11:03,060 --> 01:11:09,920 Maybe then one's for a lottery. But I doubt Professor Sandel would think that grades should be given out on the 615 01:11:09,920 --> 01:11:16,100 basis of lotteries or book contracts should be given out on the basis of lotteries. 616 01:11:16,100 --> 01:11:26,370 Maybe he believes the latter. So that's a signal of a possible account of when lotteries are acceptable and when not. 617 01:11:26,370 --> 01:11:31,380 And I think there's something about desert and there's something about welfare. 618 01:11:31,380 --> 01:11:41,850 And he may be right about universities, but that wouldn't mean that for other things, like fines, you should have a lottery for fines. 619 01:11:41,850 --> 01:11:47,720 Nigel, Ruth, is there anything either of you want to add? Well, just that I think
620 01:11:47,720 --> 01:11:56,180 that's an interesting view, again, about what contextualises the situation such that you feel that there's some level of 621 01:11:56,180 --> 01:12:01,340 reasonableness in a lottery-based approach, and that really does come into some very interesting 622 01:12:01,340 --> 01:12:07,250 questions about what we feel is just, and where the materiality of decisions lies, and whether or not 623 01:12:07,250 --> 01:12:16,630 we're being offered a kind of false precision in anybody's decision making. 624 01:12:16,630 --> 01:12:29,690 Ruth, did you want to interject at this point? Just that, you know, fairness is not the same as sameness. 625 01:12:29,690 --> 01:12:33,920 So I think we just have to be careful. OK. 626 01:12:33,920 --> 01:12:40,490 Let me ask you one other thing, which I think kind of relates a little bit to Nigel's thought that we somehow 627 01:12:40,490 --> 01:12:48,170 have to integrate the algorithm with some kind of human-in-the-loop situation. 628 01:12:48,170 --> 01:12:51,980 Isn't there a risk? Say you're right about these things, say that even in the bail case 629 01:12:51,980 --> 01:12:56,870 we can reconfigure the algorithm so it's not so focussed on prediction of flight risk or recidivism, 630 01:12:56,870 --> 01:13:03,350 but takes into account other relevant considerations. What are the wider ramifications, though, 631 01:13:03,350 --> 01:13:09,050 and this is picking up on a question from the audience, for human agency? That we kind of 632 01:13:09,050 --> 01:13:13,760 become a culture reliant upon the machines taking these decisions. 633 01:13:13,760 --> 01:13:19,700 And even if we, in theory, have some kind of override, the tendency will be more and more, number 634 01:13:19,700 --> 01:13:26,510 one, to rely on those sorts of considerations which are amenable to algorithms, and number two, to just 635 01:13:26,510 --> 01:13:32,690 sort of depart from the scene, because they acquire a certain kind of prestige or social authority, 636 01:13:32,690 --> 01:13:40,300 partly for the reasons you've given. And that puts us in a kind of abject position as autonomous reasoning beings. 637 01:13:40,300 --> 01:13:52,070 Well, I mean, my association is: should we outlaw the G.P.S. device, which philosophers use just when they need it the most? 638 01:13:52,070 --> 01:13:55,920 The G.P.S. device is basically right there, isn't it? 639 01:13:55,920 --> 01:14:02,910 It's something that reduces our own agency in the sense that it displaces our 640 01:14:02,910 --> 01:14:07,150 own judgement about how I get from here to there. Consistent with Nigel's point, 641 01:14:07,150 --> 01:14:12,630 it is something we can override, so we can think: I like the scenic route. 642 01:14:12,630 --> 01:14:18,180 I want to be very careful about overriding the G.P.S. if the grounds for overriding it is, 643 01:14:18,180 --> 01:14:25,650 I know a faster way (noisy and biased human beings), but if it's about scenery, go for it. 644 01:14:25,650 --> 01:14:35,160 So I think where we are now, to phrase this a little more starkly than I actually think it, is that life is nasty, brutish and short. 645 01:14:35,160 --> 01:14:47,180 And one reason is bias and noise, and algorithms have fantastic promise for making the person you most love have more years on the planet. 646 01:14:47,180 --> 01:14:51,980 And that should not be taken lightly. 647 01:14:51,980 --> 01:14:55,220 Well, that is a very positive note on which to end.
648 01:14:55,220 --> 01:15:00,580 I want to thank everyone who's joined us today for what I thought was a really fascinating discussion. 649 01:15:00,580 --> 01:15:04,190 I'm sorry I couldn't get to all your questions, but there were a lot of them. 650 01:15:04,190 --> 01:15:07,790 I'd like to thank Professor Cass Sunstein for being our main speaker today and 651 01:15:07,790 --> 01:15:12,740 introducing us to this very exciting new line of research that he's embarked upon. 652 01:15:12,740 --> 01:15:19,670 And I'd also like to thank our two wonderful commentators, Ruth Chang and Nigel Shadbolt. 653 01:15:19,670 --> 01:15:25,170 Thank you very much, everyone, and look out for our next event, which is on November the 13th, 654 01:15:25,170 --> 01:15:31,940 where we'll have Hélène Landemore talking about A.I. and its implications for democracy. 655 01:15:31,940 --> 01:16:10,070 Thanks a lot, everyone.