I'm very excited to be able to talk to you a little bit about the research that I do with a couple of my colleagues here. As you mentioned, I'm a lawyer, so I usually look at things from a legal perspective. But what I want to do is actually talk about three aspects where I think AI is disrupting legal concepts, and I think the only way to fix that problem is actually via ethics. So I'm actually arguing that the law might not be fit for purpose, which is why we need ethicists and technologists to work together to give new strength to the law that we currently have. And I want to give you three examples from the areas that I'm focusing on, where I think we need to focus more. One is accountability, one is fairness, and one is privacy.

So let's start with accountability. I've chosen the example of loan applications and how we have done this in the past. Usually, in a human-to-human setting, you go to a bank and you apply for a loan. The loan officer will ask you a couple of questions: what your income is, whether you have any savings, your employment history. All these questions make sense. There is an intuitive link: the loan officer asks you about your financial status in order to find out whether you can afford that loan or not. Right? There is an intuitive connection there.

What we see now is that we are moving away from those traditional data sources; to make decisions, we now use untraditional data sources. There are a couple of examples where credit institutions are now using Facebook profiles to decide if somebody should get a loan. They look at your profile pictures, the friends that you have on Facebook, the groups that you join, the things that you post, the things that you like, and they infer whether you're a reliable borrower or not. Similar things happen in the insurance space, where, for example, social network profile information is being used to decide whether somebody should get insurance and what premiums they should pay.

But that's not the only thing. In general, whenever we make decisions, we start to use very untraditional data. That includes clicking behaviour, typing behaviour, geolocation and eye tracking, all of that to infer certain things about you, things that can be very privacy-invasive, unintentional and very counter-intuitive, where I don't really understand what my data says about me. I have no idea how my browsing behaviour will affect my credit rating in the future, and this information can be enhanced, replicated and shared with a lot of people, and not just in financial services.
It's also used for important decisions about who has to go to jail, who can go to university, and whether you get hired, fired or promoted; all of this big data is being used for that. And that poses questions for accountability, because if in the future I don't get a loan or I don't get a job, the first thing that comes to mind is to ask why. Why did I not get the job? What happened there? I want to have an explanation.

This is why I started thinking about this topic and looked at it at first from a legal perspective, to figure out whether we actually have a right that algorithmic decisions are explained to you, because that would make a lot of sense. And just looking at it from a legal perspective, what I found is that I don't think we actually have a legally binding right to have those decisions explained to us. I was not very happy with the outcome of my paper. It's very hard, from an academic perspective, to actually admit that, but I wasn't very happy with the outcome of the paper. But it was a starting point where I started to think: I don't actually care so much about what the law says, because just because something is not legally required doesn't mean it's not ethically desired.

That was the reason why Brent, Chris Russell and I got together, and we now have a research programme at the Oxford Internet Institute which is called the Governance of Emerging Technologies. Brent, who will be talking in a couple of minutes, is an ethicist, and Chris Russell is a machine learning expert. So yeah, it basically starts like a joke, in the sense that a lawyer, a philosopher and a computer scientist walk into a bar, which we did, and we spent three hours screaming at each other because we wanted to figure out what a good explanation would actually look like. What do we think a good explanation is? Because we set the legal question aside, and what I wanted to know is: what is it that you actually want to know?

Brent was very much interested in the trust side of things. For him, it was very important to figure out what is justified to believe, what a good argument is, what makes arguments valid. I didn't care about that at all; I cared about justice and accountability. I wanted to be able to contest a decision if I'm not happy with it. And Chris, the computer scientist, didn't care about either of those things; he just wanted people to be able to understand what's going on in the black box. So even though we were all very, very passionate about explanations, we started to understand that we mean very different things by them and see explanations in very different ways.
But we finally found a solution and actually wrote a paper together: we came up with counterfactual explanations to try to reconcile those approaches. Our method allows you to understand a little bit of what's going on in the black box, it gives you more trust in the system, and it gives you accountability, because you would be able to, for example, contest a decision if you're not happy with it. The exciting part is that it actually got a lot of traction, and Google implemented our method last year in TensorFlow, so you can actually play with algorithms and understand what's going on in the black box. And Google is not the only one who is very fond of our idea, because IBM and Microsoft, among others, have also picked it up. So what that means is that if you scream a lot at each other, it actually pays off in some ways.

But we also learnt that explanations are only one facet of true accountability, because an explanation is not a justification. I can tell you, for example, "I'm not giving you the loan because I don't like your face." That is an explanation, but it doesn't mean it's justified. So what you also want to look at is the inferences, the predictions and the opinions that algorithms have about you, right? All that big data is being collected about you, and very sensitive things can be inferred: for example, whether you're pregnant, whether you're a reliable borrower, whether you deserve a promotion, whether you have undiagnosed disorders. All of that information can be very privacy-invasive, and that is the actual thing that you're concerned about.

So again, I looked at it from a legal perspective and tried to look at data protection law, because this is very close to your private life. If inferences, for example, count as personal data, you would get a lot of data protection rights. And again, what I found is that AI actually disrupts the law in a very untraditional way, where we have to think creatively about the problems. So Brent and I wrote a paper together, which is called "A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI", actually calling for new standards, because we found that the law as it currently stands is not good enough to protect us. As you can see, it's 130 pages trying to make the point that the law is not good enough. It was actually planned to be even longer, but Brent threatened to never work with me again if I didn't stop writing, so it's just 130 pages. But it's very important to point out that we found a lot of loopholes in the law, one of which is that we don't even know whether inferences are personal data.
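[Editor's aside: since counterfactual explanations came up a moment ago, here is a minimal, hypothetical sketch of the underlying idea: find a small change to an applicant's data that would have flipped the model's decision. It is only an illustration under simple assumptions (a toy scikit-learn loan model with two numeric features and a brute-force random search); it is not the authors' published algorithm or the TensorFlow tooling mentioned in the talk.]

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy "loan approval" model on two made-up numeric features
# (think standardised income and savings). Illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = approve, 0 = reject
model = LogisticRegression().fit(X, y)

def find_counterfactual(x, model, target=1, trials=20_000, max_radius=3.0, seed=1):
    """Random search for the closest nearby point that the model labels `target`.

    This mimics the spirit of a counterfactual explanation ("what minimal
    change would flip the decision?") without claiming to be any particular
    published method.
    """
    rng = np.random.default_rng(seed)
    best, best_dist = None, np.inf
    for _ in range(trials):
        candidate = x + rng.normal(size=x.shape) * rng.uniform(0, max_radius)
        if model.predict(candidate.reshape(1, -1))[0] == target:
            dist = np.abs(candidate - x).sum()  # L1 distance favours small changes
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best

applicant = np.array([-0.8, -0.2])  # a rejected applicant
print("original decision:", model.predict(applicant.reshape(1, -1))[0])

cf = find_counterfactual(applicant, model)
if cf is not None:
    print("counterfactual point:", np.round(cf, 2))
    print("change needed:      ", np.round(cf - applicant, 2))
    print("new decision:       ", model.predict(cf.reshape(1, -1))[0])
```

[The returned point can be read as "had your income and savings looked like this, the loan would have been approved", which is the kind of statement that supports both understanding and contesting a decision.]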
So it's not clear whether all the assumptions that algorithms are making about you actually fall under the GDPR. And even if they do fall under data protection law, the purpose of data protection law is not to regulate how we make decisions about you. So if you have a problem with how you are being evaluated, how you are being seen, how you are being assessed, you would need to find another law that helps you do that. Data protection law doesn't give you a remedy over how you are being seen by algorithms, and that is quite problematic because, as was already mentioned, very often we don't have standards for accurate and reasonable decision-making. In essence, a lot of the time we don't have a right to get a job or insurance, or to go to university. So if we don't have any laws that regulate how those decisions have to be made, and data protection doesn't give us that, maybe it's an ethical question to think about what would be reasonable.

And that leads me to the current research project that we just started, which is a right to reasonable inferences in online advertisement and in financial services. We just had two postdocs starting with us, one a philosopher, the other one a legal scholar, and we're trying to figure out what would be a reasonable standard for inferential analytics in those two sectors. In the future, we're going to look at other sectors as well, which will include, for example, health, employment and criminal justice, but that is the current focus of our project.

The last area where I see that AI is actually disrupting the law has to do with non-discrimination law. Again, that's something that we are all aware of: whenever we talk about AI, we also have to think about bias and discrimination. So again, I did look at non-discrimination law and tried to figure out whether the law is actually good enough to protect us against those unintended and undesirable risks. And again, I have to say it isn't. And actually, it's not really the law's fault or the technology's fault; it's just something that happens, because non-discrimination law is an answer to undesired behaviour of people, right? That means that very often what we want to regulate, what it prohibits, is people being unethical to each other, being racist or sexist or homophobic towards each other. But the way that algorithms do that is very, very different from humans.

So, for example, think about price discrimination. Price discrimination means that the same product is offered to different people at different prices.
In the offline world, it is relatively easy for you to figure that out: you can just go to different stores, compare the products and their prices, and pick the product that you think is fair or appropriate. If, for example, Tesco doesn't let you into the store anymore, you know that you have been excluded from the market. But how do you know that in an online world, right? How do you know whether you are actually being offered the best price? Do you know the prices other people get offered? And do you know about the advertisements that you don't see, right? That kind of discrimination, you don't feel anymore, whereas in the offline world you would often be able to sense that something is off, that something feels unfair or unjust. That sense, that feeling of unfairness, might not be as easy to grasp anymore.

And it's similar with the law. If you look at the classic attributes that we want to protect, they relate to sex and gender, ethnicity and religious beliefs, because we have had historical experiences where people have used those to discriminate against others, right? So this is how we constructed the world, how we group people. But algorithms might group people completely differently, right? It could be that algorithms start to discriminate against people who were born on a Tuesday, have brown socks and like dogs. But this class of people doesn't find recognition in non-discrimination law, because we have never had it before. Or similarly with video gamers, which, for example, in China is something that could cause your credit score to drop. Obviously that class of people doesn't find any protection in non-discrimination law either, but maybe it should.

So this is the project that we are also working on: trying to figure out, if the law doesn't give you enough protection, what would be the ethical thing to do? How can we increase and protect fairness in an algorithmic world?

So yeah, that is basically the programme that we are currently working on. And I actually think one of the reasons why I'm so excited about all of this that's happening is that I think it is finally clear to everyone that if you want to use AI, you need to think about it from at least three perspectives. You need to think about whether the law is actually helping you, whether the law is good enough. And if it's not, what would be desirable, what would be the ethical thing to do? And then actually match that against the technology and figure out whether it's actually feasible. And if you do that, you can harness the full potential of AI, but make sure that you protect human rights. Thank you.