So thank you so much for inviting me to this, and thank you, Sandra, for giving a great introduction to our research programme.

What I'm going to talk about is one of my most recent pieces of work, which I think speaks to the question of the relationship between law and ethics in the space of AI, and particularly to the frameworks, the initiatives, that we now have, over 100 of them, that are in some way trying to define the right sort of high-level concepts or principles or values or tenets to guide the development, governance and use of AI. The talk is going to be based on this paper, which just came out about a week ago, where I look at the role that principles can, and perhaps should, play in the ethical governance of AI. I should say that my paper, despite having come out a week ago, is apparently already out of date: I thought we only had eighty-four initiatives from across the world, but we have over 100 now, and I hope that gets across the gravity of the situation.

And I suppose my main concern is that with all these initiatives, you're essentially creating a market where developers can pick and choose the set of principles that works best for them. Chris Russell was joking the other day: "Here are my principles, and if you don't like them, I have a bunch of others." But to me, the risk is that it gives the impression that there's one way to do ethics: you have a high-level set of principles, which you then specify into a set of practical requirements. And unsurprisingly, if you have lots of different sets of principles, and you also leave it up to, say, individual development teams or individual companies to choose how to specify those principles on the ground, you're going to get requirements that don't match each other, requirements that may be contradictory. Essentially, you can end up in a situation of extreme moral relativism.

And to me, that thought, that ethics, or at least ethical frameworks, are there as something to replace regulation, that self-regulation can be seen as a way to replace hard regulation, does a disservice to the actual value that ethics can have, and in particular the value that ethics and the law can have when working together in a good way. So what I'm interested in is basically: how can we make these ethics frameworks actually work within an approach where we're open to hard regulation?

Now, I'm happy that this has come up already, but it's just to say that there is a clear connection between AI ethics and medical ethics.
So I don't have to make that case, thankfully; a number of people have mentioned it already. But just to say, there has been some work done recently, some papers, trying to look for some sort of consensus across all these different frameworks and initiatives that we have. And what this piece of work from the AI4People project in particular found, which was also adopted by the High-Level Expert Group on Artificial Intelligence, is a set of principles that very closely mimic the classic principles of medical ethics. And that's really interesting, because I think looking at medical ethics, and at how AI development compares to it, can tell you a lot about whether we should expect this principled form of self-governance to work in a similar way in AI development as it does in medicine. That's not to say that medical ethics is without its flaws, or that a principled approach is going to solve everything. It's just to say that it's definitely a case where the use of principles for ethical governance, for ethical decision-making, is very clear, is very prominent, and has had impact in practice.

So what I'm going to do in the remainder of my few minutes is look at those two professions a little more closely and ask: how exactly does AI development compare to this profession where we've seen a principled approach emerge and have some success? There are four characteristics I want to look at here.

The first is the existence of common aims. What I mean by that is that in medicine, doctors have fiduciary duties towards their patients. The practice is broadly guided by a common set of aims, which is to benefit the health and well-being of the patient. Of course, we will disagree about how best to do that in practice, at the very least when talking about, say, the public interest versus the interests of an individual patient. But there is this commonality, this common ground from which ethical decision-making can actually proceed, and that is reflected in the fact that you have very strong fiduciary duties between professionals and patients. What that allows for is basically a co-operative approach to difficult decision-making, to specifying these principles, and I'm not sure that level of co-operation can be taken for granted in the space of AI development. At the very least, if you're talking about privately developed AI, the initial fiduciary duties owed by the development team will be to the shareholders of the company, rather than to the users or the people affected by the system.
And so my concern is that if you don't have this common grounding, this shared sense of the direction in which we're all travelling, I'm not sure you can actually have that co-operation; you're going to end up with ethical decision-making that's fundamentally competitive rather than co-operative. And clearly that's not conducive to finding an appropriate balance between different interests in practice.

The second characteristic I want to look at is the role of professional history, and this is just to say that medicine has obviously been a profession for a very long time. We have codes of conduct; we have accounts of what it means to be a good doctor. These are captured in things like the Hippocratic Oath or, more recently, the AMA Code of Medical Ethics. These are documents that have been revised over time. They're very long-standing, they're very detailed, they give opinions on particular types of interventions, and the fact that they have been tested over time and revised is one of the reasons they continue to actually be useful in day-to-day professional practice.

Now, compare that to the state of AI development; I'm going to use software engineering here as the analogue to AI development. We do have professional bodies, the ACM and the IEEE, I believe, being two of the biggest ones, and we do have codes of ethics. But in comparison to medicine, they're relatively short and relatively abstract. The ACM one was revised recently, but it still remains comparatively abstract when compared to, say, the AMA Code of Ethics. And there's recent research suggesting that these codes of ethics actually have very little influence on the day-to-day decision-making of engineers, which is clearly a problem: if you have a code of ethics but it isn't shown to be particularly effective at influencing the behaviour of engineers, then you have to question what its actual value is.

The third characteristic I want to briefly touch on is the methods that we have for translating principles into practice. The fundamental problem we have with the AI ethics frameworks is that they're based on, or rely on, what can be called essentially contested concepts: very abstract concepts that can have lots of different meanings in practice. If I took a survey of this room and asked who thinks that AI systems should treat people fairly, I imagine pretty much everybody would raise their hand. And yet everybody may have a completely different sense of what fairness would actually mean in practice. And the point is that those different meanings of fairness can be rationally held. They can be genuine, they can be defended. We shouldn't expect a single correct meaning of any of these terms.
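To make that concrete, here is a minimal illustrative sketch, not something from the talk itself: it applies two standard formalisations of fairness from the research literature, demographic parity and equal opportunity, to the same invented set of decisions, and the two definitions disagree about whether those decisions are fair.

```python
# Minimal sketch with invented data: the same decisions can be "fair" under
# one common formalisation of fairness and "unfair" under another.
# Each record is (decision, truly_qualified), with 1 = positive.
group_a = [(1, 1), (1, 1), (0, 0), (0, 0)]
group_b = [(1, 0), (1, 1), (0, 1), (0, 0)]

def positive_rate(records):
    # Demographic parity: compare the share of positive decisions per group.
    return sum(d for d, _ in records) / len(records)

def true_positive_rate(records):
    # Equal opportunity: compare positive decisions among the qualified only.
    qualified = [d for d, y in records if y == 1]
    return sum(qualified) / len(qualified)

print(positive_rate(group_a), positive_rate(group_b))            # 0.5 0.5 -> looks fair
print(true_positive_rate(group_a), true_positive_rate(group_b))  # 1.0 0.5 -> looks unfair
```

Both readings of "treat people fairly" are defensible, and in general they cannot both be satisfied at once, which is exactly the kind of disagreement a high-level principle can paper over.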
The problem is that we've reached consensus on what the correct essentially contested concepts are; we've reached this high-level consensus. And to me, that doesn't actually reflect any sort of true consensus; rather, it's a way to mask really important normative and political disagreements. We've agreed the framework, but we certainly don't agree on what it means in practice. And of course, the other problem is that those sorts of concepts don't translate automatically into practical requirements; that is a very difficult process. Medicine has things like professional societies and boards, ethics review committees, accreditation and licensing schemes, peer self-governance, standards; all these things help you actually do that translation in practice. Software engineering does have some of these things, but it's lacking mechanisms of similar stature, of similar importance.

I think one of the reasons for that is that the profession is not legally recognised as a profession in the same way that medicine is. What I mean by that is: doctors require a licence to practise. Software engineers, in some cases, in limited contexts, do need a licence to practise, but it's in no way of the same importance or the same coverage as medical licences. And this lack of legal recognition of the profession is a serious problem, because ethics, professional ethics at least, really has teeth when it has legal mechanisms to back it up, when doing something unethical in your day-to-day behaviour as a professional could actually lead to you, say, losing the ability to practise your profession.

And so, just to conclude, I think we have a number of legal gaps in the regulation of AI. One really important one is that there is no legal recognition of AI development as a profession, and a lot of the initiatives we have are based on human rights frameworks or other sorts of frameworks that are not directly legally binding in the way that, say, the GDPR is. So, to move forward, I think we can do a couple of things. One is to start thinking about AI ethics more in the sense of business or organisational ethics, and less as a professional ethics.
Both sides are important, but I think too much emphasis is placed on individuals doing wrong things in software engineering, on individual developers doing wrong things, and less on, say, the business model that they're working within, or the organisational practices themselves, being unethical in some way. We had a chance to do that with the High-Level Expert Group: there were red lines initially supposed to be drafted, basically to set out types of AI that should not be developed in Europe. In the end, those red lines were taken out. To me, they would have been a very strong signal that AI ethics is also business ethics.

And then, just finally, I think we may need to revisit the idea of licensing developers of high-risk AI applications. And more than anything, I think we just need to develop a much stronger empirical evidence base, based on case studies of specific ethical challenges, so we can really start to understand how we agree and disagree with each other in practice about what these different concepts and principles mean.

Thank you very much.