And commercial organisations. Well, thank you, and thank you for having me along today; it's a pleasure to talk about a topic that's slightly outside my usual comfort zone. I'm going to address questions that are in the air at the business school, and among the business people that we talk to, in the context of AI. But I am acutely aware that in doing so, I'm doing so as a specialist in the governance and the behaviour of investment banks. So I'm going to step a little outside the boundaries of my expertise at times today, and I apologise in advance if you think that at some stage I'm teaching you how to suck eggs.

In terms of my remit, I think of what I might say today as covering business in general and finance in particular. And I've used this seminar as an opportunity to think about, and to speculate upon, some of the things that I and my colleagues at the business school are worrying about as we contemplate the extension of AI into business life. I don't really have any answers to the problems I'm going to raise; I'm just going to generate questions, and the answers are going to be very far from clear.

So the first thing that I think arises in this context is the extent to which businesses know their employees, and the extent to which they know their clients. To illustrate the point I'm making: I'm conducting a bunch of interviews at the moment, with a co-author from the States, with very elderly investment bankers, people in their late 70s who have been in the business 50-odd years, to find out how the business has changed, what sort of cultural standards drive investment banking, and what the received ethical wisdom in investment banking is. It's probably unsurprising to you that everything has changed dramatically, particularly since the 1980s.

One of the things that has come up multiple times in conversations with bankers is the enormous power of the compliance systems that they now have. One bank told us that they have a compliance system that logs every phone call people make every day (that's not surprising), every keystroke they make on their computers, when they go to lunch and when they get back, how long they take on a cigarette break, and when they go for coffee. And this system tells them when people are in danger of leaving the bank. They get a flag saying this person might be about to resign, and then they start watching that person very closely to make sure that he or she doesn't leave with customers, with confidential information, and that sort of thing. And that's a very striking fact.
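To make that anecdote concrete, here is a minimal, purely hypothetical sketch of the kind of flag such a compliance system might compute from its activity logs; the features, weights, and threshold are invented for illustration, not taken from the bank described above.

```python
# A purely hypothetical sketch of the kind of attrition flag described above.
# The feature names, weights, and threshold are my own inventions; they are
# not the bank's actual system, just an illustration of the idea.
from dataclasses import dataclass

@dataclass
class WeeklyActivity:
    employee: str
    calls_made: int        # logged phone calls
    keystrokes: int        # logged keyboard activity
    minutes_away: int      # lunch, coffee and cigarette breaks
    files_downloaded: int  # documents copied out of internal systems

def attrition_risk(current: WeeklyActivity, baseline: WeeklyActivity) -> float:
    """Crude 0-1 risk score: falling engagement plus unusual data access."""
    def drop(now: int, usual: int) -> float:
        return max(0.0, (usual - now) / max(usual, 1))

    def rise(now: int, usual: int) -> float:
        return min(1.0, max(0.0, now / max(usual, 1) - 1))

    score = (
        0.4 * drop(current.calls_made, baseline.calls_made)
        + 0.3 * drop(current.keystrokes, baseline.keystrokes)
        + 0.2 * rise(current.minutes_away, baseline.minutes_away)
        + 0.3 * rise(current.files_downloaded, baseline.files_downloaded)
    )
    return min(1.0, score)

baseline = WeeklyActivity("trader_17", 120, 90_000, 180, 5)
this_week = WeeklyActivity("trader_17", 40, 30_000, 300, 40)

if attrition_risk(this_week, baseline) > 0.5:
    print("flag: possible resignation risk; review retention and data controls")
```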
AI is already teaching the banks how to second-guess the decisions of their employees, and it has already been suggested that it's probably better at second-guessing those decisions than the partners of those employees. The interesting question to me is whether it's better at second-guessing those decisions than the employees themselves, and I kind of suspect it is. I mean, my partner can guess my decisions better than I can; so if Facebook can beat my partner, it can surely beat me as well, and that's a striking thing. It means that businesses already understand more about their employees than their employees do. And I think some quite difficult questions arise as we think about what the relationship between the business and the employee is, and what the business should do with that information.

And it's not just about employees, of course; that sort of data gathering applies also to customers. And that leads broadly to questions about how markets are going to look in the future. We're not just selling products, we're selling knowledge of people, and information about what those people are likely to do.

So there are some sort of trivial questions, which everyone has anticipated, about data integrity and data ownership. If organisations outside of me, collectively or individually, understand me as well as I do, then surely, at the very least, we ought perhaps to be able to assert some property rights over that information. The institutions that would allow us to do so don't exist at the moment, although one of my colleagues at the business school is leading an effort in the commercial world to try and establish those rights, which is an interesting thing.

There's also a question of whether the organisations that understand us better than we do ought to tell us what we don't know about ourselves, and whether that's actually even a meaningful thing to contemplate doing. So maybe corporations that have gathered information about us, that have, you know, 200 of our likes, could help us reflexively to think about what we are and what we ought to be. Those seem to be quite deep questions that haven't been properly teased out at all.

There's also a bunch of questions about whether, once a corporation understands us better than we do, it can manipulate us, and whether it can do so in ways that are good for us or in ways that are bad for us. For example, one of my research collaborators, Alex, is analysing the way that a large bank (I know which bank it is, but I can't remember whether that's public) uses AI to talk to clients in its online stockbroking arm.
One of the things that comes out of her research is strong evidence that the clients of this bank are, at least by the lights of finance research, profoundly irrational. They don't do what finance theory says they should do; whether that's really irrational, or whether we just have the wrong definition of rationality, is another question. But, for example, they have reference points that are almost random, and you can manipulate those reference points by giving them information. And of course, once the bank understands this, it knows which people are most susceptible to which sorts of information. So there's a sense in which those people's quote-unquote bounded rationality is known to the bank, and is something the bank could manipulate to alter the way that they trade.

Now, there's no suggestion that the bank in question is doing this. But it absolutely could if it chose to: it could take advantage of what's called the disposition effect, the tendency of retail investors to sell winning positions too early and to hang on to losing ones for too long, and it could use that to alter the way that its clients behave. That's interesting. Is it OK? Probably not. And if it isn't, what should we do about it?

And when I say these people are irrational, what do I mean? Do I even have the right version of rationality in mind? Well, to make sense of that, perhaps I should think about the reasons people have for taking actions. Reason-giving is, of course, important for people who care about ethics. The fact that AI systems uncover and respond to these biases doesn't mean that any person operating the system understands that this has happened; that's a well-known and obvious fact. And it means that no one can actually discern the reasons for the choices that are being made.

This is well understood in the context of things like discrimination over gender and discrimination over racial origin. It's also well known that it's incredibly hard to correct that sort of bias. You can go through and score out all the words that are obviously gender-specific, and it turns out that there are all sorts of words that you don't immediately think of as being gendered that are gendered. We know that because the computer programme still correctly identifies the gender of the person who's speaking (a toy sketch of how that kind of leakage can happen follows below). That's problematic, and it's problematic because we don't know the reasons that these things are happening; and usually, when we think about ethics, we think about reasons. So there are obvious questions here that people have already been bloviating about at some length: what should we do about this sort of bias?
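Here is a minimal sketch, on an invented toy corpus, of the kind of leakage just described: once the obviously gendered words are removed, a simple classifier can still recover gender from the remaining word choices. The word list, documents, and labels are hypothetical, not drawn from the research mentioned above.

```python
# A minimal sketch, on an invented toy corpus, of how gender can leak through
# proxy words even after the obviously gendered terms have been scrubbed.
# The word list, documents, and labels are hypothetical illustrations.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

EXPLICIT = {"he", "she", "him", "her", "his", "hers", "mr", "mrs", "ms"}

def scrub(text: str) -> str:
    """Remove only the words that are obviously gendered."""
    return " ".join(w for w in re.findall(r"[a-z']+", text.lower()) if w not in EXPLICIT)

# Toy self-descriptions with an invented gender label (1 = female, 0 = male).
docs = [
    ("she captained the netball team and organised the charity bake sale", 1),
    ("her dissertation covered nursing practice and early-years teaching", 1),
    ("he captained the rugby first fifteen and ran the investment society", 0),
    ("his dissertation covered derivatives pricing and military history", 0),
]

texts = [scrub(d) for d, _ in docs]   # explicit terms are now gone...
labels = [y for _, y in docs]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# ...yet the model still separates the groups, because proxy words such as
# "netball", "rugby", "nursing" and "derivatives" carry the signal.
print("training accuracy:", model.score(X, labels))
```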
How can we even agree on what the definition of bias is in this context? But I think the more interesting ones are the deeper questions that I worry about myself at the moment. The first relates to accountability. If a customer calls out a financial firm, or any other firm, for bias in recruitment, or for the way it behaves when it gives financial advice, it's not clear what it means for the firm to be accountable. Because if the firm is relying on a system like this, it cannot give good reasons for what it's doing. And if it can't do that, it's not clear that the firm is an accountable agent; it's not even clear how we should regulate the firm's activities. And I think that is a concern both for people in business and for the people who regulate business.

There's also a concern for me, and this kind of starts to tie back to work I'm doing myself, about what this teaches us about moral learning and the acquisition of moral wisdom in business, to use a slightly loaded term. If we allow corporations, and I'll talk a bit later about what it means to allow a corporation, to pass responsibility onto algorithms, in some cases perhaps even unwittingly, I think it's interesting to ask whether the people who work for those corporations become less morally aware. I'm appealing here to a literature that suggests that the skill you need to make ethically complex choices in business and the professions is a learnt skill, that it's highly contextual, and that it's acquired through use. You can't learn how to make ethical choices in investment banking or in the law a priori, right? You actually have to engage in those choices, you have to talk to the people who are engaged in those choices, you have to receive feedback, and you have to engage in dialogue. And if we deploy even an absolutely excellent, first-class AI system, I wonder if we run the risk that we create a cadre of moral imbeciles who simply cannot exercise complex judgement and cannot pass their values on, in any meaningful sense, to later generations.

One doesn't want to point fingers, but in the industry I've spent the last decade worrying about, investment banking, I think this sort of thing has happened. I think it's unfair to describe all investment bankers as moral imbeciles. But it's also true to say that some areas of investment banking that used to be characterised by the exercise of quite deep judgement, like giving advice, have now been turned over to markets, to activities that are very observable, very contractual, very data-driven. In some ways that's a great thing.
It has actually resulted in less of the sort of bias I was talking about, but it has also resulted in less acquisition of judgement, because less judgement is required in those businesses. And that's a concern for bankers, it's a concern for regulators, and I think it should be a concern for all of us, as this sort of system codifies, and takes out of human agency, quite important decisions. It also raises questions that relate to a sociology literature that came out of Stanford in the mid-1980s: if AI is taking moral decisions, and those decisions aren't being subjected to public discourse, is it not only taking away our capacity for judgement, but also perhaps unthinkingly steering us into common ways of thinking, generating an unthinking consensus? And if it is, I suspect that's a bad thing.

I'm going to talk about two more things, one of which is obvious, and one of which I suspect is not. One is the effect of this stuff on free speech. Some firms are starting to use AI to police speech. One of my colleagues at the business school, Natalia Efimova, who was originally a computer science faculty member, is now working in the Future of Marketing Initiative. She is attempting to understand how Facebook and other actors might identify misogynistic language in chat rooms and other fora using artificial intelligence. It's remarkably hard to do: you can identify some obvious words, but to capture the tenor of the rest you actually have to internalise the tone of the language to make any sense of it (there's a toy illustration of that limitation below). So it's interesting and important work, because no single person can police this, and no team of people can possibly police the volume of messages that is promulgated on Facebook.

But it generates some obvious questions about where you draw the lines, and it also generates some quite important institutional questions about whether we're happy to allow Facebook to draw those lines. Right now, the default answer appears to be yes: we allow Facebook to decide where the line is drawn between unacceptable misogynistic language and merely provocative language. And if they can't readily give us the reasons for their decisions, which they can't if they're relying on a system that they don't properly understand, and perhaps no one understands that deep learning system, then how do we decide what the proper division of responsibility between the state and the corporation is for this sort of line-drawing? And how do we design the institutions that will allow us to draw those lines? These are hard questions that I think we haven't really got into properly.
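Here is the toy illustration promised above of why a list of obvious words is not enough and why the tone of the language matters; the blocklist and messages are invented, and this is not Facebook's or my colleague's method.

```python
# A toy illustration (my own, not Facebook's or my colleague's method) of why
# a blocklist of obvious words is not enough: the second message contains no
# blocked word, yet its tone is plainly misogynistic.
BLOCKLIST = {"slur1", "slur2"}  # stand-ins for an explicit-word list

def keyword_flag(message: str) -> bool:
    """Flag a message only if it contains a blocklisted word."""
    return any(word in BLOCKLIST for word in message.lower().split())

messages = [
    "you are such a slur1",                     # caught by the blocklist
    "women have no business managing money",    # missed: no blocked word
]

for m in messages:
    print(keyword_flag(m), "-", m)
```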
The other thing I want to talk about relates to moral agency, which is one of my own research interests. We talk a lot in the business school, and we talk a lot generally in society, about the notion of CSR, corporate social responsibility. But when you start to think about that phrase, it's far from clear what it means. What does it mean to hold a corporation responsible? I am responsible for my actions, at least to some extent. Is it meaningful to say that IBM is responsible for anything, or is it just the actors at IBM? This is something that people have been arguing about in business and in philosophy since an influential article written about 40 years ago by a guy called Peter French. And I think all of those arguments are going to be considerably muddied by the arrival of artificial intelligence.

On the one hand, a good deal of the literature on this stuff is concerned with the extent to which you can view a corporation as being autonomous. Can I view the goals and the ideas, the decisions, and the propositions that a corporation views as true, as independent of those of its individual members? And there's a consensus that, on reasonably weak assumptions, you can. Well, if that's the case, presumably AI renders corporations even more autonomous, because no one person, and no bunch of people, is making those decisions, at least at some level. Perhaps we have to start viewing a corporation that is governed partly by AI as being more morally autonomous. But I have quite profound problems with that.

My approach to this problem derives from a different literature, which dates back to a 1962 article by the philosopher Peter Strawson, who was at Oxford at the time, and to more recent work by a Yale philosopher, Stephen Darwall. It identifies moral agency in terms of, firstly, the extent to which we form attitudes like outrage or indignation towards one another, and, secondly, the ways in which we respond to appropriately expressed attitudes of that sort. A moral agent is someone we can feel outrage towards, and a moral agent is someone who responds appropriately when we feel outrage towards them. So I'm kind of interested myself in whether we can view a corporation in that way, without worrying too much about deep metaphysical questions about autonomy. But I think the presence of AI makes these questions very, very difficult. Can we form an attitude of outrage towards a computer system that no one really understands? I don't know.
I think we probably can. But then, if we do form an attitude of outrage towards an artificial intelligence system that, for example, is regularly and consistently introducing biased decisions into recruitment, can we meaningfully hold corporations accountable? Because I've already argued that the corporation can't give reasons for what it's doing, and it can't meaningfully apologise for what it's doing, because it's just a computer system. And if we can't do any of those things, what happens to legal notions of corporate accountability and corporate responsibility? I have no idea what the answers to these questions are, but I think they're really important, and I think we're going to care more and more about them in the future. Thank you.