I am here to represent all of the social science that Allen didn't cover, and all of the work that we do at the OII, in 15 minutes. I had a good representation at the last one. Oh, good, two sessions. I'll cut those lights out.

I am a sociologist by training, and I'm here really to make a call to arms for why we should be studying AI in a particular way. That's not to negate my other speakers tonight, who are covering fascinating and wonderful topics for all of you who are coming together across this research community in various ways, but to make a call for something I see that's not yet part of our discussions about ethics. And that's really to talk about users, use and the social context.

Now, as a sociologist primarily of work and organisations, my view on technology has been deeply shaped by how things get used in practice, and I'll talk a little bit more about that tonight. But we have a discourse, and it will be no surprise to anyone in the room: when we talk about ethics and AI and future AI, we talk about fears that get translated very well into public discourse about what AI is and isn't, as we'll hear from my colleague Rasmus Nielsen later. We also have a problem with how the press and the media are talking about AI, and with who is talking in those discourses.

I would argue that we within the academy have an urgent need to help address, prepare for and think about the short term because, as John Maynard Keynes said, in the long run we're all dead. Right now, the near-term impact of artificial intelligence on society is something that we're not necessarily giving people a toolkit to understand. So we're talking in one way about technologies that aren't here yet. That's great, and we need to be having those conversations, but we're not helping to prepare the public and we're not helping to engage around these questions. And what do we need to do? I think we need to get much more tangible.

Part of the challenge for us as researchers is that when we talk about artificial intelligence and machine learning, we're really talking about something that is becoming social infrastructure. And it's that social infrastructure, that sociomaterial infrastructure, that's helping to shape a whole host of what Karl called second-order effects: what will be the effect on jobs, what will be the effect on politics, how will our society be structured? It's those questions that are driving the concern about AI, and yet as scientists it's hard for us to get into the nitty-gritty of studying that infrastructure.
Now, the researcher Stuart Russell has a beautiful metaphor about infrastructure and AI, and some of you may have heard this, so forgive me if you have. He says: think of the people who make artificial intelligence as asphalt engineers, and they're really, really good at making asphalt. So they just go around saying, you know, your garden, it needs asphalt; that beach over there, asphalt; why, that road, it's in the wrong place, let's put asphalt here instead. In some ways, our policy and technical conversations about expertise in machine learning and AI are being dominated by people who are very good at the engineering, but not necessarily as good at the rest. So we, as a broader community, I think, need to work on multidisciplinary ways that we can address and redress some of the challenges of how we're going to have those hard and difficult conversations around the normative, political and distributive questions of how we're going to talk about resources.

But that leaves our challenge. If we see these systems, machine learning and artificial intelligence systems, becoming part and parcel of how we run our daily lives, without being visible, black-boxed to most people, then how do we, as social researchers, start to study and understand that? I think there's a twofold problem that I'll give to you now.

Exactly a year ago this month, I ran an expert workshop, in a room not unlike this one, in Silicon Valley, with the researcher Jevin West from the University of Washington. Now, Jevin runs a course, and for those of you who are not native English speakers, his course is called Calling B.S. I won't say tonight what B.S. stands for, but it means to call out something that is wrong, tomfoolery or outright lies or trickery, something that someone has done. He has been featured in The New Yorker for this undergraduate course teaching people how to read statistics: not just how to live with statistics, but how to call B.S. on statistics.

And so together we ran this workshop with a group of 40 experts in the room: some people from academia, some from industry, some from advocacy. We basically said, OK, the rules of the game are this: in the first half hour, you have to call B.S. on AI. Now, this is akin to what Arvind Narayanan has called the snake oil of AI. We all know it, those of us who are experts. We've seen the cases in which something is called AI and it's an Excel spreadsheet, or something is called AI
and, you know, the challenge really was that there was some political corruption going on. And so that's what this was about: calling out these cases. Let me call your attention to them. I don't expect you can read these words, but in 30 minutes we got twenty-seven different cases.

Some of these cases were commonly known challenges of how biased data lead to biased outputs, and of the gap between what we think the model is doing and what the data can actually show. So, for example, Jupiter Medical and IBM Watson for Oncology, where doctors thought they were looking at a real-time diagnostic tool, but the diagnostic tool had actually been built on synthetic cases; or Amazon's famous case of creating a hiring algorithm that ended up discriminating against people who had anything in their CV that identified them as women. This is a challenge that we actually now have language for. Right now in Barcelona, the Fairness, Accountability and Transparency conference is kicking off as we speak. And this idea that biased data lead to biased outputs is something that we as researchers can now talk about, and the problem has awareness because we've been raising it.

The second classification of problems identified by the experts was really growing pains. We are in a phenomenal period of growth, as a certain Nigel pointed out, in which computing technologies and large corporations are, at this particular moment, leading to an expansion of use cases. Of course the industrial applications are a little shaky, and we're in a relatively novel phase of where this industry is going. Some of that will be worked out over time: growing pains.

But it's really the last three types of challenges that I want to call our attention to. The third was a kind of mismatched expectations: experts or designers have one set of expectations, or are more sanguine about the possibilities of a particular modelling exercise, but the public just doesn't understand, or the sales team just doesn't understand, or the vendor knows what the client doesn't, so there are these differences in product and expectations. The fourth was really the idea that if we could only fix people's perceptions of what AI really is, then somehow we would solve and resolve some of these problems. And the fifth was a kind of bad application: there are bad users out there in the world.
So good users know how to use predictive policing software in ways that are ethical, but, you know, there are bad police departments out there that will use these things in a bad way.

I would say that these three are really about a kind of granularity around use, users and social context. And from the perspective of the social studies of technology, of the social science of technology, we take that as the wheelhouse of what we do, so that we never truly think of technologies as separate or separable from how they get used and what their social context is.

I'm going to just speed through this, because this is kind of where I think we are with that wonderful near-term versus long-term divide that we have. We know that we have ethical challenges, and we have an enormous set of perspectives that have given us a way to talk about the design and the rolling out of AI technologies. But what I would argue is that this notion that we need to bring back the users, the uses and the social context is really part of what we're missing in some of the conversations about ethics. So let me bring in three of those perspectives from that world of research.

The first, and this is long-standing in the sociology of technology, is the idea that technologies are never finished by the designer. They're always finished in their use. They're open to interpretation, to modification and adaptation. And sometimes those changes stick, right? Sometimes they're durable and they actually influence the long-term trajectory of the technology, and sometimes they're just hacks. The trick is then figuring out which one is going on.

The second, and again this is a 40-year-old finding, it isn't new and it's certainly not mine, is that new technologies always become an occasion at work for asserting and reasserting power and expertise. We never just plop new technologies in and the boss says, go do it. We always have these opportunities to negotiate and renegotiate, and what we've seen is that those can diverge pretty radically, even with the same set of tools, even in the same kinds of organisations.

And the third is a conversation that I am really a part of, and kind of reimagining, on what we call technological affordances.
So when people approach a technology, and I use 'affordance' here both in a way akin to Don Norman's notion of affordances and also from a sociological perspective, they see things that they can do with it, right? That's what we think of as a technological affordance: they are afforded certain opportunities. But the research that's coming out now about affordances really suggests that people's ability to take up technological affordances is deeply shaped by their social position, and the work that my team is doing suggests it is deeply shaped by where they sit in their social, organisational and institutional context. So the technology doesn't just matter on its own; where the user sits deeply, deeply matters.

So let me give you a concrete example. Literally, this is concrete. I've spent a decade working on digitisation in construction, and this is a picture from one of my field sites. The picture is an incredible accomplishment in and of itself: that unionised workers would accept what they saw as a potentially disruptive technology into the field site. Now, when we started this project, we created a computer folder, as you all do, called 'BIM hype'. The tool you're looking at is called building information modelling, or BIM. We had our hype folder and we put everything in there, and literally everything that everyone was being told in the trade press about this tool was that it would completely and utterly revolutionise how architects, engineers and builders work together.

Fast forward a decade, and I am privileged to have spent a decade on this project. We show, in part, something that we know from the sociology of technology: that mental models and structural elements, the rewards, norms and cultures of organisations, fundamentally shape how technologies work. And I don't mean theoretically how they work; how they actually work in practice is as much a part of those mental models as it is a matter of how people are told or forced to use them. So the uptake of this tool was nowhere near as transformative as the industry hype said it would be.

Let me give you another example, from a different point of view, that doesn't take a decade: a paper that I have been talking about quite a bit, written by colleagues at Stanford and Cornell, from CHI last year. What I love about this paper is that it's a granular paper, lifting the algorithmic lid, lifting the black-box lid, off of a study of people's perceptions of what is going on with AI.
Now, they discover what they call the replicant effect. That is, when you show people Airbnb host profiles in an experimental setting, you can ask them: who do you trust? When you then show the profiles again and say that some of them were written by AI and some were written by people, they find that trust collapses. And that should worry many of us, because we're used to understanding how people interact in particular settings, but we're not quite there yet in understanding how the introduction of artificial technologies, technologies transparently identified as, quote unquote, AI, impacts the social organisation that people have around them. So that's Jakesch et al., 2019.

So, in my one minute left, here is kind of my manifesto for where I think we need to go. We need many more case studies: the examples we have of AI failures and of ethical problems are getting a little stale. We need to map, track, compare and measure changes in people's practices with AI across multiple settings and multiple countries. We need to identify elements of social infrastructure and social structure that really serve as levers for responsible use. So, in addition to the kind of ethical projects that we have going on, we need to think about what organisational routines might help us get to better outcomes, and we need to measure how people are responding to AI systems, including existing social norms, conventions, heuristics and social organisation. And then I think we really need to be doing more comparative work. Our work right now is overwhelmingly dominated by the AI-leading countries, and we have very little work that goes across or with emerging countries.

So let me give you just my set of questions. I developed this question with Jack Chu and Madeleine Clare Elish in a paper we did this autumn: what and whose goals are being achieved or promised, through what structured performance, using what division of labour, under whose control, and at whose expense? You'll notice this is very much a question about the social impacts of any AI or machine learning system.

And finally, my guiding principles. This is for me, and I'm just speaking for me here. I really think it's critical that we expand our knowledge and ethical capacity outside of the communities of experts in science and technology; it's critical for society at this particular moment. We need to participate with the communities of engineering, data science and AI, both in research spheres and in commercial spheres.
Again, that's a little bit of a dividing line, but I think it's critical for getting problems solved. And then finally, and perhaps most urgently from my perspective: I really think we have to be looking at AI on the ground, and we have to be looking at these questions of how AI is being used in its social context in order to actually understand what is developing and what's going on. Thank you.