1 00:00:07,020 --> 00:00:10,770 Good evening, and a very warm welcome to the Oxford Martin School. I'm Ian Goldin, 2 00:00:10,770 --> 00:00:18,180 I was the founding director of the school, and it's a huge pleasure this evening to welcome Daniel Susskind, 3 00:00:18,180 --> 00:00:23,100 who's going to introduce us to his wonderful new book, A World Without Work. 4 00:00:23,100 --> 00:00:28,150 I'm sure you've picked up in the media the many great things that have been said in 5 00:00:28,150 --> 00:00:32,400 the very short period of time since the book was released a week ago, 6 00:00:32,400 --> 00:00:38,670 including that it is The New York Times' required reading for every presidential candidate. 7 00:00:38,670 --> 00:00:42,870 So it's required reading first for presidential candidates, and certainly for all of you, 8 00:00:42,870 --> 00:00:47,910 and there will be book sales and a drink, to which you are invited, afterwards. 9 00:00:47,910 --> 00:00:48,630 As you may know, 10 00:00:48,630 --> 00:00:56,100 the Oxford Martin School was established to deal with the great challenges of the 21st century and to apply interdisciplinary thinking, 11 00:00:56,100 --> 00:01:00,450 including to this question of the intersection of technology and society. 12 00:01:00,450 --> 00:01:08,820 And we started a group with our friend Michael Osborne in 2012 which does exactly that. 13 00:01:08,820 --> 00:01:18,240 We were somewhat criticised at times for being what people thought was rather pessimistic, saying that maybe half of US jobs were vulnerable. 14 00:01:18,240 --> 00:01:24,030 And one of the many, many things I'm delighted about in Daniel's book is that it places that work in perspective: 15 00:01:24,030 --> 00:01:29,370 it makes it seem not only prescient, but rather mainstream. 16 00:01:29,370 --> 00:01:38,610 Daniel's an Oxford Balliol person, as am I, and I've just seen his doctoral supervisor, David Vines, walking in. Thanks for such excellent supervision, 17 00:01:38,610 --> 00:01:47,430 David, which is a background to this book in one dimension, as Daniel will no doubt share with us. 18 00:01:47,430 --> 00:01:53,700 He did economics and management here and then went on to do his doctorate. 19 00:01:53,700 --> 00:01:59,490 He also worked in Downing Street in the Prime Minister's advisory unit. 20 00:01:59,490 --> 00:02:05,310 Not for this Prime Minister, I should hasten to add, but a previous prime minister. 21 00:02:05,310 --> 00:02:15,840 So he's been able to bridge that gap well. And what you will find in this book is an absolutely remarkable ability to condense 22 00:02:15,840 --> 00:02:21,700 extremely complex economic ideas into language which is attractive to all readers. 23 00:02:21,700 --> 00:02:29,550 It's a wonderful read, and it makes sense, which can't be said of all my economics colleagues, and I commend it to you. 24 00:02:29,550 --> 00:02:33,300 So Daniel will speak for about 45 to 50 minutes. 25 00:02:33,300 --> 00:02:37,410 We'll have a short time for Q&A and then we'll have the book signing and drinks. 26 00:02:37,410 --> 00:02:54,140 The floor is yours. 27 00:02:54,140 --> 00:03:04,550 Thank you very much for that warm introduction, and it's a great pleasure to be with you this evening to talk to you about a world without work. 28 00:03:04,550 --> 00:03:09,950 And what I want to do is begin with a story, the great manure crisis of the 1890s.
29 00:03:09,950 --> 00:03:18,500 And it shouldn't have come as a surprise. For some time, in big cities like London and New York, the most popular forms of transport 30 00:03:18,500 --> 00:03:27,680 at the end of the 19th century had relied upon horses, hundreds of thousands of them, to heave a whole variety of vehicles through the streets. 31 00:03:27,680 --> 00:03:31,970 And with these horses, of course, came manure, and lots of it. 32 00:03:31,970 --> 00:03:36,260 One enthusiastic health officer in Rochester, New York, 33 00:03:36,260 --> 00:03:46,310 calculated that the horses in his city alone created enough manure to cover an acre of land to a height of one hundred and seventy-five feet, 34 00:03:46,310 --> 00:03:49,130 almost as high as the Leaning Tower of Pisa. 35 00:03:49,130 --> 00:03:56,300 Apocryphally, people at the time extrapolated from these calculations to an inescapably manure-filled future: 36 00:03:56,300 --> 00:04:02,630 a New York commentator who predicted the piles would soon reach the height of third-storey windows, 37 00:04:02,630 --> 00:04:05,630 a London reporter who imagined that 38 00:04:05,630 --> 00:04:11,660 by the middle of the 20th century the streets would be buried under nine feet of the stuff. 39 00:04:11,660 --> 00:04:16,760 And it's said that policymakers didn't know what to do. They couldn't simply ban horses. 40 00:04:16,760 --> 00:04:24,170 They were far too economically important. But the twist in the tale, of course, is that in the end, policymakers didn't need to worry. 41 00:04:24,170 --> 00:04:32,300 In the 1870s, the first internal combustion engine was built. In the 1880s, it was installed in the first automobile, and only a few decades later, 42 00:04:32,300 --> 00:04:36,920 Henry Ford brought cars to the mass market with his famous Model T. 43 00:04:36,920 --> 00:04:47,180 By 1912, New York had more cars than horses. Five years after that, the last horse-drawn tram was decommissioned in the city. 44 00:04:47,180 --> 00:04:50,870 The great manure crisis was over. 45 00:04:50,870 --> 00:04:59,330 The parable of horse [INAUDIBLE], as Elizabeth Kolbert called it in The New Yorker, has been told many times over the years, and in most tellings 46 00:04:59,330 --> 00:05:05,660 the decline of horses is cast in an optimistic light, as a sort of tale of technological 47 00:05:05,660 --> 00:05:13,400 triumph. But Wassily Leontief, the Russian-American economist who won the Nobel Prize in 1973, 48 00:05:13,400 --> 00:05:17,300 looking at the same event, suggested a far more unsettling conclusion. 49 00:05:17,300 --> 00:05:22,580 What he saw instead was how a new technology, the combustion engine, had taken a creature that 50 00:05:22,580 --> 00:05:29,210 for millennia had sat at the centre of economic life and banished it to the sidelines. 51 00:05:29,210 --> 00:05:32,450 Then, in a set of articles written in the early 1980s, 52 00:05:32,450 --> 00:05:39,890 he made one of the most infamous claims in modern economic thought: what technological progress had done to horses, 53 00:05:39,890 --> 00:05:47,750 he said, it would eventually do to human beings as well, drive us out of work. What cars and tractors were to them, 54 00:05:47,750 --> 00:05:55,160 computers and robots would be to us. Today, the world is gripped again by Leontief's fear. 55 00:05:55,160 --> 00:06:02,870 Thirty percent of US workers now believe their jobs are likely to be replaced by robots and computers in their lifetime.
56 00:06:02,870 --> 00:06:08,010 A similar proportion in the UK think it could happen in the next 20 years. 57 00:06:08,010 --> 00:06:15,800 And what I want to do this evening, drawing on my new book A World Without Work, is explain why we have to take these sorts of fears seriously. 58 00:06:15,800 --> 00:06:19,160 Not always their substance, as we shall see, 59 00:06:19,160 --> 00:06:25,170 but I do think some of their spirit. Will there be enough well-paid work for everyone to do in the 21st century? 60 00:06:25,170 --> 00:06:30,560 I think this is one of the great questions of our time, and in the book 61 00:06:30,560 --> 00:06:38,750 I argue that the answer to that question is no, that the threat of technological unemployment is now real. 62 00:06:38,750 --> 00:06:44,330 Now, I don't think there's going to be some big technological Big Bang in the next few decades, 63 00:06:44,330 --> 00:06:48,080 after which lots of people wake up and find themselves without work. 64 00:06:48,080 --> 00:06:53,900 I don't think that's going to happen at all. But what I do think is that as we move through the 21st century, 65 00:06:53,900 --> 00:06:59,840 more and more people are going to find that they're unable to make the sorts 66 00:06:59,840 --> 00:07:03,560 of economic contributions that they might have hoped to make in the 20th century. 67 00:07:03,560 --> 00:07:08,680 And I think this is a fundamental challenge to the way that we live together in society today, 68 00:07:08,680 --> 00:07:13,730 and I want to explore this problem with you this evening. 69 00:07:13,730 --> 00:07:16,880 And in particular, I want to do six things. First, 70 00:07:16,880 --> 00:07:21,080 I want to say a little bit about the history of technology. Then I want to 71 00:07:21,080 --> 00:07:25,460 share my general thoughts on technology, and then explore one technology in particular, 72 00:07:25,460 --> 00:07:31,670 which is artificial intelligence. I want to say what I think this means for work in the future. 73 00:07:31,670 --> 00:07:37,610 I want to set out what I think the problems that this will create for us are going to be. 74 00:07:37,610 --> 00:07:46,130 And then finally, I want to end on a note of optimism and explain why, in spite of all of this, I still remain optimistic about the future. 75 00:07:46,130 --> 00:07:52,860 So first, the history. Economic growth is actually a very recent phenomenon. For most of the three hundred thousand years 76 00:07:52,860 --> 00:07:57,780 that human beings have been around, economic life has been relatively stagnant, 77 00:07:57,780 --> 00:08:02,700 but over the last few hundred years that economic stagnation came to an explosive end. 78 00:08:02,700 --> 00:08:09,960 The amount each person produced increased about 13-fold and world output rocketed by nearly 300-fold. 79 00:08:09,960 --> 00:08:14,280 So if you imagine that the sum of human existence were an hour long, 80 00:08:14,280 --> 00:08:20,370 then all of this actually happened in the last half-second or so, in the literal blink of an eye. 81 00:08:20,370 --> 00:08:22,740 And it was Britain, of course, that led the economic charge, 82 00:08:22,740 --> 00:08:28,590 thundering ahead of others in the Industrial Revolution from the 1760s. Over the following decades,
83 00:08:28,590 --> 00:08:34,230 new machines were invented and put to use that greatly improved the way goods were produced, 84 00:08:34,230 --> 00:08:43,650 and these new technologies allowed manufacturers to operate more productively than ever before, to produce far more with far less. 85 00:08:43,650 --> 00:08:53,160 And it's here, at the beginning of modern economic growth, that we can also detect the origins of automation anxiety. Even back then, 86 00:08:53,160 --> 00:08:55,380 when economic growth was just taking off, 87 00:08:55,380 --> 00:09:05,040 people started to worry that using these machines to make more would mean less demand for their own work as well. 88 00:09:05,040 --> 00:09:13,830 The anxiety that automation would destroy jobs spilt into protest and dissent during the Industrial Revolution, as is widely known. 89 00:09:13,830 --> 00:09:17,580 Technological vandalism by the so-called Luddites was rife. 90 00:09:17,580 --> 00:09:25,890 In fact, in 1812 the British parliament felt forced to pass the Destruction of Stocking Frames, etc. Act, 91 00:09:25,890 --> 00:09:32,480 which made destroying machines a crime punishable by death. 92 00:09:32,480 --> 00:09:39,960 In fact, the next year several people were tried and executed for smashing these machines. 93 00:09:39,960 --> 00:09:45,180 Importantly, though, this automation anxiety wasn't just confined to the 18th and 19th centuries. 94 00:09:45,180 --> 00:09:48,510 It continued all the way through, right up until the present day. 95 00:09:48,510 --> 00:09:55,740 And in the last few years we've seen a frenzy of writing and commentary and reports on the threat of automation. 96 00:09:55,740 --> 00:10:03,690 But what's interesting is that if you go back to 1940, for instance, the debate about technological unemployment was already so 97 00:10:03,690 --> 00:10:08,790 commonplace that the New York Times felt comfortable calling it an old argument. 98 00:10:08,790 --> 00:10:16,200 In fact, going through the archives, I managed to find in almost every decade since 1920 99 00:10:16,200 --> 00:10:23,650 an article in The New York Times engaging in some way with the threat of technological unemployment. 100 00:10:23,650 --> 00:10:26,980 And yet, and this is the key point, really, 101 00:10:26,980 --> 00:10:34,430 and yet most of those anxieties about the economic harm caused by new technology turned out to be misplaced. 102 00:10:34,430 --> 00:10:36,940 Looking back over the last few hundred years, 103 00:10:36,940 --> 00:10:45,070 there's little evidence to support the primary fear, that technological progress would create large pools of permanently unemployed workers. 104 00:10:45,070 --> 00:10:53,230 Now, it's true that workers have been displaced by new technologies, but eventually most have also found new work to do. 105 00:10:53,230 --> 00:11:00,850 And so the question is why? How can it be that in the past, despite the fears of so many people, 106 00:11:00,850 --> 00:11:08,920 technological progress turned out not to lead to the sorts of levels of unemployment that they worried about? 107 00:11:08,920 --> 00:11:12,970 And the answer to that question, I think, is that when we look back over the last few hundred years, 108 00:11:12,970 --> 00:11:17,860 what we see is that the harmful effect of technological change on work, 109 00:11:17,860 --> 00:11:23,230 the one that really preoccupied our anxious ancestors, is only half the story.
110 00:11:23,230 --> 00:11:30,880 You know, yes, machines took the place of human beings in performing certain tasks. But they didn't just substitute for people; 111 00:11:30,880 --> 00:11:38,620 they also complemented them in other tasks that had not yet been automated, raising the demand for people to do those activities instead. 112 00:11:38,620 --> 00:11:46,180 And this helpful force, this so often forgotten second force, worked in a variety of different ways. 113 00:11:46,180 --> 00:11:48,610 Perhaps the most obvious way that it worked, 114 00:11:48,610 --> 00:11:57,340 the first way that this complementing force has helped human beings, is by making people more productive, more efficient, at certain tasks. 115 00:11:57,340 --> 00:11:59,590 So a taxi driver, for example, 116 00:11:59,590 --> 00:12:07,540 can use a satnav system to navigate unfamiliar roads, or an architect can use computer-assisted design software to design bigger, 117 00:12:07,540 --> 00:12:17,380 more complex buildings. But economic history also reveals a second, less direct way that this complementing force has helped human workers. 118 00:12:17,380 --> 00:12:24,070 If we think of the economy as a pie, technological progress over time has made the pie far bigger. 119 00:12:24,070 --> 00:12:28,300 As productivity increases, incomes rise and demand grows. 120 00:12:28,300 --> 00:12:35,040 So the British pie, for instance, is more than 100 times the size it was 300 years ago. 121 00:12:35,040 --> 00:12:38,250 And intuitively, it's possible to see how this might have helped displaced workers: 122 00:12:38,250 --> 00:12:47,610 people displaced from tasks in the old pie could tumble into tasks and activities in this new, bigger, better pie instead. 123 00:12:47,610 --> 00:12:52,290 And in turn, there's also a third way that the complementing force has helped workers. 124 00:12:52,290 --> 00:12:59,850 Technological progress has not only made the pie bigger but has also changed the pie. Again, think of the British economy. 125 00:12:59,850 --> 00:13:05,760 Not only is it more than a hundred times larger than it was 300 years ago, 126 00:13:05,760 --> 00:13:10,240 but its output, and the way that output is produced, has completely transformed. 127 00:13:10,240 --> 00:13:18,300 Three hundred years ago, the British economy was made up of farms; a hundred and fifty years ago, of factories; and today, of offices. 128 00:13:18,300 --> 00:13:24,330 And again, we can see how this might have helped displaced workers in the past. As the economy changes over time, 129 00:13:24,330 --> 00:13:34,950 people displaced from tasks in the old pie could again tumble into performing different activities and tasks in this new, changed pie instead. 130 00:13:34,950 --> 00:13:38,550 So distinguishing clearly between this substituting force and this complementing 131 00:13:38,550 --> 00:13:44,520 force helps to explain why anxieties about automation were repeatedly misplaced. 132 00:13:44,520 --> 00:13:52,500 In this clash between these two fundamental forces, our ancestors tended to pick the wrong winner. Time and again, 133 00:13:52,500 --> 00:13:56,700 they either neglected the complementing force altogether and just ignored it, 134 00:13:56,700 --> 00:14:01,980 or they mistakenly imagined that it would be overwhelmed by the substituting force. 135 00:14:01,980 --> 00:14:07,530 And that is why they repeatedly underestimated the demand for the work of human beings that would remain.
136 00:14:07,530 --> 00:14:13,830 There has always been, by and large, enough work to keep everyone in employment. 137 00:14:13,830 --> 00:14:23,910 So I want to say a little now about technology. Every day we hear of systems and machines taking on tasks and activities 138 00:14:23,910 --> 00:14:32,700 that until recently we thought only human beings could ever do: drafting legal contracts and making medical diagnoses, 139 00:14:32,700 --> 00:14:39,060 designing beautiful buildings and composing music, writing news reports. 140 00:14:39,060 --> 00:14:45,030 Given the time constraints today, I just want to give you a general sense of how it is that I think about technology, and the way 141 00:14:45,030 --> 00:14:49,740 in which I think it's useful to think about technology when thinking about the future of work. 142 00:14:49,740 --> 00:14:58,300 So although machines can clearly do more today than they could in the past, they cannot do everything. 143 00:14:58,300 --> 00:15:02,850 There are still limits to the harmful substituting force. 144 00:15:02,850 --> 00:15:07,920 The problem is that those boundaries are really very unclear, and they're always changing. 145 00:15:07,920 --> 00:15:14,280 And lots of recent articles and books and reports have tried to work out the new limits to machine capabilities, 146 00:15:14,280 --> 00:15:17,160 and they've done this using a variety of different approaches. 147 00:15:17,160 --> 00:15:23,520 So one approach has been to try and identify which particular human faculties are hard to automate. 148 00:15:23,520 --> 00:15:30,990 A popular finding, for instance, is that new technologies really struggle to perform tasks that require social intelligence, 149 00:15:30,990 --> 00:15:34,500 things like face-to-face interaction or empathetic support. 150 00:15:34,500 --> 00:15:45,690 So from 1980 to 2012, jobs that require a high level of human interaction grew by about 12 percent as a share of the US workforce. 151 00:15:45,690 --> 00:15:51,960 A different tack has been to consider not the faculties that human beings bring to bear, 152 00:15:51,960 --> 00:15:59,280 but instead to consider the tasks themselves and ask whether they have features that make them easier or harder for a machine to handle. 153 00:15:59,280 --> 00:16:03,810 So, for instance, if you come across a task where it's easy to define the goal, 154 00:16:03,810 --> 00:16:09,480 straightforward to tell whether or not that goal has been achieved, and there are lots of data for the machine to learn from, 155 00:16:09,480 --> 00:16:16,230 then chances are that task can probably be automated. The classic example of this is identifying cats. The goal is simple: 156 00:16:16,230 --> 00:16:21,870 just answer the question, is this a cat? It's easy to tell whether or not the system has succeeded: 157 00:16:21,870 --> 00:16:27,450 yes, this is indeed a cat. And there's lots of photos of cats out there on the internet, 158 00:16:27,450 --> 00:16:35,190 perhaps disturbingly so: six and a half billion photos of cats are supposedly out there to be had. 159 00:16:35,190 --> 00:16:41,010 So you can imagine that other tasks, where it's harder to define the goal or to see whether the goal is being achieved, 160 00:16:41,010 --> 00:16:46,080 or where there's not a lot of data to learn from, are harder to automate.
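[Editor's illustration: a minimal sketch, in Python, of the three-part test just described, an easily defined goal, an easily checked answer, and plenty of labelled data to learn from. Everything in it is invented for illustration, above all the two-number "features" standing in for photos; no real cat classifier is this small.]

```python
# A toy sketch of learning a yes/no task from labelled data. The "goal" is a
# label (is this a cat?), success is checked directly against that label, and
# there is plenty of data: the three properties described above. The numeric
# "features" standing in for a photo are entirely made up.
import random

random.seed(42)

def make_example():
    """One synthetic labelled example: ([feature1, feature2], is_cat)."""
    is_cat = random.random() < 0.5
    base = 1.0 if is_cat else 0.0  # imagine "ear pointiness" and "whisker density"
    return [base + random.gauss(0, 0.3), base + random.gauss(0, 0.3)], is_cat

def train(examples, epochs=20, lr=0.1):
    """Fit a tiny perceptron: a linear yes/no rule nudged on every mistake."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, label in examples:
            predicted = (w[0] * x[0] + w[1] * x[1] + b) > 0
            error = int(label) - int(predicted)  # 0 if right, +1 or -1 if wrong
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

def predict(w, b, x):
    return (w[0] * x[0] + w[1] * x[1] + b) > 0

training_data = [make_example() for _ in range(1000)]  # "lots of data"
test_data = [make_example() for _ in range(200)]
w, b = train(training_data)
accuracy = sum(predict(w, b, x) == y for x, y in test_data) / len(test_data)
print(f"Held-out accuracy: {accuracy:.0%}")  # the goal is easy to verify
```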
161 00:16:46,080 --> 00:16:51,690 The obvious problem, though, with trying to identify the limits of machines in either of these ways is that 162 00:16:51,690 --> 00:16:55,680 any conclusions you reach are going to become outdated incredibly quickly. 163 00:16:55,680 --> 00:17:04,710 Those who try to identify the boundaries of machine capabilities are like the proverbial painters of the Forth 164 00:17:04,710 --> 00:17:11,850 Rail Bridge in Scotland, a bridge so long that those painters supposedly had to start again when they got to the end, 165 00:17:11,850 --> 00:17:13,450 because by the time they got to the end 166 00:17:13,450 --> 00:17:21,420 the paint had started to peel. Spend time coming up with a sensible account of what it is that machines can do today, 167 00:17:21,420 --> 00:17:26,130 and by the time you finish, you're probably going to have to start again and readjust. 168 00:17:26,130 --> 00:17:35,090 So the argument that I make is that a better way to think about technology is to stop trying to identify specific limits, to repress that temptation for 169 00:17:35,090 --> 00:17:40,370 taxonomies, and instead to try to make out the more general trends. And I think when you do that, 170 00:17:40,370 --> 00:17:46,700 what you see beneath the particular ripples of progress that we see around us today are some deeper currents. 171 00:17:46,700 --> 00:17:53,330 Although it's very difficult to say exactly what it is that machines will be capable of doing in the future, 172 00:17:53,330 --> 00:17:58,130 it's pretty certain that they're going to be able to do more than they can at the moment. 173 00:17:58,130 --> 00:18:06,780 Over time, machines will gradually but relentlessly advance further and further into the realm of tasks performed by human beings. 174 00:18:06,780 --> 00:18:13,640 You know, take any technology that exists today. Open your laptop, pick up your smartphone, and you can be pretty confident 175 00:18:13,640 --> 00:18:19,340 that's the least advanced it's ever going to be. And I call this process task encroachment. 176 00:18:19,340 --> 00:18:24,440 And I think when you look at the three main capabilities that human beings draw on in their work, 177 00:18:24,440 --> 00:18:28,340 whether it's manual capabilities that involve dealing with the physical world, 178 00:18:28,340 --> 00:18:36,120 cognitive capabilities that draw on our ability to think and reason, or affective capabilities, our capacities for feelings and emotions, 179 00:18:36,120 --> 00:18:43,850 I think what you see in each of those areas are machines gradually encroaching on more and more tasks that require 180 00:18:43,850 --> 00:18:52,100 each of those capabilities from human beings. And if you have a look at the book, what you'll see are hundreds of examples of this at work. 181 00:18:52,100 --> 00:18:57,740 I think it's important, though, to say that the examples I give in the book are not meant to be exhaustive. 182 00:18:57,740 --> 00:19:04,670 There are some impressive ones almost certainly missing, and in a few years' time, when we look back at what I wrote, 183 00:19:04,670 --> 00:19:13,190 some will almost certainly look tired and out of date. And the claims of the companies I talk about aren't meant to be taken as gospel either. 184 00:19:13,190 --> 00:19:19,280 At times, I think it can be hard to distinguish serious corporate ambitions and achievements 185 00:19:19,280 --> 00:19:26,150 from provocations drawn up by marketeers whose job it is to exaggerate for a living.
186 00:19:26,150 --> 00:19:33,230 The icing on the cake for me was when someone asked me if I would like an artificially intelligent toothbrush for Christmas. 187 00:19:33,230 --> 00:19:37,130 I'm not quite sure how intelligent you need to be to brush your teeth. 188 00:19:37,130 --> 00:19:44,690 The point is this, though, and it's just a general point: to dwell for too long on any particular omission or exaggeration, 189 00:19:44,690 --> 00:19:47,630 I think, is to miss the bigger picture of what's going on: 190 00:19:47,630 --> 00:19:55,660 machines gradually encroaching on more and more tasks and activities that in the past have required a rich range of human capabilities. 191 00:19:55,660 --> 00:20:01,550 Now, economists are pretty reluctant to label any empirical regularity as a law or a rule. 192 00:20:01,550 --> 00:20:09,770 But this process of task encroachment, I think, has proven to be about as law-like as any historical phenomenon can be, and barring catastrophe 193 00:20:09,770 --> 00:20:12,140 I think it's pretty certain to continue. 194 00:20:12,140 --> 00:20:19,850 And I think this is how we ought to think about technology in general: in terms of this process of task encroachment. 195 00:20:19,850 --> 00:20:24,650 What I want to talk a little bit more about now is one technology in particular, which is artificial intelligence. 196 00:20:24,650 --> 00:20:30,740 It's something that has really captured people's imaginations in the last few years, and it's something responsible, 197 00:20:30,740 --> 00:20:35,870 I think, for the renewal of worries about what technology is going to do to work. 198 00:20:35,870 --> 00:20:41,180 And I think something significant has happened in the field, and I want to share that with you now. 199 00:20:41,180 --> 00:20:45,620 And the story I want to tell begins in the first wave of artificial intelligence. 200 00:20:45,620 --> 00:20:47,960 The first wave of artificial intelligence: 201 00:20:47,960 --> 00:20:55,790 this was when my dad, who was my co-author on the previous book that I wrote, The Future of the Professions, 202 00:20:55,790 --> 00:21:00,740 was doing his doctorate here on artificial intelligence and the law. 203 00:21:00,740 --> 00:21:05,810 So he was back then already trying to build systems that could solve legal problems. 204 00:21:05,810 --> 00:21:14,900 And something quite interesting happened in 1986, when a very difficult piece of legislation was passed in the UK called the Latent Damage Act. 205 00:21:14,900 --> 00:21:20,990 And it turned out that the leading expert in the world at the time on this piece of legislation was a man called Phillip Capper, 206 00:21:20,990 --> 00:21:25,280 and Phillip happened to be the dean of the law school here, 207 00:21:25,280 --> 00:21:29,150 where my dad was doing his doctorate, and Phillip came to my dad and said, look, this is absurd. 208 00:21:29,150 --> 00:21:35,190 Any time anyone wants to understand if this legislation applies to them, they have to come to me. 209 00:21:35,190 --> 00:21:41,840 Why don't we instead join forces, work together, and build a system based on my expertise for other people to use? 210 00:21:41,840 --> 00:21:53,240 And that's exactly what they did, from 1986 to 1988. It was known as an expert system, an expert system because it was based on Phillip's expertise. 211 00:21:53,240 --> 00:22:00,380 This was the home screen designed for that system. My dad assures me this was a cool screen design in the 1980s.
212 00:22:00,380 --> 00:22:06,050 I've never been convinced of that. Just to give you a flavour of what it was they were up against, an extract from the legislation: 213 00:22:06,050 --> 00:22:10,910 "Section two of this act shall not apply to an action to which the section applies." 214 00:22:10,910 --> 00:22:14,660 And that's one of the more readily understandable pieces of the law. And I love this: 215 00:22:14,660 --> 00:22:20,390 they published it in the form of two floppy disks, at a time when floppy disks genuinely were floppy. 216 00:22:20,390 --> 00:22:25,100 And essentially what they did together was build a gigantic decision tree, where you answered yes-or-no 217 00:22:25,100 --> 00:22:30,920 questions and navigated through this huge tree that literally had millions of branches through it. 218 00:22:30,920 --> 00:22:37,370 And my dad and his colleagues had manually, painstakingly crafted it. And they weren't alone in doing this. 219 00:22:37,370 --> 00:22:43,340 This was the general approach in artificial intelligence. And in the beginning, like my dad, 220 00:22:43,340 --> 00:22:48,320 most people in artificial intelligence thought that building a machine to perform 221 00:22:48,320 --> 00:22:54,170 a given task meant observing how human beings perform that task and copying them. 222 00:22:54,170 --> 00:22:59,220 So some people tried to replicate the actual structure of the human brain. 223 00:22:59,220 --> 00:23:05,640 Others tried a more psychological approach and tried to replicate the thinking 224 00:23:05,640 --> 00:23:11,670 and reasoning processes in which human beings appeared to be engaged when they were performing a task. 225 00:23:11,670 --> 00:23:18,870 And the third, and this is what my dad was doing in building those expert systems, was trying to identify the rules that human beings follow. 226 00:23:18,870 --> 00:23:23,760 But in all of these efforts in this first wave of artificial intelligence, 227 00:23:23,760 --> 00:23:31,310 human beings provided the template for machine behaviour in one way or another. 228 00:23:31,310 --> 00:23:40,670 And what's interesting, looking back, is that ultimately this approach of building machines in these different images of human beings did not succeed. 229 00:23:40,670 --> 00:23:45,350 Despite the initial burst of enthusiasm and optimism and excitement, 230 00:23:45,350 --> 00:23:51,500 really no serious, noteworthy progress was made in artificial intelligence. 231 00:23:51,500 --> 00:23:55,940 And as progress faltered, researchers found themselves at a dead end. 232 00:23:55,940 --> 00:24:01,190 Funding dried up. Progress came to an end. Interest in the field fell away. 233 00:24:01,190 --> 00:24:07,790 A period known as the AI winter began, when really not a lot happened in the field at all. 234 00:24:07,790 --> 00:24:12,710 The great turning point, and it's a familiar moment, came in 1997, when Garry Kasparov, 235 00:24:12,710 --> 00:24:19,420 who at the time was the world chess champion, was beaten by a system owned by IBM called Deep Blue. 236 00:24:19,420 --> 00:24:24,670 What's so interesting about this, looking back on it, is that if you had asked my dad in the 1980s, 237 00:24:24,670 --> 00:24:30,070 do you think something like this will ever be possible, he would have said emphatically, no. 238 00:24:30,070 --> 00:24:35,680 And the reason he would have said no is very important: at the time, they were still in that first-wave mindset.
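[Editor's illustration: a toy sketch, in Python, of the kind of first-wave expert system just described, a hand-crafted decision tree navigated by yes/no questions. The questions and conclusions below are invented and carry no legal meaning; the actual Latent Damage System encoded an expert's reading of a real statute across millions of branches.]

```python
# A toy first-wave "expert system": the machine holds no judgement of its own,
# only rules a human expert wrote down, arranged as a yes/no decision tree.
# The questions and answers below are invented purely for illustration.
TREE = {
    "question": "Does your claim concern latent damage to property?",
    "yes": {
        "question": "Was the damage discovered within the last three years?",
        "yes": {"answer": "On these made-up rules: the claim may proceed."},
        "no": {"answer": "On these made-up rules: the time limit may have passed."},
    },
    "no": {"answer": "On these made-up rules: this statute does not apply."},
}

def consult(node):
    """Walk the tree, asking yes/no questions until a leaf answer is reached."""
    while "answer" not in node:
        reply = input(node["question"] + " (yes/no) ").strip().lower()
        if reply in ("yes", "y"):
            node = node["yes"]
        elif reply in ("no", "n"):
            node = node["no"]
        else:
            print("Please answer yes or no.")
    print(node["answer"])

if __name__ == "__main__":
    consult(TREE)
```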
239 00:24:35,680 --> 00:24:41,830 They thought the only way to build a system to outperform a human expert was to identify a human expert, 240 00:24:41,830 --> 00:24:47,800 in my dad's case this man, Phillip Capper, sit down with him, and get him to explain to you how it was 241 00:24:47,800 --> 00:24:54,190 he solved whatever problem it was you were trying to build a machine to perform. 242 00:24:54,190 --> 00:25:00,980 And then you tried to capture the way that human beings were thinking and reasoning in a set of instructions for a machine to follow. 243 00:25:00,980 --> 00:25:05,140 But here's the problem, and Garry Kasparov is a great example of this problem. 244 00:25:05,140 --> 00:25:09,340 If you sit down with Garry Kasparov and say, Garry, tell me how it is 245 00:25:09,340 --> 00:25:14,800 you're so good at chess, tell me what you're thinking about, walk me through your reasoning processes, 246 00:25:14,800 --> 00:25:19,390 he might be able to give you a few clever opening moves or closing plays, but ultimately he'd struggle. 247 00:25:19,390 --> 00:25:24,490 He'd say things like, it requires instinct, intuition, gut reaction, experience. 248 00:25:24,490 --> 00:25:28,660 I can't articulate how it is I'm so good at chess. 249 00:25:28,660 --> 00:25:33,550 And for that reason, my dad and his colleagues thought something like this could never be automated. If a human being 250 00:25:33,550 --> 00:25:38,410 can't articulate how it is they perform a task, where on earth do we begin, 251 00:25:38,410 --> 00:25:42,840 they thought, in writing a set of instructions for a machine to follow? 252 00:25:42,840 --> 00:25:51,430 Of course, what they hadn't banked on in the 1980s was the exponential growth in processing power that would happen in the decades to come. 253 00:25:51,430 --> 00:25:56,530 And just to give you a sense of quite how powerful this is: 254 00:25:56,530 --> 00:26:01,660 that's a logarithmic scale there on the left-hand side, which means each step up represents a tenfold increase. 255 00:26:01,660 --> 00:26:05,230 So one step, tenfold; two steps, a hundredfold; three steps, a thousandfold. 256 00:26:05,230 --> 00:26:13,000 So that is a really explosive growth in the computational power available, beginning in the 1950s 257 00:26:13,000 --> 00:26:17,560 but really taking off in the second half of that century. 258 00:26:17,560 --> 00:26:21,670 So by the time Garry Kasparov sat down with Deep Blue in nineteen ninety-seven, 259 00:26:21,670 --> 00:26:26,050 Deep Blue was calculating up to three hundred and thirty million moves a second. 260 00:26:26,050 --> 00:26:32,140 Garry Kasparov, at best, could maybe juggle about one hundred and ten moves in his head at any one time. He was blown 261 00:26:32,140 --> 00:26:36,850 out of the water by brute-force processing power and lots of data storage capability. 262 00:26:36,850 --> 00:26:44,020 This system wasn't trying to replicate his thinking process, wasn't trying to mimic his reasoning. 263 00:26:44,020 --> 00:26:47,470 It was performing the task in a fundamentally different way. 264 00:26:47,470 --> 00:26:53,440 So the Deep Blue result was a practical victory, but it was also an ideological triumph as well. 265 00:26:53,440 --> 00:26:58,630 We can think of most AI researchers up until that point as being purists.
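[Editor's illustration: a minimal sketch, in Python, of what brute force means here. The game below is deliberately tiny, take one, two or three stones and whoever takes the last stone wins, so the entire game tree can be searched exhaustively. Deep Blue's real design, with specialised chess hardware and hand-tuned evaluation functions, was vastly more elaborate; the point is only the principle, searching positions rather than mimicking human reasoning.]

```python
# Exhaustive game-tree search on a toy game: no imitation of human thinking,
# just checking every line of play and picking a move that forces a win.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player about to move can force a win from this position."""
    if stones == 0:
        return False  # no stones left: the previous player took the last one and won
    # If any legal move leaves the opponent in a losing position, this is a win.
    return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Search for a winning move; if none exists, just take one stone."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1

if __name__ == "__main__":
    for n in range(1, 13):
        verdict = "win" if can_win(n) else "lose"
        print(f"{n:2d} stones: {verdict} (suggested move: take {best_move(n)})")
```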
They closely 266 00:26:58,630 --> 00:27:04,090 observed human beings acting intelligently and tried to build machines in that image. 267 00:27:04,090 --> 00:27:06,520 But that was not how Deep Blue was designed. 268 00:27:06,520 --> 00:27:14,860 Its creators didn't set out to copy the anatomy of human chess players, the reasoning they engaged in or the particular strategies they followed. 269 00:27:14,860 --> 00:27:16,390 They were pragmatists. 270 00:27:16,390 --> 00:27:24,100 They took a task that required intelligence when performed by a human being and built a machine to perform it in a fundamentally different way. 271 00:27:24,100 --> 00:27:27,190 And that's what brought the world of AI, 272 00:27:27,190 --> 00:27:35,890 I argue, out of the AI winter and into what I call the pragmatist revolution. And a generation of systems is now being built in that spirit, 273 00:27:35,890 --> 00:27:46,330 crafted to function very differently from human beings, judged not by how they perform a task, but by how well they perform it. 274 00:27:46,330 --> 00:27:54,850 Advances in machine translation, for instance, have come not from developing a machine that mimics a talented human translator, 275 00:27:54,850 --> 00:28:01,510 but from having computers scan millions of human-translated pieces of text to figure out the correspondences and patterns on their own. 276 00:28:01,510 --> 00:28:06,490 Likewise, machines have learnt to classify images not by mimicking human vision, 277 00:28:06,490 --> 00:28:14,800 but by reviewing millions of previously labelled pictures and hunting for similarities between those and the particular photo in question. 278 00:28:14,800 --> 00:28:18,100 I think this is also why, in the economics literature, 279 00:28:18,100 --> 00:28:26,440 we've seen a systematic underestimation of the capabilities of machines. What do the tasks of driving a car, 280 00:28:26,440 --> 00:28:31,750 making a medical diagnosis and identifying a bird at a fleeting glimpse have in common? 281 00:28:31,750 --> 00:28:36,250 Well, these were all tasks that at one point leading economists, thinking about the future of work, 282 00:28:36,250 --> 00:28:41,320 thinking about technology and its impact on work, thought couldn't readily be automated. 283 00:28:41,320 --> 00:28:45,280 And yet today they increasingly can be. So what went wrong here? 284 00:28:45,280 --> 00:28:54,640 Once again, I think these economists were purists, believing that machines had to copy the way that human beings think and reason in order to outperform them. 285 00:28:54,640 --> 00:29:01,750 They were in exactly the same mindset that my dad was, which was that if you want to build a system to outperform a human expert, 286 00:29:01,750 --> 00:29:06,220 you've got to sit down with them, get them to explain to you how it is they do whatever task it is 287 00:29:06,220 --> 00:29:10,240 you're trying to build a machine to do, and try to copy that human explanation. 288 00:29:10,240 --> 00:29:17,890 But if you ask a doctor, say, how is it that you make a medical diagnosis, then just like Garry Kasparov 289 00:29:17,890 --> 00:29:22,210 they would probably struggle to explain exactly how it is that they do it. 290 00:29:22,210 --> 00:29:28,840 And this set of observations leads, I think, to one of the most important ideas in our work, 291 00:29:28,840 --> 00:29:33,980 which is a fallacy, the artificial intelligence fallacy.
292 00:29:33,980 --> 00:29:38,710 And it's this: it's the mistaken assumption that the only way to develop systems to perform tasks 293 00:29:38,710 --> 00:29:43,570 at the level of human beings or higher is to copy the way that human beings perform that task. 294 00:29:43,570 --> 00:29:48,520 That was true in the first wave of artificial intelligence; it is simply no longer true, 295 00:29:48,520 --> 00:29:55,420 and I think it clouds so many of our judgements about what it is that systems and machines will be capable of doing in the future. 296 00:29:55,420 --> 00:29:58,630 So let me give you an example. People say: look, Daniel, you don't understand. 297 00:29:58,630 --> 00:30:05,860 Here's a job that requires judgement, and judgement is the sort of thing that simply cannot be performed by a machine. 298 00:30:05,860 --> 00:30:11,380 And again, in light of everything I've just said, the question, can a machine ever exercise judgement, 299 00:30:11,380 --> 00:30:15,880 is the wrong question to be asking. In fact, there are two more important questions we should be asking. 300 00:30:15,880 --> 00:30:18,550 The first is: to what problem is judgement the solution? 301 00:30:18,550 --> 00:30:24,320 Why do we go to our fellow human beings and say, look, I need your judgement, help me, I need you? 302 00:30:24,320 --> 00:30:28,420 Well, I think the answer to that question is because of uncertainty: 303 00:30:28,420 --> 00:30:32,380 when the facts are unclear, when the information is ambiguous, when we don't know what to do, 304 00:30:32,380 --> 00:30:40,090 we go to our fellow human beings and say, look, I need your judgement, based on your experience, to help me make sense of this uncertainty. 305 00:30:40,090 --> 00:30:45,550 So really, the question we've got to be asking isn't, can a machine ever exercise judgement, 306 00:30:45,550 --> 00:30:49,420 but, can a machine deal with uncertainty better than a human being? 307 00:30:49,420 --> 00:30:52,430 And the answer is, in many cases, of course it can. Just 308 00:30:52,430 --> 00:30:58,220 think of the medical diagnostic system developed by a team of researchers at Stanford that 309 00:30:58,220 --> 00:31:03,230 can tell you whether or not a freckle is cancerous as accurately as leading dermatologists. 310 00:31:03,230 --> 00:31:07,100 How does it work? It's not trying to copy the judgement of a human doctor. 311 00:31:07,100 --> 00:31:11,060 It knows, understands, nothing about medicine at all. 312 00:31:11,060 --> 00:31:12,980 Instead, it's got a database of, I think, 313 00:31:12,980 --> 00:31:19,940 about one hundred and twenty-nine thousand past cases, and it's running essentially a pattern recognition algorithm through those cases, 314 00:31:19,940 --> 00:31:27,350 hunting for similarities between them and the particular photo of a troubling freckle that you've given it. It's performing the task 315 00:31:27,350 --> 00:31:34,640 in an unhuman way, based on the analysis of more possible cases than any human doctor could hope to review in their lifetime. 316 00:31:34,640 --> 00:31:42,680 It doesn't matter that a human doctor can't explain how they do it themselves. How machines 'think' is a question I love from a philosophical point of view, but in light of all I've said, I don't think it's a particularly helpful question to be asking when we think about the future of work.
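[Editor's illustration: a toy sketch, in Python, of "hunting for similarities", a nearest-neighbour search over a database of past cases. The three-number features and the five-case database are invented and have no medical meaning; the Stanford system mentioned above worked on raw photographs with a deep neural network and roughly 129,000 real clinical images.]

```python
# Nearest-neighbour "diagnosis" over invented data: no medical knowledge
# anywhere, only similarity to previously labelled cases.
import math

# Hypothetical past cases: (feature vector, recorded outcome). Imagine the
# numbers encode things like lesion size, asymmetry and colour variation.
CASES = [
    ([0.9, 0.8, 0.7], "malignant"),
    ([0.8, 0.9, 0.6], "malignant"),
    ([0.2, 0.1, 0.3], "benign"),
    ([0.1, 0.2, 0.2], "benign"),
    ([0.3, 0.2, 0.1], "benign"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(new_case, k=3):
    """Label a new case by majority vote among its k most similar past cases."""
    nearest = sorted(CASES, key=lambda case: distance(case[0], new_case))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(classify([0.85, 0.75, 0.65]))  # close to the malignant cluster -> "malignant"
print(classify([0.15, 0.15, 0.25]))  # close to the benign cluster -> "benign"
```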
317 00:31:42,680 --> 00:31:45,650 To see why, take a different example: 318 00:31:45,650 --> 00:31:56,990 another system owned by IBM, 319 00:31:56,990 --> 00:32:01,520 a system called Watson. Its claim to fame, of course, was that 320 00:32:01,520 --> 00:32:09,260 it went on the US quiz show Jeopardy! in 2011, and it beat the two human champions at Jeopardy!. 321 00:32:09,260 --> 00:32:17,240 And again, it was a remarkable achievement. What I loved about this, though, looking back on it, was that the day after Watson won on Jeopardy!, 322 00:32:17,240 --> 00:32:22,190 the Wall Street Journal ran a piece by the great philosopher John Searle with the title 323 00:32:22,190 --> 00:32:26,450 "Watson Doesn't Know It Won on Jeopardy!". Right? 324 00:32:26,450 --> 00:32:28,220 And it's brilliant. That is completely true. 325 00:32:28,220 --> 00:32:33,360 You know, Watson didn't let out a whoop of excitement, didn't call up its parents to say what a good job it had done, 326 00:32:33,360 --> 00:32:36,650 didn't want to go down to the proverbial pub to have a drink with its friends. 327 00:32:36,650 --> 00:32:42,140 The system wasn't trying to copy the way that these human contestants thought or reasoned, but it didn't matter. 328 00:32:42,140 --> 00:32:47,120 It still outperformed them. And that, I think, is what the second wave of artificial intelligence is about. 329 00:32:47,120 --> 00:32:52,940 And that's where we find ourselves today: systems and machines which are using lots of processing power, 330 00:32:52,940 --> 00:32:58,340 lots of data storage capability and advances in algorithms, designed to perform tasks that might 331 00:32:58,340 --> 00:33:03,920 require faculties like judgement and creativity and empathy when performed by human beings, 332 00:33:03,920 --> 00:33:07,860 but able now to perform those tasks in fundamentally different ways. 333 00:33:07,860 --> 00:33:13,190 And so, of a whole realm of tasks that we thought were once out of reach of automation, 334 00:33:13,190 --> 00:33:19,220 more and more appear to be within reach. So what does this then mean for work? 335 00:33:19,220 --> 00:33:23,680 What do these changes in the world of technology mean for thinking about work? 336 00:33:23,680 --> 00:33:33,550 Greek mythology tells the story of a man called Tantalus, who chops up his son and serves him as a meal to the gods. 337 00:33:33,550 --> 00:33:40,660 This, given his dinner guests' omniscience, turns out to be a very bad decision, and as a punishment 338 00:33:40,660 --> 00:33:49,510 he's made to stand for eternity in a pool of water up to his chin, surrounded by trees bursting with fruit. 339 00:33:49,510 --> 00:33:53,830 But each time he leans down to take a sip, the water recedes away from his lips, 340 00:33:53,830 --> 00:33:59,810 and every time he reaches out to take some fruit from the tree, the branches swing away from his grasp. 341 00:33:59,810 --> 00:34:02,740 And this story of Tantalus, 342 00:34:02,740 --> 00:34:09,940 which gives us the word tantalise, captures, I think, the spirit of the first kind of technological unemployment, 343 00:34:09,940 --> 00:34:15,460 which we can think of as a sort of frictional technological unemployment. Here, 344 00:34:15,460 --> 00:34:22,990 there's still work to be done;
the problem is that not all workers are able to reach out and take it up. 345 00:34:22,990 --> 00:34:28,890 So frictional technological unemployment doesn't mean that there are going to be fewer jobs for human beings to do. 346 00:34:28,890 --> 00:34:37,030 It's important to remember that. If we think again in terms of those two forces, I think for the next decade or so, at least, in almost all economies 347 00:34:37,030 --> 00:34:42,280 the harmful substituting force that displaces workers will be overwhelmed by 348 00:34:42,280 --> 00:34:46,900 the helpful complementing force that raises the demand for their work elsewhere. 349 00:34:46,900 --> 00:34:53,140 I think that's going to continue; that historical story will continue. But for three reasons, and I set these out in the book, 350 00:34:53,140 --> 00:35:01,570 I think that this in-demand work is increasingly going to be out of the reach of more and more people who want to take it up. 351 00:35:01,570 --> 00:35:05,230 What are these three reasons? The first is perhaps the most familiar. 352 00:35:05,230 --> 00:35:09,730 It's the skills mismatch, where displaced workers do not necessarily have the skills 353 00:35:09,730 --> 00:35:14,350 and capabilities to do the new work that's created by technological progress. 354 00:35:14,350 --> 00:35:18,310 And this is probably the most familiar reason why people might not be able to 355 00:35:18,310 --> 00:35:23,680 take up the work that's available, and we can perhaps talk about it later on. 356 00:35:23,680 --> 00:35:31,990 The second reason is a geographical one, place mismatch, where displaced workers do not live in the same place as the new work that is created. 357 00:35:31,990 --> 00:35:35,230 I mean, it's interesting: if we think back to the early days of the internet, 358 00:35:35,230 --> 00:35:39,280 there was a moment where it seemed like these sorts of worries about location might no 359 00:35:39,280 --> 00:35:45,070 longer matter, when people spoke about the death of distance and how the world is flat. 360 00:35:45,070 --> 00:35:52,160 But it turns out that really, in looking for work today, the place where you live matters more than ever. 361 00:35:52,160 --> 00:35:57,860 I think the third reason for frictional technological unemployment is perhaps the least familiar. 362 00:35:57,860 --> 00:36:06,380 It's what I call identity mismatch, and this is where displaced workers have an identity rooted in a particular sort 363 00:36:06,380 --> 00:36:11,840 of work and are willing to stay unemployed in order to protect that identity. 364 00:36:11,840 --> 00:36:18,410 So think of adult men in the United States, for instance, displaced from manufacturing roles by new technologies. 365 00:36:18,410 --> 00:36:27,080 Some say that they would prefer not to work at all than to take up, and it's an unfortunate term, so-called pink-collar work. 366 00:36:27,080 --> 00:36:33,950 It's a term that's meant to capture the fact that many of the roles currently out of reach of machines are disproportionately held by women. 367 00:36:33,950 --> 00:36:40,940 So ninety-seven point seven percent of preschool and kindergarten teachers in the United States are women, 368 00:36:40,940 --> 00:36:47,420 ninety-two point two percent of nurses are women, eighty-two point five percent of social workers are women. 369 00:36:47,420 --> 00:36:54,650 So I think together those three mismatches tell a story where there might be work for people to do,
370 00:36:54,650 --> 00:37:00,520 but where there are important reasons to think that not everyone will necessarily be able to take it up. 371 00:37:00,520 --> 00:37:06,620 And I think most economists tend to be comfortable with this idea of frictional technological unemployment. 372 00:37:06,620 --> 00:37:13,100 I think many of them can picture a world where there's lots of work to be done but, because of these mismatches, 373 00:37:13,100 --> 00:37:20,750 not everyone can do it. But as we move through the 21st century, I think we might see the emergence of a second type of technological unemployment, 374 00:37:20,750 --> 00:37:25,580 one where there simply isn't enough well-paid work to be done, 375 00:37:25,580 --> 00:37:33,560 full stop. And I call this structural technological unemployment, and this, I think, is a less comfortable idea. 376 00:37:33,560 --> 00:37:42,950 So can this be right? What about the fact that after 300 years of radical technological change, there's still enough work for people to do? 377 00:37:42,950 --> 00:37:49,860 Does that not tell us there's always going to be enough work, enough demand for the work of human beings? 378 00:37:49,860 --> 00:37:53,870 What I try and argue in the book is that, no, that's not the case, 379 00:37:53,870 --> 00:38:00,590 and the fundamental reason why is that process of task encroachment that I described before. 380 00:38:00,590 --> 00:38:07,160 So if we think again in terms of those two forces, there can be little doubt that as task encroachment continues, 381 00:38:07,160 --> 00:38:15,050 that harmful substituting force is going to grow stronger. Workers are going to be displaced from a wider range of tasks than ever before. 382 00:38:15,050 --> 00:38:21,350 So the key question is, why can't we simply rely on that helpful complementing force to overcome that effect, 383 00:38:21,350 --> 00:38:30,020 as it has done since modern economic growth began? And the answer, I think, is that task encroachment also has a second, pernicious effect, 384 00:38:30,020 --> 00:38:36,770 which is that over time, I worry, it might wear down that helpful complementing force as well. 385 00:38:36,770 --> 00:38:43,340 So let's just think about that complementing force a little more. Take the productivity effect that I mentioned originally. In the future, 386 00:38:43,340 --> 00:38:48,980 new technologies are no doubt going to make some people more productive at certain tasks. 387 00:38:48,980 --> 00:38:51,800 That's almost certainly going to happen. 388 00:38:51,800 --> 00:38:58,430 But this is only going to continue to help workers if they remain better placed to do those tasks than machines. 389 00:38:58,430 --> 00:39:03,550 And as task encroachment continues and continues, that becomes less and less likely for more and more tasks. 390 00:39:03,550 --> 00:39:09,320 So think about satnav systems again, just to see what I'm getting at. Today, as I said before, 391 00:39:09,320 --> 00:39:15,260 they make it easier for taxi drivers to navigate unfamiliar roads, making them better at the wheel. 392 00:39:15,260 --> 00:39:19,340 But that's only going to be true so long as human beings are better placed than 393 00:39:19,340 --> 00:39:27,290 machines to steer a vehicle from A to B. And in the coming years, if the excitement about driverless cars is right, 394 00:39:27,290 --> 00:39:33,530 that's no longer going to be the case.
The software is likely to drive cars more efficiently and safely than us. 395 00:39:33,530 --> 00:39:39,920 And at that point, it's no longer going to matter how good people are at driving, with or without satnavs. 396 00:39:39,920 --> 00:39:45,410 The machines will simply do it instead. Or think about the bigger pie effect. In the future, 397 00:39:45,410 --> 00:39:53,150 again, economic pies will no doubt continue to grow. Incomes will be larger than ever before, and demand for goods is going to soar. 398 00:39:53,150 --> 00:39:58,580 We can be pretty certain about that. Yet I don't think we can necessarily rely, 399 00:39:58,580 --> 00:40:05,270 I don't think we can necessarily rely on this to bolster the demand for the work of human beings, as it has in the past. 400 00:40:05,270 --> 00:40:13,820 Why? Because just as with that productivity effect, the bigger pie effect is only going to help if people, rather than machines, remain 401 00:40:13,820 --> 00:40:18,200 better placed to perform whatever tasks have to be done to produce those goods. 402 00:40:18,200 --> 00:40:22,010 And again, as task encroachment continues, that becomes less and less likely, 403 00:40:22,010 --> 00:40:27,110 and we can already see something like this in particular corners of economic life. 404 00:40:27,110 --> 00:40:30,860 So think about the UK agricultural sector. 405 00:40:30,860 --> 00:40:39,770 This part of the British economic pie has grown dramatically over the last century and a half, but it's simply not created more work for people to do. 406 00:40:39,770 --> 00:40:45,420 British agriculture now produces about five times what it did back in 1860, 407 00:40:45,420 --> 00:40:53,060 yet it only requires a tenth of the workers to do it. Think about UK manufacturing, a similar story: since 1948, 408 00:40:53,060 --> 00:40:57,470 the sector has come to produce about one hundred and fifty percent more than it did back then, 409 00:40:57,470 --> 00:41:00,530 yet it requires 60 percent fewer workers to do it. 410 00:41:00,530 --> 00:41:09,440 A growing economic pie doesn't necessarily mean, in these particular corners of the economy, growing demand for the work of human beings as well. 411 00:41:09,440 --> 00:41:11,660 And finally, think about that changing pie effect. 412 00:41:11,660 --> 00:41:18,590 Again, the economic pie may change, but in exactly the same way, as this process of task encroachment continues, 413 00:41:18,590 --> 00:41:25,430 it becomes more and more likely that machines, rather than human beings, will be better placed to do whatever new tasks have to be done. 414 00:41:25,430 --> 00:41:31,910 And again, if you look at parts of economic life, you might worry that something like this is already unfolding. 415 00:41:31,910 --> 00:41:41,930 So in nineteen sixty-four, the most valuable company in the United States was AT&T, with seven hundred and sixty thousand employees. 416 00:41:41,930 --> 00:41:47,510 Roll forward to twenty eighteen and it was Apple, with only one hundred and thirty-two thousand employees. 417 00:41:47,510 --> 00:41:52,400 In twenty nineteen it was Microsoft, with only one hundred and thirty-one thousand. 418 00:41:52,400 --> 00:42:00,320 The economy has transformed when we look at some of the leading companies, but those companies require far fewer workers.
419 00:42:00,320 --> 00:42:05,090 I mean, the more general point is this: in the year 2010, 420 00:42:05,090 --> 00:42:12,200 industries that had been created in the 21st century accounted for just half a percent of US employment. 421 00:42:12,200 --> 00:42:21,600 Clearly, the US economy has transformed, but that transformation hasn't necessarily created lots and lots more work for people to do. 422 00:42:21,600 --> 00:42:28,400 So this, I think, gives you a flavour of the argument that I'm making in the book about how we might find ourselves in a world with less work. 423 00:42:28,400 --> 00:42:34,880 As time goes on, machines continue to become more and more capable, taking on tasks that once fell to human beings. 424 00:42:34,880 --> 00:42:38,810 The harmful substituting force displaces workers in the familiar way. 425 00:42:38,810 --> 00:42:45,800 And for a time, I think the helpful complementing force is going to continue to raise the demand for those displaced workers elsewhere. 426 00:42:45,800 --> 00:42:52,100 And our challenge in that world is going to be frictional technological unemployment. 427 00:42:52,100 --> 00:42:56,750 But as task encroachment continues, and more and more tasks fall to machines, 428 00:42:56,750 --> 00:43:00,140 I worry that the helpful complementing force is going to be worn down 429 00:43:00,140 --> 00:43:06,500 as well, and that human beings will find themselves confined to an ever-shrinking set of tasks and activities. 430 00:43:06,500 --> 00:43:13,070 And there's no reason, no economic law, that tells us there must necessarily be enough demand for those 431 00:43:13,070 --> 00:43:18,080 residual activities to keep everyone who wants it in well-paid employment. 432 00:43:18,080 --> 00:43:22,610 And then our challenge becomes structural technological unemployment. 433 00:43:22,610 --> 00:43:25,370 And so the world of work comes to an end not with some Big Bang, 434 00:43:25,370 --> 00:43:31,520 not with robots taking everyone's jobs in a flash, in an instant, but with a sort of gradual withering, 435 00:43:31,520 --> 00:43:39,770 a withering in the demand for the work of human beings, as that substituting force gradually overruns the helpful complementing force 436 00:43:39,770 --> 00:43:45,710 and the balance between the two no longer tips in favour of human beings. 437 00:43:45,710 --> 00:43:52,610 So what problems does this lead to? I just want to sketch out what I think are the main challenges that we're going to face as a result of this, 438 00:43:52,610 --> 00:43:57,410 and we can perhaps discuss what we do about these in more detail in a moment. 439 00:43:57,410 --> 00:44:01,120 The first is the economic challenge, the challenge of inequality. 440 00:44:01,120 --> 00:44:07,100 You know, today the labour market is the main way that we share out economic prosperity in society; 441 00:44:07,100 --> 00:44:11,240 most people's jobs are their only source of income. 442 00:44:11,240 --> 00:44:18,290 How do we share out material prosperity in a society where our traditional way of doing so, paying 443 00:44:18,290 --> 00:44:22,850 people for the work that they do, is far less effective than in the past? 444 00:44:22,850 --> 00:44:26,150 How do we do it? What's going to take the labour market's place? 445 00:44:26,150 --> 00:44:31,730 I see these worries about a world with less work as being very closely related to worries about inequality today. 446 00:44:31,730 --> 00:44:34,910 The two things are very, very closely related.
447 00:44:34,910 --> 00:44:40,490 So I think the economic challenge in the 21st century is increasingly going to be a distributional one. 448 00:44:40,490 --> 00:44:50,420 But I don't think the challenges are just about economics. A big second challenge is what we do about the growing power of the large technology 449 00:44:50,420 --> 00:44:56,120 companies who are increasingly responsible for developing these technologies.
450 00:44:56,120 --> 00:45:03,110 And I think what's interesting is that in the 20th century, our main worry about large corporations was their economic power. 451 00:45:03,110 --> 00:45:06,500 You know, we focussed on things like profits and prices. 452 00:45:06,500 --> 00:45:13,130 And we used competition policy, and economists told us whether or not that economic power was excessive.
453 00:45:13,130 --> 00:45:19,460 I think in the 21st century, our worry is going to be far less about the economic power of large technology companies 454 00:45:19,460 --> 00:45:23,480 and far more about their political power. It's going to be far more about issues of 455 00:45:23,480 --> 00:45:32,120 liberty and democracy and social justice, things that often have very little to do with prices and profits.
456 00:45:32,120 --> 00:45:34,220 And it might sound like I'm shooting myself in the foot, 457 00:45:34,220 --> 00:45:40,820 but these are things that economists, who've traditionally dominated these discussions, aren't necessarily best placed to talk about.
458 00:45:40,820 --> 00:45:48,710 And so I think that is the challenge for the 21st century when we think about large companies: what we do about that growing political power. 459 00:45:48,710 --> 00:45:53,780 The final challenge is the challenge of meaning. What do we do?
460 00:45:53,780 --> 00:46:00,440 How do we provide meaning and purpose in people's lives when work, 461 00:46:00,440 --> 00:46:04,700 for many people a traditional source of meaning and purpose, is no longer available? 462 00:46:04,700 --> 00:46:12,920 What do we do? And that, I think, is again a challenge that doesn't have much to do with economics, 463 00:46:12,920 --> 00:46:19,650 but it really is fundamental to the nature of the problems that we're going to face if we move into a world with less work.
464 00:46:19,650 --> 00:46:21,540 So those are the challenges I think we're going to face. 465 00:46:21,540 --> 00:46:28,080 Let me just, in the final minutes, explain, though, why in spite of all of this I do remain optimistic.
466 00:46:28,080 --> 00:46:32,340 And the reason, I think, is simple, which is that in decades to come, 467 00:46:32,340 --> 00:46:39,690 technological progress is going to solve, or is likely to solve, the economic problem that has dominated humanity until now.
468 00:46:39,690 --> 00:46:42,390 And if we think again of the economy as a pie, 469 00:46:42,390 --> 00:46:49,740 the traditional challenge has been how do we make that pie large enough for everyone to live on. At the turn of the first century A.D., 470 00:46:49,740 --> 00:46:54,420 if you had taken the global economic pie and divided it up into equal slices for everyone in the world, 471 00:46:54,420 --> 00:46:57,060 most people would have got a few hundred dollars.
472 00:46:57,060 --> 00:47:04,260 Almost everyone lived on or around the poverty line. Roll forward a thousand years, and roughly the same was true.
473 00:47:04,260 --> 00:47:08,310 But as we saw on those opening slides, over the last few hundred years, 474 00:47:08,310 --> 00:47:14,190 economic growth has soared, and this growth was of course driven in large part by technological progress.
475 00:47:14,190 --> 00:47:17,790 Economic pies around the world have got far bigger. Today, 476 00:47:17,790 --> 00:47:24,480 if we take the global economic pie and divide it up into equal slices for everyone in the world, everyone gets about $11,000.
477 00:47:24,480 --> 00:47:35,270 In thirty years, it'll be double that; in sixty years, double that again. And by and large, we will come very close to solving the traditional economic problem, 478 00:47:35,270 --> 00:47:41,780 the struggle for subsistence, as Keynes called it, that has dominated humankind until now.
479 00:47:41,780 --> 00:47:49,280 And technological unemployment, in a strange way, I see as a symptom of that success. 480 00:47:49,280 --> 00:47:53,090 In the 21st century, technological progress is going to solve one problem, 481 00:47:53,090 --> 00:47:59,210 how do we make the pie large enough for everyone to live on, but it is going to replace it with those three others: 482 00:47:59,210 --> 00:48:05,180 these problems of inequality, problems of power and problems of meaning and purpose.
483 00:48:05,180 --> 00:48:12,350 And clearly, I think there's going to be huge disagreement about how we meet these challenges: about how we should share our prosperity in society, 484 00:48:12,350 --> 00:48:19,280 about how we constrain the political power of large technology companies and how we provide meaning in a world with less work.
485 00:48:19,280 --> 00:48:22,820 But I think these are, and this is the argument I make in the book, 486 00:48:22,820 --> 00:48:26,780 these are, in the final analysis, far more attractive problems, I believe, 487 00:48:26,780 --> 00:48:32,450 to have to grapple with than the one that troubled our ancestors for centuries, 488 00:48:32,450 --> 00:48:38,690 which was how to make that pie big enough in the first place. So I will finish there.
489 00:48:38,690 --> 00:48:44,940 Thank you very much for your attention, and I look forward to hearing some reflections and taking some questions as well. 490 00:48:44,940 --> 00:48:57,670 Thank you. Thank you very much.
491 00:48:57,670 --> 00:49:07,390 You've heard a wonderfully lucid summary of your book, clearly just touching on some of the key things, but hopefully enticing us all to read it. 492 00:49:07,390 --> 00:49:14,620 I certainly have, and commend it to all of you. I think one person who's guaranteed to have a job in the future is here.
493 00:49:14,620 --> 00:49:19,570 I don't see a machine doing anything as formidable as what you've just done. 494 00:49:19,570 --> 00:49:27,820 And incidentally, if you're interested in Garry Kasparov's view on what that was all about, with Deep Blue challenging him: 495 00:49:27,820 --> 00:49:32,800 you know, he's been a fellow here, a visiting fellow, and somewhere in a video archive 496 00:49:32,800 --> 00:49:36,900 you'll find him talking about that experience and what he thinks about the future.
497 00:49:36,900 --> 00:49:46,340 And so we have time for only a few questions, but who'd like to go first? 498 00:49:46,340 --> 00:49:52,430 Maybe we should collect a few. Thank you very much. 499 00:49:52,430 --> 00:49:57,480 I think he touched on what I'm about to ask in the last few minutes, about the distributional issues.
500 00:49:57,480 --> 00:50:03,380 Yes. But earlier you said demand for goods is going to soar. 501 00:50:03,380 --> 00:50:11,550 Yes. But I'm wondering who's going to demand those goods, from a kind of consumption focus rather than a production focus?
502 00:50:11,550 --> 00:50:19,440 We're just going to collect a couple of questions. [INAUDIBLE] 503 00:50:19,440 --> 00:50:28,110 I'm John Hofmeyr from the Centre for Mutual and Co-owned Business here at Oxford. As you talk about inequality, 504 00:50:28,110 --> 00:50:33,090 one of the solutions people suggest is a basic income.
505 00:50:33,090 --> 00:50:42,720 My suggestion is that that won't really help overcome inequality, and that at base, wealth is produced by ownership of shares. 506 00:50:42,720 --> 00:50:55,890 I wonder if you could just comment on whether you've thought about that. [INAUDIBLE]
507 00:50:55,890 --> 00:51:03,820 Hi, I'm just curious: what do you think the implications of a world without work will be for challenges such as climate change? 508 00:51:03,820 --> 00:51:08,790 Hi, my name's Jasmine, I'm from Stanford University.
509 00:51:08,790 --> 00:51:14,760 So I guess I'm wondering. It seems like work provides a set of intrinsic benefits to the job holder, such as meaning, 510 00:51:14,760 --> 00:51:20,520 skills, et cetera, as well as a set of extrinsic benefits, like training people to follow rules, 511 00:51:20,520 --> 00:51:21,840 keeping people off the streets, 512 00:51:21,840 --> 00:51:28,530 corporate-provided health care and a host of other sorts of public goods that our current system of work provides.
513 00:51:28,530 --> 00:51:32,790 So it seems that widespread unemployment would threaten centralised political 514 00:51:32,790 --> 00:51:37,530 power and the ability to keep regimes stable, because of those extrinsic benefits. 515 00:51:37,530 --> 00:51:42,140 I'm curious how you think that will play out or how governments might be able to manage it.
516 00:51:42,140 --> 00:51:48,680 Hmm. Yeah, so let me try. Let me try and respond to some of those questions at the same time. 517 00:51:48,680 --> 00:51:56,520 I mean, the first two: just again, I see the economic challenge that we face as a distributional one.
518 00:51:56,520 --> 00:52:02,020 Yeah, the labour market has traditionally been our way of sharing our prosperity in society. 519 00:52:02,020 --> 00:52:08,740 If we are approaching a world with less work, then that mechanism that we've relied upon will be less effective than in the past. 520 00:52:08,740 --> 00:52:11,020 And so the question is, well, what will replace it?
521 00:52:11,020 --> 00:52:18,430 The argument that I make in the book is that I think we're going to see a growing role for the state in solving that problem. 522 00:52:18,430 --> 00:52:21,460 And I call it the big state. Not the big state of the 20th century, 523 00:52:21,460 --> 00:52:28,750 teams of smart people sitting with blueprints in central government trying to command and control entire economies and how they operate, 524 00:52:28,750 --> 00:52:36,130 but a state involved not in production but in distribution. And I say that not coming from the left or right, just looking at the technical features of 525 00:52:36,130 --> 00:52:42,700 the problem and just saying: if the labour market is less effective at sharing prosperity, then we'll need the state to do it in its place.
526 00:52:42,700 --> 00:52:50,740 And universal basic income is one thing that's proposed. In the book, I'm quite critical of universal basic income, and I propose quite different ideas. 527 00:52:50,740 --> 00:52:54,540 But I think, in general, 528 00:52:54,540 --> 00:53:03,750 it's important to have debates and conversations about UBI, because they are based on the right diagnosis, 529 00:53:03,750 --> 00:53:04,470 which is, 530 00:53:04,470 --> 00:53:16,370 well, how can we share our income in society if we can't rely on the labour market to do it? The meaning and purpose component of this problem 531 00:53:16,370 --> 00:53:18,860 is something I spent a lot of time in the book thinking about; I think it's very important.
532 00:53:18,860 --> 00:53:26,660 I mean, just in response to your particular observation, one of the things I try and do is really explore this relationship between work and meaning. 533 00:53:26,660 --> 00:53:32,810 I think we often assume that work is an important source of meaning and purpose for everyone when actually, 534 00:53:32,810 --> 00:53:38,180 if you look at the data, lots of people don't get a sense of identity and purpose from their work. 535 00:53:38,180 --> 00:53:45,470 Lots of people don't think that their work makes a meaningful contribution to the world that we live in.
536 00:53:45,470 --> 00:53:51,560 And if you look back in history, you can see very different relationships between work and meaning. 537 00:53:51,560 --> 00:54:03,560 So in the ancient Greek city of Thebes, you were banned from seeking political office if you had engaged in trade or work in the previous decade. 538 00:54:03,560 --> 00:54:12,710 In Sparta, citizens were banned by law from engaging in productive work.
539 00:54:12,710 --> 00:54:16,850 You know, Aristotle and Plato both thought that work was a sort of grubby affair, 540 00:54:16,850 --> 00:54:22,670 that meaning and purpose could only really come from certain types of leisure activities, and we could talk more about that. 541 00:54:22,670 --> 00:54:23,600 But that's the sort of thing.
542 00:54:23,600 --> 00:54:32,880 So I think the first thing we've got to do is question this assumption that many of us have that work is a source of meaning and purpose 543 00:54:32,880 --> 00:54:36,690 for everyone, to ask whether or not there might be alternative sources of meaning and 544 00:54:36,690 --> 00:54:42,390 purpose. And if your answer to that first question is that it's not for everyone, and 545 00:54:42,390 --> 00:54:50,310 there might be other sources of it, then it opens up a whole fascinating, I think, new set of policy possibilities: not about the future of work, 546 00:54:50,310 --> 00:54:54,600 but about the future of leisure, not about labour market policies, but about leisure policies.
547 00:54:54,600 --> 00:54:59,620 How do we as a society shape how we spend our time when we're not at work? 548 00:54:59,620 --> 00:55:05,760 It sounds quite radical, but we already do it in many ways today. Entrance to museums is free. 549 00:55:05,760 --> 00:55:15,600 We try and stop fine artworks from leaving the country. We encourage young children to learn to cycle and to swim.
550 00:55:15,600 --> 00:55:21,690 The pension system, in a way, you could look at it and think, well, you know, that's a form of leisure policy. It's essentially the state saying, 551 00:55:21,690 --> 00:55:26,490 we're going to really support leisure in the twilight of life; 552 00:55:26,490 --> 00:55:31,200 that's the only moment we're going to really provide financial support for leisure.
553 00:55:31,200 --> 00:55:38,070 So I think there's a whole fascinating set of policy possibilities that open up when we start looking more carefully at this meaning and purpose 554 00:55:38,070 --> 00:55:41,610 part of the problem, and that's what I'm trying to do in the book. Climate change is very interesting. 555 00:55:41,610 --> 00:55:53,220 It's not something I look at explicitly in the book, but I see it as being a challenge on the same sort of scale.
556 00:55:53,220 --> 00:55:57,600 I mean, you know, one of the reasons I've written the book is because I think 557 00:55:57,600 --> 00:56:02,400 that if we had had the right conversations about climate change 30 years ago, 558 00:56:02,400 --> 00:56:06,630 we wouldn't necessarily be facing the problems that we face today. 559 00:56:06,630 --> 00:56:12,330 And I think of this problem in a similar way: these are conversations about inequality and what we do about it 560 00:56:12,330 --> 00:56:19,730 that we need to engage with today to avoid finding ourselves facing very difficult problems in a few decades' time.
561 00:56:19,730 --> 00:56:25,800 We've got time for one more question. If I could, actually, I was just going to get a mic. 562 00:56:25,800 --> 00:56:36,520 Yes, a mic. By the way, this is being webcast, so do use the mic for those questions. [INAUDIBLE]
563 00:56:36,520 --> 00:56:40,770 It was following on from your point about work as a way of 564 00:56:40,770 --> 00:56:46,950 distributing income, because we need income to get access to resources and services. 565 00:56:46,950 --> 00:56:57,580 But actually, AI is going to, in many ways, lower the cost, the marginal cost, of producing many services that we require now.
566 00:56:57,580 --> 00:57:01,620 We won't be relying on expensive expertise. 567 00:57:01,620 --> 00:57:03,580 That will just be, you know, 568 00:57:03,580 --> 00:57:12,090 access to medical diagnosis will be dependent on what those companies who hold the power and the knowledge choose to charge us. 569 00:57:12,090 --> 00:57:15,520 But the actual cost of it, the marginal cost, will be very low.
570 00:57:15,520 --> 00:57:21,690 So actually, we could provide for people very cheaply, in a way. 571 00:57:21,690 --> 00:57:28,290 I mean, it could almost be a quid pro quo for those companies that they provide housing, they provide transport, 572 00:57:28,290 --> 00:57:35,070 they provide energy for people, because the marginal cost of doing that should be much lower.
573 00:57:35,070 --> 00:57:38,490 And some of what you're observing 574 00:57:38,490 --> 00:57:43,680 there was the argument that I made in the book that I co-authored with my dad, The Future of the Professions, 575 00:57:43,680 --> 00:57:48,990 arguing that the promise of a lot of these technologies is a liberation of the sorts 576 00:57:48,990 --> 00:57:53,790 of expertise that traditionally have only been available to a very privileged, a very lucky few people:
577 00:57:53,790 --> 00:58:01,520 access to a good health care system, access to knowledge of their legal entitlements, access to knowledge of how to manage their financial affairs. 578 00:58:01,520 --> 00:58:05,850 I suppose the observation we make in that book, though, is that this doesn't happen automatically, and that, you know, 579 00:58:05,850 --> 00:58:13,530 there's a fight to be had to make sure that this expertise is widely available. But in theory, you're correct.
580 00:58:13,530 --> 00:58:21,240 Well, it's always good to end a conversation like this with many more questions than answers. Daniel will be around signing books, 581 00:58:21,240 --> 00:58:27,030 but also hopefully will have time to respond to the many of you that haven't had a chance to ask your questions.
582 00:58:27,030 --> 00:58:35,760 And as I said, this is a wonderfully provocative and, I think, significant book, and part of a much bigger debate, 583 00:58:35,760 --> 00:58:43,110 which the Oxford Martin School is certainly engaged in, on the future of work and economies.
584 00:58:43,110 --> 00:58:47,490 It's, of course, engaged in many other things. On the 28th of February, 585 00:58:47,490 --> 00:58:57,240 we have a whole series of discussion panels following a keynote address by Andy Haldane, 586 00:58:57,240 --> 00:59:03,570 who's the chief economist at the Bank of England. And every Thursday in term time at five o'clock, 587 00:59:03,570 --> 00:59:07,560 there's a seminar series as well, so I encourage you to look at the websites.
588 00:59:07,560 --> 00:59:15,300 And if you want to ping tonight's talk to your friends and others that haven't had a chance to be here, or to look at it again, 589 00:59:15,300 --> 00:59:42,165 it will be up on our website. And thanks to you all for coming. And please do join us for a drink, and in thanking Daniel.