1 00:00:16,080 --> 00:00:23,970 He's becoming quite a hard act to follow these days. Actually, I think he's going to put me out of a job with his line-up of jokes. 2 00:00:23,970 --> 00:00:29,190 How many mathematicians do we have in the room, just to give me a sense of who I'm talking to? 3 00:00:29,190 --> 00:00:30,300 OK, great. 4 00:00:30,300 --> 00:00:38,580 So the mathematicians in the room, you've probably had a similar experience to the one I have when I go to parties and get this question about what you do. 5 00:00:38,580 --> 00:00:43,630 Perhaps you fake it. I'm starting to fake it, actually, and say I'm an international spy or something. 6 00:00:43,630 --> 00:00:48,180 Because most of the time I get a kind of stock set of reactions. 7 00:00:48,180 --> 00:00:52,920 One of them is that they just flee to the other end of the party and I'm abandoned. 8 00:00:52,920 --> 00:00:57,120 But actually, before they go, they always tell me what they got in their GCSE or A-level, 9 00:00:57,120 --> 00:00:59,640 which, I'm not really quite sure what that's about. 10 00:00:59,640 --> 00:01:06,480 But if they do stick around (I'm trying to work out the tech here; doing a talk about tech 11 00:01:06,480 --> 00:01:12,560 while actually using the tech is always a disaster), 12 00:01:12,560 --> 00:01:20,820 one of the questions or statements I often get is: come on, surely you must have been put out of a job by a computer by now? 13 00:01:20,820 --> 00:01:26,520 I think it's mostly because people think that what I do in my office up here is long division to lots of decimal places. 14 00:01:26,520 --> 00:01:30,900 And if that were true, certainly my computer would have put me out of a job. 15 00:01:30,900 --> 00:01:32,790 But you know, as Alan was saying, 16 00:01:32,790 --> 00:01:41,910 we're all a little bit threatened at the moment by this advancing A.I.
that seems to be very powerful, doing lots of interesting things. 17 00:01:41,910 --> 00:01:45,570 And surely aren't computers all about logic and mathematics? 18 00:01:45,570 --> 00:01:54,510 So wouldn't my job be one of the first to be threatened? I got this a lot during the 90s, actually, when Deep 19 00:01:54,510 --> 00:02:02,550 Blue beat Kasparov, because people often used to compare the idea of playing a game of chess to doing mathematics. 20 00:02:02,550 --> 00:02:06,060 There are certain logical moves you can make with the pieces. 21 00:02:06,060 --> 00:02:11,910 There's a kind of endgame that you're after, the end of the proof, the QED, when you sort of win the game or win the proof. 22 00:02:11,910 --> 00:02:16,440 And so a lot of people said to me during the 90s: well, come on, you must be next. 23 00:02:16,440 --> 00:02:20,310 But I never felt particularly threatened by chess. And actually, 24 00:02:20,310 --> 00:02:26,280 there was always another game that we mathematicians used as a kind of protective shield against 25 00:02:26,280 --> 00:02:33,480 the idea that computers could do our subject, and that's the ancient game of Go. 26 00:02:33,480 --> 00:02:39,030 It's a Chinese game played on a 19 by 19 grid where you put black and white stones down, 27 00:02:39,030 --> 00:02:42,840 and you try to engulf the other person's territory before they engulf yours. 28 00:02:42,840 --> 00:02:48,540 And this is a game which has a high degree of complexity, much more complex, actually, than chess. 29 00:02:48,540 --> 00:02:55,350 There's a lot of pattern recognition that needs to go on to be able to play this game, and that's really what mathematics is about. 30 00:02:55,350 --> 00:03:02,160 It's about spotting underlying patterns.
But quite often when you're playing Go, and especially when you're doing mathematics, 31 00:03:02,160 --> 00:03:09,210 to know quite where you're going requires a lot of intuition, a lot of creativity; you're not quite sure why you're making the moves. 32 00:03:09,210 --> 00:03:16,260 You spend a lot of time in this world and you build up a kind of feel for playing Go or doing chess. 33 00:03:16,260 --> 00:03:20,610 Very traditionally, in computer science lectures they would always say: yeah, 34 00:03:20,610 --> 00:03:29,010 chess is something we can automate, because we can understand the logical implications of playing particular moves; we can follow 35 00:03:29,010 --> 00:03:35,520 the tree of possibilities through. Go was always said to be a game that no computer would ever be able to play. 36 00:03:35,520 --> 00:03:41,850 And certainly any attempt to encode the way that a human plays this game always failed. 37 00:03:41,850 --> 00:03:49,890 The attempts to encode playing Go in some sort of algorithm wouldn't even beat an amateur at this game. 38 00:03:49,890 --> 00:03:57,750 So I felt pretty safe, because if computer scientists said a computer can't play this game, it certainly won't be able to play the complex game of mathematics. 39 00:03:57,750 --> 00:04:03,480 So I got a little bit of a shock a couple of years ago, and you're probably aware of this story, 40 00:04:03,480 --> 00:04:15,030 when a team in London declared that they had got an algorithm that they believed could compete at not just a high level but the highest level. 41 00:04:15,030 --> 00:04:19,140 This is DeepMind in London, and actually it was Demis Hassabis: 42 00:04:19,140 --> 00:04:30,060 he went to Cambridge to do his studies, and he was told this old adage that you can't programme a computer to play Go. 43 00:04:30,060 --> 00:04:32,670 And this was sort of like a red rag to Demis.
44 00:04:32,670 --> 00:04:38,940 And so he went away and set up this company, and they devised this algorithm that they thought could play with the best, and they challenged 45 00:04:38,940 --> 00:04:44,550 Lee Sedol, a Korean grandmaster. And Lee Sedol was totally dismissive of this algorithm: 46 00:04:44,550 --> 00:04:48,780 it wouldn't be able to get anywhere near the level that he could play at. 47 00:04:48,780 --> 00:04:56,130 He said, I'm going to demolish this thing five-nil. They were going to play over five games. But he got a little bit of a shock. 48 00:04:56,130 --> 00:05:04,270 I sat and watched these games obsessively on YouTube, because I realised that my life was probably under threat, and as I watched, I saw Lee 49 00:05:04,270 --> 00:05:08,190 Sedol get more and more depressed throughout the games. He lost the first game, he lost the second, 50 00:05:08,190 --> 00:05:13,640 he lost the third. He had lost the match already after three. He won the fourth game, and he now regards that 51 00:05:13,640 --> 00:05:22,370 as the greatest game that he's ever played, that he was able to beat this algorithm in one game. And he lost the fifth game, so he lost four-one. 52 00:05:22,370 --> 00:05:25,430 What had changed in the last few years? 53 00:05:25,430 --> 00:05:34,280 The style of coding has changed, and we've got a new sort of code on the block which is able to do things that code in the past couldn't do. 54 00:05:34,280 --> 00:05:39,830 And it's something that we're very interested in here in Oxford: this idea of deep learning, or machine learning. 55 00:05:39,830 --> 00:05:43,440 So code in the past used to be written in a very top-down manner. 56 00:05:43,440 --> 00:05:48,050 You really had to know how the thing was going to behave. 57 00:05:48,050 --> 00:05:54,050 You told it the rules of how it was going to play. You had to understand the setting, and the machine just implemented that.
58 00:05:54,050 --> 00:05:57,890 Yeah, sure, it could play chess, because it was told to implement this thing. It could go deeper, 59 00:05:57,890 --> 00:06:03,470 it could analyse more situations than a human, but the human was still telling the programme 60 00:06:03,470 --> 00:06:09,260 what to do. What has changed is that the code is now written in a very bottom-up manner. 61 00:06:09,260 --> 00:06:17,690 We've got a sort of code which is learning, very much like a child learns. In the past, it was like the parent's DNA would give birth to a child, 62 00:06:17,690 --> 00:06:22,280 but the child would be so attached to the DNA of the parent that it wouldn't learn anything new. 63 00:06:22,280 --> 00:06:30,140 But suddenly we've got code that can adapt and change and mutate, and reparameterise itself as it encounters a new environment. 64 00:06:30,140 --> 00:06:36,770 There's almost a meta-code, which is telling the code how to change and mutate if it gets something wrong. 65 00:06:36,770 --> 00:06:42,020 And this is what they used to train AlphaGo to learn how to play this game: 66 00:06:42,020 --> 00:06:47,990 by playing games and failing. But they started with some other games first, some simple games. 67 00:06:47,990 --> 00:06:55,550 So they started with Atari games, games I used to be obsessed with, actually, when I was a kid, and one of the ones I really loved was 68 00:06:55,550 --> 00:06:59,510 this one called Breakout, where you have a little ball which ping-pongs up and down. 69 00:06:59,510 --> 00:07:06,560 You've got a paddle and you've got to knock these bricks out. You score points: the blue bricks are just one point, up to the red ones, the higher points. 70 00:07:06,560 --> 00:07:12,530 The machine was only given the pixels on the screen and the score; it had to learn how to play this game.
71 00:07:12,530 --> 00:07:16,100 It wasn't told anything about the fact that you had to hit this ball, 72 00:07:16,100 --> 00:07:20,150 but as it was randomly moving the paddle, every time it hit the ball it saw the score go up. 73 00:07:20,150 --> 00:07:26,600 So it reparameterised itself and said: I'm going to prioritise moving towards where the ball came from. 74 00:07:26,600 --> 00:07:32,780 Now, when my mate and I played this, we were very pleased when we found a fantastic hack, because you can create a 75 00:07:32,780 --> 00:07:37,940 little tunnel on the left-hand side, and then if you get the ball to go up there, 76 00:07:37,940 --> 00:07:42,050 you don't have to do any work at all, because the ball just bounces backwards and forwards. 77 00:07:42,050 --> 00:07:51,050 But I was absolutely staggered when I saw how this DeepMind A.I. actually learnt to play the Atari game: it found the same hack. You know, 78 00:07:51,050 --> 00:07:56,990 not just humans, but the computer is also lazy; it doesn't want to move this thing around, so it shoots the ball back up again. 79 00:07:56,990 --> 00:07:57,830 This is extraordinary. 80 00:07:57,830 --> 00:08:06,770 After 600 games, just by randomly moving the paddle and seeing which moves made the score go up fastest, 81 00:08:06,770 --> 00:08:13,850 it had learnt how to do this hack. So this is how it then went on to play the game of Go. 82 00:08:13,850 --> 00:08:17,600 So what it did first was take all the human games that are on the internet, 83 00:08:17,600 --> 00:08:26,030 a lot of games encoded on the internet, and learnt how humans lost and won those games; that was the first material that it learnt on. 84 00:08:26,030 --> 00:08:30,230 Then it started to create synthetic data: it started to play itself. 85 00:08:30,230 --> 00:08:34,970 Different versions of itself would play a game.
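The learning loop described here for Breakout (move at random, watch the score, then shift towards whatever made the score go up) can be caricatured in a few lines. This is only an illustrative toy, not DeepMind's actual deep reinforcement learning code: the action names, the reward function and the learning parameters are all invented for the sketch.

```python
import random

# Toy sketch of reward-driven, "bottom-up" learning: try moves at random,
# watch the score, and nudge your preferences towards whatever scored.
# A cartoon of reinforcement learning; everything here is made up for
# illustration and is not DeepMind's algorithm.

random.seed(0)  # make the run repeatable

ACTIONS = ["left", "stay", "right"]

def hidden_reward(action):
    # The environment's secret, unknown to the learner:
    # "right" is where the ball tends to come from.
    return 1.0 if action == "right" else 0.0

def train(episodes=2000, lr=0.1, epsilon=0.1):
    value = {a: 0.0 for a in ACTIONS}  # learned estimate of each move's worth
    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                # explore at random
        else:
            action = max(ACTIONS, key=lambda a: value[a])  # exploit what we know
        reward = hidden_reward(action)
        # "Reparameterise": move the estimate towards the observed score.
        value[action] += lr * (reward - value[action])
    return value

if __name__ == "__main__":
    learned = train()
    print(max(learned, key=learned.get))
```

The point of the sketch is the update line: nothing tells the agent that "right" is good; it discovers that purely from the reward signal, which is the bottom-up style of coding the lecture is describing.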
86 00:08:34,970 --> 00:08:42,980 And when it lost a game, it would understand which were the moves that meant it was failing to play at a high level, and it would reprioritise itself. 87 00:08:42,980 --> 00:08:49,970 And so after a while it got to such a high level, quite amazingly, that it could challenge Lee Sedol. 88 00:08:49,970 --> 00:08:53,570 Now, at first I said: OK, so the computer's just got very good. 89 00:08:53,570 --> 00:09:01,730 It can analyse its game very deeply. But I think that there was something more amazing that happened during these games. In the first game, 90 00:09:01,730 --> 00:09:07,520 interestingly, Lee Sedol decided that if it had done its learning on how humans played, 91 00:09:07,520 --> 00:09:13,280 maybe the best way to beat this computer was to play it rather unlike a human. 92 00:09:13,280 --> 00:09:21,050 So he actually played a very disruptive game, but AlphaGo was smart enough to just cope with the moves that he was making. 93 00:09:21,050 --> 00:09:26,330 And they turned out to be quite weak moves, and he lost the game because he was not playing his standard game. 94 00:09:26,330 --> 00:09:34,670 So in the second game, Lee Sedol decided to play a much more standard, high-level game that he knew very well. 95 00:09:34,670 --> 00:09:39,200 Very early on in the game, your Go master teaches you a few strategies. 96 00:09:39,200 --> 00:09:45,470 One is that you should play on the edge of the board, so you're encouraged to play on the first, second, 97 00:09:45,470 --> 00:09:53,660 third and fourth rows in, because there's a kind of competition early on for the edges rather than the internal part of the board. 98 00:09:53,660 --> 00:09:56,570 And if you play too far into the middle of the board early on, 99 00:09:56,570 --> 00:10:01,760 it's considered a weak move, because you're not really establishing important territory at that point.
100 00:10:01,760 --> 00:10:09,860 So Lee Sedol, on the 36th move of this game, decided that he needed a cigarette break, and he went up to the top of the hotel and had a cigarette. 101 00:10:09,860 --> 00:10:13,710 AlphaGo didn't need to smoke in order to get stimulation. So 102 00:10:13,710 --> 00:10:18,930 it sat there and it thought for a while, and then it asked the human player, because this wasn't an exercise in robotics, 103 00:10:18,930 --> 00:10:22,740 this was an exercise in just pure thought, so there was still a human involved; 104 00:10:22,740 --> 00:10:28,050 it's actually still quite difficult for an A.I. to pick a stone up and place it on the board. 105 00:10:28,050 --> 00:10:31,860 And it told the human player to place a stone on the fifth row in. 106 00:10:31,860 --> 00:10:37,650 So I've circled this. "He" was playing; I've already anthropomorphised the A.I. 107 00:10:37,650 --> 00:10:45,960 The A.I. is playing black, and it put this stone in on the fifth row in, which I've marked with a little white circle. All the commentators, 108 00:10:45,960 --> 00:10:53,630 I remember this on YouTube, they all gasped and went: wow, AlphaGo has made a huge mistake. 109 00:10:53,630 --> 00:11:00,060 You never play that sort of move early on in the game. Lee Sedol will win this, and it'll be one-all after this. 110 00:11:00,060 --> 00:11:02,160 And they were all very complacent. 111 00:11:02,160 --> 00:11:08,280 And Lee Sedol came down after his cigarette break and looked at what AlphaGo had played. You should watch this back: 112 00:11:08,280 --> 00:11:13,710 it's so funny, because he just literally cannot believe what the A.I. is suggesting. 113 00:11:13,710 --> 00:11:17,340 What a stupid move. But he's a bit more suspicious of things: 114 00:11:17,340 --> 00:11:23,280 you know, why has it done that move? Why did it do that move?
115 00:11:23,280 --> 00:11:25,380 It turned out, as the game built up 116 00:11:25,380 --> 00:11:31,890 (and there's something rather different about chess and Go here: chess gets simpler as the game goes on, because pieces get taken off, 117 00:11:31,890 --> 00:11:36,210 but Go gets more and more complex, because more and more pieces get put on), 118 00:11:36,210 --> 00:11:40,020 as the game built up and more and more pieces were put on the board, 119 00:11:40,020 --> 00:11:46,960 territory was building up from the bottom right-hand corner, and it turned out that AlphaGo's move at move thirty-seven, 120 00:11:46,960 --> 00:11:52,950 that black stone, meant that it was AlphaGo that won that territory rather than Lee Sedol. 121 00:11:52,950 --> 00:12:00,240 It was an incredibly inspired move. It won AlphaGo the second game, that decision on move thirty-seven to put the 122 00:12:00,240 --> 00:12:04,620 stone there and break with the tradition of how humans thought we should play the game. 123 00:12:04,620 --> 00:12:14,430 And for me, this was really exciting, because I believe that this is an example of what we should call a creative act by artificial intelligence. 124 00:12:14,430 --> 00:12:17,070 I spent some time on a committee at the Royal Society. 125 00:12:17,070 --> 00:12:22,560 Over the last few years, we've been looking at the impact that machine learning is going to have on society 126 00:12:22,560 --> 00:12:28,590 over the next ten years. Demis was on the committee, and there was also a philosopher, 127 00:12:28,590 --> 00:12:32,850 Margaret Boden, and I talked to her quite a bit about the idea of creativity. 128 00:12:32,850 --> 00:12:39,960 She's been very interested in what these "tin cans", as she calls computers, can do, and the idea of whether they can be creative. 129 00:12:39,960 --> 00:12:44,670 And she had a very nice working definition of what we should call creative.
130 00:12:44,670 --> 00:12:51,180 I'm not sure it's the best one, and we can argue; there's lots of philosophical debate over what we mean by creativity. 131 00:12:51,180 --> 00:12:55,530 But I think this is going to be quite a useful working definition as we go forward tonight. 132 00:12:55,530 --> 00:13:00,870 So creativity is something which should be new. Well, computers can make new things quite easily; 133 00:13:00,870 --> 00:13:05,520 we can objectively judge whether something is new. But she adds two other qualities. 134 00:13:05,520 --> 00:13:13,260 It should also have an element of surprise. Now, that's a little bit more subjective. And also value, 135 00:13:13,260 --> 00:13:18,510 and that's also quite subjective. So a computer, if it's going to be creative in a way we as humans believe 136 00:13:18,510 --> 00:13:23,010 is creative, is going to have to learn what we think is surprising and has value. 137 00:13:23,010 --> 00:13:28,680 One of the things I notice about games is that, of course, you can judge these qualities quite quickly. 138 00:13:28,680 --> 00:13:32,340 Surprise? Yes, the commentators all went: oh, it's made a mistake. 139 00:13:32,340 --> 00:13:36,090 Value? Yes, this move won AlphaGo the game. 140 00:13:36,090 --> 00:13:44,610 And so what I've been interested in is to look at: if it can be creative in this very closed environment of a game, where else can it be creative? 141 00:13:44,610 --> 00:13:46,440 Can it be creative in mathematics? 142 00:13:46,440 --> 00:13:52,620 Actually, I sat next to Demis and joked to him, since we'd both just become FRSs, and I said: well, could you get AlphaGo to become an FRS? 143 00:13:52,620 --> 00:13:57,090 And part of the story of my book is that Demis said: yeah, we're already on the case. 144 00:13:57,090 --> 00:14:04,020 So they were already at DeepMind looking at making a creative A.I. mathematician.
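Boden's three-part working definition (new, surprising, valuable) can be caricatured as a checklist in code. Everything in this sketch is an invented placeholder: the toy corpus, the surprise measure and the value measure stand in for judgements that, as the lecture says, are really subjective and human.

```python
# Toy sketch of Boden's working definition of creativity:
# an artifact counts as "creative" if it is new, surprising, and valuable.
# The corpus and the surprise/value measures are invented placeholders.

corpus = {"ABAB", "AABB", "ABBA"}  # everything "seen before"

def is_new(artifact):
    # New: objectively checkable, as the lecture notes.
    return artifact not in corpus

def is_surprising(artifact):
    # Placeholder for a subjective quality: here, "surprising" means
    # it uses a symbol the corpus has never used.
    seen_symbols = set("".join(corpus))
    return any(ch not in seen_symbols for ch in artifact)

def has_value(artifact):
    # Placeholder for value: here, say, being a palindrome.
    return artifact == artifact[::-1]

def creative(artifact):
    return is_new(artifact) and is_surprising(artifact) and has_value(artifact)

print(creative("ABBA"))  # False: not new, it is already in the corpus
print(creative("ACCA"))  # True: new, uses the unseen "C", and a palindrome
```

The hard part, of course, is that in real art the surprise and value functions are not three-line stubs; a machine aiming at creativity has to learn them from us.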
145 00:14:04,020 --> 00:14:14,730 So what I think is exciting about this kind of new A.I. that is appearing is that, in the case of AlphaGo, 146 00:14:14,730 --> 00:14:20,040 it not only played the game at a high level, but it taught us how to play the game in a new way. 147 00:14:20,040 --> 00:14:29,390 We thought we'd reached a kind of peak of playing, with these rules that we had about playing on the first to fourth rows in. 148 00:14:29,390 --> 00:14:32,130 And, you know, there was a kind of optimal way to play the game. 149 00:14:32,130 --> 00:14:39,660 What AlphaGo has shown us in these games is that although we thought we were at the peak of performance in playing this game, 150 00:14:39,660 --> 00:14:43,560 actually this was only what we mathematicians call a local maximum. 151 00:14:43,560 --> 00:14:49,140 Actually, this was like Snowdonia, and there was a much higher mountain, an Everest: 152 00:14:49,140 --> 00:14:56,790 a new way to play the game. AlphaGo had experimented, taking risks, gone down this kind of adaptive valley, and found a much better way to play. 153 00:14:56,790 --> 00:15:05,580 And AlphaGo has now taught us a new way to play this game, new strategies that are helping us to play the game at a much higher level. 154 00:15:05,580 --> 00:15:09,540 And so the journey of this book, which is called The Creativity Code, 155 00:15:09,540 --> 00:15:13,480 is to look at, well, this new A.I. that's appearing, which seems to be able to 156 00:15:13,480 --> 00:15:19,300 learn through its interaction with the kind of digital world around it: 157 00:15:19,300 --> 00:15:25,210 could it perhaps be creative in other realms, not just creative in a game? 158 00:15:25,210 --> 00:15:34,480 In fact, one of the first people to think of the idea of code was already suggesting that code might be able to do things of an artistic nature.
159 00:15:34,480 --> 00:15:38,920 So we celebrate Ada Lovelace Day every year. 160 00:15:38,920 --> 00:15:44,830 Ada Lovelace was taken by her mother to see Babbage's Analytical Engine. 161 00:15:44,830 --> 00:15:49,150 Her mother used to like to expose her to lots of different ideas, scientific ideas. 162 00:15:49,150 --> 00:15:55,120 And when she saw this machine, she already began to realise that this could do more than just the long division 163 00:15:55,120 --> 00:16:00,190 or the multiplication, that it could do something a little bit more exciting. 164 00:16:00,190 --> 00:16:06,130 And she started to write down code to make the Analytical Engine do interesting things. 165 00:16:06,130 --> 00:16:08,260 And that's why we celebrate Ada Lovelace: 166 00:16:08,260 --> 00:16:17,680 the notes that she wrote for a paper about the Analytical Engine we regard as the first idea of code to make machines do interesting things. 167 00:16:17,680 --> 00:16:25,280 Already then she was thinking about the fact that this could maybe do things which are a little bit more interesting than just scientific calculations. 168 00:16:25,280 --> 00:16:32,650 She wrote that the engine "might compose elaborate and scientific pieces of music of any degree of complexity or extent". 169 00:16:32,650 --> 00:16:37,090 So she's already thinking about music, a place, of course, which has quite a lot of connection with mathematics, 170 00:16:37,090 --> 00:16:42,100 the idea of patterns, getting the machine to kind of run out patterns. 171 00:16:42,100 --> 00:16:47,590 Perhaps it could make music, and we'll come to the challenge of whether A.I. can write music a little bit later on. 172 00:16:47,590 --> 00:16:50,530 But she offered a word of caution when she wrote this.
173 00:16:50,530 --> 00:16:55,990 She said: "It is desirable to guard against the possibility of exaggerated ideas that might arise as 174 00:16:55,990 --> 00:17:02,410 to the powers of the Analytical Engine. The Analytical Engine has no pretensions whatever to originate anything. 175 00:17:02,410 --> 00:17:06,850 It can do whatever we know how to order it to perform." 176 00:17:06,850 --> 00:17:15,070 And I think that's what we always felt in the past about this kind of top-down coding: well, it's the human that's telling the computer what to do. 177 00:17:15,070 --> 00:17:16,930 So if the computer is being creative, 178 00:17:16,930 --> 00:17:24,280 that's because the human has been creative and has encoded that in a set of rules that the computer is just implementing. 179 00:17:24,280 --> 00:17:31,510 But I think something has changed now. The code is beginning to change and mutate as it interacts with, say, 180 00:17:31,510 --> 00:17:39,610 new artistic data, music; it is starting to become code whose original coder doesn't quite know how it's performing. 181 00:17:39,610 --> 00:17:47,530 So this machine learning is producing programmes which now perhaps disconnect themselves from the original coder. 182 00:17:47,530 --> 00:17:50,890 So here's the challenge: can this new A.I. that's appearing, 183 00:17:50,890 --> 00:17:59,980 which seems to be moving on from the original code written by the coder, put some distance between the code and the coder? 184 00:17:59,980 --> 00:18:08,440 So you've probably heard of the Turing Test: can a computer pass itself off, in an interaction online, as Alan proposed? 185 00:18:08,440 --> 00:18:13,780 You know, would you be convinced that that was a human talking? Or is it just an A.I. computer? 186 00:18:13,780 --> 00:18:19,960 So Turing put this down as quite a big challenge: to process natural language and respond in real time. 187 00:18:19,960 --> 00:18:24,280 So here's a new challenge that is being offered, connected to the artistic realm.
188 00:18:24,280 --> 00:18:33,700 It's called the Lovelace Test. So the test is: can a machine originate a creative work of art, such that the process is repeatable? 189 00:18:33,700 --> 00:18:39,940 So it shouldn't just be some sort of glitch in hardware; somehow, the code should know what it's doing. 190 00:18:39,940 --> 00:18:46,000 It shouldn't be some sort of randomness which is put in there, such that the code wouldn't be able to reproduce what it's done. 191 00:18:46,000 --> 00:18:51,100 But here's the challenge: the programmer, the person who wrote the code that has now learnt and mutated, 192 00:18:51,100 --> 00:18:55,330 is unable to explain how the algorithm produced its output. 193 00:18:55,330 --> 00:19:00,610 So this is the challenge that I want to explore with you: how good has A.I. been, in the last couple of years 194 00:19:00,610 --> 00:19:09,580 (it's really the story of the book), at understanding what we regard as art and creativity, and at being able to produce its own version of that? 195 00:19:09,580 --> 00:19:16,000 And I think, you know, we're quite happy that A.I. is going to be driving our cars, or maybe even be our doctors, 196 00:19:16,000 --> 00:19:17,410 although we're a little nervous about that: 197 00:19:17,410 --> 00:19:23,110 you know, suppose it pulls a move thirty-seven on you and suggests you take a pill that looks incredibly dangerous. 198 00:19:23,110 --> 00:19:28,480 Do you do that? Is that a mistake, or is it an incredibly insightful move that's going to save your life? 199 00:19:28,480 --> 00:19:30,730 So I think we are quite happy in certain realms, 200 00:19:30,730 --> 00:19:36,970 but I think the one thing that we regard as uniquely human is our own creativity, our artistic output. 201 00:19:36,970 --> 00:19:42,130 That's what it means to be human. We express it in music and art, in poetry and novels.
202 00:19:42,130 --> 00:19:48,850 So if A.I. can get close to doing something that we regard as uniquely human, I think this is a very exciting moment. 203 00:19:48,850 --> 00:19:57,280 So how good is it? Well, I made a programme for the BBC, a Horizon, about six years ago. 204 00:19:57,280 --> 00:20:05,560 It was the Turing anniversary, about A.I., and six years ago I was pretty disappointed in the state of A.I. at that point. 205 00:20:05,560 --> 00:20:13,230 And there was one hurdle that A.I. seemed to be finding really difficult to achieve, and that was vision recognition. 206 00:20:13,230 --> 00:20:20,550 The brain is very good at taking in a huge onslaught of information, you know, lots of different colours here, people's faces, 207 00:20:20,550 --> 00:20:27,930 and I'm able to integrate this into a single story about seeing an audience that's come to the talk I'm giving. 208 00:20:27,930 --> 00:20:32,610 So vision was one of the great hurdles for A.I. at the time, 209 00:20:32,610 --> 00:20:37,260 and this is one of the things that it's now been able to do with this idea of machine learning. 210 00:20:37,260 --> 00:20:45,780 So actually, we've used a bit of machine learning to do a few experiments during this lecture, so we have a camera here. 211 00:20:45,780 --> 00:20:53,670 So the point of machine learning is: it's shown some images of cats and dogs, and it has to distinguish them, and it gets it wrong to start with. 212 00:20:53,670 --> 00:20:57,600 But it starts to ask more and more questions, which helps it to get it more and more right. 213 00:20:57,600 --> 00:21:06,780 So we're going to be doing some tests here; you'll see you've got some cards in front of you.
214 00:21:06,780 --> 00:21:11,400 I'm going to give you some challenges, some A.I. art kind of Turing tests, 215 00:21:11,400 --> 00:21:15,790 Lovelace tests, and you're going to have to decide what you think is made by a human 216 00:21:15,790 --> 00:21:20,520 and what you think is made by something that doesn't have a soul, and that's an A.I. 217 00:21:20,520 --> 00:21:27,570 So in order to do this, right, we're going to get this up on display one, and we want the HDMI. 218 00:21:27,570 --> 00:21:35,710 So that's interesting. So, display two. 219 00:21:35,710 --> 00:21:45,490 Oh, OK, that's interesting. It worked in the rehearsal, but when you're doing A.I. and tech, it's always asking for trouble. 220 00:21:45,490 --> 00:21:50,770 So OK, well, do you think you can sort it out? 221 00:21:50,770 --> 00:21:59,640 Possibly not. It's not a disaster. But OK, so this is how we trained our A.I. 222 00:21:59,640 --> 00:22:04,920 So what we did was: when you hold up your cards, we want the A.I. to be able to recognise 223 00:22:04,920 --> 00:22:12,450 what you're putting up, whether it's a blue, which will be for human, or a red, for robot. 224 00:22:12,450 --> 00:22:17,730 But we don't want it to get confused by a red jacket over there or a blue shirt. 225 00:22:17,730 --> 00:22:20,340 And so we had to train this A.I. on pictures. 226 00:22:20,340 --> 00:22:29,190 So what we did was we just took a random load of pictures, and then we put in the things that we were trying to get the A.I. to recognise. 227 00:22:29,190 --> 00:22:33,240 And so there would be a training image; we had about 600 training images. 228 00:22:33,240 --> 00:22:39,570 And basically, it would be told: there are four red robots and three human faces. 229 00:22:39,570 --> 00:22:46,050 And gradually it learnt, over those 600 images, to be able to distinguish these, such that now, when we show it an image, 230 00:22:46,050 --> 00:22:52,140 it can count quite quickly and effectively.
How many robots and how many humans there are. 231 00:22:52,140 --> 00:22:56,940 So we're going to get to you. So this is really machine learning in action. 232 00:22:56,940 --> 00:23:02,010 So what I'm going to do is offer you some challenges. So here you are. This is... OK. 233 00:23:02,010 --> 00:23:10,140 So you can only show one at a time. 234 00:23:10,140 --> 00:23:15,090 Oh, that's OK, right? I shall have to improvise. OK. 235 00:23:15,090 --> 00:23:19,950 So that's fine, because I'll just hop over between the HDMI inputs, and that will bring you up. 236 00:23:19,950 --> 00:23:24,690 That's fine. OK, so here you are. So this is a project that was done. 237 00:23:24,690 --> 00:23:32,590 So I'm going to start with visual art, because vision has been the place where A.I. creativity has been very successful. 238 00:23:32,590 --> 00:23:36,480 So you can probably recognise the artist here. 239 00:23:36,480 --> 00:23:42,720 This is Rembrandt, and there was a team in Holland that decided to see whether they could get their A.I. 240 00:23:42,720 --> 00:23:47,610 vision recognition to sort of understand Rembrandt's very particular style of painting. 241 00:23:47,610 --> 00:23:54,420 Rembrandt did quite a lot of portraits, so there was quite a bit of data, not as much data as we used to train our A.I., 242 00:23:54,420 --> 00:24:03,390 but, you know, in the region of 300 portraits. And of course, one of the things Rembrandt has is a very special use of light. 243 00:24:03,390 --> 00:24:11,070 So it had to learn how to put light on a portrait, and the very particular style of dress at that particular time. 244 00:24:11,070 --> 00:24:13,890 So one of these images is a Rembrandt. 245 00:24:13,890 --> 00:24:21,960 The other one is the product of the artificial intelligence learning Rembrandt's style and producing a new Rembrandt. 246 00:24:21,960 --> 00:24:27,570 So the challenge for you is: can you tell which of these is the real Rembrandt?
247 00:24:27,570 --> 00:24:35,490 Now I'm going to do a little experiment first of all, to see whether all the cards are working, so let's switch over to show you. 248 00:24:35,490 --> 00:24:40,860 So here you are. So I want you to hold up your blue cards. So let's see. 249 00:24:40,860 --> 00:24:44,370 And you can see it starting to pick out some of you; it's a little bit edgy. 250 00:24:44,370 --> 00:24:51,690 If it misses you, I'm sorry, you'll just have to feel like your vote still counts. And OK, now turn them over to red. 251 00:24:51,690 --> 00:25:00,160 Let's see. And you'll see the bar at the top is recording, so we'll be able to test the proportion. It's saying that's 100 percent red. Now, 252 00:25:00,160 --> 00:25:04,720 turn back to blue and you'll see the bar shoot back to the blue side. 253 00:25:04,720 --> 00:25:10,660 OK, so good. So it seems to be working. So now, not to prejudice the thing, 254 00:25:10,660 --> 00:25:16,900 let's go back. I'll just show you the pictures again. So I'm going to ask you about one of these. 255 00:25:16,900 --> 00:25:23,530 So let me flip my coin. So, OK, so I'm going to ask you about the painting on the left. 256 00:25:23,530 --> 00:25:31,570 I want you to vote. Do you think the painting on the left is done by 257 00:25:31,570 --> 00:25:41,430 AI, or is it done by Rembrandt? So if you think the painting on the left is AI art, I want you to show your red faces to me. 258 00:25:41,430 --> 00:25:45,300 And if you think it was a human, then I want you to show your blue faces. 259 00:25:45,300 --> 00:25:49,470 And OK, so here we're starting to see quite a lot of you voting red. 260 00:25:49,470 --> 00:25:54,420 Although some of you are also voting blue, it's edging over 261 00:25:54,420 --> 00:25:58,810 more to red; I'd say it's probably about 60-30. OK, so good. 262 00:25:58,810 --> 00:26:03,900 Let's see how good you were at this one. So we're going back to here.
263 00:26:03,900 --> 00:26:07,440 So which one was the AI, in fact? 264 00:26:07,440 --> 00:26:12,300 So you were pretty good already, so you can feel good about yourselves as an audience. 265 00:26:12,300 --> 00:26:20,560 At least you voted correctly. So yes, the one on the left is, in fact, the AI's new Rembrandt. 266 00:26:20,560 --> 00:26:23,640 Now, it's interesting that not only did they do this as a 2D image, 267 00:26:23,640 --> 00:26:28,080 but, you know, if you've seen a Rembrandt, his use of paint is very special. 268 00:26:28,080 --> 00:26:35,070 It has a very kind of 3D effect. And they even went to the extent of analysing the kind of height of the paint. 269 00:26:35,070 --> 00:26:45,570 And so they 3D printed this, interestingly. And when they asked a Rembrandt expert to come along and review their result, of course, 270 00:26:45,570 --> 00:26:48,870 the Rembrandt expert was incredibly snooty and dismissive about the whole project. 271 00:26:48,870 --> 00:26:56,250 But the only thing that he could point to to criticise it was that the use of paint was 20 years earlier than the style of the actual portrait. 272 00:26:56,250 --> 00:27:02,640 So the team felt actually quite good: if that was all that was wrong with it, that's OK. 273 00:27:02,640 --> 00:27:06,300 But you might say, well, what's the point of another Rembrandt? 274 00:27:06,300 --> 00:27:14,610 We've got wonderful Rembrandts. Why do we need any more? So, certainly, my favourite art critic is Jonathan Jones in The Guardian. 275 00:27:14,610 --> 00:27:19,890 I love reading Jonathan Jones because he's always totally dismissive about anything to do with AI. 276 00:27:19,890 --> 00:27:26,340 This is what he wrote about this Rembrandt project: what a horrible, tasteless, insensitive and soulless travesty of all
277 00:27:26,340 --> 00:27:31,380 that is creative in human nature. This is what happens when technology is used for things it never should be used for. 278 00:27:31,380 --> 00:27:33,480 But frankly, anyone who wears that sort of shirt as an art critic, 279 00:27:33,480 --> 00:27:40,110 I'm really not quite sure I trust his critique very much. But to some extent he has a point. 280 00:27:40,110 --> 00:27:42,480 You know, what is the point of creating another Rembrandt? 281 00:27:42,480 --> 00:27:48,780 Well, I do think there is a point, because the wonderful thing about this AI is that it's starting to recognise things that we, 282 00:27:48,780 --> 00:27:54,190 as humans, have missed in the data. Not so much in Rembrandt; 283 00:27:54,190 --> 00:27:58,980 I haven't seen anything new in the Rembrandt analysis. But take, for example, something like Jackson Pollock: 284 00:27:58,980 --> 00:28:05,040 a kind of algorithmic analysis of Jackson Pollock has revealed that Pollock is doing something very special when he splatters paint 285 00:28:05,040 --> 00:28:13,350 around, that he's creating a very special mathematical shape that we can actually analyse and judge: 286 00:28:13,350 --> 00:28:18,940 the fractal dimension of these paintings. So AI is giving us new insights. 287 00:28:18,940 --> 00:28:26,700 There's a wonderful story I tell in the book about the Netflix algorithm that just took our likes and dislikes of films, 288 00:28:26,700 --> 00:28:30,060 and the numbers of the films; it didn't know anything about the films. 289 00:28:30,060 --> 00:28:35,520 But just from our likes and dislikes, it is able to clump them together into films of a similar sort of genre. 290 00:28:35,520 --> 00:28:38,340 So you can see: oh yeah, look, these are all comedy films.
291 00:28:38,340 --> 00:28:44,010 These are all thrillers. But every now and again, it would clump films together, because of our likes and dislikes, 292 00:28:44,010 --> 00:28:51,570 in a way that expressed our common feelings about film, even though we didn't really have a name for that genre. 293 00:28:51,570 --> 00:28:59,160 It was almost as if the AI had spotted, through our likes and dislikes, that there was a way of clumping films together that deserved a new name. 294 00:28:59,160 --> 00:29:04,290 It had spotted a new sort of structure in the films that we hadn't named. 295 00:29:04,290 --> 00:29:08,580 So I think there is a point about looking backwards, but I think the most exciting thing is looking forward. 296 00:29:08,580 --> 00:29:13,290 Can we get the AI to do new things, to break the mould, to do exciting new things? 297 00:29:13,290 --> 00:29:19,110 So here's your next challenge. Four of these paintings are done by a human. 298 00:29:19,110 --> 00:29:25,800 Four of these paintings are done by an AI. You have to now judge which is which. 299 00:29:25,800 --> 00:29:29,370 So we'll do the same: since I flipped, we'll do the left-hand one again. 300 00:29:29,370 --> 00:29:37,680 So the left-hand one is the one you're going to be voting on. Do you think the four paintings on the left are by the human or by the AI? 301 00:29:37,680 --> 00:29:46,890 So let's turn it over to you. So, your chance to vote: those four paintings on the left, what are they? 302 00:29:46,890 --> 00:29:52,260 OK, so now, oh, look, there's a Brexit vote going on there. 303 00:29:52,260 --> 00:30:02,580 So you seem to be much less convinced by this one, but it's just edging a little bit over, still changing. 304 00:30:02,580 --> 00:30:14,070 But I think you think that those are the AI's. OK, let's go back and see.
305 00:30:14,070 --> 00:30:19,290 OK, so which one was which? In fact, those four on the left were the human ones. Only a few people got that. 306 00:30:19,290 --> 00:30:26,130 Yes. Yes, I knew it. So in fact, you found that one a bit more difficult. 307 00:30:26,130 --> 00:30:30,860 It's interesting, because in some ways I would say that the 308 00:30:30,860 --> 00:30:35,600 four which were produced by AI have much greater complexity to them. 309 00:30:35,600 --> 00:30:41,210 And these were actually shown at the Basel Art Fair a couple of years ago, I think in 2016. 310 00:30:41,210 --> 00:30:46,250 And nobody was told there was any AI involved in this. 311 00:30:46,250 --> 00:30:52,160 People were just asked to give their feedback on the paintings, and the feedback on the AI ones was striking: 312 00:30:52,160 --> 00:30:58,640 people were much more emotionally engaged with the AI ones than they were with the human ones. 313 00:30:58,640 --> 00:31:03,590 And then, of course, you see, when you then tell somebody, well, in fact, that was created by a computer, 314 00:31:03,590 --> 00:31:07,580 it really upsets people. They think, oh my gosh, 315 00:31:07,580 --> 00:31:12,260 but, you know, there's no emotional world going on inside that. I think this is really interesting: 316 00:31:12,260 --> 00:31:17,690 the reaction one has to experiencing something, then finding out it's done by AI. 317 00:31:17,690 --> 00:31:22,580 And I think most of you, I mean, I also feel like I've been cheated somehow. 318 00:31:22,580 --> 00:31:30,140 But suppose I tell you a joke, or, you know, Alan's jokes were all actually made by an AI and you laughed at them all. 319 00:31:30,140 --> 00:31:36,230 If I tell you now, no, no, he actually just ran the AI joke app, 320 00:31:36,230 --> 00:31:38,750 does that invalidate your laughter? I don't think it does.
321 00:31:38,750 --> 00:31:46,550 The reason I don't think we should get too threatened by this is that the AI is only learning on our emotional world to produce its next step. 322 00:31:46,550 --> 00:31:54,380 So it's not disconnected. It has got an emotional world: it's representing all of our emotional world, but through a sort of new filter. 323 00:31:54,380 --> 00:31:57,020 What's interesting about this project especially, I think, 324 00:31:57,020 --> 00:32:03,410 is the way that these paintings were created, because these four paintings on the right were not created by one algorithm, 325 00:32:03,410 --> 00:32:07,370 but by two algorithms, almost working in competition against each other. 326 00:32:07,370 --> 00:32:13,880 They almost made it into a game: something called a generative adversarial network, in this case a creative adversarial network. 327 00:32:13,880 --> 00:32:18,170 So the first algorithm was tasked with creating the art, 328 00:32:18,170 --> 00:32:24,140 and what it did was to learn on all of the art of the past; it became a kind of art historian. 329 00:32:24,140 --> 00:32:30,860 It learnt how to classify art into particular styles, and understood when something was Cubist art or pointillist art, 330 00:32:30,860 --> 00:32:37,850 by doing a machine learning process on the images and being told what was in which particular style. 331 00:32:37,850 --> 00:32:43,190 Then it was tasked with creating something that didn't fit into any of those styles. 332 00:32:43,190 --> 00:32:50,210 So it was really trying to break the mould. It had to make something that couldn't be classified, given the parameters that it had learnt. 333 00:32:50,210 --> 00:32:55,220 But it was also tasked with creating something that we, as humans, would recognise as art. 334 00:32:55,220 --> 00:33:00,080 So it had already learnt, from all the art of the last one thousand five hundred years, what we regarded as art.
335 00:33:00,080 --> 00:33:05,270 And so it knew a kind of upper limit of how far it could push the idea. 336 00:33:05,270 --> 00:33:13,160 The second algorithm, the discriminator algorithm, was tasked with either saying, look, I think you're still stuck, 337 00:33:13,160 --> 00:33:19,070 you're still stuck in Cubist art there, or else saying, you're going way too far and that isn't art at all. 338 00:33:19,070 --> 00:33:23,690 And it was the competition between these two that ultimately led to these images. 339 00:33:23,690 --> 00:33:30,950 Now, I think what's exciting about this particular idea is that very often algorithms can just churn out loads of things. 340 00:33:30,950 --> 00:33:38,810 But the challenge is choosing which ones are interesting. So here we had a second algorithm which was doing some choosing and discriminating. 341 00:33:38,810 --> 00:33:43,280 And this is very close, I think, to how humans actually work creatively. 342 00:33:43,280 --> 00:33:48,470 Here's Paul Klee talking about the act of creation: already at the very beginning of the productive act, 343 00:33:48,470 --> 00:33:55,100 shortly after the initial motion to create, comes the first counter motion, the initial movement of receptivity. 344 00:33:55,100 --> 00:33:58,880 This means the creator controls whether what he has produced so far is good. 345 00:33:58,880 --> 00:34:02,750 There's always that: you do something, and then you ask, is that good? I'm not sure; I'm going to throw it away. 346 00:34:02,750 --> 00:34:09,560 I'll do something again. Here's Paul Valery, the French poet, talking about the idea of these two kinds of mindset: 347 00:34:09,560 --> 00:34:14,510 it takes two to invent anything. The one makes up combinations; the other one chooses. 348 00:34:14,510 --> 00:34:20,120 And I certainly find, as a mathematician, that I have collaborators around the world where we play these two roles.
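That two-role game, one routine proposing and another filtering, can be caricatured in a few lines. This is emphatically not the adversarial network that produced the paintings, and there is no learning in it at all; it is a toy in which known "styles" are points in a plane, a creator proposes random works, and a discriminator accepts only works that are novel yet not so far out that they stop counting as art:

```python
import numpy as np

# Toy caricature of the creator/discriminator game (no learning, unlike
# a real creative adversarial network).  Known "styles" are cluster
# centres in a 2-D feature space.
rng = np.random.default_rng(0)
STYLES = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])

def creator():
    # Propose a random candidate "work" in feature space.
    return rng.uniform(-2.0, 6.0, size=2)

def discriminator(work, novelty=1.5, art_limit=5.0):
    # Accept only works that sit away from every known style (novel)
    # yet not too far from all of them (still recognisable as art).
    d = np.linalg.norm(STYLES - work, axis=1).min()
    return novelty < d < art_limit

def create_new_art(tries=10000):
    # The competition: keep proposing until the discriminator accepts.
    for _ in range(tries):
        w = creator()
        if discriminator(w):
            return w
    return None

novel_work = create_new_art()
```

The real system replaces both hand-written functions with trained networks, but the shape of the loop, propose then filter, is the same as in the Valery quote: one makes up combinations, the other chooses.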
349 00:34:20,120 --> 00:34:27,300 So I have a collaborator in Germany where I'm the kind of mad creator, and he's the discriminator, kind of knocking things down. 350 00:34:27,300 --> 00:34:33,860 Whereas with a collaborator I have in the Middle East, he's the kind of mad creator, and I'm the discriminator in that case. 351 00:34:33,860 --> 00:34:39,710 And by doing that kind of combination, we actually make progress together. 352 00:34:39,710 --> 00:34:43,730 OK, so that was the visual world. What about the written word? 353 00:34:43,730 --> 00:34:49,460 We already heard how text prediction produces some strange effects. 354 00:34:49,460 --> 00:34:53,660 The written word, interestingly, is something AI is having quite a lot of difficulty with. 355 00:34:53,660 --> 00:34:58,670 But then again, it's one of the first things that AI actually tried to do. 356 00:34:58,670 --> 00:35:04,880 This is the Manchester universal computer. After Turing left Bletchley Park, 357 00:35:04,880 --> 00:35:14,510 he went up to Manchester to try and realise some of his ideas, and the team there were rather perplexed when letters started appearing around the lab, 358 00:35:14,510 --> 00:35:19,250 which were kind of love letters, written by the Manchester universal computer. 359 00:35:19,250 --> 00:35:22,820 And they were sort of perplexed by this, until one of the team admitted that, in fact, 360 00:35:22,820 --> 00:35:30,620 he'd written a programme for the computer, which was a template, and he was using a random number generator that Turing 361 00:35:30,620 --> 00:35:37,340 had created for the computer, which was randomly filling in the template with words, amorous words. 362 00:35:37,340 --> 00:35:45,140 So after a while, you'd spot the template. So, not very good. But poetry is somewhere where AI has been quite successful, 363 00:35:45,140 --> 00:35:52,880 I think partly because it's, again, a nice closed form.
It's not asking for too much large-scale structure. 364 00:35:52,880 --> 00:35:56,900 Also, I think that, actually, you know, as an audience, 365 00:35:56,900 --> 00:36:01,940 you bring a lot of your own creativity when you look at a piece of art. I mean, I think that's the point: 366 00:36:01,940 --> 00:36:07,940 an artist leaves room for your own world to fill things in, with a certain ambiguity to things. 367 00:36:07,940 --> 00:36:08,630 And so, you know, 368 00:36:08,630 --> 00:36:15,710 poetry has this kind of gnomic quality, which I think especially means you bring a lot of your creativity to it when you read it. 369 00:36:15,710 --> 00:36:18,920 So here are your challenges. Now I've got some poems for you. 370 00:36:18,920 --> 00:36:26,000 I want you to vote on whether you think these poems 371 00:36:26,000 --> 00:36:32,060 are by an AI or are by humans. So: bot or not? 372 00:36:32,060 --> 00:36:40,610 OK, so here's your first poem. So I'll read you the poem, and then we'll go to see what you think about it. 373 00:36:40,610 --> 00:36:47,120 I won't read it all. Mortal mind makes burying my rock a heart warm beat with cold beats company, 374 00:36:47,120 --> 00:36:54,770 shall I earlier or you fail at our force and lie the ruins of rifles once a world of art? 375 00:36:54,770 --> 00:36:58,410 OK, let's stop there. So do you think that is bot or not? 376 00:36:58,410 --> 00:37:04,350 So now you vote: red for robot, blue for human. 377 00:37:04,350 --> 00:37:09,230 Yeah. Oh gosh. OK. So a massive vote for red. 378 00:37:09,230 --> 00:37:13,070 There are a few thinking it's human, in blue. 379 00:37:13,070 --> 00:37:17,390 OK, so that was your first challenge. So let's go on to your next challenge. 380 00:37:17,390 --> 00:37:21,350 I won't reveal the answers yet. I've got three poems for you.
381 00:37:21,350 --> 00:37:25,910 OK, so here's your next one. This is quite different. 382 00:37:25,910 --> 00:37:33,900 There are smaller pieces of plastic side reaction of real time of packs of displaced exclusionary heart hurt of powerlessness. 383 00:37:33,900 --> 00:37:39,080 The magazine fired and undignified as head fatty implied internalised violence. 384 00:37:39,080 --> 00:37:45,110 A frozen helplessness is off white chocolate, a two tiered OK. 385 00:37:45,110 --> 00:37:48,980 So what do you think? Do you think that is AI? 386 00:37:48,980 --> 00:37:59,500 Or do you think that's human? OK, so let's give it to you to vote. Am I messing with you, or, you know... 387 00:37:59,500 --> 00:38:04,480 OK, so you think... yeah, so that's quite a lot, yeah, 388 00:38:04,480 --> 00:38:08,170 a little bit for human there, but still some people thinking: OK, is this a double bluff? 389 00:38:08,170 --> 00:38:13,180 Because that's clearly code. OK, so I think you're going for human there. 390 00:38:13,180 --> 00:38:16,690 OK, so, right. This is working. 391 00:38:16,690 --> 00:38:22,410 OK, so here's your next challenge. Bot or not? 392 00:38:22,410 --> 00:38:27,150 Imagine now the dark smoke awakened to fly all these years to another day. 393 00:38:27,150 --> 00:38:31,590 Notions of tangled trees, the other side of water. I see it is already here. 394 00:38:31,590 --> 00:38:36,630 Sequences of a face sea. The shared an old friends past their dreams. 395 00:38:36,630 --> 00:38:43,640 Bot or not? OK, so over to you. You think they're all bot? 396 00:38:43,640 --> 00:38:48,140 OK, so you're going for bot on that one. So. So that's right. 397 00:38:48,140 --> 00:38:55,520 So you think it's all AI. Yeah, that's the sort of thing an AI would do, isn't it? 398 00:38:55,520 --> 00:39:02,690 OK, so let's see. So you've gone for the bot on that one. So again.
399 00:39:02,690 --> 00:39:07,810 Oh, it went to human, yes, that's right, sorry, you were absolutely right. Thank you for picking that one up. 400 00:39:07,810 --> 00:39:11,290 Yeah. So let's go back. Let me give you the answers then. 401 00:39:11,290 --> 00:39:18,550 So the first one: you were pretty convinced that was AI. Poor old Gerard Manley Hopkins will be turning in his grave. 402 00:39:18,550 --> 00:39:21,790 And, you know, perhaps understandably: 403 00:39:21,790 --> 00:39:29,980 I chose Gerard Manley Hopkins because I've never understood any poem that Gerard Manley Hopkins has ever written. So you didn't sniff that one out. 404 00:39:29,980 --> 00:39:33,940 The second one, yes, you did sniff out: OK, that's too much like code; 405 00:39:33,940 --> 00:39:38,770 it can't be an AI. It's actually a young Australian poet called Mez Breeze, 406 00:39:38,770 --> 00:39:46,420 and she's very interested in this kind of interplay between computer code, with its own kind of poetry and rhythm to it, 407 00:39:46,420 --> 00:39:49,730 and meaning that might have something to say to us. 408 00:39:49,730 --> 00:39:53,620 And so she's very interested in this kind of weird interface between the two. 409 00:39:53,620 --> 00:39:58,120 So actually, we've had two humans, so that leaves the last one. Is that human? Have I really messed with you? No: 410 00:39:58,120 --> 00:40:05,290 the last one you sniffed out. Actually, the only one that made any sense at all was, in fact, the only one created by an AI. 411 00:40:05,290 --> 00:40:10,000 This is Ray Kurzweil's work: he created something called the Cybernetic Poet. 412 00:40:10,000 --> 00:40:14,560 And this is a machine learning process where he took poems of Yeats, Keats, 413 00:40:14,560 --> 00:40:24,250 Eliot, and then the bot was tasked with creating something which is a kind of fusion of Yeats and Eliot, for example.
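At its crudest, that kind of fusion is just a statistical model trained on more than one poet at once. Kurzweil's Cybernetic Poet is far more sophisticated, but a toy word-level Markov chain over two tiny made-up lines (placeholders, not the real training data) gives the flavour:

```python
import random
from collections import defaultdict

# Toy sketch of "fusion": a word-level Markov chain trained on two
# poets' lines at once, so generated text blends both sources.
# (The Cybernetic Poet is far more sophisticated; these corpora are
# made-up placeholders, not the real training data.)

def train(corpora):
    model = defaultdict(list)
    for text in corpora:
        words = text.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)  # record every observed successor word
    return model

def generate(model, start, length=8, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: the word was only ever seen last
        out.append(rng.choice(successors))
    return " ".join(out)

poet_a = "the dark trees dream of the old water"
poet_b = "old friends pass dreams to the dark sea"
model = train([poet_a, poet_b])
sample = generate(model, "the", 6)
```

Because both corpora feed one transition table, a walk through it can wander from one poet's phrasing into the other's, which is the "fusion of Yeats and Eliot" idea in miniature.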
414 00:40:24,250 --> 00:40:30,430 So, of course, Ray Kurzweil is one of the people talking about the idea of the singularity, 415 00:40:30,430 --> 00:40:35,180 the moment when computers might actually be more intelligent than humans. 416 00:40:35,180 --> 00:40:40,180 So poetry, actually, AI is not doing too badly at. 417 00:40:40,180 --> 00:40:46,330 It's the kind of longer-scale writing that AI is having real difficulty with: 418 00:40:46,330 --> 00:40:51,070 you can generate quite interesting text at the small scale, 419 00:40:51,070 --> 00:40:54,250 but the idea of writing a novel is still way beyond it. 420 00:40:54,250 --> 00:41:02,230 Although there have been some attempts to write novels: there was a very interesting case by a team called Botnik. 421 00:41:02,230 --> 00:41:06,580 They were big Harry Potter fans, and they were very disappointed 422 00:41:06,580 --> 00:41:11,200 that there were only seven volumes of Harry Potter, and they wanted an eighth. They wanted to know what happens next. 423 00:41:11,200 --> 00:41:18,550 So what they did was they got machine learning to take all of J.K. Rowling's writing, learn the kind of ideas that she's interested in 424 00:41:18,550 --> 00:41:24,370 and her style of writing, and they created an algorithm to create an eighth book. 425 00:41:24,370 --> 00:41:27,400 So here is the beginning of this latest book. I actually love the title. 426 00:41:27,400 --> 00:41:33,700 This is called Harry Potter and the Portrait of What Looked Like a Large Pile of Ash. 427 00:41:33,700 --> 00:41:38,710 I'd read that. I'd read that. So it starts off pretty well. 428 00:41:38,710 --> 00:41:42,670 Magic. It was something that Harry Potter thought was very good. So good: 429 00:41:42,670 --> 00:41:45,760 it's already picked up that these books are about magic, you know?
430 00:41:45,760 --> 00:41:51,460 But then: leathery sheets of rain lashed at Harry's ghost as he... Leathery sheets of rain! 431 00:41:51,460 --> 00:41:55,510 I think that's a beautiful image, leathery sheets of rain, as he walked across 432 00:41:55,510 --> 00:42:00,790 the grounds towards the castle. After that, it began to lose the plot a little bit. 433 00:42:00,790 --> 00:42:10,580 Ron was standing there and doing a kind of frenzied tap dance. He saw Harry and immediately began to eat Hermione's family. 434 00:42:10,580 --> 00:42:15,850 And then: Ron's Ron shirt was just as bad as Ron himself. 435 00:42:15,850 --> 00:42:22,060 So the AI was doing a very good kind of local generation of things, which had some sort of meaning, 436 00:42:22,060 --> 00:42:28,420 but it doesn't have a very good sense of long-term structure. OK, what about music? 437 00:42:28,420 --> 00:42:35,200 Lovelace herself gave us this challenge of whether AI could produce music, and music is full of lots of patterns. 438 00:42:35,200 --> 00:42:38,230 When you hear a piece of music on the radio, 439 00:42:38,230 --> 00:42:45,580 you can probably very quickly pick out who the composer is, because they have particular styles, a particular sort of sound world. 440 00:42:45,580 --> 00:42:52,630 Can the AI learn that, and be able to produce something at a particularly good level, to replicate it or do something new? 441 00:42:52,630 --> 00:43:00,550 So AI always starts on Bach. Bach is where AI always starts, because Bach has a lot of algorithms at work. 442 00:43:00,550 --> 00:43:05,290 If you look at something like the Musical Offering: Bach wrote the Musical Offering, 443 00:43:05,290 --> 00:43:12,040 these little pieces, for Frederick the Great, as a kind of puzzle to be solved. 444 00:43:12,040 --> 00:43:16,540 There was a little algorithm you had to spot, which you had to expand to see what the music actually meant. 445 00:43:16,540 --> 00:43:24,280 So.
So actually, Bach is a very good place for AI to start. And some of you may remember, a few weeks ago, the Google Doodle: 446 00:43:24,280 --> 00:43:29,380 did you have a go on the Google Doodle celebrating Bach's birthday? In this Google Doodle, 447 00:43:29,380 --> 00:43:34,690 you could put in a line of music and then it would harmonise the other three voices. 448 00:43:34,690 --> 00:43:38,890 So the machine learning here is based on something called Magenta. 449 00:43:38,890 --> 00:43:46,360 It's taken all the chorales that Bach had written. Chorales are very good because they don't change key very often. 450 00:43:46,360 --> 00:43:53,380 They're very sort of contained, and it can learn a lot about the way that a chorale fills in the harmonies, 451 00:43:53,380 --> 00:44:01,700 a bit like filling in a Sudoku: you've got to learn the rules. So the little Google Doodle would fill in the harmonies 452 00:44:01,700 --> 00:44:05,120 around your tune. So I put in the theme of the Musical Offering; 453 00:44:05,120 --> 00:44:09,180 this was my attempt to beat it, because the Musical Offering is actually quite a difficult challenge. 454 00:44:09,180 --> 00:44:14,240 It doesn't seem to have any key to it at all. And when I played it back, it was really rubbish, actually. 455 00:44:14,240 --> 00:44:22,400 But what was very nice about the Google Doodle is that you could then say, I thought that was rubbish, and the thing would learn, and actually say, 456 00:44:22,400 --> 00:44:28,790 OK, that harmonising didn't work. And so you were training the algorithm as you gave your feedback. 457 00:44:28,790 --> 00:44:34,890 And if you thought it was good, it would kind of reprioritise itself: I'll do more of that sort of harmony. 458 00:44:34,890 --> 00:44:44,420 So, very nice. So, what we've done: I've actually set up a centre in the Royal Northern College of Music with a composer, Emily Howard.
459 00:44:44,420 --> 00:44:49,370 It's called PRISM, which stands for Practice and Research in Science and Music. 460 00:44:49,370 --> 00:44:56,270 And what we're interested in is this kind of interplay between the scientific and mathematical world and the world of composition. 461 00:44:56,270 --> 00:45:02,570 So we've done various projects. We wrote a string quartet representing mathematical proofs together. 462 00:45:02,570 --> 00:45:06,620 But now I've got a PhD student who's a composer, which is really exciting: 463 00:45:06,620 --> 00:45:12,380 Rob Laidlow, who actually had his quartet premiered last week at the Wigmore Hall. 464 00:45:12,380 --> 00:45:16,370 He was going to be here tonight, but sadly he's ill, which is a real shame. 465 00:45:16,370 --> 00:45:23,840 But he's been working alongside me and various other people here in the university as well to produce a 466 00:45:23,840 --> 00:45:31,640 piece using some of the best software we have around at the moment, to create a kind of hybrid AI-Bach piece. 467 00:45:31,640 --> 00:45:40,070 And this is what we want to play you. So the challenge here is this. We've got fantastic musicians in the department here, 468 00:45:40,070 --> 00:45:49,130 and Coby is going to come up and play this piece for you. So far we've just been doing very straight AI-versus-human tests. 469 00:45:49,130 --> 00:45:51,530 This is a slightly different challenge. 470 00:45:51,530 --> 00:45:59,870 You've now got a piece which sometimes is AI and sometimes is Bach, and I'm not going to tell you how many times it moves between one and the other. 471 00:45:59,870 --> 00:46:05,690 And what I want you to do is to see whether you can spot the joins. Can you tell: oh, gosh, that's horrible, 472 00:46:05,690 --> 00:46:10,430 that's got to be AI; or, oh yeah, that's Bach. So we're going to run this twice.
473 00:46:10,430 --> 00:46:16,980 The piece is about four minutes long, and as I say, we're going to record your thoughts on this, and then hopefully, 474 00:46:16,980 --> 00:46:20,390 if we can do this, I'm going to play them back to you. 475 00:46:20,390 --> 00:46:25,850 Oh gosh, because now we don't have two screens, that's going to be quite a challenge. 476 00:46:25,850 --> 00:46:29,690 OK, no, I know what I can do. OK, yeah. I've thought of an improvised way of doing this. 477 00:46:29,690 --> 00:46:34,100 I'm going to do it very analogue. So you'll see my solution in a minute. 478 00:46:34,100 --> 00:46:45,260 OK, so perhaps we can give a big round of applause to Coby, who's going to come and play this piece. 479 00:46:45,260 --> 00:46:52,370 We have a page turner as well. And so, just to set this thing off, can you all show your blue faces? 480 00:46:52,370 --> 00:46:54,950 So we're going to just start you all off on blue. 481 00:46:54,950 --> 00:47:03,560 So the idea is, as soon as you think that the music has gone into something which is not Bach, you turn to the red face. 482 00:47:03,560 --> 00:47:07,910 And if you think it's gone back to Bach, you move to the blue face, OK? 483 00:47:07,910 --> 00:47:13,640 It's very simple. OK, so now, in order to sync these things, I'm going to have to count this down. 484 00:47:13,640 --> 00:47:18,380 So hopefully we can show these two things together. So, you ready? 485 00:47:18,380 --> 00:51:35,720 Ready? Timing. OK, so I'll go four, three, two, one. 486 00:51:35,720 --> 00:51:42,320 Well, as I was saying, I think there are moments when Bach will be turning in his grave, as you'll see. 487 00:51:42,320 --> 00:51:50,240 But I think when it was Bach, there was much more confidence in your knowledge of that: 488 00:51:50,240 --> 00:51:54,230 you could feel it was right. But when it wasn't, it was really kind of edgy.
489 00:51:54,230 --> 00:51:57,830 And there were moments when it suddenly surged red, which were giveaway moments. 490 00:51:57,830 --> 00:52:06,860 So we're going to replay this so you can actually see what your answers were. 491 00:52:06,860 --> 00:52:10,860 So what we did was we took one of the English Suites, 492 00:52:10,860 --> 00:52:18,560 the fourth English Suite, and what we did was to take bits out of the piece of music and then ask the AI to fill in the gaps. 493 00:52:18,560 --> 00:52:24,200 It's interesting, because we actually used quite a simple piece of machine learning. 494 00:52:24,200 --> 00:52:26,330 It's called Clara. 495 00:52:26,330 --> 00:52:35,690 It's developed by OpenAI, this team which is trying to make AI very open to the world, something that Elon Musk helped set up. 496 00:52:35,690 --> 00:52:42,410 And so this is actually a piece of software written by Christine Payne as part of the OpenAI project. 497 00:52:42,410 --> 00:52:48,890 And it's growing: we're going to keep on working on this to make it even better, with something called MuseNet. 498 00:52:48,890 --> 00:52:53,600 But what's interesting is that this is kind of predictive, in the sense of: here's 499 00:52:53,600 --> 00:52:58,070 what's happened up to date; now make the decision about what will come next. 500 00:52:58,070 --> 00:53:02,090 So some of the AI is working very cleverly, working backwards as well: 501 00:53:02,090 --> 00:53:06,440 knowing where the piece is going, it can make some prediction. 502 00:53:06,440 --> 00:53:10,610 But this piece of software does not have a long-term memory, 503 00:53:10,610 --> 00:53:15,290 and this is one of the challenges: to create a piece of music software that can actually know 504 00:53:15,290 --> 00:53:20,810 about what it's done in the past and exploit that in its decisions as it goes forward.
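That predict-what-comes-next idea, and its lack of long-term memory, can be illustrated with a deliberately crude sketch. This is nothing like Clara itself, which is a deep network; it is an order-2 model that simply chooses the most common note to follow the previous two, and is therefore blind to everything earlier in the piece:

```python
from collections import Counter, defaultdict

# Crude illustration of context-limited prediction (not Clara, which is
# a deep network): an order-2 model picks the most common note to follow
# the previous two.  Anything earlier than two notes back is invisible
# to it -- the "no long-term memory" problem in miniature.

def train_order2(notes):
    counts = defaultdict(Counter)
    for a, b, c in zip(notes, notes[1:], notes[2:]):
        counts[(a, b)][c] += 1  # how often c follows the pair (a, b)
    return counts

def continue_melody(model, opening, length):
    melody = list(opening)
    for _ in range(length):
        context = tuple(melody[-2:])  # the model's entire "memory"
        if context not in model:
            break
        melody.append(model[context].most_common(1)[0][0])
    return melody

# A toy "piece": the phrase C D E C recurs, so (C, D) is always followed by E.
piece = ["C", "D", "E", "C", "D", "E", "C", "G", "C", "D", "E", "C"]
model = train_order2(piece)
next_note = continue_melody(model, ["C", "D"], 1)  # -> ['C', 'D', 'E']
```

Widening the context window (or, in a real model, adding attention or recurrence) is exactly the step needed before the software can exploit what it did many bars ago in its current decisions.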
505 00:53:20,810 --> 00:53:29,840 One of the things I talked about with Coby beforehand was that he could feel the AI very clearly, because the AI is not embodied. 506 00:53:29,840 --> 00:53:39,920 So it has no trouble giving really awkward fingerings, whilst Bach was writing something that would fit very nicely under the fingers. 507 00:53:39,920 --> 00:53:48,070 I think this is one of the challenges of AI very generally, the idea of it not being embodied, and you could really feel that here. 508 00:53:48,070 --> 00:53:52,700 I mean, when I first played through it, it was just like, well, this is just really goofy. 509 00:53:52,700 --> 00:54:00,770 OK, so let's play that back. What I was hoping to do is have things on one screen and the other, but here's what I'll do. 510 00:54:00,770 --> 00:54:05,300 So we're going to, yeah, exactly, we'll get the thing up here. 511 00:54:05,300 --> 00:54:13,130 Coby's going to play this again, the painful bits and the Bach, which is great fun, and what I will do, analogue-wise, 512 00:54:13,130 --> 00:54:19,460 I was going to do this on the screen, but I will just show you as we're going along which bits are AI and which are not. 513 00:54:19,460 --> 00:54:23,810 And so you'll be able to see what you voted and I'll tell you what the answers are. 514 00:54:23,810 --> 00:54:32,150 OK. All right. So I sort of slightly biased things by asking you all to put up the human face, 515 00:54:32,150 --> 00:54:38,210 because I wanted to see how long it would take until you actually spotted that the opening wasn't Bach at all. 516 00:54:38,210 --> 00:54:42,470 It was, in fact, AI, and it took quite a long time. So we're going to start. We're going to count this down 517 00:54:42,470 --> 00:55:13,590 so we get the timing right, hopefully. Are you all ready? So four, three, two, one.
518 00:55:13,590 --> 00:55:48,530 You picked that up quite quickly, that that was Bach. I think you can hear that it's got some sort of direction to it. 519 00:55:48,530 --> 00:56:03,710 [Inaudible commentary over the music] 520 00:56:03,710 --> 00:57:04,880 [Inaudible commentary over the music] 521 00:57:04,880 --> 00:58:51,420 [Inaudible commentary over the music] 522 00:58:51,420 --> 00:59:01,990 Right. And thank you very much to Katie. 523 00:59:01,990 --> 00:59:07,000 Now, one of the things I was very strict about with Rob, the composer, was that it 524 00:59:07,000 --> 00:59:11,560 had to be AI, and he was not allowed any chance to try and improve the AI. 525 00:59:11,560 --> 00:59:18,490 We were very strict about that, because very often when you look at projects, and there are many in the book, although they say it's AI, 526 00:59:18,490 --> 00:59:24,280 you can see there's a lot of human input, because it makes a much better story if you just say it's the AI and the human isn't involved at all. 527 00:59:24,280 --> 00:59:29,050 So we were very strict that the portions that were AI had to be only AI. 528 00:59:29,050 --> 00:59:35,260 And interestingly, we allowed it just to fill in the last chord, and it missed out one crucial note which would have made the whole thing resolve. 529 00:59:35,260 --> 00:59:40,880 But you can see from that that it was actually pretty convincing, 530 00:59:40,880 --> 00:59:46,610 and there were a few giveaway moments, but it was actually quite hard to pick out which was the AI. 531 00:59:46,610 --> 00:59:48,250 I think you were pretty good on what was Bach, 532 00:59:48,250 --> 00:59:53,950 although there were a few horrific moments when you thought it was AI and Bach would be positively turning in his grave. 533 00:59:53,950 --> 01:00:01,450 So again, what's the point of this? Well, I think there is a point for creative artists, and I think this is not about competition. 534 01:00:01,450 --> 01:00:07,150 This is about collaboration.
This is a new tool to push our own human creativity. 535 01:00:07,150 --> 01:00:09,640 We've already seen that in the realm of art. 536 01:00:09,640 --> 01:00:16,840 But in music, one of the most interesting stories I saw was the idea of an AI that had been trained to play jazz. 537 01:00:16,840 --> 01:00:24,700 It's called the Continuator, constructed at Sony's lab in Paris by François Pachet and his team. 538 01:00:24,700 --> 01:00:28,450 And they got the AI to learn in a very similar way to a jazz musician, 539 01:00:28,450 --> 01:00:33,670 learning what the probability is of the next move after a certain sequence of notes. 540 01:00:33,670 --> 01:00:40,270 And they then did a concert where people found it very difficult to tell when it was the human playing and when it was the AI. 541 01:00:40,270 --> 01:00:48,580 But what struck me was the response of the jazz musician the AI had been trained on, hearing the AI play back to him. 542 01:00:48,580 --> 01:00:56,920 And he said: the system shows me ideas I could have developed, but it would have taken me years to actually develop them. 543 01:00:56,920 --> 01:01:01,630 It is years ahead of me, yet everything it plays is unquestionably me. 544 01:01:01,630 --> 01:01:09,910 And I think this is what's exciting, because we as humans often end up behaving very much like machines. 545 01:01:09,910 --> 01:01:15,890 We get stuck in our ways of thinking. We just perform the same ideas over and over again, especially in creativity. 546 01:01:15,890 --> 01:01:19,330 I know that in my own mathematics I try the same things over and over again. 547 01:01:19,330 --> 01:01:26,920 And sometimes I need something to push me out of the way I've been thinking and see that my world of possibilities is much richer. 548 01:01:26,920 --> 01:01:30,370 Like Lubat: he was in a rut. The spotlight was on him.
549 01:01:30,370 --> 01:01:34,420 He didn't realise there was so much more to play with within his sound world. 550 01:01:34,420 --> 01:01:39,220 So the exciting thing for me is that this movement towards an AI that might be creative 551 01:01:39,220 --> 01:01:42,190 is not about a threat; it's about an opportunity. 552 01:01:42,190 --> 01:01:50,620 It's about the fact that this thing could push us to behave less like machines and actually become more creative again as humans. 553 01:01:50,620 --> 01:02:11,330 Thank you.
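[Editor's aside: the core trick described for Pachet's Continuator, learning from a musician's own phrases which note tends to follow a given sequence, preferring the longest context seen in training and backing off to shorter ones, can be sketched as a small variable-order Markov model. This is a toy illustration under those stated assumptions, not Sony's actual code; the phrases and note names are invented.]

```python
import random
from collections import Counter, defaultdict

# Toy variable-order Markov sketch of Continuator-style learning:
# record, for every context of up to MAX_CONTEXT preceding notes,
# how often each next note followed it in the musician's phrases.
MAX_CONTEXT = 4

def learn(phrases):
    model = defaultdict(Counter)
    for phrase in phrases:
        for i, note in enumerate(phrase):
            for k in range(1, MAX_CONTEXT + 1):
                if i - k < 0:
                    break
                model[tuple(phrase[i - k:i])][note] += 1
    return model

def continue_phrase(model, seed, length, rng=None):
    """Extend a seed phrase 'in the style of' the training phrases."""
    rng = rng or random.Random(42)
    notes = list(seed)
    for _ in range(length):
        # Back off from the longest remembered context to the shortest.
        for k in range(MAX_CONTEXT, 0, -1):
            options = model.get(tuple(notes[-k:]))
            if options:
                nxt, = rng.choices(list(options), weights=options.values())
                notes.append(nxt)
                break
        else:
            break  # nothing learned matches: stop rather than invent
    return notes

licks = [["G", "Bb", "C", "Db", "C", "Bb", "G"],
         ["G", "Bb", "C", "Eb", "C", "Bb", "G"]]
model = learn(licks)
print(continue_phrase(model, ["G", "Bb"], 5))
```

Because every transition probability comes from the musician's own recorded phrases, everything the sketch generates is, in the sense the quote gives, "unquestionably" material of that player, recombined.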