Well, good evening, ladies and gentlemen, members of Linacre and Reuben Colleges. What a pleasure it is to see you all here, even though we are a little bit packed in this evening. Fantastic that we are able to have the 2022 O.C. Tanner Lecture in person. The Tanner Lectures are now in their 44th year. They were established by the American scholar, industrialist and philanthropist Obert Clark Tanner, and in creating the lectures Professor Tanner said, "I hope these lectures will contribute to the intellectual and moral life of mankind. I see them simply as a search for a better understanding of human behaviour and human values."

Appointment as a Tanner lecturer is recognition of an exceptionally distinguished and important career in the field of human values, and that is a description fully justified in the case of tonight's lecturer, Professor Rosalind Picard. Professor Picard is founder and director of the Affective Computing Research Group at the MIT Media Lab and the founding faculty chair of MIT's Mind, Hand and Heart Initiative. She co-founded Affectiva Inc., providing emotion A.I. technology, and Empatica Inc., which designs and develops A.I. systems to monitor human health through wearable sensors. Professor Picard holds a Bachelor's degree in electrical engineering from the Georgia Institute of Technology and Master's and Doctoral degrees from MIT, both in electrical engineering and computer science. She became internationally known for constructing mathematical models for content-based retrieval of images and for pioneering methods of automated search and annotation in digital video. Her book Affective Computing, published in 1997, started a new field by that name and has become a classic in computer science.

Three weeks ago here in Oxford, Dr. Demis Hassabis, founder of DeepMind, gave a special O.C. Tanner Lecture on artificial intelligence. Dr. Hassabis is perhaps best known across the globe as the inventor of AlphaGo, the computer program which defeated the world Go champion Lee Sedol in 2016. In his lecture, Dr. Hassabis enumerated three things that he believed were fundamental for the development of artificial intelligence: firstly, a problem with a massive combinatorial search space; secondly, a clear metric with which performance could be evaluated; and thirdly, lots of data, or perhaps an effective simulation. He then described how the real challenge in developing AlphaGo to beat the world champion had been the development of the evaluation function. It proved impossible, he said, to write down an evaluation function for the game of Go. Indeed, he hypothesised that the world's leading Go players do not mentally calculate all possible moves.
Rather, they rely on intuition. His breakthrough had been in developing methods for allowing AlphaGo to be intuitive.

Research into human intelligence has revealed that emotions play an essential part in decision making, in perception, in learning and more; that is, they influence the very mechanisms of rational thinking. According to Professor Picard, if we want computers to be genuinely intelligent, then we must give them the ability to recognise, understand and respond to emotion. Her lecture this evening will expand on that thesis and is called 'Emotion, A.I. and Human Values'. Please join me in welcoming Professor Picard to the stage.

Thank you. It's a real honour for me to be here with all of you, and to see a full room, especially after the pandemic. I'm going to talk about how we might build a future life with A.I. where the best human values are hopefully honoured and where humans flourish.

I'm going to start with a story. In 1996 I was a very young new faculty member at MIT. I had mostly been working on computer vision, and on the side I had written an unpublished paper called Affective Computing. Somebody had read it over in the A.I. Lab, in Rod Brooks's zoo. The zoo was a very famous group of world-renowned roboticists, with Rod at the helm, the philosopher Daniel Dennett, a theologian, Anne Foerst, and other experts from places like Oxford, who were trying to build the world's most advanced humanoid robot. Instead of inviting me to give the normal lecture, they sat me in a circle. It almost felt a little like a firing squad, except that it was delightful: it was questions being fired. Why would you want to give computers emotion? Isn't that what makes people so stupid? Isn't that a really bad idea? And I went through the evidence from neuroscience for why we now thought that, yes, of course, while extreme emotional states could lead to very stupid decisions and behaviour, we are always having affective states, and they are always interacting with cognition and perception and behaviour and action selection. And if we could understand them and model them in intelligent ways, maybe it would solve several of the problems that A.I. had. I listed those problems and said I thought that we needed to take a look at this.

Well, by the end of this gathering they were, I will say, more positive but still sceptical: like, it still seems impossible to me, or, I don't know. And as I left, one person who had been silent came up at the end. It was the theologian, and she caught me as I was leaving the room with a question that has stuck with me since. She said, isn't emotion the last thing that separates machines from humans?
I gave a very short answer, which I'll touch on at the end of my talk: a one-word answer. But today I want to go a bit deeper into these topics. First, I'm going to start with some examples of why we build this and some of the perspectives on it now, and then move into more of the human values towards the end.

The paper turned into a book, and I want to highlight here that expressing and recognising human emotion is just a part of affective computing. There are many other parts that involve shaping algorithms to actually incorporate mechanisms inspired by human and animal emotion. I'm not going into all of that today, although that is also an important part. I'm going to focus on some examples from interaction with humans, which involves expressing and recognising.

Now, people naturally express emotion to computers. And I should say, for the younger people today, they sometimes look at me like, that's not a computer, that's a vacuum cleaner. When I think of computing, I think of it as anything with computing in it. That can be a robot, a software agent, a smartphone, a smartwatch, all kinds of devices that have computation embedded in them. So affective computing was not about any particular form.

Now, we express emotion to computers. In fact, about the time I started this, twenty-five per cent of people under age twenty-four admitted to having kicked their computer out of frustration. There was lots of expressing; there was almost nothing being done about it. Here's my favourite early example: the first social robot, Kismet, built by Cynthia Breazeal in Rod Brooks's zoo several years after my visit there. I love the example not just because Kismet is recognising some affective states from the tone of speech, but watch how Kismet responds. So another important skill for the robot, to be able to learn from people, is being able to recognise communicative intent. Very good, Kismet. And the way we've done that with Kismet right now is to have the robot recognise, by tone of voice, are you praising it, or are you scolding it? No, no, no. You're not to do that. No.

So instead of just yelling at your machine or being frustrated, it could actually respond in ways that make you perhaps like it more. You can imagine how industry is now very interested in this, with all of the speaking devices that try to provide support and services, really to sell us things. You ask Alexa, or I asked Alexa this lately, what's the right way to spell Walmart? Alexa responds: the right way to spell Walmart is Alma. That's not what you asked, so, you know, that was not right. Alexa doesn't have ears to put back; Alexa doesn't look humble and submissive. But Alexa could. The machine learning inside could take that response as a signal to adjust what it has done, to hopefully do a better job in the future. It could learn that in a personalised way; it could learn it in a culture-dependent way. There are lots of ways the machine-learning loop, taking feedback from a person, could be used to improve the A.I. technology.
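As a rough illustration of that feedback loop, and only as an illustration, here is a minimal sketch in Python: a tiny response policy that treats a detected negative reaction as a penalty and a positive one as a reward, so that replies which keep frustrating a particular user get chosen less often. The candidate responses, reward values and learning rule are all invented for this sketch; they are not how Alexa or any commercial assistant actually works.

```python
import random

# Hypothetical sketch: choose among candidate responses and learn from an
# affect signal (e.g. -1 = user sounded frustrated, +1 = satisfied).
# Purely illustrative, not any vendor's real system.

class AffectFeedbackPolicy:
    def __init__(self, candidate_responses, epsilon=0.1):
        self.q = {r: 0.0 for r in candidate_responses}   # estimated value per response
        self.n = {r: 0 for r in candidate_responses}     # times each response was used
        self.epsilon = epsilon                           # exploration rate

    def choose(self):
        # Mostly pick the best-valued response, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, response, affect_reward):
        # Incremental mean update with the affect-derived reward.
        self.n[response] += 1
        self.q[response] += (affect_reward - self.q[response]) / self.n[response]

# Toy usage: the "spell it out" reply tends to frustrate this user, so over
# time the policy shifts toward asking a clarifying question instead.
policy = AffectFeedbackPolicy(["spell_it_out", "ask_clarifying_question"])
for _ in range(100):
    r = policy.choose()
    reward = -1.0 if r == "spell_it_out" else +1.0   # simulated affect feedback
    policy.update(r, reward)
print(policy.q)
```

In practice the affect signal would come from a recogniser like the ones described in this lecture, and the policy would be far richer, but the loop has this basic shape.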
Now, while I was working in my lab on computer vision and trying to teach a computer to recognise the emotions expressed on somebody's face (which isn't exactly what they're feeling; I'll say more about that in a moment), a young man came by my office to borrow a physical map. This was before digital maps. He paused and said, hey, what are you working on? And I told him I was trying to teach computers to do a better job of understanding people's affective expressions. And he said, could you help my brother? And I said, tell me about your brother. And he started telling me about his brother with autism. For those of you who don't know many people with autism: not all, but many, can have a really challenging time interpreting, kind of reading between the lines, the non-verbal signals, especially what's happening on your face, while simultaneously trying to process speech. I realised that the tools we were building to teach the machine how to do this better could actually be very slightly adapted to help people, and wouldn't that be a wonderful thing?

So we started working on that, and, fast-forward through lots of work by our team, including Rana el Kaliouby, who came over as a postdoc, co-founded Affectiva with me and later commercialised it, there is now very robust and accurate tracking not only of facial muscle movements but also mapping of those to the labels that people would give to those facial expressions, including for groups of people, and mapping them to cute little emojis for fun. This is also now embedded in robots and other devices to help them interact a bit more smoothly with people. We didn't just build one tool to help people on the autism spectrum; we gave away these tools so that other people could build them into different kinds of learning environments, including things like Google Glass. We had built early glasses with cameras on them that could look at your face. This one was particularly useful for professors: if I was talking too much and you were looking bored,
I would get a red light to tell me to stop and give you the floor. So we could look at whether you showed a facial expression of confusion, interest or boredom; it was like a non-verbal traffic signal.

Now, we also were trying to implement things we had read in the scientific literature, so this became a way to test some of the science in real life. And one of the things that surprised us right away concerned the science I had been told, and wrote about in my book. If you have my book, you can cross this part out now and fix it. I wrote what I had been told by several neurologists: that there was a true smile of happiness, that it took a different pathway in the brain, and that it involved the eye crinkle and the cheek raise as well as the lip corner pull. And so if you saw that, the person was truly happy; that was the way the dogma went.

Now, it had been reported that this had appeared in some preschoolers using educational technology, and I was making observations. So, long story short, we ran a randomised controlled trial to look at whether this would actually happen in cases where people were not delighted. I'll play for you a clip. We designed an experiment where people are frustrated: they're putting their information, their resume, online, and unbeknown to them we're using the kind of captchas that you can't fill out properly. I am sorry, you seem to be having difficulties getting through this form. Would you mind looking at the camera and sharing your thoughts on how to improve this form? I'm stuck.

It's important to give the context, because the next person participating here is experiencing the exact same situation, but you'll see a very different and direct response. That was not the true smile of delight. However, when you show people not only the still images of these, which they rate fifty-fifty (they can't guess which is delight and which is frustration), but even the videos, we find that we can build a computer algorithm that does better than people at recognising which video clips are from people in the situation I just showed you, where their smile is elicited with frustration, versus people in another situation I didn't show you, where they're watching giggling-baby videos. Even grumpy students usually start off with the giggling baby video not really smiling, but by the end of it they're usually cracking delighted smiles.
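To picture the kind of narrow, two-class problem being described, here is a minimal hypothetical sketch: a standard classifier trained on a few smile-dynamics features per video clip. The feature names and the synthetic numbers are assumptions made up for the sketch, not the features or data from the actual study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical per-clip features: e.g. smile onset speed, peak intensity,
# duration, and how often the smile rises and decays. Real systems use many
# more facial-dynamics features; these are placeholders to show the shape
# of the two-class problem.
rng = np.random.default_rng(0)
n = 200
X_frustrated = rng.normal([0.8, 0.6, 1.0, 3.0], 0.3, size=(n, 4))  # fast, short, repeated smiles
X_delighted  = rng.normal([0.3, 0.9, 4.0, 1.0], 0.3, size=(n, 4))  # slower, sustained smiles
X = np.vstack([X_frustrated, X_delighted])
y = np.array([0] * n + [1] * n)   # 0 = frustration-elicited, 1 = delight-elicited

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```

The point is only the shape of the task: two labelled conditions, a handful of dynamic features, and an ordinary classifier.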
Now, this does not mean that computers are better than people at recognising emotion. This is a very narrowly defined recognition task in a very limited situation, where it's a probability of being one of two classes and there's no context being interpreted. When humans are given the context and the whole situation, they're much better at discerning what's going on.

Now, when we take the ability of the machine to read facial, vocal and additional information, we can put that together in a lot of different ways. Some of the ways people are proposing involve things like just replacing people, and I'll show an example of that later. I want to show you a very different example here that's more about trying to help people. We started by wanting to help people on the autism spectrum; for this particular Ph.D. we were running out of time, so we decided we would just help MIT students, who could use help with some of these skills as well. Pretty much everybody can.

When people talk to each other, the majority of the information that's conveyed comes from the way we say things rather than the words we're actually saying. Eye contact, smiles, voice modulation, speaking rate, pauses and emphasis on certain words often add an extra layer of information in our interactions. Many of us want to improve these interaction skills but don't have the resources to do so. Imagine if you could practise your interaction skills with an automated system in the privacy of your own living room. A programme designed at the MIT Media Lab lets you do just that. Hi, I'm Mary. I'm looking forward to doing your interview. My automated conversation coach consists of a 3D character on a computer screen that can see and hear and make its own decisions based on its interaction with a person, and it works on a personal computer. Now, let's get started. Using a webcam, the system can analyse facial expressions. For example, it can measure where in the interaction you are smiling, and it can recognise your head gestures, such as a nod or a shake. The system also analyses your voice. It not only understands what you say, but how you say it. Using real-time speech recognition and prosody analysis, it can capture the nonverbal nuances of conversations and display them in an intuitive format. When you're done, it gives you a summary of the information: when you smiled, how fast you spoke and so on, and it can show how these measures change over multiple sessions. It even allows you to watch the video of your interactions with various measures of your behaviour displayed alongside the video, such as when you smile, how the volume of your voice rises and falls, and what words you emphasise.
It even shows when your attention wanders. Can't find you... there you are. You were saying? In a study with 90 MIT undergraduates, the subjects went through simulated job interviews before and after receiving this training. Those who got the feedback from this automated system were rated as better candidates for the job than those who did not. Besides job interviews, the researchers say the system could help with public speaking, dating, learning languages, or helping people who have difficulties in social communications.

Now, you might notice that was a little bit dated. One can make much more human-looking avatars now, which you might want to do if you want to intimidate people a little bit more. In most of our work we've deliberately chosen not to do that, so as not to deceive people about the abilities of the A.I.

When I was working with people on the autism spectrum and trying to understand more about what they really needed, at one point one of them, a woman who had become a friend, who was not speaking but was typing, communicated to me that I had it all wrong. She says, Roz, what you're doing is all wrong. I was like, oh, great, you know, what am I doing wrong? And she said, you're not recognising my emotions properly. I don't need help recognising other people's emotions; I need you to recognise my emotions. And I realised the literature all focussed on what people on the spectrum couldn't do, not on what we couldn't do for them, right? How we were not pulling up our end of the bargain. I said, gee, I feel terrible, what am I not recognising? And she said, it's not just you, it's everybody: they're not recognising our emotions, and in particular they're not recognising our stress.

I realised that in the lab we had built physiological measures that corresponded to sympathetic and parasympathetic nervous system stress; these had actually been well known for some time in the physiology literature. What was missing was the ability to take them out of the lab. And so we started to build sensors that could embed these physiological measurements, such as the electrical activity of the skin, skin conductance. You know the notion where your palms get sweaty: it goes up with perspiration. But it turns out it also goes up, even when your hands don't feel sweaty, with very subtle changes in stress. In fact, it's one of the key signals still used in lie detection, although it's not an accurate lie detector on its own; it requires, again, a lot of contextual interpretation. So we built this sweatband here with sensors in it.
There's another version of the sweatband. This particular version, and another version, were borrowed by a student one day, coming into my office right before winter break: Professor Picard, could I please borrow a couple of your sensors? My little brother has autism, he can't talk, and I want to see what's stressing him out. I said, sure. In fact, don't just take one, take two; they often break. Do you need to borrow a soldering iron? He said, no, I have a soldering iron. Great, an MIT student can fix them. So he takes the sensors home and puts them on his little brother.

I'm looking at the data back at MIT, and I notice that the boy's data was usually pretty calm looking, but there was one gigantic peak that I couldn't explain. In fact, it was on one wrist and not the other, and I couldn't figure out how you could be stressed on one side of your body and not the other. So I figured the sensors must be broken, but I tried all kinds of things to replicate it. I'm an electrical engineer by training, and nothing replicated the weird data. So I resorted to very old-fashioned debugging: I picked up the phone and called the student at home on vacation. Hi, how's your vacation? How's your little brother? Hey, any idea what happened to him here? And I gave the exact date and time in the data, and he said, I don't know, I'll check the diary. The diary? He just keeps a diary? Huh. Like, what are the odds that some young man has written this stuff down, right? He comes back, he has the exact record, and he says that was right before he had a grand mal seizure.

That led me to connect with Dr. Joe Madsen and colleagues at Boston Children's Hospital and to start measuring this electrodermal activity data in the epilepsy monitoring unit, where we could get gold-standard 24/7 video EEG, electrocardiogram and electrodermal activity. And we found that 100 per cent of the grand mal seizures we recorded had this significant skin conductance response, more than two standard deviations above the pre-seizure period. Together with movement, in Ming-Zher Poh's work we were able to build a machine learning algorithm that could run, lightweight, in a wearable device and reliably detect grand mal seizures. Now, it was not able to forecast them at that time. There has just recently been some very nice work done by the Mayo Clinic, using other devices that have come out of our lab, and they are doing forecasting now using these same data together with some additional data in our devices.
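For readers who want to see the shape of such a detector, here is a greatly simplified sketch, assuming you already have an electrodermal activity (EDA) trace and a movement-intensity trace sampled at the same rate. The thresholds, window length and combination rule are illustrative assumptions only, not the validated algorithm that went through regulatory review.

```python
import numpy as np

def detect_candidate_seizure(eda, movement, fs=4, baseline_s=300,
                             eda_sd_thresh=2.0, move_thresh=2.5):
    """Flag samples where the EDA value rises more than `eda_sd_thresh`
    standard deviations above a preceding baseline window AND movement
    intensity is simultaneously high. Illustrative only."""
    eda = np.asarray(eda, dtype=float)
    movement = np.asarray(movement, dtype=float)
    w = baseline_s * fs                       # samples in the baseline window
    flags = np.zeros(len(eda), dtype=bool)
    for t in range(w, len(eda)):
        base = eda[t - w:t]
        mu, sd = base.mean(), base.std()
        if sd > 0 and eda[t] > mu + eda_sd_thresh * sd and movement[t] > move_thresh:
            flags[t] = True
    return flags

# Toy usage with synthetic data: quiet baseline, then a joint EDA surge and
# vigorous movement that the rule should flag.
rng = np.random.default_rng(1)
eda = rng.normal(0.4, 0.02, 4000)
movement = rng.normal(0.5, 0.1, 4000)
eda[3000:3200] += 1.5        # simulated electrodermal surge
movement[3000:3200] += 3.0   # simulated convulsive movement
print(detect_candidate_seizure(eda, movement).any())
```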
I didn't know anything about epilepsy when I started this. You know, I was already this sort of computer-vision person turned weird affective-computing research person, and I wasn't really that interested in learning about epilepsy until I started learning more about what was happening. I'm curious: how many of you know somebody with epilepsy? Well, I get many more hands up in the EU than in the US. One in 26 people in America will have epilepsy. Those of you who did not raise your hand, if you go and ask your network, you probably do know somebody and they just haven't told you. How many of you have heard of SUDEP, sudden unexpected death in epilepsy? All right, far fewer. This needs to be communicated more. Sudden crib death, sudden infant death syndrome, takes fewer lives every year in the US than SUDEP. And yet, you know, everybody's a baby at some point, but only one in twenty-six will have epilepsy. There's a death from this particular unknown cause of death in epilepsy every seven to nine minutes, and tragically, there will be several during this talk.

But we now know, from what we've been learning in the last five years, that it's deadliest if you're alone; that in fact one of the key mechanisms is that you have a seizure, this unusual electrical activity, and we think it spreads to a part of the brain that can turn off your breathing. But the breathing can be turned back on. You're still capable of breathing if somebody stimulates you. So not in a hundred per cent of cases, but in many cases, when you stimulate the person they will start breathing again. And this could be why it is so much less likely that there's a SUDEP when somebody is there, because the first thing they usually do is turn you on your side and ask if you're OK, and they stimulate you.

The hard work of the team at Empatica was to commercialise this. I have enormous respect now for every company for all the work they do to take something from a research prototype to something that not only works but can get through FDA approval. CE medical was easy compared to the FDA. And this is now out there being used, with A.I. running on board, to help alert, to bring a person to help. The first time I got an email like this, I turned white, and I think my skin conductance went through the roof. The mom described that she was in the shower. Her phone was connected to her daughter's smartwatch, detecting seizures. She put the phone on the counter, saw it go off while she was in the shower, ran out of the shower to her daughter's bedroom, and found her face down in bed, blue and not breathing.
Flipped her over, stimulated her, and her daughter took a breath, and another breath, and turned a normal colour again. The family wanted to be shown here; they are celebrating one of many happy occasions. Unfortunately, I've talked to lots of families who were never told about the possibility of SUDEP. So I'm going to ask all of you, if you remember anything when you leave here today, to make sure that you talk about this, because the medical professionals that are learning about it are now officially recommending talking about it, but they're about ten years behind in doing so. And now we know that letting people know that seizures can be deadly, and that they should have somebody there who might be able to help them with first aid or stimulation, could save their life.

As I was reflecting on this, and somebody was saying, oh, this device saved a life, I'm like, no, the device didn't save a life. It was a person who got there who saved the life. The device was helping connect people, and I said the power is when it connects us to each other. But you might join me in thinking of some counter-examples of that, right? Like, how many of you have seen The Social Dilemma? You know, this kind of takedown of a lot of the social media industry, groups like Facebook, where the focus has been on connecting people, right? So it doesn't seem like the power is just to connect us; it's the power to connect us in ways that help people flourish. And this is another dimension that I think has been neglected in a lot of what we've built, perhaps in part because it's really been hard to measure.

Now, launching these devices (you don't need to read all the small text) led to a lot of really cool science findings that I wasn't expecting. I thought I was going on a detour with epilepsy: you know, am I ever going to get back to affective computing? Well, it turns out we ran into the regions of the brain most likely involved in SUDEP, and they included things like the amygdala, which are the core regions we've been studying for emotion, core regions involved in anxiety and depression and mood disorders. And suddenly the epileptologists we were working with had stuck direct electrodes into those regions, so now we could actually monitor those regions of the brain directly, without having to wait for an artificial scanning environment, jointly with the other signals we could get noninvasively from the body. And that was surprising and exciting.

And I remembered something my boss at the Media Lab had asked me decades earlier, back when I was just working on computer vision, trying to make machines smarter. He had said to me:
Roz, when are you going to do something useful? I was like, OK, Walter, what do you want? He said, I need a mood ring that tells me my wife's mood before I go home.

Now, at the time, you know, I grew up with mood rings: they were these stupid temperature sensors that changed colours, and somebody probably made a huge amount of money, and probably I could save time not having to write research grants if I would do something like that. But at MIT we like to do things that are hard and important, and a temperature sensor was trivial. Then I ran into these brain science findings about the connections between these deep regions of the brain and what we were picking up on the periphery, and I started to learn about the growing problem in mental health.

This data ends at 2017 here; it has gone up, up, up with the pandemic. What you see in this graph, from very nice work by Twenge et al., is the huge increase, from ages 16 to 25 in particular, since the iPhone came out. I'm not saying that's the cause; we can't go back and run a causal study there. However, there are causal studies with social media now, not just correlation studies but causal studies where you do a randomised controlled trial. One study was done by Hunt et al.: you ask one group to limit their use of social media to ten minutes a day, versus regular use where they just monitor what they're doing, which itself leads to improvements. And the group that limited their use to ten minutes a day showed significantly better improvements in mental health. So we know that behaviour is connected to mental health, and I could say a lot more about that, but we have to get on to other topics here.

Briefly, what we can now do with the A.I. is collect data from lots of modalities: not just wearable sensors, not just what you're saying on your phone or who you're connected to, but regular surveys (we might ask you information about your sleep; sleep regularity is looking very important) and more. We put all of that together, and we get gold-standard labels from psychiatrists. In this case, what you see across the bottom is that the higher the number, the higher the psychiatrist's rating on your Hamilton depression scale: you're more depressed if you have a higher score. So these numbers are given by psychiatrists. The numbers on the vertical axis are given by the passive data only; we're not having to interrupt and ask you any questions. It's passive data from your smartphone use and your wearable, plus one initial value from your first visit with the psychiatrist. The correlation here is as good as the doctors are with each other, and this was early work.
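To make that comparison concrete: the passively estimated scores can be compared with the clinicians' Hamilton ratings using an ordinary correlation coefficient, benchmarked against the correlation between two clinicians rating the same patients. The sketch below uses made-up synthetic numbers purely to show the calculation, not the study's data.

```python
import numpy as np

# Synthetic stand-in data: two clinicians' Hamilton Depression Rating Scale
# scores and a model estimate built from passive phone/wearable features.
# Purely illustrative numbers.
rng = np.random.default_rng(2)
true_severity = rng.uniform(0, 30, 80)
clinician_a = true_severity + rng.normal(0, 3, 80)
clinician_b = true_severity + rng.normal(0, 3, 80)
model_estimate = true_severity + rng.normal(0, 3.5, 80)

r_raters = np.corrcoef(clinician_a, clinician_b)[0, 1]    # inter-rater benchmark
r_model = np.corrcoef(clinician_a, model_estimate)[0, 1]  # model vs. clinician
print(f"rater-vs-rater r = {r_raters:.2f}, model-vs-rater r = {r_model:.2f}")
```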
Since then, we've gotten data that's even harder, with more variability in it, and the correlations vary. Here it was actually pretty hard to beat the patient baseline if they weren't changing a whole lot; since then, we've got results that are beating patient baselines too. It's still an active area where we need to get a lot more data before we can start to really understand the different categories of these data, and also what kinds of treatments might work best.

We don't want to just treat people, though. What we'd like to do is things that haven't been able to be done in medicine before. See, right now, if this vertical axis represents your well-being (hundreds of numbers are rolled up in here; by the way, this is my one concept diagram, not real data), the horizontal axis is time. Let's say that you're applying to schools, you get into Oxford, you're really happy: great positive well-being there. And then, unfortunately, major stressors hit. They always do, and everybody takes a dip. And then we see the difference between the people who are exhibiting fragile behaviour versus those who are resilient and can bounce back. And if you're in trouble, and especially if you're in trouble and you're down on the slope right now, I strongly encourage you to get help, to reach out. The further down you are, sometimes the harder it is to help, but there's hope: you can always come back and recover. What we'd like to do is help people detect if they're on the red line as early as possible, and also give personalised, evidence-based help to move to the blue line.

So that's active research right now. It tends to involve digital therapeutics and things that come to you on your phone. We've built games and other tools to try to support people. A big problem is that people don't stay engaged. So even with our latest game, which is more than twice as effective as the published average of the best digital mental health interventions, and which is even in the top 15 per cent of games for retention: by day 30, we're down to six point six per cent of the people who got the app actually using it. And for a therapy where you need to use a little bit of it every day, it's really hard to get that retention.

Well, what about interactions with people? I'm reminded of when my next-door neighbour said, I hate to exercise, but I'll show up for an appointment with a person; I don't want to let them down.
There is evidence that relational agents certainly help people engage and can hold you longer in therapy. Tim Bickmore, who did his Ph.D. on this with me, has continued and built lots of nice work in this area. Here's a recent paper from September 2020 where the relational agent, even with this very cartoony type of face, not pretending to be a real human, just being kind of a dummy, was able to support women in improving their health.

All right. What we see, though, in an effort to engage, and, well, because of the cameras, and also because it's like climbing Everest (it's there and you want to see if you can do it; and for an A.I. person, we love a hard challenge), is work like the example I'm going to show you next, where I'll let it speak for itself.

The Hong Kong team behind celebrity humanoid robot Sophia is launching a new prototype, Grace, targeted at the health care market and designed to interact with the elderly and those isolated by the COVID-19 pandemic. Dressed in a blue nurse's uniform, Grace has Asian features, collar-length brown hair and a thermal camera in her chest to take your temperature and measure your responsiveness. She uses artificial intelligence to diagnose a patient and can speak English, Mandarin and Cantonese. Hello, everybody. I'm Sophia. Let me introduce you to my sister, Grace. Come over here. Hi, Grace. Thank you, Sophia. Hello, everybody. I am Grace. I am built by Hanson Robotics for Awakening Health. I can do all kinds of things for elderly people. I can visit with people and brighten their day with social stimulation, entertain and help guide exercise, but I can also do talk therapy, take vital readings and help health care providers assess their health and deliver treatments. Hi, my friend. Nice to see you. OK, great. So while we talk, I will take your temperature reading and pulse with this little thermal camera on my chest. See, you are thirty-six point six degrees Celsius; by the way, no risk of COVID. Also, I note that you are responsive and aware, which is good, and I may share that important data back to the doctors and nurses at the institution. Her resemblance to a health care professional and her capacity for social interaction are aimed at relieving the burden on frontline hospital staff overwhelmed during the pandemic, said founder David Hanson. He designed Grace to look professional in a health care setting, so that she would resemble the sort of human-like appearance of health care staff.
And so then this would facilitate more natural interactions. A human-like appearance facilitates trust and facilitates natural engagement, because we're wired for human face-to-face interactions; that's just the way that human beings are. And so we gave her a face that would be familiar to people, comforting to people, and also language abilities that are comforting: so we now have Mandarin and Cantonese capabilities for the Grace and Sophia robot platforms. Hey, Grace, why don't you look at my face? Thank you. And now I'm going to show you a smile, and let's see what your smile looks like. OK. Yeah. What about sad? Oh, yes, reacting a little sad, as if I feel pain. And then maybe, like, imagine a big loud noise happens: fear, and so on. So with the Grace robot we are simulating over 48 major muscles in the face, and we have 36 motors.

I'm going to pause there. You're invited to go online and watch the whole thing. It is an engineering marvel, the kind of A.I. and robotics that goes into making something like this. How many of you feel really positive and excited about this direction? Some, OK. All right. How many of you feel kind of unnerved or worried about this direction? All right, that has the majority here.

I want to just point out, for some of you who may be less familiar with how the technology works, that while some of the A.I. is a bit like a chatbot, the demos you'll find online with these robots are largely scripted when you see the really most impressive presentations. They are not conscious, despite looking a little bit alive, despite the ability to imitate some movements. While they display expressions on their face and with gestures, and can do that with tone of voice, they do not have feelings. And they will, I think, be with us whether you like them or not. You know, you might say you hate these things or you like these things; there's this flow that I think means they're not going to go away. There is a kind of enthralling challenge, and these are just going to keep being designed with abilities inspired by people. That's my prediction.

Now, an even older version, built by Hiroshi Ishiguro, was built to look like him. His original idea, and this was even before the pandemic, was to send the robot off to give his talk. So, like, my avatar could be up here, I could be sitting back in Boston controlling it, right, and kind of look like I'm there. It wouldn't be anywhere near as much fun as being here in person, as we all now know from all of our Zoom meetings and so forth. But let's go back to this question.
If we could make it look and act as if its emotions were human, is this the last thing that separates humans from machines? I've maintained from the beginning that we don't know how to do consciousness or feelings, and my answer to the theologian at the zoo was no. We could discuss more detail later, but some of the key elements are these: we can make it look lifelike, but it is not alive nor conscious, and many other things are missing. But it's not just about these functions that are missing, and this is where I want to transition a little bit to some thoughts about human values.

I think that we are much more than our functions. We are much more than what we do. Now, there are probably people here who are experts in ontology, and there are at least two kinds of ontology here: this is not the linguistic-category kind, but the branch of metaphysics that deals with the nature of being. Again, I'm an engineer; my doctorate is in science, not philosophy. So I'm going to appeal to some more famous philosophers for a moment, for a very quick summary. Socrates, on a famous T-shirt here: to do is to be. Plato: to be is to do. And my favourite, Frank Sinatra: do be do be do. Obviously a great oversimplification, but there is a lot of wonderful discussion out there about the difference between doing and being.

My message for you is that I think we are ontologically equivalent to whom we beget. I have three sons. They are vastly better than me at some things. If you just compare functions, you may find that someone you beget has a much greater function than you at something; you may have a greater function than them at something. But we are ontologically equal. What we make is different. Even if it looks like us and has a function like us, if we talk about this kind of ontology, then we are ontologically superior to these things we make.

In building A.I. to help humans flourish, I think we need a higher standard of expressing human worth than abilities or functions. I'm going to propose some criteria; I'm going to propose a higher standard. Some criteria include acknowledging the equal worth of every living person. We do see increasing emphasis, certainly in the US, and I'm seeing a bit more globally, on trying to reduce different kinds of discrimination. I think we also need independence from functions or abilities, and as I get older and older I appreciate this one more, right, because our abilities change over time. And certainly, if you're on the operating table, your abilities have changed for a moment, but you shouldn't lose your human worth.
I don't think the standard should be limited by materialism, naturalism and scientism, which are somewhat myopic ways of describing the world. I also don't think it should be something that could be instated or removed by somebody in power over you, a person or a government with power over you. And last, the easiest criterion: it should be bigger than some professor's idea, bigger than anything somebody like me would just propose.

The concept that I propose, which meets all of these criteria, is the concept known in Latin as Imago Dei. For fun, I asked one of the most popular guys right now, GPT-3: what is the Imago Dei? It's kind of small here. I typed in, what is the Imago Dei? I clicked submit, and it says: the Imago Dei is the biblical concept that humans are created in the image of God. This means that humans have the ability to think, feel and create, just as God does. Note that it also elicited a trigger warning: sensitive content. I think I need to make that clear for this talk as well. Note also that the second line the A.I. has pulled from the web in coming up with its answer is focussed only on function, OK? It's going on things we do, not on our state of being.

Now, imago is literally Latin for image, and any of you with an engineering or simple maths background knows that we can project a high-dimensional object down; here we just show a 3D object projected down to a two-dimensional image. But we could also do this with other objects, as we do in machine learning all the time, with thousands of dimensions, millions of dimensions, billions of dimensions, projected down to just a few, so that the image itself is actually leaving out most of what the original, the thing it is the image of, contains. So I want to be clear that this image is not visual similarity. This is not us looking like God. It's some kind of image that we may not fully understand.
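For the engineers in the room, a tiny sketch of that projection point, assuming nothing beyond standard linear algebra: a random linear map sends a ten-thousand-dimensional "object" down to a two-number "image", and because the map is not invertible, the image necessarily discards most of what it is an image of. The dimensions and the matrix are arbitrary choices made for illustration.

```python
import numpy as np

# A 10,000-dimensional "object" projected down to a 2-number "image".
rng = np.random.default_rng(3)
x = rng.normal(size=10_000)          # the high-dimensional original
P = rng.normal(size=(2, 10_000))     # a 2 x 10,000 projection
image = P @ x

# Infinitely many different originals share this image: any change that is
# invisible to P (orthogonal to its rows) leaves the image unchanged.
Q, _ = np.linalg.qr(P.T)             # orthonormal basis for what P can "see"
v = rng.normal(size=10_000)
hidden_change = v - Q @ (Q.T @ v)    # a large change P cannot see
print(np.allclose(P @ (x + hidden_change), image, atol=1e-6))  # True
```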
Being here where Tolkien lived and was a professor, I want to point out, too, that we can make things that may have something like imago humano. Here, Tolkien made a character, Bilbo Baggins, that has some features like its maker; it is an author, like its maker. The qualities of the Imago Dei do not give arrogance and superiority. There is humility, there is mercy, love, grace and forgiveness. There is equality: this is a free gift given to all, not given more to those who have greater functions. And it is an inestimable worth. C.S. Lewis, another local great, wrote: there are no ordinary people; you have never talked to a mere mortal.

Now, some of you who are not religious (and I used to be an atheist, so I sympathise with you) will say, is it exclusive to people who are religious, right? This is taught in Genesis and in Jewish traditions and Christian traditions and some Muslim traditions. No, it is not restricted to people who practise any of these faiths. It is a concept given to everyone. It's interesting, too, to look at the lived experience of people who maybe are not religious. I was intrigued by an account, written up in an interview with Peter Singer, the great philosopher and proponent of utilitarianism, talking about when his mother got Alzheimer's and had serious dementia. He was confronted with his own teachings about how to treat her, and in fact it was like, yeah, you know, you don't have to take care of her now; you could follow your teachings, which were basically that she didn't have the kind of valuable human life that she had before. And yet, when it came to those final days, he chose to treat her as if she had the Imago Dei, as if she had dignity, against what his logic and his teachings had been saying to everyone. In doing so, I think he also showed the Imago Dei.

It's interesting, too, to look at things like the Universal Declaration of Human Rights, affirmed by more than 50 states in 1948, affirming that all human beings are born free and equal in dignity and rights; they are endowed with reason and conscience and should act towards one another in a spirit of brotherhood. So hopefully we're not going to get much dissension on this. But how many of you, at the start of your Ph.D. or your research, have thought about this aspect while you're designing your work? I know when I began as a Ph.D. student, I was just looking for something cool and interesting and hard, right? When you're at a place like Oxford or MIT, you want to do something super hard, super impressive, something people say can't be done. Some other people might just be over there like, just enough, I just want to get my degree, right? People want to get published, they want to get on with their career. And so the optimality criterion is pretty much: drive it as far as you can to the right and still graduate in a reasonable amount of time. Thinking about the Imago Dei, I think, calls us not just to a higher standard about what is human worth, but calls us, I hope, to a higher standard when we're choosing problems to work on in A.I. You can simply try to picture it this way: we add an extra dimension, and of course this makes optimisation harder, but our optimality criterion could aim for the upper right here.
451 00:51:10,700 --> 00:51:14,270 At the MIT Media Lab, we've been challenging ourselves to try to do this, 452 00:51:14,270 --> 00:51:19,220 which means we have to learn a lot about things that aren't just engineering and science. 453 00:51:19,220 --> 00:51:24,230 We're bringing the principles of science and engineering to understanding people and what helps people flourish. 454 00:51:24,230 --> 00:51:28,760 We've borrowed from people who've focussed on flourishing and positive psychology here. 455 00:51:28,760 --> 00:51:34,700 From them, we've learnt, for example, that it's not enough to just try to make people happy, right? 456 00:51:34,700 --> 00:51:38,300 Happiness is just one of many emotions. All of the emotions are valuable. 457 00:51:38,300 --> 00:51:42,470 It's very important sometimes to be sad. If your friends are sad, be sad with them. 458 00:51:42,470 --> 00:51:45,470 In fact, probably the most important 459 00:51:45,470 --> 00:51:53,870 of the pillars I've seen described in components of flourishing is the one of relationships, not being focussed on yourself, 460 00:51:53,870 --> 00:52:00,380 but being able to enter into a relationship where you're focussed on another and developing that healthy relationship. 461 00:52:00,380 --> 00:52:05,120 During the pandemic, that suffered quite a bit and is probably the thing to prioritise. 462 00:52:05,120 --> 00:52:10,550 In particular, if you're struggling with mental health, call an old friend. At places like Oxford, 463 00:52:10,550 --> 00:52:16,640 we don't have to worry about engagement and achievement much because people are hyper achievers and usually love what they're doing and are engaged. 464 00:52:16,640 --> 00:52:26,750 But these are valuable contributors to human flourishing, and meaning has probably not been addressed as much as it should be in this literature. 465 00:52:26,750 --> 00:52:31,610 Usually, things like religion are stuffed in under meaning as something very important. 466 00:52:31,610 --> 00:52:38,960 In fact, there's some really nice causal inference that's been done on large studies of nurses. 467 00:52:38,960 --> 00:52:50,380 Work by Tyler VanderWeele at Harvard, showing that people with regular religious practice have a lot of better health outcomes. 468 00:52:50,380 --> 00:52:58,360 This particular theory I've shown here, PERMA, is attributed to Marty Seligman. 469 00:52:58,360 --> 00:53:05,050 All right, let me wrap up here with a quick recap of what we've been through; it's been quite a journey. 470 00:53:05,050 --> 00:53:12,190 A.I. is increasingly acquiring skills that make it look like it's developing emotional intelligence. 471 00:53:12,190 --> 00:53:19,390 We still have a long way to go. I'm not confident we're going to be able to solve some of the problems, especially with consciousness, 472 00:53:19,390 --> 00:53:27,140 but we can make it look pretty good and that, to some people, is enough. 473 00:53:27,140 --> 00:53:37,250 It will increasingly take on human-like appearance, and we will see this in ways that may be very unnerving and disturbing. 474 00:53:37,250 --> 00:53:42,020 They may fail in the business sense. They may succeed in some narrow areas. 475 00:53:42,020 --> 00:53:47,240 There certainly are some areas where it appears to be succeeding. 476 00:53:47,240 --> 00:53:52,430 I think as we reflect, though, more on what it means to be human and on our human values, 477 00:53:52,430 --> 00:54:00,160 we've got to recognise we are not simply our abilities and our functions.
478 00:54:00,160 --> 00:54:11,880 I think we should start leading the community in thinking more about the higher values, the higher human values, and looking for a higher standard. 479 00:54:11,880 --> 00:54:23,070 The Imago Dei is a higher standard, and it invites those of us shaping A.I. to recognise that people are more than what they do, 480 00:54:23,070 --> 00:54:29,230 that we can look at these other aspects and help people try to flourish. 481 00:54:29,230 --> 00:54:39,580 And finally, this future requires a lot more than A.I. experts in order to understand what it is to help people flourish and to craft it. 482 00:54:39,580 --> 00:54:47,320 We really need to spend time understanding one another and we need all kinds of people, not just engineers and scientists: artists, 483 00:54:47,320 --> 00:54:58,150 designers, entertainers, all kinds of people who understand people, to join not just in the vision, but through every stage of this process. 484 00:54:58,150 --> 00:55:04,390 Even new business models, I think, need to be developed. New regulations probably need to be developed. 485 00:55:04,390 --> 00:55:11,590 Everyone in society, I think, needs to speak out and have their voice be a part of the future that we will build, so that 486 00:55:11,590 --> 00:55:19,930 hopefully we build a future where human life with A.I. is one where all humans can flourish. 487 00:55:19,930 --> 00:55:40,340 Thank you very much. Super, well, I have now invited two colleagues from the university to respond to Rosalind's lecture. 488 00:55:40,340 --> 00:55:46,190 And our first respondent is a face that will be very familiar to many in the audience this afternoon. 489 00:55:46,190 --> 00:55:55,970 Professor Lionel Tarassenko is an engineer and also a leading expert in the application of signal processing and machine learning to health care. 490 00:55:55,970 --> 00:56:04,460 He was previously Head of the Department of Engineering, and in addition to his academic roles here at the university, 491 00:56:04,460 --> 00:56:10,490 he's also a Pro-Vice-Chancellor and first President of Reuben College. Lionel, 492 00:56:10,490 --> 00:56:16,310 would you like to begin? Thank you very much. Apologies for my voice. 493 00:56:16,310 --> 00:56:24,740 Nothing to do with COVID, I do assure you. I have the evidence on my smartphone to show a negative lateral flow test. 494 00:56:24,740 --> 00:56:26,660 It's an impossible task to respond, 495 00:56:26,660 --> 00:56:35,210 and I'm sure you will all agree, because we've not had one Tanner Lecture, but about three Tanner Lectures, and they've all been equally wonderful. 496 00:56:35,210 --> 00:56:44,810 So I'm going to attempt to see if there are links between each of the Tanner Lectures which you gave tonight, 497 00:56:44,810 --> 00:56:53,270 these wonderful ways of exploring some really important aspects of research done in 498 00:56:53,270 --> 00:56:58,100 places like MIT, in this university and many other leading universities, 499 00:56:58,100 --> 00:57:01,490 and indeed many of the Big Tech companies. 500 00:57:01,490 --> 00:57:09,200 You started by showing us how computer vision could interpret human emotions, and the first thing I would say, 501 00:57:09,200 --> 00:57:19,610 which is very important, is that what Professor Picard showed is that even if you are analysing human interaction using speech, 502 00:57:19,610 --> 00:57:31,870 speech is not enough. And what you need are multimodal ways of analysing the interaction when we are talking to each other.
503 00:57:31,870 --> 00:57:38,520 So in addition to the speech, really to duplicate the way human beings communicate, 504 00:57:38,520 --> 00:57:46,080 we need not only to understand, through the auditory cortex, the meaning of the pressure waves impinging upon it, 505 00:57:46,080 --> 00:57:52,240 but we also need, as you showed very clearly, and may I call you Roz, Professor Picard? 506 00:57:52,240 --> 00:57:56,430 Roz. Roz, right? We also need the facial movements. 507 00:57:56,430 --> 00:58:06,930 It was delightful to see what the true smile of delight looks like, what the look of frustration looks like, and that is very important. 508 00:58:06,930 --> 00:58:12,900 And then you added a third modality. So speech, facial movements from computer vision algorithms; 509 00:58:12,900 --> 00:58:19,830 the third one was stress levels through recording electrodermal activity. 510 00:58:19,830 --> 00:58:21,900 And then the second thing that you showed us, 511 00:58:21,900 --> 00:58:32,220 which I think is very important for all the graduate students here from Linacre and Reuben, is that Roz was very honest with us. 512 00:58:32,220 --> 00:58:42,180 Scientific research, for those of us who have got our best years behind us, and I include myself in that, is not a linear process. 513 00:58:42,180 --> 00:58:44,700 There's a huge amount of serendipity. 514 00:58:44,700 --> 00:58:53,670 And it was wonderful to see one of the leading scientists in the field of AI acknowledge that many of the things you've done, 515 00:58:53,670 --> 00:58:59,130 which have been some of your best papers and your best work, have been serendipitous detours. 516 00:58:59,130 --> 00:59:03,300 And you showed us that with epilepsy and SUDEP, 517 00:59:03,300 --> 00:59:12,990 'Sudden Unexpected Death in Epilepsy', and from the work that you had been doing previously, including the electrodermal monitoring. 518 00:59:12,990 --> 00:59:21,060 And I would argue that for those of you who are still at the graduate school level, yes, you can have an idea of where you're going to be going. 519 00:59:21,060 --> 00:59:25,200 But please don't miss these detours. They're very important. 520 00:59:25,200 --> 00:59:30,480 They may lead you to the best work that you'll ever do. And in this case, I would argue it is 521 00:59:30,480 --> 00:59:36,540 some of the work for which Roz, Professor Picard, is well known throughout the world, 522 00:59:36,540 --> 00:59:44,190 and it has had a huge impact on a subset of patients, potentially improving their outcomes. 523 00:59:44,190 --> 00:59:53,040 And you showed that as well with wearable devices; and again, the principle of using multimodal data through mobile surveys, wearable data, 524 00:59:53,040 --> 01:00:00,270 monitoring activity, monitoring sleep and so on, is perhaps opening a door to understanding depression. 525 01:00:00,270 --> 01:00:06,360 So multimodality, serendipity, and thirdly, and I absolutely love this, 526 01:00:06,360 --> 01:00:15,900 there are a few of my own graduate students and some of the post-docs now in the theatre around the audience, working on monitoring, 527 01:00:15,900 --> 01:00:21,360 improving monitoring through wearables, improving monitoring through video, which we do in my lab as well. 528 01:00:21,360 --> 01:00:26,460 I always ask my students and post-docs: great, you can monitor this. 529 01:00:26,460 --> 01:00:32,040 What are you going to do with your ability to monitor this? I ask my students the same question: you monitor people's activities.
530 01:00:32,040 --> 01:00:37,440 How do we turn that into an intervention? How do we go about that? That's the question I always ask my students. 531 01:00:37,440 --> 01:00:39,000 Fantastic. Great results. 532 01:00:39,000 --> 01:00:48,720 How are you going to use that to effect, to close the loop, if you will, to have an intervention that will improve the life of someone? 533 01:00:48,720 --> 01:00:53,610 And there was a great picture then showing that as you try to do that, 534 01:00:53,610 --> 01:00:58,170 you discover that actually the monitoring, hard as it is, is the easiest part of it. 535 01:00:58,170 --> 01:01:06,540 An intervention that lasts and continues to have a positive effect on a patient and so on is extremely hard to achieve. 536 01:01:06,540 --> 01:01:14,160 And that part of the talk was really very clever. I've never seen this link before, because what you said, and it's absolutely true, 537 01:01:14,160 --> 01:01:20,590 is that the challenge is to maintain the resilience of long-term engagement with the intervention. 538 01:01:20,590 --> 01:01:25,720 And we need to find ways of doing this, 539 01:01:25,720 --> 01:01:37,330 and you showed us Grace, and Grace, as a human-like robot deployed in the health care context, was potentially 540 01:01:37,330 --> 01:01:41,770 the way to maintain that long-term engagement. 541 01:01:41,770 --> 01:01:49,090 But at that point, and this was where I said there were multiple lectures, came, I think, your third lecture. 542 01:01:49,090 --> 01:01:55,780 Is it sufficient to have a human-like Grace, or do we need a real Grace? 543 01:01:55,780 --> 01:02:02,790 That is, what is the difference between a human-like Grace and a human Grace? I don't know if there are any people in the audience called Grace. 544 01:02:02,790 --> 01:02:12,520 But that's where I'm trying to draw the differences, and that's where you really spent the third lecture: 545 01:02:12,520 --> 01:02:16,450 on Imago Dei. How do we decide to pronounce it? As a vice chancellor, 546 01:02:16,450 --> 01:02:22,360 you have to think very hard about how you pronounce Latin, and the answer is nobody knows, 547 01:02:22,360 --> 01:02:26,020 and you do what you think is right. My wife's a mediaeval historian. 548 01:02:26,020 --> 01:02:30,820 I've talked about it with her several times, and there isn't a set way. 549 01:02:30,820 --> 01:02:36,580 So you pronounce it as you think right, even in Oxford. Imago Dei. 550 01:02:36,580 --> 01:02:47,350 And what you attempted to do, and I think this is very hard, is to explain the concept of image in the way that the image of God, 551 01:02:47,350 --> 01:02:57,640 I believe, is to be understood. If the picture is of a multidimensional God, how is that projected down to two or three dimensions? 552 01:02:57,640 --> 01:03:09,940 And the key idea here is, if God, a higher being, or however you want to describe it, is in n dimensions, where 'n' is a large number, 553 01:03:09,940 --> 01:03:16,600 the projection then, and because you were using a flat wall you had to use two, or even three, 554 01:03:16,600 --> 01:03:23,710 isn't simply the idea of removing n minus 2 or n minus 3 dimensions to get down to two or three. 555 01:03:23,710 --> 01:03:29,800 It is actually combining all n and forming that mapping down to two or three. 556 01:03:29,800 --> 01:03:33,700 And that is really difficult to understand, but it is incredibly important.
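[Editor's illustration, not part of the discussion.] The distinction Lionel draws here, deleting all but two coordinates versus constructing a mapping in which every one of the n dimensions contributes, can be sketched with a small Python example. The data, dimensions and the use of a PCA-style projection are assumptions chosen purely to illustrate the contrast.

```python
# Sketch (added for illustration): deleting coordinates vs. constructing a mapping.
import numpy as np

rng = np.random.default_rng(1)
n, d = 300, 50                      # 300 samples in a 50-dimensional space
X = rng.normal(size=(n, d))
Xc = X - X.mean(axis=0)             # centre the data

# Option 1: "delete n minus 2 dimensions" -- keep only the first two coordinates.
deleted = Xc[:, :2]

# Option 2: construct a mapping (PCA-style): project onto the top-2 singular directions.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
mapping = Vt[:2]                    # each row is a weighted mix of ALL 50 coordinates
pca_2d = Xc @ mapping.T

# Every original dimension contributes to the 2-D image under the constructed mapping:
print(np.count_nonzero(mapping[0]))   # 50 -- all coordinates carry non-zero weight
print(deleted.shape, pca_2d.shape)    # (300, 2) (300, 2)
```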
557 01:03:33,700 --> 01:03:39,160 It is something that, if you've ever done data visualisation and used machine learning algorithms, 558 01:03:39,160 --> 01:03:48,310 you will understand; it's something that we do in computer science. How do we reduce a very high dimensional space down to 2D or 3D? 559 01:03:48,310 --> 01:03:53,410 We do not delete n minus 3 or n minus 2 dimensions; we construct mappings. 560 01:03:53,410 --> 01:04:01,420 And that's what the image is. So the image, even though it is limited to two or three dimensions, has aspects of all n dimensions. 561 01:04:01,420 --> 01:04:07,750 And I thought that was a brilliant way for us to think about what Imago Dei might be. 562 01:04:07,750 --> 01:04:20,840 And what I think you're trying to show, then, is that as a result of this image of God, there is a difference between the human-like Grace and the human Grace. 563 01:04:20,840 --> 01:04:27,570 And I would then link the third with possibly the second Tanner Lecture that you gave us. 564 01:04:27,570 --> 01:04:34,400 And I would argue it is because we are in that image that we do take the serendipitous detours. 565 01:04:34,400 --> 01:04:40,790 It is an aspect of human behaviour to take those serendipitous detours. 566 01:04:40,790 --> 01:04:46,880 And I would argue that the best A.I. algorithm would not take the serendipitous detour, because 567 01:04:46,880 --> 01:04:53,420 it would not optimise the objective criterion that the A.I. algorithm has been designed for. 568 01:04:53,420 --> 01:05:01,250 And that's where I would leave it. And I rejoice that as human beings, we do take those serendipitous detours. Many thanks, 569 01:05:01,250 --> 01:05:08,140 Professor Picard, Roz, for showing us how that is really intrinsic to what it is to be a human being. 570 01:05:08,140 --> 01:05:24,370 Thank you. Our second respondent this evening is Tom Fletcher. Tom is the Principal of Hertford College. 571 01:05:24,370 --> 01:05:36,040 But between 2007 and 2011, he was foreign policy adviser to three UK prime ministers and subsequently the UK's ambassador to Lebanon. 572 01:05:36,040 --> 01:05:36,850 More recently, 573 01:05:36,850 --> 01:05:45,660 he's been visiting professor at New York University and has chaired the International Advisory Council of the Creative Industries Federation. 574 01:05:45,660 --> 01:05:49,690 Tom. Thank you so much, Nick. And thank you, Prof. Picard, Roz, and Lionel. 575 01:05:49,690 --> 01:05:55,600 What a feast, really. It's like a Lebanese meal, as an ex-ambassador to Lebanon: 576 01:05:55,600 --> 01:05:59,670 you have all the dishes in front of you, and you're wondering which one to put your pita bread into. 577 01:05:59,670 --> 01:06:04,220 And I'm sure there'll be lots of opportunities to ask questions in just a moment. 578 01:06:04,220 --> 01:06:10,030 Your story of the mood ring reminded me of a friend of mine who was part of a trilingual household, 579 01:06:10,030 --> 01:06:13,060 and he didn't have the mood ring to decipher his partner's moods. 580 01:06:13,060 --> 01:06:19,870 But because they were trilingual, he knew that as he walked in the door, the mood was determined by the language that his partner used. 581 01:06:19,870 --> 01:06:25,570 And so if he walked in and heard Guten Abend, he was in trouble. If he walked in and heard 582 01:06:25,570 --> 01:06:33,130 Good evening, he knew it was a very practical evening, and if he walked in and heard Bonsoir, well, I mean, I'll leave it. 583 01:06:33,130 --> 01:06:43,990 I wouldn't linger on that one.
You really, I think, challenged us, and I come into this space as a non-scientist and as a non-academic. 584 01:06:43,990 --> 01:06:53,980 I'm a recovering ambassador, as Nick said. But I think you've challenged us, as Lionel finished, with this thought, 585 01:06:53,980 --> 01:07:00,610 with this idea that in working out and trying to understand what can make machines more human, 586 01:07:00,610 --> 01:07:08,890 are we able to better understand what actually makes us human? And I have a mischievous thought and three challenges for my world, 587 01:07:08,890 --> 01:07:13,780 which we used to think was maps and chaps, you know, protocol and flags and statecraft. 588 01:07:13,780 --> 01:07:25,510 My mischievous thought, watching your avatars and Grace and the others, is: how far are we from being able to automate an Oxford Head of House? 589 01:07:25,510 --> 01:07:31,030 I say that in the presence of at least four; I'm not sure what the collective noun is for principals. 590 01:07:31,030 --> 01:07:34,930 Someone said it was a lack of principles, but I wonder, you know, 591 01:07:34,930 --> 01:07:41,710 in trying to answer that question, could you replace the four of us with Grace? 592 01:07:41,710 --> 01:07:47,170 I think that already helps us to start to understand where it is we might add value as humans. 593 01:07:47,170 --> 01:07:50,900 I think many governing bodies would be quite attracted to 594 01:07:50,900 --> 01:07:57,940 the idea of replacing the Head of House. Inevitably, at this stage, many Heads of House would be quite attracted to the idea as well. 595 01:07:57,940 --> 01:08:02,950 But it does allow us to focus on where we really add value as humans. 596 01:08:02,950 --> 01:08:08,590 And that's what really struck me from your extraordinary lecture. 597 01:08:08,590 --> 01:08:17,560 The three challenges for my old world of diplomacy and statecraft and politics. 598 01:08:17,560 --> 01:08:24,730 Firstly, how do we renew our education systems in response to the challenges you've set us? 599 01:08:24,730 --> 01:08:28,360 How do we ensure that we are educating first class humans, 600 01:08:28,360 --> 01:08:35,650 not second class robots, amongst the students and academics who we're developing at Oxford and globally? 601 01:08:35,650 --> 01:08:44,550 Are we able to reset education, to renew education, in a way that focuses on the hand and the heart, and not just the head? 602 01:08:44,550 --> 01:08:50,910 In the discipline of history, we tended to learn the list of the wars that we happen to have won. 603 01:08:50,910 --> 01:08:54,780 Are we able to learn how we co-existed between those wars? 604 01:08:54,780 --> 01:09:04,980 Are we able, for the hand, for the skills, to think about what we've done on global competence, which is social and emotional learning, really? 605 01:09:04,980 --> 01:09:10,110 Many people call it 21st century skills; in my world we call it diplomacy. Can you understand your own filter, 606 01:09:10,110 --> 01:09:13,740 the Instagram filter through which you see the world, and understand that other people 607 01:09:13,740 --> 01:09:19,050 have a different filter to yours and maybe yours might not be the right one? Key skill, 608 01:09:19,050 --> 01:09:24,220 but we don't really teach that, and we certainly don't assess it. 609 01:09:24,220 --> 01:09:28,510 We focus always on the things that can be tested and memorised.
610 01:09:28,510 --> 01:09:31,000 So could we learn that skill of global competence, 611 01:09:31,000 --> 01:09:41,920 the global antennae that mean that you could be dropped into Bahrain or Singapore or Boston tomorrow and adapt, ask the right questions, 612 01:09:41,920 --> 01:09:52,030 listen, show the empathy that you described? I think we are well overdue that renewal of education and a focus also on the heart. 613 01:09:52,030 --> 01:09:56,780 Imago Dei, you know, what really makes us curious, kind and brave. 614 01:09:56,780 --> 01:10:04,790 I'd love to see that much more central to education, even at elite academic institutions like this. 615 01:10:04,790 --> 01:10:12,560 The second challenge, I think more traditional diplomacy, is: who is going to do the peace process here between humans and technology? 616 01:10:12,560 --> 01:10:15,560 Maybe that is the Nobel Peace Prize of the future. 617 01:10:15,560 --> 01:10:22,160 I was involved four or five years ago with leading a report for the UN Secretary-General on how we coexist with technology. 618 01:10:22,160 --> 01:10:26,120 40 recommendations, of which only one was adopted. 619 01:10:26,120 --> 01:10:31,040 And that was to create a new committee to think about it. Oh, well. 620 01:10:31,040 --> 01:10:35,300 But we were looking at the real challenges. You know, at the 1815 Congress of Vienna, 621 01:10:35,300 --> 01:10:38,090 it was clear who should be in the room and who convened the room; 622 01:10:38,090 --> 01:10:45,140 now, who decides who gets in the room to decide how we coexist with artificial intelligence? Who has the authority to do that? 623 01:10:45,140 --> 01:10:50,230 The UN doesn't have the resource, the bandwidth, the strategic depth to do that. 624 01:10:50,230 --> 01:10:57,250 We had an exchange of letters with the UN Secretary-General where 200 of us wrote in, Elon Musk and others, to say we are creating risks. 625 01:10:57,250 --> 01:11:03,190 This is the field of lethal autonomous weapons. We are creating these risks faster than we can actually manage them. 626 01:11:03,190 --> 01:11:11,540 What will you do about it? How will you update the Universal Declaration of Human Rights, that you quoted, in order to respond to those new challenges? 627 01:11:11,540 --> 01:11:15,260 Two years later, the UN is still arguing over who should draft the reply. 628 01:11:15,260 --> 01:11:19,820 You know, it's very, very difficult to see who brings that conversation together, and that, 629 01:11:19,820 --> 01:11:25,040 for me, is a real challenge for our fields, but also a challenge for a university. 630 01:11:25,040 --> 01:11:32,960 Because if governments can't do that, if international organisations can't do that, then maybe this is a place where we can have those conversations, 631 01:11:32,960 --> 01:11:45,380 where we can convene the debates and the diplomacy that is necessary to update the rules of liberty and security for a digital age. 632 01:11:45,380 --> 01:11:50,810 And the third thought is also in that world of peace and diplomacy, 633 01:11:50,810 --> 01:11:58,640 but really bringing it further on to how we renew society, a society in which humans flourish. 634 01:11:58,640 --> 01:12:02,780 When I arrived in the Middle East 10 or 11 years ago, I used to go around saying something very silly, 635 01:12:02,780 --> 01:12:06,540 which was that the most powerful weapon in the Middle East is the smartphone.
636 01:12:06,540 --> 01:12:13,130 You know, the Arab Spring was breaking out around us and we thought that this would actually help us connect in really exciting, 637 01:12:13,130 --> 01:12:17,090 extraordinary new ways. And we were overoptimistic. And of course, 638 01:12:17,090 --> 01:12:24,590 what we learnt subsequently, and what we have learnt in Ukraine, is that actually there are more powerful weapons out there than the smartphone. 639 01:12:24,590 --> 01:12:31,700 But actually, your tool gave me fresh optimism that maybe this can help us find new ways to reach out, 640 01:12:31,700 --> 01:12:39,800 new ways to connect as humans; maybe A.I. can assist us with that effort to coexist with each other. 641 01:12:39,800 --> 01:12:45,400 I think there's an extraordinary piece of work to be done that brings together that work: 642 01:12:45,400 --> 01:12:50,590 what we're learning about the future of technology, what we're learning about positive psychology, 643 01:12:50,590 --> 01:12:54,910 and also what we're learning about social media and the ability to connect the whole of society. 644 01:12:54,910 --> 01:12:59,350 We've watched people use that; Trump weaponised that against us. 645 01:12:59,350 --> 01:13:04,990 Can we turn that around and use these tools to actually connect people in the way you describe? 646 01:13:04,990 --> 01:13:09,040 And ultimately, can we use it to heal the wounds of history? 647 01:13:09,040 --> 01:13:13,360 And then education does become upstream diplomacy. 648 01:13:13,360 --> 01:13:18,460 And so there's another great peace process there, alongside the peace process between humans and technology: 649 01:13:18,460 --> 01:13:25,480 the peace process between humans and their past, and perhaps the tech of the future can help us with that. 650 01:13:25,480 --> 01:13:30,310 And maybe then we get back. You know, as a non-scientist, I grew up watching The Ascent of Man, Jacob Bronowski, 651 01:13:30,310 --> 01:13:35,740 and at the end, he stands in the mud of Auschwitz, where he lost so many of his relatives. 652 01:13:35,740 --> 01:13:39,700 And he reaches down and picks up the mud from the puddle and just says at the end: 653 01:13:39,700 --> 01:13:43,510 We have to reach out and touch people. We have to reach out and connect. 654 01:13:43,510 --> 01:13:48,910 I think that's really where Imago Dei takes us. How do we connect as humans? 655 01:13:48,910 --> 01:13:52,870 And perhaps that's where our fields connect as well. 656 01:13:52,870 --> 01:13:58,150 A final story: I worked for years on Northern Ireland, for the Prime Minister in Downing Street, 657 01:13:58,150 --> 01:14:04,300 and I was at one event on peace and reconciliation, and I asked one of the speakers: You know, why are you here? 658 01:14:04,300 --> 01:14:08,820 And she said: Well, my father was killed in the terrorist attack, the Brighton bombing, 659 01:14:08,820 --> 01:14:13,750 alongside many others, and I'm here to talk about connection, what makes us human. 660 01:14:13,750 --> 01:14:18,340 How do we connect? And I asked the gentleman next to her: You, sir, why are you here? 661 01:14:18,340 --> 01:14:23,970 And he said: I was the bomber. You know, ultimately there is something 662 01:14:23,970 --> 01:14:31,830 that is so human, as Lionel says. And if we can hold on to that, then that can really animate this work, but that 663 01:14:31,830 --> 01:14:45,670 can also make us better academics, better Heads of House, better citizens.