2 00:00:14,670 --> 00:00:22,620 Welcome, everybody, to this term's Strachey Lecture. This is a series of distinguished lectures named after Professor Christopher Strachey. 3 00:00:22,620 --> 00:00:27,240 He founded the Programming Research Group here in Oxford in 1965. 4 00:00:27,240 --> 00:00:35,850 And together with Dana Scott, he founded the field of denotational semantics, which provided a firm mathematical foundation for programming languages. 5 00:00:35,850 --> 00:00:41,610 Before introducing today's speaker, which I'm about to do, I would like to thank Oxford Asset Management, 6 00:00:41,610 --> 00:00:47,250 who have generously supported this lecture series since 2015. 7 00:00:47,250 --> 00:00:52,870 Without their support, we wouldn't really be able to have this exciting lecture series full of distinguished speakers, 8 00:00:52,870 --> 00:01:00,690 so we're very grateful for that. I'm also reliably informed that they have brought Krispy Kreme doughnuts, 9 00:01:00,690 --> 00:01:04,650 and those of you who would like to speak to them during the reception are very 10 00:01:04,650 --> 00:01:11,790 welcome to do that, and also to ask more questions of our distinguished speaker. 11 00:01:11,790 --> 00:01:18,630 OK, so it's a great pleasure to introduce Yvonne Rogers. Yvonne Rogers is the Chair 12 00:01:18,630 --> 00:01:24,420 of Interaction Design and also the Director of the UCL Interaction Centre, 13 00:01:24,420 --> 00:01:33,140 both at UCL. Her research interests are ubiquitous computing, interaction design and HCI. 14 00:01:33,140 --> 00:01:38,430 Yvonne has won so many awards that I will not name all of them, but I'll name a few. 15 00:01:38,430 --> 00:01:46,190 She won this year's Royal Society Milner Award, which is really great, and an MRC Suffrage Science Award. 16 00:01:46,190 --> 00:01:55,020 She's the holder of an EPSRC Dream Fellowship.
She was co-chair of the Intel Collaborative Research Institute on Sustainable Connected Cities, 17 00:01:55,020 --> 00:02:00,570 and she's also the recipient of a Microsoft Research Outstanding Collaborator Award. 18 00:02:00,570 --> 00:02:06,390 Yvonne has a whole host of academic publications, but in addition to that, 19 00:02:06,390 --> 00:02:15,300 she's the co-author of a manifesto on HCI in 2020, and also co-author of a really popular textbook in the area. 20 00:02:15,300 --> 00:02:26,070 Her talk today is going to be about how new technology is changing what we see, and I'm really delighted to introduce Yvonne. 21 00:02:26,070 --> 00:02:39,510 Thank you very much. Firstly, it's a pleasure to hear real claps, rather than seeing yellow hands popping up in front of the screen. 22 00:02:39,510 --> 00:02:46,410 And secondly, it's a real pleasure to see a sea of faces, albeit mostly masked. 23 00:02:46,410 --> 00:02:53,490 It's nice to see eyes rather than just a blank screen. Thank you very much for coming along to hear the lecture. 24 00:02:53,490 --> 00:02:59,340 I know we're still a bit concerned about whether we should be coming out, 25 00:02:59,340 --> 00:03:04,020 so I really appreciate you coming to my lecture. In terms of awards, 26 00:03:04,020 --> 00:03:12,570 my very first award was when I was six years old, and it was from Brooke Bond Tea, for best handwriting. 27 00:03:12,570 --> 00:03:15,960 You should see my handwriting now. 28 00:03:15,960 --> 00:03:21,780 But anyway, I'm going to be talking today about some of my research, 29 00:03:21,780 --> 00:03:26,700 which is very much about how we can design technology to empower and augment people. 30 00:03:26,700 --> 00:03:33,690 That's what I've done with all sorts of technologies: really thought about what it is about this technology that can make our lives better, 31 00:03:33,690 --> 00:03:37,770 but also what we do. And my talk is going to be in two parts.
32 00:03:37,770 --> 00:03:43,590 Firstly, I'm going to talk about how we might extend the body physically. 33 00:03:43,590 --> 00:03:48,900 This is Joy Mountford, who I used to work with at Apple many years ago, on the left with her cyber gloves. 34 00:03:48,900 --> 00:03:54,750 And the second part of my talk is going to be about how we extend the mind digitally, with software. 35 00:03:54,750 --> 00:04:00,150 And this is Andy Clark, who's a dear friend and colleague of mine. 36 00:04:00,150 --> 00:04:07,050 This was him on the front of the New Yorker magazine. Many of you might know him from his work on the extended mind. 37 00:04:07,050 --> 00:04:11,550 He's a philosopher at Sussex University. 38 00:04:11,550 --> 00:04:23,190 So, when we think about extending the body: how many of you have thought about what it would be like to have a third eye in the back of your head, 39 00:04:23,190 --> 00:04:29,340 so that rather than swivelling round to see who's behind you, you'd be able to see who is sitting behind you and what they're doing? 40 00:04:29,340 --> 00:04:35,850 Wouldn't it be great if you could have 360-degree vision? 41 00:04:35,850 --> 00:04:43,500 Philosophers have thought a lot about what it might be like to have senses other than the ones we evolved with. 42 00:04:43,500 --> 00:04:48,000 And when I was at Sussex with Andy Clark, 43 00:04:48,000 --> 00:04:56,970 we spent time talking about this, and I then put it into an EPSRC grant proposal looking at how we might extend our senses. 44 00:04:56,970 --> 00:05:04,020 We were lucky to get funded, and the project we came up with was called the e-Sense project, 45 00:05:04,020 --> 00:05:10,110 and we wanted to think about how you would make a third eye and what it might mean. 46 00:05:10,110 --> 00:05:21,630 This work was done with Andy Clark, John Bird and Paul Marshall, and we set about thinking about where you would want to have a third eye.
47 00:05:21,630 --> 00:05:25,590 You might not want it in the back of your head, because it's stationary: you could only ever see what's directly behind you. 48 00:05:25,590 --> 00:05:30,360 But supposing you had a third eye that was on your hand, as a ring: 49 00:05:30,360 --> 00:05:39,030 you could then move your arm around like this, and under and over, and be in control of what else you might see. 50 00:05:39,030 --> 00:05:51,090 So this got us thinking and prototyping with the technology of the day, thinking about having a tiny camera that you could move around like this. 51 00:05:51,090 --> 00:05:57,840 And it made us think: well, what would you do with what the camera captures? 52 00:05:57,840 --> 00:05:58,830 Where would it go? 53 00:05:58,830 --> 00:06:05,580 Of course, you could see it on a display, but you wouldn't want to be walking along with a display whilst moving your hand around. 54 00:06:05,580 --> 00:06:13,530 It makes no sense. And so what we did was to think about how we might translate the images collected 55 00:06:13,530 --> 00:06:24,270 by the third eye into a pattern that can then be translated into a matrix of vibrating motors. 56 00:06:24,270 --> 00:06:27,870 These motors would then be placed on part of your torso. 57 00:06:27,870 --> 00:06:30,750 It could be on the front or the back, or your shoulders. 58 00:06:30,750 --> 00:06:36,930 And the idea is that there would be a pattern of vibrations, a bit like you have on your smartphone, 59 00:06:36,930 --> 00:06:46,590 and that you would learn to sense those patterns of vibrations and be able to transform them into this other sense. 60 00:06:46,590 --> 00:06:56,580 This work follows on from some early work 50 years ago by the neuroscientist Paul Bach-y-Rita. 61 00:06:56,580 --> 00:07:05,130 So we started to think about this, but then we realised that it didn't make sense for this eye to be moving around.
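The mapping just described, from a camera image to a coarse grid of vibrating motors, can be sketched in a few lines. This is an illustrative reconstruction, not the e-Sense code; the grid size, the input format and the averaging scheme are all assumptions:

```python
# Sketch (assumptions, not the project's implementation): reduce a
# grayscale camera frame to one vibration intensity per motor in a
# small grid worn on the torso.

def image_to_motor_pattern(frame, motor_rows=4, motor_cols=4):
    """Downsample a grayscale frame (list of lists, pixel values
    0-255) into one average-brightness value per motor (0.0-1.0)."""
    h, w = len(frame), len(frame[0])
    cell_h, cell_w = h // motor_rows, w // motor_cols
    pattern = []
    for r in range(motor_rows):
        row = []
        for c in range(motor_cols):
            # Average the pixels that fall inside this motor's cell.
            total = 0
            count = 0
            for y in range(r * cell_h, (r + 1) * cell_h):
                for x in range(c * cell_w, (c + 1) * cell_w):
                    total += frame[y][x]
                    count += 1
            row.append(total / count / 255.0)  # normalise to 0..1
        pattern.append(row)
    return pattern

# A 16x16 frame that is bright only in the top-left corner:
frame = [[255 if (y < 4 and x < 4) else 0 for x in range(16)]
         for y in range(16)]
pattern = image_to_motor_pattern(frame)
```

Each motor's intensity is just the mean brightness of its patch of the image, which is the simplest plausible reduction; a real device would also need to map these intensities onto actual motor drive levels.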
62 00:07:05,130 --> 00:07:09,970 It would go upside down, the orientation would keep changing, and you'd just get a fuzzy feeling on you. 63 00:07:09,970 --> 00:07:19,710 So we went back a few steps, to think about what the basic principles of having some kind of sensory substitution would be. 64 00:07:19,710 --> 00:07:22,860 And we thought about: can you see through your tummy? 65 00:07:22,860 --> 00:07:30,540 That was what we were thinking about, and this is one of the researchers working on it, John Bird, who can explain it quite eloquently. 66 00:07:30,540 --> 00:07:39,630 This builds on a really interesting experiment that was done over 40 years ago by Paul Bach-y-Rita. 67 00:07:39,630 --> 00:07:49,470 What he did was he took a camera and used it to control vibration motors on the backs of blind people. 68 00:07:49,470 --> 00:07:59,250 And after training, they learnt how to interpret that vibration and recognise objects in the world around them. 69 00:07:59,250 --> 00:08:05,580 So we have this idea of how the mind is extended and how the body is extended by using technology. 70 00:08:05,580 --> 00:08:12,960 We've got a beautiful starting place, with prototypes built around cheap technology and sensory substitution devices, or SSDs. 71 00:08:12,960 --> 00:08:21,150 We use cheap off-the-shelf components, with the idea that a lot of people can take our designs and run with them. 72 00:08:21,150 --> 00:08:31,440 What happens is we have a webcam that looks down on a table and its objects; the person is blindfolded, and we put headphones on them as well. 73 00:08:31,440 --> 00:08:37,530 All they can feel is an array of vibration motors on their tummy. 74 00:08:37,530 --> 00:08:42,060 As the ball goes down the table, they feel the vibration going on and off. 75 00:08:42,060 --> 00:08:47,160 It starts on the left-hand side of the table and comes down here, or it might come down the right. 76 00:08:47,160 --> 00:08:52,410 I think of this part as glass.
So you shouldn't get any vibration from it. 77 00:08:52,410 --> 00:09:01,230 But when the ball gets to this corner, it's going to activate this motor, which is that one there. 78 00:09:01,230 --> 00:09:13,500 The computer program has worked out where the ball is on the table, and it's made that motor vibrate; as the ball moves along, 79 00:09:13,500 --> 00:09:19,920 you feel the next motor to the right, and then the next motor to the right, 80 00:09:19,920 --> 00:09:27,300 and that should be the bottom corner. So as you move the ball around, you get used to how the vibration changes. 81 00:09:27,300 --> 00:09:34,980 We also track where the subject's hand is. We do that with a brightly coloured cycling glove, which is very easy to track. 82 00:09:34,980 --> 00:09:39,180 So as they move their hand, they'll feel a slightly gentler vibration, 83 00:09:39,180 --> 00:09:46,230 because basically their job is to try and get their hand's vibration to coincide with the ball's vibration. 84 00:09:46,230 --> 00:09:50,250 They do that to catch the ball, and people are remarkably good at it. 85 00:09:50,250 --> 00:10:03,210 We've done a few trials, and most people can do it. So that was our attempt at sensory substitution, 86 00:10:03,210 --> 00:10:10,800 and we went round to many schools and exhibitions and science museums, and thousands of children and families 87 00:10:10,800 --> 00:10:17,130 tried this. We were really struck at just how quickly they were able to learn to see through their tummy: 88 00:10:17,130 --> 00:10:23,730 they were able, through those vibrations, to know where the ball was and to catch it without being able to see it visually. 89 00:10:23,730 --> 00:10:30,280 That was a very early project, but it inspired, I think, others to think about this.
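The catch-the-ball logic described in the demo (map the tracked ball and the tracked glove to motors, and count a catch when their vibrations coincide) can be sketched like this. This is an illustrative reconstruction; the camera resolution, grid size and the "same motor" catch rule are assumptions, not the project's actual implementation:

```python
# Sketch (assumed parameters): the webcam gives table coordinates for
# the ball and the gloved hand; each maps to one motor in the array,
# and the player's goal is to make the two vibrations coincide.

def to_motor_index(x, y, table_w=640, table_h=480, cols=4, rows=4):
    """Map a table coordinate from the webcam to a motor (row, col)."""
    col = min(int(x / table_w * cols), cols - 1)
    row = min(int(y / table_h * rows), rows - 1)
    return (row, col)

def caught(ball_xy, hand_xy):
    """The ball counts as 'caught' when the hand and the ball activate
    the same motor, i.e. the two vibrations coincide."""
    return to_motor_index(*ball_xy) == to_motor_index(*hand_xy)
```

The coarse grid does a lot of work here: the player never needs pixel-level accuracy, only to get their hand into the same cell as the ball, which is plausibly why people learn the task so quickly.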
91 00:10:31,500 --> 00:10:39,840 Some people at the University of Osnabrück started thinking about a magnetic belt, which was part of the feelSpace project. 92 00:10:39,840 --> 00:10:46,980 What they tried to do was give people a sense of north by having this belt, 93 00:10:46,980 --> 00:10:54,060 which comprised 32 tactile vibrators, with the one facing north vibrating. 94 00:10:54,060 --> 00:10:59,820 As you walked around, whichever one was facing north would vibrate. Which way is north here? I have no idea, because I don't know Oxford very well. 95 00:10:59,820 --> 00:11:03,390 But if I go this way, which way's north? That way? Yes. 96 00:11:03,390 --> 00:11:08,580 So that's why that one would go off. And then if I started walking that way, it would change accordingly. 97 00:11:08,580 --> 00:11:16,200 So it was again this idea that you could learn to have this extra sense through vibration. 98 00:11:16,200 --> 00:11:24,180 And what they found, when they asked someone to wear this belt constantly for nine weeks (can you imagine? it would be quite heavy; 99 00:11:24,180 --> 00:11:32,850 they could take it off during a lecture or when they were stationary), was that it improved their spatial awareness, so that they were able to know 100 00:11:32,850 --> 00:11:40,110 where north was, and also improved their understanding of the campus and the surrounds. 101 00:11:40,110 --> 00:11:49,140 So there have been developments along these lines, not only using vibration as a way of extending our senses: 102 00:11:49,140 --> 00:11:56,130 others have thought about what other kinds of digits we might add. Dani Clode, who is an artist and 103 00:11:56,130 --> 00:12:09,480 engineer, has been designing this Third Thumb. Basically, as you can see here, you attach it to your hand and you control it with your big toes. 104 00:12:09,480 --> 00:12:14,220 And you can control it so that it moves a bit like one of your other digits.
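Going back to the north belt for a moment: its core computation, picking which motor points north from a compass heading, is simple to sketch. The 32-motor count comes from the talk; the numbering scheme (motor 0 at the wearer's front, counting clockwise) is an assumption for illustration:

```python
# Sketch of the feelSpace-style mapping (motor layout is assumed):
# given the wearer's compass heading, select the belt motor that is
# currently pointing north.

MOTORS = 32  # evenly spaced around the waist, 11.25 degrees apart

def north_motor(heading_deg):
    """heading_deg: which way the wearer is facing, measured clockwise
    from north. North lies (360 - heading) degrees clockwise from the
    wearer's front, and that angle selects the motor."""
    bearing_to_north = (360.0 - heading_deg) % 360.0
    return round(bearing_to_north / (360.0 / MOTORS)) % MOTORS
```

So a wearer facing east (heading 90) would feel motor 24, on their left side, which is exactly the "it moves round as you turn" behaviour described above.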
105 00:12:14,220 --> 00:12:22,470 Again, I think it's good to let the people who have been doing the research explain it. Tamar Makin has been 106 00:12:22,470 --> 00:12:30,000 working as a neuroscientist, looking at whether or not using this third thumb changes the neural structure of the brain. 107 00:12:30,000 --> 00:12:37,260 This robotic extra thumb can be controlled by moving your toes, 108 00:12:37,260 --> 00:12:43,110 but prolonged use may come at a cost of your brain being less certain about how your hands work. 109 00:12:43,110 --> 00:12:50,310 Researchers gave people a prosthetic digit that wrapped around their wrist and sat underneath their little finger, and the thumb's 110 00:12:50,310 --> 00:12:55,680 movement was controlled by sensors attached to the user's big toes, with communication sent using 111 00:12:55,680 --> 00:13:00,330 wireless technology affixed to the wrist and ankle. By wiggling each toe, 112 00:13:00,330 --> 00:13:03,480 the augmented humans could move the thumb in different directions, 113 00:13:03,480 --> 00:13:10,500 and participants were encouraged to use the thumb both in laboratory settings and in the wider world. 114 00:13:10,500 --> 00:13:16,080 The additional thumb could cradle a cup of coffee while the same hand's fingers held a spoon to stir in milk. 115 00:13:16,080 --> 00:13:22,180 Some participants could use the thumb to peel a banana, blow bubbles or even play the guitar. 117 00:13:27,260 --> 00:13:36,490 The video goes on for quite a while to explain how it can change the structure of the brain when you've got this extra digit. 118 00:13:36,490 --> 00:13:47,080 But for the purposes of today, I just wanted to show you how researchers are being innovative in thinking about how you can provide extra limbs.
119 00:13:47,080 --> 00:13:55,280 But I don't think we've got to the point yet of having four arms. Anyone know who the painter is? 120 00:13:55,280 --> 00:14:06,320 It says in the corner there: it's Magritte. Can you imagine having four arms at dinner, and how you would use them? 121 00:14:06,320 --> 00:14:14,210 But I think there's a lot of interest in thinking about how, if we are going to extend our bodies, we can do that. 122 00:14:14,210 --> 00:14:21,740 And there's been interest in those who have disabilities, or have had strokes, or have lost a limb, 123 00:14:21,740 --> 00:14:28,940 and how technology is being designed to enable them to feel like they might have those limbs, but also to have more. 124 00:14:28,940 --> 00:14:37,730 What I want to do now, in the second part of my talk, is to talk about how we can extend the mind through using digital technologies. 125 00:14:37,730 --> 00:14:48,560 One of the first attempts to do this, I think, was a project called SixthSense, at the Media Lab at MIT, 126 00:14:48,560 --> 00:14:55,550 by a student at the time, Pranav Mistry, and his supervisor, Pattie Maes. 127 00:14:55,550 --> 00:15:06,170 He was experimenting with different technologies of the time, to think about what it would be like to be able to project data into the environment. 128 00:15:06,170 --> 00:15:11,780 This was before smartphones, which we've all got used to as our way of looking at data. 129 00:15:11,780 --> 00:15:18,890 The way in which he created this was to have a projector that 130 00:15:18,890 --> 00:15:24,410 would be on a cap, and a camera, and then some colour markers that would be 131 00:15:24,410 --> 00:15:30,350 different for the four fingers, and these would combine into a wearable gestural 132 00:15:30,350 --> 00:15:36,380 interface and allow digital information to be projected into the environment.
133 00:15:36,380 --> 00:15:41,270 And the way it does that is it recognises and tracks the user's hands through these 134 00:15:41,270 --> 00:15:49,310 four coloured markers, and is able, through computer vision, to decide what action to take. 135 00:15:49,310 --> 00:15:59,180 Is it that they're taking a photo? Or is it, as in this case, acting as a digital keypad, which allows the user to make a phone call, 136 00:15:59,180 --> 00:16:04,580 which at the time was considered to be, you know, amazing, and it went viral. 137 00:16:04,580 --> 00:16:05,360 And I think, again, 138 00:16:05,360 --> 00:16:12,650 this was quite influential in getting people to think about what data we would want projected out there and how we might use it. 139 00:16:12,650 --> 00:16:18,170 One of our projects that followed on from e-Sense was to think about the 140 00:16:18,170 --> 00:16:22,910 technologies that we might use to provide information in the environment. 141 00:16:22,910 --> 00:16:34,490 We didn't use projectors. We used very straightforward, simple LEDs that would light up to show a certain amount of information the person needed. 142 00:16:34,490 --> 00:16:40,670 And what we did was try to nudge people into deciding what products to buy when they're in a supermarket. 143 00:16:40,670 --> 00:16:50,150 So: do they want to buy this water or that water? Sometimes you look to see where it's come from. This one has come from... I can't see. 144 00:16:50,150 --> 00:16:54,560 Whereas this was just from Oxford, the tap water. 145 00:16:54,560 --> 00:17:00,380 We wanted to provide people with information that would help them make that decision. 146 00:17:00,380 --> 00:17:03,560 So we designed what was called the Lambent shopping handle. 147 00:17:03,560 --> 00:17:15,050 I don't know if any of you have watched Gadget Man, but Richard Ayoade was very taken by our design and featured it on one of his programmes.
148 00:17:15,050 --> 00:17:25,600 So I'm going to let the video speak for itself. So, I want to have some of this delicious-looking spaghetti bolognese, right? 149 00:17:25,600 --> 00:17:32,650 20 per cent off. Yeah, that's a bargain. But is it as good as it looks? 150 00:17:32,650 --> 00:17:36,430 Is there anything you want to know, the recycling, the packaging? People do like 151 00:17:36,430 --> 00:17:46,080 to know those exact things. Well, if you look at the mince here, it has a Union Jack badge, which is already taking too long. 152 00:17:46,080 --> 00:17:50,110 What else do they say? 153 00:17:50,110 --> 00:17:51,100 OK, here's what we're going to do. 154 00:17:51,100 --> 00:17:59,860 I'll tell you what: I'm going to use the Lambent shopping trolley handle. This handle is an experiment in psychology, 155 00:17:59,860 --> 00:18:08,740 aiming to influence buyers' choices. The more unhealthy the food is, or the further it has travelled, the more lights come on; see the three lights that have lit up. 156 00:18:08,740 --> 00:18:15,730 If it was very unhealthy, it would light up all the way. This has come roughly from Scotland. How long did that take to show? It's quick. 157 00:18:15,730 --> 00:18:21,910 Oh, right. And how long ago? That was interesting. I'm not going to dispute how fascinating you are; 158 00:18:21,910 --> 00:18:27,460 I'm just saying there's a time-management situation here. 159 00:18:27,460 --> 00:18:34,870 No wonder he's such a good presenter. So I think, from the examples I've shown you, 160 00:18:34,870 --> 00:18:41,650 there's lots of technology we can experiment with, and we've seen, you know, tangible, physical computing. 161 00:18:41,650 --> 00:18:47,890 We can design all sorts of ways in which we can sense, but also present, information.
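The handle's nudge principle (the further the food has travelled, the more lights come on) can be sketched as a simple mapping. The LED count and distance ceiling here are invented for illustration; the real handle, as described in the video, also factored in how unhealthy the food was:

```python
# Sketch of the nudge mapping (thresholds and LED count are assumed,
# not taken from the Lambent handle's actual design): map a product's
# food miles onto how many of the handle's LEDs light up.

def lit_leds(food_miles, max_miles=6000, num_leds=7):
    """0 LEDs for local produce, all of them for the furthest-
    travelled products; values beyond the ceiling are clamped."""
    miles = max(0, min(food_miles, max_miles))
    return round(miles / max_miles * num_leds)
```

The point of the design is that the shopper never reads a number: the glowing handle gives an at-a-glance, ambient nudge at exactly the moment the product goes into the trolley.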
162 00:18:47,890 --> 00:18:57,910 And then, in that last one, the use of mobile and ubiquitous technologies. There are lots of software tools that we can develop: augmented reality, 163 00:18:57,910 --> 00:19:05,530 virtual reality, wearables, speech, robots, chatbots, AI and machine learning, multi-sensory interfaces, and so on and so forth. 164 00:19:05,530 --> 00:19:12,760 Every year there's a new technology coming along (not so many in the last two years) that provides people like myself and 165 00:19:12,760 --> 00:19:20,230 others in the field of interaction design with new tools with which to think about how we can augment and empower people. 166 00:19:20,230 --> 00:19:22,000 And I'm not going to go through all of these. 167 00:19:22,000 --> 00:19:28,900 What I'm going to do is just talk about three themes that run through my work, where I think about how we are 168 00:19:28,900 --> 00:19:36,100 augmenting people, using three different technologies and three kinds of technology augmentation. 169 00:19:36,100 --> 00:19:38,710 I'm going to give case studies for each one. 170 00:19:38,710 --> 00:19:46,930 The first one is what I'm calling cognitive offloading, where the technology, usually some smartphone device, 171 00:19:46,930 --> 00:19:53,890 allows us to perceive and see more in the world because we're offloading onto that device. 172 00:19:53,890 --> 00:20:00,310 But the question is: at what expense? And I'm going to describe a study we did to explore this. 173 00:20:00,310 --> 00:20:08,800 The second theme that runs through my research is to think about how we can scaffold learning and thinking: 174 00:20:08,800 --> 00:20:19,600 how can we design technologies to guide you, and to probe you into thinking, perhaps, differently, to change your direction of thought? 175 00:20:19,600 --> 00:20:24,370 And we've been designing virtual agents, chatbots, to do this.
176 00:20:24,370 --> 00:20:29,290 And the third one is what we're calling engaged interaction. 177 00:20:29,290 --> 00:20:35,770 Sorry, engaged intelligence. And this is about making us think with cognitive software tools, 178 00:20:35,770 --> 00:20:43,180 and I'll present a case study there from some people who are in the audience today, on their work. 179 00:20:43,180 --> 00:20:48,580 So, the first one: cognitive offloading. We're all familiar with using our phones for 180 00:20:48,580 --> 00:20:53,110 all sorts of tasks where before we had to remember, whether it's people's birthdays, 181 00:20:53,110 --> 00:20:56,950 telephone numbers, addresses and so on. 182 00:20:56,950 --> 00:21:06,670 We just put them into our phones, and we know we don't have to learn them; we rely on our phones to remember for us. 183 00:21:06,670 --> 00:21:11,290 So how are they changing how we see and remember? I think that's the really interesting question. 184 00:21:11,290 --> 00:21:15,760 There's been a lot of reporting in the media telling us that we're not using our brains now, 185 00:21:15,760 --> 00:21:23,380 that they're atrophying, because we're just not exercising our mental faculties. 186 00:21:23,380 --> 00:21:28,750 So the one I started looking at was the way in which digital map apps have changed 187 00:21:28,750 --> 00:21:34,210 how we find our way through cities and what we remember of them. 188 00:21:34,210 --> 00:21:38,770 A few years ago, I did a study with one of my MSc students, 189 00:21:38,770 --> 00:21:47,590 looking at how these apps were changing the way in which we interact with and perceive the world as we use them. 190 00:21:47,590 --> 00:21:54,250 I don't know how many of you remember these, or even still have one in a cupboard somewhere. 191 00:21:54,250 --> 00:22:00,070 These are the paper-based maps that we used to rely on all the time to get us from A to B.
192 00:22:00,070 --> 00:22:06,610 The way in which we would use them was: we would learn how to read them, and then match up what we were seeing with landmarks, 193 00:22:06,610 --> 00:22:12,940 which might be names of streets, or certain buildings if they are mentioned. With smartphone apps, 194 00:22:12,940 --> 00:22:16,930 there are many different varieties of these maps. 195 00:22:16,930 --> 00:22:19,170 You can follow the dot, the line or the 196 00:22:19,170 --> 00:22:27,450 person on the map, and it moves dynamically as you move; it shows where you currently are and indicates where you need to go. 197 00:22:27,450 --> 00:22:32,910 So it's dynamic and it leads you. You don't have to do this learning and matching up. 198 00:22:32,910 --> 00:22:39,000 So, as I said, the cognition is offloaded onto the map, and it makes the navigation task much, much easier. 199 00:22:39,000 --> 00:22:47,940 Many of us, especially Generation Z, probably couldn't even imagine what it's like to use a paper map. 200 00:22:47,940 --> 00:22:54,070 But one of the questions in the research has been: does it give us poorer mental maps of the area, 201 00:22:54,070 --> 00:23:00,180 so that we couldn't, if we were dumped somewhere without our smartphones, find our way home? 202 00:23:00,180 --> 00:23:05,820 But the possibility we were looking at was: well, maybe because you're not spending all your time looking at the map, 203 00:23:05,820 --> 00:23:12,150 you're actually looking around and enjoying the surroundings, and maybe you remember more of the surroundings. 204 00:23:12,150 --> 00:23:18,280 So we conducted an in-the-wild study; much of the research up until then had been experimental, in the lab. 205 00:23:18,280 --> 00:23:23,880 We wanted to actually see how people look around and use these devices in situ.
206 00:23:23,880 --> 00:23:29,730 And so, rather than just measuring performance time or errors, we thought about measuring something else: 207 00:23:29,730 --> 00:23:36,150 the strategies that people employ when they're using a phone-based or a paper-based map. 208 00:23:36,150 --> 00:23:45,560 As I said, for a paper-based map, if you're looking for, say, Downing Street, as we would all be wanting to do now, 209 00:23:45,560 --> 00:23:53,010 you look on your map, you see it, and then you try and match it up with what you're seeing outside in the environment, or the other way round. 210 00:23:53,010 --> 00:24:00,960 Whereas with a smartphone, you follow the dot or moving icon, to see if you're still on track at choice points. 211 00:24:00,960 --> 00:24:05,070 And those choice points might be where you're making a turn. 212 00:24:05,070 --> 00:24:14,070 So we conducted a study with 28 participants, split into two groups to prevent training transfer, 213 00:24:14,070 --> 00:24:23,370 and they were asked to walk a route through London that they weren't familiar with, using either a paper-based map or a smartphone one. 214 00:24:23,370 --> 00:24:32,430 And my poor MSc student shadowed all 28 participants each time they did this, and made notes of what they were doing. 215 00:24:32,430 --> 00:24:38,970 This was because, at the time, we didn't have any sophisticated technology that might be able to measure what they were doing; 216 00:24:38,970 --> 00:24:43,080 but also we wanted to see what they were looking at, which is quite difficult to do 217 00:24:43,080 --> 00:24:53,450 using some kind of camera. After they'd completed the route, they were asked to come back a week later and do a recognition task. 218 00:24:53,450 --> 00:25:01,350 This involved saying whether they'd seen certain things, and they were also asked to draw a map of the route they took.
219 00:25:01,350 --> 00:25:06,990 We used, at the time, a Google map, and we tried to control the two conditions to be as similar as possible: 220 00:25:06,990 --> 00:25:14,250 the paper one was printed and static, and the digital one moved as the person moved. 221 00:25:14,250 --> 00:25:22,250 There are problems with this design; I think it's really difficult to get control in the wild, but we can talk about that later. 222 00:25:22,250 --> 00:25:29,300 For the recognition task, they were asked to say whether they'd seen these images on their route. 223 00:25:29,300 --> 00:25:37,460 So this was recognition, you know, 'I remember seeing those', and we put in a number which weren't part of the route. 224 00:25:37,460 --> 00:25:42,380 So what did we find? Well, we did find a difference between the two conditions. 225 00:25:42,380 --> 00:25:49,400 We found that participants in the paper map condition looked more frequently at the map, 226 00:25:49,400 --> 00:25:53,600 especially when they were coming up to a junction, to check that they were doing the right thing. 227 00:25:53,600 --> 00:26:00,530 Whereas smartphone map participants looked much less, and they usually looked after 228 00:26:00,530 --> 00:26:05,630 they'd crossed the road or taken a turn, to see if they'd done the right thing. 229 00:26:05,630 --> 00:26:14,360 So, quite different strategies, as we predicted. In terms of recognition, what they remembered, the smartphone group remembered more street views. 230 00:26:14,360 --> 00:26:22,340 Those are the ones we showed in the images; that's because they were looking around. They also drew more detailed maps, 231 00:26:22,340 --> 00:26:27,560 whereas the paper-based group drew more accurate maps, because they had memorised the route. 232 00:26:27,560 --> 00:26:34,850 So here's the only graph I'm going to show you. It shows, on the left, 233 00:26:34,850 --> 00:26:42,830 a significant difference between groups
for street views, in terms of the number remembered; the blue is the smartphone group, 234 00:26:42,830 --> 00:26:47,330 so you can see they remembered more. For the other ones, the street names, the shops and the landmarks, 235 00:26:47,330 --> 00:26:55,160 there was a slight difference, but not significant. What was interesting is when we asked them to draw what they remembered. 236 00:26:55,160 --> 00:27:04,340 The top ones were from the paper-based map condition: they drew more crossroads, because they remembered those, and they were spatially more accurate. 237 00:27:04,340 --> 00:27:10,850 So using a paper-based map, you do have a cleaner, better mental model. 238 00:27:10,850 --> 00:27:17,060 Whereas the smartphone map condition's sketches were more linear, more just a straight line with descriptions. 239 00:27:17,060 --> 00:27:25,690 So on this one you've got Nando's, a really important place, and the Polish embassy and posh houses. 240 00:27:25,690 --> 00:27:35,310 The sort of thing that you notice. So, in terms of comparison of the routes described, the terminology, the words used: 241 00:27:35,310 --> 00:27:43,740 there were far more subjective, emotional descriptions in the sketches for the smartphone than for 242 00:27:43,740 --> 00:27:48,870 the paper-based map, and also more use of the first person. So here, I don't know if you can read this, 243 00:27:48,870 --> 00:27:53,730 but for the smartphone condition, some of the descriptions used were things like: beautiful houses, 244 00:27:53,730 --> 00:27:58,440 nice flowers, more flowers, nice car, posh, 245 00:27:58,440 --> 00:28:02,610 nice bar, cool shops, posh houses, boring, confusing houses, 246 00:28:02,610 --> 00:28:07,140 and so on. Whereas in the paper map one, there were far fewer. 247 00:28:07,140 --> 00:28:09,000 They did mention the beautiful houses; 248 00:28:09,000 --> 00:28:18,390 if you know that part of London, you know why they described it, and the strange cars. So, well, this was an initial study.
249 00:28:18,390 --> 00:28:29,250 And what I think it shows is that the use of these smartphone apps lends itself to people adopting different strategies, 250 00:28:29,250 --> 00:28:34,800 which is: keep going, then check to confirm; if I get it wrong, I just go back and walk again. 251 00:28:34,800 --> 00:28:40,110 Whereas for the paper-based one, you've got to read, you've got to stop, look around, match up. 252 00:28:40,110 --> 00:28:49,470 So quite a different strategy. And I think, more generally, what it shows is that there are trade-offs when we use these smartphone apps, because, 253 00:28:49,470 --> 00:28:58,200 you know, you can rely on the technology, and you know, you don't need to learn the skill, and so you don't. 254 00:28:58,200 --> 00:29:02,790 But the trade-off with this is that you don't remember. 255 00:29:02,790 --> 00:29:04,440 But the benefit, as we've seen, 256 00:29:04,440 --> 00:29:13,020 is that you will use the time to look around and enjoy the surroundings, rather than worrying too much about whether you're going the right way. 257 00:29:13,020 --> 00:29:17,010 So that was an early type of phone app. 258 00:29:17,010 --> 00:29:25,770 Now, as you see, there are more sophisticated types of smartphone map apps, 259 00:29:25,770 --> 00:29:30,210 and this one here is Apple's Look Around, which has the schematic map. 260 00:29:30,210 --> 00:29:38,310 But then you can click, and it will show you a video or a 3D image of where it is that you're looking at. 261 00:29:38,310 --> 00:29:45,030 So you can combine these two together. This is in Cupertino, where I once upon a time used to work. 262 00:29:45,030 --> 00:29:50,220 And so you can zoom in, and then you can look at particular places before you go. 263 00:29:50,220 --> 00:29:53,010 And then you can look at them side by side with a street map. 264 00:29:53,010 --> 00:29:58,560 I haven't done a study of that, because I haven't been in Cupertino for a while.
265 00:29:58,560 --> 00:30:07,170 But it's interesting to think about how you can combine these, and how does that affect and extend our cognition? 266 00:30:07,170 --> 00:30:14,820 Do we look, do we think about navigation and what we're doing in this space, differently with these different types of maps? 267 00:30:14,820 --> 00:30:18,750 OK, so that was cognitive offloading; there's a lot more I could talk about for that, 268 00:30:18,750 --> 00:30:28,030 but I want to just move on to think a bit more about one of my other themes that I'm interested in, which is: how do we scaffold thinking 269 00:30:28,030 --> 00:30:35,760 in a way that helps us to deal with problem solving and decision making when it's hard? 270 00:30:35,760 --> 00:30:40,230 And virtual agents, chatbots, have been around now for many years, 271 00:30:40,230 --> 00:30:48,750 and they may be designed to answer users' or customers' questions, and direct you to a particular page on a website. 272 00:30:48,750 --> 00:30:57,270 But recently they have been designed for other uses. There's one called Replika, which acts like a therapist, 273 00:30:57,270 --> 00:31:04,710 and you can interact with this therapist, chatting away, typing in how you're feeling and so on. 274 00:31:04,710 --> 00:31:13,170 We were more interested in how we might use them as prompts, or ways in which to guide your thinking. 275 00:31:13,170 --> 00:31:17,730 And the project I'm going to talk about today is the Vizzy project. 276 00:31:17,730 --> 00:31:28,950 This is one of my PhD students, Leon Reicherts, and we were very interested in helping teams of clinicians make sense of complex health data. 277 00:31:28,950 --> 00:31:32,670 We have been working with Great Ormond Street Hospital for many years, 278 00:31:32,670 --> 00:31:38,790 helping them to think about how technology can improve what it's like to be working in a hospital.
279 00:31:38,790 --> 00:31:46,590 And this is one particular project. And so we developed an interface agent called Vizzy. 280 00:31:46,590 --> 00:31:51,100 And the idea was to probe these teams, to guide them where to look on a graph. 281 00:31:51,100 --> 00:31:56,220 Where's the place you should be looking to think about what's happening? Why is it that those two are close? 282 00:31:56,220 --> 00:32:03,340 And to get them to think a bit more about what the causes are behind this. 283 00:32:03,340 --> 00:32:13,360 So our agent, Vizzy, was designed to prompt teams into hypothesising about a particular set of data. 284 00:32:13,360 --> 00:32:22,780 The data that we looked at was obesity data, and obesity is something that's on the rise, and has been for the last 30 years. 285 00:32:22,780 --> 00:32:27,970 And there's lots of data there, and sometimes it's quite difficult to know how to make sense of it. 286 00:32:27,970 --> 00:32:33,940 So we designed Vizzy to not interfere too much or get in the way, 287 00:32:33,940 --> 00:32:43,030 but just to prompt occasionally, when it appears that they might be stuck, with open-ended questions, things like: Do you need a hint for analysing X? 288 00:32:43,030 --> 00:32:50,020 What do you think of this? Shall we move on? Did you consider the differences between variable X and Y? 289 00:32:50,020 --> 00:32:54,610 Did you consider what might have caused that sudden spike? 290 00:32:54,610 --> 00:33:03,130 So here's just an example of a visualisation. The teams could ask Vizzy to bring up data for particular parameters, 291 00:33:03,130 --> 00:33:09,100 and here it is for the different genders, girls and boys, and for developing and developed countries. 292 00:33:09,100 --> 00:33:15,040 And the idea here was to try and think: why is it that that graph is going straight, whereas there's a sudden spike here? 293 00:33:15,040 --> 00:33:17,140 What might have caused that?
294 00:33:17,140 --> 00:33:26,140 And they can do the same for mixes of the dimensions, so women and girls, and see that the graphs are quite different there. 295 00:33:26,140 --> 00:33:32,080 So, building a virtual agent can take a long time, as those of you who work in natural language processing know. 296 00:33:32,080 --> 00:33:42,790 And so in human-computer interaction, we cheat, in that we use what's called a Wizard of Oz approach. With Wizard of Oz, 297 00:33:42,790 --> 00:33:48,520 we have someone here who's a human, pretending to be the agent. 298 00:33:48,520 --> 00:34:01,720 And this is one of my students, Ethan, here, and he is there controlling the interface, and he sends the prompts at particular times and chooses which one. 299 00:34:01,720 --> 00:34:11,980 And this allows us to manipulate different criteria much more easily than if we were programming it from scratch to begin with. 300 00:34:11,980 --> 00:34:18,670 And so in this particular study, he acted as Vizzy and would send prompts: not very often, 301 00:34:18,670 --> 00:34:24,100 but occasionally, if a team had not discussed a pattern or trend being referred to, 302 00:34:24,100 --> 00:34:32,200 and there was a silence in the conversation for at least three seconds. And he was able to control the ones that he sent for different conditions. 303 00:34:32,200 --> 00:34:42,400 We ran various experiments looking at whether Vizzy spoke to the team or whether it just appeared as a text chatbot. 304 00:34:42,400 --> 00:34:48,940 But for today, I'm just going to mention some of the key findings about Vizzy per se. 305 00:34:48,940 --> 00:34:53,440 So this is how it would look. You'd have a team here. This would be the smart speaker here. 306 00:34:53,440 --> 00:35:00,190 And then they would ask Vizzy to change the visualisations, and they would talk about them, and then Vizzy might prompt, to say: Have you thought about this? 307 00:35:00,190 --> 00:35:07,990 What do you think about the sudden spike?
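[Editorial note: the wizard's trigger rule described in this part of the talk, send a prompt only if the team has not yet discussed the pattern and has been silent for at least three seconds, can be sketched as a small controller. This is a minimal illustration only, not the study's actual software; the class name, method names and prompt texts are invented.]

```python
import time

# Illustrative prompts of the kind described in the talk.
PROMPTS = [
    "Do you need a hint for analysing this variable?",
    "Did you consider what might have caused that sudden spike?",
    "Did you consider the differences between the two variables?",
]

class WizardController:
    """Toy Wizard-of-Oz controller: the human 'wizard' records when the
    team last spoke and which patterns they have already discussed; the
    controller says whether a prompt may be sent now."""

    def __init__(self, silence_threshold=3.0):
        self.silence_threshold = silence_threshold   # seconds of silence required
        self.last_speech_time = time.monotonic()
        self.discussed = set()

    def team_spoke(self, topic=None):
        """Called whenever the team talks; optionally note the topic covered."""
        self.last_speech_time = time.monotonic()
        if topic is not None:
            self.discussed.add(topic)

    def may_prompt(self, topic, now=None):
        """Prompt only if the topic is undiscussed AND the team has been
        silent for at least the threshold."""
        now = time.monotonic() if now is None else now
        silent_for = now - self.last_speech_time
        return topic not in self.discussed and silent_for >= self.silence_threshold
```

In use, the wizard would call `team_spoke` as the discussion unfolds and pick one of the prompts whenever `may_prompt` allows it, which mirrors the "three seconds of silence, undiscussed pattern" rule described above.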
So I'm just going to summarise the findings, rather than go through them in any detailed way. 308 00:35:07,990 --> 00:35:14,740 And we found that it encouraged the team to generate more hypotheses about obesity trends, 309 00:35:14,740 --> 00:35:23,740 and it got them to focus on the causes behind the different ways the lines increased in the graph. 310 00:35:23,740 --> 00:35:26,980 So here's a quote from one of the participants: 311 00:35:26,980 --> 00:35:33,040 "I liked that the questions were finding out more about the data. By answering the question, or even by looking at it, 312 00:35:33,040 --> 00:35:36,790 you would think about the consequences, and then you would ask yourself, 313 00:35:36,790 --> 00:35:41,680 why is this line more steady than the other? Which wouldn't necessarily happen without this system." 314 00:35:41,680 --> 00:35:48,610 So this work has just been published. We are very happy, and these are my other colleagues, in Tokyo. 315 00:35:48,610 --> 00:35:52,750 But here's an example of a conversation after a Vizzy prompt. 316 00:35:52,750 --> 00:35:56,260 So there's a silence as they're looking at this data over here, 317 00:35:56,260 --> 00:36:04,960 this visualisation, which is the change in prevalence of obese girls and boys for developing and developed countries, and Vizzy 318 00:36:04,960 --> 00:36:13,210 says: what might have caused the sudden spike? And then you see, for these team participants, they go through a number of different possibilities. 319 00:36:13,210 --> 00:36:16,360 Is it globalisation, getting lots of different foods? 320 00:36:16,360 --> 00:36:25,580 I think, obviously, computers, PlayStations; yes, children play less outside and get overweight. And it goes on and on. 321 00:36:25,580 --> 00:36:34,190 And then, once they've exhausted the possibilities, they think, well, maybe we should look at how it varies for men and boys.
322 00:36:34,190 --> 00:36:43,760 So we got a number of examples of conversations that happened like this when Vizzy was prompting, compared to when Vizzy wasn't. 323 00:36:43,760 --> 00:36:50,390 So I think it's just early days here, thinking about how we can design agents 324 00:36:50,390 --> 00:36:56,720 to augment our cognition, and to guide sense-making like that, and ideation. 325 00:36:56,720 --> 00:37:01,010 And as we were discussing what we might do next, we thought, well, 326 00:37:01,010 --> 00:37:07,280 maybe we might have to think about: is it good to slow down cognition rather than speeding it up? 327 00:37:07,280 --> 00:37:13,790 Because much of what we do is we design software to make things faster, more efficient, quicker to do. 328 00:37:13,790 --> 00:37:18,290 And we thought, well, actually, sometimes it's better to slow people down. 329 00:37:18,290 --> 00:37:27,050 And last summer, we carried out a study with my colleagues here looking at phishing scams. 330 00:37:27,050 --> 00:37:40,940 That is to say, as you know, you quite often just tap on a button, sorry, on an icon, or a "click here", and you've been zapped. 331 00:37:40,940 --> 00:37:49,940 I got caught last week, by the way, but that's another story. And so what we thought was, having this type of agent appear, 332 00:37:49,940 --> 00:37:58,640 "Are you sure you want to click on it? Have you looked at this or that?", might be sufficient to slow you down, to stop you making those mistakes. 333 00:37:58,640 --> 00:38:03,200 So I think it's quite interesting to think: when, and in what contexts, 334 00:38:03,200 --> 00:38:11,690 would it be useful to slow people down and their cognition, rather than always speeding them up? 335 00:38:11,690 --> 00:38:15,360 So we're looking at that a bit more in other contexts.
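[Editorial note: the "slow down" intervention described here, an agent that interposes a confirmation question before a risky click, might look something like the following sketch. The allow-list, function name and question wording are invented for illustration and are not the study's software.]

```python
from urllib.parse import urlparse

# Illustrative allow-list of familiar domains (invented for this sketch).
TRUSTED_DOMAINS = {"ox.ac.uk", "ucl.ac.uk"}

def slow_down_check(url):
    """Return a pause-inducing question before following an unfamiliar
    link, or None if the link's domain is on the allow-list: a toy
    version of a 'cognitive slowdown' agent for phishing."""
    host = urlparse(url).hostname or ""
    trusted = any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
    if trusted:
        return None
    return (f"You are about to visit '{host}'. "
            "Are you sure you want to click? Have you checked the sender?")
```

Note that a lookalike address such as `ucl.ac.uk.evil.example` fails the suffix check, so exactly the links most likely to catch someone in a hurry are the ones that trigger the pause.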
336 00:38:15,360 --> 00:38:24,480 The final case study, and the final theme that I want to look at, is what we're calling engaged intelligence, which is designing cognitive toolsets. 337 00:38:24,480 --> 00:38:31,470 And this is work with Wendy Jephson and Anna Leslie, who were at Nasdaq as part of the behavioural science team, 338 00:38:31,470 --> 00:38:35,310 but have now left to become part of a start-up called Let's Think. 339 00:38:35,310 --> 00:38:40,620 So they're very interested in how you can design technologies to get people to 340 00:38:40,620 --> 00:38:45,480 think, rather than letting AI take over and do the thinking on behalf of us. 341 00:38:45,480 --> 00:38:48,420 And one of the things that they did at Nasdaq was to develop new 342 00:38:48,420 --> 00:38:54,030 software tools for supporting complex investigative work in the finance world. 343 00:38:54,030 --> 00:39:07,890 And why is that? Well, you're probably all familiar with some dodgy dealings that go on: insider trading and other types of bad behaviour. 344 00:39:07,890 --> 00:39:18,300 And so they're very interested in how you can design technology to help people whose job it is to discover market abuse of this type or another. 345 00:39:18,300 --> 00:39:23,070 And the people who do this are called compliance officers, and Nasdaq, 346 00:39:23,070 --> 00:39:31,920 and banks and other financial companies, all have people who try to detect whether this is going on. 347 00:39:31,920 --> 00:39:36,750 And there are many different types of market abuse. Spoofing is one. 348 00:39:36,750 --> 00:39:40,170 And this is a type of market manipulation 349 00:39:40,170 --> 00:39:49,980 where the investor tries to move the price of a financial instrument up or down by placing a very large order. 350 00:39:49,980 --> 00:39:57,360 You're nodding your head, Nigel. Have you tried it?
So once the price has been shifted, 351 00:39:57,360 --> 00:40:04,590 the original order is cancelled, and new orders are placed on the opposite side, to take advantage of the favourable price. 352 00:40:04,590 --> 00:40:06,900 So this happens, and there are many other forms. 353 00:40:06,900 --> 00:40:14,790 And so these compliance officers, their job is to try and find out if this has occurred, and who is responsible or involved in it. 354 00:40:14,790 --> 00:40:22,050 So they have to collect and collate lots of data from several sources to conduct their investigations. 355 00:40:22,050 --> 00:40:27,930 And it's complex work to do, very labour intensive, to get to the bottom of some of these spoofs. 356 00:40:27,930 --> 00:40:32,310 What they do is they start by examining what are called trading alerts, 357 00:40:32,310 --> 00:40:38,370 and companies like Nasdaq have these systems, rule-based engines, that will search 358 00:40:38,370 --> 00:40:44,970 for patterns of behaviour that are associated with market abuse, from previous histories. 359 00:40:44,970 --> 00:40:49,290 And they will fire an alert, which is one of these things here, 360 00:40:49,290 --> 00:40:55,440 when something potential is detected, and it might be a buy or sell order that occurs close 361 00:40:55,440 --> 00:41:01,230 to a news story that might have influenced the price. And then, once they've got that, 362 00:41:01,230 --> 00:41:05,910 they go through these lists, and they have to determine which need further investigation. 363 00:41:05,910 --> 00:41:12,000 Most of them they just close down, because they're just perfectly normal. But some of them aren't, and those are the ones they need to detect. 364 00:41:12,000 --> 00:41:20,310 But to do that, it involves lots of steps. And this is a task analysis that Wendy did, and you start there. 365 00:41:20,310 --> 00:41:24,870 I'm not going to go through all of this; you go there, and there, and then, and there.
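[Editorial note: the rule-based alert engines described here can be illustrated with a deliberately simplified spoofing rule, flagging a trader who cancels a large order soon after placing it and then trades the opposite side. Real surveillance engines are far more sophisticated; the thresholds, field names and data model below are invented for illustration.]

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    trader: str
    side: str                             # "buy" or "sell"
    size: int
    placed_at: float                      # seconds since session start
    cancelled_at: Optional[float] = None  # None means it was never cancelled

def spoofing_alerts(orders, big_size=10_000, max_cancel_secs=5.0, window_secs=60.0):
    """Fire a toy alert when a large order is cancelled quickly and the
    same trader then places an order on the opposite side shortly after."""
    alerts = []
    for o in orders:
        # Only large orders that were cancelled soon after placement qualify.
        if o.cancelled_at is None or o.size < big_size:
            continue
        if o.cancelled_at - o.placed_at > max_cancel_secs:
            continue
        opposite = "sell" if o.side == "buy" else "buy"
        for other in orders:
            if (other.trader == o.trader and other.side == opposite
                    and 0 <= other.placed_at - o.cancelled_at <= window_secs):
                alerts.append((o.trader, o.placed_at))
                break
    return alerts
```

A real engine would, as the talk notes, combine many such pattern rules with histories of previous abuse; the point of the sketch is only that each alert is the output of a mechanical rule, and the hard investigative judgement still falls to the compliance officer.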
366 00:41:24,870 --> 00:41:34,260 And as you can see, it's quite complex. The green is what the experts can do; yellow is for people who have got middle expertise. 367 00:41:34,260 --> 00:41:44,940 So you can see you need to be quite well trained up, an expert, in order to be able to do this range of investigative work. 368 00:41:44,940 --> 00:41:49,500 So how do they do it? How do the experts, the green ones, do this? 369 00:41:49,500 --> 00:41:55,530 Well, a lot of it is done in their heads, and they might use notepads to jot down their thoughts. 370 00:41:55,530 --> 00:41:58,170 They will scan through thousands of these alerts. 371 00:41:58,170 --> 00:42:06,570 They'll sift through millions of comms, emails, texts, and they'll check various news feeds, and try and put this all together in their heads. 372 00:42:06,570 --> 00:42:13,230 And it's a huge cognitive effort, and lots of demand on their attention, and they need to switch between various resources. 373 00:42:13,230 --> 00:42:16,230 And they may forget that they've looked at that, and then they'll look again. 374 00:42:16,230 --> 00:42:27,060 So there's a big opportunity here to think about how we might design some cognitive tools to help them with this type of work. 375 00:42:27,060 --> 00:42:29,370 So what kinds might we develop? 376 00:42:29,370 --> 00:42:41,340 Well, we're all well aware that AI is usually the first thing that comes to people's minds, as an opportunity to automate some of the processes. 377 00:42:41,340 --> 00:42:49,450 And indeed, I think AI has a lot to offer for the time-consuming and error-prone parts of doing this type of investigative work. 378 00:42:49,450 --> 00:42:54,330 But what Wendy and Anna and their team did was to think differently.
379 00:42:54,330 --> 00:43:03,210 They had to think differently, to think out of the box, and to develop a toolset, an integrated set of tools that would enable them to do 380 00:43:03,210 --> 00:43:09,720 all the stuff they do in their heads much more systematically, and to externalise it so they could share it with others. 381 00:43:09,720 --> 00:43:10,800 And more importantly, 382 00:43:10,800 --> 00:43:21,210 they could bring together all the disparate parts of the information, and make more connections, and help to build up a visual picture that, 383 00:43:21,210 --> 00:43:25,440 as I say, can be shared. So they developed all of these tools. I'm not going to go through them. 384 00:43:25,440 --> 00:43:31,350 This is still very much a work in progress, but I'm just going to mention two or three of them. 385 00:43:31,350 --> 00:43:39,410 But the idea is that these tools would allow the compliance officers to externalise, rather than doing it in their heads, 386 00:43:39,410 --> 00:43:50,450 and this would mean the training would be much improved, and more people in your organisation could see and share what they thought was going on. 387 00:43:50,450 --> 00:43:53,870 So here are just three tools that they originally came up with. 388 00:43:53,870 --> 00:43:59,870 They've come up with some others since then. But the main one was the canvas. 389 00:43:59,870 --> 00:44:03,770 This would be a bit like, for those of you who've used Miro in the last two years, 390 00:44:03,770 --> 00:44:10,340 a big digital canvas that you can place tools on, and that you can sketch on. 391 00:44:10,340 --> 00:44:16,790 You can have visualisations appearing side by side, and one of these might be what's called the network, 392 00:44:16,790 --> 00:44:27,500 which allows you to see information between different people in a visualisation, so that you can compare and combine those.
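[Editorial note: the network tool described here, visualising who is connected with whom, rests on a simple underlying data structure: an adjacency map built from communication records. A minimal sketch, with invented names and toy data, might be:]

```python
from collections import defaultdict

def build_network(messages):
    """Build an undirected 'who communicated with whom' map from
    (sender, recipient) records: a toy version of the data behind the
    kind of network view a compliance toolset might render."""
    graph = defaultdict(set)
    for sender, recipient in messages:
        graph[sender].add(recipient)
        graph[recipient].add(sender)
    # Sort neighbours so the structure is stable and easy to display.
    return {person: sorted(links) for person, links in graph.items()}
```

Once the connections are externalised in a structure like this, rather than held in an officer's head, they can be drawn, compared side by side, and shared with colleagues, which is exactly the shift the toolset is aiming for.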
393 00:44:27,500 --> 00:44:31,340 And then there's a case builder, that helps you to build up your case, rather than 394 00:44:31,340 --> 00:44:36,800 it just being notes jotted down, and it can be easily shared amongst the different officers. 395 00:44:36,800 --> 00:44:42,230 So this is a very early design. It's gone through many iterations since then. 396 00:44:42,230 --> 00:44:47,840 But just to give you a sense: here's the case builder, where you build up, 397 00:44:47,840 --> 00:44:54,290 and these are some of the tools that you might place, open and close, to show who's connected with whom. 398 00:44:54,290 --> 00:45:03,050 So you can start to have this visualisation, and be able to see things side by side, which is very different from how they currently do it. 399 00:45:03,050 --> 00:45:06,520 So I see, with the time, that I'm 400 00:45:06,520 --> 00:45:16,150 coming to the end of my talk, so I'm going to finish with just summarising what I think this type of way of designing tools can do. 401 00:45:16,150 --> 00:45:20,170 Certainly, we know that AI can help reduce workload. 402 00:45:20,170 --> 00:45:26,560 It can automate lots of things. But there's an awful lot of work that we humans still do, and that we should still do, 403 00:45:26,560 --> 00:45:30,730 I should say, using our brains, and not just hand over to machines. 404 00:45:30,730 --> 00:45:37,270 But what we should do is provide and design tools to help humans do this type of 405 00:45:37,270 --> 00:45:44,210 investigative work, and to help them to externalise the picture, and help to design these toolkits. 406 00:45:44,210 --> 00:45:54,430 It's a bit like how we used to use diagrams to show the connections, but to make them more sophisticated, and to allow more team working to take place. 407 00:45:54,430 --> 00:46:04,630 And I think, you know, we may be working in the finance world, but I think this type of approach is very valuable for other kinds of detective work.
408 00:46:04,630 --> 00:46:12,310 And I'm sure the police would be very interested in this way of helping them to find out and solve crimes. 409 00:46:12,310 --> 00:46:16,360 So I'm going to finish now. I hope, through those three case studies, 410 00:46:16,360 --> 00:46:23,620 these three themes, I've shown how we can augment our minds through digital 411 00:46:23,620 --> 00:46:28,060 means, and that we shouldn't be handing over to AI to do everything for us, 412 00:46:28,060 --> 00:46:31,750 but we should be empowering and augmenting how we do things. 413 00:46:31,750 --> 00:46:33,460 And there are different ways in which we can do this. 414 00:46:33,460 --> 00:46:44,200 We can use agents, we can use these software toolkits, and we can use, as in the first one we did, mobile smartphones. 415 00:46:44,200 --> 00:46:50,050 But certainly there are things that are gained and things that are lost. And I think, more generally, it's quite interesting to think about that. 416 00:46:50,050 --> 00:46:55,990 So, the benefits of designing technology to augment us. 417 00:46:55,990 --> 00:47:02,440 The first one I would call cognitive versatility. Thank you, Eva. 418 00:47:02,440 --> 00:47:06,310 The idea of which is that it 419 00:47:06,310 --> 00:47:12,640 may free up mental capacity, through the cognitive tools that we develop and the 420 00:47:12,640 --> 00:47:17,170 various apps that we use on smartphones, such that we can engage in other activities, 421 00:47:17,170 --> 00:47:24,460 rather than having to spend time using them. And that's the one with the mobile phone apps. 422 00:47:24,460 --> 00:47:31,570 Then there's this idea of cognitive slowdown, rather than cognitive speed-up, which is counterintuitive to many software developers. 423 00:47:31,570 --> 00:47:38,800 But we think, at certain times, in certain contexts, it enhances sense-making capabilities.
424 00:47:38,800 --> 00:47:42,760 And then this third theme, which I think is really important, is cognitive empowerment: 425 00:47:42,760 --> 00:47:48,130 how can we help extend our thinking, to see new connections in data? 426 00:47:48,130 --> 00:47:55,390 And I think that's really important for us to focus on. But there are worries. 427 00:47:55,390 --> 00:48:04,180 And this is something, I think, for the students here to be looking at, which is: what happens if we lose our ability to think for ourselves, 428 00:48:04,180 --> 00:48:14,200 if we rely increasingly on the technology to do it for us? I think that scenario is what some people have been worried about within the AI world. 429 00:48:14,200 --> 00:48:20,470 But I think also, with some of these other technologies, we need to think: is this going to happen? 430 00:48:20,470 --> 00:48:26,260 Another one is we might trust the technology too much, and not notice when it makes errors. 431 00:48:26,260 --> 00:48:33,730 There's been some research on some of the explanations that have been developed for AI, showing that people rely on them too much, 432 00:48:33,730 --> 00:48:36,220 and they don't notice when it makes errors. 433 00:48:36,220 --> 00:48:44,620 And so that's another problem that we need to consider: we might be led astray and draw the wrong conclusions. 434 00:48:44,620 --> 00:48:51,700 So we need to be mindful of what these potential concerns are when developing these tools. 435 00:48:51,700 --> 00:48:57,550 I want to finish off now thinking about the future. 436 00:48:57,550 --> 00:49:07,120 So you've all heard of, or are probably sick to death of, the metaverse, and I'm going to leave that to Microsoft and Meta to fight out, and come up 437 00:49:07,120 --> 00:49:14,680 with something that we would all find useful for a few minutes occasionally. 438 00:49:14,680 --> 00:49:24,190 I think the future, actually, that we should be putting more of our efforts into is augmented reality.
439 00:49:24,190 --> 00:49:28,510 Augmented reality, for those of you who played Pokémon Go a few years ago, was fun. 440 00:49:28,510 --> 00:49:37,030 It was playful. You went out with your friends and your family, and you went searching for things, and they popped up all over the place. 441 00:49:37,030 --> 00:49:45,520 And I think now we have not just mobile phones with augmented reality apps, but we're starting to see glasses. 442 00:49:45,520 --> 00:49:55,420 So Snap came out with Spectacles earlier this year, and these allow you to design things to appear, and to allow people to be creative and imaginative. 443 00:49:55,420 --> 00:49:57,940 But, as I said, there are dangers with it, 444 00:49:57,940 --> 00:50:06,700 and I'm going to finish off with showing you the dystopian view of what augmented reality can do, and the utopian view. 445 00:50:06,700 --> 00:50:28,710 So this one is a video that was created by Keiichi Matsuda. 446 00:50:28,710 --> 00:51:03,130 [Video plays; the audio was not clearly captured in this transcript.] 450 00:51:03,130 --> 00:51:08,110 OK, that's enough of that. I think it would give you a headache; it's just overload, you get adverts blasting at you, 451 00:51:08,110 --> 00:51:17,610 you just have to take your glasses off. Versus what Snap are trying to promote, which is a tool for creativity. 452 00:51:17,610 --> 00:51:31,440 [Snap promotional video plays.] As an artist, I'm constantly thinking about the different ways that I can tell stories and share my vision of the world with people. 453 00:51:31,440 --> 00:51:36,120 When I first saw them, I was like: how does all this technology fit into something so small?
454 00:51:36,120 --> 00:51:41,850 The new Spectacles are going to allow you to overlay anything you want onto the world. 455 00:51:41,850 --> 00:51:49,530 I like using technology to delight people and immerse them in a world that they couldn't experience in reality, 456 00:51:49,530 --> 00:51:53,970 because I really wanted to use AR in a therapeutic sense. 457 00:51:53,970 --> 00:52:02,100 Looking at the ebb and flow of water and sea creatures, I think that really inspires my creation. 458 00:52:02,100 --> 00:52:06,390 As a Black artist, I feel like a lot of young up-and-coming artists are not represented. 459 00:52:06,390 --> 00:52:14,130 I wanted to showcase their work in augmented reality, gathering it in 3D and showing it in a way that is beautiful and just inspirational. 460 00:52:14,130 --> 00:52:19,890 Now I can actually walk around and see what I'm capturing, and capture shots of my friends. 461 00:52:19,890 --> 00:52:25,270 I feel like this generation is going to finally push the new digital frontier. 462 00:52:25,270 --> 00:52:28,020 This is going to change the game forever. 463 00:52:28,020 --> 00:52:36,060 So, I love New Mexico, and I really wanted to create a lens of New Mexico that would be an immersive history lesson. 464 00:52:36,060 --> 00:52:41,760 You're standing on the rim of the Valles Caldera. I love these historic signs. 465 00:52:41,760 --> 00:52:55,320 They're sort of these funny, archaic things, and I love the idea that I might bring new life to this old way that we used to bring people in. 466 00:52:55,320 --> 00:52:58,230 I think AR is about having a conversation with the world. 467 00:52:58,230 --> 00:53:03,870 I always go back to trying to ask really essential questions: what makes us human, and what makes us happy? 468 00:53:03,870 --> 00:53:09,600 What does it mean to be alive and communicating? I love poetry, a love of language, of words.
469 00:53:09,600 --> 00:53:16,890 What would it feel like to walk through a poem? Lens Studio is the Swiss Army knife of augmented reality. 470 00:53:16,890 --> 00:53:25,980 I think I want to finish there, with thinking about what it would be like to walk through a poem. 471 00:53:25,980 --> 00:53:35,770 I hope you've enjoyed listening to some of my stories and research on how technology can extend how we see. 472 00:53:35,770 --> 00:53:44,010 And I'd like to thank all of my colleagues and researchers, but especially these, who have worked on the projects I mentioned. 473 00:53:44,010 --> 00:53:51,150 And for those of you interested, tetrachromacy is for those, particularly women, who can see more colours than the rest of us. 474 00:53:51,150 --> 00:53:58,782 So thank you very much.