Hello and welcome to the Oxford Playhouse. My name is Marcus du Sautoy. I'm the Simonyi Professor for the Public Understanding of Science here at the University of Oxford. And this year we've got not just one but two events that we're bringing to the Oxford Playhouse. Next week, on the 11th and 12th of November, we've got the premiere of a new play that I've written called The Axiom of Choice, and I've just come from rehearsals with the great team that's putting it on. Tickets will be available at the box office as you go out, so I hope you'll join me for that in a week and a bit's time. The theme is all about free will, which is obviously very relevant to tonight's topic as well, and war and mathematics. But tonight we've got an amazing Simonyi Lecture lined up for you, testament to the wonderful speaker we have and the very interesting topic we're going to be talking about. The theatre tonight is rammed to the rafters, so thank you so much for coming out in such numbers. Tonight's speaker, Anil Seth, actually grew up around here, like myself, and he said to me just before we came on that he hasn't actually been back to the Oxford Playhouse since he was at school here. So it's wonderful to have him back here on home turf.
He is now Professor of Cognitive and Computational Neuroscience at the University of Sussex, where he is also director of the Centre for Consciousness Science. And he's also the author of this wonderful bestselling book, Being You. Tomorrow he will actually be at Blackwell's at about 11:00 to sign copies, so if you would like to go and get a copy, or bring your well-thumbed copy along, you can have a chat with him there. That'll be another chance for you to talk to him. But tonight he's here to give his Simonyi Lecture, entitled Consciousness in Humans and Other Things. So please give a big Oxford Playhouse welcome to Anil Seth.

Thank you very much, Marcus. And yes, it's lovely to almost see so many people here. I can't really see anybody at all because of the lights, but it is wonderful to be back. And today I am going to be talking about this subject of consciousness. Now, consciousness is one of our greatest remaining mysteries. At the same time, it's a phenomenon that we are each intimately familiar with. We all know what consciousness is: it's what goes away when we go under something like deep general anaesthesia. How many people have had deep general anaesthesia? Quite a few. So for those of you who haven't had it, give it a go. It's really very interesting.
I highly recommend it. It's a very existential experiment you can do on yourself. Of course, we also lose consciousness most nights when we go into dreamless sleep: not dreaming, but dreamless sleep. And then consciousness is what returns when we wake up in the morning, or come around in the recovery room. And when we are conscious, when we open our eyes in the morning or in the recovery room, our brains don't just process visual information. There's another dimension entirely. We see colours and shapes. We experience smells, sounds, emotions. There's experiencing happening, and a world appears. And within this world there is the experience of being a self, of being you or being me. This background experience of being somebody is probably the aspect of consciousness that we each cling to most tightly. But consciousness, very simply, is any kind of experience whatsoever. It's what makes life worth living. That's the intuitive definition. I want to give you a little bit more of a formal definition. This is from the philosopher Thomas Nagel, and it's the definition that I prefer; there's still a lot of disagreement about this kind of thing. Thomas Nagel put it like this: an organism has conscious mental states if and only if there is something it is like to be that organism. And I think what he means by that is that it feels like something just to be me, and it feels like something to be each one of you.
But it doesn't feel like anything to be this table, or this book, or this shoe, or probably this laptop computer, or this watch. It does feel like something to be a bat, although exactly what, only a bat will know. For some things in the universe there is experiencing going on, and for other things there isn't. Now, one reason I like this definition is for what it does not say. Consciousness is not the same thing as being intelligent, or being able to speak, or having language, or even having this explicit sense of personal identity. Any kind of experience counts. Now, putting it this way raises this problem, this challenge that's vexed scientists and philosophers for millennia, really, which is: how does it happen? On the one hand, we've got this mess of complex stuff inside our brains and bodies, the brain inside the skull, this electrified pâté. On the other hand, we have the redness of red, the sharpness of pain; we have the world of experience. How do they relate? Probably the most influential way of putting this big overall challenge was from the philosopher David Chalmers, who coined the idea of the hard problem of consciousness. He put it like this: it is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises.
Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does. We're all conscious. How does this happen? Well, the intuition that David Chalmers is driving at here is that even if we had in our hands a complete explanation of how the brain works as a complex object, and it is a very complex object. Each brain in this room has about 86 billion neurones and about a thousand times more connections, so that if you counted one connection every second, it would take you about three million years to finish. It's a very, very complicated object. But even if we knew everything about it, we might still be completely in the dark as to how and why it has anything to do with consciousness. They seem to be like different things. That's the hard problem of consciousness. Now, addressing the hard problem head on, as Chalmers puts it, might not be the only or the right way to go about it. In my work over the last decade or so, I've preferred to tackle what I've been calling the real problem of consciousness, partly just to annoy David Chalmers a little bit. And it goes like this. It's to accept that consciousness exists, and there will be some people, some philosophers, who will tell you we are wrong about that, and that consciousness isn't this kind of special, distinctive phenomenon at all.
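As an aside, the connection-counting claim above is easy to verify with back-of-the-envelope arithmetic (the figures here are just the ones quoted in the talk):

```python
# Sanity check of the claim: ~86 billion neurones, roughly a thousand
# times more connections, counted at one connection per second.
neurons = 86e9
connections = neurons * 1000             # ~8.6e13 connections
seconds_per_year = 365.25 * 24 * 3600    # ~3.16e7 seconds in a year
years = connections / seconds_per_year
print(f"{years / 1e6:.1f} million years")  # about 2.7 million years
```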
But let's say that it exists. In fact, consciousness really is the only thing we can be sure of; everything else is indirectly present, inferred through our consciousness. It exists, and it depends in some way on the brain and the body. So the question is: how can mechanisms, this complex neural circuitry inside our skulls, in the brain and the body, explain and predict and control properties of consciousness? Because that's, in general, what science does. It often doesn't have to tell you why a phenomenon is part of the universe in the first place. It's nice if it does, but often it doesn't. It just helps you explain the properties that the phenomenon has, predict when they're going to happen, and ideally be able to control them. That's the business of science. And the idea is that as we try to do this and build bridges of explanation (I'll give you some examples), bridges of explanation between what's happening in the brain and what's happening in experience, this sense of mystery about how something as apparently magical as consciousness can be so intimately associated with something as apparently mundane as mere stuff, that sense of mystery, begins to dissolve away, and may even disappear in a puff of metaphysical smoke. That's the hope. We're not there yet, but that's the overall strategy.
So rather than solving this hard problem head on, in one dramatic eureka moment of a solution where we say, ah, that's where the magic happens, the hard problem of consciousness may in fact dissolve over time. And there is a historical precedent for this, though not a perfect one. It wasn't that long ago, 150 years, certainly 200 years, that people thought that life was beyond the reach of science, that physics and chemistry could not account for the difference between the living and the non-living. At that time there was this philosophy of vitalism: the idea that there needed to be a special substance, an élan vital, a spark of life, that would explain this difference but was otherwise beyond the reach of science. Now, we don't understand everything about life yet; things like the origin of life are still fairly mysterious. But there's no sense of mystery about life any more: life is really something that fits in with our natural picture of the universe as complicated stuff. The sense of mystery about life wasn't solved; people didn't find the spark of life. It was dissolved, as biologists and chemists got on with the job of explaining properties of life in terms of physics and chemistry. It's not a perfect historical parallel, but what I think is important is that it tells us that what seems mysterious at one point in history, with the tools and the concepts that we have at that point,
doesn't always have to seem so mysterious, and we don't have to have a kind of revolution in physics or philosophy for that sense of mystery to dissipate. Now, life has many properties too; it's not just one thing. So if we think about consciousness, instead of treating it as this one big scary mystery, what are its different aspects? What are its different properties? I like to divide this into roughly three kinds of things. There's conscious level, which I already mentioned. This is the kind of global state of consciousness that goes away under anaesthesia and then comes back again, and might change in other states like sleep or hypnosis. Then, when you are conscious, you're conscious of something: the sights and sounds and smells and colours and shapes and people and places that populate your conscious experience at any given time. We can think about that as conscious content. And then finally there's conscious self: the experience of being you or being me, the experience of being somebody. That is probably, for many of us, the most important aspect of consciousness, and the aspect we might take most for granted as well. So today I don't have time to talk about conscious level, but I want to focus a little bit on conscious content and conscious self.
And then, towards the end of the talk, I'll explore some of the implications of what I've been saying for how we might think about consciousness beyond the human, and especially, given the times we live in, consciousness in machines and artificial intelligence. What should we think about that? So that's where we're going. But where we're going to start is somewhere fairly simple: we're just going to start with colour. Now, colour is such a prominent, pervasive part of our conscious lives, especially our visual lives. It brings meaning and beauty to the way we visually experience the world. What could be simpler than colour? But of course, colour is far from straightforward. The photoreceptor cells in our eyes have long been known to be sensitive, for most of us, to only roughly three wavelengths in the whole electromagnetic spectrum, which goes all the way from radio waves at one end, through infrared and ultraviolet, to X-rays and gamma rays at the other end. So this thin slice of reality is where we live, colour-wise. And even within that thin slice, the cells in our eyes are only sensitive to just three wavelengths. Yet out of those three wavelengths, the brain is able to conjure millions of distinct colours.
So what we experience when we experience colour is both less than what's really there, because it's just a sample of the electromagnetic spectrum, and because the wavelengths within that spectrum aren't themselves coloured; they're just different wavelengths of radiation, of energy. It's less than what's there, but it's also more than what's there, because we can conjure millions of colours from just three: out of three comes many. So what we see when we see colour is both less than and more than what's out there in the world. And I think that applies not just to colour, but to everything we experience. But let me stick with colour for a second, and here's just one illustration of this. This is one of my favourite visual illusions. The nice thing about talking about consciousness is that you can use some of these fun illusions. How many people have seen this one before? I'm going to see if I can have a look. A few of you. Okay. This is one of my favourites; it's called the Lilac Chaser illusion. I hope it's going to work in this theatre, but I'd like you to stare at the black cross in the middle and try not to blink or move your eyes. Let me know if something strange starts to happen and you start to see, maybe, a green disc. People are saying yes. And then if you now move your eyes and blink, what happens? A lot of murmuring happens, apparently. But what should happen is that the magenta discs come back, right?
So there is no green disc in this illusion at all. There are only magenta discs that turn off one after the other. What's happening here is actually three different things. The first is something called Troxler fading. In psychology, this is the phenomenon whereby, if something is in the periphery of your vision and it has blurry borders, it tends to fade away and be replaced by what's around it. The second thing is called apparent motion: when things turn on and off next to each other, the brain tends to experience movement. That's how cinema works, how television works; we infer things as smoothly moving from a series of static snapshots. And the third thing is colour opponency. For the brain, the opposite of magenta in the space of colour is green. So when the brain is adapting to the absence of magenta, you end up perceiving green instead. These three things together explain what's happening here. There's actually a fourth thing, which is that, as I've already told you, colours don't really exist out there in the world; they're a collaboration between the brain and the world. Now, magenta exists even less than other colours. And the reason is that you usually make magenta by mixing red and blue light together, although the light isn't actually red and blue.
But you mix these two lights together and you get magenta. And that's because the red and blue wavelengths sit at either end, at the extremes, of this thin slice of reality. So when the brain gets those wavelengths of light, it's expecting something in the middle, which would be experienced as green. When it doesn't get it, it has to make something up, and what it makes up is magenta. So what you're seeing when you're seeing green here is, in a sense, not not-green. I hope that makes sense; it didn't make much sense to me the first time I thought about this example. The details aside, what it underlines is that there can be this big difference between how things seem in our experience and how they are. And there's one simple idea which I think can explain these experiences, and, zooming out from the details, can explain all our experiences in one way or another. This is the idea that your brain is a prediction machine, and that everything you experience is your brain's best guess about what's going on out there in the world, or in here in the body: the brain's best guess about the causes of the sensory input that it gets. Now, this is a very old idea; it goes back in philosophy as far as you want. Certainly it goes back to Plato, and it's set out in Plato's Allegory of the Cave.
Prisoners are chained to the wall of a cave, and all they can see, all they have access to, are shadows cast on the wall by the light from a fire. And the prisoners take these shadows to be real, because that's all they have access to. Now, skipping a few thousand years and coming up to the present day, a more modern version of this way of thinking is, instead of prisoners trapped inside a cave, to try for a second to imagine what it's like to be your own brain. You are your brain, trapped inside this bony cave, this prison of a skull, trying to figure out what's going on out there in the world. Now, there's no light in the skull; there's no sound. It's completely dark and completely silent. All the brain has to go on when trying to figure out what's there is noisy, ambiguous, uncertain, and unlabelled sensory information: electrical signals that come in through the senses. These sensory inputs don't come with labels on, like "I'm from a hat" or "I'm from a table" or "I'm from the kidneys". They're just electrical signals. So to make sense of them, the brain has to combine these sensory signals with its prior knowledge, or expectations, about what's going on in the world.
And by combining this ambiguous sensory information with its prior expectations, the brain can make a best guess about what's going on. The idea here is that this best guess is what we perceive. The brain doesn't see light or hear sound; what we experience is the brain's best guess of the causes of those kinds of signals. I'll give you one more example of this. You might be familiar with this one, but we can think about it in this context. How many people have seen this one? This is a more common visual illusion. Still not everybody; okay. This is called Adelson's checkerboard. If you look at those two patches, A and B, hopefully they should look to you to be different shades of grey. Is everyone seeing that, different shades of grey? Okay. Now, because obviously it's an illusion, obviously that's not true. If I put the thing up here, you can see that they are in fact exactly the same shade of grey. Everybody see that it looks the same? And if you think I'm tricking you somehow, well, I'll just move this grey bar across, and you can see it really is the same shade of grey. There's no difference. And if I take it away, it looks different again. So what's going on here? Well, what's going on is that your brain has baked into its visual circuits the knowledge that objects in shadow appear darker than they really are.
And it also has this knowledge of how checkerboards work: that they alternate between brightness and darkness. You don't necessarily know that your brain has this knowledge about the world, but it does. And that's why we see B as lighter than it really is: because it's in the shadow cast on the checkerboard by the cylinder. So the brain's expectations can really deeply shape what we consciously experience. Now, I was going to skip this slide, but because I was introduced by Marcus, I have to have at least one equation in the talk, so that Marcus isn't disappointed. And this is the only one. It's just showing that there's a kind of mathematical way to think about this, a mathematical way that describes this whole idea of the brain as making best guesses about what's out there in the world. It's called Bayes' theorem, and Bayes' theorem goes back hundreds of years. It's really about how one should optimally reason in the face of uncertainty. The idea here is that we can have some starting situation, some prior belief about what's going on; that's the dashed curve on the left. And then we can get some new information; this could be sensory data coming into the brain, and in Bayes' theorem you call that the likelihood. That's the other dashed curve to the right, the slightly broader one. And then, in Bayes' theorem, the idea is this.
You combine these two curves, so you optimally integrate what you already know with whatever new information you get, and you come up with what's called the posterior belief. That's what I should believe, given what I already know and what's coming in. And the idea is that this is really what the brain is trying to do when it's trying to perceive things out there in the world. It already knows stuff about the world, but it gets new information all the time, so it has to update what it knows on the basis of this new information. Now, the first person, to my knowledge, who really wrote about perception in this way was the German polymath Hermann von Helmholtz. Helmholtz said that perception is a process of unconscious inference, and he called it unconscious inference to make the point that we're not aware of all this probabilistic wizardry going on under the hood. We're only aware of the result of it; we're only aware of the output. We don't know that our brain is doing all this clever stuff. But if it is, it really casts perception, I think, in a very different light. It flips how we think about things, even if we have never really thought about these things all that much before. Now, the classical view of how the brain does perception is something of this sort; this is the kind of picture I would see in textbooks when I was a student, many years ago.
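The curve-combining that Bayes' theorem describes can be made concrete. Below is a minimal sketch assuming the prior and likelihood are both Gaussian, the textbook case the two dashed curves suggest; the specific numbers are illustrative, not taken from the lecture slide:

```python
# Combining a Gaussian prior with a Gaussian likelihood: the posterior
# is a precision-weighted average of what you expected and what you saw.
def gaussian_posterior(prior_mean, prior_var, obs_mean, obs_var):
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs_mean)
    return post_mean, post_var

# Narrow prior (what the brain already expects) plus a broader likelihood
# (noisy new sensory evidence): the belief moves only partway to the data.
mean, var = gaussian_posterior(prior_mean=0.0, prior_var=1.0, obs_mean=2.0, obs_var=4.0)
print(mean, var)  # posterior mean 0.4, variance 0.8
```

Because the sensory evidence here is noisier than the prior (variance 4.0 versus 1.0), the posterior stays closer to the prior, which is exactly the "optimal compromise" the talk describes.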
The idea in this old classical view is that perception is a kind of outside-in, bottom-up process, where signals come into the brain from the retina and march deeper and deeper into the brain, with more complex features picked out as the signals go further in: simple things like edges and colours early on, more complex things like faces later on (the monkey face, because this is a monkey brain). Now, there may be some activity going in the other direction, but all the heavy lifting is done in this bottom-up, outside-in direction, as if the brain is reading the world through the senses. And that seems right. I mean, it seems kind of how things are: that there's a world, and my brain is just soaking it in through the transparent windows of my eyes and my ears and my other senses. But the prediction machine view turns this completely on its head. Instead of perception being this business of reading out sensory signals from the outside in, it's much more about perceptual predictions, which flow in the opposite direction, from the top down and the inside out: the green arrows here. The brain is always making predictions about the causes of sensory signals, and then using the sensory signals to update the predictions. It turns out that if the brain does this, and always tries to update its predictions to minimise the error that it gets, then it approximates,
253 00:24:57,890 --> 00:25:03,980 It starts to do this process of Bayesian inference. Um, but what it means for perception is that 254 00:25:05,000 --> 00:25:13,340 what we experience is an active generation, rather than a passive registration, of the world around us. 255 00:25:14,150 --> 00:25:18,590 So this theory is known by many names in neuroscience and psychology. 256 00:25:19,100 --> 00:25:24,580 Predictive processing is probably the most common. And the core idea, at least the way I think about this theory, 257 00:25:24,590 --> 00:25:32,510 the core idea is that perceptual content, what we perceive, comes from the inside out just as much, if not more, than from the outside in. 258 00:25:32,510 --> 00:25:41,030 And these sensory signals mainly serve to keep the brain's predictions tied to the world in ways that are not necessarily accurate, 259 00:25:41,300 --> 00:25:46,520 but which are most useful in guiding our behaviour. 260 00:25:47,390 --> 00:25:53,840 Now, William James, who was one of the founding figures of psychology, 261 00:25:54,530 --> 00:25:58,459 said something very similar over 100 years ago in the late 19th century. 262 00:25:58,460 --> 00:26:04,310 He said that whilst part of what we perceive comes through our senses from the object before us, 263 00:26:04,490 --> 00:26:09,650 another part, and it may be the larger part, always comes out of our own heads. 264 00:26:10,610 --> 00:26:14,150 He had the same idea, even if he didn't put it in the same way. 265 00:26:15,470 --> 00:26:19,730 Now we can see echoes and shadows of this process in our everyday lives. 266 00:26:19,910 --> 00:26:27,590 There's a phenomenon called pareidolia, which means seeing patterns, in particular faces. Faces are very powerful stimuli. 267 00:26:28,130 --> 00:26:34,160 Uh, we, you know, we're very sensitive to the presence of faces because we're very social creatures.
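The earlier claim that repeatedly minimising prediction error approximates Bayesian inference can be sketched as a toy loop. Everything here (the function name, the precisions, the learning rate) is an illustrative assumption, not a model from the lecture:

```python
# A toy predictive-processing loop: an estimate is nudged repeatedly to
# shrink two prediction errors, one against the prior belief and one
# against the incoming sensory signal, each weighted by its precision.
def settle(prior_mean, prior_precision, sensory, sensory_precision,
           lr=0.1, steps=200):
    mu = prior_mean  # start from what the brain already expects
    for _ in range(steps):
        err_prior = prior_mean - mu   # error against the prior belief
        err_sense = sensory - mu      # error against the sensory input
        # Nudge the estimate to reduce both precision-weighted errors.
        mu += lr * (prior_precision * err_prior
                    + sensory_precision * err_sense)
    return mu

# Prior mean 0 with precision 0.25 (i.e. variance 4); sensory input 2
# with precision 1.0. Error minimisation settles on the same
# precision-weighted answer that Bayes' rule gives directly: 1.6.
mu = settle(0.0, 0.25, 2.0, 1.0)
```

The point of the sketch is that the loop never computes a posterior explicitly; it only chases errors downhill, yet it lands on the Bayesian estimate.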
268 00:26:35,580 --> 00:26:45,440 And, um, what this means is the brain is always casting out the prediction of a face into, uh, the world to see where it sticks. 269 00:26:45,450 --> 00:26:47,670 And so we can see faces in clouds, 270 00:26:47,850 --> 00:26:54,060 and we can even see faces in things like the arrangements of windows on a building. Faces are very, very prominent. 271 00:26:55,230 --> 00:27:01,470 And, um, one of the things we've been doing at Sussex is taking this idea and playing with it a little bit in the lab. 272 00:27:01,860 --> 00:27:08,879 So we've been combining virtual reality with some machine learning or AI methods to, uh, 273 00:27:08,880 --> 00:27:17,430 to try to simulate what it would be like if the brain had too strong, overly strong, predictions to see faces everywhere. 274 00:27:17,640 --> 00:27:21,360 So this is now quite an old school neural network. 275 00:27:21,360 --> 00:27:27,150 It's very good at telling you what's in an image if you present it with that image. 276 00:27:27,870 --> 00:27:32,970 Um, but what we did was we used a tweak of this called Deep Dream, which runs it backwards. 277 00:27:33,120 --> 00:27:38,729 And what happens in this, uh, this manipulation is, instead of the network telling you what's in an image, 278 00:27:38,730 --> 00:27:46,010 you fix the output to something like dog, and you give it an image as input, and then you update the image until the whole thing settles down. 279 00:27:46,020 --> 00:27:54,000 So it's a bit of a brute force way of saying, how can we simulate what it would be like if the brain had a strong prediction to see dog? 280 00:27:54,630 --> 00:28:02,520 Um, and what we did with, uh, my postdoc Keisuke Suzuki is we took this idea and we took some panoramic video of Sussex campus, 281 00:28:02,670 --> 00:28:06,840 and we applied this method to every frame within the video.
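The "fix the output and update the image" procedure can be sketched with a deliberately tiny stand-in for the deep network. The real work used a deep convolutional network; here a single linear "dog" template plays its role, and all names and numbers are illustrative assumptions:

```python
import random

# Toy stand-in for the Deep Dream manipulation: a tiny linear "classifier"
# scores an image for the class "dog". Instead of reading the score out,
# we fix the target class and climb the gradient, editing the IMAGE
# until it "looks like" a dog to the network.
random.seed(0)
dog_template = [random.gauss(0, 1) for _ in range(16)]  # stands in for learned weights

def dog_score(image):
    # Dot product: how strongly the image matches the "dog" template.
    return sum(w * p for w, p in zip(dog_template, image))

def dream(image, lr=0.05, steps=100):
    # Gradient ascent on the image: for a linear score, the gradient
    # with respect to each pixel is just the corresponding weight.
    image = list(image)
    for _ in range(steps):
        for i, w in enumerate(dog_template):
            image[i] += lr * w  # nudge each pixel toward "more dog"
    return image

before = [random.gauss(0, 1) for _ in range(16)]
after = dream(before)
# The edited image now scores strictly higher for "dog" than the original.
```

In the real Deep Dream the gradient is backpropagated through many nonlinear layers, which is what produces the characteristic swirling dog faces rather than this toy's uniform nudge; the fix-the-output, update-the-input logic is the same.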
282 00:28:06,840 --> 00:28:14,880 And then, um, gave people a VR headset, which they could wear, and then they experienced Sussex campus in a very unusual way. 283 00:28:15,090 --> 00:28:18,330 So I mean, it is different from Oxford, but it's not this different normally. 284 00:28:19,760 --> 00:28:26,180 There's a few too many dogs. But, um, what's interesting about this for me is that it's a sort of computational model, 285 00:28:26,180 --> 00:28:31,100 but not of what people do or any kind of cognition. 286 00:28:31,100 --> 00:28:39,140 It's a model of a different kind of experience. It's a way of using computational models to model how we experience the world. 287 00:28:39,560 --> 00:28:47,420 Um, what we've been doing just this week, actually, because computers have got faster and faster, is we can now do this in real time. 288 00:28:47,600 --> 00:28:50,960 So if you visit my lab, we can put this headset on, 289 00:28:51,080 --> 00:28:58,250 and right then, in the moment, you know, you can have this very strange, almost psychedelic-like experience of what's going on around you. 290 00:28:58,250 --> 00:29:01,430 And of course, we can tweak it in various different ways. 291 00:29:01,580 --> 00:29:05,090 We're also taking the same method. And this isn't just fun. 292 00:29:05,090 --> 00:29:08,389 I mean, it is fun. And a lot of science starts off just like that, to be honest, 293 00:29:08,390 --> 00:29:13,580 just by playing around to see what you can do, to see what kinds of things you can explore. 294 00:29:13,790 --> 00:29:19,579 But we've also been taking this same method now and applying it to try to understand how and why 295 00:29:19,580 --> 00:29:23,809 different kinds of visual hallucinations are actually the way they are and how they're different.
296 00:29:23,810 --> 00:29:29,240 So people with Parkinson's disease have visual hallucinations quite often. People with Charles Bonnet syndrome, 297 00:29:29,240 --> 00:29:32,960 which is where people lose a lot of vision through macular degeneration, 298 00:29:33,080 --> 00:29:37,790 they also sometimes have hallucinations. Psychedelics involve hallucinations as well. 299 00:29:37,790 --> 00:29:38,990 They're all kind of different. 300 00:29:39,170 --> 00:29:45,709 But through this method, we can start to understand how and why different kinds of hallucinations are different from each other, 301 00:29:45,710 --> 00:29:50,600 and therefore understand a bit more about perception in the here and now as well. 302 00:29:51,050 --> 00:29:57,200 But there's a sort of single take-home from all this, which is that we can think of hallucination 303 00:29:57,200 --> 00:30:00,290 here as a kind of uncontrolled perception. 304 00:30:00,290 --> 00:30:04,549 It's what happens when the brain loses its grip, 305 00:30:04,550 --> 00:30:08,690 when the brain's predictions lose their grip on their causes in the world. 306 00:30:09,140 --> 00:30:11,870 And now, by the same token, 307 00:30:11,870 --> 00:30:19,550 we can then think of normal perception in the here and now as a kind of controlled hallucination, where the brain's best guesses are, 308 00:30:19,850 --> 00:30:24,710 uh, reined in by, controlled by, their causes in the world. 309 00:30:26,160 --> 00:30:29,580 And I think this way of putting it is important, because when we think about hallucination, 310 00:30:29,580 --> 00:30:33,899 we often think of it as, you know, something completely different from how we do things 311 00:30:33,900 --> 00:30:37,350 normally. It's another category. People see things, hear things that aren't there. 312 00:30:37,710 --> 00:30:44,490 But when we experience things without hallucinating, we have this kind of direct, veridical access to how things really are.
313 00:30:44,700 --> 00:30:50,160 I don't think it's like that. I think there's a continuity, a spectrum, and we occupy various parts of that spectrum. 314 00:30:50,460 --> 00:31:00,180 All of our perceptual experience comes from the inside out, and it's all calibrated in some way by sensory input from the outside in. 315 00:31:01,920 --> 00:31:08,930 Now, one implication of this idea is that since we all have different brains, we will all have different experiences. 316 00:31:08,940 --> 00:31:12,090 We will all differ on the inside, just as we do on the outside. 317 00:31:12,270 --> 00:31:17,370 The novelist Anaïs Nin put this very nicely in her 1961 book, Seduction of the Minotaur, 318 00:31:17,370 --> 00:31:23,819 when she said we do not see things as they are, we see them as we are. And as 319 00:31:23,820 --> 00:31:28,810 one last example of this, I want to show you this. I think you've probably seen this before, right? 320 00:31:28,830 --> 00:31:33,239 Remember this? It's almost ten years ago now that this image first exploded across the internet. 321 00:31:33,240 --> 00:31:40,020 So this is a photo of a dress. How many people see this as a white and gold dress? 322 00:31:41,420 --> 00:31:45,470 Quite a few. How many people see it as a blue and black dress? 323 00:31:46,100 --> 00:31:50,270 Many more. How many people don't know what the [INAUDIBLE] I'm talking about? Okay. 324 00:31:50,280 --> 00:31:53,639 Not many. I mean, time has passed, so one never knows. 325 00:31:53,640 --> 00:31:57,990 And the lighting in here is strange. So, you know, this came out in 2015. 326 00:31:57,990 --> 00:32:04,080 What I think is amazing about this is that for those people who see it one way, you know, 327 00:32:04,080 --> 00:32:10,250 the white and gold people out there, it's very hard to imagine what it would be like to see it in a different way. 328 00:32:10,260 --> 00:32:15,260 Right? You see it the way you see it. But of course it's the same.
329 00:32:15,260 --> 00:32:19,999 It's the same picture. And the reason we see it differently, we can come back to that later. 330 00:32:20,000 --> 00:32:25,729 It probably has to do with how our brains take into account the, you know, the ambient light, 331 00:32:25,730 --> 00:32:30,469 which in this room is quite yellowish, by the way. Those people who say it's blue and black, you're right. 332 00:32:30,470 --> 00:32:33,590 The real dress is blue and black. Um, so well done you. 333 00:32:35,080 --> 00:32:41,920 Um, but the white and gold people: you might be wrong, but your brains typically assume that you're outdoors and happy, so, you know, lucky you. 334 00:32:45,010 --> 00:32:51,940 This example, you know, exploded into the public consciousness because the difference was so big 335 00:32:51,970 --> 00:32:55,570 that it became noticeable when people started arguing about what do you see. 336 00:32:56,350 --> 00:33:02,679 But that obscures, I think, a deeper point, which is that we will all experience things differently. 337 00:33:02,680 --> 00:33:08,380 And most of the time those differences won't be large enough that they'll surface into our language and behaviour. 338 00:33:08,410 --> 00:33:12,640 You know, we all differ on the outside in terms of body shape, skin colour and so on. 339 00:33:12,970 --> 00:33:17,020 And those differences don't have to be very large and we can still notice them, because we can see them. 340 00:33:17,530 --> 00:33:24,580 But differences on the inside are much, much harder to measure, because they're private: I have my experience and you don't. 341 00:33:24,790 --> 00:33:31,719 We use language to kind of paper over the differences, and it also just seems as though I see the world as it is, you know? 342 00:33:31,720 --> 00:33:37,810 And so why should anybody see it differently? And for me, this has been a very interesting challenge.
343 00:33:37,810 --> 00:33:43,910 We don't know all that much about the diversity of perceptual experience in this way. 344 00:33:43,930 --> 00:33:50,290 So one project that I've been involved with over the last 2 or 3 years is called the Perception Census. 345 00:33:50,500 --> 00:34:00,040 And this has been a very ambitious study involving, so far, some 40,000 people, each doing hours of experiments, to try to understand, 346 00:34:00,370 --> 00:34:04,659 you know, how we each experience the world in our own unique way. 347 00:34:04,660 --> 00:34:12,310 And we've so far had, um, 40,000 people from over 100 countries, and ages from 18 to 80, 348 00:34:12,970 --> 00:34:18,010 uh, to try to paint a picture of what I've been calling perceptual diversity. 349 00:34:18,340 --> 00:34:21,639 And I use that term because I want to emphasise that it applies to all of us; 350 00:34:21,640 --> 00:34:30,250 it's not something that's specific to people with so-called neurodivergent conditions like autism or ADHD, though we want to study that, too. 351 00:34:30,430 --> 00:34:33,430 But we all differ in how we experience the world. 352 00:34:33,850 --> 00:34:38,610 I'm dwelling on this because I think there's an important implication. 353 00:34:38,620 --> 00:34:42,219 We haven't analysed the data very much yet, so I don't have any results to show you. 354 00:34:42,220 --> 00:34:46,540 But even just from the examples that you've seen, 355 00:34:47,110 --> 00:34:55,629 I think one consequence of thinking about things this way is that it can help us cultivate a bit of humility about our own take on things: 356 00:34:55,630 --> 00:34:59,830 the way I see things is not the only way; other ways are possible.
357 00:35:00,040 --> 00:35:09,220 And so my hope is, and it is very much a hope, that by revealing a bit more about how we differ in our perceptual experience of the world, 358 00:35:09,370 --> 00:35:15,970 we will be able to build some new platforms that will allow better understanding and communication between people, 359 00:35:16,300 --> 00:35:23,830 because we live in perceptual echo chambers. And the first step to getting out of an echo chamber is to realise that you're in one. 360 00:35:25,320 --> 00:35:31,170 Okay, so in the next part of the talk, I want to move on from perception of the world to perception of the self. 361 00:35:31,920 --> 00:35:36,150 And it's easy to take the self for granted, but we should not. 362 00:35:36,690 --> 00:35:43,650 And it's also easy, I think, to make a simple mistake about where the self fits into this picture of consciousness. 363 00:35:44,100 --> 00:35:47,819 You know, the easy, how-things-seem view is something like this: there's a world out there, 364 00:35:47,820 --> 00:35:50,850 we perceive the world, sensory inputs come into our eyes. 365 00:35:51,310 --> 00:35:57,210 And the self, in this view, is the thing that does the perceiving, the essence of you or me, 366 00:35:57,390 --> 00:36:04,980 that receives the perceptions of the world, makes plans, decides what to do next, and then does that thing. 367 00:36:05,130 --> 00:36:08,580 We make some actions in the world, and the world changes, and round and round we go. 368 00:36:08,580 --> 00:36:12,840 We sense, we think, we act. That might be the how-things-seem view. 369 00:36:13,110 --> 00:36:16,200 But how things are is, I think, quite different. 370 00:36:17,300 --> 00:36:24,020 In this view, both experiences of the world and experiences of the self are kinds of perception. 371 00:36:24,020 --> 00:36:29,780 The self is not the thing, the essence, the you-ness or the me-ness, that does the perceiving.
372 00:36:30,020 --> 00:36:39,860 The self is itself a kind of perception, or rather a bundle, a collection, of related but different perceptions. 373 00:36:40,640 --> 00:36:45,960 Um. So the self, too, can be thought of as a kind of controlled hallucination, 374 00:36:46,650 --> 00:36:50,550 the purpose of which is to keep the body alive. 375 00:36:51,270 --> 00:36:56,479 Now, thinking about it this way, the first thing that comes up is that there are many aspects, 376 00:36:56,480 --> 00:36:59,840 many different ways, in which we each experience being a self. 377 00:36:59,840 --> 00:37:01,850 It might feel like the self is one thing, 378 00:37:02,150 --> 00:37:08,060 but it breaks down in many ways. There's the experience of being and having a particular object in the world, the body: 379 00:37:08,090 --> 00:37:14,450 this is my body and that isn't. There's the experience of perceiving the world from a particular first-person point of view. 380 00:37:15,470 --> 00:37:20,840 There's the volitional self, the experience of being the cause of particular actions, 381 00:37:21,440 --> 00:37:24,750 or the author of my own thoughts. When people talk about free will, 382 00:37:24,770 --> 00:37:30,350 this is one of the things they talk about. And then there are things like the narrative and the social self. 383 00:37:30,380 --> 00:37:37,140 This is where personal identity comes in. So there's a sense of self in me that's bound to a name and identity, 384 00:37:37,160 --> 00:37:44,780 memories of the past and plans for the future. Now, all of these aspects of self might seem unified, intrinsically, 385 00:37:44,780 --> 00:37:52,100 maybe necessarily so, but we know they aren't. There are many examples in the lab and also in psychiatry and neurology 386 00:37:52,400 --> 00:37:56,960 where these different aspects of self come apart in all sorts of different ways.
387 00:37:57,470 --> 00:38:04,310 And I think what this tells us is that the experience of being a self, being you, is not something to take for granted. 388 00:38:04,550 --> 00:38:09,560 It's a fragile construction of the brain, for the brain and the body. 389 00:38:11,020 --> 00:38:19,060 Now I want to focus on just one aspect of this, which is the bodily self: the experience of being and having a particular body. 390 00:38:19,360 --> 00:38:22,570 Now again, we don't really think about this very much, right? The body is just there. 391 00:38:23,050 --> 00:38:26,260 It's this kind of meat robot that takes me from one meeting to another. 392 00:38:27,730 --> 00:38:33,670 But if you think about it from the brain's perspective, the body is also inaccessible. 393 00:38:33,670 --> 00:38:38,799 The brain only gets electrical signals that, in this case, come from the body rather than the world, 394 00:38:38,800 --> 00:38:44,290 so it's always having to make a best guess about what is and what is not the body. 395 00:38:44,500 --> 00:38:48,850 And there's a lovely demonstration of this, I love this, which is called the rubber hand illusion. 396 00:38:48,850 --> 00:38:56,770 You've probably seen this before. I won't start it just yet. And this shows how easily our experience of what is the body can be altered. 397 00:38:57,520 --> 00:39:02,440 Now, in the rubber hand illusion, what happens is that the guy in blue has his real hand hidden from sight 398 00:39:02,440 --> 00:39:07,720 behind this partition, and then a fake rubber hand is placed in front of him. 399 00:39:08,470 --> 00:39:09,820 Um, and he's looking at that. 400 00:39:09,940 --> 00:39:17,710 Then the experimenter in green takes two paintbrushes and [INAUDIBLE] strokes both hands simultaneously while the guy's looking at the fake hand.
401 00:39:18,070 --> 00:39:20,500 And what happens, um, for most people, 402 00:39:20,830 --> 00:39:29,200 is that they start to experience a rather uncanny sensation that maybe this rubber hand is in some way part of the body. 403 00:39:30,110 --> 00:39:32,000 And that's the best way to test whether it's working or not. 404 00:39:35,770 --> 00:39:40,270 Um, now, I can't resist a bit of a diversion here. 405 00:39:40,270 --> 00:39:46,839 The standard story told about this is that it's to do with the integration of senses from different modalities. 406 00:39:46,840 --> 00:39:51,700 So you've got vision and you've got touch, and the brain sees this hand and it feels the touch, 407 00:39:51,700 --> 00:39:57,070 so it puts them together and comes to the conclusion that the rubber hand is your hand. 408 00:39:57,460 --> 00:40:01,630 Um, but there's another story, actually, which is also part of it. 409 00:40:01,630 --> 00:40:09,040 My colleague at Sussex, Peter Lush, has for a number of years been pointing out that, um, in the rubber hand illusion, 410 00:40:09,040 --> 00:40:14,319 you're basically in a situation where you're strongly encouraged to expect that you should experience something weird. 411 00:40:14,320 --> 00:40:18,309 You know, you're given a fake hand, asked to stare at it, and then asked: is it your hand? 412 00:40:18,310 --> 00:40:22,990 It's a very strong, um, example of what psychologists call demand characteristics. 413 00:40:22,990 --> 00:40:28,150 You're in a situation which is implicitly encouraging you to have a particular kind of experience. 414 00:40:28,420 --> 00:40:35,190 So to test this, Pete did, uh, the rubber hand illusion on over 350 people at Sussex, uh, 415 00:40:35,200 --> 00:40:44,409 a couple of years ago, and found that the degree to which people experienced it correlated with, um, how hypnotisable they were.
416 00:40:44,410 --> 00:40:47,440 We're all hypnotisable, uh, to some degree or other. 417 00:40:47,440 --> 00:40:52,150 It's not a good thing, it's not a bad thing, it's just a thing. It's a stable psychological trait. 418 00:40:52,450 --> 00:40:58,689 And those people who were more suggestible, uh, experienced the rubber hand illusion to a stronger extent. 419 00:40:58,690 --> 00:41:02,139 So I think that tells us that that's at least part of the story. 420 00:41:02,140 --> 00:41:04,480 And it's very compatible with what I've been saying, too, 421 00:41:04,480 --> 00:41:10,930 because what it shows is that the experimental context of the rubber hand illusion sets up a kind of expectation. 422 00:41:10,930 --> 00:41:14,380 You're giving the brain an expectation about what to experience, 423 00:41:14,620 --> 00:41:23,500 and so it does. So that's a bit about having a body, and how that is actually something the brain constructs on the fly. 424 00:41:23,510 --> 00:41:32,240 There's also this experience of being a body, of being a living flesh-and-blood organism with emotions and moods. 425 00:41:32,690 --> 00:41:43,370 And I think at the base of all experience of self, and possibly of all experience at all, is this simple background feeling of being alive. 426 00:41:44,380 --> 00:41:51,430 Um, this experience of being a body. And this brings up another form of perception entirely, which is called interoception. 427 00:41:51,430 --> 00:41:58,770 Interoception is rather overlooked in science, psychology, neuroscience, but it is very important. 428 00:41:59,090 --> 00:42:08,380 Interoception is all about how the brain senses and controls the interior of the body, because again, from the brain's perspective, 429 00:42:08,410 --> 00:42:17,260 the internal physiological state of the body is also not directly accessible, with the brain locked away in its bony skull.
430 00:42:18,040 --> 00:42:22,180 And if it's trying to figure out what the heart is doing, how the lungs are doing, 431 00:42:22,660 --> 00:42:31,510 it has to infer that, make a best guess about it, on the basis of, um, sensory input and its prior expectations. 432 00:42:31,510 --> 00:42:42,729 So now we have the same dance of prediction and prediction error, but now it's all taking place, or largely taking place, inside the body itself. 433 00:42:42,730 --> 00:42:47,650 The brain is making predictions about what's happening in the body and updating those predictions. 434 00:42:48,400 --> 00:42:55,040 So what happens when it does this? When the brain does this out there in the world, we might experience objects and people and places. 435 00:42:55,040 --> 00:43:03,930 But what happens when the brain is making predictions about the heart or the lungs, or the overall physiological condition of the body itself, 436 00:43:03,950 --> 00:43:10,010 basically, how good a job it's doing at staying alive? Well, the idea here is that that's what emotion is. 437 00:43:11,240 --> 00:43:17,150 Now again, we tend, or there's been a tendency, to think of emotion as a very different kind of thing. 438 00:43:17,180 --> 00:43:25,100 We have thought, we have perception, we have emotion. The overall picture that's emerging here is that these are all aspects 439 00:43:25,100 --> 00:43:28,910 of the same underlying process of the brain making and updating predictions. 440 00:43:29,210 --> 00:43:37,880 And when it comes to emotion, these predictions are about the interior of the body. That's what an emotion is, on this view. 441 00:43:38,330 --> 00:43:43,520 William James again said something very similar, in sort of slightly flowery Victorian language.
442 00:43:43,520 --> 00:43:48,230 He said bodily changes follow directly the perception of the exciting fact, 443 00:43:48,410 --> 00:43:52,670 and our feeling of these same changes as they occur is the emotion. 444 00:43:54,950 --> 00:43:57,060 Here's the key intuition. 445 00:43:57,080 --> 00:44:05,180 The example that William James famously gave was: if somebody sees a bear and they feel frightened and run away, what's happening? 446 00:44:05,420 --> 00:44:12,800 It could be that they see the bear, that causes them to feel afraid, and the feeling of fear causes them to run away. 447 00:44:13,430 --> 00:44:17,720 But what James said, and what I think is happening also in this story, 448 00:44:18,050 --> 00:44:23,420 is that the person sees the bear, and that immediately induces all kinds of changes in the body. 449 00:44:23,600 --> 00:44:26,930 Adrenaline starts coursing. Cortisol shoots around. 450 00:44:27,320 --> 00:44:31,250 Heart rate goes up. And it's the perception of those changes 451 00:44:31,760 --> 00:44:37,700 that is the emotion of fear. So it's kind of flipping things around, 452 00:44:38,090 --> 00:44:42,380 once again, between how things seem and how they are. 453 00:44:43,880 --> 00:44:49,210 Now there's one final aspect of this story, which I think fleshes it out a little bit. 454 00:44:49,230 --> 00:44:55,890 Why do emotions feel the way they do, while, say, visual experience feels a different way? 455 00:44:56,250 --> 00:44:59,280 Well, these predictions about the interior of the body, 456 00:45:00,200 --> 00:45:04,070 you know, they don't care where things are. They're not about figuring out what's out there in the world. 457 00:45:04,100 --> 00:45:07,310 They really care about controlling and regulating the body. 458 00:45:07,670 --> 00:45:13,639 That's what, um, this process is fundamentally, um, doing.
459 00:45:13,640 --> 00:45:17,510 It's for keeping the body in the states compatible with being alive, 460 00:45:17,510 --> 00:45:22,430 keeping heart rate where it needs to be, keeping blood pressure where it needs to be, and so on. 461 00:45:23,000 --> 00:45:27,170 So interoceptive predictions are about controlling things rather than finding things out. 462 00:45:27,170 --> 00:45:29,750 When you can predict something, you can control it pretty well. 463 00:45:30,380 --> 00:45:35,390 So the idea here is that's actually why the brain is a prediction machine in the first place: 464 00:45:35,390 --> 00:45:43,880 because that basic process evolved to keep the body alive, to provide an effective way of regulating the body. 465 00:45:44,210 --> 00:45:49,210 Now, there are many other lines of work that support this, which I'll just skip over, 466 00:45:49,220 --> 00:45:55,340 but it sort of has ancestry in many different areas of psychology and neuroscience. 467 00:45:56,280 --> 00:46:02,440 But I think it explains quite nicely why different experiences are the way they are and not some other way. 468 00:46:02,460 --> 00:46:08,820 Visual predictions underpin visual experiences of objects and people, because that's what those predictions are trying to do. 469 00:46:08,820 --> 00:46:12,360 They're trying to figure out where things are, how they're moving, what they are, 470 00:46:12,810 --> 00:46:21,360 but interoceptive predictions underpin emotions and this simple feeling of being alive, because that's what these predictions are all about. 471 00:46:21,600 --> 00:46:24,600 They're about controlling, regulating the body. 472 00:46:25,380 --> 00:46:31,950 Now I'm cutting a long story very short here, but if you pull this thread long enough, at least for me, 473 00:46:31,950 --> 00:46:41,250 you get to this kind of picture where conscious experience, and consciousness itself, is intimately related to our nature as living systems.
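The idea that an interoceptive prediction works as a control target rather than a guess can be sketched as a toy regulation loop: the brain holds a fixed "predicted" heart rate, and action changes the body to shrink the prediction error, rather than revising the prediction. The function name, gain, and numbers are all illustrative assumptions, not a model from the lecture:

```python
# Toy interoceptive control loop: the prediction error is resolved by
# ACTING on the body (changing heart rate) instead of updating the belief.
def regulate(heart_rate, predicted_rate=60.0, gain=0.3, steps=50):
    trace = [heart_rate]
    for _ in range(steps):
        error = predicted_rate - heart_rate   # interoceptive prediction error
        heart_rate += gain * error            # action moves the body, not the belief
        trace.append(heart_rate)
    return trace

trace = regulate(100.0)   # start with a racing heart
# Each step shrinks the error, and the rate settles near the predicted 60.
```

The contrast with the earlier perceptual case is the whole point: same error signal, but here the fixed quantity is the prediction and the adjustable quantity is the body.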
474 00:46:41,550 --> 00:46:51,480 Because this whole, um, edifice of the brain as a prediction machine evolved over many millions of years, 475 00:46:51,480 --> 00:46:59,490 develops in each of us, and operates moment to moment in light of this fundamental biological imperative to stay alive. 476 00:46:59,490 --> 00:47:02,969 The reason the brain makes predictions about the world and updates them is 477 00:47:02,970 --> 00:47:07,980 because that circuitry has been there from the get-go to keep the body going. 478 00:47:07,980 --> 00:47:13,830 So consciousness, in this view, has less to do with something like intelligence, and more to do with something like being alive. 479 00:47:15,370 --> 00:47:22,029 And this brings me to René Descartes. René Descartes, one of the most famous philosophers of all time, 480 00:47:22,030 --> 00:47:27,540 caused all sorts of mischief in the philosophy of mind by separating the mind and the body. 481 00:47:27,580 --> 00:47:34,450 And we've struggled to put them back together ever since. But he also separated life from consciousness. 482 00:47:35,110 --> 00:47:39,930 And, um, this view is called being a beast machine because, listening to what he wrote, 483 00:47:39,940 --> 00:47:44,739 he was quite uncharitable about whether non-human animals were conscious. 484 00:47:44,740 --> 00:47:48,820 He called them beast machines. He said that, without minds to direct their bodily movements, 485 00:47:49,210 --> 00:47:54,610 animals must be regarded as unthinking, unfeeling machines that move like clockwork. 486 00:47:55,120 --> 00:48:02,290 So for Descartes, being alive was irrelevant to consciousness, or to the kind of consciousness that matters.
487 00:48:02,950 --> 00:48:07,120 And I've come to think almost the opposite is the case: 488 00:48:07,750 --> 00:48:16,270 that being alive is why we are conscious, that we perceive and experience the world around us and ourselves within it 489 00:48:16,540 --> 00:48:21,000 with, through, and because of our living bodies. 490 00:48:21,010 --> 00:48:25,810 We are not kind of meat-based computers. We are feeling machines. 491 00:48:27,760 --> 00:48:36,459 So I want to finish, in just the next five minutes, by talking about consciousness in other things, and especially in artificial intelligence, 492 00:48:36,460 --> 00:48:43,420 because I think this view of consciousness, as intimately tied to our nature as living systems, has some interesting things to say about this. 493 00:48:44,080 --> 00:48:47,260 So the idea of AI has been with us for a long time, 494 00:48:47,260 --> 00:48:49,420 although it's very, very much more prominent now. 495 00:48:49,990 --> 00:48:56,740 And the idea that AI could become not only smart but also conscious has also been around for a long time. 496 00:48:56,740 --> 00:49:03,400 Alan Turing, who basically gave us many of the ideas about computation as we use them now, 497 00:49:03,790 --> 00:49:11,380 also seeded the idea that AI could be conscious when he asked the famous question whether machines can think, um. 498 00:49:11,800 --> 00:49:18,730 And in doing this, he also tied consciousness in AI to something that we might call intelligence, 499 00:49:18,880 --> 00:49:22,990 thinking, you know: if an AI can think, maybe it's conscious. 500 00:49:22,990 --> 00:49:27,819 We see this also in a lot of science fiction, whether it's HAL from 2001, 501 00:49:27,820 --> 00:49:34,330 where intelligence and consciousness are bound up not with a body, but with this kind of disembodied mind.
502 00:49:34,750 --> 00:49:36,940 Ex Machina, a brilliant film by Alex Garland, 503 00:49:36,940 --> 00:49:45,100 also plays beautifully with this idea of artificial intelligence becoming conscious. And for a long time, 504 00:49:46,190 --> 00:49:56,180 these ideas, these possibilities of building machines that had experience, that were sentient, were the realm of literature and science fiction. 505 00:49:56,630 --> 00:50:00,470 But now it seems we might actually be able to do that. 506 00:50:00,470 --> 00:50:02,660 And in fact, some people think we're already there. 507 00:50:02,660 --> 00:50:09,680 A couple of years ago, a Google engineer called Blake Lemoine got fired from Google because he claimed that the language model, 508 00:50:09,680 --> 00:50:12,830 the chatbot he was working on, was actually conscious. 509 00:50:13,310 --> 00:50:21,590 Not that many people took him that seriously, but the fact that he even, you know, raised the question was itself pretty interesting. 510 00:50:21,680 --> 00:50:27,680 So there is this question here: can artificial intelligence lead to real consciousness? 511 00:50:29,300 --> 00:50:34,610 Answering this question is fundamentally important because it has a huge number of ethical and moral implications, 512 00:50:34,610 --> 00:50:41,270 not only for how we treat AI systems, but for how we understand and deal with our own interactions with them. 513 00:50:41,780 --> 00:50:46,219 And the ground here is really muddy. I read this in the New Statesman last year.
514 00:50:46,220 --> 00:50:50,690 It was from just when there was a lot of fuss and bother about OpenAI, 515 00:50:51,200 --> 00:50:57,259 and the paragraph talks about artificial general intelligence and says that, um, 516 00:50:57,260 --> 00:51:02,089 company documents state that the board's purpose is to decide when the company has developed highly 517 00:51:02,090 --> 00:51:07,819 autonomous artificial general intelligence that outperforms humans at most economically valuable work, 518 00:51:07,820 --> 00:51:12,410 and to ensure this technology reshapes society in a manner that's safe and benevolent. 519 00:51:12,530 --> 00:51:21,830 In other words, the board exists to decide when the machine is sentient, when it is alive, and to ensure the new consciousness is moral. 520 00:51:22,220 --> 00:51:26,180 So this is mixing all kinds of things together which are actually very different. 521 00:51:26,180 --> 00:51:35,570 Consciousness is having an experience. Intelligence is doing the right thing at the right time. And being alive is something else altogether. 522 00:51:36,500 --> 00:51:45,260 So we get confused, I think, talking about consciousness in AI when we mix consciousness and intelligence together; they are very different things. 523 00:51:45,260 --> 00:51:51,980 They may correlate quite often in living creatures: the more intelligent you are, the more ways you have of being conscious. 524 00:51:52,460 --> 00:52:02,270 But fundamentally they're different concepts, and I think we confuse them because we're very susceptible to particular psychological biases. 525 00:52:02,270 --> 00:52:04,520 We tend to be very anthropocentric creatures. 526 00:52:04,760 --> 00:52:11,570 You know, we see the world through a human lens, and we think we're intelligent and we know we're conscious, 527 00:52:11,840 --> 00:52:15,440 so we tend to put the two together and confound them all the time.
528 00:52:15,440 --> 00:52:18,500 We're also very anthropomorphic creatures. 529 00:52:18,710 --> 00:52:26,450 We tend to project human-like qualities onto things on the basis of what might turn out to be superficial similarities, 530 00:52:26,750 --> 00:52:33,620 and that's become very prominent with this explosion of language models that we have. 531 00:52:33,860 --> 00:52:40,790 Because language, it seems to us, is something that is very distinctively human and very much associated with intelligence. 532 00:52:41,240 --> 00:52:43,820 So when something is able to speak to us fluently, 533 00:52:44,960 --> 00:52:52,760 it's very compelling for us to infer not only that it can speak to us fluently, which it can, but also that it is conscious, 534 00:52:52,770 --> 00:52:55,159 that there's a conscious mind behind that. 535 00:52:55,160 --> 00:53:03,590 And I think that is much more a reflection of our biases than an accurate insight into what's going on inside these systems. 536 00:53:03,860 --> 00:53:07,190 By the way, it's often said that language models hallucinate when they make stuff up. 537 00:53:07,700 --> 00:53:12,860 That's part of the problem, too. When we say that, we're already assuming something like having an experience. 538 00:53:13,130 --> 00:53:18,950 I think it's much more accurate to say they confabulate. They make stuff up without knowing what they're doing. 539 00:53:20,810 --> 00:53:28,730 There's a deeper reason here, a deeper reason why I'm very sceptical of the whole idea that AI or computers could be conscious. 540 00:53:28,970 --> 00:53:34,820 And that's because the more you look at it, the more different brains are from computers. 541 00:53:35,180 --> 00:53:39,650 Now, there's a very sharp distinction here. To give you just one example: 542 00:53:40,670 --> 00:53:45,770 in a computer, there's a sharp distinction by design between the hardware and the software.
543 00:53:46,100 --> 00:53:50,370 What a computer does is independent of what it is. 544 00:53:50,390 --> 00:53:55,879 That's why computers are useful: you can run the same program on different ones, and they do the same thing. 545 00:53:55,880 --> 00:54:00,830 In a brain, there is no distinction between what we might call the mindware and the wetware. 546 00:54:01,040 --> 00:54:07,580 There's no sort of sharp threshold. You can't take what's in my brain and run it on somebody else's brain. 547 00:54:08,060 --> 00:54:12,710 Brains are very much more complicated than computers. 548 00:54:13,040 --> 00:54:22,670 And when we reduce ourselves to computers, I think we really do a massive disservice to how rich 549 00:54:22,940 --> 00:54:26,750 human brains and other animal brains really are. 550 00:54:27,940 --> 00:54:33,459 Now, if these ideas are a little bit on the right track, then it means that with computers and AI 551 00:54:33,460 --> 00:54:40,300 we can certainly simulate consciousness, simulate intelligence and so on, but we can't necessarily implement it. 552 00:54:40,870 --> 00:54:46,239 And this is taken for granted in other things: if we have computer simulations of weather systems, 553 00:54:46,240 --> 00:54:50,500 we never expect it to get wet or windy inside a computer simulation of a storm. 554 00:54:50,500 --> 00:54:52,270 Right? It's just always a simulation. 555 00:54:53,050 --> 00:55:00,700 But for some reason we think that when we have a computer simulation of a brain, we will actually get consciousness out of it. 556 00:55:00,730 --> 00:55:03,760 And I think that's actually very unlikely. 557 00:55:04,270 --> 00:55:08,469 Um, now, to finish: I might be wrong. 558 00:55:08,470 --> 00:55:09,790 There are still things to worry about.
559 00:55:09,790 --> 00:55:15,279 Many people disagree with me on this, and think that consciousness is the kind of thing that computers could have. 560 00:55:15,280 --> 00:55:22,250 There are many theories of consciousness, and many of them provide blueprints for how one might build a conscious machine. 561 00:55:22,290 --> 00:55:26,649 I think they're all on the wrong track in this sense. But I might be wrong. 562 00:55:26,650 --> 00:55:34,810 They might be right. And I just want to end by saying there are many reasons why we should not build conscious AI, even if it is very unlikely. 563 00:55:35,500 --> 00:55:39,160 Many reasons. But I think the main reason is that if for some reason we were able to, 564 00:55:39,700 --> 00:55:43,600 we would risk an ethical catastrophe, because we would have introduced into the world 565 00:55:43,840 --> 00:55:48,790 a vast new potential for forms of suffering that we might not even recognise. 566 00:55:48,790 --> 00:55:56,589 It would be a very immoral, unethical thing to do. And even AI that merely seems conscious is also problematic. 567 00:55:56,590 --> 00:56:01,570 It is also bad. And that's almost already here, as we've seen with these language models. 568 00:56:01,990 --> 00:56:08,139 There are many other examples, which I'll skip, but we get ourselves into all kinds of moral quandaries 569 00:56:08,140 --> 00:56:14,140 if we build technologies that seem conscious, even if they are not, because either we end up caring about them, 570 00:56:14,440 --> 00:56:18,610 even though they're just lifeless hunks of silicon and code, or we don't, 571 00:56:18,610 --> 00:56:24,309 and we treat them badly, even though we can't help feeling that they are conscious. 572 00:56:24,310 --> 00:56:28,900 There's this kind of potential for cognitively impenetrable illusions of consciousness here. 573 00:56:28,900 --> 00:56:32,950 You know, these two lines are different lengths, but you'll never be able to see them.
574 00:56:33,400 --> 00:56:36,610 Sorry, they're the same length, but you will never be able to see them as the same length. 575 00:56:36,610 --> 00:56:46,419 It's a cognitively impenetrable illusion. The same might go for AI: I have a hunch that we might not be able to avoid experiencing AI systems as conscious, 576 00:56:46,420 --> 00:56:52,659 and that gets us into these really difficult problems. One of my former mentors, the philosopher Daniel Dennett, 577 00:56:52,660 --> 00:56:59,080 who I think gave the first of these lectures 25 years ago, gave a nice thought which I think should guide us. 578 00:56:59,080 --> 00:57:02,740 Sadly, we lost Daniel Dennett this year; he died around April. 579 00:57:03,090 --> 00:57:10,450 Um, but he said we should treat AI as tools rather than colleagues, and always remember the difference. 580 00:57:11,460 --> 00:57:15,690 I think that's very wise. We are much more than machines. We shouldn't sell ourselves so cheaply 581 00:57:16,050 --> 00:57:18,840 as to think that we can be replaced by a language model. 582 00:57:19,560 --> 00:57:24,389 But I want to finish on a more positive note and just point out that consciousness is not just this, 583 00:57:24,390 --> 00:57:29,780 um, intellectual mystery that has fascinated people like me for many years. 584 00:57:29,790 --> 00:57:38,339 It's also a mystery that matters. And that's why I think it's a particularly important subject to study. In medicine, 585 00:57:38,340 --> 00:57:42,510 a better understanding of consciousness can help us deal with conditions in psychiatry 586 00:57:42,510 --> 00:57:49,770 and in neurology, to address the mechanisms rather than just treating the symptoms. In technologies, not just AI but also virtual reality: 587 00:57:49,770 --> 00:57:57,060 how we develop and interact with these technologies needs a deeper understanding of how we perceive and experience the world and the self.
588 00:57:57,510 --> 00:58:00,510 Society, which is facing all kinds of challenges, of course, 589 00:58:00,510 --> 00:58:07,830 will benefit from the ability to understand and build bridges of empathy between people who experience things in different ways. 590 00:58:08,310 --> 00:58:16,800 We need to make decisions about how we treat non-human animals, um, and even how we treat people before birth 591 00:58:16,980 --> 00:58:22,980 and at the end of life, too. All of these turn on what we can say about the possibility of consciousness in 592 00:58:23,010 --> 00:58:28,770 non-human animals, in unborn humans, and in people towards the end of life. In law, 593 00:58:28,890 --> 00:58:32,760 when we hold people responsible depends on what we think about free will. 594 00:58:33,150 --> 00:58:37,860 Now, do we have some consciousness that floats above the brain and makes us do things? 595 00:58:37,860 --> 00:58:41,720 No, we don't. It's a more graded, complex phenomenon. 596 00:58:41,730 --> 00:58:46,170 But of course, it matters when and how we hold people responsible for their actions. 597 00:58:46,800 --> 00:58:52,800 We can also think about how to make life better for those of us who aren't particularly ill or suffering, 598 00:58:52,800 --> 00:58:59,760 but just cultivate more positive forms of conscious experience by understanding how they might arise. 599 00:59:00,300 --> 00:59:05,220 And again, to understand that we are much more than meat-based computers. 600 00:59:05,550 --> 00:59:08,960 But fundamentally, consciousness is worth studying. 601 00:59:08,970 --> 00:59:11,730 It's a bit like the Everest of intellectual problems, really. 602 00:59:11,880 --> 00:59:21,410 It matters because it is there, and for me, it's a real privilege to try to understand this most deep, profound, but also personal of mysteries. 603 00:59:21,420 --> 00:59:22,920 So thank you very much for your attention.