This is Bioethics Bites, with me, David Edmonds, and me, Nigel Warburton.

Bioethics Bites is made in association with the Oxford Uehiro Centre for Practical Ethics and made possible by a grant from the Wellcome Trust. For more information about Bioethics Bites, go to www.practicalethics.ox.ac.uk or to iTunes U.

A train is hurtling towards five people. It's out of control. You're on a footbridge, standing next to a very obese man. The only way to save the five is to push the man off the footbridge to his certain death. His bulk would stop the train and save five lives. So should you do it? Should you give him a shove? Most people would say no. Utilitarians say yes: you should take one life to save five. Now it turns out that the answer you give will depend on how much serotonin there is flowing through your brain. This raises an intriguing possibility: in the future, might we be able to alter people's moral behaviour with concoctions of chemicals? That's been the research topic of Molly Crockett, now based in Zurich but formerly of Cambridge University.

Molly Crockett, welcome to Bioethics Bites.

Thank you very much.

The topic we're going to focus on is brain chemistry and moral decision-making. You've done a lot of research on serotonin levels and how those can affect behaviour. Could you just say a little bit about that?

Yeah, so we've done a number of experiments looking at how boosting or depleting serotonin levels influences decision-making in terms of reacting to being treated unfairly. We use a game from economics called the Ultimatum Game, which has two players, a proposer and a responder. The proposer suggests a way to share a sum of money with the responder, and the responder can either accept the offer, in which case both players get the money, or reject the offer, in which case neither player gets any money. A lot of studies have shown that responders will reject offers that they think are unfair, to prevent unfair proposers from getting any money.

So if I were playing this Ultimatum Game with you, I might have £30 and say, I'll give you £5 and keep £25, and then you have to decide whether that's a fair deal or not. And if you refuse it, neither of us gets the money. But if it seems like a fair deal, we both get the amounts stated.

Exactly. And when this game first came out, it actually surprised a lot of economists, because their models predicted that people would always accept an offer, no matter how unfair, because some money is better than no money.
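To make that classical prediction concrete, here is a minimal sketch of the game's payoff structure in Python. This is an editorial illustration, not material from the interview or the study; the function names and numbers are hypothetical.

# Illustrative sketch of the Ultimatum Game described above.

def ultimatum_payoffs(total, offer, accepted):
    """Return (proposer, responder) payoffs for one round."""
    if accepted:
        return total - offer, offer  # both players get the stated amounts
    return 0, 0                      # rejection: neither player gets anything

def purely_monetary_responder(offer):
    """The classical economic prediction: accept any positive offer,
    because some money is better than no money."""
    return offer > 0

# The example from the interview: £30 to split, £5 offered, £25 kept.
accepted = purely_monetary_responder(5)
print(ultimatum_payoffs(30, 5, accepted))  # (25, 5): the model says accept

A responder who cares only about money accepts any positive offer; as the discussion goes on to note, real responders often don't.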
But it turns out, of course, that a lot of people would rather have nothing than see someone who's treated them unfairly get the lion's share. So this spurred a whole storm of research looking at social preferences: how we value our own outcomes, but also how those relate to the outcomes of others.

So you've got this Ultimatum Game set up, and then what happens when you start playing around with serotonin levels?

What we found is that when you lower people's serotonin levels when they're in the role of responder, they're more likely to reject offers they think are unfair. And conversely, when you boost serotonin levels, this makes them less likely to reject unfair offers.

So there seems to be this link between having a low-serotonin state and being unwilling to let people get away with things that are perceived as unfair.

Exactly. So we're now starting to explore the motivations that seem to be driving this effect. One idea is that lowering serotonin makes people more concerned about unfairness, and another possibility is that it just makes them more vengeful or spiteful. So we're doing some other experiments to explore those possibilities.

And when you were doing these experiments, did everybody respond in more or less the same way?

One interesting thing that we found in our experiment, where we enhanced serotonin function with citalopram, is that individuals who scored higher on empathy tended to show stronger effects of the drug on their behaviour. We gave people a separate questionnaire asking them questions like 'I find it easy to put myself in someone else's shoes', or 'I find myself getting very involved in characters' emotions in novels and in films'. People who tended to answer yes to those kinds of questions showed much stronger effects of the serotonin drug on their behaviour in the Ultimatum Game. So what we think might be going on is that our serotonin drug might be enhancing some sort of emotion or motivation that was already stronger in highly empathic individuals: something like the aversion to causing harm or damage to someone else. We know that serotonin plays a strong role in aggression and in regulating aggression, and if you think about it, rejecting an unfair offer is a sort of aggressive, retaliatory act. So what might be going on is that more highly empathic individuals might have a stronger aversion to causing harm in the first place, which could have been enhanced by the serotonin drug.
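The direction of that finding can be pictured with a toy threshold model. This is again an editorial sketch: the thresholds below are invented for illustration and are not taken from the experiments; only the direction of the shift mirrors the reported result.

# Toy model: a responder rejects offers below a fairness threshold,
# and the threshold shifts with serotonin state (hypothetical values).

def rejects(offer, total, threshold):
    """Reject if the offered share falls below the fairness threshold."""
    return offer / total < threshold

thresholds = {
    "depleted serotonin": 0.40,  # low serotonin: rejects more unfair offers
    "baseline": 0.30,
    "boosted serotonin": 0.20,   # boosted serotonin: tolerates more unfairness
}

for state, threshold in thresholds.items():
    print(state, "-> rejects £10 of £30:", rejects(10, 30, threshold))
# 10/30 is about 0.33: the depleted responder rejects, the others accept.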
And we saw evidence for this in a separate domain, when we asked people to judge whether it's morally acceptable to harm one person in order to save many others.

So these are the classic trolley problems.

And what we found is that the serotonin drug also made people less likely to say it's acceptable to harm one in order to save many. So this is separate evidence for the idea that serotonin might be promoting people's aversion to causing harm to others.

It's interesting the way you describe that, because some people would put it not so much as causing harm to others, but as sacrificing one to save the many. If you say 'causing harm to others', it looks like an obviously immoral action. But if you talk about sacrifice, it becomes something like a morally difficult but appropriate response.

Absolutely. And this is why questions of whether serotonin is a moral enhancer or a moral inhibitor are so complicated: because within the domain of philosophy and ethics you have these different schools of moral thought, one of which, the utilitarian school, says that it is acceptable to sacrifice one to save many, whereas other perspectives, like the deontological perspective, suggest that there are certain actions that are just morally wrong. Here we have the neurochemistry coming in as an additional layer. Serotonin seems to push people more towards the deontological side, but that's not to say it's making them more or less moral.

Actually, there is at least one more broad-brush moral theory. If you talk about Aristotelianism, with its emphasis on the development of a virtuous character, that does seem to be somewhat threatened by this kind of discovery: that people's responses are affected to such a degree by things of which they were probably unaware, and that they could be so easily swayed by whether they've had sufficient protein in their diet. This seems to suggest that being a morally virtuous person might be more difficult than some philosophers have perhaps supposed.

I think that's an interesting possibility. I mean, one point is that the effects of these manipulations are fairly subtle, so they're not going to dramatically change someone's character, as you say, but they will have effects at the margins. Another thing that springs to mind is that once we find out about these things, we can learn how to put ourselves in situations that are likely to produce morally beneficial outcomes.
Maybe a sort of revamp of the concept of Aristotelian character and virtue ethics is that we should try to learn as much as possible about the situational factors that influence our moral behaviour, so that we can act with the goal of being as moral as possible: put ourselves in situations that are going to promote virtuous behaviour, and pre-emptively avoid situations that are likely to produce negative behaviour.

When you published your results, there was a lot of press attention. Some of it presented a kind of caricature of the implications of this kind of research. Could you maybe talk a bit about that? Was there anything that you felt went too far in the conclusions it drew from what you'd actually done?

The tricky thing about the work that we're doing is that these neurochemical systems are extremely complicated. They're very widespread in the brain; they have very far-reaching effects. Serotonin, for example, in addition to influencing moral judgement and behaviour, also influences sleep, sex, appetite, and other aspects of cognition like learning. So it's not at all straightforward to make sweeping statements like 'more serotonin is a good thing, less serotonin is a bad thing'. These systems are very tightly regulated, self-regulating in fact.

There have actually been huge breakthroughs in neuroscience in the last decade, and many people are excited about the possibilities. But there are a significant number of people who think that the whole enterprise is reductionist: that making claims about the nature of humanity on the basis of doing some experiments on people's brains, or on the sorts of chemicals they ingest and so on, is too simplistic, and doesn't really show us what human beings are like.

I think this reaction to the research could be motivated by a bit of unease, let's say, with the implications of this work. Ultimately, what it suggests is that people aren't as in control of their decisions as they might think they are, and in particular the work that we've done on moral values could make people uneasy, because people have this conception that moral values are fixed and stable and core to their character. The fact that we can shift them around below people's awareness is, I think, a little bit scary. So one reaction could be to say, well, people are just way more complicated than that, and it's impossible to reduce something as important and profound and complex as moral judgement to a single chemical in the brain.
My response is: well, first of all, of course, morality is not just about serotonin levels or oxytocin levels or electrical impulses in the ventromedial prefrontal cortex. It's much more complex than that, of course. But we do these experiments where we hold all else equal and see what happens when we shift one part of this complex apparatus, and we do find effects. And I think that the implications of this work are rather hopeful, in fact. They point to ways in which we could improve conflict resolution, for example.

That's interesting. How could manipulating serotonin levels lead to any kind of change in conflict resolution?

There are two ways. The first is more of an idea, not related to the neurobiology itself: simply the idea that our values seem not to be as stable as we think. If we can get that idea out there, backed up by brain research, it could actually encourage people to listen to those whom they disagree with. There's a great study that came out in Science in the last year, led by Carol Dweck at Stanford. They did this work in Israel and Palestine. They randomly assigned Israelis and Palestinians to read one of two articles: one article suggested that aggressive groups have a fixed nature, and the other suggested that aggressive groups have a changeable nature. Those who read the article about changeable groups were more willing to meet with the other side and hear their point of view, and more willing to compromise on issues of contention, like the status of Jerusalem, for example. From a game-theoretic perspective, this is essentially trying to get people out of bad equilibria, because if you think that your opponent is never going to budge, then there's no point in investing any resources into negotiating with them. But if we can get this idea out there, that people's moral values are actually not fixed but can change, then maybe people are going to be a little bit more willing to listen to others.

In that experiment you describe, though, it could be a placebo effect: by telling people a falsehood, you get to a better outcome, because the people believe that character is malleable. But if it's not true, it's not necessarily going to produce the end result.

Yeah, that is a good point. And that, I guess, leads to the second possibility, which is that we might be able to discover ways to chemically make people less attached to their moral beliefs. It's sort of a fanciful idea, you know, getting negotiators to pop a few pills before heading to the table.
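The bad-equilibrium logic mentioned a moment ago can be made concrete with a minimal expected-value sketch. This is an editorial illustration with hypothetical numbers, not anything from the Dweck study itself.

# Negotiating costs something up front and pays off only if you believe
# the other side is capable of moving.

def expected_value_of_negotiating(p_opponent_budges, gain_if_deal, cost):
    """Expected payoff of investing in negotiation, given your beliefs."""
    return p_opponent_budges * gain_if_deal - cost

# If you believe the other side is fixed (p = 0), negotiation can never
# repay its cost, so no one invests in it: the bad equilibrium.
print(expected_value_of_negotiating(0.0, 100.0, 10.0))  # -10.0: don't bother
# Believing values can change (p > cost/gain) flips the decision.
print(expected_value_of_negotiating(0.3, 100.0, 10.0))  # 20.0: worth a try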
But if we could actually discover what makes people so attached to their beliefs that they're willing to defend them at all costs, and if we could find ways to make them less attached, then this could potentially be a useful tool for conflict resolution. I mean, we're a long way off from figuring this kind of thing out, but it's an interesting direction.

I can imagine people being quite terrified by that thought: negotiators taking serotonin pills or something like this before they go in to negotiate, and then discarding some of their prior beliefs. One way of describing that would be to say that what you're doing is actually manipulating people. And the further stage of that is, of course, when it gets into the hands of the wrong people. Some tyrant could use this information to manipulate people into agreeing to things that they didn't really want to agree to, and thinking it was their own idea.

Yeah, it's a two-edged sword, just like with anything. I think the key with developing these kinds of interventions is that their deployment is going to have to be a very highly reasoned and considered process. You can think of it in terms of a pre-commitment strategy. We know that emotions influence our decisions, sometimes in very damaging ways. A lot of good decision-making and willpower has to do with foreseeing the effects of emotions on our decisions and putting ourselves in situations where those emotions can't play a role: like not going to the grocery store when you're hungry, for example, because you know that if you go when you're hungry, you're going to buy things that you'll regret later. In a similar way, you could imagine a negotiator who recognises in himself that he's emotionally attached to certain issues that, from a sort of utilitarian perspective, aren't really the best issues to be attached to in terms of the well-being of large groups of people. Such an individual could essentially pre-commit to letting go of his emotional attachment to those issues for the duration of the negotiation, for example. Of course, we are a long way off from fully understanding what the implications of these potential technologies are for conflict resolution and for individual decision-making.

What do you see as the future of this kind of research? What is the best outcome you could get from the investment of many hours of your lifetime in this area?

Well, recently I've been talking a lot more with philosophers who work on ethics, to try and identify potential avenues for enhancing certain human capabilities that we can pretty much all agree are a good thing.
One area is this human motivation to cooperate and to engage in interactions that have mutually beneficial outcomes for everyone involved. One avenue is to try and figure out how to enhance that motivation, and things like empathy and altruism, which a lot of neuroscientists are working on. But the key will be to identify particular virtues that we could enhance without increasing people's vulnerability to those who might exploit them, for example.

How do you think research in neuroscience is changing the picture of what a human being is?

Well, I think one thing that is coming out of all this research is that a lot of our decisions are influenced by factors that are beyond our awareness and control, which has sort of twofold implications. The first is that parts of ourselves that we think are really stable actually might not be so stable, and this might make us think more deeply about who we are. Additionally, I think it points to a need to try and identify these factors, so that we can be aware of how we're being influenced by them, and to give ourselves the tools to put ourselves in situations that can bring out the best in us.

Molly Crockett, thank you very much.

Thanks.

For more information about Bioethics Bites, go to www.practicalethics.ox.ac.uk or iTunes U.