This is Bioethics Bites, with me, David Edmonds, and me, Nigel Warburton.

Bioethics Bites is made in association with Oxford's Uehiro Centre for Practical Ethics and made possible by a grant from the Wellcome Trust. For more information about Bioethics Bites, go to www.practicalethics.ox.ac.uk or to iTunes U.

Suppose a genetic engineering breakthrough made it simple, safe and cheap to increase people's intelligence. Nonetheless, if you ask the averagely intelligent person on the Clapham omnibus whether we should tamper with our genes to boost our brains, he or she might recoil at the notion. Nick Bostrom, director of the Future of Humanity Institute at Oxford University, suspects that this reaction may be a result of what he calls status quo bias.

Nick Bostrom, welcome to Bioethics Bites.

Good to be with you.

We're going to focus on the status quo bias. What is that?

A status quo bias is a preference for the status quo just because it is the status quo: a preference that is inappropriate or irrational.

So could you give an example of that?

Well, cognitive psychologists have experimented in different settings with people's responses, and they find that in many cases, just because an alternative is presented as the status quo, it becomes more attractive. In one experiment, subjects were randomly given either a decorated coffee cup or a large chocolate bar, and were then allowed to exchange the gift they had been given for the other one simply by holding up a card saying "trade". It turns out that in this kind of experiment, 90% of the subjects prefer to stick with their original gift. This is known as the endowment effect: as soon as something has become yours, you become unwilling to part with it, even for something that ex ante would have been just as valuable. And there are many, many other cases where it doesn't seem that you can explain this away by postulating that people have formed some special emotional attachment to the thing. It just seems that, at least in some of these cases, there is a kind of cognitive error.

That's really interesting: you're saying that this is a generalisable effect across a wide range of activities and situations, where human beings just like to be a bit conservative. Once they've got something, they stick with it.

Yeah, but one has to be careful to distinguish the cases where it's a genuine bias, in the sense of an irrational or inappropriate response, from cases where it's just a response to a brute fact about how human emotions work.
So with the coffee cup and chocolate bar example, you might argue that if you actually find yourself, for whatever strange reason, emotionally tied to the gift you have been given, and it would hurt to give it up, then you are responding rationally to that by not trading it in. But there are other studies where this kind of explanation is implausible. For example, it turns out that just by changing the wording in which alternatives are presented, you can make people prefer one alternative or the other. So if you change "you can save X number of lives" to "you can prevent X number of deaths", which mean exactly the same thing, you can get people to reverse their preferences between the two alternatives.

So you've already started suggesting how the status quo bias could have implications in bioethics. Could you say a little bit more about that?

Well, in bioethics, at least insofar as we're interested in making a consequentialist judgement, we face a very difficult challenge, which is to try to figure out whether the consequences of some intervention will, on balance, be good or bad. In fact, we face a double epistemic predicament. We first face the difficulty of predicting what the actual consequences would be, and then, even if we knew what the consequences would be, it might still be difficult to tell whether, on balance, they would be positive. And while we should, of course, collect all the relevant facts we can, there will always be a reliance on subjective judgement in the end to weigh up the different arguments on either side. If you are proposing to increase the tax rate or to reduce infant mortality, there is no exact science that will tell you what will happen 40 years into the future if you do that. And the same holds true, say, if you're proposing to increase average human intelligence or to make some other change in bioethics that has wide-ranging consequences.

So, assuming the scientists weren't themselves biased, there is this bias, status quo bias. And you're saying that once we know that, we're in a position to start intervening, perhaps, and removing this tendency from our judgements.

Yeah. So there is at least some prima facie reason for suspecting that particular judgements in bioethics might be susceptible to status quo bias, just as so many other human judgements are. And given that there is reason to suspect that our bioethical judgements might be affected by status quo bias, it's interesting to see if we can find some way to reduce or remove this potentially distorting bias.

And have you found one?
Well, one thing you can try is the perhaps counterintuitive approach of trying to reverse your original perspective. So suppose that somebody is suggesting that we should increase average human intelligence, and in this thought experiment let's say that there is a medically safe pill you could take that would improve your IQ a little bit. The question now is, if we make this pill available to society at large, will the long-term overall consequences on balance be good or will they be bad? What do you have most reason to believe?

And here is how you can test whether you might be infected by status quo bias. Suppose that somebody had instead proposed that we should decrease average intelligence by a small amount. Almost everybody would immediately say that that would be a horrible idea: you can't be serious, distributing a pill to make people slightly brain damaged, that's crazy. But if you hold both that it would be bad to slightly increase average intelligence and that it would be bad to slightly decrease average intelligence, you seem to be committed to the position that we are currently sitting at a local optimum: our intelligence is at just the right level, and it couldn't be improved by increasing or decreasing it. But on a generic curve only very few points will be local optima; for most points, it will be the case that you could get a higher value either by moving up or by moving down. So if you hold this view, you face some burden of justification to explain why it is that we can't make things better either by increasing or by decreasing average intelligence.

What you're saying, then, is that it's statistically highly unlikely that we are at that optimum point. So the chances are that if you're saying we are at that point, you're subject to this status quo bias.

Well, unless you can give a good reason for thinking that we are sitting at a local optimum.
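The point about a generic curve can be illustrated with a minimal Python sketch. This is an editorial illustration rather than anything from the interview: the smoothed random walk, the smoothing window and the other parameters are arbitrary assumptions, chosen only to stand in for a smooth "value curve". The sketch estimates how rarely an arbitrarily chosen point on such a curve turns out to be a strict local optimum.

```python
# Illustrative sketch: on a generic smooth curve, only a small fraction of
# points are local optima, so "we happen to sit exactly at one" is a
# statistically strong claim. The curve here is an arbitrary assumption:
# a Gaussian random walk passed through a simple moving average.
import random


def smooth_random_curve(n_points=1000, smoothing=25, seed=0):
    """Generate a smooth random curve by moving-average smoothing of a random walk."""
    random.seed(seed)
    walk = [0.0]
    for _ in range(n_points - 1):
        walk.append(walk[-1] + random.gauss(0, 1))
    half = smoothing // 2
    curve = []
    for i in range(n_points):
        window = walk[max(0, i - half):i + half + 1]
        curve.append(sum(window) / len(window))
    return curve


def fraction_of_local_optima(curve):
    """Return the fraction of interior points strictly higher than both neighbours."""
    optima = sum(
        1 for i in range(1, len(curve) - 1)
        if curve[i] > curve[i - 1] and curve[i] > curve[i + 1]
    )
    return optima / (len(curve) - 2)


if __name__ == "__main__":
    curve = smooth_random_curve()
    frac = fraction_of_local_optima(curve)
    print(f"Fraction of points that are strict local optima: {frac:.3f}")
```

On a typical run only a small fraction of the points qualify, which is the sense in which very few points on a generic curve are local optima.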
Well, one reason might be that evolution has, over many thousands and perhaps millions of years, taken us to that optimal point.

Right. So this is one broad category of answers to the challenge posed by the reversal test, and for many parameters it will yield a very good response. If, for example, somebody had proposed that we should increase the ratio of heart size to body size in humans, make our hearts larger, I would say that sounds like a pretty bad idea. And if somebody had suggested, well, let's make our hearts smaller, let's try to shrink our hearts, I'd say that's probably a pretty dumb thing to do as well. And here I could find a good evolutionary argument that we have probably adapted to have a heart size that is appropriate relative to the size of the body. The question, then, is whether this kind of argument will work in the case of cognitive enhancement, and there are several premises needed for the evolutionary argument to get off the ground.

For a start, we have to think about whether the environment in which we evolved is, in relevant respects, identical to the environment in which we now live. With regard to heart size, arguably there is no relevant difference. With regard to intelligence, there might be a number of factors that have changed. On the one hand, you can imagine the different trade-offs that evolution struck between, say, head size, which we know correlates with intelligence, and metabolic energy costs: a huge proportion of the body's total energy budget goes to just running the brain, and having a larger brain would be metabolically expensive. You could see that in the Pleistocene, where food was often scarce, the optimal point for this trade-off would have been one thing; now, when a Mars bar is never far away, we don't suffer from a calorie deficit, but rather the opposite. We would rather fancy burning more calories, and if we get more intelligent, that's a bonus. There are other things too: the size of the head at birth was limited by the size of the birth canal, but now we have caesarean sections; a long period of maturation stretching over a decade or two was dangerous and costly if you were living in the jungle, but now we can afford to mature for a longer period of time. So in many of these respects the resources available to us are very different. On the demand side, similarly, the situation has changed radically. When we were living as hunter-gatherers, there was no need for literacy or advanced numeracy or the ability to stay focussed on some abstract topic for hours on end. But in today's society these cognitive skills are at a premium, so we have no reason to think that what was maximally adaptive in the evolutionary past would still be adaptive for us today.

So much for the evolutionary argument that we've reached the optimum level. You could believe that we haven't actually reached an optimum level, but that there would be tremendous costs involved in making that move or nudge towards the optimal level, and that it would actually be better to stay where we are than to invest so much in making such a small change.

Yes. So one has to consider transition costs, and in some cases this will be a fairly good response to the challenge presented by the reversal test.
If, say, somebody in America proposed to switch to the metric system of measurement units, which looks superior, there might be somebody else countering, saying that even though the metric system would be better, simpler, more elegant, it just wouldn't be worth it: the transition costs would be too big. In the case of cognitive enhancement, though, this argument from transition costs seems implausible. For a start, the transition cost would be a one-off price we would pay, and then we could enjoy the benefits for an indefinite period of time afterwards. And although it is true that there would be these transition costs, you know, maybe textbooks would have to be rewritten for more intelligent kids; maybe some kids, if we made this work through, say, germline genetic enhancement, would grow up in families where the parents are not the smartest, and maybe they would be bored; maybe some games would no longer be interesting. These would be costs to be paid, but they seem utterly trivial in comparison to the benefits. I mean, schoolbooks are constantly rewritten for all kinds of trivial reasons. Smart kids are sometimes born into dumber families, and although that might cause problems in some cases, it's not something that would make us think that, on balance, it would have been better if all these kids had just been dumber.

Well, what about the risks involved in making a transition, a sort of step into the dark, as it were? We know where we are now, but we don't really know what's going to happen if we start trying to meddle with genes, as some people would say, to increase intelligence.

Yeah. So in some cases it would be wise for us to stick where we are, even if we are pretty sure it's not the optimum, because we are so worried about a change making things radically worse. If you are sitting on a cliff in the dark, you might be well advised to just stay put, even if you think that one direction would lead to safety, because if you walk in the wrong direction, you'll fall off the cliff and die. In principle, this is a valid type of response to the reversal test. In the case of cognitive enhancement, however, it's not clear that it works, even if our sole focus were to reduce risk as much as possible. Increased cognitive capability might also enable us to be better at avoiding many kinds of risk. More intelligent people, on average, tend to live longer, for example, and to avoid many kinds of mishaps. On a social level, it's at least arguable that we need more intelligent foresight and wiser decision-making to meet many of the great challenges of the 21st century.
And although there might be new risks introduced by changing average intelligence, there might also be many new means of reducing risk that would be made available to us if we were smarter. There is an additional consideration that is relevant in this context, which is that we need to look not just at the risks but also at the potential upsides: there could be surprising and unanticipated benefits, just as there could be surprising and unanticipated negative consequences. With regard to cognitive enhancement, it's possible that these potential upside risks, as it were, are quite enormous. You could imagine, a few hundred thousand years ago, some Australopithecus sitting around the campfire pondering the question of whether they should try to enhance their intelligence, the risks involved in that, and the benefits that they could foresee. We would be better able to hunt meat, and maybe to pick bananas more efficiently: these might be the benefits they could foresee. What they wouldn't have foreseen would be all of Mill's higher pleasures, which we now think constitute perhaps the main goods of the human condition: art, literature, music, complex games, cultural achievements, language, all of these things. It would seem rather presumptuous for us to think that, from our current outlook, we are able to anticipate all the important benefits. So the response to the risk argument would be, A, that there might be these very important benefits, some of which are not foreseeable, and that they might outweigh the risks; and B, that even if we were focussed only on risks, it's still not clear that it would be a bad idea to increase intelligence.

I've listened to your responses to the arguments I gave about why somebody might believe that we had reached the optimal point in terms of human intelligence, for instance. What if I'm still not convinced? Have you got any other argument that might persuade me?

Well, there is one thing we could try, which is the double reversal test. Maybe it's best explained by considering the following thought experiment. Disaster: poison is leaking into the water system. This will result in widespread brain damage, and average intelligence will decline. Fortunately, at just the same time, scientists have made a breakthrough in genetic engineering and have come up with a somatic gene therapy that will slightly boost intelligence. And it turns out that this intelligence boost from the gene therapy would just about offset the intelligence decrease that would arise from the poisoning.
So now the question is: do you think, under these special circumstances, that it would be on balance good to introduce the gene therapy to offset the intelligence decrease that would otherwise occur, assuming there weren't huge risks associated with it?

Yes, definitely.

Excellent. Now, let's suppose that this is what is done, so intelligence stays the same. Ten years later, it turns out that the poison is gradually being removed from the water system, and scientists predict that if we do nothing, people's brains will gradually start to heal. That will result in an increase in average intelligence, because the gene therapy enhancement is still in effect; it's permanent, and if the brain damage is removed, average intelligence will rise. So now the question is: would you say it would be, all things considered, good to go and pump poison into the tap water to give people brain damage, in order to prevent their intelligence from increasing?

That would be bizarre.

That would be bizarre, I think. And that then seems to suggest that, from a consequentialist point of view, you think it would be better, all things considered, if we end up in the state where average intelligence is increased. This double reversal test works by contrasting two different conceptions of the status quo. On the one hand, the status quo can mean the current state of affairs, and we preserve the status quo by making sure that things don't change. Another understanding of the status quo is that which will happen by default if one doesn't intervene to change it. In this thought experiment, of course, these two different status quo concepts conflict, and so by interpolating between them we can pinpoint the effect that the fact that something is the status quo has on your judgement, and then extract out your status quo bias.

It's interesting that we still do have a status quo bias, and it presumably has some evolutionary origin. But why on earth would we have it if it leads us to make judgements which are not reliable?

That I don't know, and this is indeed one of the main remaining question marks surrounding these issues. I would be much more confident about all of these arguments if I knew exactly what the reason is that we have status quo bias in the first place. There are some partial explanations. In some situations, status quo bias might be related to the endowment effect, where as soon as something is yours you really don't want to lose it. Perhaps in our evolutionary past, being the kind of person that people could often take things from would have been bad for your reputation.
Not having the thing in the first place might not be as embarrassing as having somebody trick you out of something that was already yours: you would look like a fool. So maybe we evolved to become very averse to situations where something is taken away from us. But whether that's a true part of the story, or all of the story, I'm not sure.

Suppose we could completely eliminate the status quo bias. What would be the implications for bioethics?

It might make us favour cognitive enhancement more; that's the case we've been focusing on here. For example, more funding for developing pharmaceutical or genetic means of enhancing memory and cognition would be seen more favourably, for the same reason that we think it would be better if the education system worked better. One then has to go through other proposed interventions on a case-by-case basis to see whether our intuitive judgements do in fact change when they are put through the reversal test or the double reversal test. In some cases our judgements will not change, and that might simply mean that they were not infected by status quo bias. Certainly the conclusion is not that we always have reason to want to change, either up or down; in many cases, sticking with the status quo is rational. But insofar as we judge that the status quo is best just because of this bias, removing the bias will enable us to make better judgements.

Nick Bostrom, thank you very much.

My pleasure.

For more information about Bioethics Bites, go to www.practicalethics.ox.ac.uk or iTunes U.