Welcome to the seventh lecture of General Philosophy. Today we'll be talking mainly about free will, but you will see that what we have to say also brings in issues to do with mind and body, personal identity, action and so forth. Here we've got David Hume, Charles Darwin, Peter van Inwagen and Harry Frankfurt, who will play quite a big role today.

OK, so just to remind you, we've got the issue of free will and determinism. On the one hand, the incompatibilist claims that if determinism is true, if everything we do is completely causally determined, then that gives us no power to do otherwise than we actually do. In which case, according to the incompatibilist, any moral assessment, any responsibility, is out of the question. Whereas the soft determinist, that's a compatibilist who actually believes in determinism, takes the world to be determined, but nevertheless says that moral assessment of our actions is entirely appropriate, because these are caused by, and manifest, our desires, our purposes, our virtues and vices.

So let's focus on this key phrase: I could have done otherwise. It does seem plausible that for an action of mine to be genuinely a free action, one that is morally assessable, it has to have been in some sense possible for me to do other than I did. So my doing A wasn't free unless I was able to choose something else, maybe B.
Now if we then interpret the ability to do otherwise in terms of causal possibility, that leads us towards incompatibilism, but compatibilists are going to interpret the phrase differently. So most compatibilists, I think, would accept the principle that to be free, it needs to be possible for me to have done otherwise. But they'll read that phrase differently. So here I've got an imagined conversation. The libertarian says: you can't have been genuinely free if in fact you had no power to do anything other than you actually did. And the compatibilist says: yes, agreed. But if I'd wanted to do B instead of A, then my own thought processes, guided by my own desires and purposes, would have caused me to do B instead of A. There was no external barrier to my doing B, had I chosen to do so. That's what I mean when I say I could have done otherwise. And then the libertarian comes back and says: but actually, if you were causally determined, it wasn't in your power to want to do B instead of A.

OK, I want to say a little bit about the concept of choice, because that's playing quite a big role here, and I think some clarification of exactly what we mean when we talk about a choice can help. I think it's plausible that when we think of the could-have-done-otherwise intuition, the thought that I'm only free if I could have done otherwise, we are often thinking in terms of making a choice. But the notion of choice is a very slippery one. Suppose I'm walking along with my mobile phone.
Somebody comes and holds a gun to my head and says: give me your phone or I'll shoot you. Some would say I had no choice; I gave him the phone. Suppose somebody looks inside my brain and knows exactly what I'm going to do. Does that imply that I have no choice? Well, let's consider that. Notice there are quite a lot of things that are sometimes meant by "I had no choice". I've listed five ways in which the phrase might be interpreted, and you can see some of these are very different from others.

Now, I want to object to the hijacking of the notion of choice here, because it seems to me that the word choice, rather unlike the word free, by the way, which carries all sorts of moral implications, is a perfectly normal word in language that we learn when we are children. So the standard way of learning the notion of choice is to be given a choice. Say my mother says to me when I'm very young: well, dear, would you like ice cream, or would you like cake, or would you like fruit? It's your choice; you can have any one of those that you wish. That's how we learn the notion of choice. Now, if that is a paradigm case of being given a choice, then it's very odd for some philosopher to come along and say: oh, but it wasn't a real choice. If that's what we mean when we talk about a choice, then it was a choice. Now, you might want to say, well, it wasn't a free choice, or it wasn't an undetermined choice, maybe.
But let's not have the word choice hijacked when it's a perfectly normal word of the English language.

Now, let's consider the case where somebody comes and holds a gun to me and asks for my phone. But let's suppose in this case it's not actually my phone, it's yours. You have lent me your phone; I'm walking down the high street with it. Somebody comes, puts a gun to my head and says: give me your phone or you're dead. OK, so I give them your phone. Now, the fact that it's your phone adds a certain moral dimension to this, for you might raise the question: was I right to give the phone over? And you might think, well, I did something wrong there. I gave away your phone; it wasn't mine to give, but I gave it. But actually, I can be absolved of blame because I wasn't responsible; I wasn't morally responsible, because I was acting under coercion.

Now, I just want to point out there's an easy way out of that. What the compatibilist can say is: no, I did have a choice there. I even had a free choice there. I made a decision in that situation to do one thing rather than another. I chose to give your phone away rather than risk being shot. But I'm not morally blameworthy, I agree on that. But the reason I'm not morally blameworthy is not because I wasn't acting as a responsible individual; it's because I didn't do anything wrong in that situation. Giving your phone was the right thing to do.
And it seems to me that's a plausible line of argument. So you can see that there are different ways of, as it were, cutting up the situation. Some people would say, well, there wasn't a choice there, or there wasn't a free choice. The compatibilist can perfectly well say: yes, there was a free choice, but every free choice is made in a situation, and in some situations it can be right to do what in other situations it would be wrong to do. It would be wrong of me to just give away your phone to any passer-by. But if I'm threatened with a gun? Arguably, that's the right thing to do.

OK, with those clarifications in mind, let's come to this question of could have done otherwise and the so-called principle of alternate possibilities, which Harry Frankfurt states thus: a person is morally responsible for what he has done only if he could have done otherwise. Now Frankfurt in fact wants to deny this principle. He is a compatibilist who wants to deny it. I've said that many compatibilists would be happy with the principle; Frankfurt wants to deny it because he thinks there are certain situations, which are often called Frankfurt cases, where I can be morally responsible for what I've done, it can be a kind of free choice as well, and yet it may not be the case that I could have done otherwise. So let's take such a case. Here's Jones. He's pondering whether to escape from a building via door A or door B.
Maybe he needs to get out, you know, the building's on fire or something; he's got to get out. Well, he's got a choice: he can either go for door A or he can go for door B. But in fact it turns out there's no choice, because if he goes to door B, he will find that it's actually locked, and so he'd have to go out door A anyway. Now, think of the case where he chose to go through door A. In that case, we're tempted to say he went through door A freely; if any moral issue hung on it, he was morally responsible. But arguably he didn't have an alternative: he could not have got out via door B. So it was kind of inevitable he was going to go out of door A.

Now, Frankfurt gives some hypothetical examples here, imaginary examples, where instead of the door being locked and that being the problem, imagine we've got some nasty chap called Black who is manipulating Jones. So action A here is probably some wicked thing that Black wants Jones to do. Now, Jones might actually choose to do A, in which case we can all agree that he's morally responsible. But Black is monitoring Jones's brainwaves, and as soon as he sees that Jones is inclining towards B instead of A, he interferes with Jones's brainwaves and makes him choose A instead. OK, now we might agree that in the case where Jones is manipulated by Black, he's not morally responsible for doing A, because he was manipulated into it.
But if he chooses A directly, we're inclined to say he is morally responsible. But arguably he couldn't have done otherwise, because if he had started off doing otherwise, Black would have interfered and prevented it. OK, so that's the sort of structure of a Frankfurt case.

Now I want to dispute Frankfurt's argument here. It seems to me that it's not at all so clear. I mean, first of all, it is rather a far-fetched example, and I think we should be generally rather sceptical about far-fetched examples. But more crucially, I want to go back here and draw a distinction between S1 and S2. So S1 is the situation where Jones is initially contemplating what to do. S2 is the case where he has started thinking about it and is inclining towards B, and this is the point at which Black intervenes. Now it seems to me that one can argue that Jones's going for A in situation S1 is different from his going for A in situation S2. So if he went for A in situation S1, there was something else he could do: he did have an alternate possibility, which was to head for B and end up doing A via the route through S2. So it's not clear to me that the Frankfurt cases actually dispose of the principle of alternate possibilities.
Now, if we stick with the principle of alternate possibilities, as I've said, I think most compatibilists would, and incompatibilists almost certainly would. For the incompatibilist, the principle of alternate possibilities, the claim that we're not free unless we could have done otherwise, is the basis of their claim, for example via the consequence argument, that to be free determinism has to be false, because if determinism were true, in this sense we could not have done otherwise. But notice that the compatibilist gives a different interpretation. The compatibilist says: well, I could have done otherwise if I had wanted to; there was no external barrier to my doing otherwise.

So how should we go with regard to this? Well, let's accept that there is a difference between those who are autonomous and those who are not. If I'm a drug addict or an obsessive, then I may be driven by certain desires of mine in such a way that I feel, as it were, under compulsion to go with them. You know, if I'm a kleptomaniac and I go to a shop, I may go in intending not to steal anything, and then I just end up doing it. Or I sincerely want to give up cigarettes, but then I'm with people who are smoking and I simply cannot resist it.
Now, in those sorts of cases, you may say that I'm not free, because although I'm doing what I want, which you might think fits with the compatibilist notion, you know, to be free is just to do as you choose, my choices are not under my control. But that looks rather different from the case where I am choosing something because I genuinely want it. I mean, suppose, for example, I am given a choice between ice cream and cake, and I choose ice cream. Why do I choose ice cream? Because I prefer ice cream. My choice of ice cream is utterly predictable; anyone who knows me would be able to predict very easily what I'm going to choose. But am I under some kind of obsessional compulsion? No, it's just a preference. Now, in that sort of case, it seems odd to say that I'm a slave to my desires. I'm following my desires because I want to, and it's not as if I would change anything about that. That's very different from the case where you have a compulsive desire that you can't do anything about. So the compatibilist, as well as the incompatibilist, is going to want to acknowledge that we can sometimes, as it were, be controlled by our desires in an undesirable way.

Now Frankfurt brings in a useful distinction here between first-order desires, for example to smoke a cigarette, and second-order desires, for example to quit smoking and to cease to want to. Right, so it's a second-order desire because I am desiring something about my first-order desires.
I'm desiring to cease desiring cigarettes. And if our second-order desires are unable to overcome our first-order cravings, it seems plausible to say that we are not fully autonomous. We are less free than we would be if our second-order desires were able to control our first-order desires. So notice that even within a compatibilist framework, we can make these sorts of distinctions.

So going back to here, we have the distinction between the compatibilist and the incompatibilist view of could have done otherwise. We have the compatibilist basically saying I'm free if I've done what I wanted to, and thus seeming to take our desires as sort of fixed. We've now seen that there is some reason for modifying that, for allowing that we may have layers of desires, some of which we are quite happy to take as fixed, but some of which we actually would wish to be able to modify. In a similar way, we achieve more intellectual autonomy if we're able to reflect critically on our own commitments rather than being trapped by them. And this obviously has implications with regard to economic thinking, political thinking, moral and religious assumptions and so on. And it seems to me that this is a way in which one can think that studying philosophy is a very good way of increasing your autonomy. It's actually encouraging you to think reflectively and critically about your own assumptions, about your own desires, your own ideals. And that is giving you an extra layer of autonomy over somebody who is simply a slave to their background.
And if we allow degrees of autonomy in this way, you can also plausibly apply them to infants. You might allow levels of freedom to animals, even arguably intelligent robots, but we won't go there right now. So the point here is that we have got the means, within a broadly compatibilist framework, though clearly an incompatibilist could use some of this as well, to accept that there are degrees of autonomy. None of this contradicts compatibilism.

But isn't this enough? Why should we want more? Why should we want indeterminism, if we have this kind of deterministic autonomy, where we are able to reflect on our views, we're able to change our desires, all of that under deterministic causal laws? The indeterminist says that isn't enough. Now, I suspect that the intuition behind the desire for indeterminism is the thought that somehow, if everything is determined, I'm not genuinely responsible. I may have this deterministic autonomy, but ultimately everything that I do is caused by prior conditions and the laws, and that isn't enough. But I'm inclined to respond: why should I want indeterminism? What's the value of that? I mean, suppose I'm calculating a chess move. I'm actually very, very happy that my calculation is deterministic. I'm very happy if I'm in a situation where somebody can come along, look at the position and say: oh, I know exactly what Pete is going to do there.
So, you know, there's a three-move checkmate, I'm sure he'll see it. Yeah. Well, I want it to be the case that if there's a three-move checkmate, I will see it, right? I don't want indeterminism in that case; why should I in others?

So I'm now going to deal with some, I think, misunderstandings that can lie behind this sort of indeterministic thought. Some people might say: well, look, if you're determined, you're not really playing an essential role in what goes on. If everything is determined, you know, a thousand years before you were born it was inevitable that what you are doing now would happen, then you aren't playing a role. Well, clearly that's not true. OK? The explosion of a bomb may be determined; that doesn't imply the bomb isn't doing anything. The bomb is doing something; it was determined to do something. It's not as if without the bomb things would have gone on in the same way.

Also, it's not clear why indeterminism is going to make me more responsible than otherwise. I mean, suppose in my brain there is some kind of random number generator which is making me do different things in a completely random way, or some quantum mechanical process that has that effect. How does that help to make me responsible? It's not clear that it does.
And here is Hume arguing that, on the contrary, morality requires determinism. So far from determinism being contrary to moral responsibility, moral responsibility actually requires that what I do flows from me, from my settled purposes, desires, tendencies. Actions are, by their very nature, temporary and perishing; and where they proceed not from some cause in the character and disposition of the person who performed them, they can neither redound to his honour, if good, nor infamy, if evil. If the way I act is actually random, maybe because I've got some tumour in my brain or something like that that's causing me to act in a completely capricious way, you don't hold me responsible. You say: no, you're not responsible, because it didn't come from your established character and desires and so forth.

So I want to say a little bit now about explanation, because I think, to understand some of the confusions that lie behind some of the thoughts we've been exploring here, it's good to look at it from a sort of theoretical perspective. So there's a tempting mistake, and the mistake is to assume that deterministic causal explanation excludes other types of explanation. You might think: if the causal process X fully explains event E, there's no room for anything else to explain, or to be responsible for, E. And I think this is quite a common mistake in incompatibilist lines of thought; I mean, you can see how it leads to incompatibilism.
If what I do is determined by the causal laws, then it can't have been due to my purposes and my choices, and therefore I'm not responsible. Interestingly, I think Hume himself makes this mistake, or something very close to it. I've put six slides on him in the appendix. I'm not going to talk about those in this lecture, but do consult those.

So let's go back to the first couple of lectures, where one of the things I was pointing out was that there has been a succession of scientific paradigms going right back to Aristotle. In Aristotle's time, more or less everything was explained in terms of purposes. Then we got mechanism coming in, after Galileo challenged Aristotle. When Newton came along, we got explanation in terms of forces, and we've got quanta and space-time curvature and so on from physics. But the one I want to focus on first here is evolution by natural selection, because I think that is a very, very clear illustration. Now, evolutionary explanation is entirely compatible with determinism. But it's completely different in structure, and it can be applied in situations where deterministic explanation cannot be applied. So I want to give you a little example here. Now, I'm not going to go through this in detail, but what we've got here is a brief explanation of why a certain equation, called the logistic equation, gives a plausible way of modelling the successive populations of a colony of insects within an environment which has limited capacity to support them.
So the point here is: you've got the population there up the left, from zero to 100,000. Here are the generations being plotted along here, and here is the logistic equation. Now what we're doing is starting with a random value, and then the population is being plotted according to a deterministic equation, and you can see what we're getting is complete chaos, right? So it depends on the value of the constant r, and I'm playing with different values of r. Incidentally, the logistic equation is so simple, and it's amazing that you can get such complex behaviour from such a simple equation. Those of you who are studying economics and are at all led to believe that whenever you have an equilibrium in a situation you will necessarily reach it: no, absolutely not. In nature you can have an equilibrium, there is an equilibrium in all of these, but unless you hit it spot on, it's never reached. And once we get above r = 3.57, you can see the behaviour is entirely chaotic.

So deterministic systems, probably most deterministic systems out there in nature, as opposed to the artificial ones that we study in physics labs or something like that, have an element of chaos. What that means is you get sensitive dependence on initial conditions: if you have two scenarios that are very close together, but just a little bit different, those differences can amplify instead of smoothing away towards an equilibrium.
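The lecture's own program isn't reproduced in the transcript, but the behaviour described can be sketched with the standard logistic map (population expressed as a fraction of the maximum the environment supports; the particular r values and starting points below are my choices, not the lecture's):

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# Here x is the population as a fraction of the environment's capacity.

def logistic_orbit(r, x0, steps):
    """Iterate the logistic map, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Sensitive dependence on initial conditions: in the chaotic regime
# (r = 3.9, above ~3.57) two populations differing by one part in a
# million end up completely different.
a = logistic_orbit(3.9, 0.200000, 50)
b = logistic_orbit(3.9, 0.200001, 50)
print(max(abs(x - y) for x, y in zip(a, b)))   # a large divergence, order 0.1+

# Below the chaotic regime (r = 2.8) the same tiny difference smooths
# away towards the equilibrium instead of amplifying.
c = logistic_orbit(2.8, 0.200000, 50)
d = logistic_orbit(2.8, 0.200001, 50)
print(max(abs(x - y) for x, y in zip(c, d)))   # stays tiny
```

Both runs are fully deterministic; the chaotic one is nevertheless unpredictable in practice, because no measurement of the starting population is accurate to one part in a million.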
That means that, in fact, an awful lot of what goes on in the world is in principle unpredictable. It's unpredictable because in order to measure things sufficiently accurately to be able to predict what was going to happen, even if you had the computational power and the knowledge of all the parameters and all the rest, basically you would just completely change the situation: you'd have to have sensors everywhere in the air. You know, so the weather is simply unpredictable in principle, beyond a certain point.

So now let's go for a biological situation, bearing that in mind. There's a famous biological puzzle. This was discussed back near the beginning of the 18th century by John Arbuthnot, in the context of whether there is here an argument for divine benevolence. The puzzle basically is why, in sexually reproducing species, the ratio of males to females is roughly one to one. Now, we humans, you might think, well, that's because we're generally monogamous, you know, we pair bond and so forth. What about cows? There's no way it's in the interest of a population of cattle to have equal numbers of bulls and cows. Right? It's in their interest, as well as in the interest of the farmers, to have lots of cows and very few bulls. The bulls will be happy, and the cows are far more useful in virtually every way: they have young, they nurture the young.
And if they're on a farm, they provide milk and so forth. But it's not only the farmer who prefers lots of cows. Actually, you would think a benign nature would prefer lots of cows: it would be better for the population, it would breed more effectively and so forth, if there were more cows than bulls. And yet nature produces 50:50. Why?

Well, we can model this, and here is a simple model. So what we've got here is a model where we have an imaginary species called dingbats. This species carry a gene which determines the probability of the offspring being female. I call this the female offspring probability gene, the FOP gene. OK. And what we have here is a model where you get males and females. Initially, the FOP gene value has been set to over 90 percent for both the males and the females, as you can see, right? So I'm biased in favour of a female population. And now we're going to have these males and females randomly breeding, with random inheritance of the FOP gene and a certain degree of random variation. And if we run the simulation, you can see at the bottom there, the proportion of females in the population zooms up to 90 percent, and then inexorably comes down to around 50, and will end up basically random-walking around 50.
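The simulation shown in the lecture isn't in the transcript, but here is a minimal sketch of the kind of model described. The dingbats and the FOP gene are the lecture's own invented example; the population size, number of generations and mutation size below are arbitrary assumptions of mine, and the inheritance rule (copy one random parent's FOP value, plus a little noise) is one simple way to implement "random inheritance with random variation":

```python
import random

random.seed(0)

POP = 2000       # total offspring produced per generation (assumed)
MUTATION = 0.02  # size of the random variation in the inherited FOP value (assumed)

def next_generation(males, females):
    """Breed POP offspring by random mating; each list holds FOP values."""
    new_males, new_females = [], []
    for _ in range(POP):
        mother = random.choice(females)        # rare sex gets chosen more often
        father = random.choice(males)          # per capita -- Fisher's mechanism
        fop = random.choice([mother, father])  # inherit one parent's FOP value
        fop += random.uniform(-MUTATION, MUTATION)
        fop = min(max(fop, 0.0), 1.0)
        if random.random() < fop:              # FOP = probability of a daughter
            new_females.append(fop)
        else:
            new_males.append(fop)
    return new_males, new_females

# Start heavily biased: every dingbat's FOP gene says "90% daughters".
males = [0.9] * (POP // 2)
females = [0.9] * (POP // 2)

for generation in range(250):
    males, females = next_generation(males, females)

proportion_female = len(females) / POP
print(round(proportion_female, 2))  # down from the 0.9 bias, random-walking near 0.5
```

Nothing in the code mentions a 50:50 target; the pull towards equal numbers emerges from the mating step alone, which is the point the lecture goes on to explain.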
If they'd had that in the 18th century, if they'd been able to run this sort of model, 247 00:28:44,200 --> 00:28:50,650 what it would have told them is that there is something in the logic of evolution that pushes it towards 50 percent. 248 00:28:50,650 --> 00:28:55,930 It's nothing to do with mammalian pair bonding. It's nothing to do with X and Y gametes. 249 00:28:55,930 --> 00:29:04,180 It's not random. There is something in the situation that pushes it to 50 percent. 250 00:29:04,180 --> 00:29:12,430 Now, the solution was actually discovered by a guy called Ronald Fisher, a statistician. 251 00:29:12,430 --> 00:29:18,810 And the explanation is below. Imagine you start off with a population that's mainly female. 252 00:29:18,810 --> 00:29:28,590 Suppose there are 90 percent females, 10 percent males. Imagine that you are about to give birth. From the genetic point of view, 253 00:29:28,590 --> 00:29:33,300 from the point of view of your genetic fitness, would you rather give birth to a male or a female? 254 00:29:33,300 --> 00:29:36,420 Well, a male, because every bull on average — 255 00:29:36,420 --> 00:29:46,720 sorry, I'm talking about cows here — or every male dingbat, if you like, is having nine times as many offspring on average as the females, right? 256 00:29:46,720 --> 00:29:56,120 So if you give birth to a male, you are likely to have far more grandchildren than if you give birth to a female. 257 00:29:56,120 --> 00:30:01,790 So that means, if you imagine different individuals with different probabilities of giving birth to males or females, 258 00:30:01,790 --> 00:30:10,100 those that happen to have genes that predispose them to have more male offspring will have more grandchildren than the others. 259 00:30:10,100 --> 00:30:19,150 So that gene that predisposes them towards having more male offspring will spread more in the grandchild generation.
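Fisher's grandchild-counting argument can be checked with simple arithmetic. The figures below just apply the lecture's illustrative 90/10 split to a hypothetical population producing 1000 offspring per generation:

```python
N = 1000                      # offspring produced each generation (hypothetical)
females, males = 900, 100     # the lecture's 90/10 split

# Every child has exactly one mother and one father, so the N offspring
# are shared out among the parents of each sex:
per_female = N / females      # expected offspring per female, about 1.11
per_male = N / males          # expected offspring per male, 10.0

# A son therefore yields this many times more expected grandchildren
# than a daughter:
advantage = round(per_male / per_female, 6)
print(advantage)  # 9.0 — the "nine times as many offspring" of the lecture
```

With a 90/10 split, a male-producing gene enjoys a nine-fold advantage in grandchildren, which is exactly the pressure that drags the ratio back towards 50-50.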
260 00:30:19,150 --> 00:30:27,200 And the same will go the other way round: if you get a bias in either direction, there will be a genetic push back in the other direction. 261 00:30:27,200 --> 00:30:30,680 Now, this is genuinely insightful. All right. 262 00:30:30,680 --> 00:30:32,780 We learn something from that. 263 00:30:32,780 --> 00:30:40,430 We can understand why, if I ran that programme a thousand times, you would always find it going towards 50 percent, right? 264 00:30:40,430 --> 00:30:46,450 It always happens. And we've got an explanation why. 265 00:30:46,450 --> 00:30:51,080 But please notice this is not a deterministic causal explanation. 266 00:30:51,080 --> 00:30:58,480 Right? The deterministic causal explanation would all be in terms of one female mating 267 00:30:58,480 --> 00:31:04,780 with a particular male, a certain gene being inherited with a certain random variation, and blah blah blah. 268 00:31:04,780 --> 00:31:13,030 Even for the programme I wrote, the explanation would probably run to hundreds of thousands of lines. 269 00:31:13,030 --> 00:31:20,350 But the genuine explanation of the phenomenon we want to explain — which is not why did this particular individual mate with that one, 270 00:31:20,350 --> 00:31:33,490 but why did it go towards 50 percent? — that explanation is illuminating, and it's in evolutionary terms, not in deterministic causal terms. 271 00:31:33,490 --> 00:31:37,980 And just by the way, a lot of this will go for our behaviour, right? 272 00:31:37,980 --> 00:31:46,500 I mean, the way I behave will depend on general tendencies and general desires and so forth. 273 00:31:46,500 --> 00:31:54,240 It won't be a matter of one individual neurone firing in a particular way. If that neurone hadn't fired — 274 00:31:54,240 --> 00:31:57,990 you know, maybe because I had a drink of alcohol before or something, not a lot, 275 00:31:57,990 --> 00:32:04,860 just enough to disturb the running of the neurones —
all right, then I'd still have done the same, but by a different causal path. 276 00:32:04,860 --> 00:32:06,120 Right? 277 00:32:06,120 --> 00:32:15,180 So, you know, for most of what we do, there are lots and lots of different causal paths that could all lead us in the same ultimate direction. 278 00:32:15,180 --> 00:32:20,340 And as with the evolutionary case, there may be a good explanation of why we do what we do. 279 00:32:20,340 --> 00:32:28,560 But the revealing, the insightful explanation is not in terms of lots of individual neurones doing their stuff, any more than in this case 280 00:32:28,560 --> 00:32:37,500 the insightful explanation is in terms of lots of individual male and female matings. 281 00:32:37,500 --> 00:32:46,140 Take another example of non-causal explanation, and this again is a kind of familiar one. In this position — 282 00:32:46,140 --> 00:32:54,830 one of my favourite chess positions, partly because I got it against the next British champion in a knockout game when I was 17 — 283 00:32:54,830 --> 00:33:03,080 I had, of course, analysed the position before: Rook takes Knight check, really quite a beautiful checkmate. 284 00:33:03,080 --> 00:33:12,370 Now imagine putting that on a computer, and the computer plays Rook takes Knight check. 285 00:33:12,370 --> 00:33:20,650 And you can ask, why did it do it? Well, you might think that the explanation is in terms of the circuitry. 286 00:33:20,650 --> 00:33:26,350 No, actually, the revealing explanation is in terms of the rules of chess. 287 00:33:26,350 --> 00:33:31,120 Rook takes Knight is by far the best move. It wins by force quickly. 288 00:33:31,120 --> 00:33:33,970 That's why the computer played it. 289 00:33:33,970 --> 00:33:43,450 Now, of course, there is a causal story about how the computer has been designed so as to enact computer programmes.
290 00:33:43,450 --> 00:33:50,950 And there is a story about how the computer programme which is running here has been designed to mirror the laws of chess. 291 00:33:50,950 --> 00:33:59,230 But if you want to predict what a chess computer is going to do in a particular case, what you do is not analyse the electronics; 292 00:33:59,230 --> 00:34:08,930 you analyse the chess, and analyse the algorithms which have been written in order to control how it plays. 293 00:34:08,930 --> 00:34:17,690 So I've talked about causal explanations not excluding other types of explanation, and I'm saying here we have a physical system, 294 00:34:17,690 --> 00:34:27,800 a chess computer, which has been programmed so that it operates to respect logical constraints, constraints about the logic of chess. 295 00:34:27,800 --> 00:34:37,580 So it's a complete fallacy to argue like this: well, the computer chose Rook takes f6 check because it was physically determined to do so, 296 00:34:37,580 --> 00:34:43,230 so it didn't choose it on any logical ground, and hence we have no reason for taking its decision seriously. 297 00:34:43,230 --> 00:34:45,230 Right? That's obviously nonsense. 298 00:34:45,230 --> 00:34:54,980 The fact that a computer programme which has been designed to solve some problem is acting deterministically doesn't give you any ground whatever 299 00:34:54,980 --> 00:35:00,620 for being sceptical about its choice, because if it's been programmed well, 300 00:35:00,620 --> 00:35:09,690 then its deterministic processing is precisely designed to reach the best choice. 301 00:35:09,690 --> 00:35:15,870 Now, presumably, our brains have evolved so that our reasoning likewise generally responds to rational constraints — 302 00:35:15,870 --> 00:35:20,190 that's why we've got big brains, right? They're useful.
303 00:35:20,190 --> 00:35:26,760 Of course, there are cases where things don't go so well, but generally in ordinary everyday circumstances — 304 00:35:26,760 --> 00:35:34,080 the kinds of circumstances that our brains and bodies have evolved to deal with — our brains are normally very efficient problem solvers. 305 00:35:34,080 --> 00:35:39,630 The fact that they are determined should not give us any reason to doubt what they lead us to do. 306 00:35:39,630 --> 00:35:48,850 On the contrary, what they lead us to do is likely most often to be a pretty good guide. 307 00:35:48,850 --> 00:35:51,720 So, though it is tempting to say, well, 308 00:35:51,720 --> 00:35:55,990 if such and such is causally explained, then it can't be explained rationally or whatever, 309 00:35:55,990 --> 00:36:03,940 that's just not true — it's seriously mistaken. Determinism does not make human action ultimately metaphysically 310 00:36:03,940 --> 00:36:08,170 indistinguishable from the law-governed motions of physical things. 311 00:36:08,170 --> 00:36:15,280 As with the chess computer, you can distinguish between the behaviour of the chess computer and the behaviour of some random physical 312 00:36:15,280 --> 00:36:21,850 system, because the chess computer is designed in such a way that what it does respects logical constraints, 313 00:36:21,850 --> 00:36:26,590 which means you can have a logical explanation of how it behaves: 314 00:36:26,590 --> 00:36:31,450 an explanation in terms of the laws of chess, not just a physical explanation. 315 00:36:31,450 --> 00:36:37,600 And of course, the same is true for us. And this, by the way, is where Hume makes a mistake. 316 00:36:37,600 --> 00:36:43,000 I mean, Hume obviously is arguing for compatibilism. He is a soft determinist. 317 00:36:43,000 --> 00:36:48,970 But his argument actually involves saying the same kind of causation is involved in every case.
318 00:36:48,970 --> 00:36:50,560 I'm saying, no, he's wrong, actually. 319 00:36:50,560 --> 00:37:02,310 He hasn't noticed that you can have a different kind of explanation, even though you have a system that's equally deterministic. 320 00:37:02,310 --> 00:37:07,410 Yeah, and I make the point at the bottom there that even purely physical events can be explained in more than one way; 321 00:37:07,410 --> 00:37:12,780 it's a mistake to think that when something happens, there will only be one explanation of it. 322 00:37:12,780 --> 00:37:23,200 There may be quite a number of explanations at quite a number of different levels. 323 00:37:23,200 --> 00:37:30,010 OK, so suppose I'm contemplating a move in chess, and I'm calculating the possibilities. 324 00:37:30,010 --> 00:37:35,110 I see a way of forcing checkmate and I play accordingly. 325 00:37:35,110 --> 00:37:43,660 So what I did was determined by my desire to win, my knowledge of the game, my knowledge of what winning requires, 326 00:37:43,660 --> 00:37:49,900 and my calculation of how winning could be achieved. So let's suppose it was indeed fully determined like that. 327 00:37:49,900 --> 00:37:56,260 Now imagine the incompatibilist coming in and saying, but then it wasn't really a free choice of yours. 328 00:37:56,260 --> 00:38:03,160 If it was fully determined by your state of mind, your desires, your knowledge, your calculating abilities, your powers of movement and so on, 329 00:38:03,160 --> 00:38:10,700 then you weren't really responsible for it. Now, 330 00:38:10,700 --> 00:38:21,470 that just seems to me to be bizarre. The incompatibilist seems to be thinking of my self as some distinct thing, 331 00:38:21,470 --> 00:38:27,130 something completely outside the causal nexus: an immaterial soul. 332 00:38:27,130 --> 00:38:35,980 So we're coming back to the mind and body stuff. It seems to be like Descartes' view, and we've already seen some difficulties with that.
333 00:38:35,980 --> 00:38:43,030 It's hard to see how some immaterial soul could influence physical events. 334 00:38:43,030 --> 00:38:51,820 It doesn't fit with our understanding of evolution, right? Given that we are evolved animals, the idea that some immaterial soul suddenly came in, 335 00:38:51,820 --> 00:38:56,590 you know, five million years ago or something, from nowhere, seems rather peculiar. 336 00:38:56,590 --> 00:39:05,740 It's also very odd, if we've got an immaterial soul that's doing all our thinking, that nature invests so much in giving us large brains — 337 00:39:05,740 --> 00:39:17,080 large brains that consume huge amounts of calories to develop, and large brains that notoriously make human childbirth very perilous for the mother. 338 00:39:17,080 --> 00:39:22,030 Why would nature do that if there's an immaterial soul that's doing the thinking? 339 00:39:22,030 --> 00:39:25,930 And it's very hard to see any solid, objective evidence for the view. 340 00:39:25,930 --> 00:39:33,130 We are embodied, evolved animals who act in the world as part of the physical causal nexus. 341 00:39:33,130 --> 00:39:38,290 And I've got a nice cartoon here. This was done by Vladimir, one of my students. 342 00:39:38,290 --> 00:39:50,140 We've got the Cartesian immaterial soul floating above, and it's got a controller linked to the pineal gland in the middle of the brain. 343 00:39:50,140 --> 00:40:02,230 I mean, this is a crazy picture, right? It's crazy, the idea that somehow the real us is outside our brain, somehow controlling what goes on inside. 344 00:40:02,230 --> 00:40:05,940 That's a false picture. 345 00:40:05,940 --> 00:40:16,230 A better picture, though again slightly misleading, is to think in terms of the mechanisms of our brains, here represented as cogwheels — 346 00:40:16,230 --> 00:40:25,260 though in fact representing them as a computer system would be kind of more realistic.
347 00:40:25,260 --> 00:40:34,680 But it's not as though we've got processes going on inside our brain which are indifferent to the circumstances in which we find ourselves. 348 00:40:34,680 --> 00:40:41,040 Our brains have evolved such that the mechanisms in them are extremely good at solving 349 00:40:41,040 --> 00:40:47,880 the problems which we identify using our senses and on which we operate using, 350 00:40:47,880 --> 00:40:53,710 for example, our hands. So there's the book about to be moved. 351 00:40:53,710 --> 00:41:01,330 So, you know, this picture is a crazy one, it doesn't make scientific sense, and if you think about it, it's completely cockeyed. 352 00:41:01,330 --> 00:41:09,070 We have to get rid of this idea that there is some self sitting above the causal nexus — 353 00:41:09,070 --> 00:41:15,220 on any plausible picture of reality, that can't be right, whether you're a determinist or an indeterminist. 354 00:41:15,220 --> 00:41:23,620 So if the push towards indeterminism comes from this sort of picture, then that's quite wrong. 355 00:41:23,620 --> 00:41:30,910 I don't think that's the only pull towards indeterminism, as I'll explain, but I think that's quite a significant one. 356 00:41:30,910 --> 00:41:41,470 I love this quote from Kant: the light dove, cleaving in free flight the thin air, whose resistance it feels, 357 00:41:41,470 --> 00:41:48,410 might imagine that her movements would be far more free and rapid in airless space. 358 00:41:48,410 --> 00:41:53,540 The dove thinks that the air is constraining its movement; actually, without the air, 359 00:41:53,540 --> 00:41:56,420 the dove couldn't fly at all. 360 00:41:56,420 --> 00:42:09,740 So the idea that, you know, oh, if only I could think freely without having to worry about the mechanisms going on in my brain — 361 00:42:09,740 --> 00:42:14,600 no, actually, without those mechanisms in your brain, you would not be able to think at all.
362 00:42:14,600 --> 00:42:26,080 That's what makes thinking possible. So it's very implausible to insist on a distinction between what I do and 363 00:42:26,080 --> 00:42:30,970 the actions that result from the workings of my mental and bodily faculties. 364 00:42:30,970 --> 00:42:37,060 Those are the same thing. Compare: imagine somebody saying, well, the crane — 365 00:42:37,060 --> 00:42:45,940 it itself didn't lift that weight. It was the crane's hook that lifted it, supported by the crane's tower, powered by the crane's engine. 366 00:42:45,940 --> 00:42:51,680 So it wasn't an action of the crane itself. That's crazy. 367 00:42:51,680 --> 00:43:00,470 Lifting things that way, using the tower and the hook and so on — that's what a crane lifting something is. 368 00:43:00,470 --> 00:43:07,670 My making a move in chess just is my hand moving in response to the calculations that are 369 00:43:07,670 --> 00:43:14,000 going on in my brain, and those calculations are implemented by things going on in the neurones. 370 00:43:14,000 --> 00:43:22,600 It's not like there's any need to make the decision independently of those processes. 371 00:43:22,600 --> 00:43:29,110 But as I've said, I think there is another pull towards incompatibilism. 372 00:43:29,110 --> 00:43:34,630 And I think that that deserves a separate treatment. 373 00:43:34,630 --> 00:43:40,000 So imagine that the incompatibilist admits all this: OK, 374 00:43:40,000 --> 00:43:46,900 insofar as we are pulled towards incompatibilism by this picture of a Cartesian soul, 375 00:43:46,900 --> 00:43:49,160 yes, that's wrong. 376 00:43:49,160 --> 00:43:58,430 Really, our brains are making decisions through the causal processes going on in them. 377 00:43:58,430 --> 00:44:02,750 Let's admit that they are not an obstacle to our freedom.
378 00:44:02,750 --> 00:44:06,740 They are the way in which we act freely. Nevertheless, 379 00:44:06,740 --> 00:44:12,020 there is a temptation to say that if the prior situation — say, the situation of the 380 00:44:12,020 --> 00:44:18,020 world a thousand years ago — and the laws of nature causally determine how I behave, 381 00:44:18,020 --> 00:44:21,920 then I can't be genuinely responsible for what I've done. 382 00:44:21,920 --> 00:44:27,950 If someone a thousand years ago — you know, Laplace's omniscient being — was looking down and saying, 383 00:44:27,950 --> 00:44:36,050 oh, I foresee that in a thousand years Millican will do this, then how can I be held responsible? 384 00:44:36,050 --> 00:44:45,680 There is a feeling here that I'm not ultimately responsible, I'm just a pawn in the hands of nature. 385 00:44:45,680 --> 00:44:57,850 Now, Galen Strawson has argued along these lines; he thinks this is indeed an important part of how we think about freedom. 386 00:44:57,850 --> 00:45:04,860 But it has the unfortunate implication that nothing can be morally responsible. 387 00:45:04,860 --> 00:45:14,970 Moral responsibility becomes impossible: if I can only genuinely be morally responsible, morally free, if I am, 388 00:45:14,970 --> 00:45:21,480 in the relevant sense, the ultimate cause of the way I behave — such that 389 00:45:21,480 --> 00:45:28,650 I am bringing about the causal circumstances which lead me to act the way I do — 390 00:45:28,650 --> 00:45:33,660 then it turns out moral responsibility is just going to be an illusion. 391 00:45:33,660 --> 00:45:42,470 Actually, Strawson embraces that conclusion; he thinks yes, indeed, it is an illusion. 392 00:45:42,470 --> 00:45:50,650 So it looks like we might be in danger of undermining the theoretical basis of morality. 393 00:45:50,650 --> 00:46:01,680 Perhaps the incompatibilist is right: if everything is determined, morality is bunk.
394 00:46:01,680 --> 00:46:07,790 And maybe everything is determined. Maybe that's the case. 395 00:46:07,790 --> 00:46:14,480 Or maybe incompatibilism is correct that there is a conflict between determinism and genuine moral responsibility, 396 00:46:14,480 --> 00:46:19,850 but maybe the only non-deterministic aspects of the universe are quantum events, 397 00:46:19,850 --> 00:46:25,050 and they don't have any relevance to moral responsibility, so that's not going to help. 398 00:46:25,050 --> 00:46:32,160 And maybe the entire notion of moral responsibility is hopelessly incoherent, whether the universe is determined or not, 399 00:46:32,160 --> 00:46:39,900 because in fact the only way we could genuinely be morally responsible is if we are causes of ourselves. 400 00:46:39,900 --> 00:46:49,230 And that's just not possible on any account. Well, I don't think we have to go that way. 401 00:46:49,230 --> 00:46:59,100 I think we can follow David Hume here. I'm going to defend this part of his account, because Hume's account of morality, 402 00:46:59,100 --> 00:47:06,450 as we saw in the last lecture, is based on a broadly evolutionary picture. 403 00:47:06,450 --> 00:47:11,310 Morality is an institution that enables us to get along with each other. 404 00:47:11,310 --> 00:47:19,120 It's very important in human society; it plays a crucial role. 405 00:47:19,120 --> 00:47:29,890 And as we evolved towards morality, we developed natural emotional tendencies. 406 00:47:29,890 --> 00:47:35,920 So a very important part of morality, for example, is feeling empathy for other people. 407 00:47:35,920 --> 00:47:41,920 That helps us to cooperate. That is evolutionarily a valuable thing. 408 00:47:41,920 --> 00:47:47,520 We developed a capacity for trusting and for being trustworthy. 409 00:47:47,520 --> 00:47:52,620 And that is one of the most important characteristics of humans.
410 00:47:52,620 --> 00:48:01,290 People who are untrustworthy, people who are found not to be trustworthy, fatally compromise their prospects for the future. 411 00:48:01,290 --> 00:48:09,210 All right. One of the most important characteristics you can have in life is to be trustworthy and to be known to be trustworthy. 412 00:48:09,210 --> 00:48:14,580 And your prospects will be hugely better if you are. Now, that may seem counterintuitive. 413 00:48:14,580 --> 00:48:18,300 You know, we may think the person who's self-interested is going to be always looking out 414 00:48:18,300 --> 00:48:22,950 for the chance to do the best for themselves and ignore any commitments to others. 415 00:48:22,950 --> 00:48:29,970 But actually, if you live in a social world, one of the most valuable things that you have is your reputation. 416 00:48:29,970 --> 00:48:35,160 And the best way to be thought of as trustworthy is to actually be trustworthy. 417 00:48:35,160 --> 00:48:43,650 So there are all sorts of reasons why our emotions have developed in such a way as to conform to a moral outlook on things, 418 00:48:43,650 --> 00:48:48,790 both in terms of how we judge other people and how we judge ourselves. 419 00:48:48,790 --> 00:48:59,590 And if we have such emotional responses, if our emotions are attuned to morality, then that's pretty much hardwired in us, right? 420 00:48:59,590 --> 00:49:08,680 We've evolved that way. That's not going to change from discovering some metaphysical truth. 421 00:49:08,680 --> 00:49:14,250 So suppose we somehow discovered that determinism was true. 422 00:49:14,250 --> 00:49:23,210 Would that change our moral sentiments? Well, according to Hume, a man who is robbed of a considerable sum — 423 00:49:23,210 --> 00:49:29,970 does he find his vexation for the loss anywise diminished by these sublime reflections? 424 00:49:29,970 --> 00:49:34,590 No. I mean, it's not exactly determinism — it's to do with God and Providence and all the rest.
425 00:49:34,590 --> 00:49:41,070 But in the same way, having these sublime reflections about the metaphysics of the universe 426 00:49:41,070 --> 00:49:48,190 doesn't change the fact that when somebody beats you up and steals your purse, you feel cross. 427 00:49:48,190 --> 00:49:54,610 Why then, Hume asks, should his moral resentment against the crime be supposed incompatible with them? If morality is a natural, 428 00:49:54,610 --> 00:50:00,220 evolved human tendency that engages our emotions, 429 00:50:00,220 --> 00:50:04,450 there's no reason to suppose that suddenly discovering or contemplating some 430 00:50:04,450 --> 00:50:09,610 metaphysical fact about the universe is going to change the way we think morally. 431 00:50:09,610 --> 00:50:20,150 You simply wouldn't expect that to happen. And moreover, in the last lecture, we saw a model of the iterated prisoner's dilemma. 432 00:50:20,150 --> 00:50:30,260 And I made the point there that it has been discovered that nice strategies — those that are never the first to defect — generally win. 433 00:50:30,260 --> 00:50:41,120 They tend to do much better. You can easily explain why: strategies like tit for tat, which start off cooperating 434 00:50:41,120 --> 00:50:51,530 but then reciprocate — if another strategy defects against them, they defect back. 435 00:50:51,530 --> 00:50:59,810 That's a kind of punishment. And these sorts of computer simulations have shown pretty clearly that one of the best ways of 436 00:50:59,810 --> 00:51:07,250 evolving towards an effective moral structure within these agents is retribution: 437 00:51:07,250 --> 00:51:13,430 having systems which punish others for defecting keeps them in order.
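The tit-for-tat idea can be made concrete with a small Python sketch. The payoff numbers are the standard textbook values for the prisoner's dilemma (T=5, R=3, P=1, S=0) — an assumption for illustration, not figures from the lecture:

```python
def tit_for_tat(my_history, their_history):
    # Cooperate first; thereafter copy the opponent's previous move.
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

# Payoffs for (my move, their move): temptation, reward, punishment, sucker.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strat_a, strat_b, rounds=10):
    """Run an iterated prisoner's dilemma and return both total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Against another nice strategy, tit for tat racks up the mutual-cooperation payoff every round; against a pure defector it loses only the first round and then punishes every defection thereafter — exactly the retributive pattern described above.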
438 00:51:13,430 --> 00:51:18,290 And again, it's not surprising, therefore, with the evolution of society, 439 00:51:18,290 --> 00:51:23,330 with morality and cooperation playing such a crucial role within society, that we 440 00:51:23,330 --> 00:51:29,150 have evolved to have natural retributive emotions, both positive and negative. 441 00:51:29,150 --> 00:51:35,240 If someone's nice to me, I feel gratitude. If someone does something horrid to me, I feel resentment. 442 00:51:35,240 --> 00:51:38,810 If someone harms my child, I feel very angry. 443 00:51:38,810 --> 00:51:49,040 It's absolutely to be expected that these sorts of emotions would have evolved, because they helped to establish and maintain moral behaviour, 444 00:51:49,040 --> 00:51:52,250 cooperative behaviour, in society. 445 00:51:52,250 --> 00:52:01,880 Now, if that really is the foundation of morality, if morality is founded on evolved sentiments, there is no reason to worry 446 00:52:01,880 --> 00:52:06,380 that metaphysical reflections about determinism are going to undermine it. 447 00:52:06,380 --> 00:52:12,380 It has a quite different foundation, and we should probably feel quite grateful that it does. 448 00:52:12,380 --> 00:52:16,405 Thank you.