Welcome to the sixth lecture on General Philosophy. Today we're going to be covering some related topics which might broadly be called philosophy of action. We're starting off from where we were in the last lecture, when we looked at personal identity and the link that personal identity had with the idea of caring about our future. We're then going to be moving on to discuss the issue of self-interest, which is obviously intimately connected with the thought of caring about our future self, and discussing a little bit about the foundations of morality, before going on to free will, a topic that is very intimately connected with moral issues. We're not going to be completing free will this time, and we will be coming back to the issue of personal identity briefly as well next time.

Here we have, by the way, Bishop Joseph Butler, whose relevance will become clear later; David Hume, whom we've seen quite a lot of; Charles Darwin; Robert Axelrod; and Peter van Inwagen.

So as we saw last time, it is possible to invent cases in which the concept of personal identity becomes threatened. We can imagine ourselves being duplicated, or our brains being split, or things like that. And the question is: if I cannot identify a future individual as me, what issues does that raise? And Derek Parfit's response is, in effect, to turn the question round. That is to say: actually, there's nothing so special about my future personal identity. What's important is my personal survival, and I might survive into more than one future person, and I've got just as much reason to care about those future survivors as I would have if they were literally me. So this naturally leads on to the question of what it is about our future self that makes it especially rational to care about it. And that's where we're going to be starting today.

So I want to introduce a theory called psychological egoism. It's quite a popular theory; it won't surprise me if quite a lot of you actually hold it. But let me explain it, and then you can tell me whether you do. Psychological egoism is the theory that, in fact, everything we do is done with a view to our own future self-interest: all of our intentional actions are done ultimately with our own benefit in mind. Now notice this is a factual claim, a claim about human psychology. It's often combined with a normative claim: people will say it's absolutely fine to look after ourselves, that's the number one criterion, and it goes with a materialistic, 'greed is good' outlook and all that kind of thing. It's a view that's associated, for example, with Ayn Rand. Now notice that the psychological egoist doesn't claim that all our actions actually achieve our own benefit.
We might miscalculate; we might not know what will bring us benefit or harm. We might do something thinking it's going to benefit us, for example smoking because we enjoy it, and then it ends up doing us harm. We might be unlucky. We might fail in various ways. We might fail because other people, following their self-interest, cross our path and mess up our plans.

OK, now what do we mean by our own benefit? Well, the most common interpretation of this, and the one I'm going to focus on here, is basically maximising our pleasure and minimising our pain, and that theory is egoistic psychological hedonism. So hedonism is the claim that what matters is pleasure and pain; psychological hedonism is the claim that we are motivated primarily by pleasure and pain.

Now, notice the psychological egoist doesn't have to claim that the only thing I desire is my own self-interest. It's entirely compatible with psychological egoism that I desire the benefit of my family, my friends, my country, my favourite football team, whatever it might be. But the point is that the reason I desire their benefit is because it will ultimately benefit me. I may want my favourite football team to win, but that's because their winning will bring me pleasure, and it's the anticipated pleasure which motivates my hope that they will win. So the crucial claim here is that the only thing we desire ultimately, rather than for the sake of something else, is our own benefit. Now, if that's true, then it rules out the possibility of genuine altruism. By altruism I understand acting for the ultimate benefit of someone else at potential cost to oneself. So if I do something that costs me in some way, that reduces my own pleasure, and I do it because I ultimately care about increasing somebody else's, then that is an altruistic act.

Now, some people might think this is just a crazy theory, OK? Because surely lots of normal, socially well-adjusted, apparently rational people do things like this. I'm sure most, or hopefully all, of you do some of these things or things like them from time to time. And it may seem hard to think of some of these as self-interested. I mean, I think the last example is a pretty good one: taking out life insurance for the benefit of young children. Right, that's something parents very commonly do, and we did it when we had children: we took out insurance, and if we died, there would be money to look after the children until they were, you know, twenty or whatever. Is that an egoistic act, given that any benefit they get is going to be after I'm dead?
So these things seem unselfish on the face of it. But the psychological egoist has a reply, and I think this is the principal attraction of psychological egoism. Actually, it's something like this argument; I'm going to call it the a priori argument. So here it goes, and here we are thinking about the philosophy of action: what is it to act for a reason?

Well, in standard cases, to do action A intentionally is to do A knowingly, so I have to know what I'm doing, and for a purpose. And the purpose will usually be to bring about some outcome; let's call that O. But if I'm doing action A in order to bring about O, and if that's going to give me a purpose, if that's motivating me to do A, then it must be that I desire O: I desire the outcome. So when I do the action, I am satisfying my desire for O; I'm doing A in order to satisfy that desire. Now, if I'm doing something in order to satisfy my desire, in order to get the satisfaction of that desire, then I am in fact doing it for my own satisfaction. It is ultimately a selfish act. It may not seem it on the surface, but actually, necessarily, it is.

OK, just informally, I'd be interested to ask you to put your hands up if you are attracted by this theory. OK, yeah, that's quite a lot. I mean, typically in these lectures I find it's about 50 per cent. So we're not talking here about a straw man. This is a theory that a lot of people are attracted to, a lot of intelligent people. Now I'm going to argue that the theory is completely wrong. Utterly wrong. All right? It's completely wrong-headed.

So here goes. First reason for doubt about it: let's apply Hume's fork. Remember Hume's fork, the distinction between relations of ideas and matters of fact? Psychological hedonism is a claim; it's a claim about the motivation of human action. Now, is it meant to be a relation of ideas or a matter of fact? This is why it's called Hume's fork: you take out the fork and you apply it to the problem, you push it in and you say, right, which prong are you taking? Is it a relation of ideas or a matter of fact? It can't be both. Now, if we say it's a relation of ideas, then it's something that's true by definition: you are just defining your terms in such a way as to make psychological egoism come out true. But if you do that, it can't imply anything about the real world. All right? You are just defining, maybe, a selfish act as one that I do in order to satisfy my desire. If you're defining it that way, then it may trivially be true that anything I do intentionally is selfish by your definition, but it doesn't say anything about the real world.
On the other hand, if the claim of psychological egoism purports to be a matter of fact, a description of psychological reality, then it's got to appeal to empirical evidence. It can't just appeal to an apparently a priori argument. You can't have it both ways. There must be something wrong here.

What's wrong is that there's a seductive ambiguity in the argument. The satisfaction of a desire is different from getting satisfaction from your desire's being satisfied. Those are completely different things, but they're very easy to confuse. Satisfaction of a desire is a logical or semantic relationship between a desire and an outcome: my desire for O is satisfied if O comes about, whether or not I know about it. If I desire something and that thing happens, my desire is satisfied, even if I'm completely unaware of its being satisfied. In the same way, I might guess that something is true, and that belief of mine, or guess of mine, could be a true belief, whether or not I know it's true. So the logical or semantic relationships between our desires and our beliefs and the truth are independent of our psychological state. On the other hand, the satisfaction of an agent is a psychological state, if you like a state of happiness or contentment or something like that, typically involving feelings of pleasure on contemplating an outcome. In this sense, my satisfaction at the coming about of a desired outcome is the personal pleasure I get from knowing about it.

OK, so those are quite different things, and notice that semantic satisfaction needn't be psychologically satisfying. This isn't to deny at all the common-sense psychological fact that normally, when our desires are satisfied and we know about it, that gives us pleasure. Nobody's denying that. But they don't always; there's no necessary connection here. So, for example, I desire my children to thrive after my death, as I imagine do most parents. I will not know if they do; that's never going to bring me any satisfaction at all, not the satisfaction of that desire. Of course, I may get pleasure from doing what I can to ensure that it happens, but that's different. The actual coming about of it will bring me no satisfaction at all, because I won't be here to enjoy it. Or take another case: I might wish to have, say, ten grandchildren, or, say, ten great-grandchildren. I will never know whether that desire will actually be satisfied.

Another point is that psychological satisfaction can be based on false information.
So let's suppose a journalist gets kidnapped by some terrorist group, and his mother is dying. The one thing she cares about is that her son is OK. And there you are by the bedside, talking to her, trying to comfort her as she faces her death. You hear that the son has actually been killed, but you decide to lie, and you say the news has just come through: your son is safe. 'Oh, I can die in peace,' and she dies. Let's not argue whether that's the right thing to do or the wrong thing to do. You can see that in that case the woman feels psychological satisfaction, but over a desire which has not, in fact, been satisfied. So those things are different.

Hence, there is a straightforward flaw in the a priori argument. When I do A, I'm doing it in order to satisfy my desire for O: that may be true. It doesn't follow that I'm doing A in order to experience the psychological satisfaction that I get from O's coming about. That's just a non sequitur. It doesn't follow.

And by the way, particularly those of you who are studying PPE, notice that there is a related ambiguity in the term 'utility'. Very often, in popular thought or discussions about economics, people interpret utility in terms of personal well-being. Economists don't define it that way. Economists define utility as whatever it is that we are trying to maximise, and in the case of a parent, for example, that may be the future well-being of their children, which is not their own well-being. OK, so I've got a quote here, actually, from the introduction to the microeconomics lectures that are happening this term, and it's quite clear: utility is a measure of achievement of goals, not personal benefit. And it's utterly wrong to put those together. If you conflate those, then you are at risk of concluding that the only rational economic way to behave is in your own self-interest. That is not implied by economic theory; that's a mistake. It's very unfortunate, I think, that utility is often misinterpreted in that way, and there are many people who somehow think that economic logic forces us, or would or should force us, to be self-interested. That's simply not true.

But surely, you might say, when I try to achieve some outcome and succeed, I do get some personal pleasure from achieving it. So my utility, as the economists define it, and my pleasure do match up. Well, I've said it is plausible that they match in normal cases; you know, a lot of the time satisfying our desires does bring us pleasure. Sure. But even when that is so, it does not follow at all that I perform the action
in order to obtain that pleasure. I may get pleasure from achieving some goal; it doesn't follow that my aim in achieving the goal was to get the pleasure. Not at all.

Indeed, if you think about it, that would be incoherent. I can't desire to get pleasure from satisfying a desire unless I have some other desire to satisfy. That my desires be satisfied cannot possibly be my only motivating desire; you've got a circularity there. You've got to have some independent desire in order to have something from the satisfaction of which you can get pleasure. It is then possible to have a higher-order desire of getting that pleasure, but that's a different desire. So actually, there's something really peculiar about thinking that your only desire is to satisfy your own desires. It's logically incoherent, and psychological egoism basically gets the order of explanation back to front. We normally get pleasure from having achieved something that we wanted to achieve; it's not that we want to achieve the pleasure. We start off wanting to achieve something, and we get the pleasure when we achieve it. That's a much more normal way round.

Now, the philosopher Henry Sidgwick noted what he called the paradox of hedonism: the impulse towards pleasure, if too predominant, defeats its own aim. So basically, if you set out in life with the primary object of getting pleasure, you will fail; you're far more likely to get pleasure in life, to get satisfaction and contentment in life, if you set out with other aims. Now that is a very well known fact. It's very well known that people who just pursue pleasure end up with empty, unhappy lives, even killing themselves or whatever, right? Riches and wealth and fame are not a passport to great happiness. Most people get happiness in life from pursuing worthwhile aims rather than just pursuing their own pleasure. And again, that's just a psychological fact, but I think it's not surprising when you look at it in the light of the points I'm making about the psychology of action.

Now I want to show just how silly psychological egoism is. So let's take the case of life insurance. I insure my life so that my wife and children will prosper after my death. OK, I actually did this: my wife and I did insure our lives, precisely to secure our kids' well-being if we had both been in a car accident or something like that.
Now the psychological egoist comes along and says: oh, but you did that because, if you didn't do it, you'd feel terribly uneasy about what was going to happen to your children after your death. So actually, the reason for paying the insurance premiums, for making that financial sacrifice, was to make your own life easier. But hang on a second. On the psychological egoist theory, I don't give a damn about what happens to my kids after my death. I really don't. Sure, their well-being while I'm alive may bring me pleasure, and I might care about that. But once I'm dead, forget it. Who cares? So why should I feel uneasy? If I'm actually that egotistical, then I'm not going to feel uneasy. So why do I feel uneasy? Well, the obvious explanation is that I care about them. I do care about what happens to them after my death; that's why I would feel uneasy. And that is why I would indeed get some pleasure, if you like, from paying the insurance premiums. But that's because it's relieving an anxiety which I have independently.

I mean, here is the psychological egoist explanation of the case where I see an old lady wanting to cross the road. Of course, I don't care in the slightest about her well-being, only about my own pleasure; I'm a psychological egoist. But I foresee that if I do go and help her across the road, I will feel some pleasure from having done so. Why? I don't know, because I don't give a damn about her, but somehow or other, strangely, I will feel pleasure from helping her across the road. So I'll go and help her to get that pleasure. And by the way, nobody's looking, so I'm not scoring any points, you know, on the social scale from doing this. I mean, it's a crazy explanation. Compare it to this one: I see an old lady wanting to cross the road; I don't want her to get hurt; so I'll go and help her. The first explanation is crazy, because it's postulating a completely unexplained pleasure that I anticipate getting from doing something that I don't actually care at all about. The second is simply pointing out that we naturally feel concern about other human beings, and that's a completely familiar phenomenon, right? This isn't a surprising thing. If anything, we should think that psychological egoism is wildly paradoxical. I just find it astonishing that so many people are attracted by it, and I can only think that it is something like the superficial appeal of the a priori argument; and maybe it seems a bold and exciting theory. But anyway.
Another point: recall Hume on induction. We are not as intellectual and rational as we sometimes like to think of ourselves as being. In the case of induction, we just take for granted that the future will resemble the past. That's an irresistible instinct, and just as well we've got it, isn't it? Now, our motivations also don't generally involve rational calculation nearly as much as the psychological egoist claims. The psychological egoist thinks that all the time, in any intentional action we do, we're busy calculating how it's going to benefit us. No. Lots and lots of things that we do, we just do pretty much from straight animal instinct. And as Hume points out, animals instinctively show kindness both to each other and to us. Do we think they're calculating their self-interest when they do that? No, that's ridiculous. Well, if we can suppose animals can act kindly without doing calculation, that's likely to be true of humans as well.

And again, an evolutionary explanation is here extremely straightforward. I mean, it's dead easy to explain by evolution why those of us who are parents typically care deeply about our children. It's obvious, isn't it? Take a set of parents, and not in our society, you know, with all the medical and social support and all the rest; think about a relatively primitive society, the kind of society in which humans evolved. You get parents who don't give lots of attention to their helpless little infants, and those who do because they love them dearly. Which of them is going to have more offspring that survive to adulthood to breed themselves? Well, obviously the ones who look after their kids. In which case the genes that lead them to look after their kids will be more frequent in the next generation than the genes of the people who don't give a damn about their kids. So, of course, the genes that lead to parental care will spread. Straightforward evolution, very easy to explain.

And notice, I think it's a little bit unfortunate that Richard Dawkins came out with this phrase, you know, 'the selfish gene', and some people, I think, think: oh, there's something in genetics that makes us selfish. No, no, no. It's the gene that's selfish, not us, right? And it's not really selfish, of course, because the gene isn't thinking. But the metaphor is that the genes that do their best, as it were, to propagate their own reproduction into the next generation are the genes that prosper. And of course, those are precisely the genes that lead to parental care.
So from the gene's point of view, the genes that make us care intensely about our offspring and love them more than almost anything in the world are extremely well adapted for surviving into the next generation; they bring it about that they do. So it's the genes that are selfish, but again, it's a metaphor. And the whole point is that the selfishness of the genes forces us, if you like, not to be selfish, or brings it about that we are not.

Another point: I've said that the psychological egoist is being overly intellectual, thinking that we're rationally calculating all the time. Evolution, of course, works through instincts and emotions; they're far more reliable. I mean, if you want to get a parent to look after a child, the mechanism of just giving the parent an intense love for the child and the desire to keep it away from any danger is far more effective than having a parent who's busy calculating as the child walks towards the edge of the cliff: if the child falls over the cliff, will that make me unhappy? Hmm, I'll have more time to, you know, do other things, blah blah blah, calculate, calculate. No, no. Nature just gives us an irresistible instinct: child walking towards cliff, go and save it. So psychological egoism is wrong from so many angles. It's illogical. It's back to front. It's overly intellectual.

Now, I have heard this last-ditch defence of psychological egoism: you care about your children, but only because you see them as part of your self; indeed, they share your genes, and you want your genes to prosper, so it's still fundamentally self-interested. Now, come on. I'm not stupid. I don't think my children are me. Of course I don't. And the idea that people evolved to care about their genes, when until what, 1953 or whatever, nobody even knew what a gene was? This is just cloud cuckoo land. If somebody says, 'Well, when I say your children are part of you, what I mean is you care about them as though they were part of you', oh, maybe that's sort of plausible. I care about my children as much as I do about myself in many respects, yes. And as one gets older, by the way, that gets more so. But that simply acknowledges that psychological egoism is false; it's simply acknowledging that we do care very deeply about our children.

And I think it's quite interesting. I mean, we started all this from the issue of personal identity, and nowadays, especially with identity politics and so on, we do think of people as identifying with a cause, and it may be religious, national, political, social and so forth.
And the simple fact is, people often do care more about some cause than they do about their own self-interest. Many people will undergo all sorts of hardship for a cause, and we may say, oh, they're identifying with it. But when we say that, we are not saying that somehow they are irrational about their own personal identity, that they somehow think they've ceased to be a person. All we're saying is that they treat the cause as something that they are intimately committed to. And there's nothing particularly irrational about that. I mean, of course, lining up with some causes might manifest irrationality in various ways, but you can't give any rational argument for saying we shouldn't care about this or that. You can argue that some commitments are inconsistent. But there's a notorious point that David Hume made in the Treatise of Human Nature (I say it's notorious because he kind of throws it off in a paragraph), and it raises a big question: can you ever infer an 'ought' from an 'is'? He seems to be hinting that you can't, and this is widely accepted; many philosophers would agree. Most philosophers, I think, would agree that you cannot get an 'ought' from an 'is'; that is, merely factual claims will not by themselves generate a statement of obligation.

Now, if that's true, then no argument can rationally force us to relinquish our most fundamental normative commitments. If there is something I care about so much that it dominates everything else, and it's consistent (it is not some kind of inconsistent desire that can be shown to be logically faulty), then there is no way I'm going to be reasoned out of it. And indeed, when you reason with people about things like morality, typically what you do is either bring new facts to light that may bear on their view or point out potential inconsistencies between their views. It's impossible to argue about normative issues, as it were, right from the base; you've always got to appeal to something that somebody already cares about.

So let's now turn towards morality, and on to free will, which is intimately connected with it. I'm being a little bit dogmatic here in saying that I simply don't think any of the attempts to found morality on pure reason (Kant's is the most famous) have actually worked, and I don't suppose they ever will. As for attempts to argue that morality can be justified in terms of self-interest, well, there's something to that, but it seems rather dubious as a general claim, unless you believe that there is some sort of reward and punishment in an afterlife administered by a perfectly just God.
We all know of plenty of cases that seem to tell against that: heroic people who've ended up being tortured, nasty people who've ended up being very rich and comfortable, and so forth.

But we might get a more balanced view between moral and selfish attitudes if we see them both as having the same ultimate source: they both come from, if you like, the broad biology, evolution and context of human life. Hume sketches an account of morality that fits very easily with an evolutionary perspective, even though Hume didn't know about evolution as we now understand it; Darwin was writing a century after Hume. But you can see the kind of story. There is a natural appetite between the sexes, which unites them together and preserves their union till a new tie takes place in their concern for their common offspring. OK, makes sense: boy and girl fancy each other, child comes along, a natural evolutionary instinct kicks in, they both think this little bundle is absolutely adorable, and as a result they've now got a new tie. They're both concerned about making sure that this little bundle survives. A family immediately arises, and particular rules are found requisite for subsistence. When several families unite together into one society, the rules which preserve peace and order enlarge themselves to that extent. Now, within one family you might not have strict rules; you might obviously have conventions, and they might come from the way the society works. But once you get societies, families uniting together in a tribe or whatever, there are going to be conflicts of interest, and these things are going to need rules to sort them out. And then the boundaries of justice will grow still larger, with a corresponding natural progress of human sentiments, when different societies interact for mutual convenience and advantage. So basically, Hume is saying that morality, rules of justice and so forth naturally come about from the biological reality.

Now, obviously, take Hume's story, particularly the last bit, when he's talking about this natural progress of human sentiments. What I think he means there is that when we interact with people from other societies, or, you know, other towns, other tribes, whatever it is, we apply principles of justice, fairness and so forth, and those principles will probably emerge within our smaller groups. So within our village we have an idea of justice: you know, this is mine, that's yours, I won't touch yours if you don't touch mine, that sort of thing.
And then when we interact with the neighbouring village, maybe because we want to trade, we bring to that conversation our understanding of justice.

Now you might naturally think: well, OK, that seems straightforward, but how does the evolutionary story work, exactly? It's very easy to explain why we should all be intensely concerned about our offspring, because they carry our genes. But how does it work more broadly than the family? And this is an area where computer modelling has actually, recently, I mean in the last 30 or 40 years, made a profound difference. This chap, Robert Axelrod, was particularly influential; I think he was the first person to try doing this.

So let's look at a game called the prisoner's dilemma. A lot of you, I imagine, will have come across the prisoner's dilemma; I'll just talk you quickly through a version of it. OK, so imagine that there are two of us, one male, one female. I'll be the male; the gender isn't important, it just makes it easier for reference, that's all. And I'll refer to you as the female, right? So you're in one room, I'm in another, and each of the rooms is set up in an identical way. There are two buttons in the room: one is labelled 'cooperate', one is labelled 'defect'. And underneath the buttons there is a slot where money can come out. Now, I don't know who you are and you don't know who I am. We have never seen each other, we're never going to meet again; this is just a one-off interaction between you and me. You're in your room, I'm in my room, we cannot influence each other at all. All we can do is press either cooperate or defect.

OK. If I press cooperate and you press cooperate too, then three thousand pounds will come through my slot, and likewise yours. If I press cooperate and you press defect, I'll get nothing at all and you will get five thousand. Likewise, if you cooperate and I defect, I will get five thousand pounds and you will get nothing. If we both defect, we will both get a thousand pounds. Got that?

OK, so what should I do? Well, here's how I might reason. I don't know what she's going to do; she's either going to cooperate or she's going to defect. If she cooperates, I'm clearly better off defecting: I'll get five thousand pounds instead of three thousand pounds. But if she defects, I'm still better off defecting, because I'll get one thousand pounds rather than nothing. So whatever she does, I'm better off defecting. Defect.
Meanwhile, you in your room are thinking: well, I don't know what he's going to do, but if he cooperates I'm better off defecting, and if he defects I'm better off defecting. Defect. And we both get a thousand pounds. Well, OK, it's better than nothing. But we feel pretty silly as we walk outside and talk to some other people who've been playing this game, and they say: oh, I didn't calculate, I just thought it would be nice to cooperate, and I got three thousand pounds. But there we were, rationally calculating our self-interest, and we ended up worse off. So that's why it's called a dilemma: it seems that rational calculation actually can lead to a situation in which everybody does worse. Now, it does seem that in the one-shot prisoner's dilemma, if you've just got, you know, one go, it's hard to argue against the case for pressing defect. Sadly.

But the situation changes radically if you iterate the prisoner's dilemma. This is what Robert Axelrod discovered. If you have a prisoner's dilemma which is repeated, where the interactions continue, again it doesn't necessarily involve any personal relationship with the individual concerned: you're still in your room, I'm still in my room, but instead of just pressing cooperate or defect once, we do it repeatedly. Maybe we do it a hundred times in a row or something like that. And you can imagine then that the dynamics would change a great deal. So imagine you're in your room, I'm in my room, and this time instead of thousands of pounds it's just pounds, and we've got a hundred or more of these interactions to go.

So I start off: well, I know the right thing to do is defect. Defect, one pound; defect, one pound; defect... This is silly. We could both be getting three pounds every time. I'll try cooperating. Cooperate: nothing. Rotten woman, she defected on me; well, I'm going to defect on her. Defect: five pounds. Oh, that's interesting: I cooperated once and she got five pounds, and now she's cooperating back. She got the signal, not that I was intending it in that way. But now let's just try again: cooperate, three pounds; cooperate, three pounds; cooperate, three pounds. You can see how it would happen, yes? Not because I care at all about her or she cares at all about me, but because we were stupidly forgoing the possibility of getting three every time, just by being suspicious and getting one every time. And there's a very interesting backwards induction argument if you know how many iterations there are going to be, and that leads to a very curious and, I think, rather dubious result.
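Just to pin down the payoff structure we've been using, here is a minimal sketch in Python; the dictionary layout and the variable names are purely illustrative, and nothing in it comes from the lecture's own materials.

# Payoffs for the one-shot game described above, in pounds:
# PAYOFF[(my_move, her_move)] gives (my_payoff, her_payoff).
PAYOFF = {
    ("cooperate", "cooperate"): (3000, 3000),
    ("cooperate", "defect"):    (0, 5000),
    ("defect",    "cooperate"): (5000, 0),
    ("defect",    "defect"):    (1000, 1000),
}

# Whichever move she makes, defecting pays me more...
for her_move in ("cooperate", "defect"):
    mine_if_cooperate = PAYOFF[("cooperate", her_move)][0]
    mine_if_defect = PAYOFF[("defect", her_move)][0]
    print(her_move, mine_if_defect > mine_if_cooperate)   # prints True both times

# ...and yet mutual defection (1,000 each) leaves both of us worse off
# than mutual cooperation (3,000 each). That mismatch is the dilemma.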
But leave that aside; just take the point that, in general, if the prisoner's dilemma is iterated, things change. The dynamic is different.

OK, so let's take an example of three strategies. I'm going to use red for the strategy that's commonly called All-C, the naive cooperator. If it's worthy of the name 'strategy', this one just cooperates all the time. Then All-D, for which I'll use blue, is a nasty defector: it just defects all the time. And we have one strategy here that's just a little bit clever. It's called tit for tat. It's a nice strategy, meaning it will never defect until it's been defected against; it's never the first to defect. So it starts out being cooperative, but then it reciprocates: it behaves in exactly the same way that the other guy did on the previous turn. Have we got that? So we've got three different strategies.

Now imagine that you have a context in which these are being played out. So here's a simple computer model. You can see we've got a grid. Most of the cells are red, so they're the naive cooperators; we've got a few blues scattered around, the nasty defectors; and we've got a fair number of greens, the tit for tats. What will happen when we set this going? Well, let me just explain how it works. Each cell interacts with all of the cells around it, and by the way, the edges wrap around to the other side, so they've all got eight neighbours. Each cell has a sequence of ten iterated prisoner's dilemma interactions with each of its eight neighbours, with the same scoring that I showed you before. So it's doing a sequence of ten. And then, once they've all done that, the cell gets taken over by whichever strategy of the nine around it has been most successful: whichever has the biggest payoff reproduces itself into the new cell.

OK, so let's see what happens. You might be able to guess what's going to happen to start with. So, who do you think's winning? Blue's winning. Now, that's very misleading. Obviously, what's happening is that the nasty defectors are exploiting the naive cooperators. Wherever you get a blue with a load of reds, the blue is just defecting, defecting, defecting, and the cooperator is just cooperating, cooperating, cooperating. So the defector gets five for every one of those interactions, and the cooperator gets nothing; the defector ends up with fifty over the ten interactions, right? So it's scoring big time, and it's not surprising that the blues are taking over most of it.
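For anyone who wants to see the mechanics of that kind of model spelled out, here is a minimal Python sketch, assuming single-round payoffs of 3, 0, 5 and 1 and ten rounds per pairing; the grid size, the initial mix of strategies, the random seed and all the names are illustrative guesses rather than the actual parameters of the demonstration.

import random

C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def all_c(my_history, their_history):
    # Naive cooperator: always cooperates.
    return C

def all_d(my_history, their_history):
    # Nasty defector: always defects.
    return D

def tit_for_tat(my_history, their_history):
    # Cooperates first, then copies the other player's previous move.
    return their_history[-1] if their_history else C

STRATEGIES = {"ALL_C": all_c, "ALL_D": all_d, "TFT": tit_for_tat}

def match(name_a, name_b, rounds=10):
    # Ten-round iterated prisoner's dilemma; returns the two total payoffs.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = STRATEGIES[name_a](hist_a, hist_b)
        move_b = STRATEGIES[name_b](hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def step(grid):
    # Score every cell against its eight neighbours (edges wrap around), then
    # let each cell adopt the strategy of the best scorer among itself and them.
    n = len(grid)
    scores = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for di, dj in NEIGHBOURS:
                my_score, _ = match(grid[i][j], grid[(i + di) % n][(j + dj) % n])
                scores[i][j] += my_score
    new_grid = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            best_di, best_dj = max([(0, 0)] + NEIGHBOURS,
                                   key=lambda d: scores[(i + d[0]) % n][(j + d[1]) % n])
            new_grid[i][j] = grid[(i + best_di) % n][(j + best_dj) % n]
    return new_grid

random.seed(1)
grid = [[random.choices(["ALL_C", "ALL_D", "TFT"], weights=[70, 10, 20])[0]
         for _ in range(20)] for _ in range(20)]
for _ in range(15):
    grid = step(grid)
print({s: sum(row.count(s) for row in grid) for s in STRATEGIES})
# Counts of each strategy after 15 generations.

Run with a mix like this, blue tends to spread at first by exploiting red, which is the stage the demonstration has reached at this point.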
And you might think this shows that the blues are doing really well. Actually, the blues aren't doing really well, because in the middle of any of those blue blocks you get a blue totally surrounded by other blues, right? And they are being defected against on every single move by their neighbours, so they're each scoring ten each time, not fifty. So you get a very misleading impression here. It's a bit like a country which devotes huge resources to its weapons and neglects its people: if you have a war, the country makes great advances against the others, and you think, oh, they're doing really well. No, they're not doing really well; they're just really good at attacking other people. And the reds in the middle of these red clumps are doing much better in terms of payoff than the blues in the middle of the blues. It's just that when they encounter each other, I'm afraid, the blues exploit them.

So let's let it continue. Now, the great thing about the tit for tats is what happens when they interact with the blues. Obviously, on the first round of the ten they cooperate and the blue defects, so they lose. But from then on they defect, defect, defect against the defector. Meanwhile, because some of their other neighbours are greens, as long as they have some other tit for tats with them, they're cooperating with each other all the time, so they're getting thirty every time from those pairings. So even a small group of tit for tats can fairly easily survive against the blues, and eventually you see the blues get completely wiped out. Good riddance.

So there is a nice lesson here, and Axelrod found that this was in fact a robust result: nice strategies do better than nasty ones. So what we have here, effectively, is cooperation evolving, not on the basis of relationships, not on the basis of kin selection, genes being favoured because they're all in the same family, but cooperation through reciprocation. So nowadays, whereas in the past, before Axelrod and so on, people found it rather a puzzle how altruistic behaviour could evolve, it's accepted that there are plenty of game-theoretic explanations.

OK, so let's move now towards free will. I'm just going to briefly introduce the topic here; we'll be discussing it much more next time. We think of people as morally responsible for what they do, in general. But actually, we don't think of them as morally responsible for absolutely everything they do, only what they do freely. And there's a big question about what counts as free.
And I've been talking a lot in this lecture about evolved patterns of behaviour, and you might very well wonder whether that actually impacts negatively on morality. I mean, if what I do, I do because I've been programmed that way, you know, by evolution, by my genes and so forth, is it really morally worthy, or subject to moral appraisal? So we're in the realm here of the ancient issue of free will and determinism.

OK, so determinism is basically the thesis that all events are determined by prior causes: for any event, given the causal laws governing the universe and the prior state of the world, it was inevitable. And there's David Hume saying that people normally accept determinism with regard to physical things; Hume goes on to argue that that's true of human action as well, so everything in the universe is determined. Laplace gave the famous encapsulation of what determinism would involve: you imagine an omniscient intellect that knows the position of every single particle in the universe and, you know, what direction it's moving in and at what speed; it would be able to calculate all future outcomes. So that's what we're talking about when we talk about determinism.

Now, I mentioned in the first couple of lectures why the early modern period, the period of Descartes, Locke, Hume and so on, was such a fruitful period for new philosophy. With the scientific revolution, with the overthrowing of the Aristotelian picture of the universe, certain problems came into relief which are still with us today. And in particular, the removal of purpose and the bringing in of mechanism as the way in which the universe broadly works seems to risk pushing out moral accountability. If we are physical things placed within a physical universe operating according to physical laws, which means that every movement of my body, including, of course, my mouth and my hands and so forth, is physically determined, is there really any room for morality?

OK, so just as a sort of introduction to next time: if you can come along armed with a clear grasp of these distinctions, that will be helpful. Some people think free will is compatible with determinism; some don't. Of the people who think that free will is incompatible with determinism, most are either libertarians or hard determinists, though you could be neither: you might think we don't have free will and yet determinism is false. But most people in this game are one or the other.
Libertarians think we do have free will; free will is incompatible with determinism; therefore determinism is false, and it must be false if we are to have free will. So those are the libertarians. On the other hand, you get hard determinists, who say determinism is true, free will is indeed incompatible with determinism, so we don't have free will. They might then be likely to think that morality is therefore bunk, that the whole idea of moral responsibility is nonsense, because we do not have moral freedom, because determinism is true. So libertarians and hard determinists might seem polar opposites, but actually, in one respect in this debate, they're quite close together, because they're both incompatibilists.

Compatibilism, which I think is definitely the majority view amongst contemporary philosophers, says that having free will and being causally determined are compatible: there's no reason why I can't be both fully determined by causal laws and, at the same time, free. Notice, by the way, that's my position: I'm a compatibilist. But I just want to make clear that I'm not a soft determinist. A lot of compatibilists are soft determinists, because they think determinism is true; but it's entirely possible to be a compatibilist and not believe that determinism is true, for example because of quantum mechanics and stuff like that. So my view would be that if determinism were true, I don't think that would be a particular problem for free will; but actually, I don't think determinism is true. OK, so it's important to distinguish between compatibilism and soft determinism, because these days, you know, since quantum mechanics, there are quite a lot of people who would be compatibilists without being soft determinists.

OK, so I'm going to briefly mention a big argument, the consequence argument, Peter van Inwagen's famous argument, which basically tries to push the incompatibilist line, and it's intuitively a very powerful argument. If determinism is true, then all human actions are causally determined consequences of the laws of nature and prior conditions; that's what determinism means. Hence, I cannot do otherwise than I actually do, except either by falsifying the laws of nature or changing the past. But I obviously can't do either of those things. If I cannot do otherwise than I actually do, then I don't have free will; the essence of free will is to be able to do other than you actually do. So if determinism is true, I don't have free will. That seems a pretty powerful argument; it's probably the chief weapon in the incompatibilist armoury.
Against that, here is a compatibilist response, which I'm just going to sketch. Free will matters to us because it involves being able to choose what we want and being able to achieve our ends accordingly, without being prevented from doing so. So as long as what I did was determined by my own thought processes, guided by my own desires and purposes, then what I did was indeed free. On this conception, where those purposes and thinking capabilities came from is irrelevant. The fact that I evolved to prefer certain things? Who cares? That doesn't matter. You know, I evolved to prefer ice cream over cake, and so I will predictably choose ice cream over cake whenever offered it. Absolutely predictable. Does that impugn my freedom? No, no. I'm perfectly happy choosing the ice cream, because I prefer it. So being free is simply a matter of being able to do what I want to do, and in the way I decide to do it, and that is not affected at all by determinism.

OK, so then we've got the two positions, and I'll end with David Hume's definition of liberty, which you'll see is exactly in the spirit of what I just said. By liberty, then, we can only mean a power of acting or not acting according to the determinations of the will; that is, if we choose to remain at rest, we may; if we choose to move, we also may. Now this hypothetical liberty is universally allowed to belong to everyone who is not a prisoner and in chains. OK, so most of us are not totally tied up in chains; we are able to act according to our desires in the situation in which we find ourselves; therefore we have liberty, so defined. Peter van Inwagen will say that's no real liberty at all: if everything you do is actually forced on you by the laws of nature and prior conditions, then those are your chains.

And on that point I shall end, and we'll pursue this next week. Thank you.