So thanks to everybody for showing up today. I must say I'm a little intimidated, because I'm not used to speaking in the same series as Roger Penrose and Stephen Hawking. But I'll do my best. I must say, John, you did a great job with that introduction. It's a little embarrassing, in a way; I sometimes wonder if I have a sort of Don Quixote tendency. But, you know, now I'm trying to do something useful with my life, so I'm going to tell you about that part.

When you think about the world and what we do as scientists in particular, in any kind of science, I would argue we're contributing to the collective awareness: the models that we as a culture and society use to understand our world, to think about cause and effect relationships, to understand what our options are and where they will lead us. And you might think about three levels of awareness that we have.

One starts with understanding the external environment. This goes back millennia, really, to people like Ptolemy and Archimedes. But once Newton solidified the laws of gravitation and mechanics, we gained the ability, for example, to predict the motions of planets quite well. And this becomes a practical thing if you think about a problem like avoiding asteroid collisions. We know asteroids have hit the Earth many times, and they've done enormous damage, comparable to what we might expect from a nuclear war. So it's a rather serious challenge, actually. I thought this was something we had nailed, until I looked into it; I was going to use it as an example of something we really had solved. Then I started Googling around and I found NASA's Asteroid Grand Challenge, and they were quite proud of the fact that we've identified 95% of the asteroids that are greater than one kilometre in radius. A one kilometre radius asteroid would really clobber the Earth. So there are still 5% of them that we don't know are out there. And if you get down to 300 metre asteroids, we've only identified 10%. Those could be pretty nasty, too. So even with something like that, we're still actually working on it. Interestingly, they're using crowdsourcing and relying on volunteers and so on, because the issue is that if we see the asteroid, say, ten years before it hits the Earth, we can predict things pretty well; but we have to actually see it, and we're not looking actively.

Another good example would be weather prediction. There, I think, we have a success story.
You know, we've actually been predicting the weather for more than a century, initially through subjective methods. Around the turn of the 19th to 20th century, people started to be able to make weather maps, and they began just looking at these weather maps and seeing patterns subjectively in what was happening. Professional weather forecasters emerged who would have a catalogue of past weather maps; they would look at the weather map for the day (remember, there were no computers back then) and go, oh, this reminds me of 1925. They'd flip through their book and find an example of a weather map that matched today's weather pattern, go, well, the wind's 20 degrees off and it's five degrees different here or there, and then they would make a weather forecast that would be published in the newspaper, and people would go on picnics and frequently get rained on.

Now, starting in the fifties, in 1951, John von Neumann was looking for peacetime applications of the digital computer, and he teamed up with the best meteorologists of the day; in particular, a guy named Jule Charney led the charge. They worked hard mathematically and created a simple, one-dimensional partial differential equation, which they simulated on the computer with the help of von Neumann's wife, who actually did the programming to put it on the machine. They made forecasts for a couple of weather patterns 12 hours ahead, and each forecast took them 24 hours to compute, so they were well below real time at that point. But those forecasts came out not too bad, and so they declared victory. Billions of dollars got pumped into improving physical weather forecasting.

And I was actually around at the tipping point, because in 1980 I was an intern at the National Centre for Atmospheric Research, working with Edward Lorenz of chaos fame. I asked him, Ed, how are you guys doing relative to these subjective weather forecasters? He's a very mild mannered guy, and he said, well, we're not doing too bad, and we're going to start beating them real soon. And he was right: 1980 was a tipping point. But note, it took 30 years just to break even with these guys and their libraries of weather maps. Since 1980, weather forecasts have become vastly better, and your picnic is much less likely to get rained on if the weather forecasting service says it won't rain. So weather prediction we're pretty good at.

Now, we're also starting to think about our effect on the environment, and climate change is a great example of that.
When you think about it, climate change modellers get a lot of flak, but I think they're doing a pretty good job. They're telling us something rather subtle that's going to happen 50 years from now that will have very profound consequences for the Earth, and they're even giving us pretty quantitative information about what we have to do to keep it from happening. Note that climate change modelling grew out of weather prediction, so they were able to build on a very solid foundation. So we are understanding more and more about our effect on the environment.

But I think the really hard challenge is to understand our collective effect on ourselves, as our world becomes increasingly artificial. There are various estimates, but many people estimate that about 40% of the energy that's used to fix carbon on the Earth ultimately goes to support humans. So we're placing a huge load on the planet. And if you just think about your day, you mostly live in a pretty artificial environment. So somehow we need to start understanding our collective effect on ourselves better, which brings us back to social science and the challenge of understanding things in a quantitative way.

In some areas, like traffic forecasting, for example, we've gotten pretty good. These days, if I'm driving back from somewhere at rush hour, I'll pull out my iPhone and check what the traffic's doing, and it can reroute me if traffic's messed up. It does a pretty good job of predicting things like that. But what about bigger things, like business cycles, for example? Well, here is an example of a plot of a business cycle. What you can see are various measures (I'll come back to some of them in a minute), things like GDP and unemployment, bouncing up and down by large amounts as a function of time. And you can see that we have some big events, like this one, the crisis of 2008, which the Dallas Fed estimated cost the US between $6 and $15 trillion. That's quite a lot of money; that's like half a year of US GDP. And for the world, the answer is certainly a lot more than that, at least many months of GDP, probably half a year. It suggests that if the crisis hadn't happened, we could all have gone on vacation for six months and been about even with where we are. Somehow we didn't get to do that. So this is an expensive thing; it costs us as a society a huge amount to have something like that happen. And it's not just the rich fat cats that suffer; it hits the poor most of all. You know, at this point, unemployment for Greeks under 25 is at 80%.
It's pretty horrendous, and it's having a huge toll on Greek society. I have some very close Greek friends, and they say it's depressing to be in Athens these days because people are just so dispirited about what's going on. And here are some examples of what crises do to employment, from some arbitrary starting point. You can see it's not that uncommon for employment to swing down to as much as 20% lower than where it was before the crisis started. So these things hurt. And I would say there's even a pretty good argument that the Brexit vote might have gone the other way if we hadn't had the crisis of 2008, and that Donald Trump would not have been nominated for president.

Okay. Now, 2008 wasn't the only crisis; I already showed you several on this plot. Financial panics are endemic. This really hit me; it's a long story, so I'll give you the short version. I happened to be in the Treasury building. I was lobbying for something involving the Office of Financial Research, which I played a small role in helping to start: a remarkable grassroots movement of various scientists and Wall Street practitioners who were shocked by the fact that, when Lehman Brothers was about to fail in 2008, we didn't have a clue who their counterparties were or what effect it would have on the financial system if we didn't prop them up. The text for the legislation got developed by a committee in which I played a small part. It's about 20 pages; it actually went verbatim into the Dodd-Frank bill and got enacted. And it authorises the office to collect all the data together and allow us to really think systematically about markets. So I was in the Treasury building, actually lobbying the Deputy Secretary of the Treasury to appoint somebody with a strong science background to the post. I was not successful, but that's another story.

As I was leaving, I started to walk out of the building and I noticed these portraits of the past Secretaries of the Treasury. So I started looking at them and reading the little inscriptions below. Going backwards in time it was boring, boring, boring. But then I got to the ones who were actually dead when the inscriptions had been written, and the inscriptions got much more truthful and much more interesting. And I noticed that about every other one would say either that he did a great job of dealing with the financial panic of year X, or that he did a terrible job of dealing with the financial panic of year X.
After I got out of there, I actually looked it up and made a list of these financial panics. As you can see, there are a lot of them; they were routine events back in the old days. And I can't resist throwing some of these pictures in just for the graphics; cartoonists have been good for a while. A particularly important panic was the panic of 1907, which was triggered by events in copper markets. It looked like the whole economy was on the verge of collapse, and J.P. Morgan rounded up all the big players on Wall Street, put them in his office, and basically gave them orders about what to do to prop up the financial markets. This ended up being hugely controversial, because people were understandably suspicious of a guy like J.P. Morgan: did he profit from it? He actually felt quite frustrated later, because he felt he'd done the world a good turn. But this finally led the US to start a central bank, the Fed, in 1913.

And by the way, if you go back and look at my list, you'll notice that panics came along every eight years on average, a little less actually, up until the Fed was formed. Since the Fed was formed, well, we had a doozy, obviously, in 1929, and another big one up here, but they've been bigger and less frequent. So it seems that it has made a difference: bigger but less frequent. They're still happening, though. We still don't really understand when they're coming on us, and it's still debated how we should deal with them once we're in one; witness the big debates about austerity.

Now, given this, you might go back and do the numbers. If we're talking, say, $50 trillion that a crisis can cost us, how much research should we be doing to deal with that? Well, just to take some numbers: the National Science Foundation budget for economics and social science in the United States is $30 million a year, not much. The US Federal Reserve, with its twelve branches, spends a total of about $500 million, and some fraction of that goes to research. And I made a ballpark estimate of how much money gets paid out to academic economists throughout the world; maybe it's a billion dollars. So that's not much when you think that the mistakes being made can cost us $50 trillion. $50 trillion is five times ten to the thirteen, versus ten to the nine, so we're talking about a factor on the order of 10,000 or 50,000. So even if the odds of making things more successful are low, it looks like investing more money in economics should be a really good investment.
I'll note, just as a reference point, that a billion dollars is about what the US budget is for polar research. Now, I'm a great advocate of polar research; I don't want them to cut that one little bit. But it does seem like a dichotomy to me.

Now, you ought to ask, of course: would this extra funding be effective? And there are several arguments you might make to say that it's not. Maybe the economy is inherently unpredictable; for example, maybe there are fundamental limits, like market efficiency, that say it can't be done. Or maybe social science is a quixotic quest in and of itself; maybe people can't be described by laws, as people like George Soros, who ultimately pays my salary, have suggested. Though he's still paying my salary, so he must not think it's a complete lost cause. And maybe there's another effect, which is that maybe the culture of economics doesn't scale with more funding. I'll note that they are unlike, say, meteorology, where you can bet that every budget cycle they're there in Congress saying: we need more money, we need supercomputers, we need better satellites, we need better measuring instruments, we need more money because we're always pushing the limits of what big science can do. Economics is not doing that. Now, I just want to note that these arguments are only valid if they're really strong, because remember, there's that factor of 10,000 or 50,000 there.

So to think about this question, let's step back and think about the standard template for an economic model. How do economic models work? I'm going to teach you economics in about three minutes. A challenging task, but it's made a little easier by the fact that economic models all follow more or less this template. What do economists do? They assume that agents have preferences and beliefs. The preferences might come in the form of a utility function; the beliefs might come in the form of something called rational expectations. The agents optimise that utility function perfectly. Then they just assume there's a fixed point equilibrium where agents maximise their preferences according to their beliefs. From that they compute what everybody's actions are, and from there you can compute prices and quantities and all the other things that come out of an economic model. This programme really hit its stride back in the fifties and maybe reached its peak about 1980. Since then, the programme in economics has been to modify these assumptions one at a time.
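Schematically, the template just described looks something like this (a generic sketch for illustration, not any particular published model):

\[
x_i^{*} \;=\; \arg\max_{x_i}\; \mathbb{E}_i\!\left[\, u_i(x_i,\, x_{-i}) \,\right], \qquad i = 1, \dots, n,
\]

where \(u_i\) is agent \(i\)'s utility (the preferences), the expectation \(\mathbb{E}_i\) encodes the agent's beliefs (under rational expectations these coincide with the distribution of outcomes the model itself generates), and an equilibrium is a fixed point: an action profile \((x_1^{*}, \dots, x_n^{*})\) and prices such that each agent's action is optimal given everyone else's and supply equals demand. Prices and quantities then follow from that fixed point.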
So, for example, in the old days they would assume utility and perfect rationality, that everybody knows everything. Then they say, okay, maybe not everybody knows everything: maybe there's asymmetric information, maybe there are institutional constraints, so there are some things people can't do; maybe only some of the agents are rational and the others aren't. So they play with these assumptions one at a time. But the thing that is invariant (and when I say 'they' I'm referring to the mainstream, because there are some rebels out there like me who are challenging this) is the fixed point assumption. So examining that is the first thing I'm going to do in this lecture.

Before I go forward, I just want to comment that there are many kinds of equilibrium. I was educated as a physicist, so you immediately learn about forces balancing; that's a trivial kind of equilibrium. Then you learn about thermodynamic equilibrium, which is a bit more subtle: energy flows are balanced in a detailed way. But in physics we know what equilibrium is. We develop a framework outside of equilibrium, and then we can test whether equilibrium is an appropriate assumption or not. When it is appropriate, it's great, because it really simplifies everything and you can do the calculations really well.

I've already referred to dynamic equilibrium, that is, the notion of a fixed point, where things don't change, which underlies almost all mathematical economic theory; and strategic equilibrium, which is specifically what economists are talking about, where all the agents fully take each other's reasoning into account. And in fact, in economic models (well, it's a good thing and a bad thing) things are greatly simplified by assuming equilibrium off the bat, but you're left wondering whether the equilibrium assumption is valid.

The pinnacle of economic theory that this has led us to are what are called dynamic stochastic general equilibrium models, which Andy Haldane has dubbed the rocking horse economy. The reason I say that is, if you think about the analogy to physical equilibrium, imagine that you have a rocking horse and you bang that rocking horse with a hammer, so you knock it away from its equilibrium. Now it's rocking, and it starts to damp down again. Then you hit it with a hammer again and it rocks again. So that's a little bit like the assumption.
The assumption is that if it weren't for shocks, the economy would be in equilibrium, but Zeus throws shocks down from on high. The shocks could be technological shocks, because some technology has gotten better. They could be behavioural shocks, because people have changed their attitudes towards things; they've become more impatient for some reason, so they're discounting differently than they used to. The shocks knock the world out of equilibrium, the world moves back towards equilibrium, and the shocks hit again. And these models spend a lot of effort on diagnosing what these shocks are and trying to make some predictions.

Now, to see the kind of thing they're talking about, here's another picture of the business cycle. In this case we're looking at the logarithm of gross national product over a little more than half a century, and we're looking at the deviations around the mean trend, which is upward. So there's already an art in how you do that. Under this view, all these fluctuations are driven by these shocks hitting the world, which is always trying to go back to that mean trend.

Now, I want to comment (and I'll come back later to say more about limits to prediction) that when you make a model like this, you've inherently said that the noise is doing a lot of the work. You're saying that the main explanation for this whole movement, in something like a real business cycle model (I find that a bit of a strange name), is that these shocks are hitting, and the shocks are something that economists accept a priori cannot be predicted. It's always worth remembering, by the way, that whenever somebody talks about noise, noise is typically not fundamental. The only fundamental source of noise that we know of is measurement in quantum mechanics; everything else just amounts to our ignorance. Not that noise models aren't useful, but they're inherently self-limiting.
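To make that picture concrete, here is a minimal sketch, in code, of the rocking horse idea: log output as an upward trend plus damped oscillatory deviations that are kept alive only by exogenous shocks. This is my own illustrative toy, not an actual DSGE or real business cycle model, and all the parameter values are arbitrary assumptions chosen just to show the mechanism.

import numpy as np

# Toy "rocking horse" economy (illustrative only, not a calibrated model):
# log GNP = linear trend + deviation, where the deviation is a damped,
# oscillatory second-order process driven entirely by exogenous shocks.
rng = np.random.default_rng(0)

T = 240                  # quarters, roughly 60 years
trend_growth = 0.005     # assumed trend growth per quarter
rho1, rho2 = 1.4, -0.5   # assumed AR(2) coefficients giving a damped oscillation
shock_std = 0.01         # assumed size of the exogenous shocks

deviation = np.zeros(T)
for t in range(2, T):
    shock = rng.normal(0.0, shock_std)   # "Zeus throws a shock"
    deviation[t] = rho1 * deviation[t - 1] + rho2 * deviation[t - 2] + shock

log_gnp = trend_growth * np.arange(T) + deviation
print(np.round(log_gnp[:8], 4))

# With no shocks the deviation damps to zero (the rocking horse comes to rest);
# with shocks it fluctuates forever around the trend, which is the sense in which
# the noise is doing all the work in this kind of model.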
So now, why would you expect that the economy might not be in equilibrium? Well, first of all, beliefs and outcomes may not be consistent, and when you dig into equilibrium theories, you see that consistency between beliefs and outcomes is a deeply important part of what is going on in equilibrium. And, as I've already suggested, the strategy dynamics may not settle into a fixed point. So to illustrate this, I'm going to begin by thinking about the strategic context, where people are reasoning about each other, and I'm going to use something called a Keynes beauty contest.

Now, in Keynes's day, newspapers would publish things like this; here is an example of a page from a newspaper of the time. They would publish pictures of women. I admit that's a very sexist thing to do; I don't think they ever did it with men. They would then ask people to rate how attractive they thought these women were, and people would rate them. And if you voted for the choice that was most popular, your name could be drawn out of a hat and you might win a vacation or something like that. But Keynes pointed out that there's no inherent measure of which women are the most beautiful; the criterion is really to guess what other people will think is most beautiful. And he pointed out that the stock market is very much that kind of thing: what makes prices go up isn't necessarily which company is the best company, it's which company people think is the best company. So that was the context in which he proposed this. But I'll now give you an example of a different kind, in that spirit.

You're asked to name a number between zero and 100, and the winner is the one whose guess is closest to two thirds of the average guess. I want you to think about this for a few seconds: what would your guess be? Now, I've done this in the past by polling people, and I've discovered it's a bit of a dangerous thing to do, because it can be a little embarrassing once I explain what's happening. So I don't want to embarrass anybody. Also, I can't make my snotty remarks, because I really wouldn't want to offend anybody. So just think about it and formulate your guess. I want everybody to pick a number; you know, how are you going to do it?

All right. Now let me show you what about 30,000 Danes did. This was published in a newspaper, so they had more chance to think about it than you did. And you can see the guesses: there are people who actually guessed 100. Now, let's go back and think about that. You wonder a little bit; I mean, this kind of might explain why people are actually going to vote for Donald Trump.

Okay. Let's suppose all of you just guess 100. If everybody goes to 100, then what would the winning guess be? 66, right: two thirds of 100. So obviously anything greater than two thirds of 100 is a loser. So all those guys up there, and there are quite a few of them, either didn't understand the game or really weren't thinking. Now let's suppose you go, okay, I'll guess 66. But then you go, well, 66 is pretty high, because other people aren't total dummies either.
Why would they all guess 100? Well, okay, so maybe they would guess 50 on average. Now let's take two thirds of that, so that would be about 33. And you see there's actually a peak there. But then you might go, well, suppose people stop there; then I'm going to reason one more time and take two thirds of that again. That's the typical guess, and in fact that's where the winner is, around just above 20.

But note: if you were an economist and you were trained to think about rational expectations, you would have iterated this process, and you would have guessed the Nash equilibrium, which is zero, because that's where you end up if you assume that everybody is really smart. If we were all smart, and we all knew that everybody else was smart, we would just go to zero, and it would be a fixed point equilibrium. Everything would be peachy; it would be easy. So my snotty remark is: okay, these people are obviously idiots, and these guys down at zero are idiot savants. All right, that was my snotty remark.
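Just to make the reasoning concrete, here is a minimal sketch of that iterated level-k logic, and of what repeated play tends to do. It is an illustration of the argument above; the distribution of initial guesses, the number of rounds, and the adjustment rule are made-up assumptions, not the Danish data.

import numpy as np

# Level-k reasoning in the two-thirds-of-the-average game (illustrative sketch).
# A level-0 player guesses naively; a level-k player best-responds to a
# population of level-(k-1) players by guessing two thirds of their average.
level0_average = 50.0            # assumed average guess of non-strategic players
guess = level0_average
for k in range(1, 6):
    guess = (2.0 / 3.0) * guess
    print(f"level {k} guess: {guess:.1f}")
# Level 1 gives about 33, level 2 about 22 (roughly the winning guess in practice),
# and iterating forever drives the guess to the Nash equilibrium of zero.

# Repeated play: if everyone keeps adjusting towards two thirds of last round's
# average, the average contracts round after round and converges towards zero.
rng = np.random.default_rng(1)
guesses = rng.uniform(0, 100, size=1000)     # assumed initial population of guesses
for round_number in range(50):
    target = (2.0 / 3.0) * guesses.mean()
    guesses = 0.8 * guesses + 0.2 * target   # assumed partial adjustment each round
print(f"average after 50 rounds: {guesses.mean():.1f}")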
So when is this assumption justified? I'm going to show you an example of some work we did to test this in the context of game theory. In a game, players can make one of several possible moves, so it's like the games you're familiar with. At each turn, players receive payoffs according to the set of moves of all the players. In other words, if on a given turn Martin makes a move, I make a move, and somebody else makes a move (say they make move number one, I make move number two, and Martin makes move number three), we then look up in the payoff matrix what our payments are, and each of us gets some payoff that may be quite specific to each of us. That payoff matrix defines the game. Then we're going to play the game repeatedly, again and again, so we have a chance to learn from it. And by the way, I think if we had played that Keynes beauty contest game repeatedly, we would have seen a progression towards zero. And we're going to make the players learn their strategies; well, excuse me, I have this on my next slide.

There is a standard theorem by Nash which says that there's always a fixed point in a game like that; there's always an equilibrium, like the zero equilibrium in the beauty contest, as long as players can choose moves so as to introduce an element of randomness to keep the other players guessing. As long as they can do that, there always exists a fixed point. But Nash didn't prove that it's stable, so we don't know that that's behaviourally what people will actually do. And the experiment I just showed you suggests that it's not what people typically do. But what do they do? Okay, let me give you a little bit of intuition.

The first thought I have on this is that simple games should be easier to learn than difficult games. I'm reminded of when I was about nine or ten: on the playground we started playing this game called tic-tac-toe, which I think in Britain you call noughts and crosses. You make a little three by three grid, you put in X's and O's, and you try to get a diagonal, horizontal or vertical line of connected marks; whoever does that first wins. At some point I figured out that playing O I could always block the other guy, so I could always at least get a draw. And so for about a day, maybe, I felt like a real hero, because I was just cleaning up. Then all my friends figured it out, the game became completely boring, and we quit playing it. So that's a good example: tic-tac-toe is a simple game, and children learn the Nash equilibrium.

Now, what about more complicated games, say chess? So now I'm going to tell you a little joke. How do two economists play chess? Well, there's a random element to chess: you begin by choosing white or black. So they choose white or black, and then they argue about whether white or black wins under the Nash equilibrium of the game. Now, why is the joke a little silly? Because we don't know what the Nash equilibrium of the game is; it is a deterministic game once you've chosen white or black. Now I have to tell the corollary joke, because I'm a physicist, and it's rude to tell jokes about economists without telling a joke on your own tribe. So how do two physicists play chess? Well, they get quite drunk, they start moving the pieces around randomly, and they argue about whether the game is more or less interesting under scaling of the size of the board and the number of pieces. So you'll see from what I'm about to present that I'm a physicist.

Now, what we did is we said we wanted to understand generic behaviour. And I do want to make sure this comment is registered: we're not the first to think about this issue. Lots of people have worried about it from the beginning; in fact, von Neumann himself was quite critical of game theory for its emphasis on Nash equilibrium. And in particular at Oxford
we have a couple of people who have been pioneers in this, Peyton Young and Vince Crawford. So other people have thought about this, but we took a different approach: we really wanted to understand what's generically true.

So we constructed games at random. Remember, there's this payoff matrix that determines how much you get paid for the collection of moves that get made. We put random numbers into that payoff matrix, and once we've put those random numbers in, we keep the payoff matrix fixed. Then we give the players a learning algorithm and we have them play.

Our goal is to characterise the behaviour a priori as much as we can. In other words, we want to know: are there some rules of thumb that might tell you in advance whether a game is likely to converge to an equilibrium or not? An analogy can be made to the Reynolds number in fluid turbulence, which is a non-dimensional number, the ratio of inertial to viscous forces. If that number is around one, then I pretty much know the fluid is laminar, that is, the dynamics sits at a fixed point. On the other hand, if that number is 10,000, then I'm virtually certain that the fluid flow is going to be highly turbulent and chaotic. So we're looking for something a little like that.

Now, we didn't just populate these payoff matrices completely at random. We chose the payoffs so that they were normally distributed, and we constrained the payoffs; and we began by studying two players, and then we'll go to more players later. When we have two players, we constrain them with a parameter gamma that controls the correlation between the payoff matrices of the two players. If gamma is one, the payoff matrices of the players are identical: if I'm playing Martin, his payoff matrix and mine are exactly the same, so if I win, he wins. If gamma is zero, then if I win he might win or he might lose; the payoffs are completely uncorrelated. And if gamma is minus one, we have a zero-sum game: if I win, he loses. So that's an important parameter, we discovered.

We also chose a learning algorithm in which players learn their strategies using an algorithm called experience weighted attraction. Under this algorithm, players choose their strategies probabilistically. So if I were playing rock, paper, scissors, I would have a certain probability of playing rock, a certain probability of playing paper, and a certain probability of playing scissors on any given turn.
Those probabilities can change dynamically in time, but on every turn I'm going to roll some dice that are weighted according to those probabilities to determine what I select. We chose this algorithm because some economists had done experiments and shown that it's a pretty good model for how people actually behave in games in a laboratory. It was also analytically tractable for us to study.

Now, this learning algorithm has a parameter alpha that determines memory, because the probability of making a given move depends on how well that move has been working in the past. If alpha is zero, the memory is infinitely long: I just accumulate more and more data, and something that happened years ago counts the same as something that happened last turn. If alpha is greater than zero, then I pay more attention to things that happened last turn than to things that happened further in the past.

Now, what did we find? I should make one more comment that's important: being physicists, we chose the limit where there are lots of possible moves; we looked at really complicated games. We did that because it allowed us to go to the thermodynamic limit, where we can actually compute things using cool analytic techniques that my collaborator Tobias Galla is an expert in.

So what did we find? We found that what happens really depends on the parameters. This parameter alpha that determines the memory of the strategy is important, and the parameter gamma is also important. There's a big region here where you just converge to a unique fixed point that's close to a Nash equilibrium; that's very much like what the economists are typically assuming, and this is the zone where traditional economics is okay. But then we found this chaotic zone over here, which I'll tell you more about in a minute. This happens, by the way, when gamma is less than zero, so the games are competitive in the sense that if I win, my opponent tends to lose. So we saw these chaotic cases where there was no convergence to a fixed point. When gamma is greater than zero, we saw multiple fixed points, and in the large state spaces we were looking at, where each player had 50 moves, there could be hundreds of these fixed points; which one you converge to depends on exactly where you start. So there are multiple basins of attraction in the dynamical system that corresponds to the learning dynamics.
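As an aside, here is a minimal sketch of the kind of setup just described: correlated random payoff matrices plus a simple reinforcement-style learning rule. It is an illustrative toy in the spirit of experience weighted attraction, not the actual model or code from our paper, and all the parameter values are assumptions.

import numpy as np

# Sketch: a random two-player game with payoff correlation gamma, played by two
# adaptive agents using an exponentially weighted reinforcement rule (a toy in
# the spirit of experience weighted attraction, not the published algorithm).
rng = np.random.default_rng(2)

n_moves = 50      # lots of possible moves, as in the "complicated games" limit
gamma = -0.7      # payoff correlation: -1 zero-sum, 0 uncorrelated, +1 identical
alpha = 0.05      # memory loss: 0 means infinite memory, larger means more recency
beta = 0.5        # intensity of choice in the softmax response

# Draw correlated normal payoff matrices A (player 1) and B (player 2), so that
# A[i, j] and B[j, i] (the payoffs for the same pair of moves) have correlation gamma.
z1, z2 = rng.normal(size=(2, n_moves, n_moves))
A = z1
B = gamma * z1.T + np.sqrt(1 - gamma ** 2) * z2.T

Q1, Q2 = np.zeros(n_moves), np.zeros(n_moves)   # attractions for each move
for t in range(5000):
    p1 = np.exp(beta * Q1); p1 /= p1.sum()      # softmax mixed strategies
    p2 = np.exp(beta * Q2); p2 /= p2.sum()
    m1 = rng.choice(n_moves, p=p1)              # moves sampled this turn
    m2 = rng.choice(n_moves, p=p2)
    # Reinforce the chosen moves by their realised payoffs, forgetting at rate alpha.
    Q1 = (1 - alpha) * Q1; Q1[m1] += A[m1, m2]
    Q2 = (1 - alpha) * Q2; Q2[m2] += B[m2, m1]

# Depending on alpha and gamma the strategies can settle down or keep wandering.
print("player 1 strategy concentration:", round(float(p1.max()), 3))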
Now, what's going on in these chaotic cases? We saw lots of things. For example, here I'm going to pick three out of the 50 possible moves, or pure strategies, to use another term, and plot the probability of making each move as a function of time. What you can see, in this case corresponding to these parameters, is that there's just an oscillation: the players are going around and around in a loop, and history is repeating itself. And in the state space, where you take a couple of the moves and project things onto those axes, you see an attractor whose dimension is essentially one, and they just go around on this loop.

In other cases you see things like this, where what we're able to tell, using various techniques, is that there's a low-dimensional strange attractor. And in other cases the dimension of that attractor gets bigger and bigger; it's still a strange attractor, but the dimension gets bigger and the behaviour gets more and more random. Here it starts to just look like a ball of yarn; it's, to all intents and purposes, a random process. Though there are some interesting things going on here: if you look at one of the strategies, it is sometimes played with a probability approaching one, and at other points (notice this is a logarithmic scale) the probability of that strategy being used can be ten to the minus twenty, virtually never used, and then it can come back and become very popular again. So there's an automatic generation of fads, of different strategies coming in and out of use in this world, which has a certain realism.

Now you can ask what happens when you go to more players; what changes if instead of two-player games you have multiplayer games? Let me first make the remark that we were able to analytically calculate this blue line to a very good approximation. We were quite proud of that; it takes 20 pages of hard maths to do, but it matched pretty well with what we found in the simulations. Now, what about when we have more than two players? Well, by the time you get up to, say, ten players, it becomes really hard to simulate this for more than two moves, so we can't even go into that many-move zone; here we really had to rely on our theory. We tested the theory as best we could, we pushed it out there, and we showed that the area of the chaotic zone increases as the square root of the number of players. So, in fact, what happens as you add more players is that this boundary just translates itself over like this.
And chaos takes more and more of the parameter space, as does this complicated region. In the limit where you have lots of players, you never see the classic Nash fixed point equilibrium. Now, if you're an economist, this should make you a little nervous, because it suggests that when there are a lot of players in a complicated game, this may be the generic case. I would argue that you have to justify using a model that just has a fixed point, because what it means, if you assume a fixed point in something complicated (I would say the stock market is a good example), is that you're approximating this big ball of yarn, with its complicated dynamics of the strategies, by a point sitting in the middle of it. Now, it's a point with some noise, and as I said, for many practical purposes this is a random process. But I think there can be key differences. As I actually showed in one of my earlier papers, when there's chaos you can sometimes take advantage of it to make a prediction about the endogenous dynamics. And I think it can be insightful as to the type of model you should use. And by the way, as a side effect, we saw here, where you see the change in the total payoff as a function of time, that there are periods where things are pretty quiet and periods where things are quite noisy. This is very reminiscent of what you see if you look at returns in the stock market, something called clustered volatility. That was a surprise, not something we expected to see.

Now I have to add some caveats, and I'm worried about running out of time here. What happens with other learning algorithms? Well, we've explored some, like versions of the kind of level-k learning that you saw people using in that beauty contest example. You can ask whether the ensemble of games we're choosing is representative. You can ask about games with just a few actions, like two-by-two games. Marco Pangallo, my graduate student here in maths, has been looking at that; he's actually made a classification of what happens in all two-by-two games and shown that, while chaos is much less likely, there is still a zone where chaos happens even in two-by-two games.

Now, how do you cope with this kind of nonlinear, non-equilibrium behaviour? If you throw out those fixed point models, what do you do instead? I would argue that you have to go to simulation, because if you want to make any kind of realistic model, you end up with nonlinear equations.
And nonlinear equations are typically not solvable by any means other than simulation. Just as a note, computation has revolutionised the physical and natural sciences; I think it accounts for most of the progress we've made over the last 50 or 60 years. And this concept, the importance of nonlinear dynamics, or nonlinear equations in general, was really driven home to me when I was a graduate student. Stan Ulam came to the physics department at UC Santa Cruz, where I was studying, and gave a lecture in which he began by apologising for using the term nonlinear. He said, I'm apologising because calling an equation nonlinear is like calling an animal a non-elephant. Because most equations are nonlinear, and we've been blinded to that fact because, up until recently, all we could solve were linear equations. So we were like the mathematician sitting under the lamppost looking for our keys, because that was the only place there was any light, even though we knew we'd dropped the keys somewhere else.

Now, Ulam was actually one of the first people to use computers to look at nonlinear systems. One of the very first examples of simulating something on a computer, back in the fifties, was the Fermi-Pasta-Ulam experiment, in which they put masses coupled by springs on the computer, and they expected just to see ergodic behaviour. They were just trying to confirm something that was standard in statistical mechanics: they thought the energy would be equally partitioned among the modes of oscillation. Instead they saw solitary waves propagating back and forth, the opposite of what they predicted.

Now, social science, I would argue, has not benefited the way the physical and natural sciences have. I think there are two reasons for that. One is that the elementary processes are not well understood, so it can be harder to put things on the computer. And secondly, there are different cultural attitudes; that has to do, I think, with symmetry breaking in the history of economics, but maybe I'll come back and say a little bit about that at the end.

Now, I want to give you another example from my own work of how you can deal with out-of-equilibrium behaviour. It relates to something called leverage cycles. Leverage cycles, as we discovered in 2008, are one of the most important examples of the dynamics of systemic risk. They were foreseen before 2008 by people like Hyman Minsky, who wrote about them in words; he didn't have a mathematical model. The basic story is that during calm times, people get complacent.
They compete for profits, and leverage creeps up. I'll explain what leverage is in a minute, but people take more risks. Then, as leverage gets higher and higher, eventually some negative shock hits and causes a crash. John Geanakoplos, who I've collaborated with on many papers, also had a theory of leverage cycles; he actually coined the term. His theory involves heterogeneous investors: optimists use more leverage, bad news comes along that wipes out the optimists, and because the optimists were baked into prices before then, there's a big downward spike when they get removed from the picture.

Now I'm going to show an even simpler model than either of these two, developed with Christoph Aymanns, who was my graduate student here and is now a postdoc at the LSE. It involves leverage. What is that? Leverage means buying assets with borrowed money, and the leverage is a number: the ratio of the value of those risky assets to how much money you actually put down. So if you buy a house and you make a down payment equal to 20% of the price of the house, that implies a leverage of 5 to 1, because at the moment you purchase it the ratio of the value of the house to your down payment is 5 to 1. Of course, after that the value of the house may change, and your leverage may go up and down; it turns out that's very important.

Our model is targeted, rather ambitiously, at explaining the great moderation and the subsequent crisis. What happened historically is that stock market prices went up over a period of a decade or so, stock market volatility went down, and the leverage on the assets of US broker-dealers, the speculative traders, went up and actually skyrocketed in 2008, then crashed down, sending volatility really high; the stock market came down with it. So can we explain this kind of behaviour with a simple model?

I have to say that before we arrived at this model, we began by making a rather complicated one. We had a bunch of banks and a bunch of assets, the banks were managing things, and initially we didn't have any risk control. Then we said: what happens when we throw in Basel II, the regulation that banks are supposed to follow? And we saw the whole economy going up and down in this big oscillation, and we said, holy [INAUDIBLE], what's going on? So we began simplifying the model, and we simplified it down to the model I'm going to show you here.
465 00:46:37,460 --> 00:46:44,300 So when we strip things down to the essence, all we needed, it turned out, were two agents: a bank, 466 00:46:44,300 --> 00:46:49,400 which you can think of as any agent that uses leverage, and a fundamentalist, who is 467 00:46:49,400 --> 00:46:54,710 an agent who isn't leveraged and who buys undervalued assets and sells overvalued assets. 468 00:46:55,990 --> 00:47:00,280 We assume there's one risky asset plus cash, and we make four assumptions. 469 00:47:00,280 --> 00:47:07,930 We assume the bank uses an exponential moving average of historical volatility to estimate the expected volatility going forward. 470 00:47:08,440 --> 00:47:14,590 So what does that mean? If you go back to the kind of picture I showed you a moment ago, 471 00:47:14,920 --> 00:47:22,719 the banks who are leveraging are looking at this time series, and they're using a historical average of what it has 472 00:47:22,720 --> 00:47:29,260 done over the last couple of years or so in order to decide how volatile things are likely to be going forward. 473 00:47:29,560 --> 00:47:33,070 That's actually a very reasonable thing to do, because volatility is pretty persistent, 474 00:47:33,580 --> 00:47:36,940 and so the path of the future is, in general, like the past. 475 00:47:37,360 --> 00:47:40,360 There can be spikes, as we see here, but most of the time it's a pretty good rule. 476 00:47:41,480 --> 00:47:47,440 And I stand behind that assumption, because I can just tell you: if you survey what people do, 477 00:47:47,740 --> 00:47:49,480 they do that. When I was at Prediction Company, 478 00:47:49,780 --> 00:47:58,210 the firm Norman Packard and I formed after we didn't get rich playing roulette, we started an investment management firm. 479 00:47:58,780 --> 00:48:01,960 They don't throw you out for winning. And we did pretty well. 480 00:48:02,140 --> 00:48:07,760 And we did this too, like everybody else. Basel II risk management: 481 00:48:07,770 --> 00:48:11,910 this is a standard thing. Actually, it came in while we were at Prediction Company in the late nineties. 482 00:48:12,240 --> 00:48:15,400 You use something called Value at Risk, where you set the leverage target. 483 00:48:15,420 --> 00:48:20,790 I'll come back and explain what that is. We use a standard assumption about price formation, supply equals demand, 484 00:48:21,030 --> 00:48:26,430 so we find prices that way, and we assume this fundamentalist buys undervalued assets and vice versa. 485 00:48:26,730 --> 00:48:30,930 So the diagram of the model looks like this. We have a fund, we have a bank. 486 00:48:31,230 --> 00:48:36,510 The fund detects a mispricing; we just assume a fixed valuation. 487 00:48:37,230 --> 00:48:41,820 It updates its portfolio weight. There's a little shock that keeps knocking things out of equilibrium, 488 00:48:41,820 --> 00:48:47,130 so we're using a little bit the same trick as a dynamic stochastic general equilibrium model, 489 00:48:47,310 --> 00:48:51,960 but with the important difference that we don't enforce strategic equilibrium in this model. 490 00:48:52,920 --> 00:49:00,900 And the bank similarly updates its risk estimate, enforces its leverage constraint, and we form prices. 491 00:49:01,480 --> 00:49:07,320 And now let me mention a key fact, which is that for a passive investor with leverage greater than one, like the bank, 492 00:49:07,740 --> 00:49:11,400 if prices drop, leverage goes up, and vice versa. 493 00:49:11,910 --> 00:49:18,540 Now, why is that?
If you look at the formula for leverage, it's the value of the risky assets divided by the equity, 494 00:49:18,960 --> 00:49:22,920 and the equity is the assets minus the liabilities. 495 00:49:23,190 --> 00:49:29,830 When prices drop, the assets and the equity fall by the same absolute amount, but because you're subtracting off the liabilities in the denominator, the relative effect on the denominator is bigger. 496 00:49:30,090 --> 00:49:36,570 And so you see an amplification. So when prices drop, leverage goes up. 497 00:49:36,960 --> 00:49:45,330 Now, if you have to control your leverage, this becomes destabilising, because now your leverage is higher than it should be. 498 00:49:45,840 --> 00:49:49,770 You're above your target, so you have to sell assets to get back to your target. 499 00:49:50,100 --> 00:49:53,700 So notice that your risk control is now destabilising the market. 500 00:49:54,150 --> 00:50:01,110 So although it's a sensible thing for you to do as an individual, for the world as a whole, if everybody is doing this, 501 00:50:01,110 --> 00:50:09,390 their price impact is destabilising, because they're buying into rising markets and selling into falling markets. 502 00:50:09,690 --> 00:50:16,980 And in fact, when we originally tried to make this model, we tried to make it with just banks, and it just blew up, because it's purely destabilising. 503 00:50:17,310 --> 00:50:21,540 We had to put the fundamentalist in there just to stabilise things. 504 00:50:22,350 --> 00:50:28,230 Now, just to explain Value at Risk, a.k.a. Basel II: 505 00:50:28,980 --> 00:50:39,030 Value at Risk is a little rule that mathematically looks like this, whereby when volatility increases, your leverage decreases. 506 00:50:39,360 --> 00:50:48,630 According to Basel II, it would follow this blue line, where this b parameter here is minus one half, and there's some limit; but up to the limit, 507 00:50:49,350 --> 00:50:55,980 if you set the limit aside, you can see that your leverage just goes as one over the standard deviation, the volatility. 508 00:50:56,640 --> 00:51:03,360 It's a very sensible thing to do from a microprudential point of view, because if the world's riskier, you should take less leverage and take less risk. 509 00:51:04,710 --> 00:51:08,520 Supported and endorsed by the establishment. 510 00:51:08,760 --> 00:51:11,940 By the way, this dashed green line would correspond to constant leverage. 511 00:51:12,450 --> 00:51:21,599 Now, what happens when we run this model? Well, in one regime, when we make the banking sector small, we see random oscillations. 512 00:51:21,600 --> 00:51:27,210 And I should mention this model is now a dynamical system. It's about six-dimensional. 513 00:51:28,590 --> 00:51:30,810 And it's stochastic in general. 514 00:51:31,200 --> 00:51:36,569 So here we're running it, and you see that things are kind of randomly varying around some price, but not much is happening. 515 00:51:36,570 --> 00:51:42,240 The leverage is fluctuating a bit, but not much is happening. Now, suppose we make the banking sector bigger. 516 00:51:42,240 --> 00:51:49,170 What does that mean? Well, when the banking sector is small, it means we don't give the banks much money or we don't let them take very much leverage. 517 00:51:49,530 --> 00:51:55,660 But if we let them do either one of those things more strongly, then we see something happening like this.
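Here is a minimal sketch, in Python, of the kind of rule just described: an exponentially weighted estimate of the variance feeds a Value-at-Risk style leverage target of the form alpha times variance to the power b, with b = -1/2 as in Basel II, so the target goes as one over the volatility. The decay constant, the risk-appetite parameter alpha, the cap and the numbers in the example are illustrative assumptions, not values from the model.

import numpy as np

def ewma_variance(returns, lam=0.97, var0=1e-4):
    # exponentially weighted moving average of squared returns:
    # the bank's estimate of how volatile things will be going forward
    var = var0
    for r in returns:
        var = lam * var + (1.0 - lam) * r ** 2
    return var

def target_leverage(var, alpha=0.1, b=-0.5, cap=25.0):
    # Value-at-Risk style rule: higher perceived risk means lower leverage
    return min(alpha * var ** b, cap)

rng = np.random.default_rng(0)
var = ewma_variance(rng.normal(0.0, 0.01, size=500))   # synthetic daily returns
lev = target_leverage(var)

# the destabilising mechanics, for a bank that starts exactly on target
equity = 10.0
assets = lev * equity
liabilities = assets - equity

assets *= 0.95                         # prices fall 5%
equity = assets - liabilities          # equity absorbs the whole loss
print(assets / equity)                 # leverage is now above the target...
print(assets - lev * equity)           # ...so this much must be sold into a falling market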
518 00:51:55,680 --> 00:52:03,660 This is the oscillation I mentioned earlier, where prices are slowly building and then crashing, and slowly building and then crashing. 519 00:52:03,990 --> 00:52:12,420 And leverage is similarly slowly building and crashing. Now, interestingly, we can turn the noise completely off. If we turn the noise off, 520 00:52:12,750 --> 00:52:17,820 then in the small-banking-sector version we go to a fixed point; things just get flat. 521 00:52:18,330 --> 00:52:21,900 But for the large banking sector, we see an oscillation. 522 00:52:22,440 --> 00:52:28,409 And in fact, we can show that this oscillation is a chaotic oscillation, where things go around some loop 523 00:52:28,410 --> 00:52:34,170 in a space that involves the price, who owns the asset, and what the perceived risk is. 524 00:52:34,380 --> 00:52:37,710 And you can see there's some kind of complicated behaviour over in this part. 525 00:52:39,930 --> 00:52:47,070 But if you slice this thing, you'll see the characteristic fractal structure that you expect from a low-dimensional strange attractor. 526 00:52:47,460 --> 00:52:53,280 And by the way, there's a hyperbolic fixed point, for those of you who have taken dynamical systems and know what that means. 527 00:52:53,550 --> 00:53:01,170 That's driven by this place where the leverage is going to infinity, where things get really unstable in this model. 528 00:53:02,700 --> 00:53:06,720 Now let me just discuss a little bit what happens as you vary the parameters. 529 00:53:07,050 --> 00:53:12,060 There are two key parameters. One is alpha, which controls the risk taken by the individual agents, 530 00:53:12,330 --> 00:53:19,740 and the other is b, which controls the risk policy we let them follow, because we're interested in what the alternatives are. 531 00:53:20,340 --> 00:53:31,570 And the phase space looks like this. Basel II corresponds to -0.5 for this parameter b, which makes the risk control procyclical, 532 00:53:31,590 --> 00:53:40,200 as it's called. And along this axis we're varying alpha, but I plot leverage instead, because that really simplifies the plot. 533 00:53:41,070 --> 00:53:45,990 And what we see is that there's a region here where things are nice and stable. This is where the fixed-point equilibrium works. 534 00:53:46,590 --> 00:53:53,610 There's a region up here where things oscillate, and the boundary between them involves a Hopf bifurcation. 535 00:53:53,610 --> 00:53:57,630 Things happen very suddenly. 536 00:53:57,990 --> 00:54:03,630 You suddenly go from a fixed point to a finite-amplitude oscillation. 537 00:54:03,960 --> 00:54:10,200 And as a regulator, you really had better understand this, because obviously the consequences could be serious. 538 00:54:10,500 --> 00:54:14,010 Now, does this explain the Great Moderation and the crisis? 539 00:54:14,340 --> 00:54:17,700 Well, I wouldn't want to argue that it is the explanation. 540 00:54:18,330 --> 00:54:21,330 But on the other hand, it could have caused it. 541 00:54:21,720 --> 00:54:31,020 And I forgot to say something essential back here, which is that when you look at this oscillation, the time axis is not arbitrary. 542 00:54:31,050 --> 00:54:39,930 We've calibrated this model, and these are measured in years, so these run-ups automatically come out with a period of around 15 years.
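To make the feedback loop concrete, here is a schematic toy in the same spirit, not the calibrated six-dimensional model from the talk: one leveraged bank using the Value-at-Risk target sketched above, one fundamentalist with a fixed valuation, one risky asset, and a small shock. The price-impact rule and every parameter value are illustrative guesses, so the toy only shows the qualitative mechanism.

import numpy as np

rng = np.random.default_rng(1)
T = 2000
p, V = 1.0, 1.0                     # asset price and the fundamentalist's fixed valuation
sigma2 = 1e-4                       # bank's variance estimate
E_bank = 0.5                        # bank equity: the "size of the banking sector"
n_bank = n_fund = 0.0               # current holdings of the risky asset
last_r = 0.0
lam, alpha, b = 0.99, 0.075, -0.5   # EWMA decay, risk appetite, Basel-II-like exponent
kappa, mu = 2.0, 0.1                # fundamentalist aggressiveness, price impact
prices = []

for t in range(T):
    prev_p = p

    # bank: update the risk estimate and the capped Value-at-Risk leverage target
    sigma2 = lam * sigma2 + (1.0 - lam) * last_r ** 2
    lev = min(alpha * sigma2 ** b, 25.0)

    # desired holdings: the bank targets its leverage, the fundamentalist
    # buys below its valuation and sells (or shorts) above it
    d_bank = lev * E_bank / p
    d_fund = kappa * (V - p)

    # crude stand-in for "supply equals demand": net buying pushes the price up,
    # plus a small exogenous shock that keeps knocking things out of equilibrium
    excess = (d_bank - n_bank) + (d_fund - n_fund)
    p = prev_p * np.exp(np.clip(mu * excess + 0.003 * rng.normal(), -0.5, 0.5))
    last_r = np.log(p / prev_p)

    # mark the bank to market, then let both agents trade to their targets
    E_bank = max(E_bank + n_bank * (p - prev_p), 1e-6)
    n_bank, n_fund = d_bank, d_fund
    prices.append(p)

# with a small banking sector (small E_bank or alpha) the price just jitters;
# making it larger can produce the slow build-ups and crashes described above
print(f"price range: {min(prices):.3f} .. {max(prices):.3f}")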
543 00:54:40,440 --> 00:54:46,350 And the levels of leverage that trigger this are quite plausible for what happens in the market. 544 00:54:46,680 --> 00:54:50,100 So we get a reasonably good match with what's actually going on. 545 00:54:50,280 --> 00:54:56,670 And the model very clearly says that leverage was too high in the period leading up to the crisis. 546 00:54:57,960 --> 00:55:06,360 So it could have caused the Great Moderation and the crisis, and the housing bubble may have just been the spark that made this happen. 547 00:55:07,050 --> 00:55:10,140 Now, how can we do things more realistically? 548 00:55:10,440 --> 00:55:14,400 Well, we've been working on making what are called agent-based models. 549 00:55:14,760 --> 00:55:18,899 What I just showed you was a very simple example of an agent-based model. In general, 550 00:55:18,900 --> 00:55:21,960 if you want to make things more realistic, you've got to do better than that. 551 00:55:22,140 --> 00:55:28,020 You're not going to get an accurate model of the economy with a model that just has two agents and one asset. 552 00:55:29,160 --> 00:55:30,150 We're not that lucky. 553 00:55:32,610 --> 00:55:40,950 To surmount the fundamental problem of macroeconomics, which is that, if you think about the business cycle I showed you, 554 00:55:41,220 --> 00:55:45,930 there are only about 15 well-observed business cycles in the history of civilisation, 555 00:55:47,430 --> 00:55:52,500 and the ones in the distant past are probably not completely relevant to what's going on now, 556 00:55:52,920 --> 00:55:56,790 so we have very limited data with which to calibrate models, 557 00:55:57,000 --> 00:56:00,329 I would argue we have to make good fundamental models. 558 00:56:00,330 --> 00:56:03,930 We can't just use statistical models, because the curse of dimensionality blocks us. 559 00:56:04,470 --> 00:56:10,050 And we've got to use micro data to validate them. 560 00:56:10,410 --> 00:56:16,020 Now, I'm out of time, so I'm not going to say much about this, but just to say: if you look at something like climate models, 561 00:56:16,350 --> 00:56:23,160 it's been key that they're not just looking at time series like global mean temperature and global carbon dioxide; 562 00:56:23,460 --> 00:56:27,840 they're looking at details of the weather patterns they predict. 563 00:56:27,840 --> 00:56:32,730 They can do things like join Tierra del Fuego together with Antarctica 564 00:56:32,880 --> 00:56:38,760 in their simulations, and the world suddenly gets vastly warmer than it is now, because that blocks the Antarctic current. 565 00:56:38,970 --> 00:56:43,800 And you can look back in the record and see that that's in fact how it was. 566 00:56:44,460 --> 00:56:52,170 Now, my vision of the economics of the future would be a simulation of the economies of the major countries of the world. 567 00:56:52,620 --> 00:56:56,130 We'd get the micro big-data inputs directly off the Internet. 568 00:56:56,580 --> 00:57:05,910 We might even couple this to other social models. We'd have teams focusing on each component: on households, firms, banks, 569 00:57:06,210 --> 00:57:12,810 etc., and they would be monitoring, using the Internet, whether their models are really being followed. 570 00:57:13,380 --> 00:57:16,830 I'm going to skip over this because I want to wrap up pretty soon. 571 00:57:18,660 --> 00:57:21,450 Now, how well do I think we can do that?
572 00:57:21,840 --> 00:57:28,900 Well, we're not going to do it perfectly, because there are things like market efficiency. Market efficiency is a self-organising process; 573 00:57:28,920 --> 00:57:32,820 I'll say a bit more about it in a moment, and it limits the predictability of tradable assets. 574 00:57:33,120 --> 00:57:37,889 I've already alluded to the fact that I think in the real world we see something 575 00:57:37,890 --> 00:57:42,960 very close to turbulence in financial assets and in the economy. 576 00:57:43,140 --> 00:57:49,650 And in fact, there's a beautiful paper in Nature from about 1996 pointing out that 577 00:57:50,040 --> 00:57:58,140 if you match up the statistics of fluid turbulence against those of the financial system, the match is quite accurate. 578 00:58:03,150 --> 00:58:07,710 I don't think we're going to be able to make detailed forecasts about the future of the economy, 579 00:58:08,010 --> 00:58:10,110 but I do think there's a lot of stuff we can predict, 580 00:58:10,560 --> 00:58:18,630 and that is, there's a lot of information we can compress by making better models that we can use to formulate policies. 581 00:58:20,190 --> 00:58:23,730 It's back to this question of social science versus physics. 582 00:58:24,030 --> 00:58:28,470 You know, unlike atoms, people can think. 583 00:58:29,040 --> 00:58:32,130 Sometimes I think they can't, 584 00:58:32,550 --> 00:58:38,290 but yes, people can think. What scientific limits does this impose? 585 00:58:38,310 --> 00:58:43,320 Well, it does impose some limits. You know, economics is a [INAUDIBLE] of a lot harder than physics. 586 00:58:43,590 --> 00:58:49,079 I've done both, so I can vouch for that. But that doesn't mean everything's impossible; 587 00:58:49,080 --> 00:58:54,360 it doesn't mean you have to just throw up your hands. And I would point to the consistency of psychological experiments. 588 00:58:54,690 --> 00:58:56,579 There are a lot of psychological experiments where, 589 00:58:56,580 --> 00:59:03,900 if you put people in a certain situation, you can make a pretty good prediction that 70% of them will do one thing and 30% will do another, 590 00:59:04,140 --> 00:59:11,520 and it holds up fairly consistently through time, with some cultural variation and so on. 591 00:59:12,150 --> 00:59:17,580 I think big data empowers us to do incredible things that we couldn't do before. 592 00:59:18,060 --> 00:59:22,470 I already mentioned the example of traffic forecasting, which is a sort of trivial example. 593 00:59:24,360 --> 00:59:31,860 You know, it's a bit frustrating: if you look at the models that are actually run to understand economies, 594 00:59:32,250 --> 00:59:41,700 they're run through a methodology based on national accounting that was developed in the 1950s and has evolved only a little since then. 595 00:59:42,090 --> 00:59:46,490 It involves surveys; it involves boiling everything down into aggregate data; 596 00:59:46,500 --> 00:59:53,580 it involves the numbers often coming out a year after the events actually happened. 597 00:59:53,820 --> 01:00:01,020 I just think we can do infinitely better than that. Now, I want to say a bit about market efficiency and then conclude. You know, 598 01:00:02,190 --> 01:00:06,120 the hypothesis of market efficiency says that markets are inherently unpredictable.
599 01:00:06,480 --> 01:00:10,440 Now, when I heard about that, my immediate reaction was, okay, I've got to go try and predict markets. 600 01:00:10,890 --> 01:00:19,530 And I think we and many others have since shown that markets are actually somewhat predictable, 601 01:00:19,530 --> 01:00:24,660 predictable enough to make money. But even for good firms like ours, 602 01:00:25,050 --> 01:00:29,250 we were right on maybe 55% of our bets and wrong on 45% of them. 603 01:00:29,280 --> 01:00:32,850 So it's still pretty noisy, and I don't want to say that this is easy. 604 01:00:33,870 --> 01:00:42,810 The world is actually close to being market efficient. The question, I argue, is how much difference it makes. 605 01:00:43,680 --> 01:00:48,089 How much difference do these inefficiencies make? Well, they make an important difference: 606 01:00:48,090 --> 01:00:51,240 they put a lot of money in my pocket. Okay. 607 01:00:51,600 --> 01:00:55,770 But in general, for society, is this something we ought to care about? 608 01:00:56,190 --> 01:00:59,400 And I argue that we should, because there's a catch. 609 01:00:59,790 --> 01:01:05,279 You can see immediately that markets can't be perfectly efficient, because market efficiency requires arbitrageurs: 610 01:01:05,280 --> 01:01:10,470 you've got to have people out there exploiting the inefficiencies, but if there are no inefficiencies to exploit, they'll all go home, 611 01:01:10,740 --> 01:01:15,450 and then the market will become inefficient again. So there must always be some residual inefficiencies. 612 01:01:15,960 --> 01:01:24,210 And so the way I would put this is that markets are efficient to first order, but necessarily inefficient at second order. 613 01:01:24,600 --> 01:01:31,710 And I think this has important consequences. I wrote a paper back in 2002 with a market ecology hypothesis, 614 01:01:32,850 --> 01:01:38,130 and the hypothesis is that we need to think about the strategies that agents are following: 615 01:01:38,370 --> 01:01:44,370 what are the inefficiencies they're feeding off, how do these different kinds of agents interact with each other, 616 01:01:44,790 --> 01:01:56,189 and what happens? And I did a lot of work in that paper where I basically mapped the financial system into the theoretical framework 617 01:01:56,190 --> 01:02:03,690 that's used in ecology and showed that you get all the same things that ecologists have worried about 618 01:02:04,200 --> 01:02:05,760 in financial systems (a cartoon version of that idea is sketched below). 619 01:02:06,120 --> 01:02:17,760 And I'm working with collaborators here and elsewhere to try and use these ideas to actually get better models for understanding financial risk, 620 01:02:18,030 --> 01:02:23,249 because one of the key hypotheses here, the last one, is that imbalances in the market 621 01:02:23,250 --> 01:02:28,230 food web can drive crises: when market ecologies get out of balance, 622 01:02:28,530 --> 01:02:29,940 strange things can happen. 623 01:02:30,120 --> 01:02:39,930 And I would argue this is what happened in the crisis of 2008, when subprime mortgages and strange new derivatives got way out of hand. 624 01:02:40,650 --> 01:02:45,840 Now, there are a lot of challenges to doing this kind of thing, to making realistic models. 625 01:02:47,940 --> 01:02:54,090 There's a group of us here who are working on this, most of whom are in the Mathematical Institute. 626 01:02:55,380 --> 01:02:59,820 We're trying to create the key building blocks.
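To give a feel for the market-ecology idea mentioned above, here is a deliberately cartoonish sketch, not the model from the 2002 paper: a strategy's capital grows with the profits it extracts from an inefficiency, and its own trading erodes that inefficiency, much as a predator depletes its prey. The functional forms and every parameter value are illustrative assumptions.

ineff = 1.0        # size of an exploitable inefficiency (the "prey")
capital = 0.01     # capital of the strategy feeding on it (the "predator")
g, c, r = 0.5, 0.8, 0.05   # profitability, market impact, regeneration rate

for t in range(300):
    profit = g * ineff * capital              # profit needs both prey and predator
    capital += profit                          # reinvested profits grow the strategy
    ineff += r * (1.0 - ineff) - c * profit    # inefficiency regenerates and is consumed
    ineff = max(ineff, 0.0)

print(capital, ineff)

In this cartoon the strategy grows while driving the inefficiency it feeds on toward a small residual level, which is one way to picture markets that are efficient to first order but inefficient at second order.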
We're trying to take advantage of the collective wisdom 627 01:03:00,030 --> 01:03:06,360 that's been accumulated by the small cadre of people who are working on out-of-equilibrium 628 01:03:06,630 --> 01:03:14,220 agent-based models, to understand how to do simulations in economics and make them really simple and easy to use. 629 01:03:14,430 --> 01:03:17,940 We have an open-source project. Any of you can track what we're doing 630 01:03:18,210 --> 01:03:21,210 and join it; all the code is available as it's being created. 631 01:03:21,480 --> 01:03:26,730 If you have programming skills and you're interested in this kind of thing, please contact us 632 01:03:27,160 --> 01:03:33,399 and we'll sign you up. We're trying to create a library of standard models that anybody who wants to join can build 633 01:03:33,400 --> 01:03:38,860 on, to deal with the problem that these kinds of models are actually time-intensive to create. 634 01:03:39,160 --> 01:03:47,139 If you have to create everything from scratch, it's hard to accumulate progress, because a typical graduate 635 01:03:47,140 --> 01:03:52,060 student will spend several years just writing software to get to the point where they can do something useful. 636 01:03:52,270 --> 01:03:57,760 And we're focusing on three use cases at the moment: housing markets, the insurance industry and bank stress testing. 637 01:03:58,300 --> 01:04:01,300 Now, to conclude. I apologise: 638 01:04:01,300 --> 01:04:06,790 I haven't said anything in this lecture about long-range evolution, about growth, 639 01:04:07,060 --> 01:04:13,420 innovation, the things that actually cause long-term structural change in the economy. 640 01:04:13,870 --> 01:04:19,240 There just isn't enough time to talk about it here. Let me just say that's another component of the work I do. 641 01:04:20,740 --> 01:04:29,410 We've shown, together with others like Francois Lafond, that a lot of these trends are actually surprisingly forecastable. 642 01:04:29,710 --> 01:04:33,820 There are laws, like Moore's Law and Wright's Law, that make that true. 643 01:04:34,060 --> 01:04:39,070 So I think this is another unexplored area in economics. 644 01:04:39,670 --> 01:04:47,980 Another key application: we desperately need better economic models to steer us toward a sustainable economy. 645 01:04:49,090 --> 01:04:54,460 Current rates of growth are simply unsustainable. How are we ever going to do this without a huge crisis? 646 01:04:54,760 --> 01:05:03,070 I would argue our existing models are hopelessly flawed. And, since this is actually a mathematics lecture, 647 01:05:03,460 --> 01:05:10,150 I think the ecology of the way mathematics is done is evolving. 648 01:05:11,140 --> 01:05:14,530 Ulam was one of the pioneers of that. 649 01:05:15,070 --> 01:05:19,450 When I was a graduate student, we were pioneers too: we started doing what we called experimental mathematics. 650 01:05:19,570 --> 01:05:23,410 We put equations on computers, 651 01:05:23,650 --> 01:05:32,650 we played around with them, and we tried to map out the phenomenology to serve as a kind of scout for more focussed theory. 652 01:05:32,980 --> 01:05:38,380 And I think that's increasingly the way things are being done and will be done in the future. 653 01:05:38,710 --> 01:05:40,900 I think that will happen in economics as well.
654 01:05:41,170 --> 01:05:49,870 And simulation needs to play a key role in between data on one side and sharper small models on the other side. 655 01:05:50,080 --> 01:05:56,440 We need to be moving more towards an economic big science because while I'm a strong supporter of small science, 656 01:05:56,890 --> 01:06:04,660 you can't do everything with just small science. Imagine if we were trying to deal with global warming without large scale global circulation models. 657 01:06:04,870 --> 01:06:11,920 We just wouldn't be able to do it. Doing this is going to require a culture shift and it's a big job. 658 01:06:12,700 --> 01:06:16,269 And I appreciate you being patient and listening to me. 659 01:06:16,270 --> 01:06:18,480 I'm 10 minutes over time. Thank you.