1 00:00:01,990 --> 00:00:05,750 ARI: Hello, I'm Ari. [CLAUDINE: And I'm Claudine] Welcome to Proving the Negative. 2 00:00:05,750 --> 00:00:09,490 ARI: We're a podcast all about exploring the different sides of cybersecurity, 3 00:00:09,490 --> 00:00:13,780 from political to computer science, international relations to mathematics. 4 00:00:13,780 --> 00:00:17,240 Join us as we talk to our friends about the work they do. 5 00:00:17,240 --> 00:00:23,240 SEAN: My elevator pitch starts with: trust and reputation are multifaceted concepts. 6 00:00:23,240 --> 00:00:27,650 Our experience of them is psychological and human and social. 7 00:00:27,650 --> 00:00:30,500 But there's a whole other perspective that we can take on [trust & reputation], 8 00:00:30,500 --> 00:00:32,750 which is this more formal (or computational) one. 9 00:00:32,750 --> 00:00:37,310 Rather than talking about them in terms of describing behaviour or how we feel, 10 00:00:37,310 --> 00:00:40,070 which are, of course, very valid, important aspects, 11 00:00:40,070 --> 00:00:46,010 we can talk about them in terms of more prescriptive, very precise models 12 00:00:46,010 --> 00:00:53,560 which can use maths or simulations to describe trust or reputation behaviour. 13 00:00:53,560 --> 00:00:57,750 Now, trying to describe human behaviour mathematically is a huge endeavour. 14 00:00:57,750 --> 00:00:59,980 As computer scientists with an interest in these concepts, 15 00:00:59,980 --> 00:01:04,300 we find a happy medium in cybersecurity. My research talks less about individual 16 00:01:04,300 --> 00:01:06,700 behaviour, though of course we try and account for that. 17 00:01:06,700 --> 00:01:09,859 We talk more about the context in which humans and computers interact. 18 00:01:09,859 --> 00:01:13,100 And this is why cybersecurity becomes very important, because an important 19 00:01:13,100 --> 00:01:16,540 idea in trust and reputation, and in these sorts of social interactions, 20 00:01:16,540 --> 00:01:20,680 is that of a network. A network is just this mesh of interconnected individuals, 21 00:01:20,680 --> 00:01:23,860 which may be people or computers or... who knows?! 22 00:01:23,860 --> 00:01:27,000 The point is that there are powerful mathematical and computational tools 23 00:01:27,000 --> 00:01:28,630 for talking about networks. 24 00:01:28,630 --> 00:01:33,500 We take these tools and use them to try and describe things like: 25 00:01:33,500 --> 00:01:36,580 how information might move around a network, and connectivity. 26 00:01:36,580 --> 00:01:40,240 E.g., person X might have a lot of connections and person Y very few... 27 00:01:40,240 --> 00:01:45,199 Finally, how can these sorts of behavioural aspects be exploited and 28 00:01:45,199 --> 00:01:47,800 how can those exploitations be mitigated or prevented? 29 00:01:47,800 --> 00:01:51,520 ARI: There's a lot to unpack here... What is a model? What do you use them for? 30 00:01:51,520 --> 00:01:56,460 SEAN: It's almost like a toy. It's a way of describing some part of the world 31 00:01:56,460 --> 00:01:59,560 without, of course, having to go and build that entire part of the world. 32 00:01:59,560 --> 00:02:05,013 Physics is a really good example of a mathematical model - rather than 33 00:02:05,013 --> 00:02:10,110 setting up every possible physical [experiment] (e.g., testing all pendulums... 34 00:02:10,110 --> 00:02:12,550 ...
by building a pendulum and swinging it around), 35 00:02:12,550 --> 00:02:15,850 we can instead come up with a linguistic description of them in this precise 36 00:02:15,850 --> 00:02:18,850 language as to how [this pendulum] behaves. 37 00:02:18,850 --> 00:02:23,450 Models don't have to be mathematical - even the way we talk about ourselves: 38 00:02:23,450 --> 00:02:28,270 "I tend to be an optimistic person" or "I tend to prefer spicy food". 39 00:02:28,270 --> 00:02:34,720 These are models, these are descriptions of ourselves that try and capture us. 40 00:02:34,720 --> 00:02:39,850 CLAUDINE: You mentioned that you were trying to "account for behaviour"? 41 00:02:39,850 --> 00:02:43,240 SEAN: The world is complicated, and humans are a particularly complicated part. 42 00:02:43,240 --> 00:02:46,210 Let's say you have a description of a network. You have a good description 43 00:02:46,210 --> 00:02:50,230 of how, if A communicates with B, it takes this long and happens in this way... 44 00:02:50,230 --> 00:02:53,000 That's great. What really becomes important when we talk about 45 00:02:53,000 --> 00:02:54,610 human behaviour is the decision-making. 46 00:02:54,610 --> 00:02:58,300 Why does A communicate with B? When do they choose to do so? 47 00:02:58,300 --> 00:02:59,860 What do they choose to communicate? 48 00:02:59,860 --> 00:03:03,540 That's where humans get really complicated. What we try and do 49 00:03:03,540 --> 00:03:08,120 is define mathematical descriptions of a person's behaviour. 50 00:03:08,120 --> 00:03:13,120 Let's say you're trying to describe the time it takes for A to tell B 51 00:03:13,120 --> 00:03:14,720 about something new they found out. 52 00:03:14,720 --> 00:03:17,200 There's lots of different mathematical functions describing, 53 00:03:17,200 --> 00:03:20,650 as time passes, what's the likelihood they communicate? 54 00:03:20,650 --> 00:03:24,200 The exponential distribution is (just to throw the term out there) a common model 55 00:03:24,200 --> 00:03:28,330 that might be used to model someone who's trying to communicate as much as they can. 56 00:03:28,330 --> 00:03:33,340 What this exponential model does is say, regardless of how much time has passed, 57 00:03:33,340 --> 00:03:37,910 0 to 5 seconds, 5 to 10 seconds... that doesn't matter. All that matters is they're 58 00:03:37,910 --> 00:03:42,130 likely to try and communicate sooner rather than later and as soon as possible. 59 00:03:42,130 --> 00:03:46,870 ARI: Do I need to know what the exponential distribution means? 60 00:03:46,870 --> 00:03:49,450 SEAN: One gorgeous thing about the exponential distribution, 61 00:03:49,450 --> 00:03:51,460 which really distinguishes it from other models 62 00:03:51,460 --> 00:03:55,380 (to give an example of why it's useful in this kind of research), 63 00:03:55,380 --> 00:03:58,480 is that it's got this property called 'memorylessness'. 64 00:03:58,480 --> 00:04:00,930 The easiest way to describe it is [that] the eagerness, if you like, 65 00:04:00,930 --> 00:04:04,180 of someone to communicate doesn't change with time. 66 00:04:04,180 --> 00:04:09,012 Whether 5 or 10 seconds have passed, the chance of you communicating 67 00:04:09,012 --> 00:04:11,110 in the next five seconds is always the same. 68 00:04:11,110 --> 00:04:15,000 The chance of you communicating between times 5-10 is the same as 69 00:04:15,000 --> 00:04:18,040 the chance of you communicating between times 20-25 70 00:04:18,040 --> 00:04:21,417 (if you haven't communicated yet).
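To make the memorylessness idea concrete, here is a rough Python sketch (an illustration added for this transcript, not the model from Sean's research; the rate of 0.1 per second and the sample size are arbitrary choices). It samples exponential waiting times and checks that the chance of communicating within the next five seconds is roughly the same whether zero, five or twenty seconds have already passed without a message.

import random

# Hypothetical illustration: "eagerness to communicate" modelled as an
# exponential waiting time with a fixed rate. Memorylessness means
# P(T > s + t | T > s) = P(T > t), so the chance of communicating in the
# next 5 seconds does not depend on how long you have already waited.

def chance_within(delay, already_waited, rate=0.1, trials=100_000):
    """Estimate P(communicate within `delay` more seconds | still silent after `already_waited`)."""
    waits = [random.expovariate(rate) for _ in range(trials)]
    still_silent = [t for t in waits if t > already_waited]
    return sum(t <= already_waited + delay for t in still_silent) / len(still_silent)

for waited in (0, 5, 20):
    print(f"already waited {waited:2d}s -> chance in next 5s = {chance_within(5, waited):.2f}")

Running it, all three estimates come out close to 0.39 (that is, 1 - e^(-0.1 * 5)), which is exactly the "eagerness doesn't change with time" idea described above.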
We are not trying to capture things like 71 00:04:21,417 --> 00:04:25,370 "Do people communicate more at midday or lunch time?" 72 00:04:25,370 --> 00:04:28,300 We are trying to capture this idea that people want to communicate 73 00:04:28,300 --> 00:04:30,430 as soon as possible, and they don't care what time of day it is. 74 00:04:30,430 --> 00:04:35,200 They don't care if it's 10 or 5PM, they want to get the information out there. 75 00:04:35,200 --> 00:04:36,730 And that's not always a perfect model, but when you're trying to describe big 76 00:04:36,730 --> 00:04:42,520 networks where a lot of people are communicating throughout the day, 77 00:04:42,520 --> 00:04:49,040 it can [realistically] capture the sheer level of conversation that happens. 78 00:04:49,040 --> 00:04:53,300 While it is important to know what an exponential distribution is if you want to 79 00:04:53,300 --> 00:04:56,000 look at our specific model, it is not at all necessary for understanding trust as a whole; 80 00:04:56,000 --> 00:04:59,450 [this model] is just one aspect of how we're looking at [trust and reputation]. 81 00:04:59,450 --> 00:05:03,260 We try and model semi-realistic pseudo-people. 82 00:05:03,260 --> 00:05:08,060 These aren't real human beings; this doesn't describe human beings perfectly. 83 00:05:08,060 --> 00:05:10,790 It is a statistical average of human behaviour 84 00:05:10,790 --> 00:05:15,110 that we try and capture. That can come from psychological research... 85 00:05:15,110 --> 00:05:20,780 We've looked at psychology papers and taken numbers from there, in terms of 86 00:05:20,780 --> 00:05:26,330 who communicates and how often. For more mathematical or theoretical work, 87 00:05:26,330 --> 00:05:34,550 we take the most reasonable behaviour we can, simpler to implement & describe. 88 00:05:34,550 --> 00:05:40,550 CLAUDINE: You said you were looking at behaviours [drawn from] psychology. 89 00:05:40,550 --> 00:05:43,150 What are the behaviours that you've been looking at? 90 00:05:43,150 --> 00:05:46,010 SEAN: There are a lot of robust models for things like personality type [and] 91 00:05:46,010 --> 00:05:49,400 how conscientious people are about sharing certain kinds of news 92 00:05:49,400 --> 00:05:52,310 (that piece of work was very much about sharing news and sharing fake news), 93 00:05:52,310 --> 00:05:57,410 how extroverted people are (how much they communicate/how many friends...). 94 00:05:57,410 --> 00:06:00,900 The kind of research that gives a statistical average gloss 95 00:06:00,900 --> 00:06:04,850 of human behaviour can be powerful for us because that means that rather than 96 00:06:04,850 --> 00:06:09,000 trying to capture the nitty-gritty of human behaviour (which is very hard to 97 00:06:09,000 --> 00:06:14,150 describe mathematically), we can instead capture generalities in human behaviour 98 00:06:14,150 --> 00:06:19,040 and say "if we were trying to capture an average human in a particular context..." 99 00:06:19,040 --> 00:06:21,200 The paper we were looking at was done in Italy. 100 00:06:21,200 --> 00:06:26,930 Italian people might behave differently from people living in France or the UK. 101 00:06:26,930 --> 00:06:29,720 Taking these papers, saying, "What do they say about human behaviour?" 102 00:06:29,720 --> 00:06:31,460 and "How can we describe that mathematically?"... 103 00:06:31,460 --> 00:06:35,330 But then also respecting limitations. ARI: Let's turn it around to you.
104 00:06:35,330 --> 00:06:42,200 How have you approached the biggest challenge in your own research? 105 00:06:42,200 --> 00:06:45,650 SEAN: Through a blend of my interests in computer science and psychology and 106 00:06:45,650 --> 00:06:51,500 human behaviour and even personal experience... I have questions around how 107 00:06:51,500 --> 00:06:55,460 people's personal experiences influence how and why they trust. 108 00:06:55,460 --> 00:06:59,630 See, the thing about trust is that it really is about our expectations of other human 109 00:06:59,630 --> 00:07:03,410 beings. Experiences with other human beings are what drive that behaviour. 110 00:07:03,410 --> 00:07:08,540 You'll see in a lot of research around trust and reputation (online or offline) that 111 00:07:08,540 --> 00:07:12,600 the social backgrounds people have, the experiences they have, 112 00:07:12,600 --> 00:07:15,500 gender or age or nationality, political perspective... 113 00:07:15,500 --> 00:07:19,000 these influence who we trust and why we trust them, 114 00:07:19,000 --> 00:07:21,110 and what can break that trust and what can make that trust. 115 00:07:21,110 --> 00:07:26,660 There is so much more to be learned about how an individual's experience, 116 00:07:26,660 --> 00:07:30,320 both in terms of the world they grew up in and the people they interact with, 117 00:07:30,320 --> 00:07:33,300 influences the decisions they then make in later life and 118 00:07:33,300 --> 00:07:35,600 the kinds of people they trust and why they trust them. 119 00:07:35,600 --> 00:07:42,600 If we want to understand why people behave the way they do, 120 00:07:42,600 --> 00:07:46,940 understanding where people come from is the only way to get to the root of it. 121 00:07:46,940 --> 00:07:51,000 CLAUDINE: Is there a big question that you would like to investigate, 122 00:07:51,000 --> 00:07:54,950 if you had unlimited resources, be that time, money, person power? 123 00:07:54,950 --> 00:07:57,110 SEAN: This idea of the biggest challenge is so interesting. 124 00:07:57,110 --> 00:08:01,600 I was thinking to myself, is this from a methodology perspective, in terms of 125 00:08:01,600 --> 00:08:03,960 learning new things to code or learning new kinds of maths? 126 00:08:03,960 --> 00:08:07,070 But there's also this personal aspect of doing the PhD. 127 00:08:07,070 --> 00:08:10,820 The amount you have to learn, to deal with the unknown as a researcher... 128 00:08:10,820 --> 00:08:15,300 Up until now, there are two things that you know for sure: 1) What you need to learn 129 00:08:15,300 --> 00:08:19,400 (regardless of whether you learn it or not, on the first day of a new course they give 130 00:08:19,400 --> 00:08:23,700 you the syllabus - 'this is important for the exams, and this might not be'). 131 00:08:23,700 --> 00:08:27,750 2) You also know that someone knows the answer. There is a marking scheme, 132 00:08:27,750 --> 00:08:32,400 and if you can't find the answer, someone can find it for you and help you get to it. 133 00:08:32,400 --> 00:08:36,700 As a researcher, all of those things are gone. First of all, no one knows what the 134 00:08:36,700 --> 00:08:39,450 answers to your questions are. No one knows what they even look like, whether 135 00:08:39,450 --> 00:08:41,960 you should talk about them mathematically or psychologically... 136 00:08:41,960 --> 00:08:44,919 [You're] left with this double problem of, first of all, asking yourself, 137 00:08:44,919 --> 00:08:47,720 "How do I even begin trying to answer this question?"
138 00:08:47,720 --> 00:08:51,470 And second of all, "How do I even know I'm right when I get there?" 139 00:08:51,470 --> 00:08:54,630 "Is there a right answer?" 140 00:08:54,630 --> 00:09:01,150 I dealt with this the way a lot of people do. I've been happy being part of the CDT; 141 00:09:01,150 --> 00:09:06,000 we've had so much interaction with researchers - PhD students a few years in, 142 00:09:06,000 --> 00:09:09,200 post-docs and professors and so many people who've gone through all of this. 143 00:09:09,200 --> 00:09:15,140 [I came to] understand that, as a researcher, you're not really looking for the right answer. 144 00:09:15,140 --> 00:09:19,453 A lot of the time, what you're looking for is to just add to human knowledge, 145 00:09:19,453 --> 00:09:22,850 to come up with new answers and sometimes better answers, 146 00:09:22,850 --> 00:09:26,000 and sometimes just trying to give a completely new perspective. 147 00:09:26,000 --> 00:09:30,260 Like, for example, trying to say, "Can we describe trust computationally?" 148 00:09:30,260 --> 00:09:34,250 For me, that's been one of the biggest challenges for sure - the perspective shift. 149 00:09:34,250 --> 00:09:37,970 ARI: How does trust link to reputation? 150 00:09:37,970 --> 00:09:42,830 SEAN: There's a thesis known [as] one of the first in my area: computational trust. 151 00:09:42,830 --> 00:09:51,630 A big part of [it] is just talking about how many different definitions of trust there are. 152 00:09:51,630 --> 00:09:58,530 The way I define trust is asking yourself: "if I had to rely on this person... 153 00:09:58,530 --> 00:10:01,150 (you don't have to rely on them, you don't have to depend on them) 154 00:10:01,150 --> 00:10:04,740 ...but if I were to depend on this person, would they do what's good for me?" 155 00:10:04,740 --> 00:10:08,460 That's a big part of what trust is: it's about dealing with the unknown. 156 00:10:08,460 --> 00:10:11,490 I take, at least in my research, a very particular view of what reputation is. 157 00:10:11,490 --> 00:10:18,000 [It's] an idea of generalising trust to [a] more social aspect, where you can say trust 158 00:10:18,000 --> 00:10:21,000 comes from my interactions with you. 159 00:10:21,000 --> 00:10:25,080 A simple way of looking at it is: you and I interact, and I build up trust that way. 160 00:10:25,080 --> 00:10:29,640 Reputation becomes important because we have this idea of exchanging 161 00:10:29,640 --> 00:10:33,930 information. Without having ever interacted with you, 162 00:10:33,930 --> 00:10:35,790 I might have a friend tell me something about you. 163 00:10:35,790 --> 00:10:42,550 I might have a friend say, "Ari is friendly...". That's what reputation is: it's 164 00:10:42,550 --> 00:10:48,895 when we make trust-based decisions off of not just our own experiences, but the 165 00:10:48,895 --> 00:10:52,500 experiences of people around us: the kind of narratives that exist of a person, 166 00:10:52,500 --> 00:10:56,940 or of a thing or of a company or of a new games console or whatever. 167 00:10:56,940 --> 00:11:00,150 To me, that's what reputation is. It's that social aspect of trust, 168 00:11:00,150 --> 00:11:03,150 that narrative aspect of trust where we're telling each other about things, 169 00:11:03,150 --> 00:11:05,494 even though we haven't had those direct experiences. 170 00:11:05,494 --> 00:11:09,000 You can have opinions passed from person to person to person, and none 171 00:11:09,000 --> 00:11:13,350 of them but the first has interacted with the thing itself.
That's interesting and, 172 00:11:13,350 --> 00:11:17,220 at the same time, risky. ARI: What's interesting about it? What are the risks? 173 00:11:17,220 --> 00:11:21,890 SEAN: What's interesting is how powerful a mechanism it is for survival. 174 00:11:21,890 --> 00:11:25,380 From person to person, from tribe to tribe, from friend group to friend group: 175 00:11:25,380 --> 00:11:31,500 "stay away from that person or that place, it's not good for you". 176 00:11:31,500 --> 00:11:36,500 The problem with that power, as is the case with [anything] involving language: 177 00:11:36,500 --> 00:11:42,100 while it's useful for exchanging information and giving us eyes 178 00:11:42,100 --> 00:11:44,980 that stretch to the other side of the world, 179 00:11:44,980 --> 00:11:53,750 the danger there is [that], without direct experience, people can fake it. 180 00:11:53,750 --> 00:11:56,000 You can say what you like about a person, and 181 00:11:56,000 --> 00:11:59,010 what you say about a person now might not be true of them later, 182 00:11:59,010 --> 00:12:01,939 or it might not be true of them in a different context. 183 00:12:01,939 --> 00:12:04,440 It's a bit of a dark perspective to take, but it's a realistic one. 184 00:12:04,440 --> 00:12:07,000 A woman might have a good experience with another woman, but a man 185 00:12:07,000 --> 00:12:08,370 might not have a good experience with that woman. 186 00:12:08,370 --> 00:12:11,705 And similarly, a man might have a good experience with another man, 187 00:12:11,705 --> 00:12:14,220 but a woman might not have a good experience of that second man. 188 00:12:14,220 --> 00:12:17,580 That's why it's really important [to remember] that not only can these things be faked, but they 189 00:12:17,580 --> 00:12:21,180 can also be wrong. Sometimes we just don't know what we are talking about. 190 00:12:21,180 --> 00:12:25,000 It's really important that we understand the context in which this information 191 00:12:25,000 --> 00:12:33,120 is gathered, and from whom, and what biases they might have. 192 00:12:33,120 --> 00:12:40,290 CLAUDINE: What does trust mean? How would you define trust? 193 00:12:40,290 --> 00:12:44,000 SEAN: To make a model that you can make sense of [and is meaningful], 194 00:12:44,000 --> 00:12:47,430 it has to be computational, mathematical and well-defined. Sociology (discussing 195 00:12:47,430 --> 00:12:52,320 human behaviour and things like that) has been making fantastic steps with 196 00:12:52,320 --> 00:12:55,650 network theory and the use of this kind of structure in describing interactions. 197 00:12:55,650 --> 00:12:58,600 This is very important for trying to account for nuances. 198 00:12:58,600 --> 00:13:02,300 Traditional epidemiological models assume that everyone is in contact with 199 00:13:02,300 --> 00:13:06,000 everyone all the time, and they were very simple, mathematically. 200 00:13:06,000 --> 00:13:08,000 They could be useful and they could say some meaningful things, 201 00:13:08,000 --> 00:13:09,750 but a lot of the time they would fall short. 202 00:13:09,750 --> 00:13:13,290 They have been made much more powerful with this network structure. 203 00:13:13,290 --> 00:13:17,760 The reason why this is so good for capturing nuances of social interactions 204 00:13:17,760 --> 00:13:24,000 is that it is trying to capture this idea that [group] relationships are made up of this 205 00:13:24,000 --> 00:13:29,480 big, messy, interconnected mesh of individual relationships.
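As a loose illustration of the point about network structure, the following hypothetical Python sketch (not Sean's actual model; the contact probability and the two toy networks are invented for the example) runs the same spreading rule on a fully mixed population, where everyone contacts everyone, and on a sparse ring of relationships.

import random

# Hypothetical sketch: a rumour spreading over a network, versus the
# "everyone is in contact with everyone" assumption of the simplest
# epidemiological models. Each round, every informed node passes the
# rumour to each of its neighbours with probability p.

def spread(neighbours, p=0.3, rounds=10, seed_node=0):
    informed = {seed_node}
    for _ in range(rounds):
        newly_informed = {m for n in informed for m in neighbours[n]
                          if m not in informed and random.random() < p}
        informed |= newly_informed
    return len(informed)

n = 100
# Fully mixed: everyone contacts everyone (the traditional assumption).
fully_mixed = {i: [j for j in range(n) if j != i] for i in range(n)}
# Sparse ring: each person only knows the two people next to them.
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

print("fully mixed:", spread(fully_mixed), "of", n, "people informed")
print("sparse ring:", spread(ring), "of", n, "people informed")

With everyone in contact with everyone, the rumour reaches almost the whole population within a few rounds; on the sparse mesh it only creeps outward a few people at a time, which is the kind of nuance the fully mixed models miss.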
206 00:13:29,480 --> 00:13:32,670 So person to person, computer to computer, person to computer... 207 00:13:32,670 --> 00:13:37,200 Your relationship with the group is so much about your relationship with 208 00:13:37,200 --> 00:13:38,580 the individuals of the group. 209 00:13:38,580 --> 00:13:44,790 There are methods in network theory to capture this aggregate, [but] 210 00:13:44,790 --> 00:13:51,330 that would be a bit more advanced. We use the model we do because it's about 211 00:13:51,330 --> 00:13:55,110 respecting the concept of relationship as the fundamental social atom. 212 00:13:55,110 --> 00:13:59,150 That is, society and social interaction IS [all about] relationships. 213 00:13:59,150 --> 00:14:03,250 ARI: Sean, when you say "we"...? SEAN: 80% of the time, me and my supervisor. 214 00:14:03,250 --> 00:14:06,870 The other 20%, maybe us and some collaborators. When I say "we", 215 00:14:06,870 --> 00:14:10,680 I'm talking about this community of people in diverse fields like 216 00:14:10,680 --> 00:14:14,790 computational trust and analytical sociology... 217 00:14:14,790 --> 00:14:19,030 People fascinated by formal models and formal methods - their backgrounds are 218 00:14:19,030 --> 00:14:23,100 often in physics and computer science - who really care about these things too. 219 00:14:23,100 --> 00:14:25,500 Some people love doing maths for the sake of maths, 220 00:14:25,500 --> 00:14:30,550 and some people love studying people as they are. And people like myself, who put 221 00:14:30,550 --> 00:14:34,830 these two worlds together: perfect models that we make up in our heads 222 00:14:34,830 --> 00:14:36,929 and this messy, complicated world that we actually live in, 223 00:14:36,929 --> 00:14:40,170 trying to find some way of describing the latter in terms of the former. 224 00:14:40,170 --> 00:14:45,660 ARI: But Sean, social sciences are not real sciences. How could you do this? 225 00:14:45,660 --> 00:14:48,200 SEAN: I don't think very many people use the word science 226 00:14:48,200 --> 00:14:51,090 having ever attended a philosophy of science course or read a book on the 227 00:14:51,090 --> 00:14:56,640 philosophy of science. There is a lot of nuance as to what science is. A big 228 00:14:56,640 --> 00:15:01,170 aspect of it is just about trying to be humble to the experiences that we have. 229 00:15:01,170 --> 00:15:06,180 This idea of social sciences not being a science - it's not a debate I understand. 230 00:15:06,180 --> 00:15:09,000 CLAUDINE: I think it's good to hear that articulated because for those of us who 231 00:15:09,000 --> 00:15:13,750 do a lot of inter/multidisciplinary work, you hear "social science isn't real science", 232 00:15:13,750 --> 00:15:16,530 or "we shouldn't take this into account because they can't run an experiment". 233 00:15:16,530 --> 00:15:20,910 SEAN: You can't say that someone's not a scientist because they can't run an experiment. 234 00:15:20,910 --> 00:15:25,500 Physicists can't always run experiments. Simulations have become so important. 235 00:15:25,500 --> 00:15:30,430 In the past we could get a pendulum, or take particles and shoot them at each other. 236 00:15:30,430 --> 00:15:36,500 To me, science is about this constant interplay between these narratives 237 00:15:36,500 --> 00:15:38,580 and models and little toys we make up in our heads to 238 00:15:38,580 --> 00:15:41,880 describe the world we live in and the actual experiences that we have.
239 00:15:41,880 --> 00:15:47,300 If someone is trying to respect both of those and is trying to find some crazy, 240 00:15:47,300 --> 00:15:50,520 beautiful link between the two, that's really what a scientist is. 241 00:15:50,520 --> 00:16:00,510 CLAUDINE: What was your most notable failure, and how did you overcome it? 242 00:16:00,510 --> 00:16:04,000 SEAN: I'd say one of my biggest failures was trying to come up with this perfect 243 00:16:04,000 --> 00:16:07,900 model at the beginning, which didn't exist. Spending a lot of time worrying 244 00:16:07,900 --> 00:16:11,660 and not getting things done when, in fact, sometimes all you can really do 245 00:16:11,660 --> 00:16:14,670 is dive into those kinds of observations and experiences and see what happens. 246 00:16:14,670 --> 00:16:17,760 The reason this is hard to answer isn't because I'm completely free of failure, 247 00:16:17,760 --> 00:16:21,000 but rather that, as a researcher, you're just wading through the unknown. 248 00:16:21,000 --> 00:16:24,330 And there are so many things that can change at a moment's notice. 249 00:16:24,330 --> 00:16:29,780 A failure in my approach tied into: what am I actually trying to do? 250 00:16:29,780 --> 00:16:34,350 At the beginning - having no idea what I was doing, for one. Getting lost in this 251 00:16:34,350 --> 00:16:37,970 tornado of theoretical perspectives and practical perspectives, and people 252 00:16:37,970 --> 00:16:41,820 gathering datasets and running statistics and making up very deep 253 00:16:41,820 --> 00:16:51,000 theoretical models of how human brains do logic and all this kind of stuff. 254 00:16:51,000 --> 00:16:54,060 I spent so much time at the beginning trying to come up with this perfect, 255 00:16:54,060 --> 00:16:57,750 multi-disciplinary approach that accounted for sociology and psychology 256 00:16:57,750 --> 00:16:59,520 and logic and maths and networks and all these things, spending a lot of time 257 00:16:59,520 --> 00:17:07,000 just muddling through and not really being able to get started on anything. 258 00:17:07,000 --> 00:17:10,590 Doing multi-disciplinary research, there's so much that you have to learn. 259 00:17:10,590 --> 00:17:13,530 As my supervisor often says, you have to just start with something. 260 00:17:13,530 --> 00:17:17,910 It's often in the mistakes that you learn how your model can be better. 261 00:17:17,910 --> 00:17:20,970 There is no methodology to find the answer with these things, you know. 262 00:17:20,970 --> 00:17:25,440 It really is about people trying all kinds of things to learn more about the world. 263 00:17:25,440 --> 00:17:28,000 All of those things are valid and important, and some of them 264 00:17:28,000 --> 00:17:30,850 may have more practical use sooner than later and some may not. 265 00:17:30,850 --> 00:17:34,500 And that really isn't [the point of] what we're doing as researchers. We're not trying to solve 266 00:17:34,500 --> 00:17:37,250 problems quite as much as we're trying to answer questions. 267 00:17:37,250 --> 00:17:38,460 ARI: What is cyber security [to you]? 268 00:17:38,460 --> 00:17:44,180 SEAN: Cyber security has to revolve around information technology. 269 00:17:44,180 --> 00:17:47,320 What most of us are talking about is computers. Whether you want to 270 00:17:47,320 --> 00:17:49,920 introduce books and things into that notion, I think you can. 271 00:17:49,920 --> 00:17:53,100 Cyber security is more than just computer security or computer science.
272 00:17:53,100 --> 00:17:57,570 It's about taking a step back and looking at the bigger picture. 273 00:17:57,570 --> 00:18:03,360 Zoom-bombing: people joining Zoom calls and displaying unpleasant images. 274 00:18:03,360 --> 00:18:07,000 I remember a conversation I had with someone - they said that they didn't 275 00:18:07,000 --> 00:18:10,600 believe it was a security problem because if you put a password on your 276 00:18:10,600 --> 00:18:13,620 Zoom call then no one can get into it. 277 00:18:13,620 --> 00:18:18,400 And I find that fascinating because to me, when we're talking about security, 278 00:18:18,400 --> 00:18:20,370 we are, of course, trying to secure computers. 279 00:18:20,370 --> 00:18:24,750 We're securing these computers because they mean something to us. 280 00:18:24,750 --> 00:18:27,900 How can these new technologies be used to do bad things? 281 00:18:27,900 --> 00:18:31,470 How do we prevent these bad things being done? What are bad things? 282 00:18:31,470 --> 00:18:38,700 That goes into complex areas, like secure devices being used inappropriately. 283 00:18:38,700 --> 00:18:43,500 It might be a good thing that you can put tracking on your phone if it's to 284 00:18:43,500 --> 00:18:47,200 keep your child safe. But at the same time, in an abusive relationship it can be 285 00:18:47,200 --> 00:18:52,160 dangerous. It can be used as a tool for bad. That's why cyber security is 286 00:18:52,160 --> 00:18:56,000 absolutely inseparable from social and psychological aspects, because ultimately 287 00:18:56,000 --> 00:19:00,180 it's human beings who value and use these devices. 288 00:19:00,180 --> 00:19:03,600 We have to learn - not just about human behaviour and not just about 289 00:19:03,600 --> 00:19:07,270 device behaviour, but the interaction between the two - to get 290 00:19:07,270 --> 00:19:11,000 anything meaningful done. Otherwise you are just encrypting data on a box 291 00:19:11,000 --> 00:19:13,900 and putting it in that box and you're putting that box in a safe and no one's 292 00:19:13,900 --> 00:19:14,740 touching that box. Great. That's a very safe box. It is also a useless box and it's 293 00:19:14,740 --> 00:19:18,670 also a meaningless box. 294 00:19:18,670 --> 00:19:22,000 ARI: Yeah! If it's not helpful to anyone, what's the point? 295 00:19:22,000 --> 00:19:26,800 CLAUDINE: Sean, I think that that's probably the best answer we've gotten... 296 00:19:26,800 --> 00:19:30,130 ARI: Do you have any tips for keeping up to speed with cyber security? 297 00:19:30,130 --> 00:19:34,100 SEAN: For me, it's reading papers, looking at direct results that have been found; 298 00:19:34,100 --> 00:19:36,520 for someone more technical, that might be looking at bug reports. 299 00:19:36,520 --> 00:19:41,400 Talking about reputation - blog posts and community. If something is not 300 00:19:41,400 --> 00:19:50,500 your area of expertise, engage with the community. Follow someone who seems 301 00:19:50,500 --> 00:19:53,480 to know what they're talking about, ask people questions or read forums. 302 00:19:53,480 --> 00:19:57,500 Anything like that. Watch lectures or talks online. 303 00:19:57,500 --> 00:20:00,060 CLAUDINE: Join us next week for another fascinating conversation. 304 00:20:00,060 --> 00:20:04,470 In the meantime, you can tweet at us @HelloPTNPod. 305 00:20:04,470 --> 00:20:08,730 You can subscribe on Apple Podcasts or wherever you listen to podcasts. 306 00:20:08,730 --> 00:20:14,560 The title there is PTNPod. See you next week. [ARI: Bye!]
307 00:20:14,560 --> 00:20:19,750 CLAUDINE: This has been a podcast from the Centre for Doctoral Training in Cybersecurity, University of Oxford. 308 00:20:19,750 --> 00:20:24,640 Funded by the Engineering and Physical Sciences Research Council.