Welcome to a discussion of the relationships, positive or negative, between artificial intelligence and human autonomy. Like our event four weeks ago on ethics and A.I. in education, this is a seminar associated with the new Institute for Ethics in A.I. at the University of Oxford. If you're interested in finding links to other related events, both past and forthcoming, together with recordings and podcasts categorised by topic, then go to the Philosophy Faculty home page and click on the Ethics in A.I. link. I'm Peter Millican, Gilbert Ryle Fellow and Professor of Philosophy at Hertford College, Oxford, and I'll be chairing tonight's event.

Autonomy is a core value in contemporary Western societies, a value that's often invoked in debates about practical ethics and politics, and indeed lies at the heart of liberal democracy. Understandably, therefore, advocates of artificial intelligence are keen to emphasise how A.I. systems can enhance our autonomy, giving us easy access to information on which to base our autonomous decisions, enabling us to achieve our personal goals more efficiently, and so on. But on the other hand, there are growing worries that A.I. systems in fact pose a number of serious threats to our autonomy. One prominent example in recent years, not least in various elections and referendums, has been the use of manipulative techniques that take advantage of our weaknesses and biases, leading us to make decisions or to be persuaded of views that might be quite different from those that we would otherwise have adopted. Such tricks have been part of the advertiser's repertoire for generations, of course, but machine learning on our social media sites and so forth makes it possible for us to be individually targeted and exploited, both by salespeople and political groups, in ways that were previously unimaginable. All this raises the question of whether it's possible to harness the considerable power of A.I. to improve our lives in a way that's fully compatible with respect for autonomy, and whether we need to reconceptualise both the nature and value of autonomy in the digital age. That's the focus of our seminar tonight.

I'm delighted to be joined by three Oxford academics who are particularly interested in these topics. First, Carina Prunkl. Carina is a research fellow at the Institute for Ethics in A.I., University of Oxford. She's also a research affiliate at the Centre for the Governance of A.I. in the Future of Humanity Institute, again here at Oxford. Carina works on the ethics and governance of A.I. with a particular focus on autonomy, and has both publicly advocated and published on the importance of accountability mechanisms for A.I. Welcome, Carina.
It's good to be here.

Second, Jonathan Pugh. Jonathan is a senior research fellow at the Oxford Uehiro Centre for Practical Ethics, researching how far A.I. ethics should incorporate traditional conceptions of autonomy and moral status. He recently led a three-year project on the ethics of experimental deep brain stimulation and neuro-hacking, and in 2020 published a book, Autonomy, Rationality and Contemporary Bioethics, with Oxford University Press. And he's written on a wide range of ethical topics, particularly on issues concerning personal autonomy and informed consent. Welcome, Jonathan.

Thanks for having me.

And third, Jess Morley. Jess is policy lead at Oxford's DataLab, leading its engagement work to encourage use of modern computational analytics in the NHS and to ensure public trust in healthcare records, notably those developed in response to the COVID pandemic. Jess is also pursuing a related doctorate at the Oxford Internet Institute's Digital Ethics Lab. As technical adviser for the Department of Health and Social Care, she co-authored the NHS Code of Conduct for data-driven technologies. Welcome, Jess.

Hi, Peter. Thanks for having me.

Each of our speakers will give a short talk, followed by discussion, and then the next talk and so forth, and the event as a whole will last for 90 minutes or thereabouts. So we'll have plenty of time for discussion, and you're very welcome to offer your own questions to the speakers; indeed, you are encouraged to do so. So please feel free at any time to pose questions by typing them into the comments box on YouTube. I'll be noting these as we go and posing them to the speakers at various points. So the sooner you get your questions in, the more opportunity there will be for having them addressed.

OK, I hope that's all clear, and now we'll move to our first speaker, Carina. So Carina, as I mentioned, is a research fellow at the Institute for Ethics in A.I. and a research affiliate at the Centre for the Governance of A.I. And Carina, you're one of our new appointments under the Ethics in A.I. initiative at Oxford, so it's great to have you here. I noted that you're a strong advocate for accountability in A.I. development and you've addressed the Mexican Senate on this topic, so that sounds interesting. And you were featured as one of the 35 under 35 future leaders by the Barcelona Centre for International Affairs and Banco Santander. So over to you.

Yeah, thank you. Thank you very much, Peter, for having me. So my task today is to provide an overview of human autonomy and the various ways in which A.I. might affect it.
Autonomy, as Peter already mentioned, is one of the core values in our society, and very broadly refers to our effective capacity to make decisions and to live our lives according to our own choosing and according to our own standards. So basically, autonomy points to our capacity for being self-governing agents. And the question we're exploring today in the seminar is whether A.I. systems might actually threaten this ability or capacity and, if so, in what sense they might do so.

Now, the first point I'd like to make is that when we try to assess the impacts of A.I. on human autonomy, I think it's really important to be clear on what aspect or what dimension of autonomy one is interested in. So, for example, one possible dimension concerns the effects of A.I. systems on our ability to make decisions, and especially our ability to make decisions that are in some important sense our own and that are not subject to distorting external forces. And here some of the main issues that arise in the context of A.I., as Peter already pointed out, are online manipulation and deception. These are clearly cases where autonomy is undermined by the use of A.I. systems. But also, more generally, there's the question of how A.I. systems might affect and shape our preferences. So, for example, by now there are quite a few studies that show how people change their preferences to match the recommendations they're presented with by recommender systems. And the case certainly has been made that this, to some extent, fundamentally undermines our autonomy. This is one of these cases where it's probably a bit more complicated than that, because in order to show that these systems actually undermine autonomy, one would also need to show how this is different from ordinary cases of preference adaptation that we experience on a day-to-day basis, when we ask our friends, you know, which movies they recommend, for example.

Now, I've just mentioned the potential of A.I. systems to influence our beliefs and our decisions. But there is another aspect of autonomy, which has to do rather with the execution of those decisions instead of the formation of the decisions. I remember reading an article several years ago, during the very early stages of social robot development, about some Japanese engineers who tried to build a robot which was supposed to display affection and to give lots of hugs to people. And for some reason, the robot became very, very eager, and I mean very, very eager, up to the point that it would not let you leave the room. It simply kept constantly hugging the engineer and not letting go.
Now, this is a case where the engineer had clearly made up her mind about leaving the room. So she's clearly made the decision, but the robot was actively preventing her from executing that decision. And a number of the ethics guidelines on A.I. actually bring up this dimension of autonomy and emphasise that human beings need to remain in control of A.I. systems, and especially that they need to be in control over whether, when and how tasks are outsourced to A.I.

Now, the question of control is especially a concern that has been raised in the context of autonomous A.I. systems, but also in the context of decision-making systems, too. The worry is very much that if important decisions are outsourced to A.I. systems, then humans or users might not be able to reclaim decision-making power.

Finally, another concern that is sometimes raised in the context of A.I. and human autonomy is that through this process of outsourcing an increasing number of tasks to A.I. systems, we actually make ourselves dependent on the technology, and as a result of this dependence, we might lose our ability to act autonomously. With these kinds of arguments, I think it's important that we're a bit careful not to equate human autonomy with mere independence. Feminist philosophers, for example, have quite rightly pointed out that no human is fully self-sufficient, and independence and self-sufficiency seem particularly ill-chosen as values that are worth striving for and as part of our conception of autonomy. And I also think it's important to note that dependence on technology is neither a bad thing in itself, nor does it imply a lessening of our autonomy. So a wheelchair user, for example, could be very dependent on their wheelchair, but it seems rather misguided to say that as a result of this dependence they would be less autonomous. In fact, if anything, I think the wheelchair probably increases their autonomy because of what it provides them with.

Now, this brings me to my second and last point, which is that the answer to the question of whether or not A.I. threatens human autonomy really depends very highly on the context and the system in question. So unfortunately there exists no one-size-fits-all approach, and neither are there one-size-fits-all policy responses. So, for example, it makes a big difference whether manipulation by A.I. systems is intentional or whether it's unintentional. The Cambridge Analytica scandal is an example where the attempt to manipulate voters through targeted political advertising was very much intentional; it was intended by those employing the technology.
Manipulation through, let's say, biased search rankings or through YouTube video recommendations, if it takes place, can in many cases be unintentional and can be a side effect of the recommendation algorithm in question. So that's more of a technical problem, or a problem of misspecification of the goal of the algorithm. But the kind of governance responses that we'd need to implement in order to address these two cases of manipulation, let's say both of them are manipulation, and there's of course also a lot of debate about what counts as manipulation, but let's assume these are true cases of manipulation, then the governance responses would need to look very different for the case of Cambridge Analytica and for the case of, let's say, biased Google search rankings.

And similarly, just to stay with the example of manipulation for the moment: from a moral point of view, not all cases of manipulation are equally morally reprehensible. So, for example, it makes a big difference whether I am nudged into buying more expensive car insurance or whether I am nudged into paying my taxes to the state. And similarly, it makes a difference whether my healthcare app nudges me to undergo treatment, or whether an app nudges me to spend even more time playing games on my iPhone. And so, again, we really need to assess these applications on a case-by-case basis and be especially sensitive to the context they are deployed in. I'm very glad that the remainder of the seminar is dedicated to exploring a very specific case of A.I. and autonomy, and I think it will also become very clear that even then, there remains plenty of room for disagreement.

OK, so to wrap up, I'd just like to briefly reiterate the two main points I just made. The first was that autonomy as a concept is terribly ambiguous, and that we need to be clear about which aspect of autonomy we are talking about before we can even begin to ask the question of whether A.I. poses a threat to it. The second one was that even then, we have to be sensitive to context and just have to accept that there is no silver bullet that allows us to assess all possible threats at once. Thank you very much.

You're on mute. Thank you, Carina. There we are. Very interesting. Could I just press you on a couple of questions arising from what you've said? So you talked about human versus machine autonomy, and you gave the example of a wheelchair user.
Now, I think it is quite tempting to think that in any situation there's a certain total amount of autonomy which is divided up between the various agents involved. So if you have more autonomy for the automated system, then that's less autonomy for the user. And somebody might press that a bit further with the wheelchair user, and they might say, look, the wheelchair increases the autonomy of the user, but only to the extent that it's under their control. As soon as you start putting A.I. systems into the wheelchair, so that it's making decisions, then that's reducing the autonomy of the user. So then they might try to suggest that actually there kind of is a limited amount of autonomy to go around, and therefore more autonomy for the automatic system means less for us. How would you respond?

Yeah. So first of all, I'd say that there is a fundamental difference between what we consider to be human autonomy and what we consider to be system autonomy, or autonomy as it's used in the context of autonomous systems. In the human case, I think autonomy is an intrinsic value; it plays an important role for moral agency, responsibility and so on, and is something to strive for. In the system case, there is at least no such moral agency, and so there is no autonomy in this sense. Autonomy there refers to something more like independence: independence from human operators, or, in the case of learning algorithms or machine learning, their ability to learn on the basis of experience. So basically, this means that autonomy in the case of machines is not an intrinsic value, but at most an instrumental one. By suggesting that there is some kind of shared amount of autonomy, and that there might even be a trade-off between our autonomy and the autonomy of the system, we're basically putting human autonomy and system autonomy on one and the same level. And this, as I just pointed out, cannot be right, because the two refer to very different things and fulfil very different functional roles as well. So I do think that this idea of a trade-off is somewhat misguided.

And you brought up the wheelchair example; I agree about the importance of control. But then again, this question of control is often very underspecified. It's very unclear what amount of control we would actually want in a given case. So, for example, in a car you might want to be able to brake, to be able to stop, or to choose a particular route.
Otherwise, you could say, well, I don't have the autonomy to steer the car. But in other cases, these kinds of issues would never come up. If you get on a roller coaster, you don't want to control the roller coaster, to be able to stop it or to leave the roller coaster. So maybe just this question of control is very contentious in itself, because it's unclear what it means to have control over the system. So, yeah, things get very complicated. But coming back to the trade-off, I do think that there is no such trade-off between human autonomy and system autonomy.

Right, thank you very much. Can I just follow up a bit on the concept of human autonomy? You say that it's an intrinsic value, and I think most of us would sign up to that. But is that a value that's geographically and historically universal? Or some might think it's a relatively local thing for modern Western cultures.

Yes. So this is a criticism one hears very often: the idea that autonomy has this Western, overly individualistic flavour, with the individual at the centre of action. And I think here philosophy, and in particular Western philosophy, actually has moved on quite a lot and has moved away from this individualistic notion. So on the one hand, there are now a lot of concepts of relational autonomy coming from the feminist literature on autonomy that put the relational nature of autonomy at the heart, saying, you know, we are all people embedded in our society, and we are all reliant on our social interactions. And then furthermore, there are also some procedural accounts of autonomy that identify a person as autonomous in virtue of the procedure by which the person has made a decision or come to hold certain beliefs. And these accounts are also value-neutral, so basically there are no values baked into these accounts, and they can, I think, be successfully employed across cultures. And it seems that even in countries like China, which are known to have a more collective mentality and a very different way of thinking about individual action and social action, these kinds of notions of autonomy do find resonance with the people you actually ask.

Thank you very much indeed. We will now move on to our second speaker. See you later, Carina. So, our second speaker: Jonathan Pugh. Welcome back.
As I mentioned when I introduced Jonathan, he's a senior research fellow at the Oxford Uehiro Centre for Practical Ethics. One thing I said then was that you had a project on the ethics of experimental deep brain stimulation and neuro-hacking, hacking the software incorporated into brain-computer interfaces. Can you say just a little bit more about that? It sounds intriguing.

Yeah, it was a very interesting project. So over the course of that project, I was collaborating with experts in neurosurgery and cybersecurity. And there's some emerging evidence now to suggest that it would be possible to hack the wireless elements that are incorporated into brain stimulation devices. So historically, it's been established that it's possible to hack pacemakers, cardiac pacemakers, and now wireless systems are being incorporated into things like deep brain stimulation. This raises some interesting questions philosophically, as well as just generally for society.

Do you mean hack for evil purposes? Some third party interfering with it?

It technically could be possible. So there have been studies showing that it would be possible to hack a cardiac pacemaker to potentially stop it from functioning. That hasn't yet been fully established for neuro-hacking, but the technical possibility is certainly something that researchers are interested in and certainly worried about. So there's a lot of interest in how we can develop the security of these devices far more robustly.

Mm hmm, very interesting. And today you're going to talk to us about a particular example of autonomy, or the lack of it, or threats to it, in healthcare. But you're more on the positive side.

I think so, but we'll see how we develop over the course of the evening.

Good, over to you.

Okay, thank you, Peter. Let me just share my screen. Good luck. Hopefully that's come up. So Carina has given us a fantastic introduction to the various ways in which we can think about autonomy and how that's implicated in A.I. systems. We decided to look at a particular case study to focus our discussion: we're going to use the case study of mHealth tools. And I think that's a really interesting case to think about, because these mHealth tools, which I'll introduce a little later, represent something of a middle ground, if you like, between questions in the burgeoning field of A.I. ethics and the field of bioethics, which is perhaps a little more well established and which is my home discipline. So in the first part of the talk, I'm going to say a little bit about how I think about autonomy in bioethics.
And fortunately, Carina and I agree on most things, so I can move over that fairly quickly. But I'm going to present a visualisation, if you like, of how you can view the concept of autonomy, and draw on that to make some arguments in favour of the idea that mHealth tools can be an A.I. system which might be used to enhance autonomy.

So let's start with that. As Carina mentioned, the fundamental concept underlying personal autonomy is this property of self-governance. One thing I would like to add to that is this point that we can think about autonomy in two different ways. We can first talk about it at a local level: so we might be interested in whether an individual is autonomous with respect to a particular decision or a particular action. But we can also be interested in whether an individual is autonomous over extended periods of time, perhaps with respect to a long-term goal or project. Now, one reason it's important to notice this distinction is that sometimes our global autonomy can demand quite different things of us than our local autonomy. Sometimes, when we're left to our own devices in making decisions about how to achieve our long-term goals, we might in fact be less successful than if we perhaps outsource some of those decisions. So that's going to be a really important point, I think, for thinking about the overall implications of A.I. for autonomy.

Now, as Carina suggests in her presentation, there are at least two dimensions we can talk about when we think about autonomy. In bioethics, there's a fair tradition which really emphasises the importance of freedom and ability, of being able to self-govern, to autonomy. So the key question here is going to be: am I able to act on the basis of my own decisions? And that's, again, going to be really important when we think about the implications of A.I., for the primary reason that A.I. systems promise to hugely enhance our abilities to do the things that we want to do. Luciano Floridi and colleagues capture this idea really nicely, I think, in a recent paper. They wrote that, put at the service of human intelligence, A.I. can enhance human agency: we can do more, better and faster thanks to A.I. And that's really an important point, I think, coming into this conversation. But of course, again, drawing on Carina's earlier presentation, that's not all that autonomy amounts to. As well as being able to do the things we want, it's also crucially important that we're autonomous with respect to our decisions about what it is that we want to do in the first place. And there are some different elements that can feed into that.
Of course, in order to make autonomous decisions, I need to have sufficient understanding of what my options are like. That's a point that's familiar from the medical context; it provides the foundation for the doctrine of informed consent. And there's also essentially this reflective element: we might think that autonomous decision-making must be grounded by motivational states that somehow reflect my values.

Now, once we have that quite general and somewhat basic framework in mind, I think we can begin to see how different threats to autonomy feed into the different elements involved. So deception and information manipulation, it seems to me, are ways in which we can undermine the level of understanding that autonomy might require. And as Carina mentioned, in a way that's going to be a particularly important source of threats to autonomy: we've seen how the prevalence of fake news and the development of filter bubbles and echo chambers really pose threats to the kind of understanding that autonomous decision-making requires. Other kinds of interference perhaps target the reflective element of autonomy, perhaps nudging us into making decisions we wouldn't reflectively endorse, or inducing addictive forms of behaviour. So I'm just going to reiterate Carina's conclusion: absolutely, the answer we can give to this general question has to be "it depends". A.I. systems can both enhance and diminish these different aspects of autonomy. Now, what I hope visualising that framework will help us to do is to bring out this point that the very same application of A.I. can enhance some elements of autonomy whilst at the same time threatening others. So I think with respect to these applications, there can be trade-offs about the kind of effects they're having.

So with that framework in mind, let's now turn to the case study of mHealth tools. Very broadly speaking, mHealth tools are mobile and wireless technologies that support the achievement of health objectives. These can take a wide array of forms; I'm going to use just two examples for the purposes of the presentation. First, Fitbits, which I'm sure many of you are probably familiar with. After all, there are roughly 28 million active global users of these things. So a Fitbit, pictured here, is essentially a small bracelet that can monitor your levels of physical activity, counting the number of steps you take, for instance, and also certain health parameters such as your heart rate. Other mHealth tools are targeted more specifically at individuals living with particular medical conditions. So the mySugr app there is targeted at individuals living with diabetes.
It allows users to monitor their blood glucose levels and also to log their eating activity. Now, the earliest iterations of these mHealth tools were essentially monitoring devices, but they are increasingly becoming more and more sophisticated, and they're beginning to integrate A.I. systems so that they can tailor personalised recommendations to their users. And the more sophisticated they become, the more likely it is they're going to raise some of the issues about autonomy that we've already seen coming up in our discussion so far.

Now, what I want to do is draw on the framework I talked about a few minutes ago to give some reasons for thinking why mHealth tools might be the kind of A.I. system which could potentially be used to enhance our autonomy. Later on, when Jess gives her presentation, I think she's going to raise some points about why I'm a little misguided about this, but hopefully I can give a positive case in favour first.

So the first argument draws on this point that mHealth tools, it seems, could be used to enhance the user's understanding. As I mentioned in the framework earlier, an autonomous decision-maker is typically going to be an informed decision-maker: in order to make autonomous decisions, we have to have some understanding of what our options are like; we need to know how we can go and apply our values in the world. And when they're functioning properly, mHealth tools can give us a range of information which might be really quite useful for how we make decisions concerning our own health. So Fitbit, of course, can give you real-time information about your physical activity and your heart rate, and that can really be useful when you're trying to plan an efficient exercise regime. Of course, we are now fully aware that physical activity and exercise is hugely important for our overall health.

Now, of course, that argument does have to be caveated in a number of ways. And one crucial way is that it assumes that the information that we are receiving from mHealth tools is always going to be accurate. That's perhaps a slightly problematic assumption at the moment, because there are a number of reasons why consumer-grade monitoring devices won't necessarily give you highly accurate information. So that is absolutely a caveat. However, the hope might be that this is a technological bug which over time could be overcome, and the general point remains that if we can enhance understanding using these tools, so much the better for individual autonomy.
The second argument moves away from the autonomy of our decisions over to the more practical dimension of autonomy I mentioned earlier. So one of the big obstacles we face when trying to achieve long-term health objectives is maintaining adherence to healthy behaviours. Again, let me just focus on the case of exercise. We all know that regular exercise is important for health, but many of us don't particularly enjoy it; vigorous exercise can be a very unpleasant experience. So one of the key goals of trying to get people to engage in more physical activity is trying to ensure that we can maintain adherence. And so what we see in these more sophisticated mHealth tools is that they have started to use motivational strategies to help ensure that users are achieving the long-term goals that made them sign up for the tool in the first place. So Fitbit, for example, will now offer badges once you've passed certain milestones, almost literally in this case. The mySugr app uses an even more complex kind of motivational strategy, if you like: it gamifies defeating diabetes. So the app displays diabetes as a monster that you have to try and tame by achieving certain healthy behaviours. So there is a sense in which these strategies can help users achieve their global health goals; remember, I made that distinction between local and global autonomy.

But again, there's a caveat here, and this again draws on some of the remarks Carina made. We might worry that the motivational strategies that these tools are using are perhaps manipulative at the local level: they're getting users to engage in individual decisions and actions which they perhaps wouldn't reflectively endorse. Now, I think there are two really important questions we have to ask here. The first is: are the motivational strategies themselves manipulative? Are they nudges that circumvent our reflective capacities? That's perhaps a question that we can come back to in discussion. But the second question, which I think is equally important but perhaps sometimes overlooked, is: how much does this matter from the perspective of autonomy? And this is where the distinction between local autonomy and global autonomy becomes really quite important. As I mentioned, from the perspective of our global health goals, these motivational strategies might be highly effective in getting us, or helping us, to achieve the goals that we want to achieve. And so we might feel that if there are problems at the local level, these could perhaps, from the perspective of autonomy, be outweighed by the manner in which they provide a boost to our global autonomy.
The third and final argument I want to highlight relates to this narrative of empowerment that has surrounded mHealth tools. Now, Jess is going to go into a lot more detail about this, but the general idea, I take it, is that these tools have been marketed in a way which suggests they're going to empower patients, to give them responsibility for their own healthcare. Now, one thing I think is interesting about that is that generally, when we think about autonomy, we tend to think that giving people more responsibility to make their own choices is one way of fostering their autonomy. That point will be familiar to anyone who's been involved in raising young children: we try and foster their capacity for autonomy by giving them more and more decisions as they grow older. How applicable is that to the healthcare context? Well, I think this is something we're going to talk about. I think one of the interesting things about this development is that in medical ethics, traditionally, the patient's autonomy has to a large extent been the responsibility of the physician: part of their duty of care is to ensure the patient is able to make autonomous decisions about their own healthcare. Perhaps these mHealth tools are one way in which we're finding a shift away from that physician-based responsibility for patient autonomy. That's an interesting development, and we have to weigh it against the preferences of those who perhaps don't wish to have that kind of responsibility. With responsibility for healthcare choices comes accountability, and there are, of course, problems with that. Jess is going to raise some very interesting points about this. Now, as a preliminary, I would suggest there is one way of viewing that narrative, through a lens in which it can be understood as enhancing autonomy in one quite general sense. So those are the arguments I wanted to present, and now back to Peter.

Thank you very much, Jonathan. Wait a minute. Sorry about that technical glitch. Thank you very much indeed, Jonathan, that was very interesting. Could I ask you just a couple of questions that might help to shed light on some of what you're saying? So, do you think in general that providing users with more accurate information, for example generated by A.I. systems, will pretty much always enhance their autonomy?

Yeah, that's a really interesting question, Peter. I mean, traditionally in medical ethics there has been this view that as soon as we can get information to patients, then the more the better. But we've seen that that's actually, of course, hugely problematic.
And that is going to be exacerbated in the context of A.I. The first problem is the sheer amount of content that we can give to users with these mHealth systems. You know, these systems can generate huge amounts of data, and providing all of that to users would simply overwhelm them. So the first problem we're going to face is that we need some way of filtering the information. And as soon as we introduce filtering, we introduce the possibility also for some kind of influence, because any decision about how to filter the information we give to users is always going to involve some kind of value judgement. So that's one problem, about the content of the information we provide. The second is the manner in which information is presented. And again, this is a problem that has become more and more apparent in medical ethics over recent years: it is extremely difficult to present even seemingly very scientific medical information in an entirely value-neutral way. So framing effects are a good example of this: you can influence the weight that information is afforded in an individual's decision-making by the manner in which you present it. So providing information is not always going to enhance autonomy, for the simple reason that simply providing information doesn't entail enhancing understanding.

Right, thank you very much. That's raising some really interesting issues. I'm going to postpone a follow-up question there until everybody's on board, because I think I'll be interested to see what you all have to say about it. Different question: A.I. obviously is a new thing in the world, and it's making a lot of noise, but it's not the first technology that's raised the spectre of manipulation. Does it raise fundamentally new questions, then?

Yeah, it's a question that I think about. To be honest, in some ways I think the answer is no. And you touched on this, Peter, in your introductory remarks, because A.I. systems, when they do influence or manipulate us, are taking advantage of the same flaws in our decision-making that many other technologies have already taken advantage of. And you alluded to the fact that advertisers have known about these flaws in our decision-making for many, many years now. So in that way, the forms of influence aren't necessarily new. But the threat posed by A.I. systems is, I think, undeniably greater. The forms of manipulative interference are more pervasive, and they're also likely to be much more effective because of their highly targeted and personalised nature. And again, that's something you alluded to.
Now, for me, I think that raises interesting questions about manipulation. But perhaps the more theoretical question it raises is that it draws these really quite tight links between the concepts of privacy and autonomy. Up to this point, in practical ethics generally and certainly in medical ethics, these two moral concepts of autonomy and privacy have really been separated as quite discrete ethical concepts. Now, in theory, that might still be true, but I think what we're learning is that in the practice of A.I., these two concepts are going to be much more symbiotic, so any investigation into one really has to pay close attention to the other.

Very interesting. You may be aware we had an event recently which was a launch event for my colleague Carissa Véliz's book, Privacy Is Power.

Yes, absolutely. Carissa is my office mate, so we had some great conversations about that.

I see, you will have discussed lots about this. Yes, and I suppose that's the book. Yes, indeed. These are interesting issues, and they're so interlocked, aren't they? Which is one of the strengths of the Ethics in A.I. initiative here, that all these things are coming together in a really beautiful way. Well, thank you very much, Jonathan. And now we move on to Jess, Jess Morley.

Hello.

Yes, welcome back. As I mentioned when introducing you at the start, you're policy lead at the DataLab at the University of Oxford, and you're also doing a doctorate in the Digital Ethics Lab. But previously you were technology adviser at the Department of Health and Social Care and subject matter expert at NHSX, which is a bit of a tongue twister, and you co-authored the NHS Code of Conduct for data-driven healthcare technologies. I'd love to hear just a word about the contrast between working in the NHS and then coming and doing research at Oxford.

Yeah, it's an interesting one. One sort of led to the other, I guess. I had been in Oxford previously to do my undergrad, and then I went and worked for the NHS straight away. I was supposed to be there for six weeks, and then I ended up staying for about six years. But it was when we were developing all of these policies with regard to how you use data, and in particular how to use machine learning and A.I. in healthcare contexts responsibly, that I sort of said, I'm not really sure we have enough knowledge in policymaking settings in order to be able to make these decisions. And so my solution to that was to come back to Oxford and sort of try and generate some of the knowledge and understanding myself.
I suppose the contexts are slightly different. When you're in an academic context, you have a lot of freedom of thought, so you can almost think in a constraint-less manner, and you can't really do that when you're in a policymaking context, because there are so many constraints that you have to think about. You have to think about stakeholder engagement. You have to think about interlocking policies and how they interact. You have to think about relationships between other departments. And you also have to balance quite complicated trade-offs. So, as we've seen throughout the coronavirus pandemic, you know, trade-offs between promoting public health versus promoting the economy, and sometimes these are presented as dichotomous when they're not necessarily. But you don't have to think about those kinds of things in so much detail in an academic context; it's still OK to just think about one concept and think about it in quite a narrow way. So I suppose that's the main difference.

It's really interesting. I mean, you're getting both perspectives there to a rather extreme degree. Philosophy is one of the subjects, perhaps more than others, which promotes thinking all sorts of weird and wonderful things and strange thought experiments. And then, on the other hand, healthcare must be one of the areas where you're most constrained by considerations of cost and politics and public acceptability and all sorts of other things. Anyway, over to you, Jess.

Thank you. I will share my screen. Hopefully you can all see that. So I'm going to basically pick up on the point that Jonathan made right at the end of his talk in response to a question from Peter, that information does not automatically enhance a person's understanding, and explore why this has implications for autonomy, in particular with regard to the empowerment narrative and how mHealth tools or health apps are positioned in the world of policymaking as part of an overarching health strategy. Basically, I'm going to be the killjoy for the next ten or fifteen minutes or so, and then we can have a discussion about whether or not I'm right to be the killjoy or whether I should in fact be more positive. This is all largely based on a paper that I wrote with Luciano Floridi, who Jonathan also mentioned, called The Limits of Empowerment.

So, to start with, I think it's important to note that the empowerment narrative is very heavily entrenched in policymaking documents throughout the NHS, and has been for probably about the last 10 or 15 years.
But it's become considerably more pronounced in the last five years, as we have seen the rise in the use of wearables and apps, etc. And this is a quote from the previous Secretary of State, Jeremy Hunt, not the current one, Matt Hancock, making a speech in September 2017. And as you can see in the latter half of that sentence: all of this is likely to be a big shift in power from doctors to patients, as patients use technology to put themselves in the driving seat of their own healthcare destiny. And this really sums up the argument that's underpinning the "empower the person" narrative: this idea that if we give people empowering tools, and we give them access to more information, and we give them access to algorithms that can make their healthcare personalised and predictive and preventative, then their autonomy will be enhanced and they will be better equipped to take care of their own health.

This idea isn't new. This idea of empowering people to take hold of their own healthcare has existed since about the 1970s; you can see in that graph it starts to pick up around the 1970s, and it was present in sort of neoliberal health policy. But it really boomed in the late 1990s and the early 2000s, in part in reaction to the sustainability crisis faced by the NHS in the 1990s and 2000s. Especially now, the population the NHS is dealing with is vastly different to what it was dealing with 70 or so years ago. We have people who are much older. We have people with much more complicated health needs. People tend to be dealing with chronic conditions rather than acute conditions, and things that require long-term care rather than acute, short-term, very serious illnesses or injuries. And that started to put pressure on the resources of the healthcare system. And so this started to prompt the government and policymakers to pivot towards a proactive paradigm of care. So rather than being reactive once somebody was sick, they encouraged the shift towards healthcare that is sometimes known as P4 medicine: personalised, preventative, predictive and participatory. And it's often in that context that we hear arguments about the use of A.I.: so A.I. will know everything about you, and it will be able to predict when you're going to get sick, based on your digital phenotype, on all of the information around you as well as your genetics, and we will basically be able to intervene and stop everyone from getting sick.
453 00:49:04,000 --> 00:49:08,070 This argument, if you follow it through, becomes quite extreme in some cases, 454 00:49:08,070 --> 00:49:14,700 but it's particularly relevant for the use of health apps, this idea that the data is generated only on you, 455 00:49:14,700 --> 00:49:20,940 it's your personal information, and it's giving you personalised recommendations. 456 00:49:20,940 --> 00:49:30,090 And this shift from reactive to preventative medicine sort of coincided with the Internet becoming commercially available. 457 00:49:30,090 --> 00:49:38,040 And this led to info-liberal arguments. So this idea, which is partly the idea that Jonathan was just picking up on, that 458 00:49:38,040 --> 00:49:42,960 giving patients more access to information in a sort of gatekeeper-less way, 459 00:49:42,960 --> 00:49:51,810 so democratising access to information via the Internet, would lead to the emergence of what's known as a generation of expert patients. 460 00:49:51,810 --> 00:49:55,890 And these patients would have a greater level of self-esteem. 461 00:49:55,890 --> 00:50:00,480 They would be empowered to be active participants in their own healthcare, 462 00:50:00,480 --> 00:50:04,710 and it would ultimately result in them having a better quality of life. 463 00:50:04,710 --> 00:50:15,300 And the arrival of mHealth, the later adoption of the Internet and the mobile web, and particularly wearables, 464 00:50:15,300 --> 00:50:20,760 has resulted in this empowerment narrative taking off even more than it ever did before. 465 00:50:20,760 --> 00:50:27,450 But the problem is, in all of these policy documents, ranging from the early narratives, 466 00:50:27,450 --> 00:50:37,750 even from the sort of late 90s National Programme for IT, right through until last year's, 2019, 467 00:50:37,750 --> 00:50:43,750 iteration of the NHS strategy, it never actually says 468 00:50:43,750 --> 00:50:53,390 how the data empowers people. If we take the word empowerment to literally mean a transference of power, so that you are able to, 469 00:50:53,390 --> 00:50:57,970 you know, be the person who is responsible for taking care of your own health, 470 00:50:57,970 --> 00:51:05,530 how does this actually work? And that's when we start to see that there are limitations to this argument. Here 471 00:51:05,530 --> 00:51:10,090 you can see in the icon a reflection of me as the self, and there's my data self as well. 472 00:51:10,090 --> 00:51:14,470 And this is why I find the context of healthcare 473 00:51:14,470 --> 00:51:18,670 and A.I. so interesting, because healthcare is very physical. 474 00:51:18,670 --> 00:51:22,270 It's very kinaesthetic. It's like, I understand my body. I can touch my body. 475 00:51:22,270 --> 00:51:27,660 I can feel it. I know if I have taken some medicine, I know if I've had an operation; 476 00:51:27,660 --> 00:51:35,920 it's something that is quite defined, and you can quite easily conceptualise it in your mind. 477 00:51:35,920 --> 00:51:44,350 The problem with A.I. and with health tools is that they're not necessarily operating on your physical self; they're operating on your digital self. 478 00:51:44,350 --> 00:51:48,040 And the idea with this empowerment narrative is that you turn what I call 479 00:51:48,040 --> 00:51:49,950 the digital medical gaze on yourself.
480 00:51:49,950 --> 00:51:58,030 This idea of: oh, I look at my phone, my phone has told me that I took 10,000 steps today, but I could probably take 15,000 tomorrow, 481 00:51:58,030 --> 00:52:03,970 and that will make me better. Or: oh, I ate too much chocolate today, and maybe I should eat some more broccoli tomorrow. 482 00:52:03,970 --> 00:52:10,510 It is this idea that you look down at your phone, you're looking at a version of your data self, 483 00:52:10,510 --> 00:52:19,300 your data self highlights to you the limits of your health, and then that automatically triggers a response in you to make better healthcare decisions. 484 00:52:19,300 --> 00:52:24,340 But there are many potential barriers to that actually working. 485 00:52:24,340 --> 00:52:33,110 For one thing, we don't know who set the baseline. You can only be unhealthy or healthy depending on what the definition of healthy is. 486 00:52:33,110 --> 00:52:41,350 And that is always going to be a value-laden decision. There is no such thing as a neutral definition of 487 00:52:41,350 --> 00:52:46,270 what the perfect weight is, for example, or what the perfect diet is. 488 00:52:46,270 --> 00:52:57,340 These things all have value-laden implications, but it's not often clear to people that those decisions have been made in a value-based way. 489 00:52:57,340 --> 00:53:05,590 And it's also not clear to people what those baselines are. The ten thousand steps thing, for example, is largely based on the logic that 490 00:53:05,590 --> 00:53:08,320 it's better for people to move more. 491 00:53:08,320 --> 00:53:17,170 There is no real scientific evidence suggesting that 10,000 steps is the optimum, the peak number of steps that people should be taking. 492 00:53:17,170 --> 00:53:20,710 If you have taken 10,000 steps a day every day of your life, 493 00:53:20,710 --> 00:53:26,410 taking 10,000 steps tomorrow is not necessarily going to make you fitter or make you lose weight. 494 00:53:26,410 --> 00:53:32,590 And we have to expose the fact that these things can be politically driven. 495 00:53:32,590 --> 00:53:37,990 There might be political reasons, there might be nefarious reasons, why one baseline is set rather than another. 496 00:53:37,990 --> 00:53:43,390 And we should also be aware of the fact that sometimes these baselines are biased. 497 00:53:43,390 --> 00:53:48,880 Often we talk about the fact that healthcare data or A.I. algorithms can be biased. 498 00:53:48,880 --> 00:53:57,520 We don't always talk about the fact that healthcare itself is often biased. We know more about people who use healthcare systems more. 499 00:53:57,520 --> 00:54:02,120 We also know more about people who have been prioritised in the healthcare system more. 500 00:54:02,120 --> 00:54:08,640 And we often know more about people who look like me rather than people who might have different colours of skin, 501 00:54:08,640 --> 00:54:16,300 who might come from different ethnicities. And all of those baselines based on me get translated into mHealth tools 502 00:54:16,300 --> 00:54:20,920 and are used to police the behaviour of people who come from different contexts, who have different bodies, 503 00:54:20,920 --> 00:54:24,910 who have different health needs, and who have different cultural contexts. 504 00:54:24,910 --> 00:54:32,260 And yet at the same time, we are still arguing that they are being empowered to improve their own health
505 00:54:32,260 --> 00:54:37,720 based on a baseline over which they had no control. And you never see this happening; 506 00:54:37,720 --> 00:54:41,810 it's all happening on your data self. It's never happening on your physical self. 507 00:54:41,810 --> 00:54:47,800 And this undermines what can be called your integrity of self. So, do I understand what is happening to me, 508 00:54:47,800 --> 00:54:53,680 what decisions are being made about me, and why I am being nudged to move in a particular way? 509 00:54:53,680 --> 00:55:00,160 And so ultimately, these health tools and apps are effectively policing people's behaviour, 510 00:55:00,160 --> 00:55:08,860 and these values become internalised and constantly reinforced by push notifications and by the buzzes and other gamification features, 511 00:55:08,860 --> 00:55:15,040 all of which are supposed to be motivating but are actually making people complicit in their own self-surveillance. 512 00:55:15,040 --> 00:55:20,500 And you are basically enslaving people to this idea that I must improve myself 513 00:55:20,500 --> 00:55:26,020 constantly, rather than empowering them, because healthy has become institutionalised. 514 00:55:26,020 --> 00:55:31,240 This idea of behavioural norms obligates people to constantly improve their health. 515 00:55:31,240 --> 00:55:37,250 So you are never healthy. You're always somewhat sick. You're always somewhat incomplete. 516 00:55:37,250 --> 00:55:42,020 And you are always supposed to be striving for continuous improvement. Oh, you can do 10 burpees. 517 00:55:42,020 --> 00:55:46,040 Can you do 12? Can you do fifteen? Can you run a marathon? 518 00:55:46,040 --> 00:55:55,020 Now, can you run an ultra-marathon? Can you eat only green vegetables and kale rather than green vegetables, kale and maybe a chocolate biscuit? 519 00:55:55,020 --> 00:55:57,950 We're always trying to improve people. 520 00:55:57,950 --> 00:56:06,350 And so rather than enhancing people's autonomy, what we can actually see is that mHealth apps start being seen as hyper-nudging. 521 00:56:06,350 --> 00:56:14,600 They are reducing people's autonomy, altering and replacing this idea of self-improvement with libertarian 522 00:56:14,600 --> 00:56:19,940 paternalistic algorithms that alter the presentation of the digital self to nudge people 523 00:56:19,940 --> 00:56:24,380 into taking predetermined actions which they have not themselves decided to take. 524 00:56:24,380 --> 00:56:27,890 And these decisions are made by policing people against baselines 525 00:56:27,890 --> 00:56:35,260 which individual people were not involved in setting or deciding, or knowing whether they were the right ones for them. 526 00:56:35,260 --> 00:56:39,820 So this comes to the point that Carina was making right at the beginning: 527 00:56:39,820 --> 00:56:45,100 what is the difference between procedural and relational autonomy? Health apps 528 00:56:45,100 --> 00:56:53,800 and this idea of empower the person are largely based on the procedural definition of autonomy, which is a very narrow definition of autonomy. 529 00:56:53,800 --> 00:56:56,710 And it puts all of the attention on the decision. 530 00:56:56,710 --> 00:57:04,480 So I have empowered a person to know that potentially their behaviour is not very healthy and that they could make a difference; 531 00:57:04,480 --> 00:57:09,130 they could make a decision to behave differently, and that would make them healthier.
532 00:57:09,130 --> 00:57:17,800 But this has got nothing to do with whether or not that person actually has the ability, and therefore the power, to make that change. 533 00:57:17,800 --> 00:57:23,380 Let's give an example. If we tell someone you should be moving more, you should be doing more exercise, 534 00:57:23,380 --> 00:57:27,100 we don't know if that person has access to a gym. 535 00:57:27,100 --> 00:57:32,380 We don't know if they have small children at home that they cannot leave if they need to go for a run. 536 00:57:32,380 --> 00:57:39,100 We similarly don't know about people's food: whether people have easy access to fruit and vegetables and fresh food, 537 00:57:39,100 --> 00:57:44,260 or whether the only food that is accessible and affordable to them is a McDonald's and a Mars bar. 538 00:57:44,260 --> 00:57:51,340 We don't know all of these things, what are sometimes known as the social determinants of health, over which people have 539 00:57:51,340 --> 00:57:56,650 limited control and which are really the responsibility of public health programmes. 540 00:57:56,650 --> 00:58:03,520 And you cannot be saying and suggesting that people's autonomy has been enhanced if we have told them and 541 00:58:03,520 --> 00:58:09,580 policed them and made them feel guilty and responsible for their own ill health through these 542 00:58:09,580 --> 00:58:14,190 hyper-nudging apps if they actually have no power and ability to change it. 543 00:58:14,190 --> 00:58:21,190 Instead, we are making that person feel an overwhelming level of guilt and responsibility, which can in fact harm their autonomy, 544 00:58:21,190 --> 00:58:26,530 particularly if they don't really understand why they have been told to take this action in the first place, 545 00:58:26,530 --> 00:58:31,570 because it has been done on their data self, not on their physical self. 546 00:58:31,570 --> 00:58:40,660 And so ultimately, these things could in fact be really limiting of a person's autonomy, because they are acting on procedural autonomy. 547 00:58:40,660 --> 00:58:44,170 And really what we should be talking about is relational autonomy: 548 00:58:44,170 --> 00:58:49,480 people's ability to make autonomous decisions in their wider context. And mHealth 549 00:58:49,480 --> 00:58:54,910 apps have no real implications for this wider relational autonomy. 550 00:58:54,910 --> 00:59:00,760 But we make them feel like they do. And that undermines people's autonomy instead. 551 00:59:00,760 --> 00:59:05,680 And so finally, to speak to some of the points that Jonathan was making about informed consent: 552 00:59:05,680 --> 00:59:11,300 informed consent is at the heart of ethical care and current medical practice. 553 00:59:11,300 --> 00:59:17,920 So this idea that, you know, I am autonomous, I make an autonomous decision about my healthcare because I fully understand the 554 00:59:17,920 --> 00:59:21,970 risks and I fully understand the benefits and I understand those trade-offs. 555 00:59:21,970 --> 00:59:29,560 And the doctor can explain them to me. And I know that the doctor has explained them correctly because there is evidence proving that this works: it has been 556 00:59:29,560 --> 00:59:35,950 tested in a randomised controlled trial, and the particular harms that I might face have been reported on.
557 00:59:35,950 --> 00:59:41,530 So I can make a genuinely autonomous decision that I am happy to accept the risks 558 00:59:41,530 --> 00:59:46,270 because I believe that they are outweighed by the benefits. In the world of health apps, 559 00:59:46,270 --> 00:59:50,440 this doesn't happen. Health apps are poorly regulated. 560 00:59:50,440 --> 00:59:56,380 They're poorly governed. Very few of them actually have evidence showing that they work. 561 00:59:56,380 --> 01:00:02,890 This is because there is currently no internationally agreed upon standard or specific regulatory 562 01:00:02,890 --> 01:00:08,110 or accreditation system that is designed to protect individual consumers from the risks. 563 01:00:08,110 --> 01:00:12,970 If you go on the App Store, on which there are more than 300,000 apps, 564 01:00:12,970 --> 01:00:18,160 you will see them all making these wild claims about how they can improve your healthcare. 565 01:00:18,160 --> 01:00:22,150 Very few of them mention risks. If they do mention risks, 566 01:00:22,150 --> 01:00:28,300 it tends to be a disclaimer, which is a legal requirement in order not to be classed as a medical device. 567 01:00:28,300 --> 01:00:32,500 And that disclaimer will say: this is not a medical device, 568 01:00:32,500 --> 01:00:39,670 do not take this advice to be medical advice, if you need medical advice, see a doctor. But it is often buried in very, very 569 01:00:39,670 --> 01:00:45,850 small print, and it doesn't take into account other types of harms, in particular harms to people's autonomy, 570 01:00:45,850 --> 01:00:50,650 such as their feelings of self-efficacy, their self-esteem. 571 01:00:50,650 --> 01:00:55,300 If this app has promised me that I would lose weight and I didn't, I must be a failure. 572 01:00:55,300 --> 01:01:02,590 If this app has promised me that it will reduce my pain levels and it doesn't, maybe I'm just really sensitive to pain and I'm being a wuss. 573 01:01:02,590 --> 01:01:06,150 We don't have the evidence that these things work. 574 01:01:06,150 --> 01:01:11,080 There are exceptions. There are some exceptions. There are some apps that are really well tested. 575 01:01:11,080 --> 01:01:17,860 They are reported on in public. But that is the exception rather than the rule. 576 01:01:17,860 --> 01:01:24,310 And so if we don't have evidence of efficacy, that they are actually working, we don't know whether the descriptions are accurate, 577 01:01:24,310 --> 01:01:29,500 we don't know whether the risks are balanced with the potential benefits. 578 01:01:29,500 --> 01:01:34,680 How can somebody really be giving informed consent to use an mHealth tool? 579 01:01:34,680 --> 01:01:39,890 Therefore, how can they really be making an autonomous decision that this is what they want to do? 580 01:01:39,890 --> 01:01:44,190 And so all of these things are concerning. But we can be positive. 581 01:01:44,190 --> 01:01:50,400 You know, Jonathan has shown us the positives; not everything is terrible. And I don't want to be a killjoy all the time. 582 01:01:50,400 --> 01:01:57,870 I love data and health and apps, and that's why I got into this space in the first place, because I think the potential for transformation is huge. 583 01:01:57,870 --> 01:02:02,880 But we need to be questioning, and we need to reframe how we position these things. 584 01:02:02,880 --> 01:02:09,570 They shouldn't be the centre. They shouldn't be the be-all and end-all. And mHealth apps are not going to be the solution to everything.
585 01:02:09,570 --> 01:02:14,040 And there are a couple of things that we should be doing at a policy and strategic level, making 586 01:02:14,040 --> 01:02:19,200 decisions and reframing issues in order to help us think a little bit more critically. 587 01:02:19,200 --> 01:02:25,590 First of all, we should think about the informational environment, or your infosphere, as being a social determinant of health. 588 01:02:25,590 --> 01:02:30,630 Not everyone is given equal access to information. 589 01:02:30,630 --> 01:02:35,760 Not everyone is given equal access to accurate information. 590 01:02:35,760 --> 01:02:40,620 Different people have different levels of ability to judge what is accurate and what is trustworthy. 591 01:02:40,620 --> 01:02:46,500 And this includes health apps: some health apps come up with wildly inaccurate information. 592 01:02:46,500 --> 01:02:50,010 There's a really great study, you can find it online, which showed, for example, 593 01:02:50,010 --> 01:02:56,820 that there are a number of apps which were recommending to people who are suffering from very severe psychological issues that they should, 594 01:02:56,820 --> 01:03:00,190 in fact, treat this by drinking hard liquor. 595 01:03:00,190 --> 01:03:08,770 This is not a medically recommended treatment, but people don't necessarily know how to judge those types of information. 596 01:03:08,770 --> 01:03:15,000 That's an extreme example, but there are more nuanced examples. And potentially we should think about whether, in fact, 597 01:03:15,000 --> 01:03:22,140 the role of data and the role of health apps, the collection and analytics and algorithms and all of this in the healthcare system, is 598 01:03:22,140 --> 01:03:25,740 in fact P4 medicine, so that personalised, predictive, 599 01:03:25,740 --> 01:03:34,950 preventative, participatory type of medicine, or whether it is in fact better targeted at improving population health. And so should we in fact 600 01:03:34,950 --> 01:03:43,080 shift the narrative and the level of abstraction, or the way we analyse this problem, from empower the person to enable the group? 601 01:03:43,080 --> 01:03:46,260 And finally, this is currently a buyer-beware market. 602 01:03:46,260 --> 01:03:53,370 We should make people who are using these health apps better aware of the fact that there are limitations to their use. 603 01:03:53,370 --> 01:03:57,000 We should make them aware of what evidence there is that they work. 604 01:03:57,000 --> 01:04:03,930 We should make them aware of whether an app has been tested on one group of people but not on another group of people. 605 01:04:03,930 --> 01:04:09,360 And we should just be more transparent, and we should be holding people who make these health apps accountable to the same 606 01:04:09,360 --> 01:04:15,840 standards that we hold other people who make medical devices accountable to. 607 01:04:15,840 --> 01:04:26,000 And that is it. Thank you very much. I'm looking forward to the discussion. 608 01:04:26,000 --> 01:04:30,230 Thank you very much. I'm just trying to restart my video. 609 01:04:30,230 --> 01:04:36,200 There we go. Thank you, Jess. That was really interesting; powerful stuff in there. 610 01:04:36,200 --> 01:04:49,420 Can I ask you for one point of clarification? So a natural answer to quite a lot of what you've said,
611 01:04:49,420 --> 01:04:54,620 or a start, at any rate, in responding to it, would be to say: look, 612 01:04:54,620 --> 01:05:02,250 there's a distinction between devices that give information and devices that give advice. 613 01:05:02,250 --> 01:05:08,970 And when I go and stand on my bathroom scales, they just give me information. 614 01:05:08,970 --> 01:05:13,130 They don't tell me, you know, Peter, 615 01:05:13,130 --> 01:05:21,330 you shouldn't have that extra drink or something. Now, do you think that that's OK, that's harmless, 616 01:05:21,330 --> 01:05:26,440 and it's when these systems advise us what to do 617 01:05:26,440 --> 01:05:33,240 that they're, as it were, taking things for granted about us, perhaps usurping our autonomy and so forth? 618 01:05:33,240 --> 01:05:43,610 Would you be happy if the devices we wore on our wrists, etc., just gave information about our heart rate and all the rest, 619 01:05:43,610 --> 01:05:53,940 rather than combining it with advice? I think that's an interesting question, and it's a really important thing to investigate. 620 01:05:53,940 --> 01:06:00,740 I think it's partly about the way that information is presented, even when it is information only. 621 01:06:00,740 --> 01:06:05,670 Why were you doing it? Why were you looking at that particular piece of information? 622 01:06:05,670 --> 01:06:10,290 And it also comes back to Jonathan's point about the fact that information can be presented 623 01:06:10,290 --> 01:06:16,110 in different ways, which alters whether or not it does harm to a person's autonomy. 624 01:06:16,110 --> 01:06:21,500 And so if I look at the scales and I'm not particularly bothered how much I weigh, 625 01:06:21,500 --> 01:06:30,530 I just need to know because I might be entering a competition that requires that I know how much I weigh, 626 01:06:30,530 --> 01:06:37,930 then that's one thing. If I'm looking at that information because I do, 627 01:06:37,930 --> 01:06:44,750 in fact, want to lose weight, then how that information is presented does make quite a big difference. 628 01:06:44,750 --> 01:06:47,580 And it also depends on the informational context: 629 01:06:47,580 --> 01:06:52,290 why has that person looked at it in the first place, and what's their ability to understand what it means? 630 01:06:52,290 --> 01:07:00,560 And so one of the things we've looked at recently is misinformation online and different people's abilities to interpret it. 631 01:07:00,560 --> 01:07:05,270 And we've known for a while that pro-ana, pro-anorexia communities exist on the Internet, 632 01:07:05,270 --> 01:07:11,520 where people promote anorexia as a lifestyle rather than as a psychological illness. 633 01:07:11,520 --> 01:07:20,450 And on those types of forums, you see people promoting tips around how to lose weight and how to minimise your hunger, 634 01:07:20,450 --> 01:07:23,390 and they will have on there things like optimum weights. 635 01:07:23,390 --> 01:07:28,370 If people go and look at that information, or they're using a Fitbit to tell them how many steps they have taken, 636 01:07:28,370 --> 01:07:34,790 even though it's not necessarily telling them what to do, they are still seeking that information for different reasons. 637 01:07:34,790 --> 01:07:40,400 So the information-seeking behavioural context makes a difference. 638 01:07:40,400 --> 01:07:48,140 Can I just ask you to clarify that?
I mean, you've talked quite a lot about how there's a lot of misinformation out there. 639 01:07:48,140 --> 01:07:53,750 I think you were suggesting towards the end that there should be some sort of regulation. 640 01:07:53,750 --> 01:08:04,100 Now, what you might get from a regulator is: here is the government's advice on optimal BMI or whatever it is. 641 01:08:04,100 --> 01:08:11,790 Would you be happy if the advice were all well authenticated, or as well authenticated as it could be? 642 01:08:11,790 --> 01:08:21,290 Or how much is your objection to the fact that we're receiving this bombardment of advice on how we should be, and how much 643 01:08:21,290 --> 01:08:25,600 is it an objection to the fact that a lot of that advice is bogus? 644 01:08:25,600 --> 01:08:33,110 Quite a large part is a reaction to the fact that I think the information and the advice that people are being given is bogus. 645 01:08:33,110 --> 01:08:41,420 But it's not just that. It's more also to do with this idea that we're assuming that just because we've given people information, 646 01:08:41,420 --> 01:08:45,980 they know what to do with it and that they can, in fact, do something about it. 647 01:08:45,980 --> 01:08:49,910 And it's particularly this idea that by giving people information, 648 01:08:49,910 --> 01:08:56,580 we have shifted responsibility for their healthcare from the healthcare system to that particular individual. 649 01:08:56,580 --> 01:09:02,330 And then the issue with that is that if people don't act on it in the right way, and their 650 01:09:02,330 --> 01:09:06,860 healthcare doesn't improve even if the advice has been verified, 651 01:09:06,860 --> 01:09:14,930 because their context makes it impossible, we've framed that person as a bad user and a person who has 652 01:09:14,930 --> 01:09:21,080 become sick on purpose because they didn't do what we wanted them to do. 653 01:09:21,080 --> 01:09:30,880 And if you're constantly reminding people of that, nudging them, then they internalise that value; that's really what worries me. 654 01:09:30,880 --> 01:09:42,740 And just finally, I mean, a question I asked the others earlier too: how much are these issues new, how much of a distinctive feature of A.I. systems are they, 655 01:09:42,740 --> 01:09:48,920 because there have always been books on health nudging people to behave in a particular way, 656 01:09:48,920 --> 01:09:53,720 some of them ill-informed. So is the difference simply the fact that it's on your wrist, 657 01:09:53,720 --> 01:10:02,480 nudging you all the time? Yeah, it's an exacerbation of the problem rather than something that is completely new. 658 01:10:02,480 --> 01:10:06,950 And it's exactly the answer that was given earlier: it's all-pervasive. 659 01:10:06,950 --> 01:10:15,710 You can't really escape it. And in particular, this kind of nudging of 660 01:10:15,710 --> 01:10:19,820 people's health behaviours isn't even just happening to people who choose it. 661 01:10:19,820 --> 01:10:27,770 So, like, I can choose not to wear a Fitbit, because actually I have before and I really didn't like the experience, because it made me too aware.
662 01:10:27,770 --> 01:10:34,400 But I will still go online and just be scrolling Instagram, and I will see a thing that tells me, 663 01:10:34,400 --> 01:10:40,580 hey, have you drunk this new skinny tea? It's impossible to avoid. 664 01:10:40,580 --> 01:10:44,630 And that's partly happening because Instagram knows: oh, right, 665 01:10:44,630 --> 01:10:52,550 well, she's young and female, so she probably does care about how much she weighs, because society is telling her that she should care about that, 666 01:10:52,550 --> 01:10:56,840 so I'm going to show her all of these things all the time. It's all-pervasive. 667 01:10:56,840 --> 01:11:01,180 And that's really what makes a difference. All right. Thank you very much. 668 01:11:01,180 --> 01:11:07,130 So I'm going to invite our other two speakers back now. They will, 669 01:11:07,130 --> 01:11:15,990 I think, magically appear. Jonathan and Carina. Hello, welcome back. 670 01:11:15,990 --> 01:11:19,350 Carina. So to start with, 671 01:11:19,350 --> 01:11:27,060 we've got quite a few questions from the audience that I'm going to get my head around in the next couple of minutes and start throwing at you. 672 01:11:27,060 --> 01:11:33,430 But first of all, Jonathan, I'd like to hear your responses to what Jess has said. 673 01:11:33,430 --> 01:11:39,260 Sure. Well, I think there are maybe some things we disagree about, but there certainly is a lot of agreement. 674 01:11:39,260 --> 01:11:44,450 I certainly agree about the need to regulate these apps if they're going to take such a prominent role. 675 01:11:44,450 --> 01:11:50,930 And Jess is hugely well informed about the many, many problems with the apps as they currently stand. 676 01:11:50,930 --> 01:11:55,670 So I learnt a lot and I found it really very interesting. 677 01:11:55,670 --> 01:12:03,800 I suppose one kind of general thought I have about this concerns the worry about moving responsibility from healthcare professionals to the user. 678 01:12:03,800 --> 01:12:06,110 I do take Jess's concern, 679 01:12:06,110 --> 01:12:14,360 but this comes back to something she said at the beginning about this idea of sustainability in healthcare, and how much the current status quo, 680 01:12:14,360 --> 01:12:19,850 if you like, is actually able to meet the patient demands that we might want it to meet. 681 01:12:19,850 --> 01:12:27,050 So let's take physical activity as an example. The ten thousand step guideline, I agree that there are lots of problems with that, 682 01:12:27,050 --> 01:12:32,750 but there are some pretty well validated baselines for what a good level of 683 01:12:32,750 --> 01:12:39,650 physical activity should be; it's a hundred and fifty minutes a week in the NHS. That's been very well researched and very well validated. 684 01:12:39,650 --> 01:12:46,940 Now, one of the problems we have trying to achieve a good level of physical activity is that when people go to their GPs, 685 01:12:46,940 --> 01:12:52,220 the GPs are having to rely on self-reports of how much exercise people are doing. 686 01:12:52,220 --> 01:12:55,040 So let's take the mHealth tool out of it. 687 01:12:55,040 --> 01:13:00,770 How honest are people in giving information to their GP, particularly about things like the amount they drink 688 01:13:00,770 --> 01:13:05,180 or the amount of exercise they do? They probably overestimate it a little bit.
689 01:13:05,180 --> 01:13:09,330 That's, of course, going to influence the kind of advice the GP is able to give them. 690 01:13:09,330 --> 01:13:15,320 Well, a couple of years ago, coming back to the physical activity example, a group of 691 01:13:15,320 --> 01:13:23,750 GPs wrote a letter to the General Medical Council basically saying that they felt very uncomfortable with the fact that they didn't feel able to 692 01:13:23,750 --> 01:13:33,080 prescribe exercise as a medicine, because prescribing exercise as a medicine is very different from simply saying follow the guidelines. 693 01:13:33,080 --> 01:13:36,950 So I wrote a paper about that a couple of years ago. 694 01:13:36,950 --> 01:13:44,930 Now, one way in which these mHealth tools could fill that gap, and I understand the problems with accuracy here, 695 01:13:44,930 --> 01:13:51,170 is that you could have a technology which reliably monitors levels of physical activity and potentially 696 01:13:51,170 --> 01:13:58,280 fills the gap by providing recommendations based on well validated baselines of physical activity, 697 01:13:58,280 --> 01:14:03,530 where there is at present a bit of a lacuna in primary healthcare provision. 698 01:14:03,530 --> 01:14:08,070 So I want to present that as one way in which I think maybe shifting 699 01:14:08,070 --> 01:14:13,310 the responsibility from healthcare professional to the user or the patient doesn't 700 01:14:13,310 --> 01:14:17,810 necessarily always have to be bad, because the way that healthcare is currently set up, 701 01:14:17,810 --> 01:14:23,060 there are gaps. And giving people responsibility, not in the sense of 702 01:14:23,060 --> 01:14:27,320 holding them wholly accountable, but giving them a bit of responsibility, is to say: 703 01:14:27,320 --> 01:14:34,850 look, we need your help, guys. We need you to monitor some things that we as primary care physicians might not catch on our own, because we 704 01:14:34,850 --> 01:14:41,060 simply can't maintain the healthcare system sustainably if we have to pay that much attention to everyone. 705 01:14:41,060 --> 01:14:46,460 So that's a counter to some of Jess's very important and excellent points. 706 01:14:46,460 --> 01:14:53,630 Okay. Right. I think I'm going to move on at this point to questions from the audience. 707 01:14:53,630 --> 01:15:02,540 We've got some very interesting ones here. But I'm also going to give you an opportunity at the end to respond to each other. 708 01:15:02,540 --> 01:15:08,060 Sure. So a few of these have been directed towards particular individuals. 709 01:15:08,060 --> 01:15:11,900 One for you, Carina. This is from Oliver. 710 01:15:11,900 --> 01:15:22,700 If the human-machine interaction or cooperation is construed as a single system in and of itself, 711 01:15:22,700 --> 01:15:28,070 can you think about that? I mean, what would you say about the hybrid autonomy that that system might have? 712 01:15:28,070 --> 01:15:34,420 So I think the thought there is that you were distinguishing between human and artificial autonomy. 713 01:15:34,420 --> 01:15:46,210 But if you think of it as a single system, that raises quite an interesting question about how to conceptualise the autonomy. 714 01:15:46,210 --> 01:15:58,050 Yeah, and I think it's an excellent conceptualisation of the topic, and it also comes back to the wheelchair example.
715 01:15:58,050 --> 01:16:01,770 I think this is a very, very good example where, in fact, 716 01:16:01,770 --> 01:16:10,020 we have a technological artefact that we don't consider separate from the individual who uses the artefact. 717 01:16:10,020 --> 01:16:15,540 So I would say that a lot of people who rely on wheelchairs 718 01:16:15,540 --> 01:16:20,670 would not consider that wheelchair as, I mean, obviously it is something separate, 719 01:16:20,670 --> 01:16:28,290 but in their conception of autonomy it really becomes part of them 720 01:16:28,290 --> 01:16:38,100 and how they move around; it becomes part of their life. So I think it's a very, very interesting way of conceptualising it. 721 01:16:38,100 --> 01:16:44,190 I think there's one thing maybe that we should be wary about, 722 01:16:44,190 --> 01:16:47,790 and maybe this comes back to the idea of control. 723 01:16:47,790 --> 01:16:54,930 It is about how much control over, and knowledge about, the system itself you have, and how much we are dependent on, 724 01:16:54,930 --> 01:17:02,820 let's say, manufacturers and the incentive structures of private firms and manufacturers 725 01:17:02,820 --> 01:17:07,530 when we make use of these systems. And I think it's an excellent idea. 726 01:17:07,530 --> 01:17:12,720 So just coming back to this distinction between the human autonomy and the system autonomy there: 727 01:17:12,720 --> 01:17:22,380 because there is a human being involved, I would say we should understand it 728 01:17:22,380 --> 01:17:30,200 as the system contributing to the intrinsic value of human autonomy. 729 01:17:30,200 --> 01:17:36,900 And so that becomes the main focus. So when you've got the two together, the main focus becomes the human autonomy 730 01:17:36,900 --> 01:17:40,470 and how far the machine feeds into that. Exactly. 731 01:17:40,470 --> 01:17:48,210 So then the machine basically becomes a tool of the human rather than a separate agent in itself. 732 01:17:48,210 --> 01:17:52,770 Right. Okay. Thank you. A question for you, Jess. 733 01:17:52,770 --> 01:18:02,470 Is there a difference between the value-laden health apps and the values of the healthcare professionals that patients could be interacting with? 734 01:18:02,470 --> 01:18:03,750 I mean, 735 01:18:03,750 --> 01:18:11,220 I think the point there is that we might be nudged by health apps, but we might also be nudged by health professionals. 736 01:18:11,220 --> 01:18:18,430 Do you see a significant difference there? Thank you for the question; it's a great question. 737 01:18:18,430 --> 01:18:24,870 Often when I talk about A.I. and mHealth, I sort of preface 738 01:18:24,870 --> 01:18:31,830 what I want to say with the fact that I do actually think there is a lot of opportunity for these things to make things better. 739 01:18:31,830 --> 01:18:42,870 And often, when you are the ethicist in the room, you have to be the negative person saying, oh, this is bad, this is bad. 740 01:18:42,870 --> 01:18:50,670 And in doing that, you automatically set it up as if the analogue or the existing system is already perfect. 741 01:18:50,670 --> 01:18:58,620 I don't think the existing system is; the values that doctors use right now are not necessarily always perfect,
742 01:18:58,620 --> 01:19:04,110 and therefore the apps are always bad. A bit like, you know, Jonathan was saying, 743 01:19:04,110 --> 01:19:12,270 I do think that they can, in fact, be used in ways that are good and helpful to people. 744 01:19:12,270 --> 01:19:18,180 My concern is that you don't have to be a clinician in order to develop a healthcare app. 745 01:19:18,180 --> 01:19:22,050 You can just be anyone who knows how to write a line of code. 746 01:19:22,050 --> 01:19:25,110 You can just be a tech bro in Silicon Valley who thinks, hey, 747 01:19:25,110 --> 01:19:31,590 I know how to do this, and it will make me some money if I make this app that will nudge people in this way or nudge people in that way. 748 01:19:31,590 --> 01:19:35,220 And we don't have any oversight over that process. 749 01:19:35,220 --> 01:19:40,650 We don't know where the idea originated from or who decided that this was how it was going to be designed, 750 01:19:40,650 --> 01:19:41,280 right 751 01:19:41,280 --> 01:19:48,470 from the way baselines were set, through to the way that it manipulates a person's behaviour, through to the way that it presents information. 752 01:19:48,470 --> 01:19:57,950 You know, it's really hard to go on the App Store and see who even made it, and therefore to follow that chain of accountability back. 753 01:19:57,950 --> 01:20:08,640 And it's really that that concerns me. It's not the idea that healthcare systems or doctors currently are automatically good or automatically bad, 754 01:20:08,640 --> 01:20:15,990 but we are shifting the way these things are done, and we are losing sight of the accountability chain: who to go to and say, 755 01:20:15,990 --> 01:20:20,880 why did you make that decision, and have you thought about the implications? And 756 01:20:20,880 --> 01:20:26,550 who do I, as the person who's used this app and maybe had a negative outcome, 757 01:20:26,550 --> 01:20:33,900 who do I tell that I had a negative outcome as a result of using your app, so that I know it doesn't happen to another person? 758 01:20:33,900 --> 01:20:39,210 And that's what worries me more than anything. Right. Thank you very much. 759 01:20:39,210 --> 01:20:50,500 Jonathan, a question for you. Would you say more about the merging of autonomy and privacy, and how this promotes an idea of A.I. exceptionalism? 760 01:20:50,500 --> 01:20:58,910 OK, it's a very interesting question. So, on the merging of autonomy and privacy, again I'm going to be drawing a little bit on the book I mentioned. 761 01:20:58,910 --> 01:21:02,600 There's an awful lot in Chris's wonderful book, 762 01:21:02,600 --> 01:21:07,410 so I urge the person who asked the question to go and check that out. 763 01:21:07,410 --> 01:21:14,480 But something I learnt from Chris's book concerns the invasions of privacy that are apparent in A.I. systems, 764 01:21:14,480 --> 01:21:23,750 and I think that's also relevant to what Jess has spoken about: these health tools use a huge amount of personal data. 765 01:21:23,750 --> 01:21:33,560 Now, the way in which these A.I. systems tend to influence our decision making makes a great deal of use of that personal data. 766 01:21:33,560 --> 01:21:41,270 So the reason that I think the threat that A.I. systems potentially pose to autonomy is so pervasive and effective is that these 767 01:21:41,270 --> 01:21:49,390 systems use the data they have obtained by invading our privacy to target very specific forms of manipulative influence.
768 01:21:49,390 --> 01:21:54,650 And so already there's a very close relationship. Does it go the other way? 769 01:21:54,650 --> 01:22:02,600 Well, in medical ethics generally, I mentioned earlier that autonomy and privacy have been treated as slightly separate concepts. 770 01:22:02,600 --> 01:22:05,180 That's not wholly true at a practical level, of course, 771 01:22:05,180 --> 01:22:14,420 because we talk about informed consent to handing over parts of our privacy, so there's always been a bit of a link between the two. 772 01:22:14,420 --> 01:22:23,000 But for the reasons I've just said, I think that link becomes much tighter in the context of A.I. With regards to exceptionalism, 773 01:22:23,000 --> 01:22:30,710 I suppose hopefully I've maybe answered a bit of that question already, in the sense that the fundamental point is going to be that the way in which 774 01:22:30,710 --> 01:22:39,920 manipulative interference operates in A.I. systems is by using the data that has been obtained by invasions of our privacy. 775 01:22:39,920 --> 01:22:45,520 And I think that's something of a new feature of A.I. today. 776 01:22:45,520 --> 01:22:56,620 Thank you. Back to you, Carina. From Maia: how can the difference between human autonomy and system autonomy be communicated well to the public 777 01:22:56,620 --> 01:23:08,120 when we're talking about A.I.? And she was particularly interested in what Carina said about how feminism might inform our definitions. 778 01:23:08,120 --> 01:23:15,050 Yeah. So I think a good way of communicating it is first by understanding what is behind human 779 01:23:15,050 --> 01:23:20,920 autonomy, namely that it has something to do with moral agency and with responsibility, 780 01:23:20,920 --> 01:23:29,450 in contrast to what we are actually talking about when we talk about autonomous systems in the context of A.I.: we're talking about autonomous vehicles, 781 01:23:29,450 --> 01:23:37,310 for example, or lethal autonomous weapons systems. Now, arguably, these are not moral agents. 782 01:23:37,310 --> 01:23:45,980 So you could say that maybe if they reach the point where we can consider them moral agents, 783 01:23:45,980 --> 01:23:57,350 then this discussion would need to change. But feminism comes in because feminism questions this idea of independence. 784 01:23:57,350 --> 01:24:04,940 As I said, independence and self-sufficiency used to be, or still is, a very male-centric idea. 785 01:24:04,940 --> 01:24:13,790 And in fact, women have long known that social relationships and emotional relationships are important. 786 01:24:13,790 --> 01:24:23,040 I mean, I think men knew that too, but it was certainly not at the forefront of political philosophy more generally. 787 01:24:23,040 --> 01:24:28,610 So feminism rejects this idea of autonomy as independence. 788 01:24:28,610 --> 01:24:34,790 And in the case of autonomous systems, when we talk about autonomy, that is independence; 789 01:24:34,790 --> 01:24:41,640 it is independence from human beings. So I think that here is a really nice connection to be drawn. 790 01:24:41,640 --> 01:24:44,750 If you want to communicate it to the public, 791 01:24:44,750 --> 01:24:54,150 talk about independence and autonomy, and then show that the concept of autonomy in the human context is much, much richer than just that. 792 01:24:54,150 --> 01:25:01,620 Thank you very much. That's really interesting.
Moving on to Jess now, I think this is one for you. 793 01:25:01,620 --> 01:25:10,830 How could an individual who feels that they are being subjected to guilt-inducing nudges 794 01:25:10,830 --> 01:25:17,250 based on their data self rather than their physical self advocate for themselves, 795 01:25:17,250 --> 01:25:29,520 especially if this is via NHS services? I realise you might be walking on thin ice here. 796 01:25:29,520 --> 01:25:33,600 I mean, that's a fascinating question. 797 01:25:33,600 --> 01:25:39,350 I guess my first point is that if you have become aware of that, 798 01:25:39,350 --> 01:25:43,080 you are already in a position to advocate for yourself, 799 01:25:43,080 --> 01:25:50,210 because you are aware of the fact that it is actually happening. 800 01:25:50,210 --> 01:25:54,600 And, you know, you can question it and you can say, look. I mean, 801 01:25:54,600 --> 01:25:59,070 this is one of the reasons we have these discussions about explainability 802 01:25:59,070 --> 01:26:03,510 in A.I., and why explainability is important: because you can say, actually, 803 01:26:03,510 --> 01:26:10,730 I don't think this is right. So if you can say, I don't think this is right because of X, Y and Z, you know, 804 01:26:10,730 --> 01:26:16,200 the app is telling me that I've got this condition and actually my situation is quite different, 805 01:26:16,200 --> 01:26:27,720 you can point out the obvious differences just by the very fact that you're aware, and you can sort of champion it that way. 806 01:26:27,720 --> 01:26:35,820 And I think this is a really interesting argument to bring out and a really good thing to be having a discussion about in public, 807 01:26:35,820 --> 01:26:45,540 because I think often what happens is you get the reverse of this question, and you get this idea that doctors are being presented with patients who say, 808 01:26:45,540 --> 01:26:53,580 hey, Dr Google told me this was what was wrong, and you disagree with Dr Google, and I believe Google over you. Which probably does happen. 809 01:26:53,580 --> 01:26:58,080 But I also think increasingly that the situation that you're describing happens as well: 810 01:26:58,080 --> 01:27:04,110 people being told to do one thing, whether it be by a decision-making or automatic triaging tool, 811 01:27:04,110 --> 01:27:10,490 you go do this or you go do that, or something that is actively acting on what you should do in your 812 01:27:10,490 --> 01:27:18,000 healthcare, and the patient not feeling as though they have any means of questioning it. 813 01:27:18,000 --> 01:27:22,590 So find the specific differences and point them out. 814 01:27:22,590 --> 01:27:29,410 If that person or the NHS service is still not listening to you, then find somebody else who can. 815 01:27:29,410 --> 01:27:33,860 And if that's not possible, then you should be making a fuss about it. 816 01:27:33,860 --> 01:27:40,070 And I'd say, if you do any kind of advocacy and you really want to do it, try and do it in public. 817 01:27:40,070 --> 01:27:47,250 That's interesting. It raises a bit of a worry I have here about the very concept of autonomy. 818 01:27:47,250 --> 01:27:54,210 And I was thinking, when Carina was speaking, that this might be an issue.
819 01:27:54,210 --> 01:27:57,780 And also, you know, with what you were saying about the healthcare apps. 820 01:27:57,780 --> 01:28:08,340 Now, suppose I go to a doctor and I ask the doctor how I should be treated, and the doctor gives some advice, 821 01:28:08,340 --> 01:28:16,660 and I want to know why: can you explain it? Now, it might be that to explain it in any terms that I can understand 822 01:28:16,660 --> 01:28:20,430 would just completely distort the facts of the matter. 823 01:28:20,430 --> 01:28:28,630 You know, so much research has gone into this, so much understanding of the physiology and the biochemistry and all sorts of different things. 824 01:28:28,630 --> 01:28:34,820 And it's not always clear that getting more information increases your autonomy. 825 01:28:34,820 --> 01:28:40,450 Right? For most of us, getting all the medical information that's relevant 826 01:28:40,450 --> 01:28:51,420 would just completely confuse us. And it may be that the concept of autonomy is a rather problematic one. 827 01:28:51,420 --> 01:28:58,080 You know, we tend naively to think that we're autonomous when we fully understand the reasons for doing what we do, 828 01:28:58,080 --> 01:29:00,990 something like that, and we do it with a knowledge of those reasons. 829 01:29:00,990 --> 01:29:10,600 But actually, in any really complex scenario, like in medicine, we're always relying on the authority of others for the vast majority of it. 830 01:29:10,600 --> 01:29:16,100 So is that a serious problem with the concept of autonomy? 831 01:29:16,100 --> 01:29:21,080 Should we just be making do with partial autonomy? 832 01:29:21,080 --> 01:29:26,390 I'd just be interested in your views on that, all of you. I can maybe speak to that first. 833 01:29:26,390 --> 01:29:33,050 This is something I talk a fair bit about in the book that you mentioned at the beginning, Peter. 834 01:29:33,050 --> 01:29:36,380 I think in medical ethics there's always been this view that, look, of course, 835 01:29:36,380 --> 01:29:40,340 we can't expect autonomy to require full understanding of the sort that you're 836 01:29:40,340 --> 01:29:44,960 talking about, because you would simply overwhelm patients with information. 837 01:29:44,960 --> 01:29:50,070 So the concept we tend to invoke in these discussions is that of material information. 838 01:29:50,070 --> 01:29:56,330 So the kind of autonomy that's required for informed consent requires that the 839 01:29:56,330 --> 01:30:03,560 patient understands information that is material to the treatment decision. The very controversial question is, well, how do we distinguish information 840 01:30:03,560 --> 01:30:06,620 that's material from information that's not? 841 01:30:06,620 --> 01:30:12,890 And that conversation has been largely carried out in the legal domain, and in the past five years or so 842 01:30:12,890 --> 01:30:22,110 we've had quite big changes in the law in England, at least, regarding how we define information that is material to treatment decisions. 843 01:30:22,110 --> 01:30:30,440 So I don't think it's an insurmountable problem. I don't think the concept of autonomy for use in medical ethics, or for that matter in A.I., 844 01:30:30,440 --> 01:30:35,510 has to be wedded to this idea of full understanding. Of course that's not obtainable. 845 01:30:35,510 --> 01:30:42,330 But that doesn't get us around all the very difficult questions. And we're certainly seeing this play out in medical law in England today.
846 01:30:42,330 --> 01:30:47,870 And I don't think the discussions about that are over. Right. 847 01:30:47,870 --> 01:30:55,070 Jess, do you want to add anything on that? I mean, I largely agree. 848 01:30:55,070 --> 01:31:00,740 I think it's complicated. I think one of the things that we mention in the paper, which I haven't mentioned here, 849 01:31:00,740 --> 01:31:07,370 is also a sort of more meta version of autonomy: 850 01:31:07,370 --> 01:31:13,820 making an autonomous decision to give over the decision-making ability to someone else is still an autonomous decision, 851 01:31:13,820 --> 01:31:16,400 as is the autonomous decision to not want to know. 852 01:31:16,400 --> 01:31:23,780 So, I don't want to know that you've predicted that this will happen and that will happen, and I still think that's worthwhile. 853 01:31:23,780 --> 01:31:26,720 So, like Jonathan just said, I don't think it's an insurmountable problem. 854 01:31:26,720 --> 01:31:34,700 I think it becomes problematic when we don't look critically at the ways it's potentially being harmed. 855 01:31:34,700 --> 01:31:43,810 And there, I take it, there would be a crucial difference between deferring to an authority that's well authenticated 856 01:31:43,810 --> 01:31:49,390 and rightly respected, so basically going to the doctor, 857 01:31:49,390 --> 01:31:58,240 as opposed to taking your advice from some app that's been written by someone in Silicon Valley who may or may not be reliable. 858 01:31:58,240 --> 01:32:04,480 Yes, at the moment, yes. I wouldn't say that it's impossible that we could get to a point where we 859 01:32:04,480 --> 01:32:11,830 have enough trust and reliability in the other kind of autonomous 860 01:32:11,830 --> 01:32:15,790 decision-making that we would feel comfortable doing that. 861 01:32:15,790 --> 01:32:22,180 And also, we should be aware, like the point I made earlier, not all doctors are fantastic all the time. 862 01:32:22,180 --> 01:32:30,520 So we shouldn't make this distinction between the always-perfect human and the always-imperfect algorithm. 863 01:32:30,520 --> 01:32:37,810 Those two extremes don't exist. But we should be critically analysing both. 864 01:32:37,810 --> 01:32:44,670 I mean, presumably it's not impossible that we get to a stage where technology 865 01:32:44,670 --> 01:32:46,620 is in general more reliable than the doctor, 866 01:32:46,620 --> 01:32:56,850 precisely because the very best decision-making algorithms can be put into it and reproduced all over the place in lots of handheld devices. 867 01:32:56,850 --> 01:33:06,120 Completely, completely possible, especially from a very cold, technical definition. 868 01:33:06,120 --> 01:33:08,850 There are many other aspects, which we don't have time to talk about today, 869 01:33:08,850 --> 01:33:16,150 many other aspects of healthcare like empathy and my ability to understand you as a person, which an algorithm cannot replicate. 870 01:33:16,150 --> 01:33:23,170 But from a purely technical perspective, yes, that's true. Yes. Actually, Anna asks a couple of questions, 871 01:33:23,170 --> 01:33:30,570 one of which is rather resonant: how does this metric cultural obsession with the empowered, quantified self 872 01:33:30,570 --> 01:33:35,800 converse with the almost mystical discourse around wellness? 873 01:33:35,800 --> 01:33:45,830 Interesting.
And can the researchers speak to the gender-specific gap in algorithm design for healthcare, 874 01:33:45,830 --> 01:33:56,480 the well-known fact that most algorithms are apparently based on male data rather than female data, historically? 875 01:33:56,480 --> 01:34:00,960 Do you have anything to say on those? 876 01:34:00,960 --> 01:34:05,840 Any of you? Or Carina? Sorry, my Zoom connection is somewhat shocking, 877 01:34:05,840 --> 01:34:13,730 so I only understood half of the question. Maybe I'll pass it over to Jess and then make a comment after. 878 01:34:13,730 --> 01:34:22,580 So, on the first question about this sort of forever-quantified self and the mystical discourse of wellness, 879 01:34:22,580 --> 01:34:28,760 I think those two things go hand in hand, because the wellness industry is booming 880 01:34:28,760 --> 01:34:34,220 on this idea that you can quantify and datafy every single aspect of your life. 881 01:34:34,220 --> 01:34:40,550 And this was the point I made about the fact that you're made to feel as though you're almost constantly somewhat sick, 882 01:34:40,550 --> 01:34:43,250 in the sense that you're always somewhat suboptimal, 883 01:34:43,250 --> 01:34:50,270 and there is, out there somewhere, an optimal version of you that you can achieve if you continue to listen to this app, 884 01:34:50,270 --> 01:34:52,700 and then this app and that app, you know, 885 01:34:52,700 --> 01:35:01,670 I'm going to follow this influencer and they're going to tell me that I should only drink kale juice and do yoga five times a day. 886 01:35:01,670 --> 01:35:02,900 But then at the same time, 887 01:35:02,900 --> 01:35:13,400 you'll have a different person on a different app telling you that you should go running 100 miles a day and only eat red meat. 888 01:35:13,400 --> 01:35:20,180 It is constantly bombarding people, and that is what's driving the profit-making. 889 01:35:20,180 --> 01:35:23,150 And I think we should be very conscious of that. 890 01:35:23,150 --> 01:35:29,170 And we should accept the fact that maybe we don't want to quantify every single aspect of our lives. 891 01:35:29,170 --> 01:35:38,150 How do you quantify things that are very beneficial from a mental health perspective, and therefore a physical health perspective, 892 01:35:38,150 --> 01:35:42,040 things like having a good conversation with a friend or laughing? 893 01:35:42,040 --> 01:35:47,120 These are all aspects of human existence that make you feel better, 894 01:35:47,120 --> 01:35:50,600 but we shouldn't necessarily be quantifying them. 895 01:35:50,600 --> 01:35:57,860 And I think we should therefore be very critical of this idea that you can quantify every aspect of wellness. No doubt. 896 01:35:57,860 --> 01:36:05,000 Right. Well, thank you very much indeed. That's been a really interesting and enlightening discussion. 897 01:36:05,000 --> 01:36:13,850 Lots of food for thought there. It's just a shame we have to stop, because I feel we've raised a lot more questions than we've answered, 898 01:36:13,850 --> 01:36:23,540 but it gives us all an appreciation of what a complex and multifaceted topic this is. 899 01:36:23,540 --> 01:36:32,660 The session's been recorded. It will be added to the very rich collection of A.I. ethics resources that we're building up at Oxford.
900 01:36:32,660 --> 01:36:41,780 As I said at the beginning, you can find links to past and forthcoming events, and the growing set of recordings and podcasts categorised by topic, 901 01:36:41,780 --> 01:36:49,440 if you go to the Philosophy Faculty home page and click on the Ethics in A.I. link. 902 01:36:49,440 --> 01:36:55,190 Before saying goodbye, I'd like to thank Wes Williams, Vicki McGuinness and the whole team 903 01:36:55,190 --> 01:37:00,410 at TORCH for helping us with the organisation and technical arrangements for this seminar. 904 01:37:00,410 --> 01:37:06,290 They've made everything much easier for the four of us, and this is hugely appreciated. 905 01:37:06,290 --> 01:37:10,090 Thank you. Thank you very much to the speakers, 906 01:37:10,090 --> 01:37:18,090 of course: Carina, Jonathan and Jess. 907 01:37:18,090 --> 01:37:23,420 It's been really interesting, what you've said, and I'm sure our viewers will have enjoyed it. 908 01:37:23,420 --> 01:37:26,930 Thank you, viewers, for watching, 909 01:37:26,930 --> 01:37:33,110 and not just for watching: especially those who asked questions, 910 01:37:33,110 --> 01:37:40,010 really interesting questions. It's a shame we didn't have longer to spend on them. Do look out for our future events 911 01:37:40,010 --> 01:38:20,068 via the link I mentioned. Thank you again and goodbye.