1 00:00:00,180 --> 00:00:05,760 Thank you. I also don't have any slides, so I won't say too much.
2 00:00:05,760 --> 00:00:11,450 So I'm also very aware that I am kind of the stopgap between you and wine and getting out of here very hungry.
3 00:00:11,450 --> 00:00:15,720 And so I will try not to be very boring.
4 00:00:15,720 --> 00:00:24,990 I think what you have had so far is a really wonderful introduction to what I am going to try and talk about, because I sit at a number of different
5 00:00:24,990 --> 00:00:31,740 intersections, in that I am both an employee of the university and a student at the university, in a different part of the university.
6 00:00:31,740 --> 00:00:40,170 I am also working in academia, and I am actually making policy for the government, so to simplify things today
7 00:00:40,170 --> 00:00:49,680 I will very much be talking from the Oxford perspective. But if you have questions about specific things, ask me about them afterwards.
8 00:00:49,680 --> 00:00:57,240 But really, what I try and do, and I think Sandra phrased it really, really well, is look at what is legally required,
9 00:00:57,240 --> 00:01:07,020 which in the context of medicine and data use tends to be consent and data minimisation, then what is ethically desirable and what is technically feasible.
10 00:01:07,020 --> 00:01:12,630 And then we try and develop what we refer to as a principled, proportionate approach that
11 00:01:12,630 --> 00:01:19,980 tries to move from the 'what' of AI ethics into the 'how' of AI ethics and to really operationalise it.
12 00:01:19,980 --> 00:01:24,420 So you look at a number of different things. First of all, how can AI actually help?
13 00:01:24,420 --> 00:01:27,660 I think this is something that often gets missed in ethical conversations.
14 00:01:27,660 --> 00:01:36,000 People always talk about the risks, but actually a very big risk, particularly from the perspective of the NHS, which has a duty to capitalise on
15 00:01:36,000 --> 00:01:42,810 all of the ways that you can enable healthcare, is the opportunity cost of not letting it happen.
16 00:01:42,810 --> 00:01:50,040 So we try and look at why it might help, which tends to fall largely into categories such as diagnostic screening and triage,
17 00:01:50,040 --> 00:01:57,960 which is where most research currently focuses, primarily because that's where the data is more easily accessible and where it's standardised;
18 00:01:57,960 --> 00:02:06,540 population health surveillance, basically better and less bad versions of Google Flu Trends; discovery,
19 00:02:06,540 --> 00:02:12,780 such as drug discovery; and what's sometimes referred to as P4 medicine or precision medicine.
20 00:02:12,780 --> 00:02:17,490 And those are the kinds of areas that we look at in terms of why we think it might help.
21 00:02:17,490 --> 00:02:23,490 And then the question is how can it actually hurt? You've already heard many of the reasons why,
22 00:02:23,490 --> 00:02:30,090 and one of the things that I co-authored is one of those many sets of ethical principles that Brent had up on that slide,
23 00:02:30,090 --> 00:02:36,080 which is the Department of Health and Social Care's code of conduct for the NHS.
24 00:02:36,080 --> 00:02:42,870 So it's called the NHS Code of Conduct for Data-Driven Health and Care Technologies, which just really rolls off the tongue.
25 00:02:42,870 --> 00:02:47,250 The reason it is called that is because most of the applications that we currently see in
26 00:02:47,250 --> 00:02:52,650 use within the NHS, or within the healthcare sector at large, at least at the frontline,
27 00:02:52,650 --> 00:02:59,280 do not fall into the category of being AI yet. They tend to be far, far simpler:
28 00:02:59,280 --> 00:03:03,890 basic statistics, so we tried to encompass that with 'data-driven'.
29 00:03:03,890 --> 00:03:08,850 But the thing that we are trying to get past is that that code of conduct, and almost everything that
30 00:03:08,850 --> 00:03:14,250 we've heard about so far, tends to focus on just the individual, protecting just the individual,
31 00:03:14,250 --> 00:03:20,320 which stems from this connection with medical ethics being around protecting the person.
32 00:03:20,320 --> 00:03:25,230 And as we have seen: autonomy, beneficence, non-maleficence, justice and explicability.
33 00:03:25,230 --> 00:03:32,490 And while that is really, really important, when you're operating and thinking from a systems perspective,
34 00:03:32,490 --> 00:03:40,230 you also need to think about how ethical risks can arise at a number of other different levels, such as the interpersonal level.
35 00:03:40,230 --> 00:03:46,920 So how might it change the relationship between clinicians and patients, for example, and also between the patient and themselves?
36 00:03:46,920 --> 00:03:51,540 There is a very interesting dichotomy that you have in the healthcare sector with AI, which is that
37 00:03:51,540 --> 00:03:57,360 you are dealing simultaneously with a very physical being and an entirely digital sphere.
38 00:03:57,360 --> 00:04:05,130 And sometimes those two things do not actually match, and that can cause some concerns to be aware of.
39 00:04:05,130 --> 00:04:08,550 The other impacts that you can see are group-level impacts,
40 00:04:08,550 --> 00:04:15,510 sectoral-level impacts and societal-level impacts. Group and societal-level impacts might be where,
41 00:04:15,510 --> 00:04:23,830 for example, we have problems with the fact that the NHS is grounded on this principle that it is care for all.
42 00:04:23,830 --> 00:04:34,050 We risk making that less true by focussing on the introduction of AI, when AI should be used in a way that makes it more true.
43 00:04:34,050 --> 00:04:41,370 We could in fact introduce new and evolving inequalities and exacerbate those already existing.
44 00:04:41,370 --> 00:04:49,140 And then the other thing, at the sectoral level, and this is particularly important when government is taking a systems perspective, is the issue of trust.
45 00:04:49,140 --> 00:04:54,480 How might you lose trust in the provision of healthcare from the state
46 00:04:54,480 --> 00:04:59,800 because you have made mistakes too early on? So those are all the sorts of
47 00:04:59,800 --> 00:05:07,300 levels that you might think about. And then the other area that we need to try and look at is, well, you might have
48 00:05:07,300 --> 00:05:12,280 different ethical issues arising at different stages of the machine learning pipeline.
49 00:05:12,280 --> 00:05:21,550 One of the things I often say is that even if you have written ethics into the business case, it can be coded out by the time it has got to deployment.
50 00:05:21,550 --> 00:05:32,230 A really clear example that we're currently tackling, well, a couple of things. One is that, because the question of liability is a little bit unclear,
51 00:05:32,230 --> 00:05:40,520 it tends to be interpreted that liability lands on the final clinician who has made the decision. But a lot of the
52 00:05:40,520 --> 00:05:46,160 tools that are deployed at the frontline or deployed in the App Store, skin-checking apps, for example,
53 00:05:46,160 --> 00:05:52,160 which can take a photo of a mole and tell you your level of risk, are skewed towards false
54 00:05:52,160 --> 00:05:59,450 positives, because it's less risky for the company to tell you that you have something when you don't than to give you a false negative.
55 00:05:59,450 --> 00:06:08,420 The issue this is having is that, while these technologies are supposed to be deployed in order to enable people to take care of themselves,
56 00:06:08,420 --> 00:06:15,890 in fact they are doing the opposite and driving people towards frontline care, because people get these very high ratings of risk.
57 00:06:15,890 --> 00:06:22,040 Similarly, we have an issue with breast cancer screening algorithms, for example,
58 00:06:22,040 --> 00:06:28,190 that have drastically increased the capacity to recognise the potential of a pathology by,
59 00:06:28,190 --> 00:06:33,700 as you can imagine, reading many scans at once, but a doctor can still only treat one person at a time.
60 00:06:33,700 --> 00:06:39,890 So in fact you have created a bottleneck, which is an ethical concern from the perspective that
61 00:06:39,890 --> 00:06:46,100 you then might have a person who is living with a potentially anxiety-inducing diagnosis,
62 00:06:46,100 --> 00:06:50,690 which has very big implications for their medical integrity and their mental integrity,
63 00:06:50,690 --> 00:06:54,650 but who has to wait much longer for treatment.
64 00:06:54,650 --> 00:07:03,830 So those are the kinds of barriers that we try and look at, and we then try and develop standards and policies that build on the regulatory framework.
65 00:07:03,830 --> 00:07:10,250 There is work going on to assess the regulation, and various things like creating regulatory sandboxes in order to
66 00:07:10,250 --> 00:07:18,130 test ideas, but that takes much longer than it takes to develop standards and policies and to implement them.
67 00:07:18,130 --> 00:07:23,180 Obviously, one of the things, for example, that we have looked at is the code of conduct.
68 00:07:23,180 --> 00:07:32,450 First of all, it is an abstraction of an abstraction: it's based on the ethical principles, but it is far more about trying to operationalise them.
69 00:07:32,450 --> 00:07:36,260 So instead of saying things like 'autonomy', it says things in there
70 00:07:36,260 --> 00:07:46,860 like: design for your specific user, state how the technology will impact them, how it will help, and how it is better than what is already in place.
71 00:07:46,860 --> 00:07:51,380 And there are a number of operational things that we have done to make that a little
72 00:07:51,380 --> 00:07:55,640 bit more realistic, because of the complexities that you're dealing with in medicine,
73 00:07:55,640 --> 00:07:57,590 such as safety and efficacy.
74 00:07:57,590 --> 00:08:06,170 So, for example, one of the things that we developed was the standards for evidence of efficacy in digital health technologies,
75 00:08:06,170 --> 00:08:12,380 and similarly, what socially acceptable commercial models for health services look like.
76 00:08:12,380 --> 00:08:24,740 And then we are building a platform where people can actually go in and provide that evidence to the NHS of how they are meeting those principles,
77 00:08:24,740 --> 00:08:28,340 showing what evidence they have provided, and allowing people to actually review it.
78 00:08:28,340 --> 00:08:34,310 This is all available in a big report that got published a couple of weeks ago.
79 00:08:34,310 --> 00:08:43,190 And really, that's what we're trying to do: look at the issues to do with the fact that data is now circulating outside of the system.
80 00:08:43,190 --> 00:08:50,750 In many places, you now have private companies that know more about your health than the healthcare system does,
81 00:08:50,750 --> 00:08:56,780 and how is that breaking down traditional barriers that have been put in place for good reason?
82 00:08:56,780 --> 00:09:01,250 As we heard earlier, things like the difference between medical research and medical practice:
83 00:09:01,250 --> 00:09:05,960 when you're working in a cybernetic loop with data, that starts to break down.
84 00:09:05,960 --> 00:09:11,390 Look at the risks that arise at different levels of society and
85 00:09:11,390 --> 00:09:18,800 different stages in the machine learning pipeline, and how we actually operationalise those, so that instead of just saying,
86 00:09:18,800 --> 00:09:25,482 isn't it nice that everyone is going to be fair? How are we actually going?