Thank you very much for coming to this talk today. I would like to introduce Shaun Treweek, who is here from Aberdeen, in the Health Services Research Unit. Shaun will introduce the background himself; it should take about an hour and there will be questions at the end.

Okay, thank you. So I'm from Aberdeen. A friend of mine, a colleague of mine, was here last week for the Evidence Live conference and she said that Oxford was hotter than the surface of the sun. It's maybe not quite as hot as that today, but it's a lot hotter than it was in Aberdeen, so it's great to be here. I'm a trial methodologist. What I'm really interested in is how we can improve the way we do trials. So I don't have a particular focus on one type of trial or another; it's how we do trials. I'm going to talk a lot about efficiency, things we might do to learn about doing trials more efficiently, and also raise awareness of some of the issues right now with trials. And I want to talk about this thing called Trial Forge. What I also want to make clear is that we, meaning Trial Forge, think everybody who has an interest in trials can contribute to Trial Forge and to making trials more efficient. So we're always looking for collaborators. One of the key things with Trial Forge, which I hope by the end of the talk I'll have convinced you is absolutely essential, is that we do more coordinated activity and a lot more collaboration. Otherwise years and years, decades even, can pass without us really knowing much more than we did a decade earlier, and I'll point out some clear examples where that is the case. So coordination and collaboration is what we're about with Trial Forge.

Now, I guess many of you will know that every year there's something called International Clinical Trials Day. And a couple of years ago, it wasn't this year, it was last year, an Irish methodology research group thought it would be a great idea to get some people who work in trials to say something about why they thought trials were important. This particular one is from Hywel Williams, who is the director of the UK's Health Technology Assessment Programme, the NIHR Health Technology Assessment Programme, one of the big public funders of trials here in the UK. And he said, look, trials are right at the backbone of the primary research that, here in the UK, underpins what we do in the NHS. They're right there at the centre.
It is really, really important, the stuff that goes on in clinical trials, and what we do in trials finds its way into systematic reviews, finds its way into clinical guidelines, and those guidelines influence the care of thousands or millions of people, not only here in the UK of course but around the world. Trials are very important. As it says here, trials do change lives; they are important.

And that makes it odd that people like me... Trial methodology is not clinical, but trial methodology is the way that we and our colleagues make decisions about how to design, how to run, how to analyse, how to disseminate our results. Those decisions are informed by almost no evidence at all. So if we think of a trial that might change the way we manage one aspect of, say, diabetes care, that thing, that trial, could change the care that patients receive. It is a component of our evidence-based system. Yet we triallists do not have an evidence-based approach to how we make decisions; we do as this person says. This is a great quote; I show it all the time. It's from somebody called Monica Shah, and what she and her colleagues were doing was asking themselves: how could we better select sites for multicentre trials, cardiovascular trials? She made this observation, which is absolutely spot on. We do things because we think that is the right way to do them. We use judgement, we use experience, we use knowledge, and we are forced to use those to a far greater extent than we ought to, because there is no alternative. We do not have a large body of evidence to which we can go to try and improve our decisions. So if I wanted to develop an evidence-based trial management approach, there is not an evidence base that I can go to to do that, and I'll show you that in a moment.

So what Monica is saying here, and she's spot on, is that it's weird. Something so important, as we heard Hywel say on his slide, something that is right at the backbone of the primary research underpinning clinical care here in the UK and elsewhere, and yet the way we go about doing much of the activity is far from informed by evidence, and we have not systematically tried to build up that evidence base. Which is one of the things that we are trying to do with Trial Forge. One of the things we want to do is to try and systematically build up evidence to support those decisions about how best to recruit, how best to retain, how we should disseminate our results.
Should we send somebody out on a face-to-face site initiation visit, or is it okay to do it remotely? We want to try and provide some evidence to support those decisions, because to a first approximation there is nothing for any of those things right now, almost nothing. So we want to try and change that, and to do that, that's why we need the coordination and collaboration, because leaving things as they are right now, nothing much has changed for a long time.

So, I mentioned efficiency. I want to think about efficiency in two ways: scientific efficiency and process efficiency. Now, you could cut this up in a couple of other ways, I guess, but this is how we think about it in Aberdeen. Scientific efficiency boils down to having an important question and then choosing the right design, and process efficiency is about doing things within your trial in a way that is most appropriate given your research question, your resources, and what you're trying to do. An important thing to remember about both of these, and particularly scientific efficiency, is that it is entirely possible to kill the trial on paper before anything happens. You can do something here which renders the trial irrelevant. Nobody will care about it. It is fundamentally messed up from the very beginning, so it's dead. I'm going to show you an example of that. With process efficiency you can do the same. You might have a great question and have chosen the right approach up here, but some of the things you have chosen to do within the trial, perhaps, for example, the point at which you try to recruit, are so spectacularly wrong that the trial fails, even though it's a great question. You've chosen the right design, but the process is poor, poor to the point of killing it. So these two things are important, very important, and it's possible to ruin the trial from the very beginning.

The key thing, I think, to consider when designing any trial, and Clifford and I were talking about this before I started, is: who are you designing the trial for? Who is the person or people or groups that you think will gain benefit from the results of your trial in three or five or ten years' time? Who is that person or persons, and what do they need? What is the information need that they have? Not you. Nobody cares what I think; I'm a trial methodologist. I don't provide care. I don't meet patients. What I think is irrelevant. What is important is who it is I'm designing my trial for.
In my own case, it's almost always clinicians and patients hoping to improve their decision making, and possibly policymakers, people who might choose to pay for this in a roll-out if it shows itself to be effective. And therefore the information need has nothing to do with me. It has everything to do with those clinicians and patients and possibly policymakers.

There is a trial that I'm involved with right now where we have what we've called a decision package. That package of outcomes is relevant for the funder, which in this case is the Scottish Government, because they are thinking: should we roll this thing out in the future? What they want to be reassured of is that within this package there is enough information for them to make a decision, in four or five years' time, that we should roll this out across Scotland. What I think is irrelevant. They are the people making the decision. That is what we should put into that trial. So we are thinking about who the person is for whom this trial is designed.

So here's a pictorial way of trying to think about this. Let's imagine that we are designing a trial, and for this trial we have a person or group in mind, and those individuals are thinking: what we want is some piece of information that will remove uncertainty about a particular treatment. Let's say I'm a clinician; when I'm talking to one of my patients, I have a lot of uncertainty about this treatment. I want you to provide information that can reduce the uncertainty in that decision. And that already has some implications. That person is thinking about a particular type of patient. That person almost certainly has in mind where that care will be delivered; it might be a general practitioner, which means immediately we think about primary care. That person will have some ideas about the outcomes that would provide reassurance to him or her that this treatment is the most appropriate one, not this one: that decision package that I was talking about. He or she, together with a patient, will have some ideas about the point at which that outcome or outcomes should be measured. Do I care if this thing is measured after 30 days, or am I really interested in a much more sustained outcome? That's what affects our decision; that's the thing we need to know. We might be thinking about the resources available to deliver the intervention, the pieces of kit that I am expected to use. I don't have access to an MRI; therefore, what's the point in building an MRI into your protocol? I don't have access to it. It's no good.
So let's say, overall, if we take those things together, the information need that that individual has, together with his or her patients, looks like this vase of flowers. That is what the trial should look like. So that's the ideal; this is the trial. Now, the trial will take, say, two years; we could burn our way through a few million pounds and involve five or six hundred people. And at the end of that, what our user is looking for is this. That's the thing that he or she was thinking about. If we deliver this, it is not quite perfect, but it is pretty good. Doing trials is tricky. There are constraints, there are compromises we have to make. We only have so much money. We have to recruit within a reasonable amount of time to inform the decisions that those people might want to make. There's no point spending 20 years, perhaps, if a decision has to be made in three. So what is delivered is unlikely to be perfect, but that is pretty close. If I was one of the decision makers here and I got that, I would think: yes, they have worked hard to deliver something that I can use in my decisions. It's not perfect, but I understand why it's not perfect; they've told me why, and it's pretty good. I now have a bit less uncertainty when I make my decisions together with my patient. That's what we would like to see happening.

Sometimes that happens. However, this is what I think happens far too much. This is the picture that we're aiming for, and a trial team spends time on the trial design, runs its trial, recruits 500 people, spends three or four years, burns its way through a couple of million pounds, and delivers this. This is a perfectly respectable salad. It looks great. It is not flowers. It's a thing of beauty in its own right. It is, however, completely irrelevant to the decisions that those users have. They want this; they don't care about that. It might be beautiful to the researchers, but it is not addressing that need, the information need, to reduce the uncertainty that that clinician had together with his or her patient. Your three or four years of work are wasted as far as that person is concerned.

Let's look at an example. This is a nice and a sad piece of work in equal measure. This is a group who did a systematic review over a decade ago, looking at the choice of analgesic for a particular procedure. What they did is a systematic review; they found about 50 trials, and the result of those 50 trials was clear.
It showed that one analgesic in particular, lidocaine, was the one to use, because it was effective, it was cheap and it was easy to use. So in that review, done over a decade ago, they said lidocaine is the one to use: it's cheap, easy to use, and effective. Future research should compare whatever it is that's being proposed with lidocaine. That is the clinically relevant comparison, because we already have an effective, cheap, easy-to-use analgesic in the form of lidocaine. Now fast forward a decade: the same group repeated their review, and what they were particularly interested in was how much influence that earlier systematic review had had on the design of trials. They were particularly interested in the choice of comparator, because the choice of comparator came out so clearly from that earlier review. So they had a definition of clinical relevance, which was that the thing on test is compared to lidocaine; why would we not compare it to an effective, cheap, easy-to-use analgesic? When they looked for the new trials that had been done in that decade, they found 136 new trials, and they concluded that 87 of those trials asked a clinically irrelevant question, because they chose a comparator other than lidocaine. Some of them, I think it was eight, chose placebo. So imagine a clinician who wants to make a decision as to whether the new whizzy thing is better than lidocaine, which he or she has been using for a decade, and what you say is: well, the new thing is better than placebo. What he or she is going to say is: I do not use placebo, I use lidocaine. How does it perform against lidocaine? And the trial team are going to say: we have no idea, because we didn't compare it to lidocaine. And the clinician is going to say: okay, I'm not interested. Lovely, bye. 87 trials: two thirds of all the trial activity in that particular area over a decade, wasted, clinically irrelevant. Almost certainly all of those trials, or most of them, were trying to inform a direct, immediate clinical decision, and by making one choice they have rendered themselves irrelevant. So these reviewers were deeply saddened at the lack of influence that systematic review had had on the direction of research over a decade. And this is one little area of health research. My sense is, if you were to do that in many other areas, you would find exactly the same picture. Indeed, there is a group based here in Oxford who did a survey asking trialists about relevant systematic reviews and the awareness they had of them before designing their trials.
Half of them had no idea of the existence of relevant systematic reviews before they designed their trials. In other words, they designed their trials in complete ignorance of things like this. It is very easy in that situation to make design decisions which render your trial irrelevant to the people you think you are trying to support.

This is not a new revelation. Two French statisticians said this in 1967: look, many, most, therapeutic trials are messed up, and I paraphrase, messed up from the very beginning. The inadequacy is basic. What they are saying here is that you do not statistic your way out of these errors. You are stuffed; you've messed up, and you did it from the very beginning. The way the trial is formulated is wrong. You can do that on a piece of paper before you come anywhere close to a participant, and it doesn't matter what happens from that point on: messed up. That was said in 1967. Some of you have probably heard of Doug Altman, who is based here in Oxford, and he wrote a paper, I think it was 1994, called "The Scandal of Poor Medical Research", and he spoke in Oxford last week about that very paper. And the only regret, I guess, one regret: he does not think there's been any change since 1994 in the state of health research. His only regret is that when he wrote "The Scandal of Poor Medical Research", he did not call it the scandal of bad medical research. Call it what it is: this is bad. That paper was voted, a few years ago, the most significant paper published in the BMJ in 20 years, highlighting this sort of problem. And it continues, and will continue to continue, unless we do something a little bit different.

So let's start talking about some of the things we might be able to do. One of the things I think we can do with the design issue is just to be a bit more careful about how we go about our design decision making. This is one example. A colleague of mine, Kirsty Loudon, developed something called PRECIS-2. She was a PhD student of mine, so I'm biased. But the point of this tool is to make trial teams think about: who am I doing this trial for? What are the sorts of things that they require from my trial? And am I doing those things? It's a wheel that you work your way around. There's a nice description in a paper, and there's a website to support it. If you print out the crib sheet, there's a sort of eight-page crib sheet, four pages of which are a worked example, you can get the bulk, I think, of the value from this having read four pages.
It's easy, I think, to get your head around. It asks things like: who should be in my trial? Where should I get those people from? What sort of outcomes should I be measuring? What should I throw at the trial to make it work? How often should I contact the participants? Very simple. This is not rocket science; this is not some sophisticated statistical technique. This is just thinking through your decision making. Do it on paper. Involve the whole trial team, which is what Kirsty has done in later work. It's quite common to find that people who are members of the same trial team have different views about the type of trial they are developing, and it's good to find that out early rather than late. So I think that's one sort of thing. It's not rocket science; it's basically a structured way of thinking through the consequences of our design decisions before we get stuck into the job of doing the trial.

Doing the trial is what we're going to talk about next, with process, and I'm going to talk about some favourite topics of mine, because these are what we focus on in Aberdeen, and the first of these is participant recruitment. Everybody is terrified about recruitment, for good reason. If you don't have any recruits, you are stuffed. It doesn't matter how wonderful everything else is; if you don't have participants, you have no trial.

Now, if we try to think about how many trials there are on the go at any one time: a few years ago a group led by somebody called Bastian published an article, and they reckoned there were about 25,000 new trial publications every year. To a first approximation, every one of those has to recruit participants. There might be some odd database ones, but let's say, to a first approximation, 25,000, all of them recruiting. Cochrane, an organisation with which many of you are no doubt familiar, try to collate evidence on trials in particular; they have a database of studies coded as trials, and there's about a million on that. So let's say there are a million trials, with 25,000 new publications every year, and 25,000 is almost certainly an underestimate; I doubt it includes lots of pharma stuff. Essentially, there are a lot of trials, and all of them need to recruit. So you would think that there would be a vast quantity of literature that each of those trial teams can look to to make their decisions about recruitment better.

This graph is very common. This is from one of my trials. The red line is projected recruitment.
This is how we thought things were going to go, and the blue line is actual recruitment. What generally happens is that they diverge at some point, never to come back together again. In this trial it worked out fine in the end, but we got an extension; about half of trials do, and it doesn't always help. People have looked at this, and they tend to look at the best trials, and about half of them hit their recruitment target. This is a very common problem. Our projection might be hopeless, that's entirely possible, or something goes wrong here, but this is very common. For this particular trial, here is a picture which others here may also recognise. That trial had five sites, and this is a graph of recruitment per site. What is obvious is what is happening at sites four and five. And if we had a graph that plotted energy and tears against site, it would be reversed: these would be really high, these would be much lower. So those two at the end soaked up a lot of energy, and an obvious question to ask is: why on earth did we choose those? Could we have predicted that? The answer is, I don't know. We don't have any tools, really, that can help us with that; I'll talk about something later on. But this is very familiar, and yet trial teams do not have something easy or good, evidence based or at least evidence informed, to make the choice of: well, we should not bother with four and five, we need to find some others, because they are never going to perform, to try and avoid this situation.

Now, I look after the Cochrane review of recruitment interventions, things that trialists have tried in order to improve recruitment, and I've been doing this since 2006. This picture I show here is from 2010. We've just submitted an update; it's taken us ages, and we've only just submitted it. For this one, the 2010 one, if we were very kind, we would say that since the time of the Romans, which is as far back as we can go, we have reasonable evidence for three things that trial teams could try and reasonably expect to improve recruitment. And the reality is there's only one of them that could be widely applied. So if you think a million trials have happened, with at least an extra 25,000 added every year, all of them having to recruit more or less, we have one thing in 2010 that might help them recruit which we can anticipate them being able to apply to their trial. These other two are a bit odd. One of those things, and I'll mention what it is in a moment... so let's jump forward to this new update.
It's not through Cochrane yet; we've only just submitted it. This is what it looks like right now. In 2017, we have moved from 45 included studies to 68. In that bunch of studies, 72 things have been compared, involving more than 74,000 people. And if we are kind, we have reasonable quality evidence for three things that trialists could use to improve recruitment, and these are the same as before. This one is the most promising, and what it boils down to is: if you have posted a letter to a potential trial participant and that person has not responded, why don't you phone them up? That is the pinnacle of trial methodological research with regard to recruitment: phone up people who don't respond to a letter. That's as good as it gets, give or take. This one is about having all trials be open, so we stop blinding trials; it is difficult to think that that will be widely applied. But there is something interesting developing here.

So this is the key thing, and it brings us on to collaboration and coordination. What this is, is a very involved way of developing participant information leaflets. It costs about £10,000, involves a lot of PPI (patient and public involvement), a lot of iterations, and you get a participant information leaflet that is co-produced together with potential participants. As a recruitment intervention, it might have a very small benefit of about 1%, although within the confidence interval there is the risk that it's doing harm. So you might have a small benefit. The difference between these and that one, which I think is most important, is that here these studies were both done in 2004; here one study was done in 2004 and the other in 2013; and here, these three studies, with far more participants, were done between 2014 and 2017. The reason we have three here in a relatively short period of time is coordination and collaboration. A group led by Peter Bower in Manchester coordinated activity on this, and then trial teams collaborated. There are more trials to come, because I was involved with one which is not in there yet. In other words, with collaboration and coordination, you can get to some real confidence about whether this thing is effective or not in a relatively short period of time, whereas here a decade, pretty much, has passed between the two studies. They come along every now and again, but not very often. Left to their own devices, there isn't much replication.
Indeed, of the 72 comparisons in this review, there are only seven where there is more than one instance of that intervention being tested. So the majority of them have been tested once, and it's hard to have confidence in something that has been tested once.

All of this poor recruitment has a cost. This is a really nice study from the US, from one medical centre. They asked: how much do all these failing studies cost us? They looked over a five-year period and found 260 trials that they had shut down because of poor recruitment, and poor recruitment for these meant no participants, or one. So we're not even talking about trials that recruited 20% of their target; they recruited zero or one, and then they were shut down. 260 of them. And that cost them a million dollars: a million dollars just to go through contracting, go through ethics, go through site initiation, and then the trial was shut down with at most one participant. It's an awful lot of work for no reward. And in a sense, you can have some sympathy with those trial teams when the only substantial thing to offer them with regard to an evidence-based recruitment technique is: if you've posted a letter and you haven't heard back, why don't you phone them up? In terms of an evidence-informed recruitment strategy, if that's all they had to go on, it is perhaps not surprising that we are burning through large quantities of money on studies that fail to recruit. That's what we want to try and do better.

If we work very hard to get people into our trial, we then want them to stay, so retention is something we are focusing on in Aberdeen. It's hard to recruit; you really do not want your participants to then drop out. This group have done a Cochrane review and, oddly enough, they also find three things, if we're kind, that would be effective, and they all boil down to paying people to send back questionnaires. So if you were doing a trial that involved a face-to-face visit, where somebody has to come in to your office and give you some data, there is nothing in that review that's going to say: here is something you could use and reasonably expect to improve your retention. And just imagine how many of those 25,000 trials that will be published this year used face-to-face visits as some component of their trial. There is no good evidence to support an effective retention strategy that involves face-to-face visits. There really ought to be, given how common they are.

Now, let me bang on a little about data collection. This is me on a step ladder, and the point here is that we collect a lot of data in a trial.
In this particular talk I'd worked out how tall the pile of paper would be for a particular trial if we printed out everything that we collected on double-sided paper. It would be just over three metres, which is slightly higher than I could reach on that step ladder. If you've collected three metres' worth of paper, that's a lot of work, and you really want to make sure that those data end up out there in the public domain, informing people's decisions. If it's in a cupboard, well, nobody's health is going to improve from data in a cupboard. It needs to be out there in one form or another. This is a nice study, the only one I've seen so far that has done this sort of thing, where they asked how much of their data, and they did this for their own trials, ends up in the public domain. They took all their cancer trials and found eight. They looked at this portfolio of eight trials and did something quite simple. Here's a data collection form; here are the trial publications. Which items of data on the data collection form find their way into the publications, into the public domain? They worked out the proportion for each trial, and then they worked out a median across the eight trials. Now, you might want to have a guess at the median proportion of data that were not used. Eighty? Yes, 82%. So I don't know whether they collected three metres' worth of data, but if you had a three-metre pile and 80% of it is not used, then we're down to this bit that's being used. What is happening to that other 80%? Now, some of the data are never really for publication; there might be some tracking data. We have another project where we're looking at the proportions of different types of data that are collected. Some of it is tracking, some of it might be process data saying, yes, we have got informed consent, things like that. Some of it is safety data. But it's probably not 82%, or anything like 80%; in fact, we know from our other project it is nothing like that. In that other project, across five trials, 40% of the data collected are secondary outcome data, for example, so there's a lot of outcome data, and health economics takes a big chunk too. This is something I think we should all think about: will we be able to get these data out into the public domain? Because if we cannot, well, why are we collecting them? It perhaps causes recruitment problems, and it will almost certainly cause retention problems. Who wants to fill out a huge form anyway? If we can make it shorter, then why wouldn't we?
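As a rough sketch of the bookkeeping behind a figure like that 82%, here is a minimal example of the calculation: for each trial, count how many items on the data collection forms ever appear in a publication, take one minus that proportion, and then take the median across trials. The trial names and item counts below are invented purely for illustration; the real study did this by matching case report form items to published reports.

```python
# Hypothetical sketch: for each trial, what fraction of the items on the
# data collection forms never appears in any publication?
# The counts below are invented purely for illustration.

from statistics import median

# items_collected: distinct data items on the case report forms
# items_published: how many of those items appear in any trial publication
trials = [
    {"trial": "A", "items_collected": 120, "items_published": 25},
    {"trial": "B", "items_collected": 200, "items_published": 30},
    {"trial": "C", "items_collected": 90,  "items_published": 20},
]

unused_proportions = [
    1 - t["items_published"] / t["items_collected"] for t in trials
]

# The study discussed in the talk reported a median of 82% unused across
# its eight trials; here we simply show the calculation on made-up numbers.
print(f"Median proportion of collected data not published: "
      f"{median(unused_proportions):.0%}")
```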
So this statistic should be in our heads, I think, whenever we are designing the data collection for a trial. On the same theme, this is an MSc student doing some work for Trial Forge. What we've asked him to do is to look at how much time the trial team spend collecting the primary outcome data, the thing that they themselves have defined as the most important piece of data, compared to the secondaries, the outcomes that they themselves have defined to be of less importance than the primary. So what David Pickles, the MSc student, has done is to look so far at 20 protocols and figure out how much time the trial team spends on primary outcome data. And that is, as you can see here, a lot of very short orange bars. The reason they're short is because of the next slide, where I'm going to show you the secondary outcome data, which is this, and the overwhelming impression here is that there's a lot of blue. So these trials, across the piece, not quite every single one but most of them, spend more time, and sometimes much, much more time, on the outcomes that they themselves have defined to be of less importance than the outcome that they really think of as the most important. This one here, the worst offender, is a trial I am personally involved with. The ratio here is 1 to 32: for every hour they spend collecting primary outcome data, they spend 32 on secondary outcomes. If you do it across the piece, it's 1 to 6: one hour on primary, six hours on secondary. And I guess the point here is not that we should always collect just one outcome; it's that we should think carefully about the workload distribution within our trial.

I was also saying to Clifford that what we tend to find in Aberdeen, at any rate, is that if things are going quite badly with response rates, we very quickly find that actually a one-page primary outcome questionnaire suddenly becomes enough. If we can get that back, we're happy. And then you might ask yourself: well, I wonder why we weren't happy with that at the beginning, or something at least similar to it. We're happy with it now; maybe we could have been happy with something similar to that right at the beginning, and saved ourselves and everybody else a bit of pain. So it's just to think: what's the distribution? Is it appropriate for our trial? This, again, is the sort of thing we're trying to fix, or at least help with, within Trial Forge, by raising some of these issues. Do we really mean to have that sort of distribution? I don't think we do. We certainly didn't mean it to be like that in our trial.
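To make the arithmetic behind those ratios concrete, here is a minimal, hypothetical sketch of the kind of protocol audit described above: tally the time scheduled for collecting the primary outcome, tally the time scheduled for the secondaries, and compare. The visit schedule and minutes are invented; the real work was done by reading 20 protocols.

```python
# Hypothetical protocol audit: total data-collection time spent on the primary
# outcome versus all secondary outcomes. The minutes per collection and the
# number of collection points below are invented for illustration only.

primary_minutes_per_collection = 10      # e.g. one short questionnaire
primary_collections = 3                  # e.g. baseline, 6 months, 12 months

secondary_minutes_per_collection = 45    # e.g. several questionnaires plus tests
secondary_collections = 8                # collected at many more visits

primary_total = primary_minutes_per_collection * primary_collections
secondary_total = secondary_minutes_per_collection * secondary_collections

ratio = secondary_total / primary_total
print(f"Primary outcome time:   {primary_total} minutes")
print(f"Secondary outcome time: {secondary_total} minutes")
print(f"Ratio primary:secondary = 1:{ratio:.0f}")
```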
So I'm going to try and wind down a bit towards the end now. Of all the things that we are doing, recruitment, as I said, is what we've really focused on. We've talked a lot about this review; this is the one of randomised evaluations, the one where we have, if we're kind, three things where we might expect some sort of benefit with regard to recruitment. The update of that has just been submitted. But there are other reviews which we think are equally important, and we should stop thinking of this one on its own. The other one up here is to do with factors: what influences recruitment, what are the things that affect the ease or difficulty of recruitment? We have a collaborator in Galway, Catherine Houghton, who is leading that; we work with them on it, but they lead it. What are the things that influence recruitment? While doing this, I noticed that we were rejecting a lot of non-randomised evaluations of recruitment interventions. What they tend to be is what my PhD student Heidi and I have now come to call yield studies: we tried eight things to recruit participants, and this is the number of participants who came in via each technique. There's an awful lot of those sorts of studies. So we thought we should look at them to see if there's anything useful that we could find, and I was actually quite optimistic about some types of intervention, because I thought there would be enough of them. You might worry about the actual design, the approach being non-randomised, but there would be enough studies in there that we might be able to see something. Heidi has found over a hundred studies for that review. And without wanting to pre-empt too much of what Heidi's results will be, to a first approximation these have no useful value whatsoever. They are done, and reported, spectacularly badly, in a way that means it's impossible to draw any useful information from them. In other words, a complete waste of time, almost all of them. The worst offender, just because it amuses me, I'll tell you what it is. It's a paper, four pages long. They used eight techniques, I think it was eight, and they recruited 13 participants. And then they spent time writing that paper. Now, if you're designing your recruitment strategy and somebody has recruited 13 participants using eight strategies, whatever they write is of no use to you whatsoever. It doesn't matter what they say; it's irrelevant. And yet it's in there. That's the worst offender, but there are an awful lot like that.
It's hard to see what others are meant to take from those results. But we want to link these up. Heidi wrote the protocol in a pretty amusing way, where one of our possible outcomes was that we would say: please stop doing these. And that is likely to be one of our conclusions: please stop doing them. But we might punt some over here; there are some which are done quite well, and we basically want them linked. We should be looking at things over here, perhaps; maybe randomisation couldn't be done for a particular reason, but the intervention looks promising, so from that review we suggest that we need a randomised evaluation. We might have randomised evaluations up here which suggest that there are some barriers and facilitators which can be targeted by these interventions, and they seem to be effective against those barriers and facilitators. To a first approximation, none of the existing evaluations is specifically linked to any particular factor; somebody just dreamed them up, to a first approximation. So that linkage has not been done. And we think this is something we could use for lots of processes within trials: factors, different types of evaluations, and linking them. We're doing that also for retention, and we think we should start thinking much more about these as a trio, a trinity, as Heidi calls it, of reviews that are explicitly linked.

Now, here is something that has come through directly from Trial Forge, and this is something which, as you'll see, is a tiny little piece of the trial world, but it demonstrates something which I think is important. Remember from that Cochrane review, we had things where a decade passed and nothing much came along to replicate a result. Now, here, because it came from Aberdeen and Dundee, this is a Scottish intervention. We wanted to improve response rates to a questionnaire, and the innovation came from Dundee, from Debbie Bonetti in fact, who is a psychologist. What she said was: we should think of returning that questionnaire as a behaviour, and our cover letter that goes together with that questionnaire, we should think of that as a behaviour change intervention. So this over here is a template that Debbie produced, based on psychological theory, as a behaviour change intervention, and they used it in a dental trial and it increased the response rate by 6%. An absolute increase of 6% for something which was cheap as chips to do, and you were going to send a letter anyway. It doesn't make the letter longer, and it's not much,
maybe a page long, but a small useful benefit for essentially no additional cost. That was promising, but it was one study. So what we did with the Trial Forge idea was ask: can we coordinate some activity around this particular intervention? Can we get other trialists to test this in their trial? So we asked lots of people; these are the ones who said: yes, we're interested, we can help you, and if we have a trial that looks suitable, we'll help. They were all positive. And this is how things stand right now. Fast forward from, I think it was 2015, 2014 maybe, the first trial that Aberdeen ran with Dundee: they had one result, and now we have four. And if we look at these sorts of pictures here, from the forest plot, then this is our overall estimate, and it's looking like there's a small benefit, nothing spectacular, but something small, with the possibility that it's doing some harm. I think there are still grounds to do one or two more of these, but these were done between 2014 and 2017 by a bit of coordination, a bit of collaboration. Then we end up with this picture, whereas, what is it, 65 of those interventions in the recruitment review are still waiting for evaluation number two. So with that coordination, that collaboration, we end up with this picture in a relatively short period of time. Of course, because this is retention, you also have to wait for the trials to reach that point, so we can't speed some of these things up beyond that limit. But this, I think, is the way to go. It's a tiny little piece of the trial pathway. A couple more studies, I suspect, and we'll be able to say: yes, we've nailed that; a very small but useful benefit, hard to see harm, just go ahead and do it. It's one of those options you might want to think about. Cheap as chips: you're going to send a letter anyway, do it like this and it might improve response rates by 3%. Why not? For dental trials in Scotland, they've potentially already made that decision, because they are already satisfied enough by that. But this, I think, is the collaboration that we need if we want to improve the situation that Doug Altman was talking about in 1994, his scandal of poor research, this lack of evidence that we have for much of what we do.
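For anyone curious what sits behind an "overall estimate" like the one on that forest plot, here is a minimal fixed-effect (inverse-variance) sketch of pooling risk differences across a handful of small studies. The event counts below are invented for illustration and are not the results of the cover-letter studies discussed in the talk.

```python
# Minimal fixed-effect (inverse-variance) pooling of risk differences.
# The study numbers below are invented for illustration; they are NOT the
# results of the actual cover-letter studies discussed in the talk.

import math

# (responders_intervention, n_intervention, responders_control, n_control)
studies = [
    (130, 200, 118, 200),
    (90, 150, 85, 150),
    (160, 250, 152, 250),
    (70, 120, 66, 120),
]

weights, estimates = [], []
for e1, n1, e0, n0 in studies:
    p1, p0 = e1 / n1, e0 / n0
    rd = p1 - p0                                   # risk difference per study
    var = p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0  # variance of that difference
    estimates.append(rd)
    weights.append(1 / var)                        # inverse-variance weight

pooled = sum(w * rd for w, rd in zip(weights, estimates)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"Pooled risk difference: {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f})")
```

A confidence interval that straddles zero is exactly the situation described above: a small apparent benefit, with the possibility of harm not yet ruled out, which is why one or two further replications are still worth doing.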
And then this is the last one. You remember the graph that I showed about site selection, where there were three bars here and then two flat ones over here: three sites doing all the recruitment, and two sites in addition that didn't do very much. So in Aberdeen we're thinking: can we do that better? Can we make those decisions in a better way? And the trial managers suggested this idea; the trial managers came up with it for their project. What they're doing, essentially, is that at the point at which they sign off a trial site, this site is now ready to recruit, they make a prediction. We weren't stopping any trials going ahead, or any sites going ahead. We were just asking the trial managers: okay, you've now got that site ready to go; do you think it will hit its six-month target, yes or no? And why? That's all we asked them to do. We didn't give them any guidance or suggestions. We said: will they hit that six-month target, and why? And then that ran for a while. This is what we got. Ten trial managers got involved, across seven trials, and they made 39 predictions, so we got a fair bunch of predictions. And the bottom line is, over here in that bit of the study: if those trial managers said yes, this site will recruit well, then it might or it might not, fifty-fifty, more or less. If the trial managers said no, this site will not recruit well, they were much better at that, so the odds were that prediction was going to be correct. So where we are right now, and remember there's no guidance given at all, we just ask, do you think they will hit the target or not: if a trial manager says no, we ought to sit up and listen, because chances are they'll be right. It's not perfect, but the chances are there's something going on there that is worth thinking about. Up here, less so. So the next stage: we've got a lot of qualitative work, and what we're thinking about now is whether we can give a couple of steering questions. Will this site hit its target? You might want to think particularly about this and this. We have a body of qualitative work that has come from the reasons why, plus some focus group work we did with the trial managers. What we're hoping is that by a little bit of steering, we can get the trial managers to reflect on a couple of things. I don't know yet what those couple of things are, but that's the hope, and that will hopefully increase both of these. And the reason I like this idea: there are no complicated forms, there's no extra data collection, we don't send anything extra to the sites. We use the trial managers' expertise; we're essentially asking the person who is tasked with setting up the sites: what do you think? A small worked example of how predictions like these can be summarised is sketched below.
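Here is a rough sketch of that summary: tabulate yes/no predictions against whether sites actually hit their six-month target, and report the proportion correct separately for "yes" and "no" predictions. The prediction-outcome pairs are invented for illustration; they are not the 39 predictions from the study.

```python
# Hypothetical summary of trial-manager predictions versus site performance.
# Each pair is (predicted_to_hit_target, actually_hit_target); the values are
# invented for illustration and are not the study's actual data.

predictions = [
    (True, True), (True, False), (True, True), (True, False),
    (False, False), (False, False), (False, True), (False, False),
]

def proportion_correct(said_yes: bool) -> float:
    """Of the sites predicted this way, what share turned out as predicted?"""
    outcomes = [actual for predicted, actual in predictions if predicted == said_yes]
    correct = sum(1 for actual in outcomes if actual == said_yes)
    return correct / len(outcomes)

print(f"'Yes, will recruit' predictions correct: {proportion_correct(True):.0%}")
print(f"'No, will not recruit' predictions correct: {proportion_correct(False):.0%}")
```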
There's no extra work, because they're thinking about that anyway; we just might want to give them a little bit of a steer to think about a couple of things in particular, to increase this. The idea was not to increase the burden of sending stuff to sites, because there are some very long forms that some centres do use, and we don't want to do that. We've got this expertise; let's try and tap into it.

So, to finish off: to a very large extent, within trials we do things the way we do them because that's how we've always done them. And really, we're often forced into that because there is no alternative; there is no systematic evidence base available to tap into. That's one of the things we want to try and address with Trial Forge. Collaboration and coordination are absolutely central, I think, to changing that picture. And if anybody is interested, if you've got an idea at your site or your centre, then we would be interested, because I think it's the ideas that come from the ground which are going to be most useful. And doing it collaboratively, in a coordinated way, means that we might get some answers much faster than we would otherwise do. So thank you very much for your attention.