[Auto-generated transcript. Edits may have been applied for clarity.]

Welcome to you all. So there will be a mixture here: welcome to people on the course that's running this week on the practice of evidence-based healthcare, and also welcome to people across the wider programme, within the EBHC programme as well. So thanks for joining us for this talk, which I'm very much happy is going ahead. A personal interest and a personal conflict: I was one of Arsenio's DPhil supervisors, and I was keen to share some of his novel work, which has now been published. I'm sure Arsenio will let you know where you can find the information he's published and that he's talking on today. Now I'll let you do your introduction; it's always nice to hear from the speaker themselves. Then I think we can get straight in. We'll have time for a presentation, and I'm sure Arsenio has built in some time for questions at the end as well. So thank you very much.

And I hasten to add that David was my favourite DPhil supervisor. I just have to throw that in there. So, my name's Arsenio, and as you heard, I completed a DPhil in evidence-based healthcare at Oxford just last year, so I'm very proud to be back. My background is in physiotherapy; I was a physiotherapist for 30 years. And it wasn't my first DPhil: the other one is in exercise physiology research in Alzheimer's disease. So I've got a long history with clinical trials of complex interventions of all sorts. But what I want to talk to you about today is something I care a great deal about, and it's not just the what in the clinical trial, but also the how. We're going to talk about intervention fidelity and what that means for how we make sense of the results that we get from clinical trials in health care.

So I'm sure that many people here cook every day, and we've all had the experience of trying to follow a recipe. One of the things you know is that some recipes are easier to follow than others. Some recipes might have one ingredient and two steps, and it's surprising how difficult it can be to actually hard-boil an egg and not end up with rubber. On the other hand, some recipes are a bit more complicated. They might have significantly more ingredients, like a charlotte russe with 19 ingredients and several technical steps.

Health care interventions can be quite similar. There are some trials, some treatments, that have more ingredients, more procedures and more components than others.
Compare, for example, a pharmaceutical intervention. It's not to say that those aren't complex, but in terms of what happens during a trial, the way that they're delivered, other interventions tend to be more complex: for example, surgery, rehabilitation, behavioural medicine. They have multiple components that might influence each other, such as the skill or the experience of the provider. When we think about trials that involve surgery, for example, if it's a new surgery, whether the surgeons in the trial have experience with it can influence the outcome. They might get better at performing it over the first 15, 20 or 30 patients, and so their proficiency over time may influence the outcomes. You can also have this mixture of the participants' characteristics, and some participants will take to the intervention more than others. There will be a lot of tailoring and adaptation you might require for some participants: someone, for example, who is very thin, someone who's living with obesity, someone who's much older or much younger. You'll have differences in dosages and durations, and sometimes even differences in context.

And one of the things that's also very interesting about these complex interventions, you know, surgery, rehab, interventional cardiology, is that some of the treatments take place over time. It's not just one treatment; it's a series of treatments, 3 or 4 times a week for a few months, for example. It's one of the reasons I always say surgery is not a treatment, it's a profession. Physiotherapy is not a treatment, it's a profession. They are made of multiple treatments.

And because these complex interventions have multiple ingredients and multiple steps, you get more scope for variation, for unexpected results. If anyone's a fan of the Great British Baking Show, and I am horribly addicted to it, you can get the Paul Hollywood stare: sometimes what you ended up with was not quite what you intended. Now, the interesting thing is to try to figure out what went wrong. We often know whether an intervention succeeded or failed to achieve its outcomes in a trial. We don't often know why, or we don't investigate the how or the why as much. Sometimes it might be that there were modifications to the recipe, as if we were cooking. Sometimes there might be a difference in skill and technique. All we know is that sometimes we have a result. I like to liken this to the reviews you see on a recipe online. Someone said: I've never made something that was worse. It was horrible. It was just ready for the bin.
But when you start digging in, you start seeing that perhaps some of these reviews came from people who didn't follow the directions, and some modifications matter less than others. So, for example, someone saying, well, the only change I made was using two 14 ounce cans instead of one 28 ounce can: that's probably not going to make any difference. However, you might also start finding that other things were changed that would make a difference. Hey, these biscuits were lovely, but they were a bit dry; then again, I baked them for twice as long as they were meant to be. Very likely that might have been part of the problem. Or: I didn't have any pastry, so I used sponge fingers. I don't like cheese, so I used some fruit and jelly. It said you should have cream, so I put whipped cream on it. In the end, they didn't really taste like cheese straws; they were more like a trifle. These are things you see all the time in cooking, but they're also things that can happen in a clinical trial.

So what was it? Was it the recipe? Was it a poor cook? Was it a different recipe? These are the same dilemmas that we face in clinical trials. Again, we know much more about whether the intervention or the trial succeeded in attaining the outcomes than about why. All of these things can have an effect. Sometimes we don't know: was the treatment just ineffective? Some treatments will just not be effective because of the way that they're designed, or because they can't target the condition or the function that they're meant to. Or sometimes it can be ineffective delivery: you've got a wonderful treatment that could succeed, but it wasn't delivered well. Could it be that there were co-interventions? You had a wonderful treatment, but someone modified it and said, you know, I'd like to add a little of this and a little of that, and so you're getting a multitude of treatments mixed together. And sometimes it's just the wrong treatment for that patient.

What I want to talk to you about is just that: intervention fidelity. You might wonder what fidelity is, other than a bank. Intervention fidelity is the degree to which an intervention in a trial was delivered as it was intended in the protocol. You'll see this by many names; it's a rose by many names. It'll often pop up in the evidence base as treatment integrity or intervention integrity, or adherence, compliance, concordance, quality. It goes by many names, but the basic underlying question is: did the intervention delivered in the trial sufficiently resemble what was intended?
When we have a high level of fidelity, we have greater confidence that the outcome we see was due to the intervention, and less likelihood that the outcome we see was due to some form of contamination, some introduction of co-interventions, some form of bias introduced by that. We have greater confidence that there's a lower likelihood of confounding by the introduction of random variability, because you're adding things into the mix that weren't meant to be there. But this is really important as well: you have higher confidence and a higher chance of reproducibility and effective implementation. One of the biggest things that we have difficulty with in complex interventions is difficulty in reproducing interventions. If anyone has ever read through a paper thoroughly and looked at what was done, it's very difficult sometimes to know exactly what was done. That's a matter of reporting to some degree, but it also reflects fidelity. It's very possible that if you can't tell what was done by reading the paper, somewhere along the way it may not have been reproducible even within the trial.

Now, the other wonderful benefit of fidelity: it also limits type one through type three errors. And you might think: type three, is this new? Is this something in Canada that we don't have in the UK? We thought there are only two types of errors. My best way of describing this is to think of the boy who cried wolf, if anyone remembers that story. Take the null hypothesis to be: there is no wolf. And take the alternative hypothesis to be: no, there is a wolf, and I'm going to prove to you there's a wolf. A type one error is that the villagers believe the boy when he cries wolf, but there's actually no wolf; they're rejecting the null hypothesis incorrectly. A type two error is that the villagers do not believe the boy when there actually is a wolf; they're incorrectly rejecting the alternative hypothesis, and, you know, there go the poor sheep. A type three error: the villagers don't believe the boy when there is no wolf, but they do it for the wrong reason, like, for instance, saying, well, there's no such thing as wolves in this world. All of these types of errors are things that can be heightened by having poor intervention fidelity, again because you don't know what's actually contributing to the outcome you see.
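[Editor's note: a toy simulation, not from the talk, of how unmonitored co-interventions can inflate error rates. Here the experimental treatment truly does nothing, but in the low-fidelity scenario some providers bolt on an effective extra, so the trial "finds a wolf" for the wrong reason. All numbers below are illustrative assumptions.]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, sims, alpha = 50, 2000, 0.05

def one_trial(co_intervention_effect: float) -> float:
    """Two-arm trial in which the experimental treatment itself has zero
    effect (the null hypothesis is true). Returns the trial's p-value."""
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(0.0, 1.0, n)
    # Low fidelity: say 40% of treated participants also receive an
    # unmonitored co-intervention that genuinely works.
    extras = rng.random(n) < 0.4
    treated = treated + extras * co_intervention_effect
    return stats.ttest_ind(treated, control).pvalue

for label, extra in [("high fidelity, no add-ons", 0.0),
                     ("low fidelity, add-ons", 0.8)]:
    reject = np.mean([one_trial(extra) < alpha for _ in range(sims)])
    print(f"{label}: null rejected in {reject:.0%} of simulated trials")
# Roughly 5% false positives with fidelity intact, far more without it:
# the treatment gets the credit even though it did nothing itself.
```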
Think of throwing spaghetti at the wall: one of the problems with poor fidelity is we just don't know what stuck to the wall. We have a very decreased ability to know what actually contributed to the outcome. Did the intervention fail to achieve the outcome, or did it succeed because of the active ingredients that came with the intervention, or because of active ingredients we added by modifying it? Was it the things that were added, or the things that were left out? We just have decreased confidence in the findings of individual trials. This really starts to matter when we think of systematic reviews, and then meta-analyses, when we start compounding, when we start adding all this variability together in pooled analyses of data, and even more so when we think of evidence-based guidelines, because those are going to depend on the systematic reviews and the meta-analyses.

Now, we keep talking about this delivery of an intervention, how we follow this recipe. Is it only about following recipes? Is it really just about how we deliver things? Well, no, it's not just about what the intervention does. A lot of fidelity also deals with the participants, what participants do. Most of these complex interventions involve some active participation, some active steps taken by the participants. They have to carry out or enact the intervention, avoid certain behaviours, follow certain behaviours, and that's adherence. We'll talk about how that's a part of fidelity also. But that really tells you what the participants did, how they followed the directions or instructions given to them as part of the intervention in the trial, for example, maintaining randomisation. There are a number of trials that have looked at the outcomes, for example, of a physical activity promotion programme, or of a surgical intervention, over time. And in some of these, what you see is that you have a crossing over. People who were meant to have only surgery also went to rehabilitation on the side. People who were meant to only exercise three times a week thought, well, it'll do me good, I'll exercise seven times a week. This failure to maintain the randomisation breaks the fidelity, and like fidelity, it's known by many names: you'll see compliance, concordance, any number of things in the literature.

So this brings us to the problems, to the gaps that we have. You know, mind the gap. We have many definitions in terms of fidelity and adherence, and it makes it hard to know what we're talking about at times. We have this possibility of overlapping components and concepts. What's fidelity? What's adherence? What counts, what doesn't? Sometimes we just don't know how to conceptualise it. If I can't tell you what it looks like, how do I monitor it in my trial?
And sometimes you have this conflict between the theoretical side of fidelity (in theory it should do this, in theory it should do that) and what it actually does in a clinical trial. Is there anything I can measure, anything I can see and feel with my hands? And what does it do, for example, in these very complex trials that we're looking at, in surgery and physiotherapy and behavioural medicine?

So I'll tell you a bit about some of the research that came out of my DPhil. My DPhil is called Faithful but Flexible, and it was all about intervention fidelity in these complex interventions in health care. It was a big DPhil that took place between 2018 and 2023. Really, what it did is that it helped to frame fidelity, to come up with an empirically based definition of intervention fidelity, and then it helped to come up with a framework for how to use it, how to conceive of it and monitor it in these complex interventions in what I would call the physical domain: things such as surgery, rehabilitation, physiotherapy, occupational therapy, speech therapy, things that have a physical component to them. It also came up with an assessment tool that we'll look at later. An important thing is that it quantified the effect of intervention fidelity on the treatment effect estimates we get in a randomised controlled trial, and it actually gave us the direction of bias. Sometimes we know that something's going to influence something, but we're not certain in which direction; we'll talk about that coming up next. We're going to see how this all played out in the DPhil. We also saw how it played out in real time in an ongoing, pragmatic, randomised controlled trial comparing surgery to rehabilitation for people with a knee ligament deficiency.

So first, let's define it. How did we come to define fidelity? Interestingly, when doing a systematic review and looking at how it was defined and conceived, there were 95 different ways of doing this. When you have 95 different ways of defining something, you might as well have none, because it becomes very difficult to have a common language. So through thematic analysis, through thematic synthesis, what we came up with was the first empirically based definition for it. And you can really think of it as an umbrella concept, with two distinct but related components: the fidelity of delivery and participant adherence. Both of them are essential for the clinical trial to be faithful to its protocol, and both are essential for understanding how an intervention can influence its outcomes.
And we'll see how. So the first part: intervention delivery fidelity. This focuses mainly on the delivery of the intervention, the actions of the practitioner or the provider. It looks at the quality of the intervention delivery, or the performance of the intervention during the trial. It reflects the correspondence: was how the intervention was done similar, sufficiently similar, to what was intended in the protocol? Was it in accordance with the procedures as they were laid out in the study manual or the treatment manual? Did this thing look sufficiently as it should have? So here's an example from a paper in the literature, a paper about a New Start intervention. You can see how they're trying to characterise it as compliance with the New Start intervention (there you see one of the other terms): it'll be monitored throughout the trial by observations and regular collections of activity records, to assess adherence and to understand whether the facilitators delivered the intervention in accordance with the training and as it was intended. Sometimes you'll even see this mentioned not in the trial paperwork itself or the trial application, but later, if they publish a process evaluation. But one way or the other, what you can see there is this attempt to check: was this intervention delivered as we wanted it to be?

But the other half of this is the actions of the participants. That's adherence. And that focuses mainly on their role and how they interact with the intervention. Whether they accepted the randomisation, you know, did they stay in the group they had been randomised into? Did they initiate the intervention to which they were allocated? And then, how well did they follow the prescribed, allocated intervention? For example, this could be whether they attended the required number of therapy sessions, whether they performed some activity that was part of the intervention a sufficient number of times, or sufficiently similarly to how it was intended. For example, did the participants in a surgery arm avoid physiotherapy, avoid exercising, and just have surgery? And you can see an interesting example here in a paper about bariatric surgery, looking at the rates of adherence to the behavioural recommendations that come along with the surgery. Did the participants attend the regular appointments they were meant to attend?
Did they follow the dietary recommendations, the physical activity recommendations, the vitamin use recommendations? You can see already whether the participants were really carrying out the intervention or not, because that would tell you a lot, for example, about the success of bariatric surgery. You can have a fantastic surgery, but if everything the participants do before, after and around it departs completely from the recommendations, you may not have a successful surgery, not because of the physical procedure, but because of everything that happened around it.

This is all great: now we know what to call it. But what does it do? What can we say fidelity does? Can it bias the results of RCTs? When we think of bias, what we're really looking at are systematic errors, or deviations from the truth, in the results or inferences of a trial: for example, flaws in the design or conduct of a trial. And again, we think fidelity can bias the treatment effect estimates. This can lead to an underestimation or an overestimation of the true intervention effect, so you can have biases that skew either toward the null or away from the null. For example, we know this is the case with lack of blinding in clinical trials: when we have participants who were not blinded, their data tends to exaggerate the effect size in some clinical trials and in systematic reviews of clinical trials. We know, for example, that observer bias is also associated with exaggerated odds of an intervention succeeding, or exaggerated odds of an outcome, up to a one-third exaggeration of the odds sometimes. We also know that even study quality can bias the results of a trial. So, for example, in this paper we can see that whether something was at low risk of bias, uncertain risk of bias or high risk of bias can also lead to an overestimation of your treatment effects in a clinical trial.

So what about low fidelity? Well, previously people had just theorised: hey, if you have decreased fidelity, you're going to have lower treatment effects, because you're introducing all these things. Your participants are being exposed to different levels of the active ingredient of the intervention; they're getting different doses. Other people had theorised, sort of mathematically, that if you have lower fidelity, you would need a significantly larger sample size to overcome it: if you had a 25% decrease in fidelity, you'd need a four times larger sample size. Think of how problematic that is with a lot of our complex intervention trials. They don't have enormous samples to begin with. You can see a pharmaceutical trial with 10,000 participants; you very rarely see a surgical trial with 10,000 participants.
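[Editor's note: a minimal sketch, not from the talk, of the arithmetic behind that kind of claim. With the usual normal approximation, required sample size scales with 1/d²; the quoted four-times figure implies the realised effect roughly halving, whereas a strictly proportional 25% attenuation would inflate the sample size by about 1.8 times, so the theorised figure presumably assumes a steeper fidelity-to-effect attenuation model. Numbers below are illustrative.]

```python
from scipy.stats import norm

def n_per_arm(d: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate per-arm sample size for a two-arm trial to detect a
    standardised mean difference d (normal approximation)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z / d) ** 2

d_planned = 0.5  # effect size the trial was powered for
for retained in (1.00, 0.75, 0.50):  # fraction of the effect actually realised
    d_real = d_planned * retained
    print(f"effect retained {retained:.0%}: n per arm ~ {n_per_arm(d_real):.0f}")
# Because n scales with 1/d**2, retaining only half the effect quadruples
# the required sample size; retaining 75% inflates it by about 1.8x.
```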
But we didn't know what the actual effect on the treatment effect estimates is in a randomised controlled trial. It had never been shown empirically. We didn't know what direction it would be: would it lower the effect? Would it exaggerate the effect? So I want to tell you about this paper. It's one of the papers of my DPhil, in the Journal of Clinical Epidemiology, and here we looked precisely for that. We wanted to see how different levels of intervention fidelity influence the treatment effect estimates derived from these randomised controlled trials in complex interventions. This was a meta-epidemiological study. By meta-epidemiological study, what I mean is a study in which we're looking at the impact of different characteristics on intervention effect estimates. Typically you do this from a collection of meta-analyses: you contrast the results of trials with a characteristic against the results of trials without that characteristic, and then you calculate an estimate of the average bias that's associated with that characteristic. We've seen these used increasingly, to investigate the effects of poor blinding, of a failure to randomise, of defects of sequence generation, and of sample size. We've seen them for any number of things, but it had not been done for fidelity yet. So in this study, our aim was to measure the association between intervention fidelity and the estimate of treatment effects in these complex intervention trials.

So how did we do it? Well, we went into PubMed and the Cochrane Library and did a really exhaustive search over ten years for meta-analyses of randomised controlled trials in these physical, complex interventions. These were in physical therapy, speech therapy, and physical activity and exercise interventions. You may wonder: this man mentioned surgery before, so why isn't surgery here?
Surgery wasn't here because there were not a sufficient number of surgical trials measuring or reporting fidelity, which is really interesting in itself. So we had these meta-analyses with trials in these professions, these disciplines, with continuous outcomes, with at least four randomised controlled trials per meta-analysis, and at least two of them reporting or monitoring fidelity in some way. To figure out how many we needed, as a sample size and power calculation, there is one published by the Sorbonne Paris Centre for Research in Epidemiology that we used. It told us we needed a minimum of 14 meta-analyses and a minimum of 140 RCTs. Of course, we exceeded that just to be safe, but it gave us a baseline to start with.

To look at the effects of fidelity, what quantifiable difference it makes in the results of these studies, we used what's called a meta-meta-analytic approach. It's kind of like doing a meta-analysis of meta-analyses. We calculated the standardised mean differences between the RCTs in which there was fidelity and the RCTs in which we didn't identify any fidelity, and then we looked at the mean impact of fidelity on the treatment effect estimates within each meta-analysis, and then across all of them. Then we pooled all the standardised mean differences, and we wanted to see if they were different in trials with and without fidelity. And we did some sub-analyses. We wanted to see, well, what influence does risk of bias have? Is this something you see more or less in trials at low risk of bias? Is this something you might see more or less in trials with larger sample sizes compared to smaller ones? Because we know sample size influences treatment effect estimates, the same way the risk of bias does. Then we did something else: meta-regression. This is just like the regression analysis you would do when you want to see how things are related: do they have a linear relationship, are they associated, can you predict one thing from the other? We did this here to see how a change in fidelity may influence the results of a trial, even accounting for, or controlling for, the sample size or the risk of bias.
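[Editor's note: a schematic sketch of that meta-meta-analytic step, with invented toy numbers; the published study worked from the actual trial-level standardised mean differences. Within each meta-analysis the trials without identified fidelity are contrasted with those with it, and the per-meta-analysis contrasts are then pooled with the same inverse-variance machinery.]

```python
import numpy as np

def pooled_smd(smds, variances):
    """Fixed-effect inverse-variance pooling of standardised mean differences."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.sum(w * np.asarray(smds, dtype=float)) / np.sum(w)
    return est, 1.0 / np.sum(w)  # pooled estimate and its variance

def fidelity_contrast(trials):
    """Within one meta-analysis, contrast trials without identified fidelity
    against trials with it. Positive values mean the no-fidelity trials
    showed the larger effect."""
    no_f = [(t["smd"], t["var"]) for t in trials if not t["fid"]]
    with_f = [(t["smd"], t["var"]) for t in trials if t["fid"]]
    e1, v1 = pooled_smd(*zip(*no_f))
    e0, v0 = pooled_smd(*zip(*with_f))
    return e1 - e0, v1 + v0

# Toy data: two meta-analyses, each a list of trials (SMD, variance, fidelity flag).
meta_analyses = [
    [{"smd": 0.62, "var": 0.04, "fid": False}, {"smd": 0.55, "var": 0.05, "fid": False},
     {"smd": 0.31, "var": 0.03, "fid": True},  {"smd": 0.28, "var": 0.04, "fid": True}],
    [{"smd": 0.48, "var": 0.06, "fid": False}, {"smd": 0.22, "var": 0.05, "fid": True}],
]

# The "meta-meta" step: pool the per-meta-analysis contrasts across all of them.
contrasts = [fidelity_contrast(ma) for ma in meta_analyses]
overall, var = pooled_smd([c for c, _ in contrasts], [v for _, v in contrasts])
print(f"mean SMD difference, no-fidelity minus fidelity: {overall:.2f} (SE {var ** 0.5:.2f})")
```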
So what did we get? We screened 427 meta-analyses (much more work than it sounds like) and found that 19 were ultimately eligible: eight in physical activity or exercise promotion, seven in physiotherapy, four in speech therapy. And again, physio and speech therapy are not treatments, they're professions, but these were treatments that fell within them. We looked at 203 randomised controlled trials; the median was ten randomised controlled trials per meta-analysis. There were 248 separate comparisons between an experimental treatment and a control or comparison, representing the experience of 25,000 participants. So it's a nice number.

And what we found, generally, is that there's very poor fidelity monitoring or reporting in this body of evidence. Only 31% of these papers actually reported that they monitored or maintained fidelity in some way. What you see on that graph is that the number of papers that didn't report anything, 141, was the largest amount. As we slide the needle toward higher and higher degrees or levels of fidelity, it gets smaller and smaller. In the end, only a very small number by comparison really looked at both fidelity and adherence. So that already tells you there's a shortcoming, a gap, in this case.

Then we looked at the standardised mean differences: the difference between the effect of the intervention we're interested in, the experimental one, and the effect of the control. And we found that there were very significant systematic differences in these effect estimates, which was surprising; it was not what people had predicted. Basically, when you had fidelity completely absent, you tended to have a larger treatment effect estimate. As you added more degrees of fidelity, more degrees of adherence, more degrees of closeness to what was intended, you tended to get a smaller and more precise treatment effect estimate, by precise meaning that the confidence intervals were narrower. So what this is telling you is that having very poor fidelity was associated with an exaggerated, overestimated treatment effect estimate, not a smaller one. Interestingly, even the heterogeneity was much smaller as fidelity improved and much greater as fidelity decreased. And this was the same whether we looked at no fidelity versus perfect fidelity, or at different degrees of fidelity.

So we thought, okay, what about low risk of bias? Will this be the same when you remove all these other things that come with risk of bias, all these other potential confounders and contamination? Then we saw the same result.
The more you move the needle toward a higher degree of fidelity and adherence, the smaller and more precise the treatment effect estimate and its confidence intervals became. So again, it gives you the sense that there's an exaggeration, potentially an overestimation, of your effect when people have not monitored or maintained fidelity. And this was the same whether we looked only at trials with a low risk of bias or only at larger trials, where you're less likely to have the exaggerated effects that come with smaller trials; even with larger sample sizes, you see the same relationship there.

But I want to point out something interesting that will come back later. In all of these, when you only looked at adherence, you just wanted to know: did people show up for the right number of sessions, the right number of classes? But you didn't care what they did. That tended to give you a higher treatment effect than when you really monitored how many times they came in and how well they did it. We'll talk about why that is later, but I want to plant the idea now.

So what was our take? Poor fidelity may certainly bias effect estimates in randomised clinical trials of complex interventions, and it's a very linear association. Again, as you move that needle from less fidelity to more, you start seeing the standardised mean differences decrease. By the end of it, when you went from nothing, from no fidelity and adherence, to perfect fidelity, you actually had a 0.24 smaller standardised mean difference, a smaller effect size. And this, again, is when you adjust for sample size and risk of bias; we'll talk about why that matters. Just to show you: when we looked at this in all the trials, considering their sample size and risk of bias, and when we looked at it separately in only the trials at low risk of bias, the effect was actually slightly higher there, but not statistically significantly higher than in all the trials. What you're definitely seeing is a very linear relationship.
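[Editor's note: a sketch of the kind of weighted meta-regression described above, with simulated data standing in for the study's actual trial-level estimates. The direction built into the simulation, SMDs shrinking as fidelity rises, mirrors the finding reported in the talk; all coefficients and variable names are illustrative.]

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
k = 60                              # number of trials
fidelity = rng.integers(0, 4, k)    # ordinal fidelity level, 0 (none) to 3 (full)
n_total = rng.integers(40, 400, k)  # trial sample size
low_rob = rng.integers(0, 2, k)     # 1 = low risk of bias
# Simulated SMDs: effects shrink as fidelity rises, the direction found in the study.
smd = 0.55 - 0.08 * fidelity - 0.0003 * n_total + rng.normal(0, 0.12, k)

# Regress effect size on fidelity while controlling for sample size and risk
# of bias; the weights stand in for inverse-variance weighting of trials.
X = sm.add_constant(np.column_stack([fidelity, n_total, low_rob]))
fit = sm.WLS(smd, X, weights=n_total).fit()
print(fit.params)  # the fidelity coefficient ~ change in SMD per fidelity level
```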
Why did this happen? This sounds counterintuitive, right? You think, hey, wait a minute: if you have better fidelity, shouldn't you have a bigger effect, not a smaller one? Well, there are a lot of possible reasons for that. It's kind of hard to answer from that study, but one of the things you may see is that when you have poor fidelity, what you might be getting is the cumulative effect of a lot of extra goodies being added along the way, when you're not really carefully monitoring what you're doing. And I know from my own clinical practice, no matter what someone learns in continuing education, no matter what treatment you see, you always have this temptation to say: I know, I know, you're supposed to do this. Let's say somebody has a spinal issue, you know, sciatica, and you want to work on something. I know it says to do it that way, but I find in my own experience that if I do that but also add something else, it works better. Or: I know, I know, but I prefer to do it that way, and I prefer to do it in a hot room instead of on a table. All this accumulated addition of little bits here and there is probably adding a significant number of active ingredients. So partially, the reason you could be seeing this larger, exaggerated effect with poor fidelity is that there's more going into the system; the person's being exposed to additional effect-mediating interventions. There's also an increase in intervention intensity. There's kind of that interesting rule of life: sometimes, no matter what you do with a patient, the fact that you're doing something means people get better. It's what we see sometimes even with some of the sham and placebo interventions. Interestingly, you can see how this may be the case here too. Remember I said the trials that only cared about how often people came, but not what they did, also had a bigger effect? You can consider as well that those participants are just being exposed to more, and more often, but we don't know what that "more" is. So that might be why they're showing this potentially exaggerated effect.

So what's the take-home message for this aspect of it? Fidelity matters. It's not something that we often talk about; it's something we definitely should talk about more, something we should plan into our trials. Treatment effect estimates may be overestimated or less precise when we don't monitor and maintain fidelity, and by that I mean, again, how we deliver the intervention, but also what the participants did. Again, think about what this may mean for meta-analyses when we start pooling these errors,
and what it might mean for clinical practice guidelines. How confident can we be in recommending one treatment over another if we're not certain what was actually delivered in the trials on whose evidence we rely? Are these trials a good basis for evidence, a good evidence source for clinical decision making?

I'll give you a little example of how that worked out. One of the things we did is we said: okay, if trials with poor or no fidelity tend to have this exaggerated effect, and trials with much better fidelity tend to have this smaller, more precise effect, what if we went back and either added or took away that effect from the trials, depending on what their fidelity was? And what we found is that some of the meta-analyses no longer had a statistically significant difference between the intervention they were investigating and the control or the comparison. That really matters. And the other thing: we talk sometimes about the size of an effect, of a relationship, with Cohen's d, for example. Some of them went from a small effect to no effect whatsoever. Some of them went from a moderate effect to a small effect. You can see this here: these are meta-analyses of different interventions, and you're seeing this change. Sometimes the statistical significance vanishes; sometimes the size of the effect, the magnitude, vanishes. That matters when we want to make a decision or recommendation for patient care on something that may really not have an effect at all. So could this potentially lead us to make poor decisions? We have to use caution if we don't know whether there was fidelity, or if it wasn't monitored or reported. We just have to be a bit cautious, because it may potentially lead to the adoption of some interventions that are ineffective. And what about the possibility that we may prematurely reject some interventions that actually could have been effective?
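[Editor's note: a minimal sketch of that adjustment exercise on an invented toy meta-analysis. The 0.24 figure is the SMD exaggeration reported in the talk for no fidelity versus perfect fidelity; the trial numbers below are illustrative, and the published analysis was considerably more careful than this.]

```python
import numpy as np

BIAS = 0.24  # estimated SMD exaggeration: no fidelity vs perfect fidelity (from the talk)

def pooled(smds, variances):
    """Inverse-variance pooled SMD with a 95% confidence interval."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.sum(w * np.asarray(smds, dtype=float)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Toy meta-analysis: four trials, none of which monitored fidelity.
smds = np.array([0.35, 0.28, 0.22, 0.30])
variances = np.array([0.015, 0.020, 0.012, 0.018])

raw, ci_raw = pooled(smds, variances)
adj, ci_adj = pooled(smds - BIAS, variances)  # debit the estimated exaggeration
print(f"as published:      {raw:.2f}, 95% CI ({ci_raw[0]:.2f}, {ci_raw[1]:.2f})")
print(f"fidelity-adjusted: {adj:.2f}, 95% CI ({ci_adj[0]:.2f}, {ci_adj[1]:.2f})")
# A 'small' and significant pooled effect can shrink to roughly zero, with the
# confidence interval now crossing the null, once the exaggeration is debited.
```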
What are the caveats to this? There's always a catch, always a caveat. You might say: well, maybe they did it, they just didn't report it. Is that the chicken or the egg? Is it possible that you have these randomised trials where they actually reported fidelity because they didn't get the result they wanted, and they were looking for somewhere to pin that? Could you have a situation where the randomised controlled trials with the largest effects said: we don't need to bother with this, look at how big the effect was? I can tell you that that was not the case. We actually looked at that as well; I'm nothing if not thorough. And that did not pan out. So there is more there than that.

You might say, also: what about the stage of development of interventions? Oftentimes we're doing these trials to see if an intervention has promise enough to keep developing it. It's possible that at an early stage some of these interventions are just not stable. They're not defined enough; we don't yet know what to call them; we haven't sorted out the parameters yet. That's why it's hard to deliver them with fidelity. It's also possible that you can have a very robust intervention, an intervention so robust that even if you deliver it badly, you can still get a treatment effect. I think that's less likely, but it's possible. And then again, here's an interesting question. This is all focussed on the treatment, on the experimental intervention. No one's looked at it in the control. What if you have a well-delivered experimental intervention, and the control is not delivered well at all? That's going to throw the treatment effect estimate off as well. We may not be comparing things the way we think we are, and that's something that hasn't been looked at. That might be another phase for the future. But it's something else to keep in mind: this is just referring to what we know was done for the treatment, not for the comparison.

So now that I've given you that, you might say: okay, great, wonderful, I'm going to go look for this. How do I do it? One of the difficulties we had, and we're coming to the end, just to give you the wake-up warning, was the very little guidance that was available for how to monitor fidelity, especially in surgery, especially in rehabilitation and all these complex interventions. We had tools, but they were made more for psychology, for education, for behavioural health, and they don't always transfer well to surgery or to rehab; they don't transfer well to these physical complex interventions. Some of the domains aren't really well defined, and some of them may not have been very adaptable for us. For example, a lot of them have this category called enactment, and what they want to know is whether a participant uses the skills in daily life. That may not be as applicable to these interventions, because it doesn't necessarily tell you whether the intervention was delivered well. Somebody can deliver an intervention beautifully. We see this with smoking cessation.
We see this even with physical activity recommendations: you can deliver them beautifully, and someone can say, yeah, I don't really care, or, I know, I know, but it's too hard for me, you don't understand. That doesn't mean it wasn't delivered with fidelity; that's more a measure of effectiveness or other things. And most of the tools that were available were really made to assess fidelity while you were conducting the trial. But what do you and I do when we just want to know whether fidelity was maintained when we're reading a paper?

So what I want to present to you is a checklist that came out of this thesis of mine, and it's called the CONSIDER checklist; it stands for Complex Interventions Design, Delivery and Receipt. CONSIDER. I had tried this with an acronym generator. If anyone's ever used those websites, you put the key terms in and they'll help you come up with an acronym. It came up with some fantastic ones: CONFOUND, CONFUSE. I didn't think those were very helpful. So, please, CONSIDER. What this framework does is look at different aspects of intervention fidelity in various parts of a randomised controlled trial, so you know what you're looking for at the time. I'm not going to describe it much here, but you can find it; it's available online, and it's also available in the meta-epidemiological paper that we discussed.

I'm going to show you the tool in more detail, but just to give you the overview: it's a multi-dimensional assessment, looking at both fidelity and adherence in the publication, whether it was supported, whether it was monitored, whether it was reported. It's got eight items, and they're mapped onto the other checklists and tools that we use all the time, such as TIDieR for intervention reporting, CERT for the reporting of exercise interventions, and SPIRIT. All of the items are mapped onto those, which makes them easier to find. There's an explanation and elaboration paper, and the tool went through a validation study and survey, which found that it had high internal consistency and reliability, and users found it easy to use.

So what does it look like? The first five items are really looking at whether there are things in the paper that you can identify that would support fidelity. It asks you to say whether they were absent, somewhat discernible, or mostly discernible. So they'll look at things like materials.
Does the paper tell you whether the therapists or the practitioners or the surgeons had training, whether they had manuals, whether they had materials they could use to ensure fidelity? It's looking at things like: did they describe the amount of leeway that the practitioner had? Could they adapt things, or could they not? Were the people delivering the intervention trained? Did the paper describe the scheduled duration, the intensity or the dose well enough that you know they were delivering things as intended? So, for example, this is from the explanation and elaboration paper. What you see here is an example from an actual paper of what a score of one would look like: the treatment programme consisted of a regular, manualised model, and all three groups were provided with pedometers and logbooks to monitor what they did. This is the kind of thing you see in the explanation and elaboration paper for the tool: it tells you how to decide how to score something. For example, where it shows a score of two: photographic details of the programme were distributed to each participant, with a logbook to record the number of days that the exercises were performed. That tells you they were really thinking closely about how do we know, and how do we monitor, what the participants were doing.

The second section then looks at how fidelity was monitored. How did they know that things stayed the way they were meant to be? Did they describe the strategies used to monitor or maintain both the participants' adherence and the delivery of the intervention? Again, it's scored from zero, if nothing happened, all the way up to three, if both of them were monitored and defined. As an example, you'll see here a score of three, taken from a paper: a weekly therapy log was collected, and the parents were asked to practise the intervention daily and to video record at least ten minutes each week for review during the group. So there was recording, and people were actually reviewing the videos to say: okay, this looks sufficiently like what we intended it to look like. All the intervention sessions were videotaped; at least two sessions per child were chosen randomly and coded with a checklist; at least 80% correct on each of six techniques was required to meet the fidelity criteria. They're actually telling you that fidelity was met to a certain level, and it explains why it was scored the way it was.

The next section is really looking at whether they reported the modifications.
Because oftentimes you have to modify interventions for particular purposes, this asks whether, in the paper itself, you could see that they reported the modifications and kept track of them. So, for example, this is how one of the items is scored, from an actual paper: no parent met the fidelity of implementation criteria at baseline or at weeks 12, 21 and 25; of the parents in that group, none of them met fidelity of implementation. Unfortunately, that's not great news for that particular trial; it's basically saying nobody did what they were intended to do. But it gives you an example of how to score it, and a very detailed example of why that score.

And finally, the last thing is: okay, what's your final assessment? Did they maintain fidelity? Is fidelity there? Did they maintain adherence? Is it there? Again, in the paper what you get is very specific examples of how to do this and how to score it. Because what you really want to know is: did participants adhere, did they do what they were meant to do at least 75% of the time, and was fidelity maintained, did things look the way they were meant to at least 75 to 80% of the time? So you have very specific guidance there that can be very helpful when looking at this for a paper, for a systematic review or a meta-analysis, or just to understand what actually happened.
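[Editor's note: a minimal sketch of how that final judgement could be encoded when extracting data with such a checklist. The item names and groupings below are paraphrased loosely from the talk, not taken from the published CONSIDER checklist, and the 75% thresholds are the ones mentioned above.]

```python
# Item names are paraphrased from the talk, not the official checklist wording.
scores = {
    "support": {                      # first five items: is fidelity support discernible?
        "materials_and_training": 2,  # 0 absent, 1 somewhat, 2 mostly discernible
        "provider_leeway_described": 1,
        "dose_duration_intensity": 2,
    },
    "monitoring": 3,                  # 0 = nothing, 3 = delivery and adherence both monitored
    "modifications_reported": 1,
}

def final_assessment(adherence_rate: float, delivery_rate: float) -> dict:
    """Final judgement in the spirit of the checklist: were participant
    adherence and delivery fidelity each maintained at least ~75% of the time?"""
    return {
        "adherence_maintained": adherence_rate >= 0.75,
        "fidelity_maintained": delivery_rate >= 0.75,
    }

print(final_assessment(adherence_rate=0.82, delivery_rate=0.68))
# -> {'adherence_maintained': True, 'fidelity_maintained': False}
```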
So what's our final take? Back to the pastries: what do you put in that box of cupcakes to take home with you? The take-home is that fidelity matters. Again, it sounds trite, but it really does matter. Fidelity influences treatment effect estimates, and it tells us a bit about the how and the why, why interventions may or may not have worked. Clinicians and policymakers, keep in mind that we all rely on the synthesis of the best available evidence to inform our decision making, but we have to consider fidelity, and we need caution when we're not certain whether fidelity was there or not. You can see that this is entering the consciousness more: even the risk of bias tool, the RoB 2 from Cochrane, comes up with concepts that are very close and related to fidelity. It's not quite exactly the same, but it's there. Fidelity can be assessed in trial publications; this can be done. I think we should make this a greater part of quality assessment. It's not really required now, but I think it should be. And again, there's a framework available, and there are other frameworks available; the framework and checklist are there, freely available. I definitely welcome more development and collaboration; it'd be a great topic to road test more. But the take-home message is: monitor fidelity, don't just take it at face value. If anyone's interested in knowing more, that's me, and I'm always happy to answer questions or collaborate in any way.

Thank you, Arsenio. Fantastic. Yes, do we take that QR code? Right, where does that send me off to, if I look up that code now? It's LinkedIn; it will connect you with me. I do. All right. Cool. Great. Thank you.