• By Dartington SRU
  • Posted on Thursday 21st August, 2014

Poor fidelity for child mental health programmes

We know that better fidelity when implementing an evidence-based programme leads to better outcomes. But a review examining fidelity in 10 studies of interventions for children with multiple mental health conditions found that only one met the standard for “high” fidelity.

The review claims that researchers and practitioners can do more to ensure that psychosocial interventions for children with multiple mental health conditions are put into practice exactly as intended and without bias. It highlights a “fidelity patchwork”: while most studies succeeded on several dimensions of fidelity and bias, almost none succeeded on all.

Researchers at the University of Guelph, Canada, concentrated their review on 10 studies. Each study reported on the randomised controlled trial of a psychosocial intervention for children or adolescents with co-morbid mental health problems – a population whose complex issues may make fidelity in treatment and lack of bias even more important than usual.

They used two ratings: one for fidelity, and one for bias. The first, the Intervention Fidelity Assessment Checklist, examined whether the study reported different fidelity components – for example, number of contacts, intervention content and training of providers.

The second measure was the Cochrane Collaboration tool for assessing risk of bias in the methodology. Its criteria include randomly assigning participants to the intervention or control condition, “blinding” participants or experimenters to the condition to which a participant belongs, reporting all exclusions from the analysis, and reporting all outcomes.

First, the good news. Results from the fidelity measure indicate that most researchers and practitioners pay attention to and report on the design and delivery components of an intervention. All 10 studies made use of a manual to ensure that the intervention was delivered as intended. Half of the studies assessed participants’ performance of intervention-specific skills.

Most studies also did well on certain bias measures. They often explained their bias-reduction strategies such as randomisation and blinding. Many reported exclusions from the study, which helps to avoid attrition bias.

Unfortunately, this is not the whole story. Other components of both fidelity and bias appeared much less frequently in the review of studies. It seems that researchers and practitioners understand the importance of fidelity and bias, but may not be aware of all of the elements that need to be taken into account.

In particular, the review revealed an apparent lack of reporting on details about the provider of the intervention. For example, the provider’s training and skills, and their level of supervision and monitoring, were rarely discussed. Further, none of the studies measured any potential unintended effects of the provider, such as their perceived credibility or warmth.

In addition, several studies did not check that the amount of treatment delivered to the participant was at the intended level, and several did not test participants’ comprehension of treatment material. On top of this, few studies made attempts to assess whether participants could perform intervention skills in different settings.

Finally, some articles were rated “unclear” on risk of bias – not necessarily because the studies failed to take all the steps to avoid bias, but because they did not report what they had done.

While most studies succeeded on several dimensions, almost none succeeded on all. Only one of the 10 studies scored above 80% on both the fidelity and bias measures.

What can be done to improve the picture? There are several barriers to using fidelity strategies, the authors note.

First, there is no industry-standard measure of fidelity or common guidelines for using fidelity strategies. Second, practitioners have to be trained to use fidelity strategies, which takes time and money. Third, the desire to tailor a treatment to particular individuals may quite understandably deter a clinician from implementing an intervention as intended.

The authors offer a number of caveats. The conclusions of this study refer to efforts to implement interventions for a specific population – children with co-morbid mental health problems – which the authors admit is a tough area in which to measure effectiveness. Also, the review is based on a small sample of studies, and uses simple measures to capture complex concepts. Finally, it is possible that some of the studies under review actually used strategies to increase fidelity and reduce bias that they did not report in their publications.

McArthur, B. A., Burnham Riosa, P., & Preyde, M. Review: Treatment fidelity in psychosocial intervention for children and adolescents with comorbid problems. Child and Adolescent Mental Health, 17(3), 139-145.
