Baird et al. and StrongMinds FAQ: Why one study doesn’t overturn the case
We are sometimes asked why we recommend StrongMinds despite a study – Baird et al. (2024) – suggesting a small impact. Some people would give this study a lot of weight because it’s an RCT of a partner delivering treatment with StrongMinds’ supervision. However, the relevance (‘external validity’ in academic jargon) of the study to how StrongMinds operates today is much weaker than it appears. We base our cost-effectiveness estimates on this study, the wider psychotherapy literature, and StrongMinds’ monitoring data. While Baird et al. lowers our estimate, we still recommend StrongMinds as a ‘top charity’. StrongMinds is also fundraising for a new RCT, which we welcome and expect will give stronger, more directly relevant evidence.
We have previously discussed this issue at length in our report into psychotherapy in LMICs (McGuire et al., 2024b). However, as that report is long, and the Baird et al. study is discussed across several sections, we’ve put together a brief FAQ about this particular study.
What is Baird et al.?
Baird et al. (2024) is a working paper (not yet peer-reviewed) reporting an RCT that tested a version of group interpersonal therapy in Uganda, where StrongMinds supervised a partner (BRAC) delivering the intervention. It found very small effects, and most results were not statistically significant beyond the earliest follow-up. It is also the only published RCT in which StrongMinds was involved.
Why might you think this means we shouldn’t recommend StrongMinds?
Indeed, if one puts 100% of the weight on Baird et al., instead of the other sources (which we discuss more below), the cost-effectiveness drops to 6.95 WELLBYs created per $1,000 donated. This is just below, but close to, our estimate for cash transfers (7.55 WELLBYs created per $1,000 donated).
At first glance, it’s understandable to think this study should outweigh everything else. It’s a well-conducted RCT, it took place in Uganda where StrongMinds operates, and it involved a partner delivering the programme with StrongMinds’ supervision. Surely that should make it highly relevant?
We think not, and make the case below.
Why is this study not very relevant to StrongMinds?
External validity refers to how well results from one setting apply to another, and it should always be considered. While this is the only published RCT involving StrongMinds, we think Baird et al. has relatively low external validity because of the following issues, none of which apply to StrongMinds' current programme:
- The RCT was a pilot, run in 2019, of the first time StrongMinds implemented its programme via a partner organisation (BRAC).
- It was also the first time StrongMinds had delivered psychotherapy to adolescents, and the first time sessions were led by youth facilitators (StrongMinds primarily delivers therapy for adults, led by adults).
- The facilitators were inexperienced and given insufficient supervision.
- Attendance was low, with 44% of participants failing to attend any sessions (compared to 4% in StrongMinds’ actual programmes).
- Furthermore, the long-term data collection overlapped with COVID; arguably, this alone could be enough to conclude that a study has low external validity for non-pandemic times.
These issues are noted by Baird et al. (p. 29) and/or by StrongMinds themselves.
How do you account for Baird et al. without ignoring other evidence?
Our recommendation of StrongMinds is based on three kinds of evidence:
- A systematic review and meta-analysis of 84 RCTs of psychotherapy in low- and middle-income countries (all the studies available at the time),
- StrongMinds’ own monitoring and evaluation (‘M&E’) data, and
- What we call 'charity-related causal evidence', which in this case is currently just Baird et al.
We combine these different data sources (in a Bayesian manner) by giving them different weights based on their statistical uncertainty, with some subjective adjustments to account for higher-order uncertainties like relevance.
We strive not to base our evaluations on a single study, especially when there is a wealth of studies on the intervention that can inform our prior. Of course, relevance should play an important role here (and perhaps charity-specific studies with negative results should get extra weight), but as we mentioned above, we don't think this study is particularly relevant.
In academia, there's a well-established hierarchy of evidence, in which systematic reviews and meta-analyses rank above single studies (e.g., Guyatt et al., 2008). However, how to combine different sources of data like this in cost-effectiveness analyses is not a solved problem: we have done a lot of thinking and reading about it, and asked other academics, and there is no consensus.
The meta-analysis and the M&E data both find much larger positive effects. Nevertheless, we include Baird et al. in our analysis and give it about 20% of the total weight. It would get only ~3% of the weight if we treated it as just another of the 84 psychotherapy RCTs in the meta-analysis. In other words, we already give Baird et al. considerably more weight than a generic study, because it is specific to StrongMinds.
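To make the mechanics concrete, below is a minimal sketch (in Python) of this kind of weighting: each evidence source gets an inverse-variance weight, optionally scaled by a subjective relevance multiplier, and the pooled estimate is the weighted average. The effect sizes, standard errors, and multipliers are hypothetical placeholders, not our actual inputs; the real analysis is more involved (see McGuire et al., 2024b).

```python
# Illustrative sketch of combining evidence sources by weighting.
# All numbers are hypothetical placeholders, NOT our actual inputs.

# Each source: (effect estimate, standard error, relevance multiplier).
sources = {
    "Meta-analysis (84 RCTs)": (0.60, 0.03, 1.0),
    "StrongMinds M&E data":    (0.80, 0.05, 1.0),
    "Baird et al. (2024)":     (0.10, 0.10, 4.0),  # charity-specific, so upweighted
}

def pool(sources, use_relevance=True):
    """Inverse-variance weights (1/SE^2), optionally scaled by relevance,
    then a weighted average of the effect estimates."""
    weights = {
        name: (rel if use_relevance else 1.0) / se ** 2
        for name, (_, se, rel) in sources.items()
    }
    total = sum(weights.values())
    pooled = sum(w * sources[name][0] for name, w in weights.items()) / total
    shares = {name: w / total for name, w in weights.items()}
    return pooled, shares

for use_relevance in (False, True):
    pooled, shares = pool(sources, use_relevance)
    label = "With" if use_relevance else "Without"
    print(f"{label} relevance adjustment: pooled effect = {pooled:.2f}")
    for name, share in shares.items():
        print(f"  {name}: {share:.0%} of the weight")
```

In this toy setup, the relevance multiplier is what moves a charity-specific study from a few percent of the purely statistical weight to something like the ~20% described above.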
Baird et al. caused us to lower our cost-effectiveness estimate of StrongMinds, but we don't see this study as a decisive reason to conclude that StrongMinds is ineffective or should not be recommended.
To conclude that Baird et al. reflects the true impact of StrongMinds' psychotherapy today, one would need to believe that the existing academic studies and/or StrongMinds' own M&E data are in error or irrelevant.
Our recommendations are based on comparisons between charities, and no charity (or charity analysis) is without problems. We recommend StrongMinds as a top charity even after accounting for this study. Currently, we estimate that StrongMinds creates 40 WELLBYs per $1,000 donated, making it around five times (40 / 7.55 ≈ 5.3) as cost-effective as direct cash transfers.
Is there a new RCT of StrongMinds on the way?
As we mentioned, Baird et al. is the only RCT involving StrongMinds, and it has low relevance. There is also a report of a controlled (but not randomised) study of StrongMinds (Peterson et al., 2024); it lacks the internal validity of an RCT but does find better results. And there is an upcoming RCT of StrongMinds comparing more sessions to fewer, though without a no-treatment control group.
Hence, we are very keen for StrongMinds to run a large, high-quality randomised controlled trial of its current programme. StrongMinds is currently fundraising for such an RCT, which we welcome and which should provide stronger, more directly relevant evidence about the programme's impact. It will be conducted in collaboration with IDinsight and experts like Dr Victoria Baranov.
If the new trial shows effects as small as Baird et al., we would seriously reconsider our recommendation. If it shows moderate or large effects, that would strengthen our confidence further. In short: when new evidence arrives, we update our views.
Where can I learn more about this?
For details, see our latest report (200 pages including appendices; McGuire et al., 2024b). The summary, Section 7.3, and Appendix L3 cover the key points. You can even see how different weights affect the overall cost-effectiveness of StrongMinds.