Our Blunders

Last updated: 1 September 2023.

HLI strives to be rigorous and transparent in our work, but we don’t always get it right – sometimes, we blunder. This page logs instances where we don’t think we’ve lived up to our ideals, focusing on broad issues that affect how others perceive and trust our work. We consider a ‘blunder’ to be something that meets all of the following criteria:

  • Important: it causes a non-trivial change in our strategy or analysis (e.g., a 20% change in an intervention’s cost-effectiveness).
  • Unambiguous: it is a clear error; we won’t include issues we think reasonable people could disagree about.
  • Applicable: it identifies a specific change we can make to improve our processes.

Acknowledging these faults helps us address issues and move forward as a stronger, continually improving organisation.

If you believe we should list other items here, please don’t hesitate to let us know at hello@happierlivesinstitute.org. Note: we also document substantial updates to our cost-effectiveness analyses in our changelog.

2022: We were overconfident and defensive in communicating our charity recommendations

How we blundered

In November 2022, we posted our annual charity recommendations on the Effective Altruism Forum. In this post, we made two errors:

  1. We described our recommendation for StrongMinds in language that was too strong: “We’re now in a position to confidently recommend StrongMinds as the most effective way we know of to help other people with your money”. On reflection, this was too bold a claim to make, given the uncertainty in, and the depth of, our analysis.
  2. The post’s original title was “Don’t give well, give WELLBYs”. Although this was meant playfully, as a nod to the charity evaluator GiveWell, it was tone-deaf and came across as adversarial. We swiftly edited the title to “Don’t just give well, give WELLBYs” to soften it.

Steps we’re taking to improve

  1. We will be more careful about communicating our work, including greater emphasis on the uncertainties in, and the limitations of, our analyses. In line with this, we’ve developed a process for classifying the quality of evidence and the depth of our work in a more principled and transparent manner (this will be posted on our website in September 2023).
  2. We are actively striving to take a more cautious and less adversarial tone in our communications. Anything that might be perceived negatively is now reviewed by our communications manager before publication.

2022: We failed to edit claims in a timely manner once issues had been pointed out

How we blundered

After receiving feedback about necessary corrections to our cost-effectiveness estimates for psychotherapy and StrongMinds (see the item below, “We made preventable errors in our analysis comparing cash transfers to psychotherapy”), we failed to update the materials on our website in a timely manner. This failure was due partly to being short-staffed and partly to waiting to update the website until we had written a full report about the changes. Ultimately, we think the website should reflect our current thinking.

Steps we’re taking to improve

We are placing a higher priority on making sure our public recommendations are up to date. We have updated our website to reflect our corrected estimates, and we now note which analyses are being updated. In Q4 2023 we also plan to publish new web pages that will summarise our charity evaluations, providing a more centralised location to report our current cost-effectiveness estimates without the need to publish full, detailed reports.

2021-2022: We made preventable errors in our analysis comparing cash transfers to psychotherapy

How we blundered

In our 2021 analysis comparing cash transfers to psychotherapy, we made some preventable errors that caused non-trivial adjustments to our cost-effectiveness estimate. These included:

  1. We made a data entry error. In our meta-analysis, we recorded Kemp et al. (2009) as finding a positive effect when, in fact, it found a negative effect. Correcting this reduced our estimated ‘spillover effect’ for psychotherapy (the effect that someone receiving an intervention has on other people) and, therefore, reduced the total cost-effectiveness estimate.
  2. We did not include standard diagnostic tests of publication bias. Had we done so, we would have decreased our confidence in the quality of the psychotherapy literature we were relying on (one such diagnostic is sketched below).
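
To illustrate what such a check involves, here is a minimal sketch of one standard diagnostic: Egger’s regression test for funnel-plot asymmetry. The effect sizes and standard errors below are made-up values for illustration only, not data from our meta-analysis.

```python
# Minimal sketch of Egger's regression test for funnel-plot asymmetry,
# a common diagnostic for publication bias in a meta-analysis.
# The inputs below are hypothetical illustrative values, not real study data.
import numpy as np
import statsmodels.api as sm

effects = np.array([0.42, 0.31, 0.55, 0.12, 0.48, 0.60, 0.25])  # study effect sizes
ses = np.array([0.10, 0.15, 0.08, 0.20, 0.09, 0.07, 0.18])      # their standard errors

# Regress each study's standardized effect (effect / SE) on its
# precision (1 / SE). An intercept significantly different from zero
# suggests small-study effects, one signature of publication bias.
z = effects / ses
X = sm.add_constant(1.0 / ses)
fit = sm.OLS(z, X).fit()

print(f"Egger intercept: {fit.params[0]:.3f} (p = {fit.pvalues[0]:.3f})")
```

A significant intercept would not prove publication bias on its own, but it would flag the literature for closer scrutiny and for bias-corrected estimation methods.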

Steps we’re taking to improve

Since this work was completed, we’ve added two more researchers to the team, which has allowed us to better address these issues.

  1. We have updated our analysis with the correct value. Our research process now involves double-checking all key research inputs and reproducing all key research outputs.
  2. We have sought advice from leading experts to develop our methods for handling publication bias. We will use these methods in our updated evaluation of psychotherapy, scheduled for Q4 2023, although which method is best remains a matter of debate. We have also further developed our general research methodology to ensure we follow best practices, such as the Cochrane Collaboration’s recommendations for conducting systematic reviews and meta-analyses. We will post an article outlining our research methodology in Q4 2023.
