The New Health Care

Alcoholics Anonymous and the Challenge of Evidence-Based Medicine


Do Alcoholics Anonymous participants do better at abstinence than nonparticipants because they are more motivated? Or is it because of something inherent in the A.A. program?

How researchers answered these questions in a recent study offers insight into the challenges of evidence-based medicine and evidence-informed policy.

The study, published in the journal Alcoholism: Clinical and Experimental Research, teased apart a treatment effect (improvement due to A.A. itself) and a selection effect (driven by the type of people who seek help). The investigators found that there is a genuine A.A. treatment effect: attending an additional two A.A. meetings per week produced at least three more days of alcohol abstinence per month.

Separating treatment from selection effects is a longstanding problem in social and medical science. Their entanglement is one of the fundamental ways in which evidence of correlation fails to be a sign of causation. For many years, researchers and clinicians have debated whether the association of A.A. with greater abstinence reflects a genuine treatment effect or merely a correlation arising from the type of people who seek help.
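
To see how selection alone can mimic a treatment effect, consider a toy simulation (all numbers and variable names here are invented for illustration, not taken from the study): if motivation drives both A.A. attendance and abstinence, attendees will look better even when attendance itself does nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented setup: motivation drives both A.A. attendance and abstinence.
motivation = rng.normal(size=n)
attends_aa = motivation + rng.normal(size=n) > 0.5

# Attendance has NO causal effect here; abstinence depends on motivation alone.
abstinent_days = 10 + 4 * motivation + rng.normal(size=n)

# Yet attendees look better than non-attendees -- pure selection.
gap = abstinent_days[attends_aa].mean() - abstinent_days[~attends_aa].mean()
print(f"Attendee vs. non-attendee gap: {gap:.1f} days per month")
```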

Such confounding is often addressed with an experiment in which individuals are randomly assigned to either a treatment group or a nontreatment (control) group, removing the possibility of self-selection. The treatment effect is calculated by comparing the outcomes of the two groups. Several studies of A.A. have applied this approach. For instance, Kimberly Walitzer, Kurt Dermen and Christopher Barrick randomly assigned alcoholics either to a treatment that strongly encouraged and supported A.A. participation or to a control group. Those in the treatment group exhibited a greater degree of abstinence.

In an ideal randomized controlled trial (R.C.T.), everyone selected for treatment receives it and no one in the control group does. The difference in outcomes is the treatment effect, free of bias from selection.
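
In that ideal case, the arithmetic is a simple difference in means. A minimal sketch, assuming perfect compliance (the setup, the variable names and the true effect of three days are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Random assignment severs the link between motivation and treatment.
motivation = rng.normal(size=n)
assigned = rng.random(n) < 0.5

# Suppose the true effect of treatment is 3 extra abstinent days per month.
abstinent_days = 10 + 4 * motivation + 3 * assigned + rng.normal(size=n)

# With perfect compliance, a difference in means recovers the true effect.
effect = abstinent_days[assigned].mean() - abstinent_days[~assigned].mean()
print(f"Estimated treatment effect: {effect:.1f} days")  # ~3.0
```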

That’s the ideal. In practice, however, randomized controlled trials can still suffer from selection problems.

It’s one thing to assign individuals to treatment or control. It’s another to compel them to stick to the group to which they’re assigned. In many studies, researchers cannot. For instance, what’s to stop an individual assigned to the non-A.A. group (the control group) from attending A.A. meetings? Or, what forces those in the treatment group to attend them? Nothing.

A real-world trial has what is known as crossover — people not sticking to their random assignment. It can occur, for instance, if less motivated or sicker people stop adhering to treatment. Or, perhaps, more motivated ones find a way to receive treatment even when assigned to a control group. Because motivation and health can affect switching and be related to outcomes, they can obscure genuine treatment effects. In other words, they inject a selection effect.

For a study with crossover, comparing treatment and control outcomes reflects the combined, real-world effects of treatment and the extent to which people comply with it or receive it even when it’s not explicitly offered. (If you want to toss around jargon, this type of analysis is known as “intention to treat.”) A limitation is that the selection effects introduced by crossover can obscure genuine treatment effects.
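
Here is what that looks like in the toy simulation, with crossover added (the crossover rules below are invented for illustration): the intention-to-treat comparison, grouped by assignment, understates the true effect, while grouping by who actually got treated lets selection back in.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

motivation = rng.normal(size=n)
assigned = rng.random(n) < 0.5  # the random assignment

# Invented crossover: unmotivated assignees skip treatment, while the
# most motivated controls seek it out anyway.
receives = np.where(assigned, motivation > -0.5, motivation > 1.0)

abstinent_days = 10 + 4 * motivation + 3 * receives + rng.normal(size=n)

# Intention to treat: compare by assignment, ignoring actual receipt.
itt = abstinent_days[assigned].mean() - abstinent_days[~assigned].mean()

# "As treated": compare by receipt -- selection sneaks back in.
as_treated = abstinent_days[receives].mean() - abstinent_days[~receives].mean()

print(f"Intention-to-treat estimate: {itt:.1f} days (diluted below 3)")
print(f"As-treated estimate: {as_treated:.1f} days (inflated by selection)")
```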

To know whether we should do more work to help individuals comply with treatment, it’s important to know if the treatment itself actually works. For that, we need an assessment that’s free of the effects of crossover.

Keith Humphreys, Janet Blodgett and Todd Wagner provided one for A.A. Though it’s based on study data with crossover, it corrects for it by focusing on the subset of participants who do comply with their random assignment. In a hypothetical example, imagine that 50 percent of the sample receive treatment regardless of which group they’ve been assigned to. And likewise imagine that 25 percent are not treated no matter their assignment. In this imaginary experiment, only 25 percent would actually be affected by random assignment. These are known as “marginal patients” — not marginal because they don’t matter but because they’re the margin affected by randomization.
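
In that hypothetical, the margin can be read off directly: 75 percent of the treatment group ends up treated (the 50 percent who take it regardless, plus the 25 percent moved by their assignment), against 50 percent of the control group, and the difference is the share of marginal patients:

$$
P(\text{treated} \mid \text{assigned treatment}) - P(\text{treated} \mid \text{assigned control}) = 0.75 - 0.50 = 0.25
$$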

Analysis of marginal patients yields an estimate of the treatment effect that is free from the bias introduced by crossover. However, the resulting estimate is not necessarily the same as the one an ideal randomized controlled trial would produce, with every patient complying with assignment and no crossover at all. Marginal patients may differ from other patients.

This is a limitation of such analysis: It provides an estimate of a true treatment effect, but only for those who change behavior due to treatment availability. (This type of analysis — what economists and other social scientists call “instrumental variables analysis” — has been applied in many other studies, including the recent study of Oregon’s Medicaid program that expanded by lottery in 2008.)
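
In its simplest form, with random assignment serving as the instrument, the instrumental variables estimate is just the intention-to-treat difference scaled up by the complier share (the so-called Wald estimator). A sketch of the idea, reusing the invented crossover simulation above rather than the Humphreys team’s actual estimation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Same invented setup as the crossover sketch above.
motivation = rng.normal(size=n)
assigned = rng.random(n) < 0.5
receives = np.where(assigned, motivation > -0.5, motivation > 1.0)
abstinent_days = 10 + 4 * motivation + 3 * receives + rng.normal(size=n)

# Wald/IV estimate: the intention-to-treat difference, divided by how much
# assignment actually moves the probability of receiving treatment.
itt = abstinent_days[assigned].mean() - abstinent_days[~assigned].mean()
complier_share = receives[assigned].mean() - receives[~assigned].mean()
print(f"IV estimate: {itt / complier_share:.1f} days")  # ~3.0, the true effect
```

Dividing by the complier share rescales the diluted intention-to-treat effect to apply only to the marginal patients, which is exactly why the estimate speaks for them alone.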

Despite the limitation, analysis of marginal patients reflects real-world behavior, too. Not everyone will comply with treatment. But, among those who do, are they made better off? That’s a question worth answering.

The Humphreys study does so, and it tells us that A.A. helps alcoholics, apart from the fact that it may attract a more motivated group of individuals. With that established, the next step is to encourage even more people to take advantage of its benefits.

Austin Frakt is a health economist with several governmental and academic affiliations. He blogs at The Incidental Economist, and you can follow him on Twitter at @afrakt.

