After 15 years of building linkages between evidence, policy, and practice in social programs for children and families, I have one thing to say about our efforts to promote evidence-based decision-making: We have failed to capture the hearts and minds of the majority of decision-makers in the United States.

I’ve worked with state and federal leadership, as well as program administrators in the public and nonprofit spheres. Most of them just aren’t with us. They aren’t convinced that the payoffs of evidence-based practice (the method that uses rigorous tests to assess the efficacy of a given intervention) are worth the extra difficulty or expense of implementing those practices.

Defining Positive Outcomes
What do we really mean when we talk about "positive outcomes"? In this series, produced in partnership with Third Sector Capital Partners, contributors from a variety of sectors discuss how they apply the term to programs and policies.

Why haven’t we gotten more traction for evidence-based decision-making? Three key reasons: 1) we have wasted time debating whether randomized control trials are the optimal approach, rather than building demand for more data-based decision-making; 2) we oversold the availability of evidence-based practices and underestimated what it takes to scale them; and 3) we did all this without ever asking what problems decision-makers are trying to solve.

If we want to gain momentum for evidence-based practice, we need to focus more on figuring out how to implement such approaches on a larger scale, in a way that uses data to improve programs on an ongoing basis.

What would this approach look like?

We must start by understanding and analyzing the problem the decision-maker wants to solve. We need to offer more than lists of evidence-based strategies or interventions. What outcomes do the decision-makers want to achieve? And what do data tell us about why we aren’t getting those outcomes with current methods?

In my experience, both policymakers and those who pitch social programs to them put too little time into this fundamental question. Instead, everyone rushes straight to solutions: a range of vested interests shows up at decision-makers’ doors, each offering its own program or intervention. Evidence-based programs are just one of many options and get lost in the crowd. There is often a plethora of theories—and far too little hard data—about why the problem exists, so decision-makers select strategies based on their assumptions about the nature of the problem. The theories themselves may be grounded in research evidence, making them seem evidence-based to decision-makers. But without localized data to diagnose the problem within a specific context, proposed interventions may not apply, and decision-makers may waste precious time and money solving a problem they never actually had.

For instance, in early childhood education, experts and researchers debate whether the achievement gap between less- and more-privileged children is the result of toxic stress among children or of poor teaching. Those are just two of many theories about the achievement gap, and they suggest very different paths of intervention. Yet we offer little support to help public agencies and nonprofits weigh these different approaches in specific contexts.

Only after decision-makers and program designers and implementers understand the problem at hand should we ask which strategies will be most effective at solving it. Which approaches have been effective elsewhere? This is where evidence-based practice comes in.

We need high standards for this evidence, but given the limited number of truly evidence-based practices, we also need to help decision-makers understand how best to proceed when rigorously tested solutions aren’t available. Effectiveness matters, but so do cost, population, and context. Policymakers need strategies that will apply across a broad range of communities and populations. Niche interventions found to work with small segments of a population often will not address problems at a scale policymakers require. If these interventions can’t be effectively applied to broader populations, then they need to be packaged with other interventions that can fill those gaps.

A county child welfare administrator needs a set of interventions at her disposal to address the diversity of issues and populations that come through her door—families with acute housing crises, mental health issues, or substance abuse. Moreover, she needs a system that reliably diagnoses the needs of each family and connects them to the right intervention at the right time to meet those needs.

It is difficult to translate available evidence to existing systems. Randomized control trials are simplest and cheapest when they randomize access to a program among individuals within a community, in a lottery-type system. Randomizing program access across communities, schools, counties, or other large units is not impossible, but it is much more complicated and expensive. Moreover, the results of these system-level randomized trials can be less helpful for program improvement than individual-level ones because the nature of the intervention is more complex. Yet most public programs operate on the level of complex systems, and we need to figure out how to approach issues of methodological rigor in these circumstances.

Another problem with the current implementation of evidence-based practice is that it undervalues ongoing monitoring and continuous improvement. Our “what works” conversations are too static, as if interventions found to be effective once or twice will be effective for all time and in all contexts. Research on Head Start, the early childhood school readiness program, has shown otherwise. Head Start is most likely to benefit kids when there are no comparable alternatives (e.g., other center-based care or public pre-K). Likewise, evaluations of early childhood education programs have shown smaller effects over the past few decades, likely because control groups’ participation in child care has increased over time.

Given that an evidence-based practice may not be effective in all places or contexts, it is absolutely critical that individual decision-makers have and use local data to monitor their progress. Continuous improvement in social programs requires lots of types of information, including data on how well a program reaches its target population, whether the needs of that population are changing, whether interventions are effectively implemented, and whether outcomes are moving as expected.

So how do we move forward?

None of the following ideas is rocket science, nor am I the first to propose them, but they do suggest ways to move beyond our current approaches to promoting evidence-based practice.

1. We need better data.

As Michele Jolin pointed out recently, few federal programs have sufficient resources to build or use evidence. Resources for evaluation and other evidence-building activities are limited, and these activities are too often seen as “extras.” Moreover, many programs at the local, state, and national levels have minimal information to use for program management, and even fewer have staff with the skills required to use it effectively.

When I was in government, we spent tens of millions of dollars on the randomized control trials of Head Start, which provided information on the effectiveness of the program at a high level but offered little about how to improve the program. The federal agency administering Head Start had remarkably little data to understand what programs were being implemented at the local level or how well they were being implemented.

We need to figure out what data are needed to support evaluation, research and development, and program management, and then advocate for collecting them.

2. We should attend equally to practices and to the systems in which they sit.

Systems improvements without changes in practice won’t improve outcomes, but without systems reforms, evidence-based practices will have difficulty scaling up. This means we need to attend to evidence, data, and decision-making at many different levels. The information needed and the choices made by the executive director of a nonprofit are very different from those of a state administrator who does not directly run any programs.

3. You get what you pay for.

One fear I have is that we don’t actually know whether we can get better outcomes in our public systems without spending more money. And yet cost savings seem to be what we promise when we sell the idea of evidence-based practice to legislatures and budget directors.

In fact, building systems to better use data and implement evidence-based practice is likely to take additional money. And scaling many evidence-based practices could require improvements in systems—better paid and trained staff, better management information systems—that can be costly as well.

When we tried to scale evidence-based social-emotional curricula in Head Start, we found that many programs lacked the resources—including a strong coaching workforce—needed to implement these practices well. Yet as a field, we have paid little attention to the infrastructure and capacity building necessary to implement evidence-based programs.

4. We need to hold people accountable for program results and promote ongoing improvement.

There is an inherent tension between using data for accountability and using it for program improvement. In his inaugural address, President Obama stated, “The question we ask today is not whether our government is too big or too small, but whether it works. … Where the answer is no, programs will end.” It doesn’t seem unreasonable that we should stop funding what doesn’t work and move that money toward what does. But when there is a risk of being defunded for showing weaknesses, no one is going to speak candidly about the need to improve. And, as discussed above, continuous improvement is going to be key to getting to outcomes at scale.

No doubt highlighting the cost-saving component of evidence-based practice for governments has helped the movement get as far as it has. Yet to move forward, we will need to figure out how to be strategic about funding in a way that doesn’t stifle innovation and improvement.

We all want better outcomes for kids, and we want them yesterday. But it took us a long time to build the current system of social programs, and it will take time to right the ship. If we’ve learned one thing in the child and family policy arena, it is that there are no silver bullets. If we let our desire to move fast jeopardize our success, we risk creating skepticism about the benefits of evidence-based decision-making altogether. And that would be a missed opportunity.

