Latin American Evidence Week: 10 emerging lessons

15 November 2017

The 2017 Latin American Evidence Week (Semana de la Evidencia), promoted by On Think Tanks and a growing number of organisations across the region, delivered another great opportunity to reflect on and support the use of evidence in policy.

The Evidence Week is a decentralised festival of events held every October since 2016.

The numbers in 2017

Although we are still updating the final count, the Evidence Week proved to be quite successful:

  • 53 events, including a couple that their organisers opportunistically connected to the Week.
  • 10 countries: Honduras, Guatemala, Panamá, El Salvador, Ecuador, Colombia, Peru (including events outside of Lima), Bolivia, Paraguay and Chile. In Ecuador, several organisations came together to set up a local hub of the Evidence Week.
  • 35+ organisations involved in the production of events – and many more involved in the events themselves.
  • 2,000+ participants: not counting online participants and those who joined by means of “overlapping” events (e.g. the Inclusion Week in Peru, organised by the Ministry of Development).

Lessons on producing, communicating and using evidence

You can read the full report (in Spanish): Semana de la Evidencia Reporte Final 2017

The following lessons offer an initial reflection on some of the ideas that emerged during the Evidence Week. They help explain why evidence is or isn’t used in policymaking in Latin America – and they are just as relevant to other regions.

1. There isn’t always enough evidence

There are clear gaps in evidence – or, more accurately, in the opportunity to generate evidence. This refers to gaps in capacity (both individual and organisational), in funding, and in numbers (of researchers, research centres and other policy research actors).

The Peruvian research landscape illustrates this situation. In a country of 30 million people, a third live in Lima, the capital. Close to half of all research funding is spent in Lima, and close to half of the country’s research centres (including independent organisations such as think tanks) are based there. Some regions have only a handful of research institutions.

As a consequence, there are regions where, even if policymakers chose to inform their decisions with the latest research-based evidence, they would not be able to. Tragically, this absence is clearest precisely where the most complex challenges are found.

The paradox, however, is that there are more funds for research in universities outside of Lima than in the capital. Peru’s tax system sends back hundreds of millions of dollars in rebates to mining districts, and a percentage of this can be spent by public universities. To date, however, only a few have managed to mobilise funds for this purpose – and only in very small amounts.

Beyond regional disparities one still finds serious thematic and population disparities. The most vulnerable are still largely invisible: the rural poor, sexual minorities, victims of domestic violence, poor ethnic minorities and others remain worryingly under-studied. Studying them requires the resources that are hardest to get – long-term and flexible – and the kind of research that is hardest to deliver: field-building and interdisciplinary.

2. There are many gaps that need to be closed – and not just between researchers and policymakers

The obsession with the phrase “bridging research and policy” can be misleading and distracting. There are other relationships that should be strengthened if policy is ever going to be better informed:

  • Between those who design policy, those who implement it (often in the same sector and even the same ministries) and those who evaluate it. This operational disconnect makes it impossible for evidence generated at the implementation level to feed into policy (and programme) design. Time and time again, we find policy interventions designed with limited (and flawed) knowledge of the realities of the people they intend to benefit. This knowledge exists; if only designers and implementers talked to each other, many mistakes would be avoided.
  • Between sectors. Ministries often work to serve the same population groups or the same geographical space, yet they do not collaborate and therefore do not share the evidence they have (the same often happens within a single sector, too). Consider how little has been done in Peru to share (or at the very least make available) the evidence – consultancies, studies, impact evaluations – produced by a growing number of monitoring and evaluation units and policy labs within various ministries. Small improvements to their websites would go a long way towards addressing this challenge.
  • Between evaluators and the potential users of their evaluations. While governments are commissioning more evaluations, they are not necessarily using them more. Part of the problem lies in the evaluations’ design: they do not take into account who will use them, what for, and how.
  • Between researchers and the private sector. Many policy researchers focus their attention on policymakers but fail to recognise that corporations and business leaders constitute both an important final audience for their research and potential influencing partners.
  • Between think tanks, universities and funders. We found that collaboration is still limited. It is often said that it is easier for think tanks to collaborate with foreign partners than with local ones. An event on local foundations illustrated this point: the leaders of two local foundations had studied together at university but had never talked to each other in their official roles.

3. The fragmentation of the research sector and of policy research centres is encouraged by how government consumes evidence

In Peru, at least, a greater demand for evidence has brought about a greater number of consultancy opportunities for researchers, but not necessarily for research centres. As a result, researchers increasingly behave as freelancers.

This has its advantages: institutions carry overheads, may demand administrative duties of their researchers, and can limit the policy spaces in which researchers are able to operate.

But it has disadvantages, too. On their own, researchers lack the communications support that efforts to influence policy demand; they are not protected from clients meddling in the research process; and they are less likely to be able to work on large, multi-year efforts and therefore find it hard to develop a consistent body of knowledge.

The consequence for the broader sector is a weakening of the centres and a limit to the growth of the community of policy research centres. This leads to a less dynamic evidence-informed policy debate.

Ultimately, this does not serve policymakers, either. It does not contribute towards better informed policy but rather to a very distant second best: policy based on single studies.

4. The full-cycle is long and expensive but it is also the most meaningful

The full-cycle refers to the long-term (and often unexpected) journey travelled by each of the think tanks that won their categories at the 2018 Think Tank Awards in Peru: from developing an idea, through its dissemination, to its uptake.

Uptake, of course, can mean many different things: helping to shape the policy agenda, informing the academic agenda, informing a new policy or decision, opening a new opportunity for further action, empowering vulnerable groups to join – and even conquer – an existing policy space, etc.

The process of dissemination, in all cases, proved to be prolonged and multi-channel. At one time or another it involved a fantastic range of publications, training and events, aggressive media strategies, consultancies and advisory work, digital communications, and lots of waiting for windows of opportunity – and luck.

Longer still were the processes of developing the ideas that shaped their impact. These were not mere inputs at the start of a project. Instead, they involved several iterations and feedback from peers, potential users and the broader public, and they were part and parcel of the dissemination and uptake processes.

These stories are rare, of course, but they offer the chance to achieve transformative and sustainable change.

This illustrates the importance of institutions in supporting long-term and unexpected processes. None of the think tanks knew what it would take to achieve impact – or when this would happen. None had a detailed, well-laid-out plan at the start of it all. In the absence of meaningful core or programmatic funding, they had to patch together sufficient funds, time and other resources to maintain a minimum level of commitment to these issues and to their mission.

5. Innovation is emerging as a way to bypass the full-cycle, but it isn’t necessarily evidence-based or good for policymaking in general

The innovation agenda appears to be at odds with the full-cycle stories of the think tanks described above. Its proponents champion a break from the past: they prefer to start anew and seek relatively quick wins.

Innovation in policy has become a new buzzword. Some governments have even established innovation units or labs. They seek out and celebrate “new” interventions; in fact, they have become quite good at celebrating before evaluating. The problem with innovations is that there is likely to be very little evidence that they work. They are innovations, after all. We should expect more to fail than to succeed. (Surprisingly, though, few of the innovations that governments try out seem to fail – or even stumble.)

While there are benefits to being innovative in policymaking, we should also be careful about the quality of the process and of the outcomes it produces. Innovation without progress is not desirable.

But when we explore why so many ministers and their teams are championing a culture of innovation, we find an unexpected explanation. The public sector is not a flexible organism; it is more like a rigid machine. To change it – to adapt a programme, relaunch a project, decommission a service – takes huge political capital that few politicians have. In the context of weak policymaking bodies (and weak civil services), improvements become insurmountable challenges.

Add to this the fact that most ministers and their teams have only a few months or, at most, a year to leave their mark; that they are often outsiders, on account of the weakness of political parties; and that they can count on only limited support from senior civil servants to navigate and steer their ministries.

It is easier, therefore, to come up with an entirely new project, programme or policy that simply sits on top of (or beside) existing ones. In doing so, policymakers inevitably reject the knowledge that already exists and create a greater demand for evidence to demonstrate that these innovations (often copied from other contexts) actually work. This also reduces incentives to generate evidence to fix problems in design or small implementation issues, or to learn from existing projects, programmes or policies.

Innovation and experimentation are positive developments in policymaking. But they need to be accompanied by a greater acceptance of failure and efforts to maintain a focus on what could be improved or reformed.

6. Policymakers (local, national and international) do not need impact evaluations as much as they need other kinds of research and analysis

Impact evaluations provide the ultimate answer to the question of whether something has worked or not. They may offer insights into why a programme did not deliver as expected – or why it did. But they will not provide a solution to these shortcomings – or successes.

Most policymakers, most of the time, need answers to practical questions: How much does something cost? Where are the people a programme should be targeting? What skills should service providers have? How can these skills be improved? These are questions that require evidence, but of a kind that does not require an impact evaluation.

Yet, increasingly, public resources are being directed towards large-scale impact evaluations and experimentation. In countries where research capacity is limited – where there are not even enough researchers to undertake all the impact evaluations that governments are signing up to do – a better balance needs to be struck. Otherwise, we may know which interventions worked and which didn’t, but we won’t know why, or how to make them work in the future. And many other questions will remain unanswered.

Why, then, favour impact evaluations? Possibly because they are easier to commission (one large project instead of many small research projects), because they help to spend more money faster, or because they offer policymakers the chance to keep the company of senior researchers, national and international.

7. Policymaking in private is still preferred over policymaking in public

We found, both through the organisation of the Evidence Week and from the discussions at the events themselves, that there is still a predisposition among policymakers to keep their work private and to minimise the public exposure of their ideas and actions.

The sense we get is that many still think that if they share their work they become more vulnerable. They do not see the value and additional benefits that feedback might offer; in fact, feedback is taken as criticism – and criticism is not taken well.

This translates, for example, into events designed to minimise dialogue: keynote speakers with friendly panels and little space for questions or interventions from the audience.

This sense of vulnerability partly stems from a poor understanding of the complexities of policymaking, which translates into a fear of failure among policymakers and a thoughtless condemnation of failure by political forces, the media and the general public.

Fear is a powerful force. While events organised by think tanks and universities have, in some cases, included participants (panellists or speakers) from other institutions, the events proposed and organised by government bodies were by and large disconnected from other sectors. In 2016, for instance, the events produced by the Ministry of Finance and the Ministry of Education did not include participants from other sectors.

8. Evidence can empower vulnerable populations; but we have to discuss what we mean by evidence

Evidence informs, but it also empowers – especially those whose experiences are less visible. The 2017 Evidence Week had a particular interest in rural development and rural youth. Two events organised in Peru and Bolivia offered young men and women from rural areas the opportunity to share their experiences: “their evidence”.

Even if we accept that their testimonies were no more than anecdotes, it is impossible to deny their legitimacy as a source of information and inspiration for policymaking. Their testimonies demanded just as much attention as (if not more than) the results of an impact evaluation, a randomised control trial or a systematic review.

9. It is a mistake to think that there are different kinds of evidence for different groups

In discussing “whose evidence matters” we are tempted to think that different groups prefer, have or accept certain kinds of evidence – but not others. We force a false dichotomy: rural versus urban, sociologists versus economists, policymakers versus researchers, etc.

Examples abound. A regional governor who participated in the Evidence Week claimed that he preferred practical evidence to academic evidence. But this does not mean he is closed to other kinds of evidence: to show he is serious about this, he sponsored a declaration that commits his government to organising an Evidence Week every year.

At the local level, in fact, there is a policy research ecosystem that is hungry for more and better evidence and eager to participate in a national conversation about the role of evidence in policy – a conversation from which it feels largely excluded.

They want that conversation to recognise that there are several kinds of evidence – and several ways of knowing – without dismissing any of them.

At the local level, too, some of the problems we find at the national level disappear (or are less pronounced). The operational disconnect between the designers and implementers of policies and programmes is less obvious. So is the cultural disconnect that explains the many policy blunders in which policies fail to address the real needs and expectations of their intended beneficiaries.

At the local level, greater coordination between sectors is possible and the machine is easier to adjust – or the ship is easier to turn – thus reducing the temptation to innovate by default.

10. Dialogue is the key word

Dialogue emerges as a way to understand and promote the use of evidence in policymaking.

What King and Crewe refer to as policy blunders can be explained by a deficit in dialogue and could be avoided by more and better dialogue. The same appears to be true of evidence-informed policy.

Without formal mechanisms of dialogue, there are serious difficulties in feeding the knowledge of implementers into the design process. Without an expectation of dialogue, there is little chance that an evaluation effort will consider its users at the planning stage. Without an ongoing dialogue with the public, it is unlikely that policymakers and researchers will effectively understand the needs of the most vulnerable and marginalised – who are also the least visible.

Dialogue is also necessary between researchers and policy research centres – not so that they will all agree all the time, but so that they may help to build fields of research around the issues that remain understudied.

Dialogue between public and private funders, as well as between foundations, national and international, would help address common challenges and potentially mobilise new funds towards policy research.

The absence of dialogue between sectors, certainly at the level where evidence is generated, limits the potential for collaboration.

Finally, dialogue of the kind that the Evidence Week promotes can help to identify the factors that hinder – and those that promote – the production, communication and use of evidence in policy.