Making science work for development

Evaluating the impact of research programmes

The challenge

In the 1980s the International Development Research Centre (IDRC) of Canada supported research in fishery economics in South East Asia. By 1996, when this funding stopped, IDRC was supporting more than 80 researchers in 14 teams at universities, research institutions and government fisheries agencies in Indonesia, Malaysia, Thailand, the Philippines and Vietnam. These individuals became leaders in fisheries, in government, and in research centres, bringing with them a culture of evidence-based policy making around sustainable ecosystem management.

IDRC could see changes positively impacting lives in Asia (impact as something that "you know it when you see it"), but could not trace a direct causal link back to the research. To advocate for change, the fisheries researchers had used knowledge generated by IDRC-funded research, but they had also worked with other people, brought in other evidence and engaged politically over a period of time after the research was completed. All of this was entirely outside the purview and control of IDRC.

Funding research programmes is a core part of what IDRC and other development organisations do; but as this short example shows, it can be extremely difficult to demonstrate the impact of this investment and tease out the contribution actually made by research.

The workshop

In October 2012 UKCDS, DFID and IDRC convened a workshop to help themselves and other funders explore and better understand the options for, and challenges of, evaluating the impact of their research programmes. Having the evidence to know what works and what doesn't is crucial to ensure that future funding is invested well, represents value for money and maximises international development outcomes.

The workshop brought together experts with practical experience of designing, testing or using tools and methodologies specifically for research impact evaluation. Experts were asked to profile different methods and identify where they work well and what their limitations are. We aimed to map the principal options for research impact evaluation while noting gaps where more, or different, techniques may be required.

This resource aims to provide an accessible, structured synthesis of the workshop content, supplemented by additional material and links assembled by the convening organisations. It is not designed as a comprehensive account of the available tools and approaches for evaluating research impact; methods are not reviewed or commended.

These PDFs focus on the issues, tools and challenges particularly associated with evaluating research impact, as opposed to more generic issues relevant to evaluating other development interventions. The emphasis is on evaluation at the programme level, rather than that of individual research projects.

The available PDFs include:

  • Why and what? Motivations for evaluating impact and tools for framing the right question
  • Important issues to consider in evaluating a research programme (best practice)
  • Approaches and methods for evaluating the impact of research
  • Potential challenges to more effective evaluation of research impact
  • Compiled resources

Acknowledgements
The information here is version 1.1 compiled in Spring/Summer 2013, written mainly by Ian Thornton, UKCDS, with inputs from Andrew Shaw, DFID. We are very grateful to the workshop speakers and participants for their contributions, and to Duncan Green, Oxfam, and Margaret Macadam, ESRC, for their helpful comments at review. We welcome comments, additional input and/or suggested references by email to infoatukcds [dot] org [dot] uk
