ABSTRACT
Development actors are always seeking reliable and cost-effective methods to assess the impact of their programmes. In particular, there are frequently calls to evaluate programmes for which no pre-intervention (or ‘baseline’) data are available. In these cases, evaluators often rely on retrospective survey questions to reconstruct the baseline situation. This article explores the accuracy of such retrospective survey data, using data from two surveys carried out nearly six years apart among women in rural Ethiopia. We find that baseline data and retrospective data disagree for 22% of survey items. Responses to the retrospective questions are more closely associated with respondents’ situation at the time of the survey than with their situation at the time they were being asked to recall. Consequently, 72% of respondents were allocated to different quintiles of household wealth, depending on whether the true baseline or the retrospective baseline data were used. We show that an evaluation that controls for retrospective baseline data can considerably underestimate the impact of the intervention being evaluated. This suggests that there is a need for caution in interpreting the findings of evaluations based on such data and in drawing policy conclusions from them.
Acknowledgements
The authors would like to thank Oxfam GB for generously making the data available for this research, and the anonymous reviewers of the paper for their valuable comments. The analysis benefited from discussions with Tsegay Gebrekidan Tekleselassie.
Disclosure statement
The authors report that there are no competing interests to declare. The views expressed in this publication are those of the authors and do not necessarily reflect the views of Oxfam GB, the Innovation Growth Lab (which is hosted by Nesta), the World Food Programme, or Agence Française de Développement.
Notes
1. We test this assumption in our data using the values of the respondent’s age collected in the 2013 survey and in the 2018 survey. The differences between the two reported values, after adjusting for the six-year time difference, are very close to zero on average.
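As an illustrative sketch (not the authors’ code), the consistency check described in this note can be expressed as follows. The ages below are hypothetical example values; the check compares ages reported in the two survey rounds after netting out the six-year gap between them.

```python
# Hypothetical example data: self-reported ages from two survey rounds
# carried out six years apart. Values are illustrative only.
first_round_age = [30, 45, 27, 52, 38]    # reported in the earlier survey
second_round_age = [36, 50, 33, 58, 45]   # reported six years later

SURVEY_GAP_YEARS = 6

# Difference after adjusting for the time between surveys;
# values near zero indicate internally consistent reporting.
adjusted_diffs = [later - earlier - SURVEY_GAP_YEARS
                  for earlier, later in zip(first_round_age, second_round_age)]
mean_diff = sum(adjusted_diffs) / len(adjusted_diffs)
print(f"Mean adjusted age difference: {mean_diff:.2f} years")
```

A mean adjusted difference close to zero, as in this toy example, is consistent with respondents reporting their ages reliably across rounds.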
2. There were some divergences between the 2013 and the 2018 surveys in the lists of income sources, housing characteristics and assets asked about in the survey. Items that are not exactly comparable between the two surveys are omitted from our analysis.
3. We have replicated the analysis in this paper using the original, quantitative form of the variables for all those variables for which quantitative data are available. The key findings are unchanged, the only notable difference being an indication (statistically significant at the five per cent level) that older respondents made more recall errors.
4. It may be objected that the recall errors could be partly or wholly a consequence of survey fatigue, given the long list of items respondents were asked to recall (particularly with respect to asset ownership). If so, we would expect respondents’ recall to be more accurate for items that were asked about earlier than for those that were asked about later in the questionnaire. We cannot test this rigorously because the asset types that were most common in the population (and therefore most prone to recall errors) tended to be those that were asked about nearer the start of the assets section of the questionnaire. However, it is worth noting that we find no indication in our data of a relationship between question order and recall accuracy.
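The informal check described in this note can be sketched as follows. This is an illustrative example rather than the authors’ analysis: the question positions and accuracy rates are hypothetical, and a simple Pearson correlation stands in for whatever diagnostic was actually used.

```python
from statistics import mean

# Hypothetical example data: position of each asset item in the
# questionnaire, and the share of respondents whose recall for that
# item agreed with the true baseline. Values are illustrative only.
question_order = [1, 2, 3, 4, 5, 6]
recall_accuracy = [0.80, 0.85, 0.78, 0.82, 0.79, 0.84]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(question_order, recall_accuracy)
print(f"Correlation between question order and recall accuracy: {r:.2f}")
```

If survey fatigue were driving recall errors, accuracy would decline with question position, producing a clearly negative correlation; a correlation near zero, as in this toy example, would match the note’s finding of no such relationship.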
5. The evaluation carried out by Oxfam made use of more sophisticated analysis to estimate the local average treatment effect on the specific individuals who participated in the project activities. However, the simple model discussed here is sufficient as an illustration.
Additional information
Notes on contributors
Rob Fuller
Rob Fuller is the Evaluation Manager in the Innovation Growth Lab. He previously worked at Oxfam GB as an impact evaluation adviser.
Simone Lombardini
Simone Lombardini was, during the time spent conducting research for this article, the Impact Evaluation Lead at Oxfam GB. He is now an Evaluation Officer at the World Food Programme.
Cecilia Poggi
Cecilia Poggi is a labour economist and Research Officer for Social Protection at Agence Française de Développement. She holds a PhD in Economics from the University of Sussex, UK.