BEST PRACTICES FOR AN IMPACT EVALUATION

Article published 16/05/2018

After having done an impact evaluation, I dared to write down some reflections I acquired during the process. This could be my humble decalogue of lessons learned:

1.     Explain clearly to the partners the value of an impact evaluation: Sometimes only the donor is asking for an impact evaluation, and the partners may not have understood the difference between monitoring the results of the project and the value of an impact evaluation: that is, the difference between monitoring outcomes (describing the factual) and determining, using a counterfactual, whether and how the observed outcomes or effects can be attributed to the intervention. In short, whether the program or policy worked or not. In my case this was particularly important, as the project was experimenting with an innovative pilot scheme that was meant to be scaled up to other districts.

2.     Data is not as clean, prepared and well-organised as it was in class exercises: The evaluation was planned ex post, so do not expect wonderful randomisations; hope that at least there is baseline data, either from primary or from secondary sources. Prepare to spend most of your time hunting for data across different websites and sources, sending many emails to counterparts, or collecting Excel files from local offices that may take time to decode (a minimal sketch of that consolidation work follows below).
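
For illustration, a minimal consolidation sketch in Python with pandas, assuming district-level figures scattered across office-specific Excel files. The file names, sheet layout and column labels here are hypothetical, not the actual files from this evaluation:

```python
# Minimal sketch: consolidating district-level data from local-office
# Excel files into one panel. File names and column labels are
# hypothetical; each real file will need its own mapping.
import pandas as pd

# Each office labels its columns differently; map them to a common schema.
COLUMN_MAP = {
    "Distrito": "district",
    "Año": "year",
    "Empleo formal": "formal_employment",
    "N empresas": "n_firms",
}

frames = []
for path in ["office_a.xlsx", "office_b.xlsx"]:  # hypothetical paths
    df = pd.read_excel(path).rename(columns=COLUMN_MAP)  # needs openpyxl for .xlsx
    frames.append(df[["district", "year", "formal_employment", "n_firms"]])

panel = pd.concat(frames, ignore_index=True)
panel = panel.drop_duplicates(subset=["district", "year"])
panel.to_csv("panel.csv", index=False)
```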

3.     When is the right time to do an impact evaluation? Many times the impact evaluation starts when the logistics of the project allow for it, not when the right moment comes. This can be too early to capture the effect pursued by the project, or so late that other unintended effects may have interfered. It was not easy to identify where in that range we were.

4.     Find the right questions to answer: In my case, the client wanted to know whether the program had any specific effect on informality, on the economy and on employment. In this broad scenario, I had to be the one finding the right questions to answer, taking into consideration the goal of the project, the goal of the client and, lastly, the data available.

5.     Find a good counterfactual: In my case I had to compare districts, so I used a diff-in-diff to infer the program impact by comparing the pre- to post-intervention change in the outcome of interest for the “treated” district relative to the “control” districts. I then had to challenge different risks and assumptions to verify that I had a good counterfactual. One was the “Parallel Paths” assumption, which basically means that, without the intervention, the “treated” district would have followed the same average trend as the “control” districts, so the change observed in the controls can stand in as the counterfactual. The second was to avoid “contagion”: the “control” districts must not have been affected by the intervention or by other interventions or effects. Finding control variables that eliminate these potential effects was not easy (a minimal sketch of the estimator follows below).
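
To make the design concrete, here is a minimal diff-in-diff sketch with statsmodels, assuming the district-year panel sketched in point 2. The district name, intervention year and outcome variable are illustrative assumptions, not the actual data from this evaluation:

```python
# Minimal diff-in-diff sketch: the coefficient on treated:post is the
# extra pre-to-post change in the treated district relative to the
# control districts. All names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("panel.csv")
panel["treated"] = (panel["district"] == "District A").astype(int)  # hypothetical treated district
panel["post"] = (panel["year"] >= 2015).astype(int)                 # hypothetical intervention year

model = smf.ols("formal_employment ~ treated + post + treated:post", data=panel)
result = model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(result.summary())             # the treated:post row is the DiD estimate
```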

6.     Plan your methods based on the “program theory”: It is worth spending time in advance understanding the possible theories that link the intervention and the outcome; that means constructing the causal story, or causal chain, from inputs to outcomes, including alternative paths. Thanks to that, I was able to search for data more efficiently, choose the right questions better, and incorporate other methods into the impact evaluation. In my case I used a mixed-method approach. I used a regression-based technique (diff-in-diff), as experimental techniques were not possible due to the lack of randomisation at the beginning of the project and because the impact evaluation was planned ex post. Then I added qualitative methods that could complement and explain the quantitative results: individual interviews, focus group discussions, and a relatively small survey of thirty entrepreneurs. These, together with the literature review, helped to interpret the results better. This triangulation was useful; otherwise, I learned, I could fall into the “black box”, where I would find an impact but have no explanation of why (a cheap design check that fits this planning is sketched below).
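
One cheap design check that fits into such a method plan is to eyeball the “Parallel Paths” assumption from point 5 before layering on the qualitative work. A minimal sketch, assuming a few years of pre-intervention outcome data per district in the same hypothetical panel as above:

```python
# Minimal pre-trend check: plot the pre-intervention outcome for the
# treated district against the control-district average. Roughly
# parallel lines lend some support to the Parallel Paths assumption.
import pandas as pd
import matplotlib.pyplot as plt

panel = pd.read_csv("panel.csv")
pre = panel[panel["year"] < 2015]  # hypothetical intervention year

treated = (pre[pre["district"] == "District A"]
           .set_index("year")["formal_employment"].sort_index())
controls = (pre[pre["district"] != "District A"]
            .groupby("year")["formal_employment"].mean())

plt.plot(treated.index, treated.values, marker="o", label="Treated district")
plt.plot(controls.index, controls.values, marker="s", label="Control average")
plt.xlabel("Year")
plt.ylabel("Formal employment")
plt.title("Pre-intervention trends (Parallel Paths check)")
plt.legend()
plt.show()
```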

7.     Correlation does not mean causation: Multiple factors can affect the outcomes. Isolating and accurately measuring the particular contribution of an intervention, and ensuring that the causality runs from the intervention to the outcome, was a challenge. First, because of the lack of data I was not able to control for some variables; second, there might have been variables affecting the outcome that I did not notice. So it was good to spend time exploring them and to write up the results with caution, stating the association observed but warning about the potential factors it was not possible to control for (one way to make that caution concrete is sketched below).
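
One way to write with that caution is to report how the estimate moves across specifications. A minimal sketch, re-running the hypothetical diff-in-diff from point 5 with and without the control variables that happen to be available:

```python
# Minimal robustness sketch: compare the diff-in-diff estimate across
# specifications with and without available controls. If the estimate
# swings a lot, omitted variables deserve a loud warning in the report.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("panel.csv")
panel["treated"] = (panel["district"] == "District A").astype(int)  # hypothetical
panel["post"] = (panel["year"] >= 2015).astype(int)                 # hypothetical

specs = {
    "baseline":      "formal_employment ~ treated + post + treated:post",
    "with controls": "formal_employment ~ treated + post + treated:post + n_firms",
}
for name, formula in specs.items():
    res = smf.ols(formula, data=panel).fit(cov_type="HC1")
    print(f"{name}: DiD = {res.params['treated:post']:.2f} "
          f"(SE {res.bse['treated:post']:.2f})")
```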

8.     Impact evaluation serves lesson-learning and accountability (very useful for future scale-ups), but incorporating the lessons learnt into the original project is not so easy: The results and recommendations have been very useful for the accountability of the project and for informing a future scale-up. What I found more complicated was how to incorporate the feedback into the original project, as it was already finished. That will depend on the willingness of the implementing partners, but without resources, commitments or time allocated by the project. Some ideas: make the impact evaluation part of a continuous monitoring and evaluation system for the program, or reserve a few final funds or resources for a closing activity that can take up the recommendations of the evaluation.

9.    Learn about the local context to estimate whether the same results could apply in another context (external validity): During the evaluation it was important to differentiate between the local or particular factors and the general factors influencing the impact of the program. That allowed me to make a better approximation of the external validity of the results. In my case, as an external and foreign evaluator, it was essential to be accompanied by a local consultant who provided comments on the local context.

10.  Your ethics will be tested: This is a final key aspect. Sometimes the results of your impact evaluation might not be positive, or might even be negative. In many cases either your client or some of the partners will be interested in positive results, as they have invested a lot of resources and may want scale-up projects. As a professional, you need to be prepared to be honest about your results. Here I learned a couple of things. First, if you are using a mixed-method approach, you will be getting hints and results that can be shared with stakeholders from the beginning; do not wait until the end to share your results. And second, a negative result is not bad if you are able to identify the reasons why, so you can provide explanations and recommendations that improve future interventions.

Hopefully, these reflections are useful for others and I can improve them in the future.

@borjamonde
