Are experimental methods the gold standard of evaluation? Are other methods valid and useful for answering questions about impact? If you are interested in impact evaluation, but too afraid (of math and statistical formulae) to ask, this is the workshop for you.
Does your program work? How can you be sure? This workshop is a user-friendly introduction to rigorous quantitative impact evaluation methods (experimental and quasi-experimental), their scope, and their limitations. It provides a guided tour of complex methodologies for those who do not have (nor want to acquire) advanced training in statistics but need to grasp the fundamentals of impact evaluation methodology: emerging evaluators, commissioners, policy-makers, and development activists.
Experimental impact evaluation (using randomized controlled trials, aka RCTs) is often depicted as the gold standard of evaluation because it specifically addresses questions of causal inference. The main strength of this methodological strategy is that it prevents the fallacy of attribution by isolating the independent causal effect of a given program on the treated population (beneficiaries).
Through experimental control, this strategy creates a counterfactual that provides robust and valid evidence of average treatment effects (ATE). What does that mean, and why should we care?
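The counterfactual logic behind the ATE can be sketched in a few lines of Python. This is a toy simulation, not part of the workshop materials, and all the numbers in it (outcomes, effect size) are made up for illustration: randomization makes the treated and control groups comparable on average, so the difference in their average outcomes estimates the program's average effect.

```python
import random

random.seed(0)

# Toy population: each person has two potential outcomes --
# an outcome with the program (y1) and one without it (y0).
# In reality we only ever observe one of the two per person.
population = []
for _ in range(10000):
    y0 = random.gauss(100, 15)      # outcome without the program
    y1 = y0 + random.gauss(10, 5)   # the program adds ~10 units on average
    population.append((y0, y1))

# Randomized assignment: a coin flip decides who is treated, so the
# two groups are alike on average in everything except the program.
treated, control = [], []
for y0, y1 in population:
    if random.random() < 0.5:
        treated.append(y1)   # observed outcome for a treated person
    else:
        control.append(y0)   # observed outcome for a control person

# The difference in group means estimates the average treatment effect.
ate_estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(round(ate_estimate, 1))  # close to the true effect of 10
```

The control group plays the role of the counterfactual: it shows what would have happened to the treated, on average, without the program.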
This workshop will help you grasp these concepts, learn how they are operationalized in specific methodological strategies, and understand their pros and cons.
Our common goals in this week-long journey are to:
- Identify the need and the usefulness of rigorous quantitative impact evaluation and master the difference between impact evaluation and other types of evaluation.
- Understand the difference between experimental and quasi-experimental methods, and why they matter.
- Identify main requirements to perform and/or commission an impact evaluation.
- Understand and assess the quality and usefulness of quantitative impact evaluation reports.
- Realize why people who use these methods to learn “what works” in the development field win Nobel Prizes, yet still critically assess their usefulness.
- Understand the must-know technical terminology required to pursue more advanced training in impact evaluation.
Monday 14th Sept. – Thursday 17th Sept.: 14:00 – 16:30 CET (Central European Time)
Friday 18th Sept.: 14:00-17:00 CET
[click here for your time-zone]
- What defines impact evaluation? Definition of impact
- The fallacy of attribution
- Randomized controlled trials and counterfactuals
- The Masters of Ceteris Paribus
- Esther Duflo’s TED Talk
- Internal and external validity
- Is an RCT out of the question? Implementation, ethics, politics and logistics
- Plan B: propensity score matching
- Spotlight: the Nairobi scandal
- Difference-in-differences
- Regression discontinuity
- Real life examples
- Ricardo Hausmann, “The Problem with Evidence-Based Policies”, Project Syndicate
- Criticisms of “black box” impact evaluation
- What if there’s no impact?
- What do we do with the findings?
- Read case scenario material (provided by the instructor)
- Findings are not born equal: reliability, usefulness and applicability
- The moment of truth: evidence and decision-making
Gertler et al., Impact Evaluation in Practice
All supplementary materials will be available on this platform or provided directly by the instructor.
1. If you had not enrolled in this workshop, what would you have been doing this week?
Please write 1-2 paragraphs describing everything you would have been doing this week in some detail (from daily routine to planned activities).
2. Get to know our group and join the conversation!
Can you tell us something random (and interesting) about you in less than 2 minutes?
Please go to: https://flipgrid.com/339d7c75, guest password: Ipdet2020!
YOUR WORKSHOP INSTRUCTOR
Dr Claudia Maldonado is Professor of Public Administration at the Center for Research and Teaching in Economics in Mexico City (MPA from Princeton; PhD from the University of Notre Dame). She specializes in evaluation and social policy and promotes critical engagement in methodological debates in the field. Founding Director of the Center for Learning on Evaluation and Results for Latin America, she has trained public officials and evaluators in impact evaluation throughout Latin America and coordinates the Diploma in Public Policy and Evaluation at CLEAR. She is the editor of several books and the author of research articles on evaluation, performance-based management in Latin America, and the methodological landscape of the field. Claudia Maldonado is a member of the Evaluation Advisory Panel at UNICEF and an ad honorem member of the National Academy of Evaluators in Mexico. She has taught impact evaluation at the Universidad Complutense, El Colegio de México, National Evaluation Capacities, the Latin American Center for Administration for Development, ESAP-Brazil, and elsewhere.
YOUR WORKSHOP FACILITATOR
Laszlo Szentmarjay will assist Claudia as facilitator throughout the workshop. Laszlo is Project Manager at the Center for Evaluation (CEval). He holds a Master’s degree in Cultural Geography from the Friedrich-Alexander-University Erlangen-Nuremberg and a Bachelor’s degree in Political Science and History from the Ludwig-Maximilians-University Munich. He specializes in international relations, development research, and adaptation to climate and environmental change, especially in Latin America. For IPDET, he is primarily responsible for all questions around program registration and scholarships.