4-b 101 on Meta-Evaluation, or: How to evaluate evaluations

Instructor: Dr. Stefan Silvestrini
Facilitator: Marielle Schaer Selby

OUR SCHEDULE:
Monday 19th October - Friday 23rd October: 13:00 – 16:00 CEST (Central European Summer Time)

PRE-WORKSHOP TASK:
Please bring along one evaluation, including at least the final report, its annexes (with data collection instruments), and its Terms of Reference. If available, also include the technical proposal and the inception report. It can be an evaluation that you conducted yourself, one that you commissioned or managed, or one you found on the internet and find interesting.

Ideally, it is an evaluation that you deem particularly well done (good practice), one you consider not worth the effort (worst case), or one whose quality you are genuinely wondering about (question mark).
In any case, make sure that you are entitled to share it with the other participants and that no personal data protection rights are infringed (e.g. by deleting the names of authors, respondents, and commissioners). Be prepared to briefly present this evaluation during the training and to elaborate on your personal assessment.

We will use the participants' evaluations to develop and implement an exemplary meta-evaluation!

Stefan Silvestrini · September 25, 2020

DESCRIPTION

Do evaluations always provide the ‘right’ information? I mean, are they valid, reliable, correct in some sense? Can we believe their findings? And after all, what is the merit of our work to society, or at least to the people who read our reports?
Evaluators are not the only ones who contemplate such questions from time to time. More and more commissioners and political decision-makers want to know whether they can trust evaluations and what they can actually learn from them. Moreover, the added value of such exercises in providing an empirical basis for decision-making often remains obscure. Hence it is no wonder that the demand for meta-evaluations is rising.

We all know what evaluations are good for and how we should implement them, don’t we? There are numerous textbooks, training programs, and podcasts that provide methodological and practical guidance. Complying with such standards and following scientific codes of conduct should warrant valid, reliable, and useful findings. – But can we prove it? How do we actually find out about the methodological quality of an evaluation, the validity and reliability of its findings, and, eventually, the usefulness of its conclusions and recommendations? In short, how can we distinguish a good evaluation from a bad one?

These are exactly the questions we want to answer in this workshop. We will discuss criteria that help us decide whether we can trust an evaluation, how to assess its methodological quality, and how to check whether it was appropriately prepared and implemented and, eventually, of use. We will also jointly elaborate a meta-evaluation approach, from the analysis grid to suitable methods for content analysis. Going beyond the evaluation report, we will dive into the potentials of meta-evaluation as well as its limitations.
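To give a rough idea of what such an analysis grid can look like in practice, here is a purely illustrative sketch: a set of weighted quality criteria with ratings on a 1–4 scale, aggregated into a single score. The criteria, weights, and scale below are assumptions for illustration only, not the grid developed in the workshop.

```python
# Hypothetical meta-evaluation analysis grid: each quality criterion
# carries a weight, and each evaluation report receives a 1-4 rating
# per criterion. Criteria and weights here are illustrative examples.

CRITERIA = {
    "clarity_of_terms_of_reference": 0.2,
    "methodological_rigor": 0.3,
    "validity_of_findings": 0.3,
    "usefulness_of_recommendations": 0.2,
}

def grid_score(ratings: dict) -> float:
    """Weighted average of the 1-4 ratings across all criteria."""
    return sum(CRITERIA[c] * r for c, r in ratings.items())

# Example assessment of one (fictional) evaluation report:
example = {
    "clarity_of_terms_of_reference": 3,
    "methodological_rigor": 2,
    "validity_of_findings": 3,
    "usefulness_of_recommendations": 4,
}
print(round(grid_score(example), 2))  # -> 2.9 on the 1-4 scale
```

In a real grid, each criterion would of course be broken down into observable indicators and accompanied by a rating rubric, so that different assessors arrive at comparable scores.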

So, if you want to know how to evaluate an evaluation, come in and find out. Bring along – actually, please send it a few days in advance – an evaluation whose quality you have always wanted to know. I promise you, afterwards you will know (better).

OBJECTIVES

The participants

  • are able to assess the quality of an evaluation according to common standards,
  • know which criteria should be used for this purpose and how they can be applied,
  • evaluate evaluations in practice and thereby learn about the added value of a systematic approach, and
  • gain an overview of practical tools and applications for meta-evaluations.

RECOMMENDED FOR

This workshop is particularly relevant for commissioners of evaluations who want to learn about the quality of the evaluations they receive or who want to develop their own quality assurance framework for evaluations. It is, however, likewise interesting for evaluation professionals who want to learn which quality standards their work may be assessed against should it become subject to a meta-evaluation. Finally, the workshop may suit anyone who reads evaluation reports and wonders about their trustworthiness.

LEVEL

This is an intermediate level workshop.

PREREQUISITES

Participants need to have a basic understanding of evaluation as a management tool. While it is not necessary to have conducted an evaluation yourself, ideally participants should have read a number of evaluation reports and be aware of their strengths and weaknesses.

AGENDA

Monday
  • Introduction into Meta-Evaluation
  • What makes a good evaluation? – Quality criteria for evaluations
  • Participants’ presentations of example evaluations

Outside class: Reflecting on the quality of one’s own evaluation against the quality criteria

Tuesday
  • Developing an analysis grid for a meta-evaluation
  • Practical group work: development of an analysis grid

Outside class: Drafting a recipe for a good meta-evaluation

Wednesday
  • Presentation of group work results
  • Finalization of an exemplary meta-evaluation analysis grid
  • Small-team work: Application of the analysis grid

Outside class: Meta-evaluating two evaluations

Thursday
  • Presentation and discussion of meta-evaluations
  • Tools and methods for analyzing meta-evaluation results

Outside class: Analyzing meta-evaluation results

Friday
  • Aggregation of individual meta-evaluations and synthesis of results
  • Findings from meta-evaluations conducted at CEval
  • Wrap-up and feedback

About Instructor

Stefan Silvestrini

Stefan Silvestrini is the CEO of CEval GmbH, the commercial spin-off of the Center for Evaluation at Saarland University. He is a sociologist by training with a particular focus on empirical methods and impact evaluation. His main research areas include development cooperation, climate change and adaptation, environmental management, rural and urban development, good governance, gender and humanitarian aid, as well as education and technology assessment. In these fields, Stefan has led numerous evaluations for governmental and non-governmental organizations in the European Union, South and South-East Asia, and Sub-Saharan Africa.


Course Includes

  • 7 Lessons
  • 5 Topics