
The following challenges have been crowd-sourced and selected from the IPDET and EvalYouth communities.

Please note that while the following challenges serve as guidelines and inspiration, your project does not have to address any specific country or context mentioned here. Your group will discuss and determine the exact focus of your Evaluation Hackathon Project.

The main goal is to engage with each other and to come up with innovative, out-of-the-box ideas.

Quality of Evaluation

Despite the many hardships and constraints that COVID-19 has imposed on evaluation, we are still obliged to maintain good evaluation quality. We therefore need to find ways to do so under these conditions. We as evaluators have a role to play in providing evidence to our leaders: how can we ensure that it is delivered with good-enough rigour?

Data Collection during Covid-19

Due to the worldwide Covid-19 pandemic, travel abroad is impossible at present and for the coming months. On-site evaluations therefore cannot be carried out, and we have to look for other means of gathering the necessary data, for example with the help of technology. Similar conditions could also arise for other reasons. How can we use modern technologies and existing data to carry out "off-site" evaluations without access to the field, replacing traditional data-gathering methods with alternatives? What potential, but also what shortcomings, does such an approach have, especially when evaluators' interaction with site residents cannot be replaced?

Adaptive Evaluation

During crises such as COVID-19, evaluation teams need to rely on remote data collection methods. This entails intrinsic potential biases against hard-to-reach populations that need to be mitigated through innovative methods and tools. What methods and tools can evaluators use to ensure that hard-to-reach populations are not left behind in evaluations undertaken during crises? Covid-19 unveiled an invisible thread linking methodological challenges, the need for equity, and "do no harm". To be effective, evaluations have to adapt and respond to each of these challenges. What concrete and practical tools can we develop by looking at the intersection of methodological challenges, "do no harm", equity, and innovation? What can we learn and immediately apply from evaluation methods designed to operate in fluid and uncertain conditions and with imperfect information?

Institutional performance

With the advent of Covid-19, institutions are stagnating under health requirements and a changing political and economic environment. It has been hard for them to secure the prerequisites for carrying out their work. If Covid-19 persists and measures with drastic consequences for institutions remain in place, how will institutions be able to carry out their work?

Covid-19 as a confounding factor

We know from numerous studies that exogenous factors, such as economic crises, natural disasters, or rising conflicts, can superimpose themselves on the societal developments that are subject to development interventions – which in turn are subject to evaluations. As evaluators we are familiar with such challenges and know how to deal with them adequately. However, we also know that things get trickier when the magnitude of a confounding factor's influence exceeds the expected size of an intervention's impact. This is even more true if there is more than one such factor. So, what do we do if, technically speaking, the 'signal-to-noise ratio' between our subject of investigation and the unwanted interference becomes too small for such a corrective manoeuvre? How can we attribute an observable change to an intervention if this change is influenced significantly more by something else than by the intervention under investigation itself? How would you evaluate an intervention aiming to improve the resilience of vulnerable groups at the very moment their resilience is severely tested by a collapsing regional economy?
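To make the 'signal-to-noise' intuition concrete, here is a minimal, purely illustrative simulation. All numbers are invented assumptions, not drawn from any real evaluation: it simply shows that when the spread introduced by a confounding shock is an order of magnitude larger than an intervention's true effect, a simple treatment-versus-control comparison produces a standard error about as large as the effect itself, so the effect can no longer be distinguished from noise.

```python
import random
import statistics

random.seed(42)

# Hypothetical setup: an intervention raises a resilience score by EFFECT
# points, while a confounding economic shock adds noise with a much larger
# spread (SHOCK_SD). Both values are illustrative assumptions.
EFFECT = 2.0      # assumed true intervention effect
SHOCK_SD = 20.0   # assumed spread of the confounding shock
N = 200           # households per group

def outcome(treated: bool) -> float:
    """One household's observed score: baseline + confounder (+ effect)."""
    base = 50 + random.gauss(0, SHOCK_SD)  # confounder dominates the signal
    return base + (EFFECT if treated else 0.0)

treat = [outcome(True) for _ in range(N)]
control = [outcome(False) for _ in range(N)]

# Naive difference-in-means estimate and its standard error
diff = statistics.mean(treat) - statistics.mean(control)
se = (statistics.pvariance(treat) / N + statistics.pvariance(control) / N) ** 0.5

print(f"estimated effect: {diff:.2f} (true effect: {EFFECT}), std. error: {se:.2f}")
```

With these assumed numbers the standard error is roughly 2.0, the same size as the true effect, so the estimate is statistically indistinguishable from zero even though the intervention genuinely works. Reducing the noise (larger samples, matched comparison groups, difference-in-differences designs) is what the 'corrective manoeuvre' in the challenge text is about.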

Tools and Methods

In Ecuador, as a consequence of COVID-19, many initiatives and projects have emerged around collecting, buying, and selling food items and health supplies. We want to implement a pilot project with those characteristics and evaluate its real impact. Our first challenge emerges when we want to build the baseline for our pilot project: we face restrictions on collecting traditional survey information. Furthermore, many people in developing countries do not have access to computers, smartphones, or the Internet, so online surveys are not an option for us. We therefore need to think of alternative ways to collect information. How can we design and implement an information tool to build a baseline, and afterwards to evaluate new projects, in the context of the pandemic?

Measuring efficiency

In most project, programme, and policy evaluations, the assessment of efficiency is weaker and less substantive than the assessment of effectiveness, sustainability, and relevance. What are better ways of assessing efficiency?

Country-led evaluations

Donors continue to dominate evaluation design and use in developing countries like Liberia, even though the change story is expected to affect the countries' own people. How can we make evaluation country-led instead of donor-led?

Lack of readiness

The board members, staff, and volunteers of NGOs were not ready for the self-assessment process and did not see it as a priority. We encountered a lack of understanding of the tools, a lack of interest, and delays in completing the tools by the NGOs participating in the process. What impact does de-motivation in relation to the board members' self-evaluation process have on the performance and effectiveness of an NGO?

Young and Emerging Evaluators (YEE)

Young and emerging evaluators (YEE) often hit a hard brick wall when trying to find a job in evaluation: they are not hired because they don't have enough experience, but cannot gain experience if they are not hired. Furthermore, they are not actively engaged in developmental discourse and discussion. How can we help YEE better enter the evaluation career path? How can they participate in the post-Covid recovery and evaluation development agenda?