3.1. Inclusive evaluations in times of crisis to ensure that no one is left behind
During crises such as COVID-19, evaluation teams need to rely on remote data collection methods. These methods carry inherent potential biases against hard-to-reach populations, which need to be mitigated through innovative methods and tools.
In the context of the COVID-19 pandemic and other crises, data collection within remote communities and with hard-to-reach end beneficiaries faces several obstacles: travel restrictions for international consultants, the limited availability of national evaluators owing to growing demand, and the shifting priorities of field office staff and counterparts responding to COVID-19. Limited access to field sites can lead to a heavy reliance on remote data collection methods which, in combination with convenience sampling, carries inherent potential biases against under-represented groups in the selection of respondents.
One such bias is the under-representation of groups with limited or no access to the Internet and/or mobile networks, as well as of respondents who cannot read and are thus precluded from taking part in online or SMS surveys. The risk of under-representation is particularly high for populations that are already hard to reach, and often left behind even when traditional data collection methods are used in normal times. This includes, for instance, victims of trafficking in persons, people with drug use disorders, and prisoners living with HIV or hepatitis C.
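One analysis-stage technique for partially correcting such under-representation is post-stratification weighting: respondents are re-weighted so that each group counts according to its share of the target population rather than its share among those the remote survey happened to reach. The sketch below illustrates the idea with entirely hypothetical numbers (the strata, population shares, and outcomes are assumptions for illustration, not data from any evaluation); it cannot substitute for actually reaching excluded groups, since a stratum with no respondents at all cannot be re-weighted.

```python
# Post-stratification weighting: a minimal sketch with illustrative,
# hypothetical numbers. Respondents reached remotely are re-weighted so
# that each stratum counts according to its share in the target
# population rather than its share among those the survey reached.

# Hypothetical population shares (e.g. from a census or programme registry).
population_share = {"internet_access": 0.60, "no_internet_access": 0.40}

# Hypothetical survey: remote data collection over-samples connected people.
respondents = [
    {"stratum": "internet_access", "outcome": 1},
    {"stratum": "internet_access", "outcome": 1},
    {"stratum": "internet_access", "outcome": 0},
    {"stratum": "internet_access", "outcome": 1},
    {"stratum": "no_internet_access", "outcome": 0},  # reached via phone tree
]

# Share of the sample falling in each stratum.
n = len(respondents)
sample_share = {
    s: sum(r["stratum"] == s for r in respondents) / n
    for s in population_share
}

# Weight = population share / sample share for the respondent's stratum,
# so over-sampled strata are down-weighted and vice versa.
for r in respondents:
    r["weight"] = population_share[r["stratum"]] / sample_share[r["stratum"]]

# Weighted vs. unweighted estimate of the outcome rate.
unweighted = sum(r["outcome"] for r in respondents) / n
weighted = (
    sum(r["outcome"] * r["weight"] for r in respondents)
    / sum(r["weight"] for r in respondents)
)
print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")
```

In this made-up example the connected stratum is over-sampled, so the weighted estimate is lower than the naive one; the gap between the two is itself a useful diagnostic of how much the convenience sample distorts the picture.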
The resulting challenge is to ensure that hard-to-reach populations are not left behind in evaluations undertaken during crises. To overcome it, the IPDET / EvalYouth 2020 Evaluation Hackathon can help explore and develop methods and tools that evaluation teams can use to ensure that under-represented groups (i.e. those affected by the difficulties of collecting data in the field) are included in data collection and that data analysis takes their disadvantages into account.
What methods and tools can evaluators use to ensure that hard-to-reach populations are not left behind in evaluations undertaken during crises?
How can we select respondents during data collection in crisis contexts in an inclusive manner that takes disadvantaged persons into account?
When and how should we disaggregate data at the analysis stage by individual characteristics (e.g. sex, age, income, disability, religion, ethnicity)?
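The disaggregation question above can be illustrated with a short sketch: compute an outcome indicator separately for each group defined by an individual characteristic, so that differences hidden by an aggregate average become visible. The records, field names, and indicator below are hypothetical, chosen only to show the mechanics.

```python
from collections import defaultdict

# Disaggregating an outcome indicator by individual characteristics
# (here: sex and disability status) using illustrative, hypothetical records.
records = [
    {"sex": "female", "disability": True,  "received_support": 1},
    {"sex": "female", "disability": False, "received_support": 1},
    {"sex": "male",   "disability": False, "received_support": 1},
    {"sex": "male",   "disability": True,  "received_support": 0},
    {"sex": "female", "disability": False, "received_support": 0},
]

def disaggregate(rows, by, outcome):
    """Average `outcome` within each group defined by the `by` field."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[by]].append(row[outcome])
    return {group: sum(vals) / len(vals) for group, vals in groups.items()}

overall = sum(r["received_support"] for r in records) / len(records)
by_sex = disaggregate(records, "sex", "received_support")
by_disability = disaggregate(records, "disability", "received_support")
print(f"overall: {overall:.2f}")
print(f"by sex: {by_sex}")
print(f"by disability: {by_disability}")
```

The same pattern extends to any characteristic in the records (age band, income group, ethnicity), with the caveat from section 3.1 that small or empty groups signal exclusion at the collection stage rather than a finding about the group.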
3.2. Adaptive evaluations in times of crisis: methodology, equity and do no harm
COVID-19 unveiled an invisible thread linking methodological challenges, the need for equity, and “do no harm”. To be effective, evaluations have to adapt and respond to each of these challenges. We can use this as an opportunity to innovate.
Can we address methodological challenges, equity and “do no harm” through innovation?
Common challenges emerging from our own experience, as well as from policy discussions (IEG, Evaluation During Covid19, https://ieg.worldbankgroup.org/blogseries/25846), relate to issues such as restricted access to direct beneficiaries, the availability of stakeholders, and the central–local divide. In our experience, adaptation is happening at every step of the evaluation process and in many ways, for example through the use of online/remote tools and by rethinking data collection to involve communities.
It is clear that evaluations have to be adaptive to be effective, but at the same time they have to be mindful of the limitations and biases that adaptation entails. As stated by Better Evaluation, to be effective, evaluations have to be adaptive and shift towards processes and methods that are more suitable for operating in fluid and uncertain conditions with imperfect information. We can learn from evaluation approaches specifically designed for these types of conditions, such as evaluations of peacebuilding processes and humanitarian action, as well as developmental evaluation.
Furthermore, we are mindful of the effects of adaptation on vulnerability, equity and “do no harm”. There is a common thread linking methodological challenges, access, data collection methods, equity and “do no harm”. For some programmes now undergoing a final evaluation, there is a sense that outcomes and results have been severely and negatively affected by COVID-19. This stems from three main issues: 1) the difficulty of obtaining proper data on those outcomes from country teams that have lost access to their areas of implementation; 2) the shift of activities towards COVID-19 responses; and 3) the fact that COVID-19 is directly affecting progress on subjects that were key for those programmes, such as its effects on civic space and the increase in gender-based violence (GBV). All three issues are strongly related to different types of vulnerability, marginalisation and equity. Therefore, ensuring that adaptive evaluations provide an understanding of how the most disadvantaged and marginalised segments of the population have been affected by the crisis is crucial to upholding a do-no-harm principle. These points are not only experienced by our organisation but are also reflected in several policy discussions, including blog posts on Better Evaluation and IEG.
What concrete and practical tools can we develop by looking at the intersection of methodological challenges, “do no harm”, equity, and innovation?
What can we learn and immediately apply from evaluation methods designed to operate in fluid and uncertain conditions and with imperfect information?