Nathalie Le Denmat: "Evaluating Our Interventions Is Essential"

published on 02 April 2019
Agence Française de Développement regularly finances studies to measure the impacts of the development projects it supports. Nathalie Le Denmat, Head of the Evaluation and Learning Department, explains why AFD will carry out more and more evaluations, and why researchers are divided over the methods.

Why is it important to evaluate the impact of development aid?

Evaluating our interventions is essential for two reasons. First of all, to be able to account for our activities: have the projects we have supported been of benefit to the populations we wanted to reach in terms of quality of life, health, or education? But also so that we can learn from experience. We are developing projects in difficult and changing environments, which means that we need to be able to evaluate and identify what has or has not worked. This continual learning process is really a priority for us.

Evaluation has existed at Agence Française de Développement (AFD) for forty years. We carry out several types of evaluation. First, project evaluations: 30% of our projects are evaluated, and we aim to increase this to 50% from 2020 onwards, which is a good international standard. We also carry out evaluations with a wider scope, focused for example on a particular theme, instrument or strategy. We are currently studying fifteen years of AFD support for the irrigation sector, spanning 100 projects.

And then there are scientific impact evaluations, which rigorously analyse how far observed outcomes can be attributed to our intervention, independently of the other factors that influence such outcomes. These are conducted over long periods (two to eight years, sometimes more), as the impacts of a project on living conditions are measured over the long term. AFD conducts only a relatively small number of these evaluations because they are costly, but we are planning more and more of them: they bring real added value to the knowledge of development, contribute to the international debate and have positive effects on our practices.


In concrete terms, how are these evaluations conducted?

These are conducted with European research organisations and organisations in the countries where the projects are implemented. When the necessary data are not available, which is often the case in the countries where we support projects, we also try to involve the national statistical offices, and sometimes we organise joint training sessions for the investigators who will go out and survey households.
We can also use digital tools on mobile phones, or satellite data to monitor, for example, a project intended to combat deforestation.


There are several ways of carrying out a scientific evaluation, but researchers are divided on these methods… Why is that?

There are three main types of method. The first is the experimental quantitative, or "randomised", method: two groups are randomly selected from the population eligible for a project, one benefiting from the aid and the other not, and the progress made by each group is then compared. This method is widely used in healthcare, and even for opinion polls, but it can be very expensive, up to €2 million to evaluate a development project. It also presents challenges, as it means randomly selecting the beneficiaries of a project, which can be complicated to put into practice on the ground as well as ethically questionable.
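To illustrate the logic, here is a minimal sketch in Python of how a randomised comparison is typically computed. It is not AFD's actual tooling; the sample size, outcome scores and effect size are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome index (e.g. health or education) for 400
# eligible households, randomly split into two equal groups.
n = 400
is_treated = np.zeros(n, dtype=bool)
is_treated[rng.choice(n, size=n // 2, replace=False)] = True

# Simulated post-project outcomes: the treated group receives a
# small average improvement on top of natural variation.
outcomes = rng.normal(loc=50.0, scale=10.0, size=n)
outcomes[is_treated] += 3.0  # stand-in for a true project effect

# Because assignment was random, the two groups are comparable on
# average, so the impact estimate is a simple difference in means.
effect = outcomes[is_treated].mean() - outcomes[~is_treated].mean()
t_stat, p_value = stats.ttest_ind(outcomes[is_treated], outcomes[~is_treated])
print(f"Estimated effect: {effect:.2f} (p = {p_value:.3f})")
```

In a real evaluation, the expensive part is not this arithmetic but collecting the household survey data behind it, often over several years.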

The second quantitative method, known as quasi-experimental, consists of identifying populations that seem to be comparable to the beneficiaries, without selecting them randomly. This is therefore more flexible in its implementation and still likely to provide robust results. However, some "selection bias" is liable to remain, i.e. there will still be some structural differences between the beneficiaries and the control group, which will distort the comparison.
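One common quasi-experimental technique is to match each beneficiary with the most similar non-beneficiary on observed characteristics. Below is a minimal sketch of such matching, again with invented data and purely illustrative characteristics (age, income, household size):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical observed characteristics for 150 beneficiaries and a
# larger pool of 600 non-beneficiaries (age, income, household size).
# Real applications would standardise these, or summarise them into
# a propensity score, before measuring similarity.
x_treated = rng.normal([35.0, 2.0, 5.0], [8.0, 0.5, 2.0], size=(150, 3))
x_pool = rng.normal([40.0, 2.5, 4.0], [10.0, 0.8, 2.0], size=(600, 3))

# Simulated survey outcomes for both groups.
y_treated = rng.normal(52.0, 10.0, size=150)
y_pool = rng.normal(50.0, 10.0, size=600)

# Match each beneficiary to the most similar non-beneficiary.
matcher = NearestNeighbors(n_neighbors=1).fit(x_pool)
_, idx = matcher.kneighbors(x_treated)
y_control = y_pool[idx[:, 0]]

# The estimate compares matched pairs rather than random groups, so
# differences on unobserved characteristics (selection bias) may remain.
print(f"Estimated effect: {(y_treated - y_control).mean():.2f}")
```

The remaining bias on unobserved characteristics is exactly why these results, while still likely to be robust, are treated with more caution than a randomised comparison.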

The last method, the mixed method, consists of adding to one of the first two methods a qualitative study, which endeavours to frame the research questions better and to explain why the observed results were obtained.


When a scientific impact evaluation using the quasi-experimental method was carried out on a project encouraging Mauritanian women to attend pre- and post-natal medical check-ups for a very modest fee, the qualitative phase of the study provided an insight into why the most vulnerable women were not reached, as well as into the organisational mechanisms that caused the scheme to lose efficacy when it was extended to the whole country.

This enabled us to adjust the project: to try to make it free for the most vulnerable and to provide more support for improving the quality of care in local medical centres (the full study is available in French).

Researchers are divided over whether the experimental method should be presented as the ultimate in scientific evaluation: some believe that it can also entail certain types of scientific bias and that, depending on the context and the evaluation questions, the quasi-experimental method can be more appropriate.

What is AFD's position?

The researchers who met at AFD on 19 March to debate these issues agreed that the question an evaluation sets out to answer should drive the choice of method. In other words, it is only as a second step that the methodology best suited to answering that question should be considered.

AFD is entirely in line with this approach and refuses to champion one or other of these methods dogmatically. We are all in favour of mixed methods, which introduce a qualitative component based on sociology, anthropology or other specialist fields and which, by examining more deeply the chain of causes and effects leading to the results, can give us ideas on how to adjust future projects and be more efficient.


What obstacles still need to be removed to improve evaluation procedures?

First of all, we must continue to promote a culture of evaluation! It is not yet fully embedded in the French mindset… We find it difficult, even if we are making progress, to look at what our actions achieve and to question whether we are getting it right.

Then, for these evaluations to feed into our strategies and projects, everyone needs to take ownership of them. We therefore have to get project managers, as well as decision-makers, interested by communicating relevant operational messages.

We are also taking measures to raise the awareness of our partners in the emerging and developing countries. Evaluation is a particularly interesting way of enhancing a dialogue on a project and it also contributes to capacity building and better governance.

Evaluations reflect the complexity of the environments in which AFD intervenes, and we try to draw clear messages from them without diluting the methodological rigour of the studies themselves. It is difficult, but we are progressing. And we will soon be publishing our first report on the evaluations carried out.


Further reading:

Evaluation, a Learning Tool to Improve Practices