EVALUATION AND ITS DISCONTENTS:
Do we learn from experience in development?
March 26th, 2012
Ministère de l’Economie, des Finances et de l’Industrie, Paris
CEO, Agence Française de Développement
President, EUDN
A brief overview of the instruments for evaluating economic policies.
Chair: François Bourguignon, Paris School of Economics
Speaker: Sir James A. MIRRLEES, winner of the 1996 Nobel Memorial Prize in Economic Sciences.
Irrespective of the field of interest, economic policies have always been subjected to evaluations, ranging from intuition-based predictions made before a decision is taken to simple post-implementation analyses comparing the situation before and after the policy is implemented. More rigorous methods are certainly required to properly generate the “counterfactual”, namely the state of the world that would have occurred in the absence of the policy under consideration.
Given the essentially microeconomic nature of the policies and projects under consideration, three main approaches have been adopted over the past 50 years: cost-benefit analysis, micro-simulation (with or without behavioural modelling), and the experimental approach (controlled or “natural”). From a macroeconomic perspective, the two main approaches have consisted of modelling all the interactions within an economy (general equilibrium models) and estimating cross-country regressions on samples of countries.
The aim of this session is to present a clear picture of these methods, in their conceptual and empirical frameworks, in order to carry out a twofold evaluation. On the one hand, we will address the relative efficiency of the aforementioned methods for policy and project evaluation. On the other hand, we will assess the validity of these methods’ results when they are applied to contexts different from those for which they were originally designed. How can one explain that cost-benefit analysis has nowadays fallen into oblivion? How can one account for the tremendous success of cross-country analyses in the 1990s (as opposed to panel regressions)? And what should today’s evaluators have in their toolbox?
Experiments and impact evaluation: arguments for and against.
Chair: Pierre JACQUET, Chief economist, Agence Française de Développement
Paul GERTLER, University of California, Berkeley
Jean-David NAUDET, Agence Française de Développement
Controlled experiments – the comparison of a treatment group with a randomly selected control group – have sometimes been described as the “gold standard” of evaluation. The results produced by this method are indeed difficult to challenge on scientific grounds. As a matter of fact, the number of such experiments has grown exponentially, and these studies tend to monopolize the content of academic journals, especially in the field of development. Moreover, impact evaluation projects increasingly rely primarily on the comparison of treatment and control groups, even though the groups are not always chosen at random, in which case these studies amount to little more than a before/after comparison.
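As a purely illustrative sketch (the numbers and variable names below are invented, not drawn from any study discussed at the conference), a small simulation shows why the distinction matters: when outcomes improve over time for everyone, a before/after comparison on the treated group absorbs that trend, whereas a comparison against a randomized control group isolates the programme’s effect.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: a programme with a true effect of +2.0 on an outcome,
# in a population whose outcome also improves by +1.5 on its own over time.
TRUE_EFFECT = 2.0
TREND = 1.5

baseline = [random.gauss(10, 2) for _ in range(10_000)]

# Random assignment: half the sample is treated, half serves as control.
treated = baseline[:5_000]
control = baseline[5_000:]

treated_after = [y + TREND + TRUE_EFFECT + random.gauss(0, 1) for y in treated]
control_after = [y + TREND + random.gauss(0, 1) for y in control]

# Experimental estimate: treatment vs. control at follow-up.
# The common trend affects both groups and cancels out.
experimental_estimate = (statistics.mean(treated_after)
                         - statistics.mean(control_after))

# Naive before/after estimate on the treated group alone:
# it wrongly attributes the secular trend to the programme.
before_after_estimate = (statistics.mean(treated_after)
                         - statistics.mean(treated))

print(f"true effect:           {TRUE_EFFECT}")
print(f"experimental estimate: {experimental_estimate:.2f}")  # should be near 2.0
print(f"before/after estimate: {before_after_estimate:.2f}")  # should be near 3.5
```

The same logic underlies the text’s warning: when the groups are not chosen at random, nothing guarantees that the trend cancels out, and the comparison slides back towards the biased before/after estimate.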
Despite the attractiveness of this approach from a scientific standpoint, one may wonder whether, as some proponents claim, it will enable us gradually to distinguish what “works” from what “does not”. Will we be able to build a catalogue of policies and projects such that decision-makers need only find the best match for their aims and means? This is an unlikely outcome, unless one can conduct an infinitely large number of experiments so that the catalogue eventually exhibits the desired diversity. Not everything can be evaluated with controlled experiments, and the results of specific policies or projects depend both on the context in which they are implemented and on their own characteristics. Controlled experiments will never be able to cover rigorously the entire range of potential cases and possibilities.
Given these limitations, we ought to ask what the appropriate use of these evaluation techniques is, and exactly how far they should be applied. We should equally inquire into the relationship between these methods and other approaches, such as more “structural” ones that rely on our knowledge of human behaviour. Is it possible, for instance, to imagine a genuine strategy for experiments and impact evaluation, one that would take into account their cost, the feasibility constraints, and the lessons to be learnt, but also the consequences of such a strategy for project managers’ behaviour in matters of transparency and accountability?
Is indicator-based management a guarantee of efficiency?
Chair: Mamadou DIOUF, Columbia University
Jodi NELSON, Bill and Melinda Gates Foundation
Evaluation is certainly not a question relevant exclusively to public policies or to projects managed by public authorities. The task is perhaps of the utmost importance to firms: indeed, a host of practices observed in public organisations derive from management principles long used in the corporate world. Indicator-based management is one example of a technique that has been adopted at the managerial levels of virtually every type of organisation. The practice fulfils several roles: it sharpens thinking about which targets to select, provides incentives for increasing actors’ efficiency, and allows results to be evaluated through an analysis of the indicators. The present session will focus on this last aspect.
While indicator-based management stems from a relatively simple management principle, the use of such indicators may not generate the expected efficiency unless a series of properties is satisfied: observability, precision, non-manipulability, exhaustiveness... The actors under scrutiny could, for instance, meet some approximate indicators without achieving the overall results in a satisfactory manner. Such a situation can arise when the indicators rest on rather specific and secondary aspects of the task. At the other extreme, if the chosen indicator proves too general, elements irrelevant to the task will be included and will blur the evaluation: the precise impact of the actors’ efforts may then no longer be discernible.
Since indicator-based management has been embraced by the development field, both at the very aggregate level of the Millennium Development Goals and at the more specific level of the aid-effectiveness indicators promoted by the Paris Declaration, it seems desirable to think carefully about the optimal use of this type of management.
Applying evaluation to development and development aid.
Chair: Alexandra SILFVERSTOLPE TOLSTOY, Ministry for Foreign Affairs, Sweden.
Ruerd RUBEN, Ministry of Foreign Affairs, The Netherlands
Miguel SZEKELY, former Undersecretary for Planning and Evaluation, Mexico
Any development aid donor, including NGOs, wants to show its individual contributors that spending in the recipient country is effective. Similarly, leaders of developing countries may wish to convince their population – and/or their donors – that the development policies implemented are good at fighting the various dimensions of poverty. The difficulty of publicizing the concrete results of development aid, and the associated doubts surrounding its effectiveness, create in developed countries a climate of hesitation towards official development assistance (ODA). The failure to register results may therefore explain the observed weariness towards ODA, its gradual shift towards the social sectors (education, health), and the strong fragmentation of donor efforts, which tends to undermine aid effectiveness, but also the relative success of NGOs, which communicate better about their achievements.
Improving the transparency of development aid operations and their impact on recipient countries therefore constitutes the main way out of the torpor characterizing the development community. This goal can only be achieved through the rigorous evaluation of projects, programmes, and policies undertaken in developing countries, whether they benefit from foreign aid or not. To ensure that development policies and development aid are implemented as efficiently as possible, it is necessary to overcome the political, economic, institutional, and technical barriers that prevent evaluations from taking place. This will be the central theme of the session.
EUDN President, Paris School of Economics.