About evaluations
What is evaluation?
Danida adheres to the OECD/DAC’s definition of evaluation, as presented in its Quality Standards for Development Evaluation (2010): “Development evaluation is the systematic and objective assessment of an on-going or completed development intervention, its design, implementation and results.”
All Danida’s evaluations are carried out by a team of external experts, selected on the basis of their professional competence, independence and experience in relation to the topic and in conducting evaluations. Upon completion, all evaluation reports are made available to the public. Evaluations thus differ from other, less resource-demanding monitoring activities (reviews), which are mainly intended as management tools for the ongoing monitoring of development activities. An evaluation may cover one or more projects or programmes, a strategy etc. The entire Danish portfolio of aid modalities is subject to evaluation – including i.a. bilateral and multilateral assistance, NGO cooperation, research cooperation, business instruments, support to climate change adaptation and mitigation, and support to fragile states. In line with the Paris Declaration on Aid Effectiveness, some evaluations are conducted in cooperation with other donors, and almost all evaluations actively involve local actors from the partner country or countries concerned.
There are two overriding purposes of evaluating development cooperation. One is to provide politicians, the general public in Denmark and the Danish partner countries with documentation of the use and results of aid resources (accountability). Evaluations constitute a tool for assessing the potential and limitations of development cooperation in terms of promoting economic and social development. The other is to further the effectiveness of development cooperation (learning). By means of in-depth analysis, evaluations may help explain why some activities are successful while others are not, and this information can be used to improve approaches and methods in development cooperation.
How are evaluations carried out?
An evaluation process generally has three phases: Preparation, Implementation and Reporting. The preparatory phase begins with the planning of a two-year evaluation programme. The programme is prepared by the Department for Evaluation, Learning and Quality (LEARNING) on the basis of consultations with i.a. the Danida Board, senior management in Danida, representatives of other departments and embassies, and the wider resource base. Decisions on which evaluations to conduct are made on the basis of i.a. the availability of data and the perceived potential for generating knowledge of use for future programme planning. At this point it is also assessed whether it would be advantageous to conduct certain evaluations jointly with other donors.
The final evaluation programme is forwarded to the Parliamentary Committee on Foreign Affairs for comments and finally approved by the Minister for Development Cooperation.
When the general topic of an evaluation has been defined, an approach paper is usually prepared in order to summarise important background information about the evaluation topic, identify issues that will require special methodological reflection, propose specific evaluation questions etc. On the basis of the approach paper, LEARNING prepares draft Terms of Reference, describing what is required of the coming evaluation. This includes its overall purpose, methodological requirements, geographical and thematic scope, specific evaluation questions, requirements concerning the composition of the evaluation team etc. The draft Terms of Reference are forwarded for consultation among stakeholders in Danida, and the final Terms of Reference take the resulting comments into account to the extent possible. The evaluation is subsequently put out to tender following EU procurement rules.
The implementation phase begins with the selection and contracting of an evaluation team consisting of independent, external consultants. The selected team is then provided with background information by LEARNING in collaboration with other relevant units in the Ministry of Foreign Affairs. On this basis, the team develops an operational plan for the evaluation in consultation with relevant stakeholders. The work plan will typically involve an extensive review of existing documents, further development of the approach and methodology, field work with interviews and/or questionnaire surveys among stakeholders in the partner countries, analysis of the collected data, final reporting etc.
LEARNING serves as manager for the evaluation, either alone or, for joint evaluations, together with evaluation units from partner countries and/or other donor agencies. For larger evaluations a reference group is established, with a composition that reflects the topic and purpose of the evaluation. Members can include representatives of relevant embassies and departments, national and international resource persons and representatives from the partner country. The reference group advises the evaluation team throughout the entire evaluation process.
LEARNING monitors the evaluation process in order to ensure i.a. that the evaluation is undertaken in accordance with the Terms of Reference, Danida’s evaluation guidelines, OECD/DAC’s quality standards for evaluation and other relevant policies and guidelines. Further information about the division of responsibility between LEARNING, the evaluation team and other stakeholders, and about how the independence of evaluations is safeguarded, can be found in the “Codes of Conduct”, which are an integral part of Danida’s evaluation guidelines.
Reporting. On the basis of the analysis of the collected data, the evaluation team prepares a first draft of the evaluation report, and draft conclusions and recommendations are often presented to the parties concerned in a stakeholder workshop. LEARNING reviews the draft with a view to assuring the methodological quality of the report, and comments concerning factual information, methods, conclusions and recommendations are provided by relevant stakeholders, members of the reference group and external peer reviewers. The evaluation team considers these comments when preparing the final report, but retains the right to draw independent conclusions and bears sole responsibility for the final conclusions of the evaluation.
Evaluation follow-up
When an evaluation has been completed, a response is written by the senior management of the embassy or department in the Ministry of Foreign Affairs responsible for the activity that has been evaluated. This ’management response’ is written in English and ensures that the consequences an evaluation should have are determined systematically. The conclusions and recommendations of all evaluations are disseminated to Danida staff and management through presentations to Danida’s internal programme committee and at workshops held during the evaluation process. Evaluations thus contribute to internal knowledge sharing and to the development and improvement of overall policies and procedures.
When an evaluation is made public, a Danish summary is also produced, describing the most important conclusions and recommendations of the evaluation as well as a brief version of Danida’s comments on the report. The Danish summary is distributed internally in the Ministry of Foreign Affairs, including to relevant embassies, and is made available to the public on this website along with the evaluation report itself.
Two to three years later, an examination is made of the extent to which, and how, the evaluations published during the previous two-year period have been followed up.
Further information about Danida’s approach to the evaluation of development assistance, including follow-up procedures, can be found in Danida’s evaluation guidelines, which are available under "Relevant documents" in the menu.