[Editor’s Note: This is the second part of the interview with Diego Gonnet, Director of the Department of Information for Management and Open Government at the Direction of Management and Evaluation, Office of Planning and Budget, Presidency of the Republic of Uruguay. He holds a B.A. in Political Science from the University of the Republic of Uruguay and an M.A. in Public Policy from Victoria University of Wellington, New Zealand. This interview is part of a wider series of talks with current and former officials, politicians and researchers from different countries in Latin America about the role of research and information in the public policy process. The interviews are an input to an upcoming online course produced by P&I aimed at promoting the use of evidence to inform policy decisions in Latin America. The first part of the interview is available here. The Spanish version of this interview is available here.]
What factors hinder the evaluation process, from the generation of information to its effective use in the public policy process?
The weakness of the information systems and existing administrative records within state agencies is striking: systems and records have not been digitized, nor were they designed to allow statistical analysis of the data they collect. There are also weaknesses in the quality of the information that is entered. When you ask a question, you usually find that nobody recorded the information required to answer it. We therefore try to work directly with ministries to build better data and administrative records that will eventually facilitate monitoring and evaluation.
Projects also tend to show weaknesses in their design and lack the data needed to monitor or evaluate them. Among other things, evaluations help identify where the information is too weak to assess a project’s impact.
Another difficulty is that the projects under analysis are usually not identified in accounting terms, so ad hoc cost analyses are needed to approximate their cost-efficiency.
Officials also show a certain lack of capacity to systematically collect information and produce analysis. Even those with more analytical profiles have trouble communicating information in ways that effectively reach decision makers.
Finally, the demand for empirical evidence to make decisions is often low, depending largely on decision makers’ training, background and profile. Most authorities have a more political than managerial profile, so they seldom recognize the importance of high-quality information.
Despite these weaknesses, the evaluation process itself works properly. The big challenge is to formalize it within the budgetary process so that it becomes an input for planning and budget review.
What advantages and disadvantages do you find in commissioning external evaluations versus having trained teams that can conduct evaluations internally?
There are diverse experiences within the Uruguayan government. At the Office of Planning and Budget we commission project evaluations from external consultants. The Ministry of Social Development, by contrast, has a specialized evaluation unit, so projects are evaluated by the Ministry’s own officials, sometimes with support from the University.
Personally, I prefer the model we use at the Office of Planning and Budget. The important thing is to build a framework that ensures both the independence of the evaluator and the usefulness of the results. One weakness we sought to reverse was that hired evaluators used to work with a sort of blank check when evaluating and drawing their conclusions. This resulted in evaluations of varying quality and usefulness, as evaluators took advantage of lax terms of reference.
Drawing on the Chilean experience, we designed a methodology that ensures the evaluator’s independence but also provides support from the Office. The Direction of Management and Evaluation guides the process and provides the methodology, but the assessment and the opinion are external and therefore independent. The process thus seeks to ensure that evaluations respond to questions considered relevant from a management point of view: we provide toolkits, hold workshops with evaluators before the work begins, and facilitate the dialogue between the evaluators and the evaluated agency, requesting the information the former need to do their job.
The evaluated agency also takes part in the process. This encourages constructive use of evaluations and helps decision makers take ownership of the results, creating an environment in which, at the end of the process, evaluated agencies find elements that are useful for their work and that also build organizational learning.
How do you choose the evaluators? What qualities should they have?
For this type of evaluation, a team is formed through three separate calls, so we can count on three profiles with different specialties: one evaluator has experience in organizational analysis, another has knowledge of the specific subject or sector of the evaluated project (e.g., health, education), and the third is a junior researcher with sound methodological training who gathers data and analyzes the information. Once the three evaluators are assembled, we carry out an induction to explain what is expected from each role, the methodology and the expectations of the evaluation, and then we accompany the information-gathering process.
What formats are used to communicate evaluation findings?
We work with standardized reports, which are a component of the methodology. To present the results, we rely on executive summaries and meetings. The challenge, however, is to develop shorter, more dynamic formats that present the results to decision makers and the evaluated agencies in a more effective and powerful way.
What actions can the State take to develop officials’ capacity to take advantage of available evidence and promote its use in their workplaces?
Short courses and workshops focused on practical actions should be explored, where officials learn to take into account the rationality of decision makers, including their short- and medium-term incentives, so they can craft more effective messages. Some officials eventually learn this through practice, but it takes longer.