[Editor’s note: This post was written and published by Heather Lanthorn and Suvojit Chattopadhyay. Heather is a PhD candidate in Global Health Policy at Harvard University, based in India, and Suvojit is a development professional with a particular focus on M&E. It builds on a previous post.]
reminder: the scenario
in our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive their own decisions about continuing/scaling/modifying/scrapping a policy/program/project.
and yet. the role of evidence in decision-making of this kind is unclear.
in response, we argued for something akin to patton’s utilisation-focused evaluation. such an approach assesses the “quality” or “rigor” of evidence by how well it addresses the questions and purposes at hand, with the most appropriate tools and timings to facilitate decision-making in a particular political-economic moment, including the capacity of decision-makers to act on evidence.
(this parallels woolcock’s definition of rigor, here. to the extent that we focus on a scenario in which donors and the M&E team design an evaluation intended to inform scale-up decisions (of effectiveness (not efficacy) across the relevant geographies and with likely implementers, say), we sidestep some – though not all – of pritchett and sandefur’s critiques of rigor vis-a-vis the multiple dimensions of context.)
in this post, we continue to explore this scenario, which sets up a seemingly ideal case of evidence-informed decision-making (donor-commissioned, directly instrumental evaluations (rather than symbolic or conceptual ones)). we move beyond designing usable/useful evaluations to ask what might help donors make decisions that are, if not necessarily “right,” at least informed, reasoned and justifiable given the available evidence. to this end, we present a potential set of criteria to (begin a conversation on how to) set up a process that can yield thoughtful, reasoned and ‘fair’ decisions that take evidence into account.
to begin, we ask what does drive decision-making at present.
what does drive decision-making?
the recent semantic shift from “evidence-based” to “evidence-informed” decision-making reflects a brewing recognition among evidence nerds that decisions aren’t — can’t be (?), shouldn’t be (??)** — made in a strictly technocratic way. most political scientists and policymakers — and certainly politicians — have known this for a very long time.
politics are constitutive of policymaking. full stop. it is naive to proceed under any illusion that evidence will replace politics — or, more generally, the need to think. researchers and M&Eers can learn more about the processes; party platforms and ideologies; election cycles and decision timetables — and potentially understand how to leverage them — but these factors don’t go away no matter how hard we wish for technocratic decision-making.
participants at a 2012 conference on evidence-based policy-making generally agreed that “evidence is a relatively minor factor in most policy makers’ decision making” and that “many other factors” influence the decisions made. additional factors in policy decision-making include:
inertia, path-dependence, habit
administrative feasibility to implement
decision-maker and public values, ideologies and perceptions about the way things are and ought to be
political benefit/cost of adding or removing a visible program
alignment of program’s expected impact trajectory with political cycles, opportunity windows
personal & professional ambition, the interests of powerful advocates and lobbyists
justifying past budgets and decisions
personal and expert experience, gut feelings
given that all this (and more) is usually part of any decision-making reality, we try to lay out, below, an approach to guide decision-making.
a deliberative process
our proposal draws heavily on norman daniels’s work on “accountability for reasonableness” (A4R), a rawlsian-influenced approach to procedural justice with regard to distributing resources that are scarcer than the needs requiring fulfillment.***** daniels asks whether, in the absence of clearly fair outcomes or principles, a fair process could be established in a particular context.
to this end, A4R pursues “pure (if imperfect) procedural justice” – a process by which, in the absence of clear principles**** of decision-making (for example, strictly following the results of a cost-effectiveness analysis** or giving complete priority to the worst-off), ex ante agreement on the process of decision-making will lead to outcomes that can be accepted as “fair.”***
in this case, we ask how we could shape the decision-making deliberation process ex ante so that, regardless of the decision taken by the designated decision-makers, all stakeholders feel the decision is ‘fair’ because the process was deemed fair, even if it was not their favored outcome. daniels proposes four criteria to guide the formation of such a process.
below, we introduce the basic criteria. we will look at each of these in greater detail in a set of future posts. (get excited!)
1. relevant reasons
what types of reasons will be considered “relevant,” and therefore permissible, in decision-making? these reasons, once agreed, could also influence the types of data collected in the evaluation itself. we are not proposing that each reason be given an ex ante weight so that there is a precise algorithm for decision-making, only that what is on and off the table be agreed in advance.
another key consideration, of course, is who will be involved in setting the relevant reasons and who will be involved in the actual decision-making. would there, for example, be a mechanism for public participation or comment?
2. transparency
how transparent should the decision-making process be, including the reasons deemed relevant for decision-making? should everything be made public, or does that make it too difficult for stakeholders to speak honestly? some stakeholders will need ‘cover’ and to not have their full views publicized. might a commitment to transparency scare away implementing organisations from trying out innovative ideas for fear of failure – especially if that failure might be publicly known?
a commitment to transparency includes deciding the extent to which each of the following will be made public, and at what point in time: the agreed relevant reasons, the process of deliberation, and whether a full transcript or just a summary of the deliberation is released.
3. revisability
in the initial A4R framework, developed for health insurance decisions, the revisability criterion related to appeals made in light of new evidence. for donor programmes that employ a particular technology that is prohibitively expensive to scale, we can imagine that a breakthrough that lowers the price of the technology should lead the donor to revisit its decision not to scale.
another twist on revisability in the case of development programmes could be an appeals process for members of the public or civil society who were part of the programme pilot to argue for (or against) continuing the programme.
4. enforceability
the enforceability criterion requires that someone have the institutional authority to make sure that the other conditions are met.
* we would like to acknowledge the sounding-board excellence of arjun, payal, sameer and urmy, as representatives of MNDC.
** as a case of strictly following CEA, and of why evidence perhaps shouldn’t be (setting aside whether it can be) the only driving reason for a decision, consider Oregon’s 1990s effort at strictly technocratic priority-setting for medicaid.
*** daniels notes that the validity of the approach is premised on the supposition that we can better agree on a fair process than on principles; this premise needs to be empirically documented and tested to move the conversation forward.
**** see daniels on the ‘four unsolved rationing problems’ with regard to health decision-making at a population level.
***** daniels’s ideas have yet to be tested empirically.