This is a companion to a series of posts following the launch of an Involve research paper comparing public engagement in policy involving science and technology to public engagement in aid policy. The first post summarised the paper. The second and third posts proposed some different ways that DFID could engage the public in deliberation about aid policy. This post explores the linkages between public engagement and aid effectiveness.
Accountability and assessment are at the core of effective aid. DFID has made a clear commitment to these principles, and has followed through with internal reforms which put value for money, management for results and evidence-based decisions at the centre of their work. These are vital to good aid.
However, clumsy application of these ideas, driven by a desire to be transparent to the public, risks creating inappropriate assessment processes which are neither accountable nor improve aid effectiveness. DFID has committed to a list of 23 quantifiable results in its second tier of results. But what do figures such as 9 million children “supported in primary education”, or the number of people benefiting from cash transfer programmes, actually mean? Moreover, DFID claims that results at this level can be ‘directly linked’ to DFID funds. Proving direct causation is not easy, and is next to impossible for the kind of complex programming that frequently happens in the aid context.
Every project differs in context, challenges and implementation. By lumping different project outcomes together through aggregation, the numbers risk becoming meaningless for accountability purposes. And by insisting on proving attribution rather than contribution, measurement in complex environments, with many different aid actors involved, becomes almost impossible. Even in a court of law, where a single case is weighed up, the test for showing causation is ‘beyond reasonable doubt’ rather than full attribution.
The requirements for aggregated numbers and attribution are driven by DFID’s proper concern for UK tax-payers’ money, but both are costly in terms of time and opportunity costs. When budgets are under downward pressure due to the value for money agenda, these pressures tend to squeeze out the time for learning about how to do aid programming better – as a recent survey of development professionals noted, “accountability trumps learning”.
The same study, reporting on experiences of donors’ accountability requirements, suggests that these requirements risk excluding smaller organisations based in the south. These organisations have less advanced administrative and measurement systems, but have the knowledge and capacity to do good work – if they are supported. The agenda has also driven up spending on consultants – of which I have been one – with a 2009 paper suggesting that between 1993 and 2006, DFID’s spending on consultants jumped from £0.2 million to £256.2 million.
This is not to say that measurement or evidence are unnecessary; on the contrary, they are vital. However, they must be appropriate and must be targeted not just at resource management, but also at supporting politically astute programming. Doing this requires ensuring that policy-makers meet the expectations of the public through meaningful processes.
Aggregated numbers and impossible-to-prove causal propositions are not useful either for effectiveness or for transparency. Indeed, attribution is not required at other levels of DFID’s reporting structure: Level One, for example, covers achievement of the MDGs, while Level Three focuses on operational effectiveness rather than on ultimate results.
We argue in our paper that improved public engagement in international aid will allow policy-makers to better understand the public’s expectations, help them grasp the challenges involved, and refine meaningful accountability processes which do not hamstring development programming.
The full report: Resetting the Aid Relationship (PDF document)
Image by aussiegall