With so much public money being spent on the Big Society, the government is going to be under pressure to demonstrate that this is money well spent. However, centrally planned and implemented evaluations will act against the development of authentic, self-reliant community groups. How can government resolve this tension?

The Big Society is an important plank of the Coalition’s programme of government. The Prime Minister has launched, or relaunched, it four times. He has expended significant political capital in the process. The Cabinet Office recently awarded the organisation Locality the contract for training 5,000 community organisers at a cost to the taxpayer of £15m. Other government departments are launching, or in some cases rebranding, projects and programmes as part of the Big Society; as one example of many, the Department of Health recently announced a £6m fund to support volunteering in health and social care. All of these activities are aimed at building the participation of individuals in activities that strengthen communities.

The pressure is going to be on the government, and David Cameron personally, to demonstrate the success of the Big Society. Evaluation will play a significant role in answering questions about the effectiveness and efficiency of government spending on the Big Society.

The pressure to evaluate programmes of government to promote the Big Society exposes a contradiction at the heart of the Big Society.

The Big Society is built on the idea that the state needs to get out of the way. The aim is to provide space for communities to identify the problems that most affect them. The government expects that communities will then take responsibility for solving the problems they have identified.

This exposes the contradiction that faces the government as it thinks about how to evaluate the Big Society. Effective evaluation will need metrics that can be replicated across Big Society activities – otherwise how can their effectiveness and efficiency be compared? These metrics will struggle to represent the reality of what is going on in the many different communities up and down the country. Even worse from the perspective of the Big Society, they will act as a straitjacket, constraining the actions communities can take as they try to fit the central metrics. Finally, developing central, funder-driven metrics will force communities running Big Society projects to face upwards, to be accountable to those handing out the money. This will make it harder for them to be accountable to the communities themselves. Centrally commissioned evaluation therefore risks undermining the fundamental premise on which the idea is built.

NESTA has tried to tackle these problems head on as part of its Neighbourhood Challenge. The challenge, awarded to seventeen communities across England, aims to support community-led innovation. It hopes to show how community organisations can galvanise people to work together to create innovative responses to local priorities, particularly in neighbourhoods with low levels of social capital. NESTA is supporting these organisations with training, practical tools and small amounts of what it calls catalytic money.

Alice Casey, the Challenge Programme Manager, explained how NESTA intends to move away from demanding regular reports and a big end-of-project evaluation: “systems created with good intent, but that end up generating tons of work and reams of paper that add questionable value. Accountability for public money is of course hugely important for funders, but does anyone really read the monitoring information that you send in to them? So we thought we’d try something a bit different.”

To do this NESTA has provided each community group with a basic blog template, some training and a requirement that they use the blog to report monthly on progress.

While NESTA clearly sees this as piloting a different approach to evaluation, it hopes that real-time, public reporting like this will have a number of positive impacts. Specifically, it hopes the blogs will:

  • open project reporting to more people than just the funder;
  • promote communication with the projects, increasing their profiles and reach;
  • help NESTA understand how the projects unfold on the ground as events happen (in contrast to a final evaluation after everything has finished);
  • share that information in a way that traditional reporting mechanisms can’t; and
  • provide greater transparency and accountability.

This experiment is to be welcomed: it is an imaginative approach to overcoming some of the issues that traditional evaluation and project monitoring raise, particularly in the context of the Big Society.

The approach is not without its own potential limitations, of course. It will be interesting to see, for example, whether NESTA is able to collect information of sufficient quality and comparability across the seventeen pilots to draw more general lessons. The approach also raises the question of how individuals within communities who aren’t online will be able to access, and engage with, the reports. However, this is a much larger issue with formal evaluation and shouldn’t be seen as a criticism of this approach – rather it highlights again the tension between adequate reporting and evaluation on the one hand, and real accountability to communities on the other.

Effective evaluation of the Big Society must mean finding ways to build community conversations and strengthen horizontal accountability. Evaluation will have failed if it results in a glossy report that closes those conversations down and continues to focus communities upwards towards the money. However, such approaches can’t replace the rigorous evaluation that is needed in many cases; the challenge therefore will be in getting the balance right.

This post was written for the October edition of The Evaluator, the magazine of the UK Evaluation Society.

Image credit: __o[FI]__