Published on July 22, 2013

“So, who evaluates the evaluator?”

By Helen Fisher

Helen is Involve’s communications and evaluation associate, supporting the knowledge base and development of projects involving these areas. She also provides Involve with sector-specific input on engagement around the topics of energy, climate change and science.

I’ve worked with and around Involve for many years now, in fact since the organisation’s inception; I have fond memories of sitting in a dusty office one evening back in 2004 helping to mail out the invitations to Involve’s first ever event.

I’m extremely happy to be working even more closely now with Involve as its Communications and Evaluation Associate. Aside from science and energy related topics, which I am particularly interested in, I’ll be blogging on issues around communications within or around public engagement and evaluation of engagement projects, two topics close to Involve’s organisational heart. I thought I’d start with evaluation.

A few weeks ago, during a meeting for an engagement project in which I am part of the evaluation team, I had a discussion with a fellow attendee. It stuck with me because of a single question he asked: “so, who evaluates the evaluator?” I was taken aback at first, but on further thinking I began to have some sympathy for his concerns. What makes me qualified to evaluate anything?

I began to think about some of the specific questions around evaluating engagement processes: why do you want to evaluate, who is the evaluation for, what do you want to find out, how do you find it out, and how and what do you then report back? Basic stuff, but surely if you can answer these questions to the satisfaction of all involved, you’re well on your way to being a trusted evaluator. No?

No. I think the point is that it shouldn’t just be up to the evaluator to answer these questions or to frame the evaluation. Evaluation should be built in from the start, planned early, and adapted as the project progresses.

Why and who for are fundamental questions to which the answers should be known before appointing an evaluator, and what you want to know should stem directly from them. There is a huge difference, for example, between assessing a project’s value for money for a commissioning body and assessing its personal value to the public participants. Do you assess impacts over the short or the long term, and from the participants’ point of view or the policy makers’? How do you measure success? How do you prioritise what to evaluate?

These questions should be answered in partnership with the evaluators, not solely by them. The methodology and reporting are of course a key part of the evaluator’s remit and require a necessary amount of experience and expertise, but even these aspects can be governed to a degree by an oversight body.

So really, an evaluator’s first job is to ensure that these key upfront questions have been answered thoroughly and to build a delivery structure around them. Beyond this, I believe an evaluator should bring a rigorous methodological approach, an enquiring mind, a degree of flexibility, a willingness to provide constructive and sometimes challenging feedback, and the ability to draw clear insight from varied, sometimes conflicting, sources. This all contributes to the delivery of a robust methodology and an insightful, useful report. How you definitively assess all of these qualities through a written tender, however, is a question for another day.

But I keep thinking back to that question of “who is the evaluation for?” Surely this is the most important question, and surely these should be the people who ultimately evaluate the evaluator. And when we talk about public engagement, surely a large proportion of these people are members of the public themselves: not just those who take part in the process, but the wider population who stand to be affected by project outcomes and whose money is being spent on engagement projects and their evaluation.

Making evaluation more participative can increase relevance, make outcomes more effective, create ownership and empowerment, and build capacity in those taking part; it sounds like a fantastic thing. But it can be difficult to deliver in terms of time, resource and energy for all involved. People do it though, and have been doing so for some time, for example in the development sector, in the States, and here in the UK.

I wonder, then, whether there is a way of making a more participatory approach more commonplace, one that goes beyond asking participants to contribute through the usual mid- or end-of-process interviews and questionnaires.

You could start by involving them directly in the oversight body, in drafting evaluation or governance criteria, or in a longer-term process of review and governance across a range of engagement projects. But before we do any of this, we have to ask ourselves: do they actually care? And, if not, what should we do about it? Frame people’s involvement in a different way, perhaps? Only look towards a participatory approach for the issues that elicit a naturally high level of public interest? Or should we just stick the kettle on and type up another evaluation questionnaire…

Image by Alberto G.
