
Published on January 18, 2013

Playing policy top-trumps: Is there a best way to inform policy?

Emily Dawson works at King’s College London (Department of Education & Professional Studies) and University College London (Department of Science & Technology Studies) doing a mixture of research and teaching on public engagement with science and science in society subjects. The following is a guest blog that Emily has prepared for Involve.

One of the happy benefits of doing academic work is that you get to watch what I like to think of as idea-fashions. Ideas about who or what should influence policy making are particularly suitable for trend-watching because they sit at potentially powerful crossroads of funding, power and influence. Notice, if you will, the British Autumn/Winter collection of ideas about policy making 2012/13: ‘the RCT as policy panacea’.

Making its winter debut in Haynes et al. (2012), ‘Test, learn, adapt: Developing public policy with randomised control trials’[1] (better known, perhaps, as the report Ben Goldacre was involved with), the RCT is versatile enough to be worn in any context. From medicine to education, let the RCT cater for all of your needs to inform policy this season.

Facile metaphors aside, there is something interesting going on here. Let me switch from the catwalk to the playground and suggest a game of top-idea-trumps for policy makers.

Evidence trumps opinion: public engagement trumps expertise alone: what trumps RCTs?

In recent weeks, questions over the role of evidence in policy making have been discussed at length in person at a series of seminars organised by a number of organisations, including the University of Cambridge Centre for Science & Policy, Sciencewise and the Institute for Government, as well as on the related Twitter stream. The last of these takes place at Sussex’s SPRU early next month.

So what’s all the fuss about? The issues in question go to the heart of policy making to ask, ‘on what basis are policies made?’

Many people have argued that policies, especially government policies, should be based on evidence. In other words, that research ought to inform policy decisions. In debates like this, I sometimes find it helpful to read ‘policy’ as ‘funding’.

One method in particular, the randomised control trial, or RCT, has been discussed as a sort of ultimate, most reliable & most robust form of evidence: the top trump. Given these credentials, it has been argued that RCTs are the research method of choice for those involved in developing evidence for policy making. Of note here have been the efforts of Ben Goldacre. As a proponent of RCTs, Ben Goldacre has been vocal in suggesting that RCTs, which play a key role in developing medical practice, ought to be equally key in other fields, for example, in developing education policy & practice[2].

The discussions I hear, read & join about evidence based research and the potential role of RCTs often become polarised. In one corner are those who remain steadfastly convinced that RCTs present the most useful route to informing & influencing policy, while in the other corner are those equally resolute in their view that RCTs have little or no place outside medical research.

Through these enlightening chats I, like many others, have become an advocate of the middle ground. RCTs are one research method amongst many. Neither perfect nor irrevocably flawed, RCTs are not easily translated into complex social situations with variables that are hard to identify, let alone control for. At the same time, RCTs can be very useful and could play a valuable role in any research setting, in the same way that interviews, ethnographic studies or surveys are useful research tools. In other words, a well-designed, well-executed, well-analysed RCT is no more a top-trump than a well-designed interview study.

There are two things that really interest me about these debates. The first, and frankly less interesting, is the tendency for discussions of evidence-based policy to focus on the potential role of RCTs. This is in itself an interesting shift in the debate, and potentially a side issue. It is a somewhat deceptive debate, because people arguing against RCTs are not necessarily arguing against the use of evidence by policy makers. You would be hard pushed to find people who believe policy would be better based on whimsy rather than evidence. In some senses, then, the question of whether policy should be informed by research is easy to answer (for me) with a ‘yes’.

But there is a second, more interesting issue in these idea-fashions. For at least a decade & a half, scholarship and professional debate about science policy making has focused increasingly on the role of public voices in policy making processes. In the much discussed move from ‘public understanding of science’ to ‘public engagement with science’, the role of experts from the scientific community was superseded, at least in idealised terms, by the role of public voices. In top-trumps terms, there was a shift from experts as all-knowing answer-providers to the logics of publics, their attitudes, experiences & myriad forms of knowledge.

The question lingering in my mind as a result of these discussions about evidence-based policy is whether the emphasis on evidence is a return to the logics of expertise.

If the arguments made by scholars from the social studies of science and the risk society have demonstrated anything, it’s that definitive answers are hard to come by. Just as experts from within the scientific community continue to disagree (for such is the nature of research), so too is there potential for contradictory evidence.

Whether the definitive answers come directly from experts or from experts via their research evidence, there will always be additional dimensions to weigh & discuss in any decision making process. There is also considerable power to be leveraged by those who are in a position to decide which evidence is ‘best’, most robust, most representative, most valid in analysis, most ‘public’ or most expert.

The use of evidence in policy making is then, to me, no more or less valuable than the other tools available to policy processes, including increased transparency and public engagement processes. None of these tools, idea-fashions or trump cards are, in the messiness of the real world, mutually exclusive. Perhaps the more difficult discussions that we need to have are not about what constitutes the best form of evidence, but about finding useful ways to pull all these ideas & processes together for policy making.

[1] Haynes, L., Service, O., Goldacre, B., & Torgerson, D. (2012). Test, learn, adapt: Developing public policy with randomised control trials. London: Cabinet Office Behavioural Insights Team.

[2] This is not a new argument. If you want to read more, have a look at the writings of Robert Slavin, who has long argued for the use of RCTs in education research and policy making, and to balance that out you could have a read of Gert Biesta or Patti Lather, who argue vehemently against the use of RCTs in that same field (these writers are just the tip of the iceberg).

Image by: unloveablesteve

9 Responses to “Playing policy top-trumps: Is there a best way to inform policy?”

  1. Chris
    January 18, 2013 at 4:34 pm

    “You would be hard pushed to find people who believe policy would be better based on whimsy rather than evidence.” – Whilst true, it is trivial to find people who make policy based on what they believe without regard to the evidence.

    • emily
      January 21, 2013 at 1:12 pm

      Hi Chris, thanks for making that point. I think most people are not averse to the idea of using evidence to support policy development. The difficult part seems (to me at least) to be about the detail of how and why different forms of evidence are used.

  2. January 18, 2013 at 5:13 pm

    I’m sorry to say this, but I really do think it is peculiarly arrogant to suggest that doing RCTs thoughtfully – and using them as part of a range of appropriate methods for assessing policies – is somehow the author’s own special middle ground.

    In relation to what strawman is this the “middle ground”?

  3. Adam
    January 18, 2013 at 5:29 pm

    Yes, the road from RCT evidence to making policy decisions isn’t automatic and does need to be discussed. But that misses the fundamental point:

    Are enough government proposals, alternatives and novel ideas from elsewhere properly trialled and robustly assessed? No. The End

  4. January 18, 2013 at 5:34 pm

    I agree, Adam.

    It is remarkable to see so much resistance and melodrama from the social science research community in response to simple suggestion that we do more RCTs.

    There is also a bizarre suggestion around that proponents of more RCTs think qualitative research has no role at all. People who are willing to engage in this kind of simple fabrication have no role to play in any serious discussion.

    • emily
      January 21, 2013 at 1:13 pm

      Hi Ben, thanks for your comments here & elsewhere. The ‘middle ground’ was really somewhere that I had arrived at personally after being involved in conversations with colleagues on this subject for the last year. Some of those conversations were heated while others were not. Some included arguments about what kinds of knowledge were more important and more likely to influence policy than others. The blog post was one way in which I was reflecting back on those discussions. In that sense, I would agree I am not the first, or only, person to have come to the conclusion that RCTs are one useful method amongst many and can play a worthwhile role in developing policy.

      I do think it is important to be able to have discussions about the roles of evidence in policy making. You are probably very familiar with arguments for and against RCTs in particular, but for those of us not as close to the debate as you, it can be useful to reflect on the issues involved.

  5. Vikki
    January 18, 2013 at 8:17 pm

    As part of the Gov social research community those I work with are very keen to do more RCTs or at least have proper comparison groups and propensity score matching. Many of us are working hard to gain the political will to do this. Glad to say we have one coming up in my area as part of mixed method project and Cabinet Office will support us. There is some progress and it is very welcome.

    • emily
      January 21, 2013 at 1:14 pm

      Hi Vikki, thanks for letting us know about your project, it sounds really interesting. I’d be very interested to hear about how it goes. For me, discussions with colleagues aside, a mixed methods model could provide a useful perspective to understand more about what kinds of methods are useful in practice for different parts of the policy development process. Good luck with it – and if you can share any more information about the project, please do!

  6. Hannah
    February 15, 2013 at 6:13 pm

Thanks for this – an interesting article. I’m a researcher in mental health currently conducting a study on a particular policy programme. I think what it comes down to for me is that the ability of RCTs to tell us ‘what works’ or not has always been overstated for social interventions. They can’t tell us about ‘real world’ application, where context, causal mechanisms and therefore outcomes will vary – I think realist approaches, which take these into account, are likely to be much more useful. My concern is that focusing on RCTs as the answer might squeeze out more methodologically sophisticated studies.
