Emily Dawson works at King’s College London (Department of Education & Professional Studies) and University College London (Department of Science & Technology Studies) doing a mixture of research and teaching on public engagement with science and science in society subjects. The following is a guest blog that Emily has prepared for Involve.
One of the happy benefits of doing academic work is that you get to watch what I like to think of as idea-fashions. Ideas about who or what should influence policy making are particularly suitable for trend-watching because they sit at a potentially powerful crossroads of funding, power and influence. Notice, if you will, the British Autumn/Winter collection of ‘ideas about policy making 2012/13: the RCT as policy panacea’.
Making its winter debut in Haynes et al. (2012), ‘Test, learn, adapt: Developing public policy with randomised controlled trials’ (better known, perhaps, as the report Ben Goldacre was involved with), the RCT is versatile enough to be worn in any context. From medicine to education, let the RCT cater for all of your needs to inform policy this season.
Facile metaphors aside, there is something interesting going on here. Let me switch from the catwalk to the playground and suggest a game of top-idea-trumps for policy makers.
Evidence trumps opinion; public engagement trumps expertise alone; what trumps RCTs?
In recent weeks, questions over the role of evidence in policy making have been discussed at length in person at a series of seminars organised by a number of organisations, including the University of Cambridge’s Centre for Science & Policy, Sciencewise and the Institute for Government, as well as on the related Twitter stream. The last of these will take place at Sussex’s SPRU early next month.
So what’s all the fuss about? The issues in question go to the heart of policy making to ask, ‘on what basis are policies made?’
Many people have argued that policies, especially government policies, should be based on evidence. In other words, that research ought to inform policy decisions. And by policies/policy, in debates like this I sometimes find it helpful to read ‘funding’.
One method in particular, the randomised controlled trial, or RCT, has been discussed as a sort of ultimate, most reliable & most robust form of evidence: the top trump. Given these credentials, it has been argued that RCTs are the research method of choice for those involved in developing evidence for policy making. Of note here have been the efforts of Ben Goldacre. As a proponent of RCTs, he has been vocal in suggesting that RCTs, which play a key role in developing medical practice, ought to be equally key in other fields, for example in developing education policy & practice.
The discussions I hear, read & join about evidence-based research and the potential role of RCTs often become polarised. In one corner are those who remain steadfastly convinced that RCTs present the most useful route to informing & influencing policy, while in the other corner are those equally resolute in their view that RCTs have little or no place outside medical research.
Through these enlightening chats I, like many others, have become an advocate of the middle ground. RCTs are one research method amongst many. Neither perfect nor irrevocably flawed, RCTs are not easily translated into complex social situations with variables that are hard to identify, let alone control for. At the same time, RCTs can be very useful and could play a valuable role in many research settings, in the same way that interviews, ethnographic studies or surveys are useful research tools. In other words, a well-designed, well-executed, well-analysed RCT is no more a top-trump than a well-designed interview study.
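For readers who haven’t met the method outside these debates, a toy sketch may help show what an RCT mechanically involves: units are randomly assigned to treatment or control, and the effect is estimated as the difference in group means. All of the names and numbers below are invented for illustration; this is a minimal sketch, not a template for a real trial.

```python
import random
import statistics

def run_rct(population, effect, seed=0):
    """Randomly assign each unit to treatment or control, then
    estimate the treatment effect as the difference in group means."""
    rng = random.Random(seed)
    treatment, control = [], []
    for baseline in population:
        if rng.random() < 0.5:
            treatment.append(baseline + effect)  # outcome if treated
        else:
            control.append(baseline)             # outcome if untreated
    return statistics.mean(treatment) - statistics.mean(control)

# Hypothetical baseline outcomes: scores around 50 with random noise,
# and an invented true effect of 5 points for treated units.
rng = random.Random(42)
population = [50 + rng.gauss(0, 10) for _ in range(10_000)]
estimate = run_rct(population, effect=5, seed=1)
print(round(estimate, 1))  # randomisation should recover roughly 5
```

The point of the randomisation is that, on average, the two groups differ only in treatment status; the hard part in complex social settings is everything the toy model leaves out, from attrition to spillover effects.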
There are two things that really interest me about these debates. The first, and frankly less interesting, is the trend in discussions of evidence-based policy to focus on the potential role of RCTs: in itself an interesting shift in the debate, and potentially a side issue. It is a somewhat deceptive debate, because people arguing against RCTs are not necessarily arguing against the use of evidence by policy makers. You would be hard pushed to find people who believe policy would be better based on whimsy rather than evidence. In some senses, then, the question of whether policy should be informed by research is easy to answer (for me) with a ‘yes’.
But there is a second, more interesting issue in these idea-fashions. For at least a decade & a half, scholarship and professional debate about science policy making have focused increasingly on the role of public voices in policy making processes. In the much discussed move from ‘public understanding of science’ to ‘public engagement with science’, the role of experts from the scientific community was superseded, at least in idealised terms, by the role of public voices. In top-trumps terms, there was a shift from experts as all-knowing answer-providers to the logics of publics, their attitudes, experiences & myriad forms of knowledge.
The question lingering in my mind as a result of these discussions about evidence-based policy is this: does the emphasis on evidence mark a return to the logics of expertise?
If the arguments made by scholars from the social studies of science and the risk society have demonstrated anything, it’s that definitive answers are hard to come by. Just as experts from within the scientific community continue to disagree (for such is the nature of research), so too is there potential for contradictory evidence.
Whether the definitive answers come directly from experts or from experts via their research evidence, there will always be additional dimensions to weigh & discuss in any decision making process. There is also considerable power to be leveraged by those who are in a position to decide which evidence is ‘best’, most robust, most representative, most valid in analysis, most ‘public’ or most expert.
The use of evidence in policy making is then, to me, no more or less valuable than the other tools available to policy processes, including increased transparency and public engagement processes. None of these tools, idea-fashions or trump cards are, in the messiness of the real world, mutually exclusive. Perhaps the more difficult discussions that we need to have are not about what constitutes the best form of evidence, but about finding useful ways to pull all these ideas & processes together for policy making.
Haynes, L., Service, O., Goldacre, B., & Torgerson, D. (2012). Test, learn, adapt: Developing public policy with randomised controlled trials. London: Cabinet Office Behavioural Insights Team. https://update.cabinetoffice.gov.uk/resource-library/test-learn-adapt-developing-public-policy-randomised-controlled-trials
This is not a new argument; if you want to read more, have a look at the writings of Robert Slavin, who has long argued for the use of RCTs in education research and policy making. To balance that out, you could read Gert Biesta or Patti Lather, who argue vehemently against the use of RCTs in the same field (and these authors are just the tip of the iceberg).
Image by: unloveablesteve