This is the second of two posts inspired by a recent Cambridge University Centre for Science and Policy (CSaP) event on Risk and Uncertainty. Given Involve’s focus, these posts don’t attempt to do justice to the event as a whole, but rather reflect on those aspects that have relevance for public engagement.
My first post covered what the first panellists had to say about the Precautionary Principle. However, I noted at the end of it that, despite learning a lot, I actually found the discussion unsatisfactory. This post briefly explores why, but my view is summed up by these two tweets, which capture my question to the panel and a summary of the answer.
The first two speakers, Professor David Salisbury and Mark Cantley, spoke about vaccination and biotech respectively.
Salisbury told a tale, recounted in my previous post, of vaccination policy that is partly driven by taking too much account of unquantified risks to public health. The impact of this has been negative as children who would otherwise have been protected go unvaccinated.
Cantley told a story about the European biotech industry being strangled at birth by over-zealous application of the precautionary principle in European law and policy. His view was that this was negative because it prevented the positive gains to be made from growing GM crops from being realised.
Both of these speakers appeared to me to have a model of public policy that is driven by the deficit model: if only the public understood the science better, then they’d do the ‘right’ thing. Salisbury firmly placed the blame on the Daily Mail and other tabloids for misinforming the public; Cantley blamed green NGOs for pushing one view and successfully lobbying eurocrats. The view expressed was that lobby groups are pushing a single point of view driven by funding and, as the tweet above shows, ‘wrong’ belief systems.
To a large extent I have sympathy with Salisbury as he tries to get the message out about vaccination safety in the face of what is all too often extremely unhelpful media coverage. However, both speakers appear to think that the issues at stake are (almost) exclusively about the science. In Cantley’s case he appears to totally miss the point that the application of GM technology has potentially huge economic, regulatory, political, social and cultural implications. These are issues on which, regardless of the scientific safety of the technology, the public has a legitimate view that must be heard and factored into any decisions.
Both speakers appear to take no account of public distrust of policy decisions relating to science and technology. I wonder, in the case of vaccination for example, whether the solution to the problem isn’t thinking through ways in which the public can be more involved in the regulation of vaccination policy, as a way of building up greater trust in the way policy is made. If the press is really the problem, then finding ways to get round it, other than trying to shout louder about scientific safety, seems a sensible thing to try.
In the session after lunch, Professor Andrew Challinor spoke about developing climate change models. It was a fascinating talk which I’m not going to try to summarise here. Instead I want to focus on a discussion that developed in the room, on twitter and over lunch. This is encapsulated by the following twitter exchange that developed during the question and answer session. The first tweet summarises a point that Challinor made, the second a response over twitter challenging it:
I put this question to the panel. In essence the answer was that the problem is time. The right message will get out in the end, but decisions are being made right now on the basis of the cherry-picked message, and these are locking us into higher CO2 emission paths.
This seems to me to be a classic case of policy-makers (and scientists) focusing on the immediate problem rather than thinking about the long-term implications of being seen by the public to lack transparency. This was backed up by the final panel session at the end of the day, where the majority of the panellists appeared to be of the view that, even on highly contentious and polarised issues like climate change, it is better to be transparent, because getting caught holding onto data, being wrong, or even lying, is far worse.
In fact, I wonder if the problem is as simple as whether or not to be transparent. Are there ways that the public could be included in decisions about when and how data should be made public? I’m not suggesting for one moment that the public should be involved in evaluating the data itself, but by involving them in helping to develop and implement clear principles, some trust might be built back into the system.
Those who addressed the question on the final panel appeared to broadly agree with my basic view, which is that no matter how hard it is in the short term, in the longer run more transparency will be helpful. It will both build a stronger foundation for trust in government systems and lead to more productive debates.
The trouble at the moment is that many of those who control the data are right at the sharp end of a polarised debate. It is understandably difficult for them to see ways in which they can increase transparency while supporting policy decisions that balance the various competing trade-offs. I think they need help so that they can develop the confidence that this can be done. In this way I hope they can start winning back the confidence of the public.
Picture credit: seanmichaelragan