Published on March 15, 2012

Understanding the precautionary principle*

By Simon Burall

Simon Burall is the Director of Involve. He has extensive experience in the fields of democratic reform, governance, public participation, stakeholder engagement, and accountability and transparency.

[Image: caution signs in the shape of people surrounding uneven pavement]

This is the first in a short series of posts pulling together my thoughts from a recent conference on Risk and Uncertainty. This first post focuses on the Precautionary Principle and its application to public engagement. 

One of the biggest challenges I think government faces is communicating risk and uncertainty to the public in a way that fosters constructive debate about the trade-offs inherent in many different areas of decision-making. So when the Cambridge University Centre for Science and Policy (CSaP) invited me to an event on Risk and Uncertainty, I jumped at the chance to learn more. You’ll find some of the presentations at the end of that second link if you’re interested.

This is the first of two posts (second coming here) capturing a few of the thoughts that occurred to me during and after the event. Given Involve’s focus, I’m not attempting to do justice to the event as a whole, but rather to reflect on those aspects that have relevance for public engagement, particularly the policy implications of science and technology.

This first post is more by way of background and charts the shift in my understanding of the precautionary principle, although I do identify a couple of ways in which a better understanding of the principle might help determine when it is appropriate to involve the public in policy decisions.

The day kicked off with a session on the Precautionary Principle. None of the speakers was particularly enamoured with the concept. The chair of the session, (Lord) Phil Willis, noted that there is no accepted definition of the term, and yet it is used in a number of different places within EU law. This is clearly a problem.

The misapplication of the precautionary principle

Professor David Salisbury is Director of Immunisation at the Department of Health. He spoke strongly against misapplication of the principle in the area of childhood vaccination. He highlighted a number of cases where rare risks of harm were identified and used to push for vaccination programmes to be stopped. In some of these cases there wasn’t enough evidence about the risks associated with the vaccination to allow them to be balanced against the harm of stopping vaccinating children. His main point was that in the absence of evidence the precautionary principle should not be used.

The weekend after the conference I was talking about it to a friend who works in the financial sector. He said much the same thing in a different way. When the precautionary principle is useful it allows you to balance different risks – but this is just the economic tool of cost-benefit analysis. Where you don’t have enough information to balance risks, the precautionary principle becomes meaningless – or, as Salisbury seemed to be saying, actively dangerous.
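To spell out that logic (my own sketch, not something said at the event): cost-benefit analysis frames a precautionary decision as a comparison of expected harms. Writing $p_{\text{act}}$ and $H_{\text{act}}$ for the probability and size of harm if we act (say, continue vaccinating), and $p_{\text{stop}}$ and $H_{\text{stop}}$ for the harm if we stop, the rule is roughly

\[ \text{stop only if } \; p_{\text{act}} H_{\text{act}} > p_{\text{stop}} H_{\text{stop}} . \]

If one side of that inequality cannot be estimated at all, the comparison collapses – which is the sense in which the principle becomes meaningless without evidence.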

Salisbury noted that any clinical trial is going to be relatively small compared to the roll-out of a drug or vaccination to the whole population. A trial can never pick up a 1 in 100,000 risk, for example. To hammer home the point: because we can’t know the full risk even after a vaccination trial has taken place, a misapplication of the precautionary principle would say that we shouldn’t license the vaccine. However, this would be the wrong thing to do, because we don’t have enough evidence to balance the risk of not vaccinating against the risk of vaccinating. Instead we must monitor and evaluate the risk at regular intervals to allow us to make informed judgements about safety.
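As a rough illustration of the trial-size point (my own back-of-the-envelope arithmetic, not Salisbury’s figures): if an adverse event occurs at a rate of 1 in 100,000, the probability that a trial of $n$ participants sees no cases at all is approximately

\[ P(\text{no cases in trial}) = \left(1 - \tfrac{1}{100\,000}\right)^{n} \approx e^{-n/100\,000} , \]

so even a large trial of 10,000 people has roughly a 90% chance ($e^{-0.1} \approx 0.90$) of seeing zero cases of a risk that would affect around 600 people in a population of 60 million.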

This discussion resulted in quite a large shift in my understanding of the application of the precautionary principle, and at first sight it appears to argue against using the term at all, in favour of the concept of cost-benefit analysis.

The limits of the precautionary principle

Another of the speakers, Professor Peter Sammonds, looked at the use of the precautionary principle in the context of natural hazards impacting on critical infrastructure. He took the recent earthquake and subsequent tsunami in Japan as his main case study.

He has visited the area affected by the tsunami and showed pictures of some of the walls built to protect districts from tsunamis. He said it is inconceivable that even bigger walls could be built (the costs and engineering problems would be too great), and yet historical records – sediment and documentary – show that Japan has suffered even larger tsunamis than the one that hit in March 2011. While the precautionary principle might suggest engineering your way out of the risk, this may not in fact be possible or desirable.

He noted that despite the destruction of physical infrastructure, surprisingly few people lost their lives. This was in part down to the efforts of the army, following a plan developed in advance. A critical element in its success was that the population knew what to do in the event of a tsunami, as was the fact that the army was able to follow the plan flexibly, without waiting for orders from the centre (which were taking days to arrive anyway).

His main point was that there are other things you can do in response to a predicted risk beyond identifying a technical solution; these could be political, social and institutional. I took away from this that many of these actions will require the willing involvement of the public, and I suggest that it will be just as important to involve the public in identifying and implementing responses to risks as it is to carry out technical assessments of solutions.

While I learnt a lot in this session, there was much about it that I found unsatisfactory. However, I’ll save that for a later post. For now I’ll just finish with the final words of the Chair, Phil Willis:

[Embedded tweet: https://twitter.com/#!/sburall/status/177734796682866688]

For me this highlights the point that different people may be affected differently by a particular risk. It’s impossible to identify one response that will work for everyone. As a result, technical solutions are only part of the answer, and finding ways to engage the public in making the required trade-offs is critical.

*This post is based on my tweets from the CSaP event, but wouldn’t have been possible without @Commutiny doing a quick and dirty capture of the tweets straight after the event – thanks Roxanne!

Picture credit: alykat
