As a policy professional, I’ve been wondering recently whether or not I could one day be replaced by an algorithm.
After all, the best guess is that 35% of jobs in the UK will be automated in the next 20 years, so why not mine? Other professions, like doctors, lawyers and journalists, have already been flagged as at risk.
I think it’s useful to try to answer this sort of question (selfish reasons aside) because it helps illustrate some of the public’s anxieties around automation.
Where to start?
The BBC have produced a tool where you can look up the likelihood of your job being automated based on research by Oxford University academics Michael Osborne and Carl Frey.
This research ranks jobs against nine key skills to assess likelihood of automation: social perceptiveness, negotiation, persuasion, assisting and caring for others, originality, fine arts, finger dexterity, manual dexterity and the need to work in a cramped work space.
The argument is that if you score highly on these criteria, you are less at risk. For example, social workers and nurses are unlikely to be automated, as a crucial part of those jobs is empathy, and that is difficult for machines.
So, here are the results of this analysis for the job categories that seem most similar to mine (there was no obvious category that I fell into):
The results are pretty wide ranging, and while the researchers have done their best to categorise the jobs in a methodical way, the reality is that every job is different, and there will be enormous variation within these categories.
What’s most useful in this sort of analysis is that it helps you think about the different tasks that make up your own job, and which could be automated in future – after all, while most jobs won’t disappear completely, they will certainly change.
However, this sort of approach also misses a key point: the question we need to ask is not simply what can be automated, but what should be automated. And it is this issue that is at the heart of whether or not I will one day be replaced by a robot.
Here it is useful to start from what we can actually ask an algorithm to do:
Take a credit assessment as an example. An algorithm can analyse your personal financial information against indicators of creditworthiness. This analysis can then serve as an input to decision making. But we can also take automation a step further, and simply have a computer decide whether or not you have passed a credit assessment on the basis of this analysis.
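The distinction between the two steps can be sketched in a few lines of code. This is purely illustrative: the indicators, weights and threshold below are invented for the example and bear no relation to any real scoring model.

```python
def credit_score(income: float, existing_debt: float, missed_payments: int) -> float:
    """Combine simple indicators of creditworthiness into a single score.

    The weights are hypothetical, chosen only to illustrate the idea.
    """
    score = 0.5 * min(income / 50_000, 1.0)              # reward income, capped
    score -= 0.3 * min(existing_debt / max(income, 1), 1.0)  # penalise debt-to-income
    score -= 0.1 * missed_payments                        # penalise missed payments
    return max(score, 0.0)


# Step one: automation as decision support.
# The algorithm produces an input; a person makes the call.
def assess_for_review(applicant: dict) -> dict:
    return {"score": credit_score(**applicant), "decision": "refer to human"}


# Step two: full automation.
# The program itself approves or declines, against a programmed threshold.
def assess_automatically(applicant: dict, threshold: float = 0.3) -> dict:
    score = credit_score(**applicant)
    return {"score": score, "decision": "approve" if score >= threshold else "decline"}


applicant = {"income": 40_000, "existing_debt": 10_000, "missed_payments": 1}
print(assess_for_review(applicant))
print(assess_automatically(applicant))
```

The difference between the two functions is a single line, but it is exactly the line where accountability shifts: in the second version, whoever chose the threshold has, in effect, made every future lending decision in advance.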
It is this second step that starts to raise some complicated issues around agency and control. This is because it feels like some decisions require human judgment, and shouldn’t simply be handed over to a computer.
Perhaps this is because when a decision is automated, it obscures who is responsible for setting the parameters of the decision (i.e. for programming the algorithm), and so makes holding them accountable more difficult. And then there is the complexity of codifying difficult decisions in algorithms (e.g. should our driverless cars be programmed to kill some people in order to save a larger number when faced with an unavoidable crash?).
So, will I still have a job in twenty years?
A significant part of policy making is subjecting data to analysis, and hopefully technology will make this easier and faster (it already has).
But there is another part that involves judgment and decision making. While it might be possible to automate much of that in the future too, we will need to proceed cautiously, with appropriate attention to transparency around the decision-making criteria and to ways in which those responsible for those criteria can be held accountable.
So perhaps policy makers aren’t in danger of being replaced by algorithms; rather, we’ll all just be writing policy about when and how best to use algorithms.