In political terms, 2016 has been a year of uncertainty. Yet it has also seen the rising dominance of algorithms: step-by-step sets of instructions and calculations that computers follow, increasingly used in technology designed to predict, control and alter human behaviour.
Algorithms use the past as an indicator of the future. In themselves, they are neutral: they have no prejudices and no emotions. But algorithms can be programmed to be biased, or unintentional bias can creep into the system through the data they are built on. They also allow large corporations to make largely hidden decisions about how they treat consumers and their employees. And they allow government organisations to decide how to distribute services and even justice.
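To make that mechanism concrete, here is a minimal sketch in Python, using entirely hypothetical data and a hypothetical decision rule. An approval algorithm that does nothing but neutral arithmetic on past decisions still reproduces whatever prejudice those past decisions contained:

```python
# Hypothetical historical loan decisions: (postcode_area, approved).
# Suppose past human decision makers unfairly rejected applicants
# from area "B".
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def approval_rate(area):
    """Fraction of past applicants from `area` who were approved."""
    decisions = [approved for a, approved in history if a == area]
    return sum(decisions) / len(decisions)

def predict(area, threshold=0.5):
    """Approve a new applicant only if their area's historical
    approval rate clears the threshold."""
    return approval_rate(area) >= threshold

# The rule itself is neutral arithmetic, yet it perpetuates the
# bias baked into its training data:
print(predict("A"))  # True  - area A applicants keep being approved
print(predict("B"))  # False - area B applicants keep being rejected
```

Nothing in the code mentions prejudice, yet the output discriminates, because the past it learns from does.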
The danger of algorithms being used unfairly or even illegally has led to recent calls by the UK Labour party for greater regulation not just of tech firms but of the algorithms themselves. But what would tighter rules on algorithms actually cover? Is it even possible to regulate such a complex area of technology?
Algorithms are used by governments and corporations alike to try to foresee the future and inform decision making. Google, for example, uses algorithms to auto-fill its search box as you type and to rank the websites it lists after you hit return, directing you to certain websites over others. Self-driving cars use algorithms to decide their route and speed, and potentially even whom to run over in an emergency.
Financial corporations use algorithms to assess your risk profile and to determine whether they should give you a loan, credit card or insurance. If you are lucky enough to be offered one of their products, they will then work out how much you should pay for it. Employers use algorithms in the same way, to select the best candidates for a job and to assess their workers’ productivity and abilities.
Even governments around the world are becoming big adopters of algorithms. Predictive policing algorithms allow the police to focus limited resources on crime hotspots. Border security officials use algorithms to determine who should be on a no-fly list. Judges could soon use algorithms to determine the re-offending risk of an offender and select the most appropriate sentence.
Given the extensive influence algorithms now have over our lives, it’s not surprising that politicians would like to bring them under greater control. But algorithms are usually commercially sensitive and highly lucrative. Corporations and government organisations will want to keep the exact terms of how their algorithms work a secret, and they may be protected by intellectual property rights such as patents and confidentiality agreements. So regulating the algorithms themselves will be extremely difficult.
This hidden nature of algorithms might itself be a fruitful target for regulation. The law could be amended to force all companies and government agencies to publicise more widely the fact that decisions in the organisation will be made by way of an algorithm. But such an approach would only improve transparency. It would do nothing to regulate the algorithmic process itself. So the focus of regulation would need to shift to the inputs and outputs of the algorithm.
In the UK, the current law of judicial review would be enough to cover the inputting of data into algorithms by governmental bodies. Judicial review allows judges to assess the legality of decisions taken by public bodies. So judges could determine whether the data fed into an algorithm was correct, relevant and reasonable. The ultimate decision taken by the public body, based on the algorithm’s output, would also be subject to judicial review, which asks whether the final decision was proportionate, lawful and reasonable.