Regulation by and of Algorithms

Posted: 3 Apr 2016

Date Written: March 30, 2016

Abstract

An increasing range of important decisions are being made by algorithms, and an extensive literature has emerged on the extent to which they can be monitored and regulated by adapting the mechanisms that have been used for human behaviour, in particular by some form of transparency. But the limitations of this process are well-known. In attempting to regulate the behaviour of financial trading algorithms, for example, it has been shown: that the source code cannot adequately specify the behaviour of the algorithm in the absence of the training data used to educate it; that the persistence of patterns in training or historical data can blind even smart algorithms to ‘trend breaks’ and the emergence of crises, bubbles and herding; that the collective behaviour of coupled, communicating and interacting algorithms may be unpredictably different from their individual behaviour, even when they are all identical; and that small (even endogenous) changes in structure or transient feedback loops can lead to catastrophic disruptions in system behaviour. At the same time, algorithms are becoming ever more central, entrusted with vital decisions and even serving on corporate Boards of Directors (thus holding delegated authority). Automated processing of data and automated decision rules operating on these data are becoming ubiquitous, and in many settings sophisticated models capable of deep learning and machine intelligence are being out-competed by faster, simpler models whose structure gives no clue as to their collective behaviour and provides neither useful insight nor effective points of control. Moreover, the interaction of developments on the compute side with those on the network side is becoming ever more intricate. Even the ‘human layer’ is affected; individuals confronted with complex flows of high-speed data have no time to reflect and take responsibility, but have to react in increasingly ‘algorithmic’ ways in order to survive.
This raises a set of challenges to regulation that are particularly important in the telecommunications domain. First, it suggests that detailed data, aggregated and centralised, may not provide adequate oversight over dispersed actors; indeed, modern data analytics is becoming decentralised and even less transparent as a result. Second, it means that ‘incidental’ aspects of communication and computation system performance (like latency or attenuation) may seriously interfere with the performance of automated sensing and decision systems, and that this in turn may change the intensity and pattern of communications through networks. Third, it means that regulatory relationships, by which individual actors are held responsible for certain decisions or outcomes, may no longer provide effective governance. This paper draws on work being conducted at the Alan Turing Institute to analyse the growing importance of interacting systems of algorithms for communications networks and, through them, for a set of domain-specific examples (telemedicine, financial trading and power grids). It develops proposals for structural or ‘macro-prudential’ monitoring and regulatory mechanisms to apply to algorithmic systems, and identifies some principles that can usefully be developed when designing automated or algorithmic approaches to the regulation of human behaviour.

Keywords: algorithms, regulation, computation, network performance, telemedicine, algo-trading, smart grids, data analytics

JEL Classification: A1, C6, C7, D11, D21, D43, D52, D7, D8

Suggested Citation

Cave, Jonathan, Regulation by and of Algorithms (March 30, 2016). Available at SSRN: https://ssrn.com/abstract=2757701

Jonathan Cave (Contact Author)

University of Warwick ( email )

Gibbet Hill Rd.
Coventry, West Midlands CV4 8UW
United Kingdom
