Henry Fraser reviews We, the Robots? Regulating artificial intelligence and the limits of the law by Simon Chesterman
Free Article: No
Contents Category: Science and Technology
Review Article: Yes
Show Author Link: Yes
Article Title: The people, not the robots
Article Subtitle: Key issues in the regulation of AI
Online Only: No
Custom Highlight Text:

The age of Artificial Intelligence (AI) has arrived, though not so much an age of sentient robots as one of ubiquitous data collection and analysis fuelling automated decisions, categorisations, predictions, and recommendations in all walks of life. The stakes of AI-enabled decision-making may be as serious as life and death (Spanish police use a system called VioGén to forecast domestic violence) or as trivial as the arrangement of pizza toppings.

Featured Image (400px * 250px):
Alt Tag (Featured Image): Henry Fraser reviews 'We, the Robots? Regulating artificial intelligence and the limits of the law' by Simon Chesterman
Book 1 Title: We, the Robots?
Book 1 Subtitle: Regulating artificial intelligence and the limits of the law
Book Author: Simon Chesterman
Book 1 Biblio: Cambridge University Press, $73.58 hb, 309 pp
Book 1 Readings Link: booktopia.kh4ffx.net/gbgqPg
Display Review Rating: No

Simon Chesterman’s new book, We, the Robots?, takes on the challenges that AI poses to law and regulation. An eminent Australian legal scholar with a background in the study of international law and public authority, Chesterman examines the capacities and limitations of existing regulatory tools, and imagines new institutions that might fill the gaps.

One of the book’s many virtues is the clarity with which it frames the challenges in question. There is a tendency, as Chesterman notes, to anthropomorphise ‘intelligent’ machines, attributing to them a degree of agency or even sentience that is not (yet) warranted. The literature on AI and regulation is littered with vague misgivings about the law’s incapacity to hold humans responsible for ‘unforeseeable’ harms caused by autonomous robots. Likewise, there is an abundance of speculative schemes for recognising AI systems as ‘legal persons’, like corporations, that can own property, sue, and be sued. Chesterman is sceptical of this kind of AI exceptionalism. In equal measure, he is optimistic about the capacity of existing laws to adapt and of new institutions to fill regulatory gaps.

In this respect, the book’s title is perhaps misleading. Evoking American constitutionalism, it suggests a sci-fi future in which sentient robots assert their rights. The subject of ‘constraining superintelligence’ occupies fewer than six pages in a twenty-eight-page chapter on ‘Personality’. It is not that Chesterman dismisses the possibility of super-intelligent AI – he even ventures a few thought experiments about how new entities of this kind might be embedded into the legal system. Rather, he prefers to focus on the pressing questions posed by AI technologies that are already in use.

These, he understands, are questions about us: we, the people, not the robots. ‘[T]he problem with autonomy,’ he writes, ‘is not some mysterious quality inherent in the AI system. Rather it is a set of questions about whether, how, and with what safeguards human decision-making authority is being transferred to a machine.’ Chesterman is interested in who should bear the risks of bad decisions by AI systems; how we can ensure that AI-enabled government decision-making is legitimate; and, perhaps the most interesting question: ‘whether there are classes of decisions for which a human being must not only be able to take responsibility but actually be responsible’.

These three themes – risk, legitimacy, and morality – are the glue that holds the book together. Chesterman covers an enormous amount of ground, deftly traversing more than a dozen areas of law. Each chapter begins with an anecdote about people: how they have used AI, how they have been affected by it, or what human nature demands of it. A story about medieval pig trials, for example, illustrates our fundamental need to lay blame on a moral agent for harm. The anecdotes keep the stakes of AI regulation firmly in view and ease the general reader into a book that is necessarily heavy on legal detail.

For the reader with a more developed interest in AI or law, the book does a superb job of mapping and organising key issues in the regulation of AI. But it is more than a synthesising exercise. What Chesterman propounds is a typology of automated decisions, with different ethical and legal requirements applying to each category in the typology.

In order effectively to regulate AI, Chesterman argues, the law will need to allocate risks in a way that minimises harm, using civil liability regimes like negligence, product liability, and insurance. For some decisions, however, it is not enough to manage risk. ‘[T]here are certain public functions,’ Chesterman writes in a characteristic turn of phrase, ‘that should not be outsourced at all, as their legitimacy requires that they not merely be attributable to a human, but actually be performed by one.’ This rubric applies to decisions by the executive and judicial branches of government. In other cases, ‘shared morality requires that human actors be held to account’ for the decisions they delegate to machines, regardless of whether they could predict them in advance.

Occasionally, the typology feels too neat. The claim that ‘shared morality’ will be able to tell us which decisions should only be made by humans, and never by machines, does not quite satisfy. It’s hard to disagree with a prohibition on delegating the decision to kill a human being in war to a machine. Still, curious readers may wonder where exactly the line is to be drawn, especially in domains where AI appears to make better decisions than humans. Aren’t there circumstances when the utility of an AI system might outweigh qualms about legitimacy or morality?

To this objection, Chesterman might make the reasonable rejoinder that red lines are still being drawn. Fittingly, the final part of the book reflects on how norms around AI should develop globally, through the coordinated actions of scientists, states, businesses, and international organisations.
