- Free Article: No
- Contents Category: Artificial Intelligence
- Custom Article Title: ‘AI will kill us/save us: Hype and harm in the new economic order’
- Review Article: Yes
- Article Title: AI will kill us/save us
- Article Subtitle: Hype and harm in the new economic order
- Online Only: No
- Custom Highlight Text:
Ilya Sutskever was feeling agitated. As Chief Scientist at OpenAI, the company behind the AI models used in ChatGPT and in Microsoft’s products, he was a passionate advocate for the company’s mission of achieving Artificial General Intelligence (AGI) before anybody else. OpenAI defines AGI as ‘highly autonomous systems that outperform humans at most economically valuable work’, the development of which will benefit ‘all of humanity’. OpenAI’s mission, Sutskever believed, gave humanity its best chance of getting to AGI safely. But he worried about failing the mission. He fretted to his colleagues: What if bad actors came after the company’s technology? What if they cut off his hand and slapped it on a palm scanner to access its secrets?
- Book 1 Title: Empire of AI
- Book 1 Subtitle: Inside the reckless race for total domination
- Book 1 Biblio: Allen Lane, $55 hb, 496 pp
- Book 1 Cover Small (400 x 600):
- Book 1 Cover (800 x 1200):
- Book 1 Readings Link: https://www.readings.com.au/product/9780241678923/untitled-340983--author-301767--2025--9780241678923#rac:jokjjzr6ly9m
- Book 2 Title: The AI Con
- Book 2 Subtitle: How to fight Big Tech’s hype and create the future we want
- Book 2 Biblio: Bodley Head, $36.99 pb, 288 pp
- Book 2 Cover Small (400 x 600):
- Book 2 Cover (800 x 1200):
- Book 2 Readings Link: https://www.readings.com.au/product/9781847928627/the-ai-con--emily-m-bender-alex-hanna--2025--9781847928627#rac:jokjjzr6ly9m
An award-winning journalist and former reporter for The Wall Street Journal and senior editor for AI at MIT Technology Review, Karen Hao now trains other journalists to report on artificial intelligence. Underpinning many of the stories in Empire of AI is her impressive level of access to corporate insiders, including more than 150 interviews with over ninety current and former executives and employees at OpenAI. These interviews, and the many private documents she viewed, inform the book’s vivid and credible scenes showing the unfolding dramas as they happened at the company, including the short-lived ousting of Sam Altman from the role of CEO. Throughout her book, Hao traces evolving conflicts between OpenAI’s stated aim of developing broadly beneficial artificial intelligence, the manipulative and deceitful behaviour of Altman as seen from the perspective of ex-colleagues and former board members, and actual harm resulting from the actions of the company.
In 2019, academic researcher Shoshana Zuboff’s influential book The Age of Surveillance Capitalism defined its object of study as a ‘new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales’, an order that is ‘marked by concentrations of wealth, knowledge, and power unprecedented in human history’. That is, an imperial economic order. The new wave of research on artificial intelligence as a colonial and imperial enterprise, Hao notes in her book, led her to broaden her initial focus beyond OpenAI.
The book shuttles back and forth between the innermost circles of the OpenAI empire, where billionaires incubate sci-fi visions of the future, and its periphery, where the hardest and dirtiest work of building massive AI models gets done, in countries such as Kenya, Venezuela, Uruguay, and Chile. Hao travels to interview workers from these countries. The dual vision this brings to the book is ambitious, but essential. Moving from the AI centre to its peripheries and back, the reader viscerally connects the technology’s shiniest visions to its horrifying harms in geographically far-flung places. The subtitle of the UK edition promotes this expanded focus: ‘Inside the reckless race for total domination’.
The harms that Hao uncovers are hard to forget. In the chapter ‘Disaster Capitalism’, Hao describes the acute mental distress experienced by a Kenyan content moderator named Mophat Okinyi, who worked on training data for OpenAI and endured daily exposure to textual depictions of child sexual abuse and other appalling forms of violence. Okinyi was contracted to review fifteen thousand pieces of this content per month. Big AI companies such as OpenAI train their AI models on the entire contents of the internet, then develop special models to filter out harmful content before it surfaces for users in more affluent parts of the world. Outsourcing this work to impoverished workers such as Okinyi, via a data company called Sama, was OpenAI’s cheap solution for developing those models.
In the same chapter, Hao describes how other data companies, such as Appen and Scale AI, which undertake high-volume, high-pressure, ‘crowdsourced’ data and content moderation work for tech giants, set up their platforms to micromanage, surveil, and block workers in poorer countries who do not perform with the speed, consistency, or expertise expected on these tasks.
Here, I must declare a past entanglement. I worked for seventeen years at Appen in its language division. This was not the division whose cat-and-mouse worker processes and policies are described in troubling detail in Empire of AI. Yet, what I saw from a distance supports Hao’s contentions. Managing the ‘crowd’ was increasingly done with the help of built-in algorithms. These platforms and labour practices, as Hao describes them, treated workers as ‘disposable’. Finding ‘an immiserated pool of labor that will do piecework under almost any conditions’ and shifting from country to country in search of the optimal pool, has become part of the default business model of many AI data companies. At the top of the chain, Big AI can maintain a clean appearance, while applying financial, temporal, and quality thumbscrews to outsourced data workers through the contract conditions agreed with data companies.
Another set of AI-related harms reported by Hao are communal and environmental. Visiting Quilicura, an impoverished and drought-affected neighbourhood in Santiago, Chile, which houses a Google data centre using scarce potable water to cool its servers, Hao finds the promise of compensatory green space for local residents unfulfilled, the neighbourhood blocks ‘piled high with refuse’. Elsewhere in Chile, in a community called Cerrillos, Hao reports that Google had at first proposed a data centre that would use ‘one thousand times the amount of water consumed’ by the community’s entire population. This, in a country suffering from a fifteen-year, ongoing megadrought.
Chilean President Sebastián Piñera delivering a speech during the announcement of the expansion of Google's Chilean data centre, Quilicura, Chile, 2018 (Image/Alamy)
Closer to home, Hao spends many pages on the familial issues raised in public by Sam Altman’s younger sister, Annie Altman. Drawing a sympathetic portrait of Annie, who in 2021 alleged childhood sexual abuse by Sam, Hao compiles extensive source material from Annie’s recollections, her therapist’s notes, and the family’s communications, including her brother’s, piecing it together like a jigsaw. She leaves the reader to draw their own conclusions from the fit of the pieces.
Emily Bender and Alex Hanna’s book, The AI Con, overlaps with Empire of AI in its critique of the prevalent AI doomer/boomer ideologies (AI could kill us all; AI will save us all). Emily Bender is Professor of Linguistics at the University of Washington and has a deep knowledge of natural language processing methods. She is a prominent critic of claims that artificial intelligence can genuinely mimic human communication. Alex Hanna is a sociologist and Director of Research at the Distributed AI Research Institute (DAIR), having previously worked on Google’s Ethical AI team. They wrote the book as an extension of their existing collaboration on a podcast called Mystery AI Hype Theatre 3000.
Another common thread between the two books is the nature of their authors’ intentions. Both books seek to intervene in the current trajectory of AI development – to throw a book-shaped spanner in the works. Each book, however, foregrounds a different tactic. Hao’s is exposure and awareness-raising, making visible the hidden connections between power and abuse. Bender and Hanna’s tactic is ridicule, debunking and correcting technological myths, which they do from a position of technical, linguistic, and sociological knowledge.
Bender and Hanna highlight what they describe as a ‘hype cycle’, arguing that it is a key enabler of the excesses of AI scaling, investment, data extraction, and implementation in unsuitable contexts. Billion-dollar investments require billions of adopters to ensure returns for investors. FOMO, the fear of missing out, is a powerful driver of adoption. So far, so serious. But where Hao foregrounds the ugly machinery of imperial AI practices, Bender and Hanna laugh at the emperor’s imaginary clothes. ‘A characteristic’, they write, ‘of our current hype cycle is that the con men are taking a series of tropes from science fiction … and injecting them into discussions at the highest echelons of business and government.’ Fantastically optimistic visions (think of OpenAI’s mission to ‘benefit all of humanity’) ‘are useful to those creating the technology because it makes them appear powerful – if not godlike’. God-like men attract investment.
The AI Con opens with a chapter on AI text and image generation systems and their training process. Bender was a co-author of the famous ‘stochastic parrots’ paper, which argued that chatbots lack human understanding and communicative intent. The AI Con refers to AI generation models as ‘synthetic text extruding machines’ and ‘media synthesis machines’ to underline the point. As ChatGPT took off, Sam Altman tweeted: ‘i am a stochastic parrot, and so r u’. Such remarks belittle the complexity of human cognition. ‘In this line of argumentation,’ Bender and Hanna respond in the book, ‘humans can be reduced to our outputs and the ways in which we interact with our environment, with people, and with written and visual production.’ In the same chapter, the authors draw a direct line from the definitions of ‘intelligence’ that underlie the phrase ‘artificial intelligence’ to their racist, ableist, and eugenicist roots. These roots continue to expand underground through the influence of Elon Musk and many other equally powerful, but less visible, financial backers of Big AI, such as Peter Thiel and Marc Andreessen.
In the chapters that follow, The AI Con addresses the automated decision-making systems used in social services (‘Automation in the name of efficiency here only makes the government more efficient at harming families’); the development of AI systems to replace creative, scientific, and journalistic tasks; the doomer and boomer ideologies that drive AI development and investment; and ‘strategies to combat AI hype, such as robust regulation, data and privacy legislative proposals, and strong worker protections’. Some of the arguments presented have been made in other forums, but they are usefully (and entertainingly) consolidated here. It sounds fantastical when the authors describe a future workplace meeting ‘filled with AI agents’ representing workers, speaking generated text ‘to be heard by no one’. But Microsoft researchers are in fact pursuing ‘the idea of a Ditto – an agent that visually resembles a person, sounds like them, possesses knowledge about them, and can represent them in meetings’, as described in a 2024 research paper. Zoom is interested too.
Both Empire of AI and The AI Con underscore that their authors are not AI technophobes who want to stop the clock. Rather, they are realists who are clear on how the current path of hyper-scale AI development is leading to terrible abuses of economic power and the irreversible degradation of environments and communities. They are aware, too, that ridicule and exposé can only go so far in promoting resistance to this kind of artificial intelligence. There is, both acknowledge, a third and complementary type of intervention – building genuine alternatives to the products and services of Big AI. Both conclude on a similar note of cautious optimism by outlining alternative technological development paths.
In their final chapters, both books refer to the same organisation: Te Hiku Media. Why? This Māori-led organisation, which uses AI-driven speech recognition technologies to transcribe speech in te reo Māori, stands as a genuine alternative model to Big AI services and products. Te Hiku Media embeds community consent deeply in its development. It brings technical AI knowledge to the Māori community, while protecting the community’s rights to its language and cultural data with a licence that invokes the Māori principle of guardianship, kaitiakitanga. Genuinely alternative forms of artificial intelligence, such as Te Hiku Media’s, have different success criteria from the scaling paradigm of Big AI systems: they enable data privacy, data sovereignty, and local data processing at low energy cost and with minimal climate impact, with benefits flowing back to the communities whose data is offered with consent and full understanding, not extracted by stealth or through imperial labour practices.
Big AI is now embedded in most spheres of life: personal computing, social media, shopping, educational institutions, finance, and government. There is indeed a sense of inevitability about all of this. But Hao, Bender, and Hanna remind us that this sense is not the historical force of unstoppable progress. Neither is it the foregone conclusion of human evolution towards machine cognition, as proposed by transhumanist AI developers. Rather, making it seem inevitable are the multi-billion-dollar investments, the billion- and trillion-dollar company valuations, and the governments and organisations desirous of inducements offered by Big AI – enabling, in the process, the worst excesses of AI-driven exploitation and extraction.
Reading Hao’s book, it may be tempting to see the imperial machinery of Big AI as impinging only on countries and workers in the Global South. That would be a mistake. ‘The empire’s devaluing of the human labor that services it is also just a canary,’ Hao argues. She warns that it foretells ‘how the technologies produced atop this logic will devalue the labor of everyone else’. Creatives and coders are just the first wave to be affected, she says.
The Australian Productivity Commission recently recommended that Australia should consider further reducing its protections on personal data and copyrighted works, allowing data mining and extraction for AI model training as an exception to other, more highly regulated, uses. Reading Empire of AI, I became aware of how little the Commission’s proposal differs in kind from the acts of colonial submission described in the book. Consider the Kenyan government, ‘hungry for foreign investment’, allowing foreign AI companies to run content moderation operations on its soil that devastate its citizens; or the Chilean government, ‘tethered to the extraction economy’, inviting AI companies to build data centres using vast quantities of land and water, to the shock and detriment of its local communities; or the Zimbabwean government, permitting the extraction of its citizens’ biometric data without their consent, in exchange for the (unfulfilled) promise of funding a digital identification system to support fair elections. Such examples offer clear warnings and incentives to consider how artificial intelligence could be imagined and built in other ways. Life as an imperial outpost for the extractive ambitions of American Big AI companies might not be the future that we dream of for ourselves and our children. Reading these two books is essential to understanding why.