The invisible labour behind 'intelligent' machines

Tech leaders compare AI’s electricity demand to the energy needed to ‘train a human’. In doing so, they judge people and server racks by the same dehumanising efficiency metric. Soumi Banerjee and Mo Hamza explain how this logic is most brutally realised in planetary AI supply chains: in the hidden work that makes ‘intelligent’ machines seem autonomous

Can you reduce a human to an energy bill?

At the AI Impact Summit 2026 in New Delhi, a delegate challenged OpenAI CEO Sam Altman about AI’s soaring energy demands. His response? 'It takes a lot of energy to train a human': 20 years of life and all the food you eat before you become intelligent.

Altman's remark sounds like common sense – intelligence does, of course, come at a cost – but it smuggles in a worldview. Accepting the comparison lets us evaluate a living person and a server rack by the same standard: efficiency. If AI answers faster than I do, am I inefficient? If a machine writes essays, composes music, and generates images, are we just the beta version of intelligence?

A machine processes prompts; a person processes pain. A machine predicts patterns; a person builds purpose. Reducing humans to energy users shrinks us. That shrinkage affects how we speak about ‘the human’, and how we treat the people who keep AI systems running.

The grand illusion of autonomy

We are told AI is autonomous, ‘frictionless’ technology that thinks, sees, and decides. Marketeers promote smart cities, automated welfare checks, and predictive policing as developments that will secure our future. Pull back the curtain, and much of this ‘machine intelligence’ is the exhaustion of human attention.

We are told AI is autonomous, but behind the sleek interface lies a vast workforce performing ghost work: repetitive tasks that computers struggle with

But behind the sleek interface lies a vast workforce performing repetitive tasks that computers struggle with. These people decide whether an image contains a pedestrian, whether a phrase constitutes harassment, whether a video is violent, and whether a facial expression signals ‘anger’ or ‘threat’. This hidden workforce performs ghost work – the human labour that sustains systems marketed as automated.

From security to ‘resilience’

Governments once promised security: protection through welfare, labour regulation and public services. Today, they have replaced that promise with ‘resilience’, which demands constant adaptation. Portraying the world as permanently unstable shifts the burden of survival onto individuals, households and workers. The resilient subject is expected to accept insecurity as normal, and manage it privately. Nowhere is this more visible than in the labour that trains AI.

The ‘responsibilisation’ of the precariat

Before a model can ‘learn’, people must produce ‘ground truth’: labelled data that turns messy reality into machine-readable categories. Workers draw boxes around bodies, tag faces, transcribe audio, rate search results, and filter content. They are the ‘human in the loop’.

India is a major hub. Industry reports based on NASSCOM estimates suggested that in 2021, about 70,000 people worked in data annotation in India, and that the total Indian market was worth around US$250m. Much of the demand comes from the United States.

Before a model can learn, data annotators must turn messy reality into machine-readable categories. India is a major hub for this labour, with many workers on short-term contracts

Experts predict that the Asian data annotation market as a whole (which includes India) will exceed US$860m in 2025. India accounts for a significant share of that demand. Workers often undertake this labour through short-term contracts and platform-based tasks. This creates a neoliberal labour relationship: it treats workers as flexible inputs expected to self-manage stress, uncertainty and exposure, while pushing the system’s risks onto them.

Automating inequality

Data annotation is often described as neutral ‘technical’ work, but it is profoundly political. Before an AI system can detect a riot, a threat, or suspicious behaviour, humans must first be trained to recognise these categories and reproduce them at scale. This aligns workers’ judgment with the security anxieties and cultural assumptions of distant clients rather than local realities.

Workers learn where the invisible line lies between ‘normal’ and ‘suspicious’, based on standards that may be culturally inconsistent or reductive. Annotators collapse complex identities into rigid administrative labels demanded by models.

This is discipline: training humans to see the world in machine-processable ways. Algorithmic bias does not live only within models; the labour regimes and classification schemes that decide what counts as a threat, a face, a body, or a person reproduce it along the supply chain.

The dark heart of the AI dream

AI's sleek interface is not a sign of a less violent world; it merely means that somebody else absorbed that violence. Equidem’s 2025 investigation into data labellers and content moderators documents severe occupational and psychological harms in the global digital supply chain. These harms include exposure to violent and abusive content, weak protections, and limited avenues for redress.

Calling this ‘outsourcing’ misses what is extracted. This is a contemporary form of colonial extraction: value drawn from cognitive labour under structural insecurity, routed upwards to firms insulated from costs. The result is a necropolitical global division of labour in which some bodies are more expendable, and which normalises slow harms as the price of digital convenience.

What accountability would look like

Responsible AI requires a broader perspective. Energy matters; the International Energy Agency estimates that data centres accounted for about 1.5% of global electricity consumption in 2024, and projects that their consumption will rise sharply by 2030.

But kilowatts are not the only cost. Accountability must include transparency about where data work is done and under what conditions; enforceable labour standards throughout subcontracting chains; limits on exposure to traumatic content; fair wages; stable contracts; collective bargaining rights; oversight of procurement; paid breaks and mental health support; and meaningful channels for grievance. It should include democratic energy planning: aligning data-centre growth with decarbonisation and local consent, rather than treating expansion as an unquestionable 'AI age' necessity. Consumers and policymakers should demand that claims of autonomy be qualified by disclosures about human labour and environmental impact.

Consumers and policymakers should demand that claims of autonomy be qualified by disclosures about human labour and environmental impact

We are not arguing for romanticising humans or demonising machines. But we do reject a worldview that treats people as interchangeable fuel. When defending AI reduces humans to mere 'energy users', the technology consumes not only electricity, but our moral vocabulary for what a life is worth.

This article presents the views of the author(s) and not necessarily those of the ECPR or the Editors of The Loop.

Contributing Authors

Soumi Banerjee, PhD Candidate, School of Social Work, Lund University
Mo Hamza, Professor of Risk Management and Societal Safety, Lund University
