Who Is Responsible for Workplace Injuries in the New and Dynamic Frontier of AI?

The history of industrial revolutions has shown that technology can create prosperity — but only if balanced with worker protections.

The Fourth Industrial Revolution (4IR) is transforming workplaces at an unprecedented pace. Automation, artificial intelligence (AI) and robotics are redefining how we work, where we work, and even who — or what — does the work.

But as businesses embrace AI-driven efficiency, a crucial question lingers in the shadows: who is responsible when AI causes workplace injuries?

We have been here before. The First Industrial Revolution exposed workers to hazardous machinery without legal protection. The Second Industrial Revolution introduced electricity-powered factories, accelerating the rise of labour unions. The Third Industrial Revolution brought computers, improving ergonomics but also creating new health risks such as repetitive strain injuries.

Each revolution forced society to rethink workplace safety, expand social protections, and regulate emerging technologies.

Yet, as we enter 4IR, workplace safety laws and social insurance systems remain stuck in the past, designed for a workforce that is rapidly disappearing. The risk landscape has changed, and our policies must evolve to keep up. As Adriana Milea and Lucian-Ionel Cioca observe, “ensuring a high level of welfare and safety and health of workers favourably influences the productivity, associated costs, efficiency and success of the entire industrial system”.

The rise of invisible injuries

AI and automation are reducing traditional workplace accidents — machines do not suffer from fatigue, and robots do not trip over cables. But that does not mean the workplace is becoming safer. A new set of risks is emerging.

Consider the hidden injuries of 4IR. Mental health strains from constant surveillance and AI-driven performance monitoring are eroding worker morale. Repetitive strain injuries are becoming more common as workers spend long hours monitoring automated systems. AI-driven decision-making errors are creating unsafe working conditions, such as algorithms prioritizing speed over safety in manufacturing settings. And technical malfunctions in AI-powered robots are leading to unpredictable accidents, with no clear liability framework to determine who is at fault.

The problem is not just new types of injuries but also the blurring of legal boundaries in an AI-driven workplace.

Who is liable? The great responsibility void

Traditionally, if a worker slipped on a factory floor, liability was clear: the employer was responsible. But what happens when a worker is injured by a malfunctioning AI system? Is the employer at fault for deploying the AI? Is the AI developer liable for faulty programming? Or is the injury simply written off as an “unfortunate algorithmic error”?

This challenge is compounded by the fact that deep learning, the approach behind today’s most capable AI systems, delivers high accuracy at the cost of transparency, making it difficult to interpret how its decisions are made.

For example, in workplace injury assessments, an AI system may accurately predict risk factors or determine liability, yet provide no clear explanation of how it reached its conclusion, raising concerns about accountability and fairness.
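To make the transparency problem concrete, here is a minimal sketch (not drawn from any real workplace system): an opaque model scores injury risk from made-up shift features, and a post-hoc technique, permutation importance, is used to recover which inputs drove the score. The model, feature names, and data are all illustrative assumptions.

```python
# A minimal sketch (not the authors' method): an opaque model scores
# workplace-injury risk, then a post-hoc check recovers which inputs
# the model relies on. Feature names and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical features logged per shift: hours spent monitoring
# automated systems, line speed set by a scheduling algorithm, tenure.
feature_names = ["monitoring_hours", "line_speed", "tenure_years"]
X = rng.normal(size=(500, 3))
# Synthetic ground truth: risk rises with monitoring hours and line speed.
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0.5

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The prediction alone offers no reasons: the "black box" problem.
print("risk score:", model.predict_proba(X[:1])[0, 1])

# Permutation importance is one partial remedy: shuffle each feature
# and measure how much accuracy drops, exposing what the model leans on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Even this partial remedy only ranks inputs after the fact; it does not explain any individual decision, which is precisely the accountability gap the liability debate turns on.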

These grey areas in liability are dangerous because social insurance systems were not designed for AI-induced injuries. Most existing frameworks still focus on traditional workplace hazards — slips, falls, and chemical exposures. AI-driven risks do not fit neatly into these categories.

And then there’s the issue of remote work injuries. If an employee working from home in Germany is injured while performing AI-assisted tasks for a US-based company, whose labour laws apply? Workers’ compensation policies were built for physical workplaces, not a world where people “clock in” from anywhere.

The economic dilemma: who pays for AI-induced injuries?

Expanding social insurance to cover automation-related injuries will require significant funding. But who should foot the bill?

Employers deploying AI-driven automation should contribute to an insurance fund for workplace injuries caused by emerging technologies.

Governments must modernize social security systems to account for AI-related workplace risks, but relying solely on government funding could increase taxation burdens.

AI developers should also be held accountable. If an AI algorithm causes harm, should the software manufacturer be liable, just as a car manufacturer is liable for faulty brakes?

One possible solution is an AI Liability Fund, where companies deploying automation pay into a collective insurance pool, ensuring that AI-related injuries do not leave workers financially stranded.
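As a back-of-the-envelope illustration of how such a pool might be funded, here is a sketch in which contributions scale with each company’s automation exposure. The levy rate, company names, and figures are invented; a real scheme would be set by regulation and actuarial analysis.

```python
# A back-of-the-envelope sketch of an AI Liability Fund levy.
# All rates and figures below are hypothetical.
LEVY_RATE = 0.002  # assumed: 0.2% of automation-related payroll

# (company, annual payroll, share of work performed by automated systems)
companies = [
    ("Alpha Manufacturing", 40_000_000, 0.60),
    ("Beta Logistics", 25_000_000, 0.35),
    ("Gamma Services", 10_000_000, 0.10),
]

pool = 0.0
for name, payroll, automation_share in companies:
    # Contribution scales with exposure: heavier automation, higher levy.
    contribution = payroll * automation_share * LEVY_RATE
    pool += contribution
    print(f"{name}: contributes {contribution:,.0f}")

print(f"Total pool available for AI-related injury claims: {pool:,.0f}")
```

The design choice worth noting is that the levy tracks exposure rather than headcount, so the companies that shift the most work onto automated systems also carry the largest share of the collective risk.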

The global disconnect: cross-border AI injuries

The borderless nature of AI-driven work creates a regulatory nightmare. A worker in South Africa might be injured while remotely working for a UK-based employer. An automated AI system deployed in Japan could cause an injury to a gig worker in Canada. A worker in Australia could successfully claim compensation for tripping over a puppy fence while working from home, setting a precedent for remote work cases worldwide.

The harmonization of international labour laws will be crucial in addressing workplace injuries in AI-driven environments. Without global standards, companies may exploit regulatory loopholes, moving AI-dependent workforces to countries with weaker labour protections.

AI can be part of the solution

Despite the risks, AI itself can help improve workplace safety, if deployed responsibly. AI-powered wearables can track worker fatigue and predict injuries before they happen. Smart algorithms can monitor stress levels, ensuring that AI-driven performance tracking does not push employees to burnout.
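As a rough sketch of the wearable idea, the toy monitor below turns two hypothetical sensor readings into a rolling fatigue score and raises a rest alert when a threshold is crossed. The weights, readings, and threshold are illustrative assumptions, not clinically validated values.

```python
# A toy fatigue monitor: a rolling score triggers a rest alert before
# a threshold is crossed. Readings, weights, and threshold are invented.
from collections import deque

WINDOW = 3             # shifts considered in the rolling average
ALERT_THRESHOLD = 0.7  # hypothetical score that triggers intervention

def fatigue_score(heart_rate_var: float, hours_awake: float) -> float:
    """Toy score in [0, 1]: low heart-rate variability and long waking
    hours both push the score up. A real system would be clinically
    validated, not hand-weighted like this."""
    hrv_component = max(0.0, 1.0 - heart_rate_var / 100.0)
    hours_component = min(1.0, hours_awake / 20.0)
    return 0.5 * hrv_component + 0.5 * hours_component

recent = deque(maxlen=WINDOW)
# (heart-rate variability in ms, hours awake) per shift, made up.
shifts = [(80, 10), (65, 13), (50, 16), (40, 18), (30, 20)]

for hrv, hours in shifts:
    recent.append(fatigue_score(hrv, hours))
    rolling = sum(recent) / len(recent)
    if rolling > ALERT_THRESHOLD:
        print(f"ALERT: rolling fatigue {rolling:.2f}, schedule rest")
    else:
        print(f"ok: rolling fatigue {rolling:.2f}")
```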

However, these tools must be implemented with worker rights in mind, avoiding privacy violations and unethical surveillance.

A call for action: policy must catch up

We cannot wait for high-profile AI accidents before taking action. Just as previous industrial revolutions forced legal reforms, 4IR demands a new framework for workplace safety and social insurance.

Workplace safety laws must expand to include both physical and psychological risks related to AI-driven work environments. Liability rules must be strengthened, ensuring that employers, AI developers, and policymakers share responsibility for AI-induced injuries.

Workers’ compensation policies must be modernized to explicitly cover remote work injuries and automation-related accidents.

AI-driven safety innovations should be encouraged while protecting workers from AI overreach and surveillance abuse. International labour standards must also be developed for AI-related injuries to prevent regulatory loopholes in a globalized workforce.

The future of work must be safe and fair

If AI is to truly benefit humanity, it cannot offload risks onto workers while profits flow to corporations. The history of industrial revolutions has shown that technology can create prosperity — but only if balanced with worker protections.

The responsibility for AI-driven workplace safety does not rest on workers’ shoulders alone. Employers, policymakers, AI developers, and global institutions must come together to ensure that the future of work is not only efficient but also just, equitable, and safe.

It’s time to close the responsibility gap before it’s too late.

This article was first published by Daily Maverick. Read the original article on the Daily Maverick website.

Suggested citation: Letlhokwa George Mpedi and Tshilidzi Marwala. "Who Is Responsible for Workplace Injuries in the New and Dynamic Frontier of AI?," United Nations University, UNU Centre, 2025-02-13, https://unu.edu/article/who-responsible-workplace-injuries-new-and-dynamic-frontier-ai.