
AI is Not a High-Precision Technology, and This Has Profound Implications for the World of Work

If AI is too precise, it may be overfitting by memorizing rather than learning, which has significant implications for the future of work.

In the grand narrative of technological progress, artificial intelligence (AI) has been hailed as a transformative force, poised to revolutionise industries and to perform, with increasing accuracy, tasks once impossible for machines.

From predicting interstate conflicts to diagnosing disease in healthcare, AI is deployed in areas where precision, efficiency, and reliability are paramount.

Yet, an uncomfortable truth lurks beneath the surface: AI is far from a high-precision technology. In fact, if a model fits its training data too precisely, it is considered flawed, because it is memorising instead of learning, a phenomenon called overfitting. This inherent imprecision in AI systems has significant implications for the world of work, especially in sectors that rely on human judgement, flexibility, and adaptability.
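To make overfitting concrete, consider the toy sketch below. It is illustrative only: the synthetic data, noise level, and polynomial degrees are arbitrary choices, not drawn from any real system. A flexible, high-degree model reproduces its training points almost perfectly, yet does worse than a simple one on unseen data; it has memorised rather than learned.

```python
# Toy illustration of overfitting: a model that is "too precise" on its
# training data has memorised the noise and generalises poorly.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a simple linear trend plus random noise.
x_train = np.linspace(0, 1, 12)
y_train = 2 * x_train + rng.normal(scale=0.2, size=x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test + rng.normal(scale=0.2, size=x_test.size)

for degree in (1, 11):  # a simple fit vs. one flexible enough to hit every point
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE = {train_mse:.4f}, test MSE = {test_mse:.4f}")

# Expected pattern: the degree-11 fit drives its training error towards zero
# but pushes its test error up, because it is fitting noise, not the trend.
```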

One critical framework for understanding AI’s limitations comes from the statistician George Box, who famously said, “all models are wrong, but some are useful”. Box’s insight, aimed initially at statistical models, is especially relevant in AI, which relies on models to predict, classify, and make decisions.

AI systems, especially those based on machine learning, are ultimately just models — approximations of reality, built on often incomplete, biased, or overly simplistic data. These models are “wrong” because they can never fully capture the complexity of the real world, but they can still be “useful” when deployed with an understanding of their limitations.

Understanding these limitations can empower us to use AI more effectively and responsibly.

Probabilistic systems

Despite the hype, AI technologies — particularly those based on machine learning — are probabilistic systems. They rely on patterns and probabilities to make decisions rather than exact, deterministic rules.

For example, a machine learning algorithm trained to identify cancerous tumours from medical images can be incredibly effective in many cases, but it still has a margin of error. Sometimes, it misclassifies benign growths as malignant, or vice versa.
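The sketch below shows, in miniature, why such errors are built in. The probabilities and labels are invented for illustration; real diagnostic models are far more complex, but the structure is the same: the model outputs a probability, a threshold turns it into a decision, and moving the threshold only trades one kind of mistake for another.

```python
# Hypothetical classifier outputs: (estimated probability of malignancy, true label).
# These numbers are invented for illustration, not real medical data.
predictions = [
    (0.95, "malignant"), (0.80, "malignant"), (0.40, "malignant"),
    (0.10, "benign"), (0.20, "benign"), (0.65, "benign"),
]

THRESHOLD = 0.5  # decision rule: flag as malignant above this probability

false_negatives = sum(
    1 for p, label in predictions if label == "malignant" and p < THRESHOLD
)
false_positives = sum(
    1 for p, label in predictions if label == "benign" and p >= THRESHOLD
)

print("malignant cases missed:", false_negatives)             # 1 (the 0.40 case)
print("benign cases flagged as malignant:", false_positives)  # 1 (the 0.65 case)
# Lowering the threshold catches the missed tumour but flags more benign
# growths; raising it does the reverse. Some margin of error always remains.
```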

This lack of precision can have serious consequences, especially in healthcare, where mistakes can be life-threatening. It’s crucial to be aware of these potential risks and to approach AI with caution and a critical eye.

An instructive comparison here is nuclear technology, a field that represents the gold standard for precision and control. Nuclear technology operates within incredibly narrow tolerances, whether for energy generation or medical applications like radiotherapy.

The exact amount of uranium or plutonium in a reactor must be measured precisely, and a radiation beam must be calibrated to the millimetre, as even the slightest deviation can have catastrophic consequences.

In this sense, nuclear technology is highly deterministic: its behaviour is governed by physical laws rather than by the imprecise data on which AI relies, making it inherently more precise than AI technology.

AI, by contrast, operates on probabilities and approximations. Even with vast amounts of data and processing power, AI models cannot guarantee exact outcomes because they are trained on historical data and predict future behaviours based on patterns.

A certain margin of error might be tolerable in many industries, but the gap becomes evident when you compare this with a field like nuclear energy.

In nuclear technology, precision is not only expected — it is critical. An AI misclassification in hiring might lead to a bad hire; a misstep in nuclear technology could lead to a disaster. The stakes of precision, therefore, are much higher in nuclear technology, where every variable must be controlled, leaving little room for error.

High-level reasoning

The imprecision of AI also ties into Moravec’s paradox, a concept introduced by roboticist Hans Moravec. Moravec observed that while AI excels at tasks requiring high-level reasoning, it struggles with tasks humans find simple and intuitive, such as perception and sensorimotor skills.

In other words, AI can outperform humans in areas like chess or data analysis, but flounders in areas like grasping objects or understanding complex emotions. This paradox exposes the fragility of AI in dealing with tasks that demand a combination of physical skill, perception, and context-dependent judgement — critical components of many jobs in the world of work.

Moravec’s paradox suggests that tasks humans perceive as easy — like walking or interpreting social cues — are some of the hardest for AI to replicate.

The implication for the world of work is profound. Jobs that require intuitive, sensorimotor skills like caregiving, construction, or hospitality are far more difficult to automate than tasks like data processing, scheduling, or pattern recognition.

This contradicts the assumption that manual labour will be the first to be automated. The paradox implies that the most vulnerable jobs are those that rely on abstract cognitive skills, while tasks that require human intuition, agility, and empathy are much more challenging to replicate with AI.

Powerful and limited

Taken together, Box’s and Moravec’s insights paint a picture of AI that is both powerful and limited. While AI can be “useful” in specific, well-defined tasks, it is “wrong” because it cannot fully replicate the nuance and adaptability of human intelligence.

AI models are only as good as the data they are trained on, and their application is fraught with challenges when the tasks become more embodied or socially complex.

Despite its limitations, AI has proven to be remarkably useful across a wide range of applications. For example, AI has become indispensable for fraud detection and risk management in finance. Algorithms can sift through vast transaction data to identify abnormal patterns far more quickly than humans could.

While these systems occasionally flag legitimate transactions as suspicious, they are still invaluable for detecting and preventing fraudulent activity at a scale that would be impossible for human analysts alone. The benefits of AI here far outweigh the occasional misstep, as the technology dramatically reduces the incidence of undetected fraud.
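As a rough sketch of how such screening works, consider the simple z-score rule below. The amounts and the three-standard-deviation cutoff are invented for illustration and do not describe any production system; real fraud models are far more sophisticated. The point it captures is that a transaction can be flagged when it deviates sharply from a customer’s historical spending pattern, and that the same rule inevitably flags some unusual-but-legitimate purchases.

```python
# Toy fraud screen: flag transactions far from a customer's typical spend.
# The amounts and the 3-standard-deviation cutoff are invented for illustration.
import statistics

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 44.0, 58.0]  # past amounts
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def flag(amount: float, cutoff: float = 3.0) -> bool:
    """Flag an amount more than `cutoff` standard deviations from the mean."""
    return abs(amount - mean) / stdev > cutoff

print(flag(50.0))    # False: an ordinary purchase
print(flag(4000.0))  # True: a plausible fraud signal, worth review
print(flag(350.0))   # True: possibly a legitimate holiday purchase, i.e. the
                     # kind of false positive the article describes
```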

The utility of AI is undeniable, even if its predictions are not 100% accurate.

One of the critical implications of AI’s imprecision, amplified by Moravec’s paradox, is the risk of over-reliance. When employers and organisations see AI as an infallible tool, they may hand over critical decision-making powers to systems not designed for precise, context-sensitive judgements.

If organisations outsource too much decision-making to AI, they risk diminishing the quality of their work.

Moreover, AI’s imprecision raises questions about accountability. When an AI system makes a mistake, who is responsible? Is it the developer who created the algorithm, the organisation that deployed it, or the workers who rely on its recommendations?

This ambiguity creates a dangerous grey area in workplaces where no one is held accountable for decisions that affect people’s lives, jobs, and well-being. In scenarios like autonomous vehicles or predictive policing, the consequences of AI’s imprecision can have devastating societal impacts.

Significant concern

Another significant concern is the impact on workers themselves. As AI takes over more functions in various industries, the role of human workers shifts from active decision-makers to passive overseers of machines. This deskilling of labour could create a workforce that is less equipped to intervene when AI systems fail or require human intuition.

Workers may be expected to manage complex technologies without fully understanding how they work, leading to frustration, disempowerment, and job dissatisfaction.

The myth of AI as a high-precision technology also shapes how we view the future of work. Many proponents of AI automation argue that machines will handle all the tedious, repetitive tasks, freeing humans to focus on more creative and strategic roles.

But this vision ignores the reality of AI’s limitations. In sectors such as manufacturing, logistics, and customer service, AI has led to increased surveillance of workers, micromanagement, and the squeezing of human labour to fit the demands of imperfect machines.

AI systems, often built on faulty assumptions, set unreasonable and unachievable targets, forcing workers to keep up with machines that do not fully understand the nature of their work.

Moravec’s paradox reminds us that task automation is not as straightforward as it may seem. It challenges the assumption that robots and AI will quickly take over manual or low-skilled jobs, emphasising that tasks requiring human intuition, empathy, and sensory experience are difficult for machines to master.

Cognitive abstraction

Meanwhile, roles that rely heavily on cognitive abstraction, like data processing or routine financial tasks, are more susceptible to AI automation.

Box’s work reminds us that no matter how advanced AI becomes, it remains a model of reality, not reality itself. This is crucial for the world of work.

Rather than displacing workers, AI should be understood as a tool that complements human abilities.

In the governance of AI, it is crucial to acknowledge that AI is not a high-precision technology. Regulations, standards, and policies must account for its inherent limitations and probabilistic nature to prevent over-reliance and to mitigate potential risks.

This article was first published by Daily Maverick. Read the original article on the Daily Maverick website.

 

Suggested citation: Marwala, Tshilidzi. "AI is Not a High-Precision Technology, and This Has Profound Implications for the World of Work," United Nations University, UNU Centre, 2024-09-30, https://unu.edu/article/ai-not-high-precision-technology-and-has-profound-implications-world-work.
