Never Assume That the Accuracy of Artificial Intelligence Information Equals the Truth

When accuracy is confused with truth, there is a high risk of harm, especially in fields where human judgment and ethical considerations are critical.

In an era where artificial intelligence (AI) is heralded as a cornerstone of innovation, examining the often overlooked distinction between accuracy and truthfulness is crucial. The precision of AI predictions and analyses can be seductive, leading many to conflate high accuracy with truth.

However, this conflation is misleading and potentially dangerous, as AI systems increasingly influence critical aspects of our lives, from finance to health care to legal judgments.

Accuracy in AI refers to aligning predictions with a given set of data or expected outcomes. It is a technical measure, quantifiable and concrete, providing a sense of reliability and dependability.

For instance, an AI model predicting stock market trends with high accuracy might correctly forecast price movements based on historical data patterns and real-time market analysis. Yet, this accuracy does not assure truthfulness.

The model’s predictions can be entirely correct within the scope of its data while being untrue due to external factors it cannot predict or account for, such as an unexpected political event or a company’s internal scandal concealed by information asymmetry.

The distinction between accuracy and truthfulness is particularly pronounced in the context of AI predictions about workplace performance. Consider an AI system tasked with determining employee productivity. It may analyse metrics such as hours logged, emails sent, and tasks completed to forecast future performance accurately.

However, while these metrics are accurate, they do not fully capture an employee’s capabilities, motivations, or potential issues. An employee on the verge of burnout may be highly productive today, yet their performance may suffer significantly if their mental health deteriorates, a factor AI may be unable to predict.

Real-world consequences

This disparity is more than just a theoretical issue; it has real-world consequences. AI systems are increasingly used in hiring decisions, performance evaluations and promotions. If these systems rely solely on accurate but incomplete data, they risk reinforcing biases and ignoring critical human factors, resulting in unfair or ineffective decisions.

Furthermore, AI’s reliance on historical data may exacerbate existing biases and injustices. An AI trained on biased data will produce biased results, regardless of how accurate its predictions appear.

If an AI used in criminal justice systems bases its forecasts on historical crime data, it may disproportionately affect specific communities, reflecting and perpetuating societal biases rather than presenting an objective truth.

Developing and implementing AI systems with a nuanced understanding that accuracy does not imply truthfulness is critical for navigating this complex landscape. Integrating ethical considerations, ongoing human oversight and diverse data inputs is imperative to ensuring a holistic and truthful application of AI technologies.

This also requires AI developers to be transparent about their models’ limitations and the possibility of inaccuracies caused by unaccounted-for variables.

The mean square error (MSE), a metric widely used in AI training to assess prediction accuracy, lies at the root of why AI confuses accuracy with truth. While the MSE is adequate for evaluating continuous numerical predictions, it is poorly suited to assessing discrete or abstract concepts such as truth.

The distinction between continuous and discrete values can be demonstrated using everyday examples. Continuous values can take any number within a range and change smoothly, such as a thermometer’s reading of 20°C, 20.1°C, 20.12°C, etc.

Within the context of continuous values, a doctor can be 81.3% accurate, meaning she makes, on average, a correct diagnosis for 813 of every 1,000 patients she sees. Discrete values, however, can only take specific, separate values with nothing in between, such as counting apples: one, two or three, but never 2.5.

In many ways, truth is discrete. For example, I was born in Duthuni Village: this statement is either true or false, with nothing in between. Claiming that I was born in London is therefore entirely incorrect, not partially so.

Truth is discrete, not continuous. Continuous values are like a flowing stream, whereas discrete values are like distinct steps.
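The continuous/discrete contrast above can be sketched in a few lines of Python. The numbers here are hypothetical illustrations, not measurements; the point is that the MSE rewards being "nearly right", which is meaningful for continuous values but not for a discrete truth.

```python
# Minimal sketch (hypothetical numbers): MSE suits continuous targets,
# but a discrete truth is all-or-nothing.

def mse(preds, targets):
    """Mean square error: average squared deviation between predictions and targets."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

# Continuous: thermometer readings in degrees Celsius. A forecast of 19.9
# against a true 20.0 is meaningfully "almost right".
print(mse([19.9, 20.3], [20.0, 20.1]))  # 0.025 -- small, and small means good

# Discrete: encode the statement "born in Duthuni Village" as 1 (true) or 0 (false).
# A hedged prediction of 0.5 earns a modest MSE of 0.25, yet as a claim about
# the fact it is neither true nor false: "partially true" has no meaning here.
print(mse([0.5], [1.0]))  # 0.25
```

Note how the metric gives the hedged answer partial credit even though, as a statement about a fact, it is simply not the truth.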

Truth encompasses nuanced, qualitative aspects that the MSE cannot capture because it is based on numerical deviation. For example, an AI model may have a low MSE when predicting employee performance using quantifiable metrics such as hours worked. However, this does not account for the underlying truth about an employee’s well-being, job satisfaction, or ethical behaviour.
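A small, entirely hypothetical sketch makes the limitation concrete: a model can score a low MSE on measurable output while a factor it never sees, such as burnout risk, tells a very different story. The employee records and the `burnout_risk` field below are invented for illustration.

```python
# Hypothetical sketch: low MSE on logged metrics, blind to qualitative truth.

employees = [
    # predicted vs actual output; burnout_risk is never seen by the model
    {"hours": 45, "predicted": 9.0, "actual": 9.1, "burnout_risk": "low"},
    {"hours": 46, "predicted": 9.2, "actual": 9.0, "burnout_risk": "high"},
]

# MSE over the measurable output is small, so the model looks "accurate".
mse = sum((e["predicted"] - e["actual"]) ** 2 for e in employees) / len(employees)
print(f"MSE on measurable output: {mse:.3f}")  # 0.025

# Yet near-identical productivity numbers hide very different trajectories.
for e in employees:
    print(f'{e["hours"]}h -> output {e["actual"]}, burnout risk: {e["burnout_risk"]}')
```

The evaluation metric never penalises the model for what it cannot see, which is precisely why a low MSE is not evidence of a truthful picture.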

Critical for a true assessment

These qualitative factors are critical for a true assessment. However, they are invisible in an MSE-focused evaluation, highlighting the limitations of relying solely on this metric for comprehensive and truthful insights.

Now that we are in the era of generative AI, I asked Google Gemini: “Who is Tshianeo Marwala?” and its response was: “Tshianeo Marwala is the former Vice-Chancellor and Principal of the University of Johannesburg in South Africa. He is known for his work in the field of artificial intelligence and its applications in various sectors. The Tshianeo Marwala Scholarship Fund at the University of Johannesburg supports women from underprivileged backgrounds pursuing postgraduate studies in fields related to the Fourth Industrial Revolution.”

The information Google Gemini provides is highly accurate for Tshilidzi Marwala, but not for Tshianeo Marwala. Its answer is not true because it does not describe my grandmother, Mrs Tshianeo Marwala, as asked, but instead describes Tshilidzi Marwala. This is because Tshilidzi has a far greater internet presence than Tshianeo, and the laws of probability dictate that the model will confuse the two.

Scenarios like these should make us extremely cautious when using generative AI. Consider the implications of answers like these had I asked about a specific medical diagnosis. They could have life-threatening consequences!

Another example is Amazon’s AI-powered recruitment tool, which was created to make hiring easier by evaluating CVs and selecting candidates. This tool analysed historical data to identify patterns among successful hires at the company.

However, it was later discovered that the AI system was sexist and biased against female candidates. This bias arose because the historical data used to train the AI primarily comprised male applicants’ CVs, reflecting gender disparities in the tech industry. As a result, the AI incorrectly downgraded CVs containing terms more commonly found on women’s résumés, such as “women’s chess club captain”, even though these terms did not indicate a lack of qualification.

This example shows how AI accuracy, based on historical data, does not reflect the truth that women and men are equally talented.

While AI accuracy can be effective, it is not a substitute for truthfulness. Recognising and addressing the limitations of accuracy is critical to utilising AI’s full potential responsibly and ethically as it continues to permeate our lives. We can only ensure that AI serves humanity truthfully and equitably if we recognise this distinction rather than deceive ourselves with the illusion of accuracy.

Context, ethics, and multiple perspectives all play a role in determining truth. To ensure the truth, rigorous validation against real-world outcomes, ongoing human oversight, and transparency about the assumptions and biases embedded in AI algorithms are required.

We must actively question and interpret AI outputs within broader societal and ethical frameworks to ensure that AI-based decisions and actions are fair, equitable, and consistent with the true complexities of human experiences and environments.

It is thus essential to ingrain this dictum into the AI world: AI accuracy is not necessarily the truth!

This article was first published by Daily Maverick. Read the original article on the Daily Maverick website.

Suggested citation: Marwala, Tshilidzi. "Never Assume That the Accuracy of Artificial Intelligence Information Equals the Truth," United Nations University, UNU Centre, 2024-07-18, https://unu.edu/article/never-assume-accuracy-artificial-intelligence-information-equals-truth.
