
Artificial Intelligence in Healthcare: A Cautionary Tale

AI's Potential in Healthcare and Beyond: A Balancing Act between Innovation and Human Rights. Learn about the opportunities and risks.

Technology is fast moving into every aspect of our lives. Big Tech companies today know not only which films and series people prefer to watch, but also which items people like to buy, when and where, and which symptoms and conditions people are searching for. Big data and machine learning now shape most aspects of modern life, from entertainment and commerce to healthcare.

Technologies such as Artificial Intelligence (AI) can potentially transform many aspects of healthcare. For instance, AI is increasingly used to enhance clinical diagnosis through algorithms trained on millions of data points drawn from medical images, symptoms and biomarkers in electronic medical records. AI also draws on large amounts of electronic data to help predict future diseases and to optimize and customize treatment regimens. From oncology to infectious diseases, AI has already been put to use, and further applications of large language models are constantly being explored.

However, there are also risks associated with AI, ranging from the potential for mistakes to be made, to existing biases and prejudices being replicated, to inequities in health being generated or deepened.

AI also poses other serious threats. These relate to the use of AI to collect and analyze massive amounts of personal data to profile and target people without their knowledge. While such data can be used to improve disease surveillance, it can also lead to the erosion of privacy and the misuse of personal data by a handful of Big Tech companies for commercial purposes. It is therefore essential that policymakers and the public consider and debate the appropriate use of AI in healthcare, who determines this, and how AI can be used equitably so that it reaches those in desperate need of improved healthcare.

Beyond healthcare, there are other risks as well. With rapidly evolving technology and the difficulty of separating reality from deepfakes, unbridled deployment of AI could lead to a breakdown of trust and deepen social unrest and strife, with resulting public health impacts. Recognizing this, UN Secretary-General Antonio Guterres (UNSG) has called for transparency, accountability and oversight of AI.

In addition, AI could enable the development of a new generation of lethal autonomous weapons capable of causing unimaginable public health harm. Indeed, the UNSG spoke of these risks in his address to the UN Security Council, cautioning against the “horrific levels of death and destruction” that malicious use of AI could cause.

While Big Tech companies present themselves as “saviors of algorithmic harm and not perpetuators,” a key question arises: how do we determine safety, beyond what some have called “safety-washing” that lacks the necessary commitments, practices and accountability? Safety, as the writers of a recent editorial in Science pointed out, is not just about “alignment of values” or avoiding harm; it is about “understanding and mitigating risks to those values” and ensuring that technology does not get used in the “pursuit of power and profit at the expense of human rights”. This is key if AI is ever to be seen as a force for good in healthcare and beyond. Technological fixes alone are not enough. To begin with, this requires a deeper understanding of the political economy of AI and its application.

The future outcomes of the development of AI will depend on policy decisions taken now and on the effectiveness of regulatory institutions in minimizing risk and harm and maximizing benefit. Crucially, as with other technologies, preventing or minimizing the threats posed by AI will require international agreement and cooperation.

This, however, will not be easy. While numerous medical professional organizations are calling for stricter regulation of AI to protect patients and clinicians, Big Tech companies are already warning that such rules could threaten both the viability of AI in healthcare and their intellectual property rights. These must not be seen as mutually exclusive; stronger human rights-based regulation of AI is imperative if its benefits are to reach those in need. Without human rights safeguards and meaningful public consultation, AI and associated technologies threaten privacy, freedom of expression, and freedom of association, violating rights and degrading trust, which in turn undermines the effectiveness of such technologies.

As we contend with the growing adoption of AI to address our healthcare needs, we should learn from past experiences and ensure that health and human rights are protected every step of the way. 

__

This op-ed was originally published in the print edition of The Edge newspaper in Malaysia on 2 September 2023.

Also published on the UN Malaysia website.
