
Doctor AI will see you now: Navigating the ethics of smart medicine

New Delhi | Written By: Medhavi Mishra | Updated: Jun 23, 2023, 04:08 PM IST

Story highlights

Members of the European Parliament approved the world’s first Artificial Intelligence Act (EU AI Act) on June 14, 2023, which, if enacted, would be among the first comprehensive regulations introduced by a major regulatory body to govern the field of AI.

Google recently launched cutting-edge AI technology in healthcare, making it possible to predict cardiovascular events by analysing eye scans. This breakthrough indicates a potential shift away from diagnostic techniques such as CT scans and MRIs.
A few possible use cases of AI in healthcare are enhanced medical imaging and diagnostics, personalised treatment and precision medicine, accelerated drug discovery and development, and telemedicine and remote patient monitoring. With AI’s potential to revolutionise healthcare, it is imperative to examine not only the opportunities it presents but also the ethical, legal, and regulatory challenges that accompany its implementation.

The problem

While the integration of AI in healthcare holds tremendous promise, it also presents several challenges that must be addressed for successful implementation. Data security and ethics sit at the top of the list.

The integration of AI, with its capacity to process vast amounts of personal health information, raises questions of patient privacy, consent, and data security. For example, the recently announced Google AI that can catch heart disease simply by examining retinal scans was trained on a medical dataset of almost 300,000 patients, gathered through a research collaboration with a hospital located in Tamil Nadu. Use cases like this raise several ethical and regulatory questions. Did the patients consent to their retinal data being recorded and fed into an algorithm? Since retinal data is biometric, was their consent obtained to store such sensitive biometric data? Was the AI designed ethically?

What does the law say?

Data security
Under various data protection regimes globally, such as the EU’s General Data Protection Regulation (GDPR) or the California Consumer Privacy Act (CCPA) in the US, data is broadly divided into two categories:
1. Personal Data
2. Sensitive Personal Data (SPD)

Data such as health records, sexual orientation, biometric data, genetic data, and political opinions comes under the ambit of SPD, which is heavily regulated in most international jurisdictions. Health data used to train an AI will therefore undoubtedly qualify as SPD. The GDPR imposes strict requirements on the processing of SPD: it generally prohibits processing such data unless specific conditions, such as explicit consent or legal obligations, are met. The GDPR sets forth two tiers of administrative fines on organisations:

1. Up to €10 million or 2% of the global annual turnover of the preceding financial year, whichever is higher.
2. Up to €20 million or 4% of the global annual turnover of the preceding financial year, whichever is higher. Violations involving SPD, such as health data, fall in this higher tier; for an organisation with a global annual turnover of €2 billion, that would allow a fine of up to €80 million.

India too recognises health data as SPD. However, in a world where the GDPR has established global benchmarks for data protection, India’s existing rules fall short of those standards and of imposing meaningful penalties.


The United States regulates Protected Health Information (PHI) through a dedicated statute, the Health Insurance Portability and Accountability Act (HIPAA). The law sets standards and rules for the protection of sensitive health information and requires healthcare professionals to safeguard the privacy and security of individuals’ PHI. Civil penalties can run as high as $50,000 per violation.
 
So how does one train an AI with health data, and how does one regulate it? One way is data aggregation, which combines and anonymises health data from many people to create bigger datasets, making it difficult to identify specific individuals. Another is data de-identification: removing or encrypting personal information such as names and addresses to prevent direct identification, or replacing identifiers with pseudonyms or unique tokens before the data is entered into the AI database.
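To make the pseudonymisation idea concrete, here is a minimal Python sketch; the record fields and the helper names (pseudonymise, de_identify) are invented for illustration and are not drawn from any real hospital pipeline.

```python
import hashlib
import secrets

# Hypothetical patient record; all field names and values are illustrative.
record = {
    "name": "A. Patient",
    "address": "12 Example Street",
    "patient_id": "MRN-004217",
    "age": 54,
    "systolic_bp": 138,
}

# A secret salt stored separately from the dataset; without it, the
# pseudonyms below cannot easily be linked back to the original IDs.
SALT = secrets.token_hex(16)

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a salted one-way hash (a pseudonym)."""
    return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:12]

def de_identify(rec: dict) -> dict:
    """Drop direct identifiers, keeping only a pseudonym and clinical fields."""
    return {
        "pseudonym": pseudonymise(rec["patient_id"]),
        "age": rec["age"],            # quasi-identifiers may still need review
        "systolic_bp": rec["systolic_bp"],
    }

print(de_identify(record))
```

The design point is that the salt acts like a key: whoever holds it can re-link records. That is why pseudonymised data of this kind is generally still treated as personal data under the GDPR, while truly anonymised data is not, as the next paragraph notes.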

The GDPR does not explicitly define anonymisation, but it acknowledges (in Recital 26) that anonymised data is outside the regulation's scope, whereas pseudonymised data remains personal data.

India’s long-awaited Digital Personal Data Protection Bill (the equivalent of the EU’s GDPR) is still in limbo. The government has been engaging in a legislative dance, volleying the draft bill back and forth through revisions. If this space is left unregulated any longer, our most intimate health data could, in practice, be accessible to large corporations without our consent or control.

AI bias
As patients entrust their well-being to AI systems, safeguards must be in place to prevent defective diagnoses and biased algorithms. For example, if a military health data source is used, the AI model will likely generate limited insights about the female population because male service members predominate in such data. If left unchecked, AI systems can inherit and amplify the biases present in the data they are trained on. Biased algorithms in healthcare may lead to unequal access to treatment or misdiagnoses for certain populations, perpetuating inequities in care.
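To see how such a skew might be caught before deployment, here is a minimal, hypothetical Python sketch of a subgroup accuracy audit; the evaluation records and the disparity threshold are invented for illustration.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label).
# Mirroring the military-data example, the "female" rows are scarce.
predictions = [
    ("male", 1, 1), ("male", 0, 0), ("male", 1, 1), ("male", 0, 1),
    ("female", 1, 0), ("female", 0, 0), ("female", 1, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in predictions:
    total[group] += 1
    correct[group] += int(truth == pred)

accuracy = {g: correct[g] / total[g] for g in total}
print(accuracy)  # {'male': 0.75, 'female': 0.333...}

# A large gap between groups is a red flag that the model has inherited
# a sampling bias and needs rebalanced data or recalibration.
if max(accuracy.values()) - min(accuracy.values()) > 0.10:
    print("Warning: subgroup performance disparity detected")
```

Real audits would use far larger samples and fairness metrics beyond raw accuracy (false-negative rates, per-group calibration), but the principle is the same: measure performance per subgroup rather than only in aggregate.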

In an actual case involving UnitedHealth Group's Optum division in the US, an algorithm that relied solely on past spending data, without considering broader societal factors, falsely equated the illness severity of black patients with that of healthier white patients. As a result, the algorithm significantly underestimated the number of black patients in need of additional care, ultimately depriving them of necessary support.

Members of the European Parliament approved the world’s first Artificial Intelligence Act (EU AI Act) on June 14, 2023; if enacted, it would be among the first comprehensive regulations introduced by a major regulatory body to govern the field of AI. Article 6 of the EU AI Act, read with Annex II, categorises medical devices as ‘high risk’, which means such AI systems will be subjected to rigorous risk assessments and authoritative oversight. The Act establishes obligations for both providers and users of high-risk AI systems, thereby subjecting hospitals and clinics utilising such AI devices to scrutiny.

In the United States, the Food and Drug Administration (FDA) has proposed a comprehensive five-part action plan aimed at regulating AI in healthcare. The plan includes the establishment of a regulatory framework for Software as a Medical Device (SaMD), guidelines for good machine learning practice, transparency requirements for users and patients, regulation of algorithmic bias and robustness, and evaluation of real-world performance. Under the plan, the FDA will collaborate with research partners to develop methods for identifying, evaluating, and mitigating bias in AI/ML algorithms.

The Indian Council of Medical Research (ICMR) has also released a guiding document titled "Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare" (the ICMR Framework). This comprehensive document is the only framework in India that addresses AI in healthcare; however, it is merely guidance and carries no force of law.

AI also ‘hallucinates’
AI hallucinations can manifest when the AI attempts to respond to a prompt without possessing all the requisite information to provide an accurate answer. In such cases, it is not uncommon for the AI to fabricate information in order to fill the gaps and generate a response. 

Therefore, an AI can conjure up a diagnosis due to a lack of data, ultimately paving the way for an erroneous treatment plan. The stakes are high: in healthcare, such missteps can precipitate life-threatening outcomes. However, an AI trained on diverse data that uses dropout techniques during training may be less prone to such ‘hallucinations’.
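As one concrete reading of that dropout remark, here is a hedged PyTorch sketch of Monte Carlo dropout, which keeps dropout active at inference so that the spread across repeated forward passes can flag low-confidence (potentially fabricated) predictions; the toy architecture, layer sizes, and sample count are illustrative assumptions, not a clinical implementation.

```python
import torch
import torch.nn as nn

# Toy diagnostic classifier; layer sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Dropout(p=0.2),  # dropout used in training and, below, at inference
    nn.Linear(32, 2),
)

def mc_dropout_predict(x: torch.Tensor, n_samples: int = 50):
    """Run several stochastic forward passes with dropout left on.

    High variance across passes signals the model is effectively guessing,
    a cue to defer to a clinician instead of emitting a confident but
    possibly 'hallucinated' diagnosis."""
    model.train()  # keep dropout active at inference (Monte Carlo dropout)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(1, 16)  # stand-in for one patient's feature vector
mean, std = mc_dropout_predict(x)
print("prediction:", mean, "uncertainty:", std)
```

A high per-class standard deviation here would argue for routing the case to a human reviewer rather than auto-generating a treatment plan.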

Who’s liable? Although it will depend on the terms of service or the specific contracts an organisation has with the technology company offering such AI services, the organisation using the AI may generally be held liable when the AI makes such mistakes. AI development frameworks worldwide, such as the OECD AI Principles and the World Economic Forum’s AI governance framework, advocate ‘transparent’ and ‘accountable’ AI development, which involves openly communicating design, training data, and validation, and incorporating diverse perspectives to minimise biases and problems such as ‘hallucinations’.

Globally, jurisdictions are struggling to regulate AI. While frameworks and guidelines are welcome, the need for an authoritative law such as the EU AI Act is more pressing than ever, because in the realm of smart medicine, the stakes are too high for complacency.

(Disclaimer: The views of the writer do not represent the views of WION or ZMCL. Nor does WION or ZMCL endorse the views of the writer.) 
