Does AI Require Regulation?

By Arunima Rajan

Healthcare AI platforms

According to news reports, the Bureau of Indian Standards (BIS) is preparing a comprehensive set of standards for AI-related applications in India. Healthcare Executive asks global CXOs to weigh in on the issue.

Edward Tian, CEO, GPTZero

Healthcare AI platforms need standards and guidelines. At a bare minimum, guidelines need to be in place to ensure that legal requirements such as HIPAA are fully met. Without solid guidelines that guarantee legal compliance, AI technology can get out of hand and create serious legal trouble for healthcare organizations.

Jim Boswell, CEO, OnPoint Healthcare

I believe healthcare AI platforms need clear standards and guidelines. As AI becomes more widespread in healthcare, there are growing concerns about ethics and trustworthiness, and without proper regulation we could see unintended consequences, especially in something as critical as patient care. Without some level of oversight, there's a real risk of AI being misused or misunderstood; AI doesn't always grasp the full context or nuances a human clinician would. In healthcare, though, there's always a clinician overseeing the process, which acts as a safety net to ensure AI is used appropriately. Our approach ensures we use AI responsibly, with clinicians always involved to guide and verify AI-generated insights. It's about finding the balance between innovation and safety.

A standard approach to regulating AI from development through deployment could help, but we need to be careful not to stifle creativity and innovation. If we regulate too heavily, we risk slowing down advancements that could benefit patients significantly. Healthcare is fundamentally about human care, and AI should support that by making processes more efficient, not by replacing the human touch. At OnPoint, we believe in using AI to keep clinicians in control, ensuring safe, effective care while encouraging innovation.

Jason Alan Snyder, founder, SuperTruth.AI

SuperTruth builds AI solutions to ingest and understand health data, remediate data decay, and ensure the highest-quality, most accurate data. We address transparency and accountability by embedding these principles into every stage of our development process. We start by ensuring that the data used to train AI models is meticulously cleaned, structured, and free from bias; this rigorous data management forms the foundation for transparency, as it allows us to document the origins and transformations of the data throughout its lifecycle. We also prioritize explainability when working with AI models, ensuring that healthcare providers can understand the reasoning behind AI-driven decisions. Robust auditing mechanisms reinforce accountability: we maintain detailed records of data handling, model training, and deployment, so that if an issue arises we can trace it back to its source, make timely corrections, and maintain trust with our clients.
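As an illustration of what such an audit trail might look like, here is a minimal sketch in Python. It is not SuperTruth's actual implementation: the AuditRecord fields, the append_record helper, and the log file path are hypothetical, chosen only to show how each data-handling, training, or deployment step could be logged with a checksum so that a later issue can be traced back to its inputs.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One entry in a hypothetical data-and-model audit trail."""
    stage: str            # e.g. "data_cleaning", "model_training", "deployment"
    actor: str            # person or pipeline that performed the step
    dataset_version: str  # identifier of the dataset snapshot used
    model_version: str    # identifier of the model artifact produced or used
    details: dict         # free-form notes: parameters, checks performed, approvals
    timestamp: str = ""
    checksum: str = ""

def append_record(log_path: str, record: AuditRecord) -> None:
    """Append a record carrying a checksum of its own contents, so later
    tampering with an entry is detectable when the log is reviewed."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    payload = json.dumps(asdict(record) | {"checksum": ""}, sort_keys=True)
    record.checksum = hashlib.sha256(payload.encode()).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: record a training run so an issue found later can be traced to its inputs.
append_record("audit_log.jsonl", AuditRecord(
    stage="model_training",
    actor="training-pipeline",
    dataset_version="claims-2024-06-v3",
    model_version="risk-model-1.4.0",
    details={"bias_assessment": "passed", "random_seed": 42},
))
```

Keeping the log append-only, with each entry carrying its own checksum, makes silent edits to past entries detectable during a review.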

Standardized guidelines play a critical role in supporting these efforts. They provide a uniform framework that all stakeholders can follow, ensuring consistency and reliability across the industry. By adhering to standardized guidelines, we can more easily demonstrate compliance with best practices, which is essential for building and maintaining trust in AI systems, particularly in a sector as sensitive as healthcare.

Fundamental Ethical Principles That Should Guide the Development and Deployment of AI in Healthcare

The ethical principles guiding AI in healthcare must include fairness, transparency, accountability, privacy, and respect for patient autonomy. These principles are not merely abstract ideals; they are essential for responsible AI development and deployment.

Fairness ensures that AI systems do not perpetuate or exacerbate biases, particularly in clinical settings where disparities in treatment can have serious consequences. SuperTruth actively identifies and mitigates biases in data and algorithms, striving to create AI systems that deliver equitable patient outcomes.

Transparency is crucial for building trust. Healthcare providers and patients need to understand how AI-driven decisions are made, so models should be designed with explainability in mind. We believe transparency should extend beyond the AI's decision-making process to include the data used, the algorithms applied, and the outcomes generated.

Accountability is another fundamental principle. In healthcare, the stakes are high, and the potential for harm is real. SuperTruth maintains strict accountability with thorough documentation and regular audits, ensuring every aspect of the data lifecycle is monitored and recorded.

Privacy and respect for patient autonomy are fundamental in healthcare. AI systems must be designed to protect patient data and uphold individuals' privacy. At SuperTruth, we employ advanced encryption and data anonymization techniques to keep patient data secure and to ensure it is used in a manner that respects patient consent.
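One widely used anonymization step is to replace direct identifiers with keyed (salted) hashes. The sketch below is illustrative rather than a description of SuperTruth's actual techniques; the pseudonymize helper, the environment-variable key, and the sample record are hypothetical. Keyed hashing keeps a patient's records linkable to each other while making the original identifier unrecoverable without the key, which in practice would be held in a key-management service.

```python
import hashlib
import hmac
import os

# The secret key lives outside the dataset; in practice it would come from a
# key-management service. Reading it from an environment variable is illustrative.
PSEUDONYM_KEY = os.environ.get("PSEUDONYM_KEY", "replace-me").encode()

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash. The same patient always maps
    to the same token, so records stay linkable, but the original ID cannot be
    recovered without the key."""
    return hmac.new(PSEUDONYM_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-0012345", "diagnosis_code": "E11.9"}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record)
```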

Integrating these principles into a standardized framework requires collaboration across the industry. Standardized guidelines should mandate regular bias assessments, transparent reporting practices, and robust privacy protections, ensuring these ethical principles are consistently applied across all healthcare AI systems.
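To make the idea of a regular bias assessment concrete, here is a minimal sketch of one common check, the demographic parity gap between groups. It is illustrative only and not a specific vendor's method; the function names, the example predictions, and the group labels are hypothetical, and a real assessment would combine several metrics with clinical review rather than rely on a single number.

```python
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Share of positive predictions within each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups.
    A wide gap flags the model for human review; it does not by itself prove bias."""
    rates = positive_rate_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical triage-model predictions (1 = escalate) and group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
print(positive_rate_by_group(preds, groups))
print(demographic_parity_gap(preds, groups))
```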

Jay Anders, Chief Medical Officer, Medicomp Systems

Healthcare is on the cusp of a transformative era, with artificial intelligence (AI) and large language models (LLMs) demonstrating remarkable potential in streamlining clinical documentation, quality measurement, and medical coding. However, despite a recent flurry of regulation, there remains no centralized framework governing artificial intelligence in the U.S.

For example, the recently released Health Data, Technology, and Interoperability (HTI-1) Final Rule establishes transparency requirements for AI and predictive algorithms used in certified health IT, ensuring clinical users can access crucial information about the algorithms supporting their decision-making processes. These requirements represent a heavy lift for HIT vendors, who must now demonstrate fairness and lack of bias in their algorithms; communicate intended use cases and limitations; document the data sources, models, and performance metrics used in algorithm development and validation; provide evidence of real-world effectiveness in improving patient outcomes and clinical decision-making; and implement safety monitoring and reporting systems with guidance on appropriate use and interpretation of algorithm outputs. However, without stricter standards the requirements in many ways lack "teeth," because the Office of the National Coordinator for Health Information Technology (ONC) has not included stringent requirements for disclosing source attributes and evaluating model fairness.

Steve Kearney, Global Medical Director, SAS

AI needs standards and guidelines across all industries. However, regulators should focus mainly on areas like healthcare, public safety, and finance, where legacies of inequity exist and harm can be perpetuated at scale by AI. Unlike in retail, where the risk might be just a poor shirt recommendation, risks in those industries affect civil rights, livelihoods, and personal well-being. That said, the need for regulation isn't just to prevent harm. Smart regulation establishes guardrails that ensure transparency and create a consistent, level playing field for AI users and developers. AI offers healthcare organizations – including physician practices, hospital systems, and health insurers – a wide variety of opportunities to make people more knowledgeable and processes more efficient. If governments are too heavy-handed with regulation, innovation can be stifled, and the technology's promise is undermined.

There is understandable fear that unregulated AI could cause widespread harm, particularly in areas where people's health and well-being are at stake. One primary area of concern is implicit and unrecognized bias, often attributable to a model's training data. It's critical to scrutinize outputs and choose solutions that incorporate trustworthy AI principles into their models. Another fear is intentional harm through misuse, such as in healthcare claims fraud. Generative AI can be a valuable weapon in the fight against fraud, but it has also made fraudsters' tasks easier. It used to require more medical knowledge to create fake medical records and claims, and creating a slew of them was labour-intensive. GenAI makes it easier to iterate and produce realistic-looking records, diagnoses, and documentation, including medical images. A third concern is the unintentional harm caused by "automation bias," when GenAI output is inaccurate but is given more credence because it came from an automated system.

The good news is that existing laws and regulations on data protection, privacy, and consumer protection already provide protections and serve as a foundation for AI policy. In healthcare, we have strict rules and regulations around how patient information can and cannot be used. AI doesn't change that. Regulations like HIPAA will continue to apply, even to AI-generated content.

That said, policymakers were just getting their arms around the complexities of what we might think of as "traditional" AI when generative AI arrived, further complicating matters. Globally, policymakers are evaluating an omnibus approach, much like the EU AI Act, versus a more surgical, industry-specific approach. Governments have the unenviable task of developing nuanced policy that is broadly applicable without being overly prescriptive, as the technology develops rapidly and risks vary within industries and among use cases.

Just as healthcare adheres to a specific cadence of audits, validations, and regulatory changes, we believe the same, or a similar, cadence will be required of AI regulations within and across the industry. The regulation of AI will be, and must be, an iterative process. Innovation will always outpace policy, and governments cannot stay ahead, nor should they try. Regulations must be regularly updated and amended to account for breakthroughs and, unfortunately, unforeseen adverse outcomes. At SAS, we've played a crucial role in helping various ministries of health, payer organizations, and healthcare systems prepare their own AI strategies, providing guidance on how to evaluate and operationalize AI.

At SAS, we consider the AI lifecycle under the umbrella of responsible innovation. A responsible innovation approach considers ethical implications at every step of the process, from ideation to development, deployment, and sunsetting. Ideally, we will have as much consistency as possible across geographies to avoid patchwork regulations that confuse users and developers and make compliance all the more challenging.

Fortunately, emerging regulations are coalescing around specific, consistent themes that support responsible AI, such as:

Humans at the centre: Proposed and emerging regulations agree that AI should be used ethically, respect fundamental human rights, and be prevented from reinforcing existing inequalities or causing harm. Additionally, there is convergence around the need for AI governance in the form of human oversight and intervention, especially in high-stakes areas like healthcare.

Transparency and explainability: There is consistency around understanding how a decision was made, including the data and algorithms involved. This helps address issues like bias and model drift and supports auditability in AI decision-making processes.

Data privacy and security: Building on existing laws such as the GDPR in Europe, regulations frequently include language that empowers citizens to take control of their data, ensures AI systems are designed to protect user privacy, and prioritizes data security.

Risk-based approaches: The EU AI Act and the NIST (National Institute of Standards and Technology) AI Risk Management Framework are prime examples of risk-based approaches, where the level of regulation depends on the potential risks the AI system poses. This allows for flexibility in the stringency of compliance and reporting while protecting people from the dangers of higher-risk applications of AI.

Accountability and inclusivity: Who's to blame when AI goes wrong, and who's involved in identifying potential adverse impacts? There is an unmet need for precise accountability mechanisms and for greater involvement of underrepresented stakeholders in developing and deploying AI. This includes having diverse voices at the table in the ideation phase.

Maria Madison, Interim Dean, Brandeis University's Heller School for Social Policy and Management

Stringent bias testing, diverse data representation, and continuous monitoring can help ensure the ethical development and implementation of AI technologies within the healthcare sector, especially for marginalized populations. The best way to address the challenges unique to diverse communities is to have AI regulations that mandate the inclusion of diverse datasets, require impact assessments, and ensure community engagement. Public health researchers and policymakers should be actively involved in these conversations and in shaping these regulations to ensure that they emphasize inclusivity, equity, and the protection of vulnerable groups.

Regulatory frameworks should enforce explainability, regular audits, and clear guidelines on data usage, focusing on patient consent and privacy to promote transparency and accountability when AI systems are used in healthcare. Adopting adaptive regulatory practices, like the FDA's Software as a Medical Device (SaMD) framework, can help AI regulations balance innovation with protection. Frameworks such as this emphasize safety while allowing technological advancement.
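Continuous monitoring of a deployed model often begins with a drift check: comparing the score distribution the model produces today against the distribution observed when it was validated. The sketch below computes the Population Stability Index, a common rule-of-thumb drift measure; it is illustrative only, the function and sample scores are hypothetical, and the 0.2 cut-off mentioned in the comment is a convention rather than a regulatory threshold.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a recent sample.
    Values above roughly 0.2 are commonly treated as a sign of meaningful drift;
    that cut-off is a rule of thumb, not a regulatory requirement."""
    lo, hi = min(expected), max(expected)

    def bin_fractions(values):
        counts = [0] * bins
        for v in values:
            # Map the value to a bin over the baseline's range, clamping outliers.
            idx = int((v - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[min(max(idx, 0), bins - 1)] += 1
        # A small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Example: risk scores seen at validation time vs. scores produced this month.
baseline_scores = [0.10, 0.20, 0.25, 0.30, 0.40, 0.50, 0.55, 0.60, 0.70, 0.80]
current_scores = [0.40, 0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90]
print(population_stability_index(baseline_scores, current_scores, bins=5))
```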