Cracking the AI Code: Brian Anderson on the High Stakes of Trust, Ethics and Innovation in Healthcare

By Arunima Rajan

Brian Anderson is the Chief Executive Officer of the Coalition for Health AI (CHAI), a non-profit coalition focused on developing consensus-driven guidelines and best practices for Responsible AI in Health and on supporting the independent testing and validation of AI for safety and effectiveness. Before leading CHAI, Anderson was the Chief Digital Health Physician at MITRE, where he led research and development across major strategic initiatives in digital health alongside industry partners and the U.S. Government, and worked closely with the White House COVID Task Force and Operation Warp Speed. In an interview with Arunima Rajan, Anderson discusses core guidelines for the use of AI in healthcare, the importance of external validation, the need to address biases in AI, the role of CXOs in ethical AI implementation, and the need for continuous monitoring and evaluation of AI solutions.

How can hospital leaders strike a balance between the advantages of AI in healthcare and the need to prioritize ethics and patient safety?

To ensure the responsible development and implementation of AI in healthcare, it is crucial to embrace external validation and confirm that AI models undergo rigorous independent evaluations before being implemented. Healthcare organizations should leverage resources from organizations like CHAI to access guidance and tools that support under-resourced health systems, fostering equitable access to AI validation. Incorporating diverse voices from startups and community-based organizations is essential to enhance representation and equity in AI tools in healthcare. Additionally, enacting standards for data privacy and cybersecurity is vital to maintaining a strong commitment to patient trust and ethical practice.

What are the core guidelines healthcare organizations should follow to ensure AI is used responsibly and ethically?

It is important to prioritize patient safety and well-being above all else, ensuring AI systems do not compromise care quality or introduce new risks. Organizations should also maintain transparency about AI use, clearly communicating to patients when and how AI is being utilized in their care. Additionally, healthcare providers need to address potential biases in AI algorithms to promote fairness and prevent disparities in care delivery. Ongoing monitoring and evaluation of AI performance is crucial, with human oversight maintained to catch and correct errors. Organizations should invest in training to ensure healthcare workers can work effectively and ethically alongside AI technologies.

How can hospitals tackle biases in AI algorithms, especially when they’re applied in clinical decision-making?

Hospitals must implement rigorous testing and validation of AI systems before deployment, with a specific focus on identifying biases, including evaluating the system's performance on underrepresented groups. To reduce these biases, hospitals can advocate for diverse and representative training data for AI algorithms, encompassing patients from various demographic backgrounds, and develop clear guidelines and protocols for clinicians on how to appropriately use AI tools in decision-making, emphasizing that AI should enhance rather than replace clinical judgment.
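As an illustration of the kind of subgroup evaluation described above, the sketch below reports a model's discrimination (AUROC) separately for each demographic group. It is a minimal example, assuming a pandas DataFrame with illustrative column names (demographic_group, outcome, model_score) that are not drawn from any CHAI specification.

```python
# Minimal sketch of per-subgroup performance evaluation.
# Column names are illustrative assumptions, not a standard.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_performance(df: pd.DataFrame,
                         group_col: str = "demographic_group",
                         label_col: str = "outcome",
                         score_col: str = "model_score") -> pd.DataFrame:
    """Report AUROC and sample size for each demographic subgroup."""
    rows = []
    for group, subset in df.groupby(group_col):
        # Skip subgroups too small (or too homogeneous) to score reliably.
        if len(subset) < 30 or subset[label_col].nunique() < 2:
            rows.append({"group": group, "n": len(subset), "auroc": None})
            continue
        auroc = roc_auc_score(subset[label_col], subset[score_col])
        rows.append({"group": group, "n": len(subset), "auroc": auroc})
    return pd.DataFrame(rows)
```

Large gaps in AUROC between subgroups are one signal that a model needs further review before it is cleared for clinical use.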

What measures would you suggest to ensure patient data privacy and security while introducing AI in healthcare?

To protect patient information, implement robust data encryption and anonymization techniques; this includes ensuring data is de-identified before it is used to train an AI model. It is crucial to develop clear data governance policies and procedures that outline how patient data will be collected, used, stored, and protected when leveraging AI. This should include guidelines on consent, data retention, and permitted uses. We must continue to work with policymakers to develop regulations that address the unique privacy challenges posed by AI in healthcare while enabling the pursuit of innovation.
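To make the de-identification step concrete, here is a minimal sketch that drops direct identifiers and replaces the patient ID with a salted, irreversible pseudonym before a record is used for training. The field names are hypothetical, and a real pipeline must satisfy applicable standards such as HIPAA's Safe Harbor or Expert Determination methods.

```python
# Illustrative sketch of one small piece of de-identification.
# Field names are hypothetical; real pipelines must meet regulatory standards.
import hashlib

DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ssn"}

def deidentify(record: dict, salt: str) -> dict:
    """Return a copy of the record with direct identifiers removed
    and the patient ID replaced by an irreversible pseudonym."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    raw_id = str(clean.pop("patient_id"))
    clean["pseudonym"] = hashlib.sha256((salt + raw_id).encode()).hexdigest()
    return clean

record = {"patient_id": 1001, "name": "Jane Doe", "age": 54, "diagnosis": "I10"}
print(deidentify(record, salt="keep-this-secret"))
```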

How can healthcare leaders verify the safety and effectiveness of AI systems before rolling them out in hospitals?

Leaders need to feel confident in the testing and validation process used to certify AI tools prior to deployment in a care setting. To achieve this confidence, conduct thorough clinical trials or pilot studies in controlled environments before full-scale deployment; this allows for real-world evaluation while minimizing risks to patients. Representation is key not only in the data sets and patient populations being used, but also in the assessment of these AI tools: we will all benefit greatly from a comprehensive assessment of AI systems from various perspectives, conducted by multidisciplinary teams and stakeholders from all corners of healthcare.

What role should hospital CXOs play in creating and implementing ethical guidelines for AI use in their organizations?

Chief Experience Officers play an important role in creating and implementing ethical guidelines for AI in their organizations. A key focus should be prioritizing patient-centric values by emphasizing the protection of patient autonomy, privacy, and safety. CXOs should require that AI systems used in clinical decision-making are transparent in their functioning and able to explain their outputs. This is crucial for maintaining trust and accountability, and it is a key part of ensuring a positive patient experience throughout the healthcare journey. CXOs also need to ensure that AI is implemented and configured for the end user in a way that mitigates human cognitive bias in the use of the AI tool, and that the tool is as intuitive and usable as possible. Finally, CXOs can engage with policymakers, industry partners, and patient advocacy groups to stay ahead of evolving ethical standards and best practices in AI.

How can healthcare organizations maintain transparency in AI-based decision-making, particularly when it impacts patient care?

Healthcare organizations can implement clear documentation and disclosure practices: keep detailed records of the AI systems being deployed, and make information on system development, training data, and limitations readily available to clinicians, patients, and administrators. Organizations can also develop clear communication strategies to inform patients when and how AI is being used in their care, providing them with the opportunity to ask questions and consent to AI-assisted decision-making.
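As one hedged example of such documentation, a "model card" style record can capture development details, training data, and limitations in a form clinicians and administrators can inspect. The fields below are illustrative assumptions, not a CHAI template; published model-card formats and local policy should drive the real schema.

```python
# Hypothetical model-card record; all values are illustrative.
model_card = {
    "name": "sepsis-risk-v2",
    "intended_use": "Early-warning flag for adult inpatient sepsis risk",
    "training_data": "Retrospective EHR data, 2018-2022, two academic centers",
    "known_limitations": [
        "Not validated for pediatric patients",
        "Lower sensitivity in patients with rare comorbidities",
    ],
    "validation": {"auroc": 0.85, "externally_validated": True},
    "human_oversight": "Clinician must confirm before any order is placed",
}
```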

What are the key factors to consider when obtaining informed consent from patients for AI use in healthcare, especially in culturally diverse settings?

The key here is providing patients with a detailed, clear explanation of how AI will be used in their care. Avoid technical jargon and focus on the information that matters to a patient. Ensure consent materials are available in multiple languages and that qualified medical interpreters are available when needed; avoid relying solely on automated translation for crucial consent information regarding treatment decisions.

How can hospital leaders address concerns from healthcare professionals who may be reluctant to accept AI-driven medical decisions?

Hospital leaders can communicate that AI is meant to support clinical decision-making, not replace human expertise, experience, and judgment. To address healthcare professionals' concerns about AI-driven medical decisions, leaders should prioritize transparency, ensure appropriate implementation, invest in education and upskilling, and maintain human oversight throughout the process. It is also important to involve clinicians and other healthcare professionals in selecting, customizing, validating, and implementing AI solutions; this creates a sense of ownership and ensures the AI aligns with clinical workflows.

What steps should healthcare organizations take to continuously monitor and evaluate the performance and ethical implications of AI systems after they’ve been implemented?

Healthcare organizations can create a multidisciplinary team responsible for monitoring the performance of AI systems, including clinicians, data scientists, ethicists, and patient representatives. This process can be enhanced by collaborating across organizations to learn from each other and to better account for differences in patient populations and data sets across the country. Healthcare organizations should establish an AI Governance Committee responsible for partnering with business units to regularly monitor model performance through dashboards and key performance indicators (KPIs), conduct periodic audits and ethical reviews, implement a robust incident reporting system, engage in ongoing stakeholder feedback collection, and ensure continuous updates to AI systems based on new data and evolving ethical standards. It is crucial to regularly assess the quality and representativeness of the data used to train and operate AI systems, and the outcomes they produce, to allow continued learning and improvement; we must create protocols and processes that are adaptive to the changing landscape of technology, healthcare, and regulation.
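As a minimal sketch of the kind of KPI-based monitoring described above, the function below compares a deployed model's recent AUROC against its pre-deployment baseline and flags degradation for the governance committee's review. The baseline value and alert threshold are illustrative assumptions, not prescribed figures.

```python
# Minimal post-deployment performance check; thresholds are illustrative.
from sklearn.metrics import roc_auc_score

BASELINE_AUROC = 0.85     # assumed figure from pre-deployment validation
ALERT_DROP = 0.05         # degradation that triggers a governance review

def performance_alert(y_true, y_score) -> bool:
    """Return True if recent performance has degraded enough to review."""
    current = roc_auc_score(y_true, y_score)
    print(f"current AUROC = {current:.3f} (baseline {BASELINE_AUROC:.3f})")
    return (BASELINE_AUROC - current) > ALERT_DROP
```

Run on a rolling window of recent predictions and outcomes, a check like this can feed the dashboards and incident reporting process the committee oversees.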