
How healthcare organizations can prioritize AI governance

As artificial intelligence continues to gain traction in healthcare, health systems and other stakeholders must build effective AI governance programs.

AI continues to take the healthcare industry by storm, with leaders increasingly interested in how the tools could help solve problems in administration, operations and clinical care. As the technology advances, healthcare use cases are likely to expand.

But to realize the potential of this emerging technology, healthcare organizations must effectively navigate evolving hurdles in AI development, validation and deployment.

One of these challenges -- AI governance -- will require a significant lift from health systems, as governance guides how an organization implements and oversees the use of AI in compliance with applicable laws and best practices.

During a recent webinar, leaders from Sheppard Mullin Richter & Hampton LLP came together to outline how health systems, hospitals and other provider organizations should approach building an AI governance program.

The importance of healthcare AI governance

The presenters emphasized that laying the foundation for effective governance starts by understanding how AI is incorporated into healthcare services, medical devices and operating solutions.

They noted that AI is not a single technology but a set of capabilities that might be of interest to healthcare stakeholders. Common forms of AI -- including machine learning (ML), deep learning, natural language processing (NLP), generative AI (GenAI) and large language models (LLMs), as well as software as a medical device and clinical decision support software -- are appropriate for different industry use cases.

NLP, for example, can be utilized in the context of ambient clinical intelligence, which has shown promise in improving operating room efficiency. ML is useful for predictive analytics, while GenAI and LLMs have demonstrated significant potential to improve clinical documentation and nurse efficiency.


But using these tools in a way that prioritizes patient safety and privacy requires healthcare stakeholders to formulate risk management and ethical principles around the organization's use of AI. Taken together, these considerations form the basis of AI governance.

"The term 'AI governance' describes a system of laws, policies, frameworks, practices and processes within an organization that allows the implementation, management and oversight of the use of AI and may address the design, development, procurement and deployment of AI," explained Carolyn Metnick, partner at Sheppard Mullin.

AI governance principles

Metnick further noted that an AI governance approach is key to supporting consistency, standardization and responsible AI use that minimizes the risk of potential harms, such as AI misuse, unreliable outputs and model drift.

These risks can be exacerbated by the scale and scope of AI tools, and not all of them can be anticipated at the outset. Adopting an AI tool, then, can carry greater risks than deploying a comparable non-AI solution.
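Model drift, in particular, lends itself to straightforward statistical monitoring. The sketch below is a minimal Python illustration, not something presented in the webinar: the function, thresholds and data are all hypothetical. It compares a model's score distribution at validation time against its distribution in production using a population stability index (PSI):

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a current score distribution against a baseline;
    larger values suggest the population the model sees has drifted."""
    # Bin edges come from the baseline (validation-era) data.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparsely populated bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Hypothetical usage: risk scores at validation vs. months into production.
rng = np.random.default_rng(42)
baseline_scores = rng.beta(2, 5, size=5_000)
current_scores = rng.beta(3, 4, size=5_000)
psi = population_stability_index(baseline_scores, current_scores)
print(f"PSI = {psi:.3f}; escalate for review: {psi > 0.2}")
```

A common rule of thumb treats a PSI above roughly 0.2 as a signal worth escalating, though any such threshold should be set by the organization's own governance process rather than borrowed wholesale.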

Various laws and guidance have been developed in recent years to help mitigate these risks, some of which are healthcare-specific.

The White House's Blueprint for an AI Bill of Rights, Biden's executive order on safe and trustworthy AI, the Federation of State Medical Boards' AI governance best practices, the National Academy of Medicine's AI Code of Conduct, and the World Health Organization's guidelines for GenAI governance, among others, can all be used to inform the responsible deployment of AI tools in healthcare.

Biden's executive order directed the Department of Health and Human Services to create an AI task force and develop a strategic plan for the responsible deployment of AI in the industry. This plan began to take shape earlier this year, alongside the release of the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) final rule.

In addition to these frameworks, some states are also moving to regulate AI.

In March, the Utah Artificial Intelligence Policy Act (UAIP) was enacted to help establish liability for AI use that violates relevant consumer protection laws. The act -- which went into effect in May -- imposes disclosure requirements for organizations using GenAI in customer interactions.

"The [UAIP] … requires licensed providers to disclose when an individual is interacting with generative AI when receiving healthcare services," noted Lotan Barbaresso, an associate at Sheppard Mullin. "That disclosure must be made verbally at the beginning of a conversation or electronically prior to a written exchange."

Colorado also recently established a similar law, known as the Colorado Artificial Intelligence Act (CAIA), which will take effect on February 1, 2026.

CAIA is focused on protecting consumers from algorithmic discrimination, in part by requiring deployers of AI to make consumers aware that they are interacting with an AI system. Further, the law creates obligations for developers and deployers of high-risk AI systems related to documentation, disclosure, risk analysis, governance and impact assessments.

CAIA conceptualizes a high-risk AI system as any AI that plays a significant role in making a consequential decision, including those related to healthcare, employment, housing and other services. The act also enables the state attorney general to bring legal action for violations of CAIA, such as a healthcare algorithm discriminating based on race or other protected characteristics.

Barbaresso further highlighted that outside of these laws and guidance, industry collaborations like the Coalition for Health AI are also making headway in establishing responsible AI principles.

Many of these principles are centered on transparency, accountability, security, privacy and inclusion, which can be used to build the foundation of an AI governance program.

Key considerations for an AI governance program

Metnick emphasized that, in many ways, a governance framework serves to operationalize an organization's values. By considering existing frameworks and regulations and how to best align with them, healthcare stakeholders can begin developing an AI governance program in line with their enterprise's risk tolerance.

She noted that there is no one-size-fits-all approach, as a healthcare technology startup and an established health system are likely to have very different levels of risk tolerance. They are also unlikely to have identical values, though many might overlap.

Herein lies the value of starting with established governance frameworks: healthcare organizations can review them, determine which align most closely with their own values and tailor aspects of each to suit their needs.

The Organisation for Economic Co-operation and Development's AI principles, the Fair Information Practice Principles and the National Institute of Standards and Technology's AI Risk Management Framework are three well-known frameworks stakeholders can use to inform and tailor their own governance programs.

However, there are key elements that any healthcare AI governance program should have, including a governance committee, policies and procedures, training, auditing and monitoring.

An AI governance committee oversees an organization's development, deployment and use of AI technologies to ensure that these processes are aligned with strategic goals, ethical standards and regulatory requirements.

"The committee should include members from various disciplines, such as healthcare providers, AI experts, and ethicists," explained Esperance Becton, an associate at Sheppard Mullin. "A member of the legal team [can] ensure that different perspectives are considered during the decision-making process. The committee should also help identify potential risks associated with AI technologies and develop strategies to mitigate them. This could include monitoring the evolution of AI within the organization and identifying dangerous or risky use cases."

The development of AI governance policies and procedures provides a structured framework to guide a consistent, standardized approach to how an organization engages with AI, she noted. If clearly defined, these policies can also help establish accountability, data management, and AI testing and validation protocols.

These procedures are also critical for managing incidents related to AI systems, should they arise.
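As one illustration of how such policies might be operationalized, the sketch below (hypothetical Python; the record fields, names and one-year revalidation window are assumptions, not requirements from the webinar) encodes a governance policy as a structured entry in an AI tool inventory, with a simple check that validation is current:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class ApprovalStatus(Enum):
    PROPOSED = "proposed"
    UNDER_REVIEW = "under_review"
    APPROVED = "approved"
    SUSPENDED = "suspended"

@dataclass
class AIToolRecord:
    """One entry in a hypothetical AI inventory kept under the
    organization's governance policies."""
    name: str
    vendor: str
    intended_use: str                  # the documented, approved use case
    accountable_owner: str             # a named individual, not a department
    data_sources: list[str] = field(default_factory=list)
    last_validated: date | None = None
    status: ApprovalStatus = ApprovalStatus.PROPOSED

    def validation_is_current(self, max_age_days: int = 365) -> bool:
        """Policy check: has the tool been revalidated within the window?"""
        if self.last_validated is None:
            return False
        return (date.today() - self.last_validated).days <= max_age_days

# Example entry for an imagined sepsis risk model.
record = AIToolRecord(
    name="sepsis-risk-v2",
    vendor="ExampleVendor",
    intended_use="Early warning for inpatient sepsis risk",
    accountable_owner="Chief Medical Information Officer",
    data_sources=["EHR vitals", "lab results"],
    last_validated=date(2024, 3, 1),
    status=ApprovalStatus.APPROVED,
)
print(record.name, record.status.value, record.validation_is_current())
```

Recording policies in a machine-checkable form like this makes accountability and revalidation requirements auditable, which supports the monitoring work described next.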

As with any tool, regular training is necessary to ensure that AI is used appropriately, based on the user's staff role, the organization's policies and applicable law.

Auditing and monitoring help organizations learn how AI is being used, by whom and for what purpose. These facets of AI governance ensure that any tool used within the enterprise is vetted and formally approved, and that its performance is maintained over time. This aspect of the framework is also where stakeholders would lay out an incident response plan -- similar to a disaster recovery plan for data -- addressing who an incident should be referred to, how that process is documented, whether an algorithm should be suspended and for how long, and whether any state or federal regulators should be notified.
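A lightweight sketch of what that could look like in practice follows; it is hypothetical Python, and the audit fields, performance metric and threshold are illustrative assumptions rather than the presenters' recommendations:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_governance.audit")

# Hypothetical performance floor set by the governance committee.
MIN_ACCEPTABLE_AUC = 0.75

def log_inference(tool_name: str, user_role: str, purpose: str) -> None:
    """Audit trail: record who used which tool, and for what purpose."""
    logger.info("tool=%s role=%s purpose=%s at=%s",
                tool_name, user_role, purpose,
                datetime.now(timezone.utc).isoformat())

def review_performance(tool_name: str, current_auc: float) -> str:
    """If monitored performance drops below the policy floor,
    suspend the tool and flag it for the incident response process."""
    if current_auc < MIN_ACCEPTABLE_AUC:
        logger.warning("tool=%s AUC=%.3f below floor %.2f; suspending",
                       tool_name, current_auc, MIN_ACCEPTABLE_AUC)
        return "suspended"  # hand off to the documented incident workflow
    return "active"

log_inference("sepsis-risk-v2", "registered_nurse", "discharge planning")
print(review_performance("sepsis-risk-v2", current_auc=0.71))
```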

Along with these considerations, the presenters emphasized the importance of privacy, security, transparency and interpretability when developing an AI governance framework.

"Positive healthcare experiences often are the result of patient trust in the health system and in the professionals delivering the care. So, transparency and interpretability of AI will be paramount as the technology continues to evolve further," Becton said. "We want to make sure that AI systems are open to scrutiny and that they are understandable to users."

Shania Kennedy has been covering news related to health IT and analytics since 2022.

