Ethical Issues Loom as Artificial Intelligence Shows Promise for Health Information

Artificial intelligence (AI) holds plenty of potential in healthcare. The AI market is expected to grow at a compound annual growth rate of 36.4 percent from 2024 to 2030, according to Grand View Research.

Clinical decision support, surgical assistance, personalized medicine, patient monitoring, and operations are just a handful of possible applications that healthcare professionals could embrace. At the same time, administrative applications, such as automated coding, could benefit health information (HI) professionals.

Because AI is fueled by large volumes of data, HI professionals are positioned to take the helm as ethical stewards as tools are developed and deployed.

“The needs of the patient come first. In today's evolving healthcare environment, ‘prioritizing patient needs’ underscores the indispensable role of robust data. Due to their expertise encompassing data governance and analytics, HI professionals are poised to play a fundamental role in establishing and sustaining enterprise quality management systems for AI,” says Shauna Overgaard, PhD, manager of artificial intelligence at the Mayo Clinic in Minnesota.

HI professionals can step up and address data integrity, harmonization, and governance concerns as AI tools are developed and used, according to Overgaard.

Data security and privacy – oft-mentioned concerns as the healthcare industry adopted an array of electronic solutions over the past several decades – have bubbled to the top of the ethical concerns surrounding AI. Because AI is built on large volumes of observations and data, the call to keep this information private and secure continues to grow.

Keeping Patients in the Loop

In addition to safeguarding data, healthcare organizations need to ensure that patients are cognizant of how their personal data will be used as AI tools come into play.

“As we develop AI in healthcare, we have an opportunity, if not a moral imperative, to provide patients with autonomy and ownership of their data. Part of this means transparent communication about the intended use of patient data, ensuring rigorous data anonymization and security, and establishing robust governance frameworks, which are early steps,” Overgaard says.

To ensure that patients are aware of how their data is being used in AI applications, Overgaard suggests that, at the very least, healthcare organizations take the following steps:

  • Consult patients;
  • Obtain informed patient consent for data usage;
  • Discuss the risks associated with data sharing;
  • Adhere to privacy regulations; and
  • Consider the potential benefits to patients of having visibility into the output of AI models.

Preventing AI Bias

While privacy, security, and consent loom as pressing ethical issues, the rise of AI has brought an additional concern to the table: bias. Without accurate and complete data, AI tools can be built on flawed foundations and, as a result, may function in ways that discriminate against certain populations.

“Perhaps the greatest ethical hazard is found in the fact that all these AI systems learn from the data that they're given. They're basically interpolating that data,” says William Bosl, PhD, a professor at the University of San Francisco School of Nursing and Health Professions. “And so, if the data is biased, then they learn biased ways of processing information … so the tools could be operating based on information about best treatments of certain populations, but not necessarily best for that particular patient. That's a danger.”

Bosl, for example, has developed clinical decision support algorithms for use in African countries. In that context, it is important to avoid leveraging algorithms that have been trained on North American populations.

Consider the following: The criteria for determining whether mental health treatment is warranted – and which treatment is needed – might differ in that cultural context, making data collected elsewhere a biased foundation for the AI tool, Bosl notes.
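To make the mechanism concrete, consider the minimal sketch below. It is written in Python with entirely synthetic data – nothing here is drawn from Bosl’s actual algorithms – and shows how a classifier trained almost exclusively on one population interpolates that population’s pattern and can fail badly on a second population whose feature-outcome relationship differs.

    # Minimal sketch: a model trained mostly on population A interpolates A's
    # pattern and underperforms on population B. All data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def simulate(n, flipped):
        # A symptom score x predicts treatment need y; the direction of the
        # relationship differs by population (a stand-in for differing
        # clinical or cultural context).
        x = rng.normal(size=(n, 1))
        w = -2.0 if flipped else 2.0
        p = 1.0 / (1.0 + np.exp(-w * x[:, 0]))
        return x, rng.binomial(1, p)

    # Training data: 95 percent population A, 5 percent population B
    xa, ya = simulate(950, flipped=False)
    xb, yb = simulate(50, flipped=True)
    model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

    # Held-out evaluation, one population at a time
    for name, flipped in [("Population A", False), ("Population B", True)]:
        x_t, y_t = simulate(2000, flipped)
        print(name, "accuracy:", round(model.score(x_t, y_t), 2))
    # Typical result: high accuracy on A, far below chance on B, because the
    # model learned A's pattern rather than B's.

The specific numbers are beside the point; the mechanism is that the model faithfully reproduces whatever population dominated its training data.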

Overgaard points out that using biased data can further perpetuate biases that already exist in healthcare.

“The demonstrated systematic biases in AI systems can result in disparate and potentially unfair treatment, particularly in exacerbating care disparities related to social determinants of health,” Overgaard says. “Carefully identifying bias insertion points across the lifecycle and proactively addressing them is crucial for ensuring the equitable and effective deployment of AI in healthcare, and this demands vigilant attention from HI professionals.”

As such, HI professionals need to ensure they are collecting accurate and complete data – data that is “reflective of that patient experience, that it's complete, that it's an accurate depiction, and that it doesn't introduce unnecessary bias that can introduce problems… if the quality of the data is questionable, that's going to lead to potential challenges with any AI tool that would be generated using that data,” says David Marc, PhD, CHDA, associate professor, department chair, and program director of the Health Informatics Graduate Program at The College of St. Scholastica in Duluth, MN.
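What might those checks look like in practice? The sketch below offers one minimal, hypothetical example in Python using pandas – the column names, plausibility ranges, and example records are illustrative assumptions, not drawn from Marc’s program.

    # Minimal data-quality audit sketch: completeness, plausibility, and
    # representativeness. Column names and ranges are illustrative.
    import pandas as pd

    def audit(df: pd.DataFrame) -> dict:
        report = {}
        # Completeness: share of missing values per column
        report["missingness"] = df.isna().mean().round(2).to_dict()
        # Accuracy: count values outside a clinically plausible range
        report["implausible_age"] = int(((df["age"] < 0) | (df["age"] > 120)).sum())
        # Representativeness: does the sample reflect the served population?
        report["group_mix"] = df["demographic_group"].value_counts(normalize=True).to_dict()
        return report

    example = pd.DataFrame({
        "age": [34, 57, -1, 88],
        "demographic_group": ["A", "A", "A", "B"],
        "hba1c": [5.6, None, 7.1, 6.4],
    })
    print(audit(example))  # flags the impossible age and the skewed group mix

Checks like these are cheap to run routinely, and surfacing problems before model training is far less costly than discovering them in a deployed tool.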

To combat this bias, Marc recommends that HI professionals and others heed “algorithmic discrimination protections,” which call for assessing both the data sources and the algorithms themselves to ensure equity.
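One common form such an assessment takes is a subgroup performance audit. The minimal sketch below – with hypothetical labels, predictions, and alert tolerance; it is not a mandated standard – compares a model’s true-positive rate across demographic groups, an equal-opportunity style check.

    # Minimal equity audit sketch: compare true-positive rates across groups.
    # Data, group labels, and the alert tolerance are all illustrative.
    import numpy as np

    def tpr_by_group(y_true, y_pred, groups):
        # True-positive rate per group; a large gap suggests the model
        # serves some populations worse than others.
        rates = {}
        for g in np.unique(groups):
            mask = (groups == g) & (y_true == 1)
            rates[g] = float(y_pred[mask].mean()) if mask.any() else float("nan")
        return rates

    y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])   # observed need for care
    y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0])   # model's flagged-for-care calls
    groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

    rates = tpr_by_group(y_true, y_pred, groups)  # {'A': 0.75, 'B': 0.0}
    gap = max(rates.values()) - min(rates.values())
    if gap > 0.10:                                # illustrative tolerance
        print(f"Equity review warranted: TPR gap of {gap:.2f} across groups")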

It’s especially important to determine what data vendors relied on to create their AI tools. Because vendors often rely on historical data sets, healthcare leaders should determine whether the data meshes with the AI technology’s purpose. Leaders also need to determine whether the AI model has been tested and whether the tool can perform in their environment. In addition, healthcare organizations should consider the value of working with vendors that employ HI professionals who help them use data optimally in designing AI solutions, according to Marc.

“Unfortunately, HI professionals at large probably are not sitting at the table. But there are organizations that are investing in those relationships with successful outcomes,” Marc says.

Following Ethical Guidance

Healthcare organizations and HI professionals need to purposefully address ethical issues.

“First, I recommend HI professionals form a close partnership with patient advocacy groups and legal and ethical experts to ensure they follow ethical guidelines that prioritize patient confidentiality, transparency in AI algorithms, and responsible management of sensitive health data,” Overgaard says. “As AI becomes an integral part of HI roles, it's crucial to adhere to these principles.”

To help in this pursuit, the Coalition for Health AI (CHAI), a community of academic health systems, organizations, and expert practitioners of AI and data science, has published a blueprint and is now writing up the results of its modified Delphi study, which offers guidance and assurance for AI evaluation and implementation in healthcare. The coalition is incorporating as a 501(c)(6) nonprofit and will continue public-private partnerships with federal agencies, including the Food and Drug Administration, the National Institute of Standards and Technology, the National Academy of Medicine (NAM), the Office of the National Coordinator for Health Information Technology, and the National Institutes of Health, to align with nationwide standards for responsible AI.

“Our commitment is to provide ongoing guidance and support to make the evaluation and implementation of health AI effective, fair, and beneficial to patients, and all are invited to join this effort. We are uniting in a pre-competitive space for the benefit of our patients. We’re dedicated to avoiding an ‘ivory tower’ approach or a detached indifference to practicality. We need, and welcome, a diverse representation of healthcare to advance responsible health AI,” says Overgaard, who is a member of the CHAI steering committee and co-lead of the CHAI transparency working group. 

In addition, researchers from the Mayo Clinic and Duke University in November 2023 proposed a framework to align translational and regulatory science by tailoring quality management system (QMS) principles throughout the life cycle of developing, deploying, and using AI technologies in healthcare. The framework continues to evolve, with reporting guidelines in development and adoption underway at Mayo. Researchers are working with the NAM AI Code of Conduct to identify the roles and responsibilities of each stakeholder at each stage of the AI lifecycle.

“We aim to expedite the translation of AI innovation from research to practice, but we remain steadfast in our duty to adhere to risk-based evaluation and transparent reporting. Aligning translational and regulatory science enables healthcare organizations to adhere to changing regulations, minimize redundancies, and align internal governance practices with a commitment to scientific rigor and medical excellence,” Overgaard says. “As a rudimentary description, an enterprise QMS [quality management system] establishes key actions for successfully testing and integrating AI in healthcare. The primary objective may be to create a structure that safeguards the responsible and secure integration of AI technologies into clinical settings while upholding the utmost quality and patient well-being standards. This is work HI professionals are well-positioned to lead.”

For instance, to foster a proactive quality culture for AI, HI professionals can lead by:

  • Staying on top of AI development;
  • Incorporating necessary software during the transition from model to product;
  • Ensuring adherence to QMS procedures for industry and regulatory compliance;
  • Contributing to the objective testing and auditing process for technology components;
  • Applying best practices for software life-cycle management tailored for AI;
  • Establishing a compliance-facilitating infrastructure; and
  • Leading the creation of policies and procedures within the QMS, focusing on governance, prioritization, development, evaluation, maintenance, monitoring, issue reporting, and safety surveillance (a monitoring sketch follows below).

“An enterprise QMS framework, informed by regulatory precedents and expert insights, empowers organizations to prioritize patient needs and build trust in adopting innovative AI technologies,” Overgaard says. “The role of the HI professional is foundational to this notion.”
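For the monitoring and safety-surveillance items above, much of the ongoing work can be automated. The minimal sketch below – its baseline figure, window size, and alert threshold are assumptions for illustration, not Mayo’s or CHAI’s practice – tracks a deployed model’s rolling accuracy against its validation baseline and flags drift for issue reporting.

    # Minimal post-deployment monitoring sketch. Baseline, window, and
    # tolerance are illustrative; real values come from validation studies.
    from collections import deque

    BASELINE_ACCURACY = 0.88   # assumed pre-deployment validation result
    TOLERANCE = 0.05           # illustrative alert threshold
    WINDOW = 500               # number of recent cases to track

    recent = deque(maxlen=WINDOW)

    def record_case(prediction: int, outcome: int) -> None:
        # Log whether the prediction matched the observed outcome, then
        # raise a surveillance alert if rolling accuracy drifts downward.
        recent.append(int(prediction == outcome))
        if len(recent) == WINDOW:
            rolling = sum(recent) / WINDOW
            if rolling < BASELINE_ACCURACY - TOLERANCE:
                print(f"ALERT: rolling accuracy {rolling:.2f} is below "
                      f"baseline {BASELINE_ACCURACY:.2f}; file an issue report")

    # Example: feed in (prediction, observed outcome) pairs as cases resolve
    record_case(1, 1)
    record_case(1, 0)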

John McCormack is a Riverside, IL-based freelance writer covering healthcare information technology, policy, and clinical care issues.