
Using Artificial Intelligence in Healthcare: Transforming Care Delivery and Patient Outcomes

Artificial intelligence (AI), sometimes also called augmented intelligence, is revolutionizing healthcare in many ways. AI is already being used to enhance clinical decision-making, improve patient safety, streamline workflows, and address other systemic challenges. It's undergoing major expansion and technological innovation now, but AI isn't new; it has been used in healthcare for decades, especially for clinical decision support. However, with the advent of generative AI (genAI), awareness has reached new highs. AI's applications span multiple domains, from diagnostics and personalized treatment plans to governance and ethical considerations. As genAI continues to evolve, assessing its impact on medical practice and ensuring its responsible deployment is critical.

Before we dive into this concept, here are some things to remember. Everything, including the genAI that seems human, is built on probabilities. The first use of the term artificial intelligence dates to the 1950s, when researchers began to develop techniques to emulate human cognitive functions. In the late 1990s, with the internet and increased computing power, machine learning (ML) emerged, especially supervised ML. For this type of ML, the model developer already has the answer. Think about programming a computer to detect breast cancer: to "train" the model, you would feed in images of tissue with known breast cancer and images without it. In the 2010s, researchers developed neural networks, layered systems built for advanced pattern recognition, in an approach usually called deep learning. Now, we are in the genAI era of large language models (LLMs), which can generate output in multiple forms, such as text, image, or audio.

I drafted this article based on my general knowledge of AI and using references. For transparency, genAI was used to help edit this article! I used a combination of the editor found in Microsoft Word and Grammarly, provided by my university, for editing assistance. I've turned both of these on for my computer, so I don't have to prompt the software for specific assistance. I can also choose to dismiss the notifications.
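To make the supervised-learning idea concrete, here is a toy sketch, not a real diagnostic tool: the developer already has the answers, so the model is fit to labeled examples and then applied to new cases. The two-number "image features" and the nearest-centroid approach are illustrative assumptions only.

```python
# Toy illustration of supervised learning: the labels ("cancer"/"no cancer")
# are known in advance, so the model learns from answer-keyed examples.
# Features here stand in for numeric summaries of an image (hypothetical).

def train_centroids(examples):
    """Average the feature vectors for each label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# "Training" images with known diagnoses, reduced to two toy features.
training = [
    ([0.9, 0.8], "cancer"), ([0.8, 0.9], "cancer"),
    ([0.1, 0.2], "no cancer"), ([0.2, 0.1], "no cancer"),
]
model = train_centroids(training)
print(predict(model, [0.85, 0.75]))  # closest to the "cancer" centroid
```

Real imaging models use deep neural networks over millions of pixels rather than two hand-picked numbers, but the principle is the same: known answers drive the training.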

AI in Diagnostics and Clinical Decision Support

Diagnostic use of AI is among its most transformative applications in healthcare. In 2019, Dr. Eric Topol wrote about the emergence and use of deep learning models for detecting disease in medical images and predicting genetic mutations from histological slides. Now, these models are often built into clinical decision support systems (CDSS) to provide real-time, evidence-based recommendations to clinicians. In this way, AI systems analyze vast amounts of medical literature, patient data, and guidelines to assist healthcare professionals in diagnosing and treating patients more effectively.

Personal anecdotes can sometimes illustrate impact most clearly, and I have one to share about diagnostic AI. In early 2023, a close family member of mine was taking medication for depression due to a death in the family; however, they were not improving. This had been the situation for more than six months. Then, their primary care provider ordered a pharmacogenomic study. A simple cheek swab was collected to provide DNA, which was sent to a specialized lab for analysis. The results of the analysis recommended a medication change, which then resulted in significant improvement. This improved care was directly related to the use of AI analyzing both genetic and medication data to optimize that person's medication use, or, as I like to say, true personalized precision medicine.

Of course, as with all AI, these diagnostic tools are not static—their functioning can change over time, for various reasons, and those changes must be closely monitored for quality control. That leads us to AI’s potential impact on patient safety.

Patient Safety and Adverse Events in an AI World

Because AI models are always changing based on the information they integrate, they are fallible! AI can provide wrong or misleading answers or even fabricate data and information. Again, this is because all AI is a model using probabilities. It is not human (although, of course, humans are fallible, too). The AI for IMPACTS framework emphasizes the need to evaluate AI tools based on integration, governance, transparency, and scalability to ensure their real-world effectiveness. When implementing AI for any purpose, whether clinical or administrative, significant attention must be paid to the accuracy of the use and whether the recommendations change over time. You may hear this referred to as semantic (or meaning) drift. Depending upon the use and the risk associated with it, keeping a human in the decision-making loop might be required. In other cases, if the use is very low risk or has little variation, less monitoring may be required. Many organizations are attempting to determine how best to manage this process, with questions of where in the organization this responsibility belongs, what types of checks should be conducted and how often, and who is qualified or should be trained for this purpose. Luckily, groups such as the Coalition for Health AI (CHAI) are working to develop best practices for monitoring and other important tasks related to AI.
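As one simple illustration of what "checking for drift" can mean in practice, the sketch below compares a model's recent positive-prediction rate to a go-live baseline and raises a flag when the shift exceeds a tolerance. The window sizes, the tolerance, and the escalation step are all assumptions; real monitoring programs track many more statistics than this.

```python
# Minimal drift check: has the model's positive-prediction rate moved
# too far from the rate observed when the model went live?

def positive_rate(predictions):
    """Share of predictions that were positive (1) in a window."""
    return sum(predictions) / len(predictions)

def drift_alert(baseline, recent, tolerance=0.10):
    """Flag when the recent positive rate differs from the baseline
    rate by more than `tolerance` (absolute difference)."""
    return abs(positive_rate(recent) - positive_rate(baseline)) > tolerance

baseline = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 30% positive at go-live
recent   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% positive this month
print(drift_alert(baseline, recent))  # True: escalate to a human reviewer
```

A check like this is cheap to run on a schedule, and the tolerance can be tightened for high-risk clinical uses and loosened for low-risk administrative ones, mirroring the human-in-the-loop judgment described above.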

At the same time, when it works well, AI can improve patient safety by predicting adverse events and reducing medical errors. Singh et al. (2024) proposed that AI-driven models used within an electronic health record (EHR) can identify patients at high risk for complications, such as sepsis or hospital readmissions, enabling timely interventions. Additionally, AI facilitates the identification of medication errors and potential drug interactions, reducing risks associated with polypharmacy. Accurate predictions require accurate models, though—and accurate models must be free from bias.

Addressing Bias and Promoting Equity in AI Models

Obviously, AI has both potential and challenges in its healthcare applications. Bias in AI algorithms has been widely documented, particularly in clinical decision-making tools that incorporate race-based adjustments. Vyas et al. (2020) highlighted that race correction in medical algorithms may contribute to health disparities by influencing treatment decisions and resource allocation. For example, AI models that adjust the estimated glomerular filtration rate (eGFR) based on race may delay specialist referrals for Black patients, exacerbating existing inequities in nephrology care. DeCamp and Lindvall (2023) advocated for continuous auditing of AI models and the development of fairness-aware algorithms to mitigate bias. AI governance frameworks, such as those proposed by the World Health Organization (WHO), emphasize inclusivity, transparency, and accountability in AI deployment.

In reality, the unbiased dataset does not exist, because almost anything can be biased. It could be race, as in these examples, but it could also be insurance status, past healthcare spending, geographic location, or almost any characteristic by which we humans categorize people, places, and entities. Developers of models need to ensure diverse and representative training datasets. Implementers and users of models, like me, must be trained to ask about the intended population for the model, how it compares to the training set, and whether the model was externally validated. Only with all of us paying close attention can AI be optimized to provide equitable healthcare outcomes.
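The eGFR example above can be made concrete. The 2009 CKD-EPI creatinine equation multiplied the result by 1.159 for patients identified as Black, a coefficient removed in the 2021 race-free refit. The sketch below uses the 2009 coefficients to show how that multiplier can push a result just above a referral trigger keyed to eGFR below 30; the specific patient values are illustrative assumptions, and this is not a clinical calculator.

```python
# 2009 CKD-EPI creatinine equation (since replaced by the 2021 race-free
# refit), shown only to illustrate how a race coefficient shifts results.
def egfr_ckdepi_2009(scr_mg_dl, age, female, black):
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient at issue
    return egfr

# Hypothetical patient: 60-year-old woman, serum creatinine 2.0 mg/dL.
without_coeff = egfr_ckdepi_2009(2.0, 60, female=True, black=False)
with_coeff = egfr_ckdepi_2009(2.0, 60, female=True, black=True)
print(round(without_coeff, 1), round(with_coeff, 1))  # roughly 26.5 vs 30.7
```

With the coefficient, the same lab value reads as an eGFR above 30, so a nephrology referral keyed to eGFR below 30 would not fire; without it, the referral triggers. Identical labs, different care, purely because of how the algorithm was built.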

AI in Workflow Optimization and Operational Efficiency

Beyond clinical applications, AI enhances healthcare operations by automating administrative tasks and optimizing resource allocation. AI-powered chatbots and virtual assistants streamline patient triage, reducing the burden on frontline healthcare workers. Additionally, predictive analytics enable hospitals to anticipate patient admissions, optimize staffing, and manage bed availability more efficiently.

Natural language processing (NLP) technologies are now a commodity and have the potential to improve clinical documentation by transcribing and summarizing physician-patient interactions. This reduces clerical workload, allowing clinicians to focus more on patient care. AI-driven automation in medical coding and billing also enhances revenue cycle management, minimizing errors and ensuring compliance with regulatory requirements.

Ethical and Regulatory Considerations in AI Governance

As AI adoption accelerates, robust governance frameworks are essential to ensure ethical deployment. The Department of Health and Human Services (HHS) Assistant Secretary for Technology Policy (ASTP) introduced the Health Data, Technology, and Interoperability rule (HTI-1) in January 2024. This rule sets out requirements for clinical decision support and predictive decision support interventions in the use of certified electronic health records. The US Food and Drug Administration (FDA) has also introduced regulatory pathways for AI-enabled medical devices, emphasizing the need for real-world validation and post-market surveillance.

The AI for IMPACTS framework provides a structured approach to evaluating AI tools based on criteria such as interoperability, cost-effectiveness, and long-term clinical impact. It is important for health information (HI) professionals to be educated and aware of how AI should be implemented and managed. Many developers are working on AI models and incorporating AI into as many places as possible. However, without careful management and governance, the best model in the world can be useless. HI professionals should also be paying attention to what is happening with AI regulations at different levels of government.

I could not develop a model on my best day; however, when I attended the Office of the National Coordinator for Health Information Technology (ONC) meeting in December 2023 and learned about HTI-1, I knew it was going to be a game changer. I started reading and listening to as much information on the topic of AI as I could. I sought out the applicable federal regulations at agencies such as the FDA and National Institute of Standards and Technology (NIST). I made a pact with myself that I was going to spend 15-20 minutes a day, minimum, experimenting with AI tools. This has led to me now chairing a workgroup at our university to conduct a university-wide AI needs assessment to develop training for faculty, staff, and students; partnering with our VP of AI to launch a low-cost, HIPAA- and FERPA-compliant chat tool, especially for our students; and receiving regular emails from our government liaisons to comment on legislation being introduced in our state. Some days, it seems like too much, but I was at a conference recently, listening to someone speak on AI, and I wished I were 30 years younger so that I could be involved in the long-term roll-out of this amazing technology.

AI is reshaping healthcare by improving diagnostics, enhancing patient safety, optimizing workflows, and addressing systemic biases. However, its integration must be guided by ethical considerations, rigorous validation, and comprehensive governance frameworks. As AI evolves, interdisciplinary collaboration among clinicians, policymakers, HI professionals, and many others will be essential to harness its full potential while ensuring equitable and safe healthcare delivery.


Susan H. Fenton, PhD, RHIA, ACHIP, FAHIMA, FAMIA, is Dr. Doris L. Ross Professor and Vice Dean for Education at the McWilliams School of Biomedical Informatics at UTHealth Houston.