Health Data, Privacy and Security

Updating HIPAA Security to Respond to Artificial Intelligence

Artificial intelligence (AI), also commonly referred to as augmented intelligence in healthcare applications, is the use of mathematical models built with known algorithms and trained on data sets for specified use cases, and it has the potential to support clinical providers and patients in the complex practice of medicine.  

A common misconception about AI in medicine is that it will "replace" physicians and other providers. In practice, the best use cases for AI in healthcare support medical decision-making rather than supplant it. This matters because clinical decision support tools reduce medical errors, aid diagnosis and treatment planning, and provide nudges for best-practice advisories and preventive care reminders.  

Furthermore, AI assists clinicians in a data-intensive field by providing timely information on a patient's current condition, along with laboratory and radiology results, which helps identify gaps in care. With that information in hand, clinicians can anticipate future conditions or diseases, and predictive analytics can generate early alerts. AI is also used in wearable devices and remote patient monitoring, including continuous glucose monitoring, real-time insulin pump adjustments, and smart implantable devices such as pacemakers, defibrillators, and deep brain stimulators. 
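
To make the predictive-alert idea concrete, the following minimal sketch (illustrative thresholds and function names, not any device vendor's actual algorithm) extrapolates a glucose trend and flags predicted hypoglycemia before it occurs:

```python
# Minimal sketch of a predictive CGM alert: extrapolate the recent glucose
# trend and warn before the hypoglycemic threshold is crossed.
# The 70 mg/dL threshold and 30-minute horizon are illustrative assumptions.

def predict_glucose(readings_mg_dl: list[float], minutes_ahead: float = 30.0,
                    interval_min: float = 5.0) -> float:
    """Linear extrapolation from the last two readings (taken 5 minutes apart)."""
    slope = (readings_mg_dl[-1] - readings_mg_dl[-2]) / interval_min  # mg/dL per min
    return readings_mg_dl[-1] + slope * minutes_ahead

def hypoglycemia_alert(readings_mg_dl: list[float], threshold: float = 70.0) -> bool:
    return predict_glucose(readings_mg_dl) < threshold

# Falling trend: 110 -> 100 -> 88 mg/dL at 5-minute intervals
print(hypoglycemia_alert([110.0, 100.0, 88.0]))  # True: ~16 mg/dL projected in 30 min
```

A production system would use a trained model over many more readings, but the principle, predict ahead and alert early, is the same.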

Medical errors remain a persistent concern across the industry. A 2022 review published in the Irish Journal of Medical Science examined how AI in healthcare presents significant opportunities to improve quality and safety through big data analytics applied across specialized healthcare applications. Among the cited benefits are more accurate and efficient diagnoses.  

For example, radiologists can use AI to assist in imaging analysis, improving efficiency and accuracy and potentially easing workforce productivity pressures. AI can also help with patient-facing tasks such as appointments, lab and imaging results, and messaging with providers, increasing patient satisfaction through better communication. Providers can use large language models such as ChatGPT to generate a first-draft response to patient messages.  
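
As a sketch of how such drafting might work, the example below uses the OpenAI Python SDK; the model name and prompts are illustrative assumptions, and any real deployment would require a HIPAA-eligible service and a business associate agreement before PHI is sent to it:

```python
# Hedged sketch of drafting a patient-message reply with an LLM.
# Assumes the openai Python SDK (v1) with OPENAI_API_KEY set in the
# environment; the model name and prompts are illustrative. A real system
# needs a HIPAA-eligible deployment and a business associate agreement
# before any PHI reaches the service.
from openai import OpenAI

client = OpenAI()

def draft_reply(patient_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Draft a courteous reply for a clinician to review "
                        "and edit before sending. Do not offer a diagnosis."},
            {"role": "user", "content": patient_message},
        ],
    )
    return response.choices[0].message.content  # clinician reviews and edits

print(draft_reply("Are my lab results back yet?"))
```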

Data analytics is fundamental to certain surgical robotics and can make surgeries safer and more efficient, allowing surgeons to perform more complicated or precise procedures with better patient outcomes. Consumer wearables that track health information can apply AI to process large amounts of data and help providers interpret its significance, which can then be reviewed with patients during clinic visits or virtual/remote care.  

A 2020 article in the AMA Journal of Ethics reviewed several serious risks related to AI in healthcare. System malfunctions pose a major risk as ever-larger amounts of data are stored and received, which can cause system shutdowns; when AI is in use, downtime can also last longer because of the quantity of data that must be restored. These systems require constant monitoring and upgrades, along with increasingly complex privacy protections.  

Using de-identified medical data for AI training is a form of data repurposing: the data serve a purpose other than the one for which they were collected. While such repurposing is necessary for AI training, it carries a significant risk of private information being accessed by unauthorized parties. It should therefore require explicit patient consent, since it is a use case not covered by the standard consent obtained upon hospital admission for treatment, payment, and healthcare operations under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule. This would constitute a special-purpose informed consent when protected health information (PHI) is used for research and AI purposes. Importantly, current HIPAA rules do not apply to PHI after it has been de-identified for this purpose. 
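
To illustrate the de-identification step that precedes such repurposing, here is a simplified sketch in the spirit of HIPAA's Safe Harbor method; the field names are hypothetical, and a real pipeline must address all 18 Safe Harbor identifier categories:

```python
# Simplified sketch of Safe Harbor-style de-identification: drop direct
# identifiers and generalize quasi-identifiers before records are repurposed
# for model training. Field names are hypothetical; a complete pipeline must
# cover all 18 HIPAA Safe Harbor identifier categories.

DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "phone", "email", "street_address"}

def deidentify(record: dict) -> dict:
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "zip" in out:                      # generalize ZIP to its first 3 digits
        out["zip"] = str(out["zip"])[:3] + "XX"
    if "age" in out and out["age"] > 89:  # Safe Harbor aggregates ages over 89
        out["age"] = "90+"
    return out

record = {"name": "Jane Doe", "mrn": "12345", "zip": "21201",
          "age": 93, "diagnosis": "E11.9"}
print(deidentify(record))  # {'zip': '212XX', 'age': '90+', 'diagnosis': 'E11.9'}
```

As the reidentification research discussed later suggests, this kind of field-level scrubbing reduces risk but does not eliminate it.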

Data Security and Ethical Implications 

HIPAA provides data privacy and security provisions for safeguarding medical information and regulates how healthcare providers and affiliated parties handle and use patient data. Subsequent legislation and rulemaking, the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 and the Omnibus Rule of 2013, added further protections for PHI.  

The HIPAA Security Rule requires physical safeguards, such as restricting access to offices; technical safeguards, such as computer logins and encryption; and administrative safeguards, such as strict policies and procedures along with a training program. The HIPAA Privacy Rule regulates the uses and disclosures of PHI.  
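
As a small illustration of a technical safeguard, the sketch below encrypts a PHI field at rest using the widely available `cryptography` package; key management (secure storage, rotation, access control) is the hard part in practice and is omitted here:

```python
# Minimal sketch of encryption at rest for a PHI field, using symmetric
# (Fernet) encryption from the `cryptography` package. Key management is
# deliberately omitted and is the difficult part in a real deployment.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key management system
fernet = Fernet(key)

ciphertext = fernet.encrypt(b"Patient: Jane Doe, DOB 1950-01-01")
plaintext = fernet.decrypt(ciphertext)  # only key holders can decrypt

print(ciphertext[:24])  # opaque bytes, safe to store
print(plaintext)        # original PHI, recovered with the key
```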

The intersection of HIPAA and AI arises because AI depends on large amounts of data, and in healthcare that data often includes PHI governed by HIPAA.  

Organizations need to improve data security, reduce the risk of cyber threats, and maintain constant vigilance for potential weaknesses in their administrative, technical, and physical safeguards. Given AI's growing role in healthcare, HIPAA regulations will need to be updated as the technology evolves. Proposed updates include clarifying the language in the Notice of Privacy Practices on how patient data is used, to promote consumer comprehension and transparency; requiring encryption of data in transit and secure transfer of PHI to decrease the risk of cyberattacks; and eliminating third-party vendor access.  

Alternatively, broadening the definition of "covered entity" and redefining security and privacy liability could improve consumer protections. Organizational policies and procedures should be updated to address AI applications for administrative and medical-support functions, and employee HIPAA training should cover AI functions.  

Any discussion of healthcare AI and HIPAA security must include ethical implications. Patient safety is paramount, and the principle of nonmaleficence asks how to protect patients from harm. Regulation of AI is frequently outpaced by development and innovation, and the pace of that innovation shows no signs of slowing.  

As previously mentioned, AI relies on large data sets to generate accurate diagnoses, but those data sets may reflect explicit or implicit bias. Assessing and controlling this bias will be crucial to ensuring equity of care. Cybersecurity concerns can also lead to HIPAA violations: the huge data sets needed for AI/machine learning are a likely target for hacking and cyberattacks, and breaches are likely to expose larger volumes of information, potentially spanning a patient's entire lifespan and including genetic predispositions and specially protected populations.  
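
One simple way to begin assessing such bias is a subgroup performance audit, comparing a model's accuracy across demographic groups. The sketch below uses illustrative data and column names:

```python
# Minimal sketch of a bias audit: compare model accuracy across demographic
# subgroups. The data and column names are illustrative.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "label":     [1, 0, 1, 1, 0, 1],
    "predicted": [1, 0, 1, 0, 0, 0],
})

df["correct"] = df["label"] == df["predicted"]
by_group = df.groupby("group")["correct"].mean()
print(by_group)                          # A: 1.00, B: 0.33 -- a gap worth investigating
print(by_group.max() - by_group.min())   # a simple disparity measure
```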

Data ownership and breach liability for AI have not been fully determined. In addition, de-identification is an ineffective privacy strategy for research: patients have been successfully reidentified in data sets that meet HIPAA standards, placing patient privacy at risk. Who, then, would be held responsible? Data pools may also cross national and international borders, yet privacy rules are not uniform across jurisdictions. Because HIPAA is a US federal law, prosecution and compensation may not be possible outside the nation's borders, and developers will need to ensure compliance with each international jurisdiction's requirements. 

Quality healthcare takes precedence, and healthcare mistakes can have devastating consequences. For example, IBM Watson for Oncology, an AI-based system, was shown to give "unsafe and incorrect" treatment recommendations during preclinical testing. Other ethical considerations include provider liability: some AI functions have been shown to provide more accurate diagnoses than humans. Is there, then, an obligation to use AI if it is more effective? And if it is used, who is liable when the AI and the provider disagree, or when harm is incurred?  

Providers are required to explain procedures in understandable language, yet given the technology's complexity, they may not be able to properly explain AI functions. Patients, for their part, may lack the digital literacy to understand AI's implications and risks. Even AI developers may not understand how a system reached its conclusions or diagnoses. In addition, providers may be unable to corroborate AI recommendations because modern models can obscure their inner workings, leaving the technology opaque to scrutiny.  

Currently, if AI/machine learning is used, the provider is ultimately responsible. But what if a patient determines the risk is too great? Can the patient withdraw consent, and how would their data be removed from the data sets and the algorithms trained on them? In addition, user agreements with AI-powered health apps, social media, and chatbots are not held to HIPAA standards. There is much to consider, and further research is needed. 

AI Drives HIPAA Changes 

Given all the operational and ethical considerations, what changes are necessary for HIPAA security? Proactive vulnerability assessment and fortification, along with enhanced encryption and anonymization requirements, are necessary. Access management can also be improved, building on the "need to know" and "minimum necessary" standards. The Department of Health and Human Services (HHS) has advocated for enhanced care coordination, for which sharing pertinent information is vital; the need to know can be broadened, along with the amount of information shared, to achieve greater care coordination while still protecting PHI. 
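
In code, the "minimum necessary" standard amounts to filtering what each role can see. The sketch below uses hypothetical roles and field sets to show the idea:

```python
# Sketch of the "minimum necessary" standard: each role receives only the
# fields it needs. Roles and field sets are hypothetical policy choices.
ROLE_FIELDS = {
    "billing":   {"patient_id", "insurance_id", "charges"},
    "clinician": {"patient_id", "diagnosis", "medications", "allergies"},
    "scheduler": {"patient_id", "next_appointment"},
}

def minimum_necessary(record: dict, role: str) -> dict:
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"patient_id": "p-001", "diagnosis": "E11.9", "insurance_id": "INS-9",
          "charges": 120.50, "medications": ["metformin"],
          "next_appointment": "2024-07-01"}
print(minimum_necessary(record, "billing"))  # only billing-relevant fields
```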

The Notice of Privacy Practices could include language in the consumer rights section on how AI is applied to a patient's PHI. Under the HIPAA Security Rule, 45 CFR 164.304, the technical safeguards could address the use of AI and its ability to detect nuances and anomalies in data use. Furthermore, under 45 CFR 164.312(d), it must be determined how AI fits into person or entity authentication.  

Having the government formulate AI policy is another direction. The Food and Drug Administration (FDA) can take the lead, as it has determined that oversight is necessary to guarantee the safety of patients and their PHI. To this end, the FDA released an outline in 2019 proposing to regulate AI technologies in medical devices, and in January 2021 it published its "Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan." Under this framework, AI applications can be iteratively upgraded while still being adequately monitored for patient safety and privacy.  

One example of HIPAA security and AI in healthcare is Hank.ai, an AI-focused company providing productivity-enhancing software and APIs (application programming interfaces) for business processes such as document indexing, data entry, medical coding, and auditing.  

Hank.ai's stated mission is to help humans by eliminating repetitive, laborious manual tasks. The platform is built on natural language processing, computer vision, machine learning, neural networks, and autonomous computer programming. It augments existing workflows while continuously learning, enabling clients to improve the efficiency, accuracy, and charge capture of human medical coders, clinical documentation integrity specialists, and medical insurance auditors. 

Accuracy can improve over human-only data entry because the Hank.ai platform is trained on a massive data set, allowing it to classify, extract, and organize data from difficult-to-read documents, handwritten forms, PDFs, images, emails, and more. This can reduce costly errors, the time required to submit insurance claims, and the rate of claim denials. 

Another example of AI and HIPAA security is Darktrace. The product monitors activity across an organization, down to individual keystrokes, to build a large data set of "normal" behavior; as with Hank.ai, the platform rests on collating data into large data sets. Darktrace can then identify attack patterns that deviate from that normal activity, and when a breach is detected, the threat can be mitigated in real time rather than after the fact. This offers another defense for meeting the HIPAA technical security standards. 
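
Darktrace's own models are proprietary, but the underlying pattern, learn a baseline of normal activity and flag deviations, can be sketched with scikit-learn's IsolationForest on made-up activity features:

```python
# Generic sketch of baseline-and-deviation anomaly detection (not Darktrace's
# actual method). Features are illustrative: logins per hour and megabytes
# of PHI accessed per session.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 500 sessions of "normal" activity: ~5 logins/hour, ~20 MB accessed
normal_activity = rng.normal(loc=[5.0, 20.0], scale=[1.0, 5.0], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_activity)  # learn what "normal" looks like

print(detector.predict([[40.0, 900.0]]))  # [-1] -> anomaly: possible exfiltration
print(detector.predict([[5.0, 21.0]]))    # [1]  -> consistent with the baseline
```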

AI in cybersecurity is a major topic of discussion within the healthcare industry. The HHS Office for Civil Rights (OCR) maintains a publicly accessible database of breaches affecting at least 500 individuals. Healthcare organizations experience the highest reported cost and frequency of data breaches, with a 51 percent increase in breaches since 2019 and a 10 percent increase in cost from 2020 to 2021.  

Machine learning-based intrusion detection systems show promise in protecting healthcare organizations by focusing on four main components: 

  • Confidentiality, protecting secure communications from outside monitoring 
  • Authentication, protecting against unauthorized access 
  • Integrity and accuracy of care data 
  • Access controls, ensuring only specific authorized entities can access the data 

Underpinning all four of these components are hybrid intelligent intrusion detection systems (HIIDS), used to detect and prevent cyberattacks.  
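
The "hybrid" in HIIDS refers to combining signature rules for known attacks with a learned anomaly score for unknown ones. The sketch below shows that combination with toy rules and a stand-in score; none of it reflects a specific product:

```python
# Toy sketch of a hybrid intrusion detection decision: signature matching for
# known attacks plus an anomaly score for novel behavior. The signatures,
# event fields, and scoring heuristic are all illustrative stand-ins.

KNOWN_BAD_SIGNATURES = {"' OR 1=1 --", "<script>", "../../etc/passwd"}

def signature_alert(payload: str) -> bool:
    return any(sig in payload for sig in KNOWN_BAD_SIGNATURES)

def anomaly_score(event: dict) -> float:
    # Stand-in for a trained model (e.g., the IsolationForest sketched above):
    # large transfers at odd hours score higher.
    score = event["mb_transferred"] / 100.0
    if event["hour"] < 6:
        score += 0.5
    return score

def hybrid_ids_alert(payload: str, event: dict, threshold: float = 1.0) -> bool:
    return signature_alert(payload) or anomaly_score(event) > threshold

# Clean payload, but 350 MB transferred at 3 a.m. -> flagged by the ML arm
print(hybrid_ids_alert("SELECT * FROM notes", {"mb_transferred": 350, "hour": 3}))
```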

Optimizing privacy and security compliance with AI models should include privacy-preserving techniques for protecting PHI. Examples include cryptographic techniques, which use mathematical algorithms to transform messages so they are difficult to decipher; differential privacy, a mathematical model that adds random noise to sensitive information to obscure each individual's contribution; and federated learning, which trains AI models across decentralized data sources so that raw data never leaves its origin. These approaches allow insights to be extracted securely, supporting the development of new AI applications.  
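
Differential privacy is the easiest of these to show concretely. For a count query, Laplace noise scaled to sensitivity/epsilon masks any single patient's presence; the epsilon value below is an illustrative choice:

```python
# Minimal sketch of differential privacy for a count query: add Laplace noise
# calibrated to sensitivity / epsilon so no individual's inclusion is
# revealed. The epsilon of 0.5 is an illustrative privacy budget.
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# "How many patients in the training set have diagnosis E11.9?"
print(dp_count(1204))  # e.g., 1206.3: near the truth, but deniable for any one patient
```

Smaller epsilon values add more noise and stronger privacy; larger values favor accuracy.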

In conclusion, AI tools can be a valuable asset for healthcare organizations in meeting their HIPAA security and privacy compliance obligations. The key to AI's efficacy for HIPAA compliance is specialization: when the technology is optimized for security and privacy in the healthcare context, organizations can improve security while reducing the human cost by automating many of the tasks involved in compliance.  

As AI's usefulness in healthcare continues to grow, the HIPAA rules will need to be updated accordingly, ensuring that AI and HIPAA operate in tandem. 


Kelly Carlin, RN, CCRN, CVRN, is a nurse in the Hyperbaric Department at the University of Maryland Medical Center Downtown, Baltimore, MD, and a master’s in Healthcare Informatics student at Wake Forest University, School of Professional Studies in Winston-Salem, NC.  

Jacob Hansen, DO, is a staff anesthesiologist and Epic (EHR) service line expert at AdventHealth in Hendersonville, NC, and a master’s in Healthcare Informatics student at Wake Forest University, School of Professional Studies in Winston-Salem.  

Anna Hart is a technical lead/quality improvement specialist at Atrium Health Wake Forest Baptist in Winston-Salem, NC, and a master’s in Healthcare Informatics student at Wake Forest University, School of Professional Studies in Winston-Salem.  

Jody Johnson, BS, RT(R)(CT), is a lead CT technologist at Novant Health Forsyth Medical Center in Winston-Salem, and a master’s in Healthcare Informatics student at Wake Forest University, School of Professional Studies in Winston-Salem.  

Joan M. Kiel, PhD, CHPS, is the chairperson of University Healthcare Compliance and Professor HAPH at Duquesne University in Pittsburgh, PA, and an adjunct faculty member at Wake Forest University, School of Professional Studies in Winston-Salem. 

Scott Wells is a senior associate of talent solutions for Pivot Point Consulting in Nashville, TN, and a master’s in Healthcare Informatics student at Wake Forest University, School of Professional Studies in Winston-Salem.