Health Data, Workforce Development

HI Professionals Need to Make Sure They Know What Constitutes Artificial Intelligence

Sam R. Thangiah, PhD, wouldn’t be surprised if coffee roasters started to claim that your morning cup of joe is infused with artificial intelligence (AI).

Thangiah is a professor in the computer science department and director of the artificial intelligence and robotics lab at Slippery Rock University in Pennsylvania — and to him, it seems like every vendor is touting products that leverage AI, even when some of those solutions lack the features needed to qualify.

According to Andrew Boyd, PhD, a professor of biomedical and health information science at the University of Illinois, Chicago, there currently is no specific definition of AI for products or solutions. As such, the phrase itself can be troublesome and “has become a marketing term now,” Boyd says.

With AI being heralded as a magic solution for working “smarter,” it’s especially important for health information (HI) professionals to understand what truly constitutes AI, recognize when it is useful, and identify situations where it is unnecessary. This knowledge is critical to avoid mislabeled solutions, overspending, and ill-informed decisions.

What Exactly Is AI?

Even though the definition of AI is still evolving, the technology has a few specific, necessary features. Having a firm grasp on what those are can help HI professionals assess what vendors are offering, regardless of how products are labeled. First, a software solution should only be considered AI when it solves unstructured problems, Thangiah says.

With unstructured problems, there is no step-by-step process that leads to a solution. In contrast, structured problems have a clear process that leads to a solution, so AI wouldn’t be needed. Software that simply processes information more quickly – such as linear regression or statistical analysis tools – cannot be considered AI, although they are often packaged that way, Thangiah says.

For marketers to rightfully claim that a solution uses AI, the tool needs to solve complex, unstructured problems.

Also, AI software should mimic some form of intelligence. “This can be human intelligence, but it doesn’t have to be,” Thangiah says. “Some of the algorithms that I have been working with are biological in nature. They’re able to mimic things like how ants work together to get food or how there’s a swarm intelligence.”

What’s more, AI solutions should continuously learn. A product qualifies as AI if it has the ability to process data, learn from it, and then make recommendations based on that learning, according to Ash Anwar, senior director of data and AI at Molecular You, a Portland-based company that has developed a health transparency platform for healthcare providers.

“True AI solutions should demonstrate either adaptive learning, deep pattern recognition, or predictive modeling capabilities that evolve as more data is processed,” Anwar says. “Non-AI solutions typically follow static rules or traditional statistical models that don’t change unless reprogrammed by humans. In contrast, AI solutions can improve their accuracy or performance over time as they are exposed to new data.”
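Anwar’s distinction can be illustrated with a minimal, hypothetical sketch (not drawn from any vendor product): a static rule behaves identically until a human reprograms it, while even a very simple adaptive model shifts its decision boundary as it processes new data.

```python
# Hypothetical illustration of Anwar's distinction: a static rule vs. a model
# whose internal state (and thus its decisions) updates as data is processed.

def static_rule(value, threshold=100):
    """A fixed rule: it behaves identically until a human reprograms it."""
    return value > threshold

class AdaptiveThreshold:
    """Flags values far above the running mean; the mean (and thus the
    decision boundary) shifts as more data is observed."""
    def __init__(self, factor=3.0):
        self.count = 0
        self.mean = 0.0
        self.factor = factor

    def observe_and_flag(self, value):
        # Flag the value against the boundary learned from past data.
        flagged = self.count > 0 and value > self.factor * self.mean
        # Update the running mean -- the simple "learning" step.
        self.count += 1
        self.mean += (value - self.mean) / self.count
        return flagged

model = AdaptiveThreshold()
for v in [10, 12, 9, 11, 500]:
    print(v, model.observe_and_flag(v))  # only 500 is flagged
```

The adaptive model here is deliberately trivial; the point is only that its behavior changes with exposure to data, which the static rule’s never does.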

When Is Using AI Warranted?

Knowing when the technology is and is not needed is perhaps just as important as knowing what AI is. For example, while AI is not typically needed to process claims, it could prove useful in detecting fraud in those claims.

“Detecting fraud would require some type of intelligence because in these situations, AI programs could go through millions of claims to detect patterns and determine if there is something wrong,” Thangiah says. “For example, AI could quickly detect if fraud was occurring when tens of thousands of OxyContin pills were being prescribed to patients in a town whose population was around 2,000.”

Using the features described above, namely the ability to solve an unstructured problem and to learn from data, the AI program would compare the number of prescriptions with the population of a given geographic area and flag anomalies, helping determine which prescriptions are legitimate.
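The per-capita comparison step in Thangiah’s example can be sketched as follows. This is an illustrative assumption, not a real fraud system: the region names, claim figures, and the fixed pills-per-capita limit are all hypothetical, and a genuine AI tool would learn such thresholds from historical data rather than hard-code them.

```python
# Hypothetical sketch of the population-based check described above.
# All names, numbers, and the threshold are illustrative assumptions.

def flag_suspicious_regions(claims, populations, pills_per_capita_limit=10):
    """Return regions whose prescribed pill totals are implausibly high
    relative to the local population."""
    totals = {}
    for region, pills in claims:  # aggregate pill counts per region
        totals[region] = totals.get(region, 0) + pills
    return [
        region
        for region, total in totals.items()
        if total / populations[region] > pills_per_capita_limit
    ]

# Echoing the article's example: tens of thousands of pills prescribed
# in a town of roughly 2,000 people.
claims = [("smallville", 40_000), ("smallville", 25_000), ("metro", 30_000)]
populations = {"smallville": 2_000, "metro": 1_200_000}
print(flag_suspicious_regions(claims, populations))  # → ['smallville']
```

The sketch captures only the flagging logic; in practice the learning component would tune what counts as “implausibly high” as new claims data arrives.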

AHIMA offers guidance to help HI professionals navigate AI. For example, the AHIMA website’s AI section provides a regulatory resource guide that is updated to reflect the AI regulatory landscape in Washington, DC. Additional AHIMA resources on the AI site include presentations such as the impact of AI on healthcare, webinars on upskilling the HI workforce in the age of AI and other issues, and articles on topics such as what to ask vendors about the benefits and risks of implementing AI.

Purchasing products that are mislabeled as AI can result in significant problems for healthcare organizations. For example, healthcare leaders and staff members are apt to overestimate the capabilities of a mislabeled solution. If they believe that a solution provides intelligence when it is merely processing information, for instance, this could result in poor decisions, Anwar says.  

Determining which solutions are and which are not AI could also help HI professionals better allocate resources. “AI solutions require more resources for implementation, training, and ongoing maintenance than non-AI tools,” Anwar says.

Healthcare leaders should keep in mind that vendors typically charge more for software products with AI than for those without, AI experts say. As a result, leaders need to determine both whether a solution is justly labeled as AI and when they actually need AI-enabled products.

“Vendors will charge more for AI solutions. Healthcare organizations don’t want to pay more for a software solution that they really don’t need,” Thangiah says. “And organizations really don’t want to pay more for a software product that is marketed as an AI solution when it really is not. If a software vendor is indicating their software has AI, it is critical for healthcare leaders to ask the software vendor to give them details on the AI models being used in that software. There should always be transparency about what AI models are being used in a software product.”


John McCormack is a Riverside, IL-based freelance writer covering healthcare information technology, policy, and clinical care issues.