Decision-Support Tools, Tinged with Race and Gender Bias, Draw Regulator Scrutiny
Healthcare providers that use artificial intelligence (AI) programs to assist with diagnostic and care-related decisions should expect scrutiny from state and federal regulators over concerns that age, race, and gender biases may have been built into these systems.
State attorneys general (AGs) have shown an interest in pursuing issues related to patient privacy and AI, Paul W. Connell, a partner at the law firm Reed Smith, wrote in a post on the firm’s website.
“There is no question that as the country emerges from the current pandemic, AGs will be on a collision course with health care providers who utilize AI and the companies that develop and market these diagnostic programs. We should expect a thorough investigation into the use of AI and any role it played in patient care decisions as we lost 138,000 to the virus,” Connell wrote in July. As of this week, more than 177,000 Americans have died of COVID-19.
He referred to a recent STAT News series on AI and healthcare, funded in part by the Commonwealth Fund. As STAT News reported, healthcare companies are adopting AI-powered tools for a range of population health and clinical decision support purposes, and patients are rarely told how these tools are used in their care.
Decision-support programs use algorithms to help providers with numerous tasks, such as assessing hospital readmission risk, identifying inpatients susceptible to sepsis, predicting which patients might deteriorate quickly, planning discharges, or prompting physicians to order an MRI. The STAT News reporting argues that providers are putting patients at risk because many of these programs lack a proven track record and are “fraught with bias.”
For example, a study published in October 2019 in the journal Science found that an algorithm used widely in decision-support programs “was less likely to refer black people than white people who were equally sick to programmes that aim to improve care for patients with complex medical needs,” Nature reported in an analysis of the Science article. The disparity arose because the algorithm scored patients by past healthcare spending as a proxy for medical need, and less money is spent on Black patients than on white patients with the same level of need.
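That failure mode is easy to reproduce in miniature. The sketch below is a hypothetical Python illustration, using entirely synthetic data and made-up numbers, of what happens when a referral algorithm ranks patients by cost rather than by need; it is not the commercial algorithm the study examined.

```python
import numpy as np

# Hypothetical illustration of the proxy-label bias described in the
# Science study: a care-program referral score based on healthcare
# COSTS rather than healthcare NEEDS. All data here are synthetic.
rng = np.random.default_rng(0)
n = 100_000

group = rng.choice(["white", "black"], size=n)   # synthetic labels
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true illness burden

# Assumption for the sketch: at the same level of need, less money is
# spent on Black patients, so observed cost understates their need.
spend_factor = np.where(group == "black", 0.7, 1.0)
cost = need * spend_factor * rng.lognormal(0.0, 0.3, size=n)

# The "algorithm": refer the costliest 3% of patients to the
# high-risk care-management program.
threshold = np.quantile(cost, 0.97)
referred = cost >= threshold

for g in ("white", "black"):
    mask = group == g
    print(f"{g}: referred {referred[mask].mean():.2%}, "
          f"mean need of referred = {need[mask & referred].mean():.2f}")

# The output shows fewer Black patients referred, and those who are
# referred are sicker on average -- equally sick Black patients fall
# below the cost threshold.
```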
Another STAT News analysis found that an algorithm used by kidney transplant surgeons determines that “kidneys from Black donors are more likely to fail than kidneys from donors of other races. Because Black patients are more likely to receive an organ from a Black donor, the algorithm reduces the number of kidneys available to them.”
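Risk indices of this kind typically combine donor characteristics additively on a log scale and exponentiate the sum, so a race term acts as a fixed multiplier on the computed failure risk. The toy sketch below makes that mechanism concrete; the two-factor model and its coefficients are illustrative assumptions, not the published index.

```python
import math

def donor_risk_index(age: int, is_black: bool, race_coef: float = 0.18) -> float:
    """Toy multiplicative donor risk index.

    Modeled loosely on indices of this type: risk factors contribute
    additively on a log scale, and the index is the exponential of the
    sum. The age and race coefficients here are illustrative, not the
    published values.
    """
    log_risk = 0.01 * (age - 40)   # hypothetical age term
    if is_black:
        log_risk += race_coef      # donor race used as a risk factor
    return math.exp(log_risk)

# Two donors identical in every modeled respect except recorded race:
for label, is_black in [("non-Black donor", False), ("Black donor", True)]:
    print(f"{label}: index = {donor_risk_index(age=40, is_black=is_black):.2f}")

# The Black donor's kidney scores roughly 20% higher "risk" on race
# alone, which can push it past a transplant center's acceptance cutoff
# and shrink the pool of kidneys offered to Black patients.
```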
Mary Butler (mary.butler@ahima.org) is senior editor at the Journal of AHIMA.