Two Google-related Patient Privacy Cases Impact Data Sharing Efforts

Through no clear fault of its own, Google has found itself involved in two separate patient privacy incidents in recent weeks—one in England and one involving an Indian pathology lab. The two events may give healthcare organizations pause before engaging in any data sharing projects involving a search engine.

Back in December, an Indian pathology lab accidentally uploaded blood testing data from 43,000 patients to the Internet, including their names and HIV status, which were subsequently indexed by Google, the Washington Post reported. Then, in late June, Google adjusted its content removal policies to allow the company to accept requests from individuals to remove medical record contents from search results. As Bloomberg noted, given the company's historically narrow removal policies, this change is "a departure from its typically hands-off approach to policing the web."

Another case involves a data sharing project between England's National Health Service (NHS) and Google's artificial intelligence subsidiary. Like American providers with similar partnerships, the NHS Royal Free Foundation Trust shared personal data from 1.6 million patients with DeepMind, Google's artificial intelligence company. An investigation by the Information Commissioner's Office (ICO), the UK's data protection watchdog, found that the NHS failed to comply with the country's Data Protection Act when sharing the patient data with Google.

According to the British newspaper The Telegraph, the partnership between Google and the NHS was intended to monitor and diagnose acute kidney injury via technology that sent alerts to physicians through an app called Streams. The ICO investigation found that Royal Free did not inform participants that Google would have access to sensitive patient information such as HIV status, mental health history, and abortions.

“The Royal Free did not tell patients that Google’s DeepMind would have access to such information, but said it had ‘implied consent’ because patients knew the Streams app offered ‘direct care,’” the Telegraph reported.

ICO Commissioner Elizabeth Denham summarized the investigation's findings in a blog post. Many of the findings could apply to American companies, as well as to providers that must comply with HIPAA, which is comparable to the UK's Data Protection Act. She notes that progress and advances in medicine need not be at odds with legal privacy protections.

“Our investigation found that the Trust did carry out a privacy impact assessment, but only after Google DeepMind had already been given patient data. This is not how things should work,” Denham wrote. “The vital message to take away is that you should carry out your privacy impact assessment as soon as practicable, as part of your planning for a new innovation or trial. This will allow you to factor in your findings at an early stage, helping you to meet legal obligations and public expectations.”

Mary Butler is the associate editor at The Journal of AHIMA.
