The Mayo Clinic will store health data in Google’s cloud and use its AI expertise to unearth insights. But Google has made mistakes before.
In the 1880s, when the world-renowned Mayo Clinic was still a young family surgical practice in the newish state of Minnesota, its doctors scribbled notes about their patients into heavy, leather-bound ledgers. But in 1907, a physician there named Henry Plummer came up with something better. He thought the episodes of a patient’s medical history should all be in one place, not scattered among many doctors’ journals. So he introduced a new system, creating for every Mayo patient a centrally housed file folder and a unique identifying number that was to be inscribed on every piece of paper that went inside it—doctor’s notes, lab results, patient correspondence, birth and death records. And recognizing the scientific value in these dossiers, he also convinced Mayo’s leadership to make them available for teaching and research to any physician at the practice.
This development marked the beginning of modern medical record-keeping in the US. And from the beginning, the endeavor has been animated by an inextricable tension between sharing and secrecy—between the potential to mine patient data for new medical insights and the rights of patients to keep that information private.
Last week, that tension came to the fore again when the Mayo Clinic announced that Google would begin securely storing the hospital’s patient data in a private corner of the company’s cloud. It’s a switch from Microsoft Azure, where Mayo has stored patient data since May of last year, when it completed a years-long project to get all of its care sites onto a single electronic health record system. (Project Plummer, it was called.)
The change signals the storied hospital’s ambitions for its vast troves of patient data. Google is leading the much-hyped effort to use artificial intelligence to improve health care, with experiments reading medical images, analyzing genomes, predicting kidney disease, and screening for eye problems caused by diabetes. As part of the 10-year partnership, Google plans to unleash its deep AI expertise on Mayo’s colossal collection of clinical records. The tech giant also plans to establish an office in Rochester, Minnesota, to support the partnership, but declined to say how many employees will staff it or when it will open.
Hospital officials say that strict controls will limit Google’s access to Mayo’s data. Yet despite the best intentions and loftiest goals, data has a way of escaping its silos. And some health data experts worry that these kinds of partnerships pull at the fraying threads of the US’s aging privacy laws and the patchwork of regulations covering medical data.
“The problem is Google’s business model is to use or sell data,” says Lawrence Gostin of Georgetown Law School, who has written extensively about reforming health data privacy laws. “I’m far from convinced that Google might not use identifiable information for its business purposes.”
That would be information the company is not supposed to have unless patients explicitly consent, according to the US Health Insurance Portability and Accountability Act, or HIPAA, the nation’s foremost health information privacy law. HIPAA requires that health care providers not disclose any personally identifiable health information to third parties without express patient authorization.
But skeptics like Gostin have reason to doubt that Google’s data-ravenous operating style is compatible with the sensitive business of health care. Some of the tech company’s other health experiments have run into regulatory and legal problems, including an app called Streams that its DeepMind subsidiary is developing into an AI-powered assistant for doctors and nurses. A partnership between DeepMind and the UK’s National Health Service to trial the app broke the law by giving the company overly broad access to records on 1.6 million patients, according to a 2017 investigation by the country’s data protection regulator.
That same year, Google entered into a data-mining partnership with the University of Chicago Medical Center that is now being challenged in court. This past June, a patient sued UCMC and Google, alleging that his and thousands of other patients’ electronic medical records were given to Google without having been stripped of time and date stamps.
Google and UCMC have both denied the allegations. If the allegations are true, they would amount to a clear violation of HIPAA. But the lawsuit wasn’t actually brought under HIPAA. Instead, it alleges deceptive and unfair business practices under Illinois’ consumer protection law, as well as violations of common-law privacy rights. The complaint describes how Google could, in theory, completely legally receive de-identified medical records and then combine them with its vast stores of data about how people behave online—including geolocation, search queries, and social media posts—to re-identify individuals.
“Up until recently the current mode of thinking has been that if these records have no name, no address, then nothing bad can happen, and I just don’t think that’s true anymore,” says Michelle Mello, a health law expert at Stanford who has written about the Google/UCMC case. She points out HIPAA was enacted in 1996, before Google existed and when the US’s 20 million internet users browsed only about 30 minutes each day. What tech companies can do with de-identified data exposes gaps in data privacy that grow wider with every Google search and Facebook post, she says.
“Even when they’ve been responsibly transmitted, once these data are out there, they’re no longer in the custody of companies bound by regulations of any kind, and we don’t know what linkages might be performed and where these data might ultimately end up,” she says. “There’s a lot you can do with people’s data without violating any promises to them.”
In light of these kinds of concerns, Mayo Clinic officials say they have been careful about how they’ve structured the Google partnership. Google will be contractually prohibited from combining Mayo clinical data with any other datasets, according to a hospital spokesperson. That means that whatever data Google has about a person through its consumer-facing services, such as Gmail, Google Maps, and YouTube, can’t be combined with caches of scrubbed Mayo medical records. To ensure this, the hospital will only make de-identified data accessible to Google inside the Mayo-controlled private cloud, where it has the ability to monitor any activity.
Gostin says this should reassure patients to some extent, but is by no means a guarantee of privacy. “Holding Google to their privacy commitments is hard and it would need Mayo to intervene judicially,” he says. Patients likely wouldn’t have their own legal recourse if the agreement were not honored. “The real solution is national legislation mandating better privacy in multiple spheres,” he says. “Including data and cloud-based services, social media, and the internet.”
While not yet competing with climate change, gun control, Russian election interference, and other world-burning political priorities, Mello believes this kind of change is inevitable. “The pace this technology is moving is out of step with the public’s expectations about privacy,” she says. “So I think we’ll soon start seeing a demand for formal regulations.” On the issue of regulation, Google declined to comment.
Standards of practice can and do change, as new technologies come along and societal values shift. During Plummer’s time and for many years after, Mayo doctors could dig through any patient’s records in the name of science. Then HIPAA and other human subjects research regulations came along. And pioneers like Mayo found different ways to use their data to advance medical research. New laws, fit for the privacy needs of the current moment, need not halt progress. If anything, they might inspire innovation.
All Rights Reserved for Megan Molteni