Dr Google will see you now

The tech giant's partnership with a healthcare system gives it access to the health records of millions of patients, which raises the issue of data privacy again


There can be no doubt now that artificial intelligence does help save lives. AI technologies are increasingly being used for robotic surgery, medical image analysis, studying large volumes of medical data and even patient diagnosis.

Of course, the success of any AI system is heavily dependent on the data available and developers often need access to patient information in order to devise effective medical systems. The more ambitious the goals, the more data is required.

It should come as no surprise that Google, one of the largest AI developers in the world, this week announced a partnership agreement with Ascension, the second-largest healthcare system in the US. The deal will give Google access to the health records of millions of Americans across 21 states.

What has proved a surprise to the media, the American public and other stakeholders, however, is that the partnership (code-named "Project Nightingale") began in secret last year, without any communication with doctors or patients, the Wall Street Journal reported.

Although the old English adage “trust me, I’m a doctor” perhaps does not carry the same weight as it once did, patient privacy is something that the medical profession and governments around the world take very seriously. Meanwhile, privacy advocates are concerned about moves to share patient data more widely and the impact on personal privacy.

In this case, Ascension has confirmed the project is in compliance with the US 1996 Health Insurance Portability and Accountability Act and Google wrote in a blog post on Monday that patient data "cannot and will not be combined with any Google consumer data".

Frankly, it’s unlikely that the Project Nightingale team will be interested in, for instance, Mrs Smith’s 2008 kidney stone operation in particular. What will interest them is the large volume of patient data that can be prepared for AI systems to analyse at scale, identifying trends, finding similarities in data related to physical conditions and shedding light on medical anomalies.

For example, everyone wants a cure for cancer and its diagnosis is one of the most active areas in AI-assisted research. By preparing volumes of medical image data from CT or MRI scans and creating algorithms to process that data, AI systems can often learn to spot the signs of cancer far earlier than human technicians, allowing for earlier patient diagnosis and treatment. Broadly speaking, the greater the volume of cancer cases that can be analysed by AI, the better the system will work and so more lives can potentially be saved.

The same principle applies to many other medical machine-learning projects. It’s often possible to get encouraging results from small sets of data, but to ensure reliability and realise the full benefit of using AI technologies for medical analysis, bigger data sets are required.

In the past, many technology providers have had access to patient records and personal medical data, and in the US this has long been governed by a law written specifically to regulate the sharing of health records among healthcare partners. So, why the uproar about Google gaining access to patient healthcare records this week?

It may simply boil down to a matter of trust.

Although Google's handling of confidential patient records can hardly be fairly compared with its handling of personal social media data, last year's Google+ security breach, which compromised the data of more than five million users, is still fresh in the minds of the public, policymakers and the cyber-security community.

Meanwhile, there is the investigation into potential “monopolistic behaviour” launched by 50 US states and territories in September, as well as the antitrust ruling by the European Commission earlier this year requiring Google to pay a fine of €1.49 billion (Dh5.9bn). Neither helps the digital giant engender public trust.

Nevertheless, few would argue that harnessing the power of Google's AI to improve patient treatment, reduce pain and suffering and, ultimately, save more lives is a bad thing.

The fact is that, though some of them may be in need of improvement, there are existing laws and professional standards that can be applied to the use of patient data. As Google chief executive Sundar Pichai said earlier this year to Indian news channel NDTV: "If AI can shape health care, it has to work through the regulations of healthcare."

That is a given. Trust, on the other hand, isn’t always a matter of law.

Carrington Malin is an entrepreneur, marketer and writer who focuses on emerging technologies