Augmented reality can save lives and we're veering towards mass adoption

AR devices will soon be able to take instructions directly from our brains, writes Olivier Oullier

An attendee uses a Virtual Reality (VR) headset at the World Nuclear Exhibition (WNE), the trade fair event for the global nuclear community in Villepinte near Paris, France, June 27, 2018. REUTERS/Benoit Tessier

“And suddenly the memory revealed itself. The taste was that of the little piece of madeleine which on Sunday mornings at Combray…when I went to say good morning to her in her bedroom, my aunt Leonie used to give me, dipping it first in her own cup of tea or tisane. The sight of the little madeleine had recalled nothing to my mind before I tasted it.”

These are Marcel Proust's words, describing his experience as he ate what became the most famous madeleine in history. What many people do not realise is that Proust gave a beautiful and poetic illustration of what we now call augmented reality: the enhancement of an experience with additional information.

Our brains are constantly changing our reality by contextualising – and biasing – our perception in light of our memories, learning and past experiences, and, of course, our beliefs and imagination.

And since humans always want more, we have also witnessed a plethora of scientific and technological innovations designed to augment reality. Take, for example, the microscope and the telescope, which allow us to see details the naked eye cannot.

A more recent way to enhance our experience of the world is the ability, provided by smart devices such as mobile phones or glasses, to overlay additional information onto physical objects in real time. For the general public, augmented reality often means Google Glass and Pokémon Go. But there is a lot more to it.

In a report on virtual reality (VR) and augmented reality (AR) released in March, global law firm Perkins Coie LLP identified gaming, education, and healthcare and medical devices as the top three sectors where investments in AR and VR are most likely to be directed over the next 12 months.

Surgery, for instance, is already benefiting greatly from AR. One of the pioneers on that front is Dr Shafi Ahmed of the Royal London and St Bartholomew's Hospitals in the UK. Dr Ahmed understood very early on the transformative power of leveraging smart glasses and social media in healthcare.

In 2014, more than 14,000 students across 132 countries and 1,100 cities attended an operation in which he removed cancerous tissue from a 78-year-old man – virtually. Since then he has performed other surgeries live on social media. The last time I met Dr Ahmed was at the Gitex conference in Dubai in 2017, when we spoke at the same session.

We discussed how his work not only revolutionises the way medical students can learn and interact with their mentors, but also provides an effective solution to broaden medical education and therefore solve the shortage of experienced physicians to train students in remote parts of the world.


I strongly encourage you to watch him online to realise that using augmented reality in operating rooms goes way beyond streaming surgery. One of the major innovations is the possibility afforded by AR to overlay key information, including medical scans, onto the patient's body as the surgeon operates. It minimises localisation errors, not to mention enabling real-time collaboration with peers on the other side of the planet.

There is, however, still a major user experience hurdle when using AR in surgery. The surgeon's attention and movements are not fully dedicated to the patient, since he or she needs to make precise gestures in the air or use voice commands to open a menu, choose an option or zoom into medical images on the AR head-mounted display.

Clearly, when a surgeon operates, you’d prefer his or her hands to be devoted fully to the patient. This is where brain-computer interfaces (BCIs) can change the game, by allowing mental control of the AR device and its content.

Last week I was in Shanghai for meetings and to receive an award at the Global Virtual Reality Conference for our work on BCIs. The following day, at the Mobile World Congress Shanghai, the biggest mobile event in Asia, attended by more than 60,000 people from 112 countries, Emotiv, our company, and Vizua, a Seattle-based cloud-computing company, introduced for the first time our joint mental command solutions for augmented reality devices.

At the end of last year in Paris, Vizua, TeraRecon and Microsoft HoloLens introduced AR mapping of medical imaging during orthopaedic surgery. Thanks to 3D printing, it is now possible to embed our brain sensors into the HoloLens. This allows us to pick up patterns of electrical activity in the brain and transform them into mental commands for surgeons.

As a result, surgeons can keep their hands on the patient while zooming in on medical images or browsing menus, simply with the power of their minds.
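
To give a sense of how such a pipeline might work in principle, here is a minimal illustrative sketch of turning a short window of EEG samples into a mental command that drives an AR viewer. The channel count, band-power features, nearest-centroid classifier and command-to-action mapping are all assumptions made for illustration; they are not Emotiv's or Vizua's actual APIs or algorithms.

```python
# Illustrative sketch only: the command names, features and classifier below
# are assumptions for demonstration, not a real headset's API or algorithm.
import numpy as np

# Hypothetical mapping from a detected mental command to an AR viewer action.
COMMAND_TO_ACTION = {
    "push": "zoom_in",      # e.g. zoom into the overlaid medical scan
    "pull": "zoom_out",
    "lift": "open_menu",
    "neutral": "no_op",
}

def band_power_features(window, fs=128):
    """Average power in rough alpha (8-12 Hz) and beta (13-30 Hz) bands per channel."""
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(window, axis=1)) ** 2
    alpha = power[:, (freqs >= 8) & (freqs <= 12)].mean(axis=1)
    beta = power[:, (freqs >= 13) & (freqs <= 30)].mean(axis=1)
    return np.concatenate([alpha, beta])

def classify(features, centroids):
    """Nearest-centroid classification of a feature vector into a trained command."""
    labels = list(centroids)
    dists = [np.linalg.norm(features - centroids[c]) for c in labels]
    return labels[int(np.argmin(dists))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs, channels = 128, 5                      # one-second windows from 5 EEG channels
    # Pretend these centroids were learned during a short per-user training session.
    centroids = {c: rng.normal(size=2 * channels) for c in COMMAND_TO_ACTION}
    window = rng.normal(size=(channels, fs))   # stand-in for one second of EEG data
    command = classify(band_power_features(window, fs), centroids)
    print(f"mental command: {command} -> AR action: {COMMAND_TO_ACTION[command]}")
```

In practice, a per-user calibration session would supply the command prototypes, and the detected action would be routed to the headset's display rather than printed, but the hands-free principle is the same.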

In its report, Perkins Coie LLP indicates that the biggest obstacles to mass adoption of AR technology are issues related to user experience, content offering and cost.

In healthcare, as in any other sector, having to speak, press buttons or make movements in the air prevents smart glasses from reaching mass adoption. Silent, motionless communication between the brain and augmented reality devices and content, via mental commands, is the key feature that will make AR truly pervasive.

Professor Olivier Oullier is the president of Emotiv, a neuroscientist and a DJ. He served as global head of strategy in health and healthcare and as a member of the executive committee of the World Economic Forum