Google launch points to future where AI will trump hardware

Pixel 2 cameras and Home Max speaker use intelligent software

A man takes a photo of a Google Pixel 2 phone during a launch event in San Francisco, California, U.S. October 4, 2017. REUTERS/Stephen Lam

The most important announcement Google made at its fall product launch event last week wasn’t about a feature the upcoming Pixel 2 smartphone will have, but rather one it won’t have.

Namely: a second rear-facing camera lens.

The five-inch Pixel 2 and its larger cousin, the six-inch Pixel 2 XL, will instead have only a single lens, bucking a growing trend in high-end smartphones.

While dual-lens rear cameras first appeared on smartphones in 2011, the trend took off in earnest last year, appearing on phones including the LG G5, Huawei P9, and of course the iPhone 7 Plus. Apple is continuing with two lenses with this year's iPhone 8 Plus and iPhone X. Samsung has jumped on board with its latest, the Galaxy Note 8, as have a few others, including the Oppo R11 and the OnePlus 5.

The idea that all these phones are pushing is that dual lenses make for better pictures, since one lens can capture foreground details while the other does the background. The phone’s software then combines the two images into a single photo that is superior to what just one lens can produce.




It’s solid logic, except that the Pixel 2 – which is set for launch in six countries this month, although a UAE release date is still unknown – readily beats its competitors despite having just the single lens.

Influential image-quality testing site DxOMark has anointed the Pixel 2 the king of the smartphone heap, giving its camera a rating of 98 – the highest score it has ever awarded to a smartphone, topping the iPhone 8 Plus and Galaxy Note 8, both of which received 94.

Google’s result is phenomenal given its different approach, but it’s also a sign of larger things to come as far as consumer gadgets are concerned.

The search behemoth is reaching new heights in image quality not because of improvements to hardware such as lenses – although those are happening, too – but because it is applying machine learning and artificial intelligence to what is an otherwise analogue process.

As the company’s engineers explained during last week’s launch event, the Pixel 2’s camera relies on AI crunching information in the background – or the cloud, rather – to improve image quality.

Google’s cameras are basically learning from the billions of photos on the internet. The Pixel 2 can intelligently identify and separate backgrounds and foregrounds based on what the company’s algorithms have gleaned from processing that huge trove of data.

So, while a dual-lens camera might use two simultaneously shot photos to create a single, good-looking portrait, for instance, the Pixel 2 can effectively arrive at the same result by using the example of many, many other similar photos.
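In rough terms, the portrait effect described above comes down to deciding which pixels belong to the subject and which to the background, then keeping the former sharp and softening the latter. The sketch below illustrates that final compositing step only; it is not Google's actual pipeline, and the foreground mask – which on the Pixel 2 would come from a learned segmentation model – is simply supplied by hand here.

```python
def box_blur(img, radius=1):
    """Blur a 2D greyscale image (list of lists) with a simple box filter."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            # Average over the neighbourhood, clipping at the image edges.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

def portrait_effect(img, mask):
    """Keep masked (foreground) pixels sharp; blur everything else.

    In a real camera pipeline, `mask` would be predicted by a trained
    segmentation model; here it is a hand-made binary grid.
    """
    blurred = box_blur(img)
    h, w = len(img), len(img[0])
    return [
        [img[y][x] if mask[y][x] else blurred[y][x] for x in range(w)]
        for y in range(h)
    ]
```

A real implementation would use a depth-aware, variable-strength blur rather than a uniform box filter, but the structure – segment, then composite – is the same.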

Or, as the DxOMark ratings indicate, it’s actually arriving at better results. Inevitably, Google’s competitors will attempt to apply the same techniques to their phones.

What could be truly fascinating is if Google eventually joins the party and adds a dual-camera system to a future Pixel phone, in addition to its AI processing. Consumers will continue to be the beneficiaries as image quality climbs even higher.

The application of AI to consumer tech isn’t just happening with cameras, though. Google is also taking the same approach with its Home Max speaker, which is launching in the United States in December and elsewhere next year.

As with its previously released Google Home speaker, the Max will house the Google Assistant voice-activated AI, which provides users with audible answers on everything from recipes and weather to traffic conditions and news reports.

The Home Max, however, is geared towards quality audio and, again, it uses AI to deliver it. Aside from higher-end physical specs, the speaker also has AI that can detect where it is in a room – say, near a wall or in a corner – and automatically adjust levels accordingly.

Sonos introduced a similar feature called Trueplay a few years ago, but it required manual interaction from the user: the speaker homed in on your sound-emitting phone as you walked around the room with it, building a sort of audio map of its environment.

The Google Home Max does basically the same thing, but automatically.
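The acoustic principle behind this kind of correction is simple: each nearby boundary (a wall, a corner) reinforces low frequencies, so a well-placed speaker cuts its bass output to compensate. The toy sketch below is not Google's algorithm; the placement labels and decibel values are purely illustrative, and the detection step the Home Max performs with its microphones is assumed to have already happened.

```python
# Illustrative bass-compensation table. Each additional nearby boundary
# reinforces low frequencies, so more boundaries mean a larger cut.
# These dB values are made up for the example, not Google's figures.
BASS_CUT_DB = {
    "open": 0.0,     # free-standing: no correction needed
    "wall": -3.0,    # one boundary: modest bass cut
    "corner": -6.0,  # two boundaries: stronger cut
}

def adjust_bass(placement, base_bass_db=0.0):
    """Return a compensated bass gain (dB) for a detected placement.

    Unknown placements fall back to no correction.
    """
    return base_bass_db + BASS_CUT_DB.get(placement, 0.0)
```

The interesting part of the Home Max is precisely the step this sketch skips: inferring the placement label automatically from what the microphones hear, rather than asking the user to walk the room with a phone.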

It hasn’t been tested in the wild yet, so Google doesn’t have any top marks from influential audio authorities to boast about, but the underlying philosophy – where AI boosts the capabilities of the hardware – is the same. In that vein, it’s a safe bet the Home Max will get similarly positive reviews, even if it takes an iteration or two to get there.

Either way, Google’s approach is smart for several reasons. The capabilities of analogue hardware, whether it be camera lenses or speaker woofers and tweeters, can only be pushed so far. Software can take them further, and machine learning and AI further still.

In this particular field, the company has a huge and potentially insurmountable lead over its competitors, given its omnipresence on the internet.

It was hard to imagine even a decade ago that a simple search engine would eventually let us take better pictures, but that’s where our gadgets are going.