Google may be synonymous with the act of searching the internet, but in recent months the firm has shown a newfound resolve to promote itself as a maker of gadgets. Last month it bought a stake in smartphone manufacturer HTC for just over US$1 billion (Dh3.67 billion), and last week a launch event in San Francisco focused on hardware.
Two new versions of its smart speaker, Google Home, were a clear attempt to challenge the market dominance of the Amazon Echo, while the sleek, minimal Pixel 2 smartphone followed Apple's controversial lead in dumping the standard headphone port.
Also unveiled was a set of earphones, Pixel Buds, which provided a jaw-dropping onstage moment – a rare thing in the staged world of product demonstrations. One Google employee held her finger to a Pixel Bud in her right ear and said a phrase in Swedish; the translated phrase was spoken to her colleague in English via a smartphone, and the colleague's reply was translated back into Swedish and played into her ear via the Bud.
This powerful piece of machine learning-driven technology was already available via the Google Translate app – the Pixel Buds were merely a way of accessing it – but it was a theatrically impressive demonstration of how language barriers are being broken down.
Saved for last, however, was an example of machine learning that felt genuinely groundbreaking. It came in the form of a small camera called Google Clips, which in many ways looked laughably toy-like and primitive: a small button, a blinking LED and no viewfinder to show where the lens is pointing. But Clips hasn't been designed to help you take great photos; it identifies great photo opportunities and takes them for you.
Powered by image analysis and crafty algorithms, it learns over time which scenes you might want to keep for posterity, and when it spies such a thing through its 130-degree field of vision, it springs into action by recording a seven-second sequence of images.
"Do I know that face? Is that face smiling at the camera? Is it lit nicely?" These are the kinds of questions Clips asks itself, absolving us of the need to ask them ourselves – but more impressively, that intelligence is built in; Clips doesn't connect to the internet to fire those questions at a server on the other side of the world. Indeed, Clips' offline independence largely neutralises the obvious question asked by those who deem it "creepy": what about privacy?
The very idea of a camera watching for things of interest and recording them is bound to invite comparisons with some of the most sinister passages in George Orwell's novel 1984, particularly given that it's brought to us by a company that trades largely in data. But Google has evidently tried hard to make Clips look like an unthreatening piece of technology, and we have been assured that it only communicates with your phone (that is, the pictures it takes will only end up outside that cosy arrangement if you specifically want them to).
"We care very deeply about privacy and control," said Google's hardware product developer, Juston Payne, to the website TechCrunch, "and it was one of the hardest parts of the whole project."
Clips would seem to herald two significant changes in the way we interact with cameras. The first takes its cue from the unassailable truth that we behave differently when we know a camera is pointing at us: with its intelligent, background operation, Clips could theoretically take snaps that are more candid, more honest and true to life.
The second effect – a significant one in the modern era – is that it frees us from the act of taking those pictures.
Smartphone users are often criticised for experiencing life through a lens rather than immersing themselves in the moment, and the idea of a camera that takes care of assembling your personal archive may be a compelling one for many people.
This all prompts the question of whether we really need to record everything as comprehensively as we do, but we're being increasingly steered towards the idea of logging everything and then rewinding later to find things of interest. Facebook presents users with pre-made slideshows of their weekend, ready to be edited and replayed; Google Maps takes a note of roads we've travelled, distances we walked and places of interest we've visited, in case we'd like to relive them. The notion of "life-logging", where a device captures our days in their entirety for our future delight (or embarrassment) has been a recurring one, from Microsoft's SenseCam (2009) to the Narrative Clip (2012); both of them were wearable cameras that took regular pictures, but neither set the world alight.
The problem with life-logging, as one Microsoft researcher, Gordon Bell, discovered while using SenseCam, is that you end up with too much data of poor quality, and no time to sift through it. But what if the camera knew what constituted a good picture, and could perform that sifting for us? That's evidently where Google Clips is headed.
Priced at US$249 (Dh915), Clips is unlikely to become a consumer hit, and Google will know this. But it represents something bigger; by upturning the way we think about recorded media, it's testing a societal shift.
As a fairly expensive, un-aspirational device that is cut off from the internet, Clips will avoid the furore that was directed at Google's eyeglasses, Google Glass – a conspicuous device with a touchpad, camera and display that allowed people to document their surroundings with minimal fuss. But you can see how Clips and Glass could one day combine. The technology behind Clips will only get savvier: today it's good at faces; in the future it'll be good at picturesque sunsets or shimmering rainbows. And if the algorithm that powers Clips can learn what each of us might deem interesting or notable, it effectively puts part of our brain inside the camera.
Our only job, then, will be to go back and look through the pictures for that one magical moment, in the knowledge it will almost certainly be there.