Humans vs algorithms: Why Netflix is bringing back people to predict what we want to watch
Some digital companies, including the streaming giant, are replacing algorithms with real curators offering personal recommendations
The internet presents us with a dizzying array of media and an almost limitless choice of what to add to the queue next – whether it is music, news, film, games, television or home-made videos. Big technology companies help us make those choices by using algorithms, assessing our preferences to find us content that is supposedly related, and grouping us with people with similar tastes on the assumption that we will want the same things.
These complex calculations create an endless procession of media for us to consume, but the character and relentless nature of it has led to some disquiet among consumers. Algorithms, it is said, feed us only the news that we want to hear, reinforce our prejudices and lead us down dark avenues of misinformation, but they can also strip our entertainment choices of surprise and serendipity. Yes, you might like romantic comedies and R&B music, but that is not necessarily all you want to consume.
Algorithms can categorise, but there is still a role for humans to play
In recent weeks and months, a growing number of platforms have adopted human recommendation mechanisms, including Facebook and HBO. Experts are hired to make selections on the consumer’s behalf, bringing what might be seen as a personal touch to an otherwise machine-driven process. Last week, Netflix, a pioneer in algorithm-powered recommendation, joined the party with a new, human-driven feature it calls Collections. “We’re always looking for new ways to connect our fans with titles we think they’ll love,” says the company. Algorithms may have an unrivalled ability to categorise media, but it would seem that there is still a part for humans to play in the recommendation process.
“Some of these moves may be headline-seeking responses to the bad press that algorithms have been getting,” says media researcher Taina Bucher, author of the book If … Then: Algorithmic Power and Politics. But she also notes that algorithms are not independent actors, and already carry a lot of human influence. “There is nothing about an algorithm that automatically creates a filter bubble or narrows our worldview,” she says. “There is, however, a business goal in thinking about how popularity works.”
In other words, algorithms do as they are told, and if a business model is based on the number of eyeballs, the most important goal is to grab our attention rather than give us what we actually desire.
Can an algorithm push us towards conspiracy theories?
From Amazon’s “customers who bought this item” section to YouTube’s “recommended videos”, algorithms have proved remarkably successful from a business perspective, and thanks to the mass of data they accumulate they have become highly sophisticated, too. (Indeed, Netflix’s algorithm is said to be responsible for 80 per cent of viewing activity on the streaming platform.)
But algorithms designed to predict what we want inevitably end up guiding us, and the resulting reinforcement of our biases has come in for criticism. On YouTube, it has been observed that searches for mundane facts can draw viewers into conspiracy theories and push them towards extreme points of view.
On Facebook, chief executive Mark Zuckerberg has admitted that it is extreme content that racks up the most views. Similar forces are at play in the world of entertainment, where a mild inclination towards a certain genre is taken as proof of a fixed taste.
“We’re seeing the results of this now, we’re getting weary of them, and we can’t quite put our finger on the reasoning behind it,” says data scientist Kris Shaffer, author of the book Data versus Democracy. “But what’s becoming clear is that personalisation does not mean you’re being treated as an individual. It just means that you’re being grouped with other people that you have no personal connection with. When you’re being treated as one number in a class of numbers, that can start to feel weird.”
People vs algorithms: Who wins?
The incorporation of human recommendation systems has proved successful on many platforms. Curators of popular Spotify playlists have become hugely influential figures, while Apple has hired big names to recommend music and news to subscribers. HBO’s new website – which goes by the strangely dystopian title Recommended By Humans – was launched with a press release extolling the virtues of personal advice from people who “really love television”. Bucher understands how some of us, in an information-heavy age, would want to revert to that method of discovery.
“There is definitely something in the idea of a human communicating that information to another human, because with cultural products it’s about knowing context, emotions and mood,” she says. “Algorithms simply aren’t able to optimise for the whole sociology of taste.”
Human recommendations also help to tame the unimaginable quantity of material that is available to us. “In my local bookstore, the people working there will give me a manageable amount of things to read, and that gives me a sense of regained agency,” says Shaffer.
Bucher agrees that our resentment at becoming passive receptors of endless streams of content could make us want human guidance, but wonders whether that resentment is misplaced. “Why do we expect some random person employed by Netflix to know our tastes better?” she asks. “We may get a sense of agency from that, but why?”
Algorithms will always be needed to do the heavy lifting of filtering media. Bucher believes that they have largely done a good job of managing the workload, and that the use of people is more about responding to public pressure and managing the expectations of users.
The situation is illustrated by Facebook’s announcement in August that it would reintroduce human editors to its News tab. “Facebook has been doing this on and off for years,” she says.
“They’ve added humans in until a scandal occurs, at which point they’ve removed them and gone back to an automated system. That allows them to avoid responsibility, but then they’ve put humans back in when another scandal occurs.”
Human tastemakers are, it seems, merely the sticking plaster on computerised recommendation systems which will forever be open to criticism. “Using humans may or may not address the problem or improve the output,” says Shaffer. “But when it comes to the media, perception is a significant part of the reality.”
Updated: September 3, 2019 05:04 PM