Too many cooks spoil the broth? Not for this smartphone app


If you have a smartphone, there’s an app for pretty much anything these days, from saving lives to popping virtual bubble-wrap. But now there’s an app for answering questions even Google can’t handle – like who will win next Sunday’s Abu Dhabi Grand Prix.

Called Pyne, the app is described by its developers as a “social polling app” that allows anyone to put a question to the entire world (or, at least, to everyone with the app).

It could be anything from the outcome of a future event to where best to stay in town. The app then collates all the responses and finds the overall winner – which, if the app takes off, could be based on the views of hundreds of thousands of people.

This may well sound like another hi-tech waste of time; certainly the asininity of the first questions to hit the app (“Is the USA better than Canada?”) isn’t encouraging.

But the underlying idea that a diverse group of people can reliably answer questions beyond the reach of any individual is far from silly.

There is mounting evidence for the reality of such “wisdom of the crowd”, and major corporations now use it to guide strategic decisions.

The secret of its success is, however, only now becoming clear.

On the face of it, trusting the insights of some random horde seems ridiculous. That was certainly the view of the Victorian polymath Francis Galton as he watched people attempting to guess the weight of an ox at a livestock exhibition around a century ago.

Collecting the hundreds of entry forms, he found that only one person had got the ox’s weight of 1197lb (around 543kg) spot on. But Galton then noticed something unexpected. While the range of the guesstimates was predictably broad, the median value was 1208lb (about 548kg) – within just 1 per cent of the true weight.
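
To see what Galton did with the numbers, here is a minimal Python sketch of the same calculation. The list of guesses is invented purely for illustration; only the true weight comes from the account above.

```python
# A minimal sketch of Galton's aggregation step. The guesses below are
# invented for illustration; only the true weight comes from the account above.
from statistics import median

true_weight_lb = 1197                        # the ox's actual weight
guesses_lb = [950, 1050, 1100, 1150, 1180,
              1200, 1210, 1250, 1300, 1400]  # hypothetical entry-form guesses

crowd_estimate = median(guesses_lb)          # the crowd's "collective" answer
error_pct = abs(crowd_estimate - true_weight_lb) / true_weight_lb * 100

print(f"Median of {len(guesses_lb)} guesses: {crowd_estimate:.0f}lb")
print(f"Error against the true weight: {error_pct:.1f} per cent")
```

Run on the hundreds of real entries Galton collected, the same median calculation landed within 1 per cent of the truth.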

Writing up his findings for the journal Nature, Galton suggested the median value represented the collective wisdom of the crowd that had taken part in the competition.

The reason it proved so accurate, he argued, was partly that people had to pay to enter the competition – thus weeding out those with no real expertise.

The prospect of using their expertise to win the prize then encouraged those who did take part to do their best.

Galton’s arguments failed to make many converts, not least because they lacked the theoretical underpinning of the standard ways of getting insights from crowds.

According to statistical theory, reliable samples are both large and random. The randomness helps ensure the sample is representative, while sheer size helps pin down the truth more precisely.
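
The “sheer size” point can be made concrete with a small simulation – a sketch of textbook sampling behaviour, with the population split and sample sizes chosen arbitrarily for illustration. The spread of a random sample’s estimate shrinks roughly with the square root of the sample size.

```python
# Sketch of why statistical theory prizes large random samples: the spread
# of a sample estimate shrinks roughly as 1/sqrt(n). The true proportion and
# sample sizes are arbitrary choices for illustration.
import random
import statistics

random.seed(1)
TRUE_PROPORTION = 0.6   # hypothetical share of people holding a given view

def sample_estimate(n):
    """Poll n randomly chosen people and return the observed proportion."""
    hits = sum(1 for _ in range(n) if random.random() < TRUE_PROPORTION)
    return hits / n

for n in (10, 100, 1000, 10000):
    estimates = [sample_estimate(n) for _ in range(200)]
    print(f"sample size {n:5d}: typical spread of the estimate ≈ "
          f"{statistics.stdev(estimates):.3f}")
```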

Yet the crowd that took part in the weight-guessing contest was anything but random, and wasn’t big enough to pin down the weight as well as it did.

Even so, the evidence for the effectiveness of the wisdom of crowds has been accumulating for years.

Since the late 1980s, academics at the University of Iowa have operated a competition for predicting the outcome of US elections. Set up as a kind of stock market, it lets participants buy or sell “shares” in their belief about the likely success of election candidates, and make money if they prove right.
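
The article doesn’t spell out the contract design, but a common one in such markets is winner-take-all: a share pays out $1 if its candidate wins and nothing otherwise, so the going price can be read as the crowd’s collective probability. The sketch below assumes that design; the prices and beliefs in it are invented.

```python
# Hedged sketch of how a prediction market turns trades into a forecast,
# assuming a winner-take-all contract (a common design, not confirmed by the
# article): each share pays $1 if its candidate wins, $0 otherwise.

def implied_probability(price_dollars: float) -> float:
    """A $1-payout share trading at this price implies this win probability."""
    return price_dollars / 1.00

def expected_profit(price_dollars: float, your_probability: float) -> float:
    """Expected profit per share if you buy at the market price while
    believing the true win probability is `your_probability`."""
    return your_probability * 1.00 - price_dollars

market_price = 0.62   # hypothetical current price of one share
print(f"Crowd's implied win probability: {implied_probability(market_price):.0%}")
print(f"Expected profit per share if you believe 70%: "
      f"${expected_profit(market_price, 0.70):.2f}")
```

Anyone who thinks the market has the odds wrong has a financial incentive to trade until the price moves, which is how individual information ends up folded into the collective number.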

Like Galton’s weight-guessing competition, the market allows the collective wisdom of the participants to emerge.

And once again, the results have been impressive. Earlier this year, an analysis of the Iowa market predictions showed they routinely outperform conventional polling results.

Similar reliability has been demonstrated by other such “prediction markets”, such as the Hollywood Stock Exchange (HSX).

This offers prizes in fake dollars in exchange for insights into the likely success of movies at the box office and the Oscars.

The predictions have proved so reliable that studio executives now rely on them when deciding which movies to make.

Major corporations such as Procter & Gamble, Ford and Google have emerged as users of the wisdom-of-crowds effect, harnessing it to make strategic decisions.

Despite this, the phenomenon has become the real-life target of the apocryphal academic’s question: “Well, it might work in practice, but does it work in theory?”

Now some hard theory is emerging. And it explains how even small crowds can produce surprisingly reliable insights.

The secret of their success lies in something absent from the conventional theory of polling: the traits of the individuals.

Standard polls view people as coloured balls in a jar. If you want to estimate how many of each “colour” are in the jar, you just take out a decent-sized sample at random, and count them.

Yet as pollsters know to their cost, this analogy is far from perfect. They have often been caught out by the fact that people sometimes refuse to reveal or even lie about what type of “ball” they are.

As a result, even an apparently large, random sample can prove to be anything but – with hopelessly misleading results.

In contrast, give even a small number of people enough incentive, and they’ll be more than happy to reveal their true colour. And if that includes genuine insight into some problem, the result can be far more impressive than expected.
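
A toy simulation makes that contrast concrete. The support level, sample sizes and “shyness” rate below are invented for illustration, not taken from any real poll.

```python
# Sketch of why a small, motivated, truthful crowd can beat a much larger
# sample in which some people misreport: systematic misreporting biases the
# estimate no matter how large the sample gets. All numbers are illustrative.
import random
import statistics

random.seed(3)
TRUE_SUPPORT = 0.40   # hypothetical true share of "red" voters

def poll(n, hide_rate):
    """Ask n random people; a fraction hide_rate of 'red' voters claim 'blue'."""
    observed_red = 0
    for _ in range(n):
        is_red = random.random() < TRUE_SUPPORT
        observed_red += is_red and random.random() >= hide_rate
    return observed_red / n

big_but_shy = statistics.mean(poll(10_000, hide_rate=0.25) for _ in range(50))
small_honest = statistics.mean(poll(200, hide_rate=0.0) for _ in range(50))

print(f"10,000 respondents, a quarter of 'reds' hide it: estimate ≈ {big_but_shy:.2f}")
print(f"200 respondents, everyone honest: estimate ≈ {small_honest:.2f}")
print(f"True support: {TRUE_SUPPORT:.2f}")
```

The big sample is precise but wrong; the small, honest one hovers around the true figure.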

But the wisdom of crowds relies on more than motivating people with real skill. Earlier this year, a team led by Professor Clintin Davis-Stober of the University of Missouri showed that the crowd needs another key feature: diversity.

They found that once a crowd includes a certain number of genuine experts, it’s better to focus on boosting the crowd’s diversity.

In fact, it’s actually worth swapping acknowledged experts for less knowledgeable mavericks, just to widen the range of views feeding into the crowd’s collective wisdom.

Put simply, this is because experts tend to have broadly similar views and draw on similar sources of information. This can lead to even quite small biases turning into hefty collective errors.
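
That mechanism – shared bias doesn’t average away, while independent noise does – shows up in a toy simulation. The model below is my own illustration, not the Missouri team’s: “experts” share a bias on each question but are individually precise, while “mavericks” are noisier but independent.

```python
# Toy simulation (illustrative only, not Davis-Stober's model): experts share
# a bias on each question but have small individual noise; mavericks are
# noisier but independent. Shared bias does not average away.
import random
import statistics

random.seed(7)
TRUTH = 100.0
TRIALS = 2000

def crowd_error(n_experts, n_mavericks):
    """Average absolute error of the crowd's mean answer over many questions."""
    errors = []
    for _ in range(TRIALS):
        shared_bias = random.gauss(0, 8)   # experts lean the same way each time
        experts = [TRUTH + shared_bias + random.gauss(0, 3) for _ in range(n_experts)]
        mavericks = [TRUTH + random.gauss(0, 15) for _ in range(n_mavericks)]
        errors.append(abs(statistics.mean(experts + mavericks) - TRUTH))
    return statistics.mean(errors)

print(f"12 experts, 0 mavericks: average error {crowd_error(12, 0):.1f}")
print(f" 8 experts, 4 mavericks: average error {crowd_error(8, 4):.1f}")
```

Swapping a few experts for independent mavericks shrinks the crowd’s average error, because the mavericks dilute the bias the experts all share.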

In short, the emerging theory of the wisdom of crowds shows that there is such a thing as having too many experts.

That’s the least of the challenges now facing the new Pyne app. It may one day tap the wisdom of crowds, but for the time being at least, it only proves the theorem about what happens if one asks a silly question.

Robert Matthews is visiting reader in science at Aston University, Birmingham