Brexit has shown that predicting political results can be polls apart from reality

After a third major failure in as many years, pollsters must now get the US presidential election right if they are to keep any credibility intact.

A worker counts ballots after polling stations closed in the referendum on the European Union in Islington, London. Neil Hall / Reuters

Amid the turmoil sparked by the UK’s decision to quit the EU, there is one player in the drama that has so far managed to escape the scrutiny it demands: the polling industry.

Opinion polls claiming to give insight into the outcome of political decisions have been around since the 1930s. Over the decades, they have become a huge international business, whose insights are relied on by everyone from charities to big business and governments.

And pivotal events like national elections provide a global showcase for the polling industry.

Yet the “Brexit” referendum proved to be the latest – and arguably most dramatic – evidence of its failings.

For most of the campaign, leading polling companies were forecasting a victory for the “Remain in Europe” camp. The predicted margin of victory varied from barely 1 per cent to a hefty 10 per cent. The final published poll gave a commanding eight-point lead to Remain.

That eight-point lead comfortably exceeded the polls’ margin of error of around plus or minus three per cent, prompting even staunch members of the opposing Leave camp to admit they expected to lose.

But just a few hours later they were celebrating a 52-to-48 per cent win, and the pollsters were scraping egg off their face – again.

For this is the third failure in as many years for leading polling companies in the UK alone.

Their prediction of the margin of the vote rejecting Scottish independence in the 2014 referendum was hopelessly wide of the mark, while their forecast of the outcome of the 2015 UK general election was just flat wrong.

It’s a similar story elsewhere: recent polls of political events ranging from the US mid-terms to Israeli elections have been worse than useless.

Bluntly, the whole concept of polling looks broken – and pollsters know it.

They also know the remedy. What they don’t know is how to make it work for them.

It takes the form of a mathematical result first uncovered more than 300 years ago, one whose powers so entranced its discoverer, the great Swiss mathematician Jacob Bernoulli, that he named it the Golden Theorem.

In essence, it shows how to gauge the properties of vast collections of things by looking only at a sample of them.

What makes the theorem so astonishing is how small these samples can be.

For example, it shows that the voting intentions of an entire nation can be gauged pretty well by taking a sample of just 1,000 people.

According to the theorem, this tiny sample is enough to pin down the proportions voting, say, In or Out, to plus or minus three per cent with 95 per cent reliability.
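
That figure comes from the standard formula for a poll’s margin of error, shown here under the usual assumptions of a worst-case 50/50 split (p = 0.5) and 95 per cent confidence (z = 1.96):

$$\text{margin of error} = z\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{1000}} \approx 0.031 \approx 3 \text{ per cent}$$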

Amazingly, it doesn’t matter how big the total population is. Whether you’re trying to gauge the intentions of a city of 100,000 or a nation of 100 million, the same sample size of 1,000 will achieve that plus-or-minus three per cent precision in 19 polls out of 20.
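
For the sceptical, the claim is easy to check by simulation. The sketch below is an illustration in Python, not anything from the pollsters’ own toolkits; it assumes a dead-even 50/50 split, draws 1,000 voters at random (without replacement) from two very different populations, and counts how often the estimate lands within three points of the truth:

```python
import random

def poll(population_size, sample_size=1_000, true_share=0.5):
    """Poll sample_size voters drawn at random, without replacement,
    from a population in which a fraction true_share intends to vote In."""
    cutoff = int(true_share * population_size)  # voters 0..cutoff-1 vote In
    sample = random.sample(range(population_size), sample_size)
    return sum(i < cutoff for i in sample) / sample_size

random.seed(42)
for pop in (100_000, 100_000_000):
    results = [poll(pop) for _ in range(2_000)]
    within = sum(abs(r - 0.5) <= 0.03 for r in results) / len(results)
    print(f"population {pop:>11,}: {within:.0%} of polls within 3 points")
```

On a typical run, both populations come out at around 95 per cent, the theorem’s 19 polls out of 20: it is the sample size, not the population size, that sets the precision.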

This utterly counterintuitive result underpins the entire polling industry. Without the Golden Theorem (technically known as the Weak Law of Large Numbers – don’t ask), there would be no alternative but to simply ask everyone.
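
For readers who do ask: in modern notation, the theorem says that if p is the true proportion in the population and $\hat{p}_n$ is the proportion observed in a random sample of n voters, then

$$\lim_{n \to \infty} P\big(|\hat{p}_n - p| > \varepsilon\big) = 0 \quad \text{for every } \varepsilon > 0$$

In words: make the sample big enough, and the chance of being out by more than any given margin can be made as small as you please.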

People often dismiss opinion polls on the grounds that, “well, I’ve never been asked to take part”. The theorem explains why: your chance of being picked for any one poll is just the tiny sample size divided by the huge population – which is a very small probability indeed.
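
Take the figures above, a sample of 1,000 drawn from a nation of 100 million, and the arithmetic is stark:

$$P(\text{asked}) = \frac{n}{N} = \frac{1000}{100{,}000{,}000} = \frac{1}{100{,}000}$$

Even if such a poll were run every single day, the average voter would be asked only about once every 274 years.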

So if the whole process of polling is backed by rock-solid mathematics, why is it becoming increasingly unreliable?

The explanation lies in the fact that, like all results in mathematics, the Golden Theorem comes with “Terms and Conditions” that have to be obeyed if its guarantees are to be honoured.

And prime among these Ts & Cs is the requirement that the sample be representative of the population as a whole.

In other words, the sample should be a microcosm of the overall population, with any differences being merely the result of chance, and thus likely to cancel each other out.

And this is where the theorem can come unstuck in the real world: it demands that pollsters ensure their samples are representative – and that is proving increasingly difficult.

The simplest way to weed out any biases is to use random sampling, in which people are picked to take part like a “lucky dip” of stones plucked from a jar.

But people aren’t stones. Even the simple act of reaching them – by phone, internet or face-to-face – introduces bias.

For example, younger people are more likely to use the internet than the elderly, so random emailing introduces an age bias. Phoning doesn’t help, either: you could well end up with a “random” sample of the retired or unemployed who spend all day near their phone.
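
A toy calculation, with invented numbers purely for illustration, shows how lethal this can be. Suppose 40 per cent of voters are young and split 60–40 for In, while the older 60 per cent split 55–45 for Out; overall, In leads 51–49. A sampling method that reaches too few young people, as landline-only polling tends to, flips the predicted winner no matter how honestly everyone answers:

```python
import random

def respondent(p_young):
    """One simulated respondent from a sample that is a fraction
    p_young young: young voters say 'In' 60% of the time, older 45%."""
    young = random.random() < p_young
    return random.random() < (0.60 if young else 0.45)

def poll(p_young, n=1_000):
    """One simulated poll of n respondents; returns the In share."""
    return sum(respondent(p_young) for _ in range(n)) / n

def average_over_polls(p_young, repeats=200):
    """Average many 1,000-person polls to smooth out sampling noise."""
    return sum(poll(p_young) for _ in range(repeats)) / repeats

random.seed(7)
# The true population is 40% young: In share = 0.4*0.60 + 0.6*0.45 = 51%.
print(f"representative samples (40% young): {average_over_polls(0.40):.1%} In")
print(f"phone-skewed samples   (20% young): {average_over_polls(0.20):.1%} In")
```

The skewed samples put In on about 48 per cent, calling the result for Out, even though every individual poll used a perfectly respectable 1,000 respondents.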

In the Brexit campaign, polls conducted over the internet gave markedly different results to phone polls – though both still failed to predict the final result.

Then there’s the problem of gauging whether respondents will vote as they say they will – or even vote at all.

Barely one in three of those aged 24 or under actually bothered to vote in the Brexit referendum. Meanwhile, the widespread belief that many “don’t knows” would stick with the status quo and vote Remain also proved unfounded.

On top of all this is the commercial pressure on polling companies, which, paradoxically, is leading them to become less reliable.

Researchers like Harry Enten, of statistics website FiveThirtyEight, have found evidence of “herding”, where polls by different companies tend to converge near the end of a campaign.

As the resulting consensus is so often wrong, the suspicion is that polling companies would rather be collectively wrong than trust their own methodology.

All this will need to be borne in mind in the polling industry’s next big test: the US presidential election.

Judging by its recent performance, we’d be wise to put all its prognostications in the box marked “don’t know”.

Robert Matthews is Visiting Professor of Science at Aston University, Birmingham. His new book, Chancing It: The Laws of Chance and What They Mean for You, is out now.