Making data fit reality – or the limits of economics


I once met a civil servant who insisted that the only way to make good public policy was to run randomised controlled trials.

Running a country, in her view, is a bit like running a pharmaceutical company: pathogens run rampant, a vaccine is administered, its success is measured against an unvaccinated control group, and a decision is made on whether to proceed with manufacture and inoculation.
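Her idea is simple enough to express in code. The sketch below (Python, with hypothetical numbers standing in for no real programme) simulates outcomes for randomly assigned treated and control groups and asks whether the difference in average outcomes is larger than chance would allow:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical trial: outcomes for randomly assigned control and treated groups.
control = rng.normal(loc=50.0, scale=10.0, size=500)  # untreated outcomes
treated = rng.normal(loc=52.0, scale=10.0, size=500)  # treated outcomes

# The estimated policy effect is the difference in mean outcomes...
effect = treated.mean() - control.mean()

# ...and a two-sample t-test asks whether that difference could be chance.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"estimated effect: {effect:.2f}, p-value: {p_value:.4f}")
```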

In the stable of folksy clichés that journalists and politicians deploy in their battles to help or hinder the public's understanding of economics, the doctor is a common trope.

Economists from the IMF are frequently compared to physicians giving recalcitrant patients – often developing countries – their medicine.

For the civil servant, comparing politics to medicine was a way to make good on the promise of public policy that didn’t need ideology – you had an objective measure of what worked, and so grand ideological projects such as neoliberalism and socialism were misguided.

For international institutions, too, the appearance of scientific thinking can lend credence to their policy recommendations. If economics is like medicine, then heeding your doctor’s advice is a very good idea.

So when I read, in a paper on econometric methodology by Aris Spanos, an economist at Virginia Tech, that “the accumulation of mountains of untrustworthy empirical evidence over the last century is a symptom of weaknesses in the current methodological framework for empirical modelling in economics”, it gave me pause.

Mr Spanos argues that the statistical models used by many econometricians are statistically inadequate – their underlying assumptions, such as independent errors or relationships that are stable over time, are rarely tested against the data, so the correlations they report may not stand up to scrutiny.

One paper, by Mr Spanos and Deborah Mayo, a philosopher of science at Virginia Tech, applies the procedures typically used in econometrics papers to specify a relationship between the US population and a mystery variable.

The standard techniques suggest that there is a valid statistical relationship between the two – at which point the authors reveal that the mystery variable is the number of shoes owned by Mr Spanos's grandmother over time.

The authors then apply further tests – not commonly used in the econometrics literature, they say – to show that the supposed relationship doesn't stand up to more robust scrutiny.
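The flavour of the exercise is easy to reproduce. In the Python sketch below – simulated data, not the paper's, and a single Ljung-Box residual test standing in for the fuller battery the authors apply – two unrelated trending series pass the usual significance test and then fail a basic check on the residuals:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)
n = 60  # say, 60 annual observations

# Two independent random walks with drift, standing in for two unrelated
# trending series - a population, and a grandmother's shoe collection.
population = 100 + np.cumsum(rng.normal(2.0, 1.0, n))
shoes = 5 + np.cumsum(rng.normal(0.5, 1.0, n))

# Step one: the naive regression looks highly "significant" - the shared
# trend produces a large t-statistic despite there being no real link.
model = sm.OLS(population, sm.add_constant(shoes)).fit()
print(f"slope t-statistic: {model.tvalues[1]:.1f}")

# Step two: a misspecification test on the residuals. Heavy autocorrelation
# means the model's assumptions fail, so the t-statistic cannot be trusted.
print(acorr_ljungbox(model.resid, lags=[5]))
```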

This is interesting because it shows that even econometrics, the more empirically orientated subdiscipline of economics, can get things wrong quite often. It is one thing to criticise economists for coming up with stylised models that are untestable. It is another to worry that even their testable models may not have much traction.

The discussions I have with people whose job is to guess at the economic weather in the Middle East suggest that their estimates don’t make much use of econometrics.

They are furnished with a regular series of data points of greater or lesser reliability, of which they must make the best.

China-watchers look to cement output and car sales as crude estimators of consumer demand – and hence, the future course of growth. Property-market analysts must work out whether Dubai is the second-most attractive real estate market in the world, as Savills thinks, or the world’s worst performing, as Knight Frank believes.

When the data are patchy, macroeconomic forecasting becomes a case of fitting what evidence you have into a plausible narrative about the future direction of trends.

It is hard to remove the psychology from this – in the face of a complex, unparseable reality, people find a narrative and revise it as new data comes along.

Technical study, and a few models of stochastic processes, help to make some sense of complexity. Morgan Stanley's five-variable buy signal is of this type. But it cannot claim to explain very much of what's out there – the bank's analysts have admitted that its five-gun salute tends to fire prematurely.
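The bank's actual model is proprietary, but the general shape of such a signal is easy to sketch: a handful of indicators, each with a threshold, and a rule that fires when enough of them line up at once. The Python below is a generic k-of-n threshold rule with invented names and numbers, not Morgan Stanley's method – and a rule like this fires whenever the count condition is met, whether or not a rally follows, which is one way a signal comes to fire early:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    value: float      # current reading, scaled to [0, 1]
    threshold: float  # level at which this indicator counts as bullish

def buy_signal(indicators: list[Indicator], required: int) -> bool:
    """Fire when at least `required` indicators cross their thresholds."""
    bullish = sum(1 for ind in indicators if ind.value >= ind.threshold)
    return bullish >= required

# Invented readings - none of these names or numbers come from the article.
signals = [
    Indicator("valuation", 0.7, 0.6),
    Indicator("momentum", 0.4, 0.5),
    Indicator("breadth", 0.8, 0.6),
    Indicator("sentiment", 0.3, 0.5),
    Indicator("credit", 0.9, 0.7),
]

print(buy_signal(signals, required=4))  # False: only 3 of 5 are bullish
```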

The civil servant’s faith in the predictability of the social world, and the ease with which simple trials could help her to tame it, struck me as wrong at the time, but I couldn’t easily explain what was wrong with it. After all, how bad an idea can it be to look at the evidence?

In social science, practitioners are always running up against the limits of the usefulness of the data they work with. Fitting data to ideas, and ideas to the real world, will remain a human task. The civil servant believed that the data would answer all the questions she could have. She was wrong – alongside doctors, there must also be priests.

This continues our series of weekly analysis articles by a rotating group of The National's beat reporters. Adam Bouyamourn covers economics.
