The row over the harvesting of private Facebook information on a massive scale has created a scandal on both sides of the Atlantic that is still rumbling on three weeks later.
At the heart of what seems like a fairly technical problem lie serious social and political ramifications concerning privacy, consent and manipulation. These questions, for so long buried under sweeping generalisations about technological progress, closer communities and the easy sharing of personal views and data, are only now entering the public consciousness.
Nor are they confined to one country. They reach from Silicon Valley to the Middle East's tech hubs of Cairo, Amman and Dubai, which this week was hosting the Arab Media Forum, where experts were discussing how artificial intelligence is being used to generate fake news sites to spread false information and muddy the waters of legitimate news outlets. There might be no clear way to resolve those questions of privacy and consent, either for governments or for individuals.
Rather than rehashing the story of the British data analysis firm Cambridge Analytica, its harvesting of tens of millions of Facebook profiles and the allegations that it might have attempted to use those profiles to sway elections as far afield as Nigeria, it is better to focus on the essential problem: the vast amounts of data that companies gather on users can be used in ways that neither the users nor the companies can predict. It is therefore not possible for users to give genuine, informed consent.
In isolation, handing over pieces of data to companies – photos, information about which products you like, who your friends are, where and when you were born – is not a problem. But it is the steady accretion of that data, multiplied by years of information and hundreds of millions of profiles, that adds up to data with enormous predictive capability and significant privacy issues.
After the Cambridge Analytica scandal broke, media companies fell over themselves to highlight exactly how much data companies such as Facebook keep on users. The response from the public was broadly horrified. Few had realised that what they were handing over was kept for so long.
But it is the predictive capacity that is more frightening and that raises genuine questions of consent. Take a simple, intimate example: from a few pieces of information, it is possible to guess with striking statistical accuracy whether a woman is pregnant. A woman who “likes” on Facebook a particular brand strongly associated with motherhood could well be pregnant. Factor in her age, whether the friends she interacts with are also mothers and the post from a year ago announcing her engagement, and the statistical likelihood increases. Such information is valuable to companies that advertise to new mothers, but it is information the woman never explicitly consented to giving.
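The accumulation this example describes can be sketched as a simple Bayesian update, in which each weak signal nudges an initially low probability upwards. Every number below is invented purely for illustration; real ad-targeting models are far more elaborate, but the compounding effect is the same.

```python
# Illustrative sketch: how a handful of weak signals can combine into a
# confident prediction. All probabilities are invented for illustration.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return the posterior probability after observing one signal."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical signals: (P(signal | pregnant), P(signal | not pregnant))
signals = {
    "likes a motherhood brand":   (0.40, 0.02),
    "friends are mostly mothers": (0.60, 0.20),
    "engaged a year ago":         (0.30, 0.05),
}

p = 0.02  # assumed base rate in the user population
for name, (if_true, if_false) in signals.items():
    p = bayes_update(p, if_true, if_false)
    print(f"after '{name}': {p:.2f}")
# after 'likes a motherhood brand': 0.29
# after 'friends are mostly mothers': 0.55
# after 'engaged a year ago': 0.88
```

No single "like" is revealing on its own, but three of them together take an assumed 2 per cent base rate to 88 per cent, which is the sense in which data the user handed over innocently becomes information she never agreed to disclose.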
The corporate response, which Facebook initially offered over this scandal, was that users had agreed to this when they signed up for the terms and conditions. But these terms and conditions, as anyone who has ever signed up for a website, app or piece of software knows, are vast, running to dozens of pages, mostly in legal and technical language that few can understand. Banks used to do the same thing until government regulation forced them to put the salient features of their products in clear language.
The trouble is that few understand what the salient issues are – not merely the public but companies as well. Technology is moving rapidly and new ways of interpreting the vast oceans of data are constantly being invented. Companies might genuinely not know in what ways the data will be interpreted in the future, making it impossible to ask users whether they consent.
In the Middle East, these concerns are multiplied for three main reasons. The first is the expectation of privacy among the Arab population. The second is the question of cross-border regulation. Take Careem, for example, a UAE-based ride-sharing app that rivals Uber in about 20 countries across Africa and Asia: its privacy terms on registering, as with many international companies, enable it to share personal data with all its affiliates, subsidiaries and marketing partners.
The third reason is security, for both the public and governments, and it is the hardest of all. Facebook and other tech companies have already admitted co-operating with the US government, meaning that the government potentially has access to information on hundreds of millions of people who are not its own citizens. That is a national security issue for dozens of countries.
But it matters for Middle Eastern users too, particularly those who lived under the authoritarian regimes of Iraq, Syria, Libya and others. In those countries, given how widespread intelligence networks were, people might be less comfortable with companies sharing their information with governments, or with governments, rather than tech companies, overseeing private information, as has been proposed in Europe.
All of which leads back to consent. In different countries, people have different levels of comfort with the revelation of personal information. Only now is a discussion being started. Where it will end – greater government oversight of tech companies, a user-led revolution to #DeleteFacebook or bans by governments on the use of such tech within their borders – is unresolved. But tucked away in the discussion, which one can sense from the handful of politicians who have spoken knowledgeably about this, is the lingering belief that it might not be solvable, that the juggernaut might have broken free. Data, like other vast untapped resources of the past, might simply be too valuable for governments to stop mining.