Tech firms will have to guard people's data more closely, say analysts
Facebook's Libra plan has attracted criticism from governments because it would allow the firm to create a centralised identity for its clients
Technology firms will need to police how they store people's data more carefully as regulators in other parts of the world follow the European Union's lead, according to KPMG's global FinTech co-lead.
Speaking at the SIBOS conference in London on Monday, Anton Ruddenklau said that Chinese regulators will follow the West's lead and have "their own version of GDPR in the next couple of years" to protect data diversity.
Mr Ruddenklau works with many companies that require customer identification to set up accounts and log into them. He believes that rules will change to hand customers more power over their data.
“We have our own sovereignty over our ID. My ID is mine, your ID is yours. It's not HSBC's, it's not Amazon's, it's not Google's. So part of the regulation — that will come through I think in the West as well — will reassert that. I think that'll very much change the way our identities are stored,” he said.
He argued that sensitive customer information would be stored in a “digital vault”, which would be owned by the customer and shared when appropriate.
Robert Prigge, president of Jumio, an identity verification company, agreed with Mr Ruddenklau, citing Facebook’s recent attempt to launch its ‘Libra’ cryptocurrency.
The plan was met with outcry from governments because, buried in the social media network's documentation, was the firm's ability to create a centralised identity for its clients.
“That's part of the reason why governments are reacting so overwhelmingly negatively to the idea. It's not just the crypto part — it was actually going to be Facebook's ability to create its own extragovernmental source of truth and identity.”
Despite pushback from the regulators, Facebook still plans to launch Libra in 2020.
Commenting on individual data, Mr Prigge said the future will be based more around people opting in and deciding what sort of information can be shared about them and when.
But he warned that governments and financial institutions will face challenges around identification.
“I think one of the real challenges is establishing that it's really you in the very beginning. Unless you've got that ground truth to start with, it becomes very difficult to build upon that.
“There's a million people out there who say they verify who you say you are, but extremely few companies actually verify who you really are,” he said.
Participants on the panel said there was an increasing number of data breaches and growing concern about apps collecting personal data.
For example, Russia's FaceApp, which was released in 2017, has raised security concerns over the facial images uploaded to it. It is unclear what happens to photos of people's faces once they reach the app's servers, or how long they are stored there.
Tech companies are increasingly amassing vast databases of facial photographs to train artificial intelligence to recognise faces.
Georgia Steele-Matthews, chief product officer at PixelPin, an authentication system that uses pictures of people instead of passwords, said that facial recognition software still has a long way to go before it meets a high standard.
Speaking on the panel, she pointed to the "Gender Shades" study, which showed that facial recognition software performed best on white men.
“If you are a white-skinned male, the error rate on Face ID is 1 per cent. If you are a dark-skinned female, the error rate is 35 per cent, and this is an issue that Apple is now grappling with because it was developed in a certain way to have an unconscious bias. There's a lot of things that are starting to come out here where the tech needs to catch up,” she said.
Published: September 24, 2019 02:16 PM