When 26-year-old Dhruv Ghulati left his job as a city banker to found fact-checking start-up Factmata last year, his mission was clear.
“Our ultimate goal is to solve misinformation in the world,” Mr Ghulati says. “It won’t be solved today or tomorrow, but I think it’s absolutely a solvable problem.”
While tech majors such as Google, Twitter and Facebook come under increasing pressure to tackle the proliferation of fake and misleading news online, this London-based firm has quietly been building up its own news platform, which uses artificial intelligence to detect and correct misinformation online.
Similar to Wikipedia and Quora, Factmata's social news platform will rank users on credibility rather than follower counts, and allow a community of users to share, question and fact-check news, aided by machine learning.
The platform is intended to be used by journalists, researchers and anyone in the public who wants "a better informed opinion on the world", Mr Ghulati tells The National. "It's a platform which, we hope, will prioritise the quality of interactions rather than clicks and shares, giving readers a cleaner, better and safer experience online."
He says his company has cottoned on to something that other platforms haven’t: solving the problem of fake news and misleading content online requires a multi-pronged approach involving humans as well as technology.
“Existing approaches tend to focus on purely algorithmic systems, but this means things can fall into a sort of ‘AI black box’, where content is wrongly categorised as problematic or as misinformation.
“What we’re trying to build is a symbiosis between a news product and a back-end system that can detect this kind of content. Humans and machines need to work together to try and solve these problems.”
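The human-machine “symbiosis” Mr Ghulati describes can be pictured as a triage pipeline: a model scores content, confident scores are acted on automatically, and borderline cases are escalated to human reviewers. The following sketch is purely illustrative — the thresholds, function names and scoring are invented for this example and are not Factmata’s actual system.

```python
# Illustrative human-in-the-loop triage: auto-label only when the model is
# confident, and route ambiguous content to human reviewers.

AUTO_THRESHOLD = 0.9   # auto-flag content scoring above this
CLEAR_THRESHOLD = 0.1  # auto-clear content scoring below this

def triage(items, score_fn):
    """Split items into auto-flagged, auto-cleared and needs-human-review."""
    flagged, cleared, review = [], [], []
    for item in items:
        score = score_fn(item)  # model's estimated misinformation probability
        if score >= AUTO_THRESHOLD:
            flagged.append(item)
        elif score <= CLEAR_THRESHOLD:
            cleared.append(item)
        else:
            review.append(item)  # ambiguous: escalate to a human reviewer
    return flagged, cleared, review

# Toy scores standing in for a trained classifier's output.
scores = {"claim A": 0.95, "claim B": 0.05, "claim C": 0.5}
flagged, cleared, review = triage(list(scores), scores.get)
```

The design point is that the machine handles the clear-cut volume while humans absorb only the ambiguous middle band, which is where a purely algorithmic system would otherwise make its “black box” mistakes.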
The platform will launch in pilot form later this year. But it has already attracted high-profile backers including billionaire investor Mark Cuban, internet entrepreneur Sunil Paul, Twitter co-founder Biz Stone and Craigslist founder Craig Newmark. At the start of February, it closed a $1 million seed funding round, with the cash going towards further product development and expanding its team.
Mr Ghulati says his system will be able to detect fake news, spoof accounts, rumours, misleading stories and even extremist content.
“Terrorist content is definitely on our radar too,” he says. “The platform extends beyond just fake news. We’ve been having discussions with various advisors and think tanks that help the government in relation to extremist material online.”
Factmata’s core customers, however, will be advertisers, publishers, social media companies and broadcasters who want to filter out fake news, "clickbait" headlines and propaganda.
“The advantage we have as a third party, new start-up is that we’re able to start from scratch … we don’t have the same constraints that big firms do, from a business and shareholders perspective,” Mr Ghulati says. “What it essentially comes down to is changing the way we think about information online.”
Factmata is one of a growing number of technology firms that have made it their personal mission to fix the problem of misleading stories being spread on digital platforms.
The rise of social media has allowed fake news to spread faster than ever, while the term gained new popularity as a staple of President Donald Trump’s tweets. Claims of Russian hacking in the US election and the Brexit vote have also added a new political impetus to the fight against it.
Tackling misinformation and related cyber incidents was a hot topic at the recent Munich Security Conference. German firm Siemens took the opportunity of the conference to launch a charter of trust for a secure digital future, saying cybersecurity threats could not be met with just a seat belt or an airbag.
“Confidence in the security of data and networked systems is a key element of digital transformation," said Siemens chief executive Joe Kaeser.
Another company working in this area is Digital Shadows, a British cyber security firm that searches the Dark Web for activity that could threaten businesses.
Its clients rely on it to monitor the internet for stolen passwords and email addresses, sensitive files being leaked online, and other evidence that they have been hacked. Its flagship product, SearchLight, continuously scans more than 100 million data sources online and uses these to alert customers to situations where a security breach may have taken place.
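The core of the monitoring Digital Shadows describes — watching the internet for a client’s stolen credentials or leaked files — can be reduced to matching newly seen data against a watchlist of client assets. The sketch below is a hypothetical illustration of that idea, not SearchLight’s actual code; the watchlist and document are invented.

```python
# Illustrative leak monitoring: check whether any of a client's watched
# email addresses appear in a newly discovered document, and alert if so.

WATCHLIST = {"ceo@example.com", "finance@example.com"}  # hypothetical client assets

def scan_document(text):
    """Return the watched addresses found in a leaked document, if any."""
    tokens = text.replace(",", " ").split()
    return sorted(WATCHLIST.intersection(tokens))

leaked_dump = "dump: ceo@example.com, password123, other@site.com"
alerts = scan_document(leaked_dump)  # non-empty means a possible breach
```

A production system would obviously scan far more sources and normalise the data first, but the alerting logic — client watchlist intersected with freshly crawled content — is the recoverable essence of what the article describes.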
Last September, it raised $26 million in a funding round led by Octopus Ventures, an early backer of British tech giant Zoopla, to develop its technology further.
The company's revenues have more than doubled in the last two years, with the recent focus on fake news driving more and more customers to seek out its services.
"Fake news isn't a new problem, it's been around for centuries," Rafael Amado, senior research analyst at Digital Shadows, tells The National. "But technology is making it a much bigger problem."
When Digital Shadows was set up in 2011, its raison d’être was not, in fact, to combat fake news or misinformation, but rather to map and protect companies’ digital footprint more generally. But in the past 12 months or so, that mission has meshed with the new impetus on misinformation, Mr Amado says.
“A lot of the things we were already doing, like monitoring the Web for spoof domains, fake social media accounts and document leakage, happened to be the exact same techniques and tactics that fake news and misinformation actors are exploiting for their campaigns.
“So we have more clients who are bringing us in to help them combat this.”
Those customers include banks, healthcare firms, oil and gas companies and legal firms. Major media companies have also used its services to tackle security breaches and hacktivist campaigns during international sporting events, such as the Winter Olympics.
Digital Shadows uses machine learning and other ways of filtering information on the internet. But, like Factmata, it also relies on human experts to provide the additional context and awareness that its clients need to protect themselves from digital risks.
These analysts speak multiple languages and include military professionals, academics, intelligence experts and IT personnel.
The risks they help to identify include cyber threats, data exposure, brand exposure, VIP exposure, infrastructure exposure, physical threat and third-party risk. For instance, it once found the confidential board minutes of one of its clients, a large financial services company, leaked online, exposing a major security incident that the company was still trying to control.
“By finding that information early, we were able to tell them, the information was taken down, and they were able to put in place the measures they needed. They then made the relevant announcements at the right time,” Mr Amado says.
“That’s a really clear example of where we provided a lot of value,” he adds. “If that sort of content makes it online, our SearchLight platform will find it very quickly and we can tell our clients before it’s too late.”
Predicting the next move by hackers can be difficult, he acknowledges. “But that’s why we are out there on so many different sources, constantly trying to monitor what malicious actors are talking about, how they’re thinking, what tools they’re developing, what new technologies they’ll use in the future.”
Mr Amado says fake news is not just a political matter, but something that has huge implications for businesses.
“Everyone’s talking about fake news because of the US election, because of Brexit, because of politics, but this is an issue that goes beyond politics – there’s a business element to it too,” he says. “We have businesses who are very worried about smear stories against their VIPs, for example, or about damaging information about them turning up online. Things like that can really damage the share price of a company.”
Other players in this field agree that the problem of “alternative facts” goes well beyond politics.
“It’s a much bigger problem than fake news being spread on social media platforms,” says Edward Roberts, director of product marketing at cyber security firm Distil Networks.
“The political aspect of it has created far more knowledge of the problem, but it’s not just in that arena. There are businesses around the world that are getting inundated with these kinds of issues.”
Distil Networks was founded in 2011 with the purpose of detecting automated abuse, or bots. It has offices across Europe and the US, with customers including big airlines such as Lufthansa, ticketing companies such as StubHub, e-commerce firms, travel and booking engines, banks and healthcare firms.
“Bots go around the Web and execute something over and over again, because that’s what machines are very good at, and humans are very bad at,” Mr Roberts says. “Their goal could be creating a fake account on Twitter or Facebook, or trying to log into accounts with stolen credentials.”
The company played a part in the recent New York Times investigation into obscure American firm Devumi, which exposed the firm's sale of tens of thousands of fake Twitter followers and bots to celebrities, executives and other social media influencers. "We're trying to stop behaviour like that," Mr Roberts says.
The technology works by setting various “traps” or hurdles to detect bots, which are constantly fine-tuned and refined by a team of experts. Using machine learning, they can look at things like mouse movement or time spent on a site to determine whether the user is a human or a bot.
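Behavioural signals of the kind Mr Roberts mentions — mouse movement, time on site — can be combined into a simple score separating machine-like sessions from human ones. The thresholds and features below are invented for illustration; a real system like Distil's would learn them with machine learning rather than hard-code them.

```python
# Illustrative behavioural bot scoring: count machine-like signals in a
# session and flag it when enough of them fire together.

def looks_like_bot(session):
    """Return True when a session shows two or more machine-like signals."""
    signals = 0
    if session["dwell_seconds"] < 1.0:     # acting faster than a human reads
        signals += 1
    if session["mouse_moves"] == 0:        # no pointer activity at all
        signals += 1
    if session["requests_per_min"] > 120:  # faster than a human can click
        signals += 1
    return signals >= 2

human_session = {"dwell_seconds": 12.5, "mouse_moves": 48, "requests_per_min": 6}
bot_session = {"dwell_seconds": 0.2, "mouse_moves": 0, "requests_per_min": 600}
```

Requiring multiple signals to fire before flagging is what keeps false positives down — which matters in the cat-and-mouse game Mr Roberts describes, since bots that fake one signal (say, scripted mouse jitter) still tend to trip the others.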
“It’s a combination of tech and people, and the system is constantly evolving because bots are constantly evolving and becoming more sophisticated to evade detection. It’s a cat and mouse game,” Mr Roberts says.
“It can be mitigated and managed, but it will never be totally eliminated.”
So what’s next for the firms at the front line of the fight against fake news? Most analysts believe that over time, typical market forces will come into play, involving likely consolidation or mergers of some of the new companies that have sprung up recently.
“Which AI fake news fighter is ultimately going to win? Conceptually it’s quite hard to try and pick a winner,” says Ameet Patel, senior analyst at Northern Trust Capital Markets.
“If you’re a start-up in the AI space, you’re coming into a very favourable argument. Cybersecurity isn’t going away and that’s a major tailwind.”
He adds: “Chances are, one or two or three will end up being the provider of choice for some solution or another. I imagine over time, you end up getting a roll-up of AI solutions that turn into a company that becomes an exceptionally important part of the chain.”
George Salmon, equity analyst at Hargreaves Lansdown, agrees.
“Often in these situations, it’s a binary outcome,” he says. “Either you’ll strike gold with the products and services that you develop, or you’ll strike out and not get any real return at all.”
The gains, he says, can potentially be huge, but so too can the losses.
“The ones that succeed could well go on to become very valuable ideas and therefore valuable companies if they’re listed themselves,” he adds.