LONDON // The world of online advertising regulation was brought into sharp focus late in March when a host of major brands pulled spend from Google due to the inappropriate placement of adverts.
The British government and a handful of businesses, including the French media agency Havas, the UK's Guardian newspaper and the cosmetics firm L'Oréal, froze their UK accounts because corporate adverts had appeared on websites run by hate preachers and white supremacists.
“We have placed a temporary restriction on our YouTube advertising pending reassurances from Google that government messages can be delivered in a safe and appropriate way,” a British government spokesperson said. At the time of writing, all of the brands' UK account freezes remained in place.
With millions of sites on its network and 400 hours of video uploaded to YouTube every minute, Google has publicly acknowledged that it must do more to change its technology and its policies to give more control to advertisers on its platforms. Currently it flags and then reviews questionable content, and deals with about 200,000 flags a day. The company says 98 per cent of those are reviewed within 24 hours. "In a very small percentage of cases, ads appear against content that violates our monetisation policies," Ronan Harris, Google UK's managing director, told The National. "We promptly remove the ads in those instances, but we know we can and must do more."
A spokesman at Google’s Dubai office says all of the policy changes by the firm will be globally applicable. “We will continue to review and take action against videos that are flagged to us.”
What: Google has been taken to task over its advertising controls.
Why: Companies' ads have been appearing on extremist or inappropriate websites.
The company is also deploying new machine learning (ML) systems to help identify content that may be objectionable to advertisers. “We have a team of human reviewers that help determine action after our ML identifies something as potentially inappropriate, and then they feed back to the systems so they continue to be trained and get better and better every time,” the spokesman, who declines to be named, says.
“Considering the high volumes of content uploaded every minute on YouTube, we believe this issue can only be addressed with technology, which is why we’ll continue to invest on that front.”
The firm says its algorithms may now decide that a video is not suitable for ads if it potentially falls into more than one objectionable category – even if it does not definitively fall into one.
The Dubai spokesman says Google wants to make sure its controls make it easier for brands to exclude higher risk content and fine-tune where they want their ads to appear. “For YouTube, we’re adding categories like ‘sexually suggestive’ and ‘profanity and rough language’ under the ‘sensitive subjects’ umbrella. We’re also letting advertisers execute their choices across all campaigns at once with account-level controls.”
Amid the furore over ad placement, Bloomberg reported on Tuesday that Rupert Murdoch’s News Corp is introducing a new service to ensure online ads do not appear next to fake news or offensive videos.
News Corp’s Storyful unit, which filters through the fire hose of social media for publishers and brands, will track websites known as purveyors of fake news or extremist content and share that list with advertisers, who can use it to keep ads from appearing in controversial places, the report said.
“This will be one way to give advertisers peace of mind,” Storyful’s chief executive, Rahul Chopra, told Bloomberg.
The ad-buyer GroupM and marketing firm Weber Shandwick will be the first two companies to use the Storyful database.
Google has also published a new site showing brands what they can do to protect themselves online, guiding clients through the complicated world of self-administered brand controls.
A spokesman for the Guardian News and Media group, who also declines to be named, says: "We've made it clear we think it is completely unacceptable that Google allows advertising for brands like The Guardian to appear next to extremist and hate-filled videos … but we're encouraged by Google's commitment to create a more transparent and responsible way of serving ads online.
“Clearly there is more work to be done across the industry to create a clear system of self-regulation that guarantees an open, transparent and safe digital advertising environment.”
James Reynolds, the founder of the Dubai-based online marketing firm SEOsherpa.com, says responsibility for appropriate ad placement should not fall solely on the ad networks. “It’s very difficult for someone like Google to monitor and then remove odious content from the two million-plus websites in their ad inventory,” he said. He says that with more care, and by using the available controls, advertisers can minimise the risk of their brands being shown alongside dubious content. “Google and other ad platforms do offer the tools to control where [and where not] ads get shown; the issue is often that ad managers are not skilled at, or familiar with, using them.”
Austyn Allison, the editor of the marketing news magazine Campaign Middle East, says companies can also use blacklists of sites that their ads will not run on, or "whitelists" of safe sites. He admits this strategy can be imperfect and hampers the raw market economy of programmatic advertising, but "it's a step in the right direction".
“For example, Choueiri Group’s DMS is a digital representation company that deals with local, trustworthy sites,” he said. “Google’s network is so large that it can be hard to know where your ads will go. A smaller network is like a ready-made whitelist. The downside is that your ad will only be seen by people on the sites that network represents.”
Dimitri Metaxas, the regional executive director for specialist companies at Omnicom Media Group (OMG) Mena, says: “We also recommend that brands deploy their advertising through reputable ad-tech platforms, such as DoubleClick Bid Manager (DBM), that have built-in brand safety features that will cover most of their needs.”
OMG says it also uses third-party brand safety tools, such as Moat or Integral Ad Science, to provide added protection and insights. “The challenge with Google and more specifically YouTube is that it currently does not accept third-party brand safety monitoring and this means you have to rely upon their built-in controls only. For a website that generates over 400 hours of new video content every minute, brand safety is a real challenge,” he said.
Mr Reynolds says brand adverts that are shown next to incendiary content may experience some loss of credibility, but he says far-reaching brand damage is unlikely. “Most consumers are savvy to the fact that ads are displayed using a variety of criteria such as the topics users are interested in, what sites they’ve visited in the past and what they’ve been searching on search engines – the website the ad is shown on is just one of many criteria an advertiser could select, but in the vast majority of cases probably hasn’t.”
Mr Allison agrees. “I don’t think many people feel those brands are genuinely supporting terrorism or pornography. But, in the same way that we share and laugh at those awkward ad placements in magazines, people will notice and share these and that can be humiliating for the brand. Also, it is the funding of the people behind that content which puts the brand in a tough moral situation. I doubt advertisers are pulling their campaigns because they fear people will believe they support ISIL, but they are doing it to stop a proportion of their marketing budget going to terrorism.”
Mr Allison says it will take time to improve the situation but, he adds: “Now that advertisers are starting to pull their ads from networks that let them end up in the wrong places, there will be a stronger financial incentive for the ad tech firms to work out ways to stop this happening.”
And, perhaps, for the advertising companies to better train their technical staff.