Since its launch in 2017, the Chinese-made video app TikTok has been downloaded more than a billion times. Its combination of user-generated clips and pop music has made it hugely popular with young people, many of whom have used it to lip-sync and dance their way to global stardom.
Described as "the last sunny corner of the internet", it's become known as a place where talented people can freely express themselves and get noticed. But according to internal TikTok documents obtained by investigative news website The Intercept, those opportunities are far from equal.
The report describes how videos deemed insufficiently attractive by TikTok’s moderators are often "shadow banned"; in other words, quietly suppressed so they fail to gain traction.
What might contribute to this unattractiveness, according to the document? “Chubby,” “have obvious beer belly,” “ugly facial looks,” “senior people with too many wrinkles,” and “facial deformities”.
And how does the company believe users with these attributes might affect business? “If the character’s appearance or the shooting environment is not good, the video will be much less attractive, not worthing [sic] to be recommended to new users.”
So, does TikTok’s dogged pursuit of viewing figures betray a lack of humanity?
TikTok’s core demographic adore the app, but there have been concerns over suppression of content for a number of months. American singer-songwriter Lizzo recently criticised the firm for removing videos of her wearing a bathing suit.
On TikTok, it appears, the world is your audience, just as long as TikTok says it's OK.
The key to the success of the app has been its “For You” screen, featuring new videos based on clips you’ve already expressed an interest in. Artificial intelligence experts have declared the algorithm behind it to be an amazing achievement; every second, billions of calculations process and analyse visual material, then queue up more of the same.
The result? A compelling, never-ending stream of video that’s perfectly tailored to your tastes. In the parlance of the industry, it’s incredibly “sticky”; it reels you in and keeps you there.
However, the content of the “For You” feed is not left entirely to algorithms, according to the report. It suggests that TikTok employees are encouraged to search other platforms for popular videos (one example is said to be an Instagram search for the hashtag #BeachGirl). They are then reportedly encouraged to download and reshare them. And while new content is quietly inserted, other content has its audience restricted.
A former app moderator from Turkey, Nese Idil, revealed to Middle East Eye at the end of last year how "politics, religious content or drinking would be chief reasons to take a video down … [but] in most cases, they would change the algorithms to minimise the content's impact by not posting it to the front page."
She added: “One Chinese female superior wanted us to delete videos from villages or shantytowns, because 'they were looking stinky' to her.”
Such allegations date back to the middle of last year, when German digital rights organisation Netzpolitik published contents of a leaked TikTok memo. It revealed that videos featuring people with disabilities were marked “Risk 4” (only visible in the country it was uploaded from) and others were marked “Auto R”, meaning removal from the “For You” feed.
TikTok’s response to this, and more recent criticism, is that “most” of the guidelines “are either no longer in use, or in some cases appear to never have been in place,” and that any overbearing rules “represented an early blunt attempt at preventing bullying”.
But when Netzpolitik found that videos were being suppressed for merely featuring overweight people, TikTok's claim of altruism – that they were keeping vulnerable people away from online rudeness – looked less than honest.
“TikTok moderation is happening on a huge scale,” says Hannah Donovan, chief executive of video editing app Trash.
“[It] awards greater freedom to participate to people who don’t break the rules or make other people’s experiences worse.” Crucially, TikTok transgressors are never made aware that they’ve been muted.
In his book Custodians Of The Internet, Tarleton Gillespie outlines why such arbitrary and secret dual standards are dangerous. "It should matter to everyone when information on platforms has been filtered, channeled and choreographed in ways that make it appear that we are encountering the same world, while in fact, unawares, we may be navigating [different worlds] superimposed on the same platform."
Donovan also notes that in the coming months and years, the choices being made by TikTok will have “an immense impact on our culture”, decisions whose ramifications will be long-lasting.
In May, TikTok's US office is due to open a "Transparency Centre". By allowing experts to examine the way moderators follow guidelines, the firm aims to be more open about its activities. But there appears to be an inherent clash between the free and open internet and the world's fourth most popular smartphone app. And that's not easily solved.
TikTok has responded to the moderation policies detailed in the report, saying: "The livestream guidelines in question have been removed … Local teams apply the Community Guidelines that we published in January, all aimed at keeping TikTok a place of open self-expression and a safe environment for users and creators alike.
"The other policies mentioned represented an early blunt attempt at preventing bullying. We recognize that this was not the correct approach and have ended it."