Yesterday, Instagram announced <a href="https://www.thenational.ae/lifestyle/wellbeing/are-you-sure-you-want-to-post-this-new-tool-on-instagram-prompts-bullies-to-reflect-1.884283">new features it hopes will curb online bullying</a> on the app, and today, Twitter has updated its hateful conduct policy: it will now remove language that dehumanises others on the basis of religion. The company announced that, effective immediately, such tweets will be taken down from the platform when they are reported; until today, they would have been left up and treated as reasonable ideological debate.

The social network already bans hateful language related to religion when it is aimed at individuals. The change broadens that rule to forbid language that likens members of religious groups to subhumans or vermin. Offending tweets sent before today will be removed when reported, but will not directly result in account suspensions, because they predate the rule.

Horrific incidents such as the Christchurch mosque shooting in March and the Easter attacks in Sri Lanka in April are widely seen to have been stoked, at least in part, by hateful rhetoric on social media.

The change follows Twitter's call for feedback from users of the app in Arabic, English, Spanish and Japanese, which drew 8,000 responses from more than 30 countries. Respondents said they wanted to remain free to use this sort of language against hate groups, non-marginalised groups and political groups, which is why Twitter is now specifically banning hate speech directed at religions.

Another key element of the feedback was that Twitter's rules are too hard to understand, so the company has condensed its rules document from about 2,500 words to under 600. "In 280 characters or less, each rule clearly describes exactly what is not allowed on Twitter," it says of the change.
You can find the <a href="https://help.twitter.com/en/rules-and-policies#twitter-rules">Twitter rules and policies here</a>. The feedback also suggested users don't think Twitter enforces its rules "fairly and consistently", so the company says it has "developed a longer, more in-depth training process with its teams to make sure they were better informed when reviewing reports".

On Twitter itself, some users wondered why the company was focusing on only this one element of hate speech, some thanked the service for the move, and others wanted to know whether the new rules would actually be enforced.