Twitter is removing its image-cropping algorithm after finding bias

The social media platform discovered that the algorithm occasionally discriminated based on race and gender

FILE PHOTO: People holding mobile phones are silhouetted against a backdrop projected with the Twitter logo in this illustration picture taken September 27, 2013. REUTERS/Kacper Pempel/Illustration/File Photo

Twitter is removing its image-cropping algorithm after a review found bias in the automated system.

On Wednesday, the social media platform announced that it had analysed the algorithm after getting feedback that it "didn't serve all people equitably".

After reviewing the model over several months, Twitter found that the algorithm, which selects the points of focus in a picture (what Twitter calls "saliency"), showed occasional instances of bias when cropping pictures, in favour of women and lighter skin tones.

According to the findings, there was an 8 per cent difference from demographic parity in favour of women, and a 4 per cent difference in favour of white individuals overall. Comparisons of black and white women showed a 7 per cent difference in favour of white women, while comparisons of black and white men showed a 2 per cent difference in favour of white men.
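The "difference from demographic parity" figures above compare how often the crop favoured one group over another. A rough sketch of how such a metric can be computed is below; this is an illustrative example, not Twitter's actual methodology, and the sample figures in it are hypothetical.

```python
# Illustrative sketch (not Twitter's actual code): demographic parity
# difference measures how far one group's "favourable outcome" rate
# diverges from another's. Here the favourable outcome would be being
# chosen as the focal point of the crop.

def demographic_parity_difference(favoured_a: int, total_a: int,
                                  favoured_b: int, total_b: int) -> float:
    """Difference between group A's and group B's selection rates."""
    return favoured_a / total_a - favoured_b / total_b

# Hypothetical figures: if the crop favoured women in 54 of 100 paired
# comparisons and men in 46 of 100, the difference is 0.08, i.e. the
# 8 per cent gap reported in the study.
diff = demographic_parity_difference(54, 100, 46, 100)
print(round(diff, 2))
```

A difference of zero would mean both groups were selected at the same rate; Twitter's reported figures (8, 4, 7 and 2 per cent) are deviations from that ideal.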

The analysis also delved into objectification biases – or the “male gaze” – after people on Twitter identified instances where the cropping chose a woman’s chest or legs as a salient feature. However, the researchers “did not find any evidence that the algorithm cropped images of men or women in areas other than their face at a significant rate".

To test this, the researchers gave the algorithm 100 randomly chosen images and found that only three centred on bodies rather than faces. Twitter stated that "when images weren't cropped at the head, they were cropped to non-physical aspects of the image, such as a number on a sports jersey".

Twitter launched the saliency algorithm in 2018 to crop images to improve consistency in the size of photos and allow viewers to see more tweets in one glance.

However, in light of these findings, Twitter is dropping the feature to “give people more control over how their images appear while also improving the experience of people seeing the images in their timeline".

In a blog post, Rumman Chowdhury, Twitter's director of software engineering, said that even if the saliency algorithm were "adjusted to reflect perfect equality across race and gender subgroups", the team remained concerned by the representation harm of the automated algorithm when people are not allowed to represent themselves as they wish.

"We considered the trade-offs between the speed and consistency of automated cropping with the potential risks we saw in this research. One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people."

The news comes a month after Twitter announced it was launching an initiative to analyse the fairness of the algorithms it uses on the platform.