Researchers have found that Twitter's image-cropping algorithm, which went viral after it was shown to exclude black people and men, also encodes an indirect bias against a number of other groups.
The discovery was made during the Def Con hacker convention in Las Vegas in the US, where researchers found that the feature also discriminates against women who wear headscarves, people with white or grey hair and people who use wheelchairs.
Twitter removed the feature after discovering that the image-cropping algorithm, which chose points of focus in pictures (what Twitter calls "saliency"), showed occasional instances of bias in cropped pictures in favour of women and lighter skin tones.
According to the findings, there was an 8 per cent difference from demographic parity in favour of women and a 4 per cent difference in favour of white individuals overall, while comparisons of white and black women showed a 7 per cent difference in favour of white women.
In comparisons of black and white men, there was a 2 per cent difference in favour of white men.
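One plausible reading of these figures is a pairwise test: the algorithm is shown images containing one person from each group, and demographic parity would mean each group's face is kept in the crop 50 per cent of the time. The reported numbers would then be the deviation from that 50 per cent baseline. The sketch below illustrates that reading only; the function name and sample counts are invented, and Twitter's actual evaluation pipeline may differ.

```python
def parity_difference(favored_count: int, total_pairs: int) -> float:
    """Deviation from demographic parity, in percentage points.

    In a pairwise comparison test, parity means each group is favored
    in exactly 50% of crops; this returns how far the observed rate
    strays from that baseline.
    """
    observed_rate = favored_count / total_pairs
    return (observed_rate - 0.5) * 100


# Hypothetical example: if crops favored women in 580 of 1,000
# mixed-gender image pairs, the deviation from parity is 8 points,
# matching the kind of figure quoted above.
print(parity_difference(580, 1000))
```

Under this reading, a 2 per cent difference means the favored group won 52 per cent of pairwise comparisons rather than the 50 per cent parity would predict.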
Twitter launched the saliency algorithm in 2018 to crop images to improve consistency in the size of photos, so viewers could see more tweets in one glance.
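Mechanically, saliency-based cropping centres a fixed-size window on whatever region a model scores as most eye-catching. A minimal sketch of that final cropping step is below, assuming a precomputed saliency map the same shape as the image; Twitter's actual saliency scores came from a trained neural network, which is not reproduced here.

```python
import numpy as np

def saliency_crop(image: np.ndarray, saliency: np.ndarray,
                  crop_h: int, crop_w: int) -> np.ndarray:
    """Centre a crop_h x crop_w window on the most salient pixel,
    clamping the window so it stays inside the image bounds."""
    # Locate the single highest-scoring pixel in the saliency map.
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the window's top-left corner to keep the crop in-bounds.
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```

Because a single point of focus drives the whole crop, any systematic skew in which faces the saliency model scores highly translates directly into who gets cropped out, which is the failure mode the researchers probed.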
"We considered the trade-offs between the speed and consistency of automated cropping with the potential risks we saw in this research. One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people," Rumman Chowdhury, Twitter's director of software engineering, wrote in a blog post in May.
After the initial discovery, the company launched a competition, inviting researchers to identify other ways the image-cropping algorithm could cause harm.
Twitter gave out cash prizes, including $3,500 to the winner and smaller amounts to the runners-up.