Google CEO says flawed AI responses unacceptable as they offended users and showed bias

Last week, the company suspended image generation of individuals by its generative AI tool Gemini

Sundar Pichai, chief executive of Alphabet and its subsidiary Google. EPA

Inaccurate images and text generated by Google’s generative artificial intelligence tool Gemini have “offended” users and “shown bias”, the company’s chief executive said.

“That’s completely unacceptable and we got it wrong,” Sundar Pichai wrote in a memo to employees seen by The National.

“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”

Last week, the Alphabet-owned company suspended the image generation of individuals by Gemini, previously known as Bard, following criticism regarding its handling of racial issues.

Google at the time apologised for “missing the mark” and said it was working to address those issues immediately.

The company took the step after users on X highlighted several instances in which the AI model generated images that misrepresented the race of their subjects.

For example, users on social media complained the AI tool had generated inaccurate images of historical figures – such as showing the US founding fathers as women and people of colour.

However, Mr Pichai said Google aimed to build on the AI products and technical announcements it had made in the past few weeks.

“We have always sought to give users helpful, accurate and unbiased information in our products, that’s why people trust them,” he said.

“We will be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evaluations and red-teaming, and technical recommendations.”

Alphabet’s stock, which has surged more than 55 per cent in the past 12 months, was down 0.33 per cent at $139.64 in pre-market trading on Wednesday.

Problems with Google’s AI tools could reinforce the notion of the company as “an unreliable source for AI” and create an opportunity for competitors, Melius Research analyst Ben Reitzes wrote in a research note, Bloomberg reported.

It is not the first time Google has attracted a backlash over perceived irresponsible use of its technology. In 2015, the company had to issue an apology after its photos app categorised a black couple as "gorillas".

Over the past few months, Google has doubled down on its AI efforts as it competes with Microsoft-backed OpenAI’s ChatGPT. This month, OpenAI launched Sora, a new generative AI model that creates video from users’ text prompts.

Launched in December, Gemini is the first AI model to beat human experts on MMLU (Massive Multitask Language Understanding), one of the most widely used methods of testing the knowledge and problem-solving abilities of AI.

Updated: February 28, 2024, 1:59 PM