Google announced a range of new artificial intelligence-powered features in its translation application at its Live from Paris virtual event on Wednesday.
The new features include more contextual translation options with descriptions and examples, a redesigned app for Apple’s iOS operating system, and an augmented-reality translation feature through Google Lens.
Contextual options mean that words and phrases with different meanings will be translated based on the context of the text.
“Whether you are trying to order bass for dinner or play a bass during tonight’s jam session, you have the context you need to accurately translate and use the right turns of phrase, local idioms, or appropriate words depending on your intent,” said Xinxing Gu, a product manager at Google.
The contextual translation options will be introduced in English, French, German, Japanese and Spanish in the coming weeks, he said.
On Monday, the Alphabet-owned company launched a new conversational AI service called Bard to compete with rival ChatGPT, an AI service created by OpenAI.
The service aims to create innovative ways to engage with information, from language and images to videos and audio.
The Google Translate app, which recently received a fresh look on the Android operating system, will get a new design on iOS in the coming weeks.
The redesigned app on iOS will offer a larger space for typing and more accessible entry points for translating conversations, voice input and Lens camera translation.
Google has also added new gestures to make the translation app more accessible.
They include selecting a language with fewer taps, holding the language button to pick a recently used language with a swipe, and swiping down on the home-screen text area to quickly bring up recent translations.
Thirty-three new languages are now available on-device in the Translate app — including Basque, Corsican, Hawaiian, Hmong, Kurdish, Latin, Luxembourgish, Sundanese, Yiddish and Zulu, Google said.
“Advances in AI have given us the ability to translate images with Lens, which enables you to search what you see using the camera on your device,” said Mr Gu.
“In a big step, advanced machine learning also means that we are able to blend translated text into complex images, so it looks and feels much more natural.
"Soon, we will expand web image translation to give you more options for translating image-based content regardless of how you search for it."
Google Lens is a set of vision-based computing capabilities that can understand what users are looking at and use that information to copy or translate text, identify plants and animals, explore locales, discover products, find visually similar images and perform other useful actions.
The Google Translate app is used by more than 1 billion people globally.
The company has also announced its plan to add an “immersive view” feature to Google Maps that will let users feel like they are in a place.
Google is initially introducing this feature in London, Los Angeles, New York, San Francisco and Tokyo.
It plans to extend it to other cities, including Amsterdam, Dublin, Florence and Venice, in the coming months.