'Cheaper and faster' ChatGPT rival being built in Abu Dhabi

Technology Innovation Institute is building a local alternative to other large language models


The bot wars are heating up, with Abu Dhabi entering the arena on Wednesday by announcing its own large language model to compete with the likes of OpenAI, DeepMind and Google.

Technology Innovation Institute, a government-backed research hub, introduced Falcon LLM, which is under development for a wide variety of applications, from chatbots and language translation to content generation and sentiment analysis.

“My top priority is to pave the way for the development of more powerful and advanced technologies in the UAE,” Ebtesam Almazrouei, a director in the AI research lab at TII, said in an interview with The National.

"We are committed to making the UAE a key player in the global arena of advanced technology."


She has OpenAI's platform in her sights as she and a dozen collaborators at TII work to provide a local alternative.

Falcon is not yet commercially available and no timeline was disclosed, but the ambition is eventually to offer the model to government entities, start-ups and the private sector, making the economy less dependent on LLMs from the major technology players in the increasingly competitive ― and secretive ― artificial intelligence space.

TII, the applied research arm of Abu Dhabi's Advanced Technology Research Council, is a critical part of the UAE's efforts to diversify from a reliance on oil exports and develop a knowledge-based economy.

According to an outside performance evaluation by Stanford University, to be published in the coming weeks, Falcon is cheaper and faster to run than GPT-3 and models from DeepMind and Google.

The team at TII touts the quality of its training data.

Falcon is a 40 billion-parameter language model trained on a final dataset of more than one trillion tokens of web data.

Stanford's evaluation is an industry benchmark that tests LLMs on the same scenarios, "allowing for controlled comparisons", according to the university.

Falcon's accuracy, bias and ability to reason will also be tested, and those results are expected to be made public in the coming weeks as well.

OpenAI announced the latest update to its GPT model, GPT-4, on Tuesday but, in an interview with MIT Technology Review, declined to reveal how much bigger the LLM is or exactly why it performs better than its predecessors.

“That’s something that, you know, we can’t really comment on at this time,” OpenAI’s chief scientist, Ilya Sutskever, said to the publication.

“It’s pretty competitive out there.”

The announcement of Falcon is a good reminder that OpenAI's GPT model ― while grabbing mainstream attention ― is part of a much wider effort by technology companies to capture a part of this booming market.

“The year 2023 is turning out to be the year of AI,” Dr Ray Johnson, chief executive of TII, said in the research hub's announcement.

"Falcon LLM is a landmark announcement for us, but this is just the beginning. By the end of the year, we will be sharing news on a huge increase in capabilities in this space."

Updated: March 16, 2023, 1:11 PM