Abu Dhabi's TII unveils Falcon 2 series that aims to take on Google and Meta

New models are open sourced, offering unrestricted access to developers globally

TII said it aims to boost the existing AI models by adding more capabilities and introducing a range of sizes. Khushnum Bhandari / The National

The Technology Innovation Institute, the Abu Dhabi government-backed research centre, has unveiled the second iteration of its large language model, Falcon 2, to compete with the likes of models developed by Meta, Google and OpenAI.

Large language models are a type of generative artificial intelligence that can imitate human intelligence. They can classify, summarise, translate, forecast and produce new content – text, audio or visual – using large data sets.

TII, which is the applied research pillar of the UAE capital’s Advanced Technology Research Council (ATRC), said the new series is multilingual and has been tested against several prominent AI models.

It comes in two versions: Falcon 2 11B, which is trained on 5.5 trillion tokens and has 11 billion parameters, and Falcon 2 11B VLM, a vision-to-language model that converts visual inputs into textual outputs.

“While Falcon 2 11B has demonstrated outstanding performance, we reaffirm our commitment to the open-source movement with it, and to the Falcon Foundation,” said Faisal Al Bannai, secretary general of ATRC and strategic research and advanced technology affairs adviser to the UAE President.

“With other multimodal models soon coming to the market in various sizes, our aim is to ensure that developers and entities that value their privacy have access to one of the best AI models to enable their AI journey.”

Democratising generative AI

Falcon 2 11B and 11B VLM are open-source models, offering unrestricted access to developers globally. They can tackle tasks in various languages including English, French, Spanish, German and Portuguese.

Google and OpenAI, the two front-runners in the generative AI field, have predominantly kept their foundational models closed, expressing concern that large language models could be manipulated to spread misinformation or other potentially dangerous content.

But proponents of open-source software say keeping these systems closed unfairly curtails innovation and hampers their potential to improve the world.

Falcon models take the fight to Meta and Google

Falcon 2 11B outperforms Facebook parent Meta’s newly launched Llama 3, which has 8 billion parameters, and performs on a par with Google’s Gemma 7B, according to Hugging Face, a collaboration platform for the global machine learning community.

Falcon 2 11B VLM, meanwhile, can identify and interpret images and visuals from the environment, giving it a wide range of applications across industries such as health care, finance, e-commerce, education and the legal sector.

These applications range from document management, digital archiving and context indexing to supporting those with visual impairments, TII said.

UAE minister calls for global coalition to regulate artificial intelligence

Highly scalable models

The new models can run efficiently on a single graphics processing unit, making them scalable and easy to deploy on lighter infrastructure, such as laptops, TII added.

TII said it aims to broaden the family of next-generation Falcon 2 models by introducing a range of new sizes. These models will be enhanced with advanced machine learning capabilities such as “mixture of experts”.

This method combines smaller networks, each with a distinct specialisation, so that the most relevant experts collaborate to deliver highly complex and customised responses.
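The routing idea behind a mixture of experts can be sketched in a few lines of Python. This is a toy illustration with made-up dimensions and random weights, not TII's implementation: a small "router" scores each expert's relevance to an input, and only the top-scoring experts process it, with their outputs blended by those scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration only - real MoE layers are vastly larger.
N_EXPERTS, D_IN, D_OUT, TOP_K = 4, 8, 8, 2

# Each "expert" is a small linear network with its own specialisation (weights).
experts = [rng.standard_normal((D_IN, D_OUT)) / np.sqrt(D_IN) for _ in range(N_EXPERTS)]
# The router scores how relevant each expert is to a given input.
router = rng.standard_normal((D_IN, N_EXPERTS)) / np.sqrt(D_IN)

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route input x through the TOP_K most relevant experts and mix their outputs."""
    scores = x @ router                   # one relevance score per expert
    top = np.argsort(scores)[-TOP_K:]     # indices of the k highest-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()              # softmax over the selected experts only
    # Weighted combination of just the selected experts' outputs; the
    # unselected experts do no work, which is what keeps MoE models efficient.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

out = moe_layer(rng.standard_normal(D_IN))
print(out.shape)  # (8,)
```

Because only a fraction of the experts run for any given input, a model can hold many more parameters than it activates per token, which is how this technique scales capacity without a proportional rise in compute.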

Updated: May 14, 2024, 1:40 PM