Addressing the challenges and risks of artificial intelligence today is critical to prevent any “big catastrophe”, the UAE's most senior AI official has said.
The power and abilities of AI are on a scale “unlike anything we've seen before”, which means those who are ahead of both the industry and the regulators have the means to cause widespread damage, said Omar Al Olama, Minister of State for AI, Digital Economy and Remote Work Applications.
“Can we wait 20 years to govern AI? I think it's going to be too late. So my biggest fear is we're going to take too long to address some inevitable [challenges],” Mr Al Olama said at the Dubai Assembly on Generative AI on Wednesday.
“It just sounds to me that the range of dangers is too enormous to pinpoint.”
Complicating matters, according to Mr Al Olama, is that AI cannot be governed as a technology in itself, but rather by its use and through the verticals it is implemented in.
“You can't govern AI. It is impossible. Whoever tells me that they can is out of their minds,” the minister said.
“With a lot of humility, I'll tell you why we can't govern it: AI is not one tool or capability. A self-driving car requires very different governance than a large language model or a computer vision system.”
Several figures from major industries have sounded alarms about AI, which has gained significant momentum with the advent of generative AI.
Perhaps the most disturbing is the possibility of AI harming, or even killing, humans.
In May, scientists and technology leaders, including high-level executives at industry majors Microsoft and Google, issued a warning that AI raises the possibility of human extinction.
Signatories of the document included Sam Altman, the chief executive of OpenAI, which created the sensational generative AI platform ChatGPT, and Geoffrey Hinton, a former Google executive who is considered to be the “godfather of AI”.
Mr Hinton sounded alarm bells after his departure from the company in May: from the elimination of jobs to the threat of AI becoming sentient and the weaponisation of the technology, he voiced regrets about the innovations he had a hand in creating.
He warned that future iterations of AI could become a threat to humans because of their unexpected behaviour, and said he dreads the day that truly autonomous weapons – “killer robots” – become a reality.
Weaponised technology could emerge sooner rather than later: research firm Gartner has warned that threat actors could successfully use operational technology environments to cause human casualties by 2025.
In June, an adviser to British Prime Minister Rishi Sunak said AI systems are on track to become powerful enough to “kill many humans” within just two years, urging policymakers to bring such systems under control.
AI-powered drones, for instance, could work well for military operations, Mr Al Olama said.
“But then if this goes wrong, it's a big issue, right? So these are fears that we need to address and look at now,” he said.
The minister, however, insisted that this fear could be overcome by co-operation.
“I think if you have a fear, channel it towards the future that you want to create. There are a lot of unknowns, and that is natural; we need to deal with them,” Mr Al Olama said.
“Empower yourself with the tools necessary. Be aware of what is happening and create this movement that will bring others with you. I don't think the future is going to be a selfish future; it's going to be a great future.”