AI's global rise: Experts urge democratic tech and common standards

New measures are needed to help authorities detect deepfakes and other risks associated with AI technology early on

Democratisation of artificial intelligence technology and the establishment of “common standards” for both large and small players are required to ensure that the sector is regulated, experts told a World Economic Forum session on Thursday.

This will help authorities identify deepfakes and other AI-related perils at an early stage and promptly hold the culprits accountable, they added.

“It is just unsustainable, impractical and infeasible to cleave to the view that only a handful of west coast companies with enough GPU [graphics processing unit] capacity, enough deep pockets and enough access to data can run this foundational technology,” Nick Clegg, president for global affairs at Meta, told the “Hard power of AI” session.

“We are an advocate of open source to democratise this technology … [we need] common standards on how to identify [deepfakes] … basically what's called invisible watermarking in the images and videos that can be generated by generative AI tools. That does not exist at the moment.”
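The "invisible watermarking" Mr Clegg refers to means embedding an imperceptible signature in AI-generated images and videos so that detection tools can later identify their origin. As a purely illustrative sketch (a basic least-significant-bit scheme, not the common industry standard he is calling for, which does not yet exist), a watermark can be hidden in pixel data like this:

```python
# Illustrative least-significant-bit (LSB) watermark: hide a short byte
# string in the lowest bit of each pixel byte. Real proposals (e.g. from
# standards bodies) are far more robust; this only shows the principle.

def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the least significant bits of `pixels`."""
    # Unpack the watermark into individual bits, most significant first.
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read back `length` bytes hidden by embed_watermark."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, length * 8, 8)
    )

image = bytes(range(256))              # stand-in for raw pixel data
marked = embed_watermark(image, b"AI")
assert extract_watermark(marked, 2) == b"AI"
```

Because only the lowest bit of each byte changes, the watermarked image is visually indistinguishable from the original, which is what makes the mark "invisible"; the weakness of naive schemes like this one (they are destroyed by compression or cropping) is part of why common standards are needed.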

Worldwide, the AI industry is booming.

Investors have put more than $4.2 billion into generative AI start-ups across 215 deals in 2021 and 2022, after interest surged in 2019, recent data from CB Insights showed.

Globally, AI investments are projected to hit $200 billion by 2025 and could have a significant impact on gross domestic product, Goldman Sachs Economic Research said in a report in August.

But despite the AI industry's meteoric rise, authorities have been scrambling to regulate the sector as innovations continue to outpace existing guidelines.

“I hear audios of politicians that are clearly fake, but people believe them," Irish Prime Minister Leo Varadkar said.

"Fast detection is going to be really important so that we can find out where it is coming from. The platforms have a huge responsibility to take down [fake] content and take it down quickly.

“Also, people and societies have to adapt to this new technology. That will happen anyway … [whenever] there's a new technology people learn how to live with it.”

But Mr Varadkar said he is very optimistic about AI’s “extraordinary” future potential and that people should not be worried by reports of many traditional jobs disappearing.

“History tells us that any time there was a technological advancement, people believed it would eliminate jobs," he said.

"What usually happens is that some jobs become obsolete and new forms of employment are created.”

Last month, the EU became the first major governing body to enact comprehensive AI legislation with the Artificial Intelligence Act, stipulating what can and cannot be done, and announcing fines of up to €35 million ($38.4 million) for non-compliance.

The new law is the culmination of efforts by the EU after it released the first draft of its rule book in 2021, allowing it to take the early lead in safety standards for the technology.

“The European Union is definitely the first institution who tries to categorise the risks of AI and from these risks there are certain things that have to be done,” said Karoline Edtstadler, Federal Minister for the EU and Constitution at the Austrian Chancellery.

"AI is a very powerful technology and we [also] see a lot of downsides emerging from it.

“We need global rules and in an ideal world we can agree on regulations and restrictions worldwide."

Ms Edtstadler said that this collaboration should involve the industry and technology sectors to collectively discuss potential outcomes.

Updated: January 18, 2024, 8:07 PM