Why global regulators are in a quandary on how to govern AI

Debate on who is responsible for data used to train large language models only part of 'manifold tensions'

John Marshall, executive director of the World Ethical Data Foundation; Akram Awad, partner at Boston Consulting Group; and Stephen Almond, executive director for regulatory risk at the UK government's Information Commissioner’s Office, during a panel discussion moderated by Mustafa Alrawi, acting managing director of CNN Business Arabic, at the Dubai Assembly for Generative AI at the Museum of the Future on Thursday. Chris Whiteoak / The National

Global regulators are grappling with how to regulate generative artificial intelligence, amid "tensions" between authorities and the wider technology sector, industry leaders have said.

These tensions are "manifold" and exist within the open-source communities developing AI models, as well as in the debate over who is responsible or accountable throughout the supply chain, particularly on the data used to train large language models, the Dubai Assembly for Generative AI heard on Thursday.

As a result, organisations have been unable to unlock the full potential of AI, while in some cases technology companies could be putting their own innovations on hold because they are wary of regulatory ambiguity and uncertainty.

"We're seeing regulators globally worrying precisely about what it would even mean to regulate AI, because the definition is a point of contention in itself," said Stephen Almond, executive director for regulatory risk at the UK government's Information Commissioner’s Office.

"Then we look at generative AI and the enormous complexities ... we have an ecosystem of very complicated supply chains from processes in the first place to how they're handled, and ethical considerations and compliance that goes into that all the way through to, I think most importantly, the accumulation or extraction data."

AI has gained momentum with the introduction of generative AI, which rose to prominence thanks to ChatGPT, the language model-based tool made by Microsoft-backed OpenAI.

Its sudden rise has also raised questions about how data is used in AI models and how the law applies to the output of those models, such as a paragraph of text or a computer-generated image.

Omar Al Olama, Minister of State for AI, Digital Economy and Remote Work Applications, earlier told the assembly AI cannot be governed as a technology in itself but rather by its use – and that a "big catastrophe" could happen if it is not regulated soon enough.

Already, notable organisations have been dragged into legal proceedings.

Microsoft was named in a class-action lawsuit late last year when a number of coders sued the company, its subsidiary GitHub and OpenAI, alleging their creation of Copilot relied on "software piracy on an unprecedented scale" and seeking damages of $9 billion.

In August, National Public Radio reported that The New York Times was considering filing a case against OpenAI to protect the intellectual property rights associated with its reporting.

Windows operating system-maker Microsoft, however, is attempting to streamline issues on the use of AI services: last month, it announced a commitment to legally protect its customers if they are sued for using its AI services, as the use of more advanced iterations of the technology becomes more widespread.

"There are 193 countries that signed up for the [Ethics of Artificial Intelligence], so that's really a foundation," said Akram Awad, a partner at Boston Consulting Group, referring to the treaty signed by members of the United Nations Educational, Scientific and Cultural Organisation in November 2021.

"However, when it comes to the practicality of how you apply that, then there would be a lot of nuisances and [it is] almost impossible to separate the application of AI ethics or responsible AI from the cultural, social and even economic context."

The UN agency had cautioned that AI "is bringing unprecedented challenges", including increased gender and ethnic bias, significant threats to privacy, dignity and agency, dangers of mass surveillance and increased use of unreliable AI technology in law enforcement.

"What we actually do require is that culture of responsibility at every level in organisations themselves and in industry bodies that provide assurance around this," said John Marshall, executive director of the World Ethical Data Foundation.

"Just as enforcement and explainability are really important parts of enabling competent institutions and individuals, companies should know in some sense what the consequences of their technologies will be."
