AI leaders welcome UK government investment in 'agile' regulations

Plans include a £90 million investment in nine research hubs that will explore ways in which artificial intelligence can be developed responsibly

Robot Ai-Da at the House of Lords in London. Experts say the development of AI regulations needs to keep pace with the evolution of the technology itself and take full account of the potential risks associated with it. Getty Images

Leading companies in artificial intelligence have reacted positively to the UK government's latest commitment to keep future legislation and regulation agile regarding the fast-growing technology.

On Tuesday, the UK government pledged more than £100 million ($125 million) to make sure that emerging AI technologies are responsibly developed and safely implemented.

“I welcome the UK government’s statement on the next steps for AI regulation, and the balance it strikes between supporting innovation and ensuring AI is used safely and responsibly,” said Lila Ibrahim, chief operating officer at Google DeepMind.

Aidan Gomez, co-founder and chief executive of Cohere, said: “By reaffirming its commitment to an agile, principles-and-context-based regulatory approach to keep pace with a rapidly advancing technology, the UK government is emerging as a global leader in AI policy.

“The UK is building an AI-governance framework that both embraces the transformative benefits of AI while being able to address emerging risks.”

The head of Amazon's operations in the UK, John Boumphrey, said the UK's efforts to create “guardrails” for AI should be balanced with “allowing for continued innovation”.

“We encourage policymakers to continue pursuing an innovation-friendly and internationally co-ordinated approach, and we are committed to collaborating with government and industry to support the safe, secure, and responsible development of AI technology,” he said.

Experts said the development of rules and regulations for AI needs to keep pace with the evolution of the technology itself and take full account of the potential risks associated with it.

“Moving quickly here while thinking carefully about the details will be crucial to balancing innovation and risk mitigation, and to the UK’s international leadership in AI governance more broadly,” said Tommy Shaffer Shane, AI policy adviser at the Centre for Long-Term Resilience.

The money will be spent “to support regulators and advance research and innovation on AI, including hubs in healthcare and chemical discovery,” the Department for Science, Innovation and Technology said as part of the government's response to the AI Regulation White Paper consultation, which was launched about a year ago.

The plans include £90 million to set up nine AI research hubs at universities across Britain that will develop methods of using AI responsibly in sectors such as healthcare and chemistry.

In addition, £10 million is being made available to regulators to “prepare and upskill” their staff to “address the risks and harness the opportunities” of AI. The government also said that regulators, including Ofcom and the Competition and Markets Authority, have been asked to set out by the end of April how they intend to manage AI technology.

“I am personally driven by AI’s potential to transform our public services and the economy for the better – leading to new treatments for cruel diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future,” said Michelle Donelan, Secretary of State for Science, Innovation and Technology.

“AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.”

Prime Minister Rishi Sunak has said in the past that while AI affords a great opportunity, the technology could increase fraud and fear, result in cyberattacks and even pose a risk to humanity itself.

Some trade unions have warned that even as AI could improve business efficiency, it could also spell the demise of millions of jobs.

A study by the online employment company ResumeBuilder found that of a sample of 750 businesses using AI, 44 per cent said they would lay off staff in 2024 as a direct result of using the technology.

Cyber threats

On Tuesday, the UK and France held a joint conference in London to launch an international agreement aimed at addressing “the proliferation of commercial cyber intrusion tools”.

Experts are concerned that AI technology has the potential to make the tools used by cybercriminals much more powerful.

Opening the event at Lancaster House in London, Deputy Prime Minister Oliver Dowden said he was “proud that the UK is building on its existing capabilities and taking action as a world leader on cyber threats and innovation”.

Representatives from 35 nations attended the conference, together with tech companies, legal experts, and human rights defenders, as well as some companies involved in the making and selling of cyber intrusion tools and services.

Although a list of intrusion tool makers was unavailable, cyber experts from Apple, Google, Microsoft and BAE Systems were confirmed to be among the attendees.

“The proliferation of commercially available cyber intrusion tools is an enduring issue, with demand for capability to conduct malicious cyber operations growing all the time,” said Paul Chichester, director of operations at the National Cyber Security Centre.

“We need a thriving global cyber security sector to maintain the integrity of our digital society, and by working together to improve oversight and transparency in how this capability is being developed, sold and used, we can reduce the impact of the threat to us all.”

The conference follows last November's AI Safety Summit at Bletchley Park, where major technology firms agreed to submit their AI plans and models for review before launching them to the public.

The opposition Labour party welcomed the government's plans for regulations and risk mitigation, but said it was “still missing a plan to introduce legislation that safely grasps the many opportunities AI presents”.
