Why we should take Sunak's AI summit seriously

It is a chance for policymakers and tech leaders to frame regulations necessary for the future

British Prime Minister Rishi Sunak delivers a speech on AI at Royal Society in London last week. Getty Images

Rishi Sunak has had a bumpy run-up to the first global AI summit, which he launches later this week.

His efforts come amid a crowded schedule of AI developments. People most involved in Mr Sunak’s project claim there is little more than a year left to get the basic framework right. In such an uncertain environment, the UK position is to find a regulatory superstructure that puts developers under the kind of scrutiny the medical industry faces: first, do no harm.

Mr Sunak has chosen Bletchley Park, the Second World War codebreaking centre where the Nazis' Enigma cipher was cracked, as the venue. The two-day event is structured to hear from data experts first, after which Mr Sunak will bring industry leaders together with government leaders.

The British summit seeks to land on the fine line between Artificial General Intelligence being realised and a model that stops short of that barrier while matching or surpassing human performance.

UN Secretary General Antonio Guterres has announced a 35-member Advisory Body on Artificial Intelligence that would both champion the principles of the UN Charter and link official efforts to regulate the technology.

The panel, which includes Omar Al Olama, UAE Minister of State for AI, Digital Economy and Remote Work Applications, has been asked to make three interventions by the end of 2023. These focus on an approach to international AI governance, definitions of the risks and challenges, and a call to identify opportunities for AI to accelerate the sustainable development goals.

This week will see US President Joe Biden announce an executive order on AI that both sets out the rules of engagement for the US government with AI systems and establishes visa rules that allow the US to snag the best talent available in the field. Washington’s long-awaited proclamation will set the rules of the road for the American government as it seeks the best advantages from AI.

The UK meeting's invite list has also suffered from the EU's work to launch an AI office to oversee the so-called foundation models that governments fear could spin out of human control with disastrous consequences. These fears concern not only life itself but also more mundane issues, such as highway management systems.

Meanwhile, the EU’s Artificial Intelligence Act is a behemoth of wraparound regulation that will not be complete until after the UK meeting.

Threading the needle between these and other developments is not easy for the UK organisers. If they succeed, the Bletchley meeting could become a recurring international checkpoint, at least until there is a UN-agreed structure like the Cop28 meetings on climate.

All this presupposes that AI brings about a technology revolution as opposed to a productivity transformation. The latter is how most people will experience it. Right now, there is no way of knowing whether the higher functions will work out for good or ill.

The game plan from the British sherpas preparing the ground for the meeting, and indeed the UK’s overall policy approach, is to target the big emerging operating systems, the large language models. As the name implies, these operate at scale and are easier to target for regulation. That is why representatives of the Big Tech companies will dominate the first day of the summit.

One test of the summit will be how much reach it has beyond the English-language sphere. After all, the metrics for the UAE-developed Falcon model more than merit its seat at the table.

Another test will be the inclusion of China, something that could either entrench or forestall the evolution of two separate ecospheres of AI.

If AI poses existential risks, these are not limited to borders and China’s high-quality research needs to be brought onboard, not isolated. Narrowly defined national security interests should not dominate the summit in part because new and emerging technology cannot be merely quarantined and then consigned to the history bin.

Those working in the industry outside the leading groups make a fair point when they warn about industry capture of the regulators at this early stage. The proposed reliance on sandbox infrastructure for testing and ethical scrutiny could create barriers to entry for the industry as a whole, thus diminishing creativity and progress.

In truth, the industry remains reliant on the state for its expansion. Learning from the mistakes of the crypto boom would allow policymakers to appreciate their power over the industry.

What is referred to as the frontier in AI – and Mr Sunak has already announced a Frontier AI Taskforce to advise the UK government – is already showing monopolistic signs. This is because the technology and the processing are reliant on advanced chips and graphics processing units that favour Big Tech.

Remember, some companies are looking at acquiring their own mini-nuclear power-generating plants to run data processing at scale. Mostly though, the industry is reliant on public grids to do its work.

A third area of dependency is that AI is trained on commoditised data harvested from the public realm.

A report from Goldman Sachs last week said the industry was in the early stages of shifting to proprietary data exploitation. It said corporations are preparing for this stage by building infrastructure to exploit their own data.

For now, the industry needs a grand bargain with governments and will trade practices and ground rules in return. This is Mr Sunak’s key insight, and his summit is rightly focused on where this effort is best concentrated for now.

The British Prime Minister needs to ensure that this effort is open to all and built to withstand the rapid shifts the industry will experience.

There should also be a recognition that the largely text-based models do not think for themselves and, thus, could be merely a transitory technology. Keeping it within guardrails is the most logical option for now.

Within these parameters, companies will be able to innovate much more productive practices. The likely failure modes are that AI is used for dangerous threats such as bioweapons; that it becomes a criminal playground; that it takes over critical systems such as electricity grids; or that it disappoints and becomes irrelevant.

By the end of the week, we should know if the summit can tackle any or all of these issues from within the industry, and at a national or international level.

Published: October 30, 2023, 5:00 AM