The EU and G7 countries have agreed on new guiding principles and a code of conduct intended to ensure developers produce “trustworthy” artificial intelligence systems.
In a release, the European Commission said that the 11-point document offers guidance on how to tackle incidents and patterns of misuse after AI products have been placed on the market.
It is hoped that the new measures will also help developers to “identify, evaluate and mitigate risks across the AI life cycle”.
Companies should publish public reports on the capabilities, limitations, and appropriate and inappropriate uses of their AI systems, while also investing in robust security controls.
These principles have in turn been used to compile a voluntary code of conduct that will provide detailed and practical guidance for organisations developing foundation models and generative AI.
Leaders of the G7 nations kicked off the process in May at a ministerial forum called the “Hiroshima AI Process”.
The guidelines set a landmark for how major countries govern AI, amid privacy concerns and security risks.
In the release, the EU said that different jurisdictions are likely to take their own unique approaches to enforcing the guidelines.
Ursula von der Leyen, President of the European Commission, said on Monday that the benefits of AI for the world were “huge”, but that the technology brings its own challenges.
Ms von der Leyen called on AI developers to sign and enact the code of conduct “as soon as possible”.
It comes as world leaders prepare for a summit at Bletchley Park in the UK which will focus on how to ensure AI can be used safely around the world.
The White House has confirmed that US Vice President Kamala Harris will attend the summit instead of President Joe Biden, while Canadian Prime Minister Justin Trudeau, French President Emmanuel Macron and German Chancellor Olaf Scholz are not expected to attend.