EU sets guidelines for ethical development of artificial intelligence

Pilot phase to involve a wide range of stakeholders

Hanson Robotics Inc. humanoid robot "Sophia" speaks to attendees on the opening day of the MWC Barcelona in Barcelona, Spain, on Monday, Feb. 25, 2019. At the wireless industry’s biggest conference, over 100,000 people are set to see the latest innovations in smartphones, artificial intelligence devices and autonomous drones exhibited by more than 2,400 companies. Photographer: Angel Garcia/Bloomberg

The European Commission has put human autonomy and accountability at the heart of new guidelines to regulate artificial intelligence and ensure the public trusts the technology.

The EU initiative sets out seven essential requirements for AI and follows a global debate on whether companies should prioritise ethical concerns over business interests.

Brussels hopes its guidelines will quell concerns among EU citizens about the technology, while giving European companies a competitive edge in the industry and boosting global exports.

“We do not want to stop innovation, but the added value of the EU approach is that we are making it a people-focused process. People are in charge,” EU commissioner for the digital economy Mariya Gabriel said.

The guidelines are designed to ensure algorithms do not discriminate on the grounds of age, gender or race. However, EU officials have said repeatedly that there are no plans to move beyond non-binding guidelines and issue legislation on AI.

In December last year, an independent expert group drafted the ethics guidelines after taking into account more than 500 comments received through the European AI Alliance, a forum where companies, public administrations and organisations engage in discussions with the experts drafting the guidelines.

“The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies,” the commission's digital chief Andrus Ansip said.

The guidelines aim to ensure the following requirements are met:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: the traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to drive positive social change and enhance sustainability and ecological responsibility.
  • Accountability: mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The Commission will launch a pilot phase this summer involving a wide range of stakeholders. Early next year, the expert group will review the requirements and propose any next steps.

IBM Europe chairman Martin Jetter, who was part of the group of experts, said the guidelines “set a global standard for efforts to advance AI that is ethical and responsible”.