New software promises to measure the depth of an AI system's knowledge. Reuters

New cyber software 'can test the limits of AI's knowledge'


Marwa Hassan

A team of researchers has developed software that they claim can assess the true level of knowledge possessed by artificial intelligence systems.

The software is designed to verify the accuracy and depth of an AI system's understanding of a specific subject, which is important for ensuring reliable performance in various industries, from health care to finance.

The team says it can also identify gaps in an AI system's knowledge and suggest areas for improvement.

The research could prove an important breakthrough in verification methods for AI-driven programs and decision-making, helping to make AI safer.

The researchers, whose paper was published by the University of Surrey, have also defined a “program-epistemic” logic, which can specify and reason about the level of knowledge a program has.

The logic enables reasoning about facts that will only become true after a program, and the other processes it interacts with, finish running.
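The underlying idea can be illustrated with a toy possible-worlds model. This is an illustrative sketch of ours, not the Surrey team's actual software or formalism: an agent “knows” a fact only if the fact holds in every state of the world consistent with what the agent has observed.

```python
# Toy possible-worlds model of knowledge (illustrative only, not the
# University of Surrey tool). An agent "knows" a fact if the fact holds
# in every world consistent with its observation.

def knows(worlds, observation, fact):
    """True iff `fact` holds in all worlds matching `observation`."""
    consistent = [w for w in worlds if observation(w)]
    return all(fact(w) for w in consistent)

# Worlds: hypothetical (pin, balance) pairs a banking AI might face.
worlds = [(1234, 100), (1234, 500), (9999, 100), (9999, 500)]

# The agent has observed that the PIN is 1234, but not the balance.
observes_pin = lambda w: w[0] == 1234

print(knows(worlds, observes_pin, lambda w: w[0] == 1234))  # True: it knows the PIN
print(knows(worlds, observes_pin, lambda w: w[1] == 100))   # False: the balance is still unknown
```

Verification tools in this tradition ask such questions mechanically, over the states a program can actually reach.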

The software has the potential to enhance the security and reliability of AI systems, providing greater assurance that they are operating as intended and reducing the risk of unintended consequences. Science Photo Library

The innovation centres on new methods for automatically verifying epistemic properties of AI-centred programs, and on analysing concrete programs, over an arbitrary first-order domain, against richer requirements than previous approaches could handle.

The software developed by the team can verify how much information an AI system has gained from an organisation's digital database. It can also identify whether the AI system is capable of exploiting flaws in software code.
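One rough way to picture “how much information an AI system has gained” — again, our illustration rather than the paper's method — is to count how many candidate database records the system can rule out after its queries, expressed in bits:

```python
# Rough illustrative sketch (not the published technique): measure
# "information gained" as how far an AI has narrowed down the set of
# equally likely candidate records in a database.
import math

def bits_learned(candidates_before, candidates_after):
    """Bits of information gained when the candidate set shrinks."""
    return math.log2(candidates_before / candidates_after)

# The AI started with 1024 equally likely records and narrowed them to 4.
print(bits_learned(1024, 4))  # 8.0 bits
```

A verifier can then flag an AI system whose queries let it learn more bits about sensitive data than a policy allows.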

The software is considered a significant step towards ensuring the safe and responsible deployment of generative AI models.

Dr Solofomampionona Fortunat Rajaona, the lead author of the paper, said the ability to verify what AI had learnt would give organisations the confidence to safely unleash the power of AI into secure settings.

He said: “In many applications, AI systems interact with each other or with humans, such as self-driving cars in a highway or hospital robots. Working out what an intelligent AI data system knows is an ongoing problem which we have taken years to find a working solution for.”

The software can be used as part of a company's online security protocol to ensure that AI systems are not accessing sensitive data or exploiting software code flaws.


Prof Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, emphasised the importance of creating tools that can verify the performance of generative AI.

“This research is an important step towards maintaining the privacy and integrity of datasets used in training,” he said.

The paper also discussed the challenges of evaluating knowledge-centric properties in AI-based decision-making. It notes that logics of knowledge, or epistemic logics, have been well explored in computer science since Hintikka.

Jaakko Hintikka was a Finnish philosopher and logician known for his work on modal logic and game-theoretical semantics. He introduced the concept of possible worlds to modal logic and was the first to use game-theoretical semantics to analyse modal logic.

The researchers created new methods for analysing how computer programs think and reason. These methods help programs to understand facts not only after they perform an action, but also before they do it.
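This before-and-after reasoning can be sketched in miniature — under our own simplified semantics, not the paper's formal one — by pushing every possible state through a program and re-testing what an observer knows:

```python
# Illustrative sketch (assumed simplified semantics, not the published
# formalism): knowledge *after* a program runs is checked by pushing
# each possible world through the program and re-testing the fact.

def known_in_all(worlds, fact):
    """An observer knows `fact` iff it holds in every possible world."""
    return all(fact(w) for w in worlds)

def run(program, worlds):
    """Apply the program to every world the observer considers possible."""
    return [program(w) for w in worlds]

double = lambda x: 2 * x   # a process that doubles a secret counter
worlds = [1, 2, 3]         # values the observer considers possible

is_even = lambda x: x % 2 == 0

# Before running: the observer does not know the counter is even.
print(known_in_all(worlds, is_even))               # False

# After running: every possible outcome is even, so the observer
# already knows the counter *will* be even once the program finishes.
print(known_in_all(run(double, worlds), is_even))  # True
```

The point is that a fact can be knowable in advance of execution, which is exactly the kind of property the new verification methods are designed to check.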

Tech leaders call for measured AI development and safety protocols

Martha Lane Fox, the British tech pioneer, has called for a more rational discussion surrounding the impact of AI and has warned against over-hyping it.

While acknowledging that frameworks around AI are necessary, she advocates for a more measured approach from companies in the development of AI technology.

Ms Lane Fox believes AI presents opportunities for society and businesses, but emphasises that it should be adopted in an ethical and sustainable way.

Elon Musk, the chief executive of Tesla and Twitter, joined tech leaders in signing an open letter urging AI labs to pause the development of powerful AI systems for at least six months.

The letter expressed concern that AI systems with human-competitive intelligence could pose significant risks to society.

The letter proposes that the AI labs work on developing safety protocols overseen by an independent panel before training AI systems more powerful than GPT-4.

The letter also suggests the need for new regulators, oversight, public funding for AI safety research, and liability for AI-caused harm.

Updated: April 04, 2023, 9:50 AM