Mustafa Suleyman became chief executive of Microsoft AI last year. AFP

British-Syrian AI boss Mustafa Suleyman calls for a 'humanist' AI


Lemma Shehadi

Microsoft AI’s chief executive Mustafa Suleyman has called for a “humanist” approach to artificial intelligence that will set limits on the technology to keep it under human control.

Mr Suleyman is known for his AI activism and as the co-founder of DeepMind, one of the world’s leading AI research companies. His father was a London taxi driver from Syria and his mother an English nurse working in the National Health Service.

AI could lead to rapid advances in health care and scientific research that would greatly improve human life, he said on Monday. But it also carries the risk that the technology could develop unchecked.

He urged people to be more aware of these risks, and to collectively call for an ethical approach to the technology. “If you're not a little bit afraid at this moment, then you're not paying attention. Fear is healthy and necessary,” he said.

“We have to declare our belief in a humanist super intelligence, one that is always aligned to human interest, that works for humans, that makes the world a better place,” he said, speaking on the BBC’s Today programme.

AI is coming close to being able to produce new knowledge, which Mr Suleyman views as the exciting next step in the technology’s progression. “The real quest of AI, in my opinion, is to try to produce new knowledge. At the moment, AI does a pretty good job of reproducing existing knowledge, but really we want it to tell us something that we don't actually know,” he said.

Current AI models’ ability to show empathy and to create entirely new images from prompts is a sign this is already happening, he said. The effects will be significant for more complex uses in molecular medicine and laboratory research.

“If you can apply the same set of methods not just to generate images or to videos or to text, but generating the next sequence in DNA or in any time series data, then you could potentially produce synthetic molecules, or you could produce entire academic papers from scratch, including all of the proofs that you might want to go and test in the biology lab,” he said.

But it also comes with challenges, particularly in terms of job losses. “These are fundamentally labour-replacing technologies,” he said. He added that the theory that new jobs will be created “just isn’t true in the long term”.

Jobs in areas such as HR, marketing and project management, where the skills are “quite predictable and quite automatable” or follow “a very strict decision tree”, are at risk. Further down the line, roles that require more complex training, such as paralegals and junior accountants, will also be affected.

His advice for young people to prepare for an AI-dominated job market is to not only understand the technology but to be mindful of the political implications of AI and to use it for good. “The first step is to recognise the technology is political and that it carries huge ethical weight ... to use it to be able to shape it in the best interests of everybody,” he said.

Mustafa Suleyman at the CogX Festival in London. Matthew Davies / The National

Another challenge will be to regulate and limit AI’s ability to self-improve and operate with “complete autonomy”. Some AI models seeking to make vastly complex calculations, such as exploring the limits of the universe, are currently being designed with “superpowers unlike anything that we could imagine”.

“It is designed inherently to self-improve, set its own goals, operate with complete autonomy. Those are three capabilities, which, to my mind, look like we couldn't control it. If we can't control it, it isn't going to be on our side. It's going to overwhelm us,” he said.

He said the ethical principles of AI need to be baked in “by design” and that any future development would need to prove that the technology can be contained. “It has to be a core property of any future development that we can provably contain and secure and align it, stop its spread that isn't under our control, that it can't be hacked or leaked.”

Updated: December 29, 2025, 12:48 PM