A ‘robot’ activist for the Campaign to Stop Killer Robots. Getty Images

Artificial Intelligence: No longer the stuff of science fiction



It was supposed to be a straightforward good-news PR interview, set up by Intel before Christmas to showcase the new software it had designed for Stephen Hawking, the theoretical physicist locked in by motor neurone disease and utterly reliant on computers for communicating with the outside world.

The 72-year-old former Lucasian Professor of Mathematics at Cambridge University, author of A Brief History of Time and widely regarded as one of the smartest human beings alive, finally had at his disposal truly sophisticated adaptive word-prediction software to assist his "speech".

But ACAT, Hawking’s new Assistive Contextually Aware Toolkit, had got one of the greatest minds of our time thinking about the implications of artificial intelligence, and the televised interview with Rory Cellan-Jones, the BBC’s technology correspondent, was about to go off-piste.

The new software, plumbed in to just about everything Hawking had ever written, said or published, was so intuitive that context alone was sufficient for it to guess what his next words would be. ACAT was, in short, putting words into Hawking’s mouth.
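The mechanism behind such prediction is easy to picture: software that has ingested everything a user has written can guess the next word from frequency alone. A toy sketch of context-driven word prediction follows; the corpus, function names and bigram approach are illustrative, not Intel's actual ACAT code, which is far more sophisticated:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which across the user's past writing."""
    words = corpus.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model: dict, prev_word: str):
    """Suggest the word most often seen after prev_word, or None."""
    candidates = model.get(prev_word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# A toy "corpus" standing in for everything the user has written.
corpus = "a brief history of time a brief history of the universe"
model = train_bigrams(corpus)
print(predict_next(model, "brief"))  # → history
```

Even this crude model "puts words in the user's mouth" once the corpus is large enough; the real system adds context beyond the single preceding word.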

Midway through a jocular inquiry by Cellan-Jones about why, despite the new software, he had decided to stick with his clunky robotic voice, Hawking suddenly delivered an unscheduled apocalyptic warning about the perils of artificial intelligence – a message rendered even more ominous by the expressionless professor’s familiar robotic voice.

It was a scene that would not have looked out of place in a film from the Terminator franchise, in which Skynet, a man-made artificial-intelligence system, achieves autonomy and decides to wipe out the now redundant humans who created it.

“The primitive forms of artificial intelligence we already have, have proved very useful,” Hawking began. “But I think the development of full artificial intelligence could spell the end of the human race.”

There was a curiously long pause as Hawking, and his new software, appeared to choose his words carefully.

“Once humans develop artificial intelligence,” Hawking continued eventually, “it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

All that was missing was Arnold Schwarzenegger’s cyborg, sent back in time to silence the professor and his timely warning.

Hawking isn’t the first person, real or fictional, to raise concerns that, in creating machines capable of outsmarting us, we are programming our own destruction.

In 1921, the Czech playwright Karel Čapek staged a play that predicted the fall of mankind at the hands of a race of artificial workers who, learning to think for themselves, turn against their human creators. Rossum's Universal Robots also gave the world the word robot, taken from robota, the Czech word for forced labour.

Since then, conjured up by existential angst at the hubristic nature of our self-satisfied cleverness, an army of sinister robots and disembodied entities imbued with artificial intelligence and bent on mankind’s destruction has poured forth from the pens of science-fiction writers.

HAL 9000, the disembodied computer star of Stanley Kubrick's 1968 film 2001: A Space Odyssey, written with Arthur C Clarke, is a classic of the genre. In charge of systems on the spaceship Discovery One, HAL is capable of autonomous reasoning, a blessing that quickly mutates into a curse when his human companions suspect he is malfunctioning and try to switch him off. HAL, logical to a fault, interprets this as a threat to his mission and sets about killing the crew.

In the real world, however, genuine artificial intelligence – the ability of a computer to recreate or simulate human intelligence, with all its vagaries – has proved somewhat elusive.

The term was coined in 1955 by John McCarthy, then a mathematician at Dartmouth College in New Hampshire. In 1956, McCarthy organised a conference to explore the theory that “every aspect of ... intelligence can in principle be so precisely described that a machine can be made to simulate it”.

McCarthy died in 2011, at the age of 84, after a lifetime devoted to the unfulfilled notion. In the way of the modern digital world, he left behind much of himself as a virtual presence on a website, strangely chaotic for a computer scientist and untended since his last entry, but still echoing with his thoughts and work.

And what is clear from this now long-untended garden of ideas is that most of the blooms languishing in it remain precisely that – only ideas.

Almost 60 years after McCarthy’s vision of artificial intelligence, and despite the enthusiastic endorsement of the concept by fiction writers, there is still no such thing as a general-purpose robot, equipped with genuine artificial intelligence, capable of understanding, navigating and interacting with the world and its other inhabitants in an entirely autonomous manner.

Apple’s Siri, apparently understanding the questions we speak into our iPhones and responding with relevant information or actions, is merely working from a list of recognised words and actions and is not even a step down the right road.

Likewise, Google’s new driverless car, unveiled last week and due to be menacing humans on the streets of California in the new year, will merely be reacting to information from sensors and in accord with the programmed imperatives of an algorithm.

Nevertheless, there remains no shortage of scientists plodding along in the fictional footsteps of Miles Dyson, the Cyberdyne ­Systems engineer who creates the Skynet software that takes over the world in the Terminator film franchise, and dreaming of the day when machines will outsmart humans.

Kevin Warwick, professor of cybernetics at Reading University in the United Kingdom, is one of them. In his 1997 book March of the Machines, Warwick declared not only that it was possible that machines more intelligent than us would one day take over, but that it could happen in the next 10 or 20 years.

“The future points to machines which can evolve into better, even more intelligent machines and which can replace any parts that become faulty,” he enthused. “In this way machines could become immortal.”

In 2009, undeterred by the failure of his prediction to come true, Warwick appeared in the documentary Transcendent Man, celebrating "the life and ideas" of Ray Kurzweil, a computer scientist, futurist, author and, since 2012, the head of engineering at Google, where he is charged with developing machine intelligence.

For Kurzweil, a cult figure to bot-heads everywhere, the buzz word is “singularity” – not the singularity of the Big Bang, as posited by Hawking and others, but “an era in which our intelligence will become increasingly nonbiological and trillions of times more powerful than it is today”.

In short, as his 2005 book, The Singularity is Near, made clear, Kurzweil can't wait for the day when humans will mingle their DNA with the software of computers to achieve immortality – and that day, he claimed at a conference of like-minded "singularitarian immortalists" in 2013, was a mere 30 years away.

The problem with all the excitement stirred by the pronouncements of Hawking, Kurzweil and others, says Mark Bishop, professor of cognitive computing at Goldsmiths, University of London, is that it all hangs on the single assumption “that every ­aspect of human mentality can be instantiated by a computer program”.

If that is true, he tells me, “then I think that Hawking, Kurzweil and Warwick are correct, that there will be a time at which machine intelligence is better than human intelligence in all aspects we can imagine.”

There are, however, “at least three good foundational reasons for doubting whether some critical ­aspects of human mentality can be engineered by the execution of a computer program”.

One, no purely computational system can ever understand the symbols that it ­manipulates.

Two, for a machine to develop truly humanlike intelligence, it must possess consciousness – an impossible fantasy, as Bishop argued in a 2002 paper, Dancing with Pixies, and as likely as consciousness being located in “the cup of coffee I’m drinking or the chair I’m sitting on”.

Scientists, he says, “don’t want to think there are pixies lurking in their cup of coffee, and so we are drawn to reject the idea that the execution of a computer program brings forth consciousness”.

Three, as the physicist Roger Penrose pointed out in his 1989 book, The Emperor's New Mind: "There is something about mathematical insight – the 'Aha!' moment – that is fundamentally non-computable," says Bishop.

He concludes that “we’ve got pretty strong grounds for thinking there will always be a gap between what computers can do and what computers plus humans can do”.

This, pretty much, was the conclusion drawn in 1973 by James Lighthill, Hawking’s immediate predecessor as Lucasian Professor of Mathematics at Cambridge.

Lighthill had been asked by the British government to evaluate the state of AI research and in his subsequent report, Artificial Intelligence: A General Survey, concluded that the idea of "a general-purpose robot ... that could substitute for a human being over a wide range of human activities" was a mere "mirage".

If the results of last year’s annual Loebner Prize are anything to go by, it remains no less of a mirage 40 years on.

The prize, run by the Society for the Study of Artificial Intelligence and Simulation of Behaviour, is one of the oldest competitions to test a computer’s ability to match or mimic human behaviour.

Judges use computer terminals to interact with the unseen competitors, each of which comprises two entities – one human, the other an AI system. After interrogation, the judge must decide which of the two is human.

Some of the answers given by “Rose” – which, with a score of 89.17 per cent, was last year’s winning system – will be depressingly familiar to users of Siri, Google and other voice-controlled bots. In response to the query: “What’s the weather like where you are?”, Rose offered: “I’m an American woman.”

Yet Rose was positively superhuman compared to Masha, in 19th position with a mere 35 per cent. To the same query about the weather, it replied: “You will not believe, but as a child I was in love with Jim di Griza. Garrison famously created his image. I always dreamed that one day he will take me with this wretched planet.”

“Take me to your leader”, it ain’t. Hardly the small talk one might expect from an entity poised to take over the world.

But that doesn’t mean we should relax, says Bishop. Hawking is right to be afraid, but for the wrong reason.

“We don’t have to imagine the scenario that Hawking, Warwick and Kurzweil paint, of superhuman intelligent robots, to be scared of what robots might do to humanity,” he says.

“There is every reason to be alarmed about autonomous weapons, for example, without thinking that the AI in them is cleverer than humans. I call it Artificial Stupidity.”

While we remain relatively hopeless at producing general-purpose, all-thinking, all-dancing robots, we are becoming frighteningly good at creating semi-autonomous entities, which, once “briefed” by human programming, are allowed to go off and pursue their dedicated function, often with unforeseen consequences.

Artificial Stupidity was demonstrated perfectly by the strange case of the US$23-million textbook. No one, not even the author Peter Lawrence, could have expected The Making of a Fly, his dry, out-of-print 1992 academic treatise on genetic evolution, to become the most valuable book in the world in 2011.

But that was to reckon without the automated pricing bots increasingly used online by Amazon book dealers. Programmed to trawl the web for rival offerings according to predetermined marketing strategies, they automatically match, marginally exceed or undercut those prices.

In this case, two bots with incompatible agendas locked horns and, unnoticed by their human “masters”, between them automatically bumped up the price to an absurd level. On one book dealer’s Amazon page, The Making of a Fly peaked at $23,698,655.93 (plus $3.99 for shipping).

The bots were using algorithms “that didn’t have a built-in sanity check on the prices they produced”, noted the Berkeley biologist and blogger Michael Eisen, one of whose students had spotted the pricing arms race. There were, he concluded, “seemingly endless possibilities for both chaos and mischief”.
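The feedback loop Eisen described is simple enough to sketch. Two repricing rules, each keyed to the other's price with no upper bound, compound geometrically. The multipliers below are close to those Eisen inferred from the listings; the starting prices and number of repricing rounds are illustrative:

```python
def reprice(rival_price: float, multiplier: float) -> float:
    """Naive repricing rule: no sanity check on the result."""
    return round(rival_price * multiplier, 2)

# One seller slightly undercuts its rival; the other marks its
# rival up, betting that better seller ratings will win the sale.
UNDERCUT, MARKUP = 0.9983, 1.2706

price_a, price_b = 35.54, 45.00  # hypothetical starting prices
for _ in range(25):              # 25 rounds of mutual reaction
    price_a = reprice(price_b, UNDERCUT)  # bot A reacts to B
    price_b = reprice(price_a, MARKUP)    # bot B reacts to A

print(f"${price_b:,.2f}")
```

Because the two multipliers combined exceed 1, each round of reactions inflates both prices by about 27 per cent; a single cap such as `min(price, ceiling)` would have broken the spiral.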

It is in the light of this, says Bishop, that we should consider the army of robot sentinels, armed with machine guns and grenade launchers and capable of detecting and killing human intruders from more than three kilometres away, that is gathered on the southern side of Korea’s demilitarised zone.

So far, he says, humans have been left in the decision loop of the deadly Samsung SGR-A1, “but there’s purely a software switch that can be flipped any time to make these robots go into fully autonomous mode”.

Last month, a study by researchers at Darmstadt University of Technology in Germany concluded that attempts to programme “values and principles of conduct, such as the Geneva Convention” into lethal autonomous weapons would fail, precisely because their intelligence was not human and lacked the ability to make ­decisions based on human moral values.

Faced with a moral quandary – kill a long-wanted terrorist, saving an unknown number of lives in the future, or spare him and the innocent children in the kill zone? – an algorithm would just hazard a guess.

In the paper Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons, the authors highlight Israel's use of the Guardium, an unmanned and potentially entirely autonomous armed vehicle that has been prowling Israel's border with Gaza for the past three years.

The armoured Guardium is not only equipped with cameras and a range of sensors to allow it to detect incursions, but is armed with lethal and non-lethal weapons and programmed to respond accordingly to threats.

So far, remotely monitored and controlled by humans, it hasn’t been left to its own devices – but the point, say the authors, is that it could be, with the press of a key. Such Unmanned Ground Vehicles take the concept of remotely controlled drones “one step further and aim to make human agency fully redundant in the control loop”.

But human agency is already fully redundant on board the X-47B Unmanned Combat Air System.

America’s contemporary unmanned armed drones, responsible for the deaths of many innocent bystanders as well as their intended targets, are not autonomous – they are flown by remotely located human pilots. But for the past seven years, Northrop Grumman and the US navy have been developing the strike-fighter-sized X-47B, an entirely autonomous unmanned drone, with no human operators in the control loop, capable of carrying 2,000 kilograms of bombs or missiles.

“It isn’t very often you get a glimpse of the future,” said Ray Mabus, the secretary of the navy, after witnessing the historic first landing by the aircraft on the USS George HW Bush in July 2013. Such aircraft, he said, “have the opportunity to radically change the way presence and combat power are delivered from our aircraft carriers.”

Others, however, have expressed reservations about who would be accountable for the drone’s strike capabilities.

“Lethal actions should have a clear chain of accountability,” Noel Sharkey, professor of artificial intelligence and robotics at the UK’s University of Sheffield, told the Los Angeles Times in 2012. “This is difficult with a robot weapon. The robot cannot be held accountable. So is it the commander who used it? The politician who authorised it?”

If the Campaign to Stop Killer Robots has anything to do with it, the days of Guardium and other such weapons could soon be numbered.

An international coalition of nine NGOs launched in October 2013 with the objective of pre-emptively banning all autonomous weapons, the campaign has already been successful in forcing the issue onto the agenda of the UN. In April this year, the UN will hold a dedicated meeting of experts under the umbrella of its Convention on Conventional Weapons, adopted in 1980 and signed by 115 nations, including the UAE.

The UN’s under-secretary-general, Michael Møller, has urged delegates to “take bold action ... you have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control”.

Unsurprisingly, perhaps, the military-industrial complex evoked by President Eisenhower in his 1961 farewell address, in which he warned of “the potential for the disastrous rise of misplaced power”, takes a less clearly defined moral stance on the issue.

In 2006, five years before he died, the 79-year-old McCarthy attended a conference at Dartmouth College to mark the 50th anniversary of his original symposium on artificial intelligence.

The event was paid for with a $200,000 (Dh735,000) grant from Darpa, the US government’s Defense Advanced Research Projects Agency, on the condition that the participants “focus on US defense and homeland security needs”.

The agency’s mission is “creating and preventing strategic surprise”, and autonomous weapon systems are high on its agenda.

Take one of its current projects, the Anti-Submarine Warfare Continuous Trail Unmanned Vessel – an autonomous submarine hunter designed “under the premise that a human is never intended to step aboard at any point in its operating cycle”.

Substitute “Cyberdyne” for “Darpa”, and you don’t have to be Stephen Hawking, or even Terminator-killing resistance leader John Connor, to see where all this might be heading.

Jonathan Gornall is a regular ­contributor to The National.

thereview@thenational.ae
