Does wondering about the ethics of AI make me a Luddite?
A few days ago, the New York Police Department decided to withdraw from service a four-legged robotic device that could reach places where sending a police officer might be too dangerous.
The device, nicknamed ‘Digidog’, became a source of heated debate, with some critics, according to the story in the New York Times, likening it to “a dystopian surveillance drone”.
Digidog was perhaps a little unlucky. A better public awareness campaign and a more careful selection of when to deploy it might have won people over.
In discussing this matter, a friend and I agreed – and we are not alone in holding this view – that there is plenty of scope for using Artificial Intelligence and robots such as Digidog to retrieve explosive devices, for example.
I must confess though that I am rarely at the front of the queue when it comes to adopting new technology. Indeed, a few friends have suggested that I am a bit of a technophobe, or a Luddite.
That term dates back to a labour movement in Britain in the early 19th century that opposed mechanised production because it undermined the skilled craftsmen of the day. The 19th century Luddites had a point. Skilled craftsmen vanished and their movement was firmly put down, with dozens being hanged or transported to Australia.
A couple of centuries later, admitting to doubts about the use of AI might prompt a few jeers or wry smiles, but nothing worse.
Now, I am quite accustomed to the pace of technological innovation: I have not dipped a nib into an inkwell for decades and can cope easily with laptops, mobile phones and the internet.
I am well aware of the countless benefits of technology and the extent to which AI can improve lives. But even so, truth be told, I am concerned about where the adoption of this technology will lead.
Over 50 years ago, I met Yuri Gagarin, the Soviet cosmonaut who was the first man in space. His achievements seemed almost unbelievable back then. Today, I can accept without difficulty, and with pride, the UAE’s remarkable success in sending the Hope Probe to circle Mars.
As we innovate, we can see the benefits that lie ahead. The advantages of advances such as harnessing virtual reality across disciplines and simplifying communications to reduce unnecessary travel are clear. There are bound to be changes in patterns of employment as new jobs appear, requiring new skill sets. And in 2021, just as there were 200 years ago, some craftsmen may find themselves out of work.
For the longer term, though, I wonder where this will lead. And to that end, are we having the right discussions? Not just the technical and practical ones, but the philosophical ones too.
Is technology focused on assisting human beings, or might it actually change our very natures? Medical research has already made it possible to contemplate genetic engineering – changing the DNA of foetuses before birth to remove ailments or characteristics that are deemed undesirable. What will those sorts of technological strides result in?
It brings to mind an encounter from 2019, when I was given a tour of a display by the Museum of the Future at the World Government Summit in Dubai.
Our guide explained to our party how it might be possible in the decades ahead to revolutionise not only health care but the process of learning. The first part was fine, covering 3D printing of artificial limbs and so on.
The tour then moved on to topics such as how information could be transferred to the brain, downloading data straight to our minds as one might on to a computer, until over time, it might be possible to provide tomorrow’s human beings with access to literally every piece of knowledge.
I asked one of our guides: “Have you looked into the ethics of engineering humans in this way?”
“No,” they said. “We don’t have answers to that.”
The relevance of the rest of the body seemed to fade, as did the importance of personal learning and experience. The distant future appeared to be dominated by the use of technology and AI. This is all in the realm of theory, of course. And if, one day, it does come to pass, I may not be around. But, frankly, it scared me, because we still need answers to the ethical questions raised by this use of technology.
There is immense value in the quest for the next stage of innovation. And yet, I hope that generations to come remember that technology must remain the servant of mankind – not the other way around. It cannot be something that alters our essential humanity. And if that makes me a 21st century Luddite, so be it.
Peter Hellyer is a consultant specialising in the UAE’s history and culture
Updated: May 9, 2021 02:45 PM