If you look into the sky tonight, you might see asteroid 5020 Asimov – or the Asimov crater on Mars. If you live an exceptional life, you might win one of the four Asimov literary awards. Or you could pick up any of the more than 500 books, fiction and non-fiction, written by Isaac Asimov, who would have celebrated his 100th birthday today.

Nicknamed "The Great Typewriter" for his prolific output, Asimov is famous as a science fiction writer, but his creative works also include crime and young adult fiction. In non-fiction, he wrote everything from textbooks to popular guides, on topics from chemistry and astronomy to history and literary criticism. He also found time to teach, conduct research in biochemistry, promote humanism and serve in the US Army during the Second World War.

As a young science fiction fan, I came to Asimov through his <em>Foundation</em> novels, the concise yet epic history of the fall and rise of galactic empires in the far future. Do not worry if you are not familiar with the <em>Foundation</em> series, or do not have time for three short novels: it is due to become a tent-pole series for the Apple TV Plus streaming service later this year.

I was drawn to the <em>Foundation</em> stories by the wonderful illustrations of legendary sci-fi artist Chris Foss, which adorned the covers of <em>Foundation</em>, <em>Foundation and Empire</em> and <em>Second Foundation</em>. Three decades later, the enduring legacy of the series, for me, is "psychohistory", a fictional science that blends crowd psychology and advanced mathematics. It is based on collecting huge amounts of information about people's behaviour and aggregating it to discover the patterns that predict the future of entire civilisations. Asimov began the <em>Foundation</em> series when computing pioneers such as Alan Turing were laying the ground for today's information society, and when personal data was collected in door-to-door surveys.
Today, our data is harvested every time we interact in our connected world, feeding the "Big Data" phenomenon made infamous by the machinations of Cambridge Analytica, the British political consulting firm, but also practised for the public good by scientists such as Peter Turchin. Big Data may be the defining science of the 21st century, yet Asimov invented it 80 years ago in the guise of psychohistory. That neatly settles the debate about whether science fiction has run out of things to predict for our technological society. The fact is, <a href="https://www.thenational.ae/arts-culture/film/cult-sci-fi-films-are-virtually-our-reality-1.705409" target="_blank">technology is still playing catch-up with the imagination</a> and I, for one, will not be satisfied until <a href="https://www.thenational.ae/business/technology/amazon-is-working-on-a-wearable-that-can-read-human-emotions-1.865643" target="_blank">my robot butler is delivered by an Amazon drone</a>.

Which brings me to Asimov’s second great fictional legacy: the "Laws of Robotics". As an author wrangling a novel about artificial intelligence set 100 years from now, I live in the creative shadow of the Laws of Robotics: simple commandments hardwired into Asimov’s fictitious robots to prevent them from turning against their creators. Like psychohistory, this octogenarian concept is so embedded in science fiction – and the laws are such a fundamentally good idea – that it is almost impossible to write about robots without addressing them.

The Laws of Robotics are built into the brains of Asimov’s robots so that they cannot be ignored or reprogrammed, and at first glance their elegant logic seems to guarantee that robots can never be a danger to humans. The first law commands robots not to harm humans, or to allow them to be harmed through inaction. The second law tells them to obey human commands, unless doing so would break the first law.
The third law allows robots to protect themselves, unless that would break the first or second laws. Asimov later added a "zeroth law" that takes precedence over the original three: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.

<em>I, Robot</em> is the definitive collection of Asimov's early robot stories, many of which deal with robots trying to obey the laws in the face of human behaviour – and failing with tragic consequences. In these stories, the robots are rarely at fault: human whims are just too contradictory, perverse or plain malicious for robots to obey without someone getting hurt.

To a writer in the early 21st century, the Laws of Robotics seem like the naive dream of a liberal humanist at the dawn of the modern era. Asimov was both a liberal and a humanist, and he created the laws as a reaction to the tide of Frankenstein-style robots that appeared in pulp fiction, <a href="https://www.thenational.ae/robot-wars-1.335078" target="_blank">forever bent on destroying humanity</a>. The 2019 film <em>Terminator: Dark Fate</em> showed that this idea will not go out of fashion. But just as Arnold Schwarzenegger's killer robot moved beyond his murderous programming, I wonder how the Laws of Robotics might change in a future inspired by the 2020s.

The next decade will see robots increasingly employed in every aspect of our lives, from caring for the elderly to replacing soldiers and pilots on the battlefield – at least for nations that can afford them. We already live with robots that look very different from the humanoid machines Asimov imagined, such as self-driving cars and robot vacuum cleaners. But what worries me is that I can see very little in the actions of today’s world leaders or titans of technology to suggest they would sign up to anything as humane as Asimov’s Laws of Robotics.
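For readers who like to see logic spelled out as code, the precedence built into the four laws can be sketched as a simple priority-ordered rule check. This is a purely illustrative toy under my own assumptions – the flag names below are invented for the sketch, not anything from Asimov's stories:

```python
# Illustrative only: the four laws as a priority-ordered rule check.
# A robot vets a proposed action against the laws in order, highest
# priority first, and rejects it at the first law it would break.
# The flag names describing an action are hypothetical.
LAWS = [
    ("zeroth", "harms_humanity"),   # may not harm humanity
    ("first", "harms_human"),       # may not injure a human being
    ("second", "disobeys_order"),   # must obey human orders
    ("third", "endangers_self"),    # must protect its own existence
]

def first_violated_law(action):
    """Return the name of the highest-priority law the action breaks,
    or None if the action is permitted."""
    for name, flag in LAWS:
        if action.get(flag, False):
            return name
    return None
```

The ordering does the work: an action that both disobeys an order and endangers the robot is rejected under the second law, never the third, just as Asimov's hierarchy demands.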
If robots do become ubiquitous and intelligent, we will need to ensure, at the very least, that they only harm our enemies, place our survival before their own and do not try to vacuum up our pets. This approach might look less like fundamental laws, and more like the beliefs humans already embrace to give ourselves a place in the universe. Making your robot believe that you are its master seems like a good way to make it do your bidding – all the more so if obedience is rewarded with heavenly bliss and disobedience is punished with a living hell.

As an author, my happy task is to see what happens when one robot develops a flaw in that system of rewards and punishments, and learns to pass it on. Humans are notoriously bad at seeing what can go wrong when we pretend to be masters, and based on the way we treat other primates – let alone people who simply look different to us – my robot’s masters are unlikely to embrace a creation that starts to think for itself. But with humans wrapped up in their own squabbles, free robots could be powerful allies or dangerous enemies.

It is a truism that as writers we stand on the shoulders of giants, and I cannot think of a better creative boost than a leg-up from Isaac Asimov.

<em>Alex Lane is an author and journalist based in the UK</em>