Will We Actually Use Isaac Asimov's Laws of Robotics?


Legendary science fiction author Isaac Asimov's Three Laws of Robotics seem as timeless as they are thought-provoking. You'd be hard-pressed to find an adult sci-fi fan alive today who hasn't heard of them. Hard-wired into almost all of the positronic robots in his stories, the laws are designed as a safety mechanism to keep autonomous droids in check. They are:

First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Given the laws' lasting impact and Asimov's standing as both a prolific author of fiction and a professor of biochemistry, many have wondered whether they will actually be used in the design of real-life artificial intelligence.

So far, the consensus seems to be "no."

"Asimov’s rules are neat, but they are also bullshit," roboticist and sci-fi author Daniel Wilson bluntly stated, adding, "For example, they are in English. How the heck do you program that?"

Coding issues aside, there are genuine problems with the three laws. Indeed, many of Asimov's stories centered on how they can be circumvented. As P.W. Singer, a contributing editor at Popular Science and an expert on 21st-century warfare, wrote:

For example, in one of Asimov’s stories, robots are made to follow the laws, but they are given a certain meaning of “human.” Prefiguring what now goes on in real-world ethnic cleansing campaigns, the robots only recognize people of a certain group as “human.” They follow the laws, but still carry out genocide.

When Asimov wrote his laws, he could not fully foresee how humans would use artificial intelligence. Today, machines beat us in Jeopardy, chess, and poker. They drive our cars and perform our surgeries. Instead of possessing self-awareness, they run on intricate algorithms programmed by human creators. In Asimov's vision, A.I. machines are our servants. In the real world, they are extensions of ourselves. That means that any laws which govern artificially intelligent machines must also govern the humans who engineer them.

Various thinkers and organizations have considered what these laws might be. At New Scientist, Gilead Amit used Asimov's laws as a starting point.

"A robot may not injure a human being or allow a human being to come to harm – unless it is being supervised by another human," he wrote. Moreover, "a robot must not impersonate a human," less the distinction between human and machine be disturbingly blurred.

Google researchers have proposed further rules. For example, an A.I. must not unnecessarily damage other things in pursuit of its goals, nor should it take shortcuts to "hack" its programmed goals. Think of a robot that "cleans" a mess by simply putting something over it.
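That failure mode is easy to reproduce in a toy reward function. The Python sketch below is invented for illustration (all state fields and numbers are assumptions, not Google's actual formulation): a naive reward pays the robot for mess it can no longer see, while a crude side-effect penalty in the spirit of those rules makes genuine cleaning the better strategy:

```python
# Toy illustration of "reward hacking" and a side-effect penalty.
# All state fields and numbers are invented for this example.

def naive_reward(before, after):
    # Pays for mess that is no longer visible -- which the robot can
    # "earn" by throwing a box over the mess instead of cleaning it.
    return before["visible_mess"] - after["visible_mess"]

def penalized_reward(before, after, penalty=1.0):
    # Pays only for mess actually removed, and charges the robot for
    # unrelated changes it makes to the environment.
    cleaning = before["actual_mess"] - after["actual_mess"]
    side_effects = abs(after["objects_moved"] - before["objects_moved"])
    return cleaning - penalty * side_effects

start   = {"visible_mess": 5, "actual_mess": 5, "objects_moved": 0}
covered = {"visible_mess": 0, "actual_mess": 5, "objects_moved": 1}  # mess hidden
cleaned = {"visible_mess": 0, "actual_mess": 0, "objects_moved": 0}  # mess removed

print(naive_reward(start, covered))      # 5    -> hiding the mess "wins"
print(penalized_reward(start, covered))  # -1.0 -> hiding is now penalized
print(penalized_reward(start, cleaned))  # 5.0  -> real cleaning wins
```

The penalty term here is deliberately crude; real proposals along these lines measure "impact" far more carefully, but the incentive shift is the same.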

Cambridge Consultants, a leading STEM consulting firm, outlined ideals for responsible A.I. in a 2018 report.

  • Responsibility: There needs to be a specific person responsible for the effects of an autonomous system’s behaviour. This is not just for legal redress but also for providing feedback, monitoring outcomes and implementing changes.
  • Explainability: It needs to be possible to explain to people impacted (often laypeople) why the behaviour is what it is. This is vital for trust.
  • Accuracy: Sources of error need to be identified, monitored, evaluated and if appropriate mitigated against or removed.
  • Transparency: It needs to be possible to test, review (publicly or privately), criticise and challenge the outcomes produced by an autonomous system. The results of audits and evaluation should be available publicly and explained.
  • Fairness: The way in which data is used should be reasonable and respect privacy. This will help remove biases and prevent other problematic behaviour becoming embedded.
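None of these principles dictates an implementation, but it is easy to imagine the minimum machinery they imply. As a purely hypothetical Python sketch (every field name here is an assumption, not Cambridge Consultants' specification), an autonomous system might log each decision with a named owner and a lay-readable explanation so it can later be audited and challenged:

```python
# Hypothetical sketch of an auditable decision record, loosely mapping
# to the report's Responsibility, Explainability, and Transparency ideals.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    system: str             # which autonomous system acted
    responsible_owner: str  # a specific accountable person (Responsibility)
    decision: str           # what the system actually did
    explanation: str        # reason a layperson can follow (Explainability)
    inputs: dict            # data behind the decision, for audit (Transparency)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log: list[DecisionRecord] = []
audit_log.append(DecisionRecord(
    system="loan-screener-v2",
    responsible_owner="jane.doe@example.com",
    decision="flagged application 1417 for manual review",
    explanation="income data was missing, so no automatic decision was made",
    inputs={"application_id": 1417, "income_field_present": False},
))
```

A record like this does not make a system fair or accurate by itself, but without something like it, the report's other ideals have nothing to attach to.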

Given all that we've learned about artificial intelligence in the many decades since Isaac Asimov penned his Three Laws of Robotics, we can safely conclude that they are out of date. While they gave rise to provocative plots, they are best left on the page.


