Asimov's Laws of Robotics and Their Impact on AI

November 2, 2023
by Jonathan MacDonald

Science fiction has often served as a source of inspiration for technological innovation. Isaac Asimov, a prolific science fiction writer, introduced the world to his "Three Laws of Robotics" in his 1942 short story "Runaround" (included in the 1950 collection "I, Robot"). These laws were fictional guidelines designed to ensure the safety and ethical behavior of robots, but they have had a profound impact on the development of AI. In this blog, we'll explore how Asimov's laws could shape the AI market.

The Laws of Robotics

Before delving into the impact of Asimov's laws, let's briefly review the first three laws themselves:

  1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In ‘The Evitable Conflict’, Asimov wrote that the Machines generalise the First Law to mean: “No machine may harm humanity; or, through inaction, allow humanity to come to harm.”

Later still, in ‘Robots and Empire’, a “Zeroth Law” was introduced (in other words, one coming before the First), with the original three suitably rewritten as subordinate to it: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
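Taken together, the laws form an ordered hierarchy: each law applies only insofar as it does not conflict with the laws above it. As a purely illustrative sketch (the action flags here are invented for the example, not drawn from Asimov), the hierarchy might be modelled as constraint checks evaluated strictly in priority order:

```python
# Illustrative only: Asimov's laws as ordered constraint checks.
# An action is permitted only if no law, checked from highest
# priority (Zeroth) downwards, forbids it.

def permitted(action):
    """Return True if the action passes every law, checked in priority order.

    `action` is a dict of hypothetical flags describing the action,
    e.g. {"obeys_order": True, "harms_human": False}.
    """
    laws = [
        ("Zeroth", lambda a: not a.get("harms_humanity", False)),
        ("First",  lambda a: not a.get("harms_human", False)),
        ("Second", lambda a: a.get("obeys_order", True)),
        ("Third",  lambda a: not a.get("self_destructive", False)),
    ]
    for name, check in laws:
        if not check(action):
            return False  # forbidden by this law; lower laws never apply
    return True
```

For instance, an action that obeys a human order but would harm a human (`{"obeys_order": True, "harms_human": True}`) is rejected: the First Law outranks the Second, so obedience cannot override safety.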

At SELF, we believe these laws should establish a moral and ethical framework for the behavior of AI and any derivatives of it, prioritising the safety and well-being of humans.

The Birth of AI Digital Assistants

Fast forward to the development of AI digital assistants. The AI digital assistant market has seen remarkable growth over the past decade. Companies like Amazon, Google, Apple, and Microsoft have developed popular digital assistants, such as Amazon's Alexa, Google Assistant, Apple's Siri, and Microsoft's Cortana.

In June 2016, Satya Nadella, the CEO of Microsoft Corporation, gave an interview to Slate magazine and reflected on the principles and goals that industry and society should consider when discussing artificial intelligence:

1. "AI must be designed to assist humanity", meaning that human autonomy needs to be respected.
2. "AI must be transparent", meaning that humans should know and be able to understand how AI systems work.
3. "AI must maximize efficiencies without destroying the dignity of people."
4. "AI must be designed for intelligent privacy", meaning that it earns trust by guarding users' information.
5. "AI must have algorithmic accountability so that humans can undo unintended harm."
6. "AI must guard against bias", so that it does not discriminate against people.

At SELF, we feel that whilst stated intent is important, execution must adhere strictly to that intent; otherwise, the words are meaningless.

Here's how Asimov's laws could influence the AI assistant market:

  • Safety First: The Zeroth and First Laws of Robotics, which prioritise the safety of humans and humanity, should have a direct impact on the design of AI digital assistants. These systems should be programmed to prioritise user safety and avoid any actions that might harm people. They should also incorporate human feedback and understanding to minimise misunderstandings that could lead to harm.
  • Obedience to Humans: Asimov's Second Law, which mandates that robots obey human orders, should influence an AI digital assistant's core functionality. These assistants should be designed to follow user instructions to the best of their abilities. This law should ensure that AI digital assistants remain subservient tools that serve human needs, in stark contrast to the ambitions of those intent on creating machines that think without humans.
  • Self-Preservation: While the Third Law focuses on a robot's self-preservation, it could have a more abstract influence on AI digital assistants. In this context, the law reflects the need for digital assistants to maintain their operational integrity and reliability. If a digital assistant were to fail constantly, it would not serve its intended purpose. Therefore, developers need to ensure the system's stability and performance.

Ethical Considerations

Incorporating Asimov's laws into AI digital assistants doesn't come without ethical dilemmas and challenges. As AI becomes more advanced, it raises questions about the autonomy and decision-making capabilities of digital assistants. Striking a balance between fulfilling user requests and avoiding harmful actions can be complex.

Moreover, AI digital assistants, like any technology, are subject to human biases, which could inadvertently lead to harm or unfair treatment. Developers must work diligently to minimise such biases and ensure that their AI systems follow Asimov's laws.


Isaac Asimov's Three Laws of Robotics, despite being fictional, should significantly influence the development and operation of AI digital assistants. We believe these laws must shape the AI digital assistant market by prioritising safety, obedience to humans, and self-preservation. While adhering to them presents challenges and ethical considerations, they can continue to guide the responsible development and use of AI digital assistants, ensuring that these remain valuable tools while minimising potential harm to users. If executed ethically, the impact of Asimov's laws in this field would be a positive testament to the enduring influence of science fiction on the real world of technology.
