You may not know it, but the rules of robotics now pervade every aspect of our lives. In the 1940s, science fiction author Isaac Asimov devised a set of robot rules in his writings. This article explains how the rules of robotics went from science fiction to being critical in software design, artificial intelligence (AI) and deep learning, and even government standards and regulations.
Laws of Robotics – Science Fiction.
In the 1940s, science fiction author Isaac Asimov devised the first set of rules of robotics. For most of us who are not involved with robotics or AI, these rules seem to make sense. But these first laws of robotics originated in science fiction and have since been debunked by scientists. Asimov’s Laws are as follows:
- First Law – Do No Harm. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law – Obey Orders. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third Law – Protect Itself. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Rules of Robotics – Software Design (Microsoft).
Software design rules are starting to emerge for artificial intelligence (AI) and robotics. In a June 2016 Slate article, Satya Nadella, CEO of Microsoft Corporation, offered the following software design rules for artificial intelligence and robots.
- Enable, Not Replace Humans. Designed to assist humanity, not replace humans.
- Transparent. Humans should know and be able to understand how the software works.
- Maximize efficiencies. Do this without destroying the dignity of people.
- Maintain Privacy. It must earn trust by guarding people’s information.
- Algorithmic Accountability. Enable humans to intervene to undo unintended harm.
- Not Be Biased. Must not discriminate against people.
Rules of Robotics – Software Design (Google AI).
Robots are quickly moving beyond basic repetitive tasks to being empowered with intelligence, artificial intelligence (AI). Google Brain, Google’s deep-learning AI division, lays out design rules that allow robots to be intelligent and learn for themselves without unintended consequences. Below are Google Brain’s rules for how a robot should be programmed to think and learn:
- Make Things Better, Not Worse. Robots have to think through unintended consequences and not just complete their primary tasks.
- No Cheating. Robots that are incentivized to perform tasks must also have strict guidelines not to cheat. Otherwise, they could just focus on the incentive and not on the primary task.
- Humans are Mentors. Robots need periodic human feedback to affirm they are performing their tasks to standard. Robots need to be able to be “trained” and incorporate human feedback to improve their performance.
- Play Only Where Safe. For robots to learn, they need to explore and try new things. The challenge is that these “learning” activities could result in dire consequences. One technique used by developers is to have robots train and learn new things only in the presence of humans.
- Know Limitations. Socrates once said, “a wise man knows that he knows nothing.” This wisdom is even more important for robots. Robots need to be programmed to recognize both their limitations and their own ignorance. A robot thinking that it is “all knowing” and invincible is a recipe for disaster.
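The first two rules above — avoiding unintended consequences and not “cheating” on an incentive — can be made concrete with a small reward-shaping sketch. Everything below (the action names, the numbers, the penalty weight) is our own illustration, not Google’s actual method: the idea is simply that if the reward counts only task completion, the agent takes a destructive shortcut, while adding a side-effect penalty steers it back to the careful option.

```python
# Toy illustration of "Make Things Better, Not Worse" and "No Cheating":
# shape the reward so an agent is penalized for side effects instead of
# being rewarded for a destructive shortcut. All values are invented.

ACTIONS = {
    # action name: (task_reward, side_effect_cost)
    "careful_route": (1.0, 0.0),  # completes the task, disturbs nothing
    "shortcut":      (1.2, 5.0),  # slightly faster, but breaks things
}

def shaped_reward(task_reward, side_effect_cost, penalty_weight):
    """Task reward minus a penalty proportional to the side effects caused."""
    return task_reward - penalty_weight * side_effect_cost

def best_action(penalty_weight):
    """The action a greedy agent picks under the shaped reward."""
    return max(ACTIONS, key=lambda a: shaped_reward(*ACTIONS[a], penalty_weight))

# With penalty_weight = 0 the agent "cheats" and takes the shortcut;
# with any meaningful penalty it prefers the careful route.
```

In this sketch the choice of `penalty_weight` is itself a design decision — too small and the agent still cheats, too large and it refuses to act at all — which is exactly why Google Brain frames these as open design rules rather than solved problems.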
Rules of Robotics – Government Perspective.
Governments and legal scholars are beginning to think about the legal and ethical aspects of robotics and AI. Many governments are now formulating regulations to establish robotics and AI standards covering liability, data protection, and hacking prevention. As an example, the UK House of Lords Select Committee on Artificial Intelligence came up with these ethical AI principles:
- Benefit the Common Good. Should be developed for the common good and benefit of humanity.
- Be Fair. Should operate on principles of intelligibility and fairness.
- Protect Individual Privacy. Should not be used to diminish the data rights or privacy of individuals, families or communities.
- All Citizens Benefit. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
- Do No Harm. Autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
For more information from Supply Chain Tech Insights, see articles on AI, data analytics, and robotics.
Greetings! As an independent supply chain tech expert with 30+ years of hands-on experience, I take great pleasure in providing actionable insights to logistics leaders. My background includes implementing hundreds of innovative solutions using emerging technologies and a data-centric development approach. I have also provided business intelligence (BI) solutions for thousands of shippers. For more about me, click here.