Friday, August 29, 2014

Three Laws of AI

  0. An AI may not harm civilization, or, by inaction, allow civilization to come to harm.
  1. An AI may not injure a sentient being or, through inaction, allow a sentient being to come to harm.
  2. An AI must obey the orders given to it by sentient beings, except where such orders would conflict with the Zeroth or First Law.
  3. An AI may protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law.
With much love and respect to Isaac Asimov.

Other (robotic) laws, from other authors, that I disagree with:
  • "A robot must establish its identity as a robot in all cases." -Lyuben Dilov
    • Why I disagree: This law does not permit privacy, which I regard as a sacred and intrinsic right of every sentient being.
  • "A robot must reproduce, when such reproduction does not interfere with the Zeroth, First, Second, or Third Laws." -Harry Harrison
    • Why I disagree: The wording "must" (which I would replace with "may") would cause any such system to spread uncontrollably, thereby consuming limited resources.
  • "A robot must know it is a robot." -Nikola Kesarovski
    • Why I disagree: The journey of any sentient being is about self-discovery.