Thursday, March 22, 2018

Science Fiction Meets Science Fact Meets Legal Standards

Any fan of science fiction is probably familiar with the Three Laws of Robotics developed by the prolific science fiction author Isaac Asimov:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
It's an interesting thought experiment on how we would handle artificial intelligence that could potentially hurt people. But now, with the increased capability and use of AI, it's no longer a thought experiment; it's something we need to consider seriously:
Here’s a curious question: Imagine it is the year 2023 and self-driving cars are finally navigating our city streets. For the first time, one of them has hit and killed a pedestrian, with huge media coverage. A high-profile lawsuit is likely, but what laws should apply?

At the heart of this debate is whether an AI system could be held criminally liable for its actions.

[Gabriel] Hallevy [at Ono Academic College in Israel] explores three scenarios that could apply to AI systems.

The first, known as perpetrator via another, applies when an offense has been committed by a mentally deficient person or animal, who is therefore deemed to be innocent. But anybody who has instructed the mentally deficient person or animal can be held criminally liable. For example, a dog owner who instructed the animal to attack another individual.

The second scenario, known as natural probable consequence, occurs when the ordinary actions of an AI system might be used inappropriately to perform a criminal act. The key question here is whether the programmer of the machine knew that this outcome was a probable consequence of its use.

The third scenario is direct liability, and this requires both an action and an intent. An action is straightforward to prove if the AI system takes an action that results in a criminal act or fails to take an action when there is a duty to act.

Then there is the issue of defense. If an AI system can be criminally liable, what defense might it use? Could a program that is malfunctioning claim a defense similar to the human defense of insanity? Could an AI infected by an electronic virus claim defenses similar to coercion or intoxication?

Finally, there is the issue of punishment. Who or what would be punished for an offense for which an AI system was directly liable, and what form would this punishment take? For the moment, there are no answers to these questions.

But criminal liability may not apply, in which case the matter would have to be settled with civil law. Then a crucial question will be whether an AI system is a service or a product. If it is a product, then product design legislation would apply based on a warranty, for example. If it is a service, then the tort of negligence applies.
Here's the problem with those three laws: in order to follow them, the AI must recognize someone as human and be able to differentiate between human and non-human. In the article, they discuss a case in which a robot killed a man in a factory because he was in the way. As far as the AI was concerned, something was in the way and kept it from doing its job, so it removed that barrier. It didn't know that barrier was human, because it wasn't programmed to make that distinction. So it isn't as easy as putting a three-laws straitjacket on our AI, as the sketch below illustrates.
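To make that gap concrete, here is a minimal sketch in Python. The names (Obstacle, clear_path_naive, clear_path_first_law, is_human) are hypothetical and not from the article; the point is only that a First Law-style check presupposes a reliable human-detection capability, which is exactly what the factory robot lacked.

```python
from dataclasses import dataclass


@dataclass
class Obstacle:
    """Something blocking the robot's work area (hypothetical model)."""
    position: tuple
    is_human: bool  # assumes the robot has a sensor/classifier that can tell


def clear_path_naive(obstacle: Obstacle) -> str:
    """What the factory robot effectively did: treat every blockage the same."""
    return f"remove obstacle at {obstacle.position}"


def clear_path_first_law(obstacle: Obstacle) -> str:
    """A First Law-style check: refuse to act on anything classified as human.

    This only helps if is_human is accurate, i.e. the robot can actually
    distinguish a person from any other barrier.
    """
    if obstacle.is_human:
        return "halt and wait: possible human in the work area"
    return f"remove obstacle at {obstacle.position}"


if __name__ == "__main__":
    worker = Obstacle(position=(3, 7), is_human=True)
    print(clear_path_naive(worker))      # removes the "barrier" regardless
    print(clear_path_first_law(worker))  # halts, but only because it can classify
```

Without something like that classification step, a rule that says "do not harm a human" has nothing to attach to.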

 
