This is an interesting question. No doubt our libertarian compatriots have discussed this over a few beers.
Is it the manufacturer? That seems to make sense. Is it the person or people who wrote the code? That seems to make sense too. But what if the code was written by another robot? (Which is happening.) Well, then who made the robot who wrote the faulty or malicious code? What if it was developed with taxpayer research money? What if the code was originally written in China or Russia? Does it matter whether the actions of the robot are “malicious” anyway? How does one measure maliciousness in a robot? Is robot maliciousness even possible?
C’mon, pot-smoking grad students at MIT, we’re counting on you to figure this out. You got this ball rolling in the first place.
Criminal liability usually requires both an action and a mental intent (in legalese, an actus reus and mens rea). Kingston says Hallevy explores three scenarios that could apply to AI systems.
The first, known as perpetrator via another, applies when an offense has been committed by a mentally deficient person or an animal, who is therefore deemed to be innocent. But anybody who has instructed that person or animal can be held criminally liable: a dog owner who orders the animal to attack another individual, for example, can be prosecuted for the attack.
That has implications for those designing intelligent machines and those who use them. “An AI program could be held to be an innocent agent, with either the software programmer or the user being held to be the perpetrator-via-another,” says Kingston.
The second scenario, known as natural probable consequence, occurs when the ordinary actions of an AI system might be used inappropriately to perform a criminal act. Kingston gives the example of an artificially intelligent robot in a Japanese motorcycle factory that killed a human worker. “The robot erroneously identified the employee as a threat to its mission, and calculated that the most efficient way to eliminate this threat was by pushing him into an adjacent operating machine,” says Kingston. “Using its very powerful hydraulic arm, the robot smashed the surprised worker into the machine, killing him instantly, and then resumed its duties.”
The key question here is whether the programmer of the machine knew that this outcome was a probable consequence of its use.