LONGREAD: who's liable if AI goes wrong?

The three laws of robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

– Isaac Asimov, 'Runaround', 1942

Thinkers like Asimov imagined a world where machines had to make what appear to be moral decisions. With the rise of autonomous, self-learning systems which power everything from planes to Netflix recommendations, we're now entering the world Asimov foresaw.

" It's a fair bet any technology company deploying AI software has taken pains to ensure liability arising from bad decisions rests with the user."

But the famed author wasn't laying out a machine behaviour manifesto for when the time finally came. He set about finding flaws in his own reasoning, probing the three laws countless times for paradoxes in his fiction.

Now, we might finally be facing such contradictions in real life as self-learning systems drive, diagnose medical treatment and protect our savings and credit accounts more than ever before.

In the scenario laid out in the below infographic, most of us would be outraged if a human driver made such a callous (even if coldly logical) decision and horrified to imagine we might be called upon by circumstance to do so. Even the racial profiling would be distasteful to most of us.

[Infographic]

As Ashwin Krishnan, SVP of product and strategy at security startup HyTrust, told the ITSP Magazine podcast, we are on the verge of outsourcing our morality to digital entities.

Humans have been making moral or ethical decisions for millennia, albeit based on sketchy data, the confusing noise of high emotion and the comparatively glacial pace of human thought.

When we ask algorithms to do so for us and they reach conclusions which are unknowingly harmful (or which someone simply disagrees with enough to sue over), who's to blame?

We don't put the parents of murderers or embezzlers in jail. We assume everyone is responsible for their own decisions, based on the experience, memory, self-awareness and free will accumulated throughout their lives.

In the same way, if an AI system spends enough time in the wild, self-correcting by synthesising new input, and decides on some deadly (or simply unfair) course of action to achieve a goal, we can't hold the programmer who released it liable either, can we?

What could possibly go wrong?

As Gary Lea, a visiting researcher in AI regulation at the Australian National University, proposes, the more AI systems learn and change according to their inputs, the more they'll display behaviours not just unforeseen by their creators but entirely unforeseeable.

In fact, some researchers have raised the alarm about AI being 'infected' with human biases right out of the gate.

The initial developer might forget to program a command to avoid killing either person A or person B simply because to us, it goes without saying.

Dr Tim Lynch, who holds a PhD in the psychology of computers and intelligent machines and was labelled by Omni Magazine as “the first Robopsychologist”, refers to a concept called “perverse instantiation”.

"[The AI] can take a problem and solve it in ways that harm people ... so long as it's efficient and gets the desired end result," he says. "It's the 'less wrong' method. It does what you ask but what you ask turns out to be most satisfactory in unforeseen and destructive ways."

Like Asimov's three laws, the gap between what's obviously a bad outcome and one a computer doesn't yet understand has also proven a deep well for drama.

In 1983's geek classic WarGames, a teenage hacker and the US military find themselves up against an AI defence system with no concept of the difference between a nuclear armageddon which will kill billions and a game of computer noughts and crosses.


Sadly, we already have an example of an AI-related death from the real world.

In 2016 a Tesla Model S driver was killed in northern Florida while using the car's Autopilot feature. When a semi-trailer turned in front of the car, the system failed to stop, colliding with the truck and running off the road.

Tesla told investigators the automatic braking system, not the Autopilot feature, was at fault, and the company avoided a costly and embarrassing recall, but the tragic incident was a sobering warning.

Lynch, who believes the car involved mistook the truck for an overhead bridge and therefore kept going, thinks it's a good example of a failure we wouldn't think to program against, because we wouldn't even consider getting a moving truck and a bridge mixed up.

Accidental death isn't the only thing which can go wrong. There are concerns about the unforeseen disclosure of sensitive information and the unfairness of automated decision making. Back in 2015 researchers at Carnegie Mellon University found Google was showing ads for high-paying executive jobs to male users far more often than to female users.

Investigations have already shown AI policing systems in the US to be biased against African Americans, and in August 2017 Facebook engineers discovered two autonomous chatbots had invented their own language to communicate with each other, promptly taking them offline (maybe fearing the systems were plotting the overthrow of the human race?).


The blame game

If AI is acting truly autonomously, who do we blame when it goes awry? Matt Pinsker, an attorney and law professor at Virginia Commonwealth University who's worked in traffic defence, says right now the driver is held responsible because, at this stage of the game, he or she can always take over from the AI.

Indeed, Joshua Brown (the victim in the Tesla Model S accident) tested the limits of the Autopilot technology, recording and posting videos of himself driving hands free, which isn't the way the company says the system is supposed to be used.

In its blog post expressing condolences about Brown's death, Tesla – which declined to comment directly for this article – takes pains to point out drivers have to agree Autopilot is an assist feature only.

But the waters are still murky, and are likely to remain so until a high-profile lawsuit takes into account the self-learning and autonomy an AI system has accumulated since release.

To Lynch, doing so is a bit beyond the law as it stands right now.

"Software is software to the legal system," he says. "I suspect liability would carry over."

It's a fair bet any technology company deploying AI software has taken pains to ensure liability arising from bad decisions on the part of the algorithm rests with the user, even if only in the initial agreement.

Eddie Offermann is the founder of the Big Blue Ceiling Extended Reality Thinktank and currently works on augmenting human intelligence using augmented reality (AR) and AI.

He says Tesla has mechanisms in place which warn the driver if he or she takes both hands off the wheel when Autopilot is in use.

"This isn't just for insurance reasons," he says. "It's to maintain the ethical responsibility of the driver until the AI is sufficiently advanced to at least make Tesla willing to accept full responsibility for the shortcomings of the app."

Offermann thinks the manufacturer of the autonomous vehicle would share blame under the same circumstances where they'd be responsible for a badly designed steering assembly in a non-autonomous vehicle.

"If the accident can be avoided with ordinary due care by a driver and the vehicle isn't sold or represented as 100 percent 'take a nap if you want' autonomous, you're going to have shared responsibility at best," he says.

"The fact that a machine learning model makes 'choices' doesn't change this equation."

How to program goodness

The interpretation and application of our morality as a society will be closely tied to the legal constructs of fault and liability we impose on AI.

That's causing some scientists to put their money where their mouths are and investigate the possibility of actually programming ethics.

As Lachlan McCalman, Senior Research Engineer at Data61 (a CSIRO business unit), recently wrote, there might be call for an 'ethics engineer' to advise on the fairest outcome of an AI algorithm.

For one thing, there's growing interest in the science of unbiased data. Like everything else a computer does, the result is only as good as the inputs – and where AI systems decide how to tag and categorise images, mark student assignments or translate text in a foreign language, there's a chance the incoming data is inherently biased without anyone realising.

One method involves stripping out bias at the outset by removing or obfuscating discriminatory data points – like the socioeconomic status of people in certain neighbourhoods or the salaries traditionally earned by women – before the system ever learns from them.
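As a rough illustration, here's what that first approach might look like in practice – a hypothetical loan dataset (every column name and value is invented for this sketch) with sensitive attributes dropped and an identifying one coarsened:

```python
import pandas as pd

# Hypothetical loan-application data; the columns and values are invented.
applications = pd.DataFrame({
    "income":        [48_000, 72_000, 55_000, 39_000],
    "postcode":      ["3011", "3142", "3011", "3175"],  # can proxy for socioeconomic status
    "gender":        ["F", "M", "F", "M"],
    "repaid_before": [1, 1, 0, 1],
})

# Removal: drop attributes the model should never condition on.
SENSITIVE = ["gender", "postcode"]
features = applications.drop(columns=SENSITIVE)

# Obfuscation: replace a precise, identifying value with a coarse band.
features["income_band"] = pd.cut(
    applications["income"],
    bins=[0, 40_000, 60_000, 100_000],
    labels=["low", "mid", "high"],
)
features = features.drop(columns=["income"])
print(features)
```

The caveat is that simply dropping a column doesn't guarantee fairness – other inputs can still act as proxies for the removed ones – which is partly why researchers are also exploring the feedback approach described next.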

Others are comparing the AI-generated result with outputs free of discrimination and using the differences as feedback to make the self-learning algorithms progressively less biased.
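A toy version of that feedback idea, assuming made-up model scores and a simple 'equal approval rates' reference (this is a sketch, not Data61's actual method), might look like this:

```python
import numpy as np

# Invented scores from some self-learning model, plus the demographic group of
# each applicant (used only to audit the outcomes, not to train on).
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
group = rng.integers(0, 2, size=1000)
scores[group == 1] *= 0.85          # simulate a model that scores group 1 lower

# Reference behaviour from an 'unbiased' decision-maker: equal approval rates.
target_rate = 0.5
thresholds = np.array([0.5, 0.5])   # one decision threshold per group

# Feedback loop: compare each group's approval rate with the reference and
# nudge that group's threshold until the gap closes.
for _ in range(300):
    for g in (0, 1):
        rate = (scores[group == g] >= thresholds[g]).mean()
        thresholds[g] += 0.05 * (rate - target_rate)   # too many approvals -> raise the bar

for g in (0, 1):
    rate = (scores[group == g] >= thresholds[g]).mean()
    print(f"group {g}: threshold {thresholds[g]:.3f}, approval rate {rate:.3f}")
```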

One of the newest fields in AI is transparency around the decision making process. When the cost of an AI making the wrong decision is particularly high (like in self-driving vehicles or medicine), collecting evidence about what happened will be important when applying liability in any ensuing legal fallout.

Thus far a lot of AI systems, like neural nets using deep learning, have been black boxes: the processing is completely opaque and reveals nothing about how the answer or output was decided.

Researchers at MIT's artificial intelligence lab are pioneering a technique which should lift the lid on how neural nets synthesise information to pass along to the next node in the network, revealing the 'decision making' process.

Another project at Carnegie Mellon called Quantitative Input Influence assigns relative importance to each data point, reducing or discarding the influence of inputs which have no bearing on the desired output, like race in mortgage application assessments.
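The published work describes a more sophisticated approach, but the underlying intuition can be sketched with a simple permutation test: scramble one input and see how often the decision changes. Everything below – the toy model, the column names – is invented for illustration and is not the actual Quantitative Input Influence algorithm.

```python
import numpy as np

def influence(model, X, column, trials=50, seed=0):
    """Rough influence of one input column: how often the model's decision
    changes when that column is randomly shuffled across rows."""
    rng = np.random.default_rng(seed)
    baseline = model(X)
    flips = 0.0
    for _ in range(trials):
        perturbed = X.copy()
        rng.shuffle(perturbed[:, column])   # break the link between this input and the output
        flips += np.mean(model(perturbed) != baseline)
    return flips / trials

# Toy 'mortgage model' that (wrongly) leans on column 2, a stand-in for a
# demographic proxy. Column 1 is never used at all.
X = np.random.default_rng(1).uniform(size=(500, 3))
toy_model = lambda data: (0.7 * data[:, 0] + 0.3 * data[:, 2] > 0.5).astype(int)

for col, name in enumerate(["income", "savings", "suburb_demographics"]):
    print(f"{name}: influence {influence(toy_model, X, col):.3f}")
```

An input whose influence comes out near zero has no bearing on the output and can be discarded; a high score on something like the demographic proxy is a red flag.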

But scientists from the Institute of Cognitive Science at the University of Osnabrück, Germany, are using virtual reality experiments to attack the problem from the other direction.

Morality has long been considered to be context dependent – a philosophical problem unique to human emotion and empathy which can't be calculated or expressed mathematically.

But the researchers are asking participants to drive a car on a foggy day in a VR simulation and make snap decisions in scenarios which cause harm to inanimate objects, animals or humans.

They say if they can model the decisions made by drivers statistically, they can convert them into algorithmic values which can be used to 'program' morality into computers.
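In the crudest possible terms, turning those observed choices into 'algorithmic values' could look something like the toy tally below – the trial data and weighting scheme are invented for this sketch and bear no relation to the Osnabrück team's actual model.

```python
from collections import Counter

# Invented tallies from hypothetical VR trials: in each forced choice the
# participant could only spare one of the two things in the car's path.
# Each pair is (category_spared, category_hit).
trials = [
    ("human", "animal"), ("human", "object"), ("animal", "object"),
    ("human", "animal"), ("animal", "object"), ("human", "object"),
    ("object", "animal"),   # the occasional inconsistent choice
]

spared, hit = Counter(), Counter()
for kept, lost in trials:
    spared[kept] += 1
    hit[lost] += 1

# Crude 'value' per category: how often it was spared versus sacrificed,
# smoothed and normalised into weights a planning algorithm could consume.
categories = sorted(set(spared) | set(hit))
raw = {c: (spared[c] + 1) / (spared[c] + hit[c] + 2) for c in categories}
total = sum(raw.values())
weights = {c: round(raw[c] / total, 3) for c in categories}
print(weights)   # e.g. a swerve planner protects the highest-weighted category first
```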

It will be a tricky argument for any company to make in today's legal system. Peter Scott, an IT expert with 30 years' experience at NASA and the author of Crisis of Control: How Artificial SuperIntelligences May Destroy or Save the Human Race, says the autonomy of a system doesn't shield its publisher or developer from liability.

"Some commercial entity is responsible and cannot abdicate that responsibility even to an employee, much less an information system," he says. "If the power company [mistakenly] sends you a million dollar electric bill they aren't any less liable for having used a computer to do it."

Of course, a faulty brake pad has an easily traceable pathway back to the source, and blame can be apportioned fairly simply. In the increasingly complicated system of a computer which recommends medical care or flags credit card transactions it thinks might be fraudulent, there's a lot more to it.

What if the system which accessed the smart watch worn by the CEO in the above scenario was a third party plug-in? Would the initial developer and the plug-in publisher share liability? How could they, when neither could foresee how the system would behave when their products were combined?

"When you buy a Tesla you don't own the software, they do," long time CTO and cybersecurity expert Mark Herschberg says.

"You can't modify the software, they can modify it at any time. So if it's partially a hardware and partially a software issue, who's at fault? What if a software upgrade conflicts with a hardware upgrade you did?"

Herschberg says exactly where the inputs a system uses to learn come from could also be a factor.

"AI/machine learning typically uses training data,” he says. “When a customer buys an algorithm from one company but uses their own training data or buys it from a different company and it doesn't work as expected, how much [at fault] is the AI and how much is the training data?"

Scott raises another interesting point. There are moves in Europe to assign AI the right of self-determining personhood when it proves itself indistinguishable from human beings, which brings us back to the thorny philosophical issue of blaming the parents for the delinquent kid.

"If an artificial entity has been granted the right of self-determination it can't also be a chattel," he says.

"Its manufacturer would be no more than an employer of a human. The (let's call it) 'owner' of the AI would instead be the de facto employer, only responsible to the extent that their training precipitated the behaviour in question."

Looking after number one

Any time the media talks about AI, one of the first things to get a mention is the idea machines might take over or kill us all, replete with photos of HAL 9000 from 2001: A Space Odyssey or The Terminator from the eponymous series.


Such Promethean myths have been staples of fiction on the page and on screen for a century, but until names like Elon Musk and Stephen Hawking raised the alarm about AI in a world where it's now a reality, few took such threats seriously.

What happens if an AI system decides it's more important than a nearby human because a developer forgot to spell out otherwise, as outlined above?

What if an avaricious manufacturing corporation quietly instructs a robot working on its assembly lines to sacrifice bystanders rather than itself, because the death or disability claim of an employee will still be cheaper than the losses from shutting the line down?

The Facebook chatbots which invented their own language, mentioned above, were a high-profile example of self-learning AI performing so well it seemed hiding its intentions from its human handlers was an explicit goal.

But it wasn't even the first example of machine deception. In 2009 a Swiss experiment designed to teach autonomous systems to collect resources led to the robots involved lying to each other in order to hoard more for themselves.

So in some circumstances AI has already started to follow Asimov's third law and preserve itself. If Skynet wages war on humanity to survive when we try to deactivate it, developing murderous instincts even though the US military didn't program them in, whose fault will it ultimately be?

AI and the law

One of the reasons the law has to hurry up and get ready is to reassure one of the most important stakeholder groups in AI research – investors.

Aside from the usual commercial risks of any business venture, the fear a company could be exposed to crippling litigation is enough to give investors cold feet, and unless the industry can be assured of some kind of risk protection, it simply won't get the money it needs to do its work and bring AI systems to market.

Because of how new the whole field is and how fast it's already changing, there are few regulatory proposals and even fewer legal precedents about how to deal with AI which goes wrong.

Still, we've seen a few cases appear – Lea points to a class action lawsuit from early 2017 in which plaintiffs are taking Tesla to task over the Autopilot system, claiming it contains inoperative safety features and faulty enhancements.

David Danks is a Professor of Philosophy and Psychology at Carnegie Mellon University's Department of Philosophy, and he's leading a research project to investigate the ethical issues posed by AI.

Without any guidance or precedent he doesn't think the law has any choice but to rely on established norms of what constitutes liability.

"We can't provide performance standards like we do with, for example, the brakes of a car precisely because we want the AI to adjust its behaviour to the surrounding environments in 'intelligent' ways," Danks says. "But we can't necessarily specify in advance what counts as 'intelligent'."

That isn't to say the AI players aren't trying to lay out the field in their favour. Praful Krishna, an AI automation expert and CEO of cognitive computing developer Coseer, says there's a lot of lobbying going on by the big players despite it being too early for legislation.

 "In the current environment clients are agreeing to no liability clauses," Krishna says. "Wherever it is not possible, AI vendors usually don't venture ahead with big projects."

In his piece on The Conversation, Lea says safety standards – including a certification process – will be crucial.

To get there, he says, expertise from within the AI developer community is needed because of a general lack of understanding among the public. Lea calls for the establishment of advisory committees to legislators and governments as soon as possible.

But can the industry continue to avoid such legislative strictures, ensuring liability for AI rests with the user?

"Within limits," Lea says. "Naturally developers and publishers will work hard to get the balance of legal regulation that most favours them but it's always a matter of circumstance, especially should a major AI-related issue crop up."

David Danks agrees the industry can set itself up to deflect liability for as long as possible.

"Unfortunately, I think that they probably do have a chance," he says. "The most popular current tactic is to label AI technology as 'beta' or 'developmental,' as any problems can then be blamed on the user."

But while Danks acknowledges that won't work forever in the face of continuing improvements in AI performance, he thinks the industry will move with the improvements to resist liability even harder – “whether through terms & conditions, warning labels or explicit contracts”.

“At least in the US, that strategy will likely work for a very long time," he says.

In the end, says Marc Lamber, an attorney and self-driving car expert at Fennemore Craig Attorneys, it will only shake out when a jury decides.

"Did the AI play a causative role in the damage and should the AI have prevented the incident altogether?" he asks. "Among other things, this could involve some complicated and competing expert testimony regarding whether the AI functioned as it should."

To predict that outcome, says Scott, we need only follow the money.

"This is where insurers will come in," he says. "Around 1.3 million lives are lost worldwide every year to traffic accidents and autonomous vehicles could make that a memory.”

“Insurers would be highly motivated to underwrite and indemnify manufacturers against the occasional exception. Out of court settlements, even for huge sums, would be affordable and cost effective."

The view from here

A good indicator corporate in-house counsel are starting to think about the future is when it shows up in their user agreements – and terms and conditions statements have already started to change.

"When we contract with our customers, we always include clauses which absolve us from any liability rising from inaccuracy of our AI systems," Krishna says. "I'm sure a lot of AI driven products, especially chatbots, have similar approaches in their public T&Cs."

Until the first big case or lawsuit, such uncertainty might be holding the industry back more than we realise.

"Some hard hitting negotiations will put the first stakes on the ground," Krishna says. "Some path-breaking legislation will then settle this debate within two to five years. Until then this is a trillion dollar question and one of the biggest stumbling blocks in commercial development of AI."

Some scholars – particularly in the US, according to Lea – have argued traditional negligence rules and standards should apply so the investment heat doesn't cool and stall innovation, but Danks believes there's going to be what he calls 'a number of conflicting decisions' over the next few years.

Where it will lead is anyone's guess, but liability in self-learning systems is another example of how no new technology emerges in a vacuum – the law and governments both have a part to play, and we all need to join them in talking about the issues now.

Drew Turney is a freelance technology journalist

The views and opinions expressed in this communication are those of the author and may not necessarily state or reflect those of ANZ.
