Liability Issues in Autonomous and Semi-Autonomous Systems
John Buyers & Dr Sanjeev Ahuja – ITechLaw World Conference, Miami, May 2016
osborneclarke.com
Introduction – AI – expectations vs. reality
Introduction – AI – expectations vs. reality
Introduction – AI – Distinguishing Features
• Neural Networks
• Learning
• Recognition (processing of external stimuli)
• Decision making capabilities
• Autonomy (or semi-autonomy) to act (or react)
following decisions
• Creativity?
• Ability to communicate?
• Self-Awareness?
“Any sufficiently advanced technology is
indistinguishable from magic”
– Arthur C. Clarke
Introduction – AI – Liability – a Sliding Scale
Introduction – AI – Existing Liability Frameworks
Current AI liability – product liability:
• Contract
• Negligence (Tort)
• Consumer Protection Legislation (UK CPA 1987) (Strict Liability)
Introduction – AI – Future Liability Frameworks
Future AI liability:
• Insurance Funded (Strict Liability) / Turing Registries
• Individuation or Machine Personhood
• Agency
• Crime
AI – Intelligent Machines as Complex Products
Driverless Car Example
Intelligent Machines as complex products
• Features
• Multiple manufacturers
• Sensors
• Software
• Many systems
• Multiple points of failure
• Existing liability models work well where
machine functions (and hence
responses) can be directly traced to
human design, programming and
knowledge
AI – Intelligent Machines as Complex Products
Driverless Car Example
Intelligent Machines as complex products
• Potential targets for liability
• Vehicle manufacturer
• Reseller
• Importer
• Components manufacturers
• Computer software provider
• Machine designer
• Components designer
• Machine "operator"
Product Liability - Contract
• Ensures that manufacturer/retailer sells products that meet contractual
standards
• Predominantly aimed at Pure Economic Loss (i.e. monetary loss)
• Contract liability created by:
• Express terms (as to defects and/or warranties)
• Implied terms as to quality and fitness for purpose (see in UK
Consumer Rights Act 2015/Sale of Goods Act 1979) – not defects
per se
• Fault based liability – Claimant must prove:
• Breach of express/implied term
• Breach caused loss - causation (Hadley v. Baxendale)
• Remedies: Damages (put party in position had contract been
performed properly)
Product Liability - Contract
• Advantages:
• Can define the scope/boundaries of the contract obligation and
hence the liability
• Implied terms
• Disadvantages:
• Claims limited by Contract Privity – i.e. between the contract
counterparties (with limited exceptions: see Contract (Rights of
Third Parties) Act 1999)
• Different standards apply to UK consumer contracts and B2B
contracts
Product Liability - Tort
• Underpins “duty of care”
• Can run concurrently with contractual duties – see Donoghue v. Stevenson
[1932] AC 562
• Predominantly used for real (i.e. non-pecuniary) loss/damage
• Fault based liability - Claimant must prove:
• Defendant owes a duty of care
• Defendant failed in that standard of care
• In that failure damage was caused (which was reasonably foreseeable)
• Remedies: Damages (put party in position had the tort not occurred)
Product Liability - Tort
• Advantages:
• Exists independent of contract relationship – but need to prove
“duty of care” subsists
• Disadvantages:
• No recovery for pure economic (i.e. financial) loss
• Contributory negligence can reduce recovery
• as can volenti non fit injuria (voluntary assumption of risk)
• Evidence of "fault" is normally held by the Defendant
Product Liability – Consumer Protection Act 1987
• Implements Directive 85/374/EC on Liability for Defective Products
• Introduces “strict” product liability regime for defective products in UK
• Does not affect availability of Contractual or Tort based remedies
Scope:
• A person who is injured, or whose personal property is damaged, by a
product can claim against the manufacturer or supplier if it can be shown
that the product is defective
• No requirement to show fault, but the burden of proof is on the claimant
to show the defect existed
Defects
• Defect exists where “the safety of the product is not such as persons
generally are entitled to expect” – Section 3(1) CPA 1987
Product Liability – Consumer Protection Act 1987
• Advantages:
• No requirement to show fault
• No privity
• Wide potential selection of liability targets – suppliers/manufacturers
• Disadvantages:
• Still causation issues – need to show defect
• No actions for pure economic loss (again)
• Software/Intellectual products – unclear whether these are covered
by the Directive/CPA
• Development Risks Defence
AI – Intelligent Machines as Complex Products
Driverless Car Example
Target | Contract | Tort | CPA (Product Liability)
Vehicle Manufacturer | Only if you have contracted directly | Only if you can prove negligence | Only if you can prove "fault"
Reseller | Only if you have contracted directly | Unlikely | Only if you can prove "fault"
Importer | Only if you have contracted directly | Unlikely | Only if you can prove "fault"
Components Manufacturers | No – may be liable under indemnity to your counterparty | Maybe under contributory action | No
Computer Software Provider | No (unless you have licensed the software directly) | Only if you can prove negligence – otherwise possibly under contributory action | No
Machine Designer / Components Designer | No – may be liable under indemnity to your counterparty | Only if you can prove negligence – otherwise possibly under contributory action | Only if the designer is the manufacturer
Machine Operator | No | Only if you can prove negligence | No
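The liability matrix above can be sketched as a simple lookup structure. This is purely illustrative: the target names and verdict strings paraphrase the slide (only a few rows are shown), and nothing here is legal advice or part of the original deck.

```python
# Hypothetical encoding of the driverless-car liability matrix from the slide.
# Each target maps to its prospects under the three existing frameworks.
LIABILITY_MATRIX = {
    "vehicle manufacturer": {
        "contract": "only if contracted directly",
        "tort": "only if negligence proven",
        "cpa": "only if 'fault' proven",
    },
    "reseller": {
        "contract": "only if contracted directly",
        "tort": "unlikely",
        "cpa": "only if 'fault' proven",
    },
    "machine operator": {
        "contract": "no",
        "tort": "only if negligence proven",
        "cpa": "no",
    },
}

def viable_routes(target: str) -> list[str]:
    """Frameworks that are not a flat 'no' for the given target."""
    entry = LIABILITY_MATRIX[target]
    return [route for route, verdict in entry.items() if verdict != "no"]
```

For example, `viable_routes("machine operator")` returns `["tort"]` – mirroring the slide's point that the operator is only ever reachable in negligence.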
AI – A Failure in Causation?
Intelligent Machines as complex products
• All of the following scenarios allow conventional product liability analysis:
• Traceable defects (in systems, software)
• machine decisions that are based on defective programming
• Failure to provide correct operating instructions
• Incorrect operation (if relevant)
• But what about inexplicable failures?
AI – A Failure in Causation?
Tort – A Partial Solution:
• res ipsa loquitur – "the thing speaks for itself"
• does not ordinarily occur without negligence
• caused by an agency within the exclusive control of the defendant
• not due to any voluntary action or contribution on part of plaintiff
• Defendant's non-negligent explanation doesn’t completely explain injury
• Toyota Motor Corporation (2013) WL 5763178 (Texas)
• multiple successive similar "inexplicable" failures
CPA – Another Partial Fix
• Consumer expectations test moderates causation
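The four res ipsa loquitur elements listed above are cumulative, which can be sketched as a checklist. The class and field names are invented paraphrases of the slide; this is a sketch of the doctrine's structure, not legal analysis.

```python
from dataclasses import dataclass

# Hypothetical checklist for the res ipsa loquitur elements on the slide.
@dataclass
class ResIpsaFacts:
    ordinarily_needs_negligence: bool  # does not ordinarily occur without negligence
    exclusive_control: bool            # agency within defendant's exclusive control
    no_plaintiff_contribution: bool    # not due to plaintiff's voluntary action
    explanation_incomplete: bool       # defendant's non-negligent account falls short

def res_ipsa_applies(f: ResIpsaFacts) -> bool:
    """All four elements must be made out for the inference to arise."""
    return all([f.ordinarily_needs_negligence, f.exclusive_control,
                f.no_plaintiff_contribution, f.explanation_incomplete])
```

The conjunction is the point: an "inexplicable" AI failure may satisfy the first element while leaving exclusive control (which of many suppliers?) unprovable.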
Insurance model
But what about inexplicable isolated incidents?
• Existing legal models (including causation) fail
• Argument for introduction of strict liability model
• See Accident Compensation Act 1973 – New Zealand
• Increased complexity means increased costs
• costs of litigating complex autonomous systems – could be
better spent on compensating victims rather than lawyers
and expert witnesses
• Incentivises stable environment for these systems to be
developed (rather than stifling innovation)
Insurance model
Turing Registries as a basis for AI liability?
• Submit intelligent machines to certification process
• Quantify "risk" certification based on spectrum:
• Higher intelligence, higher autonomy, greater
consequences, higher premium
• Pay premium for certification
• Premium funds risk "common pool"
• No dealing with "uncertified" AI: system becomes self-fulfilling
• Similar, but not the same as traditional insurance
• Removes causation, proximate cause and allows for wilful
acts of AI (usually excluded)
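The risk spectrum described above (higher intelligence, higher autonomy, greater consequences, higher premium) can be sketched as a toy multiplicative model. The function name, factors and weights are all invented for illustration – the slides propose no concrete formula.

```python
def registry_premium(base_rate: float, intelligence: float,
                     autonomy: float, consequence: float) -> float:
    """Toy 'Turing Registry' premium: each risk factor is a multiplier
    (1.0 for a minimal-risk machine, rising with capability)."""
    return base_rate * intelligence * autonomy * consequence

# A simple appliance vs. a fully autonomous vehicle (illustrative weights):
appliance = registry_premium(100.0, 1.0, 1.0, 1.0)       # 100.0
driverless_car = registry_premium(100.0, 3.0, 4.0, 5.0)  # 6000.0
```

The premiums would fund the common risk pool, so the more capable and consequential the machine, the larger its contribution to the pool.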
Individuation – Intelligent Machines as “persons”
• Machines as “agents” of human principals
• Criminal liability consequences?
• Turing tests
• How to define machine personhood: some contemporary and historical
analogues
• Legal persons: Corporations
• Animals
• Children
• Slaves? “One who is under the power of a master, and who belongs to
him; so that the master may sell and dispose of his person, of his
industry, and of his labor, without his being able to do anything, have
anything, or acquire anything, but what must belong to his master.” Civ.
Code La. (1825) Art. 35
Conclusion
“In law, a person is legally liable when
he/she is financially and legally responsible
for something.”
Wikipedia
“If we want to avoid the injustice of holding
men responsible for actions of machines
over which they could not have sufficient
control, we must find a way to address the
responsibility gap in moral practice and
legislation”
The Responsibility Gap, A. Matthias
Thank you


Editor's Notes

  • #2: Annotated Copy © 2016, Osborne Clarke LLP
  • #3: Sanjeev has provided a "state of the nation" in relation to AI. My task is to ground this in legal analysis – to separate expectations from reality. Intelligent machines play heavily on the psyche – Hollywood. Stephen Hawking has also stoked the fire with his comments. Our yearning for AI is peculiarly quixotic: on the one hand we want machines to be infallible; on the other we expect them to be imbued with the characteristics of humanity. The reality is that we are still some way off true AI sentience; however, we need to be prepared and to rehearse the liability arguments for when that does happen. Present liability and consequences as essentially scalable. At one end (the current state of the art) we have AI systems as assemblages of complex components, using tort, contract and consumer protection principles to trace liability. At the other end, self-aware thinking machines that accrue artificial personhood, self-responsibility and liability for their actions. At the latter end we tend to skirt into the realms of sci-fi, but it is important to understand where liability models weaken. Ultimately the law will need to adapt and flex to accommodate this new technology.
  • #4: This is a graphic illustration of our misplaced belief in the superhuman abilities of our machines. [Although the clip is shocking, beyond a few bruises no-one was hurt.]
  • #5: Won't spend time explaining this – covered in depth by Sanjeev. The bullets inform the characteristics of autonomous, semi-autonomous and, in yellow, truly intelligent machines. At its most basic level, we have machines that can respond and make limited pre-defined decisions, in response to external stimuli and in accordance with programmed software parameters. This progresses to learning machines that have the capacity to make autonomous decisions that are not directly traceable to their programming – as Sanjeev has explained, they learn, but their behaviours are not truly explicable. We are still, however, some way off the "holy grail" of creating a truly self-aware sentient machine. To analogise: you can get into a Google car which will drive you from A to B, and make decisions en route – but you still cannot have a sensible conversation with it!
  • #6: So before we get into a discussion of existing liability frameworks, it is worth exploring the sliding scale theme, bearing in mind that any future liability frameworks are going to need to differentiate between dumb machines (the toaster) at one end and the super-intelligent sentient HAL 9000 at the other. The HAL 9000 on the right of the slide will have a greater degree of autonomy and independent thought, and hence greater consequences flowing from its actions; when it is created, it is very likely to be imbued with a sense of personhood. The two ends of the spectrum are easy to rationalise. The real conundrum is the machine in the middle – the one which is autonomous [having the power of independent action] and whose decisions have real consequences, but which is yet to achieve true sentience.
  • #7: So let's take a look at the existing liability frameworks which could apply to "machine generated consequences". These are essentially focussed on product liability, broken down into three distinct categories: contract law, negligence (tort) and, so far as the UK is concerned, strict liability under the Consumer Protection Act 1987. We will look at the scope of each briefly and then consider potential issues in relation to their application to AI systems.
  • #8: Before we do this, however, it is worth providing a brief overview of the liability models which could apply to AI systems in the future. These are, at the moment, theoretical, but could usefully deal with some of the consequences of machine actions. I have broken this down into a discussion of an insurance-funded strict liability model, including so-called "Turing Registries". There are of course a whole spectrum of liabilities which could apply to the theoretical sentient machines mentioned earlier – such as agency or crime. I have mentioned them for completeness here, but time doesn't permit me to speak to them in this presentation. [If you are interested, I suggest you take a look at the paper on the ITech website, which has more detail.]
  • #9: So let's move back to the current legal position. Clearly the most conventional analysis we can apply to intelligent or semi-intelligent machines is as complex products. Taking the driverless car as an example, what we can see is that it is an assemblage of many and varied integrated systems produced by multiple manufacturers. For a driverless car to work effectively, it needs sensors to navigate road obstructions, such as radar and laser detection. It must also have a computer to direct its action, and that computer needs a logic framework within which to operate – internally by use of its own operating software, and externally by reference to map data. All of these systems need to work together effectively – and this is without consideration of all the usual mechanical components which form a standard car, which must also be present and functioning. As mentioned on the slide: existing causative liability models work well when machine functions (and hence responses) can by and large be traced back to human design, programming and knowledge. [They break down when this cannot be done.]
  • #10: All of this complexity gives rise to a potential plethora of liability targets, ranging from the vehicle manufacturer itself, all the way down to the designer of an individual component, depending upon where the actual defect, fault or breach occurs.
  • #11: Contract has a clear role to play in determining product-based liability – sold products need to meet contractually determined standards. Contract liability is aimed at the recovery of financial (or pure economic) loss as a result of breach of these contractual standards; however, as we are all aware, contract liability can in some circumstances lead to the recovery of damages for consequential loss and/or damage. Contract terms, and hence liability, may be either express – as to defects and warranties – or implied. In the UK there are implied terms as to quality, fitness for purpose, title and description in the Consumer Rights Act 2015 (for B2C contracts) or the Sale of Goods Act 1979 (for B2B or C2C). Although there is not a focus on "defects" per se under the Sale of Goods legislation, there is clearly an emphasis on conformity with description. Arguably that could amount to nearly the same thing: a failure to conform to a description or specification is very close to a "defect" in practical terms. We'll take a look at the relevant strengths and weaknesses of contract liability in this context in a moment, but need to remember that contract is a causative liability framework: the claimant must prove that there was a breach of either an express or an implied term, and that that breach caused the loss. Finally, it is worth remembering that the primary remedy for breach of contract is damages (as assessed to put the innocent party in the position they would have been in had the contract been correctly performed).
  • #12: The primary advantage of contract liability is that it is open to the contract counterparties to determine the scope of the contract responsibilities and obligations as between them. This means that it is open to them to tailor the agreement to the functions and performance of the AI system involved. Implied terms provide an inbuilt consumer expectation test: where goods are sold "in the course of a business" there is an implied term that the goods are of satisfactory quality (s.14(2)) and fit for a purpose that the buyer has made known to the seller (s.14(3)). Products are therefore of satisfactory quality if they meet the standard that a reasonable person would regard as satisfactory, taking into account their description, price and all other relevant circumstances. The major disadvantage of contract liability is that it is not a liability that applies generally to the "whole world", but rather one which is constrained by contract privity. This means that obligations can only be enforced by contract counterparties: in some situations it is possible to conceive of a contract relationship subsisting in your use of an intelligent system, but equally, in many others there will not be one. There is also a lack of consistency in contractual standards in relation to contracts for the sale of goods, which makes the application of the framework complex. So, for example, for public policy reasons there are higher standards which apply to contracts made with consumers – such as public statements on the specific characteristics of the goods made by the seller or the producer, particularly in advertising, which can be taken into account in such circumstances.
  • #13: Product liability in tort refers to a breach of a duty of care in negligence, and since Donoghue v. Stevenson in 1932 tortious duties can run concurrently with contractual liabilities. If a consumer purchases a product in the form intended to reach him or her, without the possibility of reasonable intermediate examination, and the producer knows that an absence of reasonable care in the preparation of the product will result in reasonably foreseeable personal injury or property damage, then the producer owes a duty to take reasonable care in its production. Donoghue v. Stevenson concerned a decomposed snail in a ginger beer bottle, but it does not take much to extrapolate that analysis to a driverless car or a surgical robot. Again, it is worth pointing out the causative nature of tort as a liability framework: it is fault based. The claimant must prove that the defendant owed him or her a duty of care, that the defendant fell below the required standard, and that damage was caused as a result. In contrast to contractual damages, tort-based damages are awarded on the basis of putting the injured party in the position they would have been in had the tort not occurred.
  • #14: In contrast to contract, the scope of potential liability is wide: it could apply equally to manufacturers, producers and anyone directly involved in the manufacture and distribution of a defective product. But you do need to establish that a duty of care subsisted and was breached, and that the chain of causation is not broken by the damage being too remote. There are a number of disadvantages worth pointing out briefly. There are very real difficulties in claiming damages for pure economic loss in tort, certainly so far as the UK is concerned; since the high-water mark of Junior Books v. Veitchi there are only a limited number of circumstances where this is possible, such as negligent advice from surveyors. Contributory negligence can also act as a defence to liability if it is shown that the claimant should have known of the defect but negligently failed to recognise it, negligently used the product, or failed to take account of its operating instructions; in such cases damages are reduced to a degree commensurate with the claimant's negligence. Volenti non fit injuria, or voluntary assumption of risk, is less common in product liability cases, on the basis that a claimant who knows of a defect is less likely to use the product, and if they do, that usually breaks the causative chain between defect and damage. Finally, it is worth pointing out that proving liability in tort can be factually very difficult, especially in product liability cases, as the details required to show liability are very often held by the defendant.
  • #15: Finally, in relation to product liability, there is the Consumer Protection Act 1987, which implements EU Directive 85/374/EEC on liability for defective products. The Act introduces a strict liability regime which does not affect the general availability of contract and tort based remedies. It provides that a person who is injured, or whose personal property is damaged, by a product will be able to claim against a manufacturer or supplier of that product (and certain other third parties) if it can be shown that the product was defective. There is no requirement to prove fault on the part of the manufacturer, but the claimant must still show, on the balance of probabilities, that the defect existed. The Act introduces a consumer expectations test: a defect exists where "the safety of the product is not such as persons generally are entitled to expect" (s.3(1) CPA 1987). Consumer expectations themselves are subject to a reasonableness test.
  • #16: On the advantages of the CPA regime: there is no requirement to show fault; neither is there a privity requirement, and the regime allows for a wide variety of potential liability targets, including suppliers and manufacturers. There are still some problems with consumer protection product liability. A causation requirement still exists, although it is limited to the finding of a defect and moderated by the consumer expectation test. The Act is also designed to cover claims for real damage, so it does not encompass claims for pure economic loss. In the context of AI, there are also problems with the definition of "product" under the Act. Product is defined as "any goods or electricity and includes products aggregated into other products, whether as component parts, raw materials or otherwise" (s.1(2)(c)). Quite unhelpfully for our purposes, the Act is not clear as to whether software and/or other products of an intellectual type fall within that definition. Disembodied software per se is not treated as a "good" under English law, although there is an argument that might encompass software embedded into functional hardware. Finally, there is the development risks defence, which protects the manufacturer "if the scientific and technical knowledge at the time the product was manufactured was not such that a producer of a similar product might have been able to discover the defect" (s.4(1)(e)). This is obviously highly relevant to our current discussion, which inevitably involves the "state of the art" in machine development.
  • #17: What I have attempted to do on this slide is map, on an anecdotal basis, how the respective existing liability regimes might apply to our earlier example of the driverless car. I won't go into detail here, but hopefully you can see how they interact with each other and produce a very complicated potential matrix of coverage.
  • #18: To recap, all of the previously discussed liability frameworks require some element of causation, to a greater or lesser degree. In general terms, the scenarios listed below are dealt with very comfortably by them: traceable defects; machine decisions that can be traced back to defective programming; failures to provide correct operating instructions; and incorrect operation of machines. But what happens when the "defect" is inexplicable, or an event cannot in fact be traced back to a defect, a fault or a directly related human error? As we have seen in Sanjeev's presentation, as intelligent machines and AI systems "learn" for themselves, their behaviours become less and less directly attributable to human programming. These machines are not acting on a prescriptive instruction set but on a system of rules that may not have anticipated the precise circumstances in which the machine should act. To take the example of our driverless car: what if our vehicle has been programmed both to preserve the safety of its occupants and to avoid pedestrians at all costs, and is placed in an unavoidable situation where it must decide between avoiding a pedestrian crossing into its path (and thereby running into a brick wall, injuring or even killing its occupants) or running over the pedestrian (and thereby saving its occupants)? Can either outcome of that decision be said to be a failure or a defect, even if people are injured or possibly killed as a result? It is at this relatively new interface that existing product liability frameworks begin to weaken and break down.
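  The point about rule-based rather than prescriptive behaviour can be made concrete with a toy sketch. This is a hypothetical illustration only (the function names, risk figures and weighting are my assumptions, not anything from a real vehicle system): a controller minimises expected harm under two design rules that its programmers never expected to conflict at the same instant, so whichever option it selects is the "correct" execution of its rules, making it hard to label the resulting injury a defect.

```python
# Hypothetical sketch of a rule-based controller facing the dilemma in the
# text. All names and numbers are illustrative assumptions.

def choose_manoeuvre(options):
    """Select the option with the lowest expected harm under two rules
    (protect occupants; avoid pedestrians) that here conflict head-on."""
    OCCUPANT_WEIGHT = 1.0    # illustrative rule weights, not a real standard
    PEDESTRIAN_WEIGHT = 1.0

    def expected_harm(option):
        return (OCCUPANT_WEIGHT * option["occupant_risk"]
                + PEDESTRIAN_WEIGHT * option["pedestrian_risk"])

    return min(options, key=expected_harm)

dilemma = [
    {"name": "swerve_into_wall", "occupant_risk": 0.9, "pedestrian_risk": 0.0},
    {"name": "continue_ahead",   "occupant_risk": 0.0, "pedestrian_risk": 0.9},
]

# Both options carry severe harm; the rule still returns one of them, and the
# machine has followed its programming exactly, defect or no defect.
print(choose_manoeuvre(dilemma)["name"])
```

  Note that with equal weights the two options score identically, so the choice collapses to an arbitrary tie-break: precisely the kind of outcome that a causation-based framework struggles to characterise as a traceable fault.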
  • #19: There are some partial fixes in our existing liability frameworks. Tort in particular provides the principle of res ipsa loquitur, "the thing speaks for itself", a doctrine equally applicable in the US and the UK. Res ipsa loquitur is useful in dealing with cases of multiple successive failures which cannot in themselves be readily explained. A classic example of its application involved Toyota Motor Corporation, where many of Toyota's high-end Lexus models simply accelerated for no apparent reason, despite the intervention of their drivers. Despite much investigation, the cause of these failures could not be pinpointed. Toyota took the step of settling 400 pending cases against it after an Oklahoma jury applied the doctrine of res ipsa loquitur and awarded the plaintiffs in that case $3m in damages. Of course, as we have already seen, the Consumer Protection Act in the UK also provides a partial fix, as the requirement to identify a defect in a "product" is moderated by a consumer expectations test.
  • #20: What this does not fix, of course, is the inexplicable isolated incident. That is where our existing frameworks fail, and we move into increasingly speculative territory. There are now clear public policy arguments for the introduction of a strict liability, insurance-based model in circumstances such as the one I mentioned earlier. New Zealand's Accident Compensation Act 1972 is a classic example of such a system working in practice (not, of course, in relation to AI systems, but in connection with motor vehicle accidents). In New Zealand, road traffic accidents are not litigated; instead, victim compensation is automatically paid at Government-set tariffs and funded by motor insurance premiums. Negligence and breach of contract actions are becoming ever more complex to litigate, as resources are spent identifying what has (or indeed might have) gone wrong. In particular, the argument runs that the money is better spent compensating the victims of accidents and incidents involving autonomous systems than on expensive lawyers and expert witnesses. So far as research and development is concerned, a strict liability, insurance-based model would also incentivise research on new intelligent AI-based systems, rather than forcing the R&D divisions of corporations to consider what defensive steps they should take to avoid a class action.
  • #21: Some commentators have argued that for intelligent machines we need to go a step further and set up what have been termed "Turing Registries", after the computing pioneer Alan Turing. This would work by submitting intelligent machines to a testing and certification process, with the cost of certification set on a spectrum: the higher the machine's intelligence and autonomy, and hence the greater the consequences of failure, the higher the premium payable to "release" that machine into the working environment. The premium would be paid by the developer or manufacturer wanting to deploy the AI into the market. Much as in the New Zealand strict liability insurance model discussed earlier, the premiums would fund a "common pool" from which claims would be paid. The system could become self-enforcing if AIs were prohibited from use without certification. As has been pointed out, this model is similar, but not identical, to insurance: it removes causation and proximate cause, but it also covers the wilful acts of AIs, normally something that insurance excludes.
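  The premium-on-a-spectrum idea can be sketched numerically. This is purely illustrative (the function, the base premium, the scoring range and the convex curve are all my assumptions; no actual Turing Registry pricing scheme exists): a certification premium that rises steeply with an assessed autonomy score, so that highly autonomous machines pay disproportionately more into the common pool.

```python
# Hypothetical premium curve for a "Turing Registry". Names and figures are
# illustrative assumptions, not a proposed real tariff.

def certification_premium(autonomy_score, base_premium=1_000.0, exponent=2.0):
    """Map an assessed autonomy score in [0, 1] to a certification premium.

    A convex (here quadratic) curve is assumed so that greater autonomy,
    and hence greater consequences of failure, attracts a steeply higher
    contribution to the common compensation pool.
    """
    if not 0.0 <= autonomy_score <= 1.0:
        raise ValueError("autonomy_score must lie in [0, 1]")
    return base_premium * (1.0 + 9.0 * autonomy_score ** exponent)

# A simple tool pays the base rate; a fully autonomous system pays ten times it.
print(certification_premium(0.0))  # 1000.0
print(certification_premium(1.0))  # 10000.0
```

  The convexity is a design choice, not a requirement: a registry could equally use actuarial bands, but the principle the commentators describe is the same, with the premium tracking assessed autonomy rather than proven causation.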
  • #22: Now firmly in the realms of science fiction, we need to consider briefly the inevitable question of how our liability rules will cope with machines when they individuate, that is to say develop distinct legal personalities and individual identities of their own. Clearly, if machines possess a sentient personality of their own, there is no reason why they cannot directly accrue liability in the same manner as living, breathing humans. There are no answers here, obviously, but rather questions, and we can consider some current and historical legal analogues. How are we going to treat self-aware machines, and are we going to fall into the unfortunate historical bear traps of the past? Hopefully the bullets I have added here will give you some food for thought.
  • #23: So, finally, the conclusions. As I mentioned on my first slide, liability is essentially a sliding scale based on the degree of legal responsibility society places on a person. Historically, responsibility, and hence liability, levels are not static: able-minded adults, children and mentally incapacitated adults carry different levels of liability, the latter two groups having little or no responsibility for their actions and therefore a commensurately low degree of accountability and liability. Until relatively recently, the question of whether a machine should be accountable, and hence liable, for its actions was a relatively trite one: a machine was merely a tool of the person using or operating it, and there was no question of a machine assuming personal accountability or even "personhood", as machines were incapable of autonomous or semi-autonomous action. We are now effectively at a "tipping point" in how we manage the machine-generated consequences of our new AI creations. As we have seen, contract, tort and strict liability consumer protection laws are effective to a degree in managing these consequences, but they effectively break down where cause and effect cannot be made out. The law needs to change to accommodate the consequences of these new machines.
  • #24: Questions