Abraham & Sharkey on AI Liability
Ken Abraham & Cathy Sharkey have posted Untangling AI Liability to SSRN. The abstract provides:
This Article is the first full consideration of the role tort law can play in addressing the harms that socially beneficial Artificial Intelligence technologies may cause. Such harms, which pose tort liability issues of both principle and policy, have thus far evaded comprehensive scholarly analysis. The seemingly insurmountable difficulties posed by the “black-box” problem of AI—i.e., the inability to peer inside AI applications or operations to identify what AI has done, and how, in any given instance—have thwarted certain scholars altogether, while others have resorted to modest, piecemeal approaches, aiming to tackle individual aspects of emerging AI harms one by one.
The groundbreaking analytical framework that we provide here, by contrast, sets forth a coherent, overall approach to the array of issues that will inevitably emerge in tandem with impending, far-reaching state-by-state AI tort litigation. Departing from the views of torts skeptics, we show how many of the intellectual and practical challenges that AI liability seems to pose can be resolved with tort law principles that are conventional under current law, or workable with minimal adjustment. We demonstrate how, in multiple contexts—including first-generation AI cases involving self-driving cars and chatbots—putative “black-box” obstacles to AI liability can be addressed by the objective standard of reasonableness of the defendant’s conduct, the products liability defective design standard, or intentional torts such as invasion of privacy and defamation. Critically, all these standards and principles apply when liability is outcome-dependent and does not turn on the “state of mind” of an actor or the operation of the AI “decisionmaking” process.
We argue that an even more significant challenge AI liability poses is thus not its “black-box” nature but, instead, a matter that has to date been overlooked: the jurisprudential challenge of choosing between uniform and diverse liability rules. The centuries-long evolution of the common law of torts has been one of movement away from a highly diverse set of rules toward a more nearly uniform, one-size-fits-all approach to standards of care. In our view, however, the optimal approach to liability for AI losses will prove not to be uniform, but diverse. That is, treating all forms of AI as a product subject to design defect and failure to warn liability, or all forms of AI as a service subject to negligence liability, or all forms of AI liability as warranting immunity from liability by analogy to Section 230 of the Communications Decency Act, will, more often than not, be unworkable. Instead, we argue that different applications of AI and/or types of AI-related harms should be governed (depending on the setting) alternately by negligence, products liability, strict liability, and/or legislatively created approaches. Whether tort law can effectively employ such diverse approaches to what seems, at least from an aerial view, to be the same subject matter is the critical question.
This critical question spawns two other key questions, which we also address: the interplay between our proposed diverse jurisprudential approach and state and federal regulation of AI liability; and the potential availability and scope of insurance against such liability, which will, in turn, inform determinations as to what AI liability is reasonable to impose.