AI is rapidly being deployed around the world in many different use cases, with few guidelines for manufacturers, developers, and operators to follow. Along with the complexity of creating the technology, there remain many unanswered legal questions. As the CTO of a startup making privacy technologies accessible and easy to integrate, I have spent a significant amount of time thinking about the intersection of privacy, AI, and liability. More recently, I’ve taken a wider lens and have been deliberating over AI and liability overall. While many developers and engineers would rather stay as far away from thinking about the law as possible (I was one of them), that attitude might very well come back to bite us. As AI becomes more ubiquitous, we will be affected by the way failures in AI systems are arbitrated, and by who and what is deemed to be at fault for a given failure. I was recently given “Rechtshandbuch Artificial Intelligence und Machine Learning” (in English: “Legal Handbook of Artificial Intelligence and Machine Learning”), an excellent book on the legal implications of AI that covers the matter of liability.
The book, edited by Markus Kaulartz and Tom Braegelmann, provides an overview of one of the most significant legal conundrums our societies face: laws must keep up with technology, but must do so without hindering innovation. We look specifically at Chapter 4.2, written by Maren Wöbbeking, which explores the careful balance that must exist between legal liability and the technologies behind Artificial Intelligence (AI). Key to achieving this balance are accurate measurements of the risks associated with autonomous systems, and careful attribution of responsibility among system manufacturers, system operators, and bystanders.
Risks of Autonomous Systems
When considering legal liability around autonomous vehicles, for example, one must take into account both (1) the processes for autonomous decision-making and (2) the traceability and explainability of the decisions. Why proper decision-making traceability is essential for determining liability becomes particularly obvious when we consider the disastrous effects that human-falsified input data might have on an outcome.
Wöbbeking quite rightly points out the implications of human interactions with autonomous systems when it comes to determining liability: the majority of autonomous systems still rely heavily on supervised or semi-supervised learning (meaning that one or many humans have to inform the system on which inputs correspond to which outputs at the training phase), and there is a growing literature in adversarial machine learning that’s dedicated to thwarting the many ways in which inputs to an AI system during both inference and training can lead to completely unexpected outputs/outcomes.
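To make the adversarial-machine-learning point concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) applied to a toy logistic-regression "model". The weights, input, and perturbation budget are all illustrative, not taken from any real system; the point is only that a small, targeted change to the input flips the output:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Toy logistic-regression model with fixed, illustrative weights."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """One-step FGSM: x' = x + eps * sign(dL/dx) for the logistic loss."""
    p = predict(w, b, x)
    # For logistic loss L = -[y log p + (1 - y) log(1 - p)], dL/dx = (p - y) * w
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])      # hypothetical trained weights
b = 0.0
x = np.array([0.5, 0.2])       # clean input
y = 1.0                        # true label

x_adv = fgsm_perturb(w, b, x, y, eps=0.6)
print(predict(w, b, x) > 0.5)      # True: clean input classified correctly
print(predict(w, b, x_adv) > 0.5)  # False: small perturbation flips the output
```

The same idea scales to deep networks, where the perturbation can be imperceptible to humans while still changing the model's decision.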
Broadly speaking, end-to-end approaches using a single, large network yield the best results, but how can one pinpoint failure in such a system? Consider a complex system such as an autonomous car with multiple sensory inputs, such as LIDAR and cameras: what would happen if the camera calibration were to drift? For this reason, it might be better to build separate networks with clearly defined inputs and outputs; then again, this modularity might impact accuracy, increasing the risk of failure.
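As a rough sketch of why modular interfaces help with fault attribution, consider a pipeline split into stages that each validate their own input. The module names, thresholds, and "brightness as calibration proxy" check below are entirely hypothetical; the point is that a drifted sensor fails loudly at a known stage instead of silently corrupting the end-to-end output:

```python
import numpy as np

class CalibrationError(Exception):
    """Raised when a stage's input falls outside its declared contract."""

def camera_module(image, expected_mean=0.5, tol=0.2):
    """Toy 'camera' stage: reject frames whose brightness has drifted."""
    mean = float(image.mean())
    if abs(mean - expected_mean) > tol:
        raise CalibrationError(f"brightness drift detected: mean={mean:.2f}")
    return mean  # stand-in for extracted features

def planner_module(feature):
    """Toy 'planner' stage with a well-defined numeric input."""
    return "brake" if feature > 0.6 else "cruise"

good_frame = np.full((4, 4), 0.55)
drifted_frame = np.full((4, 4), 0.95)  # calibration has drifted

print(planner_module(camera_module(good_frame)))  # cruise
try:
    camera_module(drifted_frame)
except CalibrationError as e:
    print("fault localized to camera stage:", e)
```

In an end-to-end network, the same drift would simply shift the final decision with no stage boundary at which to assign the failure.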
Beyond potential attacks on AI models through deceptive inputs, and even slight equipment malfunctions that detrimentally modify input signals, Wöbbeking notes that it is a human’s responsibility to determine that an autonomous system is being used in the same environment it was trained to run in (i.e., was created for).
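One simple, purely illustrative way to operationalize that responsibility is a train-time versus run-time distribution check: record the statistics of the training data and flag incoming batches whose statistics fall outside that envelope. The z-score threshold and synthetic data below are assumptions for the sketch, not a recommended production monitor:

```python
import numpy as np

def fit_stats(train_data):
    """Record per-feature mean and std of the training distribution."""
    return train_data.mean(axis=0), train_data.std(axis=0)

def in_training_envelope(batch, mean, std, z_max=3.0):
    """Flag batches whose feature means deviate > z_max training std devs."""
    z = np.abs(batch.mean(axis=0) - mean) / (std + 1e-8)
    return bool(np.all(z < z_max))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 3))
mean, std = fit_stats(train)

in_dist = rng.normal(0.0, 1.0, size=(50, 3))   # same environment
shifted = rng.normal(5.0, 1.0, size=(50, 3))   # a different environment

print(in_training_envelope(in_dist, mean, std))   # True
print(in_training_envelope(shifted, mean, std))   # False
```

A check like this cannot prove the deployment environment matches the training one, but it gives operators a documented, auditable signal that the system is being used outside its intended envelope.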
Liability law should also consider whether appropriate measures were taken to mitigate the risks of using AI. In the constrained environments in which AI systems are currently deployed, they actually seem to reduce the risk of performing certain tasks when compared to a human performing the same task. Risk mitigation will become increasingly relevant and crucial if autonomous systems are to be deployed in more varied and less constrained environments.
Wöbbeking proceeds to discuss which questions might be covered by existing or new liability laws. One particularly difficult question to answer is whether the risks inherent in the state-of-the-art autonomous systems which otherwise reduce risk should be borne by the injured party, the operator, or the manufacturer.
One possible framework that could apply to autonomous systems is that of strict liability.
“Strict liability differs from ordinary negligence because strict liability establishes liability without fault. In other words, when a defendant is held strictly liable for harm caused to the plaintiff, he is held liable simply because the injury happened.” — https://lawshelf.com/coursewarecontentview/introduction-to-strict-liability/
The manufacturer of the autonomous system is the party who has the most knowledge and control over the risks and also the most incentive to cut costs. Allocation of responsibility to the manufacturer therefore becomes an extra incentive for them to thoroughly evaluate and mitigate risk. However, holding the manufacturers strictly liable has the very real risk of inhibiting innovation. While manufacturers have a lot more control than operators, they cannot always control whether the operators have deployed a system according to the instructions. Operators themselves often have a choice between either using or not using an autonomous system they are provided with. Their obligations, while limited, are crucial: reducing risk by using the system as instructed.
An alternative framework that could be applied to autonomous systems is that of proportional liability.
“Proportional Liability — refers to an arrangement for the assignment of liability in which each member of a group is held responsible for the financial results of the group in proportion to its participation.” — https://www.irmi.com/term/insurance-definitions/proportional-liability
Programmers, data providers, manufacturers, all further developers, third parties manipulating the system, operators, and users all influence the system and can contribute to a possible wrong decision. Taking this multi-causality into account, while more complex than blaming a single party, might be the right way to assess liability. Proportional liability might bypass the innovation-inhibiting effects on manufacturers while still holding irresponsible manufacturers more accountable for their lack of risk mitigation, and it would increase the likelihood that operators take the necessary precautions around autonomous system use.
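The arithmetic of proportional liability is straightforward to sketch. The fault shares below are invented for illustration; in practice, a court or regulator would have to determine them, which is exactly the hard part:

```python
def allocate_damages(damages, fault_shares):
    """Split `damages` among parties in proportion to their fault weight."""
    total = sum(fault_shares.values())
    return {party: damages * share / total
            for party, share in fault_shares.items()}

# Hypothetical fault assessment for a single incident, not legal guidance.
shares = {"manufacturer": 0.5, "operator": 0.3, "data_provider": 0.2}
allocation = allocate_damages(100_000, shares)
print(allocation)  # manufacturer bears 50%, operator 30%, data provider 20%
```

The formula is trivial; the legal difficulty lies entirely in assessing each party's share of the fault in a multi-causal failure.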
Finally, Wöbbeking postulates that one possible regulation for the allocation of recourse and liability risks would be to pool the risks through a community insurance solution, possibly one similar to social security law. This would likely avoid the complexities associated with liability law and (among other benefits) would also mitigate the disadvantages of a specific risk allocation.
Whatever the legal future of AI looks like, there are some clear takeaways for AI system developers on what we can do now to prepare ourselves; namely:
- Clearly document the design & testing process;
- Follow software engineering best practices — e.g. no dynamic allocation of memory and no use of recursion;
- Take great care in designing validation and test sets, and make sure a new test set is used after each major system update;
- Account for bias.
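Two of these takeaways can be partially automated. The sketch below shows hypothetical checks for the last two bullets: confirming that no training example leaked into the test set, and surfacing class imbalance before evaluation. The IDs, labels, and function names are illustrative:

```python
import numpy as np

def check_disjoint(train_ids, test_ids):
    """Ensure no training example leaked into the test set."""
    return len(set(train_ids) & set(test_ids)) == 0

def class_balance(labels):
    """Fraction of each class in the label set, to surface sampling bias."""
    values, counts = np.unique(labels, return_counts=True)
    return dict(zip(values.tolist(), (counts / counts.sum()).tolist()))

train_ids = [1, 2, 3, 4, 5]
test_ids = [6, 7, 8]
labels = np.array([0, 0, 0, 1])  # heavily skewed toward class 0

print(check_disjoint(train_ids, test_ids))  # True: no leakage
print(class_balance(labels))                # {0: 0.75, 1: 0.25}
```

Checks like these do not establish the absence of bias, but running and logging them after every major update produces exactly the kind of documented diligence that a liability assessment would look for.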