Liability in the Digital Age
The digitalisation of factories, surgical procedures and even toasters is raising questions about liability. Who is liable if your e-toaster burns down your home? What happens when an artificial intelligence operating on a patient makes an erroneous decision?
There are many definitions of artificial intelligence (AI). Most of them highlight the ability of a program, system, device or robot to ‘mimic’ human behaviour. AI does this by drawing on diverse technologies, and it can also learn and develop its skills independently while performing tasks traditionally done by human beings.
As this suggests, artificial intelligence can execute tasks that demand independent decision-making as effectively as a human would. An accident involving artificial intelligence is an incident in which the AI has played a role in the chain of events leading to the accident. As AI gradually enters physical products as well as administrative decision-making processes, concerns have emerged about the risks posed by its uncontrolled actions.
New EU regulations and liability regime for AI and emerging tech
During 2022, the European Commission will seek to establish regulations around artificial intelligence and emerging technologies to help further clarify liability. This will happen on two tracks. Within the Coordinated Plan on AI, the Commission will propose measures adapting the liability framework to the challenges of new technologies, including AI.
In addition, the Commission will propose revisions of the General Product Liability Directive, prepared by its Directorate-General for the Internal Market, Industry, Entrepreneurship & SMEs (DG GROW). The directive concerns liability for injuries to consumers, and damage to consumer-owned property, caused by physical products of any kind. Of course, liability leads to compensation only after something has occurred. From a risk management perspective, the Commission therefore also proposes revising product safety legislation, such as the Machinery Directive and the General Product Safety Directive, to take new technologies into account.
What kind of reforms can we expect for liability rules?
The Expert Group on Liability and New Technologies established by the Commission released a report, Liability for artificial intelligence and other emerging digital technologies1), in 2019, outlining possible approaches to allocating liability. Currently, the producer, or the importer into the EEA, of a product is liable for injuries and physical damage caused by safety defects. With new technologies, however, liability could also extend to providers of software, networks or IoT-based connected systems, to users of the new technologies or, indeed, to producers of the AI technology used in those systems. The expert group suggested the following solutions:
- A person operating a permissible technology that nevertheless carries an increased risk of harm to others, for example AI-driven robots in public spaces, should be subject to strict liability for damage resulting from its operation.
- In situations where a service provider ensuring the necessary technical framework has a higher degree of control than the owner or user of an actual product or service equipped with AI, this should be considered in determining who primarily operates the technology.
- A person using a technology which has a certain degree of autonomy should not be less accountable for ensuing harm than if said harm had been caused by a human auxiliary.
- Manufacturers of products or digital content incorporating emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product under the producer’s control after it had been placed on the market.
- For situations exposing third parties to an increased risk of harm, compulsory liability insurance could give victims better access to compensation and protect potential tortfeasors against the risk of liability.
- Where a particular technology increases the difficulties of proving the existence of an element of liability beyond what can be reasonably expected, victims should be entitled to facilitation of proof.
- Emerging digital technologies should come with logging features where appropriate in the circumstances. Failure to log, or to provide reasonable access to logged data, should trigger a reversal of the burden of proof so that the missing data does not work to the victim’s detriment.
- The destruction of the victim’s data should be regarded as damage, compensable under specific conditions.
- It is not necessary to give devices or autonomous systems a legal personality, as the harm these may cause can and should be attributable to existing persons or bodies.
An insurer’s perspective
Artificial intelligence risks appear in many different insurance lines, and AI-enabled, digitalised products and services carry several distinct risks. These are new risks, with insufficient statistical data available to fully meet the requirements of an insurable risk. If new rules allocating liability between producers, digital service providers and the owners and users of equipment are implemented, it may prove both complicated and expensive for the liability insurer of any one potentially liable party to adjust a claim.
So far, plain data has usually not been considered physical property. If forms of financial or immaterial loss are to be covered, incorporating them into insurance products will be a tough challenge for the insurance industry.
If an AI program or device causes an accident today, what is the common practice when establishing liability?
Currently, the producer of a product is liable for bodily injury to a consumer or damage to a consumer’s property. In business-to-business relations, product liability is based on contracts and sale-of-goods legislation and is as such not dependent on EU product liability legislation. In cases of bodily injury, however, product liability legislation applies regardless of whether the product is used in industry or at home.
Establishing a foundation for how liability will work with autonomous technologies is a challenge. If a technology has been accepted for implementation in vehicles and, in addition, approved for use in a city centre, for example, then who is ultimately liable when an accident happens?
Why do we need liability guidelines for AI and emerging technologies?
In preparatory work by the Commission, it has been stressed that new technologies do not fit into the framework of the now 35-year-old Product Liability Directive (85/374/EEC), because the accidents may not be caused by individual products but rather by systems of interconnected products, software, or even independently operating AI.
Artificial intelligence is also seen as a technology with a strong influence on societies, through systems for monitoring and tracking citizens, big-data analysis and automated decision-making, and is therefore seen as needing specific regulation to protect civil liberties.
However, industry bodies such as Orgalim (Europe’s technology industries) and the Federation of European Risk Management Associations (FERMA), as well as Insurance Europe, have opposed the need for stricter liability rules.
These industries and associations have highlighted the fact that other liability regimes already apply to any kind of injury or property-damage case, and argued that new rules would not be clear and would therefore only have negative impacts on product development. In addition, the industries have firmly rejected any kind of compulsory liability insurance requirement of the sort sometimes suggested for AI.
Consumer representatives such as The European Consumer Organisation (BEUC), of course, have seen a need to update EU product liability law so that it extends to digital content, products and services.
1) European Commission, Directorate-General for Justice and Consumers, Liability for artificial intelligence and other emerging digital technologies, Publications Office, 2019, https://data.europa.eu/doi/10.2838/25362