Explanatory Memorandum to COM(2022)496 - Adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive) - Main contents
Source: COM(2022)496
Date: 28-09-2022
1. CONTEXT OF THE PROPOSAL
·Reasons for and objectives of the proposal
This explanatory memorandum accompanies the proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI). In a 2020 representative survey 1 , liability ranked among the top three barriers to the use of AI by European companies. It was cited as the most relevant external obstacle (43%) by companies that plan to adopt AI but have not yet done so.
In her Political Guidelines, Commission President Ursula von der Leyen laid out a coordinated European approach on AI 2 . In its White Paper on AI of 19 February 2020 3 , the Commission undertook to promote the uptake of AI and to address the risks associated with some of its uses by fostering excellence and trust. In the Report on AI Liability 4 accompanying the White Paper, the Commission identified the specific challenges posed by AI to existing liability rules. In its conclusions on shaping Europe’s digital future of 9 June 2020, the Council welcomed the consultation on the policy proposals in the White Paper on AI and called on the Commission to put forward concrete proposals. On 20 October 2020, the European Parliament adopted a legislative own-initiative resolution under Article 225 TFEU requesting the Commission to adopt a proposal for a civil liability regime for AI based on Article 114 of the Treaty on the Functioning of the EU (TFEU). 5
Current national liability rules, in particular those based on fault, are not suited to handling liability claims for damage caused by AI-enabled products and services. Under such rules, victims need to prove a wrongful action or omission by a person who caused the damage. The specific characteristics of AI, including complexity, autonomy and opacity (the so-called “black box” effect), may make it difficult or prohibitively expensive for victims to identify the liable person and prove the requirements for a successful liability claim. In particular, when claiming compensation, victims could incur very high up-front costs and face significantly longer legal proceedings than in cases not involving AI. Victims may therefore be deterred from claiming compensation altogether. These concerns were also echoed by the European Parliament (EP) in its resolution of 3 May 2022 on artificial intelligence in a digital age. 6
If a victim brings a claim, national courts, faced with the specific characteristics of AI, may adapt the way in which they apply existing rules on an ad hoc basis to come to a just result for the victim. This will cause legal uncertainty. Businesses will find it difficult to predict how the existing liability rules will be applied, and thus to assess and insure their liability exposure. The effect will be magnified for businesses trading across borders, as the uncertainty will span different jurisdictions. It will particularly affect small and medium-sized enterprises (SMEs), which cannot rely on in-house legal expertise or capital reserves.
National AI strategies show that several Member States are considering, or even concretely planning, legislative action on civil liability for AI. Therefore, it is expected that, if the EU does not act, Member States will adapt their national liability rules to the challenges of AI. This will result in further fragmentation and increased costs for businesses active throughout the EU.
The open public consultation informing the Impact Assessment of this proposal confirmed the problems explained above. In the public’s view, the ‘black box’ effect can make it difficult for victims to prove fault and causality, and there may be uncertainty as to how the courts will interpret and apply existing national liability rules in cases involving AI. The consultation also showed public concern about how legislative action on adapting liability rules initiated by individual Member States, and the ensuing fragmentation, would affect costs for companies, especially SMEs, and prevent the uptake of AI Union-wide.
Thus, the objective of this proposal is to promote the rollout of trustworthy AI so that its full benefits for the internal market can be reaped. It does so by ensuring that victims of damage caused by AI obtain protection equivalent to that of victims of damage caused by products in general. It also reduces the legal uncertainty of businesses developing or using AI regarding their possible liability exposure, and prevents the emergence of fragmented AI-specific adaptations of national civil liability rules.
·Consistency with existing policy provisions in the policy area
This proposal is part of a package of measures to support the roll-out of AI in Europe by fostering excellence and trust. This package comprises three complementary work streams:
–a legislative proposal laying down horizontal rules on artificial intelligence systems (AI Act); 7
–a revision of sectoral and horizontal product safety rules;
–EU rules to address liability issues related to AI systems.
In the AI Act proposal, the Commission has proposed rules that seek to reduce risks for safety and protect fundamental rights. Safety and liability are two sides of the same coin: they apply at different moments and reinforce each other. While rules to ensure safety and protect fundamental rights will reduce risks, they do not eliminate those risks entirely. 8 Where such a risk materialises, damage may still occur. In such instances, the liability rules of this proposal will apply.
Effective liability rules also provide an economic incentive to comply with safety rules and therefore contribute to preventing the occurrence of damage. 9 In addition, this proposal contributes to the enforcement of the requirements for high-risk AI systems imposed by the AI Act, because failure to comply with those requirements constitutes an important element triggering the alleviations of the burden of proof. This proposal is also consistent with the proposed general 10 and sectoral product safety rules applicable to AI-enabled machinery products 11 and radio equipment. 12
The Commission takes a holistic approach to liability in its AI policy by proposing adaptations to the producer’s liability for defective products under the Product Liability Directive as well as the targeted harmonisation under this proposal. These two policy initiatives are closely linked and form a package, as claims falling within their scope deal with different types of liability. The Product Liability Directive covers the producer’s no-fault liability for defective products, leading to compensation for certain types of damage, mainly suffered by individuals. This proposal covers national liability claims mainly based on the fault of any person, with a view to compensating any type of damage and any type of victim. They complement one another to form an overall effective civil liability system.
Together these rules will promote trust in AI (and other digital technologies) by ensuring that victims are effectively compensated if damage occurs despite the preventive requirements of the AI Act and other safety rules.
·Consistency with other Union policies
The proposal is coherent with the Union’s overall digital strategy as it contributes to promoting technology that works for people, one of the three main pillars of the policy orientation and objectives announced in the Communication ‘Shaping Europe's digital future’ 13 .
In this context, this proposal aims to build trust in AI and to increase its uptake. It creates synergies with, and complements, the [Cyber Resilience Act] 14 , which also aims to increase trust in products with digital elements by reducing cyber vulnerabilities and to better protect business and consumer users.
This proposal does not affect the rules set by [the Digital Services Act (DSA)], which provide for a comprehensive and fully harmonised framework for due diligence obligations for algorithmic decision making by online platforms, including its exemption of liability for providers of intermediary services.
In addition, by promoting the roll-out of AI, this proposal is linked to the initiatives under the EU strategy for data 15 . It also strengthens the Union’s role to help shape global norms and standards and promote trustworthy AI that is consistent with Union values and interests.
The proposal also has indirect links with the European Green Deal 16 . In particular, digital technologies, including AI, are a critical enabler for attaining the sustainability goals of the Green Deal in many different sectors (including healthcare, transport, environment and farming).
·Main economic, social and environmental impacts
The Directive will contribute to the rollout of AI. The conditions for the roll-out and development of AI-technologies in the internal market can be significantly improved by preventing fragmentation and increasing legal certainty through harmonised measures at EU level, compared to possible adaptations of liability rules at national level. The economic study 17 underpinning the Impact Assessment of this proposal concluded – as a conservative estimate – that targeted harmonisation measures on civil liability for AI would have a positive impact of 5 to 7 % on the production value of relevant cross-border trade as compared to the baseline scenario. This added value would be generated notably through reduced fragmentation and increased legal certainty regarding stakeholders’ liability exposure. This would lower stakeholders’ legal information/representation, internal risk management and compliance costs, facilitate financial planning as well as risk estimates for insurance purposes, and enable companies – in particular SMEs – to explore new markets across borders. Based on the overall value of the EU AI market affected by the liability-related problems addressed by this Directive, it is estimated that the Directive will generate an additional market value between ca. EUR 500mln and ca. EUR 1.1bln.
In terms of social impacts, the Directive will increase societal trust in AI-technologies and access to an effective justice system. It will contribute to an efficient civil liability regime, adapted to the specificities of AI, where justified claims for compensation of damage are successful. Increasing societal trust would also benefit all companies in the AI-value chain, because strengthening citizens’ confidence will contribute to a faster uptake of AI. Due to the incentivising effect of liability rules, preventing liability gaps would also indirectly benefit all citizens through an increased level of protection of health and safety (Article 114(3) TFEU) and the obviation of sources of health risks (Article 168(1) TFEU).
As regards environmental impacts, the Directive is also expected to contribute to achieving the related Sustainable Development Goals (SDGs) and targets. The uptake of AI applications is beneficial for the environment. For instance, AI systems used in process optimisation make processes less wasteful (e.g. by reducing the amount of fertilizers and pesticides needed, decreasing the water consumption at equal output, etc.). The Directive would also impact positively on SDGs because effective legislation on transparency, accountability and fundamental rights will direct AI’s potential to benefit individuals and society towards achieving the SDGs.
2. LEGAL BASIS, SUBSIDIARITY AND PROPORTIONALITY
· Legal basis
The legal basis for the proposal is Article 114 TFEU, which provides for the adoption of measures to ensure the establishment and functioning of the internal market.
The problems this proposal aims to address, in particular legal uncertainty and legal fragmentation, hinder the development of the internal market and thus amount to significant obstacles to cross-border trade in AI-enabled products and services.
The proposal addresses obstacles stemming from the fact that businesses that want to produce, disseminate and operate AI-enabled products and services across borders are uncertain whether and how existing liability regimes apply to damage caused by AI. This uncertainty particularly concerns the Member States to which businesses will export their products and services, or in which they will operate them. In a cross-border context, the law applicable to non-contractual liability arising out of a tort or delict is by default the law of the country in which the damage occurs. For these businesses, it is essential to know the relevant liability risks and to be able to insure themselves against them.
In addition, there are concrete signs that a number of Member States are considering unilateral legislative measures to address the specific challenges posed by AI with respect to liability. For example, AI strategies adopted in Czechia 18 , Italy 19 , Malta 20 , Poland 21 and Portugal 22 mention initiatives to clarify liability. Given the large divergence between Member States’ existing civil liability rules, it is likely that any national AI-specific measure on liability would follow existing different national approaches and therefore increase fragmentation.
Therefore, adaptations of liability rules taken on a purely national basis would increase the barriers to the rollout of AI-enabled products and services across the internal market and contribute further to fragmentation.
·Subsidiarity
The objectives of this proposal cannot be adequately achieved at national level, because emerging divergent national rules would increase legal uncertainty and fragmentation, creating obstacles to the rollout of AI-enabled products and services across the internal market. Legal uncertainty would particularly affect companies active across borders, by imposing additional legal information/representation and risk management costs and causing foregone revenue. At the same time, differing national rules on compensation claims for damage caused by AI would increase transaction costs for businesses, especially for cross-border trade, entailing significant internal market barriers. Further, legal uncertainty and fragmentation disproportionately affect start-ups and SMEs, which account for most companies and the major share of investments in the relevant markets.
In the absence of EU harmonised rules for compensating damage caused by AI systems, providers, operators and users of AI systems on the one hand and injured persons on the other hand would be faced with 27 different liability regimes, leading to different levels of protection and distorted competition among businesses from different Member States.
Harmonised measures at EU level would significantly improve conditions for the rollout and development of AI-technologies in the internal market by preventing fragmentation and increasing legal certainty. This added value would be generated notably through reduced fragmentation and increased legal certainty regarding stakeholders’ liability exposure. Moreover, only EU action can consistently achieve the desired effect of promoting consumer trust in AI-enabled products and services by preventing liability gaps linked to the specific characteristics of AI across the internal market. This would ensure a consistent (minimum) level of protection for all victims (individuals and companies) and consistent incentives to prevent damage and ensure accountability.
·Proportionality
The proposal is based on a staged approach. In the first stage, the objectives are achieved with a minimally invasive approach; the second stage involves re-assessing the need for more stringent or extensive measures.
The first stage is limited to burden-of-proof measures to address the AI-specific problems identified. It builds on the substantive conditions of liability currently existing in national rules, such as causality or fault, but focuses on targeted proof-related measures, ensuring that victims have the same level of protection as in cases not involving AI systems. Moreover, of the various tools available in national law for easing the burden of proof 23 , this proposal uses rebuttable presumptions as the least interventionist option. Such presumptions are commonly found in national liability systems, and they balance the interests of claimants and defendants. At the same time, they are designed to incentivise compliance with existing duties of care set at Union or national level. The proposal does not lead to a reversal of the burden of proof, to avoid exposing providers, operators and users of AI systems to higher liability risks, which could hamper innovation and reduce the uptake of AI-enabled products and services. The distinction between the two approaches is sketched below.
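Purely as an illustration, the following minimal sketch (the editor’s own hypothetical model, not text from the proposal) contrasts how a disputed fact is treated under a rebuttable presumption as opposed to a full reversal of the burden of proof:

```python
# Minimal illustrative sketch (hypothetical, not from the Directive): how a
# disputed fact is treated under a rebuttable presumption versus a full
# reversal of the burden of proof.

def fact_established_with_presumption(claimant_proved_fact: bool,
                                      presumption_conditions_met: bool,
                                      defendant_rebutted: bool) -> bool:
    # Rebuttable presumption: the claimant either proves the fact directly,
    # or the presumption's conditions are met and the defendant fails to
    # rebut it. The claimant still carries an initial burden.
    return claimant_proved_fact or (presumption_conditions_met
                                    and not defendant_rebutted)

def fact_established_with_reversal(defendant_disproved_fact: bool) -> bool:
    # Full reversal (deliberately not chosen by this proposal): the fact is
    # taken as established unless the defendant disproves it, shifting the
    # entire burden onto the defendant.
    return not defendant_disproved_fact
```

As the sketch suggests, the presumption leaves the claimant with a real, albeit alleviated, burden of proof, whereas a reversal would shift it entirely onto the defendant.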
The second stage included in the proposal ensures that, when the effect of the first stage is assessed in terms of victim protection and uptake of AI, future technological, regulatory and jurisprudential developments will be taken into account in re-assessing the need to harmonise other elements of claims for compensation or other tools related to liability claims, including situations where strict liability would be more appropriate, as requested by the European Parliament. Such an assessment would also likely consider whether that harmonisation would need to be coupled with mandatory insurance to ensure effectiveness.
·Choice of instrument
A directive is the most suitable instrument for this proposal, as it provides the desired harmonisation effect and legal certainty, while also providing the flexibility to enable Member States to embed the harmonised measures without friction into their national liability regimes.
A mandatory instrument would prevent protection gaps stemming from partial or no implementation. While a non-binding instrument would be less intrusive, it is unlikely to address the identified problems in an effective manner. The implementation rate of non-binding instruments is difficult to predict and there is insufficient indication that the persuasive effect of a recommendation would be strong enough to produce consistent adaptation of national laws.
This effect is even more unlikely for measures concerning private law, of which non-contractual liability rules form part. This area is characterised by long-standing legal traditions, which make Member States reluctant to pursue coordinated reform unless driven by the clear prospect of internal market benefits under a binding EU instrument or by the need to adapt to new technologies in the digital economy.
The existing significant divergences between Member States’ liability frameworks are another reason why a recommendation is unlikely to be implemented in a consistent manner.
3. RESULTS OF EX-POST EVALUATIONS, STAKEHOLDER CONSULTATIONS AND IMPACT ASSESSMENTS
·Stakeholder consultations
An extensive consultation strategy was implemented to ensure wide stakeholder participation throughout the policy cycle of this proposal. The consultation strategy comprised both a public consultation and several targeted consultations (webinars, bilateral discussions with companies and various organisations).
After the initial questions on liability which were part of the public consultation on the White Paper on AI and the Commission report on safety and liability, a dedicated online public consultation was open from 18 October 2021 to 10 January 2022 to gather views from a wide variety of stakeholders, including consumers, civil society organisations, industry associations, businesses, including SMEs, and public authorities. After analysing all the responses received, the Commission published a summary outcome and the individual responses on its website 24 .
In total, 233 responses were received from respondents from 21 Member States, as well as from third countries. Overall, the majority of stakeholders confirmed the problems with burden of proof, legal uncertainty and fragmentation and supported action at EU level.
EU citizens, consumer organisations and academic institutions overwhelmingly confirmed the need for EU action to ease victims’ problems with the burden of proof. Businesses, while recognising the negative effects of the uncertainty around the application of liability rules, were more cautious and asked for targeted measures to avoid limiting innovation.
A similar picture appeared regarding the policy options. EU citizens, consumer organisations and academic institutions strongly supported measures on the burden of proof and harmonising no-fault liability (referred to as ‘strict liability’) coupled with mandatory insurance. Businesses were more divided on the policy options, with differences depending in part on their size. Strict liability was considered disproportionate by the majority of business respondents. Harmonising the easing of the burden of proof gained more support, particularly among SMEs. However, businesses cautioned against a complete shift of the burden of proof.
Therefore, the preferred policy option was developed and refined in light of feedback received from stakeholders throughout the impact assessment process to strike a balance between the needs expressed and concerns raised by all relevant stakeholder groups.
·Collection and use of expertise
The proposal builds on 4 years of analysis and close involvement of stakeholders, including academics, businesses, consumer associations, Member States and citizens. The preparatory work started in 2018 with the setting up of the Expert Group on Liability and New Technologies (New Technologies Formation). The Expert Group produced a Report in November 2019 25 that assessed the challenges some characteristics of AI pose to national civil liability rules. The input from the Expert Group report was complemented by three additional external studies:
–a comparative law study based on a comparative legal analysis of European tort laws focused on key AI-related issues 26 ;
–a behavioural economics study on the impacts of targeted adaptations of the liability regime on consumers’ decision making, in particular their trust and willingness to take up AI-enabled products and services 27 ;
–an economic study 28 covering a number of issues: the challenges faced by victims of AI applications compared to victims of non-AI devices when trying to obtain compensation for their loss; whether and to what extent businesses are uncertain about the application of current liability rules to their operations involving AI, and whether the impact of legal uncertainty can hamper investment in AI; whether further fragmentation of national liability laws would reduce the effectiveness of the internal market for AI applications and services, and whether and to what extent harmonising certain aspects of national civil liability via EU legislation would reduce these problems and facilitate the overall uptake of AI technology by EU companies.
·Impact assessment
In line with its “Better Regulation” policy, the Commission conducted an impact assessment for this proposal, which was examined by the Commission’s Regulatory Scrutiny Board. The meeting of the Regulatory Scrutiny Board on 6 April 2022 led to a positive opinion with comments. Three policy options were assessed:
Policy option 1: three measures to ease the burden of proof for victims trying to prove their liability claim.
Policy option 2: the measures under option 1 + harmonising strict liability rules for AI use cases with a particular risk profile, coupled with mandatory insurance.
Policy option 3: a staged approach consisting of:
–a first stage: the measures under option 1;
–a second stage: a review mechanism to re-assess, in particular, the need for harmonising strict liability for AI use cases with a particular risk profile (possibly coupled with mandatory insurance).
The policy options were compared by way of a multi-criteria analysis taking into account their effectiveness, efficiency, coherence and proportionality. The results of the multi-criteria and sensitivity analysis show that policy option 3, easing the burden of proof for AI-related claims + targeted review regarding strict liability, possibly coupled with mandatory insurance, ranks highest and is therefore the preferred policy choice for this proposal.
The preferred policy option would ensure that victims of AI-enabled products and services (natural persons, businesses and any other public or private entities) are no less protected than victims of traditional technologies. It would increase the level of trust in AI and promote its uptake.
Furthermore, it would reduce legal uncertainty and prevent fragmentation, thus helping companies, and most of all SMEs, that want to realise the full potential of the EU single market by rolling out AI-enabled products and services cross-border. The preferred policy option also creates better conditions for insurers to offer coverage of AI-related activities, which is crucial for businesses, especially SMEs, to manage their risks. Specifically, it is estimated that the preferred policy option would generate an increased AI market value in the EU-27 of between ca. EUR 500mln and ca. EUR 1.1bln in 2025.
·Fundamental rights
One of the most important functions of civil liability rules is to ensure that victims of damage can claim compensation. By guaranteeing effective compensation, these rules contribute to the protection of the right to an effective remedy and a fair trial (Article 47 of the EU Charter of Fundamental Rights, referred to below as the Charter) while also giving potentially liable persons an incentive to prevent damage, in order to avoid liability.
With this proposal, the Commission aims to ensure that victims of damage caused by AI enjoy a level of protection under civil liability rules equivalent to that of victims of damage caused without the involvement of AI. The proposal will enable effective private enforcement of fundamental rights and preserve the right to an effective remedy where AI-specific risks have materialised. In particular, the proposal will help protect fundamental rights such as the right to life (Article 2 of the Charter), the right to physical and mental integrity (Article 3) and the right to property (Article 17). In addition, depending on each Member State’s civil law system and traditions, victims will be able to claim compensation for damage to other legal interests, such as violations of personal dignity (Articles 1 and 4 of the Charter), respect for private and family life (Article 7), the right to equality (Article 20) and non-discrimination (Article 21).
In addition, this proposal complements other strands in the Commission’s AI policy based on preventive regulatory and supervisory requirements aimed directly at avoiding fundamental rights breaches (such as discrimination). These are the AI Act, the General Data Protection Regulation, the Digital Services Act and EU law on non-discrimination and equal treatment. At the same time, this proposal does not create or harmonise the duties of care or the liability of various entities whose activity is regulated under those legal acts and, therefore, does not create new liability claims or affect the exemptions from liability under those other legal acts. This proposal only introduces alleviations of the burden of proof for the victims of damage caused by AI systems in claims that can be based on national law or on these other EU laws. By complementing these other strands, this proposal protects the victim's right to compensation under private law, including compensation for fundamental rights breaches.
4. BUDGETARY IMPLICATIONS
This proposal will not have implications for the budget of the European Union.
5. OTHER ELEMENTS
·Implementation plans and monitoring, evaluation, monitoring programme and targeted review
This proposal puts forward a staged approach. To ensure that sufficient evidence is available for the targeted review in the second stage, the Commission will draw up a monitoring plan detailing how and how often data and other necessary evidence will be collected. The monitoring mechanism could cover the following types of data and evidence:
–reporting and information sharing by Member States regarding the application of measures easing the burden of proof in national judicial or out-of-court settlement procedures;
–information collected by the Commission or market surveillance authorities under the AI Act (in particular Article 62) or other relevant instruments;
–information and analyses supporting the evaluation of the AI Act and the reports to be prepared by the Commission on implementation of that Act;
–information and analyses supporting the assessment of relevant future policy measures under the ‘old approach’ safety legislation to ensure that products placed on the Union market meet high health, safety and environmental requirements;
–information and analyses supporting the Commission’s report on the application of the Motor Insurance Directive to technological developments (in particular autonomous and semi-autonomous vehicles) pursuant to its Article 28c(2)(a).
·Detailed explanation of the specific provisions in the proposal
1. Subject matter and scope (Article 1)
The purpose of this Directive is to improve the functioning of the internal market by laying down uniform requirements for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems. It follows up on the European Parliament’s Resolution 2020/2014(INL) and adapts private law to the needs of the transition to the digital economy.
The choice of suitable legal tools is limited, given the nature of the burden-of-proof issue and the specific characteristics of AI that pose a problem for existing liability rules. In this respect, this Directive eases the burden of proof in a very targeted and proportionate manner through the use of disclosure and rebuttable presumptions. It gives those seeking compensation for damage the possibility to obtain information on high-risk AI systems that is to be recorded or documented pursuant to the AI Act. In addition, the rebuttable presumptions will give those seeking compensation for damage caused by AI systems a more reasonable burden of proof and a chance to succeed with justified liability claims.
Such tools are not new; they can be found in national legislative systems. Hence, these national tools constitute helpful reference points on how to address the issues raised by AI for existing liability rules in a way which interferes as little as possible with the different national legal regimes.
In addition, when asked about more far-reaching changes such as a reversal of the burden of proof or an irrebuttable presumption, businesses gave negative feedback in the consultations. Targeted measures easing the burden of proof in the form of rebuttable presumptions were therefore chosen as a pragmatic and appropriate way to help victims meet their burden of proof in the most targeted and proportionate manner possible.
Article 1 indicates the subject matter and scope of this Directive: it applies to non-contractual civil law claims for damage caused by an AI system, where such claims are brought under fault-based liability regimes, that is, regimes that provide for a statutory responsibility to compensate for damage caused intentionally or by a negligent act or omission. The measures provided for in this Directive can fit without friction into existing civil liability systems, since they reflect an approach that does not touch on the definition of fundamental concepts like ‘fault’ or ‘damage’, given that the meaning of those concepts varies considerably across the Member States. Thus, beyond the presumptions it establishes, this Directive does not affect Union or national rules determining, for instance, which party has the burden of proof, what degree of certainty is required as regards the standard of proof, or how fault is defined.
In addition, this Directive does not affect existing rules regulating the conditions of liability in the transport sector and those set by the Digital Services Act.
While this Directive does not apply with respect to criminal liability, it may be applicable with respect to state liability. State authorities are also covered by the provisions of the AI Act as subjects of the obligations prescribed therein.
This Directive does not apply retroactively, but only to claims for compensation for damage that occurs as from the date of its transposition.
The proposal for this Directive has been adopted together with the proposal for a revision of the Product Liability Directive 85/374/EEC, in a package aiming to adapt liability rules to the digital age and AI, ensuring the necessary alignment between these two complementary legal instruments.
2. Definitions (Article 2)
The definitions in Article 2 follow those of the AI Act to ensure consistency.
Article 2(6)(b) provides that claims for damages can be brought not only by the injured person but also by persons that have succeeded to, or have been subrogated into, the injured person’s rights. Subrogation is the assumption by a third party (such as an insurance company) of another party’s legal right to collect a debt or damages; one person is thus entitled to enforce the rights of another for their own benefit. Subrogation would also cover the heirs of a deceased victim.
In addition, Article 2(6)(c) provides that an action for damages can also be brought by someone acting on behalf of one or more injured parties, in accordance with Union or national law. This provision aims to give more possibilities to persons injured by an AI system to have their claims assessed by a court, even in cases where individual actions may seem too costly or too cumbersome to bring, or where joint actions may entail a benefit of scale. To enable victims of damage caused by AI systems to enforce their rights in relation to this Directive through representative actions, Article 6 amends Annex I to Directive (EU) 2020/1828.
3. Disclosure of evidence (Article 3)
This Directive aims to provide persons seeking compensation for damage caused by high-risk AI systems with effective means to identify potentially liable persons and relevant evidence for a claim. At the same time, such means serve to exclude falsely identified potential defendants, saving time and costs for the parties involved and reducing the case load for courts.
In this respect, Article 3(1) of the Directive provides that a court may order the disclosure of relevant evidence about specific high-risk AI systems that are suspected of having caused damage. Requests for evidence are addressed to the provider of an AI system, a person subject to the provider’s obligations laid down by Article 24 or Article 28(1) of the AI Act, or a user pursuant to the AI Act. The requests should be supported by facts and evidence sufficient to establish the plausibility of the contemplated claim for damages, and the requested evidence should be at the addressees’ disposal. Requests cannot be addressed to parties that bear no obligations under the AI Act and therefore have no access to the evidence.
According to Article 3(2), the claimant can request the disclosure of evidence from providers or users that are not defendants only if all proportionate attempts to gather the evidence from the defendant have been unsuccessful.
In order for the judicial means to be effective, Article 3(3) of the Directive provides that a court may also order the preservation of such evidence.
As provided in Article 3(4), first subparagraph, the court may order such disclosure only to the extent necessary to sustain the claim, given that the information could be critical evidence for the injured person’s claim in cases of damage involving AI systems.
By limiting the obligation to disclose or preserve to necessary and proportionate evidence, Article 3(4), first subparagraph, aims to ensure proportionality in disclosing evidence, i.e. to limit the disclosure to the necessary minimum and prevent blanket requests.
The second and third subparagraphs of Article 3(4) further aim to strike a balance between the claimant’s rights and the need to ensure that such disclosure is subject to safeguards protecting the legitimate interests of all parties concerned, such as trade secrets or confidential information.
In the same context, the fourth subparagraph of Article 3(4) aims to ensure that procedural remedies against the order of disclosure or preservation are at the disposal of the person subject to it.
Article 3(5) introduces a presumption of non-compliance with a duty of care. This is a procedural tool, relevant only in cases where it is the actual defendant in a claim for damages who bears the consequences of not complying with a request to disclose or preserve evidence. The defendant will have the right to rebut that presumption. The measure set out in this paragraph aims to promote disclosure but also to expedite court proceedings.
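To summarise the structure of Article 3 as described above, the following sketch is a deliberately simplified, hypothetical model of the disclosure logic. It reflects the editor’s reading only; the names and boolean conditions are illustrative assumptions, not the legal test itself:

```python
# Hypothetical, simplified model of the Article 3 disclosure logic described
# above. The field names and boolean flags are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DisclosureRequest:
    addressee_bears_ai_act_obligations: bool  # provider, person under Art. 24/28(1) AI Act, or user
    addressee_is_defendant: bool
    plausibility_of_claim_shown: bool         # facts and evidence supporting the contemplated claim
    evidence_at_addressees_disposal: bool
    attempts_against_defendant_failed: bool   # relevant only for non-defendants (Art. 3(2))

def court_may_order_disclosure(req: DisclosureRequest) -> bool:
    """Rough reading of Article 3(1)-(2), subject to the necessity and
    proportionality limits of Article 3(4)."""
    if not req.addressee_bears_ai_act_obligations:
        return False  # no AI Act obligations, hence no access to the evidence
    if not (req.plausibility_of_claim_shown and req.evidence_at_addressees_disposal):
        return False
    if not req.addressee_is_defendant and not req.attempts_against_defendant_failed:
        return False  # Art. 3(2): first exhaust proportionate attempts against the defendant
    return True

def duty_of_care_breach_presumed(defendant_complied_with_order: bool) -> bool:
    """Article 3(5): a defendant's non-compliance with a disclosure or
    preservation order triggers a rebuttable presumption of non-compliance
    with a relevant duty of care."""
    return not defendant_complied_with_order
```

This is only a reading aid; the actual tests under Article 3 remain for national courts to apply in the circumstances of each case.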
4. Presumption of causal link in the case of fault (Article 4)
With respect to damage caused by AI systems, this Directive aims to provide an effective basis for claiming compensation where the fault consists in non-compliance with a duty of care under Union or national law.
It can be challenging for claimants to establish a causal link between such non-compliance and the output produced by the AI system, or the failure of the AI system to produce an output, that gave rise to the relevant damage. Therefore, a targeted rebuttable presumption of causality regarding this causal link has been laid down in Article 4(1). Such a presumption is the least burdensome measure to address the need for fair compensation of the victim.
The fault of the defendant has to be proven by the claimant according to the applicable Union or national rules. Such fault can be established, for example, for non-compliance with a duty of care pursuant to the AI Act or pursuant to other rules set at Union level, such as those regulating the use of automated monitoring and decision-making for platform work or those regulating the operation of unmanned aircraft. Such fault can also be presumed by the court on the basis of non-compliance with a court order for disclosure or preservation of evidence under Article 3(5). Still, it is only appropriate to introduce a presumption of causality when it can be considered likely that the given fault has influenced the relevant AI system output or lack thereof, which can be assessed on the basis of the overall circumstances of the case. At the same time, the claimant still has to prove that the AI system (i.e. its output or failure to produce one) gave rise to the damage.
Paragraphs (2) and (3) differentiate between, on the one hand, claims brought against the provider of a high-risk AI system or against a person subject to the provider’s obligations under the AI Act and, on the other hand, claims brought against the user of such systems. In this respect, the Directive follows the respective provisions and relevant conditions of the AI Act. In the case of claims based on Article 4(2), the defendant’s compliance with the obligations listed in that paragraph has to be assessed also in the light of the risk management system and its results, i.e. the risk management measures, under the AI Act.
In the case of high-risk AI systems as defined by the AI Act, Article 4(4) establishes an exception from the presumption of causality where the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link. This possibility can incentivise defendants to comply with their disclosure obligations, with measures set by the AI Act to ensure a high level of transparency of the AI system, or with documentation and recording requirements.
In the case of AI systems that are not high-risk, Article 4(5) establishes a condition for the applicability of the presumption of causality: the presumption applies only if the court determines that it is excessively difficult for the claimant to prove the causal link. Such difficulties are to be assessed in light of the characteristics of certain AI systems, such as autonomy and opacity, which render the explanation of the inner functioning of the AI system very difficult in practice, negatively affecting the claimant’s ability to prove the causal link between the fault of the defendant and the AI output.
In cases where the defendant uses the AI system in the course of a personal non-professional activity, Article 4(6) provides that the presumption of causality should only apply if the defendant has materially interfered with the conditions of the operation of the AI system or if the defendant was required and able to determine the conditions of operation of the AI system and failed to do so. This condition is justified by the need to balance the interests of injured persons and non-professional users, by exempting from the application of the presumption of causality the cases in which non-professional users do not add risk through their behaviour.
Finally, Article 4(7) provides that the defendant has the right to rebut the causality presumption based on Article 4(1).
Such effective civil liability rules have the additional advantage that they give all those involved in activities related to AI systems an additional incentive to respect their obligations regarding their expected conduct.
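The layered conditions of Article 4 described above can be summarised in the following sketch, a deliberately simplified, hypothetical decision model based on the editor’s reading of those paragraphs. The field names are illustrative assumptions, not statutory language:

```python
# Hypothetical decision model of the Article 4 presumption of causality,
# under the editor's reading of the text above. Illustrative only.
from dataclasses import dataclass

@dataclass
class Claim:
    fault_established: bool                # Art. 4(1): duty-of-care breach proven (or presumed via Art. 3(5))
    fault_likely_influenced_output: bool   # Art. 4(1): likely the fault influenced the AI output, or its absence
    output_caused_damage: bool             # Art. 4(1): the output (or failure to produce one) gave rise to the damage
    high_risk_system: bool
    evidence_reasonably_accessible: bool   # Art. 4(4): exception for high-risk systems
    causal_link_excessively_difficult: bool  # Art. 4(5): condition for non-high-risk systems
    personal_non_professional_use: bool
    user_interfered_or_failed_duty: bool   # Art. 4(6): conditions for non-professional users
    presumption_rebutted_by_defendant: bool  # Art. 4(7)

def causality_presumed(c: Claim) -> bool:
    # Cumulative conditions of Art. 4(1)
    if not (c.fault_established and c.fault_likely_influenced_output
            and c.output_caused_damage):
        return False
    # Art. 4(4): no presumption for high-risk systems if the claimant can
    # reasonably access sufficient evidence and expertise
    if c.high_risk_system and c.evidence_reasonably_accessible:
        return False
    # Art. 4(5): for non-high-risk systems, presumption only if proving the
    # causal link is excessively difficult for the claimant
    if not c.high_risk_system and not c.causal_link_excessively_difficult:
        return False
    # Art. 4(6): personal, non-professional users are covered only if they
    # materially interfered with, or failed a duty to determine, the
    # conditions of operation of the AI system
    if c.personal_non_professional_use and not c.user_interfered_or_failed_duty:
        return False
    # Art. 4(7): the presumption remains rebuttable
    return not c.presumption_rebutted_by_defendant
```

Again, this is merely a structural reading aid; the actual assessment (for example, of what is “excessively difficult”) remains a judicial one based on the overall circumstances of the case.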
5. Evaluation and targeted review (Article 5)
Various national legal systems provide for different strict liability regimes. Elements for such a regime at Union level were also suggested by the European Parliament in its own-initiative resolution of 20 October 2020, consisting of a limited strict liability regime for certain AI-enabled technologies and a facilitated burden of proof under fault-based liability rules. The public consultations also highlighted a preference for such a regime among respondents (except for businesses other than SMEs), whether or not coupled with mandatory insurance.
However, the proposal takes into account the differences between national legal traditions and the fact that the kinds of products and services equipped with AI systems that could affect the public at large and put at risk important legal rights, such as the rights to life, health and property, and that could therefore be subject to a strict liability regime, are not yet widely available on the market.
A monitoring programme is put in place to provide the Commission with information on incidents involving AI systems. The targeted review will assess whether additional measures would be needed, such as introducing a strict liability regime and/or mandatory insurance.
6. Transposition (Article 7)
When notifying the Commission of national transposition measures to comply with this Directive, Member States should also provide explanatory documents which give sufficiently clear and precise information and state, for each provision of this Directive, the national provision(s) ensuring its transposition. This is necessary to enable the Commission to identify, for each provision of the Directive requiring transposition, the relevant part of national transposition measures creating the corresponding legal obligation in the national legal order, whatever the form chosen by the Member States.