The extraordinary development of artificial intelligence (AI) systems raises a number of societal and legal issues. Liability for damage caused by AI is a key one.
In order to provide the best possible response to this issue, the European Commission has presented two draft directive proposals: the AI Liability Directive and a revised Product Liability Directive.
What are the main aspects of the AI Liability Directive?
Firstly, the directive gives any victim of damage caused by a high-risk AI system the right to access evidence from the relevant company (AI provider or deployer, depending on the case).
This measure is intended to avoid the risk that some victims may be discouraged from bringing a liability action by the difficulty of proving the fault of the AI provider or deployer, given the complexity of certain AI systems.
Secondly, the victim’s burden of proof is eased by the introduction of a presumption of causality, which applies where a fault has been established and the causal link between that fault and the damage appears sufficiently plausible.
What are the major changes in the Product Liability Directive?
In order to adapt the existing legal framework to the particularities of AI, the European Commission has proposed the following changes in particular:
- software, AI systems and products in which AI is incorporated are now expressly referred to as “products”;
- victims have a right to access evidence similar to that provided for in the AI Liability Directive;
- refusal to provide the evidence requested gives rise to a presumption that the product is defective;
- a double presumption (defectiveness and causal link) may also be applied if the victim has provided sufficiently relevant evidence;
- manufacturers may be held liable for modifications they make to products they have already placed on the market (e.g. via software updates or machine learning);
- software providers and providers of digital services that affect how a product works can also be held liable.
Our advice:
These directive proposals are the first legal instruments aimed at harmonizing, to some extent, national liability rules for harm caused by AI.
These new rules need to be taken into account in AI-based projects, so that the constraints and risks they create can be anticipated in contracts and project management methods.
For further information about this future legal framework, sign up for our free earlegal webinar – IA en plein essor : réinventer le droit pour un cadre adapté ?
The lawyers of Lexing’s Creactivity department are available to support you in any AI-related project.