The new European regulatory framework for artificial intelligence is on the way. Do not wait for your competitors to be the early birds: anticipate the new legal constraints and get a head start on your AI projects!
Artificial intelligence has demonstrated its potential in many areas of human activity, through efficiency gains, innovative solutions, improvements in the quality of existing products or services (notably through a refined analysis of customer needs and expectations), and better management, notably of projects. Artificial intelligence can therefore be a lever for growth. It can also open up new opportunities by freeing teams from recurring tasks.
While the benefits of artificial intelligence no longer need to be demonstrated, the power of this new technology is also a cause for concern and raises many questions: transparency, traceability, control over data-driven learning and evolution, reliability, bias, security and confidentiality.
These legitimate questions have been amplified by the success of ChatGPT, but also by the media coverage of certain cases of misuse of AI tools.
To address these concerns, several important legislative initiatives have been launched, with Europe leading the way. The aim of the European legislator is to regulate the use of artificial intelligence, so that AI-based solutions and tools are developed and used in accordance with the values of the European Union, and so that clear principles of responsibility ensure user confidence.
The pillars of this AI regulation are:
- The European AI Regulation (AI Act) (1);
- The AI Liability Directive (2);
- The Revised Product Liability Directive (3).
The legal framework governing artificial intelligence includes numerous obligations and rules that need to be taken into account right from the initial phase of an AI project, in order to ensure compliant AI system design and avoid subsequent modifications.
Identifying points subject to compliance with the European AI Act is therefore a crucial step in AI projects. Similarly, identifying contractual risks ensures the legal security of an AI project and the subsequent exploitation of its results. The combination of these two elements makes it possible to list the applicable constraints and the elements to be reflected in the contracts governing the implementation of an AI solution.
Anticipating these legal and contractual constraints helps to avoid subsequent modifications and delays, and ensures compliance right from the design stage. This is a significant competitive advantage in a field where deadlines are of crucial importance.
Thanks to its recognized expertise in innovation and technology law, Lexing has developed tools to support companies in their artificial intelligence projects:
- training aids for AI project teams;
- tools to help identify applicable rules;
- a range of solutions facilitating the integration of regulatory constraints into projects and the accompanying legal documents (practical information sheets, model clauses, etc.).
If you are looking for a reliable legal partner to support your AI projects, do not hesitate to contact the Lexing team!
The AI Act aims to regulate the marketing and use of AI systems in Europe.
It has a dual objective: firstly, to impose specific obligations on the various operators in the AI chain (suppliers, deployers, importers or distributors) in order to guarantee trust and safety for end users; secondly, to promote AI systems that are in line with the values of the European Union, and in particular to ensure respect for fundamental rights.
Overview of the Regulation’s structure
The AI Act is based on a risk management approach to the use of AI. Depending on the degree of risk, AI systems are classified into different categories to which different obligations apply. The higher the risk, the more stringent the obligations.
Certain uses of AI are also totally prohibited, such as systems that use subliminal techniques to mislead, or systems that exploit people’s vulnerabilities.
High-risk AI systems, on the other hand, are subject to the strictest rules. For the supplier of an AI system of this type, these mainly concern obligations relating to:
- the implementation of a risk management system describing a comprehensive policy (risk identification, assessment, treatment of risks, management of any residual risks), as well as a quality control system;
- the development of a data governance policy to ensure the quality of data used in the development and training of AI models;
- setting up logging systems to track the use of AI throughout its lifecycle (see the sketch after this list);
- the design of AI tools ensuring a level of reliability and cybersecurity appropriate to their intended use;
- the provision of full documentation of the AI system and of its compliance with the requirements of the AI Act, together with transparency obligations towards users (so that they are fully aware of the capabilities and performance of the AI tool they are using and know how to ensure human oversight of its use).
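By way of illustration only, the record-keeping obligation mentioned in the list above could translate, in practice, into a thin logging layer around each call to the AI system. The AI Act does not prescribe any particular technical implementation; the `AuditedModel` wrapper, the field names and the JSON-lines log format below are hypothetical choices made for this simplified Python sketch.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Hypothetical sketch: record each use of an AI system (timestamp, model
# version, input, output) in an append-only log file, as one possible way
# to support record-keeping / traceability obligations.
logging.basicConfig(filename="ai_usage_log.jsonl", level=logging.INFO, format="%(message)s")


class AuditedModel:
    def __init__(self, model, model_name: str, model_version: str):
        self._model = model              # any object exposing a predict() method
        self._model_name = model_name
        self._model_version = model_version

    def predict(self, features: dict) -> dict:
        output = self._model.predict(features)
        # One log entry per use of the system, to be retained for the period
        # required by the applicable rules.
        logging.info(json.dumps({
            "record_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_name": self._model_name,
            "model_version": self._model_version,
            "input": features,
            "output": output,
        }, default=str))
        return output
```

For instance, a credit-scoring model could be wrapped as `AuditedModel(my_classifier, "credit-scoring", "1.4.2")` so that every prediction leaves an auditable trace; the names used here are, again, purely illustrative.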
In addition to the distinction based on the type of AI system, the Regulation also introduces a distinction based on the role of the actor in the AI system’s development/operation chain. The more closely the role is linked to the design and configuration of the tool, the more onerous the obligations will be. The supplier of the AI system is logically the most exposed to the specific obligations imposed by the Regulation.
The importance of AI system qualification
The definition of rules adapted to the level of risk, and therefore to each type of AI system, makes it crucial to qualify the AI system within this risk scale.
In order to be able to anticipate the implementation of the AI Act, it is therefore necessary to carefully assess the nature of each AI tool to determine the rules with which it will have to comply.
For example, an AI system intended for use as a safety component in a product, or which is itself a product subject to third-party conformity assessment under EU harmonization legislation (e.g. medical devices), is considered “high risk”.
Annex III of the AI Act lists a series of systems considered to be high-risk.
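Purely as a way of visualising this qualification exercise (and not as a substitute for legal analysis), the assessment can be thought of as mapping an AI system's intended purpose onto the Regulation's risk tiers. The Python sketch below uses a simplified, non-exhaustive selection of Annex III areas, and the `qualify` function and its labels are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH = "high risk"
    MINIMAL = "minimal risk"

# Simplified, non-exhaustive selection of Annex III areas, used here only to
# illustrate the qualification logic; the real assessment requires a full
# legal analysis of the system and its intended purpose.
HIGH_RISK_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "access to essential services",
    "law enforcement",
    "migration and border control",
    "administration of justice",
}

PROHIBITED_PRACTICES = {
    "subliminal manipulation",
    "exploitation of vulnerabilities",
}

def qualify(intended_use: str, is_safety_component: bool = False) -> RiskTier:
    """Return an indicative risk tier for a given intended use (illustrative only)."""
    if intended_use in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if is_safety_component or intended_use in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

# Example: an AI tool used to screen job applicants
print(qualify("employment and worker management"))  # RiskTier.HIGH
```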
Role identification
The AI Act uses a second classification axis, based on the role of actors within the AI chain. This classification is complementary to that based on the nature of the AI system. It is therefore essential to correctly assess the legal qualification that must be associated with each participant in a project linked to an AI system.
For example, the supplier is the party that develops an AI system (or has it developed by a third party) and places it on the market under its own name. Any project that relies in whole or in part on the use of AI systems therefore needs to verify the collaboration arrangements between the participants, to ensure that the roles covered by the AI Act are properly identified. This qualification of roles makes it possible to identify the applicable obligations, so that they can be reflected in the contracts governing the AI project.
In parallel with the proposed Regulation on AI, a new Directive on AI liability has been proposed by the European Commission. The aim is to boost the confidence of users of AI systems, by establishing harmonized rules on compensation for damage caused by AI systems, and by making it easier for victims to seek civil redress.
The difficulty of establishing fault and a causal link when dealing with technically complex systems is an obstacle to the widespread adoption of AI systems.
The proposed Directive is therefore built around two main axes. First, it simplifies recourse by lightening the burden of proof through a presumption of causality. Second, it guarantees access to the technical elements needed to establish fault, by imposing specific disclosure obligations on suppliers of AI systems.
Presumption of causality
Insofar as it is often difficult in practice to prove the link between a fault and the use of a technical tool such as an AI system, the proposed Directive introduces a presumption of causality once the fault has been proven and the link between this fault and the damage seems reasonably plausible. As a result, the burden of proof for the victim is simplified, as he or she no longer has to establish in detail how the damage was caused by a fault on the part of the AI provider or deployer.
Access to evidence by the victim
Proving fault requires the analysis of technical data, which is not always easily accessible. The Directive therefore grants victims of damage resulting from high-risk AI systems the right to obtain the necessary information from the company concerned (AI supplier or user as the case may be).
This provision is designed to encourage victims to seek redress, without being blocked by the opacity or complexity of AI systems.
The revised Product Liability Directive facilitates recourse for victims who have suffered damage caused by a defective product.
The large-scale use of AI systems opens up the prospect of numerous cases of tool malfunction in which identifying the party at fault could prove very complex. The aim of revising this Directive is therefore to ensure that no-fault liability can also be relied on for AI systems.
In terms of scope, the notion of defective product has been extended to include software and products in which AI systems are implemented.
From an evidential point of view, it should be noted that refusal to provide the evidence (particularly technical evidence) requested by the victim gives rise to an automatic presumption that the product in question is defective. Furthermore, if the victim succeeds in providing relevant evidence that it is plausible that the damage is linked to a defect in the AI system, a double presumption may be applied, making it possible to presume both the defectiveness of the product and the causal link between this defectiveness and the damage suffered. Finally, the judge may also reverse the burden of proof if the victim encounters excessive difficulties due to the technical or scientific complexity of the case.
It is also important to note that manufacturers can be held liable for changes they make to products already on the market, whether through software updates or machine learning. Similarly, suppliers of software or digital services that influence the operation of a product may also be liable for damage caused by the product's defect.
It should also be noted that the assessment of “defect” will take into account new circumstances, such as the potential impact of a product’s learning capability on its safety after deployment.
Our latest news on this topic
- 06/01/25: L'impact de l'IA dans le secteur de l'architecture (The impact of AI in the architecture sector), by Ismaël Oulouan and Alexandre Cassart
- 02/12/24: Comment contractualiser la gestion des risques liés à l'IA ? (How to contractualize the management of AI-related risks?), by Anne-Valentine Rensonnet and Alexandre Cruquenaire
- 05/11/24: AI-generated contracts: advantages and disadvantages, by Marion Dardenne and Alexandre Cruquenaire