Artificial Intelligence (AI) influences contract law at every stage of a contract, from formation and performance to the liability of the parties. It can also be at the heart of the contract itself. This is the case when a relationship is established between the supplier of an AI system and its deployer. If the deployer suffers a loss as a result of using the AI system, the question of liability arises: who is responsible, the supplier or the AI itself?
Since AI has no legal personality, it cannot be held liable. Consequently, where a contract binds the parties, the supplier may incur contractual liability for damage caused to the deployer by the use of the AI system. With the entry into force of the European Regulation on Artificial Intelligence, specific obligations are imposed on the various players. These obligations are reinforced when the AI system processes sensitive data, such as biometric, medical or recruitment data. In such cases, the AI system is classified as a “high-risk system”. If the supplier breaches its obligations and this causes harm to the deployer, the supplier may be held contractually liable.
Supplier obligations to secure a high-risk AI system
Quality management of a high-risk AI system rests on rigorous data governance, to ensure the relevance and reliability of the data, together with risk management, regular maintenance and the notification of incidents. Conformity assessment, whether through audits or before the competent authority, is essential, and any non-compliance requires corrective action and communication to the deployers. Human oversight and cooperation with the relevant authorities are likewise essential to guarantee compliance with the applicable standards and obligations.
Supplier’s contractual liability
If these obligations are not met, the risk of the AI system causing harm to the deployer increases. Potential risks include:
- Cyberattacks: an attack can lead to a data breach and the disclosure of data.
- Errors induced by poor-quality data: AI develops by processing large volumes of data; if this data has not been carefully managed to ensure its quality and relevance, the system risks relying on erroneous information.
- Errors induced by out-of-date data: if the data is not regularly updated, the AI system could mislead deployers.
- Lack of human oversight: without human supervision, the system could produce incorrect or inappropriate results.
- Data loss: if the system is not regularly maintained, significant data loss may result.
In addition to incurring contractual liability, the supplier may also face an administrative fine (the amount of which has yet to be determined in Belgium).
Our advice:
All of these issues need to be carefully assessed, so do not hesitate to seek advice from a lawyer.
Sign up for our earlegal training course on 13 December 2024 “Framing your AI projects: contractual issues”!
We will address the following questions:
- What are the main contractual aspects to consider at the AI development stage?
- What are the contractual aspects to be taken into account at the AI marketing stage?
- How to ensure proper management of AI-related incidents in the contractual chain?
- How to contractually manage your compliance with the AI regulation?
For more information, click here.