Who is responsible for AI-driven decisions in business?
“We use AI through a software provider, so the liability lies with them.”
This is one of the most dangerous assumptions businesses still make today.
Companies increasingly use artificial intelligence in very practical ways: for recruitment, customer service, fraud prevention, pricing, or risk assessment. The EU Artificial Intelligence Act, however, makes it clear that responsibility lies not only with the provider but also with the business that deploys the AI system.
Many companies fail to recognise that the AI they use may qualify as high-risk. The AI Act explicitly lists systems used for recruitment and employee evaluation, creditworthiness or risk assessment, access to essential services, and pricing decisions as potentially falling within the high-risk category (AI Act, Annex III).
In such cases, mandatory requirements apply, including risk management processes (Article 9), appropriate data governance (Article 10), system documentation and traceability (Articles 11–12), human oversight (Article 14), and sufficient accuracy, robustness, and cybersecurity (Article 15).
It is important to emphasise that not every use of AI automatically constitutes high risk. If AI is used for internal content creation, marketing, routing customer requests, or analytical support, without making decisions about individuals, the full high-risk compliance regime does not apply.
However, even in these cases, businesses must know where AI is being used, avoid delegating decisions with legal or financial consequences solely to algorithms, and inform users when they interact with AI, as required by the transparency obligations (Article 50 AI Act).
Why this is not just theory
This logic of responsibility predates the AI Act. Amazon abandoned its CV-screening algorithm after it was found to produce discriminatory outcomes. In the Netherlands’ social benefits case, liability rested with the public authorities that relied on algorithmic assessments.
Most businesses access AI through third-party SaaS solutions, but this does not remove the deployer’s own responsibility (Article 26). Providers are required to ensure that the system itself complies with the AI Act: risk management (Article 9), data governance (Article 10), documentation and traceability (Articles 11–12), accuracy, robustness and security (Article 15), and conformity assessment before a high-risk system is placed on the market (Article 43). Where these obligations are not effectively met, however, regulators will still assess compliance through the actions of the business that deployed the system.
This is because the business is the party that selected and deployed the AI system.
What this means in practice
As a result, SaaS agreements must include clear provider warranties on compliance with the AI Act, obligations to cooperate in the event of incidents, and well-defined liability and indemnification mechanisms.
If your business uses or develops technology-based solutions and you want to ensure that your contracts reflect the AI Act requirements not just on paper but in practice, get in touch.
📩 info@prevence.legal
📞 +370 664 42822