January 06 2026

Who is responsible for AI-driven decisions in business?

“We use AI through a software provider, so the liability lies with them.”
This is one of the most dangerous assumptions businesses still make today.

More and more companies use artificial intelligence in very practical ways: for recruitment, customer service, fraud prevention, pricing, or risk assessment. However, the EU Artificial Intelligence Act makes it clear that responsibility also lies with the business that uses the AI system.

Where do businesses most often get it wrong?

Many companies fail to recognise that the AI they use may qualify as high-risk. The AI Act explicitly states that systems used for recruitment and employee evaluation, creditworthiness or risk assessment, access to services, or pricing decisions may fall within the high-risk category (AI Act, Annex III).

In such cases, mandatory requirements apply, including risk management processes (Article 9), appropriate data governance (Article 10), system documentation and traceability (Articles 11–12), human oversight (Article 14), and sufficient accuracy, robustness, and cybersecurity (Article 15).

What does this mean for small businesses?

It is important to emphasise that not every use of AI automatically constitutes high risk. If AI is used for internal content, marketing, customer request routing, or analytical support without making decisions about individuals, complex compliance processes are not required.

However, even in these cases, businesses must know where AI is being used, avoid delegating decisions with legal or financial consequences solely to algorithms, and inform users when they interact with AI, as required by the transparency obligations (Article 50 AI Act).

Why this is not just theory

This responsibility logic existed even before the AI Act. Amazon abandoned its CV-screening algorithm after it was found to produce discriminatory outcomes. In the Dutch childcare benefits scandal, liability rested with the public authorities that relied on algorithmic risk assessments.

AI used through SaaS: where the real risk lies

Most businesses use AI via third-party SaaS solutions, but this does not remove the deployer's own responsibility (Article 26 AI Act). Providers are required to ensure compliance with the AI Act, including risk management (Article 9), data governance (Article 10), documentation and traceability (Articles 11–12), accuracy, robustness and cybersecurity (Article 15), and conformity assessment before placing high-risk systems on the market (Article 43). Yet if these obligations are not effectively met, regulators will still assess compliance through the actions of the business that deployed the system.

This is because the business is the party that selected and deployed the AI system.

What this means in practice

As a result, SaaS agreements must include clear provider warranties on compliance with the AI Act, obligations to cooperate in the event of incidents, and well-defined liability and indemnification mechanisms.

If your business uses or develops technology-based solutions and you want to ensure that your contracts reflect the AI Act requirements not just on paper but in practice, get in touch.

📩 info@prevence.legal
📞 +370 664 42822



