May 08 2025

Artificial intelligence & confidentiality: what every business should know

Artificial intelligence (AI) is becoming an increasingly valuable business tool – from accelerating data analysis to automating processes and generating content or commercial insights.

Yet, reliance on AI does not remove legal responsibility for protecting confidential information.

Before entering into negotiations, businesses typically exchange information that may be highly sensitive – technical know-how, commercial proposals, internal procedures, or growth strategies. Such information is usually protected by a non-disclosure agreement (NDA).

However, most NDA templates do not reflect the realities of AI use, which means parties may remain exposed to confidentiality risks – especially when data is processed or unintentionally leaked via AI tools.

⚠️ Key risks

Modern AI systems such as ChatGPT, Copilot, and Gemini operate on cloud-based infrastructure. User-provided data is often stored on external servers and, in some cases, used to further train the AI model – unless that setting is manually disabled.

This creates several risks:

  • Sensitive data may be indirectly reused in responses to other users

  • Data may be stored in third countries (raising GDPR concerns)

  • The responsibility for a data leak may lie not only with an employee, but also with the company that allowed such AI use

📍 For example, in 2023, Samsung publicly confirmed that employees had uploaded internal code into ChatGPT for analysis. This content may have been used for model improvement, raising the risk of confidential data being indirectly exposed through AI-generated outputs.

Samsung responded by banning the use of generative AI tools on company devices. Similar incidents are common in smaller businesses – especially when employees use AI tools to process sensitive partner proposals or prepare replies to commercial queries.
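
One practical safeguard, short of a full ban, is to screen text for confidential markers before it ever reaches an external AI tool. The following is a minimal sketch in Python; the patterns and the `redact`/`safe_prompt` helpers are hypothetical illustrations, and real patterns would need to mirror the categories of confidential information your NDA actually defines.

```python
import re

# Hypothetical patterns for illustration only; a real deployment would
# tailor these to the confidential-information categories in the NDA.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b[A-Z]{2,}-\d{3,}\b"),                          # internal project codes, e.g. "PRJ-1024"
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),                      # e-mail addresses
    re.compile(r"\b\d{1,3}(?:[ .,]\d{3})+(?:\s?(?:EUR|USD))\b"),  # large commercial figures
]

def redact(text: str, placeholder: str = "[REDACTED]") -> str:
    """Replace known confidential markers before the text leaves
    the company's controlled environment."""
    for pattern in CONFIDENTIAL_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def safe_prompt(text: str) -> str:
    """Gate every outbound AI prompt through redaction and flag hits,
    so AI use leaves an audit trail rather than a silent leak."""
    cleaned = redact(text)
    if cleaned != text:
        print("notice: confidential markers were redacted before submission")
    return cleaned

# Example: the project code, contact, and bid amount never reach the AI tool.
print(safe_prompt("Re PRJ-1024: contact j.doe@example.com about the 1,200,000 EUR bid"))
```

A filter like this does not replace contractual protection – it simply reduces the chance that NDA-covered material leaves the company unnoticed.
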

✅ What should your NDA include?

🛑 Prohibition on using AI tools without prior consent
Explicitly state that neither party may process or transmit the other party’s confidential information through AI tools without prior written consent. This is essential, as many generative AI tools operate in uncontrolled cloud environments where uploaded content may be stored, analyzed, or used for training – creating serious confidentiality risks.

🔐 Security standards & limitations for AI tools
If AI use is permitted, the NDA should require that tools:

  • Meet defined data protection standards (for example, ISO/IEC 27001 certification or an equivalent)

  • Allow model training on user inputs to be disabled

  • Operate within a controlled, non-public cloud environment

This helps minimize the risk that confidential data will be misused or accessed by third parties.
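
To make those three conditions operational rather than aspirational, a company can encode them as a simple vetting gate that IT or procurement runs before approving a tool. The sketch below is a hypothetical illustration: `AIToolProfile` and its fields are invented names that map one-to-one onto the bullet points above, not a real vendor API.

```python
from dataclasses import dataclass

@dataclass
class AIToolProfile:
    """Hypothetical vendor profile; the fields mirror the three NDA
    conditions listed above and are illustrative, not a real API."""
    name: str
    meets_data_protection_standard: bool  # e.g. ISO/IEC 27001 or equivalent
    training_can_be_disabled: bool        # model training on user inputs can be switched off
    private_deployment: bool              # controlled, non-public cloud environment

def permitted_under_nda(tool: AIToolProfile) -> bool:
    """A tool is cleared for confidential data only if it satisfies
    all three conditions from the NDA."""
    return (tool.meets_data_protection_standard
            and tool.training_can_be_disabled
            and tool.private_deployment)

# Example: a public chatbot with default settings fails the gate.
public_chatbot = AIToolProfile("generic public chatbot", False, False, False)
assert not permitted_under_nda(public_chatbot)

# Example: an enterprise deployment with training disabled passes.
enterprise_tool = AIToolProfile("enterprise deployment", True, True, True)
assert permitted_under_nda(enterprise_tool)
```
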

📄 Confidentiality of AI-generated outputs
The NDA should protect not only the originally shared information, but also all AI-generated content derived from confidential data – including reports, summaries, calculations, and insights. This is especially relevant when AI creates outputs based on a partner’s sensitive business inputs.

⚖️ Liability for AI-related breaches
The NDA should clearly establish that the party using AI assumes full responsibility for any resulting data loss, disclosure, or GDPR violation. This reduces the risk of disputes and ensures legal clarity.

💬 In summary

Protecting confidential information in the age of AI requires not only technical awareness, but also legal foresight.

It’s no longer enough to sign a standard NDA – businesses must fully understand how AI tools operate and the risks they pose.

If you’re using AI tools in negotiations, proposal drafting, or any other phase involving confidential data, make sure your NDA is up to date.

📩 Need support reviewing or updating your NDA? The Prevence team is here to help.
