
Privacy policy for AI products

Training data, prompts, model outputs, and the EU AI Act — what your policy needs to cover in 2026.

By FreePrivacyPolicy Editorial Team · Privacy compliance editors · AI Privacy · 4 min read

Generate AI-ready policy · Free · no signup · hosted public URL

The four extra disclosures AI products need

  1. Whether prompts and outputs are used for training. If yes, disclose it explicitly and provide an opt-out; EU data protection authorities and California's privacy regulator both scrutinize training-data use.
  2. Which foundation models you call. Name the provider (OpenAI, Anthropic, Google, Mistral) and link to their data-handling docs.
  3. Retention of prompt logs. OpenAI's API retains prompts for up to 30 days by default unless you qualify for zero data retention; disclose which tier applies to you.
  4. Automated decision-making notice (GDPR Article 22). If a model output produces legal or similarly significant effects on the user without meaningful human review, you must say so.
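As a rough sanity check, the four disclosures above can be turned into a simple keyword scan of a policy draft. This is a toy sketch, not a legal test: the topic names and keyword lists below are our own illustrative assumptions, and a real review needs a human.

```python
# Toy sketch: scan a privacy policy draft for the four AI disclosures.
# Keyword lists are illustrative assumptions, not legal criteria.

DISCLOSURES = {
    "training use of prompts/outputs": ["train", "training"],
    "foundation model provider named": ["openai", "anthropic", "google", "mistral"],
    "prompt log retention period": ["retention", "retain"],
    "automated decision-making notice": ["automated decision"],
}

def missing_disclosures(policy_text: str) -> list[str]:
    """Return the disclosure topics with no matching keyword."""
    text = policy_text.lower()
    return [
        topic
        for topic, keywords in DISCLOSURES.items()
        if not any(kw in text for kw in keywords)
    ]

draft = """We send your prompts to the OpenAI API and retain logs for 30 days.
Prompts and outputs are not used for model training."""
print(missing_disclosures(draft))  # only the Article 22 notice is missing
```

A hit only means the topic is mentioned somewhere, not that the wording satisfies GDPR or the AI Act; treat misses as a to-do list, not hits as a pass.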

EU AI Act transparency duties

The EU AI Act (Regulation (EU) 2024/1689) entered into force in August 2024 with a phased application. From August 2026 onward, deployers of high-risk AI must keep technical documentation and inform affected users. From August 2025, general-purpose AI providers must publish a summary of training data. Your privacy policy is a good place to surface those notices for end users.

Ready to publish?

Answer six questions and get a hosted public URL that the App Store, Google Play, and ad networks accept. No credit card.

Generate AI-ready policy

Frequently asked questions

Do I have to name OpenAI in my policy if I use ChatGPT?
If your product calls the OpenAI API at runtime with user data, yes — OpenAI is a subprocessor and must be named.
What is "automated decision-making" under GDPR?
A decision based solely on automated processing that produces legal or similarly significant effects on the user. A pure recommendation usually does not qualify; an automated loan denial does.

Related reading