
AI & GDPR in WordPress: A compliance guide for agencies and freelancers

Marina Brocca

It all started at the first Spanish WordCamp of the year, in Zaragoza. That’s where I ran into the Modular DS team, and somewhere between croquettes at a post-event dinner, the idea for this article was born.

Coincidentally, my talk that day (in Spanish) was about this exact topic. I didn’t expect the reaction it triggered, but in hindsight, it makes total sense. Artificial intelligence (AI) is everywhere. Everyone is using it in their business, but very few people actually know the rules of the game. And navigating this space without a compass is risky. It can affect your reputation, your clients’ trust, and ultimately, your results.

If you build sites or manage WordPress projects, you’ve probably already noticed that AI is part of your daily work: chatbots, WooCommerce recommendations, automated content generation…

Technology moves faster than regulation, but the European AI Act is now a reality: it entered into force in 2024, and on August 2, 2025 a major wave of its obligations became applicable, adding new requirements on top of the GDPR. Even if your agency is based in the US, Canada, or elsewhere, the moment you work with European users or clients, these rules apply to your projects.

So let’s take a look at what this means for your work and what you need to do to stay compliant.

How does AI compliance impact your work as a WordPress professional?

If you build, manage, or maintain multiple sites for clients, this is the key idea you need to understand:

You’re not just a bystander. You’re the person “installing the engine” in your client’s car, and that comes with both technical and legal responsibility.

When you integrate AI into a third-party website, the AI Act and the GDPR assign you a very specific role. Understanding that role is what allows you to raise your professional standards and clearly differentiate yourself.

You are the “Deployer” or “Integrator”: In most cases, your job is to take third-party technology (an OpenAI API or an AI WordPress plugin) and put it to work for your client. Your baseline responsibility is to make sure it’s implemented legally and safely.

Your client is the “Controller”: Legally, the website owner is responsible for users’ personal data. But here’s the catch: they trust you, as the expert, to deliver something that won’t cause them trouble.

Your core obligation: information and due diligence.

Your job no longer ends when the AI plugin shows “Active.” As a WordPress professional, your responsibility now includes:

  • Choosing compliant providers. Not every plugin is good enough. Prioritize tools that meet EU standards and provide proper documentation.
  • Configuring transparency. You are technically responsible for making sure users see disclosures like “I’m a bot” or “AI-generated content.” If those labels are missing, the technical fault is yours.
  • Advising clients. You must explain the risk level of the AI you’re installing. Is it a simple search assistant, or a system that filters job applications?

1. Classifying your WordPress projects: The AI Act “traffic light”

You don’t need to be a lawyer, but you do need to know where each AI plugin fits.

The AI Act defines four risk levels for AI systems and practices. As a WordPress agency or freelancer, you usually act as the Deployer, so here’s how to assess AI usage during audits or proposals.

Unacceptable risk: Prohibited systems

Systems that manipulate behavior or exploit vulnerabilities. For example, an AI plugin designed to infer emotional weaknesses and push aggressive sales tactics.

If a client asks for this, don’t install it. It’s illegal, and penalties can reach 7% of global annual revenue.

High risk: Extreme caution

Systems that affect people’s rights or livelihoods. For example, plugins for AI-based CV screening, creditworthiness calculators, or health diagnostics.

These are rare in the WordPress ecosystem, but you should recognize them. In these cases, require a Declaration of Conformity from the provider, ensure human oversight, and keep detailed usage records.

The good news is that high-risk projects justify higher fees. Many agencies charge 20-30% extra for managing compliance and oversight.

Limited risk: The current standard

This is where most WordPress AI integrations live, and where most agencies operate today. For example, support chatbots, AI text generators, or recommendation systems.

The key requirement here is transparency. Users must know they’re interacting with AI from the first moment. You can implement it in two ways.

For chatbots, add a welcome message like: “Hi, I’m the virtual assistant for [Company Name]. You’re interacting with an AI system. If you’d prefer to speak with a human, you can request human support at any time by clicking here.”

For AI-generated content, add a visible label or footer note: “This content was generated or assisted by AI and reviewed by our editorial team for accuracy.”
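
As a rough sketch, here's how a disclosure like this might be wired into a custom chat widget. The function names and the widget hook are illustrative, not part of any specific chatbot plugin; adapt them to whatever tool you actually install:

```javascript
// Sketch: build an AI disclosure and show it before the bot's own greeting.
// `companyName` and `openChat` are illustrative names, not a real plugin API.
function buildAiDisclosure(companyName) {
  return (
    `Hi, I'm the virtual assistant for ${companyName}. ` +
    `You're interacting with an AI system. ` +
    `If you'd prefer to speak with a human, you can request human support at any time.`
  );
}

// The disclosure is always the first message the user sees.
function openChat(companyName, botGreeting) {
  return [buildAiDisclosure(companyName), botGreeting];
}

// Usage:
// openChat('Acme Co', 'How can I help you today?');
```

The point is structural: the disclosure is hardcoded as the first message, so no conversation can start without it.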

This point caused the most debate during my talk; some SEO folks nearly had a meltdown! But that’s the way it is.

Here’s the rule you should remember: If users don’t know they’re interacting with AI, the failure isn’t just legal. It’s technical, and it’s on you.

Minimal risk: Everyday usage

This includes invisible AI features you already use, such as spam filters (e.g., Akismet), internal search engines, or grammar checkers.

You have no extra legal obligations here, but it’s good practice to document them in a short internal AI usage protocol, especially if you want to position yourself as an agency that’s compliant by default.

2. Legal texts

Legal pages aren’t filler; they’re your credibility on display.

Here’s how to update them under the new framework:

  • Updated references. Mention the AI Act alongside the GDPR and remove outdated references.
  • Dedicated AI section. In your Privacy Policy, clearly explain: “We use AI for [purpose]. Data is processed by [provider], in compliance with GDPR. You may object by contacting [email].”
  • Rights and automated decisions. Inform users about their right to object to automated decisions (GDPR Art. 22) and to request a human review.

Practical tip: Don’t start from scratch. Use professional legal templates specifically designed for AI-powered websites to make sure all key compliance points are covered. If you work with EU clients or collaborate with Spanish-speaking legal teams, check out these legal kits (available in Spanish).

3. Cookies and consent

One of the most neglected areas of AI compliance, yet one of the most important.

AI systems rely on tracking to recommend products or personalize content, but tracking is only allowed when the user has given explicit consent. And this is where many WordPress sites fall short.

  • Use a proper CMP (Consent Management Platform). Avoid basic banners. Use a certified platform. My go-to recommendation is Usercentrics Cookiebot (Google-certified and GDPR-reliable).
  • Differentiate purposes. Analytics and AI personalization must be accepted separately.
  • Cookie audits. Regularly scan the site for hidden trackers that could compromise compliance.
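
The "differentiate purposes" point translates directly into code: AI-related scripts should only load for purposes the user has actually accepted. A minimal sketch, assuming category names like `analytics` and `ai_personalization` (map these to whatever your CMP actually exposes):

```javascript
// Sketch: gate scripts behind per-purpose consent.
// The category names and script filenames are illustrative.
function allowedScripts(consent) {
  const scripts = [];
  if (consent.analytics) scripts.push('analytics.js');
  // AI personalization is a separate purpose: accepting analytics
  // must NOT automatically enable it.
  if (consent.ai_personalization) scripts.push('ai-recommendations.js');
  return scripts;
}
```

In a real setup, a certified CMP handles the blocking for you; the sketch only shows the separation of purposes you need to verify during an audit.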

4. Chatbots and recommendations

If you install chatbots or recommendation engines (especially in e-commerce), the law requires clarity.

  • Clear identification: The bot must openly state it’s AI. Don’t try to disguise it.
  • Granular consent in forms: If data is used for multiple purposes (e.g., order handling and AI marketing), you need separate checkboxes.
  • No discrimination: If you use AI for dynamic pricing or suggestions, audit algorithms to ensure they don’t unfairly discriminate by IP, location, or device.

Lastly, offer a human alternative. Users must be able to opt out of interacting with AI, especially when transparency is insufficient or the interaction could cause confusion.

The simplest solution is to add a persistent button in the chat interface: “Can’t find what you’re looking for? Talk to a human agent.” This should pause the AI flow and notify the support team via email or Slack.
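
A handoff handler along these lines could sit behind that button. Everything here is a sketch: `notifySupport` is a placeholder for your real email or Slack integration, and the session shape is an assumption:

```javascript
// Sketch of a human-handoff handler: pause the AI flow and queue a
// support notification. `notifySupport` stands in for your real
// email/Slack integration.
function requestHumanAgent(session, notifySupport) {
  session.aiPaused = true; // stop routing new messages to the AI
  notifySupport({
    type: 'human_handoff',
    sessionId: session.id,
    transcript: session.messages, // give the agent context on arrival
  });
  return 'A human agent has been notified and will join this chat shortly.';
}
```

The design choice that matters: pausing the AI and notifying support happen in the same handler, so the bot can never keep answering after the user has asked for a person.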

5. Automating compliance

At my Zaragoza talk, I shared a real example of blind automation: A company uploaded 700 CRM contacts into an AI to personalize marketing messages.

Result? 2% of the messages contained serious hallucinations: invented job titles, companies, and personal details. And there was no staging, no review, no emergency stop.

Here’s how it should have been handled.

The panic button and human oversight

If you install an AI system, you’re responsible for how it’s technically used and any risk resulting from it. Before any large-scale automation, implement these safety layers (tools like n8n are perfect for this):

  • Progressive rollout (canary testing). Don’t send all 700 emails at once. Send 10 or 20 messages first. Stop. Review. If those are OK, continue.
  • Human review node. Instead of sending the message directly, route AI output to Slack or a spreadsheet so a human can validate it before the actual sending is triggered.
  • Quality filters. Use a second AI as an auditor to review the output and detect inconsistencies before giving final approval.
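
The first two layers can be sketched in a few lines. `sendMessage` and `humanApproves` are placeholders for your own sending step and review gate (in n8n, these would typically be separate nodes), not real APIs:

```javascript
// Sketch of a canary rollout: send one small batch, stop, and require a
// human decision before continuing to the next batch.
function canaryRollout(contacts, batchSize, sendMessage, humanApproves) {
  let sent = 0;
  for (let i = 0; i < contacts.length; i += batchSize) {
    const batch = contacts.slice(i, i + batchSize);
    batch.forEach(sendMessage);
    sent += batch.length;
    // Emergency stop: no approval, no next batch.
    if (!humanApproves(batch)) break;
  }
  return sent;
}
```

With the CRM example from the talk, a rejected first batch of 20 would have stopped the run at 20 hallucinated messages instead of 700.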

Once risk is controlled, automation becomes your compliance ally:

  • User rights management (automatic data deletion). Set up an automated workflow that can receive a user deletion request and immediately execute it across all your databases and connected AI tools.
  • Log cleanup (data retention limits). The GDPR does not allow you to store personal data indefinitely. Instead of relying on memory or good intentions, schedule an automatic task (for example, a cron job) that periodically deletes old chat logs. Every six months is a common and reasonable interval.
  • Consent traceability. Use tools like n8n to securely record off-site when and how users agreed to AI usage. If you ever face an audit or inspection, this “digital notary” becomes your strongest piece of evidence.
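
For the log cleanup, the core logic is simple enough to sketch. The 180-day window (roughly the six months mentioned above) and the `createdAt` field are assumptions you would adapt to your own data model; in production this would run on a schedule against your database, not an in-memory array:

```javascript
// Sketch of a retention cleanup you might schedule as a cron job:
// keep only chat logs that are still inside the retention window.
const RETENTION_DAYS = 180;

function purgeOldLogs(logs, now = Date.now()) {
  const cutoff = now - RETENTION_DAYS * 24 * 60 * 60 * 1000;
  // Anything created before the cutoff is deleted (filtered out).
  return logs.filter((log) => log.createdAt >= cutoff);
}
```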

Remember that these processes should serve control, not reckless speed. Automating compliance helps you scale your agency, but only when there’s still a human eye supervising the process.

And please, never feed personal customer data into public AI tools that may use it for model training (for example, ChatGPT sessions where data sharing for training is enabled).

6. AI-based decisions: When humans must step in

Recommending a shampoo isn’t the same as denying insurance coverage.

When AI decisions can negatively affect a person, human intervention is mandatory. For example, when an AI system denies a discount, rejects a refund request, or cancels a subscription, the user has the right to ask for a human review of that decision.

How should this be communicated to users?

This isn’t something you can hide in the fine print. If a decision is made automatically, you must explain it clearly and at the right moment.

At a minimum, the user should understand:

  • What’s happening: “Your request is being evaluated automatically by our system.”
  • The logic involved: “We use criteria based on [X, Y, and Z] to provide an immediate response.”
  • The consequences: What this decision means for the user and what happens next.

When is human intervention mandatory?

If an AI-driven decision can significantly harm or impact a user, they have the right not to be subject to a decision based solely on automated processing. In these cases, you must always provide a clear human “escape route.” Common scenarios include:

  • E-commerce and finance. When AI is used to approve or deny installment payments, credit options, or refunds.
  • Human resources. When an AI system automatically filters or rejects job applicants on a careers site.
  • Subscriptions and digital services. When an algorithm cancels or suspends a user’s account due to suspected fraud or unusual behavior.

Examples in WordPress projects

If an AI system decides to expel a student after detecting “plagiarism” in an online exam, the system must allow that student to appeal the decision and request a human review. A tutor or instructor should be able to examine the case and validate (or correct) the AI’s conclusion.

The same applies to dynamic pricing. If an AI adjusts the price of a ticket or product based on a user’s profile, the user has the right to understand why a specific price is being applied to them instead of another.

Golden rule for agencies: if you implement this kind of system, ensure your client has an existing channel for requesting human intervention or review, whether that’s an email address or a form. Adding a button is pointless if no one is actually responding on the other side.

WordPress AI & GDPR compliance checklist

Here’s your compliance compass for WordPress projects.

Before delivering a project or closing a maintenance audit, take the time to verify every item below. If you can’t justify each one, the project isn’t finished.

Think of this checklist as a way to protect both you and your client.

1. Transparency and user perception

  • AI identification. Does the chatbot or virtual assistant clearly identify itself as a machine from the very start of the interaction?
  • Content labeling. If the website generates text or images automatically, is there a visible label or notice indicating that the content was “Generated by AI” or “AI-assisted”?
  • Human support. Is there a clear and easy way for users to request human intervention, such as a “Talk to a human agent” button, when they need it?

2. Legal texts and privacy (GDPR + AI Act)

  • Privacy Policy. Is the Privacy Policy up to date, and does it explicitly mention which AI tools are being used, for what purpose, and who the provider is?
  • User rights. Are users clearly informed of their right to object to automated decisions?

3. Consent and data collection

  • Cookie management. Are you using a certified consent management platform to block AI-related cookies until the user has explicitly accepted them?
  • Granular consent. In forms, are there separate checkboxes for the core service purpose and for using that data in AI-driven marketing or training?
  • Sensitive data. If the AI processes sensitive data, such as health information, religious beliefs, or biometric data, is consent explicit and reinforced?

4. Risk management and responsibility

  • Risk classification. Have you identified whether the system is low, limited, or high risk? If it falls into the high-risk category, do you have a declaration of conformity from the provider?
  • Bias audit. Have you checked that the recommendation or filtering systems do not discriminate unfairly based on factors such as gender, age, or location?
  • Liability boundaries. Have you signed a responsibility agreement with your client that clearly defines who is accountable for the use of AI once the website is delivered?

5. Security and automation (maintenance)

  • Encrypted data transfers. Is data sent to AI APIs encrypted using SSL/TLS?
  • Log cleanup. Are automated processes in place to clean up chat histories and old personal data according to defined retention periods?
  • Traceability. Do you have a traceable record showing when and how users gave their consent, in case you ever need to prove it during an inspection?

AI compliance as a competitive advantage

GDPR and AI regulations aren’t obstacles; they’re frameworks for building more ethical, resilient websites.

As a WordPress agency or freelancer, delivering a site that’s not just beautiful but legally solid positions you as a high-level professional and builds unshakable trust.

And if you manage multiple sites, remember: compliance doesn’t stop at launch. Keeping them up to date and secure is also part of the best practices that regulatory compliance relies on.

Your real value isn’t just technical execution. It’s the confidence and peace of mind you give your clients.

Author
Marina Brocca
Specialist in digital regulation, data protection, and the European AI Act
Marina has extensive experience advising entrepreneurs, brands, and online projects that want to scale without taking on unnecessary legal risk. She is also a speaker, educator, and legal blogger focused on digital compliance.
