The question every business owner asks first
"Are we even allowed to do this?"
That's how nearly every conversation starts when business owners first consider AI agents. Automating customer inquiries, processing invoices, matching data across systems – it sounds great. But what about data protection?
The short answer: Yes, you can. But not blindly. Switzerland has clear rules – and they're less complicated than most people fear.
No legal vacuum: Swiss data protection law already covers AI
Switzerland's revised Federal Act on Data Protection (FADP) has been in force since 1 September 2023. What many don't realise: it's technology-neutral. It applies to AI-powered data processing just like any other software.
The Federal Data Protection and Information Commissioner (FDPIC) confirmed in May 2025 that existing data protection legislation applies directly to AI.
The 4 key principles for AI agents
| Principle | What it means for your AI agent |
|---|---|
| Purpose limitation | The agent may only process data for its defined purpose – e.g., answering inquiries, not secretly building profiles |
| Proportionality | Only process necessary data. A booking agent doesn't need health records |
| Transparency | Customers must know they're communicating with an AI agent |
| Privacy by Design | Data protection must be built in from the start – not bolted on later |
Automated decisions (Art. 21 FADP)
When your AI agent makes decisions that significantly affect a person – for example, declining a service request or calculating a price – three rules apply:
- You must inform the person that the decision was made automatically
- The person can express their point of view
- On request, a human must review the decision
Important: Swiss law doesn't prohibit automated decisions – it requires transparency and the possibility of human review. A pragmatic approach that works well for SMEs.
EU AI Act: Does it affect Swiss companies?
In short: possibly yes. The EU AI Act entered into force on 1 August 2024 (its obligations phase in over the following years), and like the GDPR, it has extraterritorial reach. If your AI agent serves customers in the EU, you may be affected.
The good news: Your typical use cases are low-risk
| Use case | Risk category | Obligation |
|---|---|---|
| Chatbot for customer inquiries | Limited risk | Inform users they're interacting with AI |
| Invoice and receipt processing | Minimal risk | Virtually no specific obligations |
| Data matching between systems | Minimal risk | Virtually no specific obligations |
| AI-generated text (e.g., draft replies) | Limited risk | Mark as AI-generated |
| Recruitment / CV screening | ⚠️ High risk | Comprehensive compliance requirements |
Most AI agents for SMEs – customer inquiries, invoice processing, data matching – fall under minimal or limited risk. The main obligation: transparency.
The trap: If you use AI for hiring decisions (screening CVs, evaluating employees), you're in high-risk territory. This means strict documentation, monitoring, and governance requirements.
5 practical steps for compliant AI agents
1. Sign a Data Processing Agreement (DPA)
If an external provider operates your AI agent or processes personal data on your behalf, you need a DPA. This covers:
- What data is processed
- What security measures apply
- What happens in case of a data breach
- Which sub-processors are involved
Practical tip: Ask your AI provider specifically: "Is our data used to train models?" The answer must be "No" – or contractually excluded.
2. Clarify data residency
Where is the data processed? This isn't a theoretical question.
- Switzerland or EU/EEA: Ideal. Since January 2024, Switzerland has a confirmed EU adequacy decision – data flows freely.
- US cloud providers: Caution. Even with Swiss data centres, US companies are subject to the CLOUD Act, which can conflict with data protection requirements.
- Swiss hosting: Providers like Infomaniak, Safe Swiss Cloud, or Exoscale offer hosting under Swiss law – maximum control.
3. Build in transparency
Inform your customers when they're communicating with an AI agent. This isn't just a legal requirement – it builds trust.
Typical implementation (see the sketch below):
- Notice in chat: "You're communicating with our AI assistant. A team member can take over at any time."
- In your privacy policy: section on automated data processing
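As a rough illustration of the chat notice, here is a minimal sketch in TypeScript. The `ChatMessage` type and `startChatSession` function are assumptions for the example, not a specific chat library:

```typescript
// Minimal sketch: every chat session starts with an AI disclosure before
// the agent answers anything. Names and types are illustrative.

interface ChatMessage {
  role: "notice" | "assistant" | "user";
  text: string;
}

function startChatSession(): ChatMessage[] {
  const disclosure: ChatMessage = {
    role: "notice",
    text: "You're communicating with our AI assistant. A team member can take over at any time.",
  };
  // The disclosure is the first entry in the transcript, so the customer
  // sees it before any AI-generated reply.
  return [disclosure];
}

const transcript = startChatSession();
console.log(transcript[0].text);
```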
4. Enable human review
Build a "human-in-the-loop." This means:
- For uncertain answers, the agent escalates to a human
- Customers can always request human handling
- Important decisions (pricing, contract commitments) require human approval
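A minimal sketch of such an escalation rule, assuming the agent reports a confidence score and a simple decision-type label (both illustrative, the threshold is an assumption to be tuned per use case):

```typescript
// Minimal sketch of a human-in-the-loop gate: escalate when the agent is
// uncertain, when the customer asks for a human, or when the decision is
// significant (pricing, contract commitments).

interface AgentResult {
  answer: string;
  confidence: number; // 0..1, as reported by the agent
  decisionType: "information" | "pricing" | "contract";
}

const CONFIDENCE_THRESHOLD = 0.8; // assumed cut-off

function needsHumanReview(result: AgentResult, customerAskedForHuman: boolean): boolean {
  if (customerAskedForHuman) return true;                    // customers can always request a human
  if (result.confidence < CONFIDENCE_THRESHOLD) return true; // uncertain answers escalate
  // Significant decisions always require human approval
  return result.decisionType !== "information";
}

// Example: a confident FAQ answer goes out directly; a price quote is routed to a team member.
const faq: AgentResult = { answer: "We're open 8:00-17:00.", confidence: 0.95, decisionType: "information" };
const quote: AgentResult = { answer: "Offer: CHF 4,200", confidence: 0.9, decisionType: "pricing" };
console.log(needsHumanReview(faq, false));   // false
console.log(needsHumanReview(quote, false)); // true
```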
5. Check if you need a Data Protection Impact Assessment
If your AI agent's processing is likely to pose a high risk to the people concerned (for example, large-scale processing of sensitive data or the use of new technologies), a Data Protection Impact Assessment (DPIA) is required. If a high residual risk remains despite the planned safeguards, the FDPIC must be consulted.
For most SME use cases (FAQ bots, invoice processing), this is typically not required. But for sensitive data (health, financial), a quick assessment is worthwhile.
Why Switzerland has an advantage
Three factors make Switzerland attractive for AI deployment:
1. EU adequacy: Confirmed since January 2024. Data flows smoothly between Switzerland and the EU – no bureaucratic overhead.
2. Pragmatic legislation: The Swiss FADP is less restrictive than the GDPR. Automated decisions are allowed (with disclosure), a Data Protection Officer isn't mandatory, and fines are moderate (max. CHF 250,000).
3. Local infrastructure: Swiss cloud providers enable data processing under Swiss law – without CLOUD Act risks.
One important difference: Who pays the fine?
| | Swiss FADP | EU GDPR |
|---|---|---|
| Fine targets | The responsible natural person (e.g., CEO) | The company |
| Maximum fine | CHF 250,000 | EUR 20 million or 4% of global annual turnover, whichever is higher |
In Switzerland, the responsible individuals, typically members of management, can be fined personally. One more reason to get data protection right from the start.
What's coming next?
Switzerland isn't standing still:
- In February 2025, the Federal Council announced that it intends to ratify the Council of Europe Convention on AI and Human Rights
- An implementation bill is expected by the end of 2026
- The approach is sector-specific – stricter for the public sector than for private businesses
The direction is clear: more regulation is coming, but focused on high-risk applications. For typical SME automation, little is expected to change.
Checklist: Is your AI project compliant?
- Does the agent only process data necessary for its purpose?
- Do your customers know they're communicating with AI?
- Can customers request human review?
- Do you have a DPA with your AI provider?
- Do you know where the data is processed (Switzerland/EU)?
- Is it contractually excluded that your data will be used for model training?
- Is a "human-in-the-loop" built in for important decisions?
If you can tick all seven, you're on the right track.
Conclusion: Data protection isn't an obstacle – it's a quality mark
The regulatory landscape for AI agents in Switzerland is clear and manageable. Most SME use cases – customer inquiries, invoice processing, data matching – fall under minimal or limited risk.
The key: Transparency, proportionality, and a proper Data Processing Agreement. Build these in from the start, and you'll save yourself trouble later – while building trust with customers and employees.
Not sure if your AI project meets the requirements? We'll review it together in an initial consultation – free and without obligation.