"The EU AI Act applies to all companies using AI." That is true. But in practice, not everyone has the same obligations, and certainly not on the same dates. Here is a concrete framework, with real examples, so you know exactly what applies to you before 2 August 2026.
The Central Question: Is It an "AI System" Under the Regulation?
Before even discussing risk levels, you need to know whether your tool falls under the legal definition of an "AI system". The EU AI Act defines an AI system as:
A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
In plain terms: if your tool predicts, recommends, decides or generates content - it is probably an AI system under the regulation. Software based purely on fixed rules (without machine learning or probabilistic models) is generally excluded.
The 4 Categories: What Changes for Each
Prohibited Practices - in force since 2 February 2025
These uses are banned outright. No delay, no possible postponement.
Concrete examples:
- A system that subliminally manipulates a user's purchasing decisions (AI-powered dark patterns)
- A social scoring tool that evaluates citizens based on their overall behaviour
- A real-time facial recognition system in a public space for surveillance purposes (subject to narrowly defined law-enforcement exceptions)
- Tools exploiting the psychological vulnerabilities of users to influence their behaviour
For SMEs: this category is rarely a direct concern. But check your marketing and advanced-personalisation tools.
High-Risk Systems (Annex III) - proposed postponement to 2 December 2027*
These are the most regulated systems. Annex III lists 8 specific domains. If your tool falls into one of these domains, the obligations are substantial: risk assessment, technical documentation, mandatory human oversight, registration in the EU database.
Most common cases for SMEs:
| Tool | High risk? | Why |
|---|---|---|
| CV screening / ATS software with AI scoring | Yes | Annex III - Employment and workforce management |
| Credit scoring or financial eligibility tool | Yes | Annex III - Access to essential financial services |
| Student / pupil assessment system | Yes | Annex III - Education and vocational training |
| Medical diagnostic support tool | Yes | Annex I - regulated medical device (MDR), rather than Annex III |
| Basic lead-scoring CRM | No | No decision affecting fundamental rights |
| Marketing content generation tool | No | Limited risk (transparency only) |
| Customer service chatbot | No | Limited risk (transparency only) |
* Applies only if the Digital Omnibus is formally adopted before August 2026; otherwise the deadline remains 2 August 2026.
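The classification logic behind the table above can be sketched as a simple triage helper. This is an illustrative sketch only, not a legal qualification of any real tool: the use-case labels, the `ANNEX_III_DOMAINS` set and the `risk_category` function are assumptions made for this example.

```python
# Illustrative triage of a use case into the four risk tiers described
# above. Labels and sets are assumptions for this example, not a legal
# qualification of any real tool.

PROHIBITED = {"social_scoring", "subliminal_manipulation",
              "realtime_facial_recognition"}

# Simplified stand-ins for the Annex III domains most relevant to SMEs
ANNEX_III_DOMAINS = {"employment", "credit", "education"}

TRANSPARENCY_ONLY = {"chatbot", "content_generation", "deepfake"}

def risk_category(use_case: str) -> str:
    """Return the indicative EU AI Act risk tier for a use-case label."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in ANNEX_III_DOMAINS:
        return "high-risk"
    if use_case in TRANSPARENCY_ONLY:
        return "limited-risk (transparency)"
    return "minimal-risk"
```

Anything not explicitly prohibited, high-risk or transparency-bound falls through to minimal risk, which mirrors how the regulation's tiers actually nest.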
Limited Risk - transparency obligation on 2 August 2026 (firm)
This is the most common category for SMEs. It covers user-facing systems that generate content or simulate human interaction.
Main obligation: inform the user they are interacting with AI.
Customer service chatbot
Does your website have a virtual assistant? You must clearly indicate to the user that they are talking to an AI. A simple introductory message is sufficient in most cases: "Hello, I am a virtual assistant..."
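In code, the disclosure can be as simple as making a fixed message the first thing the user sees. A minimal sketch, assuming a conversation modelled as a list of role/content dictionaries; the `AI_DISCLOSURE` wording and the `start_conversation` helper are hypothetical:

```python
# Minimal sketch of the chatbot transparency obligation: make the AI
# disclosure the first message the user sees. The message text and the
# helper name are illustrative assumptions.

AI_DISCLOSURE = "Hello, I am a virtual assistant powered by AI. How can I help you?"

def start_conversation(messages: list[dict]) -> list[dict]:
    """Prepend the AI disclosure so it appears before any other message."""
    return [{"role": "assistant", "content": AI_DISCLOSURE}, *messages]
```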
AI-generated content
If you use AI to generate articles, emails, product descriptions or any other content intended for the public, you must indicate that this content was produced or assisted by AI. The exact requirements depend on the type of content.
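As a sketch of what such an indication might look like in practice, here is a hypothetical helper that appends an AI-assistance notice to generated text. The notice wording, the `NOTICES` mapping and the `label_ai_content` function are assumptions, not prescribed formats:

```python
# Hypothetical helper appending an AI-assistance notice to generated
# content. The per-type wording is an assumption, not a prescribed format.

NOTICES = {
    "article": "This article was produced with the assistance of AI.",
    "email": "This message was drafted with AI assistance.",
}

def label_ai_content(text: str, content_type: str = "article") -> str:
    """Append the notice matching the content type, with a generic fallback."""
    notice = NOTICES.get(content_type, "Generated with AI assistance.")
    return f"{text}\n\n{notice}"
```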
Deepfakes and synthetic content
Images, videos or audio manipulated or generated by AI: marking obligation. This is the domain of the Code of Practice currently being finalised (expected June 2026).
Minimal Risk - no specific obligation on 2 August 2026
The vast majority of everyday AI tools fall into this category. No specific regulatory obligation applies on 2 August 2026.
Examples:
- Your email spam filter
- Spell and grammar checkers (Grammarly, LanguageTool...)
- Machine translation tools (DeepL, Google Translate)
- Content recommendations on platforms
- AI-based SEO optimisation tools
- Planning and productivity tools (AI in Notion, Slack...)
The Cross-Cutting Obligation: AI Literacy Training
Whatever your risk category, one obligation applies to all companies: AI literacy training for employees who use or supervise AI systems. Unlike the other deadlines discussed here, this obligation has in fact applied since 2 February 2025.
This obligation does not require a complex certified training programme. It means your teams must understand:
- What the AI they use actually does
- Its limitations and potential biases
- When and how to exercise their human judgement
An internal document, an awareness session, or a procedure note can constitute a first compliant step.
The Particular Case of SaaS Tools You Use
A question that comes up frequently: "My SaaS tool uses AI. Am I responsible?"
The answer is: partially. The SaaS provider is responsible for the compliance of the AI system as a provider. But you, as a deployer, have your own obligations:
- Verify that the high-risk systems you use are compliant
- Inform your end users of the use of AI
- Maintain human oversight over important decisions
- Keep usage logs if required
In practice: ask your SaaS providers for their AI Act compliance documentation. Good providers have already prepared it or are in the process of doing so.
Summary: What Applies to You on 2 August 2026
| Situation | Obligation on 2 August 2026 |
|---|---|
| You use a chatbot | Indicate it is an AI |
| You generate content with AI | Indicate AI assistance |
| You have employees using AI | AI literacy training (already in force since 2 February 2025) |
| You use an automated CV screening tool | High-risk compliance (potential delay until Dec. 2027) |
| You use a spam filter | Nothing specific |
| You use DeepL or a translation tool | Nothing specific |
Not sure which category your tools fall into? AiCompliBot analyses your AI systems in 5 minutes and gives you your exact obligations - free of charge.