Regulation (EU) 2024/1689, better known as the EU AI Act, entered into force in August 2024, and its obligations have been applying in stages since February 2025. By August 2, 2026, the vast majority of its obligations will apply to all companies using AI in Europe, including SMEs.
Yet more than 60% of European SMEs have not yet started their compliance process. This guide explains, without legal jargon, what you need to do — and why acting now is the best strategy.
Who Is Affected by the EU AI Act?
The EU AI Act applies to any organisation that develops, places on the market, or uses an artificial intelligence system in the European Union — regardless of its size.
In practical terms, your SME is affected if you:
- Use an AI-powered customer service chatbot
- Have integrated a recruitment tool with automatic scoring
- Use data analysis or prediction software
- Use generative AI tools (writing, translation, coding)
- Have developed or commissioned an application integrating AI
In other words: if you use AI in any capacity in your professional activity, you are most likely affected.
Provider or Deployer: What Is the Difference?
The regulation distinguishes two main roles:
The provider is the one who develops an AI system and places it on the market. If you have created a SaaS product integrating AI, you are a provider. Provider obligations are the most demanding, especially for high-risk systems: technical documentation (Annex IV), CE marking, and registration in the EU database.
The deployer is the one who uses an AI system in the context of their professional activity. The vast majority of SMEs are deployers — they use SaaS tools or AI APIs without having developed them. Obligations are proportionate: human oversight, informing users, maintaining logs.
Some SMEs are both provider and deployer: for example, an agency that develops AI tools for its clients AND uses AI tools internally.
The 4 Risk Levels of the EU AI Act
The EU AI Act classifies AI systems into four categories, from highest to lowest risk:
🚫 Unacceptable Risk (prohibited since February 2025)
These practices are simply prohibited:
- Subliminal manipulation of behaviours
- Exploitation of vulnerabilities (age, disability)
- Generalised social scoring by public authorities
- Real-time remote biometric identification (including facial recognition) in publicly accessible spaces for law enforcement purposes (with limited exceptions)
🔴 High Risk (strict obligations — August 2026 deadline)
High-risk systems listed in Annex III notably include:
- Recruitment software and automated CV screening
- Credit scoring systems
- Medical decision-support tools
- Educational assessment systems
- Tools used by law enforcement
For these systems, obligations are substantial: risk assessment, technical documentation, mandatory human oversight, registration in the EU database.
🟡 Limited Risk (transparency obligations)
Chatbots, deepfakes and AI-generated content fall into this category. The main obligation is transparency: users must know they are interacting with AI.
🟢 Minimal Risk (no specific obligations)
The vast majority of everyday AI tools (spam filters, content recommendations, translation tools) fall into this category. No specific regulatory obligations apply.
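As a rough illustration (not a legal determination), the four tiers can be pictured as a lookup over example systems drawn from the categories above. The `risk_tier` helper and the keyword table are purely hypothetical; real classification requires checking Annex III and the prohibited-practices list, not a dictionary lookup:

```python
# Illustrative only: the tiers and examples mirror the article's four
# categories; a real assessment must follow the text of the regulation.
RISK_TIERS = {
    "social scoring by public authorities": "unacceptable",
    "cv screening software": "high",
    "credit scoring system": "high",
    "medical decision-support tool": "high",
    "customer service chatbot": "limited",
    "spam filter": "minimal",
    "translation tool": "minimal",
}

def risk_tier(system: str) -> str:
    """Return the illustrative risk tier, or flag the system for review."""
    return RISK_TIERS.get(system.lower(), "review needed")

print(risk_tier("Spam filter"))            # minimal
print(risk_tier("CV screening software"))  # high
print(risk_tier("Inventory forecaster"))   # review needed
```

The useful takeaway is the default: any system you cannot confidently place in a tier should be treated as "review needed", not assumed minimal.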
The August 2, 2026 Deadline: What Enters Into Force
The application schedule is progressive:
| Date | What enters into force |
|---|---|
| 2 February 2025 | Prohibitions (unacceptable risk) + AI literacy obligation |
| 2 August 2025 | Obligations for GPAI models (general-purpose AI) |
| 2 August 2026 | Transparency obligations (Art. 50) + high-risk systems (Annex III) |
| 2 August 2027 | High-risk systems integrated in regulated products (Annex I) |
The August 2, 2026 deadline is the most important for SMEs. It covers transparency obligations (applicable to all) and high-risk systems under Annex III.
The Digital Omnibus, a political agreement reached on March 11, 2026, proposes postponing certain high-risk system obligations to December 2, 2027 at the latest. But this text has not yet been adopted into law, and the transparency obligations (Art. 50) still apply from August 2, 2026.
Sanctions for Non-Compliance
The EU AI Act provides for a tiered penalty regime:
- Up to €35M or 7% of global turnover for prohibited practices
- Up to €15M or 3% of global turnover for non-compliance of high-risk systems
- Up to €7.5M or 1% of global turnover for providing inaccurate information
For SMEs, the regulation explicitly provides for proportionate enforcement (Article 62). Authorities will take into account the size and resources of the organisation. But a full exemption does not exist.
Where to Start? The 3 Priority Steps
Step 1 — Take inventory of your AI systems
List all AI-integrated tools you use or have developed: HR software, CRM with scoring, chatbots, content generation tools, AI APIs. Ask yourself: "Does this software make decisions or help me make them?"
Step 2 — Identify your risk level
For each system, determine whether it appears in the high-risk categories of Annex III. If you are unsure, start with the Future of Life Institute's EU AI Act Compliance Checker, a free tool that quickly indicates whether your system is in scope. AiCompliBot then takes over to provide you with a complete action plan.
Step 3 — Prepare your documentation
Depending on your role and risk level, you may need to implement human oversight, inform your users about AI use, or build a compliance file. Start with the highest-risk systems.
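The three steps above can be sketched as a minimal inventory script. The field names, example tools, and sort order are assumptions chosen for illustration, not a prescribed format:

```python
# Step 1: inventory — one record per AI-integrated tool you use or ship.
inventory = [
    {"tool": "HR screening SaaS", "role": "deployer", "risk": "high"},
    {"tool": "Support chatbot",   "role": "deployer", "risk": "limited"},
    {"tool": "Spam filter",       "role": "deployer", "risk": "minimal"},
]

# Step 2: rank by risk level so work starts on the riskiest systems.
ORDER = {"unacceptable": 0, "high": 1, "limited": 2, "minimal": 3}
inventory.sort(key=lambda rec: ORDER[rec["risk"]])

# Step 3: documentation effort follows this order, highest risk first.
for rec in inventory:
    print(f"{rec['risk']:>8}  {rec['tool']} ({rec['role']})")
```

Even kept in a spreadsheet rather than code, the same three columns (tool, role, risk level) are enough to prioritise your compliance file.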
Conclusion: Acting Now Means Getting Ahead
There are 4 months left before the August 2, 2026 deadline. That is tight — but sufficient if you start today. SMEs that anticipate turn a regulatory constraint into a competitive advantage: they reassure their clients, partners and investors.
The AiCompliBot questionnaire lets you identify your obligations in 5 minutes, for free. It is the first step towards confident compliance.