13 April 2026·7 min read

AI Act and Recruitment: Is Your CV Screening Software High-Risk?

Does your company use CV screening software? A candidate scoring tool? An automated video interview system? Then you fall under one of the EU AI Act's strictest classifications: high-risk.

Point 4 of Annex III of Regulation (EU) 2024/1689 explicitly targets AI systems used in employment, workers management and access to self-employment. This is not a grey area - it is one of the clearest cases in the text. And the obligations that follow are substantial.

Why recruitment is classified as high-risk

The legislator's reasoning is straightforward: an AI-driven recruitment decision can have a major impact on a person's life. A candidate filtered out by an algorithm often has no visibility into the reasons for that rejection. Algorithmic biases (gender, origin, age) have been documented in scientific literature for years.

Recital 57 of the regulation states that such systems "can perpetuate historical patterns of discrimination" and that "the nature of decisions made in this domain justifies enhanced oversight".

HR systems explicitly targeted by Annex III (point 4)
  • Automated screening and ranking of applications (CV screening)
  • Candidate scoring or ranking
  • Automated filtering of applications
  • Video interviews with analysis of expressions, tone or body language
  • Decision-support tools for promotion or dismissal
  • Automated performance monitoring and evaluation of employees
  • Automated task allocation based on individual behaviour

What this means for your SME in practice

If you use any of these tools, even when purchased from a third-party vendor, you have obligations as a deployer (Article 26). And if you develop your own HR scoring tool, you are a provider (Article 16, plus the requirements of Articles 8 to 15) with even heavier obligations.

As a deployer (you use an existing tool)

Fundamental Rights Impact Assessment (FRIA). If you are a body governed by public law or a private entity providing public services, you must carry out a specific impact assessment before deploying the system (Article 27); for other HR deployers it is not mandatory, but it is a strong good-practice baseline. This is not a DPIA under the GDPR - it is a separate document analysing the system's impact on candidates' fundamental rights: non-discrimination, privacy, human dignity.

Human oversight. You must assign oversight of the system to natural persons with the necessary competence, training and authority (Article 26, paragraph 2, building on Article 14). In practice: a human recruiter must be able to understand the tool's recommendations, challenge them, and make the final decision.

Transparency towards candidates. Persons subject to the system must be informed that AI is involved in the recruitment process (Article 26, paragraph 11). Not buried in a privacy policy - clearly and in advance.

Works council information. If you have a works council (or equivalent), it must be informed of the AI system's deployment before the system goes into service (Article 26, paragraph 7).

Log retention. Activity logs generated by the system must be retained for a period appropriate to the system's purpose - at least six months - and made available to the supervisory authority (Article 26, paragraph 6).

As a provider (you develop the tool)

Obligations are significantly heavier: a risk management system (Article 9), high-quality, representative training data (Article 10), complete Annex IV technical documentation (Article 11), conformity assessment (Article 43), CE marking, and continuous post-market monitoring.

Common HR tools and their likely classification

Concrete examples

Likely high-risk: LinkedIn Recruiter with AI filtering, HireVue (video interviews + analysis), Pymetrics/Harver (AI behavioural tests), any ATS with automatic application scoring, internal CV screening tools with weighted keyword matching.

Likely limited risk: Pre-qualification chatbots asking factual questions (availability, salary expectations) without scoring the candidate. Transparency obligation only (Article 50).

Likely minimal risk: AI-assisted job posting writing tools, spell checkers in applications. No specific obligation.

The most common mistake: "The vendor handles compliance"

No. The tool provider (vendor) has its own obligations (technical documentation, CE marking). But as a deployer, your obligations exist independently. You cannot outsource your FRIA, human oversight or candidate transparency responsibilities by invoking your vendor contract.

That said, do check what your vendor provides: some are starting to offer "compliance packs" including technical documentation and the usage instructions required by Article 13. That is a good starting point, but it does not cover your own obligations.

Penalties

Infringements related to high-risk systems can result in fines of up to €15 million or 3% of global annual turnover, whichever is higher (Article 99, paragraph 4). For SMEs, paragraph 6 of the same article caps each fine at whichever of those two amounts is lower - still potentially existential for a small business, as the sketch below illustrates.
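To make that arithmetic concrete, here is a minimal sketch in Python. The function name and the example turnover figure are ours, not the regulation's, and this is an illustration of the two ceilings, not legal advice:

```python
# Illustrative only: names and figures are assumptions, not from the regulation.
def max_fine_eur(global_annual_turnover_eur: float, is_sme: bool = False) -> float:
    flat_cap = 15_000_000                              # EUR 15 million (Article 99(4))
    turnover_cap = 0.03 * global_annual_turnover_eur   # 3% of worldwide annual turnover
    # General rule: whichever is higher; for SMEs, Article 99(6): whichever is lower.
    return min(flat_cap, turnover_cap) if is_sme else max(flat_cap, turnover_cap)

# An SME with EUR 4M turnover: 3% = EUR 120,000, lower than EUR 15M,
# so the SME ceiling is EUR 120,000 - still a major sum for that business.
print(max_fine_eur(4_000_000, is_sme=True))    # 120000.0
print(max_fine_eur(4_000_000, is_sme=False))   # 15000000
```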

Where to start?

  1. Inventory your HR tools. List all software involved in recruitment, evaluation or employee management. Include third-party and internal solutions.

  2. Classify each system. For each one, determine whether it falls under Annex III, point 4 (a minimal inventory sketch follows this list). The AiCompliBot questionnaire can do this in 5 minutes.

  3. Carry out the FRIA where required. If the FRIA obligation applies to you (see above), the fundamental rights impact assessment is the priority - and it is good practice even where it is not mandatory. AiCompliBot generates a personalised FRIA draft based on your situation.

  4. Implement human oversight. Designate a responsible person, train them, document the supervision process.

  5. Inform stakeholders. Candidates, employees, works council - everyone must know that AI is involved in HR processes.
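To make steps 1 and 2 concrete, here is a minimal sketch in Python of what such an inventory could look like. The class, its fields and the example tools are illustrative assumptions, and the heuristic is a rough first pass mirroring the list earlier in this article - a starting point for your own inventory, not a legal classification:

```python
from dataclasses import dataclass

@dataclass
class HRSystem:
    """One row of your HR-tool inventory (fields are illustrative)."""
    name: str
    vendor: str               # "internal" for home-built tools
    scores_candidates: bool   # produces a score or ranking of applicants
    filters_applications: bool
    analyses_behaviour: bool  # expressions, tone, performance monitoring

    def likely_annex_iii_point_4(self) -> bool:
        # Rough screening heuristic, not a legal determination.
        return (self.scores_candidates
                or self.filters_applications
                or self.analyses_behaviour)

inventory = [
    HRSystem("ATS with CV scoring", "ExampleVendor", True, True, False),
    HRSystem("Pre-qualification chatbot", "ExampleVendor", False, False, False),
]
for tool in inventory:
    status = ("likely high-risk" if tool.likely_annex_iii_point_4()
              else "check limited/minimal risk")
    print(f"{tool.name}: {status}")
```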

The deadline is approaching: obligations for Annex III high-risk systems apply from 2 August 2026. If your SME uses AI in recruitment, compliance is not optional. But it is achievable, especially if you start now.

Classify your HR systems for free on AiCompliBot - 5 minutes to know exactly where you stand.

Ready to assess your compliance?

Free diagnostic in 5 minutes. No credit card required.

Start my free diagnostic →