
Compliance

EU AI Act and hiring: what HR teams need to know.

A plain-English summary of the EU AI Act's high-risk hiring provisions, what they mean for your verification stack, and how to stay compliant without slowing hiring down.


Poya Farighi

Founder, Veref

March 10, 2026 · 9 min read

The EU AI Act classifies hiring as a high-risk use of AI. That puts any automated system in a recruiting workflow under specific rules about human oversight, transparency, and bias testing. Non-compliance with the high-risk obligations carries fines of up to €15 million or 3% of global annual turnover, whichever is higher; prohibited practices carry the Act's top tier of €35 million or 7%. The high-risk provisions apply to AI systems in hiring from August 2, 2026.

This piece is a plain-English summary for HR, Talent, and Security leaders who want to understand what the Act requires, what counts as compliant, and what to ask vendors to prove. It is not legal advice; talk to your counsel before making structural changes.

What does the EU AI Act say about hiring?

The EU AI Act is the first comprehensive, risk-tiered regulation of AI systems anywhere in the world. It came into force on August 1, 2024. The rules that matter most for hiring teams start applying in phases through 2026 and 2027.

Annex III of the Act lists specific AI system use cases that are classified as "high-risk." Among them, point 4 covers "employment, workers' management and access to self-employment." Any AI system used to do any of the following in a hiring or employment context is high-risk under the Act:

  • Recruit or select natural persons (including placing targeted job ads, analysing and filtering applications, or evaluating candidates).
  • Make decisions affecting the terms of work-related relationships, promotion, or termination.
  • Allocate tasks based on individual behaviour or personality traits.
  • Monitor and evaluate the performance and behaviour of workers.

High-risk systems carry a stack of specific obligations: a risk management system, data governance, technical documentation, record-keeping, transparency to users, human oversight, accuracy, robustness, cybersecurity, and post-market monitoring. The obligations apply to the provider (the vendor building the system), the deployer (the employer using it), and in some cases both.

What is human-in-the-loop, and why is it non-negotiable?

Article 14 of the Act requires that high-risk AI systems be designed so they can be "effectively overseen by natural persons during the period in which they are in use." Effective oversight means the people overseeing the system must be able to:

  • Fully understand the capabilities and limitations of the system.
  • Remain aware of the possible tendency of automatically relying on system output ("automation bias").
  • Correctly interpret the system's output.
  • Decide, in a particular situation, not to use the system or to disregard, override, or reverse its output.
  • Intervene in the operation of the system or interrupt it through a "stop" button.

In a hiring context, this means every consequential outcome (rejecting a candidate, moving them to a lower-priority pipeline, flagging them for additional scrutiny) must be made by a human with the ability to override the system. A scored candidate is fine. A rejection threshold that fires with no human in the loop is not.
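To make that distinction concrete, here is a minimal sketch of an oversight gate in Python. Every name in it is hypothetical, not Veref's API; the structural point is that the system alone can only ever say "needs review," and an adverse outcome exists only as the output of a recorded human decision.

```python
# A minimal sketch of an Article 14-style oversight gate. All names
# here are hypothetical, not any vendor's actual API.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Outcome(Enum):
    ADVANCE = "advance"
    REJECT = "reject"
    NEEDS_REVIEW = "needs_review"


@dataclass
class CandidateScore:
    candidate_id: str
    score: float              # advisory signal shown to the recruiter
    evidence_refs: list[str]  # clips/transcript segments behind the score


@dataclass
class ReviewDecision:
    reviewer_id: str          # the accountable human (Article 14)
    outcome: Outcome
    rationale: str            # feeds the Article 86 explanation


def record_outcome(score: CandidateScore,
                   decision: Optional[ReviewDecision]) -> Outcome:
    if decision is None:
        # No human in the loop yet: the score is evidence, not a verdict.
        return Outcome.NEEDS_REVIEW
    if decision.outcome is not Outcome.ADVANCE and not decision.rationale.strip():
        # An adverse decision without a written rationale cannot support
        # a "clear and meaningful explanation" later, so refuse to log it.
        raise ValueError("adverse decisions require a written rationale")
    return decision.outcome
```

Requiring the rationale at decision time is what makes the Article 86 explanation cheap to produce later: the reasoning is captured when the human acts, not reconstructed afterwards.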

The Act goes further for adverse decisions. Article 86 gives any person affected by a high-risk AI decision the right to a clear and meaningful explanation of how the decision was reached. In a hiring context, that means candidates can ask why they were filtered out and receive a real answer, not a template.

What counts as an adverse decision?

The short answer is: anything that disadvantages the candidate. Rejection is the obvious case. Moving a candidate to a lower-priority track, flagging them for additional screening, or reducing their visibility in a recruiter's queue all count. If the system changes what happens to the candidate, the right of human oversight and the right to explanation apply.

The less obvious case is the scoring itself. A score that is labelled "advisory" but functionally determines outcomes (because no recruiter ever promotes a candidate below a certain score) is not saved by the label. The Act is concerned with actual practice, not nominal architecture.

What does transparency to the candidate look like?

Article 13 requires transparency in how the system works. Article 50 (and related GDPR obligations) requires transparency to the person affected. In hiring, that translates to four concrete things the candidate should see.

First, clear notice at the point of verification that an AI-assisted process is being used. "Your interview will be recorded and analysed for integrity signals. Here is what that means." A single sentence in the consent flow is sufficient if it is genuinely clear.

Second, a plain-English explanation of what signals are collected. Not the underlying ML architecture; what the system actually records and why. "We check your government ID against your selfie, we match your face throughout the interview against your ID, we analyse the audio for authenticity, and we look at response patterns."

Third, a path to contest an outcome. If the candidate is not moved forward and believes the system contributed to the decision, they need a way to raise that concern and receive a response from a human.

Fourth, a retention and deletion notice. How long data is kept, what happens to it at the end of that period, and how the candidate can request earlier deletion.
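Taken together, the four disclosures fit in a single candidate-facing payload. The sketch below is illustrative only; the field names and the 90-day figure are hypothetical, not Veref's actual schema or policy.

```python
# Illustrative only: a hypothetical shape for the candidate-facing
# notice covering the four disclosures above. Field names are made up;
# the actual copy and schema are vendor- and policy-specific.
candidate_notice = {
    "ai_assisted": True,  # disclosure 1: clear notice AI is in use
    "notice": "Your interview will be recorded and analysed for integrity signals.",
    "signals_collected": [  # disclosure 2: plain-English signal list
        "government ID checked against your selfie",
        "face matched against your ID throughout the interview",
        "audio analysed for authenticity",
        "response patterns reviewed",
    ],
    "contest": {  # disclosure 3: path to a human
        "how": "reply to your outcome email or use the candidate portal",
        "responder": "a human reviewer, not an automated reply",
    },
    "retention": {  # disclosure 4: retention and deletion
        "period_days": 90,  # hypothetical figure, set by policy
        "after": "biometric data deleted; audit log anonymised",
        "early_deletion": "request via the privacy portal at any time",
    },
}
```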

The Veref privacy notice and the candidate-facing consent flow both cover these. Candidates see the full picture before they verify, not in a linked document they never open.

What about bias testing and fairness?

The Act requires documented bias evaluation across relevant demographic cohorts, both before deployment and on every material update to the system. The methodology has to be appropriate for the use case, the results have to be documented, and the documentation has to be available to the regulator on request.

In hiring, the relevant cohorts are the ones that existing anti-discrimination law protects: race and ethnicity, gender, age, disability, and in some jurisdictions religion, sexual orientation, and gender identity. Bias testing methodology typically uses the Equalised Odds framework or a related measure that compares false-positive and false-negative rates across cohorts.
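For readers who want the mechanics: equalised odds asks whether false-positive and false-negative rates are (approximately) equal across cohorts. Here is a minimal sketch of the gap computation, assuming a labelled evaluation set with binary outcomes; the function names are ours, not any particular vendor's toolchain.

```python
# A minimal equalised-odds check: compare false-positive and
# false-negative rates across cohorts and report the largest gap.
from collections import defaultdict


def error_rates(records):
    """records: iterable of (cohort, y_true, y_pred), binary labels.

    A "positive" prediction here means the system flagged the candidate.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for cohort, y_true, y_pred in records:
        c = counts[cohort]
        if y_true == 0:
            c["neg"] += 1
            c["fp"] += int(y_pred == 1)  # flagged a genuine candidate
        else:
            c["pos"] += 1
            c["fn"] += int(y_pred == 0)  # missed a real problem
    return {
        cohort: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for cohort, c in counts.items()
    }


def equalised_odds_gap(records):
    """Worst-case absolute FPR/FNR spread across cohorts; 0.0 is parity."""
    rates = error_rates(records)
    fprs = [r["fpr"] for r in rates.values()]
    fnrs = [r["fnr"] for r in rates.values()]
    return max(max(fprs) - min(fprs), max(fnrs) - min(fnrs))
```

A release gate can then fail any model update whose gap exceeds a documented threshold, which is one way to operationalise "on every material update."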

The practical implication for a deployer (the employer using the system) is that you should be able to ask your vendor for the latest bias evaluation report and receive a document that is not marketing prose. If the vendor cannot produce the document, or the document is a PR piece rather than a technical evaluation, that is a procurement red flag.

Veref publishes its bias evaluation methodology and results to customers under NDA. Every model release includes a regression evaluation across the cohorts named above, plus a specific check for accuracy on lower-quality webcam streams (which historically correlate with lower socioeconomic access).

How does Veref stay compliant by design?

Four defaults make Veref compliant with the EU AI Act out of the box.

No auto-rejections, ever. The system does not make adverse decisions. Every signal surfaces to the recruiter as evidence, with the raw underlying clip and transcript attached. The recruiter makes the call. This is a product-level commitment, not a setting; there is no configuration that would enable auto-rejection.

Evidence is always available. When any signal fires, the session record captures the underlying clip, the transcript segment, and the timestamp. The recruiter can see what the system saw. Candidates can request access to this evidence through our privacy portal and receive it within a GDPR-compliant timeline.
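As a sketch of what such a session record can look like as a data structure (field names are hypothetical, not Veref's schema):

```python
# Illustrative shape for an evidence record: every fired signal keeps
# pointers to the raw clip and transcript segment it was derived from,
# so a recruiter, or the candidate via an access request, can see
# exactly what the system saw. Field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class EvidenceRecord:
    session_id: str
    signal: str               # e.g. "face_mismatch", "audio_anomaly"
    fired_at: datetime        # timezone-aware timestamp of the signal
    clip_ref: str             # storage pointer to the raw video clip
    transcript_segment: str   # the exact transcript span in question
    model_version: str        # which model release produced the signal

    def as_disclosure(self) -> dict:
        """What a data-subject access request would return."""
        return {
            "signal": self.signal,
            "fired_at": self.fired_at.astimezone(timezone.utc).isoformat(),
            "transcript_segment": self.transcript_segment,
            "clip_available": bool(self.clip_ref),
        }
```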

Bias testing is continuous. Every model release includes a bias regression suite. Methodology and results are available to customers on request, and summary results are referenced in the Security page.

Candidate transparency is built into the flow. Candidates see what will be captured before they verify. They see the privacy notice in plain English. They have a clear path to contest outcomes and to delete their record.

This is not marketing. If you are a procurement officer or a compliance lead evaluating vendors, ask for a bias evaluation report, ask for the human-in-the-loop protocol documentation, and ask for the candidate-facing transparency copy. Those three documents from a vendor are the minimum evidence of compliance readiness.

Does this apply outside the EU?

Three scenarios to consider.

You are a US-based company with no EU candidates. The EU AI Act does not apply directly. However, the principles (human-in-the-loop, transparency, bias testing) are converging globally. NYC Local Law 144 already requires bias audits for automated employment decision tools. California, Colorado, and Illinois have related legislation. The EEOC has issued AI-specific guidance. The practical compliance posture you would adopt for the EU AI Act is a reasonable default even without a specific obligation.

You are a US-based company with EU candidates. The Act applies to the portion of your hiring that touches EU candidates. For any employer with distributed hiring (most Fortune 500 and a lot of Series A-plus startups), this is functionally the whole funnel: no employer realistically runs one compliant pipeline for EU candidates and a separate, non-compliant one for everyone else.

You are a UK-based company. The UK has not adopted the EU AI Act. It has adopted a lighter-touch, principles-based approach via the ICO and the regulators for each sector. The practical result is similar: human-in-the-loop is the ICO's expectation for consequential decisions in hiring, transparency is a UK GDPR baseline, and bias testing is good practice even if not explicitly mandated.

Treat the EU AI Act as the floor, not the ceiling. A hiring stack that is compliant in Brussels will be compliant everywhere else.

What should procurement ask vendors?

A short checklist for the head of Talent or the CISO running a vendor review:

  • Show me the human-in-the-loop protocol. What human action is required for any adverse outcome?
  • Show me the bias evaluation report. Which cohorts were tested, on what data, and what were the results?
  • Show me the candidate-facing transparency copy. What does the candidate actually see?
  • Show me the evidence delivery mechanism. When a signal fires, how does the recruiter access the underlying clip or transcript?
  • Show me the data retention policy. How long is biometric data kept, and what is the deletion path?
  • Show me the DPA. Is it available off the shelf, and does it cover Standard Contractual Clauses for international transfers?

Any vendor that cannot produce all six is not ready for EU deployment. Veref produces all six as part of standard onboarding. If you want to see them, book a call and we will send them ahead of the meeting.

Sources and further reading

  1. EU AI Act full text · European Parliament, 2024
  2. High-risk AI in employment, guidance · European Commission, 2024
  3. EEOC guidance on AI in hiring · U.S. EEOC, 2023
  4. New York City Local Law 144 (automated employment decision tools) · NYC Department of Consumer and Worker Protection, 2023

Frequently asked questions

Does the EU AI Act apply to my US-based company?

If you hire anyone located in the EU at the time of application, yes. For multinational employers the Act effectively applies to the whole candidate funnel.

Does a risk score count as an automatic decision?

If the score is used by a recruiter as evidence, no. If a score threshold automatically triggers rejection with no human review, yes. The difference is where the human decision lives in the workflow.

What should I ask vendors to prove?

Ask for a bias evaluation report, the human-in-the-loop protocol documentation, and candidate transparency copy. Any vendor that cannot produce all three is not a compliant option.

When do the provisions actually kick in?

The high-risk AI system obligations for hiring apply from August 2, 2026. Some foundational obligations (prohibited practices, AI literacy) applied earlier, from February 2025. The Act's final transitional deadlines, covering high-risk AI embedded in regulated products, run to August 2, 2027.

What about the UK?

The UK has not adopted the EU AI Act, but the Information Commissioner's Office has issued guidance that broadly aligns with its principles. The practical effect is similar: human-in-the-loop for consequential decisions, transparency to candidates, bias testing.

Ready to verify your next hire end-to-end?

See Veref in a 25-minute demo with your real candidate flow.

Book a demo