August 2026: This date will be a test for many companies. That is when the central obligations of the EU AI Act are due to apply to so-called high-risk systems – and many AI applications used in HR departments today fall squarely into this category. Recruiting algorithms, performance analyses, talent scoring: what has long been routine internally will be regulated from this deadline. Violations can result in fines of up to 15 million euros.

However, it is currently unclear whether the rules will actually take effect in August. If the so-called digital omnibus package is approved by the EU member states and the European Parliament beforehand, the deadline will probably be postponed to December 2027. Until that happens, the following applies.

Many HR tools count as high-risk under the AI Act

Specifically: the AI Act divides AI systems into four categories according to their potential harm to fundamental rights and safety – prohibited applications, high-risk systems, systems with transparency requirements, and applications with minimal risk.

The second category is particularly crucial for human resources processes. Annex III of the Regulation specifically mentions AI systems used in the context of employment: applications for selecting or evaluating applicants, systems for analyzing employee performance, algorithms for supporting promotion or career decisions.

Staggered deadlines instead of a single cut-off date

The EU AI Act formally came into force on August 1, 2024. Its obligations, however, take effect in stages:

  • February 2025: The obligation for so-called AI literacy comes into force. Companies must ensure that employees who use or develop AI have sufficient competence in using these systems.
  • August 2025: Special rules for providers of so-called general-purpose AI models – i.e. large language models and comparable foundation models – come into force. They concern transparency and documentation obligations, for example regarding training data and copyright. Companies that merely use such models usually face no direct obligations – but there are indirect effects, as many HR tools are built on such foundation models.
  • August 2026: The currently relevant date for most companies. From August 2, 2026, high-risk AI systems must meet the central requirements of the regulation – unless the so-called digital omnibus proposal is passed in time.
  • August 2027: An extended transition period applies to AI applications that are part of other regulated products (such as medical devices or machines). HR software is generally not included.

Which AI rules will apply to companies from August 2026

From August, high-risk AI systems must meet, among other things, the following requirements:

  • structured risk management – i.e. the systematic identification, assessment and mitigation of possible harm caused by the AI system
  • proven data quality
  • technical documentation
  • human oversight of automated decisions
  • transparency obligations towards affected persons

Anyone who fails to meet these obligations risks severe sanctions: violations of the high-risk requirements can be fined up to 15 million euros or three percent of global annual turnover – whichever is higher. For the use of prohibited AI practices – including manipulative systems that influence people, real-time biometric surveillance in public spaces, and AI-supported social scoring – the ceiling is even higher: up to 35 million euros or seven percent of turnover.

There is also a practical timing problem: experience shows that a conformity assessment – i.e. the structured check of whether an AI system meets all legal requirements before it may be used – typically takes three to six months. Companies that only start analyzing their AI systems in spring 2026 will hardly be able to meet the deadline.

Digital Omnibus: postponement to December 2027?

In parallel to the existing schedule, the EU Commission presented the so-called digital omnibus package in November 2025. The AI omnibus it contains goes far beyond cosmetic adjustments: the deadline for high-risk systems is to be pushed back to December 2027, documentation requirements for smaller companies are to be reduced, and the AI literacy requirement is to be watered down – what is currently a binding corporate obligation would be downgraded to a mere recommendation by the Commission and member states.

The postponement has not yet been decided. The proposal must still pass the ordinary legislative procedure – it needs the approval of the EU member states and the European Parliament. For the postponement to take effect in time, the changes must be adopted before August 2, 2026.

What about internal AI systems?

The AI Act regulates a lot – but not everything. One gap could be particularly consequential for HR departments: AI systems used purely internally, without direct contact with customers or the public. Stefan Eder, lawyer and founder of the legal tech company Cybly, draws attention to this blind spot: “AI systems used internally – i.e. those without a direct external interface – often do not even fall under regulatory obligations.”

The lawyer refers to the research paper “Internal Deployment Gaps in AI Regulation”, which analyzes exactly this blind spot. The authors identify three structural problems: lack of clarity about when internal systems actually fall under regulatory obligations; static compliance assumptions that do not capture the continuous development of internal systems; and an information asymmetry that gives regulators little insight into AI systems used internally. The underlying problem, says Eder, is less a regulatory failure than a question of regulatory design: many frameworks were developed with external products in mind.

His advice: “Regardless of regulatory scope, organizations should treat internal AI systems with the same governance discipline as external ones.” That means internal AI inventories, structured risk analyses before deployment, continuous monitoring of system changes, and clearly defined responsibilities.



Act now, don’t wait

One thing is certain: the EU AI Act does not create a single changeover date but an implementation process lasting several years – with August 2026 (or December 2027, should the omnibus pass) as the central deadline for most HR-relevant systems.

However, many companies and industry groups remain critical of the mammoth law. “The AI Act was intended to ensure legal certainty for artificial intelligence in Europe – the exact opposite is currently threatening,” criticizes Susanne Dehmel, member of the executive board of the industry association Bitkom. Too many obligations are vaguely worded, and the bureaucratic burden is too high. Added to this is the governance gap described by Eder: internally used AI systems often slip through the regulatory cracks – even though their effect on employees can be just as consequential as that of externally visible tools.


Sven Frost is responsible for HR tech, which includes the areas of digitalization, HR software, time and access, SAP and outsourcing. He also writes about labor law and regulations and is responsible for the editorial planning of various special human resources publications.
