The EU AI Act, whose provisions are being phased in over several years, takes a use-case-based approach to regulating the development and use of artificial intelligence. The goal is to strike a balance between fostering innovation and protecting fundamental rights and safety.
This legal framework applies to every organization, whether based in the EU or elsewhere, that provides AI tools or services used within the EU.
The Act categorises AI systems into four risk tiers, ranging from minimal to unacceptable.
- Minimal-risk applications face little or no additional regulation or oversight.
- Limited-risk uses carry transparency obligations, such as informing users that they are interacting with an AI system. These transparency requirements must be met within 12 months.
- High-risk applications include, among others, those in the medical and transportation sectors, and are subject to strict requirements regarding security, transparency and quality. They must undergo a conformity assessment to verify compliance with established standards. The timeframe for meeting these obligations is 36 months (i.e. 2027), with 12 months for the transparency obligations.
- AI systems posing unacceptable risk are banned within 6 months; social scoring is one example.
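To make the tiered structure easier to picture, below is a minimal sketch, in Python, of how an organisation might map its AI systems to the Act's risk tiers and the indicative deadlines above. The tier names and timeframes come from the Act as described here; everything else (the data structure, field names and example systems) is an illustrative assumption, not anything the Act prescribes.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"            # little or no additional regulation
    LIMITED = "limited"            # transparency obligations
    HIGH = "high"                  # conformity assessment, strict requirements
    UNACCEPTABLE = "unacceptable"  # banned


# Indicative months-to-comply per tier, taken from the deadlines above.
DEADLINE_MONTHS = {
    RiskTier.UNACCEPTABLE: 6,
    RiskTier.LIMITED: 12,
    RiskTier.HIGH: 36,
}


@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier


# Hypothetical entries; classifying a real system requires
# legal analysis, not a lookup table.
systems = [
    AISystem("spam-filter", "email filtering", RiskTier.MINIMAL),
    AISystem("support-chatbot", "customer service", RiskTier.LIMITED),
    AISystem("cv-screener", "candidate selection", RiskTier.HIGH),
]

for s in systems:
    months = DEADLINE_MONTHS.get(s.tier)
    deadline = f"{months} months" if months else "no specific deadline"
    print(f"{s.name}: {s.tier.value} risk -> {deadline}")
```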
Compliance assessments and penalties
In order to ensure compliance, member states are required to designate bodies responsible for overseeing conformity assessments. The assessments can take the form of self-assessments or third-party evaluations. Fines for noncompliance can reach €35 million or 7% of a company's global annual turnover, whichever is higher, mirroring the penalty structure of the GDPR.
Transparency and accountability
The Act emphasizes transparency, particularly for generative AI systems, which must disclose when content is AI-generated and ensure that they do not produce illegal content. High-impact general-purpose AI models will be subject to rigorous evaluations and must report serious incidents to the European Commission.
HR as a high-risk area
Several artificial intelligence applications used for human resources purposes are regarded as high-risk. These include:
- Recruitment (for example targeting job ads)
- Selection (application analysis and candidate evaluation)
- Decisions affecting employment terms, promotion, and termination
- Task allocation based on individual traits or behaviour
- Monitoring and evaluating employee performance and behaviour
Such high-risk systems must therefore meet several requirements set out in the new Act:
- Robust risk management systems must be put in place
- High-quality training data must be used in order to prevent discrimination
- Users must be transparently informed about the capabilities of such systems
- The appropriate level of human oversight must be used
- The levels of security, traceability and accuracy must be high
If an employer decides to use such high-risk AI systems, its obligations will include:
- Registering the system and monitoring its performance
- Implementing quality management solutions
- Maintaining and retaining detailed records
- Reporting any incidents
General preparation tasks, keeping in mind that the high-risk obligations do not fully apply until 2027, may include auditing current and planned AI use cases, reaching out to AI vendors to better understand their approach to compliance, and establishing best practices and starting employee training on AI governance, use and regulation.
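As a starting point for such an audit, a simple inventory can tie each AI use case to its presumed risk tier, its vendor, the human oversight arrangement, and any incidents that may need reporting. The sketch below is purely hypothetical: the fields, names and example entry are assumptions for illustration, not a format required by the Act.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Incident:
    occurred_on: date
    description: str
    reported: bool = False  # serious incidents must be reported


@dataclass
class AIUseCase:
    name: str
    vendor: str           # who supplies the system
    purpose: str          # e.g. "candidate selection"
    risk_tier: str        # presumed tier, pending legal review
    human_oversight: str  # who reviews the system's outputs
    incidents: list[Incident] = field(default_factory=list)

    def open_incidents(self) -> list[Incident]:
        """Incidents that still need to be reported."""
        return [i for i in self.incidents if not i.reported]


# Hypothetical example entry for an HR screening tool.
cv_screening = AIUseCase(
    name="cv-screener",
    vendor="ExampleVendor Ltd",
    purpose="candidate selection",
    risk_tier="high",
    human_oversight="HR manager reviews every rejection",
)

cv_screening.incidents.append(
    Incident(date(2025, 3, 1), "Systematically lower scores for part-time applicants")
)

for incident in cv_screening.open_incidents():
    print(f"To report: {incident.occurred_on} - {incident.description}")
```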