The European Union’s AI Act: Establishing a Regulatory Framework for AI Systems
The European Union’s AI Act came into force on 1 August 2024, marking a significant step toward establishing a regulatory and legal framework for AI systems within Europe. The Act categorises AI systems by their potential impact on safety, human rights, and societal well-being, with risk tiers ranging from prohibited (unacceptable risk) through high-risk down to minimal risk. AI systems used in human resources will face scrutiny starting in August 2026.
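One way to picture the tiering is as a lookup from use case to obligations. The sketch below is purely illustrative, with hypothetical example use cases and category labels; the Act defines these tiers legally, not programmatically, and classifying a real system requires legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely mirroring the Act's structure."""
    PROHIBITED = "unacceptable risk - banned outright"
    HIGH = "high risk - strict conformity requirements"
    MINIMAL = "minimal risk - largely unregulated"

# Hypothetical mapping of example use cases to tiers (not a legal determination).
EXAMPLE_CLASSIFICATIONS = {
    "social scoring based on personal characteristics": RiskTier.PROHIBITED,
    "CV-screening tool used in recruitment": RiskTier.HIGH,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```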
Although the UK is no longer part of the EU, many UK firms that operate in or sell into the EU will still need to comply with the Act, and any future UK legislation is expected to closely mirror the EU’s approach.
High-risk systems, subject to the strictest requirements short of an outright ban, are those with significant impacts on people’s safety, well-being, and rights. Low-risk applications, such as AI-enabled video games and spam filters, currently make up about 85% of AI systems used in the EU, although this proportion may fall as AI becomes more prevalent in the workplace. Non-compliance with the Act can result in fines of up to €35 million or 7% of a company’s annual global turnover, whichever is higher, for the most serious violations. Implementation will be phased, giving firms time to assess their systems and establish monitoring and compliance procedures.
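To make that penalty ceiling concrete, here is a back-of-the-envelope calculation. The turnover figure is a made-up example; the 7% and €35 million thresholds are the Act’s top penalty tier.

```python
def max_fine_eur(annual_global_turnover_eur: float) -> float:
    """Upper bound for the most serious violations: the higher of
    EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * annual_global_turnover_eur)

# A hypothetical firm with EUR 2 billion in global turnover:
print(f"Maximum exposure: EUR {max_fine_eur(2_000_000_000):,.0f}")
# Maximum exposure: EUR 140,000,000
```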
Over the next year, the Act will be rolled out in stages:
- Six months after entry into force (February 2025), AI practices posing an unacceptable risk to health, safety, or fundamental rights will be banned.
- Nine months in (May 2025), the AI Office will finalise codes of practice covering the obligations of developers and deployers.
- One year in (August 2025), rules for providers of general-purpose AI (GPAI) models – such as those behind ChatGPT – will come into effect, requiring organisations to align their practices with the new obligations.
AI systems used in employment must comply by August 2026. Deemed capable of causing significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law, these high-risk systems will be subject to strict regulation. Thomas Regnier, a spokesperson for the European Commission, emphasised that the legislation aims to protect citizens and businesses, not stifle innovation. EU competition chief Margrethe Vestager reinforced this, highlighting that the European approach to technology puts people and their rights first.
Starting in February 2025, AI practices deemed to pose an unacceptable risk will be banned outright. These include biometric categorisation systems that sort people by political views, religious beliefs, sexual orientation, or race; untargeted scraping of facial images from the internet or CCTV; emotion recognition in workplaces and educational institutions; and social scoring based on behaviour or personal characteristics.
Generative AI models such as those behind ChatGPT will be regulated from August 2025. Developers will need to evaluate their models, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the European Commission, ensure adequate cybersecurity, and report on energy efficiency.
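For a GPAI provider, those duties amount to an ongoing checklist per model release. The minimal sketch below uses hypothetical field names of my own choosing; nothing here reflects the Act’s formal templates or official terminology.

```python
from dataclasses import dataclass

@dataclass
class GPAIComplianceStatus:
    """Tracks the obligations listed above for one model release.
    Field names are illustrative, not official terminology."""
    model_name: str
    model_evaluated: bool = False
    systemic_risks_assessed: bool = False
    adversarial_testing_done: bool = False
    incidents_reported_to_commission: bool = False
    cybersecurity_measures_in_place: bool = False
    energy_efficiency_reported: bool = False

    def outstanding(self) -> list[str]:
        """Return the names of obligations not yet met."""
        return [name for name, done in vars(self).items()
                if isinstance(done, bool) and not done]

status = GPAIComplianceStatus(model_name="example-gpai-model")
status.model_evaluated = True
print("Outstanding obligations:", status.outstanding())
```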
Transparency is a critical guiding principle under the Act. Companies must ensure AI operations are well-documented and supervised by humans to prevent unintended legal violations.
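In practice, documentation and human supervision often translate into logging every automated decision and routing uncertain ones to a reviewer. Below is a minimal sketch of that pattern, assuming a hypothetical `score_candidate` model and an arbitrary 0.7 confidence threshold; it is one possible approach, not a prescribed compliance mechanism.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

CONFIDENCE_THRESHOLD = 0.7  # arbitrary illustrative cut-off

def score_candidate(candidate_id: str) -> float:
    """Stand-in for a real model; returns a fixed confidence score."""
    return 0.62

def decide_with_oversight(candidate_id: str) -> str:
    """Log every automated decision and escalate low-confidence ones
    to a human reviewer, keeping an auditable record."""
    confidence = score_candidate(candidate_id)
    if confidence >= CONFIDENCE_THRESHOLD:
        decision = "auto-approved"
    else:
        decision = "escalated to human review"
    log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "confidence": confidence,
        "decision": decision,
    }))
    return decision

print(decide_with_oversight("candidate-001"))  # escalated to human review
```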
The EU’s AI Act represents a pivotal moment in regulating AI systems, aiming to protect safety, human rights, and societal well-being while fostering innovation. As the implementation phases roll out, businesses must prepare to comply with stringent requirements, especially in high-risk areas like employment. Transparency, documentation, and human oversight will be essential to navigating these new regulations successfully. The Act underscores the balance between leveraging AI’s potential and ensuring it serves the broader good of society.