
The EU AI Act is the world’s first attempt at a comprehensive legal framework for artificial intelligence. Its goal is to ensure that AI systems used within the EU are safe, transparent, and respect fundamental rights. For governance and compliance teams, this means the spotlight is firmly on how AI is managed – not just by those building it, but by anyone deploying or relying on it.

How the EU AI Act Classifies AI Systems: An Overview

The Act uses a risk-based approach to regulation, dividing AI systems into four categories: unacceptable, high, limited, and minimal risk.

  • Unacceptable risk systems are prohibited entirely. These include practices such as social scoring or emotion recognition in the workplace.
  • High-risk systems are permitted but heavily regulated. These include AI used in critical areas like biometric identification, recruitment, credit scoring, and medical devices.
  • Limited and minimal risk systems (like spam filters or AI-generated music recommendations) are subject to lighter transparency rules or no obligations at all.

Governance teams need to audit existing and planned AI systems and map them to these categories. This risk classification determines the level of scrutiny and compliance effort required.
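As a starting point, that audit can be captured in a structured inventory. The sketch below is a minimal illustration in Python – the system names, owners, and classifications are hypothetical, and the Act does not prescribe any particular format.

```python
# A minimal sketch of an internal AI system inventory, mapping each system
# to one of the Act's four risk categories. All entries are hypothetical.
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited entirely
    HIGH = "high"                  # permitted, heavily regulated
    LIMITED = "limited"            # lighter transparency rules
    MINIMAL = "minimal"            # no specific obligations


@dataclass
class AISystem:
    name: str
    purpose: str
    owner: str           # accountable team or person
    risk: RiskCategory


# Hypothetical examples mirroring the categories above.
inventory = [
    AISystem("cv-screener", "Shortlists job applicants", "HR", RiskCategory.HIGH),
    AISystem("spam-filter", "Filters inbound email", "IT", RiskCategory.MINIMAL),
]

# High-risk systems attract the mandatory requirements discussed below.
for system in inventory:
    if system.risk is RiskCategory.HIGH:
        print(f"{system.name}: owned by {system.owner}, needs full compliance review")
```

Keeping the inventory in code or configuration, rather than an ad-hoc spreadsheet, makes it easier to version, review, and query as systems change.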

Preparing for High-Risk Requirements

If your organisation is using or developing high-risk AI systems, the Act imposes a set of mandatory requirements before these systems can be placed on the market or put into use. These include:

  • A robust risk management system
  • High-quality and representative training data, tested for bias
  • Clear human oversight mechanisms
  • Transparent technical documentation and record-keeping
  • Logging capabilities to track system behaviour and support post-market monitoring (a minimal sketch follows this list)
  • Processes to report serious incidents or malfunctions

High-risk systems rely on high-quality, well-governed data. That includes knowing where the data came from, how it’s processed, and whether it’s free from bias. Organisations should ensure data governance policies are applied throughout the AI lifecycle to meet compliance expectations.
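Bias testing can take many forms. As one minimal, hypothetical illustration, the sketch below computes a simple selection-rate gap between groups in a toy recruitment dataset; real programmes would use richer metrics, statistical tests, and production data pipelines.

```python
# A minimal sketch of one bias check on training data, using hypothetical
# labelled examples of the form (protected_group, selected).
from collections import defaultdict

rows = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals = defaultdict(lambda: [0, 0])  # group -> [selected_count, total_count]
for group, selected in rows:
    totals[group][0] += selected
    totals[group][1] += 1

rates = {g: sel / n for g, (sel, n) in totals.items()}
gap = max(rates.values()) - min(rates.values())
print(rates)                             # selection rate per group
print(f"selection-rate gap: {gap:.2f}")  # large gaps warrant investigation
```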

These obligations must be built into your AI governance model. Importantly, compliance isn’t a one-off task – it requires continuous monitoring and the ability to demonstrate that risks are being actively managed over time.

Establishing Ownership and Internal Controls

AI governance can’t sit with a single department. Compliance with the EU AI Act will require organisations to define clear roles and responsibilities, integrate AI risks into existing risk frameworks, and ensure appropriate internal controls are in place.

This includes:

  • Appointing a responsible person or team for AI compliance
  • Including AI oversight in internal audits and risk assessments
  • Documenting decision-making around model development, deployment, and updates
  • Establishing regular training programmes for teams using or impacted by AI

AI systems often interact with personal or sensitive data. That’s why it’s essential to align AI governance with your data privacy and protection frameworks. Mapping AI use to your data inventory and understanding the flow of data across systems is a key first step toward accountability.
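In practice, that mapping can start as a simple cross-reference between AI systems and the datasets they consume. The sketch below uses hypothetical system and dataset names to show the idea.

```python
# A minimal sketch of mapping AI systems to a data inventory, so that for
# any system we can answer: which data does it touch, and is it personal?
# All names below are hypothetical.
data_inventory = {
    "hr-applications": {"personal": True,  "source": "careers portal"},
    "email-metadata":  {"personal": True,  "source": "mail gateway"},
    "product-catalog": {"personal": False, "source": "internal CMS"},
}

# Which datasets feed which AI system (the data-flow mapping).
ai_data_flows = {
    "cv-screener": ["hr-applications"],
    "spam-filter": ["email-metadata", "product-catalog"],
}

# Flag systems that process personal data, so AI governance and privacy
# frameworks can be aligned for them first.
for system, datasets in ai_data_flows.items():
    personal = [d for d in datasets if data_inventory[d]["personal"]]
    if personal:
        print(f"{system} processes personal data: {', '.join(personal)}")
```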

Third-party audits or certifications may also play a role in demonstrating compliance, especially in highly regulated sectors.

Staying Informed and Agile

The AI Act comes into effect in phases, with different timelines for different risk categories. Secondary legislation and technical guidance will follow, so organisations need to stay agile. As guidance evolves, expect increased scrutiny around data sourcing, retention, and deletion – not just how AI models are built, but how data is managed across systems. Governance and compliance teams should stay informed through regulatory updates and ensure internal policies evolve alongside new requirements.

At Nephos, we combine technical expertise with the strategic business value of traditional professional service providers to deliver innovative data solutions. From strengthening AI governance to ensuring your data is well-managed, privacy-compliant, and audit-ready, we help organisations lay the foundations for trustworthy AI. Click here to learn more.

 

Aysha Aziz

Aysha's professional journey across various industries, with a significant focus on the complex arenas of legal and compliance, has enabled her to craft thought leadership content that addresses the unique challenges faced by these sectors. Her content offers a blend of technical knowledge and practical application, providing readers with actionable insights on navigating the legal and compliance landscape while leveraging data storage technologies.
