Disclaimer: This blog post is for informational purposes only and does not constitute legal advice. Always consult legal and compliance professionals for tailored guidance on meeting regulatory requirements.
What is the European Union Artificial Intelligence Act (EU AI Act)?
The European Union Artificial Intelligence Act (EU AI Act) was signed into law by European Union lawmakers in July 2024 and officially entered into force on August 1, 2024. Similar to how the General Data Protection Regulation (GDPR) set new global standards for data protection, the EU AI Act creates a harmonized approach to AI regulation. Its risk-based model categorizes AI systems from “minimal” to “unacceptable” risk, imposing tailored obligations based on the level of potential harm. But how does the EU define an AI system? The Act defines an AI system as follows:
“An AI system is a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
This definition was crafted to highlight the different risks that AI systems present compared to traditional software. In addition, the EU AI Act carves out exceptions to its scope. For example, AI systems used exclusively for military, defense, or national security purposes, or developed solely for scientific research and development, whether public or private, are not within the scope of the EU AI Act.
Who Must Comply with the EU AI Act?
The EU AI Act has a broad reach, impacting businesses and organizations that develop, distribute, or use AI systems within the European Union—even if they’re based outside of it.
Here’s a breakdown of who falls under the Act’s scope, as set out in Chapter 1, Article 2 of the EU AI Act:
- AI providers – Any company placing AI systems or general-purpose AI models on the EU market, regardless of where they’re based.
- AI deployers – Organizations using AI systems in the EU.
- Companies outside the EU – If an AI system’s output is used in the EU, the provider or deployer is still subject to the regulation.
- Importers and distributors – Businesses bringing AI systems into the EU market.
- Product manufacturers – If a company integrates AI into its products under its own brand, it must comply.
- Authorized representatives – Companies that represent AI providers who are not based in the EU.
- Individuals affected by AI – Anyone in the EU impacted by AI systems.
Key Exemptions in the EU AI Act
While the regulation is comprehensive, it does not apply to:
- AI systems used exclusively for military, defense, or national security purposes.
- AI developed and used solely for scientific research and development.
- AI used by public authorities or international organizations in certain law enforcement and judicial cooperation scenarios.
- AI systems released under free and open-source licenses, unless they’re classified as high-risk.
- Individuals using AI for personal, non-professional activities.
The EU AI Act is designed to ensure AI is deployed responsibly while balancing innovation and protection. If your company is involved with AI in the EU, it’s essential to understand how these rules apply to your business.
Note: The Act’s extraterritorial scope means non-EU companies may also fall under its purview if they offer AI systems to EU-based users or impact EU citizens.
The Four Risk Categories Under the EU AI Act
The EU AI Act’s risk-based approach sorts AI systems into four categories, according to the level of risk they can present to society:
Unacceptable Risk
This level encompasses AI practices that violate EU fundamental rights and values, leading to a complete prohibition. Under the EU AI Act, such systems are deemed too harmful to be allowed on the market. Examples might include AI-driven social scoring by governments or systems that explicitly infringe on human dignity or privacy at their core.
High Risk
High-risk AI systems are those that can significantly impact health, safety, or fundamental rights. These systems must undergo conformity assessments and be subject to post-market monitoring. They often appear in critical sectors, such as healthcare, law enforcement, and education, where errors or biases in AI-driven decisions could cause serious harm or infringe on fundamental rights.
Transparency Risk
This category includes AI systems that pose risks of impersonation, manipulation, or deception—such as chatbots, deep fakes, or AI-generated content. While not prohibited, providers must comply with information and transparency obligations. For instance, users should be clearly informed when they are interacting with an AI, and the system’s capabilities or limitations must be disclosed.
Minimal Risk
Common AI systems like spam filters or recommender engines fall under this category. Because the likelihood of causing harm to users’ fundamental rights or safety is low, these applications face no specific regulations beyond baseline legal requirements. Minimal-risk AI solutions generally have flexibility in design and deployment but must still respect overarching data protection and consumer protection laws.
Source: 4 levels of risk under the risk-based approach
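To make these tiers more concrete, here is a minimal Python sketch of how a team might keep an internal inventory of its AI systems tagged by EU AI Act risk tier. The `RiskTier` and `AISystemRecord` names, the example systems, and the tier assignments are illustrative assumptions only; classifying a real system requires applying the Act’s detailed criteria with legal and compliance input.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # conformity assessment and ongoing obligations
    TRANSPARENCY = "transparency"   # disclosure obligations
    MINIMAL = "minimal"             # no AI-specific obligations


@dataclass
class AISystemRecord:
    name: str
    use_case: str
    risk_tier: RiskTier
    rationale: str  # why this tier was assigned, kept for audit purposes


# Illustrative inventory entries; the tier assignments below are examples only
# and would need to be validated against the Act's criteria by legal/compliance.
inventory = [
    AISystemRecord(
        name="support-chatbot",
        use_case="Customer support assistant",
        risk_tier=RiskTier.TRANSPARENCY,
        rationale="Users must be told they are interacting with an AI system.",
    ),
    AISystemRecord(
        name="resume-screener",
        use_case="Ranking job applicants",
        risk_tier=RiskTier.HIGH,
        rationale="Employment decisions can significantly affect fundamental rights.",
    ),
    AISystemRecord(
        name="spam-filter",
        use_case="Email spam filtering",
        risk_tier=RiskTier.MINIMAL,
        rationale="Low likelihood of harm to users' rights or safety.",
    ),
]

for record in inventory:
    print(f"{record.name}: {record.risk_tier.value} - {record.rationale}")
```

Even a simple inventory like this makes it easier to show regulators, auditors, and customers which obligations you believe apply to each system and why.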
EU AI Act Compliance Requirements for Providers of High-Risk AI Systems
Under the EU AI Act (Chapter 3, Articles 8–17), providers of high-risk AI systems must meet strict requirements to ensure transparency, safety, and accountability. These obligations span the entire AI lifecycle, from development to deployment:
- Risk Management System – AI providers must implement a continuous risk management framework to identify, assess, and mitigate risks throughout the system’s lifecycle.
- Data Governance – Training, validation, and testing datasets must be relevant, representative, complete, and as error-free as possible to ensure fairness and reliability.
- Technical Documentation – Providers must maintain detailed records demonstrating compliance and submit documentation to regulators upon request.
- Automated Record-Keeping – High-risk AI systems should be designed to log key events, helping authorities track risks and modifications over time (see the sketch after this list).
- User Instructions – Clear guidance must be provided to deployers to support their compliance and safe usage of the AI system.
- Human Oversight – AI systems must be built to allow human intervention and monitoring, ensuring users can step in if necessary.
- Accuracy, Robustness & Cybersecurity – Systems must meet high standards of reliability and security to prevent vulnerabilities and malfunctions.
- Quality Management System – A structured compliance framework should be in place to monitor and enforce these requirements.
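To illustrate what the automated record-keeping and human oversight requirements above can look like in practice, here is a minimal Python sketch that writes structured inference events to an append-only log and flags low-confidence outputs for human review. The file name, field names, and confidence threshold are illustrative assumptions, not requirements taken from the Act, and a production system would also need retention policies, access controls, and data minimization on top of this.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Append-only, structured event log for an AI system. A minimal sketch of
# automated record-keeping; not a compliance-certified design.
logger = logging.getLogger("ai_audit_log")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_events.jsonl"))

CONFIDENCE_THRESHOLD = 0.80  # illustrative cut-off for routing outputs to human review


def log_inference_event(model_version: str, input_summary: str,
                        output_summary: str, confidence: float) -> dict:
    """Record one inference event and flag low-confidence outputs for human oversight."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_summary": input_summary,    # summarize; avoid logging raw personal data
        "output_summary": output_summary,
        "confidence": confidence,
        "needs_human_review": confidence < CONFIDENCE_THRESHOLD,
    }
    logger.info(json.dumps(event))  # one JSON object per line (JSONL)
    return event


# Example usage: a low-confidence decision is flagged for a human reviewer.
event = log_inference_event(
    model_version="credit-scoring-v2.1",
    input_summary="loan application #1042 (features hashed)",
    output_summary="declined",
    confidence=0.64,
)
print(event["needs_human_review"])  # True
```

Keeping records in this structured, timestamped form makes it far easier to reconstruct what the system did, and why, when regulators, auditors, or affected individuals ask.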
What is the EU AI Act Implementation Timeline?
- April 2021: Original proposal by the European Commission.
- December 2022: Council of the EU agreed on a common position.
- June 2023: European Parliament adopted its stance.
- December 2023: Provisional agreement reached.
- Early 2024: Formal adoption of the final text.
- July 2024: Official publication in the EU’s Official Journal, with entry into force on August 1, 2024.
- 2025-2026: Gradual implementation period:
- 6 months after entry into force (February 2, 2025): General provisions and prohibited practices.
- 24 months after entry into force (August 2, 2026): High-risk AI rules.
Source: EU AI Act Timeline
Note: As of the publication of this blog post, the EU AI Act has no enforceable provisions for AI providers or deployers. However, this will change on February 2, 2025, when the first enforceable provisions come into effect:
- Ban on AI systems with unacceptable risk
- AI literacy requirements – organizations operating in the EU market must ensure AI literacy among employees involved in the use or deployment of AI. This applies to both AI system providers and deployers.
While these provisions become enforceable on February 2, 2025, companies should use this time to prepare for compliance. The penalties for non-compliance are significant, reaching up to €35 million or 7% of global annual turnover, whichever is higher.
Tip: These dates are subject to change. Start planning well in advance to avoid last-minute compliance scrambles.
Other AI Risk Management Frameworks
Organizations seeking to streamline compliance and bolster their approach to AI governance should look to other established AI-focused frameworks like:
- NIST AI RMF: It is organized around four core functions: governing, mapping, measuring, and managing AI risks. For a deeper dive, read our blog post on NIST AI RMF.
- ISO 42001: It standardizes AI governance and AI management systems (AIMS), emphasizing quality, safety, and reliability. Check out our ISO 42001 overview to see how it can complement your compliance efforts.
Next Steps: How to Prepare for the EU AI Act’s Implementation
The EU AI Act represents a significant shift in how AI is regulated, with cybersecurity playing a pivotal role in this transformation. By understanding the Act’s risk-based approach, preparing for conformity assessments, and aligning with established frameworks like NIST AI RMF 1.0 and ISO 42001, organizations can future-proof their AI strategies—and bolster trust in their cybersecurity solutions.
Your Action Items:
- Start early – Identify which risk category may apply to your AI system, whether you are a deployer or a provider.
- Invest in governance – Develop transparent, well-documented AI systems that withstand regulatory scrutiny.
- Stay informed – Monitor legislative updates and collaborate with legal and technical experts for a holistic approach to compliance.
Ensuring compliance with the EU AI Act requires a structured approach that integrates automation with expert guidance. Carbide’s hybrid model provides both a tech-enabled platform and direct support from security and compliance experts to help organizations meet regulatory requirements. Our team assists with risk assessments, governance controls, and alignment with frameworks like NIST AI RMF and ISO 42001.
Book a free consultation today to learn how Carbide can simplify compliance and strengthen your AI security strategy.