EU AI Act Guide 2024: Compliance, Timeline & Penalties Explained

December 18, 2024

9 minute read

🗒️  Key Highlights
  • While the EU AI Act is now law, businesses have until February 2025 before the first major restrictions kick in.
  • Most provisions take full effect by August 2026, creating a window for businesses to adapt their AI systems methodically. Yes, that might seem far off, but considering the depth of changes some systems will need, it’s hardly excessive.
  • Non-compliance carries serious consequences – organizations can face fines reaching €35 million or 7% of total worldwide annual turnover.

Let’s be clear about one thing – artificial intelligence has settled in and is here to stay. 

And now, with the EU AI Act, we finally have real, practical rules about how to use it responsibly. 

It’s the GDPR moment for AI.

Remember the Wild West days of AI? When businesses could use any algorithm, anywhere, without explaining how it worked? Those days are over.

For the first time, we know exactly what ‘responsible AI’ means – not in theory, but in detailed, actionable requirements.

Read on to find out how this Act will impact your business.

What is the EU AI Act?

The EU AI Act is the world’s first comprehensive law governing how businesses can develop and use artificial intelligence, setting strict rules for AI systems based on risk levels. Effective from August 2024, it carries penalties of up to €35 million or 7% of global revenue for violations.

The Act doesn’t just apply to European companies. Whether your business is based in the US, UAE, or anywhere, if your AI system touches the lives of EU citizens in any way, these rules apply to you. 

The Act spells out which AI practices are prohibited outright (like social scoring systems), which are high-risk and demand extra due diligence (think healthcare or critical infrastructure), and which simply need to be transparent about being AI (like chatbots). More on this in a minute.

Timeline of Implementation


August 1, 2024: The AI Act officially entered into force, marking the beginning of a new era in AI regulation within the European Union.

February 2, 2025: Prohibitions on unacceptable-risk AI systems and requirements for AI literacy come into effect, marking the first enforcement of the Act’s core principles.

August 2, 2025: Governance rules and obligations for general-purpose AI (GPAI) models become applicable, establishing a framework for the responsible development and deployment of versatile AI systems.

August 2, 2026: Most provisions of the AI Act become fully applicable. Obligations for certain high-risk AI systems embedded in already-regulated products follow on an extended timeline, to August 2, 2027.
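If your team wants to track these deadlines programmatically, the staggered dates can be encoded as plain data. The sketch below is a hypothetical helper built from the timeline above – not an official tool, and certainly not legal advice:

```python
from datetime import date

# EU AI Act milestones from the timeline above (illustrative helper, not legal advice)
MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI; AI literacy requirements"),
    (date(2025, 8, 2), "Governance rules and GPAI model obligations"),
    (date(2026, 8, 2), "Most remaining provisions fully applicable"),
]

def obligations_in_force(today: date) -> list[str]:
    """Return the milestones that have already taken effect by `today`."""
    return [label for deadline, label in MILESTONES if today >= deadline]

if __name__ == "__main__":
    for label in obligations_in_force(date(2025, 3, 1)):
        print("-", label)
```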

Does the EU AI Act Apply to My Business?

As per official documentation, if your AI systems affect EU citizens or operate in EU markets, these rules likely apply to your business. The EU AI Act identifies three key roles that determine how the rules affect your business.

  1. Providers: These are companies creating or substantially modifying AI systems. Whether you’re building a machine learning model from scratch or significantly adapting an existing one, you’re considered a provider.
  2. Deployers: This category represents businesses putting AI to work. You’re a deployer if your company uses AI tools for customer service, data analysis, or decision-making. This includes everything from using simple chatbots to implementing complex automated systems. Yes, you’re still responsible for ensuring they meet EU requirements when serving European users. However, primary compliance responsibility often lies with the tool provider.
  3. Importers and Distributors: These are the businesses that help non-EU AI systems reach European markets. They carry an important responsibility – making sure these systems meet EU requirements before they ever reach European users.

There’s some good news as well, especially for smaller businesses: the EU AI Act isn’t trying to squash innovation or overwhelm smaller companies. If you’re running a startup or small business, you’ll face lower fines if things go wrong, and you’ll have access to special testing environments (called regulatory sandboxes) to make sure you’re getting things right.

If you are still confused, ask yourself these questions:

  • Do your AI systems interact with EU residents?
  • Are you developing, modifying, or deploying AI tools?
  • Do you handle high-risk applications like recruitment, credit scoring, or healthcare?

A “yes” to any of these suggests you’ll need to align with the Act’s requirements – but don’t worry, we’ll cover exactly what that means in the next section.
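Before moving on, here’s that checklist as a minimal screening sketch. The field names are illustrative assumptions, not terms from the Act, and a “yes” only signals that a proper legal review is warranted:

```python
from dataclasses import dataclass

@dataclass
class BusinessProfile:
    # Illustrative fields mirroring the three questions above (not terms from the Act)
    ai_interacts_with_eu_residents: bool
    develops_modifies_or_deploys_ai: bool
    handles_high_risk_use_cases: bool  # e.g. recruitment, credit scoring, healthcare

def act_likely_applies(profile: BusinessProfile) -> bool:
    """A 'yes' to any of the three questions suggests the Act's rules apply."""
    return any((
        profile.ai_interacts_with_eu_residents,
        profile.develops_modifies_or_deploys_ai,
        profile.handles_high_risk_use_cases,
    ))
```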

EU AI Act Risk Categories Explained 

The EU AI Act creates a clear framework that helps businesses understand their obligations based on their AI system’s potential impact. There are four categories you need to understand. 

1. Unacceptable Risk

These are AI applications that simply aren’t allowed in the EU market. This category includes systems that could seriously harm people or manipulate their behavior in dangerous ways. 

For example, a social credit scoring system that rates citizens based on their behavior would be banned. Similarly, AI that uses subliminal techniques to influence people’s choices or exploits vulnerabilities of specific groups, like children or elderly people, is strictly prohibited.

2. High-Risk Systems

This is where most business-critical AI applications fall. These systems can be used but need robust controls and ongoing monitoring. 

For these systems, businesses need to:

  • Maintain detailed documentation about system design and purpose.
  • Ensure human oversight of AI decisions.
  • Conduct thorough risk assessments.
  • Implement quality management systems.
  • Monitor performance after deployment.

A hiring algorithm that screens job applications would qualify as high-risk because it significantly affects people’s livelihoods. The same goes for AI systems that assess creditworthiness, detect fraud, or help make medical diagnoses.

Most businesses from the financial services sector fall under this category.

3. Limited Risk

These are AI systems that interact directly with people and mainly carry transparency obligations. Take customer service chatbots or image generation tools – they need to be clearly labeled as AI, but don’t require the intensive oversight of high-risk systems. The key here is transparency: users should always know when they’re interacting with AI rather than humans.

4. Minimal Risk

This covers AI applications with minimal impact on people’s rights or safety. AI-powered spam filters or basic recommendation systems for entertainment content are some examples. While these systems still need to follow the general principles of responsible AI use, they face the lightest regulatory requirements.
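One way to keep the four tiers straight is to encode them explicitly. The mapping below is a simplified illustration built only from the examples in this section; real classification depends on the Act’s detailed annexes and legal review:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "allowed with robust controls and ongoing monitoring"
    LIMITED = "allowed with transparency obligations"
    MINIMAL = "allowed with the lightest requirements"

# Illustrative mapping of example use cases to tiers (not an exhaustive legal taxonomy)
EXAMPLE_USE_CASES = {
    "social credit scoring": RiskTier.UNACCEPTABLE,
    "subliminal behavioral manipulation": RiskTier.UNACCEPTABLE,
    "hiring / CV screening": RiskTier.HIGH,
    "creditworthiness assessment": RiskTier.HIGH,
    "medical diagnosis support": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "image generation tool": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "entertainment recommendations": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier | None:
    """Look up an example use case; None means it needs case-by-case review."""
    return EXAMPLE_USE_CASES.get(use_case)
```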

Now that you know whether the Act applies to you and which risk level you fall under, let’s look at how to comply with the EU AI Act.

EU AI Act Compliance Requirements

Before the EU AI Act, different companies took different approaches, and it wasn’t always clear what “responsible AI use” really meant. The Act changes this by setting clear expectations and specific steps businesses need to take.

Core Requirements for All AI Systems

Every business using AI, regardless of risk level, needs to start with the basics. This means creating clear documentation about your AI systems and establishing basic governance structures. 

The most basic requirement is knowing your AI systems inside and out. Sounds obvious. But you’d be surprised how many businesses discover they’re using more AI than they realized during their first audit. From that automated email sorter to your customer service system – they all count.
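A practical starting point is an AI system inventory. The record below sketches what one entry might track; the fields are our assumptions about what’s useful during that first audit, not a schema the Act mandates:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an internal AI inventory (illustrative fields, not a mandated schema)."""
    name: str                    # e.g. "automated email sorter"
    purpose: str                 # what business decision or task it supports
    vendor_or_in_house: str      # who built it; matters for provider vs. deployer role
    touches_eu_users: bool       # triggers the Act's extraterritorial scope
    presumed_risk_tier: str      # "unacceptable" / "high" / "limited" / "minimal"
    human_oversight: bool        # can a person review and override its decisions?
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="customer service chatbot",
        purpose="answer routine support queries",
        vendor_or_in_house="vendor",
        touches_eu_users=True,
        presumed_risk_tier="limited",
        human_oversight=True,
    ),
]
```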

High-Risk System Requirements

Now, if you’re using AI for something that significantly impacts people’s lives – like deciding who gets a loan or who gets hired – you’re in high-risk territory. This is where the Act gets serious, but for good reason. 

Think about it: if an AI system were making decisions about your business, wouldn’t you want to know it’s being carefully monitored?

Here’s what this looks like in practice:

  • Data and Training Controls: You’ll need to ensure your training data is high-quality and representative. For instance, if you’re using AI in recruitment, your training data should include diverse candidate profiles to prevent bias.
  • Risk Management Systems: This means continuously monitoring your AI systems for potential issues. A financial services company using AI for credit decisions would need regular checks to ensure their system isn’t developing unfair biases over time (a minimal bias-check sketch follows this list).
  • Human Oversight: Real people need to be able to supervise and override AI decisions when necessary.
  • Record Keeping and Documentation: This includes keeping records of training data, methodologies, and any significant decisions made by the AI.
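To make the risk-management point concrete, here’s a minimal bias check using the “four-fifths” rule of thumb borrowed from fair-lending and hiring practice: if one group’s approval rate falls below 80% of another’s, the model warrants investigation. The threshold and the sample data are illustrative assumptions; the Act itself prescribes no numeric test:

```python
def approval_rate(decisions: list[bool]) -> float:
    """Fraction of positive outcomes (e.g. loans approved) in a group."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower approval rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    if max(rate_a, rate_b) == 0:
        return 1.0
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical decision logs for two demographic groups
group_a_outcomes = [True, True, False, True, True]
group_b_outcomes = [True, False, False, True, False]

ratio = disparate_impact_ratio(group_a_outcomes, group_b_outcomes)
if ratio < 0.8:  # illustrative "four-fifths" threshold, not set by the Act
    print(f"Potential bias detected (ratio={ratio:.2f}); escalate for human review")
```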

Transparency Requirements

One of the EU AI Act’s clearest messages is about being honest with people. Using a chatbot? Let people know they’re talking to AI. Generating content with AI? Label it clearly.
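In code, the transparency duty can be as simple as attaching a clear disclosure wherever AI output reaches a user. A minimal sketch with hypothetical helper functions:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_ai_reply(reply: str, first_message: bool) -> str:
    """Prefix the disclosure on the first message of a chatbot session (illustrative)."""
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_message else reply

def label_generated_content(html: str) -> str:
    """Attach a visible AI-generated label to published content (illustrative markup)."""
    return f'<p class="ai-label">AI-generated content</p>\n{html}'
```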

But what if you fail to comply with these regulations? The next section covers exactly that.

EU AI Act Fines and Penalties

Violation | Maximum Fine
Use of a prohibited AI system | Up to €35 million or 7% of total worldwide annual turnover
Non-compliance with high-risk AI system obligations | Up to €15 million or 3% of total worldwide annual turnover
Non-compliance with limited-risk AI system obligations | Up to €15 million or 3% of total worldwide annual turnover
Non-compliance with GPAI obligations | Up to €15 million or 3% of total worldwide annual turnover
Providing incorrect or misleading information to authorities | Up to €7.5 million or 1% of total worldwide annual turnover
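Note how the caps read: for most companies the applicable ceiling is whichever of the two amounts is higher, while SMEs benefit from the lower of the two. A quick sketch of how the top-tier cap scales with revenue:

```python
def max_fine(flat_cap_eur: float, turnover_pct: float, annual_turnover_eur: float,
             is_sme: bool = False) -> float:
    """Ceiling on a fine: the higher of the two amounts, or the lower for SMEs."""
    pct_cap = turnover_pct * annual_turnover_eur
    return min(flat_cap_eur, pct_cap) if is_sme else max(flat_cap_eur, pct_cap)

# Example: prohibited-practice violation for a firm with €2 billion annual turnover
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0 -> the 7% cap dominates
```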

How to Prepare for the EU AI Act

The EU AI Act is here, deadlines are approaching, and there’s pressure to get things right.

For most businesses, the real challenge isn’t understanding what to do – it’s figuring out how to implement changes efficiently without disrupting operations.

Take identity verification, for example. If you’re running a financial service or handling sensitive customer data, you’re probably already using some form of AI-powered verification. Under the new Act, these systems need to be transparent, fair, and auditable. That’s a tall order if you’re building everything from scratch. But you don’t have to.

You can meet compliance requirements by partnering with providers who’ve already done the heavy lifting. Think about it – why reinvent the wheel when you can use proven solutions that are built with compliance in mind?

If you are looking for API solutions, Signzy offers ready-to-use APIs for KYC, KYB, identity verification, and document validation. Explore our suite today!


FAQ

Do existing AI systems have to comply, too?
Yes, if they’re still in use. Existing AI systems must comply with the Act’s requirements based on their risk level. There are transition periods to help businesses adapt older systems.

What if I rely on third-party AI tools instead of building my own?
You share responsibility for compliance. It’s crucial to verify your providers’ compliance status and have clear agreements about who handles specific requirements.

Is my business exempt if its market impact is small?
No blanket exemptions exist for small market impact. However, the Act has proportionate requirements and reduced penalties for smaller businesses and minimal-risk systems.

Do I have to apply these standards outside the EU?
Not necessarily. Many businesses choose to apply EU AI Act standards globally for consistency and efficiency, though it’s possible to maintain different standards for different markets.
