Non-compliance with the EU AI Act can be expensive. Very expensive. Depending on the breach, penalties can reach up to 7% of annual global turnover or €35 million, whichever is higher. The regulation entered into force in August 2024, and most of its provisions apply from August 2026, with key obligations already taking effect along the way.

For many organisations, 2026 marks the moment when AI regulation moves from awareness to accountability. What was once something to monitor is now something boards, regulators, and customers expect organisations to understand — and demonstrate.

The challenge is not simply understanding the regulation. It is understanding what it means in practice. Which organisations are affected? Does “high-risk AI” apply to your systems? What would a regulator expect to see as evidence? And how do you turn legal requirements into something structured and defensible?

What the EU AI Act is designed to do

The EU AI Act introduces a risk-based approach to regulating artificial intelligence. The higher the potential impact of an AI system on people’s rights, safety, or opportunities, the stricter the obligations placed on it. At its core, the Act separates AI use into categories — from prohibited practices to high-risk systems and lower-risk use cases with transparency requirements. It also introduces obligations for general-purpose AI models, reflecting how widely these technologies are now used across industries.

In practical terms, the regulation does four things:

  • prohibits a limited set of AI practices,
  • imposes strict requirements on high-risk systems,
  • introduces transparency obligations for certain uses,
  • and creates rules for general-purpose AI models.

This structure means the Act applies far beyond technology providers. It affects organisations that develop, deploy, integrate, procure, or rely on AI systems in the EU — often in ways they do not initially recognise as regulated.

Why many organisations are already in scope

A common assumption is that the EU AI Act mainly affects large technology companies. In reality, exposure often comes from how AI is used, not who builds it. AI is now embedded in everyday business processes. It appears in recruitment tools, credit decisions, medical support systems, customer interactions, and internal workflows. In many cases, organisations are already using AI in ways that fall within the scope of the Act without having formally recognised it.

This is particularly relevant in areas such as hiring and workforce management, financial decision-making including credit or insurance, healthcare and diagnostics, education and evaluation, public services or infrastructure, and customer-facing tools that rely on generative AI.

A useful way to reframe the issue is to ask: Where are AI-supported decisions already influencing outcomes for individuals? That is typically where regulatory exposure begins.

Why 2026 matters

The AI Act is being implemented in stages, but by August 2026 most provisions will be fully applicable. By then, organisations are expected to have a clear understanding of their AI landscape and how it is governed. That does not mean every process must be perfect. But it does mean organisations should be able to demonstrate:

  • visibility over AI systems in use,
  • a consistent approach to classification,
  • defined accountability,
  • documented controls,
  • and the ability to produce evidence if required.

Waiting until enforcement becomes visible is a risky approach. By the time questions are asked, the expectation is that structured answers already exist.

What “high-risk AI” means in practice

For many organisations, this is where uncertainty begins. Whether an AI system is considered high-risk depends less on the technology itself and more on how it is used. Systems fall into this category when they influence decisions that can materially affect people’s lives — their employment, financial access, healthcare, education, safety, or legal position.

In practice, this often includes AI used in processes such as screening or ranking job applicants, assessing creditworthiness or insurance risk, supporting clinical or medical decisions, evaluating students or training outcomes, or prioritising access to essential services. These are not niche edge cases, but core operational processes in many organisations. At the same time, not every AI application carries the same weight. A chatbot answering general enquiries is very different from a system influencing hiring or lending decisions. The distinction lies in impact.

What organisations often underestimate is how quickly “useful automation” becomes “regulated decision support” once it begins to shape real-world outcomes. That is why classification needs to be consistent, documented, and reviewable — not an informal judgement made once and left unchallenged.

ISO/IEC 42001 Readiness Checklist

If you are beginning to assess how AI is governed in your organisation, this checklist provides a structured starting point. It helps you identify where AI is used, evaluate your current governance approach, and understand what may be required to align with ISO/IEC 42001.

From AI systems to governance systems

Where high-risk AI is involved, the focus of the regulation shifts. It is no longer only about the model itself. It is about the governance system around it. Regulators are interested in whether organisations can demonstrate control. That includes how risks are identified, how decisions are documented, how oversight is maintained, and how issues would be handled if something goes wrong.

This typically requires organisations to address areas such as:

  • risk management and accountability,
  • data governance and documentation,
  • transparency and traceability,
  • human oversight,
  • monitoring and incident handling.

Taken together, these are not isolated requirements. They form a governance structure.

Compliance, in practice, is less about proving that a model works and more about showing that the organisation around it is in control.

Prohibited practices and general-purpose AI

The Act also defines a limited set of prohibited AI practices. While narrow in scope, they carry the most significant penalties. Organisations should be able to demonstrate that these have been considered and excluded as part of their governance process.

At the same time, general-purpose AI introduces another layer of responsibility. Many organisations rely on AI capabilities embedded in third-party tools rather than building their own models.

This raises practical questions around supplier oversight, transparency, documentation, and how upstream risks are managed downstream. In practice, governance cannot stop at procurement. It needs to extend across how AI is actually used in day-to-day operations.


The questions organisations are now facing

Across industries, the conversation around AI has shifted. It is no longer driven only by innovation. It is increasingly shaped by accountability. Leadership teams are asking questions such as:

  • Where exactly are we using AI?
  • Which of these uses could be high-risk?
  • Who owns the associated risks?
  • What documentation exists?
  • Can we explain our controls to a regulator or auditor?

These are not purely legal questions. They are governance questions.

Organisations that already operate structured management systems often have an advantage. They are used to defining responsibilities, maintaining documentation, and demonstrating control. The challenge is extending that same discipline to AI.

How governance frameworks support compliance

The EU AI Act defines what organisations are expected to address, but it does not prescribe how governance should be structured internally.

This is where management system approaches become valuable. They provide a way to bring together responsibilities, processes, and controls into something consistent and repeatable.

In practice, a structured AI governance approach brings together clearly defined roles and responsibilities, a complete inventory of AI systems and use cases, and a consistent method for classification and risk assessment. It also includes lifecycle controls supported by documented policies, along with mechanisms for monitoring, escalation, and continuous improvement.

For many organisations, ISO/IEC 42001 is emerging as a practical framework in this space. It provides a structured, auditable approach to managing AI systems. It is important to keep the distinction clear:

  • the EU AI Act sets legal requirements,
  • governance frameworks help operationalise them,
  • independent assessment strengthens trust in how they are applied.

Where to start

For most organisations, the starting point is not a deep technical review of every model. It is clarity. A practical approach often follows a simple progression.

  • First, build visibility. Map where AI is already used across the organisation — including internal systems, third-party tools, and embedded features that may not be immediately obvious (a minimal sketch of such an inventory follows this list).
  • Next, establish ownership. Define who is responsible for oversight, risk evaluation, and documentation. Without clear accountability, governance tends to remain informal.
  • Then, assess exposure. Identify where high-risk or prohibited-use questions may arise. This is rarely a purely technical exercise — it requires input from legal, compliance, risk, and operational teams.
  • Finally, begin to formalise governance. Align policies, controls, and processes into a model that is consistent and repeatable.

The goal is not perfection from the outset. It is defensibility — the ability to show that AI systems are understood, risks are assessed, and governance is applied in a structured way.


What this means in practice

The EU AI Act is not only a regulatory development. It is a test of organisational maturity. The organisations best prepared for 2026 will not necessarily be those with the most advanced AI capabilities. They will be the ones that can clearly demonstrate how those capabilities are governed.

They will be able to explain:

  • what they use,
  • what risks it creates,
  • how those risks are managed,
  • and who is accountable.

For many, the question is no longer whether AI governance needs to be formalised, but whether it can be demonstrated with confidence. Rather than starting from scratch, the focus is increasingly on extending existing management systems to reflect how AI is actually used today — in a way that is structured, transparent, and can be clearly evidenced when it matters.

Talk to DQS

If you would like to understand how the EU AI Act applies to your organisation and what an audit-ready AI governance approach looks like in practice, contact your local DQS office to start the conversation.

Author

Aakriti Patwari

Her work centers on communicating complex topics such as data security, compliance, and emerging technologies, supporting global awareness and trust in standards, frameworks, and evolving regulatory landscapes.
