Artificial intelligence (AI) has firmly established itself in the corporate world, offering numerous opportunities for efficiency gains, decision support, and automation. New developments emerge almost daily. However, these opportunities come with challenges - especially with regard to regulatory requirements and compliance. The legal framework for AI is taking on clear contours, particularly through the European AI Regulation, now known as the EU AI Act. The mandatory implementation of new standards such as ISO/IEC 42001 could also soon be on the agenda.
Artificial Intelligence in Practice - Why Regulation Matters
Artificial intelligence is already widely used across many industries, often with a significant impact on processes and workflows. Every day, AI tools support general tasks such as translation, information structuring, and content creation. In marketing, AI enables personalization and data analysis. In logistics, it helps optimize supply chains. Its application is even more specific—and more critical—in healthcare, where AI is increasingly used for patient management and diagnostics. In cybersecurity, extended detection and response (XDR) systems rely heavily on machine learning.
As AI adoption grows, it is crucial for companies to develop a strategic approach to regulation at an early stage. This helps minimize liability risks and secure competitive advantages. At the same time, organizations must raise awareness of potential risks, including data protection issues, manipulation, and misinformation. Ethical considerations — such as preventing discrimination and limiting intrusive profiling — also play an important role.
The EU AI Act: An Initial Legal Framework
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, referred to as the EU AI Act, is the world’s first comprehensive legal framework for artificial intelligence. Its aim is to ensure the safe and trustworthy use of AI systems. The regulation was proposed by the European Commission in April 2021. Most of its provisions will apply across EU Member States from August 2, 2026. In Germany, the designation of a national supervisory authority has not yet been finalized.
It is important to note that the EU AI Act does not replace existing data protection regulations, such as the General Data Protection Regulation (GDPR). These continue to apply in parallel. The regulation classifies AI applications according to their risk potential:
- Minimal risk: Applications such as AI-supported spam filters or recommendation systems are not subject to specific regulatory requirements.
- Limited risk: This category includes applications such as chatbots, which are subject to transparency obligations (e.g. informing users that they are interacting with an AI system).
- High risk: AI systems used in areas such as critical infrastructure, personnel decisions, or credit assessments are subject to strict requirements regarding transparency, security, and risk management. The criteria for high-risk classification are defined in Annex III of the EU AI Act.
- Unacceptable risk: Certain AI applications, such as manipulative technologies or social scoring systems, are prohibited outright.
Particularly powerful AI systems may also pose a systemic risk, which requires additional safeguards. Implementing the AI Act involves considerable effort, especially where high-risk systems are used. In that case, companies must demonstrate that their systems are secure, transparent, and non-discriminatory.
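For illustration only, the four risk tiers can be sketched as a simple classification helper. All names and the example mapping below are hypothetical; a real classification must follow the criteria of the EU AI Act itself (in particular Annex III), not a lookup table:

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. spam filters, recommendation systems
    LIMITED = "limited"            # e.g. chatbots: transparency obligations
    HIGH = "high"                  # e.g. Annex III areas: strict requirements
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring: prohibited

# Hypothetical mapping from use-case labels to tiers, for illustration.
EXAMPLE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "credit_scoring": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def is_permitted(use_case: str) -> bool:
    """An unacceptable-risk system may not be placed on the market at all."""
    return EXAMPLE_TIERS[use_case] is not RiskTier.UNACCEPTABLE
```

A sketch like this can be a starting point for an internal screening tool, but the legal assessment always remains with qualified experts.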
However, it is not only the risk level that determines the measures to be implemented, but also the way in which your company interacts with artificial intelligence. The scope of the AI Act covers the following actors:
- You are considered a provider if you develop an AI system or place one on the market under your own name. An EU-based representative acting on behalf of a provider established outside the EU is considered an authorized representative.
- A deployer uses an AI system for professional purposes under its own responsibility.
- Do you place an AI system on the EU market that was manufactured by a provider in a third country? Then you are considered an importer.
- Or do you make an AI system available in the EU as part of the supply chain? In this case, you are considered a distributor.
Obligations for Deployers (excerpt)
Do you use artificial intelligence in your operations? If so, you are required to take several measures. These include maintaining an inventory of AI applications used within your organization, conducting a risk classification, training employees, and fulfilling transparency obligations. Strictly speaking, maintaining such an inventory is not explicitly required under the EU AI Act. However, in practice, it will be difficult to demonstrate compliance with risk assessments and other obligations without it.
You should also establish a structured process for the introduction of new AI systems and define clear guidelines for their use.
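To make the inventory obligation described above more concrete, the following is a minimal sketch of what one entry in such an AI inventory might look like. The fields and the helper function are purely illustrative assumptions; the EU AI Act does not prescribe a specific inventory format:

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One entry in an organization's AI application inventory (illustrative)."""
    name: str
    vendor: str
    purpose: str
    risk_tier: str                 # "minimal" | "limited" | "high" | "unacceptable"
    users_informed: bool = False   # transparency obligation, e.g. for chatbots
    staff_trained: bool = False    # AI literacy, Art. 4 EU AI Act

def open_actions(entry: AIInventoryEntry) -> list[str]:
    """Return outstanding compliance actions for a single inventory entry."""
    actions = []
    if entry.risk_tier == "limited" and not entry.users_informed:
        actions.append("inform users they are interacting with an AI system")
    if not entry.staff_trained:
        actions.append("provide AI literacy training")
    return actions
```

Even a lightweight record like this makes it far easier to demonstrate, upon inquiry, which systems are in use and which obligations have already been fulfilled.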
Obligations for Providers of High-Risk Systems (excerpt)
Do you develop AI systems with a high-risk profile or place them on the market under your own name? If so, you must comply with extensive requirements regarding documentation, registration, and labeling. In addition, you are responsible for ensuring robust risk management and information security for your AI system. A full overview of the obligations for providers can be found in Chapter III, Section 3, starting from Article 16 of the EU AI Act.
Note: Chapters I and II also define obligations for both providers and deployers. Of particular importance is Article 4 in Chapter I, which addresses the training of employees in AI literacy. This means that any organization using AI should take appropriate action to build internal competencies.
Implementing the EU AI Act Through ISO Standards
From a management systems perspective, two international standards are particularly suitable for supporting AI compliance, including compliance with the EU AI Act: ISO/IEC 42001 and ISO/IEC 27001. Both standards follow the Harmonized Structure (HS) used across ISO management system standards, which allows them to be integrated seamlessly into existing management systems.
ISO/IEC 42001:2023 - A Management System for AI
ISO/IEC 42001:2023 is the first international standard for an Artificial Intelligence Management System (AIMS). Its objective is to provide organizations with a structured framework for implementing, monitoring, and continuously improving AI systems. The standard covers, among other areas:
- AI governance structures: Defining responsibilities and accountability for AI initiatives
- Risk management: Identifying, assessing, and mitigating AI-related risks
- Transparency and traceability: Ensuring proper documentation and explainability of AI models
- Ethics and compliance: Promoting fairness, protecting human and fundamental rights, and preventing discrimination
ISO/IEC 42001 is a certifiable standard and can therefore serve as a valuable tool for demonstrating compliance. DQS now offers ISO/IEC 42001 certification worldwide, making it one of the first certification bodies to include this standard in its portfolio. More broadly, ISO/IEC 42001 provides organizations with a structured framework for future-proofing their AI processes.
Approaching AI Compliance via ISO/IEC 27001
In addition to certification under ISO/IEC 42001, an approach based on ISO/IEC 27001—the international standard for information security management systems—can also be beneficial. As an established and certifiable standard, ISO/IEC 27001 enables companies to build a solid foundation for the secure handling of AI-related data and systems. The following measures can support this approach:
- Expand risk management: ISO/IEC 27001 requires risk assessments for IT systems. These can be extended to AI models to identify and address risks at an early stage.
- Implement data and model protection: Information security controls from ISO/IEC 27001 can be applied to protect AI training data, algorithms, and models from manipulation. As with any software system, AI systems can be safeguarded using controls from Annex A. Measures related to secure software development—such as architecture, lifecycle management, and testing—can largely be adapted to AI systems, even though development approaches may differ.
- Establish governance: Define clear responsibilities and compliance requirements for AI-supported processes, similar to those in an information security management system.
- Continuous monitoring: Security and transparency mechanisms from ISO/IEC 27001 can be applied to AI models to continuously assess their performance and associated risks.
Conclusion: Viewing the EU AI Act as an Opportunity
Regulatory requirements for artificial intelligence will continue to evolve and become more stringent. Companies that engage early with the EU AI Act, as well as standards such as ISO/IEC 42001 and ISO/IEC 27001, have the opportunity to use AI systems in a secure and compliant manner, depending on their specific use cases. By aligning strategically with European regulations—particularly through the adoption of recognized international standards—organizations can not only reduce regulatory risks and meet supervisory requirements, but also strengthen the trust of customers and business partners in their AI technologies. This, in turn, can create a sustainable competitive advantage.
Note: The author of this article is not a lawyer. This article does not constitute legal advice and makes no claim to completeness.
Current ISO/IEC 42001
Find out more about the international standard for an effective AI management system (AIMS) and possible certification. No obligation and free of charge.
DQS - Because Not All Audits Are the Same
Just as every company and organization uses artificial intelligence in its own way, the goals they pursue with it are equally diverse. To ensure the safe and effective use of AI systems, a new international management system standard specifically for AI has been available since the end of 2023: ISO/IEC 42001. DQS is one of the first certification bodies worldwide to offer certification in accordance with ISO/IEC 42001.
Benefit from the expertise of our specialists. Learn about the key requirements of the standard and what they mean for your organization. For over 40 years, we have stood for impartial audits and certifications. Our approach goes beyond standard audit checklists. Put us to the test—we look forward to hearing from you.
Trust and expertise
Our texts and publications are written exclusively by our standards experts or long-standing auditors. If you have any questions about the content or our services, please feel free to contact the author by email at: [email protected]
Note: For reasons of readability, the generic masculine form is used. However, all gender identities are included where applicable.