The rapid adoption of artificial intelligence (AI) across multiple industries has led to the introduction of the Artificial Intelligence Act in the European Union (EU 2024/1689), a comprehensive regulatory framework designed to address the risks associated with AI technologies.

Unlike industry-specific regulations, the AI Act applies horizontally across sectors, including healthcare and AI in medical devices, and non-EU companies that place AI systems on the EU market are not exempt from its requirements.

For medical device manufacturers, compliance now extends beyond the Medical Device Regulation (MDR) (EU 2017/745) to include the AI Act. Where multiple pieces of legislation apply, there are potential regulatory overlaps. The MDR emphasizes patient safety, clinical effectiveness, and risk management—principles that align with the AI Act’s risk-based approach to AI governance. Given these shared objectives, integrating AI-related requirements into an existing Quality Management System (QMS) is both practical and beneficial for North American companies looking to operate internationally.

Recognizing this need, Article 17 of the AI Act explicitly allows for the integration of AI-specific QMS requirements into existing sectoral frameworks, stating:

"Providers of high-risk AI systems that are subject to obligations regarding quality management systems or an equivalent function under relevant sectoral Union law may include the aspects listed in paragraph 1 as part of the quality management systems pursuant to that law."

This paper explores key similarities and differences between the MDR and AI Act, focusing on areas such as data management, lifecycle processes, post-market clinical follow-up (PMCF), and regulatory compliance. Through this approach, medical device manufacturers can efficiently incorporate AI compliance into their QMS, ensuring both regulatory alignment and patient safety.

Data as the "Raw Material" in AI-Powered Medical Devices: A Regulatory Perspective

In AI-driven medical devices, data is the foundational input—comparable to raw materials in traditional device manufacturing. Just as the MDR requires full traceability of materials to ensure biocompatibility, structural integrity, and overall safety, AI models must be developed on well-characterized, validated, and traceable datasets.

Under MDR, material traceability safeguards device performance and patient safety. In the context of AI, dataset traceability is equally critical—influencing model performance, clinical validity, and regulatory acceptability.

This analogy highlights the need for structured data governance frameworks in AI system development. The following table draws parallels between MDR material traceability principles and AI data management requirements.

However, note that AI datasets are also subject to unique requirements—such as fairness and non-discrimination, which extend beyond traditional MDR considerations.

| Aspect | MDR Requirement | AI Requirement |
|---|---|---|
| Source Identification | Every material used in a device must be fully traceable to a documented origin. | All AI training data must be fully traceable to a documented origin (e.g., hospital databases, clinical trials, publicly available datasets). |
| Certification | Raw materials must meet safety and performance standards. | Data must be validated for accuracy, completeness, and relevance (e.g., ensuring the AI is trained, validated, and tested on clinically relevant data). |
| Properties Documentation | The chemical, mechanical, and biocompatibility properties must be recorded for each material. | AI datasets must document data diversity, demographic representation, and statistical biases to ensure fairness. |
| Change Management | Any modification to materials must be tested and documented. | Any change in training datasets (e.g., adding new patient populations) must be evaluated for its impact on AI model performance. |
| Safety Testing, Bias and Risk Assessment | Materials must be tested to ensure no adverse biological effects. | AI datasets must be assessed for biases, data gaps, and errors to prevent unreliable or discriminatory outputs. |
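To make the traceability parallel concrete, the following minimal Python sketch shows one way such a dataset record could be structured. The `DatasetRecord` class, its field names, and all example values are illustrative assumptions, not a format mandated by the MDR or the AI Act.

```python
from dataclasses import dataclass, field
import hashlib
import json

@dataclass
class DatasetRecord:
    """Hypothetical traceability record for an AI training dataset,
    mirroring MDR-style material traceability (illustrative only)."""
    name: str
    source: str                # documented origin, e.g. a named hospital registry
    collection_period: str     # e.g. "2021-01 to 2023-06"
    intended_use: str          # "training", "validation", or "test"
    demographics: dict         # documented population characteristics
    known_limitations: list = field(default_factory=list)
    change_log: list = field(default_factory=list)

    def fingerprint(self) -> str:
        """Content hash of the record, so any later undocumented
        modification becomes detectable."""
        payload = json.dumps(self.__dict__, sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()

# Example usage with purely fictional values:
record = DatasetRecord(
    name="chest-xray-v2",
    source="Hospital A PACS export (fictional example)",
    collection_period="2021-01 to 2023-06",
    intended_use="training",
    demographics={"age_range": "18-90", "sex_split": {"F": 0.48, "M": 0.52}},
    known_limitations=["single site", "one scanner vendor"],
)
record.change_log.append("2024-03: paediatric cohort added; impact assessment filed")
print(record.fingerprint())
```

Recording a content hash alongside the descriptive fields mirrors the MDR's change-management expectation: any edit to the dataset record produces a different fingerprint, making undocumented changes visible.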

Lifecycle Management of AI-Driven vs. Traditional Medical Devices

The software lifecycle is a critical regulatory aspect for both traditional software-based medical devices (SW-MDs) and AI-driven medical devices (AI-MDs). While both require structured development, transparency, and traceability, AI-driven medical devices introduce unique challenges due to the adaptive, data-dependent nature of AI models.

| Aspect | Traditional Software-Based Medical Devices (SW-MDs) | AI-Driven Medical Devices (AI-MDs) |
|---|---|---|
| Traceability | Software code and requirements must be fully documented and version-controlled. | AI model architecture, training steps, datasets, and all processing steps must be documented and version-controlled. |
| Reproducibility | Deterministic: given the same inputs, the software always produces the same output. | Probabilistic: AI models generate outputs based on learned patterns, leading to slight variations in results. |
| Performance Stability | Remains stable unless explicitly updated. | May degrade over time due to data drift, requiring ongoing monitoring. |
| Change Management | Follows structured version control and regression testing before deployment. | AI models require continuous retraining and revalidation to maintain accuracy. |
| Verification & Validation (V&V) | Ensures software functions correctly and meets requirements. | AI V&V includes dataset validation, bias assessment, and stress testing for edge cases. |

To align with MDR principles, AI-based systems must adopt:

• Structured AI training, validation, and testing processes to ensure reproducibility and performance consistency (see the sketch after this list).
• Comprehensive documentation that allows the AI-driven medical device to be rebuilt from documentation alone while maintaining its intended performance.
• Post-market monitoring to detect performance drift, biases, and unintended outcomes.
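A minimal sketch of the first two points, assuming a Python-based workflow: training is driven entirely by a documented, fingerprinted configuration with a fixed random seed, so the run can be reproduced from its documentation. The `train_model` function and the configuration fields are hypothetical placeholders, not a prescribed implementation.

```python
import hashlib
import json
import random

def train_model(config: dict) -> dict:
    """Illustrative stand-in for a real training routine: every source
    of variation is pinned down by the documented config."""
    random.seed(config["random_seed"])  # fixed seed -> reproducible run
    # ... real data loading, feature extraction, and fitting would go here ...
    return {"weights": [random.random() for _ in range(config["n_params"])]}

# The full configuration is archived with the model, so the exact run
# (data version, hyperparameters, seed) can be rebuilt from documentation.
config = {
    "dataset_version": "chest-xray-v2",
    "dataset_sha256": "<hash of the frozen training set>",
    "random_seed": 42,
    "n_params": 10,
    "learning_rate": 0.001,
}
model = train_model(config)
fingerprint = hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()
print("Training run documented by config fingerprint:", fingerprint)
```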

Post-Market Clinical Follow-up (PMCF) for AI-Driven Medical Devices

PMCF is a regulatory requirement under MDR (EU 2017/745), ensuring medical devices maintain safety and effectiveness after entering the market. While traditional devices with stable technology may not require continuous PMCF, AI-driven medical devices necessitate ongoing monitoring due to their reliance on evolving real-world data.

| Aspect | SW-MDs | AI-MDs |
|---|---|---|
| Need for Continuous PMCF | Often limited for well-established, stable technologies that do not change post-deployment. | Essential, as AI models are highly dependent on real-world data, which may differ from training data. |
| Performance Stability | Remains stable unless explicitly updated with new software versions. | May degrade over time due to data drift, requiring ongoing monitoring and recalibration. |
| Risk of Unanticipated Behavior | Low risk if premarket testing is comprehensive and software remains unchanged. | High risk, as AI models trained on historical datasets may not generalize well to unseen real-world data. |
| Key Monitoring Focus | Primarily bug fixes, cybersecurity risks, and software reliability. | Includes performance drift, bias detection, real-world accuracy, and unintended clinical risks. |
| Regulatory Expectation for PMCF | Periodic updates based on reported issues, user feedback, and adverse event reports. | Requires continuous real-world performance assessment, data-driven updates, and retraining validation. |
| Approach to Risk Mitigation | Patch updates, security fixes, and periodic revalidation if major changes occur. | Adaptive lifecycle management, including automated model monitoring, revalidation, and regulatory reporting. |
| Data Collection & Analysis | Often retrospective, based on clinical literature reviews and post-market incident reports. | Real-time or near-real-time data collection, requiring advanced monitoring infrastructure and compliance. |

For traditional software-based medical devices, PMCF ensures ongoing safety but is often static and reactive. In contrast, AI-driven medical devices require a proactive and dynamic PMCF approach due to their reliance on evolving real-world data. Without continuous monitoring, AI models risk performance degradation, bias introduction, and potential patient harm—making PMCF a regulatory and ethical necessity.
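As one concrete ingredient of such monitoring, the sketch below computes the Population Stability Index (PSI), a widely used data-drift statistic, for a single model input. The synthetic data and the 0.2 alert threshold are assumptions for illustration; 0.2 is a common rule of thumb, not a regulatory limit.

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               observed: np.ndarray,
                               n_bins: int = 10) -> float:
    """PSI between a model input's training distribution (expected) and
    its post-market distribution (observed). Higher values mean stronger drift."""
    # Bin edges derived from the training (expected) distribution.
    edges = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    obs_frac = np.histogram(observed, bins=edges)[0] / len(observed)
    # Small floor avoids division by zero in empty bins.
    eps = 1e-6
    exp_frac = np.clip(exp_frac, eps, None)
    obs_frac = np.clip(obs_frac, eps, None)
    return float(np.sum((obs_frac - exp_frac) * np.log(obs_frac / exp_frac)))

rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, 5_000)     # distribution at release
post_market_feature = rng.normal(0.4, 1.2, 5_000)  # shifted real-world data
psi = population_stability_index(training_feature, post_market_feature)
if psi > 0.2:  # rule-of-thumb threshold (assumption, not a mandate)
    print(f"PSI = {psi:.3f}: significant drift, trigger PMCF investigation")
```

In a real PMCF system, a statistic like this would be computed periodically per input feature and fed into the manufacturer's documented monitoring and revalidation procedures.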

Conclusion

To ensure compliance with both the MDR and the AI Act, manufacturers of AI-driven medical devices must integrate rigorous data management, lifecycle monitoring, and continuous PMCF into their QMS. By adopting AI-specific quality management practices, manufacturers can ensure their devices remain safe, effective, and compliant throughout their lifecycle.

As the use of AI in medical devices evolves, a structured and proactive regulatory strategy will be critical for balancing innovation with patient safety and legal compliance.
