The U.S. Food and Drug Administration (FDA) recently released its draft guidance titled "Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submissions Recommendations." This draft comes at a crucial time as artificial intelligence (AI) is transforming healthcare and medical technology. To date, the FDA has authorized over 1,000 AI-enabled devices, underscoring the increasing role of AI in advancing patient care. The FDA guidance outlines essential considerations for the design, development, and ongoing maintenance of AI-enabled medical devices, with a focus on ensuring that these technologies remain both safe and effective while addressing the unique challenges of AI.

Key Recommendations from the FDA Guidance


Transparency and Documentation


A notable aspect of the FDA's guidance is its emphasis on transparency. The FDA encourages manufacturers to clearly disclose how AI is integrated into their products, including details such as the type of model used, the datasets employed for development and validation, and the methods for ongoing updates and maintenance.
Developers are also urged to use tools such as model cards, which are concise documents that summarize essential information about an AI model to enhance user understanding. Transparency extends to the datasets used: manufacturers should include demographic details and clearly describe the methods employed to ensure that the AI system is applicable across diverse patient populations.
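The draft guidance does not prescribe a particular model card format, but a minimal sketch of the kind of information such a document typically captures might look like the following. The device name, field names, and figures below are hypothetical illustrations, not FDA-required elements.

```python
# Illustrative only: a minimal model-card structure for an AI-enabled device.
# Field names and values are assumptions for demonstration, not FDA-prescribed terminology.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    model_type: str               # e.g., "gradient-boosted classifier"
    intended_use: str             # clinical task and target population
    training_data_summary: str    # source, size, demographic composition
    validation_data_summary: str  # independence from training data, subgroups covered
    performance_metrics: dict     # e.g., {"sensitivity": 0.94, "specificity": 0.91}
    known_limitations: str        # populations or settings where performance may degrade
    update_policy: str            # how and when the model is retrained or revised

card = ModelCard(
    model_name="ExampleTriageModel",  # hypothetical device
    model_type="gradient-boosted classifier",
    intended_use="Flag chest X-rays suggestive of pneumothorax for radiologist review",
    training_data_summary="120,000 studies from three U.S. academic centers, 2015-2022",
    validation_data_summary="Held-out multi-site dataset stratified by age, sex, and scanner vendor",
    performance_metrics={"sensitivity": 0.94, "specificity": 0.91},
    known_limitations="Not evaluated on pediatric patients or portable bedside imaging",
    update_policy="Annual revalidation; updates handled under an authorized PCCP",
)
```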
Additionally, the FDA mandates that a public summary be available for most authorized devices. These summaries should offer enough detail to ensure transparency and provide stakeholders with information on the AI model's design, functionality, and validation.


Addressing Bias in AI Models


Bias in AI-enabled devices is a major concern highlighted in the draft guidance. AI models can produce skewed results if trained on insufficient or unrepresentative data, potentially disadvantaging certain demographic groups. The FDA recommends that manufacturers use robust validation datasets that accurately reflect the intended user population.
Manufacturers are advised to assess device performance across various subgroups, outline data collection methods, and describe strategies to improve dataset diversity. This helps to ensure that AI devices perform reliably across different patient populations and clinical settings, reducing the risk of biased outcomes.
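As a rough illustration of what a subgroup performance assessment can involve, the sketch below computes sensitivity and specificity separately for each demographic subgroup in a validation set. The column names, metrics, and data are hypothetical and are not drawn from the guidance itself.

```python
# Illustrative sketch of a subgroup performance check (not the FDA's prescribed method).
# Assumes a validation set with ground-truth labels, model predictions, and a
# demographic attribute column; column names are hypothetical.
import pandas as pd

def subgroup_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute sensitivity and specificity for each demographic subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        tp = ((sub["prediction"] == 1) & (sub["label"] == 1)).sum()
        fn = ((sub["prediction"] == 0) & (sub["label"] == 1)).sum()
        tn = ((sub["prediction"] == 0) & (sub["label"] == 0)).sum()
        fp = ((sub["prediction"] == 1) & (sub["label"] == 0)).sum()
        rows.append({
            group_col: group,
            "n": len(sub),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return pd.DataFrame(rows)

# Example: compare performance across age bands in a small hypothetical validation set.
validation = pd.DataFrame({
    "age_band":   ["18-40", "18-40", "65+", "65+", "65+"],
    "label":      [1, 0, 1, 1, 0],
    "prediction": [1, 0, 0, 1, 0],
})
print(subgroup_performance(validation, "age_band"))
```

A gap in a metric between subgroups in an analysis like this is the kind of finding the guidance expects manufacturers to investigate and explain, for example by improving dataset diversity or narrowing the device's labeled intended use.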


Postmarket Monitoring


Given the dynamic nature of AI, there is a risk of performance degradation over time due to changes in input data or device contexts. The FDA's draft guidance emphasizes the importance of postmarket performance monitoring to proactively identify and address such changes that could impact the device’s effectiveness.
Although not mandatory for all devices, postmarket monitoring plans are required for devices under certain regulatory pathways. These plans should outline methods for tracking performance, deploying updates, and implementing corrective actions to maintain safety and effectiveness.
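The guidance leaves the specifics of monitoring to manufacturers. As a simple sketch of one possible approach, the example below flags when a rolling average of a postmarket metric drifts below an accepted band; the metric, thresholds, and values are assumptions for illustration only.

```python
# Illustrative sketch only: one way a manufacturer might track postmarket performance
# and flag degradation for investigation. Thresholds and metric names are assumptions,
# not requirements drawn from the draft guidance.
from statistics import mean

def check_performance_drift(monthly_sensitivity: list[float],
                            baseline: float = 0.94,
                            tolerance: float = 0.03,
                            window: int = 3) -> bool:
    """Return True if the recent rolling average falls below the accepted band."""
    if len(monthly_sensitivity) < window:
        return False  # not enough postmarket data collected yet
    recent = mean(monthly_sensitivity[-window:])
    return recent < baseline - tolerance

# Example: sensitivity measured each month after deployment (hypothetical values).
observed = [0.94, 0.93, 0.92, 0.90, 0.89]
if check_performance_drift(observed):
    print("Performance degradation detected: trigger corrective-action review.")
```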


Predetermined Change Control Plans (PCCPs)


Consistent with previous guidance, the FDA supports the use of PCCPs for AI-enabled devices. This framework allows manufacturers to modify and improve device performance after authorization without submitting a new marketing application, provided the changes were pre-specified and authorized within the PCCP.


Public Feedback and Next Steps


The FDA invites public comments on the draft guidance until April 7, 2025. Additionally, the agency plans to host a webinar on February 18, 2025, to discuss the contents of the document. This open approach reflects the FDA's commitment to collaborating with stakeholders in shaping its regulatory approach to digital health technologies.


You can read the complete draft here.