On January 7, 2025, the FDA issued a draft guidance called Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products. The document outlines how sponsors, manufacturers, and other interested parties should approach the use of artificial intelligence (AI) to support the safe and effective development and marketing of drug and biological products.
The guidance discusses the use of AI models in the nonclinical, clinical, post-marketing, and manufacturing phases of the drug product life cycle, where the specific use of the AI model is to produce information or data to support regulatory decision-making as it relates to safety, efficacy, or the quality of the product. It does not cover AI use in drug discovery or operational efficiencies that do not affect patient safety, drug quality, or study reliability.
The FDA emphasizes that a key aspect of the appropriate application of AI modeling in drug development and regulatory evaluation is ensuring model credibility—trust in the performance of an AI model for a particular context of use (COU). COU refers to the specific role and scope of the AI model used to address a question of interest. In this guidance, the FDA proposes a “risk-based credibility assessment framework” that may be used for establishing and evaluating the credibility of an AI model for a particular COU. The FDA provides a seven-step process containing detailed recommendations on how to plan, gather, organize, and document information for establishing AI model credibility when the model is used to produce information or data intended to support regulatory decision-making:
1. Define the question of interest that will be addressed by the AI model.
2. Define the COU for the AI model.
3. Assess the AI model risk.
4. Develop a plan to establish the credibility of the AI model output within the COU.
5. Execute the plan.
6. Document the results of the credibility assessment plan and discuss deviations from the plan.
7. Determine the adequacy of the AI model for the COU.
The FDA gives a hypothetical example that runs from Step 1 through Step 3 to illustrate the high stakes involved in using AI models in drug development. In the scenario, a drug candidate associated with life-threatening side effects is being advanced, and the sponsor proposes using an AI model to categorize patients based on their risk of adverse events. The AI model would determine whether patients should be monitored as outpatients or admitted for inpatient surveillance. Given the AI model's critical role in this decision, the potential consequences are significant: a high-risk patient inappropriately categorized as low risk could be left without proper monitoring or treatment, a potentially life-threatening outcome.
The FDA expects sponsors to develop credibility assessment plans to establish the credibility of AI model outputs, and it expects such plans to be commensurate with the AI model risk and tailored to the specific COU. Whether, when, and where the plan will be submitted to the FDA depends on how the sponsor engages with the FDA and on the AI model and COU.
The guidance concludes by describing a variety of ways for sponsors and other interested parties to engage with the FDA on issues related to AI model development. The FDA encourages those who intend to use AI in their processes to reach out early and emphasizes the importance of timely engagement to “set expectations regarding appropriate credibility assessment activities” for the model. Early engagement would also help the sponsor identify potential challenges and open discussion about how to address them.
The FDA seeks public comment on this draft guidance until April 7, 2025. The FDA specifically seeks feedback on the alignment of this draft guidance with industry experience, the adequacy of the available options for sponsors and other interested parties to engage with the FDA on AI usage, and the need for more specific guidance on using AI in post-marketing pharmacovigilance. If finalized, this would be the first comprehensive FDA guidance addressing the design, development, documentation, and maintenance of AI models used to support regulatory decision-making.
On January 23, 2025, President Donald Trump signed an executive order entitled “Removing Barriers to American Leadership in Artificial Intelligence,” affirming the administration’s commitment to sustaining and enhancing America’s global AI dominance. The order also rescinded the prior Biden Administration executive order on AI (Executive Order 14110) and directed a review of Biden-era AI policies, directives, regulations, orders, and other actions to identify any actions taken pursuant to the now-rescinded executive order that may act as a barrier to AI innovation, so that those actions can be addressed. We will continue to monitor the new administration’s AI policies as they take shape.
If you have questions on the draft guidance, please contact the authors or a member of the Goodwin Life Sciences Regulatory & Compliance team.
This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee similar outcomes.
Contacts
Sukrti Thonse
Associate