
Public Health Agencies Release “Guiding Principles” for Good Machine Learning Practice

The U.S. Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) jointly issued a guidance document titled “Good Machine Learning Practice for Medical Device Development: Guiding Principles.” The document outlines 10 guiding principles for the development of Good Machine Learning Practice (GMLP) to help ensure that medical devices that use artificial intelligence and machine learning (AI/ML) are safe and effective.

These guiding principles are part of a larger collaborative initiative among international regulators and organizations, including the International Medical Device Regulators Forum (IMDRF) and international standards organizations, to address various issues concerning the regulation of medical device software. The new guiding principles are intended to identify additional avenues for collaboration while accounting for the unique nature of AI/ML products.

According to the Guiding Principles, developers of medical devices that use AI/ML should:

  1. Leverage multi-disciplinary expertise throughout the total product life cycle;
  2. Implement good software engineering and security practices, paying special attention to data quality assurance, data management, and cybersecurity practices;
  3. Ensure that clinical study participants and data sets are representative of the intended patient population;
  4. Select and maintain training data sets that are independent of test sets;
  5. Select reference datasets that are based upon best available methods to ensure that clinically relevant and well-characterized data are collected;
  6. Tailor the model design to the available data while ensuring that it reflects the intended use of the device;
  7. Consider the influence of human factors when the model has a “human in the loop”;
  8. Develop and execute test plans to demonstrate device performance during clinically relevant conditions;
  9. Provide clear and essential information to users in a manner that is appropriate for the intended audience; and
  10. Monitor models that have been deployed for real-world use to ensure that safety and performance are maintained or improved.

FDA is seeking feedback on these principles through a public docket that the agency opened for comment following the release of a discussion paper on its Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine-Learning (AI/ML)-Based Software as a Medical Device (SaMD).

At the beginning of the year, FDA issued its Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. And, on October 14, 2021, the agency hosted a virtual public workshop designed to: (1) identify unique considerations in achieving transparency for users of AI/ML-enabled medical devices and ways in which transparency might enhance the safety and effectiveness of these devices; and (2) gather input from various stakeholders on the types of information that would be helpful for a manufacturer to include in the labeling and public-facing information of AI/ML-enabled medical devices, as well as other potential mechanisms for information sharing. This increased level of regulatory policy engagement shows at least a desire on the part of FDA to keep pace with advancing technologies that have an ever-increasing presence in the FDA-regulated medical products space. The Digital Health Center of Excellence, composed of experts who provide regulatory advice and support on digital health technology to the Center for Devices and Radiological Health (CDRH), is leading these efforts on behalf of FDA.


This post was originally published on the Foley Hoag blog.
